Data Storage

30TB Hard Drives Are Nearly Here (tomshardware.com) 74

Seagate this week unveiled the industry's first hard disk drive platform that uses heat-assisted magnetic recording (HAMR). Tom's Hardware: The new Mozaic 3+ platform relies on several all-new technologies, including new media, new write and read heads, and a brand-new controller. The platform will be used for Seagate's upcoming Exos hard drives for cloud datacenters with a 30TB capacity and higher. Heat-assisted magnetic recording is meant to radically increase the areal recording density of magnetic media by making writes while the recording region is briefly heated to a point where its magnetic coercivity drops significantly.

Seagate's Mozaic 3+ uses 10 glass disks with a magnetic layer consisting of an iron-platinum superlattice structure that ensures both longevity and smaller media grain size compared to typical HDD platters. To record the media, the platform uses a plasmonic writer sub-system with a vertically integrated nanophotonic laser that heats the media before writing. Because individual grains are so small with the new media, their individual magnetic signatures are lower, while the magnetic inter-track interference (ITI) effect is somewhat higher. As a result, Seagate had to introduce its new Gen 7 Spintronic Reader, which features the "world's smallest and most sensitive magnetic field reading sensors," according to the company. Because Seagate's new Mozaic 3+ platform deals with new media with a very small grain size, an all-new writer, and a reader that features multiple tiny magnetic field readers, it also requires a lot of compute horsepower to orchestrate the drive's work. Therefore, Seagate has equipped the Mozaic 3+ platform with an all-new controller made on a 12nm fabrication process.

  • 30TB is a lot of stuff to lose when the disk dies...

    • by darkain ( 749283 )

      Which is exactly why these are not intended for every-day home users. These are enterprise drives with organizations usually purchasing thousands at a time for large storage clusters.

      • by saloomy ( 2817221 ) on Friday January 19, 2024 @04:31PM (#64173887)
        Unless the access times improve, no. These drives can barely do 200 MB/s of sequential I/O, and don't even try random I/O. Putting that much data behind that small a door is a recipe for disaster.

        The problem with these drives is the wattage per IOPS. You need too many of them to make a modern workload performant. Let's say your datacenter has... 64 of these, in 12-drive RAID 6s. That is 1,200 TB of data behind a read rate that sucks. No application realistically reads and writes that infrequently, which is what it would take to make these drives worthwhile. Seagate should be looking at making a hard drive with six actuator arms, each doing its own read/write operations around the drive, to compete with drives like this: https://www.serversupply.com/S... [serversupply.com]

        That is 15.6TB on an NVMe SSD. It will blow the drive away in performance, power usage, heat dissipation, and density because of the form factor. The power savings, and being able to buy the capacity you need rather than over-provisioning spindles for the IOPS you need, make it a far better investment for corporate users.
        • What about situations like YouTube? Sure, there's a minority of content that needs to be read back very frequently for millions of subscribers (probably through caches anyway), but there's a huge number of very large video files on there being accessed once a month or less. Having those stored on spinning rust in an archive that's accessible, but nowhere near as expensive as NVMe flash, makes a lot of sense.

          • It takes too long to spin up the drive, so you end up keeping it running, which takes lots of power. What they do instead is put that content on cheaper NAND like TLC and QLC, and power the drive off until it is needed. Saves power and becomes worth it.
            • You can store the start of videos on SSDs, then spin up the HDD and load from there. Or simply spin up during the ads; the spin-up time is not even that long, a few seconds. Also, one HDD doesn't consume *that* much power, and can replace multiple SSDs. And HDDs consume less energy during manufacturing, which is why they are cheaper. One HDD vs. two SSDs (comparing to the Nytro 15 GB from Seagate) uses less power at idle. And then there are virtual computers, which aren't guaranteed huge I/O, but maybe then ot
              • by dargaud ( 518470 )
                Is there a filesystem or OS configuration that *easily* allows storing the head of large files on one drive and the rest on another?
                • He has a good idea. With an operation YouTube's size, you'd just make two different files and have the web player read one then the other and stitch them together in the API request. Wouldn't be so hard; see the sketch below.
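                  A minimal sketch of the byte-level split idea, with purely hypothetical file names (the pieces aren't independently playable; they have to be stitched back in order, server-side or via byte ranges):

                    # carve the first 16 MiB of a video into a "head" kept on fast storage
                    dd if=video.mp4 of=video-head.bin bs=1M count=16
                    # everything after that stays on (or moves to) the big, slow HDD
                    dd if=video.mp4 of=video-tail.bin bs=1M skip=16
                    # concatenating the pieces reproduces the original file byte-for-byte
                    cat video-head.bin video-tail.bin > video-restored.mp4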
        • by larryjoe ( 135075 ) on Friday January 19, 2024 @05:46PM (#64174147)

          The problem with these drives is the wattage per IOPS. You need too many of them to make a modern workload performant. Let's say your datacenter has... 64 of these, in 12-drive RAID 6s. That is 1,200 TB of data behind a read rate that sucks. No application realistically reads and writes that infrequently, which is what it would take to make these drives worthwhile.

          The use case for these large-capacity drives is not performance but cold storage, where $/GB is the key. Large cloud datacenters make buying decisions on $/GB, where a few cents will sway a large volume order. Performance doesn't matter, since either SSDs or DRAM are used when performance matters. For cold storage, erasure coding is used instead of RAID; RAID isn't practical for drives of this capacity.

          • by suutar ( 1860506 )

            So this is basically a replacement for tape? Or are there aspects I'm not noticing?

            • by NFN_NLN ( 633283 ) on Friday January 19, 2024 @06:45PM (#64174257)

              > So this is basically a replacement for tape? Or are there aspects I'm not noticing?

              Disk storage allows you to dynamically clean up data: synthetic fulls, space-saving snapshots, and deduplication. The best you can get with tape is writing out the dedupe cache as it exists at that point in time. If the data changes, you have to write out the dedupe cache again. Tape isn't efficient. Same with snapshots. With tape you can do a full backup and incrementals; with disk you get incrementals plus the benefit of synthetic fulls.

              Tape is only good for unchanging data you're done with and want to dump, with little intention of restoring at any great scale. Let's not forget the restore time.

            • Tape is for backup. Cold storage is for things like social media posts from many years ago that have a low probability of access but require not-too-slow access when needed. Tape would require tens of minutes to retrieve the data, while cold storage on disks allows the retrieval in seconds or tens of seconds.

        • by NFN_NLN ( 633283 )

          >> . These are enterprise drives with organizations usually purchasing thousands at a time for large storage clusters.
          > no. These drives can barely do 200MB/s io sequentially

          So, striping across hundreds of these drives wouldn't yield 20,000 MB/s?

        • That is 15.6TB on an NVMe SSD. It will blow the drive away in performance, power usage, heat dissipation, and density because of the form factor.

          Plus, there is at least one 32TB SSD already out there, so SSDs can even beat it for single-drive capacity, but it does cost $4-5k.

        • No application realistically reads and writes that infrequently, which is what it would take to make these drives worthwhile

          That observation is not relevant here, I think, as the massive networked data storage solutions that are used these days (e.g. by us) only read/write the physical drives sparingly. No app on the compute cluster can directly access a disk; they must instead talk to a bank of intermediating servers with terabytes of RAM that cache, consolidate, and compress all the files, and organize them for simple a

          • Interesting model. Do you guys spin down your storage when not in use? If not, there are commands you can send to do so; hdparm -y /dev/hd* I believe would do it (see the sketch below). If you have them behind an HBA or RAID controller, you'd have to investigate whether you can talk to the drives directly.
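            A minimal sketch of those commands, assuming a plain SATA drive at /dev/sdb that the kernel can address directly (drives hidden behind a RAID controller's own volume usually won't accept them):

              hdparm -C /dev/sdb      # report the current power state (active/idle/standby)
              hdparm -y /dev/sdb      # force the drive into standby (spin down) immediately
              hdparm -S 242 /dev/sdb  # have it spin down on its own after ~1 hour idle (values 241-251 are units of 30 minutes)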
            • I don't know the low-level details; you'd have to talk to our sysadmins. Our compute cluster runs simulations 24/365, talking to c-nodes which load-balance and pass normalized traffic on to d-nodes. It's the d-nodes that handle actual I/O to/from the RAM caches. Statistically, the 80/20 rule applies, so some data files are accessed very frequently while others just sit there. I assume that there are fancy algorithms to minimize power usage and put some disks in standby, but mainly we care about availability and
      • That is dangerously close to the "640k is enough for anyone" trap. There are a lot of average users who have need of high capacity drives, if only as an external drive that backs up their stuff and keeps it over a period of time (years), or a drive that they copy all their stuff to for archival storage. Of course, this isn't an optimal use, as you really need 3-2-1 protection, but I've seen users with 22 TB external drives using them as a place to shuffle off stuff when their main computer gets full.

        Even

      • Large portable drives are actually a pretty convenient method for offsite backup. I have two I cycle through, so one is always at work. Takes a few days to transfer, but it's a background process so that's fine. Having lots of space on a single disk makes this process easier.

        Large drives are also useful if you have a large amount of data you don't access very often - say archival storage of videos or photos that you don't access often. I have several TB of photos that I can still get to in a few seconds if I
      • by jma05 ( 897351 )

        Is 30 TB that large? It's about one month's worth of 8K video.

        A TB isn't what it used to be.

        • 8K video is a niche, extreme-enthusiast use case. Very few things require 8K rather than 4K or lower; the exceptions are things like 360-degree or stereoscopic video. And a month of video stored?

          There are possible use cases for that, but not many.

    • by matmos ( 8363419 )
      this is why you always make backups. right? right?!
    • That is why you have more than one, and use RAID, backups, or some other redundancy strategy to bring your risk down to within your personal tolerance.

      • I'd probably go one step further than that, and go with at least an error-detecting filesystem. For example, many Synology NAS models use btrfs on top of md-raid. This will at least show that a file was corrupted. Ideally, ZFS or btrfs would handle the LVM/RAID layer too, because then errors wouldn't just be detected, but could likely be corrected automatically on a scheduled disk scrub (see the sketch below).

        Of course, 3-2-1, or even better, 3-2-1-1-0 backups are critical. If I were using these for a two-drive NAS, I'd buy two 30 T
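        As a rough illustration, the scrub side of that looks something like this, assuming a ZFS pool named "tank" and a btrfs filesystem mounted at /mnt/data (both names are placeholders):

          zpool scrub tank               # ZFS: re-read all data, verify checksums, repair from redundancy
          zpool status tank              # shows scrub progress plus corrected/uncorrectable error counts
          btrfs scrub start /mnt/data    # btrfs equivalent on a mounted filesystem
          btrfs scrub status /mnt/data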

    • by HBI ( 10338492 )

      Buy two; mirror.

      Trusting a single drive is not that smart, even if it is small. Even if it's an SSD. You might get away with it for the lifetime of a system's depreciation, but after that? Think redundancy.
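      A minimal "buy two; mirror" sketch with Linux software RAID, assuming two blank drives that show up as /dev/sda and /dev/sdb (device names and filesystem choice are illustrative only):

        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb   # build the mirror
        mkfs.ext4 /dev/md0                                                     # put a filesystem on it
        mdadm --detail /dev/md0                                                # confirm both members are active and in sync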

      • In some cases, I'd almost consider going up a notch. Buy or build a NAS, use two drives as a mirror, and SSDs for read/write caching. From there, have the NAS be an iSCSI target. If the NAS has a 2.5GbE or 10GbE connection, or is connected via a TB4 cable, this could give some quick speeds, and any data stored on it would be well protected, with the point of failure being the single connection. If one needs more redundancy on the cheap, get a load balancer, four NAS machines, and go with mult

        • by HBI ( 10338492 )

          I don't necessarily disagree - I run two Linux boxen that do nothing (basically) but provide file storage using md (software RAID). I rsync them occasionally; the secondary box gets powered on by a power switch that handles REST calls, does the rsync, and then shuts itself down. When there is zero draw, the power switch gets shut off, ready for the next backup (roughly the sequence sketched below).

          This is more complicated than I was intending to make it, though; I just wanted acceptance that mass storage devices are unreliable - even SSDs,
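          Something like the following captures that sequence; the power switch's REST endpoint, hostnames, and paths are all made up for illustration:

            curl -X POST http://powerswitch.lan/outlet/1/on    # power up the secondary box
            sleep 120                                          # give it time to boot
            rsync -aH --delete /srv/storage/ backup-box:/srv/storage/
            ssh backup-box poweroff
            # once the draw reads zero, the switch itself can be powered off until the next run
            curl -X POST http://powerswitch.lan/outlet/1/off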

          • That is definitely the case. When SSDs fail, they fail hard, and they can fail in insane ways, such as taking write commands without returning error messages. This is also why a checksumming filesystem is a must, so when drives break, you know it. Same with hard drives, especially newer ones that use helium.

            It seems that modern drives are not just unreliable in that they fail hard, but also in that sometimes you can't trust the data the drive is reading, even though drives have layers

    • Too many eggs in one basket

      I guess this depends on the size and shape of your eggs.

      If you have smallish independent files (say, a collection of pictures), then yes, spreading them over multiple smaller disks ensures at least some of them would still be usable if a disk crashes. However, if your data needs more bytes than can fit on a single disk to be usable, then multiple disks increase the probability of failure. For the second case, consider the installation kits of yesteryear that came on multiple floppies: after spending a couple

    • If you stick one of those in your home PC, it's sort of an upgrade to the recycle bin anyway. Write once, never read, except you could if you really needed to.
    • Exactly the point I was going to make. For an individual user, such big drives do not make sense. It's not just when you lose the data, it's when the drive is starting to fail and you are trying to copy 30TB off it at 1Gb/s, or much less because the drive is failing. Even if you have a perfect backup of everything (not usually the case if the drive is actively being used), creating a new clone so that you always have 2 copies can take a long time.

    • by Askmum ( 1038780 )
      Just imagine the rebuild time of your RAID-5 when one drive fails.
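      A back-of-the-envelope number, assuming the ~200 MB/s sequential rate quoted upthread and an otherwise idle array (rebuilds under real load take far longer):

        # 30 TB ≈ 30,000,000 MB; 30,000,000 MB / 200 MB/s = 150,000 s ≈ 42 hours
        echo $(( 30000000 / 200 / 3600 ))   # prints 41 (hours) - nearly two days per failed drive, best case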
  • by sl149q ( 1537343 ) on Friday January 19, 2024 @03:55PM (#64173735)

    The big question will be can you fill one up before the warranty expires?

    • The entire sci-hub collection is 77 TB https://www.reddit.com/r/DataH... [reddit.com] . With 30 TB disks we are getting close to Futurama's Mars University https://en.wikipedia.org/wiki/... [wikipedia.org] whose supposedly largest library in the universe consists of two "CDs" on a desk.

    • The big question will be can you fill one up before the warranty expires?

      Having everything on a single large drive is less effective.

      A single drive has a lifetime of about 5 years, so if you have data that is only occasionally accessed the OS can spin down the drive and save wear and tear over time.

      Rather than one 30 TB drive, six 5 TB drives will last much longer. Assuming that 5TB chunks meet your needs, put your OS and projects and work data on a single drive, then put your music library and vacation video collection and photos on a 2nd, all the AI training data on a 3rd

      • by mbunch ( 1594095 )
        A single 30TB drive is going to cost significantly less than six 5TB drives, use less power and take up less space. Also, disk drives have a lifetime of 5 years only because they become obsolete around that time. Most of them are disposed of while still in perfect working condition.
        • You can buy a 5TB drive for $99. Will this 30 TB drive really be less than $500, especially with all that new expensive-sounding technology?

  • ...one of those Rockwell retro encabulators....
  • Seagate's Mozaic 3+ uses 10 glass disks with a magnetic layer consisting of an iron-platinum superlattice structure [...] To record the media, the platform uses a plasmonic writer sub-system with a vertically integrated nanophotonic laser...

    "Superlattice"? "Plasmonic"? "Nanophotonic"? This kind of sounds made up. I wouldn't be surprised if the next sentence mentioned tetryons.

    • Don't you get it? They are doing nanophotons now. NANOPHOTONS! What a time to be alive!
    • These are legitimate concepts in condensed matter physics and nanotechnology, and it makes sense to refer to them in the context of an optoelectronic device. Superlattice means the magnetic stack has periodic thicknesses; I guess they are trying to force the L10 long-range chemical order in FePt to create perpendicular magnetic anisotropy (I could be wrong, but at least I'm not joking). Plasmonic refers to electric-field enhancement, which happens in noble-metal nanoparticles; it is also of high importance

      • As for optical, this makes me wonder why some of these advances can't be used to make a Blu-ray successor. Everything from variable-color lasers to more layers, to getting the size of the pits smaller. Maybe even go back to old-school MO drives, where the magnetic media is heated up and written, if there is a hard limit due to light wavelengths. The key is having high-capacity, relatively inexpensive WORM media with some terabyte capacity as an alternative to LTO tape.

        • I don't think MO drives can really take advantage of holographic storage, multi-wavelengths, or even multiple layers.

          • Ugh, finger grazed the damn submit button on phone & mobile slashdot has no preview-before-post :-(

            anyway...

            MO's true strength is its likely long-term readability. It works something like this:

            * Disc is manufactured with magnetically-polarized particles

            To write, the laser heats a spot past the magnetic layer's Curie point, a magnetic field sets the orientation of the particles at that spot while it's hot, and the orientation is locked in permanently as the spot cools.

            The advantage is, it doesn't darken or bl

            • I do wish MO came back, because of that. With M-DISC pretty much gone, and Blu-ray not really having a relevant capacity, there isn't much out there for long-term archiving, unless one ponies up for a tape drive. Having old-school MO media would make life easier, and also be a solid defense against ransomware, barring a compromise of drive firmware where it would zero out any disks put in it.

    • by swilver ( 617741 )

      It could be quantum absolute-zero antigravity technology, but if it says "Seagate", I ain't buying it.

    • by tepples ( 727027 )

      They stopped short of "tetrion" so as not to infringe The Tetris Company's copyright.

    • Although I completely agree that the chain of buzzwords is hilarious, it turns out these are all real techs that can be used now. We make all of these in the microchip lab I work in. Superlattice just means alternating 1 to 2 nm layers to control material properties, plasmon means the light interacts with very small particles to increase the effect (like pregnancy tests or stimulated Raman spectroscopy), and nanophotonic laser means that instead of a typical 1mm-long telecom laser that draws 500mA to run, theyre usin
  • by Anonymous Coward

    I'd love to see how they handle a three foot drop. SSDs should be standard by now.

  • Compared to older tech, it probably wouldn't take much for these drives to get a bit flipped by low-level natural radiation or cosmic rays, when the actual bits are a handful of atoms wide versus an older hard drive. Can I store this for years and still have the data remain intact? I just pulled out a hard drive the other day that was 20 years old and it still had the data on it.
    • If you care about such things and have a use case that actually justifies these then you will have ways to mitigate that problem. Redundant arrays with checksumming, backups stored in mines... No home user without way too much money should be buying these. Normal people will use an array to reach this capacity. Equally, write speeds are probably very poor, so you need an array to get decent write speeds anyway.

    • This is why having error-correcting filesystems with modern media is becoming more important. In the past, a bit flip from a cosmic ray wouldn't be a big deal -- the ECC of the drive could handle it. However, I have seen cases where two drives had completely different pieces of data and neither reported any CRC errors, which meant there was a 50/50 chance Linux md-raid would get the right stuff. This is why either ZFS at the filesystem level, or something like dm-integrity on the individual drive l

      • by AmiMoJo ( 196126 )

        It seems to be an issue with the drives reallocating blocks and failing to notice that they corrupted the data in the process.

        It's a real pain because with modern drives, some reallocations are inevitable even when brand new and in otherwise good working order. The drive doesn't bother reporting it, and if you do an immediate read-back it will probably come from the cache rather than the platter.

  • I need to upgrade my NAS at home. While I don't need a pair of 30TB drives, a pair of 20TB drives would be awesome. The best part of new drives coming out is they tend to push the prices of existing drives down. I'll keep watching my space slowly fill up, and I'll try to purge stuff I don't need, and the longer I wait, the cheaper the new drives will be.

  • From the summary

    the platform uses a plasmonic writer sub-system with a vertically integrated nanophotonic laser

    Throw in something about tachyons and this could be a technobabble line from Scotty.

    I'm not saying it doesn't make sense, just that it has that flavor. I had to look up what "plasmonic" means. Pretty fancy stuff! I'm a little suspicious of reliability, but not too much. HDD makers have an outstanding track record of upping densities without reducing reliability.

    • But how do they compensate for tetryon phase imbalance across the warp matrix?

    • by radoni ( 267396 )

      I thought the same about the description approaching technobabble, but it sounded familiar. Is this the inverse of Magneto-Optical drive tech?

    • Although I completely agree that the chain of buzzwords is hilarious, it turns out these are all real techs that can be used now. We make all of these in the microchip lab I work in. Superlattice just means alternating 1 to 2 nm layers to control material properties, plasmon means the light interacts with very small particles to increase the effect, like pregnancy tests or stimulated Raman spectroscopy, and nanophotonic laser means instead of a 1mm laser that draws 500mA to run, they're using an ultra
  • Ubisoft said we need to get used to not owning anything. Won't need 30TB to store those big bloated games. The future looks bleak my friends.

  • First PC: Hardcard slot-and-a-half 20 MB HDD in a Compaq luggable beast. I was the boss with 1 MB RAM and dual 5.25" floppies and a tiny monochrome screen.

    Now well over 1,000,000 times as much storage in a little under 40 years.

    That's basically doubling every 2 years, sorta like Moore's law...

    https://en.wikipedia.org/wiki/Hardcard/ [wikipedia.org]

    • It seems hard drive development has plateaus and steep climbs. Stuff goes without any real advance in capacity for a while, then something happens that adds a significant amount of storage; repeat. Things have been hanging around 20-22 TB for a while now, and going from 22 to 30 is a pretty good jump.

      One thing that I do think would be useful is some added HDD form factors. We right now have two, the 2.5" drive and the 3.5" drive. If we could get a form factor that increased the height, HDD makers coul

      • by gatzke ( 2977 )

        For performance, aren't you talking about RAID? But what you describe sounds like RAID in a single form factor: stack five disks in one so you double speed, with redundancy for failure and a spare...

        The last time I ran RAID it was great until the controller crapped out and all was lost. So buy two RAID arrays? :-)

        On advances in capacity, I would argue CPUs have discrete jumps as you make new manufacturing fabs/processes. A few incremental increases but some significant jumps along the way as well.

  • I remember learning about those in history class.

    • I remember learning about those in history class.

      The internal storage you have been using all along is commonly referred to as “hard” storage regardless of the technology. Not like they switched to fucking jello inside your machine sometime between magnets and solid state disk.

  • Will it cost an arm and a leg ???
  • Lemme guess they are all shingled drives...
