
Seagate's Breakthrough 32TB HAMR Hard Drives Are Finally Here (tomshardware.com)

Seagate has launched its first mass-produced hard drives using heat-assisted magnetic recording (HAMR) technology, introducing 32TB and 30TB models under the Exos M brand. The drives, based on Seagate's Mozaic 3+ platform, mark the company's commercial breakthrough in HAMR technology after 16 years of development. Compatible with existing systems, the 32TB model uses shingled magnetic recording, while the 30TB version employs conventional magnetic recording.

  • by merde ( 464783 ) on Tuesday December 17, 2024 @12:41PM (#65019767)

    ... my initial thought is, "That's a shit lot of data to lose in one go"

    • ... my initial thought is, "That's a shit lot of data to lose in one go"

      Buy two and back up often.

      • Eww. Buy three: RAID two and use the third to back up to.

      • by eneville ( 745111 ) on Tuesday December 17, 2024 @02:05PM (#65020013) Homepage

        Buy two drives of different sizes from different manufacturers, RAID1 them, and then upgrade the smaller disk periodically. This way you're much less likely to have two drives pop in the RAID1 at the same time. You sacrifice a TB because the array is the smaller size, but I'm happier knowing that the smaller disk has stood the test of time and both aren't going to die a month in.

        • by AmiMoJo ( 196126 ) on Tuesday December 17, 2024 @08:15PM (#65021073) Homepage Journal

          It's better to have a proper, periodic backup system that also checks for errors and gives you some file history. RAID won't notice data corruption, and most RAID 1 setups don't read both drives and compare; they only write to both drives and spread reads between them.

          RAID is for uptime; it's not really for keeping your data safe.
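
          A minimal sketch of the kind of end-to-end integrity check RAID 1 won't do for you: hash every file and compare against a previously saved manifest. The paths and manifest format here are illustrative assumptions, not anyone's actual setup.

          ```python
          # Hypothetical integrity check: compare current file hashes against a
          # previously saved manifest. RAID 1 mirrors writes but never does this.
          import hashlib
          import json
          import pathlib

          def build_manifest(root: str) -> dict[str, str]:
              """Map each file under root to its SHA-256 digest."""
              return {
                  str(p): hashlib.sha256(p.read_bytes()).hexdigest()
                  for p in pathlib.Path(root).rglob("*")
                  if p.is_file()
              }

          old = json.loads(pathlib.Path("manifest.json").read_text())  # saved earlier
          new = build_manifest("/mnt/backup")

          for path in sorted(old.keys() & new.keys()):
              if old[path] != new[path]:
                  print("corrupt or changed:", path)
          ```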

        • by tlhIngan ( 30335 ) <slashdot.worf@net> on Tuesday December 17, 2024 @09:54PM (#65021241)

          Single redundancy is just silly these days. RAID5 and two-drive RAID1 should both be banned.

          Because when one drive fails and the array is no longer redundant, the pucker factor is real: the days-long rebuild is some of the most stressful time around.

          A double-redundant system (RAID6, for example, or 3+ drives in RAID1) is far less stressful.

          RAID5 is great on paper, but after rebuilding countless arrays after drive failure, I'm sticking with doubly redundant.

          • Single redundancy is just silly these days. RAID5 and two-drive RAID1 should both be banned.

            Banned? WTF, dude. You do NOT get to decide how I choose to satisfy my needs. Maybe data integrity is not the goal and speed of access is. With your 'ban', you would fuck over all my goals. All because you think you are important enough to decide what is suitable for other people.

            Why would you even think of banning something for someone else merely because it doesn't fit any of your usage needs? Shortsighted, bro. Very shortsighted.

      • ... my initial thought is, "That's a shit lot of data to lose in one go"

        Buy two and back up often.

        At this data density, someone should probably be marketing and selling a minimum quantity of two, and stop bullshitting about how often Dr. Shit F. Happens stumbles in at 2AM in a drunken stupor to fuck shit up at our expense.

        • by Megane ( 129182 )

          You need two drives from different lots to avoid the curse of sibling failure. I have an old little Drobo array with two 8TB and two 6TB drives, and I went out of my way to acquire them from Fry's (that's how long ago it was) in different weeks, specifically for that reason.

    • ... my initial thought is, "That's a shit lot of data to lose in one go"

      The large organizations (the cloud providers/hyperscalers/HPC orgs) purchasing these drives do understand proper redundancy (RAID, sharding, tested backups, etc.), as individual hard drives (and, for that matter, entire systems) are failing all the time. Life at (big) scale is simply different from what most people have experienced.

      • Yup. We're not even "hyperscale" or whatever, but we've still got... probably ~10,000 drives. Drives die every day. No data loss in 18 years.
    • ... my initial thought is, "That's a shit lot of data to lose in one go"

      RAID 6 (or some ZFS configuration where 2 drives can fail) + Backups are your friends.

    • ... my initial thought is, "That's a shit lot of data to lose in one go"

      At this point in data density, it should be damn near illegal to sell these outside of RAID-1 out of the box.

    • Yes, this is where you want to use more complex disk geometries. Mirrors of stripes with hot spares, etc.

      Do not put all your digital eggs in one Seagate basket.

    • 22 million floppy disks worth [howmanyfloppydisks.com], give or take a few thousand floppies. (1.44 MB 3.5")
  • MTBF? (Score:5, Interesting)

    by sdinfoserv ( 1793266 ) on Tuesday December 17, 2024 @12:59PM (#65019815)
    I wonder if the mean time between failures (MTBF) is less than the time it takes to fill an entire drive.
    • Is the F in MTBF "failure" as in totally conking out, or reading a byte incorrectly?
      • by NetCow ( 117556 )
        The former, but it's a bit misleading as a metric given all its assumptions. Seagate isn't using MTBF for estimating failure probability any longer: https://www.seagate.com/gb/en/... [seagate.com]
        • by tlhIngan ( 30335 )

          The former, but it's a bit misleading as a metric given all its assumptions. Seagate isn't using MTBF for estimating failure probability any longer:

          It's always been misleading. It's just like another figure you see: the "200-year flood" or "100-year flood". The numbers do not mean anything intuitive; in fact, they are downright deceptive.

          MTBF is what it says on the box: how long, on average, a drive will work before it fails. The time is just a "mean", though; it tells you nothing about the median.
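
          To put a number on the mean-versus-median point: under the constant-failure-rate (exponential) model that MTBF arithmetic usually assumes, half the drives are gone well before the mean. A quick check (the MTBF figure is illustrative, not a spec):

          ```python
          import math

          # Under an exponential lifetime model, median = MTBF * ln(2).
          mtbf_hours = 1_000_000  # illustrative figure
          median_hours = mtbf_hours * math.log(2)
          print(f"median lifetime: {median_hours:,.0f} h")  # ~693,000 h, ~69% of the mean
          ```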

    • by Entrope ( 68843 )

      Er, yes? Usually MTBF for a hard drive is on the order of a million hours, or sub-1% annual failure rates. If the drive supports a sustained 50 MB/sec throughput, that's roughly a week of writing.
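
      A quick check of that arithmetic, using the figures assumed above (32TB capacity, 50 MB/s sustained, 1,000,000-hour MTBF):

      ```python
      # Time to fill a 32TB drive at 50 MB/s versus a 1,000,000-hour MTBF.
      capacity_bytes = 32e12
      write_rate = 50e6  # bytes per second
      mtbf_hours = 1e6

      fill_hours = capacity_bytes / write_rate / 3600
      print(f"fill time: {fill_hours:.0f} h (~{fill_hours / 24:.1f} days)")  # ~178 h, ~7.4 days
      print(f"MTBF / fill time: {mtbf_hours / fill_hours:,.0f}x")            # ~5,600x
      ```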

      • That's probably optimistic in terms of write time; a couple of years ago I had to copy a full 4TB disk's contents to an 8TB disk and it took about a day and a half.

    • by Temkin ( 112574 )

      More interesting would be the average uncorrectable error rate. Just to get the RAID5/RAIDz1 argument off to a proper start...

    • by GuB-42 ( 2483988 )

      At 100 MB/s, it would take about 90 hours, or nearly 4 days, to fill such a drive. MTBF is typically in the hundreds of thousands of hours. Realistically, it is often less, but that's the order of magnitude, so that's at least 1000x the time to fill the drive.

      It doesn't mean it is not a problem. In a RAID array, it means there is a significant chance of another drive failing during a rebuild, especially since the rebuild may put the drives under more stress than usual, and drives in the same RAID array are more likely to fail at the same time.
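
      A rough illustration of that rebuild risk. The AFR, rebuild window, and array width are assumed figures, and the independence assumption makes this a lower bound, since drives from the same lot under the same load fail in a correlated way:

      ```python
      # Naive, independence-assuming estimate of a second drive failing
      # while a degraded array rebuilds. All inputs are assumptions.
      afr = 0.02        # annual failure rate per drive
      rebuild_days = 4  # time to rebuild one large drive
      survivors = 7     # remaining drives in an 8-wide RAID5

      p_single = 1 - (1 - afr) ** (rebuild_days / 365)
      p_any = 1 - (1 - p_single) ** survivors
      print(f"P(second failure during rebuild) ~ {p_any:.2%}")  # ~0.15%
      ```

      Even a fraction of a percent per rebuild adds up across many arrays and many years, and correlated failures push the real number well above this naive figure.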

  • 60TB and 100TB SSDs are available; is the increasingly small price differential of an increasingly obsolete storage method really worth it? Especially with the speed and random-access disadvantages. Windows 10 and 11 are so inefficient on hard drives that Microsoft now mandates SSDs for laptops. This is like buying a computer with a floppy disk drive in 2003.
    • by blahbooboo2 ( 602610 ) on Tuesday December 17, 2024 @01:47PM (#65019957)

      Who uses spinning disks for system drives anymore? SSDs are great as system drives and for certain applications, but for archival/rarely used data an SSD is significantly more costly per TB than a spinning disk (especially at the hundreds of TB typically stored for archival purposes).

      Right now a 24TB WD Red spinning drive is $569, whereas an 8TB external SSD is $620 on Amazon.

      Folks have said for 10+ years that SSDs would replace spinning disks for all purposes. So far these folks have been wrong and, from this announcement, it looks like they will continue to be wrong for the foreseeable future.

      • Re: (Score:2, Insightful)

        by Luckyo ( 1726890 )

        It's even funnier when you tell those people about tape. I've seen some real jaws dropping and eyes bulging in recent times.

        • Everyone hates LTO until you actually have to have a dependable backup. Then everyone absolutely loves autochangers and LTO.

      • by Miles_O'Toole ( 5152533 ) on Tuesday December 17, 2024 @02:43PM (#65020139)

        These huge HDDs are having another effect: they're putting downward pressure on the cost of backup drives for average folks. I can buy a 3TB HDD for well under a hundred bucks. A couple or three of those are enough to back up everything I need backed up, with redundancy. Screw the cloud. That's my data on my hardware stored at locations I control for under three hundred bucks. Just a few years ago, that would have been a dream.

        • Recently built a new home server (replacing the long line of "I don't know what I'm doing" servers...), and one of the things I did that I never did before was put in a 3.5" drive hotswap bay dedicated to back-ups. Here's an example of the kind of thing I mean: https://www.newegg.com/istarus... [newegg.com]

          The best part is that I don't even have to buy any new drives. I can just insert one of my stack of obsolete drives (we all have them) for the incremental backups, and get one of the larger drives for the once-a-week full backup.

          • Thank you from the bottom of my heart for this! I've been using external USB drives. I have no doubt at all SATA would be better.

            I will be getting one of these as a Christmas present to myself.

            Again, thank you!

            • by mckwant ( 65143 )

              I just arrived at "External USB Drives," so I hate you both. :)

              Curious what you're running. I feel "homelab-adjacent," but I'm not convinced the centralization is worth it.

              Better, maybe: JellyFin on my homelab creates a pet. JellyFin on a semi-disposable fanless J4125 is definitely a tool.

              • For me a combination of development (being able to create and tear down virtual servers in seconds helps) plus some self hosted applications. All supported with PostgreSQL, Minio, Redis, etc. (These all run on SSDs.) There's some shares, backed by spinning rust in a RAID set, that my family backs things up to, and that RAID set also maintains the media server.

                All of this is running under a combination of Xen and LXD (Xen to guarantee resources, LXD to sandbox applications and create/tear down virtual servers).

            • Glad to help!

      • by dargaud ( 518470 )
        I have a headless server for data storage: lots of pics and videos and stuff. All the laptops have SSDs, but the server has a RAID5 with 3 hard drives. I change one disk every year or so, putting a larger one in so I can grow the RAID 2 years later, and I use the removed disk as a backup. The RAID doesn't see much write activity; it is used mostly for reading. So far so good. I expect to replace the HDs with SSDs in 3 years, more or less, if price points keep converging. Although you cannot have a RAID with a mix of hard drives and SSDs.
      • by CAIMLAS ( 41445 )

        Meanwhile, there are companies out there replacing tape and spinning disk with all-flash storage options... they exist, you just don't know about them yet, apparently.

      • Who uses spinning disks for system drives anymore?

        I am about to build a computer with two spinning disks in it. They are much faster than my NAS and can hold ginormous amounts of read-only data (training data).

    • SSD (Score:5, Interesting)

      by JBMcB ( 73720 ) on Tuesday December 17, 2024 @01:53PM (#65019979)

      60TB and 100TB SSDs are available; is the increasingly small price differential of an increasingly obsolete storage method really worth it?

      50-100TB SSDs cost several hundred dollars per TB.

      Enterprise 20-30TB hard drives average around $20-30 per TB.

      What small price differential are you talking about? Keep in mind that companies buying these things aren't buying one, but dozens or hundreds of them, leading to an overall price delta of tens or hundreds of thousands of dollars.
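
      Putting the per-TB figures above together (both prices are the rough ones quoted in this thread, not from any price list):

      ```python
      # Fleet-level cost difference at the rough per-TB prices above.
      ssd_per_tb = 300     # $/TB, low end of "several hundred"
      hdd_per_tb = 25      # $/TB, midpoint of the $20-30 range
      fleet_tb = 100 * 32  # e.g. a hundred 32TB drives

      delta = fleet_tb * (ssd_per_tb - hdd_per_tb)
      print(f"price delta for the fleet: ${delta:,.0f}")  # $880,000
      ```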

      • Keep in mind that companies buying these things aren't buying one, but dozens or hundreds of them

        (Tens/Hundreds) of thousands of drives are just the start for the big organizations after the initial samples used for evaluation are shown to perform appropriately.

    • You're on crack, dude.
      At large scale, we use spinners. They're the only economical choice.
      32TB of SSD will cost you ~3k. Spinner? $600.

      We do keep smaller SSDs as cache drives. They're effective at improving the overall rate of random access across the pools, and they won't cost you a Fortune 500's yearly income to house a few PBs.
    • 60TB and 100TB SSDs are available; is the increasingly small price differential of an increasingly obsolete storage method really worth it? Especially with the speed and random-access disadvantages. Windows 10 and 11 are so inefficient on hard drives that Microsoft now mandates SSDs for laptops. This is like buying a computer with a floppy disk drive in 2003.

      Compare the price/TB of said SSDs vs the price per TB of these drives.

      Then search the internet for terms like "warm storage", "write once read seldom", or "Virtual Tape Library", and you will see clearly what these drives are used for.

      • I'd guess the cost for most of the SSD is licensing. Someone probably already figured out how to make SSDs which are 1/100th the size for 1/100th the price. They just still have to pay a license fee for the tech. I'd guess once those patents run out in 15-20 years that's when hard drives actually die.
        • Yeah, I've seen 16TB SSDs listed for $12 on Aliexpress. It's those damned lawyers making EUV lithography so expensive.
          • I don't think anyone is making EUV NAND flash yet, but ya, it sure as hell won't be cheaper.

            btw, I think you can get 16TB micro-SD cards on Aliexpress... why get a full-blown SSD?
        • Na. NAND manufacturing is a pretty competitive space.
          The problem is simple supply, demand, and cost of manufacturing.

          NAND can be cheaper than it is, but on a per-TB basis, it'll never, ever, compete with a hunk of spinning aluminum.
          NAND will win that particular cost battle once boffins quit figuring out ways to squeeze more data into a mm^2 of ferromagnetic coating.
    • Give it time and both storage systems will reach hard limits. It may well be that magnetic recording methods top out at, picking an arbitrary figure, 1000TB, before it just becomes impractical to store more in current form factors, while SSDs, given their lack of mechanical/moving parts and use of ever-improving silicon processes, beat that.

      But that hasn't happened yet, and spinning rust continues to have a substantial cost advantage over SSDs when disks are in the multiple-terabyte range. And apparently it will stay that way for a while.

  • What the 30TB drive allows is a simple 30TB backup for your home lab.

    Your 6x8TB RAID 6 (or equivalent ZFS) array can now be backed up to one of these (or, more exactly, three: two off-site and one on-site, rotating every week).

    Or your 12x3TB RAID 6 (or equivalent up-to-2-failures ZFS) array can be backed up to just one of these.

    This drive simplifies backups of our home labs significantly.

    As for enterprises, HPC, and cloud builders... they know full well what to use these drives for.
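
    For reference, the usable sizes of those example arrays (RAID 6 gives up two drives' worth of capacity to parity). Note the 6x8TB layout holds up to 32TB, so a single 30TB drive only covers it while the array isn't completely full:

    ```python
    # Usable capacity of an n-drive RAID 6 array of equal-size members.
    def raid6_usable_tb(n_drives: int, drive_tb: float) -> float:
        return (n_drives - 2) * drive_tb

    print(raid6_usable_tb(6, 8))   # 32 TB -- a bit more than one 30TB drive
    print(raid6_usable_tb(12, 3))  # 30 TB -- fits a 30TB drive exactly
    ```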

    • by dgatwood ( 11270 )

      What the 30TB drive allows is a simple 30TB backup for your home lab.

      I wish my home backups would fit in 30 TB. I have a RAID 5 with 24 TB drives, and am seriously thinking about adding a second RAID set, because I have maybe 8 TiB of that 87 TiB left right now, with usage growing at a couple of TB per year (4K concert video recordings, mostly).

      What I'm hoping for is that this brings down the price of 24 TB drives more before I bite the bullet and order 5 more of the things.

      • > 4K concert video recordings, mostly

        Do you have a videography business or just data hoarding?

        Just curious. Back in the day a buddy of mine had filing cabinets full of CD-R discs with Dead bootlegs. He just felt compelled to collect.

        I finally hit my 'good enough' at about 18TB and it's not filling up faster than a slow rotation of backup drives into RAIDZ2 permits anymore.

        But I just download some research videos and do a little bit of home video, nothing pro or hoardy.

        I too would love a price drop on the 24TB drives.

        • by dgatwood ( 11270 )

          > 4K concert video recordings, mostly

          Do you have a videography business or just data hoarding?

          I'm on the board of a nonprofit (orchestra) and another in-the-process-of-being-founded nonprofit (choir). So arguably the first, but really neither? :-D

          I probably need to configure my Atomos recorders to use heavier compression. An hour and a half of 4K video from that is O(120) GB, versus more like 30 GB from my Panasonic 4k camcorder. But either way, we're talking about a bunch of very large files. :-)
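
          For scale, the sizes quoted imply these average bitrates (treating "an hour and a half" as 90 minutes):

          ```python
          # Average bitrate implied by the file sizes mentioned above.
          seconds = 90 * 60
          for label, gigabytes in (("Atomos recorder", 120), ("Panasonic camcorder", 30)):
              mbps = gigabytes * 8e9 / seconds / 1e6
              print(f"{label}: ~{mbps:.0f} Mbit/s")  # ~178 and ~44 Mbit/s
          ```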

      • What the 30TB drive allows is a simple 30TB backup for your home lab.

        I wish my home backups would fit in 30 TB. I have a RAID 5 with 24 TB drives, and am seriously thinking about adding a second RAID set, because I have maybe 8 TiB of that 87 TiB left right now, with usage growing at a couple of TB per year (4K concert video recordings, mostly).

        What I'm hoping for is that this brings down the price of 24 TB drives more before I bite the bullet and order 5 more of the things.

        First and foremost, do not do RAID5 with 24TB drives. In case of failure, the rebuild time will be so large that a second disk will probably fail before the rebuild is done, and then you lose all your data.

        Buy these 30TB drives to replace your 24TB ones instead, and go to RAID6. And back up to LTO; you will be better served.

        And do not do RAID 5 or 6 across more than 10-14 drives (depending on the HW/SW manufacturer). Most storage manufacturers (HPE and Huawei, to name the two I worked with directly) put those limits in their configuration guidelines.
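
        A best-case rebuild-time estimate backs that advice up (the sustained rate is an assumption; rebuilds on a live, degraded array run far slower):

        ```python
        # Minimum resilver time for a 24TB member at a sustained 200 MB/s.
        hours = 24e12 / 200e6 / 3600
        print(f"best case: ~{hours:.0f} h")  # ~33 h; days under real load
        ```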

        • by dgatwood ( 11270 )

          What the 30TB drive allows is a simple 30TB backup for your home lab.

          I wish my home backups would fit in 30 TB. I have a RAID 5 with 24 TB drives, and am seriously thinking about adding a second RAID set, because I have maybe 8 TiB of that 87 TiB left right now, with usage growing at a couple of TB per year (4K concert video recordings, mostly).

          What I'm hoping for is that this brings down the price of 24 TB drives more before I bite the bullet and order 5 more of the things.

          First and foremost, do not do RAID5 with 24TB drives. In case of failure, the rebuild time will be so large that a second disk will probably fail before the rebuild is done, and then you lose all your data.

          Only if you buy drives with a self-destruct design flaw, of which there have been a few spectacular examples, but in truth, those were dying so close together that the second parity drive would die before you could clone the array.

          BTW, the RAID 5 is technically just a backup. The original data is spread across various hard drives; the RAID is a centralized online backup so that accessing the data doesn't require digging through boxes, finding drives, and figuring out which one has the data.

          Buy these 30TB drives to replace your 24TB ones instead, and go to RAID6

          I need

  • by FreeBSDbigot ( 162899 ) on Tuesday December 17, 2024 @03:06PM (#65020207)

    If I've had the shingles vaccine, can I still use the 32TB drive?

  • Talk about a long development cycle for this tech: /. was still doing funny April Fools jokes when this stuff was announced.
    • Talk about a long development cycle for this tech: /. was still doing funny April Fools jokes when this stuff was announced.

      /. did funny April fools jokes? All the ones I ever saw were just annoying and dumb -- never funny. If that tradition has died I'm happier for it.

  • And it only yields an extra 2TB.

  • HAMR! At least they are honest.
  • Is it true that Linux is still limited to a total of 16TB in swap partitions max? I really need the kernel patched so I can use this new hardware for running Java.

    • by narcc ( 412956 )

      Java isn't the problem. The problem is developers who mistakenly believe that using Java means they don't need to think about memory.

      • but ... garbage collection!

        • by narcc ( 412956 )

          Haha! Yep, that's exactly what they'll tell you. ... It's also just about all they can tell you about it. We've really failed the past couple generations of developers.

  • I have had 5TB drives (Toshiba) that, after they'd been used for a while, would have taken 40 days for a single linear overwrite. WD SMR drives are a bit better, but not much. As a consequence, I basically cannot buy preconfigured external drives anymore, because they are all SMR, or at the very least unspecified.

    Why would Seagate make an SMR drive when it gives them less than 10% more capacity?
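
    For scale: the "5TB in 40 days" figure works out to an effective rewrite rate of about 1.4 MB/s, and the SMR capacity gain in question here is under 7%:

    ```python
    # Effective linear-rewrite rate implied by "5TB in 40 days", plus the
    # SMR capacity gain on these drives (32TB SMR vs 30TB CMR).
    rate_mb_s = 5e12 / (40 * 86400) / 1e6
    gain = (32 - 30) / 30
    print(f"~{rate_mb_s:.1f} MB/s effective; +{gain:.1%} capacity")  # ~1.4 MB/s; +6.7%
    ```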
