Data Storage

Seagate Says 20 TB HAMR Drives Will Arrive in December, 50 TB Capacities in 2026 (techpowerup.com) 83

Seagate revealed several interesting points about its upcoming next-generation hard drives during its quarterly earnings call this week. From a report: The company has disclosed a shift to a new generation of HDDs based on heat-assisted magnetic recording (HAMR) technology, which is set to bring many improvements over the approaches currently used by Seagate's rivals like Western Digital. That rival uses energy-assisted perpendicular magnetic recording (ePMR) and microwave-assisted magnetic recording (MAMR) technologies, and it already has a 20 TB drive on offer. Seagate announced that it will unveil a 20 TB HAMR-based HDD in December this year, bringing improvements such as higher speed and more efficient disk reads and writes. It added, "Seagate will be the first to ship this crucial technology with a path to deliver 50-TB HAMR drives forecast in 2026."
  • stop! (Score:5, Funny)

    by serviscope_minor ( 664417 ) on Friday October 30, 2020 @04:15PM (#60666820) Journal

    Stop!

    HAMR time.

    • HAMR Time (tm), brought to you by the looming threat of flash memory.

      I am glad to see it, but I expect it would have shown up a decade from now if the spinning-metal-disk industry were not facing extermination.

      • Right after BSD and the PC, right?

        At the current rate, all of those will outlast humanity.

      • by neurojab ( 15737 )

        > if the spinning-metal-disk industry was not facing extermination.

        Is it though? Seems logical that SSD/Flash technology would take over because it's better in lots of ways, but the price per storage unit of a spinning rust drive is persistently tough to beat at the high end of the range. I would have thought that SSDs would have taken over by 2020, but I just bought a couple of spinning rust drives for my NAS. They were just far cheaper per TB than SSD solutions.
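
        For a rough sense of that gap, here's a back-of-the-envelope $/TB comparison; the drive set and prices below are illustrative assumptions (circa-2020 street prices), not quotes:

        ```python
        # Back-of-the-envelope $/TB comparison (illustrative 2020-ish prices, not quotes).
        drives = {
            "16 TB NAS HDD": (16, 350.0),  # (capacity_tb, assumed_price_usd)
            "4 TB SATA SSD": (4, 450.0),
            "1 TB NVMe SSD": (1, 130.0),
        }

        for name, (tb, price) in drives.items():
            print(f"{name}: ~${price / tb:,.0f}/TB")
        # The HDD lands around $22/TB vs roughly $110-130/TB for flash,
        # which is the ~5x gap cited elsewhere in this thread.
        ```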

        • Same thoughts here. Eight years back, when SSDs were becoming more popular (because they were getting a "little" more affordable), I figured that by this time SSDs would be the norm. The price drop for SSDs over the last 8 years has not nearly kept pace with what happened to spinning rust over the same period. With that said, however, I'll never purchase spinning rust again. It's just a matter of time before SSDs completely wipe out mechanical drives.
          • Spinning disk is the new tape.

            • Except tape is only near-line; spinning rust is on-line. I only have to wait for it to spin up, and even then only because I've chosen to use power saving.

              • In the space I live in, 6ms is an eternity. I prefer latency measured in nanoseconds. So for me, spinning disk is indeed the new tape, and tape is the new landfill.

                • I appreciate that use cases vary, but for almost all uses it's acceptable to wait for some buffering. Don't get me wrong, I'd prefer all SSD, but the prices aren't there yet for the majority of purposes.

              • by sjames ( 1099 )

                Tape USED to be online. Then disks came into existence and started being used as cache for tape, then finally took over, and tape became near-line.

                Of course, way way back, data was input on cards and if you were lucky, you could use tape for intermediate storage before either printing or punching an output deck.

            • And for low-cost, high-volume bandwidth, there is still nothing that beats a 1962 Chevy station wagon loaded with reels of IBM 729 magnetic tapes. Assuming you can tolerate the latency.
              • The latency of buying and installing the tape drive would be the real killer. A time machine might help.

        • by sjames ( 1099 )

          It's feeling enough pressure to see a need to move forward now. That doesn't mean spinning rust will be gone next year or anything like that; it really does win on price/performance at any significant size. The 20 TB drives will help keep it that way for a few more years. If they wait until SSDs get closer in price/performance, it will be too late to avoid being overtaken.

      • Comment removed based on user account deletion
    • Re:stop! (Score:5, Funny)

      by fahrbot-bot ( 874524 ) on Friday October 30, 2020 @04:43PM (#60666904)

      Stop!

      HAMR time.

      Waiting for reliability reports to say, "Too legit to quit".

      • by Chaset ( 552418 )

        I'm wondering what the highest /. ID is that genuinely recognizes the reference first-hand.

  • by rsilvergun ( 571051 ) on Friday October 30, 2020 @04:28PM (#60666856)
    That is roughly equal to:

    1. 200 CODs (Call of Duty installs).

    2. 3.33 Libraries of Congress.

    3. 1/8 of one /. reader's collection of (*ahem*) "movies".
  • I'm not dead yet! (Score:5, Insightful)

    by Tough Love ( 215404 ) on Friday October 30, 2020 @04:44PM (#60666910)

    Hard drive industry: I'm not dead yet! These will likely maintain a 5x cost per bit advantage vs flash, which is compelling for any high volume application that can tolerate 6ms random seek latency. Heck, it's compelling for my backup array in the closet.

    • by BAReFO0t ( 6240524 ) on Friday October 30, 2020 @04:54PM (#60666938)

      Surely, you mean:

      When all you have is a HAMR, every problem starts to look like a thumb drive.

    • Hard drive industry: I'm not dead yet! These will likely maintain a 5x cost per bit advantage vs flash, which is compelling for any high volume application that can tolerate 6ms random seek latency. Heck, it's compelling for my backup array in the closet.

      This generational leap in areal density is about a decade late. Even so, HDDs have been able to maintain their cost advantage relative to SSDs. For many markets such as cold storage and near-line, this cost advantage is much more important than throughput or IOPS.

      What will be interesting to see is whether the increase in areal density also provides a performance advantage, as has historically been the case. Hopefully, the read/write mechanism doesn't impair this expected performance increase. For many people

      • Higher areal density improves sequential transfer rate, it does not improve average seek time. This is great for bulk storage, not great for personal workstations and the like. Database applications must be designed with care, but most of the net giants are still running their big databases on spinning disk. The size of the bulk data is still increasing faster than latency sensitive applications move to flash. We won't be seeing that equilibrium shift dramatically for quite some time. Shift slowly, yes. But

        • Higher areal density improves sequential transfer rate, it does not improve average seek time.

          Linear density increases the sequential bit rate. Radial density improves seek time per track, even though the seek time per distance remains the same. At least historically, linear density was easier to achieve, but if the areal density improvement of HAMR/etc. also improves radial density, then seek times will also improve.
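
          A toy model of this point; all constants below (settle time, travel rate, mm-per-TB) are made-up illustrations, not drive specs:

          ```python
          # Toy short-seek model: fixed settle time plus travel time proportional
          # to physical distance (an assumption for short seeks, per the argument above).
          def seek_time_ms(distance_mm, ms_per_mm=0.3, settle_ms=1.0):
              return settle_ms + ms_per_mm * distance_mm

          # Hypothetical baseline: 1 TB of neighboring data spans 20 mm at 500 tracks/mm.
          for tracks_per_mm in (500, 1000):  # doubling radial density
              mm_per_tb = 20.0 * 500 / tracks_per_mm
              print(f"{tracks_per_mm} tracks/mm: seek across 1 TB ~ "
                    f"{seek_time_ms(mm_per_tb):.1f} ms")
          # ~7.0 ms at baseline vs ~4.0 ms at double density: the travel component
          # halves while the settle time stays fixed.
          ```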

          Database applications must be designed with care, but most of the net giants are still running their big databases on spinning disk. The size of the bulk data is still increasing faster than latency sensitive applications move to flash. We won't be seeing that equilibrium shift dramatically for quite some time. Shift slowly, yes. But storage engineers hoping for final liberation from mechanical latency are doomed to be disappointed for the remainder of this decade.

          Big databases tend to have huge caches, and for these caches DRAM is far faster than flash, definitely for reads and especially for writes. Meanwhile, due to the huge caches, HDD acc

          • Area density does not improve average seek time, you said it yourself. Faster actuators and multiple actuators improve average seek time. The latter is deemed too costly, while the former is the main reason for the slow reduction from about 9ms average seek down to a bit less than 6ms. Actuator seek time is not the largest component of seek time, rotational latency is, which depends entirely on rotation speed. The market has spoken firmly that it does not want to pay for anything faster than 7200 RPM, so t
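
            The rotational component is simple arithmetic: average rotational latency is half a revolution, so it depends only on spindle speed. A quick check:

            ```python
            # Average rotational latency is half a revolution; only RPM matters.
            for rpm in (5400, 7200, 10_000, 15_000):
                ms_per_rev = 60_000 / rpm
                print(f"{rpm:>6} RPM: {ms_per_rev:5.2f} ms/rev, "
                      f"avg rotational latency {ms_per_rev / 2:.2f} ms")
            # 7200 RPM -> ~4.17 ms average rotational latency, a floor that no
            # areal-density improvement can lower.
            ```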

            • Area density does not improve average seek time, you said it yourself.

              No, I said linear density doesn't affect seek time. But areal density is 2-D and sometimes includes a radial component. An increase in radial density does affect seek time. Seek time is dependent on the seek distance, which would decrease with an increase in radial density. For smaller distances, seek time is roughly proportional to seek distance. The big question is not whether high radial density decreases seek time, which is obvious. The big question is whether the HAMR areal density increase has a

              • Again, you glossed over the fact that rotational latency is dominant.

                • Again, you glossed over the fact that rotational latency is dominant.

                  Rotational latency is very important for large sequential access, and is not that important for small random accesses. So, it depends on the workload.

                  Linear density is almost a given for areal density improvements. I keep harping on the question of radial density because it's an open question how HAMR/etc. will impact radial density. If it helps with seek times, that would be great. If not, then it wouldn't be a big surprise.

                  • ...not that important for small random accesses

                    Incorrect. In fact, the cost of rotational latency becomes vastly higher as a proportion of average latency, the less the head has to move. In practice this turns out to be a large and annoying overhead for perhaps a majority of applications.

                    • ...not that important for small random accesses

                      Incorrect. In fact, the cost of rotational latency becomes vastly higher as a proportion of average latency, the less the head has to move. In practice this turns out to be a large and annoying overhead for perhaps a majority of applications.

                      Well, yes, if the head doesn't have to move, then the other components of latency, such as rotational latency, are more significant by definition. But, for small random accesses, the head has to move a lot. If the target tracks span from the OD to the ID, seek times can be very large. Small accesses mean that the rotational time to read the requested blocks is very small. On a track with say 2000 blocks for a 7200rpm drive, a one-block read would take about 4us. In the worst case of missing the target b
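
                      Spelling out that arithmetic (same assumed 7200 RPM drive with 2000 blocks per track):

                      ```python
                      # Per-block transfer time vs rotational cost on the example drive above.
                      rpm = 7200
                      blocks_per_track = 2000

                      ms_per_rev = 60_000 / rpm                     # ~8.33 ms per revolution
                      block_read_us = ms_per_rev / blocks_per_track * 1000
                      print(f"one-block transfer: ~{block_read_us:.1f} us")      # ~4.2 us
                      print(f"worst-case rotational miss: ~{ms_per_rev:.1f} ms") # just missed it
                      # Average rotational cost is half a revolution (~4.2 ms), dwarfing the
                      # microseconds needed to actually read the block.
                      ```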

  • That's all I care about.

    Because what good is 50 TB if a random 10% of it may vanish in a couple of years?

    You can't back the data up, by definition: these disks are the largest available and hence usually are the backup drives themselves.

    • I've always been a bit paranoid about heat affecting drive reliability. I'm not clear on how much HAMR will affect the temperature of the drive over a period of time. I'll be very interested to see what the reliability will be on these drives.
      • Re: (Score:3, Informative)

        by cj* ( 149112 )

        It isn't that much heat.

        A long while back I saw a HAMR demo that used a focused bluray laser to create the heat source. So a few handfuls of milliwatts per read/write head.

        It looks like this has drifted up a little, but the hit is still much less than a watt. Also, the fact that a laser is used means that a light pipe can move the waste heat pretty far away from anything that cares about it.

        Of course the most heat sensitive part in the system is the laser, but frequency drift isn't a big deal when you a

    • by Jeremi ( 14640 )

      Sure you can back up, just buy two :)

  • by esperto ( 3521901 ) on Friday October 30, 2020 @05:02PM (#60666966)
    First, does anyone know if these drives will be HAMR but also have SMR? Because SMR is the devil's work and should never have been invented.

    Second, the article mentions a speed improvement, but how big is it? 10%, 50%, 100%? Copying huge amounts of data to current drives is already a time-consuming task; if you double the drive's size and increase the speed by just 10%, it will not be a good thing, especially if those drives are in a RAID and need to be rebuilt.
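
    For a sense of the rebuild problem, here's the arithmetic under an assumed sustained rate (250 MB/s is an illustrative figure, not a published HAMR spec):

    ```python
    # Best-case rebuild time: capacity divided by sustained sequential throughput.
    def rebuild_hours(capacity_tb, mb_per_s):
        return capacity_tb * 1e6 / mb_per_s / 3600  # TB -> MB, seconds -> hours

    for tb in (20, 50):
        print(f"{tb} TB at 250 MB/s sustained: ~{rebuild_hours(tb, 250):.0f} h")
    # 20 TB -> ~22 h, 50 TB -> ~56 h of flat-out sequential writing; real RAID
    # rebuilds under load take longer still.
    ```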
    • First, does anyone know if these drives will be HAMR but also have SMR? Because SMR is the devil's work and should never have been invented.

      SHAMR on them. Fortunately doesn't seem to be a thing, yet.

    • by tlhIngan ( 30335 ) <slashdot.worf@net> on Friday October 30, 2020 @07:00PM (#60667360)

      First, does anyone know if these drives will be HAMR but also have SMR? Because SMR is the devil's work and should never have been invented.

      No, you don't need SMR with HAMR. The problem has always been the write head is huge compared to the read head. The smallest head determines the track width, and the read head is at least half as wide as the write head.

      HAMR works like the way the old MO drives worked - by heating up the spot, the magnetic coercivity goes way down. This means even if you have a giant write head, as long as you're not applying too strong a magnetic field, you can write to the tiny heated spot without corrupting the data around it. Since the data is already small, you don't need SMR.

      SMR will be with us for a long time, because it's a very cheap way of nearly doubling the data density (the actual advantage is around 20-30%, because you have to break the drive into zones and you need a low-density CMR landing zone).

      HAMR requires extra stuff - usually a high powered laser - in order to do the heating, while SMR can be done using conventional head mechanisms.

      While there are applications for which SMR is poor, with the right drive command extensions there are applications to which it can be very well suited - applications that involve writing large chunks of data to disk (e.g., DVRs, media recorders, etc.). If you know you're going to write gigabytes of data to increasing sector addresses, SMR can be just as fast, since you're avoiding the landing zone.
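
      A sketch of how "nearly double" track density turns into a 20-30% real gain; the three overhead numbers below are made-up placeholders for the zone and landing-zone costs described above:

      ```python
      # Effective SMR capacity gain after the overheads mentioned above (all assumed).
      shingle_track_gain = 1.9      # assumed: shingled tracks packed ~1.9x tighter
      zone_overhead = 0.25          # assumed: guard bands / zone structure cost
      landing_zone_fraction = 0.12  # assumed: low-density CMR landing/staging region

      effective = shingle_track_gain * (1 - zone_overhead) * (1 - landing_zone_fraction)
      print(f"effective density vs CMR: {effective:.2f}x "
            f"(~{(effective - 1) * 100:.0f}% more capacity)")
      ```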

      • by tlhIngan ( 30335 )

        Sorry, the smallest head determines the minimum track width, while the widest head determines the actual track width in CMR. Shingled recording allows you to shrink the track width and thus store more data.

      • Anyone that thought that SMR was a good idea should be confined to a lonely cabin north of the polar circle.

        • Backups. Git repos that only ever grow. Environmental measuring systems in a factory that record things to keep a safety audit. Log files of large databases or websites that have to keep track of everything and always grow.

          Lots of places where you cannot delete or change data.

          • by Z00L00K ( 682162 )

            I wouldn't really consider it for anything like log files for databases because if the log file writing is too slow then the database engine may put transactions on hold until the log is written.

            And Backups - not even going there, it's a solution that's going to be WAAY too slow.

    • SMR itself isn't bad - if you know what you are getting: increased capacity at the expense of severely reduced sustained-write performance.

      The problem isn't technology, it's the companies behind it. Note companies, because WD, Seagate and Toshiba have all been caught playing the dirty game of not clearly labeling their drives as SMR. This results in people buying SMR drives for purposes they are not suited for, and blaming SMR for all the problems that result.

  • I'd be happy with 1 TB if I knew it wouldn't fuck up after a couple of years. Fuck hard drives these days.
    • 1 TB? Did you mean something bigger? You can buy a 1 TB NVMe drive for $130 [amazon.com]. (Not all NVMe drives have cache, so you need to be careful.)

      If only there was such a thing as ZFS, mirroring, hot-swapping, RAID, etc. /s

    • If you want some resistance to sudden data loss due to individual drive failures, consider getting an old LSI MegaRAID/Dell Perc 6/e or possibly HP P410 or P420 RAID card, and find the cheapest SSDs that will work on it - build a RAID 6 array of e.g. 8x cheap 240GB SSDs. Or if power consumption/space/performance is less important than cost, a bunch of used 250GB HDDs (which can be obtained for approximately the price of an equivalent volume of good fill dirt). One does need to be somewhat picky about drive
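
      For reference, the usable capacity of that suggested array (standard RAID 6 arithmetic: two drives' worth of space go to parity):

      ```python
      # RAID 6 usable capacity: total minus two drives' worth of parity.
      drives, size_gb = 8, 240
      usable_gb = (drives - 2) * size_gb
      print(f"RAID 6 of {drives} x {size_gb} GB: {usable_gb} GB usable "
            f"({usable_gb / (drives * size_gb):.0%} efficiency)")
      # 8x 240 GB -> 1440 GB usable, surviving any two simultaneous drive failures.
      ```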
      • Note: stuff about conventional disks was added later - claim about card power draw applicable to SSDs only.
    • by mauriceh ( 3721 )

      Actually: "Fuck Seagate, mostly."
      Toshiba and HGST drives still have excellent reliability.
      Of course they cost a bit more... but not that much.

      • by Saffaya ( 702234 )

        HGST drives are now made by Western Digital ... Unfortunately.

      • Seagate Exos seems to be pretty decent (and comes with a 5-year warranty) and tends to be a little cheaper than e.g. IronWolf Pro ... of course, these will be larger drives (4+ TB) ... after the whole SMR shenanigans, they're probably what I'll be recommending for purchase should my employer's WD Red (a bit over 4 years) / HGST (going on 3 years) drives start going downhill.
  • Anyone know just how toasty these new toasty boys will get with the "heat" assisted magnetic recording? It sounds like a lot of watts but who knows, maybe it's not.
  • Comment removed based on user account deletion
  • Not exactly what I want to hear for stuff I put in the hold of the plane.

  • Perhaps they will do this.
    The question is: how long will they last?
    Given Seagate's history of poor reliability, I wonder if these will be better or, honestly speaking, how much worse.

  • Not buying Seagate garbage ever again. Toshiba is my go-to company now. Also had good results with Samsung.

    • by xlsior ( 524145 )

      Not buying Seagate garbage ever again. Toshiba is my go-to company now. Also had good results with Samsung.

      Samsung doesn't make hard drives anymore; their HDD division was absorbed by Seagate in 2011.
      There are only three companies left in the world that make spinning-platter hard drives: Seagate, Western Digital/HGST, and Toshiba.

      Solid state storage is also getting consolidated, with Intel selling their storage division to Hynix

    • My latest Seagate lasted 6 months before dying via click of death. That's pretty good by modern standards isn't it? I also have better luck with Toshiba.

      • Check Seagate Exos before writing the company off completely - I've never used them, but probably will when I next need a bunch of spinning rust. They appear to be confident enough in them to apply a 5 year warranty.
        • by Wolfrider ( 856 )

          --Along with HGST, current Seagate Ironwolf NAS drives have a pretty good track record so far. WD screwed themselves with the whole SMR debacle.

          • Indeed, previously I might have thrown RE4s or other WD RE drives in the list of recommendations, but they can go shaft a stack of "bigfoot" platters.
  • Now with 20 and 50 TB drives, what medium do I use to back that up? Two years later, am I back to backing up to the cloud?
    • A second drive, obviously.

      One server in the house and another out in the garage. Use something like SyncBack to synchronize them. Daily for the folders that change frequently. Weekly/monthly for the folders that don't.
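
      A minimal one-way mirror sketch in Python, in the spirit of that setup (the SRC/DST paths are placeholders, and unlike SyncBack this only does a single incremental pass; scheduling is up to cron or similar):

      ```python
      import shutil
      from pathlib import Path

      SRC = Path("/srv/data")         # house server (placeholder path)
      DST = Path("/mnt/garage/data")  # garage server mount (placeholder path)

      for src_file in SRC.rglob("*"):
          if not src_file.is_file():
              continue
          dst_file = DST / src_file.relative_to(SRC)
          # Copy only files that are new or newer than the destination copy.
          if not dst_file.exists() or src_file.stat().st_mtime > dst_file.stat().st_mtime:
              dst_file.parent.mkdir(parents=True, exist_ok=True)
              shutil.copy2(src_file, dst_file)
      ```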
