Data Storage

HAMR Hard Disk Drives Postponed To 2018 (anandtech.com)

An anonymous reader writes: Unfortunately, the hard disk drive industry is not ready to go live with Heat-assisted Magnetic Recording (HAMR). The technology is not yet reliable enough for mass production. Over the years, producers of hard drives, platters and recording heads have revealed various possible timeframes for commercial availability of drives with HAMR technology, but their predictions were not accurate. The current goalpost is set at 2018. While solid state disks based on Flash memory keep seeing rapid improvements, HDDs still kick butt in scenarios where high areal density is more important than ripping transfer speeds. The areal density of HAMR products is predicted to exceed 1.5 Tb per square inch.
  • by Anonymous Coward

    that's a lot of porn per square inch :P

  • HAMR Time! (Score:3, Funny)

    by Anonymous Coward on Monday December 28, 2015 @12:34PM (#51195711)

    Oh wait ... not yet

  • by JoeyRox ( 2711699 ) on Monday December 28, 2015 @12:41PM (#51195765)
    The summary lists HDDs as viable vs SSDs when "high areal density is more important than ripping transfer speeds", but in most applications it's the random access time that matters more, and SSDs are better than HDDs in this regard by several orders of magnitude.
    • by Xenx ( 2211586 )
      Were you trying to make a point? They very specifically say HDDs are better when storage density is more important than transfer speed. Thus, SSDs having faster access times means nothing.
      • HDDs still kick butt in scenarios where high areal density is more important than ripping transfer speeds.

        There is more to SSD than ripping speed, though that is a huge consideration. Denser, slower drives are still useful, but only up to a point: once the amount of data needing transfer grows large enough, you start to run into other bottlenecks. Spinning drives are dying because there hasn't been much improvement over the last 10-15 years (still SATA?) on the one thing that will change in the next couple of years, namely bus speeds and getting data from point A to point B, and as dense as HDDs are promising to be, it may be

        • I think it's helpful to envision hard drives as serving two roles: one for fast, persistent data storage and access (OS, program files, documents) in which you want to prioritize access speed and throughput, and another for mass storage, in which you want as much storage capacity for as little cost as possible. SSDs excel in the first role, while spinning HDDs excel in the latter.

          So, I wouldn't characterize spinning HDDs as "dying"... they're just becoming more specialized, like PCs. The significan

          • RAM-cached storage is not the same as RAM, though they use the same medium. A RAM cache would be great for holding things like NTFS/FAT table information, so that it doesn't need to be read every time the hard drive is accessed. It is a cached (non-working) copy of the hard drive.

        • We haven't seen a standard faster than SATA 3 emerge for hard drives because there is no need for one. It is already faster than the drives are. SSDs are starting to become available with faster interfaces because they can actually use them; current high-end SSDs can exceed the maximum data rate of SATA.
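          A rough sketch of that headroom, using ballpark throughput figures (assumed for illustration, not taken from the thread): SATA 3 tops out around 600 MB/s after encoding overhead, a fast 7200 RPM drive sustains roughly 200 MB/s, and early NVMe SSDs already push well past the SATA ceiling.

          # Ballpark device throughput vs. the SATA 3 ceiling (illustrative figures only)
          SATA3_MAX_MBPS = 600          # ~6 Gb/s line rate minus 8b/10b encoding overhead
          devices = {
              "7200 RPM HDD": 200,      # sustained sequential read
              "SATA SSD": 550,          # saturates the bus
              "NVMe SSD (PCIe x4)": 2500,
          }
          for name, mbps in devices.items():
              print(f"{name}: {mbps} MB/s -> {mbps / SATA3_MAX_MBPS:.0%} of a SATA 3 link")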
      • My point was that the performance tradeoff between SSDs and HDDs is more about random access performance (which connotes random I/O) than it is about transfer speed (which connotes sequential I/O).
    • So... you agree with the author of TFS? They never said anything that contradicts what you said, but there are definitely solutions where storage density is far more important than access speed. Remember, people still use linear tape drives because speed is the least important factor in backup and archival storage.

      • Remember, people still use linear tape drives because speed is the least important factor in backup and archival storage.

        Some people still use pen/pencil and paper and maybe stone tablets ... Sometimes durability and longevity are the most important factors. Just sayin'. Although, to be fair, carving out my Twitter feed in marble is a huge PITA.

    • Not in all cases, or even many. What do I care if the access time is 0.5 seconds longer for my 20+TB file of my research data? I would rather have it sit on one or two drives (with backups OFC) than have it spread across 20+ SSD drives, which, just by the number needed alone, are more prone to failure.

      In this case transfer speed isn't an issue either, as long as it isn't significantly slower than current HDD tech, since no matter what, data analysis is going to take quite a bit of time and it wouldn't real

      • You'll care, because unless you have 20+TB of memory in your system that allows your 20+TB file to be read in its entirety without any rotational latencies, the per-I/O access time differential between an SSD and an HDD will be multiplied by several million I/Os.
      • Not in all cases, or even many. What do I care if the access time is 0.5 seconds longer for my 20+TB file of my research data?

        You may not need instant access to that much data. But then again, if your 1/2 sec longer is multiplied thousands of times a day, five days a week, 4.25 weeks a month ... that is almost 3 hours of wasted time each month, and almost a full week of wasted time each year. My guess is that you're actually not waiting 1/2 second per access, times 1000 a day; it's more likely that you're wasting 3-5 seconds per access, several thousand times a day, but since you aren't measuring, you will never know.

        However, based on my Anecdot
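        A quick sketch of that back-of-envelope math, plugging in the figures above (0.5 s per access, roughly 1000 accesses a day, 5 days a week, 4.25 weeks a month):

        # Accumulated wait from a 0.5 s per-access penalty at ~1000 accesses/day
        penalty_s = 0.5
        accesses_per_day = 1000
        per_month_s = penalty_s * accesses_per_day * 5 * 4.25   # 5 days/week, 4.25 weeks/month
        per_year_s = per_month_s * 12
        print(f"{per_month_s / 3600:.1f} hours lost per month")  # ~3 hours
        print(f"{per_year_s / 3600:.0f} hours lost per year")    # ~35 hours, close to a work week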

        • by Bengie ( 1121981 )
          In my personal experience, high latency for user interactive IO, like opening a file, can be jarring and dramatically reduce my work throughput. I can lose my train of thought if the interactivity is not fluid. It really depends on the context.
        • That is still a different use case. Large data set analysis is always going to be CPU limited: limited by how fast you can cram the data through the CPU power you have available and analyse what you need.

          These drives are being designed to store very large amounts of data, and on release day should have a smaller failure rate than spanning the whole data set across multiple standard HDDs or SSDs. Every drive you add to an array means another percentage of failure multiplied against all of the other failure chances.
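          That compounding works out roughly as follows, assuming independent failures and a made-up 3% annual failure rate per drive (the rate is illustrative, not a vendor figure):

          # Chance that at least one drive in an array fails within a year
          afr = 0.03                        # assumed annual failure rate per drive
          for n in (1, 2, 8, 20):
              p_any = 1 - (1 - afr) ** n
              print(f"{n:>2} drives: {p_any:.1%} chance of at least one failure")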

          • HDDs and SSDs have different performance characteristics; however, the differences in failure rates aren't really all that well known. Yes, if you add drives, you're adding to the number of possible failures, but you're also mitigating the chance of a single point of failure ruining an already bad day. That is why you have backups and redundant copies and disaster recovery plans. The more valuable the data, the more money you'll spend securing that data.

            You can RAID for protection, you can RAID for incr

          • by dbIII ( 701233 )

            That is still a different use case. Large data set analysis is always going to be CPU limited: limited by how fast you can cram the data through the CPU power you have available and analyse what you need.

            It depends - sorts can be I/O limited while filtering, transforming etc is very much CPU bound. Comparisons of data can go either way depending on how much can be kept in memory.

            spanning the whole data set across multiple standard HDD

            That's often what happens and a large array at the other end of a network link c

        • by Agripa ( 139780 )

          Most of my time these days is spent waiting for Java or Javascript garbage collection. I use a 4-drive RAID controlled by an Areca 1210, and most processing throughput is limited by either the CPU or sometimes the PCIe x4 connection to the Areca.

    • by bonehead ( 6382 )

      Actually, if you do more than gaming, there are MANY situations where capacity is far more important than blistering performance.

      For example, on one of my home NAS boxes, as long as performance is adequate to stream 1080p video to my STBs, faster drives offer little to no additional value. Therefore the price/performance of spinning disk is FAR more attractive than SSD. I won't even go into all of the situations in my professional life where wasting money on extreme performance would be flat out irrespons

      • I wasn't arguing that capacity vs performance is not a valid trade-off for applications. I was arguing about what the performance trade-off actually is (access time vs throughput for most applications).
      • Therefore the price/performance of spinning disk is FAR more attractive than SSD.

        HDDs rarely outperform SSDs. I think you're talking about price/capacity of HDD vs SSD. And if my suspicions are correct, this will really start to change in the next couple years (2-3): you'll see >16 TB SSD drives that are cost/TB comparable to slower spindle drives within that time frame.

        As SSD densities increase, the price continues to drop, and the technology continues to improve, the signs are all there: HDDs are at the end of their line. My guess, in 5 years, H

        • by bonehead ( 6382 )

          I think you're talking about price/capacity of HDD vs SSD.

          Yes, that is indeed what I meant to say. Caught the error too late, and there's no longer an edit button here....

          • There has never been an edit button on Slashdot. There is a continue editing button before you click submit.

            • by bonehead ( 6382 )

              Yes, there used to be. That was a long time ago, though. Quite possibly long enough that most of the current population never saw it.

            • by KGIII ( 973947 )

              Heh... I'm pretty sure there used to be an edit button back when you opened the reply in a new window. I'm not sure how long it lasted but I sort of recall the option to edit - I think it was time limited but I do not recall. I even remember some debate as to why it was taken away but I could be conflating that with another site. I'm thinking early 2000s?

              I had an older account back then and I no longer have access to the email nor do I recollect the username but I used to use /. and then took a few years of

        • by fnj ( 64210 )

          if my suspicions are correct, this [price/capacity of HDD vs SSD] will really start to change in the next couple years (2-3)

          SSD fanbois have been saying this for 10 years, and there is still not the slightest sign of it happening.

          Consumer:
          HD [amazon.com] $33.33/TB
          (you couldn't get a 6TB for twice that much a year or two ago)
          SSD [amazon.com] $317.80/TB

          About 10x. OK, so it was 20X or more 10 years ago. Wake me up when it gets below 2X, and I will REALLY pay attention when it gets below 1X.

          Enterprise is similarly dismal. Both HD and S

          • Interesting that you chose the cheapest HDD you could, but a pretty expensive SSD.

            SSDs [newegg.com] are at about $240/TB. That's still pretty expensive, but it's only 7.3x more than HDDs atm.

            More so, your "20x or more 10 years ago" is technically correct, thanks to the "or more". In reality though, a decade ago 512*MB* of flash storage (not even a true SSD) cost $40, aka $80,000 per TB. So the gap has come down from 2400 times more expensive per GB to 7 times more expensive per GB in a decade.

            In fact, just one
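            Taking the prices quoted in this subthread at face value ($33.33/TB for the cheap HDD, $240/TB for the SSD, and $40 for 512 MB of flash a decade ago), the ratios come out like this:

            # Price-per-TB ratios from the figures quoted above (not current market data)
            hdd_per_tb = 33.33                    # $/TB, cheap consumer 3.5" HDD
            ssd_per_tb = 240.0                    # $/TB, cheap consumer SSD
            flash_per_tb_2005 = 40 / 0.5 * 1000   # $40 for 512 MB -> $80,000/TB
            print(f"Flash then: {flash_per_tb_2005 / hdd_per_tb:,.0f}x today's HDD price per TB")
            print(f"SSD now:    {ssd_per_tb / hdd_per_tb:.1f}x today's HDD price per TB")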

            • This year is going to be interesting, when you finally have a 16 TB SSD drive released which is higher capacity and faster than any HDD. Once those are released, the end is just a matter of production ramping up. The other thing most people don't realize is that an SSD has 100,000 IOPS, where a spindle drive is somewhere under 1000 IOPS, even for the best/fastest drive. Those IOPS count. 100 times faster is a big deal. One second on an SSD becomes 100 seconds on a spindle drive (not exactly, but illustrative). Then you have to get th
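              To put that IOPS gap in time terms, using the round numbers above (100,000 IOPS for an SSD, and taking 1,000 as a generous ceiling for a spindle drive) and an assumed batch of one million small random reads:

              # Time to service a batch of random reads at the IOPS figures quoted above
              ios = 1_000_000
              for name, iops in (("7200 RPM HDD", 1_000), ("SSD", 100_000)):
                  t = ios / iops
                  print(f"{name}: {t:,.0f} s ({t / 60:.1f} min)")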

              • by dbIII ( 701233 )
                There'll still be a place for spinning rust with that lower IOPS in places like storage of large files with few users (e.g. 10 instead of hundreds) until the price of SSDs at large volumes goes down. So I don't dispute that they are nice, just not always worth it for now.
                It's similar to how you can solve just about any corrosion problem by coating things in gold. Gold costs, but if the price is dropping you can use it more. SSDs used to be the gold plated solution and now they are only that at the large volu
                • until the price of SSDs at large volumes goes down.

                  That is what I am suggesting. However, I also believe that we are 2-3 years (short term) away from that. Once you see multiple vendors each making 16 TB SSDs (first this year), and knowing that HDDs aren't likely to reach that size anytime soon, then you'll realize that HDDs are on the cusp of disappearing altogether. This will be especially true if the MTBF of SSDs increases well beyond spinning drives, simply because they do not "wear out" and start failing after 42 months of non-stop use.

                  It isn't just about

          • SSD fanbois have been saying this for 10 years, and there is still not the slightest sign of it happening.

            Isn't 10 years a small exaggeration? I think the first real mainstream SSD was in 2009 (Intel X25, 80GB).

        • by mlts ( 1038732 )

          If HDD capacities could go up, but HDD makers pivot from just shoveling more bits into less space to redundancy and reliability, HDDs will take the niche that tapes have now. Especially if an HDD maker could guarantee the archive life of a tape, perhaps by having a dual-head mechanism (I remember seeing some older drives which actually had two sets of heads, each independent of the other and in an active/active configuration). Doing this would keep HDDs around as backup media or media to stick in the NAS (for

          • by dbIII ( 701233 )

            Of course, there is the slowdown once the SSD winds up full... but done right, it can be a way to help with disk I/O for all but the worst sustained random writes.

            With a well designed filesystem that's what memory is for (with an optional extra of using an SSD as cache too if you want but that is a lot slower than memory).

    • SSDs are better than HDDs for GB/cm3 already, but consumers for the most part don't want to buy $8000 SSDs. (Large companies/data centers are already buying them as fast as they can be built).

      Laptop (2.5") drives are limited to about 2TB, and they're 15mm thick. SSDs are hitting 2TB using a 7mm Z-height..

      Really, the only play for HDDs is $/GB, which already has nearly evaporated at the low end when you consider all the other contributors to cost in a system.

    • by dbIII ( 701233 )
      It's the time to get the entire file that matters, i.e. the sum of access time and read time, so with files beyond a tiny size the difference is not "several orders of magnitude" or even a single order of magnitude compared with an array of spinning disks. SSDs are fast, but the comment doesn't make any sense unless it's referring to RAM instead of storage.
      For now spinning platters of rust are cheaper for large volumes and not a lot slower in use - but the SSDs are catching up, and having a lot of memory is a lot better than both
      • Let's take a 64KB file, not small by any means. The fast HDDs presently do just over 200 MB/s sustained for reads. On a 7200RPM HDD the average rotational latency will be 4.17ms. Let's be conservative and say the average seek time only adds 3ms to that, although it's often higher...let's round the sum down to an even 7ms. That's 7ms before the drive starts transferring any data. At 200 MB/s and assuming the entire file is in a contiguous block on the transfer time for the 64KB file will be 312.5us. So the t
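        Filling in that arithmetic with the parent's HDD figures, plus an assumed SATA SSD with ~0.1 ms read latency and ~500 MB/s throughput (those SSD numbers are assumptions, not from the comment):

        # Rough service time for a single 64 KB read, using the figures discussed above
        FILE_BYTES = 64 * 1024

        def hdd_time(seek=3e-3, rotation=4.17e-3, mb_per_s=200):
            return seek + rotation + FILE_BYTES / (mb_per_s * 1e6)

        def ssd_time(latency=0.1e-3, mb_per_s=500):   # assumed SATA SSD figures
            return latency + FILE_BYTES / (mb_per_s * 1e6)

        hdd, ssd = hdd_time(), ssd_time()
        print(f"HDD ~{hdd * 1e3:.1f} ms, SSD ~{ssd * 1e3:.2f} ms, ratio ~{hdd / ssd:.0f}x")

        Under these assumptions the small read is dominated by the ~7 ms of positioning time, so the SSD comes out a bit over one order of magnitude ahead at this I/O size.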
        • by dbIII ( 701233 )

          Let's take a 64KB file, not small by any means

          That's where we very strongly disagree so you probably don't really get what I wrote about. Even "blank" MS Office documents end up larger than that due to the size of the template file.

          • So I'm able to do complete lifecycle calculations of I/O execution times yet I'm not able to understand the difference between what you subjectively call small vs larger I/Os?
            • by dbIII ( 701233 )
              Consider doing stuff by the GB and you'll get what I meant. Spinning stuff is used for large files now, SSDs won on the small desktop with a web browser.
        • by dbIII ( 701233 )

          An order of magnitude (1st) is defined as 10x.

          So even your tiny file is not "several orders of magnitude" faster to access, since that implies 1000x or maybe 100x at a stretch if used incorrectly instead of "a couple of orders of magnitude".
          "Several orders of magnitude" faster - that's what a ramdisk or keeping the stuff in memory some other way is for.

          • Double the I/O size and the SSD is still an order of magnitude faster. Reduce the I/O size to 4KB or 8KB for more random DB workloads or pagefile operations or filesystem metadata fetches and we're at 2 orders of magnitude faster.
            • by dbIII ( 701233 )
              Double a tiny file is still a tiny file. We are talking past each other and you don't seem to understand what I wrote.
    • by tlhIngan ( 30335 )

      The summary lists HDDs as viable vs SSDs when "high areal density is more important than ripping transfer speeds", but in most applications it's the random access time that matters more, and SSDs are better than HDDs in this regard by several orders of magnitude.

      There are plenty of cases where random access isn't as big an issue, and density is.

      First off, yes, random access is good - for random access patterns. Like an OS drive. And games that load lots of little files randomly.

      But there are cases where user

    • by Agripa ( 139780 )

      If I need to store more data than will fit onto an SSD, I cannot just wait for the process to finish no matter how much faster the SSD is.

  • by U2xhc2hkb3QgU3Vja3M ( 4212163 ) on Monday December 28, 2015 @12:42PM (#51195777)

    Of course it's not reliable, who the hell thought that using a HAMR on hard drives was a good thing? That's what I use to destroy hard drives!

    • by Agripa ( 139780 )

      They might be more reliable; heat-assisted recording allows for "harder" magnetic recording media. Magneto-optical recording technology uses heat-assisted magnetic recording to good effect and is highly reliable. Hard drives have the advantage of using a closely spaced magnetic read head, which allows for much higher density than would be achieved with only optical reading.

  • Areal density... (Score:5, Informative)

    by MachineShedFred ( 621896 ) on Monday December 28, 2015 @01:04PM (#51195921) Journal

    Dear Dice Editors:

    If you are going to post a summary which says what *might* be possible in the future, it's helpful to know what the current state of the art is. For example, if you are going to have a summary that says the areal density of HAMR products is predicted to exceed 1.5 Tb per square inch, it would be nice to know that Seagate is already shipping a drive with 1.34 Tb/in^2, according to Wikipedia.

    As it turns out, context matters when giving statistics; otherwise there is no reference to know if the statistic means anything. Given what I found in 30 seconds of using Google, that would mean that HAMR is expected to yield a ~12% increase in density over the current state of the art.

    You're welcome.

    • That quoted Seagate density is for SMR drives, which use overlapping tracks that are considerably slower for many I/O workloads vs traditional recording techniques like MR/PMR, so the areal densities aren't comparable for general-use HDD applications. In other words, context does matter a great deal :)
  • Stupid writing (Score:4, Interesting)

    by fnj ( 64210 ) on Monday December 28, 2015 @02:08PM (#51196433)

    Areal density doesn't mean shit. Volumetric density is what counts.

    • My guess is that SSDs would beat HDDs on areal density anyway, seeing as how they are stored in chips that take up nearly no space. As well, each of those chips is mostly ceramic and metal leads, so the actual areal density of the storage would be even higher if you removed all that packaging.

    • by Agripa ( 139780 )

      Areal density matters for magnetic recording on a surface because all of the other costs are largely fixed.

      For SSDs, the cost per bit is what matters. I could happily devote more space to bulk storage but hard drives are much cheaper than SSDs except where access time is important.

  • The whole thing hard drives are counting on right now is cramming more data into a device, and at a lower cost, than SSDs. SSDs have yet to stop their progress up the Moore's Law ladder, and hard drives have never been on it. At some point in the not too distant future, cost might be the hard drive's only advantage. Not long after that, all they will have to count on is "SSDs fade if you put them on the shelf too long". The market for archival hard drives is fairly limited. HAMR was supposed to postpone the

    • I would go as far as to say that the market for true archival hard drives is non-existent. I've got a couple of old parallel-ATA drives that have stuff on them, but I haven't tried to plug them in and retrieve anything, and I don't have any current system that I could even plug them into without buying a controller. Now imagine trying to do that with SCSI-3 or some such that was really only used in enterprise and workstations, where you'll have to search the ends of the earth just to get a host adapt

      • by Mal-2 ( 675116 )

        I didn't say archival hard drives aren't a market, I said it's fairly limited. By that, I mean the volume is an order of magnitude less than the market for hard drives in general right now. Also, it's not hard to find external hard drive boxes or cables with PATA even now. (How I got modded "Troll" is beyond me. I haven't said anything I don't believe to be true, or in a manner designed to irritate people.)

        To the poster below: I'd rather archive to a hard drive, moving parts and all, and put it on a shelf t

    • by dbIII ( 701233 )

      The market for archival hard drives is fairly limited

      As it should be. Nice shiny polished surfaces in close contact diffuse together over time, and the lubricants needed for a high-speed spindle dry out. Even doing nothing, that drive is not going to have a good chance of being fully intact in the long term, mainly because they were never designed to last.

  • by Behrooz Amoozad ( 2831361 ) on Monday December 28, 2015 @03:28PM (#51196977)
    It will come with hurd pre-installed.
  • by Peter Desnoyers ( 11115 ) on Monday December 28, 2015 @05:46PM (#51197963) Homepage

    As another commenter pointed out, the 1.5Tbit/in^2 number in the posting (which is taken from the original article) is pretty bogus. Seagate's 2TB 7mm 2.5" drive has an areal density of 1.32Tbit/in^2, and it's probably a safe bet that they (and WD) can wring another 15% density improvement out of SMR technology in the next year or two.

    For those commenters bemoaning the fact that the highest density drives today are SMR rather than "regular" drives, get over it - the odds of conventional non-HAMR, non-shingled drives getting much denser than the roughly 1TByte per 3.5" platter we see today are slim to none:

    To get smaller bits, you need a smaller write head. That smaller write head has a weaker magnetic field. The weaker field means the media has to be more easily magnetizable (i.e. has lower coercivity). The lower coercivity media needs to have a bigger grain size (size of the individual magnetic domains), so that grains don't flip polarity too often due to thermal noise.

    Since a bit can't be smaller than a grain, that means the smaller your write head is, the larger your minimum bit size is. Eventually those two lines cross on the graph, and it's game over.
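    The grain-size constraint above is usually written as a thermal stability requirement (a standard rule of thumb from the recording literature, not something stated in this comment): the anisotropy energy stored in each grain has to dwarf thermal energy, roughly

        K_u * V / (k_B * T) >= ~40-60

    where K_u is the anisotropy energy density (which tracks coercivity) and V is the grain volume. Lower-coercivity media means lower K_u, so V has to grow to keep that product large enough; HAMR dodges the trade-off by writing on high-K_u media while it is briefly heated.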

    Two ways of getting out of this are SMR (shingled magnetic recording) and HAMR (heat-assisted magnetic recording):

    SMR - stop making the write head smaller, but keep making the bits smaller. Overlap tracks like clapboards on the side of a house (where'd this "shingle" nonsense come from?), allowing small bits with large write heads. Of course this means that you can't re-write anything without wiping out adjacent tracks, which means you need something like a flash translation layer inside the drive, and because of that, random writes might be *really* slow sometimes. (I've seen peak delays of 4 seconds when we're really trying to make them behave badly.)

    HAMR - Write your bits on low-coercivity media with a tiny, wimpy head, and store them on high-coercivity media with tiny magnetic grains. How do you do this? By heating the high-coercivity media with a laser (say to 450C or so) to reduce its coercivity to reasonable levels, then letting it cool down afterwards. But you need a big laser (20mW?) on each head, which causes a whole bunch of problems. Which is probably why they're delaying them.

    Oh, and you can overlap tracks on HAMR drives, creating an SMR HAMR drive, with even higher density but the performance problems of both technologies. Which they'll probably do as soon as HAMR hits the market, because with today's SSDs the market for fast HDDs is dying a very quick death.

    • by dbIII ( 701233 )

      The lower coercivity media needs to have a bigger grain size (size of the individual magnetic domains)

      Not quite the same but the grain size is an upper limit on the size of the magnetic domain. In the grains next door all the atoms are lined up in a different direction after all.

  • I'm kind of surprised that the hard drive industry has not created bigger (i.e. size, not just capacity) drives. It seems that a large portion of hard drives these days are going into huge arrays in data centers. All the data that needs super-quick access times is moving to SSD. The multi-TB near line data is staying with HDD storage. It seems to me that the industry could put out a drive with something like 5 inch platters; 20 platters per drive; a really good motor; redundant heads per platter; and an ext
    • We still have good old 5.25'' drive bays in full-size cases, so I'm sure many a consumer might also like one. OTOH, arrays of smaller drives are nice in case one of them breaks.
    • "I'm kind of surprised that the hard drive industry has not created bigger (i.e. size, not just capacity) drives."

      They did. They were fragile, slow, had insanely poor seek times and were - frankly - highly unreliable in normal consumer environments, so Bigfoots died out in the late 1990s.

      Spinning a platter that size at any appreciable speed comes with its own sets of problems both in materials stress and bearing suitability (foil and other bearings are not suited to high twisting moments, which means that l

    • by Agripa ( 139780 )

      Quantum tried that with their Bigfoot line of drives, and it was not economical even with the surplus 5.25" infrastructure and supply chain purchased at low cost. There are also mechanical problems with doing this: the larger platters require more power to spin at a given speed, and vibration, which affects tracking, becomes a greater issue with the heads further from the spindle.

  • "HDDs still kick butt in scenarios where high areal density is more important than ripping transfer speeds"

    I recently installed a bunch of 4TB SSDs in a server - yes, for ripping transfer speeds but more importantly because you couldn't get 4TB rotating media in enterprise 2.5" format until very recently (and those drives are 12mm thick, vs SSDs being 9mm, which is important for airflow when the 12mm drives draw 3 times as much power)

    By the time you can get 4TB 9mm spinning drives, those 4TB SSDs will be 8
