
Microwave Tech Could Produce 40TB Hard Drives In the Near Future (gizmodo.com)

Western Digital has announced a potential game changer that promises to expand the limits of traditional HDDs to up to 40TB using a microwave-based write head, and the company says it will be able to bring the technology to the public in 2019. Gizmodo reports: Western Digital's new approach, microwave-assisted magnetic recording (MAMR), can use the company's existing production chain to cram a lot more storage onto a 3.5-inch disk. In a technical overview, Western Digital says it has managed to overcome the biggest issue with traditional HDD storage -- the size of the write head. These days, an average hard drive maxes out in the 10-14TB range. But by integrating a new write head, "a spin torque oscillator," microwaves can create the energy levels necessary for copying data within a lower magnetic field than was previously possible. There's a more thorough white paper for those who want to dive in. According to Western Digital, MAMR has "the capability to extend areal density gains up to 4 Terabits per square inch." By the year 2025, it hopes to be packing 40TB into the same size drive it offers today.
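As a rough sanity check, the quoted areal density does land in the right ballpark for a 40TB 3.5-inch drive. In the Python sketch below, the recording-band radii and platter count are illustrative guesses, not Western Digital's specifications:

    # Back-of-envelope: does 4 Tbit/in^2 plausibly give a 40 TB 3.5" drive?
    # The recording-band radii and platter count are guesses, not WD specs.
    import math

    AREAL_DENSITY_TBIT_PER_IN2 = 4.0      # WD's quoted target
    INNER_R_MM, OUTER_R_MM = 22.0, 46.0   # assumed usable recording band
    PLATTERS = 5                          # assumed platter count

    band_in2 = math.pi * (OUTER_R_MM**2 - INNER_R_MM**2) / 645.16  # mm^2 -> in^2
    tb_per_surface = AREAL_DENSITY_TBIT_PER_IN2 * band_in2 / 8     # Tbit -> TB
    total_tb = tb_per_surface * 2 * PLATTERS                       # 2 surfaces per platter
    print(f"~{total_tb:.0f} TB")  # -> ~40 TB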
This discussion has been archived. No new comments can be posted.


  • C'mon, guys, can't you even be troubled to proofread the very first sentence?

    • by thegarbz ( 1787294 ) on Saturday October 14, 2017 @03:10AM (#55367171)

      Give them a break. Every so often everyone accidentally a whole word.

      • by tomxor ( 2379126 )
        It's the internet gremlins! They a word and then poop them out in some other random part of the eat sentence. Gremlins I tells ya!
    • The bigger news here is that 2025 is the near future.
    • Came here to say the same thing. I can picture BeauHD asking for a raise and his boss saying, "lol, what? You know you're really shitty and lazy at your job? Grade 8 students put in more effort than you and they don't get paid".
  • Few people care (Score:4, Interesting)

    by should_be_linear ( 779431 ) on Saturday October 14, 2017 @02:28AM (#55367093)
    Above 1 TB, only geeks and IT companies care. Just like it was the case for CCDs above 10 megapixels. At 1 TB the disk is a "solved" problem for the general audience, and the things that matter now are performance and durability.
    • On the plus side it'll let companies host video and backup services much more cheaply and reliably. On the minus side, it'll let authoritarian governments maintain databases of effectively infinite size for every single citizen cheaply and reliably.

      One way or the other, it impacts everyone on the planet.

    • Home security cameras gobble up HDD space if you want archival footage for slow-developing problems or a belated investigation.
    • by evanh ( 627108 )

      Anyone that does more than mindlessly tap vaporous bubbles cares.

    • by avandesande ( 143899 ) on Saturday October 14, 2017 @02:45AM (#55367125) Journal
      With higher density you get better performance, so yeah, it matters to most people.
      • Consumers who care are using SSDs already.

        Higher performance -- until you have to seek. Or if you aren't the only one using the drive. Interface speeds do not keep up with capacity increases.

    • Games are coming out at 100GB or more now. The FF7 remake is going to be 180 GB...
    • by Kjella ( 173770 )

      Above 1 TB, only geeks and IT companies care. Just like it was the case for CCDs above 10 megapixels. At 1 TB the disk is a "solved" problem for the general audience, and the things that matter now are performance and durability.

      Not for HDDs; the "high performance" market has all but disappeared except for a few hybrid SSHDs for laptops. Even failure rate, for consumers, amounts to "it can fail"; they don't have enough drives in a RAID setup or whatever to care about the statistics. At most they have an SSD for performance and an HDD for bulk storage. And the enterprise typically has this in some kind of SAN or storage server to handle failures; unless it's so bad that the failure rate works out to a $/TB difference, they don't care. So the HDD market

      • On an operating system with working symlinks, you can install part of a game on SSD and part on HDD. It's only amateur-hour Windows where this is still a problem.

        • by Kjella ( 173770 )

          On an operating system with working symlinks, you can install part of a game on SSD and part on HDD. It's only amateur-hour Windows where this is still a problem.

          Actually, Windows 7+ has symbolic links. But while that's technically true, you'd still have to identify which files go where, redo it every time you install the game, and if an update changes any file paths or folder structures it might not stick. Steam could push a standard split where publishers could put up to, say, 10% of the installation files in a folder marked for acceleration. Make a 20GB game? Max 2GB goes in that special folder. Of course you could pick that both folders are the same, all 20GB on
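    A minimal sketch of that split-install idea in Python; the paths and the "assets" folder name are hypothetical, and on Windows creating the symlink needs elevated privileges or Developer Mode (or an NTFS junction instead):

        import shutil
        from pathlib import Path

        ssd_install = Path("/ssd/games/SomeGame")       # hypothetical install dir
        hdd_bulk = Path("/hdd/game-overflow/SomeGame")  # hypothetical overflow dir

        # Move the large, rarely-read asset folder to the HDD...
        hdd_bulk.mkdir(parents=True, exist_ok=True)
        shutil.move(str(ssd_install / "assets"), str(hdd_bulk / "assets"))

        # ...and leave a symlink behind so the game still finds its files.
        (ssd_install / "assets").symlink_to(hdd_bulk / "assets", target_is_directory=True)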

          • Also, one has to be _very_ careful how new data is incorporated to avoid breaking symlinks. Tools like "cp" or "tar" in the Linux and UNIX world will normally copy content on top of symlinks, and write changes to the target of the symlink. "rsync" will not, nor will any operation that copies a temporary file in place and moves it to replace the previously symlinked file.

            I've had extensive difficulty with people using symlinks carelessly to move bulky content elsewhere, then wondering why they discovered dif

            • Rsync can do pretty much whatever you want regarding symlinks. There are three different command line options. Since symlinks may point outside of the directory you're copying, and may be either relative paths or absolute paths, the "right" behavior is situation dependent. Rsync lets you choose what is right for your situation.

              • You've a very good point that "rsync" provides options to handle symlinks differently. But those options are aimed at the correct replication of a symlink on the source side. The transfer of content on top of an existing symlink normally breaks the symlink on the receiving side, unless it is a matching symlink.

                If you've seen a way to get it to consistently transfer plain file content from the resource side, on top of an existing symlink, leave the symlink untouched and publish the content to the target of t

                • So the source is NOT a symlink. You want the destination to be a symlink, and you want it to copy from src/a/file to destination/b/somewhere/file? So the file contents end up somewhere totally different than where they are on the source, based on a previously existing symlink on the destination?

                  If I'm understanding you correctly, you can probably achieve the same goal by trading the symlink for either a hard link or a bind mount. If the symlink points to a directory, use a bind mount instead. If the syml

                  • I'm afraid that rsync normally deletes the local file and replaces it with the new file, doing the replacement either before the complete delivery (for certain options) or after completed delivery to a temp file. That breaks the hard link. Rsyncing hardlinks is quite tricky if the hardlink exists on the _target_ filesystem, and not on the source filesystem, and does not happen if the hardlink leads outside the target rsync directory. "bind mounts" do not work for individual files in Linux filesystems, only for

                    • > I'm afraid that rsync normally deletes the local file and replaces it with the new file

                      --inplace

                      >. "bind mounts" do not work for individual files in Linux filesystems

                      That's what I said. I said "if you're symlinking to a directory, consider bind mounts"

                      > and does not happen if the hardlink leads outside the target rsync directory.

                      Hard links don't lead to any directory. Hard links (aka file names) lead to disk blocks. We call them "hard links" when two or more different file names happen to lead

                    • "--inplace" breaks symlinks and hardlinks. I just tested it under CygWin and under a current Linux.
                      "--copy-dest" simply replicates unchanged files. That is not part of the "copying content from elsewhere on toop of symlinks and making sure the conent is copied to the target of the symlink that I was trying to explain.
                      "--backup" doesn't help the issue. The backup exists and you can derive the old symlink to script around the issue. You can also do a "--dry-run" and parse that to deduce what needs to be synce

                    • Goodness, my typing was horrible in my response. It detracted. The core of the message should stand, but I do need some sleep. If the danger of relying on rsync to consistently copy non-symlinked source content to local symlinked content, and expecting the changes to propagate reliably to the symlink target, is unclear, I'll revisit the issue.

                      Using symlinks to place content on another filesystem is useful. It needs to be handled with caution.

                    • I pretty much told you how to do it, and marked that with "(this is a big hint)". Do you want to solve your problem, do you want to know how we do it, or do you want to keep arguing that it can't be done?
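    To make the behavior under dispute concrete, here is a small Python demo (POSIX semantics, made-up file names, run in an empty directory): writing through a symlink updates its target, while the temp-file-plus-rename pattern rsync uses by default replaces the symlink with a plain file.

        import os
        from pathlib import Path

        Path("target.dat").write_text("old")
        os.symlink("target.dat", "link.dat")

        # Writing *through* the symlink: the link survives, the target changes.
        with open("link.dat", "w") as f:
            f.write("new")
        assert Path("link.dat").is_symlink()
        assert Path("target.dat").read_text() == "new"

        # Delete-and-replace (temp file renamed over the destination):
        # the symlink is gone and the target no longer receives updates.
        Path("tmp.dat").write_text("newer")
        os.replace("tmp.dat", "link.dat")
        assert not Path("link.dat").is_symlink()
        assert Path("target.dat").read_text() == "new"  # unchanged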

      • Some games have ridiculously long load times regardless of whether you put them on HDD or SSD; whatever they're waiting for, it's not the disk. It's a bit annoying that games that take 50GB+ can't split their assets up over two disks for fast/slow access, but it's not going to change, so whatever: if you play it a lot, make room on your SSD and put the more rarely played games on HDD if you run out of space.

        ...because most times it's not the disk. It's the fucking "always online" bullshit, together with horrible optimization.
        Game loads and grinds to a halt waiting for some crap server to respond to a shitty security check. When that happens 100 times during load... there's your performance bottleneck right there. That's why many pirated games load faster than their "official" variants.

    • The IT department of a regular business trying to back up a VMware install will care, as that usually goes to a bulk storage box.

      Demand for increased storage will build; the question is how fast. People collecting 4K content can use up that space easily. 40T is speculated by 2025, which will be well into the 4K transition. 10 Gb Ethernet is getting cheaper. If that hits consumer price levels, it'll be easier for average people to handle higher storage volumes. Faster video cards and high-rez target game resolutio
    • by sensei moreh ( 868829 ) on Saturday October 14, 2017 @05:38AM (#55367373)

      Above 1 TB, only geeks and IT companies care.

      And 640K of memory should be enough for anybody.

    • Above 1 TB, only geeks and IT companies care.

      Even fewer people don't use GMail, Dropbox, and YouTube.

    • by guruevi ( 827432 )

      1TB is surprisingly small these days. Most people I know have ~500GB in home videos and pictures alone, let alone the media they may still have for iPods and similar devices.

      2-4TB is right now the range most "consumers" buy hard drives in. Not because of availability, but because of necessity.

    • Yeah sure. No one needs 1TB.

      Except for those people who are taking 1080p/60 videos on their mobile phones, cool kids in the street videoing themselves doing cool shit on their BMX bikes in 2160p with their knock-off GoPros, people who buy either the cheapest DSLR to snap photos of their babies at 30MB per button press or a more expensive SLR which can weigh in at closer to 100MB, or maybe just someone who has more than a couple of games installed (Doom weighs in at 80GB minus DLC, as does Deus Ex, hell

    • While you’re right about consumers generally not caring about or needing this, you shouldn’t be so dismissive. Anandtech has a nice graph showing the projected split between SSDs and HDDs in enterprise over the next few years [slashdot.org], and it makes it pretty clear that this sort of technology will not only remain relevant, but will, in fact, remain dominant for the foreseeable future.

      Of course, for those of us building NASes or otherwise working with large files at home, cheap storage is still important.

    • Issue #1:
      The game called Shadow of War has a disk footprint of 97 GB. Doom occupies 45-50 GB. The latest Gears of War occupies ca. 100 GB, and Ark: Survival Evolved can balloon to over 150 GB with a few mods and maps installed. Battlefield 1: 50 GB. GTA5: 65 GB. Mafia 3: 50 GB. Far Cry 4: 34 GB.

      Get 15 such games and your 1 TB drive explodes.

      You could argue "gamers are not a general audience"; maybe, but if you ever want to start playing new games, you'd appreciate larger HDDs.

      Issue #2: If you get married with a we

    • by darkain ( 749283 )

      Considering that video games are now in the 50GB+ range, 1TB doesn't seem like so much total storage anymore.

    • by jedidiah ( 1196 )

      > Above 1 TB, only geeks and IT companies care.

      So only "geeks" do home video? You sound like fan of that company that wanted to "enable" people but really only seek to limit them terribly now.

      Just my games take up 400G and I'm not even a Windows user.

    • by AHuxley ( 892839 )
      People with 4K and 8K digital video care. People with a movie that's had every frame converted to 4K care.
      Media projects and even short 4K and 8K movie clips need more space just to work on.
    • by AK Marc ( 707885 )
      At 1 TB, a game console can hold about 10 high-res games and no multimedia. There are lots of people who would like to see more.
  • Comment removed (Score:4, Insightful)

    by account_deleted ( 4530225 ) on Saturday October 14, 2017 @02:43AM (#55367121)
    Comment removed based on user account deletion
    • by Kjella ( 173770 )

      Some of you are missing the ramifications of this. Even though this is magnetic media it will drive down the cost of cloud storage. Right now it is cheap, but not that cheap. This could make it feasible for everyone to store all of their data in the cloud for pennies a year... encrypted of course.

      Archive storage is simple enough, but is there any such service that has an open source client and lets you set an AES key so it can be sorta like a remote mounted encrypted container? Or are the sync tools so smart you can create a big container in a sync'd folder and it'll sync just the bits that change? Because I don't trust anything that's in the hands of Apple/Google/Dropbox to be truly private, in fact we know many people want the cloud provider to provide integration with apps or sharing or whatever

    • Some of you are missing the ramifications of this. Even though this is magnetic media it will drive down the cost of cloud storage. Right now it is cheap, but not that cheap. This could make it feasible for everyone to store all of their data in the cloud for pennies a year... encrypted of course.

      I don't think cost is the issue here. First, I don't think people are eschewing cloud storage for cost reasons alone; with Backblaze at $0.005/GB/month and Amazon and Microsoft both under a nickel a gig even for their highest tier, few people are saying "too expensive". Most of the issues have more to do with either principle (i.e. not wanting their data on someone else's hard drive), bandwidth (10 TBytes transferred over a 10 Mbit/s upload pipe... grab a Snickers...), or latency (video editing in The Clou
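    For the bandwidth point, the arithmetic comes out to roughly a quarter of a year rather than a candy-bar break:

        # How long does 10 TB take over a 10 Mbit/s upload pipe?
        data_bits = 10e12 * 8            # 10 TB in bits
        rate_bps = 10e6                  # 10 Mbit/s
        days = data_bits / rate_bps / 86400
        print(f"~{days:.0f} days")       # -> ~93 days, ignoring protocol overhead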

      • > Further, I'd argue that I doubt most cloud storage companies list "disk storage" as their primary expense

        40TB drives would mean the cloud providers need 90% fewer servers. A tenth as many servers means the datacenter can be 1/10th the size. 90% fewer servers means 90% less tech time of employees going around swapping out bad drives, running cables to new servers, etc. Basically ALL the costs other than marketing and internet bandwidth are a function of the number of drives.

        Looking at it another way,

    • Actually you raise an interesting question here that I hadn't really thought about until now: how much of the cost of cloud storage is for the actual storage? That is to say, if the cost of HDDs miraculously were $0, how much would Dropbox, Amazon cloud, etc. cost? How much goes into maintenance/labor, encryption/security, server upgrading, etc.?

      • Larger drives means fewer servers per terabyte. Fewer servers means all costs drop. The dollar cost of the bare drives themselves is a fairly small portion of the overall costs, so *cheaper* drives don't make a huge difference, but *bigger* drives make a big difference.

  • Just imagine a 40TB HDD starting to develop bad sectors... and then RAID arrays crashing... I really can't handle even thinking about it. I think that the use of tape backup systems for those pesky 10-14 TB disks will soon become mandatory in every home.

  • by Anonymous Coward

    How much porn can I get on that?

  • Curious to know - prices seem to have stagnated; drops have really slowed of late.
    Picked up 5TB disks for $200 US nearly 3 years ago. There are occasionally deals which beat that, but overall it's still the normal price, sadly.

    • by guruevi ( 827432 )

      As always, the drops will come when the new tech arrives. 10TB drives are about half the price (~$350) now as they were when they came out (~$600-800). A 5TB drive is now at the $100-150 price point, so it's dropped by 30-50%.

    • Those high-capacity HDDs are currently helium-filled. With MAMR, one day they won't need to be. Then prices will drop.
      • by Khyber ( 864651 )

        No, even with MAMR they'll still need to evacuate the drive case and fill it with helium because you can't risk water molecules in the enclosure attenuating the microwave emissions from the heads and causing bad data writes.

  • This is an impressive density, but my question is whether drive performance increases, and what kind of bus the drive will have.

    SATA 3.2 claims 16 Gbit/s, but my guess from glancing at the other 3.2 features is that it's mostly a flash-oriented spec tied to M.2 slots, and that ordinary SATA ports would still be SATA 3.0 @ 6 Gbps.

    Without a bus and drive combination capable of moving 40 TB in a reasonable amount of time (4-6 hours), these drives will be a novelty and not useful i
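    For scale, a quick calculation of what moving a full 40 TB in that 4-6 hour window would actually require (protocol overhead ignored):

        # Sustained throughput needed to move 40 TB end to end.
        CAPACITY_BITS = 40e12 * 8
        for hours in (4, 6):
            gbps = CAPACITY_BITS / (hours * 3600) / 1e9
            print(f"{hours} h -> {gbps:.1f} Gbit/s")
        # 4 h -> 22.2 Gbit/s; 6 h -> 14.8 Gbit/s.
        # SATA 3.0 tops out at 6 Gbit/s (~550 MB/s usable), and a mechanical
        # drive sustains far less than its interface anyway.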

    • by guruevi ( 827432 )

      The bus is fast enough, but there is a limit to the mechanical motion. You can make the drive spin faster (10k, 15k) and the arm move faster, but at some point the forces involved become an issue (so you end up with 10k, 15k drives only being in 2.5" packages). So to answer your questions: no, there won't be any significant upgrade in speed, and yes, the rebuild times for these will be tremendous and you can't really think about your data as "RAID sets" anymore, that model is quickly becoming outdated. You h

      • by Khyber ( 864651 )

        "so you end up with 10k, 15k drives only being in 2.5" packages"

        You say that as I look at all these 3.5" 15KRPM Ultra-Wide SCSI drives sitting in my drawer.

        • by swb ( 14022 )

          and yes, the rebuild times for these will be tremendous and you can't really think about your data as "RAID sets" anymore, that model is quickly becoming outdated. You have to think about "storage nodes" as a single, really large hard drive.

          A distinction without a difference. Whether the data is made device-redundant by some kind of RAID system or some kind of network copying, drive performance figures significantly in restoring redundancy after a node failure.

          I think the value of these drives is dubious if their redundancy rebuild rate is measured in days due to read/write rates not scaling. They may have niche use cases (ie, containing a complete copy of some large storage quantity on a single medium) but I'd also worry that the density w

    • Drive capacity and drive performance improvements have always been different lines on the graph. Both lines go up, but capacity is on a steeper curve. It has always been much cheaper and easier to double the capacity of a hard drive than it has been to double its performance. The same is true of SSD. A 1TB SSD is not twice as fast as a 512GB SSD. This means that with every generation of all kinds of storage devices, it takes longer and longer to copy all the data on one (or back one up, or rebuild a RAID, o
      • by swb ( 14022 )

        Most storage systems I've worked with either strongly advise or outright require drive sizes > 1 TB to use a double parity system like RAID-6 due to the lengthy rebuild times associated with large individual members.

        Flash performance is so far ahead of its bulk capacity, though, that capacity can scale without performance appearing to be a factor in restoring redundancy.
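    A rough sense of those rebuild windows, assuming the member can be streamed at typical HDD sequential rates; the 150-250 MB/s figures are assumptions, and a busy array will do worse:

        # Minimum time to read or write one full 40 TB member during a rebuild.
        CAPACITY_MB = 40e6
        for mb_per_s in (150, 250):
            hours = CAPACITY_MB / mb_per_s / 3600
            print(f"{mb_per_s} MB/s -> ~{hours:.0f} h")
        # 150 MB/s -> ~74 h; 250 MB/s -> ~44 h, best case with no seeks.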

  • Comment removed based on user account deletion
  • So about a 3x capacity increase in 7 years?
    Doesn't sound particularly ambitious, nor like 'the near future'.

  • Once you get a few million files on these things, you can't find anything in a reasonable time frame. A 40TB HDD will hold about 40 million files if the average file size is about 1MB. Sure, there are lots of really big video files, but there are lots of little tiny ones too, so an average of 1MB is very reasonable. Once you have 40 million files, trying to find those 10 JPG files you took of your son's birthday party can take an hour or more. Why is that? File systems were invented a long time ago and have h
    • by JustNiz ( 692889 )

      >> Once you have 40 million files trying to find those 10 JPG files you took of your son's birthday party can take an hour or more.

      That's what you get for using a crap OS like Windows.

      Don't just dump everything in the top level of your "My Pictures" folder and expect some photo app to sort it all out. Organize things hierarchically in the file system in subdirectories (or as you Windows users call them, folders).

      • Windows might be the worst at this, but it is still a big problem on OSX and all the Linux distributions as well. Maybe you organize things perfectly so all your files are exactly where you expect them to be and you never move things around, but 99.9% of users out there have stuff thrown all over their directory trees (on multiple devices). Search is a big problem on ALL file systems.
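    This is why tools like locate/updatedb keep a prebuilt filename index instead of walking the tree per query; a toy version of the idea in Python, with an illustrative scan root:

        import os

        # One slow pass over the tree builds the index...
        index = {}
        for root, _dirs, files in os.walk("/home/user/Pictures"):  # illustrative root
            for name in files:
                index.setdefault(name.lower(), []).append(os.path.join(root, name))

        # ...after which lookups are instant, no matter how many files exist.
        def find(name):
            return index.get(name.lower(), [])

        print(find("img_0042.jpg"))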
  • How many Hot Pockets Snack Bytes [hotpockets.com] can it heat up? Programmers get hungry ya know.

  • Jeff Frick interviews Brendan Collins on MAMR [youtube.com] — 12 October 2017

    15 seconds logo rotation.
    30 seconds lip-gloss application.
    30 seconds applied lip gloss.

    At the 2m mark there's a flaccid PMR confession.

    2m30 Finally, a useful question. Answer mentions head process "Damascene".

    3m30 three enabling technologies for last three to four years: helium, microactuation, Damascene process.

    Then, effectively "this new thing today makes things better blah blah blah track density blah blah blah linear density blah blah bl

  • I was extremely snarky in another post about an MBA-level chit chat on YouTube that revealed basically nothing.

    Now I've tracked down a 1.5 hour technical talk featuring Mike Cordano, Dave Tang, Janet George, Brendan Collins, and Jimmy Zhu.

    Technology of the Future: Western Digital Announces MAMR for Next Generation HDDs [youtube.com] — 12 October 2017

    The first 6m30 are disposable, that's as far as I made it so far.

    This was via a failure forum where Seagate employees gather to trash Seagate management. Wow, what a

  • The military's been using this tech in space for decades. Next up they'll get rid of the spinning disks and switch to holography, using interferometry to read and write to the matter.

    https://www.trumpsweapon.com/ [trumpsweapon.com]

  • Hopefully SSD-like drives will reach at least 40 TB (so that we can say "no thanks" to another heavy, noisy, power-greedy and slow technology).
