Hardware

Seagate Overcomes Superparamagnetic Limit

Longinus writes "Yahoo! News is reporting that hard drive manufacturer Seagate has "overcome a significant challenge in magnetic memory with a new technology capable of achieving far beyond today's storage densities -- up to as great as 50 terabits per square inch. Currently, the highest storage densities hover around 50 gigabits per square inch, but Seagate said its heat-assisted magnetic recording (HAMR) technology could break through the so-called superparamagnetic limit -- a memory boundary based on data bits so small they become magnetically unstable." Perhaps the near future of storage technology lies, for now, not in nanotech or holography, but still in magnetic recording."
  • Room for more pr0n and mp3s.

    Ughh, I mean serious business applications.

  • I'm sure we will have lots of fun figuring out how to back up our users' personal hard drives full of pr0n and muzak.

    Scratches head contemplating this not so inSIGnificant endeavour.
    • Three hard drives, two alternating between a hot-swap enclosure and some safe storage area (such as a fire safe). Easy.

      No good for long term archive, but that's a whole other problem.
  • Everyone keep in mind that this says bits, not bytes. I freaked out when I read this -- current storage only holds 50 gigawhats per square inch?!?! And here I am with my tiny 160GB drive...
  • Fav Quote (Score:5, Funny)

    by Winnipenguin ( 603571 ) on Wednesday August 28, 2002 @07:56PM (#4160507)
    The need for higher storage density -- the number of data bits stored on a disk surface -- already has been addressed with smaller bits, but these data chunks are becoming so small that they will be magnetically unstable within the next five to 10 years, researchers said.

    This is the real reason hard drive warranties have been getting shorter.
    • by toupsie ( 88295 ) on Wednesday August 28, 2002 @08:26PM (#4160662) Homepage
      Good point! It scares me that more storage is starting to mean less long-term data integrity. I have been thinking about long-term data stability for a while. I do a ton of digital photography. It's backed up on CDs and stored on an IBM hard drive. They're photos I want to share with my grandkids when they show up. My grandparents' old photos survived the years on paper. Will my gigabytes of photos survive for my grandchildren?

      I still have 5 1/4" floppies that were formatted in 1982 and still work on an old Apple ][, but I am sure they can't last another 5 years in storage. Are we just in a constant race against the degradation of our storage medium? Constantly pushing data from one standard to another? Paper seems to be a hell of a lot better long-term storage medium than magnetic media.

      • Your old floppies remind me of the data storage I used to work with -- 13-inch steel platters, 10MB per side, encased in a plastic shell. No, I'm not that old, I just used to work in the Navy :p Now I use them for design etching, but they're real troopers... and easy as hell to crash. Speaking of which, anybody know where I can get more of 'em?
  • heat assisted? (Score:2, Interesting)

    by dollargonzo ( 519030 )
    Does this mean that it needs to be VERY hot in order to operate, and the outside will be cooled, or are the hard drives going to be external... or even better: am I completely missing the point?

    • Does this mean that it needs to be VERY hot in order to operate

      It sits where the AMD heatsink used to go.
    • Well, the heat is generated by a "laser" and is very localized. I'm not sure about the scale, but I suspect the difference in heat output would be minimal. Actually, I'm trying to think of reasons why perhaps such a hard drive would actually generate less heat. Like not having to spin as fast or something. I dunno, probably not.
  • When their stranglehold on an industry is on the line, some companies are able to overcome the laws of physics.
  • by Beetjebrak ( 545819 ) on Wednesday August 28, 2002 @07:59PM (#4160522) Homepage
    The gap between the price/size ratio of hard disks and that of backup media/drives is becoming ever wider. It's getting almost exponentially more expensive to back up all of your data; Moore's Law doesn't apply to tape backup, I guess. What we need is a reliable, fast and cheap system to back up those 200+GB disk arrays without fuss and preferably on a single piece of media. ADR seems nice, but in my experience the reliability is sloppy... Other alternatives are WAY too expensive compared to how cheap it is to build huge disk arrays.
    • For datacenter-type apps, disk arrays for backup seem to be gaining in popularity -- witness the Veritas Backup Exec module for disk backup. Also, there are new optical backup solutions on the horizon that are truly huge; isn't Blu-ray supposed to be 100GB/side? For personal use, I would guess that RAID1 or simply archiving data (not OS or programs) to DVD is the way to go.
    • As I posted to another message, we are currently phasing out tape in favor of keeping many copies of data on various RAIDs for backup. You are correct, I'd call it a "backup crisis".

      The biggest helical-scan 200GB tapes are very, very expensive compared to hard disk prices. We have over 4TB of disk space at work; most of it is redundancy (not counting RAID redundancy), but we do have almost 1TB of live data.

      I've resorted to creative rsyncing for main backups; the Macs use Retrospect (we are soon going to target that at a hard disk rather than tape); Veritas and the 1TB tape robot are still running, but they're too slow and cumbersome to be practical (if we ever needed to restore the full 1TB of data off that thing, it would take weeks).

      And really, who do we have to blame? I'd look at the MPAA... who has the most to lose from large removable media?
    • What we need is a reliable, fast and cheap system to back up those 200+GB disk arrays without fuss and preferably on a single piece of media.


      Yeah, it's called LTO :)

  • Reminds me of the old-time ST-238s (ST-238 = ST-225 (20MB) + RLL encoding)... And to think that now I have more memory on my PDA than that...

  • by gadfium ( 318941 ) on Wednesday August 28, 2002 @08:00PM (#4160526)
    During a code review, I argued that using 32-bit integers to store the number of sectors on the hard disk would be fine.

    Perhaps I should revisit that piece of code....
    • You were wrong even before this announcement. 2TB RAID arrays have been practical for quite a while.
      • The NetApp we have on order will be 4TB raw disk space on delivery, expandable to (I believe) 12TB. Usable space after hot spares, RAID5, and snapshotting are accounted for is roughly half of raw space, so 2TB to start, expandable to 6TB =)
  • Solid State Memory? (Score:5, Interesting)

    by T-Kir ( 597145 ) on Wednesday August 28, 2002 @08:00PM (#4160527) Homepage

    Yeah, but what is the current progress on solid-state memory devices? I know there is a Cambridge University team who have their own division working on this.

    If I remember rightly (I read this info about 3 years ago), they said some HDD manufacturers (probably IBM at the time) were very interested in the tech, and their initial projections were about 2.2TB for a credit-card-sized module. Although they were still early in research/development, I wonder how they (or any others) are doing now?

    • What I've been wondering recently is why someone doesn't pack a bunch of ECC DDR DIMMs into a 5 1/4" drive bay along with a backup battery and some circuitry to interface it to IDE or SCSI. Make it flexible and you could have an upgradable drive that could max out its IDE/SCSI interface for *sustained* reads and writes, not just bursts like a normal disk. How hard would it be to design a dedicated memory controller that could talk to 6 or 8 DIMMs and abstract them to an IDE/SCSI interface a la ramdisk?

      Maybe it's really hard to do or there is just too small a market...

      But if somebody reads this and builds one...I at least want a couple free samples. :)

      -Sokie
      • by afidel ( 530433 )
        It's been done for about two decades. They are used in database apps for accelerating the transaction-log partitions, as that can quickly become the limiting factor if the rest of the database is spread over a hundred or more spindles. They are in fact a niche product, and because of their target audience they are both tested to hell and mega expensive, since by their very definition they are used in the largest of database applications, where the most is at stake.
      • Why bother going to all the trouble? Take those DIMMs, install them on your mainboard and create a ramdrive. Copy your application to it and run it from there. This is faster (memory bus beats PCI/IDE), cheaper, and easier. Plus, your OS can use the unallocated ramdrive space for cache or application memory -- much more efficient than having the empty space go to waste.

        I know what you're going to say: "You can't boot from it." True. "It loses its contents when you shut off the computer." True. But a battery-backed DIMM isn't permanent either, at best you could only have your system off for a number of hours. I honestly don't think many people would trust their sole copy of any data to such a system. What if there was an extended power outage? What if your computer's power supply failed? Don't forget, these ordinary DIMMs are not designed for low power (rather for speed), and with a few gigs worth of RAM, you are looking at much more than a trickle of power. I estimate around 15-20 watts of power, worst case, for each gigabyte of RAM. At that rate, even with a number of large batteries I'd be surprised if it could last overnight. I certainly wouldn't expect PDA-like battery life.

        By creating the ramdisk you enforce the condition that a nonvolatile backup exists. If booting from solid state media is one of your objectives, then buy a relatively small Flash drive to boot from, and then copy whatever is necessary from the HD to the ramdrive. I'm sure you'd come out ahead speed wise over the HD-only solution.
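        To put numbers on that battery worry, here is a minimal Python sketch using the parent post's own worst-case estimate of ~15 W per gigabyte (an assumption, not a measured figure) and a hypothetical 100 watt-hour battery:

          ram_gb = 4                 # hypothetical battery-backed RAM drive size
          watts_per_gb = 15          # parent post's worst-case estimate (assumption)
          battery_wh = 100           # assume one large ~100 Wh battery

          draw_watts = ram_gb * watts_per_gb   # 60 W total draw
          hours = battery_wh / draw_watts      # ~1.7 hours of retention
          print(f"{draw_watts} W draw -> ~{hours:.1f} h before data loss")

        At those numbers the contents wouldn't survive even a short outage, which supports the ramdisk-plus-nonvolatile-backup approach described above.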

  • The down side of this is that now it'll be that much harder to back up your hard drive. Where are those FMD-ROM drives? We're gonna need 'em soon...
  • by crystalplague ( 547876 ) on Wednesday August 28, 2002 @08:06PM (#4160553)
    Note: I do not take credit for this, it was posted a few months ago. How the hell do you spell that anyway? I used the "Sounds about right" method. hoocd on fonix rilly werked fer me!
  • Magneto Optical (Score:3, Interesting)

    by Warped-Reality ( 125140 ) on Wednesday August 28, 2002 @08:07PM (#4160559) Journal
    What's different between this and the "magneto optical" (or similar) drives I heard about years ago? They basically used a laser to heat up the individual bits so the magnetic head could read/write there, allowing many more bits/sq inch without shrinking the head any smaller than it already is.
  • There's no question that being able to jump from giga- to tera- orders of storage/sq. in. is a Good Thing, but I have to wonder how delicate these drives are going to be. Typically, lasers need to be focused pretty accurately to be, uhm, accurate. Methinks that widescale rollout of these drives will be delayed considerably as they figure out ways of ensuring that the focus (mirror-based?) remains unaffected by the typical knocks 'n shocks that are so much the norm, especially in mobile computing.

    As was mentioned in an earlier post, solid-state storage has such a great advantage due to the lack of moving parts. The hurdle to overcome there, however, is how to get the same storage density out of a solid-state device. There's always a catch.
    • Seems like a bit of a silly thing to be worried about, really. I mean, who knows more about stabilizing delicate components and damping vibrations than hard drive manufacturers? That's what they do!
    • They could just mount the laser onto the r/w head. Then you only need to worry about the solved problem of head alignment. This is still an issue when you shrink bit size, but it's not what's going to stop the HDD industry.
  • magnetic instability (Score:3, Interesting)

    by ndevice ( 304743 ) on Wednesday August 28, 2002 @08:08PM (#4160568)
    speaking of bits being magnetically unstable, this reminds me a bit of DRAM and, if you want to get older, mercury delay lines.

    Not sure if current HDs have to continually refresh their data, but it seems that they might have to in the future. It would be a challenge with huge drive sizes, though, because the drive controller would probably be the component in charge of the refreshes. However, if the data-retention limits really were still measured in years (albeit small numbers), it might still work without impacting performance too much.
  • Superparamagnetism...expialidocious!
  • by Myco ( 473173 ) on Wednesday August 28, 2002 @08:13PM (#4160591) Homepage
    This isn't reporting, it's reprinting a press release verbatim. Jebus. Here's the original [seagate.com], from Seagate's site.
  • I was just thinking that heat was what computers could use more of these days...
  • I guess this means my computer will eventually do double duty as a space heater.
  • The notched electron?

    (I don't remember in which story this was - it was about a civilization whose collapse was traced to the failure of a single database index)...

    • Ms Fnd in a Lbry by Hal Draper. They still had all the data, but they couldn't find anything because the index to the index to the index had got corrupted.

      Amazingly enough, the story was written in 1961.
  • by asparagus ( 29121 ) <koonce@gma[ ]com ['il.' in gap]> on Wednesday August 28, 2002 @08:20PM (#4160629) Homepage Journal
    A 40GB/platter drive (4 platters = 160GB) has a density of roughly 80 gigabits per square inch.

    So, at 50 terabits per square inch, you could have ~25TB/platter hard drives, or about 100TB in the same form factor as the current Maxtors.

    G'damn.

    -asparagus
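    Sanity-checking that arithmetic with a quick Python sketch (the 40GB/platter and 80 Gbit/in^2 figures come from the post above, the 50 Tbit/in^2 figure from the article):

      current_gbit_in2 = 80            # today's areal density, per the parent post
      future_gbit_in2 = 50 * 1000      # 50 Tbit/in^2, per the article
      scale = future_gbit_in2 / current_gbit_in2   # 625x denser
      platter_tb = 40 * scale / 1000   # 40 GB/platter scales to ~25 TB/platter
      print(f"{scale:.0f}x -> {platter_tb:.0f} TB/platter, "
            f"{4 * platter_tb:.0f} TB in a 4-platter drive")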
  • Aw shucks (Score:3, Funny)

    by hkhanna ( 559514 ) on Wednesday August 28, 2002 @08:23PM (#4160645) Journal
    50 terabits per sq. in. ought to be enough for anybody!
  • Some calculations (Score:5, Informative)

    by grahamsz ( 150076 ) on Wednesday August 28, 2002 @08:25PM (#4160657) Homepage Journal
    So at 50 Tbit/in^2, a 3.5" drive with 4 double-sided platters might hold:

    Area of disk (considering .5" hole):
    9.62 - 0.196 = 9.424 in^2

    8 data surfaces:

    8 * 9.424 =~ 75.4 in^2

    Total data storage:

    75.4 * 50 / 8 =~ 471 terabytes!
    471 TB = 517869976682496 bytes

    Bits needed to address this number of bytes:

    ceil (ln (517869976682496) / ln (2)) = 49

    And thankfully, so long as we have a 64-bit architecture, ReiserFS will happily work :)
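    The same arithmetic as a small Python sketch (geometry and density as given above; treating a terabyte as 2^40 bytes is my assumption):

      import math

      density_tbit_in2 = 50.0                       # from the article
      surface_in2 = math.pi * (3.5 / 2)**2 - math.pi * (0.5 / 2)**2  # ~9.42 in^2
      total_in2 = 8 * surface_in2                   # 8 data surfaces, ~75.4 in^2
      tbytes = total_in2 * density_tbit_in2 / 8     # bits -> bytes, ~471 TB
      addr_bits = math.ceil(math.log2(tbytes * 2**40))  # bits to address each byte
      print(f"{tbytes:.0f} TB, {addr_bits}-bit byte addresses")  # 471 TB, 49 bits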

  • Portable Storage? (Score:3, Insightful)

    by OneNonly ( 55197 ) on Wednesday August 28, 2002 @08:26PM (#4160663)
    Well, having a 100TB drive might sound lovely, but if our movies are still going to be limited to DVD size (or the future of DVD sizes -- let's say 100GB), it's not going to offer any great improvements in this area.

    I don't know much about this field, but "heat-assisted magnetic recording" doesn't sound like it's going to be easily transformed into portable media.

    Then the other question is: backups. When I have 100TB of data on my HDD, what will I use to back it up? That's one long tape I'm going to need! (I know there are tape solutions for large quantities of data like this at the moment, but they are not *small* and inexpensive compared to, say, 100GB backups.)
  • Longinus!?! (Score:5, Funny)

    by Scholasticus ( 567646 ) on Wednesday August 28, 2002 @08:27PM (#4160669) Journal
    According to legend, Longinus was the Roman soldier who pierced the side of Christ with a spear. That spear was for a long time believed to have a role in controlling the destiny of the world. Adolf Hitler spent years and millions of deutschmarks searching for the Spear of Longinus. It's no coincidence that Longinus himself posted this story. The Spear of Longinus was said during the Middle Ages to "havve propertyies of needed to peerce the superparamagnetism barrier," (according to Nostradamus) which will bring on the end times.
  • Marketing (Score:4, Funny)

    by unsinged int ( 561600 ) on Wednesday August 28, 2002 @08:48PM (#4160762)
    Pretend this is from Seagate:

    Since 939 of the 1,000 random people we surveyed did not know what a terabit was, we will be using the measure of mp3s per square inch when we release our newest hard drive. If AMD can make their own metric, then by God we can too.

    (Weeks later, a class-action lawsuit is filed against Maxtor, Toshiba, et al. for continuing to label their new products with the confusing terms Gigabyte and Terabyte, which no normal person really understands anyway.)
  • a bad hack? (Score:3, Insightful)

    by banky ( 9941 ) <gregg AT neurobashing DOT com> on Wednesday August 28, 2002 @08:57PM (#4160802) Homepage Journal
    I can't help but think that maybe this is a bad hack -- that it may be great science and great technology, but that maybe it's also time to abandon magnetic media in general.

    Like every time a new Pentium comes out... everyone cries, "It's just a sooper-dooper overclocked 8086! With a couple new instructions!"

    I wonder if continuing to improve on existing technology, rather than moving to completely new ones, is the best idea.
    • Re:a bad hack? (Score:3, Interesting)

      by mindstrm ( 20013 )
      Intel, and many others, are constantly working on new technology. The Pentium is what it is because of market demand, and because it's cost-effective for them to market it. That's business.

      Obviously, at some point this will not do.

      I mean, look at the Earth Simulator (#1 on Top500.org by a factor of 5)... it's not Intel-based, or x86-based at all... neither are most of the supercomputers on there.

      We are doubling our speed every 18 months by improving current technology... that sounds pretty good to me.

    • I wonder if continuing to improve on existing technology, rather than moving to completely new ones, is the best idea.

      Of course it's the best idea. It's just never the most fun.

      Writing new lines of code is way more fun than squashing esoteric bugs in legacy code. Designing new computer architectures is always sexier than modding an old one. Making snazzy solid-state storage is currently way more chick-magnet-ish than breaking the superparamagnetic barrier, or at least as chick-magnet-ish as either of those things can be.

      We all assumed that magnetic media was on its way out because of things like superparamagnetism. If Seagate's research folks had decided that HAMR was too costly, too fragile, or too difficult, they wouldn't be doing it.

      Just like everyone thought Moore's Law was out the window because X-ray lithography was so expensive and unreliable, and then the manufacturers came back with visible-spectrum equipment that could make smaller and smaller features. Then Moore's Law was declared dead again because the math showed even X-ray lithography wouldn't save us forever, and then nanotech semiconductors came along.

      Magnetic media is here to stay, and that's not a bad thing. We're only leveraging, oh, 40 years' worth of research and development :-)
  • Scientific American [sciam.com] had a feature article a while back that explained the superparamagnetic effect [sciam.com], as well as the holographic storage technology that the story poster referred to.

    The article was also featured on Slashdot [slashdot.org].
  • Less than a fourth of my drive is even used on my W2K workstation. I have another 20 gig drive on my Gentoo Linux box that is only about 12% full.

    I have lots of programming apps including VC, VB, MSDN, Tcl/Tk, ActivePerl, Python, Apache, OpenOffice, Java SE and EE, as well as all the Internet browsers, Quake III, and the evil .NET -- which, I just found out, I can't use to develop "viral" GPL programs, according to the EULA. Anyway, all this is less than 5 gigs and I have lots of storage left on my 2-year-old drive! What would anyone besides MP3 bootleggers need a 100 gig drive for? Maybe that's the true market.

    I read in Microsoft's "Networking Essentials" that if you made every man, woman, and child on Earth write a 2,000-page novel, you would barely equal a terabyte! You could fit it all on one of these new disks.

    The fact that corporate databases can sometimes reach 1 terabyte is, to me, truly astounding.

    • I read in Microsoft's "Networking Essentials" that if you made every man, woman, and child on Earth write a 2,000-page novel, you would barely equal a terabyte!

      Pardon the math, but figuring 270 million people in America, 1 terabyte would be a little under 4K per person. So a 2,000-page novel would be allowed to have about 2 bytes per page. This means everyone in America could write this 2,000-page novel with one char per page, plus a page break. Must be a fascinating read. Slight miscalculation.
    • I read in Microsoft's "Networking Essentials" that if you made every man, woman, and child on Earth write a 2,000-page novel, you would barely equal a terabyte! You could fit it all on one of these new disks.

      Something is either wrong with their math or the quote:

      1 TiB / 6 billion people = 183 bytes/person

      Even with 100:1 compression, you'd only have enough space for 9 characters per page to create a 2000 page novel for every person on the planet.

      You'd require well over a petabyte of storage to store 2000 small book pages worth of text for every person on the planet.
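      For anyone who wants to rerun those numbers, a tiny Python check (6 billion people, 2,000 pages, and ~2 KB of text per page are the assumptions):

        people = 6_000_000_000
        print(2**40 / people)          # ~183 bytes per person from 1 TiB

        total = people * 2000 * 2000   # 2,000 pages x ~2 KB of text per page
        print(total / 2**50)           # ~21 PiB -- well over a petabyte, as said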
  • ...is what would one do with a single hard disk that insanely huge?

    I know it's the same mentality as when the 386 was out and there was talk of a 2GHz processor, and people said "I'll never be able to use that!"... but as processors slowly got faster and faster, we always found a way to use them to their full potential. Every time a new program came out, it would always look better and run faster on the faster chips. Yet virtually all of today's major software applications still ship on a single CD-ROM -- a now, what, 18 (I think) year old technology -- which holds 650MB per disk, and require about the same disk space... but I digress.

    For casual use, insanely sized drives serve no foreseeable purpose. Even in data-intensive situations like databases and video storage/editing, it is overkill. Oh well, maybe I'm just not seeing the future.
    • I don't know about casual use, but any number of scientific applications will happily chew up terabytes per dataset given half a chance.

      Consider a 3-D grid of data (modelling, say, a section of the Earth's crust, or the data from an MRI scan). Suppose you want to consider how that data changes over time. Even if one data point is a simple double-precision number, 1000 snapshots of a 1000x1000x1000 grid will require 8TB.

      Often even higher resolutions in space or time are desirable. It will be a long time before storage (and memory, and bandwidth) is so great that people will struggle to find ways to use it.
    • Hm. Storing Internet snapshots(*)? Or, perhaps, using a never-overwriting filesystem that keeps all versions of a file around, or at least a full journal...

      (*) Or, for that matter, do as the Seagate press release [seagate.com] suggests and store one Library_Of_Congress unit in a notebook computer...

      'course, that's if the heating, cooling and laser don't add too much overhead in terms of size, weight and cost. It's not specified in either article.

      Even something as mundane as switching to high-resolution uncompressed true-color movies might take advantage of more space. Say, 2048x1536, 24-bit color, 24fps = what, 216MB/s required, which should be something like 1.48TB for a 2hr movie (arithmetic checked in the sketch below). ;)

      ('course, there's the obvious question of how do you transport that, and whether the drive can sustain sufficient throughput... That kind of network bandwidth available to consumers would probably make Jack Valenti spontaneously combust, but unless newer, far denser DVDs or a suitable replacement media appeared, uncompressed video ain't too useful to him.)
      • Assuming that the increased density is split evenly between more tracks and more bits per track, we're looking at about a 30x increase in the number of bits per track. Assuming rotational speeds remain the same, that will take us from 40MB/s on a current IDE drive to about 1.2GB/s. Which is comfortably greater than 216MB/s.
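      A quick Python check of both calculations in this sub-thread (frame size, fps, today's 40MB/s, and the even density split are all taken from the posts above):

        import math

        frame_bytes = 2048 * 1536 * 3        # one uncompressed 24-bit frame
        rate = frame_bytes * 24              # 24 fps -> ~216 MiB/s
        movie = rate * 2 * 3600              # a two-hour movie
        print(rate / 2**20, movie / 2**40)   # ~216.0 MiB/s, ~1.48 TiB

        linear_gain = math.sqrt(1000)        # even split -> ~31.6x more bits/track
        print(40 * linear_gain)              # ~1265 MB/s up from today's 40 MB/s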
    • Uncompressed HDTV runs at approximately 1.5 Gbps. That will fill up a disk pretty quickly.
  • BIOS capability (Score:3, Interesting)

    by PD ( 9577 ) <slashdotlinux@pdrap.org> on Wednesday August 28, 2002 @09:29PM (#4160897) Homepage Journal
    What is the current state of the art for BIOS capability? We're still hitting limits for drive size because they don't plan ahead. In fact it seems that for every motherboard I have ever owned the first drive I get for it works, but the second drive is bigger than the BIOS will handle. Will these 100 Terabyte drives exceed the current capabilities?
    • Re:BIOS capability (Score:5, Informative)

      by SQL Error ( 16383 ) on Wednesday August 28, 2002 @10:20PM (#4161074)
      State of the art is ATA/ATAPI-6 a.k.a. "Big Drive". It supports 48-bit addressing. That's 48-bit sector addressing, so the maximum size of a disk is 144 Petabytes. This standard also supports transferring 32MB of data in a single I/O. This is at least partly implemented in ATA-133 controllers.

      After hitting limits at every factor of 4 (32MB, 128MB, 512MB, 2GB, 8GB, 32GB and most recently 128GB), they've finally got it right.

      Take a look here [maxtor.com] for more details.
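      A one-liner check of those figures in Python (assuming the standard 512-byte sector):

        print(2**48 * 512 / 10**15)   # 48-bit LBA -> ~144.1 (decimal) petabytes
        print(2**16 * 512 // 2**20)   # 65,536-sector transfers -> 32 MiB per I/O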
  • I bet the warranty for these drives only covers 4 hours/day operation, worse than the IBM Pixie Dust drives...
  • From the article:

    "heating the disk and recording components makes it easier to write information, which is stabilized with subsequent cooling."

    Hot processors, hot RAM, now even hotter hard drives. More heat in the case -- is this a good idea?
  • FYI, this information has been on the StorageReview.com [storagereview.net] forums for about a week. There is a small discussion there.
  • But that doesn't change the fact that it's Seagate. Does anyone really use a Seagate HD? They have a really bad rep. It's usually Maxtor or WD for me.
  • 50 Gb/in^2? What in hell's name is that?

    Can I please have this in something I can understand, like Libraries of Congress per square meter?

  • Square facts (Score:2, Insightful)

    by Anonymous Coward
    "terabits per square inch"

    What the?! If this is so high-tech, why are they using square inch?
