New Technique Promises Much Faster Hard Drive Write Speeds

MrSeb writes "Hold onto your hats: Scientists at the University of York, England, have completely rewritten the rules of magnetic storage (abstract; full paper paywalled). Instead of switching a magnetic region using a magnetic field (like a hard drive head), the researchers have managed to switch a ferrimagnetic nanoisland using a 60-femtosecond laser. Storing magnetic data using lasers is up to 1,000 times faster than writing to a conventional hard drive (we're talking about gigabytes or terabytes per second), and the ferrimagnetic nanoislands that store the data are capable of storage densities some 15 times greater than those of existing hard drive platters. Unfortunately, the York scientists only detailed writing data with lasers; there's no word on how to read it."
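
For a rough sense of scale (illustrative figures, not numbers from the paper): a 2012-era 7200 RPM drive sustains on the order of 100-150 MB/s of sequential writes, so a 1,000x speedup does land in the range the summary quotes.

    # Back-of-envelope on the "1,000 times faster" claim. The 130 MB/s
    # baseline is an assumed figure for a 2012-era 7200 RPM drive.
    conventional_write_mb_s = 130
    speedup = 1_000

    optical_write_gb_s = conventional_write_mb_s * speedup / 1_000
    print(f"~{optical_write_gb_s:.0f} GB/s")   # ~130 GB/s
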
  • by Anonymous Coward on Wednesday February 08, 2012 @09:41AM (#38966415)

    Who needs to read data back anyway?

  • by Anonymous Coward

    If they can't read it, how do they know if they actually wrote it? Or maybe reading it is 10,000 times slower than current read technology.

    • by TheRaven64 ( 641858 ) on Wednesday February 08, 2012 @09:58AM (#38966677) Journal

      It's stored in the same way as a normal hard disk - in ferromagnetic domains on a platter. You can still read it back using the same techniques as current drives (i.e. put a coil over it and see which way the induced current flows), but you then have a drive that you can write to orders of magnitude faster than you can read from it.

      I can think of a few places where this might be useful. The most obvious is the underlying storage for something like ZFS. For reliability, you want to flush everything to the backing store as quickly as possible, and with copy-on-write and snapshotting you may never erase it, but most of your reads are satisfied from flash or DRAM caches. A drive using this technology would let you dump data there as quickly as you wanted and would let you read it back for data recovery if you needed to, while in normal operation you wouldn't care about the read speed because reading from the disk is comparatively rare.

      It would also be useful for a number of scientific applications. I did some work a few years ago with someone building a solar observatory. A single one of their cameras generated 10 GB/s of data, and they had 8 cameras in a typical setup. They run these for the entire time that the sun is visible. A single drive that can handle a sustained write speed of 1 GB/s would be very useful for them (although they'd fill up several per hour...). Rough arithmetic on those numbers is sketched after this comment.

      For consumer devices, random read speed is still the most important factor, and mechanical drives suck at that.
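
      Rough arithmetic on the observatory figures above; the 10 GB/s per camera and 8 cameras come from the comment, while the 4 TB drive capacity and 1 GB/s sustained write rate are illustrative assumptions.

          # Camera count and per-camera rate are from the comment above;
          # drive capacity and sustained write speed are assumptions.
          cameras = 8
          rate_per_camera_gb_s = 10
          drive_write_gb_s = 1
          drive_capacity_tb = 4

          aggregate_gb_s = cameras * rate_per_camera_gb_s         # 80 GB/s of raw data
          drives_in_parallel = aggregate_gb_s / drive_write_gb_s  # 80 drives writing at once
          minutes_to_fill_one = drive_capacity_tb * 1000 / drive_write_gb_s / 60
          print(f"{aggregate_gb_s} GB/s aggregate, {drives_in_parallel:.0f} drives in parallel, "
                f"each full in ~{minutes_to_fill_one:.0f} minutes")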

      • Re: (Score:3, Insightful)

        by Stonent1 ( 594886 )
        I would think that you would still have to read the location of the cluster before writing to it. Sure you can flip magnetic particles N > S or S > N at bazillions per second speed but if you don't know what you're flipping that's not good.
        • by sosume ( 680416 )

          I don't think that when storing such absurd amounts of data at high speed, you're interested in possible fragmentation.

        • by tibit ( 1762298 ) on Wednesday February 08, 2012 @11:07AM (#38967637)

          It's time to dust off the old concept of hard sectored discs ;) Realistically, of course, it's a bit more complex than that.

          First of all, modern hard drives have a servo track that's used to maintain the radial position of the head servo. Instead of each hard drive having a very accurate (and expensive) radial and axial head position sensor, you pay for it once, install it in the factory, and use it to accurately guide each drive while it writes its servo track. Its cost is amortized over the thousands of drives made. This is probably the reason for a covered-up radial slot in many hard drive enclosures: I guess it's used for the sensor to couple with the head system while the drive writes the servo track. Or perhaps the servo platter is prewritten before it goes into the drive? Someone familiar with how it's made, please chip in!

          The servo track can also be used to provide angular position feedback. A rough estimate of the angular position of the spindle is available first from the Hall sensors in the spindle motor. A somewhat more accurate estimate can be had from back-EMF from the spindle motor windings. This is still, methinks, a couple orders of magnitude away from what's needed to pack sectors tightly on the drive -- thus the feedback can come from the servo track (rough numbers on the required precision are sketched at the end of this comment). Not having to read the data tracks helps with packing the sectors: there's no read-write switchover overhead (if it were significant -- perhaps it isn't nowadays). The servo head is always reading, and the data heads can be kept in write/erase standby. It'd be nondestructive, but the read amplifiers are disconnected to prevent saturating them -- amplifier overload recovery is slow. Heck, if you want an amp that recovers from overloads quickly, you have to split it into more stages, and you need fast clamps between each stage. There are other similar approaches to this problem, too, and perhaps modern read amps are designed to deal with overloads gracefully -- I never tested a recent one. Stuff from a decade ago was painfully slow on overloads (I tried to reuse a head amp from a hard drive for a non-drive-related project).

          Alas, this ultra-fast-writing drive would need very accurate position sensors -- both angular and radial. It's an engineering issue to make those affordable, as is the design of the optochip with the femtosecond laser and its driver and serializer. The latter would probably take a couple of serial lanes and multiplex them -- I presume it's not all that easy to push 10 Gbit/s of data between external chips and the laser driver/laser combo. I think that to make it all practical you need an on-chip serializer, write precompensation, driver, and the diode. Perhaps the diode would be "tacked on" later to a substrate that has everything else. I only imagine that bond wire parasitics, even over a couple of mm, become kinda important when the laser waveform has a 100 GHz bandwidth...
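
          To put rough numbers on the point above about Hall sensors and back-EMF being far too coarse for bit-level placement, here is an illustrative estimate; every figure is an assumption, not a measurement from any real drive.

              import math

              # Assumed figures: 3.5" platter (outer track radius ~45 mm),
              # 7200 RPM, ~25 nm bit cell length along the track.
              rpm = 7200
              outer_radius_m = 0.045
              bit_length_m = 25e-9

              rim_speed = 2 * math.pi * outer_radius_m * rpm / 60   # ~34 m/s
              ns_per_bit = bit_length_m / rim_speed * 1e9           # ~0.7 ns
              rev_fraction = bit_length_m / (2 * math.pi * outer_radius_m)
              print(f"{rim_speed:.0f} m/s at the rim, {ns_per_bit:.2f} ns per bit, "
                    f"~{rev_fraction:.1e} of a revolution per bit")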

      • by Twinbee ( 767046 )
        Which makes me often wonder how people get the motivation to research this kind of stuff. I mean, yes, puzzles are often fun in and of themselves, but if it were me personally, I'd feel kinda soul-crushed to finalize this tech only to see SSDs and technologies like racetrack memory win in the end. And they're presumably being *paid* to research this, too.

        It's not hard to see that solid-state devices are the natural way forward.
        • by the_B0fh ( 208483 ) on Wednesday February 08, 2012 @10:30AM (#38967107) Homepage

          Why? If they can write TB/s and store data at 15X current density, and SSDs can't, why move to SSDs?

          The read problem is easily resolved by having multiple read heads that can read independently.

          • Why? If they can write TB/s

            Except that you're still limited by rotational latency and whatnot. Was the magnetic write head ever the main bottleneck?

            • by mike260 ( 224212 )

              Was the magnetic write head ever the main bottleneck?

              Maybe not the 'main' bottleneck, but it depends on the application, no? Seems to me there are at least a few firehose situations where you can never have enough write bandwidth (say, uncompressed video-capture).

              Maybe normal workloads on normal filesystems wouldn't see much improvement, but I bet you could find ways to capitalise on the extra bandwidth and space. Log-structured filesystems spring to mind for one.
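
              A toy sketch of the log-structured idea, purely illustrative and not any real filesystem's on-disk format: every update is a sequential append (the operation a faster write path would accelerate), and a small in-memory index serves the rare reads.

                  import os

                  # Append-only log: writes are sequential appends; rare reads
                  # go through an in-memory index of key -> (offset, length).
                  index = {}
                  log = open("capture.log", "ab+")

                  def put(key, value):
                      off = log.seek(0, os.SEEK_END)   # always append at the end
                      log.write(value)
                      index[key] = (off, len(value))

                  def get(key):
                      off, length = index[key]
                      log.seek(off)
                      return log.read(length)

                  put("frame-0001", b"...payload...")
                  print(get("frame-0001"))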

              • Maybe not the 'main' bottleneck, but it depends on the application, no? Seems to me there are at least a few firehose situations where you can never have enough write bandwidth (say, uncompressed video-capture).

                Centralized backup, especially of large data-stores. You have to write massive amounts of data on a regular basis, but rarely read the data, and when you do you usually only need a small subset of what's been written. I could imagine it being useful for certain kinds of RAID configurations and networ

          • by Adriax ( 746043 )

            I don't care which way becomes the "best" choice, as long as both styles interface through a standardized connector.
            Both sides of chip vs. platter will always have their own strengths and weaknesses; I like choice.

            It's very easy to see this becoming the highest-cost, highest-performance drive of the near future that server admins and performance enthusiasts go to, while the SSD takes over as the PC and small-device storage of choice.

        • Because the actual research that has been done is fundamental physics. For better or worse, news articles always talk up the applications rather than the science.
      • Hard drives haven't used coils for a long time. Nowadays they use the GMR effect http://en.wikipedia.org/wiki/Giant_magnetoresistance [wikipedia.org], and in principle the CMR effect could give another few orders of magnitude more sensitivity. That only solves the size problem; it doesn't do much for the speed problem.
      • Sounds like a good fit for updating a warehouse database. Our loads are supposed to occur at night but with increased volumes they often spill over well into daytime.
      • by goombah99 ( 560566 ) on Wednesday February 08, 2012 @10:46AM (#38967319)

        This solves a major problem with mag recording. Readback heads have always been way smaller than write heads. You can read back with just a tiny permalloy head, but to write you need large currents and loops of wire. So miniaturization has been limited by the write-head size, not the read head. This solves the write-head size problem but may have created a new read-head problem. Still, that's very promising.

      • This uses ferrimagnetic domains, not ferromagnetic domains. There is no external magnetic field, and you can't use a coil to read them.

      • Well, I could see it being useful in more than just a few places. The basic rule of computer science is the 80/20 rule: 80% of the use is on 20% of the data. For the most part we store far more than we read back. Sounds wasteful? Well, it is in a way, but not collecting the data will mean that 20% of the time you may need to access the other 80% of the data. So it may still be needed, and the 20% of popular data can change over time.
         
      • by awfar ( 211405 )

        As I understand it, if you can accurately write a much smaller magnetic domain with a laser vs. the relatively large area under a write head, you obviously increase the data density. And since higher and more focused energy to flip the domains can now be applied, lessening the problem of flipping their neighbors, this probably means smaller-particle, higher-coercivity media can be used or developed. This also implies that the tracks get smaller, reducing or eliminating guard areas and tracks. All of which

    • You don't know if you actually wrote successfully on today's disk drives either.

  • At last! (Score:5, Funny)

    by undulato ( 2146486 ) on Wednesday February 08, 2012 @09:41AM (#38966425) Homepage
    A future-proof storage medium.
  • omg (Score:4, Funny)

    by Anonymous Coward on Wednesday February 08, 2012 @09:42AM (#38966435)

    frickin hard drives with laser beams!

  • Just got wider.
  • writing the cute useless powerpoint presentations that waste so much of everyone's time will be done 1000 times faster, as will downloading swimsuit pictures (minus the swimsuit for some :) )

    awesome, we're gonna be able to waste time so much faster haha

  • by Anonymous Coward

    How can scientists know the write was successful without being able to read it back as well? Surely there is an implied read in the mix; otherwise the discovery isn't worth the paper it is written on!

    • They can read the content in their laboratory test samples. They just don't know how to read the content when it's spun up to several thousand RPM.
  • by rjejr ( 921275 ) on Wednesday February 08, 2012 @09:47AM (#38966531)
    Considering how often I back stuff up, but how rarely I ever use those backups, I'll gladly take 1,000 times faster backups even if it means slower read speeds than we have now. Really, I'd take that trade-off in a heartbeat.
    • Re: (Score:3, Insightful)

      by Anonymous Coward

      You can only write as fast as data can be read so your backups will not be 1,000 times faster.

    • by CAIMLAS ( 41445 )

      You say that until you've got to be at the office until 3am waiting for a backup to restore for DR.

  • by na1led ( 1030470 ) on Wednesday February 08, 2012 @09:48AM (#38966533)
    If I remember correctly, several years ago they said a 500-terabyte drive would be coming out soon. It never happened.
  • I think I'll - $32?! Jeezus, if I was still a student I'd be set..
  • we just need fiber optic internet connections to become standard enough so we can put all that fast reading and writing to use! ;)
  • No way to read these things? Wow, Who needs encryption now... (Ok.. Ok.. Just write your data to /dev/null...)
  • by Yvan256 ( 722131 ) on Wednesday February 08, 2012 @09:55AM (#38966633) Homepage Journal

    If they can read it at least as fast as today's technologies, the power required to read/write data is roughly the same as today's drives, and the manufacturing cost is also about the same, then this is good news for everyone:

    1. On the consumer side, cheaper drives per terabyte, meaning cheaper home media servers
    2. On the commercial side, a lot less energy required: no need for ultra-fast 15k RPM drives in servers, and up to 15 times fewer drives needed in server farms. This is BIG.

    There is only one problem [xkcd.com].

    • by LWATCDR ( 28044 )

      "2. On the commercial side, a lot less energy required, i.e. no need for ultra-fast 15k RPM drives in servers, need up to 15 times fewer drives in server farms. This is BIG."
      Probably not. Spindles == speed and redundancy. If you are looking at a data warehousing situation, then maybe; but if you are dealing with a lot of transactions, you will still want as many spindles as you can afford.

    • We have 15k RPM drives because we need to get the sector we want to read or write under the actuator head more quickly. The slowest part isn't the transfer of data between head and platter, but a) moving the actuator arm and b) waiting for the correct sector to come around.

      I'm not sure how much using lasers could help overall performance, since access time (moving the mechanical arm and waiting for the sector) is still on the order of milliseconds.
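
      Rough numbers behind that point (pure geometry, not benchmark data): average rotational latency is half a revolution, and a faster write mechanism does nothing to shrink it.

          # Average rotational latency is half a revolution, regardless of
          # how quickly bits can be flipped once the sector arrives.
          for rpm in (7200, 15000):
              ms_per_rev = 60_000 / rpm
              print(f"{rpm:>5} RPM: {ms_per_rev:.2f} ms/rev, "
                    f"~{ms_per_rev / 2:.2f} ms average rotational latency")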

      • Exactly. 15k RPM drives tended to be small diameter for structural reasons and to allow a shorter arm stroke. As a consequence, their linear velocity, and thus sequential throughput, was not all that much better than for 3.5" 7.2k RPM drives. The only reason to pay for their extremely high cost per GB was for low latency operations. For low latency operations, you're so much better off just buying SSDs. That's a disadvantage that rotating storage will simply never overcome. This technology will only s
  • If it's paywalled, it didn't happen!

  • As an added bonus the factory can continue to operate even if it's flooded [//to do: insert conspiracy theory here] as the lasers can then be attached to sharks.

  • Write Once Read None.
  • A classic technology updated!

    http://www.national.com/rap/files/datasheet.pdf [national.com]

  • They've just re-invented the Magneto-optical drive! [wikipedia.org]
    • For writing, magneto-optical drives only used the laser to heat a bit to the point where it could be flipped. The actual magnetic drive head flipped the heated bit, not the laser. This post says they can now use the laser itself to flip a bit, and that's a big difference.
  • by Synon ( 847155 ) on Wednesday February 08, 2012 @10:14AM (#38966895) Homepage
    "Unfortunately the York scientists only detailed writing data with lasers; there's no word on how to read it." A bit of a paradox don't you think? How did they know it was written without being able to read it?
    • Probably they used a scanning-tunneling electron microscope or similar to do the read. Those obviously don't scale down easily, hence there is no practical way to read the data yet.
  • Pfffffft. You silly scientists... it's lasers all the way down!
  • No man is a nano-island!
  • or the platter is spinning 1000 times faster to achieve this throughput?
  • Newport has an ultrafast 400 fs laser with a claimed high repetition rate (http://www.newport.com/Spirit#tab_Specifications), but the rep rate is only 1 MHz. Who cares if you can set a bit in 60 fs when your laser can only write 1 Mbit/s to disk? What's that, the speed of a Zip Disk?
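
    The arithmetic behind that objection, assuming one bit written per pulse (multiplexing several beams or encoding more than one bit per pulse would change the picture):

        # One bit per pulse assumed: the repetition rate, not the 60 fs
        # pulse width, caps the sustained write rate.
        rep_rate_hz = 1e6                        # figure quoted in the comment
        write_mbit_s = rep_rate_hz / 1e6         # 1 Mbit/s, ~0.125 MB/s

        target_gb_s = 1
        needed_rep_rate_ghz = target_gb_s * 8e9 / 1e9   # 8 GHz for 1 GB/s
        print(f"{write_mbit_s:.0f} Mbit/s now; "
              f"~{needed_rep_rate_ghz:.0f} GHz rep rate needed for {target_gb_s} GB/s")
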
  • only detailed writing data with lasers; there's no word on how to read it.

    Sounds like Windows' strategy: crap the write to wherever on disk, and don't care about performance in reading it back. Why bother with read-time performance when the user can defrag every day?

  • Backup "tapes" currently grind along at 10,000 RPM or so, depending on the device. Their primary purpose is to write data; you hope you never have to read from it. The thought of writing backups at 150K RPM - finishing what is currently a three hour backup in about fifteen minutes - that would be spectacular. Sure, the data restore would still take 3+ hours - but again, you cross your fingers and hope you never have to do that restore anyway.
  • They have re-invented the write-only memory or WOM! Back in grad school some friends and I developed a spec sheet for the wood-insulated gate write-only memory or WIGWOM. Another billion dollar idea that went nowhere.
  • If they can't read it, how can they know that the lasers wrote successfully? Or does that mean they read it using conventional means?

  • The German company Convar reads data from damaged hard drives using blue lasers [convar.com]. They're currently recovering data from the World Trade Centre [youtube.com] hard disks using this blue laser method.
  • "Unfortunately the York scientists only detailed writing data with lasers; there's no word on how to read it."

    Use lasers. Duh. :)

                      -Charlie

  • OK, my visual cortex is officially due for repair. I read the headline as "New Technique Promises Much Faster Hot Damn Write Speeds"
