Data Storage | Science

'Millipede' Prototype Shown at CeBIT (156 comments)

neutron_p writes "It was a subject of much controversy for the last 5-7 years, but it has finally been prototyped. At CeBIT, IBM showed for the first time a prototype of "Millipede," a nanomechanical data storage device. Using revolutionary nanotechnology, scientists at the IBM Zurich R&D Lab in Switzerland have pushed into the millionths-of-a-millimeter range, achieving data storage densities of more than one terabit per square inch, equivalent to storing the content of 25 DVDs on an area the size of a postage stamp. The principle of operation is comparable to old punch cards, but now with structural dimensions on the nanometer scale and the ability to erase data and rewrite the medium."
This discussion has been archived. No new comments can be posted.

  • by tinrobot ( 314936 ) on Sunday March 13, 2005 @01:11PM (#11926797)
    Like when you drop a three foot tall stack of them in the computer lab and have to spend several hours putting them back in order?

    (true story)
  • by SIGBUS ( 8236 ) on Sunday March 13, 2005 @01:12PM (#11926804) Homepage
    ...the world's smallest keypunch.
    • Re:Also shown... (Score:5, Insightful)

      by deathcloset ( 626704 ) on Sunday March 13, 2005 @01:27PM (#11926906) Journal
      http://en.wikipedia.org/wiki/Compact_disk
      The information on a standard CD is encoded as a spiral track of pits moulded into the top of the polycarbonate layer

      Sometimes it's true: the more things change, the more they stay the same. The preferred method for long-term data storage still involves making an impression.

      The oldest methods of "data storage" go back to the birth of written language, and involved either making impressions in sand or, for more permanent storage, engraving into stone.

      How small our stones have gotten, eh? :)

  • by deft ( 253558 ) on Sunday March 13, 2005 @01:12PM (#11926806) Homepage
    "The principle of operation is comparable with the old punch cards"

    So now we feed these stamp-sized cards into the big machine, and it says "Working! Working! Working!" till it spits out another stamp with the answer.

    Awesome.
  • by Anonymous Coward
    but how many Libraries of Congress per VW Beetle is it?
  • What about speed? (Score:2, Interesting)

    by Anonymous Coward
    This kind of device would be incredible for backup purposes, and the recording method seems to be fast as well, but would it tolerate almost unlimited rewrites? In that case, this technology could finally replace magnetic devices. Solid state is always better, but so far the existing alternatives don't offer the durability and flexibility of hard disks.
    • End of TFA: "More than 10,000 writing and overwriting cycles have proved the concept's suitability as a reusable storage medium."

      This is the same ballpark as flash memory, so it still isn't as flexible as a hard drive. But the big increase in storage space should offset that.
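      If the cycle budget really is flash-like, the usual answer is wear leveling: never let one region soak up all 10,000 cycles. A minimal sketch in Java (all names hypothetical, nothing IBM has described):

      import java.util.PriorityQueue;

      class WearLeveler {
          // One physical block of the medium, ordered by how worn it is.
          static class Block implements Comparable<Block> {
              final int physical;
              int eraseCount = 0;
              Block(int physical) { this.physical = physical; }
              public int compareTo(Block o) { return Integer.compare(eraseCount, o.eraseCount); }
          }

          private final PriorityQueue<Block> freeBlocks = new PriorityQueue<>();
          private final Block[] logicalToPhysical;

          WearLeveler(int logicalBlocks, int physicalBlocks) {
              logicalToPhysical = new Block[logicalBlocks];
              for (int p = 0; p < physicalBlocks; p++) freeBlocks.add(new Block(p));
          }

          // On every write, remap the logical block to the least-worn free block.
          int write(int logicalBlock) {
              Block fresh = freeBlocks.remove();    // least-worn, thanks to the heap
              fresh.eraseCount++;
              Block old = logicalToPhysical[logicalBlock];
              logicalToPhysical[logicalBlock] = fresh;
              if (old != null) freeBlocks.add(old); // retired block rejoins the pool
              return fresh.physical;                // the device writes here
          }
      }

      With 10,000 cycles spread evenly over the whole medium, effective lifetime scales with capacity, which is exactly where a terabit-class device has the advantage.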

  • The next step is obvious...the Ultra Shuffle!

    Also plays FM radio, records voice, AND hooks up to your retina so you can watch a random selection of up to 25 DVDs!
  • by Tavor ( 845700 ) on Sunday March 13, 2005 @01:14PM (#11926832)
    That is some insane data density, to have more than one terabit per square inch. And here those crazy people thought nanotech would bring about "grey goo" -- little did they know the only goo it'd bring about is from the thoughts of Slashdotters having multiple TBs of porn on their hard drives.
  • by Infinityis ( 807294 ) on Sunday March 13, 2005 @01:17PM (#11926844) Homepage
    Unfortunately, I hear that any hardware that uses the "millipede" ends up being a bit "buggy"...
  • Now what would really be cool is if we actually used this like a stamp. You know, where secret messages aren't written on the letter, but are actually in the stamp itself.

    Granted, stamps are expensive enough as it is, so maybe it's not such a great idea...
  • This is invigorating to see. It's interesting that we come full-circle back to punch cards with these polymer wafers. I wonder if it will suffer from any of the read/write limitations that exist with flash ROM storage?

    At any rate, given that it requires so little energy and is orders of magnitude smaller than magnetic storage, if it's as reliable as magnetic and optical discs, this would revolutionize storage even in long-term applications where data reliability is a factor.
    • by kebes ( 861706 ) on Sunday March 13, 2005 @01:31PM (#11926931) Journal
      The article quotes 10,000 read/write cycles. Given that this number is probably a slight exaggeration for PR purposes, it's a good start, but needs optimizing. Hopefully by the time this technology makes it to market, they will have increased that number enough to be competitive with magnetic drives. I think this will definitely be a viable replacement for flash drives.

      The technology uses localized heating of a polymer past its glass transition. There is no reason this should cause much material degradation if it is done properly (i.e., avoiding temperature spikes, and engineering polymers that have an accessibly low glass-transition temperature while also being robust against thermal cycling). I think with enough engineering this could be done. There is a lot of research on heating polymers past the glass transition, so they won't be reinventing the wheel or anything.
  • ... is a nano gun and some nano mushrooms! Ah those were the days. [klov.com]
  • by RyanFenton ( 230700 ) on Sunday March 13, 2005 @01:23PM (#11926881)

    1. What's the read/write speed?
    2. What's the operating temperature requirements?
    3. What's the max operating heat output per unit?
    4. How many concurrent inputs/outputs can we get into a unit?
    5. What's the failure rate/expected operating lifespan?
    6. What's the near-term expected commodity cost of these units?
    7. Given 1-6, how many units would be needed to make a properly redundant filesystem with at least the reliability and speed of current file storage devices on the market? What would be the expected near-term cost?

    Ryan Fenton
    • ... all parameters within a range that was thought, when development started, to be marketable (and reasonable) by the end of development.

      CC.
      • by RyanFenton ( 230700 ) on Sunday March 13, 2005 @02:59PM (#11927409)
        The question behind the questions is what potential roles this product could fill.

        If it can't run at room temperature conveniently, but can be made cheap per storage space and is reliable, then it may be useful in stationary servers for extreme-mass remote storage.

        If it can run at room temperature and is somewhat affordable, but slow, it can be used as common backup.

        If it can end up close but superior to hard disk in all aspects, then it may replace them.

        If it can be fast enough to be used as live memory at room temperature, with conventional memory as cache, then even with a few limitations, it could transform the nature of computers as we experience them.

        There are many, many other possibilities. Yes, of course, as you suggest, price will match the market - but the role this technology can play is limited more by its logical capability than by the market. If the possibility is open, creating a new technology in a market is usually much more of an opportunity than just replacing another. That's why my questions are obvious - we all wonder how far this first generation of nanotechnology will take us.

        Ryan Fenton
        • From the press release [ibm.com]:

          "Rüschlikon, 3 March 2005--Given the rapidly increasing data volumes that are downloaded onto mobile devices such as cell phones and PDAs, there is a growing demand for suitable storage media with more and more capacity. ... ... Thus, it is ideally suited for use in mobile devices such as digital cameras, cell phones and USB sticks."

          So the demands of the environment seem to be specified.

          CC.
    • by kebes ( 861706 ) on Sunday March 13, 2005 @01:43PM (#11927001) Journal
      I can't answer on behalf of IBM or the millipede project, but if you want my opinion (as an academic researcher who uses similar technology), then I'd guess:

      1. Competitive with HDD, since the tips don't seek very far (100 microns max) and since data output from multiple tips can be done in parallel (in principle, 4000 bits at once, depending on data contiguity, etc.). The time required to actually 'melt' the divots might be the limiting factor, but again that should be offset by the ability to write 4000 bits at once.

      2. Room temperature is fine for piezos and cantilevers. Even cold temperatures should be fine. I imagine the material they use would stop responding properly if the device were too hot (above 70 C maybe), but if placed in a computer case away from the hottest components, it should be fine.

      3. Even though each tip uses local heating, I don't think the device temperature would be very high. In read mode, the cantilevers are passive and the piezo doesn't generate much heat (I use AFMs at work, and they don't generate heat the way a magnetic HDD does).

      4. As I describe in another post, each array in principle allows thousands of tips to read/write together, at the same time. Stacking a bunch of arrays in a real device is straightforward.

      5. Failure rate might be a problem, and needs consideration. In the lab, I can sometimes use a tip for a long time without damage, but sometimes they snap off. If the device is properly designed, I would guess failure rates for each tip would be okay. Polymer degradation or aging is a very real problem. Presumably they are optimizing that as best they can. I think initial devices will probably have extensive error correction, so that if one tip dies, the device can recover the data from that region and write it somewhere else (see the sketch at the end of this comment).

      6. The current cost for MEMS tips batch-processed like this runs from $1 per tip to as much as $50 per tip, depending on what you want. So an array might cost thousands of dollars. Of course, the tips I use are for a small market (academic research). It is easier to use lithography to make a bunch of tips than to make a Pentium chip, though, so I imagine if it went into mass production it wouldn't cost more than $100 per array. So, competitive with HDD.

      7. My guess: initial devices to hit the market will have 10 redundant arrays with tons of error-checking. The storage will be competitive with magnetic drives and transfer rates will be too. Cost will be a bit higher, but after being in production for about 5 years, most figures of merit will be better than HDD, and cost will be down to what we're currently used to paying for storage.

      But these are, of course, just my (hopefully educated) guesses.
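      To make answer 5 concrete, here is the kind of bad-tip remapping I have in mind, sketched in Java (all names hypothetical), analogous to bad-block remapping on a hard disk: a few tips are held in reserve and never carry logical data, and when ECC reports a dead tip, its region is rewritten through a spare.

      import java.util.ArrayDeque;

      class TipArray {
          static final int TIPS = 64 * 64;                  // 4096 cantilevers
          private final int[] remap = new int[TIPS];        // logical tip -> physical tip
          private final ArrayDeque<Integer> spares = new ArrayDeque<>();

          TipArray(int spareCount) {
              for (int t = 0; t < TIPS; t++) remap[t] = t;  // identity mapping at first
              // Reserve the last few tips; they hold no logical data of their own.
              for (int t = TIPS - spareCount; t < TIPS; t++) spares.add(t);
          }

          // Called when error correction flags a tip's region as unreadable.
          void markDead(int logicalTip) {
              if (spares.isEmpty()) throw new IllegalStateException("no spare tips left");
              remap[logicalTip] = spares.remove();
              // ...then reconstruct the region from ECC and rewrite it via the spare.
          }

          int physicalTip(int logicalTip) { return remap[logicalTip]; }
      }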
      • Thank you very much for that then!

        Sounds like there's a LOT of room for new production techniques and cost improvements! Even worst-case, this design shows a lot of promise. If things pan out, I'd love to be on one of the first teams possibly integrating this kind of stuff into future motherboards, chipsets and devices. Even with a limited lifespan (a "data health" meter on a hard drive would be annoying), this first generation of nanotech hardware looks very promising.

        Ryan Fenton
  • by kebes ( 861706 ) on Sunday March 13, 2005 @01:26PM (#11926896) Journal
    For those interested, here are some advantages I see to this technology:

    1. Increased storage density. More importantly, this prototype is not near any fundamental limit. Hence, it would appear that there is plenty of room to reduce the dimensions of the MEMS tips to increase storage densities way past what a magnetic drive can do.

    2. Data transfer rate. In principle, the thousands of different tips can all return data at the same time, compared to, say, 4 bits returned at once from a 4-platter HDD. Of course, in real situations, not all 4000 bits will necessarily be of interest, but I think with smart caching and device layout the throughput should be very high (i.e., contiguous bits in a file are spread out so that the entire file is read by the 4000 tips without anything moving; see the sketch at the end of this comment).

    3. Low seek times. In a HDD, the head must move by many centimeters in order to seek randomly. In Millipede, the entire surface moves by, at most, 100 micrometers to find a new location. It probably uses piezoelectrics, which are fast and robust. Thus, I see seek times being lower (at least in a mature device).

    4. Scalable. This prototype has a single array of tips on a single polymer layer. Obviously it is straightforward to build real devices using 10 or 20 of these arrays stacked. Unlike the platters in a HDD, these arrays could seek independently, so if properly designed, performance could be very good (like RAID, maybe?).

    5. Heat. The piezos shouldn't heat up too much, and even though the tips themselves use pinpoint heating to deform the polymer, I think the bulk device heat would be lower than a HDD spinning at 10k rpm. Less noise too.

    6. Cost. By using established MEMS technology (i.e.: the same lithography used to make microchips nowadays) I don't think implementation costs (and future scaling) will be too expensive (as compared to some more far-fetched nanotech ideas).

    This has been in the works for a long time, but I think we may actually see real devices soon! (6 years?) I think this technology has real potential, and I think IBM is right to pursue it.
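    To illustrate point 2, here is the layout I mean, as a sketch in Java (hypothetical numbers): stripe consecutive bits round-robin across the tips, RAID-style, so one scan step reads a whole stripe in parallel.

    class StripeLayout {
        static final int TIPS = 4096;   // one 64x64 array

        // Map a logical bit index to (tip, offset within that tip's field).
        static long[] locate(long bitIndex) {
            long tip = bitIndex % TIPS;     // consecutive bits land on different tips...
            long offset = bitIndex / TIPS;  // ...at the same offset on each tip
            return new long[]{tip, offset};
        }
    }

    Bits 0 through 4095 of a file all sit at offset 0 of different tips, so the array returns them in one parallel read, instead of one head dribbling them out serially.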
    • by Anonymous Coward
      A high storage density will require a lot of error correction and redundancy built into the medium (much like ECC RAM), which may affect the data transfer rate and increase the cost. The manufacturers will have a choice of either putting their own ECC chip onto the medium, raising the cost, or leaving the error correction to the memory controller, lowering the data transfer rate.
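      For a feel of what "built into the medium" means, here is the smallest classic example, a Hamming(7,4) code in Java (illustration only; nothing about IBM's actual scheme is in TFA): three parity bits protect four data bits, enough to correct any single flipped bit.

      class Hamming74 {
          // Pack 4 data bits into a 7-bit word; parity sits at positions 1, 2, 4.
          static int encode(int d) {
              int d1 = (d >> 3) & 1, d2 = (d >> 2) & 1, d3 = (d >> 1) & 1, d4 = d & 1;
              int p1 = d1 ^ d2 ^ d4;  // covers positions 1, 3, 5, 7
              int p2 = d1 ^ d3 ^ d4;  // covers positions 2, 3, 6, 7
              int p3 = d2 ^ d3 ^ d4;  // covers positions 4, 5, 6, 7
              return (p1 << 6) | (p2 << 5) | (d1 << 4) | (p3 << 3) | (d2 << 2) | (d3 << 1) | d4;
          }

          // Correct up to one flipped bit and return the 4 data bits.
          static int decode(int c) {
              int[] b = new int[8];
              for (int i = 1; i <= 7; i++) b[i] = (c >> (7 - i)) & 1;
              int s1 = b[1] ^ b[3] ^ b[5] ^ b[7];
              int s2 = b[2] ^ b[3] ^ b[6] ^ b[7];
              int s3 = b[4] ^ b[5] ^ b[6] ^ b[7];
              int errorPos = (s3 << 2) | (s2 << 1) | s1; // 0 means no error
              if (errorPos != 0) b[errorPos] ^= 1;       // flip the bad bit back
              return (b[3] << 3) | (b[5] << 2) | (b[6] << 1) | b[7];
          }
      }

      Real devices would use something far stronger (Reed-Solomon, say), but the tradeoff has the same shape: extra bits on the medium plus encode/decode work, paid for in either silicon or bandwidth.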
      2. Data transfer rate. In principle, the thousands of different tips can all return data at the same time, compared to, say, 4 bits returned at once from a 4-platter HDD. Of course, in real situations, not all 4000 bits will necessarily be of interest, but I think with smart caching and device layout the throughput should be very high (i.e.: contiguous bits in a file are spread out so that the entire file is read by the 4000 tips without anything moving).

      4096 bits is a sector, isn't it? That's the minim

    • It probably uses piezoelectrics

      Moving the whole array is done with standard electromagnets. I don't know whether they use piezos for moving the individual cantilevers up and down, though.

    • A modern hard disk can only read from a single head at a time. It can't read multiple surfaces in parallel. There is only one head positioner and one servo channel. To read multiple surfaces in parallel, you would need an independent head positioner and servo channel for each surface.
  • Should I return the 1gb SD card I just bought?
  • by The-Bus ( 138060 ) on Sunday March 13, 2005 @01:28PM (#11926913)
    "25 DVDs on an area the size of a postage stamp"


    What the hell does that mean? I know a postage stamp, but I would rather know REAL standards. What is the LoC/FF for that item? We need to use real scientific standards, people. In data storage we talk about bits and bytes; when you talk data density, you can only use LoC/FF. Anything else is ludicrous! It's like talking about car speeds in furlongs per week.

    Geez. I wish journalistic integrity was a bit higher. It just irks me to-

    What? What's LoC/FF?

    Libraries of Congress per Football Field of course. You know, the standard.

  • by Weaselmancer ( 533834 ) on Sunday March 13, 2005 @01:29PM (#11926920)

    ..was that this news is about 23 years old [klov.com], and that's gotta be some kind of record. Even for Slashdot.

  • If we are talking about a 4 GB DVD, one TB is 250 DVDs ...
  • From the story title, I thought they'd dug out a prototype Millipede arcade cabinet.

    That would have been much cooler, IMHO. They could have even made a couple bucks in quarters.
  • A mixture deal (Score:1, Insightful)

    by dauthur ( 828910 )
    This technology sounds wonderful, but at the same time dangerous. You could have some extremely potent monitoring devices with this thing: cameras that can record for weeks or months, microphones that can record for years, etc. Then again, the practical uses sound great, our RAM is going to be forever changed, and I won't need to sweat over 1 GB.
  • Transfer speed? (Score:4, Interesting)

    by Chris Pimlott ( 16212 ) on Sunday March 13, 2005 @01:37PM (#11926968)
    Holding enormous amounts of data becomes less and less useful in practical situations if you can't access a decent sized chunk of it quickly.
    • Worst case -- your _terabyte_ of data is going to need a hard drive for a predictive read-ahead cache and a delayed-write cache. You're already doing this for your hard drive with your RAM. Sure, a cache miss would suck, but oh well.

      If it's painfully slow then think of it as a replacement for backup tapes instead of hard drives.

      But as many other posters have pointed out, it has the equivalent of thousands of heads, so it's possible it could prod some serious buttock.
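      For what it's worth, the delayed-write layer is simple enough to sketch in Java (SlowStore and the block interface are hypothetical stand-ins for a millipede-like device):

      import java.util.LinkedHashMap;
      import java.util.Map;

      interface SlowStore {
          byte[] readBlock(long lba);
          void writeBlock(long lba, byte[] data);
      }

      class WriteBackCache extends LinkedHashMap<Long, byte[]> {
          private final int capacity;
          private final SlowStore backing;

          WriteBackCache(int capacity, SlowStore backing) {
              super(capacity, 0.75f, true); // access-order iteration gives us LRU
              this.capacity = capacity;
              this.backing = backing;
          }

          // Evict the least-recently-used block, flushing it to the slow store.
          // (A real cache would track dirty blocks and skip flushing clean ones.)
          @Override
          protected boolean removeEldestEntry(Map.Entry<Long, byte[]> eldest) {
              if (size() > capacity) {
                  backing.writeBlock(eldest.getKey(), eldest.getValue());
                  return true;
              }
              return false;
          }

          byte[] readBlock(long lba) {
              byte[] b = get(lba);
              if (b == null) {               // miss: the slow fetch the parent worries about
                  b = backing.readBlock(lba);
                  put(lba, b);
              }
              return b;
          }

          void writeBlock(long lba, byte[] data) {
              put(lba, data);                // delayed write: hits the medium on eviction
          }
      }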

  • As in the US 2000 elections? Then I don't want that!!!! The brother of the CPU will always win, because he gets a "special recount" on the punch cards...
  • C'mon, get with the program: Millipede was announced [tomheroes.com] 23 years ago.
  • Getting there... (Score:5, Insightful)

    by sdo1 ( 213835 ) on Sunday March 13, 2005 @01:53PM (#11927048) Journal
    We're getting there, but we're not there yet. And we won't be until storage is truly ubiquitous. I've actually spent some of my weekend re-organizing my music collection, ripping CDs that I hadn't listened to in a while, etc. But even with the 600 GB of storage in my PC, I still can't have everything I want unless it's compressed. And I'm thinking about how to listen to my collection in my car. Bringing hundreds of CDs around with me isn't practical. MP3 CDs hold maybe 10-20 albums. HDD-based devices (iPods and the like) still can't hold everything I own... not even close. And I want a DVD server, so rather than pulling out the DVD, I can just call up one of the hundreds of DVDs I own on a menu.

    Yes, storage is becoming more impressive all the time. But it's still a very long way from the point where you don't have to think about how and where you store and move your files. And it will be very cool when that day comes.

    -S
    • We're getting there, but we're not there yet. And we won't be until storage is truly ubiquitous. I've actually spent some of my weekend re-organizing my music collection, ripping CDs that I hadn't listened to in a while, etc. But even with the 600 GB of storage in my PC, I still can't have everything I want unless it's compressed. And I'm thinking about how to listen to my collection in my car. Bringing hundreds of CDs around with me isn't practical. MP3 CDs hold maybe 10-20 albums. HDD-based devices (iPods and th
      • Cut out the middleman as well - when you buy the new DVD, there's just a bit that flips on for your account, and you have access to the global copy.

        Oh, don't get me wrong. I agree completely with what you said. That would be far better than practically infinite storage. But I have absolutely ZERO faith that the media industry will ever come to their senses enough to allow it in such a simple and non-obtrusive way. They will ALWAYS be trying to control it one way or another (see DVD region coding as an

      • But even with the 600G of storage in my PC, I still can't have everything I want unless it's compressed.

      And you never will, because the size of what you want will increase as well. It's a known fact that for most of us, our desires grow faster than our ability to fulfill them - and that, like many things about us, cuts both ways. It can be a source of perpetual unhappiness, but it can also be a powerful drive for innovation.

      • You know, I disagree. For the last 10 years one could make the case that storage needs were driven by one thing and one thing only: multimedia. Assuming that this is the limit of our storage needs, we can say that we need about 1 TB per movie (uncompressed, of course) and so between 10,000 and 100,000 TB for typical storage needs. We also need on the order of 10 TB of RAM to satisfy existing demand. Further, lugging around those 500 GB HDDs is impractical, so those need to shrink to the size of microdrives. The end res
    • I think even before that day comes it will be possible to stream everything you want from the Net. Come to think of it, it is already possible technologically using a broadband connection. The only problem is that we need to wait a year or two until the movie industry wises up to online distribution the way music studios (and even book publishers, to some extent) did. The second problem is lack of mobile broadband. By 2008-2010 it will probably be possible for first adopters to stream all media they need i
  • by Peaked ( 856340 ) on Sunday March 13, 2005 @01:54PM (#11927051)
    What this obviously means is that I'm one step closer to a cyberpunk style computer in my skull. Who needs to learn when you have google access directly interfaced with your brain?

    God, I hope I'm kidding...
    • Re: (Score:2, Funny)

      Comment removed based on user account deletion
    • I know you were probably just joking, but here goes just in case: all the data on google is becoming increasingly _useless_ without a human expert filtering the good from the bullshit.

      Take any topic from politics to computing to medicine to god-knows-what. You'll get some tens to hundreds of thousands of hits, 90% of them written by bloggers talking out of the ass, and 90% of the rest obsolete.

      E.g., I kid you not, on the German-language wikipedia there was a comprehensive article about cloned _didgeridoos
      • How does the "modern JIT optimizer" know whether the comparison of i to the array length is correct, and therefore know when it can safely optimize away the bounds check?

        If that were possible, how come we don't have the same "modern optimization" in C/C++ and thereby rid ourselves of the dreaded off-by-one error?

        In other words I don't believe you.
        • In Java: Because you do a comparison like "i < array.length", that's why. Java arrays already contain the length. In C terms, a Java array is really an object containing the actual array _and_ the length.

          That's why Java:

          1. Can throw an Exception when you address out of bounds, instead of having a buffer overflow exploit.

          2. Can know when you've already compared that variable to the bounds, so it doesn't have to again. A "for (i = 0; i < arrayVariable.length; i++)", and no other code touching "i" inside the loop,
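          To spell out the pattern (a sketch of the idea, not actual JIT internals):

          class BoundsCheckSketch {
              static int sum(int[] arrayVariable) {
                  int total = 0;
                  for (int i = 0; i < arrayVariable.length; i++) {
                      // Inside this loop the JIT has already proved
                      // 0 <= i < arrayVariable.length, so the per-element
                      // bounds check can be hoisted out or eliminated.
                      total += arrayVariable[i];
                  }
                  return total;
              }
          }

          C can't do the same in general because a bare pointer carries no length for the compiler to compare against.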
  • If you assume some device that stores one bit per atom on the surface of a crystal of, say, silicon, exactly what storage density are we talking about?

    Assume only a 2D array, as I really suspect getting at the internal atoms of a cube will never happen. (Though it is likely these devices can be stacked, so eventually there may be engineering done to make them as thin as possible...)

    My guess is that this device is still many orders of magnitude away, but I really don't know.
    • Well, if storage that small was ever needed, you could store multiple bits in each atom, based on spin. The more precisely you can measure and change the spin of an atom, the greater storage density you can have, so the fundamental limit is the limit of measuring the spin of the atom, not just the number of atoms in a given array.
    • Modern magnetic HDD stores on the order of 100 GB per 3.5-inch platter, which is ~0.4 GB/cm^2.

      If each surface atom on a material encodes one bit of data, then your storage density depends on the density of your material. For example, let's say that the atoms are on a square grid, spaced by 0.15 nm (i.e., 1.5E-10 m, the length of a typical carbon-carbon bond). That means you have about 4E15 atoms per cm^2. So if each atom holds one bit, that means about 600,000 GB/cm^2.

      Of course, actually u
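      A quick sanity check of the arithmetic above, in Java (same 0.15 nm square-grid assumption):

      public class AtomDensity {
          public static void main(String[] args) {
              double spacing = 0.15e-9;                          // m, ~ one C-C bond
              double atomsPerCm2 = Math.pow(1e-2 / spacing, 2);  // ~4.4e15 atoms/cm^2
              double gbPerCm2 = atomsPerCm2 / 8 / 1e9;           // 1 bit/atom -> ~560,000 GB/cm^2
              double hddGbPerCm2 = 0.4;                          // the HDD figure quoted above
              System.out.printf("atom-per-bit beats 2005-era HDD by ~%.0e x%n",
                      gbPerCm2 / hddGbPerCm2);                   // ~1e6: six orders of magnitude
          }
      }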
  • IBM kick ass again (Score:2, Insightful)

    by Anonymous Coward
    I'm really glad that there are still American companies around doing fundamental technological research that will improve our lives in the future. Sure, IBM may be huge and somewhat evil in its own way, but at least they know how to actually invent useful things [duxcw.com], rather than relying on lawsuits and dubious claims of "intellectual property" and whatnot to extract wealth from others.
  • IT Changes (Score:3, Insightful)

    by KrackHouse ( 628313 ) on Sunday March 13, 2005 @02:22PM (#11927206) Homepage
    We're all going to be out of work in a few years if this continues! /sarcasm I really like advances like this because it saves us time. Imagine what politics would look like if all of the IT brains that are writing redundant Perl scripts suddenly applied themselves to history and politics. It'd probably change the world.
    It's just like the industrial age: we can put down our sledgehammers (mice) and redirect our energy to more important things.
  • by Anonymous Coward on Sunday March 13, 2005 @02:44PM (#11927324)
    It seems to me that this kind of technology has been IBM's wild card for a long time. I think they've got a very good idea of what the face of the computer world will look like in a couple of years, and they're doing everything they can to come out ahead. First they became a Linux house, most likely because Linux has proved to be a very nice architecture for things like clustering. Now they're finally using the nanotechnology they've been working on for years to create an amazing new technology like this. A technology, I might add, that has the potential to completely dominate the market and change the face of the computer world to the point where IBM is the largest hardware manufacturer in the world... yet again. I'd love to see what's in their business plan for the next few years.
  • by photonic ( 584757 ) on Sunday March 13, 2005 @02:46PM (#11927330)
    Total device: 6.4 mm length, tip pitch 100 um
    -> 64 rows and 64 columns
    -> 4096 tips

    Writing speed (from TFwebsite): 'a few microseconds' (say 10)
    -> 4096/10e-6 = 410 Mbit /sec

    Per tip: range 100 um, bit pitch 10 nm
    -> 10000 x 10000 bits = 100 Mbit

    Position resolution (really neat device using micro-heaters): 2 nm over 120 um
    -> 60000 positions observable (needs 16-bit addressing)
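    The same envelope, spelled out in Java (the 10 us write time is my guess from 'a few microseconds', not an IBM figure):

    public class MillipedeEnvelope {
        public static void main(String[] args) {
            double side = 6.4e-3, tipPitch = 100e-6;            // 6.4 mm device, tip every 100 um
            int rows = (int) Math.round(side / tipPitch);       // 64
            int tips = rows * rows;                             // 4096

            double writeTime = 10e-6;                           // s per bit per tip
            double rate = tips / writeTime;                     // ~4.1e8 bit/s = ~410 Mbit/s

            double range = 100e-6, bitPitch = 10e-9;            // each tip scans a 100 um field
            double bitsPerTip = Math.pow(range / bitPitch, 2);  // 1e8 = 100 Mbit

            double totalGbit = tips * bitsPerTip / 1e9;         // ~410 Gbit (~51 GB) per array

            double travel = 120e-6, resolution = 2e-9;
            double positions = travel / resolution;             // 60000 -> fits in 16 bits

            System.out.printf("tips=%d rate=%.0f Mbit/s perTip=%.0f Mbit total=%.0f Gbit positions=%.0f%n",
                    tips, rate / 1e6, bitsPerTip / 1e6, totalGbit, positions);
        }
    }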
  • I guess this means I'm going to have to buy another copy of the White Album.
  • I noticed that this is being designed for the SD form factor, but I'm curious whether they would be able to make a Type II CF card with 8 of these devices in it, for 1 terabyte.

    What about data transfer rate? Are these things fast enough they could compete with hard disk drives? Could we be seeing petabyte hard drives sometime in the future?
  • by Anonymous Coward
    Will it have a hanging chad problem?

  • In my opinion, the ideal storage system of the future would supply enough capacity that an individual could keep one medium for their personal data for their lifetime... perhaps using some form of log-based filesystem, where data is only appended, and never erased.

    Maybe millipede will be a step towards this outcome... or at least, I can hope.

  • Not nanotech (Score:3, Insightful)

    by argent ( 18001 ) <peter@slashdot.2 ... m ['ong' in gap]> on Sunday March 13, 2005 @06:22PM (#11928631) Homepage Journal
    Not to take away from the extreme coolness of this, since it is cool, but it's not nanotechnology. It's built using microelectronic fabrication techniques. We're a long way from nanofabrication yet.
  • by Thagg ( 9904 ) <thadbeier@gmail.com> on Sunday March 13, 2005 @11:08PM (#11929950) Journal
    It seems, from all I've read about this millipede technology, that the real bugaboo is re-writing bits. I'm wondering just how important that really is. While I would preserve the ability to destroy data (easily implemented by writing pits at every location) I think that 99% of the uses of this massive storage could be done without re-writing.

    Let me think of a couple of scenarios for these chips:

    1) Music storage and playback, as in an Ipod.

    This is a perfect example of something that you never need erase. You very rarely want to replace the previous version of a song with a newer one -- mostly you just want to add to your collection. In the very odd case that I never want to hear a song ever again, I could destroy it.

    2) My own business -- visual effects.

    We scan and create a few terabytes a year of images. Perhaps surprisingly, we throw almost none of them away during production, keeping old versions of images as reference. Disks are cheap enough that there's no need to erase frames during a project, and these millipede devices promise to be rugged and permanent enough to act as their own long-term backup. We'd just disconnect the drives and store them on a shelf forever.

    Clearly, we'd want to change the way that filesystems work -- maybe the directory structure would be kept in flash memory where just the data bytes are on the millipede surface until it's time to inter the disk in the archive.

    I think that IBM, and others, should really consider the possibility of non-rewritable millipedes, especially because abandoning that capacity would appear to make everything else much much simpler and cheaper. They might make it into production sooner too.
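    For the curious, the split I'm describing is easy to sketch in Java (hypothetical names): the millipede region is append-only, and the small mutable directory lives in flash.

    import java.util.HashMap;
    import java.util.Map;

    class WormStore {
        private final StringBuilder medium = new StringBuilder();     // stands in for the write-once surface
        private final Map<String, int[]> directory = new HashMap<>(); // "flash": name -> {offset, length}

        void write(String name, String data) {
            int offset = medium.length();
            medium.append(data);          // append only; existing pits are never rewritten
            directory.put(name, new int[]{offset, data.length()});    // only the flash entry mutates
        }

        String read(String name) {
            int[] loc = directory.get(name);
            return medium.substring(loc[0], loc[0] + loc[1]);
        }
    }

    "Replacing" a file just appends the new version and repoints the directory entry; the old bits stay on the medium, which matches the keep-old-versions-as-reference behavior we want anyway.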

    Thad Beier

"Our vision is to speed up time, eventually eliminating it." -- Alex Schure

Working...