Data Storage | IBM | Hardware | Science

IBM Shrinks Bit Size To 12 Atoms

Lucas123 writes "IBM researchers say they've been able to shrink the number of iron atoms it takes to store a bit of data from about one million to 12, which could pave the way for storage devices with capacities that are orders of magnitude greater than today's devices. Andreas Heinrich, who led the IBM Research team on the project for five years, said the team used the tip of a scanning tunneling microscope and unconventional antiferromagnetism to change the bits from zeros to ones. By combining 96 of the atoms, the researchers were able to create a byte, and with five such bytes they spelled out the word THINK. That solved the theoretical problem of how few atoms it could take to store a bit; now comes the engineering challenge: how to make a mass storage device perform the same feat as a scanning tunneling microscope."
This discussion has been archived. No new comments can be posted.

  • by Xanny ( 2500844 ) on Thursday January 12, 2012 @04:59PM (#38678124)

    Preface: I'm just a programmer nerd who reads slashdot. I have no idea what I am talking about.

    I wonder if it would be possible to store data as the ionization state of a solid in the normal operating range of our technology (and probably a small atom, like carbon), where ionized atoms in some rigid lattice represent one bits and neutral atoms represent zero bits. Yeah, there are huge problems, like preventing electron shell states from dropping and keeping the electrons off the negatively charged carbon, but it seems like a great objective, considering the next smaller data storage scheme after atomic ionization would be measuring quark states to represent multi-valued data.

    • by alphatel ( 1450715 ) * on Thursday January 12, 2012 @05:02PM (#38678156)
      That's so 2011. You need a neutrino computer.
      • Like Xanny, I too am just extrapolating from what I know of high school physics and chemistry - I don't claim to know much about storing bits on atoms. I do know about the semiconductor business side of what I write below, since I have worked in it.

        I'm assuming that there is no combinational logic involved here, in which case, where a bit would require back-to-back inverters, that would make at least 8 atoms the minimum. I'm assuming that we're just talking about storing floating bits here, in
    • I also have no idea what I'm talking about but:

      Isn't an ionized atom one with too many or too few electrons? Don't electrons flow freely through any material even remotely conductive? So wouldn't you need to separate the atoms with an insulating material of sufficient width to stop electrons moving between the atoms?

    • by Anonymous Coward

      Preface: I'm just a programmer nerd who reads slashdot. I have no idea what I am talking about.

      Most of us consider that a given here.

      • And most people who post here assume an air of authority they don't deserve. Given the honesty of the disclaimer, I'm more inclined than I otherwise would be to believe Xanny knows what they're talking about. But if I think about this too much I fear I'll find myself in an infinite loop.

    • by tocsy ( 2489832 ) on Thursday January 12, 2012 @05:38PM (#38678546)

      I'm a materials science graduate student, and my research is on semiconductors. While I don't work with materials for data storage, I have a pretty good background in electronic properties of materials so maybe I can shed some light on the situation.

      Basically, I suppose this would be hypothetically possible, but the problems you'd face would be very, very difficult to solve. The big problem here is that in order to keep something ionized, you would have to completely isolate it from any other atoms that might donate or steal an electron. Again, it's hypothetically possible, but impractical, considering the only atoms that reliably won't do that are noble gases. Not to mention that storing data as ionized/unionized atoms is fundamentally different from the way we store data now (magnetic domains). I think the more reasonable idea would be to shrink magnetic domains, as well as the number of magnetic domains required to form a bit (see the sketch below). If I remember correctly, each magnetic domain currently consists of several hundred atoms and each bit consists of around 100 magnetic domains. As the article states, the best we could get is one atom representing one bit, and we're far more likely to keep using magnetism than to switch to ionization as the mechanism for telling ones from zeroes.
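
      A rough back-of-the-envelope using the figures above (300 atoms per domain and 100 domains per bit are assumptions standing in for "several hundred" and "around 100", not measured values):

        # Back-of-the-envelope comparison, using the rough figures above.
        atoms_per_domain = 300   # assumed: "several hundred"
        domains_per_bit = 100    # assumed: "around 100"

        atoms_per_bit_today = atoms_per_domain * domains_per_bit
        ibm_atoms_per_bit = 12

        print(atoms_per_bit_today)                      # 30000
        print(atoms_per_bit_today / ibm_atoms_per_bit)  # 2500.0, i.e. >3 orders of magnitude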

      • I think, perhaps, the challenge here would be finding a rapid, cheap way to write/read the data. But one idea that occurred to me, instead of ionizing atoms: what if you could find a simple molecule that could be changed into another simple molecule by the addition/removal of one atom?

        Something like Carbon Monoxide = 0, Carbon Dioxide = 1. Seems like you could potentially get a lot of data density with something like that?

        • by Khyber ( 864651 )

          That would require tons of power to make bonds. Also, where would you get the extra oxygen?

          We have something better - Phase Change memory. Made from the same stuff CD-RWs are made of.

          • by tocsy ( 2489832 )

            True, and phase change memory is pretty badass. However, there's definitely a larger size limit on phase change memory than magnetic data storage - you pretty much by definition have to have more than one atom in a system to determine what phase it's in. Also, you then have to figure out how to read the data (probably either optically or by measuring the resistance of the bit), all of which would require a fair-sized bit.

        • So what are you going to "stick" these bits to? And how would you change them? Not trying to be negative; you might have a grand insight.

      • Even if you're insisting on magnetic domains only, you aren't limited to one bit per atom. You can point domains in more directions than just up and down - if you accept domains pointing at right angles to the usual directions, you can get two bits per atom... or as many as you want, limited only by the angular resolution of your sensor. It'd be totally impractical to do something like that, though.
        • by tocsy ( 2489832 )

          Interesting point, I hadn't thought of that. Although what you're talking about is still only one "bit" per atom; each bit would just have more possibilities than ones and zeroes. So you could have, for example, a bit with the possibility of being 0-5 for each of the cardinal directions, in which case you'd have to use a language with base 6 instead of binary. Still, conceptually very interesting.

          • I think the word you're looking for is "symbol" rather than "bit". A bit by definition is either 0 or 1.
          • a bit with the possibility of being 0-5 for each of the cardinal directions, in which case you'd have to use a language with base 6 instead of binary.

            The smallest number divisible by both 5 and 2 is 10. But bytes are 8 bits, so the smallest number divisible by all three is 40.

            Minimum group: 8 atoms and 5 bytes.
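
            For the curious, here's the information-theory arithmetic, assuming the parent's six orientations per atom; the 8-atom/5-byte grouping above only works out if each atom instead holds exactly 5 whole bits (32 distinguishable states), which is an assumption:

              import math

              # Bits per atom for n distinguishable magnetic orientations.
              for n in (2, 6, 32):
                  print(n, math.log2(n))
              # 2 -> 1.0 bit, 6 -> ~2.585 bits, 32 -> exactly 5 bits

              # With 5 whole bits per atom, the smallest group that is a whole
              # number of bytes is lcm(5, 8) = 40 bits: 8 atoms holding 5 bytes.
              print(math.lcm(5, 8))  # 40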

      • by Thing 1 ( 178996 )

        unionized atoms

        Hoffa's revenge?

    • by quenda ( 644621 )

      Yes, it would be so cool to say "My scanning tunneling microscope goes to ELEVEN [wikipedia.org]".

    • by dissy ( 172727 ) on Thursday January 12, 2012 @08:05PM (#38679930)

      There was a wonderful paper in Nature titled "Ultimate physical limits to computation" by Seth Lloyd (yes, the guy with the funny laugh), which discussed exactly how small computation and processing can ever get (short of discovering new physics, of course).

      Entry page: http://arxiv.org/abs/quant-ph/9908043 [arxiv.org]
      Direct PDF Link: http://arxiv.org/PS_cache/quant-ph/pdf/9908/9908043v3.pdf [arxiv.org]

      It's a fascinating read, which I highly recommend. I believe it will answer your questions as well.

      The summary of the paper:

      Computers are physical systems: what they can and cannot do is dictated by the laws of physics. In particular, the speed with which a physical device can process information is limited by its energy and the amount of information that it can process is limited by the number of degrees of freedom it possesses. This paper explores the physical limits of computation as determined by the speed of light $c$, the quantum scale $\hbar$ and the gravitational constant $G$. As an example, quantitative bounds are put to the computational power of an `ultimate laptop' with a mass of one kilogram confined to a volume of one liter.
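
      To put a number on that headline bound: the paper's limit on operations per second is 2E/(pi*hbar) with E = mc^2 (the Margolus-Levitin theorem). A minimal sketch for the one-kilogram "ultimate laptop", using standard physical constants:

        # Lloyd's "ultimate laptop": max operations/second for mass m,
        # using the Margolus-Levitin bound 2E/(pi*hbar) with E = m*c^2.
        import math

        hbar = 1.054571817e-34  # J*s, reduced Planck constant
        c = 2.99792458e8        # m/s, speed of light
        m = 1.0                 # kg, as in the paper

        E = m * c**2
        ops_per_sec = 2 * E / (math.pi * hbar)
        print(f"{ops_per_sec:.3e}")  # ~5.426e+50, matching the paper's figure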

    • Where the hell did I put that beer
  • How do we know it isn't possible to store a bit in fewer than 12 atoms? I'm not seeing how that "solved" anything, only that they proved it was possible to store a bit with as few as 12 atoms.
    • It solved a theoretical problem. They never solved a real problem. But in theory, it was a problem and they solved it.

      • by Anonymous Coward

        They didn't solve a theoretical problem. The theoretical limit is a function of Planck's constant, the uncertainty principle, and the amount of energy you're allowed to use. They solved (part of) the engineering problem. There's a ways to go before they solve the production/commercialization problem.

        • by tragedy ( 27079 )

          There isn't just one theoretical limit. There are lots of theoretical limits. Some of those theoretical limits people are more sure of than others, but dozens or even hundreds of theoretical limits have been broken through already in computer engineering and in data storage specifically. Then the theories have to change. There were plenty of people who theorized that Charles Babbage's Difference Engine could never work. They might not have been very good theories, but many people still accepted them until t

    • Theoretically they could do it with subatomic particles; in practice, who knows when, if ever, that will become viable. If they manage it, though, it would be pretty mind-blowing. I'm guessing it's going to be extremely difficult to accomplish and will take decades to arrive, if it ever does.

    • Re:And... (Score:5, Informative)

      by DriedClexler ( 814907 ) on Thursday January 12, 2012 @05:14PM (#38678298)

      There are theoretical limits to how much information can be stored in a given amount of matter -- this is given by the molar entropy, typically expressed in J/(K*mol). But it can also be expressed, more intuitively, as bits per atom.

      (Yes [wikipedia.org], you can convert between J/K and bits -- they measure the same thing, degrees of freedom.)

      Per this table [update.uu.se], iron has a molar entropy of 27.3 J/K*mol, or 4.73 bits/atom.

      IBM is claiming an information density of (1/12) bits/atom, which is reasonable -- the thermodynamic limit is ~57x greater.
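
      The conversion, spelled out (the 27.3 J/(K*mol) figure is iron's standard molar entropy from the table above; R = N_A * k_B does the mole-to-atom bookkeeping):

        # Convert iron's standard molar entropy to bits per atom.
        # S_molar / (R * ln 2) = bits per atom, since R = N_A * k_B.
        import math

        R = 8.314462618        # J/(K*mol), gas constant
        S_iron = 27.3          # J/(K*mol), standard molar entropy of iron

        bits_per_atom = S_iron / (R * math.log(2))
        print(bits_per_atom)             # ~4.74
        print(bits_per_atom / (1 / 12))  # ~57x IBM's 12-atoms-per-bit density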

      • Re:And... (Score:5, Informative)

        by timeOday ( 582209 ) on Thursday January 12, 2012 @05:41PM (#38678580)
        And the document you cited assumes a temperature of 298.15 K (77 F). At room temperature, the IBM technique requires about 150 atoms, not 12 (cite [extremetech.com]):

        "At low temperatures, this number is 12; at room temperature, the number is around 150 - not quite as impressive, but still an order of magnitude better than any existing hard drive or silicon (MRAM) storage solution."

        So there is even more headroom in the thermodynamic limit.

    • Re:And... (Score:5, Funny)

      by gandhi_2 ( 1108023 ) on Thursday January 12, 2012 @05:41PM (#38678590) Homepage

      You know, when you are storing bits and you are already at 12, where can you go from there? Where?

      Nowhere.

      Ours goes to 11.

      One smaller.

    • by Surt ( 22457 )

      They solved the question of whether or not it was possible with 12. Now on to 11!

    • It'd be foolish to try to engineer a 12-atoms-per-bit storage device without first demonstrating that it is actually possible to do.

      It is a proof of concept and shows that using 12 atoms to store a bit isn't impossible.

  • by s_p_oneil ( 795792 ) on Thursday January 12, 2012 @05:00PM (#38678132) Homepage

    IBM's new vision:
    A scanning tunneling microscope in every home with an IBM sticker on it.

    • by paleo2002 ( 1079697 ) on Thursday January 12, 2012 @05:10PM (#38678256)
      Next thing you know, everyone will have to buy appliances with electron guns, magnetrons, lasers and other outlandish sci-fi devices built into them. They'll probably take up entire rooms and cost hundreds of thousands of dollars!
      • Re: (Score:2, Funny)

        by Anonymous Coward

        ...and cost hundreds of thousands of dollars!

        So like, TWO lattes?

      • by jmkaza ( 173878 )

        Have to, or get to?

      • CRT TV: electron gun: check.
        microwave oven: magnetron: check.
        CD player: laser: check.

        I have a portable finger pulse oximeter in my home medical kit. Think of it as one third of a tricorder.

        Not so outlandish... or large... or expensive.

      • by treeves ( 963993 )

        Cover all those devices with just one super awesome appliance that should be in every kitchen: the microwave oven with built-in oscilloscope and CD player!

    • More likely you will see this sort of thing used by "cloud" providers, who can afford a high up-front cost and greatly expand their capacity. A lot of data will sit unused on service providers' storage devices, and so they can have a much higher ratio of storage to computing power.
    • by Anonymous Coward

      IBM is such a behemoth. Intel brought single-Atom chips to market, like, five years ago...

    • by DriedClexler ( 814907 ) on Thursday January 12, 2012 @07:44PM (#38679714)

      Please. There's a world market for maybe 5 scanning tunneling microscopes.

  • by Prime Mover ( 149173 ) on Thursday January 12, 2012 @05:00PM (#38678136)

    ...once they have these new mass-storage devices, how can I turn one into a homebrew scanning tunneling microscope?

    • by tragedy ( 27079 ) on Thursday January 12, 2012 @09:29PM (#38680882)

      You can make one now if you like. There's an article here [popsci.com] about someone working on an open source kit, but it also mentions other places that will sell you a kit to build your own.

      • by Nursie ( 632944 )

        Page not found :(

        Not that I was seriously going to build one, but I may like to read about it.

        • by tragedy ( 27079 )

          Weird. I tested the link. Just tried it again and it's working. It should be http://www.popsci.com/diy/article/2010-07/homemade-open-source-scanning-tunneling-electron-microscope

  • awesome (Score:5, Funny)

    by demonbug ( 309515 ) on Thursday January 12, 2012 @05:01PM (#38678146) Journal

    Now they just have to work on that random access time of 300000 milliseconds.

    Should be easy, right?

  • Now give me my subdermal and/or extraneural memory storage, dammit.
  • by PolygamousRanchKid ( 1290638 ) on Thursday January 12, 2012 @05:03PM (#38678172)

    . . . now as to shrinking that scanning tunneling microscope . . . that might take a while . . .

    Is anyone aware of how "big" they are . . . I'm not thinking that the word "small" is appropriate . . .

    • by neokushan ( 932374 ) on Thursday January 12, 2012 @05:07PM (#38678220)

      To be fair, have you seen how big the first magnetic HDDs were? Granted, it was a different technology and they still stored a hell of a lot more than 5 bytes, but miniaturisation is only a matter of time.

      • To be fair, have you seen how big the first magnetic HDDs were? Granted, it was a different technology and they still stored a hell of a lot more than 5 bytes, but miniaturisation is only a matter of time.

        Yep, according to the idiots at MSNBC [msn.com], we're already there.

        Talk about reading comprehension failures.

        Sigh.

    • by c0lo ( 1497653 )

      . . . now as to shrinking that scanning tunneling microscope . . . that might take a while . . .

      Is anyone aware of how "big" they are . . . I'm not thinking that the word "small" is appropriate . . .

      Example [cnx.org]

      • Tunneling accelerometers are mainstream. They are basically an STM without the scanning ability, with the "pinhead" on a MEMS arm. These fit in tiny chips. Combine these with, say, thermal-expansion "heater" actuators, and you have a crude yet tiny STM, with very limited storage capacity (limited by X * Y travel / bit spacing).

    • by JustinOpinion ( 1246824 ) on Thursday January 12, 2012 @06:13PM (#38678886)

      Is anyone aware of how "big" they are

      An actual STM instrument is pretty big. About the size of, say, a mini-fridge. But the majority of that is the computer to drive the system, the readout electronics, and the enclosure (to dampen out vibrations, establish vacuum, etc.). The actual readout tip is pretty small: a nano-sized tip attached to a ~100 micron 'diving board' assembly.

      A related problem with STM is that it's a serial process: you have a small tip that you're scanning over a surface. This makes readout slow. However, in a separate project, IBM (and others) have been working on how to solve that: the idea is to use a huge array of tips that scan the surface in parallel (IBM calls it millipede memory [ibm.com]). This makes access faster, since you can basically stripe the data and read/write in parallel, and it makes random seeks faster, since you don't have to move the tip array as far to get to the data you want (toy model below). It increases complexity, of course, but modern nano-lithography is certainly up to the task of creating arrays of hundreds of thousands of micron-sized tips with associated electronics.

      Using tip arrays would make the read/write parts more compact (as compared to having separate parallel STMs, I mean). The enclosure and driving electronics could certainly be miniaturized if there were economic incentive to do so. There's no physical barrier preventing these kinds of machines from being substantially micronized. As others have pointed out, the first magnetic disk read/write systems were rather bulky, and now hard drives can fit in your pocket. It's possible the same thing could happen here. Having said that, current data storage techniques have a huge head start, so it may take some time for something like this to catch up to the point where consumers will want to buy it.
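
      A toy model of the scaling argument (all numbers are placeholders for illustration, not IBM specs): aggregate throughput grows linearly with tip count, while worst-case seek distance shrinks with its square root.

        # Toy scaling model for a millipede-style tip array.
        # Each tip covers 1/n of the area, so seeks shorten as n grows,
        # while striped reads/writes scale with the tip count.
        def array_model(n_tips, per_tip_bps=1_000, die_side_um=10_000):
            throughput = n_tips * per_tip_bps        # bits/s, striped across tips
            tile_side = die_side_um / n_tips ** 0.5  # um, each tip's patch
            return throughput, tile_side

        for n in (1, 1_000, 100_000):
            bps, seek_um = array_model(n)
            print(f"{n:>7} tips: {bps:,} b/s, max seek ~{seek_um:.0f} um")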

      • True, but we've learned a lot of other things since those first hard drives that still probably apply: manufacturing techniques for making high-precision assemblies at extremely low cost, highly reliable low-friction bearings (like the hydrodynamic bearings I believe HDs use) for the spinning media, miniaturized servo motors, etc. It might not be that long before something using this technology comes along.

      • by swalve ( 1980968 )
        Reminds me of an old IBM (or Telex?) printer I once worked on. Instead of a printhead that scanned across the whole page, the printhead was a bar that went across the whole page with pins every quarter inch or so. The bar vibrated back and forth and it was able to print an entire line of text in a couple of vibrations. I think tape drives are like that now- the head is (made up number for simplicity) really 8 heads, and it reads and writes one byte at a time. When it gets to the end of the tape, it step
  • by Anonymous Coward

    I'm only a two-bit chemist, but "per atom" doesn't sound very exact, since atoms vary in weight between 1 dalton (1/(6e23) grams) and way over 200 times that.

  • by dzr0001 ( 1053034 ) on Thursday January 12, 2012 @05:19PM (#38678350)
    Increasing disk density only solves a handful of problems, and unfortunately it can create more problems as well. As disk capacity increases, more and more applications will become I/O-bound, contending for the same piece of metal. For many, if not most, organizations that need large amounts of data, increasing per-disk density is pointless unless new technology can be introduced to retrieve the data at an exponentially faster rate.
    • by lgw ( 121541 )

      The disk spins under the head at a certain rate. Guess what happens as you increase the density of bits? There's a reason that I just replaced a 15K rpm 150GB drive with a 5K rpm 1TB drive and saw a significant increase in raw read speed.

      All of the electronics in an HDD are far, far faster than the mechanical parts. Reading all of the data on a consumer drive has taken about 6-12 hours from the 10MB drives to the 3TB drives, because density is the limit on speed as well.

      Not that it really matters - it loo

  • Bad article (Score:5, Insightful)

    by Anonymous Coward on Thursday January 12, 2012 @05:19PM (#38678354)

    There's a better article here [popularmechanics.com] which includes some more information on the experiment. In particular the temperature was 0.5K.

    Also, the Computerworld article claims that using an antiferromagnetic arrangement of atoms is an advantage because it pulls the atoms more tightly together. I'm not convinced that this is true, but even if it is, the effect would be completely negligible. The interesting aspect of this arrangement is that each atom cancels out the magnetic field of the atoms on either side of it, which should help with data stability (a similar effect is seen in perpendicular recording today).

    Unrelatedly: have they/will they publish a paper on this? I can't find anything mentioning a paper in the press releases.

    • Re: (Score:2, Insightful)

      by sheepe2004 ( 1029824 )
      Gah posted this as AC by mistake.
      • How the hell does this get modded troll?

        Protip: Well, I don't have one. Just chalk it up to another mod bot failure.

    • by Anonymous Coward on Thursday January 12, 2012 @06:03PM (#38678798)

      Yes, but the paper is tiny and can only be read at low temperatures.

    • Re:Bad article (Score:5, Informative)

      by JustinOpinion ( 1246824 ) on Thursday January 12, 2012 @06:45PM (#38679134)

      Unrelatedly: have they/will they publish a paper on this? I can't find anything mentioning a paper in the press releases.

      The actual paper was published today in Science:
      Sebastian Loth [1,2], Susanne Baumann [1,3], Christopher P. Lutz [1], D. M. Eigler [1], and Andreas J. Heinrich [1] (affiliations: [1] IBM Almaden Research Division; [2] Max Planck Institute; [3] University of Basel), "Bistability in Atomic-Scale Antiferromagnets" [sciencemag.org], Science, 13 January 2012, Vol. 335, no. 6065, pp. 196-199. DOI: 10.1126/science.1214131 [doi.org]

      The abstract is:

      Control of magnetism on the atomic scale is becoming essential as data storage devices are miniaturized. We show that antiferromagnetic nanostructures, composed of just a few Fe atoms on a surface, exhibit two magnetic states, the Néel states, that are stable for hours at low temperature. For the smallest structures, we observed transitions between Néel states due to quantum tunneling of magnetization. We sensed the magnetic states of the designed structures using spin-polarized tunneling and switched between them electrically with nanosecond speed. Tailoring the properties of neighboring antiferromagnetic nanostructures enables a low-temperature demonstration of dense nonvolatile storage of information.

      Some big names are on this paper (Don Eigler [ibm.com] is a pioneer of STM; responsible for the famous "IBM written with xenon atoms [ibm.com]" proof-of-concept, and along with Lutz worked on the also-famous "quantum corrals [wikipedia.org]").

  • PDP Anyone? (Score:3, Funny)

    by walkerp1 ( 523460 ) on Thursday January 12, 2012 @05:22PM (#38678386)
    Had they used the clearly superior RAD-50 [wikipedia.org] encoding, they could have stored THINK with a mere 384 atoms as opposed to 480.
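
    The arithmetic checks out. A minimal sketch of the packing, using the RAD-50 character table from the linked article:

      # RAD-50 packs 3 characters into one 16-bit word (40^3 = 64000 < 65536).
      # Character set: space, A-Z, $, ., unused, 0-9  ->  values 0..39.
      CHARS = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789"  # '%' marks the unused slot

      def rad50(s):
          s = s.ljust((len(s) + 2) // 3 * 3)  # pad to a multiple of 3
          return [CHARS.index(a) * 1600 + CHARS.index(b) * 40 + CHARS.index(c)
                  for a, b, c in zip(s[0::3], s[1::3], s[2::3])]

      words = rad50("THINK")  # 2 words = 32 bits
      print(words)                                             # [32329, 22840]
      print(32 * 12, "atoms vs", 5 * 8 * 12, "for 8-bit bytes")  # 384 vs 480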
  • Maybe now we can read all that data stored in those Crystal Skulls.
  • I thought the size of one bit IS one bit. Next they'll tell you that the size of an atom is yellow.

    • Anything can be defined as a tautology. It can be useful to measure units in other units, though. E.g. you might want to know the height of a liter (in a particular container) or the weight of a foot (of wire). As long as you know the context, as most of us do for this article, it makes sense. You do have to match the unit to the measurement you're trying to make, though; it would be perfectly cogent for them to say that the COLOR of the atom is yellow, though I don't know why we'd care.

  • by claytongulick ( 725397 ) on Thursday January 12, 2012 @05:33PM (#38678490) Homepage

    From what I understand the most severe engineering challenge with designing a portable STM will be overcoming the vibration issues. Current "home brew" STMs are built in a sandbox for this reason, afaik.

    • by JustinOpinion ( 1246824 ) on Thursday January 12, 2012 @06:30PM (#38679002)
      You're right that for STM and AFM instruments, vibration is a huge issue. But when using those instruments, you're trying to image nano-sized objects, or even individual atoms. So of course vibrations bigger than an atom's width will ruin your image. You can compensate for this (to a point) by making the device more rigid, and also by dampening out environmental noise. But there's a limit to what you can do (e.g. you can't make the cantilever your tip is attached to very stiff, or you would ruin your sensitivity).

      In an atomic magnetic memory, though, you wouldn't really be imaging individual atoms. You'd be scanning the tip back-and-forth and trying to sense (or set) the local magnetic field. Thus you wouldn't need to use a soft cantilever to hold the tip. A very stiff/rigid one would be fine, as long as it is correctly positioned in relation to the encoding atoms (close enough for sensing, etc.). The magnetic response in general will be stronger than the usual imaging modes for STM.

      My point is just that using a STM-like device for storing/retrieving data eliminates many of the design constraints that a full-blown STM needs (because it's trying to do precise topography and density-of-states imaging...). You can play many engineering tricks that they can't afford to do in a real STM.

      Having said that, many challenges would remain. External vibrations could still make the device unstable (or require it to sample for longer periods to average out signals, thus lowering data throughput). Temperature stability is probably going to be a major concern (thermal expansion will change the nano-sized gap between the tip and the bits, which will need to be compensated for; thermal noise could overwhelm the signal entirely; thermal gradients could make alignment of the tips and compensation for temperature drift even harder; etc.; see the numbers below).

      Then again, you only have to look at the absurd sophistication of modern HDDs or CPUs to be convinced that we can handle these kinds of challenging engineering problems (if there is enough economic incentive).
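
      To put a number on the thermal-expansion worry above: with a generic metal's expansion coefficient (an assumed figure, as are the structure size and gap), even tiny temperature swings dwarf a nanometer-scale tip-sample gap.

        # Thermal expansion vs. a nanometer-scale tip-sample gap.
        # alpha is a generic steel-like value; the structure size and gap
        # are assumptions for illustration, not numbers from the article.
        alpha = 12e-6  # 1/K, linear expansion coefficient
        L = 1e-3       # m, 1 mm of supporting structure
        gap = 1e-9     # m, ~1 nm tip-sample gap

        for dT in (1.0, 0.01, 0.001):
            dL = alpha * L * dT
            print(f"dT = {dT:5} K -> expansion {dL*1e9:8.3f} nm ({dL/gap:.1f}x the gap)")
        # A 1 K swing expands the structure by 12 nm, 12x the entire gap.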
      • by Electricity Likes Me ( 1098643 ) on Thursday January 12, 2012 @06:56PM (#38679224)

        It's also worth noting that modern hard disks already position the read head staggeringly close to the platter - on the order of 10 nm of clearance or less. And this is in a consumer electronic device.

        Most of the constraints of STM and AFM are related to the fact that they are general purpose, highly accurate devices, intended to study arbitrary samples (and work down to the 0.1 nm type scales while doing it).

  • by wrfelts ( 950027 ) on Thursday January 12, 2012 @05:40PM (#38678576)

    I can see it now: 500 petabytes stored on a postage stamp, housed in a device the size of an overstuffed, large suitcase. It has geek written all over it. I must have one!!!

  • by FridayBob ( 619244 ) on Thursday January 12, 2012 @05:43PM (#38678598)

    Imagine having a hard disk with a capacity of 2,000 TB. Using a SATA 3.0 bus with a sustained maximum throughput of 600 MiB/s, it would still take over 37 days to read or write the entire device.
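
    The parent's figure reproduced, along with the faster buses mentioned downthread (assuming decimal terabytes and the quoted sustained rates):

      # Time to read/write a 2,000 TB drive at various sustained bus rates.
      # Assumes decimal TB (2e15 bytes) and the rates quoted in this thread.
      capacity = 2_000e12  # bytes

      for name, rate in [("SATA 3.0 (600 MiB/s)", 600 * 2**20),
                         ("Thunderbolt (12 GB/s, planned)", 12e9),
                         ("PCIe 3.0 x16 (16 GB/s)", 16e9)]:
          seconds = capacity / rate
          print(f"{name}: {seconds / 86400:.1f} days")
      # SATA: ~37 days; Thunderbolt: ~1.9 days; PCIe x16: ~1.4 days (~35 hours)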

    • Imagine having a hard disk with a capacity of 2,000 TB. Using a SATA 3.0 bus with a sustained maximum throughput of 600 MiB/s, it would still take over 37 days to read or write the entire device.

      • By the time this atomic-scale HD hits the consumer sphere - if it ever does, it'll be something like 20+ years from now - I'm sure by then (2030+) they'll have a parallel version of SATA 9.0 that can read the entire 2-petabyte contents in like 0.2 milliseconds

        • By the time this atomic-scale HD hits the consumer sphere - if it ever does, it'll be something like 20+ years from now - I'm sure by then (2030+) they'll have a parallel version of SATA 9.0 that can read the entire 2-petabyte contents in like 0.2 milliseconds

        You forget that when it comes to sustained throughput, hard disks have always been slower than the buses used to connect them. It's a problem that's intrinsic to writing/reading data to/from a mechanical medium. In this case the disks can't really rotate any faster, so the bigger they get, the more tracks there are to access and the worse the problem becomes.

        Sure, over time the sustained throughput rates have increased, but that's only because of the steady increase in areal density. Assuming IBM's new t

    • While yes, using SATA 3.0 would take forever, there is no reason to think that when these drives are produced, that will be the standard used for them. I think it is more likely that they will connect to something like a PCIe 16x slot (or whatever dongle they are using to connect to that bus). A v3.0 PCIe 16x will do 16 GB/s, so it would take about 34 hours with the technology in most people's computers right now. By the time 2 PB drives get on the market, I don't think it will be an issue.

      I'm more concerned about when all

      • While yes, using SATA 3.0 would take forever, there is no reason to think that when these drives are produced, that will be the standard used for them. I think it is more likely that they will connect to something like a PCIe 16x slot (or whatever dongle they are using to connect to that bus). A v3.0 PCIe 16x will do 16 GB/s, so it would take about 34 hours with the technology in most people's computers right now. By the time 2 PB drives get on the market, I don't think it will be an issue.

        If IBM's new technology eventually makes it into the common hard disk, there will naturally be a faster bus technology to accommodate the increased bit rates to/from the read/write heads (due to the higher areal density), so you can bet it will take less than 37 days to read or write an entire disk. However, because the disks can't really rotate any faster than they do now, the bigger they get, the more tracks there will be to access and the longer it will take to read or write the whole thing. The only wa

    • by Anonymous Coward

      With current buses, we'd mount them as expansion cards with PCI Express [wikipedia.org] (1 GB/s per lane with v3.0, up to 16 GB/s with PCIe 16x). Some SSDs already do this.

      For a more flexible solution, there's Thunderbolt [wikipedia.org]. 2.5 GB/s currently, with planned expansion up to 12 GB/s.

      Even optimistically, though, we'd still be looking at ~2 days to read/write an entire 2,000 TB drive.

    • We already need a faster bus. [anandtech.com]

      The SF-2000 series controllers are already limited on the sequential side by 6Gbps SATA as well as the ONFI 2.x interface. Both need to be addressed to improve sequential performance, which we likely won't see until 2013.

  • by guttentag ( 313541 ) on Thursday January 12, 2012 @06:49PM (#38679172) Journal
    So they should have this ready for practical applications in the consumer market right about the same time hard drive component manufacturing [nytimes.com] becomes available and, coincidentally, about the same time the hard drive industry jumps on the Thunderbolt bandwagon. Perhaps this trifecta will also coincide with the Third Coming of Steve Jobs -- with no hard drives available, almost no one using his new Thunderbolt, and no ability to store his entire movie collection on one hard drive, he figured he'd leave Earth for a while and come back when we were ready for him.

    Anyone else pick up on the note in TFA about how this technology uses 96 bits to make one byte of data? I wonder if the drive sizes will be advertised in bits to make them seem even more ridiculously impressive!
  • They only reduced the size a *little bit*.
  • IBM is applying this technology to storage space right now, but is it also applicable to processing power? Could this sudden advancement in technology be very problematic for the global economy? If we have come to the end of Moore's Law already, then what's next? Processing power can't be increased any further, so there will be no reason for people to upgrade their PCs - why bother when CPUs aren't getting any more powerful? And quantum computing is a long way off, so I imagine this could be VERY bad for no
  • "We can manufacture gazillionabyte memory chips the size of a pinhead. Of course, the interface hardware for reading/wrting the data is the size of a small fridge..."
    • "We can manufacture gazillionabyte memory chips the size of a pinhead. Of course, the interface hardware for reading/wrting the data is the size of a small fridge..."

      Moreover, the bandwidth is a kilobyte per second.

  • I don't want an order of magnitude more storage; I want to be able to process all the storage that I have in the blink of an eye.
