



IBM Shrinks Bit Size To 12 Atoms
Lucas123 writes "IBM researchers say they've been able to shrink the number of iron atoms it takes to store a bit of data from about one million to 12, which could pave the way for storage devices with capacities that are orders of magnitude greater than today's devices. Andreas Heinrich, who led the IBM Research team on the project for five years, said the team used the tip of a scanning tunneling microscope and unconventional antiferromagnetism to change the bits from zeros to ones. By combining 96 of the atoms, the researchers were able to create bytes — spelling out the word THINK. That solved a theoretical problem of how few atoms it could take to store a bit; now comes the engineering challenge: how to make a mass storage device perform the same feat as a scanning tunneling microscope."
12 atoms? Go smaller! (Score:5, Funny)
Preface: I'm just a programmer nerd who reads slashdot. I have no idea what I am talking about.
I wonder if it would be possible to store data as ionization states of a solid in the normal operating range of tech (and probably something small, like carbon), where ionized atoms represent one bits and neutral atoms represent zero bits: you read atoms in some rigid lattice, and the ionized ones are ones while the neutral atoms are zeroes. Yeah, there are huge problems, like preventing electron shell states from dropping and keeping the electrons off the negatively charged carbon, but it seems like it would be a great objective, considering the next smaller data storage scheme after atomic ionization would be measuring quark states to represent multi-valued data.
Re:12 atoms? Go smaller! (Score:5, Funny)
Re: (Score:2)
I'm assuming that there is no combinational logic involved here, in which case, where a bit would require back to back inverters, that would make at least 8 atoms a minimum. I'm assuming that we're just talking about storing floating bits here, in
Re: (Score:3)
I also have no idea what I'm talking about but:
Isn't an ionized atom one with too many or too few electrons? Don't electrons flow freely through any material even remotely conductive? So wouldn't you need to separate the atoms with an insulating material of sufficient width to stop electrons moving between the atoms?
Re:12 atoms? Go smaller! (Score:5, Funny)
Only if you care about data integrity...
Re: (Score:1)
Re: (Score:2)
Re:12 atoms? Go smaller! (Score:4, Funny)
so this will work great for WMRN memory - just where you want to keep your secrets that no one should see..
A given (Score:1)
Preface: I'm just a programmer nerd who reads slashdot. I have no idea what I am talking about.
Most of us consider that a given here.
Re: (Score:1)
And most people who post here assume an air of authority they don't deserve. With the disclaimer's honesty, I'm more inclined than I would be otherwise to believe Xanny knows what they're talking about. But if I think about this too much I fear I'll find myself in an infinite loop.
Re:12 atoms? Go smaller! (Score:5, Insightful)
I'm a materials science graduate student, and my research is on semiconductors. While I don't work with materials for data storage, I have a pretty good background in electronic properties of materials so maybe I can shed some light on the situation.
Basically, I suppose this would be hypothetically possible, but the problems you'd face would be very, very difficult to solve. The big problem here is that in order to keep something ionized, you would have to completely isolate it from any other atoms that might donate/steal an electron. Again, it's hypothetically possible, but impractical, considering the materials that would do that are mostly noble gases. Not to mention, storing data as ionized/unionized atoms is fundamentally different from the way we store data now (magnetic domains). I think the more reasonable idea would be to shrink magnetic domains, as well as the number of magnetic domains required to form a bit. If I remember correctly, currently each magnetic domain consists of several hundred atoms and each bit consists of around 100 magnetic domains. As the article states, the best we could ever get is one atom representing one bit, and magnetism is far more likely than ionization to be the mechanism for distinguishing ones from zeroes.
How about chemical representations? (Score:2)
I think the challenge here would be finding a rapid, cheap way to write/read the data, but one idea that occurred to me, instead of ionizing atoms: what if you could find a simple molecule that could be changed to another simple molecule by the addition/removal of one atom?
Something like Carbon Monoxide = 0, Carbon Dioxide = 1. Seems like you could potentially get a lot of data density with something like that?
Re: (Score:2)
That would require tons of power to make bonds. Also, where would you get the extra oxygen?
We have something better - Phase Change memory. Made from the same stuff CD-RWs are made of.
Re: (Score:1)
True, and phase change memory is pretty badass. However, there's definitely a larger minimum bit size for phase change memory than for magnetic data storage - you pretty much by definition have to have more than one atom in a system to determine what phase it's in. Also, you then have to figure out how to read the data (probably either optically or by measuring the resistance of the bit), all of which would require a fair-sized bit.
Re: (Score:2)
So what are you going to "stick" these bits to? And how would you change them? Not being negative - you might have a grand insight.
Re: (Score:2)
Re: (Score:1)
Interesting point, I hadn't thought of that. What you're talking about is still only one "bit" per atom, but each bit would have more possibilities than just ones and zeroes. So you could have - for example - a bit with the possibility of being 0-5 for each of the cardinal directions, in which case you'd have to use a language with base 6 instead of binary. Still, conceptually very interesting.
Re: (Score:2)
Re: (Score:2)
a bit with the possibility of being 0-5 for each of the cardinal directions, in which case you'd have to use a language with base 6 instead of binary.
The first number that's divisible by both 5 and 2 is 10. But bytes are 8 bits, so... 40 fits all three.
Minimum group: 8 atoms and 5 bytes.
Re: (Score:2)
unionized atoms
Hoffa's revenge?
Re: (Score:2)
Yes, it would be so cool to say "My scanning tunneling microscope goes to ELEVEN [wikipedia.org]".
Re:12 atoms? Go smaller! (Score:5, Informative)
There was a wonderful paper in Nature titled "Ultimate physical limits to computation" by Seth Lloyd (yes, the guy with the funny laugh), which discussed exactly how small computation and processing can ever get (short of discovering new physics, of course).
Entry page: http://arxiv.org/abs/quant-ph/9908043 [arxiv.org]
Direct PDF Link: http://arxiv.org/PS_cache/quant-ph/pdf/9908/9908043v3.pdf [arxiv.org]
It's a fascinating read, which I highly recommend. I believe it will answer your questions as well.
The summary of the paper:
Computers are physical systems: what they can and cannot do is dictated by the laws of physics. In particular, the speed with which a physical device can process information is limited by its energy and the amount of information that it can process is limited by the number of degrees of freedom it possesses. This paper explores the physical limits of computation as determined by the speed of light $c$, the quantum scale $\hbar$ and the gravitational constant $G$. As an example, quantitative bounds are put to the computational power of an `ultimate laptop' with a mass of one kilogram confined to a volume of one liter.
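For a sense of scale, here's a rough back-of-the-envelope check of the paper's headline number (a minimal sketch, assuming the Margolus-Levitin bound of 2E/(pi*hbar) operations per second that the paper works from, with rounded physical constants):

```python
import math

# Rough sanity check of Lloyd's "ultimate laptop" (1 kg confined to 1 liter):
# the Margolus-Levitin theorem bounds logical operations per second by 2E/(pi*hbar).
c = 2.998e8          # speed of light, m/s
hbar = 1.055e-34     # reduced Planck constant, J*s
mass = 1.0           # kg

energy = mass * c**2                        # total rest-mass energy, ~9e16 J
ops_per_second = 2 * energy / (math.pi * hbar)

print(f"E ~ {energy:.2e} J")
print(f"max ops/s ~ {ops_per_second:.2e}")  # on the order of 10^50-10^51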
Re: (Score:1)
And... (Score:1)
Re: (Score:2)
It solved a theoretical problem. They never solved a real problem. But in theory, it was a problem and they solved it.
Re: (Score:2)
They didn't solve a theoretical problem. The theoretical limit is a function of Planck's constant, the uncertainty principle, and the amount of energy you're allowed to use. They solved (part of) the engineering problem. There's a ways to go before they solve the production/commercialization problem.
Re: (Score:2)
There isn't just one theoretical limit. There are lots of theoretical limits. Some of those theoretical limits people are more sure of than others, but dozens or even hundreds of theoretical limits have been broken through already in computer engineering and in data storage specifically. Then the theories have to change. There were plenty of people who theorized that Charles Babbage's Difference Engine could never work. They might not have been very good theories, but many people still accepted them until t
Re: (Score:3)
Theoretically they could do it with subatomic particles; in practice, who knows when, if ever, that will become viable. If they manage it, though, it would be pretty mind-blowing. I'm guessing that it's going to be extremely difficult to accomplish and take decades to arrive, if it ever does.
Re:And... (Score:5, Informative)
There are theoretical limits to how much information can be stored in a molecule -- this is given by the molar entropy, typically expressed in J/(K*mol). But it can also be expressed, more intuitively, as bits per molecule.
(Yes [wikipedia.org], you can convert between J/K and bits -- they measure the same thing, degrees of freedom.)
Per this table [update.uu.se], iron has a molar entropy of 27.3 J/K*mol, or 4.73 bits/molecule.
IBM is claiming an information density of (1/12) bits/molecule, which is reasonable -- the thermodynamic limit is ~57x greater.
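For anyone who wants to check that conversion, a minimal sketch (using the 27.3 J/(K*mol) figure quoted above; the conversion just divides the molar entropy by R*ln 2, which is the same as dividing the per-atom entropy by k_B*ln 2):

```python
import math

# Molar entropy (J/(K*mol)) -> bits per atom: divide by R * ln(2).
R = 8.314                 # gas constant, J/(K*mol)
S_iron = 27.3             # molar entropy of bulk iron, from the table above

bits_per_atom = S_iron / (R * math.log(2))
print(f"{bits_per_atom:.2f} bits/atom")                  # ~4.73

ibm_density = 1 / 12                                     # IBM's demo: 1 bit per 12 atoms
print(f"headroom: ~{bits_per_atom / ibm_density:.0f}x")  # ~57x
```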
Re:And... (Score:5, Informative)
So there is even more headroom in the thermodynamic limit.
Re: (Score:2)
That is for bulk metallic iron. Nanoparticles will be a different matter.
Ha! I see what you did there.
Re:And... (Score:5, Funny)
You know, when you are storing bits and you are already at 12, where can you go from there? Where?
Nowhere.
Ours goes to 11.
One smaller.
Re: (Score:2)
They solved the question of whether or not it was possible with 12. Now on to 11!
Re: (Score:2)
It'd be foolish to try to engineer a 12-atoms-per-bit storage device without first demonstrating that it is actually possible to do.
It is a proof of concept and shows that trying to use 12 atoms to store a bit isn't impossible.
IBM's new vision (Score:3, Funny)
IBM's new vision:
A scanning tunneling microscope in every home with an IBM sticker on it.
Re:IBM's new vision (Score:4, Insightful)
Re: (Score:2, Funny)
...and cost hundreds of thousands of dollars!
So like, TWO lattes?
Re: (Score:2)
Have to, or get to?
Re: (Score:2)
CRT TV: electron gun: check.
microwave oven: magnetron: check.
CD player: laser: check.
I have a portable finger pulse oximeter in my home medical kit. Think of it as one third of a tricorder.
Not so outlandish... or large... or expensive.
Re: (Score:2)
Re: (Score:2)
Cover all those devices with just one super awesome appliance that should be in every kitchen: the microwave oven with built-in oscilloscope and CD player!
"Cloud" (Score:3)
Re: (Score:1)
IBM is such a behemoth. Intel brought single-Atom chips to market, like, five years ago...
Re:IBM's new vision (Score:4, Funny)
Please. There's a world market for maybe 5 scanning tunneling microscopes.
The REAL question is... (Score:5, Funny)
...once they have these new mass-storage devices, how can I turn one into a homebrew scanning tunneling microscope?
Re:The REAL question is... (Score:4, Informative)
You can make one now if you like. There's an article here [popsci.com] about someone working on an open source kit, but it also mentions other places that will sell you a kit to build your own.
Re: (Score:2)
Page not found :(
Not that I was seriously going to build one, but I may like to read about it.
Re: (Score:2)
Weird. I tested the link. Just tried it again and it's working. It should be http://www.popsci.com/diy/article/2010-07/homemade-open-source-scanning-tunneling-electron-microscope
awesome (Score:5, Funny)
Now they just have to work on that random access time of 300000 milliseconds.
Should be easy, right?
Re: (Score:2)
Moore's law is about cost per transistor, not speed.
Re: (Score:2)
Excellent! (Score:2)
Re: (Score:1)
I think 12 atoms should be enough for everyone ... (Score:4, Informative)
. . . now as to shrinking that scanning tunneling microscope . . . that might take a while . . .
Is anyone aware of how "big" they are . . . I'm not thinking that the word "small" is appropriate . . .
Re:I think 12 atoms should be enough for everyone (Score:4, Informative)
To be fair, have you seen how big the first magnetic HDDs were? Granted, different technology, and they still stored a hell of a lot more than 5 bytes, but miniaturisation is only a matter of time.
Re: (Score:2)
To be fair, have you seen how big the first magnetic HDDs were? Granted, different technology, and they still stored a hell of a lot more than 5 bytes, but miniaturisation is only a matter of time.
Yep, according to the idiots at MSNBC [msn.com], we're already there.
Talk about reading comprehension failures.
Sigh.
Re: (Score:2)
. . . now as to shrinking that scanning tunneling microscope . . . that might take a while . . .
Is anyone aware of how "big" they are . . . I'm not thinking that the word "small" is appropriate . . .
Example [cnx.org]
Counterexample (Score:1)
Tunneling accelerometers are mainstream. They are basically an STM without the scanning ability, with the "pinhead" on a MEMS arm. These are in tiny chips. Combine these with, perhaps, thermal-expansion "heater" actuators, and you have a crude yet tiny STM with very limited storage capacity (limited by X * Y travel / bit spacing).
Re:I think 12 atoms should be enough for everyone (Score:5, Informative)
An actual STM instrument is pretty big. About the size of, say, a mini-fridge. But the majority of that is the computer to drive the system, the readout electronics, and the enclosure (to dampen out vibrations, establish vacuum, etc.). The actual readout tip is pretty small: a nano-sized tip attached to a ~100 micron 'diving board' assembly.
A related problem with STM is that it's a serial process: you have a small tip that you're scanning over a surface. This makes readout slow. However in a separate project, IBM (and others) has been working on how to solve that: the idea is to use a huge array of tips that scan the surface in parallel (IBM calls it millipede memory [ibm.com]). This makes access faster since you can basically stripe the data and read/write in parallel, and it makes random seeks faster since you don't have to move the tip array as far to get to the data you want. It increases complexity, of course, but modern nano-lithography is certainly up to the task of creating arrays of hundreds of thousands of micron-sized tips with associated electronics.
Using tip arrays would make the read/write parts more compact (as compared to having separate parallel STMs, I mean). The enclosure and driving electronics could certainly be miniaturized if there were economic incentive to do so. There's no physical barrier preventing these kinds of machines from being substantially micronized. As others have pointed out, the first magnetic disk read/write systems were rather bulky, and now hard drives can fit in your pocket. It's possible the same thing could happen here. Having said that, current data storage techniques have a huge head-start, so for something like this to catch up to the point where consumers will want to buy it may take some time.
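To make the striping idea concrete, here's a toy sketch (purely illustrative -- this is not IBM's actual millipede scheme, just round-robin striping of a byte stream across a hypothetical tip array, so that each tip only has to scan 1/N of the data and the array can read back in parallel):

```python
# Toy illustration of striping data across a tip array: each of n_tips tips
# handles every Nth byte, so a full read only requires each tip to scan
# roughly len(data)/n_tips positions.
def stripe(data: bytes, n_tips: int) -> list[bytes]:
    """Split data round-robin across n_tips."""
    return [data[i::n_tips] for i in range(n_tips)]

def unstripe(stripes: list[bytes]) -> bytes:
    """Reassemble the original byte stream from the per-tip stripes."""
    out = bytearray(sum(len(s) for s in stripes))
    for i, s in enumerate(stripes):
        out[i::len(stripes)] = s
    return bytes(out)

data = b"THINK" * 1000
stripes = stripe(data, n_tips=64)
assert unstripe(stripes) == data
print(f"each tip scans {len(stripes[0])} bytes instead of {len(data)}")
```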
Re: (Score:3)
True, but we've learned a lot of other things since those first hard drives that still probably apply: manufacturing techniques for making high-precision assemblies at extremely low cost, highly reliable low-friction bearings (like the hydrodynamic bearings I believe HDs use) for the spinning media, miniaturized servo motors, etc. It might not be that long before something using this technology comes along.
Re: (Score:2)
Re:I think 12 atoms should be enough for everyone (Score:4, Insightful)
Actually, an STM is typically about the size of a baseball. The vacuum chamber housing it, however...
Re: (Score:2)
The size isn't as much of a problem as the speed. How long did it take them to write 5 measly bytes? Anything competing with HDs has to be able to achieve at least 3Gb/s.
Per dalton? (Score:1)
I'm only a two-bit chemist, but per atom doesn't sound very exact, since atoms vary in weight between 1 dalton (1/(6e23) grams) and way over 200 times that.
Density isn't always the problem (Score:3, Insightful)
Re: (Score:3)
The disk spins under the head at a certain rate. Guess what happens as you increase the density of bits? There's a reason that I just replaced a 15K rpm 150GB drive with a 5K rpm 1TB drive and saw a significant increase in raw read speed.
All of the electronics in an HDD are far, far faster than the mechanical parts. Reading all of the data on a consumer drive has taken about 6-12 hours from the 10MB drives to the 3TB drives, because density is the limit on speed as well.
Not that it really matters - it loo
Bad article (Score:5, Insightful)
There's a better article here [popularmechanics.com] which includes some more information on the experiment. In particular the temperature was 0.5K.
Also, the Computerworld article claims that using an antiferromagnetic arrangement of atoms is an advantage because it pulls the atoms more tightly together. I'm not convinced that this is true, but even if it is, the effect would be completely negligible. The interesting aspect of this arrangement is that each atom cancels out the magnetic field of the atoms on either side of it, which should help with data stability (a similar effect is seen in perpendicular recording today).
Unrelatedly: have they/will they publish a paper on this? I can't find anything mentioning a paper in the press releases.
Re: (Score:2, Insightful)
Re: (Score:2)
How the hell does this get modded troll?
Protip: Well, I don't have one. Just chalk it up to another mod bot failure.
Re:Bad article (Score:4, Funny)
Yes, but the paper is tiny and can only be read at low temperatures.
Re:Bad article (Score:5, Informative)
The actual paper was published today in Science:
Sebastian Loth [1,2], Susanne Baumann [1,3], Christopher P. Lutz [1], D. M. Eigler [1], and Andreas J. Heinrich [1], "Bistability in Atomic-Scale Antiferromagnets" [sciencemag.org], Science, 13 January 2012, Vol. 335, No. 6065, pp. 196-199. DOI: 10.1126/science.1214131 [doi.org]. (Affiliations: [1] IBM Almaden Research Division, [2] Max Planck Institute, [3] University of Basel.)
Some big names are on this paper (Don Eigler [ibm.com] is a pioneer of STM; responsible for the famous "IBM written with xenon atoms [ibm.com]" proof-of-concept, and along with Lutz worked on the also-famous "quantum corrals [wikipedia.org]").
PDP Anyone? (Score:3, Funny)
Re:PDP Anyone? (Score:4, Funny)
Had they used the clearly superior RAD-50 [wikipedia.org] encoding, they could have stored THINK with a mere 384 atoms as opposed to 480.
I'm just glad they didn't use EBCDIC.
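The arithmetic behind that comparison, assuming 12 atoms per bit as in TFA (RAD-50 packs three characters into each 16-bit word, so the five-character THINK needs two words; 8-bit ASCII needs five bytes):

```python
import math

ATOMS_PER_BIT = 12
word = "THINK"

ascii_bits = 8 * len(word)                    # 5 bytes -> 40 bits
rad50_bits = 16 * math.ceil(len(word) / 3)    # 2 words -> 32 bits

print(f"ASCII : {ascii_bits * ATOMS_PER_BIT} atoms")   # 480
print(f"RAD-50: {rad50_bits * ATOMS_PER_BIT} atoms")   # 384
```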
Re: (Score:3, Funny)
Had they used the clearly superior RAD-50 [wikipedia.org] encoding, they could have stored THINK with a mere 384 atoms as opposed to 480.
I'm just glad they didn't use EBCDIC.
They tried, but the inherent chaos very nearly brought on the heat death of the universe.
Re: (Score:2)
I'm just glad they didn't use EBCDIC.
Or Half-ASCII.
Crystal Skulls (Score:1)
Units of measure (Score:1)
I thought the size of one bit IS one bit. Next they'll tell you that the size of an atom is yellow.
Re: (Score:2)
Anything can be defined as a tautology. It can be useful to measure units in other units, though. E.g., you might want to know the height of a liter (in a particular container) or the weight of a foot (of wire). As long as you know the context, as most of us do for this article, it makes sense. You do have to match the unit to the measurement you're trying to make, though; it would be perfectly cogent for them to say that the COLOR of the atom is yellow, though I don't know why we'd care.
Vibration will be the biggest challenge (Score:4, Interesting)
From what I understand the most severe engineering challenge with designing a portable STM will be overcoming the vibration issues. Current "home brew" STMs are built in a sandbox for this reason, afaik.
Re:Vibration will be the biggest challenge (Score:4, Insightful)
In an atomic magnetic memory, though, you wouldn't really be imaging individual atoms. You'd be scanning the tip back-and-forth and trying to sense (or set) the local magnetic field. Thus you wouldn't need to use a soft cantilever to hold the tip. A very stiff/rigid one would be fine, as long as it is correctly positioned in relation to the encoding atoms (close enough for sensing, etc.). The magnetic response in general will be stronger than the usual imaging modes for STM.
My point is just that using a STM-like device for storing/retrieving data eliminates many of the design constraints that a full-blown STM needs (because it's trying to do precise topography and density-of-states imaging...). You can play many engineering tricks that they can't afford to do in a real STM.
Having said that, many challenges would remain. External vibrations could still make the device unstable (or require it to sample for longer periods to average out signals, thus making data throughput lower). Temperature stability is probably going to be a major concern (thermal expansion will change the nano-sized gap between the tip and bits, which will need to be compensated for; thermal noise could overwhelm the signal entirely; thermal gradients could make alignment of the tips and compensation for temperature drift even harder; etc.).
Then again, you only have to look at the absurd sophistication of modern HDDs or CPUs to be convinced that we can handle these kinds of challenging engineering problems (if there is enough economic incentive).
Re:Vibration will be the biggest challenge (Score:4, Informative)
It's also worth noting that modern hard disks already position the read head staggeringly close to the platter - on the order of 10 nm of clearance or less. And this is in a consumer electronic device.
Most of the constraints of STM and AFM are related to the fact that they are general purpose, highly accurate devices, intended to study arbitrary samples (and work down to the 0.1 nm type scales while doing it).
Smaller but Bigger!!! (Score:3)
I can see it now: 500 petabytes stored on a postage stamp, housed in a device the size of an overstuffed, large suitcase. It has geek written all over it! I must have one!!!
Then we'll need a faster bus (Score:5, Interesting)
Imagine having a hard disk with a capacity of 2,000 TB. Using a SATA 3.0 bus with a sustained maximum throughput of 600 MiB/s, it would still take over 37 days to read or write the entire device.
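The arithmetic behind that estimate (a quick sketch; the exact figure shifts a bit depending on whether you read "2,000 TB" with decimal or binary prefixes, which is why it lands at roughly 37 days):

```python
# Time to read a full 2,000 TB disk over SATA 3.0 at a sustained 600 MiB/s.
capacity_bytes = 2_000 * 10**12        # 2,000 TB, decimal prefixes
throughput_bps = 600 * 2**20           # 600 MiB/s

seconds = capacity_bytes / throughput_bps
print(f"{seconds / 86_400:.1f} days")  # roughly 37 days
```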
Re: (Score:2)
Imagine having a hard disk with a capacity of 2,000 TB. Using a SATA 3.0 bus with a sustained maximum throughput of 600 MiB/s, it would still take over 37 days to read or write the entire device.
By the time this atomic-scale HD hits the consumer sphere - if it does, it'll be something like 20+ years from now - I'm sure by then (2030+) they would have a parallel version of SATA 9.0 that can read the entire 2 PB contents in like 0.2 milliseconds.
Re: (Score:2)
By the time this atomic-scale HD hits the consumer sphere - if it does, it'll be something like 20+ years from now - I'm sure by then (2030+) they would have a parallel version of SATA 9.0 that can read the entire 2 PB contents in like 0.2 milliseconds.
You forget that when it comes to sustained throughput, hard disks have always been slower than the buses used to connect them. It's a problem that's intrinsic to writing/reading data to/from a mechanical medium. In this case the disks can't really rotate any faster, so the bigger they get, the more tracks there are to access and the worse the problem becomes.
Sure, over time the sustained throughput rates have increased, but that's only because of the steady increase in areal density. Assuming IBM's new t
Re: (Score:2)
While yes, using SATA 3.0 would take forever, there is no reason to think that when these drives are produced, that will be the standard used for them. I think it is more likely that they will connect to something like a PCIe 16x slot (or whatever dongle they are using to connect to that bus). A v3.0 PCIe 16x will do 16 GB/s, so it would take 34 hours with technology in most people's computers right now. By the time 2 PB drives get on the market I don't think it will be an issue.
I'm more concerned about when all
Re: (Score:2)
While yes, using SATA 3.0 would take forever, there is no reason to think that when these drives are produced, that will be the standard used for them. I think it is more likely that they will connect to something like a PCIe 16x slot (or whatever dongle they are using to connect to that bus). A v3.0 PCIe 16x will do 16 GB/s, so it would take 34 hours with technology in most people's computers right now. By the time 2 PB drives get on the market I don't think it will be an issue.
If IBM's new technology eventually makes it into the common hard disk, there will naturally be a faster bus technology to accommodate the increased bit rates to/from the read/write heads (due to the higher areal density), so you can bet it will take less than 37 days to read or write an entire disk. However, because the disks can't really rotate any faster than they do now, the bigger they get, the more tracks there will be to access and the longer it will take to read or write the whole thing. The only wa
Re: (Score:1)
With current buses, we'd mount them as expansion cards with PCI Express [wikipedia.org] (1 GB/s per lane with v3.0, up to 16 GB/s with PCIe 16x). Some SSDs already do this.
For a more flexible solution, there's Thunderbolt [wikipedia.org]. 2.5 GB/s currently, with planned expansion up to 12 GB/s.
Even optimistically, though, we'd still be looking at ~2 days to read/write an entire 2,000 TB drive.
Re: (Score:2)
We already need a faster bus. [anandtech.com]
The SF-2000 series controllers are already limited on the sequential side by 6Gbps SATA as well as the ONFI 2.x interface. Both need to be addressed to improve sequential performance, which we likely won't see until 2013.
Re: (Score:2)
Porn would be one of the few special cases where such a drive capacity would be useful (Unless of course you are very serious about your porn!)
Data backups by nature require making a duplicated copy of the used space on the drive. If you fully utilize the drive, that would translate to copying the entire 2 PB drive, and at best only having one backup every 37 days (assuming a continuously running backup process)
So with current bus technology, it would be easiest to simply not make a backup of such a drive,
Perfect Timing (Score:3)
Anyone else pick up on the note in TFA about how this technology uses 96 atoms to make one byte of data? I wonder if the drive sizes will be advertised in atoms to make them seem even more ridiculously impressive!
Not A Big Deal..... (Score:1)
Bad news for the economy? (Score:1)
My take on this (Score:1)
Re: (Score:2)
"We can manufacture gazillionabyte memory chips the size of a pinhead. Of course, the interface hardware for reading/wrting the data is the size of a small fridge..."
Moreover, the bandwidth is a kilobyte per second.
Bandwidth! (Score:2)
I don't want an order of magnitude more storage; I want to be able to process all the storage that I have in the blink of an eye.
Re: (Score:1)
I am not aware of a process to transmogrify iron into gold...
Re: (Score:3)
You don't need to transmogrify it, just swap it; flipping a bit would require picking up the iron atom and placing a gold atom in its place.
Of course, this sounds pretty far out there too, but at least it doesn't require transmutation.
Re: (Score:2)
IIRC, that's enough to actually apply to some crypto schemes currently in use.
Re: (Score:2)