Nano-Scale Memory Fits A Terabit On A Square Inch
prostoalex writes "San Jose Business Journal talks about Nanochip, a company that's developing molecular-scale memory: "Nanochip has developed prototype arrays of atomic-force probes, tiny instruments used to read and write information at the molecular level. These arrays can record up to one trillion bits of data -- known as a terabit -- in a single square inch. That's the storage density that magnetic hard disk drive makers hope to achieve by 2010. It's roughly equivalent to putting the contents of 25 DVDs on a chip the size of a postage stamp." The story also mentions the Millipede project from IBM, in which scientists are trying to build nano-scale memory that relies on micromechanical components."
What about speed? (Score:5, Interesting)
Re:25 DVDs? (Score:2, Interesting)
Issues untold yet (Score:5, Interesting)
(b) Testing: How are they going to test this trillion-element chip? Testing complexity grows exponentially with the number of elements, and it will require serious consideration. It may be worthwhile to make smaller components that can be tested easily (modern chips have roughly one-third of their cost devoted to testing).
(c) Redundancy: Is this process going to give better yield than conventional electronic processes? If not, the common technique of redundancy has to be used. That brings costs in terms of power, area and delay. For example, if the yield is only 90%, you will need ~111% of the nominal resources. Not only do you have to make up for the defective components, you also have to provide a lot more redundancy for testing. At some point it becomes worthless, as the performance drops through the floor.
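The yield arithmetic above can be sketched in a few lines. This is a back-of-the-envelope model, not anything from the article: it only accounts for spare elements to replace defects, ignoring the extra test overhead the comment mentions.

```python
import math

def required_resources(needed_elements: int, yield_fraction: float) -> int:
    """Elements you must fabricate so that, on average,
    `needed_elements` of them come out working."""
    return math.ceil(needed_elements / yield_fraction)

# At 90% yield, hitting a target of 1 working element per bit means
# fabricating ~111% of the nominal count.
print(required_resources(1_000_000_000_000, 0.90))  # ~111% of a terabit
```

For a 90% yield this gives roughly an 11% overhead before any test redundancy is added, which is where the ~111% figure comes from.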
Still, it is good work and will perhaps generate some new ideas.
And thats just 2-dimensional (Score:4, Interesting)
Whatever that number, we'll still be running out of space since Windows 2050 will take 1/3rd of that space and games+movies the remaining 2/3rd.
Don't hold out for them (Score:3, Interesting)
I'd be really surprised if we see this technology on the shelf in anything close to 5 years from now.
What happened to Millipede? (Score:3, Interesting)
Re:Issues untold yet (Score:2, Interesting)
As an engineer you have to take things with a pinch of salt. Not every scientific idea is technologically feasible. In the end, economics determines whether the product will even hit the market. Nanotechnology is not cheap, so it is worth considering whether the important issues can even be tackled, rather than hoping someone else will do it.
Re:Magnetic memory = Doom (Score:4, Interesting)
Standard DRAM will maintain its state --- mostly --- for a remarkably long time without refreshing. Unfortunately, it doesn't do so in a useful state.
I once was working on an embedded device that had VGA out. The development cycle was power on, boot from TFTP, run system, wait until it crashed, power off, repeat. When the system switched on, one of the first things the boot loader did was to initialise the video chipset, but without clearing the video memory.
If the board had been off for less than about five minutes, you could still see the last display that had been there when the board crashed.
Without refreshes, the data would gradually fade; the image was always corrupted with snow. The longer you left it switched off, the worse the snow got. Different RAM chips lasted different lengths of time --- there was one band across the middle that would become completely unintelligible in about 30s, while another could hold an image for about two minutes.
I suppose you could use this to store data for short periods during a power down, but you'd have to use so much redundancy to ensure the data survived the inevitable corruption that it probably wouldn't be worth it. Still, I'm sure someone, somewhere, could come up with a Nifty Trick(TM)... You couldn't do it at all on PCs, of course --- on boot, they wipe all their RAM, video or otherwise.
Such products are a godsend (Score:5, Interesting)
But the manufacturers of memory chips, hard disks, even CPUs, have it really easy. All they need to do is solve the technological problem of doubling the capacity/performance and the customer is eager to shell out some $$$ to get the new version. No focus groups are needed, no expensive marketing surveys. The only thing you need to do to please the customer is basically improve the obvious performance metric by 100%. You don't need to lie and twist the facts as those guys in cosmetics do with "73% more volume" for your eyelashes or "54% healthier hair" bullshit. You just make your CPU twice as fast and that flash chip twice as large, and you are done.
And if you really want to, you can say it will make Internet faster, or something...
This is a true magnetic method (Score:2, Interesting)
They have had working prototypes for a long while. I suspect that the problems have more to do with reliably getting it into production.
data transfer rate (Score:5, Interesting)
Firstly, the storage density they are reporting is for a prototype setup, and it's already as good as current HD technology. The exciting thing is not the value they currently have, but rather the fact that this technology can be pushed very, very far. Thus, comparing this new technology to a mature technology (magnetic disks) is not really fair. I do believe that if this new technology is investigated for 10 years, it could outperform magnetic drives in terms of storage density.
Secondly, the data transfer rate can be much higher with this new technology. The Millipede project uses an array of thousands of AFM-like tips, which means that in principle thousands of bits of data are read at a time (compared to, for example, 4 bits read at a time in a magnetic disk drive with 4 platters). We all know that HD access is a major bottleneck in modern computers. This new concept could immediately speed that up by 2 orders of magnitude. I think that's worthy of consideration!
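The "2 orders of magnitude" claim is just the ratio of parallel read channels. A quick sketch, using the comment's own illustrative numbers (a 1000-tip array and a 4-platter drive -- neither figure is from the article):

```python
# Bits read simultaneously in each design (illustrative assumptions).
parallel_tips = 1000   # AFM-like tips in a Millipede-style array
disk_heads = 4         # heads reading in parallel on a 4-platter drive

# Ratio of parallel channels, all else (per-channel rate) assumed equal.
speedup = parallel_tips / disk_heads
print(f"~{speedup:.0f}x more bits read per unit time")  # ~250x
```

A 250x ratio is indeed between 2 and 3 orders of magnitude, though this assumes each tip reads as fast as a disk head, which is a big if.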
That having been said: don't hold your breath. MEMS is a rapidly evolving field, but it will be a while before it can really beat out the mature magnetic technology. The article also doesn't give any details on how this new technology works. The potential is great, but a lot of work has to be done.
Re:Go ahead (Score:2, Interesting)
It's not a question of the giga part, everyone knows the metric system by now (I hope)
Really, do you? Last time I looked, G or giga is defined as exactly 10^9 [nist.gov] (1,000,000,000).
Here's the important part you were ignoring:
---
Hard drive manufacturer: One GigaByte = 1000 bytes
Wrong. Hard drive manufacturers and everyone else who knows how to use SI prefixes [nist.gov] correctly know that one gigabyte is 1,000,000,000 bytes.
Software/everyone else: One GigaByte = 1024 bytes
Wrong again. If in this case you mean 2^30 bytes, 1 GiB = 1,073,741,824 bytes. What about network people? To them, 1 GB is certainly 1,000,000,000 bytes. Does 100 Mb/s Ethernet run at 100,000,000 bits per second (100 x 10^6) or 104,857,600 (100 x 2^20)? The former, of course. More and more people are becoming aware of this issue and moving from the old, ambiguous use of decimal prefixes to represent powers of two to the new, more precise and separate binary (IEC) prefixes. Case in point: BitTorrent [bittorrent.com]. Download the client, use it, and you'll notice that byte counts in binary multiples are correctly referred to as KiB, MiB, etc.
If you had actually read the link I posted on SI prefixes for binary multiples [nist.gov], you might know the following historical context:
In December 1998 the International Electrotechnical Commission (IEC), the leading international organization for worldwide standardization in electrotechnology, approved as an IEC International Standard names and symbols for prefixes for binary multiples for use in the fields of data processing and data transmission.
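The whole decimal-vs-binary argument above fits in a few lines of arithmetic. This sketch shows why a drive sold in decimal gigabytes "shrinks" when your OS reports binary gibibytes (the function name is mine, not anything standard):

```python
# Decimal (SI) vs binary (IEC) interpretations of "gigabyte".
SI_GB = 10**9     # 1 GB  = 1,000,000,000 bytes (drive makers, networking)
IEC_GIB = 2**30   # 1 GiB = 1,073,741,824 bytes (what most software reports)

def marketing_shrinkage(capacity_gb: int) -> float:
    """How many GiB a drive sold as `capacity_gb` decimal GB reports."""
    return capacity_gb * SI_GB / IEC_GIB

print(f"{marketing_shrinkage(100):.1f} GiB")  # a "100 GB" drive shows ~93.1 GiB
```

Nobody is lying here: both numbers describe the same count of bytes, just under different prefix conventions, which is exactly what the IEC binary prefixes were introduced to disambiguate.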
40 bits on the address bus... (Score:3, Interesting)
Could this be an indication of the data volumes we will be dealing with in the future, when 32-bit computing on the desktop is obsolete?