
Nano-Scale Memory Fits A Terabit On A Square Inch

prostoalex writes "The San Jose Business Journal talks about Nanochip, a company that's developing molecular-scale memory: 'Nanochip has developed prototype arrays of atomic-force probes, tiny instruments used to read and write information at the molecular level. These arrays can record up to one trillion bits of data -- known as a terabit -- in a single square inch. That's the storage density that magnetic hard disk drive makers hope to achieve by 2010. It's roughly equivalent to putting the contents of 25 DVDs on a chip the size of a postage stamp.' The story also mentions the Millipede project from IBM, where scientists are trying to build nano-scale memory that relies on micromechanical components."

  • What about speed? (Score:5, Interesting)

    by GNUALMAFUERTE ( 697061 ) <almafuerte@@@gmail...com> on Sunday February 27, 2005 @07:48PM (#11797751)
    This kind of device would be incredible for backup purposes, and the recording method also seems to be fast. Would it tolerate almost-unlimited rewrites? In that case, this technology could finally replace magnetic devices. Solid state is always better, but so far the existing alternatives don't offer the durability and flexibility of hard disks.
  • Re:25 DVDs? (Score:2, Interesting)

    by Rosco P. Coltrane ( 209368 ) on Sunday February 27, 2005 @08:01PM (#11797842)
    The LOC (Library of Congress) is about 20 TB worth of data, and a terabit is 0.125 TB, so the storage medium here is roughly 1/160th of an LOC (quick arithmetic below) :-)
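A minimal check of that ratio, assuming the comment's 20 TB estimate for the Library of Congress:

```python
chip_tb = 10**12 / 8 / 10**12   # one terabit expressed in terabytes (0.125 TB)
loc_tb = 20                     # comment's estimate for the Library of Congress
print(loc_tb / chip_tb)         # 160.0 -> the chip holds ~1/160th of an LOC
```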
  • Issues untold yet (Score:5, Interesting)

    by karvind ( 833059 ) <karvind.gmail@com> on Sunday February 27, 2005 @08:03PM (#11797853) Journal
    (a) Reliability: No word yet on how reliable the system and its elements are. It is one thing to make a 1M-by-1M array and another to make a bigger one. The silicon semiconductor industry is a lot more mature at transferring electronic processes into production. MEMS processes still have low yield and haven't found commercial success yet (except for the accelerometers used in air bags, etc.).

    (b) Testing: How are they going to test this trillion-element chip? Testing complexity grows exponentially with the number of elements, and it will require serious consideration. It may be worthwhile to make smaller components that can be tested easily (modern chips have roughly one-third of their cost devoted to testing).

    (c) Redundancy: Is this process going to give better yield than conventional electronic processes? If not, the usual technique of redundancy has to be used, and that brings costs in power, speed and delay. For example, if the yield is only 90%, you will need roughly 110% of the nominal resources (a quick sketch of this arithmetic follows this comment). Not only do you have to make up for the defective components, you also have to provide a lot more redundancy for testing. At some point it becomes worthless, because performance drops through the floor.

    But it is still good work, and it will perhaps generate some new ideas.
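A minimal sketch of the overhead arithmetic in point (c) above, assuming a simple model in which every defective element must be covered by a spare; the 90% yield and trillion-element figures are the comment's example, not numbers from the article:

```python
import math

def raw_elements_needed(per_element_yield: float, usable_elements: int) -> int:
    """How many elements must be fabricated so that, on average,
    `usable_elements` of them come out working."""
    return math.ceil(usable_elements / per_element_yield)

raw = raw_elements_needed(0.90, 10**12)   # 90% yield, one-terabit array
print(raw)                                # ~1.11e12 raw elements
print(raw / 10**12)                       # ~1.11 -> roughly 110% of nominal resources
```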

  • by mnmn ( 145599 ) on Sunday February 27, 2005 @08:07PM (#11797890) Homepage
    Some earlier stories mentioned stacking layers of memory to increase capacity. So, considering structural, voltage, data and addressing layers as well, how much data could we store in a 1-inch cube?

    Whatever that number, we'll still be running out of space since Windows 2050 will take 1/3rd of that space and games+movies the remaining 2/3rd.
  • by iammaxus ( 683241 ) on Sunday February 27, 2005 @08:13PM (#11797931)
    These arrays can record up to one trillion bits of data -- known as a terabit -- in a single square inch. That's the storage density that magnetic hard disk drive makers hope to achieve by 2010.

    I'd be really surprised if we see this technology on the shelf in anything close to 5 years from now.

  • by DaleBob ( 676487 ) on Sunday February 27, 2005 @08:15PM (#11797947)
    There was an article in Scientific American about two years ago (written, I believe, by researchers from IBM) regarding Millipede that said they expected the technology to come to market in 3 years. Now the article from this post suggests the project is all but dead. What happened? I'm too lazy to actually look at the patents, but it isn't clear at all how this new technology differs from Millipede. I'd guess the write and erase mechanisms are different.
  • Re:Issues untold yet (Score:2, Interesting)

    by karvind ( 833059 ) <karvind.gmail@com> on Sunday February 27, 2005 @08:18PM (#11797963) Journal
    I don't want to flame you, but I would take a scientific/engineering approach rather than accepting opinion from a Wall Street magazine. It would be worthwhile to look at how the MEMS bubble burst over the last 4-5 years. Even after 10 years of work, MEMS elements have serious issues with packaging. Intel withdrew its MEMS program because it didn't have enough yield. So just making a prototype is not the end of the story.

    As an engineer you have to take things with a pinch of salt. Not every scientific idea is technologically feasible, and in the end economics determine whether the product ever hits the market. Nanotechnology is not cheap, so it is worth considering whether the important issues can even be tackled, rather than hoping someone else will do it.
  • by david.given ( 6740 ) <dg@cowlark.com> on Sunday February 27, 2005 @08:24PM (#11798011) Homepage Journal
    Warm reboots don't erase memory. Cold reboots usually don't erase memory, either. (Fragments of what was there before are still present after a cold boot.)

    Standard DRAM will maintain its contents --- mostly --- for a remarkably long time without refreshing. Unfortunately, what survives isn't in a usable state.

    I was once working on an embedded device that had VGA out. The development cycle was: power on, boot from TFTP, run the system, wait until it crashed, power off, repeat. When the system was switched on, one of the first things the boot loader did was initialise the video chipset, but without clearing the video memory.

    If the board had been off for less than about five minutes, you could still see the last display that had been there when the board crashed.

    Without refreshes, the data would gradually fade; the image was always corrupted with snow. The longer you left it switched off, the worse the snow got. Different RAM chips lasted different lengths of time --- there was one band across the middle that would become completely unintelligible in about 30s, while another could hold an image for about two minutes.

    I suppose you could use this to store data for short periods during a power-down, but you'd have to use so much redundancy to ensure that the data survived the inevitable corruption that it probably wouldn't be worth it. Still, I'm sure someone, somewhere, could come up with a Nifty Trick(TM) (a toy sketch of that sort of redundancy follows this comment)... You couldn't do it at all on PCs, of course --- on boot, they wipe all their RAM, video or otherwise.
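A toy illustration of the redundancy idea in the comment above, assuming a simple 3-copy repetition code with bitwise majority voting; the bit-flip decay model is invented for the example, and a real design would use proper error-correcting codes:

```python
import random

def encode(data: bytes, copies: int = 3) -> list:
    """Keep several identical copies of the payload in memory."""
    return [bytes(data) for _ in range(copies)]

def decay(copies: list, flip_prob: float) -> list:
    """Crudely model unrefreshed DRAM: each bit flips independently."""
    return [
        bytes(b ^ sum(1 << i for i in range(8) if random.random() < flip_prob)
              for b in copy)
        for copy in copies
    ]

def decode(copies: list) -> bytes:
    """Recover the payload by bitwise majority vote across the copies."""
    result = bytearray(len(copies[0]))
    for pos in range(len(result)):
        for bit in range(8):
            ones = sum((c[pos] >> bit) & 1 for c in copies)
            if ones * 2 > len(copies):
                result[pos] |= 1 << bit
    return bytes(result)

payload = b"last framebuffer contents"
recovered = decode(decay(encode(payload), flip_prob=0.02))
print(recovered == payload)   # usually True at low flip rates
```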

  • by danila ( 69889 ) on Sunday February 27, 2005 @08:26PM (#11798019) Homepage
    It's amazing how lucky these chip manufacturers are. Imagine to what lengths people need to go in other industries in order to convince customers to upgrade. If all you are selling is a damn chocolate bar, there is only so much that you can do to improve it. They had perfectly edible chocolate bars 100 years ago and there isn't much besides slapping "10% free" on the package that you can do. Ditto for things like headphones, ballpoint pens and pretty much everything else.

    But the manufacturers of memory chips, hard disks, even CPUs, have it really easy. All they need to do is solve the technological problem of doubling the capacity/performance and the customer is eager to shell out some $$$ to get the new version. No focus groups are needed, no expensive marketing surveys. The only thing you need to do to please the customer is basically improve the obvious performance metric by 100%. You don't need to lie and twist the facts as those guys in cosmetics do with "73% more volume" for your eyelashes or "54% healthier hair" bullshit. You just make your CPU twice as fast and that flash chip twice as large, and you are done.

    And if you really want to, you can say it will make Internet faster, or something...
  • by jkocurek ( 863356 ) on Sunday February 27, 2005 @10:32PM (#11798997)
    Otherwise it is similar to Millipede. To increase density, they can move both the R/W heads and the media. I've been following this for a while; I've exchanged e-mails with Tom Rust going back to 1998. As with fusion, it seems this has been just a year or two from commercialization ever since...

    They have had working prototypes for a long while. I suspect that the problems have more to do with reliably getting it into production.
  • data transfer rate (Score:5, Interesting)

    by kebes ( 861706 ) on Sunday February 27, 2005 @11:11PM (#11799304) Journal
    Most posters seem unimpressed with the storage density they are reporting, but I'd like to point out a couple of things. (Note that I use atomic force microscopes in my "job" -- I do academic research.)

    Firstly, the storage density they are reporting is for a prototype setup, and it's already as good as current HD technology. The exciting thing is not the value they currently have, but the fact that this technology can be pushed very, very far. Thus, comparing this new technology to a mature technology (magnetic disks) is not really fair. I do believe that if this new technology is developed for 10 years, it could outperform magnetic drives in terms of storage density.

    Secondly, the data transfer rate can be much higher with this new technology. The Millipede project uses an array of thousands of AFM-like tips, which means that in principle about 1000 bits of data are read at a time (compared to, for example, 4 bits read at a time in a magnetic disk drive with 4 platters). We all know that HD access is a major bottleneck in modern computers. This new concept could immediately speed that up by two orders of magnitude (rough arithmetic below). I think that's worthy of consideration!

    That having been said: don't hold your breath. MEMS is a rapidly evolving field, but it will be a while before it can really beat out the mature magnetic technology. The article also doesn't give any details on how this new technology works. The potential is great, but a lot of work has to be done.
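A back-of-the-envelope sketch of the parallelism argument in the comment above; the per-channel bit rate is a made-up placeholder, and the 1000-tip and 4-head counts are the comment's own examples:

```python
PER_CHANNEL_BPS = 1_000_000   # hypothetical rate of one tip or head, bits/s

def aggregate_rate(channels: int, per_channel_bps: int = PER_CHANNEL_BPS) -> int:
    """Total throughput if all channels stream in parallel."""
    return channels * per_channel_bps

disk_like   = aggregate_rate(4)      # 4 heads reading at once
probe_array = aggregate_rate(1000)   # 1000 AFM-like tips reading at once
print(probe_array / disk_like)       # 250.0 -> roughly two orders of magnitude
```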
  • Re:Go ahead (Score:2, Interesting)

    by bohnsack ( 2301 ) on Monday February 28, 2005 @12:41AM (#11799974)

    It's not a question of the giga part, everyone knows the metric system by now (I hope)

    Really, do you? Last time I looked, G or giga is defined as exactly 10^9 [nist.gov] (1,000,000,000).

    Here's the important part you were ignoring:
    ---
    Hard drive manufacturer: One GigaByte = 1000 bytes

    Wrong. Hard drive manufacturers and everyone else who knows how to use SI prefixes [nist.gov] correctly knows that one gigabyte is 1,000,000,000 bytes.

    Software/everyone else: One GigaByte = 1024 bytes

    Wrong again. If in this case you mean 2^30 bytes, 1 GiB = 1,073,741,824 bytes. What about network people? To them, 1 GB is certainly 1,000,000,000 bytes. Does the "M" in 100 Mb/s Ethernet mean 1,000,000 bits per second (10^6) or 1,048,576 (2^20)? More and more people are becoming aware of this issue and are moving from the old, ambiguous use of power-of-ten prefixes to represent powers of two, to the new, more precise and separate binary prefixes (a small conversion sketch follows this comment). Case in point: BitTorrent [bittorrent.com]. Download the client, use it, and you'll notice that bytes in binary multiples are correctly referred to as KiB, MiB, etc.

    If you had actually read the link I posted on SI prefixes for binary multiples [nist.gov], you might know the following historical context:

    Once upon a time, computer professionals noticed that 2^10 was very nearly equal to 1000 and started using the SI prefix "kilo" to mean 1024. That worked well enough for a decade or two because everybody who talked kilobytes knew that the term implied 1024 bytes. But, almost overnight a much more numerous "everybody" bought computers, and the trade computer professionals needed to talk to physicists and engineers and even to ordinary people, most of whom know that a kilometer is 1000 meters and a kilogram is 1000 grams.
    Then data storage for gigabytes, and even terabytes, became practical, and the storage devices were not constructed on binary trees, which meant that, for many practical purposes, binary arithmetic was less convenient than decimal arithmetic. The result is that today "everybody" does not "know" what a megabyte is. When discussing computer memory, most manufacturers use megabyte to mean 2^20 = 1 048 576 bytes, but the manufacturers of computer storage devices usually use the term to mean 1 000 000 bytes. Some designers of local area networks have used megabit per second to mean 1 048 576 bit/s, but all telecommunications engineers use it to mean 10^6 bit/s. And if two definitions of the megabyte are not enough, a third megabyte of 1 024 000 bytes is the megabyte used to format the familiar 90 mm (3 1/2 inch), "1.44 MB" diskette. The confusion is real, as is the potential for incompatibility in standards and in implemented systems.
    Faced with this reality, the
    IEEE Standards Board [ieee.org] decided that IEEE standards will use the conventional, internationally adopted, definitions of the SI prefixes. Mega will mean 1 000 000, except that the base-two definition may be used (if such usage is explicitly pointed out on a case-by-case basis) until such time that prefixes for binary multiples are adopted by an appropriate standards body.

    In December 1998 the International Electrotechnical Commission (IEC), the leading international organization for worldwide standardization in electrotechnology, approved as an IEC International Standard names and symbols for prefixes for binary multiples for use in the fields of data processing and data transmission.
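A small sketch of the distinction being argued above, assuming plain power-of-ten SI prefixes versus the power-of-two IEC binary prefixes:

```python
SI  = {"kB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}
IEC = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

# The "1 GB" on a drive label versus the "1 GiB" an OS might report:
print(SI["GB"])                # 1000000000 bytes
print(IEC["GiB"])              # 1073741824 bytes
print(SI["GB"] / IEC["GiB"])   # ~0.93 -> the familiar "missing" capacity

# The third, floppy-era megabyte mentioned in the quoted NIST text:
floppy_mb = 1024 * 1000        # 1,024,000 bytes
print(1_474_560 / floppy_mb)   # 1.44 -> the "1.44 MB" diskette
```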

  • by carlmenezes ( 204187 ) on Monday February 28, 2005 @04:34AM (#11800921) Homepage
    Kinda gives you an idea of how huge a 64-bit address space is. I mean, 116 GB (about 2^40 bits) is still 24 bits smaller -- a factor of about 16 million (10 bits = 1K, 24 = 10+10+4) -- than the number of locations 64 bits can address (arithmetic sketch below).

    Could this be an indication of the data volumes we will be dealing with in the future, when 32-bit computing on the desktop is obsolete?
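A quick check of the parent's arithmetic, assuming the chip capacity is counted in bits (one terabit, roughly 2^40) and compared against the 2^64 locations a 64-bit address can name:

```python
terabit   = 10**12                 # bits on the postage-stamp chip
as_gib    = terabit / 8 / 2**30    # same capacity in GiB
addresses = 2**64                  # distinct values a 64-bit address can take

print(round(as_gib))               # ~116 (GiB)
print(round(addresses / 2**40))    # 16777216 -> the ~16-million-fold, 24-bit gap
```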

"Look! There! Evil!.. pure and simple, total evil from the Eighth Dimension!" -- Buckaroo Banzai

Working...