Nano-Scale Memory Fits A Terabit On A Square Inch

prostoalex writes "San Jose Business Journal talks about Nanochip, a company that's developing molecular-scale memory: 'Nanochip has developed prototype arrays of atomic-force probes, tiny instruments used to read and write information at the molecular level. These arrays can record up to one trillion bits of data -- known as a terabit -- in a single square inch. That's the storage density that magnetic hard disk drive makers hope to achieve by 2010. It's roughly equivalent to putting the contents of 25 DVDs on a chip the size of a postage stamp.' The story also mentions IBM's Millipede project, in which scientists are trying to build nano-scale memory that relies on micromechanical components."
  • Finally (Score:1, Funny)

    by Anonymous Coward
    I'll be able to store my gigaquads in a compact space.
  • by Anonymous Coward
    They were talking about this a while back on simulatedlucidity.com
  • 25 DVDs? (Score:1, Insightful)

    Last time I checked, a DVD was (roughly) 4 GB, so 25 DVDs is only 100GB?
  • Hmm (Score:4, Insightful)

    by pHatidic ( 163975 ) on Sunday February 27, 2005 @06:47PM (#11797742)
    These arrays can record up to one trillion bits of data -- known as a terabit -- in a single square inch.

    Is that a hardware terabit or a software terabit?

    • by account_deleted ( 4530225 ) on Sunday February 27, 2005 @06:49PM (#11797759)
      Comment removed based on user account deletion
    • Probably neither. Are you familiar with the word "approximation"?

      And even in the extremely unlikely case that exactly one terabit exactly fits in exactly one square inch, the answer to your question is contained in the sentence you quoted anyway: "one trillion bits of data".
    • As a rule of thumb, bits are measured in base 10 (thousands) and bytes are measured in base 2 (1024's). Other than when marketers get a hold of numbers, this is usually true.
  • What about speed? (Score:5, Interesting)

    by GNUALMAFUERTE ( 697061 ) <almafuerte@gmail.cUMLAUTom minus punct> on Sunday February 27, 2005 @06:48PM (#11797751)
    This kind of device would be incredible for backup purposes, but the recording method also seems to be fast. Would they accept almost unlimited rewrites? In that case, this technology could finally replace magnetic devices. Solid state is always better, but so far the existing alternatives don't offer the durability and flexibility of hard disks.
  • Go ahead (Score:5, Informative)

    by killa62 ( 828317 ) on Sunday February 27, 2005 @06:50PM (#11797769)
    Mod me -1 Redundant if you like, but for the people out there: 1 trillion bits = 125,000,000,000 bytes = 116 GB, or if you're a hard drive manufacturer, it's 125 GB.
    • Re:Go ahead (Score:2, Informative)

      by bohnsack ( 2301 )
      1 trillion bits is 125 GB whether you're a hard drive manufacturer or not, as "G" is exactly defined as 10^9 [nist.gov]. If you're interested in representing this quantity in terms of multiples of 2^30, as in your 116, 1 trillion bits is more correctly stated as 116.4 GiB, 116.4 gigabinary bytes, or 116.4 gibibytes. See the SI spec on prefixes for binary multiples [nist.gov] for more information.
      • Whoever came up with those suffixes (gibi etc) should be forced to say them out loud, in public! They sound ridiculous!
    • by falser ( 11170 ) on Sunday February 27, 2005 @11:28PM (#11799867) Homepage
      Why can't people just standardize on a common unit of measurement, such as the number of Encyclopaedia Britannicas or the number of Libraries of Congress?
    • Kinda gives you an idea of how huge a 64-bit address space is. I mean, 116 GB is still 24 bits - about 16 million times (10 bits = 1K; 24 = 10+10+4) - smaller than the amount of data 64 bits can address.

      Could this be an indication of the data volumes we will be dealing with in the future, when 32-bit computing on the desktop is obsolete?
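
    The competing figures in this thread (116 vs. 125 vs. 128) differ only in which prefixes you use. A quick check in Python, assuming the decimal terabit the article defines:

      bits = 10**12                # one trillion bits, the article's terabit
      bytes_total = bits // 8     # 125,000,000,000 bytes

      print(f"{bytes_total / 10**9:.1f} GB")    # 125.0 GB  -- decimal, the "manufacturer" figure
      print(f"{bytes_total / 2**30:.1f} GiB")   # 116.4 GiB -- binary, the "116 GB" figure

      # The 64-bit musing above: 2^64 addresses vs. a binary terabit (2^40 bits)
      print(2**64 // 2**40)                     # 16,777,216 -- "about 16 million times"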
  • by cybercobra ( 856248 ) on Sunday February 27, 2005 @06:51PM (#11797770)
    Cool, the next time I need to send something over sneakernet to someone far away, I'll just send a postcard with 2 stamps on it. 1 postal and 1 storage stamp.
  • More information (Score:4, Informative)

    by ploss ( 860589 ) on Sunday February 27, 2005 @06:52PM (#11797781)
    More information about the company can be found at their website, http://www.nanochip.com.nyud.net:8090 [nyud.net][Coral Cache Link].
  • impressive (Score:5, Funny)

    by Hellasboy ( 120979 ) on Sunday February 27, 2005 @07:02PM (#11797852)
    I'm impressed... 25 DVDs for 1 terabit. But I think we're all holding out until we hit 150 Zip disks on a square centimeter, or 172 LS-120s on the size of a Heineken bottle cap.
  • Issues untold yet (Score:5, Interesting)

    by karvind ( 833059 ) <karvind@NospaM.gmail.com> on Sunday February 27, 2005 @07:03PM (#11797853) Journal
    (a) Reliability: No word on how reliable the system and its elements are. It is one thing to make a 1M by 1M array and another to make something bigger. The silicon semiconductor industry is a lot more mature at transferring electronic processes. MEMS processes still have low yield and haven't found commercial success yet (except for the accelerometers used in air bags, etc.).

    (b) Testing: How are they going to test this trillion-element chip? Testing complexity grows exponentially with the number of elements, and it will require serious consideration. It may be worthwhile to make smaller components which can be tested easily (modern chips have a third of their cost devoted to testing).

    (c) Redundancy: Is this process going to give better yield than conventional electronic processes? If not, the common technique of redundancy has to be used. This brings costs in terms of power, speed and delay. For example, if the yield is only 90%, you will need ~110% resources. Not only do you have to make up for the defective components, you also have to provide a lot more redundancy for testing. At some point it becomes worthless, as performance drops through the floor.

    Still, it is good work, and it will perhaps generate some new ideas.

    • (a) Reliability: No words about how reliable the system and elements are ...
      (b) Testing: How are they going to test this trillion element chip ? ...
      (c) Redundancy: Is this process going to give more yield than conventional electronic processes ?


      Do you understand the definition of a prototype?

      I'm sure all your questions will be answered in due time, in 5 or 10 years when the device hits the street.
      • Re:Issues untold yet (Score:2, Interesting)

        by karvind ( 833059 )
        I don't want to flame you, but I would take a scientific/engineering approach rather than accepting the opinion of a Wall Street magazine. It would be worth following the bursting of the MEMS bubble over the past 4-5 years. Even after 10 years of work, MEMS elements have serious packaging issues. Intel withdrew its MEMS program because it didn't have enough yield. So just making a prototype is not the end of the story.

        As an engineer you have to take things with a pinch of salt. Every scientific i
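
    The redundancy arithmetic in point (c) above is easy to make concrete. A minimal sketch, ignoring test overhead:

      # Spare resources grow as 1/yield: at 90% yield you must provision
      # roughly 111% of nominal capacity to ship a fully working part.
      for y in (0.99, 0.90, 0.75, 0.50):
          print(f"yield {y:.0%}: provision ~{1 / y:.0%} of nominal resources")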
  • ATM or AFM? (Score:3, Informative)

    by fermion ( 181285 ) on Sunday February 27, 2005 @07:06PM (#11797881) Homepage Journal
    From the article it is hard to tell what they are talking about. IBM used an atomic tunnelling microscope (ATM), a relatively complicated piece of equipment that relies on the fact that quantum particles can tunnel through a potential barrier, to move atoms. The ATM can be used either to create an atomic-scale picture of a surface or to move atoms. An atomic force microscope, by contrast, is simply a physical hammer that gently taps a surface and builds an image from the change in deflection. The tip on an AFM is currently so fragile I don't think it could be used to move atoms. The lifetime of a tip is pretty short just because of wear, and there is no way to reliably create good tips.

    So we must assume they are talking about an ATM, which is a largish and complicated piece of equipment. It requires a piezoelectric device to move the tip to the proper place on the substrate; for years, such devices kept cell phones large. The ATM requires a highly sensitive feedback loop to keep the current constant. And it still requires a very delicate tip that can be easily damaged. Durable tips are probably years away and will involve carbon nanotubes; tips with a lifetime of more than a few months are probably even further away.

    It is a neat idea and probably works well in the laboratory on a vibration-cancellation table. How would it work in a portable on the train or in the car? Does anyone have any real details on the technology?

    • AFM (Score:5, Informative)

      by DaleBob ( 676487 ) on Sunday February 27, 2005 @07:57PM (#11798226)
      The IBM Millipede project doesn't use tunneling microscope technology (ATM, or usually STM). It uses a modified AFM tip that can be resistively heated. The hot tip pushes into a polymer surface and creates a hole. The hole can be "erased" by heating close to the surface and the region around the hole melts and fills it in. The reading is done with cold tips using regular AFM technology.
    • Re:ATM or AFM? (Score:2, Informative)

      As someone already mentioned, I think the ATM you refer to is usually called an STM (scanning tunnelling microscope). However, an AFM does not need to operate in contact (hammering) mode. There are other techniques, called non-contact/lift mode, in which you don't sense the repulsion from the surface: you drive the tip near resonance and then sense the change in frequency as the tip is pulled toward the surface.
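
    For the curious, the frequency-shift detection described above reduces to one line of physics: a tip-surface force gradient F' detunes the cantilever resonance by roughly delta_f = -f0 * F' / (2k). A toy sketch in Python, with the cantilever numbers invented but plausible:

      f0 = 300e3      # Hz, free resonant frequency of the cantilever (invented figure)
      k = 40.0        # N/m, cantilever spring constant (invented figure)
      F_grad = 0.5    # N/m, attractive tip-surface force gradient near the surface

      delta_f = -f0 * F_grad / (2 * k)
      print(f"{delta_f:.0f} Hz")   # -1875 Hz: the small detuning the electronics track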
  • Checksums (Score:4, Funny)

    by LaCosaNostradamus ( 630659 ) <LaCosaNostradamus AT mail DOT com> on Sunday February 27, 2005 @07:07PM (#11797886) Journal
    25 DVDs on a chip the size of a postage stamp

    Well, not with the software overhead of the various checksums we'll have in 2010:
    • MPAA/RIAA field (the "copy checksum")
    • Dept. of Homeland Security header (the "red checksum")
    • UN Standards bit (the "blue checksum")
    • .SUM (the "Microsoft checksum")
    Those are apt to take up quite a bit of space. So maybe you'll get 15 DVDs (maybe 20 by paying Microsoft an expansion fee) on that postage stamp.
  • by mnmn ( 145599 ) on Sunday February 27, 2005 @07:07PM (#11797890) Homepage
    Some earlier stories mentioned stacking layers of memory to increase capacity. So, considering structural, voltage, data and addressing layers as well, how much data can we store in a 1-inch cube?

    Whatever that number, we'll still be running out of space since Windows 2050 will take 1/3rd of that space and games+movies the remaining 2/3rd.
  • by ryanmfw ( 774163 ) on Sunday February 27, 2005 @07:11PM (#11797920)
    So, if we attached a couple square inches of this stuff to a pigeon, or filled a 747 with some of these chips, and flew it around the world, how fast would the transfer rate be?
    • So, if we attached a couple square inches of this stuff to a pigeon, or filled a 747 with some of these chips, and flew it around the world, how fast would the transfer rate be?

      I know you're trying to be funny but...

      What most people really look for in electronic communication networks is not transfer rate but good latency: if I can "download" the entire Library of Congress by having it FedExed to me in a big box full of disks, but I have to wait 3 weeks for the snail mail request to reach the LoC, the guys to package everything up and the box to reach me eventually, I may be better off downloading the LoC on a slower link that answers immediately.
      • but I have to wait 3 weeks for the snail mail request to reach the LoC, the guys to package everything up and the box to reach me eventually, I may be better off downloading the LoC on a slower link that answers immediately.

        And never mind that if the source is persistent and fast, the content is changing, or it is impossible to predict which part you'd want later, it might be superior to simply download on demand. The missing factor here, though, is persistent. URLs move. Torrents die. There is no "repository
    • "So, if we attached a couple square inches of this stuff to a pigeon, or filled a 747 with some of these chips, and flew it around the world, how fast would the transfer rate be?"

      There'd be an amazing transfer rate, but the lag would make CounterStrike quite difficult to play.

    • African or European pigeon?
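
    The joke has real arithmetic behind it. A sketch, with the payload and flight time invented for illustration:

      chip_bytes = 1e12 / 8      # one terabit chip, in bytes
      chips = 1_000_000          # assume a 747 hauls a million postage-stamp chips
      flight_s = 24 * 3600       # assume ~24 hours to circle the globe

      throughput = chips * chip_bytes / flight_s
      print(f"{throughput / 1e9:,.0f} GB/s")    # ~1,447 GB/s of raw bandwidth...
      # ...at a one-way latency of 24 hours: enormous throughput, unplayable ping.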

  • by iammaxus ( 683241 ) on Sunday February 27, 2005 @07:13PM (#11797931)
    These arrays can record up to one trillion bits of data -- known as a terabit -- in a single square inch. That's the storage density that magnetic hard disk drive makers hope to achieve by 2010.

    I'd be really surprised if we see this technology on the shelf in anything close to 5 years from now.

  • Google (Score:2, Informative)

    by blcknight ( 705826 )
    http://www.google.com/search?q=1+terabit+in+gigabytes - 1 terabit is 128 gigabytes. That is the definitive answer from Google. It's not 116, not 125.
      Yep... the poster you are correcting probably made the mistake of thinking that 1 terabit = 1 trillion bits; you still have to count those in powers of 2 as well. 1 kilobit = 1024 bits, etc.
  • by DaleBob ( 676487 ) on Sunday February 27, 2005 @07:15PM (#11797947)
    There was an article (I believe by researchers from IBM) in Scientific American about two years ago regarding Millipede that said they expected the technology to come to market in 3 years. Now the article from the post suggests the project is all but dead. What happened? I'm too lazy to actually look at the patents, but it isn't clear at all how this new technology actually differs from Millipede. I'd guess the write and erase mechanisms are different.
  • by NaruVonWilkins ( 844204 ) on Sunday February 27, 2005 @07:16PM (#11797949)
    My god, it's two dimensional! Our memory limitations are over!
  • by danila ( 69889 ) on Sunday February 27, 2005 @07:26PM (#11798019) Homepage
    It's amazing how lucky these chip manufacturers are. Imagine to what lengths people need to go in other industries in order to convince customers to upgrade. If all you are selling is a damn chocolate bar, there is only so much that you can do to improve it. They had perfectly edible chocolate bars 100 years ago and there isn't much besides slapping "10% free" on the package that you can do. Ditto for things like headphones, ballpoint pens and pretty much everything else.

    But the manufacturers of memory chips, hard disks, even CPUs, have it really easy. All they need to do is solve the technological problem of doubling the capacity/performance and the customer is eager to shell out some $$$ to get the new version. No focus groups are needed, no expensive marketing surveys. The only thing you need to do to please the customer is basically improve the obvious performance metric by 100%. You don't need to lie and twist the facts as those guys in cosmetics do with "73% more volume" for your eyelashes or "54% healthier hair" bullshit. You just make your CPU twice as fast and that flash chip twice as large, and you are done.

    And if you really want to, you can say it will make the Internet faster, or something...
    • All they need to do is solve the technological problem of doubling the capacity/performance and the customer is eager to shell out some $$$ to get the new version.

      All the chocolate bar maker needs to do is wait until the customer gets hungry and needs a new bar. No R&D needed!
  • by __aailob1448 ( 541069 ) on Sunday February 27, 2005 @07:58PM (#11798228) Journal
    We don't measure HDs in terabits. 1 Tbit = 128 GBytes, or 128 gigs.

    Second, converting this from inches to centimeters, we get slightly less than 20 GB/cm^2.

    Yes, ladies and gentlemen, 20 gigs per square centimeter.

    That's a nice increase but it sure as hell isn't overwhelming.

    Assuming a radius of 5 cm for a 3.5" HD, we get a surface of 80 cm^2 per platter. That comes to 800 Gb per platter - around 8 times the current density.

    These new-gen HDs will be at most 8 times bigger than those we have right now.

    That's it. 8 times. Not even a single order of magnitude.

    Now mod this up or be destroyed!

    • by dbIII ( 701233 ) on Sunday February 27, 2005 @09:36PM (#11799026)
      We don't measure HDs in terabits
      It's a business journal - and you can tell. We don't measure size in molecules either - it's a long way from H2 to a really big polymer chain. Or, since molecules don't make sense where crystals are involved: a single crystal of silicon like they cut the wafers from, a jet turbine blade, or a cubic galena crystal the size of a house.

      They should stick to their standard business-journal units - football fields - if they want to be vague.

      8 times. Not even a single order of magnitude.
      Think of the readership. A response from some would be: "IBM can only increase it by two orders of magnitude, but these guys can increase it 8 times! Buy! Buy! Buy!"

      We need better teaching of basic mathematics in high schools so the guy whose dad owns the company still picks up a clue along the way. Either my country has become a dumping ground for the worst of US management or the USA is really in trouble.

    • "We don't measure HDs in Terabits . 1 Tbit = 128 GBytes or 128 gigs3"

      YOU may not, but I assure you that those doing research into hard-drive platter manufacture do. What's more, this isn't a hard-drive platter, it's a random-access device, and those are ALWAYS measured in bits, not bytes (except when labeling a product for the masses).

      This is not a product, ready to ship, it's a prototype, and as such you're being exposed to the technical terminology of the industry that produces these devices, NOT the techn
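
    For anyone re-running the conversions in this thread, a quick sketch (idealized platter; hub hole and servo overhead ignored):

      import math

      bytes_per_in2 = 1e12 / 8                  # 1 Tbit/in^2 in bytes: 125 GB per square inch
      bytes_per_cm2 = bytes_per_in2 / 2.54**2   # a square inch is ~6.45 cm^2
      print(f"{bytes_per_cm2 / 1e9:.1f} GB/cm^2")   # 19.4 -- the "slightly less than 20"

      side_cm2 = math.pi * 5**2                 # one side of an idealized 5 cm radius platter
      print(f"{side_cm2 * bytes_per_cm2 / 1e12:.2f} TB per side")   # 1.52 TB
      # which comes out somewhat above the 800 Gb quoted upthread -- closer to an
      # order of magnitude over 2005-era platters than to 8x.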
  • by spworley ( 121031 ) on Sunday February 27, 2005 @08:06PM (#11798292)
    The article says they have working prototypes. Of what? The implication is that it's a device a square inch in size that holds a terabit of data. But from the usage of "square inch" I think the reality may be a density of 1 terabit per square inch, not an actual terabit device. (I hope I'm wrong!) For example, they may have a prototype that stores 1000 bits in an area of a billionth of a square inch. That's a lot different from an actual terabit device! I wish articles had more details...
      It's one thing to store a small amount of data (a few thousand bits) very densely. It's another to be able to write and access large amounts of data stored at that density reliably at high data rates. Just achieving a high density of data storage is not a big deal. The article doesn't indicate that they've actually solved the problems necessary for a successful commercial product.
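
    The distinction between density and capacity is easy to quantify. A sketch, with the prototype's scan-field size invented for illustration:

      density = 1e12                  # bits per square inch, the headline density

      # A true square-inch device at that density holds the full terabit:
      print(f"{density:.0e} bits")    # 1e+12

      # But a prototype scanning only a 10 um x 10 um field at the same
      # density (an invented but AFM-realistic figure) holds far less:
      field_in2 = (10 / 25_400) ** 2  # 10 microns in inches, squared
      print(f"{density * field_in2:,.0f} bits")   # 155,000 bits -- about 19 KB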
  • by exp(pi*sqrt(163)) ( 613870 ) on Sunday February 27, 2005 @08:23PM (#11798411) Journal
    ...there is a single atom. Orbiting it is an electron. When it's in a spin-up state I consider it to contain a 1; when spin-down, a 0. There: a prototype of a multi-exaterapetabit/mm^3 storage device at the end of my nose. Oh wait - I might be able to hype this up more. Oh yes... it's an electron, so it's in a superposition state. It's a multi-exapetaterabyte/mm^3 quantum computer at the end of my nose. Surely /. has got to publish this story now.
  • by luwain ( 66565 ) on Sunday February 27, 2005 @08:28PM (#11798454)
    Prototype Arrays of Atomic Force Probes?? Is this real technology? I wonder if the talk of a real product by 2007 is credible, or just marketing to attract venture capital. I'm still waiting for products based on NRAM (made up of arrays of carbon nanotubes) from Nantero (nantero.com). I wonder if "atomic force probes" are easier to manufacture than "arrays of carbon nanotubes"? Will Nanochip beat Nantero to the marketplace, or will they just burn through venture capital, and next year we'll hear about another "Nano-something" company with some other "revolutionary technology" that's going to produce a marketable product "real soon now"?
    • Prototype Arrays of Atomic Force Probes?? Is this real technology?

      Maybe. Maybe not. Who knows? But one thing's for sure: with an acronym like PAAFP I'm pretty sure the marketing department hasn't found out about it yet.
  • About 5 years ago there was a story just like this on Slashdot that made quite a commotion. The claim was three-dimensional non-volatile memory with a capacity of 660 gigs per cubic cm. So far I haven't heard anything since.

    It would be nice to actually be able to buy this technology that always seems to be "about to come out". It would also be nice for it to be priced comparably to current consumer storage devices.

    For now we will still be stuck with the bottleneck of
    • It's called bubble memory.

      It's been here and gone, unfortunately due to cost for the most part. It was a wonderful concept, albeit a bit slow (slow meaning still much, much faster than mechanical means, i.e. hard drives).

      There was a company called Elephant (I believe named after the company that sold floppy disks in the 70s and early 80s) that sold a bubble-memory-based hard drive with an IDE interface. Last I saw, some 5 years ago, it was like 1.2 gig; it was meant for mil-spec applications and had a shock res
  • by elronxenu ( 117773 ) on Sunday February 27, 2005 @09:16PM (#11798884) Homepage
    They didn't explain how many Volkswagens per metric second.
  • data transfer rate (Score:5, Interesting)

    by kebes ( 861706 ) on Sunday February 27, 2005 @10:11PM (#11799304) Journal
    Most posters seem unimpressed with the storage density they are reporting, but I'd like to point out a couple of things. (Note that I use atomic force microscopes in my "job" -- I do academic research.)

    Firstly, the storage density they are reporting is for a prototype setup, and it's already as good as current HD technology. The exciting thing is not the value they currently have, but the fact that this technology can be pushed very, very far. Comparing this new technology to a mature technology (magnetic disks) is thus not really fair. I do believe that if this new technology is developed for 10 years, it could outperform magnetic drives in terms of storage density.

    Secondly, the data transfer rate can be much higher with this new technology. The Millipede project uses an array of thousands of AFM-like tips, which means that in principle 1000 bits of data are read at a time (compared to, for example, 4 bits read at a time in a magnetic disk drive with 4 platters). We all know that HD access is a major bottleneck in modern computers. This new concept could immediately speed that up by 2 orders of magnitude. I think that's worthy of consideration!

    That having been said: don't hold your breath. MEMS is a rapidly evolving field, but it will be a while before it can really beat out the mature magnetic technology. The article also doesn't give any details on how this new technology works. The potential is great, but a lot of work has to be done.
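
    Taking the parent's premise at face value (the same per-element read rate for a disk head and an AFM tip, which is generous to the tips), the parallelism argument in round numbers:

      per_element = 50e6     # bits/s per read element -- an invented, round figure
      disk_heads = 4         # one active head per platter surface, as above
      afm_tips = 1000        # tips reading simultaneously in a Millipede-style array

      print(f"disk:  {disk_heads * per_element / 1e6:,.0f} Mbit/s")   # 200
      print(f"array: {afm_tips * per_element / 1e6:,.0f} Mbit/s")     # 50,000
      # 1000 tips vs. 4 heads is a factor of 250 -- the "2 orders of magnitude".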
  • by isny ( 681711 ) on Sunday February 27, 2005 @10:45PM (#11799563) Homepage
    Boss: What are you two working on? You've been sitting and staring at the screen for hours.
    Engineer 1: Uh....the millipede project.
    Engineer 2: Yeah. Lots of data stored in two dimensional space.
    Boss: Great! Keep up the good work. (Leaves)
    Engineer 1: Whew that was close.
    Engineer 2: In more ways than one. Look out! Here comes the spider again...
    Engineer 1: I love MAME.
  • A lot of people have complained about how many technologies get reported, and are never heard of again.

    This is pretty ironic, given that most companies (FUD aside) will only talk about products to a) attract venture capital, or b) sell an actual product.

    And any company which has burned all the v.c. without bringing anything to market is hardly going to trumpet about it.

    Whether this technology will be the next best thing or not is open to question (that's what makes the stock market work ;). What I fo
