Biotech Hardware Science

NVRAM With Disordered Assemblies (Smaller/Cheaper)

chadjg writes "Jim Tour of Rice University says, 'Our research shows that ordered precision isn't a prerequisite for computing. It is possible to make memory circuits out of disordered systems.' The article on www.e4engineering.com says the team has made 'NanoCells,' self-assembled devices built from gold nanowires and organic conductive molecules. These NanoCells are apparently the first devices of their kind to be made into working microelectronic devices." Yep. Let an untold number of machines try to create NanoCells, and statistics says you'll find the most efficient kind.
  • Just a thought... (Score:3, Interesting)

    by whig ( 6869 ) * on Monday November 10, 2003 @09:46AM (#7433515) Homepage Journal
    Is this a step towards creating quantum-effect neural networks (i.e., thinking machines)?
    • by Black Parrot ( 19622 ) on Monday November 10, 2003 @09:54AM (#7433555)


      > Is this a step towards creating quantum-effect neural networks (i.e., thinking machines)?

      No, it's just a memory technology.

      • No, it's just a memory technology.

        A nice trick would be to have a set of molecular parts that self-assemble into neurodes usable in artificial neural networks, and are able to use the gold conductors as artificial axons and dendrites. Some other molecules would be needed to form an interface to the outside world for I/O. Imagine the number of connections per cubic millimeter.

    • Re:Just a thought... (Score:5, Interesting)

      by Zocalo ( 252965 ) on Monday November 10, 2003 @09:57AM (#7433572) Homepage
      Probably not. When I was learning about logic circuits way back when, we tried wiring circuits completely at random just to see what would happen. Almost invariably the initial chaos of the breadboarded circuit would stabilise either into a static state or oscillation between two or three set states within a dozen clocks. The longest times to stabilisation that we got were in the mid-twenties. A simple demonstration of the principle that inside every chaotic system is order trying to get out.
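
      A rough illustration of the behaviour described above, as a toy random Boolean network rather than the actual breadboard experiment (the network size, fan-in and seeds below are arbitrary assumptions):

      import random

      def random_boolean_network(n_nodes=16, fan_in=2, seed=1):
          """Build a random synchronous Boolean network: each node reads
          `fan_in` randomly chosen nodes through a randomly chosen truth table."""
          rng = random.Random(seed)
          inputs = [[rng.randrange(n_nodes) for _ in range(fan_in)] for _ in range(n_nodes)]
          tables = [[rng.randrange(2) for _ in range(2 ** fan_in)] for _ in range(n_nodes)]
          return inputs, tables

      def step(state, inputs, tables):
          """Advance every node one clock tick, all nodes updating in parallel."""
          new_state = []
          for ins, table in zip(inputs, tables):
              index = 0
              for i in ins:
                  index = (index << 1) | state[i]
              new_state.append(table[index])
          return tuple(new_state)

      def settle_time(n_nodes=16, seed=1, max_steps=10000):
          """Iterate from a random start until a state repeats; report how many
          clocks the transient lasted and how long the final cycle is."""
          inputs, tables = random_boolean_network(n_nodes, seed=seed)
          rng = random.Random(seed + 1)
          state = tuple(rng.randrange(2) for _ in range(n_nodes))
          seen = {state: 0}
          for t in range(1, max_steps):
              state = step(state, inputs, tables)
              if state in seen:
                  return seen[state], t - seen[state]   # (transient, cycle length)
              seen[state] = t
          return None, None

      for seed in range(5):
          transient, cycle = settle_time(seed=seed)
          print(f"network {seed}: settles after {transient} clocks into a cycle of length {cycle}")

      Typically these settle quickly into a fixed point or a short cycle, echoing the behaviour of the randomly wired breadboard circuits described above.
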
      • That's F------up!
        What is the order?
        Why would the material spontaneously be trying to organize itself?
        This might be too deep for a discussion here, but I have to wonder what it is inside of chaos that is willfully trying to organize itself.
        Is it an effect of randomness? Some physical property of the materials that causes certain molecules to align and thereby create order?
        Is it God?
        I honestly would like to know who/what is pulling the strings here!
        • Re:Just a thought... (Score:5, Informative)

          by Zan Zu from Eridu ( 165657 ) on Monday November 10, 2003 @10:51AM (#7433859) Journal
          The order is the chaotic system itself; chaotic systems are not random but deterministic: their output only appears to be random, but it is ordered. By wiring the circuits randomly you do not create randomness in the system; you need (truly) random inputs to do that (and once you do, the system will no longer fall into or periodically orbit its attractors the way the original poster describes).

          Read up about chaos theory and fractal geometry, this is not unusual behaviour in complex systems.

          • OK, right off the bat, that seems like an impossibility:

            you say that chaotic systems ARE deterministic.

            Then chaos is an illusion, right?

            Maybe they are just words, but I always thought that chaotic and deterministic were opposites.

            If you're saying that chaos is never truly chaotic, and that it is instead ALWAYS deterministic, then some belief systems (mine actually) will have to be rethought because if there is no such thing as chaos, then there is no such thing as free will.

            Free will is that: Free.

            So
            • Re:Just a thought... (Score:5, Interesting)

              by Zan Zu from Eridu ( 165657 ) on Monday November 10, 2003 @06:16PM (#7438021) Journal
              Then chaos is an illusion, right?

              Nope, chaos means that the system responds with big changes in its output to very small changes in its input.

              If you're a little into math, Verhulst's model of biological growth might help. The model is simply x(n+1) = a * x(n) * (1 - x(n)), an iterative model where x(n) is a number between 0 and 1 that indicates the population density at step n, and a (the Malthusian factor) represents the fertility, a number between 0 and 4.

              If you choose a factor a <= 1, the model simulates a dying population: no matter what x(0) you put in, after some iterations it will become 0.

              If you pick 1 < a <= 2, the model simulates a stable population: no matter what x(0) you put in, after some iterations it will become 1 - 1/a.

              If you pick 2 < a <= 3, the model still converges to 1 - 1/a, but now it oscillates around this value at an ever smaller absolute distance.

              Models with 1 < a <= 3 are balanced, but the interesting stuff starts happening when we pick 3 < a <= 4, because now the model starts bifurcating and eventually behaves chaotically. If we take a = 3.2, for instance, the model ends up alternating between the values 0.51304451 and 0.79945549, a lot like the original poster's two alternating states.

              Now let's take a = 4 for the sake of argument, because the model is then completely chaotic. If we start this model with x(0) = 0.6875 we get x(12) = 0.925930303, but if we add just 0.0001, so x(0) = 0.6876, we get x(12) = 0.5676923. That's a big change in output for a small change in input.

              Write a little program and play with this model to really see how randomly it seems to behave, while it's still ruled by a simple deterministic formula (a minimal sketch follows at the end of this comment).

              Maybe they are just words, but I always thought that chaotic and deterministic were opposites.

              Not really, chaotic in the mathematical sense means hard to predict, while non-deterministic or random means impossible to predict.

              If you're saying that chaos is never truly chaotic, and that it is instead ALWAYS deterministic, then some belief systems (mine actually) will have to be rethought because if there is no such thing as chaos, then there is no such thing as free will.

              I'm not saying there is no randomness in the world; I'm only saying that you can't generate true randomness with deterministic systems (like computers) alone. You need a truly random source (like the clicks of a Geiger counter) for that.

              As for free will, I think Hume's compatibilism [vmi.edu] could be helpful to you. Very much oversimplified, Hume defines free will as the freedom to do what one feels like doing (meaning you're still a slave to your passions and feelings, but that's what defines you).

              Is free will an illusion, or are there really things that are non-deterministic?

              The generally accepted interpretation of quantum mechanics claims there is true randomness in the world. However, I personally really don't see how non-determinism would help you in creating a rational definition of free will. If your free will is driven by truly random processes in nature, "rational thought" itself becomes no more than a blind man led by a fool.

              I personally think that (the concept of) "free will" was a necessary step in our evolution to unify the various unconscious processes in our minds that drive and define us (that generate our feelings, inspirations and insights). It's nature's way of assuring you that these really are your ideas and feelings, even though you don't know exactly how they came into being.
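
              For anyone who wants to follow the "write a little program" suggestion, here is a minimal sketch of the Verhulst model described above (plain Python; the printed digits will vary slightly with floating-point rounding):

              def verhulst(a, x0, steps):
                  """Iterate the logistic map x(n+1) = a * x(n) * (1 - x(n))."""
                  xs = [x0]
                  for _ in range(steps):
                      xs.append(a * xs[-1] * (1.0 - xs[-1]))
                  return xs

              # a = 3.2: after a short transient the population settles into a period-2 cycle.
              print([round(x, 6) for x in verhulst(3.2, 0.5, 40)[-4:]])

              # a = 4.0: chaotic regime; two starting values that differ by only 0.0001
              # give noticeably different values after a dozen iterations.
              for x0 in (0.6875, 0.6876):
                  print(x0, "->", verhulst(4.0, x0, 12)[-1])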

            • Think from one of the higher levels. Suppose you are a superbeing, like a human may be to animals on a farm, to insects in an anthill or to AI objects inside an advanced computing environment, or god(s) to this universe.

              To the imaginary superbeing at a much higher level of development, smaller worlds are deterministic. (S)he can design, create, perceive, analyze, observe, destroy, and watch fail as many universes as they want, knowing the outcome of actions in dimensions limited to our knowledge. Just like yo
      • You might enjoy Stephen Wolfram's A New Kind of Science. I heard him talk about it recently. He went through various one dimensional cellular automata. Most settle into obvious patterns, but a few look less regular. He used their appearance as evidence for his ideas. I did not feel so convinced, however. In his talk, he never brought up a formal measure for what he described as randomness, and I stayed a little confused throughout about which systems he described are inherently random, which are very sensit
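
        A rough sketch of the kind of one-dimensional cellular automata being described (not code from the book; the rule numbers, width and step count are arbitrary choices): rule 250 quickly settles into a simple repeating pattern, while rule 30 keeps producing irregular-looking rows.

        def step(cells, rule):
            """Apply an elementary (3-neighbour) CA rule to a row of cells, with wrap-around."""
            n = len(cells)
            return [
                (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
                for i in range(n)
            ]

        def run(rule, width=64, steps=24):
            row = [0] * width
            row[width // 2] = 1          # single seed cell in the middle
            print(f"rule {rule}:")
            for _ in range(steps):
                print("".join("#" if c else "." for c in row))
                row = step(row, rule)

        run(250)   # expands into a regular checkerboard-style pattern
        run(30)    # stays irregular for as long as you care to run it
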
    • There is debate over whether the current example of a thinking machine (the human brain) actually demonstrates quantum effects. There are some structures in the brain that could "act quantumly", but you could say that about anything. I'm not any more qualified to end the debate than anyone else, but I will note that most of the people talking about quantum effects are quite unqualified to do so.
    • by Anonymous Coward
      Why do you think that a "quantum effect" neural network would create a "thinking machine"? There is no evidence that our own neurons make any essential use of quantum superposition, Penrose's speculations notwithstanding.
    • See The Emperor's New Mind [amazon.com] by Roger Penrose.
      In it he argues that the reason we haven't created a thinking machine by now is that we can't simulate quantum effects in a neural network simulation.
      I don't happen to agree with him but this book was #1 on the New York Times booklist for a long time.
    • by sbma44 ( 694130 )
      And seems to presume that the quantum-tunnelling theory of consciousness [erols.com] is correct. Which I think is reaching. It always seemed to me to boil down to "consciousness is hard to understand. So is quantum physics! the two must be connected. we'll figure it out later. for now, let's smash some more subatomic particles together."

      Admittedly it's a more productive approach than just saying "consciousness is intractable" and heading down to the bar or philosophy library (equally productive destinations). Bu

    • I, for one, WELCOME all our new SKYNET [imdb.com] overlords!
    • What makes you think that quantum effects are necessary for machines that think?
  • I Predict (Score:3, Funny)

    by Anonymous Coward on Monday November 10, 2003 @09:49AM (#7433530)
    I Predict that 95%+ of the Slashdot crowd doesn't understand more than 2 words of this, yet will pretend to understand it.
    • "I Predict that 95%+ of the Slashdot crowd doesn't understand more than 2 words of this, yet will pretend to understand it."

      "Our research shows . . ." I understand that much.
    • by TopShelf ( 92521 ) on Monday November 10, 2003 @10:00AM (#7433595) Homepage Journal
      Nano-nano... wasn't that from "Mork & Mindy"?
      • ...sigh...

        Once upon a time we built microcomputers and minicomputers. We finally realised that this sounded dumb, and just called them computers.

        Today, if we build something that we measure in nanometers, it is nanotechnology, and it is cool and interesting.

        Twenty years from now, when we are building things on the scale of picometers, what will we think of calling things "nano"?

        Interestingly though, a hydrogen atom is about 10^-10 m across (one Angstrom), which is a tenth of a nanometer, or 100 picometers. I h
    • 95% of those who actually read it that is, so that pretty much means nobody here will understand it!

      =Smidge=
    • Well I predict.... (Score:5, Insightful)

      by donscarletti ( 569232 ) on Monday November 10, 2003 @10:32AM (#7433753)
      My prediction is that you didn't actually read it yourself.

      This is because if you did, you would realise that it was very well written and not hugely technical. I wouldn't be surprised if 95%+ of the Slashdot crowd did understand it.

      As a general rule, slashdotters seem to get very zealous and have a habit of not RTFAing, but they generally have good comprehension skills and I don't think you give them the credit they deserve.

    • by plumby ( 179557 )
      This is why I turn to Slashdot - to dumb down these things to a level that I can just about grasp.
  • Boy, that was quick... And people say no one RTFAs here...

    • Actually we don't. We just right click on the link and choose "Open link in new tab" and then go back to surfing porn.

      However, I'm surprised that someone hasn't set up an experiment with a bunch of memory cells and a genetic algorithm that would just work through the permutations until it came up with the most efficient way to store a bit.
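
      A toy version of that idea, with everything invented for illustration (the "cell configuration" is just a bit string and the fitness function is an arbitrary stand-in for whatever measurement a real rig would make):

      import random

      rng = random.Random(0)
      GENOME_LEN = 32   # hypothetical per-cell configuration bits

      def fitness(genome):
          # Stand-in for "how reliably does this configuration store a bit?";
          # here it is just the match against an arbitrary target pattern.
          return sum(bit == (i % 2) for i, bit in enumerate(genome)) / GENOME_LEN

      def mutate(genome, rate=0.05):
          return [1 - bit if rng.random() < rate else bit for bit in genome]

      def crossover(a, b):
          cut = rng.randrange(1, GENOME_LEN)
          return a[:cut] + b[cut:]

      population = [[rng.randrange(2) for _ in range(GENOME_LEN)] for _ in range(40)]
      for generation in range(50):
          population.sort(key=fitness, reverse=True)
          survivors = population[:10]               # keep the best configurations
          offspring = [mutate(crossover(rng.choice(survivors), rng.choice(survivors)))
                       for _ in range(30)]
          population = survivors + offspring

      best = max(population, key=fitness)
      print("best configuration score:", fitness(best))
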
  • by EmagGeek ( 574360 ) on Monday November 10, 2003 @10:00AM (#7433599) Journal
    One problem inherent in disordered structures is the inherent differential cross-coupled field tensors present in a non-homogeneous layout of bipolar, dipolar, and unipolar electrical field vectors. These differential tensors lead to random non-unique ionization of co-recombinant carriers and de-ionization of unique co-recombinant carriers. These random ionizations and deionizations manifest in a statistically significant increase in error vector magnitude during bit placement and deplacement, and transfer. It is because of this that highly ordered systems are required for reliable nonvolatile memory arrays.
    • by Junta ( 36770 ) on Monday November 10, 2003 @10:13AM (#7433668)
      The answer is simple, simply reroute the EPS conduit to discharge antimatter through the deflector dish, and possibly adjust the Heisenberg compensator for the occasion.
      • > The answer is simple, simply reroute the EPS conduit to
        > discharge antimatter through the deflector dish, and
        > possibly adjust the Heisenberg compensator for the
        > occasion.

        And voila! A cup of coffee!

      • No, no, no... the EPS conduits are designed to carry plasma from the warp core to the warp nacelles. They're not verified for antimatter containment by any means, especially since you have to go through a level 1 tap in order to redirect anything to the main deflector dish!

        ... unless, of course, you're talking about an episode of Voyager or Enterprise, in which case anything is possible. If Brannon Braga is writing the episode, you might even have the ship explode several times with a large reset button

    • > inherent differential cross-coupled field tensors present in a non-homogeneous layout of bipolar, dipolar, and unipolar electrical field vectors.

      I think this sentence was organically evolved by a jargon generator.
    • by Anonymous Coward
      That post of EmagGeek's was actually pretty funny in its own right, but it's also a sad reflection on the state of Slashdot these days.

      This used to be a forum for the more technically inclined, but now it's largely populated by wannabes who have no hope of understanding any article containing words of more than 3 syllables or requiring concentration for longer than their 15-second attention span.

      It's pretty sad when the technical nature of an article is seen as a reason for derision. It says nothing
      • This, of course, was the whole point of my original post. It was my intent to be funny, although I understood the article just fine. I just wanted to see what the moderation breakdown would be between "Funny" (people who got it) and "Informative" (people who didn't). It was about 50/50.

        Slashdot is a playground when it comes to social experimentation :)
  • Monkeys (Score:2, Funny)

    by potcrackpot ( 245556 )
    A million monkeys were originally hired to conduct this study, but the combined might of animal rights activists and the high costs of bananas prevented it.

    Contrary to popular belief, you have to pay bananas, not peanuts, to get monkeys.
  • by WayneConrad ( 312222 ) * <wconrad&yagni,com> on Monday November 10, 2003 @10:10AM (#7433644) Homepage
    There's not enough detail in the article to even say "gee whiz." Could it be that these guys haven't published yet, and wanted to generate some pre-publication buzz without giving away anything?
  • This is just the next logical step in microelectronics. I imagine it will be some time before we see it in our computers.
  • by G4from128k ( 686170 ) on Monday November 10, 2003 @10:17AM (#7433691)
    Some of the more interesting bulk nanochemical processes create fairly ordered 1-D patterns (like zebra stripes). I'd bet that people are working to create orthogonal 2-layer structures of 1-D patterns to create nice lattices. Sandwich in the appropriate inter-layer, splice in connections at the edges and you have the makings of a 2-D array of memory locations.

    Nanocore memory anyone?
  • statistics (Score:2, Insightful)

    by ubera ( 107426 )
    Yep. Let an untold number of machines try to create NanoCells, and statistics says you'll find the most efficient kind.

    Actually, I think it would be more accurate to say that statistics says [sic] you will find a number of answers, some with better performance than others.

    rand() is a poor optimiser.
    • rand() is a poor optimiser.

      .. unless you penalize bad choices and reward good choices. Many non-deterministic search methods in use rely rather heavily on randomizing when choosing the "next step". Simulated annealing, tabu search and genetic algorithms come quickly to mind. So rand() all by itself is a poor optimizer, but once it finds something useful, why not stick with it and try rand()'ing our way to a better solution?
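
      A minimal sketch of that "keep the good random steps, occasionally forgive the bad ones" idea, here as simulated annealing on an arbitrary toy objective (the objective and all parameters are made up):

      import math
      import random

      rng = random.Random(42)

      def objective(x):
          """Toy function to minimize; stands in for whatever we actually care about."""
          return (x - 3.0) ** 2 + math.sin(5.0 * x)

      x = rng.uniform(-10.0, 10.0)          # start from a purely random guess
      best_x, best_f = x, objective(x)
      temperature = 5.0

      for step in range(10000):
          candidate = x + rng.gauss(0.0, 0.5)           # rand()'s next suggestion
          delta = objective(candidate) - objective(x)
          # Always accept improvements; accept some bad moves, less often as we cool.
          if delta < 0 or rng.random() < math.exp(-delta / temperature):
              x = candidate
              if objective(x) < best_f:
                  best_x, best_f = x, objective(x)
          temperature *= 0.999                          # cooling schedule

      print(f"best x ~ {best_x:.4f}, objective ~ {best_f:.4f}")

      The cooling schedule is what turns a blind random walk into something useful: early on almost any move is accepted, while later only improvements (or near-improvements) survive.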
  • by G4from128k ( 686170 ) on Monday November 10, 2003 @10:53AM (#7433868)
    The article lacks useful information on the expected density of the circuits in large-scale applications. On the one hand, the nano nature of the device would seem to permit tremendous density that far exceeds anything that can be fabricated with masks and etching. On the other hand, two major problems would limit the practical density of memory cells in usefully large dies.

    1) I would expect these devices to have a very large fraction of unusable cells. A fair percentage of nanocells would probably be fixed live (always storing a 1), fixed dead (always storing a 0), leaky (decaying faster than the nominal refresh time), or disconnected. The percentage of writable, readable, nonvolatile, connected cells might be very low. This makes the effective density (and effective memory cell size) much worse than the nanoscale of the process would lead one to expect (a rough numerical sketch follows at the end of this comment).

    2) The reach of the disordered connections into the field of nanocells would be limited in distance. I would bet that the disordered wires cannot be made to reach very far from the edge. Phenomena like wire-to-wire disconnects, wire-to-wire shorts, wire-to-substrate shorts, accumulated resistance, and accumulated leakage would limit how far from the edge we can access the field of nanocells. Note that the experimental cell is only 10 microns by 40 microns. Can this technology be scaled to a 1000 micron x 1000 micron die or bigger? Even if the density is extremely high, the inability to scale to size might mean that all we can do is an extremely small 64 kilobit device. Of course, this might be solvable with clever overlays (like a mesh of traditionally fabricated conductors) that let us create macroscopic nanoRAM dies that have scale-limited microscopic nanocell field areas. The statistics of interconnections (or percolation theory) can help us determine the scalability of the concept.

    I'm not suggesting we abandon nanocell technology, only that we consider the scaling effects when trying to predict whether this nanocell technology has the potential to rival existing technologies. Moreover, existing semiconductor technologies are a moving target. By the time nanocells reach the market, we might have 3 nanometer semiconductor circuits using gamma-ray free-electron lasers and vertical ion implantation in a diamond substrate (or something). Future semiconductor densities might make the nanocell density not all that competitive.
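
    A back-of-the-envelope sketch of concern 1) above; all of the per-cell failure probabilities are invented placeholders, not figures from the article:

    import random

    rng = random.Random(7)

    # Hypothetical, independent per-cell failure modes (placeholder numbers).
    P_STUCK_HIGH = 0.10    # always reads 1
    P_STUCK_LOW = 0.10     # always reads 0
    P_LEAKY = 0.15         # decays faster than the refresh interval
    P_DISCONNECTED = 0.20  # never reachable from the edge contacts

    def cell_is_usable():
        """One Bernoulli draw per failure mode; a cell is usable only if it dodges all of them."""
        return all(rng.random() >= p
                   for p in (P_STUCK_HIGH, P_STUCK_LOW, P_LEAKY, P_DISCONNECTED))

    n_cells = 1000000      # cells on a hypothetical die
    usable = sum(cell_is_usable() for _ in range(n_cells))
    fraction = usable / n_cells

    print(f"usable fraction: {fraction:.3f}")
    # Analytically: 0.90 * 0.90 * 0.85 * 0.80 = 0.55, so even these modest rates
    # cut the effective density to roughly half of the raw nanoscale density.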
  • by Anonymous Coward
    The real problem is the need to "train" the cells to do anything useful. With a collection of cells of any decent size, the computing power needed to teach the system what to do would be enormous. This is the same situation that the project was in 3 years ago when I was involved with it, only then the nanocells were called nanoblocks, and now things are even more disordered. From the sound of the article, they're still spinning the same thing with a new name, only 3 years later. I can only hope a whol
  • It is possible to make memory circuits out of disordered systems.

    This might seem really dumb, but surely this is self-evident to some degree. After all, isn't that what our mind does on a regular basis? Evolution has beaten us to the punch and created a self-assembled, disordered system: our central nervous system.

    The description of the system in the article with islands of gold foil and connections of nanowire seems very vaguely analogous to neurons with cell bodies and axons... I wonder if the system

  • by StandardCell ( 589682 ) on Monday November 10, 2003 @11:33AM (#7434123)
    I don't dispute that this is a great discovery, but there's a difference between the chemistry of the process and the chemical engineering for the process. One can reproduce conditions in a lab environment, whereas the other is designed to take the process into mass production. I've seen so many unique technologies in the last few years that are great ideas but don't necessarily translate to something that can be mass-produced. Materials and process costs, materials handling, integration into production lines, packaging, built-in self-repair strategies and off-device drive are all pretty important factors, yet I really didn't see a whole lot on this in the article.

    The other factor is reliability, both in the short term and the long term. Yes, the device seems to retain memory for a week without power at room temperature, but what about other factors? Alpha particle and EM sensitivity, thermal cycles and other long-term reliability issues all have to be investigated. Before I get jumped on, let me give a concrete example of a new technology: low-k dielectric. Low-k dielectrics (SiLK, Coral, Black Diamond) are materials on silicon devices used as insulation between layers of wires that connect circuits and were hailed as miracles a few years ago. However, many manufacturers (most notably TSMC with Nvidia) were having major problems where they would have void formation failures at the vias or inter-layer connections. The scariest part is that these were forming in simulated long-term accelerated tests, implying failures in the field after several years! Now, these failures have supposedly been addressed, but that's a concrete example of reliability issues with a conventional technology.

    We need to tread lightly towards radical new technologies if only so that we don't get burnt down the road. I definitely believe there's room for these types of technologies, but the most essential parts of these reports are so often missing because the focus is on getting this to work in a lab, not on making money. And, speaking as someone who worked in technology commercialization in the past, that is sadly more often the case than not.
  • Any day now, your RAM chips may become self-conscious.
    When I boot Windows, I think that happened already.
  • by Animats ( 122034 ) on Monday November 10, 2003 @02:48PM (#7435764) Homepage
    The semiconductor industry has had periods when the fab-technology people were behind the device-physics people. In those periods, it was possible to build high-density but imperfect parts. This led to various approaches for dealing with parts with defects. Zapping bad cells with a laser or E-beam, redundant circuits, fuses blown during test to isolate dud sections, and prescans of the substrate for defects have all been tried and made to work, but they tend to be inefficient in terms of either chip real estate or manufacturing cost.

    But so far, the fab-technology people have always caught up, fixed the defect problem, and made it possible to produce perfect parts with high yields. None of those techniques have been used much in production products.

    Right now, fab technology is ahead of device physics. It's possible to fabricate smaller transistors than can be made to work. Power dissipation is more of a limit than line width. So at least on flat silicon, we don't need this yet.