Hardware Technology

The Nanomechanical Computer 124

eldavojohn writes "The BBC is reporting on a newly proposed type of nanomechanical computer that mimics J. H. Müller & Charles Babbage's work on mechanical computational devices — just on a much smaller scale. The paper is published today in the New Journal of Physics and cites three reasons to build a computer with nanomechanical transistors over bipolar-junction or field-effect transistors: '(i) mechanical elements are more robust to electromagnetic shocks than current dynamic random access memory based purely on complementary metal oxide semiconductor technology, (ii) the power dissipated can be orders of magnitude below CMOS, and (iii) the operating temperature of such an NMC can be an order of magnitude above that of conventional CMOS.'"
  • The paper claims to have shown how to make a nano-mechanical computer using contemporary silicon production facilities. I've gotta wonder, what are you waiting for? Pay the $10,000.. send off your image to be made and test the damn thing.

    This is like half science.. "Here's my hypothesis, someone test it for me."
    • I assume he is looking for some cash beforehand.
    • by FasterthanaWatch ( 778779 ) on Wednesday July 25, 2007 @01:35AM (#19979855)

      This is like half science..
      Must everyone be an experimentalist?
    • Re: (Score:3, Insightful)

      by heinousjay ( 683506 )

      "Propose to an Englishman any principle, or any instrument, however admirable, and you will observe that the whole effort of the English mind is directed to find a difficulty, a defect, or an impossibility in it. If you speak to him of a machine for peeling a potato, he will pronounce it impossible: if you peel a potato with it before his eyes, he will declare it useless, because it will not slice a pineapple."
      - Charles Babbage
      • by Moraelin ( 679338 ) on Wednesday July 25, 2007 @03:07AM (#19980269) Journal
        1. Funny that you should mention that, given that Babbage, get this: never actually finished his machine, so he never actually delivered any value for the ample funding money he received. Other people get into the v2.0 syndrome after they completed one successful project. Babbage couldn't be even arsed to finish the first one (although, again, he did receive more than enough funding for it) before he started designing the second version. Then the third. Then the fourth. What is now known collectively as the Analytical Engine is actually a whole series of different machines: he could never be arsed to actually finish building one before he got distracted and started the next one. He kept at it until his death.

        His machines _would_ have worked, if they had actually been completed. But he could never be arsed to. Whenever he got funding for one, he'd deliver exactly nothing for that money.

        So, you know, maybe _that_ is why Babbage found the Englishmen somewhat reluctant to invest in his designs. Had he actually finished the Difference Engine, maybe people would have been more receptive to his next ideas. Maybe instead of bitching about his fellow Englishmen, it would have been easier to just deliver what he had promised. Just a thought.

        And maybe we would have had programmable computers a lot earlier. But as it is, it took people like Konrad Zuse in Germany or Alan Turing and the other folks who built the Colossus computer in the UK, to get it started. Because they actually delivered something that worked. Bloody huge difference there.

        2. The complaint about "slicing pineapple" is actually invalid too. Like many nerds today, Babbage was in it just for the fun of researching something new, and apparently thought that people should give him a lot of money just so he can have some nerdy fun.

        Capitalism, even the 19'th century kind -- actually, _especially_ the 19'th century kind -- doesn't work that way. To get some funding, the question you must answer is, basically, "which of _my_ problems does this solve?" If that company is in the business of slicing pineapple, then, yes, a machine which peels potatoes is completely useless to them.

        Governments too, while they do fund some fundamental research too, have a fiscal responsibility to the citizens they tax for that money. Especially in the 19'th century laissez-faire ideology, when the government was lean, mean, and barely funded to maintain the army. You can't seriously propose a tax hike just so Mr Babbage can play with something cool and high tech. So basically they too have to ask, "ok, so what do _I_ gain from this? Does it compute ballistics for our battleships? Total the census? Or what?"

        You'll notice that the working examples that did get computing started, had a satisfactory answer to exactly that. The Colossus computer broke enemy codes for the UK army, and Zuse's machines did aerodynamics calculations for the German airforce. E.g., the Z2 was used to design glide bombs.
        • by gnalre ( 323830 )

          Babbage couldn't be even arsed to finish the first one

          The reason Babbage had problems completing the project was that it required precision and standardization unparalleled in any previous project. The design and concept were there; the production capabilities lagged severely behind. Even so, one of the offshoots of the project was the development of techniques and procedures for the production and reproduction of standard components, one of the cornerstones of modern mass-production industry.

          So even though the computer was never completed, as with other fundamen

          • Wrong, actually (Score:5, Interesting)

            by Moraelin ( 679338 ) on Wednesday July 25, 2007 @04:08AM (#19980513) Journal

            The reason Babbage had problems completing the project was that it required precision and standardization unparalleled in any previous project. The design and concept was there, the production capabilities lagged severely behind.


            Wrong, actually. The machine that was built at the end of the 20'th century was built with the precision and tolerances of the 19'th century. Deliberately, to show that it was possible.

            The precision argument is even more obviously false, when you look at the fact that very precise watches had existed for a long time. That's how they measured longitude before GPS. I use watches as an example, because they're cog-based machines too, and they required even higher precision. By the middle of the _18'th_ century (i.e., a century earlier than Babbage) even a pocket watch would already not deviate more than a minute per day, and the second hand gradually became common. (Previously they tended to have only hours and minutes hands.)

            The first practical nautical clock, John Harrison's H4 was first used aboard the ship Deptford which set sail for Jamaica on 18 November 1761 and actually arrived there on 19 January 1762. That's two months and a day at sea. After all that time, it was only off by 5.1 seconds.

            _That_ is the kind of accuracy that was already available a century before Babbage.

            Babbage's design didn't even need that kind of accuracy, since it was essentially a digital device. All that mattered was how many teeth of the cog had moved, not whether it happened within a very exact time interval. Half the sources of inaccuracy of a watch didn't even apply there.

            So, no, Babbage had no excuse. The production capabilities were there, the precision was enough, and standardization wasn't even necessary for a prototype. He just couldn't be arsed to actually deliver what he promised. Full stop.
            • Re:Wrong, actually (Score:4, Informative)

              by gnalre ( 323830 ) on Wednesday July 25, 2007 @05:54AM (#19980983)

              Wrong, actually. The machine that was built at the end of the 20'th century was built with the precision and tolerances of the 19'th century. Deliberately, to show that it was possible.
              There was no doubt that the capabilities to make precision devices were available at the time of Babbage. However these devices were hand-crafted, bespoke items. What Babbage's machine required was precision-designed machinery on an industrial scale (note the Difference Engine required 25,000 parts, compare that to the average part count of a watch). That had never been attempted before and could not be achieved within any reasonable timescales using the hand-crafted techniques of watchmakers and the like.

              However the lessons learned allowed engineers such as Joseph Clement (http://en.wikipedia.org/wiki/Joseph_Clement [wikipedia.org]) and Joseph Whitworth (http://en.wikipedia.org/wiki/Joseph_Whitworth [wikipedia.org]) to learn the importance of standardization and produce the tools to mass-produce precision devices, thereby paving the way to modern industrial production.
              • Re:Wrong, actually (Score:4, Interesting)

                by Moraelin ( 679338 ) on Wednesday July 25, 2007 @06:33AM (#19981141) Journal
                1. Even if it were so, I'd bet that showing an imperfect prototype would have done a hell of a lot more good than bitching about Englishmen.

                Let's say it would have been useless past, dunno, 2 decimals. It would still have been proof of concept.

                John Harrison didn't get his nautical watch right on the first try either. There's a reason why the first version actually used was called "H4": H1 to H3 weren't yet accurate enough, and H1 didn't properly compensate for the ship's movement either. But he had _something_ to show to the admiralty, as proof of concept and as proof that it was indeed at least a little more accurate than the average pocket watch one could buy at the nearest watchmaker. It worked wonders to secure more funding for the next version. Although the project had already overrun the initial money offer and deadline, it showed that _something_ was happening there.

                Basically think in terms of iterative software development. It's easier to keep the client happy if he gets _something_ usable often, or at least sees that some progress is made, than if he has to wait for the deus-ex-machina miracle where everything is just perfect at the end of a very very long time. What Babbage did was, more or less, equivalent to not only keeping the client waiting for the first version, but scrapping the design and starting from scratch, again and again and again, to the point where nothing whatsoever was ever ready or usable or even in a demo state.

                Well, that sounded maybe too harsh. I'm not (primarily) trying to damn Babbage, but to say _why_ those Englishmen that he damns were so skeptical. For all his claims, he worked on it from 1822, when he first presented his proposal and got funding, to his death in 1871 without ever having a version that worked even as a crude tech demo. That's a whopping 49 years. You can't really blame anyone for being skeptical if your project was _half_ of that time overdue and still had nothing to show.

                Even nowadays most clients would just pull the plug on a project if it was just one year past the deadline, or often less than that. And no one would blame them for it. Even if you had a project with your own funding, you'd be ridiculed long before 49 years passed if it was still going nowhere. We made fun of Duke Nukem Forever when it was overdue a tenth of that time, and also in a tenth of that time Daikatana's hype generated a _very_ nasty backlash. Just, you know, to put things in perspective.

                So what I'm saying is: don't take Babbage's bitching as some great insight into the Victorian era England or into the human species as a whole. Babbage's problem wasn't that the English were blockheaded, but simply that he kept hyping a concept and asking for funding without anything to show even as a proof of concept. With or without technical problems, _of_ _course_ the English were skeptical after all that time. That's all.

                2. At the risk of repeating myself, the machine built in IIRC 1991 from Babbage's schematics was deliberately built using the tolerances and precision that would have realistically been available in the 19'th century. It worked, and calculated pi to 31 decimal places.

                • What Babbage did was, more or less, equivalent to not only keeping the client waiting for the first version, but scrapping the design and starting from scratch, again and again and again, to the point where nothing whatsoever was ever ready or usable or even in a demo state.
                  I didn't know Babbage worked on Duke Nukem Forever!
                  • He didn't, but the reason DNF is taking forever is because 3drealms is compiling it on Babbage's computer.
            • Re: (Score:2, Informative)

              by tsjaikdus ( 940791 )
              Babbage had lots of excuses. First of all, Babbage was a very wealthy man, who didn't need government funding to begin with.

              Then Babbage did build a prototype, which worked flawlessly and which he used on numerous public occasions. He even programmed it to perform 'miracles' (disturbances in continuity that up till then were thought possible only by an act of God). "Darwin saw that if apparently inexplicable discontinuities could really be the result of a system of mechanical laws laid down in advance, then
        • by leonem ( 700464 )
          You make a lot of really good points, but the comment about government funding needing a measurable return is a tricky one. Some of the best returns on such speculative funding have been from tangential discoveries, or have greatly outstripped expectations due to some other unforeseen development. Arguably, it's up to government to fund exactly the type of research that doesn't offer good returns (N.B. this is best done through universities IMO, not directly).

          Having said that, your point about Babbage'
    • From TFA: "The researchers are currently building the first elements needed for the computer, focusing initially on transistors, the basic switches at the heart of all computers." Insightful? My ass.
    • by CarpetShark ( 865376 ) on Wednesday July 25, 2007 @08:08AM (#19981725)

      what are you waiting for? Pay the $10,000


      Perhaps he's waiting for the $10,000, or perhaps he knows that theory is the important thing, and if it's viable, there will be many organisations vying for better and better implementations.

      This is like half science.. "Here's my hypothesis, someone test it for me."


      I for one don't consider science to be something that only people with money do. One has to wonder how many da Vincis there would have been if other people all had the resources he had. The renaissance itself shows that progress pops up everywhere, given resources. Doesn't mean the science wasn't there in the back of people's minds, waiting for them to get past the point of scraping together money for a loaf of bread, though.
    • Re: (Score:2, Funny)

      I'm with you, man. How dare this lazy asshole release an idea to the scientific community to criticise, or, God forbid, improve upon. Lazy fucking cunt.
    • $10,000? Wow.

      Modern mask sets cost over $500,000 before you hit the manufacturing stage.

      Then the equipment needed to control and observe the test device costs in the millions

      I have not yet read the article, but typically, the purpose of papers such as this is to attract investment.
    • The claim of temperature tolerance for nano-mechanical devices is rather theoretical. As a former materials scientist, I would be cautious here. I would think that micro-planar vacuum tubes would be far more feasible. There were reports a decade ago about the feasibility of such devices. I don't think that research money was ever granted, as the need for digital electronics that could work at ~ 1000C is rather modest.
  • by warrior ( 15708 ) on Tuesday July 24, 2007 @11:51PM (#19979321) Homepage
    Neal Stephenson wrote a book about this kind of tech back in 1995 or so, entitled "The Diamond Age" (or "A Young Girl's Primer" or something like that). He envisions some pretty incredible stuff made out of this tech. Great book, lots of nerdy CS-type stuff in it. Go to the library and pick it up, very fun stuff. I think it's one of his more underrated works. If we can actually engineer stuff like this it would be impressive, indeed.
    • I quite liked that book, except for the ending. The first three quarters were pretty cool, then the main character goes on a bad trip and gets laid, and everything's suddenly alright. His other works are deservedly more well-known in my opinion.
      • by Fex303 ( 557896 )

        I quite liked that book, except for the ending.
        You better hope you never read Snow Crash then.

        Don't get me wrong, Stephenson is by far and away my favorite author, but his endings tend to be rather sucky. The rest of the book always makes up for this though.

    • by spun ( 1352 )
      I'm naming my first son Rod Logic. Rod Logic Rightmer, has a nice ring to it, don't you think?
  • by Anonymous Coward
    "Order of magnitude" is a pretty silly phrase to use in this context. From the paper: "The operating temperature can be as high as 500C." So I guess they're using Celsius as the scale, not absolute temperature.
    • Re: (Score:2, Funny)

      by Anonymous Coward
      Yeah it doesn't make sense. In another context if something operates at 50C instead of -50C, would they claim log_10(-1) = i*pi/log(10) orders of magnitude?
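
      For anyone checking the joke's arithmetic: the identity does hold once you allow complex logarithms. A quick sketch in Python (the temperatures are just the ones from the post above):

        import cmath, math

        # "orders of magnitude" between -50 C and 50 C, taken literally
        ratio = 50 / -50                      # = -1.0
        orders = cmath.log(ratio, 10)         # complex log, base 10
        print(orders)                         # ~1.3644j
        print(1j * math.pi / math.log(10))    # i*pi/ln(10), same value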
    • by Anonymous Coward
      Look, to 99.9% of people 'order of magnitude' just means 'stick a zero on the end', and so it should.
    • by arhines ( 620963 )
      Actually, it does make sense when you consider the most important part of thermal engineering: achieving a certain temperature differential. Ambient temp is typically around 300K, and semiconductors will run in the realm of 400K but not much higher. That's a delta-T of 100K. The ability to run at 500C gives you a delta-T of something like 480K from ambient, so I would agree that the maximum operating temperature is (in some sense) an order of magnitude higher.
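
      The rough arithmetic behind that, as a sketch (the 300 K ambient and 400 K semiconductor figures are the ballpark numbers above, not values from the paper):

        ambient_k = 300.0                # typical ambient, kelvin
        cmos_max_k = 400.0               # rough limit for conventional semiconductors
        nmc_max_k = 500.0 + 273.15       # "as high as 500C", converted to kelvin

        cmos_headroom = cmos_max_k - ambient_k    # ~100 K over ambient
        nmc_headroom = nmc_max_k - ambient_k      # ~473 K over ambient
        print(cmos_headroom, nmc_headroom, nmc_headroom / cmos_headroom)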
      • In what sense is the temperature differential the most important part of thermal engineering? Perhaps if you're building a heat engine, but this is not a heat engine. Real devices made from real materials have hard absolute temperature limits: The limits at which the materials will perform their function adequately. If (as in this case) heat is an unwanted by-product you need to get rid of, your only goal is to stay below maximum operating temperature; you do not aim for a particular temperature differentia
        • The reason is that Absolute Zero is a concept that doesn't really exist in nature. Nor does, for all practical purposes, much of the range 'below zero'. Furthermore, to speak of temperature in terms of 'magnitude' and to speak of Absolute Zero (0K) as the default no-magnitude state, is to posit that we all live in a vacuum. The default no-magnitude state is not absolute zero -- a condition that doesn't really exist. To base an appraisal of the 'magnitude' of an effect on a nonexistent default is to totall
        • Re: (Score:2, Insightful)

          by arhines ( 620963 )
          Simple -- computers do not typically operate in empty space or in cryostats. The most important number to know is "how many degrees over ambient can I allow this device to go." It's true of computing devices every bit as much as for heat engines.
        • You've been given two fairly well qualified viewpoints, so I'll put in a layman's argument. I'm doing this both because it may be simpler to follow and because I am a layperson when it comes to materials work.

          What matters isn't so much the range from theoretical possibility in the universe, but the ability of something to function where it's going to be used. From freezing to boiling of water is roughly what temperature range you'll see where computers might be useful to people, using that range as a baselin
          • Wow, three replies refuting a point I never made. Incredible.
            • If, say, this will operate from 20 C to 320 C (a 300 degree range) and some other device (like a current CPU) is only really stable between 20 C and 50 C (a 30 degree range), then one device has a thermal tolerance _range_ that is clearly much larger than the other.

              Keeping a harsh environment between 20 C and 320 C is much, much easier and more cost effective than keeping it between 20 C and 50 C. If one processor can handle the whole range, and the other can't, how is that less important than the actual nu

              • First you reply refuting a point I never made, then you reply to my post quoting your post.

                You evidently thought I was talking about something else, so here is the short version of my point again, as an example: You do not design the cooling system for a CPU to maintain less than a 50 K differential to ambient (that's what I would consider to be 'achieving a certain temperature differential', which is what the post I replied to said was 'the most important part of thermal engineering'), you design it

                • Actually, you do design the cooling system to keep the CPU below a certain temperature at a given ambient temperature. I'll use Celsius here because it's important to get some perspective on this if you want to have a conversation about it.

                  If your only goal is keeping it under 350 C, and it's being operated in a climate controlled room at 20 C, then unless it's dissipating shitloads of heat itself it won't need a cooling system at all. Even then, a small and simple one should do well.

                  If, however, you're operating it on Venus or i
                  • What you described is achieving a certain device temperature at a given ambient temperature, not achieving a temperature differential. But thanks for playing.
                    • temperature differential, definition [about.com]

                      By maintaining the temperature of the device at a certain temperature at a given ambient temperature, you are creating and maintaining a difference in temperature between two points (a point on the device and a point in the environment) within the volume affected by the cooling system.

                      So what you just said, besides being overly snarky, was complete and utter nonsense. If you sub in the words for what the words mean, you said, "What you described is achieving a temperature
  • Prior art... (Score:5, Insightful)

    by kebes ( 861706 ) on Tuesday July 24, 2007 @11:52PM (#19979333) Journal
    This present design is a cool idea. I don't want to take anything away from the presented concept, but I thought it would be important to point out previous work on nanomechanical computers. First of all, Eric Drexler [wikipedia.org] (the guy who popularized the term "nanotechnology" and who basically invented the field now known as molecular nanotechnology [wikipedia.org]) has been advocating the concept of nanomechanical computers for many years now (they are described in his book Engines of Creation (1986), and detailed feasibility calculations and rough schematics are presented in his book Nanosystems (1992)). Drexler has been trying to get people on-board with his very forward-looking ideas for nanotechnology: where nano-sized mechanical systems would be performing computation, and controlling chemical reactions with a precision that currently only biological systems can achieve. (It should be noted that current work in "nanotechnology" is hilariously primitive compared to what Drexler intended the term to describe.) Drexler's vision of nano-mechanical systems has been challenged by many people, most notably by Richard Smalley [wikipedia.org] (the guy who discovered buckyballs).

    Beyond Drexler's theoretical work, carbon nanotubes [wikipedia.org] were demonstrated as nano-mechanical transistors in 2000. Basically, the nanotube was positioned over various electric pads. A current could be applied to mechanically deform the nanotube. The deformation was stable, and could be read out by measuring current across the tube. Since the deformation was stable and reversible, the tubes could be used as persistent storage or as switching/logic elements. In fact, switching speeds of gigahertz were demonstrated. The vision was to have long nanotubes in a huge cross-bar architecture, leading to high-density persistent storage. As is often the case, scale-up was difficult.

    This present work appears to pattern a nano-sized post between conducting pads (out of a gold/silicon layered system), and to use that post as a single-electron transistor. The 'mechanical' part comes from mechanically coupling multiple pillars to use as a gain mechanism for a transistor. This is basically much closer to conventional micro-lithography, and as such, it should fit in with current lithographic infrastructure much more easily than the nanotube concept did.
    • Re:Prior art... (Score:4, Insightful)

      by erichill ( 583191 ) <eric@stochastic.com> on Wednesday July 25, 2007 @01:23AM (#19979811) Homepage
      Folks in the nanotech crowd have been talking about clockwork computing from the get-go. Gears and rods and the like have a lot of advantages over electronics at molecular scales: atoms have much smaller wavelengths than electrons and are thereby easier to localize. Also, to a first (and second) approximation, covalent bonds don't wear: undesired reactions are the issue, and not much of one in the mechanical phase of matter(*) envisioned by Drexler and company.
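
      One way to put a number on the "smaller wavelength" point is the thermal de Broglie wavelength. A quick sketch, assuming room temperature and a single carbon atom as the stand-in for a mechanical part (my choice of yardstick, not the article's):

        import math

        h = 6.626e-34      # Planck constant, J*s
        k = 1.381e-23      # Boltzmann constant, J/K
        T = 300.0          # room temperature, K

        def thermal_wavelength(mass_kg):
            # thermal de Broglie wavelength: h / sqrt(2*pi*m*k*T)
            return h / math.sqrt(2 * math.pi * mass_kg * k * T)

        m_electron = 9.109e-31         # kg
        m_carbon = 12 * 1.661e-27      # kg, one carbon atom

        print(thermal_wavelength(m_electron))   # ~4.3e-9 m (about 4.3 nm)
        print(thermal_wavelength(m_carbon))     # ~2.9e-11 m, roughly 150x smaller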
      Having skimmed the article, I'm a bit unimpressed by the comparison to Babbage. While this looks like neat technology, it's NOT clockwork--it's electronic transistors with mechanical gates, as noted in the parent.

      (*) Parts only touching where desired, vacuum elsewhere--remember, we're talking atomic scale here.
      • it's NOT clockwork--it's electronic transistors with mechanical gates, as noted in the parent.

        I didn't rta, barely rts, but I was reading the discussion and thinking to myself "Self, can you imagine how unbelievably sluggishly slow a mechanical computer would be?", I'm kinda thinking it would take around 20 years to boot vista, but that's just off the top of my head.

        • Re: (Score:1, Informative)

          by Anonymous Coward
          FWIW, they talk about this scheme (an electric transistor, mechanically amplified) running at 1 GHz. Booting Vista would take a while, but not forever....

          The other thing to note is that with MEMS (all mechanical, except for the motor), the devices move at far higher speeds than you would think. 1) The pieces are so small that they have essentially zero mass. 2) Because the devices have almost no mass, you can drive them to full speed and stop them incredibly fast, because there isn't enough mass to build

          • by HiThere ( 15173 )
            One difference is that MEMs are actually working. I'd think it would make more sense to first build this kind of device out of MEMs, and let others try to solve nano-mech problems with simpler devices. I bet even building out of MEMs you'd run into extreme construction problems.

            The other reasonable alternative would be to first build a half-adder at the nano-tech scale. (Simplifying the device sufficiently to test the proposed design.) Then you could work your way up to a 6502. (That was the old Apple ]
    • This present work appears to pattern a nano-sized post between conducting pads

      Hmmm, I thought they were talking about a little, bitty abacus.
    • Re: (Score:2, Interesting)

      Look back further still, to Richard Feynman in 1959 [zyvex.com]. Absolutely visionary stuff.
  • anyone?
  • I remember the notion of building Babbage engines at a molecular scale at some point in the future being brought up in an electrical engineering course I took in the mid 1990's, and it probably wasn't a particularly new notion even then. Granted, we're certainly closer to being able to actually do it now than we were in 1996, but it's still not a new idea.
  • I keep seeing a miniaturized, massively parallel array of Dr. Nims.

    http://en.wikipedia.org/wiki/Dr._NIM [wikipedia.org]
  • Cassini Division? (Score:1, Interesting)

    by Anonymous Coward
    Anybody remember the mechanical nanotech in the book "Cassini Division"? Though they did that as a protective sandbox measure against supersapient AI and uplifted humans...
  • I'm sure for some applications these will be better than traditional electronics.

    For other applications they won't be.

    The ideas aren't new, but there are probably some legitimate patents to be had in the particulars. That thought should help drive venture capital.
    • Yeah, I see it as a small but useful niche technology. I'm guessing it'll end up in toys, weapons, a few car parts, and anything NASA.
  • by RyanFenton ( 230700 ) on Wednesday July 25, 2007 @12:14AM (#19979461)
    Specifically, The Diamond Age [wikipedia.org], where such purely mechanical nanomachines, along with artificial diamond, define the era the book takes place in. I'd say it's a charming if hyper-technical story if you haven't read it - though be warned that things get rather unsafe for some young children, and one prominent subplot involves strong sexuality.

    Anyway, the machines aren't self-replicating, but they are fabricated in microwave-style (and larger) boxes that take an elemental 'feed' of organic compounds and data. The book has some great philosophical and social content, and breaks with most of the annoying characteristics of the previous 'cyberpunk'-style writing.

    Ryan Fenton
    • Re: (Score:3, Informative)

      by Prof.Phreak ( 584152 )
      It's also very long... and keeps going on and on, way past the point it should've ended...and then it ends suddenly and for no apparent reason.

      That was the only Stephenson book I really got bored of...
      • by tftp ( 111690 )
        Some books end abruptly when the writer reached the word count that he promised to the publisher.
      • by simong ( 32944 )
        I thought that the first time I read it, but then I read it again a couple of years ago and realised that he got a bit stuck in trying to resolve his different strands of story and had to finish a couple of them a bit abruptly. Then again he did the same with Cryptonomicon, and as far as I can see hasn't managed to finish the Baroque cycle at all (joke).
    • Strong sexuality? Neal Stephenson? Well, it's a great story - but the Bible has sexier passages than Neal... What's happening here? Did you get an unexpurgated version the rest of the world was unaware of? Can I get a copy....
    • I seem to remember that part of the plot was based on the "seed". Instead of relying on a feed and a "microwave oven" device, you were free to just plant a seed that would grow into what you wanted.
  • It may be a more robust transistor when dealing with shocks or heat, but I wonder if the same claim can be made for material wear. They'd be some diamond-like carbon structure, sure, but do we really know how robust those would be under such conditions (billions of jitters/sec rubbing against other pieces)? It could cause part wear or moving of parts, I would think....

    Still, good on 'em....
    • by wall0159 ( 881759 ) on Wednesday July 25, 2007 @01:33AM (#19979849)
      I'm not a nano-technologist, but I'd doubt that macroscopic ideas of friction and wear would apply to nano-machines.
      • Re: (Score:1, Informative)

        by Anonymous Coward
        Unfortunately, friction is not just a force confined to macroscopic environments. Friction comes from electromagnetic forces between atoms, so unless you go down to a subatomic scale you are going to have to face the problem of friction. http://en.wikipedia.org/wiki/Friction/ [wikipedia.org] I would assume that this friction is at least one factor causing the extremely high operating temperature of the machine.
  • by Doc Ruby ( 173196 ) on Wednesday July 25, 2007 @12:36AM (#19979591) Homepage Journal
    They're far from the first to propose nanomechanical computing machines like Babbage's original (and failed) machines. A mentor of mine explained to me in 1990 an idea of building nanoscale rod logic [wikipedia.org] in orbit: microgravity, vacuum, solar power. And I don't think he was the first to think of it.

    I'll be impressed when someone actually builds some. Or writes a lot more engaging science fiction than the BBC just published.
  • Nanotech science (Score:4, Informative)

    by the_kanzure ( 1100087 ) on Wednesday July 25, 2007 @12:44AM (#19979629) Homepage
    From my collection [heybryan.org]:
    * Nanotechnology information [archive.org] [archived] [2002]
    * Bibliography of nanotechnology and nanoscience [tu-darmstadt.de] [pdf] [2004]
    * Brad Hein's nanotechnology website [nanosite.net]
    * Ned Seeman's DNA nanotech bibliography [nyu.edu]
    * MEMS/nanotech reading list [mems-exchange.org]
    * Even more publications in nanotechnology [dyndns.org]
    * sci.nano archives [leitl.org]
    * The open micro/nano-manufacturing project [uky.edu]
    * Nanotech in scifi [geocities.com]

    And if anybody has links on nanomechanical synthesis, that'd be much appreciated. IIRC, nanolithography is one of the main areas of development, along with nonlinear optics to get the required precision manufacturing.
  • by flyingfsck ( 986395 ) on Wednesday July 25, 2007 @01:16AM (#19979775)
    If the Nikkei stock exchange uses a mechanical computer, then an earthquake in Japan will really send a shudder through the financial markets...
    • Re: (Score:3, Insightful)

      by evanbd ( 210358 )
      Hardly more so than they are now -- the damage would be physical damage to the computers, just like it is now. Vibrations on the scale of Hz to tens of Hz won't impact a mechanical computer operating at many GHz any more than your car is affected by the "vibrations" caused by going up and down hills.
  • Charles Babbage has been both a hero and anti-hero of mine for almost three decades now, ever since I researched and wrote a paper about him in high school. I immediately saw both his best and worst traits in my own behavior, and on the worst side it led to me referring to myself as "pulling a Babbage" whenever I let those traits get the better of me. I now have a string of unfinished projects to my name that certainly eclipses what Babbage did.
    • Re: (Score:3, Funny)

      by RuBLed ( 995686 )

      I now have a string of unfinished projects to my name that certainly eclipses what Babbage did.


      You must be renting a warehouse or something, I'm having trouble fitting a difference engine in our basement..
      • by macraig ( 621737 )
        Nah, I have a buddy who owns an auto junkyard, so I get 'em half finished and then throw 'em in the car crusher. They come out small enough to fit under the bed.
  • Well, I guess the retro-futuristic cathode tube computers in the Fallout series will be trumped by an even more retro-ish mechanical processing power...
  • I, for one, (Score:3, Funny)

    by Tu-Ply ( 1132719 ) on Wednesday July 25, 2007 @01:58AM (#19979933)
    welcome our Nanomechanical overlords...
  • It's not new (Score:3, Interesting)

    by Eivind ( 15695 ) <eivindorama@gmail.com> on Wednesday July 25, 2007 @02:28AM (#19980061) Homepage
    It's an interesting concept, but it is not in any way a "new" concept. It was, for example, explored in Drexler's "Engines of Creation", which is available online in full at http://www.e-drexler.com/ [e-drexler.com] EOC was first published in 1986, so the idea is more than 20 years old.
  • by Flying pig ( 925874 ) on Wednesday July 25, 2007 @03:42AM (#19980411)
    I don't think they are actually postulating something like a Difference Engine at all. The reason? I/O. With the exception of piezo inkjets, that market has gravitated to thermal drive for the ink and all the upcoming inkjet designs I know of also use electrical power->heat to drive the ink. The article talks about engine management systems, but again the I/O of these is currently electrical, driving motors and solenoids, and given the sheer amount of development invested in the present technology, the timescale to invent cheap and reliable mechanical amplifiers will probably mean that the I/C engine will be more or less obsolete before it happens.

    I suspect that what is being thought of is actually relay technology - so let's call it a Turing/Von Neumann/Mauchly approach (Alan Turing was a pioneer of relay logic among his other achievements, and Von Neumann and Mauchly were both associated with relay calculators.) Although relay computers were effectively obsolete by the 70s, they persisted in industrial controls for longer because (a) they could be debugged by electricians and (b) they could tolerate levels of contamination that destroyed the electronics of the period. The last generation of ultra-clean sealed relays and mercury relays were extremely durable and reliable. They didn't have the power handling, size for size, of power transistors but they had less internal dissipation. As a simple example, I was designing equipment in the late 80s which had to switch a few watts at around 500VDC. Although there are transistors that can handle these voltages, the design of the switching circuit necessitated a hybrid device costing around $200. A suitable relay cost $10 and was immune from punchthrough.

    I'm prepared to guess that there will be niche applications for these ideas - but as with the IC engine, the sheer accumulated R&D in electromechanical systems will mean that widespread adoption will never be economic. It's easier to duct cold air over an engine management system (as on my car, with a few $ of plastics) than it is to redesign the entire chain from logic to actuator to use a different technology. And the current density of flash memory suggests that the hill to be scaled by electromechanical memory is enormous. Back in the days when flash chips were 256 bytes and not too reliable, there might have been a chance. Now when 8GByte USB dongles are cheap and reliable, it will be a lot harder.

    • Comment removed based on user account deletion
      • I'm just very tired of mass media overhyping engineering that may have almost no effect on most people. It leads to cynicism of the "what happened to my flying car" variety.
        • Re: (Score:3, Funny)

          by Arimus ( 198136 )
          Given the average ability of people to move their car correctly in two dimensions I, for one, am glad that flying cars are still not available...

  • The research presented in this article reminds me of an abstract I read a while back about a team who developed an on-chip vacuum tube micro-triode which used carbon nanotubes as field emitters. It might not be possible to build a computer out of them, but logic built from them would have some of the same advantages mentioned in TFA (high immunity to electromagnetic radiation, etc.)
    Link (warning PDF) http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel5/16/21940/01019936.pdf [ieee.org]
  • But aren't mechanical shocks more common for your typical computer? And won't these machines take far more damage from them than current solid-state ram?
    • by bagsc ( 254194 )
      Two words: percussive maintenance.

      It always amazes me how, no matter what the technology, when you can't find anything wrong with it, a good beating generally fixes it. Probably because mechanical parts are usually the cause of the problem...
  • I understand this machine as being nano-electro-mechanical, i.e. some sort of relay computer. However, I don't understand how it could be that it uses less power compared to CMOS at equal switching speeds. Accelerating mass is costly, much more so for nuclei than for electrons.
    • by ducman ( 107063 )
      The point is that it's NOT at equal switching speeds. If you have an application for a computer that doesn't need to switch very often, but does need to be extremely reliable, this might help.
      • CMOS gates do not draw current when idle, either. They only draw current when they switch. The slower they switch, the less power they use. So, what's the difference? The issue, I think, is that CMOS is just about moving some electrons, while a mechanical switch is about moving nuclei, which happen to be roughly 2000 times heavier. Considering Newton's F=m*a, this kind of switching doesn't look fast or energy-efficient.
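
        The first-order model behind "the slower they switch, the less power they use" is dynamic switching power, P = alpha * C * V^2 * f. A sketch with made-up but plausible numbers (none of them from the paper):

          def dynamic_power(activity, c_farads, vdd_volts, freq_hz):
              # first-order CMOS dynamic power: alpha * C * V^2 * f
              return activity * c_farads * vdd_volts ** 2 * freq_hz

          alpha = 0.1    # fraction of the capacitance switching each cycle (made up)
          c = 1e-9       # total switched capacitance, farads (made up)
          vdd = 1.0      # supply voltage, volts

          print(dynamic_power(alpha, c, vdd, 1e9))   # ~0.1 W at 1 GHz
          print(dynamic_power(alpha, c, vdd, 1e6))   # ~0.0001 W at 1 MHz
          # switching power scales with frequency; leakage (ignored here) is why
          # a real "idle" CMOS chip still draws some current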
  • Ridiculous idea (Score:2, Interesting)

    Like, totally ridiculous idea:
    • Moving parts do not scale down at all! Even at the micro-meter level, effects like friction, surface tension, dust, and gas make moving parts impractical to impossible. Even in a good vacuum there are too many gas molecules to make nano-scale mechanics work.
    • Your typical moving parts are good for maybe ten million operations before they wear out. Figure out how long your computer would last at that rate (rough numbers after this list).
    • Error and assembly error rates for mechanical devices are much too hi
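
    Rough arithmetic for the second bullet (the ten-million-operation figure is from the bullet above; the clock rates are just illustrative):

      wear_out_ops = 10_000_000       # "maybe ten million operations"
      for freq_hz in (1e6, 1e9):      # 1 MHz and 1 GHz, illustrative rates
          print(freq_hz, wear_out_ops / freq_hz, "seconds until wear-out")
      # 1 MHz -> 10 seconds, 1 GHz -> 0.01 seconds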
    • There were conventional reed relays that were rated for 300 million operations only because nobody could be bothered to run the test equipment any longer than that. And how do you think a piezoelectric crystal works? Multi-megahertz crystals in radios can last for 20 years or more.

      As for your last comment, it's rubbish. Have you ever seen a teleprinter? A piezo inkjet printer? A hard disk drive?

      Obviously you aren't an ancient hacker, or you would know a bit more about electromechanical technology.

      • >have you seen a teleprinter? A piezo inkjet printer? a hard disk drive? ... poor examples for your thesis.... all those devices have error rates much higher than tolerable in a computer. You need parts with error rates well below 10^-12, it's hard to make mechanical gadgets that will operate a billion times without failure. The technology is over 1000 times too unreliable in the best case.
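
        A rough version of that error-budget arithmetic (the 10^-12 requirement is the figure above; "a billion times without failure" is read, generously, as a 1e-9 per-operation failure rate):

          required_error_rate = 1e-12     # what computing needs, per the post above
          best_case_error_rate = 1e-9     # "a billion operations without failure"
          print(best_case_error_rate / required_error_rate)   # 1000.0 -- the "over 1000 times" gap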
    • by Cybrex ( 156654 )
      Dust? Friction? Component wear? You *do* realize that they're talking about atomic scale manufacturing, right? I think it would be more accurate to say that your mental image of how these systems function is what's not scaling down correctly.
  • by martyb ( 196687 ) on Wednesday July 25, 2007 @08:10AM (#19981741)

    I know this is /. and actually reading the article [iop.org] is unusual, but *I* did and came upon this:

    A computing architecture made from nanomechanical transistors thus is competitive with 45 nm CMOS technology, while taking a step towards enabling reversible computing. (emphasis added)

    I would LOVE to see THAT happen!

    <dream>Whenever a program crashes, just open the debugger, run it backwards until it gets "weird". Run it forwards and backwards again to isolate where it's broken. Of course, there are some problems with asynchronous signals (disk I/O, keyboard, mouse, etc.) but I can dream, can't I?</dream>

    But seriously, could this just be something thrown in to help get more funding or is it an actual possibility?

    • Re: (Score:2, Insightful)

      by pscottdv ( 676889 )
      That's not what reversible computing means. Reversible computing allows for power consumption below the kT ln 2 per bit theoretical minimum for energy consumption in a conventional computer. Feynman's Lectures on Computation describe it pretty well.
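
      For scale, here is what the kT ln 2 limit works out to at room temperature (a quick sketch; the chip erasing 1e18 bits per second is hypothetical):

        import math

        k = 1.381e-23                 # Boltzmann constant, J/K
        T = 300.0                     # room temperature, K
        e_bit = k * T * math.log(2)   # minimum heat per bit erased
        print(e_bit)                  # ~2.9e-21 J

        print(e_bit * 1e18)           # ~0.003 W for 1e18 erasures/s --
                                      # far below what real chips dissipate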
    • Re: (Score:3, Informative)

      by marcosdumay ( 620877 )

      You should take a look at the meaning of "reversible computing" [wikipedia.org].

      In short, you won't be able to reverse all operations in a debugger, but it may save some money on your light bill. Personally, I can't see how it can be used, but I'm no expert on it, and there is a lot of buzz about it.

  • by mattr ( 78516 ) <mattr&telebody,com> on Wednesday July 25, 2007 @08:56AM (#19982091) Homepage Journal
    If I interpret TFA and its references (which are more useful even just as abstracts) correctly, this is not at all the "rod logic" of Stephenson/Babbage fame. It is a single transistor, built out of two metal terminals (source and drain) and a tall, thin pillar standing between them which vibrates like a tuning fork (at 300MHz - 1GHz or so).

    This pillar can be charged from the terminals and by transferring charge it can switch the current. This nano-electromechanical single electron transistor (NEMSET) was invented by other researchers, TFA mainly explores electronic properties of the NEMSET and how to put them together into circuits, create circuit elements, etc. but they didn't really do any of it yet.
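
    For a ballpark of what such a vibrating charge shuttle can carry, one commonly quoted relation for mechanical single-electron shuttles is I = 2*n*e*f, with n electrons moved each half-cycle. A sketch using the frequencies quoted above (the one-electron-per-half-cycle assumption is mine, not the paper's):

      e = 1.602e-19                          # elementary charge, coulombs

      def shuttle_current(n_electrons, freq_hz):
          # average current if n electrons are carried across per half-cycle
          return 2 * n_electrons * e * freq_hz

      for f in (300e6, 1e9):                 # the 300 MHz - 1 GHz range above
          print(f, shuttle_current(1, f))    # roughly 0.1 - 0.3 nanoamps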

    Mainly, it can run at high temperatures and is not as fast as ordinary transistors, but it seems like it could offer multivalued logic, not just binary; and as for power, just about anything will do, including self-excitation, environmental vibration, etc.

    So while this might be just the thing for making a laptop you can use without frying your gonads, it is not what one might think when hearing the words "nanomechanical computer".
  • A Beowulf cluster of these ... Someone had to say it!!
  • Maybe in the future the internets really will be made of tubes or dump trucks!
