The Nanomechanical Computer 124
eldavojohn writes "The BBC is reporting on a newly proposed type of nanomechanical computer that mimics J. H. Müller & Charles Babbage's work on mechanical computational devices — just on a much smaller scale. The paper is published today in the New Journal of Physics and cites three reasons to build a computer with nanomechanical transistors over bipolar-junction or field-effect transistors: '(i) mechanical elements are more robust to electromagnetic shocks than current dynamic random access memory based purely on complementary metal oxide semiconductor technology, (ii) the power dissipated can be orders of magnitude below CMOS, and (iii) the operating temperature of such an NMC can be an order of magnitude above that of conventional CMOS.'"
You don't need our permission (Score:2, Insightful)
This is like half science: "Here's my hypothesis; someone test it for me."
Re: (Score:1)
Permission Granted! (Score:2)
Just don't subject it to severe shock.
Re:You don't need our permission (Score:4, Insightful)
Re: (Score:3, Insightful)
Funny that you should mention that (Score:4, Interesting)
His machines _would_ have worked, if they had actually been completed. But he could never be arsed to finish one. Whenever he got funding for one, he'd deliver exactly nothing for that money.
So, you know, maybe _that_ is why Babbage found the Englishmen somewhat reluctant to invest in his designs. Had he actually finished the Difference Engine, maybe people would have been more receptive to his next ideas. Maybe instead of bitching about his fellow Englishmen, it would have been easier to just deliver what he had promised. Just a thought.
And maybe we would have had programmable computers a lot earlier. But as it is, it took people like Konrad Zuse in Germany or Alan Turing and the other folks who built the Colossus computer in the UK, to get it started. Because they actually delivered something that worked. Bloody huge difference there.
2. The complaint about "slicing pineapple" is actually invalid too. Like many nerds today, Babbage was in it just for the fun of researching something new, and apparently thought that people should give him a lot of money just so he could have some nerdy fun.
Capitalism, even the 19th century kind -- actually, _especially_ the 19th century kind -- doesn't work that way. To get some funding, the question you must answer is, basically, "which of _my_ problems does this solve?" If that company is in the business of slicing pineapple, then, yes, a machine which peels potatoes is completely useless to them.
Governments too, while they do fund some fundamental research, have a fiscal responsibility to the citizens they tax for that money. Especially under 19th-century laissez-faire ideology, when the government was lean, mean, and funded to do little more than maintain the army. You can't seriously propose a tax hike just so Mr Babbage can play with something cool and high tech. So basically they too have to ask, "ok, so what do _I_ gain from this? Does it compute ballistics for our battleships? Total the census? Or what?"
You'll notice that the working examples that did get computing started had a satisfactory answer to exactly that. The Colossus computer broke enemy codes for the UK, and Zuse's machines did aerodynamics calculations for the German air force. E.g., the Z2 was used to design glide bombs.
Re: (Score:2)
Babbage couldn't be even arsed to finish the first one
The reason Babbage had problems completing the project was that it required precision and standardization unparalleled in any previous project. The design and concept were there; the production capabilities lagged severely behind. Even so, one of the offshoots of the project was the development of techniques and procedures for the production and reproduction of standard components, one of the cornerstones of modern mass-produced industry.
So even though the computer was never completed, as with other fundamen
Wrong, actually (Score:5, Interesting)
Wrong, actually. The machine that was built at the end of the 20th century was built with the precision and tolerances of the 19th century. Deliberately, to show that it was possible.
The precision argument is even more obviously false when you look at the fact that very precise watches had existed for a long time. That's how they measured longitude before GPS. I use watches as an example because they're cog-based machines too, and they required even higher precision. By the middle of the _18th_ century (i.e., a century earlier than Babbage) even a pocket watch would deviate by no more than a minute per day, and the second hand gradually became common. (Previously they tended to have only hour and minute hands.)
The first practical nautical clock, John Harrison's H4 was first used aboard the ship Deptford which set sail for Jamaica on 18 November 1761 and actually arrived there on 19 January 1762. That's two months and a day at sea. After all that time, it was only off by 5.1 seconds.
_That_ is the kind of accuracy that was already available a century before Babbage.
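To put that 5.1 seconds in perspective, here's a quick back-of-envelope sketch of what it meant for navigation (the 111.3 km/degree figure is the usual equator approximation):

```python
# Back-of-envelope: how much longitude error does a 5.1 s clock error cause?
# Earth rotates 360 degrees in 24 hours, so time error maps to longitude error.
clock_error_s = 5.1
deg_per_s = 360 / (24 * 3600)           # 1/240 degree of rotation per second
lon_error_deg = clock_error_s * deg_per_s

# One degree of longitude at the equator is roughly 111.3 km.
pos_error_km = lon_error_deg * 111.3
print(f"{lon_error_deg:.4f} degrees of longitude, about {pos_error_km:.2f} km at the equator")
```

About two and a half kilometers of position error after two months at sea, which is why the admiralty cared so much.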
Babbage's design didn't even need that kind of accuracy, since it was essentially a digital device. All that mattered was how many teeth a cog had moved, not whether it moved within a very exact time interval. Half the sources of inaccuracy of a watch didn't even apply there.
So, no, Babbage had no excuse. The production capabilities were there, the precision was enough, and standardization wasn't even necessary for a prototype. He just couldn't be arsed to actually deliver what he promised. Full stop.
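For anyone curious what the Difference Engine actually did: it tabulated polynomials using nothing but repeated addition of finite differences, which is exactly why only tooth counts, not timing, mattered. A toy sketch of the method (the function name is mine; the demo polynomial x^2 + x + 41 is the one Babbage himself liked to show off):

```python
# Toy model of the Difference Engine's method: tabulate a polynomial at
# 0, 1, 2, ... using only additions of finite differences.
def tabulate(poly, n):
    """poly: coefficients [a0, a1, a2, ...]; returns [poly(0), ..., poly(n-1)]."""
    degree = len(poly) - 1

    def f(x):
        return sum(c * x**k for k, c in enumerate(poly))

    # Seed the difference columns from the first degree+1 values.
    values = [f(x) for x in range(degree + 1)]
    diffs = [values[:]]
    for _ in range(degree):
        prev = diffs[-1]
        diffs.append([b - a for a, b in zip(prev, prev[1:])])

    # The machine's "state" is the last entry of each difference column.
    state = [col[-1] for col in diffs]
    out = values[:]
    while len(out) < n:
        # Each turn of the crank adds every column into the one above it.
        for i in range(degree - 1, -1, -1):
            state[i] += state[i + 1]
        out.append(state[0])
    return out[:n]

print(tabulate([41, 1, 1], 5))  # x^2 + x + 41, Babbage's demo polynomial
```

No multiplication anywhere, just additions, which is why cogs and carry levers were enough.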
Re:Wrong, actually (Score:4, Informative)
However the lesson learned allowed engineers such as Joseph Clement (http://en.wikipedia.org/wiki/Josep
Re:Wrong, actually (Score:4, Interesting)
Let's say it would have been useless past, dunno, 2 decimals. It would still have been proof of concept.
John Harrison didn't get his nautical watch right on the first try either. There's a reason why the first used version was called "H4": H1 to H3 weren't yet accurate enough, and H1 didn't properly compensate for the ship's movement either. But he had _something_ to show the admiralty, as proof of concept and as proof that it was at least a little more accurate than the average pocket watch one could buy at the nearest watchmaker. It worked wonders to secure more funding for the next version. Although the project had already overrun the initial budget and deadline, it showed that _something_ was happening there.
Basically think in terms of iterative software development. It's easier to keep the client happy if he gets _something_ usable often, or at least sees that some progress is made, than if he has to wait for the deus-ex-machina miracle where everything is just perfect at the end of a very very long time. What Babbage did was, more or less, equivalent to not only keeping the client waiting for the first version, but scrapping the design and starting from scratch, again and again and again, to the point where nothing whatsoever was ever ready or usable or even in a demo state.
Well, that sounded maybe too harsh. I'm not (primarily) trying to damn Babbage, but to say _why_ those Englishmen that he damns were so skeptical. For all his claims, he worked on it from 1822, when he first presented his proposal and got funding, to his death in 1871, without ever having a version that worked even as a crude tech demo. That's a whopping 49 years. You can't really blame anyone for being skeptical if your project was _half_ that time overdue and still had nothing to show.
Even nowadays most clients would just pull the plug on a project if it was one year past the deadline, or often less than that. And no one would blame them for it. Even if you had a project of your own funding, you'd be ridiculed long before 49 years passed if it was still going nowhere. We made fun of Duke Nukem Forever when it was overdue a tenth of that time, and in a tenth of that time Daikatana's hype generated a _very_ nasty backlash. Just, you know, to put things in perspective.
So what I'm saying is: don't take Babbage's bitching as some great insight into the Victorian era England or into the human species as a whole. Babbage's problem wasn't that the English were blockheaded, but simply that he kept hyping a concept and asking for funding without anything to show even as a proof of concept. With or without technical problems, _of_ _course_ the English were skeptical after all that time. That's all.
2. At the risk of repeating myself, the machine built in IIRC 1991 from Babbage's schematics was deliberately built using the tolerances and precision that would have realistically been available in the 19th century. It worked, and calculated pi to 31 decimal places.
Re: (Score:1)
Re: (Score:2)
Re: (Score:2, Informative)
Then Babbage did build a prototype, which worked flawlessly and which he used in numerous public occasions. He even programmed it to perform 'miracles' (disturbances in continuity that up till then was only thought possible by the act of God). "Darwin saw that if apparently inexplicable discontinuities could really be the result of a system of mechanical laws laid down in advance, then
Re: (Score:1)
Having said that, your point about Babbage'
Re: (Score:2, Funny)
Re: (Score:2)
Yeah, RTFA (Score:2)
Re:You don't need our permission (Score:5, Interesting)
Perhaps he's waiting for the $10,000, or perhaps he knows that theory is the important thing, and if it's viable, there will be many organisations vying for better and better implementations.
I for one don't consider science to be something that only people with money do. One has to wonder how many da Vincis there would have been, if other people all had the resources he had. The renaissance itself shows that progress pops up everywhere, given resources. Doesn't mean the science wasn't there in the back of people's minds, waiting for them to get past the point of scraping together money for a loaf of bread, though.
Re: (Score:2, Funny)
Re: (Score:1)
Modern mask sets cost over $500,000 before you hit the manufacturing stage.
Then the equipment needed to control and observe the test device costs in the millions.
I have not yet read the article, but typically, the purpose of papers such as this is to attract investment.
Re: (Score:2)
Re:You don't need our permission (Score:5, Insightful)
Re: (Score:3, Funny)
Re:You don't need our permission (Score:5, Informative)
Oh, and your premise is wrong: building a MEMS chip of a non-trivial size pretty quickly runs in the hundreds of thousands of $, even with educational discounts. So pretty much you have to get the design ready, then ask for funding to build the thing, which is what they are presumably doing.
No "Diamond Age" in the tagging? (Score:5, Interesting)
Re: (Score:2)
Re: (Score:2)
Don't get me wrong, Stephenson is by far and away my favorite author, but his endings tend to be rather sucky. The rest of the book always makes up for this though.
And K. Eric Drexler wrote about it in '85 (Score:2, Informative)
Re: (Score:2)
order of magnitude? (Score:2, Insightful)
Re: (Score:2, Funny)
Re: (Score:1)
Re: (Score:1)
Re: (Score:2)
temperature differential is the correct model (Score:2, Insightful)
Re: (Score:2, Insightful)
Re: (Score:2)
What matters isn't so much the range from theoretical possibility in the universe, but the ability of something to function where it's going to be used. From freezing to boiling of water is roughly what temperature range you'll see where computers might be useful to people, using that range as a baselin
Re: (Score:2)
Re: (Score:2)
Keeping a harsh environment between 20 C and 320 C is much, much easier and more cost effective than keeping it between 20 C and 50 C. If one processor can handle the whole range, and the other can't, how is that less important than the actual nu
Re: (Score:2)
First you reply refuting a point I never made, then you reply to my post quoting your post.
You evidently thought I was talking about something else, so here is the short version of my point again, as an example: You do not design the cooling system for a CPU to maintain less than a 50 K differential to ambient (that's what I would consider to be 'achieving a certain temperature differential', which is what the post I replied to said was 'the most important part of thermal engineering'), you design it
Re: (Score:2)
If your only goal is keeping it under 350 C, and it's being operated in a climate controlled room at 20 C, then unless it's dissipating shitloads of heat itself it won't need a cooling system at all. Even then, a small and simple one should do well.
If, however, you're operating it on Venus or i
Re: (Score:2)
Re: (Score:2)
By maintaining the temperature of the device at a certain temperature at a given ambient temperature, you are creating and maintaining a difference in temperature between two points (a point on the device and a point in the environment) within the volume affected by the cooling system.
So what you just said, besides being overly snarky, was complete and utter nonsense. If you sub in the words for what the words mean, you said, "What you described is achieving a temperature
Re: (Score:2)
Prior art... (Score:5, Insightful)
Beyond Drexler's theoretical work, carbon nanotubes [wikipedia.org] were demonstrated as nano-mechanical transistors in 2000. Basically, the nanotube was positioned over various electric pads. A current could be applied to mechanically deform the nanotube. The deformation was stable, and could be read-out by measuring current across the tube. Since the deformation was stable and reversible, the tubes could be used as persistent storage or as switching/logic elements. In fact, switching speeds of gigahertz were demonstrated. The vision was to have long nanotubes in a huge cross-bar architecture, leading to high-density persistent storage. As is often the case, scale-up was difficult.
This present work appears to pattern a nano-sized post between conducting pads (out of a gold/silicon layered system), and to use that post as a single-electron transistor. The 'mechanical' part comes from mechanically coupling multiple pillars to use as a gain mechanism for a transistor. This is basically much closer to conventional micro-lithography, and as such, it should fit in with current lithographic infrastructure much more easily than the nanotube concept did.
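To make the crossbar idea concrete, here's a hypothetical toy model of the nanotube memory described above (the class and method names are mine, purely for illustration of the stable-deformation-as-bit concept):

```python
# Hypothetical toy model of a nanotube crossbar memory: each row/column
# crossing holds one stable, reversible deformation state (one bit).
class CrossbarMemory:
    def __init__(self, rows, cols):
        # 0 = relaxed tube, 1 = deformed tube pulled down onto the pad
        self.state = [[0] * cols for _ in range(rows)]

    def write(self, row, col, bit):
        # A voltage pulse deforms (or relaxes) the tube at one crossing;
        # the deformation is stable without power, so storage is persistent.
        self.state[row][col] = bit

    def read(self, row, col):
        # Read-out senses the current through the tube, which differs
        # between the relaxed and deformed states; reading is non-destructive.
        return self.state[row][col]

mem = CrossbarMemory(4, 4)
mem.write(2, 3, 1)
print(mem.read(2, 3), mem.read(0, 0))  # deformed crossing vs. relaxed one
```

The real scale-up problem, of course, was addressing and fabricating millions of such crossings reliably, not the logical model.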
Re:Prior art... (Score:4, Insightful)
Having skimmed the article, I'm a bit unimpressed by the comparison to Babbage. While this looks like neat technology, it's NOT clockwork--it's electronic transistors with mechanical gates, as noted in the parent.
(*) Parts only touching where desired, vacuum elsewhere--remember, we're talking atomic scale here.
Re: (Score:2)
I didn't rta, barely rts, but I was reading the discussion and thinking to myself, "Self, can you imagine how unbelievably sluggishly slow a mechanical computer would be?" I'm kinda thinking it would take around 20 years to boot Vista, but that's just off the top of my head.
Re: (Score:1, Informative)
The other thing to note is that with MEMS (all mechanical, except for the motor), devices move at far higher speeds than you would think. 1) The pieces are so small that they have essentially zero mass. 2) Because the devices have almost no mass, you can drive them to full speed and stop them incredibly fast, because there isn't enough mass to build
Re: (Score:2)
The other reasonable alternative would be to first build a half-adder at the nano-tech scale. (Simplifying the device sufficiently to test the proposed design.) Then you could work your way up to a 6502. (That was the old Apple ]
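A half-adder really is the minimal test circuit, just two gates, which is why it's the traditional first milestone for any new switching technology. A quick sketch:

```python
# A half-adder is the smallest useful test circuit: two gates total.
def half_adder(a, b):
    """Add two bits; returns (sum, carry)."""
    return a ^ b, a & b  # sum = XOR, carry = AND

# Exhaustive truth table, which is also the entire test plan:
for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> carry {c}, sum {s}")
```

If a new device can implement XOR and AND (or just NAND, from which both follow), it can in principle implement anything up to and including a 6502.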
Re: (Score:1)
Hmmm, I thought they were talking about a little, bitty abacus.
Re: (Score:2, Interesting)
Re: (Score:1)
Rod Logic... (Score:1)
This isn't news. (Score:2)
Technological Advancement (Score:2)
http://en.wikipedia.org/wiki/Dr._NIM [wikipedia.org]
Re: (Score:1)
Cassini Division? (Score:1, Interesting)
Limited applications (Score:1)
For other applications they won't be.
The ideas aren't new, but there are probably some legitimate patents to be had in the particulars. That thought should help drive venture capital.
Re: (Score:1)
Also See the works of Neal Stephenson... (Score:5, Informative)
Anyway, the machines aren't self-replicating, but they are fabricated in microwave-style (and larger) boxes that take an elemental 'feed' of organic compounds and data. The book has some great philosophical and social content, and breaks with most of the annoying characteristics of the previous 'cyberpunk'-style writing.
Ryan Fenton
Re: (Score:3, Informative)
That was the only Stephenson book I really got bored of...
Re: (Score:2)
Re: (Score:2)
Snow Crash Diamond Age (Score:1)
Let's face it, all Neal Stephenson books are vehicles for a certain type of cool technology, and not bad for it. However, he only seems to have worked out how to end a book well once - Snow Crash. Many authors struggle with endings, and Neal seems to struggle more than most. I say this with huge respect as a fan.
The problem with the Diamond Age is that, for me, it had nothing beyond the projection of what might happen if a cool tech existed - Snow Crash had a great story and fast-paced writing on top o
Re: (Score:1)
Re: (Score:1)
Re: (Score:2)
One thing though... (Score:2, Interesting)
Still, good on 'em....
Re:One thing though... (Score:4, Insightful)
Re: (Score:1, Informative)
Man Proposes, Mechanics Disposes (Score:4, Insightful)
I'll be impressed when someone actually builds some. Or writes a lot more engaging science fiction than the BBC just published.
Nanotech science (Score:4, Informative)
* Nanotechnology information [archive.org] [archived] [2002]
* Bibliography of nanotechnology and nanoscience [tu-darmstadt.de] [pdf] [2004]
* Brad Hein's nanotechnology website [nanosite.net]
* Ned Seeman's DNA nanotech bibliography [nyu.edu]
* MEMS/nanotech reading list [mems-exchange.org]
* Even more publications in nanotechnology [dyndns.org]
* sci.nano archives [leitl.org]
* The open micro/nano-manufacturing project [uky.edu]
* Nanotech in scifi [geocities.com]
And if anybody has links on nanomechanical synthesis, that'd be much appreciated. IIRC, nanolithography is one of the main areas of development, along with nonlinear optics to get the required precision manufacturing.
Shock and vibration (Score:5, Insightful)
Re: (Score:3, Insightful)
"Pulling a Babbage" (Score:1, Offtopic)
Re: (Score:3, Funny)
You must be renting a warehouse or something; I'm having trouble fitting a difference engine in our basement...
Re: (Score:2)
Fallout games - cathode tube computers (Score:2)
I, for one, (Score:3, Funny)
It's not new (Score:3, Interesting)
Don't think Babbage, think relay computers (Score:3, Interesting)
I suspect that what is being thought of is actually relay technology - so let's call it a Turing/Von Neumann/Mauchly approach (Alan Turing was a pioneer of relay logic among his other achievements, and Von Neumann and Mauchly were both associated with relay calculators.) Although relay computers were effectively obsolete by the 70s, they persisted in industrial controls for longer because (a) they could be debugged by electricians and (b) they could tolerate levels of contamination that destroyed the electronics of the period. The last generation of ultra-clean sealed relays and mercury relays were extremely durable and reliable. They didn't have the power handling, size for size, of power transistors but they had less internal dissipation. As a simple example, I was designing equipment in the late 80s which had to switch a few watts at around 500VDC. Although there are transistors that can handle these voltages, the design of the switching circuit necessitated a hybrid device costing around $200. A suitable relay cost $10 and was immune from punchthrough.
I'm prepared to guess that there will be niche applications for these ideas - but as with the IC engine, the sheer accumulated R&D in electromechanical systems will mean that widespread adoption will never be economic. It's easier to duct cold air over an engine management system (as on my car, with a few $ of plastics) than it is to redesign the entire chain from logic to actuator to use a different technology. And the current density of flash memory suggests that the hill to be scaled by electromechanical memory is enormous. Back in the days when flash chips were 256 bytes and not too reliable, there might have been a chance. Now when 8GByte USB dongles are cheap and reliable, it will be a lot harder.
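For those who never met relay logic: it maps directly onto boolean functions (series contacts = AND, parallel contacts = OR, a normally-closed contact = NOT). A sketch of the classic seal-in latch, the bread and butter of the industrial controls mentioned above (the function names are mine):

```python
# Relay logic in miniature: series contacts implement AND, parallel
# contacts implement OR, and a normally-closed contact implements NOT.
def series(*contacts):    # AND: current flows only if every contact is closed
    return all(contacts)

def parallel(*contacts):  # OR: current flows if any parallel branch is closed
    return any(contacts)

def normally_closed(contact):  # NOT: this contact opens when energized
    return not contact

# The seal-in (latch) circuit, the classic relay idiom: once 'start' is
# pressed, the relay holds itself on through its own contact until 'stop'.
def latch(start, stop, held):
    return series(parallel(start, held), normally_closed(stop))

state = False
state = latch(True, False, state)   # press start -> relay picks up
state = latch(False, False, state)  # release start -> seal-in contact holds it
print(state)  # still energized
```

Exactly this structure is what electricians could debug with a voltmeter, which is the maintainability point made above.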
Re: (Score:2)
Don't disagree (Score:2)
Re: (Score:3, Funny)
Technology of the past getting new life (Score:2, Interesting)
Link (warning: PDF) http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel5/16/21940/01019936.pdf [ieee.org]
Sure, it resists electromagnetic shocks (Score:2, Interesting)
Re: (Score:2)
It always amazes me how, no matter what the technology, when you can't find anything wrong with it, a good beating generally fixes it. Probably because mechanical parts are usually the cause of the problem...
nano-electro-mechanical from what I understand (Score:1)
Re: (Score:2)
Re: (Score:1)
Ridiculous idea (Score:2, Interesting)
No, actually (Score:2)
As for your last comment, it's rubbish. Have you ever seen a teleprinter? A piezo inkjet printer? A hard disk drive?
Obviously you aren't an ancient hacker, or you would know a bit more about electromechanical technology.
Re: (Score:2)
Re: (Score:2)
Reversible Computing? (Score:5, Interesting)
I know this is /. and actually reading the article [iop.org] is unusual, but *I* did and came upon this:
I would LOVE to see THAT happen!
<dream>Whenever a program crashes, just open the debugger, run it backwards until it gets "weird". Run it forwards and backwards again to isolate where it's broken. Of course, there are some problems with asynchronous signals (disk I/O, keyboard, mouse, etc.) but I can dream, can't I?</dream>
But seriously, could this just be something thrown in to help get more funding or is it an actual possibility?
Re: (Score:2, Insightful)
Re: (Score:3, Informative)
You should take a look at the meaning of "reversible computing" [wikipedia.org].
In short, you won't be able to reverse all operations in a debugger, but it may save some money on your light bill. Personally, I can't see how it can be used, but I'm no expert on it, and there is a lot of buzz around it.
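A minimal illustration of what "reversible" means here: gates like the Toffoli gate are bijections on their inputs, so any circuit built from them can be run backwards, unlike ordinary AND, which throws information away (and, per Landauer, dissipates heat doing so):

```python
# Reversible gates are bijections, so circuits built from them can be undone.
def toffoli(a, b, c):
    """Self-inverse reversible gate: flips c iff a and b are both 1.
    With c = 0 it computes AND reversibly: the third output is a & b,
    but the inputs a and b are still carried through unchanged."""
    return a, b, c ^ (a & b)

# Applying the gate twice recovers the original input on all 8 cases:
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert toffoli(*toffoli(a, b, c)) == (a, b, c)

# Contrast with plain AND: from the single output bit a & b you cannot
# recover (a, b), so the operation is irreversible.
print("Toffoli is its own inverse on all 8 inputs")
```

That information preservation is the whole energy argument; the run-the-debugger-backwards dream would need a lot more than reversible gates.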
A tuning fork in a conventional chip (Score:4, Informative)
This pillar can be charged from the terminals and by transferring charge it can switch the current. This nano-electromechanical single electron transistor (NEMSET) was invented by other researchers, TFA mainly explores electronic properties of the NEMSET and how to put them together into circuits, create circuit elements, etc. but they didn't really do any of it yet.
Mainly: it can run at high temperatures, it is not as fast as ordinary transistors, but it seems like it could offer multivalued logic, not just binary, and as for power, just about anything will do, including self-excitation, environmental vibration, etc.
So while this might be just the thing for making a laptop you can use without frying your gonads, it is not what one might think when hearing the words "nanomechanical computer".
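On the multivalued logic point, the payoff is information density: n ternary elements encode 3^n states instead of 2^n. A rough sketch (the helper names are mine; min/max are the standard ternary analogues of AND/OR):

```python
# Sketch of multivalued (here, ternary) logic: each element holds one of
# three states, so n trits encode 3**n values instead of 2**n.
import math

def bits_needed(n_values):
    return math.ceil(math.log2(n_values))

def trits_needed(n_values):
    return math.ceil(math.log(n_values, 3))

# Standard ternary gates over the values {0, 1, 2}:
def tmin(a, b):  # ternary analogue of AND
    return min(a, b)

def tmax(a, b):  # ternary analogue of OR
    return max(a, b)

n = 1_000_000
print(bits_needed(n), "bits vs", trits_needed(n), "trits to address", n, "values")
```

Fewer elements per value is attractive when each element is slow, as these pillars apparently are.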
Imagine (Score:1)
Tubes? (Score:1)