The Death of the Silicon Computer Chip
Stony Stevenson sends a report from the Institute of Physics' Condensed Matter and Material Physics conference, where researchers predicted that the reign of the silicon chip is nearly over. Nanotubes and superconductors are leading candidates for a replacement; graphene goes unmentioned. "...the conventional silicon chip has no longer than four years left to run... [R]esearchers speculate that the silicon chip will be unable to sustain the same pace of increase in computing power and speed as it has in previous years. Just as Gordon Moore predicted in 2005, physical limitations of the miniaturized electronic devices of today will eventually lead to silicon chips that are saturated with transistors and incapable of holding any more digital information. The challenge now lies in finding alternative components that may pave the way to faster, more powerful computers of the future."
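The scaling argument behind the summary can be sketched numerically. This is an illustrative back-of-the-envelope model, not figures from the article: it assumes the classic doubling period of roughly two years, with the Intel 4004's roughly 2,300 transistors on a roughly 10-micron process (1971) as a baseline.

```python
# Sketch of the Moore's Law scaling argument: if transistor counts
# double roughly every two years, the area per transistor halves each
# period, so linear feature sizes shrink by sqrt(2) per period --
# heading toward atomic dimensions, which is the "saturation" the
# summary describes. All baseline figures are assumptions for
# illustration only.

def transistor_count(year, base_year=1971, base_count=2300, doubling_years=2.0):
    """Projected transistor count under a simple doubling model."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

def feature_size_nm(year, base_year=1971, base_nm=10_000, doubling_years=2.0):
    """Linear feature size shrinks by sqrt(2) every doubling period."""
    return base_nm / 2 ** ((year - base_year) / (2 * doubling_years))

if __name__ == "__main__":
    for y in (1971, 1975, 2007):
        print(y, f"{transistor_count(y):.0f} transistors,",
              f"{feature_size_nm(y):.0f} nm features")
```

A silicon lattice spacing is about half a nanometre, so even a crude model like this shows why extrapolating feature sizes a few more process generations runs into atomic limits.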
Not again (Score:5, Informative)
And of course what's really reaching a limit is not the CPUs, but our ability to use them effectively. See "TRIPS architecture" on the wiki as an example end-run around the problem that offers hundredfold improvements using existing fabs.
Maury
Unlikely (Score:5, Informative)
Intel's CTO Justin Rattner just gave a talk at Cornell two days ago; he covered this topic carefully and confirmed that Intel has the technology and plans to sustain Moore's Law for another 10 years on silicon. Technologies such as SOI [wikipedia.org] and optical interconnects will be leveraged to achieve this.
It's not necessarily the size of the transistors that makes chips hard to make these days either (although they are now giving us huge problems with leakage current). It's harder to route the metal between these transistors than it is to pack them onto the silicon. New processors from Intel and AMD have areas with low transistor density simply because it was impossible to route the large metal interconnects between them. Before we can take advantage of even smaller transistors, we'll need a way to achieve higher interconnect density.
Re:I'll... (Score:4, Informative)
For example:
If it costs 1/100 the price but offers no end-user gains in 'speed' and/or 'power', it could still replace silicon. It's not better at doing anything; it just has a higher value.
"All this newfangled stuff is pie-in-the-sky at this point."
Hmmm, some of this is a lot further along than pie in the sky.
Most people on
Finally:
The real problem with silicon is the fabs. They are running into some serious problems at these incredibly small sizes. Some fabs are having problems with metal atoms in the air, atoms that are below the limits of detection and beyond the ability to remove.
I am not dooming and glooming silicon here (although there are some advantages to hitting a minimum size); it's just that some problems aren't going away, they're getting harder to deal with, and the past workarounds aren't cutting it.
gaas - a little nostalgia. (Score:3, Informative)
Of course, wire delays started to become a concern for multi-board processors, and cmos began to deliver enough transistors on a package that out-of-order superpipelining became possible, and the performance advantage of a slightly higher clocked ecl/gaas processor evaporated. This is not to say that there was not a good six- or seven-year window of opportunity for gallium arsenide, while cmos was still pretty feeble. I'll also point out that gaas has continued to be used in specialized applications like serdes, high-speed signal drivers, and cell-communications drivers. You're never going to get millions of mesfets on a chip, but they work really well if you need a few dozen really fast drivers.
As for TRIPS, and a lot of other designs like it, it is essentially working on the problem that modern cmos introduced: we have more transistors than we know what to do with, but we can't drive them any faster. I've seen some clever designs that are very good at solving one type of problem. I have yet to see a design that solves the problem in the general case, and with minimal change in the programming model. A lot of smart people are working on the problem, however, so I suspect that something will come about; it may not happen quickly, however.
Re:I'll... (Score:4, Informative)
No one said anything about mass production in 1916; read the post again.
We started learning to purify it in the 1910s.
From Wikipedia:
The earliest method of silicon purification, first described in 1919 and used on a limited basis to make radar components during World War II, involved crushing metallurgical grade silicon and then partially dissolving the silicon powder in an acid. When crushed, the silicon cracked so that the weaker impurity-rich regions were on the outside of the resulting grains of silicon. As a result, the impurity-rich silicon was the first to be dissolved when treated with acid, leaving behind a more pure product.
From: http://en.wikipedia.org/wiki/Integrated_circuit [wikipedia.org]
The first integrated circuits were manufactured independently by two scientists: Jack Kilby of Texas Instruments filed a patent for a "Solid Circuit" made of germanium on February 6, 1959. Kilby received several US patents.[4][5][6] Robert Noyce of Fairchild Semiconductor was awarded a patent for a more complex "unitary circuit" made of Silicon on April 25, 1961. (See the Chip that Jack built for more information.)
The first integrated circuits contained only a few transistors. Called "Small-Scale Integration" (SSI), they used circuits containing transistors numbering in the tens.
SSI circuits were crucial to early aerospace projects, and vice-versa. Both the Minuteman missile and Apollo program needed lightweight digital computers for their inertial guidance systems; the Apollo guidance computer led and motivated the integrated-circuit technology, while the Minuteman missile forced it into mass-production. These programs purchased almost all of the available integrated circuits from 1960 through 1963, and almost alone provided the demand that funded the production improvements to get the production costs from $1000/circuit (in 1960 dollars) to merely $25/circuit (in 1963 dollars). They began to appear in consumer products at the turn of the decade, a typical application being FM inter-carrier sound processing in television receivers.
The next step in the development of integrated circuits, taken in the late 1960s, introduced devices which contained hundreds of transistors on each chip, called "Medium-Scale Integration" (MSI).
They were attractive economically because while they cost little more to produce than SSI devices, they allowed more complex systems to be produced using smaller circuit boards, less assembly work (because of fewer separate components), and a number of other advantages.
Further development, driven by the same economic factors, led to "Large-Scale Integration" (LSI) in the mid 1970s, with tens of thousands of transistors per chip.
Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, which began to be manufactured in moderate quantities in the early 1970s, had under 4,000 transistors. True LSI circuits, approaching 10,000 transistors, began to be produced around 1974, for computer main memories and second-generation microprocessors.