New Carbon Nanotube Chip Outperforms Silicon Semiconductors (nanotechweb.org) 42
"Researchers at the University of Wisconsin-Madison are the first to have fabricated carbon nanotube transistors (CNTs) that outperform the current-density of conventional semiconductors like silicon and gallium arsenide," reports NanotechWeb.
Slashdot reader wasteoid shares the site's interview with one of the researchers:
"When the transistors are turned on to the conductive state (meaning that current is able to pass through the CNT channel) the amount of current traveling through each CNT in the array approaches the fundamental quantum limit," he tells nanotechweb.org.
"Since the CNTs conduct in parallel, and the packing density and conductance per tube are very high, the overall current density is very high too -- at nearly twice that of silicon's. The result is that these CNT array FETs have a conductance that is seven times higher than any previous reported CNT array field-effect transistor."
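The "fundamental quantum limit" the researcher mentions is the conductance quantum, G0 = 2e²/h, the most a single ballistic one-dimensional channel can conduct. A quick back-of-the-envelope check (the constants are standard CODATA values; the two-channels-per-tube figure is the usual band-degeneracy assumption, not a number from the article):

```python
# Conductance quantum G0 = 2 e^2 / h: the maximum conductance of a
# single ballistic, spin-degenerate 1-D channel.
E_CHARGE = 1.602176634e-19   # elementary charge, C
PLANCK_H = 6.62607015e-34    # Planck constant, J*s

g0 = 2 * E_CHARGE**2 / PLANCK_H
print(f"G0 = {g0 * 1e6:.1f} microsiemens")  # ~77.5 uS

# A semiconducting CNT is usually treated as two such channels
# (band degeneracy), so its ideal on-state conductance is ~2*G0.
print(f"per-tube limit ~ {2 * g0 * 1e6:.1f} microsiemens")
```

Approaching ~155 µS per tube is what lets a densely packed parallel array beat silicon's current density.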
The research was funded in part by the U.S. Army and Air Force, as well as the National Science Foundation. "The implication here is that by replacing silicon with a CNT channel, it should be possible for us to make either a higher performing device or one that works at lower power."
In other news, Fujitsu announced this week that it's joining an effort to release a 256-megabyte 55-nanometer carbon nanotube-based NRAM by 2018.
Yeah? (Score:5, Insightful)
Hooray? There have been breathless articles about how diamond or CNT or whatever stomps silicon flat for 30 years now. The problem is that silicon is a moving target - it keeps being improved. If CNT or diamond is fundamentally better than the best possible silicon, which it probably is, the only way it can "catch up" is if silicon is improved to its practical limits. That might be just a few years away - there's talk of the next few die shrinks being the last ones for silicon before physics doesn't allow any further improvement for 2D silicon wafers. (And 3D has the fundamental problems of trapping heat and much more difficult manufacturing.)
Still, this is cool. I wonder whether large-scale power-switching transistors could be a new future use for CNT tech? If they have better current flow and lower "on" resistance than silicon, that would be great.
Genetics is the future (Score:3)
We are close to peak compute, as Moore's law will be finished in a couple more shrinks. The next step will be in the software and the architecture, and improvements will likely be linear, not geometric as they have been since the invention of the integrated circuit. Fortunately it seems that the complexity of current computing systems is close to the number in biological brains, so it may be enough. Carbon nanotubes may give one more generation of geometric shrink after the last silicon one, but quantum physics -
Re: (Score:2)
I disagree. I think you're completely wrong and you should know it if you think about it for a few seconds.
1. Genetic engineering takes 28 years minimum to get even initial results.
2. "Peak compute" is for 2d chips that aren't tightly integrated. If we tried to make computers with similar memory and processing power to the human brain, we'd need many square kilometers of total chip area and to use optical interconnects. That's a whole additional set of improvements - making sets of chips that can be ma
Re: (Score:2)
Based on what? Optics aren't faster than electrical interconnects.
Re: (Score:2)
The reason to go with optical computing is the massive gain in bandwidth and much lower attenuation. One electrical computing core is just that, one computing core, an optical comput
Re: (Score:2)
The optical interconnects are to carry large quantities of information with low latency between chips that have to be physically spread apart a number of meters because there are thousands or millions of them. There would also be on chip busses that are electrical.
Re: (Score:2)
The delay before results in genetic engineering depends upon the species. Many plants and animals have a new generation every year.
Even in humans, some corrections of inherited genetic flaws are evident early in life. Haemophilia and color blindness come to mind.
A lot of work remains to be done identifying the causes of inherited flaws.
There's also a wide range of possibilities for genetic engineering to improve what it means to be human. Consider that most animals' bodies manufacture their own vitamin C, a
Re: (Score:2)
Look, kid, I got a master's degree in a related field. I have a vague idea of what's possible. It is true that there are ways to make the world better with genetic engineering. However, nature is a mess of stuff that was never intended to be purposefully edited and the problems are virtually endless and essentially intractable. It's nearly impossible to make a drug that won't kill some of the people you give it to, and genetic edits are even harder because they are various hijacked viruses that cause en
Price? (Score:2)
They say in the article it's half the price of DRAM. And it's non-volatile. It seems like the perfect material for the next-gen exascale machines, which pretty much demand large persistent memories with near-RAM speeds (because you can't get the data from the disk to the processor at that scale -- you need to store stuff locally).
Re: (Score:2)
Perhaps - why not integrate the memory and the processing into the same physical chip? Especially for things like artificial neural nets, where essentially the algorithm is accept inputs from connected synapses, generate output from inputs + synapse weight, transmit output.
On every tick the synapse weight variable is accessed for every node. So you essentially have a chip where you must access every memory location every tick, in parallel across millions or billions of artificial neurons. You also have i
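The tick loop described above can be sketched in a few lines (a minimal dense version; the names, shapes, and the tanh transfer function are illustrative assumptions, not any particular hardware's design):

```python
import numpy as np

def tick(outputs, weights, bias):
    """One update tick: every neuron reads every synapse weight.

    outputs : (N,) activations from the previous tick
    weights : (N, N) synapse weights -- all N*N entries are read each
              tick, which is the memory-bandwidth problem the comment
              describes and why on-chip memory would help
    bias    : (N,) per-neuron offsets
    """
    # input to each neuron = weighted sum of connected outputs
    pre = weights @ outputs + bias
    # a simple nonlinearity stands in for the neuron's transfer function
    return np.tanh(pre)

rng = np.random.default_rng(0)
n = 4
w = rng.normal(scale=0.5, size=(n, n))
x = rng.normal(size=n)
b = np.zeros(n)
for _ in range(3):          # three ticks
    x = tick(x, w, b)
print(x.shape)  # (4,)
```

With memory and compute on the same die, the `weights` array never has to cross a chip boundary, which is the point of the suggestion.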
Re: (Score:2)
Re: (Score:2)
Yes, that would be the obvious eventual solution, there just does not yet exist a way to do that.
Re: Yeah? (Score:1)
Use in EVs (Score:4, Insightful)
SRSLY? (Score:2)
OK, who's going to say it first... (Score:3, Funny)
Sooner or later, somebody's going to look at this acronym and ask to buy a vowel.
Re: (Score:2)
I have little doubt that your vision of the future is both accurate and demonstrative of a deep understanding of the human condition.
Carbon nanotubes... (Score:4, Insightful)
Have they found a practical way to mass produce single walled carbon nanotubes of arbitrary diameter and length yet?
I ask, because that is sadly a requirement for mass manufacture of quality ICs built using carbon semiconductor.
If they can pull that off cheaply and reliably, that enables carbon to really hit home as an industrial material, and things would get interesting.
Hand assembling an IC out of cherry picked parts in a cleanroom is not the same as the above. Yes, it lets you see that such chips have immense potential, but without a viable path to mass manufacture, the unit costs will be astoundingly prohibitive. Only the USA's DoD would be able to afford them. I really can't get behind such a nasty barrier in tech as that. The NSA has scary enough toys as is. Having access to ICs that they can drive many times harder than silicon, while the rest of us are left to pound sand due to the price, is not something I want to see.
Re: Carbon nanotubes... (Score:2)
Re: (Score:2)
Not even aligning them... just depending on the bit-cell area being sufficiently larger than the tube area that, statistically, enough tubes will land in the right arrangement that it will work.
This turns out to be why Fujitsu is commercializing memory arrays. They can apply error correction and row/column redundancy to hide the number of failed bits.
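Row/column redundancy of the kind described can be sketched as a remap table: physical rows that fail test are swapped for spares, so bad bit cells never appear in the logical address space. (A toy illustration of the general technique, not Fujitsu's actual scheme.)

```python
def build_remap(num_rows, bad_rows, spare_rows):
    """Map each logical row to a known-good physical row.

    bad_rows   : set of physical rows that failed test
    spare_rows : extra physical rows held in reserve
    Returns a list where remap[logical] = physical; raises
    StopIteration if failures outnumber spares.
    """
    spares = iter(spare_rows)
    remap = []
    for row in range(num_rows):
        if row in bad_rows:
            remap.append(next(spares))  # substitute a spare row
        else:
            remap.append(row)
    return remap

# 8 logical rows; physical rows 2 and 5 failed; two spares available
remap = build_remap(8, bad_rows={2, 5}, spare_rows=[8, 9])
print(remap)  # [0, 1, 8, 3, 4, 9, 6, 7]
```

Combined with per-word error correction, this lets a memory array tolerate a defect rate that would be fatal in random logic, which is why memory is the natural first product for an immature process.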
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Prediction (Score:1)
Re: (Score:2)
I agree. I will believe it when I see practical applications for it. If that happens I will gladly put on the fool's cap with you, eat crow, and ask for seconds.
100nm (Score:3)
100nm transistors? That'll work for displays, but for logic those transistors are huge, and as far as I can tell they don't get smaller. The tubes are 1nm in diameter and you need enough of them in parallel for it to work. At least that's what the low-information article implied.
Well, when you get down to it... (Score:3)
It will happen. But it won't be easy. (Score:5, Interesting)
At the moment the sensible money is on silicon. Make silicon circuits 10% smaller or 10% faster and the whole of electronics benefits. If you try to do the same thing with carbon, then you have to re-invent many of the fabrication processes from scratch before you can make a single useful gadget.
In the long term, carbon is a no-brainer. It has a huge band gap, which lets it be stable at high temperatures. It can bond to itself and be a super-resistor, a resistor, a semiconductor, and a conductor. Down the middle of carbon nanotubes it may even manage to be a superconductor. You could make a memory element using a few tens of atoms. Can you imagine having a mole of bits? On the other hand, trying to make a conductive track by doping silicon gets harder and harder as the size drops, and there are problems getting the current to turn corners in a single crystal.
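A mole of bits is worth putting in familiar units (straight arithmetic on Avogadro's number; the tens-of-atoms-per-element figure is the comment's own estimate):

```python
AVOGADRO = 6.02214076e23       # elementary entities per mole

bits = AVOGADRO                # one mole of bits
bytes_ = bits / 8
zettabytes = bytes_ / 1e21
print(f"{zettabytes:.0f} ZB")  # ~75 zettabytes in a mole of bits

# At ~30 atoms per memory element, a mole of carbon (~12 g) stores:
atoms_per_bit = 30
mole_carbon_bits = AVOGADRO / atoms_per_bit
print(f"{mole_carbon_bits / 8 / 1e21:.1f} ZB per 12 g of carbon")
```

Even the conservative version is a couple of zettabytes in a pencil lead's worth of material, which is the scale of the claim being made.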
So, what do we do in the middle-term? We can make something that is probably bigger than is ideal using the existing silicon technology. We will find a niche market that needs the same simple thing replicated lots of times - and non-volatile memory is the obvious choice - and leave making a carbon microprocessor for when we have more of the other bits working. That is what people have been predicting for years, and now they are actually beginning to do it.
Why are they doing it now? Well, I can remember over the past 40-odd years people saying you cannot get Si fabrication much below 10 microns, and then there were limits making them below 1 micron, and then you absolutely could not get below 0.1 micron. And as long as silicon technology progressed, it was the better short-term investment. But as we go on, the next-generation silicon plants will be more expensive, the rewards are getting smaller, and the chances of some unexpected breakthrough dwindle. It is a good time for something to give.
Carbon nanotube TRANSISTORS (Score:2)
Re: (Score:2)
they add a smoky feel