Hardware Science Technology

New Carbon Nanotube Chip Outperforms Silicon Semiconductors (nanotechweb.org) 42

"Researchers at the University of Wisconsin-Madison are the first to have fabricated carbon nanotube transistors (CNTs) that outperform the current-density of conventional semiconductors like silicon and gallium arsenide," reports NanotechWeb. Slashdot reader wasteoid shares the site's interview with one of the researchers: "When the transistors are turned on to the conductive state (meaning that current is able to pass through the CNT channel) the amount of current traveling through each CNT in the array approaches the fundamental quantum limit," he tells nanotechweb.org.

"Since the CNTs conduct in parallel, and the packing density and conductance per tube are very high, the overall current density is very high too -- at nearly twice that of silicon's. The result is that these CNT array FETs have a conductance that is seven times higher than any previous reported CNT array field-effect transistor."

The research was funded in part by the U.S. Army and Air Force, as well as the National Science Foundation. "The implication here is that by replacing silicon with a CNT channel, it should be possible for us to make either a higher performing device or one that works at lower power."

In other news, Fujitsu announced this week that it's joining an effort to release a 256-megabyte 55-nanometer carbon nanotube-based NRAM by 2018.


  • Yeah? (Score:5, Insightful)

    by ShooterNeo ( 555040 ) on Sunday September 04, 2016 @07:39AM (#52824223)

    Hooray? There have been breathless articles about how diamond or CNT or whatever stomps silicon flat for 30 years now. The problem is that silicon is a moving target - it keeps being improved. If CNT or diamond is fundamentally better than the best possible silicon, which it probably is, the only way it can "catch up" is if silicon is improved to its practical limits. That might be just a few years away - there's talk of the next few die shrinks being the last ones for silicon before physics won't allow any further improvement for 2D silicon wafers. (And 3D has the fundamental problems of trapping heat and much more difficult manufacturing.)

    Still, this is cool. I wonder if large-scale power-switching transistors could be a future use for CNT tech? If they have better current flow and lower "on" resistance than silicon, that would be great.

    • We are close to peak compute, as Moore's law will be finished in a couple more shrinks. The next step will be in the software and the architecture, and improvements will likely be linear, not geometric as they have been since the invention of the integrated circuit. Fortunately it seems that the complexity of current computing systems is close to the number in biological brains, so it may be enough. Carbon nanotubes may give one more generation of geometric shrink after the last silicon one but quantum physics -

      • I disagree. I think you're completely wrong and you should know it if you think about it for a few seconds.

        1. Genetic engineering takes 28 years minimum to get even initial results.
        2. "Peak compute" is for 2d chips that aren't tightly integrated. If we tried to make computers with similar memory and processing power to the human brain, we'd need many square kilometers of total chip area and to use optical interconnects. That's a whole additional set of improvements - making sets of chips that can be ma

        • "we'd need many square kilometers of total chip area and to use optical interconnects"

          Based on what? Optics aren't faster than electrical interconnects.
          • The optical interconnects are to carry large quantities of information with low latency between chips that have to be physically spread apart a number of meters because there are thousands or millions of them. There would also be on chip busses that are electrical.

        • The delay before results in genetic engineering depends upon the species. Many plants and animals have a new generation every year.

          Even in humans, some corrections of inherited genetic flaws are evident early in life. Haemophilia and color blindness come to mind.

          A lot of work remains to be done identifying the causes of inherited flaws.

          There's also a wide range of possibilities for genetic engineering to improve what it means to be human. Consider that most animals' bodies manufacture their own vitamin C, a

          • Look, kid, I got a master's degree in a related field. I have a vague idea of what's possible. It is true that there are ways to make the world better with genetic engineering. However, nature is a mess of stuff that was never intended to be purposefully edited and the problems are virtually endless and essentially intractable. It's nearly impossible to make a drug that won't kill some of the people you give it to, and genetic edits are even harder because they are various hijacked viruses that cause en

    • They say in the article it's half the price of DRAM. And it's non-volatile. It seems like the perfect material for the next-gen exascale machines, which pretty much demand large persistent memories with near-RAM speeds (because you can't get the data from disk to the processor at that scale -- you need to store stuff locally).

      • Perhaps - why not integrate the memory and the processing into the same physical chip? Especially for things like artificial neural nets, where essentially the algorithm is accept inputs from connected synapses, generate output from inputs + synapse weight, transmit output.

        On every tick the synapse weight variable is accessed for every node. So you essentially have a chip where you must access every memory location every tick, in parallel across millions or billions of artificial neurons. You also have i
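
        A minimal sketch (Python, with arbitrary placeholder sizes) of the access pattern described above: every synapse weight is read on every tick, so memory traffic per tick scales with the weight count no matter how fast the arithmetic is.

            import numpy as np

            # Toy fully connected "tick": all neurons^2 weights are read each step,
            # so bytes moved per tick = number of weights * bytes per weight.
            rng = np.random.default_rng(0)
            neurons = 4096                       # placeholder network size
            weights = rng.standard_normal((neurons, neurons)).astype(np.float32)
            state = rng.standard_normal(neurons).astype(np.float32)

            def tick(state, weights):
                """One update: new state = f(W @ state); touches every weight once."""
                return np.tanh(weights @ state)

            state = tick(state, weights)
            print(f"{weights.nbytes / 2**20:.0f} MiB of weights read every tick")  # 64 MiB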

        • And why not manufacture three dimensional "chips" the size of an ATX power supply. Build the heat pipes right in.
    • Also, CNTs transfer electrons and photons; silicon needs o-to-e and e-to-o converters... CNTs can be photon valves... we store info in electrons and move info in photons... Mark Horowitz, Stanford University.
  • Use in EVs (Score:4, Insightful)

    by shawnhcorey ( 1315781 ) on Sunday September 04, 2016 @07:45AM (#52824239) Homepage
    Good. Maybe high-power motor controllers will become cheaper.
  • CNT's? Gee, no one's going to mock that.
  • by hyades1 ( 1149581 ) <hyades1@hotmail.com> on Sunday September 04, 2016 @09:09AM (#52824431)

    Sooner or later, somebody's going to look at this acronym and ask to buy a vowel.

  • by wierd_w ( 1375923 ) on Sunday September 04, 2016 @09:19AM (#52824473)

    Have they found a practical way to mass produce single walled carbon nanotubes of arbitrary diameter and length yet?

    I ask because that is, sadly, a requirement for mass manufacture of quality ICs built from carbon semiconductors.

    If they can pull that off cheaply and reliably, that enables carbon to really hit home as an industrial material, and things would get interesting.

    Hand assembling an IC out of cherry picked parts in a cleanroom is not the same as the above. Yes, it lets you see that such chips have immense potential, but without a viable path to mass manufacture, the unit costs will be astoundingly prohibitive. Only the USA's DoD would be able to afford them. I really can't get behind such a nasty barrier in tech as that. The NSA has scary enough toys as is. Having access to ICs that they can drive many times harder than silicon, while the rest of us are left to pound sand due to the price, is not something I want to see.

    • No. The storage, on the other hand, basically works by dumping a bunch of random nanotubes and aligning them to store data.
      • Not even aligning them... just depending on the bit cell area being sufficiently larger than the tube area that, statistically, enough will land in the right arrangement for it to work.

        This turns out to be why Fujitsu is commercializing memory arrays. They can apply error correction and row/column redundancy to hide the number of failed bits.
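
        A hedged sketch of that statistical argument (all numbers below are made up for illustration): if tubes land randomly, the chance a cell ends up with enough usable tubes is a binomial tail, and whatever probability is left over is what the error correction and row/column redundancy have to absorb.

            from math import comb

            def p_cell_works(sites, p_tube, min_tubes):
                """Probability that a bit cell with `sites` candidate tube positions
                gets at least `min_tubes` usable tubes, each usable with prob p_tube."""
                return sum(comb(sites, k) * p_tube**k * (1 - p_tube)**(sites - k)
                           for k in range(min_tubes, sites + 1))

            # Illustrative numbers: 20 candidate positions per cell, 60% chance each
            # is usable, and at least 5 usable tubes needed for a reliable bit.
            p = p_cell_works(sites=20, p_tube=0.6, min_tubes=5)
            print(f"working-cell probability: {p:.6f}")
            print(f"expected bad bits per gigabit: {(1 - p) * 1e9:,.0f}")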

    • Perhaps employing more US workers to hand assemble these chips would help offset the prohibitive costs. Sure, the chips will still cost a lot, but now the US population can afford them.
      • All sorts of things suddenly become possible if one is willing to consider manufacturing chips via an industrial process that doesn't look almost exactly like the one already in use.
      • Higher costs for a faster IC have sunk companies before, and will again. Inselek, for example.
    • If it looks like they can improve CPU performance, then they will suddenly have billions of dollars of research money looking for a way to manufacture them.
  • This will come to nothing. If they are lucky, they will find a ridiculously expensive and negligible niche application. More likely, in a few weeks' time, nobody will remember it, like the hundreds of other breakthrough announcements previously published here. Come on, prove me wrong; make me look like a fool.
    • I agree. I will believe it when I see practical applications for it. If this happens I will gladly put on the fool's cap with you and eat crow, and ask for seconds.

  • by TechyImmigrant ( 175943 ) on Sunday September 04, 2016 @10:14AM (#52824585) Homepage Journal

    100 nm transistors? It'll work for displays, but for logic those transistors are huge, and as far as I can tell they don't get smaller. The tubes are 1 nm in diameter and you need enough of them in parallel for it to work. At least that's what the low-information article implied.
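
    As a rough illustration of that point (with assumed, not reported, numbers): only a handful of 1 nm tubes fit under a 100 nm wide gate at a realistic pitch, so each tube has to carry quite a lot of current for the device to compete.

        # How many tubes fit under a transistor of a given width at an assumed pitch,
        # and what each tube must carry to hit an assumed drive-current target.
        # All numbers are placeholders for illustration.
        width_nm = 100            # device width from the comment above
        pitch_nm = 10             # assumed tube-to-tube spacing
        target_ua_per_um = 800    # assumed drive-current target (uA per um of width)

        n_tubes = width_nm // pitch_nm
        per_tube_ua = target_ua_per_um * (width_nm / 1000) / n_tubes
        print(f"{n_tubes} tubes, ~{per_tube_ua:.0f} uA each to reach {target_ua_per_um} uA/um")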

  • by John Smith ( 4340437 ) on Sunday September 04, 2016 @10:39AM (#52824675)
    Carbon is smaller than silicon.
  • by Richard Kirk ( 535523 ) on Sunday September 04, 2016 @11:06AM (#52824745)

    At the moment the sensible money is on silicon. Make silicon circuits 10% smaller or 10% faster and the whole of electronics benefits. If you try to do the same thing with carbon, then you have to re-invent many of the fabrication processes from scratch before you can make a single useful gadget.

    In the long term, carbon is a no-brainer. It has a huge band gap, which lets it be stable at high temperatures. It can bond to itself and be a super-resistor, a resistor, a semiconductor, and a conductor. Down the middle of carbon tubes it may even manage to be a superconductor. You could make a memory element using a few tens of atoms. Can you imagine having a mole of bits? On the other hand, trying to make a conductive track by doping silicon gets harder and harder as the size drops, and there are problems getting the current to turn corners in a single crystal.

    So, what do we do in the middle-term? We can make something that is probably bigger than is ideal using the existing silicon technology. We will find a niche market that needs the same simple thing replicated lots of times - and non-volatile memory is the obvious choice - and leave making a carbon microprocessor for when we have more of the other bits working. That is what people have been predicting for years, and now they are actually beginning to do it.

    Why are they doing it now? Well, I can remember over the past 40-odd years people saying you cannot get Si fabrication much below 10 microns, and then there were limits making them below 1 micron, and then you absolutely could not get below 0.1 micron. And as long as silicon technology progressed, it was the better short-term investment. But as we go on, the next-generation silicon plants will be more expensive, the rewards are getting smaller, and the chances of some unexpected breakthrough dwindle. It is a good time for something to give.

  • And everyone's speculating how they will impact computing... Am I the only one even halfway curious about how they sound in an amplifier?

"Engineering without management is art." -- Jeff Johnson

Working...