Hardware

How Vacuum Tubes, New Technology Might Save Moore's Law

MojoKid (1002251) writes: The transistor is one of the most profound innovations in all of human existence. First discovered in 1947, it has scaled like no other advance in human history; we can pack billions of transistors into complicated processors smaller than your thumbnail. After decades of innovation, however, the transistor has faltered: clock speeds stalled in 2005, and for the first time ever, the 20nm process node is set to be more expensive than the 28nm node was. Now, researchers at NASA believe they may have discovered a way to kickstart transistors again, using technology from the earliest days of computing: the vacuum tube. It turns out that when you shrink a vacuum transistor to absolutely tiny dimensions, you can recover some of the benefits of a vacuum tube and dodge the negatives that characterized its use. According to the report, vacuum transistors can draw electrons across the gate without needing a physical connection between source and drain. Make the vacuum area small enough, and reduce the voltage sufficiently, and the field emission effect allows the transistor to fire electrons across the gap without giving them enough energy to ionize the helium inside the nominal "vacuum" transistor. According to the researchers, they've managed to build a working transistor operating at 460GHz, well into the so-called terahertz gap, which sits between microwaves and infrared.
  • by Animats ( 122034 ) on Tuesday June 24, 2014 @02:41AM (#47304135) Homepage

    As a 460GHz computing element, this is a long way off. But it might lead to better terahertz radar. Right now, operating in the terahertz range is painfully difficult. It's a strange region where both electronics and optics work, but not easily. This may be a more effective way to work in that range.

    • by wanax ( 46819 ) on Tuesday June 24, 2014 @04:58AM (#47304467)

      That's mentioned in the IEEE Spectrum article (which, by the way, is about the most clearly written article on an early prototype technology that I've ever read).
      The problems are:
      - Too high an operating voltage, which can probably be mitigated by better geometry.
      - Insufficient simulations at present for improving the geometry, with the caveat that getting better performance (voltage-wise) might compromise durability.
      - Because of the above, they don't yet have a good set of design rules to produce an integrated circuit. They're hopeful about this step, since the technique uses well-established CMOS technology and there are many tools available.

      Their next targets are things like gyroscopes and accelerometers. I'd say on the whole this strikes me as realistic and non-sensational. But if anybody knows better, I'd like to hear why.

      • by fermion ( 181285 )
        So, 25 years ago or so, one of the researchers in the lab I worked in was really into this. I think he came from AT&T. Anyway, he wanted to put vacuum tubes on a substrate. He wanted to make microlevers and the like, the predecessors of what we now know as nanomachines. The microlevers have happened, and we are getting some very tiny machines. The vacuum tubes are another story. From what I have seen recently, the terahertz problem is solved or is pretty much solved. Labs across the country are working in
    • by Viol8 ( 599362 ) on Tuesday June 24, 2014 @04:59AM (#47304471) Homepage

      Stick her in front of a mike, then tell her no more drugs and press record. That would have gotten you pretty close to that frequency range.

    • by gweihir ( 88907 )

      I am willing to predict that this will not happen as a computing element. Computing elements have been limited by interconnect, clock distribution, and the like for quite a while now. You cannot do longer traces in the GHz range unless you spend an inordinate amount of chip area on them. For analog, things are different, as you have few elements, and there this tech may be interesting. But for digital, it is wholly irrelevant.

  • by account_deleted ( 4530225 ) on Tuesday June 24, 2014 @02:41AM (#47304139)
    Comment removed based on user account deletion
    • Isn't this only a problem for branching? If you have a linear set of instructions to queue, isn't it possible to start processing the next instruction while the previous one is still propagating across the chip, assuming the chip is laid out so that this kind of processing works and one instruction won't complete more slowly than the one following it? Sure, it would have some downsides: branching would be expensive, and out-of-order execution might be difficult. But couldn't it work in theory?

      • It's not just branching. You also run into difficulties when a following instruction needs to use the results of the preceding one.

        • You also run into difficulties when a following instruction needs to use the results of the preceding one.

          If a scheduler foresees a pipeline bubble due to latency of the ALU, and data forwarding [wikipedia.org] is not enough to resolve it, the scheduler could feed the ALU a mix of instructions from two threads. This sort of simultaneous multithreading appears in Intel's Hyper-Threading Technology and AMD's "modules", and it's been around since the "barrel processor" architecture [wikipedia.org] of the I/O processor in the CDC 6000 mainframe.
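
          A minimal sketch of that round-robin idea in Python, with made-up instruction strings and a single issue slot per cycle; real schedulers also track register dependencies, but this shows how rotating between threads naturally spaces out each thread's instructions enough to cover a multi-cycle ALU latency:

            from collections import deque

            # Toy barrel scheduler: one instruction issues per cycle,
            # rotating through the threads, so consecutive instructions
            # from the same thread end up several cycles apart.
            threads = [
                deque(["add r1,r2", "mul r3,r1", "sub r4,r3"]),  # thread 0
                deque(["add r5,r6", "mul r7,r5", "sub r8,r7"]),  # thread 1
            ]

            cycle = 0
            while any(threads):
                q = threads[cycle % len(threads)]  # round-robin choice
                if q:
                    print(f"cycle {cycle}: issue {q.popleft()}")
                cycle += 1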

      • You sound like an Intel engineer back when the Pentium 4 CPU's NetBurst architecture [wikipedia.org] was the next big thing. Yes, pipelining exists. Yes, branches stall it. Yes, the processor ends up forfeiting a lot of work (and a lot of power and heat) when it mispredicts a branch. There's a reason Intel decided to base the Core architecture on P6 (Pentium II/III family) rather than NetBurst.
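
        Back-of-the-envelope arithmetic for why deep pipelines pay dearly for mispredicted branches; the 20% branch mix and 95% predictor accuracy below are illustrative assumptions, and the stage counts are only rough figures for P6, Willamette, and Prescott:

          # Effective cycles per instruction with misprediction stalls:
          #   CPI = CPI_base + branch_fraction * miss_rate * flush_penalty
          # where the flush penalty is roughly the pipeline depth.
          def effective_cpi(cpi_base, branch_fraction, miss_rate, depth):
              return cpi_base + branch_fraction * miss_rate * depth

          for depth in (12, 20, 31):  # ~P6, ~Willamette, ~Prescott
              print(f"{depth} stages: CPI ~ {effective_cpi(1.0, 0.20, 0.05, depth):.2f}")

        The deeper the pipeline, the more work is forfeited per mispredict, which is a large part of why NetBurst's clock-speed bet didn't pay off.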
        • Oblig: ... and it didn't even occur to you to warn *them* about 9/11 ? Sheesh. You could have saved us all a LOT of sorrow.

          • by tepples ( 727027 )
            For one thing, it's pretty hard to warn them about a future disaster if they left a voice mail.
    • by Chatterton ( 228704 ) on Tuesday June 24, 2014 @03:49AM (#47304307) Homepage

      Asynchronous designs are faster (~3x) and consume less energy (~2x), but they need an overhaul of the production process that is deemed too costly. Perhaps this technology could make it interesting again. (Source [columbia.edu])

      • by SuricouRaven ( 1897204 ) on Tuesday June 24, 2014 @04:16AM (#47304377)

        Not the production process so much as the design process. It'd mean starting over from scratch with a whole new architecture, redoing decades of work in hardware and software.

        • by wjcofkc ( 964165 )

          It'd mean starting over from scratch with a whole new architecture, redoing decades of work in hardware and software.

          So? I would say that is bound to happen eventually anyhow. Traditional integrated circuits are quickly on their way to becoming a stick in the mud. Something fundamentally different will have to replace them eventually.

        • HP is already doing that with their memristor tech.
        • Not the production process so much as the design process. It'd mean starting over from scratch with a whole new architecture, redoing decades of work in hardware and software.

          Presumably the hardware and software you're referring to are the hardware that manufactures the chips and the software used to design them, considering that the asynchronous processor that was "faster (~3x) and consume less energy (~2x)" was an "asynchronous, Pentium-compatible test chip that ran three times as fast, on half the power, as its synchronous equivalent." The asynchronous processors themselves don't have to have a shiny new instruction set architecture. (The original PDP-10 KA10 processor [bitsavers.org]

    • but that with increasing clock speed the size of your chip is limited (as electricity can only travel so far in a given amount of time) -> can't keep your chip synchronized -> need to think of new ways to sync everything / whether there are alternatives.

      I don't see why anything new is required. With today's design, bits are shifted from one section to the next on each clock pulse (or some multiple of the clock pulse, which just means that the internal clock is faster than the external clock).

      Sure, the timing might have to be adjusted here and there. But you're still just shifting electrons short distances from one pulse to the next. If your chip die is 1" across, electrons can travel the whole width at about 10GHz. Since they seldom go a tiny fraction o

    • by lowen ( 10529 ) on Tuesday June 24, 2014 @10:22AM (#47306381)

      One of the problems with increasing clock speed is that gate capacitance and the RC time-constant charging curve cause the switching FETs to operate in the linear region, making power dissipation go up with clock speed. This is why a decrease in process size has typically yielded a corresponding decrease in power dissipation at a given clock speed.

      If you make the capacitance smaller, you can increase the switching speed: capacitance decreases with the square of the feature size (gate capacitance depends on gate area), whereas resistance increases inversely with feature width, assuming the feature depth doesn't change (resistance depends on cross-sectional area).
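
      A toy version of that scaling argument, taking the stated assumptions at face value (capacitance ~ feature size squared, resistance ~ 1/width, depth fixed), under which the RC delay scales linearly with feature size:

        # C ~ s^2 (gate area), R ~ 1/s (width, fixed depth), so the
        # RC time constant scales linearly with feature size s.
        def rc_delay(s, c0=1.0, r0=1.0):
            c = c0 * s**2  # capacitance ~ area
            r = r0 / s     # resistance ~ 1/width
            return r * c   # RC ~ s

        for s in (1.0, 0.5, 0.25):  # relative feature sizes
            print(f"feature size {s}: relative RC delay {rc_delay(s):.2f}")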

      Another poster has already mentioned asynchronous designs, so I'll pass on that particular nuance.

      But clock propagation is a serious issue, and I can see a vacuum transistor improving this considerably.

      Now, figuring out how far a wavefront will propagate in some period of time isn't too hard.

      Undoped silicon has a relative permittivity of 11.68; the reciprocal of the square root of the relative permittivity is the velocity factor of a particular dielectric; for undoped silicon that's about 30% of c. Silicon dioxide, as used for most of the insulation on the typical MOSFET design, has a relative permittivity of 3.9 and thus a VF of about 51%. On a stripline laid on silicon dioxide (silica glass) the velocity of propagation is about 153 million meters per second, or 153 meters per microsecond or 153 millimeters per nanosecond or 153 microns per picosecond. 153 microns is a bit larger than the cladding on a typical fiber optic strand (most have a cladding diameter of 125 microns; OM1 multimode is 62.5 micron core/125 micron cladding, OM4 is 50 micron core/125 micron cladding, and single-mode is 8 micron core/125 micron cladding, for comparison). That's best case propagation time.

      Now, to see how this translates to something of today, at least one of the models of the latest Haswell-DT Core i7 chips has a die size of 177 square millimeters. The chip is not square, and seems to be about a 4:1 rectangle in photos, which would yield about a 6.5 mm by 27.25mm die (yes, I know that gives 177.125 square millimeters; close enough).

      Now, if a clock signal needs to go straight across the narrow dimension, it will take about 42.5 picoseconds, assuming transmission across silicon dioxide alone. Propagation in the long direction would take about 178 picoseconds, with the same assumption. The published top speed of this processor is, at the time of this writing, about 4.5GHz (I know it's a bit higher, but that's a moving target). This is a 222-picosecond clock period: easily doable in the short dimension, a bit more difficult in the long dimension, and probably already requiring some asynchronous elements and delay compensation. If you limit solely on clock propagation time, and are able to work in a slip of a full clock cycle, the long dimension will give you a limit of a bit over 5.5GHz; the short dimension will similarly give you a limit of 23.5GHz.

      That's drastically oversimplified; each gate has its own propagation delay that must be figured in, and there are four cores (which makes it pretty understandable why the chip would have a 4:1 die-dimension ratio, no?). A 20% clock delay factor will allow, with care, a good chance for synchronous operation (42.5 is pretty close to 20% of 222), but that assumes straight clock traces (and they are not just straight across the chip).

      Food for thought.
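
      The arithmetic above checks out; here it is as a quick script, using only values quoted in the comment (the SiO2 permittivity, the estimated 6.5mm x 27.25mm die, and a 4.5GHz clock):

        import math

        c = 299_792_458.0           # speed of light, m/s
        eps_r = 3.9                 # relative permittivity of SiO2
        v = c / math.sqrt(eps_r)    # ~1.52e8 m/s, roughly 153 um/ps

        dim_short, dim_long = 6.5e-3, 27.25e-3  # die dimensions, m
        t_short, t_long = dim_short / v, dim_long / v
        period = 1 / 4.5e9                      # clock period, s

        print(f"velocity factor: {1 / math.sqrt(eps_r):.0%} of c")
        print(f"short: {t_short * 1e12:.1f} ps -> {1e-9 / t_short:.1f} GHz limit")
        print(f"long:  {t_long * 1e12:.1f} ps -> {1e-9 / t_long:.1f} GHz limit")
        print(f"4.5GHz clock period: {period * 1e12:.0f} ps")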

  • by NixieBunny ( 859050 ) on Tuesday June 24, 2014 @02:42AM (#47304143) Homepage
    I work in a lab where we make radio receivers that work at frequencies around 460 GHz. As it is, we have to use a mixer diode to convert to a lower frequency (10 GHz) before amplifying the signal. This technology would be well suited to this application, provided that the noise is low enough. We already cool the mixer to 4K in a vacuum chamber.
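
    For readers unfamiliar with the downconversion step: the mixer multiplies the incoming signal by a local oscillator, producing sum and difference frequencies, and the receiver keeps the difference. A scaled-down sketch, where tones at 46 and 45 arbitrary units stand in for a 460GHz signal and a 450GHz local oscillator (sampling a real 460GHz waveform isn't practical here):

      import numpy as np

      # Heterodyne mixing: cos(a)*cos(b) contains components at the
      # difference (a-b) and sum (a+b) frequencies; the difference
      # tone is the intermediate frequency the receiver amplifies.
      f_rf, f_lo = 46.0, 45.0
      fs = 1000.0                  # sample rate, well above 2*(f_rf+f_lo)
      t = np.arange(0, 10, 1 / fs)

      mixed = np.cos(2 * np.pi * f_rf * t) * np.cos(2 * np.pi * f_lo * t)

      spectrum = np.abs(np.fft.rfft(mixed))
      freqs = np.fft.rfftfreq(len(t), 1 / fs)
      print(freqs[spectrum > 0.25 * spectrum.max()])  # [1.0, 91.0]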
    • Vacuum micro/nano-electronics are interesting for RF/mm-wave applications, as the transport can be ballistic, which could theoretically enable ultra-high frequencies as the size is scaled down.

      I haven't yet found a paper for the 460GHz claim in the IEEE Spectrum article so I'm not sure exactly which figure of merit they have picked for that claim, but rest assured that their comparisons to other transistor technologies are highly flawed.

      InP devices for example already operate up to 1THz power gain cutoff freque

      • by MattskEE ( 925706 ) on Tuesday June 24, 2014 @04:21AM (#47304395)

        I just noticed another disingenuous aspect to their claim - they say that because this operates at "atmospheric" pressure it will be more reliable than vacuum tubes of yore.

        But these vacuum FETs are filled with 1 atmosphere of helium, so the partial-pressure difference with the outside world for all other gases will still be the same as if the device were operating with a full vacuum, and it would require the same long-term hermetic packaging as a vacuum tube. It relies on helium to extend the mean free path of the electrons, though to be fair, as dimensions are scaled down further from the current 100nm to, say, 20nm, perhaps neither helium nor vacuum would be required. Still, it seems to be a very misleading claim.
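
        A rough kinetic-theory check on the mean-free-path point; this hard-sphere estimate uses a textbook kinetic diameter for He-He collisions, and an electron, being effectively a point particle, travels several times farther between collisions than this figure:

          import math

          # Hard-sphere mean free path: kT / (sqrt(2) * pi * d^2 * p)
          k_B = 1.380649e-23  # Boltzmann constant, J/K
          T = 300.0           # room temperature, K
          d = 2.6e-10         # kinetic diameter of helium, m
          p = 101_325.0       # 1 atm, Pa

          lam = k_B * T / (math.sqrt(2) * math.pi * d**2 * p)
          print(f"mean free path ~ {lam * 1e9:.0f} nm")  # ~136 nm

        So even at a full atmosphere of helium, the ~100nm gap is comparable to or shorter than the distance between collisions, which is what makes ballistic transport plausible.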

        • by necro81 ( 917438 )
          The reason it was difficult to seal vacuum tubes was that you needed to get wire leads to pass through the glass tube (or its end cap). Sealing to glass is really tough. Sealing an electronics package isn't so tough.
      • Even silicon certainly operates in the multi-hundred-GHz range, not the 40GHz which is for some reason cited in the article. Using graphene as a point of comparison is somewhat laughable as graphene has yet to demonstrate any truly practical advantage over group-IV or III-V transistor technologies, and has never been close to beating other leading device technologies on clock speed despite heavy press coverage.

        Yeah the graphene comparison is spurious, except that it's a wider audience article and graphene has been getting inexplicably large amounts of press recently.

        • Yeah the graphene comparison is spurious, except that it's a wider audience article and graphene has been getting inexplicably large amounts of press recently.
          A fair point, but I still don't excuse them for being part of the graphene press problem instead of the solution.

          As for the other comparisons: what's the maximum speed of a MOSFET? You can get silicon BJTs into the hundreds of GHz, but I'm not sure about MOSFETs
          Maximum published speed I've seen for a Si N-MOSFET is around 450GHz at 32nm, not sure of t

  • Magic Smoke (Score:5, Funny)

    by niftydude ( 1745144 ) on Tuesday June 24, 2014 @03:59AM (#47304331)
    So in the future, you'll know your electronics are broken when magic smoke is sucked into the chip?
    • by Meneth ( 872868 )

      TFA says these transistors would be filled with helium gas, and if it gets replaced with other gases, the thing would quickly stop working due to ionization.

      So I guess there'd be magic smoke going both in and out of the chip.

  • Should this type of component be known as a "valvistor"?

  • Ahead of schedule. (Score:5, Interesting)

    by SuricouRaven ( 1897204 ) on Tuesday June 24, 2014 @05:31AM (#47304545)

    "It was a nice feeling to have a Microvac of your own and Jerrodd was glad he was part of his generation and no other. In his father's youth, the only computers had been tremendous machines taking up a hundred square miles of land. There was only one to a planet. Planetary ACs they were called. They had been growing in size steadily for a thousand years and then, all at once, came refinement. In place of transistors had come molecular valves so that even the largest Planetary AC could be put into a space only half the volume of a spaceship."

    - Isaac Asimov, The Last Question, 1956.

  • Today, Moore's law is an interconnect problem. The switching elements are pretty unimportant for it.

  • This looks like the ideal technology for electronics that have to work in extremes of temperatures or high radiation environments. I'm surprised the military and aerospace industries aren't jumping all over this.

    • Gases ionize when hit by decay particles and radiation. A transistor with a high probability of spontaneously turning on in radioactive environments sounds dangerous to me.

  • Discovered? (Score:4, Informative)

    by mark_reh ( 2015546 ) on Tuesday June 24, 2014 @06:27AM (#47304697) Journal

    Natural things and phenomena are "discovered". Transistors were invented after a lot of hard work. By engineers.

    • I was going to give them the benefit of the doubt that they were talking about the phenomenon, using the end result as a convenient name. But no - that was apparently discovered between 1873 and 1884.

      Well... even the first triode with a hard vacuum was back in 1915 (says Wikipedia, with no citation). I'm thinking 1947 is merely the first commercial use of hard vacuum tubes on a wide consumer-market scale. That's really the only thing I can see that lines up with 1947.

      • by SEE ( 7681 )

        The 1947 is about the transistor, not the vacuum tube. The Bardeen-Brattain-Shockley transistor was developed in 1947, and it earned the three the 1956 Nobel Prize in Physics.

        • And yes - I was thinking triode and not transistor. I have no idea how I spent that much time writing without realizing that.

  • by Murdoch5 ( 1563847 ) on Tuesday June 24, 2014 @06:59AM (#47304799) Homepage
    A law needs to stand on its own without the need for external help; if Moore's law breaks, then it's not a law.
    • by bondsbw ( 888959 )

      Does that mean Obamacare isn't a law?

    • Moore's Law has never been a law and nobody treats it as one.

      It started off as an observation which happened to basically be correct. Then it became more of a roadmap, with industry using it to set technology targets and allocating R&D resources so that they can continue following Moore's "Law".

  • I'm still waiting for my memristor computer...

  • Julius Edgar Lilienfeld patented a FET in 1925. The FET is the type of transistor used in all modern CPUs.

  • ...when will this result in a 100W Marshall head on a chip?

    (Why yes, I am a guitar player! Thanks for asking.)

    • Hah, my second thought when reading this article was: would it enable solid-state guitar amps to sound more like tube amps? Upon further reflection, I think most likely not. I'm not very happy with modeling, though; I'm still a tube head.

      My *first* thought was, transistors were "discovered"? What, were they found nesting in a rock by a lake somewhere, pre-assembled? Nomenclature like this leads me to believe the submitter doesn't believe in any kind of IP whatsoever. They were invented. What was
  • ... the re-appearance of magic eye tubes [wikipedia.org] on my computing equipment.

  • Aren't there still going to be problems scaling this thing? It seems like they're talking about something an order of magnitude or more larger than today's transistors, and that's going to limit the complexity of a circuit.

"Aww, if you make me cry anymore, you'll fog up my helmet." -- "Visionaries" cartoon

Working...