Power Hardware Technology

Can Transistors Be Made To Work When They're Off? 89

An anonymous reader writes "Engineers at the Belgian research institute IMEC are looking at the use of silicon transistors in the sub-threshold region of their operation as a way of pursuing ultra-low power goals. A chip the engineers are designing for biomedical applications could have blocks designed to operate at 0.2 or 0.3 volts, researchers said, according to EE Times. The threshold voltage is the point at which the transistor nominally switches off. Operating a transistor when it is 'off' would make use of the leakage conduction that is normally seen as wasted energy, according to the article."
This discussion has been archived. No new comments can be posted.

  • Yes and No (Score:2, Interesting)

    by ModernGeek ( 601932 )
    I believe that if you tried to utilize the leakage current, it would only add that much more resistance, requiring more current to stay "off". This would be a good way to get a government grant for publishing some R&D, but in reality I imagine the complexity this would add to a device would outweigh any benefits. Plus, most transistors that just sit there in the off state are off because the entire device is off and doesn't require any current. The energy put into thinking about this would far outweigh any perceived benefits.
    • Re:Yes and No (Score:5, Insightful)

      by NevarMore ( 248971 ) on Sunday June 13, 2010 @07:09PM (#32559938) Homepage Journal

      The energy put into thinking about this would far outweigh any perceived benefits.

      Indeed. All scientific research is utterly useless and wasted time unless it has immediate and foreseeable tangible benefits.

      • Re: (Score:1, Insightful)

        by Anonymous Coward

        Indeed. All scientific research is utterly useless and wasted time unless it has immediate and foreseeable tangible benefits.

        I have to disagree. The pursuit of knowledge can be rewarding in and of itself. Besides, you never know when some discovery might prove useful. For example, Boolean logic, which is used to design transistor circuits, was invented in the 19th century. It was pretty useless until transistors came along in the 1950s.

      • by Anonymous Coward

        Without fundamental research, no scientific research resulting in tangible results would be possible.
        Fundamental research is the basis of all scientific progress.

    • Re:Yes and No (Score:5, Insightful)

      by Anonymous Coward on Sunday June 13, 2010 @08:02PM (#32560244)

      You're just a fucking ignorant moron.

      This has nothing to do with "green" propaganda and raining on your political masturbation parade, and everything to do with looking at ways to overcome the problem that static power dissipation from die shrinkage poses to further technology advances, you fuck.

      The summary is only using "off" in an informal sense. In an idealized textbook transistor model, when the transistor is "off" or in cutoff, it is off completely. But in reality there is leakage, and so this "cutoff" region actually has some more interesting things going on than a fucking tool like you would apparently understand. With large transistors in CMOS configurations, there is virtually no leakage and no static dissipation. As features have shrunk, leakage has become a full-blown obstacle to technological advancement. It isn't just about treehugging: once you get to the point where you have tons of transistors in a small space and you can't remove the waste heat, you've got a major practical problem.

      Get a clue, you useless fucktwit.

      • Mod parent UP!!!
      • Re: (Score:3, Interesting)

        by operagost ( 62405 )
        It's too bad I just used up my mod points, because while the parent post makes a few good points, I could not in good conscience mod up a post so full of profanity, ad hominems, red herrings, and insults, as others here have done. It's difficult to read, and letting anger overwhelm a perfectly good argument doesn't serve anyone.
    • by Bender_ ( 179208 )

      This has nothing to do with "leakage" current. As basic field-effect transistor theory will teach you, there is a region below the threshold voltage [wikipedia.org] where the current depends exponentially on the gate bias. Yes, exponentially, instead of linearly or quadratically as in the "on" region. This means that small changes in the gate bias allow for a huge change in current. The drawback here is that we are talking about extremely low currents; in CMOS logic this equals lower operating frequency. (A numerical sketch of the exponential dependence follows below.)

      This idea here
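
      A hedged numerical sketch of the exponential dependence described above; every device parameter here (VT, n, I0, K) is an illustrative assumption, not a value from the article. Below threshold, 100 mV of gate bias changes the current by more than an order of magnitude (about one decade per 2.3*n*kT/q, roughly 83 mV with these numbers):

        import math

        VT = 0.45      # threshold voltage, volts (assumed)
        UT = 0.0259    # thermal voltage kT/q at 300 K, volts
        n  = 1.4       # subthreshold slope factor (assumed; typically 1.2-1.6)
        I0 = 1e-7      # current extrapolated at VGS = VT, amps (assumed)
        K  = 2e-4      # square-law coefficient, A/V^2 (assumed)

        def drain_current(vgs):
            """Piecewise toy model: exponential below VT, quadratic above."""
            if vgs < VT:
                return I0 * math.exp((vgs - VT) / (n * UT))  # subthreshold
            return I0 + 0.5 * K * (vgs - VT) ** 2            # strong inversion

        for vgs in (0.20, 0.30, 0.40, 0.50, 0.70):
            print(f"VGS = {vgs:.2f} V -> ID = {drain_current(vgs):.3e} A")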

  • Not news (Score:4, Informative)

    by betterunixthanunix ( 980855 ) on Sunday June 13, 2010 @07:12PM (#32559944)
    I heard about this research topic over a year ago when I took VLSI. The main problem, as I understand it, is not building circuits that operate below the threshold voltage, but actually reading the output of those circuits.
    • Re: (Score:3, Insightful)

      No problem! See, you just add a higher voltage rail, and then use that to run the output transistors so you get readable outputs!

      Oh wait...

    • Re:Not news (Score:4, Interesting)

      by tehSpork ( 1000190 ) on Sunday June 13, 2010 @09:35PM (#32560696)
    Same here. I took VLSI in Winter 2009, and we spent an inordinate amount of time studying and working on sub-threshold designs. Part of our final project (and final exam) was to produce a simulated and laid-out circuit using a sub-threshold supply. It's not very complicated: you just lower your clock speed and supply voltage, and most of your existing circuits work just fine. The major problem is that they then run at very low clock rates (kHz as opposed to MHz), which doesn't make too many people very excited.
    • Re: (Score:3, Informative)

      by nerdbert ( 71656 )

      It's been "not news" now for at least 20 years. Tsividis did significant work on subthreshold FETs for neural networks back in the '80s and early '90s. Subthreshold design isn't common, but it's by no means a new field.

      Subthreshold has its place, but it's not a pleasant place to work. Mismatch between transistors is about 4x higher, gate leakage is a huge factor below 130nm, the models you get from your foundry aren't reliable or accurate, etc.

      I make subthreshold circuits all the time when I need bandwidth and

  • I know when I'm off there's no chance of me workin...
  • by dpilot ( 134227 ) on Sunday June 13, 2010 @07:16PM (#32559982) Homepage Journal

    As we've scaled deep into the submicron region, it's been getting harder and harder to turn the devices really "off". Leakage current has been rising and has been quite noticeable for several generations now.

    So the idea of doing useful work with subthreshold current sounds neat
    (OK, I just went and read TFA.)

    Still sounds neat, but...

    In deep submicron, part of the reason for the subthreshold leakage problem is control of Leff (the effective channel length of the FETs). There's a thing called "line length variation," which means that channel lengths in different parts of the chip will differ, sometimes subtly, sometimes not so subtly. Threshold voltage (Vt) is a strong function of channel length, making subthreshold leakage also a strong function of channel length. Performance characteristics will vary widely across the chip, likely much more than in conventional transistor operation.

    This will make it tough to scale down (in feature size), scale up (in chip size), and manufacture. (A toy model of the Vt/leakage sensitivity is sketched below.)
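
    A toy model of the sensitivity described above; the roll-off constants and channel lengths are invented for illustration, not real process data. A few nanometers of channel-length variation shift Vt by millivolts, and subthreshold leakage responds exponentially to that shift:

      import math

      UT, n = 0.0259, 1.4   # thermal voltage kT/q at 300 K, slope factor (assumed)

      def vt(L_nm, vt_long=0.35, dvt=0.25, l0=20.0):
          """Toy Vt roll-off: threshold drops as channel length shrinks (assumed model)."""
          return vt_long - dvt * math.exp(-L_nm / l0)

      def rel_leakage(L_nm, L_ref=45.0):
          """Leakage relative to nominal length, using Ioff ~ exp(-Vt/(n*UT))."""
          return math.exp((vt(L_ref) - vt(L_nm)) / (n * UT))

      for L in (35, 40, 45, 50, 60):
          print(f"L = {L} nm: Vt = {1000 * vt(L):.0f} mV, leakage x{rel_leakage(L):.2f}")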

    • "Off and off-er" or "off and almost-as-off"?

      Reminds me of: "It depends on what the definition of the word 'is' is."

      Which of course leads to the archetypal:

      "When I use a word," Humpty Dumpty said in rather a scornful tone, "it means just what I choose it to mean -- neither more nor less."
    • In deep submicron, part of the reason for the subthreshold leakage problem is control of Leff (the effective channel length of the FETs). There's a thing called "line length variation," which means that channel lengths in different parts of the chip will differ, sometimes subtly, sometimes not so subtly. Threshold voltage (Vt) is a strong function of channel length, making subthreshold leakage also a strong function of channel length. Performance characteristics will vary widely across the chip, likely much more than in conventional transistor operation.

      This will make it tough to scale down (in feature size), scale up (in chip size), and manufacture.

      Right, hence nobody has done it in any quantity yet. I assume this research will find some balancing point where process variation's effect is negligible. For example, if the range for a strong '0' is 0-0.04V and a strong '1' is >0.26V, then you just need to ensure that process variation puts the subthreshold level somewhere between 0.04V and 0.26V, preferably with margin to spare. If process control can keep that subthreshold level between 0.1V and 0.2V, that might be good enough. (A rough sanity check of this margin argument is sketched below.)

      It will be interestin
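
      A rough Monte Carlo sanity check of the margin argument above; the band edges are the hypothetical numbers from the comment, and the Gaussian spread is an assumed stand-in for process variation:

        import random

        STRONG_0_MAX = 0.04   # volts: at or below this reads as a solid '0'
        STRONG_1_MIN = 0.26   # volts: at or above this reads as a solid '1'
        TARGET = 0.15         # nominal mid-band level the process aims for (assumed)
        SIGMA = 0.04          # assumed 1-sigma process spread, volts

        random.seed(1)
        levels = [random.gauss(TARGET, SIGMA) for _ in range(100_000)]
        bad = sum(not (STRONG_0_MAX < v < STRONG_1_MIN) for v in levels)
        print(f"{bad} of {len(levels)} levels fall outside the safe band")
        # Tightening SIGMA to 0.02 (keeping levels roughly within 0.1-0.2 V)
        # drives the violation count to essentially zero.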

  • by overshoot ( 39700 ) on Sunday June 13, 2010 @07:20PM (#32560002)
    Transistors in weak inversion have higher transconductance-to-current ratios than transistors in strong inversion do. Using MOS devices in that mode is standard operating practice for a whole host of applications. (The numbers below show why.)

    Notable among those applications are ... wristwatch chips. Eric Vittoz has made a career of this mode of operation. You can't set foot in the subject without running across patents, books, articles -- hell, probably recipes -- by him going back 40 years.
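
    The numbers behind that transconductance-to-current advantage, in rough textbook form (the slope factor n of about 1.4 and the 200 mV overdrive are assumed for illustration):

      % Transconductance efficiency, weak vs. strong inversion:
      \left.\frac{g_m}{I_D}\right|_{\mathrm{weak}} = \frac{1}{n\,kT/q}
        \approx \frac{1}{1.4 \times 26\,\mathrm{mV}} \approx 28\ \mathrm{V}^{-1}
      \qquad
      \left.\frac{g_m}{I_D}\right|_{\mathrm{strong}} = \frac{2}{V_{GS}-V_T}
        \approx \frac{2}{200\,\mathrm{mV}} = 10\ \mathrm{V}^{-1}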

  • The thing that popped into my mind while reading this was the possibility of using this to operate the entire system at this level (rather than just making use of leakage). If it could perform fast enough, it could massively reduce power consumption, and with it the need for cooling.
  • by Anonymous Coward on Sunday June 13, 2010 @07:32PM (#32560076)

    A male genitalia heat-extraction device. Power your devices and raise the dismally low Western-world sperm count in one go.
    Not patented yet because I don't know how to make one. Someone do it!

  • Sounds awesome. Good luck, scientists!
  • by Theovon ( 109752 ) on Sunday June 13, 2010 @07:34PM (#32560092)

    Another commenter is correct in pointing out that what they're doing is using leakage current. When we measure power dissipation, we count two things: (a) dynamic power, which is consumed when a transistor switches and is a function of frequency, voltage, and temperature, and (b) static (leakage) power, which flows continuously and is a function of voltage and temperature. At 180nm, the ratio of dynamic to static power was about 1000:1. Leakage started to get noticed around 90nm and became a problem at 65nm. Now, at 45nm and 32nm, leakage is about half of total power usage. The best way to lower power is to reduce voltage, but this kills performance scaling. Scaling down transistors reduces dynamic power but increases relative static power, which is why processors like the Core i7 use not just clock gating but POWER gating, dynamically, at a functional-unit level.

    Regarding subthreshold: as you lower voltage, power goes down. The problem is that transistors also get slower. Above threshold, power goes down faster than speed, so if you're using a transistor with a threshold voltage of 150mV at a supply voltage of 300mV, you get something like 1/100th the power dissipation at 1/10th the speed, which means you use one tenth the energy to perform a given task. As you lower the supply voltage below threshold, the transistors get slower faster than the power goes down, so total energy actually goes up below a certain point. Near the threshold voltage there is a supply voltage where energy per operation reaches a minimum. (A back-of-envelope sketch of this minimum-energy point follows below.) You use near-threshold or sub-threshold operation depending on how much you care about speed. Also, things behave quite differently at low voltages, so you have to change all your design techniques.

    One of the problems with near- and sub-threshold design is that you no longer actually know what your threshold voltages are. This is called process variation. The transistors are so tiny that you get on the order of tens of dopant atoms per transistor. The doping process is highly random, so you get wide variance in threshold voltage (and in effective channel length, too), meaning that two transistors next to each other have different switching characteristics. This is already a major problem at 32nm, forcing unfortunately large supply voltage margins to avoid timing-related errors, which translates into excessive power usage. It's an even bigger problem when the supply is near the threshold (above or below), because a transistor's speed and power are both functions of the difference between supply voltage and threshold voltage. If the supply is 300mV, then a transistor with Vth=130mV is going to be way faster (and way leakier) than a transistor on the same die with Vth=170mV. Of course, both were designed to have Vth=150mV, but you can't control it that well.

    My area of research involves coping with the 5X decrease in reliability at NTC (near-threshold computing), and I'll talk more about it when my papers are accepted. :)
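
    A back-of-envelope sketch of the minimum-energy-point behavior described above. Every constant is an illustrative assumption (a toy 10,000-gate block, logic depth 50, 10% activity), not data from the comment or the article; the point is only that energy per cycle bottoms out near the threshold and rises on both sides:

      import math

      VT, UT, n = 0.30, 0.0259, 1.4   # threshold, kT/q at 300 K, slope factor (assumed)
      C = 1e-15                        # switched capacitance per gate, farads (assumed)
      I0 = 1e-6                        # drain current at VGS = VT, amps (assumed)
      K = 2e-4                         # square-law coefficient, A/V^2 (assumed)
      GATES, DEPTH, ACTIVITY = 10_000, 50, 0.1

      def on_current(v):
          """Toy I-V: exponential below VT, quadratic above (matched at VT)."""
          if v < VT:
              return I0 * math.exp((v - VT) / (n * UT))
          return I0 + K * (v - VT) ** 2

      def energy_per_cycle(vdd):
          t_cycle = DEPTH * C * vdd / on_current(vdd)        # clock period
          e_dyn = ACTIVITY * GATES * C * vdd ** 2            # switching energy
          e_leak = GATES * on_current(0.0) * vdd * t_cycle   # leakage over the cycle
          return e_dyn + e_leak

      for vdd in (0.15, 0.20, 0.25, 0.30, 0.40, 0.60, 0.90):
          print(f"Vdd = {vdd:.2f} V -> {energy_per_cycle(vdd):.3e} J/cycle")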

  • Nothing new (Score:4, Interesting)

    by eudean ( 966608 ) on Sunday June 13, 2010 @07:45PM (#32560152)
    As far as I know (i.e., according to some professors I've spoken to), transistors in devices with extremely long battery lives, such as hearing aids and watches, are typically operated in sub-threshold in order to conserve power. Of course, these devices are also typically not speed-critical. A lot of biomedical applications probably fall under the umbrella of requiring low power (for battery life and/or thermal reasons) and not requiring high speed, making the application a natural fit for sub-threshold operation.
  • Been around for ages (Score:2, Informative)

    by AndOne ( 815855 )
    Using transistors in sub-threshold modes has been around for ages. Carver Mead proposed their use for modeling neurons in silicon ages ago, and others have since used these techniques for other low-power applications. See Delbruck (ETH Zurich), Boahen (Stanford/Penn), or Andreou (Johns Hopkins). Two of my undergrad profs did their thesis work at Georgia Tech using these techniques for neuromorphic engineering tasks as well.
  • This is nothing new (Score:4, Informative)

    by blue_moon_ro ( 973863 ) on Sunday June 13, 2010 @07:59PM (#32560220)
    This is nothing new; the behavior of CMOS transistors in the subthreshold region of operation has been known for years. I actually wrote a paper five years ago on a circuit using transistors in the subthreshold mode of operation. As always, there are trade-offs, the main one being that the frequency of operation is a lot lower than if the transistor were working in the normal region. The main advantages of this operating region are low power and the fact that the current-voltage law changes from quadratic in the regular operating region to exponential here, i.e. I ∝ e^[q(VGS-VT)/(n·kT)] (see Sedra & Smith or any other reference electronics book). So don't dream of your next low-power processor using this technology. It is better suited to analog applications (among the first that I remember are current multipliers and low-power current-mode analog circuits), and that seems to be how these folks at IMEC are actually using it.
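
    For reference, the standard weak-inversion relation the parent cites, in textbook form (the prefactor I_0 and slope factor n are process-dependent, and the drain term is often dropped for V_DS much larger than kT/q):

      % Subthreshold (weak-inversion) drain current:
      I_D \approx I_0 \, \exp\!\left(\frac{q\,(V_{GS}-V_T)}{n\,kT}\right)
            \left(1 - \exp\!\left(-\frac{q\,V_{DS}}{kT}\right)\right)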
    • Yep, that's exactly right, and I'm still amused by the everything-old-is-new-again aspect: the exponential behavior described here mirrors the characteristics of the old high-power bipolar technologies. So we need to dust off our old bipolar engineering texts to start working in the brave new world of low-power design.

      Seriously though, this is a niche analog technology for a small, but important market. I imagine it will always be the realm of small volume, high margin products.
      • by Bengie ( 1121981 )

        Kind of like how we went from serial to parallel and back to serial interfaces with computers? :P

  • Forget the P = NP question - does PNP = NPN?
  • Funny, but I was just reading an old radio magazine, circa 1938, where they were running 5 to 6 volts into the heater of a rectifier tube that normally needed 25. That's about 1/4 the voltage and roughly 1/16th the power, and the rectifier worked BETTER at detecting radio signals than at full voltage. Something complex about the diode work functions, one might suspect.

    Engineers have explored most corners of the performance envelope; there's nothing all that new under the sun.

  • The current into the base of a bipolar transistor has a logarithmic relationship to the BE voltage. That means it is only really off when the base voltage is exactly zero. This makes the article title BS.

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      The current into the base of a bipolar transistor has a logarithmic relationship to the BE voltage. That means it is only really off when the base voltage is exactly zero.

      These are MOS devices, which follow a square-law voltage/current relationship between gate and source above the threshold voltage, and an exponential one below it.

      I wrote my thesis on this stuff in 1994, using a textbook called "Analog VLSI" written by Carver Mead at Caltech. There are commercial cochlear implants available that use this technology. This has got to be the most non-news story I have come across in a long time.

    • Bipolar transistors have the additional advantage that there is no difficulty in controlling a threshold voltage; the equivalent (and less significant) problem is controlling current gain. Bipolars have the disadvantages that any active circuit must always draw current, and that bipolar logic is more complex than CMOS. One correction to your comment: the current in a bipolar transistor is exponentially related to Vbe, so performance degrades very rapidly with decreasing voltage. Multiply speed by about 0.02 for e
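
      For reference, the relation the parent invokes, in ideal Ebers-Moll form (I_S is the device's saturation current; second-order effects are ignored):

        % Ideal bipolar collector current vs. base-emitter voltage:
        I_C \approx I_S \left( \exp\!\left(\frac{q\,V_{BE}}{kT}\right) - 1 \right)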
  • Low power decisions (Score:3, Interesting)

    by PingPongBoy ( 303994 ) on Sunday June 13, 2010 @11:15PM (#32561130)

    The goal of low-power transistors is reasonable, but a new transistor design may be needed. The brain can do a lot of operations on little power, but in terms of clock speed the brain isn't that fast. A similar design may be good for low-power electronic decision-making: a massive number of circuits running at low frequency.

    • by Talisein ( 65839 )

      A more important property of the brain is that it is not as precise as a computer. The brain, and many other biological computers, perform their calculations in an analog manner that usually gets them "close enough" to the right answer. Digital designers think they need every bit of precision in a 64-bit floating-point computation, and they engineer the circuit to require it -- this involves a lot of over-engineering. Of course, the really cool thing is that biology has "digital" circuits as well, when it

  • This technique will probably require much tighter control over some figures of merit (specs) that are traditionally not that tightly controlled. For example, junction capacitance and resistance, as well as the thickness of the junctions themselves, will have to be held to a level of process-control precision that will likely keep this a laboratory curiosity for a while.

"I got everybody to pay up front...then I blew up their planet." "Now why didn't I think of that?" -- Post Bros. Comics

Working...