Can Transistors Be Made To Work When They're Off?

An anonymous reader writes "Engineers at the Belgian research institute IMEC are looking at the use of silicon transistors in the sub-threshold region of their operation as a way of pursuing ultra-low power goals. A chip the engineers are designing for biomedical applications could have blocks designed to operate at 0.2 or 0.3 volts, researchers said, according to EE Times. The threshold voltage is the point at which the transistor nominally switches off. Operating a transistor when it is 'off' would make use of the leakage conduction that is normally seen as wasted energy, according to the article."
Comments Filter:
  • Not news (Score:4, Informative)

    by betterunixthanunix ( 980855 ) on Sunday June 13, 2010 @08:12PM (#32559944)
    I heard about this research topic over a year ago when I took VLSI. The main problem, as I understand it, is not building circuits that operate below the threshold voltage, but actually reading the output of those circuits.
  • by overshoot ( 39700 ) on Sunday June 13, 2010 @08:20PM (#32560002)
    Transistors in weak inversion have higher transconductance/current ratios than transistors in strong inversion do. Using MOS devices in that mode is standard operating practice for a whole host of applications.

    Notable among those applications are ... wristwatch chips. Eric Vittoz has made a career of this mode of operation. You can't set foot in the subject without running across patents, books, articles -- Hell, probably recipes by him going back 40 years.
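The gm/Id comparison above can be checked with a few lines. This is only a sketch: the slope factor, thermal voltage, and overdrive voltage below are illustrative assumptions, not measured device data.

```python
# Quick check of the gm/Id comparison above.  The slope factor, thermal
# voltage, and overdrive are illustrative assumptions.
N, UT = 1.5, 0.026   # subthreshold slope factor, kT/q at room temp (V)

# Weak inversion: I ~ exp(Vgs/(n*kT/q))  =>  gm/Id = 1/(n*kT/q)
gm_id_weak = 1 / (N * UT)

# Strong inversion (square law): I ~ (Vgs - Vth)^2  =>  gm/Id = 2/Vov
VOV = 0.2            # assumed overdrive voltage (V)
gm_id_strong = 2 / VOV

print(f"gm/Id weak: {gm_id_weak:.1f} /V, strong: {gm_id_strong:.1f} /V")
```

With these assumed numbers, weak inversion gives roughly 25.6 per volt versus 10 per volt in strong inversion, which is the sense of the claim above.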

  • by Theovon ( 109752 ) on Sunday June 13, 2010 @08:34PM (#32560092)

    Another commenter is correct in pointing out that what they're doing is using leakage current. When we measure power dissipation, we count two things: (a) dynamic power, which is consumed when a transistor switches and is a function of frequency, voltage, and switched capacitance, and (b) static (leakage) power, which flows all the time and is a function of voltage and temperature. At 180nm, the ratio of dynamic to static power was about 1000:1. Leakage started to get noticed at around 90nm and became a problem at 65nm. Now, at 45nm and 32nm, leakage is about half the total power usage. The best way to lower power is to reduce voltage, but this kills performance scaling. Scaling down transistors reduces dynamic power but increases relative static power, which is why processors like the Core i7 use not just clock gating but POWER gating, dynamically, at a functional-unit level.
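The two power components described above can be sketched numerically. The chip numbers here are made-up illustrations chosen to land near the roughly-half-leakage situation mentioned, not real process data.

```python
# Sketch of the two power components described above.  The chip numbers
# are made-up illustrations, not real process data.

def dynamic_power(c_switched, vdd, freq, activity=0.1):
    """Switching power: alpha * C * Vdd^2 * f."""
    return activity * c_switched * vdd ** 2 * freq

def static_power(vdd, i_leak):
    """Leakage power: Vdd * I_leak, burned even when nothing switches."""
    return vdd * i_leak

# Hypothetical modern chip: 1nF switched capacitance, 1GHz, 50mA leakage.
p_dyn = dynamic_power(c_switched=1e-9, vdd=1.0, freq=1e9)
p_stat = static_power(vdd=1.0, i_leak=0.05)
print(f"dynamic: {p_dyn:.2f} W, static: {p_stat:.2f} W")
```

Note that only the dynamic term depends on frequency, which is why leakage keeps costing you even in idle blocks (and why power gating, not just clock gating, helps).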

    Regarding subthreshold: as you lower voltage, power goes down. The problem is that transistors also get slower. Above threshold, power falls faster than speed, so if you run a transistor with a threshold voltage of 150mV at a supply voltage of 300mV, you get roughly a hundredth of the power dissipation at a tenth of the speed, which means you use about a tenth of the energy to perform a given task. As you lower the supply voltage below threshold, the transistors slow down faster than the power drops, so total energy actually goes back up once you go below a certain point. Somewhere near the threshold voltage (it can fall on either side of it) there is a supply voltage where energy per operation is at a minimum. You use near-threshold or sub-threshold operation depending on whether you care about speed. Also, things behave quite differently at low voltages, so you have to change all your design techniques.
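The energy-minimum argument above can be illustrated with a toy model. Every constant here (slope factor, leakage weight, sweep range) is an assumption picked only to show the shape of the curve, not to match any real process.

```python
import math

# Toy model of energy per operation vs supply voltage.  All constants
# are assumptions picked to show the shape of the curve, not real data.
NUT = 0.039    # n * kT/q: subthreshold slope factor times kT/q (V)
K = 1000.0     # assumed weight of leakage (idle gates, logic depth)

def energy_per_op(vdd):
    # Switching energy falls as Vdd^2 (capacitance folded into units).
    e_dyn = vdd ** 2
    # Leakage energy = I_off * Vdd * cycle time.  With on-current
    # ~exp((Vdd - Vth)/(n*kT/q)) and delay ~Vdd/I_on, this reduces to
    # ~K * Vdd^2 * exp(-Vdd/(n*kT/q)), whose share of the total grows
    # rapidly as Vdd drops and the circuit slows down.
    e_leak = K * vdd ** 2 * math.exp(-vdd / NUT)
    return e_dyn + e_leak

# Sweep the supply from 100mV to 600mV and find the energy minimum.
best = min((v / 1000 for v in range(100, 601)), key=energy_per_op)
print(f"minimum-energy supply in this toy model: {best:.3f} V")
```

Raising the supply past the minimum wastes switching energy; lowering it past the minimum makes each operation take so long that leakage dominates. Where the minimum lands relative to Vth depends on the assumed leakage weight.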

    One of the problems with near- and sub-threshold operation is that you no longer actually know what your threshold voltages are. It's called process variation. The transistors are so tiny that you get on the order of tens of dopant atoms per transistor. The doping process is highly random, so you get wide variance in threshold voltage (and in effective channel length too), meaning that two transistors right next to each other have different switching characteristics. This is already a major problem at 32nm, forcing unfortunately large supply voltage margins to avoid timing-related errors, which translates into excessive power usage. It's an even bigger problem when the supply is near the threshold (above or below), because a transistor's speed and leakage are functions of the difference between supply voltage and threshold voltage. If the supply is 300mV, then a transistor with Vth=130mV is going to be way faster (and way leakier) than a transistor on the same die with Vth=170mV. Of course, both were designed to have Vth=150mV, but you can't control that well enough.
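The 130mV-vs-170mV example above follows directly from the exponential subthreshold law. Here is a small Monte Carlo sketch; the 20mV sigma and the other device numbers are assumed for illustration.

```python
import math
import random

# Monte Carlo sketch of the Vth variation described above.  The sigma
# and the other device numbers are illustrative assumptions.
random.seed(42)
NUT = 0.039       # n * kT/q: slope factor times thermal voltage (V)
VDD = 0.30        # near-threshold supply (V)
VTH_NOM = 0.150   # designed threshold voltage (V)
SIGMA = 0.020     # assumed random-dopant-fluctuation spread (V)

def on_current(vth):
    # Relative subthreshold on-current: exponential in (Vdd - Vth).
    return math.exp((VDD - vth) / NUT)

# The thread's example: Vth = 130mV vs 170mV on the same die.
example_ratio = on_current(0.130) / on_current(0.170)
print(f"130mV vs 170mV device: {example_ratio:.1f}x current difference")

# Across many devices the tail-to-tail spread is far worse.
samples = [random.gauss(VTH_NOM, SIGMA) for _ in range(10000)]
currents = [on_current(v) for v in samples]
print(f"fastest/slowest of 10000 devices: {max(currents)/min(currents):.0f}x")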

    My area of research involves coping with the 5X decrease in reliability at NTC (near-threshold computing), and I'll talk more about it when my papers are accepted. :)

  • by corsec67 ( 627446 ) on Sunday June 13, 2010 @08:35PM (#32560094) Homepage Journal

    Driving with my foot off the gas works really well on I-70 East towards Denver; you can stay at 70 MPH for a while there. Or in flat terrain, coasting along at 700 rpm in second gear sometimes works well.

    Which made me think about where this research would probably have the most effect: transistors on the edge of a high-potential region, so that even when they're "off", more current flows through them than through others in the middle of an "off" block.

  • Been around for ages (Score:2, Informative)

    by AndOne ( 815855 ) on Sunday June 13, 2010 @08:58PM (#32560218)
    Using transistors in sub threshold modes has been around for ages. Carver Mead proposed their use for modeling neurons in silicon ages ago and there have been others who use these techniques for other low power techniques. See Delbruck(Zurich ETH) or Boahen(Stanford/Penn) or Andreou(Johns Hopkins). Two of my undergrad profs did thesis work at Georgia Tech using these techniques as well to do neuromorphic engineering tasks.
  • This is nothing new (Score:4, Informative)

    by blue_moon_ro ( 973863 ) on Sunday June 13, 2010 @08:59PM (#32560220)
    This is nothing new; the behavior of CMOS transistors in the subthreshold region of operation has been known for years. I actually wrote a paper 5 years ago on a circuit using transistors in subthreshold operation. As always, there are trade-offs, and the main one is that the frequency of operation is a lot lower than if the transistor were working in the normal region. The main advantages of running transistors in this region are low power and the fact that the current-vs-voltage law changes from the quadratic law of the regular operating region to an exponential one, i.e. I ~ e^[(VGS-VT)/(n*kT/q)] (see Sedra & Smith or any other reference electronics book). So don't dream of your next low-power processor using this technology. It is better suited to analog applications (among the first I remember are current multipliers and low-power current-mode analog circuits), and that seems to be how these guys at IMEC are actually using it.
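The exponential law quoted above implies a fixed number of millivolts per decade of current, the subthreshold swing. A quick numeric check, with an assumed slope factor n = 1.5:

```python
import math

# Numeric check of the exponential law quoted above,
# I ~ exp((VGS - VT)/(n*kT/q)).  n = 1.5 is an assumed slope factor.
N, UT = 1.5, 0.026   # slope factor, thermal voltage kT/q at room temp (V)

def sub_vt_current(vgs, vth):
    # Relative drain current in subthreshold (prefactor dropped).
    return math.exp((vgs - vth) / (N * UT))

# Subthreshold swing: how many mV of VGS change the current by 10x.
swing_mv = N * UT * math.log(10) * 1000
print(f"subthreshold swing: {swing_mv:.0f} mV/decade")

# Dropping VGS 100mV below threshold cuts the current substantially.
ratio = sub_vt_current(0.0, 0.0) / sub_vt_current(-0.100, 0.0)
print(f"100mV below Vth: {ratio:.1f}x less current")
```

That steep exponential is exactly why the current changes so fast with gate voltage here, and why digital speed collapses while analog designers get useful transconductance for very little current.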
  • Re:Not news (Score:3, Informative)

    by nerdbert ( 71656 ) on Monday June 14, 2010 @10:15AM (#32564656)

    It's been "not news" now for at least 20 years. Tsividis did significant work on subthreshold FETs for neural networks back in the 80s and early 90s. Subthreshold design isn't common, but it's by no means a new field.

    Subthreshold has its place, but it's not a pleasant place to work. Mismatch between transistors is about 4x higher, gate leakage is a huge factor below 130nm, the models you get from your foundry aren't reliable or accurate, etc.

    I make subthreshold circuits all the time when I need bandwidth and have no headroom (hello 32nm, how unpleasant to meet you when doing analog design!). But I'm not doing low-power subthreshold design; rather, it's for very, very high-speed analog signal processing designs.
