
Chip Power Breakthrough Reported by Startup

Carl Bialik from WSJ writes "The Wall Street Journal reports that a tiny Silicon Valley firm, Multigig, is proposing a novel way to synchronize the operations of computer chips, addressing power-consumption problems facing the semiconductor industry. From the article: 'John Wood, a British engineer who founded Multigig in 2000, devised an approach that involves sending electrical signals around square loop structures, said Haris Basit, Multigig's chief operating officer. The regular rotation works like the tick of a conventional clock, while most of the electrical power is recycled, he said. The technology can achieve 75% power savings over conventional clocking approaches, the company says.'"
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Monday May 08, 2006 @05:31PM (#15288910)
    Chip Power Breakthrough Reported
    By DON CLARK
    May 8, 2006; Page B6

    A tiny Silicon Valley company is proposing a novel way to synchronize the operations of computer chips, addressing power-consumption problems that are a major issue facing the semiconductor industry.

    Multigig Inc., a closely held start-up company in Scotts Valley, Calif., says its technology is a major advance over the clock circuitry used on many kinds of chips.

    Semiconductor clocks work like the drum major in a marching band, sending out electrical pulses to keep tiny components on chips performing operations at the right time. In microprocessor chips used in computers, the frequency of those pulses -- also called clock speed -- helps determine how much computing work gets done per second.

    One problem is that the energy from timing pulses flows in a one-way pattern through a chip until it is discharged, wasting most of the power. Clocks account for 50% or more of the power consumption on some chips, estimates Kenneth Pedrotti, an associate professor of electrical engineering at the University of California at Santa Cruz.

    Partly for that reason, companies such as Intel Corp. have all but stopped increasing the clock speeds of microprocessors, a popular way to increase computing performance through most of the 1990s.

    John Wood, a British engineer who founded Multigig in 2000, devised an approach that involves sending electrical signals around square loop structures, said Haris Basit, Multigig's chief operating officer. The regular rotation works like the tick of a conventional clock, while most of the electrical power is recycled, he said. The technology can achieve 75% power savings over conventional clocking approaches, the company says.

    A typical chip would use an array of timing loops, in a grid akin to a piece of graph paper, Mr. Basit said. The loops automatically synchronize their timing pulses. That feature helps address a problem called "skew" -- the slightly different arrival times of timing pulses throughout a typical chip -- that tends to limit clock precision.

    Multigig says its self-synchronizing loops can run efficiently at unusually high frequencies.

    Mr. Pedrotti said past attempts to address the skew problem have tended to increase power consumption. He and his students, some of whom receive research funding from Multigig, have performed simulations that so far back up the company's claims, though the team is just about to start tests using actual chips, he said.

    Multigig is in talks to license its technology to chip makers, as well as design some of its own products to use the clock technology. Besides microprocessors and other digital chips, the approach could help synchronize frequencies of communication chips, Mr. Basit said.

    "This is a dramatic way of clocking circuits," said Steve Ohr, an analyst at Gartner Inc. He cautioned it could take years to get existing manufacturers to modify existing products to take advantage of the new technology. "Intel is not going to redesign the Pentium tomorrow because of it," he said.
In one corner we have these new guys who tell us that increased synchronisation will reduce power consumption.

In the opposite corner we have the asynchronous processing folks who tell us that removing clocking will reduce power consumption.

These are at odds with each other and someone has gotta be wrong. I smell a VC scam.

      • Who said that increased synchronisation reduces power consumption? The reduced power consumption is due to a different design, rather than increased synchronisation (which also is a result of the design).
      • by rufty_tufty ( 888596 ) on Tuesday May 09, 2006 @07:08AM (#15292087) Homepage
Clock skew impacts your timing margin (if you've got two flip-flops that in theory see the clock at the same instant, any uncertainty in the clock's arrival will impact your timing from one to the other). One consequence of this is you often have to have larger, faster drivers on both your clock tree and your logic to work around this timing problem.
        Larger drivers = larger power.

Therefore if you've got a method to make your clocks arrive more accurately, then you've got more timing margin between FFs and therefore can use smaller drivers.

        Clock trees are also the major consumer of power in most designs, so anything that can reduce them is good.

        Async removes the clock altogether so you save power there.

        So yes both of them can be right.
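        A quick back-of-the-envelope sketch of the margin argument, in Python; every delay number below is invented purely for illustration:

        # Setup check on a flop-to-flop path: data launched at FF1 must settle
        # at FF2 before the next clock edge, minus setup time and clock skew.
        clock_period = 1.00  # ns (1 GHz), illustrative
        clk_to_q     = 0.15  # ns, FF1 clock-to-output delay
        logic_delay  = 0.55  # ns, combinational logic between the flops
        setup_time   = 0.10  # ns, FF2 setup requirement

        for skew in (0.20, 0.02):  # conventional clock tree vs. low-skew clocking
            margin = clock_period - (clk_to_q + logic_delay + setup_time + skew)
            print(f"skew={skew:.2f} ns -> margin={margin:.2f} ns")

        # More margin on the same path means it can close timing with smaller,
        # slower drivers -- which is exactly the power saving described above.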
  • by Anonymous Coward on Monday May 08, 2006 @05:34PM (#15288924)
    Conventional electronics uses circular loop structures to send electrical signals as the electrons would get caught on corners that were too sharp. These people must have overcome that limitation.
    • Conventional chip makers use EVIL CIRCULAR electron path. Embrace SQUARE CIRCUITS and escape EVIL and DUMB circular technology. Then you will realize that electrons travel on FOUR SIMULTANEOUS ORBITS on their way through a microchip.
    • not far off (Score:3, Informative)

      by nomel ( 244635 )
Actually... sharp turns are a problem for high-frequency circuits. When the frequencies get very high compared to the wire's length, the waves *do* actually reflect back from sharp corners and will favor a straight path. This is the basis for things such as TDR (when finding kinks) and directional couplers.
  • He cautioned it could take years to get existing manufacturers to modify existing products to take advantage of the new technology.

    D'oh! Looks like I won't be getting 12 hour battery life on my laptop anytime soon!

    • by Anonymous Coward
Why? Quite a few guys got car batteries adapted to work with laptops. Up to a week on a single charge! :)
  • Simple Math (Score:4, Informative)

    by Ossifer ( 703813 ) on Monday May 08, 2006 @05:35PM (#15288928)
    So "up to" 75% savings on "up to" 50% of the electricity usage. So 3/8 or 37.5% savings, all in all... Of course this is only for the CPU... Could be noticeable in production... Maybe...
    • So "up to" 75% savings on "up to" 50% of the electricity usage. So 3/8 or 37.5% savings, all in all...
      If it saves that much electricity on the CPU, that should also yield a heat reduction.

      Now, whether it is linear or not, any heat reduction is a Good Thing (tm).

      Hopefully we can choose between faster chips at the heat levels we have now, or the same speed chips at a 37.5% reduction in heat (and points in between).
    • Only for the CPU (Score:3, Interesting)

      by vlad_petric ( 94134 )
      In your average laptop, the power consumed by a CPU when running something (i.e. not just idling around) is about half the total power. The other half, roughly, is consumed by the screen.
    • by Weaselmancer ( 533834 ) on Monday May 08, 2006 @07:42PM (#15289489)

      Remember, in advertising-speak, "up to" means "less than". Values between 0% and 75% fulfill the conditions of being "up to a 75% savings".

      • Indeed, even less than 0%!
      • Remember, in advertising-speak, "up to" means "less than". Values between 0% and 75% fulfill the conditions of being "up to a 75% savings".

You are a tard. "up to" means "at this point under definable conditions". It's like the EPA ratings on your car, e.g. 23 city, 31 highway.

If you drive on a level highway, with a fully-tuned car, with a recent oil change, properly inflated tires, at 75 degrees Fahrenheit, at *exactly* 55 MPH, you'll see your 31 MPG.

        But reality is that your tires are a tad low, you haven
        • You are a tard. "up to" means "at this point under definable conditions".

Oh really? Check a dictionary, or even use simple logic. "Up to N" == "not greater than N" == "less than or equal to N". Assuming values can't go negative, "less than or equal to N" == "between 0 and N".

The only reason EPA ratings on cars are reasonably accurate is because it's legally required - they are forced to demonstrate that the mileage figure they give can be achieved under "perfect" conditions. Without that, there would be
You are a tard. "up to" means "at this point under definable conditions". It's like the EPA ratings on your car, e.g. 23 city, 31 highway. If you drive on a level highway, with a fully-tuned car, with a recent oil change, properly inflated tires, at 75 degrees Fahrenheit, at *exactly* 55 MPH, you'll see your 31 MPG.

          Poor example...you forgot to add in "with your air conditioner off" as a condition. That said, I think the post you reply to makes a good point about advertising in general.

          Just as car co

          • I'm sick of this EPA mileage chatter about the car companies scamming people. Yes, the numbers sometimes fail to be precise. They sometimes even fail to be accurate. They are not what the car company got by test-driving under perfect conditions on the open road.

            They are required, by EPA regulation, to report what the car got on a dynamometer under certain cycles meant to simulate actually driving. The EPA regulates that this is the way. The EPA is therefore responsible for the validity of the results.

            Some c
            • They are required, by EPA regulation, to report what the car got on a dynamometer under certain cycles meant to simulate actually driving. The EPA regulates that this is the way. The EPA is therefore responsible for the validity of the results.

              No, the EPA is responsible for the accuracy of the results under the specified conditions. The validity of the results is a function of whether the test conditions reflect the typical use of your average vehicle. Since (in most cases) they fail to set test conditio

          • Don't forget that EPA mileage estimates are also going to be computed with the windows up. On a truck or a van it probably makes a very small difference, but on a highly aerodynamic car (like most sports cars, or all purpose-built hybrids) it's quite significant.
Why does everybody whine about how the EPA estimates are too high? For me they are too LOW! I can drive my Volvo S40 at 80 MPH with the A/C on and still get 31.5 MPG over hundreds of miles. The EPA says I should only be getting 28 MPG highway...
  • "Intel is not going to redesign the Pentium tomorrow because of it," he said.

    Why not? If this works it sounds like Moore's law would continue, and would give whatever company that deployed it first a performance advantage.

    Is this really so radical we'll have to wait years to get it on our desks?
    • by Mindwarp ( 15738 ) on Monday May 08, 2006 @05:47PM (#15288978) Homepage Journal
      Why not? If this works it sounds like Moore's law would continue, and would give whatever company that deployed it first a performance advantage.

      Because first they're going to get a bunch of their theoreticians to work the math on the problem to make sure it's viable. Then they're going to get a bunch of their VLSI modellers to run virtual simulations on the clock modification to refine exactly how great the potential efficiency gain would be. If that turns out OK then they'd produce some simple mock-ups of the new clock architecture to make sure that it functions correctly in hardware. Then they'd go about the expensive and time-consuming process of redesigning the current chip architectures to include the new style clock. Then they'd produce an initial fabrication of the chip to run through extensive hardware testing (and on the inevitable failure they'd hop two steps back and try again.) Once they were happy with the design they'd scale up to full production and roll it out.

      Everybody in the microprocessor design world remembers this [wikipedia.org] all too well.
And then they'll find out that the patent for it is so tightly secured that no one can use it...
And then they'll find out that the patent for it is so tightly secured that no one can use it...

          Nah, that's when they bring in the bunny-suited lawyers to prove that they were the ones that invented the technology all along.

          :-)
Yeah, strangely everyone remembers the FDIV flaw but nobody seems to remember this: http://apple.slashdot.org/article.pl?sid=06/01/24/1537231 [slashdot.org]

        Pentium 4 has 64 flaws, Core Duo has 34 and counting...

        At this point releasing a CPU with only one obscure FDIV bug would probably be a day to celebrate. ;)
        • by GoRK ( 10018 )
FDIV wasn't particularly obscure; IIRC it went unnoticed for a very long time and affected many real-world calculations. It was unlike many other errata in that it was a documented function misbehaving and was not caught early. You could see it in action simply by loading up a spreadsheet app and doing a division. The software workaround wasn't that difficult, but the lack of microcode support at the time made it a big hassle.

          The Pentium also had the more egregious F00F bug, the nonexistent opcod
Well, the FDIV was NOT obscure (I remember seeing it in every major PC mag at the time), and it was not only one obscure bug, but more like 0.9986756235 of a bug.
F00F and FDIV were particularly nasty ones. F00F because it meant a bad app could crash your system hard even if it had no special privileges. FDIV because it silently corrupted results rather than simply causing a crash.

Are any of the Core Solo/Duo ones that bad?
      • Hell, I was a little kid and I remember that bug.

It isn't every day you can read in a news article how to use the Windows calculator to make your computer output incorrect math... I remember being tickled to death about it being what the article said it would be.
    • "Intel is not going to redesign the Pentium tomorrow because of it," he said.

      Why not?


      For starters the automated design tools will need a rehack.

      Current synchronous chips use a "clock tree" to try to get all the flops and latches to clock at once. Then the design tools assume that the outputs flip at the same time and try to route the signals so they all get through the logic to set up the flops in time for the next clock.

      This scheme will produce waves of clocking that propagate across/around the chip. So
  • by From A Far Away Land ( 930780 ) on Monday May 08, 2006 @05:39PM (#15288947) Homepage Journal
    We're getting ever closer to the perpetual motion machine, just 25% energy savings to go ;-)

Seriously though, I'll look forward to seeing this new chip in production, since more energy-efficient chips mean less waste heat, and thus quieter computers with fewer fans. I'll trust it when I see it; I'm not so swayed by a company that is still just a "startup", probably looking to get a boost to its stock price by announcing a breakthrough.
  • by Anonymous Coward
Most of the power in a computer is used once and wasted. The input to a gate acts like a capacitor. When the input is driven from a zero to a one, the current is limited by the resistance of the output gate driving it. That resistance is where the power is dissipated. The charge is drained to ground when the input is driven from a one to a zero. If there were some way to re-use the charge stored in the inputs, the power dissipation of a chip could be dramatically reduced. There would be a limit to how
  • No overclocking (Score:3, Interesting)

    by rcw-work ( 30090 ) on Monday May 08, 2006 @05:49PM (#15288987)
    You can't readily adjust the amount of time it takes electricity to make its way around a fixed-size loop. If this is what is actually clocking the chip, it'll have an official frequency (or two, perhaps, for low-power usage) and you'll be stuck with that. The manufacturer would have to throw out, rather than derate, any parts that don't work at that frequency.
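    A rough sketch of why the frequency is pinned by the geometry; the propagation speed and loop length below are assumptions for illustration only:

    # The wave's lap time is set by loop length and propagation speed,
    # both fixed at fabrication -- hence no easy overclocking.
    c = 3e8                     # speed of light, m/s
    v = c / (2.25 ** 0.5)       # assumed on-chip propagation speed (eps_r ~ 2.25)
    loop_length = 0.02          # assumed electrical loop length: 2 cm, in meters
    lap_time = loop_length / v  # seconds per lap around the loop
    print(1 / (2 * lap_time) / 1e9, "GHz")  # two laps per cycle -> ~5 GHz here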
    • Not necessarily. That just sets a lower bound (of sorts) on the performance.

      You're assuming that A. there can be only one pulse in flight at a time (which is probably not the case) and B. that the breadth of the pulse is constant. I would expect that in such a design, calculation might occur on the rise and the value would be propagated to the next stage of the CPU on the fall, which would mean that the pulse width and number of concurrent pulses in the loop could be adjusted to allow for significant va

      • You can put a harmonic on the loop, but it needs to be in phase with the wavelength of the loop, which limits you to integer harmonics, and there are practical upper limits to the harmonics you can cheaply use. I don't think they'd ship a chip that had, say, a 500MHz loop and use a 20x harmonic to get 10GHz, so that as a user, you could select 21x or 22x instead. I think it'd be more likely that for the 10GHz example, they'd give you a CPU with a 3.3GHz loop. You'd get the rated 10GHz frequency, and 3.3GHz
        • Varying the breadth of the pulse doesn't clock you any faster or slower.

          No, but it could improve the stability of the circuit if you have problems with the computation not consistently being done by the time it needs to be propagated on the back side of the pulse. If it gets to a certain width, of course, you'd end up reducing the pulse multiplier or else you'd have problems with the data not being propagated before the next clock arrives. The point is that there's some range in the middle where it wou

  • Ok nerds, tell me if this is feasible....

    First of all, I can barely grasp how chips work in the first place, lots of yes-no-maybe so gates that the electrons have to pass through.

So, would it be possible to make a 3-D chip? Where, instead of one line or branch that the electron follows, there's a crazy ass network for it to flow through?

It'd be vastly more complex to lay out transistors in 3D. You have to keep in mind that the network of interconnects connecting them would have to either be able to skip between layers, or you'd need a chip design in which equal, exact proportions of the transistors talk to each other and only to each other with very limited inter-layer communication. And then there's the heat problem.
    • Re: (Score:3, Insightful)

      Comment removed based on user account deletion
      • Simple, by making water run uphill. [slashdot.org]

        "But the team, writing in Physical Review Letters, believes the effect may be useful in driving coolants through overheating computer microchips."
      • Due to the heat issues you describe, 3D designs probably won't be common for chips until we stop using electrons and switch to photons for computing. At that point, design complexity trade-offs (perhaps a net reduction in circuit paths or increased storage density needs) will probably drive at least some limited 3D structures.

        However, one possible solution to the problem you pose would be to design the chips with lots of little holes and pumping fluid through it. A design could be based on a fractal 3D
        • However, one possible solution to the problem you pose would be to design the chips with lots of little holes and pumping fluid through it. A design could be based on a fractal 3D shape

Never mind fluids... I seriously wonder why we don't already use chips formed as Sierpinski carpets, with plain ol' copper or aluminum cooling running through the chip!

You wouldn't even need a true fractal on today's chips... A mere second- or third-order carpet would vastly improve CPU cooling at almost no expense (well, i
          • I don't know much about the art form, but it seems to me that the reason we don't see this is because it's not necessary. Getting the heat out of the 2D CPU can be done at a pace sufficient to keep the chip from melting, which is good enough.

            The problem really has to do with creating too much heat in the first place (e.g. chewing through the storage capacity of your laptop battery), and what the heck you do with the heat once it's off the chip and on the heat sink. Apple's PowerBook / MacBook case is
How do you propose flushing the heat out of the dies sandwiched in the middle?

        Best solution: invent room-temperature superconductor, make the chip out of that, profit.

Second-best solution: Handle it the same way office buildings do, by "installing air-conditioning ducts", i.e. little hollow tubes full of moving air (or some sort of coolant) that run through the cube at intervals, carrying the excess heat away.

        Third-best solution: Run the chip slowly enough that only a little bit of heat is generated: littl

    • The pathways that electrons flow through are pretty 'crazy ass' already. In fact, modern chips are already 'multi-layer' and so are already 3D. One of the biggest problems with stacking processing layers on a chip is that of heat removal. Each time you add an extra layer to the sandwich you make it a little harder to extract the heat from those internal layers.

      There have been some interesting research projects carried out using Sierpinski cubes as the chip fabrication layout, and using the channels in t
    • This was modded interesting?

      There's a circuit in the chip, which is not just "one line or branches"... it really already is a "crazy ass network" it flows through. You might be able to change the layout slightly and make the circuit itself more efficient by giving yourself the freedom of working in 3 dimensions... however I bet that would be harder to design, manufacture, and cool.
    • by slew ( 2918 ) on Monday May 08, 2006 @06:19PM (#15289134)
      Ok nerds, tell me if this is feasible....

      P.S. In this context, the correct spelling of nerd is E-N-G-I-N-E-E-R ;^)

So, would it be possible to make a 3-D chip? Where, instead of one line or branch that the electron follows, there's a crazy ass network for it to flow through?

In most respects, chips today are ALREADY 3D in that there are multiple layers of planar (flat) metal wiring (anywhere from 4 to 8) connected by vias (vertical interconnects) over a single layer of transistors. The routing of signals on each layer is deliberately designed to be a crazy-ass network (to avoid electromagnetic signal coupling noise between adjacent wires).

However, in current technology there's still only one layer of transistors, and the main limitation on adding more is that there's no good way to get rid of the heat. Even today there isn't a good way to get rid of the heat from the transistors in the one layer of current chips, let alone a big pancake stack (or lasagna) of transistors. People are already starting to stack memory chips that don't get too hot together, and I'm sure they'll eventually start doing other kinds of stacks too as they get better at figuring out the heat problem...

There's nothing inherently stopping you from making a fully 3D chip (existing chips already have many layers); however, it's really difficult to get the heat out.

Current CPUs keep the transistors very, very close to the heatsink and still struggle to keep them cool. If you had a cube-shaped chip then it would be near impossible (with traditional processes).

There are some interesting projects to get miniature coolant pipes running through the chip, but that's a way off.
There is a crazy ass network that it goes through. Don't be fooled by flat circuit diagrams; if you felt like displaying it in 3D, it would be a mess. The PCBs that come out of fabs today have 8 layers most of the time, all of which are filled with connections from place to place. We just flatten everything to make it seem simpler, and avoid confusing ourselves.
  • by mustafap ( 452510 ) on Monday May 08, 2006 @05:51PM (#15288995) Homepage
As with asynchronous processors, maybe its downside will be the silicon area required to implement it.

Other techniques, like multiple independent clock areas that can be shut down when not in use, seem far more beneficial, IMHO.
    • And as with asynchronous processors, hasn't this been done before?
Async processors normally use less area (well, the one I'm aware of did - all down to the clock tree).
Multiple clock areas already exist (in fact I'd say they exist in all modern SOCs - certainly every one I've ever worked on).

Certainly the last 3 chips I worked on were more power-limited than area-limited, and with modern processes this is becoming ever more so - so another tool in the chest for trading area against power would be welcome.
  • by Anonymous Coward
    This will go well with the robotic tentacles. Now your berserker can use even less power, reserving more for the really critical things like the LASER (we need a /. article on military LASERS).
  • vaporware...? (Score:5, Insightful)

    by moochfish ( 822730 ) on Monday May 08, 2006 @05:54PM (#15289019)
It just amazes me that a small, never-before-heard-of company offers a solution to a problem that Intel, IBM, and AMD have been trying to solve for over a decade, each of which has 10 times the budget, expertise, and personnel. Did I mention a head start of a minimum of 10 years of R&D tossed at this problem? I hate to be a pessimistic troll-like poster, but without even a working proof of concept, I can only call this vaporware until they show me a working product. This article says nothing except "we have technology every computer in the world will need in the next ten years... please invest in us and we'll get you a demo soon."
Yes, but of course a few years back a young uni graduate came up with an interesting way to do photo-lithography that CPU makers were quick to snap up.

Sometimes a new mind working on a problem can yield solutions much faster than 1000 people thinking "the old way".
    • Re:vaporware...? (Score:4, Insightful)

      by Jeremi ( 14640 ) on Monday May 08, 2006 @10:22PM (#15290322) Homepage
It just amazes me that a small, never-before-heard-of company offers a solution to a problem that Intel, IBM, and AMD have been trying to solve for over a decade, each of which has 10 times the budget, expertise, and personnel.


I'm in no way qualified to comment on the actual technology here, but I will submit that this situation isn't as unlikely as it might seem. For many problems, the potential solution-space is so large (and the cost of trying out various approaches is so significant) that even a large R&D lab with a big budget and years of effort can end up missing what in retrospect is a very clever and useful solution. It's easy to get so bogged down trying "just one more tweak" of your first (or second or third) approach that you never look around and notice the other approach hiding in plain sight. Even worse, a given organization can easily build up a culture that says "this is the way we do things, because this is the way we know things work", which can discourage even bright new employees from looking at alternative methods. (i.e. Why "start from scratch" with approach B when your company has invested millions in developing approach A?)


      A new startup, on the other hand, doesn't have all that baggage that might limit their point of view. Or even more likely, some bright person may have had The Big Idea, and decided to found a startup to exploit it and get rich, rather than donating his idea to some pre-existing corporation.


      That said, there is plenty of room for bullshit vaporware in the world too :^)

  • An impossible concept only invented like a hundred years ago. Next, they will be charging things known as capacitors from the induced current.
Is there a technical paper on this? I know it's probably patented and they want to keep as much detail as possible to themselves, but it seems like a somewhat abstract paper on how this works would get the chip makers they want to sell this to interested. And satisfy curious people like me.
    • Re:Technical paper? (Score:3, Informative)

      by chefmonkey ( 140671 )
      You're a bit confused, I think. If something is patented, then (in theory, at least), there is a publicly available patent disclosure that describes the technique in sufficient detail that anyone "skilled in the art" of its field should be able to read and implement it. Patents and trade secrets are mutually exclusive.
    • Re:Technical paper? (Score:4, Informative)

      by kent.dickey ( 685796 ) on Tuesday May 09, 2006 @12:26AM (#15290942)
      The press has a knack for distorting stories and making it very hard to figure out real technical details.

      http://multigig.com/pub.html [multigig.com] has some whitepapers. I read the ISSCC 2006 slide set, which let me know the general technique.

Basically, they produce a clock ring yielding a "differential" clock pair that swaps neg and pos after one lap, so its frequency is tuned by its own capacitance and inductance. They call it a "moebius" loop since it's not really a differential pair: the clock wave makes two round trips before getting back to the start. Neighboring loops can be tuned together (although whether that's done by just routing the wave throughout the chip, I'm not sure). They didn't seem to mention synchronizing the period to outside sources, and I'm not sure how they'll be able to do that.

The clocking is not the interesting part to me, but rather their logic strategy. The trick is that the logic itself has no connection to power or ground. The clock nets provide the "power and ground", and all logic must be done as differential (a and abar as inputs, q and qbar as outputs). This is where they get the power savings from--the swings are reduced and there's no path to power or ground to drain away charge. Without really discussing it, charge seems to just shift around on internal nodes between the differential logic states. They then use pure NMOS FETs for logic, which removes all PMOS. The logic will never reach the power rail, though--it will always be a Vt drop below. I just looked this over quickly, but it seems the full-swing clocks and lack of PMOS make this work out fine.

      For quick adoption, they'll need to work out clever techniques to connect this logic to standard clocked logic. Otherwise, it looks only a little bit easier to use than asynchronous logic. The issues they face seem very similar to asynchronous logic issues--tool support, interface to standard clocked logic, debug, test, etc.

      It's not vapor.
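      A minimal numeric sketch of the frequency relation implied by the description above: the lap time of a transmission-line ring is roughly sqrt(L_total * C_total), and the moebius crossover means two laps per clock period. The component values here are assumptions, not Multigig's numbers:

      # Rotary traveling-wave clock frequency from the ring's own L and C.
      L_total = 200e-12                 # assumed total loop inductance, 200 pH
      C_total = 12.5e-12                # assumed total loop capacitance (incl. loads)
      lap = (L_total * C_total) ** 0.5  # one lap around the ring, in seconds
      f = 1.0 / (2.0 * lap)             # pos + neg phases = two laps per period
      print(f / 1e9, "GHz")             # 10 GHz with these made-up values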
      • I don't have time to read the detail, but your post and the original article's comment about "recycling power" sounds to me like they are using some sort of adiabatic logic approach. Adiabatic logic is well known for significant power reduction, but at least historically it has required significantly more transistors per gate and cannot run as fast as traditional CMOS.

        The ring thing sounds like it's just a new clock generation scheme to go with the existing adiabatic logic techniques (which do have rather
        • In one of their papers they explicitly contrast the adiabatic approach (which seeks the lowest possible power per operation at the cost of slower operations) with their own solution (which seeks to lower power even while operating at the highest possible speeds). So for the best possible battery life you would want adiabatic logic, but for multiple gigahertz operation Multigig seems like a nice option.
  • I call BS (Score:5, Insightful)

    by Avian visitor ( 257765 ) on Monday May 08, 2006 @06:16PM (#15289124) Homepage
I've read the FA and, despite having a couple of CMOS designs behind me, I don't understand a bit of what they are saying. Either the reporter who wrote this has absolutely no idea what he is writing, or this entire 'breakthrough' is just vapourware.

The article seems to say that the 'tick' of the clock carries energy throughout the chip, and when the 'tick' hits the edge, the energy is lost. Electronics in your typical digital circuit does not work that way. Energy does not flow through the chip with the signals (OK, it does theoretically, but that amount is negligible next to the dynamic losses in the gates mentioned below).

    You get power dissipation in each gate or buffer that changes state because of some signal, irregardless of the direction in which the information is flowing. You can not recycle this power. This comes directly from the basic principle behind CMOS technology (used by almost all digital chips today) - you are charging and discharging a capacitor.

A typical example showing that running signals around a circuit does not save power: take a ring oscillator (a number of inverters wired in a loop). This circuit will oscillate (send changing signals around its loop) and consume a considerable amount of power.
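    For reference, the standard dynamic-power relation the parent is invoking, with made-up numbers:

    # Conventional CMOS: each full charge/discharge of capacitance C from a
    # supply V dissipates C*V^2 (half in the pull-up, half in the pull-down).
    C     = 1e-9   # assumed total switched clock capacitance, 1 nF
    V     = 1.2    # supply voltage, volts
    f     = 2e9    # clock frequency, 2 GHz
    alpha = 1.0    # activity factor; clock nets toggle every cycle
    print(alpha * C * V**2 * f, "W")  # ~2.9 W just to toggle this one net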
    • Re:I call BS (Score:4, Informative)

      by jelle ( 14827 ) on Monday May 08, 2006 @06:54PM (#15289291) Homepage
      Better link here

http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=187200783 [eetimes.com]

Looks interesting. I wonder what they mean by 'taps', and whether they calculated their power savings right (would each register need its own tap, or if not, is the buffer needed to boost the power from the loop included in the clock system power?)

      • "Tap" is just lingo for a connection point.

        On its first analog-to-digital converter, MultiGig will implement one physical ring with four phases. Taps can be implemented at any point around the ring to gain access to any of the four phases.

        I interpret this as they will have 4 "clock" wires, each carrying a square wave with a 1/4 On, 3/4 Off cycle, with each of the wires out of phase (1/4th shifted) with each other. Since the wire arrangement has previously been described as a square, this creates an in

        • "when one phase is discharging, the nearby wire is charging at the same rate, which reduces the total losses."

Aha. That makes it clear. That probably will work to save a lot of power. Neat. I hope many chip manufacturers (AMD, Xilinx, etc.) will be able to use it successfully.
    • Re:I call BS (Score:5, Informative)

      by CTho9305 ( 264265 ) on Monday May 08, 2006 @07:00PM (#15289317) Homepage
      You get power dissipation in each gate or buffer that changes state because of some signal, irregardless of the direction in which the information is flowing. You can not recycle this power. This comes directly from the basic principle behind CMOS technology (used by almost all digital chips today) - you are charging and discharging a capacitor.

      You're half right. You're right that what's going on is a charging and discharging of a cap, but you're wrong that the charge can't be recycled. A conventional clock works by connecting the gates of a bunch of devices (i.e. capacitance) to Vdd, then after a little time connecting it to ground instead. Wait a little bit, then repeat. What effectively happens is that you dump some amount of charge from Vdd to ground each switch, and it's gone (i.e. it's heat now). A water analogy would be a tub of water above you (Vdd), a bucket in your hand (the capacitance), and the ground (gnd). You pour some water from the tub into your bucket (charge the cap), then dump it on the ground.

It doesn't have to be this way. There are actually ways to charge a capacitor and then pull the charge back out again (without dumping it to ground)! I'm going to assume you're familiar with LRC circuits, and how they can resonate when an impulse is applied. What's going on during the oscillations? Charge is moving into the capacitor, and then being pulled back out to the inductor. The same charge goes back and forth, ideally forever (of course, in practice the resistance isn't 0, so you put out some heat and the oscillations die down). I'm not sure what exactly the water analogy would be - maybe a wave sloshing back and forth in a trough.

      I recently attended a seminar where the presenter talked about clocking based on LRC oscillations and he had actually fabbed chips that worked. The basic idea was to put an inductor on the die, and set up oscillations between the inductor and the clock load capacitance, which results in a ticking clock. Of course, you get a sinusoidal clock instead of a nice almost-square-wave, so your circuits have to be designed a little bit differently, but the point is, it works and is doable.

Now, the technology described in this article, as best as I can tell, uses another idea - transmission lines. In a normal design, your clock grid basically looks like a bunch of capacitors with resistors in between (i.e. distributed RC). It takes time for a signal to propagate - signals propagate much slower than the speed of light, because you actually have to charge up the capacitance along the line through the resistance of the line itself. Imagine a long trough that's empty. You start pouring water in, and although water reaches the far side pretty quickly, you don't actually observe it until the water level at the far end is halfway up. Signals propagate differently when wires are set up as transmission lines - they propagate much closer to the speed of light, because you're actually sending a wave down the line (imagine creating a ripple on a trough of water, instead of actually filling and emptying the trough).

Now, I don't understand how they combined charge recycling and transmission lines (I don't understand transmission lines all that well), but your arguments aren't good reasons to disregard the claims made by the company.

      If you're interested, here [cmu.edu] is a little bit of info about the talk I went to.

A typical example showing that running signals around a circuit does not save power: take a ring oscillator (a number of inverters wired in a loop). This circuit will oscillate (send changing signals around its loop) and consume a considerable amount of power.
      If you created an oscillator between an inductor and a capacitor, on the other hand, once you started it going, it would continue for a long time with minimal energy injected in the future.
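      A toy simulation of that LC "sloshing", in Python. The component values are invented, but it shows the point: only the series resistance burns energy, rather than the full C*V^2 being dumped to ground every cycle:

      # Series RLC tank, integrated with semi-implicit Euler for stability.
      L, C, R = 1e-9, 1e-12, 0.05   # 1 nH, 1 pF, 50 milliohms (illustrative)
      dt = 1e-13                    # 0.1 ps time step
      q, i = C * 1.0, 0.0           # capacitor charged to 1 V, no initial current
      e0 = q * q / (2 * C)          # initial stored energy
      for _ in range(200000):       # simulate 20 ns (~100 cycles near 5 GHz)
          i += dt * (-q / C - R * i) / L  # update current first...
          q += dt * i                     # ...then charge, using the new current
      e = q * q / (2 * C) + L * i * i / 2
      print(f"energy left after ~100 cycles: {100 * e / e0:.0f}%")  # roughly a third
      # A conventional clock net would have burned the full charge ~100 times over.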
      • Re:I call BS (Score:2, Insightful)

        by imgod2u ( 812837 )
As far as I'm aware, most high-speed oscillators are LVDS or LVPECL. They don't oscillate between VDD and GND; they generate two 180-degree phase-shifted voltages relative to each other. The problem isn't generating the clock, it's distributing it across the chip. And unless this oscillator scheme has the ability to not be affected by fanout, line delays, etc., it will not overcome the clocking problem. There is still a need for that clock signal to reach different parts of the circuit and there will still be a
    • You get power dissipation in each gate or buffer that changes state because of some signal, irregardless of the direction in which the information is flowing. You can not recycle this power. This comes directly from the basic principle behind CMOS technology (used by almost all digital chips today) - you are charging and discharging a capacitor.

      Worse than that, this isn't entirely a quirk of the technology; it's partly a basic limitation of physics/information theory. There's a certain amount of energy tha
      • Worse than that, this isn't entirely a quirk of the technology; it's partly a basic limitation of physics/information theory. There's a certain amount of energy that must be expended to delete a bit of data, and that's a hard limit.

        We're still orders of magnitude away from caring about that, though.

    • irregardless

      Goddamnit, please don't resort to making up words in the midst of an otherwise readable post. It impacts your credibility more than you (apparently) realize...

  • by t35t0r ( 751958 ) on Monday May 08, 2006 @06:29PM (#15289177)
    What a breakthrough [wikipedia.org]
I think I can decrease my gas consumption by up to 75% by throwing square wheels on my car! Of course the reason would be that I would be 75% less likely to use a car that really can't go anywhere.
    • "Of course the reason would be because i would be 75% less likely to use a car that really cant go anywhere."

      Only 75%? Personally, I would be 100% less likely to use a car that really can't go anywhere. Unless I were homeless or looking for a nice seedy place in which to Fornicate Under Carnal Knowledge...

Since clocks take up a large percentage of the power and space on chips, why not do away with them? Why not use a clockless CPU, so results are available as soon as they are ready? There are some processors out there (the ARM Amulet, for instance) that do this. Does it just not scale well to the high speeds we are used to now on our desks and laps, or is it just that current clocked CPU design is way ahead in terms of development?
    • Re:Clockless CPUs? (Score:3, Informative)

      by tomstdenis ( 446163 )
Clocks are not a high percentage of the power. They're not trivial, but mostly the problem with clocks is the length of the lines. The bus between the register file and ALU is probably 1/20th the length of the clock traces.

Compared to all the other logic in a CPU, from the decoders to the schedulers to the ALUs, load-store, and then all the support pipeline registers, control logic, etc., not to mention the cache...

      The problem with "doing away with the clock" is being able to co-ordinate things in some usable amoun
  • In addition to the already cited

http://www.eetimes.com/news/latest/showArticle.jhtml;jsessionid=SG3NCFVRB3QWEQSNDBESKHA?articleID=187200783 [eetimes.com]

the EE Times piece (in the printed edition, not up on the web) has a sidebar with neat background on the inventor:
    ________

Christmas present leads to rotary wave epiphany

    The Rotary Traveling Wave technology was the brainchild of MultiGig Inc.
    founder and chief technology officer John Wood, a self-taught inventor
    and son of an inventor w
This reminds me of the low-power reversible computing that I learned back in college from Prof. Jan van de Snepscheut at Caltech... The basic idea is to reduce wasted power by "sloshing" current within the chip, rather than letting the current spill to ground... (this is a gross simplification...)

    This (highly technical) paper describes what I'm talking about:
    http://www.zyvex.com/nanotech/electroTextOnly.html [zyvex.com]

This article mentions "helical logic", which sounds a bit like what this invention is...
