Whither Moore's Law; Introducing Koomey's Law

Joining the ranks of accepted submitters, Beorytis writes "MIT Technology Review reports on a recent paper by Stanford professor Dr. Jon Koomey, which claims to show that the energy efficiency of computing doubles every 1.5 years. Note that efficiency is measured for a fixed computing load, a point soon to be lost on the mainstream press. Also interesting is a graph in a related blog post that highlights the meaning of the 'fixed computing load' assumption by plotting computations per kWh vs. time: an early hobbyist computer, the Altair 8800, sits right near the Cray-1 supercomputer of the same era."
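
A quick Python sketch of what a 1.5-year doubling period compounds to; the multiplier is relative to whatever baseline machine you pick, which the sketch leaves unspecified:

    # Back-of-the-envelope: how a fixed doubling period compounds.
    DOUBLING_PERIOD_YEARS = 1.5

    def efficiency_multiplier(years_elapsed):
        """Factor by which computations-per-kWh grows after years_elapsed."""
        return 2 ** (years_elapsed / DOUBLING_PERIOD_YEARS)

    print(f"10 years -> {efficiency_multiplier(10):.0f}x")   # ~102x
    print(f"30 years -> {efficiency_multiplier(30):.2e}x")   # ~1.05e+06x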
  • Power Hog (Score:5, Interesting)

    by Waffle Iron ( 339739 ) on Tuesday September 13, 2011 @06:23PM (#37392450)

    My favorite example of computing (in)efficiency is the USAF's SAGE bomber tracking computers introduced in the 1950s. These vacuum tube machines had CPU horsepower probably in the same ballpark as an 80286, but could draw more than 2 megawatts of power each. They didn't decommission the last one until the 1980s.

    • The words that explain this are 'mission critical'. If a computer that important still works, you need a damn good reason to unplug it and replace it with an untested system. Having something new and shiny is not a good enough reason.
      • by Anonymous Coward

        No, that's not a good explanation. Anyone who can't just ask for more money would build a massively cheaper system, run it in parallel to the old power hog until the new design is sufficiently tested, and then junk the ex-mission-critical space heater. The only reasonable explanation for keeping a tube computer running into the 80s is budget hogging.

      • Would the unreliability of vacuum tubes be a good reason?

    • Re:Power Hog (Score:5, Interesting)

      by anubi ( 640541 ) on Tuesday September 13, 2011 @07:16PM (#37392862) Journal
      Even the idea that one could implement a vacuum-tube machine performing at 286 levels is, to me, a miracle in itself. A 6502, maybe; but even the lowly 286 represents a level of sophistication I could not imagine being implemented with vacuum-tube technology.

      I've never seen a SAGE, but it must have been quite a machine. In my imagination, it is about the size of a Wal-Mart. Given the physical size of the thing, it would amaze me if they could clock it at anything more than 100 kHz or so.

      Yes, I do know what a 6SN7 is. And a 12AT7, which I suspect the machine was full of (or its JAN equivalents).

      Do the designations 12SA7, 12SK7, 12SQ7, 50L6, 35Z5 still ring a bell with anyone?
      • by Bertie ( 87778 )

        I remember reading many moons ago that Colossus was able to do code-breaking in a couple of hours that a Pentium II-class machine would take a day and a half to do. The beauty of designing towards a single purpose, I suppose.

      • Yeah, but I've moved on to the CK722 and HEP-1 and GE-1! Go Germanium!

      • by veektor ( 545483 )

        Do the designations 12SA7, 12SK7, 12SQ7, 50L6, 35Z5 still ring a bell with anyone?

        Sounds like the tube line-up of an All-American 5 tube radio of the octal tube socket era. K1LT

      • From http://www.computermuseum.li/Testpage/IBM-SAGE-computer.htm [computermuseum.li]

        Technical Description

        Size: CPU (50 x 150 feet each); consoles area (25 x 50 feet); total system = 20,000 square feet

        Weight: 250 tons (500,000 lbs)

        Architecture: duplex CPU, no interrupts, 4 index registers, Real Time Clock

        Word Length: 32 bits

        Memory: magnetic core (4 x 64K word); Magnetic Drum (150K word); 4 IBM Model 729 Magnetic Tape Drives (~100K words ea.); all systems with parity checking

        Memory Cycle Time: 6 µs

        I/O: CRT display, keyboard, light gun

        • by aix tom ( 902140 )

          The "light gun" made me curious.

          That's some cool tech [nih.gov], even if the plug is almost as big as the gun. ;-)

          • by anubi ( 640541 )
            I would imagine the "light gun" was a photocell held against the face of the display CRT which would respond when the area "shot" by the gun was illuminated.

            Due to the nature of a CRT, only the phosphor area addressed by the current in its deflection coils will be illuminated, thereby giving the computer a pulse when it directs the beam to the area the gun operator is "shooting".

            We used to build these things for our old IMSAIs and Altairs; we didn't have mice yet, and trackballs were terribly expensive.
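
            A rough Python sketch of the timing trick the parent describes, as it would work on a raster display like those hobbyist builds; the raster geometry and refresh rate below are hypothetical:

                # Toy model of a raster-scan light pen: the beam lights each
                # pixel at a known time, so the photocell's pulse time tells
                # you where the beam (and hence the pen) was.
                H_PIXELS, V_LINES, REFRESH_HZ = 320, 240, 60
                PIXEL_TIME = 1.0 / (H_PIXELS * V_LINES * REFRESH_HZ)  # ignores blanking

                def pen_position(seconds_into_frame):
                    pixel_index = round(seconds_into_frame / PIXEL_TIME)
                    return pixel_index % H_PIXELS, pixel_index // H_PIXELS

                print(pen_position(1 / 120))  # halfway through a frame -> (0, 120)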
      • by jnork ( 1307843 )

        Some. I have some Dynaco stuff I keep meaning to rebuild. It's a shame the good KT88s aren't available any more...

      • by nusuth ( 520833 )

        Even the idea that one could implement a vacuum-tube machine performing at 286 levels is, to me, a miracle in itself. A 6502, maybe; but even the lowly 286 represents a level of sophistication I could not imagine being implemented with vacuum-tube technology.
         

        There is no miracle, as the machine is about 20 times slower than a 286 according to the KIPS value in hamster_nz's sibling post. It is not gener

      • 55,000 tubes vs. 134,000 transistors

        Had 256 KB + 16 KB RAM vs. the 512-640 KB common in the 286

        75,000 instructions per second vs. 1.2 million (@6 MHz)

        SAGE used 52 of them, half online at a time, geographically dispersed, all working on tracking aircraft. But they did communicate with each other, so you might consider this a 1,950,000 instructions per second cluster, beating the first 286s that came out around the time SAGE was stood down.
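
        The cluster figure checks out; in Python, using the numbers quoted above:

            machines, online_fraction, ips_each = 52, 0.5, 75_000
            print(f"{machines * online_fraction * ips_each:,.0f} IPS")  # 1,950,000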

      • by geekoid ( 135745 )

        Well, how many multi-megawatt computers HAD you looked at at the time?

        Because there isn't anything magical about the 286.

    • These vacuum tube machines had CPU horsepower probably in the same ballpark as an 80286, but could draw more than 2 megawatts of power each.

      Surprisingly, according to the computations-per-kWh chart, transistor computers weren't all that much more efficient than vacuum tube computers. For example, the Commodore 64 is about the same distance below the best-fit line as one of the Univac II clusters is.

      • You completely misunderstand the chart. A given distance from the line represents relative efficiency for a given year. Absolute efficiency is the vertical axis. The Commodore 64 and other semiconductor computers are newer than tube computers like the Univac II, and more energy efficient.

        Tubes are inherently energy hogs. You've got to have at least 50 volts between plate and cathode to have anything close to acceptable performance, and the filament draws a substantial portion of a watt.
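
        Filament draw alone adds up; a rough Python sketch using the 55,000-tube count quoted upthread (the per-tube wattage is an assumption):

            tubes = 55_000        # SAGE tube count, from the sibling thread
            filament_watts = 0.5  # assumed heater draw per tube
            print(f"{tubes * filament_watts / 1000:.1f} kW for heaters alone")  # 27.5 kW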

  • Is there a limit to how efficient calculation can get? Is there some minimum amount of energy required to do one computation? How do you measure "computations" anyway, and what units would you use? Bits? Inverse bits?

    • by PaulBu ( 473180 ) on Tuesday September 13, 2011 @06:32PM (#37392522) Homepage

      Yes, there is if you "erase" intermediate results -- look up the 'von Neumann-Landauer limit': kT*ln(2) of energy must be dissipated per bit for non-reversible computation.

      Reversible computation can theoretically approach zero energy dissipation.

      Wikipedia is your friend! :)

      Paul B.
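
      To put numbers on that limit, a small Python sketch (the temperatures are illustrative choices):

          import math

          k = 1.380649e-23  # Boltzmann constant, J/K

          def landauer_joules(T):
              """Minimum dissipation per irreversibly erased bit at temperature T."""
              return k * T * math.log(2)

          print(f"{landauer_joules(300):.2e} J/bit at 300 K")  # ~2.87e-21
          print(f"{landauer_joules(0.1):.2e} J/bit at 0.1 K")  # ~9.6e-25
          # Hard ceiling on irreversible bit erasures per kWh at room temperature:
          print(f"{3.6e6 / landauer_joules(300):.2e} erasures/kWh")  # ~1.25e27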

      • Well, then, we just need to be operating near zero temperature.

        • True, but... (Score:5, Interesting)

          by PaulBu ( 473180 ) on Tuesday September 13, 2011 @06:50PM (#37392654) Homepage

          I do not think you get net energy savings (using the same basic technology, e.g., CMOS, at room temperature or "cold") if you take into account the fact that cooling things down also costs energy! For example, liquid-helium refrigeration costs about 1 kW of wall-outlet power to compensate for every 1 W dissipated at 4.2 K.

          Changing your basic technology to, e.g., some version of superconductor-based logic can help (a lot!), current state of the art (in my very biased opinion, since I am cheering for those guys, and have been involved in related research for years) is here: http://spectrum.ieee.org/semiconductors/design/superconductor-logic-goes-lowpower [ieee.org] ...

          Paul B.

          • Send that computer into space and, with big enough radiators, you'll have no ongoing cooling cost to hold it just above 3 K. Of course, by the time we get anywhere near that limit, somebody can spend some time thinking about how to launch (or manufacture in space) such a computer...

            I've seen somebody cite some higher, clock-dependent limit, although I can't remember the name, nor did I understand where it came from when I saw it.

      • by bunratty ( 545641 ) on Tuesday September 13, 2011 @07:07PM (#37392788)
        Yes, reversible computation can theoretically approach zero energy dissipation, but if you use no energy, the computation is just as likely to run forwards as backwards. You still need to consume energy to get the computation to make progress in one direction or the other. Richard Feynman has a good description of this idea in his Lectures on Computation.
    • The Landauer limit gives a lower bound on how much energy it takes to change a bit: kT*ln(2)

      Where k is the Boltzmann constant and T is the circuit temperature in kelvins.

      So, near absolute zero, somewhere on the order of a yoctajoule per bit change.

      • by danlip ( 737336 )

        I think you meant yocto, but seriously who uses those prefixes? I had to look it up. 10^-24 is much clearer.

    • by MajroMax ( 112652 ) on Tuesday September 13, 2011 @06:44PM (#37392614)
      Without reversible computing [wikipedia.org], there is indeed a fundamental limit to how much energy a computation takes. In short, "erasing" one bit of data adds entropy to a system, so it must dissipate kT ln 2 of energy as heat. This is an extremely odd intersection between the information-theoretic notion of entropy and the physical notion of entropy.

      Since the energy is only required when information is erased, reversible computing can get around this requirement. Aside from basic physics-level problems with building these logic gates, the problem with reversible computing is that it effectively requires keeping every intermediate result. Still, once we get anywhere close to the kT ln 2 physical constraint, reversible logic is going to look very attractive.
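
      A toy Python illustration of the difference, with the Toffoli gate standing in for reversible logic:

          # AND maps three different inputs to the same 0 output, so the
          # inputs can't be recovered: information is destroyed. The
          # Toffoli gate computes AND in its third output while carrying
          # the inputs along, and it is its own inverse.
          def toffoli(a, b, c):
              return a, b, c ^ (a & b)

          for bits in [(0, 1, 1), (1, 1, 0), (1, 0, 1)]:
              assert toffoli(*toffoli(*bits)) == bits  # fully reversible

          print(toffoli(1, 1, 0))  # (1, 1, 1): third output is 1 AND 1
          print(toffoli(1, 0, 0))  # (1, 0, 0): third output is 1 AND 0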

      • by anubi ( 640541 )
        What amazes me is the computation done in biological systems.

        When I consider the amount of correlation and replication done by RNA/DNA systems, I am left in the dust, wondering just what happened.
        • What amazes me is the computation done in biological systems.

          When I consider the amount of correlation and replication done by RNA/DNA systems, I am left in the dust, wondering just what happened.

          I'm not sure I would classify a polymerization as a "computation." Even then, the RNA transcription rate is on the order of 50 nucleotides per second, which isn't all that stunning. The only thing that's really impressive is how interdependent the chemical reactions are, and how sensitive the whole system is.

          Don't be fooled by the DNA :: Computer Code analogy - it is very, very wrong.
          =Smidge=

        • What amazes me is the computation done in biological systems.

          When I consider the amount of correlation and replication done by RNA/DNA systems, I am left in the dust, wondering just what happened.

          Most likely what just happened is you got laid.

      • by Kjella ( 173770 )

        The thing is, even if we could do the whole calculation using reversible computing, then what? If we start over on a new and completely different calculation, we can't use any of the previous intermediaries, and if we clear them - either before or during the next calculation - then we've just spent as much energy as doing it the non-reversible way. Reusing past calculations or lookup tables that are really cached results is something we do in many algorithms today, so each calculation is likely to be nece

  • Does this take into account the miniaturization of electronics and the associated increase in battery size? We're seeing this in many mobile platforms. I'm curious if this is taken into account when they consider 'battery life' while possibly ignoring that batteries themselves may be more efficient or simply larger due to more space in the enclosure.

    • Does this take into account the miniaturization of electronics and the associated increase in battery size? We're seeing this in many mobile platforms. I'm curious if this is taken into account when they consider 'battery life' while possibly ignoring that batteries themselves may be more efficient or simply larger due to more space in the enclosure.

      Errr, I'm not sure what your point was, but it is interesting that even as devices like laptops get more efficient (more computations per unit energy), we make them do *so* much more computing that they still require more power and bigger/better batteries.

      • It's important to note that a large amount of the power in a portable computer is expended on things other than calculation. Your LCD probably consumes more energy than your processor - heck, if I leave wifi off on my cell phone, more than 90% of my battery consumption is from the OLED screen. Add in a portable's spinning disks, wifi radio, and other various bits, and you have a system where, even if the processor were 2x as energy efficient, you'd barely be into double-digit percentage savings in overall energy use.
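
        In arithmetic form, with a purely hypothetical power budget:

            budget = {"display": 0.55, "radios": 0.20, "storage": 0.10, "cpu": 0.15}
            savings = (budget["cpu"] / 2) / sum(budget.values())
            print(f"2x CPU efficiency -> {savings:.1%} overall")  # 7.5%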

  • The Cray-1 was ECL. The Altair 8800 was TTL. We're now CMOS, but I wouldn't mind an ECL i7, despite the fluorinert waterfall... (My real point is that there were very serious differences between the Altair 8800 and the Cray-1 despite the obvious which lend to significant differences in power dissipation...and speed.)

    Additionally, the article doesn't take into account the preponderance of battery-powered modern devices -- before, power consumption wasn't really much of any consideration.
    • by blair1q ( 305137 )

      There's an even more obvious difference.

      The Cray-1 is sitting half a division above the line. As that's a logarithmic ordinate, the Cray is putting out about 3X as many calculations per kWh as the on-the-line entrants are.

      The Altair-8800 is sitting right on the line, being non-impressive to its contemporaries, while the Cray is blasting them with its laser vision and eating nothing but salads.
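
      The 3X falls straight out of the log scale:

          print(10 ** 0.5)  # half a decade on a log10 axis ~= 3.16x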

  • Nonsense. What kind of fixed load did they define? How does this fixed load utilize available system resources? I could define a code payload targeted at technologies present in early-90s Pentium CPUs, then run this code on a modern machine for a much greater overall gap in efficiency - producing any target number I want, thus confirming or wildly disproving this law. It hardly qualifies as a constant, let alone a "law." There are just too many factors involved to make any kind of statement like this.
    • by geekoid ( 135745 )

      Moore's law is done. Dead. Demised. It has shuffled off this mortal coil... as even Moore expected.

      You completely misunderstood what the submitter was saying. However, you did make the same mistake he implied reporters would make.

      classic.

  • by terraformer ( 617565 ) <tpb@pervici.com> on Tuesday September 13, 2011 @08:19PM (#37393320) Journal

    It's the inverse of Moore's law so yeah, duh....

    If your compute power doubles on the same-size die every 1.5 years, then halving the die size while keeping compute power the same cuts the power in half. This is a very well-known phenomenon, and Koomey is doing what he has done for a while: making headlines with little substance and lots of flair.

    That Microsoft and Intel paid for this research calls into question what it was they were actually paying for.

    • I had the same initial reaction when I read TFA. If I had any points I'd mod you up.
    • by danhaas ( 891773 )

      With advanced chip refrigeration, like impinging-jet or phase-change cooling, you can achieve very high flops per unit area. The power consumption, though, increases a lot.

  • Isn't this a trivial consequence of Moore's law, if we interpret the latter to mean exponential growth of (computations/time), and additionally make the very reasonable assumption that users' tolerance for power consumption (energy/time) is more or less constant?
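
    The identity being leaned on here, as a sketch (the starting numbers are arbitrary):

        ops_per_sec, watts = 1e9, 50.0  # arbitrary starting point
        for gen in range(3):
            ops_per_kwh = ops_per_sec * 3600 / (watts / 1000)
            print(f"gen {gen}: {ops_per_kwh:.1e} ops/kWh")
            ops_per_sec *= 2  # performance doubles, power stays flat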
    • by erice ( 13380 )

      Isn't this a trivial consequence of Moore's law, if we interpret the latter to mean exponential growth of (computations/time), and additionally make the very reasonable assumption that users' tolerance for power consumption (energy/time) is more or less constant?

      Not really. Moore's law actually says nothing about computation. It is about transistor count and cost. It is just that we have come to expect a similar relationship for end performance from the use of those transistors. I think this result may have more to do with changes in how those transistors are allocated. The fraction of transistors directly involved in computation is shrinking; I expect those transistors to be rather active and power-hungry. Memory, which has come to dominate modern chips, uses far less power per transistor.

  • What about the energy used creating efficiency?

    Are we experiencing an increase in efficiency?

    OR

    Are we expending ever-increasing amounts of energy creating the appearance of efficiency?

  • I've not read the article (in true Slashdot fashion), but I'm taking issue with the statement "An early hobbyist computer, the Altair 8800, sits right near the Cray-1 supercomputer of the same era" from the summary. Really? Is that meant to be insightful? They're from the same era, so the same research had been done to get both to the same point. The Cray has vastly more circuitry of the same generation as the Altair, so it uses a lot more power. Am I supposed to be surprised by this?

    Either way, I don't really see an ap
    • by geekoid ( 135745 )

      They're benchmarking efficiency - how close computing can get to Landauer's principle.

      So: the same computing power using less electricity.

  • Are they saying that because Moore's law is slightly off - someone has shown it's off by half a year - we should rename the law after this new scientist?
    If I proved something different with the theory of relativity, does that mean Einstein is any less the creator of that theory?
    I would hope not....
