
PC Power Management, ACPI Explained In Detail

Posted by kdawson
from the less-power-to-him dept.
DK writes "Computer performance has increased steadily in recent years, and unfortunately so has power consumption. An ultimate gaming system equipped with a quad-core processor, two NVIDIA GeForce 8800 Ultras, 4 sticks of DDR2 memory, and a few hard drives can easily consume 500W without doing anything! To reduce power wastage, the industry standards APM and ACPI were developed to make our computers work more efficiently. ACPI is the successor to APM and is explained in detail in this article."

  • by Anonymous Coward on Wednesday July 11, 2007 @02:13AM (#19822305)
    I'll add: don't waste your time with TFA. It's just a listing of the various ACPI states, not an explanation of ACPI.
  • Re:ACPI? (Score:2, Informative)

    by Aranykai (1053846) <`moc.liamg' `ta' `resnogls'> on Wednesday July 11, 2007 @02:14AM (#19822309)
    Actually, October 2006 saw the release of the latest revision, 3.0b. Still, I agree. This is old news.
  • Re:OS (Score:2, Informative)

    by borizz (1023175) on Wednesday July 11, 2007 @02:49AM (#19822467)
    Sorry, I disagree.

    PSU wattage (of course) has nothing to do with the speed of the machine. And while Vista is a power-hungry beast, I don't think you can specify its performance needs by stating the minimum wattage of the PSU. One can easily spec out a machine with a (say) 400W PSU that will run Vista just fine. You just need to pick speedy hardware that doesn't eat too much power. That means staying off the uber-high-end stuff, which historically has always had a bad power-to-performance ratio.

    Besides, in a few months we'll have (more) budget-end PCs with the performance of today's mainstream ones, which will run Vista using an even smaller PSU.
  • ACPI is a disaster (Score:5, Informative)

    by r00t (33219) on Wednesday July 11, 2007 @03:23AM (#19822603) Journal
    We used to standardize hardware interfaces. They stood the test of time, were well supported, and were low overhead. Writing drivers, including boot code, was no serious problem. We didn't need an emulator, virtual machine, etc.

    Decent standards: IDE, VGA, PC serial interface, PC parallel interface, PC keyboard interface, UHCI, OHCI, etc.

    Now we standardize an interface to non-standard hardware via ACPI. The OS is supposed to run ACPI code (a script) in a complicated interpreter. ACPI code is slow and buggy, and generally gets to do whatever it wants with the hardware. It's like making BIOS calls to do everything, but without even the minor advantage of native code.

    This is especially painful for boot loaders. You can't run an ACPI interpreter in a 512-byte boot sector. You probably can't do it in any reasonable boot loader.

    This is even painful for power management. For example, OLPC wants to suspend the CPU between every keystroke; that doesn't work so well if you need to run an ACPI code script to do it.
  • Whatever (Score:5, Informative)

    by Anonymous Coward on Wednesday July 11, 2007 @03:26AM (#19822617)
    No one's computer idles at 500 watts, not even close. I wish people would check their facts before posting nonsense.

    My 4-year-old Xeon dual-processor PC (that's two physical CPUs, with ~10 fans) with no power management support in the CPUs idles at 200 watts, including powering the display and the extraneous trinkets attached to the watt meter plugged into my wall.

    All new PCs with multiple cores on a single processor have power management features and use considerably less power when idling.

    What's worse is that the article spouts all kinds of mostly useless techno crap about power states without providing any context for what it means, or any useful information about the actual OS power settings one can configure to do something about their PC's power usage.
  • Re:500W? (Score:3, Informative)

    by Emetophobe (878584) on Wednesday July 11, 2007 @03:42AM (#19822697)
    Did you even read the summary? They mentioned using *TWO* Geforce 8800 Ultra graphics cards. From Nvidia's own technical specs, each 8800 Ultra uses up to 175 watts under load. Go to [] and look at the "Technical Summary" table if you don't believe me.

    If you think that's bad, the new R600 series from ATI/AMD supposedly uses up to 270 watts.

  • by Bert64 (520050) <> on Wednesday July 11, 2007 @05:01AM (#19823015) Homepage
    What's worse is that...
    There is a standard for these ACPI scripts; as you pointed out it's not great, but at least there is one. There's also a compiler for them, written by Intel, that complies with the standard.
    But most hardware makers don't use Intel's standards-compliant compiler. They use Microsoft's compiler, which completely breaks the standard, so OS authors can't just implement according to Intel's published standards; they have to reverse engineer Microsoft's unpublished variations.
  • Re:OS (Score:3, Informative)

    by donaldm (919619) on Wednesday July 11, 2007 @05:07AM (#19823045)
    You are right that it is possible to design devices that take a few milliwatts in standby, but you have to realise that the power for the detection device (infrared, Bluetooth, wireless) comes from a transformer with a low-voltage tap that has to be converted to DC. That conversion consumes power; it can be quite small, but together with the detection circuitry it does add up. Most commercial products have been designed to consume 1W or less, and all the entertainment equipment I have meets this specification. Of course, once you switch all this on you don't have to worry about heating the living room in winter :-)

    Just about all electronic equipment has what I would call useless add-ons, such as digital clocks. Manufacturers are not stupid; they want to sell their product, and if they feel a clock or other non-essential add-on will make it more attractive they will add it in, as long as the total standby consumption is less than 1W.

    The best way to switch off your entertainment system is via a central isolator, but do you want to keep resetting your timer clocks every time you power it up? You can switch off non-essential equipment by throwing the main power switch on each device that does not have a clock, but this gets tedious.

    This post actually sparked my curiosity about the latest consoles' standby modes, and surprisingly the PS3 came out well under 1W. The Wii came out at 8W (wow!) and the Xbox 360 came out at 2W. When the consoles were actually doing something, though, the PS3 ran at approx 200W to the Wii's 17W and the Xbox 360's 160W []. If you only have a Wii then yes, you can say the PS3 sucks for running power, but we are comparing a machine (the Wii) that outputs standard-def graphics to a machine that outputs 1080p. Xbox 360 owners can take comfort that their machine does not use as much power (though that does not include the hard drive or the HD DVD drive, so consumption could be much higher). If you have a gaming PC, it is not advisable to say anything about any of the consoles' running costs, "lest ye be stoned to death" :-)
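To put those standby figures in perspective, here is a quick sketch of the annual standby energy they imply. The wattages are the ones quoted above; the rest is plain unit conversion:

```python
# Annual standby energy implied by the standby figures quoted above.
HOURS_PER_YEAR = 24 * 365  # 8760

standby_watts = {"PS3": 1, "Wii": 8, "Xbox 360": 2}

for console, watts in standby_watts.items():
    kwh_per_year = watts * HOURS_PER_YEAR / 1000  # W * h -> kWh
    print(f"{console}: ~{kwh_per_year:.0f} kWh/year on standby")
```

Even the Wii's surprising 8W works out to only about 70 kWh a year, which is why sub-1W standby targets are mostly about aggregate savings across millions of devices.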
  • by *SECADM (223955) on Wednesday July 11, 2007 @05:17AM (#19823097)
    The ACPI interpreter is designed to work within the OSPM, the assumption being that the OS will always be the central point that knows the most about the whole system at any given time, and is therefore the most qualified to make power management decisions.

    As for your point about standardizing on an abstract interface to non-standardized hardware, consider:
    1. This is nothing new, and is what always happens in the computer industry. If we were so hung up on standardized hardware interfaces that are well understood by the industry, we would still be writing everything in x86/68k/ia64 assembly. Thanks to wonderful high-level (read: abstract) languages like C/C++ that are standardized, we can write portable code.
    2. ACPI hasn't replaced any of those hardware standards you mentioned. E.g. ATA drives still take the same commands as before, and the standard is constantly evolving. But more importantly, do those hardware standards have the advanced power management capabilities offered by ACPI? And if they did somehow implement them in the hardware spec, how is updating each individual device's firmware *and* driver to include the new power management support better than just updating the firmware's AML code to expose the new capabilities to the OS in a standardized way?
    3. You didn't mention the other half of the ACPI standard, the Configuration aspect of the spec. All those hardware interfaces that you know and love used to expose their resource requirements differently, unlike ACPI, which standardizes resource descriptors and boot-configured requirements. How is supporting a bunch of hardware-specific resource description formats better than supporting one standardized format that is well understood by the OS?

    Finally, to go back to your OLPC example. First of all, I am not sure why you would want to put an ACPI interpreter into the boot loader? Is there a reason OLPC is doing this? As for your claim that you need to run ACPI code to suspend the CPU (after each keystroke), this is just not true. C-states (which I assume you are talking about) are entered by simply reading a register for the state, as described by the _CST definition. So all you need to do is parse the definition once, remember the corresponding registers, and never parse the AML code again. The same goes for T-states and P-states, which are entered by writing to the proper registers. There is no "run an ACPI script" overhead of the kind you describe when handling processor power management.
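The "parse _CST once" idea above can be sketched roughly like this. This is a purely illustrative Python model, not real OSPM code: the entry layout and port addresses are hypothetical, and on real hardware the final step is a kernel-mode port read rather than a function call. The point is that the AML interpreter runs once at boot, and the hot idle path afterwards is a plain register access:

```python
# Illustrative sketch of "run the AML interpreter once, cache the
# registers, never interpret on the idle path". Values are hypothetical.

class CState:
    def __init__(self, ctype, latency_us, port):
        self.ctype = ctype            # C-state number: 1, 2, 3, ...
        self.latency_us = latency_us  # worst-case wakeup latency
        self.port = port              # I/O port recorded from _CST

def parse_cst_once(aml_entries):
    """Interpret the _CST definition exactly once and cache the result."""
    return [CState(*entry) for entry in aml_entries]

def enter_cstate(state, read_port):
    # The hot path: a single port read (an inb() on real hardware)
    # idles the CPU until the next interrupt. No interpreter involved.
    return read_port(state.port)

# Hypothetical _CST contents: (type, latency in microseconds, port).
cstates = parse_cst_once([(2, 1, 0x414), (3, 85, 0x415)])
deepest = max(cstates, key=lambda s: s.ctype)
```

An OS would pick among the cached states based on latency tolerance, but in every case the per-wakeup cost is the register access, not AML interpretation.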
  • Re:500W? (Score:3, Informative)

    by Fweeky (41046) on Wednesday July 11, 2007 @09:29AM (#19824461) Homepage
    A single G80 GPU contains about 690 million transistors, enough for about 3 dual-core Opterons. Then you've got 768MB of highly clocked GDDR3 memory to go with it; it's not really surprising if it takes about as much power as an entire computer, especially when it's pushed that hard. Similarly high-end CPUs can easily eat 125W just on their own.
  • Math... (Score:2, Informative)

    by Anonymous Coward on Wednesday July 11, 2007 @11:14AM (#19825519)
    Assume a kilowatt-hour costs $0.10. There are ~720 hours in a month (30 days/month * 24 hours/day). How many kWh is 1 watt running constantly?

    720 h * 1 W / 1000 (W/kW) = 0.720 kWh

    0.720 kWh * $0.10 = $0.072, or a little over 7 cents per month.

    I just don't know what you were thinking - did you mean to use pesos?
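The arithmetic above generalizes to any constant load. A minimal sketch, using the same assumptions as the worked example (a 720-hour month and $0.10/kWh):

```python
# Monthly cost of a constant electrical load, per the arithmetic above.
def monthly_cost(watts, usd_per_kwh=0.10, hours_per_month=720):
    """watts * hours / 1000 gives kWh; multiply by the price per kWh."""
    return watts * hours_per_month / 1000 * usd_per_kwh

print(monthly_cost(1))    # the 1 W case worked above: about 7 cents
print(monthly_cost(500))  # the summary's 500 W gaming rig: about $36/month
```

At $36/month for a 500W rig running around the clock, the idle-power argument starts to matter a lot more than the 1W standby case.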
