PC Power Management, ACPI Explained In Detail 133

DK writes "Computer performance has increased steadily in recent years, and unfortunately so has power consumption. An ultimate gaming system equipped with a quad-core processor, two NVIDIA GeForce 8800 Ultras, 4 sticks of DDR2 memory, and a few hard drives can easily consume 500W without doing anything! To reduce power wastage, the industry standards APM and ACPI were developed to make our computers work more efficiently. ACPI is the successor to APM and is explained in detail in this article."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by timmarhy ( 659436 )
    one could read this as OS's that require such hardware just to boot are wasting power.
    • one could read this as OS's that require such hardware just to boot are wasting power.

      I'm not aware of any OS that requires 500w just to boot.
      • Vista with all the candy, on a machine which won't crawl? Anything less than a 500W PSU and it'll be underpowered.
        • Re:OS (Score:5, Insightful)

          by Whiney Mac Fanboy ( 963289 ) * <whineymacfanboy@gmail.com> on Wednesday July 11, 2007 @02:46AM (#19822457) Homepage Journal
          Anything less than a 500W PSU and it'll be underpowered.

          Oh Bollocks. Vista might be shit and power hungry, but many laptops with a sub 100w psu will run it just fine.
        • Re: (Score:2, Informative)

          by borizz ( 1023175 )
          Sorry, I disagree.

          PSU wattage (of course) has nothing to do with the speed of the machine. And while Vista is a power-hungry beast, I don't think you can specify its performance needs by stating the minimum wattage of the PSU. One can easily spec out a machine with a (say) 400W PSU that will run Vista just fine. You just need to pick speedy hardware that doesn't eat too much power. That means staying off the uber-high-end stuff, which historically has always had a bad power-to-performance ratio.

          Besides, i
          • Re: (Score:3, Insightful)

            by tsa ( 15680 )
            That means staying off the uber-high-end stuff, which historically has always had a bad power-to-performance ratio.

            The problem with uber-high-end stuff is that in two years it's mediocre and there is new uber-high-end stuff that uses even more power. So old stuff uses relatively less power than new stuff, but power consumption still goes up over the years when you buy new computers. That trend must be broken.
            • Re: (Score:3, Interesting)

              by fbjon ( 692006 )
              I'd say power consumption goes up very slowly on average. First it goes up fast, but so far there's always been some design breakthrough that drops it down to manageable levels again. Case in point: my X2 4400+ uses power roughly on the same level as my previous 2400+ XP, and significantly less when it drops down to idle usage (due to C'n'Q). I previously had a rather slow GF 6200, that had a passive heatsink. My current 7800GT uses a bit more power, but not so much it couldn't survive with just a heatpipe
          • by Crizp ( 216129 )
            I still run Vista just fine on my old Athlon64 3000+ (the 2GHz version) - the trick is having a couple of gigs of RAM and a semi-fast graphics card (X1650 Pro runs Aero nicely).
        • Re: (Score:3, Insightful)

          Comment removed based on user account deletion
          • by syousef ( 465911 )
            Well, I'm dual-booting Vista and XP on a new (less than 2-week-old) Dell Inspiron 9400 with a Core 2, a Go 7900GS, and 2 gigs of RAM. XP flies. Vista crawls. Sorry I don't have less subjective benchmarks to give you, but believe me, booting into Vista feels like a chore. Sometime soon I'll turn down all the useless eyecandy and see if it's okay then.

            • Comment removed based on user account deletion
              • > Also, is this installation of Vista a clean one or is it jacked up with crapware?

                Same thing. ;) Must... resist... "Vista is crapware"... jokes... AUGH!
              • by syousef ( 465911 )
                Boot is slower, but I could live with that. Everything is slowed down after boot to varying degrees. The more resource intensive the app the more I seem to notice it. Once again subjective.

                As for "crapware" I haven't done a clean install but I have uninstalled several items that could be considered crapware. I am however running an antivirus suite (McAfee).
            • Re: (Score:3, Interesting)

              by Daath ( 225404 )
              I call BS. I have a C2D T7400-based laptop with 2GB RAM, a GeForce Go 7600 and a 100GB 7200RPM drive. It boots [a lot] faster with Vista than it does with XP (I've had both installed; currently using Vista). Vista has a lot of other annoying bugs though - mostly driver-related. A few NVidia drivers made Vista's "Sleep" BSOD, but a newer beta fixed it for me. Another annoying bug on this (Zepto) laptop is that the NIC (not the WiFi) is flaky. Disabling it often makes it completely disappear :P
              OTOH, there mig
              • I concur. I have Vista Business installed on an almost 2-year-old Thinkpad R50e. It's a Pentium M 1.7GHz (single core) with 768MB RAM and onboard Intel integrated graphics. I did upgrade to an 80GB Seagate IDE 2.5" drive, though.

                I use the "Windows Classic" interface on both Vista and XP; both are clean installs with OEM versions of Windows. Guess what: Vista boots faster than XP. Also, there is no really discernible difference in speed when launching applications between XP and Vista (delay of 2 or 3 seconds at mo
                • 2-3 seconds is slow on my linux c2d laptop.
                • by syousef ( 465911 )
                  Well, I'm seeing the opposite running Vista Ultimate. Boot times aren't a huge issue for me (though XP wins out here too), but application start times and responsiveness are important to me, and Vista feels much slower here.

                  People can call me a liar, "call BS" or whatever else. I'm just reporting what I see.

              • Comment removed based on user account deletion
              • by syousef ( 465911 )
                You can call BS all you like. Do you usually call people you don't know liars?

                I have a computer that works just as I've described. Both operating systems are running off the same drive. Boot time isn't my biggest concern either. Starting and running various apps however does. The whole thing feels a lot slower on Vista.
          • I run Vista on my year-old laptop; it has a Centrino Duo and integrated Intel graphics (945GM, I think). Either way, I run Aero just fine and get about 4 hours of battery life.
    • Re:OS (Score:5, Interesting)

      by donaldm ( 919619 ) on Wednesday July 11, 2007 @03:14AM (#19822573)
      The problem with any electronic device is that it (to state the obvious) consumes power, so manufacturers have opted for approx 1W in standby mode. Unfortunately, if you take a stereo amp plus active woofer, a TV, an HDD DVD recorder, a set-top box (if you have one) and at least one game console (assuming they also consume 1W in standby), you have a total of 7W consumption. Now extrapolate that to 10M people (I am being very conservative here) and that is 70MW overall consumption just for your entertainment system to do nothing.

      Of course, once you turn on your entertainment system the power consumption (taking the above example) can easily jump to 7GW even with fairly conservative systems. Now try the same simple maths with your fridge, microwave oven, oven clock (in fact any clock) and anything else that consumes power in standby. Add in lights, even low-wattage ones, and your hot water heater (assume electrical off-peak, not gas or solar) and the power consumption is massive. With regard to PCs and laptops, consumption depends on what you have and can vary between 20W and over 1000W. It is possible to put a laptop in standby or sleep mode, but this depends on whether you are using your laptop as a standalone machine.

      So what are we going to do about all that wastage? Well if you pay for your electricity and you want convenience then absolutely nothing and this is what most people will do.
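The extrapolation in the comment above can be written out as a minimal Python sketch. The 1W-per-device standby figure, the seven devices, and the 10M households are the commenter's stated assumptions; the ~700W active figure per system is an illustrative guess consistent with the 7GW claim, not a measurement:

```python
# Back-of-the-envelope standby-power extrapolation from the comment above.
# All figures are the commenter's assumptions, not measurements.
STANDBY_W_PER_DEVICE = 1.0   # common ~1 W standby design target
DEVICES = 7                  # amp, woofer, TV, DVD recorder, set-top box, console...
HOUSEHOLDS = 10_000_000      # "10M people", deliberately conservative

standby_total_w = STANDBY_W_PER_DEVICE * DEVICES * HOUSEHOLDS
print(f"Aggregate standby draw: {standby_total_w / 1e6:.0f} MW")  # 70 MW

# Once everything is switched on, assume ~700 W per entertainment system:
active_total_w = 700 * HOUSEHOLDS
print(f"Aggregate active draw: {active_total_w / 1e9:.1f} GW")    # 7.0 GW
```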
      • You think it takes 1W to power simple infrared circuitry and a relay?
        It usually says less than 1W.
        • by tsa ( 15680 )
          Who cares, it's still a waste of energy. Things that are idle should use no power at all.
        • Re: (Score:3, Informative)

          by donaldm ( 919619 )
          You are right, it is possible to design devices that take a few milliwatts in standby. However, you have to realise that the power for the detection device (infrared, Bluetooth, wireless) comes from a transformer which has a low-voltage tap that has to be converted to DC. Doing this does consume power, and even though it can be quite small, that plus the detection circuitry's consumption does add up. Most commercial products have been designed to consume 1W or less and all the entertainment equipment I have has
          • HCW have a more detailed [hardcoreware.net] article discussing power usage of the current generation, which makes the Wii look impressive from a green or financial perspective - and that's confirmed looking at the last gen [dxgaming.com] numbers vs the 360 which shows the Wii does more with less juice than PS2, GC and DC. It's still using more power than any console before the DC used, but it is a step in the right direction.

      • by fbjon ( 692006 )
        That's still a molecule in a drop in the sea compared to other sources of power wastage. Besides, as you say, people pay for it. If power was more expensive, people would turn devices off if they consumed power unduly, and manufacturers would emphasise low-power designs (more) since there'd be a large(r) market for it.
        • by sjs132 ( 631745 )
          funny and interesting... An ex-girlfriend (yes, I've had a few AND I read Slashdot... but I'm married now.. FOR REAL!) used to do some crazy things like put every appliance (except the fridge) on power strips (radio, TV, microwave, etc). Then at night she'd run around and power off the strips and, in most cases, unplug the strips from the wall.

          needless to say, in my impetuous youth, I thought she was a nut and eventually ended the relationship.

          Now looking back, it makes complete sense... she was just, ahead of th
      • by Icculus ( 33027 )

        Add in lights even low wattage ones and your hot water heater

        Why would you need to heat hot water? Maybe the heater itself is hot? :P

      • Convenience... (Score:3, Insightful)

        by C10H14N2 ( 640033 )

        After one particularly eye-opening electric bill, I started putting everything on timers, save one computer and my fridge. If I'm asleep or not at home, the power gets cut. ...at the prevailing rates around here, 1W constantly burning all month is about $5. So, $35/month just to let that A/V system sit idle. For the average person, that's about two hours of work. So, unless it takes four minutes out of your day, every day, it's not worth it. Considering we're probably talking more on the order of ten second
        • One problem with putting modern equipment on electrical timers is that when the power goes out, the thing gets totally reset. TVs forget what channels are available and require you to go through the whole "setup" before they let you watch TV again, my satellite DVR has to go through the 5-10 minute "connecting to satellites/downloading channel information" thing, and many other devices have similar problems. This, of course, is all due to poor design of the products in the first place (they should remem
        • Math... (Score:2, Informative)

          by Anonymous Coward
          Assume a kilowatt-hour costs $0.10. There are ~720 hours in a month (30 days/month * 24 hours/day). How many kWh is 1 watt running constantly?

          720 h * 1 W / 1000 (W/kW) = 0.720 kWh

          0.720 kWh * $0.10 = $0.072, or a little over 7 cents per month.

          I just don't know what you were thinking - did you mean to use pesos?
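The Anonymous Coward's arithmetic is right. As a quick Python sketch under the same stated assumptions ($0.10/kWh, a 30-day month, a constant 1W load):

```python
# Monthly cost of a constant 1 W load at $0.10/kWh, per the comment above.
RATE_PER_KWH = 0.10          # assumed electricity rate
HOURS_PER_MONTH = 30 * 24    # 720 h in a 30-day month
LOAD_W = 1.0

kwh = HOURS_PER_MONTH * LOAD_W / 1000   # watt-hours -> kilowatt-hours
cost = kwh * RATE_PER_KWH
print(f"{kwh:.3f} kWh -> ${cost:.3f}/month")  # 0.720 kWh -> $0.072/month
```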
        • After one particularly eye-opening electric bill, I started putting everything on timers, save one computer and my fridge. If I'm asleep or not at home, the power gets cut. ...at the prevailing rates around here, 1W constantly burning all month is about $5.

          This leads to a question: do the timers (over a 24 hour period) use less power than the power saved during the portion of the day that the devices are turned off?
  • ACPI? (Score:5, Insightful)

    by crankyspice ( 63953 ) on Wednesday July 11, 2007 @02:10AM (#19822287)
    2002 called, it wants its Page 3 tech story back.
    • Re: (Score:2, Informative)

      by Aranykai ( 1053846 )
      Actually, October 2006 saw the release of the latest revision, 3.0b. Still, I agree. This is old news.
      • Actually, October 2006 saw the release of the latest revision, 3.0b. Still, I agree. This is old news.

        Yeah, though the spec dates (IIRC) to the mid-90s. I picked 2002 as a sort of arbitrary date; my circa-2000 PCG-Z505R and circa-2001 Latitude C600 both used (or at least could have their power saving features driven by) APM. I didn't pick up another PC for several years and when I did it was ACPI based, and I had to learn a whole new set of Linux incantations ;)

    • The article is pretty in-depth and probably worth a read if you're into that sort of thing, but I'd have to agree. There's no date on the article itself that I can see, but ACPI has been around for years. Why dive into detail now?

      That and the page reloads itself what looks like 3-4 times in rapid succession every 30 seconds. Quite annoying.
  • by Aranykai ( 1053846 ) <slgonser.gmail@com> on Wednesday July 11, 2007 @02:12AM (#19822293)
    I'm sick and tired of having to view 11 pages of ads to read an article that could easily fit on one. Easily 6 ads per page.

    The Wikipedia ACPI article is better and doesn't shove crappy ads down your throat. http://en.wikipedia.org/wiki/ACPI [wikipedia.org]
    • Re: (Score:2, Informative)

      by Anonymous Coward
      I'll add: don't waste your time with TFA. It's just a listing of the various ACPI states, not an explanation of ACPI.
      • Re: (Score:2, Interesting)

        Agreed. I was hoping for a couple tips and pointers on how to use a little less power at least, but none there. I suppose the obvious one is just to turn the computer off whenever I'm not using it, but I do that anyway.
        • Except that the +5VSB is still live, and quite often this rail has a 3A rating (so about 15W). Of course the mobo may not necessarily draw this much when it is off, but it still draws some power.
    • by suv4x4 ( 956391 ) on Wednesday July 11, 2007 @02:51AM (#19822481)
      I'm sick and tired of having to view 11 pages of ads to read an article that could easily fit on one. Easily 6 ads per page.

      You go there to read the article? Damn, I go there to enjoy the ads, and that little article paragraph in the middle? It's pissing me off. It's right in the middle, getting in my way, demanding attention, as if I have nothing better to do than read articles all day.

      Can't there be a site with just the ads and no pesky articles?

      And then I found this [milliondol...mepage.com]. Best. Site. Ever.
      • by Anonymous Coward
        Is it genius or insanity? That site is something else at least.. I love this bit in the faq, "With thousands of random tiny coloured dots all over the place, the homepage would look gastly." Yeah, it really managed to avoid that ghastly look.
    • by tsa ( 15680 )
      I long ago decided that I won't read articles that are spread out over more than one page when there is no 'print this article' button.
      • Oh the irony of wanting to print out an article that relates to power and energy saving just so you can read it!
        • by tsa ( 15680 )
          You misunderstand me. Usually the 'print this article' button links to a page that contains the whole article. You can read it from the screen, or choose to print it. Sometimes you get the 'Print' dialogue automatically but if you then press cancel you can still read the whole article from one webpage.
  • by Anonymous Coward
    Why do we even have a distinction between the low power graphics card brands anymore?
    I understand the uber top end being power hungry, but after that?
    Why isn't the nVidia line up:
    GeForce 8800 GTX-Hyper-turbo-mega-power card extra bonus edition
    GeForce 8800 Go
    GeForce 8600 Go ...

    I'd pay for a "mobile" chip on a PCI-E board...
    Then couple it with a "mobile" processor, some low noise fans, harddisk and whatnot and you get a reasonable but very quiet gaming box.
  • by niceone ( 992278 ) * on Wednesday July 11, 2007 @02:43AM (#19822441) Journal
    TFA lists all the states and how all this power management stuff is supposed to work... what it doesn't go into is how (or if) it actually does work. My experience is that it doesn't - I press sleep on my Windows XP PC and all I get is a message telling me that the driver of my MIDI controller keyboard will not let the machine go to sleep!

    And on my (admittedly very old) Ubuntu laptop the screen just blacks out for a couple of seconds and then comes back on again. When it was running windows it used to go to sleep fine, but the wireless wouldn't work when it woke up.

    I guess other people's mileage probably does vary...
  • by adolf ( 21054 ) <flodadolf@gmail.com> on Wednesday July 11, 2007 @02:58AM (#19822501) Journal
    ACPI has been around for almost eleven fucking years. In-depth information about it can be had in all of the usual sources, from LKML to Wikipedia to decade-fucking-old back issues of Byte and PC Magazine.

    News? Where?

    • Heh,
      but it is nuts and it has volts!
      Just not stuff that matters....

      /me ducks
    • Re: (Score:1, Interesting)

      by Anonymous Coward
      So how come linux still fails to support it?
    • When I was at OLS [linuxsymposium.org] a couple years ago there was a good presentation on how much power management sucked in its actual implementation by hardware vendors and why it was such a problem to implement ACPI properly in Linux. If you go on the site above you should be able to find the PDFs of the presentation.
    • And if it's been around so long, how come I have to boot Linux on most of my MODERN boxes with:

      "pci=noacpi"

      else I see ethernet cards at bootup time, but eth0 (etc.) goes away (no IRQ when you cat /proc/interrupts) right after boot. The only cure for many of these PCI 'issues' is to boot with noacpi.

      If it's that broken, it doesn't seem all that well thought out. (And just WHY doesn't this surprise me?)

      Oh, and in terms of Intel - they're far from perfect as well. I bought an Intel mobo ('Bad Axe 2' model
  • 500W? (Score:4, Interesting)

    by achurch ( 201270 ) on Wednesday July 11, 2007 @03:06AM (#19822541) Homepage

    Is that why people don't blink at PS3s and X360s that eat 150-200W when they're idle? I guess that locks me and my 100W/system power budget out of gaming . . .

    Seriously, what is it that uses up so much power? I've got a pretty standard dual-core system that idles at about 65W, and I can't push it beyond 150W even when I try.

    • Re: (Score:3, Informative)

      by Emetophobe ( 878584 )
      Did you even read the summary? They mentioned using *TWO* Geforce 8800 Ultra graphics cards. From Nvidia's own technical specs, each 8800 Ultra uses up to 175 watts under load. Go to http://en.wikipedia.org/wiki/GeForce_8_Series [wikipedia.org] and look at the "Technical Summary" table if you don't believe me.

      If you think that's bad, the new R600 series from ATI/AMD supposedly uses up to 270 watts.

      • by achurch ( 201270 )
        Mea culpa. I saw the part about two video cards, but I never imagined that a single video card could use more than 100W. (What do you do with 175W per card, anyway?)
        • Re: (Score:1, Funny)

          by Anonymous Coward
          You play pong.
        • Your whites.

          The G80 chips are general-purpose highly-parallel number-crunchers. Remember the Cell/BE processor? Scale the SPEs down to about half the instruction set with a small register bank. Put 48 of them on one chip.

          It'll do your laundry.

        • by FST777 ( 913657 )
          Two things: calculations and heat generation (cue pixel-pixies jokes).
        • Re: (Score:3, Informative)

          by Fweeky ( 41046 )
          A single G80 GPU contains about 690 million transistors; enough for about 3 dual core Opterons. Then you've got 768MB of highly clocked GDDR3 memory to go with it; it's not really surprising if it takes about as much power as an entire computer, especially when it's pushed that hard -- similarly high end CPU's can easily eat 125W just on their own.
      • Did you even read the summary? They mentioned using *TWO* Geforce 8800 Ultra graphics cards. From Nvidia's own technical specs, each 8800 Ultra uses up to 175 watts under load.

        From the summary: "An ultimate gaming system equipped with a quad-core processor, two NVIDIA GeForce 8800 Ultra, 4 sticks of DDR2 memory, and a few hard drives can easily consume 500W without doing anything!"

        To me, "without doing anything" means idle. Where exactly do these 500W go?
    • Re: (Score:3, Interesting)

      by Rick17JJ ( 744063 )

      I used a Kill-A-Watt meter to measure power usage on my two computers. My main computer is a less than 2 year old single-core AMD-64 3800+ with 1 GB RAM, two hard drives, an 83% efficient power supply, a fanless water cooled CPU, a 20 inch flat panel monitor and runs Kubuntu Linux. The monitor uses 40 Watts and the rest of the computer uses about 94 Watts most of the time. In the sleep mode the monitor only uses about 1 Watt. Under heavy use the CPU power usage is much more. I don't like noise, so I ch

    • I think that 500W figure came off the top of someone's head. I suspect the described system would actually idle at about 200W and peak at around 500W.

      PSU (power supply unit) capacity is being way oversold. A desktop PC just isn't going to break 300W peak unless it is a hard-core gaming machine. Even a decent gaming machine (fast CPU and a single nearly-top-of-the-line GPU) won't break 300W. See here [silentpcreview.com] for examples of what 300W will run. (The thread started ~4 years ago, so you might want to skip to the end.)

      H
  • ACPI is a disaster (Score:5, Informative)

    by r00t ( 33219 ) on Wednesday July 11, 2007 @03:23AM (#19822603) Journal
    We used to standardize hardware interfaces. They stood the test of time, were well supported, and were low overhead. Writing drivers, including boot code, was no serious problem. We didn't need an emulator, virtual machine, etc.

    Decent standards: IDE, VGA, PC serial interface, PC parallel interface, PC keyboard interface, UHCI, OHCI, etc.

    Now we standardize an interface to non-standard hardware via ACPI. The OS is supposed to run ACPI code (a script) in a complicated interpreter. ACPI code is slow and buggy, and generally gets to do whatever it wants with the hardware. It's like making BIOS calls to do everything, but without even the minor advantage of native code.

    This is especially painful for boot loaders. You can't run an ACPI interpreter in a 512-byte boot sector. You probably can't do it in any reasonable boot loader.

    This is even painful for power management. For example, OLPC wants to suspend the CPU between every keystroke; that doesn't work so well if you need to run an ACPI code script to do it.
    • by Bert64 ( 520050 ) <bert@[ ]shdot.fi ... m ['sla' in gap]> on Wednesday July 11, 2007 @05:01AM (#19823015) Homepage
      What's worse is that...
      There is a standard for these ACPI scripts; as you pointed out it's not great, but at least there is one. There's also a compiler for them, written by Intel, that complies with the standard.
      But most hardware makers don't use Intel's standards-compliant compiler... They use Microsoft's compiler, which completely breaks the standards, so OS authors can't just implement according to Intel's published standards; they have to reverse engineer Microsoft's unpublished variations.
      • Re: (Score:1, Troll)

        by rthille ( 8526 )
        Jesus, is there _nothing_ that Microsoft can't/didn't fuck up beyond all recognition?
    • by *SECADM ( 223955 ) on Wednesday July 11, 2007 @05:17AM (#19823097)
      The ACPI interpreter is designed to work within the OSPM, the assumption being that the OS will always be the central point that knows the most about the whole system at any given time, and is therefore most qualified to make power management decisions.

      About your point about standardizing an abstract interface to non-standardized hardware, consider:
      1. This is nothing new, and is what always happens in the computer industry. If we were so hung up on standardized hardware interfaces that are well understood by the industry, we would still be coding everything in x86/68k/ia64 assembly. Thanks to wonderful high-level (read: abstract) languages (C/C++) that are standardized, we can write portable code.
      2. ACPI hasn't replaced any of those hardware standards you mentioned. E.g. ATA drives still take the same commands as before and the standard is constantly evolving. But more importantly, do those hardware standards have the advanced power management capabilities offered by ACPI? And if they do implement it somehow in the hardware spec, how is updating each individual piece of hardware's firmware *and* driver to include the new power management support better than just updating the firmware AML code to expose the new capabilities to the OS in a standardized way?
      3. You didn't mention the other half of the ACPI standard, the Configuration aspect of the spec. All those hardware interfaces that you know and love used to each expose their resource requirements differently, unlike ACPI, which standardizes resource descriptors and boot-configured requirements. How is supporting a bunch of hardware-specific resource description formats better than supporting one standardized format that is well understood by the OS?

      Finally, to go back to your OLPC example. First of all, I am not sure why you would want to put an ACPI interpreter into the boot loader - is there a reason OLPC is doing this? As for your implying that you need to run ACPI code to suspend the CPU (after each keystroke), this is just not true. C-states (which I assume you are talking about) are entered by simply reading a register for the state, as described by the _CST definition. So all you need to do is parse the definition once, remember the corresponding registers, and never parse the AML code again. Same for T-states or P-states, which are entered by writing to the proper registers. There is no "ACPI script" overhead like you've described when handling processor power management.
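To illustrate the "parse once, then just use the state descriptions" point, here is a hedged sketch that lists the C-states the Linux cpuidle subsystem exposes for CPU 0. It assumes a kernel that exports /sys/devices/system/cpu/cpu0/cpuidle (not every system, and certainly not every 2007-era kernel, does), and simply returns an empty list otherwise:

```python
# Sketch: list the idle (C-) states Linux exposes for CPU 0 via sysfs.
# Assumes /sys/devices/system/cpu/cpu0/cpuidle exists; returns [] if not.
import os

CPUIDLE_DIR = "/sys/devices/system/cpu/cpu0/cpuidle"

def list_c_states(base=CPUIDLE_DIR):
    states = []
    if not os.path.isdir(base):
        return states  # no cpuidle support exposed on this system
    for entry in sorted(os.listdir(base)):  # state0, state1, ...
        state_dir = os.path.join(base, entry)

        def read(name):
            with open(os.path.join(state_dir, name)) as f:
                return f.read().strip()

        states.append({
            "name": read("name"),                # e.g. POLL, C1, C2, ...
            "latency_us": int(read("latency")),  # worst-case wakeup latency
            "usage": int(read("usage")),         # how often it was entered
        })
    return states

for state in list_c_states():
    print(state)
```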
      • by r00t ( 33219 )
        The first example doesn't really apply. C has other advantages. Also, the CPU itself is a very special case; it's not a peripheral.

        ACPI hasn't replaced existing standards, but Intel is using ACPI for most new stuff. We've enjoyed those nice hardware standards. We won't be getting many more. It'll be ACPI function calls.

        The configuration part of the spec is not needed. For over a decade, we've been able to handle PCI devices without needing a complicated script interpreter. Even BIOS calls, done at boot, wer
    • by josephdrivein ( 924831 ) on Wednesday July 11, 2007 @07:09AM (#19823553)

      Modern PCs are horrible. ACPI is a complete design disaster in every way. But we're kind of stuck with it. If any Intel people are listening to this and you had anything to do with ACPI, shoot yourself now, before you reproduce.


      From: http://en.wikiquote.org/wiki/Linus_Torvalds [wikiquote.org]
  • Whatever (Score:5, Informative)

    by Anonymous Coward on Wednesday July 11, 2007 @03:26AM (#19822617)
    No one's computer idles at 500 watts, not even close. I wish people would check their facts before posting nonsense.

    My 4-year-old Xeon dual-processor PC (that's two physical CPUs, with ~10 fans) with no power management support in the CPUs idles at 200 watts, including powering the display and extraneous trinkets attached to the watt meter plugged into my wall.

    All new PCs with multiple cores on single processors have power management features and use considerably less power when idling.

    What's worse is that the article spouts all kinds of mostly useless techno crap about power states without providing any context as to what it means, or useful information in terms of actual OS power settings one can configure to do something about their PC's power usage.
    • by Bengie ( 1121981 )
      Ditto. My poor old 1800XP with 1 gig of PC-266, 2 HDs, a 17" CRT, and Logitech Z-680s (505W RMS) set loud enough to drown out background noise claims to only consume 180 watts while playing WoW - or so says my battery backup. Even then, I ran my backup till it claimed 10% life left for S&Gs and it came out almost perfect against the predicted times based on 180 watts. I have no idea how a new computer can pull 500 watts idling
    • by Bengie ( 1121981 )
      I guess I can't edit my previous post, but I also forgot to mention that since the 180 watts is draw from the plug, it also accounts for inefficiencies in the system. So if my devices are only 80% efficient, the real power draw from the internals is really 144 watts, which isn't bad with all the devices I was running at the time.
    • I just checked mine yesterday while I was using it for web browsing. It was about 275 watts according to the wattage meter, and that was for the CRT monitor (on and displaying), speakers (powered sub), computer (AMD 64 X2 with a 7800 gts card), and UPS. Kill-A-Watt reported 184 KWh usage over 981 hours; my entire system has been on for 40 days (restarted when necessary) to examine power consumption.
      Agreed with the parent poster. I recently bought a 'Kill-A-Watt' LCD plug-in AC voltage/current/wattage/frequency (you name it) monitor. For $20 it's a neat little device: insert it between your house plug and the load under test (your PC).

      I also have an older dual Xeon with an EPS12V supply, etc. I don't think I saw the LCD say any more than 150 watts under load! No graphics - just a server - but it's still a dual-socket Xeon (P4 style, which runs hot) and I know that it sits at 100W and peaks at about
  • by jb.cancer ( 905806 ) on Wednesday July 11, 2007 @04:02AM (#19822785)
    Now tell me what ISA is..
  • Sleep is worthless (Score:5, Insightful)

    by paul248 ( 536459 ) on Wednesday July 11, 2007 @04:39AM (#19822907) Homepage
    Being able to put components to sleep is pretty much worthless if you want to run anything resembling a server. Hardware manufacturers need to focus less on sleep states, and more on making components consume less power while they're active.

    A good first step is the 80plus [80plus.org] initiative for power supplies. By increasing power supply efficiency from 65-70% to 80-85%, you gain a decent amount of active power savings right off the top. If you care at all about conservation, make sure to check the efficiency rating of your next power supply.

    The people at Intel and AMD have made great strides toward power efficient CPUs, which can scale back their clocks on-demand without noticeably hurting performance, but the real remaining problem areas are in video cards, RAM, and especially hard drives.

    The ideal computer would consume almost zero power while sitting there doing "nothing," but be able to wake up at a moment's notice to handle requests from the user or the network. Power management should be hardware-based and completely transparent. ACPI is just a dirty hack that's becoming more useless as network accessibility becomes more important.
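To put numbers on the 80plus point in the comment above: for the same DC load, wall draw scales as load divided by efficiency, so moving from ~70% to ~85% efficiency trims the draw at the plug noticeably. A rough sketch (the 300 W load is an arbitrary illustrative figure, not from the comment):

```python
def wall_draw(dc_load_w, efficiency):
    """AC power pulled from the outlet to deliver dc_load_w to the components."""
    return dc_load_w / efficiency

load = 300.0  # watts actually delivered to the components (example figure)

old = wall_draw(load, 0.70)   # ~428.6 W at the plug
new = wall_draw(load, 0.85)   # ~352.9 W at the plug
print(f"saved at the wall: {old - new:.0f} W")  # ~76 W, otherwise dissipated as PSU heat
```

The savings apply whenever the machine is on, which is why supply efficiency matters even for systems that never sleep.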
    • I think servers are a pretty small portion of the hardware ecosystem. Most people don't have fancy video cards, and I don't think the minority of people who do are a significant problem. RAM and hard drives don't seem to take much power; I think desktop drives take about 10W each. I don't know about RAM, but most RAM doesn't even get very warm, which suggests it isn't a significant power consumer.
    • I agree that sleep is worthless for servers, but it's great for PC users. For example, my parents' Dell C521 desktop takes only 2-3W at the plug when sleeping and 'appears' to be off (all fans stopped). It only takes a second or two to get back to the desktop, and applications are left as they were. I also agree that hardware vendors (especially graphics and chipset makers) need to focus more on low-power solutions, not only for laptops but also for desktop and server machines. The CPU vendors have been working
    • How does a system like that get near 500 watts just at idle? My system is drawing 197 watts right now. It's a Mac Quad G5 (2.5GHz), 4.5GB RAM, 2 250GB drives and a GeForce 7800 GT for video. It's running Safari, with 7 other apps running in the background (System Profiler, SubEthaEdit, Temp Monitor, Preview, Pages, NetNewswire, and Mail).
    • When you're looking for performance in an "ultimate road car," the last thing you're concerned about is gas mileage. Same holds true for the "ultimate gaming rig". Other than your hardcore gaming addicts, who the fuck needs dual video cards and more than one 2GB stick of RAM? Shit, not even Vista is that hungry.
  • I imagine that most people who don't pay attention to their electric power consumption can easily cut their power consumption in half, or more, with very little effort. I know I did, and it saves me hundreds of dollars a year.

    The simple answer is to turn off equipment that's not in use. It's easy to start with computers and game consoles, but there are other big power consumers in most homes. Sadly, it is nearly impossible to buy equipment with any knowledge of its power efficiency and the performance of
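The "hundreds of dollars a year" claim above is easy to ballpark: watts saved, times hours per year, times the price per kWh. A sketch (the 200 W continuous reduction and $0.12/kWh rate are illustrative assumptions, not figures from the comment):

```python
watts_saved = 200.0          # assumed continuous reduction, e.g. idle boxes switched off
hours_per_year = 24 * 365    # 8760 hours
price_per_kwh = 0.12         # assumed residential electricity rate, $/kWh

kwh_saved = watts_saved * hours_per_year / 1000
dollars = kwh_saved * price_per_kwh
print(f"{kwh_saved:.0f} kWh/yr -> ${dollars:.0f}/yr")  # 1752 kWh/yr -> $210/yr
```

Cutting an always-on load by a couple hundred watts really does land in the "hundreds of dollars" range at typical rates.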
    • We all seem to agree that most of this ungodly power consumption comes from the outrageous video cards. Why don't graphics chip designers develop some kind of "Speed Step"-like architecture that shuts down all but 2 of the pixel pipelines and all but 32 or 64 megs of the RAM when a 3D application is not in use? Seems like we could knock down the power consumption right there, when you don't have a pair of SLI cards with 48 pipes and 1.5GB of RAM between them, revved up to full tilt in order to
    • Sun actually jumped on this idea of yours (Speed Stepping) a while ago, touting processors with low power consumption built into the architecture (e.g. the UltraSPARC T2, advertised as a "green CPU"). From a design standpoint it isn't hard to do; enabling and disabling unused datapaths, control lines, and clocks reduces switching and thus reduces power. This becomes attractive to server buyers, as nobody likes paying for wasted power. It's a surprise to me as well that none of the big graphics card makers have tr
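The "reduces switching and thus reduces power" point in the comment above follows from the standard CMOS dynamic-power model, P ≈ α·C·V²·f: gating unused pipelines cuts the activity factor and switched capacitance, and scaling clock and voltage back together compounds, since voltage enters squared. A sketch with entirely made-up component values, chosen only to show the scaling:

```python
def dynamic_power(alpha, cap_f, volts, freq_hz):
    """Classic CMOS dynamic power estimate: P = alpha * C * V^2 * f."""
    return alpha * cap_f * volts**2 * freq_hz

# Hypothetical GPU figures (not measured from any real part).
full = dynamic_power(alpha=0.2, cap_f=1e-7, volts=1.2, freq_hz=600e6)   # ~17.3 W
# 2D/idle mode: half the pipelines clock-gated (activity and switched
# capacitance halved), plus reduced clock and core voltage.
idle = dynamic_power(alpha=0.1, cap_f=0.5e-7, volts=0.9, freq_hz=200e6)  # ~0.81 W

print(f"full: {full:.1f} W, idle: {idle:.2f} W")
```

Even with these toy numbers, gating plus voltage/frequency scaling yields an order-of-magnitude reduction, which is the mechanism behind CPU Speed Step and the later GPU power states.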
  • I hate to say it, but my Windows XP has been ACPI-aware for what, a few years now? I don't know enough about Linux to know what it does with ACPI, but I do know that if I have ACPI activated in the BIOS, Linux seems to be pretty happy with it. At the very least, sensors returns information about fan speeds and temperatures that falls within reason of what the BIOS says.

  • who cares? let's rather talk about powerpc-management. Or managers with powerpcs. Or pcs with managerpower. Or management by power, not pcs. Or pizza. Or getalife-installenlightenment.
