10 IT Power-Saving Myths Debunked 359

snydeq writes "InfoWorld examines 10 power-saving assumptions IT has been operating under in its quest to rein in energy costs vs. the permanent energy crisis. Under scrutiny, most such assumptions wither. From true CPU efficiency, to the life span effect of power-down frequency on servers, to SSD power consumption, to switching to DC in the datacenter, get the facts before setting your IT energy strategy."
This discussion has been archived. No new comments can be posted.

  • I dunno.. (Score:5, Interesting)

    by Anrego ( 830717 ) * on Tuesday October 07, 2008 @11:02AM (#25286499)

    I'm of the school that thinks "debunking" involves some kind of comprehensive stats or numbers or evidence weighed against strongly held opinions.

    This article is basically a verbose version of the "nuh uh" argument.

    It's not a bad article.. but I would hardly call this "debunking".

    And I totally disagree on point #2 .. maybe having _all_ your extra servers always on is bad.. but if load peaks there is no _way_ someone should be waiting while a system boots.

  • by houghi ( 78078 ) on Tuesday October 07, 2008 @11:11AM (#25286665)

    I lock my PC in the evening and turn off my monitor. Shutting down takes 5 minutes. Starting up takes 15 minutes. I checked those times this morning so I could raise it with IT. This does not include logging into the remote system with Citrix, which takes another 10 minutes.

    So the company has a choice.
    1) Pay me (and everybody else in the company) for 20 minutes
    2) Pay the electricity for not turning off the PC
    3) Find a solution that makes it possible to do all of this faster.

    Oh. The only reason we use Citrix is to run Outlook in it.

  • Re:I dunno.. (Score:5, Interesting)

    by ShieldW0lf ( 601553 ) on Tuesday October 07, 2008 @11:23AM (#25286865) Journal

    I've got electric heat, and I've got a pile of servers in my spare bedroom, and I never need to turn on the electric heat, because the servers heat my home.

    Which looks to me like an opportunity. People pay for heat. So, put the servers where people need heat, and suddenly a liability is a resource.

    Apartment buildings, office buildings and malls in cold climates should all be prime locations for a datacenter.

  • Not only that. (Score:3, Interesting)

    by khasim ( 1285 ) on Tuesday October 07, 2008 @11:23AM (#25286871)

    We have a similar problem here that I've not been allowed to fix yet.

    The employees typically turn on their computers and then LEAVE THE OFFICE to get Starbucks coffee or whatever. A 10 minute wait turns into 30 minutes of non-productivity.

    The computers should be the same as the phones. Instant on - any time - every time.

  • by EvilRyry ( 1025309 ) on Tuesday October 07, 2008 @11:25AM (#25286905) Journal

    Using my handy Kill A Watt, I tested how much power my desktop (not including accessories) draws while off, on and idle, on and under load, and in S3 suspend.

    Off - 6 watts
    Idle - 140W (dropped from 152W after installing a tickless kernel)
    Loaded - 220W
    S3 - 8 watts

    Ever since I ran that test, I put my machine into suspend at every opportunity. 140W is a lot of juice in the land of $0.18/kWh.
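    As a sanity check on those numbers, here's a quick back-of-the-envelope script; the wattages and the $0.18/kWh rate are just the figures from this post, not general measurements:

    ```python
    # Annual cost of leaving the box idling vs. putting it in S3 suspend,
    # using the measured figures from this post.
    IDLE_W = 140      # watts, idle (with tickless kernel)
    S3_W = 8          # watts, S3 suspend
    RATE = 0.18       # dollars per kWh
    HOURS = 24 * 365  # hours per year

    def annual_cost(watts):
        return watts / 1000 * HOURS * RATE

    savings = annual_cost(IDLE_W) - annual_cost(S3_W)
    print(f"Idle all year:  ${annual_cost(IDLE_W):.2f}")  # ~$220
    print(f"S3 all year:    ${annual_cost(S3_W):.2f}")    # ~$13
    print(f"Annual savings: ${savings:.2f}")              # ~$208
    ```

    So at those rates, suspending instead of idling is worth a couple hundred dollars a year per machine, which matches the parent's instinct.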

  • Re:I dunno.. (Score:3, Interesting)

    by Nursie ( 632944 ) on Tuesday October 07, 2008 @11:31AM (#25286985)

    Think past "HA" for a second.

    Think about metrics, predictable traffic and planned capacity.

    Think about bringing a percentage of spare capacity online at any one time, in line with predicted peak traffic, and more as the load increases on what's there already.

    HA can still be HA without needing everything on all the time.

    (also, why the hell was my last post modded down as redundant?)

  • by Volante3192 ( 953645 ) on Tuesday October 07, 2008 @11:49AM (#25287337)

    My kingdom for a mod point...

    I built a new system in July: Intel Core 2 Duo E8400, 2GB RAM, ATI 3850, two HDs (one's a Raptor), and the box on idle pulls 81W.

    My old box, an Athlon 1800+ (actual speed: 1350MHz), 2GB RAM, two HDs... idle was in the 130s-140s.

    (Both exclude the monitor, a 20" LCD which pulls 35-40W.)

    So not only did I build me a faster system, it's also nearly half as power-hungry as my old box.

  • by alienw ( 585907 ) <alienw DOT slashdot AT gmail DOT com> on Tuesday October 07, 2008 @11:51AM (#25287357)

    Spinning up and down hard drives: as discussed in plenty of places, including here on /. I believe, you can dramatically reduce the life of drives when you cycle them due to mechanical wear-and-tear.

    Do you have any data on this? This is one of those commonly held beliefs that has absolutely no facts behind it. I've seen a google whitepaper that pretty conclusively debunked commonly held assumptions that drives fail because of temperature and "wear and tear". From a mechanical standpoint, this belief also does not make any sense. The only wear components in a hard drive are bearings on the head and spindle. Spinning down the drive should prolong their life, rather than shortening it.

  • Re:I dunno.. (Score:2, Interesting)

    by greenzrx ( 931038 ) on Tuesday October 07, 2008 @11:51AM (#25287365) Homepage
    Back in the very early '90s we moved into the first 7 World Trade Center. There was no heat in that building; it was air conditioned 365 days a year. People and computers heated all the floors. Unoccupied floors were damn cold in winter, I can tell you.
  • Sod NFS (Score:4, Interesting)

    by Colin Smith ( 2679 ) on Tuesday October 07, 2008 @12:13PM (#25287713)

    Sorry, it's just not worth the pain. Boot to RAM.

    You just set high and low load thresholds for powering servers on and off, plus a load balancer that simply adds a new server to the pool when it notices it's there and removes it when it's gone. So there's no need to try to predict anything.

    5 seconds or 3 minutes, the server boot times are largely irrelevant. If you think you're going to handle a slashdotting this way, you are mistaken; you can't handle one-off events like that. You would have to go from 1 to 100 servers and connections in 5 seconds.

    What it can do is grow really quickly if a service becomes very popular very quickly, or reduce your datacenter costs if it's typically used only 9-5. Or even, dual purpose processing. Servers do X from 9-17 and Y from 15-20.
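    The threshold scheme described above can be sketched in a few lines. This is only an illustration; the thresholds are made up, and whatever actually boots the boxes (wake-on-LAN, IPMI, etc.) is left out:

    ```python
    # Sketch of threshold-based capacity scaling: add a server above a
    # high-load mark, shed one below a low-load mark. The load balancer
    # is assumed to pick up new servers on its own, so no prediction
    # is needed.
    HIGH = 0.75  # average load above which we add capacity
    LOW = 0.25   # average load below which we shed capacity

    def scaling_decision(avg_load, active, total):
        """Return +1 (boot a spare), -1 (shut one down), or 0 (hold)."""
        if avg_load > HIGH and active < total:
            return +1
        if avg_load < LOW and active > 1:
            return -1
        return 0

    print(scaling_decision(0.9, 4, 10))  # overloaded, spares left -> 1
    print(scaling_decision(0.1, 4, 10))  # mostly idle -> -1
    print(scaling_decision(0.5, 4, 10))  # comfortable band -> 0
    ```

    The gap between LOW and HIGH keeps the pool from flapping servers on and off around a single threshold.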


  • by Doc Ruby ( 173196 ) on Tuesday October 07, 2008 @12:26PM (#25287913) Homepage Journal

    That list of myths debunked seems pretty sensible, even in details that run counter to conventional wisdom. But even though the list properly cautions several times that almost any equipment left plugged in will still drain power while doing nothing useful (infinitely bad efficiency), the article still makes an inefficiency mistake:

    Sleeping continues to draw a small amount of power, between 1 and 3 watts, even though the system appears to be inactive. By comparison, Suspend draws less than 1 watt. Even over the course of a year, this difference is probably negligible.

    Over the course of a year, 2 unnecessary watts is 17.532 unnecessary kWh. Sure, that's only about $1.75 at about $0.10/kWh. But that's for each device. At home, in addition to sleeping computers, there are dozens of devices with AC adapters wasting watts most of the day (and night), which is possibly hundreds of dollars wasted. In offices and datacenters, possibly thousands to hundreds of thousands of dollars a year wasted. And each kWh means loads of extra greenhouse CO2 unnecessarily pumped into the sky, even if it's (still) cheap to pollute so recklessly.

    Which is what the One Watt Initiative is designed to minimize. The US government has joined the global efficiency effort, mandating purchases of equipment that consumes no more than 1 watt in standby mode. Whatever the global impact of 3W wasted in standby, it can be cut by two thirds by switching to 1W.

    In the short run, that makes energy bills lower (and, by saving heat from standby devices, further lowers energy costs due to less required cooling). In the long run, we've got more fuel and intact climate left to work with - and that stuff just costs way too much to replace when it runs out.
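    The standby arithmetic above is easy to reproduce; this sketch just multiplies the numbers out, using 8,766 hours per year (365.25 days) and the post's $0.10/kWh rate:

    ```python
    # Yearly energy and cost of a constant standby draw.
    HOURS_PER_YEAR = 24 * 365.25  # 8766 hours

    def yearly_kwh(watts):
        return watts * HOURS_PER_YEAR / 1000

    def yearly_cost(watts, rate_per_kwh=0.10):
        return yearly_kwh(watts) * rate_per_kwh

    print(yearly_kwh(2))         # 17.532 kWh -> the figure in the post
    print(yearly_cost(2))        # ~$1.75 per device at $0.10/kWh
    print(yearly_cost(2) * 100)  # ~$175 for a hundred such devices
    ```

    Trivial per device, but it scales linearly with the device count, which is the post's point.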

    by petermgreen ( 876956 ) on Tuesday October 07, 2008 @12:38PM (#25288131) Homepage

    I saw the Google whitepaper, and it debunked very little about the temperature "myth"; not sure about wear and tear.

    With regard to temperature, the study had a couple of fundamental flaws:

    * The temperature measurements came from the drives themselves. That means if, say, an unreliable hard drive model also underreported its temperature, it would totally skew the results.
    * It was data from servers running in a well-cooled datacenter. That means there was virtually no data about drives running at the kind of temperatures you see in a poorly ventilated desktop in a hot room.

  • Re:I dunno.. (Score:5, Interesting)

    by Lord Apathy ( 584315 ) on Tuesday October 07, 2008 @12:44PM (#25288267)

    There is also more bullshit in that statement than meets the eye. Power cycling a system can cause failure if you have cheap soldering or marginal parts. Powering up a system causes it to heat up, and things expand when they heat up. If you have a solder joint that isn't done right, the expanding and contracting will eventually cause it to break. I've actually seen surface-mounted parts fall off a board because of shoddy soldering.

    Yeah, true, the real problem was shoddy soldering, but the heating cycles helped it along.

  • by UncleTogie ( 1004853 ) * on Tuesday October 07, 2008 @12:50PM (#25288349) Homepage Journal

    You must be the guy who bought my old 386 66. You need to hit the turbo button to get it to boot faster!

    At that clock speed, you meant "486", didn't you?

    BTW, I'm curious why the article's first point doesn't even touch on thermal expansion/contraction...

  • Re:I dunno.. (Score:4, Interesting)

    by TheRaven64 ( 641858 ) on Tuesday October 07, 2008 @12:59PM (#25288501) Journal
    Take a look at the proceedings from the International Conference on Autonomic Computing for the last few years, and you will see papers from universities and companies like Intel and HP describing efficient ways of doing exactly this.
  • by evilviper ( 135110 ) on Tuesday October 07, 2008 @01:39PM (#25289137) Journal

    Data center UPS units take the AC current, convert it to DC, then back again, just so the server can convert it back to DC. Even if you have 95% efficiency at each stage, the conversion losses will add up.

    There's only 2 stages in there that are affected. You aren't going to get DC from the power company.

    And while the final stage is affected, since the servers are getting DC at lower (48V?) voltages, do you really think a DC power supply is possibly going to be any more efficient than an AC one? Just because the input/output numbers are closer together doesn't mean you're saving energy...

    The only thing you're saving is having to convert from the batteries to AC. Well, I've done some research on DC inverters, and I have to tell you, the best top out at 97% efficiency... So, now you're saving just 3% by going DC.

    But you're also going to need larger, more expensive power lines (bus bars) to all the servers, and even with them, you can still expect more line losses than you would with relatively tiny ~240V AC wires.

    And even if there were no line losses, you've still got to make up for the cost of investing in this new system. At well under 3% energy savings, that's going to take a LONG TIME at best.

    People wouldn't be going DC if it didn't result in measurable power savings.

    Companies rolling out DC datacenters are very few and far between. It's probably safe to say they are all just leftover remnants that were in the works back when AC power supplies were hovering around 60% efficiency (because nobody had complained yet), and simply couldn't practically be stopped and reworked once the very high-efficiency AC power supplies came around.

    If you know of any companies that have just started planning a DC datacenter today, point them out. I would certainly like to find out their reasons.
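    The "stages multiply" point is easy to illustrate. The 95%/97% stage efficiencies below are the illustrative figures from this thread, not measurements of any real datacenter:

    ```python
    # Overall efficiency of a power chain is the product of its stages.
    from math import prod

    def chain_efficiency(stages):
        return prod(stages)

    # AC distribution: rectify to DC (UPS batteries), invert back to AC,
    # then the server PSU rectifies to DC again.
    ac_chain = chain_efficiency([0.95, 0.97, 0.95])
    # DC distribution: the 97% inverter stage is skipped.
    dc_chain = chain_efficiency([0.95, 0.95])

    print(f"AC chain: {ac_chain:.3f}")               # ~0.875
    print(f"DC chain: {dc_chain:.3f}")               # ~0.903
    print(f"Difference: {dc_chain - ac_chain:.3f}")  # under 3% of input
    ```

    Which is the parent's point: with a 97%-efficient inverter, skipping that stage recovers less than 3%, before counting bus-bar losses.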

    by Just Some Guy ( 3352 ) on Tuesday October 07, 2008 @01:46PM (#25289243) Homepage Journal

    This is one of those commonly held beliefs that has absolutely no facts behind it.

    The data sheet for my Hitachi HDS721075KLA330 drive rates it at 50,000 load/unload cycles. If it powered up 50 times a day (which would be quite possible in a desktop with aggressive power savings enabled), it's specced to last about 3 years.

    From a mechanical standpoint, this belief also does not make any sense.

    The people who actually built it seem to disagree with you. Hint: a spinning hard drive takes little energy to stay in motion. A stopped hard drive takes quite a bit of torque to spin up to running speed in a small number of seconds.
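    The lifespan arithmetic above works out as follows; 50,000 cycles is the data-sheet figure quoted in the post, and the cycles-per-day values are just illustrative:

    ```python
    # Rated load/unload cycles divided by cycles per day gives days of
    # life, assuming cycle count is the limiting factor.
    RATED_CYCLES = 50_000

    def lifespan_years(cycles_per_day):
        return RATED_CYCLES / cycles_per_day / 365.25

    print(f"{lifespan_years(50):.1f} years at 50 cycles/day")  # ~2.7
    print(f"{lifespan_years(10):.1f} years at 10 cycles/day")  # ~13.7
    ```

    So an aggressive spin-down policy really can burn through the rated cycle budget within a typical desktop's service life.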

  • by initialE ( 758110 ) on Tuesday October 07, 2008 @02:07PM (#25289563)

    If I remember correctly, the power savings is not in using AC or DC; it is in stepping up the voltage so that less current flows, resulting in lower power loss due to the innate resistance of the lines, a process that is possible with either AC or DC. Tesla and Edison fought over that one, comparing the relative safety of AC vs. DC, a war of pure FUD IIRC.
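    The voltage point is just Joule's law: for a fixed power delivered, line loss is I²R, and current falls as voltage rises. A quick sketch; the 10 kW load and 0.5 Ω line resistance are made-up illustrative numbers:

    ```python
    # Line loss for delivering a fixed power over a line of resistance R:
    # I = P / V, loss = I^2 * R. Works the same for AC (RMS) or DC.
    def line_loss(power_w, volts, resistance_ohms):
        current = power_w / volts
        return current ** 2 * resistance_ohms

    P, R = 10_000, 0.5  # 10 kW load, 0.5 ohm line (illustrative)
    print(line_loss(P, 120, R))   # ~3472 W lost at 120 V
    print(line_loss(P, 1200, R))  # ~34.7 W: 10x the voltage, 1/100th the loss
    ```

    Since loss scales with the square of the current, stepping the voltage up by a factor of n cuts resistive loss by n², regardless of whether the line carries AC or DC.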

  • Re:I dunno.. (Score:2, Interesting)

    by operagost ( 62405 ) on Tuesday October 07, 2008 @04:25PM (#25291383) Homepage Journal
    You fail at history. I mean, the Roman Empire alone has a list of debauchery, wastefulness, and unrealized potential 100 leagues long.
  • by Bobfrankly1 ( 1043848 ) on Tuesday October 07, 2008 @07:58PM (#25293917)

    The number shown on the front of darn near 99% of those boxes was set by seating/reseating a few jumpers. It never really did anything on most mobos, often not even being connected, but it sure made the rest of the world see a "difference" in speed when the higher number was toggled onto the display.

    I've heard the bit about jumpers setting the number that appears on the display before, but not the bit about the turbo button not even being hooked up. However, on this particular 386, the turbo proved its difference, especially in boot times and pkzip compression, not to mention the X-Wing and TIE Fighter games.
    Man, that takes me back....
