Power | The Almighty Buck | IT

10 IT Power-Saving Myths Debunked (359 comments)

snydeq writes "InfoWorld examines 10 power-saving assumptions IT has been operating under in its quest to rein in energy costs amid a permanent energy crisis. Under scrutiny, most such assumptions wither. From true CPU efficiency, to the effect of power-cycling frequency on server life span, to SSD power consumption, to switching to DC in the datacenter, get the facts before setting your IT energy strategy."
  • I dunno.. (Score:5, Interesting)

    by Anrego ( 830717 ) * on Tuesday October 07, 2008 @10:02AM (#25286499)

    I'm of the school that thinks "debunking" involves some kind of comprehensive stats or numbers or evidence weighed against strongly held opinions.

    This article is basically a verbose version of the "nuh uh" argument.

    It's not a bad article.. but I would hardly call this "debunking".

    And I totally disagree on point #2 .. maybe having _all_ your extra servers always on is bad.. but if load peaks there is no _way_ someone should be waiting while a system boots.

    • Re:I dunno.. (Score:4, Informative)

      by Nursie ( 632944 ) on Tuesday October 07, 2008 @10:07AM (#25286583)

      That depends on whether your system has been tuned to boot in 5 seconds.

      Or whether it can return from suspend-to-RAM nice and quick.

      • Single page (Score:4, Informative)

        by gnick ( 1211984 ) on Tuesday October 07, 2008 @10:15AM (#25286733) Homepage

        Sorry for the thread hijack, but I decided to post this link as soon as I saw the links to all 4 pages of the top 10 list.
        http://www.infoworld.com/archives/emailPrint.jsp?R=printThis&A=/article/08/10/06/40TC-power-myths_1.html [infoworld.com]

    • Re:I dunno.. (Score:4, Informative)

      by Anonymous Coward on Tuesday October 07, 2008 @10:10AM (#25286641)
      If you're booting those servers diskless with PXE and NFS, the boot time should be negligible. I should imagine the trick would also be to bring additional resources online before you reach the point where you must tell users to wait while the server boots. The magic would be in predicting near-term future use...
      • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Tuesday October 07, 2008 @10:28AM (#25286959)

        ... something like monitoring system usage and bringing additional boxes up when usage hits something like 80%?

        And then suspending boxes when usage drops down to 10%?

        All in all, trying to maintain a 50% utilization level? Maybe with that target being an option the sysadmin could change?

        I'd recommend you patent that idea.
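
        A minimal sketch of that threshold logic in Python; the power_on()/suspend() hooks, the 80%/10% thresholds, and the one-minute polling interval are assumptions here, not any particular vendor's API:

```python
# Toy threshold-based capacity manager: wake a spare box when average
# utilization crosses the high-water mark, park one when it falls below
# the low-water mark. The callables are hypothetical hooks for whatever
# wake-on-LAN / IPMI / suspend mechanism is actually in use.
import time

WAKE_AT = 0.80      # bring another box up at 80% utilization
SUSPEND_AT = 0.10   # suspend one at 10% utilization

def manage(active, spare, get_utilization, power_on, suspend, interval=60):
    while True:
        util = get_utilization(active)          # 0.0 - 1.0 across active boxes
        if util >= WAKE_AT and spare:
            box = spare.pop()
            power_on(box)
            active.append(box)
        elif util <= SUSPEND_AT and len(active) > 1:
            box = active.pop()
            suspend(box)
            spare.append(box)
        time.sleep(interval)
```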

        • by Nursie ( 632944 )

          I'd reckon IBM and VMware probably have that lot wrapped up already. Still, there's no reason (given the patent office's current track record) that you couldn't have one as well.

          • Re: (Score:2, Informative)

            by Sobrique ( 543255 )
            Actually you're pretty close with VMware at the moment - VM instances can be 'hot' migrated, so you can clump them up on one server, power the rest down, and fire them up/migrate when demand picks up. Your response won't be great, but at least you'll be able to respond to dynamic load fluctuation.

            Actually VM tech goes a long way toward doing that anyway, provided you've got a reasonably good picture of your workload fluctuations.

      • Sod NFS (Score:4, Interesting)

        by Colin Smith ( 2679 ) on Tuesday October 07, 2008 @11:13AM (#25287713)

        Sorry, it's just not worth the pain. Boot to RAM.

        You just set high and low load thresholds for powering servers on and off, plus a load balancer that simply adds a new server to the pool when it notices it's there and removes it when it's gone. So there's no need to try to predict anything.

        Whether it's 5 seconds or 3 minutes, server boot times are largely irrelevant. If you think you're going to handle a slashdotting this way, you're mistaken; you can't handle one-off events like that. You'd have to go from 1 to 100 servers and connections in 5 seconds.

        What it can do is let you grow really quickly if a service becomes very popular very fast, or reduce your datacenter costs if it's typically used only 9-5. Or even dual-purpose processing: servers do X from 9-17 and Y from 15-20.
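
        A rough sketch of the "notices it's there" part, assuming a hypothetical /healthz endpoint on each box; real load balancers (HAProxy, nginx, and the like) do the same thing with built-in health checks:

```python
# Toy health-check pool: whatever answers the probe is in rotation,
# whatever doesn't is dropped - no prediction, capacity just follows
# what happens to be booted. Addresses and the /healthz path are made up.
import urllib.request

CANDIDATES = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

def healthy(host, timeout=1.0):
    try:
        with urllib.request.urlopen(f"http://{host}/healthz", timeout=timeout) as r:
            return r.status == 200
    except OSError:
        return False

def current_pool():
    """Return the subset of candidate servers that are up right now."""
    return [h for h in CANDIDATES if healthy(h)]
```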

         

      • Re:I dunno.. (Score:4, Interesting)

        by TheRaven64 ( 641858 ) on Tuesday October 07, 2008 @11:59AM (#25288501) Journal
        Take a look at the proceedings from the International Conference on Autonomic Computing for the last few years, and you will see papers from universities and companies like Intel and HP describing efficient ways of doing exactly this.
    • Re:I dunno.. (Score:5, Informative)

      by gnick ( 1211984 ) on Tuesday October 07, 2008 @10:23AM (#25286863) Homepage

      FTA:

      Hibernate mode in XP saves the state of the system to RAM and then maintains the RAM image even though the rest of the system is powered down.

      They must be using a different version of XP than I am... When I 'Hibernate' my laptop, it dumps the RAM to a file on the hard drive and then powers off completely. When I 'Stand By' my system, it keeps everything in RAM.

      Maybe they have SP4...

      • by halcyon1234 ( 834388 ) <halcyon1234@hotmail.com> on Tuesday October 07, 2008 @10:44AM (#25287245) Journal

        They must be using a different version of XP than I am... When I 'Hibernate' my laptop, it dumps the RAM to a file on the hard drive and then powers off completely.

        You must be using a different version of XP than I am... When I 'Hibernate' my laptop, it attempts to dump the RAM to a file, throws a hissy fit like a coddled freshman after their first exam, fails miserably, flickers the screen, disables the Hibernate option, and then just sits around until the battery drains.

        • Re: (Score:2, Informative)

          by drrck ( 959788 )
          I had the same problem. So I installed a patch (http://support.microsoft.com/?kbid=330909) from Microsoft. Essentially it's looking for a contiguous free area on your HD to save your RAM to. I believe the fix is to disable this "feature".
        • Re: (Score:3, Informative)

          by compro01 ( 777531 )

          I would guess you either have a piece of hardware or a driver that isn't fully ACPI-compatible, or you don't have enough drive space for the hibernate file.

    • Re:I dunno.. (Score:5, Interesting)

      by ShieldW0lf ( 601553 ) on Tuesday October 07, 2008 @10:23AM (#25286865) Journal

      I've got electric heat, and I've got a pile of servers in my spare bedroom, and I never need to turn on the electric heat, because the servers heat my home.

      Which looks to me like an opportunity. People pay for heat. So, put the servers where people need heat, and suddenly a liability is a resource.

      Apartment buildings, office buildings and malls in cold climates should all be prime locations for a datacenter.

      • Re:I dunno.. (Score:5, Informative)

        by snowraver1 ( 1052510 ) on Tuesday October 07, 2008 @10:36AM (#25287089)
        If you are using electric heat, chances are you don't live in a cold climate and you pay for air conditioning for much of the year, negating any "savings". Here in cold-balls Canada, EVERYONE has central heating; it's too expensive to heat with electricity. That being said, I do agree that datacenters' heat should be used to heat useful things (office bldgs, like you suggest).
        • Re: (Score:3, Informative)

          by Aliencow ( 653119 )
          Well I live in Canada, and most people I know use electric heating. Yes, central electric heating is great, and actually cheaper than oil around here. (Montreal area)
          • by geminidomino ( 614729 ) * on Tuesday October 07, 2008 @11:08AM (#25287643) Journal

            Hell, the way things are going, soon hiring a cadre of hookers to rub on you for heat will be less expensive than oil.

          • Re:I dunno.. (Score:5, Informative)

            by compro01 ( 777531 ) on Tuesday October 07, 2008 @12:20PM (#25288805)

            I know a total of 5 people who don't use natural gas for heating, and 4 of them use propane as they're so far out of the way the gas network doesn't reach them. Only 1 guy uses non-central (heating controlled on a room-by-room basis) electric. In terms of raw dollars-per-joule, gas is a way better proposition. Even after the latest electric rate jump (from 6 cents to 9 cents per kWh), gas is still about 1/3 the cost of electric heat.

          • TROC (Score:3, Informative)

            by clarkn0va ( 807617 )

            Well I live in Canada, and most people I know use electric heating...(Montreal area)

            To be fair, when snowraver1 said 'Canada', I think he actually was referring to Alan Fotheringham's 'TROC'(The Rest Of Canada), i.e., the unwashed masses outside of the 401 corridor.

            Here in Alberta, as in much of western TROC, it's good old natural gas.

            db

        • Re: (Score:2, Interesting)

          by greenzrx ( 931038 )
          Back in the very early '90s we moved into the original 7 World Trade Center. There was no heat in that building; it was air conditioned 365 days a year. People and computers heated all the floors. Unoccupied floors were damn cold in winter, I can tell you.
      • I have no control over the heating in my apartment. Thus, I have to have an air conditioner running 24/7/365 because my room gets too hot from the two computers I have running... and they're both sub-gigahertz machines. Heaven help me if I upgrade to a real machine.
      • by AlecC ( 512609 )

        This is indeed a factor in positioning data centres now: given the choice, they put them in a cold climate, and some of them are operating a shared heating system.

    • If you have a shit system that's really slow and badly written, display the following:

      "The sub-optimal response you are experiencing will soon be resolved as we are utilising quantum replicators to produce more server hardware for your request. Once complete we will travel back in time and resubmit your request. Thank you for using One-Born-Every-Minute hosting. Have a nice day."

      (Speaking from experience - different text, same message).

    • Re:I dunno.. (Score:5, Insightful)

      by R2.0 ( 532027 ) on Tuesday October 07, 2008 @10:40AM (#25287197)

      I stopped reading at #1: "Fact: The same electrical components that are used in IT equipment are used in complex devices that are routinely subjected to power cycles and temperature extremes, such as factory-floor automation, medical devices, and your car."

      Well, yes, except for the fact that it's a total lie. Cars, factory automation, and medical devices most certainly do NOT use "the same" components. While they may do the same things, and even be functionally equivalent, they are rated to much higher temperature and stress levels than consumer- or even server-grade components. Just ask the folks who have been trying to install "in-car" PCs with consumer-grade components.

      • Re:I dunno.. (Score:5, Interesting)

        by Lord Apathy ( 584315 ) on Tuesday October 07, 2008 @11:44AM (#25288267)

        There is also more bullshit in that statement than meets the eye. Power cycling a system can cause failure if you have cheap soldering or marginal parts. Powering up a system causes it to heat up, and things expand when they heat up. If a solder joint isn't done right, the expanding and contracting will eventually cause it to break. I've actually seen surface-mounted parts fall off a board because of shoddy soldering.

        Yeah, true, the real problem was shoddy soldering, but the heating cycles helped it along.

    • by sorak ( 246725 ) on Tuesday October 07, 2008 @10:54AM (#25287403)

      For a Web site, put up a static page asking users to wait while additional resources are brought online.

      We're sorry for the inconvenience, but our systems seem to have been shut down. We've asked leroy, rufus, and heraldo to hit the power button, and we assure you that, once they've found that button, they will push it, and then, once the mandatory scandisk operation has completed, the Windows server screen will appear, and once the kernel operations have completed, the services you have requested will be available.

      And that will be awesome!

      While you're waiting, here are some links to our competitors' sites. Remember to open them in a new tab, so you can occasionally come back and hit "refresh". We promise, we're almost ready to serve you.

      • Re:I dunno.. (Score:4, Informative)

        by SuperQ ( 431 ) * on Tuesday October 07, 2008 @11:13AM (#25287723) Homepage

        Yeah, I don't know who wrote that bit in the article, but they're just dumb. If you run any kind of system with a load balancer in front of it, you can easily script starting up additional machines as soon as your monitoring says you've reached 90% capacity.

    • Re:I dunno.. (Score:4, Informative)

      by sexconker ( 1179573 ) on Tuesday October 07, 2008 @12:20PM (#25288809)

      "Myths" 1-4 are true

      I haven't heard "Myth" 5 since 1999.

      "Myth" 6 is also true.

      When a system suspends to disk, it uses no power.
      When a system suspends to RAM, it uses VERY LITTLE (strobe) power, and you can configure wireless adapters and USB devices to be turned OFF when you suspend to RAM. (I'm using "suspend" for both cases - FUCK the sleep/suspend/standby/hibernate/whatever for 2 different states bullshit.)
      A laptop's charging circuitry and AC adapter are independent of the power state, so of course the adapter is going to be running all the time to keep the battery charged and power the system.

      They admit that the power use is negligible when suspending to disk or RAM (and probably running 3 wireless mice that don't turn off, in an idiotic attempt to boost their non-existent numbers).

      They don't admit that they couldn't find anyone who thought that the green light on the power brick meant it was off and using no power.

      Myth 7 is true as well.
      NiCd batteries do suffer from memory effects, and their capacity decreases over time. Conditioning a NiCd will remove the memory effect, but will not restore lost capacity due to general age.

      NiMH batteries have much less of a memory effect, and less of a capacity loss through age. There is no need to condition a NiMH battery. Just drain it fully and then recharge it in a cheapo dumb charger, or buy a better charger (which will likely advertise a battery conditioning feature anyway).

      LiIon batteries do lose capacity over time. If a cell (the smaller cells, not the 6 or 9 individual batteries in your laptop's battery pack) is completely depleted, it won't recharge again. If a cell is overcharged (or overheated), it will pop, and you've lost that capacity - and maybe your pants + laptop if the damn thing catches fire.

      "Myth" 8 is true, as long as you remember that the hard drive is just one item drawing juice in a system.

      "Myth" 9 is true, as long as you do it right.
      The problem with DC is that you lose power over distance. Converting from AC to DC in a dedicated box can be more efficient than any individual server power supply, more reliable, and can output cleaner power.
      The issue is distance.

      "Myth" 10 is true. "As soon as possible" means "When the servers are on fire or when we're 6 months overdue on our replacement cycle, whichever comes first...maybe". Energy costs are through the roof, and it makes sense for that to be a high priority in determining what you buy. You may even want to buy a more efficient server/power supply/switch/UPS/line conditioner EARLY if your budget allows for it. We all know that any money sitting around unused will get grabbed up by someone else, so use it or lose it.

      That replaced equipment still has value (especially if you replace it early), and if you can resell it, you'll usually wind up ahead in the long run.

  • Sleep != Hibernate (Score:5, Informative)

    by Taimat ( 944976 ) on Tuesday October 07, 2008 @10:07AM (#25286575)

    Myth No. 6: A notebook doesn't use any power when it's suspended or sleeping. USB devices charge from the notebook's AC adapter. Fact: Sleep (in Vista) or Hibernate mode in XP saves the state of the system to RAM and then maintains the RAM image even though the rest of the system is powered down. Suspend saves the state of the system to hard disk, which reduces the boot time greatly and allows the system to be shut down. Sleeping continues to draw a small amount of power, between 1 and 3 watts, even though the system appears to be inactive. By comparison, Suspend draws less than 1 watt. Even over the course of a year, this difference is probably negligible.

    um... Hibernate != Sleep. Hibernate in XP saves the RAM to the hard drive and powers off. Suspend keeps the RAM powered....

    • Mod up! The article got this one completely backwards.
    • by EvilRyry ( 1025309 ) on Tuesday October 07, 2008 @10:25AM (#25286905) Journal

      Using my handy Kill A Watt, I tested how much power my desktop (not including accessories) draws while off, on and idle, on and under load, and in S3 suspend.

      Off - 6 W
      Idle - 140 W (dropped from 152 W after installing a tickless kernel)
      Loaded - 220 W
      S3 - 8 W

      Ever since I ran that test, I put my machine into suspend at every opportunity. 140W is a lot of juice in the land of $0.18/kWh.
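
      Back-of-the-envelope math on what that difference costs, as a small Python sketch; the 140 W / 8 W and $0.18/kWh figures are from the post above, the 24/7 duty cycle is an assumption:

```python
# Annual electricity cost of leaving the box idling versus suspending to S3.
IDLE_W, S3_W = 140, 8        # measured draws from the Kill A Watt above
RATE = 0.18                  # dollars per kWh
HOURS = 24 * 365

def annual_cost(watts):
    return watts / 1000 * HOURS * RATE

print(f"Idle 24/7: ${annual_cost(IDLE_W):.2f}/yr")   # roughly $220
print(f"S3 24/7:   ${annual_cost(S3_W):.2f}/yr")     # roughly $13
```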

      • by afidel ( 530433 ) on Tuesday October 07, 2008 @10:30AM (#25286977)
        How about building an energy-efficient PC? I have a low-power AMD 64 X2 with a GeForce 7600GS, 2 HDDs, 2GB of RAM, a TV tuner, and an 85% efficient PSU, and I peak at around 150W; drawing 140W at idle is insane. For the next generation of games I'm thinking about upgrading to a 9600 GSO, but that will up my idle and peak numbers by at least 20W, so I'm holding off till I get a game that really needs it.
        • by Volante3192 ( 953645 ) on Tuesday October 07, 2008 @10:49AM (#25287337)

          My kingdom for a mod point...

          I built a new system in July - Intel Core2Duo E8400, 2GB RAM, ATI 3850, two HDs (one's a Raptor) - and the box on idle pulls 81W.

          My old box, an Athlon 1800+ (actual speed: 1350MHz), 2GB RAM, two HDs... idle was in the 130s-140s.

          (Both figures exclude the monitor, a 20" LCD which pulls 35-40W)

          So not only did I build myself a faster system, it's nearly half as power hungry as my old box.

          • OK, I have a stupid question. How do you tell how much power a component is going to pull before you buy it?

            Looking around newegg, I only see a handful of "green" items that advertise their low power usage.

            For the record, my system pulls 120 W idle, 230 running CoHOF, and 5 in S3. It is extremely overclocked and mostly older components which tends to skew things, but I'm looking to upgrade and wouldn't mind saving a few bucks in energy costs in the long term.

            • by 644bd346996 ( 1012333 ) on Tuesday October 07, 2008 @11:59AM (#25288491)

              http://anandtech.com/casecoolingpsus/showdoc.aspx?i=3413 [anandtech.com]

              Even though the article is about power supplies, it has quite a bit of information about how much power various components draw.

            • by tknd ( 979052 ) on Tuesday October 07, 2008 @12:27PM (#25288905)

              For the record, my system pulls 120 W idle, 230 running CoHOF, and 5 in S3. It is extremely overclocked and mostly older components which tends to skew things, but I'm looking to upgrade and wouldn't mind saving a few bucks in energy costs in the long term.

              5 watts in S3 is pretty bad in my book. Disconnect all USB devices and check your S3 power consumption again. If it is still high, most likely the PSU you have is not efficient. It could also come from other things like the motherboard, but most of the time it is the PSU. If your system idles at 120W and draws 230W during load, you might be able to run with as little as a good 350W-rated PSU. For example, if your current PSU is around 70% efficient and you replaced it with an 80% efficient one, then during load your 230W draw would drop to around 201W. But you'll have to check and see if you can find the efficiency numbers for your current PSU.
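
              A quick sanity check of that efficiency arithmetic; the 70% and 80% figures are the hypothetical swap described above, not measurements:

```python
# Wall draw needed to feed the same DC load through PSUs of different efficiency.
def wall_draw(dc_load_w, efficiency):
    return dc_load_w / efficiency

dc_load = 230 * 0.70                     # ~161W actually reaching the components now
print(round(wall_draw(dc_load, 0.80)))   # ~201W at the wall with an 80% efficient PSU
```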

              How do you tell how much power a component is going to pull before you buy it?

              There's no single source, but there are some useful websites.

              80plus.org [80plus.org]
              Silent PC Review [silentpcreview.com] They generally provide both noise and power consumption measurements in their reviews
              Silent PC Review Forums [silentpcreview.com] More anecdotal, but at this point it is still good data. Many users post their own tests and measurements on the boards. It helps you get an idea of what's achievable and what isn't. There are also some nicely compiled charts that combine data from different sources. I find the numbers are sometimes inaccurate but not too far off.

      • Comment removed based on user account deletion
        • by Kjella ( 173770 )

          0.14kW *0.18$/kWh * 8*200h = 40$? at years? of electricity? not a whole saving you could get from reducing that. or I'm doing it all wrong?

          Something's wrong about the units but I think the figure is right. Of course, it also depends on what climate you're in, if you're running an AC to get rid of those 140W again that costs too, while here up north a lot of the time it supplements other heating in the winter so it actually costs less. Anyway I haven't bothered to check but I imagine my 42" TV draws more when it's on so it's not like the computer is the big sinner. For me lower power is about less fans and less noise, there's no way 140W draw i

      • by cpghost ( 719344 )

        140W is a lot of juice in the land of $0.18/kWh.

        It's even a lot more in Europe, at approx. EUR 0.25/kWh ($0.34/kWh)...

  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Tuesday October 07, 2008 @10:11AM (#25286665)
    Comment removed based on user account deletion
    • Option 3 looks good to me.

      "Oh. The only reason we use Citrix is to run Outlook in it."

      Outlook Web Access (OWA). There, I've just saved you 10 minutes per day, or around 60 hours per year. I don't know how many members of staff your company has, but if you're using Citrix, I'd imagine quite a lot... let's say 500.

      500 x 60 = 30000 man hours per year saved by switching on OWA and NATing port 443.

      • by swb ( 14022 )

        But OWA has a much thinner feature set, and over time they'll lose more time working around OWA's limitations than they would have spent waiting for Citrix.

    • Not only that. (Score:3, Interesting)

      by khasim ( 1285 )

      We have a similar problem here that I've not been allowed to fix yet.

      The employees typically turn on their computers and then LEAVE THE OFFICE to get Starbucks coffee or whatever. A 10 minute wait turns into 30 minutes of non-productivity.

      The computers should be the same as the phones. Instant on - any time - every time.

    • by afidel ( 530433 )
      Um, your computer is way underpowered or your IT department sucks, because 15 minutes to boot is crazy. I have an old T42 with a 4200 rpm HDD and it only takes about 5 minutes to boot, and that's with multiple server-type services installed (I have two copies of MSDE installed, if that tells you anything). Also, Citrix logon times at my shop are ~90 seconds on average, and they will be more like 30 once I get the users' profiles onto a faster file server.
      • by RMH101 ( 636144 ) on Tuesday October 07, 2008 @10:42AM (#25287217)
        It's possibly a combination of the two. My old work laptop (Tosh Centrino, 1.6 or 1.8GHz, 1GB RAM, Win2K) used to take around 12 minutes to boot from cold. Quite a bit of that was due to the Pointsec full-disk encryption software, followed by SAV, followed by the usual corporate crippleware. Horrible. In the end it became a tethered desktop, as I couldn't be bothered taking it anywhere.
    • Re: (Score:2, Funny)

      You must be the guy who bought my old 386 66. You need to hit the turbo button to get it to boot faster!
      • Re: (Score:3, Interesting)

        by UncleTogie ( 1004853 ) *

        You must be the guy who bought my old 386 66. You need to hit the turbo button to get it to boot faster!

        At that clock speed, you meant "486", didn't you?

        BTW, I'm curious why the article's first point doesn't even touch on thermal expansion/contraction...

  • by Anonymous Coward on Tuesday October 07, 2008 @10:13AM (#25286703)

    Myth No. 3: The power rating (in watts) of a CPU is a simple measurement of the system's efficiency.
    Fact: Efficiency is measured in percentage of power converted, which can range from 50 to 90 percent or more. The AC power not converted to DC is lost as heat...Unfortunately, it's often difficult to tell the efficiency of a power supply, and many manufacturers don't publish the number.

    I'm not sold on taking advice from someone who doesn't understand the difference between the wattage rating of a CPU and the wattage rating of the power supply. They're completely different components.

    • Re: (Score:2, Funny)

      by Vagnaard ( 1366015 )
      CPU and PSU are close; this might have been an abbreviation error.

      Still, shame on them for not editing their stuff properly (if they meant PSU).

    • by Anonymous Coward on Tuesday October 07, 2008 @10:21AM (#25286845)

      I like how this plays with the following assertion filed under "Myth No. 9: Going to DC power will inevitably save energy."

      "New servers have 95 percent efficient power supplies, so any power savings you might have gotten by going DC is lost in the transmission process."

      So, when it suits his argument, power supply efficiencies range from 50-90% and are kept hidden by manufacturers. Then, when that doesn't suit his argument, all of a sudden power supplies are at least 95% efficient, and everyone knows it.

      I call shenanigans!

      • It's called 'moving the goalposts'. It's like the author decided he was going to write about how wrong everyone was about power usage in computers, then found out that the myths were actually correct.

        Oops.
    • Myth No. 9: Going to DC power will inevitably save energy.
      Fact: Going to DC power entails removing the power supplies from a rack of servers or all the servers in a datacenter and consolidating the AC-DC power supply into a single unit for all the systems. Doing this may not actually be more efficient since you lose a lot of power over the even relatively small distances between the consolidated unit and the machines. New servers have 95 percent efficient power supplies, so any power savings you might have gotten by going DC is lost in the transmission process. Your savings will really depend on the relative efficiency of the power supplies in the servers you're buying as well as the one in the consolidated unit.

      This is completely wrong. The author missed two of the three power conversions that take place in a data center. Data center UPS units take the incoming AC, convert it to DC, then convert it back to AC, just so the server can convert it back to DC again. Even with 95% efficiency at each stage, the conversion losses add up.

      People wouldn't be going DC if it didn't result in measurable power savings.
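
      A quick illustration of how those stages compound; the 95% figure is the one quoted from the article, and the three-stage chain (UPS AC-to-DC, UPS DC-to-AC, server PSU AC-to-DC) is the one described above:

```python
# Compounded conversion losses: three 95%-efficient stages vs. one.
per_stage = 0.95
print(f"One stage:    {per_stage:.1%}")        # 95.0%
print(f"Three stages: {per_stage ** 3:.1%}")   # ~85.7% end-to-end
```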

      • With the US power system, you do avoid four high-loss (DC:AC or AC:AC) power conversions and replace them with two lower-loss (DC:DC) conversions. But compared to rest-of-world electrical systems, you only save two AC:AC conversions, which will just gain you two points or so.

        I like 600VDC as a solution, but it will only work well for the biggest consumers, where you can justify a significant increase in capital cost with the energy savings. It's nice to have a single 4.8MW critical power bus (with a couple of spares)-

      • Re: (Score:3, Interesting)

        by evilviper ( 135110 )

        Data center UPS units take the AC current convert to DC then back again just so the server can convert it back to DC. Even if you have 95% efficiency at each stage the conversion losses will add up.

        There are only 2 stages in there that are affected. You aren't going to get DC from the power company.

        And while the final stage is affected, since the servers are getting DC at lower (48V?) voltages, do you really think a DC power supply is possibly going to be any more efficient than an AC one? Just because the

      • Re: (Score:3, Interesting)

        by initialE ( 758110 )

        If I remember correctly, the power savings are not in using AC or DC; they come from stepping up the voltage so that less current flows, resulting in lower power loss due to the innate resistance of the lines - a process that is possible using either AC or DC. Tesla and Edison fought over that one, comparing the relative safety of AC vs DC, a war of pure FUD IIRC.
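
        A small worked example of that point, using assumed numbers (same 4.8 kW load, same 0.05 ohm of cable resistance, delivered at three different voltages):

```python
# I^2 * R line loss for the same delivered power at different voltages:
# higher voltage means less current, and loss falls with the square of current.
P = 4800.0   # watts delivered (illustrative)
R = 0.05     # ohms of cable resistance (illustrative)

for volts in (12, 48, 230):
    amps = P / volts
    loss = amps ** 2 * R
    print(f"{volts:>3} V: {amps:6.1f} A, {loss:7.1f} W lost in the cable")
# 12 V: 400 A -> 8000 W lost; 48 V: 100 A -> 500 W; 230 V: ~21 A -> ~22 W
```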

    • Didn't they learn you that the CPU is the big box with the blinkin' lights?
  • Debunk this (Score:5, Insightful)

    by sargeUSMC ( 905860 ) on Tuesday October 07, 2008 @10:15AM (#25286737)
    Taking ten suppositions and making suppositions about those suppositions (I'm getting dizzy) is not debunking. All I see here is lots of questionable, completely unattributed information. For example: "The average 17-inch LCD monitor consumes 35 watts of electricity". Really? Where did this information come from? Did you pull it from the glossy for a 17" monitor? Did you just test your own monitor? Did you test a large sample of monitors? Did you pull it from a study? Out of your ass?
    • At the risk of sounding like an idiot, that is a fairly accurate guess for 17" LCDs (TN panels anyway).
      Since pretty much all 17" LCDs are TN rather than high-contrast panels, it doesn't really hurt to generalize.

      Power ratings on monitors aren't like the ratings on computer power supplies. By effectively estimating average and peak power draw the manufacturer can save money. If the AC adapter is rated to handle too little power, the adapter or monitor will prematurely fail. If the adapter is rated too highl

    • by halcyon1234 ( 834388 ) <halcyon1234@hotmail.com> on Tuesday October 07, 2008 @10:49AM (#25287325) Journal
      They have a 17" monitor up their asses? Well, good to know goatse guy is getting steady work these days, even if he isn't well-versed in the scientific method.
  • Power (Score:4, Funny)

    by Windows_NT ( 1353809 ) on Tuesday October 07, 2008 @10:23AM (#25286881) Homepage Journal
    Turning off your computer is always a good time to give the hamsters food and water and let them rest, so in the morning your computer will be nice and fast. If it takes your parents' computer 15 minutes to boot, their hamster needs to lose some weight.
  • by Taibhsear ( 1286214 ) on Tuesday October 07, 2008 @10:25AM (#25286911)

    Did the definitions of 'fact' and 'debunk' change recently? Every 'myth' listed has a 'fact' under it proving it is true. According to my good friend Mr. Webster, this is called 'confirmation.'

  • Microsoft's Windows Messenger (MSN Messenger, Live Messenger... whatever they call it these days) Group wrote an awesome abstract [microsoft.com] of how they cycle servers on & off to handle the load while saving power.

    Now, for reasons pointed out in other comments, TFA from Infoworld is a mix of good info and horseshit.

  • Myth #2 suggests making your customers wait. That might work in super-mega-corporate land where your customers are literally married to you and queues in Tech Support are "profitable."

    I would *deserve* to be fired if I made a customer wait. Of course, that sense of urgency doesn't work in super-mega-corporate entities either.

    The myth about going to DC to be more efficient is painful too. If a manager in a workplace would entertain a crackpot idea like that, I'd leave.

  • Suspend saves the state of the system to hard disk, which reduces the boot time greatly and allows the system to be shut down. Sleeping continues to draw a small amount of power, between 1 and 3 watts, even though the system appears to be inactive. By comparison, Suspend draws less than 1 watt. Even over the course of a year, this difference is probably negligible.

    I though "suspend to disk" and "hibernate" were synonyms. A suspended computer shouldn't draw any more power than a computer that's turned off, b

  • by pmandryk ( 858235 ) on Tuesday October 07, 2008 @10:53AM (#25287391)
    My favourite story (or urban legend) is when an employee came in to an IT shop on the weekend and shut down all of the A/C cooling units for the Data Centre. He claimed that he was "going 'Green' and saving power" because "...all of those computers in that room have their own fans." I'm pretty sure he was let go after that...or promoted to management.
  • by sheldon ( 2322 ) on Tuesday October 07, 2008 @10:53AM (#25287393)

    That's a really bad article. Wow, worse than anything I remember them writing before.

  • Bad article... (Score:3, Informative)

    by Junta ( 36770 ) on Tuesday October 07, 2008 @11:03AM (#25287551)

    Outright potentially wrong: the assumption that no one cares if a customer is made to wait for a server to boot before getting served. That's not a generalization to be made lightly... It is true, though, that suspend-to-RAM has not received the attention it deserves in the data center. A great many server-class systems and options are not designed to cope with suspend-to-RAM, so you must be careful about banking on it. The industry should correct this, but a facility can't count on it yet (just put pressure on your vendors to make it so...).

    Straw man: a supposed 'myth' that leaving LCD monitors on is fine for energy savings, with the remarkable clarification that turning them off saves more power... Who would have thought?

    Other straw man: you will unconditionally save money by rapidly upgrading to the latest efficient technology. I don't think anyone is foolish enough to believe that compulsively following any technical treadmill will lead to an overall financial gain.

  • by flyingfsck ( 986395 ) on Tuesday October 07, 2008 @11:10AM (#25287677)
    Probably the biggest and most annoying/disruptive power-saving myth is Daylight Saving Time. Every year, the power companies announce that they don't notice any change whatsoever in power consumption.
  • by Doc Ruby ( 173196 ) on Tuesday October 07, 2008 @11:26AM (#25287913) Homepage Journal

    That list of myths debunked seems pretty sensible, even in details that run counter to conventional wisdom. But even though the list properly cautions several times that most any equipment left plugged in will still drain power while doing nothing useful (infinitely bad efficiency), the article still makes an inefficiency mistake:

    Sleeping continues to draw a small amount of power, between 1 and 3 watts, even though the system appears to be inactive. By comparison, Suspend draws less than 1 watt. Even over the course of a year, this difference is probably negligible.

    Over the course of a year, 2 unnecessary watts is 17.532 unnecessary kWh. Sure, that's only about $1.75 at about $0.10/kWh [doe.gov]. But that's for each device. At home, in addition to sleeping computers, there are dozens of devices with AC adapters wasting watts most of the day (and night), which is possibly hundreds of dollars wasted. In offices and datacenters, possibly thousands to hundreds of thousands of dollars a year wasted. And each kWh means loads of extra greenhouse CO2 unnecessarily pumped into the sky, even if it's (still) cheap to pollute so recklessly.

    Which is what the One Watt Initiative [wikipedia.org] is designed to minimize. The US government has joined the global efficiency effort, mandating purchases of equipment that consumes no more than 1 watt in standby mode. Whatever the global impact of 3W wasted in standby, it can be cut by 2/3 by switching to 1W.

    In the short run, that makes energy bills lower (and, by saving heat from standby devices, further lowers energy costs due to less required cooling). In the long run, we've got more fuel and intact climate left to work with - and that stuff just costs way too much to replace when it runs out.

  • Google developed their own power supply, and open-sourced the hardware, saying it saves them tons of energy and the rest of the world should use it. Mind you, it is DC, and it means a total DC data center, but really that isn't a bad idea.

    Virtualization is also the way to go to save power. Fewer servers.

    • Re:Google (Score:5, Informative)

      by Animats ( 122034 ) on Tuesday October 07, 2008 @11:43AM (#25288245) Homepage

      Google developed their own power supply

      Actually, Google's point was that they wanted motherboards that ran on 12 VDC only. [64.233.179.110] PC power supplies are still providing +12, -12, +5, -5, and +3.3v. Most of those voltages are there for legacy purposes, and DC-DC converters on the motherboard are doing further conversions anyway. So there's no reason not to make motherboards that only need 12 VDC. Disks are already 12 VDC only, so this gets everything on one voltage. This simplifies the power supply considerably, and avoids losses in producing some voltages that aren't used much.

      But Google wasn't talking about using 12 VDC distribution within the data center. The busbars required would be huge at such a low voltage. They were talking about using 12 VDC within each rack. Distribution within the data center would still be 110 or 220 VAC.

      • Re: (Score:3, Informative)

        by PAjamian ( 679137 )

        Disks are already 12 VDC only

        Actually HDDs use +12v for the motors and +5v for the electronics. If you have a 3.5" FDD it only uses 5v. If you don't believe me try swapping the yellow (12v) and red (5v) wires going into the power connector on your HDD some time ... here's a hint, the smoke you see coming off the electronics isn't from putting 5v into something that expects 12v (note if you're really dumb enough to do this I won't be held responsible for ruining your HDD).

  • by sirwired ( 27582 ) on Tuesday October 07, 2008 @12:16PM (#25288739)

    Tip number seven talks about battery conservation in LiIon vs. NiCd batteries. Um, laptops haven't used NiCds in years. LiIon's predecessor, NiMH, hasn't been used in laptops in quite a while either.

    Can you even buy NiCd's anymore, for any device? I can't remember the last time I saw them in an electronics store.

    SirWired
