10 IT Power-Saving Myths Debunked
snydeq writes "InfoWorld examines 10 power-saving assumptions IT has been operating under in its quest to rein in energy costs amid the permanent energy crisis. Under scrutiny, most such assumptions wither. From true CPU efficiency, to the life-span effect of power-down frequency on servers, to SSD power consumption, to switching to DC in the datacenter, get the facts before setting your IT energy strategy."
I dunno.. (Score:5, Interesting)
I'm of the school that thinks "debunking" involves some kind of comprehensive stats or numbers or evidence weighed against strongly held opinions.
This article is basically a verbose version of the "nuh uh" argument.
It's not a bad article.. but I would hardly call this "debunking".
And I totally disagree on point #2 .. maybe having _all_ your extra servers always on is bad.. but if load peaks there is no _way_ someone should be waiting while a system boots.
Re:I dunno.. (Score:4, Informative)
That depends if your system has been tuned to boot in 5 seconds.
Or if it can return from suspend-to-ram nice and quick.
Single page (Score:4, Informative)
Sorry for the thread hijack, but I decided to post this link as soon as I saw the links to all 4 pages of the top 10 list.
http://www.infoworld.com/archives/emailPrint.jsp?R=printThis&A=/article/08/10/06/40TC-power-myths_1.html [infoworld.com]
Re: (Score:3, Interesting)
Think past "HA" for a second.
Think about metrics, predictable traffic and planned capacity.
Think about bringing a percentage of spare capacity online at any one time, in line with predicted peak traffic, and more as the load increases on what's there already.
HA can still be HA without needing everything on all the time.
(also, why the hell was my last post modded down as redundant?)
Re: (Score:3, Insightful)
(also, why the hell was my last post modded down as redundant?)
Probably because a similar point was already made in TFA:
You can also select systems that cold-boot rapidly. Model to model and brand to brand, servers exhibit wide variances in power-up delay. This metric isn't usually measured, but it becomes relevant when you control power consumption by switching off system power. It needn't take long. Servers or blades that boot from a snapshot, a copy of RAM loaded from disk or a SAN can go from power-down mode to work-ready in less than a minute. The most efficient members of a reserve/disaster farm can quiesce in a suspend-to-RAM state rather than be powered down fully so that wake-up does not require BIOS self-test or device querying and cataloging, two major sources of boot delay.
Re: (Score:2)
Which thereby "debunks" point one: the power-on stresses measured in a cold boot won't show up if systems resume from hibernate instead of cold-booting. But the effects, if any, will still be there, because the temperature still cycles.
You mean ... (Score:5, Funny)
... something like monitoring system usage and bringing additional boxes up when usage hits something like 80%?
And then suspending boxes when usage drops down to 10%?
All in all, trying to maintain a 50% utilization level? Maybe with the utilization setting being an option that the sysadmin could change?
I'd recommend you patent that idea.
Re: (Score:2)
I'd reckon IBM and VMware probably have that lot wrapped up already. Still, there's no reason (given the patent office's current record) that you couldn't have one as well.
Re: (Score:2, Informative)
Actually VM tech goes a long way to doing that anyway, provided you've a vaguely good concept of workload fluctuations.
Sod NFS (Score:4, Interesting)
Sorry, it's just not worth the pain. Boot to RAM.
You just set high and low load thresholds for powering servers on and off, plus a load balancer that simply adds a new server to the pool when it notices it's there and removes it when it's gone. So there's no need to try to predict anything.
5 seconds or 3 minutes, server boot times are largely irrelevant. If you think you're going to handle a slashdotting this way, you are mistaken; you can't handle one-off events that would require going from 1 to 100 servers and connections in 5 seconds.
What it can do is grow really quickly if a service becomes very popular very quickly, or reduce your datacenter costs if a service is typically used only 9-5. Or even dual-purpose processing: servers do X from 9-17 and Y from 15-20. (A sketch of the threshold loop is below.)
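A minimal sketch of that threshold loop in Python. The pool lists and the get_pool_load/power_on/power_off hooks are hypothetical stand-ins; in real life you'd wire them to your load balancer and to IPMI or wake-on-LAN:

    import time

    HIGH_WATER = 0.80   # bring a standby server up above 80% average load
    LOW_WATER = 0.10    # power an active server down below 10%
    MIN_SERVERS = 2     # never scale below this floor

    # Stub hooks; replace with calls to your LB and power-management tooling.
    def get_pool_load(active):
        return 0.5  # pretend the pool sits at 50% utilization

    def power_on(server):
        print(f"powering on {server}")

    def power_off(server):
        print(f"powering off {server}")

    def scaling_loop(active, standby):
        while True:
            load = get_pool_load(active)      # 0.0-1.0 across active servers
            if load > HIGH_WATER and standby:
                active.append(standby.pop())  # LB picks it up once it's booted
                power_on(active[-1])
            elif load < LOW_WATER and len(active) > MIN_SERVERS:
                standby.append(active.pop())  # LB drops it as it goes away
                power_off(standby[-1])
            time.sleep(60)                    # re-evaluate every minute

No prediction needed: the loop only reacts to measured load.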
Re:I dunno.. (Score:5, Informative)
FTA:
Hibernate mode in XP saves the state of the system to RAM and then maintains the RAM image even though the rest of the system is powered down.
They must be using a different version of XP than I am... When I 'Hibernate' my laptop, it dumps the RAM to a file on the hard drive and then powers off completely. When I 'Stand By' my system, it keeps everything in RAM.
Maybe they have SP4...
Re:I dunno.. (Score:5, Funny)
You must be using a different version of XP than I am... When I 'Hibernate' my laptop, it attempts to dump the RAM to a file, throws a hissy fit like a coddled freshman after their first exam, fails miserably, flickers the screen, disables the Hibernate option, and then just sits around until the battery drains.
Re: (Score:3, Informative)
I would guess you either have a piece of hardware or driver that isn't fully ACPI-compatible or you don't have drive space for the hibernate file.
Re:I dunno.. (Score:5, Interesting)
I've got electric heat, and I've got a pile of servers in my spare bedroom, and I never need to turn on the electric heat, because the servers heat my home.
Which looks to me like an opportunity. People pay for heat. So, put the servers where people need heat, and suddenly a liability is a resource.
Apartment buildings, office buildings and malls in cold climates should all be prime locations for a datacenter.
Re:I dunno.. (Score:5, Funny)
Hell, the way things are going, soon hiring a cadre of hookers to rub on you for heat will be less expensive than oil.
Re:I dunno.. (Score:4, Funny)
I like your thinking! But wait, on second thought forget the hookers and the rubbing.. no wait, that isn't right..
Re:I dunno.. (Score:5, Informative)
I know a total of 5 people who don't use natural gas for heating, and 4 of them use propane because they're so far out of the way the gas network doesn't reach them. Only 1 guy uses non-central (heating controlled room by room) electric. In terms of raw dollars-per-joule, gas is a way better proposition. Even after the latest electric rate jump (from 6 cents to 9 cents per kWh), gas is still about 1/3 the cost of electric heat.
TROC (Score:3, Informative)
Well I live in Canada, and most people I know use electric heating...(Montreal area)
To be fair, when snowraver1 said 'Canada', I think he actually was referring to Alan Fotheringham's 'TROC'(The Rest Of Canada), i.e., the unwashed masses outside of the 401 corridor.
Here in Alberta, as in much of western TROC, it's good old natural gas.
db
Re: (Score:3, Funny)
I'm not sure I would want to put my balls in the fireplace if the power or gas goes out.
Re: (Score:2)
What is the electricity rate in Montreal/Quebec?
Re: (Score:3, Funny)
From South Texas. I pay 20 cents/kWh for electricity. Unfortunately, there is no non-electricity version of air conditioning, so I cry myself to sleep at night (and yes, it's still fricken hot here).
Re: (Score:2)
This is indeed a factor in positioning data centres now: given the choice, operators put them in a cold climate, and some of them run a shared heating system.
Loving Myth #2 tip... (Score:2)
If you have a shit system that's really slow and badly written, display the following:
"The sub-optimal response you are experiencing will soon be resolved as we are utilising quantum replicators to produce more server hardware for your request. Once complete we will travel back in time and resubmit your request. Thank you for using One-Born-Every-Minute hosting. Have a nice day."
(Speaking from experience - different text, same message).
Re:I dunno.. (Score:5, Insightful)
I stopped reading at #1: "Fact: The same electrical components that are used in IT equipment are used in complex devices that are routinely subjected to power cycles and temperature extremes, such as factory-floor automation, medical devices, and your car."
Well, yes, except for the fact that it's a total lie. Cars, factory automation, and medical devices most certainly do NOT use "the same" components. While they may do the same things, and even be functionally equivalent, they are rated to much higher temperature and stress levels than consumer or even server-grade components. Just ask the folks who have been trying to install "in-car" PCs with consumer-grade components.
Re:I dunno.. (Score:5, Interesting)
There is also more bullshit in that statement than meets the eye. Power cycling a system can cause failure if you have cheap soldering or marginal parts. Powering up a system causes it to heat up, and things expand when they heat up. If you have a solder joint that isn't done right, the expanding and contracting will cause it to break eventually. I've actually seen surface-mounted parts fall off a board because of shoddy soldering.
Yeah, true, the real problem was shoddy soldering, but the heating cycles helped it along.
Re:I dunno.. (Score:5, Insightful)
However, silicon is silicon, capacitors are still made from the same things
Thank you for playing the game, but you have lost [badcaps.net]. Rather than using more expensive Nippon electronics, the Chinese parts you used had a few parts per million more impurities. This led to early thermal failure of your mainboard.
If you would like to play the game again, please acquire more venture capital and buy quality next time. You may still lose the game to your manufacturer buying counterfeit parts, using the wrong specification solder, or unforeseen interactions from running at many gigahertz at high temperature.
This show has been hosted by an automation robot that costs 75 times what your laptop does and still has occasional electronics failures. :)
Re:I dunno.. (Score:5, Funny)
Right. And I'm the same as Albert Einstein because I have DNA, amino acids, and funny hair. Where's my Nobel Prize?
A Pinto is the same as a Mercedes because it's made of steel, has 4 wheels, and an engine. I want $75,000 for my used Ford.
My wife is the same as Elle McPherson because she has hair, tits, and a vagina. My wife should be the supermodel (no, really, honey, I was serious on that last one. No, wait...WAIT!...")
Re:I dunno.. (Score:4, Funny)
For most geeks looking for a girlfriend, that list is followed by the phrase, "Pick any two."
Re: (Score:3, Funny)
"For most geeks looking for a girlfriend, that list is followed by the phrase, "Pick any two.""
Hmmm, let's see.
Hair+Vagina-tits - pre-pubescent. No thanks - I like being on THIS side of a jail door.
Hair+Tits-vagina - pre-op transsexual. That's an example of an UNHAPPY surprise.
Tits+Vagina-hair - Sinead O'Connor. RUN!!! RUN AWAY!!!!
Re:I dunno.. (Score:5, Funny)
For a Web site, put up a static page asking users to wait while additional resources are brought online.
We're sorry for the inconvenience, but our systems seem to have been shut down. We've asked leroy, rufus, and heraldo to hit the power button, and we assure you that, once they've found that button, they will push it, and then, once the mandatory scandisk operation has completed, the Windows server screen will appear, and once the kernel operations have completed, the services you have requested will be available.
And that will be awesome!
While you're waiting, here are some links to our competitors' sites. Remember to open them in a new tab, so you can occasionally come back and hit "refresh". We promise, we're almost ready to serve you.
Re:I dunno.. (Score:4, Informative)
Yeah, I don't know who wrote that bit in the article, but they're just dumb. If you run any kind of system with a load balancer in front of it, you can easily script starting up additional machines as soon as your monitoring says you've reached 90% capacity.
Re:I dunno.. (Score:4, Informative)
"Myths" 1-4 are true
I haven't heard "Myth" 5 since 1999.
"Myth" 6 is also true.
When a system suspends to disk, it uses no power.
When a system suspends to RAM, it uses VERY LITTLE (strobe) power, and you can configure wireless adapters and USB devices to be turned OFF when you suspend to RAM. (I'm using "suspend" for both cases - FUCK the sleep/suspend/standby/hibernate/whatever for 2 different states bullshit.)
A laptop's charging circuitry and AC adapter are independent of the power state, so of course the adapter is going to be running all the time to keep the battery charged and power the system.
They admit that the power use is negligible when suspending to disk or RAM (and probably running 3 wireless mice that don't turn off, in an idiotic attempt to boost their non-existent numbers).
They don't admit that they couldn't find anyone who thought that the green light on the power brick meant it was off and using no power.
Myth 7 is true as well.
NiCd batteries do suffer from memory effects, and their capacity decreases over time. Conditioning a NiCd will remove the memory effect, but will not restore lost capacity due to general age.
NiMH batteries have much less of a memory effect, and less of a capacity loss through age. There is no need to condition a NiMH battery. Just drain it fully and then recharge it in a cheapo dumb charger, or buy a better charger (which will likely advertise a battery conditioning feature anyway).
LiIon batteries do lose capacity over time. If a cell (one of the 6 or 9 individual cells inside your laptop's battery pack) is completely depleted, it won't recharge again. If a cell is overcharged (or overheated), it will pop, and you've lost that capacity, and maybe your pants + laptop if the damn thing catches fire.
"Myth" 8 is true, as long as you remember that the hard drive is just one item drawing juice in a system.
"Myth" 9 is true, as long as you do it right.
The problem with DC is that you lose power over distance. Converting from AC to DC in a specific box can be more efficient than any server power supply, more reliable, and output cleaner power.
The issue is distance.
"Myth" 10 is true. "As soon as possible" means "When the servers are on fire or when we're 6 months overdue on our replacement cycle, whichever comes first...maybe". Energy costs are through the roof, and it makes sense for that to be a high priority in determining what you buy. You may even want to buy a more efficient server/power supply/switch/UPS/line conditioner EARLY if your budget allows for it. We all know that any money sitting around unused will get grabbed up by someone else, so use it or lose it.
That replaced equipment still has value (especially if you replace it early), and if you can resell it, you'll usually wind up ahead in the long run.
Sleep != Hibernate (Score:5, Informative)
Myth No. 6: A notebook doesn't use any power when it's suspended or sleeping. USB devices charge from the notebook's AC adapter. Fact: Sleep (in Vista) or Hibernate mode in XP saves the state of the system to RAM and then maintains the RAM image even though the rest of the system is powered down. Suspend saves the state of the system to hard disk, which reduces the boot time greatly and allows the system to be shut down. Sleeping continues to draw a small amount of power, between 1 and 3 watts, even though the system appears to be inactive. By comparison, Suspend draws less than 1 watt. Even over the course of a year, this difference is probably negligible.
um... Hibernate != Sleep. Hibernate in XP saves the RAM to the Hard Drive, and powers off. Suspend keeps RAM powered....
Re:Sleep != Hibernate (Score:5, Interesting)
Using my handy killawatt, I tested how much power my desktop (not including accessories) draws while off, on and idle, on and under load, and in S3 suspend.
Off - 6W
Idle - 140W (dropped from 152W after installing a tickless kernel)
Loaded - 220W
S3 suspend - 8W
Ever since I ran that test, I put my machine into suspend at every opportunity. 140W is a lot of juice in the land of $0.18/kWh.
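Out of curiosity, the annual cost at that rate works out as below; a quick sketch in Python using the parent's measured figures:

    RATE = 0.18            # $/kWh, as quoted above
    HOURS_PER_YEAR = 8766  # 365.25 days * 24 h

    for state, watts in [("idle", 140), ("S3 suspend", 8), ("off", 6)]:
        kwh = watts / 1000 * HOURS_PER_YEAR
        print(f"{state:>10}: {kwh:7.1f} kWh/yr = ${kwh * RATE:6.2f}/yr")

    # idle: ~1227 kWh, ~$221/yr if left on 24/7; S3: ~$13/yr; off: ~$9/yr

So habitual S3 use saves roughly $200 a year for this one box.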
Re:Sleep != Hibernate (Score:5, Interesting)
My kingdom for a mod point...
I built a new system in July: Intel Core 2 Duo E8400, 2GB RAM, ATI 3850, two HDs (one's a Raptor), and the box on idle pulls 81W.
My old box, an Athlon 1800+ (actual speed: 1350MHz), 2GB RAM, two HDs... idle was in the 130s-140s.
(Both exclude the monitor, a 20" LCD which pulls 35-40W.)
So not only did I build me a faster system, it's nearly half as power-hungry as my old box.
Re: (Score:2)
Looking around newegg, I only see a handful of "green" items that advertise their low power usage.
For the record, my system pulls 120 W idle, 230 running CoHOF, and 5 in S3. It is extremely overclocked and mostly older components which tends to skew things, but I'm looking to upgrade and wouldn't mind saving a few bucks in energy costs in the long term.
Re:Sleep != Hibernate (Score:5, Informative)
http://anandtech.com/casecoolingpsus/showdoc.aspx?i=3413 [anandtech.com]
Even though the article is about power supplies, it has quite a bit of information about how much power various components draw.
Re:Sleep != Hibernate (Score:5, Informative)
For the record, my system pulls 120 W idle, 230 running CoHOF, and 5 in S3. It is extremely overclocked and mostly older components which tends to skew things, but I'm looking to upgrade and wouldn't mind saving a few bucks in energy costs in the long term.
5 watts in S3 is pretty bad in my book. Disconnect all USB devices and check your S3 power consumption again. If it is still high, most likely the PSU you have is not efficient. It could also come from other things like the motherboard, but most of the time it is the PSU. If your system idles at 120W and draws 230W during load, you might be able to run with as little as a good 350W-rated PSU. For example, if your current PSU is around 70% efficient and you replaced it with an 80% efficient one, your 230W draw during load would drop to around 201W. But you'll have to check and see if you can find the efficiency numbers for your current PSU.
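That arithmetic checks out: the DC load stays constant, and only the wall draw changes with efficiency. A quick sketch of the numbers:

    wall_draw = 230                  # W measured at the wall under load
    dc_load = wall_draw * 0.70       # 161 W actually delivered at 70% efficiency
    new_wall_draw = dc_load / 0.80   # same load through an 80% efficient PSU
    print(round(new_wall_draw))      # ~201 W, as stated above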
How do you tell how much power a component is going to pull before you buy it?
There's no single source, but there are some useful websites.
80plus.org [80plus.org]
Silent PC Review [silentpcreview.com] They generally provide both noise and power consumption measurements in their reviews
Silent PC Review Forums [silentpcreview.com] More anecdotal, but at this point it is still good data. Many users post their own tests and measurements on the boards. It helps you get an idea of what's achievable and what isn't. There are also some nicely compiled charts that combine data from different sources. I find the numbers are sometimes inaccurate but not too far off.
Re: (Score:2)
0.14kW * $0.18/kWh * (8h * 200 days) = ~$40 a year of electricity? Not a whole lot of saving to get from reducing that. Or am I doing it all wrong?
Something's wrong about the units, but I think the figure is right. Of course, it also depends on what climate you're in: if you're running an AC to get rid of those 140W again, that costs too, while here up north a lot of the time it supplements other heating in the winter, so it actually costs less. Anyway, I haven't bothered to check, but I imagine my 42" TV draws more when it's on, so it's not like the computer is the big sinner. For me, lower power is about fewer fans and less noise; there's no way a 140W draw...
Re: (Score:2)
140W is a lot of juice in the land of $0.18/kWh.
It's even a lot more in Europe, at approx. EUR 0.25/kWh ($0.34/kWh)...
Comment removed (Score:3, Interesting)
Re: (Score:2)
Option 3 looks good to me.
"Oh. The only reason we use Citrix is to run Outlook in it."
Outlook Web Access (OWA). There, I've just saved you 10 minutes per day, or around 60 hours per year. I don't know how many members of staff your company has, but if you're using Citrix, I'd imagine quite a lot... let's say 500.
500 x 60 = 30000 man hours per year saved by switching on OWA and NATing port 443.
Re: (Score:2)
But OWA has a much thinner feature set, and over time, they will spend more time using OWA than waiting for Citrix.
Not only that. (Score:3, Interesting)
We have a similar problem here that I've not been allowed to fix yet.
The employees typically turn on their computers and then LEAVE THE OFFICE to get Starbucks coffee or whatever. A 10 minute wait turns into 30 minutes of non-productivity.
The computers should be the same as the phones. Instant on - any time - every time.
Re: (Score:2)
Get computers that suspend to flash. Then you can turn one on and it's instantly working, with a nice little unlock screen.
Re: (Score:3, Interesting)
You must be the guy who bought my old 386 66. You need to hit the turbo button to get it to boot faster!
At that clock speed, you meant "486", didn't you?
BTW, I'm curious why the article's first point doesn't even touch on thermal expansion/contraction...
Re: (Score:2, Insightful)
What are they running? Corporate Crippleware: Safe Boot. Virus Checkers. Keyboard Loggers (Hi, guys!). After a few "Regime Changes" it all adds up...
Re: (Score:3, Insightful)
Lots of companies have lazy desktop admins who write one giant login script that checks every resource available in the system for every user, even though most users will not use most of the resources it is checking for. Smart companies have created multiple scripts and figured out smart ways to quickly identify which scripts the logging-in user needs to run, thus reducing boot time significantly.
Questionable grasp on the problem space. (Score:5, Insightful)
Myth No. 3: The power rating (in watts) of a CPU is a simple measurement of the system's efficiency.
Fact: Efficiency is measured in percentage of power converted, which can range from 50 to 90 percent or more. The AC power not converted to DC is lost as heat...Unfortunately, it's often difficult to tell the efficiency of a power supply, and many manufacturers don't publish the number.
I'm not sold on taking advice from someone who doesn't understand the difference between the wattage rating of a CPU and the wattage rating of the power supply. They're completely different components.
Re: (Score:2, Funny)
Still, shame on them for not editing their stuff properly (if they meant PSU).
Re:Questionable grasp on the problem space. (Score:5, Informative)
I like how this plays with the following assertion filed under "Myth No. 9: Going to DC power will inevitably save energy."
"New servers have 95 percent efficient power supplies, so any power savings you might have gotten by going DC is lost in the transmission process."
So, when it suits his argument, power supply efficiencies range from 50-90%, and are kept hidden by manufacturers. Then, when that doesn't suit his argument, all of a sudden power supplies are at least 95% efficient, and everyone knows that.
I call shenanigans!
Re: (Score:2)
Oops.
Re:Questionable grasp on the problem space. (Score:5, Insightful)
Myth No. 9: Going to DC power will inevitably save energy.
Fact: Going to DC power entails removing the power supplies from a rack of servers or all the servers in a datacenter and consolidating the AC-DC power supply into a single unit for all the systems. Doing this may not actually be more efficient since you lose a lot of power over the even relatively small distances between the consolidated unit and the machines. New servers have 95 percent efficient power supplies, so any power savings you might have gotten by going DC is lost in the transmission process. Your savings will really depend on the relative efficiency of the power supplies in the servers you're buying as well as the one in the consolidated unit.
This is completely wrong. The author missed two of the three power conversions that take place in a data center. Data center UPS units take the AC current, convert it to DC, then convert it back again, just so the server can convert it back to DC. Even if you have 95% efficiency at each stage, the conversion losses add up.
People wouldn't be going DC if it didn't result in measurable power savings.
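To see how quickly "efficient" stages compound, take the path the parent describes: a double-conversion UPS (AC to DC, DC back to AC) followed by the server's own PSU (AC back to DC). A quick illustration, assuming 95% at every stage:

    efficiency = 1.0
    for stage in (0.95, 0.95, 0.95):   # UPS rectifier, UPS inverter, server PSU
        efficiency *= stage
    print(f"{efficiency:.3f}")         # ~0.857, i.e. roughly 14% lost before the motherboard

The DC argument is about deleting the middle conversions, not about a magically better final stage.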
Re: (Score:2)
With the US power system, you avoid four high-loss (DC:AC or AC:AC) power conversions and replace them with two lower-loss (DC:DC) conversions. But compared to rest-of-world electrical systems, you only save two AC:AC conversions, which will just gain you two points or so.
I like 600VDC as a solution, but it will only work well for the biggest consumers, where you can justify a significant increase in capital cost with the energy savings. It's nice to have a single 4.8MW critical power bus (with a couple of spares)...
Re: (Score:3, Interesting)
There are only 2 stages in there that are affected. You aren't going to get DC from the power company.
And while the final stage is affected, since the servers are getting DC at lower (48V?) voltages, do you really think a DC power supply is going to be any more efficient than an AC one? Just because the...
Re: (Score:3, Interesting)
If I remember correctly, the power savings isn't in using AC or DC; it's in stepping up the voltage so that less current flows, resulting in lower power loss to the innate resistance of the lines, a process that is possible using either AC or DC. Tesla and Edison fought over that one, comparing the relative safety of AC vs. DC, a war of pure FUD IIRC.
Re:Questionable grasp on the problem space. (Score:4, Insightful)
But all machines do that anyway. RAM runs at 1.5V or 1.8V, and the CPU runs at 1.2ish these days. Where does that come from? The 3.3V or 5V rails.
This is why people are moving everything to the 12V rail on the PSU (ATX12V standard and other ideas) A single efficient conversion with a local on-board conversion is best.
DC power still has a lot of other issues.
Re: (Score:2)
At the risk of sounding like an idiot, that is a fairly accurate guess for 17" LCDs (TN panels, anyway).
Since pretty much all 17" LCDs are TN rather than high-contrast panels, it doesn't really hurt to generalize.
Power ratings on monitors aren't like the ratings on computer power supplies. By effectively estimating average and peak power draw, the manufacturer can save money. If the AC adapter is rated to handle too little power, the adapter or monitor will prematurely fail. If the adapter is rated too high...
Did I miss something? (Score:5, Insightful)
Did the definitions of 'fact' and 'debunk' change recently? Every 'myth' listed has 'fact' under it proving it is true. According to my good friend Mr. Webster this is called 'confirmation.'
Great Example of Datacenter Power Management (Score:2)
Microsoft's Windows Messenger (MSN Messenger, Live Messenger... whatever they call it these days) Group wrote an awesome abstract [microsoft.com] of how they cycle servers on & off to handle the load while saving power.
Now, for reasons pointed out in other comments, TFA from Infoworld is a mix of good info and horseshit.
I'd Get Fired If I Followed These (Score:2, Insightful)
Myth #2 suggests making your customers wait. That might work in super-mega-corporate land where your customers are literally married to you and queues in Tech Support are "profitable."
I would *deserve* to be fired if I made a customer wait. Of course, that sense of urgency doesn't work in super-mega-corporate entities either.
The myth about going to DC to be more efficient is painful too. If a manager in my workplace entertained crackpot ideas like that, I'd leave.
What do they mean by "suspend"? (Score:2)
Suspend saves the state of the system to hard disk, which reduces the boot time greatly and allows the system to be shut down. Sleeping continues to draw a small amount of power, between 1 and 3 watts, even though the system appears to be inactive. By comparison, Suspend draws less than 1 watt. Even over the course of a year, this difference is probably negligible.
I thought "suspend to disk" and "hibernate" were synonyms. A suspended computer shouldn't draw any more power than a computer that's turned off...
I remember why I don't read Infoworld (Score:4, Insightful)
That's a really bad article. Wow, worse than anything I remember them writing before.
Bad article... (Score:3, Informative)
Outright, potentially wrong: the claim that no one cares if a customer is made to wait for a server to boot before being served. That's not a generalization to be made lightly... It is true, though, that suspend-to-RAM has not received the attention it deserves in the data center. A great many server-class systems and options are not designed to cope with suspend-to-RAM, so you must be careful banking on it. The industry should correct this, but a facility can't bank on it yet (just put pressure on your vendors to make it so...).
Straw man: a supposed 'myth' that leaving LCD monitors on is fine for energy savings, with the remarkable clarification that being off saves more power... Who would have thought.
Other straw man: that you will unconditionally save money by rapidly upgrading to the latest efficient technology. I don't think anyone is foolish enough to think that compulsively following a technical treadmill will lead to an overall financial gain.
Re: (Score:3, Informative)
Well, obviously leaving LCDs on saves energy if you compare it to leaving CRTs on.
Feeling it in the 1 Watt (Score:5, Interesting)
That list of debunked myths seems pretty sensible, even in details that run counter to conventional wisdom. But even though the list properly cautions several times that most any equipment left plugged in will still drain power while doing nothing useful (infinitely bad efficiency), the article still makes an efficiency mistake of its own:
Over the course of a year, 2 unnecessary watts is 17.532 unnecessary kWh. Sure, that's only about $1.75 at about $0.10/kWh [doe.gov]. But that's for each device. At home, in addition to sleeping computers, there are dozens of devices with AC adapters wasting watts most of the day (and night), which is possibly hundreds of dollars wasted. In offices and datacenters, possibly thousands to hundreds of thousands of dollars a year wasted. And each kWh means loads of extra greenhouse CO2 unnecessarily pumped into the sky, even if it's (still) cheap to pollute so recklessly.
Which is what the One Watt Initiative [wikipedia.org] is designed to minimize. The US government has joined the global efficiency effort, mandating purchases of equipment that consumes no more than 1 watt in standby mode. Whatever the global impact of 3W wasted in standby, it can be cut by two-thirds by switching to 1W.
In the short run, that makes energy bills lower (and, by saving heat from standby devices, further lowers energy costs due to less required cooling). In the long run, we've got more fuel and intact climate left to work with - and that stuff just costs way too much to replace when it runs out.
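As a rough illustration of the scale involved (the fleet size here is an assumption for the example, not a figure from the post):

    DEVICES = 1000   # say, an office full of adapters and power bricks
    HOURS = 8766     # hours per year
    RATE = 0.10      # $/kWh, the DOE figure cited above

    for standby_w in (3, 1):
        kwh = standby_w / 1000 * HOURS * DEVICES
        print(f"{standby_w} W standby: {kwh:,.0f} kWh/yr = ${kwh * RATE:,.0f}/yr")

    # 3 W: ~26,298 kWh (~$2,630/yr); 1 W: ~8,766 kWh (~$877/yr), the two-thirds cut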
Google (Score:2)
Google developed their own power supply, and open-sourced the hardware, saying it saves them tons of energy and the rest of the world should use it. Mind you, it is DC, and it means a total DC data center, but really that isn't a bad idea.
Virtualization is also the way to go to save power. Fewer servers.
Re:Google (Score:5, Informative)
Google developed their own power supply
Actually, Google's point was that they wanted motherboards that ran on 12 VDC only. [64.233.179.110] PC power supplies are still providing +12, -12, +5, -5, and +3.3v. Most of those voltages are there for legacy purposes, and DC-DC converters on the motherboard are doing further conversions anyway. So there's no reason not to make motherboards that only need 12 VDC. Disks are already 12 VDC only, so this gets everything on one voltage. This simplifies the power supply considerably, and avoids losses in producing some voltages that aren't used much.
But Google wasn't talking about using 12 VDC distribution within the data center. The busbars required would be huge at such a low voltage. They were talking about using 12 VDC within each rack. Distribution within the data center would still be 110 or 220 VAC.
Re: (Score:3, Informative)
Disks are already 12 VDC only
Actually HDDs use +12v for the motors and +5v for the electronics. If you have a 3.5" FDD it only uses 5v. If you don't believe me try swapping the yellow (12v) and red (5v) wires going into the power connector on your HDD some time ... here's a hint, the smoke you see coming off the electronics isn't from putting 5v into something that expects 12v (note if you're really dumb enough to do this I won't be held responsible for ruining your HDD).
NiCd Laptop Batteries ("Myth #7") Huh? (Score:3, Informative)
Tip number seven talks about battery conservation in LiIon vs. NiCd batteries. Um, laptops haven't used NiCds in years. NiMH, the chemistry that succeeded NiCd, hasn't been used in laptops in quite a while either.
Can you even buy NiCd's anymore, for any device? I can't remember the last time I saw them in an electronics store.
SirWired
Re:Kitteh pr0n (Score:5, Funny)
Show them some nice pictures of kittens. Or some pr0n.
I, for one, was very relieved to see the word or.
Re: (Score:3, Interesting)
Do you have any data on this? This is one of those commonly held beliefs that has absolutely no facts behind it. I've seen a Google whitepaper that pretty conclusively debunked the commonly held assumptions that drives fail because of temperature and "wear and tear". From a mechanical standpoint, this belief also does not make any sense.
Re: (Score:3, Interesting)
I saw the Google whitepaper, and it debunked very little about the temperature "myth"; not sure about wear and tear.
With regard to temperature, the study had a couple of fundamental flaws:
* The temperature measurements came from the drives themselves. That means if, say, an unreliable hard drive model also underreported its temperature, it would totally skew the results.
* It was data from servers running in a well-cooled datacenter. That means there was virtually no data about drives running at the kind of temperatures...
Re:Some things conveniently left out (Score:4, Interesting)
This is one of those commonly held beliefs that has absolutely no facts behind it.
The data sheet for my Hitachi HDS721075KLA330 [hitachigst.com] drive rates it at 50,000 load/unload cycles. If it powered up 50 times a day (which would be quite possible in a desktop with aggressive power savings enabled), it's specced to last about 3 years.
From a mechanical standpoint, this belief also does not make any sense.
The people who actually built it seem to disagree with you. Hint: a spinning hard drive takes little energy to stay in motion. A stopped hard drive takes quite a bit of torque to spin up to running speed in a small number of seconds.
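Spelling out the parent's lifetime estimate (the rated cycles are from the data sheet quoted above; the 50 power-ups a day is the hypothetical aggressive power-saving policy):

    RATED_CYCLES = 50_000   # Hitachi's load/unload spec
    CYCLES_PER_DAY = 50     # aggressive desktop power management

    days = RATED_CYCLES / CYCLES_PER_DAY
    print(days / 365.25)    # ~2.7 years before hitting the spec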
Re: (Score:2)
Don't apply for a job at Infoworld; your knowledge level is too high.
Re: (Score:2)
LED backlit LCD monitors run in the $1,500+ range.
Really? [amazon.com]
Damn, I need to get into your supply chain.