Energy Star For Servers Falls Short
tsamsoniw writes "The newly released Energy Star requirements for servers may not prove all that useful for companies shopping for the most energy-efficient machines on the market, InfoWorld reports. For starters, the spec only considers how much power a server consumes when it's idling, rather than gauging energy consumption at various levels of utilization. That's like focusing on how much gas a vehicle consumes at stop lights instead of when it's moving. Also, the spec doesn't care whether a server's processors have one core or multiple cores — even though multi-core servers deliver more work per watt. Though this first version of Energy Star for servers isn't entirely without merit, the EPA needs to refine the spec to make it more meaningful."
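The idle-only critique can be made concrete with a toy comparison (all numbers hypothetical). A server that idles low but scales poorly "wins" an idle-only ranking, while a load-weighted efficiency metric, in the spirit of SPECpower's overall ops-per-watt score, picks the machine that is actually more efficient doing work:

```python
# Hypothetical servers: A idles low but scales poorly; B idles higher
# but is far more efficient under load.
# "load" is a list of (utilization, watts) measurement points.
server_a = {"idle_w": 100, "load": [(0.5, 220), (1.0, 340)], "ops_at_full": 100_000}
server_b = {"idle_w": 120, "load": [(0.5, 180), (1.0, 250)], "ops_at_full": 120_000}

def perf_per_watt(s):
    # Simplification: assume throughput scales linearly with utilization.
    total_ops = sum(u * s["ops_at_full"] for u, _ in s["load"])
    total_watts = s["idle_w"] + sum(w for _, w in s["load"])
    return total_ops / total_watts

# Idle-only ranking favors A; the load-weighted metric favors B.
print(server_a["idle_w"] < server_b["idle_w"])            # True
print(perf_per_watt(server_b) > perf_per_watt(server_a))  # True
```

The two rankings disagree, which is the summary's point: a spec that only looks at idle draw can steer buyers to the wrong box.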
Improved Version Coming Next Year (Score:5, Informative)
Atom (Score:5, Informative)
Re:Yet Another Bogus Car Analogy (Score:3, Informative)
It's hard to even out the intra-day variation. I work for a phone company for corporate customers only, and basically all calls happen between 7am and 6pm. We run batch tasks at night, but they can't compare to the load that customers put on the servers during the day. The addition of cell phone calls has given our servers a bit more to play with at night, at least.
I suppose we could try to sell excess capacity at night, but I doubt we could make enough to make up for the required extra staff and hardware. Everyone else has idle servers at night too, except for other time zones, and latency generally kills any idea of utilizing servers across time zones.
Anyway, idle power is free for us (we pay for peak, whether we use it or not), so from an economic perspective there's no point in optimizing for it. Marketing us as energy-conscious is worth a bit though, so we would get Energy Star compliant servers if the extra cost is small. So far we're focusing on reducing peak consumption, and in all modesty I think we're fairly good at it. (Our new 7600 routers ruin the score a bit. They suck too much juice.)
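The "we pay for peak" economics above can be sketched in a few lines (tariff and load numbers are hypothetical). Under a demand-charge tariff, only the highest draw in the billing period matters, so trimming idle consumption alone leaves the bill unchanged, while shaving the peak actually saves money:

```python
DEMAND_RATE_PER_KW = 15.0  # hypothetical $/kW demand charge

def demand_charge(hourly_kw):
    # Demand charges bill the maximum draw in the period, not total energy.
    return max(hourly_kw) * DEMAND_RATE_PER_KW

baseline       = [40, 40, 90, 95, 100, 60, 40]  # daytime peak of 100 kW
idle_optimized = [25, 25, 90, 95, 100, 60, 25]  # idle trimmed, peak untouched
peak_optimized = [40, 40, 80, 85,  90, 60, 40]  # peak shaved instead

print(demand_charge(baseline))        # 1500.0
print(demand_charge(idle_optimized))  # 1500.0 -- no savings
print(demand_charge(peak_optimized))  # 1350.0 -- real savings
```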
Just use VMware's DPM (Score:3, Informative)
VMware Distributed Power Management [youtube.com]
Supposedly it will cut your server power usage by 50%.
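The idea behind DPM-style savings can be sketched as consolidation: pack VM loads onto as few hosts as possible and power the rest off. This is not VMware's actual algorithm, just a first-fit-decreasing bin-packing heuristic with hypothetical numbers:

```python
HOST_CAPACITY = 100  # arbitrary load units per host
HOST_POWER_W = 300   # watts per powered-on host, even when mostly idle

def hosts_needed(vm_loads, capacity=HOST_CAPACITY):
    # First-fit decreasing: place each VM (largest first) on the first
    # host with room, opening a new host only when none fits.
    bins = []
    for load in sorted(vm_loads, reverse=True):
        for i, used in enumerate(bins):
            if used + load <= capacity:
                bins[i] += load
                break
        else:
            bins.append(load)
    return len(bins)

vms = [30, 20, 10, 25, 15, 10, 10]  # total load 120: fits on 2 hosts
spread_over = 6                     # hosts powered on before consolidation
consolidated = hosts_needed(vms)
savings = 1 - consolidated / spread_over

print(consolidated)       # 2
print(round(savings, 2))  # 0.67
```

With lightly loaded hosts, powering most of them off easily lands in the ballpark of the claimed 50% reduction; the real savings obviously depend on how idle the fleet is to begin with.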
Re:No, it isn't (Score:3, Informative)
I think you just imagined that.
Very, very, very, very (x4) few data centers do anything of the sort. And the truth is that the vast majority of servers spend the vast majority of their time waiting for something to do.
Re:Atom (Score:3, Informative)
However, if you look at power usage and space usage (which also translates into power, because of infrastructure costs), then for "shallow web servers" parallelizing even "weaker" nodes could yield a better bottom line.
Blade computing, specifically, is extremely expensive. The reason is simply that you're buying high-end components which are intended for customers with cash reserves. What Google does is use CPUs/motherboards/RAM/PSUs that are already on thin margins and in massive distribution to a far broader audience. They've created their own modular, near-blade-density model, which is far cheaper and more robust (even if it does take up more space).
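A back-of-envelope comparison (all numbers hypothetical) shows how the commodity-node argument works for shallow, embarrassingly parallel workloads: measure throughput per dollar and per watt rather than raw per-box performance:

```python
# Hypothetical nodes: a high-end blade vs a cheap Atom-class 1U box.
# "ops" is throughput on a shallow web-serving workload.
blade     = {"cost": 6000, "watts": 400, "ops": 500}
commodity = {"cost": 400,  "watts": 30,  "ops": 60}

def per_dollar(n):
    return n["ops"] / n["cost"]

def per_watt(n):
    return n["ops"] / n["watts"]

# For this workload the cheap node wins on both metrics, even though
# any single blade vastly outperforms any single commodity box.
print(per_dollar(commodity) > per_dollar(blade))  # True
print(per_watt(commodity) > per_watt(blade))      # True
```

The caveat, as the parent notes, is space: you need more boxes, so this only pays off when the workload parallelizes cleanly and floor space is cheaper than the blade premium.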