Energy Star For Servers Falls Short

tsamsoniw writes "The newly released Energy Star requirements for servers may not prove very useful for companies shopping for the most energy-efficient machines on the market, InfoWorld reports. For starters, the spec only considers how much power a server consumes when it's idling, rather than gauging energy consumption at various levels of utilization. That's like focusing on how much gas a vehicle consumes at stop lights instead of when it's moving. Also, the spec doesn't care whether a server's processors have one core or multiple cores, even though multi-core servers deliver more work per watt. Though this first version of Energy Star for servers isn't entirely without merit, the EPA needs to refine the spec to make it more meaningful."
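To make the submitter's first point concrete, here is a minimal sketch of how an idle-only rating can disagree with a utilization-weighted one. The load levels, duty-cycle weights, and power figures are invented for illustration; this is not the Energy Star or SPECpower methodology:

    # Hypothetical sketch: idle-only rating vs. a utilization-weighted rating.
    # Load levels, duty-cycle weights, and power figures are invented for
    # illustration; this is not the Energy Star or SPECpower methodology.

    # Measured wall power (watts) at each utilization level for two servers.
    power_draw = {
        "server_a": {0.00: 120, 0.25: 180, 0.50: 230, 0.75: 270, 1.00: 300},
        "server_b": {0.00: 150, 0.25: 170, 0.50: 190, 0.75: 210, 1.00: 230},
    }

    # Assumed share of time spent at each utilization level in a typical day.
    duty_cycle = {0.00: 0.40, 0.25: 0.30, 0.50: 0.15, 0.75: 0.10, 1.00: 0.05}

    for name, draw in power_draw.items():
        idle_only = draw[0.00]
        weighted = sum(draw[level] * share for level, share in duty_cycle.items())
        print(f"{name}: idle-only {idle_only} W, weighted average {weighted:.0f} W")

    # server_a "wins" on idle power alone (120 W vs. 150 W), but server_b
    # draws less (172 W vs. 178 W) once realistic utilization is factored in.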
Comments:
  • by 1sockchuck ( 826398 ) on Friday May 22, 2009 @05:56AM (#28050993) Homepage
    All fair criticisms, but it's a first step. The EPA plans to address many of the shortcomings of the current Energy Star for Servers program in an expanded Tier 2 spec [datacenterknowledge.com] that is scheduled to arrive in the fall of 2010. The update is intended to expand the program to include blade servers and servers with more than four processors.
  • Atom (Score:5, Informative)

    by googlesmith123 ( 1546733 ) on Friday May 22, 2009 @06:04AM (#28051039)
    Intel is releasing an Atom CPU for servers. It's not very powerful, but I reckon it has the highest performance per watt of anything out there.
  • by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Friday May 22, 2009 @06:36AM (#28051179)

    It's hard to even out the intra-day variation. I work for a phone company serving corporate customers only, and basically all calls happen between 7am and 6pm. We run batch tasks at night, but they can't compare to the load that customers put on the servers during the day. The addition of cell phone calls has given our servers a bit more to play with at night, at least.

    I suppose we could try to sell excess capacity at night, but I doubt we could make enough to make up for the required extra staff and hardware. Everyone else has idle servers at night too, except for other time zones, and latency generally kills any ideas to utilize servers across time zones.

    Anyway, idle power is free for us (we pay for peak, whether we use it or not), so from an economic perspective there's no point in optimizing for it. Marketing ourselves as energy-conscious is worth a bit, though, so we would get Energy Star compliant servers if the extra cost is small. So far we're focusing on reducing peak consumption, and in all modesty I think we're fairly good at it. (Our new 7600 routers ruin the score a bit. They suck too much juice.)
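    Under that kind of tariff the economics are easy to model. A minimal sketch, with all tariff and load figures invented for illustration:

        # Hypothetical sketch of demand-based (peak) billing vs. energy billing.
        # All tariff and load figures are invented for illustration.

        hourly_kw = [40] * 7 + [95] * 11 + [45] * 6  # 24 readings: busy 7am-6pm

        peak_rate_per_kw = 15.0      # $/kW of monthly peak demand (assumed)
        energy_rate_per_kwh = 0.10   # $/kWh consumed (assumed)

        peak_charge = max(hourly_kw) * peak_rate_per_kw
        energy_charge = sum(hourly_kw) * energy_rate_per_kwh  # one day's energy

        print(f"peak demand charge: ${peak_charge:.2f}")
        print(f"one day's energy charge: ${energy_charge:.2f}")

        # Cutting night-time idle draw from 40 kW to 20 kW leaves the peak
        # charge untouched; only shaving the 95 kW daytime peak reduces it.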

  • by acoustix ( 123925 ) on Friday May 22, 2009 @10:00AM (#28053189)

    VMware Distributed Power Management [youtube.com]

    Supposedly it will cut your server power usage by 50%.
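    The underlying idea is easy to sketch: pack VMs onto as few hosts as capacity allows, then power down the hosts left empty. A simplified illustration of consolidation-based power management (first-fit-decreasing bin packing; this is not VMware's actual algorithm):

        # Simplified sketch of consolidation-based power management, in the
        # spirit of VMware DPM. Illustration only, not VMware's algorithm.

        def consolidate(vm_loads, host_capacity):
            """First-fit-decreasing bin packing of VM loads onto hosts."""
            hosts = []  # each host is a list of the VM loads placed on it
            for load in sorted(vm_loads, reverse=True):
                for host in hosts:
                    if sum(host) + load <= host_capacity:
                        host.append(load)
                        break
                else:
                    hosts.append([load])  # nothing fits; use another host
            return hosts

        vm_loads = [0.30, 0.25, 0.20, 0.15, 0.10, 0.10, 0.05]  # CPU share per VM
        total_hosts = 4
        active = consolidate(vm_loads, host_capacity=0.80)

        print(f"{len(active)} of {total_hosts} hosts stay on; "
              f"{total_hosts - len(active)} can be powered down")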

  • Re:No, it isn't (Score:3, Informative)

    by ergo98 ( 9391 ) on Friday May 22, 2009 @10:49AM (#28053835) Homepage Journal

    Data centers will often outsource whatever "idle machine time" they have to various institutions, at least if they have any sense.

    I think you just imagined that.

    Very, very, very, very (x4) few data centers do anything of the sort. And the truth is that the vast majority of servers spend the vast majority of their time waiting for something to do.

  • Re:Atom (Score:3, Informative)

    by derGoldstein ( 1494129 ) on Friday May 22, 2009 @10:57AM (#28053969) Homepage
    Google is leveraging economies of scale with their cargo containers [slashdot.org]. The primary benefits are modularity and off-the-shelf components/interfaces.

    However, if you look at power usage and space usage (which also translates into power, because of infrastructure costs), and what you need are "shallow web servers", then parallelizing across even "weaker" nodes could yield a better bottom line.

    Blade computing, specifically, is extremely expensive. The reason is simply that you're buying high-end components aimed at customers with cash reserves. What Google does is use CPUs/motherboards/RAM/PSUs that already sell on thin margins and in massive volume to a far broader audience. They've created their own modular, near-blade-density model, which is far cheaper and more robust (even if it does take up more space).
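    A back-of-the-envelope way to see the trade-off, with all figures invented for illustration (including the per-rack-unit infrastructure overhead):

        # Back-of-the-envelope: many low-power nodes vs. fewer high-end
        # servers. All figures are invented for illustration.

        def requests_per_watt(nodes, req_per_node, watts_per_node,
                              rack_units_per_node, overhead_w_per_ru):
            """Throughput per watt, charging a rough infrastructure overhead
            (cooling, power distribution) per rack unit occupied."""
            throughput = nodes * req_per_node
            power = nodes * (watts_per_node
                             + rack_units_per_node * overhead_w_per_ru)
            return throughput / power

        # 40 Atom-class nodes vs. 8 high-end servers (hypothetical figures).
        small = requests_per_watt(40, req_per_node=500, watts_per_node=35,
                                  rack_units_per_node=0.5, overhead_w_per_ru=20)
        big = requests_per_watt(8, req_per_node=3000, watts_per_node=350,
                                rack_units_per_node=2, overhead_w_per_ru=20)

        print(f"low-power nodes: {small:.1f} req/s per watt")
        print(f"high-end servers: {big:.1f} req/s per watt")

        # Whether the cheap nodes win depends heavily on how parallel
        # ("shallow") the workload really is.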
