Energy Star For Servers Falls Short

tsamsoniw writes "The newly released Energy Star requirements for servers may not prove all that useful for companies shopping for the most energy-efficient machines on the market, InfoWorld reports. For starters, the spec only considers how much power a server consumes when it's idling, rather than gauging energy consumption at various levels of utilization. That's like focusing on how much gas a vehicle consumes at stop lights instead of when it's moving. Also, the spec doesn't care whether a server's processors have one core or multiple cores — even though multi-core servers deliver more work per watt. Though this first version of Energy Star for servers isn't entirely without merit, the EPA needs to refine the spec to make it more meaningful."
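One possible shape for the utilization-weighted rating the submitter is asking for is a SPECpower-style average across load levels. A minimal sketch, with invented load points, weights, and wattages (none of this is part of the actual Energy Star spec):

```python
# Hypothetical utilization-weighted rating. All load points, weights, and
# wattages below are invented for illustration.

# Measured draw (watts) at each utilization level for two example servers.
server_a = {0.00: 180, 0.25: 210, 0.50: 240, 0.75: 270, 1.00: 300}
server_b = {0.00: 150, 0.25: 240, 0.50: 310, 0.75: 370, 1.00: 420}

# Assumed share of lifetime spent at each load level.
time_weights = {0.00: 0.60, 0.25: 0.20, 0.50: 0.10, 0.75: 0.05, 1.00: 0.05}

def weighted_power(draw_by_load, weights):
    """Expected average draw for a given utilization profile."""
    return sum(weights[load] * watts for load, watts in draw_by_load.items())

for name, profile in (("A", server_a), ("B", server_b)):
    print(f"Server {name}: {weighted_power(profile, time_weights):.1f} W average")

# Server B wins an idle-only comparison (150 W vs 180 W) yet loses the
# weighted one (208.5 W vs 202.5 W) -- exactly the gap an idle-only
# rating cannot see.
```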
  • by 1sockchuck ( 826398 ) on Friday May 22, 2009 @04:56AM (#28050993) Homepage
    All fair criticisms, but it's a first step. The EPA plans to address many of the shortcomings of the current Energy Star for Servers program in an expanded Tier 2 spec [datacenterknowledge.com] that is scheduled to arrive in the fall of 2010. The update is intended to expand the program to include blade servers and servers with more than four processors.
    • No, the big problem is that it takes something complex, like specifying server hardware, and dumbs it down to a little sticker. When I evaluate servers, power consumption is relatively low on my list, coming after reliability and performance. Still, I wish Dell/HP/IBM would do a better job of showing power consumption in their server specs. Not that I'm particularly concerned about being green, but I do need to account for the loads on my cooling systems, UPS, and backup generator.
      • Re: (Score:3, Interesting)

        by TooMuchToDo ( 882796 )
        You're not the target market. My employer purchases tens of thousands of servers a year. One of our primary considerations is power efficiency. You know, total cost of ownership and all that jazz.
    • by Z00L00K ( 682162 )

      The work pattern of computers varies wildly, whether they're servers or workstations.

      But many servers have long idle periods, and some run at very low load for long stretches, so the idle-consumption factor is valid.

  • Atom (Score:5, Informative)

    by googlesmith123 ( 1546733 ) on Friday May 22, 2009 @05:04AM (#28051039)
    Intel is releasing an Atom CPU for servers. It's not very powerful, but I reckon it has the highest performance per watt of anything out there.
    • Re:Atom (Score:5, Interesting)

      by derGoldstein ( 1494129 ) on Friday May 22, 2009 @05:38AM (#28051187) Homepage
      There's also the FAWN project [technologyreview.com] (also on /. [slashdot.org])

      Cores-per-die is not a valid metric, not with emerging prototypes that could drastically change how web content is served.
      • Re: (Score:3, Interesting)

        by mangu ( 126918 )

        There's also the FAWN project

        That's a very interesting link; I had never heard of that. I wonder how it compares with CUDA for parallel numerical computation? The article mentions that they are considering using this concept for scientific computation.

        • The project's site is located here [cmu.edu]. There's quite a bit of information there (check out the first PDF [cmu.edu] at the bottom of the page).

          nVidia's CUDA takes a drastically different approach to parallelism, as well as a fundamentally different instruction set, which I assume is more appropriate for heavy computation. The cores are on the same die, for one thing, and I'm willing to bet it's easier to program out of the box. Of course, I'm just inferring; I've never worked with the architecture.

      • Re:Atom (Score:4, Insightful)

        by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Friday May 22, 2009 @07:55AM (#28052155) Homepage Journal

        FAWN is what Google is already doing. If you tried getting even cheaper compute nodes you'd run into price-per-port problems making it all talk. There IS a form of this that works, though. It's called blade computing, and we do it already. Using a stack of 500 MHz Geodes is NOT an effective way to get work done. Turning off idle servers IS. Server consolidation IS. Using a stack of commodity systems IS sensible, but not super-gutless ones. You need significant compute power per network port.

        • Re: (Score:3, Informative)

          Google is leveraging economy of scale with their cargo containers [slashdot.org]. The primary benefits are modularity, and off-the-shelf components/interfaces.

          However if you look at power usage and usage of space (which also translates into power, because of infrastructure costs), if you need "shallow web servers", then parallelizing even "weaker" nodes could yield a better bottom line.

          Blade computing, specifically, is extremely expensive. The reason is simply that you're buying high-end components which are intended
          • by lukas84 ( 912874 )

            Yeah, but it's also important to note that Google's model makes a lot of sense if IT is _the_ core of your business.

            But for a company where IT is just there to provide necessary infrastructure to keep their core business running, developing your own hardware like this makes very little sense.

    • Since when did the Atom have the highest performance per watt? People buy them because they are CHEAP and use only 3 watts. That doesn't mean they score high on the performance-per-watt scale. The Core 2 doesn't have a much higher TDP than the Atom, yet has the option of increasing its draw if required. While some tasks will pin the Atom at 100% CPU, the Core 2 can do the same work at much lower utilization, get the job done way faster, and return to idle.

      http://www.tomshardware.com/reviews/intel-atom-efficienc [tomshardware.com]
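A back-of-the-envelope version of that "race to idle" argument. Every wattage and timing here is an illustrative guess, not a measurement of any real CPU:

```python
# Race-to-idle sketch with invented numbers: what matters is total energy
# per job over a fixed window, not peak draw.

def job_energy(active_watts, idle_watts, job_seconds, window_seconds):
    """Joules used over a window: run the job, then idle out the remainder."""
    return active_watts * job_seconds + idle_watts * (window_seconds - job_seconds)

window = 60.0  # one job must complete somewhere in each 60-second window

slow_low_power = job_energy(active_watts=4, idle_watts=1,
                            job_seconds=55, window_seconds=window)
fast_race_to_idle = job_energy(active_watts=20, idle_watts=2,
                               job_seconds=4, window_seconds=window)

print(f"slow chip: {slow_low_power:.0f} J, fast chip: {fast_race_to_idle:.0f} J")
# slow: 4*55 + 1*5 = 225 J; fast: 20*4 + 2*56 = 192 J. With these made-up
# figures the faster chip wins by spending 56 of the 60 seconds at idle.
```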

    • Nah, these guys [sicortex.com] have the highest performance per watt (excluding initial setup cost)

      But they're taking the supercomputer angle rather than server farm angle. Unless you cache everything in RAM, your webserver won't have enough I/O. Not feasible for a company like Google, but potentially feasible for MMOs or even sites like /. (Where you have more processing than disk IO. Heck, with terabytes of memory, just cache everything in RAM until the discussion is locked.)

      But honestly, I wouldn't want to deal with another

    • An Atom CPU is, clock for clock, more than an order of magnitude more power-hungry than an ARM CPU. I'd be very surprised if it does an order of magnitude more work per clock (and, yes, the Cortex A8 and A9 do have FPUs and vector units). Something like the OMAP3 draws under 250mW for the Cortex A8 core, the DSP core, the OpenGL ES 2 GPU core, 256MB of RAM and 512MB of flash under full load. The Atom draws closer to 2W for just the CPU core. Using Intel and power efficiency in the same sentence is laughable.
  • No, it isn't (Score:5, Insightful)

    by Idaho ( 12907 ) on Friday May 22, 2009 @05:10AM (#28051067)

    That's like focusing on how much gas a vehicle consumes at stop lights instead of when it's moving.

    No, it's not. As usual, car analogies are stupid.

    Cars do not spend the majority of their time idling at traffic lights. Computers (especially servers), however, do often end up idling a very large percentage of the time.

    Data centers do charge for (actual) power usage, so of course the actual (typically 95th-percentile) usage should be taken into account, but it's still a broken analogy.
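For reference, a small sketch of the 95th-percentile ("burstable") billing mentioned above; the samples are invented, and real meters typically record 5-minute averages:

```python
import random

# An invented month of 5-minute power samples (watts): idle at 180 W,
# with 320 W bursts roughly 4% of the time.
random.seed(1)
samples = [random.choice([180] * 24 + [320]) for _ in range(8640)]

# Standard 95th-percentile method: sort, discard the top 5% of samples,
# and bill on the highest remaining value.
samples.sort()
billable = samples[int(len(samples) * 0.95) - 1]
print(f"95th-percentile draw: {billable} W")
# The bursts fall inside the discarded top 5%, so this mostly-idle server
# is billed at its idle draw -- one reason idle numbers matter.
```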

    • by value_added ( 719364 ) on Friday May 22, 2009 @05:53AM (#28051257)

      Well, it's less broken if you consider that in major metropolitan areas, cars do spend much of their time idling at traffic lights (typically with air conditioning running), as well as on congested city streets and freeways. Then, of course, there's the drive-thrus for those too fat to get out of their cars. ;-)

      As for car analogies generally being stupid, yeah, you're right. But so are most of the alternatives. The reason why "sound bites", for example, are preferable to hour-long analyses or 5,000-word flabby blog posts isn't that people don't want a full understanding; it's just that doing so is too much work. It's like having to evaluate a car purchase based on specifications instead of ... oh, wait.

      • Re: (Score:3, Funny)

        Well, it's less broken if you consider that in major metropolitan areas

        It's less broken?

        Listen here: either it works, or it's broken. There's no grey area here. I'm not going to buy an analogy and have it crap out on me when conditions become a bit sketchy. Reliability is key in this business -- if an analogy has any downtime, I'm liable for it. You might as well buy a car and expect it to...

        ugh...

    • Re: (Score:3, Interesting)

      Regardless of the analogy (they were probably just thinking "dumb it down because we consider the people who read InfoWorld -- our audience -- to be idiots"), the part about the idling time usually isn't the case. Data centers will often outsource whatever "idle machine time" they have to various institutions, at least if they have any sense.
      There are many computing tasks that aren't too time sensitive, and research projects can have considerable leeway in terms of when the final computation is done and the

      • Re: (Score:3, Insightful)

        by Zerth ( 26112 )

        Unless that CNC is the chokepoint for your shop or doesn't interact with any other resources in your shop, it should sit idle some of the time. Otherwise you are just creating excess work-in-process inventory.

        • I'm not entirely sure whether you were referring to the CNC analogy or an actual machine shop, so I'll assume both:

          Analogy: You're always going to have exceptions, but if you can quickly re-task servers then there's no reason for them to sit still any of the time, unless you can't manage to find clients for your resource.

          Literally a machine shop: I worked on a few projects involving automation and software/hardware interfaces, most of which required on-site installation. I don't know much about actually run

          • by Zerth ( 26112 )

            Yeah, I was taking the literal case. :) If it is a job shop where the CNC is the only step on a piece, then sure, it should be running as much as possible.

            But if it is part of a line, then even if the machine is expensive, it shouldn't be running near full time unless it is the slowest machine in the line. The only machine running near full utilization should be the bottleneck; every other step should have excess/unused capacity relative to that bottleneck.

            If you fall into the trap of "the machines are

      • Re: (Score:3, Informative)

        by ergo98 ( 9391 )

        Data centers will often outsource whatever "idle machine time" they have to various institutions, at least if they have any sense.

        I think you just imagined that.

        Very, very, very, very (x4) few data centers do anything of the sort. And the truth is that the vast majority of servers spend the vast majority of their time waiting for something to do.

        • And the truth is that the vast majority of servers spend the vast majority of their time waiting for something to do.

          True. This is because of the warm and fluffy economy of ~2 years ago.

          I said "often", not "most". And the amount will increase if they want to keep a roof over their heads.

    • Re: (Score:3, Insightful)

      by mangu ( 126918 )

      Cars do not spend the majority of their time idling at traffic lights.

      I live in a place with severe traffic congestion problems, you insensitive clod!

      Seriously, I think the car analogy is not so bad here. Too many people drive in the inner city using cars designed for cruising in an open freeway. Consider this: if so many cars weren't used in congested traffic, where would traffic congestion come from?

    • Re: (Score:3, Insightful)

      by MobyDisk ( 75490 )

      Actually, I disagree. The analogy is very good.

      Cars do not spend the majority of their time idling at traffic lights. Computers (especially servers), however, do often end up idling a very large percentage of the time.

      Neither statement is universally true.

      Taxis, for example, may spend the majority of their time idling. So do big-city rush-hour commuters. And many servers idle 90% of the time, while others idle 10% of the time.

      You can't make blanket statements about cars' idle time or computers' idle time, since it probably varies 10000:1 based on usage.

    • Re: (Score:3, Insightful)

      by iamhassi ( 659463 )
      "That's like focusing on how much gas a vehicle consumes at stop lights"

      No, it's not. As usual, car analogies are stupid.


      I'd have to agree, bad analogy. MPG at stoplights is 0 for all cars since you're not moving. You'd have to come up with a whole new rating scheme if you wanted to determine how much gas a vehicle consumes at stop lights, like ounces consumed per hour while idling.

      I'd say a better car analogy (if you must have one) would be to focus on what a vehicle gets on the highway only...
    • by AmiMoJo ( 196126 )

      I guess you don't live in the same city as me then - I spend a lot of time getting exactly zero miles per gallon in traffic. Then again, we do have the longest traffic light waits in the country.

      Last week I put down a deposit on a Mitsubishi with "stop and go". Basically it turns the engine off when you stop and put the handbrake on. It's clever enough not to do it if you have the wheels turned (i.e. waiting to turn into a road) or if the battery is low, etc.

      Apparently BMW and Mercedes have something similar

    • by HTH NE1 ( 675604 )

      That's like focusing on how much gas a vehicle consumes at stop lights instead of when it's moving.

      Cars do not spend the majority of their time idling at traffic lights.

      No, they spend it idling in their garage, and then you're less concerned with fuel efficiency than with their rate of production of carbon monoxide vs. its rate of escape from the garage.

      Wait, what were we talking about again?

  • by Brama ( 80257 ) on Friday May 22, 2009 @05:14AM (#28051083) Homepage

    Comparing a server idling to a car in front of a red light is seriously wrong. Servers in general tend to spend a _lot_ more time idling than cars spend waiting at red traffic lights. There'll always be servers that _do_ fully utilize their resources, but most of them will idle a lot. So it makes perfect sense to take that as a generic guideline.

    • by Chrisq ( 894406 )
      This is certainly true of most servers, but is it true of virtualised servers in really big data centres? I would have thought that sizing, evening out of load, etc. would mean that there would be some level of constant use.
      • Re: (Score:3, Informative)

        by amorsen ( 7485 )

        It's hard to even away the intra-day variation. I work for a phone company serving corporate customers only, and basically all calls happen between 7am and 6pm. We run batch tasks at night, but they can't compare to the load that customers put on the servers during the day. The addition of cell phone calls has given our servers a bit more to play with at night, at least.

        I suppose we could try to sell excess capacity at night, but I doubt we could make enough to make up for the required extra staff and hardware

      • This is certainly true of most servers, but is it true of virtualised servers in really big data centres?

        No. The biggest reason for virtualized servers is that everyone noticed that typical servers spend much of their time idle, so if we throw 4 servers into one physical box, the hardware will stay utilized. This means we need fewer physical boxes, which means we need less power.
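The arithmetic behind that claim, with made-up wattages:

```python
# Consolidation arithmetic with invented wattages.
standalone_idle_w = 200   # a typical mostly-idle 1U box
standalone_count = 4
virt_host_w = 450         # one beefier host carrying all four as VMs

before = standalone_idle_w * standalone_count   # 800 W
after = virt_host_w                             # 450 W
print(f"{standalone_count} idle boxes: {before} W -> 1 host: {after} W "
      f"({100 * (before - after) / before:.0f}% less)")
```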

        • by rawler ( 1005089 )

          OR, skip the overhead of virtualization, and use the Operating System.

          From webster:
          operating system : software that controls the operation of a computer and directs the processing of programs (as by assigning storage space in memory and controlling input and output functions)

          The key here is program_s_, as opposed to program. A modern server operating system is designed to do most of the things that people are now cheering for virtualisation to do. Virtualisation solutions however, will either evolve into a n

          • For the most part, I agree with you: I have seen some very stupid implementations. But we don't live in a homogeneous world.

            One of the principal advantages of virtualization, however, is that the guest operating systems need not be the same OS. For example, you could have a LAMP stack running on one VM guest and an Exchange server running on another.

            Furthermore, there are specific reasons why you might want at least the appearance of separated machines for each tier of N-tier solution. Most of these are

            • by rawler ( 1005089 )

              Agreed, there are a lot of valid use-cases for host-level virtualization. Another one is testing, where you're able to set up really close-to-production systems for staging tests.

              For the cross-OS problem, yeah, you will have to have a bunch of hosts, either physical or virtual, where virtual may save you some problems and give you others. (The famous system-clock problem in time-critical apps, for example.) The important thing here, I'd say, would be to still try to keep the number of OS instances down, s

          • Two words for you: Process migration. It is not well supported by most operating systems, but it is by most hypervisors.

            Imagine you have 4 VMs that are normally idle, but at any given time two of them might be fully loaded. If you had physical machines, you'd need four computers. With VMs, you have two and live-migrate the two busy ones to the two real machines. For bonus points, you can shut down one of the real machines when all four VMs are idle.

            The best thing about this kind of solution is that it
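A toy sketch of that migrate-and-consolidate idea. This is not VMware DPM or any real hypervisor API; the packing policy, names, and numbers are all invented:

```python
# First-fit-decreasing packing of VM loads onto as few hosts as possible;
# any host left without VMs can be powered down until load returns.

def place_vms(vm_loads, host_capacity):
    """Return a list of hosts, each a list of the VM loads placed on it."""
    hosts = []
    for load in sorted(vm_loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)
                break
        else:
            hosts.append([load])  # no room anywhere: power on another host
    return hosts

# Four VMs, two of them busy, normalized so each host holds 1.0 of load.
hosts = place_vms([0.45, 0.40, 0.05, 0.05], host_capacity=1.0)
print(f"{len(hosts)} host(s) powered on: {hosts}")
# Everything fits on one host here, so the second physical box can sleep.
```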

            • by rawler ( 1005089 )

              Agreed, there are a few things that hypervisors do better than most OSes around. I'm not arguing against the use of host-level virtualization; I'm just questioning 90% of how I see it being used in practice.

              As for live migration, it's mostly a question of scaling up, which, from a purely theoretical standpoint, I would assume app-level architecture would do a lot better than emulated hardware, especially if the workload is I/O-intensive. Secondly it's not really the most common use-case I see in t

        • Server Virtualization is like Car Pooling. It takes the empty seats in your car and puts a body into them. This means you're efficiently using not only your car but the highway to get people to work.

        • Re: (Score:3, Insightful)

          by Abcd1234 ( 188840 )

          No. The biggest reason for virtualized servers is that everyone noticed that typical servers spend much of their time idle, so if we throw 4 servers into one physical box, the hardware will stay utilized. This means we need fewer physical boxes, which means we need less power.

          Except, of course, that those servers? Yeah, they're typically busy *at the same times*, because when they're busy, they're busy because people are working.

          Personally, I'm extremely skeptical of the idea that virtualization means th

      • by ergo98 ( 9391 )

        This is certainly true of most servers, but is it true of virtualised servers in really big data centres? I would have thought that sizing, evening out of load, etc. would mean that there would be some level of constant use.

        Outside of low-end web hosting, virtualization is still generally in its infancy (though I expect products like vSphere 4 to change things considerably).

        And even in cases where multiple servers are virtualized onto one set of hardware, the candidates for virtualization tend to be extremely lo

    • by smoker2 ( 750216 )
      Not only that, but a car idling at the lights is using more fuel per revolution than its most efficient mode, whereas a server at idle is using the very least energy it can. A car is most energy efficient when doing 56 mph but a server is not more energy efficient under a 65% load. So epic car analogy fail all round.
      • A car uses less fuel per unit of time sitting at a traffic light than it does moving, unless there is something badly wrong with the design of your car. It naturally uses more fuel per mile, because you aren't going anywhere. I would imagine that servers are similar: when they are sitting idle, the ratio of useful work to power is terrible, because no work is being done, whereas under full load more power is used but it is useful work, like when you are using your car to get somewhere.

        Of course in both situations

  • the spec only considers how much power a server consumes when it's idling, rather than gauging energy consumption at various levels of utilization. That's like focusing on how much gas a vehicle consumes at stop lights instead of when it's moving

    In time, you will call *me* master.

    • My name is Torgo. I take care of the place when the master is away.

      (sorry. mention "master" and you get a Manos or a Dr. Who quote every time.)

  • This is a great v1 (Score:4, Insightful)

    by sirwired ( 27582 ) on Friday May 22, 2009 @05:49AM (#28051239)

    Speccing by idle power consumption was a great idea. How exactly was the EPA supposed to grade servers based on CPU "efficiency" when each CPU differs so much? Which of the bazillion CPU benchmarks out there do you choose? This would be a short trip into an epic flame war between vendors, meaning that the spec would never get passed. "Politics is the art of the possible"

    Given that most servers spend almost all their time idle anyway, this could certainly be a big money and energy saver. If you ever stroll through an actual large datacenter, you can see, via the HDD lights, that most of that gear just sits there all day long, doing little actual work. Certainly there are some servers lit up constantly, and virtualization will help to clean some of the idle servers up, but many shops don't do much virtualizing yet.

    SirWired

    • I fully agree. With other EPA ratings, they compare similar-sized appliances with each other. Your dorm fridge's rating won't be compared to a full-size fridge, which could be quite a bit more efficient. The customer has probably figured out what size server they want to buy before they look at the energy ratings. If you've decided on the specs of your server, you can look at servers from several different companies who can provide you with similar hardware. At that point, if one has a better Energy Star
    • Performance data is vital for something like this. I am not sure whether it is taken into account in any way, since the specification download is broken on the Energy Star website.

      Basically this is because if you have a server that can handle twice the load, then it can use twice as much idle power and be just as efficient as two low-performance servers. So the performance of the server, although very hard to measure, is needed to make the rating anything other than worthless.

      • Yes, under load, a server that can handle twice the work for the same power is twice as efficient, but very few servers outside of Bazillion $ supercomputer clusters spend all their time under full load.

        Also, either a machine is EnergyStar stickered or it isn't. How do you decide on a standard load? Some boxes are I/O monsters, others have crappy I/O, but have fast CPUs. How do you decide which workload the EnergyStar cert is based on?

        Yes, it would be nice if the standard could work in performance someho

        • Whether the server is under load or not is irrelevant as I attempted to point out in my previous post.

          To clarify, I am working on the assumption that if you need the performance of one good server or two worse servers, then you will buy either the one good server or the two worse servers. Thus when the servers are idle you will still have one good server or two worse servers. So if the idle power of one good server is lower than the idle power of two worse servers, then you will use less power with the good
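A worked version of that argument, with invented wattages:

```python
# One big server vs. two half-capacity servers, sized for the same peak.
# All numbers are invented for illustration.
big_idle_w = 160      # the big server idles higher...
small_idle_w = 110    # ...than one small server, but you need two of those

fleet_small_idle_w = 2 * small_idle_w   # 220 W idle for equal capacity
print(f"one big server: {big_idle_w} W idle; "
      f"two small servers: {fleet_small_idle_w} W idle")
# An idle-only sticker favors the small box (110 W < 160 W), yet the fleet
# you actually have to deploy idles at 220 W vs 160 W. Without a
# performance term, the rating ranks the worse choice higher.
```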

  • the spec only considers how much power a server consumes when it's idling, rather than gauging energy consumption at various levels of utilization. That's like focusing on how much gas a vehicle consumes at stop lights

    While it would be better to include other metrics in a weighted average or something along with this, it's not entirely wrong. At least in the microcomputer world, most servers operate when businesses do, and in the majority of businesses they may not be utilized even all of that time. Virtualization is helping to reduce idle time on machines, but the way I figure it, even VM hosts are likely to be idle more than they are not. In large enterprises these figures are different, given time zones and global footprints, although if you're multinational you probably have multiple datacenters which host local services and put the numbers back in line somewhat there as well. I would wager that of the total number of microcomputer servers out there, most are owned by small to medium businesses, simply because most businesses are in the SMB class.

    That means the machines run all the time but are probably idle all but eight to ten hours of the twenty-four in a day, and only five of the seven days in a week. That is roughly 29% of the time in use; the rest is idle time. So efficiency at idle is going to be the driving measure.
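Checking that figure:

```python
# Duty-cycle check: ten busy hours a day, five days a week.
busy_hours = 10 * 5   # 50 busy hours per week
week_hours = 24 * 7   # 168 hours per week
print(f"{busy_hours / week_hours:.1%} busy, {1 - busy_hours / week_hours:.1%} idle")
# ~29.8% busy, ~70.2% idle, matching the rough 29% above.
```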

  • Build a server with asymmetric processors...
    Something like an Atom for idle use, and a bunch of quad cores that get activated when you actually do anything... Configure the disks to shut off when idle etc...

  • energy consumption at various levels of utilization.

    Think of the energy savings if you just said "use". Particularly if you utilize that word a lot.

  • I mean, why try to make a broad, all-encompassing standard for energy efficiency just to slap a sticker on the ones that "pass"? This works well for a product that is as relatively simple as a washer, dryer, or water heater, but I think a better idea would be to have Dell post a 188/304 number on each server: the low is power draw when idle, the high is power draw when running some standard load-test software.
  • Putting "Energy Star" and "useful" in the same sentence?
  • SGI had an Atom-based supercomputer on the drawing board: http://www.pcmag.com/article2/0,2817,2334887,00.asp [pcmag.com]

    Quote:

    "The key to the concept, SGI said, was its Kelvin cooling technology, which could pack 10,000 cores into a single rack. Combining the Atom processor with the Kelvin technology could generate seven times better memory performance per watt than a single-rack X86 cluster. Molecule could also process 20,000 concurrent threads, forty times more than the rack, and 15 terabytes/s of memory performance

  • by acoustix ( 123925 ) on Friday May 22, 2009 @09:00AM (#28053189)

    VMware Distributed Power Management [youtube.com]

    Supposedly it will cut your server power usage by 50%.

  • I used to run a little server at home. Then I got an electricity bill for £400. Now the server is off. It would be very useful to me to be able to compare servers' power usage while idling, as that is what my server was doing 90% of the time.

  • It seems people are already hard at work creating a better solution to this, basically allowing servers to run more efficiently than they would in a standard rack....

    http://datacenterjournal.com/index.php?option=com_content&task=view&id=2620&Itemid=43 [datacenterjournal.com]

    http://www.missioncriticalmagazine.com/CDA/Articles/Products/BNP_GUID_9-5-2006_A_10000000000000564830 [missioncri...gazine.com]

    http://www.youtube.com/watch?v=knTHr8BQ8rc [youtube.com]

    http://www.nerdsociety.com/2008/09/24/interview-with-spear-co-founder/ [nerdsociety.com]

    Maybe you'll find
