Energy Star For Servers Falls Short
tsamsoniw writes "The newly released Energy Star requirements for servers may not prove all too useful for companies shopping for the most energy-efficient machines on the market, InfoWorld reports. For starters, the spec only considers how much power a server consumes when it's idling, rather than gauging energy consumption at various levels of utilization. That's like focusing on how much gas a vehicle consumes at stop lights instead of when it's moving. Also, the spec doesn't care whether a server's processors have one core or multiple cores — even though multi-core servers deliver more work at fewer watts. Though this first version of Energy Star for servers isn't entirely without merit, the EPA needs to refine the spec to make it more meaningful."
Improved Version Coming Next Year (Score:5, Informative)
Re: (Score:2)
Re: (Score:3, Interesting)
Re: (Score:2)
The work pattern of computers varies wildly, whether they're servers or workstations.
But many servers have long idle periods, and some run at very low load for long stretches, so for them the idle-consumption factor is valid.
Atom (Score:5, Informative)
Re:Atom (Score:5, Interesting)
Cores-per-die is not a valid metric, not with emerging prototypes that could drastically change how web content is served.
Re: (Score:3, Interesting)
That's a very interesting link; I had never heard of that. I wonder how it compares with CUDA for parallel numerical computation? The article mentions that they are considering using this concept for scientific computation.
Re: (Score:2)
The project's site is located here [cmu.edu]. There's quite a bit of information there (check out the first PDF [cmu.edu] at the bottom of the page).
nVidia's CUDA takes a drastically different approach to parallelism, as well as a fundamentally different instruction set, which I assume is more appropriate for heavy computation. The cores are on the same die, for one thing, and I'm willing to bet it's easier to program out of the box. Of course, I'm just inferring; I've never worked with the architecture.
Re:Atom (Score:4, Insightful)
FAWN is what Google is already doing. If you tried getting even cheaper compute nodes, you'd run into price-per-port problems making it all talk. There IS a form of this that works, though. It's called blade computing, and we do it already. Using a stack of 500 MHz Geodes is NOT an effective way to get work done. Turning off idle servers IS. Server consolidation IS. Using a stack of commodity systems IS sensible, but not super-gutless ones. You need significant compute power per network port.
Re: (Score:3, Informative)
However, if you look at power usage and use of space (which also translates into power, because of infrastructure costs), then if all you need is "shallow web servers", running even "weaker" nodes in parallel could yield a better bottom line.
Blade computing, specifically, is extremely expensive. The reason is simply that you're buying high-end components which are intended
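To make the work-per-watt argument above concrete, here's a rough sketch; every figure below is a made-up assumption for illustration, not a benchmark:

# Rough sketch of the work-per-watt argument for "shallow" serving
# workloads. All figures are illustrative assumptions, not measurements.

def requests_per_watt(requests_per_sec, watts):
    return requests_per_sec / watts

big_box = requests_per_watt(requests_per_sec=20_000, watts=400)   # one fast server
weak_node = requests_per_watt(requests_per_sec=1_500, watts=15)   # one wimpy node

print(f"big box:   {big_box:.0f} req/s per watt")    # 50
print(f"weak node: {weak_node:.0f} req/s per watt")  # 100
# The catch raised upthread: per-port networking and per-node overhead
# can erase this advantage for anything deeper than shallow workloads.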
Re: (Score:1)
Yeah, but it's also important to note that Google's model makes a lot of sense if IT is _the_ core of your business.
But for a company where IT is just there to provide the infrastructure needed to keep the core business running, developing your own hardware like this makes very little sense.
Re: (Score:2)
Since when did the Atom have the highest performance per watt? People buy them because they are CHEAP and use only 3 watts. That doesn't mean they score high on the P:W scale. The Core 2 doesn't have a much higher TDP than the Atom, yet it has the option of increasing its draw when required. While some tasks will pin the Atom at 100% CPU, the Core 2 can do the same work at much lower CPU utilization, get the job done way faster, and return to idle.
http://www.tomshardware.com/reviews/intel-atom-efficienc [tomshardware.com]
Re: (Score:2)
It simply depends on what your task is.
Re: (Score:2)
Nah, these guys [sicortex.com] have the highest performance per watt (excluding initial setup cost).
But they're taking the supercomputer angle rather than the server-farm angle. Unless you cache everything in RAM, your webserver won't have enough I/O. Not feasible for a company like Google, but potentially feasible for MMOs or even sites like /. (where you have more processing than disk I/O; heck, with terabytes of memory, just cache everything in RAM until the discussion is locked).
But honestly, I wouldn't want to deal with another
Re: (Score:2)
No, it isn't (Score:5, Insightful)
No, it's not. As usual, car analogies are stupid.
Cars do not spend the majority of their time idling at traffic lights. Computers (especially servers), however, do often end up idling a very large percentage of the time.
Data centers do charge for (actual) power usage, so of course the actual (typically 95th-percentile) usage should be taken into account, but it's still a broken analogy.
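For anyone unfamiliar with 95th-percentile billing: sort the samples, discard the top 5%, and bill on the highest remaining reading. A minimal sketch, with made-up sample values:

# Minimal sketch of 95th-percentile billing. The sample values and the
# helper function are made up for illustration.
power_samples_watts = [180, 175, 320, 410, 190, 185, 600, 175, 180, 450]

def percentile_95(samples):
    ordered = sorted(samples)
    index = max(int(len(ordered) * 0.95) - 1, 0)  # drop the top 5% of samples
    return ordered[index]

print(percentile_95(power_samples_watts))  # the 600 W spike is ignored; bills at 450 W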
Re:No, it isn't (Score:5, Funny)
Well, it's less broken if you consider that in major metropolitan areas, cars do spend much of their time idling at traffic lights (typically with air conditioning running), as well as on congested city streets and freeways. Then, of course, there's the drive-thrus for those too fat to get out of their cars. ;-)
As for car analogies generally being stupid, yeah, you're right. But so are most of the alternatives. The reason why "sound bites", for example, are preferable to hour-long analyses or 5,000-word flabby blog posts isn't that people don't want a full understanding; it's just that getting one is too much work. It's like having to evaluate a car purchase based on specifications instead of ... oh, wait.
Re: (Score:3, Funny)
Well, it's less broken if you consider that in major metropolitan areas
It's less broken?
Listen here: either it works, or it's broken. There's no grey area here. I'm not going to buy an analogy and have it crap out on me when conditions become a bit sketchy. Reliability is key in this business -- if an analogy has any downtime, I'm liable for it. You might as well buy a car and expect it to...
ugh...
Re: (Score:3, Interesting)
Regardless of the analogy (they were probably just thinking "dumb it down, because we consider the people who read InfoWorld -- our audience -- to be idiots"), the part about the idling time usually isn't the case. Data centers will often outsource whatever "idle machine time" they have to various institutions, at least if they have any sense.
There are many computing tasks that aren't too time sensitive, and research projects can have considerable leeway in terms of when the final computation is done and the
Re: (Score:3, Insightful)
Unless that CNC machine is the chokepoint of your shop, or doesn't interact with any other resources in your shop, it should sit idle some of the time. Otherwise you are just creating excess work-in-process inventory.
Re: (Score:2)
I'm not entirely sure whether you were referring to the CNC analogy or an actual machine shop, so I'll assume both:
Analogy: You're always going to have exceptions, but if you can quickly re-task servers, then there's no reason for them to sit still any of the time, unless you can't manage to find clients for your resources.
Literally a machine shop: I worked on a few projects involving automation and software/hardware interfaces, most of which required on-site installation. I don't know much about actually run
Re: (Score:2)
Yah, I was taking the literal case. :) If it is a job shop where the CNC is the only step on a piece, then sure, it should be running as much as possible.
But if it is part of a line, then even if the machine is expensive, it shouldn't be running near full time unless it is the slowest machine in the line. The only machine running near full utilization should be the bottleneck; every other step should have excess/unused capacity relative to that bottleneck.
If you fall into the trap of "the machines are
Re: (Score:3, Informative)
I think you just imagined that.
Very, very, very, very (x4) few data centers do anything of the sort. And the truth is that the vast majority of servers spend the vast majority of their time waiting for something to do.
Re: (Score:2)
And the truth is that the vast majority of servers spend the vast majority of their time waiting for something to do.
True. This is because of the warm and fluffy economy of ~2 years ago.
I said "often", not "most". And the amount will increase if they want to keep a roof over their heads.
Re: (Score:3, Insightful)
I live in a place with severe traffic congestion problems, you insensitive clod!
Seriously, I think the car analogy is not so bad here. Too many people drive in the inner city using cars designed for cruising on an open freeway. Consider this: if so many cars weren't being used in congested traffic, where would the traffic congestion come from?
Re: (Score:3, Insightful)
Actually, I disagree. The analogy is very good.
Cars do not spend the majority of their time idling at traffic lights. Computers (especially servers), however, do often end up idling a very large percentage of the time.
Both statements are not universally true.
Taxis, for example, may spend the majority of their time idling. So do big-city rush-hour commuters. And many servers idle 90% of the time, while others idle 10% of the time.
You can't make blanket statements about cars' idle time, or computers' idle time, since it probably varies 10,000:1 based on the usage.
Re: (Score:3, Insightful)
No, it's not. As usual, car analogies are stupid.
I'd have to agree, bad analogy. MPG at stoplights is 0 for all cars since you're not moving. You'd have to come up with a whole new rating scheme if you wanted to determine how much gas a vehicle consumes at stop lights, like ounces consumed per hour while idling.
I'd say a better car analogy (if you must have one) would be to focus on what a vehicle gets on the highway only...
Re: (Score:2)
I guess you don't live in the same city as me then - I spend a lot of time getting exactly zero miles per gallon in traffic. Then again, we do have the longest traffic light waits in the country.
Last week I put down a deposit on a Mitsubishi with "stop and go". Basically it turns the engine off when you stop and put the handbrake on. It's clever enough not to do it if you have the wheels turned (i.e. waiting to turn into a road), or if the battery is low, etc.
Apparently BMW and Mercedes have something similar
Re: (Score:2)
That's like focusing on how much gas a vehicle consumes at stop lights instead of when it's moving.
Cars do not spend the majority of their time idling at traffic lights.
No, they spend it idling in their garage, and then you're less concerned with fuel efficiency than you are in their rate of production of carbon monoxide vs. its rate of escape from the garage.
Wait, what were we talking about again?
Yet Another Bogus Car Analogy (Score:5, Insightful)
Comparing a server idling to a car waiting at a red light is seriously wrong. Servers in general tend to spend a _lot_ more time idling than cars spend waiting at red traffic lights. There'll always be servers that _do_ fully utilize their resources, but most of them will idle a lot. So it makes perfect sense to take that as a generic guideline.
Re: (Score:2)
Re: (Score:3, Informative)
It's hard to even out the intra-day variation. I work for a phone company serving corporate customers only, and basically all calls happen between 7am and 6pm. We run batch tasks at night, but they can't compare to the load that customers put on the servers during the day. The addition of cell phone calls has given our servers a bit more to play with at night, at least.
I suppose we could try to sell excess capacity at night, but I doubt we could make enough to make up for the required extra staff and hardware
Re: (Score:1)
This is certainly true of most servers, but is it true of virtualised servers in really big data centres?
No. The biggest reason for virtualized servers is that everyone noticed that typical servers spend much of their time idle, so if we throw 4 servers into one physical box, the hardware stays utilized. This means we need fewer physical boxes, which means we need less power.
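A minimal sketch of that arithmetic, assuming made-up wattages (150 W idle per standalone box, 250 W for the consolidated host):

# Minimal sketch of the consolidation arithmetic above. All wattages
# are illustrative assumptions, not measurements.
idle_watts_per_server = 150    # each lightly loaded standalone box
servers = 4
consolidated_host_watts = 250  # one host running all four as VMs

before = idle_watts_per_server * servers       # 600 W around the clock
after = consolidated_host_watts                # 250 W
kwh_saved = (before - after) * 24 * 365 / 1000
print(f"{kwh_saved:.0f} kWh saved per year")   # ~3066 kWh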
Re: (Score:1)
OR, skip the overhead of virtualization, and use the Operating System.
From Webster:
operating system: software that controls the operation of a computer and directs the processing of programs (as by assigning storage space in memory and controlling input and output functions)
The key here is program_s_, as opposed to program. A modern server operating system is designed to do most of the things that people are now cheering for virtualisation to do. Virtualisation solutions however, will either evolve into a n
Re: (Score:1)
For the most part, I agree with you: I have seen some very stupid implementations. But we don't live in a homogeneous world.
One of the principal advantages of virtualization, however, is that the guest operating systems need not be the same OS. For example, you could have a LAMP stack running on one VM guest and an Exchange server running on another.
Furthermore, there are specific reasons why you might want at least the appearance of separated machines for each tier of N-tier solution. Most of these are
Re: (Score:1)
Agreed, there are a lot of valid use-cases for host-level virtualization. Another one is testing, where you're able to set up really close-to-production systems for staging tests.
For the cross-OS problem, yeah, you will have to have a bunch of hosts, either physical or virtual, where virtual may save you some problems and give you others (the famous system-clock problem in time-critical apps, for example). The important thing here, I'd say, would be to still try to keep the number of OS instances down, s
Re: (Score:2)
Imagine you have 4 VMs that are normally idle, but at any given time two of them might be fully loaded. If you had physical machines, you'd need four computers. With VMs, you have two and live-migrate the two busy ones to the two real machines. For bonus points, you can shut down one of the real machines when all four VMs are busy.
The best thing about this kind of solution is that it
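A toy sketch of the placement logic described above, roughly what tools like VMware's DPM (mentioned below) automate; the 0.5 busy threshold and the little API are assumptions for illustration:

# Toy sketch: give each busy VM its own host, pack the idle VMs
# together, and any host missing from the result can be powered down.

def place_vms(vms, hosts, busy_threshold=0.5):
    """vms: {name: load in 0..1}; hosts: list of host names.
    Returns {host: [vm, ...]}; hosts absent from the result are idle."""
    placement = {}
    free_hosts = list(hosts)
    # Dedicate a host to each busy VM, busiest first.
    for vm, load in sorted(vms.items(), key=lambda kv: -kv[1]):
        if load >= busy_threshold and free_hosts:
            placement[free_hosts.pop(0)] = [vm]
    # Pack the remaining (idle) VMs onto a single host.
    idle = [vm for vm, load in vms.items() if load < busy_threshold]
    if idle:
        host = free_hosts[0] if free_hosts else next(iter(placement))
        placement.setdefault(host, []).extend(idle)
    return placement

# Two busy VMs: each gets a host; the idle pair rides along on host1.
print(place_vms({"web": 0.9, "db": 0.7, "mail": 0.05, "dns": 0.02},
                ["host1", "host2"]))
# -> {'host1': ['web', 'mail', 'dns'], 'host2': ['db']}

# All idle: everything lands on host1, so host2 can be powered off.
print(place_vms({"web": 0.1, "db": 0.1, "mail": 0.05, "dns": 0.02},
                ["host1", "host2"]))
# -> {'host1': ['web', 'db', 'mail', 'dns']}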
Re: (Score:1)
Agreed, there are a few things that hypervisors do better than most OSes around. I'm not arguing against the use of host-level virtualization; I'm just questioning 90% of how I see it being used in practice.
As for live migration, it's mostly a question of scaling up, which, from a purely theoretical standpoint, I would assume app-level architecture would handle a lot better than emulated hardware, especially if the workload is I/O-intensive. Secondly, it's not really the most common use-case I see in t
Re: (Score:2)
Server virtualization is like carpooling: it takes the empty seats in your car and puts bodies in them. This means you're efficiently using not only your car but also the highway to get people to work.
Re: (Score:3, Insightful)
No. The biggest reason for virtualized servers is that everyone noticed that typical servers spend much of their time idle, so if we throw 4 servers into one physical box, the hardware stays utilized. This means we need fewer physical boxes, which means we need less power.
Except, of course, that those servers? Yeah, they're typically busy *at the same times*, because when they're busy, they're busy because people are working.
Personally, I'm extremely skeptical of the idea that virtualization means th
Re: (Score:1)
Outside of low-end web hosting, virtualization is still generally in its infancy (though I expect products like vSphere 4 to change things considerably).
And even in cases were multiple servers are virtualized onto one set of hardware, the candidates for virtualization tend to be extremely lo
Re: (Score:2)
Re: (Score:1)
A car uses less fuel sitting at a traffic light for a given amount of time, unless there is something badly wrong with the design of your car. It naturally uses more fuel per mile, because you aren't going anywhere. I would imagine that servers are similar: when they sit idle, the useful work per unit of power is terrible, because no work is being done; whereas at full load more power is used, but it is useful work, like when you are using your car to get somewhere.
Of course in both situations
I'm looking forward to completing your training (Score:3, Funny)
the spec only considers how much power a server consumes when it's idling, rather than gauging energy consumption at various levels of utilization. That's like focusing on how much gas a vehicle consumes at stop lights instead of when it's moving
In time, you will call *me* master.
Re: (Score:1)
My name is Torgo. I take care of the place when the master is away.
(sorry. mention "master" and you get a Manos or a Dr. Who quote every time.)
This is a great v1 (Score:4, Insightful)
Speccing by idle power consumption was a great idea. How exactly was the EPA supposed to grade servers based on CPU "efficiency" when each CPU differs so much? Which of the bazillion CPU benchmarks out there do you choose? This would be a short trip into an epic flame war between vendors, meaning that the spec would never get passed. "Politics is the art of the possible"
Given that most servers spend almost all their time idle anyway, this could certainly be a big money and energy saver. If you ever stroll through an actual large datacenter, you can see, via the HDD lights, that most of that gear just sits there all day long, doing little actual work. Certainly there are some servers lit up constantly, and virtualization will help to clean some of the idle servers up, but many shops don't do much virtualizing yet.
SiWired
Re: (Score:1)
Re: (Score:1)
Performance data is vital for something like this. I am not sure whether it is taken into account in any way, since the specification download is broken on the Energy Star website.
Basically, this is because if you have a server that can handle twice the load, then it can use twice as much idle power and be just as efficient as two low-performance servers. So the performance of the server, although very hard to measure, is needed to make the rating anything other than worthless.
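A worked version of that point, with made-up figures (capacities in requests/s, idle draw in watts):

# Worked example of the argument above, with illustrative numbers:
# provisioning for a 2000 req/s peak with one big box vs. two small ones.
big = {"peak_capacity": 2000, "idle_watts": 120}
small = {"peak_capacity": 1000, "idle_watts": 70}

idle_big = big["idle_watts"]               # 120 W with the fleet idle
idle_two_small = 2 * small["idle_watts"]   # 140 W with the fleet idle

print(idle_big, idle_two_small)
# The big box idles cheaper overall, even though the single small server
# posts the lower idle number -- exactly what a performance-blind idle
# rating would get backwards.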
Servers spend a lot of time idle (Score:2)
Yes, under load, a server that can handle twice the work for the same power is twice as efficient, but very few servers outside of bazillion-dollar supercomputer clusters spend all their time under full load.
Also, either a machine is EnergyStar stickered or it isn't. How do you decide on a standard load? Some boxes are I/O monsters; others have crappy I/O but fast CPUs. How do you decide which workload the EnergyStar cert is based on?
Yes, it would be nice if the standard could work in performance someho
Re: (Score:1)
Whether the server is under load or not is irrelevant, as I attempted to point out in my previous post.
To clarify, I am working on the assumption that if you need the performance of one good server or two worse servers, then you will buy either the one good server or the two worse servers. Thus, when the servers are idle, you will still have one good server or two worse servers. So if the idle power of one good server is lower than the idle power of two worse servers, then you will use less power with the good
What the thing does most of the time. (Score:2)
the spec only considers how much power a server consumes when it's idling, rather than gauging energy consumption at various levels of utilization. That's like focusing on how much gas a vehicle consumes at stop lights
While it would be better to include other metrics in a weighted average along with this one, it's not entirely wrong. At least in the microcomputer world, most servers operate when businesses do, and in the majority of businesses they may not be utilized even all of that time. Virtualization is helping to reduce idle time on machines, but the way I figure it, even VM hosts are likely to be idle more often than not.

In large enterprises these figures are different, given time zones and global footprints, although if you're multinational you probably have multiple datacenters hosting local services, which brings the numbers back in line somewhat. I would wager that of the total number of microcomputer servers out there, most are owned by small to medium businesses, simply because most businesses are in the SMB class.

That means the machines run all the time but are probably idle all but eight to ten hours of the twenty-four in a day, and only five of the seven days in a week. That is roughly 29% of the time in use; the rest is idle time. So efficiency at idle is going to be the driving measure.
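The arithmetic behind that figure:

# The ~29% figure: in use ten hours a day, five days a week, but
# powered on around the clock.
hours_in_use_per_week = 10 * 5   # 50 hours of business use
hours_powered_per_week = 24 * 7  # 168 hours powered on
print(hours_in_use_per_week / hours_powered_per_week)  # ~0.298, roughly 29-30%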
How to pass... (Score:2)
Build a server with asymmetric processors...
Something like an Atom for idle use, and a bunch of quad-cores that get activated when you actually do anything... Configure the disks to shut off when idle, etc...
How to save energy (Score:2)
Think of the energy savings if you just said "use". Particularly if you utilize that word a lot.
Rather see ratings similar to EPA MPG (Score:2)
Re: (Score:2)
Contradiction? (Score:1)
Atom Servers (Score:2)
SGI had an Atom-based supercomputer on the drawing board: http://www.pcmag.com/article2/0,2817,2334887,00.asp [pcmag.com]
Quote:
"The key to the concept, SGI said, was its Kelvin cooling technology, which could pack 10,000 cores into a single rack. Combining the Atom processor with the Kelvin technology could generate seven times better memory performance per watt than a single-rack X86 cluster. Molecule could also process 20,000 concurrent threads, forty times more than the rack, and 15 terabytes/s of memory performance
Just use VMware's DPM (Score:3, Informative)
VMware Distributed Power Management [youtube.com]
Supposedly it will cut your server power usage by 50%.
Hey. Wait a minute with the criticism. (Score:1)
I used to run a little server at home. Then I got an electricity bill for £400. Now the server is off. It would be very useful to me to be able to compare servers' power usage while idling, as this is what my server was doing 90% of the time.
another solution altogether? (Score:1)
http://datacenterjournal.com/index.php?option=com_content&task=view&id=2620&Itemid=43 [datacenterjournal.com]
http://www.missioncriticalmagazine.com/CDA/Articles/Products/BNP_GUID_9-5-2006_A_10000000000000564830 [missioncri...gazine.com]
http://www.youtube.com/watch?v=knTHr8BQ8rc [youtube.com]
http://www.nerdsociety.com/2008/09/24/interview-with-spear-co-founder/ [nerdsociety.com]
Maybe you'll find