EPA Sends Data Center Power Study to Congress
BDPrime writes "We've all been hearing ad nauseum about power and cooling issues in the data center. Now the EPA has issued a final report to Congress detailing the problem and what might be done to fix it. Most likely what will happen is the EPA will add servers and data centers into its Energy Star program. If you don't feel like reading the entire 133-page report, the 14-page executive summary is a little easier to get through."
Summary (Score:3, Funny)
Still too long. Can anyone reduce it to a single phrase or word? Thanks in advance
Re: (Score:1, Insightful)
These forecasts indicate that unless energy efficiency is improved beyond current trends, the federal government's electricity cost for servers and data centers could be nearly $740 million annually by 2011, with a peak load of approximately 1.2 GW.
It then goes on to describe three scenarios that decrease this to various extents but require work and preparation.
Essentially, we're going to end up building 10 more power plants in the next 4 years because we're so fucking stupid that we can't take simple measures on our current data centers to make them even a little bit more efficient. If you ask me, energy is just too cheap. Put a cap on a facility's energy use and make everything over it exponentially more expensive. Then you'll see them st
Re: (Score:1)
Re: (Score:2)
Virtualization has put the super smack down on our datacenter. From 250 physical servers to 20...how's that for power savings?
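A rough back-of-the-envelope for that kind of consolidation (only the 250-to-20 server count is from the parent; the per-box wattages, PUE, and electricity price below are assumptions):

```python
# Rough savings estimate for consolidating 250 physical servers onto 20
# virtualization hosts. Wattages, PUE, and $/kWh are assumed, not measured.
WATTS_PER_OLD_SERVER = 400   # assumed average draw of a legacy 1U/2U box
WATTS_PER_VM_HOST = 750      # assumed draw of a beefier virtualization host
PUE = 2.0                    # assumed cooling/distribution overhead factor
DOLLARS_PER_KWH = 0.10       # assumed electricity price

before_kw = 250 * WATTS_PER_OLD_SERVER / 1000 * PUE
after_kw = 20 * WATTS_PER_VM_HOST / 1000 * PUE
saved_kwh = (before_kw - after_kw) * 8760          # hours per year
print(f"{before_kw:.0f} kW -> {after_kw:.0f} kW, "
      f"saving ~{saved_kwh:,.0f} kWh/yr (~${saved_kwh * DOLLARS_PER_KWH:,.0f}/yr)")
```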
Re: (Score:2)
Great scott! (Score:5, Interesting)
Re:Great scott! (Score:5, Funny)
Re: (Score:1)
Re: (Score:1)
Re:Great scott! (Score:4, Insightful)
($177M/day for Iraq http://www.usatoday.com/news/politicselections/na
That sounds like a big number, and is for most of us, but not for the Federal government. About 29 cents more in taxes off each paycheck (assuming 100 M taxpayers, and paychecks every 2 weeks).
There are much bigger fish to fry.
Also, there is only so far one can cut energy use, and thus that cost, and still get the business of the government done. And the improvements in efficiency will require hardware, software, and personnel, which have their own costs. Eventually you hit a point where there is no longer a return on investment to make it worthwhile.
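For anyone who wants to check the per-paycheck figure above, a minimal sketch (the 100 million taxpayers and biweekly paychecks are the parent's own assumptions):

```python
# Reproduce the per-paycheck figure: $740M/yr spread over 100 million
# taxpayers paid every two weeks (both assumptions from the parent post).
annual_cost = 740e6          # EPA-projected federal electricity cost by 2011
taxpayers = 100e6
paychecks_per_year = 26
print(f"${annual_cost / taxpayers / paychecks_per_year:.2f} per paycheck")  # ~$0.28
```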
Re: (Score:2)
Re: (Score:3, Insightful)
The same thing was said for many other things over the years; lighting pops to mind. Offices used to consume about 3 watts per square foot in the '70s.
Re: (Score:2)
Our government, now powered by lightning (Score:2)
The only power source capable of generating 1.21 gigawatts of electricity is a bolt of lightning.
(Just reinforcing the reference. heh)
Re: (Score:2)
So what happens now? Now we wait for a congressional committee meeting broadcast on C-SPAN where politicians can grandstand and talk about how things need to change... Fortunately, most politicians will, as usual, treat this as a "black box", so they won't touch on technical details but rather just complain... Maybe talk about a special "colo server" tax... Maybe throw in some global warming comments.
Re: (Score:1)
Whirrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr.....
That help you out?
Hot loads (Score:2)
Grampa Simpson: (Score:2, Funny)
Re:Grampa Simpson: (Score:4, Funny)
Plead your case.
Re: (Score:1, Funny)
Re: (Score:3, Funny)
*runs from the hounds*
Mandatory Madonna reference (Score:3, Funny)
Re:Mandatory Madonna reference (Score:5, Funny)
Which Madonna?
Re: (Score:2)
Madonna Ciccone [wikipedia.org], I'm pretty sure.
Re: (Score:2)
wow (Score:3, Informative)
Is that it? Seems like small potatoes to me.
Re:wow (Score:5, Interesting)
More importantly, this could probably be reduced considerably without major disruptions or reduction in quality of service by just embracing higher efficiency components in our datacenter equipment (especially servers).
Re:wow (Score:5, Informative)
Or lets do it this way. Hoover Dam at peak output produces 2 Gigawatts of power per hour. 11 million servers consume 61 billion KW hours annually. It takes Hoover Dam 30,000 hours (about 3.5 years) to produce that much power. So you need four Hoover Dams just to power all the data centers in the US.
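Redoing that comparison with consistent units (Hoover Dam's peak output is about 2 GW of power, full stop, not "2 GW per hour") lands in roughly the same place:

```python
# Sanity check with consistent units: 61 billion kWh/yr is an average load of
# about 7 GW, i.e. roughly 3.5 Hoover Dams running flat out at peak output.
annual_kwh = 61e9
avg_load_gw = annual_kwh / 8760 / 1e6   # kWh/yr -> average kW -> GW
print(f"Average load: {avg_load_gw:.1f} GW "
      f"(~{avg_load_gw / 2.0:.1f} Hoover Dams at a 2 GW peak)")
```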
Re: (Score:3, Informative)
Re: (Score:2)
Units -arghhhh! (Score:3, Informative)
Re: (Score:2)
Re: (Score:2)
'Gigawatts per hour' would be the correct term/phrase to use when describing how power production or demand changes with time and impli
Re: (Score:2)
Re: (Score:3, Funny)
Re: (Score:2)
cogeneration (Score:3, Funny)
Or better yet! DatacenterBurgerKing with CPU-broiled whoppers.
Re: (Score:2)
Re:cogeneration (Score:4, Interesting)
Another question: why do we vent the exhaust from our refrigerators into the house during the summer? It just seems like there's a lot you could do to save energy by moving what would otherwise be waste heat to places where it can either be used or at least not cause a larger cooling problem.
Guessing (Score:4, Interesting)
Climate controlled. There's this element among building planners who think any outside air is bad(TM). That's why, even in small buildings where you don't have to worry about pressure differentials blowing windows out like you do in skyscrapers, you can't open a frick'n window in the Fall or Spring when the air smells wonderful and there's this perfect chill in the air that just stimulates the brain.
I'm drenched in sweat here in Hotlanta (it's 82F and 66% humidity and climbing to 94) and I really miss New England's Spring and Fall.
Re: (Score:2)
Haha, it is almost chilly here today in Portland. Well, cool, anyway. Portland summers are the mildest I've ever experienced in the lower 48. Though I imagine Seattle is similar.
-matthew
Re: (Score:2)
It can't be too bad though; I would think that an A/C unit cooling a building with an outside temp below freezing would be very efficient.
Re: (Score:2)
Um.. this is happening (Score:2)
Re: (Score:2)
When they upgraded the mainframe (double the MIPS, half the size, 1/4 the watts), it didn't throw off enough heat to do any good.
I haven't worked there for ages, I'm guessing they've got rows of Intel boxes beside the mainframe these days and probably recycle as much heat as the bad old serial terminal days.
Re: (Score:3, Insightful)
Pollution is easily taken care of with a filter. Controlling physical access is trivial. Humidity may be a bit more involved, but then again, you're heating the incoming air which reduces its relative humidity. Condensation isn't likely. If it does turn out to be a problem, use a heat exchanger and preheat the incoming air using the exhaust air.
But it is not terribly practical to plug the plenum passages once a year.
So? Inst
Data Center Jacuzzis (Score:2)
Re: (Score:2)
And of course in winter, all you would need around here is to open the doors, but since there is a school only 75 yards from the pool, it would not be hard to run the water over there.
P
Re: (Score:2)
Back in 1994, when I still lived in Pecos, it got up to 128F one summer. My dad tells me of 132 when he was in school in the '70s. 116 was a typical high; the lack of an "official" weather station means the town gets credited with whatever Kermit, Texas has for a temperature (quite a few miles away and quite a few degrees cooler).
Re: (Score:2)
great news for Sun (Score:3, Informative)
Disclaimer: I own a tiny bit of Sun stock. (But I bought it because I believe in them, not vice versa!)
Re: (Score:2)
s/problem/irrelevancy/ (Score:2)
Re: (Score:2)
Simple Solution (Score:5, Insightful)
Further, the cost to handle each extra watt is multiplied thanks to cooling, power back-up, wiring, etc., while increasing the physical size of the building, constructing more datacenters, etc. is just a flat (linear) cost, and mostly just a one-time expenditure at that.
This strange arrangement is what has led us here. It's not the natural evolution of technology to cram as much power consumption into as tiny a box as possible. It's an artificial need, created by the idiotic distribution of fees common to datacenters.
If a few large datacenters declared their fees as a small $$$ value for each unit of space, plus a few dollars per watt of power consumption, you'd see the problem naturally fix itself through normal economic forces. As soon as watts are the defining factor, companies won't pay more for a cramped 1U server rather than an (inexpensive) 2U or 3U server. You will also see companies happy to pay more for lower-powered server hardware, as bearing the energy cost directly will make buying efficient servers a significant savings for them.
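A toy version of that fee structure, for concreteness (the per-U and per-watt rates are invented for illustration, not anything a real facility charges):

```python
# Toy colo pricing that bills space and power separately, as suggested above.
# The $15/U and $3/W monthly rates are made up purely for illustration.
def monthly_fee(rack_units: int, watts: float,
                per_u: float = 15.0, per_watt: float = 3.0) -> float:
    return rack_units * per_u + watts * per_watt

# A cramped 1U box drawing 350 W now costs more than a roomier 2U box at 200 W:
print(monthly_fee(1, 350))   # 1065.0
print(monthly_fee(2, 200))   # 630.0
```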
Re: (Score:2)
Combine these all into a neat little sum on someone's bill.
Try to think of data center 'floor space' as the main stage. Everything is built around maintaining and supply
Re: (Score:3, Interesting)
Of course the (average) price of electricity is figured into it. That is the PROBLEM.
It is a (self-perpetuating) prisoner's dilemma. The more power consumption you can squeeze into the smallest space, the better of a deal you get. Since it's all averaged out, those using more power than average are getting subsidized by those w
Re: (Score:2)
Like shipping packages, where the fees are a combination of volume, weight, distance, etc. Data center pricing is typically based on a combination of space, power, power density, circuits, contract length, contiguous space, etc.
Re:Simple Solution (Score:4, Informative)
Re: (Score:2)
Re: (Score:2)
PDUs that can track per-outlet power draw and spew the data over serial or SNMP are widely available, and widely deployed.
The problem is also solved for larger (per-rack) situations.
Re: (Score:2)
I can't see any reason why not. An induction coil costs a few cents, and you could easily feed a rack's worth into a single cheap meter.
But more to the point, I really wasn't suggesting live monitoring. Just have them select from a range of power levels and charge them as appropriate.
Re: (Score:2)
This is NOT a kWh meter; you are simply sampling the instantaneous current in the AC line and logging
Re: (Score:2)
Re: (Score:2)
Before you laugh too much, that's basically how the EPA figures out the mileage of c
Re: (Score:2, Insightful)
Re: (Score:2)
The cost of 1U space = ((power + people + space + loan + hardware + software) / avg. used U's by customers) * profit rate
Somebody can simply do that using an Excel sheet and the customer will know that his server costs $1000/year.
The way you propose would increase that cost with development time, read-out infrastructure, and the extra support to handle those things. On top of that, the customer would get a random bill every
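For what it's worth, the flat per-U formula above really is a few lines (every cost figure below is invented just to show the shape of the calculation):

```python
# The parent's flat per-U pricing. All annual cost figures here are made up.
costs = {"power": 400_000, "people": 600_000, "space": 300_000,
         "loan": 250_000, "hardware": 350_000, "software": 100_000}
avg_used_us = 2_000        # assumed average number of occupied rack units
profit_rate = 1.25         # assumed 25% margin

print(f"${sum(costs.values()) / avg_used_us * profit_rate:,.0f} per U per year")
```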
Re: (Score:2)
You have an interesting point, at least. It is simpler billing, but charging by size is the worst possible thing you can do. It has led to many problems over time, a few of which I've already mentioned.
A thought experiment:
What if you were to price based purely on number of servers, using average server size?
What if you were to price based on the WEIGHT of the server instead of U size?
What if you b
Re: (Score:2)
Re: (Score:2)
If a few large datacenters declared their fees as a small $$$ value for each unit of space, plus a few dollars per watt of power consumption, you'd see the problem naturally fix itself through normal economic forces. As soon as watts are the defining factor, companies won't pay more for a cramped 1U server rather than an (inexpensive) 2U or 3U server. You will also see companies happy to pay more for lower-powered server hardware, as bearing the energy cost directly will make buying efficient servers a significant savings for them.
Yes, that would be a great solution. Unfortunately, real estate prices are getting higher day by day. Building an extension to a datacenter isn't always feasible, and any physical expansion requires quite an investment (for cooling, for example, you'd need more coolers to cover more space).
Re: (Score:2)
Fortunately, the speed of light is very fast. Even in areas with the highest priced real estate around, it's just a few dozen miles (or a couple milliseconds delay) to get to the middle of nowhere, where the same land is dirt cheap.
If you have an empty building, you don't need a cooler (more accurately, a very very t
Re: (Score:2)
The problem with the equation as you suggest is that installed capacity is more expensive than consumption-- The lifetime cost of the i
Re:Simple Solution (Score:4, Insightful)
Because when you run a multi-million dollar data center, you clearly can't afford to install a few-hundred-dollar device in each customer's rack, especially if it's a major part of how you bill your customers.
Look, the power companies do exactly what the parent poster suggests. Imagine if power companies instead charged a flat rate each month based on the square footage of your house. There would be no incentive (unless you're a save-the-planet hippie type, which isn't a bad thing) to turn up the setting on the air conditioner or turn it off altogether; you'd keep incandescent lights running 24/7 along with the giant plasma TV. This is essentially how data centers operate today. There is no motivation to have energy-efficient servers unless you're the one who owns the data center and pays the power bill. Today the best a data center owner can do is invest in more efficient cooling systems, and that's about it.
Congress will act (Score:5, Funny)
Re: (Score:1)
Virtualization? (Score:3, Insightful)
To me, this seems like one of the more important aspects of power efficiency. Individual server efficiency is important, but the gains from higher utilization could be even more significant. Adding another core to a hypervisor host will always be more efficient than adding a whole new system (CPU, power supply, disks, video, etc.). The energy-efficient hardware can also be applied to the hypervisor hosts. Build efficient servers, and use as few of them as practical.
Many data centers are already greatly decreasing their server count using virtualization. This should be part of any data center energy efficiency discussion.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Informative)
Re: (Score:3, Informative)
I'm not sure EPA is the right party to be advocating virtualization. The EnergyStar ratings and utility-level programs
location (Score:2)
Re: (Score:2)
Re: (Score:2)
Get rid of the AC DC power supplys and replace.... (Score:2)
Re:Get rid of the AC DC power supplys and replace. (Score:2)
For mere mortals, blade servers are a better compromise. When you have 4 power supplies per 10 servers, instead of 20, you can afford to invest in more efficient equipment. It's still not as efficient as the rectif
Re: (Score:2)
Re: (Score:3, Informative)
Server "sleep" mode? (Score:2)
1) With multiple front-end servers behind NLB, make the NLB smart enough to put some servers to sleep when their processing isn't needed and wake-on-lan those servers (or the equivalent) when they are needed again?
2) Do servers do "speedstep" like desktops/notebooks where the processors and other components go to lower power level modes when they are not being fully utilized? If not, they should enable that.
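On point 1, the waking part is genuinely cheap to script; a minimal wake-on-LAN sketch (the MAC address is a placeholder):

```python
# Minimal wake-on-LAN: broadcast a "magic packet" (6 bytes of 0xFF followed by
# the target MAC repeated 16 times) over UDP. The MAC below is a placeholder.
import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

wake_on_lan("00:11:22:33:44:55")   # placeholder MAC of the sleeping server
```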
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
Federal Guidelines for Clock Speed Limits (Score:4, Funny)
Re: (Score:2, Funny)
DC power distro (Score:1)
Also reduces a major cost and greenness problem: all those little redundant AC/DC power supplies in those rackmount machines. Further, it allows you to move the heat generated by the power conversion to another nearby location, reducing the CFM requirements for your cooling system.
Re: (Score:2)
Who's on first? (Score:2)
Whose data center? Mine? Yours? The EPA's?
Please don't butcher the English language like that. Throwing random articles around is a sign of laziness (similar in magnitude to "They said...").
Brought to you by SIAA (Society against the Indiscriminate Abuse of Articles)
Energy Star (Score:2)
Higher voltage (Score:3, Informative)
I have found that the stock switching power supplies found in common computers are slightly more efficient when powered with 240 volts rather than 120 volts, some more so and some less so. And virtually all of them can be changed over to 240 volts (given the correct 2-pole switching).
And by using 240 volts instead of 120 volts, you can run twice as many computers on the same power loss in the building wiring (same current, same size wire, same power loss due to heat, serving twice the load).
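The wiring-loss claim checks out with plain Ohm's law: at the same current, the I^2*R loss in the building wiring is unchanged while the delivered power doubles (the current and wire resistance below are illustrative values):

```python
# Same current, same wire, same I^2*R loss in the building wiring, but twice
# the delivered power at 240 V. The 16 A and 0.5 ohm values are illustrative.
current_a = 16.0
wire_resistance_ohm = 0.5

for volts in (120, 240):
    delivered_w = volts * current_a
    loss_w = current_a ** 2 * wire_resistance_ohm   # independent of voltage
    print(f"{volts} V: {delivered_w:.0f} W delivered, {loss_w:.0f} W lost in wiring")
```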
Direct DC-fed power systems may or may not provide realistic savings. DC introduces new electrical safety challenges and costs: electrical arcs inside switches, circuit breakers, and fuses cannot be quenched by the zero-voltage crossing that AC has and DC lacks. This requires lower voltages for equivalent interruption safety. But if the power supplies end up losing less power than the building wiring does at the higher current, then DC may be the better choice.
We will need more in-depth study to determine whether DC will save power at a given installation (it may at some and not at others). But for most installations, going from 120 volts up to 208 or 240 volts (depending on which is available) is as simple as rewiring the system (using 2-pole breakers, which require double-size power panels) and verifying that the computer power supplies are ready for the higher voltage.
208 volts is the likely line-to-line voltage in data centers powered by 3-phase (208Y/120) power in North America. Future data centers could be designed for a 416Y/240 volt power system which can also be used to power fluorescent lighting.
FFS (Score:2)
"ad nauseam"
Yeah, it's an obscure word. Is it really such an imposition to ask "editors" to use a fucking dictionary? Took me 5 seconds to confirm my suspicion.