Server Power Consumption Doubled Over Past 5 Years
Watt's up writes "A new study shows an alarming increase in server power consumption over the past five years. In the US, servers (including cooling equipment) consumed 1.2% of all electricity in 2005, up from 0.6% in 2000. The trend is similar worldwide. 'If current trends continue, server electricity usage will jump 40 percent by 2010, driven in part by the rise of cheap blade servers, which increase overall power use faster than larger ones. Virtualization and consolidation of servers will work against this trend, though, and it's difficult to predict what will happen as data centers increasingly standardize on power-efficient chips.'" We also had a recent discussion of power consumption in consumer PCs that you might find interesting.
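The summary's figures can be turned into a quick back-of-the-envelope growth-rate check; the percentages come from the summary itself, while the compound-growth arithmetic is just illustrative:

```python
# Sketch: implied annual growth rates from the summary's numbers.
# Doubling from 0.6% to 1.2% of US electricity over 2000-2005 implies
# a compound annual growth rate (CAGR) of roughly 15%.
share_2000, share_2005 = 0.6, 1.2
years = 5
cagr = (share_2005 / share_2000) ** (1 / years) - 1
print(f"Implied annual growth 2000-2005: {cagr:.1%}")

# The projected 40% jump by 2010 works out to a slower ~7% per year,
# consistent with the claim that virtualization and efficient chips
# may dampen the trend.
cagr_2010 = 1.40 ** (1 / years) - 1
print(f"Projected 2005-2010 growth: {cagr_2010:.1%} per year")
```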
The servers are actually doing something (Score:3, Informative)
Sun's David Douglas, VP of Eco Responsibility, estimates that the cost of running computers (i.e., their power use) will exceed the cost of buying them in about 5 years: http://www.ase.org/uploaded_files/geed_2007/dougl
--
Get abundant, get solar. http://mdsolar.blogspot.com/2007/01/slashdot-user
cheap blade servers... (Score:2, Informative)
Rubbish. One of the biggest myths in server sales today is that blades consume more power. Fill racks full of them and they consume more power per square metre of floor space, not per server. For the same number of servers, blades should actually consume less power, largely thanks to the centralised AC/DC conversion in the chassis.
HP in particular is working to make blades some of the most efficient servers on the market.
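A toy calculation can illustrate the centralised-conversion point; all wattages and efficiency figures below are invented for the example, not vendor data:

```python
# Sketch: why a blade chassis with shared, high-efficiency power supplies
# can draw less at the wall than the same number of standalone servers.
dc_load_per_server = 250   # watts of DC each server's components draw (assumed)
n_servers = 16

rack_psu_eff = 0.75        # commodity per-server 1U supply (assumed)
blade_psu_eff = 0.90       # large centralised chassis supply (assumed)

rack_wall = n_servers * dc_load_per_server / rack_psu_eff
blade_wall = n_servers * dc_load_per_server / blade_psu_eff

print(f"16 rack servers:  {rack_wall:.0f} W at the wall")
print(f"16-blade chassis: {blade_wall:.0f} W at the wall")
print(f"Saving: {rack_wall - blade_wall:.0f} W "
      f"({(rack_wall - blade_wall) / rack_wall:.0%})")
```

Same DC load in both cases; the difference comes entirely from doing the AC/DC conversion once, efficiently, instead of sixteen times.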
Re:Computers are powerhogs (Score:3, Informative)
Performance per watt is a biggie for chip manufacturers. A sub-10-watt server chip is possible, but who wants to use a Palm Pilot for a transaction server?
Many servers need the performance to handle a slashdotting. Performance comes first; power consumption second. That is why performance per watt is an important part of chip design: low power is not the main design goal, high performance is, and delivering that performance at the lowest possible power is the sweet spot chip designers aim for.
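The "performance first, then lowest power" rule can be expressed as a simple selection over candidate chips; all chip names and numbers below are invented for illustration:

```python
# Sketch: pick the chip meeting a performance floor that has the best
# performance per watt. Data is entirely hypothetical.
chips = [
    {"name": "low-power", "perf": 20,  "watts": 8},    # Palm-Pilot class
    {"name": "midrange",  "perf": 100, "watts": 65},
    {"name": "high-end",  "perf": 140, "watts": 130},
]
required_perf = 90  # enough headroom to survive a traffic spike (assumed)

# Step 1: performance first -- discard anything below the floor.
viable = [c for c in chips if c["perf"] >= required_perf]

# Step 2: among the survivors, maximise performance per watt.
best = max(viable, key=lambda c: c["perf"] / c["watts"])
print(best["name"], round(best["perf"] / best["watts"], 2))  # → midrange 1.54
```

Note that the low-power chip has the best raw perf/watt ratio of the three, but it never makes the cut: it fails the performance floor, which is exactly the comment's point.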
Here is some additional reading. Look at what the Core 2 Duo and quad-core parts are bringing to the server market.
Please note that Woodcrest and the Opteron are now obsolete. The Opteron was leading, but the new multi-core chips have restarted the performance-per-watt race.
http://www.computerworld.com/blogs/node/2160 [computerworld.com]
http://www.intel.com/performance/server/xeon/ppw.
http://www.supermicro.com/newsroom/pressreleases/
http://news.com.com/Chipmakers+admit+Your+power+m
But Other Efficiencies Are Gained (Score:3, Informative)
While it may seem disturbing that computers consume a growing share of energy, one has to realize they probably more than offset their own energy use, either by allowing other resources to be used more efficiently or by enabling economic activity that discovers and distributes resources, energy among them.
Re:Solution (Score:4, Informative)
That's not what the Google paper said. It proposed that power supplies should output only 12V and motherboards should contain many DC-DC converters to generate voltages needed by chips. As chip fabrication technology changes, newer chips need lower voltages to operate optimally (not to mention that lower voltage = lower power); since different chips in a computer are made with different technologies, they need different voltages ranging from 1.8V down to 1.0V.
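The conversion chain described in the comment (a single 12V supply feeding per-chip DC-DC converters) can be sketched numerically; the rail voltages, power draws, and efficiencies below are illustrative assumptions, not figures from the Google paper:

```python
# Sketch: a 12V-only power supply plus on-board DC-DC (buck) converters
# that step 12V down to each chip's operating voltage. All numbers assumed.
psu_eff = 0.92   # AC -> 12V conversion efficiency (assumed)
vrm_eff = 0.88   # 12V -> point-of-load conversion efficiency (assumed)

# Different chips, made on different process technologies, need
# different voltages -- roughly the 1.0V-1.8V range the comment cites.
rails = {"CPU core": 1.0, "memory": 1.8, "chipset": 1.2}      # volts
chip_power = {"CPU core": 80.0, "memory": 10.0, "chipset": 15.0}  # watts

total_dc = sum(chip_power.values())
wall_power = total_dc / (psu_eff * vrm_eff)
print(f"Chips draw {total_dc:.0f} W; wall draw ~{wall_power:.0f} W "
      f"(overall efficiency {psu_eff * vrm_eff:.0%})")
```

The appeal of the single-rail design is that each stage can be optimised separately: one efficient AC-to-12V conversion, then small converters placed right next to the chips they feed.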
Re:Solution (Score:3, Informative)
While individual systems may vary, I've noticed that the older the facility where I was working, the more likely it was to have DC power, since these facilities were "telco" before they were "telecom", and most telco gear runs on DC. Even in newer datacenters, it's only the small outfits that haven't had DC; most of the larger ones have had DC available.