Power The Almighty Buck The Internet Hardware

How Internet Data Centers Waste Power 170

Rick Zeman writes "The New York Times has extensively surveyed and analyzed data center power usage and patterns. At the paper's behest, the consulting firm McKinsey & Company analyzed energy use by data centers and found that, on average, they were using only 6 to 12 percent of the electricity powering their servers to perform computations. The rest was essentially used to keep servers idling and ready in case of a surge in activity that could slow or crash their operations. 'Worldwide, the digital warehouses use about 30 billion watts of electricity, roughly equivalent to the output of 30 nuclear power plants.' In other words, 'A single data center can take more power than a medium-size town.' This is the price being paid to ensure everyone has instant access to every email they've ever received, or to their instant Facebook status updates. Data center providers are finding that they can't rack servers fast enough to meet users' needs. A few companies say they are using extensively re-engineered software and cooling systems to decrease wasted power; among them are Facebook and Google, which have also redesigned their hardware. Still, according to recent disclosures, Google's data centers consume nearly 300 million watts and Facebook's about 60 million watts. Many of these solutions are readily available, but in a risk-averse industry, most companies have been reluctant to make wholesale changes, according to industry experts."
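The summary's equivalence is easy to sanity-check; a quick sketch (the per-plant output is implied by the quote, not stated in it):

```python
# Back-of-envelope check of the quoted figures: 30 billion watts vs.
# the output of 30 nuclear power plants.
total_watts = 30e9            # worldwide data-center draw, per the article
plants = 30
per_plant_gw = total_watts / plants / 1e9
print(per_plant_gw)           # 1.0 -- about one large reactor's output in GW
```

One gigawatt is indeed in the right ballpark for a large commercial reactor, so the comparison is internally consistent.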
This discussion has been archived. No new comments can be posted.


  • So? (Score:2, Insightful)

    by Anonymous Coward on Sunday September 23, 2012 @11:36AM (#41428487)
    Worst case, if we just include first-world people, it's only about 100 W per person. Change a few lightbulbs, turn down the heat, set the AC up by a degree, and you've reduced your power consumption by that amount. Of course, we need to talk about energy here, not just power, but hey.

    And since when does a tech site need to spell out "millions" and "billions"? Are we not able to grasp mega and giga?
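The per-person figure is easy to reproduce; a sketch (the population count is my assumption, not from the article):

```python
# Dividing the article's worldwide data-center power by a rough
# "first world" head count. 300 million is an assumed worst case.
total_watts = 30e9            # ~30 billion watts, from the summary
population = 300e6            # assumed; a smaller population means a higher share
per_person_w = total_watts / population
print(per_person_w)           # 100.0 -- the parent's "about 100 W per person"
```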

  • by thoriumbr ( 1152281 ) on Sunday September 23, 2012 @11:46AM (#41428575) Homepage
    Or use a mainframe running lots of Linuxes... It can cut the power to 10% while delivering the same computing power. Mainframes have very good power management these days.
  • by roystgnr ( 4015 ) on Sunday September 23, 2012 @11:48AM (#41428589) Homepage

    in this letter and comment. []

    The most ironic point: "Should we discover (as we undoubtedly would) that tens of thousands of copies of today's NYT were printed, delivered, and sold to subscribers who never read Glanz's report, do we conclude that the NYT needs a new and less-wasteful business model?"

  • Lovely. (Score:5, Insightful)

    by 7-Vodka ( 195504 ) on Sunday September 23, 2012 @12:28PM (#41428893) Journal

    This is lovely. Let's worry about problems that don't exist, as if we don't have enough catastrophes to worry about.

    Power is money. As long as there is a reasonably unhampered economy wherever data centers operate (and there is), every entrepreneur will attempt to economize on power usage. You don't have to worry about it, because the entrepreneurs who use power efficiently will eat the lunch of those who do not, ceteris paribus (all other things being equal).

    Ipso facto this problem will solve itself. Case closed.

    In fact, now that I speculate on the possible reasons for drumming up publicity like this, it is to campaign for government regulations that will instruct entrepreneurs how they 'must' handle such a problem. Unfortunately, nobody can write such regulations, because no one can foresee every circumstance and possibility, much less predict the future. Nobody on this earth can tell even a single other person what the ideal type and amount of preparation is for power-efficiency considerations. This is why we have economic calculation.

    If such regulations are enacted, they will, ipso facto, cause the very problem themselves.

  • by Waffle Iron ( 339739 ) on Sunday September 23, 2012 @12:43PM (#41428985)

    Take the case of me and Google. My share of their power is about 1 W electric (which is usually about 3 W thermal at the power plant).

    However, I estimate that their maps and local business info features alone easily save me at least a couple hundred miles of driving per year. That's about 10 gallons of gasoline per year, or 38 W thermal that I'm not burning thanks to the info they provide. Google delivers at least a 10-to-1 payback in energy savings in this one case alone.
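The 38 W figure is reproducible; a sketch (the per-gallon energy content is a standard approximation I'm assuming, not a number from the comment):

```python
# Convert 10 gallons of gasoline per year into a continuous thermal draw.
MJ_PER_GALLON = 120                   # approximate energy content of gasoline
SECONDS_PER_YEAR = 365 * 24 * 3600
gallons_per_year = 10
watts_thermal = gallons_per_year * MJ_PER_GALLON * 1e6 / SECONDS_PER_YEAR
print(round(watts_thermal))           # 38 -- matching the parent's estimate
```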

  • Re:So? (Score:5, Insightful)

    by flaming error ( 1041742 ) on Sunday September 23, 2012 @12:53PM (#41429065) Journal

    Our energy supply is finite, and so our energy usage should be measured in units of energy, not dollars.

    Prices are not based on market forces or total costs, they are based on government policies.

    And our money supply itself is schizophrenic, as in disconnected from reality. Its value fluctuates with moods, it's continually debased by printing more, it's backed only fractionally, and then only by the good faith and credit of future taxes on today's kindergartners.

    Measuring energy with dollars is like scoring sporting events by the applause of drunken fans.

  • by Wrath0fb0b ( 302444 ) on Sunday September 23, 2012 @01:54PM (#41429485)

    This is the same "problem" that faces airlines, taxi drivers, power companies, and cell network operators. Consumers pay for these services by usage, so total revenue is proportional to average use, but the costs are heavily skewed toward capital costs and so are proportional to the peak load you can service. In that case, there's a fundamental tradeoff -- either we degrade service when demand hits the 95th percentile (just as an example), or we figure out a way to pay for the extra capital investment that's not needed 95% of the time.

    There are a few alternatives:

    (1) Overprovision and soak it up into the price structure for all consumers. This is what most power companies do -- they build enough power generating capacity for peak load and then charge a bit more per kWh to make up for the increased outlay.

    (2) Overprovision and charge extra at peak. This is the airline solution -- they always have service available but under contention the last few seats are exorbitantly expensive. Essentially those that need peak service are paying to leave a few seats open all the time in case they need them.

    (3) Don't overprovision: this is the taxi solution. This means that service degrades significantly under peak demand -- anyone trying to get a cab home on a Saturday night in a major city has experienced this. Those that do get a cab pay the usual fare, everyone else waits around a while. This is also the solution that California has routinely deployed for their inability to provide peak power during heat spells -- same price for everyone but rolling blackouts for the unlucky few.

    That's it -- there aren't any clean answers when you are making compromises between peak availability and average efficiency. You've either got to pay for the extra capacity when you don't need it or else you have to suffer when you don't have the capacity when you do need it.
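Option (1)'s cost shifting can be made concrete; a toy model (all figures invented for illustration, not taken from any real utility):

```python
# Toy model of overprovisioning: capacity is built for peak load, but
# revenue scales with average load. All numbers below are invented.
peak_mw = 100.0                  # capacity that must exist for the worst hour
avg_mw = 40.0                    # load actually billed, on average
cost_per_mw = 1_000_000.0        # hypothetical annualized capital cost per MW
total_cost = peak_mw * cost_per_mw
# Spreading a peak-sized bill across average usage inflates the unit price:
effective_cost_per_avg_mw = total_cost / avg_mw
print(effective_cost_per_avg_mw / cost_per_mw)   # 2.5 -- a 2.5x markup
```

The markup is just the peak-to-average ratio, which is why flattening demand is worth so much to capacity-constrained businesses.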

  • by hawguy ( 1600213 ) on Sunday September 23, 2012 @02:54PM (#41429887)

    that was the whole point of this article, you stupid twat.

    Why are you worried about benchmark scores on servers that typically only run computations 12 percent of the time?

    You people eat up artificial gimmicky numbers like nothing. It's amazing.

    I think the problem is this: while you can run 10,000 Linux instances on a single mainframe, and maybe it can keep them all chugging along at 12% load (though it seems like it would take a rather sizable mainframe to be equivalent to 12% of 10,000, i.e., 1,200 standalone servers), when your peak load comes and those Linux instances that are nearly idle all night long are suddenly 80% utilized, can the mainframe keep all 10,000 of them running at 80% utilization?

    And can it do it more cheaply than VMware on Intel? You'd need around 300 four-socket, 8-core Intel servers to handle 10,000 instances each using one core's worth of CPU; figure around $10M for the cluster and 10-15 racks. Can you build an equivalent mainframe for $10M in less space?

    I really don't know the answer.
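The server count above is easy to verify; a sketch of the arithmetic (the one-core-per-instance load is the parent's premise):

```python
import math

# 10,000 VM instances at one core each, packed onto four-socket,
# 8-core Intel servers, as the parent describes.
cores_per_server = 4 * 8
instances = 10_000
servers = math.ceil(instances / cores_per_server)
print(servers)                   # 313 -- "around 300", as estimated above
```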

  • Re:So? (Score:2, Insightful)

    by Anonymous Coward on Sunday September 23, 2012 @06:19PM (#41431347)

    We're not talking about capturing 5%; the relevant range is much lower. Within that range, there aren't any physical limitations, just cost limitations. If the price of electricity were $0.50 per kilowatt-hour, we'd start covering the deserts with solar panels and extracting power from many other sources. Costs rise asymptotically as one approaches a physical limit.

    OTOH, our knowledge of physics is incomplete; a profit incentive will encourage novel approaches that may well exceed our theoretical maximums. The Malthusian predictions are a good, recurring example of this phenomenon. Insurmountable problems are generally circumvented if someone can make a profit by doing so. There are limits, obviously (e.g., why we don't have smartphones that last weeks between charges), but it's rather arrogant to assume we know which limits are real and which are mere artifacts of incomplete theory.

  • by sycodon ( 149926 ) on Sunday September 23, 2012 @11:17PM (#41433035)

    I had a visceral reaction to the article. That's because they point out the obvious, then pretend they're performing some kind of public service and pat themselves on the back.

    Do they really think the data centers don't know these things? Do they really think they're not trying to address them? Power costs are pretty high up on the books, and anyone who's been paying attention knows millions of dollars are spent on researching ways to bring those costs down.

    So it's kind of like a guy standing at a car wreck watching the rescuers trying to pull someone from a car and saying, "if you don't get that guy out of there, he's gonna die". No Shit Sherlock.

    Just shows that reporters are idiots. Always have been and always will be.

  • by Anonymous Coward on Monday September 24, 2012 @09:26AM (#41435881)

    Just shows that reporters are idiots. Always have been and always will be.

    The sad part is how often people will make that connection when a story touches their own field of interest, then turn around and believe whatever the headline says about every other subject.
