How Internet Data Centers Waste Power

Rick Zeman writes "The New York Times has extensively surveyed and analyzed data center power usage and patterns. At their behest, the consulting firm McKinsey & Company analyzed energy use by data centers and found that, on average, they were using only 6 to 12 percent of the electricity powering their servers to perform computations. The rest was essentially used to keep servers idling and ready in case of a surge in activity that could slow or crash their operations. 'Worldwide, the digital warehouses use about 30 billion watts of electricity, roughly equivalent to the output of 30 nuclear power plants.' In other words, 'A single data center can take more power than a medium-size town.' This is the price being paid to ensure that everyone has instant access to every email they've ever received, or to their latest Facebook status update. Data center providers are finding that they can't rack servers fast enough to keep up with users' needs. A few companies say they are using extensively re-engineered software and cooling systems to decrease wasted power. Among them are Facebook and Google, which have also redesigned their hardware. Still, according to recent disclosures, Google's data centers consume nearly 300 million watts and Facebook's about 60 million watts. Many of these solutions are readily available, but in a risk-averse industry, most companies have been reluctant to make wholesale change, according to industry experts."
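To put the summary's figures in perspective, here is a quick back-of-the-envelope sketch in Python. The 30-billion-watt total and the 6-12 percent utilization range come from the summary above; the roughly one gigawatt per nuclear plant used for the comparison, and the simplification of applying the utilization figure to total draw rather than server draw only, are assumptions made purely for illustration.

    # Back-of-the-envelope check of the figures quoted in the summary.
    # Assumption: ~1 GW per nuclear plant, and the 6-12% utilization applied
    # to total draw (a simplification; the article applies it to server power).

    TOTAL_DATA_CENTER_WATTS = 30e9          # ~30 GW worldwide (article figure)
    UTILIZATION_RANGE = (0.06, 0.12)        # McKinsey's 6-12 percent estimate
    WATTS_PER_NUCLEAR_PLANT = 1e9           # assumed ~1 GW per plant

    for utilization in UTILIZATION_RANGE:
        useful = TOTAL_DATA_CENTER_WATTS * utilization
        idle = TOTAL_DATA_CENTER_WATTS - useful
        print(f"At {utilization:.0%} utilization: "
              f"{useful / 1e9:.1f} GW doing work, {idle / 1e9:.1f} GW idling")

    print(f"Equivalent plants: {TOTAL_DATA_CENTER_WATTS / WATTS_PER_NUCLEAR_PLANT:.0f}")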
Comments Filter:
  • Corrected URL (Score:5, Informative)

    by Rick Zeman ( 15628 ) on Sunday September 23, 2012 @11:34AM (#41428469)

    I have no idea how the URL got mangled when Timothy moved the anchor text to a different part of the article, but here's the correct link:

    http://www.nytimes.com/2012/09/23/technology/data-centers-waste-vast-amounts-of-energy-belying-industry-image.html?hpw&pagewanted=all [nytimes.com]

  • by tokencode ( 1952944 ) on Sunday September 23, 2012 @12:01PM (#41428683)
    This article is simply trying to make news where there isn't any. Of course only a fraction of the power consumed goes into actual computations. For starters, you need to account for cooling: roughly speaking, for every watt of server load you need to account for one watt of cooling energy, which essentially halves the potential efficiency (a rough sketch of this arithmetic follows the comments below). You also need to account for the power it takes just to maintain state in a data center of that scale: volatile memory such as DRAM consumes power merely to retain its current contents. Unlike Facebook and Google, most data centers do not have 100% control over the hardware and software being run. Additionally, data centers often charge for power, space, etc., and the client simply pays for what they use. In many instances efficiency is not for the data center to determine, and one could argue that it may not even be in the data center's financial interest. Great strides have been made in scaling power consumption to fit computational demand, but this is more of a hardware/software issue than a data center issue.
  • Re:not bad (Score:5, Informative)

    by Qwertie ( 797303 ) on Sunday September 23, 2012 @12:26PM (#41428873) Homepage
    Google's numbers are especially tame. 300 million watts (total) is far below one watt per user (Gmail alone has at least 350 million accounts [google.com]); the arithmetic is sketched after the comments. Certainly if you use Google services on your 30-watt laptop, you use more power than Google uses to serve you. According to Google [blogspot.ca], "in the time it takes to do a Google search, your own personal computer will use more energy than Google uses to answer your query."

    Since Google offers almost all services for free, it has a strong incentive to minimize resources per user. I expect the paid services are the ones that use the bulk of the energy, but all data centers together are still a tiny fraction of total worldwide power usage.
  • by bcrowell ( 177657 ) on Sunday September 23, 2012 @12:43PM (#41428993) Homepage

    I'm part of the problem. Wish I wasn't, but I don't seem to have any choice.

    I run a small web site, and if it goes down, there are various consequences in my personal and professional life that can be extremely annoying and embarrassing. To stay sane, I need the site to have good uptime. Over the years, this has caused me to gradually migrate to more and more expensive webhosting, now ~$100/mo.

    The average load on my dedicated server is extremely low, so it's essentially one of the wasteful idle boxes described in TFA. My site is I/O-intensive: I serve big PDF files. In terms of CPU, I'm sure the site would run fine on a low-end ARM chip, or as one of a dozen sites sharing the same Celeron. Compared with either of those hypothetical, energy-efficient setups, virtually all of the electrical power is being wasted. I'm a small fry, but there are millions of sites like mine, so I'm sure it adds up. (It would be interesting to know how much of total data-center power consumption comes from giants like Google and Facebook, and how much from the long tail of cottage industries like mine.)

    There are basically two problems. (1) Nobody will sell me high-reliability webhosting on low-end hardware; the only way to get energy-efficient hardware is to get cheap webhosting, and I've tried cheap webhosting: cheap webhosts have low reliability and nonexistent customer service. (2) Sometimes you get spikes in demand, and you want some excess capacity to handle them without crashing the server. Maybe you get slashdotted. In my case, one recurring problem is people running IE plugins that are supposed to accelerate large downloads by opening multiple connections to the server. When these people hit my server and download a large PDF, the effect is very much like a DoS attack: my logs show one IP address using 300 MB of transfer to download a 3 MB PDF. I've written scripts that lock these bozos out ASAP (a minimal sketch of the idea appears after the comments), but on a low-end machine, these events would bring my server to its knees instantly.
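Following up on the cooling-overhead point in tokencode's comment above, here is a minimal sketch of how facility overhead compounds with low server utilization. Treating one watt of cooling per watt of IT load as a PUE of about 2.0, and picking 10 percent as a representative server utilization, are illustrative assumptions, not measured values.

    # Illustrative arithmetic for the cooling-overhead point above.
    # PUE (power usage effectiveness) = total facility power / IT equipment power.
    # One watt of cooling per watt of IT load corresponds to a PUE of about 2.0.

    def useful_fraction(pue: float, server_utilization: float) -> float:
        """Fraction of total facility power that ends up doing computation."""
        return server_utilization / pue

    # Example: PUE of 2.0 and servers doing useful work with 10% of their draw
    # (the middle of the 6-12% range quoted in the summary).
    print(f"{useful_fraction(pue=2.0, server_utilization=0.10):.1%}")   # -> 5.0%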
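And the per-user arithmetic behind the "far below one watt per user" point in the Re:not bad comment, as a sketch only: the 300-megawatt and 350-million-account figures are the ones quoted above, and the 30-watt laptop is the commenter's own example.

    # Per-user arithmetic behind the "far below one watt per user" point.
    GOOGLE_TOTAL_WATTS = 300e6      # ~300 MW, figure quoted in the summary
    GMAIL_ACCOUNTS = 350e6          # lower bound cited in the comment
    LAPTOP_WATTS = 30               # the commenter's example client device

    watts_per_user = GOOGLE_TOTAL_WATTS / GMAIL_ACCOUNTS
    print(f"Roughly {watts_per_user:.2f} W per user vs. {LAPTOP_WATTS} W for the laptop")
    # -> Roughly 0.86 W per user; the real figure is lower still, since Google
    #    serves far more users than Gmail alone.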
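Finally, a minimal sketch of the kind of log-scanning lockout script the last comment describes. The Apache-style combined log format, the request and byte thresholds, and simply printing the offending IPs (rather than feeding them to a firewall deny list) are all assumptions; this is not the commenter's actual script.

    # Minimal sketch of a log-scanning lockout, in the spirit of the scripts the
    # comment above describes. Log format, thresholds, and output are assumptions.
    import re
    from collections import defaultdict

    # Matches an Apache-style combined log line: client IP and response size.
    LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d{3} (\d+)')

    MAX_REQUESTS = 50           # per scan window, per IP (illustrative threshold)
    MAX_BYTES = 100 * 1024**2   # ~100 MB per window, per IP (illustrative threshold)

    def find_abusers(log_path: str) -> set[str]:
        requests = defaultdict(int)
        byte_count = defaultdict(int)
        with open(log_path) as log:
            for line in log:
                match = LOG_LINE.match(line)
                if not match:       # lines without a numeric size are skipped
                    continue
                ip, size = match.group(1), int(match.group(2))
                requests[ip] += 1
                byte_count[ip] += size
        return {ip for ip in requests
                if requests[ip] > MAX_REQUESTS or byte_count[ip] > MAX_BYTES}

    if __name__ == "__main__":
        # In practice the offending IPs would be handed to a firewall rule;
        # here we only print them.
        for ip in sorted(find_abusers("/var/log/apache2/access.log")):
            print(ip)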
