Power | The Almighty Buck | The Internet | Hardware

How Internet Data Centers Waste Power (170 comments)

Rick Zeman writes "The New York Times has extensively surveyed and analyzed data center power usage and patterns. At their behest, the consulting firm McKinsey & Company analyzed energy use by data centers and found that, on average, they were using only 6 to 12 percent of the electricity powering their servers to perform computations. The rest was essentially used to keep servers idling, ready in case of a surge in activity that could slow or crash their operations. 'Worldwide, the digital warehouses use about 30 billion watts of electricity, roughly equivalent to the output of 30 nuclear power plants.' In other words, 'A single data center can take more power than a medium-size town.' This is the price being paid to ensure everyone has instant access to every email they've ever received, or to their instant Facebook status updates. Data center providers are finding that they can't rack servers fast enough to keep up with users' needs. A few companies say they are using extensively re-engineered software and cooling systems to decrease wasted power; among them are Facebook and Google, which have also redesigned their hardware. Still, according to recent disclosures, Google's data centers consume nearly 300 million watts and Facebook's about 60 million watts. Many of these solutions are readily available, but in a risk-averse industry, most companies have been reluctant to make wholesale changes, according to industry experts."
This discussion has been archived. No new comments can be posted.

  • by Scareduck ( 177470 ) on Sunday September 23, 2012 @10:30AM (#41428433) Homepage Journal

    "Buy our product or we'll agitate for standards that make them mandatory." It's shit like this that annoys me mightily about the NYT.

    • Yeah. This article struck me as particularly whiny. 30 Nuclear Power Plants! The horror.

      It's almost like they want you to read a paper newspaper or something.

      • by icebike ( 68054 ) * on Sunday September 23, 2012 @04:31PM (#41431087)

        Yeah. This article struck me as particularly whiny. 30 Nuclear Power Plants! The horror.

        It's almost like they want you to read a paper newspaper or something.

        I question virtually ALL the claims in the story. It's nonsense of the highest order, with no research to back it up. Do you see Google or Amazon publishing utilization rates for their server farms?

        Do you see Amazon or Google or any cloud provider having problems paying the power bill?
        Did they not say that "Data Center providers are finding that they can't rack servers fast enough to provide for users' needs"?

        If the power bill is paid, what is the problem?

        Why isn't the harm done to the world's resources (and society in general) by publishing the New York Times evaluated?

        Nancy Nielsen, a spokeswoman for The New York Times Company, said [nytimes.com] only the limited supply of recycled paper constrained the company from using more of it. She said 6.5 percent of the newsprint used by the company contained recycled fibers.

        ...

        ''The inventory of waste newspaper is at an all-time record high,'' said J. Rodney Edwards, a spokesman for the American Paper Institute, a trade organization. ''Mills and paper dealers have in their warehouses over one million tons of newspapers, which represents a third of a year's production. There comes a point when the warehouse space will be completely filled.''

    • by Jeremy Erwin ( 2054 ) on Sunday September 23, 2012 @12:51PM (#41429461) Journal

      I don't really understand this hostility. I read the New York Times online, every day. I don't get a paper delivered to my door. Those few, those happy few who actually read this New York Times article, read it online.

      The circulation is a million pulp, half a million online.

      • by sycodon ( 149926 ) on Sunday September 23, 2012 @10:17PM (#41433035)

        I had a visceral reaction to the article. This is because they are pointing out the obvious, pretending they are performing some kind of public service, and patting themselves on the back.

        Do they really think the data centers don't know these things? Do they really think they are not trying to address them? Power costs are pretty high up on the balance sheet, and anyone who's been paying attention knows millions of dollars are spent on researching ways to bring those costs down.

        So it's kind of like a guy standing at a car wreck watching the rescuers trying to pull someone from a car and saying, "if you don't get that guy out of there, he's gonna die". No Shit Sherlock.

        Just shows that reporters are idiots. Always have been and always will be.

  • by Anonymous Coward

    Using VMware or other similar technologies, you can dramatically cut the amount of energy you need to power your servers. You can even take advantage of on-demand servers, so that if you do suddenly become busy, more hardware powers up to handle the load. Great for optimizing around a 9-5 workday.
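
    In rough Python, the scaling policy amounts to something like this (a minimal sketch; the pool size, per-host capacity, and print statements are invented stand-ins for real VMware/cloud power-management calls):

        import math

        HOSTS = 16    # physical hosts in the pool (illustrative)
        CAP = 100     # arbitrary "load units" one host can serve

        def hosts_needed(load):
            # enough hosts for the load, plus one warm spare for surges
            return min(HOSTS, math.ceil(load / CAP) + 1)

        def rebalance(load, powered_on):
            target = hosts_needed(load)
            if target > powered_on:
                print("power on %d host(s)" % (target - powered_on))
            elif target < powered_on:
                print("suspend %d idle host(s)" % (powered_on - target))
            return target

        # Toy 9-to-5 demand curve: near-idle overnight, busy all day.
        demand = [50] * 8 + [900, 1200, 1400, 1100] * 2 + [200] * 8
        powered = HOSTS
        for load in demand:
            powered = rebalance(load, powered)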

    • by thoriumbr ( 1152281 ) on Sunday September 23, 2012 @10:46AM (#41428575) Homepage
      Or use a mainframe running lots of Linuxes... it can cut the power to 10% while delivering the same computing power. Mainframes have very good power management these days.
      • Mod parent up.

        Mainframes still get a bad rap from those who remember the '80s and early '90s. But the new IBM series can run 10,000 VMs of Red Hat, and unlike VMware, the VMs can share RAM with each other. The PowerPC processors have something insane like 32 MB of L3 cache per core. You can also set the mainframe to spin VMs up and down with load, dynamically, all by itself.

        Mainframes have always been 15 years ahead of Lintel servers. VMware is doing things today that mainframes did in the 1990s.

        • by Bengie ( 1121981 )
          Saying a PowerPC CPU with 32 MB of L3 cache is better than Intel CPUs is like saying Intel Xeons are better than an ARM A7... just look at the performance benchmarks!

          Having 32 MB of cache trades off cost, power, and latency for better data locality, which is almost completely useless for normal PC workloads.
        • Just FYI, VMware can share RAM between VMs; it's often used to provision more Windows systems than could otherwise exist on the underlying hardware, since a lot of RAM just holds the same pages of static OS code, and there's no need to keep a copy for each instance if it never changes.

          It's one big reason to use VMware over Hyper-V, not that that stops anyone using MS stuff from using Hyper-V simply because it has that big M branding on it :(
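
          The page-sharing trick is easy to demonstrate in miniature: hash each guest page and keep one physical copy per unique hash. A conceptual sketch only (real hypervisors compare full page contents and break shares copy-on-write rather than trusting a hash):

              import hashlib

              PAGE = 4096

              def dedup_pages(guest_images):
                  """Count total vs. unique 4 KB pages across VM images."""
                  unique = set()
                  total = 0
                  for mem in guest_images:
                      for off in range(0, len(mem), PAGE):
                          total += 1
                          unique.add(hashlib.sha256(mem[off:off + PAGE]).digest())
                  return total, len(unique)

              # Three "VMs" sharing 64 identical read-only OS pages each,
              # plus one private page apiece:
              os_code = bytes(PAGE) * 64
              vms = [os_code + bytes([i]) * PAGE for i in range(1, 4)]
              print(dedup_pages(vms))   # (195, 4): one copy serves all three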

    • Virtual machines - working around the limitations of operating systems that cannot effectively do a few tasks at once.
      There is also the security aspect of having Chinese walls between things on the same system, but that was initially as accidental as NAT stopping people from getting in, and despite a lot of work to avoid exploits, it's not really any better than some of the solutions within an OS.
  • by bmo ( 77928 ) on Sunday September 23, 2012 @10:32AM (#41428449)

    >This is the price being paid to ensure everyone has instant access to every email they've ever received, or for their instant Facebook status update.

    Way to trivialize users' needs.

    Crikes.

    --
    BMO

    • This is the reason there is a race for high-performance chips that draw little power. Your tablet may sip power and take a few seconds to render a Facebook page, but the server sending millions of pages needs to sip little power too. Whoever makes the best server chips wins.

  • Corrected URL (Score:5, Informative)

    by Rick Zeman ( 15628 ) on Sunday September 23, 2012 @10:34AM (#41428469)

    I have no idea how the URL got mangled when Timothy moved the anchor text to a different part of the article, but here's the correct link:

    http://www.nytimes.com/2012/09/23/technology/data-centers-waste-vast-amounts-of-energy-belying-industry-image.html?hpw&pagewanted=all [nytimes.com]

  • So? (Score:2, Insightful)

    by Anonymous Coward
    Worst case, if we just include first-world people, it's only about 100 W per person. Change a few lightbulbs, turn down the heat, set the AC up by a degree, and you've reduced your power consumption by that amount. Of course, we need to talk about energy here, not just power, but hey.

    And since when does a tech site need to spell out "millions" and "billions"? Are we not able to grasp mega and giga?

    • Re:So? (Score:5, Interesting)

      by vlm ( 69642 ) on Sunday September 23, 2012 @10:54AM (#41428627)

      Worst case, if we just include first-world people, it's only about 100 W per person

      Rough engineering estimate, a watt continuously is a buck per year.
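
      That rule of thumb is easy to verify, assuming something like $0.11/kWh (the exact rate is an assumption):

          hours = 24 * 365                 # 8760 hours in a year
          kwh = 1 * hours / 1000.0         # one continuous watt = 8.76 kWh/yr
          print(kwh * 0.11)                # ~$0.96/yr -- call it a buck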

      For commercial use I'm completely unimpressed. That's like the depreciation on my desk and chair, or the department's "free" coffee budget for a month. A tiny fraction of the overhead lighting power, which is a tiny fraction of the HVAC power, which is a tiny fraction of my salary. In terms of environmental degradation, the gasoline I burn to commute is worse than my share of the corporate data center (based on a kWh being about a pound of coal, so 16 pounds of coal per week, while commuting four times per week burns about 4 gallons, or about 24 pounds, of gasoline).

      For residential use I'm amazed. They need to make $100/yr off my mom, who doesn't even have internet access, just to pay the electrical bill. I donno if they can make $100 off me per year, and I'm always on the net doing "stuff". One interesting comparison WRT advertising is "one million page views per year = one thousand dollars per month, or about a penny per pageview". Donno how true that is anymore. But it would imply that just to pay the electric bill, the average person would have to visit 27 web pages per day, every day, which seems pretty high across an entire nation.

      • Re:So? (Score:5, Insightful)

        by flaming error ( 1041742 ) on Sunday September 23, 2012 @11:53AM (#41429065) Journal

        Our energy supply is finite, and so our energy usage should be measured in units of energy, not dollars.

        Prices are not based on market forces or total costs; they are based on government policies.

        And our money supply itself is schizophrenic, as in disconnected from reality. It's value fluctuates by moods, it's continually debased by printing more, it's backed only fractionally, and then only by the good faith and credit of future taxes on today's kindergartners.

        Measuring energy with dollars is like scoring sporting events by the applause of drunken fans.

        • Re:So? (Score:4, Funny)

          by Anonymous Coward on Sunday September 23, 2012 @11:57AM (#41429095)

          It's value fluctuates by moods,

          Sort of how people decide to use apostrophes.

        • Re:So? (Score:4, Interesting)

          by Ken_g6 ( 775014 ) on Sunday September 23, 2012 @01:11PM (#41429619)

          The Earth receives 170 PW of energy from the sun. The sun's total output is 380 YW (trillion trillion watts). How much of that we can capture and use is limited mainly by how much money we spend. So I would say that measuring energy with money makes perfect sense.

          • Re:So? (Score:4, Funny)

            by flaming error ( 1041742 ) on Sunday September 23, 2012 @02:27PM (#41430179) Journal

            The sun's total output is 380 YW (trillion trillion watts). How much of that we can capture and use is limited mainly by how much money we spend.

            Oh yeah. I sometimes forget that dollars trump Physics.

            • Re: (Score:2, Insightful)

              by Anonymous Coward

              We're not talking about capturing 5%; the relevant range is much lower. Within that range there aren't any physical limitations, just cost limitations. If the price of electricity were $0.50 per kilowatt-hour, we'd start covering the deserts with solar panels and extracting power from many other sources. Costs rise asymptotically as one approaches a physical limit.

              OTOH, our knowledge of physics is incomplete; a profit incentive will encourage novel approaches that will likely exceed our theoretical maximums.

          • You may want to read this [ucsd.edu] before you start talking about how much energy we can use.

            We can't just produce an infinite amount of energy here on Earth; all that energy ends up as waste heat. A back-of-the-envelope calculation is hard, but say we actually used all the sunlight we receive. An ideal earth-sized blackbody would be at 5.3 degrees C. A body that reflects 30% of the incoming light (as the Earth does) should theoretically be at about -18 degrees. The average surface temperature on Earth is about 14 degrees,
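
            Those numbers fall straight out of the Stefan-Boltzmann law. A quick check, assuming a solar constant of about 1366 W/m^2:

                SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
                S = 1366.0         # solar flux at Earth's orbit, W/m^2

                def equilibrium_celsius(albedo):
                    # absorbed S*(1-A)/4 per m^2 of surface = radiated sigma*T^4
                    return (S * (1 - albedo) / (4 * SIGMA)) ** 0.25 - 273.15

                print(equilibrium_celsius(0.0))   # ~5.3 C, ideal blackbody
                print(equilibrium_celsius(0.3))   # ~-18 C with Earth's albedo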

          • by snarkh ( 118018 )

            The sun produces more energy in a millisecond than our civilization has used since the beginning of time.
            There is no scarcity of resources per se, just the scarcity of our ingenuity to devise methods to capture it.

        • Our energy supply is finite, and so our energy usage should be measured in units of energy, not dollars.

      Of everything we have and do on this planet... electricity is the closest thing we have to an infinite commodity. Until the entire surface of the earth is shaded, all wind ceases to blow, all rivers stop flowing, all tides stop, all mountains have been leveled, all thunderstorms cease to be generated, the mantle and core cool to surface temperatures, and we've burned every last calorie of biomass... until then

          • and the only limitation is the cost of converting the energy into usable forms of electricity.

            Ok. Try calculating the cost in joules instead of dollars. I think you'll find that once we have to expend 1 joule of energy to extract one joule of electricity, this virtually infinite commodity will be quite useless.

      • Not so. Energy time-shifting and location-shifting are just as important as generating it.

        Even if it's a net loss, if you can move energy from where it's lying dormant or being wasted to where it can be used, it can be worth it.

              • I'm not talking about storing and transporting energy, I'm talking about extracting the energy you want to store or transport.

                Would you burn 100 joules to extract 100 joules? If you're in the energy production business, you'll soon be out of business.

        • "Our energy supply is finite,"

          If you mean "the sun", yes, but that will take a while. Some of our Currently More Convenient supplies are finite.

  • That capacity is in reserve for bursts of activity, which are exactly when something important happens. "Oh, sorry, can't service your request, we're at capacity now, come back later when you've totally lost interest or forgotten about this." Typical lack of insight by the NYT. They did what comes naturally: write the story about how something is bad, then find a lawyerly justification later. I mean, how would it look if they spent all that money on consultants and then failed to find that things are bad?

    Oh,

    • by Compaqt ( 1758360 ) on Sunday September 23, 2012 @10:43AM (#41428547) Homepage

      I wonder if the excess servers could be left off, and during rush periods, they could be turned on via IPMI [wikipedia.org]?

      • I was thinking the same thing. It seems companies like Facebook, Google, Apple, and Amazon could pioneer in these areas. I imagine the saved electricity cost would more than make up for the development efforts.
      • by guruevi ( 827432 ) on Sunday September 23, 2012 @11:09AM (#41428735)

        That's already how many datacenters do it. Still, that capacity takes about 2-10 minutes to come up to speed, so you still need somewhat of a buffer. What they need is instant-on servers, which the big guys are experimenting with. But the problem is not Google or Netflix or Facebook; they run a pretty efficient operation. It's the rest of the business world, who'd rather buy an IBM or HP honking piece of metal that converts 20% of its power to heat before anything remotely useful has been done than experiment with what they need and could do to improve on such designs.

        • by Sorthum ( 123064 )

          Well... yes. My employer runs three racks of servers all in; we don't have the bandwidth / R&D budget to investigate better options. The big players (Google, Amazon, etc) need to pioneer research in this area, at which point it will (ideally) trickle down to the masses.

        • 2-10 minutes? A machine in S3 suspend should come up a hell of a lot faster than that. Ballpark 6 seconds.
      • "I wonder if the excess servers could be left off, and during rush periods, they could be turned on via IPMI?"

        Of course yes.

        But then, powering on a server via IPMI can take anything from 30 seconds to three minutes (discounting the case where any of its partitions need to be checked...).

        Now, imagine your mails are stored in a server that is now off. Will you want to wait for minutes to get to them?
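
        For what it's worth, the wake-up itself is a one-liner with ipmitool; the boot-to-service delay is the real cost. A hedged sketch (the BMC address, credentials, and SSH check are placeholders, not anyone's production setup):

            import socket, subprocess, time

            def ipmi_power_on(bmc, user, pw):
                # standard ipmitool invocation over the lanplus interface
                subprocess.check_call(["ipmitool", "-I", "lanplus", "-H", bmc,
                                       "-U", user, "-P", pw,
                                       "chassis", "power", "on"])

            def seconds_until_port(host, port=22, timeout=300):
                start = time.time()
                while time.time() - start < timeout:
                    try:
                        socket.create_connection((host, port), timeout=5).close()
                        return time.time() - start  # the 30 s..3 min in question
                    except OSError:
                        time.sleep(5)
                raise TimeoutError("host never came up")

            ipmi_power_on("10.0.0.42", "admin", "secret")   # placeholder values
            print(seconds_until_port("10.0.0.142"))         # placeholder host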

    • "Oh, sorry, can't service your request, we're at capacity now, come back later when you've totally lost interest or forgotten about this."

      How about this: "We've gone green, and we keep some of our servers turned off. We'll have the page ready for you once you're done watching this interstitial ad."

  • by roystgnr ( 4015 ) <royNO@SPAMstogners.org> on Sunday September 23, 2012 @10:48AM (#41428589) Homepage

    in this letter and comment. [cafehayek.com]

    The most ironic point: "Should we discover (as we undoubtedly would) that tens of thousands of copies of today's NYT were printed, delivered, and sold to subscribers who never read Glanz's report, do we conclude that the NYT needs a new and less-wasteful business model?"

  • Sad sad article... (Score:2, Interesting)

    by Anonymous Coward

    There are so many aspects left unexplored. Part of the problem is that power is also wasted on inefficient code. Bad abstractions and poor data structures. The reasons: schedule pressure and untrained monkeys coding in PHP. There is too much focus on the ones running the data centers; part of the problem is who they are buying their software from.

    • Part of the problem is that power is also wasted on inefficient code. [...] The reasons: schedule pressure and untrained monkeys coding in PHP.

      At work we have a reasonably trained monkey coding in PHP. What language would you recommend that is more efficient for a web application, balancing programmer efficiency with runtime efficiency?

      • Nothing wrong with PHP, and if you have a small-scale system, then there's little to no gain from rewriting it. However....

        if you have a lot of systems, then a rewrite in the most efficient system you can get will benefit you a lot. This is why Microsoft has said that 88% of their datacentre costs are in hardware and power, and is also the reason why they're migrating back to native C++ code! (yep, bye .NET, don't let the door hit your bloated ass on the way out).

        I always said if you want programmer productiv

      • He's saying you need to write your website in C.

  • by tokencode ( 1952944 ) on Sunday September 23, 2012 @11:01AM (#41428683)
    This article is simply trying to make news where there isn't any. Of course only a fraction of the power consumed goes into actual computations. For starters, you need to account for cooling: roughly speaking, for every watt of server power load, you need to account for one watt of cooling energy. This essentially halves the potential efficiency. In addition, you need to account for the amount of power it takes just to maintain state when you're talking about a data center of that scale; volatile memory consumes power just to retain its current values.

    Unlike Facebook and Google, most datacenters do not have 100% control over the hardware and software being run. Additionally, datacenters often charge for power, space, etc., and the client simply pays for what they use. In many instances efficiency is not for the datacenter to determine, and one could argue that it may not even be in the datacenter's financial interest. Great strides have been made in scaling power consumption to fit computational demand, but this is more of a hardware/software issue than a datacenter issue.
  • by PolygamousRanchKid ( 1290638 ) on Sunday September 23, 2012 @11:18AM (#41428819)

    A server is a sort of bulked-up desktop computer, minus a screen and keyboard, that contains chips to process data.

    • by Pieroxy ( 222434 )

      A server is a sort of bulked-up desktop computer, minus a screen and keyboard, that contains chips to process data.

      Does it mean servers have mice?

  • How much power is being wasted by sites that do not honor "do not track"?
  • Lovely. (Score:5, Insightful)

    by 7-Vodka ( 195504 ) on Sunday September 23, 2012 @11:28AM (#41428893) Journal

    This is lovely. Let's worry about problems that don't exist, as if we don't have enough catastrophes to worry about.

    Power is money. As long as there is a somewhat unhampered economy in the locus of data centers (and there is), then every entrepreneur will attempt to economize power usage. You don't have to worry about it because the entrepreneurs that use power efficiently will eat the lunch of those that do not, ceteris paribus (all other things equal).

    Ipso facto this problem will solve itself. Case closed.

    In fact, now that I speculate on the possible reasons for publicity like this to be drummed up, it looks like a campaign for government regulations that will instruct entrepreneurs how they 'must' handle such a problem. Unfortunately, nobody can write such regulations, because they cannot foresee every circumstance and possibility, much less predict the future. Nobody on this earth can even tell a single other person what the ideal type and amount of preparation for power efficiency is. This is why we have economic calculation.

    If such regulations are enacted, ipso facto they will cause the problem itself.

    • Power is money. As long as there is a somewhat unhampered economy in the locus of data centers (and there is), then every entrepreneur will attempt to economize power usage.

      You conveniently ignore that power is also pollution. Gas, coal and oil pose current threats while nuclear power has huge costs associated with future care of spent radioactive materials. Western countries have been very slow to price any of these costs into the electricity supply cost.

      As a result, your hypothesis that the market will ta

    • Just steer the discussion to another point of wasted energy. Do you have any idea how much energy it costs to decrypt DRMed media? If you add up the extra power needed for all that DRMed media every time it is used, I'm sure you'd get a very impressive number, too.

      Save the environment! Fight DRM! :-)

    • As long as there is a somewhat unhampered economy in the locus of data centers (and there is), then every entrepreneur will attempt to economize power usage.

      There is not an "unhampered" energy economy. The energy economy is massively subsidized in several ways. The US government has spent trillions in wars and foreign aid to secure energy supplying areas. Our natural gas glut right now is going to be paid for by future generations in the form of devastating environmental damage, like damage to our water tables. Our continued use of fossil fuels in general will also be paid for mostly by future generations in the form of the costs of global climate change

    • Power is money. As long as there is a somewhat unhampered economy in the locus of data centers (and there is), then every entrepreneur will attempt to economize power usage. You don't have to worry about it because the entrepreneurs that use power efficiently will eat the lunch of those that do not, ceteris paribus (all other things equal).

      This would work fine except for all the externalities, which include global warming, people breathing particulates emitted by diesel backup generators, and a ruinous series of wars that the US has fought in the Middle East. All of these amount to government subsidies for energy consumption.

      From your sig, it looks like you're a libertarian. Me too, woo hoo. Hope you're voting for Gary Johnson, who I think is a better candidate than Ron Paul anyway.

      But just because we're libertarians, that doesn't mean we hav

  • by Waffle Iron ( 339739 ) on Sunday September 23, 2012 @11:43AM (#41428985)

    Take the case of me and Google. My share of their power is about 1 W electric (that's usually about 3 W thermal).

    However, I estimate that their maps and local business info features alone easily save me at least a couple hundred miles per year of driving. That would be about 10 gallons of gasoline per year, which is 38 W thermal that I'm not burning thanks to the info they're providing. Google provides at least a 10-to-1 payback in energy savings just in this one case.
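
    The arithmetic checks out, assuming roughly 121 MJ per US gallon of gasoline:

        mj_per_gallon = 121.0                # assumed energy content
        joules = 10 * mj_per_gallon * 1e6    # 10 gallons per year
        print(joules / (365 * 24 * 3600))    # ~38 W thermal, continuous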

  • by bcrowell ( 177657 ) on Sunday September 23, 2012 @11:43AM (#41428993) Homepage

    I'm part of the problem. Wish I wasn't, but I don't seem to have any choice.

    I run a small web site, and if it goes down, there are various consequences in my personal and professional life that can be extremely annoying and embarrassing. To stay sane, I need the site to have good uptime. Over the years, this has caused me to gradually migrate to more and more expensive webhosting, now ~$100/mo.

    The average load on my dedicated server is extremely low, so it's basically like one of the extremely wasteful boxes described in TFA. My site is basically I/O-intensive: I serve big PDF files. In terms of CPU, I'm sure the site would run fine on a low-end ARM, or as one of a dozen sites running off of the same Celeron chip. So by comparison with either of those hypothetical, energy-efficient setups, virtually all of the electrical power is being wasted. I'm a small fry, but there are millions of sites like mine, so I'm sure it adds up. (It would be interesting to know how much of total data-center power consumption comes from the big players such as Google and Facebook, and what percentage from the long tail of cottage industries like me.)

    There are basically two problems. (1) Nobody will sell me high-reliability webhosting on low-end hardware. The only way to get energy-efficient hardware is to get cheap webhosting. I've tried cheap webhosting. Cheap webhosts have low reliability and nonexistent customer service. (2) Sometimes you get spikes in demand, and you want some excess capacity to be able to handle them without crashing the server. Maybe you get slashdotted. Actually, in my case one thing that has been a problem is that some people apparently run IE plugins that are supposed to accelerate large downloads by opening multiple connections with the server. When these people hit my server and download a large PDF, the effect is very much like a DoS attack. My logs show one IP address using 300 MB of throughput to download a 3 MB PDF. I've written scripts that lock these bozos out ASAP, but on a low-end machine, these events would bring my server to its knees instantly.
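
    The lockout scripts are nothing exotic. A minimal sketch of the idea, where the log path, threshold, and iptables action are all assumptions for illustration, not my actual setup:

        import collections, subprocess

        LOG = "/var/log/apache2/access.log"   # hypothetical path
        LIMIT = 50                            # same-file hits that trip the ban

        def offenders(lines):
            hits = collections.Counter()
            for line in lines:
                f = line.split()
                # combined log format: f[0] = client IP, f[6] = request path
                if len(f) > 6 and f[6].endswith(".pdf"):
                    hits[(f[0], f[6])] += 1
            return {ip for (ip, _), n in hits.items() if n > LIMIT}

        def block(ip):
            # drop all further traffic from the offender
            subprocess.check_call(["iptables", "-A", "INPUT",
                                   "-s", ip, "-j", "DROP"])

        with open(LOG) as log:
            for ip in offenders(log.readlines()[-10000:]):
                block(ip)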

    • by rvw ( 755107 )

      There are basically two problems. (1) Nobody will sell me high-reliability webhosting on low-end hardware. The only way to get energy-efficient hardware is to get cheap webhosting. I've tried cheap webhosting. Cheap webhosts have low reliability and nonexistent customer service. (2) Sometimes you get spikes in demand, and you want some excess capacity to be able to handle them without crashing the server. Maybe you get slashdotted. Actually, in my case one thing that has been a problem is that some people apparently run IE plugins that are supposed to accelerate large downloads by opening multiple connections with the server. When these people hit my server and download a large PDF, the effect is very much like a DoS attack. My logs show one IP address using 300 MB of throughput to download a 3 MB PDF. I've written scripts that lock these bozos out ASAP, but on a low-end machine, these events would bring my server to its knees instantly.

      Have you taken a look at Amazon EC2, S3, and their other services? Post a question on their forums or on stackexchange and describe your situation, and I bet they can give you a solution that is cheaper and more reliable.

    • Get two geographically diverse cheap hosts rather than a single expensive host, and be one of dozens or more VMs on their hardware. Round robin on your DNS. Add more hosts as needed. If there is transactional information needed then it can get harder.

  • Tablets are (apparently) ousting desktop PCs and laptops as consumer devices. These are by necessity low-consumption, hence low-capacity devices (as in, they can barely play an HD video without screeching to a bloody halt); they're certainly not going to be doing any of what an '80s admin would have considered big-iron work. That would be left to... well, big iron. The infrastructure is already there; thin clients, virtualisation on multicore beasts that can chew through 4k CGI rendition in practically real ti

    • by Bengie ( 1121981 )
      My i7-920 quad core with an ATI 6950 doesn't break 300 watts while playing video games. Closer to 100 watts idle. Not sure where you get 450 watts short of a dual-GPU, OC'd CPU.
      Intel's Haswell CPU is claimed to use 1/20th the idle platform (CPU + motherboard) power of Ivy Bridge while maintaining the same performance.
      • I'm going by what my high-draw, 5-year-old, factory-clocked 2.8 P4 draws with its hungry heifer of a GeForce 8600GTS: 125 W for the CPU, 90 W for the GPU, 170 for the mainboard & RAM, 25 for the hard drive, 30 for the DVDRW, and 3 fans at 5 W each. It was built as a budget gaming rig, and I still use it to play fairly recent (read: less than 2 years old) games.

  • I know the term is horribly abused on a regular basis, but that's the whole point of cloud computing. It looks like a horrible idea because today it usually is, but eventually we'll get used to it, figure out how to make it work most of the time, and then we can have a lot fewer idle resources, since they can just be turned off. Even if it doesn't get you entirely out of colocated resources, if it can decrease the amount of hardware you have lying around doing nothing most of the time, there's a place for it.

  • Virtualization (Score:5, Interesting)

    by notdotcom.com ( 1021409 ) on Sunday September 23, 2012 @12:51PM (#41429463)

    This is one major reason that companies (even very large companies with "money to spare") are moving towards virtualization with incredible speed.

    I'm not going to go digging for numbers right now, but the statistics show that something like 100 percent of Fortune 100 companies use virtualization, and perhaps 85-90% of Fortune 500 companies.

    The larger virtualization solutions will actually take the VMs on idle servers, migrate them to another host machine, and power down or suspend the "extra" machines that were in use during core business hours.

    Virtualization also allows for spikes in CPU/network load, and then can take that power back when everyone goes home (a print server, an intranet web server, a domain controller, etc.). So physical machines actually DO get turned off when they aren't being taxed, and with more and more "software defined networking", the interconnects between systems can be scaled and moved as well.
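
    Under the hood, that consolidation step is essentially bin packing. A purely illustrative first-fit-decreasing sketch (real schedulers also weigh RAM, affinity rules, and migration cost):

        def consolidate(vm_loads, capacity=100):
            """Pack VM CPU loads (percent of a host) onto as few hosts
            as possible, so the leftover hosts can be suspended."""
            free = []                      # remaining capacity per host
            plan = {}
            for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
                for i in range(len(free)):
                    if load <= free[i]:
                        free[i] -= load
                        plan[vm] = i
                        break
                else:
                    free.append(capacity - load)
                    plan[vm] = len(free) - 1
            return plan, len(free)

        # Overnight: mostly idle utility VMs fit on one host.
        night = {"print": 3, "intranet": 5, "dc": 8, "build": 40, "mail": 25}
        print(consolidate(night))   # all five on host 0; suspend the rest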

    Now, I don't know how the big players are using this (e.g. Amazon, VMware, Rackspace, Google). I can't see inside their datacenters, but one would think that something like AWS would have a huge stake in saving power by turning off idle instances and moving VMs. Not only for the power savings from the server directly, but for the (approx) 30-40 percent more energy that it takes to cool the physical machines.

    It's also worth noting that larger companies are putting their datacenters in areas with plentiful (cheap) power. Places like Washington state, with hydroelectric power and a cooler average ambient temperature, allow for a huge savings on power right off the bat. Add things like dynamic scaling of server and network hardware, lights-out datacenters, and better designed cooling systems (look at Microsoft's ideas), and there is a huge power savings across the board.

    How much energy does the NYT use to print paper copies of the newspaper, distribute and deliver them, harvest the trees and process the paper? Now compare that with the energy that the online NYT uses. Which allows for more people to view the publication for less energy? I'm positive that it is the electronic version.

     

    • This is one major reason that companies (even very large companies with "money to spare") are moving towards virtualization with incredible speed.

      No, they move to virtualization because they think they need it, because they run their services on Windows, and because their admins read VMware ads in glossy magazines.

      There are non-virtualization solutions such as VServer, OpenVZ and LXC (and infrastructure based on those), not to mention traditional multiple services on a physical host. And any non-crappy server-side application can share load between multiple servers.

      • It's not just because they read something in a magazine. It's a matter of defense in depth, i.e., if someone is able to penetrate one application, that shouldn't mean they get access to the entire machine.

  • by Wrath0fb0b ( 302444 ) on Sunday September 23, 2012 @12:54PM (#41429485)

    This is the same "problem" that faces airline companies, taxi drivers, power companies, and cell network operators. Consumers pay for these services by usage, so total revenue is proportional to average use, but the costs are heavily skewed towards capital costs and so are proportional to the peak load that you can service. In that case, there's a fundamental tradeoff -- either we have to degrade service when demand hits the 95th percentile (just as an example) or we have to figure out a way to pay for the extra capital investment that's not needed 95% of the time.

    There are a few alternatives:

    (1) Overprovision and soak it up into the price structure for all consumers. This is what most power companies do -- they build enough power generating capacity for peak load and then charge a bit more per kWh to make up for the increased outlay.

    (2) Overprovision and charge extra at peak. This is the airline solution -- they always have service available but under contention the last few seats are exorbitantly expensive. Essentially those that need peak service are paying to leave a few seats open all the time in case they need them.

    (3) Don't overprovision: this is the taxi solution. This means that service degrades significantly under peak demand -- anyone trying to get a cab home on a Saturday night in a major city has experienced this. Those that do get a cab pay the usual fare, everyone else waits around a while. This is also the solution that California has routinely deployed for their inability to provide peak power during heat spells -- same price for everyone but rolling blackouts for the unlucky few.

    That's it -- there aren't any clean answers when you are making compromises between peak availability and average efficiency. You've either got to pay for the extra capacity when you don't need it or else you have to suffer when you don't have the capacity when you do need it.

    • Good summary.

      One other possibility, at least for servers and datacenters, is to move more of it to enormous virtualized systems such as EC2. The idea being that if you hosted a huge number of diverse websites on what is essentially one system, then they can all share capacity. Any individual site might get slammed at any time, but it's very unlikely that they'll all be slammed at the same time, so each site can have what is for its own purposes an insane level of extra capacity, but the whole system may ha

  • Think of how many data centers have dedicated appliances for filtering spam. If they want to save on power, they should take some actual action against spam instead of just being reactive.

    The data centers (and to a larger extent ISPs) remind us that spam is an economic problem. It is costing everyone money every day, so that a handful of spammers can make a lot of money pushing fake pills, fake watches, etc. If the data centers seriously want to reduce wasted power they should instead invest some h
  • My first thought when I saw the post was: "bean counters strike again: look at the wasted resources that aren't used all the time!"
    Having extra capacity to handle peak usage is not waste. Never will be. Making sure the extra capacity is efficient is another question; maybe the linked article talks about that, but I always get riled up when someone sees idle capacity and calls it waste.
  • So, that would be like a million servers maxing out their 300-watt power supplies every second of every day? Hrm, sounds a bit unlikely. Well, okay, figure it takes about as much power to cool a server as the server itself consumes, since it is (in theory) not putting any chemicals into high chemical potential... well, yes, then there's the inefficiency of air conditioning, offset by the natural cooling by fans and heat sinks (of buildings, not servers). Still, that's a lot of power. I know that cisco gear has n

  • I've talked in the past about why the liquid fluoride thorium reactor (LFTR) should be the immediate future of nuclear power generation.

    Here's one thing I haven't mentioned: LFTRs can be scaled down to 50-80 MW power plants, which are amazingly small and require very little real estate to operate. Because of its very small size, an 80 MW LFTR could sit almost at the site of the big server farm itself, and that could mean the server farm doesn't need a land-wasting big solar power farm nearby or have to be loca

"If it's not loud, it doesn't work!" -- Blank Reg, from "Max Headroom"

Working...