Topics: The Internet, Power, Hardware, IT

Shortage of Electricity Drives Data Center Talks 194

Engineer-Poet writes "Per the San Jose Mercury News, competitors such as Google and Yahoo are meeting to discuss the issue of electricity in Silicon Valley. How much of the USA's 4038 billion kWh/year goes into data centers? Enough to make a difference. Data centers are moving out of California to spread the load and avoid a single-point-of-failure scenario. This is a serious matter; as Andrew Karsner (assistant secretary of energy efficiency and renewable energy for the Department of Energy) asked, 'What happens to national productivity when Google goes down for 72 hours?' I'm sure nobody wants to know." From the article: "Concern about electricity pricing and volatility has led Microsoft to talk with its network manufacturers about building more efficient servers. IBM and Hewlett-Packard -- which both build data centers -- want to improve efficiency at the facilities. AMD promotes changing the design of data centers to increase airflow to keep the supercomputers cool."
This discussion has been archived. No new comments can be posted.

  • Nothing FP (Score:5, Funny)

    by Anonymous Coward on Friday December 08, 2006 @10:08AM (#17161444)
    When Google goes down, productivity probably goes up.
    • Exactly. But it was a poor choice of example. Of greater concern would be data centers handling e-commerce or other kinds of transactional data - though anyone with that much business has probably considered this when setting up a backup datacenter.
    • Re: (Score:3, Insightful)

      by mrjb ( 547783 )
      Productivity will hardly be influenced at all. When Google goes down for 72 hours, people will switch back to AltaVista for a few days. If Google *regularly* goes down for 72 hours, people will switch back permanently. It's not like Google is the *only* search engine around; it's just the most popular.

      This question is entirely beside the point, though. As it is in Google's interest to stay the most popular search engine, I'm sure they have got their backup mechanisms in place. I'm pretty sure they
      • by pilkul ( 667659 )
        Remember that huge blackout that took out most of the east coast a while ago? Sure Google is great, but they probably don't have the magic powers to stop their service going down if something similar hits them where their main centers are located.
        • Sure they do (Score:5, Insightful)

          by brunes69 ( 86786 ) <`gro.daetsriek' `ta' `todhsals'> on Friday December 08, 2006 @10:47AM (#17161914)
          You gotta remember that, when a blackout hits a huge swath of territory, it also brings down the *client machines* in that area as well, so your backup centre doesn't necessarily have to handle your entire peak load.

          Google only needs a few redundant data centers (one in the East, one in the West, one mid-central) to basically ensure they can weather any power-loss scenario. If they had 3 such separate centers (which I have no doubt they already have), the only way they're going to be totally offline is if the whole national grid goes down - in which case Google should be the least of your worries if you're a lawmaker.

          • Re: (Score:3, Interesting)

            by AK Marc ( 707885 )
            If they had 3 such separate centers (which I have no doubt they already have), the only way they're going to be totally offline is if the whole national grid goes down - in which case Google should be the least of your worries if you're a lawmaker.

            Especially if they have one in Dallas (or any large city in TX other than El Paso). The TX grid is the most independent of all electric grids, and rarely do problems traverse its boundaries.
            • That's because Texas is smart enough to have built new power plants in the past 25 years, whereas California seems to think all they need are good intentions.

               
              • by AK Marc ( 707885 )
                It's the interconnects. I'm not talking about the likelihood of Dallas losing power. Assuming local generation is as reliable as CA or NY (maybe not a reasonable assumption, but separate from my point, so it is excluded), a power outage in NY is more likely to affect San Jose than Dallas. Maybe it does come back to your point: since TX isn't demanding or supplying excessive power to the areas around it, the interconnects aren't as stressed. But the result is that TX is the most power-indep
        • by Andy Dodd ( 701 )
          They probably do have such magic powers, bought from the likes of Cummins Power Generation - http://www.cumminspower.com/na/ [cumminspower.com]
        • Re: (Score:3, Interesting)

          by Brushfireb ( 635997 ) *
          I thought I read that Google now has power plants and data centers, in Oregon, that are based primarily on hydropower from a huge river.

          So, if power goes down, they actually might be the only ones who *ARE* up and running, which is pretty fucking cool.

          B
          • Yes, in Hood River, OR - right next to the Bonneville Dam which supplies a big chunk of electricity in the west.
        • by Splab ( 574204 )
          I remember a few years back when we had a blackout here in Denmark. A datacenter a friend of mine was working for was somewhat proud that they managed to stay up during the blackout - only thing is, none of their peers stayed up, so they were not actually connected to anything :)
          • Re: (Score:2, Interesting)

            by Tesen ( 858022 )
            I remember a few years back when we had a blackout here in Denmark. A datacenter a friend of mine was working for was somewhat proud that they managed to stay up during the blackout - only thing is, none of their peers stayed up, so they were not actually connected to anything :)

            But that is one of the cool things about an outage; okay, I am a sysadmin, so my job is to keep everything up and running... however, during our annual plant shutdowns, I enjoy watching server utilization and network utilization d
      • Re: (Score:2, Funny)

        When Google goes down for 72 hours, people will switch back to AltaVista for a few days.
        Alta who?

        I think a lot of people don't know about other search engines anymore.
        • Memory:
          Google
          AltaVista
          Metacrawler
          dogpile
          yahoo
          MSN
          ASK

          try wikipedia: Yup a nice list on wiki at
          http://en.wikipedia.org/wiki/Search_engine [wikipedia.org]

          try something obvious like...
          www.searchengines.com (nope... some kind of placeholder site)
          www.searchengine.com (yup-- appears to be a search engine).
          www.searchweb.com (maybe-- looks like a placeholder page but sort of looks like search page)

          Look at people that rate search engines
          Typed this in since it seemed logical
          www.searchenginerating.com (appears to be more about getting
    • When Google goes down, productivity probably goes up.

      I have an office where, if you're on one access point instead of another on the same net, certain sites, Google in particular, are inaccessible. I have to do all my searching on MSN (pity me) instead when I'm in that area, and let me tell you with absolute certainty, productivity goes way down.

    • by The Man ( 684 )
      This was my thinking too, but I suppose people would adjust rather quickly to using other search engines; they're not as good but for the most part will get the job done. The newsgroup data would be the biggest loss there. Even if all of them failed, there'd be winners and losers. The losers would be companies doing research or otherwise actually using the Internet; the winners would be just about everyone else, for whom employee web use is a drain on productivity.

      We'd probably also get a hefty productiv
  • by Lanu2000 ( 972889 ) on Friday December 08, 2006 @10:10AM (#17161468)
    One thing that needs to be looked at with the congregation of data centers is why they cluster like that. Here in the North East, any kind of bandwidth will cost an arm and a leg compared to the North West area. I've recently been involved in pricing out colocation for one of our webservers, and a simple T1 costs 4-5 times as much in the N.E. as it does in the N.W. I'm sure we'd see more evenly distributed data centers if costs were evenly distributed too. How about taking some of those new 40% efficiency solar panels and moving some data centers down to the S.W. for a start?
    • by Dun Malg ( 230075 ) on Friday December 08, 2006 @10:46AM (#17161900) Homepage

      How about taking some of those new 40% efficiency solar panels and moving some data centers down to the S.W. for a start?
      A large portion of the power usage goes towards keeping the machines cool. Moving the data centers to a hotter climate to take advantage of the extra sunlight via solar cells is essentially a wash, as the added generation capacity is easily eaten up by the additional cooling needs. Actually, it's a net loss, as solar power systems aren't free...
      • Re: (Score:2, Interesting)

        by snark42 ( 816532 )
        There's no reason they couldn't just set up solar panel "power plants" in Arizona and add that power into the grid. I see an old, dated power grid as a big concern, though.
      • Re: (Score:3, Interesting)

        Comment removed based on user account deletion
      • Re: (Score:3, Informative)

        by Lanu2000 ( 972889 )
        It is warmer in the South West, but the additional heat will be the external ambient outside temperature, not the heat generated from the boxes inside. Efficient insulation will help reduce the electrical cost of cooling associated with the increase in ambient temperature, so it will not surpass the generated electricity. Think of root cellars -- they stay cool nearly all year round because of their insulation. Plus with the newer generations of processors radiating less heat, the cooling will be that much l
        • Re: (Score:3, Informative)

          by mithluin ( 979757 )
          It's misleading to say a root cellar works because of insulation: at least as important is exchanging heat freely with the ground around it, which past a few feet down stays at roughly the same temperature year-round.
          See http://en.wikipedia.org/wiki/Earth_cooling_tubes [wikipedia.org] for a more recent take on the same principle.
      • by dasunt ( 249686 )

        A large portion of the power usage goes towards keeping the machines cool. Moving the data centers to a hotter climate to take advantage of the extra sunlight via solar cells is essentially a wash, as the added generation capacity is easily eaten up by the additional cooling needs. Actually, it's a net loss, as solar power systems aren't free...

        There are systems (heat pumps) which appear to operate at greater than 100% efficiency when comparing the amount cooled vs the amount of power supplied. They wo
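        The "greater than 100% efficiency" here is the coefficient of performance (COP): a heat pump moves heat rather than creating it, so the heat moved can exceed the electrical work supplied. A minimal sketch of the arithmetic, using an assumed COP of 3 and the ~15 kW per-rack figure quoted elsewhere in this thread:

            # Cooling as a heat pump: electrical work needed to move a given
            # heat load. The COP value below is an illustrative assumption.
            def cooling_power_required(heat_load_w: float, cop: float) -> float:
                """Electrical power (W) needed to remove heat_load_w at a given COP."""
                return heat_load_w / cop

            heat_load = 15_000   # one high-density rack, ~15 kW (figure from this thread)
            cop = 3.0            # assumed vapor-compression chiller COP

            print(cooling_power_required(heat_load, cop))
            # -> 5000.0 W: 15 kW of heat moved for 5 kW of work, which is why
            #    COP quoted as "efficiency" comes out well above 100%.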

    • While Google, Yahoo and Microsoft can build stand-alone data centers in Washington State to power their search engines, enterprises and Internet companies want space and connectivity in the major business hubs. Right now the strongest demand for data center space is focused on three markets: New York/Northern NJ, northern Virginia, and Silicon Valley. While power is less expensive in northern Virginia, none of these markets are cheap.

      The other hot data center real estate market is Austin [datacenterknowledge.com], which has benefitte

  • Cool Running (Score:3, Interesting)

    by tedgyz ( 515156 ) * on Friday December 08, 2006 @10:11AM (#17161486) Homepage
    We just started switching from Intel to AMD hardware in our servers (HP DL385). Not that we pay per kWh, but I figure less power consumption means less heat and less fried hardware.

    AMD has a website on the topic: Real Efficiency in the Data Center [amd.com]
    • Re: (Score:2, Informative)

      by nocwage ( 956947 )
      I'm surprised no one has mentioned Sun Microsystems' CoolThreads technology. The company I work for had a problem with too much heat and too much power in our co-lo data center. We managed to replace 2 V880 servers with one T2000; not only did we free up an entire rack with a single 2U box, but we also eliminated over 5000 watts of power consumption. (estimating 2800 watts per V880 and the T2000 consuming around 250) Obviously those boxes are not for everyone; there is only one FPU even though the box has 8 co
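      Taking the comment's own wattage estimates at face value, the arithmetic behind the claim is straightforward (a sketch, not measured data):

          # Power eliminated by consolidating two V880s onto one T2000,
          # using the parent comment's estimates (assumptions, not measurements).
          v880_watts, t2000_watts = 2800, 250
          print(2 * v880_watts - t2000_watts)   # 5350 W, consistent with "over 5000 watts"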
      • by tedgyz ( 515156 ) *
        Yes, Sun has entered the efficiency game. We are weaning ourselves off Solaris/Sparc. However, I would consider their AMD offerings: Sun AMD Opteron servers [sun.com]. The Sun Fire X4100 claims 56% less heat than the Xeon counterpart.
      • The Sun T2000 stuff was obsolete the day it launched when compared to competing x86 solutions.

        http://www.anandtech.com/printarticle.aspx?i=2727 [anandtech.com]

        The CPU power/watt wasn't really that much better compared to x86 stuff of that time.

        It is now nearly 9 months since then, and AMD and Intel have improved significantly. Where is the T2000 or T1 now? Look at Intel - their latest CPUs now trash AMD's by about the same margin by which AMD used to trash Intel's offerings.

        As long as you skip the Intel P4 stuff, and the silly
  • by Giant Ape Skeleton ( 638834 ) on Friday December 08, 2006 @10:11AM (#17161492) Homepage
    The larger data centers could install "bike farms" -- row upon row of stationary bikes hooked up to huge capacitors.

    Locals and guest workers would be hired to pedal for one-hour shifts each, generating some portion of the needed power and giving a boost to the local economy. Don't think "galley" -- think "self-sustaining"!

    If you'd like to use this idea, please contact me via my Slashdot account. Thanks.

    • by Joebert ( 946227 ) on Friday December 08, 2006 @10:25AM (#17161652) Homepage
      Locals and guest workers would be hired to pedal for one-hour shifts each

        Don't be foolish, this is America we're talking about; call it a Gym and charge admission to use those bikes.
      • call it a Gym and charge admission
        That is absolutely brilliant. Contact me and let's make millions :)
        • by Zocalo ( 252965 )
          Jokes aside, this seems like a brilliant way for a gym to offset at least some of its operating electrical bill, but I can't recall ever reading about a single instance of this being put into practice. When I was a kid I bought a rig to power the lights on my bike via a simple friction mechanism off one of the wheels for about £10, so I doubt cost is an issue. Is anyone aware of this being done on a larger scale, or has the idea really just not occurred to anyone?
          • The problem is most people produce a rather pathetic amount of energy. If you've been to a gym lately you will notice the people who actually spend a lot of time on a machine are working at a rather slow rate (~100 W), so you need 20-30 of them just to light the place. Compare the cost of that setup to buying electricity at $0.12/kWh; it takes a LONG LONG time to pay off.
          • Re: (Score:3, Interesting)

            by Kijori ( 897770 )

            Jokes aside, this seems like a brilliant way for a gym to offset at least some of its operating electrical bill, but I can't recall ever reading about a single instance of this being put into practice. When I was a kid I bought a rig to power the lights on my bike via a simple friction mechanism off one of the wheels for about £10, so I doubt cost is an issue. Is anyone aware of this being done on a larger scale, or has the idea really just not occurred to anyone?

            This is done in a number of gyms. The power produced by an average person on an exercise bike isn't enormous, though, and is used to power the display on the bike itself. There may be a tiny bit left over, but not enough to be any real use.

          • I've seen it done at the solar living institute in hopland, california. Or was that solar living center now? I always forget.
      • Re: (Score:3, Insightful)

        by yarbo ( 626329 )
        If you call it a gym you're going to have to put TVs everywhere. There goes your electricity.
    • Re: (Score:3, Insightful)

      According to Wikipedia, an average healthy human is able to produce 3 W/kg for at least an hour, therefore 200 to 250 W. With minimal pay + other expenses, let's say they cost you $10 an hour and the system has 50% conversion efficiency; that's $80 to $100 for each kWh.
      You really are desperate if you need to rely on that.
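      The parent's numbers check out; a quick sketch using its stated assumptions (200-250 W per rider, 50% conversion efficiency, $10/hour all-in labor cost):

          # Cost per kWh of pedal power, using the parent comment's assumptions.
          labor_cost_per_hour = 10.0
          efficiency = 0.5

          for rider_watts in (200, 250):
              delivered_kw = rider_watts * efficiency / 1000   # kW reaching the load
              cost = labor_cost_per_hour / delivered_kw        # $/hr divided by kW = $/kWh
              print(f"{rider_watts} W rider: ${cost:.0f}/kWh")
          # 200 W -> $100/kWh, 250 W -> $80/kWh -- versus ~$0.12/kWh from the grid.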
    • by Bozzio ( 183974 ) on Friday December 08, 2006 @10:32AM (#17161722)
      I have one mod point left, but I can't seem to find the "Insane" option.
    • by Xugumad ( 39311 )
      Hired? HIRED??? No, you sell them time on the bikes, as a cheaper alternative to going to the gym!
  • Share the Power (Score:3, Insightful)

    by Doc Ruby ( 173196 ) on Friday December 08, 2006 @10:11AM (#17161494) Homepage Journal
    So Google has more money than it has electricity. And it's HQ'd in some of the most expensive real estate in the country. And its servers are remote to practically every user in the world.

    That sounds like a perfect reason for nearly all of Google's servers to live distributed around the US, and the globe. With local operators for physical access, and global remote admins for most normal operations.

    The past year or so we've heard all kinds of wild rumors about "Google in a box": supercomputers in a shipping container for rapid deployment around the world. How about just a briefcase of money dropped on the local economies to build datacenters in-place, the old-fashioned way, without the alien assault tech strategy?

    Cheaper, more redundant, more energy efficient (at least not overloaded). Sufficiently distributed, they could use lower-density energy generation, like solar/wind/environmental.

    Google should force manufacturers and designers to make all our power consumption more efficient, using their buying power to improve the tech. Then they should use that tech in the more economical, reliable, power efficient way. Share the wealth and power with the rest of us who are keeping them hot.
    • Data center consolidation. ESX. Good Stuff.
    • Google should convert their huge parking lots into solar 'car ports', providing multiple benefits: lots of square footage for solar power collection, somewhere shaded for employees to park their Ferraris under, and an incentive for employees to buy plug-in hybrids that can charge while they're at work all day. Excess power for data centers, less A/C runtime when cars start up (and are least efficient), and fuel savings by employees who buy plug-in hybrids.
      • Re: (Score:2, Interesting)

        Google should convert their huge parking lots into solar 'car ports' providing multiple benefits:

        Whoever designs/makes solar panels strong enough to weather car traffic will make millions. I had this idea: screw parking lots, turn the roads into solar collectors. (Not sure if anyone else has had this idea or not, but it occurred to me one time driving thru middle-of-nowhere Utah that someone, at some time, was out there with an asphalt crew, laying this freeway for years on end. I'd be interested in
    • by hey! ( 33014 )
      Well, here's a thought.

      We have technology for distributing electricity and technology for distributing data. The difference is that data can be transmitted losslessly.

    • There's been speculation that Google may begin investing in or even buying equipment manufacturers that can develop specialized gear to improve the energy efficiency of its data centers, which are already among the most efficient out there. Google sees its data center operations as a competitive advantage, which is why it uses a custom OS and in-house/commodity hardware to run their clusters. If they develop energy-efficient equipment, I'd be surprised if they share it. It's not like they've open sourced th
      • Actually, Google has released their core tech filesystem [google.com]. They might have released other tech/source, too.

        But I'm not talking about them sharing the tech IP content. I'm talking about them sharing their datacenter distribution outside Silicon Valley. The construction and operation of their datacenters. Their money and jobs. Their power demand, distributed among the wider grid. Which also would drive development of better power supplies and Internet bandwidth/resources.

        Instead of keeping it all pent up in Sil
        • Doc - Thanks for the link re the file system. I hadn't seen that.

          Google actually has data center facilities all over the place (it's hiring data center staff in nine different locations [google.com]), and is building more. They are said to be shopping for property in North Carolina, and contemplating a $1 billion facility in India [datacenterknowledge.com]. I think their center network is rapidly becoming more distributed, and given the issues in Silicon Valley, they'll be accelerating that trend.

  • About 10% (Score:3, Interesting)

    by TheRaven64 ( 641858 ) on Friday December 08, 2006 @10:14AM (#17161536) Journal
    The last statistic I read (in a paper published at this year's International Conference on Autonomic Computing, in the Power Management section) was that data centres were responsible for around 10% of the total power consumption of California. This was expected to continue to increase.

    Now, I hope, people will start to understand why Sun and Intel are focussing so hard on performance-per-watt, and not just performance.

    • Now, I hope, people will start to understand why Sun and Intel are focussing so hard on performance-per-watt, and not just performance.

      What's surprising is how long it took for power consumption of computers to be an issue. Back in 1999, no one cared about high power consumption. It wasn't until you started needing 500W power supplies in desktop PCs that people noticed, but the trend was clearly there. Even today, you still see people who want quad core processors and high-end video cards in ultra-thin n
    • by Zebra_X ( 13249 )
      "Now, I hope, people will start to understand why Sun and Intel are focussing so hard on performance-per-watt, and not just performance."

      LOL Sun and Intel!? Yeah, Intel is all about performance per watt because AMD used it as a marketing weapon, to great effect.

      Neither Sun nor Intel is responsible for performance per watt - AMD is.
  • Iran's BTU/capita is about 1/4th of the U.S., & its GDP/capita is about 1/4th of the U.S. (CIA World FactBook via "Bottomless Well" by Peter Huber & Mark Mills).

    Personal Opinion: Deregulation has put emphasis on quarterly profits, not on reserve capacity of both power plants and the grid.

    Congress only "works" an average of 3 days a week or less. Some heads in Washington need some knocks.
    • Re: (Score:2, Interesting)

      I recall a /. comment by the son of a congressman (congressional brat?), where he said that his father worked rather more than six days a week, due to the necessary reading of bills in his committee, and such.
    • Iran's BTU/capita is about 1/4th of the U.S., & its GDP/capita is about 1/4th of the U.S

      But European countries produce a lot more GDP per BTU than the US; maybe Sid Meier was right and democracy helps an economy be more efficient.
  • by simm1701 ( 835424 ) on Friday December 08, 2006 @10:20AM (#17161592)
    Given the abundance of geothermal power in Iceland (which is why aluminium ore is transported there for refinement), perhaps a few trucks of fibre need to be put in place - Reykjavik becoming the next big hub for data centers... Lots of power on tap, lots of cooling easily available (i.e. it's bloody freezing there), and the good old days of meetings in hot tubs could come back too - though obviously thermal springs rather than hot tubs....
    • Re: (Score:3, Interesting)

      by miller60 ( 554835 )
      Some data centers actually cool their facilities with air pumped in from outside their buildings [datacenterknowledge.com]. There's a study underway at Lawrence Berkeley National Laboratory looking at the use of air economizers at seven data centers that have participated in a PG&E program offering rebates for folks who do this. The study is looking at concerns that the use of outside air will introduce contaminants or excess humidity into the data center. Not for everyone, but seems to work for some folks.
    • "Considering the northerly location of Iceland, its climate is much milder than might be expected, especially in winter. The mean annual temperature for Reykjavík is 5 C, the average January temperature being -0.4 C and July 11.2 C"
      says travelnet.is
  • Moving makes sense (Score:5, Insightful)

    by hcdejong ( 561314 ) <hobbes@@@xmsnet...nl> on Friday December 08, 2006 @10:21AM (#17161598)
    Google had the right idea when they located their datacenter in Oregon, in a colder climate so they don't need as much air con power, and right next to a big hydro power plant.
    What's the point of locating your datacenter in an area with high ground prices, a history of electric power supply problems and a hot climate?
    • What's the point of locating your datacenter in an area with high ground prices, a history of electric power supply problems and a hot climate?

      Being able to get wads of bandwidth for a low fee and without mileage charges?

    • Re: (Score:3, Informative)

      by GWBasic ( 900357 )

      Silicon Valley (where I live) isn't hot. The reasons why a company would locate their data center here are numerous:

      • Many companies were started here because this is where there's lots of venture capital
      • It's easy to start a small data center close to where you live because you can just walk in and fix something.
      • There's lots of talent in this area
      • Some data centers grow organically. Remember that Google started as a bunch of computers cobbled together in an office at Stanford, which is in Silicon Valley
  • Just relocate (Score:3, Interesting)

    by gr8_phk ( 621180 ) on Friday December 08, 2006 @10:23AM (#17161622)
    Move someplace where it's cold. Northern Michigan comes to mind, or Wisconsin, Minnesota, North Dakota. These places are all close to the center of the US and costs are lower all around. If you've got a 65 W processor, it's going to take several watts to pump that heat out of a building, but if you can just pump in outside air much of the year, that's going to reduce those cooling costs a lot. Or, if you want to stay in CA and have cheap cooling all year, just move to the top of a mountain.

    It's still a good idea to reduce server power because it reduces both the operating power AND the cooling power required.
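    A sketch of why the saving counts twice, assuming mechanical cooling with an illustrative COP of 3 (a number not given in the comment):

        # Every watt a server draws eventually has to be pumped back out as heat.
        # With an assumed chiller COP of 3, each server watt costs ~0.33 W extra
        # in cooling, so trimming server power saves on both sides of the meter.
        def total_power(server_watts: float, cop: float = 3.0) -> float:
            return server_watts * (1 + 1 / cop)

        print(total_power(65))   # ~86.7 W drawn overall for a 65 W processor
        print(total_power(45))   # ~60.0 W: a 20 W CPU saving removes ~26.7 W in total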

    On another note, has anyone noticed that the language used impacts performance per watt?

    • The problem is network infrastructure. In California, it's pretty cheap to connect to the internet backbone and your latency will be slightly lower to boot. Not so in North Dakota.
      • If companies as large as Google, IBM, and Microsoft get together on this, pulling enough fibre to a new datacenter location might become affordable. If not, let the government lend a hand. Maybe there's too much at stake here to leave it to blind market forces.
      • by drasfr ( 219085 )
        I would dare to say that the problem would be more than bandwidth... You would need to find the right kind of skilled people to operate this.

        I for one would much rather live in NYC. You would have to pay me so much MORE to make me move to North Dakota. But maybe I am alone in this case?
    • by Xzzy ( 111297 )
      All of those areas can have hot summers. Not as consistently hot as California, but it's not like people are wearing parkas 365 days a year. Building a center there and hoping you can simply pipe in air from outside is going to result in some serious disappointment.

      One thing I have seen used to good effect is pond water. You still need the huge air conditioners in your server rooms but they can operate on water piped in from a body of water. I'm unclear on what the difference in costs is, but I would expec
      • I don't think anybody's expecting they can forego the cooling plant entirely. But the plant can be a bit smaller, and it won't need to run at 100% for most of the year, saving on wear and electricity.

        Water cooling is nice, but you need a fairly deep pond/lake nearby to do it well.
    • Re: (Score:2, Interesting)

      by steevc ( 54110 )
      Or pump that heat to local homes, offices etc during the winter. That would help balance out the local energy consumption.

      I read years ago that some homes in the old East Germany (GDR/DDR) were heated by the local power station. Due to a lack of thermostats they had to open the windows if it got too warm. Then Germany re-unified and they started shutting down those dirty coal power stations. I don't recall what happened after that.

      As a datacentre can probably be just about anywhere in the world where there i
      • That's not unique to East Germany or even unusual at all. Many if not most US metropolitan centers have pervasive steam heat services provided by nearby electrical/steam co-generation facilities, and many of those even provide chilled water made by enormous steam-powered chillers as well. A lot of universities are set up the same way.

        I wouldn't be surprised if major datacenter operators eventually construct small, site-local nuclear reactor facilities that can provide both electricity and cooling capacity

    • Not a bad idea. Michigan (my home state) could sure as hell use the boost to the economy. Plenty of land reasonably cheap, and plenty of network infrastructure as well at least in the lower peninsula.
  • If IBM can do it[1], I'm sure Google and Yahoo can too.

    [1] <http://www-935.ibm.com/services/us/bcrs/sites/sterling-forest.html> [ibm.com]

  • It isn't just power; how about air conditioning and quality of life? Many of us don't like the idea of living in a concrete jungle. I would take a pay cut to live in a cabin in the mountains by some fishing streams in a heartbeat. So go to central BC, with a power dam right next door, where 6 months a year the cool air is supplied by Mother Nature and land is cheap. Or perhaps northern Alberta or even northern Saskatchewan.

    • OK, you're enjoying the quality of life there by your trout stream. And then half the population of California come and live next door to you and build a huge great data centre....
      if you know of somewhere nice, it's best to keep quiet about it.
      • if you know of somewhere nice, it's best to keep quiet about it.

        Problem is, how do I earn a living? I don't want to see us cut down the forest or strip the streams/lakes empty. The area does need an alternative industry.

  • Executives at Google, which is working on its own data center in the Northwest, charged that the energy used by laptops and personal computers should be considered in the discussion, while others argued that standards needed to be set for the development of data centers.

    Laptops and personal computers (and consoles) are large draws of electrical power. While data centres consume a huge amount, being able to reduce power consumption in consumer-grade electronics would alleviate many issues as well. If we cou

    • If I were a computer hardware manufacturer, I'd commission the OLPC foundation once they got their laptops rolling to help with reducing power consumption in general electronics. The BIGGEST part of the OLPC project was to reduce power consumption; operating the laptop uses 4W peak and in sleep mode it's like a few milliwatts! When they started out it was a lot more than that; they didn't drop the CPU to something cheaper (they started with a 1.1W Geode GX), they primarily hit up the screen and used a gra
  • Whoa, what a wonderful unit! It translates to 460 GW [google.com].
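    The conversion, for anyone checking the arithmetic:

        # Average continuous power implied by the summary's 4038 billion kWh/year.
        kwh_per_year = 4038e9
        hours_per_year = 365 * 24                    # 8760
        print(kwh_per_year / hours_per_year / 1e6)   # ~460.96 -> about 461 GW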
  • ...I'd say productivity increases. videos.google.com, groups.google.com, images.google.com and gmail.com are all great ways to waste time when someone's at work... ... (I LOVE ellipses. They are SO annoying...)
  • 'What happens to national productivity when Google goes down for 72 hours?' I'm sure nobody wants to know."
    Google can't go down nationally unless multiple data centers go down at the same time, but then you've probably got a national crisis on your hands, and the least of your worries is Google.

    Heck, even in Hungary, Google has datacenter presence. The load is already distributed smartly.
  • by zappepcs ( 820751 ) on Friday December 08, 2006 @10:47AM (#17161922) Journal
    People who manage and run data centers have to think it through before making changes. Many servers that are more than a year old were not designed for energy efficiency. On top of that, they weren't designed to take advantage of natural efficiencies in telecomms data centers. Most telecomms equipment is designed to run off of -48VDC. This has the effect of reducing the number of wasteful 115VAC-to-DC conversions, along with the subsequent losses to heat that have to be removed by A/C systems. I've seen estimates that show the possibility of up to a 35% reduction in power and A/C costs simply by converting the AC power supplies in servers to DC power supplies.
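    A rough sketch of where that kind of saving comes from; the conversion efficiencies below are illustrative assumptions, not figures from the comment:

        # Watts drawn from the wall per watt delivered to the motherboard, for a
        # double-conversion AC chain vs. -48 VDC distribution. All efficiencies
        # are assumed values for 2006-era gear.
        def chain_efficiency(*stages: float) -> float:
            eff = 1.0
            for s in stages:
                eff *= s
            return eff

        ac_chain = chain_efficiency(0.90, 0.75)   # UPS ~90%, typical AC server PSU ~75%
        dc_chain = chain_efficiency(0.96, 0.92)   # rectifier plant ~96%, DC-DC stage ~92%

        print(f"AC: {1/ac_chain:.2f} W in per W out; DC: {1/dc_chain:.2f} W in per W out")
        # Roughly a quarter less power drawn under these assumptions, before
        # counting the matching reduction in heat the A/C no longer has to remove.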

    Additionally, many of the forced-air (from the floor upwards) A/C systems I've seen in data centers are not configured properly. There are vented tiles in places they shouldn't be, and not where they should be... causing hotspots and A/C problems in general.

    I see datacenters with a wide variety of rack types. This can work, but often leads to inefficient use of the A/C systems. It's expensive to change racks, if it's even possible (some vendors don't like their kit in someone else's rack), but this problem also needs to be looked at. A/C accounts for a huge energy drain in datacenters.

    Using older hardware rather than buying new hardware saves in the short term, but the savings in energy costs from buying newer, more efficient hardware are something that datacenter managers HAVE to look at if this problem is to be solved. It's not just a matter of being 'green'. It's a matter of saving money that can then be used to bolster other parts/systems of the company.

    I think that we'll see Google et al running VM clusters soon, where unused servers in the cluster are shut down till they are needed for heavier traffic. In much the same way that complex automotive engines shut off several cylinders during low-power-requirement times, servers can be shut down (sleep mode) to save power until they are needed.
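    A minimal sketch of that "cylinder deactivation" idea; the capacity numbers and names are hypothetical, not from the comment:

        import math

        # Keep just enough servers awake for current load plus headroom; sleep the rest.
        def servers_needed(load_rps: float, per_server_rps: float, headroom: float = 0.25) -> int:
            return max(1, math.ceil(load_rps * (1 + headroom) / per_server_rps))

        cluster_size = 40
        for load in (500, 4000, 12000):       # requests/sec: quiet, normal, peak (assumed)
            awake = min(cluster_size, servers_needed(load, per_server_rps=400))
            print(f"{load:>6} rps -> {awake} awake, {cluster_size - awake} sleeping")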

    These are just some of the ideas that are currently the talk of datacenter managers and the vendors who support them. Try perusing the APC website, or other datacenter vendors' websites.
  • cooling costs... (Score:4, Interesting)

    by archermadness ( 784657 ) on Friday December 08, 2006 @11:02AM (#17162148)
    I was in a seminar a couple of days ago with a data center ops manager from HP. He stated that in a 20,000 sf data center, every degree they lower the temperature of the A/C costs them $200/hr!

    Another interesting tidbit for comparison: a typical high-density rack puts out something in the neighborhood of 15 kW of heat. An average home electric oven puts out about 7-8 kW of heat. So each high-density rack is like having two ovens going full blast, 24x7.
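    Sanity-checking those comparisons with the figures quoted above (the $200/hr and 15 kW numbers are the commenter's, not independent data):

        rack_kw, oven_kw = 15, 7.5            # midpoint of the 7-8 kW oven figure
        print(rack_kw / oven_kw)              # 2.0 -- "two ovens going full blast"

        # Annual cost of lowering the setpoint by one degree at the quoted $200/hr:
        print(200 * 24 * 365)                 # $1,752,000 per degree per year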

  • Move your datacenter to Canada. You would not have to pay so much for AC. Quebec has the cheapest electricity in North America, and there is no serious tectonic fault line.
  • This URL: http://www.uic.com.au/nip58.htm [uic.com.au] shows a list of new nuclear plants being built in the US. Most of the building is happening in the Southeast. Nothing in CA.

  • by zogger ( 617870 )
    Data centers need to figure out a way to use the "waste" heat and turn it back into something useful, namely electricity. The problem is they generate a lot of heat, but it isn't hot enough, which seems screwy, but for cogeneration you want it as hot as possible. So the tech that needs to be developed (along with the obvious not generating so much waste heat through efficiency gains) is to find better ways to accumulate/move and use the low-temp stuff they do have lots of. There are some alternative energy p
  • by jhw539 ( 982431 ) on Friday December 08, 2006 @12:13PM (#17163012)
    The 500-pound gorilla in the corner is that in a typical Silicon Valley datacenter only 50-60% of the power goes to the computers, while the other half goes to the support equipment. It does not have to be this way, and things are changing. I have not yet walked into a datacenter that could not cut its total power usage by at least 25% (albeit, in some cases the design damage is done and the simple payback required to make it work would stretch to 4-5 years) (I'm looking at you, datacenters with dozens of 20-30 ton air-cooled compressors on the roof).
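    That computer-power fraction is what was later standardized as PUE (total facility power over IT power); a quick sketch of the numbers in this paragraph:

        # Overhead multiplier implied by 50-60% of power reaching the IT gear.
        for it_fraction in (0.50, 0.60):
            print(f"{it_fraction:.0%} to computers -> PUE {1 / it_fraction:.2f}")
        # 50% -> 2.00, 60% -> 1.67. The 25% total-power cut mentioned above would
        # take a PUE-2.0 site to roughly 1.5 at constant IT load.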

    On the gross kWh/yr side, the vast majority of datacenters are unable to use outside air directly for cooling. A 24 hour a day load and they can't 'open the windows' to cool it at night (with appropriate filtration and redundant humidity control lockouts of course)? Come on people! It would even improve reliability (even 70F outdoor air could hold a well configured hot aisle/cold aisle datacenter). But that doesn't help trimming peak load, to do that you have to get the airflow right.

    Efficiency in datacenters starts with just a basic understanding of airflow. You want it very hot behind the racks; you want that hot air to go directly back to your cooling unit not get recirc'd to a rack intake. And you have to have airflow controlled based on the cold aisle temperature to harvest energy savings (fan energy wastage is ridiculous in these things)(oh, and watch out for those server fans that ramp up if you push the cold aisle temp too high - not efficient to provoke a rack of those guys to start screaming).

    You have to know hot aisle / cold aisle to properly design and operate an efficient datacenter, even if that exact configuration is not applicable. Period.

    Of course, it's not "that simple," but to the design engineers it certainly should be pretty straightforward work. The information is out there and more is in the pipeline. A good start on the basics of efficient datacenters is available here [lbl.gov] (full disclosure, I was associated with producing that report, so I am not impartial) (but don't blame me for the blurry graphics - I did not create the pdf!).

    And for god's sake people, quit keeping these places at 55-60F - I'm freezing my butt off and you're making a mockery of your own 'tight humidity control' (70-90% RH at the server intakes, but a good 45% +/- 2% at the air handler return).
  • AMD promotes changing the design of data centers to increase airflow to keep the supercomputers cool."

    While AMD is using more power to carry heat away, Intel-based blade servers, on the other hand, simply use less power and so have less heat to carry away.

    http://www-03.ibm.com/systems/bladecenter/intel-based.html [ibm.com]

    The HS20 ultra low-power blade is a high-density blade server that features high-performance Intel® Xeon® dual-core processors.

    Ideal applications include: Collaboration, Citrix, clusters a
  • by Sergeant Beavis ( 558225 ) on Friday December 08, 2006 @01:11PM (#17163812) Homepage

    The problem is bad enough that PG&E is actually offering rebates of about $150 for every physical server that is virtualized. The rebates can go up to $4 MILLION for each company. Then there are the additional savings companies will see from reduced power consumption by the servers themselves and from cooling.

    More info HERE [vmware.com]
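    The scale implied by those rebate numbers (quick arithmetic on the figures above):

        rebate_per_server, cap = 150, 4_000_000
        print(cap // rebate_per_server)   # ~26,666 physical servers virtualized to hit the cap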

  • ... or maybe just collapse Jupiter. How else are you going to keep that cloud of computronium fed? Sheesh.
  • by The Mutant ( 167716 ) on Friday December 08, 2006 @02:44PM (#17165126) Homepage
    I run a rather large department for one of the Investment Banks, with users / developers / support staff dispersed between London / Amsterdam / Cairo / Milan and Rio.

    About one year ago the folks maintaining our applications infrastructure were advised by the companies responsible for the municipal grid to reduce our hardware footprint in London.

    The reason? The grid was close to if not already overloaded, and increases in consumption were to be discouraged.

    So we've been putting all new build into Central Europe, and slowly migrating existing systems over as we can.

    A strange situation all around, if you ask me.

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...