



Shortage of Electricity Drives Data Center Talks
Engineer-Poet writes "Per the San Jose Mercury News, competitors such as Google and Yahoo are meeting to discuss the issue of electricity in Silicon Valley. How much of the USA's 4038 billion kWh/year goes into data centers? Enough to make a difference. Data centers are moving out of California to spread the load and avoid a single-point-of-failure scenario. This is a serious matter; as Andrew Karsner (assistant secretary of energy efficiency and renewable energy for the Department of Energy) asked, 'What happens to national productivity when Google goes down for 72 hours?' I'm sure nobody wants to know." From the article: "Concern about electricity pricing and volatility has led Microsoft to talk with its network manufacturers about building more efficient servers. IBM and Hewlett-Packard -- which both build data centers -- want to improve efficiency at the facilities. AMD promotes changing the design of data centers to increase airflow to keep the supercomputers cool."
Nothing FP (Score:5, Funny)
Re: (Score:3)
Re: (Score:3, Insightful)
This question is entirely beside the point, though. As it is in Google's interest to stay the most popular search engine, I'm sure they have their backup mechanisms in place. I'm pretty sure they
Re: (Score:2)
Sure they do (Score:5, Insightful)
Google only needs one of three redundant data centers (one in the East, one in the West, one Mid-Central) to basically ensure they can weather any power-loss scenario. If they had three such separate centers (which I have no doubt they already have), the only way they're going to be totally offline is if the whole national grid goes down - in which case Google should be the least of your worries if you're a lawmaker.
Re: (Score:3, Interesting)
Especially if they have one in Dallas (or any large city in TX other than El Paso). The TX grid is the most independent of all electric grids, and rarely do problems traverse its boundaries.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Interesting)
So, if power goes down, they actually might be the only ones who *ARE* up and running, which is pretty fucking cool.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2, Interesting)
But that is one of the cool things about an outage; okay, I am a sysadmin, so my job is to keep everything up and running... however, during our annual plant shutdowns, I enjoy watching server utilization and network utilization d
Re: (Score:2)
Heh, for some reason this makes me think of yesterday's Dilbert:
dilbert.com [dilbert.com]
Re: (Score:2, Funny)
I think a lot of people don't know about other search engines anymore.
Re: (Score:2)
Google
AltaVista
Metacrawler
dogpile
yahoo
MSN
ASK
Try Wikipedia: yup, there's a nice list on the wiki at
http://en.wikipedia.org/wiki/Search_engine [wikipedia.org]
try something obvious like...
www.searchengines.com (nope... some kind of placeholder site)
www.searchengine.com (yup-- appears to be a search engine).
www.searchweb.com (maybe-- looks like a placeholder page but sort of looks like search page)
Look at people that rate search engines
Typed this in since it seemed logical
www.searchenginerating.com (appears to be more about getting
Productivity (Score:2)
I have an office where, if you're on one access point instead of another on the same net, certain sites, Google in particular, are inaccessible. I have to do all my searching on MSN (pity me) instead when I'm in that area, and let me tell you with absolute certainty, productivity goes way down.
Re: (Score:2)
We'd probably also get a hefty productiv
Data Center Congregation (Score:5, Informative)
Re:Data Center Congregation (Score:5, Insightful)
Re: (Score:2, Interesting)
Re: (Score:3, Interesting)
Re: (Score:3, Informative)
Re: (Score:3, Informative)
See http://en.wikipedia.org/wiki/Earth_cooling_tubes [wikipedia.org] for a more recent take on the same principle.
Re: (Score:2)
There are systems (heat pumps) which appear to operate at greater than 100% efficiency when comparing the amount cooled vs the amount of power supplied. They wo
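The "greater than 100%" bit is the coefficient of performance (COP): a heat pump moves heat rather than generating it, so the heat moved can exceed the electrical work put in without violating anything. A quick sketch with illustrative numbers (not from any specific unit):

```python
# Coefficient of performance: heat moved per unit of electrical work.
# A COP of 3 means 3 kW of heat is moved for every 1 kW of electricity,
# which looks like "300% efficiency" if you naively compare the two.
def cop(heat_moved_kw, electrical_input_kw):
    return heat_moved_kw / electrical_input_kw

# Illustrative figures for a typical air conditioner (assumed, not measured):
print(cop(10.5, 3.5))  # -> 3.0
```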
Demand Patterns Track Customers, not Cost (Score:2)
The other hot data center real estate market is Austin [datacenterknowledge.com], which has benefitte
Cool Running (Score:3, Interesting)
AMD has a website on the topic: Real Efficiency in the Data Center [amd.com]
Re: (Score:2, Informative)
Re: (Score:2)
T2000 was obsolete on launch (Score:3, Informative)
http://www.anandtech.com/printarticle.aspx?i=2727 [anandtech.com]
The CPU's performance-per-watt wasn't really that much better compared to the x86 stuff of that time.
It is now nearly 9 months later, and AMD and Intel have improved significantly. Where is the T2000 or T1 now? Look at Intel - their latest CPUs now trash AMD's by about the same margin by which AMD used to trash Intel's offerings.
As long as you skip the Intel P4 stuff, and the silly
A Modest Proposal (Score:5, Funny)
Locals and guest workers would be hired to pedal for one-hour shifts each, generating some portion of the needed power and giving a boost to the local economy. Don't think "galley" -- think "self-sustaining"!
If you'd like to use this idea, please contact me via my Slashdot account. Thanks.
Re:A Modest Proposal (Score:4, Funny)
Don't be foolish; this is America we're talking about. Call it a gym and charge admission to use those bikes.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Interesting)
Jokes aside, this seems like a brilliant way for a gym to offset at least some of its operating electrical bill, but I can't recall ever reading about a single instance of this being put into practice. When I was a kid I bought a rig to power the lights on my bike via a simple friction mechanism off one of the wheels for about £10, so I doubt cost is an issue. Is anyone aware of this being done on a larger scale, or has the idea really just not occurred to anyone?
This is done in a number of gyms. The power produced by an average person on an exercise bike isn't enormous, though, and is used to power the display on the bike itself. There may be a tiny bit left over, but not enough to be any real use.
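The parent's point checks out with rough numbers. Assuming an average rider sustains about 100 W (generous for a casual gym-goer) and generator losses eat 20% - both assumed figures - a back-of-envelope sketch:

```python
# Back-of-envelope: electrical output of a room full of exercise bikes.
# 100 W sustained per rider and 80% generator efficiency are assumptions.
RIDER_WATTS = 100
GENERATOR_EFFICIENCY = 0.8

def gym_output_kw(bikes):
    return bikes * RIDER_WATTS * GENERATOR_EFFICIENCY / 1000.0

# 20 occupied bikes yield about 1.6 kW -- roughly one hair dryer,
# against a gym's lighting and HVAC load of tens of kW.
print(gym_output_kw(20))  # -> 1.6
```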
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
You really are desperate if you need to rely on that.
Re:A Modest Proposal (Score:4, Funny)
Re: (Score:3, Insightful)
Re:A Modest Proposal (Score:4, Funny)
Re: (Score:2)
Share the Power (Score:3, Insightful)
That sounds like a perfect reason for nearly all of Google's servers to live distributed around the US, and the globe. With local operators for physical access, and global remote admins for most normal operations.
For the past year or so we've heard all kinds of wild rumors about "Google in a box": supercomputers in a shipping container for rapid deployment around the world. How about just a briefcase of money dropped on the local economies to build datacenters in place, the old-fashioned way, without the alien-assault-tech strategy?
Cheaper, more redundant, more energy efficient (at least not overloaded). Sufficiently distributed, they could use lower-density energy generation, like solar/wind/environmental.
Google should force manufacturers and designers to make all our power consumption more efficient, using their buying power to improve the tech. Then they should use that tech in the more economical, reliable, power efficient way. Share the wealth and power with the rest of us who are keeping them hot.
VMware, Baby!!! (Score:2, Informative)
Re: (Score:2)
Re: (Score:2, Interesting)
Whoever designs/makes solar panels strong enough to weather car traffic will make millions. I had this idea: screw parking lots, turn the roads into solar collectors. (Not sure if anyone else has had this idea or not, but it occurred to me one time, driving through middle-of-nowhere Utah, that someone, at some time, was out there with an asphalt crew, laying this freeway for years on end. I'd be interested in
Re: (Score:2)
We have technology for distributing electricity and technology for distributing data. The difference is that data can be transmitted losslessly.
Re: (Score:2)
Re: (Score:2)
But I'm not talking about them sharing the tech IP content. I'm talking about them sharing their datacenter distribution outside Silicon Valley. The construction and operation of their datacenters. Their money and jobs. Their power demand, distributed among the wider grid. Which also would drive development of better power supplies and Internet bandwidth/resources.
Instead of keeping it all pent up in Sil
Re: (Score:2)
Google actually has data center facilities all over the place (it's hiring data center staff in nine different locations [google.com]), and is building more. They are said to be shopping for property in North Carolina, and contemplating a $1 billion facility in India [datacenterknowledge.com]. I think their center network is rapidly becoming more distributed, and given the issues in Silicon Valley, they'll be accelerating that trend.
Re: (Score:2)
Like maybe evidence of custom apps for such a densely networked supercomputer? Especially a parallel OS/SW installer optimized for a mesh. Maybe some work by/with Sun's ancient JavaSpaces architecture group?
About 10% (Score:3, Interesting)
Now, I hope, people will start to understand why Sun and Intel are focussing so hard on performance-per-watt, and not just performance.
Re: (Score:2)
What's surprising is how long it took for power consumption of computers to be an issue. Back in 1999, no one cared about high power consumption. It wasn't until you started needing 500W power supplies in desktop PCs that people noticed, but the trend was clearly there. Even today, you still see people who want quad core processors and high-end video cards in ultra-thin n
Re: (Score:2)
LOL, Sun and Intel!? Yeah, Intel is all about performance per watt because AMD used it as a marketing weapon, to great effect.
Neither Sun nor Intel is responsible for performance per watt - AMD is.
Energy Use = Prosperity (Score:2)
Personal Opinion: Deregulation has put emphasis on quarterly profits, not on reserve capacity of both power plants and the grid.
Congress only "works" an average of 3 days a week or less. Some heads in Washington need some knocks.
Re: (Score:2, Interesting)
Re: (Score:2)
But European countries produce a lot more GDP per BTU than the US; maybe Sid Meier was right and democracy helps an economy to be more efficient.
Iceland needs a really big pipe.... (Score:5, Insightful)
Re: (Score:3, Interesting)
Re: (Score:2)
says travelnet.is
Re: (Score:2, Interesting)
That aside, latency is not really a distance issue - it's a network design issue.
If you put a big trunk of fiber (as my original comment was saying) from Iceland to NY and Iceland to London (thus making a nice redundant triangle with the current transatlantic connections) and connected it to the existing backbones sensibly, the extra distance would not really be noticed.
Hops add far more latency than distance,
Moving makes sense (Score:5, Insightful)
What's the point of locating your datacenter in an area with high ground prices, a history of electric power supply problems and a hot climate?
Re: (Score:2)
Being able to get wads of bandwidth for a low fee and without mileage charges?
Re: (Score:3, Informative)
Silicon Valley (where I live) isn't hot. The reasons why a company would locate their data center here are numerous:
Just relocate (Score:3, Interesting)
It's still a good idea to reduce server power because it reduces both the operating power AND the cooling power required.
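That double saving can be sketched with a simple multiplier. Assuming roughly 1 W of cooling energy per 1 W of IT load - a common rule of thumb for older facilities, not a measured figure:

```python
# Every watt saved at the server also saves the cooling power needed
# to remove that watt as heat. cooling_ratio is an assumed figure.
def total_savings_w(server_savings_w, cooling_ratio=1.0):
    return server_savings_w * (1 + cooling_ratio)

# Shaving 50 W per server across 1000 servers:
print(total_savings_w(50 * 1000) / 1000)  # -> 100.0 (kW total)
```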
On another note, has anyone noticed that language used impacts performance per Watt?
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I for one would much rather live in NYC. You would have to pay me so much MORE to make me move to North Dakota. But maybe I am alone in this case?
Re: (Score:2)
One thing I have seen used to good effect is pond water. You still need the huge air conditioners in your server rooms but they can operate on water piped in from a body of water. I'm unclear on what the difference in costs is, but I would expec
Re: (Score:2)
Water cooling is nice, but you need a fairly deep pond/lake nearby to do it well.
Re: (Score:2, Interesting)
Re: (Score:2, Interesting)
I read years ago that some homes in the old East Germany (GDR/DDR) were heated by the local power station. Due to a lack of thermostats, they had to open the windows if it got too warm. Then Germany reunified and they started shutting down those dirty coal power stations. I don't recall what happened after that.
As a datacentre can probably be just about anywhere in the world where there i
Re: (Score:2)
I wouldn't be surprised if major datacenter operators eventually construct small, site-local nuclear reactor facilities that can provide both electricity and cooling capacity.
Re: (Score:2)
Generate your own juice (Score:2)
If IBM can do it[1], I'm sure Google and Yahoo can too.
[1] <http://www-935.ibm.com/services/us/bcrs/sites/sterling-forest.html> [ibm.com]
Go North - Central BC and NW Alberta (Score:2)
It isn't just power; how about air conditioning and quality of life? Many of us don't like the idea of living in a concrete jungle. I would take a pay cut to live in a cabin in the mountains by some fishing streams in a heartbeat. So go to central BC, with a power dam right next door, where 6 months a year the cool air is supplied by Mother Nature and land is cheap. Or perhaps northern Alberta, or even northern Saskatchewan.
Re: (Score:2)
if you know of somewhere nice, it's best to keep quiet about it.
Re: (Score:2)
if you know of somewhere nice, it's best to keep quiet about it.
Problem is, how do I earn a living? I don't want to see us cut down the forest or rape the streams/lakes empty. The area does need an alternative industry.
More than Data Centres (Score:2)
Laptops and personal computers (and consoles) are large draws of electrical power. While data centres consume a huge amount, being able to reduce power consumption in consumer-grade electronics would alleviate many issues as well. If we cou
Re: (Score:2)
4038 billion kWh/year (Score:2)
Based on How People Use Google... (Score:2)
Clueless as usual (Score:2)
Heck, even in Hungary, Google has datacenter presence. The load is already distributed smartly.
There is a lot that can be done... but (Score:3, Interesting)
Additionally, many of the forced-air (from the floor upwards) A/C systems I've seen in data centers are not configured properly. There are vented tiles in places they shouldn't be, and not where they should be... causing hotspots and A/C problems in general.
I see datacenters with a wide variety of rack types. This can work, but often leads to inefficient use of the A/C systems. It's expensive to change racks, if it's even possible (some vendors don't like their kit in someone else's rack), but this problem also needs to be looked at. A/C accounts for a huge energy drain in datacenters.
Using older hardware rather than buying new hardware saves in the short term, but the savings in energy costs from buying newer, more efficient hardware are something that datacenter managers HAVE to look at if this problem is to be solved. It's not just a matter of being 'green'. It's a matter of saving money that can then be used to bolster other parts/systems of the company.
I think that we'll see Google et al running VM clusters soon, where unused servers in the cluster are shut down until they are needed for heavier traffic. In much the same way that complex automotive engines shut off several cylinders during low-power-requirement times, servers can be shut down (sleep mode) to save power until they are needed.
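The cylinder-deactivation idea might look something like this control loop, sketched with made-up capacity and threshold figures:

```python
# Toy controller: keep just enough servers awake to hold utilization
# in a target band; the rest sleep. Both constants are assumptions.
import math

CAPACITY_PER_SERVER = 100.0   # requests/sec one server handles (assumed)
TARGET_UTILIZATION = 0.6      # aim to run awake servers at ~60% load

def servers_needed(load_rps):
    """Minimum servers to keep awake for the current request rate."""
    effective = CAPACITY_PER_SERVER * TARGET_UTILIZATION
    return max(1, math.ceil(load_rps / effective))

print(servers_needed(450))   # light traffic  -> 8 awake, rest asleep
print(servers_needed(4500))  # heavy traffic -> 75 awake
```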
These are just some of the ideas that are currently the talk of datacenter managers and the vendors who support them. Try perusing the APC website, or other datacenter vendor's websites.
cooling costs... (Score:4, Interesting)
Another interesting tidbit for comparison: a typical high-density rack puts out something in the neighborhood of 15 kW of heat. An average home electric oven puts out about 7-8 kW of heat. So each high-density rack is like having two ovens going full blast, 24x7.
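Extending the oven analogy to dollars, assuming $0.10/kWh and a cooling plant that spends about one watt removing each watt of heat (both assumed round numbers):

```python
# Annual cost of powering and cooling one 15 kW high-density rack.
RACK_KW = 15.0
COOLING_OVERHEAD = 1.0     # assumed: 1 W of cooling per 1 W of IT load
PRICE_PER_KWH = 0.10       # assumed electricity price, USD
HOURS_PER_YEAR = 8760

total_kw = RACK_KW * (1 + COOLING_OVERHEAD)
annual_cost = total_kw * HOURS_PER_YEAR * PRICE_PER_KWH
print(round(annual_cost))  # -> 26280 (dollars per rack per year)
```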
Move your datacenters to Canada. (Score:2)
Look where building's happening... (Score:2)
low temp co-generation (Score:2, Interesting)
Take it out of the mechanical system (Score:3, Informative)
On the gross kWh/yr side, the vast majority of datacenters are unable to use outside air directly for cooling. A 24-hour-a-day load and they can't 'open the windows' to cool it at night (with appropriate filtration and redundant humidity-control lockouts, of course)? Come on, people! It would even improve reliability (even 70F outdoor air could hold a well-configured hot aisle/cold aisle datacenter). But that doesn't help trim peak load; to do that you have to get the airflow right.
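The 'open the windows' approach is usually called air-side economization, and the lockout decision the parent mentions is just a couple of comparisons. A toy version, with made-up setpoints rather than anything from a standard:

```python
# Air-side economizer sketch: use filtered outside air when it is cool
# and dry enough, else fall back to mechanical cooling.
# Both setpoints below are assumptions for illustration only.
MAX_OUTSIDE_TEMP_F = 70.0
MAX_OUTSIDE_RH = 80.0

def use_outside_air(temp_f, rh_percent):
    return temp_f <= MAX_OUTSIDE_TEMP_F and rh_percent <= MAX_OUTSIDE_RH

print(use_outside_air(55.0, 60.0))  # cool night air  -> True
print(use_outside_air(85.0, 40.0))  # hot afternoon   -> False
```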
Efficiency in datacenters starts with just a basic understanding of airflow. You want it very hot behind the racks; you want that hot air to go directly back to your cooling unit, not get recirc'd to a rack intake. And you have to have airflow controlled based on the cold aisle temperature to harvest energy savings (fan energy wastage is ridiculous in these things). (Oh, and watch out for those server fans that ramp up if you push the cold aisle temp too high - it's not efficient to provoke a rack of those guys to start screaming.)
You have to know hot aisle / cold aisle to properly design and operate an efficient datacenter, even if that exact configuration is not applicable. Period.
Of course, it's not "that simple," but to the design engineers it certainly should be pretty straightforward work. The information is out there and more is in the pipeline. A good start on the basics of efficient datacenters is available here [lbl.gov] (full disclosure, I was associated with producing that report, so I am not impartial) (but don't blame me for the blurry graphics - I did not create the pdf!).
And for god's sake people, quit keeping these places at 55-60F - I'm freezing my butt off and you're making a mockery of your own 'tight humidity control' (70-90% RH at the server intakes, but a good 45% +/- 2% at the air handler return).
MOD PARENT UP! (Score:2)
AMD power solution (Score:2)
While AMD is using more power to carry away heat, Intel-based blade servers, on the other hand, simply use less power and so have less heat to carry away.
http://www-03.ibm.com/systems/bladecenter/intel-based.html [ibm.com]
The HS20 ultra low-power blade is a high-density blade server that features high-performance Intel® Xeon® dual-core processors.
Ideal applications include: Collaboration, Citrix, clusters a
Power Rebates for Virtualization (Score:3, Informative)
The problem is bad enough that PG&E is actually offering rebates of about $150 for every physical server that is virtualized. The rebates can go up to $4 MILLION per company. Then there are the additional savings companies will see from reduced power consumption by the servers themselves and from cooling.
More info HERE [vmware.com]
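Rough numbers on why the rebate is gravy rather than the main event. The per-server draw and electricity price below are assumptions, not PG&E figures:

```python
# Consolidating physical servers onto VMs: one-time rebate vs. ongoing savings.
REBATE_PER_SERVER = 150.0   # from the parent post
SERVER_DRAW_KW = 0.4        # assumed average draw per retired server
PRICE_PER_KWH = 0.10        # assumed electricity price, USD
HOURS_PER_YEAR = 8760

def first_year_savings(servers_retired):
    rebate = servers_retired * REBATE_PER_SERVER
    energy = servers_retired * SERVER_DRAW_KW * HOURS_PER_YEAR * PRICE_PER_KWH
    return rebate, energy

rebate, energy = first_year_savings(100)
# Retiring 100 boxes: the energy savings alone dwarf the rebate,
# and that's before counting the cooling load they no longer create.
print(rebate, round(energy))
```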
We'll need more suns ... (Score:2)
It's a problem in Europe as well... (Score:3, Interesting)
About one year ago the folks maintaining our applications infrastructure were advised by the companies responsible for the municipal grid to reduce our hardware footprint in London.
The reason? The grid was close to if not already overloaded, and increases in consumption were to be discouraged.
So we've been putting all new build into Central Europe, and slowly migrating existing systems over as we can.
A strange situation all around, if you ask me.
Re: (Score:3, Funny)
Re: (Score:2)
Damn you, I had!!
Ugh.
Re: (Score:2)
Re: (Score:2)
The story is that during the second world war, the Germans tried to reduce productivity in the British Civil Service by sabotaging the Times Crossword. I think this might work along similar lines.