Photo Tour of Facebook's Open Source Datacenter

An anonymous reader writes "Robert Scoble has published a fantastic photo tour of Facebook's new open source data center. This datacenter is the most energy efficient in the world. Google and other datacenter operators have been pushing back against new efficiency requirements for a while, and an open source competitor will only make things better for the rest of us."

  • by El_Muerte_TDS ( 592157 ) on Saturday April 16, 2011 @06:20PM (#35843462) Homepage

    So, where can I download the blueprint of it?

  • by Anonymous Coward

    some hardcore great systems

  • Not quite. (Score:4, Interesting)

    by Anonymous Coward on Saturday April 16, 2011 @06:33PM (#35843538)

    No, it's not the most energy-efficient in the world. The numbers they published were only from a VERY limited timespan during the coldest part of the year when energy needed for cooling would be drastically lowered.

    If they published a full-year figure, I can guarantee you it wouldn't be nearly as good as the published one.

    • by Anonymous Coward

      Agreed. And what about being environmentally friendly? There sure are a lot of filters that would need to be replaced on a regular basis. I know this because I'm a refrigeration/air-conditioner technician. I'd like to learn more about how 'environmentally efficient' they are. Not just energy-efficient.

      • I think this would depend entirely on the materials used for the air filters and the source(s) of electricity used... I don't consider outright disposal or landfilling of paper products harmful to the environment... trees are a renewable resource.
    • You might well be right, but wouldn't that be all the more reason for more environmentally friendly designs to be made public? It would be cool to see companies out-competing each other over improved environmental designs of data centers.

      BTW, can you point to some existing data centers that are likely to out-compete it already? I would be curious to see their designs.

    • Prineville, OR is high-altitude desert, so even when it gets warm in the summer, the nights are usually cool. There is a very short timespan when it is hot both night and day in that part of the state.

  • Seems like it is becoming the next big buzzword for MBAs to throw around. "Yeah Bill, our new data and commerce center is leveraging the open source capabilities of the cloud to make sure our crowd sourced ROI brings back the best managed results we can get with today's scalability and reliability of "Echs Eight Six" platform development systems...at least that's what this whitepaper in front of me says."
    • I don't think the decision to open up the plans is buzzword compliance, though. They probably have more practical reasons like getting feedback.

      • Comment removed (Score:4, Insightful)

        by account_deleted ( 4530225 ) on Saturday April 16, 2011 @06:43PM (#35843618)
        Comment removed based on user account deletion
        • It's pretty much a classic application of the major business case for "open source" anything (the second, typically less significant, case being to assure a customer or customers that you aren't locking them in, if they are large enough, or the business competitive enough, to demand that): commodifying your complements.

          When a hardware vendor gets all "open source", that is usually a sign that they want a cheap OS and/or some flavor of middleware that can sit between their hardware and their consulting services...
        • They also have a fixed hardware platform - not so with a colocation datacenter where the operator is going to need to accommodate a wide mixture of equipment from unrelated vendors that customers bring in.

          One thing I found interesting that seems to be popular with new facilities like this one is omitting the clean agent fire suppression systems that used to be all the rage. Specifically it says:

          4.10 Fire Alarm and Protection System
          Pre-action fire sprinkler system uses nitrogen gas in lieu of compressed air...

          • So on top of destroying hardware, most of which is not replaceable under warranty (I doubt Dell's 4-hour business service accommodates water damage), they are trying to put out electrical fires with water. I think we can assume the majority of fires in a data center will be electrical.

            • by qubezz ( 520511 )
              They are pumping tons of air through the facility. Lowering O2 levels with a nitrogen or halon system would be pretty ineffective. A chemical bottle extinguisher is pictured in one photo - manually extinguishing local fires (like if an AC panel goes boom) would probably be expected. The sprinkler system is probably code-mandated to keep the whole facility from going up in flames in a disaster.
          • One thing I found interesting that seems to be popular with new facilities like this one is omitting the clean agent fire suppression systems that used to be all the rage.

            New data architectures make this possible. Facebook can lose a room full of equipment and not lose any significant data. It's probably cheaper to replace a room full of commodity servers than to maintain halon systems everywhere.

            If I recall their replication correctly, if a sprinkler system took out a room full of servers, the data layer...

    • You can surely get "carbon neutral" in there somewhere?

  • PUE tricks (Score:4, Interesting)

    by Anonymous Coward on Saturday April 16, 2011 @06:41PM (#35843612)

    Several large data center operators are trying to win this "most efficient" title and putting lots of innovation and resources into it. However, you need to be very careful when comparing the outcomes. First, notice the actual claim: "most efficient". The Facebook data center consumes water to reduce energy needs. This can be a very dangerous practice if followed on a large scale. Consider the recent annexation of a US government site in Utah in order to get priority water service. http://www.datacenterknowledge.com/archives/2011/03/09/annexation-boosts-cooling-for-nsa-data-center/

    So far, everyone attempting to lay claim to the "most efficient" title has moved things out of the cooling column, sometimes into the computer load column (most notably fans), sometimes over to water consumption. Yes, you get awarded a better efficiency if you consume more power in non-cooling areas.
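
    To make the column-shuffling concrete, here is a rough sketch of the PUE arithmetic (PUE = total facility power / IT equipment power; all numbers below are invented for illustration):

    def pue(it_load_kw, cooling_kw, other_kw):
        # PUE = total facility power / IT equipment power
        total_kw = it_load_kw + cooling_kw + other_kw
        return total_kw / it_load_kw

    # Case A: server fans billed to the cooling column.
    print(pue(it_load_kw=1000, cooling_kw=300, other_kw=100))  # 1.40

    # Case B: same 1400 kW total, but 100 kW of fans now counted as IT load.
    print(pue(it_load_kw=1100, cooling_kw=200, other_kw=100))  # ~1.27 -- "better" PUE, same energy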

    The quote that was showing when this came up is so true:
    The trouble with the rat-race is that even if you win, you're still a rat. -- Lily Tomlin

    • Water can still be used, for example to irrigate crops, after it has been used for cooling. You can use non-drinking water such as seawater for cooling, though you might want to keep that a bit further away from the computers than you would pure water, perhaps as a secondary heat exchange. And there are parts of the world with plenty of water, such as Canada, Scotland and Scandinavia.

      • Re:PUE tricks (Score:4, Informative)

        by CastrTroy ( 595695 ) on Saturday April 16, 2011 @09:12PM (#35844268)
        Yeah, living in Canada, I always laugh when they talk about water conservation. Not that we should waste it, but we have much bigger environmental problems to worry about. If there was anywhere near a shortage, it wouldn't cost only a couple of bucks for a cubic meter (1000 L). Most of the problems with water in this world are a distribution problem, not a supply problem. And water is something that is quite expensive to transport. It's not like you can dehydrate the water to bring the weight down.
        • Actually, if anybody is interested, I have a large supply of dehydrated Water Ready to Drink(WRD) packets available. These things are the perfect complement to MREs, and great in all sorts of emergency situations.
        • I live in Arizona, and still don't get the argument... if you are using well water, or are otherwise isolated, sure... but in general water evaporates, becomes clouds and rains/snows back down again... Used for cooling, like in nuclear plants, it isn't tainted, lost or otherwise removed from the planet, just evaporated. Now, split for hydrogen use, that's a slightly different story.
          • The problem is the water doesn't necessarily come down in the same place. This means whatever you are taking out of the ground is not necessarily being replaced at the same rate. This is even more true when the number of people exploiting it increases. There are plenty of cases where the level of the water table has dropped from over-exploitation.

      • by Cramer ( 69040 )

        Not the way they're doing it. Evap cooling vaporizes the water. Once it's vapor, it's hard to drink, irrigate, etc. with it. You'll have to wait for it to condense back into liquid (a la rain, snow, sleet, etc.) before it's "usable" again. As others point out, that's very rarely a closed system -- the rain comes down hundreds of miles away.

        The way many (some?) office buildings are cooled, on the other hand, does not vaporize the water. It takes water from the muni supply, runs it through a coil, and returns it...

    • Re:PUE tricks (Score:4, Insightful)

      by fuzzyfuzzyfungus ( 1223518 ) on Saturday April 16, 2011 @07:20PM (#35843784) Journal
      Depending on how local power is generated, the power consumption column may or may not be hiding a substantial amount of water use as well (the giant cooling towers billowing clouds of steam that weren't successfully recovered are not there for show, nor is the siting of many sorts of power plants next to bodies of water just because operators like paying more for picturesque real estate with flood risks). Some flavors of mining are pretty nasty about using water resources, and/or filling them with delightsome heavy metals and such, as well.

      I don't mean to detract from your eminently valid point that there are a lot of accounting shell games, in addition to actual engineering, going on when being "efficient"; but it is really necessary to decompose all the columns in order to figure out what is hiding under each of the shells (and how nasty each one is).

      Swamp coolers are a great way (in non-humid areas) to reduce A/C costs and increase clean freshwater use. Whether or not that is a better choice than using more energy strongly depends on how your friendly local energy producer is producing. Odds are that they are consuming cooling water (or, in hydropower's case, water with potential energy); but it isn't always clear how much.

      Regardless, though, what you Really, Really, Really want to avoid is situations where archaic, weak, nonsensical, and/or outright corrupt regulatory environments allow people to shove major costs under somebody else's rug. Why do they grow crops in the California desert? Because the 'market price', such as it is, of water sucked from surrounding states is virtually zero. Why are there 20-odd water-bottling operations in Florida, a state barely above sea level and with minimal water resources? Because the cost of a license to pump alarming amounts (you guys weren't using those everglades for anything, right?) of water is basically zero (unless you are a resident, of course; they face water shortages. Try incorporating next time, sucker). Similar arguments could be made that energy users in a number of locales are paying absurdly low rates for the Appalachian coal regions being turned into a lunar theme park, among other possibilities.

      Playing around with 'efficiency' numbers is a silly game; but largely harmless PR puffery. Making resource tradeoffs that are sensible simply because they allow you to shove major costs onto other people at no cost to yourself is all kinds of serious.
  • by mauriceh ( 3721 ) <mhilarius@gmai l . com> on Saturday April 16, 2011 @07:24PM (#35843798)

    This is a site in a location where most of the power is from coal-fired generators.
    Totally lacking in foresight.
    What happens when coal generation is banned in a few years?

    • Nobody is voting to ban coal power production any time in the next thirty years, due to the annoying fact that it would result in a total collapse of the United States economy.
      • Ontario recently made a decision to shut down all the coal power plants. They are phasing them out. There are better sources of electricity out there.
        • I live in the southwestern U.S. so most of the power here is hydro, solar, or nuclear... I really don't get the use of coal, which is terribly inefficient for power generation.
    • I seriously doubt anyone is going to ban coal power production!

      I'd wager a good 70% of us could go look at our energy bill right now and see that just wouldn't be possible.

      Personally, I get about 88% of my energy from coal... and that's in the California valley.

  • by the linux geek ( 799780 ) on Saturday April 16, 2011 @07:38PM (#35843860)
    I have to wonder how much power that would use if it ran on mainframes or large UNIX servers rather than unreliable and relatively slow clusters of small machines. It's strange that none of the "new generation" websites are choosing to go with bigger systems, despite the fact that they tend to do better on both raw performance and performance per watt.
    • by Anonymous Coward

      I think it's a problem of culture. With "agile" methods they're pushing code all the time and using their distributed infrastructure to test it in real time. So they deploy to 10-20 machines, and if it's OK then to 100-200, etc... Having just a few huge optimized servers would mean a broken deploy affects too many users at once, so I guess web companies are comfortable with their commodity servers.

      I work for such a web company and every attempt to introduce more reliable systems is met with mixed feelings... they've grown in a culture of commodity PCs and think everything is equally unreliable, so why spend more money on big-iron if it's going to fail at the same rate?
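
      A minimal sketch of that kind of staged rollout (the stage sizes, deploy() and the health check are all invented for illustration):

      import random

      STAGES = [20, 200, 2000]  # machines touched at each step, roughly as described above

      def deploy(batch):
          """Pretend to push the new code to a batch of machines."""
          pass

      def healthy(batch):
          """Pretend health check; in reality you'd watch error rates and latency."""
          return random.random() > 0.01

      def rollout(fleet):
          done = 0
          for size in STAGES:
              batch = fleet[done:done + size]
              deploy(batch)
              if not healthy(batch):
                  print("rolling back after %d machines" % (done + len(batch)))
                  return False
              done += len(batch)
          return True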

      • Re:Ewww, commodity (Score:4, Insightful)

        by turbidostato ( 878842 ) on Sunday April 17, 2011 @12:03AM (#35845058)

        "they've grown in a culture of commodity PCs and think everything is equally unreliable, so why spend more money on big-iron if it's going to fail at the same rate?"

        I don't think that's exactly their point.

        The point is more "why spend more money on big-iron if it's going to eventually fail anyway?" If it's going to fail eventually, you'll have to program around the failure mode, and once you properly program around system failure, why go with the more expensive equipment? Go with the cheaper one and allow it to fail more frequently, since it really doesn't matter now.
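
        In code terms, "programming around the failure mode" often just means trying another replica; a minimal sketch (fetch() and the replica list are invented stand-ins):

        def fetch(replica, key):
            """Invented stub: read `key` from one replica; may raise ConnectionError."""
            raise ConnectionError(replica)

        def read(replicas, key):
            # Try each replica in turn; a dead commodity box is expected, not exceptional.
            for replica in replicas:
                try:
                    return fetch(replica, key)
                except ConnectionError:
                    continue
            raise RuntimeError("all replicas failed for %r" % (key,))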

      • by SuperQ ( 431 ) *

        But the thing is, even with big iron you still need to plan for downtimes and maintenance. For the scale of some of the "big" sites out there you still end up having to build software that can tolerate failure. Planned or unplanned. All of the utility of the big iron goes away when you still have to plan to fail over.

    • by SuperQ ( 431 ) *

      I've done these numbers before. How about this:

      IBM Power 795 (4.0 GHz, 256 cores), 1 TB RAM - SPECint_rate 2006 = 11,200 - $2M
      AMD Opteron 6176, dual socket (2.3 GHz, 24 cores), 128 GB RAM - SPECint_rate 2006 = 400 - $8,100

      So you need about 30 AMD machines to get the same speed. That's about $250k including rack and networking. Right off the bat you're talking about 1/8 the cost of the IBM Power system.

      As for performance/watt, the AMD machines need about 600W each. A rack plus switch is probably going to need 2
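
      A quick back-of-the-envelope check of those figures (prices and SPECint_rate numbers as quoted above; everything else is rough):

      ibm_rate, ibm_cost = 11200, 2000000   # Power 795, as quoted
      amd_rate, amd_cost = 400, 8100        # dual-socket Opteron 6176, as quoted

      machines = -(-ibm_rate // amd_rate)       # ceiling division: 28 boxes for equal throughput
      print(machines, machines * amd_cost)      # 28 machines, ~$227k before rack and networking
      print(ibm_cost / (machines * amd_cost))   # ~8.8x cheaper on list price
      print(machines * 600)                     # ~16.8 kW at ~600 W per box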

      • Except that you have to deal with the garbage of running a cluster, a higher cost of licensing, and a higher rate of failure.
        • Licensing? Licensing for WHAT? Do you think Facebook is running windows on these machines? Maybe they are running Oracle Linux?

          Clearly if we are trying to save money by running commodity hardware, we are going to load up Server 2008 R2 on every box with some MSSQL. I think you need to go reread all the posts above yours...
          - rate of failure doesn't matter if it's commodity hardware
          - licensing clearly won't matter if you are using open-source software
          - clustering? Easier to use BigTables, Cassandra, Hadoop

  • by perryizgr8 ( 1370173 ) on Sunday April 17, 2011 @01:20AM (#35845474)

    with an iPhone!
    How hard would it have been to take a proper camera? The photos are almost unlookable!

    • It takes an act of God to get a proper camera into my company's DCs - I suspect it's like that at other places too.
  • Why do they have all the water jets, filters, air temperature changers and so on?

    Is it good for the servers or something?
  • Qwest came into existence through a clever deal to purchase right-of-way along the railroad track, and it's mentioned that the DC is located where it's located because of this access. Qwest is notable for being the only LD provider to not instantly cave when asked to install equipment to permit the federal government to listen in on all calls.

  • How much less power would they consume if they didn't plug in the silly blue lights on all the servers? Per machine it can't be too much but across a data centre that size, it must add up to several hundred watts. It's not like they need to whip out an epeen at a LAN day.

    • by SuperQ ( 431 ) *

      A blue LED like the ones used is probably 5-10 mW max. A not-so-bright 5 mW LED * 10k machines = 50 W used. 50 W is probably 0.001% of the power used by a cluster of that size.
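
      The same arithmetic in one line (the 5 MW facility-load figure is an assumption, just to show the scale):

      led_watts = 0.005 * 10000             # 5 mW per LED * 10k machines = 50 W
      print(led_watts / 5000000 * 100)      # 0.001% of a hypothetical 5 MW cluster load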

      • by srodden ( 949473 )

        I see what you're saying but looking at the number of lights there, I think it's more than one LED per machine.

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...