Facebook Opens Their Data Center Infrastructure 90

gnu let us know about Facebook releasing specifications for their data center infrastructure as an open hardware project. They've released detailed electrical and mechanical data for everything from the server motherboards to the data center power distribution system. Digging further reveals that the specifications are licensed under the new Open Web Foundation Agreement, which appears to be an actual open license. The breadth of data released really is quite amazing.
  • Comment removed based on user account deletion
  • Faceboook (Score:4, Interesting)

    by yeshuawatso ( 1774190 ) * on Thursday April 07, 2011 @05:33PM (#35751266) Journal

    So open to our partners we'll even give them access to the servers themselves to poke around in your personal info directly.

    On a serious note, the data center is pretty cool. Here's another source of pretty blue images that show the evaporative cooling system in better detail.

    http://www.technologyreview.com/computing/37295/?a=f [technologyreview.com]

    You might have to 'skip' a couple of HP ads, but after about 2 or 3 they get the message that you're not interested.

    • A stark contrast to Google's nigh-on paranoid stance, the point of which I never really understood. It will be very interesting to see whether Facebook's open approach ultimately results in a lower infrastructure cost than Google's traditional secrecy. In this case my money is on open. Now how can I get some of that IPO? (Not a Facebook fanboy by any means; the privacy issues are deeply disturbing.)

      The cool thing about the Google/Facebook rivalry is, it's Linux vs Linux. I guess we'll be seeing more of that.

    • While Web firms such as Google and Microsoft invest a lot in improving data-center efficiency, they keep their designs a closely guarded secret.

      Actually Google made their data center designs public years ago (quite similar to this setup, though IIRC they used containers for a modular data center and plain air cooling with AC), but not as an open hardware project.

      • Actually Google tried to keep the container-based design secret; the only trouble was that everybody else thought of the same (obvious) thing. See Sun's "black box".

      • by afidel ( 530433 )
        From what I've gathered, Google only reveals a design they are about to retire; their current and N-1 designs are never discussed, as far as I can tell.
        • by kriston ( 7886 )

          I remember the cork-board server chassis and the fire hazard they presented. Facebook pulls way, way ahead of Google with these professional specifications. I'm especially fond of the omitted video processor, too.

      • by Bug-Y2K ( 126658 )

        Google has never made their *datacenter* designs, or even their locations, public. They have shared their server design, or at least an outdated one.

        From what I've heard from ex-Googlers, they never actually deployed the container concept beyond one half of one of their many, many facilities.

  • They would stop pestering me about how I need to get back on Facebook and blather on about everything I've been doing lately.

    i mean, that's what /. is for. ffs.

    • by by (1706743) ( 1706744 ) on Thursday April 07, 2011 @05:42PM (#35751330)
      by (1706743) and 4 others like this.
      • by (1706743) and 4 others like this.

        Actually... that's a good point. We can currently alter relationships with other users on here, but it'd be interesting if we could "like" posts -- and see how that rating compared to regular moderation. You could even go so far as to say "5 friends, 10 friends of friends, 30 foes and 500 friends of foes liked this." Take it even further, and add "If you liked this comment, you may also like..." and provide a widget that lists comments/submissions liked by friends and friends of friends.
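
        As a rough illustration of the idea above, a relationship-weighted tally could be sketched in a few lines of Python; the usernames, relationship labels, and data layout here are purely hypothetical, not anything Slashdot actually implements.

        from collections import Counter

        # Hypothetical data: who liked a comment, and how each liker relates to you.
        likes = ["alice", "bob", "carol", "dave", "erin"]
        relationship = {
            "alice": "friend",
            "bob": "friend",
            "carol": "friend_of_friend",
            "dave": "foe",
            "erin": "foe_of_friend",
        }

        def like_summary(likes, relationship):
            """Tally likes per relationship class, e.g. '2 x friend, 1 x foe, ... liked this'."""
            counts = Counter(relationship.get(user, "stranger") for user in likes)
            return ", ".join(f"{n} x {kind.replace('_', ' ')}" for kind, n in counts.items()) + " liked this"

        print(like_summary(likes, relationship))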

        • by NoAkai ( 2036200 )
          I can definitely see something like this happening in the future. Hell, just recently we got the Google +1 button [slashdot.org], and the EVE Online forums recently added [eveonline.com] a "native" (i.e. not linked to Facebook) like button, not to mention all the Facebook "like" buttons strewn all over the web. To summarize, I think we will be seeing a lot more "native" like functions appear across the web, especially on forums and other sites aimed primarily at discussion. If nothing else, it may help cut down on spam and "me too" posts.
          • the EVE Online forums recently added [eveonline.com] a "native" (i.e. not linked to Facebook) like button.

            Did you see the ad for it? It made me choke on my beer. It was so much like OMG PONIES.

  • by Anonymous Coward

    Open source data centers? Can things really be that lame?

    Where's the storage? Where's the fault tolerance? Where's the monitoring? Where's the fire suppression? Where are the diesel generators? Where's the physical security?

    Where's....jeesh...there's so much more to a datacenter than just servers and racks.

    Some yahoo just got the idea to apply "open source" to something random. Damned if it makes any sense at all, but hey, it's OPEN SOURCE, so it's got to be GREAT!

    Meh. Idiots all.

    • by Daniel Phillips ( 238627 ) on Thursday April 07, 2011 @05:49PM (#35751418)

      But didn't you just demonstrate the value by listing off the issues as you perceive them? Next step is discussion of your points to see if they are/are not addressed. Congratulations on your contribution to the open development process.

      • by Anonymous Coward

        No. It's just dumb. Anyone with any smarts knows there are things such as blueprints, best practices, white papers, books, magazines, etc etc that cover this area in great detail. It's not like data center design is a big secret or has a lock-out to any interested parties.

        Point is, this is just an ignorant application of a concept.

        • So you think that just because the subject has been written about extensively a data center will just assemble itself magically in an optimal configuration?

  • by Anonymous Coward

    It's interesting, if only they didn't play the background music at twice the volume of the people talking.

  • This is the sort of stuff everybody can benefit from. I wish more companies did this. And as an Oregon resident, this is doubly good for my state. BTW, what are those large concrete security barriers doing around the Facebook data center? Is Facebook concerned about someone bombing them? Or do they serve a different purpose?

  • I wish they'd release more than just the raw data; I'd love to hear/read what & how they came to the final design. Their quasi-competitor Google has always been good with this! (Remember the Chrome zines?)
  • Is it just me, or has Facebook been slashdotted? The page has been loading the whole time I typed this.
  • by ecliptik ( 160746 ) on Thursday April 07, 2011 @06:45PM (#35751898) Homepage

    Buried in the Intel Motherboard PDF, on page 10, section 6.8, it says they're using CentOS 5.2 as the OS:

    Update from the operating system over the LAN – the OS standard is CentOS v5.2

    Also, in the chassis design it seems there are rubber pass-throughs that allow cables to run between servers above and below each other.

  • Not sure how practical a PSU optimized for 277V input is for general use, and the 450W max power is a bit tight for some Nehalem-based configurations, but overall it's pretty cool. The cold-side containment, open-frame cases, air-side economizer, and higher set points are now pretty standard design considerations. The airflow and fan optimizations were very cool, but I'm not sure how applicable they are to most datacenters with a variable demand (I imagine FB runs their servers at a constant workload with only enou
    • by Anonymous Coward

      Commercial power (large buildings) is 3-phase 480 volt. (One leg of 3-phase 480 is 277 volts.) Normally motors (such as HVAC fans) are run directly from the 277/480 for highest efficiency, and sometimes even the facility lighting will be fluorescent fixtures with 277-volt ballasts (again, highest possible efficiency). There will be on-premises transformers to knock the 480 down to 3-phase 208 volt. (One leg of 3-phase 208 is 120 volts.) This is used to power equipment (such as computers) that can not run

      • by afidel ( 530433 )
        Oh, I understand. It's just that most existing IT infrastructure is 120/240V or 208V. If you're doing a greenfield, full-on datacenter design, 277V probably makes sense, because you're potentially buying enough equipment to get rates close to the market rate for 100-240V auto-ranging power supplies and you can specify what your PDU design will be. But if you're like 90% of the market, you either have standard service off a utility panel in shared space or an existing datacenter with UPS's and PDU's specified fo
    • by thsths ( 31372 )

      > Not sure how practical a PSU optimized for 277V input is for general use, and the 450W max power is a bit tight for some Nehalem-based configurations, but overall it's pretty cool.

      277V is perfectly fine. It gives about 400V DC, which keeps a decent safety margin below the 600V absolute maximum rating of standard MOSFETs. And it should require only very minor changes from the standard 110-240V power supply.

      450W is a decent amount, too, assuming you can actually load the PSU with 450W (and it does no
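
      To make the voltage arithmetic in this subthread concrete, here is a minimal Python sketch of the standard three-phase and rectification relationships; these are textbook formulas, not figures taken from the Open Compute spec.

      import math

      def line_to_neutral(line_to_line_v):
          """One leg of a three-phase wye system: V_LL / sqrt(3)."""
          return line_to_line_v / math.sqrt(3)

      def rectified_dc_bus(ac_rms_v):
          """Approximate DC bus after rectifying a single AC phase: V_rms * sqrt(2)."""
          return ac_rms_v * math.sqrt(2)

      print(line_to_neutral(480))   # ~277 V, one leg of 480 V three-phase
      print(line_to_neutral(208))   # ~120 V, one leg of 208 V three-phase
      print(rectified_dc_bus(277))  # ~392 V DC bus, comfortably below a 600 V MOSFET rating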

  • The lights are even powered by Power-Over-Ethernet. Slick. Anyone know who supplies these?

    From http://opencompute.org/specs/Open_Compute_Project_Data_Center_v1.0.pdf :

    4.11 LED Lighting Systems
    Energy-efficient LED lighting is used throughout the data center interior.
    Innovative power over Ethernet LED lighting system.
    Each fixture has an occupancy sensor with local manual override.
    Programmable alerts via flashing LEDs.

    • IIRC: http://www.redwoodsystems.com/products [redwoodsystems.com]

    • by Junta ( 36770 )

      I don't see how POE is inherently 'efficient' *if* it's a power-only connection. I can see it as a convenience if you have gobs of ethernet ports and you don't want to run separate cable, but otherwise I'd think a simpler circuit would do the job as well or better.
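
      As a rough sanity check on the PoE question, here is a small Python sketch comparing hypothetical LED fixture loads against the standard IEEE PoE power budgets; the class limits are the usual published figures, while the fixture wattages are made up for illustration.

      # Power available to a powered device (PD) under the two common PoE standards.
      POE_BUDGET_W = {
          "802.3af": 12.95,  # ~15.4 W at the switch port, ~12.95 W at the device
          "802.3at": 25.50,  # PoE+: ~30 W at the switch port, ~25.5 W at the device
      }

      def fits_poe(fixture_watts, standard):
          """True if a single LED fixture can be driven from one PoE port of the given standard."""
          return fixture_watts <= POE_BUDGET_W[standard]

      # Hypothetical fixture loads for illustration.
      for watts in (10, 20, 35):
          print(watts, "W:",
                "fits 802.3af" if fits_poe(watts, "802.3af") else "exceeds 802.3af",
                "/",
                "fits 802.3at" if fits_poe(watts, "802.3at") else "exceeds 802.3at")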

  • by Anonymous Coward

    Your title reads: "Facebook Opens Their Data Center Infrastructure 35"

    Since the word Facebook is singular, it requires singular verbs and singular possessive adjectives. You have the singular verb but a plural possessive adjective.

    The title should read: "Facebook Opens ITS Data Center Infrastructure 35"

  • What are they using for switching infrastructure? How are they handling incoming web load distribution?
  • gnu let us know about...

    Are they referring to the actual GNU organization, or just some random /. user with the username "gnu"?

  • by Junta ( 36770 ) on Thursday April 07, 2011 @10:06PM (#35753290)

    Looking over the site, it's mostly warm fuzzies (look how green we are) and the obvious (the system board specs are mostly bog-standard reference designs). The chassis aren't particularly dense, nor do they make especially efficient use of the airflow, and no system vendor can ship implementations of this without running afoul of FCC regulations. There seems to be a lot of thought centered around a tech doing in-depth failure analysis of a failed board in person, when even base boards come with IPMI implementations that allow all of that to be done remotely. ROL is frankly a horribly dumb idea when you have IPMI capability, with acknowledgement and security, in nearly every server board. I know I'll get hit with people saying that IPMI costs extra, but the essentially free variants are sufficient to remove the RS232 connector and compete with 'ROL'. The free variants also tend to be flaky and sometimes need static ARP tables, but so does WOL (in effect).
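
    For comparison with the 'ROL' approach criticized above, here is a minimal sketch of remote power control and failure analysis over IPMI using ipmitool; the BMC address, credentials, and the Python subprocess wrapper are assumptions for illustration, not part of the Open Compute spec.

    import subprocess

    # Hypothetical BMC address and credentials.
    BMC = ["ipmitool", "-I", "lanplus", "-H", "10.0.0.42", "-U", "admin", "-P", "secret"]

    def bmc(*args):
        """Run an ipmitool command against the server's BMC and return its output."""
        return subprocess.run(BMC + list(args), capture_output=True, text=True, check=True).stdout

    print(bmc("chassis", "power", "status"))  # remote power state, no crash cart needed
    print(bmc("sel", "list"))                 # hardware event log for remote failure analysis
    # bmc("sol", "activate")                  # serial-over-LAN console (interactive; run ipmitool directly for this)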

  • by melted ( 227442 ) on Thursday April 07, 2011 @10:22PM (#35753370) Homepage

    Google perceives its datacenter know-how as its major strength. This sort of removes a bit of that strength.

  • I'm looking forward to hearing how this fits into MZ's world domination plans.
    • by Junta ( 36770 )

      Easy, it reads less like a prescriptive how-to and more like a blend of public-facing fluff about being green and a requirements document for Tyan, SuperMicro, Asus, and any other board vendor they might not have thought to explicitly include in their procurement process before. There isn't particularly much that is immediately actionable for datacenter builders.

      • So the motive (and I'm not arguing, just curious) is to try to get some cred with the open source crowd (with whom they probably know they have a PR problem) without actually giving anything useful away?
        • by Junta ( 36770 )

          I would have expected more in-depth technical material (e.g. the expensive part of designing a system, which Facebook certainly outsourced) if it were a 'legitimate' open hardware project.

          They may genuinely think they did something fancy, though, I admit. Many customers don't go this in-depth on their requirements or mechanical designs, but these documents barely scratch the surface of the complexity of actually building any of the components. Of course, that's the case with most 'open hardware' involving complex things, y

  • So, nothing about the LAN / SAN?

  • by kriston ( 7886 ) on Friday April 08, 2011 @12:03AM (#35753940) Homepage Journal

    Finally, it's so refreshing to see a server system specification that does not call for a video system, does not have onboard video, and properly directs console output to a serial port.

    I've been disgusted with all the VGA crash carts, PS/2 keyboards and mice in server rooms, and all those video processors eating up system memory on servers. Servers should not have video.

    Think of all the carbon dioxide and excess energy consumed by all the idle on-board video processors on most x86 and x64 servers out there. I shudder to think of all the planet's resources being wasted displaying a graphical user interface that nobody will ever see, and, worse, reserving precious memory, which should be serving users, to hold a useless frame buffer.

    Have you ever smirked at a Linux server machine that is still running X and six virtual consoles? It's really exciting that someone is honestly taking server hardware design seriously again, just like Sun, HP, DEC, SGI, IBM, and others did in the 1980s and 1990s before all these x86 servers came about.

    Bravo, Facebook, on a job well done.

    • by mariushm ( 1022195 ) on Friday April 08, 2011 @04:13AM (#35754944)

      Server video cards embedded on the motherboard don't use the system RAM; they have their own 8 to 128 MB memory chip. Sure, they keep a tiny frame buffer in system RAM, but plenty of other things use more system memory than that frame buffer.

      As for power usage, such a plain VGA video card embedded on the motherboard uses a couple of watts at idle - the chip doesn't even need a heatsink - so removing it isn't really much of a power-saving feature.

      You would save much more power by using a high-efficiency power supply with a wattage close to the actual server usage, instead of using (optionally redundant) 500-800 watt server power supplies.

      Seriously, complaining about a few watts... some 1U servers have at least 4 x 40 mm high-speed fans inside, each using 2-5 watts of power (because they run at max speed all the time), and you're complaining about a couple of watts on a video card.
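
      To put the "few watts" argument in perspective, here is a back-of-the-envelope Python sketch scaling idle video power against fan power across a fleet; the wattages and server count are assumptions for illustration, not measurements.

      # Rough annual energy for always-idle onboard VGA chips versus 1U chassis fans
      # across a hypothetical fleet. All figures are assumptions for illustration.
      SERVERS = 10_000
      HOURS_PER_YEAR = 8760

      vga_idle_w = 2      # a plain onboard VGA chip idling
      fans_w = 4 * 3      # four 40 mm fans at ~3 W each

      def fleet_kwh(watts_per_server):
          """Annual energy in kWh for one component replicated across the whole fleet."""
          return watts_per_server * SERVERS * HOURS_PER_YEAR / 1000

      print(f"VGA chips:    {fleet_kwh(vga_idle_w):>12,.0f} kWh/year")
      print(f"Chassis fans: {fleet_kwh(fans_w):>12,.0f} kWh/year")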

    • by Junta ( 36770 )

      They all take hardware design seriously, but if they put out a system without some sort of video, sadly, 98% of their customers won't buy it because they aren't confident in it.

      MS users rarely ever know that MS systems can be managed via serial, and even those that do know there is a high chance some third-party software won't be manageable that way. Employers certainly know a random tech off the street will need video on MS to get by.

      Even amongst Linux users who likely would be using nothing more than a text interface, there are

      • by tlhIngan ( 30335 )

        Even amongst Linux users who likely would be using nothing more than a text interface, there are serious issues. For one, Linux implements *no* method for the system firmware to describe serial output. So you can't put in arbitrary linux boot media without first tweaking the kernel command line. There exists a specification for firmware to communicate this data, but it's considered IP of Microsoft and forbidden to Linux.

        Or Linux just defines its own method. Several BIOSes can redirect output to serial (ther

        • by Junta ( 36770 )

          But if you want your server vendor to be replaceable, suddenly you have another system where the port you need is now ttyS1 when you had been using ttyS0. Or the BMC can only do 57600 whereas you have been doing 115200.

          In short, yes, there are several firmwares with which all of this can work. It would be a *lot* better if board designers had a way of automatically describing the serial console capabilities to the kernel, so that the serial console would keep working after the kernel tears down firmware handling of I/O.
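
          As a small illustration of the portability problem described above, here is a Python sketch that reports which serial console (device and baud rate) a running Linux system was booted with; the parsing is generic and not tied to any particular vendor's firmware.

          import re

          def serial_consoles(cmdline_path="/proc/cmdline"):
              """Return (device, baud) pairs from console=ttyS<N>,<speed> kernel arguments."""
              with open(cmdline_path) as f:
                  cmdline = f.read()
              return re.findall(r"console=(ttyS\d+),?(\d*)", cmdline)

          # e.g. [('ttyS0', '115200')] on one vendor's board, [('ttyS1', '57600')] on another --
          # exactly the mismatch that breaks a "drop in any boot media" workflow.
          print(serial_consoles())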

    • Comment removed based on user account deletion
      • by kriston ( 7886 )

        I had not experienced that. As you know, a line break would drop you to the PROM monitor, at which point I just type "continue" or "go", hit Enter a few times, and we're all set. If I really somehow messed up the TTY, it's always recoverable: just turn the TTY off and back on. Or maybe you just type CTRL-Q to release the paused output (which is likely what happened in your case).

        In large installations I use a serial port concentrator. I sign into the concentrator and choose the system to log into. Alterna

  • What else can you do to get techies into your data-mining database? "We support open source, our datacenter is omg huge, and let's forget all privacy issues."
  • OK. Given 45 minutes I can come up with at least 3 improvements.
    Given a week? I could have made the case frame a pop-in, pop-out, no-wires affair...

    Given 1/10 of the time these loudmouths spent, I would have 4-up platters on bakery racks.

    One easy hint: you are ordering at least 1000 motherboards, so have the power supply connector at a 90-degree angle so that it mounts to the side of the motherboard area and holds the motherboard in. No screws. And for god's sake... put som
