
Open Compute 'Group Hug' Board Allows Swappable CPUs In Servers

Nerval's Lobster writes "AMD, Intel, ARM: for years, their respective CPU architectures required separate sockets, separate motherboards, and in effect, separate servers. But no longer: Facebook and the Open Compute Summit have announced a common daughtercard specification that can link virtually any processor to the motherboard. AMD, Applied Micro, Intel, and Calxeda have already embraced the new board, dubbed 'Group Hug.' Hardware designs based on the technology will reportedly appear at the show. The Group Hug card will be connected via a simple x8 PCI Express connector to the main motherboard. But Frank Frankovsky, director of hardware design and supply chain operations at Facebook, also told an audience at the Summit that, while a standard has been provided, it may be some time before the real-world appearance of servers built on the technology."


  • who gives a fuck? (Score:1, Insightful)

    by hjf ( 703092 ) on Wednesday January 16, 2013 @05:05PM (#42608599) Homepage

    who gives a fuck?
    seriously. it's a stupid standard. why would I want to swap the CPU for another architecture? Why would I want ARM in a high performance server? Why would I want "easy" replacement of a CPU for another kind, when the rest of the motherboard isn't able to interface with it?

    Why should I care for a "standard" connection where pins will be outdated 2 years from now?
    Why are "high performance, low cost" servers socketed, instead of having processors soldered to the motherboard? What dies is the motherboard, not the CPU. And when the motherboard dies, the CPU is so outdated it doesn't even make sense to keep it. Why are we talking about socketed CPUs when a soldered-on one will do just fine?

    Why do we keep insisting on this new, useless, "proprietary open" standard that NO ONE will use (BTX, anyone? Wasn't it supposed to be the next great thing and solve everything?)? Why not focus, say, on a "heatsink landing" standard so I can fit ANY motherboard in a case (1U rackmount cases where the lid almost touches the processor) and have it touch the heatsink. Even make it easier to watercool if you want.

    Still trying to figure out what's the deal with all this. Still trying to figure out why racks are limited to 42U or so, instead of less dense but "taller" racks (pretty sure a custom-made datacenter like google's or facebook's could get away with it. WAIT. Google already does!)

    Really, let facebook fuck off and die already. It'll probably be dead by the time this "standard" hits the streets.

  • by Marcion ( 876801 ) on Wednesday January 16, 2013 @05:06PM (#42608605) Homepage Journal

    I can understand an organisation on the scale of Facebook wanting the ability to take advantage of bargains to buy processors in bulk and swap them out. I am not sure how widely applicable this is though.

    The cool thing about ARM is the lack of complexity, and therefore a potentially lower cost and greater energy efficiency. The daughterboard seems to go against that by adding complexity: if you swap out an ARM chip, which might be passively cooled or have a low-powered fan, for some high-end Intel server chip, you will also need to change the cooling and the PSU.

  • Re:huh? (Score:4, Insightful)

    by Bengie ( 1121981 ) on Wednesday January 16, 2013 @06:04PM (#42609503)
    It could be like what SeaMicro does and use PCIe as a kind of "network switch": everything a bunch of nodes needs, minus 10Gb NICs, cables, an actual switch, and a lot of other overhead.

    A bunch of daughter-boards that plug into a PCIe motherboard is a great idea.
