Open Compute 'Group Hug' Board Allows Swappable CPUs In Servers

Nerval's Lobster writes "AMD, Intel, ARM: for years, their respective CPU architectures required separate sockets, separate motherboards, and in effect, separate servers. But no longer: Facebook and the Open Compute Summit have announced a common daughtercard specification that can link virtually any processor to the motherboard. AMD, Applied Micro, Intel, and Calxeda have already embraced the new board, dubbed 'Group Hug.' Hardware designs based on the technology will reportedly appear at the show. The Group Hug card will be connected via a simple x8 PCI Express connector to the main motherboard. But Frank Frankovsky, director of hardware design and supply chain operations at Facebook, also told an audience at the Summit that, while a standard has been provided, it may be some time before the real-world appearance of servers built on the technology."
  • by Moses48 ( 1849872 ) on Wednesday January 16, 2013 @04:00PM (#42608543)

    I don't get it. Are we redesigning the whole computer architecture so we have a different group speccing out the north bridge (etc.) that all CPU manufacturers will use? Or are they just adding an additional CPU into the current architecture the same way we do with graphics cards, and then offloading work onto it, like people have been doing for a while with GPGPUs?

    • Re:huh? (Score:4, Insightful)

      by Bengie ( 1121981 ) on Wednesday January 16, 2013 @05:04PM (#42609503)
      It could be like what Seamicro does and use PCIe and a kind of "network switch", minus 10Gb NICs, cables, an actual switch, etc. Everything a bunch of nodes needs minus a lot of overhead.

      A bunch of daughter-boards that plug into a PCIe motherboard is a great idea.
  • I'm having trouble finding technical information about this design, and I'm curious how much of the motherboard logic has to move onto this daughterboard. For example, is memory still on the main board? If so, an x8 PCIe channel doesn't seem adequate.

    • I would assume CPU and memory on the daughter board. PSU and other shit on the motherboard.

      If the PSU is on the daughter board, there's nothing really left to go on the motherboard, so it would be completely pointless. An Intel server motherboard is nothing but power supply, CPU sockets, RAM slots, PCI-e slots and some GbE controllers.

      • by Junta ( 36770 ) on Wednesday January 16, 2013 @04:54PM (#42609351)
        The GbE controller may or may not be part of an IO hub which would provide USB and SATA. Also there is likely to be a video device (though ARM has video as SoC as a matter of course, Intel and AMD server chips do not have Video on package... yet....).

        In a server, there is usually some service processor so that software bugs don't require a physical visit to regain capacity. In terms of manageability, I'd expect some I2C connectivity (the relation between fan and processor can get very interesting, actually). Intel processors speak PECI exclusively nowadays; I wouldn't be surprised if the standard basically forces a thermal control mechanism to terminate PECI on the daughtercard and speak I2C to the fan management subsystem. This is probably the greatest lost opportunity for energy savings; a holistic system can do some pretty interesting things knowing what the fans are capable of and what sort of baffling is in place.

        Also, the daughtercard undoubtedly brings the firmware with it.

        All in all, the daughtercard is going to be the motherboard and not much changes. Maybe you get to reuse SATA chips, gigabit, USB and 'on-board' video chips for some cost savings on a mass upgrade, but those parts are pretty cheap and even they get dated. Video, USB and gigabit might not matter for the internet datacenter of today and several tomorrows to come, but the SATA throughput is actually significant for data mining.
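
As an aside on the thermal-management point above: a minimal, purely hypothetical sketch of a daughtercard-side controller that turns a processor temperature reading (PECI on current Intel parts) into a duty cycle written to an I2C fan controller. The addresses, registers, and thresholds are invented; nothing here comes from the Open Compute spec.

```python
# Hypothetical sketch of the bridging described above: a controller on the
# daughtercard reads the processor temperature and pushes a fan duty cycle
# to an I2C-attached fan manager. All values below are invented.

FAN_CTRL_I2C_ADDR = 0x2F   # assumed fan-controller address on the chassis I2C bus
PWM_REGISTER = 0x30        # assumed duty-cycle register

def read_cpu_temp_c() -> float:
    """Stand-in for a PECI (or on-die sensor) temperature read."""
    raise NotImplementedError("platform-specific")

def i2c_write(addr: int, reg: int, value: int) -> None:
    """Stand-in for an SMBus/I2C write to the fan management subsystem."""
    raise NotImplementedError("platform-specific")

def temp_to_duty(temp_c: float) -> int:
    """Map temperature to a 0-255 PWM duty cycle with a simple linear ramp."""
    low, high = 40.0, 85.0                      # ramp start / full-speed points (illustrative)
    frac = min(max((temp_c - low) / (high - low), 0.0), 1.0)
    return int(64 + frac * (255 - 64))          # never drop below ~25% duty

def update_fan() -> None:
    """One control-loop step: read temperature, write the new duty cycle."""
    i2c_write(FAN_CTRL_I2C_ADDR, PWM_REGISTER, temp_to_duty(read_cpu_temp_c()))
```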
        • though ARM has video as SoC as a matter of course, Intel and AMD server chips do not have Video on package... yet....

          Then what are these AMD APU's [amd.com]?
          • though ARM has video as SoC as a matter of course, Intel and AMD server chips do not have Video on package... yet....

            Then what are these AMD APU's [amd.com]?

            APUs are not server chips. And the server ARMs don't have video either.

        • by Lennie ( 16154 )

          Getting closer: the daughterboard will have an SoC. ARM vendors have been doing SoCs for a long time. Remember there are Intel Atom SoCs too. AMD, I don't know.

    • The only thing that makes sense is that everything is on the daughterboard and the "motherboard" is basically a passive PCIe backplane (with very little bandwidth). This kind of architecture has been used in telco and embedded systems for decades.

      • Sounds like they've reinvented the blade.

        • Sounds like they've reinvented the S-100 bus. Or maybe the VME bus.

        • by Pikoro ( 844299 )

          No, it sounds like they reinvented the Panda compass connector backplane. I used to have a Panda Archistrat 4s that you could slide a PPro or DEC Alpha into, with plans to add SPARC support as well.

  • who gives a fuck? (Score:1, Insightful)

    by hjf ( 703092 )

    who gives a fuck?
    seriously. it's a stupid standard. why would I want to swap the CPU for another architecture? Why would I want ARM in a high performance server? Why would I want "easy" replacement of a CPU for another kind, when the rest of the motherboard isn't able to interface with it?

    Why should I care for a "standard" connection where pins will be outdated 2 years from now?
    Why are "high performance, low cost" servers socketed, instead of processors soldered to a motherboard? What dies is the motherboar

    • Indeed. Why not stick an Itanium on it too?

    • If this is just a way to swap out architectures in a box, that seems pretty useless to me. On the other hand, if what they are providing is something akin to Sun boxes' hot-swap, that could be useful for uptime. On top of that, if it can just add additional CPUs to scale up a system before doing a scale-out, that would be very beneficial to database managers. (Facebook is supporting this, and that is one of their main logistical problems.)

      The article is sparse on details though, so someone that knows what this

      • by hjf ( 703092 )

        Who needs hotswap when you have a GRID!

        Really, in these "highly distributed" systems, the price of redundancy is much lower than custom hardware.

        A while ago I had to put together this server: http://i.imgur.com/iII52.jpg [imgur.com]
        That's an IBM x3450 or 3650, I don't remember. The specs are pretty much "meh". It's a 20kg box with a HUGE motherboard in an oversized case, with three 120mm fans (with the ability to add three more on "standby"). It has a socketed CPU, and to the left of it you can see a black cap - that's where

    • I think the 42U rack height comes from the standard 7-foot door/elevator height. Some datacenters definitely have custom tall racks already, and I wouldn't be surprised to see more of that in the future.
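
For reference, the arithmetic behind that rule of thumb (1U = 1.75 inches per EIA-310; the frame allowance below is a rough assumption):

```python
# Back-of-envelope: does a 42U rack fit through a standard 7-foot doorway?
RACK_UNIT_IN = 1.75                 # 1U = 1.75 inches (EIA-310)
usable_in = 42 * RACK_UNIT_IN       # 73.5 inches of mounting space
frame_overhead_in = 6               # rough allowance for top/bottom frame (assumed)
total_ft = (usable_in + frame_overhead_in) / 12
print(f"~{total_ft:.1f} ft tall vs. a 7 ft door")   # ~6.6 ft, so it clears the opening
```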

    • by Anonymous Coward

      seriously. it's a stupid standard. why would I want to swap the CPU for another architecture?

      Are you trying to tell me that your servers can't run multiple instruction set binaries at the same time? You're not installing generic executables that can be run on Intel, AMD, ARM, PPC, Itanium (thanks for the tip, Marcion), etc, etc all at the same time? What kind of wallflower system admin are you? /sarcasm

      BTW, if Microsoft supported this technology you could upgrade your Surface RT to a Surface Pro in a snap! /more sarcasm

    • What dies is the motherboard, not the CPU. When the motherboard dies, the CPU is so outdated it doesn't even make sense to keep it.

      This is aimed at companies like Facebook, Google, Amazon, and the like. When managing thousands of servers, any number of components will die on a fairly regular basis. Some will die within a few weeks of going online. When you have 200k servers and a component with a 1% infant mortality rate, having the ability to quickly and easily change the component is a blessing. You can do this with just about all of the components in a server except the processor (relatively speaking).

      As for why would you
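
A quick back-of-envelope for the scale argument above, using the 200k-server and 1% figures from the comment; the burn-in window is an assumed value for illustration:

```python
# Rough failure math for a large fleet (figures from the comment above;
# the burn-in window is an assumption, not a measured value).
servers = 200_000
infant_mortality = 0.01           # 1% of a given component fails early
burn_in_days = 90                 # assumed early-failure window

early_failures = servers * infant_mortality
per_day = early_failures / burn_in_days
print(f"{early_failures:.0f} early failures, ~{per_day:.0f} swaps per day during burn-in")
# => 2000 failures, roughly 22 swaps every day for that one component alone
```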

      • This is aimed at companies like Facebook, Google, Amazon, and the like. When managing thousands of servers, any number of components will die on a fairly regular basis. Some will die within a few weeks of going online. When you have 200k servers and a component with a 1% infant mortality rate, having the ability to quickly and easily change the component is a blessing.

        That's why companies that don't build their own specialized hardware (like Google) use these [wikipedia.org], where the whole "server" is a single replaceable component.

      • by TheLink ( 130905 )

        having the ability to quickly and easily change the component is a blessing.

        AFAIK companies like Google and Facebook don't change components.

        They don't even swap out individual machines; they swap out entire racks when enough of the machines in the racks are faulty.

        When you have, say, 1+ million servers and only 30,000 employees total, of whom probably only 300-3,000 are involved in hardware problems, you can't really inspect and fix each machine.

        Those who might be interested could be companies in cheaper countries (where labour costs are much lower) who can refurbish the compute

    • by pla ( 258480 )
      why would I want to swap the CPU for another architecture? Why would I want ARM in a high performance server?

      Because your "high performance" server most likely has a very low demand for 90% of the day, serving up one or two requests a minute - Then needs to handle a few thousand requests per second at peak times.

      Idling the horsepower without taking the server offline looks very attractive to most datacenters. Current load-balancing farms can do this to some degree, but you can have a several-minute lag
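
A toy model of the trade-off pla describes: park the big CPU outside peak hours and serve the quiet 90% of the day on a low-power part. All wattages and duty cycles here are assumptions, not measurements:

```python
# Toy energy model for the comment above. Every figure is an assumption.
peak_hours = 2.4                     # ~10% of the day at high load
idle_hours = 24 - peak_hours

big_cpu_w = 130                      # high-performance server CPU
small_cpu_w = 13                     # low-power (e.g. ARM-class) part

always_big_kwh = big_cpu_w * 24 / 1000
mixed_kwh = (big_cpu_w * peak_hours + small_cpu_w * idle_hours) / 1000

print(f"big CPU 24/7: {always_big_kwh:.2f} kWh/day")
print(f"mixed:        {mixed_kwh:.2f} kWh/day "
      f"({(1 - mixed_kwh / always_big_kwh) * 100:.0f}% less)")
# Under these assumptions the mixed setup uses roughly 80% less CPU energy.
```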
  • by Marcion ( 876801 ) on Wednesday January 16, 2013 @04:06PM (#42608605) Homepage Journal

    I can understand an organisation on the scale of Facebook wanting the ability to take advantage of bargains to buy processors in bulk and swap them out. I am not sure how widely applicable this is though.

    The cool thing about ARM is the lack of complexity and therefore a potentially lower cost and greater energy efficiency. The daughterboard seems to go against that by adding complexity: if you swap out an ARM chip, which might be passively cooled or have a low-powered fan, for some high-end Intel server chip, you will also need to change the cooling and PSU.

    • So don't swap the CPU with a single ARM CPU.

      The PSU and motherboard power delivery can handle a 130W CPU? Stick ten 13W ARM CPUs on the daughterboard.

      • ... and watch as they get utterly annihilated in basically every spec by the 130W CPU.

        There's a reason people don't do that, and it's not simply scaling problems.
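
Whether ten 13W chips beat one 130W chip hinges entirely on per-chip throughput and how well the workload scales; a rough sketch with made-up performance numbers to show how the answer flips:

```python
# Toy perf-per-watt comparison for the thread above. Per-chip throughput
# numbers are made up; the point is that the answer flips with that assumption.
BIG_CHIP = (1, 130, 100.0)            # (chips, watts each, throughput each, arbitrary units)

def summarize(label, chips, watts_each, tp_each):
    total_w, total_tp = chips * watts_each, chips * tp_each
    print(f"{label}: {total_tp:.0f} work at {total_w} W -> {total_tp / total_w:.2f} work/W")

summarize("1 x 130W", *BIG_CHIP)
summarize("10 x 13W (each ~12% of big chip)", 10, 13, 12.0)   # small chips win on work/W
summarize("10 x 13W (each ~5% of big chip)", 10, 13, 5.0)     # big chip wins outright
# Aggregate throughput also says nothing about single-threaded latency, where
# the big core wins regardless, which is the objection in the reply above.
```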

  • Why? (Score:5, Informative)

    by hawguy ( 1600213 ) on Wednesday January 16, 2013 @04:11PM (#42608691)

    Maybe this would be useful in some HPC environments where applications can be written to maximize the use of CPU cache, but the bandwidth of a PCIe x8 or x16 slot is a fraction of what is available to a socketed CPU.

    A Core i7 has been clocked at 37GB/sec [anandtech.com] of memory bandwidth, while PCIe x8 is good for about 1.6GB/sec and x16 for about 3.2GB/sec.

    Is replacing the CPU socket with a PCIe card really worth giving up 90% of the memory bandwidth? I've never upgraded a CPU on a motherboard even when new-generation CPUs are backwards compatible with the old motherboard, since if I'm going to buy an expensive new CPU, I may as well spend the extra $75 and get a new motherboard to go along with it.

    Likewise, by the time I'm ready to retire a 3 or 4 year old server in the datacenter, it's going to take more than a CPU upgrade to make it worth keeping.
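
A sanity check on the numbers above. The usable-per-lane rate is an assumption (about 200 MB/s, roughly a PCIe 1.x lane after protocol overhead); the 37 GB/s memory figure is the one hawguy cites:

```python
# Reproducing the rough bandwidth gap described above. The per-lane rate is an
# assumed effective figure; the memory bandwidth is the cited Core i7 number.
usable_per_lane_gbs = 0.2            # GB/s per lane, assumed effective rate
mem_bw_gbs = 37.0                    # Core i7 memory bandwidth cited above

for lanes in (8, 16):
    link_gbs = lanes * usable_per_lane_gbs
    print(f"x{lanes}: {link_gbs:.1f} GB/s, "
          f"{(1 - link_gbs / mem_bw_gbs) * 100:.0f}% less than local memory")
# x8 comes out around 1.6 GB/s, over 90% below the socketed CPU's memory
# bandwidth, which is why memory almost certainly stays on the daughtercard.
```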

    • by Junta ( 36770 )
      <quote>Maybe this would be useful in some HPC environments where applications can be written to maximize the use of CPU cache,,</quote>

      Actually, it would be murder on HPC applications, which generally rely on quality inter-node communication to acheive high scale.
      • by hawguy ( 1600213 )

        <quote>Maybe this would be useful in some HPC environments where applications can be written to maximize the use of CPU cache,,</quote>

        Actually, it would be murder on HPC applications, which generally rely on quality inter-node communication to acheive high scale.

        Not all HPC applications are the same and don't always require fast interconnects. Seti@home is one example (though maybe not a good example of an app that would run well on a computer with CPU's that rely on a "slow" interconnect to main memory).

        • by Junta ( 36770 )
          Even an embarrassingly parallel workload like you'd find in World Community Grid won't fit in CPU cache. Short of that, this architecture would bone you horribly *if* memory were on the long side of the PCIe transaction, which there's no way that it would be...
    • by Bengie ( 1121981 )
      The linked article is just talking about a common interface to access additional resources like CPU cores and memory via PCIe. There is no reason why you can't treat a daughterboard more like a co-CPU, like how a GPU is used to offload work.
  • I think I experienced something like this almost two decades ago. It didn't pan out. No one wanted to be limited to the least common denominator.

    Anyway, the little detail I see in the two linked articles is that it's simply a standardized socket for a CPU mezzanine card that will mount to a motherboard that is little more than an x8 PCI Express bus with power and some other peripheral devices/interfaces installed.

    • by Blrfl ( 46596 )

      My thoughts exactly. We saw this 20 years ago with the CPU and bus controller on an ISA card and the peripherals out on the bus.

      I don't think this is a bad thing, though. It will encourage re-use of things we'd otherwise throw on the scrap heap because they happened to be soldered onto the same motherboard with all of the CPU- and bus-specific hardware. This will reduce upgrade costs, and I'd much rather see a box of small cards become obsolete than an entire rack full of servers.

  • A PCIe x8 slot is pathetically slow compared to the memory channels used by CPUs today. These CPUs are going to have to be used like GPUs, sent specific workloads on specific datasets to be useful. Any kind of non-cached memory access is going to cause major thread stalls and probably kill any performance benefits.

    A general purpose compute card is probably useful in cases where GPUs aren't a good fit but you want more cores per RU than you can normally get, but I see this as a niche application for the foreseeable future.
    • by Junta ( 36770 )
      They can't *possibly* mean that memory would go over that. They have to mean a CPU+Memory daughtercard. As limited already as the concept is without that assumption, the system would be unusably bad if memory transactions went over such a connection.

      In terms of server design, none of those names are particularly reputable. One might have *assumed* Intel would do decent system design, but anyone who has touched an Intel designed server knows they aren't particularly good at whole server design.
      • by jandrese ( 485 )
        That's not a "hot slot processor" anymore, that's a blade server. We already have blade servers, lots of them. The ATCA [wikipedia.org] demands to know why you want yet another blade server standard.
        • The ATCA [wikipedia.org] demands to know why you want yet another blade server standard.

          I demand to know why every ATCA blade is crazy expensive. Oh yeah, because they're telco and Facebook can't afford to overpay for carrier-grade reliability they don't need.

    • by Blrfl ( 46596 )

      I doubt very much they're proposing that memory be done across the PCI bus. Memory is modular and reusable, so pulling modules out of old CPU cards and snapping them into new ones helps cut the cost of upgrades and changes.

      Or you could just break an ankle jumping to conclusions.

    • they should be in HyperTransport slots / on the HTX bus

  • When they say 'x8' PCIe, if that is accurate and the entirety of the connectivity, what it really means is a standardized IO board for video, Ethernet, maybe storage. The CPU daughtercard becomes the new 'motherboard' in terms of cost. You might marginally improve service time for the relatively rare board-replacement case (a CPU replacement is already an easy repair action), but having that board segregated will drive up cost a bit.

    They could mean that x8 is the data path and some amount of I2C and power is
  • Suppose you have a machine with an Intel x86 cpu in it and you swap it out for an ARM cpu. What happens?

    The BIOS won't even boot! Its instructions are written in x86 machine language. Even if you somehow detect the new CPU architecture, and your BIOS has a copy of code that will run on it, then your OS won't boot!

    What a novel but dumb idea.
    • Nah, they just put the BIOS on the same daughterboard as the CPU.

    • More likely they will go with UEFI (BIOS is too limited) with custom firmware that contains boot code for multiple architectures; all you have to do is swap a flag at startup, unless it detects the architecture automatically. This could also be used as a secondary CPU, keeping a primary CPU on the board for the OS and the other for number crunching and virtualization.
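
Purely as a conceptual sketch of the multi-architecture firmware idea above (not an actual UEFI implementation; the image paths and probe hook are invented):

```python
# Conceptual sketch only: firmware picking a boot payload per architecture,
# as the parent comment suggests. Image paths and the probe hook are invented.
BOOT_IMAGES = {
    "x86_64": "payloads/boot_x86_64.efi",
    "aarch64": "payloads/boot_aarch64.efi",
}

def detect_daughtercard_arch():
    """Stand-in for probing the CPU card (e.g. via an ID EEPROM)."""
    raise NotImplementedError("platform-specific")

def select_boot_image(forced_arch=None):
    """Use an explicit flag if set, otherwise whatever the probe reports."""
    arch = forced_arch or detect_daughtercard_arch()
    if arch not in BOOT_IMAGES:
        raise RuntimeError(f"no boot payload for architecture {arch!r}")
    return BOOT_IMAGES[arch]
```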

  • They've rediscovered the backplane. [wikipedia.org]
  • With an x8 PCIe interface, this sounds like most of the motherboard is on the card. You would be hard-pressed to get storage IO and network IO through that interface, forget memory. It's not going to handle many 40Gb InfiniBand adapters, or they will have to move onto the card. This sounds a lot like the old industrial single-board computers of the '80s and '90s.

  • How does it work for HTX CPUs, with RAM linked to the CPUs? And QPI CPUs?

    Now both AMD and Intel have the memory controller built into the CPU.

    All AMD CPUs use HyperTransport to link them to the chipset / other CPUs. AMD also planned for HTX slots at one point.

    Intel uses QPI in its higher-end desktop systems and in servers/workstations with more than one CPU.

  • by DavidYaw ( 447706 ) on Wednesday January 16, 2013 @05:24PM (#42609813) Homepage

    So the CPU is on the daughterboard. Everything that's specific to the CPU type (North/South bridges, etc) would have to go on there as well. Likely memory as well.

    Congratulations. They've re-discovered the backplane [wikipedia.org] and single board computer.

  • And how is this any different from the PICMG 1.x standards?

    http://www.picmg.org/v2internal/resourcepage2.cfm?id=8 [picmg.org]

    Lots of people have been building systems around this technology for years using passive backplanes.

  • Okay, this didn't work in the x186 days, so I guess it's been long enough that it needs a redux. I remember when they were selling these machines, which were essentially a backplane, and one of the things you could plug into it was your CPU daughtercard. These were being sold as a way to "futureproof" (remember that buzzword?) your system. Turns out that was only good for a few CPU cycles...
  • http://www.opencompute.org/wp/wp-content/uploads/2013/01/Open_Compute_Project_Micro-Server_Card_Specification_v0.5.pdf [opencompute.org]

    It uses that connector, but the signals are Ethernet, SATA, etc. RAM and optionally mSATA are on board.

  • They want their industrial PC architecture back.

  • The purpose of this connector is to allow the connection of signal-processing co-processors. Two-dimensional video signal processing, similar to that sold at great cost by Texas Memory Systems, is greatly useful to Facebook, Google+, and other cloud properties that track identity by facial recognition.

    The image-processing algorithms are not as easily distributed as the search indexing. An approach which cuts through the problems at much better cost is the dedicated image processing SIMD pipeline st
