Open Compute 'Group Hug' Board Allows Swappable CPUs In Servers
Nerval's Lobster writes "AMD, Intel, ARM: for years, their respective CPU architectures required separate sockets, separate motherboards, and in effect, separate servers. But no longer: at the Open Compute Summit, Facebook and the Open Compute Project announced a common daughtercard specification that can link virtually any processor to the motherboard. AMD, Applied Micro, Intel, and Calxeda have already embraced the new board, dubbed 'Group Hug.' Hardware designs based on the technology will reportedly appear at the show. The Group Hug card will be connected via a simple x8 PCI Express connector to the main motherboard. But Frank Frankovsky, director of hardware design and supply chain operations at Facebook, also told an audience at the Summit that, while a standard has been provided, it may be some time before the real-world appearance of servers built on the technology."
Re: (Score:2)
Metric racks (Score:3)
So I just ended up on Wikipedia; good grief, you were not even joking about that new 21" standard. :'(
http://en.wikipedia.org/wiki/19-inch_rack#Open_Rack [wikipedia.org]
19" = 48,26cm wide.
21" = 53,7cm wide.
23" = 58cm wide.
48U = 200cm high.
So 58 cm 48U racks it is, duly noted.
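For anyone double-checking those conversions, a trivial sketch (note that the 53.7 cm Open Rack figure above is presumably the actual bay width from the wiki page rather than a literal 21-inch conversion, which comes out to about 53.3 cm):

```python
# Rough sanity check of the rack widths quoted above (1 inch = 2.54 cm).
for name, inches in [("19-inch", 19), ("21-inch", 21), ("23-inch", 23)]:
    print(f"{name}: {inches * 2.54:.2f} cm wide")
```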
huh? (Score:3)
I don't get it. Are we redesigning the whole computer architecture so we have a different group speccing out the north bridge (etc.) that all CPU manufacturers will use? Or are they just adding an additional CPU into the current architecture the same way we do with graphics cards? And then offloading work onto it, like people have been doing for a while with GPGPUs?
Re:huh? (Score:4, Insightful)
A bunch of daughter-boards that plug into a PCIe motherboard is a great idea.
Re: (Score:1)
what's on the board? (Score:2)
I'm having trouble finding technical information about this design, and I'm curious how much of the motherboard logic has to move onto this daughterboard. For example, is memory still on the main board? If so, an x8 PCIe channel doesn't seem adequate.
Re: (Score:2)
I would assume CPU and memory on the daughter board. PSU and other shit on the motherboard.
If the PSU is on the daughter board, there's nothing really left to go on the motherboard, so it would be completely pointless. An Intel server motherboard is nothing but power supply, CPU sockets, RAM slots, PCIe slots, and some GbE controllers.
Re:what's on the board? (Score:4, Interesting)
In a server, there is usually some service processor so that software bugs don't require a physical visit to regain capacity. In terms of manageability, I'd expect some I2C connectivity (the relation between fan and processor can get very interesting, actually). Intel processors speak PECI exclusively nowadays; I wouldn't be surprised if the standard basically forces the thermal control mechanism to terminate PECI on the daughter card and speak I2C to the fan management subsystem. This is probably the greatest lost opportunity for energy savings; a holistic system can do some pretty interesting things knowing what the fans are capable of and what sort of baffling is in place.
Also, the daughtercard undoubtedly brings the firmware with it.
All in all, the daughtercard is going to be the motherboard, and not much changes. Maybe you get to reuse SATA chips, gigabit, USB, and 'on-board' video chips for some cost savings on a mass upgrade, but those parts are pretty cheap and even they get dated. Video, USB, and gigabit might not matter for the internet datacenter of today and several tomorrows to come, but the SATA throughput is actually significant for data mining.
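To make the "terminate PECI on the card, speak I2C to the fan subsystem" idea concrete, here is a minimal sketch of the kind of control loop a service processor might run, using the smbus2 Python library. The bus number, device addresses, register offsets, and fan curve are all hypothetical illustrations, not anything taken from the Open Compute spec.

```python
# Hypothetical sketch: a management controller polls a temperature sensor
# exposed over I2C by the daughtercard (which terminates PECI internally)
# and adjusts a fan controller's PWM register accordingly.
# Bus number, device addresses, and register offsets are made up for
# illustration; they are not from the Open Compute specification.
import time
from smbus2 import SMBus

I2C_BUS = 1
TEMP_SENSOR_ADDR = 0x4C   # hypothetical daughtercard thermal sensor
FAN_CTRL_ADDR = 0x2F      # hypothetical fan controller
TEMP_REG = 0x00           # temperature register (degrees C, one byte)
PWM_REG = 0x30            # fan PWM duty-cycle register (0-255)

def pwm_for_temp(temp_c: int) -> int:
    """Simple linear fan curve: ~30% duty below 40C, 100% at 80C and above."""
    if temp_c <= 40:
        return 77
    if temp_c >= 80:
        return 255
    return 77 + int((temp_c - 40) * (255 - 77) / 40)

with SMBus(I2C_BUS) as bus:
    while True:
        temp = bus.read_byte_data(TEMP_SENSOR_ADDR, TEMP_REG)
        bus.write_byte_data(FAN_CTRL_ADDR, PWM_REG, pwm_for_temp(temp))
        time.sleep(2)
```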
Re: (Score:2)
Then what are these AMD APUs [amd.com]?
Re: (Score:2)
Though ARM has video on the SoC as a matter of course, Intel and AMD server chips do not have video on package... yet....
Then what are these AMD APUs [amd.com]?
APUs are not server chips. And the server ARMs don't have video either.
Re: (Score:2)
Getting closer: the daughterboard will have an SoC. ARM has been doing SoCs for a long time. Remember there are Intel Atom SoCs too. AMD, I don't know.
Re: (Score:2)
The only thing that makes sense is that everything is on the daughterboard and the "motherboard" is basically a passive PCIe backplane (with very little bandwidth). This kind of architecture has been used in telco and embedded systems for decades.
Re: (Score:2)
Sounds like they've reinvented the blade.
Re: (Score:2)
Sounds like they've reinvented the S-100 bus. Or maybe the VME bus.
Re: (Score:2)
No, it sounds like they reinvented the Panda compass connector backplane. I used to have a Panda Archistrat 4s that you could slide in a PPro, or DEC Alpha, with plans to add SPARC support as well.
Re: (Score:2)
who gives a fuck? (Score:1, Insightful)
who gives a fuck?
seriously. it's a stupid standard. why would I want to swap the CPU for another architecture? Why would I want ARM in a high performance server? Why would I want "easy" replacement of a CPU for another kind, when the rest of the motherboard isn't able to interface with it?
Why should I care for a "standard" connection where pins will be outdated 2 years from now?
Why are "high performance, low cost" servers socketed, instead of processors soldered to a motherboard? What dies is the motherboar
Re:who gives a front? (Score:2)
Indeed. Why not stick an Itanium on it too?
Re: (Score:2)
If this is just a way to swap out architectures in a box, that seems pretty useless to me. On the other hand, if what they are providing is something akin to the hot-swap on Sun boxes, that could be useful for uptime. On top of that, if it can just add additional CPUs to scale up a system before doing a scale-out, that would be very beneficial to database managers. (Facebook is supporting this, and that is one of their main logistical problems.)
The article is sparse on details though, so someone that knows what this
Re: (Score:2)
Who needs hotswap when you have a GRID!
Really, in these "highly distributed" systems, the price of redundancy is much lower than that of custom hardware.
A while ago I had to put together this server: http://i.imgur.com/iII52.jpg [imgur.com]
That's an IBM x3450 or 3650, I don't remember. The specs are pretty much "meh". It's a 20 kg box with a HUGE motherboard in an oversized case, with 3 120 mm fans (with the ability to add 3 more on standby). It has a socketed CPU, and to the left of it you can see a black cap - that's where
42U rack (Score:2)
I think the 42U rack height comes from the standard 7-foot door/elevator height. Some datacenters definitely have custom tall racks already, and I wouldn't be surprised to see more of that in the future.
Re: (Score:1)
seriously. it's a stupid standard. why would I want to swap the CPU for another architecture?
Are you trying to tell me that your servers can't run multiple instruction set binaries at the same time? You're not installing generic executables that can be run on Intel, AMD, ARM, PPC, Itanium (thanks for the tip, Marcion), etc, etc all at the same time? What kind of wallflower system admin are you? /sarcasm
BTW, if Microsoft supported this technology you could upgrade your Surface RT to a Surface Pro in a snap! /more sarcasm
Re: (Score:1)
What dies is the motherboard, not the CPU. When the motherboard dies, the CPU is so outdated it doesn't even make sense to keep it.
This is aimed at companies like Facebook, Google, Amazon, and the like. When managing thousands of servers, any number of components will die on a fairly regular basis. Some will die within a few weeks of going online. When you have 200k servers and a component with a 1% infant mortality rate, having the ability to quickly and easily change the component is a blessing. You can do this with just about all of the components in a server except the processor (relatively speaking).
As for why would you
Re: (Score:2)
This is aimed at companies like Facebook, Google, Amazon, and the like. When managing thousands of servers, any number of components will die on a fairly regular basis. Some will die withing a few weeks of them going online. When you have 200k servers and a component with a 1% infant mortality rate, having the ability to quickly and easily change the component is a blessing.
That's why companies that don't build their own specialized hardware (like Google) use these [wikipedia.org], where the whole "server" is a single replaceable component.
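To put a rough number on that, here is a quick back-of-envelope sketch. The 200k servers and 1% infant-mortality rate are the figures quoted above; the steady-state annualized failure rate is purely an illustrative assumption.

```python
# Back-of-envelope failure counts for a large fleet.
# The 1% infant-mortality figure comes from the comment above;
# the 3% annualized failure rate is an illustrative assumption only.
servers = 200_000
infant_mortality = 0.01       # fraction failing shortly after deployment
annual_failure_rate = 0.03    # assumed steady-state rate, per year

early_failures = servers * infant_mortality
yearly_failures = servers * annual_failure_rate

print(f"Early-life failures: {early_failures:,.0f}")
print(f"Steady-state failures per year: {yearly_failures:,.0f} "
      f"(~{yearly_failures / 365:.0f} per day)")
```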
Re: (Score:2)
having the ability to quickly and easily change the component is a blessing.
AFAIK, companies like Google and Facebook don't change components.
They don't even swap out individual machines; they swap out entire racks once enough of the machines in them are faulty.
When you have, say, 1+ million servers and only 30,000 employees total, of whom probably only 300-3,000 deal with hardware problems, you can't really inspect and fix each machine.
Those who might be interested could be companies in cheaper countries (where labour costs are much lower) who can refurbish the compute
Re: (Score:2)
Because your "high performance" server most likely has a very low demand for 90% of the day, serving up one or two requests a minute - Then needs to handle a few thousand requests per second at peak times.
Idling the horsepower without taking the server offline look very attractive to most datacenters. Current load-balancing farms can do this to some degree, but you can have a several minute lag
Not sure this will help an ARM system much (Score:5, Insightful)
I can understand an organisation on the scale of Facebook wanting the ability to take advantage of bargains to buy processors in bulk and swap them out. I am not sure how widely applicable this is though.
The cool thing about ARM is the lack of complexity, and therefore potentially lower cost and greater energy efficiency. The daughter board seems to go against that by adding complexity: if you swap out an ARM chip, which might be passively cooled or have a low-powered fan, for some high-end Intel server chip, you will also need to change the cooling and the PSU.
Re: (Score:2)
So don't swap the CPU with a single ARM CPU.
PSU and motherboard power delivery can handle a 130W CPU? Stick ten 13W ARM CPUs on the daughter board.
Re: (Score:2)
... and watch as they get utterly annihilated in basically every spec by the 130W CPU.
There's a reason people don't do that, and it's not simply scaling problems.
Re: (Score:2)
I didn't say it would be a good idea, I said it could be done.
Why? (Score:5, Informative)
Maybe this would be useful in some HPC environments where applications can be written to maximize the use of CPU cache, but the bandwidth of a PCIe x8 or x16 slot is a fraction of what is available to a socketed CPU.
A Core i7 has been clocked at 37 GB/sec [anandtech.com] of memory bandwidth, while PCIe x8 is good for 1.6 GB/sec and x16 is good for 3.2 GB/sec.
Is replacing the CPU socket with a PCIe card really worth giving up 90% of the memory bandwidth? I've never upgraded a CPU on a motherboard, even when new-generation CPUs are backwards compatible with the old motherboard, since if I'm going to buy an expensive new CPU, I may as well spend the extra $75 and get a new motherboard to go along with it.
Likewise, by the time I'm ready to retire a 3 or 4 year old server in the datacenter, it's going to take more than a CPU upgrade to make it worth keeping.
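To put rough numbers on that gap, here is a minimal sketch. It assumes PCIe 2.0 signaling (500 MB/s per lane after 8b/10b encoding), so the per-lane figures come out higher than the ones quoted above, which appear to assume an older generation or an effective-throughput estimate; either way the conclusion is the same.

```python
# Rough comparison of socketed memory bandwidth vs. a PCIe link.
# Assumes PCIe 2.0 (500 MB/s per lane after 8b/10b encoding); the
# comment above quotes lower figures, presumably an older generation
# or an effective-throughput estimate.
MEM_BW_GBPS = 37.0                 # Core i7 figure cited above (GB/s)
PCIE2_PER_LANE_GBPS = 0.5          # GB/s per lane, each direction

for lanes in (8, 16):
    link = lanes * PCIE2_PER_LANE_GBPS
    print(f"PCIe 2.0 x{lanes}: {link:.1f} GB/s "
          f"(~{link / MEM_BW_GBPS:.0%} of the 37 GB/s memory bandwidth)")
```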
Re: (Score:2)
Actually, it would be murder on HPC applications, which generally rely on quality inter-node communication to achieve high scale.
Re: (Score:2)
Maybe this would be useful in some HPC environments where applications can be written to maximize the use of CPU cache
Actually, it would be murder on HPC applications, which generally rely on quality inter-node communication to achieve high scale.
Not all HPC applications are the same, and they don't always require fast interconnects. SETI@home is one example (though maybe not a good example of an app that would run well on a computer with CPUs that rely on a "slow" interconnect to main memory).
Re: (Score:2)
Re: (Score:2)
This sounds familiar... (Score:2)
I think I experienced something like this almost two decades ago. It didn't pan out. No one wanted to be limited to the least common denominator.
Anyway, the little detail I see in the two linked articles is that it's simply a standardized socket for a CPU mezzanine card that mounts to a motherboard which is little more than an x8 PCI Express bus, power, and some other peripheral devices/interfaces.
Re: (Score:2)
My thoughts exactly. We saw this 20 years ago with the CPU and bus controller on an ISA card and peripherals out on the bus.
I don't think this is a bad thing, though. It will encourage re-use of things we'd otherwise throw on the scrap heap because they happened to be soldered onto the same motherboard with all of the CPU- and bus-specific hardware. This will reduce upgrade costs, and I'd much rather see a box of small cards become obsolete than an entire rack full of servers.
Re: (Score:2)
Hooray for memory constrained CPU cycles? (Score:3)
A general purpose compute card is probably useful in cases where GPUs aren't a good fit but you want more cores per RU than you can normally get, but I see this as a niche application for the foreseeable future.
Re: (Score:2)
In terms of server design, none of those names are particularly reputable. One might have *assumed* Intel would do decent system design, but anyone who has touched an Intel-designed server knows they aren't particularly good at whole-server design.
Re: (Score:2)
Re: (Score:2)
The ATCA [wikipedia.org] demands to know why you want yet another blade server standard.
I demand to know why every ATCA blade is crazy expensive. Oh yeah, because they're telco and Facebook can't afford to overpay for carrier-grade reliability they don't need.
Re: (Score:2)
I doubt very much they're proposing that memory be done across the PCI bus. Memory is modular and reusable, so pulling modules out of old CPU cards and snapping it into new ones helps cut the cost of upgrades and changes.
Or you could just break an ankle jumping to conclusions.
they should be in HyperTransport slots / bus (Score:2)
they should be in HyperTransport slots / on the HTX bus
Overblown... (Score:2)
They could mean that x8 is the data path and some amount of I2C and power is
imagine this scenario.... (Score:2)
The BIOS won't even boot! Its instructions are written in x86 machine language. Even if you somehow detect the new CPU architecture, and your BIOS has a copy of code that will run on it, your OS still won't boot!
What a novel but dumb idea.
Re: (Score:2)
Nah, they just put the BIOS on the same daughterboard as the CPU.
Re: (Score:2)
More likely they will go with UEFI (BIOS is too limited) and custom firmware that contains boot code for multiple architectures, and all you have to do is swap a flag on startup unless it detects the architecture automatically. This could also be used as a secondary CPU, keeping a primary CPU on the board for the OS and using the other for number crunching and virtualization.
Hooray! (Score:2)
Re: (Score:2)
It will not show up in the consumer space, at least for a very long time. It will be for enterprise systems and will more than likely be meant for hot swapping more than arch shifting.
'80s passive backplane industrial PC rises again (Score:2)
With an x8 PCIe interface, this sounds like most of the motherboard is on the card. You would be hard pressed to get storage I/O and network I/O through that interface, forget memory. It's not going to handle many 40Gb InfiniBand adapters, or they will move onto the card. This sounds a lot like the old industrial single-board computers of the '80s and '90s.
how does it work for HTX cpus? ram linked to cpus? (Score:2)
How does it work for HTX CPUs? RAM linked to the CPUs? QPI CPUs?
Now both AMD and Intel have the memory controller built into the CPU.
All AMD CPUs use HyperTransport to link to the chipset / other CPUs, and they did plan for HTX slots at one point as well.
Intel uses QPI in its higher-end desktop systems and in multi-CPU servers / workstations.
Backplane (Score:3)
So the CPU is on the daughterboard. Everything that's specific to the CPU type (North/South bridges, etc) would have to go on there as well. Likely memory as well.
Congratulations. They've re-discovered the backplane [wikipedia.org] and single board computer.
Re: (Score:2)
Indeed, I personally don't see how it is any more than a standard backplane interface.
Didn't PICMG standardize this already? (Score:1)
And how is this any different from the PICMG 1.x standards?
http://www.picmg.org/v2internal/resourcepage2.cfm?id=8 [picmg.org]
Lots of people have been building systems around this technology for years using passive backplanes.
Re: (Score:2)
x186 - x286 - x386 (Score:1)
Read the spec, it isn't PCIe (Score:1)
http://www.opencompute.org/wp/wp-content/uploads/2013/01/Open_Compute_Project_Micro-Server_Card_Specification_v0.5.pdf [opencompute.org]
It uses that connector, but the signals are Ethernet, SATA, etc. RAM and optionally mSATA are on board.
Pssst.. the 80's called (Score:1)
They want their industrial PC architecture back.
I would say everyone has missed the real point... (Score:2)
The image-processing algorithms are not as easily distributed as the search indexing. An approach which cuts through the problems at much better cost is the dedicated image processing SIMD pipeline st