Will "Group Hug" Commoditize the Hardware Market? 72
Will the Open Compute Project’s Common Slot specification and Facebook’s Group Hug board commoditize the data center hardware market even further? Analyst opinions vary widely, indicating that time and additional development work may be necessary before any sort of consensus is reached. At the Open Compute Summit last week, Frank Frankovsky, director of hardware design and supply chain operations at Facebook, announced both the Open Slot specification and Facebook’s prototype Open Slot board, known as “Group Hug.”
Group Hug’s premise is simple: disaggregate the CPU in a way that allows virtually any processor to be linked to the motherboard. This has never been done before with a CPU, which has traditionally required its own socket, its own chipset, and thus its own motherboard. Group Hug is designed to accommodate CPUs from AMD, Intel, and even ARM vendors such as Applied Micro and Calxeda.
This would be awesome... (Score:1)
Re: (Score:2)
I would figure the memory would be on the daughter card with the processor. That way the main motherboard wouldn't need to be compatible with all the different memory choices, just have to be compatible with the daughter card.
Re: (Score:2)
If the memory and CPU are on the daughter card, how is this any different from a blade chassis?
Seems like doing that removes pretty much all the value from the project.
Re: (Score:2)
It's not any different from blades. Actually, Group Hug isn't hot-swappable, so it's worse than blades, but probably cheaper.
Re: (Score:2)
Re: (Score:2)
Just what I was thinking. I looked over at an old shelf and wondered if I could sell them an old backplane box as an example of how to make it work. It was so nice to just drop in another CPU card as you needed.
Re: (Score:2)
Why not make memory its own card type and use optical interconnects for it? That should allow enough speed for memory access, and with a common interface standard you could design your CPU to do it natively or put a translation controller on your CPU card.
Re: (Score:2)
Re: (Score:2)
I wouldn't mind going to a completely passive backplane architecture, although given electrical signal distances, this likely won't be doable until we can get optical signals onto the fiber from the chip die itself (which means a lot of muxing/de-muxing, since having tons of optical connections would be much harder than solder pads).
I mean this as an actual question. (Score:2)
I think that is the basic idea, which is why the whole thing won't work. Basically, they are sawing the motherboard in two: the CPU and memory go on the daughterboard, and the rest of the components (SATA, USB 3, PCIe slots, sound, video outputs) are all that remain on the motherboard.
Why would it work any less than a graphics card? Isn't that the same? GPU and memory on a daughterboard with a fast interface to the motherboard.
Re: (Score:1)
Basically, they are sawing the motherboard in two, where the CPU and memory are on the daughterboard, and the rest of the components (SATA, USB 3, PCIe slots, sound, video outputs) are all that remain on the motherboard
It is actually a micro-server architecture. Think small form-factor blade servers with an optional PCIe interconnect, optional remote SATA devices, and one mandatory Ethernet interface, all running through what looks like an ordinary PCIe slot, but isn't.
http://www.opencompute.org/wp/wp-content/up [opencompute.org]
Umm, is there an article here? (Score:5, Interesting)
All I see are links to other Slashdot articles. Are we going for a new record here? First the ridiculous post about Microsoft selling their entertainment division, now this. And the same style of headline too, which of course is answered with, "No."
Mr Editor, can you at least post a link to some information, like maybe the site where this specification is detailed? Maybe the project web site itself?
Re: (Score:1)
Mr Editor, can you at least post a link to some information, like maybe the site where this specification is detailed? Maybe the project web site itself?
The editors have been outsourced. Now, a team of twenty people who have English as an eleventh language review every submission and green-light only those that meet the criteria spelled out in the three-ring binder. The three-ring binder itself was created from a 7-line Perl script, written by a subcontractor from China, who was hired by a contractor for Dice, which recently acquired the Slashdot brand identity, and who shows up once every two weeks to collect his paycheck and update the seed in the random number generator.
Re: (Score:2)
If they have actual links to real articles, then it isn't nearly the electronic masturbatory exercise you see in front of you here.
You can't link to your own shit if you have real information to link to...
Re: (Score:3)
Here is a link to an actual specification. If you read it, you will see that about half of what has been written about this announcement is wildly off base. We are talking micro-servers here, complete with on-board CPU, RAM, boot EEPROM, flash storage, and Ethernet. PCIe and SATA connections to the backplane are optional. Think small form-factor blade server.
http://www.opencompute.org/wp/wp-content/uploads/2013/01/Open_Compute_Project_Micro-Server_Card_Specification_v0.5.pdf [opencompute.org]
S100 anyone? (Score:4, Insightful)
One architecture that supported "variable CPUs" was S-100, where it was typical to have a CPU card, one or more memory cards, and multiple I/O cards all plugged into a backplane. There were CPU cards for the Apple ][, but these were complete computers on a card that simply allowed use of the Apple ]['s I/O.
Given today's multi-gigahertz processors with gigahertz memory access, I would think it would be difficult, if not impossible, to effectively separate the CPU and the memory by very much. Similarly, it gets pretty complicated with high-speed DMA I/O when you move it away from the memory it is accessing. I'm sure it could be done, but performance is going to suffer just from the physical distances. Add in connector resistance and noise and you have ample justification for putting the CPU, chipset, and RAM in a very small module that then plugs into the rest of the computer for I/O.
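For a rough sense of the distances involved, here's a back-of-the-envelope sketch. All numbers are my own assumptions (a generic gigahertz-range clock and ~2/3 c propagation on a PCB trace), not anything from the spec:

```python
# How far a signal travels in one clock cycle, assuming propagation
# at roughly 2/3 the speed of light on a PCB trace (an assumed figure).
C = 299_792_458          # speed of light, m/s
PROP = 0.66 * C          # assumed PCB trace propagation speed, m/s

def reach_per_cycle(freq_hz):
    """Distance (in cm) a signal can travel in one clock period."""
    return PROP / freq_hz * 100

# At a 1.6 GHz clock, one cycle covers only ~12 cm of trace, and a
# round trip halves that -- little margin for a backplane hop plus
# two connectors.
print(round(reach_per_cycle(1.6e9), 1))
```

Even ignoring connector losses entirely, the geometry alone argues for keeping CPU and RAM on the same small module.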
Re: (Score:2)
Re: (Score:2)
You would probably find googling for N8VEM SBC v2 very interesting. S-100 lives! As does a Eurocard-connectorized version of the same idea, more or less.
http://n8vem-sbc.pbworks.com/ [pbworks.com]
I have the partially assembled system on my workbench. I need a nice blizzard to keep me inside soldering; that'll take care of it. It's all antique through-hole instead of modern SMD (which I find harder to work with), and certainly much bigger, but it's no big deal.
Add in connector resistance
At least the N8VEM design has a standard PC Molex connector on the ECB
Re: (Score:1)
Yeah, I loved the "This has never been done before with a CPU" line, as I have a couple of S-100 systems sitting in my shed. I am constantly amazed by today's "youth" who know so little (read: nothing) of computing's past.
The late '70s and early '80s were probably the era of greatest diversity of computing ideas there has ever been, and perhaps it was even more "open" than today, as users could buy complete service manuals for their computers (and have a good chance of fixing them!), there was a ton of info about the
"never been done before" - lols (Score:2)
Almost everything we see in consumer devices has been done before in some market, or at the NSA (the latter of which will not talk about it, but we know because of James Bamford).
it's true (Score:2)
I came along just a bit later, but this part especially is true:
"users could buy complete service manuals for their computers (and have a good chance of fixing them!), there was a ton of info about the OS's"
The virtually complete absence of true user manuals to this day baffles/angers me.
When I took 'computer class' in the mid-'90s we still learned mostly in versions of DOS, and we used 5 1/4" and 3 1/2" floppies (mostly the latter).
We could afford 2 computers that could run the current version of Windows.
Re: (Score:3)
Probably one of the better magazines I bought was the old Computer Shopper, before it shrank into a "regular"-size magazine. Stan Veit's articles were always a treat, and even the ads were useful, back when there were tons of white-box makers (Arche, Bell, Austin PC, etc.)
The early Mac magazines were like this as well. If you had a special device that could scan, you could actually scan a page out of the magazine and have a couple of useful applications each month.
I do miss the good magazines that just don'
Re: (Score:2)
Not all of the CPU cards for the Apple ][ had on board memory. The popular Z80 Softcard used the motherboard memory which made it slower than other Z80 expansion cards.
Re: (Score:3)
Given today's multi-gigahertz processors with gigahertz memory access, I would think it would be difficult, if not impossible, to effectively separate the CPU and the memory by very much. Similarly, it gets pretty complicated with high-speed DMA I/O when you move it away from the memory it is accessing. I'm sure it could be done, but performance is going to suffer just from the physical distances. Add in connector resistance and noise and you have ample justification for putting the CPU, chipset, and RAM in a very small module that then plugs into the rest of the computer for I/O.
If they were just moving the CPU to a card then yes, but apparently they aren't:
Intel, another key member of the Open Compute Project, announced it would release to the group a silicon-based optical system that enables the data and computing elements in a rack of computer servers to communicate at 100 gigabits a second.
More important, it means that elements of memory and processing that now must be fixed closely together can be separated within a rack, and used as needed for different kinds of tasks.
htt [nytimes.com]
Re: (Score:3)
More important, it means that elements of memory and processing that now must be fixed closely together can be separated within a rack, and used as needed for different kinds of tasks.
This statement is in reference to Intel's proposal, which is still vaporware. I seriously doubt they are talking about locating main memory away from the processors. That would more or less be suicidal.
Facebook's design certainly does no such thing.
http://www.opencompute.org/wp/wp-content/uploads/2013/01/Open_Compute_Project [opencompute.org]
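For a rough sense of why rack-distance main memory would hurt, here's a sketch with assumed numbers (a 2 m fiber run, a 100 Gbit/s link as in Intel's announcement, and a 64-byte cache line; none of the per-hop figures come from the spec):

```python
# Added delay for fetching one cache line over an optical link:
# round-trip propagation plus serialization of the line. Switching
# and protocol overhead are ignored, so this is a lower bound.
C_FIBER = 2e8            # approx. speed of light in fiber, m/s

def remote_fetch_ns(distance_m, line_bytes=64, link_bps=100e9):
    """Nanoseconds added per cache-line fetch over the link."""
    prop = 2 * distance_m / C_FIBER * 1e9          # round trip
    serialize = line_bytes * 8 / link_bps * 1e9    # bits on the wire
    return prop + serialize

# Two metres of fiber adds ~25 ns before any switching overhead --
# comparable to an entire local DRAM access (~50-100 ns).
print(round(remote_fetch_ns(2.0), 1))
```

Even under these generous assumptions, remote memory roughly doubles effective access latency, which is the point the parent is making.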
Re: (Score:2)
Re: (Score:1)
Add to that CompactPCI, VME, VME64, VME64x, PXI, VXI, VXS, VPX, OpenVPX.
I probably forgot some, but it seems there are more computer bus/form-factor standards that don't call for a specific CPU than do.
Re: (Score:2)
Most of these were just a CPU (usually a Z80) and the minimal logic necessary to take over from the 6502 on the motherboard. A relatively small handful of cards included their own RAM; it was far cheaper to use what was already in the computer.
The only Apple II expansion card that comes to mind that really was a complete computer on a card was the Applied Engineering PC Transporter [applearchives.com]
Backplane (Score:2)
Also see wire-wrapped and bit-sliced.
Re: (Score:3)
In the early '90s one of our mainframes blew a CPU, so the IBM CE replaced it while the system continued running. Zero reboot time, because it wasn't rebooted. Much like you can swap hard drives in a NAS array while it runs.
There's really nothing new in IT. A couple of years back a VMware image of mine got moved to another machine mostly seamlessly. Oh, it was "frozen/down" for a minute or so but promptly unfroze on the new hardware. Not nearly as advanced as the mainframe was 20 years ago, but someday modern
Re: (Score:2)
VMware doesn't have live migration? I know VirtualBox does, though I have never had the opportunity to need it yet.
Re: (Score:2)
You might be thinking of the Slot A/B mounted CPUs from AMD and Alpha Processor Inc. They were compatible slot designs where you could plug in Athlon or Alpha 21264 CPUs. AMD licensed the slot design from Alpha.
Unfortunately, I don't think it ever made much of a dent in the market.
might be thinking of the 1970s (Score:2)
I remember the psych department at the university had an 'old computer' historical display set up in one of their windows. The 'motherboard' was just a bunch of slots you would fit wire-wrapped boards into: one was the CPU board, one was memory, whatever.
Not to mention all of the "upgrade your PC" cards from the 1980s: put a 286 CPU-on-a-card into your 8088 "IBM XT"; heck, you could even put a PC card in your Mac.
I'm pretty sure 'industrial' users like airplanes etc. have had similar setups.
Where is the ROI? (Score:2)
The CPU is a small part of the cost of the server.
What is the point in doing this? Where is the return on investment?
Re: (Score:2)
Re: (Score:2)
--Yeah, I don't know if this is really going to take off. (In general) Universal = generic = NOT optimized for speed/efficiency, etc...
Re: (Score:2)
Well, this is to show those scumbags at Intel and AMD, who refuse to create products for their competition, who's boss! I mean, why wouldn't they spend extra time and money to create a bunch of connections that their customers aren't going to use, and probably make their products perform worse by introducing unneeded complexity?
Never mind that we did already have a "universal" CPU socket, or at least one as close as it mattered. It was called Socket 7, and it fit Intel / AMD / VIA CPUs. And it was abandoned
Re: (Score:1)
It is not a CPU slot specification. It is a micro-server slot specification, which is much more practical. Think small form factor blade server. The PCIe part is actually optional.
Commoditize? You keep using that word. (Score:2, Insightful)
Already 'commodity' (Score:3)
I see this as another step toward two goals:
-Getting ARM into the datacenter in some reputable fashion (which may or may not make sense, depending on whether a compelling performance per watt case can be made that offsets the energy/manufacturing gap that might be incurred from requiring more packages to get to performance desired).
-Rebranding 'whitebox'. White-box vendors are viewed as the low-cost alternative to HP/Dell/IBM, but image-wise they are viewed as anywhere from 'unacceptably bad' to, at best, 'just as good' by a select portion of the market. Putting cost aside, no one thinks of white box as 'better' than the expensive names. A lot of Open Compute at the system level is the same thing that has been the reality for the last decade, with a shiny new name. The same standards that everyone already followed are getting highlighted more explicitly. This is the opportunity, through marketing, to change minds to say 'better' in some cases, or at least make the 'unacceptable' segment of the market take another look.
If this allowed multiple CPUs on a motherboard... (Score:1)
You could probably have an ARM low-load, low-energy-consumption processor and a nice high-performance processor on the same board. You'd just then manage when the high-performance one activates, and you could probably swap either (assuming hot-plug) without taking the system offline... It's nice to dream, isn't it?
Do the cards have room for 4-8 / 6-12 RAM slots? (Score:3)
Do the cards have room for 4-8 / 6-12 RAM slots each? And yes, that's full-size RAM.
Re: (Score:2)
In a setup like this you wouldn't put the RAM on the CPU card. It'd go on the backplane interconnect, independent of the CPU. Think of the PDP-11 Unibus or the VAX-11 Synchronous Backplane Interconnect, which are where I first encountered the concept of a backplane and independent CPU, memory, co-processor, and I/O processor modules. I doubt they originated there, though; my guess is the concept goes back to the IBM mainframes of the '60s. It was an amusing cycle: external modules would migrate onto the CPU board
Re: (Score:2)
The memory is going to stay on the processor cards. It would be somewhere between slow and ridiculously slow (by modern standards) to do anything else. The slot interface is PCIe x8. An I/O interconnect. Not memory, certainly not SMP. More like a tightly coupled cluster.
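The gap is easy to put numbers on. A rough comparison, assuming PCIe Gen2 signaling and a single DDR3-1600 channel (my assumptions for illustration; the spec may use different rates):

```python
# Usable PCIe Gen2 bandwidth vs. peak DDR3 channel bandwidth.
# Assumed figures: 5 GT/s per Gen2 lane with 8b/10b encoding,
# and a 64-bit DDR3-1600 channel.
def pcie_gen2_gbs(lanes):
    """Usable GB/s per direction for a Gen2 link of `lanes` lanes."""
    return lanes * 5e9 * (8 / 10) / 8 / 1e9

def ddr3_gbs(mega_transfers=1600, bus_bytes=8):
    """Peak GB/s for one 64-bit DDR3 channel."""
    return mega_transfers * 1e6 * bus_bytes / 1e9

print(pcie_gen2_gbs(8))   # 4.0 GB/s per direction for x8
print(ddr3_gbs())         # 12.8 GB/s for one DDR3-1600 channel
```

So an x8 slot carries roughly a third of one memory channel's peak bandwidth, before protocol overhead and latency even enter into it, which is why the slot makes sense as an I/O interconnect but not as a memory bus.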
Re: (Score:1)
No. These are micro-servers we are talking about. Two RAM slots are typical. Low-power, energy-efficient CPUs too. ARM to start with.
Take a look:
http://www.opencompute.org/wp/wp-content/uploads/2013/01/Open_Compute_Project_Micro-Server_Card_Specification_v0.5.pdf [opencompute.org]
PCIe x8 is limited I/O; why not at least x16? (Score:2)
PCIe x8 is limited I/O; why not at least x16?
x8 can be used up by one video card on its own.
Re: (Score:1)
This is intended for server use. No video output required. The PCIe part is actually optional. I wouldn't expect to see this in workstations anytime soon, not without a major redesign at any rate. The form factor is designed for small, low power processors. The interface is not designed for SMP or anything like that either.
Re: (Score:2)
They're only using the PCIe x8 physical connectors; the electrical signals do not resemble PCIe at all.
Presumably, they're also relocating the slot to avoid stupid errors (like plugging one of these into an actual PCIe x8 slot, or vice versa).
No (Score:1)
Re: (Score:2)
By Betteridge's Law, I am forced to agree with you.
What is the invention here? (Score:3)
Re: (Score:2)
In other words, an iPhone. Minus the data connectors and inductive power. And the rack.
Ya know, I think you're on to something. If Intel can take their optical interconnect out of the lab, it just might be possible. At a reasonable price. It's possible now, for an unreasonable price.
And that's the machine SGI would be building, if SGI were anything but a shadow of its former self.
Group Hugs Can Be Dangerous (Score:1)
http://pbfcomics.com/115/ [pbfcomics.com]
Good (Score:2)
And add in some optical links so we can finally scale motherboards to something awesome.
Being limited to certain designs/lengths because of electrical circuitry... madness.
Been done before circa 1974 (Score:1)
the digital group -
http://www.pc-history.org/digital.htm [pc-history.org]
http://www.bytecollector.com/the_digital_group.htm [bytecollector.com]
Costs? (Score:2)