Can Open Hardware Transform the Data Center?
1sockchuck writes "Is the data center industry on the verge of a revolution in which open source hardware designs transform the process of designing and building these facilities? This week the Open Compute Project gained momentum and structure, forming a foundation as it touted participation from IT heavyweights Intel, Dell, Amazon, Facebook, Red Hat and Rackspace. That turnout is not an isolated event, but reflects a growing focus on collaborative projects to reduce cost, timelines and inefficiency in data center construction and operation. The Open Compute project is just one of a handful of initiatives to bring standards and repeatable designs to IT infrastructure."
facebook (Score:2)
Ironic that Facebook (The walled garden company) is behind this.
Re: (Score:1)
In terms of hardware application and design they have actually been quite forthcoming. Unlike, say, Google, who kept you guessing about anything related to their infrastructure for a decade.
As a SW platform, yes, they are a walled garden (sort of).
Re:facebook (Score:4, Informative)
You may not like Facebook's other practices, but they do actually contribute a lot to open source. Much more than any other company.
Re: (Score:2)
I don't know about the general consensus on this, but when I refer to Facebook as being a walled garden I am actually referring to the way people use it. They go in there, produce content (discussions, status updates, wall posts, links, fan pages and, most importantly, trends) and the rest of the Internet doesn't really get a whiff of all that happening. Now at this point a lot of people would get started on privacy theories, but the fact of the matter is that applications like Facebook take the Internet and tur
Re: (Score:1)
If people are more interested in discu
Re: (Score:2)
Re: (Score:3)
It is different from jail because, being a garden, the user/prisoner doesn't really mind it so much and maybe eve
Not for everyone (Score:1)
A big difference from Open Source software is the cost of the manufacturing equipment. While you can get a nice PC for programming for $1000, including a screen, chip manufacturing hardware is a lot more expensive. The Open Graphics project, for instance, is still looking for investors to make an ASIC version possible. See http://en.wikipedia.org/wiki/Open_Graphics_Project [wikipedia.org]
That will exclude those hobbyists who just want to tinker with the design a bit. Only the most determined, who are willing to embark on a
Re: (Score:2)
A big difference to Open Source software is in the cost of the manufacturing equipment.
True, but if you read TFA, this is about big-iron firms on the Facebook/Google scale who are building humongous scalable data centers and are already being forced to design their own custom solutions and pay to have them manufactured in quantity.
The problem with (say) the Open Graphics Project is that it duplicates the functionality of already available products, produced in vast numbers and developing at a rapid pace, with its only advantage being truly and usefully GPL'd drivers - an admirab
Poor Technology (Score:2, Interesting)
How about we actually stop the insanity that creates the need for these insanely sized data centers? Use smart caching and Java applets, and just send business logic over the connection instead of the bloated insanity of HTML. Instead of shoehorning an intentionally stateless 'square peg' protocol into the 'round hole', actually go with something rational. Then your data servers only need to deal with business logic and you farm out more of your processing requirements to clients. (aka the rational approac
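The "bloated HTML" half of the argument is easy to illustrate: shipping the raw business data lets the client do the rendering, and the wire payload shrinks accordingly. A minimal sketch (the record and markup below are hypothetical, just to compare byte counts):

```python
import json

# Hypothetical record a server might send to a client.
record = {"user": "alice", "balance": 1234.56, "status": "active"}

# Raw business data as JSON -- the client renders it however it likes.
json_payload = json.dumps(record)

# The same data wrapped in server-rendered HTML markup.
html_payload = (
    "<html><body><table>"
    "<tr><th>User</th><td>alice</td></tr>"
    "<tr><th>Balance</th><td>1234.56</td></tr>"
    "<tr><th>Status</th><td>active</td></tr>"
    "</table></body></html>"
)

print(len(json_payload), len(html_payload))
```

The markup overhead grows with every row, while the JSON stays close to the size of the data itself, which is the commenter's point about pushing rendering work out to the client.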
Re: (Score:1)
Re: (Score:2)
"just send business logic via the connection instead of the bloated insanity of html."
That would be good, but I don't see how this would reduce data center size, since you would have to compute your business logic at the data center again anyway.
JavaScript, not Java applets (Score:2)
This is largely what is already in place, though the web has long since gone mobile and your client systems are mobile phones with limited battery life, limited CPU and display capabilities.
You also need "architects" who are not complete morons.
There are also problems with trust, etc.: the client cannot be trusted once it's running on the user's computer, which means your protocols are opened up to inspection and your servers potentially to abuse.
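The trust point is why client-side computation never removes server-side work: anything a client sends can be forged, so the server has to re-derive the result anyway. A minimal sketch (all names and prices hypothetical):

```python
# The server re-derives the total instead of trusting one computed by
# the client, since a tampered client could send anything.
PRICES = {"widget": 9.99, "gadget": 24.50}

def checkout(order, client_total):
    """order maps item name -> quantity; client_total is what the client claims."""
    server_total = sum(PRICES[item] * qty for item, qty in order.items())
    # Reject any claimed total that disagrees with the server's computation.
    if abs(server_total - client_total) > 0.005:
        raise ValueError("client total rejected; server computed a different value")
    return server_total

# Honest client: 2 * 9.99 + 24.50
print(checkout({"widget": 2, "gadget": 1}, 44.48))
```

This is exactly why the grandparent's "farm processing out to clients" idea doesn't shrink the data center much: the authoritative computation still happens server-side.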
Re: (Score:2)
Re: (Score:2)
I remember commenting to my boss after the end of the WWW project about how it could save huge amounts of money in deployment.
How is this different from Reference Designs? (Score:1)
How is this different from using reference designs, blueprints, and best practices?
Or are we only calling it "open source" so software weenies will think they know what's going on?
Best leave the real engineering to real engineers.
Re: (Score:1)
Obviously, a big difference can be in the license terms. Can you legally change the reference designs, re-distribute the result without paying royalties and allow the recipient to do the same?
If yes, you have the equivalent of Open Source in software.
If no, then there is a difference and "open source hardware" is actually something new.
Re: (Score:2)
Cool, let's call it "open" (Score:2)
What's all the fuss about... (Score:1)
Compete (Score:2)
Data centers care about efficiency and processing density. Can open hardware currently compete?
Short answer (Score:2)
No, it won't.
X86 / X86-64 is easy to update / replace / find parts (Score:2)
All that changes is the chipset drivers / drivers for other stuff on the motherboard.
The app / OS code does not need to be changed to go from, say, a Core i3 to a Core i5, or from Intel to AMD (same x86-64 code base).
It works if you are dealing only in commodity HW (Score:3)
At a commodity level it is simply about who has the biggest distribution channel and who can get the stuff made for the lowest cost, probably somewhere in China. Since it is all commodity stuff there really isn't a secret about drivers, firmware or manufacturing.
Move up the scale a little bit to real managed servers with fault-tolerant redundant parts and real diagnostics and you have left the commodity vendors behind. And now there is a considerable value difference between Vendor A's approach and Vendor B's approach. You also have the situation where Vendor A's stuff integrates well with Vendor C but not Vendor B.
Google set a somewhat different standard for building a data center and doing it totally with commodity hardware. Cheap commodity hardware. As far as I know, this example has not been replicated by anyone large. I suspect a significant portion of Google's effort in building a data center this way was dealing with non-fault-tolerant hardware and systems with no management and/or diagnostics. It means stuff is going to go down at random times and you just have to deal with it by pulling the whole unit. I guess it works for them. I suspect most other data center level operations really aren't run as a distributed cluster where the cluster is fault-tolerant but the pieces are not. We are still pretty much at the beginning of clustering and fault-tolerant systems with complete failover support as far as the mainstream is concerned.
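The "fault-tolerant cluster built from non-fault-tolerant pieces" pattern boils down to software that assumes any individual node may simply be dead and routes around it. A toy sketch of that idea (node functions and names are hypothetical, not any real cluster API):

```python
import random

# Cheap-hardware clustering in miniature: the cluster survives even
# though individual nodes do not. A "node" here is just a callable.
def fetch_with_failover(replicas, key):
    """Try each replica in random order; skip any that turn out to be dead."""
    last_error = None
    for node in random.sample(replicas, len(replicas)):
        try:
            return node(key)
        except ConnectionError as e:  # dead node: note it and try the next
            last_error = e
    raise RuntimeError("all replicas down") from last_error

def dead_node(key):
    raise ConnectionError("node unreachable")

def live_node(key):
    return {"answer": 42}[key]

print(fetch_with_failover([dead_node, live_node, dead_node], "answer"))
```

Random failures become routine events handled in software, which is what lets the hardware underneath stay cheap and management-free.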
Understand that if a company is supplying nothing but commodity hardware (think the low end of Dell), they can be immediately replaced with any other commodity supplier. Which is why Dell is getting out of the commodity PC business - there is no value proposition in it. On the other hand, Dell supplying servers which are not commodity hardware but use lots of custom parts and firmware means (a) they can supply much higher value to the data center and (b) they are not easily replaced by competitors that do not have matching parts and firmware. Making that level of hardware "open" is suicide because then you have turned your high value hardware into a commodity with no value at all.
Re: (Score:1)
Understand that if a company is supplying nothing but commodity hardware (think the low end of Dell), they can be immediately replaced with any other commodity supplier. Which is why Dell is getting out of the commodity PC business - there is no value proposition in it. On the other hand, Dell supplying servers which are not commodity hardware but use lots of custom parts and firmware means (a) they can supply much higher value to the data center and (b) they are not easily replaced by competitors that do not have matching parts and firmware. Making that level of hardware "open" is suicide because then you have turned your high value hardware into a commodity with no value at all.
Of the many server suppliers, Dell in particular is supplying commodity server hardware, and their boxes can easily be replaced by just about any vendor.
Dell takes a few of the things being sold on the market, "customizes" (brands) the firmware, and that's it. And what they're actually replacing usually sucks (e.g. their BIOS) or is somewhat outdated and a little buggy. For example, a colleague of mine fixed a couple of Dell RAID controller issues just by downloading the official LSI firmware.