Can Open Hardware Transform the Data Center?

1sockchuck writes "Is the data center industry on the verge of a revolution in which open source hardware designs transform the process of designing and building these facilities? This week the Open Compute Project gained momentum and structure, forming a foundation as it touted participation from IT heavyweights Intel, Dell, Amazon, Facebook, Red Hat and Rackspace. That turnout is not an isolated event, but reflects a growing focus on collaborative projects to reduce cost, timelines and inefficiency in data center construction and operation. The Open Compute project is just one of a handful of initiatives to bring standards and repeatable designs to IT infrastructure."
  • by hey ( 83763 )

    Ironic that Facebook (the walled-garden company) is behind this.

    • In terms of hardware application and design, they have actually been quite forthcoming. Unlike, say, Google, which kept you guessing about anything related to its infrastructure for a decade.

      As a SW platform, yes, they are a walled garden (sort of).

      • Re:facebook (Score:4, Informative)

        by SharkLaser ( 2495316 ) on Sunday October 30, 2011 @04:51AM (#37885052) Journal
        I don't really see how Facebook is a walled-garden company. Yes, they don't open up their own most important platform, but if that makes them a walled-garden company then Google is one too. They both do, however, contribute large amounts of code and side projects (especially on the high-performance web services side), and Facebook goes even further and opens up its datacenter infrastructure too. Like you said, Google keeps that secret.

        You may not like Facebook's other practices, but they do actually contribute a lot to open source. Much more than any other company.
        • I don't know about the general consensus on this, but when I refer to Facebook as a walled garden I am actually referring to the way people use it. They go in there, produce content (discussions, status updates, wall posts, links, fan pages and, most importantly, trends) and the rest of the Internet doesn't really get a whiff of all that happening. Now at this point a lot of people would get started on privacy theories, but the fact of the matter is that applications like Facebook take the Internet and tur

          • Apart from fan pages (which can be public too), most people use it as a private communication tool, just like discussions on MSN Messenger and other IM networks, and even the phone. IRC applies too if the channels are private, and there are lots of private channels and private messages there. For example, those discussions have never been public unless someone participating in them published them - usually against everyone else's wishes. Facebook is just a continuation of that.

            If people are more interested in discu
        • by Yvanhoe ( 564877 )
          Walled garden: you put content in, you can't get it out. It holds content it will not share and will not allow anyone else to benefit from. It has nothing to do with the use of OSS or open hardware. Facebook could run on open cores, with Linux, and publish its source code; it would still be a walled garden as long as it considered the content uploaded by its users its own property.
          • by swalve ( 1980968 )
            That is not what a walled garden is. A walled garden is exactly what the metaphor says it is: a wonderful place to be, but there are walls. Keeping others out, but also you in. And you don't have the keys to the gate. Someone else acts as the gatekeeper on your behalf. In computer terms, that means the platform is locked down, and to do anything, it must be first approved by The Management.

            It is different from jail because, being a garden, the user/prisoner doesn't really mind it so much and maybe eve
  • A big difference from Open Source software is the cost of the manufacturing equipment. Where you can get a nice PC for programming for $1000, including the screen, chip manufacturing hardware is a lot more expensive. The Open Graphics project, for instance, is still looking for investors to make an ASIC version possible. See http://en.wikipedia.org/wiki/Open_Graphics_Project [wikipedia.org]

    That will exclude those hobbyists who just want to tinker with the design a bit. Only the most determined, who are willing to embark on a

    • A big difference from Open Source software is the cost of the manufacturing equipment.

      True, but if you read TFA, this is about big-iron firms on the Facebook/Google scale that are building humongous, scalable data centers and are already being forced to design their own custom solutions and pay to have them manufactured in quantity.

      The problem with (say) the Open Graphics Project is that it duplicates the functionality of already available products, produced in vast numbers and developing at a rapid pace, with its only advantage being truly and usefully GPL'd drivers - an admirab

  • Poor Technology (Score:2, Interesting)

    by scharman ( 308566 )

    How about we actually stop the insanity that creates the need for these insanely sized data centers? Use smart caching and Java applets, and just send business logic via the connection instead of the bloated insanity of HTML. Instead of shoe-horning an intentionally stateless 'square peg' protocol into a 'round hole', actually go with something rational. Then your data servers only need to deal with business logic, and you farm out more of your processing requirements to clients. (aka the rational approac

    • Yes, because I really want to install Java Applets when browsing to any site on the internet!
    • "just send business logic via the connection instead of the bloated insanity of html."

      That would be good, but I don't see how it would reduce datacenter size, since you would still have to compute the business logic at the datacenter.

    • This is largely what is already in place, though the web has long since gone mobile and your client systems are mobile phones with limited battery life, limited CPU and display capabilities.

      You also need "architects" who are not complete morons.

      There are also problems with trust etc, the client cannot be trusted once it's running on the user's computer which means your protocols are opened up to inspection and your servers potentially to abuse.
       

    • by swalve ( 1980968 )
      I think the lesson has been learned that people don't like downloading clients to their computers. What it saves in datacenter resources gets eaten up by millions of people downloading clients to their machines and in greater support costs. The whole point of the web is that you can get whatever you want from whatever client platform you are on.
      • Yes - just prior to doing my first WWW project in 1994, I worked on a traditional client-server application using Oracle. To install our application we had to physically go on site and install 16 different floppy disks; it took two of us two days to install six machines.
        I remember commenting to my boss at the end of the WWW project about how it could save huge amounts of money in deployment.
  • How is this different from using reference designs, blueprints, and best practices?

    Or are we only calling it "open source" so software weenies will think they know what's going on?

    Best leave the real engineering to real engineers.

    • Obviously, a big difference can be in the license terms. Can you legally change the reference designs, re-distribute the result without paying royalties and allow the recipient to do the same?

      If yes, you have the equivalent of Open Source in software.

      If no, then there is a difference and "open source hardware" is actually something new.

      • by EdZ ( 755139 )
        Yes, you can fiddle with the position of rackmount holes and the like to your heart's content. You just won't be able to fit it into a standard rack, which is sort of the point.
  • Facebook is just making the Working Group [wikipedia.org] sound cool by calling it "open".
  • can opener hardware in a data center?
  • Data Centers care about efficiency and processing density. Can open hardware currently compete?

  • No, it won't.

  • As all that changes is the chipset drivers and the drivers for other hardware on the motherboard.

    The app / OS code does not need to be changed to go from, say, a Core i3 to a Core i5, or from Intel to AMD (same x86-64 code base).

  • At a commodity level it is simply about who has the biggest distribution channel and who can get the stuff made for the lowest cost, probably somewhere in China. Since it is all commodity stuff there really isn't a secret about drivers, firmware or manufacturing.

    Move up the scale a little bit to real managed servers with fault-tolerant redundant parts and real diagnostics and you have left the commodity vendors behind. And now there is a considerable value difference between Vendor A's approach and Vendor B's approach. You also have the situation where Vendor A's stuff integrates well with Vendor C but not Vendor B.

    Google set a somewhat different standard for building a data center and doing it totally with commodity hardware. Cheap commodity hardware. As far as I know, this example has not been replicated by anyone large. I suspect a significant portion of Google's effort in building a data center this way was dealing with non-fault-tolerant hardware and systems with no management and/or diagnostics. It means stuff is going to go down at random times and you just have to deal with it by pulling the whole unit. I guess it works for them. I suspect most other data-center-level operations really aren't run as a distributed cluster where the cluster is fault-tolerant but the pieces are not. We are still pretty much at the beginning of clustering and fault-tolerant systems with complete failover support as far as the mainstream is concerned.

    Understand that if a company is supplying nothing but commodity hardware (think the low end of Dell), they can be immediately replaced with any other commodity supplier. Which is why Dell is getting out of the commodity PC business - there is no value proposition in it. On the other hand, Dell supplying servers which are not commodity hardware but use lots of custom parts and firmware means (a) they can supply much higher value to the data center and (b) they are not easily replaced by competitors that do not have matching parts and firmware. Making that level of hardware "open" is suicide, because then you have turned your high-value hardware into a commodity with no value at all.

    • by ista ( 71787 )

      Understand that if a company is supplying nothing but commodity hardware (think the low end of Dell), they can be immediately replaced with any other commodity supplier. Which is why Dell is getting out of the commodity PC business - there is no value proposition in it. On the other hand, Dell supplying servers which are not commodity hardware but use lots of custom parts and firmware means (a) they can supply much higher value to the data center and (b) they are not easily replaced by competitors that do not have matching parts and firmware. Making that level of hardware "open" is suicide, because then you have turned your high-value hardware into a commodity with no value at all.

      Of the many server suppliers, Dell in particular is supplying commodity server hardware, and their boxes can easily be replaced by those of just about any vendor.
      Dell takes a few things from what's being sold on the market, "customizes" (brands) the firmware, and that's it. And what they're actually replacing usually sucks (e.g. their BIOS) or is somehow outdated and just a little buggy. For example, a colleague of mine fixed a couple of Dell RAID controller issues just by downloading the official LSI firmware
