Facebook | Open Source | Hardware | News

Open Compute Project Publishes Final Open Rack Spec

Nerval's Lobster writes "The Open Compute Project has published the final version of its Open Rack specification, which widens the traditional server rack to more than 23 inches. Specifically, the rack is 600 mm wide (versus the 482.6 mm flange width of a 19-inch rack), with the chassis guidelines calling for a width of 537 mm. All told, that's slightly wider than the 580 mm used by the Western Electric or ETSI rack. The Open Compute Project said changes in the new 1.0 specification include a new focus on a single-column rack design. The spec also accommodates hotter inlet temperatures of 18 to 35 degrees Celsius and up to 90 percent humidity, which reflects other Open Compute designs and real-world data center conditions, according to project documents. Facebook has led the implementation of the Open Compute Project, which publicly shares the designs it uses in its data centers, including its Prineville, Ore. facility. As the spec makes clear, however, the new designs deviate from traditional configurations and specifications, which means data center operators will need to source racks from third-party vendors (or, as Facebook does, design their own)."
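To make the width comparison concrete, here's a quick arithmetic sketch in Python. The figures come from the summary itself, except the ~450 mm usable opening of a 19-inch rack, which is an approximate EIA-310 figure added here for comparison, not something from the Open Rack spec:

# Sanity check of the rack widths quoted in the summary (millimetres).
MM_PER_INCH = 25.4

widths_mm = {
    "19-inch rack (flange-to-flange)": 482.6,
    "Western Electric / ETSI rack": 580.0,
    "Open Rack chassis opening": 537.0,
    "Open Rack (overall)": 600.0,
}

for name, mm in widths_mm.items():
    print(f"{name:35s} {mm:6.1f} mm ({mm / MM_PER_INCH:5.2f} in)")

# The Open Rack chassis opening alone is wider than a 19-inch rack's
# usable equipment space (~450 mm), which is why existing gear needs
# adapters or new sourcing.
print(f"Extra usable width vs. 19-inch: {537.0 - 450.0:.0f} mm")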
  • by vlm ( 69642 ) on Wednesday September 19, 2012 @04:47PM (#41393057)

    the new designs deviate from the traditional configurations and specifications, which means data center operators will need to find and then source racks from third-party vendors (or, in the case of Facebook, design their own).

    Other stuff goes in racks too. Power distribution, cable mis-management, fiber trays, all that stuff needs to be redesigned.

    Looks like a mighty painful transition.

    • by Anonymous Coward

      Meh. This is a non-issue. I've seen adapter dog-ears that allow 19-inch rack devices to fit into the larger 22-inch-style racks. They were sturdy and worked fairly well. Going from larger to smaller would be harder.


    • I work in audio-visual, where all our equipment is 19-inch and we have no need for the 22-inch standard, but since we share some rack hardware with the IT industry (power distribution, cable management, etc.), I hope we'll see equipment built to the 19-inch standard wherever practical and shipped with ears to accommodate either rack standard.
    • by SuperQ ( 431 ) *

      I'm glad to see someone is finally pushing to replace the shitty old 19" rack spec.

      Yeah, this spec seems a little light on some details I'd like to know.

      Is the rack designed for one-side maintenance? I'm tired of the old design where you load a machine in the cold aisle and then have to walk around to the hot aisle to plug in the cables. I want everything on the cold aisle so the datacenter can duct the hot air directly into the heat exchangers. From what I can tell I think this is true for

      • by Lennie ( 16154 )

        Not sure about the rack, but at least some of the machines that Facebook and the others have designed do have all the cables in the cold aisle.

        This also means you "never" have to visit the hot aisle.

        So they have it completely enclosed, and it really is hot in there: something closer to 30 degrees Celsius (86 F).

        That means the cold aisle doesn't have to be as cold either, which means less work for the air conditioning.

        • by SuperQ ( 431 ) *

          Probably a lot hotter than that. Drives are happy to run in the 40°C range, CPUs in the 60-70°C range. I'd expect the exit temp on the hot side to be at least 40°C. The bigger the thermal difference between hot and cold, the better. If you have a normal exit temp of 40°C and a max of 50°C, you can let your inlet temp range between 20°C and 30°C. That way, even when it's hot out, you can get by with evaporative cooling [wikipedia.org].
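          A back-of-the-envelope sketch of that inlet-headroom argument in Python (the exit figures are the commenter's assumptions above, not numbers from the Open Rack spec):

# Inlet-temperature headroom, per the comment's assumed figures (degrees C).
typical_exit = 40.0  # assumed typical hot-aisle exit temperature
max_exit = 50.0      # assumed worst-case exit temperature

# If the servers add a roughly constant temperature rise to the air,
# the inlet can drift by the same margin before the exit hits its max.
headroom = max_exit - typical_exit
inlet_low = 20.0
inlet_high = inlet_low + headroom

print(f"Exit headroom: {headroom:.0f} C")
print(f"Allowed inlet range: {inlet_low:.0f}-{inlet_high:.0f} C")

# For comparison, the spec's inlet envelope (per the summary) is 18-35 C.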

  • by Anonymous Coward

    I'm sure the hardware manufacturers will be all over this, as it's just the thing to boost their flagging sales.

    or...

    No one will give a crap or care to replace all their stuff with "non-standard" hardware.

    Only time will tell, of course. But I'm betting this will evaporate like most other open hardware projects.

    • by mlts ( 1038732 ) * on Wednesday September 19, 2012 @05:12PM (#41393311)

      It reminds me of the push for high-voltage DC power. Yes, the current standards aren't as efficient as they could be, but it would cost a pretty penny to rip our racks out and either adapt or buy new hardware.

      We have enough standards already; a lot of data centers have both 19" racks and 22" racks. It would be like asking the power company to switch to 360 Hz AC power: it might be better, but with so much equipment built to the existing standard, it likely won't happen.

      • by afidel ( 530433 )

        Google is already Intel's fifth-biggest customer, and the other big web/cloud players probably aren't far behind, so if enough of them insist on Open Rack designs, it won't matter what the rest of us want: they WILL be produced. With the way people move between those large players, it's likely that will happen.

    • by Anonymous Coward

      It might be a good idea to read the original Facebook blog post that kicked all this off, to get an idea of the aims of the project:

      http://www.facebook.com/notes/facebook-engineering/building-efficient-data-centers-with-the-open-compute-project/10150144039563920

      It's not really about people replacing their existing stuff with "non-standard" hardware...

      It's more about big application/platform/service providers standardising on the way they build giant new data centres while striving to be as efficient as poss

  • by Archangel Michael ( 180766 ) on Wednesday September 19, 2012 @04:58PM (#41393179) Journal
  • WTF? (Score:4, Informative)

    by msauve ( 701917 ) on Wednesday September 19, 2012 @05:17PM (#41393361)
    If you go to the first link, it says "The Open Rack is the first rack design to diverge from the existing 19" rack standard."

    Well, no. There's a 23" standard (sometimes called the ETSI rack, which the summary even mentions), for which adapters (some including cable management) are readily available to allow installing equipment designed for 19" racks.

    The summary, well, sucks. It bounces between the widths of the actual racks (which aren't really defined for 19" racks), the widths of the installed equipment, and the width across the flanges for 19"-racked equipment. Apples and oranges. (See the sketch after this comment for the distinction.)

    It gives temperature specs, but those aren't so much a function of the rack as of the equipment placed in it and the type of HVAC provided. And despite pretending to give thermal specs, it doesn't bother to define airflow: front to back? Right to left?

    This seems to be a solution looking for a problem.
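    To untangle the three measurements being conflated, here's an illustrative Python sketch for a 19-inch rack (the ~450 mm opening is an approximate EIA-310 figure; the outer width genuinely isn't fixed by the standard):

# Three different "widths" of a 19-inch rack, per the critique above.
nineteen_inch_widths_mm = {
    "flange-to-flange (the '19 inches')": 482.6,
    "rail opening (usable equipment width, approx.)": 450.0,
    "outer rack width": None,  # vendor-specific, not standardized
}

for what, mm in nineteen_inch_widths_mm.items():
    value = "undefined by the standard" if mm is None else f"{mm} mm"
    print(f"{what}: {value}")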
    • The problem is lagging equipment sales. If they can get the Porcine animal into a bag, they can sell it.

      Specifically, I'd like to know what problem this new rack spec solves, other than the "Metric/Imperial" measure.

    • Apparently the WE standard of 23" (580 mm?) just wasn't good enough; they needed that extra 3/4" or so?

      Really?

      Fortunately, for those of us who can live with good enough, this may make surplus 19" racks dirt cheap. I could put one to use in the garage; now, how to cool it during the Phoenix summer? Hmm...

    • Arrrgh, no! Do Not Want!

      Telco equipment traditionally used 23" racks instead of 19", so my underground laboratory at work is full of a bunch of telco racks with shelves to accommodate all the 19" equipment. (We've gradually been converting to 19" racks, but it means a lot of disruption, because this is an old lab in California, so everything's bolted to the floor and anchored to the overhead cable tray and power railings, and there are all sorts of different heights of racks, and it's usually been a lot ea

      • by msauve ( 701917 )
        Spacers [cablesandkits.com] are much cheaper than shelves, and more secure.
      • by Shatrat ( 855151 )

        Shelves? Sounds like you've solved a problem with another problem.
        Just buy 19"-to-23" mounting ears for your 19" equipment. Cheap, easy, and most telco stuff comes with them in the box anyway so it can fit in either 19" or 23" racks.
        Mad respect for the underground laboratory, though. I sincerely hope you're working on a doomsday device and not just a billing system or iPhone app.

        • Actually we're mostly using the underground laboratory to test firewalls, intrusion detection hardware, VPN tunnel servers, and occasionally routers. And we're testing management systems for all of the above. (And for Anonymous Coward's comment, we don't have a couple megawatts, but we're up to about 200 amps or so :-)

          Our corporate real estate mafia are moving us to an undisclosed location next year (we're assuming it's the main office up the road, where they're moving the people upstairs from us who have

  • I'm curious as to why this article has a Facebook icon...
    • by Lennie ( 16154 )

      Because Facebook is the best-known (founding) member of the Open Compute Project.

"The great question... which I have not been able to answer... is, `What does woman want?'" -- Sigmund Freud

Working...