
Open Compute Hardware Adapted For Colo Centers

1sockchuck writes "Facebook has now adapted its Open Compute servers to work in leased data center space, a step that could make the highly efficient 'open hardware' designs accessible to a broader range of users. The Open Compute Project was launched last year to bring standards and repeatable designs to IT infrastructure, and has been gaining traction as more hardware vendors join the effort. Facebook's move to open its designs has been a welcome departure from the historic secrecy surrounding data center design and operations. But energy-saving customizations that work in Facebook's data centers present challenges in multi-tenant facilities. To make it work, Facebook hacked a rack and gave up some energy savings by using standard 208V power."

Comments:
  • 208v? ha! (Score:2, Interesting)

    by CRC'99 ( 96526 )

    Ok, so they're getting in on what the rest of the world does with a single phase.

    Most of the world is 240v single phase, 415v 3 phase. I don't quite understand how they give up energy savings by using a higher input voltage?

    Lower voltage = more amps = more heat
    Higher voltage = less amps = less heat.

    • by swalve ( 1980968 )
      My impression was that their power supplies are rated at 190v, so giving them 208v wastes some energy. But the upside is, I guess, that the power supplies can withstand power sags better, and go into a larger variety of locations without having to upgrade the power at the locations.
    • Re: (Score:3, Informative)

      by Anonymous Coward

      The equipment was originally designed to run at 277V (1 leg of a 3-phase 480V system), but is instead running at 208V (3-phase system where each leg is 120V). So while 208V may be higher than most US equipment, it's still lower than what they typically use.

      dom

    • I don't quite understand how they give up energy savings by using a higher input voltage?

      You lose efficiency, thus wasting energy, when you convert the 208V AC into the low DC voltages necessary to run the computer. Instead of each computer having a power supply that converts from high-voltage AC to low-voltage DC, some companies are using large AC-to-DC power supplies to power whole racks of servers. These servers run on DC.

      • Re:208v? ha! (Score:4, Informative)

        by tlhIngan ( 30335 ) <slashdot&worf,net> on Thursday October 25, 2012 @11:06AM (#41765923)

        You lose efficiency, thus wasting energy, when you convert the 208V AC into the low DC voltages necessary to run the computer. Instead of each computer having a power supply that converts from high-voltage AC to low-voltage DC, some companies are using large AC-to-DC power supplies to power whole racks of servers. These servers run on DC.

        Low-voltage DC is piss-poor for distribution because power losses in wires increase with the SQUARE of the current. 120V@1A will have far lower losses than 12V@10A - 100 times less.

        The big AC-to-DC places use high-voltage DC for that reason - lower-current cables are far easier to handle than high-current cables (the thickness of a conductor depends on its current rating, or ampacity; the insulator does have to get thicker for higher voltages, but it's still a lot more flexible than a thick 00-gauge wire).

        DC-DC converters are fairly efficient, and converting down to where you need it has lower losses than trying to shove 100A of 12VDC to a rack (and that assumes said rack only consumes 1200W; I think a modern rack can easily draw 3600-4800W fully loaded with servers, which would mean up to 400A at 12V to the rack - calling for seriously thick cabling).

        Oh, and what happens when you have high currents flowing at low voltages? You get welding. Because I²R heating is far more effective when you're passing huge currents through.
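For reference, the 277V and 208V figures in the thread above both fall out of the same three-phase arithmetic: the line-to-neutral voltage is the line-to-line voltage divided by √3. A minimal sketch of that calculation (purely illustrative, not from any comment):

```python
import math

# Line-to-neutral voltage in a three-phase system is the line-to-line
# voltage divided by sqrt(3); that is where both 277 V and 208 V come from.
print(round(480 / math.sqrt(3), 1))  # ~277.1 V: one leg of a 480 V three-phase feed
print(round(120 * math.sqrt(3), 1))  # ~207.8 V: line-to-line on a 120/208 V system
```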
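And a quick back-of-the-envelope sketch of the I²R-loss and rack-current arithmetic in the comment above; the cable resistance is an assumed round number for illustration, not a measured value:

```python
# Illustration of the I^2 * R argument: for a fixed load power, cutting the
# supply voltage by a factor of k multiplies the resistive cable loss by k^2.

def wire_loss_watts(load_watts, volts, cable_resistance_ohms):
    """Resistive loss in the feed cable for a load drawing load_watts at volts."""
    amps = load_watts / volts
    return amps ** 2 * cable_resistance_ohms

R = 0.01  # ohms, assumed; same cable in both cases

# Delivering the same 120 W load two ways:
print(wire_loss_watts(120, 120, R))  # 120 V @ 1 A  -> 0.01 W lost in the cable
print(wire_loss_watts(120, 12, R))   # 12 V @ 10 A  -> 1.00 W lost (100x more)

# Current needed to feed a fully loaded rack at 12 V DC:
for rack_watts in (1200, 3600, 4800):
    print(f"{rack_watts} W at 12 V -> {rack_watts / 12:.0f} A")
```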

  • The modern data center is a vestige of the time when computing power was expensive.

    Now, computing power is cheap and storage is cheap. The question is scaling. I think we tend to discount the role that physical hardware plays in this process when we talk about "the Cloud."

    Back in the late 1990s, people were predicting that the future data center would look like something out of Star Trek: many small "cells" which stored data or executed processing tasks, linked together by a neural net-like mesh that adapti

    • The World is Distributed. People are Distributed. The web is Distributed. Centralized Computing / Centralized Storage is irrelevant. Resistance is futile, you will be distributated.
    • by eyegor ( 148503 )

      Clouds, virtual systems, clusters, stand-alone servers all benefit from being in an environmentally friendly facility where there's lots of networking capacity and sufficient power and cooling. While home users have dedicated desktop or laptop computers, it's far more power efficient to use technologies like blade systems to package computing power. Regardless, everything's still in a data center where the equipment can be protected.

      I used to work at a very large ISP where there were a half dozen data cente

      • by Lennie ( 16154 )

        Have a good look at what Google and Facebook are doing, and at how Facebook is very open about it and is collaborating with others in the Open Compute Project.

        Companies like HP and Dell are looking very closely at what they can use from these designs to build servers for the rest of us. I think Dell is even one of the members of the Open Compute project.

        The most important "innovation", if you ask me, is to close off the hot corridor and have all the connectors at the front of the server in the cold corridor and mak

        • What's remarkable is the PUE factors Google, Facebook and Apple can get in their data centers. I still think these are due to the homogeneous nature of the equipment they place there, and the fact that they don't have to worry about the multi-tenancy of commercial data centers. Their middle-of-nowhere locations make things such as venting from the hot aisle possible. In NYC, the 111 8th Avenue data centers are a good example of the constraints put on the various operators. Hopefully Google can help remediate that.

          In

          • by Lennie ( 16154 )

            If HP, Dell, Supermicro and others come up with a "standard" which puts all the connectors and indicators of servers on the front, then maybe we could all benefit the same way.

    • by mlts ( 1038732 ) *

      Data centers likely won't be going anywhere anytime soon. Businesses [1] tend to like keeping their critical stuff in a secured spot.

      What I see happening in data centers is a few changes:

      1: Data center rack widths will increase. This allows more stuff to be packed in per rack unit.

      2: There will be a standard for liquid cooling where CPUs, RAM, GPUs, and other components that normally use heat sinks will use water jackets. Instead of an HVAC system, just the chilled water supply and a heat exchanger wo
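As an aside on the PUE figures mentioned earlier in the thread: PUE (Power Usage Effectiveness) is simply total facility power divided by the power actually delivered to the IT equipment, so a value near 1.0 means nearly every watt entering the building reaches the servers. A tiny illustration with made-up numbers (not reported figures):

```python
# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt entering the building reaches the servers.
# The kilowatt figures below are illustrative assumptions, not reported numbers.

def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

print(pue(1070, 1000))  # ~1.07: the kind of ratio large single-tenant sites report
print(pue(1900, 1000))  # ~1.90: closer to a typical multi-tenant colo of that era
```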
