Power IT

DC Power Poised To Bring Savings To Datacenters

snydeq writes "InfoWorld's Logan Harbaugh follows up his '10 IT Power-Saving Myths Debunked' to argue in favor of using DC power in the datacenter. The practice — viewed as a somewhat crackpot means for reducing wasteful conversions in the datacenter just a few short years ago — has gained traction to the point where server vendors such as HP, IBM, and Sun are making DC power supplies available in their server wares. Meanwhile, Panduit and other companies are working to bring down another barrier for DC to the datacenter: a standardized 400-VDC connector and cabling solution. And with GE working to list 600-VDC circuit breakers with the Underwriters Labs, DC's promise of reduced conversion waste could soon be commonly realized."
This discussion has been archived. No new comments can be posted.

  • by cosm ( 1072588 ) <> on Wednesday January 14, 2009 @03:44PM (#26453981)
    Tesla smiles in his grave as Franklin catches on fire from Nikola's coil-arcs-of-doom.
    • by ( 1108067 ) on Wednesday January 14, 2009 @04:11PM (#26454407) Homepage Journal

      Alternate view: []

      Or, to summarize - if you take a high-efficiency AC system, convert it to 480 volts, and step down to only 240 volts (and all of today's boxes can run on either 110 or 220-240), you can get to within 1% of the DC system.

      Add to that the savings in materials (1.5" copper wiring? Booster cables for diesels aren't anywhere near that thickness) and there's no real reason to change.

      In fact, the biggest saving would probably be if we went from 120v to 240v for everything. One less down-conversion, etc.

      • by wsanders ( 114993 ) on Wednesday January 14, 2009 @04:19PM (#26454563) Homepage

        You can achieve substantial savings just by wiring your datacenter for 240V only (in the US). The rest of the world knows this already, but every time I suggest this in the US, people look at me like I have monkeys flying out my nose. Half as many amps == half as many power strips, half as many UPS devices, half as much wire, etc. With the exception of cheap-ass wall wart powered devices, I have not encountered any equipment that was not 240V compatible in the US in years.

        • Re: (Score:2, Informative)

          by aaarrrgggh ( 9205 )

          It isn't half as many amps, it is only a 15% reduction since 208V is used in the US for data centers. The benefit (albeit at the expense of fault current) is eliminating one AC:AC transition in the process. The same could be said for getting equipment to operate at 277VAC similar to lighting in the US.

          • by ( 1108067 ) on Wednesday January 14, 2009 @05:30PM (#26455821) Homepage Journal
            For those with equipment that is currently running on 110-120V, it's a 50% amperage reduction; anything designed to run on 208V will also handle 220-240V just fine.
        • Re: (Score:3, Informative)

          by ckhorne ( 940312 )

          It should be noted that the only savings is in the infrastructure, not in ongoing energy costs. Power = current * volts, so whether you're using 240V or 120V, the overall power (measured in watts) is the same, and thus the overall power bill is practically the same. There will be slight differences in efficiencies, but you really won't gain all that much.

          The biggest difference is, yes, you could get away with less overall wiring costs to carry the same amount of power.
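          The parent's point (same power, less current at higher voltage) can be sketched in a few lines; the 2400W load is a made-up illustrative number:

          ```python
          # At constant load power, P = V * I: the energy bill doesn't change
          # with supply voltage, only the current (and hence the wiring) does.
          def current_amps(power_w, volts):
              """Current drawn by a load of power_w watts at the given voltage."""
              return power_w / volts

          load_w = 2400.0  # hypothetical rack load in watts

          i_120 = current_amps(load_w, 120.0)  # 20.0 A
          i_240 = current_amps(load_w, 240.0)  # 10.0 A

          # Same watts either way; the 240V circuit carries half the amps.
          assert i_240 == i_120 / 2
          ```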

        • Nonsense (Score:5, Informative)

          by ( 463190 ) * on Wednesday January 14, 2009 @05:48PM (#26456141) Homepage

          Half as many amps == half as many power strips, half as many UPS devices, half as much wire, etc.

          In the split-single-phase arrangement that is used in the USA, the only difference is whether there's a neutral wire in the conduit. For a given wire gauge you don't get any more power from a 240V circuit, because they're fundamentally the same thing, just one has kind of a "center tap". That copper is a very marginal savings (3 conductors vs 4) when you figure all the labor, conduit, breakers etc that's going to be put in anyway. And if you're dealing with 3-phase it's even less (4 vs 5 conductors).

          In a colo environment it would be smarter to run 120 (with shared neutral) so people can use the normal plugs and cables that they have on hand, although in a single-customer datacenter where all your equipment is sure to have modern power supplies, fine, go with 240. But it's not hard to wire 240V outlets as needed (eg for a high density unit like a blade chassis or cisco gear).

          You don't use any fewer power strips because you still need a plug per computer regardless of the voltage, and you still need the same amount of UPS equipment because your VA and Wh would be the same for a 120V vs 240V UPS of a given price or physical size. It may surprise you that 120V and 240V UPSes generally have the same internals; the only difference is the plugs and cables that they're outfitted with on the back panel. Try measuring the voltage across two hots on different plugs of a "120V" UPS - you'll probably see 240V.

          • Re: (Score:3, Interesting)

            by Sycraft-fu ( 314770 )

            One advantage is that switching PSUs seem to be more efficient with the higher voltage input. It's not a lot, like maybe 2% on most PSUs, but still. If you have 50kW worth of computers, 2% savings is not trivial, since it also = less heat output.

        • by Gallomimia ( 1415613 ) on Wednesday January 14, 2009 @06:44PM (#26457063) Homepage

          Half as many amps == half as many power strips, half as many UPS devices, half as much wire, etc.


          This is all utterly and completely false. Number of amps does not affect number of power strips as other posters have proven with math the first computer could have done. (20 == 20 => true)

          The number of amps does not affect the number or capacity of UPS devices; it is the power in watts (or volt-amps, for apparent power) which dictates this, and it remains constant for a specific device no matter the supply voltage.

          Your wire savings formula is flawed, unless the 240/415V technique is used as proposed (sort of) by another poster.

          As mentioned by another poster, data centers are 3-phase electrical installations. This means there are three wires with alternating voltage in them, all peaking at different times. Many installations use a wiring technique called "Edison three-wire" (I have no idea if they use this in data centers, but if they don't, they're stupid). This brings two live wires and one neutral wire to a location requiring two circuits of a given amperage. Let's call it 30A, since this requires 10GA wire on a short run from the panel. Two circuits, three wires. If you remove the ability to use Edison three-wire by using 208V circuits involving no neutral and two live wires, you increase the wire usage by 50%. An increase in voltage yields a decrease in current by the same factor, so the savings in current is only 42%: 150% of the number of wires times 57% as much ampacity. 17A requires 12GA wire, or run 14GA wire with a 15A circuit breaker which you pray no one trips on a daily basis. (Wow, that's reliable.) This does not factor in the legal requirement for each circuit to have a separate ground wire, adding to the number of wires.
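          The current scaling claimed above checks out numerically (purely illustrative; constant load power assumed):

          ```python
          # Raising the supply voltage from 120V to 208V at constant power
          # scales the current down by the ratio of the voltages (P = V * I).
          V_OLD, V_NEW = 120.0, 208.0
          i_old = 30.0                    # original branch-circuit current, amps

          i_new = i_old * V_OLD / V_NEW   # about 17.3 A
          savings = 1.0 - V_OLD / V_NEW   # about 42% less current

          assert round(i_new) == 17       # matches the 17A figure above
          assert round(savings * 100) == 42
          ```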

          10GA wire has a cross-sectional area of 10.4 thousand circular mils, or kcmil. (A mil is 0.001 inch; one circular mil is the area of a circle with a diameter of 0.001 inch.)

          12GA wire is 6.53kcmil
          14GA wire is 4.11kcmil

          Wikipedia on AWG []

          For argument's sake, and because the calculation dictated that a circuit now needs 17A, down from 30A, we'll use 12GA wire. If you want to argue that it could be 14GA, consider that if you cut the available current by 2A you will likely need to increase the number of circuits to an enclosure or other fixture such as cooling or lighting. This will require larger distribution panels, bigger feeder cables, larger conduits, and all-around more electrical capacity when dealing with things such as generators and UPS. This will eat into the savings.

          Your new wire is about 63% as much copper as the old wire. You require 150% as many wires, not counting grounding/bonding. Your total mass of copper used is now roughly 94% as much as before. A total savings of: precisely dick.

          Or, you can double the voltage of every circuit in the data center and leave the electrical network topology the same. This requires new transformers and new distribution equipment, and you now run the risk of never being able to provide a customer a 120V circuit for their wall-wart powered device. (I'm assuming that's a transformer block, many of which now support 240V anyway.) You could save about 60% of the copper mass, and then spend 10x more replacing all the other equipment that delivers the electricity around the data center, keeping in mind the tremendous cost of hiring a certified electrician to install it all.

          Wire is the cheapest piece of equipment in the entire building, and it's the only thing that will be saved in the 240V datacenter, even if you start a brand new building from scratch with this in mind from the first mark on the design plan. Get over it. No one wants to do it.

          Perhaps an electrical engineer could come up with some more promising data for converting data centers to 240V up from 120V, however I'm quite certain an EE wouldn't say "WHERE'S MY FISH YOU IDIOT"

          Topic Change

          Drifting away from t

      • by aaarrrgggh ( 9205 ) on Wednesday January 14, 2009 @04:32PM (#26454769)

        The concept is actually to go with a European 240/415V system rather than ever using US voltages of 480/277 and 120/208V; you step down from medium voltage directly to the 400V. "Best practices" would be to have an offline or line-interactive UPS.

        The biggest gain is actually in the power supplies and not the electrical distribution system. I'm a fan of 600VDC in the data center from an engineering perspective, but there are huge safety issues that need to be resolved to make it viable. (DC arcs don't self extinguish as there is never a zero crossing.)

        When these discussions first started five or so years ago, my theory was that for it to be practical you would need a 3N design rather than today's 2N system, as all work would need to be done on cold busses and you would still have to maintain 2N redundancy.

    • by Smidge204 ( 605297 ) on Wednesday January 14, 2009 @04:13PM (#26454439) Journal

      Ere... not sure why "Insightful" since Tesla was the one who invented the AC polyphase distribution system, and would probably not approve of using Edison's (not Franklin's?) DC distribution method.

      That said, AC power made a lot more sense before the advent of solid-state power electronics. You can't reasonably convert one DC voltage to another efficiently without going through an AC stage and a transformer, which was a major hurdle in using DC power. High-frequency switching supplies can do the job just fine, though.

      • by hardburn ( 141468 ) <hardburn&wumpus-cave,net> on Wednesday January 14, 2009 @05:11PM (#26455471)

        I think Tesla would be just fine with DC power if he saw what we're using it for today. Back then, there wasn't much stuff that cared which way the current flowed. Lights and electric heaters work fine either way, and motors are more efficient on AC, as is any power source that depends on spinning a generator (almost everything besides solar cells). But once you start throwing diode junctions and electrolytic capacitors into the mix, things change.

      • by amn108 ( 1231606 ) on Wednesday January 14, 2009 @06:44PM (#26457055)

        Tesla was not discriminating against DC power in general; he was merely certain that AC was the winner for transporting electricity over long distances, to which Edison objected in favour of DC, but Tesla turned out to be right. To my knowledge, Tesla never objected scientifically to DC being used wherever else it was suited - such as medium and shorter path interconnects and fine electronics where precise voltages are needed.

  • by jellomizer ( 103300 ) on Wednesday January 14, 2009 @03:46PM (#26454003)

    Who would have thought that GE would be a big supporter of DC.

  • by Wonko the Sane ( 25252 ) * on Wednesday January 14, 2009 @03:47PM (#26454019) Journal

    I felt a great disturbance in the Force, as if millions of Tesla fanboys suddenly cried out in terror and were suddenly silenced. I fear something terrible has happened.

    • by Shakrai ( 717556 )

      as if millions of Tesla fanboys suddenly cried out in terror and were suddenly silenced

      Sorry, my mistake [] ;) We won't let it happen again [] in the future :P

  • by fred fleenblat ( 463628 ) on Wednesday January 14, 2009 @03:47PM (#26454021) Homepage

    Suggestion for the DC power supply designers: have a heart and build GFCI into the spec.

    • by rabtech ( 223758 )

      While this is a good idea in the sense that no one likes being electrocuted, the risk of DC shock is more about burns than anything else, since there isn't any alternating current to cause fibrillation of the heart.

      The idea of GFCI is to detect the current imbalance (magnetically) within a short enough time frame that the AC jolt doesn't cause a disruption of your heart rhythms.

    • Even GFCI isn't enough; you have to also have active arc-fault detection, and you need all this fault detection throughout the entire system. It isn't as easy as stringing together 480 D-Size batteries...

  • What's the quote? "One Test is Worth 1,000 Expert Opinions".

    So build a few variations and let's see what the deal is.

  • by mrchaotica ( 681592 ) * on Wednesday January 14, 2009 @03:49PM (#26454063)

    I don't run a datacenter, but I sure would like to get rid of the power bricks that all small electronic appliances seem to come with these days!

    • Re: (Score:2, Interesting)

      by ryanleary ( 805532 )
      Well, particularly for those small devices it would seem that they would still require step-down circuitry - likely a DC-DC switching converter, since a plain transformer won't pass DC. It just won't require rectification and smoothing.
    • Re: (Score:3, Interesting)

      by bughunter ( 10093 )

      Agreed. My PC and media installations are plagued by a plethora of these heat-generating devices, as I add on printers, ethernet devices, networked disks, extra storage, converters, encoders, decoders, and the like. I had to learn to include plans for a well-ventilated place for these things.

      Also, it's an inherently good idea for power savings. Power supply efficiency can go way up when both a) total power goes up and b) the supply can be designed for a constant load (which would be the case for a large

    • Re: (Score:3, Interesting)

      I don't run a datacenter, but I sure would like to get rid of the power bricks that all small electronic appliances seem to come with these days!

      probably because these 'wall-warts' are linear converters - seldom better than 40% efficient.

      As more stuff conforms to the ENERGY INDEPENDENCE AND SECURITY ACT OF 2007, these will become much less of an issue.

      • Not so much anymore (Score:3, Informative)

        by Sycraft-fu ( 314770 )

        These days they are usually switching power supplies, which are quite efficient (not to mention smaller).

      • Re: (Score:3, Informative)

        Linear power supplies in consumer devices disappeared years ago. I don't think I ever saw a wall-wart that actually had linear regulators in it, even back in the 80's. They're all switching supplies now, running very efficiently at very high frequencies, which is why you can get 50-100 watts out of that little brick that powers your laptop.
    • Re: (Score:3, Insightful)

      I don't run a datacenter, but I sure would like to get rid of the power bricks that all small electronic appliances seem to come with these days!
      We (EPA?) should start with standardizing 12 volt DC connectors to let a PC run directly off of a UPS without going through the DC->AC->DC pass.

      • Re: (Score:3, Informative)

        by gregmac ( 629064 )

        Having a standardized DC plug would be a good thing. There could be power bars and UPSes which provide it, and eventually household outlets could have DC sockets, alongside the 120V sockets.

        USB [] would almost be a good choice, but unfortunately it only provides 5V, and 500 mA. Enough to power/charge some small devices, but not everything. PoweredUSB [] comes a step closer, but it is proprietary and the current is probably still too low to be useful for everything.

        A connector/plug like Serial ATA power [] is probab

    • This is snake oil (Score:5, Informative)

      by jmorris42 ( 1458 ) * <> on Wednesday January 14, 2009 @04:33PM (#26454775)

      > I don't run a datacenter, but I sure would like to get rid of the power bricks...

      DC vs AC wouldn't help you rid yourself of power bricks. No more than it can help a datacenter get rid of power supplies in each server. Telco equipment runs on 48 volts not to save electricity but because of the way telephone exchanges are built. Telephones don't go down, period. So how do they accomplish this miracle? Huge battery banks. Back in the day a DC-AC conversion system large enough to run a whole switch plus drive every telephone would have been all but impossible. So they just ran everything directly from the batteries and used the mains to charge the batteries.

      This DC in the datacenter thing is just a green craze that will pass. It is pure unadulterated snake oil. Go reread the summary. They ain't even doing the smart thing and adopting the telco 48V standard. Does anything in a server run on 48V? No. Does anything in a server run on the 400V they are proposing? No. So a DC-DC conversion will be needed, i.e. a switch mode power supply. Guess what is in a current server? A switch mode power supply. Current PC power supplies are available with efficiencies over 90% without straying too far off the mainstream. I seriously doubt these DC powered supplies will be much better, and in the end that is the ONLY number that matters. Except these DC installations have to factor in the power loss from the big AC-DC conversion and worry about redundancy, backup power, etc.

      • Re:This is snake oil (Score:5, Informative)

        by evanbd ( 210358 ) on Wednesday January 14, 2009 @05:32PM (#26455859)

        Switch mode supplies that run off DC input don't require a big high voltage input capacitor. They also don't require complicated PFC circuitry. Basically, a modern AC-DC SMPS has an input boost converter that goes to ~380-400VDC, and then a forward or flyback converter that turns that into usable voltages. This is required to get power factor correction, which is required for high efficiency on a large system. This system moves the first half of that outside the computer into one large device. Running the large converter off three-phase power makes it mildly more complex but removes the bulk capacitors. Between that and the fact that there is only one of these converters, it's a lot cheaper. Also, power electronics generally get more efficient as they get larger, for a variety of reasons; this takes advantage of that.

        The major economic reason to run off 48VDC instead of 400VDC is that some gear already exists thanks to the telcos; the major reason not to is thicker more expensive wiring. Which one wins depends on the size of the market, and it sounds like the market is big enough that the 400VDC probably wins.

        If you really wanted to, you could push the AC-DC efficiency higher with more expensive electronics -- but centralizing it is cheaper, so why bother?

      • Re: (Score:3, Informative)

        by Firethorn ( 177587 )

        They ain't even doing the smart thing and adopting the telco 48V standard.

        The 48V standard does suffer from a rather large problem - the current necessary to support the wattage a modern datacenter needs results in rather large wire sizes, even bus bars.
        With 240V AC, you can ship a little more than 4kW over a 12 gauge wire. With 400V DC, you should be able to ship almost 8kW.

        IE you can cut your wiring costs substantially, or voltage losses on the wires.

        As for the extra costs, well, obviously they're hoping to be able to sell enough 400V/600V stuff to become competitive.
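        The wire-capacity figures above follow from multiplying voltage by the conductor's current rating; the 20A ampacity for 12 gauge used here is an assumed typical rating, not a code citation:

        ```python
        # Deliverable power over a given conductor is capped by its ampacity,
        # so it scales linearly with voltage: P_max = V * I_max.
        AMPACITY_12GA_A = 20.0  # assumed current rating for 12GA copper, amps

        def deliverable_kw(volts, amps=AMPACITY_12GA_A):
            return volts * amps / 1000.0

        kw_48  = deliverable_kw(48.0)   # 0.96 kW -- telco-style 48VDC
        kw_240 = deliverable_kw(240.0)  # 4.8 kW
        kw_400 = deliverable_kw(400.0)  # 8.0 kW

        # Same wire, roughly double the deliverable power at 400V vs 240V.
        assert kw_400 > kw_240 > kw_48
        ```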

    • by zippthorne ( 748122 ) on Wednesday January 14, 2009 @05:29PM (#26455795) Journal

      There is one already: USB power. Fairly low current, but a host of consumer devices from bluetooth headsets to GPS devices to iPods use it as their standard charging source.

      It's a little awkward because there are more pins than ought to be strictly necessary, but it's a relatively reasonable compromise over the former solution of no common standard at all.

    • Re: (Score:3, Informative)

      Opportunity for some power supply manufacturer here.

      Some time ago I was tasked with solving the brick proliferation problem for a national retailer. The cheap little wall-plug PSU's that were proliferating under the POS lanes were considered dangerous.

      My response was to talk to a power supply manufacturer and get them to design a single wall-mount PSU with multiple DC leads using a variety of connectors to fit the various POS peripherals (fortunately they all used a standard DC voltage).

      The output le

  • In the good old days (Score:4, Informative)

    by oldzoot ( 60984 ) <morton.james@comca[ ]net ['st.' in gap]> on Wednesday January 14, 2009 @03:50PM (#26454081)

    In the 80's we built custom interfaces for large computers using wire-wrap Standard Logic Inc. wiring modules. The planes of wiring were assembled into rackmount chassis which were fed DC power via a vertical bus-bar system in the rack. The busbars were about .5 X 1 inch solid copper, insulated by shrink tubing with holes cut for the threaded holes in the busbar. The power supplies were rackmount 100 or 200 A Lambda supplies providing either 5 volts or 12 Volts. It was occasionally a pain to be called into the computer center in the middle of the night to replace one of those heavy power supplies - at least they were at the bottom of the rack.


    • You didn't work for Concurrent Computer, did you? I remember the same sort of beastly power supplies - I still have a couple of them. The rumor (possibly fact) was that Concurrent (which was Perkin Elmer Data Systems) had some of the first patents on switching supplies.

      Those were the days - a 32 bit Floating Point Unit made out of _discrete_ 74xx series chips, mainly 74181's. It took a full 17" x 17" board. Youch! Their flagship system, a 3280 [] clocked a whole 6 MIPS for a uniprocessor system. I think at one

  • WTF? (Score:5, Funny)

    by Shadow Wrought ( 586631 ) * <shadow.wrought@g ... om minus painter> on Wednesday January 14, 2009 @03:52PM (#26454115) Homepage Journal
    I thought the power in D.C. caused waste and inefficiency.
    • Re:WTF? (Score:5, Informative)

      by ivan256 ( 17499 ) on Wednesday January 14, 2009 @03:55PM (#26454157)

      The article can basically be summed up as follows:

      Though there are more transmission losses with DC than with AC, if your AC->DC conversion can be done with an outdoor-rated supply, you save more in cooling by doing the conversion outdoors than you'd lose in transmission losses.

      • Re: (Score:2, Insightful)

        by amorsen ( 7485 )

        Though there are more transmission losses with DC than with AC

        There aren't, at the same voltage. In fact AC loses slightly more at a given voltage, up to a lot more for really long wires.

        • Re:WTF? (Score:5, Informative)

          by girlintraining ( 1395911 ) on Wednesday January 14, 2009 @04:51PM (#26455123)

          In fact AC loses slightly more at a given voltage, up to a lot more for really long wires.

          Whiskey. Tango. Foxtrot. Line losses are based on current not voltage. And with AC you can convert current and voltage with a transformer with a very high Q. That's why AC (Tesla) beat DC (Edison) at the turn of the century for power distribution. Also, direct current generates more heat than alternating current. -_-

          • Re:WTF? (Score:4, Informative)

            by Firethorn ( 177587 ) on Wednesday January 14, 2009 @05:42PM (#26456049) Homepage Journal

            No WTF.

            At a given voltage/amperage, DC will lose less power per mile than AC. However, AC transformation equipment is cheaper/more efficient than DC.

            At a couple hundred miles, DC becomes the more cost effective solution for a high power run.

            Also, direct current generates more heat than alternating current

            Not at the same wattage.

      • Re:WTF? (Score:5, Informative)

        by rabtech ( 223758 ) on Wednesday January 14, 2009 @04:29PM (#26454719) Homepage

        DC power lost the "current wars" because we didn't have solid state transformers capable of doing voltage step up/down like we did with AC back in the day (simple wound transformers).

        These days even the cost of really high power DC transformers (>500,000 volts) is offset by more efficient transmission and a number of notable long-distance power lines are actually DC for that reason (lower losses offset cost of transformers).

        By stepping up the voltage, such as to 48v, you can significantly lower the losses, shrink required conductor sizes, make the circuit breakers cheaper, and still derive the same benefits (48v->12v->5v->3.3v DC transformers are actually fairly cheap, unlike their high-power cousins).

        Why do you think some car makers are switching to 48v DC on-board power and 48v batteries? You can greatly increase efficiency and lower weight since so many devices are electrical on modern cars.

    • While your comment is, indeed, insightful, allow me to say the following:


    • Not half as much as when you give it to an Anonymous Coward.

  • by Anonymous Coward on Wednesday January 14, 2009 @03:53PM (#26454121)

    Telco gear tends to be 48VDC all over the place. It just works. Speaking as a guy working at a telco in the IT department, I'm hugely in favor of moving to 48VDC servers.

    • I'm not a telco guy but I am aware of the 48VDC standard.

      Why didn't they just do the same for servers in a datacenter?

      • They seem to be heading in that direction; but I assume that it has something to do with the fact that most servers have an evolutionary heritage that goes back to normal x86 boxes plugging into generic wall current. Sure, today's servers are specialized a bit, but their design very much takes advantage of the extraordinary economies of scale to be had in sharing components with normal computers. Not until there is a critical mass of very large datacenter installations (which it seems like there is, because
  • by olddotter ( 638430 ) on Wednesday January 14, 2009 @04:00PM (#26454227) Homepage

    I'm not an EE, but back during the dotboom I thought it would make sense to have a big UPS in the data center that output the voltages that motherboards expect as input. I almost thought of rigging my own experiment using laptops as servers and feeding them all 12VDC directly from the UPS battery pack.

    OK, rip it apart guys: what is wrong with that plan?

    • by autocracy ( 192714 ) * <[slashdot2007] [at] []> on Wednesday January 14, 2009 @04:11PM (#26454413) Homepage

      Power loss over distance. 12 volts loses four times as much energy in one foot of travel as 24 volt transmission does. Telecom gear, for example, runs on 48 volt DC. For the few feet of travel in your laptop, 12 volts is fine. Crossing a room at 12 volts, you'd get too much voltage drop.

      See []

      Transmission efficiency is improved by increasing the voltage using a step-up transformer, which reduces the current in the conductors, while keeping the power transmitted nearly equal to the power input. The reduced current flowing through the conductor reduces the losses in the conductor and since, according to Joule's Law, the losses are proportional to the square of the current, halving the current makes the transmission loss one quarter the original value.
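      The Joule's law relationship quoted above can be checked directly; the 0.05-ohm round-trip cable resistance is a made-up value for illustration:

      ```python
      # Conductor loss is I^2 * R; doubling the voltage at constant load
      # power halves the current and therefore quarters the wire loss.
      def line_loss_w(load_w, volts, wire_ohms):
          i = load_w / volts        # current drawn by the load
          return i * i * wire_ohms  # power dissipated in the cable

      R = 0.05  # hypothetical round-trip cable resistance, ohms
      loss_12v = line_loss_w(120.0, 12.0, R)  # 10 A -> 5.0 W lost
      loss_24v = line_loss_w(120.0, 24.0, R)  # 5 A  -> 1.25 W lost

      # The "four times as much energy" the parent mentions for 12V vs 24V.
      assert loss_12v == 4 * loss_24v
      ```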

    • by rabtech ( 223758 )

      Because the power delivered is roughly voltage * current (amps), by bumping the voltage you can lower the current and carry the same effective power across smaller wires, which is a huge cost savings given the cost of copper, circuit breakers, etc.

      *Yes, I know this is a very rough description and I haven't posted the proper mathematical formulas.

    • Single point of failure. At least the last time I proposed this idea on Slashdot, the prevailing mods seemed to think this was the case.
      On a related note, what do you have at your desk that actually requires more than 12V? If we are able to make this switch in a data center, why not in an office? If we got LED lighting (obviously fluorescent lighting requires higher voltages, but who's really gonna miss fluorescent light anyway) I can't think of anything on my desk that actually runs on AC, rather than conv
  • by mlts ( 1038732 ) * on Wednesday January 14, 2009 @04:01PM (#26454229)


    No power supply needed for each machine. This removes a major point of failure. Instead, one would need to just step down voltages to the 5 and 12 volt rails. This also helps with cooling because the room AC/DC converter can be cooled with a dedicated system, either liquid, or part of the HVAC system.


    48 VDC needs a dedicated connector with a high plug/unplug cycle rating that people know is 48 volts and 48 volts only. It sucks when you have to manually wire it up, because this takes time and there is always the risk of getting zapped if you don't throw the right circuit breaker (or pull the right fuse) on a telco rack where 48V is in use.

    Because there is only one 48VDC power supply for a room, it has to be held up to a lot more rigorous standards than average mains current. It has to not just provide 48VDC, but provide it under extremely heavy load without the voltage dropping by much.

    Maybe 48 volts would be a new computer standard. The key is not having to wire it up manually like some stereo speakers, but giving it a dedicated, foolproof, power connector that Joe Twelvepack who is slurping down his seventh can of Bud Light can easily and reliably plug and unplug while staggering around in the back of the server room until his shift ends.

    • by PPH ( 736903 ) on Wednesday January 14, 2009 @04:16PM (#26454499)

      Another pro:

      A UPS would consist of nothing more than a battery charger and 48V battery.

      • by harmic ( 856749 ) on Wednesday January 14, 2009 @04:36PM (#26454837)
        As someone else here has already noted, 48VDC power distribution has been standard in telco exchanges since... forever, as far as I know. When I first started working in telecoms (early 90's) the exchange would have a separate power room with rectifiers and huge battery banks. The resulting 48VDC was distributed through the equipment room using large busbars. In later years this approach has mostly been replaced with smaller power supplies installed in each suite of racks, but the principle is still the same. It has always seemed somewhat ridiculous to me that one powers one's server by passing 240 or 110 VAC into a UPS, converting it to DC, charging a battery with it, inverting it back up to 110/240, and feeding it into the server, which then converts it back to DC.
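        The conversion chain described above compounds multiplicatively, which is the whole case for a DC bus. The per-stage efficiencies below are illustrative guesses, not measured figures:

        ```python
        # Each conversion stage multiplies in, so skipping the final
        # DC -> AC -> DC round trip keeps more of the input power.
        from math import prod

        # Conventional: rectifier -> battery -> inverter -> server AC PSU
        ac_chain_eff = [0.95, 0.98, 0.95, 0.90]
        # DC bus: rectifier/charger -> battery -> server DC-DC stage
        dc_chain_eff = [0.95, 0.98, 0.93]

        ac_total = prod(ac_chain_eff)  # roughly 0.80 end to end
        dc_total = prod(dc_chain_eff)  # roughly 0.87 end to end

        assert dc_total > ac_total  # fewer stages, less compounded loss
        ```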
      • Re: (Score:3, Interesting)

        by zippthorne ( 748122 )

        I've often wondered why the UPS is *before* the computer power supply, anyway. It seems to me that a couple of lithium cells in the right places could keep the important bits going for just long enough to get through short power hiccups.

        e.g. keep just the ram and proc going for a few seconds before suspending to ram, followed ultimately with some kind of chipset-powered auto-hibernate when cell voltage indicates that it can't hold the suspend much longer and still retain the option of hibernation.

    • Re: (Score:3, Interesting)

      by MBCook ( 132727 )

      The whole 48v DC thing sounds good to me (I don't run a data center though, or anything like it).

      That said, the article discusses (and I've seen it said elsewhere) the large copper bars used as conductors in this kind of setup, and how they lose more power between the wall and the rack than AC wiring does.

      I can see the appeal of going TOTALLY 48v, but why not run AC to the racks, and just have a large converter for every two or three that provides the full DC power and backup for those racks? You're still avoidin

    • by rabtech ( 223758 )

      Because there is only one 48VDC power supply for a room, it has to be held up to a lot more rigorous standards than average mains current. It has to not just provide 48VDC, but provide it under extremely heavy load without the voltage dropping by much.

      No, you can have multiple DC supplies dumping power onto a common supply rail with just a few extra electronics and protection devices. You don't have sync issues like with AC power where everything needs to be exactly in-phase.

      Furthermore these devices can be placed in the basement, on the roof, etc in locations that aren't necessarily required to be held at some constant cool temperature, as they can function in a much wider range without noticeable loss of service life.

    • Re: (Score:3, Interesting)

      by Cobralisk ( 666114 )

      A. You won't get zapped from 48VDC. If you are extremely sweaty you might feel a slight tingle, but nothing to get excited about.

      2. Just wire up some big batteries in parallel and you don't have to worry about voltage drop under load. As long as the rectifiers can keep up with the current needed to float the batteries at 48V (really more like 52V in practice) you're fine. As stated by an earlier poster, this is proven technology in use by telcos for a very long time.

      D. This whole article is about datacenters.

  • Edison vs Tesla (Score:4, Insightful)

    by swell ( 195815 ) <jabberwock@po[ ] ['eti' in gap]> on Wednesday January 14, 2009 @04:07PM (#26454347)

    One can't help but reflect upon these two and their stubborn support of DC and AC respectively. Edison created a circus atmosphere demonstrating the dangers of AC. He electrocuted dogs & other animals and even participated in the design of the electric chair to prove his point.

    Edison's financial ambition was part of the problem, and his inability to understand AC, but mostly it seems to have been an emotional attachment to DC.

    Let's hope that in our time emotion and personal gain have no part in such decisions.

  • by Waffle Iron ( 339739 ) on Wednesday January 14, 2009 @04:13PM (#26454437)

    a standardized 400-VDC connector and cabling solution

    I set this kind of system up myself and it works great, assuming you need a lot of cores. I strung together 296 Intel Core 2 Duo chips in series across the 400VDC supply, so each one gets the specified 1.35 volts. If I want to overclock, I just take a set of alligator clips and shunt across a few dozen of the chips, and it boosts the voltage to the remaining CPUs.

    The only problem is that with so many chips, I get occasional failures, just like I do with my old Christmas lights. Then I have to try shunting around each of the CPUs by trial and error until I isolate the burnt out one before I can get my cluster running again. Oh yeah, I also have to be really careful to keep any peripherals I plug in away from each other and/or grounded objects.
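For what it's worth, the parent's series-string arithmetic does check out, at least in the joke's idealized world where every CPU is an identical load and the supply voltage divides evenly:

```python
# Series-connected identical loads divide the supply voltage equally.
# Values taken from the parent comment: a 400 VDC bus and 296 chips.
supply_v = 400
chips = 296
per_chip = supply_v / chips
print(f"{per_chip:.2f} V per chip")  # ≈ 1.35 V, the Core 2 Duo's nominal Vcore
```

Shunting out chips, as described, raises the per-chip voltage for whatever remains in the string, which is exactly why real power distribution is done in parallel.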

  • by jockeys ( 753885 ) on Wednesday January 14, 2009 @04:16PM (#26454493) Journal
    Just a side note: this has already been gaining ground in the UPS field for at least 5 years, and it's not terribly hard to find UPS units and PSUs with DC connectors.

    (Using a UPS without a DC output means converting the battery's DC to AC, then sending it to the PSU, where it's converted back to DC again.)
  • That concluded that using the European system of 230/400 V three-phase AC for distribution, splitting out to 230V single-phase AC near the point of use, was almost as efficient as a 400V DC system and far cheaper and easier to deploy. Your servers' existing power supplies can almost certainly handle 230V without any problems (flipping a switch may be required on crappier models).

    BTW, in many cases there are huge savings to be made without changing your infrastructure at all, just by using better PSUs; cheap-ass PSUs are both inefficient and unreliable.

  • by miller60 ( 554835 ) * on Wednesday January 14, 2009 @04:44PM (#26454975) Homepage
    There are a number of companies providing commercial DC solutions for data centers. Validus DC Power [] is providing products for DC power distribution, while Power Loft [] is building a brand new data center optimized for DC power.
  • by Skapare ( 16644 ) on Wednesday January 14, 2009 @04:59PM (#26455281) Homepage

    From TFA:

    The power starts at the utility pad at 16,000 VAC (volts alternating current), then converted to 440 VAC, to 220 VAC, then to 110 VAC before it reaches the UPSes feeding each server rack.

    That's just stupid. I hope it's just a case of a journalist not correctly understanding (which is a common problem). Given the usage of numbers like 220 and 110, instead of the standard 240 and 120, I do suspect it is a journalist giving wrong info. But even many computer people don't know what the standard power voltages are (and have been for decades). Lots of people in the USA still refer (incorrectly) to "two twenty" and "one ten". The standard in Europe is 230 volts.

    With so many conversions taking place, there will be a lot of power loss. To begin with, the computers should have been operated directly on the 240 VAC, not 120. That 240 VAC should have been obtained from the utility power directly (though voltages like 7200, 7620, 7970, 12470, 13200, 13800, 14400, 19920, 22860, 23900, 24940, 34500, etc, are more common ... I've never heard of 16000 being used). Since power comes in as three phase, the ideal voltage conversion would have been 240 VAC line-to-neutral, which would give 416 VAC line-to-line. Neutral harmonics issues can be avoided by use of oversized neutrals or multiwire neutral.
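The 240/416 pairing the parent mentions is just standard three-phase math: line-to-line voltage is the line-to-neutral voltage times √3. A quick sanity check (the 277/480 example is my own addition, a common US industrial pairing, not from TFA):

```python
import math

def line_to_line(v_line_to_neutral):
    """Line-to-line voltage of a balanced three-phase system,
    given the line-to-neutral voltage."""
    return math.sqrt(3) * v_line_to_neutral

print(round(line_to_line(240)))  # 416, as the parent says
print(round(line_to_line(277)))  # 480, the usual US industrial counterpart
```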

    Do AC wiring correctly, and the advantages of DC are minimal at best. Where the DC plan can have an advantage is that the conversion to 400 VDC, done on a large scale, can be done more efficiently. If that doesn't happen, then it's just one AC-to-DC conversion vs. another AC-to-DC conversion. When the 400 VDC gets to the computers, you still need a PSU to convert the 400 VDC to the various voltages provided to the components inside the computer box (e.g. 12V, 5V, 3.3V, etc).

    AC voltage conversion can be more efficient than 98% when properly designed low impedance transformers are used. That can beat the DC conversions ... even DC-to-DC, in most cases. So you want to do conversion of DC only once or certainly no more than twice.
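To see why the number of conversion stages matters, multiply the per-stage efficiencies together; losses compound. The stage figures below are illustrative assumptions for the sake of the arithmetic, not measurements from TFA:

```python
from functools import reduce

def chain_efficiency(stage_efficiencies):
    """Overall efficiency of a series of power-conversion stages:
    the product of the individual stage efficiencies."""
    return reduce(lambda acc, eff: acc * eff, stage_efficiencies, 1.0)

# Hypothetical legacy path: step-down transformer, UPS rectifier,
# UPS inverter, then the server PSU's AC-to-DC conversion.
legacy = chain_efficiency([0.98, 0.95, 0.95, 0.90])

# Hypothetical DC path: one large facility rectifier, then the
# server's DC-to-DC stage.
dc = chain_efficiency([0.97, 0.92])

print(f"legacy AC chain: {legacy:.1%}")
print(f"DC chain:        {dc:.1%}")
```

With these assumed numbers the four-stage chain lands around 80% and the two-stage chain around 89%, which is the whole argument for doing the conversion once, or certainly no more than twice.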

    It has been reported that mainboards can be designed to efficiently convert 12 VDC to the other voltages needed. Google's original proposal was to supply computers with 12 VDC, allowing them to be manufactured without the PSU entirely, and thus in a smaller footprint as well as having the increased efficiency. The 12 VDC would come from a large PSU in the middle of the rack (to limit the length of wire carrying the higher current that is involved with a low voltage). That large PSU would be designed to accept AC at any voltage from 380 to 480, 50 or 60 Hz, and thus be usable just about everywhere in the world. The PSU may even operate more efficiently when fed with full three phase power (the full cycle nature of three phase power reduces the level of filtering needed for smooth DC).
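The reason the 12 VDC runs have to stay short is resistive loss: at a fixed power draw, current scales inversely with voltage, so cable loss (I²R) scales with the inverse square of the voltage. A rough sketch, where the 500 W load and 10 mΩ cable resistance are assumed figures for illustration:

```python
def cable_loss_watts(load_power_w, volts, cable_ohms):
    """I^2 * R loss in a distribution cable feeding a load of
    load_power_w watts at the given voltage."""
    current = load_power_w / volts
    return current ** 2 * cable_ohms

# Same 500 W server, same 10-milliohm cable, three distribution voltages.
for v in (12, 48, 240):
    print(f"{v:>3} V: {cable_loss_watts(500, v, 0.010):.2f} W lost in the cable")
```

Quadrupling the voltage cuts the cable loss by a factor of sixteen, which is why 12 V distribution only makes sense within a rack while 48 V (telco) or 400 V can span a room.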

    Running DC is NOT a crackpot idea. It just needs to be studied correctly, in its various possible forms, and compared to CORRECT designs of AC wiring, in its various possible forms. The choice of 400VDC for distribution within a data center to the individual PSUs is a reasonable one, given that the existing PSU designs go through a conversion to 340VDC to 380VDC, anyway. But these same PSUs, especially in the larger form of one per rack, could just as well be designed to operate from 380 VAC, 400 VAC, 416 VAC, or 480 VAC.

    Maybe DC is the right choice. Or maybe AC can still be the right choice when engineered correctly (which far too often is not done, sometimes due to ignorance, sometimes due to budget limitations which would never go for DC anyway, and sometimes just due to mental inertia).

  • by scharkalvin ( 72228 ) on Wednesday January 14, 2009 @05:04PM (#26455367) Homepage

    The reason Tesla/Westinghouse won the current wars with Edison is that there wasn't any good way to step DC voltages up or down. You can't transmit power very far at 110 volts. AC allowed the use of inexpensive transformers to transmit at high voltage over long distances and step the voltage down at the customer site.

    Today, solid-state converters do allow stepping DC voltage up and down, and very-high-voltage DC can be sent over long distances with less loss than AC at the same voltage. At least one power company is looking at using DC transmission lines over long distances.

    AC power still makes more sense for consumer and most industrial use, but for transmission and delivery of power in bulk DC seems to be making a comeback.

  • pros and cons (Score:3, Informative)

    by sjames ( 1099 ) on Wednesday January 14, 2009 @05:32PM (#26455871) Homepage Journal

    Moving the AC->600VDC rectifier stage out of the controlled environment will be a savings even if you keep the inverter and stay AC inside the datacenter.

    For actually going DC in the datacenter, the top benefit is losing the inefficiency and heat of the inverter stage of the UPS. Instead, you have the potentially smaller losses of several smaller 600VDC-to-48VDC converters in the racks, and potentially cheaper power supplies that don't have to care about power factor.

    The con side is the need to retrofit, heavier power cables from the down-converter to the individual machines, and an underfloor area that becomes much more hazardous (600VDC = third rail).

  • Is this really new? (Score:3, Interesting)

    by Logical Zebra ( 1423045 ) on Wednesday January 14, 2009 @05:34PM (#26455901)

    I work in the telecommunications industry. It has always been standard practice (at least where I work) to use DC power supplies for data equipment if they are co-located with voice equipment, since most voice equipment uses -48 V DC power.

    This has the additional advantage of utilizing the battery backup system (required for voice) to also back up the data equipment's power.

  • DC-DC converters (Score:3, Interesting)

    by drolli ( 522659 ) on Wednesday January 14, 2009 @06:05PM (#26456381) Journal

    The responses to this here were highly predictable, and many of them are quite naive.

    Modern DC-DC converters have excellent efficiency over a wide dynamic range of loads. This holds true for the small, nicely isolating ones that every instrument designer likes very much, and also for larger ones. No transformers, smaller capacitors, easier redundant designs, easier buffering. In a time when computers are increasingly designed to vary their input power according to their load, all these things could save energy (and money). Even if this saves only a few percent, the investment will be paid off in a reasonable time.
