AC and DC Battle For Data Center Efficiency Crown
jfruh writes "AC beat DC in the War of the Currents that raged in the late 19th century, which means that most modern data centers today run on AC power. But as cloud computing demands and rising energy prices force providers to squeeze every ounce of efficiency out of their data centers, DC is getting another look."
Makes sense. (Score:5, Interesting)
AC is better than DC for transporting electricity because you can convert between voltages with just a transformer. But in a data centre, when all the equipment will be powered by the same voltage, it makes sense to use one good efficient power supply for multiple computers, so that all the components don't have to be duplicated for each computer.
Re: (Score:2)
Yes, let's use a big power supply for all the computers, so they all share the same exact point of failure AND have a MASSIVE fault current when someone accidentally drops a piece of uninsulated wire across a bus bar, so we have a couple racks of equipment melt down and a techie vaporized to ash.
Re: (Score:2)
That's why, like, I dunno, 80 years ago, the telco business got in the habit of A and B power bus distribution. I worked at a place with a C bus, which was pretty much a load-balancing hack and confused the hell out of the CO techs and electricians... They actually shorted out the C bus one time because they didn't understand the concept of having three busses instead of the "standard" two.
Re:Makes sense. (Score:5, Informative)
AC is better than DC for transporting electricity because you can convert between voltages with just a transformer.
Which was a winning argument in the 19th century, but not anymore. The use of AC entails significant power losses, especially for cables that are immersed in salt water, which is why DC is used in such situations:
https://en.wikipedia.org/wiki/High_voltage_direct_current [wikipedia.org]
Re:Makes sense. (Score:5, Informative)
It depends.
AC won out because of ease of conversion: the higher the voltage, the lower the current, and the lower the current, the lower the I²R losses in the wire. DC didn't win because at the time, efficient (and cheap) voltage converters didn't exist. These days, a switching DC-DC supply can easily exceed 90% efficiency, and you can get solid-state converters that can handle transmission-line powers easily. Hence the launch of HVDC transmission lines, which have no reactive losses and no phasing issues.
In a datacenter, you'd probably take the incoming power and turn it into an intermediate voltage like 48VDC per rack or something - something that minimizes I²R losses (you want high voltages) and DC-DC converter losses (ideally you want the output voltage directly, with no converter at all).
It will have to be per-rack at the minimum purely because of the losses - if we did 12V lines and a few servers take 1200W total, we're talking 100A of current. If we bump it to 48V, we're dealing with 25A (maybe 30A after inefficiencies), and I²R losses at 25A are much lower than at 100A (loss increases with the square of the current).
Also, the 100A cables are big and chunky (which you need because they reduce the "R" part of the I²R losses).
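A quick Python sketch of that arithmetic (the 1200W load is from the parent; the 1 milliohm cable resistance is an assumed, illustrative figure):

    # Conduction loss in a feed carrying a given power at a given voltage.
    def i2r_loss(power_w, voltage_v, resistance_ohm):
        current_a = power_w / voltage_v
        return current_a ** 2 * resistance_ohm

    R = 0.001  # ohms, assumed bus-cable resistance
    for volts in (12, 48):
        amps = 1200 / volts
        loss = i2r_loss(1200, volts, R)
        print(f"{volts:>2} V feed: {amps:6.1f} A, {loss:5.2f} W lost in the cable")

    # 12 V feed:  100.0 A, 10.00 W lost in the cable
    # 48 V feed:   25.0 A,  0.62 W lost in the cable (16x lower, since 4 squared = 16)

Quadrupling the voltage cuts the cable loss by a factor of sixteen, which is the whole per-rack argument in one number.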
Re: (Score:2)
Yes... many data centers these days have UPSes. So my guess is that it should be more efficient to get the DC right from the batteries somehow, to avoid the losses of those two extra conversions. So, agreeing with your comment, I assume it would be efficient to have small UPS systems (rectifiers and batteries) per rack (or per small group of racks), t
Re: (Score:2)
Also, the 100A cables are big and chunky (which you need because they reduce the "R" part of the I²R losses).
Thickness of wires has nothing to do with R. You use big cables to reduce heating of wires.
LOL, absolutely not. 0 credit given. Temperature held constant, the resistance of a conductor decreases as the conductor gets thicker.
The relationship can be stated as R = ρL/A, where ρ is the material resistivity (for example in ohm-meters), L is the conductor length, and A is the conductor cross-sectional area. ρ varies with temperature, but as you can see, R does indeed vary with A, and discounting A as you suggest is incorrect.
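For illustration, that formula in Python with copper's resistivity (the 20 m run and the two cross-sections are made-up numbers):

    RHO_CU = 1.68e-8  # resistivity of copper, ohm-meters, at ~20 C

    def resistance(length_m, area_mm2):
        # R = rho * L / A, converting the area from mm^2 to m^2
        return RHO_CU * length_m / (area_mm2 * 1e-6)

    for area in (2.5, 25):  # thin vs. chunky conductor
        print(f"{area:4.1f} mm^2 over 20 m: {resistance(20, area) * 1000:6.1f} milliohms")

    #  2.5 mm^2 over 20 m:  134.4 milliohms
    # 25.0 mm^2 over 20 m:   13.4 milliohms -- ten times the area, a tenth the R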
Actually converting DC is pretty easy these days (Score:2)
The problem is it wasn't back when the grid was being made. There was no good, easy, efficient way to convert DC voltage. Now, not so hard.
Re:Actually converting DC is pretty easy these day (Score:5, Interesting)
You are getting that wrong. DC can be transmitted farther than AC. DC has only resistive losses, while AC also has capacitive and inductive ones.
I'd summarize it as the following:
DC is slightly (just slightly) better for transmitting;
AC was easier to convert from one voltage to another (currently, we have the opposite situation);
AC is better to use on motors (it was much better, now it is just slightly better);
AC is easier to generate (it was much better, now it is just slightly better - except on photovoltaics);
AC is easier on the connectors (high-current DC connectors are hell to maintain).
It is easy to see why AC won. I bet AC would win again just because of the connectors and generators; after all, converting it to DC is relatively cheap. The only problem is the low frequencies we currently use; it would be better to increase them a lot now that we have better materials.
I am getting what wrong? (Score:2)
Back when the current wars were happening, there was no good way to convert DC voltages. Edison's model called for lots of local small powerplants to deal with that. AC was easy to convert using transformers. Now DC voltage conversion is easy. Thyristors do the trick nicely.
Because it was hard to convert voltages, you couldn't do HVDC runs unless you wanted it in the home as well.
Re: (Score:2)
AC is better than DC for transporting electricity because you can convert between voltages with just a transformer. But in a data centre, when all the equipment will be powered by the same voltage, it makes sense to use one good efficient power supply for multiple computers, so that all the components don't have to be duplicated for each computer.
Unless you want to transmit with lower loss and send more current down the same cable. That's why high-voltage direct current [wikipedia.org] is used for most undersea cables.
Re: (Score:3)
AC is better than DC for transporting electricity because you can convert between voltages with just a transformer.
Not anymore. The greenies / cost cutting / etc. mean no more xfrmrs anymore. Bye bye to that technology. When's the last time you bought a wall-wart-charged device with a transformer inside it (you'd know, it'll be cubical and heavy)? You have to be pretty old by /. terms to have bought a main desktop computer without a switcher, like the early-1980s pre-PC "home computers"... Ahh, the old Altair with its smoking hot 7805 regulators...
Since you're gonna have a switching power supply anyway... why not ski
Re: (Score:2)
DC/DC converters are better than transformers in almost every way. They are lighter, smaller, and cheaper. Also, although theoretically you could create a transformer that loses less power than a DC/DC converter, in practice nobody has done that, so they also waste less power.
They would be even better (on all variables above) if they didn't need to deal with a low frequency AC supply. Either a high frequency AC or a DC one would do.
Re:Makes sense. (Score:4, Informative)
The last two data centers (Clearwire) I built out were DC. The only AC in the cage was for a video monitor and for the tech's wifi router. Very standard stuff; the telcos have always done it that way. Any bit of Cisco/Juniper/whatever kit can be ordered with DC power supplies. I see DC plants as more the standard now. And yes, they are still built using waxed string.
Even Power over Ethernet has its roots in telco -48VDC power. All the WAPs and fiber converters at a Lowe's are powered by a Valere DC power supply ( http://www.power-solutions.com/dc-power-systems/eltek-valere.php [power-solutions.com] ).
One nice thing about DC plants is that the power cables are cut to length, so you don't have all that extra line cord to bundle and hide.
Re: (Score:2)
You can run them on DC, but it's a waste; you still get the diode drop and half your diodes are going to get hot as hell; normally you have a full-wave rectifier where each diode operates at 50% duty cycle. Wit
The side that wins the current wars will be (Score:3)
whichever one gets Dirty Deeds Done Dirt Cheap
Re: (Score:2)
but that's the problem. We aren't talking about 1 rack, we are talking about tens of thousands of machines. How is the efficiency at the end, across all the computers? What about heat? Risk?
I honestly don't know the answer, and I look forward to data from real datacenters.
Re: (Score:2)
Either way, you get to play the "let's balance transmission losses vs. redundancy vs. efficiency vs. componen
Make sure you cool those busbars (Score:2)
A rack basis is one thing, and can work well. A big roomfull is another.
Re:Makes sense. (Score:4, Insightful)
As opposed to the transformer coming into your building? How about the UPS and HVAC units supporting your server room?
Obviously, you'll have redundant DC power supplies, just like you do now. Except instead of having two AC->DC power supplies per PC, you'll route two room-level DC power supplies to each machine in the room. Lots of little, less efficient, lower quality power supplies replaced by a pair of high quality, high efficiency supplies.
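A back-of-the-envelope sketch of that claim in Python (the 85% and 95% efficiencies and the 40-server load are assumed, illustrative figures, not measurements):

    servers, watts_each = 40, 300
    load = servers * watts_each  # 12 kW of IT load

    waste_small = load / 0.85 - load  # many little supplies at 85% efficiency
    waste_big   = load / 0.95 - load  # a pair of central supplies at 95%
    print(f"per-PC PSUs waste {waste_small:.0f} W, central supplies {waste_big:.0f} W")

    # per-PC PSUs waste 2118 W, central supplies 632 W

Under those assumptions the central plant wastes roughly a third as much power for the same load.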
Re: (Score:2, Funny)
"I'm on the highway to hell......"
Re: (Score:2)
You forgot the lightning bolt...
Re:Makes sense. (Score:4, Interesting)
And a big disaster waiting to happen with such large DC currents available on all the busses going all over the room. FYI, telco 48VDC systems addressed the dangers with resistive busses. But that was a huge efficiency loss. They didn't care so much about efficiency back then as all they wanted was a reliable battery backed up system. Making DC efficient is also making DC unsafe, at data center scale. AC is safer on that scale. Then do the conversion to DC at no larger than one rack, and put ride-through (2 minute) backup batteries in each rack (just need to be long enough for slow start generators or maybe a little longer for diversity loading systems so you don't slam the generators with load). I'd have a separate AC distribution system for the generator power and have each (two input) power converter switch over at randomized times over a 2 minute interval.
Re: (Score:2)
For you it seems logical, but WHY are large DC currents such a problem? Why are they more of a problem than 10-20 lower AC currents? Short circuit? Same problem in both setups. Electrocution? Higher voltage sounds more dangerous.
Re:Makes sense. (Score:4, Insightful)
I'm more concerned that I convert AC to DC to charge a battery, then convert it back to AC to power a power supply in my machine that outputs DC voltage. (Or, taking the DC battery output and inverting it to AC to run a computer.) Why can't I just run my PC off a battery that's kept charged by a DC current from a single power supply? I mean, I don't need the efficiency of AC for long distance transfer (we're talking maybe 3 feet) so why convert it back to AC?
Re: (Score:2)
Why can't I just run my PC off a battery that's kept charged by a DC current from a single power supply?
Because your battery will degrade faster? Because you are converting the power three times (two of them chemical) instead of one? Just a couple of possibilities.
Re: (Score:3)
It doesn't really work that way. A battery charger is just a power supply. When the battery is charged the charger outputs maintenance voltage and your computer is really running off the charger. When the battery is not charged the charger puts out charging voltage, and your computer is really running off the charger. When the mains current cuts out your computer just runs off the battery. This is a UPS, as opposed to a SPS where you run on mains and then switch to the inverter in case of a failure and hope
Re: (Score:2)
The idea makes a lot of sense, but the problem with either high voltage (500+V) DC or 400V AC is you have trouble getting the fault current down to under 5,000A per (US) code at the plug. Safety procedures are about 5-10 years out for widespread use of high voltage DC adoption in buildings.
Re: (Score:3)
That's basically what Blades are right now. Effectively, Blades already exist because treating rack servers the same as a herd of Boxen has been silly for a while.
Ideally, you would have a "rack level" spec with DC power on tap at the rack level, and the "ATX" DC connector (maybe even 3, 6, & 12 volts) going out the back of the server.
The problem right now is that unless you're Google and can get boards and cases shipped from the factory like that, there is no spec, so "little people" can
Re: (Score:2)
FYI, the newer Intel Mac Mini has an integrated AC power supply (only the original and the early Intel models use DC-in with an external brick).
Re: (Score:2)
Bigger is not automatically better...
As the distance loss for low-voltage DC is immense (resulting in unwanted heat), it's probably only feasible to actually gain something from shared DC if the supply is relatively close to the servers, i.e. in the same rack or no further than the end of a row. A centralized supply for the whole datacenter will result in a huge waste of energy from transmission alone, far beyond what the small, less-than-perfect power supplies in each server cause today.
We already have something a little bit
Re: (Score:3)
Thank you. This is why the debate always confuses me. The poster is not exactly trolling. A single AC-DC power converter is a single point of failure, which is bad. Typically you have two, or even three, power supplies on most servers.
In my data center the AC is very clean, redundant, and has diesel fail over. Now if that is considered to be reliable, and as one poster suggested, we could use backup batteries for only a minute or two, why not convert all of the servers and supporting hardware to DC inpu
Re: (Score:3)
The best DC approach is 500+V DC distribution to the rack. The best AC approach is 400V to the rack. Either approach uses redundant low voltage power supplies at the rack level.
The benefit of DC is that you can stick dumb batteries on the bus (with an in-line charger) which eliminates a conversion to AC that would be required from a traditional static UPS.
On AC, the energy saving strategy is different-- do as little work as possible for as much time as possible, and run on "dirty" power until it is really b
Re: (Score:2)
If the hardware is built for you and designed to all run on one voltage, so it doesn't need a DC-DC power supply just as big as the AC-DC power supply it would have in an AC-powered environment, then it makes sense to run DC. But since PCs still have big bulky power supplies even in a DC-DC environment, to generate the multitude of voltages they're expected to contain, you're not really gaining anything there.
Re: (Score:2)
I guess that is a good point, but just how bulky do you think the equipment would be to generate something like 6 different voltages? I am not sure there really are that many different voltages. Most spec sheets I see show 3 different voltages: 12, 5, and 3.3, IIRC.
Most stuff is pretty standard and I am sure manufacturers could get on board at some point.
Do you think it would only be 2U per rack? How much more?
Getting rid of all the individual power supplies gets you back space (pretty valuable) and save
Re: (Score:2)
Most stuff is pretty standard and I am sure manufacturers could get on board at some point.
Yeah, it's standardized... on ATX, which uses a big bulky box. But it helps ensure that there will be room in the case for a broad variety of power supplies. As it turns out, saving the space doesn't matter all that much. When you're not putting in more windows just because you have more wall, the cost of keeping the building cool doesn't scale so much with square footage.
Re: (Score:2)
I meant standardized as far as voltages go.
Servers are not standardized for power supplies with respect to size. I have seen hot swappable power supplies in quite a few different form factors as well as your standard power supplies in a lot of different form factors as well. They all have a cost in space, components, connectors, etc.
Keeping the building cool is one thing, but I am also interested in density and efficiency in power consumption.
AC-DC conversion does generate heat, so you are getting a savin
Re: (Score:2)
"If you wanted to still make it redundant, you could build a 2U dual high-efficiency AC-DC converter with battery backup. That should be pretty reliable."
umm... you mean... build a UPS PS?
anyways, it's pretty crappy to run low-voltage DC over longer distances, and the devices are going to need +12, +5 and 3.3 anyways, so you'll be running more and thicker cables, or you're going to have a PSU at the machine end; some voltage regulator circuit is going to be there anyhow.
1200 watts for 20 meters at 12 volt.. yo
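Filling in that arithmetic as a sketch (the 25 mm² copper pair is an assumed gauge, purely for illustration):

    RHO_CU = 1.68e-8              # resistivity of copper, ohm-meters
    loop_m = 2 * 20               # 20 m out and 20 m back
    R = RHO_CU * loop_m / 25e-6   # ~0.027 ohms of loop resistance
    I = 1200 / 12                 # 100 A at 12 V
    drop = I * R                  # volts lost along the cable
    print(f"I = {I:.0f} A, drop = {drop:.2f} V, loss = {I * drop:.0f} W")

    # I = 100 A, drop = 2.69 V, loss = 269 W -- over a fifth of the load, gone as heat

Which is the point: at 12 V over that distance you either accept losses like that or buy absurd amounts of copper.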
Re: (Score:2)
If you can afford a rack of 1U servers you don't use redundant PSUs. Well, unless you're stupid. And there's a lot of STUPID out there.
You design tiered load balancing and failover software so no single component is a SPoF.
In the age where Supermicro can spit out cheap 95+% AC conversion with a mostly-single-12V-rail mainboard design, we're doing pretty well. The only real thing missing from the "Google" design is the per-machine battery backup.
The real problem with datacenter design is not the AC or DC anym
Re: (Score:2)
We're still doing rack cabinets wrong. We still load servers from the cold aisle, but connect all the cabling from the hot aisle. Many datacenters don't do hot aisle capture. Until we switch to wiring servers from the cold aisle and ducting the hot aisle away, we can't get any real heat transfer efficiency.
Huh? - You lost me there. Or maybe we're doing it right after all?
We load servers from the cold aisle and the wiring is in the hot aisle, but the hot aisle is completely sealed off (with doors at the end, of course), and the cooling sucks in air from the hot aisle only and expels the cooled air from both the floor and the ceiling above the cold aisle. This way the cool areas are never too cold and the hot areas don't leak heat to the cool areas. All servers are of course of the type that suck in air from the front an
Re: (Score:3)
Yeah, it'll be just like network routers and switches that always bring down the whole network with them! If only there was some way to prevent single points of failure...
Re: (Score:2)
Yeah! Err. Oh yeah. Down with DC! Except the single AC supply into the building is already a single point of failure. There is no reason you can't have all the redundancy you have with AC phases / UPS / circuits, and have n redundant efficient PSUs powering m-racks, whatever works most efficiently.
Re: (Score:2)
Yep, I'm not sure why you quoted me rather than the OP but I don't understand why he doesn't get this.
Re: (Score:2)
Sorry, I wasn't concentrating! :)
Re: (Score:2)
So, add 1 or 2 for backup. Still better than scores of separate, less efficient power supplies scattered all over your server room.
Re: (Score:2)
Even with centralized power supplies, you can still use built in redundancy in the rare case that one fails.
Re: (Score:2)
Yes, let's use a big power supply for all the computers, so they all share the same exact point of failure.
Eh, depends on the scale of your operation: Single computers only usually have one or two PSUs. Blade cages might have three or four; but serving 10+ PCs. If your infrastructure is in the thousands of racks, the savings on redundant power supplies might make a rack-level point of failure acceptable. Depends on what you are running and how much you want to pay for it...
Re: (Score:2)
Yes, let's use a big power supply for all the computers, so they all share the same exact point of failure.
Hmm, your post modded troll? Somebody was indeed a) clueless about the very real SPOF potential b) abusing their moderator privilege. Let's try a more rational approach: indeed, supplying multiple processors from a single power supply is a potential SPOF. However, M power supplies per N processing nodes would mitigate this at a modest cost in complexity and cabling.
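The availability math behind that is simple; a quick sketch, where p is an assumed per-supply failure probability over some window (illustrative only) and any one of M independent supplies can carry the load:

    p = 0.01
    for m in (1, 2, 3):
        # the group only goes dark if all M supplies fail at once
        print(f"M = {m}: P(all supplies down) = {p ** m:.0e}")

    # M = 1: P(all supplies down) = 1e-02
    # M = 2: P(all supplies down) = 1e-04
    # M = 3: P(all supplies down) = 1e-06

Each extra supply multiplies the group-outage probability by p, which is why the second supply buys so much.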
Re: (Score:2)
However, M power supplies per N processing nodes would mitigate this at a modest cost in complexity and cabling.
Isn't this called blade servers?
Re: (Score:2)
If they plug into a motherboard. I understood the topic to be serving power to multiple enclosures.
Re: (Score:2)
Like the single, bottle-necked, AC power supply then?
Re: (Score:2)
Agreed. But it should be 240V everywhere, 50 to 60 Hz.
Re: (Score:2, Interesting)
Actually, 240V, 1kHz would be a lot more efficient, and make power supplies cheaper and more robust to boot.
Not really (Score:2)
which means that most modern data centers today run on AC power
Only if you ignore all the telecom equipment that has run on -48VDC for decades. True, they're not really 'data centers,' but it's not like they don't use massive amounts of electricity.
Re: (Score:2)
48VDC also means a rather large amount of current. A data center in many cases these days is much, much larger than a telco switching center was (aside from maybe a few trunk points for large cities). They did, in many cases, divide up the electrical systems to avoid high fault currents. But it was well known that the high battery currents involved could be a disaster if there was a short, even on a branch tap into equipment.
The benefit of DC distribution was NOT efficiency. They did use resistors and in some
prior art 8) (Score:2)
Most telephone exchanges and related transmission hubs use DC; 12, 24 and 48VDC are standard. This isn't anything new, and data centers have always been space- and power-inefficient; it's the nature of the beast and the method of construction.
Yay, another volt standard... (Score:5, Informative)
There was an article on /. a couple weeks ago about using 380 volts in the data center.
Having DC brings some benefits, mainly only needing to step voltage down and not having to rectify it and smooth it with capacitors to even out the output current.
However, there are some downsides:
1: AC power supplies in devices tend to be more tolerant of power fluctuations. An all-DC shop might be completely halted by a power surge/spike that wouldn't bother a data center on AC.
2: DC sparks a lot when connecting/disconnecting. AC has plenty of zero-crossings a second (120 or so), so it won't make the fireworks show when plugging/unplugging. This makes switches rated for DC a lot more expensive than AC.
3: There is no such thing as a NEMA 380VDC connector. So, either items would have to be wired up to a bus bar similar to how 48VDC telco stuff gets, or it will end up like 12VDC with at least 5+ connectors (direct wires, cig lighter, airplane, marine connector, male/female combined connector, motorcycle accessory connector, banana plugs.)
4: Safety. 12 VDC shocks are annoying; a shock from 380VDC will be fatal, especially because of DC's tendency to make muscles "lock". (This is why stun fences use AC, while kill electric fences use DC, so they can keep the target locked on the wires long enough to get the amps across the heart.)
5: Issues with wire length. With AC, it isn't hard to use a transformer to deal with voltage drop. With DC, that will be a lot harder.
All in all, 380VDC seems like a solution in search of a problem. We really don't need another standard. Heck, just 120VAC in the US means I have to double-check whether I'm dealing with 15 amps, 20 amps, 30 amps, or 50 amps, plus the locking versions of each, which means six plug types and minimum wire gauges.
Re: (Score:2)
All this does is require that the conditioning for power be done well before it reaches the machines. There will be an AC->DC power supply regardless, it'll just be much, much larger and could probably supply even more resilience than a bunch of smaller power supplies.
Re: (Score:3)
So you'll handle it much like most hotplug PC hardware is these days, with latches and mechanical disconnects that ensure + and - are disconnected simultaneously.
You don't want to disconnect both simultaneously. The idea is to disconnect + first and leave ground connected. The voltage across the whole component falls to ground level instead of potentially floating up to + briefly.
This is a different problem from what the GP was talking about: When you hot-unplug a device drawing lots of DC, it starts to draw an arc. The arc will continue to draw until it gets too long to be stable, forms a big rainbow, and then extinguishes itself. The distance depends on the v
Re: (Score:2)
AC power supplies in devices tend to be more tolerant of power fluctuations. An all DC shop might completely be halted by a power surge/spike that wouldn't bother a data center on AC.
Not so. Essentially you're just removing the rectifier from the power supply, putting it outside, and feeding the same old switching supply indoors. You could design a system that was intentionally more sensitive, but no one would do that on purpose.
or it will end up like 12VDC with at least 5+ connectors
The world seems to be converging on the Anderson Power Pole connector (which I believe is a (TM)). Cheap, high current, tough, reasonably simple to assemble...
All in all, 380VDC seems like a solution in search of a problem
See the above. Basically you're doing a lot of foolishness to remotely mount the rectifier diode
Re: (Score:2)
4: Safety. 12 VDC shocks are annoying; a shock from 380VDC will be fatal, especially because of DC's tendency to make muscles "lock". (This is why stun fences use AC, while kill electric fences use DC, so they can keep the target locked on the wires long enough to get the amps across the heart.)
While 380VDC is really bad news, the myth that DC is more dangerous than AC is just that. A myth. In fact, AC will induce tetanus more readily than DC and cause fibrillation at much lower currents. (Given typical frequencies; high frequencies will not, due to skin effect.) i.e.:
The high voltage direct current (DC) electrocution tends to cause a single muscle contraction, throwing its victim from the source. These patients tend to have more blunt trauma. Direct current electrocution can also cause cardiac dysrrhythmias, depending on the phase of the cardiac cycle affected. This action is similar to the affect of a cardiac defibrillator.
Low voltage alternating current (AC) electrocution is three times more dangerous than DC current at the same voltage. The lowest frequency for electrical current in the United States is 60 Hertz (Hz) because this is the lowest frequency at which an incandescent light functions. With AC electrocution, continuous muscle contractions (tetany) may occur, since the muscle fibers are stimulated at between 40 to 110 times per second. With tetany, the victim tends to hold on to the source of current output, thereby increasing the duration of contact and worsening the injury.[2]
(http://www.medscape.com/viewarticle/410681_3)
I used to have a link to the original Berkeley student experiments where they studied tetanus and AC vs DC, but I've lost it. In either case, the results were much as reported above, i.e. it takes more than twice the DC current to "lock
6 one way, half a dozen the other (Score:5, Interesting)
AC, DC, it does not make a difference any more. Yes, you have to rectify AC before it powers a computer, but the rectification costs less than 1% of the energy. Power factor compensation can be more costly, but it could be avoided by going to a 3 phase rectifier. There are also serious distribution advantages in 3 phase electricity, but it is not used because of the extra complexity, despite being cheap.
DC distribution is expensive, and 1% gain is just not enough to pay for it. Once we have intelligent grids, the situation may be different, but for now there is just no business case.
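A crude sanity check on that "less than 1%" figure (assuming 230 V mains and 0.7 V drops in a bridge rectifier; real losses depend on conduction angle, so this is only ballpark):

    v_mains, v_diode = 230.0, 0.7
    loss_fraction = 2 * v_diode / v_mains  # two diodes conduct at any instant
    print(f"bridge rectifier loss ~ {loss_fraction:.2%} of throughput")

    # bridge rectifier loss ~ 0.61% of throughput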
Yes, but (Score:2)
What if you want to electrocute an elephant?
You're gonna spend your savings on copper (Score:4, Interesting)
Standard -48VDC distribution requires four times the current of 208V AC distribution for the same amount of power. Have you seen the DC cabling at data centers that use it? If we're going to start using DC in data centers, we need to come up with a higher-voltage standard; otherwise we're going to spend all the savings on more copper (which is expensive!) to carry those extra amps.
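In numbers, for a hypothetical 10 kW row (single phase, unity power factor assumed):

    power = 10_000  # watts
    for label, volts in (("-48 VDC", 48), ("208 VAC", 208)):
        print(f"{label}: {power / volts:5.1f} A")

    # -48 VDC: 208.3 A
    # 208 VAC:  48.1 A -- about 4.3x less current, hence far smaller conductors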
Re: (Score:2)
380VDC is still horribly unsafe without proper segmenting. And segmenting costs you a lot of the efficiency of a single large conversion system. You need to segment at the rack level. And then you end up with the double-conversion scenario.
you forgot the lightning bolt (Score:2)
you forgot the lightning bolt
Let's electrocute some elephants just to be sure! (Score:2)
If it was good enough for Edison, it's good enough for me!
Slashad (Score:4, Insightful)
Re: (Score:2)
And you don't literally need, or need all of, his product, to make a very efficient AC-based data center.
I am concerned about his brief mention of cooling that seemed to be based on using a single system. There, I would want multiple redundancy at N(4)+2. The more discrete units you have, the more STABLE you can hold the temperature. The more stable the temperature, the higher temperature you can run it at. UNSTABLE temperatures cause damage to equipment as much as too high a temperature.
This is New? (Score:3)
In 2005 we started looking at blade chassis and tested a rack of HP BL series blades.
That system came with a 48V DC power enclosure with 6 hot-swap power supplies. It sat in the bottom of the rack and had a bus bar system to feed every chassis in the rack.
As others have stated... 48V is a long-standing standard for telecom power.
Re: (Score:3)
But 48VDC also means dual conversion. Convert the AC to 48VDC, then do the conversion again with the PSU in each chassis. You have to get both conversions to be very, very efficient to make that worthwhile.
Everything from Cisco can be had with 240VAC. Very little telco equipment these days actually requires a 48VDC power source. And most of that is for telcos, not for web site providers (for example). And where big network providers do need some 48VDC-only equipment, that can usually be put in the nort
Re: (Score:2)
But 48VDC also means dual conversion.
Not really. Every switched-mode power supply converts AC to DC, then back to AC (at a very high frequency), and then back to DC (at several voltages). The whole DC bus distribution idea pulls the first AC-to-DC conversion out of every individual supply and centralizes it. This makes it possible to back up the DC bus with batteries. But as others have noted, the high fault energies available on these busses are harder to deal with using common circuit breakers.
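The "dual conversion" worry from the parent is just multiplication of stage efficiencies; a sketch with assumed, illustrative numbers:

    from math import prod

    chains = {
        "direct AC":  [0.94],        # one good AC->12VDC supply in the server
        "via 48 VDC": [0.96, 0.95],  # AC->48VDC plant, then 48V->12V in the chassis
    }
    for name, stages in chains.items():
        print(f"{name}: {prod(stages):.1%} end to end")

    # direct AC: 94.0% end to end
    # via 48 VDC: 91.2% end to end -- both stages must be excellent just to break even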
Re: (Score:2)
The problem is, you need high voltages. You cannot run 12VDC to every server because you're talking about HUGE currents.
Let's say the server is high-powered and takes, say, 480W. At 120VAC, that's 4A, maybe 5A after power supply inefficiency. 5A isn't a lot of current and wires are nice and thin (like they
I'll stick with AC through the data center and... (Score:3)
... convert that AC to DC at a "blade rack". That would be a rack designed to take blades. But the blades would be a mix of
This will safely segment the power, leaving the DC busses limited to the amperage needed for one rack ... or even partial rack. It also has the flexibility of balancing power conversion vs. 1st tier power backup (at the point of use). Increasing the backup times to a couple minutes allows slow start generators, which are more reliable.
I would run 416/240 three phase everywhere in the data center (even in North America ... transformers for this are readily available). Where equipment isn't on the DC system, run it on 240VLN. The AC/DC converters might run on 240VLN or 416VLL. In countries with 400/230 or 380/220, just use it that way direct.
AC is safer due to the zero crossing. Circuit breakers can interrupt a lot more power (usually about 5x the voltage) on AC than on DC. A 380VDC breaker for a rack would be HUGE, especially if it has to handle a data-center level of fault current.
what about PSUs with built-in UPS hookups? (Score:2)
What about PSUs with built-in UPS hookups, so you can get rid of the AC-to-DC-to-AC-to-DC part and make it just AC to DC, at each system?
Re: (Score:2)
Are you talking about a 2nd AC input, or a separate DC input which can be supplied direct from battery?
Re: (Score:2)
I assume a DC connection that you can plug into a bank of batteries that can be used for power if the AC should fail (and charging the batteries when the AC is on).
Verari Systems (Score:5, Interesting)
Verari tried to take advantage of the efficiency gains in DC with exotic power supplies, etc... And that company went the way of the dodo bird after trying to force 800V, 48V, and 12V DC power distribution systems into customer data centers. The fact is, everything already out there (switches, routers, servers, etc) uses AC-DC power supplies in each unit, and it works in 99% of power outlets with pretty good uptime. The added complexity of running DC infrastructure isn't worth the efficiency gains (which on paper sound like a lot, but theory rarely translates to reality the way we think it will), and when one DC rectifier burns up and takes down a hundred servers (vs 1 server with an AC-DC supply), customers aren't happy. Between the uptime issues and employee safety concerns (high-amperage DC power is more dangerous than AC for a variety of reasons), it's also a liability nightmare.
Again, I don't feel like getting into specifics, but modern datacenters != underground telco installations, and DC power distribution has a LOT of challenges that are often overlooked when marketing types start squawking about efficiency gains.
Telecom already acknowledges DC as victor (Score:2)
Telecom already acknowledges DC as the victor. It's about time the datacenter people also recognize the efficiencies of DC power in the datacenter.
Did anyone RTFA? (Score:4, Informative)
Uh, the article the post links to supports AC more than DC, in case no one noticed. The article is about DC being hyped beyond the facts and claims AC is just as good. Sort of reverses the whole discussion here, making it AD: alternating discussion. Edison gets the carbonite filament...
Edison invented FUD (Score:4, Funny)
PETA's really on our asses about that.
Re:Edison invented FUD (Score:4, Informative)
Well, if you live in Toronto, there's a very good chance that simply walking down the street you could get electrocuted by, well, anything. I'm not actually kidding; they had a serious problem with live plates and poles all over the city for the last couple of years.
Auuuuuughh! (Score:2, Funny)
Must... unimagine... Tesla... frenching... Edison...
Re: (Score:2)
You, man, don't know anything about analog current and digital current. Sorry, go and take this course again. With an A+. The cheapest way to transport electricity from point A to point B is to use, surprise, the "wave" format. The sinusoid one.
LOL, wake me when you have calculations showing RMS voltage is greater than peak voltage for an AC waveform...
Re: (Score:2)
Why is that relevant?
Re: (Score:2)
Cost of electricity dwarfs cost of endpoint components, at least nowadays, so cheapest way to transport = most watts through a piece of wire. Watts is volts times amps.
The insulation determines the peak voltage. For DC the peak is also the operating voltage. For AC the peak is the peak of the sine wave.
The graphical/intuitive answer is DC can run full output continuously, but an AC sine wave can only run full out for a zillionth of a second at the peak voltage. If somehow magically you made the AC signal
Re: (Score:2)
Turns out the equivalent power transfer of an AC wave is the RMS voltage.
Err, times the current, yeah. Ugh.
The point is, the "average" of a DC line is... the peak. The "average" of an AC wave is the RMS voltage, which is about 70% of the peak.
I put "average" in scare quotes because the actual integrated voltage of a sine wave is zero. Or sometimes the "average" is calculated another way.
The number you're looking for is RMS: root-mean-square. Take a wild guess how you numerically calculate that...
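For example, computing it numerically over one cycle of a sine wave (170 V peak assumed, i.e. 120 V-class mains):

    import math

    peak = 170.0
    samples = [peak * math.sin(2 * math.pi * n / 1000) for n in range(1000)]
    # root of the mean of the squares, exactly as the name says
    rms = math.sqrt(sum(v * v for v in samples) / len(samples))
    print(f"RMS = {rms:.1f} V (peak / sqrt(2) = {peak / math.sqrt(2):.1f} V)")

    # RMS = 120.2 V (peak / sqrt(2) = 120.2 V)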
Re: (Score:2)
You, man, don't know anything about analog current and digital current.
Yes! Digital current!!! How about feeding the computers with AC at a frequency of whatever clock ticks/sec the CPU needs?
(duck... stop shooting)