Making Your Datacenter Into Less of a Rabid Zombie Power Hog
Nerval's Lobster writes "Despite the growing list of innovative (and sometimes expensive) adaptations designed to transform datacenters into slightly-less-active power gluttons, the most effective way to make datacenters more efficient is also the most obvious, according to researchers from Stanford, Berkeley and Northwestern. Using power-efficient hardware, turning power down (or off) when the systems aren't running at high loads, and making sure air-cooling systems are pointed at hot IT equipment—rather than in a random direction—can all do far more than fancier methods for cutting datacenter power, according to Jonathan Koomey, a Stanford researcher who has been instrumental in making power use a hot topic in IT. Many of the most-publicized advances in building "green" datacenters during the past five years have focused on efforts to buy datacenter power from sources that also have very low carbon footprints. But "green" energy buying didn't match the impact of two very basic, obvious things: the overall energy efficiency of the individual pieces of hardware installed in a datacenter, and the level of efficiency with which those systems were configured and managed, Koomey explained in a blog post published in conjunction with his and his co-authors' paper on the subject in Nature Climate Change. (The full paper is behind a paywall, but Koomey offered to distribute copies free to those contacting him via his personal blog.)"
Rabid zombies (Score:2)
are a renewable resource.
Re: (Score:2)
are a renewable resource.
Rabid zombies respectfully disagree...
Less power consumption = less cooling (Score:4, Interesting)
I've pointed this out a number of times, but people do not seem to "get it". If you can reduce your power consumption, there is less waste heat and therefore less cooling cost. Note too that if your applications use lots of disk reads/writes and network IO with the CPU in a waiting state, then you can save power by using lower-end gear, e.g. laptop chips and slower memory instead of full-blown "Enterprise" hardware.
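A rough back-of-the-envelope sketch of the double savings (the PUE value and electricity price below are assumed for illustration, not taken from the comment): every watt removed from the IT load also removes a proportional slice of cooling and distribution overhead.

```python
# Rough sketch of how cutting IT power also cuts cooling cost.
# The PUE and electricity price are illustrative assumptions.

def annual_energy_cost(it_load_kw, pue=1.6, price_per_kwh=0.10):
    """Total facility energy cost per year for a given IT load.

    PUE (Power Usage Effectiveness) = total facility power / IT power,
    so cooling and distribution overhead scale with the IT load.
    """
    hours_per_year = 24 * 365
    total_kw = it_load_kw * pue
    return total_kw * hours_per_year * price_per_kwh

baseline = annual_energy_cost(100)          # 100 kW of servers
reduced  = annual_energy_cost(100 * 0.85)   # 15% less IT power

print(f"Baseline: ${baseline:,.0f}/yr, reduced: ${reduced:,.0f}/yr, "
      f"saved: ${baseline - reduced:,.0f}/yr")
```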
Re:Less power consumption = less cooling (Score:5, Interesting)
Re: (Score:2)
Re: Less power consumption = less cooling (Score:3)
Re: (Score:2)
Performance and price...
Laptops stay a few generations behind desktops, in terms of memory and bus speeds, memory and cache sizes, CPU speeds, etc.
In addition, looking at AMD, they actually made their "mobile" CPUs by testing their cores after manufacturing to see which ones could handle low-voltage operation without errors. Those got rerouted to the mobile CPU line, while the rest were directed to the desktop line.
Re: (Score:2)
"Laptops stay a few generations behind desktops, in terms of memory and bus speeds, memory and cache sizes, CPU speeds, etc."
You still don't get it. My question is: do you need the fastest "state of the art" hardware? If the answer is no, go with the lower-end gear. Who cares how fast a bus is, as long as it is as fast as it MUST be.
Re: (Score:2)
What about performance? Desktops are faster, e.g. for gaming. :/
Re:Less power consumption = less cooling (Score:5, Funny)
It works the other way too. If you don't cool the servers at all, eventually they stop consuming power ;-)
Re: (Score:3)
It works the other way too. If you don't cool the servers at all, eventually they stop consuming power ;-)
Eventually. But not as soon as you might think. Modern servers can tolerate heat fairly well, and many data centers waste money on excessive cooling. As long as you are within the temp spec, there is little evidence that you gain reliability by additional cooling. Google has published data [google.com] on the reliability of hundreds of thousands of disk drives. They found that the reliability was actually better at the high end of the temperature range. This is one reason that Google runs "hot" datacenters today.
Re: (Score:1)
While that *may* be true for some hardware (and I only say "may" because Google claims it is true, though I'm fairly certain they have fundamental flaws in their accounting of this), I can personally verify that *temperature change*, in the form of increases in temperature even within the stated hardware specifications, has a *HUGE* impact on the longevity of most consumer-grade hardware.
Re: (Score:2)
I can personally verify that *temperature change*, in the form of increases in temperature even within the stated hardware specifications, has a *HUGE* impact on longevity
So I can believe Google's peer reviewed and published study of hundreds of thousands of devices, or I can accept your "personal verification". Wow, this is a tough decision.
Re: (Score:1)
Well, think of it this way. *I* personally have absolutely no vested interest in increasing the frequency with which your own business's in-house hardware infrastructure suffers failures. Google on the other hand...
Re: (Score:1)
It's not "peer reviewed". At best, it's "peer read". Google's data is only 100% valid for GOOGLE. It's their data on their infrastructure. Unless you happen to have a Google Datacenter, the results aren't that valuable to you.
I keep my DC (~800 sq. ft.) at 68F, mostly because I prefer to work in a cool space (well, cool while I'm in the cool aisle), but also because of cooling capacity: if the HVAC is off, how long does it take to reach 105F? From 68, about 15 minutes; from 82, a few minutes. However rare that
Re: (Score:2)
Absolutely. When I ran a small datacenter, I instituted the change from 68F to 75F as a standard. In spite of predictions of disaster, the only thing that changed was that the power bill went down.
Re: (Score:2)
Absolutely. When I ran a small datacenter, I instituted the change from 68F to 75F as a standard. In spite of predictions of disaster, the only thing that changed was that the power bill went down.
If you have good airflow, you can go much higher than that. The critical factor is the temp of the components, not the room temp. Dell will warranty their equipment up to 115F (45C). Google runs some of their datacenters at 80F, and others at up to 95F.
There are some drawbacks to "hot" datacenters. They are less pleasant for humans, and there is less thermal cushion in the event of a cooling system failure. But many datacenters avoid that problem by replacing chillers with 100% outside ambient-temperature air.
Re: (Score:2)
Sure, I suspect we could have gone hotter then, even with a datacenter designed for 68F, but it was a bit cutting edge at the time just to get to 75 and we would have had to alter the airflow.
Re: (Score:1)
This suggests your DC may be rather poorly insulated.
I don't know your environment (pre- or post-), so I cannot say what that +7F did to the thermodynamics of your HVAC system... +7F room, +20F servers, +50F exhaust? (A greater delta-T means faster / more efficient energy transfer.) For example, if you're in AZ and your heat rejection (cooling coils) is only reaching 120F, it isn't going to be very good at dumping heat into >100F air. (This is where water cooling should be used.)
(Note: I had that "talk" w
Re: (Score:2)
I don't see why it would suggest particularly poor insulation. Any time you move a room's temperature closer to the outside temp, you can expect the bills to go down a bit. In our case it meant that the room was a bit above the outside temp for larger parts of the day, which makes a huge difference, especially when you're using outside air when conditions are favorable.
Re: (Score:1)
It doesn't have to be particularly poor, just not sufficient for a data center. You want the heat load in the room to be as near 100% equipment as possible -- no leaks from outside the room. You also want the cold to stay in the room -- i.e. not blowing through cracks (or holes) in the floor, wall seams, through doors, etc. It's fairly simple to test the efficiency of the room: turn off all load, and watch how much the HVAC has to work to keep it at the setpoint.
As I said, I don't know the specifics of
Re: (Score:2)
if your applications use lots of disk reads/writes and network IO with the CPU in a waiting state, then you can save power by using lower-end gear.
Or you can add a RamSan/FlashSystem and enjoy the 21st century.
Re: (Score:3)
"Enterprise" hardware doesn't mean the fastest... Infact it's the opposite, as enterprise hardware has longer development cycles.
Enterprise gear means things like ECC memory, BMCs monitoring server health, HDDs that won't freeze up for several minutes retrying a single unreadable block error, etc. And if you feel like skimping on it, you'll end up paying much more in the long run, as a sin
And scaling up (Score:2)
Over the last few years we went from thirty-some database servers to a dozen at most.
Modern hardware is insanely powerful, and you get a huge bang for the buck consolidating a few servers onto a single machine.
Re: (Score:3)
This. With the availability and reliability of SANs, virtual machine software, hypervisors, rack/blades, and such, there are a lot of tasks which are best moved to a rack/blades/SAN/VM architecture. Even high/extreme I/O can be handled by virtualization on POWER and SPARC platforms.
These days, for most tasks [1], the question is why not a rack/blade solution. A half-rack with a blade enclosure and a drive array oftentimes can do more than 2-3 racks of 1U machines.
Security separation is getting better and
Re: (Score:3)
This is complete nonsense. Blade servers are more expensive, and CAN'T outperform simple 1U servers. 1U servers are packed to the gills with the hottest components that can be kept cool given the amount of space they have to work with. Blade servers, or any other design, can't possibly pack things more densely than 1U servers.
What about cutting down all the AC to DC to AC (Score:3)
Why can't there BE a UPS with ATX DC out?
Re: (Score:1)
Because you, in your infinite laziness, have chosen not to manufacture and sell us one.
Re: (Score:2)
Why can't there BE a UPS with ATX DC out?
Surely there are DC UPS systems? Or what do you call a DC system and DC-to-ATX PSUs, if not just that?
Re: (Score:2)
Why can't there BE a UPS with ATX DC out?
Because there's this thing called resistance...
Re: (Score:2)
Easily solved by cable size. Run a busbar system to each rack and tap off those.
Re: (Score:2)
I've wondered why NEBS 48 volt systems are not more common. 48 volts is high enough that it doesn't need the big fat wires that 12VDC high-amperage connections do, and computers would just need a DC-DC converter to convert the incoming voltage to the 12 and 5 volt rail voltages.
It would be nice to see a standard 48 volt connector, something other than the one used for phantom power to mics. Preferably a connector with a built-in high-amp switch (DC has no zero crossings, so DC switches have to be beefy enough
Re: (Score:3)
48 volts is high enough that it doesn't need the big fat wires that 12VDC high-amperage connections do
48V, while not as bad as 12V, still means much thicker cables and/or higher cable losses (most likely some combination of both) than normal mains voltages.
Servers at full load can draw a heck of a lot of power. 500W is not unreasonable for a beefy 1U server; put 42 of those in a rack and you are looking at 21 kW.
Feed those servers with a 240V single-phase supply and you are looking at about 88A. That is high but manageable with the sort of cable sizes you can find at most electrical wholesalers.
Feed those serve
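The arithmetic behind that comparison, as a quick sketch (idealized numbers only: unity power factor and no conversion losses are assumed):

```python
# Current drawn by a fully loaded 21 kW rack at different distribution
# voltages (idealized: unity power factor, no conversion losses).

RACK_POWER_W = 500 * 42  # 42 beefy 1U servers at ~500 W each = 21 kW

for volts in (240, 48, 12):
    amps = RACK_POWER_W / volts
    print(f"{volts:>4} V feed -> {amps:>7,.0f} A per rack")

# 240 V -> ~88 A    (thick but ordinary cable)
#  48 V -> ~438 A   (busbar territory)
#  12 V -> ~1,750 A (impractical over any distance)
```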
Re: (Score:2)
Feed those servers with a 240V single-phase supply and you are looking at about 88A. That is high but manageable with the sort of cable sizes you can find at most electrical wholesalers.
This is a problem if you're running commodity solutions with wires everywhere. If you're going to design a DC-only datacentre, you'd likely run very high current busbars over the aisle, and then tap busses onto each individual rack. Cables, while flexible (in more ways than one), are not really ideal from an engineering point of view.
Re: (Score:2)
Comparing 48V DC to 240/415V TP+N AC.
For 48V DC you have:
Higher wiring costs (both materials and labour).
Higher end-system costs.
More restricted choice of end systems.
Most likely higher resistive losses in wiring.
Greater difficulty installing and removing stuff*.
Higher losses in the primary side of the isolating switched-mode converter in your end system.
For 230/400V TP+N AC you have:
Losses from inverters in UPS systems and rectifiers in end devices.
Vendor lock-in when parallel-running UPS units.
* A new conne
Re: (Score:2)
Part of your list of downsides takes double credit. You don't have higher resistive losses if your wiring costs more. Resistive losses are the reason you buy bigger cables. But that's the key. I wasn't proposing a wire-based solution. Busbars are used in high-current applications specifically due to the insane cost of wiring.
Yes that makes your system harder to implement but that does not equate to difficulty in installation / removal. That equates to an engineering design problem and several houses have so
Re: (Score:2)
It's an old myth that AC-DC-AC conversion is a big loss. The percentage losses are in the single digits. And running a DC-powered datacenter is a HUGE hassle.
The idea started way the hell back before "80 PLUS" power supplies, when most PSUs were 60% efficient but DC power supplies were more commonly 80%+ efficient. Now that common AC PSUs are much better, the DC advantages are long gone. There was also another class of losses from intermediate power distribution, but they can be cleaned up as well.
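As a sketch of why the difference comes out in the single digits, here is a simple efficiency-chain comparison (every efficiency figure below is an assumption for illustration, not a measured value):

```python
# Sketch of end-to-end efficiency for the two distribution schemes.
# All stage efficiencies are illustrative assumptions.

def chain_efficiency(*stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

# AC path: double-conversion UPS (~94%) feeding a modern 80 PLUS PSU (~92%)
ac_path = chain_efficiency(0.94, 0.92)

# DC path: one big rectifier (~95%) feeding a DC-DC converter in the server (~93%)
dc_path = chain_efficiency(0.95, 0.93)

print(f"AC path: {ac_path:.1%}  DC path: {dc_path:.1%}  "
      f"difference: {abs(dc_path - ac_path):.1%}")
```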
Re: (Score:2)
What I've wondered about is using servers designed for different power requirements at different times.
For example, server or blade "A" runs an Intel Atom and is made to be slow but energy saving. Server "B" runs much faster, but takes more electricity.
Add a SAN, cluster filesystems, and something like vMotion, and what can happen is that VMs that see heavy usage during the day can be moved to the higher speed servers as load permits. Then come evening, they get moved back to the slower processors, and the faster s
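A minimal sketch of that placement decision (the pool names, threshold, and the idea of handing the result to something like vMotion are hypothetical illustrations, not an actual scheduler):

```python
# Minimal sketch: pick a server pool for each VM based on current load,
# then (hypothetically) hand the decision to a migration tool such as
# vMotion. Pool names and the threshold are made-up assumptions.

from dataclasses import dataclass

@dataclass
class VM:
    name: str
    cpu_load: float  # fraction of one core currently in use, 0.0-1.0

LOW_POWER_POOL = "atom-blades"    # slow but frugal
HIGH_POWER_POOL = "xeon-blades"   # fast but power-hungry
BUSY_THRESHOLD = 0.5              # assumed cutoff for "needs the fast pool"

def target_pool(vm: VM) -> str:
    """Decide which pool a VM should live in right now."""
    return HIGH_POWER_POOL if vm.cpu_load > BUSY_THRESHOLD else LOW_POWER_POOL

vms = [VM("web-frontend", 0.8), VM("nightly-batch", 0.1), VM("mail", 0.3)]
for vm in vms:
    print(f"{vm.name}: migrate to {target_pool(vm)}")
```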
Throwing hardware at problems.. (Score:2)
So for years I've been hearing that it's much cheaper to throw faster hardware at a problem than to tune an application or a server. It's finally coming back to bite us. Imagine if tuning had gained a 10% or 15% improvement. How much power, and how many millions of dollars, does that translate to?
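A back-of-the-envelope answer under assumed numbers (fleet size, per-server draw, PUE, and electricity price are all made up for illustration):

```python
# What a 10-15% tuning win is worth across a fleet.
# Fleet size, per-server draw, PUE, and price are illustrative assumptions.

SERVERS = 10_000
WATTS_PER_SERVER = 300
PUE = 1.6
PRICE_PER_KWH = 0.10
HOURS = 24 * 365

baseline_kwh = SERVERS * WATTS_PER_SERVER / 1000 * PUE * HOURS
for saving in (0.10, 0.15):
    print(f"{saving:.0%} tuning gain -> "
          f"${baseline_kwh * saving * PRICE_PER_KWH:,.0f} saved per year")
```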
Re: (Score:2)
Rabid Zombie Power Hog (Score:1)
Location Location Location (Score:2)
Given that data centers are basically big electric heaters doing some number crunching along the way, it might be sensible to put them in cold climates rather than hot ones, so that a) it's easier to dump all the heat generated and b) the heat has some practical uses.
Re: (Score:2)
it might be sensible to put them in cold climates rather than hot ones
People outside cold climates need servers geographically nearby too... a datacenter that is far away will have high latency: so far no one's found a way around the speed-of-light limitation.
How about burying datacenters, though... under the ground, where the temperature is more uniform, and where you can also bury huge copper arrays and put your servers in thermal contact with the thermally conductive arrays, to conduct the heat away...
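To put rough numbers on the latency point, a quick sketch (assuming light travels at roughly two-thirds of c in optical fiber; routing and queuing delays come on top):

```python
# Quick check of the speed-of-light argument: best-case round-trip time
# over optical fiber, where light travels at roughly 2/3 of c.

C = 299_792          # km/s, speed of light in vacuum
FIBER_FACTOR = 2 / 3  # approximate propagation factor in fiber

for distance_km in (100, 1_000, 5_000):
    rtt_ms = 2 * distance_km / (C * FIBER_FACTOR) * 1000
    print(f"{distance_km:>5} km away -> best-case RTT ~{rtt_ms:.1f} ms "
          "(before routing and queuing delays)")
```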
Re: (Score:2)
It is easier to just drill wells. Google geothermal heat pump.
Switching off (Score:2)