
Making Your Datacenter Into Less of a Rabid Zombie Power Hog

Nerval's Lobster writes "Despite the growing list of innovative (and sometimes expensive) adaptations designed to transform datacenters into slightly-less-active power gluttons, the most effective ways to make datacenters more efficient are also the most obvious, according to researchers from Stanford, Berkeley and Northwestern. Using power-efficient hardware, turning power down (or off) when systems aren't running at high loads, and making sure air-cooling systems are pointed at hot IT equipment—rather than in a random direction—can all do far more to cut datacenter power than fancier methods, according to Jonathan Koomey, a Stanford researcher who has been instrumental in making power use a hot topic in IT. Many of the most-publicized advances in building "green" datacenters during the past five years have focused on efforts to buy datacenter power from sources that also have very low carbon footprints. But "green" energy buying didn't match the impact of two very basic, obvious things: the overall energy efficiency of the individual pieces of hardware installed in a datacenter, and the level of efficiency with which those systems were configured and managed, Koomey explained in a blog post published in conjunction with his and his co-authors' paper on the subject in Nature Climate Change. (The full paper is behind a paywall, but Koomey offered to distribute copies free to those contacting him via his personal blog.)"


  • are a renewable resource.

  • by plopez ( 54068 ) on Saturday June 29, 2013 @04:54PM (#44144325) Journal

    I've pointed this out a number of times, but people do not seem to "get it": if you can reduce your power consumption, there is less waste heat and therefore less cooling cost. Note too that if your applications use lots of disk reads/writes and network I/O with the CPU in a waiting state, you can save power by using lower-end gear, e.g. laptop chips and slower memory instead of full-blown "Enterprise" hardware.
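
A minimal sketch of the cooling-multiplier point above. The PUE of 1.6 and the $0.10/kWh price are illustrative assumptions, not figures from the article or the comment:

```python
# Every watt saved at the server also saves the facility overhead
# (cooling, power distribution) needed to support it. PUE and the
# electricity price below are assumed, illustrative values.

def annual_cost(it_watts, pue=1.6, usd_per_kwh=0.10):
    """Yearly electricity cost for a given IT load, including facility overhead."""
    facility_watts = it_watts * pue
    kwh_per_year = facility_watts * 24 * 365 / 1000
    return kwh_per_year * usd_per_kwh

before = annual_cost(10_000)  # 10 kW of IT load
after = annual_cost(8_000)    # 20% less IT load (lower-end gear, idling down)
print(f"before: ${before:,.0f}/yr  after: ${after:,.0f}/yr  saved: ${before - after:,.0f}/yr")
```

The point is simply that a 20% cut in IT load shows up as roughly a 20% cut in the whole facility bill, cooling included.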

    • by TeknoHog ( 164938 ) on Saturday June 29, 2013 @05:02PM (#44144351) Homepage Journal
      My thoughts exactly. My first web server in 1998 was a laptop, and ever since I have wondered why 'desktop' components waste so much power compared to 'mobile' counterparts. Since 2003 my 'desktop' machines have been built with 'mobile' CPUs (Mini-ITX et al) and I keep asking this: why should a machine waste power willy-nilly just because it is plugged in? I also like the quiet of passively cooled CPUs (of course, other components like PSUs can be passively cooled).
      • Mere power consumption of individual IC packages probably isn't the overwhelming concern; TCO is.
      • If you can run on a low-power system as you describe, it is probably a good candidate for virtualization.
      • I have wondered why 'desktop' components waste so much power compared to 'mobile' counterparts

        Performance and price...

        Laptops stay a few generations behind desktops, in terms of memory and bus speeds, memory and cache sizes, CPU speeds, etc.

        In addition, looking at AMD: they actually made their "mobile" CPUs by testing cores after manufacturing to see which ones could handle low-voltage operation without errors. Those were routed to the mobile CPU line, while the rest were directed to the desktop line.

        • by plopez ( 54068 )

          "Laptops stay a few generations behind desktops, in terms of memory and bus speeds, memory and cache sizes, CPU speeds, etc."
            You still don't get it. My question is: do you need the fastest "state of the art" hardware? If the answer is no, go with the lower-end gear. Who cares how fast a bus is, as long as it is as fast as it MUST be.

      • by antdude ( 79039 )

        What about performance? Desktops are faster, e.g. for gaming. :/

    • by sjames ( 1099 ) on Saturday June 29, 2013 @05:20PM (#44144423) Homepage Journal

      It works the other way too. If you don't cool the servers at all, eventually they stop consuming power ;-)

      • It works the other way too. If you don't cool the servers at all, eventually they stop consuming power ;-)

        Eventually. But not as soon as you might think. Modern servers can tolerate heat fairly well, and many data centers waste money on excessive cooling. As long as you are within the temp spec, there is little evidence that you gain reliability by additional cooling. Google has published data [google.com] on the reliability of hundreds of thousands of disk drives. They found that the reliability was actually better at the high end of the temperature range. This is one reason that Google runs "hot" datacenters today.

        • While that *may* be true for some hardware (and I only say "may" because Google claims it is true, though I'm fairly certain they have fundamental flaws in their accounting of this), I can personally verify that *temperature change*, in the form of increases in temperature even within the stated hardware specifications, has a *HUGE* impact on the longevity of most consumer-grade hardware.

          • I can personally verify that *temperature change*, in the form of increases in temperature even within the stated hardware specifications, has a *HUGE* impact on longevity

            So I can believe Google's peer reviewed and published study of hundreds of thousands of devices, or I can accept your "personal verification". Wow, this is a tough decision.

            • Well, think of it this way. *I* personally have absolutely no vested interest in increasing the frequency with which your own business's in-house hardware infrastructure suffers failures. Google on the other hand...

            • by Cramer ( 69040 )

              It's not "peer reviewed". At best, it's "peer read". Google's data is only 100% valid for GOOGLE. It's their data on their infrastructure. Unless you happen to have a Google Datacenter, the results aren't that valuable to you.

              I keep my DC (~800 sq. ft.) at 68F, mostly because I prefer to work in a cool space (well, cool while I'm in the cold aisle), but also because of cooling capacity: if the HVAC is off, how long does it take to reach 105F? From 68, about 15 minutes; from 82, a few minutes. However rare that may be...

        • by sjames ( 1099 )

          Absolutely. When I ran a small datacenter, I instituted the change from 68F to 75F as a standard. In spite of predictions of disaster, the only thing that changed is the power bill went down.

          • Absolutely. When I ran a small datacenter, I instituted the change from 68F to 75F as a standard. In spite of predictions of disaster, the only thing that changed is the power bill went down.

            If you have good airflow, you can go much higher than that. The critical factor is the temp of the components, not the room temp. Dell will warranty their equipment up to 115F (45C). Google runs some of their datacenters at 80F, and others at up to 95F.

            There are some drawbacks to "hot" datacenters. They are less pleasant for humans, and there is less thermal cushion in the event of a cooling system failure. But many datacenters avoid that problem by replacing chillers with 100% outside ambient-temperature air.

            • by sjames ( 1099 )

              Sure, I suspect we could have gone hotter then, even with a datacenter designed for 68F, but it was a bit cutting edge at the time just to get to 75 and we would have had to alter the airflow.

          • by Cramer ( 69040 )

            This suggests your DC may be rather poorly insulated.

            I don't know your environment (pre- or post-change), so I cannot say what that +7F did to the thermodynamics of your HVAC system... +7F room, +20F servers, +50F exhaust? (Greater delta-T means faster, more efficient energy transfer; the sketch below puts rough numbers on it.) For example, if you're in Arizona and your heat-rejection coils are only reaching 120F, they aren't going to be very good at dumping heat into >100F air. (This is where water cooling should be used.)

            (Note: I had that "talk" w
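
A rough sketch of the delta-T point above. The coil and ambient temperatures are taken from the comment's example; treating rejected heat as linear in the temperature difference is a simplifying assumption:

```python
# For a fixed coil and airflow, rejected heat scales roughly with the
# difference between coil temperature and ambient air temperature.
# Temperatures below are the commenter's example numbers, not measurements.

def relative_rejection(coil_f, ambient_f):
    return coil_f - ambient_f  # proportional to heat rejected, same coil/airflow

mild = relative_rejection(120, 75)     # 120F coil dumping into 75F outside air
desert = relative_rejection(120, 105)  # same coil on a 105F Arizona afternoon
print(f"capacity in desert air: {desert / mild:.0%} of the mild-weather case")
```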

            • by sjames ( 1099 )

              I don't see why it would suggest particularly poor insulation. Any time you move a room's temperature closer to the outside temp, you can expect the bills to go down a bit. In our case it meant that the room was a bit above the outside temp for larger parts of the day which makes a huge difference, especially when you're using outside air when conditions are favorable.

              • by Cramer ( 69040 )

                It doesn't have to be particularly poor, just not sufficient for a data center. You want the heat load in the room to be as near 100% equipment as possible -- no leaks from outside the room. You also want the cold to stay in the room -- i.e. not blowing through cracks (or holes) in the floor, wall seams, through doors, etc. It's fairly simple to test the efficiency of the room... turn off all load, and watch how much the HVAC has to work to keep it at the setpoint.

                As I said, I don't know the specifics of your environment.

    • if your applications use lots of disk reads/writes and network I/O with the CPU in a waiting state, you can save power by using lower-end gear.

      Or you can add a RamSan/FlashSystem and enjoy the 21st century.

    • you can save power by using lower-end gear, e.g. laptop chips and slower memory instead of full-blown "Enterprise" hardware.

      "Enterprise" hardware doesn't mean the fastest... In fact, it's the opposite, as enterprise hardware has longer development cycles.

      Enterprise gear means things like ECC memory, BMCs monitoring server health, HDDs that won't freeze up for several minutes retrying a single unreadable block error, etc. And if you feel like skimping on it, you'll end up paying much more in the long run, as a sin

  • Last few years we went from 30-some database servers to a dozen at most

    Modern hardware is insanely powerful, and you get a huge bang for the buck consolidating a few servers onto a single machine.

    • by mlts ( 1038732 ) *

      This. With the availability and reliability of SANs, virtual machine software, hypervisors, rack/blades, and such, there are a lot of tasks which are best moved to a rack/blades/SAN/VM architecture. Even high/extreme I/O can be handled by virtualization on POWER and SPARC platforms.

      These days, for most tasks [1], the question is why not a rack/blade solution. A half-rack with a blade enclosure and a drive array oftentimes can do more than 2-3 racks of 1U machines.

      Security separation is getting better and better...

      • These days, for most tasks [1], the question is why not a rack/blade solution. A half-rack with a blade enclosure and a drive array oftentimes can do more than 2-3 racks of 1U machines.

        This is complete nonsense. Blade servers are more expensive, and CAN'T outperform simple 1U servers. 1U servers are packed to the gills with the hottest components that can be kept cool given the amount of space they have to work with. Blade servers, or any other design, can't possibly pack things more densely than 1U servers.

  • Why can't there be a UPS with ATX DC out?

    • by Anonymous Coward

      Because you, in your infinite laziness, have chosen not to manufacture and sell us one.

    • by gl4ss ( 559668 )

      Why can't there be a UPS with ATX DC out?

      Surely there are DC UPS systems? Or what do you call a DC system and DC->ATX PSUs, if not just that?

    • Why can't there be a UPS with ATX DC out?

      Because there's this thing called resistance...

    • by mlts ( 1038732 ) *

      I've wondered why NEBS 48-volt systems are not more common. 48 volts is high enough that it doesn't need the big fat wires that 12VDC high-amperage connections do, and computers would just need a DC-DC converter to convert the incoming voltage to the 12 and 5 volt rail voltages.

      It would be nice to see a standard 48-volt connector, something other than the one used for phantom power to mics. Preferably a connector with a built-in high-amp switch (DC has no zero crossings, so DC switches have to be beefy enough to break the arc).

      • 48 volts is high enough that it doesn't need the big fat wires that 12VDC high-amperage connections do

        48V, while not as bad as 12V, still means much thicker cables and/or higher cable losses (most likely some combination of both) compared to normal mains voltages.

        Servers at full load can draw a heck of a lot of power. 500W is not unreasonable for a beefy 1U server; put 42 of those in a rack and you are looking at 21kW.

        Feed those servers with a 240V single-phase supply and you are looking at about 88A. That is high but manageable with the sort of cable sizes you can find at most electrical wholesalers.
        Feed those servers at 48V DC and you are looking at well over 400A (the sketch below runs the numbers).
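
A quick check of the arithmetic above, extended to the DC voltages under discussion. It is plain P/V division and deliberately ignores power factor and conversion losses:

```python
# Current needed to deliver one rack's worth of power at various feed voltages.

RACK_WATTS = 42 * 500  # 42 x 500W 1U servers = 21 kW

for volts in (240, 48, 12):
    amps = RACK_WATTS / volts
    print(f"{volts:>4} V feed -> {amps:,.0f} A per rack")

# 240 V feed ->    88 A per rack   (manageable cable sizes)
#  48 V feed ->   438 A per rack   (busbar territory)
#  12 V feed -> 1,750 A per rack
```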

        • Feed those servers with a 240V single-phase supply and you are looking at about 88A. That is high but manageable with the sort of cable sizes you can find at most electrical wholesalers.

          This is a problem if you're running commodity solutions with wires everywhere. If you're going to design a DC-only datacentre, you'd likely run very-high-current busbars over the aisle and then tap buses onto each individual rack. Cables, while flexible (in more ways than one), are not really ideal from an engineering point of view.

          • Comparing 48V DC to 240/415V TP+N AC.

            For 48V DC you have:

            Higher wiring costs (both materials and labour).
            Higher end system costs.
            More restricted choice of end systems.
            Most likely higher resistive losses in wiring.
            Greater difficulty installing and removing stuff*.
            Higher losses in the primary side of the isolating switched mode converter in your end system.

            For 230/400V TP+N AC you have:

            Losses from inverters in UPS systems and rectifiers in end devices.
            Vendor lock-in when parallel-running UPS units.

            * A new connection...

            • Part of your list of downsides takes double credit. You don't have higher resistive losses if your wiring costs more; resistive losses are the reason you buy bigger cables. But that's the key: I wasn't proposing a wire-based solution. Busbars are used in high-current applications specifically due to the insane cost of wiring.

              Yes, that makes your system harder to implement, but that does not equate to difficulty in installation/removal. That equates to an engineering design problem, and several houses have solved...

    • It's an old myth that AC-DC-AC conversion is a big loss. The losses are in the single digits, and running a DC-powered datacenter is a HUGE hassle.

      The idea started way the hell back before "80 Plus" power supplies, when most PSUs were 60% efficient but DC power supplies were more commonly 80%+ efficient. Now that common AC PSUs are much better, the DC advantages are long gone. There was also another class of losses from intermediate power distribution, but those can be cleaned up as well.
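
A back-of-the-envelope version of the single-digit claim above. The individual conversion efficiencies are assumed, plausible values for a modern double-conversion UPS, an 80 Plus PSU, and a 48V DC plant; they are not figures from the article:

```python
# Multiply the efficiencies along each power path and compare the results.

def chain(*efficiencies):
    result = 1.0
    for e in efficiencies:
        result *= e
    return result

ac_path = chain(0.95, 0.92)  # double-conversion UPS, then 80 Plus Gold AC PSU
dc_path = chain(0.96, 0.93)  # rectifier plant to 48V DC, then DC-DC in the server

print(f"AC path: {ac_path:.1%} end-to-end, DC path: {dc_path:.1%}")
print(f"difference: {dc_path - ac_path:.1%} -- single digits, as claimed")
```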

  • So for years I've been hearing that it's much cheaper to throw faster hardware at a problem than to tune an application or a server. It's finally coming back to bite us. Imagine if tuning had gained a 10% or 15% improvement. How much power, and how many millions of dollars, does that translate to? (The sketch below puts rough numbers on it.)
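
A quick answer to that question under loudly hypothetical assumptions: a fleet of 50,000 servers averaging 300W each, a PUE of 1.6, and $0.10/kWh. None of these numbers come from the article; they only show the order of magnitude:

```python
# Annual dollar value of a 10-15% efficiency gain across a hypothetical fleet.

SERVERS = 50_000
AVG_WATTS = 300
PUE = 1.6
USD_PER_KWH = 0.10
HOURS_PER_YEAR = 24 * 365

baseline_kwh = SERVERS * AVG_WATTS * PUE * HOURS_PER_YEAR / 1000
for gain in (0.10, 0.15):
    saved_usd = baseline_kwh * gain * USD_PER_KWH
    print(f"{gain:.0%} tuning gain -> roughly ${saved_usd / 1e6:.1f}M per year")
```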

    • Compared to the labor rates for doing such a thing? Electricity is cheap, and you seldom have to justify budgeting for it.
  • I think I used to work in one of those.
  • Given that data centers are basically big electric heaters doing some number crunching along the way, it might be sensible to put them in cold climates rather than hot ones, so that a) it's easier to dump all the heat generated, and b) the heat has some practical uses.

    • by mysidia ( 191772 )

      might be sensible to put them in cold climates rather than hot

      People outside cold climates need servers geographically nearby too... a datacenter that is far away will have high latency: so far no one's found a way around the speed-of-light limitation (the sketch below puts rough numbers on it).

      How about burying datacenters, though... underground, where the temperature is more uniform, and where you can also bury huge copper arrays and put your servers in thermal contact with the thermally conductive arrays to conduct the heat away...
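
A rough sketch of the latency floor behind the "geographically near" point above, assuming signals propagate through fiber at roughly two-thirds the speed of light:

```python
# Distance alone sets a lower bound on round-trip time, regardless of how
# efficient or well-cooled the remote datacenter is.

C_KM_PER_MS = 300_000 / 1000  # speed of light: ~300 km per millisecond
FIBER_FACTOR = 2 / 3          # typical propagation speed in optical fiber

def min_rtt_ms(distance_km):
    one_way_ms = distance_km / (C_KM_PER_MS * FIBER_FACTOR)
    return 2 * one_way_ms

for km in (100, 1_000, 5_000):
    print(f"{km:>5} km away -> at least {min_rtt_ms(km):.1f} ms round trip")
```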

  • Switching hardware off and on will wear it out for various reasons: power supplies are more likely to fail when switching on, and hard disks' mechanical parts suffer from hot/cold cycles. That means switching off to save power causes hardware to be replaced more often, which also has an environmental cost. I did not read TFA, but from the summary I understand that the benefit outweighs the cost; is that correct?
