Power Consumption and the Future of Computing

mrdirkdiggler writes "ArsTechnica's Hannibal takes a look at how the power concerns that currently plague datacenters are shaping next-generation computing technologies at the levels of the microchip, the board-level interconnect, and the datacenter. In a nutshell, engineers are now willing to take on a lot more hardware overhead in their designs (thermal sensors, transistors that put components into sleep states, buffers and filters at the ends of links, etc.) in order to get maximum power efficiency. The article, which has lots of nice graphics to illustrate the main points, mostly focuses on the specific technologies that Intel has in the pipeline to address these issues."
  • anyone ever compared power usage (say, per flop) advances to the power usage (per mile) advances of our innovative combustion engine industry?
    • anyone ever compared system failure (say, per computer-hour) improvements to the system failure (per mile) improvements of our innovative combustion engine industry?

      anyone ever compared computer system transparency (say, how much of it has publicly available documentation) to the system transparency of our innovative combustion engine industry?
    • by dbIII ( 701233 )
      Yes - it comes to a yearly rate of 37 Volkswagens per football field.
    • by Nullav ( 1053766 )
      That's like comparing the number of plane collisions per year to the number of car collisions per year. (For one, a CPU doesn't move in any meaningful sense.)
  • With the availability of PC power supplies* in excess of 1000 watts, and the mine's-bigger-than-yours demographic, I wonder what bearing this has on power consumption as well. Perhaps peripheral manufacturers need to concentrate on power usage too.

    [*] - http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=2010320058+1131428171 [newegg.com]
    • Re: (Score:3, Interesting)

      by MP3Chuck ( 652277 )
      If you look at the number of reviews, though, compared to something like a more modest 500W PS [newegg.com], it would seem that not too many people really use/need a 1KW PS.

      In fact, those high-end 1KW supplies might even be better for power consumption since they tend to have higher efficiencies than the cheapo options.
    • by ncc74656 ( 45571 ) *

      With the availability of PC power supplies* in excess of 1000 watts, and the mine's-bigger-than-yours demographic,

      WTF actually needs that kind of power? I've built 16-disk 3U RAID arrays that don't use nearly that much power. Each is powered by a 650W RPS (made up of three hot-swappable 350W power supplies, capable of running on two if one fails), and actual maximum power consumption (measured with a clamp-on ammeter and a power cord with one of the wires pulled out in a loop) was somewhere around 350

      • Am I the only one who thinks Aero graphics is an environmental disaster...?

        I don't know who needs 1000W but it's easy to make SLI gaming rigs go over 500W.

        Stick a couple of the "twin power connector" cards in a box with a big CPU, overclock the hell out of it...that's four or five hundred watts right there.

  • The problem is the massive rollout of servers and blades into datacenters that are underutilized. The One Job One Box Syndrome. Most DC hardware is barely used, but there it sits, idling and sucking power. If we leaped into massive virtualization we'd be able to reduce the number of physical components and save power.
    • Batch, PBS, NQS, SGE, Torque, LVS. . .

      Choose your poison.

       
    • by holysin ( 549880 )
      Yup, only downside would be the single point of failure for say, all of the company's servers. One job/one box has its benefits. Optimizing power and hardware utilization is not one of these benefits. (Of course if you do decide to utilize virtualization, odds are you're smart enough that if your business requires 24/7 uptime, you have a hot swap server serving as a primary backup which would be easy for you to roll out...) Power consumption is not on the radar for most companies now (excluding the huge
      • Virtualization can actually be used to increase redundancy. You can set up virtualization across a number of machines, then set it up so that if one (physical) machine fails, the VM simply shifts the load to a different physical machine (probably a bit intensive on hard drives because of the additional redundancy, but then, it's hard to have too much redundancy for your data anyway in a business environment). The big catch to virtualization is gonna be the TCO if you want to use a proprietary OS. Either y
      • Yup, only downside would be the single point of failure for say, all of the company's servers. One job/one box has its benefits.

        Typically, if you're going to virtualize - the minimum number of physical boxes is probably 3. During normal operations, you run your load spread across all 3 boxes, with the option to consolidate down to 2 boxes if one goes down. You can do it with just 2 boxes, but it's not going to be as nice. Naturally, if you have the server load to require 4+ boxes, it becomes much easi
  • Computer performance being limited by power. Who'd have thought.

     
  • Big cuts (Score:4, Interesting)

    by Forge ( 2456 ) <kevinforge AT gmail DOT com> on Saturday June 30, 2007 @09:24AM (#19699261) Homepage Journal
    The thing with power usage is that nobody seems interested in attacking the 2 largest areas of power wastage. (except maybe google)

    #1. DC/AC conversion.
    Your typical datacenter has a UPS or batteries and inverters (enterprise-scale UPS). What this amounts to is: AC power from your utility company is converted to DC for storage in a battery, then converted back to AC to supply the server's power supply, then converted back to DC to actually run the components of the computer.

    Ever notice how hot a UPS gets during normal operation? That's power going to waste. The solution is to run our servers at a standardised DC voltage. 48 volts sounds good, since that is already defined for telecom equipment (correct me if I'm wrong; I am not sure of the figure).

    #2. Raised flour and underground AC. A good chunk of datacenter power is used to run the air conditioning. If we abandoned the notion of raised flours and replaced them with, say, insulated ceiling-mounted ducts with vents facing each rack, we could cut that down.

    While we are at it, here is another simple power tip: turn your rows of racks back to back. When they all face the same direction, hot air blows from the back of one machine to the front of another, forcing the AC to work overtime. In my design, I would have extraction fans between my back-to-back racks, pumping the hot air outside (or into the office during winter, for those of you who have winter).
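
    For a rough feel for how much the double-conversion chain costs, here is a minimal sketch. The per-stage efficiencies are assumed round numbers for illustration only, not measurements, and the replies below dispute how big the gap really is in practice:

    # Hypothetical per-stage efficiencies (assumed for illustration only).
    # Path A: utility AC -> rectifier -> inverter -> server AC power supply.
    # Path B: utility AC -> rectifier to a 48 V bus -> DC/DC converter at the server.
    ac_double_conversion = [0.95, 0.94, 0.88]
    dc_bus_48v = [0.95, 0.92]
    def chain_efficiency(stages):
        eff = 1.0
        for stage in stages:
            eff *= stage
        return eff
    paths = {"AC double conversion": ac_double_conversion, "48 V DC bus": dc_bus_48v}
    for name, stages in paths.items():
        eff = chain_efficiency(stages)
        print(f"{name:22s}: {eff:.1%} overall, "
              f"{1000 * (1 - eff):.0f} W of heat per kW drawn from the utility")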
    • by johnw ( 3725 )

      #2. Raised flour and underground AC. A good chunk of datacenter power is used to run the air conditioning. If we abandoned the notion of raised flours and replaced them with, say, insulated ceiling-mounted ducts with vents facing each rack, we could cut that down.
      ITYM self-raising flour. Raised flour is a cake.
      • I thought he was on to something, running your power conduit in flour to insulate heat from the rest of the datacenter. :P

        Unfortunately for me, I got back from Costco -AFTER- reading your reply explaining it's a typo. I was going to reinsulate my attic. Anyone have a use for two pallets of flour?
    • Re:Big cuts (Score:5, Informative)

      by NeverVotedBush ( 1041088 ) on Saturday June 30, 2007 @10:13AM (#19699553)
      Lots of errors in your suppositions.

      DC/AC conversion? The bigger data centers can't use batteries - too many, too big of a hazard, etc. They use rotary UPSes. These stay AC all the way.

      Additionally - power distribution is better at higher voltages. It's that current squared thing. More and more equipment is also going to higher voltage distribution on the boards with local DC/DC conversion at the load. For the exact same reason. Our center distributes at 208 volts.

      The argument against a raised floor is bogus. It acts (and is necessary) not only as cabling space, but also for air distribution. Heated air rises. Feeding cold air up from the floor to where it flows into the racks to be heated and then recovered at the ceiling is the most efficient way to move the air. The fact that the floor is not insulated is a non-issue. The whole room is being cooled. The temperature is the same on either side of the floor tiles.

      And about the face to face and back to back layout of racks - every single one of our racks is already in that orientation for exactly that reason. We have hot aisles and cold aisles and the temperature difference between them is pretty marked.

      The next wave is a move back to "water" cooling. Either plumbing liquid to each rack, where it locally grabs heat from air circulated within the rack, or plumbing into the boxes themselves. This is simply because heat loads are going up and it gets harder (and louder) to pump enough air through a building to cool the denser newer equipment. Plus people don't have to put on jackets to go out on the floor or yell to be heard in a big data center.
      • Ideally (not stuck with hardware designed for an office environment) you do this:

        Air should flow from cold aisles to hot aisles by a simple pressure difference. Those little CPU fans generate heat and lots of noise. It's better to rely on airflow supplied by the building. This of course means that the cases have ductwork and aerodynamic heat sinks as required. I've seen it for a single rack; it's really nice to eliminate the individual CPU fans. Reliability goes up (no CPU fan failures) and noise goes down.
        • There is some advantage to grabbing outside air, using it once, and then venting it up a chimney. Modern computers don't need to be all that cold; for the drives it is even bad to be really cold (see Google's results). Cooling the air is expensive. Of course, some places have extreme variation in outside air temperature that must be considered.

          The flaw in that plan (pulling cold air from outside the building) is that when you bring 20 degree F air into a data center that is at 70 degrees F, you'll find th
          • by Forge ( 2456 )
            Where I come from the air coming out the back of your servers is usually cooler than the air outside. 80 degrees is average all year round. :)

            Which by the way is why I have had to worry so much about cooling.

            As for not going to battery. I guess we don't have any large data centres here. Our largest phone company only has around 1.6 Million subscribers. Almost twice as many clients as our largest bank.
    • Re:Big cuts (Score:5, Informative)

      by Firethorn ( 177587 ) on Saturday June 30, 2007 @10:16AM (#19699557) Homepage Journal
      I've seen raised-floor AC done right. Each rack was sealed, had a vent in the bottom and a vent into the ceiling. The AC pushed cold air into the subfloor, which was then sucked into the racks, with hot air rising into the ceiling, where the AC pulled it to be cooled again.

      Also, 99% of UPS units don't convert AC to DC unless it's charging the batteries. Normally this would only be a trickle charge. If the UPS is providing power, you're in a critical situation anyways, I wouldn't worry about the fact that a UPS isn't particularly efficient, as you're probably spending 99% of your time not on UPS.

      As for switching to telephone industry standard 48V power, you'd be converting it again to whatever the equipment wants, much of it 12V or less. 120VAC->12VDC is more efficient than 120VAC->48VDC->12VDC. In addition you run into the problem that 120VAC over 12-gauge cable wastes less than half of the power that the same wattage of 48VDC would waste over the same diameter cable. So you'd have to use heavier gauge cable - payback isn't quick for that by any means.

      You might be able to get away with it on a rack level, powering all the blades on 48V via rails to a couple of redundant power supplies somewhere in the rack. Either top or bottom, depending upon cooling and other requirements, though the middle might be an interesting choice, as it'd allow you to have half the wattage running over the rails on average (you'd have two runs instead of one).

      You want to save power? I'd switch to feeding the racks/power supplies with 240V lines. Half the line resistance for the wattage.
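
      To put rough numbers on the cable-loss point: for a fixed load, the current halves every time you double the voltage, and resistive loss goes as the square of the current. A quick sketch; the 1 kW load, the 15 m round trip, and the 12 AWG resistance are assumed purely for illustration:

      # Assumed example: a 1 kW load fed over roughly 15 m (50 ft) round trip of
      # 12 AWG copper, about 0.08 ohm total. Loss = I^2 * R, with I = P / V.
      power_w = 1000.0
      wire_ohms = 0.08
      for volts in (12, 48, 120, 240):
          amps = power_w / volts
          loss_w = amps ** 2 * wire_ohms
          print(f"{volts:4d} V: {amps:6.1f} A, about {loss_w:6.1f} W lost in the run")
      # The 12 V row is why low-voltage rails only ever run inches, not meters.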
      • Re: (Score:3, Informative)

        by DaleGlass ( 1068434 )

        Also, 99% of UPS units don't convert AC to DC unless it's charging the batteries. Normally this would only be a trickle charge. If the UPS is providing power, you're in a critical situation anyways, I wouldn't worry about the fact that a UPS isn't particularly efficient, as you're probably spending 99% of your time not on UPS.

        That's a cheap, consumer oriented UPS. Datacenters use the kind described [wikipedia.org], ones that are always doing the AC -> DC -> AC conversion. What this achieves is that instead of the UPS

      • Also, 99% of UPS units don't convert AC to DC unless it's charging the batteries. Normally this would only be a trickle charge.
        That trickle adds up. Try doing the math for a datacenter with 100,000 blades (relatively small), which would suggest at least 20,000 UPSes. Consider that that trickle comes in at about 1.5% inefficiency, and look at the end result. You might be surprised.
        • by jbengt ( 874751 )
          ? 20,000 UPSs ?!?
          Try 2 to 4 big ones.
        • As the other poster noted, you'd only need 20k UPS units with 100k blades if you're using small UPS units, not 'building' level ones that sit somewhere else and have whole racks of batteries.

          As for the 1.5% efficiency - The larger the UPS, the more efficient the charging system. Still, you can't get away from the fact that you need a float charge for lead-acid batteries, indeed, for most rechargeable technologies. Still, the number of batteries needed depends on how many kwh you need to store. For 100k bla
      • by jbengt ( 874751 )
        "you're probably spending 99% of your time not on UPS."

        Almost by definition, you're always going thru the UPS; what you're not doing 99% of the time is discharging the batteries.
        And a large, efficient UPS is probably only around 90% efficient at normal loads.
        At very low loads, they can actually use more energy than at full loads.
        So a 250 kVA UPS is going to turn about 20 to 25 kW of energy into heat, even when the equipment it's serving is idling.
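
        Back-of-the-envelope, that overhead is real money; the electricity price and the cooling overhead below are assumed figures, and only the 20 to 25 kW loss range comes from the numbers above:

        ups_loss_kw = 22.5         # midpoint of the 20-25 kW figure above
        price_per_kwh = 0.10       # assumed utility rate, $/kWh
        cooling_kwh_per_kwh = 0.4  # assumed extra cooling energy per kWh of heat removed
        hours_per_year = 24 * 365
        loss_kwh = ups_loss_kw * hours_per_year
        cooling_kwh = loss_kwh * cooling_kwh_per_kwh
        cost = (loss_kwh + cooling_kwh) * price_per_kwh
        print(f"{loss_kwh:,.0f} kWh/yr of UPS loss plus {cooling_kwh:,.0f} kWh/yr of cooling"
              f" is roughly ${cost:,.0f}/yr at ${price_per_kwh:.2f}/kWh")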
      • by Skapare ( 16644 )

        Unfortunately, in the USA, power to most commercial/industrial buildings is not available in 240 volts. Power on a large scale is provided in three phase so the power company distribution is kept in balance. But in the USA the primary choices for that are 208/120 volts, or 480/277 volts. Many power supplies would probably work OK at 277 volts, but since they are not specified for that, it's risky from many perspectives. 208 volts would work for full range power supplies, but maybe not for those that hav

        • Unfortunately, in the USA, power to most commercial/industrial buildings is not available in 240 volts.

          Huh, I've seen it available in most buildings I've been in. Even so, as long as it's AC, you can efficiently transform voltages around, even if you need a big transformer in a mechanical room somewhere.

          120 volts doesn't come in on its own set of wires; it's set up as a split phase via grounding from two 240 volt lines.

          My general point is that it's more efficient to move high voltage around than low voltag
          • by Skapare ( 16644 )

            Most commercial/industrial buildings have 3-phase power. Most 3-phase power is of the "star/wye" configuration, which means 3 separate 120 volt transformer secondaries wired to a common grounded neutral. At 120 degrees phase angle, the voltage between any 2 of these 3 lines is 208 volts, not 240 volts. There are some exceptions where commercial buildings get single phase power, or an older delta type 3-phase system that has various kinds of problems with it.

            They will use standard 240 volt outlets for th

    • "While we are at it here is another simple power tip. Turn your rows of racks back to back. When they all face the same direction, hot air blows from the back of one machine to the frunt of another, forcing the AC to work overtime. In my design, I would have extraction fans betwean my back to back racks, pumping the hot air outside (or into the office during winter." Pumping the hot air outside might help depending on outside temperatures. But, if the air you are pumping outside is 80 degrees F and it is 9
      • It actually does not force the AC to work overtime. You are merely blowing hot air into the next rack to cool it - which it does less effectively. The parts in the second rack just run hotter. Overall, the AC will ultimately carry the same load. Your boxes will just overheat. The heat load on the room is just the power you supply to your boxes. They are, in effect, nothing but space heaters.

        Data centers recirculate the same air generally because it is cheaper in the building design, more reliabl
    • by pla ( 258480 )
      The thing with power usage is that nobody seems interested in attacking the 2 largest areas of power wastage. (except maybe google)

      Except, those don't really waste most of the power.

      With AC/DC, you already have equipment available that can push over 90% efficient. With air conditioning, central home units manage 90-94% efficient, and I'd expect industrial models to do even better. So not a lot of room for improvement there.

      With servers, however... The better they scale to their load, the more effic
      • Or better yet, just line them all up in a single long row, ...

        *cough*

        I think our data center might be bigger than your data center...

        C//
      • by jbengt ( 874751 )
        "With AC/DC, you already have equipment available that can push over 90% efficient"
        Yes, but when UPSs are designed for maximum load, and redundant UPSs are installed, and you typically are operating below 50% of capacity (e.g. late shifts), that 90% full load efficiency can be below 50% real life efficiency.

        "With air conditioning, central home units manage 90-94% efficient, and I'd expect industrial models to do even better"
        Not even close, if you assume that you're talking about 90-94% of theoretical maximu
      • Comment removed based on user account deletion
      • I don't understand why we AC them instead of just pumping as much outside air as possible through the room.

        What do you do on the days when the temp outside the building is below 32F? How about when the temp does not go above zero degrees F for a couple of weeks in the middle of winter? Do you know what happens to the relative humidity of air that is heated from zero degrees F to 70 degrees F? Are you going to spend a lot of money for equipment and power to humidify that air as you pull it into the d
        • by redcane ( 604255 )
          Warmer air can carry more water, right? So the air inside the data centre would be extremely dry. Most computer equipment specifies 0-x% humidity, so lowering it doesn't adversely affect the equipment. I'm sure the outside environment doesn't care either.
          • Most computer equipment specifies 0-x% humidity, so lowering it doesn't adversely affect the equipment.

            Ok, you go ahead and run your data center with 10% relative humidity. Don't be surprised when static electricity becomes a big issue for you. Also, I doubt that you find that "most computer equipment specifies 0-x% humidity" when the equipment is running (you might be able to *store* the equipment at 10 percent relative humidity). There is a reason that most data centers are kept at 40-45% relative h
      • by Forge ( 2456 )
        Any computer I can run for extended periods in a non-air-conditioned building here (Jamaica) can be considered a solid desktop. Haven't met a server like that yet. An AC breakdown at the datacenter is a major crisis. The repair crew on that system has 30 to 90 minutes to repair the AC before servers start dropping.

        In other words: add "tropics" to "desert".
    • EnergyStar standards for power supplies are that no more than 20% of the AC power be converted to heat going through the transformer - which means that on most systems more has been turned to heat in the server room before it does anything. Just moving *that* heat conversion out of the server room where it does not need to be cooled away is a big win. Remember, it takes, in a well designed system, 1.7 times the energy to cool a space as it took energy to heat it. Opponents of DC in the server room usually m
      • by jbengt ( 874751 )
        "I have seen Absorption Chillers running off the transformer heat being used to boost data center cooling"

        That must be one mighty hot transformer.
      • Remember, it takes, in a well designed system, 1.7 times the energy to cool a space as it took energy to heat it.

        Got any source for that? It doesn't pass the laugh test.

        Figures say that even the most inefficient AC units out there remove more watts of heat than they need to operate themselves.
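
        The figure that captures this is the unit's coefficient of performance (COP): a vapor-compression air conditioner moves roughly COP watts of heat per watt of electricity it draws. A minimal sketch with assumed round-number COPs and an arbitrary 100 kW IT load; note that a COP of 2 already puts cooling at about a third of the total draw, which is roughly the share cited further down this thread:

        it_kw = 100.0  # arbitrary example IT load
        for cop in (2.0, 3.0, 4.0):   # assumed round-number COPs
            cooling_kw = it_kw / cop                   # electricity the cooling plant draws
            share = cooling_kw / (it_kw + cooling_kw)  # cooling's share of total power
            print(f"COP {cop:.0f}: {cooling_kw:5.1f} kW to remove {it_kw:.0f} kW of heat"
                  f" ({share:.0%} of the total)")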

        • by tc9 ( 674357 )
          Great - so you've got a working Carnot-Cycle generator. Cool.
          • Great - so you've got a working Carnot-Cycle generator. Cool.

            That's idiotic. You have absolutely no understanding of Carnot.

            • by tc9 ( 674357 )
              Always happy to inform those whose mode of discourse is hurling insults...but if you really can pull more heat out of a system than you put in energy doing so, run to the patent office...quick

              From TheGreenGrid consortium

              Conventional models for estimating the electrical efficiency of datacenters are grossly inaccurate for real-world installations. Currently, many manufacturers provide efficiency data for power and cooling equipment. For power equipment, efficiency is typically expressed as the percent o

              • Your first link says nothing at all about the subject.

                Ironically, a careful reading of your second link would show you how wrong your idiotic assertions are...

                In their example chart on page 4 which you quote (out of order), the cooling system is responsible for 33% of energy demands. That means that while consuming 33% of the power, it is cooling the other 67%.

                if you really can pull more heat out of a system than you put in energy doing so, run to the patent office...quick

                Why?

    • by mblase ( 200735 )
      Ever notice how hot a UPS gets during normal operation? That's power going to waste.

      Maybe for you. I've been using mine as a nacho-cheese warmer for months.
    • On #2 - I agree with you, but fire marshals don't see it that way. We have 3 datacenters in my company, and airflow and room temp are strictly monitored by the fire marshals. If they look at the room and say it's not how they want it, they shut down the entire company until compliance is achieved.
    • AC power from your utility company is converted to DC for storage in a battery, then converted back to AC to supply the server's power supply, then converted back to DC to actually run the components of the computer.

      There are many groups that have expressed interest in DC datacenters.

      The reality, however, is that AC/DC conversion is only nominally less efficient than DC/DC conversion. With the increasing popularity of 80 Plus efficient PSUs, there's very, very little to be gained by going to DC. You're rea

      • by Forge ( 2456 )
        It took a while, but finally a logical, fact-infested response to each point I made.

        I won't go point by point because much of what you say amounts to a command for me to do more in-depth research before claiming either of us is right. And some of it makes me go "gee, I didn't see it that way". (The ducts for hot air spring to mind)

        However, to the wattage: the soldering iron isn't wasting energy. Generating heat is what it does. The 60 Watt TV is cool BECAUSE it is efficient. Most of its power is being used to put a brig
          The 60 Watt TV is cool BECAUSE it is efficient. Most of its power is being used to put a bright image on your screen.

          I think you just misunderstood my point. A 60-watt TV may be cool to the touch, while a 30-watt TV could be extremely hot to the touch.

          There are two parts to this:

          1) It's any use of energy, not just WASTE of energy that makes heat.
          2) How cool a device stays has very, very little to do with how much energy it is using/wasting, unless they're identical in every other way (which basically never

          • by Forge ( 2456 )
            The relative cost of DC PSUs is (as you mention) a function of market forces. Specifically volume production. This however can be changed over time. Hate to draw a car analogy but I have to.

            In most places, spare parts for different Japanese sedans are pretty close.

            In Jamaica, that was the case until the Police Force standardised on the Toyota Corolla over a decade ago. These days Corolla spare parts cost a fraction of Honda spares.

            Yeah. It's depressing. No Data centre customer has large enough needs and inf
  • A couple of months ago, Luiz André Barroso of Google gave a talk at Stanford about this very topic. Unfortunately the talk wasn't recorded, but here's a summary: http://cs343-spr0607.stanford.edu/index.php/Writeups:Luiz_Andr%C3%A9_Barroso [stanford.edu]
  • I have been wondering this for a while now.

    Why can I sit here and type this on a laptop that is faster than a top-of-the-line 1U rack from 1 year ago, and yet data centers are still loaded with power-sucking 3 year old machines by the thousands?

    What you need in a data center is a) Performance, and b) Reliability. Performance is already covered - every year laptop speeds match the top speeds of the previous year's desktop machines. So you're at most a year behind the times. As for reliability - anyone who wo
    • Or perhaps not laptops, but machines designed using laptop components.

      I've heard of folks using Mac Minis as servers. They use laptop mainboards and hard drives, so they consume very little power, but are plenty fast for a lot of server needs.

      -Z
      • What do you think that blades generally are? Well, not exactly, they use components that are designed to be used 24x365, faster memory subsystems, multi-processor, etc...

        As for replacing machines every year - that's a big false economy. Even cheap power hungry servers cost more than their electrical costs per year*. Then you have all the hardware concerns and swapouts.

        Swapping out all your servers in a farm annually would be a good way to get the greens coming down on you.

        *Assuming sane prices per kwh, o
    • "Why can I sit here and type this on a laptop that is faster than a top-of-the-line 1U rack from 1 year ago, and yet data centers are still loaded with power-sucking 3 year old machines by the thousands?" Uh... Cost? How much would thousands of your laptop cost to replace them every year to keep up with the latest technology?

      Like most computer buyers, you get on that treadmill when you have to. At first you have the fastest machine on the block (depending on your price range), but then as it gets older,
      As for reliability - anyone who works with data centers knows that reliability does not come from reliable hardware - it comes from redundancy.

      It comes from both. Also, I'm not sure why you believe laptops perform comparably to 1U units from the previous year; there's a lot more to speed than processor type, the mobile processors don't compete per megahertz with the standalone processors, and those laptops cost something like 3:1 for the same hardware statistics.

      If what you said was true, these variou

    • every year laptop speeds match the top speeds of the previous year's desktop machines

      Comparing laptops and desktops is irrelevant when talking about data centers since they use servers. The fastest low-end server from one year ago was a 4-core 3.0 GHz Woodcrest system, but the fastest laptop today is only 2-core at 2.4 GHz. Not to mention that last year's low-end server can hold 16-32 GB of ECC RAM, and today's laptops only hold 4 GB non-ECC RAM.

      RLX and HP tried building servers from laptop components back
    • plus, you could sell the laptops (through a middleman, donate to schools, give to low-level employees, whatever) when you're through - there's a lot bigger market for old working laptops than old rackmounts
  • by Anonymous Coward
    The future of desktop computing is 24/7 thin clients/home servers using less than 10W and passive cooling without fans, because for a typical 300W desktop 24/7 system you probably would be paying $100/month, more than a thousand a year. This is enough for 90% of users, those who are not after the latest/greatest 3D horsepower, those whose needs are met by an onboard graphics chip such as Intel X3100 or even less. You would be surprised at the amount of computing power such devices have now
    • because for a typical 300W desktop 24/7 system you probably would be paying $100/month, more than a thousand a year.

      I call BS... my desktop is a dual Athlon 1800MP with 5 hard drives (500 watt PS) and I've also got a K6-II/450 with 2 drives that acts as my personal server. Both are on 24/7. Throw in a laptop, 2 TVs (one on 12 hours a day, the other 24 hours a day for my dad), 2 waterbeds, electric water heater, 10 year old fridge, AC, etc. I used 907 kWh last month for a total of $128.64 (about 14.2 cents a
      • I've always been a big fan of these tiny, power-efficient computers for no reason I can fully explain. However, that doesn't excuse the tremendous errors in your reasoning.

        Your first flaw is in the cost of power. It's a bit lower than that - about a fifth your estimate, in most places.

        Yeah, a low-end computer these days will pretty likely have a 300-watt power supply. However, most consumer-level computers don't draw anything like that much power. Then, even if you did have a computer setup that drew 30
        • by redcane ( 604255 )
          15c/kWh is normal here (Australian dollars). But even if it was 1 cent... why waste it? Some people's annual incomes are reasonably measured in cents. Solar power (and other 100% green technologies) is still expensive to get in large quantities. If you can run 5 PCs off 100 watts instead of one, that can be useful if you're off grid. (perhaps you should be ;-)
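
          For what it's worth, the kWh arithmetic behind the "$100/month for a 300 W box" claim is easy to check. The 14.2 cents/kWh figure is the rate worked out from the bill quoted upthread; the other rates are assumed for comparison:

          watts = 300
          kwh_per_month = watts * 24 * 30 / 1000   # 216 kWh
          for rate in (0.142, 0.15, 0.30):         # $/kWh; first one from the bill above
              print(f"at ${rate:.3f}/kWh: ${kwh_per_month * rate:6.2f}/month")
          print(f"rate needed to reach $100/month: ${100 / kwh_per_month:.2f}/kWh")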
    • because for a typical 300W desktop 24/7 system you probably would be paying $100/month, more than a thousand a year.

      I have no idea where you get those numbers, but they're amazingly wrong. I own an NSP. I will sell you a year of dedicated service including a ten megabit guaranteed available line for $1200/year including off-box hourly backups, and at that rate I'm making a fair profit.

      If you buy five machines from me, I'll beat $1000/y. I can sell you the box, bandwidth, voltage, backups and hardware upg

      • Sorry, that's $1200/y for a dedicated Pentium 4 3.06 with a gig of ram, an 80 gig hard drive, an IP address, any free OS (you pay extra for winders,) ten megabits of bandwidth guaranteed 99.99% available at all times and with a 99.99% uptime.

        Sometimes I forget that other dedicated providers dust stuff under the rug.
    • because for a typical 300W desktop 24/7 system you probably would be paying $100/month, more than a thousand a year.

      Um, I think you will want to check your math (unless you live somewhere that has verrry expensive electricity).

      I ran the numbers on a Core 2 Duo E6600 box that I built a few months ago to run the Folding@Home client 24 x 7. My Killawatt says the box is consuming 155 watts when the Folding SMP client for Linux is running. The CPU is running at nearly 100%, 24 hours a day (the Linux Foldi
  • DC power is a simple way to reduce power consumption by 30%, it can also significantly reduce cooling requirements, and it's compatible with standard telco DC power kit.

    dcpower [rackable.com]

    • I'd just like to point out that there is a substantial difference between 'up to 30%' as stated in the article and the '30%' quoted by blackjackshellac.

      I'm not saying that there aren't potential advantages to this scheme, it's just that I wouldn't automatically assume that 30% can be saved. It all depends upon the situation. For example, if I'm using high-efficiency individual power supplies, I'm likely to save a lot less by changing over.
      • That's true insofar as the cost (power loss) of converting AC to DC goes, as most modern power supplies are > 80% efficient. But all those individual PSUs generate a lot of heat. If you can do the conversion outside the datacenter and then run the DC in, you could probably cut cooling by a third.*

        *Random figure, no basis to it.
        • You might be able to cut cooling to individual racks by 10-20%, but you're still going to have a massive power converter somewhere that's just as likely to need active cooling, and it's not going to be something you want to expose to the weather.

          On the other hand, you probably could engineer it to be happy at higher temperatures and simply use a fan.

          It's one of those things that is extremely situation-dependent.
  • by rduke15 ( 721841 ) <(rduke15) (at) (gmail.com)> on Saturday June 30, 2007 @10:40AM (#19699741)
    I'm not sure how much power this saves, but on all servers which I install, I use this Debian HOW-TO: CPU power management [technowizah.com] page. Basically, I do:

    aptitude install cpufrequtils sysfsutils
    cat /proc/cpuinfo | grep "model name"
    modprobe p4_clockmod ## depends on your CPU!
    modprobe cpufreq_ondemand
    echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    echo p4_clockmod >>/etc/modules ## depends on your CPU!
    echo cpufreq_ondemand >>/etc/modules
    echo devices/system/cpu/cpu0/cpufreq/scaling_governor = ondemand >>/etc/sysfs.conf ## sysfsutils applies this file at boot
    And I see my servers run at 350 MHz instead of 2.8 GHz or more.

    Of course, these are all small workgroup or very small Internet servers. It would be of no use for a server which would be at the max speed most of the time.

    Anyway, I haven't had an opportunity to meter the difference yet to see how much power that really saves. Does someone know?
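
    One cheap check before reaching for a meter is to watch what the governor is actually doing under load. A small sketch that samples the standard cpufreq sysfs files; it assumes a Linux box set up as above, and scaling_cur_freq may not be readable with every driver:

    # Sample the per-CPU governor and current frequency a few times.
    import glob, time
    def read(path):
        with open(path) as f:
            return f.read().strip()
    for _ in range(5):
        for cpu in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq")):
            name = cpu.split("/")[-2]
            gov = read(cpu + "/scaling_governor")
            mhz = int(read(cpu + "/scaling_cur_freq")) // 1000
            print(f"{name}: governor={gov}, {mhz} MHz")
        time.sleep(2)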
    • Re: (Score:3, Informative)

      p4-clockmod doesn't help at all; try acpi_cpufreq. The newer Intel processors have C1E, so they automatically drop to the lowest frequency when idle, so there's not a lot for CPUfreq to do.
      • by Spoke ( 6112 )
        To elaborate on the parent's post, p4-clockmod doesn't actually change the core clock of the CPU. All it does is force the CPU to run idle more of the time when it's in use.

        p4-clockmod will actually end up causing you to use more power since it's usually more efficient to get the work done faster at a higher CPU utilization and it takes a bit of time for p4-clockmod to "ramp up" the virtual clockspeed again.

        If you're running the latest kernel (2.6.21 or later) with dynticks enabled, you can install and run
        • by Ant P. ( 974313 )
          Just to elaborate even more, I've tested p4-clockmod using a wattmeter on the wall socket and there's absolutely no difference between idling at 2.6GHz or 333MHz. Using powertop to get rid of useless processes actually makes it go down 2 or 3W, but since it's a P4 that's still negligible.

          The only time I can see the clockmod driver being any use is when you need to force the CPU to slow down for whatever reason.
        • Using a Killawatt, I was able to measure usage of some tasks with the CPU forced to 350MHz in comparison with allowing it to manage its own power. While it was by no means a well designed experiment, it seemed fairly obvious that the system always used less power when allowed to manage its own power usage.
  • I've seen a number of posts about how smart it would be to use laptop components in servers. I disagree.

    Most server farms are running at full speed 24 hours a day. They don't throttle back and would not spend much if any time at a low-power idle.

    There are job scheduling programs where if servers aren't doing real-time stuff, they are backfilling with other jobs. Stuff gets queued up for literally weeks. It has been my experience that users demand more cycles -- not that the systems sit there idle just
    • by Cheeze ( 12756 )
      I beg to differ. Most servers are NOT running 100% 24/7. If they are, they haven't been engineered properly.

      Take a walk through a general use datacenter and you will find lots of 1U single use, non-clustered servers burning energy at full speed and running at .05 load. There is no real reason to run them at 300W when they can easily run at 50W with the same visible performance to the end user.

      There ARE specific applications that would utilize the hardware 100%, but those are a small percentage of the server
    • Most server farms are running at full speed 24 hours a day.

      Afraid not. Data center utilization is typically 20%, and often a lot less. A whole lot less.

      C//
      • Maybe at your data center. Maybe I shouldn't have said most data centers. And we aren't running underdesigned systems. We are running the fastest and biggest we can get. For modelling and simulation work, the loads are high and pretty continuous.
        • For modelling and simulation work, the loads are high and pretty continuous.

          Sure, but this is a minority. I think it's more likely that most servers in the world are running stuff like databases, email, Web, business applications, etc. When there's no work to be done, they just sit idle.
        • You're right. You should not have said "most".

          Modsim is an unusual use case. That use case has its own concerns.

          20% is really on the high end for utilization in virtually every data center.

          The syndrome that is most alive today is the "one service, one box" issue. That's why all the drive for consolidation, coming from the virtualization vendors.

          C//
  • Power usage is horrid. A 600MHz ARM Xscale has better performance per clock than a 600MHz x86 and eats half a watt of power; while 1.2GHz Core Solos eat at least 10 watts and most other processors around 2-3GHz eat 60-120 watts, some even 180 watts! That's between 5 and 36 times more power per clock cycle; and most of the newer chips are RISC processors on a proprietary instruction set with a real-time translator from x86 to internal RISC. It takes HOW MUCH power?

    I've actually brought this up on Dell's I
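
    Taking the wattage and clock figures above at face value and reducing them to watts per GHz (a crude metric that ignores how much work gets done per clock, as the comment itself notes):

    # Figures taken from the comment above; this is only the raw division.
    chips = [
        ("600 MHz XScale",     0.5,   0.6),
        ("1.2 GHz Core Solo",  10.0,  1.2),
        ("60 W x86 at 2 GHz",  60.0,  2.0),
        ("120 W x86 at 3 GHz", 120.0, 3.0),
    ]
    base = chips[0][1] / chips[0][2]
    for name, watts, ghz in chips:
        w_per_ghz = watts / ghz
        print(f"{name:20s} {w_per_ghz:6.2f} W/GHz ({w_per_ghz / base:5.1f}x the XScale)")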
  • Hi,

    I'm going through the exercise this month of replacing a whole slew of my always-on Internet servers at home (HTTP, SMTP, DNS, NTP) on machines going back long enough to still be running SunOS 4.1.3_U1 in one case, with a single Linux laptop. Current power consumption is ~700W. Target power consumption for the new system So, it is doable and worth doing financially, and I don't have to pay the 3x extra cost right now to remove the heat with aircon (if I did, I could pay for the solar panels too powe
