Power Consumption and the Future of Computing
mrdirkdiggler writes "ArsTechnica's Hannibal takes a look at how the power concerns that currently plague datacenters are shaping next-generation computing technologies at the levels of the microchip, the board-level interconnect, and the datacenter. In a nutshell, engineers are now willing to take on a lot more hardware overhead in their designs (thermal sensors, transistors that put components into sleep states, buffers and filters at the ends of links, etc.) in order to get maximum power efficiency. The article, which has lots of nice graphics to illustrate the main points, mostly focuses on the specific technologies that Intel has in the pipeline to address these issues."
For Funzies (Score:2)
Re: (Score:1)
Has anyone ever compared computer-system transparency (say, how much of it has publicly available documentation) to the system transparency of our innovative combustion-engine industry?
Re: (Score:2)
Re: (Score:1)
Re: (Score:1)
My guess is that metal is lead, no?
I would find it ironic that we'd look at running a combustion engine off of a battery in such a direct form
Re: (Score:3, Insightful)
1000 watt power supplies (Score:2)
[*] - http://www.newegg.com/Product/ProductList.aspx?Su
Re: (Score:3, Interesting)
In fact, those high-end 1KW supplies might even be better for power consumption since they tend to have higher efficiencies than the cheapo options.
Re: (Score:2)
WTF actually needs that kind of power? I've built 16-disk 3U RAID arrays that don't use nearly that much power. Each is powered by a 650W RPS (made up of three hot-swappable 350W power supplies, capable of running on two if one fails), and actual maximum power consumption (measured with a clamp-on ammeter and a power cord with one of the wires pulled out in a loop) was somewhere around 350
Aero graphics, that's what.... (Score:2)
I don't know who needs 1000W but it's easy to make SLI gaming rigs go over 500W.
Stick a couple of the "twin power connector" cards in a box with a big CPU, overclock the hell out of it...that's four or five hundred watts right there.
Let's virtualize! (Score:2)
You don't even need that (Score:2)
Choose your poison.
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
Typically, if you're going to virtualize - the minimum number for physical boxes is probably 3. During normal operations, you run your load spread across all 3 boxes, with the option to consolidate down to 2 boxes if one goes down. You can do it with just 2 boxes, but it's not going to be as nice. Naturally, if you have the server load to require 4+ boxes, it becomes much easi
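The arithmetic behind that sizing rule, as a quick sketch; the host counts are just illustrations and only the (N-1)/N ratio matters:

    # Sketch: with N hosts and a requirement to survive one host failure, each
    # host can only be loaded to (N-1)/N of capacity during normal operation.
    # That's why 2 boxes are painful (50%) and 3 are workable (~67%).
    for n in (2, 3, 4, 5):
        print("%d hosts: load each to at most %.0f%%" % (n, 100.0 * (n - 1) / n))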
Huh. Well I never. (Score:1, Redundant)
Big cuts (Score:4, Interesting)
#1. DC/AC conversion.
Your typical datacenter has a UPS or batteries and inverters (an enterprise-scale UPS). What this amounts to is AC power from your utility company converted to DC for storage in a battery, then converted back to AC to supply the server's power supply, then converted back to DC to actually run the components of the computer.
Ever notice how hot a UPS gets during normal operation? That's power going to waste. The solution is to run our servers at a standardised DC voltage. 48 volts sounds good, since that is already defined for telecom equipment (correct me if I'm wrong; I'm not sure of the figure).
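A rough back-of-the-envelope of what that double conversion costs; the per-stage efficiencies below are assumptions for illustration, not measured figures:

    # Sketch: compare a double-conversion chain (AC->DC->AC->DC) against
    # rectifying once and staying DC. Per-stage efficiencies are assumptions
    # for illustration, not measured figures.
    def chain_efficiency(stages):
        eff = 1.0
        for s in stages:
            eff *= s
        return eff

    double_conversion = chain_efficiency([0.93, 0.93, 0.90])  # UPS rectifier, inverter, server PSU
    rectify_once = chain_efficiency([0.93, 0.92])             # one rectifier, then DC/DC at the rack

    print("double conversion: %.0f%% of utility power reaches the boards" % (100 * double_conversion))
    print("rectify once:      %.0f%% of utility power reaches the boards" % (100 * rectify_once))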
#2. Raised flour and underground AC. A good chunk of datacenter power is used to run the air conditioning. We could cut that down if we abandoned the notion of raised flours and replaced them with, say, insulated ceiling-mounted ducts with vents facing each rack.
While we are at it, here is another simple power tip. Turn your rows of racks back to back. When they all face the same direction, hot air blows from the back of one machine to the front of another, forcing the AC to work overtime. In my design, I would have extraction fans between my back-to-back racks, pumping the hot air outside (or into the office during winter, for those of you who have winter).
Re: (Score:2)
Re: (Score:1)
Unfortunately for me, I got back from Costco -AFTER- reading your reply explaining it's a typo. I was going to reinsulate my attic. Anyone have a use for two pallets of flour?
Re:Big cuts (Score:5, Informative)
DC/AC conversion? The bigger data centers can't use batteries - too many, too big a hazard, etc. They use rotational UPSes. These stay AC all the way.
Additionally - power distribution is better at higher voltages. It's that current squared thing. More and more equipment is also going to higher voltage distribution on the boards with local DC/DC conversion at the load. For the exact same reason. Our center distributes at 208 volts.
The argument against a raised floor is bogus. It serves (and is necessary) not only for cabling but also for air distribution. Heated air rises. Feeding cold air up from the floor to where it flows into the racks to be heated, then recovering it at the ceiling, is the most efficient way to move the air. The fact that the floor is not insulated is a non-issue. The whole room is being cooled; the temperature is the same on either side of the floor tiles.
And about the face to face and back to back layout of racks - every single one of our racks is already in that orientation for exactly that reason. We have hot aisles and cold aisles and the temperature difference between them is pretty marked.
The next wave is a move back to "water" cooling. Either plumbing liquid to each rack where in the rack it locally grabs heat from circulated air within the rack, or plumbing into the boxes themselves. This is simply because heat loads are going up and it gets harder (and louder) to pump enough air through a building to cool the more dense newer equipment. Plus people don't have to put on jackets to go out on the floor or yell to be heard in a big data center.
ditch the fans (Score:2)
Air should flow from cold aisles to hot aisles by a simple pressure difference. Those little CPU fans generate heat and lots of noise. It's better to rely on airflow supplied by the building. This of course means that the cases have ductwork and aerodynamic heat sinks as required. I've seen it for a single rack; it's really nice to eliminate the individual CPU fans. Reliability goes up (no CPU fan failures) and noise goes down.
Re: (Score:2)
The flaw in that plan (pulling cold air from outside the building) is that when you bring 20 degree F air into a data center that is at 70 degrees F, you'll find th
Re: (Score:2)
Which by the way is why I have had to worry so much about cooling.
As for not going to battery: I guess we don't have any large data centres here. Our largest phone company only has around 1.6 million subscribers, almost twice as many clients as our largest bank.
Re:Big cuts (Score:5, Informative)
Also, 99% of UPS units don't convert AC to DC unless they're charging the batteries, and normally that's only a trickle charge. If the UPS is providing power, you're in a critical situation anyway; I wouldn't worry about the fact that a UPS isn't particularly efficient, since you're probably spending 99% of your time not on UPS.
As for switching to the telephone industry's standard 48V power, you'd be converting it again to whatever the equipment wants, much of it 12V or less. 120VAC->12VDC is more efficient than 120VAC->48VDC->12VDC. In addition, you run into the problem that 120VAC over 12-gauge cable wastes only a fraction (roughly a sixth, by the current-squared rule) of the power that the same wattage at 48VDC would waste over the same diameter cable. So you'd have to use heavier-gauge cable, and payback isn't quick for that by any means.
You might be able to get away with it on a rack level, powering all the blades on 48V via rails to a couple of redundant power supplies somewhere in the rack. Either top or bottom, depending upon cooling and other requirements, though the middle might be an interesting choice, as it'd allow you to have half the wattage running over the rails on average(you'd have two runs instead of one).
You want to save power? I'd switch to feeding the racks/power supplies with 240V lines. Half the current for the same wattage, so a quarter of the resistive line loss.
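To put numbers on that, here's a small sketch; the load wattage and cable resistance are made-up illustration values, and only the ratios between voltages matter:

    # Sketch: resistive loss in a feed is I^2 * R; for a fixed load wattage the
    # current is P / V, so loss scales as 1 / V^2. Load and cable resistance
    # below are arbitrary illustration values.
    P = 2000.0   # watts delivered to the rack (assumed)
    R = 0.05     # ohms of round-trip cable resistance (assumed)

    for volts in (48, 120, 208, 240):
        amps = P / volts
        loss = amps ** 2 * R
        print("%3d V: %5.1f A, %5.1f W lost in the cable" % (volts, amps, loss))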
Re: (Score:3, Informative)
That's a cheap, consumer oriented UPS. Datacenters use the kind described [wikipedia.org], ones that are always doing the AC -> DC -> AC conversion. What this achieves is that instead of the UPS
Re: (Score:2)
Re: (Score:2)
Try 2 to 4 big ones.
Re: (Score:2)
As for the 1.5% efficiency - the larger the UPS, the more efficient the charging system. Still, you can't get away from the fact that you need a float charge for lead-acid batteries, indeed, for most rechargeable technologies. Still, the number of batteries needed depends on how many kWh you need to store. For 100k bla
Re: (Score:2)
Almost by definition, you're always going through the UPS; what you're not doing 99% of the time is discharging the batteries.
And a large, efficient UPS is probably only around 90% efficient at normal loads.
At very low loads, they can actually use more energy than at full loads.
So a 250 kVA UPS is going to turn about 20 to 25 kW of energy into heat, even when the equipment it's serving is idling.
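A quick sanity check on that figure, treating the ~90% efficiency mentioned above as the assumption it is:

    # Sketch: fixed overhead of a double-conversion UPS. The 90% figure is the
    # rough number from the comment above, not vendor data.
    rating_kva = 250.0
    full_load_efficiency = 0.90

    overhead_kw = rating_kva * (1.0 - full_load_efficiency)
    print("roughly %.0f kW turned into heat near full load" % overhead_kw)
    # At light load the fixed losses (transformer magnetizing, inverter
    # switching, fans) don't shrink with the load, so the percentage gets worse.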
Re: (Score:2)
Unfortunately, in the USA, power to most commercial/industrial buildings is not available at 240 volts. Power on a large scale is provided as three-phase so the power company distribution is kept in balance. But in the USA the primary choices for that are 208/120 volts, or 480/277 volts. Many power supplies would probably work OK at 277 volts, but since they are not specified for that, it's risky from many perspectives. 208 volts would work for full-range power supplies, but maybe not for those that hav
Re: (Score:2)
Huh, I've seen it available in most buildings I've been in. Even so, as long as it's AC, you can efficiently transform voltages around, even if you need a big transformer in a mechanical room somewhere.
120 volts doesn't come in on its own set of wires; it's set up as a split phase from a grounded center tap on a 240-volt winding.
My general point is that it's more efficient to move high voltage around than low voltag
Re: (Score:2)
Most commercial/industrial buildings have 3-phase power. Most 3-phase power is of the "star/wye" configuration, which means 3 separate 120 volt transformer secondaries wired to a common grounded neutral. At 120 degrees phase angle, the voltage between any 2 of these 3 lines is 208 volts, not 240 volts. There are some exceptions where commercial buildings get single phase power, or an older delta type 3-phase system that has various kinds of problems with it.
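For reference, that 208-volt figure is just the 120-volt phase voltage times the square root of three; a quick check (nothing here is specific to any one installation):

    # Line-to-line voltage in a wye (star) system is the phase voltage times
    # sqrt(3), because the 120 V legs are 120 degrees apart rather than 180.
    import math

    phase_to_neutral = 120.0
    print("wye line-to-line: %.0f V" % (phase_to_neutral * math.sqrt(3)))   # ~208 V
    # Split-phase residential service puts the two legs 180 degrees apart,
    # which is how you get the full 240 V between them.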
They will use standard 240 volt outlets for th
Re: (Score:2)
Re: (Score:2)
Data centers recirculate the same air generally because it is cheaper in the building design, more reliabl
Re: (Score:2)
Except, those don't really waste most of the power.
With AC/DC conversion, you already have equipment available that can push over 90% efficiency. With air conditioning, central home units manage 90-94% efficiency, and I'd expect industrial models to do even better. So there's not a lot of room for improvement there.
With servers, however... The better they scale to their load, the more effic
Re: (Score:2)
*cough*
I think our data center might be bigger than your data center...
C//
Re: (Score:2)
Yes, but when UPSes are designed for maximum load, and redundant UPSes are installed, and you typically are operating below 50% of capacity (e.g. late shifts), that 90% full-load efficiency can translate to below 50% efficiency in real life.
"With air conditioning, central home units manage 90-94% efficient, and I'd expect industrial models to do even better"
Not even close, if you assume that you're talking about 90-94% of theoretical maximu
Re: (Score:1)
Re: (Score:2)
What do you do on the days when the temp outside the building is below 32F? How about when the temp does not go above zero degrees F for a couple of weeks in the middle of winter? Do you know what happens to the relative humidity of air that is heated from zero degrees F to 70 degrees F? Are you going to spend a lot of money for equipment and power to humidify that air as you pull it into the d
Re: (Score:1)
Re: (Score:2)
Ok, you go ahead and run your data center with 10% relative humidity. Don't be surprised when static electricity becomes a big issue for you. Also, I doubt that you find that "most computer equipment specifies 0-x% humidity" when the equipment is running (you might be able to *store* the equipment at 10 percent relative humidity). There is a reason that most data centers are kept at 40-45% relative h
Re: (Score:2)
In other words: add "tropics" to "desert".
Looking to the bigger picture (Score:1)
Re: (Score:2)
That must be one mighty hot transformer.
Re: (Score:2)
Got any source for that? It doesn't pass the laugh test.
Figures say that even the most inefficient AC units out there remove more watts of heat than they need to operate themselves.
Re: (Score:1)
Re: (Score:2)
That's idiotic. You have absolutely no understanding of Carnot.
Re: (Score:1)
From The Green Grid consortium
Re: (Score:2)
Ironically, a careful reading of your second link would show you how wrong your idiotic assertions are...
In their example chart on page 4 which you quote (out of order), the cooling system is responsible for 33% of energy demands. That means that while consuming 33% of the power, it is cooling the other 67%.
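Rough arithmetic behind that point, treating the 33%/67% split as given and the total facility power as an arbitrary example:

    # Sketch: if cooling draws 33% of facility power and the IT load draws the
    # other 67%, the cooling plant removes roughly twice the heat it consumes,
    # i.e. an effective COP of about 2. The 1000 kW total is an assumed example.
    total_kw = 1000.0
    cooling_kw = 0.33 * total_kw
    it_kw = total_kw - cooling_kw   # heat the cooling plant has to remove

    print("implied COP: about %.1f" % (it_kw / cooling_kw))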
Why?
Re: (Score:2)
Maybe for you. I've been using mine as a nacho-cheese warmer for months.
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
There are many groups that have expressed interest in DC datacenters.
The reality, however, is that AC/DC conversion is only nominally less efficient than DC/DC conversion. With the increasing popularity of 80plus efficient PSUs, there's very, very little to be gained by going to DC. You're rea
Re: (Score:2)
I won't go point by point because much of what you say amounts to a command for me to do more in-depth research before claiming either of us is right. And some of it makes me go "gee, I didn't see it that way". (The ducts for hot air spring to mind)
However, to the wattage: the soldering iron isn't wasting energy; generating heat is what it does. The 60-watt TV is cool BECAUSE it is efficient. Most of its power is being used to put a brig
Re: (Score:2)
I think you just misunderstood my point. A 60-watt TV may be cool to the touch, while a 30-watt TV could be extremely hot to the touch.
There are two parts to this:
1) It's any use of energy, not just WASTE of energy that makes heat.
2) How cool a device stays has very, very little to do with how much energy it is using/wasting, unless they're identical in every other way (which basically never
Re: (Score:2)
In most places, spare parts for different Japanese sedans are pretty close.
In Jamaica, that was the case until the Police Force standardised on the Toyota Corolla over a decade ago. These days Corolla spare parts cost a fraction of Honda spares.
Yeah. It's depressing. No Data centre customer has large enough needs and inf
Google Distinguished Engineer's point of view (Score:2, Informative)
Video (Score:3, Informative)
Laptops in the datacenter (Score:2)
Why can I sit here and type this on a laptop that is faster than a top-of-the-line 1U rack from 1 year ago, and yet data centers are still loaded with power-sucking 3 year old machines by the thousands?
What you need in a data center is a) Performance, and b) Reliability. Performance is already covered - every year laptop speeds match the top speeds of the previous year's desktop machines. So you're at most a year behind the times. As for reliability - anyone who wo
Re: (Score:2)
I've heard of folks using Mac Minis as servers. They use laptop mainboards and hard drives, so they consume very little power, but are plenty fast for a lot of server needs.
-Z
Re: (Score:2)
As for replacing machines every year - that's a big false economy. Even cheap, power-hungry servers cost more per year than their electricity does*. Then you have all the hardware concerns and swapouts.
Swapping out all your servers in a farm annually would be a good way to get the greens coming down on you.
*Assuming sane prices per kwh, o
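A hedged sanity check on the cost comparison above; the wattage, tariff, and server price are all assumed round numbers, and cooling overhead is ignored:

    # Sketch: annual electricity for a power-hungry 1U server versus its
    # purchase price. All three inputs are assumed round numbers, and cooling
    # overhead is ignored.
    draw_watts = 400.0      # assumed average draw at the wall
    price_per_kwh = 0.10    # assumed "sane" tariff, USD
    server_price = 1500.0   # assumed cheap-server price, USD

    kwh_per_year = draw_watts * 24 * 365 / 1000.0
    print("%.0f kWh/year, about $%.0f/year against a $%.0f box"
          % (kwh_per_year, kwh_per_year * price_per_kwh, server_price))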
Re: (Score:2)
Like most computer buyers, you get on that treadmill when you have to. At first you have the fastest machine on the block (depending on your price range), but then as it gets older,
Re: (Score:2)
It comes from both. Also, I'm not sure why you believe laptops perform comparably to 1U units from the previous year; there's a lot more to speed than processor type, the mobile processors don't compete per megahertz with the standalone processors, and those laptops cost something like 3:1 for the same hardware specs.
If what you said was true, these variou
Tried it, didn't work (Score:2)
Comparing laptops and desktops is irrelevant when talking about data centers since they use servers. The fastest low-end server from one year ago was a 4-core 3.0 GHz Woodcrest system, but the fastest laptop today is only 2-core at 2.4 GHz. Not to mention that last year's low-end server can hold 16-32 GB of ECC RAM, and today's laptops only hold 4 GB non-ECC RAM.
RLX and HP tried building servers from laptop components back
Re: (Score:1)
The future computing device uses less than 10W (Score:2, Informative)
Re: (Score:1)
I call BS... my desktop is a dual Athlon MP 1800+ with 5 hard drives (500-watt PSU) and I've also got a K6-2/450 with 2 drives that acts as my personal server. Both are on 24/7. Throw in a laptop, 2 TVs (one on 12 hours a day, the other 24 hours a day for my dad), 2 waterbeds, electric water heater, 10-year-old fridge, AC, etc. I used 907 kWh last month for a total of $128.64 (about 14.2 cents a
Re: (Score:2)
Your first flaw is in the cost of power. It's a bit lower than that - about a fifth of your estimate, in most places.
Yeah, a low-end computer these days will pretty likely have a 300-watt power supply. However, most consumer-level computers don't draw anything like that much power. Then, even if you did have a computer setup that drew 30
Re: (Score:1)
Re: (Score:2)
I have no idea where you get those numbers, but they're amazingly wrong. I own an NSP. I will sell you a year of dedicated service including a ten megabit guaranteed available line for $1200/year including off-box hourly backups, and at that rate I'm making a fair profit.
If you buy five machines from me, I'll beat $1000/y. I can sell you the box, bandwidth, voltage, backups and hardware upg
Re: (Score:2)
Sometimes I forget that other dedicated providers sweep stuff under the rug.
Re: (Score:2)
Um, I think you will want to check your math (unless you live somewhere that has verrry expensive electricity).
I ran the numbers on a Core 2 Duo E6600 box that I built a few months ago to run the Folding@Home client 24 x 7. My Killawatt says the box is consuming 155 watts when the Folding SMP client for Linux is running. The CPU is running at nearly 100%, 24 hours a day (the Linux Foldi
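Running those numbers out to a year, using the ~14.2 cents/kWh rate quoted a few comments up (your tariff will differ):

    # Sketch: yearly energy and cost for the 155 W Folding@Home box at the
    # 14.2 cents/kWh rate quoted a few comments up.
    draw_watts = 155.0
    price_per_kwh = 0.142

    kwh_per_year = draw_watts * 24 * 365 / 1000.0    # ~1360 kWh
    print("%.0f kWh/year, about $%.0f/year" % (kwh_per_year, kwh_per_year * price_per_kwh))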
Rackable's DC solution (Score:1)
dcpower [rackable.com]
Re: (Score:2)
I'm not saying that there aren't potential advantages to this scheme; it's just that I wouldn't automatically assume that 30% can be saved. It all depends upon the situation. For example, if I'm using high-efficiency individual power supplies, I'm likely to save a lot less by changing over.
Re: (Score:2)
*Random figure, no basis to it.
Re: (Score:2)
On the other hand, you probably could engineer it to be happy at higher temperatures and simply use a fan.
It's one of those things that is extremely situation-dependent.
Try it (Score:2)
The nearest thing we have nowadays to assembler is tools for compiler optimisation a
Re: (Score:2)
No, But Google Has Vast Amounts of Money (Score:2)
Google isn't stupid, which is why they aren't running the world's most power-efficient data centers. Quite the contrary, actually. My educated guess is that Lexis-Nexis has a per-search energy use profile vastly lower than Google's. (No, I don't work for Lexis-Nexis nor have any inside information. I've just reviewed public information about them.)
Google has tons of money, so they don't particularly care about ene
Re: (Score:1)
Does slowing down idle CPUs help? (Score:3, Interesting)
Of course, these are all small workgroup or very small Internet servers. It would be of no use for a server which would be at the max speed most of the time.
Anyway, I haven't had an opportunity to meter the difference yet to see how much power that really saves. Does anyone know?
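Not a power measurement, but if you want to see what the frequency governor is actually doing before you put a meter on it, here's a minimal sketch that reads the standard Linux cpufreq sysfs files (assuming the usual /sys/devices/system/cpu layout):

    # Minimal sketch: print each CPU's cpufreq governor and current frequency
    # from sysfs. Assumes the standard /sys/devices/system/cpu layout; it only
    # reports state, it doesn't change anything.
    import glob

    for cpu in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq")):
        try:
            governor = open(cpu + "/scaling_governor").read().strip()
            khz = int(open(cpu + "/scaling_cur_freq").read())
            print("%s: %s at %d MHz" % (cpu.split("/")[-2], governor, khz // 1000))
        except IOError:
            print("%s: cpufreq info not readable" % cpu)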
Re: (Score:3, Informative)
Re: (Score:2)
p4-clockmod will actually end up causing you to use more power since it's usually more efficient to get the work done faster at a higher CPU utilization and it takes a bit of time for p4-clockmod to "ramp up" the virtual clockspeed again.
If you're running the latest kernel (2.6.21 or later) with dynticks enabled, you can install and run
Re: (Score:1)
The only time I can see the clockmod driver being any use is when you need to force the CPU to slow down for whatever reason.
Parent is right about clocking/power usage (Score:2)
Laptop Components (Score:2)
Most server farms are running at full speed 24 hours a day. They don't throttle back and would not spend much if any time at a low-power idle.
There are job scheduling programs where if servers aren't doing real-time stuff, they are backfilling with other jobs. Stuff gets queued up for literally weeks. It has been my experience that users demand more cycles -- not that the systems sit there idle just
Re: (Score:2)
Take a walk through a general use datacenter and you will find lots of 1U single use, non-clustered servers burning energy at full speed and running at
There ARE specific applications that would utilize the hardware 100%, but those are a small percentage of the server
Re: (Score:2)
Afraid not. Data center utilization is typically 20%, and often a lot less. A whole lot less.
C//
Re: (Score:2)
Re: (Score:2)
Sure, but this is a minority. I think it's more likely that most servers in the world are running stuff like databases, email, Web, business applications, etc. When there's no work to be done, they just sit idle.
Re: (Score:2)
Modsim is an unusual use case. That use case has its own concerns.
20% is really on the high end for utilization in virtually every data center.
The syndrome that is most alive today is the "one service, one box" issue. That's why all the drive for consolidation, coming from the virtualization vendors.
C//
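A toy illustration of the consolidation math; the 20% utilization figure is the one from this thread, while the fleet size and per-host ceiling are assumptions:

    # Sketch: how many virtualization hosts could absorb a pile of
    # one-service-one-box servers running at ~20% utilization, keeping each
    # host below an assumed 70% ceiling so there's failover headroom.
    import math

    physical_boxes = 50          # assumed fleet size
    avg_utilization = 0.20       # figure quoted in this thread
    host_target = 0.70           # assumed ceiling per virtualization host

    total_load = physical_boxes * avg_utilization
    hosts_needed = max(3, int(math.ceil(total_load / host_target)))  # 3-host floor, per the earlier virtualization comment
    print("%d one-job boxes -> about %d virtualization hosts" % (physical_boxes, hosts_needed))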
Power usage is horrid (Score:2)
I've actually brought this up on Dell's I
Replacing servers with a laptop (Score:1)
I'm going through the exercise this month of replacing a whole slew of my always-on Internet servers at home (HTTP, SMTP, DNS, NTP) on machines going back long enough to still be running SunOS 4.1.3_U1 in one case, with a single Linux laptop. Current power consumption is ~700W. Target power consumption for the new system So, it is doable and worth doing financially, and I don't have to pay the 3x extra cost right now to remove the heat with aircon (if I did, I could pay for the solar panels too powe
Re: (Score:2)
Re: (Score:2)
I'm not, in fact, absolutely clear what you mean in your post. There is a difference between implementing redundancy and failover as a policy, using dedicated hardware, and the idea of having servers get together and somehow vote on which is to fulfil a management function, which the art
Replying to my stalker, the "overrated" mod (Score:2)