Benchmarking Power-Efficient Servers 97
modapi writes "According to the EPA, data centers — not including Google et al. — are on track to double power consumption in the next five years, to 3% of the US energy budget. That is a lot of expensive power. Can we cut the power requirement? We could, if we had a reliable way to benchmark power consumption across architectures. Which is what JouleSort: A Balanced Energy-Efficiency Benchmark (PDF), by a team from HP and Stanford, tries to do. StorageMojo summarizes the key findings of the paper and contrasts it with the recent Google paper, Power Provisioning for a Warehouse-sized Computer (PDF). The HP/Stanford authors use the benchmark to design a power-efficient server — with a mobile processor and lots of I/O — and to consider the role of software, RAM, and power supplies in power consumption."
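The paper's headline metric is, roughly, records sorted per Joule of energy drawn at the wall during the sort. A minimal sketch of that arithmetic in Python, with illustrative numbers rather than the benchmark's actual harness or results:

# JouleSort-style efficiency: records sorted per Joule of wall energy.
# All figures below are made up for illustration.
def records_per_joule(records_sorted, avg_power_watts, elapsed_seconds):
    energy_joules = avg_power_watts * elapsed_seconds  # E = P * t
    return records_sorted / energy_joules

# e.g. a billion 100-byte records sorted in 1,000 s at an average 250 W draw
print(records_per_joule(1_000_000_000, 250.0, 1_000.0))  # -> 4000.0 records/J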
ummmmm (Score:3, Insightful)
Until then, this is just marketing 101...
Not really (Score:3, Insightful)
Re:ummmmm (Score:5, Insightful)
However the way you've worded it amounts to "since we can't account for all aspects of impact, I'm not going to worry about any aspect of impact." That's a bit extreme. Surely reducing our power consumption during the operating lifetime of our servers is a step towards greater environmental and fiscal responsibility.
Now, if you can show that the "energy saving" chips generate more pollution during production than the "normal" chips (and that this increase in pollution/energy-use/cost is greater than the savings during the lifetime operation of the chip), that's important. However I doubt that is the case. Thus, to ignore the potential advantages of power-saving measures in the data-center, simply because such measures don't address the orthogonal concerns of production impact, is silly.
Re: (Score:2)
The crazy pork-loaded policy of subsidising turning feedstuff into ethanol is already distorting world food prices and policies, causing harm to the poor.
The Toyota Prius (pious?) uses more fuel than a good small diesel car, and is less functional. In fact, you'd be doing more good for the planet if you just bought
Re: (Score:3, Insightful)
Never misunderestimate [:)] the power of technological progress - you gotta start somewhere.
Re: (Score:2)
But a Tesla, http://www.teslamotors.com/ [teslamotors.com], hmmmm
Of course, the electricity to recharge the cells is mostly generated by coal-fired power stations. Damn...
The sad fact is, (and yes, I mean 'sad' - I have kids, so I'm concerned about the future of the pl
Re: (Score:2)
And I'm with you on nuke power, absolutely.
Re: (Score:2)
I don't see any diesel cars on the market (or any diesels for that matter) that are similar in fuel economy, functionality and emissions, let alone more efficient or less polluting.
Re: (Score:2)
Also, check out the latest Volks, Merc & BMW 'super efficient' models - low rolling resistance, engine cut-off on coast and at red lights... in 'real world' driving (not EPA bullshit - yup, I'm an avid Car & Driver reader too), they beat the shit out of the Prius.
Re: (Score:2)
BTW, did you remember to take into account the fact that diesel has ~10-20% more energy than gasoline and as a result ~10-20% more CO2 emissions for same fuel economy?
Hybrids won't really get good until they start puttin
Re: (Score:1)
Re: (Score:2)
More efficient, lower power servers directly relate to a cash savings on your electric bill. One server operating at 10% greater efficiency may not be a big deal, but it starts to matter when multiplied over a room of servers. Servers that use less power (generally) put off less heat, so you also save electricity because you don't have to cool them as much, and you can cram mor
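A back-of-the-envelope sketch of how that multiplies across a room, assuming roughly one watt of cooling per watt of IT load and a made-up electricity price:

# Illustrative savings from servers that are 10% more efficient.
SERVERS = 200
WATTS_PER_SERVER = 300.0
COOLING_PER_IT_WATT = 1.0      # assumed: ~1 W of cooling per W of IT load
PRICE_PER_KWH = 0.10           # USD, assumed
HOURS_PER_YEAR = 24 * 365

saved_watts = SERVERS * WATTS_PER_SERVER * 0.10 * (1 + COOLING_PER_IT_WATT)
saved_kwh = saved_watts * HOURS_PER_YEAR / 1000.0
print(f"{saved_kwh:.0f} kWh/year saved, about ${saved_kwh * PRICE_PER_KWH:.0f}/year")
# 200 * 300 W * 10% * 2 = 12 kW -> ~105,120 kWh -> ~$10,500 per year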
Re: (Score:2)
Re: (Score:2)
"However, if those more efficient servers cost twice as much to purchase per unit of work, not to mention the energy used in manufacturing, the savings are reduced."
I think you missed something obvious - the cost of the servers includes the cost of the energy used to make them.
That said, they could help reduce everyone's energy consumption by posting their stuff as plain html instead of pdf. Less data to transfer, no need to open up a pdf reader, etc.
Re: (Score:1)
Units? (Score:4, Funny)
virtualization? (Score:2)
Re: (Score:3, Interesting)
Eventually, if RAM continues to get bigger and cheaper, more cores get packed into chips, and virtualization becomes what it is intended to be in terms of performance and stability, we will start to
Re: (Score:2)
So you mean, big boxes with loads of CPUs and tonnes of memory, all connected to a huge storage system?
Sounds like IBM did a good thing keeping their mainframe business open :p
Commodity hardware was sold over big mainframes on the basis that it's much more scalable. If you want to do something else, just buy a couple of relatively cheap boxes and away you go. The thing that no-one mentioned is that it suddenly starts to cost a lot more $$$ to keep the things in power and cooled properly, so now we're se
Re: (Score:2)
Basically, mainframes are cheaper
Re: (Score:2)
I think we have the virtualisation technology we need - in fact we've had it for a long time. I just don't see massive adoption happening until there's a fat, cheap and secure pipe everywhere... Until then, I'll stick with my laptop, and home PC, and server, and think my kids will too. OK salesforce.com works, but it's still peanuts compared to the PC users worldwide - and what do people connect to salesf
Re: (Score:2)
Now, when the whole "virtualization is the answer to everything" wave started rolling in, I got excited and thought it was going to actually make good on all of its promises (and it still may). However, what is keeping me from actually putting it to use is that when you put several different VMs inside one box, then all of those VMs can be taken down by a single failure (disk, power supply, nic, etc) th
Re: (Score:2)
If you have lots of tasks that don't take much computing power by today's standards and you don't have a massive budget, then you have two choices.
1: put each one on its own cheap shit box
2: put them all on one higher grade box which has
Re: (Score:2)
you have the option of designing your software so machine failures are tolerated, but I can't see how you can do that with a mix of legacy applications
I really want to post my proposed architecture, but without going to too much detail, I am working on designing a system that uses mainstream software on commodity hardware in such a way as to break down each component into trivial tasks. Each of the tasks are stateless and therefore can be spread across different machines so that parallel requests (even by the same user) can be handled by different hardware components concurrently. By having at least 2 (more in most cases) machines that can perform each
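A toy sketch of the stateless fan-out being described; the worker names and task shape are hypothetical placeholders, not the poster's actual design:

# Round-robin dispatch of stateless tasks across interchangeable machines.
import itertools

WORKERS = ["worker-a", "worker-b", "worker-c"]   # at least 2 per task type
_next_worker = itertools.cycle(WORKERS)

def dispatch(task):
    # Any worker can serve any request because no state lives on the worker.
    worker = next(_next_worker)
    return worker, task          # a real system would send this over the network

print(dispatch({"op": "resize_image", "id": 42}))
print(dispatch({"op": "resize_image", "id": 43}))   # same user, different box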
Re: (Score:1)
Literally dozens of those things could be displaced by single modern VMware or Xen hosts. It's all a matter of manpower and know-how. (As well as convincing the PHB that his initial outlay will be made up quickly with power savings and administration cost sa
Re: (Score:2)
Re: (Score:2)
If at that point they don't get together and work (apply pressure) to reduce the number, your company is full of idiots.
Re: (Score:1)
We did just that a few years ago. It was justification for switching from CRT to LCD monitors.
In one year we saved enough in electricity to pay for the difference in price. Like many businesses, our electricity costs are based on our highest month's bill. By reducing that we ended up saving for the whole year.
We're regretting our move from a single IBM pSeries to 10 HP rack-mount servers
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
Sort of. Unfortunately, the ease of deployment and price reductions accomplished tend to result in a vast expansion of virtual servers instead. You're likely to end up with as much hardware, except it's doing several times more than what it used to.
At least from what I've seen of virtualization, your bill isn't going to get smaller; you're just going to get more for it. Which isn't too bad anyway.
a la simpsons (Score:2, Funny)
Russ Cargill: Of course I've gone mad with power! Have you ever tried going mad without power? It's boring and no one listens to you!
Efficient design (Score:2)
Re: (Score:2)
I would really like to have one server for each of the web sites running on Windows. Too bad virtualisation is out of the question, as Windows is a big memory hog.
Re: (Score:2)
Re: (Score:2)
For the most part this is due to software vendors not wanting to deal with support calls on more complicated systems. It's simply easier for them to support if you take out all of the variables introduced by other applications running on the machine.
The simplest thing for them to do is say or recommend it goes on a dedicated server. Much less of a headache come support time. Of course businesses spend millions because of it, and Dell/HP/IBM rake in the bucks.
I mean really, do license servers need to be de
Re: (Score:2)
Re: (Score:2)
For render farms, database servers, and HPC: the x86-64, POWER5, and UltraSPARC T2.
The x86-64 does a good job at about everything, but it is not the best at anything. The new low-power laptop CPUs are not terrible, but I don't think they can match the ARM, PowerPC, or MIPS in the
Network Queue Systems (Score:3, Insightful)
The real problem is that most I.T. staff are either as dumb as bricks and have no idea how to make use of one or have plenty of profit to burn and just don't care.
Re: (Score:2)
Re: (Score:2)
Now, I've been in the IT industry for ~5 years and I've never heard of anything called a "Network Queue System", and definitely not in connection with power savings.
They've been around since the early 1980s.
See:
http://www.google.co.uk/search?num=100&hl=en&safe=off&q=Network+Queueing+Systems&btnG=Search&meta= [google.co.uk]
or
http://en.wikipedia.org/wiki/Job_scheduler [wikipedia.org]
Modern free and commercial examples:
http://gridengine.sunsource.net/ [sunsource.net]
http://www.cs.wisc.edu/condor/ [wisc.edu]
http://www.clusterresources.com/pages/products/torque-resource-manager.php [clusterresources.com]
http://www.platform.com/Products/Platform.LSF.Family/Platform.LSF/ [platform.com]
http://www.gridwisetech.com/content/view/123/90/lang,en/ [gridwisetech.com]
http: [ibm.com]
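The power angle of all these schedulers is the same basic idea: pack the queued jobs onto as few nodes as possible so the rest can idle or be powered down. A crude sketch of that packing, not how any of the products above actually work:

# First-fit packing of queued jobs onto nodes; empty nodes can power down.
from collections import deque

SLOTS_PER_NODE = 8
free = {"node1": SLOTS_PER_NODE, "node2": SLOTS_PER_NODE, "node3": SLOTS_PER_NODE}
queue = deque([("sort_logs", 4), ("render", 8), ("backup", 2)])   # (job, slots)

while queue:
    job, slots = queue.popleft()
    for node in sorted(free, key=lambda n: free[n]):   # prefer partly used nodes
        if free[node] >= slots:
            free[node] -= slots
            print(f"{job} -> {node}")
            break
    else:
        queue.append((job, slots))   # nothing fits right now; try again later
        break

print("could power down:", [n for n, f in free.items() if f == SLOTS_PER_NODE])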
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
If every service / customer is independent of every other service / customer, then outages tend to stay small and simple. And with a simple, stupid datacenter your failure modes also tend to be simple and stupid. 99.999% uptime is often worth the cost to keep too many servers running.
Don't worry (Score:3, Funny)
It's going to be interesting (Score:2)
If we ever run into widespread power availability issues, say in the event of a natural or economic disaster, perhaps a series of them, or if we just degenerate into a civil war between political factions. No one ever imagines we could go through a near-collapse and fragmentation similar to the old Soviet Union.
We'd likely have bigger worries than whether we could keep our data centers running but it's an interesting scenario to contemplate. I honestly had no idea data centers in the US consumed that much powe
DC power (Score:3, Insightful)
Re: (Score:2)
Re: (Score:1)
The smaller gear we use has standard-ish connectors. Positive, negative and ground go into a terminal block, which in turn gets plugged into the device. That may just be a Cisco thing, I'm not sure -- even then, it's only on the relatively low draw devices with high gauge wire that use it.
The bigger stuff is all lugs, which I guess you could call a standard, but it's a nightmare to wire. Cut and crimp positive, negative and ground, fight with heavy gauge wire, find suitable ground points, or make yo
Re: (Score:3, Interesting)
At the cost of massive transmission losses... (Score:1, Insightful)
That's because DC power distribution suffers from massive losses if it's transmitted across any decent distance.
Re: (Score:1)
Modern solid state equipment means DC-DC conversion is more efficient than ever - AC was originally chosen because of how hard it was to convert DC between different voltages (the high ones required for transmission and the low ones required
Re: (Score:1)
Re: (Score:2)
HVDC makes the most of the voltage limit in the wire insulation and the relative lack of inductive losses to ground fro
Re: (Score:2)
Actually, traditionally the reverse is generally true -- it's only the small

"That's because DC power distribution suffers from massive losses if it's transmitted across any decent distance."

Low voltage implies high current; high current causes losses in the wiring. Traditionally, it was hard to convert between DC voltages as is done in AC with transformers. This precluded having tr
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
Weeeelll... not necessarily. When you start dealing with long wires, you end up having to deal with voltage drops across those wires. If your computer needs 5.000V to run reliably, you simply can't feed it with 5.000V produced by a power supply ten metres away, because by the time the electricity reaches the computer it won't be 5.000V any more.
Which means you need to feed it w
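Rough numbers for why the feed voltage matters, using the standard resistivity of copper and made-up load figures:

# Voltage drop and I^2*R loss in a DC feed over 10 m of 2.5 mm^2 copper.
RHO_COPPER = 1.68e-8                       # ohm-metres

def drop_and_loss(power_w, volts, length_m, cross_section_mm2):
    current = power_w / volts
    r = RHO_COPPER * (2 * length_m) / (cross_section_mm2 * 1e-6)   # out and back
    return current * r, current ** 2 * r   # (volts dropped, watts lost)

for volts in (5.0, 48.0, 230.0):           # 300 W load in each case
    drop, loss = drop_and_loss(300.0, volts, 10.0, 2.5)
    print(f"{volts:5.0f} V feed: {drop:6.2f} V dropped, {loss:6.1f} W lost in the wire")
# At 5 V the drop alone exceeds the supply voltage; at 230 V it is negligible.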
Re: (Score:2)
Except that it wouldn't...
Switching (AC) power supplies have the potential to be just as efficient, if not more so, than DC power supplies.
With a DC datacenter, you have to have a big central AC/DC converter, and then a bunch of DC/DC voltage converters. There's very little to gain, even in theory.
In practice, you'd probably do far better if you took a fraction of the money it would cost to make a DC datacenter, and instead replace all the PSUs w
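The comparison is just a product of conversion efficiencies; a sketch with assumed, not measured, numbers:

# Two made-up conversion chains: a decent switching PSU in every server,
# versus a central AC->DC rectifier plus a per-server DC->DC converter.
def chain(*stages):
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

ac_psu_per_server = chain(0.90)            # assumed PSU efficiency
central_dc_plus_dcdc = chain(0.95, 0.92)   # assumed rectifier and DC/DC stages
print(f"AC PSU per server : {ac_psu_per_server:.1%} of wall power reaches the load")
print(f"Central DC + DC/DC: {central_dc_plus_dcdc:.1%}")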
Re: (Score:2)
bias (Score:1)
If nothing else, maybe this will spur the design of other relevant energy efficiency benchmarks.
db
Mainframes, anyone? (Score:2)
Power Metering in IT (Score:2)
Software (Score:2)
1) Developers, the time may not be too far away when your code is measured on power efficiency (a rough sketch of the arithmetic follows below).
2) Software effects will turn out to be significant as well, because widely used software affects so many systems.
This reminds me of an article here on
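If point 1 ever happens, the scoring itself is simple once there is a watt meter on the box: energy is just average power times elapsed time. A sketch, where the power figure is an assumed external meter reading rather than anything the code can query:

# Joules per operation = (average watts from an external meter) * seconds / ops.
import time

def joules_per_op(work, operations, avg_power_watts):
    start = time.monotonic()
    work()
    elapsed = time.monotonic() - start
    return avg_power_watts * elapsed / operations

busy = lambda: sum(i * i for i in range(1_000_000))
print(f"{joules_per_op(busy, 1_000_000, 180.0):.6f} J/op at an assumed 180 W draw")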
The case for Smart Appliances (Score:3, Interesting)
Perhaps now I know.
It would be nice if I could set my house up on a "power budget", and let my appliances vie for electrical power and load-balance themselves to stay within that budget. If all appliances spoke over the in-house wiring (or perhaps wireless) and could turn themselves off or adjust their power usage that would be awesome.
You could implement something similar to this today with an X10 system or the like, but this is more of an off/on scenario, and is not based on actual power demands.
It would be great if all of my electrical things in my house could get together and say, "OK, guys, we have X amount of electricity to use today between all of us. Let's figure out, based on past usage patterns, who needs to be on and when in order to hit this budget".
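A toy sketch of that negotiation as a simple priority cut-off; the device names, wattages and priorities are all made up:

# Appliances sharing a household power budget: important loads run first,
# the rest wait until the budget frees up.
BUDGET_WATTS = 2000.0
appliances = [            # (name, watts, priority: lower number = more important)
    ("fridge", 150.0, 0),
    ("water heater", 1200.0, 1),
    ("dishwasher", 900.0, 2),
    ("dryer", 1800.0, 3),
]

remaining = BUDGET_WATTS
for name, watts, _prio in sorted(appliances, key=lambda a: a[2]):
    if watts <= remaining:
        remaining -= watts
        print(f"ON  : {name} ({watts:.0f} W)")
    else:
        print(f"WAIT: {name} ({watts:.0f} W) until the budget frees up")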
Re: (Score:2)
Re: (Score:3, Funny)
Dude, we've been using X11 [x.org] for some time now! X10 has been obsolete for almost exactly 20 years...
Re: (Score:2)
I wonder if you are mixing up two things - power and energy.
Power is important. Enough power plants and transmission capacity have to be built to handle the peak power load. Leveling out power usage can save money in construction costs and reduce the footprint of the electrical infrastructure.
As individuals, most of us pay for electrical
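A small illustration of the power-versus-energy point with made-up daily load profiles: the same energy over the day, but very different peak power for the utility to build for:

# Hourly load in kW; each entry is one hour, so summing gives kWh.
flat  = [1.0] * 24                  # steady 1 kW all day
spiky = [0.5] * 20 + [3.5] * 4      # mostly idle, big evening spike

for name, profile in (("flat", flat), ("spiky", spiky)):
    print(f"{name:5}: {sum(profile):.0f} kWh/day, peak {max(profile):.1f} kW")
# Both draw 24 kWh/day, but the spiky profile needs 3.5x the generating and
# transmission capacity at the moment of peak.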
Re: (Score:2)
Programming languages and system architecture (Score:4, Interesting)
Answer me this: how much power is lost through the use of inefficient programming languages and architectures that only emphasize processor speed, instead of balancing memory, processor and I/O?
Python, Perl and PHP all suffer from one big drawback: when you scale up, you need that much extra processor power. One programming language I know (Common Lisp) offers their advantages but can be compiled to near C/C++ speeds. I suppose there are others. And don't come back saying that programmers are expensive; it seems that what you gain on programmers, you lose in the cost of your datacenter. I don't know how Java fares here; it probably depends on the deployment of more recent JIT compilers.
Given how much a process has to wait on I/O, how come there are still no good solutions for providing enough I/O bandwidth that the processor can be used fully? (Unless you buy a mainframe or iSeries system, that is.)
Just asking.
Re: (Score:3, Interesting)
http://www.earth.org.uk/low-power-laptop.html [earth.org.uk]
I actually now control the CPU speed with another small Java app (see the update for 2007/08/20 on the same page), and in particular, watching it with strace I can't see the JVM doing anything in the main loop that hand-crafted C wouldn't.
In fact, the whole machine, including several Java and static Web ser
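On Linux the underlying knob is just sysfs, so the language hardly matters; a rough sketch, assuming the usual cpufreq interface and governors are present and the script runs as root:

# Flip the cpufreq governor and read back the current clock (kHz in sysfs).
CPUFREQ = "/sys/devices/system/cpu/cpu0/cpufreq"

def set_governor(name):
    with open(f"{CPUFREQ}/scaling_governor", "w") as f:
        f.write(name)

def current_mhz():
    with open(f"{CPUFREQ}/scaling_cur_freq") as f:
        return int(f.read()) / 1000.0

set_governor("powersave")            # or "ondemand", "performance", ...
print(f"now running at ~{current_mhz():.0f} MHz")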
Re: (Score:2)
Re: (Score:1)
The 4GB SD card was/is essentially a dd-ed mirror of the first three hard disc partitions (/,
The idea is that if/when the SD card dies from too many writes (I have no idea how long this will take even though I have minimised writes) then I can boot off hard disc with minimal work
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
What would be the key here? Are there optimisations possible to speed up XML processing? Can guidelines be written to enhance XML designs for faster processing? Have there been profiling tests to see where the bottlenecks in XML processing are? Should you even use XML?
At least one company is tackling the issue (Score:1)
SiCortex [sicortex.com]
They're more focused on computation than giant racks of storage, but their 2 systems are rated at max 3W/core total power consumption including drives, power supplies, interconnect, etc. I suspect the actual power draw will be much less.
How much storage does a "typical" datacenter have? (I know any answer would have huge variances.) For probably two million USD or more, you should be able to get their larger system with 8 TB of RAM and run their RAM-based Lustre filesystem along with the
Re: (Score:2)
Re: (Score:2)
Total power is a misnomer though, since they broke out cooling separately.
Power Metrics? (Score:1)
-John Mark
Who is this article about? (Score:1)
Re: (Score:2)
'dynamically' realocating servers (Score:2)
So the idea was to have systems repurposed automatically, using something like
Re: (Score:2)
Add VMware or Xen to the mix and you can pretty much get rid of the boot time as well as the install time. And if you have uniform hardware with LOM cards you can even automate the powering on/off of the base servers depending on the load of all the existing machines in the grid.
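The power-on half of that can be as simple as a wake-on-LAN packet from the management box; a minimal sketch where the MAC address, broadcast address and load threshold are placeholders (the power-off side would go through the LOM/IPMI instead):

# Send a standard wake-on-LAN magic packet when the running hosts get busy.
import socket

def wake(mac="00:11:22:33:44:55", broadcast="255.255.255.255"):
    payload = bytes.fromhex(mac.replace(":", ""))
    magic = b"\xff" * 6 + payload * 16          # 6x 0xFF then the MAC 16 times
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(magic, (broadcast, 9))
    sock.close()

def maybe_wake(cluster_load, threshold=0.75):   # aggregate load, 0.0-1.0
    if cluster_load > threshold:
        wake()

maybe_wake(cluster_load=0.82)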
Re: (Score:2)
The real benefit to those is for systems that don't need resources in large chunks (i.e. 2-4 dual-core Opterons, 8-16 GiB of RAM). That chunk size seemed to be fine, and it was similar to what the DBAs were used to throwing into our V880s when they need
And that is why (Score:2)
How much would consumption decrease.... (Score:2)
On a serious note, since I've got nothing else to do at work (as it's mind-numbingly boring), I was trying to figure out how many electrical plugs I would actually need to live a happy life (note, this is an extreme).
My answer: 2
One for the fridge, the other for a radio. Of course, it would be nice to have ceiling fans too, but those don't require power plugs, just direct wiring. Think about how many things you have plugged in to your house (e.g. those cell phone