AMD Beats Intel in Power-Efficiency Study
Ted Samson writes "AMD Opteron servers proved up to 15.2 percent more energy-efficient than those running Intel Xeon in a server-power-efficiency test performed by Neal Nelson and Associates, InfoWorld reports. That translates to annual electricity savings between $20.29 per server and $36.04 per server, depending on the workload, the study concluded. The benchmark tests were conducted on similarly configured 3GHz systems running Novell SUSE Linux, Apache2, and MySQL."
Multiple OS-es (Score:4, Interesting)
Life's great cycle (Score:1, Funny)
Re:Sponsorship? (Score:4, Informative)
Re: (Score:2, Funny)
Over what time period? Or are they using the prime interest rate to figure out what the one-time savings are?
They're talking about those very special monthly annual savings.
energy $$$ savings yea right (Score:2, Insightful)
Intel 5160 cpu = $851
The AMD system will be obsolete before you realize any "cost savings".
Also you don't buy these top dog chips if you're going to let them sit idle all day.
Re: (Score:1)
You sir are so wrong. After a mere 24 years*, the additional cost for the more expensive AMD will already be absorbed by the huge amount of saved power. Additionally, at the end of the 24th year, you'll have saved up another $43 towards the purchase of your next server. Oh and the nicest thing about it: the 8222 will be almost as old as the 80386 is now - and Intel is still producing more of those!
Unfortunately for AMD, you could also
Annually. (Score:1, Funny)
Did you seriously not read up to the fourth word in that sentence?
Power savings of OSS. (Score:1, Interesting)
Our analysis suggests that this was due to the open source software being more efficient than the equivalent Windows-based software. This is backed up by the fact that we saw a significant performance boost after the transition. Database jobs that would take 20 minutes on SQL Server 2005 and Windows 2003 wo
Different Power Supplies (Score:1, Insightful)
Re: (Score:3, Informative)
Re:Different Power Supplies (Score:5, Informative)
Re:Different Power Supplies (Score:5, Informative)
For a more scientific study, they should use the same power supply.
Re:Different Power Supplies (Score:5, Informative)
Re: (Score:2)
Re: (Score:1)
Mod parent +5 (Score:1)
pseudo-science. They might as well be killing chickens.
Re:Different Power Supplies (Score:5, Informative)
Re: (Score:2)
-nB
Re:Different Power Supplies (Score:5, Insightful)
For one, AMD and Intel don't release their new chips on the same date, so one side can always complain "that's not our newest stuff" or "yeah, but just wait until our next generation". If you wait for same generation, same CPU frequency chips from both manufacturers before you do a benchmark, you're going to be waiting a while - it'll never happen. And if you pick a "performance class" to set your benchmark on, somebody will complain "yeah but XXXX's chip is
Also above there is a discussion about chipsets / power supplies / etc. Again nearly impossible to standardize on this stuff as well. Obviously there is no motherboard that is identical in every regard except the processor that it accepts. Another thread talks about the memory controller for Intel being off-chip vs. on-chip for AMD - so right there you have to go beyond the CPU and include more platform to make a "fair" comparison. Even if they standardized on a power supply, people can argue that the system that pulls less power doesn't need the larger power supply and could save more power (less loss to inefficiencies) on a smaller unit. So do you run the recommended unit for the server or run the same, possibly wrong power supply for both?
My overall point being that for somebody to do any kind of test like this, they need to set up some base rules. I don't know why people complain so much - they provide all the criteria they chose and did a comparison based on that. If that doesn't answer a question you had, do it yourself or go to another benchmark. Don't complain that the test is invalid because your chip of choice didn't win. For this benchmark, power consumption for 3.0 GHz servers under "real world" conditions (not idle, not pinned, running various applications from databases to web servers), AMD won. Get over it.
Core vs Opteron (Score:2)
Is that still true, considering that Woodcrest is based on the Core (not NetBurst) architecture? Core should outperform Opteron per GHz quite a bit by now. Reading the comments on TFA confirms this without having to pull out some Anandtech benchmarks.
Re: (Score:2)
I'd like to know why they compared a Woodcrest Xeon, circa June 2006 to the latest and greatest Opteron of today.
Do you really think that's unfair?
1. the Woodcrest processor *is* the latest and greatest Intel CPU. So, they're comparing the *best* Intel to the *best* AMD. How is that not fair?
2. Both architectures are due for replacement later this year, but samples have not been released to reviewers as of yet.
3. The Opteron was released circa August 2006, a scant 2 months after the Woodcrest. The Windsor stepping on which the Opteron is based was released in May 2006, a month *before* Intel. The architecture f
Re: (Score:1)
Re:Different Power Supplies (Score:4, Interesting)
Re: (Score:1)
GHz != Performance (Score:2, Interesting)
AMD is doing better at idle speeds (Intel definitely needs to crank Penryn down more when it's not in use), but if this survey compared equivalent-performance processors, the difference would be much smaller.
Re: (Score:2)
The OS has a lot more to do with power consumption than the CPU. If the OS says "give me power", the CPU will oblige. Shitty code can cause more power-consumption problems than the CPU can mitigate. Run Windows under VMware sometime and watch the CPU peg at 100% while it's sitting in a loop at the login dialog.
Re: (Score:2)
3.0GHz May Not Mean Equal Performance (Score:5, Insightful)
Re: (Score:3, Insightful)
If you read page 6 of the test description, under Test Design, they say this:
The test simulates credit card transactions coming in at a controlled rate, so it would let someone estimate their operating costs. The fixed capital cost (determined by the required throughput and how much hardware is needed to handle it) is a separate issue this test tells you nothing about.
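A rate-controlled test like the one described above fires transactions on a fixed schedule rather than as fast as the server can absorb them. A minimal sketch of that idea, in Python; `do_transaction` is a placeholder, not anything from the actual benchmark harness:

```python
import time

def run_at_rate(do_transaction, per_second, duration_s):
    """Call do_transaction() at a controlled rate for duration_s seconds."""
    interval = 1.0 / per_second
    deadline = time.monotonic() + duration_s
    next_fire = time.monotonic()
    count = 0
    while time.monotonic() < deadline:
        do_transaction()
        count += 1
        # Schedule the next call relative to the ideal timeline, so slow
        # transactions are followed by a shorter (or no) sleep.
        next_fire += interval
        delay = next_fire - time.monotonic()
        if delay > 0:
            time.sleep(delay)
    return count
```

The point of pacing against an ideal timeline (rather than sleeping a fixed interval after each call) is that the offered load stays constant regardless of how fast the server responds, which is what makes power-per-workload comparisons meaningful.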
Re: (Score:2)
No, for just the reason the parent stated - it doesn't tell you how many computers of each type you'd need to handle each particular transaction load.
The biggest news I see, though, is the massive lead AMD holds in Idle power consumption - 44% lower! This is a very important special case (unless you somehow have a steady workload 24/7, which I think would be high
Re: (Score:2)
Yes for most servers, but for, say, rendering farms or HPC clusters (I will not use the b-word) it could be very typical.
In the low-end server market I wonder how the new 45-watt Athlons do. For a small server you don't need a "server"-class CPU, and those low-power Athlons look like they could make some pretty nice 1U systems.
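The idle-power point a few comments up can be made concrete: a server's effective draw is just a weighted average of its idle and loaded draw, so the more it idles, the more the idle figure dominates. A quick sketch with illustrative wattages (not measured figures from the study):

```python
def effective_watts(idle_w, load_w, idle_fraction):
    """Average draw for a server that idles `idle_fraction` of the time."""
    return idle_w * idle_fraction + load_w * (1 - idle_fraction)

# A box that sits idle 70% of the day mostly reflects its idle draw:
example = effective_watts(idle_w=120, load_w=250, idle_fraction=0.7)  # 159.0
```

For a render farm or HPC cluster with `idle_fraction` near zero, the loaded figure is all that matters, which is why the two camps in this thread talk past each other.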
Yeah, 'cos render farms are sooo common these days (Score:3, Insightful)
I don't think we should throw away the test results because of a few render farms.
Re:Yeah, 'cos render farms are sooo common these d (Score:2)
Re: (Score:2)
Re: (Score:2)
Of course that is just the logical way they should work which means that it is probably wrong.
Re: (Score:2)
Re: (Score:3, Insightful)
The problem with that is you often don't have a steady load 24/7. At 3am, you need 1 server; at 8:30am, you need 40. Since virtualization has overhead, the total amount of hardware required to support your max load using virtualization is actually more.
Agreed, virtualization could be good for pooling services that each consistently take less than one server, though.
Re:3.0GHz May Not Mean Equal Performance (Score:5, Informative)
Re: (Score:3, Insightful)
Re: (Score:1)
Re: (Score:3, Informative)
That page is over 4 years old.
*sigh*
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Starting with the P3, the length of their lists has been increasing.
Savings are also higher (Score:3, Interesting)
Also, FB-DIMMs and the Intel chipset need a lot more power than AMD chipsets with DDR2 ECC / DDR1 ECC RAM.
Intel CPU is only 1 part that uses a lot of power (Score:5, Informative)
The Intel CPUs are competitive with the Opterons on power consumption.
But: The whole system uses more with Intel.
Why? The northbridge memory controller is a separate chip on Intel platforms, and it is very power-hungry.
In the AMD chips the memory controller is a part of the CPU.
In the case of a similar dual XEON compared to a dual Opteron,
the XEON machine uses about 80W more power.
What a lot of these studies do not even get into is cooling cost.
for every watt of power, which ends up as heat, we have to expend at least 1.5 watts on air conditioning.
As for the comment about the size of the power supplies, that is irrelevant.
The maximum rated output of a supply has nothing to do with the power consumed.
Bottom line:
Assuming an Intel XEON server uses about 80 watts more than an equivalent AMD one,
which is what we see when we build them:
80 W x 24 hours/day x 365 days is about 700 kWh, which at 9c/kWh costs about $63/year.
Add air-conditioning costs for that extra 80 W:
120 W x 24 hours/day x 365 days is about 1050 kWh, which at 9c/kWh costs about $95/year.
Therefore, a machine using an extra 80 W costs nearly $160 extra per year to run in an air-conditioned room.
Source of power rates:
http://www.neo.ne.gov/statshtml/115.htm [ne.gov]
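The arithmetic in that comment can be checked in a few lines. A sketch assuming the poster's figures: an extra 80 W of server draw, 1.5 W of cooling per watt of heat, and the linked Nebraska rate of 9 cents/kWh:

```python
HOURS_PER_YEAR = 24 * 365

def annual_cost(watts, rate_per_kwh=0.09):
    """Annual cost in dollars of a constant electrical load of `watts`."""
    kwh = watts * HOURS_PER_YEAR / 1000.0
    return kwh * rate_per_kwh

extra_server = annual_cost(80)          # ~ $63/year for the extra draw
extra_cooling = annual_cost(80 * 1.5)   # ~ $95/year to remove that heat
total = extra_server + extra_cooling    # ~ $158/year, i.e. nearly $160
```

This also makes the sensitivity obvious: the total scales linearly with both the wattage gap and the local electricity rate, so a data center paying 15c/kWh sees proportionally larger savings.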
Re:Intel CPU is only 1 part that uses a lot of pow (Score:2)
What kind of crappy A/C do you have? I would expect more like 0.5 watts for air conditioning.
Re: (Score:2)
Re:Intel CPU is only 1 part that uses a lot of pow (Score:3, Insightful)
You should be able to get down below 1.5 kW per ton of A/C (efficient systems can get below 1.0 kW/ton, even including all the pumps and fans).
That works out to roughly 0.4 kW of A/C power used per 1.0 kW of heat removed. But then add about 0.15 kW of UPS overhead per 1.0 kW delivered, so you might use as much as 0.5 kW per 1.0 kW of server power.
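The kW-per-ton figures convert to a per-watt overhead like so (one ton of refrigeration is 12,000 BTU/h of heat removal, about 3.517 kW thermal); a quick sketch of that conversion:

```python
# 1 BTU/h = 0.29307107 W, so one ton (12,000 BTU/h) is ~3.517 kW thermal.
TON_KW_THERMAL = 12000 * 0.29307107 / 1000

def ac_overhead(kw_per_ton):
    """Electrical kW drawn by the A/C per kW of heat it removes."""
    return kw_per_ton / TON_KW_THERMAL

overhead = ac_overhead(1.5)   # ~0.43 kW of A/C power per kW of heat
with_ups = overhead + 0.15    # add UPS losses -> ~0.58 kW per kW delivered
```

This is the source of the "roughly 0.4, maybe 0.5" range in the comment: 1.5 kW/ton maps to about 0.43, and a 1.0 kW/ton plant maps to about 0.28.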
The maximum rated power supply does not correlate to power consumed, but an over-sized or under-sized power supply wil
Re: (Score:2, Informative)
To my (somewhat limited) knowledge, an air conditioner should remove about three to four times as much heat energy as the electrical energy it uses (Wikipedia says a SEER-13 unit (the minimum level for newly installed air conditioners in the U.S.) ought to pump "3.43 units of heat energy [...] per unit of work energy").
Re: (Score:2)
Re: (Score:1)
If the temperature outside is higher than inside, using an air conditioner to heat your home makes perfect sense, but I don't think it's very probable that the desire to heat your home coincides with that condition.
Re:Intel CPU is only 1 part that uses a lot of pow (Score:1, Informative)
AMD runs better with no load, where Intel runs better with full load. So in this particular instance, do you have a server that's gonna idle 99% of the time? (If so, why are you not using a VMware setup?) I'd expect a nice new server to be cranking out 100% usage for as long as I can keep it there.
You are correct that for every 1 watt of heat, it takes 1.5 watts (or sometimes even more) to remove the heat
Re: (Score:1)
damned statistics (Score:2)
Typical FUD - New AMD vs. Old Intel (Score:1)
Re: (Score:1)
Re: (Score:2)
http://www.cdw.com/shop/products/default.aspx?EDC
Servers absolutely are selling with the Opteron 8222, I don't know why you think it's unreleased.
isn't that irrelevant (Score:1)
Re: (Score:2)
Re: (Score:1)
It is the platform, stupid! (Score:2)
Like, you start with software. They used Apache, Linux and MySQL -- what about, say, lighttpd, BSD and PostgreSQL? Each is reputedly more efficient than its counterpart. And what about comparing to a system with a different architectural decision, like business rules in the DBMS à la Alphora Dataphor (partly doable in PostgreSQL or IBM DB2) or like Lisp?
Moreover, one should compare to RISC to
Re: (Score:2)
Re: (Score:2)
Not at all. And even if perchance that's the case, what I tried to say is that there are far more gains to be had elsewhere than in AMD vs Intel.