In Tests Opteron Shows Efficiency Edge Over Intel, Again
Ted Samson writes "In its latest round of energy-efficiency tests pitting AMD's Opteron against Intel's Xeon, independent testing firm Neal Nelson and Associates finds AMD still holds an edge, but it's certainly not cut-and-dried. Nelson put similarly equipped servers through another gauntlet of tests, swapping in different amounts of memory and varying transaction loads. In the end, he found that the more memory he installed on the servers, the better the Opteron performed relative to the Xeon. At maximum throughput, the Intel system fared better in power efficiency by 5.0 to 5.5 percent on calculation-intensive workloads, while on disk-I/O-intensive workloads AMD delivered better power efficiency by 18.4 to 18.6 percent. And in idle states (that is, when servers were waiting for their next workload) AMD consistently creamed Intel."
Boy, what a link search (Score:5, Informative)
Re:What sort of Xeon? (Score:4, Informative)
MOD PARENT UP (Score:4, Informative)
Re:sort of useless (Score:2, Informative)
RTFA (Score:5, Informative)
(Granted, it was buried several links deep.)
The article does not mention it, but SLES 10 enables cpufreq and the ondemand governor by default.
AMD power utilisation at reduced frequency in idle is higher than that of a Xeon system, which consumes nearly nothing when you slam it down to 250 MHz.
Uh, the lowest frequency of the Xeon 5160 is 2GHz.
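Whether a distro actually enables cpufreq, and what floor the governor can reach, is easy to verify from the sysfs values. Here's a minimal Python sketch; the helper function is my own illustration (not from the article), and it just formats the raw strings you'd read from `/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor` and `scaling_min_freq`:

```python
def describe_cpufreq(governor: str, min_freq_khz: str) -> str:
    """Summarize a CPU's scaling governor and frequency floor.

    Arguments are the raw file contents from the cpufreq sysfs
    interface; scaling_min_freq is reported in kHz.
    """
    ghz = int(min_freq_khz.strip()) / 1_000_000  # kHz -> GHz
    return f"governor={governor.strip()}, floor={ghz:g} GHz"

# Values as a SLES 10 box with a Xeon 5160 might report them:
print(describe_cpufreq("ondemand\n", "2000000\n"))  # governor=ondemand, floor=2 GHz
```

A 250 MHz floor, by contrast, would read back as `250000` kHz from the same file.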
Re:MOD PARENT UP (Score:3, Informative)
As I said, the full paper is much more informative. You may consider that extra information to be irrelevant, but that doesn't change the fact that there is a lot of info in the full paper that the submitted article doesn't even hint at. The paper, by the way, focuses on power efficiency, not performance. If people are looking at power efficiency because they want to save money on electricity (there may be other reasons to consider it, of course), then the fact that the systems themselves have very different prices seems pretty relevant to me.
OTOH, don't let me stand in the way of your fan-boyism.
75% of the computers I've bought have been Intel based. Give it a rest.
Something I've noticed... (Score:5, Informative)
If you fully load them down, my X2s use nearly as much power as the Core2 systems, but when lightly loaded, my experience mirrors the article's: the X2 systems use significantly less power.
In our call center, we built a large batch of X2-based systems - nothing too fancy, just an X2/3800, two gigs of memory, a 250-gig drive, a DVD burner, a 6200tc video card, and 19" LCD monitors. The cases and power supplies were pretty cheap - I think $35 for the case and a "400-watt" power supply. (Yes, the quotes are there for a reason.)
To size out the UPS units, we broke out the old, trusty Kill-A-Watt. Logging into a PDC server, browsing the web, checking email, etc., then logging out, the peak draw for one machine and monitor together was 140 watts, with the load *most* of the time at 80-100 watts. Those are some spankily low numbers, especially when you consider that the monitor's contribution was probably 25-40 watts.
And, as we speak, I have a dual-socket, dual-core Opteron with a 15K SCSI RAID array and 8 gigs of RAM running just a few feet away from me, with 4 instances of Prime95 going. The Kill-A-Watt says 296 watts with all of that load. This box is going to replace an old 4x700 MHz Xeon server which draws 500-700 watts. The power factor, however, is just 0.7, so I really need a better power supply in there.
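The power factor matters when sizing a UPS from readings like these, because UPSes are rated in volt-amperes (apparent power), not watts: VA = W / PF. A quick sketch (the function name is mine, for illustration):

```python
def apparent_power_va(real_watts: float, power_factor: float) -> float:
    """Apparent power in volt-amperes: VA = W / PF.

    With a poor power factor, the UPS must be sized well above
    the real wattage the Kill-A-Watt reports.
    """
    return real_watts / power_factor

# The Opteron box above: 296 W real at a 0.7 power factor.
print(round(apparent_power_va(296, 0.7)))  # 423 VA
```

So that 296 W server actually presents about 423 VA to the UPS; an active-PFC power supply (PF near 1.0) would close most of that gap.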
Re:Can we actually see the damn test config (Score:1, Informative)
As you said, this probably has more to do with the OS, motherboard, and BIOS than with the chip being used.
When Will They Learn (Score:5, Informative)
We know that Intel takes a hit with FB-DIMM memory, especially as you add more memory modules.
Another inconsistency appears to be related to the case design: the cases for the Intel machines appeared to provide inadequate cooling for the memory modules, causing the system management controller to bump up fan speed considerably. So now we're comparing two systems with different power supplies and different cooling requirements, which may or may not be related to the actual architecture but may be impacted by a design choice made by the case manufacturer. How would these results change with different power supplies or a different case? Are the differences the same in a 2U case? A tower? Does it get worse? Better? I know that our Mac Pros NEVER spin their fans up above the 500-600 RPM they bottom out at.
As noted by others, the paper is completely devoid of any discussion of CPU frequency/voltage scaling, which may or may not be handled by the BIOS or by Linux-resident programs (the cpuspeed daemon). It's possible they haven't even checked for it. As our company runs both Intel and AMD Linux boxes, I can testify that Linux is very sensitive to motherboard/CPU combinations when it comes to CPU scaling, and it's "possible" that this could be playing a MAJOR role in the idle power figures. It'd be nice to see it addressed.
Lastly, there's no discussion of the optimizations made to the software run on each of the boxes. Is the code compiled for each architecture individually, taking into account support for 3DNow/SSE instructions, cache sizes, etc.? Obviously more or less efficient code execution would have a MAJOR impact on these studies, enough so that companies usually spend a large amount of time playing with compiler options to get the best performance on a given architecture. And when you're arguing over comparisons in the sub-20% difference arena, code efficiency should be addressed, especially if it's not a big commercial package that "everyone" in the industry would be using. Anyhoo, just my thoughts.
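To make the per-architecture tuning point concrete, GCC of that era already exposed CPU-specific `-march` targets. The flag sets below are illustrative picks of mine (the paper doesn't document what build flags Nelson actually used); a benchmark build script might select them like this:

```python
# Illustrative gcc 4.x-era tuning flags per CPU family.
# These are examples, not the flags used in the Nelson benchmarks.
ARCH_FLAGS = {
    "opteron": ["-O2", "-march=opteron", "-msse2", "-m3dnow"],
    "xeon":    ["-O2", "-march=nocona", "-msse3"],
}

def cflags_for(cpu: str) -> str:
    """Return a CFLAGS string tuned for the named CPU family."""
    return " ".join(ARCH_FLAGS[cpu])

print(cflags_for("opteron"))  # -O2 -march=opteron -msse2 -m3dnow
print(cflags_for("xeon"))     # -O2 -march=nocona -msse3
```

A generic `-O2` build with no `-march` at all is the usual third option, and is arguably the fairest baseline for a cross-vendor comparison.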
Re:AMD better than Intel? hmm... (Score:1, Informative)
Re:AMD better than Intel? hmm... (Score:4, Informative)
Re:MOD PARENT UP (Score:3, Informative)