In Tests Opteron Shows Efficiency Edge Over Intel, Again

Ted Samson writes "In their latest round of energy-efficiency tests between AMD Opteron and Intel Xeon, independent testing firm Neal Nelson and Associates finds AMD still holds an edge, but it's certainly not cut and dried. Nelson put similarly equipped servers through another gauntlet of tests, swapping in different amounts of memory and varying transaction loads. In the end, he found that the more memory he installed on the servers, the better the Opteron performed compared to the Xeon. At maximum throughput, the Intel system fared better in power efficiency, by 5.0 to 5.5 percent, on calculation-intensive workloads; on disk-I/O-intensive workloads, AMD delivered better power efficiency by 18.4 to 18.6 percent. And in idle states — that is, when servers were waiting for their next workload — AMD consistently creamed Intel."

  • Because I'm using an Opteron!

    I'll bet it checks off "post anonymously" even better than an Intel too!
  • by StefanJ ( 88986 ) on Friday August 31, 2007 @03:19PM (#20428397) Homepage Journal
    Opteron?

    Xeon?

    Why do these top of the line processors sound like character names from crummy 1980-vintage cartoons about giant robots who talk like street thugs?

    "I'm calling you out Xeon! You will be defeated and all Processaria will bow before my superior power stats!"

    "You're a fool if you believe those benchmarks Opteron! The true power is Inside!" (duh-Dah-dumm!)
  • by Anonymous Coward on Friday August 31, 2007 @03:20PM (#20428401)
    This just in! AMD is more efficient than Intel when doing nothing!

    For a really good test, they should compare AMD to an empty cardboard box, and see which one uses more power when processing no transactions.
    • Re:Efficient Post! (Score:5, Insightful)

      by Ajehals ( 947354 ) on Friday August 31, 2007 @03:27PM (#20428479) Journal
      Idle power consumption may not be important for systems that are under a constant workload all the time, but for office file servers, where any given server may be under heavy load for 8 hours a day (probably closer to 6, and probably not "heavy" load at that), having it draw less power in the remaining 16 hours would be rather beneficial; after all, a server like that would be idle 2/3 of the time.

      Obviously ideally you would be using all your kit at 95% capacity all the time, but even then you would need some idle kit standing by to take care of any additional demand. Sadly, companies who aren't planning their IT systems with load in mind (but rather by which vendor takes them to lunch more often or which has the coolest flashing lights) are probably not too interested in power consumption stats anyway.
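
      A back-of-the-envelope sketch of what that idle time is worth (the wattage delta here is hypothetical, not a number from the white paper):

        # Hypothetical: 40W lower idle draw, 16 idle hours a day
        echo "40 * 16 * 365 / 1000" | bc -l   # ~233 kWh saved per server per year
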
      • Obviously ideally you would be using all your kit at 95% capacity all the time
        Yep, if your developers aren't working 24 hours a day, you need to outsource half of them to the other side of the world. That way you get your money's worth out of the development servers.
    • Re:Efficient Post! (Score:5, Insightful)

      by Bert64 ( 520050 ) <.moc.eeznerif.todhsals. .ta. .treb.> on Friday August 31, 2007 @03:29PM (#20428507) Homepage
      Most servers spend a lot of time idle, often far more time idle than busy...
      You don't buy a server that is just barely fast enough for your workload; you over-spec so that it can easily handle spikes in load and allow for future growth.
      Also, many business operations have busy hours and quiet hours, for instance internal servers at a company will usually only see much load during working hours.
      • A file server or webserver doing static pages, sure. A computation server or a server doing lots of dynamic content, not so much. A more useful benchmark would be to measure the actual loads for various tasks, then see how they perform for that. Say, "If you have a server doing X, this is what you can expect from these processors." Servers aren't a "one size fits all" kind of deal. I agree idle efficiency is something worth considering, but let's not pretend that all servers just idle. Also, I know many places
        • Get numbers for actual server product lines. As another poster has pointed out, the PSU, case design, RAM configuration, disk config can all make a difference to power consumption.

          So, benchmark the whole system. And don't bother with MIPS or FLOPS; they're arbitrary and don't allow you to compare differing architectures. So give us SPECmarks per watt or TPC-? per watt, as well as per dollar.

          Then you can simply choose a particular make/model based on requirements.
  • No matter.... (Score:3, Insightful)

    by hurting now ( 967633 ) on Friday August 31, 2007 @03:21PM (#20428421) Homepage Journal
    Even if it's not cut and dry, this is EXCELLENT for the CPU industry. We need to see competition between the manufacturers.

    Don't let that get lost in the arguments over which is better or what have you. Continued improvements and development benefit everyone.
    • This applies to the original post as well.

      The idiomatic expression is cut and dried. It means ready-made, predetermined and not changeable. For example, "The procedure is not quite cut and dried. There's definitely room for improvisation."

      It originally referred to herbs for sale in a shop, as opposed to fresh, growing herbs.
      • by nuzak ( 959558 )
        > It originally referred to herbs for sale in a shop, as opposed to fresh, growing herbs.

        I'm under the impression that it had to do with firewood -- you have to cut it and dry it before burning it. Chopping firewood seems like a far more universal activity of the time. But it certainly got applied to many other things in time, all with the same connotation of convenience, suitability, and uniformity.

        It's amazing, and kind of depressing, how many "word origins" sites only serve to repeat long-debunked u
  • by Anonymous Coward on Friday August 31, 2007 @03:23PM (#20428439)
    Here [worlds-fastest.com] is the whitepaper, instead of the summaries.

    • MOD PARENT UP (Score:4, Informative)

      by Bill Dimm ( 463823 ) on Friday August 31, 2007 @03:41PM (#20428631) Homepage
      The submitted article is terrible. The full paper is much more informative. For example, the full paper gives the system specs (both systems at 3.0GHz) and shows that the Opteron system is much cheaper ($2800 vs. $4170 for 2GB configuration).
      • the full paper gives the system specs (both systems at 3.0GHz)

        Unfortunately, the white paper doesn't say if the Xeon 5160s they benchmarked are from the relatively new G stepping. The new G stepping cuts idle power consumption by at least 30W for two Xeon 5160s. The Tech Report reported this a few weeks ago: New Xeons bring dramatically lower idle power [techreport.com].

        30 watts is a very significant difference, but I'm not sure if it would make up for those power-sucking FB-DIMMs.

        • Re: (Score:3, Informative)

          These tests were not run with the new G stepping. If someone can loan me a pair of the new chips for about a week I will re-run the tests and promptly publish the results. Neal Nelson
  • I am confused about Intel branding. Last time I checked, Xeons were not their most efficient cores. Are these ones based on the Conroe architecture or something?
  • by Anonymous Coward
    "...in cases where Intel outperformed AMD in power efficiency, the servers were configured with smaller larger memory sizes..."

    "...At the maximum throughput, based on transactions per watt hour, the Intel system delivered better power-efficiency..."
    This seems to imply that they are measuring throughput in transactions per watt hour, but those units are appropriate for power-efficiency, not throughput. At best, this is unnecessarily confusing.
  • FTFA (Score:4, Funny)

    by JedaFlain ( 899703 ) on Friday August 31, 2007 @03:30PM (#20428513)
    "Further, in cases where Intel outperformed AMD in power efficiency, the servers were configured with smaller larger memory sizes."

    It's all so clear dark to me now...
  • sort of useless (Score:2, Insightful)

    by krog ( 25663 )
    Only a fool would specify an Opteron or a Xeon in a power-critical application. You might as well compare fuel consumption among a group of muscle cars; the very act of comparison indicates that you missed the point entirely.
    • Re: (Score:3, Insightful)

      by Applekid ( 993327 )
      Depends on the rest of the specs. If you have a muscle car with more power for less fuel, certainly it's worth noting.
      • Re: (Score:3, Insightful)

        by geekoid ( 135745 )
        Nope. Muscle cars are about power. If your car has more power and less fuel, you win. If your car has more power and more fuel, then you win.
        It's not even worth noting.
        Now if you are talking about high performance race cars, then it is pretty important.
        • by smoker2 ( 750216 )
          There is not much point to a "muscle car" if it uses so much fuel that it can only run for 2 seconds - so I would say it IS worth noting!
          It has to have a certain amount of fuel economy OR huge ferkin tanks!
    • Re: (Score:2, Informative)

      by Tinyn ( 1100891 )
      No, the point is people are starting to care about the total power usage of their 500-zillion-server colo facility, where even a 5% reduction in power usage can mean hundreds or thousands of dollars on the power bill.
    • Re: (Score:3, Insightful)

      The point of the study is the relative power efficiency of the two processors, not absolute power efficiency. If you need the performance of an Opteron or Xeon, why wouldn't you choose the more efficient one (all else being equal)?
    • by XaXXon ( 202882 )
      You missed the point. You care about power efficiency in a server because once you get beyond being a rinky-dink operation and start designing entire data centers, you realize that there's a huge multiplier on your power consumption. You have to remember that an increase in power use causes an increase in heat, which requires an increase in cooling. This also increases your generation requirements.

      additional cost =
      power delta * 10,000 (machines) * 2 (for cooling)
      + additional cooling hardware +
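
      To sketch that multiplier with made-up numbers (a 20W per-machine delta, 10,000 machines, 2x for cooling, $0.10/kWh; all hypothetical):

        # 20W * 10,000 machines * 2 (cooling) = 400kW extra at the wall
        echo "20 * 10000 * 2 * 24 * 365 / 1000 * 0.10" | bc -l   # ~$350,400 per year, before any extra cooling hardware
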
    • The Opteron HE is AMD's best processor in terms of performance per watt for a given rack or blade unit. Sure, you could theoretically run a server farm of Intel Centrinos, but you would get far less computing speed overall, and a modest savings in power.
    • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Friday August 31, 2007 @04:18PM (#20428989) Journal
      It's all about performance per watt. Well, and other considerations, like how much the hardware costs up front, and how much physical space it will require.

      The bottom line is: You want to spend your money in the most efficient way possible.

      If you have two potential architectures, and one offers more performance per watt, then ignoring up front hardware costs, it's cheaper to run the one that costs you less power. That's a bit different than suggesting they just use a bunch of laptop CPUs.
    • by nuzak ( 959558 )
      > Only a fool would specify an Opteron or a Xeon in a power-critical application.

      Google has a power bill slightly higher than most people's home PCs. They don't run their bricks on ARM, do they? Any company with a big data center wants to see its electric bill go down.

  • by edxwelch ( 600979 ) on Friday August 31, 2007 @03:43PM (#20428655)
    Actually, if you look at the raw test data (rather than the conclusions) you will see that both servers performed nearly equally: the Xeon did slightly better on some tests, while the Opteron did better on others. In most tests the results are about the same (around a 5% difference).
  • Amazingly skimpy article. No effing data whatsoever.

    I can bet a case of beer that this was run in a standard server config under Winhoze Server 2003. These are the results you more or less expect in that case.

    If that is the case, neither Opteron nor Xeon utilise CPU frequency scaling, as there is no OS support. If you use CPU frequency scaling under, let's say, current RHEL or Debian, the idle and I/O efficiency picture tends to reverse, because AMD is still not as good at this as Intel. In fact it not even supp
    • Amazingly skimpy article. No effing data whatsoever.
      No argument there.

      I can bet a case of beer that this was run in a standard server config under Winhoze Server 2003
      What kind of beer? The full paper [worlds-fastest.com] says they were running 64-bit SUSE Linux Enterprise Server 10.
      • by arivanov ( 12034 )
        They still ran it like Windoze. They did not use a single one of the Linux power control options and tunables, leaving everything at defaults. This is not how you run a power-efficient installation. There are plenty of tunables under cpufreq, and some less relevant ACPI stuff, that can make up to a 70% difference in power consumption on a 1U server. They touched none of them.
    • RTFA (Score:5, Informative)

      by Wesley Felter ( 138342 ) <wesley@felter.org> on Friday August 31, 2007 @03:54PM (#20428771) Homepage
      http://www.worlds-fastest.com/d.pdf/wfw991.pdf [worlds-fastest.com]

      (Granted, it was buried several links deep.)

      The article does not mention it, but SLES 10 enables cpufreq and the ondemand governor by default.

      AMD power utilisation with reduced frequency in idle is higher than that of a Xeon system which consumes nearly nothing when you slam it down to 250MHz.

      Uh, the lowest frequency of the Xeon 5160 is 2GHz.
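
      For what it's worth, the standard cpufreq sysfs files report which driver and governor a box is actually using (exact files vary a bit by driver and kernel version):

        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies
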
      • by arivanov ( 12034 )
        Uh, the lowest frequency of the Xeon 5160 is 2GHz.

        Utter bullshit. That is the base frequency. The lowest frequency adjustable through the standard cpufreq P4 runtime frequency interface is a fraction of the base. If the base is 2GHz it is 250MHz (it usually scales in 8 equal steps).

        To see the frequencies:

        • modprobe p4_clockmod
          modprobe cpufreq_ondemand

        To enable dynamic scaling (via kernel)

        • echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

        Watch either /sys/devices/system/cpu/cpu0/cpufreq/c
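
        Assuming the standard cpufreq sysfs layout, one way to watch the current frequency step up and down with load:

        • watch -n1 cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
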

        • Your statement "They tweaked all kinds of shit ...." is incorrect. Appendix B in the white paper lists the two changes that we made to the Apache2 configuration files. One increased the number of user sessions and the other turned off logging. As noted in the text of the white paper we also set the BIOS fan speed control to automatic. I hardly think that these changes can be accurately described as tweaking "all kinds of shit". If you will send me a list of what you consider to be the proper tweaking change
          • by arivanov ( 12034 )
            1. They claim to be showing power vs performance statistics, including for idle. If you do so, you need to know the factors affecting this for the OS used. They have shown that they know only the ones relevant under Winhoze.

            2. They have not configured the system for optimum power vs performance for idle, for I/O load, or for varied load. In all of these cases you can improve power consumption and heat produced by anything from 30% to 70%. Instead of that they are scratching their testicles by tweaking s
            • I read your reply but I do not see any recommended settings. Can you provide some link to a "howto" or some published paper where this information is provided for the Xeon and the Opteron? Thanks, Neal Nelson
              • by arivanov ( 12034 )
                Go a couple of posts up the chain. There is a post by me that says how to do it in the simplest form using a kernel ondemand governor (no need to repeat it) including actual commands.

                This is good enough for 99% of typical enterprise server loads, including nearly any file serving, web serving, etc. The only area where you may find this approach problematic is cases where the ramp-up from 0 to 100% load is not stepwise but nearly instantaneous, and the latency of the ramp-up is critical. There are a few a
                • As I said earlier, the ondemand governor was already used in these tests, because SLES 10 enables it by default.
                • I checked both the Xeon and Opteron servers, and just as Wesley said ondemand governor was already set on both boxes. Neal Nelson
          Clock modulation makes the processor slower but also less efficient, so I don't recommend it. I think the results from these benchmarks would actually be worse if clock modulation were used. I wish Linux would go ahead and remove the p4_clockmod driver so that people like you will stop making their systems less efficient.

          For a Xeon you want to use EIST instead of clock modulation; the proper driver is speedstep-centrino or acpi-cpufreq (depending on kernel version) and SLES 10 loads this driver automatically, so
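
          A minimal version of that, assuming the module names of that kernel generation:

          • modprobe speedstep-centrino || modprobe acpi-cpufreq
            echo ondemand > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

          Unlike clock modulation, EIST drops voltage along with frequency, which is where the real savings come from.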
    • Re: (Score:1, Informative)

      by Anonymous Coward
      Actually, a close look at the tests shows that they got the AMD to cycle down under no-load conditions, but couldn't get the Intel chip to do the same.

      As you said, this probably has more to do with the OS, Motherboard, and BIOS than the chip being used.
  • by NerveGas ( 168686 ) on Friday August 31, 2007 @04:28PM (#20429059)

        If you fully load them down, my X2s use nearly as much as the Core2 systems - but when lightly loaded, my experience mirrors that of the article, that the X2 systems use significantly less power.

        In our call center, we built a large batch of X2-based systems - nothing too fancy, just an X2/3800, two gigs of memory, a 250-gig drive, a DVD burner, a 6200tc video card, and 19" LCD monitors. The cases and power supplies were pretty cheap - I think $35 for the case and a "400-watt" power supply. (Yes, the quotes are there for a reason.)

        In order to size out the UPS units, we broke out the old, trusty Kill-A-Watt. While logging into a PDC server, browsing the web, checking email, etc., then logging out, the peak draw for one machine and monitor together was 140 watts, with the load *most* of the time at 80-100 watts. Those are some spankily low numbers, especially when you consider that the monitor's contribution was probably 25-40 watts.

        And, as we speak, I have a dual-socket, dual-core opteron with a 15K SCSI raid array and 8 gigs running just a few feet away from me, with 4 instances of Prime95 running. Kill-A-Watt says 296 watts with all of that going on. This is going to replace an old 4x700 MHz Xeon server which draws 500-700 watts. The power factor, however, is just 0.7 - I really need a better power supply in there.
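
        For UPS sizing it's the apparent power (VA) that matters, and with a 0.7 power factor it's noticeably higher than the real draw:

          # VA = watts / power factor
          echo "296 / 0.7" | bc -l   # ~423 VA apparent for a 296W real load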

  • When Will They Learn (Score:5, Informative)

    by jonesy16 ( 595988 ) on Friday August 31, 2007 @04:44PM (#20429163)
    Over and over again people try to compare the efficiencies between two "seemingly" identical servers / machines. But truly, how can you declare a winner (and base it on something like a 5% efficiency margin) when the two machines are using different power supplies? A 600-watt unit for the Intel, a 500-watt for the AMD. I can't find those models listed on Delta's website at a quick glance, but it'd be a stretch to imagine that two different power supplies have the exact same efficiency curves. I mean, I'd believe it if they were accurate to within maybe 3%, so now we're arguing over whether or not Intel and AMD are more than 2% different in efficiency? Come on, people. The whitepaper does say they assume there might be a 1% difference between the two power supplies, but that's based on "eyeballing" the efficiency curves.
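
    To put the PSU concern in numbers (efficiency figures made up for illustration), the same DC load drawn through two supplies a few points apart in efficiency swamps a 2% CPU-level difference at the wall:

      # Hypothetical: 250W DC load, 80% vs. 83% efficient supplies
      echo "250/0.80 - 250/0.83" | bc -l   # ~11.3W difference at the wall, about 3.6%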

    We know that Intel takes a hit with FB-DIMM memory, especially as you add more memory modules.

    Another inconsistency appears to be related to the case design, where the cases for the Intel machines appeared to be providing inadequate cooling for the memory modules, causing the system management controller to bump up fan speed considerably. So now we're comparing two systems with different power supplies and with different requirements for cooling, which may or may not be related to the actual architecture but may be impacted by a design consideration made by the case manufacturer. How would these results change with different power supplies or a different case? Are the differences the same in a 2U case? A tower? Does it get worse? Better? I know that our Mac Pros NEVER speed up the fans above the 500/600 RPM that they bottom out at.

    As noted by others, the paper is completely devoid of any discussion regarding CPU frequency / voltage scaling that may or may not be handled by the BIOS or Linux-resident programs (the cpuspeed daemon). It's possible they haven't even checked for it. As our company has both Intel and AMD Linux boxes, I can testify that Linux is very sensitive to motherboard/CPU combinations when it comes to CPU scaling, and it's "possible" that this could be playing a MAJOR role in the idle performance values. It'd be nice to see it addressed.

    Lastly, there's no discussion as to the optimizations made to the software being run on each of the boxes. Is the code compiled for each architecture individually, taking into account support for 3DNow / SSE instructions, cache sizes, etc.? Obviously more efficient or less efficient code execution would have a MAJOR impact on these studies, enough so that companies usually spend a large amount of time playing with compiler options to get the best performance on a given architecture. And when you're arguing over performance comparisons in the sub-20% difference arena, code efficiency should be addressed, especially if it's not a big commercial package that "everyone" in the industry would be using. Anyhoo, just my thoughts.
    • Mod back up (Score:3, Insightful)

      I don't fully agree with the parent post, but it's not a troll. Some of these are legitimate issues.

      there's no discussion as to the optimizations made to the software being run on each of the boxes. Is the code compiled for each architecture individually taking into account support for 3DNow / SSE instructions, cache sizes, etc? Obviously more efficient or less efficient code execution would have a MAJOR impact on these studies, enough so that companies usually spend a large amount of time playing with comp
  • Imagine a Beowulf cluster of those!
  • Almost all servers are extremely underutilized according to the research I've seen (which is why companies like VMware say they can sell you expensive virtualization products, to better utilize your equipment). If you're lucky it might have some bursty load where the big CPUs you put on it are going to be taxed for a significant amount of time, but most people simply average around 10 to 30% utilization during business hours. (so we're not even counting the mostly idle time for the 14 or so hours a day)

    Seem
  • FB-DIMMs hurt Intel as they need a lot more power and give off more heat than DDR2 ECC RAM.
    And in AMD systems the RAM is linked to each CPU's built-in memory controller, and the CPUs have a better CPU-to-CPU link. Also the chipsets use less power.
    • Is anyone aware of any other published power efficiency data? It would be pretty easy to plug in a "Kill A Watt" or "Watts Up" device and measure the power at idle. Is there any data for other server configurations? Has anybody compared an Intel "Desktop Server" with DDR2 to a Xeon-based server with FB-DIMMs? Has anybody reported idle power under Windows versus idle power under Linux on the same machine?
  • Is it getting time for AMD marketing to pull this one out of the closet?

    Intel - Leap Ahead =( http://www.leapahead.com/ [leapahead.com]
    AMD - Leaps Beyond! http://www.leapsbeyond.com/ [leapsbeyond.com]
