
AMD Beats Intel in Power-Efficiency Study

Ted Samson writes "AMD Opteron servers proved up to 15.2 percent more energy-efficient than those running Intel Xeon in a server-power-efficiency test performed by Neal Nelson and Associates, InfoWorld reports. That translates to annual electricity savings between $20.29 per server and $36.04 per server, depending on the workload, the study concluded. The benchmark tests were conducted on similarly configured 3GHz systems running Novell SUSE Linux, Apache2, and MySQL."
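
A rough way to read those dollar figures: assuming an electricity rate of about 9 cents/kWh (the rate cited in the comments below; the summary itself does not state one), the quoted savings correspond to a modest average power difference per server. A minimal Python sketch:

```python
# Back-of-the-envelope only: infer the average power delta implied by the
# quoted annual savings. The 9 c/kWh rate is an assumption taken from the
# comments below, not from the study summary.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def implied_avg_watts(savings_usd_per_year, usd_per_kwh=0.09):
    """Average power difference (watts) that would yield the given annual savings."""
    kwh_saved = savings_usd_per_year / usd_per_kwh
    return kwh_saved * 1000.0 / HOURS_PER_YEAR

for savings in (20.29, 36.04):  # the per-server figures quoted in the summary
    print(f"${savings}/year -> ~{implied_avg_watts(savings):.0f} W average difference")
```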
This discussion has been archived. No new comments can be posted.

  • Multiple OS-es (Score:4, Interesting)

    by OS24Ever ( 245667 ) * <trekkie@nomorestars.com> on Saturday July 21, 2007 @10:48AM (#19937963) Homepage Journal
    I would have liked to have seen them test it with the 'big three' OSes: Linux (RH and SUSE), VMware, and Windows. It would have been nice to see whether the power management of the operating systems would have come into play, above and beyond just the single OS. Beyond the OS, the applications used run on any of those platforms.
  • by Anonymous Coward
    I run AMD because I own their stock because I run AMD because I...
  • by Anonymous Coward
    Open source software tends to reduce power consumption as well. After switching a number of our systems from Windows Server 2003 to Linux, we saw a fairly significant drop in our electricity costs.

    Our analysis suggests that this was due to the open source software being more efficient than the equivalent Windows-based software. This is backed up by the fact that we saw a significant performance boost after the transition. Database jobs that would take 20 minutes on SQL Server 2005 and Windows 2003 wo
  • by Anonymous Coward
    If you read the PDF, you'll see that the AMD system was tested with a 500W power supply while the Intel one was tested with a 600W one. I wonder how much of the difference can be attributed to that.
    • Re: (Score:3, Informative)

      by segedunum ( 883035 )

      If you read the PDF, you'll see that the AMD system was tested with a 500W power supply while the Intel one was tested with a 600W one. I wonder how much of the difference can be attributed to that.
      None. They would draw exactly the same power whether they used 500W or 600W PSUs. Besides (and I haven't got all the way through the article), they may just be using recommended PSUs in pre-built machines.
      • by Gabrill ( 556503 ) on Saturday July 21, 2007 @11:08AM (#19938121)
        However, differing power supplies do have different conversion efficiencies. So we're really comparing top-to-bottom solutions, and the processor may actually be a small part of the energy savings.
      • by Anonymous Coward on Saturday July 21, 2007 @11:08AM (#19938135)
        Actually, the rated capacity of a supply WILL affect the power measured: the same power output lands on a different part of the efficiency curve, even if component losses are identical. Power supplies tend to be most efficient from 1/3 to 2/3 of their power ratings (see the sketch below).

        For a more scientific study, they should use the same power supply.
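
        To make the AC's point concrete, here is a minimal sketch with made-up load and efficiency numbers (none of these figures come from the study): the same DC load measured through two supplies sitting at different points on their efficiency curves gives different wall readings.

```python
# Illustration only: identical server DC load, two hypothetical PSUs whose
# efficiency at that load differs because of rating/curve position.
def wall_power(dc_load_watts, efficiency):
    """AC power drawn from the wall for a given DC load and conversion efficiency."""
    return dc_load_watts / efficiency

DC_LOAD = 250.0  # hypothetical server draw in watts

cases = [
    ("500 W PSU, ~50% loaded, assumed 80% efficient", 0.80),
    ("600 W PSU, ~42% loaded, assumed 76% efficient", 0.76),
]
for label, eff in cases:
    print(f"{label}: {wall_power(DC_LOAD, eff):.0f} W at the wall")
# A few points of efficiency difference already shifts the measured draw by
# roughly 15 W here, before the CPUs enter the picture.
```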
        • by florescent_beige ( 608235 ) on Saturday July 21, 2007 @11:25AM (#19938253) Journal
          IANAEE but I found this thing [dell.com] (pdf) from DELL that has a "typical" efficiency curve (fig A, on the third page of the pdf, page # 64) that shows efficiency is pretty flat from 35% up to max load. Within maybe 5%.
          • IANAEE but I found this thing (pdf) from DELL that has a "typical" efficiency curve (fig A, on the third page of the pdf, page # 64) that shows efficiency is pretty flat from 35% up to max load. Within maybe 5%.
            Spot on.
            • by tjt225 ( 1132373 )
              That's true, but that's the curve for a single supply. That has nothing to do with comparing two different supplies. In fact, two different supplies can easily vary by 10, 20, even 30% in efficiency.
        • by Anonymous Coward
          There is so much bullshit like this out there. All these alleged experts keep telling me how cheap hardware is, but none of them EVER seem to be able to put together two identical machines for a bake-off. They're not just slightly different, as might be required by something as substantial as differing CPUs; they're wildly different.

          Pseudo-science. They might as well be killing chickens.
      • by kaiwai ( 765866 ) on Saturday July 21, 2007 @12:56PM (#19938817)
        I'd like to know why they compared a Woodcrest Xeon, circa June 2006 to the latest and greatest Opteron of today.
        • because they're trying to be biased without being too obvious...
          -nB
        • by cecil_turtle ( 820519 ) on Saturday July 21, 2007 @07:18PM (#19941717)
          The reason is that the Woodcrest Xeon is the only 3GHz Xeon that Intel made, and for some reason they decided to standardize this test on "3.0 GHz". Since everybody knows that AMD outperforms Intel on a per-GHz basis, it does lead one to wonder why they chose that particular metric, but honestly no matter what metric they chose people would complain.

          For one, AMD and Intel don't release their new chips on the same date, so one side can always complain "that's not our newest stuff" or "yeah, but just wait until our next generation". If you wait for same generation, same CPU frequency chips from both manufacturers before you do a benchmark, you're going to be waiting a while - it'll never happen. And if you pick a "performance class" to set your benchmark on, somebody will complain "yeah but XXXX's chip is .5GHz slower/faster than XXXX's". It's a lose-lose situation for the tester.

          Also above there is a discussion about chipsets / power supplies / etc. Again nearly impossible to standardize on this stuff as well. Obviously there is no motherboard that is identical in every regard except the processor that it accepts. Another thread talks about the memory controller for Intel being off-chip vs. on-chip for AMD - so right there you have to go beyond the CPU and include more platform to make a "fair" comparison. Even if they standardized on a power supply, people can argue that the system that pulls less power doesn't need the larger power supply and could save more power (less loss to inefficiencies) on a smaller unit. So do you run the recommended unit for the server or run the same, possibly wrong power supply for both?

          My overall point is that for somebody to do any kind of test like this, they need to set up some base rules. I don't know why people complain so much - they provided all the criteria they chose and did a comparison based on that. If that doesn't answer a question you had, do it yourself or go to another benchmark. Don't complain that the test is invalid because your chip of choice didn't win. For this benchmark - power consumption for 3.0 GHz servers under "real world" conditions (not idle, not pinned, running various applications from databases to web servers) - AMD won. Get over it.
          • Since everybody knows that AMD outperforms Intel on a per-GHz basis, it does lead one to wonder why they chose that particular metric, but honestly no matter what metric they chose people would complain.

            Is that still true, considering that Woodcrest is based on the Core (not NetBurst) architecture? Core should outperform Opteron per GHz by quite a bit now. Reading the comments on TFA confirms this without having to pull out some Anandtech benchmarks.
        • I'd like to know why they compared a Woodcrest Xeon, circa June 2006 to the latest and greatest Opteron of today.

          Do you really think that's unfair?

          1. The Woodcrest processor *is* the latest and greatest Intel CPU. So they're comparing the *best* Intel to the *best* AMD. How is that not fair?
          2. Both architectures are due for replacement later this year, but samples have not been released to reviewers as of yet.
          3. The Opteron was released circa August 2006, a scant two months after the Woodcrest. The Windsor stepping on which the Opteron is based was released in May 2006, a month *before* Intel. The architecture f

    • by GregPK ( 991973 )
      The difference between 500 and 600 is about 17 percent, but I don't think it would matter much on a modern power supply. Most of them nowadays have energy efficiency built in, so it shouldn't make any difference - other than maybe if they used a cheap 600W unit that was less efficient.
  • GHz != Performance (Score:2, Interesting)

    by Anonymous Coward
    A 3.0GHz Core 2 is more than 15% faster than a 3GHz Opteron for many tasks.

    AMD is doing better at idle speeds (Intel definitely needs to crank Penryn down more when it's not in use), but if this survey compared equivalent-performance processors, the difference would be much smaller.
    • by bl8n8r ( 649187 )
      > AMD is doing better at idle speeds

      The OS has a lot more to do with power consumption than the CPU. If the OS says "give me power", the CPU will oblige. Shitty code can cause more power consumption problems than the CPU can mitigate. Run Windows under VMware sometime and watch the CPU peg at 100% while it's sitting in a loop at the login dialog.
      • by anarxia ( 651289 )
        Actually, it's the applications that make the most difference. In a typical server, most of the CPU time is used for the database (queries etc.) and calculations. Optimizing/reducing your queries and application code might save you a lot more electricity than switching platforms.
  • by robbieduncan ( 87240 ) on Saturday July 21, 2007 @11:23AM (#19938233) Homepage
    Both systems had 3.0GHz CPUs and similar amounts of RAM. But did they offer the same performance? If both servers were being pushed to 100%, would one be able to serve more users than the other? If the servers were never pushed to 100%, then the test is not really a like-with-like comparison. I imagine that one CPU performs better than the other (and I'd expect right now that's the Intel one). Perhaps a 2.66GHz vs 3.0GHz test is closer to the same performance?
    • Re: (Score:3, Insightful)

      If you read page 6 of the test description, under Test Design, they say this:

      This test is not intended to measure the maximum throughput that a server can deliver

      The test simulates credit card transactions coming in at a controlled rate. So this test would let someone get an idea of their operating costs. The fixed capital cost, determined by required throughput and how much hardware is needed to handle it, is a separate issue this test doesn't tell you anything about.

      • The test simulates credit card transactions coming in at a controlled rate. So this test would let someone get an idea of their operating costs.

        No, for just the reason the parent stated - it doesn't tell you how many computers of each type you'd need to handle each particular transaction load.

        The biggest news I see, though, is the massive lead AMD holds in idle power consumption - 44% lower! This is a very important special case (unless you somehow have a steady workload 24/7, which I think would be highly atypical).

        • by LWATCDR ( 28044 )
          "This is a very important special case (unless you somehow have a steady workload 24/7, which I think would be highly atypical)"
          Yes, for most servers, but for, say, rendering farms or HPC clusters ("I will not use the B word") it could be very typical.

          In the low-end server market I wonder how the new 45-watt Athlons do. For a small server you don't need a "server"-class CPU, and those low-power Athlons look like they could make some pretty nice 1U systems.
          • Render farms, HPC, etc. are a tiny percentage of all the "servers" out there.

            I don't think we should throw away the test results because of a few render farms.

              • Actually I was agreeing with you. If you read what I wrote, I said that was correct for most servers. However, with Xen I wonder how the CPU loads will look in the future.
              • I have not seen any clusters that achieve consistently high utilization (though this is often a goal and sometimes inflated by whoever is backing the cluster to prove its worth). I don't have direct experience with render farms, but I'd imagine it's the same; there are "crunch times" when they're saturated for a few days (or up to a few months) on end, but then that project ends and there's less load for a while. Anyways I agree efficiency under load is important, too.
                • by LWATCDR ( 28044 )
                  I would think that render farms would be cranking out frames or would be in sleep mode with little to nothing in between.
                  Of course that is just the logical way they should work which means that it is probably wrong.
        • If your server is spending a lot of time idle, then you should probably look at virtualisation. The savings from consolidating a number of server workloads on a smaller number of machines using something like Xen (which supports migration between cluster nodes for load balancing) would likely be a lot more significant than the $30/year saved by buying CPUs that were more efficient when not doing anything.
          • Re: (Score:3, Insightful)

            by timeOday ( 582209 )

            The savings from consolidating a number of server workloads on a smaller number of machines using something like Xen...

            The problem with that is you often don't have a steady load 24/7. At 3am, you need 1 server; at 8:30am, you need 40. Since virtualization has overhead, the total amount of hardware required to support your max load using virtualization is actually more.

            Agreed, virtualization could be good for pooling services that each consistently take less than one server, though.

    • by RootWind ( 993172 ) on Saturday July 21, 2007 @11:49AM (#19938403)
      Anandtech recently did that kind of power-efficiency vs. performance test, actually (2.6GHz vs. 2.33GHz), with AMD coming out on top: http://www.anandtech.com/IT/showdoc.aspx?i=3039 [anandtech.com]
    • Re: (Score:3, Insightful)

      by locketine ( 1101453 )
      The Opteron is actually faster, noticeably faster in fact. Or at least its slower equivalent was faster, so I can only imagine the 3GHz model is even more powerful: http://www.tomshardware.com/2003/04/22/duel_of_the_titans/page18.html#database_test_mysql_32352__32_bit_suse_enterprise_8 [tomshardware.com]. You're still 100% correct that the test isn't really good in using two processors from different companies at the same clock speed. They should have first figured out a good matchup in performance before testing energy usage.
      • Exactly how much of a performance boost beyond clock speed you get with Opteron over Xeon is going to depend on what you're doing (AMD is very good at floating-point operations, and fairly good at virtualization), and the tests the parent post linked seem fairly optimal for an AMD processor. But then, those are the kinds of things servers do. That said, I doubt an Intel chip would come out on top of an AMD chip for any benchmark if the chips have the same number of cores/clock speed. (Unless you fin
      • Re: (Score:3, Informative)

        by Iam9376 ( 1096787 )
        I just want you to know...

        That page is over 4 years old...

        *sigh*
        • Does it matter when it's the same arch, clock, cache, etc.? The only thing that changed is that AMD got faster (2x core) and both got smaller and more efficient (smaller process). I would consider what you said more insightful if you found us a current article comparing performance.
  • by Joe The Dragon ( 967727 ) on Saturday July 21, 2007 @11:55AM (#19938437)
    Add to that the fact that RAM for AMD servers costs less than FB-DIMMs.

    Also, FB-DIMMs and the Intel chipset need a lot more power than AMD chipsets with DDR2 ECC / DDR1 ECC RAM.
  • by mauriceh ( 3721 ) <mhilarius@gmai l . com> on Saturday July 21, 2007 @12:09PM (#19938533)
    We see similar results when we build systems.
    The Intel CPUs are competitive with the Opterons on power consumption.
    But the whole system uses more power with Intel.

    Why? The northbridge memory controller is a separate chip with Intel, and it is very power-hungry.
    In the AMD chips the memory controller is a part of the CPU.
    In the case of a similar dual XEON compared to a dual Opteron,
    the XEON machine uses about 80W more power.

    What a lot of these studies do not even get into is cooling cost.
    For every watt of power, which ends up as heat, we have to expend at least 1.5 watts on air conditioning.

    As for the comment about the size of the power supplies, that is irrelevant.
    The maximum rated output of a supply has nothing to do with the power consumed.

    Bottom line:
    Assuming an Intel XEON server uses about 80 watts more than an equivalent AMD one,
    which is what we see when we build them:
    80 W x 24 hours/day x 365 days is about 700 kWh; at 9c/kWh that costs about $63/year.
    Add air-conditioning costs for that extra 80 W:
    120 W x 24 hours/day x 365 days is about 1,050 kWh; at 9c/kWh that costs about $95/year.

    Therefore, a machine using an extra 80 W costs roughly an extra $160 per year to run in an air-conditioned room (the arithmetic is sketched below).

    Source of power rates:
    http://www.neo.ne.gov/statshtml/115.htm [ne.gov]
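
    A minimal Python sketch of the arithmetic above (the 80 W delta and the 1.5 watts of cooling per watt of heat are the figures claimed in this comment; 9 c/kWh is the rate from the linked source):

```python
# Sketch of the annual-cost arithmetic above; all inputs are the figures
# quoted in the comment, not independently measured values.
HOURS_PER_YEAR = 24 * 365      # 8,760 hours
RATE_USD_PER_KWH = 0.09        # 9 c/kWh, from the linked rate table

EXTRA_SERVER_WATTS = 80.0                    # claimed Xeon-vs-Opteron system delta
COOLING_WATTS = EXTRA_SERVER_WATTS * 1.5     # claimed A/C overhead per watt of heat

def annual_cost(watts):
    """Dollars per year to supply `watts` continuously at RATE_USD_PER_KWH."""
    return watts * HOURS_PER_YEAR / 1000.0 * RATE_USD_PER_KWH

server = annual_cost(EXTRA_SERVER_WATTS)   # ~$63/year
cooling = annual_cost(COOLING_WATTS)       # ~$95/year
print(f"server: ${server:.0f}/yr  cooling: ${cooling:.0f}/yr  total: ${server + cooling:.0f}/yr")
```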

    • For every watt of power, which ends up as heat, we have to expend at least 1.5 watts on air conditioning.

      What kind of crappy A/C do you have? I would expect more like 0.5 watts for air conditioning.
      • With the A/C systems I've seen, I could have sworn that they use about 33% of the power relative to the heat they pump.
    • Your idea is right, but your math is a little off.

      You should be able to get down below 1.5 kW per ton of A/C (efficient systems can get below 1.0 kW/ton, even including all the pumps and fans).
      That works out to close to 0.4 kW of A/C power used per 1.0 kW of heat cooled. But first add about 0.15 kW of UPS overhead per 1.0 kW of power delivered, so you might see as much as 0.5 kW per 1.0 kW of server power (the conversion is sketched below).

      The maximum rated power supply does not correlate to power consumed, but an over-sized or under-sized power supply wil
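
      For reference, converting the kW-per-ton figures above into kW of A/C input per kW of heat removed (one ton of refrigeration is 12,000 BTU/h, about 3.517 kW; the 1.0-1.5 kW/ton plant figures and the 0.15 kW UPS overhead are the numbers claimed above):

```python
# Convert chiller-plant efficiency (kW of electricity per ton of cooling) into
# kW of A/C input per kW of heat removed. 1 ton of refrigeration = 12,000 BTU/h
# ~= 3.517 kW of heat. Plant and UPS figures are the ones claimed above.
KW_OF_HEAT_PER_TON = 3.517
UPS_OVERHEAD = 0.15  # kW of UPS loss per kW delivered, as claimed above

def cooling_overhead(kw_per_ton):
    """kW of A/C input power needed per kW of heat removed."""
    return kw_per_ton / KW_OF_HEAT_PER_TON

for plant_kw_per_ton in (1.0, 1.5):
    base = cooling_overhead(plant_kw_per_ton)
    print(f"{plant_kw_per_ton} kW/ton -> {base:.2f} kW per kW of heat "
          f"({base + UPS_OVERHEAD:.2f} including UPS losses)")
# 1.5 kW/ton works out to ~0.43 kW per kW of heat, consistent with the
# "close to 0.4" figure above.
```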
    • Re: (Score:2, Informative)

      by darthflo ( 1095225 )
      Most of this sounds pretty logical to me; however, you make one point I don't understand at all...

      For every watt of power, which ends up as heat, we have to expend at least 1.5 watts on air conditioning.

      To my (somewhat limited) knowledge, an air conditioner should move about three to four times as much heat energy as it uses up (Wikipedia says a SEER-13 aircon, which is the minimum level for newly installed air conditioners in the U.S., ought to pump "3.43 units of heat energy [...] per unit of work energy").

      • by Teun ( 17872 )
        If this were true we'd all be heating our homes with such devices!
        • I didn't say it "destroys" three or four times the energy it consumes, I said it "pumps" that amount of energy from one end point to another.
          If the temperature outside is higher than inside, using an air conditioner to heat your home makes perfect sense, but I don't think it's very probable that the desire to heat your home coincides with that condition.
    • There are so many variables in this question that the real and true answer will never be known.

      AMD runs better with no load, whereas Intel runs better at full load. So in this particular instance, do you have a server that's gonna idle 99% of the time? (If so, why are you not using a VMware setup?) I'd expect a nice new server to be cranking out 100% usage for as long as I can keep it there.

      You are correct that for every 1 watt of heat, it takes 1.5 watts (or sometimes even more) to remove the heat.
  • Unless these computers are similar in performance and modernness, it makes no sense to compare them for power consumption.
  • The Woodcrest platform used for the Intel side of this test was a year old, while the AMD platform was their latest and greatest. This kind of test is crap until they test the latest and greatest AMD against the latest and greatest Intel - same power supplies, etc. Of course, to be valid at any given time, this requires that these kinds of tests be run after each release of a new platform by either company. This is a result only an AMD fanboy could love.
  • Surely $20-$30 per year is a joke in a business environment; my office would save more money buying slightly cheaper coffee than by moving away from Xeons in our small datacenter.
    • It's $20-$30 per server per year, so over a 5-year lifespan of a rack of, say, 40 servers, that equates to a $5,000 difference in lifetime cost (quick sketch below), not including the cooling costs (add another 30-50%). Also, if they pull less power, then less electrical infrastructure is needed (think larger data centers), which reduces costs on UPSs, generators, PDUs, etc. In your environment it may not matter much, but it becomes more important with scale. If this particular benchmark is of no use to you, then move on to one wit
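
      The rack-scale sketch mentioned above, using a hypothetical $25/server/year midpoint, 40 servers, a 5-year lifespan, and an assumed 40% cooling overhead (the midpoint and the exact cooling fraction are illustrative assumptions, not study figures):

```python
# Illustrative only: scale the quoted per-server savings up to a rack.
SAVINGS_PER_SERVER_YEAR = 25.0   # assumed midpoint of the quoted $20-$30 range
SERVERS = 40
YEARS = 5
COOLING_OVERHEAD = 0.40          # assumed extra cooling cost fraction (30-50% quoted)

direct = SAVINGS_PER_SERVER_YEAR * SERVERS * YEARS   # $5,000 over the rack's lifetime
with_cooling = direct * (1 + COOLING_OVERHEAD)       # ~$7,000 including cooling
print(f"direct: ${direct:,.0f}  with cooling: ${with_cooling:,.0f} over {YEARS} years")
```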
      • I understand that some server rooms have hundreds of machines. I'm not sure how many servers we have in our datacenter (never been in there, not my department), but I'd guess it's between 10 and 20, which would put the power savings well under $1,000 per year. I don't know how high our annual expenses are, and wouldn't be giving them out if I did, but that has to be a tiny portion of the overall running costs. Also, I would think the only common reason to need more servers is if you need more capacity, in whi
  • I wonder... people get so hooked on the Intel vs. AMD comparison... but what about thinking about the total architecture?

    Like, you start with software. They used Apache, Linux, and MySQL -- what about, say, lighttpd, BSD, and PostgreSQL? Each is reputedly more efficient than its counterpart. And what about comparing to a system with a different architectural decision, like business rules in the DBMS à la Alphora Dataphor (partly doable in PostgreSQL or IBM DB2), or like Lisp?

    Moreover, one should compare to RISC to
    • by mgblst ( 80109 )
      That is like saying who cares about the top speed of cars, why don't you just compare how they perform on different roads? If the AMD is faster than the Intel for LAMP, then it will also be faster for a more efficient implementation.
      • by leandrod ( 17766 )

        If the AMD is faster than the Intel for LAMP, then it will also be faster for a more efficient implementation.

        Not at all. And even if perchance that's the case, what I tried to say is that there are far more gains to be had elsewhere than in AMD vs Intel.
