Xeons, Opterons Compared in Power Efficiency

Bender writes "The Tech Report has put Intel's 'Woodcrest' and quad-core 'Clovertown' Xeons up against AMD's Socket F Opterons in a range of applications, including widely multithreaded tests from academic fields like computational fluid dynamics and proteomics. They've also attempted to quantify power efficiency in terms of energy use over time and energy use per task, with some surprising results." From the article: "On the power efficiency front, we found both Xeons and Opterons to be very good in specific ways. The Opteron 2218 is excellent overall in power efficiency, and I can see why AMD issued its challenge. Yes, we were testing the top speed grade of the Xeon 5100 and 5300 series against the Opteron 2218, but the Opteron ended up drawing much less power at idle than the Xeons ... We've learned that multithreaded execution is another recipe for power-efficient performance, and on that front, the Xeons excel. The eight-core Xeon 5355 system managed to render our multithreaded POV-Ray test scene using the least total energy, even though its peak power consumption was rather high, because it finished the job in about half the time that the four-way systems did. Similarly, the Xeon 5160 used the least energy in completing our multithreaded MyriMatch search, in part because it completed the task so quickly."
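
The arithmetic behind that result is worth spelling out: energy per task is average power multiplied by runtime, so a system with a higher peak draw can still use the least total energy if it finishes sooner. A minimal Python sketch, using made-up figures rather than the review's measurements:

  # Energy per task = average power x runtime. Wattages and runtimes
  # below are illustrative assumptions, not data from the review.
  def energy_wh(avg_power_w, runtime_s):
      return avg_power_w * runtime_s / 3600.0  # watt-hours

  fast_hot = energy_wh(avg_power_w=380.0, runtime_s=300.0)   # half the runtime
  slow_cool = energy_wh(avg_power_w=250.0, runtime_s=600.0)
  print(f"fast/hot: {fast_hot:.1f} Wh, slow/cool: {slow_cool:.1f} Wh")
  # fast/hot: 31.7 Wh, slow/cool: 41.7 Wh -- the faster box wins per task
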
This discussion has been archived. No new comments can be posted.

  • by Salvance ( 1014001 ) * on Friday December 15, 2006 @09:59AM (#17255084) Homepage Journal
    AMD needs to deliver some real quad-core chips (or 8-core chips) that will beat Intel's performance. If they don't soon, AMD will quickly get kicked back to being the 2nd-rate Intel cloner everyone knew them as before their groundbreaking AMD64s and dual-core chips briefly took the performance lead from Intel. I'm keeping my fingers crossed that AMD will deliver; I've always liked (and bought) their chips as long as the performance is similar to Intel's.
    • by msobkow ( 48369 ) on Friday December 15, 2006 @11:15AM (#17256500) Homepage Journal

      I know of and have worked with too many organizations that figure it's just a matter of slapping all the computers in an air-conditioned room. Every watt of waste heat adds to the A/C bill.

      Old-fashioned water-cooled mainframes and big iron (for its time) often recirculated the waste heat into the heating systems of the surrounding buildings. We've known all along how to be more energy efficient, if companies and management would only place the emphasis on the environment in their budgets.

      • This doesn't work everywhere. Down here in Miami we have to run A/C the entire year, because it rarely gets cold enough outside for heating to even be needed, much less added to.

        I'm surprised there aren't more data centers in places with really cold climates. Must be nice to use waste heat to heat the building, or just put a radiator with a fan blowing through it outside instead of having to use air conditioning.

        -Z
        • by Umbrel ( 1040414 )
          Let's outsource our server farms to Alaska and Siberia, although IT techs will not be happy with that... I guess
          • The only ones affected are the tape monkeys, and their jobs were replaced by robotics years ago.

            Twenty years ago satellite ground stations were dropped off up north with nothing more than a big tank of diesel, a power generator, and a fault-resilient or fault-tolerant server, left alone for months at a time.

            With modern high speed networks and VPN access, it's often hard to tell the difference between being at work and remote access, other than the environment. Don't forget how much sysadmin work has b

    • Re: (Score:3, Insightful)

      by aminorex ( 141494 )
      Evidently you didn't read the review. Intel has serious problems for large scale computing. It does not scale up. It's fine as a thread engine for processing small transactions, but for the kind of problems that people like Google and NCAR are doing -- and it is people like that who drive some very large CPU buys -- the external MMU bites their ass every time. Is the current generation of Opterons a gamer buy? No. AMD probably won't dominate the gamer market until a high-end GPU is integrated on die a
      • Which benchmark are you talking about? It looks to me like the 8 core Xeon slaughtered everything. It even won for least power to complete a job, while doing it in half the time. The Opteron only gained the upper hand in the "power at idle" test.
        • by afidel ( 530433 )
          Power at idle is VERY important for many datacenters because unless you are running VMware ESX with a very tight farm, the majority of your servers probably spend a fair percentage of their day idle or nearly so. Personally I am using dual-core Opterons for many of my n-tier boxes, with Xeons being used for the Citrix farm; each is the platform best suited for the type of work it is doing and the typical power profile. I think IT that simply plops one solution in for all needs is not taking best advantage of
    • Re: (Score:1, Interesting)

      by Anonymous Coward
      What I don't get is why people are constantly comparing Intel's quad-core with AMD's revision F hardware. Is this a marketing ploy from Intel to try and make it seem like AMD isn't in the game anymore? Revision F hardware from AMD means that the hardware is *capable* of taking the future quad-core chips, but AMD has not released them yet. In all fairness, what has Intel done for the public without AMD (or Sun for that matter) coming out with their own multi-core competitive product that saved us so much
  • AMD's path (Score:4, Insightful)

    by homey of my owney ( 975234 ) on Friday December 15, 2006 @10:01AM (#17255120)
    AMD needs to do what they have been doing - thinking independently and coming up with original solutions.
  • by pla ( 258480 ) on Friday December 15, 2006 @10:05AM (#17255200) Journal
    the Opteron ended up drawing much less power at idle than the Xeons
    ...
    the Xeon 5160 used the least energy in completing our multithreaded MyriMatch search, in part because it completed the task so quickly.

    So what does this mean for people shopping for servers?

    If your servers constantly tick along at nearly 100% CPU use, you might do better going with the Xeon system. If your machines basically sit idle most of the time, with an occasional spike for a few seconds when they actually do something, the AMD would save you more on electricity.

    Of course, this raises a third possibility - Would running a number of virtual servers on one large Xeon machine waste more energy than it saves, or give a net gain?
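
    A rough sketch of where the break-even lies, with hypothetical wattages (not the article's measurements), in Python:

      # Two regimes from the comment above; all figures are assumed.
      XEON_LOAD_W, XEON_IDLE_W = 365.0, 230.0
      OPT_LOAD_W, OPT_IDLE_W = 330.0, 120.0

      # Regime 1: pegged at 100%; the Xeon is assumed to finish the
      # same job in half the time, so it uses less energy per job.
      print(XEON_LOAD_W * 0.5, OPT_LOAD_W * 1.0)   # Wh/job: 182.5 vs 330.0

      # Regime 2: the box idles 23 hours a day; idle draw dominates.
      print(XEON_IDLE_W * 23, OPT_IDLE_W * 23)     # Wh/day idle: 5290 vs 2760
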
    • Re: (Score:3, Insightful)

      by archen ( 447353 )
      Although some people will pipe in with their number-crunching server stories, are there any normal usage servers that really come in at 100% CPU usage? Of the 20-odd servers I run, few ever run at that rate for more than 30 minutes a day or so - and usually that's during backups. Other system components often keep you from reaching that target, and most 24-7 servers I've seen do most of their work during a certain period then spend the rest of their time twiddling their thumbs.
      • Best Practices (Score:5, Insightful)

        by killmenow ( 184444 ) on Friday December 15, 2006 @10:47AM (#17255956)
        It has always been my understanding that best practices dictate that a server running at a constant 100% CPU utilization is underpowered and needs upgrading. Normal, everyday, steady CPU utilization should hover no higher than around 50% (closer to 75%, if you like living on the edge), leaving enough CPU to handle peak loads. Very few functions require a system that maintains a constant CPU utilization and never peaks over it.
        • > Normal, every day, steady CPU utilization should hover no higher than around 50% (closer to 75%, if you like living on the edge) leaving enough CPU to handle peak loads

          A server that's providing services to regular users, sure. But if your server is doing computational work, like many of the scientific computing examples given in the article, it should be spending every minute of every day at 100% utilization.
          • by Anthony ( 4077 ) *
            And here is an example. Sunday is a quiet day as job submission drops off, but this cluster is normally near 100% CPU capacity [APAC] [apac.edu.au]. Each bar is a 32-CPU Itanium2 node.
      • Re: (Score:2, Insightful)

        by Anonymous Coward
        Any server running at that rate for more than a few short peaks a day is under capacity. Ideally, you'd like to keep them at 100% but you don't control scheduling of server demand. It's too ad-hoc. You trend then build enough excess capacity to handle projected peak loads. Of course, this depends on the level of service you want to deliver. Most server "customers" expect the server to be always as responsive as it can be, regardless of load. (expectation of IT is always 100% all the time). So server
      • are there any normal usage servers that really come in at 100% CPU usage?

        The anti-spam filters at my place of employment (two machines, each with a single 2.6GHz Xeon). That's why we are replacing them with two machines, each using two dual-core Xeons, for 4x the CPU power.

      • Re: (Score:3, Interesting)

        by ptbarnett ( 159784 )
        Although some people will pipe in with their number-crunching server stories, are there any normal usage servers that really come in at 100% CPU usage?

        For capacity planning purposes, most of my clients target 40-50% CPU utilization on servers. If it starts creeping above 60% on a consistent basis (or is forecasted to do so soon), they begin the acquisition process to either upgrade or add servers.

        Queuing theory (M/M/1) shows that while the average response time doesn't increase that much, the standard
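
        The comment above is cut off, but the standard M/M/1 result it points at is this: with service time S and utilization rho, mean response time is T = S / (1 - rho), which stays tame at 40-50% utilization and blows up near saturation. A quick Python illustration with an assumed 10 ms service time:

          # M/M/1 mean response time: T = S / (1 - rho).
          def mm1_response(service_s, rho):
              assert 0 <= rho < 1, "only stable below 100% utilization"
              return service_s / (1.0 - rho)

          for rho in (0.4, 0.5, 0.6, 0.8, 0.9, 0.95):
              print(f"rho={rho:.2f}: {mm1_response(0.010, rho) * 1000:.1f} ms")
          # 16.7 ms at 40% but 200 ms at 95% -- hence the 40-50% target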

    • 4-core Opteron x2 vs. 8-core Xeon x1

      From the article, the idle power consumption of the 8-core Xeon is ~230W; the 4-core Opteron is ~120W.

      Which means, at idle, the single 8-way Xeon is better than two 4-way Opterons. Given that the efficiency of the 8-way under load is better than the 4-way's, I would think that stacking on the 8-way is better.

      Of course, having two 4-way independent systems is better redundancy. On the other hand, the 8-way can be utilized to solve SMP multithread problems (without the expense of h
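
      The arithmetic behind that, using the approximate idle figures quoted above (Python):

        # ~230 W idle for one 8-core Xeon box vs ~120 W idle per
        # 4-core Opteron box, per the figures quoted from the article.
        xeon_idle = 230.0
        two_opterons_idle = 2 * 120.0
        print(two_opterons_idle - xeon_idle)   # 10.0 W in the Xeon's favor at idle
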
      • Re: (Score:3, Interesting)

        If I'm doing general-purpose computing, I would trade the 10W difference in power consumption for the redundancy and flexibility of the 4-way Opteron. With two 4-way boxes you can use one as the failover for the other, or load balance between them, keeping low CPU use on each. General-purpose computing really doesn't need the power of an 8-way SMP solution even with 1000's of users. You can virtualize either the 4-way or the 8-way with VMware or Xen or Solaris Containers, so that (IMHO) is a wash.

        It's really back
    • Power = Heat (Score:3, Insightful)

      by mungtor ( 306258 )
      "If your machines basically sit idle most of the time with an occasional spike for a few seconds when it actually does something, the AMD would save you more on electricity."

      More importantly, I think, is that power consumption translates to heat output. If you have mostly idle servers with occasional spikes, you can either cool them for less or put more in the same space depending on what you need. And don't forget that you actually save money twice with the AMD since you have to pay to power and cool the
    • by rbanffy ( 584143 ) on Friday December 15, 2006 @11:47AM (#17257130) Homepage Journal
      Well... If you have a couple servers that idle most of the time, I suggest that, instead of AMD, you buy VMware.

      Or go Xen, OpenVZ or whatever does the trick.

      But, most important, get rid of the idling boxes.
    • It seems unfortunate that The Tech Report is the one that has to step up and measure energy efficiency [techtarget.com]. OK, so AMD is more efficient at idle and Xeon is more efficient at 100%. Who ever really runs at either of those levels? What about 10%, 20%, 30%, etc. Those are real-life utilization rates. SPEC [spec.org] is looking into doing something. So is the EPA [energystar.gov]. Maybe they can get together and figure it out.
    • by Amouth ( 879122 )
      I am not sure what it would be for this, but I do know that I am currently running 5 virtual servers on a dual P4 2.4 Xeon box - load is around 25-40% average, with up to 80% when it gets hammered.. but no service issues - the only bottleneck is disk I/O - we are currently adding/upgrading RAID controllers on it to give the virtual servers better disk access, but overall it works well - I was able to pull 3 servers off the rack and virtualize them..

      the dual Xeon consumes ~280 watts constant and each of the th
  • This just in! (Score:5, Insightful)

    by gentimjs ( 930934 ) on Friday December 15, 2006 @10:10AM (#17255286) Journal
    Apples compared to Oranges: Our findings on the page after the banner ads!
    .. nothing to see here, move along...
    • Sure there are banner ads, but do you go to hardware sites much? The Tech Report is one of the last honest places on the web, IMO.
  • "The eight-core Xeon 5355 system managed to render our multithreaded POV-Ray test scene using the least total energy, even though its peak power consumption was rather high, because it finished the job in about half the time that the four-way systems did. Similarly, the Xeon 5160 used the least energy in completing our multithreaded MyriMatch search, in part because it completed the task so quickly."

    Presumably, the article tests power consumption because businesses are concerned with how much running each of these systems will cost them. If the Xeons managed to win in power consumption because they completed the task in half the time, that has other cost-saving benefits even beyond power consumption.

    • "Presumably, the article tests power consumption because businesses are concerned with how much running each of these systems will cost them. If the Xeons managed to win in power consumption because they completed the task in half the time, that has other cost-saving benefits even beyond power consumption. "

      The benchmarks chosen have very little to do with the real business world.
      They mostly demonstrate the effect of Intel's larger CPU caches on performance.

      Choose a series of applications(p

      • Sounds like you're talking about server use while they tested workstation use. It looks like they called it "server/workstation" class, whatever that means.
  • oracle datacenter (Score:4, Informative)

    by chap_hyd ( 717718 ) on Friday December 15, 2006 @10:39AM (#17255834) Homepage
    A friend who works for Oracle, in their datacenter, told me that they are swapping the Dell Intel Xeon servers for Sun AMD Opteron servers. The main reason behind this server swap is the power efficiency of the new Sun servers. So that means big corps already have their eye on AMD CPUs :)
    • it doesn't make any sense to swap out a working and functional server running Intel chips with one running AMD purely for power saving, because electricity is a relatively small part of the lifetime cost of a server, until

      • the server no longer has adequate spare capacity and would be upgraded
      • you're beginning to overload your power or cooling grid, and it's cheaper to regrade your servers (which can be deployed elsewhere) than change the power grid or fix your air-con

      it's a similar problem for car users - for an average vehicle doing 25 mpg, about half the energy of its lifetime of making, using, and recycling/scrapping is consumed in manufacture.. environmentally it's better to fix up an old car so it runs properly with minimal emissions than to generate a lot of scrap metal & plastics and incur the environmental costs of mining/refining metals, drilling for oil for plastics, manufacture, etc. of a new car.

      • by Tmack ( 593755 )

        it doesn't make any sense to swap out a working and functional server running Intel chips with one running AMD purely for power saving, because electricity is a relatively small part of the lifetime cost of a server, until

        • the server no longer has adequate spare capacity and would be upgraded
        • you're beginning to overload your power or cooling grid, and it's cheaper to regrade your servers (which can be deployed elsewhere) than change the power grid or fix your air-con

        it's a similar problem for car users - for an average vehicle doing 25 mpg, about half the energy of its lifetime of making, using, and recycling/scrapping is consumed in manufacture.. environmentally it's better to fix up an old car so it runs properly with minimal emissions than to generate a lot of scrap metal & plastics and incur the environmental costs of mining/refining metals, drilling for oil for plastics, manufacture, etc. of a new car.

        Considering that Xeons have been around for years now, for all the parent stated, these could be old 1GHz or slower Xeon-based servers. Rather than upgrading to the latest, they decided to switch platforms, which would meet your criteria.

        However, I disagree with your statement that the cost to power a server is a small fraction of its cost. A basic server, costing about $4k (nothing fancy), running 24x7x365.25 at about 300 watts, will use 18408.6 kWh in one year. At $0.07/kWh, that's $1288.60 per year just

        • Re: (Score:2, Informative)

          by aczisny ( 871332 )

          A basic server, costing about $4k (nothing fancy), running 24x7x365.25 at about 300 watts, will use 18408.6 kWh in one year. At $0.07/kWh, that's $1288.60 per year just to power the box.

          It took me forever to figure out what was wrong with this. I knew your numbers didn't add up, but I couldn't put my finger on it until I realized you multiplied out exactly what people say when they mean constant uptime. The problem, of course, is that it should be 300(watts)*24(hours/day)x365(days/year) or 24(hours/day)x7(da

          • by afidel ( 530433 )
            The problem is that just considering power is stupid. I figure power used x3 when designing, because between inefficiencies, heat load from UPSes, A/C, etc., that's about what you end up at. So 365 days * 24 hrs * 300 W / 1,000 (Whrs/kWhr) = 2628 kWhrs * 3 = 7884 * $.12/kWhr (realistic for most of the country when you include delivery charges) = $946.08/year. Then add in the amortized cost per kW of your UPS and generator, and it almost doubles that figure, so say $2K/year. Over the useful lifetime of the typical serve
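
            The corrected math from this sub-thread in one place: "24x7x365" describes uptime, not a multiplication, so annual hours are just 24 * 365. The 3x overhead factor and $0.12/kWh rate are the assumptions stated above (Python):

              POWER_W = 300.0
              HOURS_PER_YEAR = 24 * 365            # not 24 * 7 * 365
              OVERHEAD = 3.0                       # UPS losses, cooling, etc. (assumed)
              RATE_PER_KWH = 0.12                  # assumed delivered cost

              kwh = POWER_W * HOURS_PER_YEAR / 1000.0    # 2628 kWh at the wall
              print(kwh * OVERHEAD * RATE_PER_KWH)       # $946.08/year, matching above
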
  • It's very useful to have some normalized way of measuring watts/performance, as they try to do in this article. But at least they could have used a more general and useful benchmark, like those offered by www.spec.org [spec.org].

  • I'd like to see these efficiency curves plotted against 100%, the maximum theoretical efficiency of the transfer function through the semiconductors. Anyone know how to calculate the minimum W:b (watts per bit) necessary for these real-world tasks? Or is that just way too complex a stat to compute without melting the datacenter at which it's computed?
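
    One physical lower bound relevant to that question is Landauer's limit: erasing one bit costs at least E = kT ln 2, which real CPUs exceed by many orders of magnitude, so it is a curiosity rather than a practical target. A quick Python check, assuming a roomish 300 K:

      import math
      K_BOLTZMANN = 1.380649e-23            # J/K
      T = 300.0                             # kelvin (assumed)
      print(K_BOLTZMANN * T * math.log(2))  # ~2.87e-21 J per bit erased
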
  • With the Intel chipset there are only two x8 PCIe lanes coming out of the north bridge, and SAS/SATA-2, PCI-X, networking, as well as the PCIe slots on the board, have to share them.

    So with a lot of network use and disk use you can choke up that bus.
    • With the Intel chipset there are only two x8 PCIe lanes coming out of the north bridge, and SAS/SATA-2, PCI-X, networking, as well as the PCIe slots on the board, have to share them.

      So with a lot of network use and disk use you can choke up that bus.

      How did you come to the conclusion that AMD has better chipsets? I can get an nforce/crossfire/via motherboard for either AMD or Intel with pretty much identical specs. Intel has the advantage of making their own chipset, so Intel is the one that has the chipse

  • Here is one test that needs to be done: take a dual AMD Opteron workstation with 2 Quadro cards in SLI, and also put in a RAID 5 SAS or SATA setup; also do some networking at the same time. There are dual and quad AMD Opteron boards with nForce Professional chipsets. Some have 4 PCIe slots (x16 x8 x8 x16) with each half coming from an HTT link.

    Also take a dual Intel workstation and try to do the same thing; the best that you can find is x8 x8.

    Using hacked SLI drivers is OK.

    I think that the AMD system will do better
  • http://techreport.com/reviews/2006q4/xeon-vs-opteron/index.x?pg=7 [techreport.com]

    Very interesting. The benchmark uses a database and is the only one I've seen that seems to test the limits of the CPU cache with a database.. and lo and behold, at 8 threads, performance degrades for the 5355 and it's actually slower than the Opteron 2218.

    Or it could just be that this benchmark isn't coded well - it might use a global lock frequently, so as you add more threads there's more contention. In any case, someone with more time than me should dig into this benchmark, which might show a weakness in the Core 2 architecture.
    • Or it could just be that this benchmark isn't coded well - it might use a global lock frequently, so as you add more threads there's more contention. In any case, someone with more time than me should dig into this benchmark, which might show a weakness in the Core 2 architecture.

      Take a look at http://tweakers.net/reviews/661/7 [tweakers.net] if you want to see how the performance of the Clovertown Core 2 chips scales with a scalable database and many clients.
  • by Splork ( 13498 ) on Friday December 15, 2006 @01:51PM (#17259094) Homepage
    See http://electricrain.com/greg/opteron-powersave.txt [electricrain.com].

    All AMD K8 (Opteron and Athlon 64) CPUs have the ability to run the clock at an extra slow speed when in HLT (idle) mode, saving a bunch more power. Many (most?) BIOSes are not smart enough to enable this. A simple setpci command will turn it on under Linux.

    To find out if it's on:

      setpci -d 1022:1103 87.b

    If that returns 00, it's off. To turn on clock-divide-in-HLT with divide-by-512 mode, use:

      setpci -d 1022:1103 87.b=61

    (see the above URL for links to the AMD documentation on the PMM7 register; other values can work).
    • Unless your chip is very recent, the timestamp counter speeds will vary.

      Unless your Linux kernel is very recent, this condition will not be detected automatically. Linux will assume that the discrepancy means you are losing clock ticks.

      You can try kernel parameters like clocksource=pmtmr to fix it. Good luck, you may need it...

      The BIOS vendors disable this power-saving feature because there are Windows games that, like Linux, assume the timestamp counters don't vary in speed.

      • by Splork ( 13498 )
        Good thing to note... I haven't seen any problems on our systems so far, but I am keeping my eyes open.

        I'll check our kernel sources later to see if they include the code from the referenced LKML post or already default to not preferring the TSC for timekeeping.
  • FTFA: the amount of energy used by each system to render the scene, expressed in Watt-seconds


    How can you subtract a unit of time (seconds) from a unit of power (watt) ?

    Assuming multiplication was intended instead of subtraction, why use watt-seconds instead of joules? Still, kudos for using SI units and not something like BOE.
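
    For what it's worth, a watt-second is exactly one joule (1 W = 1 J/s), so the article's figures convert directly; the sample value below is made up:

      def ws_to_kwh(ws):
          return ws / 3_600_000.0   # 1 Ws == 1 J; 1 kWh == 3.6 MJ
      print(ws_to_kwh(18_000))      # e.g. 18,000 Ws (18 kJ) -> 0.005 kWh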

  • Up here in The Great White North, there is a second important feature (mostly for desktop and deskside systems) -- and that's efficiency as a space heater. When these boxes are running at full bore, how many BTUs do they generate, and how many BTUs per watt do they generate? How many Xeons or K7s would it take to heat the average house?
    More importantly, how does that compare to a dedicated space-heater?
    • Re: (Score:2, Insightful)

      by Cassini2 ( 956052 )
      Computers are almost 100% efficient as space heaters. Almost every watt consumed gets converted to heat.

      The energy in the light radiated from the monitor or from the LEDs in the computer case is very small compared to the energy consumed by the computer. Computers do no useful physical work. The result is that almost all energy consumed by a computer is converted to heat.
