Intel Hardware

Server Benchmarking Lone Wolf Bites Intel Again 90

Posted by ScuttleMonkey
from the everyone-loves-a-homecourt-ruling dept.
Ian Lamont writes "Neal Nelson, the engineer who conducts independent server benchmarking, has nipped Intel again by reporting that AMD's Opteron chips 'delivered better power efficiency' than Xeon processors. Intel has discounted the findings, claiming that Nelson's methodology 'ignores performance,' but the company may not be able to ignore Nelson for much longer: the Standard Performance Evaluation Corp., a nonprofit company that develops computing benchmarks, is expected to publish a new test suite for comparing server efficiency that Nelson believes will be similar to his own benchmarks that measure server power usage directly from the wall plug."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Great (Score:2, Insightful)

    Now if they can get their laptop chips to be more efficient than Intel's, I'll be happy again.
  • FBDIMM (Score:3, Informative)

    by RightSaidFred99 (874576) on Friday September 07, 2007 @01:56PM (#20512273)
    Yeah yeah, we all know. FBDIMM is a power sucker. FBDIMM is going the way of the dodo before long, though.

    AMD also typically has lower idle clock multipliers so when they're not doing anything, they draw less power. If you have a room full of computers sitting there doing nothing, you'll certainly use less power in that case.

    • Not so fast: Intel's next Xeon chipset will use FB-DIMMs in the high-end version with PCI-E 2.0 (though not all of the PCI-E lanes will be 2.0), and DDR2 ECC in the lower-end one with fewer PCI-E lanes and no PCI-E 2.0.
    • Re: (Score:3, Insightful)

      by visualight (468005)

      If you have a room full of computers sitting there doing nothing, you'll certainly use less power in that case.

      That is what most servers spend most of their time doing: nothing. There are peaks and valleys, sure, but there are *a lot* of idle cycles.
      • That's what virtualisation is for. Even if you have all your peaks in the same place you can save a lot of power by running something like Xen and migrating the virtual machines to a smaller number of nodes during the troughs and turning off a few machines, then spreading them out again on the peaks.
        • Re:FBDIMM (Score:4, Insightful)

          by RingDev (879105) on Friday September 07, 2007 @03:28PM (#20513601) Homepage Journal
          And the percent of Netadmins who have the time, budget, knowledge, and inclination to do so is right about .001%

          I agree that Virtualization is a great solution, but the vast majority of IT shops around the world don't have the knowledge or budget to pull it off these days. Give it another 5-10 years and it'll be the new standard, but right now it just doesn't have the market or education penetration. For the cost of investing in a Xen system and training, most IT shops will be financially better off just paying the extra electric bill.

          -Rick
        • I heard a rumour that servers could actually run more than one application at a time. Imagine that, running (say) 10 applications without needing to run 10 operating systems!

          Yeah I know - applications suck and operating systems suck - meaning that virtualisation frequently IS the best option. I just wish that applications and operating systems sucked less.

      • by jgc7 (910200)
        Which is why measuring computations per watt at peak load is not a great indicator of energy efficiency. We typically buy enough hardware to handle a load that rarely happens, so measuring actual energy efficiency is tricky. Since we build out to peak load, absolute performance means fewer total processors. On the other hand, it is also important how much the power usage decreases under light load. For instance, a processor that uses 30% of peak power at a 10% load may be more efficient than a processor th
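The tradeoff that comment describes can be put in numbers. A sketch with invented power curves and an invented duty cycle (none of these figures come from the article):

```python
# Hypothetical comparison of two CPUs' daily energy use under a load
# profile that is mostly light, as the parent comment describes.
# All wattages and hours below are made up for illustration.

# (load fraction, hours per day) -- assumed duty cycle, mostly near-idle
duty_cycle = [(0.10, 20.0), (1.00, 4.0)]

# watts drawn at 10% and 100% load (invented figures)
cpu_a = {0.10: 60, 1.00: 200}   # 30% of peak power at 10% load
cpu_b = {0.10: 90, 1.00: 180}   # lower peak, but worse idle scaling

def daily_wh(power_at_load):
    """Watt-hours per day for a CPU given its power-vs-load table."""
    return sum(power_at_load[load] * hours for load, hours in duty_cycle)

print(daily_wh(cpu_a))  # 60 W * 20 h + 200 W * 4 h = 2000 Wh
print(daily_wh(cpu_b))  # 90 W * 20 h + 180 W * 4 h = 2520 Wh
```

Under this assumed duty cycle the chip with the *higher* peak draw uses less total energy, which is exactly why peak-load performance per watt alone can mislead.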
    • Re:FBDIMM (Score:4, Informative)

      by InvalidError (771317) on Friday September 07, 2007 @02:24PM (#20512645)
      The original Advanced-Memory-Buffer-based FBDIMMs might be going away next year but Intel has not given up on off-chip memory bridges since they announced plans for AMB2. Instead of having the AMB2 chip on-DIMM, it will be either on multi-DIMM AMB2 risers or on the motherboard.

      BTW, AMD also announced plans for off-chip AMB2-like memory bridges with multiple multi-gigabit serial lanes... they called it G3MX: G3 (socket) Memory eXtender.

      So, while FBDIMMs may be going away soon, the idea of using external bridges to dump the RAM further away from the CPUs/chipset using serial interfaces is gaining traction - at least in the server space.
    • by Laxator2 (973549)
      Most corporations have large numbers of desktops which are left on 24/7 but they sit idle from 5PM to 9AM.
      In such a case idle power becomes an issue. That is, of course, unless the desktops are busy doing their share of work for various botnets.
    • If you want to even out the difference between AMD and Intel in terms of server CPU utilization, just post a link to said servers here on Slashdot.
  • by Anonymous Coward
    So who cares about those ancient CPUs.
  • by Joe The Dragon (967727) on Friday September 07, 2007 @02:00PM (#20512313)
    The FB-DIMMs are sucking up a lot of power and giving off a lot of heat. That is bad for Intel, as their chipsets use a lot more power as well, and that looks bad next to an AMD system with cheaper DDR2 ECC RAM.

    Intel's new 4P systems with 4 FSBs, L3 cache in the chipset, and FB-DIMMs may use even more.

    AMD systems can have more than one chipset link and more PCI-E lanes as well.
  • I didn't RTFA, but my question is, are power savings a real necessity? I'd imagine that the answer depends on the size of the server farm. If you only have a few servers, the additional savings from the lower power consumption may be peanuts to the raw processing power of another processor bought at a similar price. Then, when you take the obsolescence of the processors into consideration, the power savings may be even more negligible.

    As the size of the farm scales, however, I'd hazard to guess that the p

    • by khasim (1285) <brandioch.conner@gmail.com> on Friday September 07, 2007 @02:07PM (#20512407)
      The other side of that is that lowering the power consumption means lowering the heat generated which means lowering the cooling requirements.

      And cooling requires electricity also. So by reducing the power usage of one component, you can save money on your cooling costs, also. It's twice the savings.
      • It is far less than twice the savings unless you have a woefully inefficient air conditioner.

        Since ACs usually have COPs better than 9, the AC would use less than 25W to offset the heat generation of a 200W system. So the savings from not having the extra heat to pump out in the first place far outweighs (>8:1) the cooling costs themselves.
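The arithmetic above can be sketched directly (the COP figure is the parent comment's claim, not a measured value):

```python
# Sketch of the parent's arithmetic: an air conditioner with a given
# COP (coefficient of performance = heat moved per watt of electricity
# consumed) needs roughly server_watts / COP watts to pump the
# server's heat back out. The COP of 9 is the parent's figure.

def cooling_watts(server_watts, cop):
    """Electrical power the AC draws to remove server_watts of heat."""
    return server_watts / cop

print(cooling_watts(200, 9))  # roughly 22 W of cooling per 200 W of load
```

So cutting a server's draw saves the reduction itself plus only about 1/COP of it again in cooling, well short of "twice the savings" and consistent with the parent's >8:1 ratio.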

        As far as datacenters and server/render/etc. farms are concerned though, lower-power and faster units only means they can pack more units per rack and more racks per r
        • by afidel (530433)
          I think the 2:1 power usage is a total-system measure, that is, inverter inefficiency, heat from the inverter and batteries, power lost in transit, power factor, etc. I think that a typical datacenter uses about 2x the power draw of the servers it houses.
          • If you include the heat from rectifiers, inverters and batteries, you are talking online-UPS. In that case, the power-factor should be a non-issue if your online-UPS' rectifier front-end is properly balanced and PFC'd - most current models perform at 99+% out-of-the-box. During normal operation, most of the power goes right through from rectifiers to inverters with nearly no loss in the batteries other than floating charge and ~5% AC ripple from the tri-phase full-wave rectifiers. On top of that, rectifiers
    • by moderatorrater (1095745) on Friday September 07, 2007 @02:19PM (#20512581)
      Let's not forget the environmental factor of using less electricity. More electricity means more carbon, and even if it doesn't matter to your company, it matters to other companies that your company will deal with.
    • Re:Does it matter? (Score:5, Informative)

      by Azarael (896715) on Friday September 07, 2007 @02:27PM (#20512697) Homepage
      It's fairly common for 3rd-party data centers to charge based on power consumption. If you want to rent space to have a few machines hosted, you can save a bunch of money by building servers that aren't power hogs. Any data center worth hosting at pays very close attention to how much power they have available, so even in the event of power loss, they have an alternate circuit to draw from and/or sufficient emergency generator power.
    • by AJWM (19027)
      Yes, power savings are necessary. Lower power used by the logic also means lower power needed by the cooling fans, which overall translates to less heat put out by the box, which means less cooling needed for your data center.

      This is especially critical in older data centers. I know of one where they can't put more than a couple of blade enclosures in a rack because the DC wasn't designed to put that much power and cooling into one spot. Physical space is no longer the limitation.

      Since there's always a d
    • by VENONA (902751)
      It's difficult to answer your question, because you haven't RTFA, which talks about primarily CPU v primarily disk workloads, power consumption at idle, etc.

      Overall, data center power consumption is a big deal. It's one of the main reasons that some corporations are after virtualization. It's one of the main reasons that Google is locating a datacenter in the Columbia Gorge.
      http://www.iht.com/articles/2006/06/13/business/search.php [iht.com]

      While you were 'hazarding to guess', and 'imagining' and thinking various th
  • ...a new test suite for comparing server efficiency that Nelson believes will be similar to his own benchmarks that measure server power usage directly from the wall plug.
    OK, I'm intrigued. What kind of fudge do the current efficiency tests consist of? Measuring generated heat with a thermometer?
    • by XaXXon (202882)
      And I don't understand them not saying "well, we'll need 25% more AMD-based servers, so let's factor those extra machines into the power equation..."

      I think Intel has a reasonable beef with the test. I'm not an Intel fanboy... except that I think they have better stuff right now in the 2-socket (4 and possibly 8 core) arena.
      • by Vancorps (746090) on Friday September 07, 2007 @02:21PM (#20512609)

        Except that there is a very small performance difference in the 32-bit world and a non-existent performance difference in the 64-bit world. The Opteron actually outperforms quite commonly in the 64-bit world, much like the Athlons do against the Core 2 Duo on the desktop side. Intel has an edge on 32-bit optimization right now, which is why the Core 2 Duo looks so good.

        Add 4 and 8 sockets and you've got to be joking, considering Intel's shared bus. The cores are choked for memory throughput at that point, while the Opterons just perform better and better as they scale. In a 2-socket system they compete very well. In a 4-socket system the Opteron is by far the superior choice, both in power consumption and performance, especially with 64-bit database, email, and web servers.

        • The Opteron's floating-point IPC, at least, is lower than Woodcrest/Clovertown's in 64-bit. Note the top500 and the increase of Intel's presence as of the Core 2 generation. Barcelona is supposed to either meet or beat Intel's floating-point IPC, but that's yet to be proven publicly. There is at least one significant 64-bit operation that Core 2 creams AMD with. I don't know much about other types of instructions in general though.

          I agree though, AMD's architecture scales *much* better with socket count and memo
          • by Vancorps (746090)

            I was speaking specifically about the Opteron vs. the Xeon, not the Athlon vs. the Core 2. The database benches I had done clearly put the Opteron in the winning spot, but Intel has had time to improve. I'm not saying either is a bad choice at this point. There is clearly healthy competition now. My experience with 64-bit Xeon performance was with the initial EM64T offerings. They were not impressive by any means. I was not aware there had been significant improvement in this area.

            Of course, that is why we research every year

            • Re: (Score:2, Insightful)

              by ShapeGSX (865697)
              The latest Xeons are all Core 2 derived parts. Your comparison is horribly dated.
      • by bockelboy (824282) on Friday September 07, 2007 @02:28PM (#20512703)
        These tests *did* factor performance into this (well, that's what the tester says. Intel is contesting this claim. You decide whom you believe). In fact, those tests draw the same conclusions as folks I know who recently bought Opteron servers.

        The Intel chips have great performance per watt *as a chip*. Perhaps even better than AMD does; I've never measured a chip's power usage.

        The Intel servers, on the other hand, have worse performance per watt *as a fully loaded server*. Unless you're running the chip without a server, you generally should care about the power draw from the outlet - like these tests did.

        The Intel servers seem to have the edge in performance per watt when the server is going nearly unused. However, in my area, usually the CPU is pegged 24/7 (unlike, say, a webserver).

        It's good to see the chip wars are still alive and kicking. When the competition is healthy, consumers benefit instead of stockholders.
    • by click2005 (921437) on Friday September 07, 2007 @02:17PM (#20512543)
      OK, I'm intrigued. What kind of fudge do the current efficiency tests consist of? Measuring generated heat with a thermometer?

      They used to, but now they time how long it takes to toast a marshmallow. It's useful because you can use the melted mallows as thermal paste. It's not as efficient as Arctic Silver 5, but I hear it's better than the standard ceramic stuff.
  • If intel chips are constantly exposed as being inferior to AMD's, why can't intel improve its engineering, with all that money flowing to them?

    What do AMD have in their design methodologies that Intel don't?
    • Re: (Score:3, Insightful)

      by geekoid (135745)
      Your premise is flawed. They are not constantly exposed as being inferior to AMD. People supporting their biases constantly expose AMD or Intel.

      In fact, both are so close that only very specifically myopic tests make one the 'leader'. There is no noticeable performance difference between the two that matters.

      • I wouldn't necessarily say myopic. It varies based on the time frame.

        The original K7/Athlon (actually, even the P3) was noticeably better than the first-generation P4, without CPU beer-goggles.
        Later P4s managed to overtake the 32-bit Athlons noticeably, until the Athlon 64 came out, which took the lead again.
        It didn't hold that lead for long, because the Core 2s seemed to be major ass-kickers.

        During these timeframes, there were extended periods where one was on-par with the other, or they were too close to call. H
        • One of the funniest eras was the year that followed the P4's introduction. At the time, the 1.6GHz P4 was competing against the 1.3GHz P3T and ~1.3GHz Athlons... and it got ridiculed. It was not until the 2GHz Northwoods that the P4 gained a clear lead over the older P3T. After that, the P4's clocks ramped up explosively, leaving the Athlon and P3 in the dust performance-wise for a year or so until the P4 crashed into the 3.6-3.8GHz brick wall, with Intel unexpectedly stalled for over a year thanks to Pres
      • by Gr8Apes (679165) on Friday September 07, 2007 @02:45PM (#20512919)
        It depends upon what's important to you. Is power consumption important? AMD wins. Is multiple CPU cores in single servers important? Anything over 4 until recently, and now 8, is an AMD win. Do you need the most processing power possible for a single process in a 2P or less unit? Intel wins that one. Need high density stacked CPUs with loads of RAM? AMD wins that one (That's a power/heat/space issue). Need to process web calls? Sun wins that one hands down on a /$, /kW, and /J measure.

        There are definite differences in performance between the various CPUs. A mere 5% difference in power draw across a day times 1000s of CPUs is significant. Same with a 5% thermal dissipation difference, as that turns into increased cooling requirements.

        These things all matter in the server world.
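The scale argument above can be made concrete with a back-of-the-envelope sketch; every figure below (fleet size, per-server draw, electricity price) is invented for illustration:

```python
# Back-of-the-envelope for the parent's claim that a mere 5% power
# difference is significant across thousands of machines.
# All inputs are assumed, not taken from the article.

servers = 5000            # assumed fleet size
watts_each = 300          # assumed average draw per server
savings_fraction = 0.05   # the 5% difference from the parent comment
usd_per_kwh = 0.10        # assumed electricity price

saved_kw = servers * watts_each * savings_fraction / 1000.0
annual_usd = saved_kw * 24 * 365 * usd_per_kwh

print(round(saved_kw))       # roughly 75 kW of continuous savings
print(round(annual_usd))     # about $65,700 per year, before cooling
```

And per the cooling discussion earlier in the thread, the real saving is larger still, since every watt not drawn is also a watt the air conditioning never has to pump back out.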
    • On-board memory controller, better CPU-to-CPU links, a lot of chipsets to choose from (with Intel Xeons you can only use Intel chipsets), more CPU-to-chipset links, chipsets with more PCI-E lanes, and other stuff.
    • by conteXXt (249905) on Friday September 07, 2007 @02:20PM (#20512587)
      "What do AMD have in their design methodologies that Intel don't?"

      Digital Equipment Corp's Alpha engineers.

      Sorry to beat a dead horse to a pulp but those that know still know.

      • DEC was bought by Compaq way back when (1997?). Compaq was bought by HP more recently. AMD was not involved with either of those takeovers.

        So how did AMD get the DEC Alpha engineers? As far as I know, the DEC Alpha guys are still within HP. Did I miss something?

        • Re: (Score:1, Informative)

          by Anonymous Coward
          Intel employs a large number of the former DEC Alpha team, many of whom helped develop CSI and the next-generation Itanium architecture. AMD was able to snag some of the former Alpha engineers during the HP takeover, and then later when Intel was given that department from HP. The mere fact that people change jobs doesn't mean AMD is filled with super-star Alpha people, many of whom wouldn't like AMD's culture of minimal R&D/innovation.
          • Re: (Score:3, Insightful)

            by VENONA (902751)
            "AMD's culture of minimal R&D/innovation."

            What? Who brought 64-bit instructions to x86, when Intel and HP were trying to drive everyone to high-dollar (and at the time miserably performing) Itanium for 64-bit? Who brought out an architecture that would let you plug FPGAs, etc., into CPU slots?

            IMHO, AMD is lagging in semiconductor manufacturing processes. Their geometries are larger, etc. I doubt that they get the yields that Intel does, and that counts against them in price wars. But developing new fab
        • >DEC was bought by Compaq way back when (1997?). Compaq was bought by HP more recently. AMD was not involved with either of those takeovers.
          >So how did AMD get the DEC Alpha engineers? As far as I know, the DEC Alpha guys are still within HP. Did I miss something?

          Alpha team was spun off to Intel.
          http://news.com.com/Intel+gets+more+key+Alpha+alums/2100-1006_3-1023146.html [com.com]
          http://www.theinquirer.net/?article=20024 [theinquirer.net]

          How many of the people who worked on the Alpha EV7 are still working at Intel would be a valid quest
    • by tonywong (96839)
      AMD does not have market leadership, so they can make radical gambles on better efficiencies to try to win marketshare.

      From what I can recall, Intel fellow Bob Colwell mentioned that the CPU designers could have integrated Ethernet onboard, but they faced a fight from the Ethernet chip group, which has its own marketshare, budget, and design group.

      I suppose that as long as chipsets (Northbridges and Southbridges) make money for Intel, memory controllers will stay on the Northbridge and use more power than hav
    • by tabby (592506)
      "with all that money flowing to them?"

      You just answered your own question
    • This benchmark is a system benchmark, meaning that it takes into account power dissipation of much more than the processor alone. It is fair to say that Intel's current server platforms use more power than AMD's server platforms, but this is actually due to their memory technology, and not to the processors themselves.

      To be more specific, the Xeon processor in this review is the same processor core as the Merom/Conroe Core 2 Duo core. If you benchmark Conroe on a platform using the same memory technology

  • I am sure AMD's HyperTransport is the king who rules! Unless Intel comes up with something similar...
  • YAY! (Score:3, Interesting)

    by Colin Smith (2679) on Friday September 07, 2007 @02:29PM (#20512721)
    At last we'll be able to determine server power efficiency.

    London, the world's financial centre, has real problems with datacentre power supplies. Any new ones pretty much have to be built outside the M25. There's pressure on the ones inside to use less power.
  • Does anyone else see any bias with a website called "worlds-fastest", seemingly dedicated to pro-AMD benchmarks, that he has done out of the goodness of his heart? And all the custom software he has written doesn't have any CPU-specific optimizations? None of the open source software has any optimizations slanted toward one side or another? Coming out with an AMD Opteron vs. Intel NetBurst test result, when the newer Intel stuff had been out for 6 months? It all looks like a bunch of PR to generate bu
    • You are correct that NNA hopes to generate some business from these tests, but I feel that your other speculations are wrong. The "worlds-fastest" web site is not a put-up job by AMD. It is true that several recent sets of test results show AMD as superior, but that could change, possibly as soon as Xeon servers are available with DDR2 RAM. The test methodology is valid, and when Intel starts building servers that use less power, the test will report that.

      I don't believe that MySQL or SuSE linux has a bias tow
  • Excuse me, NetBurst? You are testing against NetBurst? That's like comparing Core2Duo against Duron, imho. Nice try, astroturfers.
  • No relation, but Neal is a sharp cookie nonetheless. I've worked with him before.
  • Imagine a Lone Beowulf cluster of these!
  • I would like to discuss how the benchmark could be improved. In its current form:
    1) It is a client/server test with web clients talking to an Apache2 web server,
    2) The server runs SuSE Linux Enterprise Server,
    3) The server's database tables are built on MySQL,
    4) The transaction is a gasoline credit card purchase,
    5) The test measures power consumed at 7 different transaction activity levels: idle, 5 different constant transaction rates, and the maximum that the server will deliver,
    6) At each activity le
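The measurement scheme described above (wall-plug power sampled at several constant transaction rates, from idle up to the maximum) reduces to a transactions-per-watt-hour table. A sketch with invented readings; the real harness drives Apache/MySQL transactions while a meter reads the plug:

```python
# Sketch of the benchmark shape the parent describes: average power
# measured at each constant transaction rate, summarized as
# transactions per watt-hour. Every reading below is invented.

# (transactions per minute, average wall-plug watts) per activity level
readings = [
    (0, 180),      # idle
    (100, 210),
    (200, 235),
    (400, 270),
    (800, 310),
    (1600, 360),
    (2100, 395),   # maximum sustained rate
]

for tpm, watts in readings:
    tx_per_wh = (tpm * 60) / watts  # transactions completed per watt-hour
    print(f"{tpm:5d} tpm @ {watts:3d} W -> {tx_per_wh:7.1f} tx/Wh")
```

Note that at idle the efficiency metric is zero by definition, which is why the idle-power reading has to be reported separately rather than folded into a single throughput-per-watt number.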
  • The forthcoming energy efficiency benchmark from SPEC is generally described at http://www.spec.org/specpower/ [spec.org]
