Intel Hardware

Server Benchmarking Lone Wolf Bites Intel Again

Ian Lamont writes "Neal Nelson, the engineer who conducts independent server benchmarking, has nipped Intel again by reporting that AMD's Opteron chips 'delivered better power efficiency' than Xeon processors. Intel has discounted the findings, claiming that Nelson's methodology 'ignores performance,' but the company may not be able to ignore Nelson for much longer: the Standard Performance Evaluation Corp., a nonprofit company that develops computing benchmarks, is expected to publish a new test suite for comparing server efficiency that Nelson believes will be similar to his own benchmarks that measure server power usage directly from the wall plug."
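
The benchmarking approach described above, measuring server power directly at the wall plug, reduces to a simple efficiency figure: integrate sampled wall power over the run and divide the completed work by the energy used. The Python sketch below is a rough illustration only, not Nelson's actual methodology; the sampling interval, readings, and operation count are all hypothetical.

    # Illustrative sketch (not Nelson's actual methodology): reduce wall-plug power
    # samples taken during a benchmark run to a single efficiency figure.

    def efficiency_ops_per_joule(power_samples_w, sample_interval_s, completed_ops):
        """power_samples_w: wall-plug readings in watts, one every sample_interval_s
        seconds while the benchmark ran; completed_ops: total operations finished."""
        energy_joules = sum(power_samples_w) * sample_interval_s  # rectangle rule
        return completed_ops / energy_joules

    # Hypothetical run: a server averaging ~300 W for one hour, 90 million transactions.
    samples = [300.0] * 3600                                   # one reading per second
    print(efficiency_ops_per_joule(samples, 1.0, 90_000_000))  # ~83.3 ops per joule
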

Comments Filter:
  • FBDIMM (Score:3, Informative)

    by RightSaidFred99 ( 874576 ) on Friday September 07, 2007 @02:56PM (#20512273)
    Yeah yeah, we all know. FBDIMM is a power sucker. FBDIMM is going the way of the dodo before long, though.

    AMD chips also typically drop to lower idle clock multipliers, so when they're not doing anything they draw less power. If you have a room full of computers sitting there doing nothing, you'll certainly use less power in that case.

  • by khasim ( 1285 ) <brandioch.conner@gmail.com> on Friday September 07, 2007 @03:07PM (#20512407)
    The other side of that is that lowering the power consumption means lowering the heat generated which means lowering the cooling requirements.

    And cooling requires electricity too, so reducing the power draw of one component also cuts your cooling costs. It's twice the savings.
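
The compounding effect described in the comment above can be made concrete with a cooling-overhead multiplier. The sketch below is illustrative only; the 0.5 watts-of-cooling-per-watt-of-IT-load ratio is an assumption, not a measured value.

    # Sketch of the "twice the savings" point: every watt saved at the server also
    # saves some fraction of a watt in cooling. The 0.5 ratio is an assumption.

    COOLING_W_PER_IT_W = 0.5   # assume 0.5 W of cooling per watt of IT load

    def total_facility_watts(it_watts, cooling_ratio=COOLING_W_PER_IT_W):
        return it_watts * (1 + cooling_ratio)

    before = total_facility_watts(400)   # server drawing 400 W -> 600 W at the facility
    after = total_facility_watts(350)    # shave 50 W at the server -> 525 W at the facility
    print(before - after)                # 75.0 W saved overall, not just 50 W
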
  • Re:FBDIMM (Score:4, Informative)

    by InvalidError ( 771317 ) on Friday September 07, 2007 @03:24PM (#20512645)
    The original Advanced-Memory-Buffer-based FBDIMMs might be going away next year, but Intel has not given up on off-chip memory bridges: it has already announced plans for AMB2. Instead of sitting on the DIMM, the AMB2 chip will go either on multi-DIMM AMB2 risers or on the motherboard.

    BTW, AMD also announced plans for off-chip AMB2-like memory bridges with multiple multi-gigabit serial lanes... they called it G3MX: G3 (socket) Memory eXtender.

    So, while FBDIMMs may be going away soon, the idea of using external bridges to dump the RAM further away from the CPUs/chipset using serial interfaces is gaining traction - at least in the server space.
  • Re:Does it matter? (Score:5, Informative)

    by Azarael ( 896715 ) on Friday September 07, 2007 @03:27PM (#20512697) Homepage
    It's fairly common for third-party data centers to charge based on power consumption. If you want to rent space to host a few machines, you can save a bunch of money by building servers that aren't power hogs. Any data center worth hosting at pays very close attention to how much power it has available, so that even in the event of a power loss it has an alternate circuit to draw from and/or sufficient emergency generator capacity.
  • by Vancorps ( 746090 ) on Friday September 07, 2007 @03:35PM (#20512789)

    All of my Opteron-based servers are rock solid across multiple chipset vendors. The days when that was a problem for AMD are long gone. There's a reason I have to reboot my Xeon servers once a week while my Opteron servers stay up until my maintenance window: both are configured identically, but the Xeons just aren't as stable. I haven't been able to play with the newer Xeons, only the crappy P4-based ones. I've got some new servers coming, though, so I'll get an update on the stability issue.

    Throughout the Opteron's history, though, stability has never been an issue in my experience. The Athlon had the kind of problems you're describing, and there were plenty of horrible Intel and AMD desktop chipsets during that time; it was more a chipset-maker problem than a CPU-maker problem. In both cases Intel and AMD had their own chipsets that did work, although Intel motherboards declined sharply in quality around that time too. I remember having a bunch of Xeons that would spontaneously reboot, and if you were lucky everything came back up okay. Firmware updates gradually improved the issue; I believe it took three of them to get stability to what you'd expect from a 24/7 server. It wasn't a problem with the CPU, though.

  • by Gr8Apes ( 679165 ) on Friday September 07, 2007 @03:45PM (#20512919)
    It depends on what's important to you. Is power consumption important? AMD wins. Are multiple CPU cores in a single server important? Anything over four until recently, and now eight, is an AMD win. Do you need the most processing power possible for a single process in a 2P-or-smaller box? Intel wins that one. Need high-density stacked CPUs with loads of RAM? AMD wins that one (that's a power/heat/space issue). Need to process web calls? Sun wins that one hands down on per-dollar, per-kilowatt, and per-joule measures.

    There are definite differences in performance between the various CPUs. A mere 5% difference in power draw across a day times 1000s of CPUs is significant. Same with a 5% thermal dissipation difference, as that turns into increased cooling requirements.

    These things all matter in the server world.
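
As a back-of-the-envelope illustration of the 5% point made above, the sketch below totals the effect over a fleet and a year. The fleet size, per-server wattage, electricity rate, and cooling overhead are assumptions, not figures from the article or the comment.

    # Rough annual cost of a 5% power-draw difference across a large fleet.
    # Every input below is an illustrative assumption.

    servers = 5000            # fleet size
    watts_per_server = 300.0  # average draw per server
    delta = 0.05              # 5% difference between two platforms
    rate_per_kwh = 0.10       # USD per kWh
    cooling_overhead = 0.5    # extra cooling watts per IT watt

    saved_watts = servers * watts_per_server * delta * (1 + cooling_overhead)
    saved_kwh_per_year = saved_watts * 24 * 365 / 1000
    print(f"{saved_kwh_per_year:,.0f} kWh/year, about ${saved_kwh_per_year * rate_per_kwh:,.0f}/year")
    # -> 985,500 kWh/year, about $98,550/year
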
  • by Anonymous Coward on Friday September 07, 2007 @03:50PM (#20513033)
    is http://en.wikipedia.org/wiki/Intel_QuickPath_Interconnect [wikipedia.org], formerly known as CSI (Common System Interface).

  • Re:Please explain (Score:1, Informative)

    by Anonymous Coward on Friday September 07, 2007 @06:51PM (#20515501)
    Intel employs a large number of the former DEC Alpha team, many of whom helped develop CSI and the next-generation Itanium architecture. AMD was able to snag some of the former Alpha engineers during the HP takeover, and then again later when Intel acquired that group from HP. The mere fact that people change jobs doesn't mean AMD is filled with superstar Alpha people, many of whom wouldn't like AMD's culture of minimal R&D and innovation.
  • by RecessionCone ( 1062552 ) on Friday September 07, 2007 @07:35PM (#20515905)
    This benchmark is a system benchmark, meaning that it takes into account power dissipation of much more than the processor alone. It is fair to say that Intel's current server platforms use more power than AMD's server platforms, but this is actually due to their memory technology, and not to the processors themselves.

    To be more specific, the Xeon processor in this review uses the same core as the Merom/Conroe Core 2 Duo. If you benchmark Conroe on a platform using the same memory technology (DDR2) as AMD, you'll find that Intel's power consumption is significantly less than AMD's. But Intel decided to use a different technology (FBDIMM) for its server platforms in order to increase maximum memory capacity, whereas the Opteron uses a simpler technology that is severely limited in memory capacity per channel, since the outdated parallel multidrop DDR2 bus can't run at full speed when heavily loaded.

    FBDIMM is like PCI-Express or Hypertransport for a memory interface, meaning that it's serial and point to point, instead of parallel and multidrop. This allows Intel to add many more loads to the memory channel without slowing the channel down, because it is Fully Buffered (the FB part of FBDIMM), which increases memory capacity per channel. However, FBDIMM also turns out to be very power hungry, and Intel is now being forced (by benchmarks such as this one) to release server platforms without FBDIMM in order to lower power consumption for people who don't need large memory capacities. (for some confirmation of this, look here: http://theinquirer.net/?article=42183 [theinquirer.net])

    In any case, the results of this benchmark aren't about "chips", they're about platforms. Intel's current chips are pretty good, but their server platforms need some work. That's why Intel's coming out with a whole new platform next year (here's some reading material for you: http://realworldtech.com/page.cfm?ArticleID=RWT082807020032 [realworldtech.com]).

    So a quick answer to your question: Intel's chips ARE better than AMD's, but their platforms aren't. Here's the question you should have asked: why are Intel's platforms always behind AMD's? The answer is basically that Intel has a lot more internal politics, so it is slow to change things that have impact across the company, like platforms. Intel has a lot of internal competition: lots of separate groups working on various competing processors, so the processors themselves are usually pretty good (Darwin at work). But the teams making the processors don't have the freedom to change the platform, since that's outside their scope and requires lots of corporate maneuvering. So Intel's platforms are much slower to change than AMD's.

    Summing up: don't confuse a system benchmark for a processor benchmark! TFA isn't about processors at all, it's about systems.
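
A crude way to see the system-versus-processor distinction drawn above is to decompose wall power into components. The sketch below is illustrative only; the per-CPU, per-DIMM, and rest-of-platform wattages are assumptions, not measurements from TFA.

    # Illustrative decomposition of wall-plug power for two hypothetical 2-socket,
    # 8-DIMM servers. Every wattage below is an assumption, not a measurement.

    def system_watts(cpu_w, dimm_count, watts_per_dimm, rest_of_platform_w):
        return cpu_w + dimm_count * watts_per_dimm + rest_of_platform_w

    # Hypothetical Xeon box: efficient cores, but an AMB on every FBDIMM.
    xeon = system_watts(cpu_w=2 * 80, dimm_count=8, watts_per_dimm=10.0,
                        rest_of_platform_w=120)
    # Hypothetical Opteron box: slightly hungrier cores, plain registered DDR2.
    opteron = system_watts(cpu_w=2 * 95, dimm_count=8, watts_per_dimm=4.0,
                           rest_of_platform_w=120)

    print(xeon, opteron)   # 360.0 342.0 -- the memory subsystem flips the comparison
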
