EM64T Xeon vs. Athlon 64 under Linux (AMD64)

legrimpeur writes "Anandtech has a nice performance comparison under Linux (AMD64) between the recently introduced 3.6GHz EM64T Xeon processor and an Athlon 64 3500+. It is disappointing to see how the Athlon gets trounced in FPU-intensive benchmarks. No memory-bound benchmarks (where the Athlon is supposed to have an edge) are presented, though." Update: 08/09 23:34 GMT by T: As the Inquirer reports, many Anandtech readers take issue with the comparison.
  • Comment removed (Score:5, Informative)

    by account_deleted ( 4530225 ) on Monday August 09, 2004 @10:40AM (#9920000)
    Comment removed based on user account deletion
  • Re:Math Co-Processor (Score:2, Informative)

    by fitten ( 521191 ) on Monday August 09, 2004 @10:43AM (#9920019)
    I don't understand your post at all. The Athlon line has historically been the stronger of the two in FPU performance, compared with Intel's P3 and P4 lines. In fact, the Athlon 64 and the Nocona both support the x87, MMX, SSE, and SSE2 instruction sets.
  • 2 things... (Score:4, Informative)

    by Pandion ( 179894 ) on Monday August 09, 2004 @10:44AM (#9920032)
    For one, the Xeon has more L2 cache, and for another, most of the math benchmarks looked to be integer-based. The Xeon gets beaten in POV-Ray, which is FPU-intensive if I'm not much mistaken... I think it is unfair to say the FPU on the Xeon is better...
    It would be nice to see more non-synthetic benchmarks.
  • Re:Opteron (Score:3, Informative)

    by fitten ( 521191 ) on Monday August 09, 2004 @10:47AM (#9920053)
    RTFA usually helps.... directly from the Conclusions section:

    Although the Athlon 64 3500+ and the Xeon 3.6GHz EM64T processors were not necessarily designed to compete against each other, we found that comparing the two CPUs was more appropriate than anticipated, particularly in the light of Intel's newest move to bring EM64T to the Pentium 4 line. Once we obtain a sample of the Pentium 4 3.6F, we expect our benchmarks to produce very similar results to the 3.6 Xeon tested for this review.

    Without a doubt, the 3.6GHz Xeon trounces over the Athlon 64 in math-intensive benchmarks. Intel came ahead in every severe benchmark that we could throw at it, particularly during John the Ripper. Even though John uses several different optimizations to generate hashes, in every case, the Athlon chip found itself at least 40% behind. Much of this is likely attributed to the additional math tweaking in the Prescott family core.

    That's not to say that the Xeon CPU necessarily deserves excessive praise just yet. At time of publication, our Xeon processor retails for $850 and the Athlon 3500+ retails for about $500 less. Also, keep in mind that the AMD processor is clocked 1400MHz slower than the 3.6GHz Xeon. With only a few exceptions, the 3.6GHz Xeon outperformed our Athlon 64 3500+, whether or not the cost and thermal issues between these two processors are justifiable.

    We will benchmark some SMP 3.6GHz Xeons against a pair of Opterons in the near future, so check back regularly for new benchmarks!

  • by TheRealMindChild ( 743925 ) on Monday August 09, 2004 @10:53AM (#9920108) Homepage Journal
    Your processor doesn't "crash". If you are having issues, chances are it is because you are too incompetent to be that close to the hardware. Try an OEM-built AMD machine. A completely different experience.
  • Re:Math Co-Processor (Score:5, Informative)

    by ergean ( 582285 ) on Monday August 09, 2004 @10:56AM (#9920121) Journal
    They have a cross-license agreement, so each one gets what the other has in production within 6 to 9 months. That is why we see SSE in AMD processors, and AMD64 instructions in Intel processors.

    http://contracts.corporate.findlaw.com/agreements/amd/intel.license.2001.01.01.html

    So I don't see any problem for AMD in licensing the co-processor.
  • Re:Why Not Opteron? (Score:2, Informative)

    by fitten ( 521191 ) on Monday August 09, 2004 @10:57AM (#9920135)
    Why not RTFA... especially the Conclusions section...
  • Riiight (Score:3, Informative)

    by Zebra_X ( 13249 ) on Monday August 09, 2004 @10:59AM (#9920152)
    And the 3500+ and the Xeon are in the same processor class how?

    The 3500+ is a mainstream desktop processor. For a more accurate comparison, the FX series or the Opteron line should have been used.
  • by vincecate ( 741268 ) on Monday August 09, 2004 @11:01AM (#9920164) Journal
    A good review would have pitted the 3.6GHz Nocona against an Opteron 150, tested both in 32-bit and 64-bit mode, and used some application benchmarks.
    Different compilers would also be interesting. It seems that the Pathscale compiler is the best for AMD64 [pathscale.com]; it is much more optimized than gcc for 64-bit code.
  • Re:Opteron (Score:5, Informative)

    by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday August 09, 2004 @11:11AM (#9920235) Homepage Journal
    Your comment does not in any way contradict the parent. It would still be more interesting as an Opteron vs. Xeon benchmark. Personally, what I would like to see are benchmarks that compare processors at similar prices rather than similar market positioning. In any case, the fact that they plan to do a Xeon vs. Opteron benchmark later does not change the fact that such a benchmark would be more interesting than this one.
  • Re:Opteron (Score:1, Informative)

    by Anonymous Coward on Monday August 09, 2004 @11:12AM (#9920249)
    Well, that and more on-die memory and some other enhancements.
  • by LightStruk ( 228264 ) on Monday August 09, 2004 @11:19AM (#9920293)
    The human eye pretty much stops distinguishing framerate past 30 fps
    Just as an example, try visually comparing GoldenEye 007 on the N64 to James Bond 007: NightFire on the GameCube. GoldenEye runs at 20-30 fps, while NightFire runs at a solid 60 fps. Then tell me that your eyes don't see the difference in smoothness and responsiveness.
    Our eyes don't have a problem with 24 fps film because movies have lots of motion blur! Video games have no motion blur at all, unless you're playing a PS2, in which case everything is blurry.
  • by Anonymous Coward on Monday August 09, 2004 @11:21AM (#9920317)
    I agree with the parent, and umm, NO to the person who says gaming benchmarks are what really stress a system. My dad, a computational chemist, does a lot of computing on molecules, and these operations can take a MONTH on a good-sized cluster of a few dozen computers. Some things he does cannot be split across multiple systems, so he runs those on a dual Opteron system; he has always found that Opterons kick tail in those computations. It would be nice to see a similar benchmark so that people in the scientific community know how best to get research done.
  • by ernstp ( 641161 ) <ernstp.gmail@com> on Monday August 09, 2004 @12:30PM (#9921019)
    Right. No kidding this benchmark sucks. :-)
    Seriously, it makes a big difference which version of GCC they use (a quick self-test is sketched at the end of this comment).

    I saw a great boost in benchmarks when I switched from gcc 3.3 to 3.4 on my AMD64.

    -O3 -pipe -march=k8 -fomit-frame-pointer -ftracer
    That's the way to go!

    "We compiled the program using ./configure and make with no optimizations."
  • Flawed benchmarks (Score:5, Informative)

    by Rufus211 ( 221883 ) <rufus-slashdotNO@SPAMhackish.org> on Monday August 09, 2004 @12:59PM (#9921274) Homepage
    I thought these benchmarks looked a little strange when you're using John the Ripper as one of your major comparisons. There's a nice thread [aceshardware.com] going on over at Ace's bashing the benchmarks, including a post [aceshardware.com] from the author of the chess benchmark stating:
    this test they did was flawed in all respects.
  • Re:FPU intensive? (Score:5, Informative)

    by kent.dickey ( 685796 ) on Monday August 09, 2004 @01:28PM (#9921562)
    The "primegen" program listed where the Xeon beats the Athlon slightly does not do any floating point.

    I looked at the code and played with it a little (I got it from http://cr.yp.to/primegen.html [cr.yp.to] and it seems the benchmark is mostly limited by the implementation of putchar().

    My system was an dual AMD Opteron 1.8GHz running Win XP pro with Cygwin. I modified the benchmark to not use putchar() but instead just write the characters to a 1MB buffer, and it got 16 times faster! To be specific, "primes 1 100000000 > file" went from 24.2 seconds to 1.497. Note that it's generating 51MB of output for primes under 100 million. I didn't bother running it for the 100 billion max, but would expect it to be around 50GB.

    This is a very poor benchmark since it's just measuring your stdc implementation of putchar and your system's ability to sink data to /dev/null, not anything useful.
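    For the curious, a minimal sketch of the kind of change I made (illustrative only; primegen's real code is different, and the buffer size and helper names here are mine): stage the text in a 1MB buffer and flush it in large writes instead of calling putchar() once per character.

        /* buffered_out.c -- illustrative sketch, not primegen's actual source. */
        #include <stdio.h>
        #include <string.h>

        #define BUFSIZE (1 << 20)              /* 1MB staging buffer */
        static char buf[BUFSIZE];
        static size_t used;

        static void flush_buf(void)
        {
            fwrite(buf, 1, used, stdout);      /* one big write instead of many putchar()s */
            used = 0;
        }

        static void emit(const char *s, size_t len)
        {
            if (used + len > BUFSIZE)          /* assumes len is small, e.g. one number */
                flush_buf();
            memcpy(buf + used, s, len);
            used += len;
        }

        int main(void)
        {
            char line[32];
            /* Stand-in for the real prime loop: format each value and buffer it. */
            for (int n = 2; n <= 100; n++) {
                int len = snprintf(line, sizeof line, "%d\n", n);
                emit(line, (size_t)len);
            }
            flush_buf();                       /* single flush at the end */
            return 0;
        }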
  • by Anonymous Coward on Monday August 09, 2004 @02:36PM (#9922232)
    I run heavy-duty computational plasma physics stuff, which includes, among other things, adaptive numerical integration, FFTs, numerical differentiation to estimate Jacobians, solving small linear systems, zero finding in multiple dimensions, etc. These programs do NOT do particle-in-a-box simulations, which are a whole different animal.

    My experience has been the following. My personal Athlon XP 2800+ system, configured as follows:
    Asus cheapo motherboard
    1GB ram
    SCSI ultra wide hard drive (ancient!!)
    Windows 2K
    Lahey Fujitsu 5.7 Fortran compiler
    No tweaks here, everything out of the box.

    Six thousand dollar workstation bought by my research group (last year though):
    Xeon 2.8GHz
    Configured by Dell, whatever mobos they use
    1GB ram
    Ultra 160 SCSI drives
    Red Hat Linux
    Fujitsu 6.0 Fortran compiler

    Results: To be honest, I am not certain whether that old LF95 6.0 compiler is 64-bit native; 6.2 is, though. My self-built system runs MY computational programs about 25-30% faster.
  • by Anonymous Coward on Monday August 09, 2004 @03:38PM (#9922880)
    The act of using a 3500+ instead of an Opteron 150 is a minor issue.

    The major issue is that Anandtech does not know how to compile software.

    The Makefile used for TSCP on the A64 is broken, and does not apply -O2 optimization at the right stage.

    My A64 3200+ scores 290K n/s when -O2 is properly applied.

    On "primegen" most of the time is spent in putchar(), instead of in computation, and they should comment out the putchar() loop instead of directing output to /dev/null, and retest both machines.

    Also, they should have edited conf-cc and turned on -O2 optimization.

    ubench is known to be buggy, and the AMD64 results have been questioned on other sites as being implausibly bad.

    They copied their data wrong on the first database test. The A64 3500+ comes in at 215 in 64-bit mode, beating the 3.6GHz Nocona.

    Their encoding benchmarks are equally suspicious.

    And gzip was a 32-bit executable.

    In short, this "review" is HORRENDOUS, and filled with errors. A64 3500+ vs. Opteron 150 is a distraction from the real problem:

    These guys don't know how to compile, optimize, and benchmark software.
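
    A hypothetical sketch of that putchar() point (this is a plain Eratosthenes sieve, not primegen's algorithm, and the bound is simply the one the review used): count the primes without emitting any of them, so the measured time is computation rather than stdio.

        /* compute_only.c -- hypothetical sketch, not primegen's code.
         * Counts primes below N with no per-prime output, so the timing
         * reflects computation rather than putchar()/stdio overhead.
         */
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        int main(void)
        {
            const long n = 100000000L;          /* same range as the review's test */
            char *composite = calloc((size_t)n, 1);
            if (!composite)
                return 1;

            clock_t t0 = clock();
            long count = 0;
            for (long i = 2; i < n; i++) {
                if (composite[i])
                    continue;
                count++;                        /* found a prime; no output here */
                for (long j = i + i; j < n; j += i)
                    composite[j] = 1;
            }
            clock_t t1 = clock();

            fprintf(stderr, "%ld primes below %ld in %.2f s\n",
                    count, n, (double)(t1 - t0) / CLOCKS_PER_SEC);
            free(composite);
            return 0;
        }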

  • by Short Circuit ( 52384 ) * <mikemol@gmail.com> on Monday August 09, 2004 @03:50PM (#9923002) Homepage Journal
    That's one example. Both systems have their advantages.

    However, I'm thinking more along the lines of AMD's general strategy: More work-per-clock. The K7 was intended as a direct competitor to the Pentium III, and the design was great. (It even holds up pretty well against the P4.)

    The Pentium Pro had two integer units and a single floating-point unit. The K7 had three of each.
