
Xeon vs. Opteron Performance Benchmarks (362 comments)

QuickSand writes "Anand got his hands on some of Intel and AMD's enterprise processors, including the 4MB L3 Xeons, and put them to the test. Results were a little varied: 4-way Opteron systems seemed to fare the best, although dual-Xeon configurations almost always beat dual Opterons. The exact benchmarks are here."
This discussion has been archived. No new comments can be posted.

  • by alen ( 225700 ) on Wednesday March 03, 2004 @01:19PM (#8453133)
    I've read Anand's results before and he is always benchmarking games with corporate products such as Xeon, Opteron and Windows64. I never understood that. Except for maybe a tiny percentage of people, who uses this stuff for games? And he actually had the nerve to slam Windows64 because it didn't run games as fast as normal WindowsXP.
  • by chef_raekwon ( 411401 ) on Wednesday March 03, 2004 @01:20PM (#8453151) Homepage
    dual xeons have owned the market for a long time...it will be difficult (although not impossible) for AMD to topple this.

    many people did not upgrade to Intel's Itanium, but rather were upgrading to their high-end dualie Xeon systems -- they run very reliably, and very fast. There are a few instances where we've put in dual 2.x GHz Xeons for web/mail servers...and only a slashdotting could bring them down...(well, an exaggeration...but you get the point).
  • by Lomby ( 147071 ) <andrea@lAAAombar ... inus threevowels> on Wednesday March 03, 2004 @01:22PM (#8453173) Homepage
    Hmmm, you should read the article before commenting.

    The last two articles on Xeons used their forum database as the workload for the benchmark. In the current article he even managed to use an unnamed enterprise order management system.

    Then, if you have the games and the 64-bit systems at hand, why not do a quick benchmark?

    Their review of windows64 highlighted some obvious problems, probably with drivers/PCI, that may be relevant for professional use (think of CAD).
  • by Anonymous Coward on Wednesday March 03, 2004 @01:27PM (#8453227)
    Believe it or not, Intel's compiler generates very good code for the Opteron. Far better than GCC or generic IA32 compilers.

    So in any evaluation, the compiler and binaries that are used are an important question.

    There was no mention of this in the article.
  • by Anonymous Coward on Wednesday March 03, 2004 @01:28PM (#8453242)
    I call bullshit. You make a blanket statement without anything to support it or any logical argument at all. Of course you will get modded up, though, because your post is anti-Intel.
  • by Fallen Kell ( 165468 ) on Wednesday March 03, 2004 @01:30PM (#8453268)
    The gist of the whole thing is that Intel's architecture has a huge bottleneck in its FSB. All the processors share the same FSB and quickly max it out if there are more than 2 processors. So anyone building or buying systems with more than 2 processors will get much better performance out of an AMD Opteron system than an Intel one. (Rough numbers in the sketch below.)
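    A back-of-the-envelope sketch of that bottleneck (the bandwidth figures are my assumptions for typical parts of that era, not numbers from the article): a shared 400MT/s, 64-bit front-side bus tops out around 3.2GB/s total no matter how many CPUs hang off it, while each Opteron brings its own dual-channel DDR controller at roughly 5.3GB/s per socket.

        /* fsb_sketch.c -- illustrative arithmetic only; figures are assumptions */
        #include <stdio.h>

        int main(void) {
            const double shared_fsb = 3.2;  /* GB/s total, split across all Xeons  */
            const double per_socket = 5.3;  /* GB/s per Opteron memory controller  */

            for (int cpus = 1; cpus <= 4; cpus *= 2)
                printf("%d-way: Xeon %.1f GB/s per CPU, Opteron %.1f GB/s per CPU\n",
                       cpus, shared_fsb / cpus, per_socket);
            return 0;
        }

    At 4-way the per-CPU share of the common bus drops to a quarter, while Opteron's aggregate bandwidth grows with every socket added.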
  • by maharito ( 626909 ) on Wednesday March 03, 2004 @01:32PM (#8453287)
    I attend a university that is currently building a beowulf cluster, and when it came down to making a decision, the deciding factor was price/performance ratio. While it may make sense for enterprises to go with the Xeon, the Opteron is a clear winner, in my mind, when money is an object. Of course, if you have the money to burn, the Xeon may seem to be the more obvious choice.
  • by tuffy ( 10202 ) on Wednesday March 03, 2004 @01:35PM (#8453324) Homepage Journal
    I've owned 5 AMD processors from the K5 to an Athlon64 and all are still in perfect working order. But these sorts of anecdotes aren't very helpful in determining average chip reliability.
  • by Pingular ( 670773 ) on Wednesday March 03, 2004 @01:37PM (#8453354)
    I attend a university that is currently building a beowulf cluster, and when it came down to making a decision, the deciding factor was price/performance ratio. While it may make sense for enterprises to go with the Xeon, the Opteron is a clear winner, in my mind, when money is an object. Of course, if you have the money to burn, the Xeon may seem to be the more obvious choice.
    Even if someone has money to burn, wouldn't it be better to get more performance anyway?
  • by hng_rval ( 631871 ) on Wednesday March 03, 2004 @01:39PM (#8453371)
    Alright I have had about 3 AMD processors die on me. I have owned about 4 Intel processors all the way back from original Pentium. Not one has ever had a problem.
    Now... given this kind of statistics, as sad as it may sound I'd say I am willing to pay anything for an Intel just to avoid the headaches.


    That is an interesting use of the word statistics. In order to determine if your next processor is likely to break, you should look at thousands or hundreds of thousands of Intel procs and AMD procs. Your 7-processor study is inherently flawed.
  • Umm.. wrong. (Score:2, Insightful)

    by Egekrusher2K ( 610429 ) on Wednesday March 03, 2004 @01:44PM (#8453453) Homepage
    There were only a very few benchmarks where the Xeons beat the Opterons in the 2-way configuration. And even those were by a very small margin. And in the 4-way configs? It was a slaughter.
  • by John Courtland ( 585609 ) on Wednesday March 03, 2004 @01:45PM (#8453464)
    Wow, I have a working AMD 386/40. Yet I have a score of dead Intel 286/386/486's. I just evened out your "statistics". Not to mention the 5th gen and above x86 class processors I have.
  • -5, Clueless (Score:5, Insightful)

    by Gothmolly ( 148874 ) on Wednesday March 03, 2004 @01:47PM (#8453491)
    Firstly, Anandtech uses Flash for its images so that people w/o the plugin can't see the data. This forces you to install it, so that you can see their OTHER Flash pieces... ads.
    Secondly, you are not going to get MS to recompile MS-SQL for Opteron. You're not going to get IBM to support a Linux installation after you've rolled your own ueber-NUMA-patch-level-42 kernel.
    The test was clear - out of the box, plug in servers, load OS, load app, run benchmark.
    And the outcome was clear: the Opteron architecture is vastly superior, both performance- and price-wise.
    The MHz myth is over, at least in Slashdot and Anandtech circles.
  • by Pingular ( 670773 ) on Wednesday March 03, 2004 @01:49PM (#8453513)
    I disagree. I'm running a Windows XP workstation with dual 2.4 GHz Xeons, and I'm not at all disappointed... neither are the 50 or 60 other developers surrounding me which are running on the same boxes.
    What exactly would be our grounds for disappointment?

    That your company spent $3,750 x 2 x 55 = $412,500 on processors alone (assuming you have the 1MB MP model Xeons), when you could have had the same performance for a quarter of that price.
  • Re:IA-32e vs IA-32 (Score:4, Insightful)

    by Anonymous Coward on Wednesday March 03, 2004 @01:54PM (#8453577)
    need to buy new motherboards

    This makes me want to throw up. The last motherboard purchase I made, it was a chore finding one with the _fewest_ features. Need an AMR riser slot? Fuck no, I'd rather have the PCI slot back. Need integrated sound? No, integrated sound makes my already bad speakers sound worse. It must've been tough figuring out how to make a decade's worth of improvements in technology amount to nothing. I have an ISA SoundBlaster from 10 years ago that sounds better than the onboard sound on my last motherboard. Need integrated video? I won't begrudge you this. Some people build clusters with their motherboards, and a video card is needed to boot, but if I have a choice I won't buy a mobo with integrated video.

    In short, I want a motherboard with slots for RAM, an AGP slot, a socket/slot/hole for a CPU, PS/2 hookups, serial and USB connectors, and the rest of the board filled up with PCI (or PCI express) slots. That's the ticket.
  • by flaming-opus ( 8186 ) on Wednesday March 03, 2004 @01:59PM (#8453617)
    You will find that most high-end Xeon systems are also NUMA systems. IBM, Unisys, and HP all construct their really big Xeon boxes as NUMA clusters of 4-processor SMPs. They create a distributed-memory machine at the chipset level. This is actually what the Opteron does, except that the chipset (well, the memory controller part of it) is built into the processor.

    I think the above poster had the correct idea about NUMA, but worded it in a misleading way. A NUMA design (either of Opterons, or of Xeon quads) will have to do some memory access through the memory controllers on other nodes. This increases the latency of memory access, and can clog up the inter-processor links if lots of memory loads/stores go to remote memory. Thus NUMA-aware operating systems and system libraries are necessary to maximize the amount of memory access that is local, and minimize the usage of the inter-processor links (a small sketch of what that looks like follows at the end of this comment).

    While the Opteron design is elegant, and fast, it is not the only smart way to do things. It offers great aggregate memory bandwidth, but can slow things down in the worst case. Most large NUMA systems are created by linking 4-way SMP nodes. (Examples: Sun Fire, HP AlphaServers, Cray X1, NEC SX-6, Unisys 7000, IBM xSeries 4xx Xeon, IBM xSeries 4xx Itanium,...) Apart from Opteron systems, the only systems I can think of that do NUMA per processor are the Cray T3E, SGI Origin, and Intel Paragon, all of which are massively parallel supercomputers.

    It is safe to say, however, that a shared-bus system does not scale well beyond a few processors. This is best demonstrated by the 36-processor SGI Challenge XL, which was significantly bottlenecked at the memory bus.

    food for thought.
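    To make the local-vs-remote distinction concrete, here is a minimal sketch of NUMA-aware placement using Linux's libnuma (my choice of library for illustration; the comment above doesn't name one). It pins the thread to one node and allocates from that node's memory, so loads stay off the inter-processor links:

        /* numa_local.c -- minimal libnuma sketch; link with -lnuma */
        #include <numa.h>
        #include <stdio.h>

        int main(void) {
            if (numa_available() < 0) {
                fprintf(stderr, "no NUMA support on this system\n");
                return 1;
            }
            numa_run_on_node(0);                    /* schedule this thread on node 0 */
            size_t len = 64UL * 1024 * 1024;
            char *buf = numa_alloc_onnode(len, 0);  /* memory physically on node 0    */
            if (buf == NULL)
                return 1;
            for (size_t i = 0; i < len; i += 4096)  /* touch every page: all local    */
                buf[i] = 1;
            numa_free(buf, len);
            return 0;
        }

    A NUMA-unaware allocator might hand the same thread pages on a remote node, turning every cache miss into a trip across the HyperTransport links (or the Xeon chipset's interconnect).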

  • by Laur ( 673497 ) on Wednesday March 03, 2004 @02:00PM (#8453634)
    Also, in my opinion there was no significant difference between the two platforms regarding their speed on this benchmark. The difference between 1st and 2nd place, regardless of who won that test, was between 5 and 12%. I don't start to get interested until there is at least a 20% difference.

    How about cost? The Xeons cost twice as much as the Opterons, and the Opterons give equivalent or better performance! Although you are correct that the performance difference may not be staggering (and between top of the line chips, who would expect it to be?), the price/performance ratio certainly is.

  • Re:-5, Clueless (Score:3, Insightful)

    by MikeBabcock ( 65886 ) <mtb-slashdot@mikebabcock.ca> on Wednesday March 03, 2004 @02:03PM (#8453668) Homepage Journal
    1) Microsoft is a major Opteron supporter; they've had a freely downloadable Opteron Windows XP beta available for some time now, and I have an ISO of it here.

    2) IBM would probably support uber-numa patched kernels as you put it, since they are one of the main proponents of Linux-on-massively-parallel-supercomputers anyway.

    Do some research.
  • by Anonymous Coward on Wednesday March 03, 2004 @02:06PM (#8453708)
    The lack of detailed hardware specifications disturbs me. What servers are they using? Vanilla clone quad proc servers? If so, odds are both the AMD and Intel versions could do better using hardware actually optimized to take advantage of them. I'd like to see HP servers running both chips put to the same tests and see how they perform.
  • by brucmack ( 572780 ) on Wednesday March 03, 2004 @02:12PM (#8453779)
    The purpose of the test is not to test the memory, but to test the processors. Thus, they used the same memory in testing each processor configuration.

    One of the purposes of the test was to show how the memory bandwidth bottleneck of the Xeons limits their effectiveness in 4-way configurations, a problem the Opterons do not have. Doing this comparison with different memory would make things more complicated.

    Additionally, you'll notice that Anand's final words recommend the Opteron for being at least equivalent and much cheaper than Xeon. This was also the selection process for their new forum servers, so you can bet that they aren't getting any kickback from Intel, or those would be Xeons.

    If you still have doubts about the validity of Anandtech's testing, check out the benchmarks from their AMD vs. Intel web server test in December: http://www.anandtech.com/IT/showdoc.html?i=1935&p=9 [anandtech.com]. All on dual-processor configurations. There is definitely no Intel bias in that test.

    Really, I think some people ought to think before they flame like this. The benchmarks are showing the Opterons to be equivalent or faster in 2-way configurations and definitely faster in 4-way configurations, so what is there to complain about? The fact that Anandtech has consistently recommended AMD's processors just makes it doubly silly.
  • by Anonymous Coward on Wednesday March 03, 2004 @02:15PM (#8453805)
    The Xeons cost twice as much as the Opterons

    Even at pricewatch.com, that's blatantly false. Furthermore, server customers care about the system price, not the CPU price -- check out IBM.com (virtually the only Tier 1 AMD vendor):

    x335: 2x Xeon 3.2GHz, 1GB RAM -- $2,639.00
    e325: 2x Opteron 248 2.2GHz, 1GB RAM -- $3,399.00

    Oh, lookie: the AMD system is actually more expensive. It's also a better product, so that makes sense. Retract your FUD.
  • by ELiTeUI ( 591102 ) on Wednesday March 03, 2004 @02:16PM (#8453816)
    No, actually, the Opterons use a newly designed bus, namely HyperTransport (www.hypertransport.org). It was designed primarily by AMD. It is not a switched bus; it is a fabric architecture.

    just so you know.

    ELiTeUI

  • by flaming-opus ( 8186 ) on Wednesday March 03, 2004 @02:17PM (#8453820)
    Itaniums are expensive, but not outrageous compared to other high-end processors like POWER4 or UltraSPARC. They also perform quite well. They are definitely better performers than Xeons for most of our apps.

    The problem with itanium is not that they aren't a good technology, but rather that intel is trying to shove them into the high-end of the market, which is a difficult place to compete. sparc, power, pa-risc, alpha have all been around for years, have established customer bases, and lots of businesses have invested tons of money in running them. It's a difficult place to introduce any new products.

    Intel has been stymied trying to sell IA-64 into that space, and has undercut itself by continuing to improve Xeon, which performs pretty well and is comparatively inexpensive. Most segments are going to migrate to the all-American mantra of "GOOD ENOUGH, and CHEAP!", which describes Xeons/Opterons perfectly. The market segments that won't migrate in that direction are willing to pay the big bucks for stability and reliability. They are very slow movers. Intel might sell some Itaniums to these customers, but they'd better be willing to wait a long time.

    I think a lot of people judge Itanium by the yardstick of Xeon, and maybe should not. If Itanium ends up simply as a replacement for PA-RISC, Alpha, and MIPS in the SGI and HP portfolio, that may be a success by some measures.
  • by ThisIsFred ( 705426 ) on Wednesday March 03, 2004 @02:23PM (#8453897) Journal
    Going to go slightly off-topic here:

    I'm an AMD lover, but it's my opinion that AMD is making a *huge* mistake with their desktop market. They only produced two marginal FSB400 processors with the "32-bit" Barton core, and then focused all their attention on the Athlon 64s. People who've made a choice in the past year to go with an AMD-compatible FSB400 mainboard are getting the shaft, and AMD is unwittingly forcing them to move to Intel during their next upgrade. Currently Intel's latest 3.0+ GHz offerings are spanking Athlon 64s in benchmarks with 32-bit applications. When users decide to do the next upgrade, they're going to say "hey, I have to replace my mainboard anyway", and they're going to go to Intel because it has more upgrade possibilities, is cheaper than the Athlon 64 for the same level of computing power, and currently performs better.

    So this is more of a plea for AMD to extend the Athlon "32" line a bit further. Please AMD, don't prematurely kill off 32-bit Athlon chip development!
  • Re:OS (Score:2, Insightful)

    by DangerSteel ( 749051 ) on Wednesday March 03, 2004 @02:23PM (#8453898)
    Specifically, the article says Windows 2003 Enterprise Server, of which there are at least two 64-bit versions, but they don't tell you which version they used. I don't always agree with their findings, but they appear to be sharp people for the most part. I doubt they would ever use XP for a server-class chip test.
  • Re:IA-32e vs IA-32 (Score:4, Insightful)

    by Loki_1929 ( 550940 ) on Wednesday March 03, 2004 @02:26PM (#8453931) Journal
    " Can somebody tell me if the IA-32e processors will be in the socket 478 format to work with existing boards, or will they require a whole new socket and chipset (rather than a bios update)"

    They're disabled in all socket 478 chips. The new Pentium 4e chips (Prescott core) supposedly have the extensions, but they remain disabled. Technically, it may be possible to gain access to those instructions through some sort of BIOS hack, but you would also need to use an operating system that can detect, support, and make use of those new instructions. Also, you risk using unfinished or untested parts of the CPU if you do manage to gain access and use the extensions. There would be no benefit other than simple tinkering.

    "If they really are just "extensions" then I don't see why anything special would need to be on the motherboard correct?"

    You still need a CPU that supports the instructions, and which has them enabled. Technically, if Intel released Prescotts in S478 form with IA-32e enabled, it should work fine with an existing motherboard which would otherwise support the Prescott chip you're using. The probability of Intel taking the time and effort to do this less than a quarter away from a whole new socket is virtually nil.

    "The cpu should switch into 64bit mode whenever the OS tells it to right?"

    That's not entirely accurate. Technically, what happens under AMD64 (the basis for IA-32e), is that specific instructions can be sent to the CPU to have it run code in what's called "Compatibility mode", which essentially allows it to behave as though it were a 32-bit CPU. The difference is that you're not 'switching' to 64-bit mode. You're either in 64-bit mode with the option for compatibility mode when needed (meaning you need a 64-bit capable OS), or you're in 32-bit and you're stuck in 32-bit.

    If you're looking for 64-bitness, you may simply want to get an Athlon64. If you're waiting for 64-bitness on the Intel side of things, you'll be waiting until some time towards the end of this year. Good luck. (Curious whether a given chip reports the extensions at all? See the CPUID sketch below.)
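    For what it's worth, software can tell whether a chip reports the 64-bit extensions with a one-leaf CPUID check; here's a minimal sketch (GCC on x86 assumed; the long-mode flag is bit 29 of EDX in extended leaf 0x80000001, shared by AMD64 and IA-32e). A Prescott with the extensions fused off should simply report 0 here:

        /* longmode_check.c -- build with gcc on an x86 machine */
        #include <cpuid.h>
        #include <stdio.h>

        int main(void) {
            unsigned int eax, ebx, ecx, edx;
            if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
                printf("extended CPUID leaf not supported\n");
                return 1;
            }
            printf("64-bit long mode: %s\n", (edx & (1u << 29)) ? "yes" : "no");
            return 0;
        }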

  • by ms8423 ( 585674 ) on Wednesday March 03, 2004 @02:26PM (#8453936)

    Thanks for pointing it out. From the conclusion of the article:

    The comparison we've made here is a very important one; it identifies Intel's strengths and their weaknesses with Xeon, and it crowns Opteron a clear multiprocessor winner. An area that we didn't touch on is cost, which is where AMD truly shines. The Opteron 848 processors we tested are around 1/2 the price of Intel's 2MB L3 Xeon MPs and we have not seen retail data on how expensive the 4MB parts will be.
    In a 4-way configuration AMD's Opteron cannot be beat, and thus it is our choice for the basis for our new Forums database server. We'll be documenting that upgrade in a separate article so stay tuned.

    Not quite as the /. story reads.

  • by hackstraw ( 262471 ) * on Wednesday March 03, 2004 @02:28PM (#8453948)
    The purpose of the test is not to test the memory, but to test the processors. Thus, they used the same memory in testing each processor configuration.

    Then they should not have done a memory-intensive and disk-intensive benchmark like a database, now should they?
  • Comparing Prices (Score:5, Insightful)

    by gbulmash ( 688770 ) <semi_famous@yah o o . c om> on Wednesday March 03, 2004 @02:37PM (#8454048) Homepage Journal
    4 AMD Opteron 248's at Newegg: $5876 ($1469 ea)
    4 Xeons (@Intel's announced pricing): $14768 ($3692 ea)

    Did the quad Xeon system outperform the quad Opteron by a factor of 2.5:1? No. In fact, in some cases, the quad Opteron outperformed the quad Xeon. The Xeon had the advantages of hyperthreading, 4x as much cache, and a clock speed 800MHz higher than the Opteron, and still got beat.

    Clock speed may sell in the consumer market ("Me want bigger!"), but in the server market, Opterons getting better performance for half the price are going to win more and more converts.

    - Greg

  • by Loki_1929 ( 550940 ) on Wednesday March 03, 2004 @02:39PM (#8454071) Journal
    The Itanium2s and up are pretty decent so long as you're working with code designed for 64-bit/EPIC. Where you run into problems is with 32-bit code, or pretty much any code not designed/optimized for EPIC. There's nothing wrong with Itanium in and of itself; it's just not cut out for compatibility the way x86 is. Had Intel stuck with the original plan for IA-64 (which was to replace x86 from top to bottom), this would have been fine. You simply would have lost the ability to use old applications, but new ones would run reasonably fast. 10 years later, Itanium has its niche, does quite well within that niche, and sucks for everything else. :)

    "I wish we could get by with cheap Xeons, but they just don't cut the mustard for our applications."

    This is exactly why Opteron DOES compete with Itanium - if only indirectly. Opteron will never hit the big-tin niche, simply because it was never designed, nor intended, to do so. What Opteron does is bring 64-bitness, and all the benefits therein, to the mid-range crowd. This forces Intel to choose between giving up on Itanium as anything other than a big-tin chip, or losing half its mid-range customers to AMD. Losing such a lucrative market would be far worse for Intel in the long run than losing the 10 years of R&D sunk into Itanium, so they've chosen to bring the Xeon line to the 64-bit world. With the new Potomac core (Q1/H1 '05), the XeonMP will be the CPU of choice for Intelphile mid-range customers in need of Itanium's benefits, but conscious of cost. The result will be that Itanium's legs will finally be completely taken out from under it, and it will be consigned to little more than a handful of extremely high-end big-tin servers each year.

    Does this mean Intel should continue to develop Itanium, even if it becomes clear it can no longer sustain its own R&D? I don't know - I think that's a question for Intel's board to answer. What I do know is that AMD had it right in '98/'99 when they decided to help transition people to 64-bit CPUs without losing x86's incredible compatibility. The bottom line is that someone like you would have gladly gone with either Opterons or Xeons had the choice been given to you. Unfortunately for Intel's margins, you and those in your position now have that choice.

  • by Anonymous Coward on Wednesday March 03, 2004 @02:59PM (#8454306)

    The parent is wrong, they're not testing CPUs. Testing completely different architectures and believing that 'because we use the same memory the test is balanced' is just too absurd!

  • by Anonymous Coward on Wednesday March 03, 2004 @03:00PM (#8454313)
    K6-2 to K6-3 was dramatic. Thoroughbred to Barton was not. Athlon64 3000 to Athlon64 3200 is even less dramatic (maybe 1-3%?).

    Cache does not always help. Once your critical path fits in cache what more can you do? The very interesting thing is that cutting the K8's L2 in half (3200->3000) results in only a couple % performance degradation.

    But scaling the K8's MHz up 10% (3200->3400) results in 10% speedup in benchmarks--almost exactly.

    This, along with the upcoming trend in multi-die CPUs and hyperthreading, suggests that we are not as bandwidth-limited as we've been told. And we can't push MHz faster yet. So for now we need more parallelism in the CPU itself.

    Clearly, right now, MHz and ALUs are what really count for speed. (Rough scaling arithmetic below.)
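    A rough way to see why clock scaling tracks so closely (an Amdahl-style estimate with assumed numbers, not the parent's data): if a fraction m of runtime is memory-bound and unaffected by core clock, a clock factor k speeds up only the rest, so overall speedup is 1 / (m + (1 - m)/k). Observing ~10% gain from a 10% clock bump implies m is near zero for these benchmarks:

        /* clock_scaling.c -- illustrative Amdahl-style arithmetic */
        #include <stdio.h>

        int main(void) {
            const double k = 1.10;               /* +10% core clock            */
            const double m[] = {0.0, 0.2, 0.5};  /* assumed memory-bound share */

            for (int i = 0; i < 3; i++)
                printf("memory-bound %2.0f%% -> speedup %.1f%%\n",
                       m[i] * 100.0,
                       (1.0 / (m[i] + (1.0 - m[i]) / k) - 1.0) * 100.0);
            return 0;
        }

    With m = 0 the speedup is the full 10%; at m = 0.5 it would drop to about 4.8%, which is clearly not what the 3200->3400 numbers show.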
  • by Zeriel ( 670422 ) <<gro.ainotrehta> <ta> <selohs>> on Wednesday March 03, 2004 @03:06PM (#8454389) Homepage Journal
    *raises hand* I'm a corporate IT type, and I read his benchmarks. Along with about three others on a regular basis. Because sometimes, "real work" tends to scale pretty similarly to game performance--especially when that real work is a lot of 3D graphic operations.
  • by DarkHelmet ( 120004 ) * <mark AT seventhcycle DOT net> on Wednesday March 03, 2004 @03:07PM (#8454402) Homepage
    Here's a part that I can't help but laugh at:

    In our infinite desire to please everyone we worked very closely with a company that could provide us with a truly Enterprise Class SQL stress application. We cannot reveal the identity of the Corporation that provided us with the application because of non-disclosure agreements in place.

    Okay... So we know what kind of hardware they're testing against, but we don't know what kind of software they're benchmarking? "We're using an enterprise scenario" isn't good enough.

    It's nice to look at pretty charts and all, but I imagine anyone who is going to investigate enterprise level solutions is going to want to know EXACTLY what this is being benchmarked on.

    Even though I typically tend to trust Anandtech's outlook on things, I'm still kind of so-so on this review. Their forum test is not really externally reproducible and their enterprise test is too vague. I doubt any IT person would weigh this review too heavily when making a decision.

    Then again, I could be wrong.

  • by Loki_1929 ( 550940 ) on Wednesday March 03, 2004 @03:22PM (#8454580) Journal
    "Heh, I guess the Cray Red Storm system kind of shoots down that theory... ;-)"

    Not really - I mean, I could design a cluster with a few million Pentium Pro CPUs and have it compete with the upper end Itanium boxes. Does that mean the Pentium Pro was designed to compete with Itanium?

    The Opteron makes an excellent solution in many different scenarios, but don't take its flexibility to mean that it was meant to be in direct competition with Itanium. As I stated, however, it indirectly competes with (vis-a-vis Xeon) Itanium, and in fact, creates a situation in which Itanium cannot possibly be self-sustaining within a year from now.

  • by Hoser McMoose ( 202552 ) on Wednesday March 03, 2004 @03:26PM (#8454611)
    They WILL standardize on a socket; it's just that the socket will be Socket 939, not the current one.

    It's pretty much the same story with SlotA/SocketA. They had an initial design that was quickly replaced. The second socket then stuck it out for the duration.

    Intel did pretty much the same thing with their P4, initially releasing it on socket 423 and then quickly moving to socket 478 which has lasted for several years now (though it too will soon be replaced).

    Markets change, technology changes, and sometimes sockets need to change with them. Remember that the specification for Socket 754 and Socket 940 for current Athlon64 chips was set in stone about 3 years ago (before the first beta chips taped out), and a lot has changed since then. AMD has gone to great lengths to minimize socket changes, but there's only so much that they can do.
  • by dpilot ( 134227 ) on Wednesday March 03, 2004 @03:45PM (#8454872) Homepage Journal
    Problem is, much as we might like 32-bit Athlons, AMD hasn't been doing well at them, financially. AMD is in this to make money, not to make hobbyists happy. In the past they have been able to do both, and when they haven't, they've still been pleasing the gearheads. Maybe they'll be able to again in the future. But for the moment, AMD is attempting to climb upstream into the highly profitable small-to-mid server space with Opteron, and that's where their focus is. I suspect that Athlon-64 will get more attention in due time, and you'll be happy, again. At the moment, we're in the gap.
  • by Shinobi ( 19308 ) on Wednesday March 03, 2004 @04:22PM (#8455346)
    Except that theory takes a severe beating. Compare entries 4, 5 and 6 here at Top 500 [top500.org], especially the number of CPU's. Notably, entries 4 and 6 use the same kind of high-speed interconnects between the nodes, so the difference can't be blamed on that.

    The problem with the AMD approach is that you get the NUMA drawbacks not only against other nodes, but internally on the node. If the data CPU 1 needs isn't in its own memory banks, it's got to request them from CPU 2, 3 or 4, with a latency penalty (letting the memory controllers read/write freely from each other's banks doesn't sound like a good idea, really). It works well with databases, serving websites, file servers etc., where the processes don't need to share memory or talk to each other a lot, but for physics and chemistry simulations etc., you get some penalties.
  • by Hoser McMoose ( 202552 ) on Wednesday March 03, 2004 @04:25PM (#8455380)

    Cooling is/was more important, especially for the older T-birds.

    Cooling is VERY important for all current processors. It all becomes relative. When the Thunderbird was current, it used quite a bit more power than the PIII that it competed against, so cooling was very important for that chip. Now, the ~70W that the T-Bird used is not at all abnormal; this is the same basic power consumption as an AthlonXP (Barton or Thoroughbred), Athlon64/Opteron, P4 "Northwood" or even an IBM PowerPC 970 (aka Apple G5). The Intel P4 "Prescott" chips are a bit hotter, so for the moment people are talking a lot about how hot those chips get, but give it another year or two and a 100W TDP probably will not be at all abnormal for processors.

    You have to be more careful not to crack the chip when putting the heatsink on.

    I've always wondered how in the hell anyone managed to crack their chips putting heatsinks on. I've put heatsinks on a fair number of AthlonXP chips and never even saw a possible way to crack the core. Do people usually install heatsinks using a hammer or something?! While I do like the newer retention mechanism used on Intel P4 chips or AMD Athlon64/Opteron chips, it's really NOT difficult at all to put a heatsink on an AthlonXP processor.

    AMD has had software issues. For example, Win2K had to be patched to SP1 because AMD messed up AGP coherence. AMD also never told the Linux developers about this same problem, causing numerous people to have system crashes when using nVidia cards/drivers under Linux. You'd think they would have sent a fricken e-mail to the kernel-dev list.

    You need to do a bit more reading on that problem; it was actually caused by nVidia's drivers doing some really stupid things (ATI does/did the same stupid things, as did most other video card vendors). It was only a matter of sheer luck that the problem DIDN'T affect Intel chips in the same way.

  • by Hoser McMoose ( 202552 ) on Wednesday March 03, 2004 @04:45PM (#8455661)
    Definitely not BS, though whether or not it's useful depends heavily on your application.

    The idea behind hyperthreading is that the P4's long pipeline will often stall with only a single thread going through. With hyperthreading you run two threads at once, so when one thread stalls you just start up the other thread and go with that one for a while. In a way it's almost like a poor man's dual-processor system, giving you two logical processors on a single chip. (Toy sketch at the end of this comment.)

    Now, obviously there are a few things to consider here. First off, if ALL of your processing is being done in a single thread then you aren't going to see any benefit to hyperthreading, and in fact the extra overhead might even make things a bit slower (usually only 1-2% slower).

    Games almost always do all their major processing in a single thread. Even if they have extra threads hanging around, you almost always spend 99%+ of your time in a single thread. For this reason, games see virtually no benefit to hyperthreading (they don't see much/any benefit from dual-processor setups either).

    On the other end of the spectrum, some applications see up to a 25% performance boost when hyperthreading is enabled. The tests I've seen show the biggest improvement have been things like Photoshop and rendering applications. Some server applications should benefit as well.

    The other boost that hyperthreading gives you, like with a real dual-processor setup, is that it makes multitasking a bit "snappier". This is by no means a night-and-day difference here, but it is there.
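    A toy illustration of the stall-filling idea (my own sketch, not from the parent): each thread does dependent, cache-missing loads, exactly the kind of work that leaves a long pipeline idle. On an HT-enabled P4, two such threads should finish in noticeably less than twice the one-thread time, because the second logical CPU runs while the first waits on memory:

        /* ht_toy.c -- compile: gcc ht_toy.c -o ht_toy -lpthread
         * run: ./ht_toy 1  then  ./ht_toy 2  and compare wall times */
        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/time.h>

        static volatile size_t sink;  /* keeps the walk from being optimized away */

        static void *chase(void *arg) {
            size_t n = 1 << 20, i = 0;                /* table larger than cache  */
            size_t *next = malloc(n * sizeof *next);
            if (next == NULL)
                return NULL;
            for (size_t j = 0; j < n; j++)            /* full-cycle scrambled walk */
                next[j] = (j * 2654435761u + 1) % n;
            for (long s = 0; s < 20000000; s++)
                i = next[i];                          /* dependent loads: stalls   */
            sink = i;
            free(next);
            return NULL;
        }

        int main(int argc, char **argv) {
            int nthreads = (argc > 1) ? atoi(argv[1]) : 2;
            if (nthreads < 1 || nthreads > 8) nthreads = 2;
            pthread_t tid[8];
            struct timeval t0, t1;

            gettimeofday(&t0, NULL);
            for (int t = 0; t < nthreads; t++)
                pthread_create(&tid[t], NULL, chase, NULL);
            for (int t = 0; t < nthreads; t++)
                pthread_join(tid[t], NULL);
            gettimeofday(&t1, NULL);

            printf("%d thread(s): %.2f s\n", nthreads,
                   (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6);
            return 0;
        }

    A single-threaded game gets no such overlap, which is exactly why the gains show up in throughput-style workloads and not in frame rates.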
  • HA HA! (Score:1, Insightful)

    by Anonymous Coward on Wednesday March 03, 2004 @05:44PM (#8456409)
    That's funny. You said "resale value" when referring to PC computer equipment.
  • by Loki_1929 ( 550940 ) on Wednesday March 03, 2004 @06:09PM (#8456660) Journal
    "They only produced two marginal FSB400 processors with the "32-bit" Barton core, and then focused all their attention on the Athlon 64s."

    Out with the old, in with the new; sounds good to me.

    "People who've made a choice in the past year to go with an AMD-compatible FSB400 mainboard are getting the shaft"

    Those who made the choice to purchase a CPU which is at the upper end of an architecture's limits have only themselves to blame. If AMD spent another 20 years trying to make K7 faster, you'd be posting to slashdot in 2024 complaining about how people who just bought the latest Athlons are 'getting the shaft'. What about people who bought 233MHz P1s? What about people who bought 600MHz Slot 1s? What about people who bought 1GHz P3s? Are all these people 'getting the shaft'? Or is this simply an inevitable event in the course of chip development? Look, socket A has been around since S370. Since that time, Intel has gone through... what, 3, 4 sockets? AMD makes CPUs that go from competing with 600MHz P3s to competing with 3GHz P4s, and you continue to complain when they finally reach a ceiling they can't break through.

    Look, most experts were looking at the death of the K7 at about 2GHz. They were looking at the architecture, and it simply doesn't do well at a whole lot above that. Yes, some chips will make it to higher speeds with excellent cooling, but those are the exceptions - not the rule. It's a credit to AMD that the massive core improvements from Thunderbird to Barton have kept the K7 in competition for this long. Now that the MHz train has run out of steam and they can't squeeze any more performance out of K7, there's not a whole lot they can do with it. The extra cache, the FSB jumps - they're just not sustaining K7 performance improvements anymore. The concept of diminishing returns really comes into play at this point.

    "AMD is unwittingly forcing them to move to Intel during their next upgrade."

    I'm not quite sure I understand this part. Let's see, I can buy Socket 478 board with a 3.2GHz P4, which will be worthless if I want to get to anything above 3.4GHz. Or, I can wait for LGA775, which might be ready some time this summer. Or, I can go to Athlon64's S754, which will hit 4000+ at a minimum. Or I can wait for Socket 939, which should be out some time towards the end of this month, and go even further with the Athlon64/FX lines.

    "Currently Intel's latest 3.0+ GHz offerings are spanking Athlon 64s in benchmarks with 32 bit applications."

    Pull yourself away from Tomspropagandamachine.com and look at Anandtech, Ace's Hardware, or just about anywhere else on the face of the planet. The only 32-bit apps that the P4 wins at all are encoding/streaming benchmarks. When you look at games, office, rar'ing, etc., the A64s put the P4 down like an old dog.

    "When users decide to do the next upgrade, they're going to say "hey, I have to replace my mainboard anyway", and they're going to go to Intel because it has more upgrade possibilities, is cheaper than the Athlon 64 for the same level of computing power, and currently performs better."

    You've stumbled into the SCOiverse, I think. Are you going to try and tell me that the P4 is cheaper than an Athlon64 3000+ system?? Let's see: P4 3GHz upgrades to ... 3.4GHz max. Athlon64 3000+ upgrades to, at least 4000+, probably closer to 4400+. As for current performance, you've got to be looking at Tom's to even be able to imagine that one. It's amazing what sorts of results you can conjure up when you rig benchmarks by kneecapping the "competition's" (competition? isn't this supposed to be an unbiased review?) products by tweaking driver versions/settings/etc.

    "So this is more of a plea for AMD to extend the Athlon "32" line a bit further. Please AMD, don't prematurely kill off 32-bit Athlon chip development!"

    Your wish is Jerry Sanders' command! AMD has already stated that socket A will live throughout '04, an
  • by rich115 ( 638875 ) on Wednesday March 03, 2004 @09:24PM (#8458898) Homepage
    ...and then they wasted everyone's time by running Windows in the benchmark. Why not a 64-bit OS on the Opteron? Linux or Solaris x86, for instance. I'd prefer to see the difference then.
  • by juhaz ( 110830 ) on Wednesday March 03, 2004 @10:00PM (#8459219) Homepage
    You can't moderate and post in the same thread (maybe even story), so clearly he wasn't.

    Since when do tomshardware and "relevant" fit in the same sentence? Besides, future upgrades or no, chips with gigantic caches will _ALWAYS_ be very, very expensive.
