
Xeon vs. Opteron Performance Benchmarks

QuickSand writes "Anand got his hands on some of Intel and AMD's enterprise processors including 4MB L3 Xeons, and put them to the test. Results were a little varied as 4-way Opteron systems seemed to fare the best, although dual Xeon configurations almost always beat dual Opterons. The exact benchmarks are here."
This discussion has been archived. No new comments can be posted.

  • IA-32e vs IA-32 (Score:5, Interesting)

    by Stonent1 ( 594886 ) <stonentNO@SPAMstonent.pointclark.net> on Wednesday March 03, 2004 @01:19PM (#8453143) Journal
    Can somebody tell me if the IA-32e processors will be in the Socket 478 format to work with existing boards, or will they require a whole new socket and chipset (rather than just a BIOS update)? If they really are just "extensions," then I don't see why anything special would need to be on the motherboard, correct? The CPU should switch into 64-bit mode whenever the OS tells it to, right?
  • by Pingular ( 670773 ) on Wednesday March 03, 2004 @01:20PM (#8453152)
    Xeons are almost always for servers, whereas Opterons can be for anything. Try running a Windows XP workstation on a dual Xeon system and you'll be very disappointed.
  • by Judg3 ( 88435 ) <jeremy@pa[ ]ck.com ['vle' in gap]> on Wednesday March 03, 2004 @01:26PM (#8453208) Homepage Journal
    It's probably due to the lack of knowledge/tools to benchtest anything else. I'd like to see SQL benchtests, IIS/Apache tests, etc., but just like a lot of other people, I don't know exactly how to do that. Though if I ran a site which made it my business to test hardware, I'd definitely find out and learn how to do it.

    I'd like to see more "Consumer Reports" type tests too. Test hardware configuration X as a high-volume SQL server, and show me how it's held up after a month, 3 months, 6 months, and a year. Yes, maybe I'd upgrade before then, but not everyone would, and I'd like to see the common failures and problems down the line - not a 1-2 day test.
  • by SlashingComments ( 702709 ) on Wednesday March 03, 2004 @01:26PM (#8453213)
    I saw this yesterday on his site. Pretty good.

    One thing I did not understand is how the 3MB cache helps with a big database query. I thought that would thrash the cache, and there would not be much performance gain if you are working with a bigger code/data set. Also, for the four-CPU Opteron, do they have HyperTransport going from every CPU to every CPU? Is it like a mesh, or like a ring where every CPU has only two connections to its neighbors?

    Another thing I did not get is how Linux is handling (or not handling) the memory local to each CPU. This thing looks like a mini-NUMA type system. Does Linux actually try to keep the data in the RAM attached to the CPU that processes it? How does this really work?

    Maybe you guys can help clear up my ideas.

  • OS (Score:4, Interesting)

    by millahtime ( 710421 ) on Wednesday March 03, 2004 @01:27PM (#8453228) Homepage Journal
    So I see that M$ Windows was used as the OS. Unless this was a prerelease of the 64-bit XP, they were running a 32-bit OS on the chips. So wouldn't that mean that this isn't a true test of the power? You're not taking full advantage of the 64-bit power.
  • by ePhil_One ( 634771 ) on Wednesday March 03, 2004 @01:28PM (#8453232) Journal
    many people did not upgrade to Intel's Itanium

    Folks were avoiding the Itanium because it was a disaster: slow and expensive. We've been looking at 64-bit computing for a while, because of the seamless >4GB RAM capabilities. Intel's PAE extensions are OK, but they really didn't solve any of the problems we were having.

    The net result was we went to the 64-bit PPC architecture 3 years ago on those critical systems, and everything has been fine. AIX works great, and IBM's embrace of GNU/Linux means an easy learning curve for us Linux users.

  • Back to Intel Fanboy (Score:3, Interesting)

    by superpulpsicle ( 533373 ) on Wednesday March 03, 2004 @01:28PM (#8453236)
    Alright, I have had about 3 AMD processors die on me. I have owned about 4 Intel processors, going all the way back to the original Pentium. Not one has ever had a problem.

    Now... given these statistics, as sad as it may sound, I'd say I am willing to pay anything for an Intel just to avoid the headaches.
  • L3 cache (Score:2, Interesting)

    by Anonymous Coward on Wednesday March 03, 2004 @01:29PM (#8453250)
    I thought the very definition of L3 cache was off-die. If it is on-die, wouldn't it be L2 cache, unless it does not run at core CPU speed?
  • memory controllerS? (Score:5, Interesting)

    by Bender Unit 22 ( 216955 ) on Wednesday March 03, 2004 @01:33PM (#8453304) Journal
    But these days, with all the virtualization getting hot (VMware, etc.), a server architecture with a single memory bus/controller is getting old.
    I'd like to see some tests on servers like the IBM x445 [ibm.com] with NUMA.

  • The Usual Problem (Score:5, Interesting)

    by Sloppy ( 14984 ) * on Wednesday March 03, 2004 @01:37PM (#8453345) Homepage Journal
    We've seen this same type of benchmark over and over. It wasn't interesting then, and it's not interesting now.

    The tests in this article involved running the exact same binaries (out-of-the-box Microsoft 386 stuff) on both types of CPUs, rather than the code being compiled to run natively. The Opterons were fighting with one hand tied behind their backs.

    In other words, this benchmark is mainly only of interest to Microsofties. If that's what you run, then fine, the article may be useful to you and you may get something out of reading it.

    If you are trying to maximize speed, though, then the software constraints that this test took place under are totally contrary to what you'd actually be doing (running code that is appropriate for the hardware).

    BTW, another weird thing I noticed about this article: these guys use Flash for static images of bar graphs. WTF? Anandtech, your w3b d3$1gn3rz R S0 31337!!!1

  • by hackstraw ( 262471 ) * on Wednesday March 03, 2004 @01:41PM (#8453401)
    Xeons are almost always for servers, whereas Opterons can be for anything. Try running a Windows XP workstation on a dual Xeon system and you'll be very disappointed.

    OK, let's go over this again. There is nothing really special about a Xeon vs. a P4, except the P4 is crippled so that it cannot do SMP, and there may be more cache options on a Xeon. Performance-wise they are the same at the same clock speed. FWIW, I've been disappointed with XP regardless of the hardware :)

    Now, back to this benchmark thingy. First, I would have appreciated it if the article writeup had said that it was only doing a simple read/write database benchmark and that was it, but we don't come to Slashdot for the stories, right? Also, in my opinion there was no significant difference between the two platforms regarding their speed on this benchmark. The difference between 1st and 2nd place, regardless of who won that test, was between 5 and 12%. I don't start to get interested until there is at least a 20% difference, and even then that would only determine my choice for an initial purchase. I would never upgrade a system unless there was at least a 100% speedup; preferably 200-400% is worthy of doing an upgrade.

    It would have been interesting to see results like this for more platforms, because I have not seen any significant numbers from the Opteron yet. For example, the memory bandwidth of the Opteron is half that of the Itanium 2's.
  • by chef_raekwon ( 411401 ) on Wednesday March 03, 2004 @01:43PM (#8453435) Homepage
    One thing I did not understand is how the 3MB cache helps with a big database query

    This is interesting, and I don't have an answer, except to say that SQL servers generally try to load all of the tables into available RAM. If the data is too large, then just the indexes (??). If the server ever has to go back to the hard drive for data (which would make the query bloody slow), it will check recently cached stuff first - and a larger cache means less time pulling data sets from a RAID array or single hard drive.

    That is at least my take... it generally differs from server to server... MySQL does not run exactly the same way as MSSQL or Oracle. (Which means I've generalized.)
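
    If it helps to see it in code, the "check the cache before the disk" idea is roughly this toy direct-mapped page cache in C (page_t, get_page, and disk_read are invented names for illustration, not any real server's internals):

    #include <string.h>

    #define PAGE_SIZE   8192    /* MS SQL Server's page size, per the thread */
    #define CACHE_SLOTS 1024

    typedef struct {
        long page_no;           /* which disk page this slot holds; -1 = empty */
        char data[PAGE_SIZE];
    } page_t;

    static page_t cache[CACHE_SLOTS];

    /* stand-in for the slow path: pulling the page off a RAID array or disk */
    static void disk_read(long page_no, char *out)
    {
        memset(out, 0, PAGE_SIZE);  /* pretend we fetched 8K from the platters */
    }

    static void cache_init(void)
    {
        int i;
        for (i = 0; i < CACHE_SLOTS; i++)
            cache[i].page_no = -1;
    }

    /* check recently cached pages first; only hit the disk on a miss */
    static char *get_page(long page_no)
    {
        page_t *slot = &cache[page_no % CACHE_SLOTS];
        if (slot->page_no != page_no) {     /* miss: evict whatever is here */
            disk_read(page_no, slot->data);
            slot->page_no = page_no;
        }
        return slot->data;                  /* hit: memory speed, no I/O */
    }

    int main(void)
    {
        cache_init();
        get_page(42);   /* first access goes to "disk" */
        get_page(42);   /* second access is served from the cache */
        return 0;
    }

    Real servers use LRU-ish eviction and pools of gigabytes rather than a fixed slot per page, but the hit/miss split is the core idea.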
  • by embarcadero ( 568047 ) on Wednesday March 03, 2004 @01:47PM (#8453482)
    In addition, Anand used sub-optimal memory in the Opteron, and a non-NUMA config. Looks like he had some Intel "assistance" in designing the "benchmarks" as well... the database read/write ratio is not at all realistic and favors the Xeon.
  • Re:The Usual Problem (Score:1, Interesting)

    by Anonymous Coward on Wednesday March 03, 2004 @01:48PM (#8453499)
    Bah. In all likelihood, even Unix/Linux users aren't going to be custom-compiling their inventory system for optimal performance.

    You're right that these benchmarks only cover RDBMS-based client-server shrinkwrap apps, but that's a pretty common case. When Oracle and MS ship their DB Servers compiled for x86-64, the tests should be run again, but until then business customers will need to work with what they've got.

    Claiming that this is "mainly only of interest to Microsofties" is borderline flamebait, especially since one of the marketing points of the Opteron is excellent performance with your existing software.
  • by hackstraw ( 262471 ) * on Wednesday March 03, 2004 @01:55PM (#8453584)
    I have not heard that the Itaniums were slow since Intel came out with the Itanium 2. Yes, the Itanium 1s were dog slow. I've got 65 Itanium 2 processors downstairs, and I'm happy with them. For our purposes (crunching numbers on very large datasets) the Itanium 2 was the platform of choice because of its 64-bit addressing, high memory bandwidth, and good processor speed.

    I wish we could get by with cheap Xeons, but they just don't cut the mustard for our applications.
  • by nolife ( 233813 ) on Wednesday March 03, 2004 @02:03PM (#8453663) Homepage Journal
    I'll soon be finishing up my extensive long-term testing of an Apollo 735 HP-UX Unix workstation with a 125MHz PA-7200 PA-RISC processor. I'll post the results for you if you are in the market for one of these. You can still find them on eBay for about $5. ;)

  • I don't start to get interested until there is at least a 20% difference, and even then that would only determine my choice for an initial purchase. I would never upgrade a system unless there was at least a 100% speedup; preferably 200-400% is worthy of doing an upgrade.

    Good post. However the one comment that I didn't agree with was the above.

    My guess is that you aren't involved with any applications where compute time = money. When you are running simulations (say, large CFD runs for example) that can take days or weeks per run, a 50% improvement in speed is a major breakthrough if you get it without touching code, i.e. hardware upgrades. Optimizing code is great and all, but it can introduce bugs and other unexpected behavior. Plus, we development people are pricey. Hardware is relatively cheap. Add in the fact that you generally get charged for CPU time on these big machines (or clusters of little ones), and *any* speed that you get is a major breakthrough, i.e. you can run more simulations in the same time for the same money.

    In your environment, it's probably okay for you to only upgrade every three years when you get a doubling or more of performance, but there are environments where any speed increase is highly sought after, even if it's 20%. I suspect this is true of the special effects industry too, guys like Pixar, ILM, etc. If they can render more frames in the same time, or even render the frames in the same time at a higher level of detail, that's worth paying for. Perhaps someone who knows more would care to enlighten us; I'm curious if I'm interpreting that correctly.

  • by tangent3 ( 449222 ) on Wednesday March 03, 2004 @02:08PM (#8453722)
    Anandtech had an Opteron vs. Xeon test earlier too, AMD Opteron 248 vs. Intel Xeon 2.8: 2-way Web Servers go Head to Head [anandtech.com], where the Opteron trashes the Xeon handily. I guess that was more focused towards web serving, and now that Anandtech intends to replace their forums database server, they naturally based their latest test, "AMD Opteron vs. Intel Xeon: Database Performance Shootout," on database performance.

    "In a 4-way configuration AMD's Opteron cannot be beat, and thus it is our choice for the basis for our new Forums database server. We'll be documenting that upgrade in a separate article so stay tuned."
  • Re:IA-32e vs IA-32 (Score:3, Interesting)

    by Octorian ( 14086 ) on Wednesday March 03, 2004 @02:09PM (#8453730) Homepage
    Well, any basic standardized and commoditized integrated I/O is a good thing. I'd also be fine with on-board Ethernet (good chipset, of course) and on-board SCSI/FC/SATA/etc. But yeah, I'd rather add my own cards for video/sound.
  • by hackstraw ( 262471 ) * on Wednesday March 03, 2004 @02:24PM (#8453905)
    My guess is that you aren't involved with any applications where compute time = money.

    You're right. I work with scientists that run programs for up to 5 days over 10 to 20 processors. We get our money upfront, but everyone wants their answers quickly.

    When you are running simulations (say, large CFD runs for example) that can take days or weeks per run, a 50% improvement in speed is a major breakthrough if you get it without touching code, i.e. hardware upgrades.

    So you're saying that it's more cost-effective for you to upgrade every 6 to 9 months? That's fine if it pays off for you. You probably don't have that many processors to worry about, either. Trust me, it's not trivial to upgrade 60 to 120 processors that often, even if the machines were given to me.

    In your environment, it's probably okay for you to only upgrade every three years when you get a doubling or more of performance, but there are environments where any speed increase is highly sought after, even if it's 20%.

    Moore's "law" appears to still be holding true with a doubling every 18 months. Even I'm not slack enough to only upgrade every 3 years :)

    Add in the fact that you generally get charged for CPU time on these big machines (or clusters of little ones), and *any* speed that you get is a major breakthrough, i.e. you can run more simulations in the same time for the same money.

    CPU time charges are often proportional to the speed of the machine, and it's CPU time, not wall time (there is a difference).
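
    If the difference isn't obvious, a tiny C program (purely illustrative) shows it: a sleeping process racks up wall time but almost no CPU time, so a charge-by-CPU-time center doesn't bill you for the wait.

    /* cputime.c - CPU time vs. wall time */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        clock_t c0 = clock();       /* CPU time used by this process */
        time_t  w0 = time(NULL);    /* wall-clock time */
        volatile long x = 0;
        long i;

        sleep(2);                   /* 2s of wall time, ~0s of CPU time */

        for (i = 0; i < 200000000L; i++)    /* burns both CPU and wall time */
            x += i;

        printf("cpu  time: %.2fs\n", (double)(clock() - c0) / CLOCKS_PER_SEC);
        printf("wall time: %lds\n", (long)(time(NULL) - w0));
        return 0;
    }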
  • So you're saying that it's more cost-effective for you to upgrade every 6 to 9 months? That's fine if it pays off for you. You probably don't have that many processors to worry about, either. Trust me, it's not trivial to upgrade 60 to 120 processors that often, even if the machines were given to me.

    It depends. I'm not trying to make a blanket statement that this is always the case, but yes, I can certainly envision scenarios in which the benefit to customers is worth the price of the upgrade when you get less than the 200% return you mentioned. I hope I didn't come across as saying upgrade every 9 months for the latest and greatest.

    As far as having too many processors, I work in one of the premier supercomputing centers in the world (shameless link [nasa.gov]), and we probably have more processors in a single computer than some people have at their entire sites. So I have *some* understanding of the logistics involved. Having said that, you're absolutely correct: replacing hundreds or thousands of CPUs isn't something that you do every year (or even every three).

    As before, you make good points, and I'm not really disagreeing with most of what you're saying. Just trying to point out that it can make sense to upgrade when you get less than a 200% or 400% return on speed. Especially when you have world-wide support dependencies, like providing the CPU time for the Return to Flight initiative.

  • by flaming-opus ( 8186 ) on Wednesday March 03, 2004 @02:40PM (#8454074)
    They don't have HyperTransport from every CPU to every other. An Opteron has 3 HT links. Since some of those need to be used to connect to the system I/O devices, you have 2 left for your mini-NUMA system. Thus the processors have to be connected in a ring.

    Linux is capable of intelligent memory layout. It can migrate data to the processor on which the threads are running, is intelligent about which processor runs which threads, and can make duplicate copies of read-only data. It works reasonably well. (Some of this is the stuff SCO is in a huff about.) However, I doubt this functionality is turned on in any off-the-shelf distros. If the benchmarker compiled a kernel with NUMA in mind, this would work; otherwise I doubt it.

    Incidentally, since the two streams of a hyperthreading-capable Xeon share the L2 and L3 caches, they benefit from NUMA grouping also.
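
    For the curious, this is roughly what explicit placement looks like with libnuma (a minimal sketch; it assumes a NUMA-enabled 2.6-era kernel with the numactl library installed, and it skips most error handling):

    /* numa_local.c - keep a thread and its working set on one node.
     * Compile with: gcc numa_local.c -lnuma
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <numa.h>

    int main(void)
    {
        size_t len = 64 * 1024 * 1024;
        size_t i;
        char *buf;
        int node = 0;

        if (numa_available() < 0) {
            fprintf(stderr, "kernel has no NUMA support\n");
            return EXIT_FAILURE;
        }

        /* run this thread only on CPUs that belong to node 0... */
        numa_run_on_node(node);

        /* ...and take the buffer from node 0's local memory, so reads
         * never have to cross a HyperTransport hop to another CPU */
        buf = numa_alloc_onnode(len, node);
        if (buf == NULL)
            return EXIT_FAILURE;

        for (i = 0; i < len; i += 4096)     /* fault the pages in locally */
            buf[i] = 1;

        numa_free(buf, len);
        return EXIT_SUCCESS;
    }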
  • by Anonymous Coward on Wednesday March 03, 2004 @02:43PM (#8454117)
    One thing I did not understand is how the 3MB cache helps with a big database query. I thought that would thrash the cache, and there would not be much performance gain if you are working with a bigger code/data set.

    You are correct.

    I recently wrote some matrix code in C. Play with the sizes of DROWS, DCOLS, MROWS, and MCOLS some.

    /* ht.c
     * This is a C program to test cache hits/misses
     * using matrix multiplication.
     */

    #define _XOPEN_SOURCE 500   /* for rand_r */
    #include <stdlib.h>

    #define DROWS 256
    #define DCOLS 128
    #define MROWS DCOLS
    #define MCOLS 256

    int
    main(void)
    {
        unsigned int seed = 8123;

        int i, j, k;
        double d[DROWS][DCOLS];
        double m[MROWS][MCOLS];
        double r[DROWS][MCOLS] = {0};

        /* Fill the matrices with values in [0, 1]. The (double) cast
         * matters: integer division by RAND_MAX would give all zeros. */
        for (i = 0; i < DROWS; i++)
            for (j = 0; j < DCOLS; j++)
                d[i][j] = (double)rand_r(&seed) / RAND_MAX;

        for (i = 0; i < MROWS; i++)
            for (j = 0; j < MCOLS; j++)
                m[i][j] = (double)rand_r(&seed) / RAND_MAX;

        /* i-j-k order: for a fixed i, row d[i][*] stays hot in cache
         * and is reused MCOLS times by the j loop. */
        for (i = 0; i < DROWS; i++)
            for (j = 0; j < MCOLS; j++)
                for (k = 0; k < MROWS; k++)
                    r[i][j] += d[i][k] * m[k][j];

        /* j-i-k order: same arithmetic, same result, but the data that
         * gets reused across the outer loop changes, so with the right
         * matrix sizes this version misses the cache far more often.
        for (j = 0; j < MCOLS; j++)
            for (i = 0; i < DROWS; i++)
                for (k = 0; k < MROWS; k++)
                    r[i][j] += d[i][k] * m[k][j];
        */

        return EXIT_SUCCESS;
    }


    Compile and run it using time under bash (leave optimization off, or the compiler may throw away the unused result and the loops with it):
    bash-2.03$ gcc -o ht ht.c
    bash-2.03$ time { ./ht; }

    real 0m2.108s
    user 0m2.080s
    sys 0m0.010s

    Vary the sizes of the matrices and swap to the second (commented-out) multiplication routine. They produce the same result, but one will generate more cache misses than the other (if you get the matrix sizes right), resulting in a 50% slowdown on an Athlon XP 1900+ or an unintelligible crap pile on a Sparc running Solaris (version unknown).

    Make the matrices too small and you can't tell the effects of the cache because the operations are too fast. Make the matrices too large (not hard on a Sparc, as I just learned) and the cache hits/misses won't make a noticeable difference in time, assuming you're not using a Sparc and you succeed in not segfaulting.
  • by Glock27 ( 446276 ) on Wednesday March 03, 2004 @02:47PM (#8454164)
    Opteron will never hit the big-tin niche, simply because it was never designed, nor intended to do so.

    Heh, I guess the Cray Red Storm [amd.com] system kind of shoots down that theory... ;-)

    Actually the design of Opteron beats Itanium for HPC, and the relative number of Opteron vs. Itanium HPC design wins bears that out nicely.

  • Re:Cache always help (Score:3, Interesting)

    by Vaystrem ( 761 ) on Wednesday March 03, 2004 @02:48PM (#8454176)
    Cache may always help, but this is not as straightforward a statement as you indicate. It is highly dependent upon the architecture of the processor.

    The reason the 4MB Xeons are significantly outperforming the 2MB Xeons is the Xeons' shared-bandwidth architecture. The cache makes up for the lack of access to data via the FSB and keeps the very deep pipeline of the P4-series processors full. The long pipeline is the reason that cache misses impact the speed of the P4s so much, despite Intel's attempts to improve branch prediction. Simply look at the P4 Celerons to see how they can be so utterly trounced by regular P4s at the same clock speed with little architectural difference but cache size.

    Opterons/Athlon 64s do not benefit as much from a boost in L2 cache. A perfect example of this is the Athlon 64 3000+ vs. the Athlon 64 3200+: the 3200+ has 1MB of L2, the 3000+ has 512K, and both run at 2GHz. The performance difference between the two (in most benchmarks) is less than 10%. See Anand's review of the Athlon 64 3400+ [anandtech.com].

    So a doubling of cache at the same processor speed results in maybe a 10% boost in performance.

    Finally, some applications are more sensitive to L2 cache sizes than others.

    Therefore your statement "more L2 cache always helps" is strictly true, but the degree of performance increase must be weighed against the increase in cost. And this benefit will change from processor to processor and application to application.
  • by Twillerror ( 536681 ) * on Wednesday March 03, 2004 @02:59PM (#8454297) Homepage Journal
    SQL servers generally use a page-caching system. That is, the database exists on the hard drive as a series of xK pages (8K pages for MS SQL Server).

    As a page gets loaded from the hard disk, it is loaded into the server's cache. Any reads/writes are done to memory and not the disk. Background processes write the dirty pages to disk. All transactions are written to the transaction log, so if the server crashes before this happens, recovery can happen when the DB starts back up.

    This means that a large portion of the data is already in memory. Servers usually pre-allocate gigs of memory for this purpose; the more the better, and a big reason for 64-bit on large DBs.

    Under x86 caching schemes, the CPU does speculative loads. It "guesses" what memory the processor is going to need and starts loading it into high-speed cache. This is perfect for a DB, since most of the time the DB pages you need are in sequential order, especially when you are talking about pages that only contain indexing data. The query usually does most of its work using indexes, and then at the end will actually "look up" the data.

    So bigger caches mean that these big binary trees get loaded into cache, and the algorithms can loop through them faster and pull off the cache.
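
    That log-then-flush ordering can be sketched in a few lines of C (a toy: wal_append and flush_dirty are invented names and the "log" is just stdout, but the ordering is the point - log first, dirty the page in memory, write it to disk later):

    /* wal.c - toy write-ahead logging with deferred page flushes */
    #include <stdio.h>

    #define NPAGES 4

    struct page {
        char data[8192];
        int  dirty;             /* modified in memory, not yet on disk */
    };

    static struct page pool[NPAGES];

    /* write-ahead rule: the log record reaches stable storage first */
    static void wal_append(int page_no, char value)
    {
        printf("LOG:   page %d <- '%c'\n", page_no, value);
    }

    /* then the in-memory copy changes and is merely marked dirty */
    static void update_page(int page_no, char value)
    {
        wal_append(page_no, value);
        pool[page_no].data[0] = value;
        pool[page_no].dirty = 1;
    }

    /* background writer: flush dirty pages whenever convenient; a crash
     * before this runs is fine, because replaying the log recovers them */
    static void flush_dirty(void)
    {
        int i;
        for (i = 0; i < NPAGES; i++)
            if (pool[i].dirty) {
                printf("FLUSH: page %d to disk\n", i);
                pool[i].dirty = 0;
            }
    }

    int main(void)
    {
        update_page(1, 'x');
        update_page(3, 'y');
        flush_dirty();
        return 0;
    }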

    Take this into the Itanium world and we can start to get even better performance. The thing people tend to forget about the Itanium is that you can tell it to load your data into cache. So an optimized DB server can have it read a large section of code into the cache while it does calculations. The Itanium allows 3 instructions to be loaded at once. Once hyperthreading is put into the Itanium, you will see these DB apps really fly. The Itanium is showing good promise in this arena, even at 1.5GHz. Clock that up to 2 to 3GHz with multiple hyperthreaded cores and we are going to have one fast chip for DBs.

    The big issue is the price: if you are going to spend that much money, go with the proven Sun/IBM. The Itanium is set to replace the Xeon by 2007 (I'm guessing before then, because of scaling issues on the Xeon and x86 emulation software giving decent 32-bit performance for legacy apps).

    I really think Intel needs to push the Itanium. MS likes it, Linux likes it, a few DB servers like it, and a slew of other high-performance, server-based things like it. I don't see how Intel is going to scale against the Oppie, not unless they stick an on-die memory controller on it. Hyperthreading and the new thread-based instructions will help, though. Should be an interesting battle. I'll be happy if AMD can get 10% to 20% market share; then we will see some true competition and innovation like we have on the desktop.

  • Re:Comparing Prices (Score:2, Interesting)

    by Anonymous Coward on Wednesday March 03, 2004 @02:59PM (#8454304)
    That's nice, but I'm not sure that difference will show up in the system price -- big vendors like Dell and IBM get huge discounts off Intel list.
  • Not Surprising (Score:3, Interesting)

    by RAMMS+EIN ( 578166 ) on Wednesday March 03, 2004 @03:10PM (#8454436) Homepage Journal
    ``Results were a little varied as 4-way Opteron systems seemed to fare the best, although dual Xeon configurations almost always beat dual Opterons.''

    Varied, perhaps, but not surprising. AMD has integrated the memory controller on the CPU, which could explain their getting better when the number of CPUs increases (the Intels being held back by having to go through the same memory controller).

    As for Intel winning out on the dual CPU systems, well, they are ahead of AMD in the CPU speed race, aren't they?
  • by prisoner-of-enigma ( 535770 ) on Wednesday March 03, 2004 @03:12PM (#8454460) Homepage
    Opteron systems seemed to fare the best, although dual Xeon configurations almost always beat dual Opterons.

    Perhaps the benchmarks show the 2P Xeons doing OK against 2P Opterons, but for the price of two Xeon MP chips you can buy five Opteron 848s. Rounding that down, I wonder how well the 2P Xeon does against the 4P Opteron? Oops, Anand already thought of that. He says "it would not be pretty." Indeed.
  • by NerveGas ( 168686 ) on Wednesday March 03, 2004 @03:22PM (#8454577)
    Yes, the tests weren't exactly apples-to-apples - the outcomes are actually much better for AMD than the graphs initially suggest.

    The graphs mean that Opterons with a "measly" 1MB of cache are beating out Xeons that have (a) four times the cache, (b) a 50% higher clock speed, and (c) a price tag that's three times higher.

    Hats off to AMD. In times past (K2/K3), price was the only thing they had better than Intel. Now they've got both price and performance.

    steve
  • by Loki_1929 ( 550940 ) on Wednesday March 03, 2004 @03:42PM (#8454832) Journal
    "People bash the x86 architecture and at the same time, bash anything that isn't x86."

    Well, I think that people look at the x86 architecture, and they can see the many, many horrible hacks that have been used to sustain it. That much is pretty obvious if you spend even 10 minutes looking over things. You sit there scratching your head and going, "What the hell? Why'd they do that?", and then realize it's because something, somewhere, was broken until they did it. The reason people don't like to start looking into replacement architectures is exactly as you expressed: the must-have software. You can try running that software under emulation, but the best architecture in the world is always going to take a performance nosedive when running code under emulation. I can look at what IBM has been doing, or even at what Intel was doing with EPIC back in the day, and say, "wow, that's pretty cool." But what I can't do is put down the x86, toss all the old software, and hope that all the new software, written for a completely new architecture, is going to work in some sort of reliable fashion. What you really get with x86 is 20 years of experience, and thus a measure of predictability. In essence, you're paying for predictable problems (much better than unpredictable ones) with an old, poor architecture.

    "The AMD solution doesn't do away with x86"

    AMD64 actually does get rid of a lot of the garbage in x86 that is no longer in use. Take a look at the presentation (link at Ace's [aceshardware.com]) by the guy who designed AMD64. He was actually pretty thrilled (well, as thrilled as this guy gets) about being able to dump a lot of the cruft x86 has accumulated. Unfortunately, many things had to remain intact, for the obvious reason of compatibility. I have to warn you, though: the guy from the AMD presentation is a real ball of fire. (Although the ex-Intel guy from the other presentation was pretty interesting and funny.)

  • by CritterNYC ( 190163 ) on Wednesday March 03, 2004 @03:55PM (#8455005) Homepage
    But yes, I agree with you, AMD cannot neglect the desktop market, unless it makes AMD64 cheap enough that it can put them in all computers (which I think is their inevitable goal). Hell, once eMachines starts stocking them in Computer City, I think they'll have achieved it.

    The Mobile Athlon64 3000+-based eMachines M6807 [circuitcity.com] laptop is available at Circuit City and Best Buy (M6805).

    The Athlon64 3200+-based Compaq s6900NX [circuitcity.com] is also available at Circuit City.

    The Athlon64 3200+-based eMachines T6000 [bestbuy.com] is available at Best Buy.

    That good enough?
  • re: AMD vs Intel (Score:1, Interesting)

    by Anonymous Coward on Wednesday March 03, 2004 @04:32PM (#8455470)
    Has anyone noticed how the comparisons of Intel vs. AMD always show AMD slightly below Intel? Has anyone ever suspected that AMD might be faking that it runs at a slower clock speed, with less cache, just to get some people saying that AMD "whoops" Intel's ass? There's something not right about an 800-pound gorilla getting beat up by a monkey.
  • by Anonymous Coward on Wednesday March 03, 2004 @04:34PM (#8455498)
    FWIW, I've been disappointed with XP regardless of the hardware :)

    XP: able to cripple the fastest processor ;)

    Also, in my opinion there was no significant difference between the two platforms regarding their speed on this benchmark.

    Man, do I disagree! Yeah, the performance of the systems was close, as long as you ignore the fact that the AMDs were running at 2/3 the clock speed with 1/4 the cache! AMD clearly has a better design than Intel. All of the technology that Intel has will eventually belong to AMD as well, and their design obviously takes better advantage of any technology!

    Has anyone had a look at the datasheets for AMD and Intel? Does the lower clock speed translate into an equivalent savings in power consumption? Or is that offset by the larger geometry?
  • Re:IA-32e vs IA-32 (Score:3, Interesting)

    by DJStealth ( 103231 ) on Wednesday March 03, 2004 @05:04PM (#8455912)
    They will most likely require entirely new sockets, as they are 64-bit chips as opposed to 32-bit.

    The "32e" means it's an extended version of the machine/assembly code, modified from IA-32 to work on 64-bit processors while still keeping backwards compatibility.
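
    As an aside, software can probe for the extensions the same way on both vendors' chips. Here is a minimal sketch using GCC inline assembly (the long-mode flag is bit 29 of EDX from CPUID extended leaf 0x80000001; whether the motherboard and chipset can host the chip is a separate question):

    /* lmcheck.c - detect 64-bit (long mode) extensions via CPUID */
    #include <stdio.h>

    static void cpuid(unsigned int leaf, unsigned int *a, unsigned int *b,
                      unsigned int *c, unsigned int *d)
    {
        __asm__ volatile ("cpuid"
                          : "=a"(*a), "=b"(*b), "=c"(*c), "=d"(*d)
                          : "a"(leaf));
    }

    int main(void)
    {
        unsigned int a, b, c, d;

        cpuid(0x80000000u, &a, &b, &c, &d);     /* highest extended leaf */
        if (a < 0x80000001u) {
            printf("no extended CPUID leaves; no 64-bit extensions\n");
            return 0;
        }

        cpuid(0x80000001u, &a, &b, &c, &d);
        printf("64-bit extensions (long mode): %s\n",
               (d >> 29) & 1 ? "yes" : "no");   /* EDX bit 29 = LM */
        return 0;
    }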
  • by Eneff ( 96967 ) on Wednesday March 03, 2004 @05:52PM (#8456481)
    Because it's not as simple as X vs. 2X computers. The chip, while significant, is not the total cost of the computer. Dual-Xeon motherboards, for example, run 200 dollars less than equivalent Tyan MBs (from looking at Newegg, anyway).

    Because even if the chip is a significant portion of the cost of building the computer, it is only a small fraction of the total cost over the useful lifetime of the cluster.

    Because one has to benchmark one's own problem set. It's possible that one set of instructions is better optimized for Xeons.

    Because the fewer nodes in a cluster, the more efficient each individual node is. A small performance increase may be substantial enough to require fewer nodes, bringing the numbers into line.

    Because if it's big enough, Intel might throw in a few days with an engineer to sweeten the deal. (But then again, so may AMD.)

    The numbers arguments get too complex to make such an important decision a no-brainer.
