Hardware

Intel's Pentium 4 3.4GHz Processors Reviewed 226

EconolineCrush writes "In one of the most gratuitous benchmarking indulgences I've seen, Tech Report has tested Intel's new Northwood and Prescott Pentium 4 3.4GHz processors against sixteen competitors ranging from the relatively old school Athlon XP to the opulent Pentium 4 Extreme Edition, with plenty of Athlon 64 action thrown in for good measure. Performance is tested in a wide range of applications, including gaming, rendering, image processing, media encoding, speech recognition, and scientific number crunching. Even if you're not interested in Intel's latest Pentium 4s, the review nicely shows where 18 of the fastest desktop chips from AMD and Intel stack up against each other."
This discussion has been archived. No new comments can be posted.

  • Speed (Score:4, Interesting)

    by Peden ( 753161 ) on Monday March 22, 2004 @08:36AM (#8632992) Homepage
    While all that processor speed is mighty good, who needs top-of-the-line equipment anymore? The new games all rely on the GFX card rather than the CPU. Any suggestions, other than the fact that Intel is keeping up with Moore's law?
    • Re:Speed (Score:2, Insightful)

      by jonjohnson ( 568941 )

      If you end up building your own system, you're right. However, there are still plenty of low-end graphics cards that companies stick into computers just to save another 50 bucks in the manufacturing cycle. When this happens, you still have the "top-of-the-line" graphics chipset, but the board doesn't have its own processor. Without the onboard processor, the CPU does matter.

      I remember a story in Wired a year or two ago that detailed how nvidia's CEO (or was it CTO.. it was a while ago) envisioned most of

      • Re:Speed (Score:3, Insightful)

        by EulerX07 ( 314098 )
        This doesn't make sense. So you're saying that since some of the computers we can buy will come with a lousy graphics card, there's a use for those CPUs that cost $750+?

        How about buying the version of the computer with the $150 CPU and switching the video card for a $150 mid-range card from ATI or Nvidia? You'd wipe the floor with the 3.4EE computer with its lousy graphics card, and save $450.

        And also, how can you have both a "low-end graphics card" and a "top-of-the-line" graphics chipset? No offense, but the m
    • Re:Speed (Score:5, Insightful)

      by nacturation ( 646836 ) <nacturation AT gmail DOT com> on Monday March 22, 2004 @08:42AM (#8633010) Journal
      While all that processor speed is mighty good, who needs top-of-the-line equipment anymore? The new games all rely on the GFX card rather than the CPU. Any suggestions, other than the fact that Intel is keeping up with Moore's law?

      Many non-game apps are CPU bound, and speed is always desired in these situations. Examples include rendering, video compression, SETI@Home, etc. Likely you don't need a faster processor, but it doesn't mean that the business world sees it the same way. Heck, maybe some day these processors will power your graphics card too!
      • Re:Speed (Score:5, Interesting)

        by Kjella ( 173770 ) on Monday March 22, 2004 @09:12AM (#8633127) Homepage
        Many non-game apps are CPU bound, and speed is always desired in these situations. Examples include rendering, video compression, SETI@Home, etc. Likely you don't need a faster processor, but it doesn't mean that the business world sees it the same way. Heck, maybe some day these processors will power your graphics card too!

        Not to mention, many of the games are CPU-bound because of the minimum specs - you can up the gfx from 640x480x16bit -> 1600x1200x32bit, but there's no setting the AI to "dumb -> average -> smart". I'm sure there are lots of interesting ideas in AI (groups, formations, tactics, responses to movement/sound, distractions etc.) or game world design (i.e. things happen to the world around you, not just what's being rendered on the screen) that'd love to have more power thrown at them.

        Kjella
        • Re:Speed (Score:3, Informative)

          by ionpro ( 34327 )
          That's hardly true any more. Any modern game will allow you to set AI difficulty/AI CPU time, sometimes separately. For instance, Battlefield (any variant) allows you to set "Overall bot difficulty" and "CPU time given to AI" (5~25%). I'm not entirely sure, but I believe Unreal Tournament 2004 has a similar setting.
      • Vector processor (Score:3, Insightful)

        by Colin Smith ( 2679 )
        Well, so far you've made the case for a vector processor, or an add on like AltiVec. How's about making one for a faster CPU?

      • Re:Speed (Score:3, Insightful)

        by shotfeel ( 235240 )
        Examples include rendering, video compression,

        And let's not forget the fact that some of us like to listen to our own background music while playing a game on the computer during the time it takes for our favorite DVD authoring applications to encode the video.
    • Let me tell you about my day at work.

      Just generating the RMI stubs for one small module with WebSphere's lobotomized Ant tasks takes anywhere between 1 and 3 minutes, depending on the module I'm compiling. On a Pentium 4 2.26 GHz. It's a purely command line process, so how would a GPU help?

      Deploying anything on the Solaris test server takes ages. Deploying the GUI takes literally half an hour, punctuated by needing to click on the "next" button in that stupid wizard approximately once every 5 minutes.

      Exp
    • Games like MOO3 and SimCity 4 still utilise the CPU a lot.
    • Here's where the extra speed of the CPU becomes useful: editing multimedia files.

      With the proliferation of digital still cameras storing large-sized picture files and MiniDV/MicroDV digital camcorders where you can copy the video recording in digital form to the computer for editing, there is now a serious need for faster and faster CPU's to edit and process these multimedia files at a reasonable speed. Even today's so-called mid-range AMD Athlon XP 2400+ CPU is getting somewhat hard-pressed to do such wor
  • Naming? (Score:3, Interesting)

    by Davak ( 526912 ) on Monday March 22, 2004 @08:37AM (#8633000) Homepage
    I thought Intel was killing their labeling of chips by speed...

    Davak
    • I thought Intel was killing their labeling of chips by speed...

      That doesn't mean the chips don't still have an internal clock speed. I guess this is the engineering benchmark rather than the marketing benchmark.
  • Nice In-Place Ad (Score:2, Insightful)

    by Davak ( 526912 )
    Thanks to Corsair for providing us with memory for our testing. If you're looking to tweak out your system to the max and maybe overclock it a little, Corsair's RAM is definitely worth considering.

    Boy... I wonder how much memory Corsair donated for that wonderful little plug.

    I can tolerate Coke planting their product in sit-coms... but I don't think I would appreciate my newscaster saying "Coke is so refreshing" in the middle of a news story.

    Planting an obvious ad in the middle of "journalism" is just
    • Planting an obvious ad in the middle of "journalism" is just wrong.

      What's so wrong about it? If they were testing RAM, then there would obviously be an issue. Or if the RAM were supplied by AMD or Intel. A manufacturer supplied a common piece of material for a comparison and the reviewers are acknowledging it; it doesn't hurt their credibility one bit. Now if the review were sprinkled with "and such and such cpu performed so well on the memory benchmarks because of this wonderful Corsair RAM ....
    • by nacturation ( 646836 ) <nacturation AT gmail DOT com> on Monday March 22, 2004 @08:51AM (#8633043) Journal
      Planting an obvious ad in the middle of "journalism" is just wrong.

      I don't see anything wrong about it. Imagine if you ran a tech review site and couldn't afford to equip all your various test machines with gigs of RAM each. Wouldn't you approach a company and ask if they could perhaps donate (or at least loan) you the equipment you needed? And, if they did such a thing, wouldn't it be nice to credit them for helping you out?

      I fail to see how this is a "plant". If this were a review of sound cards and, right in the middle of the article, it said "Hey, your system needs more memory... purchase Corsair RAM today!" then that would be a plant. It would be no different than somebody comparing operating systems and thanking IBM/Dell/whoever for loaning them the equipment to do a side-by-side comparison with realtime parameter tweaking rather than having to tediously reformat a single machine every time they want to test a new config.

      It's the lost art of the professional "thank you".
    • Corsair arguably makes the best RAM in the world, and I have a hard time seeing why this plug is so shameless. If it were Spectek or something being plugged, I might be a little more cautious.

      DISCLAIMER. I USE CORSAIR RAM IN THE PCS I BUILD, AND FIND IT TO BE HIGH QUALITY.
      • LOL. I was wondering if the line in question could be considered part of a "full disclosure" statement. In most cases it would be considered inappropriate to not state what hardware was used and where it came from.

  • Missing 400Mhz....? (Score:5, Interesting)

    by inphinity ( 681284 ) on Monday March 22, 2004 @08:41AM (#8633006) Homepage
    Besides this test being ridiculously comprehensive, did anybody else notice the stat differences between the P4 3.0GHz and 3.4GHz?

    Or, more precisely, the lack of differences?
    I wonder, is this just an inability of benchmark software to challenge a processor at such a high clock speed, or are these processors actually the same thing with shinier packaging?

    Thoughts?

    • Comment removed (Score:5, Informative)

      by account_deleted ( 4530225 ) on Monday March 22, 2004 @09:48AM (#8633351)
      Comment removed based on user account deletion
      • I disagree slightly with the order in which you put your options. I think memory is the single most important aspect. When Windows kicks off some bullshit service task in the background, memory is the #1 thing that will save you, because if you are paged from here to Nebraska (unless you're in Nebraska, in which case you'll need to pick some point further away), when a new process loads it doesn't matter which task has priority; disk access will (in effect) go to the program paging you out. On many games, incr
    • Add to that the extra power usage (93W Thermal Design Power vs. 103W) and you have a real argument against using the faster CPU. My guess is that most of the tasks tested are bus bound more than CPU bound.
  • Initial observations (Score:5, Informative)

    by Pidder ( 736678 ) on Monday March 22, 2004 @08:48AM (#8633033)
    A quick glance at the system setups shows that they have used RAM with almost the same CAS latencies in all the setups. The AMD CPUs benefit from low CAS to a greater extent than the P4. When an Intel fanboy site like Tomshardware wants the P4 to beat the Athlon they usually use very slow RAM on the Athlon setup, which is of course overlooked by most consumers.
    • The other thing they do is rely heavily on synthetic benchmarks like PCMark. A little while ago The Inquirer ran a story that pointed out that a heavily overclocked P4 running at over 4GHz got impossibly low scores on some synthetic benchmarks. This turned out to be because the Windows internal clockspeed counter was just 32 bits and there was an integer wraparound. What they failed to notice was that this meant that these so-called benchmarks were basing their scores on the reported clockspeed of the proc
    • You'd better look at the results from TomsHardware [tomshardware.com] before starting to rant about it. They are clearly drawing the conclusion that AMD is better than Intel. Do NOT bring your biased personal taste toward other websites up here!
    • What I do notice is that they seem to me to be VERY chatty about Intels. When they talk about AMD's, they seem to be quieter.

      I notice that an AMD could nuke an Intel on a test and get little press. When an Intel edges out an AMD, it gets more press.

      Perhaps it's big news when Intels can keep up with AMD's, these days! :-) I would love to see 64 bit versions of the software in these comparisons, to show how future proofed the AMD's are.

      And I know that everything I've said here is anecdotal and that I'm
      • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday March 22, 2004 @12:33PM (#8635070) Homepage Journal
        I've certainly noticed that people are more or less willing to take for granted that two things are true about AMD processors. One, they do more per cycle. This should be clear anyway, because they have more functional units and the functional units are more flexible. They also have the credibility of having built a multitude of RISC designs over the last few years, most of which have had x86 emulators on the front of them, of course. Well, and the back; it wouldn't do to fetch and never retire. Two, they are much cheaper than Intel processors. Sadly, Intel outstripped AMD in terms of bus bandwidth some time ago, and AMD is just now catching up again with the processors with integrated memory controllers, since memory traffic is now separate from the bandwidth used to the north bridge. It seems that HyperTransport should give about 1/2 the performance of the P4's FSB, but it doesn't have to carry traffic from the CPU to main memory (the FX-53 has a dual-channel DDR400 memory controller, which should be plenty of memory bandwidth for anyone). Of course DMA still has to occur via HT, but in most cases this should not be a serious problem. (Using system memory for AGP textures will still be slow, though of course still faster than loading them from disk all the time.)

        So the hot AMD processor (FX-51) is currently beating up on the hot Intel processor. The FX-53 is even more destructive (about 10% faster still) and I doubt that it will be substantially more expensive than the P4 EE 3.2GHz with its 2MB cache, which is already defeated in the benchmarks by the FX-51.

        So yes, with the release of the Hammer-core processors, it is unusual for Intel to be able to keep up with AMD these days, as it was with the Athlon before them. Remember when the Athlon's double-pumped bus made it two or three times as fast (in terms of FSB) as the Intel processors? And how Intel processors had less cache, and typically slower cache? Since the release of the K6, Intel has been running scared, even in spite of the K6's many flaws. The Athlon was the real sign that AMD was ready to compete with everyone; it's really an amazingly slick chip, and there's a multiprocessor version, so AMD targeted basically every space below supercomputing with that processor, and had good success with sales nearly everywhere. (Actually, the K6 sold quite a few units also.)
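
        (A back-of-the-envelope sketch of the bandwidth numbers being argued above, written as a tiny C program rather than taken from the article. The bus widths and clocks it assumes (64-bit quad-pumped 200MHz FSB, a 16-bit 800MHz DDR HyperTransport link, dual-channel DDR400) are typical figures for parts of that era, not numbers stated by the poster.)

        /* Rough bandwidth arithmetic only; no hardware is measured here. */
        #include <stdio.h>

        int main(void)
        {
            /* P4 "800MHz" front-side bus: 200MHz clock, quad-pumped, 8 bytes wide */
            double p4_fsb = 200e6 * 4 * 8 / 1e9;        /* ~6.4 GB/s */

            /* HyperTransport link (assumed 800MHz DDR, 2 bytes per direction) */
            double ht_link = 800e6 * 2 * 2 / 1e9;       /* ~3.2 GB/s each way */

            /* Athlon 64 FX on-die memory controller: dual-channel DDR400 */
            double ddr400x2 = 400e6 * 8 * 2 / 1e9;      /* ~6.4 GB/s */

            printf("P4 FSB:      %.1f GB/s\n", p4_fsb);
            printf("HT link:     %.1f GB/s per direction (about half the FSB)\n", ht_link);
            printf("Dual DDR400: %.1f GB/s, reached without touching HT\n", ddr400x2);
            return 0;
        }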

  • Heat (Score:5, Interesting)

    by shawkin ( 165588 ) * on Monday March 22, 2004 @08:50AM (#8633041)
    With the case open, this thing runs at 178 degrees. In a practical sense, all the other benchmarks are less important.

    It is not going to be easy to cool. It is not likely to be suitable for clustered processing. It is not likely to be particularly reliable.

    This article illustrates the diminishing returns of the current Intel CPU architecture and processes. Soon, both AMD and Intel will be forced to explore new designs similar to the IBM Power 5.

    Given the time, effort and money involved in developing a new CPU architecture, the near and medium term future may lie with IBM.
    • Re:Heat (Score:2, Interesting)

      by xeper ( 29981 )

      With the case open, this thing runs at 178 degrees. In a practical sense, all the other benchmarks are less important.

      Just wondering: Is the described setup with the case open & lying on its side actually better or worse for cooling?

      With the case closed you have a nice airflow from the frontside fan [1] over the CPU-Cooler to the backside/PSU-Fan adding to overall cooling [2]... OTOH having the case open makes it less likely that the CPU-Cooler tries to cool the CPU with the already heated air from

      • I guess it depends on how cold the ambient air is, and how many fans you have.

        Even so, case open + 75 watt desk/floorstanding fan is quite hard to beat :). Get two and you have redundancy, plus the better quality fans are pretty quiet. But case open = easy for damaging stuff to get in.

        I wonder if I could go passive cooling if I underclock my XP 2500+ Barton to 800MHz? Thing is, the other fans make noise too (GPU, power supply, HD fans), so the actual reduction probably isn't worth it.
      • Just wondering: Is the described setup with the case open & lying on its side actually better or worse for cooling?

        The only answer to that question is: It depends.

        Ideally your case will be designed to have a decent flow of cool air through the case, and in particular the cool air will flow over the processor while the warm air will be sucked out the back. This is better than having the case lying open and on its side, where the only flow of cool air will be from free convection (i.e. hot air rising

  • Since a shorthand way of describing the benchmarks is (1) Athlon FX 64 has much higher (default) memory bandwidth than P4 3.2, (2) Athlon FX does much better in UT2003 than P4 3.2 (plus other results like P4 doing better in Rendering/Encoding), can I conclude that UT2003 is memory bound?

    And thus, since I have DDR400 in my P4 3.2 and can overclock it to 220 FSB to get Sandra benchmarks of 5600 MB/sec (non-OC'd it's like 4900), can I expect to get similar UT2003/4 numbers as the Athlon FX? Obviously
  • by heironymouscoward ( 683461 ) <heironymouscowar ... .com minus punct> on Monday March 22, 2004 @08:55AM (#8633059) Journal
    1. AMD64 is better for games
    2. Intel Northwood P4 3.4 is good for general use.
    3. Intel's new Prescott is too hot.
    4. Whatever you buy will be redundant in 2 months.

    Plus ça change, plus c'est la même chose. (The more things change, the more they stay the same.)
    • redundant? why, are they going to start giving everyone lots of free CPUs? or is all software suddenly going to become much more efficient so we don't need powerful CPUs any more?

      just because you see a word lots of times on /. doesn't mean you know how to use it.
      • [quote]just because you see a word lots of times on /. doesn't mean you know how to use it.[/quote]

        Ironically, your examples do not give particularly convincing evidence you do either. Your second example is much more convincing for "obsolete". The first one is marginal. Perhaps you meant: redundant? why, is your processor going to reproduce? Will you suddenly have TWO or possibly THREE processors?
      • Redundant means kicked out of a job where I come from. "I was made redundant by a computer." Actually exactly the same as obsolete.

        Redundant also means excess to requirements, which is a twist on the previous meaning. "Those cables are redundant."

        Redundant also means _deliberately_ excess to requirements, which is yet another twist. "We use redundant cabling to ensure scalability."

        So, "redundant" as a designed attribute of a system is almost purely opposed to "redundant" as an unfortunate circumstanc
    • Just an update:

      1.) AMD64 is better for games.
      2.) Intel Northwood P4 3.4 is good for general use
      3.) Intel's new Prescott is better than Northwood for general use/video encoding, especially with SSE3 in the future, but it runs too hot.
      4.) Wait 45 days for new mobo's with new sockets and PCI Express.

  • by oingoboingo ( 179159 ) on Monday March 22, 2004 @08:56AM (#8633064)
    What I'd like to see in a huge multi-CPU benchmark like this are some Apple G5 systems thrown in too. Decent cross-platform tests are hard to find, but given OS X's UNIX underpinnings, it may be possible to come up with a set of tests that are run on x86 Linux and OS X which have an identical code base, and which do not artificially advantage one architecture over the other. One thing I've found since switching to OS X about 6 months ago...the Mac community still lacks a really good site which does solid, rigorous benchmarks of Mac hardware/software...and there are a lot of myths and misinformation doing the rounds on various Mac forums (as there are on PC forums too). A well controlled multi-CPU benchmark including some Macs could go a long way to alleviating this.
    • Try www.barefeats.com. They're pretty good about keeping their benchmarks clean, and they do some cross-platform benching (although they are primarily a Mac site).
    • by thatguywhoiam ( 524290 ) on Monday March 22, 2004 @09:51AM (#8633368)
      This site [macspeedzone.com] compares Macs to Macs... it's sort of useful.

      This site [xlr8yourmac.com] actually has a German G5 vs. Athlon benchmark posted right now.

      Neither one is like Tom's (good or bad)... but it's something.

    • Apple won't send G5s to people who intend to benchmark against other computers. I know because I've talked to their PR department. They don't want to lose any benchmark tests.

      Next time I won't mention that part of my testing.

      -Jem
    • I can see one problem with your idea: compilers. Say that everything running on our machines must be compiled with the same rev of gcc. What if gcc on Platform A isn't as highly evolved or optimized as on Platform B?

      Say that instead we use compilers made by the CPU manufacturers, since they presumably optimize the best. No, can't do that. AMD, for example, hasn't the resources of Intel and can't write their own.

      Just being the devil's advocate.
  • by nickovs ( 115935 ) on Monday March 22, 2004 @08:59AM (#8633070)
    Taking the opportunity for a moment to troll, flame bait and be an annoying Apple user, I think it's worth commenting how piss-poor the P4's LinPack performance is. The Apple [apple.com] Xserve G5 [apple.com] gets 4.5 Gigaflops out of each of its two 2GHz G5 processors when running HPC Linpack, as opposed to the 3.4GHz P4 "Extreme Edition" which peaks at just 1.3 Gigaflops. Anyone looking to do serious scientific calculations rather than just playing Quake should not be using Intel hardware these days; it just doesn't keep up with the PPC G5 for floating point.
    • Assuming you stay within the hand-tuned codes that are available. You won't see anything near that performance from compiled code, and that's with a good compiler. I've done some tests myself, and the G5 performs about the same clock-for-clock as the Opteron. And these days, the Opteron clocks a bit higher...
    • by jstott ( 212041 ) on Monday March 22, 2004 @10:41AM (#8633806)
      Taking the opportunity for a moment to troll, flame bait and be an annoying Apple user, I think it's worth commenting how piss-poor the P4's LinPack performance is.

      The AltiVec processor on the G5's is a vector coprocessor. If your compiler/library is set up to use it, that's good for a 4-5x increase in floating-point speed. Essentially the CPU does a block of mathematical operations in parallel--Cray mainframes work the same way, only more so. This is different from pipelining in that it's a true parallel operation. I think the AltiVec can do vector integer operations as well, but that won't change the LinPack performance.

      Note too that the boost from a vector processor only works on specific types of floating point operations, most notably matrix math, so it's not a magic cure-all. Also, the data has to be in the right format and loaded into appropriate registers, so it helps to have code written specifically to use vector operations (although a good optimizing compiler can still do a lot of the work for you).

      -JS
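
      (For what it's worth, here is a minimal sketch of the kind of kernel being described: a LINPACK-style multiply-add loop in plain C, with the array size and names invented for illustration. The point is simply that a vector unit such as AltiVec or SSE can retire several of these single-precision operations per instruction where a scalar unit retires one, provided a compiler or hand-tuned library maps the loop onto it.)

      #include <stdio.h>

      #define N 1024                      /* arbitrary size, illustration only */

      /* y = a*x + y -- the SAXPY kernel at the heart of LINPACK-style code */
      static void saxpy(float a, const float *x, float *y, int n)
      {
          for (int i = 0; i < n; i++)
              y[i] = a * x[i] + y[i];     /* a vectorizing compiler can map 4 of these per vector instruction */
      }

      int main(void)
      {
          static float x[N], y[N];
          for (int i = 0; i < N; i++) { x[i] = (float)i; y[i] = 1.0f; }
          saxpy(2.0f, x, y, N);
          printf("y[10] = %.1f\n", y[10]);  /* expect 21.0 */
          return 0;
      }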

    • You're comparing Apples and oranges here! (no pun intended... honest! :> ).

      The Linpack code used in this test was really designed to demonstrate the memory subsystem characteristics of P4 vs. the Athlon, not to crunch data. This should be blatantly obvious even to someone making a troll/flamebait/annoying Apple user post, since the benchmarks you quoted show 3.2GHz Xeon processors (nearly identical to the P4) crunching at up to 4.35GFlops.

      A more accurate view of the P4's capabilities for scientific co
  • Scale matters! (Score:5, Interesting)

    by IceFox ( 18179 ) on Monday March 22, 2004 @09:09AM (#8633108) Homepage
    After reading it I get the following:

    1) If you are doing anything in Lightwave by all means don't use AMD's XP :) There must be some major tweak they are missing.

    2) Encoding type work XP seems to be the best bang for the buck (right now)

    3) I had a difficult time understanding the results because most of the graphs didn't have a scale to go by. Some of them, like the games, you could figure out that 500fps is twice as fast as the slowest at 250fps, but in either case you didn't care. With LAME, from the looks of it the slowest was still faster than what I could rip from CD (need to test, but just off the top of my head). Maybe on the larger scale for a particular test all of the CPUs are very close together, but in the close-up view it looks like one is _way_ faster.

    4) With all of the tests there wasn't one compiler test :(

    -Benjamin Meyer
    • addressing 4):
      I've seen several sites using compiler benchmarks recently. Quick summary: the P4 Prescott is about the same speed as the P4 Northwood; if compiling isn't done in parallel, neither has the slightest luck against the Athlon 64 (they can't even beat the Athlon XP). However, if something like make -j3 is used, the P4s are very close to the Athlon 64. Some link (Visual Studio, can't remember the other which used gcc): http://www20.tomshardware.com/cpu/20040318/athlon-fx53-28.html [tomshardware.com]
  • LaGrande? (Score:5, Informative)

    by slux ( 632202 ) on Monday March 22, 2004 @09:40AM (#8633303)

    Has everyone already completely forgotten about LaGrande [slashdot.org]?

    The tech sites certainly don't seem to be making much fuss about the fact that Prescott already has this technology in it. I wonder how they can be so unaware of it. There was this big Extremetech article on LaGrande [extremetech.com], though.

    Even on Slashdot no-one seems to be bringing it up these days. For me, the benchmarks aren't even worth looking at with the knowledge that these processors are the beginning of the DRM revolution. Seems they're able to sneak the technology into every PC just as they planned.

    Still, sticking with AMD is going to be just a temporary measure. Is there any talk about integrating DRM into the PowerPC? If not, maybe the next motherboard upgrade could be a Pegasos [pegasosppc.com] or one could just go with a Mac.

    • Re:LaGrande? (Score:3, Insightful)

      Even on Slashdot no-one seems to be bringing it up these days. For me, the benchmarks aren't even worth looking at with the knowledge that these processors are the beginning of the DRM revolution. Seems they're able to sneak the technology into every PC just as they planned.

      You bring up an excellent point, and one that I wonder about.

      At some point, the Slashdot/Ars/Tom's crowd and others who are a little more informed will identify the 'last great un-hobbled processor', i.e. the fastest

  • We have come to a point in processor technology where passive radiant exchange with active air movement (i.e. a cooling block and a noisy fan) will be exceeded very shortly. Running at 178 degrees, these chips will require an active cooling system such as water or refrigerant. But there is a third type of cooling technology, micro channel cooling: http://www.cooligy.com/micro_channel_cooling.html [cooligy.com] I would like to see a rock solid active cooling system implemented and run
  • When did the 2.0GHz Pentium come out? Around August 2001. And now we're reviewing 3.4GHz Pentiums 2.5 years later? Dead!!! Long live Moore's Law.
  • by polyp2000 ( 444682 ) on Monday March 22, 2004 @10:04AM (#8633463) Homepage Journal
    It seems a little dubious to pit a 64 bit processor (Athlon64) against a 32bit one.

    The Athlon64 does surprisingly well in many of the tests, especially when you note that in the majority of benchmarks it is only executing 32bit code. I bet we would see a different story if the Athlon64 was running at its best ability eg running 64bit apps on a 64bit os.

    How difficult would it be to do some benchmarks comparing two identical Linux distros running on the same processor, but one compiled for 32-bit and the other compiled for 64-bit? That might be an interesting comparison.

    Nick
    • Maybe I'm missing the point, but how would 64 bit apps help the Athlon 64? How is a 64 bit app any different? A 32 bit operation (add, mult...) doesn't take any more clock cycles than the equivalent 64 bit operation.

      It also seems that if data structures are unnecessarily made 64 bits wide (using 64 bit integers when 32 or less would suffice), it's going to slow down a 64 bit processor; it takes more bandwidth to move 64 bit numbers around vs. 32 bit, plus not as many will fit in processor caches.

      Not to pick o
      • It depends on what exactly it is you are trying to do. For example a 64bit cpu is going to be considerably faster at computing an operation on two 64bit numbers.

        If you are doing any kind of calculation intensive operations having 64 bit is going to increase speed.

        There are many benefits, such as in music applications (e.g. audio channel mixing); with 64-bit you can realistically increase the mixing headroom for sound channels.

        Quick synopsis to explain the point:

        Basic physics tells us that if you add
    • You're right, though not because of the 64-bit thing. Going 64 bit actually has a small performance penalty. But in 64-bit mode, Athlon64 has twice as many registers, and registers are the fastest storage there is (kind of like an L0 cache), so doubling that more than makes up for the 64-bit penalty.
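
      (A small sketch to make the 64-bit argument above concrete; the loop is invented for illustration, not taken from the article. The same C source compiles to a single multiply/add per iteration when targeting x86-64, but to a multi-instruction carry-propagating sequence on 32-bit x86, and the 64-bit target also gives the compiler 16 general-purpose registers to allocate instead of 8.)

      #include <stdint.h>
      #include <stdio.h>

      /* 64-bit multiply-accumulate: native width in 64-bit mode,
         synthesized from 32-bit operations when built for plain x86. */
      static uint64_t mac64(uint64_t acc, uint64_t a, uint64_t b)
      {
          return acc + a * b;
      }

      int main(void)
      {
          uint64_t acc = 0;
          for (uint64_t i = 1; i <= 1000000; i++)
              acc = mac64(acc, i, i);      /* sum of squares, just to have some work */
          printf("%llu\n", (unsigned long long)acc);
          return 0;
      }
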
  • AMD wins. (Score:2, Informative)

    by Jexx Dragon ( 733193 )
    Not only that, but Intel has confessed that 64-bit extensions now lie dormant in Prescott, ready to be turned on in future versions of the Pentium. This fall, Microsoft will deliver a 64-bit version of Windows, and both AMD and Intel processors will run it.

    Why are the 64-bit extensions disabled? Linux comes in 64-bit now, which clearly means I'll be buying an Athlon 64 over an Intel. Then again, maybe I'll just go with a four or eight processor Opteron based system. I hear the 8088s are good this year too.

  • AMD64 testing (Score:5, Interesting)

    by ValourX ( 677178 ) on Monday March 22, 2004 @10:25AM (#8633624) Homepage

    Yet another review that doesn't test in 64-bit mode.

    I don't know why this wasn't deemed Slashdot-worthy, but here's an excellent review of a P4 3.2E versus an Athlon 64 3200+ in both 32-bit *AND* 64-bit mode:

    AMD64 vs. i386 in FreeBSD [thejemreport.com]

    -Jem
    • And note that sometimes 32 bit mode is faster.

      I'm not very familiar with the tests used, but it seems the only time 64 bit mode was significantly faster was in the Offenc tests. I'm still trying to understand why that is the case. The author states it's because "there are twice as many general-purpose registers available", which I'll check into when I get a chance.

      The other results show 32 bit mode to be either on par or significantly faster than 64 bit mode. For now I'm ignoring the "Synthetic Benchmarks
  • compiler comparison (Score:5, Interesting)

    by pwagland ( 472537 ) on Monday March 22, 2004 @10:29AM (#8633667) Journal
    Hmm. Just had a quick browse of the article, and noticed something a little funny. In the Sphinx speech recognition test [techreport.com] they compared all of the chips with both the Microsoft and the Intel compilers. What was strange about it, though, was that for every AMD chip the Intel compiler was faster, by up to 4%. However, for 7 out of the 10 Intel processors the Microsoft compiler produced faster code than the Intel compiler!

    Bizarre, eh?

  • by Junks Jerzey ( 54586 ) on Monday March 22, 2004 @10:31AM (#8633696)
    Note: I'm not trolling, nor am I an AMD zealot.

    Yes, you can't go by raw clockspeed alone, but in this case it's close enough. In short, the 3.4GHz P4 is THIRTEEN PERCENT faster in raw clockspeed than the 3.0GHz P4. The actual performance increase is less than that. At the same time, BOTH PRICE AND POWER DISSIPATION have gone up by MUCH MORE THAN THIRTEEN PERCENT.

    Bottom line: This is a completely uninteresting processor at the current time.
    • This is how it has always been... If one buys the latest and greatest part, it's going to come with a large premium. And that price premium isn't going to correlate with the increase in clock speed.

      Heck, the P4EE and Athlon 64 FX processors aren't even at the top of all of the tests, yet they cost how much more?

      This may be an uninteresting processor to BUY at the current time, but it is my opinion that this is a very interesting processor to STUDY. With the Prescott, Intel increased the pipeline stages by
  • by pwagland ( 472537 ) on Monday March 22, 2004 @10:40AM (#8633790) Journal
    Now, sure, we don't expect these people to be totally unbiased, but where did they pull this from?:
    The Pentium 4 'E' is an absolute monster in workstation graphics, capturing the top spot in three of the six tests and tying for it in one more. In the other two, the Prescott 3.4GHz is second only to the Athlon 64 FX-53.
    By the way, that test that it tied? It tied it with the Athlon 64 FX-53. But then I guess they wouldn't get their advertising budget if they said:
    The Pentium 4 'E' and Athlon 64 FX-53 are roughly equal in workstation graphics, with the P4E winning three of the six tests, the A64 FX-53 winning two, and the two tying one. Overall, though, there was less than a 2% difference in any test.
    • ...consider the price difference between those two processors. Here's one case where the AMD is the overly expensive part.
  • Antivirus software is as real-world as it comes. Is the Athlon64/Opteron faster, or is the P4/PG+HT? My guess would be Opteron (based on AV stuff potentially having more branch misses), but I haven't done any tests.

    Think AV scanner on a linux box to scan email/web passing through it. How many raw executables per sec?

    Also, consider copying lots of files from one striped HD array to another striped HD array with AV enabled. If the HDD is still saturated, how much idle CPU is left to do other stuff?
  • I don't believe that the benchmarks utilized hyperthreading...which is reasonable given that most people are interested in existing single-threaded apps. However the nice thing about a HT-enabled processor is that your system is still extremely snappy while crunching that MPEG encoding.

  • by SomeGuyFromCA ( 197979 ) on Monday March 22, 2004 @01:08PM (#8635499) Journal
    The reference to "enough microarchitectural tweaks to kill a horse" was bad enough, but now this:
    We start with memory performance, because these benchmarks are synthetic [...] and not always indicative of real-world performance. They [...], however, [...] present the opportunity to make all sorts of colorful graphs.
  • I call BS. If the Athlon XP is old school then my 1.4GHz Athlon T-Bird is Ancient Lore =)
