Topics: AMD, Hardware, IT

First Looks at Athlon 64 4000+ & FX-55 (235 comments)

CrzyP writes "AnandTech.com has benchmarked the new Athlon 64 4000+ and FX-55 in various areas, including business application performance, audio/video, gaming, and much more, in this first look at AMD's newest 64-bit chips. Just after AMD's announcement, AnandTech posted this article to help consumers choose between Intel and AMD."
  • Also here (Score:5, Informative)

    by elid ( 672471 ) <eli.ipod@gmail.com> on Tuesday October 19, 2004 @11:13AM (#10566175)
    on Extremetech [extremetech.com]
  • Spread the love (Score:5, Informative)

    by Hedon ( 192607 ) on Tuesday October 19, 2004 @11:13AM (#10566178)
    Why should Anand get all the attention?
    Feel free to also check http://www.hardocp.com/article.html?art=Njc1 [hardocp.com]
  • I wonder (Score:3, Insightful)

    by robslimo ( 587196 ) on Tuesday October 19, 2004 @11:15AM (#10566202) Homepage Journal
    With Intel having recently backed off on the effort to push clock rates ever higher, is there a plateau in sight for AMD too? Will we see nothing in the 5 to 10 GHz range with today's techniques?

    Maybe it'll take optical computing to spur the next clock push.
    • Re:I wonder (Score:2, Interesting)

      by stone2020 ( 123807 )
      AMD will probably max out at 3 GHz with this technology, but then they will introduce dual core, which should be equivalent to around 5 GHz.
    • Re:I wonder (Score:5, Insightful)

      by iggymanz ( 596061 ) on Tuesday October 19, 2004 @11:24AM (#10566323)
      Clock speed isn't the only way to do more work in less time and get a better balance with the current bottlenecks in getting to memory and I/O -- time for some new architectures, instead of the same old stuff with smaller transistors and higher clock speeds.
      • Re:I wonder (Score:5, Insightful)

        by Martin Blank ( 154261 ) on Tuesday October 19, 2004 @11:56AM (#10566730) Homepage Journal
        Indeed. AMD's designs didn't scale as well speed-wise, so they had to get more creative to get better performance out of their chips. Improving their branch prediction, boosting FPU speeds, enlarging the cache, and improving the data bus were all methods that Intel would go back to from time to time, but usually with some reluctance.
        • Re:I wonder (Score:4, Insightful)

          by ViolentGreen ( 704134 ) on Tuesday October 19, 2004 @12:27PM (#10567042)
          I think this is a lot of the reason AMD is starting to pull ahead of Intel. Intel was able to just increase the clock speed to make their chips "faster" than AMD's. AMD had to look for other methods to increase performance, and perhaps they learned a bit along the way.
      • Re:I wonder (Score:3, Interesting)

        by drinkypoo ( 153816 )
        How about some new architectures with smaller transistors and higher clock speeds? :) I'd love to see, say, asynchronous logic, but I don't see it happening any time soon, at least in the mainstream.
        • Re:I wonder (Score:2, Funny)

          by kc0dxw ( 42207 )
          Microsoft would immediately take advantage of this by delegating the decision to blue-screen into hardware.
    • Re:I wonder (Score:5, Insightful)

      by swordboy ( 472941 ) on Tuesday October 19, 2004 @12:20PM (#10566971) Journal
      With Intel having recently backed off on the effort to push clock rates ever higher, is there a plateau in sight for AMD?

      There's some background worth knowing:

      AMD uses IBM's Silicon on Insulator [ibm.com] (SOI) technology. This reduces power consumption by a very large degree. It is rumored that Intel tried to license the technology, but IBM, with their fondness for cross-licensing, wanted too much (probably an x86 license). So Intel has been pushing out chips with standard silicon fabrication techniques, at the expense of tremendous power consumption.

      My guess is that Intel is coming up with a "massively parallel" architecture that can be applied to everything from mainframes down to handhelds simply by reducing the number of cores on a chip. The cores will probably be very small and flexible. A mainframe might have a few thousand while a handheld might have a few dozen. They've certainly been hinting at a change in architecture for some time.

      And then there was the "Windows Elements" that was supposed to come out with the P5. I'm not sure why that didn't get more press. I'm guessing that it is a version of Windows that will run in local storage on these processors (i.e., the processor will have enough on-chip storage to hold "Windows Elements").
  • and tom's hardware (Score:5, Informative)

    by he1icine ( 512651 ) on Tuesday October 19, 2004 @11:17AM (#10566234)
    Another review, on Tom's Hardware:

    http://www.tomshardware.com/cpu/20041019/index.html
  • No one ever got fired for buying Intel. That's a shame, since AMD seems to have better products and more innovative ideas.
    • That's IBM.

      Your boss is an idiot.

    • No one ever got fired for buying Intel.

      Yep.

      That's why "... AMD seems to have better products and more innovative ideas." Since they're number two, they try harder. Once people have been saying "No one ever got fired for buying AMD" for a while, expect them to stumble a few times.

    • by Anonymous Coward
      Really??

      We lost a purchasing person at corporate because he bought Intel.

      We asked for some SGI workstations for a specific project. The nimrod decided he could save us $$$$thousands by buying Intel-based Dells instead.

      He was fired.


      • That's the worst thing a purchasing person can do: second-guess the engineers' request. Nothing squashes morale worse than working with SGI/Sun/IBM/whatever for years, only for some bean counter to declare that Windows on x86 is as good as UNIX for some task they don't understand. It is sad that so many vendors have jumped on the Windows bandwagon, leaving some engineers with no choice but to put up with Windows' limitations because their preferred tool migrated because "Windows is the future". Just losin
    • No one ever got fired for buying Intel.
      Really? Just wait a bit... [serverpipeline.com]
    • There's not a whole lot of significant innovation going on either at AMD or Intel, because that's not what people want. People want incremental improvements, so that's what they're getting. Even 64-bit CPUs have been around for almost a decade.

      Most of the innovation going on in the CPU world right now is in the fab and design areas. We aren't getting innovative processors, we're getting innovative manufacturing techniques which allow us to do the same thing we've been doing for 30 years... just at a hi

  • by bstadil ( 7110 ) on Tuesday October 19, 2004 @11:20AM (#10566261) Homepage
    CPU manufacturing is all about yields. If AMD can make more chips that work by increasing the die size with a larger cache instead of upping the clock speed, then that's the route AMD will take.

    This is actually a last resort, as the speed increase you get for the cost of the wafer real estate is low. You rarely do this for raw speed; rather, it's for special needs like servers and the like.

    The increase in speed for a workstation is probably one speed grade, at a cost increase of 30% or so.

    There are two good articles over on The Inquirer about Intel's roadmap and why they have to go the increase-the-cache route for 2005. Worth a read. Part One [theinquirer.net] and Part Two [theinquirer.net]

  • by Anonymous Coward on Tuesday October 19, 2004 @11:22AM (#10566297)
    This is a major overhaul of the aging nForce3 chipset. Check it out. [anandtech.com]

    Expect a flurry of new advances by the end of the year.

    I am ready to buy a new Linux system and am pulling my hair out trying to make the best choice. Due to Linux compatibility issues (and mixed experiences with nForce2), I cannot really consider nForce4, so it will be VIA for me. Though Nvidia will likely get the nod for graphics.

    The 90nm chips are a mixed bag at the moment.
    • Using the latest reverse-engineered nForce drivers (i.e., not the Nvidia ones), I find nForce much more stable than VIA ever was. YMMV, as always.
  • Power density (Score:4, Interesting)

    by lagartijo ( 756488 ) on Tuesday October 19, 2004 @11:25AM (#10566334)
    The power density of processors on current nanometer-scale processes (watts per square inch) has reached that of nuclear reactors. See page 8: http://cnscenter.future.co.kr/resource/rsc-center/presentation/intel/spring2003/S03USCQNS67_OS.pdf [future.co.kr] It's Intel's, but I assume it is the same for AMD.
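    A quick sanity check on that claim in Python: divide load power by die area. Both input figures below are rough, illustrative assumptions (a Prescott-class part), not numbers from the PDF.

      # Power density = CPU load power / die area. Illustrative inputs only.
      cpu_watts = 100.0      # ballpark load power for a 90 nm P4
      die_area_mm2 = 112.0   # roughly a Prescott-sized die

      w_per_cm2 = cpu_watts / (die_area_mm2 / 100.0)   # 100 mm^2 = 1 cm^2
      w_per_in2 = w_per_cm2 * 6.4516                   # 1 in^2 = 6.4516 cm^2
      print(f"~{w_per_cm2:.0f} W/cm^2 (~{w_per_in2:.0f} W/in^2)")

    That comes out to roughly 90 W/cm^2 (about 575 W/in^2), which is the order of magnitude behind the nuclear-reactor comparison on slides like this one.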
  • amd bias? (Score:4, Insightful)

    by uniqueCondition ( 769252 ) on Tuesday October 19, 2004 @11:29AM (#10566399)
    If you're going to stand your ground with Wintel and attack reviews from Tom's Hardware & co., then I have to ask what you take issue with.

    did you disagree with the test system?
    the benchmarks used?

    I've read Tom's Hardware for years and have found them objective and informative. While their results may disagree with your feelings, you shouldn't make baseless remarks.
    • Re:amd bias? (Score:3, Informative)

      by Anita Coney ( 648748 )
      I used to read Tom's Hardware, until several years ago. It reviewed several 3D cards, including one from 3dfx. It tested them in OpenGL, where the 3dfx card dominated. The reviewer stated that the 3dfx card did well in OpenGL because OpenGL was its "native" API.

      Not only did the reviewer not know the difference between Glide and OpenGL, he didn't even know that 3dfx's advantage in OpenGL was due to the fact that its drivers didn't fully implement OpenGL.

      In other words, Tom's site is worthless.
      • I stopped reading Tom's after the nVidia bias issue. nVidia ads were plastered all over the site, and suddenly reviews that showed nVidia edging out its competitors in the benchmarks were showing that nVidia was 'clearly far ahead' and were 'the best cards available' even as Tom professed to have no bias. While eventually nVidia was head and shoulders above the rest (for a little while, at least), I just couldn't deal with Tom's blatant ignorance of his own bias at best, or outright lying at worst. I wen
      • In other words, Tom's site is worthless.

        I was fine until there. One bad review or a bit of wrong information does not make a site worthless.
        • Re:amd bias? (Score:3, Insightful)

          by Anita Coney ( 648748 )
          If such an egregious and obvious error can get published, how could I ever trust the site to discover less obvious errors? Essentially, it'd be like reading the Weekly World News for real news: Pointless.

    • Re:amd bias? (Score:2, Informative)

      Too easy. I no longer read Tom's; the bias is simply too strong. Where, you ask? Take this example. In the same week, Tom abused ATI and nVidia (rightly so, IMO) for paper launches, then did a serious review of an Intel chipset that not only isn't available yet, but won't be for another 6 months!

      A great quote, from the Xeon vs. Opteron battleground: "AMD can consider itself lucky, because due to the dual channel memory controller that is part of each processor, the dual Opteron has a nice advantage, desp
  • Duh (Score:4, Interesting)

    by igzat ( 817053 ) on Tuesday October 19, 2004 @11:30AM (#10566409) Homepage
    I think the choice is clear regardless of this article. Intel announced last week that they are giving up on a 4 GHz Pentium CPU, and even the 3.8 GHz model is very scarce, whereas AMD's Athlon 64 3800+ can be easily found. With the announcement of the 4000+ CPU, AMD has a clear lead over Intel, and will until the dual-CPU wars begin sometime next year. I think now is a good time to own AMD stock. Their market share is going to slowly increase over the next 12 months. I'm not taking sides here, just stating the obvious.
    • From what an article on CNet stated, the dual-CPU wars won't begin until 2006, as Intel has pushed back the expected ship date for their product. The article stated that AMD was expecting to start shipping in the 3rd quarter of next year.
  • Apples to apples? (Score:4, Insightful)

    by Quixote ( 154172 ) on Tuesday October 19, 2004 @11:40AM (#10566526) Homepage Journal
    I'm reading the review right now (I know, I should burn my /. membership card), and the first thing that jumped out at me was the difference in memory specs between the AMD setup and the P4 setup:

    AMD: 2 x 512MB OCZ PC3200 EL Dual Channel DIMMs 2-2-2-10
    Intel P4: 2 x 512MB Crucial DDR-II 533 Dual Channel DIMMs 3-3-3-12

    Why not keep the rest of the components exactly the same, so we can have a _real_ comparison?

    I'm no Intel fanboy (or an AMD fanboy, for that matter), but when you're doing such benchmarking, some attention to detail would help.

    • by Quixote ( 154172 ) on Tuesday October 19, 2004 @11:44AM (#10566577) Homepage Journal
      Oops, my bad: I didn't notice the "DDR-II" in the specs.
    • I think the point was to find the fastest memory available for each system; this isn't just comparing processors, it's also comparing the maximum speeds you can expect when using said processors.
    • Because that difference in RAM access speed is most likely due to different designs. The AMD chip includes a RAM controller on the CPU, which normally allows it to access RAM faster than the Intel solution. My guess is that both RAM chips are DDR3600.

      But in general you should always test a system with the fastest RAM that the system supports.
      • I did read the text again, and the reason they don't use the same RAM is that the Intel board uses 533 MHz DDR2 RAM while the AMD board uses 400 MHz DDR RAM (PC3200).

        Some back-of-the-envelope calculations (sketched below) tell us that the RAM on the Intel board is fastest to return the first 128 bits of data, but that the AMD board is a bit faster at reading more than 128 bits in a row.

        Martin
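        A minimal Python sketch of that back-of-the-envelope math, using the module timings quoted upthread (PC3200 at 2-2-2-10 vs. DDR2-533 at 3-3-3-12). It is a toy model: it deliberately ignores the Athlon 64's on-die memory controller, which cuts real-world latency substantially, and depending on the timings you plug in, the latency/bandwidth split can come out the other way around.

          # Toy DRAM timing comparison from the timings quoted upthread.
          # Ignores controller overhead and the Athlon 64's on-die memory
          # controller, so treat the outputs as illustrative only.

          def first_word_ns(transfer_rate_mts, cas_cycles):
              # CAS latency is counted in bus cycles; the bus clock runs at
              # half the transfer rate (DDR moves two words per clock).
              cycle_ns = 1000.0 / (transfer_rate_mts / 2.0)
              return cas_cycles * cycle_ns

          def burst_ns(transfer_rate_mts, bits):
              # Time to stream `bits` over one 64-bit channel.
              return (bits / 64.0) * (1000.0 / transfer_rate_mts)

          for name, rate, cas in [("DDR-400 CL2 (AMD board)", 400, 2),
                                  ("DDR2-533 CL3 (Intel board)", 533, 3)]:
              print(f"{name}: ~{first_word_ns(rate, cas):.1f} ns to first word,"
                    f" ~{burst_ns(rate, 128):.1f} ns per further 128 bits")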

  • Consumers? (Score:3, Insightful)

    by TimmyDee ( 713324 ) on Tuesday October 19, 2004 @11:42AM (#10566539) Homepage Journal
    "Just after AMD's announcement, AnandTech posted this article to help consumers choose between Intel and AMD." So if by consumers, you mean people that read /. and hardware sites and not the general public, then yes?
    • Obviously they should switch over to growing bananas. Any fool knows that the average Joe will never read hardware review sites. But mmmm, everybody loves a good banana.
  • by adiposity ( 684943 ) on Tuesday October 19, 2004 @11:46AM (#10566594)
    The 4000+ isn't clocked any higher than the 3800+; it's just got a bigger cache. It's basically an FX-53; in fact, that's exactly what it is, sans the name. It would seem AMD is plateauing as well, but perhaps 90 nm will get them out of the jam later on.

    However, this is a wise move by AMD even if the rating isn't justified (hint: the benchmarks say it's not). Intel will never have a 4 GHz CPU, and idiots who don't understand performance will see the 4000+ and want it because it breaks the 4000 barrier. It could backfire, but probably not, because even though the 4000+ rating isn't justified, the chip is still faster than any of Intel's on 90% of applications.

    -Dan
    • I realize the Pentium 4 won't hit 4 GHz in its current incarnation, but is it really true that we'll never see a CPU above 4 GHz? Have we finally hit the limit?
      • We may, but not for a very long time, as they are probably dropping clock speeds down for dual core. They ramped up to higher speeds than they should have, when they could have had the same performance with a lower-clocked P3 derivative. Eventually they may get back up there, but the high clock rates are gone for a while, I think.

        -Dan
    • The 4000+ isn't clocked any higher than the 3800+, it's just got a bigger cache...It would seem AMD is plateauing as well

      What about the FX-55 that was also reviewed in the article? Is that clocked higher than the 3800+?
      Maybe you should read the whole article...

  • They test with a lot of binary-only pieces of software. Wouldn't the compiler and the compiler flags that these binaries were created with make a hell of a lot of difference? I don't know -- I'm asking.
    • Yes, some difference could be attributed to the flags. Based on benchmarks I have done in the past, I'd say it's maybe 2-3%. Nothing you would notice as an end user, as the apps are pretty well optimized when released.
  • I bet Intel rethinks releasing a 4 GHz P4 after this news.
  • by Bryan Ischo ( 893 ) * on Tuesday October 19, 2004 @12:35PM (#10567139) Homepage
    This very thorough article also includes a comparison of power usage of the various processors during idle and busy states. The numbers look HUGE -- the 90 nm Athlon 64 3500+ does the best at 86 watts at idle, with the Intel P4 560 (3.6 GHz) doing the worst at 124 watts. Under load, the range is 114 watts to 210 watts.

    At first I couldn't believe my eyes - how can heat sinks keep up with these figures? But then I realized that only some of that wattage is being converted to waste heat - some of it is actually doing the useful work of the processor.

    Just curious - does anyone have any idea what the likely waste heat dissipation, in watts, would be for these processors, given the total power consumption figures in the article?
    • That figure *is* the power dissipation. There is no significant "useful work" that a processor can do.
      • I'm pretty sure that I read in a previous AnandTech article a couple of weeks ago that they are using a new means of measuring CPU power usage -- they measure the total wattage consumed by the entire PC and somehow extract the CPU power usage from that. I could be wrong, though; I'm not sure exactly how they are obtaining their CPU power usage figures.

        I do not believe that the total wattage consumed by the processor would equal the total wattage produced by the processor as waste heat. The processor does m
        • Nearly all the energy that goes into a CPU is dissipated as heat. Some of it is completely wasted, some of it performs calculations in the process. Consider that there are no moving parts other than electrons (which move quite slowly), the CPU doesn't make noise, and the output lines don't transmit appreciable power. You are putting energy in, so it has to come out, and the only output left is heat.
          • by Bryan Ischo ( 893 ) * on Tuesday October 19, 2004 @01:14PM (#10567513) Homepage
            So basically, the amount of energy it takes to perform calculations is tiny? If processor A performs 2 billion arithmetic operations per second, and it is able to perform each operation just as efficiently as processor B, which only performs 1 billion operations per second, then I would expect processor A to use twice as much energy performing its calculations as processor B.

            But what you're saying is that the amount of energy being wasted as heat for both processors A and B is 99%, so the extra power used by processor A in its calculations won't be noticeable compared to processor B (assuming that the only extra power used by processor B is that used to perform calculations).

            486s consumed what, 10-20 watts? And they performed something like 1/100 or fewer as many arithmetic operations per second as today's processors? So they used 1/5 the power but performed 1/100 the amount of useful work. I guess that today's processors actually convert more of their input power to useful work (calculations) than processors of the past did.
            • by raygundan ( 16760 ) on Tuesday October 19, 2004 @02:41PM (#10568467) Homepage
              All of the energy going into the processor is going to come out as heat. It's similar to what would happen if you put a lightbulb in a box, and then measured how much heat was being produced outside.

              Some of the power going to the lightbulb makes waste heat directly, and some of it makes light. But since it's all closed up in a box, all of the light ends up making heat, too.

              So yes, some of the power going into the processor does useful work. But from the point of view outside the processor at the heatsink, even the useful work creates heat.

              • Well, even though it all gets converted into heat, it's not all wasted. Some of that electricity actually does some amount of processing, so you have received some benefit from the heat.

                However, a very great deal of the electrical power is just leaked through the transistors in their "off" state, performing absolutely no useful work. I believe that for the 90nm chips, as much as 75% (!) of the electrical power is leaked, so you're really only getting anything useful out of 25% of the power you pump i
      • Speak for yourself. I didn't have to buy an electric space heater for my room. I just play some ROME:TW for a few hours and I'm nice and toasty.
    • But then I realized that only some of that wattage is being converted to waste heat - some of it is actually doing the useful work of the processor.

      All - 100% - of the energy entering a system must be either stored or dissipated. It makes no difference whether a system is mechanical or electrical. For instance, an elevator (if poorly balanced) could store energy when it is on the top floor, but it would lose it when it travels down again. The stored energy in one trip to the top is trivial compared to the
    • by mczak ( 575986 ) on Tuesday October 19, 2004 @02:26PM (#10568298)
      You misinterpreted the power consumption graphs. This is total system power consumption, not only the CPU!
      So if it says 100W, that is 100W measured at the AC socket! Since PSUs are only 65-80% efficient, that means the system (not counting the PSU loss) is only using about 75W. If you keep in mind this includes the hard drive, graphics card, memory, chipset, etc., this doesn't leave that much for the CPU (rough numbers below). Measuring system power also obviously makes the differences in CPU power consumption look much smaller than they are in reality.
      And others have mentioned it already, ALL power is transformed to heat.
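      In rough numbers, assuming a 75% efficient PSU and an illustrative guess for the non-CPU components (neither figure is a measurement from the article):

        # Wall power vs. CPU power, with illustrative assumptions.
        ac_watts = 100.0        # example figure read off a review graph
        psu_efficiency = 0.75   # typical 2004-era PSU: ~65-80%

        dc_watts = ac_watts * psu_efficiency  # what the PSU actually delivers
        other_parts = 35.0                    # guess: drives + GPU + RAM + chipset
        cpu_watts = dc_watts - other_parts

        print(f"{ac_watts:.0f} W at the wall -> ~{dc_watts:.0f} W DC, "
              f"of which maybe ~{cpu_watts:.0f} W is the CPU")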
    • by NerveGas ( 168686 ) on Tuesday October 19, 2004 @03:48PM (#10569152)

      How can heat sinks keep up? If you've seen the size of the heat sinks that come with these processors, you'll understand.

      I built a P4/LGA system for a guy last week. The heat sink that came standard with the CPU really impressed me -- it's the kind of heat sink you would have expected to see hardcore overclockers paying $60 for two years ago. Very large and well-designed!

      Times used to be when heat sinks weighed one or two ounces and came with 40mm fans. Then came the 60mm fans. Now, 80mm fans and two-pound heat sinks aren't at all uncommon, with some models using 92mm fans, and some weighing three pounds or more. Copper is being used for more and more of the heat sink. Better heat conduction, more surface area, and more air. It's not rocket science. : )

      Plus, on the new P4s, the chips are able to run at much higher temperatures than previous generations. The greater the temperature differential between the chip and the heat sink, the faster you can get the heat to conduct (a rough model below).

      steve
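      That temperature-differential point is the usual lumped thermal-resistance model: the power a cooler can move is the temperature delta divided by its thermal resistance. A minimal sketch, where theta and all the temperatures are illustrative assumptions, not vendor specs:

        # Sustainable power = (chip temperature limit - air temperature) / theta,
        # where theta is the cooler's thermal resistance in degC per watt.
        # All values below are illustrative assumptions.
        def max_watts(t_chip_max_c, t_air_c, theta_c_per_w):
            return (t_chip_max_c - t_air_c) / theta_c_per_w

        theta = 0.4  # degC/W, plausible for a large 2004-era cooler
        for t_max in (70, 90):  # two hypothetical chip temperature limits
            print(f"chip limit {t_max} C, 40 C case air ->"
                  f" ~{max_watts(t_max, 40, theta):.0f} W sustainable")

      A chip rated to run hotter buys dissipation headroom with the same cooler, which is the poster's point about the new P4s.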
  • snicker (Score:4, Insightful)

    by DeathByDuke ( 823199 ) on Tuesday October 19, 2004 @12:39PM (#10567180)
    All these accusations of Tom's Hardware being AMD-biased make me laugh. Just be happy that, for once, we don't have to moan about Intel-biased websites anymore ;)
    • Oh ye of little knowledge of Anandtech.

      His first-ever review was of his hot-dog-cooker K6 chip, and the review appeared on his personal site, which has since become the behemoth that is AnandTech.

      So he is partial to AMD; you could even say AMD was the grain of sand that turned into his personal pearl.

      Puto

"The great question... which I have not been able to answer... is, `What does woman want?'" -- Sigmund Freud

Working...