AMD Hardware

AMD Athlon 64 FX-57 Review

Duane writes "GDHardware.com has the first review of AMD's upcoming Athlon 64 FX-57 CPU, clocked at 2.8GHz. They benchmark it against Intel's current fastest 3.8GHz P4 and the Athlon 64 X2." From the article: "Clocked at 2.8GHz, the FX-57 continues with the 'San Diego' core AMD released with the FX-55, but stepped up by a paltry 200MHz. What's interesting is that while an extra 200MHz on the Intel side of things doesn't always mean much of a performance gain, that's not so with AMD."
  • Summary (Score:5, Informative)

    by 823723423 ( 826403 ) on Sunday June 19, 2005 @02:50PM (#12857504)
    [page 1]
    AMD continues to raise the bar in performance - both in dual core with its recent X2 chip and now once again in single core with its pending FX-57 launch, due June 27th.
    [page 2]
    The FX-57 is armed with a total of 1152KB of cache (128KB L1 and 1024KB L2), which greatly speeds up access to commonly used data and acts as a good-sized buffer between the CPU and system RAM.
    [conclusion]
    However, at this point in the game we'd have a hard time giving a full recommendation to anyone to spend close to or over $1000 on a chip that isn't dual core.
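
    (For reference, the quoted total is simply the sum of the levels: 128KB L1 + 1024KB L2 = 1152KB, and on these Athlon 64 cores the L1 is itself split into 64KB instruction + 64KB data.)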
    • Sigh, if only I had a grand to lose. As is, I will HAVE to settle for my week-old obsolete dual-core Pentium D 820 at a measly 2.8GHz. I bet the FX-57 could rip a box set of CDs a full 3.243 seconds FASTER!!! Sigh...
  • Great, but... (Score:4, Interesting)

    by wbren ( 682133 ) on Sunday June 19, 2005 @02:51PM (#12857506) Homepage
    From TFA:
    If you have tons of money to spend and aren't attracted at all by the Athlon X2, then get this chip; however, at this point in the game we'd have a hard time giving a full recommendation to anyone to spend close to or over $1000 on a chip that isn't dual core.
    I realize the price will go down over time, but seriously, who is going to buy this chip? OK, I know some gamers with too much money on their hands will buy it, but it's still going to be surpassed when the dual cores start gaining ground, especially in gaming (think Christmas '05). Until I saw the price tag I thought this might be an option for my next build, but not anymore. There are other options, at much lower prices.
    • Re:Great, but... (Score:3, Informative)

      by juhaz ( 110830 )
      I realize the price will go down over time, but seriously, who is going to buy this chip?

      The same people who always buy flagship chips: kids with rich parents and other folks with a whole load of money on their hands.

      Ok, I know some gamers with too much money on their hands will buy it, but it's still going to be surpassed when the dual cores start gaining ground, especially in gaming (think Christmas '05).

      I doubt too many games that can take advantage of dual cores will be done by Christmas, but if I'm
    • There have always been people paying unjustified amounts of money just to have the best/fastest/newest available, and I guess there always will be.

      It's just like the $1000 P4EE, or the 1GHz P3 at release, or the KryoTech Super G, or the P2-300 Katmai....
    • I realize the price will go down over time, but seriously, who is going to buy this chip?

      Quite a few jackasses with too short egos. If the chip were released at $200, they'd pay $200 and you'd pay $200, now. But if it's released at $1000, they pay $1000 now, and you'll pay $200 in half a year. AMD is not in a hurry; they prefer to earn more over a longer period of time rather than less, NOW.
      • >>I realize the price will go down over time, but seriously, who is going to buy this chip?

        But if it's released at $1000, they pay $1000 now, and you'll pay $200 in half a year.


        Thing is, the prices on the FX series did not really drop. If the going price for a 57 is $1000, the FX-55 and 53 seem to be priced around $800. I waited almost a year for the 3500+ to drop, and it went from $350 to $270 (for the rev E out today). A year. Used to be you could count on the fact that those waiting to sn
      • I don't think it's their egos that are too short.
    • I realize the price will go down over time, but seriously, who is going to buy this chip?
      That's a very good question.

      I'm thinking for now it's just going to be the absolute enthusiasts who never want to go more than 5 minutes out of date.

      Then there's the people who think "If I get this, I won't have to upgrade for another 2 years". Those people are most likely to cry when the price drops and a new processor is released moments after.

      Oh, then there's people who want to see how fast they can get their
      • I look at it and say "Well, it's time to buy a new chip, and I want it to last me about 3 years." My problem has usually been that of trying to hang on to some irrelevant piece of legacy hardware, so I end up downgrading the motherboard (and taking a downgraded processor to run on it). Just last week I was still running an Athlon 2400 when I pulled the trigger on an Athlon64 4000+. I had PC3200 RAM (downclocked, but still not very stable on the old 2400) which I kept, and I bought a fast SATA hard drive.
    • The idiot too-much-money-for-their-own-good gamer market drives the high-end x86 industry; hadn't you noticed?
    • by Anonymous Coward on Sunday June 19, 2005 @05:46PM (#12858624)
      I realize the price will go down over time, but seriously, who is going to buy this chip?

      You're asking the wrong question. Even if no one buys this chip, the chip is still worthwhile to have on the market.

      A few years ago Wendy's found that almost no one was buying their triple cheeseburgers, so they took triples off the menu. When they did this, they found that sales of their double cheeseburgers dropped to almost nothing. The problem, as they discovered later, was that the presence of triple cheeseburgers on the menu helped to legitimize the double cheeseburgers as mainstream items. Without triple cheeseburgers, the double cheeseburgers became the high end item and mainstream buyers went for the singles instead.

      Since profit margins on double cheeseburgers are higher, the chain was forced to bring back triple cheeseburgers, even though triples weren't selling at all, because the sales of their double cheeseburgers depended on having triples on the menu.

      Point is, although this is a fast food example, the same thing applies to the computer industry. You HAVE to have a high end item available if you are to have any hope of positioning the more profitable midrange items as mainstream.

      • God dammit, when will people like you who have something interesting to say learn to create an account?

        You're like the 1% of ACs that have good comments, and since the vast majority browse at +1, they miss stuff like this.

        Seriously, get an account or sign in. :)
      • I'd add that since the chip does not seem to bring any new features apart from the higher clock rate, its existence is probably a consequence of some improvement in AMD's chip-making technology. I suspect they simply noticed they could clock some of their chips 200MHz faster. What did you expect them to do in that situation?
      • Gateway (computers) IMHO made the same mistake as well. They went for the low end of the market. They forgot that it was their very fast and up-to-date high-end machines that earned them consumer trust. What AMD is doing is buying consumer trust, even if no one is going to buy this chip.

        Except that they will. People buy sports cars as well, although speed limits clearly prohibit using sports cars at the speeds they are designed for. In that same light, $1000 (say $500 extra for high medium to top of the range
      • mmmm burger.

        *stomach rumble*

        Dammit, you've just ruined the rest of my work day!

    • "I realize the price will go down over time, but seriously, who is going to buy this chip?"

      I'm in a small, small minority here, but my company is a candidate. I work at a studio making an animated 3D movie. We need more speed, both in the UI and in rendering.

      We're using Lightwave, so rendering isn't as strong a case. Lightwave provides unlimited licenses for network rendering. In that case, it's more beneficial to buy more, slower machines and add them to the network. But if we were using another ren
      • " It's been a while since I've looked it up, but rendering on another machine could mean spending another $1,000 for the license to render on it. So we'd have to factor that in, too."

        I apologize for responding to my own post here, but I said something that doesn't make any sense and I'd like to clear it up.

        I meant if we rendered on another renderer such as Mental Ray, not another computer. Since I've got a little time to look it up now, Mental Ray is $995. Hopefully that clears up the error in my last
    • Yeah, no one bought that PII 266MHz when it came out at over $1,000.
  • by Anonymous Coward
    Safe to say I won't be visiting GDHardware for reviews again; not that I'd heard of them before.

    "The real target audience for the FX-57 is going to be the ultra-gamer who insists on nothing but the absolute fastest gaming CPU money can buy. It simply crushes everything in its path in game performance and handles most of today's common applications with power to spare."

    Please show me an ultra-gamer that plays on cutting-edge hardware at only 640x480. I guess it was the only test they could find where the
    • Re:640x480 gaming (Score:1, Insightful)

      by Anonymous Coward
      All hardware sites do CPU testing in games at low resolutions; doing so helps remove the GPU factor. It's obvious you don't know much about hardware testing.
      • Couldn't you just use standard VGA drivers?
      • If you're reviewing for "the ultra-gamer", however, you should measure what effect the ultra-gamer is likely to see.

        That ultra-gamer is likely NOT going to be running an unaccelerated graphics card at low resolution. And if, because of that, he's unlikely to see any significant speed gain, then that's a perfectly fair result to present, and arguably one much more relevant to said ultra-gamer.
        • If you're reviewing for "the ultra-gamer", however, you should measure what effect the ultra-gamer is likely to see.

          That ultra-gamer is likely NOT going to be running an unaccelerated graphics card at low resolution. And if, because of that, he's unlikely to see any significant speed gain, then that's a perfectly fair result to present, and arguably one much more relevant to said ultra-gamer.


          This wasn't a system benchmark, this was a CPU benchmark. As such, they isolated the CP
    • Re:640x480 gaming (Score:5, Informative)

      by NerveGas ( 168686 ) on Sunday June 19, 2005 @03:10PM (#12857612)

      Maybe you don't understand: they were benchmarking a CPU. Not a graphics card, a CPU. The more they turn up the resolution and detail, the more the video card becomes a factor and masks the benefits of the CPU. Even if they used the same video card, the more the card becomes a limiting factor, the more all of the CPUs will look the same.

      Now, that's not to say that it wouldn't have been interesting to have some 1600x1200 benchmarks, but in and of itself, the choice of 640x480 is not a bad one.
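
      To put rough numbers on that (a back-of-envelope sketch, assuming CPU work per frame is roughly resolution-independent while the video card's fill work scales with pixel count):

      # pixels the video card must fill per frame at each test resolution
      low = 640 * 480       # 307,200 pixels
      high = 1600 * 1200    # 1,920,000 pixels
      print(high / low)     # 6.25 - about 6x the fill work at 1600x1200,
                            # while the CPU-side work (logic, physics, draw
                            # setup) stays roughly the same per frame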

      steve
      • I don't think so. At 640x480 the vid card can probably handle everything all by itself. You need to put a big load on it so that the work has to run on the cpu since the vid card can't do it all itself.

        Best thing would be to use a card with no hardware acceleration at all. Not sure where you'd find such a beast though. Perhaps you could use a game where you have a choice of renderers, and make sure it's switched to software?

        But either way, you're not any worse with a bigger res, and might well be more accu

        • I don't think so. At 640x480 the vid card can probably handle everything all by itself. You need to put a big load on it so that the work has to run on the cpu since the vid card can't do it all itself.

          That is nonsense. A useful gaming benchmark would make sure the graphics card is doing what any normal gamer would have it doing, without giving it such a high workload that the graphics card itself causes a slowdown.

          That way you isolate the benchmark to measure what a gamer expects to have hi

            • Well, I wouldn't expect my CPU to be doing 640x480 either. A higher resolution would give a better approximation of real life. No hardware acceleration would make it more of a pure CPU test, which should make the effects more obvious. (Part of me's surprised they saw any changes at all at 640x480, unless it was an AI-heavy game.) The 640x480 benchmark is neither realistic nor a good test of CPU.
              • If you're surprised that they saw changes at 640x480, you don't understand 3D processing, and you should learn about it.

              In a 'realistic' test, everyone is limited by their video cards, so it's pointless to do game benchmarking. Given the large differential between CPUs, this clearly was a good test of a CPU.

              What gets me is that they do motherboard tests for 3D games. I mean, what? 1% difference is totally imperceptible.
              • It's an excellent test. The CPU doesn't crunch more triangles at a higher resolution; it's the exact same number of polygons. The video card is what gets stress-tested at higher resolutions: the CPU sends the SAME information, and the video card adds all the pixels, lighting, fill rate, etc. to make the more refined picture.

                Thus all CPU benchmarks use this kind of test. It takes the video card out of the equation so it doesn't skew the results.

              • In this case, it's best to turn down the resolution so that you can see the difference in CPU power. If you're limited by the video card, you're not getting information that helps you decide what CPU to get - which is the point.


                That's exactly what I said! It's not that 1024x768 doesn't use more CPU resources, but that it will also tie up the video card. If you use 800x600 then you minimize video card delays, thus isolating the CPU (and system memory) performance.

              • Damn, somehow that reply went to the wrong post. Sorry.
          • Well, he's wrong, but so are you.

            640x480 tends to be used because everyone else uses it, too. Can you imagine how hard it would be to compare sites' benchmarks if one used 1152x864 all the time, one used 1024x768, and one used 1280x720?

            A useful benchmark for a CPU is a test of what the CPU does - in a 3D game, the CPU does physics, AI, etc - and some of the 3D processing before handing off data to the video card for accelerated functions.

            In this case, it's best to turn down the resolution so that y
        • At 640x480 the vid card can probably handle everything all by itself. You need to put a big load on it so that the work has to run on the cpu since the vid card can't do it all itself.


          You have no clue how 3D accelerated graphics work, do you?

        • Re:640x480 gaming (Score:3, Informative)

          by be-fan ( 61476 )
          There is no load balancing between the CPU and the GPU. Each always does its own part of the work. If one can't keep up, the other doesn't take over its work; instead, the slower part just becomes a bottleneck. At 640x480, you're testing how fast the CPU can feed the graphics card data. At 1600x1200, the graphics card becomes the bottleneck, and as long as each CPU can feed the graphics card fast enough, they'll all get the same results.
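
          As a minimal sketch of that bottleneck model (the millisecond figures are made-up illustrations, not measurements):

          # frame rate is set by whichever side is slower; neither takes over the other's work
          def fps(cpu_ms, gpu_ms):
              return 1000.0 / max(cpu_ms, gpu_ms)

          print(fps(4.0, 2.0))   # 250 - low res, CPU-bound: a fast CPU shows its gain
          print(fps(5.0, 2.0))   # 200 - low res: the slower CPU is visibly slower
          print(fps(4.0, 12.0))  # ~83 - high res, GPU-bound...
          print(fps(5.0, 12.0))  # ~83 - ...and both CPUs now score identically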
        • I don't think so. At 640x480 the vid card can probably handle everything all by itself. You need to put a big load on it so that the work has to run on the cpu since the vid card can't do it all itself.

          I know. I wish more people understood this. I'm having a bit of trouble, though, with the GeForce MX card that I carefully modded to fit the 762-pin socket on my motherboard. The darn thing just don't wanna boot!

          I mean, it has more gigaflops and bogomips than a G4, which we all know is a national sec
      • Okay, fair enough.

        But a gamer wanting to put together a system is going to want to know what the FX-57 will do for them (especially at that price tag!). So run some benchmarks at normal settings and see how the chip compares. If it shows that the FX-57 has little to no advantage in a gaming setting over a chip that costs $750 less - that's very useful information!
    • It simply crushes everything in its path in game performance and handles most of today's common applications with power to spare."

      Bullshit. Intel's Dothan (Pentium M) on an Asus mobo will smoke an FX-57 at less than half the price. Dothans currently hold all the 3DMark and SuperPi records. Check out the scores [futuremark.com] for yourself.

      I e-mailed the author several days ago to say that leaving out Dothan benches made his review and conclusions worthless. He hasn't e-mailed back. I can't say for sure, but this article sur

  • by Peter Amstutz ( 501 ) on Sunday June 19, 2005 @03:12PM (#12857623) Homepage
    From what I've read, while Intel can keep cranking up the core speed of their chips, all those clock cycles are wasted if the chip spends most of its time waiting around for memory. The northbridge on Intel motherboards is now their biggest bottleneck. So at least part of the reason AMD can get better throughput at a lower clock rate is that it eliminates the northbridge altogether, puts the memory controller on the CPU, and ties everything else together using their insanely fast "HyperTransport" system bus. Any engineers who know more about it care to comment?
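
    To put a rough number on the "waiting around for memory" part, here's a back-of-envelope model; every figure in it is an illustrative assumption, not a measurement:

    # cycles lost per cache miss = DRAM round-trip latency x core clock
    clock_ghz = 3.8           # assumed P4-class clock speed
    northbridge_ns = 100.0    # assumed round trip through an external northbridge
    on_die_ns = 60.0          # assumed round trip with an on-CPU memory controller
    print(northbridge_ns * clock_ghz)  # ~380 cycles stalled per miss
    print(on_die_ns * clock_ghz)       # ~228 cycles - at these speeds, latency
                                       # savings dwarf a few hundred extra MHz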
    • Exactly. Never has clock speed meant less for the Pentium than now! Expect Intel to crank up the marketing, the hip-hop dancers, and the gigahertz game. Sickens me already.
      • Actually, Intel has already said they don't care about GHz anymore, because over the next few years they can only get about 3x the performance that way. However, with dual core (and dual core + Hyper-Threading) they can get up to 10x the performance in the coming years. So I guess you can expect Intel to crank up the dual-core game.
    • I think it may have more to do with the on-die memory controller. HyperTransport is just used for I/O (disk, AGP, USB, etc.) in this case.
    • The northbridge is nowhere near being the biggest bottleneck in a modern PC. AMD's design reduces latency substantially, which results in slightly improved bandwidth.

      For newer games, graphics processing is the performance bottleneck. For scientific work, it is generally either memory bandwidth or execution resources on the CPU. For servers, it is generally memory bandwidth and/or I/O bandwidth from the hard disks.

      Integrating the northbridge onto the CPU die does net a modest performance boost, but it d
    • Funny thing is, it isn't purely their HyperTransport. It was developed together with Digital for their Alpha CPUs. Way to go, Digital: like many other "modern" features, the Alpha chips had this baby first. Too bad they died.

      You can also look at Alpha systems (or for that matter any "real workstation" design) to see how to fix this, e.g. with memory interleaving. With 64 memory DIMMs supplying data to the CPU, it will be the memory running circles around your CPU. :-)

      Same goes for I/O; most cheap-ass computers are

      • You can also look at Alpha systems (or for that matter any "real workstation" design) to see how to fix this, e.g. with memory interleaving. With 64 memory DIMMs supplying data to the CPU, it will be the memory running circles around your CPU. :-)

        Actually, just about all Pentium-4 motherboards already utilize dual banks of RAM, hence the 800MHz bus speed (2x400). And as you can see from the benchmarks, it's not running circles around the competition.

        • The drawback, from what I can gather, is that while Intel is using DDR3200 now (2x400), it's just a 64-bit memory path.

          The latest nForce4 boards, Socket 939 for AMD, are using dual-channel DDR. This nets you a fatter memory path: you get the same 2x400, but in a double-wide, 128-bit bandwidth path.

          From what I've seen in informal testing here, using dual-channel memory makes a huge difference. Throw in a SATA/150 drive instead of an IDE drive, and a Windows XP install gets shaved down to 15 minutes. Not
      • Isn't that always how it is? I mean, AMD and Intel mostly succeed because they can ship chips in volume and at low prices. Most actual innovation takes place at Sun, IBM, or other server-oriented companies if I'm not mistaken. The Alpha is just a particularly poignant example.
    • while Intel can keep cranking up the core speed of their chips

      Actually, they can't... hence the interest in multi-core and the Pentium M! :)

    • you're mistaken (Score:2, Informative)

      Intel generally leads AMD in memory bandwidth, at least without any overclocking.

      Intel was doing 6.4GB/s (dual-channel PC3200 RAM) when AMD was at 2.7GB/s (single-channel PC2700).

      Also, note that memory accesses don't go over HyperTransport on an Athlon. The memory controller is built into the CPU. This is nice for latency, but bad because it means that Athlon users are stuck with whatever memory technology AMD has selected. At the moment, that means Athlon systems are stuck with DDR even as D
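
      Those figures follow straight from the DDR math (a quick sketch; the helper function is just for illustration):

      # peak bandwidth = transfers per second x bus width x channels
      def peak_gb_s(mt_per_s, bus_bytes=8, channels=1):
          return mt_per_s * bus_bytes * channels / 1000.0

      print(peak_gb_s(400, channels=2))  # dual-channel PC3200 (DDR400): 6.4 GB/s
      print(peak_gb_s(333))              # single-channel PC2700 (DDR333): ~2.7 GB/s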
      • Re:you're mistaken (Score:1, Interesting)

        by Anonymous Coward
        Regardless of Intel's memory bandwidth advantage, this has rarely ever helped Intel. They still get stomped over and over again in just about every benchmark by AMD. And the HyperTransport bus is why AMD's dual cores run so much faster than Intel's dual cores. Intel has no chance of ever beating AMD's X2 line. Intel must still use the standard FSB for communication between processors, further clogging an already overloaded bus. Not so with AMD.
        • When Intel came out with the 800MHz FSB (6.4GB/s), they handily beat AMD on benchmarks. AMD didn't have the internal memory controller at the time. Plus they had a slow FSB. Plus they were in that awkward time before NVidia jumped on the bandwagon, so the chipsets available for AMD were terrible, most of them poorly performing VIA parts.

          AMD may have the upper hand in many benchmarks right now (I guess you don't look at video compression), but it hasn't been that way for long. AMD's most recent rise above Intel really sta
          • That's somewhat odd, even for /.
          • Also, I find the "troll" moderation on my post above insulting. It seems silly to me, but disregarding that, I find it ridiculous that people automatically moderate "troll" apparently just for saying anything nice about Intel.

            You seem to be a bit paranoid. If anything, I'd say the troll mods are because there's no "-1 Uninformed" or "-1 Wrong" mod option.

            It's easy to consider a post a troll, though, when the incorrect/wrong-by-omission/biased info all seems in favor of one side, even if it was AM

  • Damn you (Score:3, Insightful)

    by skomes ( 868255 ) on Sunday June 19, 2005 @03:17PM (#12857653)
    Damn you Intel and AMD, always teasing me with the absolute most bleeding edge hardware that I CAN'T AFFORD. Come on, let's work on bringing down prices as well as bringing up performance.
  • by spitefowl ( 786321 ) on Sunday June 19, 2005 @03:18PM (#12857655) Homepage
    Some of them say "Lower is better", some of them say "FPS", and some of them just don't say anything. It makes it hard to gauge whether higher is better or lower is better. I mean, some things are obvious, like the 3DMark 2005 results, but then one says "4D rendering" - what the heck is that? Is it measuring FPS?

    Agh, egads!
  • Why is it that the dodgiest news/review sites so often get written up here?
      Dodgy? Duane has been around since '98, like we have. How the fuck is it dodgy? The 57 will be the fastest in gaming, but not much faster than the X2. Why? Because the clock is not that different. Same fucking conclusion I had when I did my FX-57 preview last week. Maybe you should do some research before you start typing. Clearly you have not.

      You can find the new San Diego core FX benched at FX-57 and FX-59 (3GHz) speeds here. [amdzone.com]
      • The grammar, the eccentric charts, the bizarre statements like: "While it's certainly not a multi-threaded capable chip..."
        • It isn't dual core, and it does not have Hyper-Threading. Since you are throwing stones from a glass house, put up your own FX-57 review so we can take aim at it, or shut up. I doubt you would do a better job. Of course you couldn't; you don't have the hardware. But you sure can mouth off on Slashdot like you are important.
  • by ruiner5000 ( 241452 ) on Sunday June 19, 2005 @06:08PM (#12858735) Homepage
    You can find the new San Diego core FX benched at FX-57 and FX-59 (3GHz) speeds here. [amdzone.com]
  • As if CPU benchmark tests weren't vague and subjective enough, we get a bunch of random graphs without LABELS. Jesus. I mean, you can figure out what most of them probably are, but I'd rather just READ it. I hate reading those things enough when they don't mention how they compute means or what they normalize to, but just meaningless numbers? Yeah, no thanks, boss.
