AMD Hardware

AMD Launches Fastest Phenom Yet, Phenom II X4 980

MojoKid writes "Although much of the buzz lately has revolved around AMD's upcoming Llano and Bulldozer-based APUs, AMD isn't done pushing the envelope with its existing processor designs. Over the last few months AMD has continued to ramp up frequencies on its current bread-and-butter Phenom II processor line-up, to the point where it's now flirting with the 4GHz mark. The Phenom II X4 980 Black Edition marks the release of AMD's highest-clocked processor yet; its default clock on all four cores is 3.7GHz. Like previous Deneb-based Phenom II processors, the X4 980 BE sports a total of 512KB of L1 cache, 2MB of L2 cache, and 6MB of shared L3 cache. Performance-wise, for under $200, the processor holds up pretty well against others in its class, and it's an easy upgrade for AM2+ and AM3 socket systems."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Wait for Bulldozer (Score:4, Insightful)

    by rwade ( 131726 ) on Wednesday May 04, 2011 @08:12PM (#36030676)

    I'll be waiting for the dust to clear with Bulldozer before I make a commitment for my next build. There's no reason to buy a $200 Phenom II X4 980 now when there is no application that needs that much power. If you buy a Sandy Bridge or a higher-end AM3 board/processor now, your average gamer or office worker won't be able to max it out for years -- unless he does video editing or extensive Photoshop work, or he needs to get his DVD rips down from a 15-minute rip to a 10-minute rip per feature film...

    Might as well wait for the dust to clear or for prices to fall.

  • by rwade ( 131726 ) on Wednesday May 04, 2011 @08:29PM (#36030812)

    This [] thread has some interesting information on possible BD performance.


    That's 301 posts of back-and-forth that looks to be basically speculation. Prove me wrong by quoting specific statements from people who have benchmarked the [unreleased] Bulldozer. Otherwise, this link is basically a bunch of AMD fanboys fighting with Intel fanboys. But prove me wrong...

  • by ShakaUVM ( 157947 ) on Wednesday May 04, 2011 @08:34PM (#36030840) Homepage Journal

    >>I'll be waiting for the dust to clear with Bulldozer before I make a commitment for my next build.

    I agree. The Phenom II line is just grossly underpowered compared to Sandy Bridge: []

    The i5 2500K is in the same price range, but is substantially faster. Bulldozer ought to even out the field a bit, but then Intel will strike back with their shark-fin Boba FETs or whatever (I didn't pay much attention to the earlier article on 3D transistors.)

    And then on the high-ish end, AMD has nothing to compete against the i7 2600K. And it's not really that much more expensive (+$100) for the 15% extra gain in performance. It's not like their traditional $1000 high end offerings.

  • Re:3700 megahertz? (Score:5, Insightful)

    by DurendalMac ( 736637 ) on Wednesday May 04, 2011 @08:59PM (#36031008)
    So clock speed means everything when comparing different CPUs and not their raw performance. Got it.

    Furthermore, there is no 10-year-old CPU that runs at 3GHz unless you did some absurd overclocking.
  • by Anonymous Coward on Wednesday May 04, 2011 @09:22PM (#36031112)

    Bah, the speeders are wasting their money. Cores all the way. Half a GHz doesn't make up for two cores. When you've seen a 6-core kernel building it's hard to go back to start-stop. Even if you are a dumb Windows user, you will enjoy the shovelware being unable to slow you down.

  • by Anonymous Coward on Wednesday May 04, 2011 @09:31PM (#36031156)

    Read this excerpt from an AMD management blog:

    "Thanks to Damon at AMD for this link to a blog from AMD's Godfrey Cheng.
    We are no longer chasing the Phantom x86 Bottleneck. Our goal is to provide good headroom for video and graphics workloads, and to this effect “Llano” is designed to be successful. To be clear, AMD continues to invest in x86 performance. With our “Bulldozer” core and in future Bulldozer-based products, we are designing for faster and more efficient x86 performance; however, AMD is seeking to deliver a balance of graphics, video, compute and x86 capabilities and we are confident our APUs provide the best recipe for the great majority of consumers. "

    People, read between the lines.
    What he is saying is that they can no longer compete with Intel on speed and have decided to concentrate on a balanced offering at low-end price points.
    The days of the CPU wars are in fact over, and Intel has won with Sandy Bridge.
    Yes, I have been an AMD-only fan for years, but you have to face the reality that times have changed permanently in Intel's favor and AMD's days are numbered.
    Why else do you think Bulldozer is over a year late?
    Oh, and AMD is ripe for a buyout right now, and there are rumors.
    When AMD fails, Intel will have a monopoly and the consumer will lose in the end.
    Sad but true.

  • by swb ( 14022 ) on Wednesday May 04, 2011 @10:01PM (#36031306)

    My sense is that people who actually *use* a computer also install dozens of applications and end up with complicated and highly tailored system configurations that are time consuming to get right and time consuming to recreate on a new system.

    The effort to switch to a new system tends to outweigh the performance improvement and nobody does it until the performance improvement makes it really worthwhile (say, Q6600 to a new i5 or i7).

    I've found that because I end up maintaining a system for a longer period, it pays to buy power today for applications very likely to need or use it in the lifetime of the machine. Avoid premature obsolescence.

  • by Anonymous Coward on Wednesday May 04, 2011 @10:31PM (#36031442)

    This will change; AMD can't stick with the AM3 socket forever (I think the next generation will in fact change the socket), but generally speaking AMD has done a much better job on hardware longevity than Intel has.

    Oh yes, AMD is wonderful about keeping sockets around for a long time. The move to 64 bit CPUs only involved four (754,939,940,AM2) sockets, three (754,939,940) of which were outdated in short order.

  • by gregrah ( 1605707 ) on Wednesday May 04, 2011 @11:30PM (#36031724)
    Those power consumption benchmarks look a little suspect to me. I've got a Phenom II 720 (3 cores @ 2.8 GHz) with a 95 watt TDP, and the total system consumption at idle is about 65 watts. I'm not sure how they are managing to pull down almost double that with an Athlon II (also a 95 watt CPU) in the test system they used -- unless a) they turned off the power management settings in the BIOS, or b) they are using some ridiculous 1000W PSU that is totally inefficient at lower loads.

    Anyway - assuming that I leave my machine running for 8 hours a day on average, and the overwhelming majority of the time the CPU is at near-idle loads (i.e. consuming 65W), with electricity costing about $0.12 per kWh, I figure that it probably costs me about $24 per year in electricity. If I could shave off 1/3 of the electricity cost, I would only be saving $8 a year. After the 3 years that it takes me to make up that $25 difference, I'm probably in need of a new CPU anyway.
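
    The arithmetic above can be sanity-checked in a few lines (a sketch assuming the same figures quoted: 65 W at idle, 8 hours a day, $0.12 per kWh):

```python
# Back-of-the-envelope idle power cost, using the figures quoted above
# (assumptions: 65 W idle draw, 8 hours/day, $0.12 per kWh).
IDLE_WATTS = 65
HOURS_PER_DAY = 8
DOLLARS_PER_KWH = 0.12

kwh_per_year = IDLE_WATTS * HOURS_PER_DAY * 365 / 1000   # ~190 kWh
cost_per_year = kwh_per_year * DOLLARS_PER_KWH           # ~$23

print(f"{kwh_per_year:.0f} kWh/year costs about ${cost_per_year:.2f}")
```

    That lands just under the ~$24 figure, and cutting a third off it would indeed save only about $8 a year.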

    Also - while I haven't spent much time pricing motherboards recently - when I last checked I found that AMD motherboards tend to be cheaper than Intel motherboards, and also that AMD integrated graphics were considerably stronger, allowing me to get by without a discrete graphics card. Furthermore, if I wanted to upgrade my CPU now to the latest and greatest, I could do so without replacing my motherboard and buying new memory - whereas if I had bought an LGA 1156 motherboard a year ago, it would now be obsolete.

    In other words - I agree with you that with Intel you'll have a faster and more power efficient machine, but I'm not so sure that you'll end up saving any money.
  • by Anonymous Coward on Wednesday May 04, 2011 @11:37PM (#36031760)

    Prime95, in this context, is for convincing 0v3rcl0ckz0r kiddiez that their massive overclock is stable even though it's a terrible stability test. A prime number search program is not exactly the world's best method of achieving full test coverage of a CPU, no matter what a billion leetboy forums may tell you.

    Just for example, according to its webpage, prime95 only uses 32MB of memory, which means it basically runs from cache on any modern CPU. Which in turn means you're not really exercising memory access much at all. Guess what's really, really important to test if you want to know how stable your system is, especially given that modern CPUs have integrated memory controllers? (Some overclockers are more sane and only do multiplier overclocking, but the focus of most is speed at any cost and the memory gets it too, and if they rely on prime95, well... not good.)

    And then there's the issue that as a program which does nothing but manipulate large integer numbers, prime95 probably isn't touching anything other than the integer ALUs. Maybe MMX/SSE if you're lucky. So huge chunks of the CPU's datapath go untested.

    Another consequence of that limited memory use is that it probably doesn't thrash the TLBs much, which means that OS pagefault handlers are rarely called, which means you're not testing the stability of all the VM machinery.

    I could go on. Next to no I/O or interaction with peripherals. So on and so forth. Prime95 has a reputation vastly in excess of its true usefulness.
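
    To make the memory-coverage point concrete, here is a toy sketch (not a real stress tool; the buffer size and single fixed pattern are simplifications) of the kind of write-then-verify pass over a larger-than-cache working set that Prime95's small footprint never performs:

```python
import array

# Toy pattern test: fill a buffer bigger than the CPU caches with a
# known pattern, then read it back and count mismatches. A nonzero
# count on otherwise stable hardware points at memory or memory
# controller trouble. (Real testers such as memtest86 cycle many
# patterns and address orders; this is only an illustration.)
def pattern_check(n_bytes):
    words = n_bytes // 8
    pattern = 0xA5A5A5A5A5A5A5A5
    mask = (1 << 64) - 1
    buf = array.array('Q', [0]) * words
    for i in range(words):                 # write phase
        buf[i] = (pattern ^ i) & mask
    errors = 0
    for i in range(words):                 # read-back phase
        if buf[i] != (pattern ^ i) & mask:
            errors += 1
    return errors

# e.g. pattern_check(64 * 1024 * 1024) comfortably exceeds a 6MB L3,
# so the read-back phase must actually hit main memory.
```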

    But the real problem is this:

    Say you're doing something like the GP: using your computer to do scientific calculations which have to be right. You want to overclock, but because the results matter you want to find software which can help you validate that your computer is so stable that there's no chance of a crash. Or worse: silent data corruption. (Which I've personally observed when overclocking. Not fun when you don't discover it until after it's trashed a lot of data.)
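
    (One cheap guard against that failure mode, sketched below for a deterministic workload: run the computation twice and compare digests of the results. The workload shown is an arbitrary stand-in, and this only catches transient flips, not systematic errors; it is nowhere near an exhaustive stress test.)

```python
import hashlib
import math
import struct

# Minimal double-run check for silent corruption: run the same
# deterministic computation twice and compare digests of the results.
# A mismatch means at least one run produced a corrupted value.
def workload():
    acc = 0.0
    for i in range(1, 100_000):
        acc += math.sin(i) / i
    return acc

def digest(value):
    return hashlib.sha256(struct.pack("<d", value)).hexdigest()

run1 = digest(workload())
run2 = digest(workload())
assert run1 == run2, "runs differ: possible silent data corruption"
```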

    Problem is, there is literally no end-user software which is an adequate stress test for this purpose. The only known way to get that kind of reassurance is to use factory automated test equipment (ATE). ATEs don't typically run software on the CPU under test. Instead, they make use of special test mode circuitry to quickly perform direct pass/fail tests on most circuits in the chip at any desired voltage/frequency/temperature operating point.

    Well designed ATE tests can cover essentially every circuit. The factory uses ATE testers both to identify rejects and bin good chips into speed grades, but they're also the only way to be truly sure that an overclock will be 100% stable.

    But you can't buy factory ATEs, and you can't get them to tell you the ATE test data for your chip, beyond a guarantee that it passed at the frequency they sold it to you as.

    Which is why, if you're doing work which is important, such as scientific research, you damn well shouldn't overclock no matter how safe you think it is.

  • Re:3700 megahertz? (Score:4, Insightful)

    by smash ( 1351 ) on Thursday May 05, 2011 @12:06AM (#36031898) Homepage Journal
    Don't forget that IPC isn't the be-all and end-all. If you're stalled due to cache misses, then IPC goes out the window. Modern CPUs have much more cache and much faster buses to main memory than we had in 2004. That is a large reason why they're faster. They also have additional instructions that can do more work per instruction - so comparing the IPC of CPUs released today with CPUs released last decade is even more meaningless.
  • by smash ( 1351 ) on Thursday May 05, 2011 @12:15AM (#36031920) Homepage Journal

    Alternatively, even though Intel has won, he is acknowledging that for 99.9% of people, CPU performance no longer matters as much as it used to.

    In general use, for example, I see no difference between the Core i5 in my work machine and the Pentium D in my oldest home box.

    Gaming? Sure, but even that is becoming more GPU-constrained.

    Both AMD and Intel are on notice. Hence both are putting more focus into GPUs. On the CPU side, it won't be long before some random multi-core ARM variant is more powerful than any regular user will ever need, and such chips absolutely kill x86 in terms of performance per watt.

    The focus is no longer absolute CPU performance; it is shifting towards price, size, heat, and power consumption. Computing is going mobile and finding its way into ever smaller devices. Rather than building one large box, distributed computing is the new wave (the cloud, clustering, etc.).

    AMD's CPU business might have a tricky path ahead, but then so does x86 in general, barring niche markets. If AMD is doomed, then Intel's traditional dominance via x86 won't be far behind.

  • by Rockoon ( 1252108 ) on Thursday May 05, 2011 @01:06AM (#36032162)
    The only legit in-the-wild Bulldozer benchmark (that I know of) is a 1.8 GHz dual-chip (2 x 16 = 32 cores) server setup run through the Phoronix benchmark suite (search results [])

    It is likely that these sample chips are as much a test of the new 32nm fab as they are a test of the new CPU architecture, and definitely not a test of how high they can be clocked.
