AMD Hardware

AMD's Next-Gen Steamroller CPU Could Deliver Where Bulldozer Fell Short 161

Posted by Soulskill
from the rollin'-rollin'-rollin' dept.
MojoKid writes "Today at the Hot Chips Symposium, AMD CTO Mark Papermaster is taking the wraps off the company's upcoming CPU core, codenamed Steamroller. Steamroller is the third iteration of AMD's Bulldozer architecture and an extremely important part for AMD. Bulldozer, which launched just over a year ago, was a disappointment. The company's second-generation Bulldozer implementation, codenamed Piledriver, offered a number of key changes and was incorporated into the Trinity APU family that debuted last spring. Steamroller is the first refresh of Bulldozer's underlying architecture and may finally deliver the sort of performance and efficiency AMD was aiming for when it built Bulldozer in the first place. Enhancements have been made to the fetch and decode architecture, along with improved scheduler efficiency and reduced cache load latency, which together could bring a claimed 15 percent gain in performance per watt. AMD expects to ship Steamroller sometime in 2013 but wouldn't offer timing details beyond that."
  • by BadgerRush (2648589) on Tuesday August 28, 2012 @09:57PM (#41161127)
    AMD may be getting its shit together in regards to chip design, but I'm still going Intel on my next PC because of their superior Linux drivers. At the moment I'm an unhappy owner of a laptop with an AMD graphics card that can't do anything because the drivers are useless. I'm looking forward to a new laptop with an Intel Ivy Bridge processor (I don't think I can wait for Haswell).
  • by PhrostyMcByte (589271) <phrosty@gmail.com> on Tuesday August 28, 2012 @10:16PM (#41161263) Homepage

    I think AMD's work here will provide some great evolutionary speedups that will be significant to many people. Unfortunately for them, at the same time AMD is bringing out these small "free lunch" general improvements, Intel will be bringing out Haswell -- which in addition to such evolutionary improvements has some really fantastic, significant new features that'll provide remarkable performance boosts.

    • Integer AVX-256. For apps that can take advantage of it, this'll be a massive speed-up. Things like x264 and other video processing will take huge advantage of this.
    • SIMD LUTs. Look-up tables, one of the major optimization tricks programmers have used for ages, have thus far been out of reach to SIMD code without complex shuffle operations that usually aren't worth it.
    • Transactional memory. This is not quite the easy BEGIN/COMMIT utopia people are hoping transactional memory will eventually get us, but it's a building block that'll enable some advanced concurrent algorithms that either aren't realistic now or are so complex that they're out of reach to most coders.

    These are all pretty specialized features, yes, but they serve some very high-profile benchmark areas: video processing and concurrency are always on the list, and AMD will get absolutely crushed when apps start taking advantage of them.

    I'm a developer, a major optimization geek both micro- and macro-. I thrive playing with instruction latencies, execution units, and cache usage until my code ekes out as much performance as possible. Of course we'll never know until the CPUs are released for everyone to play with, but right now my money is on Intel.

    AMD is in serious trouble here. I hope I'm wrong.

  • I'm a developer, a major optimization geek both micro- and macro-. I thrive playing with instruction latencies, execution units, and cache usage until my code ekes out as much performance as possible. Of course we'll never know until the CPUs are released for everyone to play with, but right now my money is on Intel.

    Yeah, I'm a developer too. However, my simulations run on desktops, not supercomputers, so it doesn't matter how optimal the code is on one particular piece of hardware... Wake me up when there's a cross-platform intermediary "object code" I can distribute that gets optimised and linked/compiled at installation time for the exact hardware my games will be running on.

    We need software innovation (OSes and compilers), otherwise I'm coding tons of cases for specific hardware features that aren't available on every platform and are outpaced by the twice-as-powerful machine that comes out 18 months later... In short: it's not worth doing all that code optimisation for each and every chip released. This is also why Free Software is so nice: I release the cross-platform source code, you compile it, and it's optimised for your hardware... However, most folks actually just download the generic-architecture binary, defeating the per-processor optimisation benefits.

    Like I said: in addition to hardware improvements we need a better cross-platform intermediary binary format (so that both closed and open projects benefit). You know, kind of like how Android's Dalvik bytecode works (processed at installation), except without needing a VM when you're done. I've got one of my toy languages doing this, but it requires a compiler/interpreter to be already installed (which is fine for me, but in general suffers the same problems as Java). MS is going with .NET, but that's some slow crap in terms of "high performance" and still uses a VM.

    Besides, I thought it was rule #2: Never optimise prematurely?
    (I guess the exception is: Unless you're only developing for yourself...)

  • by hairyfeet (841228) <bassbeast1968@@@gmail...com> on Wednesday August 29, 2012 @12:07AM (#41162047) Journal

    Ya know what? Nothing wrong with cheap and "good enough"; the problem has been that their new designs are cheap and shitty thanks to that lame "half core" design they went for.

    You take a good 85%+ of the people out there and a MOR AMD Deneb quad will frankly be twiddling its thumbs because it will blow through any jobs that they have, even gaming, even more so for Thuban. And their Brazos chips were fricking great, an APU designed for mobile video and basic tasks that got great battery life while often being cheaper than an Atom+ION setup.

    I've sold many an Athlon II and Phenom II and the people are damned happy with them, they just blast through everything they want to do with plenty of cycles left over. I even put my money where my mouth is with regards to my family, me and the oldest are gaming on Thubans while the youngest took my Deneb, and they blow through any game we throw at 'em.

    I see from TFA they've partially dropped the "half core" design, but I can only hope that with Piledriver they'll drive a stake through it, as for most of the people I've talked to, Win 8 is a DO NOT WANT, yet the half-core scheduler bug is only fixed in Win 8. Meh, hopefully I'll still be able to get enough Thuban, Deneb, and Llano chips to get me through the whole BD/SR phase, and the new Apple chip designer they hired will give us another Athlon64. One can hope after all.

  • by afidel (530433) on Wednesday August 29, 2012 @12:44AM (#41162287)

    or Windows will treat it as hyperthreading and tie a nice boat anchor to your new chip.

    Actually it's the opposite: the system SHOULD be treating the co-cores like HT units and not scheduling demanding jobs on adjacent cores (at least not ones that both need the FP unit or lots of decode operations). The problem is that AMD basically lied to the OS and told it that every core is the same and that it can go ahead and schedule anything wherever it wants. If they had just marked the second portion of each co-core as an HT unit, the normal scheduler optimizations would have handled 99% of cases correctly. In reality BD's problem wasn't so much the gaffe with the co-cores (though that certainly didn't help things), but that GlobalFoundries is more than a process node behind Intel (one node plus 3D transistors).

  • by iamhassi (659463) on Wednesday August 29, 2012 @01:22AM (#41162521) Journal

    Ya know what? Nothing wrong with cheap and "good enough"; the problem has been that their new designs are cheap and shitty thanks to that lame "half core" design they went for.

    You take a good 85%+ of the people out there and a MOR AMD Deneb quad will frankly be twiddling its thumbs because it will blow through any jobs that they have, even gaming, even more so for Thuban. And their Brazos chips were fricking great, an APU designed for mobile video and basic tasks that got great battery life while often being cheaper than an Atom+ION setup.

    I've sold many an Athlon II and Phenom II and the people are damned happy with them, they just blast through everything they want to do with plenty of cycles left over. I even put my money where my mouth is with regards to my family, me and the oldest are gaming on Thubans while the youngest took my Deneb, and they blow through any game we throw at 'em.

    I see from TFA they've partially dropped the "half core" design, but I can only hope that with Piledriver they'll drive a stake through it, as for most of the people I've talked to, Win 8 is a DO NOT WANT, yet the half-core scheduler bug is only fixed in Win 8. Meh, hopefully I'll still be able to get enough Thuban, Deneb, and Llano chips to get me through the whole BD/SR phase, and the new Apple chip designer they hired will give us another Athlon64. One can hope after all.

    This. I have a six-core 1055T. Bought it to overclock, and it does hit 4GHz stable on air, but guess what? I run it at the stock 2.8GHz. Why? Because 99.9% of the time six cores at 2.8GHz is more than enough. Even games run perfectly. CPUs have finally reached the point where faster isn't better anymore; it's about power usage and heat output. I'd rather have it run cool using little power at stock than run it full blast all the time, sucking watts and heating the room at 4GHz I'm not even using.

    When I bought this, Intel didn't have anything close in price that performed as well. Sure, I could have spent double and bought a faster Intel chip, but why? What was the point of spending more on something I wouldn't use? I'd rather spend the $ on an SSD for real performance gains than extra GHz I'd never use. So I bought AMD, and I'll probably do it again next year if the price is reasonable and the speed is "good enough".

  • by the_humeister (922869) on Wednesday August 29, 2012 @02:10AM (#41162825)

    This is just so weird. 20 years ago it was Alpha, MIPS, SPARC, PA-RISC, etc. that were the ones counted on to do all the heavy-lifting backend, HPC stuff. x86 was kind of a joke that everyone frowned upon but tolerated because it was cheap and did the job adequately for the price. Then x86 steamrolled through. Now no more Alpha or PA-RISC. MIPS is relegated to low-power applications (my router has one).

  • by hairyfeet (841228) <bassbeast1968@@@gmail...com> on Wednesday August 29, 2012 @03:54AM (#41163409) Journal

    Hi fellow Thuban user! I have the 1035T and my oldest has the 1045T, and like you I got some crazy OCing when I first got it (I ended up with 3.5GHz with a 3.9GHz turbo) before going back to stock, because even at 2.6GHz it just mows through everything.

    What I really love is the TurboCore; with my ASRock board I can tweak to my heart's content, but even at stock settings with TC I'm getting a little over 3GHz when gaming, thanks to most games using 3 cores or less. No muss, no fuss, it just kicks in automatically when I need the single-threaded boost. And with the N520 cooler, which I paid a grand total of $30 for, it stays around 8 degrees above room temp at idle and never hits above 127F even when the cores are being pounded. When you can keep a chip that cool with just a $30 heatpipe cooler and Arctic Silver, what's not to like?

    What did it for me, though, was like you how much I could save while still having damned good performance. I have 2 teen boys that also game, so I try to keep us pretty close to parity, and when you can grab a complete 6-core kit for $345 [tigerdirect.com], and that's BEFORE the MIR gives you another $30 off? It was a no-brainer. I got myself the Thuban and gave the youngest my X4 925, which considering he prefers MMOs is frankly overkill, while the oldest ended up with a kit just like the one I linked to, given to him by his grandpa as a back-to-school present before I could grab it for him.

    All told, for THREE systems, with the family pack of Win 7 HP X64 and 3 HD 4850 GPUs? $1400 before the MIRs; after I got those back, all told it was around $390 a system. You just can't beat that, and all the games we play run just beautifully at the 1600x900 res our screens run at. In a year to a year and a half I'll pick up some 6770s or 6850s when they drop to the $50 price range and make my money back on the 4850s off of CL. With two hexas and a quad we couldn't be happier; the kids' gaming and movies, my gaming and transcoding along with multitrack audio editing? We have tons of cycles to spare.

    So I have to agree, what's the point when these systems already can tear through anything we care to do at half the price of a similar Intel unit?

  • by gweihir (88907) on Wednesday August 29, 2012 @05:38AM (#41163975)

    Indeed. I think AMD is actually far ahead of Intel (again, think e.g. the integrated memory controller; for quite a few server loads Intel was vastly behind for a time due to that). The speed increases of CPUs have become smaller and smaller and matter less and less. The trick for AMD will be to survive intact until Intel gives up and gets a next-gen architecture of their own. By then AMD will have ironed out the kinks and they will be on an equal footing again. When looking at their relative sizes and cash reserves, it is impressive that AMD can compete at all. But the bottom line is that in almost all cases (exception: you need a small number of CPUs with high power because your software is stupid, and the cost of the CPUs is not an issue) you get significantly better value for the money from AMD.
