AMD Hardware

AMD's Next-Gen Steamroller CPU Could Deliver Where Bulldozer Fell Short

MojoKid writes "Today at the Hot Chips Symposium, AMD CTO Mark Papermaster is taking the wraps off the company's upcoming CPU core, codenamed Steamroller. Steamroller is the third iteration of AMD's Bulldozer architecture and an extremely important part for AMD. Bulldozer, which launched just over a year ago, was a disappointment. The company's second-generation implementation, codenamed Piledriver, offered a number of key changes and was incorporated into the Trinity APU family that debuted last spring. Steamroller is the first major refresh of Bulldozer's underlying architecture and may finally deliver the sort of performance and efficiency AMD was aiming for when it built Bulldozer in the first place. Enhancements have been made to the fetch and decode architecture, along with improved scheduler efficiency and reduced cache load latency, which together could bring a claimed 15 percent gain in performance per watt. AMD expects to ship Steamroller sometime in 2013 but wouldn't offer timing details beyond that."

Comments Filter:
  • by Anonymous Coward on Tuesday August 28, 2012 @09:16PM (#41160739)
    They all sound like sexual positions.
    • by Johann Lau ( 1040920 ) on Tuesday August 28, 2012 @09:28PM (#41160879) Homepage Journal

      And Intel ones don't? Who are you kidding?

      Aladdin
      Bad Axe
      Bad Axe 2
      Batman
      Batman's Revenge
      Big Laurel
      Black Pine (a cute name for anal sex I guess)
      Black Rapids (I don't want to know)
      Bonetrail
      Caneland
      Cougar Canyon
      Glidewell
      Tanglewood (sounds bi to me)
      Warm Springs

      and last, but never least, the

      Windmill (also known as "Helicopter Dick")

  • by Gothmolly ( 148874 ) on Tuesday August 28, 2012 @09:16PM (#41160741)

    Things like hitting the 1GHz mark first and making a workable 64-bit chip that also speaks x86 only get you so far. AMD needs to come up with something cool, or else they're doomed to play catch-up.

    • by ArhcAngel ( 247594 ) on Tuesday August 28, 2012 @09:45PM (#41161037)
      Well, they definitely need to step up with their current offerings, but I will forever be grateful for their 64-bit x86 extensions. If not for those, we'd be stuck with Itanium desktops...*SHUDDER*...
    • by rrhal ( 88665 )
      Their Heterogeneous System Architecture (HSA) is pretty innovative stuff. If AMD is successful, this will change the way software is written and move us to a more parallel world.
      • by smash ( 1351 )
        CPUs don't really drive software development that much, or else we would have migrated off x86 years ago. If Intel can get the same or similar performance without a paradigm shift in development methodology, developers won't bother.
        • by tlhIngan ( 30335 ) <slashdot@worf.ERDOSnet minus math_god> on Tuesday August 28, 2012 @11:40PM (#41161855)

          CPUs don't really drive software development that much, or else we would have migrated off x86 years ago. If Intel can get the same or similar performance without a paradigm shift in development methodology, developers won't bother.

          The migration from x86 has already started, actually - the architecture they're moving to is ARM. (After all, there are more ARM-based SoCs shipped than x86 CPUs - every PC includes one or more ARM cores doing something).

          But on a more user-visible level: tablets and smartphones are becoming the computing platforms of the day, all running ARM processors, regardless of whether they run iOS or Android. Developers have embraced them and are cranking out tons of apps and games and other stuff for these platforms. The threat is real enough that Intel's investing a lot of money in bringing Android to x86, because the writing's on the wall (when more phones and tablets ship than PCs...)

          But x86 won't die - it has a raw performance advantage that ARM has yet to reach, so for computation-heavy operations like databases, it'll be the heavy lifter. Perhaps serving an entire array of ARM frontend webservers.

          • by the_humeister ( 922869 ) on Wednesday August 29, 2012 @02:10AM (#41162825)

            This is just so weird. 20 years ago it was Alpha, MIPS, SPARC, PA-RISC, etc. that were counted on to do all the heavy backend and HPC lifting. x86 was kind of a joke that everyone frowned upon but tolerated because it was cheap and did the job adequately for the price. Then x86 steamrolled through. Now there's no more Alpha or PA-RISC, and MIPS is relegated to low-power applications (my router has one).

    • by PhrostyMcByte ( 589271 ) <phrosty@gmail.com> on Tuesday August 28, 2012 @10:16PM (#41161263) Homepage

      I think AMD's work here will provide some great evolutionary speedups that will matter to many people. Unfortunately for them, at the same time AMD is bringing out these small "free lunch" general improvements, Intel will be bringing out Haswell -- which in addition to such evolutionary improvements has some really fantastic new features that'll provide remarkable performance boosts.

      • Integer AVX-256. For apps that can take advantage of it, this'll be a massive speed-up; things like x264 and other video processing stand to benefit hugely.
      • SIMD LUTs. Look-up tables, one of the major optimization tricks programmers have used for ages, have thus far been out of reach of SIMD code without complex shuffle operations that usually aren't worth it.
      • Transactional memory. This is not quite the easy BEGIN/COMMIT utopia people are hoping transactional memory will eventually give us, but it's a building block that'll enable some advanced concurrent algorithms that either aren't realistic now or are so complex that they're out of reach of most coders. (A rough sketch of all three follows.)
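
      For the curious, here's a rough sketch in C of what all three could look like from a programmer's point of view (my illustration, not anything out of Intel's docs; the function names and the lock fallback are invented, and it assumes a compiler and CPU with AVX2 and RTM support, e.g. gcc -O2 -mavx2 -mrtm):

        #include <immintrin.h>

        /* 1. Integer AVX-256: add eight 32-bit ints in one instruction
              (VPADDD on 256-bit registers). */
        __m256i add8(__m256i a, __m256i b) {
            return _mm256_add_epi32(a, b);
        }

        /* 2. SIMD LUT: gather eight table entries at once (VPGATHERDD)
              instead of eight scalar loads plus shuffles. */
        __m256i lookup8(const int *lut, __m256i idx) {
            return _mm256_i32gather_epi32(lut, idx, 4);
        }

        /* 3. Transactional memory (RTM): try an atomic update without a
              lock, falling back to a real lock if the transaction aborts. */
        long shared_counter;
        void bump(void (*locked_fallback)(void)) {
            if (_xbegin() == _XBEGIN_STARTED) {
                shared_counter++;      /* executes transactionally */
                _xend();
            } else {
                locked_fallback();     /* abort path: take the lock */
            }
        }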

      These are all pretty specialized features, yes, but they service some very high-profile benchmark areas: video processing and concurrency are always on the list, and AMD will get absolutely crushed when apps start taking advantage of them.

      I'm a developer, a major optimization geek both micro- and macro-. I thrive on playing with instruction latencies, execution units, and cache usage until my code ekes out as much performance as possible. Of course we won't know until the CPUs are released for everyone to play with, but right now my money is on Intel.

      AMD is in serious trouble here. I hope I'm wrong.

      • I'm a developer, a major optimization geek both micro- and macro-. I thrive on playing with instruction latencies, execution units, and cache usage until my code ekes out as much performance as possible. Of course we won't know until the CPUs are released for everyone to play with, but right now my money is on Intel.

        Yeah, I'm a developer too. However, my simulations run on desktops, not supercomputers, so it doesn't matter how optimal the code is on one particular piece of hardware... Wake me up when there's a cross-platform intermediary "object code" I can distribute that gets optimised and linked/compiled at installation time for the exact hardware my games will be running on.

        We need software innovation (OSes and compilers); otherwise I'm coding tons of cases for specific hardware features that aren't available on every platform and are outpaced by the doubly powerful machine that comes out 18 months later... In short: it's not worth doing all that code optimisation for each and every chip released. This is also why Free Software is so nice: I release the cross-platform source code, you compile it, and it's optimised for your hardware... However, most folks actually just download the generic-architecture binary, defeating the per-processor optimisation benefits.

        Like I said: in addition to hardware improvements, we need a better cross-platform intermediary binary format (so that both closed and open projects benefit). You know, kind of like how Android's Dalvik bytecode works (processed at installation), except without needing a VM when you're done. I've got one of my toy languages doing this, but it requires a compiler/interpreter to be already installed (which is fine for me, but in general suffers the same problems as Java). MS is going with .NET, but that's some slow crap in terms of "high performance" and still uses a VM.

        Besides, I thought it was rule #2: Never optimise prematurely?
        (I guess the exception is: Unless you're only developing for yourself...)

  • *Looks around* AAAAAAAnd, how does this AVX-256 compare to OpenCL transcoding of video?

        • Re: (Score:2, Insightful)

          by TeXMaster ( 593524 )

          *Looks around* AAAAAAAnd, how does this AVX-256 compare to OpenCL transcoding of video?

          That's a stupid question. OpenCL by itself does nothing whatsoever to improve video transcoding. OpenCL is an API, so the performance of an OpenCL kernel for video transcoding depends heavily on which hardware you're running it on. On Intel CPUs supporting AVX-256, OpenCL kernels will be compiled to use those instructions (if Intel keeps updating its OpenCL SDK); on GPUs and APUs it will use whatever the respective platforms provide. What OpenCL does is make it easier to exploit AVX-256, just as it makes it easier to...
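
          (To make the API point concrete, here's a minimal, hypothetical host-side snippet of my own, not TeXMaster's -- link with -lOpenCL. The same code enumerates CPUs, GPUs and APUs alike; whether AVX-256 or a GPU ISA gets emitted is decided per device when the kernel is built.)

            #include <stdio.h>
            #include <CL/cl.h>

            int main(void) {
                cl_platform_id plat;
                cl_uint n = 0;
                clGetPlatformIDs(1, &plat, NULL);

                cl_device_id devs[8];
                clGetDeviceIDs(plat, CL_DEVICE_TYPE_ALL, 8, devs, &n);

                /* The same kernel source would be clBuildProgram()'d for
                   any of these; speed depends entirely on the device. */
                for (cl_uint i = 0; i < n; i++) {
                    char name[256];
                    clGetDeviceInfo(devs[i], CL_DEVICE_NAME, sizeof name,
                                    name, NULL);
                    printf("device %u: %s\n", i, name);
                }
                return 0;
            }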

          • I believe the point is that an OpenCL transcoding algorithm running on a typical GPU will make doing it on a CPU look silly and pointless, so who cares how fast the CPU can do it when you're going to do it on the GPU anyway?

        • That is like asking "What colour do you want this database in?" OpenCL doesn't do anything for transcoding video; it is an API for talking to graphics cards. Now, GPUs can be used for video transcoding, of course, using OpenCL or other APIs (CUDA, DirectCompute). However, how well they do depends on the graphics card. An AMD 7970 will throw down some serious performance; an ATi 3400 doesn't even have driver support for OpenCL, and if it did, it would be very slow.

          So there isn't going to be any way to...

          • +10 for being pedantic (the best kind of correct, technically correct), -1000 for knowing exactly what I was groping for, but choosing to be pedantic.

            Just got back from a late-night concert, and my head hasn't stopped pounding yet (and there is some question of sobriety -> Jimmy Buffett with margaritas). Besides, and I am summoning my inner BOFH here, who the f*ck would run OpenCL code on a CPU? I've tried, and the only thing I've succeeded in doing is giving my laptop a grand mal seizure.

            And no one sane...

            • +10 for being pedantic (the best kind of correct, technically correct), -1000 for knowing exactly what I was groping for, but choosing to be pedantic.

              Welcome to Slashdot!

            • Agreed. I think some of the things Intel (and AMD, and Qualcomm, and nVidia) put into their CPU instruction sets are 10% for real use and 90% so they can put an item on the bullet list for the marketing literature and get fanboys to buy something new when the thing they already had is just fine.

              It was a great Buffett concert, wasn't it? But I'm getting too old for this; after two hours I couldn't hear much beyond the ringing in my ears. I still have a bit of it today. And also, I thought movie theaters...
          • Comment removed based on user account deletion
            • Heads up: The x264 project's incorporating OpenCL support for certain parts of the encoder. Take a look over here [anandtech.com] - initial results are very promising.
        • Your question is a bit... difficult.

          GPUs can definitely excel at many forms of video processing. Encoding, thus far, hasn't proved to be one of them. Currently, CPU-based encoders are faster and of significantly higher quality. I'm sure someone smart will make a fantastic GPU-based encoder eventually, but so far nobody has come close. A few companies have lied and/or used faulty benchmarking to help people believe they have, though!

    • Really? I got a nice combo bundle (case, PSU, AMD "APU", ram, motherboard) for $120, shipped. It runs Diablo 3 and all of my steam games with no trouble. What can you get me with Intel offerings that can do the same, at that price?
      • by Pulzar ( 81031 )

        What can you get me with Intel offerings that can do the same, at that price?

        Probably nothing. The problem is that AMD hardly makes any money selling you that whole setup, and there are too few of you who need something like that for them to make it up on volume.

        It's not that their stuff is awful. It's just that they can't sell the cheap stuff at enough of a profit, and they don't have expensive stuff to make up for it.

        The business side of the company is failing.

        • Re: (Score:2, Informative)

          by Anonymous Coward

          Intel completely dominates AMD in terms of process tech, but due to antitrust concerns, they tweak their prices so that AMD can stay barely alive in the "budget segment".

          In the last 20 years, AMD had the best parts for only 2 years, and was in the running for maybe another 3-4. The game has always been rigged in Intel's favor.

          That may be true in the consumer space, but it is nowhere close in the server space.

            We recently specced a pair of Dell servers (PowerEdge R810 and R815), both with 256GB of RAM. The difference between the R810, with dual 6-core CPUs, and the R815 with dual 16-core AMDs? About $7500 per server. Both the CPUs and the RAM are much, much cheaper. The RAM might run slightly slower, but since we're mainly using it for "buffer" space in Oracle, we don't care... it's still 1000X faster than disk. And our software doesn't need...

        • There are a number of documented cases in the past few years when Intel paid major PC makers to only carry their own chips.

          Regardless of the morality of that, the result was that it really hurt AMD sales, and in turn that prevented them from getting the investment capital they needed to keep improving their products.

          Of course, the counterpoint is that AMD failed to make a compelling case to the major PC vendors that dropping AMD products was a serious mistake. That's competition at work. But I dislike...
    • Comment removed based on user account deletion
      • by afidel ( 530433 ) on Wednesday August 29, 2012 @12:44AM (#41162287)

        or Windows will treat it as hyperthreading and tie a nice boat anchor to your new chip.

        Actually it's the opposite: the system SHOULD be treating the co-cores like HT units and not scheduling demanding jobs on adjacent cores (at least not ones that both need the FP unit or lots of decode operations). The problem is that AMD basically lied to the OS, telling it that every core is the same and that it can go ahead and schedule anything wherever it wants. If they had just marked the second portion of each co-core as an HT unit, the normal scheduler optimizations would have handled 99% of cases correctly. In reality, BD's problem wasn't so much the gaffe with the co-cores (though that certainly didn't help) as the fact that GlobalFoundries is more than a process node behind Intel (one node plus 3D transistors).
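
        (For illustration only -- a Linux-specific sketch of my own, not AMD's or Microsoft's code: the sibling relationships a scheduler works from are visible in sysfs, so you can see exactly which logical CPUs the kernel pairs up as HT-style siblings.)

          #include <stdio.h>

          int main(void) {
              char path[128], buf[64];
              for (int cpu = 0; cpu < 256; cpu++) {
                  snprintf(path, sizeof path,
                           "/sys/devices/system/cpu/cpu%d/topology/thread_siblings_list",
                           cpu);
                  FILE *f = fopen(path, "r");
                  if (!f)
                      break;               /* past the last CPU */
                  if (fgets(buf, sizeof buf, f))
                      printf("cpu%d shares a core with: %s", cpu, buf);
                  fclose(f);
              }
              return 0;
          }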

        • Comment removed based on user account deletion
          • Personally I thought the whole idea was retarded except for the mobile chips like Brazos; on the desktop the idea was completely stupid, and on the server even more so. For those who don't know, the original plan was to go "full APU" and have the GPU take the place of the FPU on chip, which would be a much simpler and weaker design than in years past, thus freeing up more TDP for more cores. Why is this dumb? Well, what if you want to use the GPU AND do some floating-point-heavy task? Or what if you don't want the integrated GPU because you can't OC worth a crap with the GPU built in?

            All correct, but I could live with those aspects. I usually don't OC, and if I know I want the GPU AND do some floating point heavy task, I could get an additional discrete GPU. There is, however, a worse one:

            Memory bandwidth congestion. A typical lower-midrange graphics card with a 128-bit data bus and GDDR3 is significantly slower than the same model with GDDR5. In an APU, the GPU part has to share the even lower bandwidth of DDR3 main memory with the CPU part. (Illustrative numbers: dual-channel DDR3-1600 peaks at about 25.6 GB/s, shared between CPU and GPU, while even a modest 128-bit GDDR5 card at 4 GT/s gets roughly 64 GB/s to itself.)

            When Llano was new, Anandtech published...

          • >I'm sure AMD fanboys will....

            *perk*

            There are still AMD fanboys? Where? ;-P

      • "After all who is gonna want to buy a system that has to get stuck with Win 8 just to have it run correctly?"

        Guess it depends on what one wants to do. I was under the impression that the patches for BD had been included in the last Linux kernel or two (not that it'll help AMD's bottom line viz. market percentage.) As for Win8, if nothing else a third-party dev will have a 'Metro' app with a "click/touch/punch/yell here to get to a real desktop" icon.

        I like a lot of what Intel has been doing recently with...

      • otherwise AMD is stuck with a chip that will only run correctly on an OS that looks to be the most hated Windows since MS Bob.

        WTF are you talking about? Nearly all OSes work just fine with Bulldozer modules. You just happened to cherry-pick three examples that don't, and one that does but which you happen not to like.

        Interesting that all 4 OSes you mentioned just happen to be from one team/company.

        You remind me of the kind of people who complain about Democrats and Republicans, and then go out and vote...

  • I think 15% would put them about even with Intel. That means it's a toss-up, except, oh wait, their boards are ungodly expensive. I really don't know why; probably chipset manufacturing cost or something. A really nice MSI B75 board with all-metal caps is $65, and my usual H77MA-G43 board is a mere $90. All the AMD ones I looked at that had the features I wanted (basically the same ones as those last two I mentioned) are $100 and usually well over a hundred. Just to get SATA III at all was terribly expensive...
    • Huh, to be honest I've not looked at motherboards in a while, but I've never been able to get a board I wanted for less than 100 dollars, and if I could I'd be pretty skeptical. Then again, Elitegroup pretty much ruined my opinion of cheap motherboards.
    • by Joe_Dragon ( 2206452 ) on Tuesday August 28, 2012 @09:38PM (#41160967)

      AMD boards give you more PCI-E lanes than Intel chips.

      With Intel you need to go high-end to get more than 16 lanes + DMI.

      • by smash ( 1351 )
        The thing is, the low end doesn't care about masses of PCIe lanes; they run integrated video. The high end wants a fast CPU as well.
      • Sorry, but lots of PCIe lanes just don't matter to non-high-end users; they matter only to people who focus on stats rather than real-world performance. To even have a situation on a desktop board where it could theoretically matter, you have to have multiple graphics cards. The 1x slots hang off the southbridge and have their own bandwidth, separate from the lanes the CPU provides for the video card. So if you stick in two GPUs, then yes, you don't have enough to give them both 16 lanes.

        However it turns out...

      • by Z34107 ( 925136 )

        What kind of workload needs more than 16 PCIe lanes, but doesn't similarly need a higher-end processor?

        • by Targon ( 17348 )

          I am guessing those who want to do more GPGPU type stuff, so if you get a supported video card or multiple video cards, then you can potentially get some great performance. Of course, if running dual-GPU stuff is what you want, you should NOT be bothering with a cheap motherboard.

    • I don't buy MSI or ECS as a general rule for any chip... additionally, there's pretty much feature parity for price in AMD vs Intel boards. Not sure where you're shopping.

    • by Osgeld ( 1900440 )

      I just got a Gigabyte board with dual PCI-e and 4 RAM slots (1833, or 2000 if you OC it a little), with all the latest buzzwords, for like 70 bucks... you need to shop some more.

    • I got my Asus SATA3/USB3/Firewire2 AM2+/AM3 board in 2010 for $85, so I have no fucking idea what you're talking about.
    • Every time Intel vs AMD comes up, some complete dope claims that Intel boards are cheaper.

      Thanks for keeping the tradition of dopes alive.
    • by Targon ( 17348 )

      You have missed the difference between the higher-end "expensive" boards and the lower-end consumer boards. Things have changed a fair bit over the past two years, since AMD's consumer-level processors are the A series (E and C make no sense on the desktop), with the GPU built into the CPU. The socket AM3 and AM3+ boards are intended for higher-performing machines (video card, not integrated video), so you end up paying more in that segment these days.

      You can find cheap boards that support...

  • ... or AMD is facing irrelevance. ARM is eating their lunch in mobile, the Core series is eating their lunch on the desktop, and the Atom isn't standing still in the low-power market.

    Intel's integrated GPUs are now "good enough" for most people. Those who game won't want integrated AMD if integrated Intel isn't good enough...

    • by Osgeld ( 1900440 )

      AMD is Intel's only direct competition in the desktop market; they are not going anywhere.

      • by smash ( 1351 )
        Unless they start turning a profit (i.e., Steamroller actually works this time), they won't be able to afford new fabs, or the investment in CPU design required to even try to keep up. Things have not been good for AMD lately.
        • by afidel ( 530433 )

          AMD doesn't own any fabs; those were spun off into GlobalFoundries, and AMD has made some noise about moving to TSMC for their next CPU, despite TSMC having its own problems at the current process node and the fact that AMD will take a hit on the stock they own in GF.

    • by dywolf ( 2673597 )

      The AMD APU chips are pretty damn good. I used one for my HTPC, and it runs Diablo 3 and WoW, and most anything else I throw at it, more than acceptably. Now, for me, acceptable on an HTPC doesn't mean everything maxxed. But it's an HTPC, in the living room, a second machine to complement my other rig. So I don't care about being maxxed.

      I -love- AMD. I haven't used Intel since the first system I built, with a Pentium 3, and that system gave me nothing but grief. My current rig uses a Black Edition chip, forget which...

      • by Targon ( 17348 )

        You are dealing with some outdated information. The AMD three-core processors were mostly gone by the time the Phenom II generation came out and the process technology was a bit more mature. New designs are always problematic, so more "failures" are expected. The Bulldozer issues are the same way: the initial batch of a new design was a bit problematic, which Piledriver will fix.

        Notice that the A10 parts from AMD have NOT had production issues, and those are based on Piledriver, so now it is just about...

        • by smash ( 1351 )
          I just don't understand how AMD could have gotten to the point of volume production without first noticing the inferior performance relative to their previous generation. Do they do no testing? It's not like Windows 7 is rare or hard to obtain.
      • by smash ( 1351 )
        The HD3000 can run Diablo 3, WoW, etc. as well. In terms of CPU, though, Intel kicks butt at the moment.
  • by BadgerRush ( 2648589 ) on Tuesday August 28, 2012 @09:57PM (#41161127)
    AMD may be getting its shit together in regards to chip design, but I'm still going Intel on my next PC because of their superior Linux drivers. At the moment I'm the unhappy owner of a laptop with an AMD graphics card that can't do anything because the drivers are useless. I'm looking forward to a new laptop with an Intel Ivy Bridge processor (I don't think I can wait for Haswell).
    • by cbope ( 130292 )

      Wait a minute... weren't all the ATI (now AMD) fanboys claiming a couple of years ago that because ATI was developing more "open" drivers, they would rule the Linux landscape?

      • by thue ( 121682 )

        While AMD is releasing documentation, Intel is releasing actual open source drivers. And now that Intel's graphics hardware is no longer a complete joke, Intel is becoming a real alternative for some users.

        AMD is still better than NVIDIA, which doesn't release documentation.

        • Maybe in principle, but in my experience using the hardware, the drivers that NVIDIA is providing are far superior to the AMD drivers available for all but the most basic uses. This seems to be the general consensus, at least where I tend to spend my time.

          If you're more concerned about software freedom than I am, maybe you'd rather have AMD. My Linux boxes are much happier with NVIDIA, especially my HTPC. If I get enough cash to throw at it, I might try a low-power Ivy Bridge or one of the new Atoms for...

        • by cynyr ( 703126 )

          but I like my GPUs to draw 3D things (mostly via Wine), so I got an NVIDIA.

      • weren't all the ATI (now AMD) fanboys claiming a couple of years ago that because ATI was developing more "open" drivers, they would rule the Linux landscape?

        That merely killed Nvidia. Intel dodged that bullet and then shot it right back at AMD, better aimed and with an explosive warhead attached. It passed through AMD (wounding them) and then exploded inside Nvidia's skull.

        Open is necessary, but if you intend to open (AMD) while your competitor (Intel) actually does it and then also writes the drivers...

      • Well, they could be using Intel CPUs and AMD GPUs.
    • by Jeng ( 926980 )

      This story is about AMD CPUs, not AMD GPUs.

      Currently, Linux supports the features of AMD's current CPUs better than Windows does.

  • Is that new resonant clock mesh technology still planned to launch with this new series? I remember reading they were planning to break the 4GHz barrier [slashdot.org]?
    • by DL117 ( 2138600 )

      What 4GHz barrier? I have an Intel i7-3770K at 4.6GHz, and before that I had a Bulldozer at 4.5GHz (which I promptly returned due to its horribleness).

      • by Targon ( 17348 )

        Overclocking and a stock speed of 4GHz are two different things. There WERE some issues that held back overclocking in older chips (4GHz was almost a hard limit for some reason), but that has been fixed in the newer chips. Still, the AMD FX-8350, running a stock 4GHz with turbo mode to 4.2GHz, should be interesting to see with the new Piledriver improvements, just because it may fix all the performance problems with Bulldozer.

  • Having dedicated decoders for each integer unit is definitely on the right track, but a shared fetch stage still means there is a bottleneck in keeping those cores fed. Also, apparently the changes hurt L1 performance, so they had to add some cache to make up for it. So there is some room for improvement, and this does help. However, I just don't see it as the big step that AMD needs against Intel. Intel's dies are smaller, they are making better use of space, and this is a huge advantage. Intel has 10-core dies, and...

    • Bah, what AMD needs to do is just keep doubling cores on the Phenom line of chips. A 12-core Phenom III in the next 12 months could keep them going for another two years or so. Of course, they'd worry that it might impact their server offerings, but let's be honest: I've looked at their server offerings, and while I love the number of cores, I need at least 3GHz cores; the 2.1GHz cores make me question whether it's worth just buying multiple machines with Phenoms, as opposed to buying Magny-Cours (or...

      • by Targon ( 17348 )

        Piledriver (not in an APU) comes out in October of THIS year (2012), with the 8350 set to be released at 4GHz and a turbo mode of 4.2GHz without overclocking. Steamroller will be the next step after Piledriver, coming next year. It is almost a given that improvements to performance per watt will happen every YEAR, so what comes out with a 125-watt max this year will be a 90-watt chip next year at the same performance, possibly even going below that depending on improvements in process technology...
