AMD's Next-Gen Steamroller CPU Could Deliver Where Bulldozer Fell Short
MojoKid writes "Today at the Hot Chips Symposium, AMD CTO Mark Papermaster is taking the wraps off the company's upcoming CPU core, codenamed Steamroller. Steamroller is the third iteration of AMD's Bulldozer architecture and an extremely important part for AMD. Bulldozer, which launched just over a year ago, was a disappointment. The company's second-generation Bulldozer implementation, codenamed Piledriver, offered a number of key changes and was incorporated into the Trinity APU family that debuted last spring. Steamroller is the first refresh of Bulldozer's underlying architecture and may finally deliver the sort of performance and efficiency AMD was aiming for when it built Bulldozer in the first place. Enhancements have been made to the fetch and decode architecture, along with improvements to scheduler efficiency and cache load latency, which together could bring a claimed 15 percent gain in performance per watt. AMD expects to ship Steamroller sometime in 2013 but wouldn't offer timing details beyond that."
AMD has cool code names. (Score:5, Funny)
Re:AMD has cool code names. (Score:4, Funny)
And Intel ones don't? Who are you kidding?
Aladdin
Bad Axe
Bad Axe 2
Batman
Batman's Revenge
Big Laurel
Black Pine (a cute name for anal sex I guess)
Black Rapids (I don't want to know)
Bonetrail
Caneland
Cougar Canyon
Glidewell
Tanglewood (sounds bi to me)
Warm Springs
and last, but never least, the
Windmill (also known as "Helicopter Dick")
Re: (Score:2)
> but AMD is still sadly lagging years behind Intel.
Exactly, so a promised 15% increase in efficiency next year is not going to cut it. Intel has an advantage of about 50%, and they will probably deliver improvements by next year, too.
So for me, this message says that AMD has lost the race.
Re:AMD has cool code names. (Score:5, Informative)
For some tasks, when you can get 640 slightly slower cores (the ten-core Intel chips have a lower clock than the ones with fewer cores) for the same price as 80, it's pretty easy to see which way to go. If anything is massively parallel you can forget about Intel at this point.
Re:AMD has cool code names. (Score:5, Interesting)
Indeed. I think AMD is actually far ahead of Intel (again, think e.g. integrated memory controller; for quite a few server loads Intel was vastly behind for a time due to that). The speed increases of CPUs have become slower and slower and matter less and less. The trick for AMD will be to survive intact until Intel gives up and gets a next-gen architecture of their own. By then AMD will have ironed out the kinks and they will be on an equal footing again. When looking at their relative sizes and cash reserves, it is impressive that AMD can compete at all. But the bottom line is that in almost all cases (exception: you need a small number of CPUs with high power because your software is stupid, and cost of the CPUs is not an issue) you get significantly better value for the money from AMD.
Re: (Score:2)
Reading comprehension failure (Score:2)
One more thing (Score:2)
The above costs are from quotes for complete servers, so the cost difference is as described. Do I have to put things in bold with illustrations in this place?
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Sounds to me like they're all machines that can get massive amounts of work done in a relatively short time (compared to any alternative).
Re: (Score:2)
One thing that many need to consider is that an eight-core processor will be excellent for multi-threaded applications, but if the implementation isn't great or the OS does not talk to it well, it may not be as fast in older applications. Now, Bulldozer left a lot of people unsatisfied, mostly because the performance was NOT where it should have been. Piledriver itself should provide a 10-15 percent performance boost over Bulldozer, and will be available in another few months (FX 8350 from what I have re
Re: (Score:2)
Performance per watt is mostly important in mobile applications, especially laptops, where battery life is something that people really do care about. I've never heard of people caring about performance per watt for non-mobile systems with the exception of very low power builds. I went with a 25W Athlon II X2 for my router mostly because it does have very little power draw. It's not very relevant for the desktop/server market, but it's still very important to other markets - especially the very large mobile
Re: (Score:2)
I'd rather make a getaway on top of a steamroller instead of running with a clawhammer between my legs.
Re: (Score:2)
Re: (Score:2)
You could at least link to the original source:
http://theoatmeal.com/comics/literally [theoatmeal.com]
Re: (Score:2)
They need to innovate (Score:5, Insightful)
Things like hitting the 1GHz mark first, and making a workable 64-bit chip that also speaks x86, only get you so far. AMD needs to come up with something cool, else they're doomed to play catch-up.
Re:They need to innovate (Score:5, Insightful)
Re: (Score:2)
Comment removed (Score:4, Interesting)
Re:They need to innovate (Score:5, Interesting)
Ya know what? There's nothing wrong with cheap and "good enough"; the problem has been that their new designs are cheap and shitty thanks to that lame "half core" they went for.
You take a good 85%+ of the people out there and a MOR AMD Deneb quad will frankly be twiddling its thumbs because it will blow through any jobs that they have, even gaming, even more so for Thuban. And their Brazos chips were fricking great, an APU designed for mobile video and basic tasks that got great battery life while often being cheaper than an Atom+ION setup.
I've sold many an Athlon II and Phenom II and the people are damned happy with them, they just blast through everything they want to do with plenty of cycles left over. I even put my money where my mouth is with regards to my family, me and the oldest are gaming on Thubans while the youngest took my Deneb, and they blow through any game we throw at 'em.
I see from TFA they've partially dropped the "half core" design, but I can only hope that with Piledriver they'll drive a stake through it, as for most of the people I've talked to Win 8 is a DO NOT WANT, yet the half core scheduler bug is only fixed in Win 8. Meh, hopefully I'll still be able to get enough Thuban, Deneb, and Llano chips to get me through the whole BD/SR phase, and the new Apple chip designer they hired will give us another Athlon64. One can hope after all.
This. I have a six-core 1055T. Bought it to overclock and it does hit 4GHz stable on air, but guess what? I run it at stock 2.8GHz. Why? Because 99.9% of the time six cores at 2.8GHz is more than enough. Even games run perfectly. CPUs have finally reached the point where faster isn't better anymore; it's power usage and heat output that matter. I'd rather have it run cool using little power at stock than run it full blast all the time, sucking watts and heating the room at 4GHz I'm not even using.
When I bought this, Intel didn't have anything close in price that performed as well. Sure I could have spent double and bought a faster Intel chip, but why? What was the point of spending more on something I wouldn't use? I'd rather spend the $ on an SSD for real performance gains than extra GHz I'd never use. So I bought AMD and I'll probably do it again next year if the price is reasonable and the speed is "good enough"
Comment removed (Score:4, Interesting)
Re: (Score:2)
Why? Because 99.9% of the time six cores at 2.8ghz is more than enough. Even games run perfectly
There are virtually no games that actually use 6 cores though, so 2 of those cores just sit idle doing nothing. There are a lot of games that still only use 2 cores too, for example Skyrim (one of the most popular games in recent years):
http://www.tomshardware.com/reviews/skyrim-performance-benchmark,3074-9.html [tomshardware.com]
That's one of the big problems with Bulldozer. In programs that can use all 8 cores it is somewhat competitive with 4 Intel cores, but in the programs that can't use 8 cores (quite a few) they get
Re: (Score:2)
I'm building a new PC next year, if the Steamroller chips aren't really good both on performance and on price, I will
Re: (Score:3)
Actually, the "best gaming CPU for the money" article to which you refer only gives the FX-4170 a rating of "Honorable Mention". These are Tom's Hardware recommendations for gaming CPUs at varying price points (from the July version of the article):
~$70: Pentium G630
$100: Pentium G870
$110: None (FX-4170 Honorable Mention)
$125: Core i3-2120
$180: Core i5-2310
$200: Core i5-3450
$230: Core i5-3570K
$590: Core i7-3930K
Sadly, the best desktop CPU AMD has to offer is bested by the lowly Core i3, and is crushed by a
Re: (Score:2)
Re: (Score:2)
Agreed. AMD give you 90% the power of Intel for 50% the price.
I built my parents a computer to replace the old Athlon XP. I could have gotten a Core 2 Duo, or a Phenom II X3. Guess which one I got?
I actually told them that if they want to upgrade (new Civilization game out, seems to run a bit slow), the Phenom X4 and X6 chips are dirt-cheap now. Haven't heard back from them yet.
Re: (Score:2)
I guess their saving grace is they can weld a real GPU to it, then beat the GPU benchmarks for Intel's welded on GPU.
You may not have noticed, but Intel is fast closing the gap in integrated GPU performance. They are catching up to AMD on the integrated GPU front much faster than AMD is catching up to Intel on the CPU side.
Re: (Score:2)
Only in select tests; in most situations, Intel graphics still SUCK, and their drivers just can't cut it. Intel keeps claiming PLANS for something much faster, but outside of artificial benchmarks, Intel still has a LONG way to go. AMD also does not have much to really worry about, because at ANY time they can put a better GPU in new APUs. There is also something called graphics quality, where the speed doesn't mean much if what you are looking at is ugly.
Re: (Score:2)
Re: (Score:2)
Re:They need to innovate (Score:5, Insightful)
The migration from x86 has already started, actually - the architecture they're moving to is ARM. (After all, there are more ARM-based SoCs shipped than x86 CPUs - every PC includes one or more ARM cores doing something).
But on a more user level - tablets and smartphones are becoming the computing platforms of the day, all running ARM processors, regardless of whether they run iOS or Android. Developers have embraced it and are cranking out tons of apps and games and other stuff for this. It's so scary that Intel's investing a lot of money bringing Android to x86 because the writing's on the wall (when more phones and tablets ship than PCs...)
But x86 won't die - it has a raw performance advantage that ARM has yet to reach, so for computation-heavy operations like databases, it'll be the heavy lifter. Perhaps serving an entire array of ARM frontend webservers.
Re:They need to innovate (Score:5, Interesting)
This is just so weird. 20 years ago it was Alpha, MIPS, SPARC, PA-RISC, etc. that were the ones counted on to do all the heavy-lifting backend, HPC stuff. x86 was kind of a joke that everyone frowned upon but tolerated because it was cheap and did the job adequately for the price. Then x86 steamrolled through. Now no more Alpha or PA-RISC. MIPS is relegated to low-power applications (my router has one).
AMD's in deep trouble with Steamroller (Score:5, Interesting)
I think AMD's work here will provide some great evolutionary speedups that will be significant to many people. Unfortunately for them, at the same time AMD is bringing out these small "free lunch" general improvements, Intel will be bringing out Haswell -- which in addition to such evolutionary improvements has some really fantastic, significant new features that'll provide remarkable performance boosts.
These are all pretty specialized features, yes, but they service some very high-profile benchmark areas: video processing and concurrency are always on the list, and AMD will get absolutely crushed when apps start taking advantage of them.
I'm a developer, a major optimization geek both micro- and macro-. I thrive on playing with instruction latencies, execution units, and cache usage until my code ekes out as much performance as possible. Of course we'll never know until the CPUs are released for everyone to play with, but right now my money is on Intel.
AMD is in serious trouble here. I hope I'm wrong.
Re:AMD's in deep trouble with Steamroller (Score:4, Interesting)
I'm a developer, a major optimization geek both micro- and macro-. I thrive on playing with instruction latencies, execution units, and cache usage until my code ekes out as much performance as possible. Of course we'll never know until the CPUs are released for everyone to play with, but right now my money is on Intel.
Yeah, I'm a developer too. However, my simulations run on desktops not super computers so it doesn't matter how optimal the code is on a single particular piece of hardware... Wake me up when there's a cross platform intermediary "object code" I can distribute that gets optimised and linked/compiled at installation time for the exact hardware my games will be running on.
We need software innovation (OS's and Compilers) otherwise I'm coding tons of cases for specific hardware features that aren't available on every platform, and are outpaced by the doubly powerful machine that comes out 18 months later... In short: It's not worth doing all that code optimisation for each and every chip released. This is also why Free Software is so nice: I release the cross platform source code, you compile it, and it's optimised for your hardware... However, most folks actually just download the generic architecture binary, defeating the per processor optimisation benefits.
Like I said: in addition to hardware improvements we need a better cross-platform intermediary binary format (so that both closed and open projects benefit). You know, kind of like how Android's Dalvik bytecode works (processed at installation), except without needing a VM when you're done. I've got one of my toy languages doing this, but it requires a compiler/interpreter to be already installed (which is fine for me, but in general suffers the same problems as Java). MS is going with .NET, but that's some slow crap in terms of "high-performance" and still uses a VM.
Besides, I thought it was rule #2: Never optimise prematurely?
(I guess the exception is: Unless you're only developing for yourself...)
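To make that gripe concrete, the per-hardware special-casing described above usually ends up looking roughly like the sketch below (assuming GCC or Clang on x86; the scale_* helpers are hypothetical stand-ins, not code from any real project):

/* Sketch of runtime feature dispatch -- the "tons of cases for specific
 * hardware features" the parent is complaining about. Assumes GCC or Clang
 * on x86 (__builtin_cpu_supports); the scale_* helpers are placeholders. */
#include <stddef.h>

static void scale_scalar(float *v, size_t n) { for (size_t i = 0; i < n; i++) v[i] *= 2.0f; }
static void scale_avx(float *v, size_t n)    { scale_scalar(v, n); /* hand-tuned AVX path would go here */ }
static void scale_avx2(float *v, size_t n)   { scale_scalar(v, n); /* hand-tuned AVX2 path would go here */ }

void scale(float *v, size_t n)
{
    __builtin_cpu_init();                       /* initialize the CPU feature checks */
    if (__builtin_cpu_supports("avx2"))
        scale_avx2(v, n);
    else if (__builtin_cpu_supports("avx"))
        scale_avx(v, n);
    else
        scale_scalar(v, n);
}

Every new instruction set means another branch and another hand-tuned body to maintain, which is exactly why an install-time (or first-run) compile from a portable intermediate form is so appealing.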
Re: (Score:3)
*Looks around* AAAAAAAnd, how does this AVX-256 compare to OpenCL transcoding of video?
Re: (Score:2, Insightful)
*Looks around* AAAAAAAnd, how does this AVX-256 compare to OpenCL transcoding of video?
That's a stupid question. OpenCL by itself does nothing whatsoever to improve video transcoding. OpenCL is an API, so the performance of an OpenCL kernel for video transcoding highly depends on which hardware you're running it on. On Intel CPUs supporting AVX-256, OpenCL kernels will be compiled to use those instructions (if Intel keeps updating its OpenCL SDK), on GPUs and APUs it will use whatever the respective platforms use. What OpenCL does is make it easier to exploit AVX-256, just as it makes it eas
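To make that concrete, here's a minimal OpenCL host sketch in C (a rough illustration only, assuming an OpenCL 1.x runtime and headers are installed; the kernel is a trivial placeholder, nothing like a real transcoder). The same kernel source gets built for whichever device is picked, GPU or CPU:

/* Minimal OpenCL host sketch: the same trivial kernel can be queued on a
 * GPU device or a CPU device; throughput depends entirely on the hardware.
 * Assumes an OpenCL 1.x SDK. Build (roughly): cc ocl_demo.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void scale(__global float *buf) {\n"
    "    size_t i = get_global_id(0);\n"
    "    buf[i] *= 2.0f;\n"
    "}\n";

int main(void)
{
    cl_platform_id plat;
    cl_device_id dev;
    cl_int err;
    float data[1024];

    for (int i = 0; i < 1024; i++) data[i] = (float)i;

    clGetPlatformIDs(1, &plat, NULL);

    /* Prefer a GPU; fall back to the CPU device if the platform has none. */
    err = clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    if (err != CL_SUCCESS)
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_CPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);  /* compiled for the chosen device here */
    cl_kernel k = clCreateKernel(prog, "scale", &err);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, &err);
    clSetKernelArg(k, 0, sizeof(buf), &buf);

    size_t global = 1024;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("data[2] = %f\n", data[2]);  /* expect 4.0 */

    clReleaseMemObject(buf);
    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}

On a CPU device the runtime's compiler is free to vectorize that kernel with AVX; on a GPU it compiles to the GPU's own ISA. Either way the host code doesn't change, which is the point being made above.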
Re: (Score:2)
I believe the point is that an OpenCL transcoding algorithm running on a typical GPU will make doing it on a CPU look silly and pointless, so who cares how fast the CPU can do it when you're going to do it on the GPU anyway?
What are you talking about? (Score:2)
That is like asking "What colour do you want this database in?" OpenCL doesn't do anything for transcoding video, it is an API for talking to graphics cards. Now, GPUs can be used for video transcoding, of course, using OpenCL or other APIs (CUDA, DirectCompute). However how well they do depends on what the graphics card is. An AMD 7970, that'll throw down some serious performance. An ATi 3400 doesn't even have driver support for OpenCL and if it did would be very slow.
So there isn't going to be any way to
Re: (Score:3)
+10 for being pedantic (the best kind of correct, technically correct), -1000 for knowing exactly what I was groping for, but choosing to be pedantic.
Just got back from a late-night concert, and my head hasn't stopped pounding yet (and there is some question of sobriety -> Jimmy Buffett with margaritas). Besides, and I am summoning my inner BOFH here, who teh f*ck would run OpenCL code on a CPU? I've tried, and the only thing I've succeeded in doing is giving my laptop a grand mal seizure.
And no one sane
Re: (Score:3)
+10 for being pedantic (the best kind of correct, technically correct), -1000 for knowing exactly what I was groping for, but choosing to be pedantic.
Welcome to Slashdot!
Re: (Score:2)
It was a great Buffett concert, wasn't it? But I'm getting too old for this, after two hours I couldn't hear much beyond the ringing in my ears. I still have a bit of it today. And also, I thought movie thea
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
Your question is a bit... difficult.
GPUs can definitely excel at many forms of video processing. Encoding, thus far, hasn't proved to be one of them. Currently, CPU-based encoders are faster and of significantly higher quality. I'm sure someone smart will make a fantastic GPU-based encoder eventually, but so far nobody has come close. A few companies have lied and/or used faulty benchmarking to help people believe they have, though!
Re: (Score:3)
Re: (Score:2)
Probably nothing. The problem is that AMD hardly makes anything on selling you that whole setup, and there are too few of you who need something like that to make it up in selling huge volumes.
It's not that their stuff is awful. It's just that they can't sell the cheap stuff at enough of a profit, and they don't have expensive stuff to make up for it.
The business side of the company is failing.
Re: (Score:2, Informative)
Intel completely dominates AMD in terms of process tech, but due to antitrust concerns, they tweak their prices so that AMD can stay barely alive in the "budget segment".
In the last 20 years, AMD had the best parts for only 2 years, and were in the running for maybe another 3-4 years. The game has always been rigged in Intel's favor.
Re: (Score:3)
That may be true in the consumer space, but is not anywhere close in the server space.
We recently specced a pair of Dell servers (PowerEdge 810 and 815), both with 256GB of RAM. Difference between the 810, with dual 6-core CPUs, and the dual AMD 16-cores? About $7500 per server. Both the CPU and the RAM are much, much cheaper. The RAM might run slightly slower, but since we're mainly using it for "buffer" space in Oracle, we don't care... it's still 1000X faster than disk. And our software doesn't need
Re: (Score:2)
Regardless of the morality of that, the result was that it really hurt AMD sales, and in turn that prevented them from getting the investment capital they needed to keep improving their products.
Of course, the counter point is that AMD failed to make a compelling case to the major PC vendors that dropping AMD products was a serious mistake. That's competition at work. But I disl
Re: (Score:3)
Re:They need to innovate (Score:4, Interesting)
or Windows will treat it as hyperthreading and tie a nice boat anchor to your new chip.
Actually it's the opposite: the system SHOULD be treating the co-cores like HT units and not scheduling demanding jobs on adjacent cores (at least not ones that both need the FP unit or lots of decode operations). The problem is that AMD basically lied to the OS and told it that every core is the same and that it can go ahead and schedule anything wherever it wants. If they had just marked the second portion of each co-core as an HT unit, the normal scheduler optimizations would have basically handled 99% of cases correctly. In reality BD's problem wasn't so much the gaffe with the co-cores (though that certainly didn't help things), but that GlobalFoundries is more than a process node behind Intel (one node plus 3D transistors).
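For anyone curious what the workaround looks like by hand, here's a minimal sketch in C (assuming Linux with glibc, and assuming logical CPUs 0/1, 2/3, ... pair up into shared modules -- that numbering varies by BIOS and kernel, so check /proc/cpuinfo or lstopo first):

/* Sketch: pin two FP-heavy threads onto different Bulldozer modules so they
 * don't fight over one shared FPU. Assumes Linux/glibc and the 0/1, 2/3
 * module pairing described above. Build: cc pin.c -lpthread -lm */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <math.h>

static void *fp_heavy(void *arg)
{
    double x = 1.0;
    for (long i = 0; i < 50000000L; i++)      /* stand-in for real FP work */
        x = sqrt(x + 1.0);
    printf("worker finished on CPU %d (%f)\n", sched_getcpu(), x);
    return NULL;
}

static void pin_to_cpu(pthread_t t, int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(t, sizeof(set), &set);
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, fp_heavy, NULL);
    pthread_create(&b, NULL, fp_heavy, NULL);

    /* CPUs 0 and 2 sit in different modules under the assumed layout, so each
     * thread gets a whole FPU and front end instead of sharing one. */
    pin_to_cpu(a, 0);
    pin_to_cpu(b, 2);

    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}

Marking the second core of each module as an HT sibling would let the stock scheduler make the same call automatically, which is the parent's point.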
Re: (Score:3)
One more reason why APU in the high end is stupid (Score:2)
Personally I thought the whole idea was retarded except for the mobile chips like Brazos; on the desktop the idea was completely stupid, and on the server even more so. For those that don't know, the original plan was to go "Full APU" and have the GPU take the place of the FP unit on chip, which would be a much simpler and weaker design than in years past, thus freeing up more TDP for more cores. Why is this dumb? Well, what if you want to use the GPU AND do some floating point heavy task? Or what if you don't want the integrated GPU because you can't OC worth a crap with the GPU built in?
All correct, but I could live with those aspects. I usually don't OC, and if I know I want the GPU AND do some floating point heavy task, I could get an additional discrete GPU. There is, however, a worse one:
Memory bandwidth congestion. A typical lower midrange graphics card with 128 bit data bus and GDDR3 is significantly slower than the same model with GDDR5. In an APU, the GPU part has to share the even lower bandwidth of the DDR3 main memory with the CPU part.
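Some rough numbers to put that in perspective (assuming typical parts of that era): dual-channel DDR3-1600 tops out around 2 x 64 bit x 1600 MT/s / 8 = 25.6 GB/s, and the CPU and GPU halves of the APU have to split that. A 128-bit GDDR5 card running at an effective 4.0 Gbps per pin gets roughly 128 x 4.0 / 8 = 64 GB/s all to itself, which is why the discrete version of the same GPU pulls away.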
When the Llano was new, Anandtech published
Re: (Score:2)
Re: (Score:2)
>I'm sure AMD fanboys will....
*perk*
There are still AMD fanboys? Where? ;-P
Re: (Score:2)
"After all who is gonna want to buy a system that has to get stuck with Win 8 just to have it run correctly?"
Guess it depends on what one wants to do. I was under the impression that the patches for BD had been included in the last Linux kernel or two (not that it'll help AMD's bottom line vis-a-vis market percentage.) As for Win8, if nothing else a third-party dev will have a 'Metro' app with a "click/touch/punch/yell here to get to a real desktop" icon.
I like a lot of what Intel has been doing recently with
Re: (Score:2)
Piledriver you mean. Steamroller will be the next generation after Piledriver (which is due in October of 2012).
Re: (Score:2)
Re: (Score:2)
WTF are you talking about? Nearly all OSes work just fine with Bulldozer modules. You just happened to cherry-pick three examples that don't, and one that does but which you happen to not like.
Interesting that all 4 OSes you mentioned, just happen to be from one team/company.
You remind me of the kind of people who complain about Democrats and Republicans, and then go out and vote
Re: (Score:2)
it's the boards! (Score:2)
Re: (Score:2)
AMD boards have better PCI-E lanes than Intel chips (Score:4, Informative)
AMD boards have better PCI-E lanes than Intel chips.
With Intel you need to go high end to get more than 16 lanes + DMI
Re: (Score:3)
And an average user would care because? (Score:3)
Sorry, but lots of PCIe lanes just aren't the kind of thing that matters to anyone but high-end users or people who focus on stats rather than real world performance. To even have a situation on a desktop board where it could theoretically matter you have to have multiple graphics cards. The 1x slots hang off the southbridge and have their own bandwidth separate from the lanes on the CPU for the video card. So if you stick on two GPUs then yes, you don't have enough to give them both 16 lanes.
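For scale (assuming PCIe 2.0, which is what these boards carry): each lane runs at 5 GT/s with 8b/10b encoding, i.e. roughly 500 MB/s per direction, so an x16 slot has about 8 GB/s each way and an x8 slot about 4 GB/s. Benchmarks of the time generally showed a single midrange GPU losing very little when dropped from x16 to x8.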
However it turns out
Re: (Score:3)
What kind of workload needs more than 16 PCIe lanes, but doesn't similarly need a higher-end processor?
Re: (Score:2)
I am guessing those who want to do more GPGPU type stuff, so if you get a supported video card or multiple video cards, then you can potentially get some great performance. Of course, if running dual-GPU stuff is what you want, you should NOT be bothering with a cheap motherboard.
Re: (Score:2)
I don't buy MSI or ECS as a general rule for any chip... additionally there's pretty much feature parity for price in AMD vs Intel boards. Not sure where you're shopping
Re: (Score:3)
I just got a Gigabyte with dual PCI-e, 4 RAM slots (1833, and if you OC it a little, 2000), with all the latest buzzwords for like 70 bucks ... you need to shop some more
Re: (Score:3)
Re: (Score:2)
Thanks for keeping the tradition of dopes alive.
Re: (Score:2)
You have missed the difference between the higher end "expensive" boards and the lower end consumer boards. Things have changed a fair bit over the past two years, since consumer level processors from AMD are the A series (E and C make no sense on the desktop) with the GPU built into the CPU. The socket AM3 and AM3+ boards are intended for machines that will be higher performing (video card, not integrated video), so you end up paying more in that segment these days.
You can find cheap boards that support w
Re: (Score:2)
well, it better do (Score:2)
Intel's integrated GPUs are now "good enough" for most people. Those who game won't want integrated AMD if integrated Intel isn't good enough...
Re: (Score:2)
AMD is Intel's only direct competition in the desktop market; they are not going anywhere
Re: (Score:2)
Re: (Score:2)
AMD doesn't own any fabs; that was spun off into GlobalFoundries, and AMD has made some noise about moving to TSMC for their next CPU despite TSMC having their own problems at the current process node and the fact that AMD will take a hit on the stock they own in GF.
Re: (Score:2)
The AMD APU chips are pretty damn good. I used one for my HTPC, and it runs Diablo 3 and WoW, and most anything else I throw at it, more than acceptably. Now, for me, acceptably, on an HTPC, doesn't mean everything maxxed. But it's an HTPC, in the living room, a 2nd machine to complement my other rig. So I don't care about being maxxed.
I -love- AMD. I haven't used Intel since my first system I built with a Pentium 3, and that system gave me nothing but grief. My current rig uses a black chip, forget which
Re: (Score:2)
You are dealing with some outdated information. The AMD three-core processors were mostly gone by the time the Phenom II generation came out and once the process technology was a bit more mature. New designs are always problematic, so more "failures" are expected. The Bulldozer issues are the same way: the initial batch of a new design was a bit problematic, which Piledriver will fix.
Notice that the A10 parts from AMD have NOT had production issues, and those are based on Piledriver, so now it is just abou
Re: (Score:2)
Re: (Score:2)
In Linux drivers, Intel is still king. (Score:4, Interesting)
Re: (Score:2)
Wait a minute... weren't all the ATI (now AMD) fanboys claiming a couple years ago that because ATI was developing more "open" drivers that they would rule the linux landscape?
Re: (Score:3)
While AMD is releasing documentation, Intel is releasing actual open source drivers. And now that Intel's graphics hardware is no longer a complete joke, Intel is becoming a real alternative for some users.
AMD is still better than NVIDIA, which doesn't release documentation.
Re: (Score:2)
Maybe in principle, but in my experience using the hardware, the drivers that NVIDIA is providing are far superior to the AMD drivers available for all but the most basic uses. This seems to be the general consensus, at least where I tend to spend my time.
If you're more concerned about software freedom than I am, maybe you'd rather have AMD. My Linux boxes are much happier with NVIDIA, especially my HTPC. If I get enough cash to throw at it, I might try a low power Ivy Bridge or one of the new Atoms for
Re: (Score:2)
but i like my GPUs to draw 3D things (mostly via wine) so i got a nvidia.
Re: (Score:2)
That merely killed Nvidia. Intel dodged that bullet and then shot it right back at AMD, better aimed and with an explosive warhead attached. It passed through AMD (wounding them) and then exploded inside Nvidia's skull.
Open is necessary, but if you intend to open up (AMD) while your competitor (Intel) actually does it and then also writes the dri
Re: (Score:2)
Re: (Score:2)
This story is regarding AMD CPUs, not AMD GPUs.
Currently Linux supports the features of AMD's current CPUs better than they are supported on the Windows side of things.
Resonant Clock Mesh? (Score:2)
Re: (Score:2)
What 4GHz barrier? I have an Intel i7 3770K at 4.6GHz, and before that I had a Bulldozer at 4.5GHz (which I promptly returned due to its horribleness.)
Re: (Score:2)
Overclocked speeds and a stock speed of 4GHz are two different things. There WERE some issues that held back overclocking in older chips (4GHz was almost a hard limit for some reason), but that has been fixed in the newer chips. Still, the AMD FX 8350 running a stock 4GHz with turbo mode to 4.2GHz should be interesting with the new Piledriver improvements over Bulldozer. That is something worth watching, just because it may fix all the performance problems with Bulldozer.
Re: (Score:2)
It's a Start (Score:2)
Having dedicated decoders for the IPU is definitely on the right track, but a shared fetch still means there is a bottleneck in getting those cores fed. Also, apparently, the changes hit L1 performance, so they had to add some cache to make up for it. So there is some room for improvement, and this does help. However, I just don't see it as the big step that AMD needs against Intel. Intel's dies are smaller, they are making better use of space, and this is a huge advantage. Intel has 10 core dies, an
Re: (Score:2)
Bah, what AMD needs to do is just keep doubling cores on the Phenom line of chips. A 12-core Phenom III, in the next 12 months, could keep them going for another two years or so. Of course, then they'd think that it might impact their server offerings, but let's be honest, I've looked at their server offerings, and while I love the number of cores, I need at least 3GHz cores; the 2.1GHz cores make me question whether it's worth just buying multiple machines with Phenoms, as opposed to buying Magny-Cours (o
Re: (Score:2)
Piledriver (not in an APU) comes out in October of THIS year (2012), with the 8350 set to be released at 4GHz and a turbo mode to 4.2GHz without overclocking. Steamroller will be the next step after Piledriver for next year. It is almost a given that improvements to performance per watt will happen every YEAR, so what comes out at a 125 watt max this year will be a 90 watt chip next year for the same performance, possibly even going below that level depending on improvements in the process technology
Re: (Score:2)
The biggest cost with ALL server facilities is power, floorspace and cooling. No matter how many idiots keep bleating "manpower is expensive", the fact remains that manpower is generally a fixed cost no matter what hardware you get, while power, cooling and floorspace will be variable depending on what hardware you get. And right now, it doesn't matter if AMD gives you more cores, because Intel does more through a more efficient architecture that can be more easily fed to maintain maximum throughput, at
Re: (Score:2)
Piledriver in October of 2012 should answer your question about performance, so you won't need to wait for next year. 8 cores at 4GHz without the scheduler problems SHOULD beat the Phenom 2 generation, but we have another 1-2 months before we know for sure what the performance will be. Socket AM3+ does mean that DDR2 will finally be fading away, so many of us with older systems WILL need all new motherboards and memory on those older machines that didn't get updated yet.