AMD Unveils Carrizo APU With Excavator Core Architecture
MojoKid writes: AMD just unveiled new details about their upcoming Carrizo APU architecture. The company is claiming the processor, which is still built on Global Foundries' 28nm 28SHP node like its predecessor, will nonetheless deliver big advances in both performance and efficiency. When it was first announced, AMD detailed support for next generation Radeon Graphics (DX12, Mantle, and Dual Graphics support), H.265 decoding, full HSA 1.0 support, and ARM Trustzone compatibility. But perhaps one of the biggest advantages of Carrizo is the fact that the APU and Southbridge are now incorporated into the same die; not just two separates dies built into and MCM package.
This not only improves performance, but also allows the Southbridge to take advantage of the 28SHP process rather than older, more power-hungry 45nm or 65nm process nodes. In addition, the Excavator cores used in Carrizo have switched from a High Performance Library (HPL) to a High Density Library (HDL) design. This allows for a reduction in the die area taken up by the processing cores (23 percent, according to AMD). This allows Carrizo to pack in 29 percent more transistors (3.1 billion versus 2.3 billion in Kaveri) in a die size that is only marginally larger (250mm2 for Carrizo versus 245mm2 for Kaveri). When all is said and done, AMD is claiming a 5 percent IPC boost for Carrizo and a 40 percent overall reduction in power usage.
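The quoted die figures are easy to sanity-check. A back-of-the-envelope sketch using only the numbers in the summary (note the raw transistor-count ratio works out closer to 35 percent than the quoted 29 percent, so AMD's headline figure is presumably rounded or measured against a slightly different baseline):

```python
# Back-of-the-envelope check of the Carrizo vs. Kaveri figures quoted above.
kaveri_transistors, kaveri_area_mm2 = 2.3e9, 245
carrizo_transistors, carrizo_area_mm2 = 3.1e9, 250

count_increase = carrizo_transistors / kaveri_transistors - 1
density_kaveri = kaveri_transistors / kaveri_area_mm2    # transistors per mm^2
density_carrizo = carrizo_transistors / carrizo_area_mm2
density_increase = density_carrizo / density_kaveri - 1

print(f"transistor count increase: {count_increase:.1%}")            # ~34.8%
print(f"density: {density_kaveri/1e6:.1f}M -> {density_carrizo/1e6:.1f}M per mm^2")
print(f"density increase: {density_increase:.1%}")                   # ~32.1%
```

The density gain (transistors per mm²) is what the switch from the high-performance to the high-density library buys, independent of the slightly larger die.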
Re: (Score:1)
Even if they go on sale for $59 they are harmless IMO. It would only knock a couple hundred off an Intel-based system and still suck balls compared to them. Even swapping computers every couple of years, if your time savings isn't worth $200 to you, you need a better job.
Re:Operating at 20W gives zero improvement. (Score:5, Informative)
Re: (Score:2)
I agree. But they haven't really been competitive in the desktop market for about 7 years. I hope they do better, because I'm not a fan of monopolies. Competing strictly on price isn't the way to go, IMO. Other than for the most basic users (secretaries, store clerks, and the like), when the computer is a tool to do work, it never pays to be (far) off the current best of breed. AMD has been in bargain-basement i7 territory for a while. I'm not convinced this architecture is going to do it, though. We'll see.
Re: (Score:2)
Chasing speed gains for their own sake is a fool's errand - "most" computing applications don't need better performance, and that's underscored by the fact that single-threaded performance hasn't really changed significantly for the last decade - a lot of office systems are now on a 5-7 year replacement cycle because older systems are still more than enough to run their software.
Power savings were (and are) a big gain in the server room - cooling systems are expensive to install and expensive to run. The fact
Re: (Score:3, Insightful)
In real world server benches, the Opteron 6380, despite being a 16-core part that uses more power, *loses* all-around to an E5-2660, which is slower than the one you mentioned (and also a bit cheaper): http://www.anandtech.com/show/6508/the-new-opteron-6300-finally-tested/14 Not that it matters, because it's the only segment they're even remotely competitive in.
As for desktops, they're barely competitive with the old Core 2 arch... All of their current CPUs have a terrible IPC, and they kinda suck in the perf
Re: (Score:2, Insightful)
I'd take power reduction over IPC any day, I haven't needed more CPU performance in about 6 years, and it's looking like I still won't need any more performance for another 6 years.
Re: (Score:2)
An Intel Xeon E5-2690 V2, S 2011, 10 Core, 3.0GHz costs £1500.
An AMD 3rd Gen. Opteron 6380 CPU, Abu Dhabi 16 Core, S G34 provides better performance and costs less to run yet only retails for £700.
I'd say AMD have plenty of life left in the server market and if they can achieve similar price / performance numbers relative to Intel with these new desktop chips I'd say there is some life in them in the desktop arena too!
Are you insinuating that Intel's markups are unreasonable? That because they had no competition in the high-end CPU world, they charged what the market would bear? High-cost systems also mean higher profits for retailers. I would never say the words "price gouging" when there is scarcity in the marketplace.
Re: (Score:3)
It would only knock a couple hundred off a intel based system and still suck balls compared to them.
It's knocking out a whole chip, it could bring the price of the whole PC down to less than a couple hundred.
Re: (Score:1)
The Southbridge is pretty cheap: http://www.aliexpress.com/pric... [aliexpress.com]. I guess the mobo would be simpler too, giving you more savings, but I don't think it would do it. Still, say $300 for a crappy AMD-based system and $500 for a 2X-faster Intel system: other than for the dirt poor, I know which one I'd recommend. Anyone using a computer for more than a glorified smartphone has time with a value. It doesn't take many minutes throughout the year to equal the cost difference. IMO you are almost guaranteed for professional
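The time-value argument is easy to put numbers on. A rough break-even sketch, using the $200 price gap mentioned upthread (the hourly rates and the 250-workday year are illustrative assumptions, not figures from the thread):

```python
# Rough break-even: how much saved time justifies a $200-pricier, faster PC?
price_gap = 200.0            # assumed price difference in dollars
workdays_per_year = 250      # assumed working days

for hourly_rate in (15.0, 30.0, 60.0):
    breakeven_hours = price_gap / hourly_rate
    minutes_per_day = breakeven_hours * 60 / workdays_per_year
    print(f"at ${hourly_rate:.0f}/hr: {breakeven_hours:.1f} hours/year "
          f"(~{minutes_per_day:.1f} min per workday) pays for the upgrade")
```

At $30/hr, saving under two minutes per workday covers the difference, which is the commenter's point.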
Re: (Score:2)
And in fact, that's what I did with my wife's most recent computer. AMD A8-7600 + 12GB of RAM + 120GB SSD. Extremely cheap and it can still play Minecraft and Call of Duty: Black Ops 2 for my sons.
But if you were going to get an SSD anyway, plus 6+GB of
Re: (Score:2)
It's knocking out a whole chip, it could bring the price of the whole PC down to less than a couple hundred.
The low end has been sitting at a couple hundred for a while now -- and during that time, the quality of the CPU and GPU you can get have just gotten better and better, to the point that even net-top CPUs can get the job done. I'm amazed at how good even low-end netbook processors are these days.
Comment removed (Score:5, Interesting)
Re: (Score:2)
Colo providers also tend to sell power on blocks of 10 or more amps. I know in the last situation I was in, going with Intel would have saved me nothing at all. There was no room to shove another server in and I was below the minimum power they would sell anyway.
Meanwhile, faster is questionable as long as you don't use the Intel compiler.
Re:Operating at 20W gives zero improvement. (Score:4, Interesting)
Do you have a link for that? It's not that I disbelieve you; I strongly suspect that Intel would do that crap. I'd like to read more about it, however, if you have a link handy, and then stash the link for the next time this benchmark comes up.
Personally, I like the Phoronix Linux benchmarks. They're more meaningful for me since I use Linux and they're all based on GCC which is trustworthy.
http://www.phoronix.com/scan.p... [phoronix.com]
The i7 4770 occasionally blows away the FX8350 by a factor of 2, but in many benchmarks they're close, and Intel loses a fair fraction. The 4770 is the best overall performer, but not by all that much. It seems that the choice of CPU is fairly workload dependent.
For servers, I still prefer the Supermicro 4S Opteron boxes. 64 cores, 512GB RAM, 1U. Nice.
Re: (Score:3, Informative)
Do you have a link for that? It's not that I disbelieve you; I strongly suspect that Intel would do that crap. I'd like to read more about it, however, if you have a link handy, and then stash the link for the next time this benchmark comes up.
Personally, I like the Phoronix Linux benchmarks. They're more meaningful for me since I use Linux and they're all based on GCC which is trustworthy.
http://www.phoronix.com/scan.p... [phoronix.com]
The i7 4770 occasionally blows away the FX8350 by a factor of 2, but in many benchmarks they're close, and Intel loses a fair fraction. The 4770 is the best overall performer, but not by all that much. It seems that the choice of CPU is fairly workload dependent.
For servers, I still prefer the Supermicro 4S Opteron boxes. 64 cores, 512GB RAM, 1U. Nice.
The i7 4770K is a fairly high-end chip from Intel. I own one, but I would not expect to find one in a sub-$700 system. It is not a Xeon, but it is just one notch down from the $900 Extreme Edition, so it is the second-highest of the consumer non-server chips.
Well, sites like tomshardware.com make it look like a Pentium or i3 can blow the latest AMD Black Edition out of the water. However, biased or not, my real-world experience says otherwise, as many games are optimized for Intel and use NVidia-specific DirectX extensions with
Re:Operating at 20W gives zero improvement. (Score:4, Informative)
Re: (Score:1)
Re: (Score:2)
Except if you bothered to watch the video linked above for actual in-game performance testing (NOT synthetic benchmarks), you'll see that most of the time Intel is neck and neck with AMD - not "smoking" them.
Is this the lame old 'look! If I run a game that's GPU-limited, my AMD machine is just as fast as that Intel machine that costs twice as much!' nonsense?
AMD fanboys have been doing that for years when they can't find any legitimate way to beat Intel.
Re: (Score:2)
AMD was (before Haswell) not too bad if you have a multithreaded workload.
Thanks to the Xbox One with its 8 cores, games will run better on AMD as they become more threaded, since many are crappy Xbox ports.
For a cheap box to run VM images in VirtualBox/VMware Workstation, video editing, or compiling code, AMD offers a great value, and the BIOS does not cripple virtualization extensions, unlike the cheap ones from Intel.
FYI I switched to an i7 4770k for my current system so I am not an AMD fanboy. But I paid thro
Re: (Score:2)
Intel's competition isn't AMD, it's ARM. AMD are pretty much irrelevant to them at this point.
Re: (Score:1)
In SWTOR I got a doubling of FPS from moving from a PhenomII black edition to an i7 4770k.
I would be surprised if that were not the case. The i7-4770k came out 5 years after the Phenom II - a lot happened in that time, including the entire Phenom line being discontinued and succeeded by newer architectures. I'd be more interested in a comparison between the i7-4770k and its 30%-cheaper contemporary, the FX-9590 (naturally, expecting the i7-4770k still to win to some degree if we focus purely on single-thread performance, but is that worth it? Once SWTOR is no longer CPU-bound you wouldn't see an
Re: (Score:2)
In SWTOR I got a doubling of FPS from moving from a PhenomII black edition to an i7 4770k.
I would be surprised if that were not the case. The i7-4770k came out 5 years after the Phenom II - a lot happened in that time, including the entire Phenom line being discontinued and succeeded by newer architectures. I'd be more interested in a comparison between the i7-4770k and its 30%-cheaper contemporary, the FX-9590 (naturally, expecting the i7-4770k still to win to some degree if we focus purely on single-thread performance, but is that worth it? Once SWTOR is no longer CPU-bound you wouldn't see any difference between the two at all).
Here is the kicker. The half-decade-old Phenom II is faster per clock cycle than the FX series based on the Bulldozer architecture?? AMD really messed up, as the design was optimized around its integrated graphics, hoping to win that way. In other words, those mocking it call it Pentium IV 2.0.
Re: (Score:1, Troll)
Re: (Score:2)
Re: (Score:1)
You can look up the actual legal case.
http://www.ftc.gov/sites/defau... [ftc.gov]
Re: (Score:2)
Re:Operating at 20W gives zero improvement. (Score:5, Interesting)
The best evidence I know of is this one:
http://arstechnica.com/gadgets... [arstechnica.com]
You can see how changing the ID string of the CPU will change the performance of the exact same hardware.
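The mechanism being described is easy to sketch: a dispatcher that picks a code path from the CPU's reported vendor string rather than from the features it actually supports. This is an illustrative mock in plain Python, not Intel's or ICC's actual code; all function names are made up:

```python
# Mock of vendor-string dispatch: the "optimized" path is gated on the vendor
# ID string rather than on whether the CPU actually supports the instructions.
def fast_vector_path(xs):
    return sum(x * x for x in xs)       # stand-in for a vectorized kernel

def generic_path(xs):
    total = 0.0
    for x in xs:                        # stand-in for a slow scalar fallback
        total += x * x
    return total

def dispatch(vendor_id, xs):
    # Branching on the brand string, not on feature flags, is what makes this
    # style of dispatch unfair to non-Intel CPUs with the same capabilities.
    if vendor_id == "GenuineIntel":
        return fast_vector_path(xs)
    return generic_path(xs)

data = [1.0, 2.0, 3.0]
print(dispatch("GenuineIntel", data))   # 14.0
print(dispatch("AuthenticAMD", data))   # 14.0 - same answer, slower path
```

Both paths return the same result; only the speed differs, which is exactly why changing the reported ID string changes benchmark scores on identical hardware.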
Comment removed (Score:5, Informative)
Re: (Score:2)
I guess maybe I should care more about the low-end market. I'm not that customer, nor are most/all corporate customers. I buy $500 monitors, not $500 computers. But I guess a lot of the market goes to ~$700 laptops and $500 desktops, so in consumer land they might have a winner.
Re: (Score:2)
Then, surprise surprise, AMD chips trade blows with chips costing more than twice as much [youtube.com], with the AMD outright smoking them in several tests and coming within a couple of percentage points of the i5s in others.
The issue is that the damage is done; AMD hasn't updated their CPU lineup recently. The FX-8350 was originally released in late 2012 and still seems to be the best option from their FX series. (The FX-8370 is just a nicer binning, and the FX-9xxx appear to be ridiculously overclocked, with almost twice the TDP.) I'm planning to upgrade my PC later this year, but buying a 3 year old CPU just seems insane. In contrast, Haswell processors are barely a year old, and a Haswell i5 delivers comparable performance.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Here's one where the cheating was exposed when leaked benches for new Macs surfaced before Cinebench had been updated to take them into account.
http://www.tomshardware.com/re... [tomshardware.com]
Those scores require a bit of context, though. The 32-bit build of Geekbench uses x87 code, for starters, so it isn’t optimized for any of the other instruction set extensions that Westmere-EP or Ivy Bridge-EP support. Getting close to Apple’s claim of doubled floating-point performance requires software compiled with the AVX flag. John Poole, the founder of Geekbench, posted several other reasons why the next-gen and previous-gen Mac Pros might be separated by such a narrow margin.
The leaked result was run using the free 32-bit build of Geekbench on a pre-release build of OS X Mavericks. Switching over to the paid 64-bit build of the benchmark adds SSE support, though that’s still a pre-Pentium 4 extension. Tab between the 32- and 64-bit runs on Xeon X5675-based systems and you’ll find that the SSE-capable build averages 14%-better performance.
Curious as to how the very same 12-core Xeon E5-2687 V2 compared in Windows, I ran my own test on a 64-bit build of Geekbench and scored in excess of 30,000 points—more than 25% faster than the leaked number. The individual sub-tests showed both Xeon E5-based platforms trading blows in the integer and floating-point components, but clearly a more real-world comparison was needed in order to establish the new Xeon’s performance in a workstation environment. Fortunately, I have the upcoming Xeon E5-2697 V2, the upcoming Core i7-4960X, an existing eight-core Xeon E5-2687W, and a Core i7-3970X.
This kind of thing isn't exactly a revelation. Benchmarks have been tainted by Intel and the ICC for ages. The real problem is that a lot of actual software is as well, so in the end the artificially-gimped performance reflected in the benchmarks translates to actual usage. Even among fairly-compiled programs Intel's parts typica
Re: (Score:2)
That link talks about 11.5, makes claims with no evidence, and admits to using the ICC compiler.
We KNOW the ICC compiler is rigged. http://yro.slashdot.org/story/... [slashdot.org]
Re: (Score:2)
Not really surprising.
Getting the most out of any processor requires processor-specific optimization. Unfortunately for AMD, Intel has the lion's share of the market, so developers pay more attention to getting software to run well on Intel processors. Some of the top tier games that get used for benchmarks have been hand-optimized for Intel, as have productivity applications such as video encoders and Photoshop. (The last two have also benefited historically from Intel having better SIMD implementations. T
Re: (Score:2)
The benchmark I work with is "how fast the software my staff work with runs"
AMD is generally slower per core, but not much slower - they've generally won on being cheaper and being able to cram more cores in any given box.
FWIW: Over the last 12-13 years memory bus speed has been a _much_ greater influence on real-world processing results than CPU speed. Internal clock multipliers are only of any use if the CPU doesn't have to step outside the box to grab more data/instructions from the ram.
As a rule of thu
BS. (Score:2)
1) Anyone who uses synthetic benchmarks like Cinebench deserves whatever they get. These things have been rigged since the ATI vs. NVIDIA days, and ATI doesn't even exist anymore. Not to mention they don't really prove anything.
2) Using real-world software tests, particularly in gaming, Intel has been blowing AMD out of the water since just after the Athlon64 days. The only places that AMD has had success commercially or in performance have been A) the server market, and B) the very low end market, the la
Unless you are talking about a laptop (Score:2)
Reducing wattage on an APU means more battery life, and putting the Southbridge on the chip lowers the cost and allows increased customization options.
Re: (Score:1)
Just download an OS instance? Maybe in a data center, but in your home? Even without a data cap, you're downloading a 1GB+ OS image (uncompressed), and then some.
HSA software environment (Score:1)
I am intrigued by the potential of HSA, but are there any examples of it in use?
Re:HSA software environment (Score:5, Informative)
There are some results from LibreOffice Calc at the bottom of this page [tomshardware.com].
Re: (Score:2)
I hope they are also (Score:5, Interesting)
Re:I hope they are also (Score:4, Informative)
They seem to be doing a pretty good job on the graphics front. Their open source driver is in better shape and has more momentum than the nvidia open source driver.
My impression is that Intel has better linux support for their IGP but the performance is about a generation behind.
Re: (Score:2)
They seem to be doing a pretty good job on the graphics front. Their open source driver is in better shape and has more momentum than the nvidia open source driver.
And yet neither of the AMD drivers actually have good performance or hardware support.
My impression is that Intel has better linux support for their IGP but the performance is about a generation behind.
The support is light-years ahead, unless it's one of the licensed PowerVRs.
Re: (Score:2)
And yet neither of the AMD drivers actually have good performance or hardware support.
Good performance compared to what? Intel IGP? nouveau, the proprietary Nvidia binary driver?
The support is light-years ahead, unless it's one of the licensed PowerVRs.
So are you agreeing with me?
Re: (Score:2)
So are you agreeing with me?
About intel, but not AMD. The AMD drivers are just crap. It's only if you institute artificial restrictions like that drivers must be OSS that AMD even gets a chance to be in the running.
Re: (Score:3)
Re: (Score:1)
Re: (Score:2)
I got burned by bad ati drivers a long time ago too. There was a time when I was a "hardcore" nvidia supporter. But I was never so hardcore that my loyalty to one company over another was unconditional. I really do feel like AMD is doing a better job on linux development at the moment, while nvidia is coasting on its past achievements.
Re: (Score:1)
The support is light-years ahead, unless it's one of the licensed PowerVRs.
Oh, trying to decide whether "light-years ahead" actually makes sense as a measure of how far one technology is ahead of another one ...
"This must be what quantum physics is all about!"
Linux hybrid driver (Score:2)
For the record, AMD is also moving toward a hybrid stack for the Linux drivers:
- the same open-source kernel driver is used everywhere.
- the only difference is whether you run the official Catalyst OpenGL implementation from AMD on top of it, or the open-source Mesa Gallium3D tracker.
- same goes for video (either a VA-API implemented by Catalyst, or the various Gallium video state trackers).
So except for 3D and video, everything else is open-source and the work is shared.
From the development point of view,
Re: (Score:2)
Not worth buying unless I can get it in 8S/12GPU (Score:1)
Like this bad mother fucker right here. [imgur.com]
Re: (Score:3, Informative)
You do realize that is 4 computers in one box, right? Literally the equivalent of four 1U servers, each with 2 CPUs and 3 GPUs, but turned on their side. Also, it costs nearly $100k, which is approximately 50 times as much as an entire computer based on this APU would cost.
Also, consider that with your computer you are getting 12 cores * 4 machines for $100k, whereas with an APU-based machine you are getting 4 cores * 50 machines, and each of those cores is clocked about twice as fast (4.3 vs 2.4GHz).
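That comparison can be sketched with the numbers in the comment. Raw core-GHz is a crude proxy (it ignores IPC, the GPUs, and interconnect entirely), but it makes the scale of the claim concrete:

```python
# Aggregate core-GHz for the two hypothetical $100k spends described above.
big_box_cores = 12 * 4        # 12 cores per node, 4 nodes in the chassis
big_box_ghz = big_box_cores * 2.4
apu_fleet_cores = 4 * 50      # 4 cores per APU machine, 50 machines
apu_fleet_ghz = apu_fleet_cores * 4.3

print(f"GPU chassis: {big_box_cores} cores, {big_box_ghz:.0f} core-GHz total")
print(f"APU fleet  : {apu_fleet_cores} cores, {apu_fleet_ghz:.0f} core-GHz total")
# Raw core-GHz favors the fleet, but says nothing about the 12 GPUs,
# per-core IPC, or the cost of networking 50 separate boxes together.
```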
Re: (Score:2)
I thought you might go for Devastator (i.e. from the Transformers ;)
And? (Score:2)
Don't you mean "not just two separates dies built into an MCM package"?
Re: (Score:2)
I agree, and I think that people who harrass others online or make inappropriate 'troll' posts should be sentenced to death.
Re: (Score:1)
By having smarter designs, obviously. Intel has always been about brute-forcing it.
Re:Intel is killing AMD - 28 NM Process?????? (Score:4, Insightful)
And Intel would still be forcing the Itanium on us had AMD not come out with the Athlon and the x86_64 instruction set, stealing Intel's lunch for a few years until they caught up.
Sure, AMD dropped the ball and Intel stole the lead back from them years ago. But without the competition, Intel wouldn't have any incentive to have processors as good as they do now. The market needs companies like AMD to keep companies like Intel competitive.