AMD Considered GDDR5 For Kaveri, Might Release Eight-Core Variant
MojoKid writes "Of all the rumors that swirled around Kaveri before the APU debuted last week, one of the more interesting bits was that AMD might debut GDDR5 as a desktop option. GDDR5 isn't bonded in sticks for easy motherboard socketing, and motherboard OEMs were unlikely to be interested in paying to solder 4-8GB of RAM directly. Such a move would shift the RMA responsibilities for RAM failures back to the board manufacturer. It seemed unlikely that Sunnyvale would consider such an option, but a deep dive into Kaveri's technical documentation shows that AMD did indeed consider a quad-channel GDDR5 interface. Future versions of the Kaveri APU could potentially also implement 2x 64-bit DDR3 channels alongside 2x 32-bit GDDR5 channels, with the latter serving as a framebuffer for graphics operations. The other document making the rounds is AMD's software optimization guide for Family 15h processors. This guide specifically shows an eight-core Kaveri-based variant attached to a multi-socket system. In fact, the guide goes so far as to say that these chips in particular contain five links for connection to I/O and other processors, whereas the older Family 15h chips (Bulldozer and Piledriver) only offer four HyperTransport links."
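For a rough sense of what such a hybrid memory interface would buy, here is a back-of-the-envelope peak-bandwidth sketch in Python (the transfer rates are assumed typical values, not figures from AMD's documentation):

    # Rough peak-bandwidth math for the rumored hybrid setup (assumed transfer rates).
    def peak_gbs(transfers_mts, bus_bits, channels):
        # transfers per second * bytes per transfer * number of channels, in GB/s
        return transfers_mts * 1e6 * (bus_bits / 8) * channels / 1e9

    ddr3  = peak_gbs(2133, 64, 2)   # 2x 64-bit DDR3-2133       -> ~34 GB/s
    gddr5 = peak_gbs(5000, 32, 2)   # 2x 32-bit GDDR5 at 5 GT/s -> ~40 GB/s (framebuffer only)
    print(ddr3, gddr5, ddr3 + gddr5)

On paper, even a modest 5 GT/s GDDR5 sideband would roughly double the bandwidth available to the GPU compared with dual-channel DDR3 alone.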
Re: (Score:3)
Great! Can you explain to me why "GDDR5 isn't bonded in sticks for easy motherboard socketing" ?
Packaging != modular design (Score:3)
CPUs - the last I saw - were PGAs; except for embedded systems, I'm not aware of many BGA-based CPUs. DDR3 onwards changed its packaging from TSOP to BGA due to the higher pin count: with TSOP, only a larger package would work, and then the length of the wire bonds would become a factor in the speed of the CPU-to-DDR3 sub-system. Also, while TSOP is cheaper than BGA at lower pin counts, once pin counts become comparable (~50), the equation flips - BGAs become cheaper than 56-pin TSOPs. As a
Re: (Score:1)
Most laptop CPUs are now coming in BGAs rather than PGAs. The push for thinner laptops is driving the change. It is also causing more laptops to come with memory soldered to the motherboard rather than socketed in SODIMMs.
GDDR5 is not currently available on DIMM sticks. The high speed of the memory may make it impossible to package effectively that way.
Re: (Score:2)
Motherboard manufacturers see the profit margins Apple has with RAM that can't be upgraded in a number of its high-end models, and they want in on that action.
Re:news for nerds (Score:5, Informative)
Because it's electrically so delicate that you can't keep bit sync when shoving such high frequencies through a slot connector. The price of higher bandwidth, in both the analog and digital senses.
Chips (Score:1)
I've often wondered if it would be useful to have RAM in socketed chips (similar to a CPU) rather than on a stick.
Re: (Score:2)
That used to be common practice, back in the Z80 to 286 era. Stick form (SIMM, then DIMM) is just more convenient to work with. Look how small the pins are on a surface-mount chip - if those chips were big enough to socket, they'd be unwieldy. Plus the stick form connects the whole bus at once, so it makes fewer dust-prone connections than socketing each chip individually would.
Re: (Score:2)
Re: (Score:3)
Re: (Score:3)
Great! Can you explain to me why "GDDR5 isn't bonded in sticks for easy motherboard socketing" ?
The reason is that GDDR5 chips are used exclusively for video cards/GPUs and are not meant to be accessed directly by the CPU. In the case of integrated video on motherboards, they're not used in the first place. In the case of video cards, they are soldered right onto the card - video cards don't have slots because then you'd have video cards going into PCIe slots, and then the cards would have slots of their own, and then the height of the GDDR5 modules would potentially eliminate other motherboard slots that m
Re: (Score:1)
Latency vs bandwidth (Score:5, Informative)
Re: (Score:3)
Re: (Score:3)
Re:Latency vs bandwidth (Score:4, Interesting)
False: the Xbox One uses plain DDR3.
It is also one of the key reasons why many games on the Xbox One cannot do 1080p (that, and the lack of ROPs - the PS4 has twice as many ROPs for rasterization).
The Xbox One tries to "fix" the RAM speed by using embedded SRAM (ESRAM) on-chip as a cache for the DDR3 for graphics. It remains to be seen how well the limitations of DDR3 can be mitigated. Early games are definitely suffering from "developer cannot be assed to do a separate implementation for the Xbox One".
Kaveri, while related to the chips inside the consoles, is a decidedly lower-performing part. Kaveri includes 8 CUs. The Xbox One has 14 CUs on die, but two of those are disabled (to improve yields), so 12. The PS4 has 20 CUs on-die, again with two CUs disabled to improve yields, so 18.
On the other hand, Kaveri has far better CPU cores (the console chips feature fairly gimpy Jaguar cores, though both consoles have 8 of those cores vs. 4 on Kaveri).
Any integrated graphics setup that uses DDR3 is bound to be unusable for real gaming. Kaveri has a good integrated graphics setup compared to the competition, but it is far behind what the new consoles offer - boosting it with GDDR5 without also at least doubling the CU count wouldn't do much. Either way, it really isn't usable for real gaming. It beats the current top offering from Intel, but that's a bit like winning the Special Olympics when compared to real graphics cards (even ~$200-250 midrange ones).
Less FUD please (Score:2)
I get real tired of this "Xbox One cannot do 1080p" crap that Sony/Nintendo fans keep trotting out. Yes, it can. FIFA Soccer 14, Forza Motorsport 5, NBA 2K14, and Need for Speed Rivals all run at 1080p internally. Yes, a number of games run at less, and I imagine that'll only become more common as time goes on, but that doesn't change the fact that the system is perfectly capable of 1080p. Heck, for that matter, not all PS4 titles run at 1080p either, BF4 being an example.
I have no stake in this fight; I don't do console games
Re: (Score:2, Informative)
Somewhat false. Latency is approximately the same for DDR3 and GDDR5, at least in terms of nanoseconds from request to response. GDDR5 is impractical for use with CPUs because it needs to be soldered to the board and has high power consumption (enough to need cooling). And since CPUs don't need that much bandwidth anyway, there's little reason to use it.
Also, the Xbox One uses DDR3, and makes up for the lack of bandwidth with four channels and 32MB of ESRAM for graphics use.
Re:Latency vs bandwidth (Score:5, Insightful)
Latency in cycles is higher for GDDR5, but the clock speed's a lot faster, isn't it? As the real-time latency is the product of the number of cycles and the length of a cycle, I think it's pretty much a wash.
Re: (Score:2)
Indeed. All the variations of memory today seem to have about the same latency. RAM that cycles faster simply takes more cycles to start returning requests, though once it starts it IS faster.
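A quick worked example of that product (illustrative timings: the first line is a common retail DDR3 spec, the second is a made-up faster part, not a real GDDR5 datasheet value):

    # absolute CAS latency in nanoseconds = CAS cycles / command clock
    def cas_latency_ns(cas_cycles, clock_hz):
        return cas_cycles / clock_hz * 1e9

    print(cas_latency_ns(11, 800e6))    # DDR3-1600 CL11 -> 13.75 ns
    print(cas_latency_ns(22, 1600e6))   # double the clock, double the cycles -> still 13.75 ns

Doubling the clock while the cycle count also doubles cancels out, which is why the absolute latency ends up roughly a wash.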
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re:Latency vs bandwidth (Score:4, Informative)
AMD could do a 24 core desktop chip right now (Score:2, Interesting)
Re: (Score:2)
Why do you need so many cores on the desktop though?
Re: (Score:3)
Re:AMD could do a 24 core desktop chip right now (Score:4, Informative)
Re: (Score:2)
You know what a fast, multi-core parallelised CPU optimised for video encoding looks like?
A GPU.
Re: (Score:2)
It depends on what you're doing. In my case I'm often running 4-6 VMs all busy doing various tasks. Having multiple cores helps greatly, as does having a lot of RAM. Any use case that benefits from multiple threads can, if the software is written properly, take advantage of multiple cores.
Re: (Score:2)
Re: (Score:2)
It depends on what the VMs are doing. Having more physical processors to run threads on helps a lot in my case, where they usually involve running multiple hosts simulating client networks, with a few of said VMs doing lots of packet processing.
Re: (Score:2, Interesting)
No, they don't do it because it considerably raises the cost of the chip and doesn't improve the "average" user's workload. Many-core processors have a bunch of inherent complexity in sharing information between the cores, or in sharing access to a common bus where information can be transferred between processes. There are tradeoffs to improving either scenario.
Making the interconnection between cores stronger means a higher transistor count and many layers in silicon. Even
Re: (Score:2)
No, they do it because it would compete with their $1000 server chips. ARM is about to give them some correction.
I've been an AMD booster from way back, but they think they're still competing with Intel and that is a serious mistake.
Re: (Score:2)
I think they know; they do have an ARM license for the 64-bit stuff and will make ARM64-based Opterons.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re:AMD could do a 24 core desktop chip right now (Score:4, Interesting)
They don't care, because a desktop with a 24-core AMD CPU is likely to be slower than a 4-core Intel CPU for most popular _desktop_ tasks, which are mostly single-threaded. They're already having problems competing with 8-core CPUs; adding more cores would either make their chips too expensive (too big) or too slow (dumber, smaller cores).
The sad truth is that for those who don't need the speed, a cheap AMD is enough - they don't need the expensive ones. Those who want the speed pay more for Intel's faster stuff. The 8350 is about AMD's fastest desktop CPU for people who'd rather not run 220W-TDP chips, and it already struggles to stay ahead of Intel's mid-range for desktop tasks: http://www.anandtech.com/bench/product/697?vs=702 [anandtech.com]
A few of us might regularly compress/encode video or use 7zip to compress lots of stuff. But video compression can often be accelerated by GPUs (and Intel has QuickSync, though quality might be an issue depending on the implementation). The rest of the desktop stuff that people care about spending $$$ to make faster would be faster on an Intel CPU.
A server with 24 cores will be a better investment than a desktop with 24 cores.
Re: (Score:2)
No, AMD's issue is that they underestimate what is required to take the desktop, and they underestimate the tech Intel has in reserve. They have management control issues, partner communication issues, and their recent layoffs have left their message garbled. I'd love to be able to unleash AMD's potential on an unsuspecting world, but I'm unlikely to be given the chance. I would call a press conference and say "Hey, Krzanich: bite me!" and unleash a 16-core desktop CPU with a 192-core GPU. I would open a divi
Re: (Score:2)
Where else are Intel going to go? AMD's got the console space wrapped up this generation, and the Steambox isn't far enough along to make for a solid revenue stream. That leaves low-power applications where they're making progress but not yet ready to dive in. Like it or not, Intel are going to be a Windows and MacOS house for another five years or so.
Re: (Score:2)
Oh, Windows is moving units for Intel, as Intel pretty much owns the enterprise. Walk into pretty much any Fortune 100 and you'll see the blue logo stickers on practically every laptop and desktop in the place, and the data center will be floor-to-ceiling Xeon except for specialty stuff (SPARC boxes, IBM pSeries, mainframes). Thin clients running Linux? Likely on Atom, but AMD is making a bit of an inroad there.
Don't fool yourself - big business is still suffering from 20 years of Microsoft lock
Re: (Score:3)
Re: (Score:2)
Re: (Score:3)
The rest of the desktop stuff that people care about spending $$$ to make faster would be faster on an Intel CPU.
You mean something like this [semiaccurate.com]? (Or similar solutions for other languages?)
Re: (Score:2)
You seriously think typical desktop Java stuff will run faster on a GPU?
Oracle certainly thinks so. Ditto for servers. I bet they have more heads than the two of us thinking about the problem, and they already seem hell-bent on implementing just that. Now I kindly suggest that you sit back and leave it to the experts.
Re: (Score:2)
Re: (Score:2)
I think you missed the part where AMD has been continuously tweaking their GPU hardware over the past few years to make it increasingly happy with random access and C/C++/Java-like data layouts: namely the switch to superscalar RISC cores, the conception of HSA in the first place, and now the advent of shared access to paged CPU memory, C++11-compatible atomics included. I didn't notice anything in the HSAIL spec that would prevent the implementation of "traditional" Algol-like programming languages. Now
Re: (Score:1)
An 8-core Steamroller would be an improvement too, now that computer games are finally starting to scale well with multiple cores. I might even be willing to re-purpose a server part for my next desktop, even if it is a tad more expensive.
If AMD does not bother, though, the Xeon E3-1230 v3 from Intel looks nice as well; only the mainboards that actually support ECC RAM are a bit expensive.
Re: (Score:3)
Your problem - and Intel and AMD have this problem also - is thinking the specifics of the device have something to do with its utility. They don't. What matters is how the thing enables people to do what they want to do, and we passed that level of utility in devices a decade ago. I can be telepresent with my children without resorting to either Intel or AMD technologies - all day. Compared to exclusively special uses of spreadsheets and slideshow programs, compatibility differentials of Office suites,
Re: (Score:1)
For some applications, in particular games, performance still matters. My current PC will run older games just fine, but some newer releases tend to demand a fairly powerful machine.
For example, I might be interested in Star Citizen but Chris Roberts has already announced that he is aiming for maximum eye candy on high end machines. It is unlikely that my current machine will handle that game, even on low settings.
If the applications you run do well on a machine from a decade ago, fine. But that is not the
Re: (Score:2)
Re: (Score:2)
I'm not one who buys into the whole "we should be designing games for $2000 systems" madness, but it's still obvious that improved technical performance is something that game creators want and can exploit to improve their art and design, much as film-makers like having new varieties of lens, film, and camera, new techniques for practical effects, new rigging systems and dollies, and so on.
You couldn't do the gameplay of something like MGS3, for instance, on an original PlayStation. For that gameplay to wor
Re: (Score:1)
A good point from the perspective of a game designer, and I support the sentiment.
But most of us are consumers most of the time. Even those of us who work on one or two community software projects will typically use a bunch of software where we are not involved in the making. Which means taking the software as it is, and if its creators went for a design that requires a beefy PC, you either have one or you don't use that particular software.
Re: (Score:1)
Maybe they should start with the server version, though, since AMD currently has no server part with 24 Steamroller cores.
Re: (Score:2)
Actually "pal". Friend would be "ystävä".
This shows up every time! (Score:2)
No, it's a river in India [wikipedia.org]... AMD uses river names for APU/CPU families, and GPUs are islands [wikipedia.org] (where the type and location of the island is used as the family grouping)
Of course, that would miss the point (Score:5, Interesting)
The whole point of AMD APUs is low cost gaming - that is, lower cost than buying a dedicated GPU plus a processor. Many already argue that you don't save much by buying an APU: a cheap Pentium G3220 with an AMD Radeon 7730 costs the same as the A10 Kaveri APU and will give a better frame rate. Even if Kaveri APU prices come down, the savings will be small, and if you have to buy GDDR5 memory there won't be any savings at all. It's understandable that AMD didn't take that route.
Re: (Score:1)
Also, benchmarks have already shown that Kaveri couldn't utilize GDDR5 well enough: memory speeds over 2133 MHz no longer improve benchmarks on it, so while "bog standard" 1600 MHz DDR3 will leave Kaveri's GPU somewhat memory-starved, 2133 MHz is already enough for it (upping that to higher frequencies by overclocking the memory alone doesn't help).
Besides, Kaveri could just go for four DDR3 memory channels. The chip supposedly can do it; it's just that the motherboards available right now can't.
Now in the f
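For reference, the peak numbers behind that bandwidth argument, as a small Python sketch (standard transfer rates assumed; sustained bandwidth will be lower):

    # peak bandwidth in GB/s = transfers per second * 8 bytes per 64-bit channel * channels
    dual_1600 = 1600e6 * 8 * 2 / 1e9   # dual-channel DDR3-1600 -> ~25.6 GB/s ("bog standard")
    dual_2133 = 2133e6 * 8 * 2 / 1e9   # dual-channel DDR3-2133 -> ~34.1 GB/s (where scaling flattens out)
    quad_2133 = 2133e6 * 8 * 4 / 1e9   # hypothetical quad-channel DDR3-2133 -> ~68.3 GB/s
    print(dual_1600, dual_2133, quad_2133)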
Re: (Score:1)
Besides, Kaveri could just go for four DDR3 memory channels. The chip supposedly can do it; it's just that the motherboards available right now can't.
It would also require a new and presumably more expensive socket, and motherboards would always need four DDR3 sockets to provide the full bandwidth - no more cheap mini boards with only two sockets.
Overall, I'm not sure if it would be much cheaper than GDDR5 on the mainboard.
Re: (Score:2)
Mini-ITX and smaller boards could just use SO-DIMMs.
Re: (Score:2)
Now, in the future, APUs may get to a point where they would greatly benefit from an on-motherboard block of GDDR5 for the GPU, but Kaveri ain't it - it is off by about a factor of 3 or so. 8 CUs is just far too little...
Why don't you simply get a discrete card and use the APU CUs for what they're supposedly good for, that is, random-access-ish general-purpose computation, as opposed to streamed-access shading and texturing? I mean, the fact that the APU is also reasonably competent as an iGPU is nice, but...
Re: (Score:3)
The whole point of AMD APUs is low cost gaming.
Sigh. You people with your myopic vision. If AMD consigned itself to your view of what it should do, it'll be dead in another 5-7 years. Let's take a look at what Intel offers: Higher performance. Lower energy consumption. Less heat. Smaller die size. In fact, you'd be hard pressed to find anything AMD has in its favor from an engineering standpoint. So what does AMD have that's keeping it in business? Cost. AMD offers a lower price point for economy systems.
But that's not where the profit is. That's not wh
Re: (Score:1)
" So what does AMD have that's keeping it in business? "
A licensing agreement with intel for the x64 instruction set also helps. Every intel chip sold gives AMD a few bucks and every AMD chip sold gives intel a few bucks (because of the licensing of x86 instruction set). I'm not sure on the specifics but I assume this to be a lot of money.
Re: (Score:2)
Sigh. You people with your myopic vision. If AMD consigned itself to your view of what it should do, it'll be dead in another 5-7 years. Let's take a look at what Intel offers: Higher performance. Lower energy consumption. Less heat. Smaller die size. In fact, you'd be hard pressed to find anything AMD has in its favor from an engineering standpoint. So what does AMD have that's keeping it in business? Cost. AMD offers a lower price point for economy systems.
I am sorry, but I think you can't realize that at
Re: (Score:1)
the fact the AMD offerings are woefully uncompetitive at almost any price level
They simply aren't.
If they were, Sony and Microsoft wouldn't have opted to put tens of millions of their solutions in their consoles.
Also, for the older A10s, faster memory really helped with graphics performance, and the fact that the consoles have a different memory configuration likely helps them perform better than the current desktop parts.
It's nothing new that GPUs like having massive memory bandwidth; sure, the one in the APU is pretty small, but it would likely still enjoy better memory access than what a (mu
Re: (Score:2)
If they were, Sony and Microsoft wouldn't have opted to put tens of millions of their solutions in their consoles.
Intel probably aren't much interested in the tiny margins in the console market, particularly when they'd have to throw a lot of R&D into building a competitive GPU.
That doesn't make AMD any more competitive in PCs.
Re: (Score:1)
In the "chips for people who aren't ultra-graphics extreme gamers"-department they seem to be doing pretty ok currently. The APUs doesn't seem terrible. Sure you can get faster processors and sure you can get dedicated graphics cards.
Maybe chip A is quicker in this and that area vs chip B and prices are whatever but for those consumers I don't know how much it really matters. For me personally if I went without a dedicated graphics card I think game performance would rate pretty high / the highest.
If more g
Re: (Score:2)
Midrange market? HAHA
The Intel Core i5, mobile or desktop, is the midrange market. NONE of AMD's processors, APUs or FX, can match it in speed or power efficiency, not even the new Kaveri APU.
Re: (Score:2)
Intel have plenty of CPUs that can compete with AMD's prices at the low end. The only thing AMD have is better graphics, which is why they keep pushing APUs, even though there's a tiny market for them... they're basically for people who want to play games, but don't want to play them enough to buy a proper GPU.
It would be a very viable company if they could just dump the CPU side and concentrate on graphics.
Re: (Score:2)
Re: (Score:2)
But that's not where the profit is. That's not what's going to take AMD into the mid-21st century. If AMD sticks to that line of thinking, it'll go the way of Cyrix... and for exactly the same reason. AMD can't invest in a new fab plant because its cash reserves are too low, whereas Intel's pile of gold just keeps growing.
Dude, this already happened back in 2009 when they first spun off and later sold out of GlobalFoundries.
They are trying to claw their way into the mid-range market and undercut Intel.
Again, it sounds like you dropped out of a time machine from 2009, when Thuban was aging and Bulldozer was supposed to be AMD's ace in the hole. Since then AMD has done nothing but dodge Intel, selling all-in-one APUs using their graphics division and special-case architectures for consoles, supercomputers, ARM servers and everything but going head to head with Intel. Their flagship CPU is still a Bulldozer
Re: (Score:1)
Uh oh. Yet it is much cheaper than an i5 (plus the premium on the mainboard), but does much better in games.
http://www.anandtech.com/show/7643/amds-kaveri-prelaunch-information [anandtech.com]
Re: (Score:2)
NO ONE buys an i5 CPU to use its integrated graphics for games. The Intel HD Graphics are a bonus for people who don't play games. Add a discrete GPU to an i5-based PC as well as to an AMD PC, and it will be game over for ANY AMD-based system, not just Kaveri-based ones. As for the Kaveri A10, which costs $170 online at the lowest as of now, you can beat it with a $69 Intel Haswell Pentium G3220 and a discrete graphics card like a $100 Radeon 7730.
Re: (Score:2)
The thing is, the real Kaveri star is the A8-7600, not the A10 models. The A8-7600 is only $119 yet thrashes any Intel APU in gaming.
Re: (Score:2)
The A8 makes sense for a very low-end system. If I was building a PC, whether for playing games or not, I think I might have considered the A8 or the A10-6800K (last-generation A10) if I was looking for a $130 processor. It looks like they both have compute power somewhere in the neighborhood of the Intel i3-4150, but somewhat better graphics.
Re: (Score:3)
Re: (Score:2)
The problem is that AMD cores aren't that great. For one, each AMD module has two integer cores but only one FPU. Despite that, they call it "dual core", even though for floating-point work the AMD architecture sounds awfully like hyper-threading. So it's no surprise to me that the Intel i3 (2 real cores, but 4 logical because of hyper-threading) can challenge 4- and 6-core AMD CPUs.
Re: (Score:2)
Re: (Score:2)
The whole point of AMD APUs is low cost gaming - that is, lower cost than buying a dedicated GPU plus a processor. Many already argue that you don't save much by buying an APU: a cheap Pentium G3220 with an AMD Radeon 7730 costs the same as the A10 Kaveri APU and will give a better frame rate. Even if Kaveri APU prices come down, the savings will be small, and if you have to buy GDDR5 memory there won't be any savings at all. It's understandable that AMD didn't take that route.
AMD is aiming for "good enough", and they did a great job. Per thread, AMD is now on par with Intel's Haswell and has an integrated GPU that can cover the 80/20 rule for games. The only issue I personally have is that AMD's current Kaveri offerings are limited to a 2-module (4-core) setup that consumes about 50% more power than a 2-core (4 hyper-thread) Haswell while idle, and about 100% more under pure CPU load. Since I will have a discrete GPU, I see no benefit in consuming that much more power. We're ta
Re: (Score:2)
I don't understand what you mean by "per thread". AMD claims the A10 is a four-core CPU, but each module has only one FPU despite presenting two logical cores, which sounds awfully like hyper-threading to me, whereas Intel was more honest and called the i3 a 2-core CPU with hyper-threading. Basically, AMD overstates how many real cores its processors have, but this strategy seems to work, since the web forums are filled with fanboys who think that more cores is always better.
This is why benchmarks show AMD can'
Re: (Score:2)
Intel's hyper-threading shares nearly everything, right down to the integer execution units and L1 cache. Int
Re: (Score:2)
The 45W Kaveris are interesting, as they show a nice improvement in performance/watt - the new "sweet spot" is not in the top models but in the somewhat slower A8-7600 (3.1-3.3 GHz CPU speed).
I wonder how a 4 module (8 core) FX on that basis would perform and at which TDP. For software that scales well with many cores, it might be a good buy.
The point is MANTLE (Score:2)
I'm shopping for a new gaming computer on a budget, and even models shipping with this APU usually have an R9 270X dedicated card as well, for a price point of about $850 USD.
Where this gets interesting is if MANTLE gets widely adopted. Suddenly those 6 or 8 GCN CUs on the APU can be treated as additional GPU processing power to be used in the queue. While maybe not as powerful as a second video card, it should give a boost in performance at no additional cost.
Of course, that assumes game developers star
A Better Explaination At Anandtech (Score:5, Informative)
Anandtech's writeup [anandtech.com] (which Hothardware seems to be ripping off) has a much better explanation of what's going on and why it matters.
It's also worth noting that the Anandtech article implies that AMD is still on the fence on Kaveri APUs with more memory bandwidth, and that it may be something they do if there's enough interest/feedback about it.
Re: (Score:1)
Re: (Score:1)
Cheers for the better article.
So it wasn't just me who thought the author of the HotHardware article didn't have a clue?
Obvious questions (Score:1)
Regardless, motherboard manufacturers might still want to integrate the GDDR5 to sell the next generatio