AMD Hardware

AMD Considered GDDR5 For Kaveri, Might Release Eight-Core Variant 120

MojoKid writes "Of all the rumors that swirled around Kaveri before the APU debuted last week, one of the more interesting bits was that AMD might debut GDDR5 as a desktop option. GDDR5 isn't bonded into sticks for easy motherboard socketing, and motherboard OEMs were unlikely to be interested in paying to solder 4-8GB of RAM directly onto their boards. Such a move would also shift the RMA responsibility for RAM failures back to the board manufacturer. It seemed unlikely that Sunnyvale would consider such an option, but a deep dive into Kaveri's technical documentation shows that AMD did indeed consider a quad-channel GDDR5 interface. Future versions of the Kaveri APU could potentially also implement 2x 64-bit DDR3 channels alongside 2x 32-bit GDDR5 channels, with the latter serving as a framebuffer for graphics operations. The other document making the rounds is AMD's software optimization guide for Family 15h processors. This guide specifically shows an eight-core Kaveri-based variant attached to a multi-socket system. In fact, the guide goes so far as to say that these chips in particular contain five links for connection to I/O and other processors, whereas the older Family 15h chips (Bulldozer and Piledriver) only offer four HyperTransport links."
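For a rough sense of what that hybrid channel layout would buy, peak theoretical bandwidth is just bus width times transfer rate times channel count. A quick sketch of the arithmetic (the transfer rates below are illustrative assumptions, not figures from AMD's documentation):

```python
# Rough theoretical peak-bandwidth estimate for the hybrid memory layout
# described above: 2x 64-bit DDR3 channels plus 2x 32-bit GDDR5 channels.
# The transfer rates chosen here are illustrative assumptions.

def peak_bandwidth_gbps(bus_width_bits, transfers_per_us, channels):
    """Peak bandwidth in GB/s: bus width in bytes * MT/s * channels / 1000."""
    return (bus_width_bits / 8) * transfers_per_us * channels / 1000

# 2x 64-bit DDR3 channels, assuming DDR3-2133 (2133 MT/s effective)
ddr3 = peak_bandwidth_gbps(64, 2133, 2)

# 2x 32-bit GDDR5 channels, assuming 5500 MT/s effective
gddr5 = peak_bandwidth_gbps(32, 5500, 2)

print(f"DDR3 channels:  {ddr3:.1f} GB/s")   # ~34.1 GB/s
print(f"GDDR5 channels: {gddr5:.1f} GB/s")  # ~44.0 GB/s
print(f"Combined:       {ddr3 + gddr5:.1f} GB/s")
```

Even at these assumed clocks, the narrow GDDR5 pair would out-deliver the full-width DDR3 pair, which is why it makes sense as a dedicated graphics framebuffer.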
This discussion has been archived. No new comments can be posted.

  • by symbolset ( 646467 ) * on Monday January 20, 2014 @03:54AM (#46011437) Journal
    They don't care to because it would cut into their server revenue where margins are higher. Personally I think that really sucks. Intel is the same way. Maybe the migration to mobile where we don't have these margin protection issues is a good thing.
  • by guacamole ( 24270 ) on Monday January 20, 2014 @04:08AM (#46011481)

    The whole point of AMD APUs is low-cost gaming: that is, lower cost than buying a dedicated GPU plus a processor. Many already argue that you don't save much by buying an APU. A cheap Pentium G3220 with an AMD Radeon 7730 costs the same as the A10 Kaveri APU, and will give better frame rates. Even if Kaveri APU prices come down, the savings will be small. And if you have to buy GDDR5 memory on top of that, there won't be any savings at all. It's understandable that AMD didn't take that route.

  • by Anonymous Coward on Monday January 20, 2014 @04:15AM (#46011497)

    False: the Xbox One uses plain DDR3.

    That is also one of the key reasons why many games on the Xbox One cannot do 1080p (that, and the lack of ROPs: the PS4 has twice as many ROPs for rasterization).

    The Xbox One tries to "fix" the RAM speed by using embedded SRAM (ESRAM) on-chip as a cache in front of the DDR3 for graphics. It remains to be seen how well the limitations of DDR3 can be mitigated. Early games are definitely suffering from "developer cannot be assed to do a separate implementation for Xbox One".

    Kaveri, while related to the chips inside the consoles, is a decidedly lower-performing part. Kaveri includes 8 CUs. The Xbox One has 14 CUs on-die, but two of those are disabled (to improve yields), so 12. The PS4 has 20 CUs on-die, again with two disabled to improve yields, so 18.

    On the other hand, Kaveri has far better CPU cores (the console chips feature fairly gimpy Jaguar cores, though both consoles have 8 of those cores vs. 4 on Kaveri).

    Any integrated graphics setup that uses DDR3 is bound to be unusable for real gaming. Kaveri has a good integrated graphics setup compared to the competition, but it is far behind what the new consoles offer: boosting it with GDDR5 without at least doubling the CU count as well wouldn't do much. Either way, it really isn't usable for real gaming. It beats the current top offering from Intel, but that's a bit like winning the Special Olympics when compared to real graphics cards (even ~$200-250 midrange ones).

  • by Anonymous Coward on Monday January 20, 2014 @04:17AM (#46011509)

    No, they don't do it because it considerably raises the cost of the chip and doesn't help the "average" user's workload. Many-core processors carry a bunch of inherent complexity in sharing information between cores, or in sharing access to a common bus over which information can be transferred between processes. There are tradeoffs to improving either scenario.

    Making the interconnection between cores stronger means a high transistor count and many layers in silicon. Even then, the scenarios in which the cores need to intercommunicate mean that either processes are dispatched to the processor and the processor has a scheduling algorithm built in (which has its own issues), or you need a new set of instructions that lets software dispatch code from CPU to CPU. Race conditions and all sorts of other nonsense are then inherent in the CPU itself, and you have a whole bunch of problems trying to write code for the mess.

    Even if you go the above route, you still have to get the data you process out of the CPU and into RAM. Then you have a bus contention problem: how can multiple cores access different sections of RAM simultaneously? Is the CPU responsible for settling the conflict when two different cores want to write to the same section of RAM at the same time? These issues are easily settled with only 2, 3, or 4 cores (and currently the responsibility for preventing these scenarios falls on the OS, not the CPU), but they would explode with 24 cores or more.
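    The write-contention hazard described above is the same one software hits whenever multiple threads touch shared memory, and the resolution looks the same: some arbiter serializes the conflicting writes. A minimal sketch in Python, with threads standing in for cores and a lock standing in for the arbitration logic (illustrative only, not how any CPU actually implements it):

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    """Each 'core' increments a shared counter; the lock arbitrates
    access the way the OS must arbitrate contended writes today."""
    global counter
    for _ in range(iterations):
        with lock:  # without this, concurrent increments can be lost
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 with the lock; unpredictable without it
```

    With 4 threads the lock is cheap; the comment's point is that the cost and complexity of this arbitration grow sharply as the number of contending cores climbs toward 24 and beyond.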

    It's currently possible to build a CPU with several hundred cores (and both Intel and AMD have done it). But no one has settled the issues above to make it practical, or invented a new software paradigm that makes them easy to fix.

  • by TheLink ( 130905 ) on Monday January 20, 2014 @04:44AM (#46011639) Journal

    They don't care because a desktop with a 24-core AMD CPU is likely to be slower than a 4-core Intel CPU for most popular _desktop_ tasks, which are mostly single-threaded. They're already having trouble competing with their 8-core CPUs; adding more cores would make their chips either too expensive (too big) or too slow (dumber, smaller cores).

    The sad truth is that for those who don't need the speed, a cheap AMD is enough; they don't need the expensive ones. Those who want the speed pay more for Intel's faster stuff. The FX-8350 is about AMD's fastest desktop CPU for people who'd rather not run a 220W-TDP chip, and it already struggles to stay ahead of Intel's midrange for desktop tasks.

    A few of us might regularly compress/encode video or use 7zip to compress lots of stuff. But video compression can often be accelerated by GPUs (and Intel has QuickSync but quality might be an issue depending on implementation). The rest of the desktop stuff that people care about spending $$$ to make faster would be faster on an Intel CPU.

    A server with 24 cores will be a better investment than a desktop with 24 cores.
