
AMD A10 Kaveri APU Details Emerge, Combining Steamroller and Graphics Core Next

MojoKid writes "There's a great deal riding on the launch of AMD's next-generation Kaveri APU. The new chip will be the first processor from AMD to incorporate significant architectural changes to the Bulldozer core AMD launched two years ago, and the first chip to use a graphics core derived from AMD's GCN (Graphics Core Next) architecture. A strong Kaveri launch could give AMD back some momentum in the enthusiast business. Details are emerging that point to a Kaveri APU that's coming in hot — possibly a little hotter than some of us anticipated. Kaveri's Steamroller CPU core separates some of the core functions that Bulldozer unified and should substantially improve the chip's front-end execution. Unlike Piledriver, which could only decode four instructions per module per cycle (and topped out at eight instructions for a quad-core APU), Steamroller can decode four instructions per core, or 16 instructions for a quad-core APU. The A10-7850K will offer a 512-core GPU while the A10-7700K will be a 384-core part. GPU clock speeds have come down, from 844MHz on the A10-6800K to 720MHz on the new A10-7850K, but this should be offset by the gains from moving to AMD's GCN architecture."
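
For rough context on what those core counts and clocks imply: theoretical single-precision throughput scales as shader cores × clock × FLOPs per cycle. A quick sketch, assuming the conventional two FLOPs per shader per cycle (one fused multiply-add); actual sustained performance will be lower:

    # Peak single-precision GFLOPS for the GPU portion, from the figures above.
    # Assumes 2 FLOPs per shader core per cycle (fused multiply-add).
    def peak_gflops(shader_cores, clock_mhz, flops_per_cycle=2):
        return shader_cores * clock_mhz * flops_per_cycle / 1000.0

    print("A10-6800K:", peak_gflops(384, 844))  # ~648 GFLOPS
    print("A10-7850K:", peak_gflops(512, 720))  # ~737 GFLOPS

So despite the clock drop, the wider GCN part still comes out ahead on paper, before counting any per-clock efficiency gains.
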
  • I think AMD should have used the Phenom II instead of Bulldozer, as the Phenom II had already proven its significance over Bulldozer. Besides, implementing GCN into the Phenom II would restart the Phenom II's production.
    • Re: (Score:2, Insightful)

      I wish I had mod points.

      Proud Phenom II user here. Awesome chip, and I still use it on my gaming platform ... from 2010?!

      I like the 6 cores, and it's only 10% slower than an icore 7! ... a 1st-generation icore 7 from 2010. :-(

      It is obsolete and I see no purpose in upgrading my system to another one based on it. It is like making a multicore 486 at 3GHz: it would not hold a candle to anything made in the last 15 years.

      AMD needs to do an Intel: dump Netburst 2.0. Intel failed with the Pentium IV, but after 3 years the core

      • by Desler ( 1608317 )

        They are called "Core i7" not "iCore7".

  • What's the GPU for? (Score:4, Interesting)

    by rsilvergun ( 571051 ) on Tuesday December 03, 2013 @11:34PM (#45591855)
    Laptops? While I'd love to see a nice, low cost CPU/GPU combo that can hang with my (rather meager) Athlon X2 6000+ and GT 240, I'm still running pretty low end gear. If this is targeted at enthusiasts they're just going to replace it with a card...
    • Laptops? While I'd love to see a nice, low cost CPU/GPU combo that can hang with my (rather meager) Athlon X2 6000+ and GT 240, I'm still running pretty low end gear. If this is targeted at enthusiasts they're just going to replace it with a card...

      Basically it's a CPU + GPU bundle that only takes up the size of the CPU. It's not meant for the hardcore gamers, just pragmatists who are looking for value and simplicity. Like every company, AMD has a product lineup -- different products are marketed in different ways (although AMD is not always as clear about the matter as it could be). For the price, these chips are usually pretty good values.

      • by Bengie ( 1121981 )
        Actually, their intent is for the IGP to be used like a co-processor. The IGP has about two orders of magnitude lower latency than discrete GPUs, and that makes a difference.
    • You ask in the title "What's the GPU for?"

      You are all over the place. You wonder what the GPU is for, then state that you actually would love this very product because it's a low-cost CPU/GPU combo, but then specifically name your "rather meager" rig that is even slower than the last generation of APUs in both CPU and GPU performance (i.e., your rig is the thing that cannot hang), and finish the whole thing off hypothesizing that AMD might in fact be targeting "enthusiasts."

      Are you some sort of discordian
    • I believe if you have a discrete GPU based on the same architecture (GCN in this case), you can use both simultaneously for a small speed boost, or switch between them depending on load (so your 250W video card doesn't need to spin its fans up just to render the desktop).

      There's also some consideration for using the integrated GPU for lower-latency GPGPU stuff while using the discrete GPU for rendering. I don't think that's actually used in anything yet, but I'm not actually using an APU in any of my machin

    • Exactly. That's why the big deal with Intel's Haswell was basically "consumes a lot less power"; the rest was incremental, plus a few added instructions for the future. AMD seems to have the same tech analysts as Netcraft, crying "The desktop is dying, the desktop is dying!"

      If you plan to own anything that is a desktop, then anything like this from AMD or Intel, which can be replaced with something that is TWICE as fast using the cheapest $50 dedicated video card, makes the advances absolutely meaningless.

      In fa

      • by Bengie ( 1121981 )
        There are some upcoming new techs that make use of IGPs, in that the IGP is potentially 10x faster than a discrete GPU because of latency issues.
    • I kind of wonder about this too. No matter how low-end your desktop system is, as long as you have a modern CPU, even in, say, the Celeron range, you can always pop a $100 ATI video card into it (check Tom's Hardware's latest recommendations) and it should run circles around those AMD APUs with integrated graphics. Now AMD is supposedly shooting at the market for these $100 video cards. That is, they seem to imply that this APU will make cheap video cards unnecessary. It will certainly be interesting to lo

  • Will this new architecture of AMD support OpenCL 2.0?

    • by Bengie ( 1121981 )
      These are fully programmable GPUs that support preemptive multitasking and protected-mode memory addressing, and they can even take page faults to use virtual memory transparently with the OS. Now for the good part: they are fully C and C++ compliant. If you can write OpenCL in C or C++, then you can run it on these GPUs.
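
      For anyone who hasn't touched GPU compute: whatever subset of OpenCL 2.0 Kaveri ends up exposing, the basic programming model is vendor-neutral. A minimal vector-add sketch using pyopencl (assumes pyopencl and a working OpenCL runtime are installed; this is the plain buffer-copy style, not the zero-copy path discussed elsewhere in the thread):

          import numpy as np
          import pyopencl as cl

          ctx = cl.create_some_context()      # picks an available OpenCL device
          queue = cl.CommandQueue(ctx)

          a = np.random.rand(1 << 20).astype(np.float32)
          b = np.random.rand(1 << 20).astype(np.float32)

          mf = cl.mem_flags
          a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
          b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
          out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

          prg = cl.Program(ctx, """
          __kernel void add(__global const float *a,
                            __global const float *b,
                            __global float *out) {
              int i = get_global_id(0);
              out[i] = a[i] + b[i];
          }
          """).build()

          prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

          out = np.empty_like(a)
          cl.enqueue_copy(queue, out, out_buf)
          assert np.allclose(out, a + b)

      The same source runs on AMD, Intel, or NVIDIA devices; only the runtime differs.
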
  • by Anonymous Coward

    "enthusiasts" don't give a rat's tail about on-board graphics.. so strip that shit out and give us an unlocked processor for less coin. tyvm.

    • so strip that shit out

      It will be much harder to have the GPU cache-coherent with the CPU if that "shit" is stripped out. It is this advance, far more than anything else, which makes this architecture hold promise. There's now a crazy amount of arithmetic performance available (far, far more than even the most expensive Intel chips) but without the usual high-latency (and pain in the ass) trip to and from a graphics card.

      That "shit" will make the GPU suitable for a substantially broader range of problems s

    • Yes, this isn't an enthusiast part, it's a budget part.

      To be fair, most of the PC market is budget. We the enthusiasts are the minority. This thing will probably play Starcraft 2, Crysis 3, Battlefield Whatever, BioShock Infinite Squared, etc... well enough for someone who doesn't mind 35 fps on an HD monitor. If you want 90 fps on a 4K monitor, you'll have to move up to Core i5 + mid level or better discrete graphics card.
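
      For scale: the jump from "35 fps on an HD monitor" to "90 fps on a 4K monitor" is bigger than it sounds, since 4K pushes four times the pixels of 1080p and the frame-rate target here is ~2.5x higher too. Rough arithmetic:

          hd  = 1920 * 1080              # ~2.07 Mpixels
          uhd = 3840 * 2160              # ~8.29 Mpixels

          print(uhd / hd)                # 4.0x the pixels per frame
          print((uhd * 90) / (hd * 35))  # ~10.3x the pixel throughput

      So the gap between "budget APU" territory and that i5-plus-discrete setup is roughly an order of magnitude of GPU work, not a small step.
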
    • I know I am in the minority, but it's cool to have mid-level gaming capability on portables without having to pay an arm and a leg for it. On the desktop... I agree. The market for people who insist on gaming on a budget PC but refuse to put in at least a $100-120 video card is kind of small.

  • Kaveri looks good for a budget gaming PC, but I think they are being a bit optimistic about the "dual graphics" feature. This is where you pair the iGPU with a dGPU to get better performance. AMD has never been able to get this feature to work properly. All it does is create "runt" frames, which make the FPS look higher without giving any visual improvement.

    http://www.tomshardware.com/reviews/dual-graphics-crossfire-benchmark,3583.html [tomshardware.com]

  • by Anonymous Coward

    Kaveri should be properly compared to the chips in the PS4 and Xbone. As such, it can be said that Kaveri is significantly poorer than either.

    -Kaveri is shader (graphics) weak compared to the Xbone, which itself is VERY weak compared to the PS4.
    -Kaveri should be roughly CPU equivalent (multi-threaded) to the CPU power of either console
    -Kaveri is memory bandwidth weak compared to the Xbone, which itself is VERY bandwidth weak compared to the PS4
    -Kaveri is a generation ahead of the Xbone in HSA/hUMA concepts,

    • - Single-thread performance matters much more than multi-thread performance, and Kaveri has almost twice the single-thread performance of the Xbone and PS4 chips.

      - Memory bandwidth is expensive. You either need a wide and expensive bus, or expensive low-capacity graphics DRAM which needs soldering and means you are limited to 4 GiB of memory (with the highest-capacity GDDR chips out there) with zero possibility of upgrading it later, or both (and MAYBE get 8 GiB of soldered memory). Though there has been rumour

        • - Memory bandwidth is expensive. You either need a wide and expensive bus, or expensive low-capacity graphics DRAM which needs soldering and means you are limited to 4 GiB of memory (with the highest-capacity GDDR chips out there) with zero possibility of upgrading it later, or both (and MAYBE get 8 GiB of soldered memory). Though there have been rumours that Kaveri might support GDDR5, for configurations with only 4 GiB of soldered memory.

        In general (not necessarily relating to Kaveri as-is), 8 GiB of fast, soldered memory as in the PS4 would make sense for a PC.

        The current APUs are seriously bandwidth starved. In reviews where a Phenom II with a discrete graphics card is pitted against an APU with similar clock speed and number of graphics cores, the Phenom II usually wins (except benchmarks that don't use the GPU much). Overclocking the memory helps the APU some, which is further evidence.

        With PS4 style memory that problem could be solved,
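
        To put numbers on "bandwidth starved": a desktop APU on dual-channel DDR3 peaks at roughly 30-34 GB/s, while the PS4's 256-bit GDDR5 interface is rated around 176 GB/s. The arithmetic (theoretical peaks; DDR3-2133 is assumed as roughly the fastest speed these APUs support, and sustained bandwidth is lower in practice):

            def peak_gb_s(transfer_rate_mt_s, bus_width_bits):
                # transfers per second times bytes moved per transfer
                return transfer_rate_mt_s * 1e6 * (bus_width_bits / 8) / 1e9

            print(peak_gb_s(2133, 128))   # dual-channel DDR3-2133: ~34 GB/s
            print(peak_gb_s(5500, 256))   # PS4-style GDDR5 at 5.5 GT/s: ~176 GB/s

        That roughly 5x gap is the whole argument for soldered GDDR5 or some other wide, fast memory option.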

          • In reviews where a Phenom II with a discrete graphics card is pitted against an APU with similar clock speed and number of graphics cores, the Phenom II usually wins... looking back on my last three computer purchases, I always ended up doing a complete upgrade instead of adding RAM to the existing PC, because the CPU and GPU were also obsolete...

          Because my computer is a Phenom II, this might be the first time I add RAM to an existing PC.

        • by Bengie ( 1121981 )
          APUs are only bandwidth-starved when working with large datasets. There is a huge class of workloads that involve small amounts of data but require a lot of processing. In these cases, memory bandwidth isn't the limiting factor in any way. In many of these cases, it's faster to process the data on an 80 GFLOPS CPU than to offload to a 3 TFLOPS discrete GPU. Now we have a ~900 GFLOPS GPU that is only a few nanoseconds away from the CPU instead of tens of microseconds.
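
          A back-of-the-envelope version of that argument: offloading only pays when the compute time saved exceeds the dispatch latency. Using the throughput figures quoted above and some assumed launch latencies (the exact numbers are illustrative, not measured):

              # When does offloading a small kernel beat just running it on the CPU?
              def runtime(flops, gflops, launch_latency_s):
                  return launch_latency_s + flops / (gflops * 1e9)

              for work in (1e4, 1e5, 1e6, 1e9):        # FLOPs in the kernel
                  cpu  = runtime(work,   80, 0)        # CPU: no launch cost
                  igp  = runtime(work,  900, 1e-6)     # integrated GPU: ~1 us away (assumed)
                  dgpu = runtime(work, 3000, 30e-6)    # discrete GPU: ~30 us away (assumed)
                  best = min(("CPU", cpu), ("IGP", igp), ("dGPU", dgpu), key=lambda t: t[1])
                  print(f"{work:10.0e} FLOPs -> {best[0]}")

          Tiny kernels stay on the CPU, huge ones still belong on the discrete card, and the integrated GPU wins the wide band in between, which is exactly the class of workload described above.
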
    • I could be wrong, but it had little to do with AMD and more to do with MS specifications.

      The only difference between the graphics cores on the Xbox One and PS4 is that the PS4 uses newer DDR5 memory, while the Xbox uses DDR3. The Xbox tried to compensate for the slower memory by adding additional cache on die; however, this takes up physical real estate, which forced them to use a couple fewer cores (in exchange for faster memory handling). To simply say one is faster/better than the other is a bit misleading.

      The reaso

    • OK, I admit I didn't read too carefully. Thought you were just comparing the Xbox and PS4 situation.

      However it is likely for the exact same reason. When is DDR5 coming out? Can you actually buy some? No you cannot. Why design and release something you cannot use?

      Reminds me of the funny motherboards with two kinds of slots, one for one kind of DDR vs. another. I have no doubt they have another version all ready for "release" once DDR5 becomes viable and commonplace.

  • by DudemanX ( 44606 ) <(dudemanx) (at) (gmail.com)> on Wednesday December 04, 2013 @02:22AM (#45592521) Homepage

    This is the chip that unites the CPU and GPU into one programming model with unified memory addressing. Heterogeneous System Architecture (HSA) and heterogeneous Uniform Memory Access (hUMA) are the nice buzzword acronyms that AMD came up with, but it basically removes the latency from accessing GPU resources and makes memory sharing between the CPU cores and GPU cores copy-free. You can now dispatch instructions to the GPU cores almost as easily and as quickly as you do to the basic ALU/FPU/SSE units of the CPU.

    Will software be written to take advantage of this though?

    Will Intel eventually support it on their stuff?

    Ars article on the new architecture. [arstechnica.com]

    Anandtech article on the Kaveri release. [anandtech.com]
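
    To make "copy free" concrete, here is a rough model of the transfer overhead that unified addressing removes. The PCIe figure is an assumption for illustration (roughly PCIe 3.0 x16 peak); real transfers are slower and also pay per-transfer latency:

        # With a discrete card, a dispatch that reads and writes host data pays
        # for a trip across PCIe in each direction; with HSA/hUMA the GPU works
        # on the same pages the CPU wrote, so this term largely disappears.
        PCIE_GB_S = 16   # assumed bus bandwidth

        def copy_overhead_ms(buffer_mb):
            round_trip_bytes = 2 * buffer_mb * 1e6
            return round_trip_bytes / (PCIE_GB_S * 1e9) * 1e3

        for mb in (1, 64, 512):
            print(f"{mb:>4} MB buffer: ~{copy_overhead_ms(mb):.2f} ms spent copying")

    Whether software actually exploits this is exactly the open question above, but the saving per dispatch is real and grows with buffer size.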

    • The thing I hope to see explored is using such a chip with discrete graphics. The ability for the on-chip GPU to access the same memory will allow some things to be optimised (but possibly not all graphics). I imagine in future we'll see a repeat of what happened with FPU coprocessors in the late 80s onwards: (this is a rough picture, and you are advised to look up the precise details if you're interested)

      1. The 386 had a discrete FPU, called the 387
      2. The 486 integrated the FPU, and all subsequ
      • by LoRdTAW ( 99712 )

        The x86 external FPU started with the Intel 8087 [wikipedia.org], which launched in 1980. The 8087 was the FPU for the 8086, the first generation of x86. The 80286 followed the same logic, using an external FPU, the 80287.

        The 386 was the first to integrate the FPU onto the CPU die in the DX line of 386's. The 386SX was a 386 without the FPU which depending on the computer/motherboard could be upgraded with a 387 coprocessor.

        So:
        386DX = 386+FPU
        386SX = 386, no FPU

        The 486 also followed the same logic offering a DX or SX vers

        • by the_humeister ( 922869 ) on Wednesday December 04, 2013 @10:17AM (#45595173)

          Your history is rather off. The 386 never had an integrated FPU. The 386 DX had a 32-bit bus; the 386 SX had a 16-bit bus as a cost-saving measure. The 486 DX was the one with the integrated FPU, and that was the first to include the FPU by default. The 486 SX had the FPU fused off.

          • by LoRdTAW ( 99712 )

            Ah shit, you're right. I forgot that the 386 didn't have an FPU and was confused by the 486SX/DX nomenclature.

            Thinking back, my father had two 386s at work. One, a 386DX, was for CAD, and now that I think of it, it had a Cyrix "Fast Math" 387 FPU. The interesting thing was it had a slot which was two 8-bit ISA slots end-to-end, forming a 32-bit expansion slot. It wasn't populated, but it was interesting. He also had a 386SX which was used for word processing and accounting/payroll. Later on we had two 486s.

    • by higuita ( 129722 )

      Actually, it should be the drivers' job to support this.

      When an app asks to copy something to the GPU, it asks the GPU driver, which can use that zero-copy/remap magic and tell the app it's done.

      So yes, it should be supported out of the box if the drivers support it right.

    • Games will almost certainly make use of uniform memory for loading textures faster. That feature will make it much easier to implement "mega-textures".

  • The current Richland APUs have a native memory controller that runs at 1866MHz, so if you put in 9-10-9 RAM of that speed and overclock it a hair, you get graphics performance that ranks at 6.9-7.0 in the WEI in Win7. Remember, you have to jack up the memory speed since the GPU inside the CPU is using system memory instead of GDDR5. That rating is medium speed for games. So that's around $139 for the top-of-the-line chip and $75 for the RAM.

    Now let's look at Intel's solution for a basic gaming or HD v
    • Why is anyone still buying Intel?

      Power consumption.

      I really like AMD (in fact, all my computers since 1999 -- except for an old iMac -- have been AMD-based), but I really, really wish I could get a (socketed, not embedded) AMD APU with less than 65W TDP (ideally, it should be something like 10-30W).

      I hate that when I ask people in forums "what's the lowest power consumption solution for MythTV with commercial detection and/or MP4 reencoding?" the answer is "buy Intel."

      • You're missing three important factors. One is that both brands downclock significantly when not in use, so they're a lot closer in real-world power consumption than you think. Maximum TDP is just that, a maximum. That's why not many servers have AMD chips, but for desktops running normal tasks like web browsing, the CPU is reduced to a lower power state over 90% of the time.

        Secondly, if it's not a laptop, not many people really care. DVRs sort of make sense because of the actual heat, though.
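
        A rough illustration of why the idle numbers dominate for a desktop (idle draw, duty cycle, hours of use and electricity price are all assumptions here, not measurements):

            def annual_kwh(load_w, idle_w, load_fraction, hours_per_day=8):
                avg_w = load_fraction * load_w + (1 - load_fraction) * idle_w
                return avg_w * hours_per_day * 365 / 1000

            amd   = annual_kwh(load_w=95, idle_w=15, load_fraction=0.10)
            intel = annual_kwh(load_w=77, idle_w=10, load_fraction=0.10)
            print(f"difference: ~{amd - intel:.0f} kWh/yr, "
                  f"~${(amd - intel) * 0.12:.2f}/yr at $0.12/kWh")

        Under those assumptions the TDP gap works out to a couple of dollars a year, which is why it matters far more for racks of servers and for laptops than for a desktop.
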
    • by 0123456 ( 636235 )

      But wait, there's more! Their 6-core non-APU chip blows away an i3 and some of their i5 processors while costing almost half as much.

      Wow! AMD only need six cores to beat an Intel dual-core! They're totally crushing Intel, baby!

      Back in the real world, if what you're saying were true, AMD wouldn't be forced to sell these chips at bargain-basement prices. I'm thinking of using one to replace my old Athlon X2 system, but only because it's cheap.

      • I don't care about principle or theory. They can have 12 cores for all I care. You know why HP has 30+% of the market? Because they're the cheapest; they undercut everyone. That's what consumers buy. If AMD can get X performance for Y price and Intel can't beat them, that's who everyone will buy.
        Plus, the i5 is a quad core. The FX6300 gets a passmark rating of around 6400. The i5-3450 gets around 6450 so they're basically the same speed.
        The FX is $119 and the i5 is $190.
        The FX has a max TDP
        • by 0123456 ( 636235 )

          If AMD can get X performance for Y price and Intel can't beat them, that's who everyone will buy.

          Except they don't, because AMD can't compete with Intel at anything other than the low end. Which they've traditionally done by selling big chips at low prices where the margins can't be good.

          Plus, the i5 is a quad core.

          You were gloating about a six-core AMD beating the i3, which is a dual core with hyperthreading. That you consider that an achievement shows how far behind AMD are right now.

          The FX6300 gets a passmark rating of around 6400. The i5-3450 gets around 6450 so they're basically the same speed.

          In a purely synthetic benchmark.

          The FX has a max TDP of 95W and the i5 is 77W and their minimum power states are almost identical.

          And my i7 has a 75W TDP.

          AMD has a price per performance passmark ratio of 55.63 and Intel's is 12.36. 55.63 beats every Intel chip in existence as well.

          So why don't AMD triple their price? They'd still beat Intel on price/performance for ever

          • It hurts me to see AMD like this.

            I am typing this on a Phenom II now. Not a bad chip at the time, several years ago, as it could hold a candle to the i5s and i7s with just a 10% performance decrease, but was less than half the price and had virtualization support for VMware and 6 cores. I run VMware and several apps at once, so it was a reasonable compromise for me and budget-friendly.

            But today I would not buy an AMD chip :-(

            I would buy a GPU chip, which is about it, as those are very competitive. I wish AMD w

            • I plan on buying AMD anyway, despite its inferiority, because I think the competition is good for everyone.

              Eventually, maybe 5, 10, or 15 years out, I expect Intel's competition to be high end ARM chips. But for now, AMD is it. If we the consumers let AMD fold, we had better be satisfied with buying used desktop processors because I fully expect new ones to double in price per performance compared to what they are today, just because nothing will be available as an alternative.
              • Dude look at the 200,000,000 XP installations still running!

                x86 is here to stay forever. Windows RT failed, and it is a competitive cycle in which ARM can't compete.

                These XP users also show there is no need to ever upgrade anymore. They work. Why change to something that does the same thing as what they already have?!

                Chips no longer double in performance as we hit limits in physics :-(

                AMD is loud and needs a big fan. An i3 core is just as competitive, sadly, unless you really hit every darn core with an app. My Phenom II

                • I don't think x86 is here to stay forever. What are some of the most popular video games in the world: Candy Crush Saga (or whatever it's called), Angry Birds, Draw Something, Cut the Rope, Minecraft. Smart phone and tablet sales keep climbing and desktop and laptop sales are stagnant.

                  I don't expect Android to dominate consumer operating systems next year, or five years from now. But I can readily believe that 15 years from now Microsoft consumer operating systems will be in a decline, and so w
          • What is your way of measuring speed that's superior to the rest of the internet's, then? The FX6300 ties or beats it in every other category and test style imaginable as well.
            You're also forgetting that processors are practically a non-issue these days. If you had an i7 system with a 1TB drive and an Athlon X2 AM3 Regor 260 system with an SSD, the AMD system would feel faster doing just about anything realistic like web browsing and opening software. Intel fanboys are just buying high-performance chips t
  • the A10-7700K will be a 384-core part.

    Can some of these cores work on game AI whilst others handle graphics, or can they only work on one task at a time? Could they do game AI at all? And can programmers program for GPU cores cross-platform, or is it still one brand at a time?

  • I was doing some reading on Mantle, and there are some interesting things I noted. One of the things about Mantle is that you can create "task" queues. You register a queue with some consumer, be that the CPU or a GPU. Registering the queue is a system call, but the queue itself is in userland. Each task is a data structure that contains a few things; several of them I was less interested in, but a few stood out. One was a pointer to a function and another was a pointer to some data.

    The way this
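
    Not the actual Mantle API, just a sketch of the pattern described above: a user-space queue of (function pointer, data pointer) tasks, registered once with a consumer and then filled without further system calls:

        from collections import namedtuple, deque

        Task = namedtuple("Task", ["func", "data"])    # "function ptr" + "data ptr"

        class TaskQueue:
            def __init__(self, consumer):
                self.consumer = consumer               # e.g. "CPU" or "GPU"
                self.tasks = deque()                   # lives entirely in user space

            def submit(self, func, data):
                self.tasks.append(Task(func, data))    # no system call per task

            def drain(self):
                while self.tasks:
                    task = self.tasks.popleft()
                    task.func(task.data)               # consumer executes the task

        q = TaskQueue("GPU")
        q.submit(lambda d: print("processed", d), {"verts": 1024})
        q.drain()

    The interesting part on real hardware is that the consumer draining the queue can be the GPU itself, which is what makes cheap dispatch worthwhile.
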
  • How does it compare in per-core performance to Intel chips? Everything else is just meaningless techno-babble.
