


AMD's Fusion Processor Combines CPU and GPU 240

ElectricSteve writes "At Computex 2010 AMD gave the first public demonstration of its Fusion processor, which combines the Central Processing Unit (CPU) and Graphics Processing Unit (GPU) on a single chip. The AMD Fusion family of Accelerated Processing Units not only adds another acronym to the computer lexicon, but ushers in what AMD says is a significant shift in processor architecture and capabilities. Many of the improvements stem from eliminating the chip-to-chip linkage that adds latency to memory operations and consumes power — moving electrons across a chip takes less energy than moving those same electrons between two chips. The co-location of all key elements on one chip also allows a holistic approach to power management of the APU. Various parts of the chip can be powered up or down depending on workloads."
This discussion has been archived. No new comments can be posted.


  • Yeah! (Score:5, Interesting)

    by olau ( 314197 ) on Friday June 04, 2010 @05:36AM (#32456056) Homepage

    I'm hoping moving things into the CPU will make it easier to take advantage of the huge parallel architecture of modern GPUs.

    For what, you ask?

    I'm personally interested in sound synthesis. I play the piano, and while you can get huge sample libraries (> 10 GB), they're not realistic enough when it comes to the dynamics.

    Instead people have been researching physical models of the piano. So you simulate a piano in software, or the main components of it, and extract the sound from that. Nowadays there are even commercial offerings, like Pianoteq (www.pianoteq.com) and Roland's V-Piano. Problem is that while this improves dynamics dramatically, they're not accurate enough yet to produce a fully convincing tone.

    I think that's partly because nobody understands how to model the piano fully yet, at least judging from the research literature I've read, but also very much because even a modern CPU simply can't deliver enough FLOPS.
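The physical-modeling approach the parent describes can be sketched in miniature. The Karplus-Strong algorithm below is a plucked-string model, vastly simpler than any piano model, and this plain-Python version is a hypothetical illustration rather than anything GPU-accelerated; but it shows the basic recipe (excite a delay line, then filter it every single sample), which is exactly the kind of per-sample floating-point work that eats FLOPS as you add strings, coupling, and a soundboard.

```python
# Karplus-Strong plucked-string synthesis: a minimal physical model.
# Parameters here are illustrative; real piano models add hammer
# interaction, coupled strings, and a soundboard, multiplying the
# per-sample floating-point cost.
import random

def karplus_strong(freq_hz, duration_s, sample_rate=44100, damping=0.996):
    """Simulate a plucked string as a delay line plus a lowpass filter."""
    n = int(sample_rate / freq_hz)  # delay-line length = one period
    # Excite the "string" with white noise (the pluck).
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for _ in range(int(duration_s * sample_rate)):
        first = buf.pop(0)
        # Average adjacent samples (lowpass) and damp, so the
        # energy decays like a real vibrating string.
        buf.append(damping * 0.5 * (first + buf[0]))
        out.append(first)
    return out

samples = karplus_strong(440.0, 1.0)  # one second of A4
print(len(samples))  # 44100
```

One note, one second: 44,100 filter updates. A full polyphonic piano model runs dozens of far richer simulations in real time, which is where a massively parallel unit would help.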

  • by Anonymous Coward on Friday June 04, 2010 @05:52AM (#32456132)

    It doesn't bring anything to the table yet. First, IOMMUs need to be more prevalent in hardware; second, there needs to be support for using them in your favourite flavour of virtualisation (Xen will be there first).

    That said, we'll get ugly vendor-dependent software wrapping of GPU resources. Under the guise of better sharing of GPUs between VMs, but really so you're locked in.

  • by ceeam ( 39911 ) on Friday June 04, 2010 @06:07AM (#32456224)

    I also heard that they *share* FP math units between CPU and GPU.

  • Re:vs Larrabee (Score:5, Interesting)

    by Calinous ( 985536 ) on Friday June 04, 2010 @06:49AM (#32456370)

    The 6-core Intel processor is the Extreme Edition (it has always been introduced at $1000), and frankly it smokes every other desktop processor out there.
    AMD is the value choice: they're cheaper at the same performance point, but they don't really compete in the over-$250 desktop arena.
    On the server front, Intel's introduction of Core2-based Xeons allowed it to compete again, and right now AMD leads in only some cases of server performance (some are draws, but most I think go to Intel). Too bad, as server processors were producing a lot of money for AMD.
    Intel also leads in performance/watt, due to a complex power delivery architecture and better processor production facilities.
    Meanwhile, AMD competes where it can on the processor front (but has ruled the previous 6 months on the performance graphics front).

  • Open Source drivers? (Score:5, Interesting)

    by erroneus ( 253617 ) on Friday June 04, 2010 @06:54AM (#32456386) Homepage

    Will the drivers for the graphics be open source or will we be crawling out of this proprietary driver hole we have been trying to climb out of for over a decade?

  • Re:heat (Score:4, Interesting)

    by Rockoon ( 1252108 ) on Friday June 04, 2010 @06:54AM (#32456390)

    This GPU-on-the-CPU is targeting the mobile/lightweight market.

    Think about how the other solutions work. That GPU chip sits next to the CPU chip, and they both must be connected to the system bus in order to access RAM. With AMD's solution here, you remove that GPU chip and therefore also remove the external bus connection it required. This is a very big win for manufacturers, who would even pay a premium for the chip because of the lower production costs. But knowing AMD, they won't be charging a premium for it. Instead they will try to push Atoms out of the market.
  • by sznupi ( 719324 ) on Friday June 04, 2010 @07:08AM (#32456430) Homepage

    Well, "incorporating a better GPU" makes quite a bit of difference, considering the i3/i5 solution isn't much of an improvement almost anywhere (speed: not really; cost: yeah, I can see Intel willingly passing the savings along... anyway, the CPU + mobo combo hasn't gotten any cheaper; power consumption improved, but mostly because Intel chipsets were not great at this before); and it seemed to be a rushed "first" solution, announced quite a bit after Fusion.

  • by Anonymous Coward on Friday June 04, 2010 @07:11AM (#32456444)

    This will be the equivalent of putting audio on the motherboard, a low baseline quality but done with no cost.

    I don't think you are viewing this correctly. I wish they didn't call it a GPU, because your thought on the matter is what people are going to think of first. Instead, think of it as the fusion of a normal threaded CPU and a massively parallel processing unit. This thing is going to smoke current CPUs in things like physics operations, without the need for anything like CUDA and without the performance limit of the PCIe bus. The biggest problem with discrete cards is pulling data off the cards, because the PCIe bus is only fast in one direction (data into the card). This thing is going to be clocked much higher than discrete cards, in addition to having direct access to the memory controller.

    I don't think many have even scratched the surface of what a PPU (Parallel Processing Unit) can do or how it can improve the quality of just about any application ... I think this is going to be Hott.

  • by hedwards ( 940851 ) on Friday June 04, 2010 @07:23AM (#32456494)
    It'll also be interesting to see how they manage to use this in tandem with a discrete card, as in preprocessing the data and assisting the discrete card to be more efficient.
  • by the_one(2) ( 1117139 ) on Friday June 04, 2010 @10:24AM (#32457968)

    It's not like we'll get more bang for our buck, we'll just get more floating point bangs, and fewer integer ones.

    You can accelerate integer operations as well on "new" GPUs. This means that for highly parallel, data-independent operations you will get a ton of bang for your buck, and without having to send data to the graphics memory first and then pull the results back.
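The kind of data-parallel integer work being described is the same independent operation applied across a large array. The sketch below uses NumPy on the CPU purely to illustrate the access pattern (the array name and the brightness-boost operation are made up for the example); on a GPU-class unit each element would map to a thread, and on a fused design there would be no PCIe round-trip to fetch the results back.

```python
# Illustrative data-parallel integer workload: per-element masks,
# shifts, and adds, with no element depending on any other, so
# every element could in principle be processed simultaneously.
import numpy as np

# Hypothetical packed-pixel array (0x00RRGGBB layout assumed).
pixels = np.arange(1_000_000, dtype=np.uint32)

red = (pixels >> 16) & 0xFF                       # extract red channel
boosted = np.minimum(red + 40, 255).astype(np.uint32)  # brighten, clamp

print(boosted[:4])  # → [40 40 40 40]
```

The same loop written element-by-element in scalar code would serialize a million independent operations; that independence is what a massively parallel unit exploits.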

  • by Rudeboy777 ( 214749 ) on Friday June 04, 2010 @11:03AM (#32458554)
    cost - yeah, I can see Intel willingly passing the savings...anyway, cpu + mobo combo hasn't got cheaper at all

    This is where Intel's monopolistic behaviour rears its ugly head. In the past, the GPU needed to be integrated on the motherboard. Now it's on the CPU, but Intel motherboard chipsets cost the same as previous generations. Seems like a terrific market opportunity for 3rd-party chipset vendors to make an offering (like the good old days when you could choose from VIA, Nvidia, SiS, Intel, ...)

    But wait: Intel no longer allows 3rd parties to produce chipsets for their CPUs, and keeps the profits from the artificially inflated chipset market to itself. Intel may have the performance crown, but it's reasons like this (and the OEM slush funds to lock AMD out of Dell and other vendors) that keep me from supporting "Chipzilla".
  • Re:Meh. (Score:3, Interesting)

    by ElectricTurtle ( 1171201 ) on Friday June 04, 2010 @11:08AM (#32458618)
    Your anecdote falls a little flat with me. At one point I sold motherboards. ASRocks came back in droves to be RMA'ed, MSIs largely didn't. I have built many systems on MSI boards, and none ever failed. Of course even good manufacturers produce some bad boards and even bad manufacturers produce some good boards, so congratulations, you won the lottery!
  • Re:vs Larrabee (Score:2, Interesting)

    by Anonymous Coward on Friday June 04, 2010 @11:29AM (#32458974)

    Uhmm, have any of you guys actually READ the tech specs for AMD's 12-core parts? They're ALL QUAD CHANNEL DDR3 if G34 socket. Given the pricing compared to Socket F, unless you had a fully equipped Socket F server (and didn't mind 8-core CPUs max), you'd have no reason to stay with Socket F, and you'd get just as much memory bandwidth as Intel's current top-of-the-line processors.

    So while Intel is currently giving more processing power per chip via hyperthreading, AMD is giving just as much memory bandwidth at 1/2 to 1/3 the cost. Go compare a 1567 Intel Xeon versus an AMD Opteron. You can have the whole mobo + CPU combo for about the same as a single Intel chip, which would get you 24 cores and a higher clock rate for less than you'd pay for 12 hyperthreaded cores via a 6-core Intel chip. And that's with the same amount of quad-channel memory bandwidth.

    Now where things get interesting is deciding whether a larger cache or a cheaper price/higher clock rate is more useful for your application. The Intel parts are available in 6- and 8-core versions with 18 and 24 MB of L3 cache, while the Opterons are 12 MB across the board. For many applications the performance penalty, if any, might be small, but for those that are cache-hungry you could see a 5-fold(?) increase in performance compared to having to hit main memory.

    But if it's multithreaded to begin with, you could also just throw a second server's worth of cores at it for the same price as the intel box :D

  • by Anonymous Coward on Friday June 04, 2010 @11:34AM (#32459046)
    Yeah, right.

    I'll believe that when I see it. For one thing, there's no way to expose that in any reasonable way to the OS. For another, that would mean the execution unit would have to answer to 2 contending schedulers...

    Color me skeptical.
  • Re:Meh. (Score:3, Interesting)

    by icebraining ( 1313345 ) on Friday June 04, 2010 @12:30PM (#32459768) Homepage

    32C? Is that considered high?

    I've played Call of Duty 4 on my cheap P4 in the summer. I don't know the temperatures inside, but it was 40C outside and I have no A/C. All with the stock cooler, too. The CPU is now seven years old and still works perfectly.

    In fact, the only thing that died in that cheap system was the power supply, thanks to some construction workers on another floor who connected their machines directly to the building's power without protection and caused a power surge.

  • by mosel-saar-ruwer ( 732341 ) on Friday June 04, 2010 @07:22PM (#32465228)
    In the old days, there was a physical chipset which sat between the GPU and the CPU.

    But in this architecture, there is no physical barrier - they're on the same silicon.

    Look for the bad guys to try to force the graphics drivers to sneak over and sniff the memory of the CPUs. I can imagine how they might be able to load some code in a pr0n movie that could tell some pointer in a GPU driver to point to cache addresses which [at least ostensibly] belong to a CPU, at which point they should be able to read the cache.

    And if they're lucky, their specially-crafted pr0n videos might even be able to WRITE to the CPU cache, at which point they can probably pwn the entire operating system.

    Hopefully AMD has put some thought into their implementation, and has some sort of hardware safeguards that force the GPU to always act as the "slave" of its masters [the CPUs], but, if not, then all Hades could break loose.

    [And Intel probably won't put nearly as much thought into their implementation as AMD did with theirs.]
