
AMD's Fusion Processor Combines CPU and GPU

Posted by timothy
from the hence-the-name dept.
ElectricSteve writes "At Computex 2010 AMD gave the first public demonstration of its Fusion processor, which combines the Central Processing Unit (CPU) and Graphics Processing Unit (GPU) on a single chip. The AMD Fusion family of Accelerated Processing Units (APUs) not only adds another acronym to the computer lexicon, but ushers in what AMD says is a significant shift in processor architecture and capabilities. Many of the improvements stem from eliminating the chip-to-chip linkage that adds latency to memory operations and consumes power — moving electrons across a chip takes less energy than moving those same electrons between two chips. The co-location of all key elements on one chip also allows a holistic approach to power management of the APU: various parts of the chip can be powered up or down depending on workload."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by sco08y (615665) on Friday June 04, 2010 @05:20AM (#32455954)

    “Hundreds of millions of us now create, interact with, and share intensely visual digital content,” said Rick Bergman, senior vice president and general manager, AMD Product Group. “This explosion in multimedia requires new applications and new ways to manage and manipulate data."

    So people watch video and play video games, and it's still kinda pokey at times. We're way past diminishing marginal returns on improving graphical interfaces.

    I bring it up because if you're trying to promote a technology that actually uses a computer to compute, you know, work with actual data, you are perpetually sidetracked by trying to make it look pretty to get any attention.

    Case in point: on a project to track trends in financial data, several contractors were competing. One had software that tried to glom everything into a node-and-vector graph, which looked really pretty but didn't actually do anything to analyze the data.

    But to managers, all they see is that those guys have pretty graphs in their demos and all we had was our research into the actual data... all those boring details.

  • by FuckingNickName (1362625) on Friday June 04, 2010 @05:41AM (#32456076) Journal

    Just like my Core i3 sitting about 20 inches to the left, then. Yes, I know they're incorporating a better GPU, but they're touting too much as new.

  • Re:vs Larrabee (Score:1, Insightful)

    by Anonymous Coward on Friday June 04, 2010 @05:44AM (#32456094)

    Change plans?

    No, they need to get their rasterising software written and make the chip quite a bit more efficient... as per their plans as-is.

  • by Deliveranc3 (629997) <deliverance AT level4 DOT org> on Friday June 04, 2010 @05:47AM (#32456112) Journal
    |"Hundreds of millions of us now create, interact with, and share intensely visual digital content," said Rick
    |Bergman, senior vice president and general manager, AMD Product Group. "This explosion in multimedia requires
    |new applications and new ways to manage and manipulate data."

    |So people watch video and play video games, and it's still kinda pokey at times. We're way past diminishing marginal returns on improving graphical interfaces.


    Well sure YOU DO, but your Gran still has a 5200 with "Turbo memory" (actually, that's only 3 years old; she probably has worse). This will be the equivalent of putting audio on the motherboard: a low baseline quality, but done at no cost.

    |I bring it up because if you're trying to promote a technology that actually uses a computer to compute, you know, work with actual data, you are perpetually sidetracked by trying to make it look pretty to get any attention.

    Bloat is indeed a big problem; programs are exploding to GIGABYTE sizes, which is insane. OTOH, Linux's reuse of libraries seems not to have worked. There is too little abstraction of the data, so each coder writes their own linked list, red-black tree, or whatever algorithm instead of just using the methods from the OS.
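    The reuse point can be sketched quickly: standard libraries already ship the structures mentioned above, so hand-rolling a linked list or red-black tree is rarely necessary (a Python sketch; the names and data here are illustrative, not from the comment):

```python
import bisect
import heapq

# Instead of a hand-rolled sorted structure (e.g., a red-black tree),
# reuse the standard library's sorted-insertion helper.
sorted_ids = []
for uid in [42, 7, 99, 7]:
    bisect.insort(sorted_ids, uid)  # keeps the list sorted on each insert

# Instead of a hand-rolled priority queue, reuse heapq.
tasks = []
heapq.heappush(tasks, (2, "render"))
heapq.heappush(tasks, (1, "parse"))
first = heapq.heappop(tasks)  # the lowest priority value comes out first
```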

    |Case in point: on a project to track trends in financial data, several contractors were competing. One had software that tried to glom everything into a node-and-vector graph, which looked really pretty but didn't actually do anything to analyze the data.

    Sounds like a case of "not wanting to throw the baby out with the bathwater." If they have someone of moderate intelligence on staff, that person can find a way to pull useful information out of junk data. He/she will resist removing seemingly useless data, because they occasionally use it and routinely ignore it. A pretty presentation can also be very important in terms of usability: remember, you have to look at the underlying code, but the user has to look at the GUI, often for hours a day.

    |But to managers, all they see is that those guys have pretty graphs in their demos and all we had was our research into the actual data... all those boring details.

    I can't comment on the quality of your management, but once again, don't underestimate ease of use, or even perceived ease of use (consider how long you'll keep trying to learn a new tool while frustrated; the perception that something is as easy as possible is a huge boon... think iCrap).
    Anyway, back to Fusion: this is EXACTLY what Dell wants. A bit lower power, less heat, a significantly lower price, and a baseline for their users to be able to run Vista/7 (7 review: better than Vista, don't switch from XP). So while it's true that this chip won't be dominant under ANY metric, and would therefore seem to have no customer base, its attractiveness to retail is such that they will shove it down consumers' throats and AMD will reap the rewards.

    I'm curious about these things in small form factors; now that SD/MicroSD cards have given us nano-sized storage, we can get back to finger-sized computers that attach to a TV.

    SFF Fusion for me!
  • by Rogerborg (306625) on Friday June 04, 2010 @05:57AM (#32456168) Homepage

    Let's party like it's 1995! Again! [wikipedia.org]

    Slightly less cynically, isn't this (in like-for-like terms) trading a general-purpose CPU core for a specialised GPU one? It's not like we'll get more bang for our buck; we'll just get more floating-point bangs and fewer integer ones.

  • Re:vs Larrabee (Score:5, Insightful)

    by Rockoon (1252108) on Friday June 04, 2010 @06:38AM (#32456326)

    |AMD's product is just a desperate attempt at trying to be relevant. They need to show they have a product competing with the big boys in all the right channels.

    AMD is plenty relevant. It is Intel that scrambled to put out a 6-core desktop processor, which was so poorly planned that the cheap version is $1000. Meanwhile, nVidia is desperately trying to get people locked into their CUDA API because their video cards just don't bang the performance drum like they used to.

    AMD and Intel have different visions. AMD is clearly focusing on getting more cores on chip for more raw parallel performance (12-core CPUs in 4-chip configs are owning the top-end server market... brought to you by AMD), while Intel is clearly trying to maximize memory bandwidth to peak out raw single-threaded performance (triple-channel RAM and larger caches are owning the software rendering and gaming markets).

    Normal people are within the $50 to $200 CPU range, and at those price points, solutions from both camps perform about the same. On the video card front, you just can't beat AMD right now. Best price/performance ratio on top of best performance period.

  • Re:heat (Score:5, Insightful)

    by sznupi (719324) on Friday June 04, 2010 @06:59AM (#32456406) Homepage

    AMD chipsets with integrated graphics were already quite good on power consumption, drawing a dozen or so watts. Considering AMD puts out quad-cores with sub-100W TDPs, Fusion shouldn't be that big a problem.

  • by MikeFM (12491) on Friday June 04, 2010 @07:07AM (#32456428) Homepage Journal
    Sure, you can offload GPU work, as long as the entire process is handled by the server and just the result is streamed to the client. I've seen this done over the Internet as well as on a LAN. It'd be different if it were trying to use the client's CPU and memory to drive the GPU.

    They were specifically pointing out the benefit of having the GPU and CPU on the same chip, which is quite a bit different from a mobo-integrated solution. It probably isn't as powerful as a quad-core Xeon and a $500 video card, but the question is how well it is set up to handle many different GPU tasks. I'd at least assume it's quite a bit faster for these types of tasks than a standard CPU, and I wonder how well they can scale the technology to a better CPU and GPU.

    I'm not sure I agree it's a niche market. I'd say it's more of a market poised to explode when the right products make it attainable. For virtualization, it's more important that it can handle several unrelated tasks at a reasonable speed than that it can handle a single task at a high speed. If each CPU core also had a paired GPU, it'd open up possibilities. Bulk, power consumption, and heat are often as big issues for server farms as for laptops, which is another reason why an integrated GPU might be of interest.

    Grid computing goes hand in hand with virtualization. Again, it comes down to how well these can work in parallel. Being able to fit a number of CPU and GPU cores on a single physical chip could be very beneficial, I think.
  • Re:vs Larrabee (Score:4, Insightful)

    by sznupi (719324) on Friday June 04, 2010 @07:39AM (#32456566) Homepage

    |Intel is also the leader in performance/watt, due to a complex power-delivery architecture and better processor production facilities.

    As long as you look only at raw CPU performance and power usage. Add GFX perf into consideration and...

    (Plus, that would be quite a recent development for Intel; their power consumption numbers weren't that great by themselves when you also added in the previous generation's chipsets.)

  • Re:vs Larrabee (Score:3, Insightful)

    by Rockoon (1252108) on Friday June 04, 2010 @07:41AM (#32456578)

    The 6-core Intel processor is the Extreme Edition (it has always been introduced at $1000).

    if ((not realistic for server market) && (can't sell for less than $1000 without undercutting our other offerings))
    {
        setlabel("Extreme Edition");
    }

    Where is Intel's budget 6-core design? Is it that they refuse to make budget 6-core CPUs, or that they can't make budget 6-core CPUs?

    Either way, the proof is in the pudding. They are not targeting the highly parallel market, either by choice (the "ignoring that market" scenario) or by mistake (the "caught with pants down" scenario).

  • Re:vs Larrabee (Score:2, Insightful)

    by TheGryphon (1826062) on Friday June 04, 2010 @08:03AM (#32456712)
    Hopefully this has good effects for cooling, too. Maybe geniuses will stop designing boards with two hot components separated by 4-6" on a board cooled by one copper pipe/fan assembly... cleverly heating everything along the whole length of the pipe.
  • Re:heat (Score:1, Insightful)

    by Anonymous Coward on Friday June 04, 2010 @08:16AM (#32456774)
    Because I said some swear words. I tried to resist, but it's just too satisfying.
  • Meh. (Score:2, Insightful)

    by argStyopa (232550) on Friday June 04, 2010 @08:21AM (#32456822) Journal

    Sounds like a non-advancement to me.

    "Look, we can build a VCR *into* the TV, so they're in one unit!"

    Yeah, so when either breaks, neither is usable.
    Putting more points of failure into a device just doesn't sound like a great idea.

    In the last 4 computers I've built/had, they've gone through at least 6-7 graphics cards and 5 processors. I can't remember a single one where they both failed simultaneously.

    Now, if this tech will reduce the likelihood of CPU/GPU failures (which, IMO, are generally due to heat or less frequently power issues) somehow, then great. But I have a gut reaction against taking two really hot, power-intensive components and jamming them into even closer proximity.

    Finally, I'm probably in the minority, but I prefer being able to take my components à la carte. There were many times in the past 25 years when I couldn't afford the best of all components TODAY, so I built a system with a very high-end mobo and CPU but kept my old soundboard, RAM, etc. until I could individually afford to replace those components with peer-quality stuff.

  • What about memory? (Score:2, Insightful)

    by ElusiveJoe (1716808) on Friday June 04, 2010 @08:39AM (#32456978)

    |This thing is going to smoke current CPUs in things like physics operations, without the need for anything like CUDA and without the performance limit of the PCIe bus.

    Ummm, but a video card has its own super-fast memory (and a lot of it), and it uses direct access to system RAM, while this little thing will have to share memory access and caches with the CPU.

    |without the need for anything like CUDA

    I dare say that this is totally false.

  • Re:Meh. (Score:4, Insightful)

    by mcelrath (8027) on Friday June 04, 2010 @08:56AM (#32457100) Homepage

    Sounds like you need a new power supply, or a surge suppressor, or a power conditioner, or an air conditioner.

    You shouldn't see that many failures. Are you overclocking like mad? Silicon should last essentially forever compared to other components in the system, as long as you keep it properly cooled and don't spike the voltage. Removing mechanical connectors by putting things on one die should mean fewer failure modes. A fanless system on a chip using a RAM disk should last essentially forever.

    A single chip with N transistors does not have N failure modes. It's essentially tested and will not develop a failure by the time you receive it. A system with N mechanically connected components has a failure rate of N*(probability of failure of one component), and it's always the connectors or the cheap components like power supplies that fail.
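    The arithmetic above is the usual series-reliability approximation: a system of N independent components, each failing with small probability p, fails with probability 1 - (1 - p)^N, which is roughly N*p (an illustrative calculation; the numbers below are assumptions, not from the comment):

```python
# Series reliability: the system fails if any one of N components fails.
p = 0.01  # assumed per-component failure probability over some period
N = 10    # assumed number of mechanically connected components

exact = 1 - (1 - p) ** N  # probability that at least one component fails
approx = N * p            # first-order approximation for small p
```

    For small p the two agree closely (here roughly 0.096 vs 0.10), which is why adding connectors and cheap components scales the failure rate almost linearly.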

  • Re:vs Larrabee (Score:5, Insightful)

    by mcelrath (8027) on Friday June 04, 2010 @09:06AM (#32457190) Homepage

    AMD and Intel need to have a contest in the shittiest-driver category. I have one of each. Each revision of xserver-xorg-video-intel bricks my laptop in a new and exciting way, and AMD's fglrx is a steaming pile of rendering errors, inconsistent performance, and crashes.

    On the other hand, both Intel [intellinuxgraphics.org] and AMD [x.org] have released specs and participate in open source development. So in the long run, either one is a better choice than NVidia. So I'll continue to complain about them and submit bug reports. It's the open source way.

  • Re:Relevant? (Score:2, Insightful)

    by Hal_Porter (817932) on Friday June 04, 2010 @10:03AM (#32457742)

    Back in the days of the Athlon 64 vs. the Pentium 4 and Itanium, AMD was ahead. Still, since Core 2, I'd say Intel has been doing better. That being said, Larrabee seems to be dead, and I still think the idea has legs. Hopefully AMD will be to Larrabee what AMD64 was to IA-64: a more pragmatic version of the idea that ends up working better.

  • by Rockoon (1252108) on Friday June 04, 2010 @11:05AM (#32458586)

    |while this little thing will have to share memory access and caches with the CPU.

    Sharing the cache is not necessarily a bad thing. It's nice when the data the CPU now needs is already sitting in L1 because the GPU just computed it, or vice versa. That was, in fact, the poster's point.

  • by Technomancer (51963) on Friday June 04, 2010 @02:33PM (#32461502)

    As opposed to how graphics drivers are a security issue now?

    Graphics cards can DMA memory, and the GPU can access pretty much any physical memory in the system (as long as it is visible via the PCI bus). There is no simple fix for that, but there are certain security features already available on graphics cards. Go read the radeon Linux kernel sources; look at the command buffer parser (linux/drivers/gpu/drm/radeon/r600_cs.c, for instance) that verifies that the graphics card only accesses memory that belongs to it.
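    The checker idea can be sketched in miniature (a toy illustration only; the real r600_cs.c logic is far more involved, and the ranges and command format below are made up): every address a command buffer references is validated against the ranges the submitting client owns before the buffer reaches the hardware.

```python
# Toy command-stream validator: reject any command that touches memory
# outside the ranges assigned to the submitting client.
ALLOWED_RANGES = [(0x1000, 0x2000), (0x8000, 0x9000)]  # (start, end), illustrative

def address_allowed(addr, size):
    """True if [addr, addr + size) lies entirely inside one allowed range."""
    return any(start <= addr and addr + size <= end
               for start, end in ALLOWED_RANGES)

def validate_command_buffer(commands):
    """commands is a list of (opcode, addr, size); reject the buffer on any bad access."""
    return all(address_allowed(addr, size) for _op, addr, size in commands)
```

    Here `validate_command_buffer([("WRITE", 0x1100, 0x100)])` passes, while a command referencing an address like 0x3000 is rejected before submission.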

    Also, there was a driver exploit in signed Windows graphics drivers that allowed loading unsigned code into the Windows kernel.
