AMD's Fusion CPU + GPU Will Ship This Year

mr_sifter writes "Intel might have beaten AMD to the punch with a CPU featuring a built-in GPU, but it relied on a relatively crude process of simply packaging two separate dies together. AMD's long-discussed Fusion product integrates the two key components into one die, and the company is confident it will be out this year — earlier than had been expected."


  • by WrongSizeGlass ( 838941 ) on Saturday May 15, 2010 @11:51PM (#32224490)
    With IE 9 headed toward GPU-assisted acceleration, these types of "hybrid" chips will make things even faster. Since AMD's main end user is a Windows user, and IE 9 will probably be shipping later this year, these two may be made for each other.

    Of course every other aspect of the system will speed up as well, but I wonder how this type of CPU/GPU package will work with aftermarket video cards. If you want a better video card for gaming, will the siamese-twin GPU bow to the additional video card?
  • by sznupi ( 719324 ) on Saturday May 15, 2010 @11:58PM (#32224552) Homepage

    Actually, the situation might be reversed this time; sure, the fact that Intel's quad-cores weren't "real" didn't matter much, because their underlying architecture was very good.

    In contrast, Intel GFX is shit compared to AMD's. The former can usually do all the "daily" things (at least for now; who knows if it will keep up with more and more general usage of GPUs...); the latter, even in integrated form, is surprisingly sensible even for most games, excluding some of the latest ones.

    Plus, if AMD puts this GPU on one die, it will probably be manufactured at GlobalFoundries = probably a smaller process and much more speed.

  • by Gadget_Guy ( 627405 ) * on Sunday May 16, 2010 @12:00AM (#32224568)

    Calling Intel's offerings crude sounds like it is quoting from AMD's press release. It may be crude, but it works and was quick and cheap to implement. But does it have any disadvantages? Certainly the quote from the article doesn't seem terribly confident that the integrated offering is going to be any better:

    We hope so. We've just got the silicon in and we're going through the paces right now - the engineers are taking a look at it. But it should have power and performance advantages.

    Dissing a product for some technical reason that may not have any real performance penalties? That's FUD!

  • by sznupi ( 719324 ) on Sunday May 16, 2010 @12:39AM (#32224780) Homepage

    ...But does it have any disadvantages?...

    The thing with Intel's offerings is that they don't really have any advantages (except perhaps making 3rd-party chipsets less attractive for OEMs, but that's a plus only for Intel). They didn't end up cheaper in any way (OK, a bit too soon to tell... but do you really have much hope?). They are certainly a bit faster - but still too slow; and anyway it doesn't matter much with the state of Intel drivers.

    AMD's integrated GFX already has very clear advantages. This new variant, integrated with the CPU, while certainly simpler than standalone parts, might make up for it with a much higher clock and a wide data bus. Ending up quite attractive.

  • by GigaplexNZ ( 1233886 ) on Sunday May 16, 2010 @12:49AM (#32224842)
    That'll certainly increase bandwidth, which will help it outperform current integrated graphics and really low-end discrete chips, but I seriously doubt it will be enough to compensate for the raw number of transistors in the mid- to high-end discrete chips. An ATI 5670 graphics chip has just about as many transistors as a quad-core Intel Core i7.
  • Advanced features (Score:5, Interesting)

    by wirelessbuzzers ( 552513 ) on Sunday May 16, 2010 @12:53AM (#32224868)

    In addition to the CPGPU or whatever they're calling it, Fusion should finally catch up to (and exceed) Intel in terms of niftilicious vector instructions. For example, it should have crypto and binary-polynomial acceleration, bit-fiddling (XOP), FMA and AVX instructions. As an implementor, I'm looking forward to having new toys to play with (see the sketch below).
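    For a concrete taste, here is a minimal FMA/AVX sketch using the standard x86 intrinsics from <immintrin.h>. It is an illustration only: it assumes a compiler and CPU with AVX and FMA3 enabled (e.g. gcc -mavx -mfma), and AMD's parts may ship a different FMA flavour (FMA4 has its own intrinsics), so treat the exact intrinsic names as an assumption rather than a statement about Fusion itself.

        /* Minimal sketch: one fused multiply-add across 8 floats at once.
         * Assumes AVX + FMA3 support; compile with e.g. gcc -O2 -mavx -mfma. */
        #include <immintrin.h>
        #include <stdio.h>

        int main(void)
        {
            __m256 a = _mm256_set1_ps(2.0f);   /* 8 lanes of 2.0 */
            __m256 b = _mm256_set1_ps(3.0f);   /* 8 lanes of 3.0 */
            __m256 c = _mm256_set1_ps(1.0f);   /* 8 lanes of 1.0 */

            /* d = a*b + c, fused: one instruction, one rounding step */
            __m256 d = _mm256_fmadd_ps(a, b, c);

            float out[8];
            _mm256_storeu_ps(out, d);
            printf("%f\n", out[0]);            /* prints 7.000000 */
            return 0;
        }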

  • future upgrading? (Score:5, Interesting)

    by LBt1st ( 709520 ) on Sunday May 16, 2010 @01:41AM (#32225118)

    This is great for mobile devices and laptops but I don't think I want my CPU and GPU combined in my gaming rig. I generally upgrade my video card twice as often as my CPU. If this becomes the norm then eventually I'll either get bottlenecked or have to waste money on something I don't really need. Being forced to buy two things when I only need one is not my idea of a good thing.

  • by crazycheetah ( 1416001 ) on Sunday May 16, 2010 @02:02AM (#32225210)

    While this is more for gamers (and other more GPU-intensive tasks; if GPGPU use keeps increasing--if it is increasing?--it could become more of a factor for more people), AMD had hinted at the ability to use the integrated GPU in the CPU alongside a dedicated graphics card, using whatever the hell they call that (I know nVidia's is SLI, only because I just peeked at the box for my current card). So it's something power users could actually be quite happy to get their hands on, if it works well. And as for non-power users, we can get this and not worry about graphics on the mobo or a dedicated card. Sounds like a good deal to me. And that beats anything Intel has to offer with this same idea (not that Intel doesn't win in other areas).

  • Apple angle (Score:2, Interesting)

    by Per Cederberg ( 680752 ) on Sunday May 16, 2010 @05:34AM (#32226126)
    Worth noting is that Apple has invested rather heavily in technology to let programmers use the GPU in Mac OS X. And it was recently rumored to have met with high-ranking people from AMD. It seems only logical that this type of chip could find its way into some of the Apple gear.

    The question, of course, is whether it would be power-efficient enough for laptops, where space is an issue...
  • by strstr ( 539330 ) on Sunday May 16, 2010 @05:59AM (#32226224)

    I look forward to seeing what AMD's new architecture brings. The interesting part isn't so much integrating a GPU into the same space as a CPU, but creating one chip that can do more exotic types of calculations than either chip could alone, and making it available in every system. I'm also envisioning "GPU" instructions being executed on units that would normally run CPU instructions when those are idle, and vice versa, so that basically everything available could be put to use.

  • by bhtooefr ( 649901 ) <[gro.rfeoothb] [ta] [rfeoothb]> on Sunday May 16, 2010 @06:19AM (#32226284) Homepage Journal

    Arguably, the "off-chip FPU" nowadays IS a GPU - hence all the GPGPU stuff.

  • by Anonymous Coward on Sunday May 16, 2010 @09:35AM (#32227052)

    Intel has always had worse engineering and better execution than AMD.

    Instead of Intel "getting there first" perhaps it's Intel executing first on AMD's better engineering; like AMD64. Huh?

    You incessant Intel posters are the Fox News of Slashdot.

  • by pantherace ( 165052 ) on Sunday May 16, 2010 @01:59PM (#32228666)

    I've been watching this for a while, and as far as I can tell, discrete graphics cards will still be significantly faster for most things. The reason is memory bandwidth. Sure, cache is faster, for smaller datasets. Unfortunately, let's assume you have 10MB of cache: your average screen will take up half of that (call it 5MB for a 32-bit 1440x900 image), and that's not counting the CPU's cache usage if it's shared. So you can't cache many textures, much geometry or similar, after which it drops off to the figures below:

    DDR3-1066: 8533 MB/s per channel (x2 or x3), up to ~25 GB/s (~8600 GT)
    DDR3-1333: 10667 MB/s per channel, up to ~32 GB/s (~8600 GT)

    Both well below the 103 GB/s of an 8800 Ultra.

    Compare that with a few current-generation cards:
    GeForce GT 220: 25 GB/s
    GeForce GTX 260: 111 GB/s
    GeForce GTX 280: 141 GB/s
    GeForce GTX 480: 177 GB/s

    There will be some advantages to having it on die, but for anything requiring lots of memory bandwidth, a discrete card is likely to absolutely trounce Fusion, especially when you consider that the memory bandwidth of the DDR3 quoted above is shared with the processor (there's a quick arithmetic check of these figures after this comment). (Considering all current CPUs, AMD's are only dual channel, or x2, not the x3 as above; that may change, and probably should, if they introduce a new socket - which they probably need to anyway, simply to support the graphics outputs.) That's a lot of the reason integrated graphics using main memory have always been behind anything with its own memory. Even the really cheap Nvidia cards (don't remember which they were, but they were around the time PCI Express came out) that were advertised as 64MB (of system memory) still had at least 16MB of their own. That was for two reasons: latency (PCI Express has a lot of bandwidth, but local memory is faster) and the framebuffer.

    Fusion strikes me as AMD repeating Nvidia's experiment, probably with the result of beating the heck out of current integrated chips, but being at best comparable to 'midrange' (x6____) graphics cards. If it has that performance, and has good drivers, it will be a resounding success for them. It won't cannibalize the highly profitable high end, but will make good gaming even cheaper on AMD. Every AMD Fusion-based computer would be capable of good-enough gaming, or 3D work. Bonus to them if, when a separate dedicated card is handling graphics, the GPU part also speeds up the CPU. (I think that's another intention, though not the primary focus, but like everyone else I'll have to wait and see.)
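    As a quick sanity check of the framebuffer and DDR3 figures in the comment above, the arithmetic can be reproduced in a few lines of C (a sketch only; the 1440x900 resolution, 64-bit channel width and transfer rates are the numbers quoted in the comment, not Fusion specifications):

        #include <stdio.h>

        int main(void)
        {
            /* Framebuffer: 1440x900 pixels at 32 bits (4 bytes) per pixel */
            double fb_mb = 1440.0 * 900.0 * 4.0 / (1024.0 * 1024.0);

            /* DDR3: one 64-bit (8-byte) channel, MB/s ~= rated MT/s * 8 */
            double ddr3_1066 = 1066.67 * 8.0;   /* ~8533 MB/s per channel  */
            double ddr3_1333 = 1333.33 * 8.0;   /* ~10667 MB/s per channel */

            printf("framebuffer:           %.1f MB\n", fb_mb);                    /* ~4.9 MB, "call it 5MB" */
            printf("DDR3-1066, 3 channels: %.1f GB/s\n", ddr3_1066 * 3 / 1000.0); /* ~25.6 GB/s */
            printf("DDR3-1333, 3 channels: %.1f GB/s\n", ddr3_1333 * 3 / 1000.0); /* ~32.0 GB/s */
            return 0;
        }

    Even the triple-channel ceiling of roughly 32 GB/s is well short of the discrete-card figures listed above, which is the parent comment's point.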
