AMD Graphics Hardware

AMD's Fusion CPU + GPU Will Ship This Year

mr_sifter writes "Intel might have beaten AMD to the punch with a CPU featuring a built-in GPU, but it relied on a relatively crude process of simply packaging two separate dies together. AMD's long-discussed Fusion product integrates the two key components into one die, and the company is confident it will be out this year — earlier than had been expected."
  • Sup dawg (Score:3, Funny)

    by Anonymous Coward on Saturday May 15, 2010 @10:38PM (#32224438)

    Sup dawg. We herd you like processing units, so we put a processing unit in yo' processing unit so you can computer while you compute!

  • by Dragoniz3r ( 992309 ) on Saturday May 15, 2010 @10:48PM (#32224476)
    It doesn't really matter, any more than AMD's "proper" quad core mattered compared to Intel pasting two dual-core dies together. This is really just AMD getting beaten to the punch again and having to spin it in some positive way. It's great news that it will be out earlier than expected, but I think they would have been better off taking the less "beautiful" route and just throwing discrete dies into a single package, particularly as it has yet to be seen how big the market for this sort of thing is. More exciting to me is that AMD is ahead of schedule with this, so hopefully they'll be similarly ahead with their next architecture. I'm yearning for the day when AMD is back to being competitive on a clock-for-clock basis with Intel.
    • by sanman2 ( 928866 )
      I'd really love it if Fusion could give us ultra-powered iPad-killing tablet PCs, complete with multi-tasking/multi-window functionality, as well as 3D acceleration. But will it be low-powered enough?
    • by Anonymous Coward on Saturday May 15, 2010 @10:56PM (#32224534)

      Sure Intel got there first and sure Intel has been beating AMD on the CPU side, but...

      Intel graphics are shit. Absolute shit. AMD graphics are top notch on a discrete card and still much better than Intel on the low end.

      Maybe you should compare the component being integrated instead of the one that already gives most users more than they need.

      • by sayfawa ( 1099071 ) on Saturday May 15, 2010 @11:22PM (#32224684)
        Intel graphics are only shit for gamers who want maximum settings for recent games. For everything else, and even for casual gamers, they are fine. At this very moment I'm just taking a quick break from playing HL-2 on an i3's graphics. Resolution is fine, fps is fine, cowbell is maxed out. Go look at some YouTube videos to see how well the GMA 4500 (precursor to the current gen) does with Crysis.
        • Re: (Score:3, Insightful)

          by cyssero ( 1554429 )
          Should it be any accomplishment that a game released in November 2004 works on a latest-gen system? For that matter, my Radeon 9100 IGP (integrated) ran HL-2 'fine' back in 2004.
          • And was your Radeon 9100 free? 'Cause I was just looking for a processor. But in addition I got graphics that are good enough for every PC game I have. Which includes Portal, so 2007. And no, I'm not saying that that is awesome. But it's certainly not shit, either. And, as your post demonstrates, only gamers give a fuck.
            • Re: (Score:3, Informative)

              by sznupi ( 719324 )

              Is your CPU + motherboard combo cheaper than a typical combo from some other manufacturer that has notably higher performance and compatibility?

              With greater usage of GPUs for general computation, the point is that not only gamers "give a fuck" nowadays.

              PS. If something runs HL2, it can run Portal. As my old Radeon 8500 did, hence certainly also the parent poster's integrated 9100.

        • by sznupi ( 719324 )

          Intel GFX is shit for many games, especially older ones (considering the state of their drivers); they have problems with old 2D Direct-something (...2D?) games since the Vista drivers, FFS.

          At least they manage to run properly one of the most popular FPS games ever, lucky you...

          • DirectDraw. Microsoft is going to release Direct2D with IE9 for faster rendering, and DirectDraw is deprecated...

            • by sznupi ( 719324 )

              Deprecation which doesn't mean much if one wants to simply run many of the older games, using tech which should work in the OS and drivers right now. Older games, for which Intel gfx was supposed to be "fine"...

        • Re: (Score:3, Insightful)

          by TubeSteak ( 669689 )

          Intel graphics are only shit for gamers who want maximum settings for recent games.

          Having the "best" integrated graphics is like having the "best" lame horse.
          Yea, it's an achievement, but you still have a lame horse and everyone else has a car.

          • Re: (Score:2, Insightful)

            by sznupi ( 719324 )

            What are you talking about? On good current integrated graphics many recent games work quite well; mostly "flagship", bling-oriented titles have issues.

            "Lean car -> SUV" probably rings closer to home...

        • by amorsen ( 7485 )

          Intel graphics are only shit for gamers who want maximum settings for recent games.

          If only... I have a laptop with an "Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller" according to lspci. It has great trouble achieving more than 2fps in Civilization IV.

      • That depends on what you plan on using it for. I can run a composited desktop, Torchlight, and Civ 4 on a Core i3 (1920x1200). It supports H.264 decoding. It's low power. And if it gets too slow in a few years I can buy a $50 card to upgrade. So for me it's fine.
    • Re: (Score:3, Insightful)

      by markass530 ( 870112 )
    I would say it will matter, or at least it might. Can't really write it off until you've seen it in the wild. AMD's more elegant initial dual-core solution was infinitely better than Intel's "let's slap 2 space heaters together and hope for the best".
    • I agree, this really strikes me as the same thing that happened with the memory controllers/FSB a few years ago. They move all of it on die, then claim it's this great huge thing... but in the end it really doesn't make all that much of a difference in the grand scheme of things. Obviously it's a good move, and one that Intel is going to want to make... EVENTUALLY. But what's the real world benefit of this over the Intel solution? IS there any benefit that the average user buying these chips is ever goin
    • by sznupi ( 719324 ) on Saturday May 15, 2010 @10:58PM (#32224552) Homepage

      Actually, the situation might be reversed this time; sure, that Intel quadcores weren't "real" didn't matter much, because their underlying architecture was very good.

      In contrast, Intel GFX is shit compared to AMD. The former can usually do all "daily" things (at least for now; who knows if it will keep up with more and more general usage of GPUs...); the latter, even in integrated form, is surprisingly sensible even for most games, excluding some of the latest ones.

      Plus, if AMD throws this GPU on one die, it will probably be manufactured at GlobalFoundries = probably a smaller process and much more speed.

    • Re: (Score:3, Insightful)

      by evilviper ( 135110 )

      This is really just AMD getting beaten to the punch again, and having to try to spin it in some positive way.

      I'll have to call you an idiot for falling for Intel's marketing and believing that, just because they can legally call it by the same name, it remotely resembles what AMD is doing.

    • by MemoryDragon ( 544441 ) on Sunday May 16, 2010 @04:49AM (#32226178)

      Except that Intel has yet to deliver an integrated graphics solution which deserves the name. AMD has the advantage that they can bundle an ATI core into their CPUs, which finally means decent graphics.

    • by Anonymous Coward

      If AMD are simply tossing a GPU and CPU on the same die because they can... then agreed.

      If AMD are taking advantage of the much-reduced distance between the CPU and GPU units to harness some kind of interoperability for increased performance or reduced power usage over the Intel "glue 'em together" approach... then maybe this could be a different thing altogether.

    • Re: (Score:3, Insightful)

      by Hurricane78 ( 562437 )

      I’m sorry, but I still support everyone who does things properly instead of “quick and dirty”.
      What Intel did, is the hardware equivalent of spaghetti coding.
      They might be “first”, but it will bite them in the ass later.
      Reminds one of those “FIRST” trolls, doesn’t it?

      • The Diff (Score:4, Insightful)

        by fast turtle ( 1118037 ) on Sunday May 16, 2010 @08:27AM (#32227008) Journal

        There are two sides to this coin and Intel's is pretty neat. By not having the GPU integrated into the CPU die, Intel can improve the CPU/GPU without having to redesign the entire chip. For example, any power management improvements can be moved into the design as soon as they're ready. Another advantage for them is the fact that the CPU and GPU dies are actually independent and can be manufactured using whatever process makes the most sense to them.

        AMD's design offers a major boost to overall CPU performance simply through the fact that the integration is far deeper than Intel's. From what I've read, Fusion ties the stream processors (FPU) directly to the CPU and should offer a major boost in all math ops of the CPU, and I expect that it will finally compete with Intel's latest CPUs in regards to FPU operations.

        • by sznupi ( 719324 )

          I don't think the first side is such a big deal. If it were, you'd see various blocks of CPUs done as separate dies now - and there are already quite a lot of quite different structures on one die: ALU, FPU, SIMD, L1, L2, memory controller, PCIe controller...

          "What ever process makes the most sense to them", sure. But I don't see how reusing old fabs benefits that much us. For some time that was one of the reasons why Intel chipsets consumed a lot of power.

    • Quad-cores had a lot to do with performance, while this technical innovation is more to do with cost. By using a single die they have the advantage of a more efficient manufacturing process, and might be able to seize those "critical point" markets that bit faster.

      • by teg ( 97890 )

        Quad-cores had a lot to do with performance, while this technical innovation is more to do with cost.

        Not only cost, but also system design [arstechnica.com]. Also, the cost side has more aspects than the one you present: two smaller chips will have higher yields than one large one, and Intel is ahead of AMD when it comes to chip manufacturing.
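        For intuition on that yield point, here is a toy sketch using a simple Poisson defect model (yield = e^(-defect density x area)); the defect density and die areas below are made-up illustrative numbers, not anything published by AMD, Intel or their foundries.

          /* Toy Poisson defect-yield model -- illustrative numbers only. */
          #include <math.h>
          #include <stdio.h>

          int main(void) {
              double defects_per_cm2 = 0.5;   /* assumed defect density */
              double big_die_cm2     = 3.0;   /* one combined CPU+GPU die (assumed) */
              double small_die_cm2   = 1.5;   /* each die when the design is split in two */

              /* Poisson model: yield = exp(-defect_density * die_area) */
              double y_big   = exp(-defects_per_cm2 * big_die_cm2);
              double y_small = exp(-defects_per_cm2 * small_die_cm2);

              printf("Single 3.0 cm^2 die yield: %.1f%%\n", 100.0 * y_big);   /* ~22.3% */
              printf("Each 1.5 cm^2 die yield:   %.1f%%\n", 100.0 * y_small); /* ~47.2% */
              /* Dies are tested before packaging, so only known-good small dies get
                 paired; the usable fraction of silicon is y_small, not y_small squared. */
              return 0;
          }

        With these toy numbers the split dies waste far less silicon, which is the packaging-cost argument above; real defect densities and Murphy-style yield models change the exact figures but not the direction.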

    • by DrSkwid ( 118965 )

      It's not about graphics. It's about that on-die vector pipeline. Massive SIMD with the throughput to match.

      CUDA GPUs blow SSE out of the water by an order of magnitude for some classes of computation. SSE can do 2 x 64-bit ops at once, pipelined with loads and pre-fetches.

      http://www.drdobbs.com/high-performance-computing/224400246 [drdobbs.com]
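      For anyone who hasn't seen what "2 x 64-bit ops at once" looks like in code, here is a minimal SSE2 intrinsics sketch; it is generic x86 C, nothing specific to Fusion, and the values are arbitrary.

        /* Minimal SSE2 sketch: one ADDPD instruction adds two pairs of 64-bit doubles. */
        #include <emmintrin.h>   /* SSE2 intrinsics */
        #include <stdio.h>

        int main(void) {
            __m128d a = _mm_set_pd(1.0, 2.0);   /* packs {2.0, 1.0} */
            __m128d b = _mm_set_pd(3.0, 4.0);   /* packs {4.0, 3.0} */
            __m128d c = _mm_add_pd(a, b);       /* both 64-bit adds happen in one op */

            double out[2];
            _mm_storeu_pd(out, c);
            printf("%f %f\n", out[0], out[1]);  /* prints 6.000000 4.000000 */
            return 0;
        }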

  • With IE 9 headed toward GPU assisted acceleration, these types of "hybrid" chips will make things even faster. Since AMD's main end user is a Windows user, and IE 9 will probably be shipping later this year, these two may be made for each other.

    Of course every other aspect of the system will speed up as well, but I wonder how this type of CPU/GPU package will work with aftermarket video cards? If you want a better video card for gaming, will the siamese-twin GPU bow to the additional video card?
    • With IE 9 headed toward GPU assisted acceleration, these types of "hybrid" chips will make things even faster.

      Even faster than current generation discrete GPUs? I think not.

      • by WrongSizeGlass ( 838941 ) on Saturday May 15, 2010 @11:03PM (#32224590)

        Even faster than current generation discrete GPUs? I think not.

        They'll move data inside the chip instead of having to send it off to the internal bus, they'll have access to L2 cache (and maybe even L1 cache), they'll be running in lock-step with the CPU, etc, etc. These have distinct advantages over video cards.

        • Re: (Score:3, Interesting)

          by GigaplexNZ ( 1233886 )
          That'll certainly increase bandwidth which will help outperform current integrated graphics and really low end discrete chips, but I severely doubt it will be enough to compensate for the raw number of transistors in the mid to high end discrete chips. An ATI 5670 graphics chip has just about as many transistors as a quad core Intel Core i7.
          • Re: (Score:3, Insightful)

            by alvinrod ( 889928 )
            And if Moore's law continues to hold, within the next four years it won't be an issue to put both of those chips on the same die. Hell, that may even be the budget option.
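            Back-of-the-envelope version of that argument, using the classic two-year doubling period and rough 2010-era transistor counts (both are ballpark assumptions, not exact specs):

              /* Rough Moore's-law projection -- illustrative figures only. */
              #include <stdio.h>

              int main(void) {
                  double i7_transistors  = 0.77e9;   /* quad-core Core i7, approx. */
                  double gpu_transistors = 0.63e9;   /* ATI 5670-class GPU, approx. */
                  double budget_today    = 1.0e9;    /* roughly one such chip per die today */
                  int years = 4, doubling_period = 2;

                  double budget_future = budget_today;
                  for (int i = 0; i < years / doubling_period; i++)
                      budget_future *= 2;            /* doubles every ~2 years */

                  printf("CPU+GPU on one die needs: %.2f billion transistors\n",
                         (i7_transistors + gpu_transistors) / 1e9);
                  printf("Projected die budget in %d years: %.2f billion\n",
                         years, budget_future / 1e9);
                  return 0;
              }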
        • One distinct disadvantage... HEAT! Even with all the die shrinks.
          No. 1 advantage: forcing Intel to produce decent graphics.

          • Re: (Score:3, Informative)

            by Hal_Porter ( 817932 )

            Actually Intel had a radical way to handle this - Larrabee. It was going to be 48 in-order processors on a die with the Larrabee New Instructions. There was a Siggraph paper with very impressive scalability figures [intel.com] for a bunch of games running DirectX in software - they captured the DirectX calls from a machine with a conventional CPU and GPU and injected them into a Larrabee simulator.

            This was going to be a very interesting machine - you'd have a machine with good but not great gaming performance and killer se

            • Re: (Score:3, Informative)

              by Rockoon ( 1252108 )

              Of course there are problems with this sort of approach. Most current games are not very well threaded - they have a small number of threads that will run poorly on an in-order CPU. So if the only chip you had was a Larrabee, and it was both a CPU and a GPU, the GPU part would be well balanced across multiple cores; the CPU part likely would not. You have to wonder about memory bandwidth too.

              I believe that it was in fact memory bandwidth which killed Larrabee. A GPU's memory controller is nothing like a CPU's memory controller, so trying to make a many-core CPU behave like a GPU while still also behaving like a CPU just doesn't work very well.

              Modern, well-performing GPUs require the memory controller to be specifically tailored to filling large cache blocks. Latency isn't that big of an issue. The GPU is likely to need the entire cache line, so latency is sacrificed for more bandwidth. The latenc

              • Re: (Score:2, Informative)

                by Hal_Porter ( 817932 )

                It seems like the caching issues could be fixed with prefetch instructions that can fetch bigger chunks. Which it apparently has.

                Still just fetching instructions for 48 cores is a huge amount of bandwidth.

                http://perilsofparallel.blogspot.com/2010/01/problem-with-larrabee.html [blogspot.com]

                Let's say there are 100 processors (high end of numbers I've heard). 4 threads / processor. 2 GHz (he said the clock was measured in GHz).

                That's 100 cores x 4 threads x 2 GHz x 2 bytes = 1600 GB/s.

                Let's put that number in perspective:

                * It's moving more than the entire contents of a 1.5 TB disk drive every second.

                * It's more than 100 times the bandwidth of Intel's shiny new QuickPath system interconnect (12.8 GB/s per direction).

                * It would soak up the output of 33 banks of DDR3-SDRAM, all three channels, 192 bits per channel, 48 GB/s aggregate per bank.

                In other words, it's impossible.

                So 48 cores would need 16 banks of DDR3-SDRAM.
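                The quoted estimate is easy to re-run; the sketch below just repeats the blog post's instruction-fetch arithmetic for both core counts (the 2 bytes/instruction and 48 GB/s-per-DDR3-bank figures come from the quote, not from measurement):

                  /* Re-run the instruction-fetch bandwidth estimate from the quoted post. */
                  #include <stdio.h>

                  static double fetch_gb_per_s(int cores, int threads, double ghz, double bytes) {
                      return cores * threads * ghz * bytes;   /* GHz * bytes/instruction gives GB/s */
                  }

                  int main(void) {
                      double bank_gb_per_s = 48.0;  /* one 3-channel DDR3 bank, per the quote */

                      double high = fetch_gb_per_s(100, 4, 2.0, 2.0);  /* 1600 GB/s */
                      double low  = fetch_gb_per_s(48,  4, 2.0, 2.0);  /*  768 GB/s */

                      printf("100 cores: %4.0f GB/s = %2.0f DDR3 banks\n", high, high / bank_gb_per_s);
                      printf(" 48 cores: %4.0f GB/s = %2.0f DDR3 banks\n", low,  low  / bank_gb_per_s);
                      return 0;
                  }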

        • Re: (Score:2, Interesting)

          by pantherace ( 165052 )

          I've been watching this for a while, and as far as I can tell, discrete graphics cards will still be significantly faster for most things. The reason being memory bandwidth. Sure, cache is faster, for smaller datasets. Unfortunately, let's assume you have 10MB of cache: your average screen will take up half of that (call it 5MB for a 32-bit 1440x900 image), and that's not counting the CPU's cache usage if it's shared. So you can't cache many textures, geometry or similar, after which it drops off to the
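          The parent's framebuffer figure checks out; a quick sketch of the arithmetic (the resolution, colour depth and 10MB cache size are the ones assumed in the comment above):

            /* Framebuffer-vs-cache arithmetic from the parent comment. */
            #include <stdio.h>

            int main(void) {
                int width = 1440, height = 900, bytes_per_pixel = 4;  /* 32-bit colour */
                double cache_mb = 10.0;                               /* assumed shared cache */

                double frame_mb = (double)width * height * bytes_per_pixel / (1024 * 1024);
                printf("One frame: %.1f MB of a %.0f MB cache (%.0f%%)\n",
                       frame_mb, cache_mb, 100.0 * frame_mb / cache_mb);  /* ~4.9 MB, ~49% */
                return 0;
            }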

      • Even faster than current generation discrete GPUs? I think not.

        Not for most things, but for some specific GPGPU type stuff where you want to shuffle data between the CPU and the GPU, yes. Much, much faster. For exactly the same reasons that we no longer have off-chip FPUs. A modern separately socketed FPU could have massive performance. It could have its own cooling system, so you could use a ton of power just on the FPU. OTOH, you would still need to get data back and forth to the main CPU, so it mak

    • Seeing as AMD is in both markets, I'm sure they will have no issue working alongside discrete graphics.

    • Comment removed based on user account deletion
  • by Gadget_Guy ( 627405 ) * on Saturday May 15, 2010 @11:00PM (#32224568)

    Calling Intel's offerings crude sounds like it is quoting from AMD's press release. It may be crude, but it works and was quick and cheap to implement. But does it have any disadvantages? Certainly the quote from the article doesn't seem terribly confident that the integrated offering is going to be any better:

    We hope so. We've just got the silicon in and we're going through the paces right now - the engineers are taking a look at it. But it should have power and performance advantages.

    Dissing a product for some technical reason that may not have any real performance penalties? That's FUD!

    • Re: (Score:3, Interesting)

      by sznupi ( 719324 )

      ...But does it have any disadvantages?...

      With Intel's offerings the thing is that they don't really have any advantages (except perhaps making 3rd party chipsets less attractive for OEMs, but that's a plus only for Intel). They didn't end up cheaper in any way (ok, a bit too soon to tell...but do you really have some hope?). They are certainly a bit faster - but still too slow; and anyway it doesn't matter much with the state of Intel drivers.

      AMD integrated GFX already has very clear advantages. This new variant, integrated with the CPU, while ce

      • I thought integration in the same package allowed (presumably for electrical reasons - very small physical distance, not limited by number of pins you can fit) a faster interconnect between the two dies, so there actually is (potentially) some advantage to doing it, even though it's not proper dual core.

        • by sznupi ( 719324 )

          Which is even more true for everything integrated on one die...

          • Absolutely! Just worth bearing in mind that Intel's part-way solution of simply sharing a package is better in more ways than just being physically smaller. In that sense it's potentially a step up from the state of the art, even though it's a step behind a fully integrated solution.

        • For the interconnect, they still need I/O buffers to deal with the impedance of those (relatively) huge wires, and clock distribution still can't be perfect. Both increase the delay a lot, but I guess modern processors don't rely on perfect clock distribution anyway.
      • With Intel's offerings the thing is that they don't really have any advantages

        What about the large reduction in power requirements for their supporting chipset? This was always the weakest link for Intel. Their CPUs are quite low-powered, but their chipsets ruin any power savings. The all-in-one CPUs now allow for substantial overall power savings, meaning Intel is the king when it comes to performance per Watt.

        • by sznupi ( 719324 )

          Hm, indeed - with the long-present "Intel has great power consumption" meme I forgot about their chipsets. But that still doesn't give them the title of "king when it comes to performance per Watt", not when you look at the overall performance of the combo (meaning also 3D performance).

    • Calling Intel's offerings crude sounds like it is quoting from AMD's press release. It may be crude, but it works and was quick and cheap to implement. But does it have any disadvantages?

      Of course it does. Having an additional interconnect between CPU and GPU means not only that cost is higher, but that performance is decreased. You have to have an interface that can be carried through such an interconnect, which is just another opportunity for noise; this interface will likely be slower than various core internals. With both on one die, you can integrate the two systems much more tightly, and cheaper too.

      • Actually, Intel's CPUs with built-in GPU are infinitely faster than AMD's in that you can buy one of the Intel chips now. Coming up with technical quibbles is meaningless without any real benchmarks to show the differences, which even AMD can't provide.

  • I, for one, welcome our new small furry not-yet-house-trained overlords!

    CPUGPU, just step around it...

  • by rastoboy29 ( 807168 ) on Saturday May 15, 2010 @11:28PM (#32224716) Homepage
    I hope so, Intel is far too dominant right now.
  • With the bulk of processing power for both CPU and Graphics being concentrated in a single die, I can only imagine how hot it's going to get!
  • by Boycott BMG ( 1147385 ) on Saturday May 15, 2010 @11:36PM (#32224766) Journal
    AMD Fusion was meant to compete with Larrabee, which is not released. The Intel package with two separate dies is not interesting. The point of these products is to give the programmer access to the vast FP power of a graphics chip, so they can do, for instance, a large-scale FFT and IFFT faster than a normal CPU. If this proves more powerful than Nvidia's latest Fermi (GTX 480, I believe), then expect a lot of shops to switch. Right now my workplace has an Nvidia Fermi on backorder, so it looks like this is a big market.
    • More like, NV can't get the yields up, I suspect.

    • by cmaxx ( 7796 )

      It so won't be more powerful than Fermi. But it might be available in industrial quantities, instead of cottage-industry amounts.

  • I want my CPU to be mostly GPU. Just enough CPU to run the apps. They don't need a lot of general purpose computation, but the graphics should be really fast. And a lot of IO among devices, especially among network, RAM and display.

    • by keeboo ( 724305 )
      Yeah, that surely matters a lot for corporate users.
      • 1. I don't care. And there are many millions, billions of people like me.

        2. Most corporate computing also uses "netbook" type functionality that doesn't use a big CPU, but needs a bigger GPU. That's why there are CPUs like the Atom.

        3. Sarcasm is just obnoxious when you're wrong.

  • Advanced features (Score:5, Interesting)

    by wirelessbuzzers ( 552513 ) on Saturday May 15, 2010 @11:53PM (#32224868)

    In addition to the CPGPU or whatever they're calling it, Fusion should finally catch up to (and exceed) Intel in terms of niftilicious vector instructions. For example, it should have crypto and binary-polynomial acceleration, bit-fiddling (XOP), FMA and AVX instructions. As an implementor, I'm looking forward to having new toys to play with.
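    As a taste of those toys, here is a minimal AVX + FMA intrinsics sketch. These are the publicly documented instruction-set extensions; exactly which variants the first Fusion parts will expose is an assumption at this point, so treat the compiler flags (e.g. gcc -mavx -mfma) as illustrative.

      /* Minimal AVX/FMA sketch: four fused multiply-adds in a single instruction. */
      #include <immintrin.h>
      #include <stdio.h>

      int main(void) {
          double a[4] = {1.0, 2.0, 3.0, 4.0};
          double b[4] = {5.0, 6.0, 7.0, 8.0};
          double c[4] = {0.5, 0.5, 0.5, 0.5};
          double r[4];

          __m256d va = _mm256_loadu_pd(a);
          __m256d vb = _mm256_loadu_pd(b);
          __m256d vc = _mm256_loadu_pd(c);
          __m256d vr = _mm256_fmadd_pd(va, vb, vc);  /* r[i] = a[i]*b[i] + c[i], 4 lanes at once */

          _mm256_storeu_pd(r, vr);
          printf("%g %g %g %g\n", r[0], r[1], r[2], r[3]);  /* 5.5 12.5 21.5 32.5 */
          return 0;
      }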

  • Does this mean CUDA support in every AMD "CPU" ?

    • Re: (Score:2, Informative)

      by Anonymous Coward
      No.
      CUDA is Nvidia.
      ATI has Stream.
  • Call me when they can fit 9 inches of graphics card into one of these CPUs.
  • future upgrading? (Score:5, Interesting)

    by LBt1st ( 709520 ) on Sunday May 16, 2010 @12:41AM (#32225118)

    This is great for mobile devices and laptops but I don't think I want my CPU and GPU combined in my gaming rig. I generally upgrade my video card twice as often as my CPU. If this becomes the norm then eventually I'll either get bottlenecked or have to waste money on something I don't really need. Being forced to buy two things when I only need one is not my idea of a good thing.

    • Re: (Score:2, Insightful)

      by FishTankX ( 1539069 )

      The graphics core will likely be small, add an inconsequential number of transistors, be disable-able, and/or CrossFire-able with the main graphics card.

      However, the place I see this getting HUGE gains is if the on-board GPU is capable of doing physics calculations. Having a basic physics co-processor on every AMD CPU flooding out of the gates will do massive good for the implementation of physics in games, and can probably offload a lot of other calculations in the OS. On-board video encode acceleration anyo

    • Depending on how the economics shake out, it can pretty easily be a good thing, or at least an indifferent one.

      Generally, it is cheaper to make a given feature standard than it is to make it optional (obviously, making it unavailable is cheaper still). Standard means you can solder it right onto the mainboard or include it on the die, which means no separate inventory channel to track and greater economies of scale. For these reasons, once a given feature achieves some level of popularity, it shifts from
    • by CAIMLAS ( 41445 )

      I generally upgrade my video card twice as often as my CPU. If this becomes the norm then eventually I'll either get bottlenecked or have to waste money on something I don't really need.

      That depends: do you buy Intel or AMD processors, currently?

      Because if you buy Intel processors, I can see your point (and the reason behind not frequently upgrading your CPU): CPU upgrades are costly if the socket changes with every upgrade, requiring a new board at the same time. With AMD processors, however, they've retained the same basic socket for quite some time (to negligible performance detriment and the ability to upgrade components largely independently). This is Good Design on their part.

      If they

      • Sound / FireWire / USB 3.0 still need the PCI / PCI-e bus, and mid-range and high-end / multi-display cards are not dying. Most on-board video can only do 1-2 DVI / HDMI outs anyway, with most at 1 DVI / HDMI + 1 VGA, and VGA is poor for big screens and does not work with HDCP. PCI-e will not die, as it is also needed for TV cards (if this new CableCARD PC push works out then you may see many more systems with them); on-board sound / SATA (some boards) / USB 3.0 / network use the PCI / PCI-e bus as well. 4 tuner

    • by Skaven04 ( 449705 ) on Sunday May 16, 2010 @09:48AM (#32227446) Homepage

      You've got to stop thinking of it as a GPU and think of it more like a co-processor.

      First of all, AMD isn't going to force you to buy a built-in GPU on all of their processors. Obviously the enthusiast market is going to want huge 300W discrete graphics rather than the 10-15W integrated ones. There will continue to be discrete CPUs, just like there will always continue to be discrete GPUs.

      But this is a brilliant move on AMD's part. They start with a chunk of the market that is already willing to accept this: system builders, motherboard makers and OEMs will be thrilled to be able to build even smaller, simpler, more power efficient systems for the low end. This technology will make laptops and netbooks more powerful and have better battery life by using less energy for the graphics component.

      Now look further ahead, when AMD begins removing some of the barriers that currently make programming the GPU for general-purpose operations (GPGPU) such a pain. For example, right now you have to go through a driver in the OS and copy input data over the PCI bus into the frame buffer, do the processing on the GPU, then copy the results back over the PCI bus into RAM. For a lot of things, this is simply too much overhead for the GPU to be much help.
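      To make that overhead concrete, here is a stripped-down OpenCL round trip showing the two bus copies just described; the kernel launch in the middle and all error checking are omitted, so read it as a sketch of the pattern rather than a complete GPGPU program.

        /* Host -> GPU memory -> host: the round trip the parent describes. */
        #include <CL/cl.h>
        #include <stdio.h>

        int main(void) {
            cl_platform_id platform;
            cl_device_id device;
            clGetPlatformIDs(1, &platform, NULL);
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

            cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
            cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

            float in[4096] = {0}, out[4096];

            /* 1. Copy the input across the PCI bus into the card's frame buffer. */
            cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, sizeof(in), NULL, NULL);
            clEnqueueWriteBuffer(queue, buf, CL_TRUE, 0, sizeof(in), in, 0, NULL, NULL);

            /* 2. ...the kernel would run on the GPU here... */

            /* 3. Copy the results back across the bus into system RAM. */
            clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, sizeof(out), out, 0, NULL, NULL);
            printf("round trip done, out[0] = %f\n", out[0]);

            clReleaseMemObject(buf);
            clReleaseCommandQueue(queue);
            clReleaseContext(ctx);
            return 0;
        }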

      But AMD can change that by establishing a standard for incorporating a GPU into the CPU. Eventually, imagine an AMD CPU that has the GPU integrated so tightly with the CPU that the CPU and GPU share a cache-coherent view of the main system memory, and even share a massive L3 cache. What if the GPU can use the same x86 virtual addresses that the CPU does? Then...all we have to have is a compiler option that enables the use of the GPU, and even tiny operations can be accelerated by the built-in GPU.

      In this future world, there's still a place for discrete graphics -- that's not going away for your gaming rig. But imagine the potential of having a TFLOP-scale coprocessor as a fundamental part of a future sub-50W CPU. Your laptop would be able to do things like real-time video stabilization, transcoding, physics modeling, and image processing, all without breaking the bank (or the power budget).

      But before we can get to this place, AMD has to start somewhere. The first step is proving that a GPU can coexist with a CPU on the same silicon, and that such an arrangement can be built and sold at a profit. The rest is just evolution.

    • Yeah, but I bet you like your integrated NIC, sound, and math coprocessor, for that matter.

      If there are good technical reasons for it, then it's good.  And there are. :-)
  • But will quad-core and six-core chips also carry a graphics chip? And how long before the dual quad-core motherboards hit the streets?
    Frankly, we are on the edge of a serious improvement in computers.

    • Indeed, but do most people really need more than 4 cores at this point? Software for the home user still hasn't really caught up with quad core, so it'd be a bit silly to put out a dual quad-core board for that market. OTOH that'd be just dandy for the server market.
  • Apple angle (Score:2, Interesting)

    Worth noting is that Apple has invested rather heavily in technology to allow programmers to use the GPU in Mac OS X. And they were recently rumored to have met with high-ranking persons from AMD. Seems only logical that this type of chip could find its way into some of the Apple gear.

    Question is, of course, if it would be power-efficient enough for laptops, where space is an issue...
  • I look forward to seeing what AMD's new architecture brings. The interesting part is not integrating a GPU into the same space as a CPU, but creating one chip that can do more exotic types of calculations than either chip could alone and making it available in every system. I'm also envisioning "GPU" units executing CPU instructions when they would otherwise be idle, and vice versa, basically so everything available could be put to use.

  • Meanwhile, where are those damned quad core laptop processors AMD promised? I've been waiting freaking AGES to buy a laptop with one.

  • by Thagg ( 9904 ) <thadbeier@gmail.com> on Sunday May 16, 2010 @10:47AM (#32227786) Journal

    If AMD puts a competitive GPU onto the CPU die (comparable to their current high-end graphics boards) then this is a really big deal. Perhaps the biggest issue with GPGPU programming is the fact that the graphics unit is at the end of a fairly narrow pipe with limited memory, and getting data to the board and back is a performance bottleneck and a pain in the butt for a programmer.

    Putting the GPU on the die could mean massive bandwidth from the CPU to the hundreds of streaming processors on the GPU. It also strongly implies that the GPU will have access directly to the same memory as the CPU. Finally, it would mean that if you have a Fusion-based renderfarm then you have GPUs on the renderfarm.

    This is exciting!
