AMD's Fusion CPU + GPU Will Ship This Year
mr_sifter writes "Intel might have beaten AMD to the punch with a CPU featuring a built-in GPU, but it relied on a relatively crude process of simply packaging two separate dies together. AMD's long-discussed Fusion product integrates the two key components into one die, and the company is confident it will be out this year — earlier than had been expected."
Sup dawg (Score:3, Funny)
Sup dawg. We herd you like processing units, so we put a processing unit in yo' processing unit so you can computer while you compute!
OK, they're integrated "properly", but... (Score:4, Insightful)
Tablet Processor? (Score:1, Offtopic)
Re: (Score:1)
you mean the tegra 250?
http://www.nvidia.com/object/tegra_250.html [nvidia.com]
Re: (Score:2)
Tegra can't run Windows. AMD Fusion can.
Re: (Score:2)
You mean Windows can't run on a Tegra? It is an OS you know, designed to Operate a System, yo... 'n stuff...
Re:OK, they're integrated "properly", but... (Score:5, Insightful)
Sure Intel got there first and sure Intel has been beating AMD on the CPU side, but...
Intel graphics are shit. Absolute shit. AMD graphics are top notch on a discrete card and still much better than Intel on the low end.
Maybe you should compare the component being integrated instead of the one that already gives most users more than they need.
Re:OK, they're integrated "properly", but... (Score:4, Informative)
Re: (Score:3, Insightful)
Re: (Score:2)
Re: (Score:3, Informative)
Is your CPU + motherboard combo cheaper than a typical combo from some other manufacturer that has notably higher performance and compatibility?
With greater usage of GPUs for general computation, the point is that it's not only gamers who "give a fuck" nowadays.
PS. If something runs HL2, it can run Portal. My old Radeon 8500 did, so the parent poster's integrated 9100 certainly can too.
Re: (Score:2)
Really? Haven't you noticed how basically all of the browsers have been adding GPU acceleration lately? The browser alone probably accounts for most of a typical user's usage nowadays.
Final versions should be out around the time of Fusion, at the least.
Re: (Score:2)
Intel GFX is shit for many games, especially older ones (considering the state of their drivers); they've had problems with old 2D Direct-something (Direct2D? DirectDraw?) games since the Vista drivers, FFS.
At least they manage to run one of the most popular FPS games ever properly; lucky you...
Re: (Score:2)
DirectDraw. Microsoft is going to release Direct2D with IE9 for faster rendering, and DirectDraw is deprecated...
Re: (Score:2)
Deprecation doesn't mean much if one simply wants to run many of the older games, using tech that should work in the OS and drivers right now. Older games, for which Intel gfx was supposed to be "fine"...
Re: (Score:3, Insightful)
Intel graphics are only shit for gamers who want maximum settings for recent games.
Having the "best" integrated graphics is like having the "best" lame horse.
Yea, it's an achievement, but you still have a lame horse and everyone else has a car.
Re: (Score:2, Insightful)
What are you talking about? On good current integrated graphics, many recent games work quite well; it's mostly the "flagship", bling-oriented titles that have issues.
"Lean car -> SUV" probably rings closer to home...
Re: (Score:2)
Intel graphics are only shit for gamers who want maximum settings for recent games.
If only... I have a laptop with an "Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller" according to lspci. It has great trouble achieving more than 2fps in Civilization IV.
Re: (Score:2)
For Intel, GMA stands for "Graphics My Ass!"
Re: (Score:3, Insightful)
How come Intel sells more GPUs than ATI and NVIDIA combined?
Because they sell them to people who've moved out of their parent's basement...
Re: (Score:3, Insightful)
and, therefore, can't afford anything better than the graphics equivalent of Mac'n'Cheese now that Mom and Dad are no longer paying the bills.
Re: (Score:2)
Or they sell it to people that don't want to worry about outstanding graphics right now. I bought an i3-530 and I'm going to get a graphics card later because my needs are currently met.
Re: (Score:2)
Re:OK, they're integrated "properly", but... (Score:5, Informative)
The page for the GMA 950 [intel.com] even has this hilarious tidbit:
"With a powerful 400MHz core and DirectX* 9 3D hardware acceleration, Intel® GMA 950 graphics provides performance on par with mainstream graphics card solutions that would typically cost significantly more."
Whoever wrote that line must have been borrowing Steve's Reality Distortion Field.
Re: (Score:2)
Nah, that's just standard marketspeak. If this had been Apple's doubleplus goodspeak it would sound like this.
Re: (Score:3, Insightful)
Re: (Score:2)
Re:OK, they're integrated "properly", but... (Score:5, Informative)
I recently went from an older AMD dual core to a Phenom II. With the exact same board and hardware, my memory performance increased by about 20% thanks to the independent memory controllers.
AMD also makes strikingly capable on-board graphics, so this will likely rule out the need for on-board or discrete video in the average person's computer. Cheaper/simpler motherboards and hopefully better integration of GPGPU functionality for massively parallel computational tasks.
Re:OK, they're integrated "properly", but... (Score:5, Insightful)
Lower power consumption, making AMD chips more competitive in notebooks - perhaps even netbooks.
Re: (Score:2)
Although Atoms already have the GPU on-die nowadays...
Re: (Score:2)
Nah, just the N4x0, D4x0, and D5x0 - the Z5xx is part of the first-gen, and has a separate northbridge.
Re: (Score:2, Interesting)
While this is more for gamers (and other more GPU-intensive tasks; if GPGPU use keeps increasing--if it is increasing?--it could become more of a factor for more people), AMD had hinted at the ability to use the integrated GPU in the CPU alongside a dedicated graphics card, using whatever the hell they call that (I know nVidia's is SLI, only because I just peeked at the box for my current card). So, it's something power users could actually be quite happy to get their hands on, if it works well. And as for no
Re:OK, they're integrated "properly", but... (Score:5, Informative)
You're behind the times.
http://en.wikipedia.org/wiki/ATI_Hybrid_Graphics [wikipedia.org]
http://en.wikipedia.org/wiki/Scalable_Link_Interface#Hybrid_SLI [wikipedia.org]
Re:OK, they're integrated "properly", but... (Score:5, Interesting)
Actually, the situation might be reversed this time; sure, the fact that Intel quad-cores weren't "real" didn't matter much, because their underlying architecture was very good.
In contrast, Intel GFX is shit compared to AMD's. The former can usually do all the "daily" things (at least for now; who knows if it will keep up with more and more general usage of GPUs...); the latter, even in integrated form, is surprisingly sensible even for most games, excluding some of the latest ones.
Plus, if AMD throws this GPU onto one die, it will probably be manufactured at Global Foundries = probably a smaller process and much more speed.
Re: (Score:3, Insightful)
I'll have to call you an idiot for falling for Intel's marketing and believing that, just because they can legally call it by the same name, it remotely resembles what AMD is doing.
Re:OK, they're integrated "properly", but... (Score:5, Insightful)
Except that Intel has yet to deliver an integrated graphics solution which deserves the name. AMD has the advantage that they can bundle an ATI core into their CPUs, which finally means decent integrated graphics.
Re: (Score:2)
If AMD are simply tossing a GPU and CPU on the same die because they can... then agreed.
If AMD are taking advantage of the much-reduced distance between the CPU and GPU units to harness some kind of interoperability for increased performance or reduced power usage over Intel's "glue 'em together" approach... then maybe this could be a different thing altogether.
Re: (Score:3, Insightful)
I’m sorry, but I still support everyone who does things properly instead of “quick and dirty”.
What Intel did, is the hardware equivalent of spaghetti coding.
They might be “first”, but it will bite them in the ass later.
Reminds one of those “FIRST” trolls, doesn’t it?
The Diff (Score:4, Insightful)
There are two sides to this coin and Intel's is pretty neat. By not having the GPU integrated into the CPU die, Intel can improve the CPU or GPU without having to redesign the entire chip. For example, any power management improvements can be moved into the design as soon as they're ready. Another advantage for them is the fact that the CPU and GPU dies are actually independent and can be manufactured using whatever process makes the most sense to them.
AMD's design offers a major boost to overall CPU performance simply through the fact that the integration is far deeper than Intel's. From what I've read, Fusion ties the stream processors (FPUs) directly to the CPU and should offer a major boost in all math ops, and I expect that it will finally compete with Intel's latest CPUs in regards to FPU operations.
Re: (Score:2)
I don't think the first point is such a big deal. You don't see, because of it, the various blocks of a CPU done as separate dies; and there are already quite a lot of quite different structures there - ALU, FPU, SIMD, L1, L2, memory controller, PCIe controller...
"Whatever process makes the most sense to them", sure. But I don't see how reusing old fabs benefits us that much. For some time that was one of the reasons why Intel chipsets consumed a lot of power.
Re: (Score:2)
Quad-cores had a lot to do with performance, while this technical innovation is more to do with cost. By using a single die they get the advantage of a more efficient manufacturing process, and might be able to seize those "critical point" markets that bit faster.
Re: (Score:2)
Quad-cores had a lot to do with performance, while this technical innovation is more to do with cost.
Not only cost, but also system design [arstechnica.com]. Also, the cost argument has more sides than the one you present: two smaller chips will have better yields than one large one, and Intel is ahead of AMD when it comes to chip manufacturing.
Re: (Score:2)
It's not about graphics. It's about that on-die vector pipeline. Massive SIMD with the throughput to match.
CUDA GPUs blow SSE out of the water by an order of magnitude for some classes of computation. SSE can do 2 x 64-bit ops at once, pipelined with loads and pre-fetches.
http://www.drdobbs.com/high-performance-computing/224400246 [drdobbs.com]
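[Editor's note: for the curious, here's a minimal sketch of the "SSE can do 2 x 64-bit ops at once" claim in plain C with SSE2 intrinsics; the function and array names are made up for illustration. A GPU applies the same idea across hundreds of lanes instead of two.]

#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdio.h>

/* Add two arrays of doubles, two lanes (2 x 64-bit) per instruction. */
static void add_pd(const double *a, const double *b, double *out, size_t n)
{
    for (size_t i = 0; i + 2 <= n; i += 2) {
        __m128d va = _mm_loadu_pd(a + i);            /* load 2 doubles */
        __m128d vb = _mm_loadu_pd(b + i);
        _mm_storeu_pd(out + i, _mm_add_pd(va, vb));  /* 2 adds at once */
    }
}

int main(void)
{
    double a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];
    add_pd(a, b, c, 4);
    printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);  /* prints: 11 22 33 44 */
    return 0;
}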
This Is Good For IE 9 (Score:2, Interesting)
Of course every other aspect of the system will speed up as well, but I wonder how this type of CPU/GPU package will work with aftermarket video cards? If you want a better video card for gaming, will the siamese-twin GPU bow to the additional video card?
Re: (Score:2)
With IE 9 headed toward GPU assisted acceleration, these types of "hybrid" chips will make things even faster.
Even faster than current generation discrete GPUs? I think not.
Re:This Is Good For IE 9 (Score:5, Insightful)
Even faster than current generation discrete GPUs? I think not.
They'll move data inside the chip instead of having to send it off over the internal bus, they'll have access to the L2 cache (and maybe even the L1 cache), they'll be running in lock-step with the CPU, etc, etc. These are distinct advantages over video cards.
Re: (Score:3, Interesting)
Re: (Score:3, Insightful)
Re:This Is Good For everyone (Score:1)
One distinct disadvantage... HEAT! Even with all the die shrinks.
No. 1 advantage: forcing Intel to produce decent graphics.
Re: (Score:3, Informative)
Actually Intel had a radical way to handle this - Larrabee. It was going to be 48 in-order processors on a die with the Larrabee New Instructions. There was a Siggraph paper with very impressive scalability figures [intel.com] for a bunch of games running DirectX in software - they captured the DirectX calls from a machine with a conventional CPU and GPU and injected them into a Larrabee simulator.
This was going to be a very interesting machine - you'd have a machine with good but not great gaming performance and killer se
Re: (Score:3, Informative)
Of course there are problems with this sort of approach. Most current games are not very well threaded - they have a small number of threads that will run poorly on an in-order CPU. So if the only chip you had was a Larrabee, and it was both the CPU and the GPU, the GPU part would be well balanced across multiple cores; the CPU part would likely not be. You have to wonder about memory bandwidth too.
I believe that it was in fact memory bandwidth which killed Larrabee. A GPU's memory controller is nothing like a CPU's memory controller, so trying to make a many-core CPU behave like a GPU while still also behaving like a CPU just doesn't work very well.
Modern, well-performing GPUs require the memory controller to be specifically tailored to filling large cache blocks. Latency isn't that big of an issue: the GPU is likely to need the entire cache line, so latency is sacrificed for more bandwidth. The latenc
Re: (Score:2, Informative)
It seems like the caching issues could be fixed with prefetch instructions that can fetch bigger chunks - which it apparently has.
Still, just fetching instructions for 48 cores is a huge amount of bandwidth.
http://perilsofparallel.blogspot.com/2010/01/problem-with-larrabee.html [blogspot.com]
Let's say there are 100 processors (high end of numbers I've heard). 4 threads / processor. 2 GHz (he said the clock was measured in GHz).
That's 100 cores x 4 threads x 2 GHz x 2 bytes = 1600 GB/s.
Let's put that number in perspective:
* It's moving more than the entire contents of a 1.5 TB disk drive every second.
* It's more than 100 times the bandwidth of Intel's shiny new QuickPath system interconnect (12.8 GB/s per direction).
* It would soak up the output of 33 banks of DDR3-SDRAM, all three channels, 192 bits per channel, 48 GB/s aggregate per bank.
In other words, it's impossible.
So 48 cores need 16 banks of DDR3-SDRAM.
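[Editor's note: spelling out the arithmetic behind those numbers - a sketch; the 2-bytes-per-thread-per-cycle instruction-fetch figure is the blog post's assumption, not a measurement.]

#include <stdio.h>

int main(void)
{
    /* Assumptions quoted above: in-order cores, 4 threads each, 2 GHz,
       ~2 bytes of instruction fetch per thread per cycle. */
    double threads = 4, ghz = 2.0, bytes_per_cycle = 2.0;
    double bank_gbps = 48.0;   /* one 3-channel DDR3 bank, per the quote */

    double fetch100 = 100 * threads * ghz * bytes_per_cycle;  /* GB/s */
    double fetch48  =  48 * threads * ghz * bytes_per_cycle;  /* GB/s */

    printf("100 cores: %4.0f GB/s of instruction fetch\n", fetch100);   /* 1600 */
    printf(" 48 cores: %4.0f GB/s = %.0f DDR3 banks at %.0f GB/s\n",
           fetch48, fetch48 / bank_gbps, bank_gbps);                    /* 768, 16 */
    return 0;
}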
Re: (Score:2, Interesting)
I've been watching this for a while, and as far as I can tell, discrete graphics cards will still be significantly faster for most things. The reason being memory bandwidth. Sure, cache is faster - for smaller datasets. Unfortunately, let's assume you have 10MB of cache; your average screen will take up half of that (call it 5MB for a 32-bit 1440x900 image), and that's not counting the CPU's cache usage if it's shared. So you can't cache many textures, geometry or similar, after which it drops off to the
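[Editor's note: a quick check of that framebuffer estimate - a sketch; the 1440x900, 32-bit and 10MB-cache figures are the ones assumed above.]

#include <stdio.h>

int main(void)
{
    const double w = 1440, h = 900, bytes_per_pixel = 4;    /* 32-bit color */
    double frame_mb = w * h * bytes_per_pixel / (1024 * 1024);
    printf("one 1440x900 frame: %.1f MB\n", frame_mb);      /* ~4.9 MB */
    /* ...which is roughly half of the hypothetical 10 MB shared cache. */
    return 0;
}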
Re: (Score:2)
Not for most things, but for some specific GPGPU-type stuff where you want to shuffle data between the CPU and the GPU, yes. Much, much faster. For exactly the same reasons that we no longer have off-chip FPUs. A modern, separately socketed FPU could have massive performance. It could have its own cooling system, so you could use a ton of power just on the FPU. OTOH, you would still need to get data back and forth to the main CPU, so it mak
Re:This Is Good For IE 9 (Score:5, Interesting)
Arguably, the "off-chip FPU" nowadays IS a GPU - hence all the GPGPU stuff.
Re: (Score:2)
Seeing as AMD is in both markets, I'm sure they will have no issue working alongside discrete graphics.
Re: (Score:2)
Re: (Score:2)
Seems a bit rich to call it crude (Score:3, Interesting)
Calling Intel's offerings crude sounds like it is quoting from AMD's press release. It may be crude, but it works and was quick and cheap to implement. But does it have any disadvantages? Certainly the quote from the article doesn't seem terribly confident that the integrated offering is going to be any better:
We hope so. We've just got the silicon in and we're going through the paces right now - the engineers are taking a look at it. But it should have power and performance advantages.
Dissing a product for some technical reason that may not have any real performance penalties? That's FUD!
Re: (Score:3, Interesting)
...But does it have any disadvantages?...
With Intel's offerings, the thing is that they don't really have any advantages (except perhaps making 3rd-party chipsets less attractive for OEMs, but that's a plus only for Intel). They didn't end up cheaper in any way (OK, a bit too soon to tell... but do you really have some hope?). They are certainly a bit faster - but still too slow; and anyway it doesn't matter much given the state of Intel drivers.
AMD integrated GFX already has very clear advantages. This new variant, integrated with the CPU, while ce
Re: (Score:2)
I thought integration in the same package allowed (presumably for electrical reasons - very small physical distance, not limited by number of pins you can fit) a faster interconnect between the two dies, so there actually is (potentially) some advantage to doing it, even though it's not proper dual core.
Re: (Score:2)
Which is even more true for everything integrated on one die...
Re: (Score:2)
Absolutely! Just worth bearing in mind that Intel's part-way solution of simply sharing a package is better in more ways than just being physically smaller. In that sense it's potentially a step up from the state of the art, even though it's a step behind a fully integrated solution.
Re: (Score:2)
Re: (Score:2)
With Intel's offerings the thing is that they don't really have any advantages
What about the large reduction in power requirements for their supporting chipset? This was always the weakest link for Intel. Their CPUs are quite low-powered, but their chipsets ruin any power savings. The all-in-one CPUs now allow for substantial overall power savings, meaning Intel is the king when it comes to performance per Watt.
Re: (Score:2)
Hm, indeed, with the long-present "Intel has great power consumption" I forgot about their chipsets. But that still doesn't give them the title of "king when it comes to performance per Watt", not when you look at the overall performance of the combo (meaning also 3D performance).
Re: (Score:2)
Calling Intel's offerings crude sounds like it is quoting from AMD's press release. It may be crude, but it works and was quick and cheap to implement. But does it have any disadvantages?
Of course it does. Having an additional interconnect between the CPU and GPU means not only that cost is higher, but that performance is decreased. You have to have an interface that can be carried over such an interconnect, which is just another opportunity for noise; and this interface will likely be slower than the various core internals. With both on one die, you can integrate the two systems much more tightly, and more cheaply too.
Re: (Score:2)
Actually, Intel's CPUs with built-in GPU are infinitely faster than AMD's in that you can buy one of the Intel chips now. Coming up with technical quibbles is meaningless without any real benchmarks to show the differences, which even AMD can't provide.
CPUGPU (Score:1)
I, for one, welcome our new small furry not-yet-house-trained overlords!
CPUGPU, just step around it...
Re: (Score:2)
Re: (Score:2)
I thought the joke was just quirky enough to work pretty well. I guess it just goes to show that humor is never universal.
Could this be AMD's next Athlon? (Score:3, Insightful)
How do you cool this thing? (Score:1)
AMD Fusion is about GPGPU (Score:3, Informative)
Re: (Score:2)
More like, NV can't get the yields up, I suspect.
Re: (Score:2)
It so won't be more powerful than Fermi. But it might be available in industrial quantities, instead of cottage-industry amounts.
Bigger GPU Than CPU, Please (Score:2)
I want my CPU to be mostly GPU. Just enough CPU to run the apps. They don't need a lot of general purpose computation, but the graphics should be really fast. And a lot of IO among devices, especially among network, RAM and display.
Re: (Score:2)
Re: (Score:2)
1. I don't care. And there are many millions, billions of people like me.
2. Most corporate computing also uses "netbook" type functionality that doesn't use a big CPU, but needs a bigger GPU. That's why there are CPUs like the Atom.
3. Sarcasm is just obnoxious when you're wrong.
Advanced features (Score:5, Interesting)
In addition to the CPGPU or whatever they're calling it, Fusion should finally catch up to (and exceed) Intel in terms of niftilicious vector instructions. For example, it should have crypto and binary-polynomial acceleration, bit-fiddling (XOP), FMA and AVX instructions. As an implementor, I'm looking forward to having new toys to play with.
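[Editor's note: for a taste of what those vector instructions buy, a minimal sketch using AVX with a fused multiply-add intrinsic. Caveat: this uses the FMA3-style _mm256_fmadd_ps purely for illustration; the AMD parts in question actually led with FMA4 and XOP, so the exact intrinsic is an assumption. Build with -mavx -mfma.]

#include <immintrin.h>  /* AVX / FMA intrinsics */
#include <stdio.h>

int main(void)
{
    /* r = a * b + c, eight single-precision lanes per instruction. */
    __m256 a = _mm256_set1_ps(2.0f);
    __m256 b = _mm256_set1_ps(3.0f);
    __m256 c = _mm256_set1_ps(1.0f);
    __m256 r = _mm256_fmadd_ps(a, b, c);   /* fused multiply-add: 8 x (2*3+1) */

    float out[8];
    _mm256_storeu_ps(out, r);
    printf("%g\n", out[0]);                /* prints: 7 */
    return 0;
}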
Re: (Score:1)
Actually they are calling it an APU (accelerated processing unit), FWIW.
CUDA? (Score:1)
Does this mean CUDA support in every AMD "CPU" ?
Re: (Score:2, Informative)
CUDA is Nvidia.
ATI has Stream.
Not marketed toward me (Score:2)
Re:Not marketed toward me (Score:5, Insightful)
Call me when they can fit 9 inches of graphics card into one of these CPUs.
Size isn't everything!
Re:Not marketed toward me (Score:4, Funny)
Re: (Score:2)
Clearly you have a small graphics card yourself.
future upgrading? (Score:5, Interesting)
This is great for mobile devices and laptops but I don't think I want my CPU and GPU combined in my gaming rig. I generally upgrade my video card twice as often as my CPU. If this becomes the norm then eventually I'll either get bottlenecked or have to waste money on something I don't really need. Being forced to buy two things when I only need one is not my idea of a good thing.
Re: (Score:2, Insightful)
The graphics core will likely be small, add an inconsequential number of transistors, be disable-able, and/or CrossFire-able with the main card.
However, the place I see this getting HUGE gains is if the on-board GPU is capable of doing physics calculations. Having a basic physics co-processor on every AMD CPU flooding out the gates will do massive good for the implementation of physics in games, and can probably offload a lot of other calculations in the OS. On-board video encode acceleration anyo
Re: (Score:2)
Generally, it is cheaper to make a given feature standard than it is to make it optional (obviously, making it unavailable is cheaper still). Standard means that you can solder it right onto the mainboard, or include it on the die; it means no separate inventory channel to track, and greater economies of scale. For these reasons, once a given feature achieves some level of popularity, it shifts from
Re: (Score:2)
I generally upgrade my video card twice as often as my CPU. If this becomes the norm then eventually I'll either get bottlenecked or have to waste money on something I don't really need.
That depends: do you buy Intel or AMD processors, currently?
Because if you buy Intel processors, I can see your point (and the reason behind not frequently upgrading your CPU): CPU upgrades are costly if the socket changes with every upgrade, requiring a new board at the same time. With AMD processors, however, they've retained the same basic socket for quite some time (to negligible performance detriment and the ability to upgrade components largely independently). This is Good Design on their part.
If they
sound / firewire / usb 3.0 still need pci / pci-e (Score:2)
Sound / FireWire / USB 3.0 still need the PCI / PCIe bus, and mid-range and high-end / multi-display cards are not dying. Most on-board video can only do 1-2 DVI / HDMI outputs anyway, with most offering 1 DVI / HDMI + 1 VGA, and VGA is poor for big screens and does not work with HDCP. PCIe will not die, as it is also needed for TV tuner cards (if this new CableCARD PC push works out, you may see many more systems with them); on-board sound / SATA (some boards) / USB 3.0 / network use the PCI / PCIe bus as well. 4 tuner
Re:future upgrading? (Score:5, Insightful)
You've got to stop thinking of it as a GPU and think of it more like a co-processor.
First of all, AMD isn't going to force you to buy a built-in GPU on all of their processors. Obviously the enthusiast market is going to want huge 300W discrete graphics rather than the 10-15W integrated ones. There will continue to be discrete CPUs, just like there will always continue to be discrete GPUs.
But this is a brilliant move on AMD's part. They start with a chunk of the market that is already willing to accept this: system builders, motherboard makers and OEMs will be thrilled to be able to build even smaller, simpler, more power efficient systems for the low end. This technology will make laptops and netbooks more powerful and have better battery life by using less energy for the graphics component.
Now look further ahead, to when AMD begins removing some of the barriers that currently make programming the GPU for general-purpose operations (GPGPU) such a pain. For example, right now you have to go through a driver in the OS and copy input data over the PCI bus into the frame buffer, do the processing on the GPU, then copy the results back over the PCI bus into RAM (sketched in the code after this comment). For a lot of things, this is simply too much overhead for the GPU to be much help.
But AMD can change that by establishing a standard for incorporating a GPU into the CPU. Eventually, imagine an AMD CPU that has the GPU integrated so tightly with the CPU that the CPU and GPU share a cache-coherent view of the main system memory, and even share a massive L3 cache. What if the GPU can use the same x86 virtual addresses that the CPU does? Then...all we have to have is a compiler option that enables the use of the GPU, and even tiny operations can be accelerated by the built-in GPU.
In this future world, there's still a place for discrete graphics -- that's not going away for your gaming rig. But imagine the potential of having a TFLOP-scale coprocessor as a fundamental part of a future sub-50W CPU. Your laptop would be able to do things like real-time video stabilization, transcoding, physics modeling, and image processing, all without breaking the bank (or the power budget).
But before we can get to this place, AMD has to start somewhere. The first step is proving that a GPU can coexist with a CPU on the same silicon, and that such an arrangement can be built and sold at a profit. The rest is just evolution.
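[Editor's note: to make the "copy over the bus, compute, copy back" overhead concrete, here's a minimal sketch of that round trip as it looks today, using the plain OpenCL C API. Error handling is omitted and the trivial "scale" kernel is invented for illustration; a cache-coherent on-die GPU of the kind described above could make steps 1 and 3 unnecessary.]

#include <CL/cl.h>
#include <stdio.h>

static const char *src =
    "__kernel void scale(__global float *v) {\n"
    "    size_t i = get_global_id(0);\n"
    "    v[i] = v[i] * 2.0f;\n"
    "}\n";

int main(void)
{
    enum { N = 1024 };
    float host[N];
    for (int i = 0; i < N; i++) host[i] = (float)i;

    cl_platform_id platform;
    cl_device_id device;
    cl_int err;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", &err);

    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, sizeof(host), NULL, &err);
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);

    /* 1. Copy input over the bus into GPU memory. */
    clEnqueueWriteBuffer(q, buf, CL_TRUE, 0, sizeof(host), host, 0, NULL, NULL);

    /* 2. Run the kernel on the GPU. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);

    /* 3. Copy the results back over the bus into system RAM. */
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(host), host, 0, NULL, NULL);

    printf("host[3] = %g\n", host[3]);   /* prints: 6 */
    return 0;
}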
Re: (Score:2)
If there are good technical reasons for it, then it's good. And there are.
It will Be Fast! (Score:2)
But will quad-core and six-core chips also carry a graphics chip? And how long before dual quad-core motherboards hit the streets?
Frankly, we are on the edge of a serious improvement in computers.
Re: (Score:2)
Apple angle (Score:2, Interesting)
The question, of course, is whether it would be power-efficient enough for laptops, where space is an issue...
more like x86-64+GPU instructions combined.. (Score:2, Interesting)
I look forward to seeing what AMD's new architecture brings. The interesting part isn't thinking of it as integrating a GPU into the same space as a CPU, but as creating one chip that can do more exotic types of calculations than either chip could alone, and making it available in every system. I'm also envisioning "GPU" instructions being executed where CPU instructions normally would be when those aren't in use, and vice versa, so basically everything available could be put to use.
What about quad core laptop processors? (Score:2)
Meanwhile, where are those damned quad core laptop processors AMD promised? I've been waiting freaking AGES to buy a laptop with one.
Is this exciting? Well, what are the GPU specs? (Score:4, Insightful)
If AMD puts a competitive GPU onto the CPU die (comparable to their current high-end graphics boards), then this is a really big deal. Perhaps the biggest issue with GPGPU programming is the fact that the graphics unit is at the end of a fairly narrow pipe with limited memory, and getting data to the board and back is a performance bottleneck and a pain in the butt for a programmer.
Putting the GPU on the die could mean massive bandwidth from the CPU to the hundreds of streaming processors on the GPU. It also strongly implies that the GPU will have direct access to the same memory as the CPU. Finally, it would mean that if you have a Fusion-based renderfarm, then you have GPUs on the renderfarm.
This is exciting!