AMD Fusion Details Leaked 94
negRo_slim writes "AMD has pushed Fusion as one of the main reasons to justify its acquisition of ATI. Since then, AMD's finances have changed colors and are now deep in the red, the top management has changed, and Fusion still isn't anything AMD wants to discuss in detail. But there are always 'industry sources' and these sources have told us that Fusion is likely to be introduced as a half-node chip."
fusion leak? (Score:5, Funny)
Now we'll never get the NIMBYs to allow us to build fusion reactors.
Re: (Score:1)
Re: (Score:1, Troll)
I don't know about you fellow geeks, but as soon as I see one more attempt at doing integrated graphics in CPUs, an alarm goes off in my head, the one that means "SLOW CRAP ALERT".
Integrated graphics suck. Always have, always will. More cost for same perfs == teh suck. Shared memory == teh suck. They will suck at 3D, and render desktops with tearing and artefacts when you drag windows. Teh One True Suck.
Ideas like this are one of the reasons why computers still take as long to boot as they did ten years ago, b
Re:fusion leak? Now that's it's leaked, (Score:1)
How diffuse will the experience be?
Re: (Score:2, Funny)
You mean his condition was upgraded from dead to serious? No wonder he always played God in the movies.
WARNING LAST MEASURE (Score:3, Informative)
Re: (Score:2)
'es not dead! 'es kipping!
In all seriousness, he's ok. A broken arm/shoulder injury and he's in good spirits.
Re: (Score:2)
I was all set to make some witty (or at least half-witty) comment about reading too much SF lately, given that I react negatively to seeing "fusion" and "leak" in the same headline, but you beat me to it.
However:
Great, we finally get cold fusion working (by a chip manufacturer? really?)
Sure, why not? If there is anything to cold fusion (and at least part of the jury is still out on that), and it depends on the microstructure of the cathode (anode? whichever), who better to perfect it? I use that little c
Just one question... (Score:2, Interesting)
Re:Just one question... (Score:5, Informative)
Re: (Score:2)
That's an inaccurate description. There's nothing stopping you from re-working a design shrunk to a half-node. Yes, a half-node is often used as a stop-gap where all they do is a shrink, but that's not always the case: look at ATI's Radeon 4800 series on 55nm (an all-new design built on a half-node between 65nm and 45nm). For further examples, Nvidia released the very successful 6600GT and later the 7800 series on the 110nm half-node (between 130nm and 90nm). These are all NEW designs, not simple die shrinks.
He
Re:Just one question... (Score:4, Informative)
FTFA: "As Fusion is shaping up right, we should expect the chip be become the first half-node CPU (between 45 and 32 nm)"
Re: (Score:2)
print page (Score:2, Informative)
Re: (Score:1, Redundant)
Proper grammar, on the other hand, is just a "nice to have" (FEWER ads ARE always good).
Re: (Score:2, Informative)
The verbiage "less ads" is wrong, period.
bitch.
But hey, I'm responding to a troll. I was simply trying to be a little snarky while pointing out that the OP sounded like an ignoramus, writing as he did. I'm not saying he is one, but if Albert Einstein ever said "Y'alls better believe that thar sound speed is fewer than light speed", you'd think he was an idiot as well. Fewer is never a replacement for less, and vice versa.
Re: (Score:2)
Yeah, or...
"On the other hand, proper grammar is just as nice to have."
Anyone with more knowledge explain this to me (Score:3, Interesting)
Re:Anyone with more knowledge explain this to me (Score:5, Interesting)
A higher level of integration makes sense for laptops. Putting the GPU with the CPU also makes a lot more sense when we consider that the CPU these days also means the place closest to the memory controllers.
In addition, you have an interconnect between the two which is far faster than anything else available today. However, there is no code today that will use it explicitly; the whole paradigm of a GPU is that you do not read data back to the CPU.
So, for now, the benefits are really physical size and cost. A CPU-integrated graphics core can be better than one placed on the motherboard when you have an integrated memory controller, but a separate card with dedicated RAM should beat both, as long as you do not expect a new "chatty" paradigm of GPU usage.
Re: (Score:3, Interesting)
I think the chatty paradigm of GPU usage will be more fine-grained "stream computing." When the latency between CPU and GPU is lower, and you share the same cache, the penalty for setting up and launching stream computing tasks on the GPU becomes lower, enabling more things to be accelerated this way.
The old way, you only really got benefits from stream computing if you were able to set up a massive job for the GPU, set it on its task, wait for completion, and then get the results. Now, maybe new classes of
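A minimal sketch of the tradeoff described above, with made-up illustrative numbers (the per-element costs and overheads here are assumptions, not measurements of any real hardware): offloading to the GPU only wins once the job is big enough to amortize the fixed launch-and-transfer cost, so cutting that fixed cost shrinks the break-even job size.

```python
# Toy latency model (illustrative numbers, not measured): a GPU offload
# pays off only when the speedup on the bulk work outweighs the fixed
# cost of launching the kernel and moving the data.

def offload_wins(n_elements, cpu_ns_per_elem, gpu_ns_per_elem, launch_overhead_ns):
    """Return True if running on the GPU beats the CPU for this job size."""
    cpu_time = n_elements * cpu_ns_per_elem
    gpu_time = launch_overhead_ns + n_elements * gpu_ns_per_elem
    return gpu_time < cpu_time

def break_even(cpu_ns_per_elem, gpu_ns_per_elem, launch_overhead_ns):
    """Smallest job size (in elements) at which the GPU starts to win."""
    return launch_overhead_ns / (cpu_ns_per_elem - gpu_ns_per_elem)

# Discrete GPU across a bus: assume ~50 us of launch + transfer overhead.
discrete = break_even(cpu_ns_per_elem=10, gpu_ns_per_elem=1, launch_overhead_ns=50_000)
# Fused GPU sharing the die and cache: assume overhead drops to ~1 us.
fused = break_even(cpu_ns_per_elem=10, gpu_ns_per_elem=1, launch_overhead_ns=1_000)

print(f"break-even job size, discrete GPU: ~{discrete:.0f} elements")
print(f"break-even job size, fused GPU:    ~{fused:.0f} elements")
```

Under these assumed numbers the break-even job size drops by the same factor as the overhead, which is the sense in which "new classes of apps become more feasible."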
Re: (Score:2, Interesting)
> The old way, you only really got benefits from stream computing if you were
> able to set up a massive job for the GPU, set it on its task, wait for
> completion, and then get the results. Now, maybe new classes of apps become
> more feasible.
Yes. I think this is more a response to Cell than to Intel. You'll note that Cell has a very high bandwidth interconnect between the main CPU and its slave stream processors. This is the same idea. And if they implement a good double precision float in those
Re: (Score:2)
And if they implement a good double precision float in those stream units, I predict it will become very desirable for scientific computing.
I was wondering how long it would take for someone to get it. I also expect to see the acceleration of 3D GUIs match the performance of the old Win95 shell running on a modern computer. AKA snappy! Little things like that will snowball until incredibly strong performance in calculations once relegated to specialized hardware (486DX, anyone?) becomes the norm and expected.
Re:Anyone with more knowledge explain this to me (Score:5, Interesting)
So, for now, the benefits are really physical size and cost.
Power, more than size. Off-chip buses like HyperTransport are fairly power-intensive, and now CPU-GPU communication won't have to leave the chip. Depending on how they do the integration with the memory controller, it could also mean that less of the chip needs to be active when doing nothing more than screen refreshes from the frame buffer. But the HT link is a pretty big deal power-wise.
Re: (Score:3, Interesting)
there is no code today that will use it explicitly, the whole paradigm of a GPU is that you do not read data back to the CPU.
Perhaps you should look into GPGPU [gpgpu.org] and CUDA [nvidia.com]. Most of what most people do with computers involves one-way traffic to the GPU, but a small and sometimes well-funded subset of us have bigger plans than video games for the massive parallelization the GPU provides.
It will be interesting to see if the Nvidia/Intel and AMD/ATI alliances will kill progress in this direction and make us all wait for Intel and AMD to figure out a way to market 256 threads of execution to consumers who won't ever need it, but perhap
Re: (Score:2)
I am familiar with GPGPU and so on, but the pure scientific market is not large enough to warrant the development of these chips, and it certainly doesn't serve as a plausible excuse for buying ATI. Also note that (almost) all stuff done today as GPGPU is high-latency: you send a large chunk of data and read the results back. You just keep feeding a stream to the computation kernels. The thing is also that they are now taking an existing GPU core, which is still tuned for this kind of workload. These days,
Re: (Score:2)
In addition, you have an interconnect between the two which is far faster than anything else available today. However, there is no code today that will use it explicitly, the whole paradigm of a GPU is that you do not read data back to the CPU.
Can't this code be put in the driver?
Re: (Score:3, Interesting)
Can't this code be put in the driver?
Not really, as I see it. The driver should naturally be written to use the faster bus, but the availability of this communication channel could be used for doing some special effect stages on the CPU and then hand the data back (assuming that the effect for some reason cannot be implemented as a shader). Some kind of dynamic off-loading if the GPU turns out to be the bottleneck could be handled in driver, and that would surely be interesting, but the traditional cores would be a very minor addition to the t
Re: (Score:2)
Hypertransport.
Since AMD processors already have internal memory controllers, it's entirely possible to do this and avoid any off-chip accesses.
The thing to do would be to give the GPU its own memory controller and HyperTransport access to fast graphics memory. Then, what you'd have eliminated is "slow" trans
Re: (Score:2, Funny)
Maybe a chip with a huge amount of cache on it?
Think of a chip with the CPU, GPU, 2-4GB of DDR5 (or more like DDR20 when it happens) cache on it.
Someone more informed could say what the speed of the cache is. I just know that it is fast. If there was a chip with a few gig of this fast cache on it, it could make a nice system. Then again, it all depends on how it is implemented.
Re: (Score:1)
With enough cache to replace RAM, you'd have a system so expensive that even IBM couldn't dream of building it, let alone sell it.
The reason we don't do Just That already is that cache is fabulously expensive: it adds so many more transistors to the chip that it isn't feasible.
Remember the first Celeron? It had no L2 cache; that's why it SUCKED all that much. Remember the last Alpha? It had 8M of cache; that's why it simply killed each and every other CPU at the time. Well, that, and that its circuits were hand
Re: (Score:2)
Actually, if you had 4 GB of memory on chip, you'd probably not wire it as cache, but as main system RAM at the full processor speed. DIMM slots, if any, would then just be a huge disk cache or possibly RAM drives for your swap.
The hard part is fitting that many transistors onto the chip along with the cores. Four gigabytes means 32 gigabits, plus the interface circuitry to the memory controller. 4 GB on-chip would add sagans of transistors to a design. A Core 2 Quad has about 580 million transistors.
You're
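The back-of-the-envelope arithmetic above can be checked quickly. Assuming classic 6-transistor SRAM cells (a common choice for cache; the cell type and the overhead it ignores are assumptions, and real caches add tags, ECC, and interface circuitry on top):

```python
# Rough transistor count for a hypothetical 4 GB on-chip SRAM cache.
# Assumes 6-transistor (6T) SRAM cells and ignores tag/ECC/interface
# overhead, so this is a lower bound on the real cost.

BITS_PER_BYTE = 8
SRAM_TRANSISTORS_PER_BIT = 6      # classic 6T cell
CORE2_QUAD_TRANSISTORS = 580e6    # figure quoted in the comment above

cache_bytes = 4 * 1024**3         # 4 GB
cache_bits = cache_bytes * BITS_PER_BYTE
cache_transistors = cache_bits * SRAM_TRANSISTORS_PER_BIT

print(f"4 GB = {cache_bits // 2**30} (binary) gigabits")
print(f"~{cache_transistors / 1e9:.0f} billion transistors for the cells alone")
print(f"that's roughly {cache_transistors / CORE2_QUAD_TRANSISTORS:.0f}x a Core 2 Quad")
```

Even with these charitable assumptions, the cell array alone lands in the hundreds of billions of transistors, hundreds of times the quoted Core 2 Quad budget, which is why nobody builds Just That.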
Re: (Score:3, Interesting)
1. It has a very high-speed, low-lag link to the CPU.
2. It can hook into the RAM controller in the CPU, and maybe even have its own later.
3. It can work with a real video card in the system.
4. In a 2+ socket system you can have a full CPU in one socket and a GPU + CPU in the other.
Re: (Score:1)
Putting the GPU on the CPU, in AMD's case, means the graphics chip doesn't have to access main memory by proxy through the CPU's on-board memory controller.
What'd be nicer is if they would stop pretending it's a discrete processing unit and just call it SSE5 or something, so that everyone gets access to a metric assload of vector/stream hardware without any of this stupid "driver" business.
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
Reduced costs, hopefully. And killing performance, too.
Completely right.
A CPU is on-board, yes?
No.
In related news ... (Score:1, Informative)
In related news, there are rumours, just recently denied [dailytech.com], that Nvidia is exiting the chipset business.
Re: (Score:1, Troll)
What has that got to do with AMD, though?
Because AMD was the one that *started* the rumor, of course!!
AMDs problem. (Score:5, Insightful)
Was the rush to both a native quad core and quad core on the desktop.
Desktops matter less and less these days. Notebooks are more and more important. You don't put quad core in notebooks yet.
If AMD can pull off Fusion and have it compete with Intel in the laptop space they may actually do well again.
Their current problem is that they are not competing with the Atom yet. The netbook may be the next big battleground. Most people don't want a faster machine anymore, and most laptop users don't want a faster laptop. What they want is one that runs longer and is smaller and lighter.
Re: (Score:1, Interesting)
That's interesting, because I'm typing this on my quad-core laptop... www.pcmicroworks.com www.sager.com www.dell.com/xps
Quad-core laptops aren't even rare anymore. Expensive, yes, but still pretty common.
Re:AMDs problem. (Score:5, Interesting)
Yes, they are still rare. The few "laptops" with quad-core CPUs are using power-hungry desktop or server class CPUs and weigh over 10 lbs. You won't see a quad-core CPU in a traditional (less than 7 lbs.) laptop until these hit the market [wikipedia.org] in the near future.
Re: (Score:2)
I think duel core will rule laptops for a while yet. The simple reason is that most PC users are not screaming for more power. Excluding gamers, most users' computers are fast enough until they become bogged down with malware.
Netbooks and Nettops I predict will be the next big thing.
We don't need bigger and faster PCs anymore.
We need smaller, lighter, and more convenient.
Re:AMDs problem. (Score:5, Funny)
duel core
I prefer joust core myself.
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
Heat? Heat kills my laptops faster than their warranty expires! Not because of the CPU (they got at least THAT right; laptop CPUs are cooled well now), but the GPUs just BURN. When will they understand and water-cool the fucking things?
Re: (Score:2)
As soon as they find a water-cooling set that doesn't electrocute your balls until after the warranty period has expired?
Re: (Score:2)
I would have to say that they are still pretty uncommon.
Gaming laptops are nowhere near the majority of laptop users.
Even then, I would guess that dual-core systems outsell quad-core laptops at least 50 to 1, if not higher, but I am just guessing.
Re:AMDs problem. (Score:4, Insightful)
Gamers are like sports car buyers.
They are few, not a large profit base, and only important for their halo effect.
Now the manufacturers do make some good money off the gamers, because they soak them and get to recoup a large section of their R&D.
What gamers really help with is the Halo effect. A lot of gamers will buy Intel not because they are going to get the top of the line CPU but because they dream of someday getting that top of the line CPU so they buy a motherboard that will work with it.
Intel has better and faster CPUs, but I will bet that even with that, most gamers are still buying dual-core CPUs.
Oh, and gamers don't buy the most expensive hardware. The HPC crowd has them beat. Those are the people who really push out some bucks. :)
You think a top-of-the-line gaming rig is expensive? Price a top-of-the-line IBM POWER system.
Re: (Score:1)
Re: (Score:2)
Maybe but the number one producer of GPUs is Intel.
And I think the best that Intel can do with Crysis is 4 FPM with the settings on high.
Go buy Motortrend and see what is on the cover. Nobody cares that much about the low end of graphics, but it outsells the high end in volume.
There is money to be made in gaming cards, but it is the low end that pays the bills. It is just that the high end will become the low end at some point.
But even then take a look at the reviews of high end video cards. Just bein
Porsche just bought Volkswagen AG (Score:2)
Re: (Score:1)
Yes, but IBM's POWER CPU doesn't run Windows, and games for PC run in Windows. Do you mean running Windows in a virtual machine?
Re: (Score:1)
Laptops matter.
Desktops don't really.
but servers do, which is what AMD was targeting for the quad core and where AMD was destroying Intel when the various Athlons first came out.
Re: (Score:2)
But Intel took a lot back with their "fake" quad cores. AMD lost a good bit of market share. They should have pushed out a fake quad core while they worked on the true quad. Then throw in that the first true quads had some issues, and you can see AMD took a bit of a hit.
Re: (Score:2)
First quad-core laptop hits U.S. [cnet.com]
Xtremenotebooks launches quad core laptop [engadget.com]
Quad-core notebook [notebookreview.com]
Dell launches new quad-core laptop [arstechnica.com]
Build your own quad-core laptop
Re: (Score:1)
Pardon? Atom is a "me too" to AMD's Geode technology (which has now been under development for over a decade, dating back to the good ol' Cyrix days).
Re: (Score:2)
Actually, I'd say the Atom is the late player compared to the Geode and the C3/C7 lineup from Via for low-power devices. When Transmeta was still selling chips instead of being a tech licensing company, Crusoe was handing Intel an empty lunch sack and slapping it on the ass. Arm and Freescale have been in that market with non-x86 chips for a long time.
Intel might have a solid entry finally in the low-power space, but they are hardly pioneering anything there with Atom.
Cost and Performance info? (Score:5, Informative)
Without cost and performance (speed) info, this is not really interesting.
Facts in the story:
- AMD using TSMC
- AMD using 40nm instead of 45 or 32
- DirectX 10.1 support with the R800 engine on the chip.
None of this matters unless it does something better and/or cheaper than some other option.
Re: (Score:2)
Re:Cost and Performance info? (Score:5, Informative)
No, the article says a Bulldozer-based Fusion chip will be fabbed by TSMC. AMD will probably make the non-fused Bulldozer itself.
Re: (Score:2)
Re: (Score:3, Interesting)
The first Fusion processor is code-named Shrike, which will, if our sources are right, consist of a dual-core Phenom CPU and an ATI RV800 GPU core. This news is actually a big surprise, as Shrike was originally rumored to debut as a combination of a dual-core Kuma CPU and a RV710-based graphics unit.
And just because you don't care about this news does not mean that everybody else will agree with you.
Re: (Score:2)
Does the combined Fusion chip have advantages over the separate chips?
If not, there's no point. I don't think anyone doubts that the separate chips exist.
Re: (Score:2)
one processor die vs. two processor dies
one chip socket vs. two chip sockets
Fewer parts means lower cost. TFA didn't say, but it's entirely possible to put a PCIe controller or a HyperTransport link on a processor die, too (that's where HT links are now). If they dedicate a link directly from the CPU cores to the GPU core without going out to the chipset and back, then you eliminate all the traces on the motherboard for machines that aren't doing AMD's hybrid graphics. If the motherboards need fewer traces on the
Re: (Score:2)
Theoretically, yes. Unless the huge die size results in low yield, or thermal issues, or time-to-market issues, or integration of unused functionality that makes the chip too expensive to compete with chips with smaller feature sets, or some other issue.
And even if it's only a little cheaper, then it may not be worthwhile either.
I'm in the business. Integration is great sometimes, ok sometimes, and bad sometimes. Single-chip solutions sometimes lose out to multi-chip solutions.
Re: (Score:2)
Well, the standard "all else being equal" of course applies.
Going from 90nm and 65nm to 40nm and 32nm parts should help deal with a few issues, of course. Will two CPU cores and one GPU core at 40nm actually be any larger than four CPU cores at 65nm? The RV800 is a pretty small chip by itself already.
I'm not in the business, so I yield to you that you probably have better information and more insight on the topic. Given AMD's problems of late, it's probably prudent to bring up the possible downsides. Yet I
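The die-size question above can be roughed out with idealized scaling. Assuming transistor area scales as the square of the feature size (real shrinks never achieve this cleanly) and, purely for illustration, that each CPU core and the GPU core occupy one "unit" of area at their native node (these unit areas are invented, not AMD's numbers):

```python
# Idealized process scaling: for the same design, relative die area goes
# roughly as (new_node / old_node)**2. The "1 unit per core" areas below
# are illustrative assumptions, not real die measurements.

def relative_area(old_node_nm, new_node_nm):
    """Area scale factor for an ideal shrink from old_node to new_node."""
    return (new_node_nm / old_node_nm) ** 2

# Four CPU cores at 65nm vs. two CPU cores + one GPU core at 40nm.
area_quad_65 = 4 * 1.0
area_fusion_40 = (2 + 1) * relative_area(65, 40)

print(f"65nm -> 40nm ideal area factor: {relative_area(65, 40):.2f}")
print(f"fusion die area vs. quad die:   {area_fusion_40 / area_quad_65:.2f}")
```

Under these assumptions the 40nm two-core-plus-GPU die comes out well under the 65nm quad-core die, which supports the suggestion that the shrink absorbs much of the integration cost.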
Re: (Score:1)
Re: (Score:2)
They could make projections or otherwise state that there's some advantage.
Half-Node? (Score:2, Interesting)
Re: (Score:1)
As Fusion is shaping up right, we should expect the chip be become the first half-node CPU (between 45 and 32 nm) in a very long time.
Re:Half-Node? (Score:4, Informative)
That's a weird definition of node... (Score:5, Informative)
I can't comment on whether your description of a "node" is true for AMD or not, but the rest of the silicon industry (via the ITRS roadmap) labels technology nodes like 90nm, 65nm, 45nm, 32nm, 22nm, 16nm, etc., etc...
Historically, the ITRS used the term "technology node" to attempt to provide a single, simple indicator of overall industry progress in IC technology, by defining it to be the smallest half-pitch of contacted metal lines on any product (usually DRAM), but they have since abandoned this practice of declaring technology nodes (because various parameters are now scaling at widely different rates). Nowadays, in the rest of the semiconductor industry a node often corresponds to some major process-enabling technology (e.g., TSMC 45nm combined 193nm immersion photolithography, strained silicon, and an extreme low-k inter-metal dielectric material).
If you meant that AMD has 7-9 different nodes that evolved from the 45nm node, I guess that's consistent with this too, but not that consistent with everyone else's use of "node"; they would probably call those "half-nodes". If you meant that AMD's 45nm technology uses up to 7 to 9 different scaling factors from other technology nodes, I guess that is consistent with this too, but I don't think that's standard industry usage of the word "node".
AFAIK, the industry uses the term "half-node" when a process falls somewhere between the main nodes (e.g., at TSMC, 40nm is considered a half-node from 45nm). Normally a half-node is created by some sort of parametric scaling of some of the features of a regular process node to achieve higher transistor density (generally something theoretically in reach of a regular process node, by tweaking scaling by different amounts). Of course there are usually several different varieties of half-nodes (low-leakage and high-speed variants, etc.) developed. But that's no different than the fact that there are many different variants at a particular node in any case.
Often process technology folks design something like a 45nm technology node, and after they are comfortable with being able to yield it, they spend some time tweaking it to see if they can get a shrink; if the tweakage is good enough, they market it as another "half-node" design point. This is a pretty good tradeoff, since they can offer a "shrink" to customers using the main node as a cost-reduction exercise, or as a way to scale customized parts of their designs (e.g., cells, RAMs, I/O pads) without radical redesigns (which might happen between major technology shifts), giving a good !/$ for their engineering efforts.
The reason many folks think it's weird to design something that probably has a lot of custom circuitry, like a CPU-GPU hybrid, in a half-node is that new things take a long time to design, and with process technology a moving target, it's nice to be able to schedule in a "shrink" and get a low-effort cost reduction during the useful, sellable lifetime of a product. By starting production in a half-node, to get a cost reduction worth the engineering effort, you'll probably have to redesign/re-layout the chip in the next technology node (say 32nm, which may have lots of different, non-compatible features and take lots of effort, like a new high-k gate dielectric).
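The appeal of the half-node shrink described above reduces to simple arithmetic: a modest linear scaling factor compounds into a larger area (and thus rough cost) reduction. Using the TSMC 45nm-to-40nm step mentioned in the post as the example, and assuming an ideal shrink (real half-nodes typically deliver less):

```python
# Idealized half-node shrink: the linear scale factor squares into an
# area reduction, and die area is a rough proxy for die cost. Real
# shrinks fall short of this ideal; the numbers are illustrative.

def shrink_savings(full_node_nm, half_node_nm):
    """Return (linear scale factor, fractional area saved) for an ideal shrink."""
    linear = half_node_nm / full_node_nm
    area = linear ** 2
    return linear, 1.0 - area

linear, saved = shrink_savings(45, 40)
print(f"linear scale: {linear:.3f}")
print(f"area (~cost) reduction: {saved:.0%}")
```

An ideal 45nm-to-40nm shrink scales linear dimensions by about 0.89 but trims roughly a fifth of the die area, which is why a "free" mid-life shrink is usually worth scheduling, and why starting at the half-node forfeits that easy win.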
Re: (Score:1)
What's halfway between a Tick and a Tock?
Re: (Score:1)
Re: (Score:1)
And don't hold your breath... (Score:1)
Because we all know how well AMD has held to "release dates" recently.....
The Interesting Tidbit (Score:2, Interesting)