Upgrades / Hardware

AMD Fusion Details Leaked (94 comments)

negRo_slim writes "AMD has pushed Fusion as one of the main reasons to justify its acquisition of ATI. Since then, AMD's finances have changed colors and are now deep in the red, the top management has changed, and Fusion still isn't anything AMD wants to discuss in detail. But there are always 'industry sources' and these sources have told us that Fusion is likely to be introduced as a half-node chip."

  • Just one question... (Score:2, Interesting)

    by Anonymous Coward on Monday August 04, 2008 @04:00PM (#24471821)
    WTF is a "half-node chip"?
  • by the_humeister ( 922869 ) on Monday August 04, 2008 @04:02PM (#24471843)
    What's the point in putting the GPU on the same die as the CPU? Doesn't it just then get access to slower main memory vs. a discrete video card with faster memory? Motherboards won't have on-board video anymore? This is all rather confusing.
  • Re:AMDs problem. (Score:1, Interesting)

    by Anonymous Coward on Monday August 04, 2008 @04:07PM (#24471931)

    That's interesting, because I'm typing this on my quad-core laptop: www.pcmicroworks.com www.sager.com www.dell.com/xps

    Quad-core laptops aren't even rare anymore. Expensive, yes, but still pretty common.

  • by cnettel ( 836611 ) on Monday August 04, 2008 @04:10PM (#24471975)

    A higher level of integration makes sense for laptops. Putting the GPU with the CPU also makes a lot more sense when you consider that the CPU these days is also the place closest to the memory controllers.

    In addition, you have an interconnect between the two which is far faster than anything else available today. However, there is no code today that will use it explicitly; the whole paradigm of a GPU is that you do not read data back to the CPU.

    So, for now, the benefits are really physical size and cost. A CPU-integrated graphics core can be better than one placed on the motherboard when you have an integrated memory controller, but a separate card with dedicated RAM should beat both, as long as you do not expect a new "chatty" paradigm of GPU usage.

  • Re:AMDs problem. (Score:5, Interesting)

    by nxtw ( 866177 ) on Monday August 04, 2008 @04:23PM (#24472163)

    That's interesting, because I'm typing this on my quad-core laptop: www.pcmicroworks.com www.sager.com www.dell.com/xps

    Quad-core laptops aren't even rare anymore. Expensive, yes, but still pretty common.

    Yes, they are still rare. The few "laptops" with quad-core CPUs are using power-hungry desktop- or server-class CPUs and weigh over 10 lbs. You won't see a quad-core CPU in a traditional (less than 7 lbs.) laptop until these hit the market [wikipedia.org] in the near future.

  • Half-Node? (Score:2, Interesting)

    by abshnasko ( 981657 ) on Monday August 04, 2008 @04:25PM (#24472201)
    I did a Google search on this topic, but I can't really determine the significance of a 'half-node' processor. Is there something inherently special about it? Can someone more knowledgeable about processors explain this?
  • by eebra82 ( 907996 ) on Monday August 04, 2008 @04:26PM (#24472209) Homepage
    You forgot the most important piece:

    The first Fusion processor is code-named Shrike, which will, if our sources are right, consist of a dual-core Phenom CPU and an ATI RV800 GPU core. This news is actually a big surprise, as Shrike was originally rumored to debut as a combination of a dual-core Kuma CPU and a RV710-based graphics unit.

    And just because you don't care about this news does not mean that everybody else will agree with you.

  • by HickNinja ( 551261 ) on Monday August 04, 2008 @04:31PM (#24472275)

    I think the chatty paradigm of GPU usage will be more fine-grained "stream computing." When the latency between CPU and GPU is lower, and you share the same cache, the penalty for setting up and launching stream computing tasks on the GPU becomes lower, enabling more things to be accelerated this way.

    The old way, you only really got benefits from stream computing if you were able to set up a massive job for the GPU, set it on its task, wait for completion, and then get the results. Now, maybe new classes of apps become more feasible.
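    As a rough illustration (assuming an ordinary discrete card and the stock CUDA runtime API; nothing here is Fusion-specific), you can measure that per-launch overhead directly by timing back-to-back launches of a do-nothing kernel:

        // launch_overhead.cu -- sketch of measuring kernel-launch cost with CUDA events
        #include <cstdio>
        #include <cuda_runtime.h>

        __global__ void tiny_kernel(int *out) {
            *out = 42;  // deliberately trivial work
        }

        int main() {
            int *dev_out;
            cudaMalloc(&dev_out, sizeof(int));

            cudaEvent_t start, stop;
            cudaEventCreate(&start);
            cudaEventCreate(&stop);

            const int launches = 1000;
            cudaEventRecord(start);
            for (int i = 0; i < launches; ++i)
                tiny_kernel<<<1, 1>>>(dev_out);      // back-to-back trivial launches
            cudaEventRecord(stop);
            cudaEventSynchronize(stop);

            float ms = 0.0f;
            cudaEventElapsedTime(&ms, start, stop);  // total time in milliseconds
            printf("average per-launch cost: %.1f us\n", ms * 1000.0f / launches);

            cudaEventDestroy(start);
            cudaEventDestroy(stop);
            cudaFree(dev_out);
            return 0;
        }

    On a discrete card that average is dominated by driver and bus overhead; cut it down far enough and offloading small jobs starts to pay off.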

  • by Chris Burke ( 6130 ) on Monday August 04, 2008 @04:39PM (#24472373) Homepage

    So, for now, the benefits are really physical size and cost.

    Power, more than size. Off-chip buses like HyperTransport are fairly power-intensive, and now CPU-GPU communication won't have to leave the chip. Depending on how they do the integration with the memory controller, it could also mean that less of the chip needs to be active when doing nothing more than screen refreshes from the frame buffer. But the HT link is a pretty big deal power-wise.

  • by pseudorand ( 603231 ) on Monday August 04, 2008 @04:39PM (#24472381)

    there is no code today that will use it explicitly; the whole paradigm of a GPU is that you do not read data back to the CPU.

    Perhaps you should look into GPGPU [gpgpu.org] and CUDA [nvidia.com]. Most of what most people do with computers involves one-way traffic to the GPU, but a small and sometimes well-funded subset of us have bigger plans than video games for the massive parallelization the GPU provides.

    It will be interesting to see if the Nvidia/Intel and AMD/ATI alliances will kill progress in this direction and make us all wait for Intel and AMD to figure out a way to market 256 threads of execution to consumers who won't ever need them, but perhaps it will instead bring about innovations that remove today's bottlenecks, such as host/device bandwidth.
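    To make that round trip concrete, here's a minimal sketch in plain CUDA (assuming a CUDA-capable card and the standard runtime API; the names are illustrative, not from the article): copy data up, run a kernel, and read the result back, which is exactly the host/device traffic an on-die link could make cheaper.

        // readback.cu -- the basic GPGPU pattern: upload, compute, download
        #include <cstdio>
        #include <cuda_runtime.h>

        __global__ void scale(float *data, float factor, int n) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                data[i] *= factor;                   // one element per thread
        }

        int main() {
            const int n = 1024;
            float host[n];
            for (int i = 0; i < n; ++i) host[i] = (float)i;

            float *dev;
            cudaMalloc(&dev, n * sizeof(float));
            cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);  // host -> device

            scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);                     // run on the GPU

            cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);  // read results back
            cudaFree(dev);

            printf("host[10] = %.1f\n", host[10]);   // expect 20.0
            return 0;
        }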

  • by Joe The Dragon ( 967727 ) on Monday August 04, 2008 @05:16PM (#24472897)

    1. It has a very high-speed, low-lag link to the CPU.
    2. It can hook into the RAM controller in the CPU, and maybe even have its own later.
    3. It can work with a real video card in the system.
    4. In a 2+ socket system you can have a full CPU in one socket and a GPU + CPU in the other.

  • by cnettel ( 836611 ) on Monday August 04, 2008 @05:57PM (#24473461)

    Can't this code be put in the driver?

    Not really, as I see it. The driver should naturally be written to use the faster bus, but the availability of this communication channel could also be used to do some special-effect stages on the CPU and then hand the data back (assuming that the effect for some reason cannot be implemented as a shader). Some kind of dynamic off-loading when the GPU turns out to be the bottleneck could be handled in the driver, and that would surely be interesting, but the traditional cores would be a very minor addition to the total performance. It's like having a broadband link, but everyone except for a few academics is just providing dial-up content.

  • by Slaimus ( 697294 ) on Monday August 04, 2008 @06:46PM (#24474001)
    I think the most interesting tidbit is that TSMC will support SOI in the future instead of just bulk CMOS. That is quite an investment they are making, and it will encourage more fabless semiconductor companies to adopt SOI instead of just those working with IBM.
  • by maynard ( 3337 ) on Monday August 04, 2008 @10:27PM (#24475625) Journal

    > The old way, you only really got benefits from stream computing if you were
    > able to set up a massive job for the GPU, set it on its task, wait for
    > completion, and then get the results. Now, maybe new classes of apps become
    > more feasible.

    Yes. I think this is more a response to Cell than to Intel. You'll note that Cell has a very high-bandwidth interconnect between the main CPU and its slave stream processors. This is the same idea. And if they implement good double-precision floating point in those stream units, I predict it will become very desirable for scientific computing.
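    For what it's worth, double precision on current NVIDIA hardware is gated on compute capability 1.3 or higher (the GT200 generation); a quick runtime-API check like this sketch (again, nothing Fusion-specific) is the sort of thing scientific users will be running against any of these parts:

        // dp_check.cu -- query whether device 0 supports double precision
        #include <cstdio>
        #include <cuda_runtime.h>

        int main() {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, 0);       // query device 0
            bool has_double = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
            printf("%s: double precision %s\n", prop.name,
                   has_double ? "supported" : "not supported");
            return 0;
        }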
