Upgrades Hardware

AMD Fusion Details Leaked 94

negRo_slim writes "AMD has pushed Fusion as one of the main reasons to justify its acquisition of ATI. Since then, AMD's finances have changed colors and are now deep in the red, the top management has changed, and Fusion still isn't anything AMD wants to discuss in detail. But there are always 'industry sources' and these sources have told us that Fusion is likely to be introduced as a half-node chip."
  • by Red Flayer ( 890720 ) on Monday August 04, 2008 @03:48PM (#24471647) Journal
    Great, we finally get cold fusion working (by a chip manufacturer? really?) and the first I hear of it, there's been a leak.

    Now we'll never get the NIMBYs to allow us to build fusion reactors.
    • I'm sure you can build one at TechShop http://techshop.ws/
      • Re: (Score:1, Troll)

        I don't know about you fellow geeks, but as soon as I see one more attempt at doing integrated graphics in CPUs, an alarm goes off in my head, the one that means "SLOW CRAP ALERT".

        Integrated graphics suck. Always have, always will. More cost for same perfs == teh suck. Shared memory == teh suck. They will suck at 3D, and render desktops with tearing and artefacts when you drag windows. Teh One True Suck.

        Ideas like this are one of the reasons why computers still take as long to boot as they did ten years ago, b

    • How diffuse will the experience be?

    • by AJWM ( 19027 )

      I was all set to make some witty (or at least half-witty) comment about how I've been reading too much SF lately, given that I react negatively to seeing "fusion" and "leak" in the same headline, but you beat me to it.

      However:
      Great, we finally get cold fusion working (by a chip manufacturer? really?)

      Sure, why not? If there is anything to cold fusion (and at least part of the jury is still out on that), and it depends on the microstructure of the cathode (anode? whichever), who better to perfect it? I use that little c

  • Just one question... (Score:2, Interesting)

    by Anonymous Coward
    WTF is a "half-node chip"?
    • Re: (Score:1, Redundant)

      by PitaBred ( 632671 )

      Proper grammar, on the other hand, is just a "nice to have" (FEWER ads ARE always good).

      • Yeah, or...

        "On the other hand, proper grammar is just as nice to have."

  • by the_humeister ( 922869 ) on Monday August 04, 2008 @04:02PM (#24471843)
    What's the point in putting the GPU on the same die as the CPU? Doesn't it just then get access to slower main memory vs. a discrete video card with faster memory? Motherboards won't have on-board video anymore? This is all rather confusing.
    • by cnettel ( 836611 ) on Monday August 04, 2008 @04:10PM (#24471975)

      A higher level of integration makes sense for laptops. Putting the GPU with the CPU also makes a lot more sense when we consider that the CPU these days is also the place closest to the memory controllers.

      In addition, you have an interconnect between the two which is far faster than anything else available today. However, there is no code today that will use it explicitly; the whole paradigm of a GPU is that you do not read data back to the CPU.

      So, for now, the benefits are really physical size and cost. A CPU-integrated graphics core can be better than one placed on the motherboard when you have an integrated memory controller, but a separate card with dedicated RAM should beat both, as long as you do not expect a new "chatty" paradigm of GPU usage.

      • Re: (Score:3, Interesting)

        by HickNinja ( 551261 )

        I think the chatty paradigm of GPU usage will be more fine-grained "stream computing." When the latency between CPU and GPU is lower, and you share the same cache, the penalty for setting up and launching stream computing tasks on the GPU becomes lower, enabling more things to be accelerated this way.

        The old way, you only really got benefits from stream computing if you were able to set up a massive job for the GPU, set it on its task, wait for completion, and then get the results. Now, maybe new classes of apps become more feasible.
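
        A minimal sketch of that fine-grained pattern, written as plain CUDA for illustration (nothing Fusion-specific; the kernel and sizes are made up): lots of tiny launches where the fixed setup cost, not the arithmetic, dominates. That fixed cost is exactly what a shared die and shared cache would shrink.

          // Hypothetical illustration only: small jobs launched repeatedly,
          // so launch/setup overhead dominates the per-element work.
          #include <cstdio>
          #include <cuda_runtime.h>

          __global__ void tiny_step(float *x, int n) {
              int i = blockIdx.x * blockDim.x + threadIdx.x;
              if (i < n) x[i] = x[i] * 1.0001f + 1.0f;   // trivial work per element
          }

          int main() {
              const int n = 4096;                        // deliberately small job
              float *x;
              cudaMalloc(&x, n * sizeof(float));
              cudaMemset(x, 0, n * sizeof(float));
              for (int iter = 0; iter < 10000; ++iter)   // many fine-grained launches
                  tiny_step<<<(n + 255) / 256, 256>>>(x, n);
              cudaDeviceSynchronize();                   // wait for all queued launches
              printf("done\n");
              cudaFree(x);
              return 0;
          }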

        • Re: (Score:2, Interesting)

          by maynard ( 3337 )

          > The old way, you only really got benefits from stream computing if you were
          > able to set up a massive job for the GPU, set it on its task, wait for
          > completion, and then get the results. Now, maybe new classes of apps become
          > more feasible.

          Yes. I think this is more a response to Cell than to Intel. You'll note that Cell has a very high bandwidth interconnect between the main CPU and its slave stream processors. This is the same idea. And if they implement a good double precision float in those stream units, I predict it will become very desirable for scientific computing.

          • And if they implement a good double precision float in those stream units, I predict it will become very desirable for scientific computing.

            I was wondering how long it would take for someone to get it. I also expect to see accelerated 3D GUIs match the performance of the old Win95 shell running on a modern computer. AKA snappy! Little things like that will snowball until incredibly strong performance in calculations once relegated to specialized hardware (486DX, anyone?) becomes the norm and expected.

      • by Chris Burke ( 6130 ) on Monday August 04, 2008 @04:39PM (#24472373) Homepage

        So, for now, the benefits are really physical size and cost.

        Power, more than size. Off-chip buses like HyperTransport are fairly power intensive, and now CPU-GPU communication won't have to leave the chip. Depending on how they do the integration with the memory controller, it could also mean that less of the chip needs to be active when doing nothing more than screen refreshes from the frame buffer. But the HT link is a pretty big deal power-wise.

      • Re: (Score:3, Interesting)

        by pseudorand ( 603231 )

        there is no code today that will use it explicitly; the whole paradigm of a GPU is that you do not read data back to the CPU.

        Perhaps you should look into GPGPU [gpgpu.org] and CUDA [nvidia.com]. Most of what most people do with computers involves one-way traffic to the GPU, but a small and sometimes well-funded subset of us have bigger plans than video games for the massive parallelization the GPU provides.

        It will be interesting to see if the Nvidia/Intel and AMD/ATI alliances will kill progress in this direction and make us all wait for Intel and AMD to figure out a way to market 256 threads of execution to consumers who won't ever need it, but perhap
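
        For anyone who hasn't seen GPGPU code, this is roughly what today's "one-way traffic" model looks like as a minimal CUDA sketch (a generic SAXPY, not anything AMD- or Fusion-specific): copy the data over, run a massively parallel kernel, copy the results back.

          #include <cstdio>
          #include <cstdlib>
          #include <cuda_runtime.h>

          // y = a*x + y over a big array -- the classic GPGPU "hello world"
          __global__ void saxpy(int n, float a, const float *x, float *y) {
              int i = blockIdx.x * blockDim.x + threadIdx.x;
              if (i < n) y[i] = a * x[i] + y[i];
          }

          int main() {
              const int n = 1 << 20;
              const size_t bytes = n * sizeof(float);
              float *hx = (float*)malloc(bytes), *hy = (float*)malloc(bytes);
              for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

              float *dx, *dy;
              cudaMalloc(&dx, bytes); cudaMalloc(&dy, bytes);
              cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);   // ship data to the GPU
              cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

              saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);    // massively parallel step

              cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);   // read results back
              printf("y[0] = %f (expect 4.0)\n", hy[0]);

              cudaFree(dx); cudaFree(dy); free(hx); free(hy);
              return 0;
          }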

        • by cnettel ( 836611 )

          I am familiar with GPGPU and so on, but the pure scientific market is not large enough to warrant the development of these chips, and it certainly doesn't serve as a plausible excuse for buying ATI. Also note that (almost) all stuff done today as GPGPU is high-latency: you send a large chunk of data and read the results back. You just keep feeding a stream to the computation kernels. The thing is also that they are now taking an existing GPU core, which is still tuned for this kind of workload. These days,

      • In addition, you have an interconnect between the two which is far faster than anything else available today. However, there is no code today that will use it explicitly; the whole paradigm of a GPU is that you do not read data back to the CPU.

        Can't this code be put in the driver?

        • Re: (Score:3, Interesting)

          by cnettel ( 836611 )

          Can't this code be put in the driver?

          Not really, as I see it. The driver should naturally be written to use the faster bus, but the availability of this communication channel could be used for doing some special effect stages on the CPU and then handing the data back (assuming that the effect for some reason cannot be implemented as a shader). Some kind of dynamic off-loading if the GPU turns out to be the bottleneck could be handled in the driver, and that would surely be interesting, but the traditional cores would be a very minor addition to the t

    • Re: (Score:2, Funny)

      Maybe a chip with a huge amount of cache on it?

      Think of a chip with the CPU, GPU, 2-4GB of DDR5 (or more like DDR20 when it happens) cache on it.

      Someone more informed could say what the speed of the cache is. I just know that it is fast. If there was a chip with a few gig of this fast cache on it, it could make a nice system. Then again, it all depends on how it is implemented.

      • With enough cache to replace RAM, you'd have a system so expensive that even IBM couldn't dream of building it, let alone sell it.

        The reason we don't do Just That already is that cache is fabulously expensive. It takes so many more transistors on the chip that it simply isn't feasible.

        Remember the first Celeron? It had no L2 cache; that's why it SUCKED all that much. Remember the last Alpha? It had 8MB of cache; that's why it simply killed each and every other CPU at the time. Well, that, and that its circuits were hand

        • Actually, if you had 4 GB of memory on chip, you'd probably not wire it as cache, but as main system RAM at the full processor speed. DIMM slots, if any, would then just be a huge disk cache or possibly RAM drives for your swap.

          The hard part is fitting that many transistors onto the chip along with the cores. Four gigabytes means 32 gigabits, plus the interface circuitry to the memory controller. 4 GB on-chip would add sagans of transistors to a design. A Core 2 Quad has about 580 million transistors.

          You're
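
          Rough arithmetic behind that, as a throwaway sketch (assuming the usual textbook figures of roughly 6 transistors per SRAM cell and 1 per DRAM cell, and ignoring decoders, sense amps and redundancy):

            #include <cstdio>

            // Lower-bound estimates only; real arrays add plenty of overhead.
            int main() {
                const double bits = 32e9;   // 4 GB = 32 gigabits, as above
                printf("4 GB as 6T SRAM cells: ~%.0f billion transistors\n", bits * 6 / 1e9);  // ~192
                printf("4 GB as 1T DRAM cells: ~%.0f billion transistors\n", bits * 1 / 1e9);  // ~32
                printf("Core 2 Quad, whole CPU: ~0.58 billion transistors\n");
                return 0;
            }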

    • Re: (Score:3, Interesting)

      1. It has a very high-speed, low-lag link to the CPU.
      2. It can hook into the RAM controller in the CPU and maybe even have its own later.
      3. It can work with a real video card in the system.
      4. In a 2+ socket system you can have a full CPU in one socket and a GPU + CPU in the other.

    • by Ant P. ( 974313 )

      Putting the GPU on the CPU, in AMD's case, means the graphics chip doesn't have to access main memory by proxy through the CPU's on-board memory controller.

      What'd be nicer is if they would stop pretending it's a discrete processing unit and just call it SSE5 or something, so that everyone gets access to a metric assload of vector/stream hardware without any of this stupid "driver" business.
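
      To illustrate the "just make it an ISA extension" idea: this is what direct access to vector hardware already looks like with plain SSE intrinsics (a hypothetical illustration, not AMD's actual SSE5 proposal). The hardware is reached through ordinary instructions, with no driver or command queue in between.

        #include <cstdio>
        #include <xmmintrin.h>   // SSE intrinsics -- no driver, just instructions

        int main() {
            float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];
            __m128 va = _mm_loadu_ps(a);
            __m128 vb = _mm_loadu_ps(b);
            __m128 vc = _mm_add_ps(va, vb);   // one instruction, four adds
            _mm_storeu_ps(c, vc);
            printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);
            return 0;
        }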

      • by hr.wien ( 986516 )
        One step at a time. We're going to end up there but it's silly to try to do it all in one go. Especially if you're AMD.
    • by ponos ( 122721 )
      Most integrated graphics already depend on system memory. Interestingly, the AMD solution does not forbid the addition of a graphics card but rather allows some level of cooperation between the two. It appears that you could get a sort of "crossfire" between the CPU-GPU and the standalone GPU if you added an extra card. That would also help by letting the power-hungry standalone GPU be powered down for simple 2D work or when on battery.
    • What's the point in putting the GPU on the same die as the CPU?

      Reduced costs, hopefully. And killing performance, too.

      Doesn't it just then get access to slower main memory vs. a discrete video card with faster memory?

      Completely right.

      Motherboards won't have on-board video anymore?

      A CPU is on-board, yes?

      This is all rather confusing.

      No.

  • In related news ... (Score:1, Informative)

    by oldspewey ( 1303305 )

    In related news, there are rumours, just recently denied [dailytech.com], that Nvidia is exiting the chipset business.

  • AMD's problem. (Score:5, Insightful)

    by LWATCDR ( 28044 ) on Monday August 04, 2008 @04:05PM (#24471913) Homepage Journal

    Was the rush to both a native quad core and quad core on the desktop.
    Desktops matter less and less these days. Notebooks are more and more important. You don't put quad core in notebooks yet.
    If AMD can pull off Fusion and have it compete with Intel in the laptop space, they may actually do well again.
    Their current problem is that they are not competing with the Atom yet. The netbook may be the next big battleground. Most people don't want a faster machine anymore. And most laptop users don't want a faster laptop. What they want is one that runs longer and is smaller and lighter.

    • Re: (Score:1, Interesting)

      by Anonymous Coward

      That's interesting, because I'm typing this on my quad-core laptop.. www.pcmicroworks.com www.sager.com www.dell.com/xps

      Quad-core laptops aren't even rare anymore. Expensive, yes, but still pretty common..

      • Re:AMD's problem. (Score:5, Interesting)

        by nxtw ( 866177 ) on Monday August 04, 2008 @04:23PM (#24472163)

        That's interesting, because I'm typing this on my quad-core laptop.. www.pcmicroworks.com www.sager.com www.dell.com/xps

        Quad-core laptops aren't even rare anymore. Expensive, yes, but still pretty common..

        Yes, they are still rare. The few "laptops" with quad-core CPUs are using power-hungry desktop or server class CPUs and weigh over 10 lbs. You won't see a quad-core CPU in a traditional (less than 7 lbs.) laptop until these hit the market [wikipedia.org] in the near future.

        • by LWATCDR ( 28044 )

          I think dual core will rule laptops for a while yet. The simple reason is that most PC users are not screaming for more power. Excluding gamers, most users' computers are fast enough until they become bogged down with malware.
          Netbooks and nettops, I predict, will be the next big thing.
          We don't need bigger and faster PCs anymore.
          We need smaller, lighter, and more convenient.

      • by p0tat03 ( 985078 )
        They've also been able to shove SLI video cards into laptops; that doesn't make them common. The heat and power issues with quad cores still make it completely impossible to put one into a laptop and retain a reasonable level of mobility and battery life.
        • Heat? Heat kills my laptops faster than their warranty expires! Not because of the CPU (they got at least THAT right; laptop CPUs are cooled well now), but the GPUs just BURN. When will they understand and water-cool the fucking things?

          • As soon as they find a water-cooling set that doesn't electrocute your balls until after the warranty period has expired?

      • by LWATCDR ( 28044 )

        I would have to say that they are still pretty uncommon.
        Gaming laptops are nowhere near the majority of laptop users.
        Even then I would guess that dual-core systems outsell quad-core laptops at least 50 to 1 if not higher, but I am just guessing.

    • by rachit ( 163465 )

      Laptops matter.
      Desktops don't, really.

      But servers do, which is what AMD was targeting with the quad core and where AMD was destroying Intel when the various Athlons first came out.

      • by LWATCDR ( 28044 )

        But Intel took a lot back with their "fake" quad cores. AMD lost a good bit of market share. They should have pushed out a fake quad core while they were working on the true quad. Then throw in that the first true quads had some issues and you can see AMD took a bit of a hit.

    • by mikael ( 484 )

      First quad-core laptop hits U.S. [cnet.com]

      Xtremenotebooks launches quad core laptop [engadget.com]

      Quad-core notebook [notebookreview.com]

      Dell launches new quad-core laptop [arstechnica.com]

      Build your own quad-core laptop

    • by downix ( 84795 )

      Pardon? Atom is a "me too" to AMD's Geode technology (which has now been under development for over a decade, dating back to the good ol' Cyrix).

    • Actually, I'd say the Atom is the late player compared to the Geode and the C3/C7 lineup from Via for low-power devices. When Transmeta was still selling chips instead of being a tech licensing company, Crusoe was handing Intel an empty lunch sack and slapping it on the ass. Arm and Freescale have been in that market with non-x86 chips for a long time.

      Intel might have a solid entry finally in the low-power space, but they are hardly pioneering anything there with Atom.

  • by Kohath ( 38547 ) on Monday August 04, 2008 @04:06PM (#24471923)

    Without cost and performance (speed) info, this is not really interesting.

    Facts in the story:

    - AMD using TSMC
    - AMD using 40nm instead of 45 or 32
    - DirectX 10.1 support with the R800 engine on the chip.

    None of this matters unless it does something better and/or cheaper than some other option.

    • The really interesting news to me is not that Fusion is going to be built at TSMC (since that was pretty much expected; ATi uses TSMC already and AMD's fabs are already suffering through the transition to 45nm), but that Bulldozer is going to be built at TSMC! I guess it makes sense; AMD is still a ways away from having 32nm tech ready, but this will be the first major x86 chip to be built by a third party. I would expect AMD to quickly go to 32nm and manufacture Bulldozer in tandem with TSMC (or perhaps skip
    • Re: (Score:3, Interesting)

      by eebra82 ( 907996 )
      You forgot the most important piece:

      The first Fusion processor is code-named Shrike, which will, if our sources are right, consist of a dual-core Phenom CPU and an ATI RV800 GPU core. This news is actually a big surprise, as Shrike was originally rumored to debut as a combination of a dual-core Kuma CPU and a RV710-based graphics unit.

      And just because you don't care about this news does not mean that everybody else will agree with you.

      • by Kohath ( 38547 )

        Does the combined Fusion chip have advantages over the separate chips?

        If not, there's no point. I don't think anyone doubts that the separate chips exist.

        • one processor die < two processor dies
          one chip socket < two chip sockets

          Fewer parts means lower cost. TFA didn't say, but it's entirely possible to put a PCIe controller or a HyperTransport link on a processor die, too (that's where HT links are now). If they dedicate a link directly from the CPU cores to the GPU core without going out to the chipset and back, then you eliminate all the traces on the motherboard for machines that aren't doing AMD's hybrid graphics. If the motherboards need fewer traces on the

          • by Kohath ( 38547 )

            Theoretically, yes. Unless the huge die size results in low yield, or thermal issues, or time-to-market issues, or integration of unused functionality that makes the chip too expensive to compete with chips with smaller functionality sets, or some other issue.

            And even if it's only a little cheaper, then it may not be worthwhile either.

            I'm in the business. Integration is great sometimes, ok sometimes, and bad sometimes. Single-chip solutions sometimes lose out to multi-chip solutions.

            • Well, the standard "all else being equal" of course applies.

              Going from 90nm and 65nm to 40nm and 32nm parts should help deal with a few issues, of course. Will two CPU cores and one GPU core at 40nm actually be any larger than four CPU cores at 65nm? The RV800 is a pretty small chip by itself already.

              I'm not in the business, so I yield to you that you probably have better information and more insight on the topic. Given AMD's problems of late, it's probably prudent to bring up the possible downsides. Yet I

    • It's silly to expect price and performance information for something that's still in the planning stages. eebra is correct, just because the development phase doesn't interest you doesn't mean it's objectively uninteresting. I myself am interested in these early decisions that, by the way, will ultimately affect what it costs and how well it performs.
  • Half-Node? (Score:2, Interesting)

    by abshnasko ( 981657 )
    I did a Google search on this topic, but I can't really determine the significance of what a 'half-node' processor is. Is there something inherently special about it? Can someone more knowledgeable about processors explain this?
    • I think RTFA explains it in part:

      As Fusion is shaping up right, we should expect the chip to become the first half-node CPU (between 45 and 32 nm) in a very long time.

    • Re:Half-Node? (Score:4, Informative)

      by karvind ( 833059 ) <karvind.gmail@com> on Monday August 04, 2008 @05:03PM (#24472695) Journal
      AMD has multiple "nodes" per technology. So within 45nm itself, they have 7 to 9 nodes. Each node represents a performance improvement over the previous one by using new technology innovations. It is still 45nm technology, but you may add, for example, a higher-stress liner to improve mobility, hence more current and hence more performance. It doesn't change any of the basic ground rules. These nodes are typically in the 3-6 month range (rather than the 18 months suggested by Moore's law). But then these nodes don't really improve performance by 2x either. The first node is the hardest: get the ground rules right, get a yielding process, etc. Once the foundation is set, it is relatively easier to experiment with new process technologies.
      • by slew ( 2918 ) on Monday August 04, 2008 @07:46PM (#24474539)

        I can't comment on whether your description of a "node" is true for AMD or not, but the rest of the silicon industry (via the ITRS roadmap) labels technology nodes like 90nm, 65nm, 45nm, 32nm, 22nm, 16nm, etc...

        Historically, the ITRS used the term "technology node" to attempt to provide a single, simple indicator of overall industry progress in IC technology by defining it to be the smallest half-pitch of contacted metal lines on any product (usually DRAM), but they have since abandoned this practice of declaring technology nodes (because various parameters are now scaling at widely different rates). Nowadays, in the rest of the semiconductor industry a node often corresponds to some major process-enabling technology (e.g., TSMC's 45nm combined 193nm immersion photolithography, strained silicon and an extreme low-k inter-metal dielectric material).

        If you meant that AMD has 7-9 different nodes that evolved from the 45nm node, I guess that's consistent with this too, but not that consistent with everyone else's use of "node"; they would probably call that a "half-node". If you meant that AMD's 45nm technology uses up to 7 to 9 different scaling factors from other technology nodes, I guess that is consistent with this too, but I don't think that's standard industry usage of the word "node".

        AFAIK, the industry uses the term "half-node" when the node sits somewhere between the main nodes (e.g., at TSMC, 40nm is considered a half-node from 45nm). Normally a half-node is created by some sort of parametric scaling of some of the features of a regular process node to achieve higher transistor density (generally something theoretically in reach of a regular process node, by tweaking scaling by different amounts). Of course there are usually several different varieties of half-nodes (low leakage, high speed variants, etc.) developed. But that's no different than the fact that there are many different variants at a particular node in any case.

        Often process technology folks design something like a 45nm technology node and, after they are comfortable with being able to yield it, they spend some time tweaking it to see if they can get a shrink; if the tweakage is good enough, they market it as another "half-node" design point. This is a pretty good tradeoff since they can offer a "shrink" to customers using the main node as a cost reduction exercise or a way to scale customized parts of their designs (e.g., cells, RAMs, I/O pads) w/o radical redesigns (which might happen between major technology shifts), giving a good !/$ for their engineering efforts.

        The reason why many folks think it's weird to design something that probably has a lot of custom stuff, like a CPU-GPU hybrid, in a half-node is that new things take a long time to design, and with process technology a moving target, it's nice to be able to schedule in a "shrink" and get a low-effort cost reduction during the useful sellable lifetime of a product. By starting production in a half-node, to get a cost reduction worth the engineering effort, you'll probably have to redesign/layout the chip in the next technology node (say 32nm, which may have lots of different non-compatible features and take lots of effort, like a new high-k gate dielectric).
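
        The arithmetic behind why a half-node shrink is only a modest win, as a back-of-the-envelope sketch (assuming ideal linear scaling, which real processes rarely achieve):

          #include <cstdio>

          // Ideal-scaling estimate only; real shrinks fall short of these numbers.
          int main() {
              const double full = 45.0, half = 40.0, next = 32.0;  // nm
              const double h = half / full, f = next / full;
              printf("45nm -> 40nm half-node: %.2fx linear, %.2fx area\n", h, h * h);  // ~0.89, ~0.79
              printf("45nm -> 32nm full node: %.2fx linear, %.2fx area\n", f, f * f);  // ~0.71, ~0.51
              return 0;
          }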

  • Because we all know how well AMD has held to "release dates" recently.....

  • by Slaimus ( 697294 )
    I think the most interesting tidbit is that TSMC will support SOI in the future instead of just bulk CMOS. That is quite an investment they are making, and will encourage more fab-less semiconductor companies to adopt SOI instead of just those working with IBM.
