
AMD's Fusion Processor Combines CPU and GPU

ElectricSteve writes "At Computex 2010 AMD gave the first public demonstration of its Fusion processor, which combines the Central Processing Unit (CPU) and Graphics Processing Unit (GPU) on a single chip. The AMD Fusion family of Accelerated Processing Units not only adds another acronym to the computer lexicon, but ushers in what AMD says is a significant shift in processor architecture and capabilities. Many of the improvements stem from eliminating the chip-to-chip linkage that adds latency to memory operations and consumes power — moving electrons across a chip takes less energy than moving these same electrons between two chips. The co-location of all key elements on one chip also allows a holistic approach to power management of the APU. Various parts of the chip can be powered up or down depending on workloads."
  • by Anonymous Coward
    Heh. They should use Apu from the Simpsons in their advertising...
  • This could be interesting. Intel might have to change plans for Larrabee again.
    • It's not so interesting. I rarely wait on my CPU. It's my I/O and my GPU that hit the limits. When will NVIDIA make a GPU with a CPU core? That could be a real game-changer.

  • by sco08y ( 615665 ) on Friday June 04, 2010 @04:20AM (#32455954)

    “Hundreds of millions of us now create, interact with, and share intensely visual digital content,” said Rick Bergman, senior vice president and general manager, AMD Product Group. “This explosion in multimedia requires new applications and new ways to manage and manipulate data."

    So people watch video and play video games, and it's still kinda pokey at times. We're way past diminishing marginal returns on improving graphical interfaces.

    I bring it up, because if you're trying to promote a technology that actually uses a computer to compute, you know, work with actual data, you are perpetually sidetracked by trying to make it look pretty to get any attention.

    Case in point: working on a project to track trends over financial data, there were several contractors competing. One had this software that tried to glom everything into a node and vector graph, which looked really pretty, but didn't actually do anything to analyze the data.

    But to managers, all they see is that those guys have pretty graphs in their demos and all we had was our research into the actual data... all those boring details.

    • by Deliveranc3 ( 629997 ) <deliverance@NOSpaM.level4.org> on Friday June 04, 2010 @04:47AM (#32456112) Journal
      |"Hundreds of millions of us now create, interact with, and share intensely visual digital content," said Rick
      |Bergman, senior vice president and general manager, AMD Product Group. "This explosion in multimedia requires
      |new applications and new ways to manage and manipulate data."

      |So people watch video and play video games, and it's still kinda pokey at times. We're way past diminishing marginal returns on improving graphical interfaces.


      Well sure YOU DO, but your Gran still has a 5200 with "Turbo memory" (actually that's only 3 years old, she probably has worse). This will be the equivalent of putting audio on the motherboard: a low baseline quality, but done at no cost.

      |I bring it up, because if you're trying to promote a technology that actually uses a computer to compute, you know, work with actual data, you are perpetually sidetracked by trying to make it look pretty to get any attention.

      Bloat is indeed a big problem; programs are exploding to GIGABYTE sizes, which is insane. OTOH, Linux's reuse of libraries seems not to have worked out. There is too little abstraction of the data, so each coder writes their own linked list, red-black tree, or whatever algorithm instead of just using the methods from the OS.

      |Case in point: working on a project to track trends over financial data, there were several contractors competing. One had this software that tried to glom everything into a node and vector graph, which looked really pretty, but didn't actually do anything to analyze the data.

      Sounds like a case of "not wanting to throw the baby out with the bathwater." If they have someone of moderate intelligence on staff, that person can find a way to pull useful information out of junk data. He/she will resist removing seemingly useless data, because they occasionally use it and routinely ignore it. A pretty presentation can also be very important in terms of usability; remember, you have to look at the underlying code, but the user has to look at the GUI, often for hours a day.

      |But to managers, all they see is that those guys have pretty graphs in their demos and all we had was our research into the actual data... all those boring details.

      I can't comment on the quality of your management, but once again, don't underestimate ease of use, or even perceived ease of use (consider how long you will keep trying to learn a new tool if frustrated; the perception that something is as easy as possible is a huge boon... think iCrap).

      Anyway, back to Fusion: this is EXACTLY what Dell wants, a bit lower power, less heat, a significantly lower price and a baseline for their users to be able to run Vista/7 (7 review: better than Vista, don't switch from XP). So while it's true that this chip won't be dominant under ANY metric, and would therefore seem to have no customer base, its attractiveness to retail is such that they will shove it down consumer throats and AMD will reap the rewards.

      I'm curious about these things in a small form factor: now that SD/MicroSD cards have given us nano-size storage, we can get back to finger-sized computers that attach to a TV.

      SFF Fusion for me!
      • by Anonymous Coward on Friday June 04, 2010 @06:11AM (#32456444)

        | This will be the equivalent of putting audio on the motherboard, a low baseline quality but done with no cost.

        I don't think you are viewing this correctly. I wish they didn't call it a GPU, because your thought on the matter is what people are going to think of first. Instead, think of it as the fusion between a normal threaded CPU and a massively parallel processing unit. This thing is going to smoke current CPUs in things like physics operations, without the need of anything like CUDA and without the performance limit of the PCIe bus. The biggest problem with discrete cards is pulling data off the cards, because the PCIe bus is only fast in one direction (data into the card). This thing is going to be clocked much higher than discrete cards, in addition to having direct access to the memory controller.

        I don't think many have even scratched the surface of what a PPU (Parallel Processing Unit) can do or how it can improve the quality of just about any application ... I think this is going to be Hott.
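
        For a rough sense of the bus argument above, here is a back-of-envelope sketch in Python (the ~8 GB/s PCIe figure, the ~1 TFLOP/s GPU figure and the work-per-element are illustrative assumptions, not measurements of any real part):

        # Rough model: time to offload N floats over PCIe vs. time spent computing on them.
        # All constants are assumed, order-of-magnitude numbers.
        def offload_times(n_floats,
                          pcie_bw=8e9,        # assumed ~8 GB/s effective PCIe bandwidth
                          gpu_flops=1e12,     # assumed ~1 TFLOP/s of GPU throughput
                          flops_per_elem=10): # assumed arithmetic per element
            bytes_moved = n_floats * 4 * 2    # copy to the card, copy the result back
            transfer_s = bytes_moved / pcie_bw
            compute_s = n_floats * flops_per_elem / gpu_flops
            return transfer_s, compute_s

        for n in (1_000_000, 100_000_000):
            t, c = offload_times(n)
            print(f"{n:>11} floats: transfer {t*1e3:6.1f} ms, compute {c*1e3:6.1f} ms")

        For low arithmetic intensity the copies dominate the kernel time, which is exactly the overhead an on-die APU with a shared memory controller avoids.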

        • by hedwards ( 940851 ) on Friday June 04, 2010 @06:23AM (#32456494)
          It'll also be interesting to see how they manage to use this in tandem with a discrete card, as in preprocessing the data and assisting the discrete card to be more efficient.
        • This thing is going to smoke current CPUs in things like physic operations without the need of anything like CUDA and without the performance limit of the PCIe bus.

          Ummm, but a video card has its own super-fast memory (and a lot of it) and direct access to system RAM, while this little thing will have to share memory access and caches with the CPU.

          | without the need of anything like CUDA

          I dare say that this is totally false.

      • Re: (Score:3, Informative)

        | Well sure YOU DO, but your Gran still has a 5200 with "Turbo memory" (actually that's only 3 years old, she probably has worse).

        What year are you living in?
        1: Turbocache didn't exist until the 6100.
        2: The 5200 is seven years old
        3: You can apparently still buy them: eBuyer Link [ebuyer.com]

        • Re: (Score:3, Funny)

          by FreonTrip ( 694097 )
          If you think that's bad, you can still buy Radeon 7000 cards in lots of places, and they're fully a decade old. Look around a bit more, and you can find Rage128 Pro and - yes, your nightmares have come back to haunt you - 8 MB Rage Pro cards at your local CompUSA store. For the right market, old display technologies can still easily be good enough - mach64 support in X.org is good 'n' mature at this point, and if you're running a command-line server with framebuffer support, that's all you need.
    • by sznupi ( 719324 )

      They specifically talked about "creating, interacting, sharing ... managing, manipulating" - and you just dismissed that part and criticised "consuming"?

      There is a quite popular usage scenario which is nowhere near diminishing marginal returns - video editing. The architecture of Fusion seems perfect for that. It will also help in image editing; even if it's not so desperately needed in this case, it will come in handy with what's enabling the video boom - reasonably cheap digicams shooting fabulous 720p, even at this poin

      • by sco08y ( 615665 )

        | Yeah, sure, go ahead and call it "crap"... but that sea of crap will give us many great videographers. Especially if a large portion of them will finally be able to afford quite a sensible camera and editing rig (remember, the world encompasses not only developed countries)

        I'm not saying it's crap. Other comments pointed out how this is far more than simply glomming a GPU onto a CPU, and I don't doubt that. I'm complaining about the eye-candy-oriented hype, and I'm stupefied by the idea that, even in third-world countries, there's some desperate shortage of video. Do you really think that their problems would be solved if only they could set up their own cable news networks?

        • by sznupi ( 719324 )

          ...

          If not "developed" then it's suddenly "third world"? For that matter, why does first world (numbering designation is a bit obsolete btw) accept the existance of indy videographers? Aren't they useless?

        • Actually, if they could use video to democratize the availability of information, you could really be on to something...
  • I wonder how well it would work in a virtualization environment such as VMware, Xen, KVM, etc. I could really see a point to a server that could easily off-load GPU work from thin clients that are running virtual desktops, without needing to manage a huge box full of GPU cards.
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      It doesn't bring anything to the table yet. Firstly, IOMMUs need to be more prevalent in hardware; secondly, there needs to be support for using them in your favourite flavour of virtualisation (Xen will be there first).

      That said, we'll get ugly vendor-dependent software wrapping of GPU resources, under the guise of better sharing of GPUs between VMs, but really so you're locked in.
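
      As a practical aside, a hedged sketch (standard Linux proc/sysfs paths, nothing Xen-specific, and not a full capability check) of how you might at least see whether the IOMMU prerequisite is met on a given host:

      # Quick check: does this Linux host have an active IOMMU?
      from pathlib import Path

      def iommu_active() -> bool:
          # The kernel populates /sys/kernel/iommu_groups when VT-d / AMD-Vi
          # is present and enabled.
          groups = Path("/sys/kernel/iommu_groups")
          return groups.is_dir() and any(groups.iterdir())

      if __name__ == "__main__":
          print("kernel cmdline:", Path("/proc/cmdline").read_text().strip())
          print("IOMMU groups present:", iommu_active())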

  • Moving electrons (Score:4, Informative)

    by jibjibjib ( 889679 ) on Friday June 04, 2010 @04:26AM (#32455992) Journal
    "Moving electrons between two chips" isn't entirely accurate. What moves is a wave of electric potential; the electrons themselves don't actually move very far.
    • Last I checked, nobody was quite sure whether all electrons moved or only some of them. Has this actually been ironed out? Do any of these chips actually switch fast enough that your statement is correct regardless?

      • Electrons will drift (look up 'electron mobility' on Wikipedia), but the GP is right in that it is the wave motion of the potentials that is primarily the means by which the information travels. At the same time, he is just being a bit nit-picky. I think what the person in the article is trying to say is that to go from one chip to another, you usually need to provide a buffer (i.e. an amplifier) on the output interface. This gives you better noise margins on the receiver chip. This would be your class
        • Adding buffers will increase the power of the chip.

          Just to be clear, you mean power consumption, right? And to be still more clear, adding buffers increases latency because you have to wait for more transistors to switch? And of course more power means more opportunity for noise in other systems...

          • Re: (Score:3, Informative)

            by stewbee ( 1019450 )
            Correct, I was referring to power consumption. At the die level, you can make a buffer out of a simple transistor/FET, so the time delay added would be pretty small. The noise that I was referring to in getting from A to B is mostly an EMI sort of issue. A trace that runs from chip to chip, depending on how long it is, is susceptible to picking up EM radiation from other sources to the point where the EMI would corrupt the received signal and possibly give the wrong value (a p(0|1) or p(1|0) condition). There are
        • Electrons will drift (look up 'electron mobility' on Wikipedia), but the GP is right in that it is the wave motion of the potentials that is primarily the means by which the information travels. At the same time, he is just being a bit nit-picky.

          Agreed, and it's a pretty minor nit. Electrons do move (there would be no current if they did not); they just don't have to move all the way for the electric potential to reach the destination.

          The nit I would have picked would have been with the phrase "the sa

          • Good points. I am not an IC designer, so I am not sure of the exact implementation or best practices at the die level. My background is in electromagnetics and RF design, but I guess I know enough from those fields to think that similar design practices would still be good.
  • Yeah! (Score:5, Interesting)

    by olau ( 314197 ) on Friday June 04, 2010 @04:36AM (#32456056) Homepage

    I'm hoping moving things into the CPU will make it easier to take advantage of the huge parallel architecture of modern GPUs.

    For what, you ask?

    I'm personally interested in sound synthesis. I play the piano, and while you can get huge sample libraries (> 10 GB), they're not realistic enough when it comes to the dynamics.

    Instead, people have been researching physical models of the piano: you simulate a piano in software, or its main components, and extract the sound from that. Nowadays there are even commercial offerings, like Pianoteq (www.pianoteq.com) and Roland's V-Piano. The problem is that while this improves the dynamics dramatically, they're not accurate enough yet to produce a fully convincing tone.

    I think that's partly because nobody understands how to model the piano fully yet, at least judging from the research literature I've read, but also very much because even a modern CPU simply can't deliver enough FLOPS.
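
    To give a feel for what "physical modelling" means here, a minimal Python sketch of the classic Karplus-Strong plucked string (nowhere near the piano models in Pianoteq or the V-Piano; the frequency, decay and file name are arbitrary choices):

    # Minimal Karplus-Strong string synthesis: a noise-filled delay line
    # smoothed by a two-sample average, which imitates energy loss in a string.
    import random, struct, wave

    def pluck(freq=220.0, rate=44100, seconds=2.0, decay=0.996):
        n = int(rate / freq)  # delay-line length sets the pitch
        buf = [random.uniform(-1.0, 1.0) for _ in range(n)]
        out = []
        for i in range(int(rate * seconds)):
            buf[i % n] = decay * 0.5 * (buf[i % n] + buf[(i + 1) % n])
            out.append(buf[i % n])
        return out

    samples = pluck()
    with wave.open("pluck.wav", "w") as f:
        f.setnchannels(1); f.setsampwidth(2); f.setframerate(44100)
        f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

    A real piano model adds coupled stiff strings, a nonlinear hammer and a soundboard on top of this, which is where the FLOPS go.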

    • If 10GB sample libraries aren't good enough, try 100GB, then 1000GB.

      I thought the whole instrument modeling thing died because disk space is cheaper than processing power. That 1000GB of samples fits on a $100 drive.
  • Just like my Core i3 sitting about 20 inches to the left, then. Yes, I know they're incorporating a better GPU, but they're touting too much as new.

    • by odie_q ( 130040 ) on Friday June 04, 2010 @04:54AM (#32456140)

      The technical difference is that while your Core i3 has its GPU as a separate die in the same packaging, AMD Fusion has the GPU(s) on the same die as the CPU(s). The Intel approach makes for shorter and faster interconnects, the AMD approach completely removes the interconnects. The main advantage is probably (as is alluded to in the summary) related to power consumption.

    • by sznupi ( 719324 ) on Friday June 04, 2010 @06:08AM (#32456430) Homepage

      Well, "incorporating a better GPU" makes quite a bit of difference, considering i3/i5 solution isn't much of an improvement almost anywhere (speed - not really, cost - yeah, I can see Intel willingly passing the savings...anyway, cpu + mobo combo hasn't got cheaper at all, power consumption is one but mostly due to how Intel chipsets were not great at this); and seemed to be almost a fast "first" solution, announced quite a bit after the Fusion.

      • by Rudeboy777 ( 214749 ) on Friday June 04, 2010 @10:03AM (#32458554)
        | cost - yeah, I can see Intel willingly passing on the savings... anyway, the CPU + mobo combo hasn't got any cheaper

        This is where Intel's monopolistic behaviour rears its ugly head. In the past, the GPU needed to be integrated on the motherboard. Now it's on the CPU, but Intel motherboard chipsets cost the same as previous generations. Seems like a terrific market opportunity for 3rd-party chipset vendors to make an offering (like the good old days, when you could choose from VIA, Nvidia, SiS, Intel, ...).

        But wait: Intel no longer allows 3rd parties to produce chipsets for its CPUs, and keeps the profits from the artificially inflated chipset market to itself. Intel may have the performance crown, but it's reasons like this (and the OEM slush funds to lock AMD out of Dell and other vendors) that keep me from supporting "Chipzilla".
  • by Rogerborg ( 306625 ) on Friday June 04, 2010 @04:57AM (#32456168) Homepage

    Let's party like it's 1995! Again! [wikipedia.org]

    Slightly less cynically, isn't this (in like-for-like terms) trading a general-purpose CPU core for a specialised GPU one? It's not like we'll get more bang for our buck; we'll just get more floating-point bangs and fewer integer ones.

    • Slightly less cynically, isn't this (in like-for-like terms) trading a general purpose CPU core for a specialised GPU one? It's not like we'll get more bang for our buck, we'll just get more floating point bangs, and fewer integer ones.

      As long as it takes more than four Intel CPU cores (this is a pseudorandom number; it's what I recall from some Intel demo) to do the job of a halfway decent GPU, this approach will be rational for any user who cares about 3D graphics. Intel would like us to have "thousands" of CPU cores (I assume that means dozens in the near term) and to ditch our GPUs, and the change cannot come fast enough for me... but it's not here yet.

    • Very few people need more than a dual-core - the cores just sit there twiddling their bits. Sacrifice a core or two for a good GPU, and you have massively simplified the design of the system, saved power and saved space.

      Sure, it's not a new idea - in IT we seem to progress in spirals. It's time this idea came around again...

    • Re: (Score:3, Interesting)

      by the_one(2) ( 1117139 )

      | It's not like we'll get more bang for our buck; we'll just get more floating-point bangs and fewer integer ones.

      You can accelerate integer operations as well on "new" GPUs. This means that for highly parallel, data-independent operations you will get a ton of bang for your buck, without having to send data to graphics memory first and then pull the results back.

  • The new paradigm. Snort...
  • Open Source drivers? (Score:5, Interesting)

    by erroneus ( 253617 ) on Friday June 04, 2010 @05:54AM (#32456386) Homepage

    Will the drivers for the graphics be open source, or will we still be stuck in the proprietary-driver hole we have been trying to climb out of for over a decade?

    • The more things change, the more they stay the same.
      Don't expect high-quality, high-performance open source drivers, as people high up in the company will think the information revealed in those drivers will help the competition.

    • by Skowronek ( 795408 ) <skylarkNO@SPAMunaligned.org> on Friday June 04, 2010 @07:36AM (#32456944) Homepage

      The documentation needed to write 3D graphics drivers has been consistently released by ATI/AMD since the R5xx. In fact, yesterday I was setting up a new system with an RV730 graphics card, which was both correctly detected and correctly used by the open source drivers. Ever since AMD started supporting the open source DRI project with money, specifications and access to hardware developers, things have improved vastly. I know some of the developers personally; they are smart, and I believe that given this support they will produce an excellent driver.

      It's sad to see that with Poulsbo Intel did quite an about-face, and stopped supporting open source drivers altogether. The less said about nVidia the better.

      In conclusion, seeing who is making this Fusion chip, I would have high hopes for open source on it.

  • APU (Score:3, Funny)

    by Capt James McCarthy ( 860294 ) on Friday June 04, 2010 @06:39AM (#32456568) Journal

    Looks like the Quickie Mart has a lawsuit on their hands.

  • Meh. (Score:2, Insightful)

    by argStyopa ( 232550 )

    Sounds like a non-advancement to me.

    "Look, we can build a VCR *into* the TV, so they're in one unit!"

    Yeah, so when either breaks, neither is usable.
    Putting more points of failure into a device just doesn't sound like a great idea.

    In the last 4 computers I've built/had, they've gone through at least 6-7 graphics cards and 5 processors. I can't remember a single one where they both failed simultaneously.

    Now, if this tech will reduce the likelihood of CPU/GPU failures (which, IMO, are generally due to heat or

    • Re:Meh. (Score:4, Insightful)

      by mcelrath ( 8027 ) on Friday June 04, 2010 @07:56AM (#32457100) Homepage

      Sounds like you need a new power supply, or a surge suppressor, or a power conditioner, or an air conditioner.

      You shouldn't see that many failures. Are you overclocking like mad? Silicon should last essentially forever compared to other components in the system, as long as you keep it properly cooled and don't spike the voltage. Removing mechanical connectors by putting things on one die should mean fewer failure modes. A fanless system on a chip using a RAM disk should last essentially forever.

      A single chip with N transistors does not have N failure modes. It's essentially tested and will not develop a failure by the time you receive it. A system with N mechanically connected components has a failure rate of N*(probability of failure of one component), and it's always the connectors or the cheap components like power supplies that fail.
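
      To put a number on that (an idealized independence model; the 1% figure is made up for illustration): if each of N separately connected parts fails with probability p per year, the assembly fails with probability 1 - (1 - p)^N ≈ N*p. Six parts at p = 1%/year gives roughly a 5.9% chance per year for the assembly, versus 1% for a single integrated die.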

      • I have built as many systems and have had no GPU failures and only one CPU failure (and that was because it was a first generation socket 462 and the brick and mortar I was buying the parts from didn't have any HSFs for 462... he said 'oh, just use this really beefy socket 7 HSF, that will be enough!' Pff, well, it wasn't. At least the bastard replaced the chip when it died, at which time he did have 462 HSFs).

        I'm willing to bet that the GP buys cheap crap like ASRock and generic PSUs that couldn't perfor
        • Then you'd be wrong.

          Firstly, I've been building PC's since perhaps 1984. (I wouldn't include the early computers that I built from kits in 80-81.) So we're talking over a long, long span of time.

          I learned early on that you get what you pay for - shit components=shit performance.
          Thus PRECISELY the point I was making: when sinking a lot into individual components because you're not buying cheap crap, it's useful to be able to purchase incrementally.

          Now, I'll answer all the other commenters: first, recognize

          • If you have a good UPS as you say, you should be experiencing very consistent power output regardless of what is happening on the line in. I notice you don't mention your PSU, which is a very important half of the equation for turning 'dirty' power into 'clean' power for the components. Even supposedly high-end PSUs can introduce voltage differences [tomshardware.com] that ultimately wear down the components they power.

            Further, ambient temperature shouldn't be any issue to the CPU if your HSF is good enough. I don't let my C
          • Well you should have mentioned the rural challenges earlier!

            Those of us in more urban settings, with more controlled climate and power, don't experience these problems, and yours should not be taken to be the normal experience. In particular, I've never seen a CPU failure, and motherboard and video card failures are nearly always the result of bad capacitors.

          • Re: (Score:3, Interesting)

            32C? Is that considered high?

            I've played Call of Duty 4 on my cheap P4 in the summer. I don't know the temperatures inside, but outside it was 40C and I have no A/C. All with the stock cooler too. The CPU is now seven years old and still works perfectly.

            In fact, the only thing that died in that cheap system was the power supply, due to some construction workers on another floor who connected their machines directly to the building's power without protection and caused a power surge.

      • Re:Meh. (Score:5, Informative)

        by CAIMLAS ( 41445 ) on Friday June 04, 2010 @09:33AM (#32458064)

        People who don't know better seem to skimp on the power supplies more than anything else.

        I can understand cheap boards; they'll (usually) last the useful life of the system, provided they're not really crappy. But the power supply is essential: it's the heart of the system.

        If it doesn't pump your electricity properly (at the correct rates and the like), your brain and various peripherals will die a slow death. Sometimes it is not so slow.

        Invest in a decent power supply: it's worth it. It's probably the only part of a typical user computer I'd consider an investment, too, because it is an insurance policy (of sorts) on the parts. Buying a cheap power supply so you can get a UPS is backwards. Your components are still going to be getting crap power if the PSU is crap.

        I've had a total of one power supply failure, 2 disk failures, and 0 peripheral/RAM/CPU/motherboard failures in the 12 years I've been buying my own parts to build systems.

        The current PSU I've got in my main home computer is a Seasonic something or other (they, and Antec, I've found are very good). I'm amazed at how good this converter is: yes, it's got PFC and all those bells, which certainly help, but it delivers amazingly consistent power, evening out the voltage nicely. Hell, we had the power go out long enough to stop the motor in the washing machine, send my wife's laptop to battery, kill the lights, and make my LCD lose power: the computer didn't turn off (and no, I'm not currently using a UPS). This little power supply stores enough power for a full second or so of operation while playing a CPU- and graphics-intensive game.

        So yeah, paying $70 or more for a PSU does not seem unreasonable in the least. With PSUs you're paying more for quality than for advertised performance or anything like that, so throw down the cash.

        • by mcelrath ( 8027 )
          I totally agree. Now, how do you identify a good PSU? I mean, it's always been a popular game for PSU manufacturers to skimp on the internals (because no one ever sees it). Now we have $1000 PSU's. Are they actually any good? How do you tell? My recent experience with "gaming" hardware (which is where the expensive PSU's are) has taught me that it's all crap sold to overclocking suckers, who would never know that the failure was due to their PSU rather than their overclock. For years I used only serv
    • by brxndxn ( 461473 )

      Wow, obviously you are a consumer. I cannot imagine anyone worth their pay in the business world replacing individual components in a computer. Usually it is just tossed... or handed back to the OEM to get fixed.

      Second, when is the last time you had a processor fail?

    • by MikeFM ( 12491 )
      With that kind of failure rate, you're probably seeing the reason not to be cheap and not to try to keep reusing old parts. In my experience the technology moves fast enough that every three years I want to replace my systems anyway, and the only thing worth saving is the hard disk, which can be dumped as yet another drive into your backup unit's RAID. I kept trying to save the last really expensive graphics card I purchased for my new systems, until I realized that the new $25 cards were more powerful - not worth t
      • 1 year is definitely an unusually short life for a PC. My parents still have a nice Dell from 2001 (with the RAM tripled from the original and an extra hard drive) which is working fine. The floppy drive on it died, but nothing else has had a fault. But ignoring the anecdotes, the data shows that the average life is far more than one year. Here in the EU it is not legal to offer less than 2 years' warranty, so they need a decent survival rate just to stay in business.

    • You may have missed the memo - Intel is the largest supplier of graphics units for PCs and Mac. And no - none of their graphics units are discrete. They're all mounted on the motherboard. Just like the audio controller. And the USB controller. And SATA controller. And NIC.

      And for some strange reason, there's still a market for discrete controllers.

      What this is doing isn't taking away your choices. It's giving you more choices. Though, to be realistic, it's probably aimed more at OEMs and business who have n

      • And for some strange reason, there's still a market for discrete controllers.

        Discrete controllers (video, sound, RAID, NIC) offer better performance than motherboard-integrated options.

        The integrated parts are fine for the vast majority of users, but many users, especially professionals in various endeavours, need better.
    • by LWATCDR ( 28044 )

      You are having way too high a failure rate.
      Fusion is going into notebooks first, which don't tend to get a lot of CPU or GPU swaps.

    • These days, the mechanical components of computers are the only parts that fail. Fans and non-SSD hard drives will need replacement. Chips and circuits won't. In fact, your computer should be shutting itself down if critical fans fail (though your power supply may cook itself).

      Always use a UPS and be sure to use dust filters on your intakes if you find your fans choking to death. Every few months, blast it all out with an air can.

      Your computers won't fail if you do this.

    • Re: (Score:2, Informative)

      by tippe ( 1136385 )

      Taken as a whole, GPU+CPU is simpler and more robust than two separate components connected via an external bus. It does away with connectors, bus drivers (you need something to drive those signals across connectors and inches of trace), level shifters (external buses don't operate at the same voltage as core silicon), bridges (external buses are often shared by multiple devices) and all of the complexity, signal integrity issues and points of failure that these things introduce. GPU+CPU on one die means t

  • Packing the GPU into the CPU makes a lot of sense but also raises some questions.

    Does this mean that in the future we can have chips that contain not only a multi-core CPU but also a multi-core GPU? For example, could AMD pump out a frag-tastic 6-CPU + 4-GPU chip for hardcore gamers and scientists?

    How is this going to affect the cooling for the chip? If I fire up Crysis, will my computer melt? (Assuming a GPU is packed in with enough power to play Crysis.)

    Also, how is this going to affect memory bandwidth? Mos

    • GPUs are already highly "multi-core", and have been for years. I am not sure if they call them cores; they are slightly different from CPU cores, but it is effectively the same idea. This is why GPUs are so much better than CPUs at graphics and at some other parallel tasks, with things like CUDA.

    • by Jeng ( 926980 )

      Simple answer: no.

      They are not taking everything that makes up a video card and putting it all on the chip; they are just integrating some GPU-like functions into a CPU.
