
AMD Fusion System Architecture Detailed

Posted by timothy
from the why-when-I-was-a-boy dept.
Vigile writes "At the first AMD Fusion Developer Summit near Seattle this week, AMD revealed quite a bit of information about its next-generation GPU architecture and the eventual goals it has for the CPU/GPU combinations known as APUs. The company is finally moving away from a VLIW architecture and instead is integrating a vector+scalar design that allows for higher utilization of compute units and easier hardware scheduling. AMD laid out a 3-year plan to offer features like unified address space and fully coherent memory for the CPU and GPU that have the potential to dramatically alter current programming models. We will start seeing these features in GPUs released later in 2011."
  • by Rosco P. Coltrane (209368) on Friday June 17, 2011 @04:42AM (#36472154)

    I think only a small number of computer users upgrade components these days: gamers and power users. The majority buy a beige box or a laptop and never open it. From a business point of view, combining the GPU and the CPU makes sense. Heck, nobody cried when separate math coprocessors disappeared.

  • by myurr (468709) on Friday June 17, 2011 @05:16AM (#36472250)

    Except Larrabee failed because performance didn't live up to expectations and was a generation behind the best from AMD and nVidia. What this development from AMD allows is much more efficient interaction and sharing of data between a traditional CPU and an on-die GPU through updates to the memory architecture. These memory changes will also allow the parts to take advantage of the very fastest DDR3 memory that current CPUs struggle to fully utilise.

    The two most obvious scenarios for this technology are accelerating traditional problems that currently rely on the CPU's vector units (SSE, etc.) by offloading them to the integrated GPU, and, in gaming rigs with a discrete GPU, letting the integrated GPU share some of the workload. The example given, increasingly relevant now that games routinely include physics engines, is for the discrete GPU to concentrate on pushing pixels to the screen while the integrated GPU accelerates the physics engine.

    Is it a game changer? Probably not in the first couple of generations, although it would be a very welcome boost to AMD's platform that could get them back in the game as the preferred CPU maker. But long term Intel will have to come up with an answer to this in some form as programmers get ever more adept at exploiting the GPU for general purpose computing, and changes like those AMD is incorporating into its designs make these techniques ever more powerful and relevant to a wider range of problems. Adding more x86 cores won't necessarily be the answer.

  • by TheRaven64 (641858) on Friday June 17, 2011 @05:50AM (#36472346) Journal

    It's not that difficult to write code that takes full advantage of modern hardware. The limitation is need. Every 18 months, we get a new generation of processors that can easily do everything that the previous generation could just about manage. Something like an IBM 1401 took a weekend to run all of the payroll calculations for a medium sized company in 1960, using heavily optimised FORTRAN (back when Fortran was written in all caps). Now, the same calculations written in interpreted VBA in a spreadsheet on a cheap laptop will run in under a second.

    It would be naive to say that computers are fast enough - that's been said every year for the last 30 or so, and been wrong every time - but the number of problems for which efficient use of computational resources is no longer important grows constantly. Look at the number of applications written in languages like Python and Ruby and then run in primitive AST interpreters. A decent compiler could run them 10-100x faster, but there's no need because they're already running much faster than required. I work on compiler optimisations, and it's slightly disheartening when you realise that the difference that your latest improvements make is not a change from infeasible to feasible, it's a change from using 10% of the CPU to using 5%.

  • by MrHanky (141717) on Friday June 17, 2011 @06:55AM (#36472506) Homepage Journal

    One reason why laptop sales passed desktop sales is of course that desktops last longer, due to their upgradeability.
