AMD Fusion System Architecture Detailed

Vigile writes "At the first AMD Fusion Developer Summit near Seattle this week, AMD revealed quite a bit of information about its next-generation GPU architecture and the eventual goals it has for the CPU/GPU combinations known as APUs. The company is finally moving away from a VLIW architecture and instead is integrating a vector+scalar design that allows for higher utilization of compute units and easier hardware scheduling. AMD laid out a 3-year plan to offer features like unified address space and fully coherent memory for the CPU and GPU that have the potential to dramatically alter current programming models. We will start seeing these features in GPUs released later in 2011."
  • Re:Long time coming (Score:3, Interesting)

    by noname444 ( 1182107 ) on Friday June 17, 2011 @04:37AM (#36472146)

    Integrating the CPU and GPU and unifying the memory address space will probably make things easier for programmers. So hopefully it'll help them utilize the hardware better.
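
    Roughly, the difference looks like this from code; the gpu_* functions below are hypothetical stand-ins (stubbed so the example compiles), not a real AMD or OpenCL API:

        // Sketch: explicit staging copies today vs. a unified, coherent
        // address space where the GPU can dereference the CPU's pointers.
        #include <cstddef>
        #include <vector>

        void* gpu_alloc(std::size_t) { return nullptr; }                // stub
        void  gpu_copy_to_device(void*, const void*, std::size_t) {}    // stub
        void  gpu_copy_from_device(void*, const void*, std::size_t) {}  // stub
        void  gpu_run_kernel(void*) {}                                  // stub

        // Discrete-GPU model: stage data into device memory and back.
        void discrete_model(std::vector<float>& data) {
            const std::size_t bytes = data.size() * sizeof(float);
            void* dev = gpu_alloc(bytes);
            gpu_copy_to_device(dev, data.data(), bytes);
            gpu_run_kernel(dev);
            gpu_copy_from_device(data.data(), dev, bytes);
        }

        // Unified address space: pass the same pointer the CPU already uses.
        void unified_model(std::vector<float>& data) {
            gpu_run_kernel(data.data());
        }

        int main() {
            std::vector<float> data(1024, 1.0f);
            discrete_model(data);
            unified_model(data);
            return 0;
        }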

  • by Sycraft-fu ( 314770 ) on Friday June 17, 2011 @05:10AM (#36472230)

    One concern of mine is simply performance with unified memory. The reason is that memory bandwidth is a big factor in 3D performance. The kind of math you have to do just needs a shitload of memory access. This is why GPUs have such insane memory configurations. They have massively wide controllers, special high-performance RAM (GDDR5 is based on DDR3, but much faster), and so on. That's wonderful, but also expensive.

    So it seems to me that you run into a situation where either you need much more expensive memory for the computer, possibly with additional constraints (at high speeds, memory on a stick isn't feasible; the electrical issues mean you have to solder it to the board), or you get a system whose performance suffers because it is starved for memory bandwidth. Remember that it would also have to share that memory with the CPU.
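
    For scale, a rough back-of-envelope comparison of a 256-bit GDDR5 card against dual-channel DDR3-1333 system memory; the bus widths and transfer rates are illustrative 2011-era ballpark figures, not the specs of any particular product:

        // Back-of-envelope bandwidth comparison (illustrative figures only).
        #include <cstdio>

        int main() {
            // Discrete GPU: ~256-bit GDDR5 bus at ~4.0 GT/s effective.
            double gddr5_gb_s = (256.0 / 8.0) * 4.0;         // bytes per transfer * GT/s
            // Desktop: dual-channel DDR3-1333, 64 bits per channel, shared with the CPU.
            double ddr3_gb_s  = 2.0 * (64.0 / 8.0) * 1.333;
            std::printf("GDDR5 ~%.0f GB/s vs. DDR3 ~%.0f GB/s\n", gddr5_gb_s, ddr3_gb_s);
            return 0;
        }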

    Perhaps they've found a way to overcome this, but I'm skeptical.

    I also worry this could lead to fragmentation of the market. What I mean is that right now we have a pretty nice unified situation from a developer perspective. AMD and Intel have all kinds of cross-licensing agreements with regard to instruction sets, so the instructions for one are the instructions for the other. While there are special cases, like 3DNow!, which only AMD does, or AVX, which Intel has and AMD has yet to implement, by and large you have no problem supporting both with a very similar, or outright identical, codebase.

    Likewise, GPUs are unified from an app perspective. You talk to them with DirectX or OpenGL. The details of how AMD or nVidia do things aren't so important; that's handled for you. You use one interface to talk to whatever card the user has. I'm not saying there can't be issues, but by and large it is the same deal.

    Well, this could change that. APUs might need a drastically different development structure. OK, fine, except AMD might be the only company that has them. Intel doesn't seem to be going down this road right now, and nVidia doesn't have a CPU division. So as a developer you could have a problem where something that works well for a traditional CPU/GPU doesn't work well, or maybe at all, for an APU.

    That could lead to a choice of three situations, none of them good:

    1) You develop for traditional architectures. That's great for the majority of people, who are Intel owners (and people who own what is now current AMD stuff) but screws over this new, perhaps better, way of doing things.

    2) You develop for the APU. That is nice for the people who have it but it screws over the mass market.

    3) You develop two versions, one for each. Everyone is happy, but your costs go way up from having more to maintain (a rough runtime-dispatch sketch of this is below).
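
    To illustrate option 3, a minimal sketch of shipping both paths in one binary and picking one at runtime; has_unified_gpu() and the two render paths are hypothetical stand-ins stubbed out so it compiles, since real detection would have to go through the driver or OS:

        // Option 3 in miniature: build both code paths, choose at runtime.
        // Everything here is a placeholder, not a real API.
        #include <cstdio>

        bool has_unified_gpu() { return false; }   // stub: pretend we probed the hardware

        void render_discrete_path() { std::puts("explicit CPU<->GPU copies"); }
        void render_apu_path()      { std::puts("shared, coherent memory path"); }

        int main() {
            if (has_unified_gpu())
                render_apu_path();        // tuned for a Fusion-style APU
            else
                render_discrete_path();   // traditional CPU + discrete GPU
            return 0;
        }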

    Of course even if everything goes APU it could be problematic if AMD and Intel have very different ways of doing things. Their cross licensing does not extend to this sort of thing, and I could see them deciding to try and fight it out.

    So neat idea, but I'm not really sure it is a good one at this point.

  • by YoopDaDum ( 1998474 ) on Friday June 17, 2011 @06:14AM (#36472414)
    Unified memory is an implementation option, but not the only one. It definitely makes sense when price matters more than performance, but for a higher-end part you could have separate memories. Look at AMD multi-core CPUs: they're already (lightly) NUMA from the start. Each core has a directly attached bank with minimum latency, and can access the other cores' memory banks with a (small) additional latency. Extended here, the GPU could have a dedicated, higher-performance GDDR5 memory directly attached, but accessible from the CPU side (and similarly the GPU could access all the system memory). It's a NUMA extension for a hybrid architecture, if you wish. It needs support from the OS/drivers to handle this in a transparent way, but NUMA is not new, so existing know-how could be reused.
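
    A minimal Linux libnuma sketch of what that could look like from the CPU side, under the assumption that a GPU-attached GDDR5 bank showed up as just another NUMA node (node 1 here is purely hypothetical); link with -lnuma:

        // Sketch only: place a buffer on a specific NUMA node with libnuma.
        // Treating node 1 as "the GPU-attached bank" is an assumption for
        // illustration; today these calls target remote CPU memory banks.
        #include <numa.h>
        #include <cstddef>
        #include <cstdio>

        int main() {
            if (numa_available() < 0) {
                std::puts("no NUMA support on this machine");
                return 1;
            }
            const std::size_t bytes = 64 * 1024 * 1024;
            void* buf = numa_alloc_onnode(bytes, 1);   // allocate on node 1
            if (buf != nullptr) {
                // ... CPU writes here; a coherent GPU could read the same bank locally ...
                numa_free(buf, bytes);
            }
            return 0;
        }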

    Regarding performance, in principle an integrated solution can do better by offering tighter integration and more efficient exchanges between CPU and GPU than going through a lower-speed, higher-latency external bus as with a discrete GPU. We shouldn't judge the principle by today's implementations, as they only target the low end (Bobcat-based) and middle (Llano), not yet the high end.
    The con of integration is that you lose the flexibility of choosing the CPU and GPU separately, and upgrading them separately, but as others have pointed out most people neither care about nor use this in practice.

    As for fragmentation, it's the usual situation. You can hide the differences using things like OpenCL, but initially you'll sacrifice some performance compared to a targeted implementation. Most developers should target that layer once the tools become sufficiently mature, but if you want to extract all the juice you will have to be target-dependent and face this fragmentation, indeed. Still, over time we can expect some convergence (the good ideas will become clearer and be adopted). So with time the generic approach (OpenCL or the like) will become better and better, and fewer and fewer people will develop for a specific target, as the shrinking performance advantage won't justify the cost. This process will not necessarily be fast ;) and we're just starting.
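
    For what it's worth, the generic route already looks roughly like this with the OpenCL C API: ask for a GPU device and fall back to a CPU device if there isn't one, so the same binary runs on a discrete card, an APU, or a plain CPU (a sketch, with error handling kept minimal):

        // Minimal OpenCL device selection: prefer a GPU, fall back to the CPU.
        // The kernels themselves would then be built and enqueued against 'device'.
        #include <CL/cl.h>
        #include <cstdio>

        int main() {
            cl_platform_id platform;
            cl_uint count = 0;
            if (clGetPlatformIDs(1, &platform, &count) != CL_SUCCESS || count == 0) {
                std::puts("no OpenCL platform found");
                return 1;
            }
            cl_device_id device;
            if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr) != CL_SUCCESS)
                clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, nullptr);
            // ... create a context and queue, build kernels, enqueue work ...
            return 0;
        }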
  • Re:Long time coming (Score:5, Interesting)

    by TheRaven64 ( 641858 ) on Friday June 17, 2011 @07:43AM (#36472706) Journal

    Not really. Now the CPU spends 95% of its time waiting for data from the network or disk instead of 90%, but the CPU is rarely the bottleneck these days.

    Around the time of the Pentium II, Intel did some simulations where they increased the (simulated) speed of the CPU running typical applications and measured performance. They found that, if the speed of other components didn't change, an infinitely fast CPU (i.e. all CPU operations took 0 simulation time) ran about twice as fast as the ones that they were shipping. It doesn't take much of an improvement in CPU speed before the CPU just isn't the bottleneck anymore, even in processor intensive tasks. RAM and disk bandwidth and latency quickly take over. This was one of the problems Apple had with the PowerPC G4 - the RAM wasn't fast enough to supply it with data as fast as it could process it, so it rarely came close to its theoretical maximum speed.
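
    That "about twice as fast" figure is just Amdahl's law with the CPU accounting for roughly half of wall-clock time; a quick sanity check (the 50% split is an illustrative assumption, not Intel's published number):

        // Amdahl's law: only the CPU-bound fraction of runtime gets faster.
        #include <cstdio>

        double overall_speedup(double cpu_fraction, double cpu_speedup) {
            return 1.0 / ((1.0 - cpu_fraction) + cpu_fraction / cpu_speedup);
        }

        int main() {
            std::printf("CPU 10x faster:        %.2fx overall\n", overall_speedup(0.5, 10.0)); // ~1.82x
            std::printf("CPU 'infinitely' fast: %.2fx overall\n", overall_speedup(0.5, 1e9));  // ~2.00x
            return 0;
        }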

  • by fuzzyfuzzyfungus ( 1223518 ) on Friday June 17, 2011 @08:25AM (#36472960) Journal
    I would (given ATI's historically somewhat weak driver team) be wholly unsurprised to see some rather messy teething pains; but (given AMD's historical friendliness, and the long-term trajectory of this plan) I suspect that it will actually be a boon to Linux and similar.

    The long-term plan, it appears, would be to integrate the GPU sufficiently tightly with the CPU that it becomes, in effect, an instruction-set extension specialized for certain tasks, like SSE on steroids. If they reach that point, you'll basically have a CPU where running OpenGL "in software" is the same as running it "in hardware" on the embedded graphics board, because the embedded graphics board is simply the hardware implementation of some of the available CPU instructions, along with a few DisplayPort interfaces and some monitor-management housekeeping.
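
    For a sense of what "instruction-set extension" means in practice today, here's what SSE looks like from C++; the speculative Fusion endpoint would expose GPU compute in a similarly tightly coupled way, though the exact form is anyone's guess at this point:

        // SSE intrinsics from <xmmintrin.h>: each intrinsic maps to a CPU
        // vector instruction the compiler emits inline, no driver involved.
        #include <xmmintrin.h>

        void add4(const float* a, const float* b, float* out) {
            __m128 va = _mm_loadu_ps(a);             // load 4 floats
            __m128 vb = _mm_loadu_ps(b);
            _mm_storeu_ps(out, _mm_add_ps(va, vb));  // 4 additions in one instruction
        }

        int main() {
            float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, out[4];
            add4(a, b, out);   // out = {11, 22, 33, 44}
            return 0;
        }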

    I'd be unsurprised, as with Optimus, to see some laptops released with an embedded/discrete GPU combination that is fucked in one way or another under anything that isn't the latest Windows, possibly making the discrete invisible, possibly forcing you to run the discrete all the time, or some other dysfunctional situation; but I'd tend to be optimistic about the long term: GPU driver support has always been a sore spot. Compiler support for CPU instructions, on the other hand, has generally been pretty good.
  • by DragonHawk ( 21256 ) on Friday June 17, 2011 @08:48AM (#36473128) Homepage Journal

    A "math coprocessor" is just the FPU (Floating Point Unit) of a particular era of microcomputers. The FPU implements machine instructions for floating point math. Before the microcomputer, when machines filled cabinets, you might have an FPU (on one or more circuit boards), you might not. Same with the early micros. Eventually they built the FPU into the same die as the CPU, so no need for a separate chip. The FPU is always tightly coupled to the CPU because it shares the same control unit as the CPU. (A CPU consists of a control unit plus an arithmetic/logic unit.) You can't change the design of one without changing the other.

    A GPU is different from an FPU. It doesn't process CPU instructions -- it has its own control unit. GPUs operate independently of the CPU.

    Building a GPU into the same die or IC package as the CPU won't prevent you from installing a discrete graphics card. No need to get all upset about it.

    Although the tech may eventually get to the point where you won't bother with a discrete graphics card. I suspect we'll eventually see a large package containing CPU, GPU and memory, for performance reasons. One will upgrade them all together.

    Before you panic about that: In the early days of minicomputers, CPUs were implemented as many boards containing lots of discrete logic and small scale integration. It was possible to do things like change how the adder was implemented, how memory was accessed, or add whole new machine instructions. You could "upgrade" at that level. That capability was lost with the move to (very) large scale integration. However, things are so much cheaper and faster with (V)LSI that it's worth it.

    So if $100 will bring you a new CPU, GPU, and RAM, running 10x faster than what you had before, then yah, I can see it happening, and being a win.

Say "twenty-three-skiddoo" to logout.

Working...