AMD Graphics Upgrades Hardware Technology

AMD Fusion System Architecture Detailed 121

Vigile writes "At the first AMD Fusion Developer Summit near Seattle this week, AMD revealed quite a bit of information about its next-generation GPU architecture and the eventual goals it has for the CPU/GPU combinations known as APUs. The company is finally moving away from a VLIW architecture and instead is integrating a vector+scalar design that allows for higher utilization of compute units and easier hardware scheduling. AMD laid out a 3-year plan to offer features like unified address space and fully coherent memory for the CPU and GPU that have the potential to dramatically alter current programming models. We will start seeing these features in GPUs released later in 2011."
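
To make the "programming model" claim concrete, here is a purely illustrative C sketch of what a unified address space with fully coherent memory could let developers write. The apu_scale() helper is hypothetical (and stubbed to run on the CPU so the sketch stays self-contained); it is not an actual AMD or FSA API. The point is the absence of buffer objects, explicit uploads, and copy-back steps.

```c
/* Purely illustrative sketch -- NOT an actual AMD/FSA API. With a unified
 * address space and fully coherent memory, GPU code could consume the very
 * same pointer the CPU allocated: no buffer objects, no explicit upload,
 * no copy-back. apu_scale() is hypothetical and stubbed on the CPU. */
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical: on an APU this would be enqueued on the GPU's compute
 * units, and coherence would make the results visible to the CPU without
 * any copy step. Here it simply runs on the CPU. */
static void apu_scale(float *data, size_t n, float k)
{
    for (size_t i = 0; i < n; i++)
        data[i] *= k;
}

int main(void)
{
    size_t n = 1u << 20;
    float *data = malloc(n * sizeof *data);    /* ordinary heap memory */
    if (!data)
        return 1;

    for (size_t i = 0; i < n; i++)
        data[i] = (float)i;

    apu_scale(data, n, 2.0f);                  /* same pointer, no staging buffer */
    printf("data[42] = %.1f\n", data[42]);     /* no copy-back needed */

    free(data);
    return 0;
}
```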
  • What's wrong with hardware!

    Humans are too stupid to program it.

    Not sure what the fix is; it's not hardware, since hardware keeps exploding while we're stuck with Windows 7 (lol, 8) or OS X (lol, Lion).
    • Re: (Score:3, Interesting)

      by noname444 ( 1182107 )

      Integrating CPU, GPU and unifying the memory address space will probably make things easier for programmers. So hopefully it'll help programmers utilize the hardware better.

      • by Anonymous Coward

        Nvidia already offers this with CUDA.

        • No, it doesn't. Like OpenCL, CUDA basically means you're sending instructions to the GPU by writing data to a mapped memory region. Sharing an address space is not possible at that level. It's only possible to do at the CPU level.
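
For reference, this is roughly what "writing data to a mapped memory region" looks like with CUDA's zero-copy path today (host side only, error handling omitted). Note that the device-visible pointer comes from an explicit mapping call rather than from sharing the CPU's address space.

```c
/* Minimal CUDA "zero-copy" sketch (host side only, error handling omitted).
 * The host buffer is pinned and mapped into the GPU's address space, but a
 * kernel still has to be handed a separate device pointer obtained through
 * an explicit mapping call -- the CPU and GPU do not share one address
 * space. Build with nvcc and link against the CUDA runtime. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    float *h_buf = NULL;   /* pointer the CPU dereferences */
    float *d_buf = NULL;   /* device-side alias of the same memory */
    size_t n = 1 << 20;

    cudaSetDeviceFlags(cudaDeviceMapHost);                 /* opt in to mapped host memory */
    cudaHostAlloc((void **)&h_buf, n * sizeof(float),
                  cudaHostAllocMapped);                    /* pinned + mapped allocation */
    cudaHostGetDevicePointer((void **)&d_buf, h_buf, 0);   /* the explicit mapping step */

    /* h_buf is what the CPU uses; d_buf is what a kernel would be passed.
     * They refer to the same physical memory but are separate pointers. */
    printf("host ptr %p, device ptr %p\n", (void *)h_buf, (void *)d_buf);

    cudaFreeHost(h_buf);
    return 0;
}
```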

      • by TheRaven64 ( 641858 ) on Friday June 17, 2011 @04:50AM (#36472346) Journal

        It's not that difficult to write code that takes full advantage of modern hardware. The limitation is need. Every 18 months, we get a new generation of processors that can easily do everything that the previous generation could just about manage. Something like an IBM 1401 took a weekend to run all of the payroll calculations for a medium sized company in 1960, using heavily optimised FORTRAN (back when Fortran was written in all caps). Now, the same calculations written in interpreted VBA in a spreadsheet on a cheap laptop will run in under a second.

        It would be naive to say that computers are fast enough - that's been said every year for the last 30 or so, and been wrong every time - but the number of problems for which efficient use of computational resources is no longer important grows constantly. Look at the number of applications written in languages like Python and Ruby and then run in primitive AST interpreters. A decent compiler could run them 10-100x faster, but there's no need because they're already running much faster than required. I work on compiler optimisations, and it's slightly disheartening when you realise that the difference that your latest improvements make is not a change from infeasible to feasible, it's a change from using 10% of the CPU to using 5%.

        • While I agree with you regarding application programming, need, etc., I must clarify that I was talking about graphics/game applications that require the full hardware potential.

          If you compare this new architecture with an arguably over-complicated architecture like the PlayStation 3, I'd argue that writing software that utilizes the hardware to its full potential is indeed hard. And in this context, making a more elegant, integrated GPU/CPU will make the lives of us poor indie game programmers a bit easier.

        • by Bert64 ( 520050 )

          The current trend seems to be towards more power efficient hardware and virtualization (and dynamic scaling etc), rather than ever faster hardware...
          So while your interpreted spreadsheet may be able to compute payroll calculations in a second, your hardware will consume more power doing it that way than using an optimized implementation... Also, with suboptimal code you won't be able to run as many instances on a single piece of hardware, and will thus require more hardware.

      • Comment removed (Score:4, Interesting)

        by account_deleted ( 4530225 ) on Friday June 17, 2011 @07:09AM (#36472858)
        Comment removed based on user account deletion
      • Comment removed based on user account deletion
      • "hopefully it'll help programmers utilize the hardware better."

        Yes, this looks like a very nice architecture, which it should be possible to use to the max - if it weren't for AMD's plan to cripple its double-precision performance. NVIDIA already does this: if you don't shell out the extra $ for a Tesla or Quadro, they cut the 64-bit performance to half of what the chip can actually do. AMD's current chips are even slower than the crippled NVIDIA chips on 64-bit floating point, but the new AMD Fusion chips

  • by Anonymous Coward

    Is that the modular nature of current components allows for relatively easy upgrading and a comparatively low cost. Buying a new graphics card that has the price of a GPU and dedicated video RAM is reasonable. Having to buy a new CPU every time you want to upgrade your GPU could get unreasonably expensive fast.

    • by Rosco P. Coltrane ( 209368 ) on Friday June 17, 2011 @03:42AM (#36472154)

      I think only a small number of computer users upgrade components these days - gamers and power users. But the majority of people these days buy a beige box or a laptop and never ever open them. From a business point of view, combining the GPU and the CPU makes sense. Heck, nobody cried when separate math coprocessors disappeared.

      • That may be the case, but the boxes they buy benefit from the economy of scale offered by being able to separate those components. Every time I go to a computer store, I'd say there's a wide variety of CPUs and GPUs in the boxes people can buy, in many combinations. This allows customers to buy what they need. For some, that's a moderate processor with moderate graphics; for others, it's a moderate processor with relatively decent graphics (to play blu-ray discs or 1080p flash vide

        • I would imagine that you'll likely still be able to upgrade by adding a discrete graphics card for quite some time.

          • Since this design seems to be about using the APU for non-graphics things as well, you could probably stick an nVidia card in the PCI-E slot for better video and continue to use the Fusion APU for OpenCL (or whatever) at the same time.
        • by Targon ( 17348 ) on Friday June 17, 2011 @04:55AM (#36472358)

          There will still be that same ability to get separate components, but the GPU element is being moved from the chipset onto the CPU (now called an APU).

          There really have been only three general configurations:
          1: CPU with integrated graphics on the motherboard
          2: CPU with integrated graphics on the motherboard PLUS a discrete video card/GPU.
          3: CPU without integrated graphics on the motherboard with ONLY one or more video cards.

          So, what this does is update options 1 and 2, since you can still add a discrete video card. Since the graphics portion of Fusion is better than what Intel offers, this isn't a bad setup. There will also be the option to swap the APU for a faster version that has both a faster CPU core and a faster GPU core in most motherboards.

          Yes, there are certain advantages offered by the APU design, but it isn't an "all or nothing" offering: AMD will continue to offer straight CPUs (with Bulldozer being the next core design). If you think about it, AMD may go to a tick-tock model like Intel has, but rather than alternating between core design and fab process technology, we may see AMD alternating between CPU core design, GPU design, and then an APU that combines the latest CPU and GPU designs.

          Right now, many are waiting for AMD to release its first all-new core design since 2003, since that will hopefully get AMD the better CPU core performance that many have been waiting for.

        • Laptop sales passed desktop sales a couple of years ago. Anyone buying a desktop is now in the minority. With laptops, the constraints are different. Having the CPU and GPU in separate chips complicates the board design, which adds to the cost. With integrated CPU and GPU designs, you can have a simple board design and just pop a faster chip in the top of the line models.

          Upgrading your GPU separately? My first PC had a slot for installing an FPU. You could get one from Intel, but you could get faste

          • by MrHanky ( 141717 ) on Friday June 17, 2011 @05:55AM (#36472506) Homepage Journal

            One reason why laptop sales passed desktop sales is of course that desktops last longer, due to their upgradeability.

            • Who upgrades desktop machines? Most desktops go through their entire life without a single upgrade. Most users will pitch them and buy another computer if they develop a problem they don't know how to fix, let alone if the machine is too slow. Remember, we live in a disposable culture. It's interesting that the Native Americans were big on throwing stuff into big piles too, but of course nothing they were working with left a toxic debt.

              • I upgraded my Scaleo X with a new motherboard, hard drive, CPU, RAM, and GPU.

              • Re: (Score:2, Flamebait)

                by gad_zuki! ( 70830 )

                Cheap people, gamers, power users, and businesses do. That's probably a good chunk of the desktop market right now.

                > Remember, we live in a disposable culture.

                Would you like some cheese with your whine?

                • Would you like some cheese with your whine?

                  Would you like some Flaturin with your trash culture of slavery?

            • And they're less likely to fail, due to less movement etc.
              There are *still* P4s in use, though they are finally being phased out -- and then only (likely) because the PSUs' caps are failing. Same with other hardware from the same vintage, like screens.

        • Yes and no. Most customers (I'd guess 80%) actually don't care at all about performance (neither CPU nor GPU) because whatever's current nowadays is good enough for them. For those, an APU means cheaper prices and more hardware/software reliability.

          The rest will indeed need more CPU and/or GPU power, and neither Llano nor its successor will be for them, because the CPUs are lackluster, the GPU is OK but not great (equivalent to an entry-level discrete card), and, on top of that, CPU and GPU have to fight for

        • The integrated GPU on the processor die won't make it impossible to buy and install an aftermarket graphics card. In fact, you could just use the integrated GPU for other things, like super-fast matrix computations. A video game could in effect use the Fusion processor by allocating matrix computations to the GPU and scalar computations to the CPU, and leave an aftermarket graphics card only for rendering. A programmer would have to write the program to take advantage of this, but it's possible (a rough sketch follows below).
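
As a rough sketch of the split described above, an application could pick the GPU that reports unified host memory (i.e. the integrated/APU one) for OpenCL compute and leave the discrete card to the renderer. This assumes an OpenCL 1.1 driver that exposes CL_DEVICE_HOST_UNIFIED_MEMORY; error handling and the actual kernels are omitted.

```c
/* Rough sketch (OpenCL 1.1, error handling omitted): find the GPU that
 * shares the host's memory -- i.e. the integrated/APU one -- and use it for
 * compute, leaving the add-in card untouched for rendering. Device order and
 * the CL_DEVICE_HOST_UNIFIED_MEMORY query depend on the installed driver. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id devices[8];
    cl_uint ndev = 0;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 8, devices, &ndev);

    for (cl_uint i = 0; i < ndev; i++) {
        cl_bool unified = CL_FALSE;
        char name[128] = "";

        clGetDeviceInfo(devices[i], CL_DEVICE_HOST_UNIFIED_MEMORY,
                        sizeof unified, &unified, NULL);
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof name, name, NULL);

        if (unified) {
            printf("using integrated GPU for compute: %s\n", name);
            /* ...create a context and queue on devices[i] here and enqueue
             * the matrix kernels; the discrete card is left to the renderer. */
        }
    }
    return 0;
}
```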
      • Also, nothing about AMD's new design precludes discrete GPUs more or less similar to today's models, it is just an effort to make the (economically inevitable) integrated GPU more useful by virtue of its close integration with the system, rather than simply cheaper as integrated GPUs are today.

        Expansion will be slightly trickier than today's Crossfire/SLI, because certain GPU elements(while comparatively few) will enjoy much faster access to the CPU and main memory, while the expansion GPU(s) will presum
      • The reason nobody cried when separate math coprocessors disappeared is that math coprocessors didn't actually disappear; only the separate packaging did.

        Back in those days, you needed a math coprocessor because more often than not the CPU didn't offer any support for basic features such as floating-point arithmetic, which happens to be of fundamental importance. Yet, even when providing that support directly on the CPU and even providing vectorized versions of

        • by DragonHawk ( 21256 ) on Friday June 17, 2011 @07:48AM (#36473128) Homepage Journal

          A "math coprocessor" is just the FPU (Floating Point Unit) of a particular era of microcomputers. The FPU implements machine instructions for floating point math. Before the microcomputer, when machines filled cabinets, you might have an FPU (on one or more circuit boards), you might not. Same with the early micros. Eventually they built the FPU into the same die as the CPU, so no need for a separate chip. The FPU is always tightly coupled to the CPU because it shares the same control unit as the CPU. (A CPU consists of a control unit plus an arithmetic/logic unit.) You can't change the design of one without changing the other.

          A GPU is different from an FPU. It doesn't process CPU instructions -- it has its own control unit. GPUs operate independently of the CPU.

          Building a GPU into the same die or IC package as the CPU won't prevent you from installing a discrete graphics card. No need to get all upset about it.

          Although the tech may eventually get to the point where you won't bother with a discrete graphics card. I suspect we'll eventually see a large package containing CPU, GPU and memory, for performance reasons. One will upgrade them all together.

          Before you panic about that: In the early days of minicomputers, CPUs were implemented as many boards containing lots of discrete logic and small scale integration. It was possible to do things like change how the adder was implemented, how memory was accessed, or add whole new machine instructions. You could "upgrade" at that level. That capability was lost with the move to (very) large scale integration. However, things are so much cheaper and faster with (V)LSI that it's worth it.

          So if $100 will bring you a new CPU, GPU, and RAM, running 10x faster than what you had before, then yah, I can see it happening, and being a win.

    • by ledow ( 319597 )

      I have to say - I can't remember the last time I upgraded a video card (it may have been the AGP era), and I play 20+ hours a week just on Steam games.

      Since we hit the CPU speed limits, and software authors can't just make you upgrade, there comes a point where a computer is "good enough" for the vast majority of games for almost its entire usable life. By the time it comes to upgrades, it's usually cheaper to just buy a new computer with the components you want than trying to force your motherboard into C

      • You forget the fact that the majority of PC games are created with console hardware in mind and as such use only a fraction of what a modern GPU is capable of.

        That said, the "good enough" argument does fly for desktop users. I expect graphics boards to become like sound cards, they will be useful for specific applications (musicians come to mind for sound cards) and the people that need them will buy them.

      • by Smirker ( 695167 )
        Heck, I can't even notice the difference these days to SCREEN 13.
      • by cynyr ( 703126 )

        Just upgraded my desktop, and by that I mean bought a new one, and moved the server into the old one. At some point in the future, this new desktop will become the family computer, I'll have a new desktop, and the server will still be humming along.

        As for "bumping up the price" give me tools to use it for GPGPU while using the PCIe card for video and I'm sold.

        I think some business users will notice; I have an Nvidia Quadro in my work laptop for a reason.

      • Well, CAD and using the GPU as a CPU are still there. OpenCL turns the video card into a high-end FPU that can do stuff the main CPU sucks at.

        Anyway, a video card still has faster RAM that isn't shared with system RAM. Onboard video on some boards has a max of 2 displays (some boards force one to be analog). Now, if ATI/AMD can do onboard video with DisplayPort, then you can do more. But if you need 3-4+ screens, I think an add-in video card may be better and will save you the RAM hit.

    • by Targon ( 17348 )

      AMD will still make straight CPUs as well as GPUs, but for the low end of the market that was already going to use integrated graphics, the APU makes more sense. You can also add a video card to a desktop, or possibly some laptops, that have a Fusion APU. As it stands now, Llano is still going to be using CPU cores that are based on the current Athlon II/Phenom II cores. Bulldozer is the next core design from AMD and will have both CPU-only implementations, and then later we will see ne

      • Perhaps for most folks around here it's low end, but I recently got one, and I've been shocked at how well it performs. You're not going to be playing games that were made in the last few years, but it does a really good job at the sorts of things that people typically do. I needed something portable, durable and power efficient, and it does that quite well. I'm really curious to see what the new tool kits are going to be able to provide.

    • Comment removed based on user account deletion
  • by Sycraft-fu ( 314770 ) on Friday June 17, 2011 @04:10AM (#36472230)

    One concern of mine is simply performance with unified memory. The reason is that memory bandwidth is a big factor in 3D performance. The kind of math you have to do just needs a shitload of memory access. This is why GPUs have such insane memory configurations. They have massively wide controllers, special high-performance RAM (GDDR5 is based on DDR3, but higher performance) and so on. That's wonderful, but also expensive.

    So it seems to me that you run into a situation where either you need much more expensive memory for the computer, possibly with additional constraints (at high speeds memory on a stick isn't feasible; the electrical issues are such that you have to solder it to the board), or a system whose performance suffers because it is starved for memory bandwidth. Please remember that it would also have to share memory with the CPU.

    Perhaps they've found a way to overcome this, but I'm skeptical.

    I also worry this could lead to fragmentation of the market. What I mean is right now we have a pretty nice unified situation from a developer perspective. AMD and Intel have all kinds of cross-licensing agreements with regard to instruction sets. So the instructions for one are the instructions for the other. While there are special cases, like 3DNow!, which only AMD does, or AVX, which Intel has and AMD has yet to implement, by and large you have no problem supporting both with a very similar, or even identical, codebase.

    Likewise GPUs are unified from an app perspective. You talk to them with DirectX or OpenGL. The details of how AMD or nVidia do things aren't so important; that's handled. You use one interface to talk to whatever card the user has. Not saying there can't be issues, but by and large it is the same deal.

    Well, this could change that. APUs might need a drastically different development structure. OK, fine, except AMD might be the only company that has them. Intel doesn't seem to be going down this road right now, and nVidia doesn't have a CPU division. So then as a developer you could have a problem where something that works well for a traditional CPU/GPU doesn't work well, or maybe at all, for an APU.

    That could lead to a choice of three situations, none of them good:

    1) You develop for traditional architectures. That's great for the majority of people, who are Intel owners (and people who own what is now current AMD stuff) but screws over this new, perhaps better, way of doing things.

    2) You develop for the APU. That is nice for the people who have it but it screws over the mass market.

    3) You develop two versions, one for each. Everyone is happy but your costs go way up from having more to maintain.

    Of course even if everything goes APU it could be problematic if AMD and Intel have very different ways of doing things. Their cross licensing does not extend to this sort of thing, and I could see them deciding to try and fight it out.

    So neat idea, but I'm not really sure it is a good one at this point.

    • This is why GPUs have such insane memory configurations. [...] wonderful, but also expensive.

      Have you seen what sub-$100 graphics cards can do these days?

      This sort of integration could save enough money at the manufacturing end to make that level of graphics almost free to the end user, especially in laptops. It's a huge win.

    • by YoopDaDum ( 1998474 ) on Friday June 17, 2011 @05:14AM (#36472414)
      Unified memory is an implementation option, but not the only one. It definitely makes sense when price matters more than performance. But for a higher-end part you could have separate memories. Look at AMD multi-core CPUs: they're already (lightly) NUMA from the start: each core has a directly attached bank with minimum latency, and can access the other cores' memory banks with a (small) additional latency. Extended here, the GPU could have a dedicated higher-performance GDDR5 memory directly attached, but accessible from the CPU side (and similarly the GPU could access all the system memory). It's a NUMA extension for a hybrid architecture, if you wish. It needs support from the OS/drivers to handle this in a transparent way, but NUMA is not new, so existing know-how could be reused.
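
For what it's worth, the CPU-side NUMA know-how referred to here already has a user-visible API on Linux (libnuma, linked with -lnuma). The hybrid CPU+GPU arrangement above is speculation; this only shows the kind of node-placement call such an extension might resemble.

```c
/* Existing CPU-side NUMA know-how (Linux libnuma, link with -lnuma).
 * The hybrid CPU+GPU "NUMA" described above is speculative; this only shows
 * the kind of node-placement call an OS already offers for memory banks at
 * different distances, which a GPU-attached GDDR5 pool could in principle
 * be exposed through as well. */
#include <stdio.h>
#include <numa.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this machine\n");
        return 1;
    }

    int last_node = numa_max_node();
    size_t sz = 1 << 20;

    /* Place a buffer on a specific node, e.g. the one closest to whatever
     * will touch it most often. */
    void *buf = numa_alloc_onnode(sz, last_node);
    if (!buf)
        return 1;

    printf("allocated %zu bytes on node %d\n", sz, last_node);
    numa_free(buf, sz);
    return 0;
}
```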

      Regarding performance, on principle an integrated solution can do better by offering tighter integration and more efficient exchanges between CPU and GPU than going through a lower speed / higher latency external bus as for a discrete GPU. We shouldn't judge the principle by today's implementations, as they target only the low end (Bobcat-based) and the middle (Llano), not yet the high end.
      The con of integration is that you lose the flexibility of choosing the CPU and GPU separately, and upgrading them separately, but as others have pointed out, most people neither care about nor use this in practice.

      As for fragmentation, it's the usual situation. You can hide the differences using things like OpenCL, but initially you'll sacrifice some performance compared to a targeted implementation. Most should target this when the tools become sufficiently mature. But if you want to extract all the juice you will have to be target-dependent, and indeed face this fragmentation. Still, over time we can expect some convergence (the good ideas will become clearer, and be adopted). So with time the generic approach (OpenCL or the like) will become better and better, and fewer and fewer people will develop for a specific target, as the decreasing performance advantage won't justify the cost. This process will not necessarily be fast ;) and we're just starting.
      • Regarding performance, on principle an integrated solution can do better by offering tighter integration and more efficient exchanges between CPU and GPU than going through a lower speed / higher latency external bus as for a discrete GPU.

        This isn't quite right. On principle, a discrete solution doesn't have to compromise with the low-latency random-access memory demands of the CPU, while an integrated solution does. For raw compute performance, the discrete solutions are starting out in a much better position.

        The latency savings only manifest as a win for small workloads, but small workloads ultimately don't matter (blink of an eye vs. half a blink of an eye).

        • Regarding performance, on principle an integrated solution can do better by offering tighter integration and more efficient exchanges between CPU and GPU than going through a lower speed / higher latency external bus as for a discrete GPU.

          This isn't quite right. On principle, a discrete solution doesn't have to compromise with the low-latency random-access memory demands of the CPU, while an integrated solution does. For raw compute performance, the discrete solutions are starting out in a much better position.

          Let's keep in mind that I'm talking about a possible high-end integrated solution that doesn't exist yet. This device would be NUMA-like, and have a high-speed memory on a wide bus optimized for the GPU, in addition to a classical memory optimized for the CPU. Still, the CPU and GPU can access each other's memory with higher performance than in current discrete GPU solutions. Think about a multi-core Opteron memory organization, but instead of being symmetric (all memory ports identical) the ports are optimiz

          • by Agripa ( 139780 )

            Let's keep in mind that I'm talking about a possible high-end integrated solution that doesn't exist yet. This device would be NUMA-like, and have a high-speed memory on a wide bus optimized for the GPU, in addition to a classical memory optimized for the CPU. Still, the CPU and GPU can access each other's memory with higher performance than in current discrete GPU solutions. Think about a multi-core Opteron memory organization, but instead of being symmetric (all memory ports identical) the ports are optimiz

    • Well, we could always have memory right on the motherboard, à la Sideport. Of course, more memory, such as 512 MB of GDDR5, would be better than today's Sideport memory's specifications (which is 1333 MHz DDR3, I think). But anyway, comparing [wikipedia.org] HD 6xxx integrated GPUs to their non-integrated counterparts, I find the memory bandwidth not to be so bad.

      Any sub €60 graphics card I can buy comes with, at best, 1333-1400 MHz DDR3 memory anyway...
    • They have massively wide controllers, special high performance ram (GDDR5 is based on DDR3, but higher performance) and so on.

      I have a GT 240. It has 3/4 the functional units of the GTS 250, GDDR3 instead of GDDR5 (you can get a GDDR5 model now, but you couldn't when I bought it) and yet provides 3/4 the performance of the GTS. The memory bandwidth is clearly only an issue when you actually need that much bandwidth, which you don't if you're pushing slightly less polys etc. As long as the connection to memory is wide enough it won't be a problem for the low- to mid-range market they're aiming for.

      I also worry this could lead to fragmentation of the market. [...] Well this could change that. APUs might need a drastically different development structure.

      They might?

    • I don't see why APUs need to be seen differently than discrete cards from a software point of view. AMD has made it abundantly clear that Llano is using a variant of their current Radeon architecture, and all the hardware is and will remain abstracted anyway (through DirectX mainly).

      I'm sure there are specifics to an APU, and that apps would benefit, possibly greatly benefit, from addressing them in a more "native" way. But the same can surely be said of the discrete AMD and nVidia cards, and nobody i

    • by LWATCDR ( 28044 )

      They do address this, but I suspect there will always be room for high-end GPUs, or at least there will be for a long time. APUs are going to target the good-enough category first. If they are good enough for 1080p video and gaming, they will be good enough for 90+% of the market. This will hopefully raise the bar on integrated graphics up to a usable level. For high-end users, the APU could be used for things like transcoding, physics modeling, and other GPU-friendly tasks while the graphic

    • Comment removed based on user account deletion
    • TFA says it can do 30GB/s memory transfers, while the CPU functions only need at most 12GB/s. 30GB/s isn't the fastest ever for a GPU, but it's quite respectable.

      Maybe I'm wrong, but it looks to me like it can cope with either CPU or GPU workloads without recompilation needed for either, and it can use most of the GPU silicon for parallelizable math computations with very little extra effort compared to most other GPUs.
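
As a sanity check on the 30 GB/s figure: if the APU is paired with dual-channel DDR3-1866 (an assumption about the memory configuration, not something stated in TFA), the theoretical peak works out to roughly that number.

```c
/* Back-of-the-envelope check of the ~30 GB/s figure, assuming dual-channel
 * DDR3-1866 (an assumption about the memory pairing, not something stated
 * in TFA): peak = channels * bytes_per_transfer * transfers_per_second. */
#include <stdio.h>

int main(void)
{
    double channels           = 2.0;      /* dual channel */
    double bytes_per_transfer = 8.0;      /* 64-bit channel width */
    double transfers_per_sec  = 1866e6;   /* DDR3-1866 */

    double gb_s = channels * bytes_per_transfer * transfers_per_sec / 1e9;
    printf("theoretical peak: %.1f GB/s\n", gb_s);   /* ~29.9 GB/s */
    return 0;
}
```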

  • by psergiu ( 67614 ) on Friday June 17, 2011 @04:38AM (#36472302)

    ... and congratulated AMD for rediscovering SGI's O2 [wikipedia.org] Unified Memory Architecture [wikipedia.org].

    PS: The IBM PCjr (1984) & Commodore Amiga (1985) were actually the first ones to use UMA. Could this mean we will have "Chip RAM" & "Fast RAM" again? :)

    • Could this mean we will have "Chip RAM" & "Fast RAM" again? :)

      That would actually make sense, given the current difference in graphics card RAM speed/cost vs system RAM speed/cost.

    • by Anonymous Coward

      What? The BBC Micro (1981) had shared graphics memory, as did many of its contemporaries (e.g. VIC-20, ZX81, Spectrum). I believe the Acorn Atom (1980) also did.

      • And the Apple II had it in 1977.

      • by psergiu ( 67614 )

        At least the ZX81 & Spectrum had a fixed address for video RAM. With the O2's UMA you could play a movie by just filling up the RAM with the uncompressed movie frames and moving the start address of the framebuffer at each vertical refresh. AFAIK, you could do the same on the Amiga (in Chip RAM), but there you were also able to change the address and the resolution mid-frame - I have seen screens where the top part was high-res low-color and the bottom part low-res high-color.
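
A minimal sketch of the trick being described, with made-up register names (FB_BASE, VSYNC_FLAG) standing in for whatever the O2 or Amiga actually exposes; real hardware has its own registers and timing rules. Playback is just repointing the scan-out base at each vblank, with no pixel copying.

```c
/* Sketch of the technique described above, with made-up register names
 * (FB_BASE, VSYNC_FLAG). Uncompressed frames sit back-to-back in RAM, and
 * "playback" is just repointing the scan-out base address at each vertical
 * refresh -- no copying of pixel data. */
#include <stdint.h>
#include <stddef.h>

#define FRAME_BYTES (640u * 480u * 4u)   /* one 640x480 32-bit frame */

/* Hypothetical memory-mapped registers. */
static volatile uint32_t FB_BASE;        /* address the display scans out from */
static volatile uint32_t VSYNC_FLAG;     /* set by hardware at vertical refresh */

static void wait_for_vblank(void)
{
    while (!VSYNC_FLAG) { /* spin until the hardware flags the refresh */ }
    VSYNC_FLAG = 0;
}

void play(uint32_t frames_base, size_t nframes)
{
    for (size_t i = 0; i < nframes; i++) {
        wait_for_vblank();
        /* No pixel copying: just move the framebuffer start to frame i. */
        FB_BASE = frames_base + (uint32_t)(i * FRAME_BYTES);
    }
}
```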

  • But how good will the new architecture be for processing Bitcoin blocks?

    That's why some of AMD's current high-end GPUs are hard to find.

  • Does it have WebGL support? i.e., address space protection and preemption support/kernel mode for shader programs?
  • Maybe someone who read TFA could chime in. TFS mentioned unified address space, but not necessarily unified memory access, right? It could be just another virtual memory paging mechanism...

  • Will it run Linux? (Score:5, Informative)

    by vigour ( 846429 ) on Friday June 17, 2011 @06:55AM (#36472782)

    Will it run Linux?

    I'm not being facetious; I got stung by Nvidia's lack of support [phoronix.com] for their Optimus [nvidia.com] graphics on my ASUS U30JC.

    Thankfully Martin Juhl [martin-juhl.dk] has been working on a solution using VirtualGL, which gives us the use of our Nvidia cards under Linux [github.com]

    • by fuzzyfuzzyfungus ( 1223518 ) on Friday June 17, 2011 @07:25AM (#36472960) Journal
      I would (given ATI's historically somewhat weak driver team) be wholly unsurprised to see some rather messy teething pains; but (given AMD's historical friendliness, and the long-term trajectory of this plan), I suspect that it will actually be a boon to Linux and similar.

      The long term plan, it appears, would be to integrate the GPU sufficiently tightly with the CPU that it becomes, in effect, an instruction-set extension specialized for certain tasks, like SSE on steroids. If they reach that point, you'll basically have a CPU where running OpenGL "in software" is the same as running it "in hardware" on the embedded graphics board, because the embedded graphics board is simply the hardware implementation of some of the available CPU instructions, along with a few displayport interfaces and some monitor-management housekeeping.
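
To put the "SSE on steroids" analogy in concrete terms: an ISA extension reaches programmers through the compiler (intrinsics or autovectorization) rather than through a driver stack, as in the small SSE example below. If the GPU really does end up as "just more instructions", its support story would presumably look more like this than like today's GPU drivers.

```c
/* The "SSE on steroids" analogy in concrete terms: an ISA extension is
 * reached through the compiler (intrinsics or autovectorization), not a
 * driver stack. One SSE instruction here performs four float additions. */
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

int main(void)
{
    __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);     /* {1, 2, 3, 4} */
    __m128 b = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f); /* {10, 20, 30, 40} */
    __m128 c = _mm_add_ps(a, b);                       /* four adds in one op */

    float out[4];
    _mm_storeu_ps(out, c);
    printf("%.0f %.0f %.0f %.0f\n", out[0], out[1], out[2], out[3]);  /* 11 22 33 44 */
    return 0;
}
```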

      I'd be unsurprised, as with Optimus, to see some laptops released with an embedded/discrete GPU combination that is fucked in one way or another under anything that isn't the latest Windows, possibly making the discrete invisible, possibly forcing you to run the discrete all the time, or some other dysfunctional situation; but I'd tend to be optimistic about the long term: GPU driver support has always been a sore spot. Compiler support for CPU instructions, on the other hand, has generally been pretty good.
      • by vigour ( 846429 )

        ...but I'd tend to be optimistic about the long term: GPU driver support has always been a sore spot. Compiler support for CPU instructions, on the other hand, has generally been pretty good.

        Excellent point!

  • Does this Fusion APU multitask so that it can run 2 or more kernels at once (with no worries of the watchdog kicking in and stopping >5 sec kernels)?

    • by import ( 40570 )

      From what I understand (I attended the AMD summit in question), Llano cannot multitask natively, although through the driver you should be able to do it, and much more efficiently than in the past. I believe setup time for kernels has been drastically reduced with Llano, since there's no PCIe layer. Their future APUs will be introducing hardware scheduling, so this will get better then...

  • Anyone else notice the similarity between Llano's and Arrandale's memory controller configuration, i.e., that both put the MC on the GPU and have the CPU talk to the GPU via some protocol for data? Okay, in Llano's case there's the option of going directly to memory through WCs but still.

    And then, this FSA crap seems to be going in the direction of Sandy Bridge, i.e., a unified L3 cache... as much as I like AMD, they do seem like they're following in Intel's footsteps. This new architecture reminds me a l
