AMD Reveals Plans to Move Beyond the Core Race
J. Dzhugashvili writes "The Tech Report has caught wind of AMD's plans for processors over the coming years. Intel may be counting on cramming 'tens to hundreds' of cores in future CPUs, but AMD thinks the core race is just a repeat of the megahertz race that took place a few years ago. Instead, AMD is counting on Accelerated Processing Units, chips that mix and match general-purpose CPU cores with dedicated application processors for graphics and other tasks. In the meantime, AMD is cooking up some new desktop and mobile processors that it hopes will give Intel a run for its money."
Re:Same old. (Score:5, Interesting)
AMD is obviously smaller, so it has fewer resources... but with those Alpha engineers, they're going to keep pushing hard. With business direction like this, it's just a matter of time before AMD takes over. They've been having some really good ideas, and given a few more over the next few years, the innovators may win. And no, I'm not an AMD fanboi, but I have talked to architects from IBM and Intel, and they concur.
Integrated graphics.. (Score:3, Interesting)
If anyone can give me any insight here...please speak up.
Thanks
- I post interesting things or short articles I write here [wi-fizzle.com]
Dedicated processors for "other" tasks (Score:2, Interesting)
Re:Integrated graphics.. (Score:3, Interesting)
Just as the floating point coprocessor became the FPU section of the processor, it makes sense to give future processors the ability to do the common operations that are now done by graphics cards.
Things like matrix multiplication (which vector extensions like SSE3 help accelerate) are used all over the place in graphics, sound, and, well, virtually anything that eats up CPU power these days. Doing this stuff serially in a traditional general-purpose CPU takes forever, but it's blazing fast if you do it in specially designed parallel hardware.
You might think that having hardware that just does matrix multiplication limits your processor to only certain domains, but it makes sense to have a desktop processor that's fast at desktop tasks: playing games, ripping and encoding video/audio, running Skype, raytracing, and using Photoshop. There will still be server-class processors that are good at general-purpose, non-mathy things like serving databases, but it just doesn't make sense to use a Xeon in a desktop when an AMD/ATI integrated chip will do all the stuff you want faster and cheaper.
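To make the serial-versus-parallel point concrete, here's a minimal C++ sketch using SSE intrinsics (the four-wide multiply is just an illustration of the idea, not the dedicated matrix hardware being imagined):

    #include <xmmintrin.h>  // SSE intrinsics (GCC/MSVC/ICC on x86)
    #include <cstdio>

    int main() {
        float a[4] = {1, 2, 3, 4}, b[4] = {5, 6, 7, 8};

        // Scalar version: four multiplies, one at a time.
        float c_scalar[4];
        for (int i = 0; i < 4; ++i)
            c_scalar[i] = a[i] * b[i];

        // SSE version: all four multiplies in a single instruction.
        __m128 va = _mm_loadu_ps(a);
        __m128 vb = _mm_loadu_ps(b);
        __m128 vc = _mm_mul_ps(va, vb);
        float c_simd[4];
        _mm_storeu_ps(c_simd, vc);

        for (int i = 0; i < 4; ++i)
            printf("%g %g\n", c_scalar[i], c_simd[i]);
        return 0;
    }

One instruction doing four multiplies is the same win, in miniature, that dedicated parallel hardware scales up.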
Naturally... not all processes are (Score:2, Interesting)
A lot of other software is not parallelizable: office productivity apps, operating systems... (these can benefit, but ultimately they'll hit a limit).
The other question is: when you put hundreds of cores on a chip, how do you handle the logistics of accessing cache, or cache coherency (if you even require it)? My guess is it'll get up to 16 or so cores before they run into serious cache latency issues.
I think the other question is: how long until software catches up? We're at a point where hardware has been carrying software. Software is, for the most part, coded pretty crappily (thanks to out-of-order cores). When are software designers going to get with the program and leverage the hardware more? I know hardware is very dynamic, but now we're seeing hardware reach its limit, and multiple cores don't do anything unless some key multi-threaded apps are running.
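Just to illustrate what "leveraging the hardware" actually demands of programmers, here's a toy C++ sketch (the names and the four-thread split are mine, purely illustrative) that spreads a sum across threads; without this kind of explicit work, the extra cores contribute nothing:

    #include <thread>
    #include <vector>
    #include <numeric>
    #include <cstdio>

    // Sum a big array in parallel: each thread gets a slice.
    long long parallel_sum(const std::vector<int>& data, unsigned nthreads) {
        std::vector<long long> partial(nthreads, 0);
        std::vector<std::thread> workers;
        size_t chunk = data.size() / nthreads;
        for (unsigned t = 0; t < nthreads; ++t) {
            size_t begin = t * chunk;
            size_t end = (t + 1 == nthreads) ? data.size() : begin + chunk;
            workers.emplace_back([&, begin, end, t] {
                partial[t] = std::accumulate(data.begin() + begin,
                                             data.begin() + end, 0LL);
            });
        }
        for (auto& w : workers) w.join();  // wait for every core's slice
        return std::accumulate(partial.begin(), partial.end(), 0LL);
    }

    int main() {
        std::vector<int> data(1 << 20, 1);
        printf("%lld\n", parallel_sum(data, 4));  // 1048576
        return 0;
    }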
Hybrid Graphics & the Cell roadmap. (Score:5, Interesting)
The most interesting thing for me was the mention of "Hybrid Graphics." It also looks like they're extending the Fusion concept along Cell-like lines, with additional cores for purposes other than CPU or GPU work.
Their roadmap through 2008 only talks about up to quad core, although I assume this means CPU cores (I'm not sure I would accept a CPU+GPU on a single die branded as a 'dual-core' chip). I think the Cell has eight cores, but due to yield issues not all of them are enabled in a PS3, and they are not all functionally equivalent. I don't know if this is the case for the Cell-based IBM blades, though.
The roadmap basically looks like periodic refreshes of the product line, reducing power consumption with each iteration, which is where I think Intel has a head start on AMD. However, if AMD can sort out the yield issues, and compilers and developers begin to take advantage of these "associate" cores in Cell and future AMD architectures, then Intel may turn out to have missed a trick, as they did with x86-64.
Re:Integrated graphics.. (Score:1, Interesting)
Also, the only reason you'd want an FPGA is if application developers wanted to program it specifically for their algorithms (which adds usually unneeded development time).
On a side note, it might be cool to get those self-configuring CPUs which optimize their own codepath for the application & data (although I haven't heard anything about them recently).
most of them idle most of the time? (Score:3, Interesting)
The two big obstacles to getting better performance from parallelization are that (1) some problems aren't parallelizable, and (2) programmers, languages, and development tools are still stuck in the world of non-parallel programming. So from that point of view, this might make more sense than simply making a computer with a gazillion identical, general-purpose CPUs.
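Obstacle (1) has a well-known ceiling, Amdahl's law: with a parallel fraction p and n processors, the speedup is 1 / ((1 - p) + p/n). A toy C++ calculation (purely illustrative) shows how quickly the serial part dominates:

    #include <cstdio>

    // Amdahl's law: if a fraction p of a program parallelizes and the
    // rest is serial, n processors give a speedup of 1 / ((1-p) + p/n).
    double amdahl(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main() {
        // Even with 95% parallel code, a gazillion cores tops out at 20x.
        for (int n : {2, 4, 16, 256, 65536})
            printf("n=%6d  speedup=%.2f\n", n, amdahl(0.95, n));
        return 0;
    }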
On the other hand, I'd imagine that most of these processors would sit idle most of the time. For instance, right now I'm typing this slashdot post. If I had a video card with a fancy GPU (which I don't), it would still be drawing current, but sitting idle 99.99% of the time, since displaying characters on the screen as the user types is something that could be done back in the days of 1 MHz CPUs. Suppose I have a special-purpose physics processor. It's also drawing current right now, but not doing anything useful. Ditto for the speech-recognition processor, the artificial intelligence processor, the crypto processor, ...
There are also a lot of applications that don't lend themselves to either multiple general-purpose processors or multiple special-purpose CPUs. One example that comes to mind is compiling.
On a server, you're probably either I/O bound, or you're running a bunch of CGI scripts simultaneously, in which case multiple general-purpose processors are what you need.
For almost all desktop applications except gaming, performance is a software issue, not a hardware issue. I was word-processing in 1982 with a TRS-80, and it wasn't any less responsive than Abiword on my current computer. Since I'm not into gaming, my priorities would be (1) to have a CPU that draws a low amount of power, and (2) to have Linux do a better job of cooperating with my hardware on power management. I would also like to have parallelized versions of certain software, but that's going to take a lot of work. For example, the most common CPU-heavy thing I do is compiling long books in LaTeX; a massively parallel version of LaTeX would be very cool, but I'm not holding my breath.
Intel may be early (Score:3, Interesting)
Intel's standpoint seems to be that there's a world of data crunching lurking in all our computers (automated photo sorting, face recognition, and photo-realistic rendering), but none of these strike me as killer apps waiting to happen. All are things we could get used to and come to depend on, but I don't think any of them are being held back just by our computing capacity, although photo-realistic rendering may be close. I'm pretty sure these aren't solved problems yet. Even if we were itching to do all this, one can only sort so many photos. It seems a bit wasteful to have all that power waiting around most of the time. Are we really that close to a world in which computing power is so plentiful that we can have that kind of ability even though we hardly ever use it?
On the other hand, AMD's approach seems to have more immediate application. Video/audio encoding and other parallel processes are things that many of us actually do frequently. A couple hundred cores could be pressed into use for this, but that seems much less elegant than purpose-built hardware.
I don't know which approach will be best in the long run. Probably both. It does seem to me that Intel is at best a few years too early to be hyping large numbers of cores.
Re:Integrated graphics.. (Score:3, Interesting)
This is what human minds do, but CPUs are far from this goal, not to mention the nightmare of managing it as complexity increases.
Re:Same old. (Score:2, Interesting)
Doesn't that assume that Intel doesn't change their strategy? It seems to me that Intel has adopted a strategy that is designed more for marketing than for innovation. This seems to have worked VERY well for them, but it doesn't mean they couldn't change their strategy in the future. It's interesting to note that whenever AMD pulls ahead in processor performance, Intel catches up, and vice versa.
As for whether or not AMD will beat Intel, I favor Intel's marketing strategy, not because it's what I would buy, but because it's what will be easier to sell to the masses, and that's where the lion's share of the money is.
Re:hyper transport (Score:5, Interesting)
The real issue is feature size. AMD is hurt badly by being consistently behind on that. Intel's been at 65 nm for a while now, and AMD is only now releasing 65 nm parts. Intel will be at 45 nm in some lines by this time next year, while AMD is a year behind them. A smaller feature size brings with it higher yields (more chips per wafer) once you work the kinks out, lower heat, and more transistors per chip. That's the game winner right there, unless one of them shoots itself in the foot again, as Intel did with NetBurst.
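The back-of-the-envelope arithmetic behind that (a toy calculation that ignores yield curves, defect density, and the fact that real shrinks aren't perfectly linear): a 65 nm to 45 nm move scales linear dimensions by 45/65, and die area by the square of that.

    #include <cstdio>

    int main() {
        // Same design shrunk from 65 nm to 45 nm: linear dimensions
        // scale by 45/65, so die area scales by the square of that.
        double shrink = 45.0 / 65.0;
        double area_ratio = shrink * shrink;  // ~0.48
        printf("area ratio: %.2f -> roughly %.1fx dies per wafer\n",
               area_ratio, 1.0 / area_ratio);  // ~2.1x
        return 0;
    }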
Re:Hybrid Graphics & the Cell roadmap. (Score:3, Interesting)
The Cell has eight SPUs, which are stripped-down vector processors, and one PPU, which is an only mildly stripped-down PPC core. On the PS3, one of the SPUs is disabled to increase yield.
The problem with the Cell is that the SPUs are hell to program if you have a problem that doesn't fit nicely in the 256 KB of local RAM each SPU has, and most programming tasks these days don't. If the work is essentially DSP work, you're good to go, but hopefully AMD learns from Sony's mistake and still allows all cores random access to memory.
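For a feel of what fitting in 256 KB does to your code, here's a toy C++ sketch (illustrative only; real SPU code stages data into local store with DMA transfers, not plain copies) of the chunked, streaming style the local store forces on you:

    #include <vector>
    #include <algorithm>
    #include <cstdio>

    // Pretend local store: 256 KB worth of floats.
    const size_t LOCAL_STORE = 256 * 1024 / sizeof(float);

    // Stream a large buffer through the small scratch area in chunks,
    // instead of dereferencing main memory at random.
    void process(std::vector<float>& data) {
        std::vector<float> scratch(LOCAL_STORE);
        for (size_t off = 0; off < data.size(); off += LOCAL_STORE) {
            size_t n = std::min(LOCAL_STORE, data.size() - off);
            std::copy(data.begin() + off, data.begin() + off + n,
                      scratch.begin());               // "DMA in"
            for (size_t i = 0; i < n; ++i)            // DSP-style work
                scratch[i] *= 2.0f;
            std::copy(scratch.begin(), scratch.begin() + n,
                      data.begin() + off);            // "DMA out"
        }
    }

    int main() {
        std::vector<float> data(1 << 20, 1.0f);
        process(data);
        printf("%g\n", data[0]);  // 2
        return 0;
    }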
However, L2 cache eats up a *lot* of die space, so who knows what nightmares we have coming.
Re:Same old. (Score:3, Interesting)
Correct. Intel still has the lion's share of the market, and they want to keep it that way. It's interesting how they "cheat" by packaging two dies together and calling it dual-core or quad-core, just to get the technology out "first" and keep the investors happy.
Cheat? The result is four cores in one socket. Things like "they cheated!" and how many nm the process is are really irrelevant. What matters is the end result: performance, power usage, memory bandwidth. That AMD can't do it yet, and has grown slow and complacent after a couple of years of success as the top performer, will hopefully be fixed soon, but as of now, they're badly lagging.