Intel Says to Prepare For "Thousands of Cores"
Impy the Impiuos Imp writes to tell us that in a recent statement Intel has revealed their plans for the future, and they go well beyond the traditional processor model. Suggesting developers start thinking about tens, hundreds, or even thousands of cores, Intel is pushing for a massive evolution in the way processing is handled. "Now, however, Intel is increasingly 'discussing how to scale performance to core counts that we aren't yet shipping...Dozens, hundreds, and even thousands of cores are not unusual design points around which the conversations meander,' [Anwar Ghuloum, a principal engineer with Intel's Microprocessor Technology Lab] said. He says that the more radical programming path to tap into many processing cores 'presents the "opportunity" for a major refactoring of their code base, including changes in languages, libraries, and engineering methodologies and conventions they've adhered to for (often) most of their software's existence.'"
The thing's hollow - it goes on forever (Score:5, Funny)
- and - oh my God - it's full of cores!
Imagine a Beowulf cluster.... (Score:5, Funny)
oh nevermind, what's the point?
it's.... (Score:5, Funny)
OVER 9000!!!!!!11111one
Re:The thing's hollow - it goes on forever (Score:5, Funny)
Don't give up! Stay the cores!
Re:The thing's hollow - it goes on forever (Score:5, Funny)
No, not quite. It's CORES all the way down!
Re:The thing's hollow - it goes on forever (Score:4, Informative)
2001: A Space Odyssey, by Arthur C. Clarke.
Great book.
Re:The thing's hollow - it goes on forever (Score:5, Funny)
Re:The thing's hollow - it goes on forever (Score:5, Informative)
And before they made it into a movie it was an interesting short story. http://en.wikipedia.org/wiki/The_Sentinel_(short_story) [wikipedia.org]
If you'd like to read it, it seems to be this PDF: http://econtent.typepad.com/TheSentinel.pdf [typepad.com]
Not Sure I'm Getting It (Score:5, Insightful)
Re:Not Sure I'm Getting It (Score:5, Informative)
Then you take the tasks that can be broken up over multiple cores (Ray Tracing anyone?) and fill the rest of your cores with that.
Re:Not Sure I'm Getting It (Score:5, Insightful)
From a practical standpoint, Intel is right that we need vastly better developer tools and that most things that require ridiculous amounts of compute time can be parallelized if you put some effort into it.
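To make that concrete, here's a rough C++ sketch (illustrative only; shade() is a made-up stand-in for whatever per-pixel work you're actually doing) that splits the rows of an image across however many hardware threads the machine reports:

    #include <algorithm>
    #include <cmath>
    #include <thread>
    #include <vector>

    // Made-up stand-in for the per-pixel work (e.g. shading one sample).
    float shade(int x, int y) { return std::sin(x * 0.01f) * std::cos(y * 0.01f); }

    int main() {
        const int width = 1920, height = 1080;
        std::vector<float> image(width * height);

        // One worker per hardware thread; each worker takes an interleaved set of rows.
        const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
        std::vector<std::thread> pool;
        for (unsigned w = 0; w < workers; ++w) {
            pool.emplace_back([&image, w, workers, width, height] {
                for (int y = static_cast<int>(w); y < height; y += static_cast<int>(workers))
                    for (int x = 0; x < width; ++x)
                        image[y * width + x] = shade(x, y);
            });
        }
        for (auto& t : pool) t.join();
        // No two workers ever touch the same pixel, so no locks are needed.
    }

Because the rows are independent, this kind of job scales about as well as the memory bus allows.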
Re:Not Sure I'm Getting It (Score:4, Informative)
Re:Not Sure I'm Getting It (Score:4, Insightful)
Are you crazy? Context switches are the slowdown in multitasking OSes.
Unfortunately, multitasking OSes are not the slowdown in most tasks, exceptions noted of course.
Re:Not Sure I'm Getting It (Score:5, Informative)
True but misleading. The major cost of task switching is a hardware-derived one. It's the cost of blowing caches. The swapping of CPU state and such is fairly small by comparison, and the cost of blowing caches is only going up.
Re: (Score:3, Interesting)
Of course, the billion threads design doesn't solve the "how do n cores efficiently share x amount of cache" problem at all.
Re:Not Sure I'm Getting It (Score:5, Insightful)
Why wouldn't each core have its own cache? It only needs to cache what it needs for its job.
Re:Not Sure I'm Getting It (Score:5, Interesting)
Yes, but if you have 1000 cores, each with 64 KB of cache, you start to run into memory-throughput problems when computing on massively parallel data.
Memory throughput has been the Achilles' heel of graphics processing for years now, and as we all know, splitting up a graphics screen into smaller segments is simple. So GPUs went massively parallel long before CPUs; in fact, you will soon be able to get over 1000 stream processing units in a single desktop graphics card.
So the real problem is memory technology: how can a single memory module consistently feed 1000 cores, for instance if you want to do real-time n-pass encoding of an HD video stream... while playing an FPS online, running IM software, and running a heavyweight anti-virus suite?
I have a horrible, ugly feeling that you'll never be able to get a system that can reliably do all of that at the same time, simply because they'll skimp on memory tech or interconnects, so most of the capabilities of a 1,000-core system will be wasted.
Re:Not Sure I'm Getting It (Score:5, Interesting)
Now that 64-bit processors are so common, perhaps operating systems can spare some virtual address space for performance benefits.
The OPAL operating system [washington.edu] was a University of Washington research project from the 1990s. OPAL uses a single address space for all processes. Unlike Windows 3.1, OPAL still has memory protection, and every process (or "protection domain") has its own pages. The benefit of sharing a single address space is that you don't need to flush the cache (because the virtual-to-physical address mappings do not change when you context switch). Also, pointers can be shared between processes because their addresses are globally unique.
Re:Not Sure I'm Getting It (Score:5, Informative)
It's a huge kludge for idiotic processors (like ARM9) that don't have physically-tagged caches. On all non-incredibly-sucky processors, we have physically-tagged caches, so having every app get its own address space, or having multiple apps share physical pages at different virtual addresses, all of these are fine.
Problems with SAS:
Most people... even people using ARM... are using processors with physically-tagged caches. Please, Please, Please, don't further the madness of single-address-space environments. There are still people encouraging this crime against humanity.
Maybe I'm a bit bitter, because some folks in my company have drunk the SAS kool-aid. But believe me, unless you have ARM9, it's not worth it!
Re:Not Sure I'm Getting It (Score:5, Informative)
No. I/O is the slowdown in multitasking OSes.
Re:Not Sure I'm Getting It (Score:5, Insightful)
I concur; furthermore, I'd like to see one core per pixel. That would certainly solve your high-end gaming issues.
Re:Not Sure I'm Getting It (Score:5, Insightful)
At the moment, I'm looking at Slashdot in Firefox, while listening to an mp3. I'm only using two out of my four cores, and I have 3% CPU usage.
Maybe when I post this, I might use a third core for a little while, but how many cores can I actually usefully use?
I can break a password protected Excel file in 30 hours max with this computer, and a 10000 core chip might reduce this to 43 seconds, but other than that, what difference is it going to make?
Re:Not Sure I'm Getting It (Score:4, Interesting)
That's what I'm curious about. Having 2 cores is enough for most consumers: one for the OS and background tasks and one for the application you're using. And even that is overkill for most users.
Personally, I like to multitask and am really going to love it when we get to the point where I can have the OS on one core and then one core for each of my applications. But even that probably tops out at fewer than 10 cores.
Certain types of tasks just don't benefit from extra cores, and probably never will. Things which have to be done sequentially are just not going to see any improvement with extra cores. And other things like compiling software may or may not see much of an improvement depending upon the design of the source.
But really, it's mainly things like raytracing and servers with many parallel connections which are the most likely to benefit. And servers are still bound by bandwidth, probably well before they would be hitting the limit on multi cores anyways.
Re:Not Sure I'm Getting It (Score:5, Insightful)
Before, having 1 core was enough, and 512 MB of RAM was enough for most consumers. Computing power grows, and software developers make use of that additional power. However, this will mainly affect the gaming industry.
Bill Gates was just misquoted (Score:5, Funny)
Re:Not Sure I'm Getting It (Score:5, Funny)
I can break a password protected Excel file in 30 hours max with this computer, and a 10000 core chip might reduce this to 43 seconds, but other than that, what difference is it going to make?
29 hours 59 minutes 17 seconds?
Re:Not Sure I'm Getting It (Score:5, Insightful)
This is, IMHO, the wrong question to be asking. Asking how current tasks will be optimized to take advantage of future hardware makes the fundamental flawed assumption that the current tasks will be what's considered important once we have this kind of hardware.
But the history of computers has shown that the "if you build it, they will come" philosophy applies to the tasks that people end up wanting to accomplish. It's been seen time and again that new abilities for using computers wait until we've hit a certain performance threshold, whether it's CPU, memory, bandwidth, disk space, video resolution, or whatever, and then become the things we need our computers to do.
Take, for instance, the huge success of mp3's. There was a time not so long ago when people were limited to playing music off a physical CD. This wasn't because there was no desire amongst computer users to listen to digital files that could be stored locally or streamed off the internet. It was because computer users did not know yet that they had the desire to do it. But technology advanced to the point where a) processors became fast enough to decode mp3's in real time without using the whole CPU and b) hard drives grew to the point where we had the capacity to store files that are 10% of the size of the files on the CD.
Similarly, it's likely that when we reach the point where we have hundreds or thousands of cores, new tasks will emerge that take advantage of the new capabilities of the hardware. It may be that those tasks are limited in some other way by one of the other components we use or by the as yet non-existent status of some new component, but it's only important that multiple cores play a part in enabling the new task.
In the near term, you can imagine a whole host of applications that would become possible when you get to the point where the average computer can do real-time H.264 encoding without affecting overall system performance. I won't guess at what might be popular further down the road, but there will be people who will think of something to do with those extra cores. And, in hindsight, we'll see the proliferation of cores as enabling our current computer-using behavior.
Re:Not Sure I'm Getting It (Score:4, Informative)
"Take, for instance, the huge success of mp3's. There was a time not so long ago when people were limited to playing music off a physical CD. This wasn't because there was no desire amongst computer users to listen to digital files that could be stored locally or streamed off the internet. It was because computer users did not know yet that they had the desire to do it. But technology advanced to the point where a) processors became fast enough to decode mp3's in real time without using the whole CPU"
I started making MP3s with a 486 DX at 75 MHz.
I could decode in real time on a 486 DX 75. As I recall, encoding took a bit of time, and I only had a 3 GB HDD that had been an upgrade to the system...
MP3 uses an asymmetric codec: it takes more CPU to encode than to decode. If your MP3 player doesn't run correctly on a 486, it's because they designed in features not strictly needed to decode an MP3 stream.
Oh hey, I have an RCA Lyra MP3 player that isn't even as fast as a 486, but its decoder was designed specifically for MP3 decoding.
Ogg requires a beefier decoder; that's half the problem with getting Ogg support into devices not made for decoding video streams.
Re:Not Sure I'm Getting It (Score:5, Informative)
"Because each core is no longer task switching. Once you have more cores than tasks you can remove all the context switching logic and optimize the cores to run single processes as fast as possible.
Then you take the tasks that can be broken up over multiple cores (Ray Tracing anyone?) and fill the rest of your cores with that."
Unfortunately, all this is going to lead to bus and memory bandwidth contention; you're just shifting the burden from one point to another. Although there is a 'penalty' for task switching, there is an even greater bottleneck at the bus and memory-bandwidth level.
IMHO, Intel would have to release a CPU on a card with specialized RAM chips and segment the RAM the way GPUs do to get anything out of multicore over the long term; RAM is not keeping up, and the current architecture for PC RAM is awful for multicore. CPU speed is far outstripping bus and memory bandwidth. I am quite dubious of multi-core architecture; there are fundamental limits in the geometry of circuits. I'd be sinking my money into materials research, not gluing cores together and praying the CS and math guys come up with solutions that take advantage of it.
The whole history of human engineering and tool use is to take something extremely complicated, offload the complexity, and compartmentalize it so that it's manageable. I see the opposite happening with multi-core.
Re:Not Sure I'm Getting It (Score:5, Informative)
Because each core is no longer task switching. Once you have more cores than tasks you can remove all the context switching logic and optimize the cores to run single processes as fast as possible.
OK, so now the piece that's running on each core runs really really fast . . . until it needs to wait for or communicate with the piece running on some other core. If you can do your piece in ten instructions but you have to wait 1000 for the next input to come in, whether it's because your neighbor is slow or because the pipe between you is, then you'll be sitting and spinning 99% of the time. Unfortunately, the set of programs that decompose nicely into arbitrarily many pieces that each take the same time (for any input) doesn't extend all that far beyond graphics and a few kinds of simulation. Many, many more programs hardly decompose at all, or still have severe imbalances and bottlenecks, so the "slow neighbor" problem is very real.
Many people's answer to the "slow pipe" problem, on the other hand, is to do away with the pipes altogether and have the cores communicate via shared memory. Well, guess what? The industry has already been there and done that. Multiple processing units sharing a single memory space used to be called SMP, and it was implemented with multiple physical processors on separate boards. Now it's all on one die, but the fundamental problem remains the same. Cache-line thrashing and memory-bandwidth contention are already rearing their ugly heads again even at N=4. They'll become totally unmanageable somewhere around N=64, just like the old days and for the same reasons. People who lived through the last round learned from the experience, which is why all of the biggest systems nowadays are massively parallel non-shared-memory cluster architectures.
If you want to harness the power of 1000 processors, you have to keep them from killing each other, and they'll kill each other without even meaning to if they're all tossed in one big pool. Giving each processor (or at least each small group of processors) its own memory with its own path to it, and fast but explicit communication with its neighbors, has so far worked a lot better except in a very few specialized and constrained cases. Then you need multi-processing on the nodes, to deal with the processing imbalances. Whether the nodes are connected via InfiniBand or an integrated interconnect or a common die, the architectural principles are likely to remain the same.
Disclosure: I work for a company that makes the sort of systems I've just described (at the "integrated interconnect" design point). I don't say what I do because I work there; I work there because of what I believe.
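To make the "explicit communication" point concrete, here's a rough C++ sketch (my own illustration, not any particular product's API) of two workers exchanging values through a small locked queue instead of both banging on one big shared structure:

    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <optional>
    #include <queue>
    #include <thread>

    // A tiny one-way "pipe": the only shared state is the queue, and every access
    // to it is explicit and serialized, instead of threads mutating one big
    // shared structure behind each other's backs.
    template <typename T>
    class Channel {
        std::queue<T> q_;
        std::mutex m_;
        std::condition_variable cv_;
        bool closed_ = false;
    public:
        void send(T v) {
            { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
            cv_.notify_one();
        }
        void close() {
            { std::lock_guard<std::mutex> lk(m_); closed_ = true; }
            cv_.notify_all();
        }
        std::optional<T> recv() {  // blocks until a value arrives or close() is called
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [&] { return !q_.empty() || closed_; });
            if (q_.empty()) return std::nullopt;
            T v = std::move(q_.front());
            q_.pop();
            return v;
        }
    };

    int main() {
        Channel<int> ch;
        std::thread producer([&] {
            for (int i = 0; i < 5; ++i) ch.send(i * i);
            ch.close();
        });
        std::thread consumer([&] {
            while (auto v = ch.recv()) std::printf("got %d\n", *v);
        });
        producer.join();
        consumer.join();
    }

The same pattern maps onto message passing between nodes when the "pipe" is an interconnect instead of a mutex-protected queue.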
Re:Not Sure I'm Getting It (Score:5, Funny)
Re:Not Sure I'm Getting It (Score:4, Insightful)
Re:Not Sure I'm Getting It (Score:5, Funny)
My friends and I have lots of conversations about girls, how to get girls, how to please girls.
What, haven't you guys heard of simulation?
Re:Not Sure I'm Getting It (Score:5, Funny)
Re: (Score:3, Funny)
I think I once figured out that, starting with 3 billion women on the planet, there were about 5 with mutual attraction with me. I think I've found two of them.
Re:Not Sure I'm Getting It (Score:4, Interesting)
IANACS, but if your program structure changes a bit, you can process the two different styles of instructions in different ways, so that the data needed by some sequential group of tasks is already there when it's needed; sort of like doing things 6 steps ahead of yourself when possible. I know that makes no sense on the face of it, but at the machine-code level it means that, by parsing instructions this way, you know that 5 or 6 operations from now you will need register X loaded with byte 121 from location xyz, so while this core plods through the next few instructions, core this.plus.one prefetches the data at memory location xyz into register X... or something like that. That will break up the serialization of the code. There are other techniques as well, and if the machine code is written for multicore machines, it can be executed this way without interpretation by the machine/OS.
There is more than one type of CPU architecture, and principles of execution vary between them; the same goes for RISC vs. CISC. I think the smaller the CPU's instruction set, the more likely it is that serialized tasks can be shared out among cores.
Re:Not Sure I'm Getting It (Score:5, Interesting)
While prefetching data can be done using a single core, your post in this context gives me a cool idea.
Who needs branch prediction when you could just have 2 cores running a thread? Send each one off executing one side of the branch without a break in the pipeline, and sync the wrong core to the correct one once you know the result. You'd still have to wait for the result before any store operations, but you should probably know the branch outcome by then anyway.
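As a toy software analogue of that idea (purely illustrative; real hardware speculation happens at the pipeline level, and slow_condition() here is just a made-up stand-in for a branch whose outcome arrives late), you can start both sides of the branch eagerly and keep whichever one the condition finally picks:

    #include <chrono>
    #include <cmath>
    #include <cstdio>
    #include <future>
    #include <thread>

    // Made-up stand-in for a branch condition whose result arrives late.
    bool slow_condition(double x) {
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
        return x > 1.0;
    }

    double speculate(double x) {
        // Start both candidate computations before the branch direction is known.
        auto taken     = std::async(std::launch::async, [x] { return std::sqrt(x); });
        auto not_taken = std::async(std::launch::async, [x] { return x * x; });
        // Resolve the "branch", keep one result, and throw the other away;
        // the discarded work is exactly what a misprediction costs.
        return slow_condition(x) ? taken.get() : not_taken.get();
    }

    int main() { std::printf("%f\n", speculate(2.0)); }

The wasted half of the work is the price you pay for never stalling on the condition.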
Re:Not Sure I'm Getting It (Score:4, Interesting)
Indeed, and any tasks flagged as repeating can run on a separate core from the cores executing serial instructions, so that IPC lets things that would otherwise happen serially happen coincident with each other. A simple high-level example is reading your process's configuration, which may change at any time due to outside influences. Let that reading happen out of band, since it is not part of the sequential string of instructions executing your code. That way the config data is always correct without your serially oriented code needing to stop and check anything; a value like $window.size is simply kept up to date by a different core.
Sorry if that is not a clear explanation. I just mean to say that since most of what we do is serially oriented, it's difficult to see, at the microscopic level of the code, how it can be broken up into parallel tasks. Even a 16% decrease in processing time is significant. Building OSes and compilers to optimize for this would improve execution times greatly, just as threading does today. If threads are written correctly for multiple cores, significant time improvements are possible there as well.
Re:Not Sure I'm Getting It (Score:5, Insightful)
That is what most current processors do and use branch prediction for. Even if you have a thousand cores, that's only 10 binary decisions ahead. You need to guess really well very often to keep your cores busy instead of syncing. Also, the further you're executing ahead, the more ultimately useless calculations are made, which is what drives power consumption up in long pipeline cores (which you're essentially proposing).
In reality, parallelism is more likely to be found by better compilers. Programmers will have to be more specific about the kind of loops they want: do you just need something performed on every item in an array, or is order important? No more mindless for-loops for processes that aren't inherently sequential.
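C++17's parallel algorithms are one small step in that direction: you declare that the per-element work is order-independent and let the library decide how to spread it over cores. A minimal sketch (illustrative only):

    #include <algorithm>
    #include <cmath>
    #include <execution>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<double> v(1000000);
        std::iota(v.begin(), v.end(), 0.0);

        // "Do this to every element, order doesn't matter": the par policy lets the
        // implementation split the range across however many cores are available.
        std::for_each(std::execution::par, v.begin(), v.end(),
                      [](double& x) { x = std::sqrt(x); });

        // An order-dependent traversal stays an ordinary serial loop.
        double running = 0.0;
        for (double x : v) running += x;
        return running > 0.0 ? 0 : 1;
    }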
Re: (Score:3, Insightful)
I disagree. Having the compiler analyze loops to find out whether they are trivially parallelizable is easy; there's little need to change the language.
On the other hand, a language that was really desi
Lookahead/predictive branching is one option... (Score:5, Interesting)
I see this move by Intel as a direct follow-up to their plans to negate the processing advantages of today's video cards. Intel wants people running general-purpose code to run it on their general-purpose CPUs, not on their video cards using CUDA or the like. If the future of video game rendering is indeed ray tracing (an embarrassingly parallel algorithm if ever there was one), then this move will also position Intel to compete directly with Nvidia for the raw-processing-power market.
One thing is for sure: there's a lot of coding to do. Very few programs currently make effective use of even 2 cores. Parallelization of code can be quite tricky, so hopefully tools will evolve that make it easier for the typical code monkey who's never written a parallel algorithm in his life.
Re: (Score:3, Informative)
Algorithms that can't be parallelized will not benefit from a parallel architecture. It's really that simple. :( Also, many algorithms that are parallelizable will not benefit from an "infinite" number of cores. The limited bandwidth for communication between cores will usually become a bottleneck at some point.
Re: (Score:3, Insightful)
Re:Not Sure I'm Getting It (Score:5, Insightful)
As a software engineer, I wonder the same thing.
Put simply, the majority of code simply doesn't parallelize well. You can break out a few major portions of it to run as their own threads, but for the most part, programs either sit around and wait for the user, or sit around and wait for hardware resources.
Within that, only the programs that wait for one particular hardware resource, CPU time, even have the potential to benefit from more cores... And while a lot of those might split well into a few threads, most will not scale to more than a handful of cores without a complete rewrite around entirely different algorithms (if such algorithms even exist for the intended purpose).
Re:Not Sure I'm Getting It (Score:5, Insightful)
Re: (Score:3, Informative)
Yup. It's Amdahl's law [wikipedia.org].
This whole many-core hype looks a lot like the gigahertz craze from a few years ago. Obviously they're afraid that there will be no reason to upgrade. 2 or 4 cores, OK: you often (sometimes?) have that many tasks active. But significantly more will only buy you throughput for games, simulations, and similar heavy computations, unless we (IAACS too) rewrite all of our apps under new paradigms like functional programming (e.g. in Erlang [wikipedia.org]), which will only be done if there's a good reason.
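For reference, Amdahl's law caps the speedup from N cores at 1 / ((1 - p) + p/N), where p is the fraction of the runtime that parallelizes. A few throwaway lines of C++ (my own illustration) make the ceiling obvious:

    #include <cstdio>

    // Amdahl's law: speedup with n cores when a fraction p of the runtime parallelizes.
    double amdahl(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

    int main() {
        for (double p : {0.50, 0.90, 0.99})
            for (int n : {2, 4, 64, 1024})
                std::printf("p=%.2f  n=%4d  speedup=%6.1fx\n", p, n, amdahl(p, n));
        // Even with 1024 cores, a 90%-parallel program tops out just below 10x.
    }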
Re:Not Sure I'm Getting It (Score:5, Insightful)
As a software engineer, I wonder the same thing.
Put simply, the majority of code simply doesn't parallelize well. You can break out a few major portions of it to run as their own threads, but for the most part, programs either sit around and wait for the user, or sit around and wait for hardware resources.
Within that, only the programs that wait for one particular hardware resource, CPU time, even have the potential to benefit from more cores... And while a lot of those might split well into a few threads, most will not scale to more than a handful of cores without a complete rewrite around entirely different algorithms (if such algorithms even exist for the intended purpose).
As a software engineer you should know that "most code doesn't parallelize" is very different from "most of the code's runtime can't parallelize", as code size and code runtime are substantially different things.
Look at most CPU intensive tasks today and you'll notice they all parallelize very well: archiving/extracting, encoding/decoding (video, audio), 2D and 3D GUI/graphics/animations rendering (not just for games anymore!), indexing and searching indexes, databases in general, and last but not least, image/video and voice recognition.
So, while your very high-level task is sequential, the *services* it calls or implicitly uses (like GUI rendering), and the smaller tasks it performs, actually would make a pretty good use of as many cores as you can throw at them.
This is good news for software engineers like you and me, as we can write mostly serial code and isolate the slow tasks into routines that we write once and reuse many times.
Re:Not Sure I'm Getting It (Score:4, Insightful)
Obviously just adding more cores does little to speed up individual sequential processes, but it does help with multitasking, which is what I really think is the "killer app" for multi-core processors.
Back in the late '90s (it doesn't feel like "back in..." yet, but I'm willing to admit it was about a decade ago) I decided to build a computer with an Abit BP6 motherboard, two Celeron processors, and lots of RAM instead of a single higher-end processor, because I wanted to be able to multitask properly. My gamer friends mocked me for choosing Celerons, but for the price of a single-processor system I got one that could run several "normal" apps plus one with heavy CPU usage without slowing down, and the extra RAM also helped (I saw lots of people back then go for 128 MB of RAM and a faster CPU instead of "wasting" their money on RAM, and then curse their computer for being slow when it started swapping). There was also the upside of having Windows 2000 run as fast on my computer as Windows 98 did on my friends' computers...
/Mikael
Re:Not Sure I'm Getting It (Score:4, Insightful)
Uh, last time I checked, Python had a single interpreter lock per process which made it unsuitable for heavily multithreaded programs. Java would be a better example of a scalable and multithread-aware language.
Great... (Score:5, Funny)
As if Oracle licensing wasn't complicated enough already...
Memory bandwidth? (Score:5, Interesting)
Re: (Score:3, Interesting)
Memory would have to be completely redefined. Currently, you have one memory bank that is effectively accessed serially.
If you have 1000 cores that depend on the same data, you would have to have a way of multicasting the data to the cores, which could then select the data they want.
Basically, hardware and software architecture has to be completely redefined.
It is not impossible, though. Just look around. The universe computes in parallel all the time.
Re:Memory bandwidth? (Score:5, Insightful)
Memory would have to be completely redefined. Currently, you have one memory bank that is effectively accessed serially.
Yes, in Intel land. AMD has this thing called NUMA. What do you think "HyperTransport" means?
you mean SGI (Score:5, Insightful)
SGI and/or Cray were using NUMA a decade ago.
Re: (Score:3, Insightful)
Disagreement about this trend (Score:5, Interesting)
At Supercomputing 2006, they had a wonderful panel [supercomputing.org] where they discussed the future of computing in general and tried to predict what computers (especially supercomputers) would look like in 2020. Tom Sterling made what I thought was one of the most insightful observations of the panel: most of the code out there is sequential (or nearly so) and I/O bound. So your home user checking his email, running a web browser, etc. is not going to benefit much from having all that compute power. (Gamers are obviously not included in this.) Thus, he predicted, processors would max out at a "relatively" low number of cores; 64 was his prediction.
Re:Disagreement about this trend (Score:5, Funny)
Even 64 sounds optimistic (Score:3, Interesting)
I'd be surprised if a desktop PC ever really uses more than eight. Desktop software is sequential, as you said. It doesn't parallelize.
Games will be doing their physics, etc., on the graphics card by then. I don't know if the current fad for doing it on the GPU will go anywhere much but I can see graphics cards starting out this way then going to a separate on-board PPU once the APIs stabilize.
We might *have* 64 cores simply because the price difference between 8 and 64 is a couple of bucks, but they won't
Re:Disagreement about this trend (Score:4, Funny)
Re:Disagreement about this trend (Score:5, Insightful)
My guess is 4 cores in 2008, 4 cores in 2009, moving to 8 cores through 2010. We may move to a new uber-core model once the software catches up, more like 6-8 years than 2-4. I'm positive we won't "max out" at 64 cores, because we're going to hit a per-core speed limit much more quickly than we hit a number-of-cores limit.
Re: (Score:3, Interesting)
We've pretty much already hit a per-core speed limit; you really can't find many CPUs running over 3 GHz, whereas back in the P4 days you'd see them all the way up to 3.8.
Architectures have changed, and other stuff allows a current single core at 3.2 to easily outperform the old 3.8s, but still, why don't we see new 3.8s?
Re:Disagreement about this trend (Score:5, Interesting)
Architectures have changed, and other stuff allows a current single core at 3.2 to easily outperform the old 3.8s, but still, why don't we see new 3.8s?
The Pentium 4 is, well, it's scary. It actually has "drive" stages because it takes too long for signals to propagate between functional blocks of the processor. This is just wait time, for the signals to get where they're going.
The P4 needed a super-deep pipeline to hit those kinds of speeds as a result, and so the penalty for branch misprediction was too high.
What MAY bring us higher clock rates again, though, is processors with very high numbers of cores. You can make a processor broad, cheap, or fast, but not all three. Making the processors narrow and simple will allow them to run at high clock rates and making them highly parallel will make up for their lack of individual complexity. The benefit lies in single-tasking performance; one very non-parallelizable thread which doesn't even particularly benefit from superscalar processing could run much faster on an architecture like this than anything we have today, while more parallelizable tasks can still run faster than they do today in spite of the reduced per-core complexity due to the number of cores - if you can figure out how to do more parallelization. Of course, that is not impossible [slashdot.org].
Re: (Score:3, Informative)
Architectures have changed, and other stuff allows a current single core at 3.2 to easily outperform the old 3.8s, but still, why don't we see new 3.8s?
Clock rate is meaningless. They could build a 10 GHz CPU, but it wouldn't outperform the current 3 GHz CPUs.
A modern CPU uses pipelining. This means that each instruction is spread out across a series of phases (e.g. fetch data, perform calculation 1, perform calculation 2, store data). Each phase is basically a layer of transistors the logic has to go
Re:Disagreement about this trend (Score:4, Interesting)
You've excluded gamers as if they were some nearly extinct exotic species. Don't they contribute the most to PC hardware market growth and progress?
Re:Disagreement about this trend (Score:5, Insightful)
Web applications are becoming more AJAX'y all the time, and they are not sequential at all. Watching a video while another tab checks my Gmail is a parallel task. All indications are that people want to consume more and more media on their computers. Things like the MLB mosaic allow you to watch four games at once.
Have you ever listened to a song through your computer while coding, running an email program, and running an instant messaging program? There are four highly parallelizable tasks right there. Not compute intensive enough for you? Imagine the song compressed with a new codec that is twice as efficient in terms of size but twice as compute intensive. Imagine the email program indexing your email for efficient search, running algorithms to assess the email's importance to you, and virus checking new deliveries. Imagine your code editor doing on the fly analysis of what you are coding, and making suggestions.
"Normal" users are doing more and more with computers as well. Now that fast computers are cheap, people who never edited video or photos are doing it. If you want a significant market besides gamers who need more cores, it is people making videos, especially HD videos. Sure, my Grandmother isn't going to be doing this, but I do, and I'm sure my children will do it even more.
And don't forget about virus writers. They need a few cores to run on as well!
Computer power keeps its steady progress higher, and we keep finding interesting things to do with it all. I don't see that stopping, so I don't see a limit to the number of cores people will need.
Re: (Score:3, Insightful)
I KNOW it is so very often cited, but if ever there was a time to mention the "5 computers in the whole world" quote, it is this. In fact, I would dare say that is the whole point of this push by Intel: trying to get people (programmers) used to the thought of having so many parallel CPUs in a home computer.
Sure, from where we stand now, 64 seems like a lot, but maybe a core for nearly every pixel on my screen makes sense and has real value to add. Or how about just flat-out smarter computers, something which might happen by
Ok.. so how do I do that? (Score:3, Interesting)
Re:Ok.. so how do I do that? (Score:5, Informative)
A year or so ago, I saw a presentation on Threading Building Blocks [threadingb...blocks.org], which is basically an API thingie that Intel created to help with this issue. Their big announcement last year was that they've released it open-source and have committed to making it cross-platform. (It's in Intel's best interest to get people using TBB on Athlon, PPC, and other architectures, because the more software is multi-core aware, the more demand there will be for multi-core CPUs in general, which Intel seems pretty excited about.)
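To give a flavor of what TBB code looks like, here's a minimal sketch using the library's parallel_for (based on the open-source release; header names and build details may vary between versions, and the loop body is just a made-up example):

    #include <cstddef>
    #include <tbb/blocked_range.h>
    #include <tbb/parallel_for.h>
    #include <vector>

    int main() {
        std::vector<float> data(1000000, 1.0f);

        // TBB chops the index range into chunks and schedules the chunks across the
        // available cores; the body only says what happens to one chunk.
        tbb::parallel_for(tbb::blocked_range<std::size_t>(0, data.size()),
            [&](const tbb::blocked_range<std::size_t>& r) {
                for (std::size_t i = r.begin(); i != r.end(); ++i)
                    data[i] = data[i] * 2.0f + 1.0f;
            });
    }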
been there, done that (Score:5, Funny)
Good idea (Score:5, Insightful)
It's a good idea... somewhat the same idea that the Cell chip has going for it (and, well, Phenom X3s). You make a product with lots of redundant units so that when some inevitably fail, the overall impact is much lower.
If there are 1000 cores on a chip and 100 go bad, you're still only losing a *maximum* of 10% of the performance, versus a 2- or 4-core chip where 1 or 2 going bad essentially means a 50% performance hit. That brings costs down because yields go up dramatically.
Already Happening (Score:3, Informative)
Declarative languages is the answer (Score:3, Interesting)
Is that really a good idea? (Score:4, Interesting)
I'm all for newer, faster processors. Hell, I'm all for processors with lots of cores that can actually be used, but wouldn't completely redoing all of the software libraries and conventions we've gotten used to cause a hell of a divide among developers?
Sure, if you only develop on an x86 platform, you're fine, but what if you want to write software for ARM or PPC? Processors that might not adopt the "thousands of cores" model?
Would it not be better to design a processor that can intelligently spread a single thread across multiple cores? (I know this isn't an easy task, but I don't see it being much harder than what Intel is proposing here.)
Or is this some long-term plan by Intel to try to lock people into their platforms even more?
Desperation? (Score:4, Interesting)
Honestly, I wonder if Intel isn't looking at the expense of pushing per-core speed further and comparing it against the cost of just adding more cores. The unfortunate reality is that the many-core approach really doesn't fit the desktop use case very well. Sure, you could devote an entire core to each process, but the typical desktop user is only interested in the performance of the one process in the foreground that's being interacted with.
It's also worth mentioning that some individual applications just aren't parallelizable to the extent that more than a couple of cores could be exercised for any significant portion of the application's run time.
Intel is building an FPGA (Score:3, Interesting)
Dozens, hundreds, and even thousands of cores are not unusual design points
I don't think they mean cores like the regular x86 cores; I think they will put an FPGA on the same die together with the regular four/six cores.
Start! What do they mean, start? (Score:3, Interesting)
Heat issues (Score:4, Interesting)
How are they going to cope with excessive heat and power consumption? How are they going to dissipate heat from a thousand cores?
When processing power growth was fed by shrinking transistors, the heat stayed at a manageable level (well, it gradually increased as more and more elements were packed onto the die, but the growth wasn't linear). Smaller circuits yielded less heat, despite there being many more of them.
Now we're packing more and more chips into one package instead, and the shrinking of transistors has significantly slowed down. So how are they going to pack those thousand cores into a small number of CPUs and manage the power and heat output?
All we need to do now... (Score:3, Interesting)
It's all changing too fast (Score:3, Insightful)
I've only been programming professionally for 3 years now, but already I'm shaking in my boots over having to rethink and relearn the way I've done things to accommodate these massively parallel architectures. I can't imagine how scared the old-timers of 20, 30, or more years must be. Or maybe the good ones who are still hacking decades later have already dealt with plenty of paradigm shifts and aren't scared at all?
Re:It's all changing too fast (Score:5, Insightful)
My dad's been programming for decades, and he's much more used to paradigm shifts than I am. His first programming job was translating assembly from one architecture to another, and now he's a proficient web developer. He understands concurrency and keeps up to date on new developments.
I'm reminded of an anecdote told to me during a presentation. The presenter had been introducing a new technology, and one man had a concern: "I've just worked hard to learn the previous technology. Can you promise me that, if I learn this one, it will be the last one I ever have to learn?" The presenter replied, "I can't promise you that, but I can promise you that you're in the wrong profession."
Re:It's all changing too fast (Score:5, Interesting)
For example, "CPU expensive, memory expensive, programmer cheap" is now "CPU cheap, memory cheap, programmer expensive" -- hence Java et al. (I am sometimes amazed when I casually allocate/free chunks of memory larger than all the combined memory of all the computers at my university - both in the labs and the administration/operational side - but what amazes me is that it doesn't amaze me!)
Actually, some of the "old timers" may be more comfortable with some issues of highly parallel programming than some of the "kids" (term used with respect, we were all kids once!) who have mostly had those issues masked from them by high-level languages. Comparing "old timers" to "kids" doing enterprise server software, the kids seem much less likely to understand issues like the memory coherence models of specific architectures, cache contention issues of specific implementations, etc.
Also, too often, the kids make assumptions about the source of performance/timing problems rather than gathering empirical evidence and acting on that evidence. This trait is particularly problematic because when dealing with concurrency and varying load conditions, intuition can be quite unreliable.
Really, it's not all that scary - the first paradigm shift is the hardest!
look what happened to ps3 (Score:4, Interesting)
So now we have a shitload of cores; all we have to do is wait for the developers to put some multi-threading goodness in their apps... or maybe not.
The PS3 was meant to be faster than any other system because of its multi-core Cell architecture, but in an interview John Carmack said, "Although it's interesting that almost all of the PS3 launch titles hardly used any Cells at all."
http://www.gameinformer.com/News/Story/200708/N07.0803.1731.12214.htm [gameinformer.com]
Interesting challenges (Score:3, Interesting)
If people are writing their applications using threads, I don't see why there should be a big problem with more cores. Basically, threads should be used where it is practical, makes sense, and does not make programming that much more difficult; in fact, they can make things easier. Rather than requiring some overly complicated re-engineering, threads, when properly used, can lead to programs that are just as easy to understand. They can be used in a program that does many tasks: processing can usually be parallelised when you have different operations that do not depend on each other's output. A list of instructions that depends on the output of previous instructions must run sequentially and of course cannot be threaded or parallelised. Obvious examples of applications that can be threaded are a server, where you have a thread to process data from each socket, or a program that scans multiple files, which can have a thread for processing each file, etc.
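A bare-bones C++ sketch of that "one thread per independent unit of work" pattern (process_file() is a hypothetical placeholder, and a real program would cap the thread count with a pool):

    #include <cstdio>
    #include <string>
    #include <thread>
    #include <vector>

    // Hypothetical per-file work; the point is only that the files don't depend on
    // each other's output, so they can be scanned concurrently.
    void process_file(const std::string& path) {
        std::printf("scanning %s\n", path.c_str());
    }

    int main() {
        const std::vector<std::string> files = {"a.log", "b.log", "c.log"};

        std::vector<std::thread> workers;
        for (const auto& f : files)
            workers.emplace_back(process_file, f);  // one thread per independent file
        for (auto& t : workers)
            t.join();
    }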
it's not about cores (Score:3, Interesting)
If you put 1000 cores on a chip and plug it into a PC... very little would happen in terms of speedup.
What we need to know is the memory architecture. How is memory allocated to cores? How is data transferred? What are the relative costs of accesses? How are the caches handled?
Without that information, it's pointless to think about hundreds or thousands of cores. And I suspect even Intel doesn't know the answers yet. And there's a good chance that a company other than Intel will actually deliver the solution.
Profit!!! (Score:5, Funny)
Cores? (Score:4, Interesting)
Can't they just make the existing ones go faster? Seriously, if I were going to design architectures around thousands of independent threads of execution, I'd start with communication speeds, not node count.
It's already easy to spawn thread armies that peg all the I/O channels. Where is all this "work" you can do without any I/O?
I think Intel had better start thinking of "tens, hundreds or even thousands" of bus-speed multipliers on their napkin drawings.
Aside from some heavily processing-dependent areas (graphics, complex mathematical models, etc.), the world needs petabyte/sec connectivity, not instruction-set munching.
Databases and implimentation-neutrality (Score:5, Interesting)
Databases provide a wonderful opportunity to apply multi-core processing. The nice thing about a (good) database is that queries describe what you want, not how to go about getting it. Thus, the database can potentially split the load across many processes, and the query writer (app) does not have to change a thing in his/her code. Whether a serial or parallel process carries it out is, in theory, out of the app developer's hands (although dealing with transaction management may sometimes come into play for certain uses).
However, query languages may need to become more general-purpose in order for our apps to depend on them for more than just business data. For example, built-in graph (network) and tree traversal may need to be added and/or standardized in query languages. And we may need to clean up the weak points of SQL and create more dynamic DBs to better match dynamic languages and scripting.
Being a DB-head, I've discovered that a lot of processing can potentially be converted into DB queries. That way one is not writing explicit pointer-based linked lists and the like, locking oneself into a difficult-to-parallelize implementation.
Relational engines used to be considered too bulky for many desktop applications. This is partly because they make processing go through a DB abstraction layer and thus are not using direct RAM pointers. However, the flip-side of this extra layer is that they are well-suited to parallelization.
Re:Databases and implimentation-neutrality (Score:5, Informative)
By "a lot of processing can potentially be converted into DB queries", what you discovered is functional programming :) LINQ in .NET 3.5/C# 3.0 is an example of functional programming that is made to look like DB queries, but it isn't the only way. It is a LOT easier to convert that stuff and optimize it to the environment (like how SQL is processed), since it describes the "what" more than the "how". It is already done, and one (out of many examples) is Parallel LINQ, which smartly execute LINQ queries in parallel, optimized for the amount of cores, etc. (And I'm talking about LINQ in the context of in memory process, not LINQ to SQL, which simply convert LINQ queries into SQL ones).
Functional programming, tied with the concept of transactional memory to handle concurency, is a nice medium term solution to the multi-core problem.
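The same "say what, not how" style exists outside .NET as well. Here's a rough C++ analogue (illustrative only, nowhere near PLINQ's generality) where a query-like reduction is handed to a parallel execution policy and the runtime decides how to partition it:

    #include <execution>
    #include <functional>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<int> orders(1000000, 3);

        // Declarative-ish: "sum f(x) over all orders." The execution policy decides
        // how to partition the work across cores, much as a query planner would.
        const long long total = std::transform_reduce(
            std::execution::par, orders.begin(), orders.end(),
            0LL, std::plus<>{},
            [](int qty) { return static_cast<long long>(qty) * 2; });

        return total > 0 ? 0 : 1;
    }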
My first thought... (Score:4, Funny)
Is it bad that my first thought when I saw this was: "But, my code already generates thousands of cores..."
So, Intel made RISC passé... (Score:3, Insightful)
and now they're bringing it back?
We all learned that 1000 cores don't matter if each core can only handle a simplified instruction set, compared to 2 cores that can handle more data per thread.
This is basic computer design here, people.
Can't you only have 1 core? (Score:3, Informative)
By definition, isn't a core just the middle/root of something? If you have more than one core, shouldn't the term really be changed to something closer to what it actually represents?
Re:Useless (Score:5, Insightful)
Re:Useless (Score:4, Interesting)
We've got to keep reminding ourselves: the world we live in runs in parallel, so why shouldn't our computers?
Re:Generic jokes (Score:5, Funny)
In the Soviet Union
Oh wait... the Soviet Union already broke into smaller cores.
Re:We all saw it coming anyway (Score:5, Insightful)
"So whether programmers find this move acceptable or not is irrelevant because this path is probably the only way to design faster CPU:s once we've hit the nanometer wall."
I guess you should put "faster" in quotes.
In any case, it is absolutely relevant what programmers think, since any performance improvement that customers actually experience depends on our code.
Historically, a primary reason to buy a new computer has been that a faster system makes legacy applications run faster. To a large extent this won't be true with a new multicore PC. So why would people buy them?
That's why Intel wants us to redesign our software - so that in the future their customers will still have a reason to buy a new PC with Intel Inside.
Re:That's all well and good..... (Score:4, Informative)
Example: "race condition" Say processor one is trying to find the optimal value of variable A, and processor two is doing something different, but calling some subfunction which changes variable A, then processor one might keep on running forever.
The other main problem is the deadlock: Processor one needs the final result of variable B to calculate variable A, but processor two needs the final result of variable A to calculate B. Both processors will come to a standstill, and the program is halting forever.
For simple programs, these things are relatively easy to troubleshoot. But for your huge program package with hundreds of modules, it is almost impossible to know what is happening.
Actually, it is the duty of intel and co. to find a way to prevent these situations, but also there, what kind of genius is able to program an automated debugger that can find deadlocks and race conditions.
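For what it's worth, the classic deadlock described above takes only a few lines of C++ to reproduce, and the usual cure (acquire both locks atomically, or always in the same order) is just as short. A minimal sketch, not an automated debugger:

    #include <mutex>
    #include <thread>

    std::mutex a_lock, b_lock;
    int A = 0, B = 0;

    // Deadlock-prone version: thread 1 holds A's lock and waits for B's, while
    // thread 2 holds B's lock and waits for A's. If their first lines interleave,
    // both wait forever.
    void thread1_bad() {
        std::lock_guard<std::mutex> la(a_lock);
        std::lock_guard<std::mutex> lb(b_lock);
        A = B + 1;
    }
    void thread2_bad() {
        std::lock_guard<std::mutex> lb(b_lock);
        std::lock_guard<std::mutex> la(a_lock);
        B = A + 1;
    }

    // Fixed version: std::scoped_lock takes both mutexes at once using a
    // deadlock-avoidance algorithm, so the acquisition order no longer matters.
    void thread1_ok() { std::scoped_lock lk(a_lock, b_lock); A = B + 1; }
    void thread2_ok() { std::scoped_lock lk(b_lock, a_lock); B = A + 1; }

    int main() {
        // Run only the safe versions; calling the _bad pair risks hanging forever.
        std::thread t1(thread1_ok), t2(thread2_ok);
        t1.join();
        t2.join();
    }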
Re: (Score:3, Funny)
A lot.
Re: (Score:3, Interesting)
Last time I checked, my computer had only one GPU core, which had a multitude of functional units. So does my CPU, in fact, but the GPU has more. Each CPU core has its own "context" (the state of certain registers that store pointers, plus the flags register). More CPU cores means more contexts, which means fewer context switches, which means cheaper threads. Pretty simple!
CUDA &c are cool in that they offer you a way to use your video card for non-video applications when it is idle. However, their use is likely to be cycli
Difference (Score:3, Insightful)
What's different this time may be that nobody else has anything better. Last time, AMD64 was the easier solution, and it clobbered Itanium. Can AMD (or anybody) simply choose to keep making single cores faster, or is multi-core the way CPUs really must go from here?
Re: (Score:3, Interesting)
I think that gcc should insert code to control memory leaks and process safety and the kernel should be in charge of tasking between cores.
Please limit this desire to languages like Java, Python, and Ruby. We don't need this in C. If you can't program without it, you shouldn't be programming in C.