Intel Hardware

Intel Says to Prepare For "Thousands of Cores"

Impy the Impiuos Imp writes to tell us that in a recent statement Intel has revealed its plans for the future, and they go well beyond the traditional processor model. By suggesting developers start thinking about tens, hundreds, or even thousands of cores, Intel is pushing for a massive evolution in the way processing is handled. "Now, however, Intel is increasingly 'discussing how to scale performance to core counts that we aren't yet shipping...Dozens, hundreds, and even thousands of cores are not unusual design points around which the conversations meander,' [Anwar Ghuloum, a principal engineer with Intel's Microprocessor Technology Lab] said. He says that the more radical programming path to tap into many processing cores 'presents the "opportunity" for a major refactoring of their code base, including changes in languages, libraries, and engineering methodologies and conventions they've adhered to for (often) most of their software's existence.'"
  • - and - oh my God - it's full of cores!

  • by gbulmash ( 688770 ) * <semi_famousNO@SPAMyahoo.com> on Wednesday July 02, 2008 @03:44PM (#24035965) Homepage Journal
    I'm no software engineer, but it seems like a lot of the issue in designing for multiple cores is being able to turn large tasks into many independent discrete operations that can be processed in tandem. But it seems that some tasks lend themselves to that compartmentalization and some don't. If you have 1,000 half-gigahertz cores running a 3D simulation, you may be able to get 875 FPS out of Doom X at 1920x1440, but what about the processes that are slow and plodding and sequential? How do those get sped up if you're opting for more cores instead of more cycles?
    • by Delwin ( 599872 ) * on Wednesday July 02, 2008 @03:46PM (#24036011)
      Because each core is no longer task switching. Once you have more cores than tasks you can remove all the context switching logic and optimize the cores to run single processes as fast as possible.

      Then you take the tasks that can be broken up over multiple cores (Ray Tracing anyone?) and fill the rest of your cores with that.
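
      For illustration (not the parent poster's code): a minimal C++ sketch of that "fill the rest of your cores" idea. Rows of a frame are independent, so they can be dealt out to plain std::thread workers; shade() is a made-up stand-in for real ray-tracing work.

      #include <algorithm>
      #include <cstddef>
      #include <thread>
      #include <vector>

      // Toy per-pixel "shading" so the sketch is self-contained; a real ray
      // tracer would trace a ray per pixel here instead.
      static float shade(std::size_t x, std::size_t y)
      {
          return static_cast<float>((x ^ y) & 0xFF) / 255.0f;
      }

      int main()
      {
          const std::size_t width = 640, height = 480;
          std::vector<float> image(width * height);
          const unsigned cores = std::max(1u, std::thread::hardware_concurrency());

          std::vector<std::thread> workers;
          for (unsigned c = 0; c < cores; ++c) {
              // Interleave rows across workers; rows share no data, so no locks.
              workers.emplace_back([&, c] {
                  for (std::size_t y = c; y < height; y += cores)
                      for (std::size_t x = 0; x < width; ++x)
                          image[y * width + x] = shade(x, y);
              });
          }
          for (auto& w : workers)
              w.join();
      }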
      • by jandrese ( 485 ) <kensama@vt.edu> on Wednesday July 02, 2008 @04:28PM (#24036595) Homepage Journal
        Process switching overhead is pretty low though, especially if you just have one thread hammering away and most everything else is largely idle. The fundamental limitation of being stuck with 1/1000 of the power of your 1000 core chip because your problem is difficult/impossible to parallelize is a real one.

        From a practical standpoint, Intel is right that we need vastly better developer tools, and that most things that require ridiculous amounts of compute time can be parallelized if you put some effort into it.
        • by Brian Gordon ( 987471 ) on Wednesday July 02, 2008 @04:43PM (#24036755)
          Are you crazy? Context switches are the slowdown in multitasking OSes.
          • by hey! ( 33014 ) on Wednesday July 02, 2008 @04:53PM (#24036889) Homepage Journal

            Are you crazy? Context switches are the slowdown in multitasking OSes.

            Unfortunately, multitasking OSes are not the slowdown in most tasks, exceptions noted of course.

          • by k8to ( 9046 ) on Wednesday July 02, 2008 @04:56PM (#24036917) Homepage

            True but misleading. The major cost of task switching is a hardware-derived one. It's the cost of blowing caches. The swapping of CPU state and such is fairly small by comparison, and the cost of blowing caches is only going up.

            • Re: (Score:3, Interesting)

              by k8to ( 9046 )

              Of course, the billion threads design doesn't solve the "how do n cores efficiently share x amount of cache" problem at all.

              • by geekoid ( 135745 ) <{moc.oohay} {ta} {dnaltropnidad}> on Wednesday July 02, 2008 @06:53PM (#24038125) Homepage Journal

                Why wouldn't each core have its own cache? It only needs to cache what it needs for its job.

                • by kesuki ( 321456 ) on Wednesday July 02, 2008 @07:35PM (#24038445) Journal

                  Yes, but if you have 1000 cores each with 64k of cache, then you start to run into problems with memory throughput when computing massively parallel data.

                  Memory throughput has been the Achilles' heel of graphics processing for years now. And as we all know, splitting up a graphics screen into smaller segments is simple, so GPUs went massively parallel long before CPUs; in fact, you will soon be able to get over 1000 stream processing units in a single desktop graphics card.

                  So the real problem is memory technology: how can a single memory module consistently feed 1000 cores, for instance if you want to do real-time n-pass encoding of an HD video stream... while playing an FPS online, and running IM software, and a strong anti-virus suite...

                  I have a horrible, horrible, ugly feeling that you'll never be able to get a system that can reliably do all that at the same time, simply because they'll skimp on memory tech or interconnects, so you'll have most of the capabilities of a 1,000-core system wasted.

            • by cpeterso ( 19082 ) on Wednesday July 02, 2008 @05:29PM (#24037321) Homepage

              Now that 64-bit processors are so common, perhaps operating systems can spare some virtual address space for performance benefits.

              The OPAL operating system [washington.edu] was a University of Washington research project from the 1990s. OPAL uses a single address space for all processes. Unlike Windows 3.1, OPAL still has memory protection and every process (or "protection domain") has its own pages. The benefit of sharing a single address space is that you don't need to flush the cache (because the virtual-to-physical address mappings do not change when you context switch). Also, pointers can be shared between processes because their addresses are globally unique.

              • by Erich ( 151 ) on Wednesday July 02, 2008 @10:17PM (#24039425) Homepage Journal
                Single Address Space is horrible.

                It's a huge kludge for idiotic processors (like the ARM9) that don't have physically-tagged caches. On all non-incredibly-sucky processors, we have physically tagged caches, and so having every app have its own address space, or having multiple apps share physical pages at different virtual addresses, all of these are fine.

                Problems with SAS:

                • Everything has to be compiled Position-independent, or pre-linked for a specific location
                • Virtual memory fragmentation as applications are loaded and unloaded
                • Where is the heap? Is there one? Or one per process?
                • COW and paging get harder
                • People start using it and think it's a good idea.

                Most people... even people using ARM... are using processors with physically-tagged caches. Please, Please, Please, don't further the madness of single-address-space environments. There are still people encouraging this crime against humanity.

                Maybe I'm a bit bitter, because some folks in my company have drunk the SAS kool-aid. But believe me, unless you have ARM9, it's not worth it!

          • by skulgnome ( 1114401 ) on Wednesday July 02, 2008 @05:57PM (#24037641)

            No. I/O is the slowdown in multitasking OSes.

      • by 192939495969798999 ( 58312 ) <info AT devinmoore DOT com> on Wednesday July 02, 2008 @04:33PM (#24036649) Homepage Journal

        I concur, furthermore I'd like to see one core per pixel, that would certainly solve your high-end gaming issues.

      • by jonbryce ( 703250 ) on Wednesday July 02, 2008 @04:57PM (#24036927) Homepage

        At the moment, I'm looking at Slashdot in Firefox, while listening to an mp3. I'm only using two out of my four cores, and I have 3% CPU usage.

        Maybe when I post this, I might use a third core for a little while, but how many cores can I actually usefully use?

        I can break a password protected Excel file in 30 hours max with this computer, and a 10000 core chip might reduce this to 43 seconds, but other than that, what difference is it going to make?

        • by hedwards ( 940851 ) on Wednesday July 02, 2008 @05:13PM (#24037141)

          That's what I'm curious about. Having 2 cores is enough for most consumers, one for the OS and background tasks and one for the application you're using. And that's overkill for most users.

          Personally, I like to multi task and am really going to love when we get to the point where I can have the OS on one core and then have 1 core for each of my applications. But even that is limited to probably less than 10 cores.

          Certain types of tasks just don't benefit from extra cores, and probably never will. Things which have to be done sequentially are just not going to see any improvement with extra cores. And other things like compiling software may or may not see much of an improvement depending upon the design of the source.

          But really, it's mainly things like raytracing and servers with many parallel connections which are the most likely to benefit. And servers are still bound by bandwidth, probably well before they would be hitting the limit on multi cores anyways.

        • by kv9 ( 697238 ) on Wednesday July 02, 2008 @06:08PM (#24037753) Homepage

          I can break a password protected Excel file in 30 hours max with this computer, and a 10000 core chip might reduce this to 43 seconds, but other than that, what difference is it going to make?

          29 hours 59 minutes 17 seconds?

        • by curunir ( 98273 ) * on Wednesday July 02, 2008 @06:46PM (#24038081) Homepage Journal

          ...but other than that, what difference is it going to make?

          This is, IMHO, the wrong question to be asking. Asking how current tasks will be optimized to take advantage of future hardware makes the fundamentally flawed assumption that the current tasks will be what's considered important once we have this kind of hardware.

          But the history of computers has shown that the "if you build it, they will come" philosophy applies to the tasks that people end up wanting to accomplish. It's been seen time and again that new abilities for using computers wait until we've hit a certain performance threshold, whether it's CPU, memory, bandwidth, disk space, video resolution or whatever, and then become the things we need our computers to do.

          Take, for instance, the huge success of mp3's. There was a time not so long ago when people were limited to playing music off a physical CD. This wasn't because there was no desire amongst computer users to listen to digital files that could be stored locally or streamed off the internet. It was because computer users did not know yet that they had the desire to do it. But technology advanced to the point where a) processors became fast enough to decode mp3's in real time without using the whole CPU and b) hard drives grew to the point where we had the capacity to store files that are 10% of the size of the files on the CD.

          Similarly, it's likely that when we reach the point where we have hundreds or thousands of cores, new tasks will emerge that take advantage of the new capabilities of the hardware. It may be that those tasks are limited in some other way by one of the other components we use or by the as yet non-existent status of some new component, but it's only important that multiple cores play a part in enabling the new task.

          In the near term, you can imagine a whole host of applications that would become possible when you get to the point where the average computer can do real-time H.264 encoding without affecting overall system performance. I won't guess at what might be popular further down the road, but there will be people who will think of something to do with those extra cores. And, in hindsight, we'll see the proliferation of cores as enabling our current computer-using behavior.

          • by kesuki ( 321456 ) on Wednesday July 02, 2008 @07:46PM (#24038539) Journal

            "Take, for instance, the huge success of mp3's. There was a time not so long ago when people were limited to playing music off a physical CD. This wasn't because there was no desire amongst computer users to listen to digital files that could be stored locally or streamed off the internet. It was because computer users did not know yet that they had the desire to do it. But technology advanced to the point where a) processors became fast enough to decode mp3's in real time without using the whole CPU"

            I started making MP3s with a 486 DX at 75 MHz.

            I could decode in real time on a 486 DX 75; as I recall, encoding took a bit of time, and I only had a 3 GB HDD that had been an upgrade to the system...

            MP3 uses an asymmetric codec: more CPU to encode than to decode. If your MP3 player doesn't run correctly on a 486, it's because they designed in features not strictly needed to decode an MP3 stream.

            Oh hey, I have an RCA Lyra MP3 player that isn't even as fast as a 486, but its decoder was designed for MP3 decoding.

            Ogg needs a beefier decoder; that's half the problem with getting Ogg support into devices not made for decoding video streams.

      • by blahplusplus ( 757119 ) on Wednesday July 02, 2008 @05:50PM (#24037555)

        "Because each core is no longer task switching. Once you have more cores than tasks you can remove all the context switching logic and optimize the cores to run single processes as fast as possible.

        Then you take the tasks that can be broken up over multiple cores (Ray Tracing anyone?) and fill the rest of your cores with that."

        Unfortunately, all this is going to lead to bus and memory bandwidth contention; you're just shifting the burden from one point to another. Although there is a 'penalty' for task switching, there is an even greater bottleneck at the bus and memory bandwidth level.

        IMHO Intel would have to release a CPU on a card with specialized RAM chips and segment the RAM like GPUs do to get anything out of multicore over the long term; RAM is not keeping up, and the current architecture for PC RAM is awful for multicore. CPU speed is far outstripping bus and memory bandwidth. I am quite dubious of multi-core architecture; there are fundamental limits to the geometry of circuits. I'd be sinking my money into materials research, not gluing cores together and praying the CS and math guys come up with solutions that take advantage of it.

        The whole history of human engineering and tool use is to take something extremely complicated, offload the complexity, and compartmentalize it so that it's manageable. I see the opposite happening with multi-core.

      • by Salamander ( 33735 ) <jeff.pl@atyp@us> on Wednesday July 02, 2008 @08:23PM (#24038823) Homepage Journal

        Because each core is no longer task switching. Once you have more cores than tasks you can remove all the context switching logic and optimize the cores to run single processes as fast as possible.

        OK, so now the piece that's running on each core runs really really fast . . . until it needs to wait for or communicate with the piece running on some other core. If you can do your piece in ten instructions but you have to wait 1000 for the next input to come in, whether it's because your neighbor is slow or because the pipe between you is, then you'll be sitting and spinning 99% of the time. Unfortunately, the set of programs that decompose nicely into arbitrarily many pieces that each take the same time (for any input) doesn't extend all that far beyond graphics and a few kinds of simulation. Many, many more programs hardly decompose at all, or still have severe imbalances and bottlenecks, so the "slow neighbor" problem is very real.

        Many people's answer to the "slow pipe" problem, on the other hand, is to do away with the pipes altogether and have the cores communicate via shared memory. Well, guess what? The industry has already been there and done that. Multiple processing units sharing a single memory space used to be called SMP, and it was implemented with multiple physical processors on separate boards. Now it's all on one die, but the fundamental problem remains the same. Cache-line thrashing and memory-bandwidth contention are already rearing their ugly heads again even at N=4. They'll become totally unmanageable somewhere around N=64, just like the old days and for the same reasons. People who lived through the last round learned from the experience, which is why all of the biggest systems nowadays are massively parallel non-shared-memory cluster architectures.

        If you want to harness the power of 1000 processors, you have to keep them from killing each other, and they'll kill each other without even meaning to if they're all tossed in one big pool. Giving each processor (or at least each small group of processors) its own memory with its own path to it, and fast but explicit communication with its neighbors, has so far worked a lot better except in a very few specialized and constrained cases. Then you need multi-processing on the nodes, to deal with the processing imbalances. Whether the nodes are connected via InfiniBand or an integrated interconnect or a common die, the architectural principles are likely to remain the same.

        Disclosure: I work for a company that makes the sort of systems I've just described (at the "integrated interconnect" design point). I don't say what I do because I work there; I work there because of what I believe.
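
        To make the "fast but explicit communication" point concrete, here is a small illustrative C++ sketch (not the poster's product): two threads exchange work through a tiny mailbox instead of scribbling on shared state. The Mailbox class and the int payload are invented for the example.

        #include <condition_variable>
        #include <iostream>
        #include <mutex>
        #include <queue>
        #include <thread>

        // A tiny single-producer/single-consumer mailbox: explicit messages
        // instead of ad-hoc shared memory.
        class Mailbox {
        public:
            void send(int msg) {
                { std::lock_guard<std::mutex> lk(m_); q_.push(msg); }
                cv_.notify_one();
            }
            int receive() {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [&] { return !q_.empty(); });
                int msg = q_.front(); q_.pop();
                return msg;
            }
        private:
            std::mutex m_;
            std::condition_variable cv_;
            std::queue<int> q_;
        };

        int main()
        {
            Mailbox box;
            std::thread producer([&] { for (int i = 0; i < 5; ++i) box.send(i * i); });
            std::thread consumer([&] { for (int i = 0; i < 5; ++i) std::cout << box.receive() << "\n"; });
            producer.join();
            consumer.join();
        }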

    • by Mordok-DestroyerOfWo ( 1000167 ) on Wednesday July 02, 2008 @03:47PM (#24036025)
      My friends and I have lots of conversations about girls, how to get girls, how to please girls. However, until anything other than idle talk actually happens, this goes into the "wouldn't it be nice" category.
    • by zappepcs ( 820751 ) on Wednesday July 02, 2008 @03:54PM (#24036121) Journal

      IANACS, but if your program structure changes a bit, you can process the two different styles of instructions in different ways, such that when the data needed from or to some sequential group of tasks is needed it is already there, sort of like doing things 6 steps ahead of yourself when possible. I know that makes no sense on the face of it, but at the machine code basics of it, by parsing instructions this way, 5 or 6 operations from now you will need register X loaded with byte 121 from location xyz, so while this core plods through the next few instructions, core this.plus.one prefetches the data at memory location xyz to register X.... or something like that. That will break the serialization of the code. There are other techniques as well, and if written for multicore machines, the program machine code can be executed this way without interpretation by the machine/OS.

      There is more than one type of CPU architecture, and principles of execution vary between them. Same for RISC vs. CISC. I think it is likely that the smaller the instruction set for the CPU, the more likely it is that serialized tasks can be shared out among cores.

      • by Talennor ( 612270 ) on Wednesday July 02, 2008 @04:05PM (#24036263) Journal

        While prefetching data can be done using a single core, your post in this context gives me a cool idea.

        Who needs branch prediction when you could just have 2 cores running a thread? Send each one executing instructions without a break in the pipeline and sync the wrong core to the correct one once you know the result. You'd still have to wait for results before any store operations, but you should probably know the branch result by then anyway.

        • by zappepcs ( 820751 ) on Wednesday July 02, 2008 @04:18PM (#24036445) Journal

          Indeed, and any tasks that are flagged as repeating can be repeated on a separate core from cores executing serial instructions, such that IPC allows things that happen serially to happen coincident with each other. A simple high-level example is reading the configuration for your process, which may change at any time during your process due to outside influences. Let the reading of that happen out of band from the processing, as it is not part of the sequential string of instructions for executing your code. That way config data is always correct without your serially oriented code needing to stop to check anything other than say $window.size=?, such that its value is always updated by a different core.
          Sorry if that is not a clear explanation. I just mean to say that since most of what we do is serially oriented, it's difficult to see how at the microscopic level of the code, it can be broken up to parallel tasks. A 16% decrease in processing time is significant. Building OS and compilers to optimize this would improve execution times greatly, just as threading does today. If threads are written correctly to work with multiple cores, it's possible to see significant time improvements there also.
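
          A rough sketch of that out-of-band configuration idea, assuming the setting boils down to a single value the worker only reads; the names (window_size, config_watcher) and the fake "re-read" are made up for illustration.

          #include <atomic>
          #include <chrono>
          #include <cstdlib>
          #include <functional>
          #include <iostream>
          #include <thread>

          // Hypothetical "window size" setting, refreshed out of band while the
          // main loop keeps running; std::atomic avoids a lock for a single value.
          std::atomic<int> window_size{800};

          void config_watcher(std::atomic<bool>& running)
          {
              while (running) {
                  // A real program would re-read a config file here.
                  window_size = 800 + (std::rand() % 5);
                  std::this_thread::sleep_for(std::chrono::milliseconds(50));
              }
          }

          int main()
          {
              std::atomic<bool> running{true};
              std::thread watcher(config_watcher, std::ref(running));

              for (int frame = 0; frame < 10; ++frame) {
                  // The sequential code never blocks on configuration I/O.
                  std::cout << "frame " << frame << " uses width " << window_size << "\n";
                  std::this_thread::sleep_for(std::chrono::milliseconds(20));
              }

              running = false;
              watcher.join();
          }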

        • by Anonymous Coward on Wednesday July 02, 2008 @04:21PM (#24036485)

          That is what most current processors do and use branch prediction for. Even if you have a thousand cores, that's only 10 binary decisions ahead. You need to guess really well very often to keep your cores busy instead of syncing. Also, the further you're executing ahead, the more ultimately useless calculations are made, which is what drives power consumption up in long pipeline cores (which you're essentially proposing).

          In reality parallelism is more likely going to be found by better compilers. Programmers will have to be more specific about the type of loops they want. Do you just need something to be performed on every item in an array or is order important? No more mindless for-loops for not inherently sequential processes.
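
          One way that "be specific about the kind of loop" looks today is C++17's parallel algorithms (which did not exist when this was posted); a sketch, with an arbitrary element-wise operation:

          #include <algorithm>
          #include <execution>
          #include <iostream>
          #include <vector>

          int main()
          {
              std::vector<double> data(1'000'000, 1.0);

              // Declared order-independent: each element stands alone, so the
              // library may spread the work over however many cores exist.
              std::for_each(std::execution::par, data.begin(), data.end(),
                            [](double& x) { x = x * x + 1.0; });

              std::cout << data.front() << "\n";   // prints 2
          }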

          • Re: (Score:3, Insightful)

            by joto ( 134244 )

            In reality parallelism is more likely going to be found by better compilers. Programmers will have to be more specific about the type of loops they want. Do you just need something to be performed on every item in an array or is order important? No more mindless for-loops for not inherently sequential processes.

            I disagree. Having the compiler analyze loops to find out if they are trivially parallelizable is easy, there's little need to change the language.

            On the other hand, a language that was really desi

    • by Cordath ( 581672 ) on Wednesday July 02, 2008 @04:00PM (#24036199)
      Say you have a slow, plodding sequential process. If you reach a point where there are several possibilities and you have an abundance of cores, you can start work on each of the possibilities while you're still deciding which possibility is actually the right one. Many CPUs already incorporate this sort of logic. It is, however, rather wasteful of resources and provides a relatively modest speedup. Applying it at a higher level should work, in principle, although it obviously isn't going to be practical for many problems.
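
      A toy sketch of that "start work on every possibility before you've decided" approach, using std::async; the score() function and the four candidates are invented, and the work on the unchosen candidates is simply wasted, as noted above.

      #include <array>
      #include <cstddef>
      #include <future>
      #include <iostream>

      // Stand-in for an expensive evaluation of one candidate.
      long long score(int candidate)
      {
          long long s = 0;
          for (int i = 0; i < 10'000'000; ++i) s += (candidate * i) % 7;
          return s;
      }

      int main()
      {
          std::array<int, 4> candidates{1, 2, 3, 4};
          std::array<std::future<long long>, 4> results;

          // Start evaluating every possibility before the decision arrives.
          for (std::size_t i = 0; i < candidates.size(); ++i)
              results[i] = std::async(std::launch::async, score, candidates[i]);

          int chosen = 2;   // ...later, the real choice becomes known.
          std::cout << "score = " << results[chosen].get() << "\n";
      }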

      I do see this move by Intel as a direct follow up to their plans to negate the processing advantages of today's video cards. Intel wants people running general purpose code to run it on their general purpose CPU's, not on their video cards using CUDA or the like. If the future of video game rendering is indeed ray-tracing (an embarrassingly parallel algorithm if ever there was one) then this move will also position Intel to compete directly with Nvidia for the raw processing power market.

      One thing is for sure, there's a lot of coding to do. Very few programs currently make effective use of even 2 cores. Parallelization of code can be quite tricky, so hopefully tools will evolve that will make it easier for the typical code-monkey who's never written a parallel algorithm in his life.
    • Re: (Score:3, Informative)

      by zarr ( 724629 )
      How do those get sped up if you're opting for more cores instead of more cycles?

      Algorithms that can't be parallelized will not benefit from a parallel architecture. It's really that simple. :( Also, many algorithms that are parallelizable will not benefit from an "infinite" number of cores. The limited bandwidth for communication between cores will usually become a bottleneck at some point.

    • Re: (Score:3, Insightful)

      by ViperOrel ( 1286864 )
      Just a thought, but I would say that 3 billion operations per second should be enough for just about any linear logic you could need solved. Where we run into trouble is in trying to use single processes to solve problems that should be solved in parallel. If having a thousand cores means that we can now run things much more efficiently in parallel, then maybe people will finally start breaking their problems up that way. As long as you can only count the cores up on one hand, your potential benefit from multithrea
    • by pla ( 258480 ) on Wednesday July 02, 2008 @04:10PM (#24036327) Journal
      I'm no software engineer [...] but what about the processes that are slow and plodding and sequential? How do those get sped up if you're opting for more cores instead of more cycles?

      As a software engineer, I wonder the same thing.

      Put simply, the majority of code simply doesn't parallelize well. You can break out a few major portions of it to run as their own threads, but for the most part, programs either sit around and wait for the user, or sit around and wait for hardware resources.

      Within that, only those programs that wait for a particular hardware resource - CPU time - even have the potential to benefit from more cores... And while a lot of those might split well into a few threads, most will not scale (without a complete rewrite to choose entirely different algorithms - if they even exist to accomplish the intended purpose) to more than a handful of cores.
      • by Intron ( 870560 ) on Wednesday July 02, 2008 @04:35PM (#24036677)
        I wonder who has the rights to all of the code from Thinking Machines? We are almost to the point where you can have a Connection Machine on your desktop. They did a lot of work on automatically converting code to parallel in the compiler and were quite successful at what they did. Trying to do it manually is the wrong approach. A great deal of CPU time on a modern desktop system is spent on graphics operations, for example. That is all easily parallelized.
      • Re: (Score:3, Informative)

        by rrohbeck ( 944847 )

        Yup. It's Amdahl's law [wikipedia.org].

        This whole many-core hype looks a lot like the gigahertz craze from a few years ago. Obviously they're afraid that there will be no reason to upgrade. 2 or 4 cores, OK - you often (sometimes?) have that many tasks active. But significantly more will only buy you throughput for games, simulations and similar heavy computations. Unless we (IAACS too) rewrite all of our apps under new paradigms like functional programming (e.g. in Erlang [wikipedia.org].) Which will only be done if there's a good reason
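
        For reference, Amdahl's law (mentioned above) puts a hard ceiling on the speedup from N cores when a fraction p of the runtime parallelizes:

        S(N) = \frac{1}{(1 - p) + p/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}

        So a program whose runtime is 90% parallel tops out at a 10x speedup no matter how many cores you throw at it; at 50% parallel the ceiling is 2x.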

      • by Stan Vassilev ( 939229 ) on Thursday July 03, 2008 @12:32AM (#24040107)

        As a software engineer, I wonder the same thing.

        Put simply, the majority of code simply doesn't parallelize well. You can break out a few major portions of it to run as their own threads, but for the most part, programs either sit around and wait for the user, or sit around and wait for hardware resources.

        Within that, only those programs that wait for a particular hardware resource - CPU time - even have the potential to benefit from more cores... And while a lot of those might split well into a few threads, most will not scale (without a complete rewrite to choose entirely different algorithms - if they even exist to accomplish the intended purpose) to more than a handful of cores.

        As a software engineer you should know that "most code doesn't parallelize" is very different from "most of the code's runtime can't parallelize", as code size and code runtime are substantially different things.

        Look at most CPU intensive tasks today and you'll notice they all parallelize very well: archiving/extracting, encoding/decoding (video, audio), 2D and 3D GUI/graphics/animations rendering (not just for games anymore!), indexing and searching indexes, databases in general, and last but not least, image/video and voice recognition.

        So, while your very high-level task is sequential, the *services* it calls or implicitly uses (like GUI rendering), and the smaller tasks it performs, actually would make a pretty good use of as many cores as you can throw at them.

        This is good news for software engineers like you and me, as we can write mostly serial code and isolate slow tasks into isolated routines that we write once and reuse many times.

    • by mikael_j ( 106439 ) on Wednesday July 02, 2008 @04:44PM (#24036773)

      Obviously just adding more cores does little to speed up individual sequential processes, but it does help with multitasking, which is what I really think is the "killer app" for multi-core processors.

      Back in the late '90s (it doesn't feel like "back in..." yet, but I'm willing to admit that it was about a decade ago) I decided to build a computer with an Abit BP6 motherboard, two Celeron processors and lots of RAM instead of a single higher-end processor, because I wanted to be able to multitask properly. My gamer friends mocked me for choosing Celeron processors, but for the price of a single-processor system I got a system that was capable of running several "normal" apps and one with heavy CPU usage without slowing down, and the extra RAM also helped (I saw lots of people back then go for 128 MB of RAM and a faster CPU instead of "wasting" their money on RAM, and then they cursed their computer for being slow when it started swapping). There was also the upside of having Windows 2000 run as fast on my computer as Windows 98 did on my friends' computers...

      /Mikael

  • Great... (Score:5, Funny)

    by Amarok.Org ( 514102 ) on Wednesday July 02, 2008 @03:44PM (#24035973)

    As if Oracle licensing wasn't complicated enough already...

  • Memory bandwidth? (Score:5, Interesting)

    by Brietech ( 668850 ) on Wednesday July 02, 2008 @03:45PM (#24035975)
    If you can get a thousand cores on a chip, and you still only have enough pins for a handful (at best) of memory interfaces, doesn't memory become a HUGE bottleneck? How do these cores not get starved for data?
    • Re: (Score:3, Interesting)

      by smaddox ( 928261 )

      Memory would have to be completely redefined. Currently, you have one memory bank that is effectively accessed serially.

      If you have 1000 cores that depend on the same data, you would have to have a way of multicasting the data to the cores, which could then select the data they want.

      Basically, hardware and software architecture has to be completely redefined.

      It is not impossible, though. Just look around. The universe computes in parallel all the time.

    • Re: (Score:3, Insightful)

      by Gewalt ( 1200451 )
      Not really. If you can put 1000 cores on a processor, then I don't see why you can't put 100 or so layers of RAM on there too. Eventually, it will become a requirement to get the system to scale.
  • by Raul654 ( 453029 ) on Wednesday July 02, 2008 @03:46PM (#24036015) Homepage

    At Supercomputing 2006, they had a wonderful panel [supercomputing.org] where they discussed the future of computing in general, and tried to predict what computers (especially Supercomputers) would look like in 2020. Tom Sterling made what I thought was one of the most insightful observations of the panel -- most of the code out there is sequential (or nearly so) and I/O bound. So your home user checking his email, running a web browser, etc is not going to benefit much from having all that compute power. (Gamers are obviously not included in this) Thus, he predicted, processors would max out at a "relatively" low number of cores - 64 was his prediction.

    • by RailGunSally ( 946944 ) on Wednesday July 02, 2008 @03:57PM (#24036149)
      Sure! 64 cores should be enough for anybody!
    • I'd be surprised if a desktop PC ever really uses more than eight. Desktop software is sequential, as you said. It doesn't parallelize.

      Games will be doing their physics, etc., on the graphics card by then. I don't know if the current fad for doing it on the GPU will go anywhere much but I can see graphics cards starting out this way then going to a separate on-board PPU once the APIs stabilize.

      We might *have* 64 cores simply because the price difference between 8 and 64 is a couple of bucks, but they won't

    • by tzhuge ( 1031302 ) on Wednesday July 02, 2008 @04:11PM (#24036345)
      Sure, until a killer app like Windows 8 comes along and requires a minimum of 256 cores for email, web browsing and word processing. Interpret 'killer app' how you want in this context.
    • by RightSaidFred99 ( 874576 ) on Wednesday July 02, 2008 @04:22PM (#24036507)
      His premise is flawed. People using email, running a web browser, etc... hit CPU speed saturation some time ago. A 500MHz CPU can adequately serve their needs. So they are not at issue here. What's at issue is next generation shit like AI, high quality voice recognition, advanced ray tracing/radiosity/whatever graphics, face/gesture recognition, etc... I don't think anyone sees us needing 1000 cores in the next few years.

      My guess is 4 cores in 2008, 4 cores in 2009, moving to 8 cores through 2010. We may move to a new uber-core model once the software catches up, more like 6-8 years than 2-4. I'm positive we won't "max out" at 64 cores, because we're going to hit a per-core speed limit much more quickly than we hit a number-of-cores limit.

      • Re: (Score:3, Interesting)

        by eht ( 8912 )

        We've pretty much already hit a per-core speed limit; you really can't find many CPUs running over 3 GHz, whereas back in P4 days you'd see them all the way up to 3.8.

        Architectures have changed and other stuff allow a current single core of a 3.2 to easily outperform the old 3.8's but then still why don't we see new 3.8's?

        • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday July 02, 2008 @04:42PM (#24036753) Homepage Journal

          Architectures have changed and other stuff allow a current single core of a 3.2 to easily outperform the old 3.8's but then still why don't we see new 3.8's?

          The Pentium 4 is, well, it's scary. It actually has "drive" stages because it takes too long for signals to propagate between functional blocks of the processor. This is just wait time, for the signals to get where they're going.

          The P4 needed a super-deep pipeline to hit those kinds of speeds as a result, and so the penalty for branch misprediction was too high.

          What MAY bring us higher clock rates again, though, is processors with very high numbers of cores. You can make a processor broad, cheap, or fast, but not all three. Making the processors narrow and simple will allow them to run at high clock rates and making them highly parallel will make up for their lack of individual complexity. The benefit lies in single-tasking performance; one very non-parallelizable thread which doesn't even particularly benefit from superscalar processing could run much faster on an architecture like this than anything we have today, while more parallelizable tasks can still run faster than they do today in spite of the reduced per-core complexity due to the number of cores - if you can figure out how to do more parallelization. Of course, that is not impossible [slashdot.org].

        • Re: (Score:3, Informative)

          by jsebrech ( 525647 )

          Architectures have changed and other stuff allow a current single core of a 3.2 to easily outperform the old 3.8's but then still why don't we see new 3.8's?

          Clock rate is meaningless. They could build a 10 GHz CPU, but it wouldn't outperform the current 3 GHz CPUs.

          A modern cpu uses pipelining. This means that each instruction is spread out across a series of phases (e.g. fetch data, perform calculation 1, perform calculation 2, store data). Each phase is basically a layer of transistors the logic has to go

    • by the_olo ( 160789 ) on Wednesday July 02, 2008 @04:25PM (#24036543) Homepage

      So your home user checking his email, running a web browser, etc is not going to benefit much from having all that compute power. (Gamers are obviously not included in this)

      You've excluded gamers as if this had been some nearly extinct exotic species. Don't they contribute the most to PC hardware market growth and progress?

    • by MojoRilla ( 591502 ) on Wednesday July 02, 2008 @04:34PM (#24036667)
      This seems silly. If you create more compute power, someone will think of ways to use it.

      Web applications are becoming more AJAX'y all the time, and they are not sequential at all. Watching a video while another tab checks my Gmail is a parallel task. All indications are that people want to consume more and more media on their computers. Things like the MLB mosaic allow you to watch four games at once.

      Have you ever listened to a song through your computer while coding, running an email program, and running an instant messaging program? There are four highly parallelizable tasks right there. Not compute intensive enough for you? Imagine the song compressed with a new codec that is twice as efficient in terms of size but twice as compute intensive. Imagine the email program indexing your email for efficient search, running algorithms to assess the email's importance to you, and virus checking new deliveries. Imagine your code editor doing on the fly analysis of what you are coding, and making suggestions.

      "Normal" users are doing more and more with computers as well. Now that fast computers are cheap, people who never edited video or photos are doing it. If you want a significant market besides gamers who need more cores, it is people making videos, especially HD videos. Sure, my Grandmother isn't going to be doing this, but I do, and I'm sure my children will do it even more.

      And don't forget about virus writers. They need a few cores to run on as well!

      Computer power keeps its steady progress higher, and we keep finding interesting things to do with it all. I don't see that stopping, so I don't see a limit to the number of cores people will need.
    • Re: (Score:3, Insightful)

      by BlueHands ( 142945 )

      I know it is so very often cited, but if ever there was a time to mention the "5 computers in the whole world" quote, it is this. In fact, I would dare say that is the whole point of this push by Intel: trying to get people (programmers) used to the thought of having so many parallel CPUs in a home computer.

      Sure, from where we stand now, 64 seems like a lot but maybe a core for nearly each pixel on my screen makes sense, has real value to add. Or how about just flat-out smarter computers, something which might happen by

  • by bigattichouse ( 527527 ) on Wednesday July 02, 2008 @03:47PM (#24036027) Homepage
    Are we just looking at crazy-ass multithreading? Or do you mean we need some special API? I think it's really the compiler gurus who are going to make the difference here - 99% of the world can't figure out debugging multithreaded apps. I'm only moderately successful with it if I build small single-process "kernels" (to steal a graphics term) that process a work item, and then a loader that keeps track of work items... then fire up a bunch of threads and feed the cloud a bunch of discrete work items. Synchronizing threads is no fun.
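
    That "kernel plus work-item loader" pattern, sketched in C++ with a mutex-guarded queue and a handful of threads (the squared-number "work" is a placeholder for a real work item):

    #include <algorithm>
    #include <atomic>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    int main()
    {
        // The "loader": a queue of discrete work items (here, just numbers).
        std::queue<int> work;
        for (int i = 0; i < 100; ++i) work.push(i);

        std::mutex m;
        std::atomic<long long> total{0};

        // The "kernel": pull one item at a time, process it, repeat.
        auto worker = [&] {
            for (;;) {
                int item;
                {
                    std::lock_guard<std::mutex> lock(m);
                    if (work.empty()) return;
                    item = work.front();
                    work.pop();
                }
                total += static_cast<long long>(item) * item;
            }
        };

        std::vector<std::thread> pool;
        const unsigned n = std::max(1u, std::thread::hardware_concurrency());
        for (unsigned c = 0; c < n; ++c)
            pool.emplace_back(worker);
        for (auto& t : pool) t.join();

        std::cout << total << "\n";
    }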
    • by Phroggy ( 441 ) <slashdot3@NOsPaM.phroggy.com> on Wednesday July 02, 2008 @04:02PM (#24036221) Homepage

      A year or so ago, I saw a presentation on Threading Building Blocks [threadingb...blocks.org], which is basically an API thingie that Intel created to help with this issue. Their big announcement last year was that they've released it open-source and have committed to making it cross-platform. (It's in Intel's best interest to get people using TBB on Athlon, PPC, and other architectures, because the more software is multi-core aware, the more demand there will be for multi-core CPUs in general, which Intel seems pretty excited about.)
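
      Roughly what that looks like with a modern TBB release (lambda form; build with -ltbb) - a sketch with an arbitrary element-wise operation, where TBB decides how to chop up the range and how many worker threads to run:

      #include <tbb/blocked_range.h>
      #include <tbb/parallel_for.h>
      #include <cstddef>
      #include <iostream>
      #include <vector>

      int main()
      {
          std::vector<float> data(1'000'000, 2.0f);

          // TBB splits the range into chunks and schedules them across cores.
          tbb::parallel_for(tbb::blocked_range<std::size_t>(0, data.size()),
                            [&](const tbb::blocked_range<std::size_t>& r) {
                                for (std::size_t i = r.begin(); i != r.end(); ++i)
                                    data[i] = data[i] * data[i];
                            });

          std::cout << data.front() << "\n";   // prints 4
      }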

  • by frovingslosh ( 582462 ) on Wednesday July 02, 2008 @03:49PM (#24036051)
    Heck, my original computer had 229376 cores. They were arranged in 28k 16 bit words.
  • Good idea (Score:5, Insightful)

    by Piranhaa ( 672441 ) on Wednesday July 02, 2008 @03:52PM (#24036083)

    It's a good idea... somewhat the same idea that the Cell chip has going for it (and, well, Phenom X3s). You make a product with lots of redundant units so that when some inevitably fail, the overall impact is much lower.

    If there are 1000 cores on a chip and 100 go bad, you're still only losing a *maximum* of 10% of performance; versus when you have 2 or 4 cores and 1 or 2 go bad, you have a performance impact of essentially 50%. It brings costs down because yields go up dramatically.

  • Already Happening (Score:3, Informative)

    by sheepweevil ( 1036936 ) on Wednesday July 02, 2008 @03:55PM (#24036129) Homepage
    Supercomputers already have many more than thousands of cores. The IBM Blue Gene/P can have up to 1,048,576 cores [ibm.com]. What Intel is probably talking about is bringing that level of parallel computing to smaller computers.
  • by olvemaudal ( 1318709 ) on Wednesday July 02, 2008 @03:59PM (#24036183)
    In order to utilize mega-core processors, I believe that we need to rethink the way we program computers. Instead of using imperative programming languages (e.g., C, C++, Java) we might need to look at declarative languages like Erlang, Haskell, F# and so on. Read more about this at http://olvemaudal.wordpress.com/2008/01/04/erlang-gives-me-positive-vibes/ [wordpress.com]
  • by neokushan ( 932374 ) on Wednesday July 02, 2008 @04:03PM (#24036241)

    I'm all for newer, faster processors. Hell, I'm all for processors with lots of cores that can be used, but wouldn't completely redoing all of the software libraries and conventions that we've gotten used to cause a hell of a divide among developers?
    Sure, if you only develop on an x86 platform, you're fine, but what if you want to write software for ARM or PPC? Processors that might not adopt the "thousands of cores" model?
    Would it not be better to design a processor that can intelligently spread single threads across multiple cores? (I know this isn't an easy task, but I don't see it being much harder than what Intel is proposing here.)
    Or is this some long-term plan by Intel to try to lock people into their platforms even more?

  • Desperation? (Score:4, Interesting)

    by HunterZ ( 20035 ) on Wednesday July 02, 2008 @04:05PM (#24036265) Journal

    Honestly, I wonder if Intel isn't looking at the expense of pushing per-core speed further and comparing it against the cost of just adding more cores. The unfortunate reality is that the many-core approach really doesn't fit the desktop use case very well. Sure, you could devote an entire core to each process, but the typical desktop user is only interested in the performance of the one process in the foreground that's being interacted with.

    It's also worth mentioning that some individual applications just aren't parallelizable to the extent that more than a couple of cores could be exercised for any significant portion of the application's run time.

  • by obender ( 546976 ) on Wednesday July 02, 2008 @04:06PM (#24036291)
    From TFA:

    Dozens, hundreds, and even thousands of cores are not unusual design points

    I don't think they mean cores like the regular x86 cores; I think they will put an FPGA on the same die together with the regular four/six cores.

  • by 4pins ( 858270 ) on Wednesday July 02, 2008 @04:10PM (#24036339) Homepage
    It has been long taught in theory classes that certain things can be solved in fewer steps using nondeterministic programming. The problem is that you have to follow multiple paths until you hit the right one. With sufficiently many cores the computer can follow all the possible paths at the same time, resulting in a quicker answer. http://en.wikipedia.org/wiki/Non-deterministic_algorithm [wikipedia.org] http://en.wikipedia.org/wiki/Nondeterministic_Programming [wikipedia.org]
  • Heat issues (Score:4, Interesting)

    by the_olo ( 160789 ) on Wednesday July 02, 2008 @04:18PM (#24036441) Homepage

    How are they going to cope with excessive heat and power consumption? How are they going to dissipate heat from a thousand cores?

    When the processing power growth was fed by shrinking transistors, the heat stayed at a manageable level (well, it gradually increased with packing more and more elements on the die, but the function wasn't linear). Smaller circuits yielded less heat, despite there being many more of them.

    Now we're packing more and more chips into one package instead and shrinkage of transistors has significantly slowed down. So how are they going to pack those thousand cores into a small number of CPUs and manage power and heat output?

  • by DerPflanz ( 525793 ) <bart.friesoft@nl> on Wednesday July 02, 2008 @04:20PM (#24036473) Homepage
    is find out how to program that. I'm a programmer and I know the problems that are involved in (massive) parallel programming. For a lot of problems, it is either impossible or very hard. See also my essay 'Why does software suck [friesoft.nl]' (dutch) (babelfish translation [yahoo.com]).
  • by blowhole ( 155935 ) on Wednesday July 02, 2008 @04:26PM (#24036557)

    I've only been programming professionally for 3 years now, but already I'm shaking in my boots over having to rethink and relearn the way I've done things to accommodate these massively parallel architectures. I can't imagine how scared the old-timers of 20, 30, or more years must be. Or maybe the good ones who are still hacking decades later have already had to deal with paradigm shifts and aren't scared at all?

    • by GatesDA ( 1260600 ) on Wednesday July 02, 2008 @05:13PM (#24037133)

      My dad's been programming for decades, and he's much more used to paradigm shifts than I am. His first programming job was translating assembly from one architecture to another, and now he's a proficient web developer. He understands concurrency and keeps up to date on new developments.

      I'm reminded of an anecdote told to me during a presentation. The presenter had been introducing a new technology, and one man had a concern: "I've just worked hard to learn the previous technology. Can you promise me that, if I learn this one, it will be the last one I ever have to learn?" The presenter replied, "I can't promise you that, but I can promise you that you're in the wrong profession."

    • by uncqual ( 836337 ) on Wednesday July 02, 2008 @05:35PM (#24037397)
      If a programmer has prospered for 20 or 30 years in this business, they probably have adapted to multiple paradigm shifts.

      For example, "CPU expensive, memory expensive, programmer cheap" is now "CPU cheap, memory cheap, programmer expensive" -- hence Java et al. (I am sometimes amazed when I casually allocate/free chunks of memory larger than all the combined memory of all the computers at my university - both in the labs and the administration/operational side - but what amazes me is that it doesn't amaze me!)

      Actually some of the "old timers" may be a more comfortable with some issues of highly parallel programming than some of the "kids" (term used with respect, we were all kids once!) who have mostly had them masked from them by high level languages. Comparing "old timers" to "kids" doing enterprise server software, the kids seem much less likely to understand issues like memory coherence models of specific architectures, cache contention issues of specific implementations, etc.

      Also, too often, the kids make assumptions about the source of performance/timing problems rather than gathering empirical evidence and acting on that evidence. This trait is particularly problematic because when dealing with concurrency and varying load conditions, intuition can be quite unreliable.

      Really, it's not all that scary - the first paradigm shift is the hardest!
  • by edxwelch ( 600979 ) on Wednesday July 02, 2008 @04:37PM (#24036699)

    So now that we have a shitload of cores, all we have to do is wait for the developers to put some multi-threading goodness in their apps... or maybe not.
    The PS3 was meant to be faster than any other system because of its multi-core Cell architecture, but in an interview John Carmack said, "Although it's interesting that almost all of the PS3 launch titles hardly used any Cells at all."

    http://www.gameinformer.com/News/Story/200708/N07.0803.1731.12214.htm [gameinformer.com]

  • by Eravnrekaree ( 467752 ) on Wednesday July 02, 2008 @04:41PM (#24036731)

    If people are writing their applications using threads, I don't see why there should be a big problem with more cores. Basically, threads should be used where it is practical, makes sense, and does not make programming that much more difficult; in fact, it can make things easier. Rather than some overly complicated re-engineering, threads, when properly used, can lead to programs that are just as easy to understand. They can be used for a program that does many tasks; processing can usually be parallelised when you have different operations which do not depend on each other's output. A list of instructions which depends on the output of previous instructions, and so must run sequentially, of course cannot be threaded or parallelised. Obvious examples of applications that can be threaded are a server, where you have a thread to process data from each socket, or a program which scans multiple files and can have a thread for processing each file, etc.
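
    The thread-per-file case described above, as a small self-contained C++ sketch; line counting stands in for whatever per-file processing you actually do:

    #include <cstddef>
    #include <fstream>
    #include <iostream>
    #include <string>
    #include <thread>
    #include <vector>

    // Count lines in one file; each thread owns its own stream, so nothing is shared.
    static std::size_t count_lines(const std::string& path)
    {
        std::ifstream in(path);
        std::size_t n = 0;
        for (std::string line; std::getline(in, line); ) ++n;
        return n;
    }

    int main(int argc, char** argv)
    {
        std::vector<std::string> files(argv + 1, argv + argc);
        std::vector<std::size_t> counts(files.size(), 0);
        std::vector<std::thread> threads;

        // One thread per file named on the command line.
        for (std::size_t i = 0; i < files.size(); ++i)
            threads.emplace_back([&, i] { counts[i] = count_lines(files[i]); });
        for (auto& t : threads) t.join();

        for (std::size_t i = 0; i < files.size(); ++i)
            std::cout << files[i] << ": " << counts[i] << " lines\n";
    }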

  • it's not about cores (Score:3, Interesting)

    by speedtux ( 1307149 ) on Wednesday July 02, 2008 @04:50PM (#24036851)

    If you put 1000 cores on a chip and plug it into a PC... very little would happen in terms of speedup.

    What we need to know is the memory architecture. How is memory allocated to cores? How is data transferred? What are the relative costs of accesses? How are the caches handled?

    Without that information, it's pointless to think about hundreds or thousands of cores. And I suspect even Intel doesn't know the answers yet. And there's a good chance that a company other than Intel will actually deliver the solution.

  • Profit!!! (Score:5, Funny)

    by DeVilla ( 4563 ) on Wednesday July 02, 2008 @05:03PM (#24037011)
    Hi. I make processors. I know a lot about processors. I think a big change is coming to processors. I think you should learn to use a lot of processors. A whole lot of processors. You need more processors. Oh, and did I tell you I make processors?
  • Cores? (Score:4, Interesting)

    by mugnyte ( 203225 ) on Wednesday July 02, 2008 @05:20PM (#24037243) Journal

      Can't they just make the existing ones go faster? Seriously, if I wanted to design architectures around thousands of independent threads of execution, I'd start with communication speeds, not node count.

      It's already easy to spawn thread armies that peg all I/O channels. Where is all this "work" you can do without any I/O?

      I think Intel had better start thinking of "tens, hundreds or even thousands" of bus speed multipliers on their napkin drawings.

      Aside from some heavy processing-dependent domains (graphics, complex mathematical models, etc.), the world needs petabyte/sec connectivity, not instruction-set munching.

  • by Tablizer ( 95088 ) on Wednesday July 02, 2008 @05:23PM (#24037269) Journal

    Databases provide a wonderful opportunity to apply multi-core processing. The nice thing about a (good) database is that queries describe what you want, not how to go about getting it. Thus, the database can potentially split the load up to many processes and the query writer (app) does not have to change a thing in his/her code. Whether a serial or parallel process carries it out is in theory out of the app developer's hair (although dealing with transaction management may sometimes come into play for certain uses.)

    However, query languages may need to become more general-purpose in order to have our apps depend on them more, not just business data. For example, built-in graph (network) and tree traversal may need to be added and/or standardized in query languages. And we may need to clean up the weak points of SQL and create more dynamic DBs to better match dynamic languages and scripting.

    Being a DB-head, I've discovered that a lot of processing can potentially be converted into DB queries. That way one is not writing explicit pointer-based linked lists etc., locking one into a difficult-to-parallel-ize implementation.

    Relational engines used to be considered too bulky for many desktop applications. This is partly because they make processing go through a DB abstraction layer and thus are not using direct RAM pointers. However, the flip-side of this extra layer is that they are well-suited to parallelization.
           

    • by Shados ( 741919 ) on Wednesday July 02, 2008 @05:54PM (#24037619)

      By "a lot of processing can potentially be converted into DB queries", what you discovered is functional programming :) LINQ in .NET 3.5/C# 3.0 is an example of functional programming that is made to look like DB queries, but it isn't the only way. It is a LOT easier to convert that stuff and optimize it to the environment (like how SQL is processed), since it describes the "what" more than the "how". It is already done, and one (out of many examples) is Parallel LINQ, which smartly execute LINQ queries in parallel, optimized for the amount of cores, etc. (And I'm talking about LINQ in the context of in memory process, not LINQ to SQL, which simply convert LINQ queries into SQL ones).

      Functional programming, tied with the concept of transactional memory to handle concurrency, is a nice medium-term solution to the multi-core problem.
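
      The same declarative flavor outside .NET, for illustration: C++17's transform_reduce states a query-like aggregation ("sum the amounts over a threshold") and leaves the scheduling to the library. The order data here is invented.

      #include <execution>
      #include <functional>
      #include <iostream>
      #include <numeric>
      #include <vector>

      int main()
      {
          std::vector<double> orders{19.99, 5.25, 102.50, 7.00, 64.10};

          // Says *what* to compute, not *how*; the parallel policy lets the
          // runtime split the map and the reduction across cores.
          double total = std::transform_reduce(
              std::execution::par, orders.begin(), orders.end(), 0.0,
              std::plus<>(),
              [](double amount) { return amount > 10.0 ? amount : 0.0; });

          std::cout << total << "\n";   // 186.59
      }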

  • by Druppy ( 10015 ) on Wednesday July 02, 2008 @05:51PM (#24037577) Homepage

    Is it bad that my first thought when I saw this was: "But, my code already generates thousands of cores..."

  • by DragonTHC ( 208439 ) <DragonNO@SPAMgamerslastwill.com> on Wednesday July 02, 2008 @07:02PM (#24038203) Homepage Journal

    and now they're bringing it back?

    We all learned how 1000 cores doesn't matter if each core can only process a simplified instruction set compared to 2 cores that can handle more data per thread.

    This is basic computer design here, people.

  • by kahanamoku ( 470295 ) on Wednesday July 02, 2008 @07:03PM (#24038207)

    By definition, isn't a core just the middle/root of something? If you have more than one core, shouldn't the term really be changed to reflect something closer to what it represents?
