Programming Supercomputing Hardware IT Technology

The Father of Multi-Core Chips Talks Shop

pacopico writes "Stanford professor Kunle Olukotun designed the first mainstream multi-core chip, crafting what would become Sun Microsystems' Niagara product. Now he's heading up Stanford's Pervasive Parallelism Lab, where researchers are looking at systems with hundreds of cores that might power robots, 3-D virtual worlds and insanely big server applications. The Register just interviewed Olukotun about this work and the future of multi-core chips. Weird and interesting stuff."
  • That's a lot of core systems.
  • WTF??? (Score:2, Funny)

    by zappepcs ( 820751 )

    This is Slashdot, you _CAN'T_ post an article that can't be read! timothy, what are you thinking?

  • It is time for professor Olukotun and the rest of the multicore architecture design community to realize that multithreading is not part of the future of parallel computing and that the industry must adopt a non-algorithmic model [blogspot.com]. I am not one to say I told you so, but one day soon (when the parallel programming crisis heats up to unbearable levels) you will get the message loud and clear.

    • Re: (Score:3, Funny)

      by hostyle ( 773991 ) *

      Indeed. It's turtles all the way down.

      • But I laik turtles... :(
    • by Anonymous Coward on Saturday July 19, 2008 @06:17PM (#24256865)

      That strikes me as crackpottery. The stuff that link describes as "nonalgorithmic" is also easily algorithmic, just in a process calculus.
      And guess what? Non-kooks in the compsci community are busily working on process calculi and languages or language-facilities built around them.

    • by lenski ( 96498 ) on Saturday July 19, 2008 @06:36PM (#24256943)

      To simplify: Dataflow. It's been too many years, but I recall that DataFlow was a company name. Their lack of commercial success came down to being way ahead of their time.

      The recent advent of multiple on-die asynchronous units ("cores") is leading to a resurgence of interest in the dataflow model.

      Anyone who has implemented networked event-driven functionality has already started down the path of the dataflow model of computation, though obviously it's not fine-grained. The "non-algorithmic model" looks like a fine-grained implementation of a normal network application. (I agree with a downthread post that claims that current and classical Java-based server applications are already there, accepting the idea that event-driven multithreaded applications are essentially coarse-grained dataflow applications.) And when the research gets going hot and heavy, I'll wager it will end up focusing on organizing the connectivity model.

      As far as I am concerned, one place for multicore models to shine would be spreadsheets and similar applications, where there is already a well-defined pattern of interdependency among computational units (which in this case would be the spreadsheet cells); see the toy sketch below. I also think that database rows (or row groupings) would be naturals for dataflow computing.

      An efficient dataflow system would be the most KICK-ASS Game of Life computation engine! :-) (Now you know how old I am...)
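
      To make the spreadsheet-as-dataflow idea concrete, here's a toy sketch in Haskell (my own illustration, with a hypothetical Cell type, not anything from TFA): cells form a dependency graph, and independent subtrees of the graph could in principle be evaluated on separate cores.

        -- A cell is a constant or a formula over other cells.
        data Cell = Lit Double
                  | App ([Double] -> Double) [Cell]

        -- Evaluation just follows the dependency graph; the
        -- arguments of a formula are independent work items.
        eval :: Cell -> Double
        eval (Lit x)    = x
        eval (App f cs) = f (map eval cs)

        -- e.g. a3 = a1 + a2
        demo :: Double
        demo = eval (App sum [Lit 1, Lit 2])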

      • Re: (Score:1, Interesting)

        by Louis Savain ( 65843 )

        An efficient dataflow system would be the most KICK-ASS Game of Life computation engine! :-) (Now you know how old I am...)

        Actually, National Instruments has had a graphical dataflow dev tool (LabVIEW) for years. Its latest incarnation even has support for multicore processors (i.e., multithreading). However, what I'm proposing is not dataflow but signal flow, as in logic circuits, VHDL, spiking neural networks, cellular automata and other inherently and implicitly parallel systems.

        • by lenski ( 96498 ) on Saturday July 19, 2008 @07:17PM (#24257265)

          I can see your point... I can imagine a thing that looks a whole lot like an FPGA whose cells are designed to accept new functional definitions extremely dynamically.

          (As you can tell, I don't agree with using the name "non-algorithmic": It's algorithmic by any reasonable theoretical definition. This is why I refer to it as being an extremely fine-grained data flow model.)

          However, if you look at modern FPGAs, you will discover that even there, the macrocells are fairly large objects.

          I guess that when it comes down to it, the "non-algorithmic" model proposed in the page you cite seems so fine-grained that its benefits would be overwhelmed by connectivity issues. By this I mean not simply bandwidth among functional components, but defining "who talks with whom under what dynamically changing circumstances". Any attempt to discuss fine-grained dataflow must face the issue of efficiently connecting the interacting data and control "elements".

          There's the possibly even more interesting question about how many of each sort of functional module should be built.

          What do you say to meeting in the middle and thinking about a system that isn't so fine-grained, while also treating "control functions" as being just as movable as the data elements? Here's why I ask: in my opinion, there might well be some very good research work to be done in applying techniques from functional programming to a system with an extremely large number of simple functional units that know how to move functionality around with the data.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Hi MOBE2001 [slashdot.org]. Trying Twitter's tricks as well now?

    • Re: (Score:3, Interesting)

      Oh yes, that is why we have (or are currently developing) purely functional programming languages that can often mimic this model quite nicely, and efficient compilers capable of compiling the code into (potentially) whatever parallelism model you are using. Threads should ideally be just a means of implementing parallelism for such languages, or for parallel computing frameworks. Today, you are probably not supposed to write threaded code by hand in most cases. Once you have a reasonable compiler (latest v...

    • by Cheesey ( 70139 ) on Sunday July 20, 2008 @05:04AM (#24260483)

      Right, so you split your computation up into small units that can be efficiently allocated to the many core array. This allows you to express the parallelism in the program properly, because you're not constrained by the coarse granularity of a thread model. Cool.

      But the problem here is how you write the code itself. Purely functional code maps really well onto this model, but nobody wants to retrain all their programmers to use Haskell. We're going to end up with a hybrid C-based language: but what restrictions should exist in it? This depends on what is easy to implement in hardware - because if we wanted to stick with what was easy to implement in software, we'd carry on trying to squeeze a few extra instructions per second out of a conventional CPU architecture.

      The biggest restriction turns out to be the "R" in RAM. Most of our programs use memory in an unpredictable way, pulling data from all over the memory space, and this doesn't map well to a many core architecture. You can put caches at every core, but the cache miss penalty is astronomical, not to mention the problems of keeping all the caches coherent. Random access won't scale; we will need something else, and it will break lots of programs.

      This is going to lead to some really shitty science, because:

      • Many core architectures will only be good for running certain types of program: not just programs that can be split into tiny units of computation, but programs that access RAM in a predictable way.
      • The many core architects will pick the programs that work best on their system; these may or may not have anything to do with real applications for many core systems (And what is an application for a many core system anyway? Don't say graphics...)
      • It will be hard to quantitatively compare one many core architecture with another because of the different assumptions about what programs are able to do in each case. There are too many variables; there is no "control variable".

      I think that the eventual winning architecture will be the one that is easiest to write programs for. But it will have to be so much better at running those programs that it is worth the effort of porting them. So it will have to be a huge improvement, or easy to port to, or some combination of the two. However, those are qualitative properties. Anyone could argue that their architecture is better than another - and they will.

      • But the problem here is how you write the code itself. Purely functional code maps really well onto this model, but nobody wants to retrain all their programmers to use Haskell.

        I agree, but I am not so sure it is true that "purely functional code maps really well onto this model". For example, in spite of its proponents' use of terms like 'micro-threading' and 'lightweight processes', Erlang's concurrency model uses coarse-grained parallelism. I have yet to see a fine-grained parallel quicksort in Erlang.
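
        For reference, this is roughly what the fine-grained version looks like in Haskell using par/pseq from Control.Parallel (a sketch only; psort is my name for it, and a serious version needs a size threshold so spark overhead doesn't swamp the gains):

          import Control.Parallel (par, pseq)

          -- Spark the left partition with 'par' while 'pseq' forces
          -- the right one on the current thread.
          psort :: Ord a => [a] -> [a]
          psort []     = []
          psort (x:xs) = lesser `par` (greater `pseq` (lesser ++ x : greater))
            where
              lesser  = psort [a | a <- xs, a <  x]
              greater = psort [a | a <- xs, a >= x]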

  • You know, most programmers already know how to construct state machines, and already create entire programs using this concept. Your ideas are not revolutionary; they only highlight the need to use asynchronous state machines over synchronous threads. Where your ideas fail is this: you want to get rid of threads completely.

    Do you want to know why the programming community likes threads? There's a simple reason: state machines DO NOT SCALE. As you add more capabilities to a state machine, the number of states...

  • ...oh, you know how this one is supposed to go.
  • by Skapare ( 16644 ) on Saturday July 19, 2008 @06:18PM (#24256875) Homepage

    Multi-core chips will be constrained by, among other things, the memory bandwidth going off-chip. Maybe they need larger caches. Maybe they just need to put all the RAM on the chip itself instead of so many more cores. How about 4GB of RAM at first-level cache speed?

    Ultimately, we'll end up with PCs made from SoCs, with direct SATA, USB, FireWire, and DVI interfaces coming out instead of a RAM bus. By the time they are ready to make 256-core CPUs, software still won't be ready to work well on them. So in the interim, they might as well do tighter integration (which can also run faster on-die). No more north bridge or south bridge; just a few capacitors, resistors, and maybe a buffer amp or two around the big CPU.

    About the only thing that won't be practical to put in the CPU for a long time is the power supply. They could even put the disk drive in there (flash SSD).

    • by ddrichardson ( 869910 ) on Saturday July 19, 2008 @06:32PM (#24256925)

      That sounds ideal and in the long term is probably what will happen. But you need to overcome two massive issues first - leakage and interference between that many components in one space, and of course heat dissipation.

      • by pjt33 ( 739471 ) on Saturday July 19, 2008 @06:40PM (#24256973)
        Three - three massive issues! Leakage, interference between that many components in one space, of course heat dissipation, and having a single, expensive, point of failure. Wait, I'll come in again.
        • Three - three massive issues! Leakage, interference between that many components in one space, of course heat dissipation, and having a single, expensive, point of failure. Wait, I'll come in again.

          You forgot the almost fanatical devotion to the Pope!

    • Re: (Score:3, Insightful)

      by BrentH ( 1154987 )
      How do video cards handle feeding data to 800 separate processors (the latest AMD chip)? The memory controller is on-chip, of course, and has a bandwidth of about 50-60GB/s, I believe. So for normal multicore CPUs, try bumping that DDR2 RAM up from a measly ~10GB/s (in dual channel) to the same level (AMD already has the memory controller on-chip; Intel is going there, I believe). DDR(2) being only 64 bits wide doesn't help either, I'd say.
      • Re: (Score:3, Interesting)

        by Fweeky ( 41046 )

        The memory controller is onchip of course, and it has a bandwidth of about 50-60GB/s I believe

        Which is, in fact, around the amount of memory bandwidth Niagara systems have [fujitsu.com], with 6 memory controllers per socket.

        • Re: (Score:1, Interesting)

          by Anonymous Coward

          I'm not sure about Niagara, but it should be noted that for GPUs to obtain anything even close to the advertised bandwidth requires very specific access patterns. I'm not familiar with ATI's GPUs, but in the case of nVidia GPUs, "proper" access requires ensuring that the memory address of the "first" process in a warp (group of processes) meets certain criteria and that the addresses accessed in parallel by other processes in the warp are contiguous with a stride of 4, 8, or 16 bytes. Doing anything...

          • by TheRaven64 ( 641858 ) on Sunday July 20, 2008 @05:03AM (#24260479) Journal

            Niagara has enough memory bandwidth to keep its execution units happy. The last chip I remember that didn't was the G4 (PowerPC). The problem is more one of latency. This isn't such a problem in a GPU, since they are basically parallel stream processors - you just throw a lot of data at them and they process it more-or-less in that order.

            There was a study conducted ages ago (70s or 80s) which determined that, on average, there is one branch instruction in every seven instructions in general purpose code. This means that you can pretty much tell where memory accesses are going to be for 7 instructions, you've got a 50% chance for 14 (assuming it's a conditional jump, not a computed jump), a 25% chance for 21 instructions and so on. The time taken to fetch something from memory if you guessed wrongly is around 200 cycles.

            This is a big reason why the T1/2 have lots of contexts (threads). If you need to wait for memory with one of them, then there are 3 or 7 (T1 or T2) waiting that can still use the execution units.

            Most CPUs use quite a lot of cache memory. This does two things. First, it keeps data that's been accessed recently around for a while. Second, you access memory via the cache, so when you want one word it will load an entire cache line and data near it will also be fast to access. This is good for programs which have a lot of locality of reference (most of them).
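
            A back-of-envelope model of that latency hiding (my own toy arithmetic, assuming the ~200-cycle miss penalty mentioned above): if a thread computes for c cycles between misses, then t hardware contexts keep the core busy about min(1, t*c/(c+200)) of the time.

              -- Toy utilization model for hardware multithreading
              -- (an illustration, not a simulator).
              utilization :: Double -> Double -> Double
              utilization t c = min 1 (t * c / (c + 200))

              -- utilization 1 25 ~ 0.11; utilization 8 25 ~ 0.89 (T2-like)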

            • by thogard ( 43403 )

              The T1s have lots of contexts so they can deal with running poorly optimised code that spends all its time chasing pointers. A typical T1 CPU will run program 1, grab a pointer, try to load its data and stall on a cache miss, but then it switches to program 2, which had already stalled and now has its data ready, so it can grab that, do a pointer calculation and stall again, sending the CPU off to thread 3. The key to the T1 is that there is no context switch between them. The T1 seems to be very quick running poorly...

    • Re: (Score:3, Interesting)

      by lenski ( 96498 )

      Actually, Sun's Niagara has that "problem". The way they solved it is to place Gbit networking close to the cores. There are also multiple DDR-2 memory buses and (I think) PCI-E lanes to feed the processor's prodigious need for memory bandwidth.

      The comments to the Register article include a comment about the Transputer. (In case it's not familiar history, the Transputer was a really slick idea that went nowhere... 4 high-bandwidth connections, one for each neighbor CPU, with onboard memory. I recall that they were programmed in "Occam", a dataflow-oriented language.)

      • Re: (Score:3, Interesting)

        by mikael ( 484 )

        the transputer was a really slick idea that went nowhere... 4 high bandwidth connections, one for each neighbor CPU, with onboard memory. I recall that they were programmed in "Occam", a dataflow-oriented language.)

        Mainly because CPU clock speed and data bus speed were doubling every year. By the time an accelerator card manufacturer had had a card out for six months, Intel had already ensured that the CPU was faster, and so the accelerator card rapidly became a de-accelerator card. If you look at the advert page...

  • Well (Score:1, Interesting)

    by Anonymous Coward

    IAASE, and if I recall correctly, Donald Knuth said that all the advent of multi-core systems showed was that chip developers had run out of ideas, and from what I can see happening in the industry today, he was right.

    multi-core = multi-kruft

    • So what else should we do? Should we stick with single core and watch our computers never get any faster?

    • Silly. I cannot believe Donald Knuth would be that dense; there must be more to the conversation.

      Every major system in existence today is already a "multiprocessor" system; we just don't think of them that way. The average PC is a parallel system running at least 14 CPUs in parallel (two or three for every spindle, one or two for the keyboard, a few in your broadband modem, a few in your firewall, etc.).

      Multicore systems are simply an extension of the existing computational model. Plus, every supercomputer...

      • Re: (Score:3, Informative)

        by cnettel ( 836611 )
        Those processors have all been part of maintaining the illusion of the von Neumann machine, and of maintaining common interfaces between different pieces of hardware. What's happening now is that the model of a single instruction feed is breaking down completely, no matter what task you want to do, if you want it done efficiently. And that's for the very reason that the very smart people designing chips have run out of ideas on how to make them faster while maintaining that very convenient illusion for the sma...
        • Re: (Score:3, Interesting)

          by lenski ( 96498 )

          I'll accept the argument that the single-threaded model is (temporarily) being preserved in current systems. That said, I believe that there is a natural progression toward multithreaded computing as the technologies become more pervasive.

          What do you think of such things as SQL and spreadsheets already starting down the road of a declarative style of "programming", which would implicitly allow the engines to make their own decisions about how to run multithreaded?

          I had good experience with a quad Phenom running...

          • Please: Haskell, Lisp, OCaml or any of a number of other 'real' functional languages deserve your attention long before SQL or Excel.
            • by lenski ( 96498 )

              OK by me! :-)

              I just brought out the most commonly known technologies that can be converted to high-count multi-core systems quickly and relatively efficiently.

              I didn't claim that SQL, spreadsheets, etc are THE solutions, only that multi-core systems are not wasteful cruft even given the ordinary techniques we have available today.

              Here's the bottom line: we agree that somewhere, sometime, a new language or language family is very likely to replace the current procedural (or half-object, half-procedural) langu...

              • I agree completely. :-) There is no crisis, but the sooner people realize that Haskell can help them in this area (and allows writing many fewer [and more elegant] lines of code), the sooner I will get to start using it in production. [A current search of dice.com for 'Haskell' returns only 10 results--and the other functional langs not many more.]
        • Intel's and Sun's compilers have OpenMP support and auto-parallelisation flags (Sun's -xautopar, for instance), so that when you build your code, the compiler is smart enough to find obvious places to parallelise it (loops, what have you).

          But honestly, I think it'll take about 8 or 9 years for the real potential of multicore to start to pay dividends. First universities will need to start teaching OpenMP (or whatever), then the kids graduate and start using it @ work.
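
          The functional-language analogue of that annotate-don't-rewrite approach, sketched in Haskell to match the other examples in this thread (Control.Parallel.Strategies rather than OpenMP; squares is a made-up example):

            import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

            -- A pure "loop" marked as parallelizable; the runtime
            -- chunks the list and schedules chunks across cores.
            squares :: [Double] -> [Double]
            squares xs = map (^ 2) xs `using` parListChunk 1024 rdeepseq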
        • Re: (Score:1, Interesting)

          by Anonymous Coward

          No. Sheesh. Supercomputers almost all use a message-passing model in practice. Messages between concurrent processes, not threads and mutexes. They may have shared memory, but that means message passing is zero-copy, not that they're programmed with anything other than MPI.

          The grownups have been playing with concurrent systems for decades now.
          There's a lot of wheel-reinvention going on as the kiddie home computers get multiple processors. Eventually they'll realise what we've known for years - the...

          • Yea, but client-side GUI programmers understand that too. Message queues are great for this. WIN32 is not the best system by any stretch of the imagination, but its message passing is a fairly simple model for concurrent programming.
      • by xigxag ( 167441 )

        Those may be processing units, but they're hardly CENTRAL Processing units.

        • by CBravo ( 35450 )

          Maybe not called central. Maybe not multi-purpose. But try pulling out your graphics processor.

          I agree that we haven't really been programming our multi processor environments like we should (generally with libraries).

  • IANACS

    Pardon my ignorance, but we have had many supercomputers (a recent one being Roadrunner) which use multiple CPUs (and accelerators). Can't we use the programming 'tricks' or 'models' or 'techniques' used there to use multicores efficiently? I understand multicore has significantly less communication overhead, but the overall philosophy of synchronizing, message passing, shared memory etc. wouldn't be completely irrelevant, would it?

    • Their "tricks" are to 1. have a hell of a lot of computation to be done, and 2. make sure the work can be split into millions or billions of completely independent tasks. Then you send a few thousand tasks to each CPU, wait a while, and they all get done. Most interesting problems require some amount of communication or reduction or something that is not perfectly parallel - but there's nothing magical going on. If your computation is largely serial, there's not a whole heck of a lot that can be don...

    • The programming trick on a supercomputer is that you have a dedicated PhD (student) working full-time on parallelizing your single application. The program being parallelized performs a lot of computations in a serial kind of way, and is built up out of blocks that can be calculated separately, to an extent that lets the speed-up from parallelization make up for the cost in bandwidth. The program has to be simple enough that the programmer can predict at which time which data is needed at what processor, for a...
  • For servers, POWER4 was released in 2001. For desktops, the Pentium D came out in mid 2005 and the PowerPC 970MP a few months later. All of these came out before Niagara.

    • by TheRaven64 ( 641858 ) on Saturday July 19, 2008 @07:05PM (#24257181) Journal
      Read more carefully. He created the Stanford Hydra in 1994, and the Niagara is based on this design. They are not claiming the Niagara was first.
      • by Anne Thwacks ( 531696 ) on Sunday July 20, 2008 @04:24AM (#24260307)
        Well, the fundamental idea behind it was used in the National Semiconductor COP - a 4-bit processor from the late 1970s.

        Incidentally, I worked with Transputers, and the concept died for many reasons:

        1) The comms channel was a weird, proprietary protocol, and not HDLC - completely fatal.

        2) In the event of an error, the entire Transputer network locked up - completely fatal.

        3) Mrs Thatcher eventually agreed to fund the project with $50,000,000 the same day that United Technology (can you say 6502, or was it Z80?) cancelled a project, saying "in the world of microprocessors, $50,000,000 is nothing". Two fatal errors here: (a) expecting the UK government to fund anything reasonably sensible, and (b) making it clear that the project was insufficiently funded to survive.

        4) The project was taken over by the French - whose previous achievements in both hardware and software are [white space here].

        5) Inmos, who made it, (a) tried to force people to use a new language at a time when there was a new language every month, (b) took two years to discover that the target market wanted C, and (c) never discovered the appropriate language was Algol68.

        In short, the company was run by a clever but narrow minded geek, who failed to take advice from others in the industry (including other narrow minded geeks, like me, etc).

  • I just finished taking a course at MIT on multiprocessor programming. It was taught by the authors of The Art of Multiprocessor Programming [elsevier.com], Maurice Herlihy and Nir Shavit. I highly recommend their book, their classes, their expertise. They are now focused on transactional memory, which may make things a bit easier to program in the multiprocessor universe. Of course we can stick with coarse-grained locking, but as they pointed out early on, Amdahl's Law [wikipedia.org] shows that throwing hardware at a problem may not be...

    • Gustafson's law [wikipedia.org] says that while throwing hardware at a problem, you should increase the problem size as well to improve the scalability of the hardware. There are many discussions supporting Amdahl's and Gustafson's laws. But if an application can scale without increasing data-access contention, it can benefit from multi-core processors. Not all applications can scale their problem size, but many can.
      Check out http://www.multicoreinfo.com/ [multicoreinfo.com] for a lot of multicore-related news a...
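
      The contrast between the two laws is easy to see with a few numbers (my arithmetic; p is the parallel fraction, n the core count):

        -- Amdahl: fixed problem size. Gustafson: problem grows with n.
        amdahl, gustafson :: Double -> Double -> Double
        amdahl    p n = 1 / ((1 - p) + p / n)
        gustafson p n = (1 - p) + p * n

        -- amdahl 0.95 16 ~ 9.1; amdahl 0.95 256 ~ 18.6 (capped at 20)
        -- gustafson 0.95 256 ~ 243, if the workload scales with the cores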
  • Linked lists? In order to get to an item, you have to traverse the list until that point. Maybe you could have one thread traverse the list and dispatch each item to a new thread for processing... but what about counting how many items on the linked list satisfy a condition? Maybe round-robin assign them to the worker threads, then add up the subtotals...

    Seems to me that simple linked-list structures may be something to avoid in favor of trees, where you could just send references to branches to threads. By...
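
    A quick Haskell sketch of the tree idea (my illustration; pcount is a made-up helper): each branch is an independent unit of work, which a flat list never gives you.

      import Control.Parallel (par, pseq)

      data Tree a = Leaf a | Node (Tree a) (Tree a)

      -- Count elements satisfying p, evaluating the two branches
      -- in parallel - something a linear traversal can't offer.
      pcount :: (a -> Bool) -> Tree a -> Int
      pcount p (Leaf x)   = if p x then 1 else 0
      pcount p (Node l r) = nl `par` (nr `pseq` (nl + nr))
        where nl = pcount p l
              nr = pcount p r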

    • by jd ( 1658 )
      The software guys in the team have some neat utilities for analyzing and generating parallel code - but they're old and clearly not maintained, apart from the stuff that has become commercial. Either way, a good workman keeps their toolset in good condition, so the lack of maintenance does bother me.
    • In general, you don't want to be breaking down your problem at anything like that granularity. Traversing a linked list and testing each element is a tiny bit of code which fits nicely in an instruction cache. Automatic prefetching means you can do this very efficiently on a single core. Unless the processing component is very large, you would be better off splitting your program at a coarser granularity. No one writes code to traverse linked lists; they write code to solve problems, and these prob...
  • Horse Pucky..... (Score:5, Insightful)

    by FlyingGuy ( 989135 ) <flyingguy&gmail,com> on Saturday July 19, 2008 @08:38PM (#24257833)

    We already have servers for INSANELY HUGE internet apps; it's called a mainframe.

    It amazes me to no end how many people still think it's about the CPU. It's about throughput, OK? Can we just get that fucking settled already? I don't give a rat's ass how many damn cores you have or whether they are running at 100 gigahertz; if you are still reading data across a bus, over an ethernet connection, over ANYTHING that does not work at CPU speed, then it makes little difference - that damn CPU will be sitting there spinning, waiting for the data to come popping through so it can do something!

    Mainframes use 386 chips for I/O controllers and even those sit there and loaf - talk about a waste of electricity! About .01% of the world's computers need the kind of power that a CPU with more than, say, 4 cores provides. Those that do are rather busy doing insanely complex mathematics, but even then I doubt that the CPUs, even at "100%" utilization, are actually doing the work they were programmed to do; rather, they are waiting on I/O to a database or RAM and fetching data.

    Until someone figures out how to move data in a far, far more efficient manner than we currently understand, these mega-core CPUs, while nice to think about, are simply a waste of time and silicon, with the possible exception of research.

    • Re: (Score:1, Informative)

      by Anonymous Coward

      Mainframes use 386 chips for I/O controllers and even those sit there and loaf - talk about a waste of electricity! About .01% of the world's computers need the kind of power that a CPU with more than, say, 4 cores provides. Those that do are rather busy doing insanely complex mathematics, but even then I doubt that the CPUs, even at "100%" utilization, are actually doing the work they were programmed to do; rather, they are waiting on I/O to a database or RAM and fetching data.

      IBM mainframes use PPC 440 processors in their channel cards. You are wrong. The PPC 440 is not fast enough. Look at their pathetic FICON IOs/sec numbers vs. an Emulex FCP adapter.

      • I stand corrected, but I think my original point still holds.

        While even a more efficient channel controller will open up those choke points to a degree, the main processing unit still sits there doing nothing for a very large percentage of the time, waiting on data, user input, all those things that happen rather S L O W L Y.

    • Re: (Score:2, Funny)

      by Kristoph ( 242780 )

      About .01% of the world's computers need the kind of power that a CPU with more than, say, 4 cores provides.

      Yes but now that we can't buy XP any more, the penetration of Vista is sure to grow.

    • by dodobh ( 65811 )

      The problem is that the mainframe is still a huge, single point of failure. What we need is the ability to toss a few dozen _CHEAP_ systems at the problem, and figure out how to make it work.

      Failures happen. Code around them.

      There is only so far you can scale a mainframe up; then you need to start scaling out. Scalability isn't a few thousand users pounding your systems. It's a few million users pounding your systems, with increases of an order of magnitude being common.

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      Precisely. This is why labs such as RAMP and PARLAB (both from Berkeley - take that, Stanford) have designed not just multicore systems but 'manycore' systems possessing in excess of 1000 CPUs (the chips are actually FPGAs, if I'm not mistaken). The chips run pretty slowly -- some of them around 100MHz -- but the operation of virtually any part of the chip can be observed and tweaked at a very low level. The idea is not to design a faster-clocked or more parallel CPU so much as it is to discover the best archite...

  • Totally and undeniably alrighty then... we don't need something new, we just need MORE of the same old stuff... all hail von Neumann!
