
California Researchers Build The World's First 1,000-Processor Chip (ucdavis.edu) 205

An anonymous reader quotes a report from the University of California, Davis about the world's first microchip with 1,000 independent programmable processors: The 1,000 processors can execute 115 billion instructions per second while dissipating only 0.7 Watts, low enough to be powered by a single AA battery...more than 100 times more efficiently than a modern laptop processor... The energy-efficient "KiloCore" chip has a maximum computation rate of 1.78 trillion instructions per second and contains 621 million transistors.
Programs get split across many processors (each running independently as needed with an average maximum clock frequency of 1.78 gigahertz), "and they transfer data directly to each other rather than using a pooled memory area that can become a bottleneck for data." Imagine how many mind-boggling things will become possible if this much processing power ultimately finds its way into new consumer technologies.

  • Link to paper (Score:5, Informative)

    by NotInHere ( 3654617 ) on Sunday June 19, 2016 @11:46PM (#52349991)

    Neither the press release nor the Slashdot summary includes it. The link to the paper: http://vcl.ece.ucdavis.edu/pub... [ucdavis.edu]

    • by gweihir ( 88907 )

      These are pretty primitive, yet very flexible, cores. Worthless for most current workloads, but that may change. However, the comparison to modern CPUs is unfair; a proper comparison would be to modern GPUs.

  • Maybe things are getting better. Too many programs are single-threaded. Too many drivers are single-threaded. Yes, you can sandbox them.
    That still leaves the nasty deadly embrace, or, less nasty, waiting on a key resource to complete.
    More cores just get you bound up in your shorts faster.
    More cores are not a magic bullet.
    • by Anonymous Coward on Sunday June 19, 2016 @11:55PM (#52350041)

      I take it you've never done high performance computing, have you? More cores is often a good thing. If I'm doing a simulation across 1,024 cores and each node has 16 cores, that means I need a minimum of 64 nodes. There's a lot of communication that takes place over protocols like Infiniband in order to make MPI work. It also rules out the possibility of shared memory systems like OpenMP when jobs reach that scale and have to be spread across multiple nodes. If more cores are located within a single node, it reduces the amount of communication with other nodes and the resulting latency. It also makes shared memory a viable option for larger parallel jobs. If I can fit 64 or 256 cores on a node, there's a lot less need for relatively slow protocols like Infiniband to pass messages. I don't think the ordinary user has a need for 1,000 cores or would have such a need for a long time. But it really could help with high performance computing.
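      A minimal shared-memory sketch of the single-node case described above, using an OpenMP reduction (the array size and contents are arbitrary illustration values, not from the article):

        // Build with: g++ -fopenmp -O2 reduce.cpp
        #include <omp.h>
        #include <cstdio>
        #include <vector>

        int main() {
            std::vector<double> data(1 << 22, 0.5);   // ~4M elements, shared by all threads
            double sum = 0.0;

            // Each core sums a slice of the shared array; OpenMP combines the partial
            // sums. Spreading the same loop across several nodes would instead require
            // partitioning the data and message passing (e.g. MPI_Allreduce) over the
            // interconnect, which is exactly the communication cost described above.
            #pragma omp parallel for reduction(+ : sum)
            for (long i = 0; i < (long)data.size(); ++i)
                sum += data[i];

            std::printf("sum = %.1f using up to %d threads\n", sum, omp_get_max_threads());
        }

      The more cores a single node has, the further a job can scale before it has to fall back on cross-node message passing.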

      • by Crashmarik ( 635988 ) on Monday June 20, 2016 @12:37AM (#52350159)

        Oi

        There are always problems that parallelize well, and this setup will likely work just fine for them, the same way NVIDIA CUDA does already, the same way vectorizing/coprocessing add-ons have done going back to the ISA bus.

        The fly in the ointment is that most of the world's problems don't, and even when you can parallelize, debugging is nightmarish.

        All said, expect to see this doing neural network work. From the article and the description of the processor communication / lack of shared memory, it sounds custom-tailored to that.

      • No, but with this low energy usage (a single AA powering it), I think this COULD have an impact on tablets and phones. The ability to shut down cores while scaling up is darn useful.
      • by goose-incarnated ( 1145029 ) on Monday June 20, 2016 @05:12AM (#52350709) Journal

        It also makes shared memory a viable option for larger parallel jobs.

        Good luck with that. I mean it. IME, as you go *more* parallel, shared memory becomes a *less* viable option, regardless of how many cores are running on the same machine. The cycles lost to memory locking to make shared memory work increase exponentially with the number of autonomous processes/threads.

        The math isn't disputed - see the birthday problem [wikipedia.org] for a start on calculating the clashes in playing musical chairs. In short, when you have X individuals with Y pigeonholes, then you are effectively bounded by Y, not by X. When you have X threads trying to access one variable, the chance that any thread will get this variable without waiting is effectively 1 for one thread, 1/2 for two threads, 1/3 for three threads, etc.

        By the time you get to a mere 64 threads each trying to access a variable, each thread basically has a 1.5% chance of getting it, and a 98.5% chance of being placed into a queue for that variable. Queue times get longer logarithmically. For one thread, time spent in the queue is ((0 * ATIME) + ATIME) where ATIME is the access time of the variable. For two threads, it's ((1-1/2) * ATIME) + ATIME, for three threads it's ((1-1/3) * ATIME) + ATIME, for four threads it's ((1-1/4) * ATIME) + ATIME. For ATIME=100us, the times above are, respectively, 100us, 150us, 166.67us, 175us. That last number is only for four threads with one variable, and assuming that queuing takes no clock cycles. The times increase exponentially with an increase in the number of variables that must be locked.

        For 64 threads your expected time in the queue is ((1-1/64) * ATIME) = 98.5us. You can forget about using shared memory if you want to use 1000 cores.

        But wait, "Use a sane design pattern and that won't happen, like with consumer/producer, etc" I hear you say? Sorry, no design pattern will save you, because if even a single thread writes to a variable, then all threads have to implement read-locks to make sure they don't get an access during a write (race condition).

        If you have 1000 cores, implement local message-passing. Don't try shared memory unless each thread will use a local copy (in which case, it isn't "shared", now is it?). Or, go ahead and do it and maybe you'll find a shared memory design that doesn't fail to first year statistics, and if you do beat the numbers then I'll be the first to nominate you for a Fields medal/Turing award :-)
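        A rough sketch of the contrast being argued here: one mutex-protected counter that every thread fights over, versus each thread accumulating privately and combining once at the end (thread and iteration counts are arbitrary, and timings will vary by machine):

          #include <chrono>
          #include <cstdio>
          #include <mutex>
          #include <numeric>
          #include <thread>
          #include <vector>

          int main() {
              const int threads = 16;
              const int iters   = 200000;

              auto time_ms = [](auto fn) {                       // crude wall-clock timer
                  auto t0 = std::chrono::steady_clock::now();
                  fn();
                  return std::chrono::duration<double, std::milli>(
                      std::chrono::steady_clock::now() - t0).count();
              };

              // Version 1: every increment goes through one shared, locked variable.
              long shared = 0;
              std::mutex m;
              double contended = time_ms([&] {
                  std::vector<std::thread> pool;
                  for (int t = 0; t < threads; ++t)
                      pool.emplace_back([&] {
                          for (int i = 0; i < iters; ++i) {
                              std::lock_guard<std::mutex> g(m);  // queue up behind the lock
                              ++shared;
                          }
                      });
                  for (auto& th : pool) th.join();
              });

              // Version 2: each thread keeps a private copy and the results are combined
              // once, i.e. the "local copy, not really shared" approach suggested above.
              std::vector<long> local(threads, 0);
              double uncontended = time_ms([&] {
                  std::vector<std::thread> pool;
                  for (int t = 0; t < threads; ++t)
                      pool.emplace_back([&, t] {
                          long mine = 0;
                          for (int i = 0; i < iters; ++i) ++mine;
                          local[t] = mine;                       // single write at the end
                      });
                  for (auto& th : pool) th.join();
              });
              long combined = std::accumulate(local.begin(), local.end(), 0L);

              std::printf("locked counter : %ld in %.1f ms\n", shared, contended);
              std::printf("local + combine: %ld in %.1f ms\n", combined, uncontended);
          }

        The only synchronization left in the second version is the final join, which is why per-thread copies (or explicit message passing) scale so much better than one hot shared variable.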

        • Sorry, no design pattern will save you, because if even a single thread writes to a variable, then all threads have to implement read-locks to make sure they don't get an access during a write (race condition).

          That sounds like a problem the immutable object [wikipedia.org] pattern was designed to solve.

          • Sorry, no design pattern will save you, because if even a single thread writes to a variable, then all threads have to implement read-locks to make sure they don't get an access during a write (race condition).

            That sounds like a problem the immutable object [wikipedia.org] pattern was designed to solve.

            Then you don't need shared memory. If the object never changes, then each thread can keep its own local copy, and there's no need for shared memory (which is what I said somewhere above in that jungle of text).

          • I was thinking atomic operations [wikipedia.org] as they would also avoid the wait.
            • I was thinking atomic operations [wikipedia.org] as they would also avoid the wait.

              Atomic operations aren't useful enough to share data; we use them to implement the locks on the actual data we want to share. GP spoke about wanting 1000 cores with shared memory; chances are he's not planning on having all 1000 simply increment/decrement an integer.
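              A minimal sketch of that point: the atomic operation does not share the data itself, it builds the lock that guards ordinary shared data (a textbook test-and-set spinlock; thread and iteration counts are arbitrary illustration values):

                #include <atomic>
                #include <cstdio>
                #include <thread>
                #include <vector>

                // Spinlock built from a single atomic flag.
                class SpinLock {
                    std::atomic_flag flag = ATOMIC_FLAG_INIT;
                public:
                    void lock()   { while (flag.test_and_set(std::memory_order_acquire)) {} }
                    void unlock() { flag.clear(std::memory_order_release); }
                };

                int main() {
                    SpinLock lock;
                    long shared_sum = 0;                 // ordinary, non-atomic shared data
                    std::vector<std::thread> pool;
                    for (int t = 0; t < 8; ++t)
                        pool.emplace_back([&] {
                            for (int i = 0; i < 100000; ++i) {
                                lock.lock();             // the atomic test_and_set guards the update
                                ++shared_sum;
                                lock.unlock();
                            }
                        });
                    for (auto& th : pool) th.join();
                    std::printf("%ld\n", shared_sum);    // 800000
                }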

                • It all depends on the data one is working with and how it is being used. Integers work wonders for a whole lot of things, provided that you aren't working on a collection of them (in which case you may be able to do things differently, like reading from one and writing to another). This may not work for the problem you are working on, but, that said, the goal should be to limit the number of locks you need, and there may very well be a better way of doing it in a shared memory environment that doesn't req
  • by ebonum ( 830686 ) on Sunday June 19, 2016 @11:47PM (#52350003)

    A young intern who likes to "work late" in Davis, California has recently come into possession of a rather large stash of bitcoins.

  • by dejitaru ( 4258167 ) on Sunday June 19, 2016 @11:51PM (#52350019)
    But I am not sure what system or software can take advantage of it. Personally, I want to see progress being made on quantum computing for consumer-level stuff.
    • by Ironlenny ( 1181971 ) on Monday June 20, 2016 @12:11AM (#52350089)

      Quantum computing is not magic. It has problems it's insanely good at solving (in theory), and it has problems where it's only as fast as, or slower than, your traditional deterministic computer (because of the necessary error correction). Not only are we a long way off from personal quantum computing (we still don't even have a general-purpose quantum processor), we still need to research deterministic architectures.

      • Yes, you are correct and I am well aware of that, but still, just the thought of it becoming personal computing and storing data on a qubit just sounds soo... futuristic! Doubt I will see anything in my lifetime, but still we can dream :)
    • by thinkwaitfast ( 4150389 ) on Monday June 20, 2016 @12:20AM (#52350117)
      Live video streaming. The thing about more cores is that for a similar application, energy usage decreases with the square of the frequency.
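      The square-law claim follows from the usual dynamic-power model, under the simplifying assumption that supply voltage can be scaled down roughly in proportion to clock frequency (real voltage/frequency curves are messier, and leakage is ignored):

        P_dyn ≈ C · V² · f,   with V ∝ f   =>   E_per_op = P_dyn / f ≈ C · V² ∝ f²

      So spreading the same work over twice as many cores at half the clock keeps throughput roughly constant while cutting the energy per operation to about a quarter.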
      • But how well does a system know how to allocate work to different cores?
        • How does it currently? How does your GPU know which pixel to render with which of the similarly high number of CUDA cores a typical video card has these days?

    • by dj245 ( 732906 )

      But I am not sure what system or software can take advantage of it. Personally, I want to see progress being made on quantum computing for consumer-level stuff.

      If you have an application where you can calculate many possible solutions independent of each other, and then choose the best one, this kind of processor might be useful. Quantum computers are very strong for that kind of application, so I see it being a stepping stone to quantum computing.

      • Agreed, but I see it all (consumer) being handled by just ones and zeros. Considering that qubits can expand on that to take data past binary, I can see a whole lot happening with it.
    • That kind of computation ability with that low amount of power is worth something.
    • Think quantum mechanics, finite element analysis, weather prediction, etc... Everything that is based on matrices or subsets of elements which are calculated in parallel. Although I am guessing memory would be a bottleneck here.
  • by Anonymous Coward on Sunday June 19, 2016 @11:52PM (#52350025)

    the world's first microchip with 1,000 independent programmable processors ... Imagine how many mind-boggling things will become possible if this much processing power ultimately finds its way into new consumer technologies.

    Yeah, but you have to keep in mind how many cores will be left for the user!

    1000 cores minus:
    * 200 cores for anti-virus software
    * 25 cores for the ransomware battling it out with the anti-virus
    * 55 cores for Microsoft's Win10 update nagware
    * 350 cores for the NSA monitoring
    * 122 cores for the FBI monitoring
    * 75 cores to handle syncing all your data to the cloud
    * 94 cores to run the 3D GUI based desktop
    * 62 cores for constant advertising
    * 14 cores for Google to keep tabs on what you're doing
    * 1 core dedicated to emacs

    So, only 2 cores left for the user. No better than an Athlon from 2005, I'm afraid.

  • Obligatory (Score:5, Funny)

    by Motherfucking Shit ( 636021 ) on Monday June 20, 2016 @12:18AM (#52350103) Journal

    Imagine a Beowulf cluster of these!

  • by Camembert ( 2891457 ) on Monday June 20, 2016 @12:41AM (#52350171)
    It could be an interesting extra chip in a general-use computer, where programs could siphon routines off to it: for example, kinds of video/image rendering, parallelizable mathematical operations, image recognition, a 1,000-node neural network, etc.
    • by Arkh89 ( 2870391 )

      The main problem would then be the memory bandwidth. A GPU can siphon through a lot of data because the architecture assumes that nearby threads are very likely to read contiguous data. This architecture, however, allows each core to have its own instruction queue, so it should be hard to predict which thread is going to access which portion of memory and coalesce the accesses into a single request. I fail to see how you can scale the bus/controller/etc. to match the bandwidth requirement (outside of few doz

    • Sounds exactly like a GPU to me. :-P
    • by AmiMoJo ( 196126 )

      We have those already, in the form of modern GPUs that can do a lot of general purpose processing such as physics simulation and image recognition.

      This chip is more like the Cell processor in the PlayStation 3, with a bunch of under-powered cores that are a bugger to program and have very low performance each. I can't see it taking off because, for example, each core only has access to a tiny amount of RAM, so the processing they can do will be limited mostly by memory bandwidth. A GPU gives its thousands of

  • The 1990s called; they want their joke back!

  • Aren't the shader units of modern GPUs, like the GeForces, basically specialized CPUs?
    In that case, we're already at 2560 CPUs on a single chip.

    • Re:Shader units (Score:4, Insightful)

      by Arkh89 ( 2870391 ) on Monday June 20, 2016 @02:11AM (#52350349)

      No, they are not. The threads in a modern GPU are not all free to execute different instructions. A GPU is a SIMT architecture: Single Instruction, Multiple Threads; each warp of threads (a group of approx. 16 to 32 threads) will execute the same instruction at the same time on whatever data each one is holding (some threads can also be deactivated in the group for this instruction). So the physical architecture for each of the threads in a GPU is much simpler than for the threads of this processor (because of the factorization of the instruction queue and related mechanisms, much simpler synchronization, etc.).

      • by Z80a ( 971949 )

        That makes em quite bad at dealing with conditional execution, right?

        • by Arkh89 ( 2870391 )

          Well, yes. But I don't think we can say "terrible" performance for conditional execution. Very simply, if you have a condition "if(test){ ... } else { ... }", the warp (group of threads) will go into the true-block if at least one of them ticks (test==true). During this portion of the execution, the threads which did not tick are disabled and are indeed waiting. And vice versa for the false-block. If none of the threads tick, or if they all do, then the unnecessary block will be avoided (this is what we
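          A toy lockstep model of that masking behaviour (plain host code, not real GPU code; the warp width and the data are made up for illustration):

            #include <array>
            #include <cstdio>

            int main() {
                constexpr int WARP = 8;                       // one shared program counter for all lanes
                std::array<int, WARP> data{3, -1, 7, -4, 2, -9, 5, -6};
                std::array<int, WARP> out{};
                std::array<bool, WARP> mask{};                // which lanes take the true-block

                bool any_true = false, any_false = false;
                for (int lane = 0; lane < WARP; ++lane) {     // evaluate "test" for every lane
                    mask[lane] = data[lane] >= 0;
                    if (mask[lane]) any_true = true; else any_false = true;
                }

                // True-block: run once for the whole warp if at least one lane ticked;
                // lanes with a cleared mask just sit idle. That idling is the divergence cost.
                if (any_true)
                    for (int lane = 0; lane < WARP; ++lane)
                        if (mask[lane]) out[lane] = data[lane] * 2;

                // False-block: same thing for the remaining lanes. If all lanes agreed,
                // one of the two blocks is skipped entirely and nothing is wasted.
                if (any_false)
                    for (int lane = 0; lane < WARP; ++lane)
                        if (!mask[lane]) out[lane] = -data[lane];

                for (int v : out) std::printf("%d ", v);
                std::printf("\n");
            }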

          • by Z80a ( 971949 )

            For loops that use a gradient as a reference must be completely GPU crushing.

            • You mean like coherence-enhancing filters that use a structure tensor to control the shape of a blur and sharpening kernel?

              Photoshop's implementation (the oil paint filter) is particularly poor in performance. I don't know why it's so terrible. Maybe it's a marketing thing (if it's slow, it must be really good?).

              For image processing in particular, the fact that branching can in the worst case have a significant penalty on GPUs is moot, because the worst case doesn't normally happen in practice.
  • The way to improve computational technology is parallelism. What are the usage domains?

    -anything video related
    --games
    --image recognition

    -anything AI (I think?)
    --autonomous cars
    --facial recognition

    -a lot of physics applications

    Thoughts?

    • Most stuff in autonomous cars doesn't need that much power.

      The stuff I was involved in runs mainly on 4 ARMs, 1 DSP, 512 MB, 500 MHz (not sure, might be less). But that was image processing, only for emergency braking, pedestrian recognition, sign recognition, lane detection, etc.

      Additional systems like LIDAR, RADAR, ultrasonic surface tracking, etc. usually run independently on a different system, but with similarly low spec requirements.

  • It only runs at 1.78 GHz. My Pentium 4 running XP runs at 4 GHz! Just ask any Joe Sixpack who bought one over an AMD.

  • Boring (Score:4, Informative)

    by nateman1352 ( 971364 ) on Monday June 20, 2016 @02:48AM (#52350425)

    ...contains 621 million transistors... Imagine how many mind-boggling things will become possible if this much processing power ultimately finds its way into new consumer technologies.

    Let's see... 1,000 very small compute cores... sounds an awful lot like your typical GP-GPU these days. The only reason the power consumption is so small is that it has < 1 billion transistors. Compare that to the 17-billion-transistor NVIDIA Pascal monster. Even the non-Iris-graphics Skylake desktop CPU has ~1.7 billion, and over half of those are spent on the GPU.

    Chances are even paltry Intel HD Graphics running an OpenCL program will have more FLOPS than this thing. Don't be fooled by the flashy headline; the laws of physics still apply.

    • While I agree this is more flash than substance, it hardly deviates from the laws of physics. Unlike the NVIDIA example you provided, this CPU does not have much in the way of I/O bandwidth. So we are talking about minimal movement of data, which in turn results in impressively low power consumption. For certain applications this could be great (a previous post mentions neural networks). For the other 99% it is worthless.

      One should not compare this CPU to a GPU because the underlying design goals are v

  • by hughbar ( 579555 ) on Monday June 20, 2016 @03:37AM (#52350509) Homepage
    Will slow it down to a crawl before blue screening. Then we'll be ready for Windows 24 Home Premium Edition. No worries.
  • by Required Snark ( 1702878 ) on Monday June 20, 2016 @04:22AM (#52350619)
    If you read the two-page technical paper you will see that there is much less here than the hype suggests.

    Each CPU supplies an amount of computation less than a single instruction on a regular CPU. Think of it as a grid of instructions, not a grid of computers. A processor has a Harvard architecture with 128 instructions of 40-bit size and a separate data memory with two banks of 128 16-bit data values (256 16-bit data words total). It says nothing about register files or stacks or subroutine calls. It's likely that the two data banks are in effect the register set. The paper implies that a CPU can compute a single floating-point operation in software.

    Compiling means mapping code fragments to a set of connected CPUs and routing resources, and then feeding the data into the compute array. After some circuitous path through the grid, the answer emerges somewhere. There are also 12 independent memory banks, each with 64 KB of SRAM, that are available to all CPUs.

    History has not been kind to this kind of grid architecture with lots of CPUs and very little memory. Almost none of them ever made it out of the lab. It's symptomatic of hardware engineers who are clueless about software and design unprogrammable computers. They confuse aggregate theoretical throughput with useful compute resources.

    Debugging code on this would be a nightmare. It's completely asynchronous, there is no hardware to segregate different sets of CPUs doing different computing tasks, and there are so few resources per CPU that software debugging aids would crowd out the working code. The people listed on the paper should be punished by being forced to make it do useful work for at least a year. They would be scarred for life.
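    For a sense of scale, a quick tally of the per-core figures quoted above (numbers taken from this comment; any overheads and router buffers are ignored):

      #include <cstdio>

      int main() {
          const long cores          = 1000;
          const long imem_bits      = 128 * 40;            // 128 instructions x 40 bits
          const long dmem_bits      = 2 * 128 * 16;        // two banks of 128 x 16-bit words
          const long per_core_bytes = (imem_bits + dmem_bits) / 8;
          const long shared_bytes   = 12L * 64 * 1024;     // 12 independent 64 KB SRAM banks

          std::printf("per core : %ld bytes\n", per_core_bytes);              // 1,152 bytes
          std::printf("shared   : %ld KB\n", shared_bytes / 1024);            // 768 KB
          std::printf("total    : ~%.2f MB\n",
                      (cores * per_core_bytes + shared_bytes) / (1024.0 * 1024.0));
      }

    Roughly 1.15 KB of storage per core and under 2 MB across the whole chip, which is why any nontrivial working set, never mind debugging aids, won't fit.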

  • I can imagine. (Score:4, Interesting)

    by Megol ( 3135005 ) on Monday June 20, 2016 @04:51AM (#52350687)

    Even ignoring all other limitations of this particular processor, there's still Amdahl's law, limiting the speedup by the serial parts of a task.
    As one example of how that works, look at compiling to hardware. In theory this should bring enormous benefits, as one can parallelize not only at the instruction level but at the sub-instruction level, speculating and pipelining e.g. additions. Many types of communication can be eliminated entirely by replicating hardware.
    But even with those benefits, there is a _lot_ of software that is better run on a standard processor. Why? Because using custom optimized hardware to run it ends up replicating a number of normal processors, including caches, branch prediction, etc., and then a processor optimized by a dedicated team of experienced people ends up being more attractive.

    Not saying custom hardware can't bring huge benefits, not even saying that this research processor can't do it; _however_, in general there are a lot of tasks that can't really be accelerated much.
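    For a sense of what Amdahl's law does at this core count, a quick back-of-the-envelope calculation (the 5% serial fraction is an arbitrary illustration, not a measurement):

      #include <cstdio>

      int main() {
          const double serial = 0.05;                      // fraction that cannot be parallelized
          for (int cores : {1, 10, 100, 1000}) {
              // Amdahl's law: speedup(N) = 1 / (s + (1 - s) / N)
              double speedup = 1.0 / (serial + (1.0 - serial) / cores);
              std::printf("%4d cores -> %4.1fx speedup\n", cores, speedup);
          }
      }

    Even with 1,000 cores, that 5% of serial work caps the speedup below 20x.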

  • It's only 0.7 W when clocked at about 115 MHz (the quoted 115 billion instructions per second spread across 1,000 cores), but still impressive.
  • FINALLY! (Score:4, Funny)

    by Lumpy ( 12016 ) on Monday June 20, 2016 @08:58AM (#52351347) Homepage

    Something that will run Flash without bogging down.

  • What kind of computer scientists are they?

    They should have made it 1024. And labelled them 0-1023.

  • a Beowulf cluster...

    Had to say it. Haven't seen that response in a while.
