Hardware

Processors and the Limits of Physics

An anonymous reader writes: As our CPU cores have packed more and more transistors into increasingly tiny spaces, we've run into problems with power, heat, and diminishing returns. Chip manufacturers have been working around these problems, but at some point we're going to run into hard physical limits that we can't sidestep. Igor Markov from the University of Michigan has published a paper in Nature (abstract) laying out the limits we'll soon have to face. "Markov focuses on the two issues he sees as the largest limits: energy and communication. The power consumption issue comes from the fact that the energy used by existing circuit technology does not shrink in proportion to its shrinking physical dimensions. The primary result has been a lot of effort put into making sure that parts of the chip are shut down when they're not in use. But at the rate this is happening, the majority of a chip will have to be kept inactive at any given time, creating what Markov terms 'dark silicon.' Power use scales with the square of the chip's operating voltage, and transistors simply cannot operate below a 200-millivolt level. ... The energy use issue is related to communication, in that most of the physical volume of a chip, and most of its energy consumption, is devoted to getting different areas to communicate with each other or with the rest of the computer. Here, we really are pushing physical limits. Even if signals in the chip were moving at the speed of light, a chip running above 5GHz wouldn't be able to transmit information from one side of the chip to the other."
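A quick back-of-envelope check of that last figure, using assumed illustrative numbers rather than anything from Markov's paper: if on-chip signals move at roughly half the speed of light (optimistic for real RC-limited wires) and a worst-case path across a large die is about 2.5 cm, a one-clock crossing stops being possible somewhere in the mid-single-digit GHz range, which is the ballpark of the summary's 5GHz figure.

    # Back-of-envelope: how far can a signal get in one clock period?
    # All numbers are assumptions for illustration, not measured values.
    C = 3.0e8                 # speed of light in vacuum, m/s
    WIRE_SPEED = 0.5 * C      # assumed effective on-chip signal speed (optimistic)
    PATH = 0.025              # assumed worst-case on-die path, meters (~2.5 cm)

    for freq_ghz in (1, 3, 5, 10):
        period = 1.0 / (freq_ghz * 1e9)          # seconds per clock
        reach = WIRE_SPEED * period              # meters covered in one clock
        verdict = "fits in" if reach >= PATH else "exceeds"
        print(f"{freq_ghz:>2} GHz: signal covers {reach * 100:.1f} cm per clock; "
              f"a 2.5 cm crossing {verdict} one cycle")

Real wires are slower still once RC delay and repeaters are accounted for, which is why the comments below turn to multi-cycle crossings rather than faster signaling.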
Comments:
  • by Jody Bruchon ( 3404363 ) on Saturday August 16, 2014 @11:44AM (#47684595)
    Clockless logic circuits [wikipedia.org] might be an interesting workaround for the communication problem. The other side of the chip starts working when the data CAN make it over there, for example. I don't claim to know much about CPU design beyond how they work on a basic logical level, but I'd love to hear the opinions of someone here who does regarding CPUs and asynchronous logic.
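For readers unfamiliar with the idea in the comment above, here is a minimal toy model of the Muller C-element, the storage primitive behind many clockless (self-timed) pipelines. It is only a sketch of the concept, not how asynchronous circuits are actually engineered.

    # Toy model of a Muller C-element: the output copies the inputs when
    # they agree and holds its previous value when they disagree.
    class CElement:
        def __init__(self):
            self.out = 0
        def update(self, a, b):
            if a == b:
                self.out = a
            return self.out

    # In a clockless pipeline, a stage fires only when its predecessor has
    # data ready and its successor has consumed the last item -- there is
    # no global clock that every corner of the chip must obey.
    c = CElement()
    for a, b in [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]:
        print(f"inputs {a},{b} -> output {c.update(a, b)}")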
  • Go vertical! (Score:5, Interesting)

    by putaro ( 235078 ) on Saturday August 16, 2014 @11:48AM (#47684603) Journal

    Stacking dies or some other form of going from flat to vertical will get you around some of the signaling limits. If you look back at old supercomputer designs, there were a lot of neat tricks played with the physical architecture to work around performance problems (for example, having a curved backplane lets you have a shorter bus but more space between boards for cooling). Heat is probably the major problem, but we haven't gone to active cooling inside chips yet (e.g., running cooling tubes through the processor rather than trying to take the heat off the top).
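A rough illustration of why stacking helps with the signaling limits, using assumed numbers (die area and via length are picked for illustration only): splitting one large die across k layers shrinks the worst-case in-plane distance by roughly the square root of k, while each vertical hop adds only tens of micrometers.

    # Assumed, illustrative numbers: ~600 mm^2 of silicon, ~50 um vertical hops.
    import math

    AREA_MM2 = 600.0   # assumed total silicon area
    TSV_MM = 0.05      # assumed length of one vertical hop (~50 um)

    for layers in (1, 2, 4, 8):
        side = math.sqrt(AREA_MM2 / layers)          # edge of each layer, mm
        worst = 2 * side + (layers - 1) * TSV_MM     # corner-to-corner Manhattan path plus hops
        print(f"{layers} layer(s): {side:5.1f} mm per side, worst-case path ~{worst:5.1f} mm")

As the comment notes, heat is the catch: stacking shortens wires but concentrates the power that has to be removed.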

  • by dbc ( 135354 ) on Saturday August 16, 2014 @12:01PM (#47684637)

    "Even if signals in the chip were moving at the speed of light, a chip running above 5GHz wouldn't be able to transmit information from one side of the chip to the other." ... in a single clock.

    So in the 1980s I was a CPU designer working on what I call "walk-in, refrigerated mainframes." It was mostly 100K-family ECL in those days, plus compatible ECL gate arrays. Guess what -- it took most of a clock to get to a neighboring card, and it certainly took a whole clock to get to another cabinet. So in the future it will take more than one clock to get across a chip. I don't see how that is anything other than a job posting for new college graduates.

    That one statement in the article reminds me of when I first moved to Silicon Valley. Everybody out here was outrageously proud of themselves because they were solving problems that had been solved in mainframes 20 years earlier. As the saying goes: "All the old timers stole all our best ideas years ago."
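The practical fix alluded to in the comment above is to pipeline the wire itself: register the signal at intermediate points so the clock period only has to cover one segment. A toy sketch follows; the three-stage split is an assumption for illustration.

    # Toy model: a signal that needs 3 clocks to cross the die, registered
    # at intermediate points. Latency is 3 cycles, but a new value can
    # still be launched -- and one still arrives -- every single cycle.
    from collections import deque

    STAGES = 3                               # assumed number of register stages
    pipe = deque([None] * STAGES)

    for cycle, launched in enumerate(["a", "b", "c", "d", None, None, None]):
        arrived = pipe.popleft()             # value that finished crossing this cycle
        pipe.append(launched)                # value starting its crossing this cycle
        print(f"cycle {cycle}: launch {launched!r:>6}, arrive {arrived!r}")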

  • by Rockoon ( 1252108 ) on Saturday August 16, 2014 @12:27PM (#47684693)
    Even more obvious is that even today's CPUs don't perform any calculation in a single clock cycle. The distances involved only affect latency, not throughput. The fact that a simple integer addition has a latency of 2 or 3 clock cycles doesn't prevent the CPU from executing 3 or more of those additions per clock cycle.

    Even AMD's Athlon designs did that. Intel's latest offerings can be coerced into executing 5 operations per cycle that each have a 3-cycle latency, and that's on a single core with no SIMD.

    It's not how quickly the CPU can produce a value; it's how frequently the CPU can retire(*) instructions.

    (*) That's actually a technical term.
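The arithmetic behind that latency-versus-throughput point, with assumed illustrative numbers rather than measured figures for any particular CPU: with enough independent operations in flight, throughput is set by the issue width, not by the per-operation latency.

    # Assumed numbers for illustration: 3-cycle add latency, 3 adds issued per cycle.
    LATENCY = 3          # cycles before any single addition's result is ready
    ISSUE_WIDTH = 3      # independent additions the core can start each cycle
    N = 1_000_000        # independent additions to execute

    cycles = LATENCY + (N - 1) / ISSUE_WIDTH     # fill the pipeline once, then W per cycle
    print(f"{N} independent adds in ~{cycles:,.0f} cycles "
          f"=> ~{N / cycles:.2f} retired per cycle despite {LATENCY}-cycle latency")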

"Engineering without management is art." -- Jeff Johnson

Working...