Hardware

Processors and the Limits of Physics

An anonymous reader writes: As our CPU cores have packed more and more transistors into increasingly tiny spaces, we've run into problems with power, heat, and diminishing returns. Chip manufacturers have been working around these problems, but at some point we're going to run into hard physical limits that we can't sidestep. Igor Markov from the University of Michigan has published a paper in Nature (abstract) laying out the limits we'll soon have to face. "Markov focuses on two issues he sees as the largest limits: energy and communication. The power consumption issue comes from the fact that the amount of energy used by existing circuit technology does not shrink in proportion to its shrinking physical dimensions. The primary result has been that lots of effort goes into making sure that parts of the chip are shut down when they're not in use. But at the rate this is happening, the majority of a chip will have to be kept inactive at any given time, creating what Markov terms 'dark silicon.' Power use scales with the square of the chip's operating voltage, and transistors simply cannot operate below a level of roughly 200 millivolts. ... The energy issue is related to communication, in that most of the physical volume of a chip, and most of its energy consumption, is spent getting different areas to communicate with each other or with the rest of the computer. Here, we really are pushing physical limits. Even if signals in the chip were moving at the speed of light, a chip running above 5GHz wouldn't be able to transmit information from one side of the chip to the other within a single clock cycle."
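The voltage point follows from the standard dynamic-power relation for CMOS switching, P = C x V^2 x f. A minimal sketch of the scaling (the capacitance and frequency figures below are hypothetical, chosen only to illustrate why lowering voltage is the main lever and why a ~200 mV floor is such a hard limit):

```python
# Dynamic (switching) power of a CMOS chip: P = C * V^2 * f.
# The numbers are illustrative, not measurements of any real chip.

def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    """Return switching power in watts for the given C, V, and f."""
    return capacitance_f * voltage_v ** 2 * frequency_hz

C = 1e-9  # total switched capacitance: 1 nF (hypothetical)
f = 3e9   # clock frequency: 3 GHz

for v in (1.0, 0.7, 0.5, 0.2):
    print(f"V = {v:.1f} V -> P = {dynamic_power(C, v, f):.2f} W")

# Halving the voltage cuts power roughly 4x; once transistors stop
# switching reliably near the ~200 mV floor, that lever is gone.
```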
  • Dupe (Score:5, Informative)

    by Anonymous Coward on Saturday August 16, 2014 @11:38AM (#47684573)
  • by AchilleTalon ( 540925 ) on Saturday August 16, 2014 @12:28PM (#47684697) Homepage

    Well, clearly, moving mainframe people to OS/2 development wouldn't have been such a great idea. The mainframe segment was much more profitable than the PC segment, where profit margins were so thin that IBM decided to sell the whole division to Lenovo. The money is elsewhere.

    And do not forget that memory management had to be reinvented because IBM held IP rights to the MVS algorithms that it wasn't willing to transfer to OS/2. In those old times, the PC and mid-range markets were perceived as a threat by the big mainframe guys at IBM, who were still the guys at the top of the hierarchy. The technical side is just the lesser part of this problem.

  • Re: Lightfoot (Score:4, Informative)

    by Fred Zlotnick ( 3676845 ) on Saturday August 16, 2014 @01:33PM (#47684967)
    The speed of light is approximately 3 x 10^8 m per second in a vacuum, and about half that in a semiconductor like silicon, so closer to 6 inches per nanosecond. Nearly all chips are less than one inch across. Even if this were not the case, that would not be an upper limit: data does not have to reach the end of the chip before the next clock cycle. This is an example of the author having a bit of knowledge (erroneous, as you point out) and extrapolating to an incorrect answer.
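A quick back-of-the-envelope check of those figures (a sketch; the half-of-c propagation factor is the commenter's own approximation):

```python
# Distance a signal covers in one clock period, assuming it propagates
# at half the vacuum speed of light (the commenter's approximation).

C_VACUUM = 3.0e8          # speed of light in vacuum, m/s
PROPAGATION_FACTOR = 0.5  # on-chip signal speed as a fraction of c

def distance_per_cycle_m(frequency_hz):
    """Metres a signal travels during one clock period."""
    return C_VACUUM * PROPAGATION_FACTOR / frequency_hz

for ghz in (1, 3, 5, 8):
    d = distance_per_cycle_m(ghz * 1e9)
    print(f"{ghz} GHz: {d * 100:.1f} cm ({d / 0.0254:.1f} in) per cycle")

# At 1 GHz the reach is ~15 cm (~6 in) per cycle, matching the figure
# above; at 5 GHz it drops to ~3 cm, still larger than most dies.
```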
  • Re:So what (Score:4, Informative)

    by ledow ( 319597 ) on Saturday August 16, 2014 @01:43PM (#47685017) Homepage

    Nobody says 5GHz is impossible. Read it.

    It says that you can't traverse the entire chip in a single cycle while running at 5GHz. Most operations don't need to. Why? Because the chips are small and any one set of instructions tends to operate within an even smaller area of the die.

    What they are saying is that chips will no longer be synchronous: if chips get any bigger, your clock signal takes too long to traverse the entire length of the chip, and you end up with different parts of the chip needing different clocks.

    It's all linked. The chip can get bigger and still pack in the same density, but then the signals drift further out of sync, the voltages have to be higher, the traces have to be straighter, the routing becomes more complicated, and the heat goes up. Oh, and you'll have to have parts of it "go dark" to avoid overheating their neighbours, etc. This is exactly what the guy is saying.

    At some point there's a limit at which it's cheaper and easier to have a bucketload of synchronously clocked chips tied together loosely than one mega-processor trying to keep everything ticking in step.

    And current overclocking records are only around 8GHz. Nobody says you can't make a processor that operates at 10THz if you want; the problem is that it would have to be TINY and not do very much. Remember that frequency is high in anything dealing with radio: your wireless router can do some things at 5GHz, and somewhere inside it is an oscillator doing just that. But not the SAME kinds of things we expect modern processors to do.

    Taking into account that most of those overclocking benchmarks probably exercise only small areas of the silicon, are run under mineral oil or similar, and amount to racing a benchmark across a complicated chip that ALREADY accounts for signals taking so long that clocks drift out of sync across the die, we don't have much leeway at all. We hit a huge wall at 2-3GHz, and that's where people have tended to stay despite it being, what, a decade or more since the first 3GHz Intel chip? We add more processors and more cores and more threading, but we basically haven't gotten "faster" over the last decade; we're just able to have more processors at that speed.

    No doubt we can push it further, but not forever, and not with the kind of on-chip capabilities you expect now.

    With current technology (i.e. no quantum leaps of science making their way into our processors), I doubt you'll ever see a commercially available 10GHz chip that'll run Windows. Super-parallel machines running at a fraction of that but delivering more gigaflops? Yeah. But a sustainable base core frequency of 10GHz? No.
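To put rough numbers on the multiple-clock-domains point above, a toy sketch. The die size and the effective wire speed are assumed values; real on-chip wires are RC-limited and propagate well below the speed of light, so the 0.1c figure is purely illustrative:

```python
# Estimate how many independent clock domains a square die would need if
# every point in a domain must be reachable within one clock period.
# All figures are illustrative assumptions, not measured values.

import math

C_VACUUM = 3.0e8    # speed of light in vacuum, m/s
WIRE_SPEED = 0.1    # assumed effective on-chip signal speed, fraction of c
DIE_EDGE_M = 0.025  # hypothetical 25 mm square die

def clock_domains(frequency_hz):
    """Number of clock domains needed at the given frequency."""
    reach_m = C_VACUUM * WIRE_SPEED / frequency_hz  # distance per cycle
    per_edge = max(1, math.ceil(DIE_EDGE_M / reach_m))
    return per_edge ** 2

for ghz in (1, 3, 5, 10):
    print(f"{ghz} GHz: ~{clock_domains(ghz * 1e9)} clock domain(s)")

# Once the per-cycle reach falls below the die edge, one synchronous
# clock can no longer span the chip and the design splits into domains.
```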

  • by slew ( 2918 ) on Saturday August 16, 2014 @08:30PM (#47686591)

    Moore's Law is "the number of transistors in a dense integrated circuit doubles every two years". You can accomplish that by halving the size of the transistors, or by doubling the size of the chip. Some element of the latter is already happening - AMD and Nvidia put out a second generation of chips on the 28nm node, with greatly increased die sizes but similar pricing. The reliability and cost of the process node had improved enough that they could get a 50% improvement over the last gen at a similar price point, despite using essentially the same transistor size.

    Bad example: the initial yield on 28nm was so bad that early pricing was hugely impacted by wafer shortages. Many fabless customers fell back to the 40nm node to wait it out. TSMC eventually got things sorted out, so 28nm now has reasonable yields.

    Right now, the next node is looking even worse. TSMC isn't counting on the yield-times-cost of its next-gen process to *ever* reach the point where it crosses below 28nm pricing per transistor (for typical designs). Given that reality, it will likely only make sense to move to the newer processes if you need their lower-power features, and you will pay a premium for that. The days of free transistors with each new node appear to be numbered until some radical manufacturing breakthrough improves the economics (which might eventually happen, but it currently isn't on anyone's roadmap down to 10nm). Silicon architects now need to get smarter, as they likely won't have many more transistors to work with at a given product price point.
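The economics argument reduces to cost per transistor = (wafer cost / good dies per wafer) / transistors per die. A toy comparison (every number below is hypothetical; only the shape of the comparison matters):

```python
# Cost per transistor for a mature node versus a hypothetical new node.
# All dollar figures, yields, and densities are made up for illustration.

def cost_per_transistor(wafer_cost, dies_per_wafer, yield_frac, transistors):
    """Dollars per transistor given wafer cost, die count, yield, density."""
    good_dies = dies_per_wafer * yield_frac
    return wafer_cost / good_dies / transistors

# Mature 28nm-class node: cheaper wafers, high yield.
mature = cost_per_transistor(5000, 200, 0.90, 2e9)
# Newer node: twice the transistors per die, but costlier wafers and
# (initially) much worse yield.
newer = cost_per_transistor(9000, 200, 0.55, 4e9)

print(f"mature node: ${mature:.2e} per transistor")
print(f"newer node : ${newer:.2e} per transistor")

# If wafer cost rises and yield falls faster than density improves, the
# new node never undercuts the old one on price per transistor.
```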

    If memory-bound problems start becoming a priority (and transistors get cheap enough), we might see a shift back from DRAM to SRAM for main memory.

    Given the above situation, given that fast SRAMs tend to be quite a bit larger than fast DRAMs (a 6T cell vs 1T+capacitor), and given the basic fact that the current limitation is the interface to the memory device rather than the memory technology itself, a shift back to SRAM seems mighty unlikely.

    The next "big-thing" in the memory front is probably WIDEIO2 (the original wideio1 didn't get many adopters). Instead of connecting an SoC (all processors are basically SoC's these days) to a DRAM chip, you put the DRAM and SoC in the same package (either stacked with through silicon vias or side-by-side in a multi-chip package). Since the interface doesn't need to go on the board, you can have many more wire to connect the two, and each wire will have lower capacitance which will increase the available bandwidth to the memory device.
