Hardware

Transistors Will Stop Shrinking in 2021, Moore's Law Roadmap Predicts (ieee.org) 133

Moore's Law, an empirical observation about the number of components that can be built on an integrated circuit and their corresponding cost, has largely held strong for more than 50 years, but its days now appear numbered. The 2015 International Technology Roadmap for Semiconductors, only officially made available this month, predicts that transistors could stop shrinking in just five years. From the IEEE article: After 2021, the report forecasts, it will no longer be economically desirable for companies to continue to shrink the dimensions of transistors in microprocessors. Instead, chip manufacturers will turn to other means of boosting density, namely turning the transistor from a horizontal to a vertical geometry and building multiple layers of circuitry, one on top of another. These roadmapping shifts may seem like trivial administrative changes. But "this is a major disruption, or earthquake, in the industry," says analyst Dan Hutcheson, of the firm VLSI Research. U.S. semiconductor companies had reason to cooperate and identify common needs in the early 1990s, at the outset of the roadmapping effort that eventually led to the ITRS's creation in 1998. Suppliers had a hard time identifying what the semiconductor companies needed, he says, and it made sense for chip companies to collectively set priorities to make the most of limited R&D funding. This still might not be the end of Moore's remarkable observation, though: the report adds that processors could continue to fulfill Moore's Law through increased vertical density. The original ITRS report is here.
Comments Filter:
  • You have to wonder just how its adherents will start adjusting the scenario now. Should be like watching preachers recalculate the date of the rapture.

    • by Anonymous Coward

      The original Moore's law is about the maximum number of components they can cram on a single circuit.

      If they go vertical, that's more components, hence it's still Moore's Law. Basically this headline is hype.

      • Re: (Score:3, Informative)

        Well, actually, it's not about the maximum number of components on a single chip, it's about complexity for minimum component costs (that's verbatim from Moore's article - which, by a strange coincidence, I happen to have re-read just a few hours ago!).
        • "This image covers the basic features of 3D Xpoint. The new memory is designed to be non-volatile, stackable (to improve density), and can perform read/write operations without requiring a transistor (DRAM requires one transistor per cell, which is one reason why it draws much more power per GB than a NAND flash drive)." ----

          http://www.extremetech.com/ext... [extremetech.com]

          Maybe transistors can't get smaller, but you don't have to use transistors. 3DXPoint is not as fast as DRAM but it is still so fast that it can replac

          • DRAM requires one transistor per cell, which is one reason why it draws much more power per GB than a NAND flash drive

            Isn't it fun when somebody technically ignorant tries to explain technology? DRAM draws lots of power because the charge that defines a bit leaks away, and to avoid loss of data refresh cycles are required, which means power draw. Flash leakage is more than 10 orders of magnitude lower, which means that practically speaking a flash device does not need to be refreshed.

             • 3DXpoint is not NAND flash. Its leakage characteristics (unpublished) would likely be different from those of flash or DRAM.

      • Technically, we've ALREADY started to "go vertical". There are ALREADY combo chips that stack RAM and Flash dies (sandwiched between heat-removal structures and separated by some kind of insulator), but they're limited to pairings where one chip isn't terribly hot and the other is relatively cool (like slow-clocked PSRAM and NOR flash). If you tried to stack a pair of i7 cores, they'd fry each other within milliseconds.

        Heat removal is a nontrivial problem. If Intel wanted to, it could sell b

    • the singularity has nothing to do with transistor density. that is a matter of algorithms and/or alternative non-digital architectures

    • Technically, "singularity" doesn't have as much to do with Moore's law as some people might claim, since - at least unless I misunderstood something - "singularity" implies some kind of vertical asymptote which Moore's law, being merely exponential, doesn't have. This means that Moore's law is not a sufficient condition for reaching "singularity". There would ALWAYS have to be some other kind of mechanism involved that could very well work even in absence of Moore's law, for example some kind of increased
    • by HiThere ( 15173 )

      FWIW, I believe that even our current technology is sufficient to "achieve the singularity". The thing that's lacking is software. The thing that would change is how widespread the "superhuman AIs" are. Possibly also how fast they are. (You could do it with cog-wheels if you didn't worry about speed.)

      Also, I haven't seen anything that would cause me to revise my expected date of 2030 plus or minus 5 years. Even that "plus or minus" doesn't really belong there, as there won't be any sudden change

      • A big part of the singularity logic is that you will have technological feedback that keeps increasing the power of tech till it reaches unrecognizable levels.

        If you are looking at 2030 for the singularity, then no matter how you slice it there isn't going to be that exponential growth from traditional hardware driving it.

        • So we will have on a desktop computer brains the size of a planet, but they will be bogged down complaining about being given menial tasks and how bored and depressed they are?

        • by HiThere ( 15173 )

          There are those whose view of the Technological Singularity is as you describe it. They believe in the "hard take-off" Singularity. Most of those who think seriously about it, however, believe in a "soft take-off". To deny that technological feedback is happening and increasing is to deny (at least) the last five decades of history. But it never goes the way you predict... unless your prediction is just that it's going to increase.

          Clearly there must be a limit. It is, however, not at all clear

    • Or it could be as fun as watching "Peak Oil" fanatics twist themselves into pretzels.

  • by acoustix ( 123925 ) on Sunday July 24, 2016 @01:41PM (#52571321)

    We hear the same bullshit every 2 years. Moore's law has nothing to do with the SIZE of the transistors. It has to do with the number of transistors on the chip and, to a lesser extent, the density of the transistors. Arranging the transistors vertically as well as horizontally will allow the law to continue.

    • Not quite. Going from pure 2D to 2.5D is not the same.
      With 2D, you get (for example) a 40000*40000 grid of components.
      To double that (in the same chip area) you need to go to roughly 56600*56600, or to 40000*40000*2 layers.
      You can do this several times.
      You may reach 40000*40000*8, but you are not going to get anywhere near 40000 layers without chip fabrication costs getting completely out of control.
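      In rough numbers (using the illustrative 40000-wide grid above, not real die data), the difference between doubling by shrinking and doubling by stacking looks like this:

        import math

        base_side = 40_000                      # components per side of the hypothetical 2D grid
        print(f"one layer: {base_side**2:.2e} components")

        # Doubling by shrinking: every linear dimension must scale by sqrt(2) per doubling.
        for d in range(1, 4):
            side = base_side * math.sqrt(2) ** d
            print(f"{d} doubling(s) via shrink: {side:,.0f} x {side:,.0f} per layer")

        # Doubling by stacking: the layer count itself must double each time, so a
        # two-year cadence means exponentially many layers (8 after 3 doublings,
        # ~1000 after 10) -- which is the fabrication-cost problem described above.
        for d in range(1, 4):
            print(f"{d} doubling(s) via stacking: {2**d} layers of {base_side:,} x {base_side:,}")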

    • Re: (Score:1, Informative)

      by Anonymous Coward

      Moore's law has nothing to do with the number of transistors on a chip. It's about the complexity for minimum component costs, which means you can get more of the cheapest thing on the device. If transistors are the cheapest thing on the chip, and if they aren't getting cheaper, and if they can't build bigger chips that contain the cheapest transistors, then Moore's law is dead.

      • by Anonymous Coward

        Gordon Moore determined back in the 60s and 70s that the number of transistors and components in an integrated circuit doubles approximately every two years.

        So yes, it has everything to do with the number of transistors on a chip.

      • by neonv ( 803374 ) on Sunday July 24, 2016 @03:21PM (#52571673)

        Google it, you'll get that it has to do with number of transistors, not complexity.

        "The observation made in 1965 by Gordon Moore, co-founder of Intel, that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented. Moore predicted that this trend would continue for the foreseeable future."

        • And saying "well, just go vertical" fails Moore's law as well. "Square inch" is two dimensional. He didn't say cubed inch. Being an engineer, I'm pretty sure he knew the difference between the two. Not that you said "go vertical", but many other posters here are.
          • Re: (Score:2, Insightful)

            by Anonymous Coward

            And saying "well, just go vertical" fails Moore's law as well. "Square inch" is two dimensional. He didn't say cubed inch.

            And you can't increase the population density per square mile by building tall buildings. Because that's no longer two dimensional, right?

        • by Pseudonym ( 62607 ) on Sunday July 24, 2016 @05:24PM (#52572067)

          Google it, you'll get that it has to do with number of transistors, not complexity.

          Read Moore's papers.

          The AC is strictly incorrect in stating that it has "nothing to do with the number of transistors on a chip". It has something to do with that. However, they did state what Moore said accurately, unlike whatever source Google took you to.

      • Does the price include such things as packaging? If you get more transistors in a single package, and perhaps lower costs of routing (both in terms of design time, because of one extra dimension to play with, and in terms of manufacturing), the total expense per transistor probably still decreases even if the transistor (on its own) costs exactly the same.
    • by Anonymous Coward

      We hear the same bullshit every 2 years. ... Arranging the transistors vertically and horizontally will allow the law to continue.

      Not this time. Many types of circuits are already at the economic limit of scaling at 28nm. Photolithography is getting extremely expensive, whether multiple patterning, DUV or e-beam. FinFETs or nanowire transistors will buy you only a few more years for circuits that can afford the expense. People have tried to stack transistors economically for decades without much success except for a handful of memory layers. Transistor leakage is getting worse and worse, even if you decrease the operating voltage. A

      • by Bengie ( 1121981 )
        My wife's 3.5GHz 14nm Skylake was cheaper than my 3.3GHz 22nm Haswell, and that's not even factoring in that my Haswell was on sale, or adjusting for inflation. Lithography may technically be more expensive by some metric, but retail prices seem to completely ignore that. There are a lot of variables that drive the price of CPUs, and even with production being a large part, they're still dropping in price, slowly.
    • by gweihir ( 88907 ) on Sunday July 24, 2016 @03:12PM (#52571633)

      In actual reality, most of Moore's law stopped 6-8 years ago. Just compare a midrange CPU from back then with one from today in actual performance. Not so much of a difference.

      • Lots of progress in CPUs was driven by the desire to use more transistors to run sequential programs for old ISAs and computer architectures faster than the older hardware could. Unfortunately, the number of transistors appears to rise superlinearly with the performance of our sequential (or "observably-sequential" ?) state machines, which means that the diminished increase in performance can very well be deceiving because we're not using the components in an optimal way.
        • by gweihir ( 88907 )

          Or alternatively, that is just how things are and this technology is reaching maturity, i.e. no grand advances to be expected anymore. From all other available examples of technology, that happens eventually. There is no reason to expect computers to be any different.

      True, it's definitely slowed down. Still, try comparing GPUs, or the performance of mobile computing hardware versus 6-8 years ago, and you'll see a fairly dramatic difference. In addition to the obvious technical challenges, I think perhaps desktop CPUs haven't advanced as dramatically in the last decade partially because there hasn't been a huge demand by most consumers for increased performance. My computer from 5 years ago works every bit as well for my day-to-day tasks as it did back then - the only

        • by Bengie ( 1121981 )
          A modern Intel CPU is about 150% faster than an Intel CPU from 5 years ago, mostly from IPC and SIMD improvements, with only a mild frequency boost. That's 2.5x as fast.
      • by tlhIngan ( 30335 ) <slashdot&worf,net> on Monday July 25, 2016 @12:57AM (#52573421)

        In actual reality, most of Moore's law stopped 6-8 years ago. Just compare a midrange CPU from back then with one from today in actual performance. Not so much of a difference.

        And Moore's law has never been about performance. Just transistor density.

        General logic like what makes up the computation portion of a CPU doesn't need Moore's law at all - the transistor density is so low that they generally fab tons of extra transistors that sit around doing nothing. That way, when a bug is found, they can revise just the metal layers and put some of those spare transistors to use. This easily saves half of the masks they need to re-do, so at $100K per mask, it can mean spending under a million dollars instead of a couple of million.

        Instead, Moore's law is closely followed by memory manufacturers, because the denser the transistors, the more memory available. This applies to both flash and RAM - 6-8 years ago you probably had a machine where 8GB of RAM was considered high end for a PC. Nowadays, 64GB is often the high end for a PC. Likewise, 120GB of SSD storage was considered a luxury; nowadays, you can get 480+GB for less money than that 120GB SSD, and it's not just SATA2, but SATA3. Or even PCIe.

        There are two kinds of designs in IC fabrication - "pin limited" and "silicon limited" - similar to how programs can be "I/O bound" or "CPU bound". In pin-limited ICs, the overall functionality and design is limited by the number of pins your package supports; even with 1000+ pins in modern packages, that still limits what you can do. In silicon-limited designs, the limit is how much area your design takes up - more area means higher costs due to fewer dice per wafer, as well as a higher chance of die defects. Memory devices are area limited - the pin counts of modern RAM and flash devices are low, but the area is high. Moore's law increases the storage density so you can have more storage in the same area.

        It's why SSDs have a hard time catching up to HDDs (at least with raw storage) - SSDs improve with roughly Moore's law. HDDs have been improving (storage wise) faster.

        In fact, most of the millions and billions of transistors in your CPU aren't used for logic processing - probably 90% of those transistors are memory related: caches, on-board memory, etc. Because those are dense. SRAM cells are typically 6T (6 transistor) designs, so if your CPU has 16MB of cache, that's 96M transistors right there just in the storage array. Even more fascinating is that those 96M transistors will probably occupy less area than one of the major processing units on the same chip, which may be only 1-2M transistors.

        • by epine ( 68316 )

          And Moore's law has never been about performance.

          I don't get the selective pedantry, here. There never was a Moore's "law" about the scaling of transistors over time. Pedantically, it probably should be called Moore's prescient, off-hand, transistor-scaling extrapolation. What ultimately came to be termed "Moore's law" never had a particularly strong basis in what Moore actually said.

          Even then, The Moore Attribution (thank you, Mr Ludlum) behaved in practice more like Moore's Moneylust Mandate (this was

        • SRAM cells are typically 6T (6 transistor) designs, so if your CPU has 16MB of cache, that's 96M transistors right there and then just in the storage array.

          That would be 768M transistors, if anybody actually used 6T SRAM for a 16MB cache. (Do they? I see that CPU transistor counts are up over 2e9 now. Whew!)

          16 MB * 8 b/B * 6 T/b = 768 MT
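          The same arithmetic in code (array transistors only, ignoring tag, decode and redundancy overhead):

            # Transistors in a 6T SRAM array: bytes * 8 bits/byte * 6 transistors/bit.
            def sram_transistors(cache_bytes, transistors_per_bit=6):
                return cache_bytes * 8 * transistors_per_bit

            print(sram_transistors(16 * 10**6))   # 16 MB (decimal): 768,000,000 transistors
            print(sram_transistors(16 * 2**20))   # 16 MiB (binary): 805,306,368 transistors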

      • by Agripa ( 139780 )

        In actual reality, most of Moore's law stopped 6-8 years ago. Just compare a midrange CPU from back then with one from today in actual performance. Not so much of a difference.

        Moore's law still applied; instead of an increase in performance, it manifested as a smaller die size requiring lower power.

    • Yes, the article seems to contradict itself:

      "...These changes will allow companies to pack more transistors in a given area and so adhere to the letter of Mooreâ(TM)s Law. "

    • by Jeremi ( 14640 )

      Moore's law has nothing to do with the SIZE of the transistors. It has to do with the number of transistors on the chip and, to a lesser extent, the density of the transistors. Arranging the transistors vertically as well as horizontally will allow the law to continue.

      In the future, the size of each transistor will remain roughly the same, but the size of the chip will double every year, so that by 2030 the average CPU will measure about 50 feet in each dimension. People will use them simultaneously for both computing and as floors, walls, or ceilings for their homes.

      Remember, you heard it here first.

  • by Anonymous Coward

    Moore's law will stop when a switching device becomes a single molecule. Make no mistake: for Moore's law to continue, it will take a radical change in the materials and design of switching devices. Notice I didn't say "transistor." Transistor density is becoming an issue. There are fundamental problems like electron tunneling that can only be fixed by tweaks such as voltage adjustments for so long.

    The next move is going to have to start moving towards molecular electronics. Thankfully nature has been working on s

    • by K. S. Kyosuke ( 729550 ) on Sunday July 24, 2016 @03:53PM (#52571795)
      I love Stanislaw Lem's concept of "the last generation computer". It may have been tongue-in-cheek at the time he wrote Fiasco (when the much-hyped "fifth generation" was the Next Big Thing), but the concept feels increasingly relevant these days.

      "This was a computer of the 'last' generation--last, because no other could have greater calculating power. Limits were imposed by such properties of matter as Planck's constant and the speed of light. Greater calculating ability could be achieved only by the so-called imaginary computers, designed by theorists engaged in pure mathematics and not dependent on the real world. The constructors' dilemma arose from the necessity of satisfying mutually exclusive conditions to pack the most neurons into the smallest volume. The travel time of the signals could not be longer than the reaction time of the components; otherwise, the time taken by the signals would limit the speed of calculation. The newest relays responded in one-hundred-billionth of a second. They were the size of atoms, so that an actual computer had a diameter of barely three centimeters. A computer any larger would be slower. The Hermes' computer did indeed take up half the control room, but that was for its peripherals: decoders, hierarchic assemblers, and so-called hypothesis generators, which, with the linguistic modules, did not operate in real time. But decisions in critical situations, in extremis, were made by the lightning-swift core, which was no bigger than a pigeon's egg."

  • by rbrander ( 73222 ) on Sunday July 24, 2016 @02:37PM (#52571507) Homepage

    The author is the son half of a father/son duo, Dan and Jerry Hutcheson, who wrote an article for Scientific American in 1996 on the expected coming end of Moore's Law, around 2003-2005. It was one of many predictions that Intel liked to deride as it pushed feature sizes down below the wavelength of the ultraviolet light used to print them, a remarkable achievement.
    And no doubt, Hutcheson will be in for more mocking from those who insist Moore's Law will continue until we're using subatomic particles.

    But for me, Moore's ended around the 2003-2005 they predicted. My big IT interest isn't phones and low-power computing, where Moore's is continuing - yes, possibly for longer than Hutcheson predicts -- but in raw desktop performance at number-crunching big databases. There's been progress there since 2005, but most of it has come from faster memory, SSDs, more cores. Raw horsepower progress continued, even exponentially - but not at a 2-year doubling after about 2005, it was more like 3, 4, then 5 years. I should have titled this, "Moore's law has been winding down for a decade, for many".

    The new "Skylake" generation of i7's is mostly about low-power progress. A genuine jump for us power users is coming in the fall, I think, after a couple of years since the last one...and the chips should be 15% or 20% faster than 2014's. Just not like the late 90s and doublings every year or two.

    • by jopsen ( 885607 )
      I guess for most people Moore's law is going to be reformulated to fit whatever narrative they're trying to sell...

      On topic, it's all about performance; exactly how it improves is perhaps less important... I suspect that future performance improvements will have to come from software, though. It's easy to make a CPU faster, but that doesn't help much when software jumps out of the CPU cache :)
      • by ET3D ( 1169851 )

        Yeah, QED. Moore's law is about transistor density, but you reformulated it to be about performance. :)

    • by AJWM ( 19027 )

      Depending on the specific problem, with number-crunching big databases you may be running into the limits of Amdahl's Law, not Moore's.

      If part of the algorithm is inherently serial (i.e., can't be parallelized), then that's going to be the bottleneck no matter how many cores you throw at it (although faster memory and I/O may help). CPU clock speeds have been stuck around 2-4 GHz for many years now, so throwing more transistors at the problem isn't going to help much. What we need there is not more transistors
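      For reference, a minimal sketch of Amdahl's law, which puts a hard ceiling of 1/(serial fraction) on the speedup no matter how many cores you add:

        def amdahl_speedup(serial_fraction, cores):
            # Time = serial part + parallel part spread over `cores` cores.
            return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

        for cores in (2, 8, 64, 1024):
            print(cores, round(amdahl_speedup(0.10, cores), 2))
        # With just 10% serial work, the speedup never exceeds 10x, even with 1024 cores.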

      • I don't know if it's true, but I've read that GaAs loses its advantage below the 0.35 micron node, which is close to 20 years ago.
    • by ET3D ( 1169851 )

      You're looking at desktop CPUs. If you look at other chips, GPUs for example, there has been much higher growth in performance. Even the performance of integrated GPUs has grown a lot.

      Moore's law was never about performance, but even looking at performance, you got about a 10x increase for GPUs in the past 8 years. Not exactly doubling every two years, but better than the 3-, 4-, then 5-year doublings you talk about.

  • Transistors will stop shrinking when they reach the smallest size possible for an electron to move from one side to the other. This would be one atom thick by one atom high for the emitter and collector, but I'm not sure what the base would require. Maybe two or three atoms to be able to control the flow? This is for digital transistors only; analog may need to be much larger due to frequencies, polarity, rate of flow and all that rigmarole.
    • by HiThere ( 15173 )

      Sorry, but if you get that system you'll need to run it with liquid helium coolant to eliminate noise. For most purposes it's better to use parts 3-4 times as large that need less cooling. You might still need liquid nitrogen, but that's a lot more doable.

  • by bigHairyDog ( 686475 ) on Sunday July 24, 2016 @03:54PM (#52571801)
    The number of people predicting the death of Moore’s law doubles every two years.
  • Long live Moore's Law!

  • The report adds that processors could still continue to fulfill Moore's Law with increased vertical density.

    What took them so long?

    I've been pointing out since at least the 1970s that a three-dimensional arrangement of components could continue scaling FAR longer than an essentially single-layer arrangement.

    • by Jeremi ( 14640 )

      I've been pointing out since at least the 1970s that a three-dimensional arrangement of components could continue scaling FAR longer than an essentially single-layer arrangement.

      Sure, but unless you've developed a superconducting substrate, or come up with a reliable, efficient 3D cooling system, or are willing to run the 3D transistors only at very low speed/power, you're going to run into serious heat dissipation problems. Solving those (along with manufacturing a working 3D structure in the first place) is what's taking them so long.

      • Sure, but unless you've developed a superconducting substrate, or come up with a reliable, efficient 3D cooling system, or are willing to run the 3D transistors only at very low speed/power, you're going to run into serious heat dissipation problems.

        Back then I was proposing a diamond semiconductor - supported and powered by water-cooled silver busbars. Diamond is extremely conductive thermally. The bandgap is 5.5 eV, corresponding to the deep ultraviolet, so you can run it very hot without fouling the elec

    • The report adds that processors could still continue to fulfill Moore's Law with increased vertical density.

      What took them so long?

      I've been pointing out since at least the 1970s that a three-dimensional arrangement of components could continue scaling FAR longer than an essentially single-layer arrangement.

      Yeah, and people have been trying that approach since the 70's, too. They're still working on heat dissipation.

  • The limit for general-purpose CPUs for about a decade has been power/heat, not transistor size. In the 1990s-2000s, performance could be increased with faster clock rates and more on-chip cache. Since about 2005, when clock rates passed 3GHz, CPU vendors have embraced multiple cores and cut power demands.

    Moore's Law can continue with 3D chips. Maybe a CPU of 2025 will be built with a first layer of transistors that covers the entire areal plane just for caching and with additional layers built ve

  • Stacking transistors vertically means less surface exposed to a heatsink.

    Unless I misunderstand something about how cooling these chips works, how can this problem be overcome?

  • The reason why chips are so cheap despite the large number of components on them is that all the components are produced at the same time. It's a complicated process with many steps using ludicrously expensive equipment for sure, but it's a single iteration through the production process. If you want to scale vertically, you have to increase the number of iterations. The production costs will asymptotically approach proportionality with the number of components on the chip.
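    A toy cost model shows the shape of that curve (all of the numbers below are made-up placeholders, not real fab economics):

      # Per-wafer cost = a fixed cost shared by the whole pass + a cost for each stacked layer.
      # As layer count grows, cost per component flattens toward per-layer cost / components
      # per layer, i.e. total cost becomes roughly proportional to the number of components.
      FIXED_COST = 5_000.0           # hypothetical shared wafer-processing cost
      COST_PER_LAYER = 1_000.0       # hypothetical extra cost per stacked layer
      COMPONENTS_PER_LAYER = 1.0e9   # hypothetical components per layer

      for layers in (1, 2, 4, 8, 64):
          total = FIXED_COST + COST_PER_LAYER * layers
          print(layers, f"{total / (COMPONENTS_PER_LAYER * layers):.2e}")   # cost per component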
  • Superconducting processors are a thing... they run at THz cycle frequencies in the lab.

    Sure, they're at the level of complexity of the original IBM PC or so... but that can be remedied. More transistors isn't the only way to go faster... faster transistors are an equally valid method. Implementing wave pipelines in more components is also valid (they've been used in varying degrees since the early 2000s), and being able to go into 3 dimensions may help the practicality of wave pipelines, which rely on constant
    • The implication is you will only be able to buy faster RAM, not more RAM. Having the same number of states but running computations on them faster isn't really the same thing as having more states or more complex circuits.

      • You are right that speed doesn't inherently increase memory density. However, nothing is stopping anyone from reading multiple bits of information from single atoms... so yes, higher densities are possible. It's a somewhat separate problem from processor speed, though...
        • Well, there are some limits to what you suggest as well, due to quantum physics, the uncertainty principle, etc. Though I doubt we are very close to those limits yet.

  • It still might not be the end of Moore's remarkable observation, though. The report adds that processors could still continue to fulfill Moore's Law with increased vertical density.

    Nope, high-performance logic is already limited by the ratio of power dissipation to the surface area available for cooling, and it has been this way for almost a decade now. Increasing vertical density just makes this worse.

  • by rew ( 6140 )

    ... John von Neumann said, in 1947:

    http://www.brainyquote.com/quo... [brainyquote.com]

    It would appear that we have reached the limits of
    what it is possible to achieve with computer technology,
    although one should be careful with such statements,
    as they tend to sound pretty silly in 5 years.

    For the record: I brought up this quote around 20 years ago, when similar statements about the "end of Moore within 5-10 years" were being made.
