Hardware Technology

Will 7nm and 5nm CPU Process Tech Really Happen?

An anonymous reader writes "This article provides a technical look at the challenges in scaling chip production ever downward in the semiconductor industry. Chips based on a 22nm process are running in consumer devices around the world, and 14nm development is well underway. But as we approach 10nm, 7nm, and 5nm, the low-hanging fruit disappears, and several fundamental components need huge technological advancement to be built. Quoting: "In the near term, the leading-edge chip roadmap looks clear. Chips based on today's finFETs and planar FDSOI technologies will scale to 10nm. Then, the gate starts losing control over the channel at 7nm, prompting the need for a new transistor architecture. ... The industry faces some manufacturing challenges beyond 10nm. The biggest hurdle is lithography. To reduce patterning costs, Imec's CMOS partners hope to insert extreme ultraviolet (EUV) lithography by 7nm. But EUV has missed several market windows and remains delayed, due to issues with the power source. ... By 7nm, the industry may require both EUV and multiple patterning. 'At 7nm, we need layers down to a pitch of about 21nm,' said Adam Brand, senior director of the Transistor Technology Group at Applied Materials. 'That's already below the pitch of EUV by itself. To do a layer like the fin at 21nm, it's going to take EUV plus double patterning right out of the gate. So clearly, the future of the industry is a combination of these technologies.'"
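
As a rough sanity check on the numbers quoted above, here is a minimal sketch of the Rayleigh resolution criterion. The scanner parameters are assumptions rather than figures from the article (13.5nm EUV wavelength, NA of about 0.33, and a practical single-exposure k1 of roughly 0.35); they illustrate why a ~21nm pitch is out of reach for one EUV exposure on its own.

```python
# Rayleigh criterion: the smallest half-pitch one exposure can resolve is
# k1 * wavelength / NA. The parameters below are assumed, not taken from the article.

def min_pitch_nm(k1: float, wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest printable pitch (twice the Rayleigh half-pitch) for a single exposure."""
    return 2.0 * k1 * wavelength_nm / numerical_aperture

single_euv = min_pitch_nm(k1=0.35, wavelength_nm=13.5, numerical_aperture=0.33)
print(f"single EUV exposure:      ~{single_euv:.0f}nm minimum pitch")      # ~29nm
print(f"EUV + pitch-splitting DP: ~{single_euv / 2:.0f}nm minimum pitch")  # ~14nm

target_pitch = 21.0  # fin pitch quoted for the 7nm node
print(f"Can one EUV pass print a {target_pitch:.0f}nm pitch? {target_pitch >= single_euv}")
```

With double patterning splitting the pitch in half, the same assumed optics comfortably cover 21nm, which is consistent with the "EUV plus double patterning" claim in the quote.
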
This discussion has been archived. No new comments can be posted.

  • by sinij ( 911942 ) on Friday June 20, 2014 @10:04AM (#47281593)
    Could someone explain to me why further refinement of fabrication process is the only way to progress? With a car analogy?
    • Drivers are getting fatter and fatter, and the only way to get the car to move at the same speed is by continually improving the car... to end up at the same speed as before.

      • fuel cell cars are always on the cusp of commercialization, but remain 10 years out due to some technical hurdles. They've been 10 years out for decades.
        • So fuel cell cars are memristors? That actually sounds about right.
          • Ah, but HP is already testing and designing products with memristors. Of course, progress is going slowly because HP sucks at bringing products to market.

        • fuel cell cars are always on the cusp of commercialization, but remain 10 years

          Cars, maybe. But fuel cell buses are a regular sight round these parts:

          http://en.wikipedia.org/wiki/L... [wikipedia.org]

          • FCBs are much more prevalent in Europe than in the USA. The USA has much higher power requirements, while Europe's are gentler. Source: it's my job.
        • One problem of fuel cells is the fuel. How do you make it, how do you store it?

          We will either get the hydrogen from hydrocarbons (95% of today's hydrogen comes from fossil fuels), or we can split water with electrolysis, which uses a large amount of energy.

          If splitting water, the hydrogen just becomes a method of transporting the energy used to make it. How do you scale this and still be sustainable?
          If you are steam reforming oil into hydrogen, you are still dependent on oil, and what do you do with all the

          • the main unsolved problems with fuel cell vehicles are:
            * demonstrating fuel cell stack life
            * finding a better way to store hydrogen on board (more dense)

            In terms of how you make the fuel, all of this is solved. When you say "one problem is the fuel," it's unclear what specific problem you mean.

            You call out the expense of catalysts without knowing the cost of the catalyst or the amount required. Hint: catalytic converters also contain platinum, but somehow science fo
      • In a rather odd way it's incredibly fitting.

      • by dkman ( 863999 )
        Or aim it downhill
    • Everyone wants faster, cheaper, and lighter cars, but you cannae break the laws o' physics, captain.

      • Re:Car analogy? (Score:4, Interesting)

        by Zak3056 ( 69287 ) on Friday June 20, 2014 @11:01AM (#47282231) Journal

        Everyone wants faster, cheaper, and lighter cars, but you cannae break the laws o' physics, captain.

        That doesn't sound like breaking the laws of physics: making the car lighter will make it faster, as well as (assuming you avoid exotic materials) making it cheaper.

        • Re:Car analogy? (Score:5, Insightful)

          by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Friday June 20, 2014 @12:16PM (#47282957) Homepage Journal

          That doesn't sound like breaking the laws of physics: making the car lighter will make it faster, as well as (assuming you avoid exotic materials) making it cheaper.

          It's not breaking the laws of physics, but it is ignoring the current state of materials technology. You have to build a lot of cars before you can get the cost of building an aluminum body down to the same as the cost of building a steel body, and carbon fiber (the only other credible alternative today) is always more expensive.

          Also, they forgot "stronger". Cars which have a more rigid body not only handle better but they're actually more comfortable, because the suspension can be designed around a more rigid, predictable body. Getting all four of those things in the same package is the real challenge.

    • by Anonymous Coward

      As the channels get smaller they start to interfere with each other, much like shrinking the lane width on an interstate would.

    • Re:Car analogy? (Score:5, Insightful)

      by DahGhostfacedFiddlah ( 470393 ) on Friday June 20, 2014 @10:41AM (#47281981)

      We're trying to make smaller and smaller cars out of silicon, because then we can fit more cars onto parking lots. The number of cars we can fit onto a parking lot has been doubling approximately every 18 months for the past half-century, but we appear to be approaching some hard physical limits for the actual size of cars. In addition to the limits imposed by the size of the cars themselves (below a certain size, cars start interacting at a quantum level with the other cars around them), there are also challenges inherent in manufacturing cars at such a tiny scale. There is some new car-making technology on the horizon that may resolve these issues by using higher-frequency car-making lasers in our car foundries. But top researchers still have technical hurdles to pass before they can manufacture cars that are smaller than 7nm.

      • We're trying to make smaller and smaller cars out of silicon, because then we can fit more cars onto parking lots. The number of cars we can fit onto a parking lot has been doubling approximately every 18 months for the past half-century, but we appear to be approaching some hard physical limits for the actual size of cars. In addition to the limits imposed by the size of the cars themselves (below a certain size, cars start interacting at a quantum level with the other cars around them), there are also challenges inherent in manufacturing cars at such a tiny scale. There is some new car-making technology on the horizon that may resolve these issues by using higher-frequency car-making lasers in our car foundries. But top researchers still have technical hurdles to pass before they can manufacture cars that are smaller than 7nm.

        Easier car analogy: you can only shrink the car so much before the limiting factor is not the size of your cars, but how precisely (and how thin) you can paint the parking lines.

      • by Tablizer ( 95088 )

        How about the driver being the electron? Shrink the car below about 1 meter and the driver is bigger than the car. So, to get smaller cars, you have to put wheels on the drivers' asses...

      • by xded ( 1046894 )
        You just replaced the word "transistor" with "car".
        Your post still doesn't explain why the only way to progress is fitting more and more cars into a parking lot...
    • Could someone explain to me why further refinement of fabrication process is the only way to progress? With a car analogy?

      Easy enough. Take a car X driven by a driver Y. One driver can drive one car, so X = Y. If you make the car 50% smaller, then you'll have 2X = Y. If each car has a top speed of V, then the same driver Y can achieve 2V by driving those two smaller cars at once.

    • by dnavid ( 2842431 )

      Could someone explain to me why further refinement of fabrication process is the only way to progress? With a car analogy?

      The only way to make cars both faster and more energy efficient is to make them lighter. You can make cars faster by giving them more powerful engines, but at some point you'd have to power them with nuclear reactors. Eventually, the semiconductor manufacturers were making cars with about fifty pounds of aluminum and carbon fiber, and reaching the limits of what you could do with less material without the car falling apart. So they are currently researching carbon nanotubes and organic spider silk to

    • Could someone explain to me why further refinement of fabrication process is the only way to progress? With a car analogy?

      It's like Peak Oil, but in this case it's Peak Transistor Density.

    • In essence, we don't need to go below 10nm technology.
      What we need is to stop writing crappy code and to stop shipping computers loaded with bloatware.
      Those who actually need to go below 10nm are the ones directly profiting from it (Intel, HP, Dell, Lenovo, AMD).
      Going from the current 22nm down to 10nm already roughly halves the linear feature size (about a 5x reduction in transistor area).

      Learn to be more frugal. Migrate to Linux. Linux can still run FAST on 4 year old top computers or the cheape

  • e-beam lithography? (Score:5, Informative)

    by by (1706743) ( 1706744 ) on Friday June 20, 2014 @10:07AM (#47281639)
    Clearly e-beam has some serious issues (throughput, to name one...), but progress is being made on that front. For instance, http://www.mapperlithography.c... [mapperlithography.com] ( http://nl.wikipedia.org/wiki/M... [wikipedia.org] -- though it appears there's only a Dutch entry...).
  • This seems highly technical, which is great. I would say these issues are, at best, 5 years out. Plus, stacking processors and making them larger is always an option. Margins on processors range from slim at the low end to manyfold at the top; the manufacturers will have to learn to live on leaner margins all round.
    • by Guspaz ( 556486 )

      The problem with stacking is the thermal/power situation. Specifically, how much power can a processor use before it's impractical to power and cool it? And when you have two or more processor dies stacked on top of each other, the heatsink is only going to contact the topmost one. How do you remove that heat from the bottom one?

      I suspect the answer to those questions is that it's not practical to use much more power than we use in high-end desktop chips today (150-200W is probably the limit of practicali
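
      To put rough numbers on the cooling question, here is a back-of-the-envelope sketch. The die area and package power budget are assumed round figures, not values from this thread.

```python
# Heat flux through a die footprint when dies are stacked. The ~160mm^2 die and
# ~150W package budget are assumed round numbers for illustration only.

def power_density_w_per_cm2(power_w: float, die_area_mm2: float) -> float:
    """Heat flux through the die footprint, in W/cm^2 (100 mm^2 = 1 cm^2)."""
    return power_w / (die_area_mm2 / 100.0)

DIE_AREA_MM2 = 160.0     # roughly a mid-size desktop CPU die
PACKAGE_LIMIT_W = 150.0  # assumed practical power/cooling budget

for layers in (1, 2, 3):
    # All of the stack's heat must exit through the same footprint, so either the
    # flux climbs or each layer's power budget shrinks.
    flux = power_density_w_per_cm2(PACKAGE_LIMIT_W * layers, DIE_AREA_MM2)
    per_layer = PACKAGE_LIMIT_W / layers
    print(f"{layers} layer(s): {flux:5.0f} W/cm^2 at full power, "
          f"or {per_layer:.0f} W per layer within the same budget")
```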

      • by Mashiki ( 184564 )

        And when you have two or more processor dies stacked on top of each other, the heatsink is only going to contact the topmost one. How do you remove that heat from the bottom one?

        Best guess? A synthetic or natural diamond heat-transfer layer, using something along the lines of heatpipes to the top layer of the die, via the vertical method you mentioned. Either that, or make the die double-sided with a heat sink on both sides; that could let you stack three CPUs together.

        • by Guspaz ( 556486 )

          Either that, or make the die double-sided with a heat sink on both sides; that could let you stack three CPUs together.

          So you're telling me you want to go back to the days of slot-loading CPUs :P

          • by Mashiki ( 184564 )

            So you're telling me you want to go back to the days of slot-loading CPUs :P

            Who needs to go back to slot-loading? Why not mount the motherboard in the centre of the case and have a heat sink on either side?

  • Last time it was leakage that would prevent us from breaking 65nm. Before that, it was that lithography wouldn't get us below 120nm. Something will happen, like it always does.
    • Re:Same story (Score:4, Insightful)

      by rahvin112 ( 446269 ) on Friday June 20, 2014 @11:35AM (#47282529)

      There is a limit we'll hit eventually; we're approaching circuits that are only a single-digit number of atoms wide. No matter what, we'll never get a circuit smaller than a single atom. Don't get me wrong, I don't think 10nm is going to be the problem, but somewhere around single-digit atom widths we're going to run out of options to make them smaller.
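
      For scale, a quick sketch of how many silicon lattice constants fit across a feature of a given width. The 0.543nm lattice constant is a textbook value; treating node names as literal feature widths is a simplification, since modern "Xnm" labels are marketing rather than a physical dimension.

```python
# Silicon's cubic lattice constant is about 0.543nm, so a feature's width in
# "unit cells" is simply width / 0.543. Node names are treated as literal widths
# here, which overstates how physical those labels really are.

SI_LATTICE_NM = 0.543

for width_nm in (22, 14, 10, 7, 5):
    cells = width_nm / SI_LATTICE_NM
    print(f"{width_nm:>2}nm feature ~ {cells:4.1f} lattice constants across")
# 5nm comes out to roughly 9 unit cells -- the handful-of-atoms regime described above.
```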

      • There is a limit we'll hit eventually; we're approaching circuits that are only a single-digit number of atoms wide. No matter what, we'll never get a circuit smaller than a single atom.

        We'll go optical, and we'll use photons...

        • There is a limit we'll hit eventually; we're approaching circuits that are only a single-digit number of atoms wide. No matter what, we'll never get a circuit smaller than a single atom.

          We'll go optical, and we'll use photons...

          I don't think that will work. We'll still need optical channels, and by the time we are limited to the size of an atom, photons that small will be at gamma-ray frequencies, which are ionizing and will probably pass through the rest of the computer anyway.

          • I don't think that will work. We'll still need optical channels, and by the time we are limited to the size of an atom, photons that small will be at gamma-ray frequencies, which are ionizing and will probably pass through the rest of the computer anyway.

            Yeah, but by then we'll probably have some better way to control photons, and we won't need optical channels or the photons will automagically sort themselves at the end of a shared channel or something. Or the processor will be holographic and three-dimensional, etc etc.

  • I remember in the '90s everyone swore it was impossible to go under 90nm, and that 1GHz was the maximum speed you could get.

  • by necro81 ( 917438 ) on Friday June 20, 2014 @10:44AM (#47282015) Journal
    I am amused by this bit in the summary:

    But as we approach 10nm, 7nm, and 5nm, the low-hanging fruit disappears

    I'd say the low-hanging fruit disappeared a few decades ago. Continuing down the feature-size curve has for many years required a whole slew of ever-more-complicated tricks and techniques.

    That said: yes, going forward is going to be increasingly difficult. You will eventually reach an insurmountable limit that no amount of trickery or technology will be able to overcome. I predict it'll be somewhere around the 0 nm process node.

    • Came in here to say the same thing -- as if even getting to the nanometer level was somehow "low hanging fruit", never mind the current ability to fabricate at 14nm. Welcome to the brave new world of the gadget generation who take tech for granted yet are completely ignorant of the fundamentals behind it.
    • I'd say the low-hanging fruit disappeared a few decades ago

      In an absolute sense, yes. In a relative sense, some fruit will always be lower than others.

  • by mrflash818 ( 226638 ) on Friday June 20, 2014 @11:00AM (#47282211) Homepage Journal

    I worry about the reliability with tinier and tinier CPU feature size. ...how will those CPUs be doing, reliability-wise, 10 years later?

    When I buy something 'expensive', I expect it to last at least 10yrs, and CPUs are kinda expensive, to me.

    (I still have an Athlon Thunderbird 700MHz Debian workstation that I use, for example, and it's still reliable.)

    • Yes.

      If you ever get a job designing chips, you will find that RV has become an important part of the design flow.

    • (I still have an Athlon Thunderbird 700MHz Debian workstation that I use, for example, and it's still reliable.)

      And I have an Athlon Thunderbird 700 MHz debian system that I retired years ago, and replaced with a pogoplug. It's no slower and it draws over an order of magnitude less power; IIRC 7W peak before USB devices. You can get one for twenty bucks brand new with SATA and USB3, and install Debian on that. It'll probably pay for itself in relatively short order, what with modern utility rates.

      If you want another Athlon 700 though, you can have mine. I need to get rid of it. I need the shelf space for something el

    • In my decades of experience, I cannot remember a single case where the CPU was the cause of a system failure.

      Hard drive failures, GPU, modem, network card, monitor, keyboard, mouse, power supply. But the CPU always seems to keep kicking. Granted, I've had a CPU fan die, but I tend to replace those rather quickly after failure.

      But I also don't do stupid things like overclocking.

    • More miniaturization equals greater reliability, because smaller components always do better at surviving shock and vibration than larger ones.

    • by Tablizer ( 95088 ) on Friday June 20, 2014 @04:33PM (#47284991) Journal

      I worry about the reliability with tinier and tinier CPU feature size

      I'n usiing a 5nm protTotype,, andd it~s doingn &` ju ust f%ne. Don^t b be~a worRy waqrt#!

    • Comment removed based on user account deletion
  • Why don't we use a smaller architecture on larger dies, so that we have higher densities and higher speeds? Also, wouldn't that allow room for more cores and cache?
    • Why don't we use a smaller architecture on larger dies, so that we have higher densities and higher speeds? Also, wouldn't that allow room for more cores and cache?

      Because that doesn't lower costs and increase margins.
      With this last shrink we saw pretty much no gain (and in some cases losses) in cost efficiency, so with further shrinks they may have to wake the fuck up and start working on upping clock speeds, giving us a larger die with an entire butt of cores and cache, etc.

      • > entire butt of cores and cache
        I checked my copy of Measure for Measure, but 'butt' doesn't appear anywhere as a unit.

        BTU (energy) and buito (mass) are both close.
        Bucket (volume) is semantically close, I suspect.

        • by Anonymous Coward

          >I checked my copy of Measure for Measure, but 'butt' doesn't appear anywhere as a unit.

          A butt is precisely 1/2 of a tun (sic).

          (Yes, really. See http://en.wikipedia.org/wiki/Butt_(unit)#butt )

  • Because whatever you do in the computing world, you are affected by processing power and cost. Growth in these areas drives both new hardware and the new software to go with it, and any hit to growth will mean a loss of jobs.

    Software (what most of us here create) usually gets created for one of two reasons:

    1. Software is created because nobody is filling a need. Companies may build their own version if they want to compete, or a company may contract a customized version if they can see increased efficiency or

    • >Software (what most of us here create)

      Really? A lot of us create hardware. We have an existential interest in the answer to TFA.

      • I figured the hardware effect was fairly obvious :D

        I concentrated on the software side effects because more readers here work on that end.

    • by PRMan ( 959735 )
      There are always new things to do and never enough people to do them. I for one will be surprised if developers have a culling in the next 20 years. There are too many other jobs to eliminate first.
    • by david_thornley ( 598059 ) on Friday June 20, 2014 @04:29PM (#47284977)

      You do realize that we've been in that situation since the dawn of computers, don't you? Once we get close to filling needs, people come up with other needs. Once processor development more or less stalls out, people will still want better performance, but they won't get it by updating their systems any more. Software development is a pretty secure profession.

  • by Theovon ( 109752 ) on Friday June 20, 2014 @12:42PM (#47283195)

    There are a number of factors that affect the value of technology scaling. One major one is the increase in power density due to the end of supply and threshold voltage scaling. But one factor that some people miss is process variation (random dopant fluctuation, gate length and wire width variability, etc.).

    Using some data from ITRS and some of my own extrapolations from historical data, I tried to work out when process variation alone would make further scaling ineffective. Basically, when you scale down, you get a speed and power advantage (per gate), but process variation makes circuit delay less predictable, so we have to add a guard band. At what point will the decrease in average delay become equal to the increase in guard band?

    It turns out to be at exactly 5nm. The “disappointing” aspect of this (for me) is that 5nm was already believed to be the end of CMOS scaling before I did the calculation. :)

    Incidentally, if you multiply out the guard bands already applied for process variation, supply voltage variation, aging, and temperature variation, we find that for an Ivy Bridge processor, about 70% of the energy going in is “wasted” on guard bands. In other words, if we could eliminate those safety margins, the processor would use 1/3.5 as much energy for the same performance or run 2.5 times faster in the same power envelope. Of course, we can’t eliminate all of them, but some factors, like temperature, change so slowly that you can shrink the safety margin by making it dynamic.
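
    For readers who want to see how "multiplying out the guard bands" produces a number like that, here is a toy sketch. The per-source margin percentages are made-up placeholders (the post gives no breakdown); only the multiplicative structure and the rough E ∝ V² conversion are the point, and the post's ~70% figure corresponds to a combined energy factor of about 3.5x.

```python
# Toy model of stacked guard bands. Each margin is extra supply voltage held in
# reserve for a worst case; the percentages are placeholders, not measured data.

voltage_margins = {
    "process variation": 0.20,
    "supply droop":      0.10,
    "aging":             0.10,
    "temperature":       0.10,
}

v_factor = 1.0
for margin in voltage_margins.values():
    v_factor *= 1.0 + margin       # worst-case margins compound multiplicatively

energy_factor = v_factor ** 2      # dynamic energy scales roughly with V^2
wasted = 1.0 - 1.0 / energy_factor

print(f"combined voltage factor:       {v_factor:.2f}x")
print(f"energy factor (~V^2):          {energy_factor:.2f}x")
print(f"fraction spent on guard bands: {wasted:.0%}")
```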

    • by sinij ( 911942 )

      Very interesting post, thank you for writing it up.

      I have a question. Are there guard bands in biological computation (e.g. our brains)? I was under the impression that our cognitive processes (software) are optimized for speed and designed to work with massively parallel but highly unreliable neural hardware.

      What I am trying to say is that nature's optimization decided it is better to be very efficient all the time and correct only some of the time, but also to be very good at error che

      • by Theovon ( 109752 )

        What you’re talking about is “approximate computing,” which is a hot area in research right now. If you can tolerate some errors, then you can get a massive increase in performance.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Are you taking into account deeply depleted MOS with controlled bias?

      As I understand it, the Vt of the junction in present devices is controlled by the dopant level in the channel, and the source of Vt variation is from the shot noise in implantation. If you can increase the dopant concentration by 5x, you also decrease the variation due to shot noise by 5x. To compensate for the deeply depleted junction, you now need to add a body electrode to all the gates to bias them correctly, and a power supply tre
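
      On the shot-noise point, a quick Poisson-statistics sketch (the channel dimensions and doping level are illustrative assumptions, not figures from this thread): the number of dopant atoms in a deeply scaled channel is so small that its relative fluctuation, which goes as 1/sqrt(N), is enormous.

```python
import math

# Illustrative numbers only: a ~20nm x 20nm x 5nm channel and a heavy channel
# doping of 1e18 cm^-3. The point is how few dopant atoms are actually present.

NM_TO_CM = 1e-7
length_nm, width_nm, depth_nm = 20.0, 20.0, 5.0
doping_per_cm3 = 1e18

volume_cm3 = (length_nm * NM_TO_CM) * (width_nm * NM_TO_CM) * (depth_nm * NM_TO_CM)
mean_dopants = doping_per_cm3 * volume_cm3

# Dopant placement is essentially a Poisson process, so sigma = sqrt(N) and the
# relative fluctuation is 1/sqrt(N). Raising the concentration 5x therefore
# shrinks the relative fluctuation by only sqrt(5) ~ 2.2x.
relative_sigma = 1.0 / math.sqrt(mean_dopants)

print(f"mean dopant atoms in the channel: {mean_dopants:.1f}")    # ~2
print(f"relative fluctuation (1/sqrt(N)): {relative_sigma:.0%}")  # ~71%
```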

      • by Theovon ( 109752 )

        I don’t know the principles behind how doping concentrations are chosen, but I’m sure it’s optimized for speed. Also, you can compensate for Vth variation using body bias, but it’s basically impossible to do this per-transistor. You can do it for large blocks of transistors, which allows you to compensate a bit for systematic variation (due mostly to optical aberrations in lithography), but there’s nothing you can do about random variation. Also, there’s effective lengt

  • An interesting article here describes the horrendously difficult challenges facing EUV:
    https://www.semiwiki.com/forum... [semiwiki.com]

  • The problem is that memristance effects begin to manifest below 5nm.

    Thus, start using memristors to build IMP-FALSE logic circuits.
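
    For the curious, material implication (IMP) plus an unconditional FALSE really is a functionally complete basis, which is why memristor IMP logic gets proposed as a substitute for NAND-based CMOS logic. A small truth-table check (plain Python, nothing memristor-specific):

```python
# IMP plus FALSE is functionally complete: NOT and NAND (itself a universal gate)
# can both be expressed purely in terms of IMP and FALSE.

def IMP(p: bool, q: bool) -> bool:
    """Material implication: p IMP q == (not p) or q."""
    return (not p) or q

FALSE = False

def NOT(p: bool) -> bool:
    return IMP(p, FALSE)        # p IMP 0  ==  not p

def NAND(p: bool, q: bool) -> bool:
    return IMP(p, NOT(q))       # p IMP (not q)  ==  not (p and q)

for p in (False, True):
    for q in (False, True):
        assert NOT(p) == (not p)
        assert NAND(p, q) == (not (p and q))
print("IMP + FALSE reproduce NOT and NAND for all inputs")
```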
