Will 7nm and 5nm CPU Process Tech Really Happen?
An anonymous reader writes "This article provides a technical look at the challenges in scaling chip production ever downward in the semiconductor industry. Chips based on a 22nm process are running in consumer devices around the world, and 14nm development is well underway. But as we approach 10nm, 7nm, and 5nm, the low-hanging fruit disappears, and several fundamental components need huge technological advancement to be built. Quoting: "In the near term, the leading-edge chip roadmap looks clear. Chips based on today's finFETs and planar FDSOI technologies will scale to 10nm. Then, the gate starts losing control over the channel at 7nm, prompting the need for a new transistor architecture. ... The industry faces some manufacturing challenges beyond 10nm. The biggest hurdle is lithography. To reduce patterning costs, Imec's CMOS partners hope to insert extreme ultraviolet (EUV) lithography by 7nm. But EUV has missed several market windows and remains delayed, due to issues with the power source. ... By 7nm, the industry may require both EUV and multiple patterning. 'At 7nm, we need layers down to a pitch of about 21nm,' said Adam Brand, senior director of the Transistor Technology Group at Applied Materials. 'That's already below the pitch of EUV by itself. To do a layer like the fin at 21nm, it's going to take EUV plus double patterning right out of the gate. So clearly, the future of the industry is a combination of these technologies.'"
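As a rough back-of-envelope check on Brand's numbers, the Rayleigh criterion gives the smallest half-pitch a scanner can print as k1 * lambda / NA. The wavelength and numerical aperture below are real figures for first-generation EUV tools; the k1 value is an assumed practical lower bound for single exposure, so treat this as an illustrative sketch, not a process spec:

    #include <stdio.h>

    int main(void) {
        double lambda = 13.5; /* nm, EUV source wavelength */
        double na     = 0.33; /* numerical aperture of early EUV scanners */
        double k1     = 0.4;  /* assumed practical process factor */

        double half_pitch = k1 * lambda / na; /* Rayleigh resolution limit */
        printf("EUV single exposure: half-pitch %.1f nm (pitch %.1f nm)\n",
               half_pitch, 2.0 * half_pitch);
        printf("7nm fins need a pitch of ~21 nm (half-pitch 10.5 nm)\n");
        return 0;
    }

The single-exposure pitch comes out near 33nm, well above the ~21nm fin pitch, which is why the quote calls for EUV plus double patterning rather than EUV alone.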
Car analogy? (Score:5, Funny)
Re: (Score:3)
Drivers are getting fatter and fatter, and the only way to get the car to move at the same speed is by continually improving the car... to end up at the same speed as before.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Ah, but HP is already testing and designing products with memristors. Of course, progress is going slowly because HP sucks at bringing products to market.
Re: (Score:2)
fuel cell cars are always on the cusp of commercialization, but remain 10 years away
Cars maybe. But fuel cell busses are a regular sight round these parts:
http://en.wikipedia.org/wiki/L... [wikipedia.org]
Re: (Score:2)
Re: (Score:1)
One problem of fuel cells is the fuel. How do you make it, how do you store it?
We will either get the hydrogen from hydrocarbons (95% of today's hydrogen comes from fossil fuels), or we can split water with electrolysis, which uses a large amount of energy.
If splitting water, the hydrogen just becomes a method of transporting the energy used to make it. How do you scale this and still be sustainable?
If you are steam reforming oil into hydrogen, you are still dependent on oil, and what do you do with all the
Re: (Score:2)
* demonstrating fuel cell stack life
* finding a better way to store hydrogen on board (more dense)
In terms of how you make the fuel, all of this is solved. When you say "one problem is the fuel," I wonder what problem you are defining, specifically? It's unclear.
You call out the expense of catalysts without any knowledge of the expense of the catalyst or the amount required. Hint: catalytic converters also contain platinum, but somehow science fo
Re: (Score:2)
In a rather odd way it's incredibly fitting.
Re: (Score:1)
Re: (Score:2)
Everyone wants faster, cheaper, and lighter cars, but you cannae break the laws o' physics, captain.
Re:Car analogy? (Score:4, Interesting)
Everyone wants faster, cheaper, and lighter cars, but you cannae break the laws o' physics, captain.
That doesn't sound like breaking the laws of physics: making the car lighter will make it faster, as well as (assuming you avoid exotic materials) making it cheaper.
Re:Car analogy? (Score:5, Insightful)
That doesn't sound like breaking the laws of physics: making the car lighter will make it faster, as well as (assuming you avoid exotic materials) making it cheaper.
It's not breaking the laws of physics, but it is ignoring the current state of materials technology. You have to build a lot of cars before you can get the cost of building an aluminum body down to the same as the cost of building a steel body, and carbon fiber (the only other credible alternative today) is always more expensive.
Also, they forgot "stronger". Cars which have a more rigid body not only handle better but they're actually more comfortable, because the suspension can be designed around a more rigid, predictable body. Getting all four of those things in the same package is the real challenge.
Re: (Score:1)
As you get smaller channels, they start to interfere with each other, much like shrinking the lane size on an interstate would cause similar problems.
Re:Car analogy? (Score:5, Insightful)
We're trying to make smaller and smaller cars out of silicon, because then we can fit more cars onto parking lots. The number of cars we can fit onto a parking lot has been doubling approximately every 18 months for the past half-century, but we appear to be approaching some hard physical limits for the actual size of cars. In addition to the limits imposed by the size of the cars themselves (below a certain size, cars start interacting at a quantum level with the other cars around them), there are also challenges inherent in manufacturing cars at such a tiny scale. There is some new car-making technology on the horizon that may resolve these issues by using higher-frequency car-making lasers in our car foundries. But top researchers still have technical hurdles to pass before they can manufacture cars that are smaller than 7nm.
Re: (Score:2)
We're trying to make smaller and smaller cars out of silicon, because then we can fit more cars onto parking lots. The number of cars we can fit onto a parking lot has been doubling approximately every 18 months for the past half-century, but we appear to be approaching some hard physical limits for the actual size of cars. In addition to the limits imposed by the size of the cars themselves (below a certain size, cars start interacting at a quantum level with the other cars around them), there are also challenges inherent in manufacturing cars at such a tiny scale. There is some new car-making technology on the horizon that may resolve these issues by using higher-frequency car-making lasers in our car foundries. But top researchers still have technical hurdles to pass before they can manufacture cars that are smaller than 7nm.
Easier car analogy: you can only shrink the car so much before the limiting factor is not the size of your cars, but how precisely (and how thin) you can paint the parking lines.
Re: (Score:1)
How about the driver being the electron? Shrinking beyond one meter means the driver is bigger than the car. So, to get smaller cars, you have to put wheels on the drivers' asses...
Re: (Score:2)
Your post still doesn't explain why the only way to progress is fitting more and more cars into a parking lot...
Re: (Score:2)
Could someone explain to me why further refinement of fabrication process is the only way to progress? With a car analogy?
Easy enough. Take a car X driven by a driver Y. One driver can drive one car, so X = Y. If you make the car 50% smaller, then you'll have 2X = Y. If each car has a top speed of V, then the same driver Y can achieve 2V by driving those two smaller cars at once.
Re: (Score:2)
Could someone explain to me why further refinement of fabrication process is the only way to progress? With a car analogy?
The only way to make cars both faster and more energy efficient is to make them lighter. You can make cars faster by giving them more powerful engines, but at some point you'd have to power them with nuclear reactors. At some point, the semiconductor manufacturers were making cars with about fifty pounds of aluminum and carbon fiber, and reaching the limits of what you could do with less material without the car falling apart. So they are currently researching carbon nanotubes and organic spider silk to
Re: (Score:1)
Could someone explain to me why further refinement of fabrication process is the only way to progress? With a car analogy?
It's like Peak Oil, but in this case it's Peak Transistor Density.
Re: (Score:2)
In essence, we don't need to go below 10nm technology.
What we need is to stop writing crappy code and to stop shipping computers loaded with bloatware.
Those who actually need to go below 10nm are the ones directly profiting from it (Intel, HP, Dell, Lenovo, AMD).
Going from the current 22nm down to 10nm is only about a 2x linear shrink, or roughly a 5x reduction in transistor area.
Learn to be more frugal. Migrate to Linux. Linux can still run FAST on four-year-old top computers or the cheape
Re:Car analogy? (Score:5, Insightful)
Re: (Score:2)
If you take anything away from the car analogy, let it be this.
Re: (Score:2)
You have to know when to push past those barriers...until you have a single cylinder engine with more displacement than before!
Re: (Score:1)
Re: (Score:3)
Same with cars.
Thirty years ago you had an AM radio, you needed a V8 to get 190hp, and dozens of features we take for granted today would have been found only on ultra-luxury cars. It's not like we had navigation, Bluetooth, and lots of other gizmos in cars. A lot of the safety features and the new features people want suck up gas as well.
Re: (Score:3)
A lot of the safety features and the new features people want suck up gas as well.
Safety, yes. But none of the new features people want weigh anything notable. Indeed, most of them come free with the size reduction associated with modernization. If you replace a bunch of relays with some logic and a couple of relays then you can also add automation along with it and the whole thing actually weighs less. Even the lightest cars today have ABS, traction control, and yaw control, and virtually no cars [in the USA] are not offered with AC. Even modern adaptive suspension requires very little
Re: (Score:3)
Yeah, but the brakes would fail occasionally for no reason, or it just wouldn't start for no good reason; you could only drive on roads the car makers approved, and only transport goods approved for that specific kind of car; you'd have to get a new car every other year because you would not get any service for your old one anymore; people could easily hotwire your car and drive away with it; and everyone would tell you that whatever goes wrong with it is only YOUR fault, not th
e-beam lithography? (Score:5, Informative)
Technical (Score:1)
Re: (Score:3)
The problem with stacking is the thermal/power situation. Specifically, how much power can a processor use before it's impractical to power and cool it? And when you have two or more processor dies stacked on top of each other, the heatsink is only going to contact the topmost one. How do you remove that heat from the bottom one?
I suspect the answers to those questions are: it's not practical to use much more power than we use in high-end desktop chips today (150-200W is probably the limit of practicality).
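To put hypothetical numbers on that (both the die area and per-die power below are assumptions chosen for illustration), heat from every die in a stack has to exit through the same top surface, so the flux through the heatsink scales with the number of layers:

    #include <stdio.h>

    int main(void) {
        double die_area  = 1.5;   /* cm^2, assumed die size */
        double die_power = 100.0; /* W per die, assumed high-end load */

        /* All layers' heat must pass through the single top surface. */
        for (int layers = 1; layers <= 3; layers++)
            printf("%d layer(s): %5.0f W / %.1f cm^2 = %5.0f W/cm^2\n",
                   layers, layers * die_power, die_area,
                   layers * die_power / die_area);
        return 0;
    }

Doubling the stack doubles the heat flux the cooler has to handle, before you even account for the thermal resistance of the dies sandwiched in between.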
Re: (Score:2)
And when you have two or more processor dies stacked on top of each other, the heatsink is only going to contact the topmost one. How do you remove that heat from the bottom one?
Best guess? A synthetic/actual diamond transfer layer using something along the lines of heatpipes to the top layer of the die plate, using the vertical method you mentioned. Either that, or make the die double-sided with a heat sink on both sides; that could let you stack three CPUs together.
Re: (Score:2)
Either that, or make the die double-sided with a heat sink on both sides; that could let you stack three CPUs together.
So you're telling me you want to go back to the days of slot-loading CPUs :P
Re: (Score:2)
So you're telling me you want to go back to the days of slot-loading CPUs :P
Who needs to go back to slot-loading? Why not mount the motherboard in the centre of the case and have a heat sink on either side?
Same story (Score:2)
Re:Same story (Score:4, Insightful)
There is a limit we'll hit eventually: we're approaching circuits that are single-digit atoms wide. No matter what, we'll never get a circuit less than a single atom wide. Don't get me wrong, I don't think 10nm is going to be the problem, but somewhere around single-digit atom widths we're going to run out of options to make them smaller.
Re: (Score:3)
There is a limit we'll hit eventually: we're approaching circuits that are single-digit atoms wide. No matter what, we'll never get a circuit less than a single atom wide.
We'll go optical, and we'll use photons...
Re: (Score:2)
There is a limit we'll hit eventually: we're approaching circuits that are single-digit atoms wide. No matter what, we'll never get a circuit less than a single atom wide.
We'll go optical, and we'll use photons...
Don't think that will work. We'll still need optical channels, and by the time we are limited to the size of an atom, photons that small will be in the gamma-ray frequencies, which are ionizing and will probably pass through the rest of the computer anyway.
Re: (Score:2)
Don't think that will work. We'll still need optical channels, and by the time we are limited to the size of an atom, photons that small will be in the gamma-ray frequencies, which are ionizing and will probably pass through the rest of the computer anyway.
Yeah, but by then we'll probably have some better way to control photons, and we won't need optical channels or the photons will automagically sort themselves at the end of a shared channel or something. Or the processor will be holographic and three-dimensional, etc etc.
what was the excuse for 90nm again? (Score:2)
I remember in the '90s everyone swore it was impossible to go under 90nm, and that 1GHz was the maximum speed you could get.
Re:what was the excuse for 90nm again? (Score:4, Informative)
I remember the 90's too and I don't remember any of that.
The race to 1 GHz was run in heady, optimistic days, and I don't recall anyone thinking that once we got there, it would all be over.
So I call bullshit on your post.
Re: (Score:3)
Re: (Score:2)
Another physical limit: a silicon atom is about 0.22nm across. That isn't going to change either.
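A quick sketch of what that means for feature widths, using the parent's ~0.22nm figure (silicon's lattice constant is 0.543nm, so this is a ballpark, not a crystallographic count):

    #include <stdio.h>

    int main(void) {
        double atom    = 0.22; /* nm per silicon atom, rough figure */
        double nodes[] = { 22.0, 14.0, 10.0, 7.0, 5.0 };

        for (int i = 0; i < 5; i++)
            printf("%5.0f nm feature is ~%3.0f atoms wide\n",
                   nodes[i], nodes[i] / atom);
        return 0;
    }

A 5nm feature is only a couple dozen atoms across, so one atom more or less becomes a measurable fraction of the device.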
Re: (Score:3)
I read an article back in the Apple ][ days claiming that going beyond 16MHz was impossible, given track-to-track inductance.
Low Hanging Fruit (Score:5, Funny)
I'd say the low-hanging fruit disappeared a few decades ago. Continuing down the feature-size curve has for many years required a whole slew of ever-more-complicated tricks and techniques.
That said: yes, going forward is going to be increasingly difficult. You will eventually reach an insurmountable limit that no amount of trickery or technology will be able to overcome. I predict it'll be somewhere around the 0 nm process node.
Re: (Score:2)
Absolute vs. relative (Score:2)
I'd say the low-hanging fruit disappeared a few decades ago
In an absolute sense, yes. In a relative sense, some fruit will always be lower than others.
Re: (Score:2)
how many transistors can you etch onto the side of a silicon atom?
Re: (Score:1)
You're asking the wrong question. The better one: How can we get a single silicon atom to behave like a full logic gate?
Re: (Score:3)
The part of HP's work that applies here isn't the memristor. That's a low-cost SRAM (as opposed to DRAM). HP does have something to say about electron leakage, though. Their photonic interconnects use photons rather than electrons, hence the name.
Re: (Score:1)
Memristor is (IMO) a more ambitious goal.
Currently, we have three passive circuit elements: the resistor, the capacitor, and the inductor.
For the resistor, you have a linear relation between R, I, and V.
In the capacitor, you use the rate of change of V to understand its behavior.
In the inductor, you use the rate of change of I.
The memristor tries to use the change of R to give you a passive element to use in conjunction with the above. Where resistance (or C or L) were merely constants before, now you have someth
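For reference, this is Chua's symmetry argument in equation form. With charge q = int i dt and flux phi = int v dt, the three classical elements each link one pair of circuit variables, and the memristor fills the remaining phi-q pairing. These are the standard textbook relations, not anything specific to HP's device:

    \begin{aligned}
    \text{resistor:}\quad  & dv       = R\,di \\
    \text{capacitor:}\quad & dq       = C\,dv \\
    \text{inductor:}\quad  & d\varphi = L\,di \\
    \text{memristor:}\quad & d\varphi = M\,dq
    \end{aligned}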
Will it last with 10yrs of continuous use? (Score:5, Insightful)
I worry about reliability with tinier and tinier CPU feature sizes. ...how will those CPUs be doing, reliability-wise, 10 years later?
When I buy something 'expensive', I expect it to last at least 10 years, and CPUs are kinda expensive, to me.
(I still have an Athlon Thunderbird 700MHz Debian workstation that I use, for example, and it's still reliable.)
Re: (Score:2)
Yes.
If you ever get a job designing chips, you will find that RV has become an important part of the design flow.
Re: (Score:2)
You don't? You'll never get a job around here then.
Re: (Score:2)
(I still have an Athlon Thunderbird 700MHz Debian workstation that I use, for example, and it's still reliable.)
And I have an Athlon Thunderbird 700 MHz debian system that I retired years ago, and replaced with a pogoplug. It's no slower and it draws over an order of magnitude less power; IIRC 7W peak before USB devices. You can get one for twenty bucks brand new with SATA and USB3, and install Debian on that. It'll probably pay for itself in relatively short order, what with modern utility rates.
If you want another Athlon 700 though, you can have mine. I need to get rid of it. I need the shelf space for something el
Re: (Score:2)
In my decades of experience, I cannot remember a single case where the CPU was the cause of a system failure.
Hard drives, GPUs, modems, network cards, monitors, keyboards, mice, power supplies: all of those fail. But the CPU seems to always keep kicking. Granted, I've had a CPU fan die, but I tend to replace that rather quickly after failure.
But I also don't do stupid things like overclocking.
Re: (Score:2)
+1 Insightbait
Good news for reliability (Score:2)
More miniaturization equals greater reliability, because smaller components always do better at surviving shock and vibration than larger components.
Re:Will it last with 10yrs of continuous use? (Score:5, Funny)
I'n usiing a 5nm protTotype,, andd it~s doingn &` ju ust f%ne. Don^t b be~a worRy waqrt#!
Re: (Score:1)
Why are we worried about size? (Score:1)
Re: (Score:2)
Why don't we use smaller architectures on larger dies, so that we have higher densities and higher speeds? Wouldn't that also allow room for more cores and cache?
Because that doesn't lower costs and increase margins.
With this last shrink we saw pretty much no gain (and in some cases losses) in cost efficiency, so with further shrinks they may have to wake the fuck up and start working on upping clock speeds, giving us a larger die with an entire butt of cores and cache, etc.
Re: (Score:2)
> entire butt of cores and cache
I checked my copy of Measure for Measure, but 'butt' doesn't appear anywhere as a unit.
BTU (energy) and buito (mass) are both close.
Bucket (volume) is semantically close, I suspect.
Re: (Score:1)
>I checked my copy of Measure for Measure, but 'butt' doesn't appear anywhere as a unit.
A butt is precisely 1/2 of a tun (sic).
(Yes, really. See http://en.wikipedia.org/wiki/Butt_(unit)#butt )
Re: (Score:2)
Thank you.
I guess it's not a sufficiently well regulated unit to make it into the engineering references.
Re: (Score:2)
Dies are tiny.
Fuck power and heat. I have a desktop.
This affects our entire industry (Score:2)
Because whatever you do in the computing world, you are affected by processing power and cost. Growth in these regions drives both new hardware and new software to go with it, and any hit to growth will mean loss of jobs.
Software (what most of us here create) usually gets created for one of two reasons:
1. Software is created because nobody is filling a need. Companies may build their own version if they want to compete, or a company may contract a customized version if they can see increased efficiency or
Re: (Score:2)
>Software (what most of us here create)
Really? A lot of us create hardware. We have an existential interest in the answer to TFA's question.
Re: (Score:2)
I figured the hardware effect was fairly obvious :D
I concentrated on the software side effects because more readers here work on that end.
Re: (Score:2)
Re:This affects our entire industry (Score:5, Insightful)
You do realize that we've been in that situation since the dawn of computers, don't you? Once we get close to filling needs, people come up with other needs. Once processor development more or less stalls out, people will still want better performance, but they won't get it by updating their systems any more. Software development is a pretty secure profession.
CMOS scaling limited by process variation (Score:5, Interesting)
There are a number of factors that affect the value of technology scaling. One major one is the increase in power density due to the end of supply and threshold voltage scaling. But one factor that some people miss is process variation (random dopant fluctuation, gate length and wire width variability, etc.).
Using some data from ITRS and some of my own extrapolations from historical data, I tried to work out when process variation alone would make further scaling ineffective. Basically, when you scale down, you get a speed and power advantage (per gate), but process variation makes circuit delay less predictable, so we have to add a guard band. At what point will the decrease in average delay become equal to the increase in guard band?
It turns out to be at exactly 5nm. The “disappointing” aspect of this (for me) is that 5nm was already believed to be the end of CMOS scaling before I did the calculation. :)
Incidentally, if you multiply out the guard bands already applied for process variation, supply voltage variation, aging, and temperature variation, we find that for an Ivy Bridge processor, about 70% of the energy going in is “wasted” on guard bands. In other words, if we could eliminate those safety margins, the processor would use 1/3.5 as much energy for the same performance or run 2.5 times faster in the same power envelope. Of course, we can’t eliminate all of them, but some factors, like temperature, change so slowly that you can shrink the safety margin by making it dynamic.
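As a sanity check on that 70% figure, here is a toy calculation with guard-band values made up for illustration (the poster's actual per-source margins are not given): each guard band forces extra supply voltage, dynamic energy goes roughly as V^2, and the individual margins compound multiplicatively:

    #include <stdio.h>

    int main(void) {
        /* Hypothetical voltage guard bands per variation source: */
        double process = 1.30, supply = 1.10, temperature = 1.15, aging = 1.10;

        double margin = process * supply * temperature * aging;
        double energy = margin * margin; /* dynamic energy scales ~ V^2 */

        printf("combined voltage margin: %.2fx\n", margin);
        printf("energy vs. margin-free:  %.2fx\n", energy);
        printf("fraction 'wasted':       %.0f%%\n",
               100.0 * (1.0 - 1.0 / energy));
        return 0;
    }

With these assumed margins the waste comes out near 70%, and the margin-free design uses roughly 1/3.3 the energy, the same ballpark as the Ivy Bridge estimate above.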
Re: (Score:2)
Very interesting post, thank you for writing it up.
I have a question. Are there guard bands in biological computation (e.g., our brains)? I was under the impression that our cognitive processes (software) are optimized for speed and designed to work with massively parallel but highly unreliable neural hardware.
What I am trying to say is that nature's optimization decided it is better to be very efficient all the time and correct only some of the time, but also to be very good at error che
Re: (Score:2)
What you’re talking about is “approximate computing,” which is a hot area in research right now. If you can tolerate some errors, then you can get a massive increase in performance.
Re: (Score:2, Interesting)
Are you taking into account deeply depleted MOS with controlled bias?
As I understand it, the Vt of the junction in present devices is controlled by the dopant level in the channel, and the source of Vt variation is from the shot noise in implantation. If you can increase the dopant concentration by 5x, you also decrease the variation due to shot noise by 5x. To compensate for the deeply depleted junction, you now need to add a body electrode to all the gates to bias them correctly, and a power supply tre
Re: (Score:2)
I don’t know the principles behind how doping concentrations are chosen, but I’m sure it’s optimized for speed. Also, you can compensate for Vth variation using body bias, but it’s basically impossible to do this per-transistor. You can do it for large blocks of transistors, which allows you to compensate a bit for systematic variation (due mostly to optical aberrations in lithography), but there’s nothing you can do about random variation. Also, there’s effective lengt
Re: (Score:2)
Guard bands are a rational engineering tradeoff, when confronted with the physical laws of random fluctuations on one hand and developing entirely new computational models on the other.
When a difference of one dopant atom creates a measurable change in device characteristics, you have to accept that it's past the point where just spending money can tighten up the tolerances. Sometimes it's just faster to overdesign the part than to re-invent mathematics, physics, and chemistry simultaneously.
EUV not going to happen (Score:2)
An interesting article here describes the horrendously difficult challenges that face EUV:
https://www.semiwiki.com/forum... [semiwiki.com]
Memristance (Score:2)
The problem is that memristance effects begin to manifest below 5nm.
Thus, start using memristors to build IMP-FALSE logic circuits.
Re: (Score:2)
IBM could build a chip that way if they wanted to. It just wouldn't be cost-effective: it would take decades of very delicate work to make a single processor that way.
Re:For a sense of scale (Score:5, Informative)
We're already at the point where 22nm components are more expensive per transistor than those at 28nm. [eetimes.com]
Previous shrinks lowered the cost of each transistor. It doesn't look like it's going to happen after 28nm.
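The arithmetic behind that is simple. The numbers below are invented for illustration (the EETimes piece has the real ones), but they show how cost per transistor can rise even as density improves, once wafer cost grows faster than density:

    #include <stdio.h>

    int main(void) {
        /* Assumed relative figures, for illustration only: */
        double wafer_cost_28 = 1.0, wafer_cost_22 = 2.1; /* $/wafer  */
        double density_28    = 1.0, density_22    = 1.9; /* transistors/wafer */

        printf("28nm relative cost/transistor: %.2f\n",
               wafer_cost_28 / density_28);
        printf("22nm relative cost/transistor: %.2f\n",
               wafer_cost_22 / density_22);
        return 0;
    }

With a 1.9x density gain but a 2.1x wafer-cost increase, the newer node comes out about 10% more expensive per transistor.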
Re: (Score:2)
There are other advantages to shrinking components. Higher clock rates become possible. The power consumption is also lessened, if you can offset the leakage issue somehow.
Re:For a sense of scale (Score:4, Informative)
Kind of. Heat dissipation starts being a bigger problem and thermally limits clock speed. Look at overclocking Sandy Bridge vs. Ivy Bridge chips.
Re:For a sense of scale (Score:5, Informative)
You'd think so, but the problem is global interconnect. Not gates. It was all the way back at the 250nm node when interconnect and gate delay were about the same.
At the 28nm node, wire delay is responsible for something like 80% of the time it takes for signals to work their way through a circuit.
And in some cases inverters are actually used to help signals propagate more quickly down long wires. In other words, long wires are so slow compared to gates that adding gates can speed things up!
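A sketch of why inserting gates helps, using the standard Elmore delay model (the wire and buffer numbers are rough per-unit values picked for illustration): an unrepeated distributed RC wire has delay growing with length squared, while splitting it into n buffered segments makes it roughly linear in length:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* Assumed wire and buffer parameters (order-of-magnitude only): */
        double r     = 2.0e6;   /* ohm/m  (2 kohm per mm)  */
        double c     = 2.0e-10; /* F/m    (200 fF per mm)  */
        double len   = 5.0e-3;  /* 5 mm global wire        */
        double t_buf = 2.0e-11; /* 20 ps per repeater, assumed */

        /* Distributed RC (Elmore): delay ~ 0.5 * r * c * len^2 */
        double unrepeated = 0.5 * r * c * len * len;

        /* n segments: n * (0.5*r*c*(len/n)^2) + n * t_buf */
        int n = (int)lround(len * sqrt(0.5 * r * c / t_buf));
        double repeated = 0.5 * r * c * len * len / n + n * t_buf;

        printf("unrepeated 5 mm wire: %6.0f ps\n", unrepeated * 1e12);
        printf("with %2d repeaters:    %6.0f ps\n", n, repeated * 1e12);
        return 0;
    }

In this toy model, sixteen repeaters cut a 5 ns wire down to roughly 0.6 ns, which is the sense in which adding gates speeds things up.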
Re: (Score:2)
So unless we come up with a novel technology to build with higher density, we are at the end of the road for that.
Maybe it's time to instead focus on other ways to improve performance. It may of course mean that the current architectural dogmas have to be abandoned.
Re: (Score:1)
>We're already at the point where 22nm components are more expensive per transistor than those at 28nm. [eetimes.com]
Only if you have a crappy GF fab.
If you invest in a decent finFET-capable process on sufficiently large wafers, your margins will improve at smaller geometries.
Comment removed (Score:5, Insightful)
Re: For a sense of scale (Score:2)
Because some people like their laptops as small and thin as possible, there's always demand for the next smallest-but-fastest thing.
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
You probably could. However, for a processor with 10^9 transistors and perhaps a dozen layers, it gets pretty time-consuming to build it by pushing atoms around one at a time.
Re: (Score:2)
Well, I think he is saying that's pretty much where we already are. When you are printing a 10nm wire onto a silicon chip, you are not very far from doing it atom by atom, as the wire is only about 50 atoms wide.
Re: (Score:3)
Perhaps, but at least with lithography you can do it across the entire wafer (or die) area in a single go. That's batch processing all the transistors at once, rather than serially processing them with AFM.
Re: (Score:2)
Betteridge's first exception:
Any headline whose question contains thinly veiled skepticism will instead be best answered with a "yes".
Re: (Score:2)
Certainly it is on its deathbed at least.
Re: (Score:2)
If somebody took care of that Moore guy, his laws wouldn't apply anymore.
Re: (Score:2)
14nm -> 7nm.
2:1 Looks good to me.
Down at 2nm I think we're going to be worrying about whether the gate has an odd or even number of atoms across its width.
Re: (Score:2)
The future is hardware; learn an HDL today.
You're correct here, but I'd like to mention that recent advancements in HLS (High-Level Synthesis) allow regular software programmers to write C code that is compiled directly into hardware logic. There are some new rules to learn, things don't always work as expected, and debugging is completely different from debugging software, but my point is that it's definitely possible to write major logic blocks in C without writing a line of VHDL code. So not everyone will necessarily need to learn an HDL to be a part o
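For a flavor of what that looks like, here is a minimal sketch. The pragma syntax follows Xilinx Vivado HLS conventions (other HLS tools use different directives), and the function itself is just a made-up example:

    /* Multiply-accumulate written as ordinary C; the HLS tool turns
       the unrolled loop into parallel hardware multipliers instead
       of a sequential instruction stream. */
    void mac8(const int a[8], const int b[8], int *out) {
        int acc = 0;
        for (int i = 0; i < 8; i++) {
    #pragma HLS UNROLL
            acc += a[i] * b[i];
        }
        *out = acc;
    }

The new rules the parent mentions show up quickly: no unbounded recursion or dynamic allocation, and loop bounds generally have to be known at compile time so the tool can size the hardware.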