How Much Smaller Can Chips Go?
nk497 writes "To see one of the 32nm transistors on an Intel chip, you would need to enlarge the processor to beyond the size of a house. Such extreme scales have led some to wonder how much smaller Intel can take things and how long Moore's law will hold out. While Intel has overcome issues such as leaky gates, it faces new challenges. For the 22nm process, Intel faces the problem of 'dark silicon,' where the chip doesn't have enough power available to take advantage of all those transistors. Using the power budget of a 45nm chip, if the processor remains the same size only a quarter of the silicon is exploitable at 22nm, and only a tenth is usable at 11nm. There's also the issue of manufacturing. Today's chips are printed using deep ultraviolet lithography, but it's almost reached the point where it's physically impossible to print lines any thinner. Diffraction means the lines become blurred and fuzzy as the manufacturing processes become smaller, potentially causing transistors to fail. By the time 16nm chips arrive, manufacturers will have to move to extreme ultraviolet lithography — which Intel has spent 13 years and hundreds of millions trying to develop, without success."
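The "dark silicon" figures in the summary can be sanity-checked with a back-of-envelope calculation. This is only a rough sketch under loudly stated assumptions (transistor density doubles per node step, the power budget and per-transistor switching power stay flat); the real numbers depend on how voltage and frequency scale at each node.

```python
# Back-of-envelope "dark silicon" estimate (illustrative assumptions only):
# density roughly doubles per node step (45 -> 32 -> 22 -> 16 -> 11 nm),
# but a flat power budget can only drive a fixed number of transistors,
# so the fraction of the die you can light up shrinks at each step.

def usable_fraction(start_nm, target_nm, nodes):
    """Fraction of transistors a flat start-node power budget can drive,
    assuming density doubles per node step and power per transistor is flat."""
    steps = nodes.index(target_nm) - nodes.index(start_nm)
    return 1 / (2 ** steps)

nodes = [45, 32, 22, 16, 11]
print(usable_fraction(45, 22, nodes))  # 0.25 -- matches the summary's "a quarter"
print(usable_fraction(45, 11, nodes))  # 0.0625 -- same ballpark as "a tenth"
```

The crude model lands close to the summary's figures; the gap at 11nm (1/16 vs 1/10) is about what you'd expect from ignoring voltage scaling.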
Re:Don't make them smaller (Score:2, Insightful)
It's not about communication lag, it's about cost. Price goes up with die area.
Re:I miss the pressure AMD used to put on Intel (Score:5, Insightful)
The latest revision of my Phenom II X4 disagrees with you. The Phenom II series is absolutely steamrolling over every other Intel product in its price range.
Hint: Notice I said "in its price range." Because not everyone prefers spending $1300 on a CPU that's marginally better than one at $600. It seems like Intel has stepped away from the "chip speed" game and stepped right into "ludicrously expensive".
Re:Why do they need to? (Score:3, Insightful)
The problem is that x86 has become so entrenched in the market that even its creator can't kill it off.
You even cited a perfect example of their last (failed) attempt to do so (Itanic).
Re:This question (Score:5, Insightful)
why will it be any different this time?
Because sooner or later, it has to be. You reach a breaking point where the new technology is sufficiently different from the old that they don't represent the same device anymore. I think you'd have to be crazy to think that we're approaching the peak of our ability to solve computational problems, but I don't think it's unreasonable to think that we're approaching the limit of what we can do with this technology (transistors).
Re:Why do they need to? (Score:3, Insightful)
Moore's Law describes increases in computing power; it does not prescribe them.
Re:Why do they need to? (Score:2, Insightful)
Very true, but it eventually needs to be done. You can only go so far with a jet engine strapped onto a biplane; the underlying architecture needs to change sooner or later. As things improve, maybe we will get to a point where we have CPUs with enough horsepower to run emulated amd64 or x86 instructions at a decent speed. The benefits of doing this would be many. First, in assembly language, we would save a lot of instructions, because programs would have enough registers to keep their working data close at hand rather than constantly shuttling it to and from RAM to complete a calculation. Having fewer accesses to and from RAM speeds up tasks immensely because register access is so much faster. Take a calculation that adds up a bunch of numbers: the numbers can be loaded into separate registers, added, and the result dropped back into RAM. With x86, it would take a lot of loads and stores to do the same thing.
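The register-pressure argument above can be made concrete with a toy model: replay a sequence of variable uses through an LRU "register file" and count how many RAM loads result. This is purely illustrative (real register allocators are far smarter than LRU, and the register counts are only x86-vs-x86-64-flavoured, not exact), but it shows why a few extra registers can eliminate almost all memory traffic for a hot loop.

```python
# Tiny model of why more registers mean less RAM traffic: each variable use
# either hits the "register file" (free) or misses (one load from RAM).
# Illustrative only -- real compilers allocate registers much more cleverly.
from collections import OrderedDict

def memory_ops(uses, num_registers):
    """Count RAM loads for a sequence of variable uses with LRU eviction."""
    regs = OrderedDict()              # LRU register file
    loads = 0
    for var in uses:
        if var in regs:
            regs.move_to_end(var)     # value already in a register: free
        else:
            loads += 1                # must load var from RAM
            if len(regs) == num_registers:
                regs.popitem(last=False)  # evict least recently used
            regs[var] = True
    return loads

# A loop body that cycles through 12 live values, 100 times over:
uses = list("abcdefghijkl") * 100
print(memory_ops(uses, 16))  # 12   -- everything stays resident (x86-64-ish)
print(memory_ops(uses, 8))   # 1200 -- every access misses; constant reloading
```

With 16 registers the 12 values are loaded once each; with 8, the reuse distance exceeds the register count and LRU thrashes, reloading on every single access.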
Re:Why do they need to? (Score:5, Insightful)
x86 and amd64 have an installed base. Itanium doesn't. This doesn't mean x86 is any better than Itanium, any more than Britney Spears is better than $YOUR_FAVORITE_BAND just because Britney has sold far more albums.
Intel has done an astounding job of keeping the x86 architecture going. However, there is only so much lipstick you can put on a 40-year-old pig.
Re:Maybe we will start seeing more cores? (Score:5, Insightful)
What we are able to do with the smaller chips is what's changed. Raising the clock speed worked for years and was the best option, but physical limits have kept the latest generations from going much higher. So the next best thing is to add cores. Now the article is suggesting we may not even be able to do that anymore.
I will tell you I've been reading articles like this for as long as I've known what a computer was, so if you're a betting man, you would do well to bet against this type of article every time you read it. But in theory it has to end somewhere, unless we learn how to build with subatomic particles, which presumably is outside the reach of the research budget at Intel.
Re:Maybe we will start seeing more cores? (Score:4, Insightful)
Well done, you've just described... today!
And today, we already know the problem with this approach: most everyday problems aren't easily parallelizable. Yes, there are specific areas where the problems are sometimes embarrassingly parallel (some scientific/number crunching applications, graphics rendering, etc), but generally speaking, your average software problem is unfortunately very serial. As such, those multiple cores don't provide much benefit for any single task. So if you want to execute one of these problems faster, the only thing you can do is ramp up the clock rate.
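The parent's point about serial tasks is exactly what Amdahl's law quantifies: if only a fraction p of a program can run in parallel, n cores give a speedup of 1 / ((1 - p) + p/n). A quick sketch with illustrative numbers:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of
# the work that can run in parallel and n is the number of cores.

def amdahl_speedup(p, n):
    return 1 / ((1 - p) + p / n)

# Even code that is 90% parallel gets well under 16x from 16 cores:
print(amdahl_speedup(0.90, 16))  # 6.4x
# A mostly serial task barely benefits at all:
print(amdahl_speedup(0.25, 16))  # ~1.3x
```

The serial fraction dominates quickly, which is why piling on cores does so little for the average single task.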
Re:This is what reversible computing is for, right (Score:3, Insightful)
People have been proposing circuits for regenerative switching (mainly for clocking) for a long, long time. The problem has always been that if you add an inductance to your circuit to store and feed back the energy, you significantly decrease how fast you can switch.
Also, you think transistors are difficult to build in small sizes? Try building tiny inductors.
Re:CPU caches also work like that (Score:3, Insightful)
Am I the only one who finds it pretty awesome that we're actually using focused ion beams in the manufacture of everyday items?
Re:The Atoms (Score:3, Insightful)
There's a difference here: those reports were about being practically impossible, not theoretically impossible. In going below the atomic scale, you're hitting the theoretically impossible (given current understanding) along with the practically impossible. We've had the theory for atomic-scale transistors for quite a while; it's the practical side that really needs to catch up.
Re:Planck's Law (Score:3, Insightful)
And being certain about something that comes from the uncertainty principle makes me feel confused...
Re:Clock speed is a no-go (Score:3, Insightful)
(can't go faster, so let's just go the same speed, but in parallel).
Actually they do go faster. Clock speed doesn't mean processing speed. Modern CPUs do much more per clock cycle than their predecessors because of their greater instruction-level parallelism, shorter instruction latencies, larger caches, etc. While their cores don't generally operate at a higher frequency, they perform many times faster.
That's not even considering the additional cores and massively improved power efficiency. It's difficult to overstate just how fucking amazingly good CPUs are now.
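The point that clock speed isn't processing speed can be made concrete: per-core throughput is roughly clock rate times instructions per cycle (IPC). The figures below are made up for illustration, not measurements of any specific CPU:

```python
# Per-core throughput in instructions/second = clock (Hz) * IPC.
# Illustrative, made-up figures: a narrow early-2000s core vs a modern
# wide superscalar core at nearly the same clock.

def throughput(clock_ghz, ipc):
    return clock_ghz * 1e9 * ipc

old = throughput(3.0, 0.8)  # narrow pipeline, frequent stalls
new = throughput(3.2, 3.0)  # wide issue, better caches and branch prediction
print(new / old)  # 4.0 -- several times faster per core, same clock
```

Same story as the comment above: the clock barely moved, but the work done per cycle did.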
Re:Why do they need to? (Score:4, Insightful)
Itanium failed because it could not run x86 code at an acceptable speed. That meant that if you wanted to switch over to Itanium, you had to start from scratch, rebuying every piece of software you depended on or getting new versions for Itanium.
AMD's 64-bit CPUs, on the other hand, were excellent at running older x86 code while also giving you the ability to code natively in 64-bit for the future. AMD's approach took the market by storm, and Intel had to relent and produce a 64-bit x86 CPU.
(There were other reasons why Itanium failed - such as relying too much on compilers to produce optimal code, cost of the units due to being limited quantity, and Intel arrogance.)
Re:Maybe we will start seeing more cores? (Score:3, Insightful)
Trust me, what you're seeing is *not* what you think you're seeing. Windows isn't magically auto-parallelizing your code. That's a hot topic of research today, and it's really fucking hard.
Re:"Extreme Ultraviolet" (Score:3, Insightful)
because "X-rays" is such an UGLY word....
There's actually some truth to this. Originally it was called soft x-ray projection lithography. The other type of x-ray lithography was a near contact shadow technique using shorter (near 1nm) x-rays. To distinguish the two techniques they changed the name from soft x-ray to EUV.
This was also done for marketing reasons. X-ray lithography had failed (after sinking a lot of $$ into it), while optical lithography had successfully moved from visible to UV to DUV. By calling it EUV, it sounds like the next logical step, instead of being associated with the failure that was x-ray lithography.
(Actually, x-ray lithography didn't really truly fail. It does work, but optical surpassed it before it was ready, so it became pointless)
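The diffraction limit the summary alludes to is usually written as the Rayleigh criterion, CD = k1 * wavelength / NA. A quick sketch with typical textbook values (the k1 and NA figures here are common ballpark numbers, not any specific tool's specs):

```python
# Rayleigh resolution criterion for lithography: CD = k1 * wavelength / NA.
# Ballpark values: k1 ~ 0.25 near the practical limit, NA ~ 1.35 for
# immersion DUV scanners, ~ 0.33 for early EUV tools.

def min_feature_nm(k1, wavelength_nm, na):
    return k1 * wavelength_nm / na

print(min_feature_nm(0.25, 193.0, 1.35))  # ~35.7 nm for 193nm immersion DUV
print(min_feature_nm(0.25, 13.5, 0.33))   # ~10.2 nm for 13.5nm EUV
```

That's the whole motivation for the wavelength jump: at 193nm you're scraping the floor in the mid-30nm range, while 13.5nm "soft x-ray" light buys another order of magnitude of headroom.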
Re:Better software (Score:4, Insightful)
They did. They are damn fast on modern processors, too. However, people simply look at me funny for using all GTK v1.2 applications... GIMP, aumix, emelfm, Ayttm, Sylpheed1, XFce3, etc.
So, why AREN'T YOU using better software, which "doesn't require 24 cores and 64GB of RAM"?
Re:Don't make them smaller (Score:3, Insightful)
... I really think that money would be better spent helping all of you coders out there in creating a language/compiler/programming paradigm that can use 12 threads efficiently for something beyond rendering GTA.
The entirety of programming as we know it is stuck in a single-threaded paradigm, and making the shift to massively parallel computing requires a huge shift in thinking.
This is so hard because our techniques, languages, and compilers all have their roots in a world that barely even multi-tasked, let alone considered doing anything in parallel for performance.
Every coder who ever learnt to code, whether for kicks or for money, learnt this way, and they still do.
We've come all this way without ever having to think in parallel. I stopped developing in 2003, having never had to really consider parallelism.
Even in 2010, kids still start learning programming linearly, and you go a long way before having to consider a second thread.
I think calling it a whole new paradigm is not doing the change required justice. It's about re-learning and re-thinking everything.
Frankly, every day I think it's a fucking miracle that software as a whole performs as well as it does, that our civilization's infrastructure can rely on this technology, and that Moore's law hasn't stopped its inexorable march yet.
It all works as the result of brute force: millions of smart people problem-solving line by line, getting it to compile, run, and work without crashing too often. Software development now sees teams of hundreds of developers; open source projects can have thousands. One could be forgiven for thinking programming itself hasn't improved terrifically. Advances in software still largely come from throwing human resources at problems.
Clearly then, the deficiencies are in software, not hardware.
I won't shed a tear when Intel can no longer make progress with its enormous investment in producing silicon-based chips and may have to consider graphene et al. But it's far from the end of the story. Silicon is only one element on the periodic table, after all.
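The single-threaded-to-parallel shift described above can be shown in miniature: the same reduction written as a sequential loop and as a split/map/reduce decomposition that could be spread across cores. This is a sketch only; in CPython, CPU-bound work would need a ProcessPoolExecutor to get around the GIL, and threads are used here just to keep the example lightweight.

```python
# The serial-to-parallel shift in miniature: one reduction, two mindsets.
# (Sketch: for real CPU-bound speedups in CPython, swap ThreadPoolExecutor
# for ProcessPoolExecutor to bypass the GIL.)
from concurrent.futures import ThreadPoolExecutor

def serial_sum(xs):
    total = 0
    for x in xs:              # one thread, one item at a time
        total += x
    return total

def parallel_sum(xs, workers=4):
    chunk = max(1, len(xs) // workers)
    chunks = [xs[i:i + chunk] for i in range(0, len(xs), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(sum, chunks)  # map: independent partial sums
    return sum(partials)                  # reduce: combine the partials

data = list(range(1000))
print(serial_sum(data) == parallel_sum(data))  # True -- same answer either way
```

The hard part, as the comment says, isn't the mechanics — it's that the decomposition into independent chunks has to be designed in, and most code was never written that way.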