How Much Smaller Can Chips Go?
nk497 writes "To see one of the 32nm transistors on an Intel chip, you would need to enlarge the processor to beyond the size of a house. Such extreme scales have led some to wonder how much smaller Intel can take things and how long Moore's law will hold out. While Intel has overcome issues such as leaky gates, it faces new challenges. For the 22nm process, Intel faces the problem of 'dark silicon,' where the chip doesn't have enough power available to take advantage of all those transistors. Using the power budget of a 45nm chip, if the processor remains the same size, only a quarter of the silicon is exploitable at 22nm, and only a tenth is usable at 11nm. There's also the issue of manufacturing. Today's chips are printed using deep ultraviolet lithography, but it's almost reached the point where it's physically impossible to print lines any thinner. Diffraction means the lines become blurred and fuzzy as the manufacturing processes become smaller, potentially causing transistors to fail. By the time 16nm chips arrive, manufacturers will have to move to extreme ultraviolet lithography — which Intel has spent 13 years and hundreds of millions trying to develop, without success."
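For a rough sense of where those dark-silicon fractions come from, here's a back-of-the-envelope sketch; the fixed-power-budget model and the alpha knob are simplifying assumptions for illustration, not the article's actual methodology:

```python
# Back-of-the-envelope dark-silicon estimate; a simplification, not Intel's model.
# Assumption: the total power budget stays at the 45 nm level, while per-transistor
# power no longer shrinks in proportion to area (the end of classic Dennard scaling).

POWER_BUDGET_NODE = 45.0  # nm, the reference node whose power budget we keep

def usable_fraction(node_nm, alpha=0.0):
    """Fraction of a same-sized die that can be powered up at a smaller node.

    Transistor count grows as (45/node)^2.  If per-transistor power only improves
    by (node/45)^alpha (alpha=0: no improvement; alpha=2 would restore full
    Dennard scaling), the fraction of the die you can light up is the ratio of
    the fixed budget to the total power all those transistors would draw.
    """
    scale = POWER_BUDGET_NODE / node_nm
    transistor_growth = scale ** 2
    per_transistor_power = (1.0 / scale) ** alpha
    return 1.0 / (transistor_growth * per_transistor_power)

for node in (32, 22, 11):
    print(f"{node} nm: ~{usable_fraction(node):.0%} of the die can be lit up")

# With alpha=0 this gives roughly 24% at 22 nm and 6% at 11 nm; the article's
# "one tenth at 11 nm" implies some residual voltage scaling (alpha > 0).
```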
Don't make them smaller (Score:5, Funny)
Re: (Score:2)
Distant parts of the chip then have a communication lag, but yes, this will really help. Certainly much less lag than communicating with something outside the die.
Re: (Score:2, Insightful)
It's not about communication lag, it's about cost. Price goes up with die area.
Plan the dark areas around the defects (Score:3, Interesting)
Larger dies generally cost more because it's more likely that they'll have a defect. I haven't done any chip design since college (and even then it was really entry-level stuff), but if you could break the chip down into 10 different subcomponents that need to be spaced out, you could put 100 of those components on the chip, and then after manufacture you could select the blocks that perform best and are defect-free, spacing your choices accordingly.
I'm pretty sure chip makers likely already
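They do, and the parent's redundancy idea can be sketched with a standard Poisson yield model. The defect density and block area below are made-up illustrative numbers, not real process figures:

```python
import math
import random

# Poisson yield model: probability that a block of a given area is defect-free.
DEFECT_DENSITY = 2.0   # defects per cm^2 (hypothetical, chosen to make the effect visible)
BLOCK_AREA = 0.1       # cm^2 per subcomponent block (hypothetical)

def block_yield(area_cm2, d0=DEFECT_DENSITY):
    return math.exp(-d0 * area_cm2)

def die_works(n_copies, n_needed, area=BLOCK_AREA):
    """Simulate one die: n_copies of a block are fabbed, any n_needed must test good."""
    good = sum(random.random() < block_yield(area) for _ in range(n_copies))
    return good >= n_needed

# Compare a die that needs all 10 blocks perfect vs. one that fabs 12 copies
# and only needs any 10 of them -- i.e. planning the spare areas around defects.
trials = 100_000
no_spares = sum(die_works(10, 10) for _ in range(trials)) / trials
two_spares = sum(die_works(12, 10) for _ in range(trials)) / trials
print(f"die yield, no spares:      {no_spares:.1%}")   # ~13% with these numbers
print(f"die yield, 2 spare blocks: {two_spares:.1%}")  # ~62% with these numbers
```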
GPUs work kind of like this (Score:4, Informative)
Since they are so parallel they are made as a bunch of blocks. A modern GPU might be, say, 16 blocks each with a certain number of shaders, ROPs, TMUs, and so on. When they are ready, they get tested. If a unit fails, it can be burned off the chip or disabled in firmware, and the unit can be sold as a lesser card. So the top card has all 16 blocks, the step down has 15 or 14 or something. Helps deal with cases where there's a defect, but overall the thing works.
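A toy version of that binning flow, for the curious; the SKU names and block thresholds are invented for illustration, not any vendor's real scheme:

```python
# Hypothetical binning: a die has 16 shader blocks; after test, bad blocks are
# fused off and the die ships as whichever product tier its good-block count allows.
SKUS = [
    (16, "flagship card"),
    (14, "mid-range card"),
    (12, "budget card"),
]

def bin_die(block_test_results):
    """block_test_results: list of 16 booleans, True = block passed test."""
    good = sum(block_test_results)
    for min_blocks, name in SKUS:
        if good >= min_blocks:
            return name, good
    return "scrap", good

# Example: a die with two defective blocks still ships as the mid-range part.
results = [True] * 16
results[3] = results[9] = False
print(bin_die(results))   # ('mid-range card', 14)
```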
CPU caches also work like that (Score:3, Informative)
Actually, it's pretty common practice to put spare arrays and spare cells in the design that aren't connected in the metal layers. When a chip is found defective, the upper metal layers can be cut and fused to form new connections and use the spare cells/arrays instead of the ones that failed by use of a focused ion beam.
But that still adds time and cost. Decreasing die area is pretty much always preferable. Also, larger dies means even more of the chip's metal interconnects have to be devoted to power dist
Re: (Score:3, Insightful)
Am I the only one who finds it pretty awesome that we're actually using focused ion beams in the manufacture of everyday items?
Re:Don't make them smaller (Score:5, Interesting)
Distant parts of the chip then have a communication lag, but yes, this will really help. Certainly much less lag than communicating with something outside the die.
Wouldn't that suggest that three-dimensional chips would be the logical next step? Although heat dissipation would become more difficult, not to mention the fact that the production process would be an order of magnitude more complicated.
Re:Don't make them smaller (Score:5, Informative)
This is also why Intel has been investing so much into in-silicon optical interconnects. They can go 3D if they can separate the wafers far enough to put a heat pipe in between and still pass data.
Re: (Score:3, Insightful)
Re: (Score:2)
I'd like to see more work with peltiers, but IIRC, they take a lot of energy to do their job of moving heat to one side, something that CPUs are already tight on.
Re: (Score:3, Informative)
Re: (Score:3, Funny)
A peltier gets cold on one side and hot on the other. Where are you going to put the hot side, since you're trying to put the thing in the middle of a block of silicon?
Easy -- just put two peltiers together, hot sides facing each other. Problem solved! ;-)
Re:Don't make them smaller (Score:5, Interesting)
Making 3D chips is the holy grail of semiconductor processing but is still beyond reach. They've not been able to lay down a single-crystal second layer to make your stacked chip. They have tried using amorphous silicon, but the devices are not nearly as good, so there is no point.
We are already seeing the outcrop of all of this, as next year's machines are not necessarily 2x the performance at the same cost. I really think that money would be better spent helping all of you coders out there in creating a language/compiler programming paradigm that can use 12 threads efficiently for something beyond rendering GTA. I certainly don't have the answer, and given that that problem has not been solved yet, neither does anybody else at this time.
It's a very, very hard problem. It is going to be interesting here in the next few years. If nothing changes, you're going to have to become accustomed to the fact that next year's PC is going to cost you MORE, not less, and that's really going to suck.
Re: (Score:2)
money would be better spent helping all of you coders out there in creating a language/compiler programming paradigm that can use 12 threads efficiently for something beyond rendering GTA.
From what I've heard, the number of cores you throw at GTA doesn't matter, it still runs like crap. ;)
Re: (Score:2)
We are already seeing the outcrop of all of this, as next year's machines are not necessarily 2x the performance at the same cost.
I don't know how long you've been buying computers, but it has never been the case of "2x performance every year". The best it ever was was every 18 years or so, processing power doubled, and that was bumped back to about every 2 years back in the late 80's/early 90's. But even that has never meant 2x all around performance. You might be able to crunch numbers 2x as fast after two years (never one), but there have always been bottlenecks - like RAM and hard drive speed - which have kept it down to around
Re: (Score:2)
Damnit, missed it in preview - 18 months not 18 years.
Re:Don't make them smaller (Score:5, Informative)
Re: (Score:3, Interesting)
No, you are incorrect. You are talking about stacked gates. That is significantly different from what I am talking about, which is making entire stacked devices where you have a second level of additional devices, including sources and drains as well as gates. Work has been tried with amorphous silicon with mixed results, none of which amount to much.
You are correct in that the power density issue trumps all other concerns.
And in the end economic issues will trump everything.
Re: (Score:3, Insightful)
... I really think that money would be better spent helping all of you coders out there in creating a language/compiler programming paradigm that can use 12 threads efficiently for something beyond rendering GTA.
The entirety of programming as we know it is stuck in a single-threaded paradigm, and making the shift to massively parallel computing requires a huge shift in thinking.
This is so hard because our techniques, languages and compilers all have their roots in a world that barely even multi-tasked, let alone considered doing anything in parallel for performance.
Every coder that ever learnt to code, whether for kicks or for money, learnt this way, and they still do.
We've come all this way without ever having to th
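To be fair, the primitives for the easy cases do exist today; here's a minimal Python sketch of fanning independent work out over 12 worker processes (the work function and chunk sizes are placeholders). The parent's point stands: the hard part is that most real programs don't decompose this cleanly.

```python
from concurrent.futures import ProcessPoolExecutor

def work(n):
    """Placeholder for a CPU-bound task on one independent chunk of the problem."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    chunks = [2_000_000] * 24            # 24 independent chunks of work
    # Farm the chunks out over 12 worker processes (say, one per hardware thread).
    with ProcessPoolExecutor(max_workers=12) as pool:
        results = list(pool.map(work, chunks))
    print(sum(results))
```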
Re: (Score:3, Informative)
The biggest performance bottleneck is still hard drives. So rather than focusing on faster CPUs, I'd love to see fast SSDs come down in price. I also can't wait until 16 gigs of RAM is standard.
Agreed, except I'd like to disagree on your preference: I'd love to have slow SSDs come down in price and go up in capacity. It will be Good Enough, or at least significantly better.
I mean, seriously: does the common desktop really need secondary storage which has higher throughput than the majority of DDR memory? There are SATA 6Gb/s disks out there with >400MB/s rates, whereas DDR 400 only maxed out at 400MB/s. That's freaking INCREDIBLE.
Even introducing slower 200MB/s SSDs at a lower price than curren
Re: (Score:2)
Yes it does. And then after that it's the robotic arm, the explosions everywhere and the "come with me if you want to live".
The Atoms (Score:5, Interesting)
They're going to hit atomic scale transistors fairly soon from what I can see as well; the manufacturing process for those is probably prohibitively expensive, but that is as small as they can go (according to our current knowledge of the universe, at least).
I can't imagine Intel has all of its eggs in one basket on Extreme Ultraviolet Lithography though. Something that's been in development for even 5 years and doesn't show any concrete signs of success should at least have alternatives developed for it. After 5 years, if you still can't say for certain if it's ever going to work, you definitely need to start looking in different directions.
Re:The Atoms (Score:5, Funny)
Something that's been in development for even 5 years and doesn't show any concrete signs of success should at least have alternatives developed for it.
You haven't followed much of the history of Itanium's development have you?
Re: (Score:2)
No, I really haven't. I tend not to pay much attention to things that are released more than 2 years after their original announced release date.
Though, I have to point out I didn't advocate terminating a project after 5 years of zero results (a la Itanium), just looking in additional directions and not keeping all the eggs in the questionable basket.
Re: (Score:2)
You seem to miss the point. You imagine that Intel doesn't put all of its eggs in one basket. The development of Itanium disproves that notion, as they had no other real alternatives being developed at the same time.
Re: (Score:2)
Yeah, it's really hurt them. They've been wildly unprofitable since then
Re: (Score:2)
Because I made either the claim that it hurt Intel or it made them unprofitable? Oh wait...
Re: (Score:2)
You haven't followed much of the history of Itanium's development have you?
I saw the movie though, Leo dies at the end.
I'd say you haven't (Score:5, Interesting)
For one, Itanium is still going strong in high end servers. It is a tiny market, but Itanium sells well (no I don't know why).
However in terms of the desktop, you might notice something: When AMD came out with an x64 chip and everyone, most importantly Microsoft, decided they liked it and started developing for it, Intel had one out in a hurry. This doesn't just happen. You don't design a chip in a couple months, it takes a long, long time. What this means is Intel had been hedging their bets. They developed an x64 chip (they have a license for anything AMD makes for x86 just as AMD has a license for anything they make) should things go that way. They did and Intel ran with it.
Ran with it well, I might add, since now the top performing x64 chips are all Intel.
They aren't a stupid company, and if you think they are I'd question your judgment.
Re: (Score:2)
They're going to hit atomic scale transistors fairly soon from what I can see as well
Yeah, there was an article here in the spring on atomic computing, where I did a little math on it. I was surprised, but it worked out that in roughly a decade Moore's Law would get down to atomic transistors if reducing the part size was the method employed.
I had always presumed before that it would never run out, but it's going to have to zig sideways if that's going to be true.
Google recently bought that company working
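The arithmetic is easy to redo; here's a sketch assuming the usual ~0.7x linear shrink every two years and a ~0.25 nm silicon atom (both round figures). Whether the wall is "roughly a decade" away or more depends entirely on how many atoms you think a working transistor needs:

```python
import math

ATOM_NM = 0.25              # rough diameter of a silicon atom
START_NM = 32.0             # the node discussed in the article
SHRINK = 1 / math.sqrt(2)   # classic ~0.7x linear shrink per generation
YEARS_PER_GEN = 2.0

def years_until(limit_nm, start=START_NM):
    """Years of steady shrinking until the feature size reaches limit_nm."""
    generations = math.log(limit_nm / start) / math.log(SHRINK)
    return generations * YEARS_PER_GEN

# A working transistor needs more than one atom; try a few illustrative cut-offs.
for atoms in (40, 10, 1):
    limit = atoms * ATOM_NM
    print(f"limit of {atoms:>2} atoms (~{limit:.2f} nm): "
          f"~{years_until(limit):.0f} years from 32 nm")
# Roughly: ~7 years to a 10 nm-ish limit, ~15 years to a few-atom device,
# ~28 years to literally one atom per feature.
```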
Re: (Score:2)
I remember reading a long time ago that 90nm or 65nm would be impossible due to physics and science
Re: (Score:3, Insightful)
There's a difference here... those reports were about being practically impossible, not theoretically impossible. In going below the atomic scale you're hitting the theoretically impossible (given current understandings) point along with the practically impossible. We've had the theory for atomic-size transistors for quite a while; it's the practical that really needs to catch up.
Re:The Atoms (Score:5, Informative)
I deal with EUV lithography for a living. Not at Intel, but at ASML [asml.com], the world's largest supplier of lithography machines and the only one that has actually manufactured working EUV lithography tools.
Something that's been in development for even 5 years and doesn't show any concrete signs of success should at least have alternatives developed for it. After 5 years, if you still can't say for certain if it's ever going to work, you definitely need to start looking in different directions.
You are misinformed. On our Alpha development machines, working 22 nm devices were already manufactured last year (source [www2.imec.be]). We are shipping the first commercial EUV lithography machines in the coming year (source [asml.com], source [chipdesignmag.com]). A problem for the chip manufacturers is that the capacity on the alpha machines is rather low and needs to be shared among competitors.
There is a temporary alternative; it is called double patterning [wikipedia.org] (and triple patterning, etcetera). The first problem is that you need twice (thrice) as many process steps for the small features, and also proportionally more lithography machines that are not exactly cheap. The second problem is that double patterning imposes tough restrictions on the chip design; basically you can only make chips that consist mostly of repeating simple patterns. That is doable for memory chips, but much less so for CPUs. Moreover, if you want to continue Moore's law that way, the manufacturing cost will increase exponentially, so this is not a long-term viable alternative.
You can bet that the semiconductor manufacturers have looked for alternatives. But those don't exist, at least not viable ones.
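A crude sketch of why the multi-patterning path gets expensive. The 80 nm single-exposure pitch for 193 nm immersion and the "cost scales with exposure count" assumption are rough illustrative figures, not ASML numbers:

```python
import math

# Illustrative: assume 193 nm immersion litho resolves roughly an 80 nm pitch
# (~40 nm half-pitch) in a single exposure.  Tighter pitches have to be split
# across multiple exposures (double, triple, quadruple patterning...).
SINGLE_EXPOSURE_PITCH_NM = 80

def exposures_needed(target_pitch_nm):
    return math.ceil(SINGLE_EXPOSURE_PITCH_NM / target_pitch_nm)

# Crude assumption: critical-layer litho cost scales with the exposure count.
for half_pitch in (40, 32, 22, 16, 11):
    pitch = 2 * half_pitch
    n = exposures_needed(pitch)
    print(f"{half_pitch:>2} nm half-pitch: {n} exposure(s) per critical layer "
          f"-> ~{n}x the litho cost, plus the design restrictions to match")
```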
Re:The Atoms (Score:4, Informative)
Re:The Atoms (Score:4, Informative)
IMEC is not the only ASML customer who has played with one of the two EUV Alpha tools, but it's the only one I could find with a quick Google search that has published the results. IMEC is a research institute. Other customers (actual chip manufacturers) have little to gain by disclosing to the competition exactly how much progress they have made.
Licensing is not the business model. The article suggests that Intel develops these machines ("fancy cameras") themselves, but in reality, they simply buy the machines from one of the three manufacturers (ASML, Nikon, and Canon). We spend an R&D budget of 500 M€ per year to develop these machines; Intel's R&D costs are likely mostly in the design of their chips and optimizing process parameters to squeeze as much as possible out of their fabs.
Why do they need to? (Score:5, Funny)
Why does Intel need to push the envelope that hard and that fast just to create a product that will, in the end, have extremely low yield and extremely high cost?
Just so they can adhere to some ancient "law" proposed by one of their founders? It's time to let go of Moore's Law. It's outdated and doesn't scale well... just like the x86 architecture! *ba-dum, chhh*
Re:Why do they need to? (Score:5, Interesting)
At the extreme, maybe it's time for a new CPU architecture? Intel has been doing so much stuff behind the scenes to keep the x86 architecture going, that it may be time to just bite the bullet and move to something that doesn't require as much translation?
Itanium comes to mind here because it offers a dizzying number of registers, both FPU and CPU, available to programs. To boot, it can emulate x86/amd64 instructions.
Virtual machine technology is coming along rapidly. Why not combine a hardware hypervisor and other technology so we can transition to a CPU architecture that was designed in the past 10-20 years?
Re: (Score:3, Insightful)
The problem is that x86 has become so entrenched in the market that even its creator can't kill it off.
You even cited a perfect example of their last (failed) attempt to do so (Itanic).
Re: (Score:2, Insightful)
Very true, but it eventually needs to be done. You can only get so big with a jet engine that is strapped onto a biplane. The underlying architecture needs to change sooner or later. As things improve, maybe we will get to a point where we have CPUs with enough horsepower to be able to run emulated amd64 or x86 instructions at a decent speed. The benefits will be many by doing this. First, in assembly language, we will save a lot of instructions because programs will have enough registers to do acti
Re: (Score:3, Informative)
We already have this. All current x86's have a decode unit to convert the x86 instructions to micro-ops in the native RISC instruction set.
Re: (Score:2)
Itanium failed because it used a VLIW architecture - great for specialized processing tasks on big machines but for general purpose computing (ie. what 99.9% of people do) it wasn't much faster than x86.
Are computers really 'too slow' now? It seems to me that an x64 desktop at 3GHz is fast enough for just about anything a normal person would do. The only "normal task" I can think of that's too slow at the moment is decoding x264 video on netbooks and they're better off with a little hardware decoder tacked
Re:Why do they need to? (Score:4, Insightful)
Itanium failed because it could not run x86 code at an acceptable speed. Which meant that if you wanted to switch over to Itanium, you had to start from scratch - rebuying every piece of software that you depended on, or getting new versions for Itanium.
AMD's 64bit CPUs, on the other hand, were excellent at running older x86 code while also giving you the ability to code natively in 64bit for the future. AMD's method took the market by storm and Intel had to relent and produce a 64bit x86 CPU.
(There were other reasons why Itanium failed - such as relying too much on compilers to produce optimal code, cost of the units due to being limited quantity, and Intel arrogance.)
Re: (Score:2)
Itanium comes to mind here because it offers a dizzying number of registers, both FPU and CPU, available to programs.
And it's been such a smashing success in comparison to x86, right?
Re:Why do they need to? (Score:5, Insightful)
x86 and amd64 have an installed base. Itanium doesn't. This doesn't mean x86 is any better than Itanium, in the same way that Britney Spears is better than $YOUR_FAVORITE_BAND because Britney has sold far more albums.
Intel has done an astounding job at keeping the x86 architecture going. However, there is only so much lipstick you can put on a 40 year old pig.
Re:Why do they need to? (Score:5, Informative)
Itanium was their clean room redesign, and look what happened to it. Outside HPCs and very niche applications, no one was willing to rewrite all their apps, and more importantly, wait for the compiler to mature on an architecture that was heavily dependent on the compiler to extract instruction level parallelism.
All said, the current instruction set innovation is happening with the SSE and VT instructions, where some really cool stuff is possible. There is something to be said for the choice of CISC architecture by Intel. In RISC ones, once you run out of opcodes, you are in pretty deep trouble. In CISC, you can keep adding them, making it possible to have binaries that can run unmodified on older generation chips, but able to take advantage of newer generation features when running on newer chips.
Re: (Score:2, Troll)
However, there is only so much lipstick you can put on a 40 year old pig.
Hey, you insensitive clod, that's my wife!
Sarah Palin is your wife!?
Re: (Score:2)
You are right in that a new architecture could offer improved performance; however, it is a one-shot deal. Once you've rolled out the new architecture there will be a short period while everything catches up, and then you are right back to cramming more on the die.
Re: (Score:2)
Re: (Score:2)
And if we did, are we talking about 2x speed returns very roughly, or even up to 20x?
Would it really help though? (Score:2)
It seems to be almost an article of faith with geeks that if only we didn't have that nasty x86 we could have so much better chips. However, the thing is, there ARE non-x86 chips out there. Intel and AMD may love it, others don't. You can find other architectures. So then, where's the amazing chip that kicks the crap out of Intel's chips? I mean something that is faster, uses the same or less power and costs less to produce (it can be sold for more, but the fab costs have to be less). Where is the amazing ch
Re:Why do they need to? (Score:4, Informative)
Because nowadays, the ISA has really very little impact on resulting performance. The total die space devoted to translating x86 instructions on a modern Nehalem is tiny compared to the rest of the chip. The only time the ISA decode logic matters is for very low power chips (smartphones). This is part of the reason why ARM is so far ahead of Intel's x86 offerings in that area.
Modern x86, with SSE and x86-64, is actually not that bad of an ISA and there aren't too many ugly workarounds necessary anymore that justify a big push to change.
Re: (Score:3, Informative)
Intel has been doing so much stuff behind the scenes to keep the x86 architecture going, that it may be time to just bite the bullet and move to something that doesn't require as much translation?
Actually, the vast majority of what Intel and AMD have been doing behind the scenes are microarchitectural improvements that would be applicable to any out-of-order processor regardless of ISA.
There are some minor penalties to x86 that remain, but getting rid of them would be a very modest performance upside and is
Re: (Score:2)
Re: (Score:3, Insightful)
Moore's Law describes increases in computing power; it does not prescribe them.
Re: (Score:3, Informative)
I miss the pressure AMD used to put on Intel (Score:2)
I miss the pressure AMD used to put on Intel. When Intel had an agile competitor often leaping ahead of it, chip speeds shot up like a rocket - seems like they've been resting on their laurels lately...
Re: (Score:2)
Re:I miss the pressure AMD used to put on Intel (Score:5, Insightful)
The latest revision of my Phenom II X4 disagrees with you. The Phenom II series is absolutely steamrolling over every other Intel product in its price range.
Hint: Notice I said "in its price range." Because not everyone prefers spending $1300 on a CPU that's marginally better than one at $600. It seems like Intel has stepped away from the "chip speed" game and stepped right into "ludicrously expensive".
Re: (Score:3, Interesting)
The only Intel chips that are $1000+ are those that are either a few months old and/or are of the "Extreme" series. The core i7-860s and 930s are under 300 bucks and pretty much the entire core i5 line is at 200 or less.
Re: (Score:2)
The problem is that the Intel motherboards are more expensive, and they lock you into your chip "class". You can't upgrade to an i7 from an i5 in some cases.
Re: (Score:2)
The price difference is negligible between AMD and Intel boards, unless you are in the race to the bottom, where AMD rules. You also can't upgrade from an AM2 to AM3 CPU on an AM2 board. The talk about upgrading is meaningless in a broader sense too: Why would you buy something not optimal just so that you can upgrade it later? It's false economy; get the best you can afford now, and a whole new rig with whole new tech a few years later.
Re:I miss the pressure AMD used to put on Intel (Score:5, Informative)
You also present a false dichotomy, because upgrading isn't ONLY about buying suboptimal hardware and then upgrading it later. Anyone who purchased bleeding edge AM2 gear when it was introduced can get a BIOS update and then socket an AM3 Phenom II chip. They still only have DDR2, but amazingly Phenom IIs support both DDR2 on AM2 and DDR3 on AM3.
So that guy who purchased a dual-core AM2 Phenom when they were cutting edge can now socket a hexa-core AM3 Phenom II.
It's amazing what designing for the future gives your customers. Intel users have only rarely had the chance to substantially upgrade CPUs.
Re: (Score:2)
You probably mean AM2+ boards. All AM2 boards definitely don't support AM3 CPUs, feel free to check the manufacturer sites.
For the false dichotomy part, you build up another in your case, too. In the last few years (AM2 and AM3 age), the quad cores haven't been too expensive compared to the dual cores. Your example user has made the wrong choice when buying the dual core in the first place; the combined price of the dual and the hexa core CPUs would have given him/her a nice time in multithreaded apps for t
Re: (Score:2)
Sorry, like I later mentioned, it's board-specific. Care to give the board model?
Re: (Score:2)
So you haven't really done any research there? Intel's i5 750 and 760 "steamroll" all the Phenom II X4 CPUs in the price range. Don't trust me, trust benchmarks.
Re: (Score:2)
So you haven't really done any research there? Intel's i5 750 and 760 "steamroll" all the Phenom II X4 CPUs in the price range. Don't trust me, trust benchmarks.
Phenom II X6 chips with Turbo Core in the same price range would like to have a word with you about your cherry-picking old X4 chips.
Re: (Score:2)
In what uses? X6 CPUs don't really deliver compared to i5, except in uses where you can really blast out all the cores, like vid encoding with certain programs.
And the OP especially was telling how his/her *Phenom II X4* beats everything Intel has to offer in its price range, which is blatantly false. LTR.
Re: (Score:2)
Wrong again. The PII x4 955 is in the ~$150 price range, the i5 750 is in the $200 price range. The 750 is a bit faster, but it is 25% more expensive for less than 25% more performance.
The latest generation of AMD chips also has adaptive clock speeds to improve performance on monolithic tasks, soon that will be available on the x4 chips as well as the x6.
I am not going to argue that AMD chips are absolutely better, but in terms of price/performance they have very little competition from Intel. The i5 750 a
Re: (Score:3, Informative)
This review:
http://it-review.net/article/hardware/cpu/Intel_Core_i7_980X,_Core_i5_650_and_Core_i3_530_review&3 [it-review.net]
These processors:
Core i7-980X
http://www.newegg.com/Product/Product.aspx?Item=N82E16819115223 [newegg.com]
Core i5-650
http://www.newegg.com/Product/Product.aspx?Item=N82E16819115220 [newegg.com]
Core i3-530
http://www.newegg.com/Product/Product.aspx?Item=N82E16819115222 [newegg.com]
Notice the performance of the 980X over the other two. There's no more than a 3x performance increase in media encoding. Compare the price tag differences,
Re: (Score:3, Informative)
You asked me to provide evidence supporting my claim of 2x performance gains and 8x the pricetag. I did exactly that. AMD and Intel may be in a tight race at the midrange ($140-$200), but the interoperability between AMD's three socket specs (AM2, AM2+, AM3) and the DDR2/DDR3 backwards compatibility are what send AMD leaps and bounds ahead of Intel. From a holistic standpoint AMD's offering is a lot more stable in the long-term, and this is how they steamroll over the competition.
P.S. I got fed up with Inte
Re: (Score:2)
So, how do you define "price range"? Is that the exact price?
This question (Score:2, Interesting)
Re:This question (Score:5, Insightful)
why will it be any different this time?
Because sooner or later, it has to be. You reach a breaking point where the new technology is sufficiently different from the old that they don't represent the same device anymore. I think you'd have to be crazy to think that we're approaching the peak of our ability to solve computational problems, but I don't think it's unreasonable to think that we're approaching the limit of what we can do with this technology (transistors).
Re: (Score:2)
Eventually there's a theoretical limit, a limit that can't be exceeded without violating the laws of physics, specifically quantum mechanics. Once your transistors get close enough together, the probability of an electron tunneling from one side to the other gets high enough that it isn't possible to distinguish between your on and off states. We are rapidly approaching that limit even if all the manufacturing issues can be overcome (I believe it's somewhere around 5nm, but I could be wrong).
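For a feel for the numbers, here's the textbook rectangular-barrier estimate of tunneling probability; the 1 eV barrier height is an illustrative value, not a real device parameter:

```python
import math

# Rectangular-barrier tunneling estimate, T ~ exp(-2 * kappa * L).
HBAR = 1.054571e-34   # J*s
M_E  = 9.109383e-31   # kg, electron mass
EV   = 1.602177e-19   # J per eV

def tunneling_probability(barrier_nm, barrier_eV=1.0):
    """Probability an electron tunnels through a barrier of the given width/height."""
    kappa = math.sqrt(2 * M_E * barrier_eV * EV) / HBAR   # decay constant, 1/m
    return math.exp(-2 * kappa * barrier_nm * 1e-9)

for width in (5.0, 3.0, 2.0, 1.0, 0.5):
    print(f"{width:>3} nm barrier: T ~ {tunneling_probability(width):.1e}")
# Leakage rises by orders of magnitude for every nanometre shaved off the
# barrier, which is why a few nm is commonly quoted as the practical wall.
```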
Planck's Law (Score:5, Funny)
Re: (Score:2)
*For classical computation
Re: (Score:3, Insightful)
And being certain about something that comes from uncertainty principle makes me feel confused...
This is what reversible computing is for, right? (Score:3, Interesting)
Re: (Score:3, Insightful)
People have been proposing circuits for regenerative switching (mainly for clocking) for a long long time. The problem always being that if you add an inductance to your circuit to store and feed back the energy, you will significantly decrease how fast you can switch.
Also, you think transistors are difficult to build in small sizes? Try building tiny inductors.
3D chips to keep the scale going (Score:2)
Current technology is based on a single planar layer of silicon substrate. A chip is built with a metal interconnect on top. But the base layers are essentially a 2D structure. We are already postprocessing things with through vias to stack substrates into a single package. This increases density from the package perspective.
Advances in stacking technology will keep Moore's law going for another decade (as long as you consider Moore's law to be referencing density in 2D).
"Extreme Ultraviolet" (Score:2)
because "X-rays" is such an UGLY word....
Re:"Extreme Ultraviolet" (Score:5, Informative)
Because X-rays are 0.01 - 10 nm light and EUV is 13.5nm light... so it has nothing to do with the word; engineers just like to label things correctly.
Re: (Score:2)
EUV is 13.5nm, X-rays are generally thought of as 10nm and smaller. http://hyperphysics.phy-astr.gsu.edu/hbase/ems3.html [gsu.edu]
It is close, and this region is sometimes referred to as "soft" X-rays, but there is nothing incorrect about the "UV" moniker. It also helps to distinguish EUV from actual X-ray lithography, a largely abandoned approach which used wavelengths on the order of 1nm. http://en.wikipedia.org/wiki/X-ray_lithography [wikipedia.org]
Re: (Score:3, Insightful)
because "X-rays" is such an UGLY word....
There's actually some truth to this. Originally it was called soft x-ray projection lithography. The other type of x-ray lithography was a near contact shadow technique using shorter (near 1nm) x-rays. To distinguish the two techniques they changed the name from soft x-ray to EUV.
This was also done for marketing reasons. X-ray lithography had failed (after sinking a lot of $$ into it), while optical lithography had successfully moved from visible to UV, to DUV. By calling it EUV it sounds like the next
Obviously, one transistor per atom (Score:2)
That's how small they can go. Beyond that, increasing the functional density of our CPUs will get really challenging.
Better software (Score:5, Insightful)
Re:Better software (Score:4, Insightful)
They did. They are damn fast on modern processors, too. However, people simply look at me funny for using all GTK v1.2 applications... GIMP, aumix, emelfm, Ayttm, Sylpheed1, XFce3, etc.
So, why AREN'T YOU using better software, which "doesn't require 24 cores and 64GB of RAM"?
It's hard writing software to keep up with the HW (Score:4, Funny)
Folks don't often realize how much work we software writers go through to write this big, complex, core-eating software. Back in the day with 8-bit 500 KHz CPUs we could write a simple 1000-iteration loop with a bit of code in it, and it might lag the CPU for a whole second. Now with these fast processors we have to go through all kinds of hoops to use up all those cycles! Building languages on top of languages, interpreted languages, all kinds of extra error checking (error checking can often take 80%-90% of the cycles and code), objects on top of arrays on top of pointers on top of objects ... you get the idea. SOMEBODY has to make the software to use up all those cycles.
It's a dirty job, but somebody has to do it!!!
WE CAN NOT LET THE HARDWARE PEOPLE WIN!!! For every added processor, every bump in Hz, we WILL come up with a way to burn it! Soon we will embark on the new 3D ray-traced desktop - THAT will keep the HW folks busy for a while!!! And (don't tell anybody) soon we will establish the need for full time up-to-date indexing of everything on the LAN. Of course, that could be done by one machine, but if we all do it independently on each machine, that will burn another whole 2GHz CPU's worth of cycles.
Our goal and our motto: "A computer is nothing but a very complicated and expensive heater." :D
Who cares about lithography? (Score:2)
The diameter of a silicon atom is roughly 0.25 nm. That means that 32nm is about 120 atoms across. A 16nm line is about 60 atoms across.
For reliable use, there is going to be an approximate minimum number of atoms in a line. Electron interactions among individual atoms are quantum events, so for any sort of predictability you're going to need enough atoms for the probabilities to average out enough. I don't know how many that is, but it pretty much has to be more than one.
I have a great deal o
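The same arithmetic, plus the crude 1/sqrt(N) averaging argument the parent is gesturing at (a statistical rule of thumb, not a device simulation; the 0.25 nm atom size is a round figure):

```python
import math

ATOM_NM = 0.25   # rough silicon atom diameter

# Atoms across a line at each node, and the ~1/sqrt(N) relative fluctuation
# you'd expect if individual-atom randomness has to average out.
for node_nm in (32, 22, 16, 11, 5):
    atoms = node_nm / ATOM_NM
    print(f"{node_nm:>2} nm line: ~{atoms:.0f} atoms wide, "
          f"~{1 / math.sqrt(atoms):.0%} relative fluctuation")
```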
Re: (Score:3, Interesting)
3D Chips (Score:2)
Why hasn't Intel rolled out 3D chips stacked in layers, with microfluidics cooling between layers? I used to see all kinds of engineering PR about it, but it's been years since I saw any progress, and it's taken way longer than I expected.
3D would not only increase the number of transistors (and other devices) that fit into a "chip", but put the circuits closer together, requiring less voltage/power and shorter propagation times. What's holding it up?
Re:3D Chips (Score:4, Informative)
Actually, 3D has picked up quite a bit in the last few years. However, the primary interest is connecting different chips together in the same package with short, fast interconnect. It's a lot better than conventional System In Package and much much better than circuit board connections. Unfortunately, the connections are a bit too coarse to spread a single design like an Intel processor across the layers.
For that you need more sophisticated methods like growing a new wafer on top of one that has already been built up. These methods are not yet ready for production.
Re:Maybe we will start seeing more cores? (Score:5, Funny)
You have an uncanny ability to predict the present!
Re:Maybe we will start seeing more cores? (Score:5, Insightful)
What we are able to do with the smaller chips is what's changed. Raising the clock speed worked for years, and that was the best option, but because of physical problems, in the latest generations we weren't able to do that. So the next best thing is to add cores. Now the article is suggesting we may not even be able to do that anymore.
I will tell you I've been reading articles like this for as long as I've known what a computer was, so if you're a betting man, you would do well to bet against this type of article every time you read it. But in theory it has to end somewhere, unless we learn how to make subatomic particles, which presumably is outside the reach of the research budget at Intel.
Re: (Score:3, Interesting)
Another problem is that adding cores is not as effective, right now, as upping clock speed.
This may change, however, if the designs change from multiple universal cores to something more like the Cell CPU that powers the PlayStation 3, or maybe something like the latest GPUs. Basically, a couple of universal cores like before (as they provide some benefit, if the OS does a proper job of spreading processes across them) combined with multiple simpler cores that can be arranged like an assembly line. Then yo
Re:Maybe we will start seeing more cores? (Score:4, Insightful)
Well done, you've just described... today!
And today, we already know the problem with this approach: most everyday problems aren't easily parallelizable. Yes, there are specific areas where the problems are sometimes embarrassingly parallel (some scientific/number crunching applications, graphics rendering, etc), but generally speaking, your average software problem is unfortunately very serial. As such, those multiple cores don't provide much benefit for any single task. So if you want to execute one of these problems faster, the only thing you can do is ramp up the clock rate.
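The usual way to put a number on that is Amdahl's law: with a fraction p of the work parallelizable, n cores give at most 1/((1-p) + p/n) speedup. A quick sketch with illustrative fractions:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of the work parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Illustrative workloads, from mostly-serial desktop tasks to nearly embarrassingly parallel.
for p in (0.5, 0.9, 0.99):
    line = ", ".join(f"{n} cores: {amdahl_speedup(p, n):.1f}x" for n in (2, 4, 12, 64))
    print(f"parallel fraction {p:.0%}: {line}")
# Even with 90% of the work parallel, 64 cores buy only ~9x; the serial
# remainder dominates, which is exactly why clock rate still matters.
```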
Re: (Score:3, Insightful)
Trust me, what you're seeing is *not* what you think you're seeing. Windows isn't magically auto-parallelizing your code. That's a hot topic of research today, and it's really fucking hard.
Re: (Score:3, Insightful)
(can't go faster, so let's just go the same speed, but in parallel).
Actually they do go faster. Clock speed doesn't mean processing speed. Modern CPUs do much more per clock cycle than their predecessors because of their greater instruction-level parallelism, shorter instruction latencies, larger caches, etc. While their cores don't generally operate at a higher frequency, they perform many times faster.
That's not even considering the additional cores and massively improved power efficiency. It's difficult to overstate just how fucking amazingly good CPUs are now.
Re: (Score:3, Insightful)