How Much Smaller Can Chips Go?

nk497 writes "To see one of the 32nm transistors on an Intel chip, you would need to enlarge the processor to beyond the size of a house. Such extreme scales have led some to wonder how much smaller Intel can take things and how long Moore's law will hold out. While Intel has overcome issues such as leaky gates, it faces new challenges. For the 22nm process, Intel faces the problem of 'dark silicon,' where the chip doesn't have enough power available to take advantage of all those transistors. Using the power budget of a 45nm chip, if the processor remains the same size only a quarter of the silicon is exploitable at 22nm, and only a tenth is usable at 11nm. There's also the issue of manufacturing. Today's chips are printed using deep ultraviolet lithography, but it's almost reached the point where it's physically impossible to print lines any thinner. Diffraction means the lines become blurred and fuzzy as the manufacturing processes become smaller, potentially causing transistors to fail. By the time 16nm chips arrive, manufacturers will have to move to extreme ultraviolet lithography — which Intel has spent 13 years and hundreds of millions trying to develop, without success."
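A back-of-the-envelope sketch of the "dark silicon" arithmetic in the summary; the density and per-transistor power scaling factors below are illustrative assumptions chosen to reproduce the quoted quarter and tenth figures, not Intel process data:

```python
# Toy dark-silicon estimate: hold die size and power budget at 45 nm levels
# and ask what fraction of transistors can switch at once on smaller nodes.

def usable_fraction(density_scale, power_per_transistor_scale):
    """Fraction of the die whose switching power fits in the old budget."""
    full_die_power_scale = density_scale * power_per_transistor_scale
    return min(1.0, 1.0 / full_die_power_scale)

# 45 -> 22 nm: ~4x density, assume no per-transistor power saving left
print(usable_fraction(4, 1.0))      # 0.25
# 45 -> 11 nm: ~16x density, assume only modest per-transistor savings
print(usable_fraction(16, 0.625))   # 0.1
```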
  • by TheDarAve ( 513675 ) on Friday August 13, 2010 @01:09PM (#33242354)

    This is also why Intel has been investing so much into in-silicon optical interconnects. They can go 3D if they can separate the wafers far enough to put a heat pipe in between and still pass data.

  • by Sycraft-fu ( 314770 ) on Friday August 13, 2010 @01:44PM (#33242918)

    Since they are so parallel, they are made as a bunch of blocks. A modern GPU might have, say, 16 blocks, each with a certain number of shaders, ROPs, TMUs, and so on. When chips are ready, they get tested. If a unit fails, it can be burned off the chip or disabled in firmware, and the chip can be sold as a lesser card. So the top card has all 16 blocks, and the step down has 15 or 14 or something. That helps deal with cases where there's a defect but the thing otherwise works.
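    The harvesting logic above can be sketched with a simple binomial yield model; the 16-block layout and the 5% per-block defect probability are illustrative assumptions:

```python
from math import comb

BLOCKS = 16
P_BAD = 0.05   # assumed per-block defect probability

def p_exactly_good(k):
    """Probability that exactly k of the 16 blocks work."""
    return comb(BLOCKS, k) * (1 - P_BAD) ** k * P_BAD ** (BLOCKS - k)

full = p_exactly_good(16)                         # flagship bin
cut  = p_exactly_good(15) + p_exactly_good(14)    # harvested 15/14-block bins
print(f"full-die yield ~{full:.1%}, salvageable ~{cut:.1%}")
```

    Selling the salvaged bins roughly doubles the usable output in this toy model, which is the economic point of binning.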

  • by Rockoon ( 1252108 ) on Friday August 13, 2010 @01:44PM (#33242922)
    What are you talking about? AM2 boards support AM3 chips.

    You also present a false dichotomy, because upgrading isn't ONLY about buying suboptimal hardware and then upgrading it later. Anyone who purchased bleeding-edge AM2 gear when it was introduced can get a BIOS update and then socket an AM3 Phenom II chip. They still only have DDR2, but amazingly, Phenom IIs support both DDR2 on AM2 and DDR3 on AM3.

    So that guy who purchased a dual-core AM2 Phenom when they were cutting edge can now socket a hexa-core AM3 Phenom II.

    It's amazing what designing for the future gives your customers. Intel users have only rarely had the chance to substantially upgrade CPUs.
  • by Anonymous Coward on Friday August 13, 2010 @02:00PM (#33243236)

    With greater clock speed comes greater heat dissipation need (most heat is generated when transistors switch); they have basically hit this wall already, hence the multi-core direction everyone is taking (can't go faster, so let's just go the same speed, but in parallel).
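    A minimal sketch of that power argument, using the standard dynamic-power relation P ≈ C·V²·f; the effective capacitance and the 20% voltage bump assumed necessary for the higher clock are illustrative values:

```python
def dynamic_power(c_eff, volts, freq_hz):
    """Classic switching-power relation: P ~ C * V^2 * f."""
    return c_eff * volts ** 2 * freq_hz

base = dynamic_power(1e-9, 1.0, 2e9)        # one core at 2 GHz
fast = dynamic_power(1e-9, 1.2, 4e9)        # same core pushed to 4 GHz
dual = 2 * dynamic_power(1e-9, 1.0, 2e9)    # two cores at 2 GHz

print(f"2x clock: {fast / base:.2f}x power")   # 2.88x for 2x serial speed
print(f"2x cores: {dual / base:.2f}x power")   # 2.00x for 2x parallel work
```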

  • by imgod2u ( 812837 ) on Friday August 13, 2010 @02:05PM (#33243318) Homepage

    Actually, it's pretty common practice to put spare arrays and spare cells in the design that aren't connected in the metal layers. When a chip is found defective, the upper metal layers can be cut and fused to form new connections and use the spare cells/arrays instead of the ones that failed by use of a focused ion beam.

    But that still adds time and cost. Decreasing die area is pretty much always preferable. Also, larger dies means even more of the chip's metal interconnects have to be devoted to power distribution.

  • by imgod2u ( 812837 ) on Friday August 13, 2010 @02:08PM (#33243376) Homepage

    Because nowadays, the ISA has very little impact on resulting performance. The total die space devoted to translating x86 instructions on a modern Nehalem is tiny compared to the rest of the chip. The only time the ISA decode logic matters is for very low power chips (smartphones), which is part of the reason ARM is so far ahead of Intel's x86 offerings in that area.

    Modern x86, with SSE and x86-64, is actually not that bad of an ISA and there aren't too many ugly workarounds necessary anymore that justify a big push to change.

  • by Anonymous Coward on Friday August 13, 2010 @02:10PM (#33243406)

    We already have this. All current x86 chips have a decode unit that converts x86 instructions to micro-ops in the native RISC-like internal instruction set.
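    A toy illustration of that decode step; the instruction format and micro-op names are invented for illustration and bear no relation to Intel's actual internal encoding:

```python
def crack(insn):
    """Split a memory-operand op into load + register op; pass others through."""
    op, dst, src = insn
    if src.startswith("["):                      # e.g. add eax, [rbx+8]
        return [("load", "tmp0", src),           # fetch the memory operand
                (op, dst, "tmp0")]               # then do the ALU work
    return [insn]                                # register-register ops pass through

print(crack(("add", "eax", "[rbx+8]")))
print(crack(("mov", "eax", "ebx")))
```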

  • by sunbane ( 146740 ) on Friday August 13, 2010 @02:17PM (#33243508) Homepage

    Because X-rays are 0.01-10 nm light and EUV is 13.5 nm light... so it's nothing to do with the word; engineers just like to label things correctly.
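    Those wavelengths drive the resolution limit through the Rayleigh criterion (half-pitch ≈ k1·λ/NA); the NA and k1 values below are illustrative assumptions, not any particular tool's specs:

```python
def min_half_pitch_nm(k1, wavelength_nm, na):
    """Rayleigh criterion: smallest printable half-pitch."""
    return k1 * wavelength_nm / na

# 193 nm DUV, water-immersion optics (NA ~1.35), near the practical k1 floor:
print(f"{min_half_pitch_nm(0.25, 193.0, 1.35):.1f} nm")   # 35.7 nm
# 13.5 nm EUV with an early ~0.33 NA optic and a relaxed k1:
print(f"{min_half_pitch_nm(0.4, 13.5, 0.33):.1f} nm")     # 16.4 nm
```

    The shorter wavelength is what lets EUV print 16 nm-class features in a single exposure where DUV cannot.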

  • by QuantumLeaper ( 607189 ) on Friday August 13, 2010 @02:40PM (#33243928) Journal
    Moore's Law has nothing to do with computing power, but with the NUMBER of transistors on a piece of silicon, which he said would double every two years. That has been pretty much true and will most likely remain true for the next decade.
  • by quo_vadis ( 889902 ) on Friday August 13, 2010 @02:41PM (#33243938) Journal
    You are incorrect about the reason for the lack of 3D stacking. It's not that we can't stack them; there has been a lot of work on it. In fact, the reason flash chips are increasing in capacity is that they are stacked, usually 8 layers high. The problem quite simply is heat dissipation. A modern CPU has a TDP of 130W, most of which is removed from the top of the chip, through the casing, to the heatsink. Put a second die on top of it, and the bottom layer develops hotspots that cannot be handled. There are currently some approaches based on microfluidic channels interspersed between the stacked dies, but those have their own drawbacks.
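    A toy one-dimensional thermal model of why stacking hurts the buried die; the thermal resistances and ambient temperature are assumed values, not measurements:

```python
T_AMBIENT = 35.0   # deg C at the cooler (assumed)
R_SINK    = 0.3    # heatsink + interface thermal resistance, K/W (assumed)
R_DIE     = 0.2    # extra resistance of crossing one die layer, K/W (assumed)

def single_die_temp(power_w):
    """Junction temperature with heat flowing straight to the sink."""
    return T_AMBIENT + power_w * R_SINK

def buried_die_temp(power_each_w):
    # Both dies' heat crosses the sink; the buried die's heat must also
    # cross the die stacked above it.
    return T_AMBIENT + 2 * power_each_w * R_SINK + power_each_w * R_DIE

print(f"single 130 W die:     {single_die_temp(130):.0f} C")
print(f"buried die, 2 x 65 W: {buried_die_temp(65):.0f} C")
```

    Even at half the power per die, the buried layer runs hotter than the single die in this sketch, which is the hotspot problem described above.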
  • by quo_vadis ( 889902 ) on Friday August 13, 2010 @02:50PM (#33244068) Journal
    Um, actually Intel has done a lot of work on the architecture and microarchitecture of its processors. The CPUs Intel makes today are almost RISC-like internally, with a tiny translation engine that, thanks to the shrinking size of transistors, takes a trivial amount of die space. The cost of adding a translation unit is tiny compared to the penalty of not being compatible with the vast majority of the software out there.

    Itanium was their clean room redesign, and look what happened to it. Outside HPCs and very niche applications, no one was willing to rewrite all their apps, and more importantly, wait for the compiler to mature on an architecture that was heavily dependent on the compiler to extract instruction level parallelism.

    All said, the current instruction set innovation is happening with the SSE and VT instructions, where some really cool stuff is possible. There is something to be said for Intel's choice of a CISC architecture. In RISC ones, once you run out of opcodes, you are in pretty deep trouble. In CISC, you can keep adding them, making it possible to have binaries that run unmodified on older generation chips but take advantage of newer generation features when running on newer chips.
  • by BitZtream ( 692029 ) on Friday August 13, 2010 @03:39PM (#33244714)

    They developed an x64 chip (they have a license for anything AMD makes for x86 just as AMD has a license for anything they make) should things go that way.

    Actually, no, they don't.

    There are certain things they share licenses for, but that's mostly related to Intel wanting to be able to fulfill government contracts that require multiple vendor sources.

    It does not cover everything that is x86, which is why the two companies regularly sue each other over silly shit.

  • Re:3D Chips (Score:4, Informative)

    by erice ( 13380 ) on Friday August 13, 2010 @03:53PM (#33244924) Homepage

    Actually, 3D has picked up quite a bit in the last few years. However, the primary interest is connecting different chips together in the same package with short, fast interconnect. It's a lot better than conventional System in Package and much, much better than circuit-board connections. Unfortunately, the connections are a bit too coarse to spread a single design like an Intel processor across the layers.

    For that you need more sophisticated methods like growing a new wafer on top of one that has already been built up. These methods are not yet ready for production.

  • Re:The Atoms (Score:5, Informative)

    by hankwang ( 413283 ) * on Friday August 13, 2010 @03:59PM (#33245004) Homepage

    I deal with EUV lithography for a living. Not at Intel, but at ASML, the world's largest supplier of lithography machines and the only one that has actually manufactured working EUV lithography tools.

    Something that's been in development for even 5 years and doesn't show any concrete signs of success should at least have alternatives developed for it. After 5 years, if you still can't say for certain whether it's ever going to work, you definitely need to start looking in different directions.

    You are misinformed. On our Alpha development machines, working 22 nm devices were already manufactured last year (source). We are shipping the first commercial EUV lithography machines in the coming year (source, source). A problem for the chip manufacturers is that the capacity on the Alpha machines is rather low and needs to be shared among competitors.

    There is a temporary alternative called double patterning (and triple patterning, etcetera). The first problem is that you need twice (or three times) as many process steps for the small features, and proportionally more lithography machines, which are not exactly cheap. The second problem is that double patterning imposes tough restrictions on the chip design; basically, you can only make chips that consist mostly of repeating simple patterns. That is doable for memory chips, but much less so for CPUs. Moreover, if you want to continue Moore's law that way, the manufacturing cost will increase exponentially, so this is not a long-term viable alternative.
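    The cost escalation can be sketched as a simple pass-count model; the critical-layer count and normalized per-pass cost are assumptions for illustration:

```python
def litho_cost(splits, critical_layers=10, cost_per_pass=1.0):
    """Normalized litho cost: every split multiplies passes on critical layers."""
    return critical_layers * splits * cost_per_pass

for splits, name in [(1, "single"), (2, "double"), (3, "triple")]:
    print(f"{name} patterning: {litho_cost(splits):.0f} cost units")
```

    Each further split adds another full set of expensive litho/etch passes, which is why multi-patterning doesn't scale indefinitely.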

    You can bet that the semiconductor manufacturers have looked for alternatives. But those don't exist, at least not viable ones.

  • by pclminion ( 145572 ) on Friday August 13, 2010 @04:12PM (#33245184)
    A Peltier gets cold on one side and hot on the other. Where are you going to put the hot side, given that you're trying to put the thing in the middle of a block of silicon?
  • by Chris Burke ( 6130 ) on Friday August 13, 2010 @04:28PM (#33245394) Homepage

    Intel has been doing so much stuff behind the scenes to keep the x86 architecture going, that it may be time to just bite the bullet and move to something that doesn't require as much translation?

    Actually, the vast majority of what Intel and AMD have been doing behind the scenes are microarchitectural improvements that would be applicable to any out-of-order processor regardless of ISA.

    There are some minor penalties to x86 that remain, but getting rid of them would be a very modest performance upside and is completely not worth ditching backward compatibility for.

    Itanium comes to mind here because it offers a dizzying amount of registers, both FPU and CPU available to programs. To boot, it can emulate x86/amd64 instructions.

    You don't actually need that many architected registers, and modern out of order processors have a similar number of physical registers anyway. Sure IA32 had way too few GPRs, but most of the time the 32 registers of most RISC machines aren't used. x86-64 has a good compromise, and IA64 has overkill.

    If you're going to ditch x86 and start with something new -- and hypothetically I completely agree it would be great -- at least pick a real RISC architecture, and not something that actually has a bigger manual than x86. The only thing worse than an ISA designed by 30 years of engineering pragmatism is one designed by a committee of compiler writers. :P

    Emulation of x86 was something that was touted but never performed well enough to actually satisfy customers who wanted to run x86 workloads. This is surprising only to people who think x86 chips are inherently slow. :P

    Virtual machine technology is coming along rapidly. Why not combine a hardware hypervisor and other technology so we can transition to a CPU architecture that was designed in the past 10-20 years?

    Because virtual machines don't actually let you do that. They only virtualize a few aspects of the ISA to make compartmentalization possible. The guest OS and applications are compiled to the underlying ISA. Virtualization is all about efficiency, and emulating foreign instruction sets is inefficient.

  • by CAIMLAS ( 41445 ) on Friday August 13, 2010 @04:48PM (#33245662) Homepage

    The biggest performance bottleneck is still hard drives. So rather than focusing on faster CPUs, I'd love to see fast SSDs come down in price. I also can't wait until 16 gigs of RAM is standard.

    Agreed, except I'd like to disagree on your preference: I'd love to have slow SSDs come down in price and go up in capacity. It will be Good Enough, or at least significantly better.

    I mean, seriously: does the common desktop really need secondary storage that approaches the throughput of DDR memory? There are SATA 6Gb/s drives out there with >400MB/s rates, within an order of magnitude of DDR-400's 3.2GB/s theoretical peak. That's freaking INCREDIBLE.

    Even introducing slower ~200MB/s SSDs at a lower price and higher capacity than current 400MB/s models would be significantly appreciated.

    That said, SSDs are going down in price - enough that the demand has increased again, pushing memory prices up in the past week (meh, look at Newegg if you don't believe me. 2x2GB DDR3 Crucial was $42 last week; this week it jumped up significantly for the same part #.)

  • by Revotron ( 1115029 ) on Friday August 13, 2010 @05:42PM (#33246276)

    This review: "Core i7 980X, Core i5 650 and Core i3 530 review"

    These processors:

    Core i7-980X

    Core i5-650

    Core i3-530

    Notice the performance of the 980X over the other two. There's no more than a 3x performance increase in media encoding. Compare the price tag differences, ranging from a six-fold increase over the i5 to an eight-fold increase over the i3.

    The kind of premium Intel charges for the "Extreme Edition" brand is ridiculous. Based on those specs alone and knowing the price of the two lower models, I wouldn't expect to be charged anything more than $600 for the i7.
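    A rough perf-per-dollar reading of that complaint; the launch prices are approximate and the encode speedups are taken loosely from the comment's ratios, not re-benchmarked:

```python
# (name, rough encode speedup vs. the i3, approximate launch price in USD)
parts = [
    ("Core i3-530",  1.0,  113),
    ("Core i5-650",  1.1,  176),
    ("Core i7-980X", 3.0,  999),
]

for name, speedup, price in parts:
    # normalize to speedup points per $1000 spent
    print(f"{name}: {speedup / price * 1000:.1f} speedup points per $1000")
```

    By this crude metric the flagship delivers the least performance per dollar, which is the comment's point about the Extreme Edition premium.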

  • Re:The Atoms (Score:4, Informative)

    by hankwang ( 413283 ) * on Friday August 13, 2010 @06:18PM (#33246660) Homepage

    I wasn't aware of someone succeeding where Intel failed. I assumed Intel would simply have licensed the tech from anyone who had by now.

    IMEC is not the only ASML customer who has played with one of the two EUV Alpha tools, but it's the only one I could find with a quick Google search that has published the results. IMEC is a research institute. Other customers (actual chip manufacturers) have little to gain by disclosing to the competition exactly how much progress they have made.

    Then again, just last year means that the licensing talks could easily still be going on. I'm going to keep an eye on this from now on.

    Licensing is not the business model. The article suggests that Intel develops these machines ("fancy camera's") themselves, but in reality, they simply buy the machines from one of the three manufacturers (ASML, Nikon, and Canon). We spend an R&D budget of 500 M€ per year to develop these machines; Intel's R&D costs are likely mostly in the design of their chips and optimizing process parameters to squeeze as much as possible out of their fabs.

  • by Revotron ( 1115029 ) on Friday August 13, 2010 @06:20PM (#33246680)

    You asked me to provide evidence supporting my claim of 2x performance gains and 8x the price tag. I did exactly that. AMD and Intel may be in a tight race at the midrange ($140-$200), but the interoperability between AMD's three socket specs (AM2, AM2+, AM3) and the DDR2/DDR3 backward compatibility are what send AMD leaps and bounds ahead of Intel. From a holistic standpoint, AMD's offering is a lot more stable in the long term, and this is how they steamroll the competition.

    P.S. I got fed up with Intel when I found out I'd have to throw out my motherboard, CPU and RAM to move from a Core 2 Quad to *any* of the i3/i5/i7 offerings. My motherboard, CPU and RAM were no more than two years old, and yet somehow there was no financially sane upgrade path for ANY of the components. If I were to get an i3 or i5, I most likely couldn't upgrade to an i7 later without chucking the entire motherboard. This is what ticks me off about Intel's business model.

  • Re:The Atoms (Score:4, Informative)

    by hankwang ( 413283 ) * on Friday August 13, 2010 @06:36PM (#33246782) Homepage
    I forgot to add a disclaimer: the opinions expressed are mine and not necessarily my employer's, etcetera.
  • Re:The Atoms (Score:2, Informative)

    by wen1454 ( 1875096 ) on Saturday August 14, 2010 @04:09AM (#33249502) Homepage Journal

    The computational power of the human brain, which uses only 25 watts, is estimated to be between 10^13 and 10^23 instructions per second [1]. This means the human brain is 100 to 10^12 times more powerful than a high-end desktop. So computers still have a way to go before they could possibly approach any physical limits.

    1. Merkle, 1989: 10^13-10^16 IPS; Moravec, 1997: 10^14 IPS; Thagard, 2002: 10^23 IPS; Modha, 2009: 3.8*10^16 IPS.
