
How Much Smaller Can Chips Go?

nk497 writes "To see one of the 32nm transistors on an Intel chip, you would need to enlarge the processor to beyond the size of a house. Such extreme scales have led some to wonder how much smaller Intel can take things and how long Moore's law will hold out. While Intel has overcome issues such as leaky gates, it faces new challenges. For the 22nm process, Intel faces the problem of 'dark silicon,' where the chip doesn't have enough power available to take advantage of all those transistors. Using the power budget of a 45nm chip, if the processor remains the same size, only a quarter of the silicon is exploitable at 22nm, and only a tenth is usable at 11nm. There's also the issue of manufacturing. Today's chips are printed using deep ultraviolet lithography, but that technique has almost reached the point where it is physically impossible to print lines any thinner. Diffraction means the lines become blurred and fuzzy as the manufacturing processes shrink, potentially causing transistors to fail. By the time 16nm chips arrive, manufacturers will have to move to extreme ultraviolet lithography, which Intel has spent 13 years and hundreds of millions of dollars trying to develop, without success."
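The dark silicon claim is essentially power-budget arithmetic, and a small sketch makes it concrete. The Python snippet below is a back-of-the-envelope illustration only: the density and per-transistor power scaling factors are assumptions chosen to roughly reproduce the article's "quarter at 22nm, tenth at 11nm" trend, not Intel process data.

```python
# Rough sketch of the dark silicon arithmetic. DENSITY_SCALING and
# POWER_SCALING are illustrative assumptions, not real process numbers.
DENSITY_SCALING = 2.0   # assumed transistor-density gain per node shrink
POWER_SCALING = 1.1     # assumed per-transistor power reduction per node

def usable_fraction(nodes_past_45nm: int) -> float:
    """Fraction of the die that can switch at once under a fixed 45nm power budget."""
    density = DENSITY_SCALING ** nodes_past_45nm
    per_transistor_power = 1.0 / (POWER_SCALING ** nodes_past_45nm)
    full_power = density * per_transistor_power  # power if every transistor is active
    return min(1.0, 1.0 / full_power)

for node, label in enumerate(["45nm", "32nm", "22nm", "16nm", "11nm"]):
    print(f"{label}: ~{usable_fraction(node):.0%} of the silicon usable at once")
```

With those assumed factors, the usable fraction falls to roughly 30% two nodes past 45nm and under 10% four nodes past, the same shape as the decline the summary describes.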
This discussion has been archived. No new comments can be posted.


  • The Atoms (Score:5, Interesting)

    by Ironhandx ( 1762146 ) on Friday August 13, 2010 @12:51PM (#33242080)

    They're going to hit atomic-scale transistors fairly soon from what I can see as well. The manufacturing process for those is probably prohibitively expensive, but that is as small as they can go (according to our current knowledge of the universe, at least).

    I can't imagine Intel has all of its eggs in one basket on Extreme Ultraviolet Lithography, though. Something that's been in development for even 5 years and doesn't show any concrete signs of success should at least have alternatives developed for it. After 5 years, if you still can't say for certain whether it's ever going to work, you definitely need to start looking in different directions.

  • by mlts ( 1038732 ) * on Friday August 13, 2010 @12:55PM (#33242150)

    I have a feeling that once doing smaller and smaller lines becomes prohibitive, we will see a return to either revving up the clock speed (if possible), or adding more cores per die. Maybe even adding more discrete CPUs, so a future motherboard may have multiple CPUs on it similar to how mid to upper range PCs ended up with multiple procs present around 2000-2001.

    There are always more ways to keep going with Moore's law if one approach gets close to exhausted.

  • This question (Score:2, Interesting)

    by bigspring ( 1791856 ) on Friday August 13, 2010 @12:58PM (#33242184)
    I think there has been a major article asking this question every six months for the last decade. Then, surprise surprise, there's a new tech development that improves the technology. We've been "almost at the physical limit" for transistor size since the birth of the computer; why will it be any different this time?
  • by mlts ( 1038732 ) * on Friday August 13, 2010 @01:00PM (#33242212)

    At the extreme, maybe it's time for a new CPU architecture? Intel has been doing so much work behind the scenes to keep the x86 architecture going that it may be time to just bite the bullet and move to something that doesn't require as much translation.

    Itanium comes to mind here because it offers a dizzying number of registers, both FPU and general-purpose, available to programs. To boot, it can emulate x86/amd64 instructions.

    Virtual machine technology is coming along rapidly. Why not combine a hardware hypervisor and other technology so we can transition to a CPU architecture that was designed in the past 10-20 years?

  • by ibwolf ( 126465 ) on Friday August 13, 2010 @01:03PM (#33242268)

    Distant parts of the chip then have a communication lag, but yes, this will really help. Certainly much less lag than communicating with something outside the die.

    Wouldn't that suggest that three-dimensional chips would be the logical next step? Heat dissipation would become more difficult, though, not to mention that the production process would be an order of magnitude more complicated.

  • by Lunix Nutcase ( 1092239 ) on Friday August 13, 2010 @01:09PM (#33242356)

    The only Intel chips that are $1000+ are those that are only a few months old and/or of the "Extreme" series. The Core i7-860 and 930 are under 300 bucks, and pretty much the entire Core i5 line is at 200 or less.

  • by grahamsz ( 150076 ) on Friday August 13, 2010 @01:14PM (#33242446) Homepage Journal

    Larger dies generally cost more because it's more likely that they'll have a defect. I haven't done any chip design since college (and even then it was really entry-level stuff), but if you could break the chip down into 10 different subcomponents that need to be spaced out, you could put 100 of those components on the chip and then, after manufacture, select the blocks that perform best and are defect-free, spacing your choices accordingly (a rough sketch of the yield math follows below).

    I'm pretty sure chip makers likely already do something like this.
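A toy yield model makes the redundancy argument above concrete. Everything in it is a made-up assumption for illustration (the per-block defect probability, 10 block types, 10 copies of each); it simply contrasts a die that must be entirely defect-free with one that only needs one working copy of each block type.

```python
# Toy yield model for block-level redundancy; probabilities and counts are
# invented for illustration, not real defect-density data.
BLOCK_DEFECT_PROB = 0.10   # assumed chance that any single block is defective
N_TYPES = 10               # the design needs one working copy of each block type
COPIES_PER_TYPE = 10       # redundant copies of each type laid down on the die

def yield_no_redundancy() -> float:
    """Chip works only if all N_TYPES blocks are defect-free."""
    return (1 - BLOCK_DEFECT_PROB) ** N_TYPES

def yield_with_redundancy() -> float:
    """Chip works if at least one copy of every block type is defect-free."""
    prob_type_ok = 1 - BLOCK_DEFECT_PROB ** COPIES_PER_TYPE
    return prob_type_ok ** N_TYPES

print(f"without redundancy: {yield_no_redundancy():.1%}")    # ~34.9%
print(f"with redundancy:    {yield_with_redundancy():.1%}")  # ~100%
```

The catch, as the parent comment notes, is that the redundant die is much larger, so the extra area has to pay for itself through improved yield and binning flexibility.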

  • by TimFreeman ( 466789 ) <tim@fungible.com> on Friday August 13, 2010 @01:30PM (#33242680) Homepage
    The article mentions "dark transistors", which are transistors on the chip that can't be powered because you can't get enough power onto the chip. This is the problem that reversible computing [theregister.co.uk] [wikipedia.org] was supposed to solve.
  • I'd say you haven't (Score:5, Interesting)

    by Sycraft-fu ( 314770 ) on Friday August 13, 2010 @01:50PM (#33243028)

    For one, Itanium is still going strong in high-end servers. It is a tiny market, but Itanium sells well (no, I don't know why).

    However in terms of the desktop, you might notice something: When AMD came out with an x64 chip and everyone, most importantly Microsoft, decided they liked it and started developing for it, Intel had one out in a hurry. This doesn't just happen. You don't design a chip in a couple months, it takes a long, long time. What this means is Intel had been hedging their bets. They developed an x64 chip (they have a license for anything AMD makes for x86 just as AMD has a license for anything they make) should things go that way. They did and Intel ran with it.

    Ran with it well, I might add, since now the top performing x64 chips are all Intel.

    They aren't a stupid company, and if you think they are I'd question your judgment.

  • by rimcrazy ( 146022 ) on Friday August 13, 2010 @01:53PM (#33243092)

    Making 3D chips is the holy grail of semiconductor processing, but it is still beyond reach. They've not been able to lay down a single-crystal second layer to make your stacked chip. They have tried using amorphous silicon, but the devices are not nearly as good, so there is no point.

    We are already seeing the upshot of all of this, as next year's machines are not necessarily 2x the performance at the same cost. I really think that money would be better spent helping all of you coders out there in creating a language/compiler programming paradigm that can use 12 threads efficiently for something beyond rendering GTA (see the small sketch below for the easy case). I certainly don't have the answer, and given that that problem has not been solved yet, neither does anybody else at this time.

    It's a very, very hard problem. It is going to be interesting here in the next few years. If nothing changes, you're going to have to become accustomed to the fact that next year's PC is going to cost you MORE, not less, and that's really going to suck.
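For what it's worth, the easy end of that problem is already solved; it's everything else that is hard. Here is a minimal sketch of the easy case, assuming an embarrassingly parallel workload and a hypothetical render_tile work function:

```python
# Easy case: independent work items spread across 12 workers with no shared
# state. The hard problem the comment describes is code that isn't shaped
# like this (dependencies between steps, shared mutable data, branchy logic).
from concurrent.futures import ProcessPoolExecutor

def render_tile(tile_id: int) -> int:
    # stand-in for one independent chunk of work (e.g. one tile of a frame)
    return sum(i * i for i in range(100_000 + tile_id))

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=12) as pool:
        results = list(pool.map(render_tile, range(120)))
    print(len(results), "tiles rendered")
```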

  • by hitmark ( 640295 ) on Friday August 13, 2010 @05:13PM (#33245938) Journal

    Another problem is that adding cores is not as effective, right now, as upping clock speed.

    This may change, however, if the designs change from multiple universal cores to something more like the Cell CPU that powers the PlayStation 3, or maybe something like the latest GPUs. Basically, a couple of universal cores like before (as they provide some benefit, if the OS does a proper job of spreading processes across them) combined with multiple simpler cores that can be arranged like an assembly line. Then you stuff data in at one end, have each core do its assigned task in the chain, and have the result come out the other. With enough of them, you start to approach something like an FPGA, giving each logical instruction in a program its own core (a toy software version of that assembly line follows below).

    This is interesting in that a recent presentation I found the video of stated that CPUs these days slow down mostly because they are waiting on something from cache (usually because of a bad speculation during an IF or similar divergent route in the code).
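A software analogy may help picture the assembly-line arrangement described above. This is only a sketch of the idea in ordinary Python threads and queues, not how the Cell SPEs or GPU pipelines are actually programmed; the three stage functions are arbitrary placeholders.

```python
# Assembly-line sketch: each stage runs on its own worker, pulls work from an
# input queue, and pushes results downstream, so data goes in at one end and
# results come out the other.
import queue
import threading

STOP = object()  # sentinel marking the end of the data stream

def stage(task, inbox: queue.Queue, outbox: queue.Queue) -> None:
    """Apply this stage's task to each item and pass the result downstream."""
    while True:
        item = inbox.get()
        if item is STOP:
            outbox.put(STOP)
            return
        outbox.put(task(item))

stages = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]  # placeholder tasks
queues = [queue.Queue() for _ in range(len(stages) + 1)]
workers = [threading.Thread(target=stage, args=(fn, queues[i], queues[i + 1]))
           for i, fn in enumerate(stages)]
for w in workers:
    w.start()

for value in range(5):        # stuff data in at one end...
    queues[0].put(value)
queues[0].put(STOP)

while (result := queues[-1].get()) is not STOP:   # ...results come out the other
    print(result)
for w in workers:
    w.join()
```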

  • by rimcrazy ( 146022 ) on Friday August 13, 2010 @05:46PM (#33246328)

    No, you are incorrect. You are talking about stacked gates. That is significantly different from what I am talking about, which is making entire stacked devices where you have a second level of additional devices, including sources and drains as well as gates. Work has been tried with amorphous silicon with mixed results, none of which amount to much.

    You are correct in that the power density issue trumps all other concerns.

    And in the end economic issues will trump everything.

  • by ChrisMaple ( 607946 ) on Friday August 13, 2010 @10:59PM (#33248500)
    Another critical dimension is gate thickness. When you speak of a 16 nm process, you are (generally) talking about the minimum dimension in the XY plane, which is usually reserved for gate length. Gate thickness is a much smaller dimension, and if I recall correctly we're already down to about 4 molecules of thickness. Quantum tunneling is a problem.
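To see why tunneling becomes the problem as the gate dielectric thins, a back-of-the-envelope WKB estimate is enough. The barrier height, free-electron mass, and thicknesses below are rough textbook-style assumptions, not parameters of any real gate stack.

```python
# WKB-style estimate of how tunneling through the gate dielectric grows as it
# thins: transmission ~ exp(-2 * kappa * thickness). All constants here are
# rough assumptions for illustration only.
import math

HBAR = 1.055e-34     # reduced Planck constant, J*s
M_E = 9.11e-31       # free electron mass, kg (real oxides use an effective mass)
EV = 1.602e-19       # joules per electron-volt
BARRIER = 3.1 * EV   # assumed SiO2-like barrier height

def tunneling_probability(thickness_nm: float) -> float:
    """Transmission through a rectangular barrier in the WKB approximation."""
    kappa = math.sqrt(2 * M_E * BARRIER) / HBAR        # decay constant, 1/m
    return math.exp(-2 * kappa * thickness_nm * 1e-9)

for d in (2.0, 1.5, 1.0, 0.7):
    print(f"{d:.1f} nm dielectric: relative tunneling ~ {tunneling_probability(d):.1e}")
```

Each fraction of a nanometre shaved off the dielectric multiplies the leakage by orders of magnitude, which is roughly why gate-oxide thickness stopped scaling and high-k materials came in.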

