How Much Smaller Can Chips Go?

nk497 writes "To see one of the 32nm transistors on an Intel chip, you would need to enlarge the processor to beyond the size of a house. Such extreme scales have led some to wonder how much smaller Intel can take things and how long Moore's law will hold out. While Intel has overcome issues such as leaky gates, it faces new challenges. For the 22nm process, Intel faces the problem of 'dark silicon,' where the chip doesn't have enough power available to take advantage of all those transistors. Using the power budget of a 45nm chip, if the processor remains the same size only a quarter of the silicon is exploitable at 22nm, and only a tenth is usable at 11nm. There's also the issue of manufacturing. Today's chips are printed using deep ultraviolet lithography, but it's almost reached the point where it's physically impossible to print lines any thinner. Diffraction means the lines become blurred and fuzzy as the manufacturing processes become smaller, potentially causing transistors to fail. By the time 16nm chips arrive, manufacturers will have to move to extreme ultraviolet lithography — which Intel has spent 13 years and hundreds of millions trying to develop, without success."
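A rough back-of-the-envelope sketch (Python) of the "dark silicon" arithmetic quoted above; the scaling exponent is an illustrative assumption (transistor count grows with the square of the shrink while per-transistor power stops falling), not Intel's actual model:

    # Illustrative "dark silicon" estimate: how much of a die fits in a fixed
    # 45nm-era power budget as features shrink.  The exponent is an assumption.
    def usable_fraction(node_nm, ref_nm=45.0, power_exponent=2.0):
        # power_exponent = 2.0 means transistor count grows ~1/s^2 per shrink
        # while power per transistor no longer falls, so total power grows ~1/s^2.
        s = node_nm / ref_nm
        return min(1.0, s ** power_exponent)

    for node in (45, 32, 22, 16, 11):
        print(f"{node:>2} nm: roughly {usable_fraction(node):.0%} of the silicon can be powered")

Run as-is, this prints about 100%, 51%, 24%, 13% and 6%, in the same ballpark as the "quarter at 22nm, tenth at 11nm" figures above.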
  • by AhabTheArab ( 798575 ) on Friday August 13, 2010 @11:49AM (#33242062) Homepage
    Make them bigger. More space to put stuff on them then, anyway. Tostitos Restaurant Style tortilla chips can fit much more guacamole and salsa on them than their bite-size chips. Bigger is better when it comes to chips.
    • Distant parts of the chip then have a communication lag, but yes, this will really help. Certainly much less lag than communicating with something outside the die.

      • Re: (Score:2, Insightful)

        by Anonymous Coward

        It's not about communication lag, it's about cost. Price goes up with die area.

        • Larger dies generally cost more because it's more likely that they'll have a defect. I haven't done any chip design since college (and even then it was really entry-level stuff), but if you could break the chip down into 10 different subcomponents that need to be spaced out, you could put 100 of those components on the chip, and then after manufacture you could select the blocks that perform best and are defect-free, spacing your choices accordingly.

          I'm pretty sure chip makers likely already do something like this.

          • by Sycraft-fu ( 314770 ) on Friday August 13, 2010 @12:44PM (#33242918)

            Since they are so parallel they are made as a bunch of blocks. A modern GPU might be, say, 16 blocks each with a certain number of shaders, ROPs, TMUs, and so on. When they are ready, they get tested. If a unit fails, it can be burned off the chip or disabled in firmware, and the unit can be sold as a lesser card. So the top card has all 16 blocks, the step down has 15 or 14 or something. Helps deal with cases where there's a defect, but overall the thing works.
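A toy Monte Carlo sketch of why that salvage strategy pays off. The 16-block layout matches the example above, but the per-block defect probability and the 14-good-blocks cut-off are made-up illustrative numbers:

    import random

    def simulate(dies=100_000, blocks=16, p_block_defect=0.06, min_good=14):
        """Return (fraction of perfect dies, fraction sellable after disabling bad blocks)."""
        perfect = sellable = 0
        for _ in range(dies):
            good = sum(random.random() > p_block_defect for _ in range(blocks))
            perfect += (good == blocks)
            sellable += (good >= min_good)
        return perfect / dies, sellable / dies

    full, salvaged = simulate()
    print(f"fully working dies: {full:.0%}   sellable with >=14/16 blocks: {salvaged:.0%}")

With these numbers only about a third of the dies come out perfect, but over 90% can still ship as a lesser part.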

            • Actually, it's pretty common practice to put spare arrays and spare cells in the design that aren't connected in the metal layers. When a chip is found defective, the upper metal layers can be cut and fused to form new connections and use the spare cells/arrays instead of the ones that failed by use of a focused ion beam.

              But that still adds time and cost. Decreasing die area is pretty much always preferable. Also, larger dies mean even more of the chip's metal interconnects have to be devoted to power distribution.

              • Re: (Score:3, Insightful)

                by ultranova ( 717540 )

                Actually, it's pretty common practice to put spare arrays and spare cells in the design that aren't connected in the metal layers. When a chip is found defective, the upper metal layers can be cut and fused to form new connections and use the spare cells/arrays instead of the ones that failed by use of a focused ion beam.

                Am I the only one who finds it pretty awesome that we're actually using focused ion beams in the manufacture of everyday items?

      • by ibwolf ( 126465 ) on Friday August 13, 2010 @12:03PM (#33242268)

        Distant parts of the chip then have a communication lag, but yes, this will really help. Certainly much less lag than communicating with something outside the die.

        Wouldn't that suggest that three-dimensional chips would be the logical next step? Although heat dissipation would become more difficult, not to mention that the production process would be an order of magnitude more complicated.

        • by TheDarAve ( 513675 ) on Friday August 13, 2010 @12:09PM (#33242354)

          This is also why Intel has been investing so much into in-silicon optical interconnects. They can go 3D if they can separate the wafers far enough to put a heat pipe in between and still pass data.

        • Re: (Score:3, Insightful)

          by Xacid ( 560407 )
          Built-in peltiers to draw the heat out of the center, perhaps?
          • by mlts ( 1038732 ) *

            I'd like to see more work with peltiers, but IIRC, they take a lot of energy to do their job of moving heat to one side, something that CPUs are already tight on.

          • Re: (Score:3, Informative)

            by pclminion ( 145572 )
            A peltier gets cold on one side and hot on the other. Where are you going to put the hot side, since you're trying to put the thing in the middle of a block of silicon?
            • Re: (Score:3, Funny)

              by Rival ( 14861 )

              A peltier gets cold on one side and hot on the other. Where are you going to put the hot side, since you're trying to put the thing in the middle of a block of silicon?

              Easy -- just put two peltiers together, hot sides facing each other. Problem solved! ;-)

        • by rimcrazy ( 146022 ) on Friday August 13, 2010 @12:53PM (#33243092)

          Making 3D chips is the holy grail of semiconductor processing but is still beyond reach. They've not been able to lay down a single-crystal second layer to make your stacked chip. They have tried using amorphous silicon, but the devices are not nearly as good, so there is no point.

          We are already seeing the upshot of all of this, as next year's machines are not necessarily 2x the performance at the same cost. I really think that money would be better spent helping all of you coders out there in creating a language/compiler programming paradigm that can use 12 threads efficiently for something beyond rendering GTA. I certainly don't have the answer, and given that that problem has not been solved yet, neither does anybody else at this time.

          It's a very, very hard problem. It is going to be interesting here in the next few years. If nothing changes, you're going to have to become accustomed to the fact that next year's PC is going to cost you MORE, not less, and that's really going to suck.

          • by RulerOf ( 975607 )

            money would be better spent helping all of you coders out there in creating a language/compiler programming paradigm that can use 12 threads efficiently for something beyond rendering GTA.

            From what I've heard, the number of cores you throw at GTA doesn't matter, it still runs like crap. ;)

          • We are already seeing the upshot of all of this, as next year's machines are not necessarily 2x the performance at the same cost.

            I don't know how long you've been buying computers, but it has never been the case of "2x performance every year". The best it ever was: every 18 months or so, processing power doubled, and that was pushed back to about every 2 years in the late 80s/early 90s. But even that has never meant 2x all-around performance. You might be able to crunch numbers 2x as fast after two years (never one), but there have always been bottlenecks - like RAM and hard drive speed - which have kept it down to around...

          • by quo_vadis ( 889902 ) on Friday August 13, 2010 @01:41PM (#33243938) Journal
            You are incorrect about the reason for the lack of 3D stacking. It's not that we can't stack them; there has been a lot of work on it. In fact, the reason flash chips are increasing in capacity is that they are usually stacked 8 layers high. The problem quite simply is heat dissipation. A modern CPU has a TDP of 130W, most of which is removed from the top of the chip, through the casing, to the heatsink. Put a second core on top of it, and the bottom layer develops hotspots that cannot be handled. There are currently some approaches based on microfluidic channels interspersed between the stacked dies, but that has its own drawbacks.
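Quick power-density arithmetic behind that objection; the die area is an assumed round number, and 130 W is the TDP quoted above:

    tdp_watts = 130.0      # TDP of a modern desktop CPU, as quoted above
    die_area_cm2 = 2.5     # ~250 mm^2, an assumed high-end die size

    single = tdp_watts / die_area_cm2
    stacked = 2 * tdp_watts / die_area_cm2   # two dies sharing the same footprint and heatsink path

    print(f"single die : {single:.0f} W/cm^2")
    print(f"2-die stack: {stacked:.0f} W/cm^2, with the lower die furthest from the heatsink")

Roughly 50 W/cm^2 becomes roughly 100 W/cm^2 through the same footprint, which is why the bottom die cooks.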
            • Re: (Score:3, Interesting)

              by rimcrazy ( 146022 )

              No, you are incorrect. You are talking about stacked gates. That is significantly different from what I am talking about, which is making entire stacked devices, where you have a second level of additional devices including sources and drains as well as gates. Work has been tried with amorphous silicon, with mixed results, none of which amounted to much.

              You are correct in that the power density issue trumps all other concerns.

              And in the end economic issues will trump everything.

          • Re: (Score:3, Insightful)

            by w0mprat ( 1317953 )

            ... I really think that money would be better spent helping all of you coders out there in creating a language/compiler programming paradigm that can use 12 threads efficiently for something beyond rendering GTA.

            The entirety of programming as we know it is stuck in a single-threaded paradigm, and making the shift to massively parallel computing requires a huge shift in thinking.

            This is so hard because our techniques, languages and compilers all have their roots in a world that barely even multitasked, let alone considered doing anything in parallel for performance.

            Every coder who ever learnt to code, whether for kicks or for money, learnt this way, and they still do.

            We've come all this way without ever having to think...
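To be fair, the embarrassingly parallel case is already easy in most languages; the hard part is everything with shared state. A minimal sketch of the easy case (the worker function and chunk sizes below are placeholders):

    from multiprocessing import Pool

    def work(n):
        # Stand-in for an independent, CPU-bound task with no shared state.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        with Pool(processes=12) as pool:                # the "12 threads" from the comment above
            results = pool.map(work, [2_000_000] * 12)  # one independent chunk per worker
        print(len(results), "chunks done")

The moment those chunks need to share mutable data, the locking, ordering and debugging problems described above show up.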

        • by Yvan256 ( 722131 )

          Wouldn't that suggest that three-dimensional chips would be the logical next step?

          Yes it does. And then after that it's the robotic arm, the explosions everywhere and the "come with me if you want to live".

  • The Atoms (Score:5, Interesting)

    by Ironhandx ( 1762146 ) on Friday August 13, 2010 @11:51AM (#33242080)

    They're going to hit atomic-scale transistors fairly soon from what I can see as well. The manufacturing process for those is probably prohibitively expensive, but that is as small as they can go (according to our current knowledge of the universe, at least).

    I can't imagine Intel has all of its eggs in one basket on Extreme Ultraviolet Lithography, though. Something that's been in development for even 5 years and doesn't show any concrete signs of success should at least have alternatives developed for it. After 5 years, if you still can't say for certain if it's ever going to work, you definitely need to start looking in different directions.

    • by Lunix Nutcase ( 1092239 ) on Friday August 13, 2010 @11:55AM (#33242140)

      Something that's been in development for even 5 years and doesn't show any concrete signs of success should at least have alternatives developed for it.

      You haven't followed much of the history of Itanium's development have you?

      • No, I really haven't. I tend not to pay much attention to things that are released more than 2 years after their original announced release date.

        Though, I have to point out I didn't advocate terminating a project after 5 years of zero results (a la Itanium), just looking in additional directions and not keeping all the eggs in the questionable basket.

        • You seem to miss the point. You imagine that Intel doesn't put all of its eggs in one basket. The development of Itanium disproves that notion, as they had no other real alternatives being developed at the same time.

      • You haven't followed much of the history of Itanium's development have you?

        I saw the movie though, Leo dies at the end.

      • I'd say you haven't (Score:5, Interesting)

        by Sycraft-fu ( 314770 ) on Friday August 13, 2010 @12:50PM (#33243028)

        For one, Itanium is still going strong in high end servers. It is a tiny market, but Itanium sells well (no I don't know why).

        However, in terms of the desktop, you might notice something: when AMD came out with an x64 chip and everyone, most importantly Microsoft, decided they liked it and started developing for it, Intel had one out in a hurry. This doesn't just happen. You don't design a chip in a couple of months; it takes a long, long time. What this means is Intel had been hedging their bets. They had developed an x64 chip (they have a license for anything AMD makes for x86, just as AMD has a license for anything they make) should things go that way. They did, and Intel ran with it.

        Ran with it well, I might add, since now the top performing x64 chips are all Intel.

        They aren't a stupid company, and if you think they are I'd question your judgment.

    • They're going to hit atomic-scale transistors fairly soon from what I can see as well

      Yeah, there was an article here in the spring on atomic computing, where I did a little math on it. I was surprised, but it worked out that in roughly a decade Moore's Law would get down to atomic transistors if reducing the part size was the method employed.

      I had always presumed before that it would never run out, but it's going to have to zig sideways if that's going to be true.

      Google recently bought that company working...

    • by alen ( 225700 )

      I remember reading a long time ago that 90nm or 65nm would be impossible due to physics and science.

      • Re: (Score:3, Insightful)

        by Ironhandx ( 1762146 )

        There's a difference here... those reports were about being practically impossible, not theoretically impossible. Going below the atomic scale, you're hitting the theoretically impossible (given current understanding) along with the practically impossible. We've had the theory for atomic-size transistors for quite a while; it's the practice that really needs to catch up.

    • Re:The Atoms (Score:5, Informative)

      by hankwang ( 413283 ) * on Friday August 13, 2010 @02:59PM (#33245004) Homepage

      I deal with EUV lithography for a living. Not at Intel, but at ASML [asml.com], the world's largest supplier of lithography machines and the only one that has actually manufactured working EUV lithography tools.

      Something that's been in development for even 5 years and doesn't show any concrete signs of success should at least have alternatives developed for it. After 5 years, if you still can't say for certain if it's ever going to work, you definitely need to start looking in different directions.

      You are misinformed. On our Alpha development machines, working 22 nm devices were already manufactured last year. (source [www2.imec.be]) We are shipping the first commercial EUV lithography machines in the coming year (source [asml.com], source [chipdesignmag.com]). A problem for the chip manufacturers is that the capacity on the alpha machines is rather low and needs to be shared among competitors.

      There is a temporary alternative; it is called double patterning [wikipedia.org] (and triple patterning, etcetera). The first problem is that you need twice (thrice) as many process steps for the small features, and also proportionally more lithography machines, which are not exactly cheap. The second problem is that double patterning imposes tough restrictions on the chip design; basically you can only make chips that consist mostly of repeating simple patterns. That is doable for memory chips, but much less so for CPUs. Moreover, if you want to continue Moore's law that way, the manufacturing cost will increase exponentially, so this is not a long-term viable alternative.

      You can bet that the semiconductor manufacturers have looked for alternatives. But those don't exist, at least not viable ones.
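The single-exposure limit described above can be sketched with the Rayleigh criterion, CD ~ k1 * lambda / NA; the k1 and NA values below are typical published figures for the two tool types, used only for illustration:

    def min_half_pitch_nm(wavelength_nm, numerical_aperture, k1):
        # Rayleigh criterion: smallest printable half-pitch in a single exposure.
        return k1 * wavelength_nm / numerical_aperture

    duv = min_half_pitch_nm(193.0, 1.35, 0.27)   # ArF immersion scanner
    euv = min_half_pitch_nm(13.5, 0.25, 0.40)    # first-generation EUV tool
    print(f"193nm immersion, single exposure: ~{duv:.0f} nm half-pitch")
    print(f"13.5nm EUV,      single exposure: ~{euv:.0f} nm half-pitch")

That comes out around 39 nm versus 22 nm, which is why 193nm tools need double (or triple) patterning for the smallest features while EUV gets there in one pass.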

      • Re:The Atoms (Score:4, Informative)

        by hankwang ( 413283 ) * on Friday August 13, 2010 @05:36PM (#33246782) Homepage
        I forgot to add a disclaimer: the opinions expressed are mine and not necessarily my employer's, etcetera.
  • by Revotron ( 1115029 ) on Friday August 13, 2010 @11:53AM (#33242116)

    Why does Intel need to push the envelope that hard and that fast just to create a product that will, in the end, have extremely low yield and extremely high cost?

    Just so they can adhere to some ancient "law" proposed by one of their founders? It's time to let go of Moore's Law. It's outdated and doesn't scale well... just like the x86 architecture! *ba-dum, chhh*

    • by mlts ( 1038732 ) * on Friday August 13, 2010 @12:00PM (#33242212)

      At the extreme, maybe it might be time for a new CPU architecture? Intel has been doing so much stuff behind the scenes to keep the x86 architecture going, that it may be time to just bite the bullet and move to something that doesn't require as much translation?

      Itanium comes to mind here because it offers a dizzying number of registers, both FPU and CPU, available to programs. To boot, it can emulate x86/amd64 instructions.

      Virtual machine technology is coming along rapidly. Why not combine a hardware hypervisor and other technology so we can transition to a CPU architecture that was designed in the past 10-20 years?

      • Re: (Score:3, Insightful)

        by Andy Dodd ( 701 )

        The problem is that x86 has become so entrenched in the market that even its creator can't kill it off.

        You even cited a perfect example of their last (failed) attempt to do so (Itanic).

        • Re: (Score:2, Insightful)

          by mlts ( 1038732 ) *

          Very true, but it eventually needs to be done. You can only get so big with a jet engine that is strapped onto a biplane. The underlying architecture needs to change sooner or later. As things improve, maybe we will get to a point where we have CPUs with enough horsepower to be able to run emulated amd64 or x86 instructions at a decent speed. The benefits of doing this will be many. First, in assembly language, we will save a lot of instructions because programs will have enough registers to do...

          • Re: (Score:3, Informative)

            by Anonymous Coward

            We already have this. All current x86's have a decode unit to convert the x86 instructions to micro-ops in the native RISC instruction set.

        • Itanium failed because it used a VLIW architecture - great for specialized processing tasks on big machines, but for general purpose computing (i.e. what 99.9% of people do) it wasn't much faster than x86.

          Are computers really 'too slow' now? It seems to me that an x64 desktop at 3GHz is fast enough for just about anything a normal person would do. The only "normal task" I can think of that's too slow at the moment is decoding x264 video on netbooks, and they're better off with a little hardware decoder tacked on.

          • by WuphonsReach ( 684551 ) on Friday August 13, 2010 @02:07PM (#33244282)
            Itanium failed because it used a VLIW architecture - great for specialized processing tasks on big machines, but for general purpose computing (i.e. what 99.9% of people do) it wasn't much faster than x86.

            Itanium failed - because it could not run x86 code at an acceptable speed. Which meant that if you wanted to switch over to Itanium, you had to start from scratch - rebuying every piece of software that you depended on, or getting new versions for Itanium.

            AMD's 64bit CPUs, on the other hand, were excellent at running older x86 code while also giving you the ability to code natively in 64bit for the future. AMD's method took the market by storm and Intel had to relent and produce a 64bit x86 CPU.

            (There were other reasons why Itanium failed - such as relying too much on compilers to produce optimal code, cost of the units due to being limited quantity, and Intel arrogance.)
      • Itanium comes to mind here because it offers a dizzying number of registers, both FPU and CPU, available to programs.

        And it's been such a smashing success in comparison to x86, right?

        • by mlts ( 1038732 ) * on Friday August 13, 2010 @12:17PM (#33242484)

          x86 and amd64 have an installed base. Itanium doesn't. This doesn't mean x86 is any better than Itanium, any more than Britney Spears is better than $YOUR_FAVORITE_BAND just because Britney has sold far more albums.

          Intel has done an astounding job at keeping the x86 architecture going. However, there is only so much lipstick you can put on a 40-year-old pig.

          • by quo_vadis ( 889902 ) on Friday August 13, 2010 @01:50PM (#33244068) Journal
            Um, actually Intel has done a lot of work on the architecture and microarchitecture of its processors. The CPUs Intel makes today are almost RISC-like, with a tiny translation engine, which thanks to the shrinking size of transistors takes a trivial amount of die space. The cost of adding a translation unit is tiny, compared to the penalty of not being compatible with the vast majority of the software out there.

            Itanium was their clean-room redesign, and look what happened to it. Outside HPC and very niche applications, no one was willing to rewrite all their apps and, more importantly, wait for the compiler to mature on an architecture that was heavily dependent on the compiler to extract instruction-level parallelism.

            All said, the current instruction set innovation is happening with the SSE and VT instructions, where some really cool stuff is possible. There is something to be said for Intel's choice of a CISC architecture. In RISC ones, once you run out of opcodes, you are in pretty deep trouble. In CISC, you can keep adding them, making it possible to have binaries that can run unmodified on older generation chips, but able to take advantage of newer generation features when running on newer chips.
      • by ibwolf ( 126465 )

        You are right in that a new architecture could offer improved performance; however, it is a one-shot deal. Once you've rolled out the new architecture there will be a short period while everything catches up, and then you are right back to cramming more on the die.

        • The point of a new architecture would be for it NOT to be a one-shot deal, and that it would give you ample room for evolution before hitting physical limitations, at least for a few "generations". The problem with stepping sideways is the risk. You don't have to look too far on /. to find other examples of civilization being irrationally tied to a legacy it's unwilling to walk away from, even if by doing so it accepts mediocre technology.
      • by Twinbee ( 767046 )

        And if we did, are we talking about 2x speed returns very roughly, or even up to 20x?

      • It seems to be almost an article of faith with geeks that if only we didn't have that nasty x86 we could have so much better chips. However, the thing is, there ARE non-x86 chips out there. Intel and AMD may love it, others don't. You can find other architectures. So then, where's the amazing chip that kicks the crap out of Intel's chips? I mean something that is faster, uses the same or less power and costs less to produce (it can be sold for more, but the fab costs have to be less). Where is the amazing chip...

      • by imgod2u ( 812837 ) on Friday August 13, 2010 @01:08PM (#33243376) Homepage

        Because nowadays, the ISA has really very little impact on resulting performance. The total die space devoted to translating x86 instructions on a modern Nehalem is tiny compared to the rest of the chip. The only time the ISA decode logic matters is for very low-power chips (smartphones). This is part of the reason why ARM is so far ahead of Intel's x86 offerings in that area.

        Modern x86, with SSE and x86-64, is actually not that bad of an ISA and there aren't too many ugly workarounds necessary anymore that justify a big push to change.

      • Re: (Score:3, Informative)

        by Chris Burke ( 6130 )

        Intel has been doing so much stuff behind the scenes to keep the x86 architecture going, that it may be time to just bite the bullet and move to something that doesn't require as much translation?

        Actually, the vast majority of what Intel and AMD have been doing behind the scenes are microarchitectural improvements that would be applicable to any out-of-order processor regardless of ISA.

        There are some minor penalties to x86 that remain, but getting rid of them would be a very modest performance upside and is...

    • Yeah, progress is such a hassle.
    • Re: (Score:3, Insightful)

      by T-Bone-T ( 1048702 )

      Moore's Law describes increases in computing power; it does not prescribe them.

      • Re: (Score:3, Informative)

        Moore's Law has nothing to do with computing power, but with the NUMBER of transistors on a piece of silicon, which he said would double every 2 years. That has been pretty much true and will most likely remain true for the next decade.
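As a concrete illustration of that doubling (the 2010 starting count is a rough round number, not any specific product):

    transistors = 1_000_000_000      # assume ~1 billion transistors on a 2010 high-end CPU
    for year in range(2010, 2021, 2):
        print(f"{year}: ~{transistors / 1e9:.0f} billion transistors")
        transistors *= 2             # Moore's law as stated above: double every 2 years

Carrying that forward a decade lands at roughly 32 billion transistors by 2020.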
  • I miss the pressure AMD used to put on Intel. When Intel had an agile competitor often leaping ahead of it, chip speeds shot up like a rocket - seems like they've been resting on their laurels lately...

    • Really? I got a Core i5-750 in January, and I have been happier with it for the money than any chip I've ever had.
    • by Revotron ( 1115029 ) on Friday August 13, 2010 @12:04PM (#33242270)

      The latest revision of my Phenom II X4 disagrees with you. The Phenom II series is absolutely steamrolling over every other Intel product in its price range.

      Hint: Notice I said "in its price range." Because not everyone prefers spending $1300 on a CPU that's marginally better than one at $600. It seems like Intel has stepped away from the "chip speed" game and stepped right into "ludicrously expensive".

      • Re: (Score:3, Interesting)

        The only Intel chips that are $1000+ are those that are either a few months old and/or are of the "Extreme" series. The core i7-860s and 930s are under 300 bucks and pretty much the entire core i5 line is at 200 or less.

        • The problem is that the Intel motherboards are more expensive, and they lock you into your chip "class". You can't upgrade to an i7 from an i5 in some cases.

          • The price difference is negligible between AMD and Intel boards, unless you are in the race to the bottom, where AMD rules. You also can't upgrade from an AM2 to an AM3 CPU on an AM2 board. The talk about upgrading is meaningless in a broader sense too: why would you buy something not optimal just so that you can upgrade it later? It's a false economy; get the best you can afford now, and a whole new rig with whole new tech a few years later.

            • by Rockoon ( 1252108 ) on Friday August 13, 2010 @12:44PM (#33242922)
              What are you talking about? AM2 boards support AM3 chips.

              You also present a false dichotomy, because upgrading isn't ONLY about buying suboptimal hardware and then upgrading it later. Anyone who purchased bleeding-edge AM2 gear when it was introduced can get a BIOS update and then socket an AM3 Phenom II chip. They still only have DDR2, but amazingly Phenom IIs support both DDR2 on AM2 and DDR3 on AM3.

              So that guy who purchased a dual-core AM2 Phenom when they were cutting edge can now socket a hexa-core AM3 Phenom II.

              It's amazing what designing for the future gives your customers. Intel users have only rarely had the chance to substantially upgrade CPUs.
              • You probably mean AM2+ boards. AM2 boards definitely don't all support AM3 CPUs; feel free to check the manufacturers' sites.

                For the false dichotomy part, you build up another in your case, too. In the last few years (the AM2 and AM3 era), the quad cores haven't been too expensive compared to the dual cores. Your example user made the wrong choice when buying the dual core in the first place; the combined price of the dual and the hexa-core CPUs would have given him/her a nice time in multithreaded apps...

      • So you haven't really done any research there? Intel's i5 750 and 760 "steamroll" all the Phenom II X4 CPUs in the price range. Don't trust me, trust benchmarks.

        • So you haven't really done any research there? Intel's i5 750 and 760 "steamroll" all the Phenom II X4 CPUs in the price range. Don't trust me, trust benchmarks.

          Phenom II X6 chips with Turbo Core in the same price range would like to have a word with you about your cherry-picking of old X4 chips.

          • In what uses? X6 CPUs don't really deliver compared to i5, except in uses where you can really blast out all the cores, like vid encoding with certain programs.

            And the OP especially was claiming that his/her *Phenom II X4* beats everything Intel has to offer in its price range, which is blatantly false. LTR.

  • This question (Score:2, Interesting)

    by bigspring ( 1791856 )
    I think there has been a major article asking this question every six months for the last decade. Then: surprise, surprise, there's a new tech development that improves the technology. We've been "almost at the physical limit" for transistor size since the birth of the computer; why will it be any different this time?
    • Re:This question (Score:5, Insightful)

      by localman57 ( 1340533 ) on Friday August 13, 2010 @12:05PM (#33242300)

      why will it be any different this time?

      Because sooner or later, it has to be. You reach a breaking point where the new technology is sufficiently different from the old that they don't represent the same device anymore. I think you'd have to be crazy to think that we're approaching the peak of our ability to solve computational problems, but I don't think it's unreasonable to think that we're approaching the limit of what we can do with this technology (transistors).

    • Eventually there's a theoretical limit, a limit that can't be exceeded without violating the laws of physics, specifically quantum mechanics. Once your transistors get close enough together, the probability of an electron tunneling from one side to the other gets high enough that it isn't possible to distinguish between your on and off states. We are rapidly approaching that limit even if all the manufacturing issues can be overcome (I believe it's somewhere around 5nm, but I could be wrong).
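A crude one-dimensional estimate of why that happens: for a rectangular barrier the tunneling probability falls off as exp(-2*kappa*d), so it blows up as the gap d shrinks. The 1 eV barrier height is an assumed round number, not a real device parameter:

    import math

    def tunneling_probability(barrier_nm, barrier_height_ev=1.0):
        """Rough T ~ exp(-2*kappa*d) for an electron and a rectangular barrier."""
        m_e = 9.109e-31    # electron mass, kg
        hbar = 1.055e-34   # reduced Planck constant, J*s
        ev = 1.602e-19     # joules per electron-volt
        kappa = math.sqrt(2 * m_e * barrier_height_ev * ev) / hbar   # decay constant, 1/m
        return math.exp(-2 * kappa * barrier_nm * 1e-9)

    for d in (5.0, 2.0, 1.0, 0.5):
        print(f"{d:>3} nm barrier: T ~ {tunneling_probability(d):.1e}")

Shrinking the gap from 5 nm to 0.5 nm raises the leakage probability by about twenty orders of magnitude, which is the sense in which "on" and "off" stop being distinguishable.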

  • Planck's Law (Score:5, Funny)

    by cosm ( 1072588 ) <thecosm3@gmai l . c om> on Friday August 13, 2010 @12:00PM (#33242208)
    Well I can say with absolute certainty that they will not go below the Planck length.
    • by imgod2u ( 812837 )

      *For classical computation

    • Re: (Score:3, Insightful)

      by Yvanhoe ( 564877 )
      At 10^-35 meters, that leaves us a lot of room...
      And being certain about something that comes from the uncertainty principle makes me feel confused...
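To put numbers on "a lot of room", taking 22nm as today's feature size and 1.6e-35 m for the Planck length:

    import math

    feature_m = 22e-9        # a 22nm feature
    planck_m = 1.6e-35       # Planck length

    ratio = feature_m / planck_m
    print(f"~{ratio:.1e} Planck lengths across one 22nm feature")
    print(f"that is roughly {math.log2(ratio):.0f} more halvings of the feature size")

About 10^27 Planck lengths, or roughly 90 more halvings, so the Planck length is not the limit anyone will hit first.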
  • by TimFreeman ( 466789 ) <tim@fungible.com> on Friday August 13, 2010 @12:30PM (#33242680) Homepage
    The article mentions "dark transistors", which are transistors on the chip that can't be powered because you can't get enough power onto the chip. This is the problem that reversible [theregister.co.uk] computing [wikipedia.org] was supposed to solve.
    • Re: (Score:3, Insightful)

      by imgod2u ( 812837 )

      People have been proposing circuits for regenerative switching (mainly for clocking) for a long, long time. The problem has always been that if you add an inductance to your circuit to store and feed back the energy, you will significantly decrease how fast you can switch.

      Also, you think transistors are difficult to build in small sizes? Try building tiny inductors.
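A rough sense of that trade-off, using the resonant frequency f = 1/(2*pi*sqrt(LC)); both component values below are assumed, illustrative numbers, not measurements of any real clock network:

    import math

    def resonant_frequency_hz(inductance_h, capacitance_f):
        return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

    l_onchip = 1e-9    # ~1 nH: about what an on-chip spiral inductor gives you (assumed)
    c_clock = 1e-9     # ~1 nF: assumed lumped capacitance of a large clock network

    f = resonant_frequency_hz(l_onchip, c_clock)
    print(f"resonant switching frequency: ~{f / 1e6:.0f} MHz")

With those values the energy-recovering clock would resonate at around 160 MHz, a long way below multi-GHz switching.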

  • Current technology is based on a single planar layer of silicon substrate. A chip is built with metal interconnect on top, but the base layers are essentially a 2D structure. We are already post-processing things with through vias to stack substrates into a single package. This increases density from the package perspective.
    Improvements in stacking will keep Moore's law going for another decade (as long as you consider Moore's law to be referencing density in 2D).

  • because "X-rays" is such an UGLY word....

    • by sunbane ( 146740 ) on Friday August 13, 2010 @01:17PM (#33243508) Homepage

      Because X-rays are 0.01 - 10 nm light and EUV is 13.5nm light... so it has nothing to do with the word; it's just that engineers like to label things correctly.

    • by erice ( 13380 )

      EUV is 13.5nm; X-rays are generally thought of as 10nm and smaller. http://hyperphysics.phy-astr.gsu.edu/hbase/ems3.html [gsu.edu]

      It is close, and this region is sometimes referred to as "soft" X-rays, but there is nothing incorrect about the "UV" moniker. It also helps to distinguish EUV from actual X-ray lithography, a largely abandoned approach which used wavelengths on the order of 1nm. http://en.wikipedia.org/wiki/X-ray_lithography [wikipedia.org]
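For reference, converting those wavelengths to photon energies with E ~ 1239.84 eV*nm / lambda shows how close EUV sits to the soft X-ray band:

    def photon_energy_ev(wavelength_nm):
        return 1239.84 / wavelength_nm   # E = h*c / lambda, in eV for lambda in nm

    for label, wavelength in (("DUV (ArF, 193nm)", 193.0), ("EUV", 13.5), ("soft X-ray boundary", 10.0)):
        print(f"{label:<20} {wavelength:>6.1f} nm -> {photon_energy_ev(wavelength):6.1f} eV")

So EUV photons carry about 92 eV versus roughly 6.4 eV for the 193nm light used today, which is a big part of why the optics and sources are so hard to build.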

    • Re: (Score:3, Insightful)

      by Steve525 ( 236741 )

      because "X-rays" is such an UGLY word....

      There's actually some truth to this. Originally it was called soft X-ray projection lithography. The other type of X-ray lithography was a near-contact shadow technique using shorter (near 1nm) X-rays. To distinguish the two techniques, they changed the name from soft X-ray to EUV.

      This was also done for marketing reasons. X-ray lithography had failed (after sinking a lot of $$ into it), while optical lithography had successfully moved from visible to UV, to DUV. By calling it EUV it sounds like the next...

  • That's how small they can go. Beyond that, increasing the functional density of our CPUs will get really challenging.

  • Better software (Score:5, Insightful)

    by Andy_w715 ( 612829 ) on Friday August 13, 2010 @01:07PM (#33243368)
    How about writing better software. Stuff that doesn't require 24 cores and 64GB of RAM?
    • Re:Better software (Score:4, Insightful)

      by evilviper ( 135110 ) on Friday August 13, 2010 @02:56PM (#33244970) Journal

      How about writing better software. Stuff that doesn't require 24 cores and 64GB of RAM?

      They did. They are damn fast on modern processors, too. However, people simply look at me funny for using all GTK v1.2 applications... GIMP, aumix, emelfm, Ayttm, Sylpheed1, XFce3, etc.

      So, why AREN'T YOU using better software, which "doesn't require 24 cores and 64GB of RAM"?

    • Folks don't often realize how much work we software writers go through to write this big, complex, core-eating software. Back in the day with 8-bit 500 KHz CPUs we could write a simple 1000-iteration loop with a bit of code in it, and it might lag the CPU for a whole second. Now with these fast processors we have to go through all kinds of hoops to use up all those cycles! Building languages on top of languages, interpreted languages, all kinds of extra error checking (error checking can often take 80%-90% of the cycles and code), objects on top of arrays on top of pointers on top of objects ... you get the idea. SOMEBODY has to make the software to use up all those cycles.

      It's a dirty job, but somebody has to do it!!!

      WE CAN NOT LET THE HARDWARE PEOPLE WIN!!! For every added processor, every bump in Hz, we WILL come up with a way to burn it! Soon we will embark on the new 3D ray-traced desktop - THAT will keep the HW folks busy for a while!!! And (don't tell anybody) soon we will establish the need for full time up-to-date indexing of everything on the LAN. Of course, that could be done by one machine, but if we all do it independently on each machine, that will burn another whole 2GHz CPU's worth of cycles.

      Our goal and our motto: "A computer is nothing but a very complicated and expensive heater." :D

  • The diameter of a silicon atom is roughly 0.25 nm. That means that 32nm is about 120 atoms across. A 16nm line is about 60 atoms across.

    For reliable use, there is going to be an approximate minimum number of atoms in a line. Electron interactions among individual atoms are quantum events, so for any sort of predictability you're going to need enough atoms for the probabilities to average out. I don't know how many that is, but it pretty much has to be more than one.

    I have a great deal of...

    • Re: (Score:3, Interesting)

      by ChrisMaple ( 607946 )
      Another critical dimension is gate thickness. When you speak of a 16 nm process, you are (generally) talking about the minimum dimension in the XY plane, which is usually reserved for gate length. Gate thickness is a much smaller dimension, and if I recall correctly we're already down to about 4 molecules of thickness. Quantum tunneling is a problem.
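A quick sanity check of the atom counts in this subthread; the atomic diameter is the rough figure quoted above, and the per-layer oxide spacing is an assumed round number:

    si_atom_nm = 0.25            # rough diameter of a silicon atom, as quoted above
    for line_nm in (32, 22, 16, 11):
        print(f"{line_nm:>2} nm line: ~{line_nm / si_atom_nm:.0f} atoms across")

    oxide_layers = 4             # "about 4 molecules" of gate oxide, per the parent comment
    layer_nm = 0.3               # assumed thickness of one oxide layer
    print(f"gate oxide: ~{oxide_layers * layer_nm:.1f} nm thick")

So a 16nm line is only on the order of 60 atoms wide, and the gate oxide is about a nanometre thick, which is exactly where tunneling becomes a problem.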
  • Why hasn't Intel rolled out 3D chips stacked in layers, with microfluidics cooling between layers? I used to see all kinds of engineering PR about it, but it's been years since I saw any progress, and it's taken way longer than I expected.

    3D would not only increase the number of transistors (and other devices) that fit into a "chip", but also put the circuits closer together, requiring less voltage/power and shorter propagation times. What's holding it up?

    • Re:3D Chips (Score:4, Informative)

      by erice ( 13380 ) on Friday August 13, 2010 @02:53PM (#33244924) Homepage

      Actually, 3D has picked up quite a bit in the last few years. However, the primary interest is connecting different chips together in the same package with short, fast interconnect. It's a lot better than conventional System in Package and much, much better than circuit-board connections. Unfortunately, the connections are a bit too coarse to spread a single design like an Intel processor across the layers.

      For that you need more sophisticated methods like growing a new wafer on top of one that has already been built up. These methods are not yet ready for production.
