
How Much Smaller Can Chips Go?

nk497 writes "To see one of the 32nm transistors on an Intel chip, you would need to enlarge the processor to beyond the size of a house. Such extreme scales have led some to wonder how much smaller Intel can take things and how long Moore's law will hold out. While Intel has overcome issues such as leaky gates, it faces new challenges. For the 22nm process, Intel faces the problem of 'dark silicon,' where the chip doesn't have enough power available to take advantage of all those transistors. Using the power budget of a 45nm chip, if the processor remains the same size, only a quarter of the silicon is exploitable at 22nm, and only a tenth is usable at 11nm. There's also the issue of manufacturing. Today's chips are printed using deep ultraviolet lithography, but it has almost reached the point where it's physically impossible to print lines any thinner. Diffraction means the lines become blurred and fuzzy as the manufacturing processes shrink, potentially causing transistors to fail. By the time 16nm chips arrive, manufacturers will have to move to extreme ultraviolet lithography — which Intel has spent 13 years and hundreds of millions trying to develop, without success."
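
The dark-silicon fractions above follow from simple scaling arithmetic. A rough back-of-the-envelope sketch (the per-transistor power factors are illustrative assumptions, not Intel's figures): transistor density roughly quadruples from 45nm to 22nm, but per-transistor switching power no longer falls nearly as fast, so a fixed power budget can only light up a shrinking fraction of the die.

    /* darksilicon.c - back-of-the-envelope estimate; the power
       scaling factors are assumptions for illustration only. */
    #include <stdio.h>

    int main(void) {
        /* 45nm -> 22nm: ~(45/22)^2 = ~4.2x transistors per mm^2 */
        double area_scale  = (45.0 / 22.0) * (45.0 / 22.0);
        double power_scale = 1.0;   /* assume per-transistor power barely improves */
        printf("22nm: ~%.0f%% of the die usable\n", 100.0 * power_scale / area_scale);

        /* 45nm -> 11nm: ~(45/11)^2 = ~16.7x density; assume a modest
           ~1.6x per-transistor power improvement over the extra nodes */
        area_scale  = (45.0 / 11.0) * (45.0 / 11.0);
        power_scale = 1.6;
        printf("11nm: ~%.0f%% of the die usable\n", 100.0 * power_scale / area_scale);
        return 0;
    }

This prints roughly 24% and 10%, matching the quarter and tenth quoted in the summary.
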
  • by Anonymous Coward on Friday August 13, 2010 @12:53PM (#33242104)

    It's not about communication lag; it's about cost. Price goes up with die area.

  • by Revotron ( 1115029 ) on Friday August 13, 2010 @01:04PM (#33242270)

    The latest revision of my Phenom II X4 disagrees with you. The Phenom II series is absolutely steamrolling over every other Intel product in its price range.

    Hint: Notice I said "in its price range," because not everyone wants to spend $1300 on a CPU that's marginally better than one at $600. It seems like Intel has stepped away from the "chip speed" game and stepped right into "ludicrously expensive".

  • by Andy Dodd ( 701 ) <atd7NO@SPAMcornell.edu> on Friday August 13, 2010 @01:04PM (#33242282) Homepage

    The problem is that x86 has become so entrenched in the market that even its creator can't kill it off.

    You even cited a perfect example of Intel's last (failed) attempt to do so (Itanic).

  • Re:This question (Score:5, Insightful)

    by localman57 ( 1340533 ) on Friday August 13, 2010 @01:05PM (#33242300)

    why will it be any different this time?

    Because sooner or later, it has to be. You reach a breaking point where the new technology is sufficiently different from the old that they don't represent the same device anymore. I think you'd have to be crazy to think we're approaching the peak of our ability to solve computational problems, but I don't think it's unreasonable to think that we're approaching the limit of what we can do with this technology (transistors).

  • by T-Bone-T ( 1048702 ) on Friday August 13, 2010 @01:13PM (#33242418)

    Moore's Law describes increases in computing power; it does not prescribe them.

  • by mlts ( 1038732 ) * on Friday August 13, 2010 @01:14PM (#33242438)

    Very true, but it eventually needs to be done. You can only get so big with a jet engine that is strapped onto a biplane. The underlying architecture needs to change sooner or later. As things improve, maybe we will get to a point where we have CPUs with enough horsepower to run emulated amd64 or x86 instructions at a decent speed. The benefits of doing this would be many. First, in assembly language, we would save a lot of instructions, because programs would have enough registers to keep intermediate values on-chip rather than constantly shuttling data to and from RAM to complete a calculation. Having fewer accesses to and from RAM speeds up tasks immensely because register access is so much faster. Take a calculation that adds up a bunch of numbers: the numbers can be loaded into separate registers, added, and the result dropped back into RAM. With the x86, it would take a lot of loads and stores to do the same thing.
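
    A minimal sketch of the register point, in C (the scenario and function name are made up for illustration): the same source produces very different memory traffic depending on how many registers the target machine has.

        /* sum4.c - register pressure vs. memory traffic. On x86-64
           (16 general-purpose registers, arguments passed in registers)
           the adds never touch RAM; on classic 32-bit x86 (8 registers,
           arguments passed on the stack) each operand is loaded from
           memory first. */
        #include <stdio.h>

        static long sum4(long a, long b, long c, long d) {
            return a + b + c + d;   /* stays entirely in registers on x86-64 */
        }

        int main(void) {
            printf("%ld\n", sum4(1, 2, 3, 4));
            return 0;
        }

    Compiling with gcc -S -O2 for each target makes the difference visible in the generated assembly.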

  • by mlts ( 1038732 ) * on Friday August 13, 2010 @01:17PM (#33242484)

    x86 and amd64 have an installed base. Itanium doesn't. This doesn't mean x86 is any better than Itanium, any more than Britney Spears is better than $YOUR_FAVORITE_BAND just because she has sold far more albums.

    Intel has done an astounding job of keeping the x86 architecture going. However, there is only so much lipstick you can put on a 40-year-old pig.

  • by Xacid ( 560407 ) on Friday August 13, 2010 @01:19PM (#33242520) Journal
    Built-in Peltier elements to draw the heat out of the center, perhaps?
  • by phantomfive ( 622387 ) on Friday August 13, 2010 @01:24PM (#33242600) Journal
    It has always been about making it smaller. Clock speed was able to increase because the chips got smaller. We were able to add more cores per die because the chips got smaller. Moore's law is about size: it doesn't say computers will get faster, it says they will get smaller.

    What we are able to do with the smaller chips is what's changed. Raising the clock speed worked for years, and it was the best option, but because of physical problems we weren't able to keep doing it in the latest generations. So the next best thing is to add cores. Now the article is suggesting we may not even be able to do that anymore.

    I will tell you I've been reading articles like this for as long as I've known what a computer was, so if you're a betting man, you would do well to bet against this type of article every time you read it. But in theory it has to end somewhere, unless we learn how to build with subatomic particles, which presumably is outside the reach of the research budget at Intel.
  • by Abcd1234 ( 188840 ) on Friday August 13, 2010 @01:48PM (#33242984) Homepage

    Well done, you've just described... today!

    And today, we already know the problem with this approach: most everyday problems aren't easily parallelizable. Yes, there are specific areas where the problems are sometimes embarrassingly parallel (some scientific/number crunching applications, graphics rendering, etc), but generally speaking, your average software problem is unfortunately very serial. As such, those multiple cores don't provide much benefit for any single task. So if you want to execute one of these problems faster, the only thing you can do is ramp up the clock rate.
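
    Amdahl's law puts numbers on this. A minimal sketch, with the parallel fraction p assumed purely for illustration: the speedup on n cores is 1 / ((1 - p) + p / n), which flattens out quickly when much of the work is serial.

        /* amdahl.c - illustrative only; p = 0.5 is an assumed value.
           With half the work serial, even infinite cores cap out at 2x. */
        #include <stdio.h>

        int main(void) {
            double p = 0.5;   /* fraction of the work that parallelizes */
            for (int n = 1; n <= 64; n *= 2) {
                double speedup = 1.0 / ((1.0 - p) + p / n);
                printf("%2d cores: %.2fx speedup\n", n, speedup);
            }
            return 0;
        }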

  • by imgod2u ( 812837 ) on Friday August 13, 2010 @02:02PM (#33243264) Homepage

    People have been proposing circuits for regenerative switching (mainly for clocking) for a long, long time. The problem has always been that if you add an inductance to your circuit to store and feed back the energy, you significantly decrease how fast you can switch.

    Also, you think transistors are difficult to build in small sizes? Try building tiny inductors.
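
    A rough LC-resonance sketch of why (the component values are assumptions for illustration): a resonant circuit recovers energy at f = 1 / (2π√(LC)), so even a 1 nH inductor, already hard to build on-die, driving 10 pF of clock load only resonates around 1.6 GHz.

        /* lc.c - resonant-clock back-of-the-envelope; L and C are
           illustrative assumptions, not real chip parameters. */
        #include <math.h>
        #include <stdio.h>

        int main(void) {
            const double PI = 3.141592653589793;
            double L = 1e-9;     /* 1 nH on-die inductor */
            double C = 10e-12;   /* 10 pF of clock-network capacitance */
            double f = 1.0 / (2.0 * PI * sqrt(L * C));
            printf("resonant frequency: %.2f GHz\n", f / 1e9);  /* ~1.59 GHz */
            return 0;
        }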

  • by psbrogna ( 611644 ) on Friday August 13, 2010 @02:06PM (#33243338)
    I'd settle for less bloatware. Back in the day, amazing things were done with extremely limited CPU resources by programming closer to the wire. Now we have orders of magnitude more resources, but most programming is done at a very high level, with numerous layers of inefficiency that negate, possibly more than negate, the benefits of increased CPU resources. Yes, yes, I wax a little "in my day, uphill both ways" here, but do high-level programming and efficient use of resources have to be mutually exclusive?
  • Better software (Score:5, Insightful)

    by Andy_w715 ( 612829 ) on Friday August 13, 2010 @02:07PM (#33243368)
    How about writing better software? Stuff that doesn't require 24 cores and 64GB of RAM.
  • by ultranova ( 717540 ) on Friday August 13, 2010 @02:30PM (#33243756)

    Actually, it's pretty common practice to put spare arrays and spare cells in the design that aren't connected in the metal layers. When a chip is found defective, the upper metal layers can be cut and fused with a focused ion beam to form new connections, using the spare cells/arrays instead of the ones that failed.

    Am I the only one who finds it pretty awesome that we're actually using focused ion beams in the manufacture of everyday items?

  • Re:The Atoms (Score:3, Insightful)

    by Ironhandx ( 1762146 ) on Friday August 13, 2010 @02:31PM (#33243776)

    There's a difference here... those reports were about the practically impossible, not the theoretically impossible. Going below the atomic scale, you're hitting the theoretically impossible (given current understanding) along with the practically impossible. We've had the theory for atomic-size transistors for quite a while; it's the practice that really needs to catch up.

  • Re:Planck's Law (Score:3, Insightful)

    by Yvanhoe ( 564877 ) on Friday August 13, 2010 @03:04PM (#33244238) Journal
    At 10^-35 meters, that leaves us a lot of room...
    And being certain about something that comes from the uncertainty principle makes me feel confused...
  • by Spatial ( 1235392 ) on Friday August 13, 2010 @03:04PM (#33244244)

    (can't go faster, so let's just go the same speed, but in parallel).

    Actually they do go faster. Clock speed doesn't mean processing speed. Modern CPUs do much more per clock cycle than their predecessors because of their greater instruction-level parallelism, shorter instruction latencies, larger caches, etc. While their cores don't generally operate at a higher frequency, they perform many times faster.

    That's not even considering the additional cores and massively improved power efficiency. It's difficult to overstate just how fucking amazingly good CPUs are now.
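
    Rough numbers, purely illustrative (the IPC figures are assumptions, not measurements of any real CPU): throughput is roughly instructions-per-cycle times clock, so a wide core at a lower clock can easily beat a narrow one at a higher clock.

        /* ipc.c - toy throughput comparison; all figures assumed. */
        #include <stdio.h>

        int main(void) {
            double old_ipc = 1.0, old_ghz = 3.8;  /* narrow core, high clock */
            double new_ipc = 4.0, new_ghz = 3.0;  /* wide core, lower clock */
            printf("old: %.1f billion instr/s\n", old_ipc * old_ghz);  /* 3.8 */
            printf("new: %.1f billion instr/s\n", new_ipc * new_ghz);  /* 12.0 */
            return 0;
        }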

  • by WuphonsReach ( 684551 ) on Friday August 13, 2010 @03:07PM (#33244282)
    Itanium failed because it used a VLIW architecture - great for specialized processing tasks on big machines, but for general-purpose computing (i.e. what 99.9% of people do) it wasn't much faster than x86.

    Itanium failed because it could not run x86 code at an acceptable speed. That meant that if you wanted to switch over to Itanium, you had to start from scratch - re-buying every piece of software that you depended on, or getting new versions for Itanium.

    AMD's 64-bit CPUs, on the other hand, were excellent at running older x86 code while also giving you the ability to code natively in 64-bit for the future. AMD's approach took the market by storm, and Intel had to relent and produce a 64-bit x86 CPU.

    (There were other reasons why Itanium failed - such as relying too much on compilers to produce optimal code, the cost of units produced in limited quantities, and Intel arrogance.)
  • by Abcd1234 ( 188840 ) on Friday August 13, 2010 @03:12PM (#33244332) Homepage

    Trust me, what you're seeing is *not* what you think you're seeing. Windows isn't magically auto-parallelizing your code. That's a hot topic of research today, and it's really fucking hard.

  • by Steve525 ( 236741 ) on Friday August 13, 2010 @03:47PM (#33244826)

    because "X-rays" is such an UGLY word....

    There's actually some truth to this. Originally it was called soft x-ray projection lithography. The other type of x-ray lithography was a near contact shadow technique using shorter (near 1nm) x-rays. To distinguish the two techniques they changed the name from soft x-ray to EUV.

    This was also done for marketing reasons. X-ray lithography had failed (after sinking a lot of $$ into it), while optical lithography had successfully moved from visible to UV, to DUV. Calling it EUV makes it sound like the next logical step, instead of being associated with the failure that was x-ray lithography.

    (Actually, x-ray lithography didn't truly fail. It does work, but optical surpassed it before it was ready, so it became pointless.)

  • Re:Better software (Score:4, Insightful)

    by evilviper ( 135110 ) on Friday August 13, 2010 @03:56PM (#33244970) Journal

    How about writing better software? Stuff that doesn't require 24 cores and 64GB of RAM.

    They did. They are damn fast on modern processors, too. However, people simply look at me funny for using all GTK v1.2 applications... GIMP, aumix, emelfm, Ayttm, Sylpheed1, XFce3, etc.

    So, why AREN'T YOU using better software, which "doesn't require 24 cores and 64GB of RAM"?

  • by Dylan16807 ( 1539195 ) on Friday August 13, 2010 @06:36PM (#33246792)
    DDR, along with almost all desktop memory, has a 64-bit interface. So DDR-400 runs at 3200MB/s, and if you go dual-channel you get 6400MB/s. Still, having bulk storage only an order of magnitude below main memory is wonderful.
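
    The arithmetic, spelled out (nothing here beyond the figures in the comment):

        /* ddr.c - DDR-400 bandwidth arithmetic. */
        #include <stdio.h>

        int main(void) {
            double transfers = 400e6;        /* DDR-400: 400 million transfers/s */
            double bus_bytes = 64.0 / 8.0;   /* 64-bit interface = 8 bytes/transfer */
            printf("single channel: %.0f MB/s\n", transfers * bus_bytes / 1e6);       /* 3200 */
            printf("dual channel:   %.0f MB/s\n", transfers * bus_bytes * 2.0 / 1e6); /* 6400 */
            return 0;
        }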
  • by w0mprat ( 1317953 ) on Saturday August 14, 2010 @07:52AM (#33250006)

    ... I really think that money would be better spent helping all of you coders out there in creating a language/compiler programming paradigm that can use 12 threads efficiently for something beyond rendering GTA.

    The entirety of programming as we know it is stuck in a single-threaded paradigm, and making the shift to massively parallel computing requires a huge shift in thinking.

    This is so hard because our techniques, languages, and compilers all have their roots in a world that barely even multi-tasked, let alone considered doing anything in parallel for performance.

    Every coder who ever learnt to code, whether for kicks or for money, learnt this way, and they still do.

    We've come all this way without ever having to think in parallel. I stopped developing in 2003, having never had to really consider parallelism.

    Even in 2010, kids start out learning to program linearly, and you can go a long way before having to consider a second thread.

    I think calling it a whole new paradigm doesn't do the change required justice. It's about re-learning and re-thinking everything.

    Frankly, every day I think it's a fucking miracle that software as a whole performs as well as it does, that our civilization's infrastructure can run on this technology, and that Moore's law hasn't stopped its inexorable march yet.

    It all works as a result of the brute force of millions of smart people problem-solving line by line, getting it to compile, run, and work without crashing too often. Software development now sees teams of hundreds of developers, and open source projects can have thousands. One should be forgiven for thinking programming itself hasn't improved terrifically. Advances in software still largely come from throwing human resources at problems.

    Clearly then, the deficiencies are in software, not hardware.

    I won't shed a tear when Intel can no longer make progress with its enormous investment in producing silicon-based chips, and may have to consider graphene et al. But it's far from the end of the story. Silicon is only one element on the periodic table, after all.

"A car is just a big purse on wheels." -- Johanna Reynolds

Working...