Intel Hardware

End of Moore's Law in 10-15 years? 248

Posted by CmdrTaco
from the no-for-real-this-time dept.
javipas writes "In 1965 Gordon Moore — Intel's co-founder — predicted that the number of transistors on integrated circuits would double every two years. Moore's Law has been with us for over 40 years, but it seems that the limits of microelectronics are now not that far from us. Moore has predicted the end of his own law in 10 to 15 years, but he predicted that end before, and failed."
  • Again? (Score:5, Interesting)

    by dylan_- (1661) on Wednesday September 19, 2007 @10:58AM (#20667731) Homepage
    There are always a few of these [techdirt.com].

    I do recall someone telling me that no CPU would ever run at more than 2GHz, as it would then start emitting microwave radiation...
  • by Jeff DeMaagd (2015) on Wednesday September 19, 2007 @11:18AM (#20668051) Homepage Journal
    You are right, but that's also because the fabs get more expensive with each generation; I think each feature-size shrink requires a fab that costs 50% more than the previous one.
  • Re:Gordon Moore (Score:5, Interesting)

    by kebes (861706) on Wednesday September 19, 2007 @11:20AM (#20668069) Journal
    A realistic design for a quantum computer would probably have a classical CPU that does most of the work, with a quantum co-processor. Traditional things, like running the OS and dealing with hardware I/O, would probably still be classical. The quantum co-processor would be assigned computations by the CPU that can be accomplished much faster than on the classical CPU.

    This abstraction would mean that most software wouldn't have to be written with any understanding of quantum computing: libraries and compilers would be designed to use CPU calls that launch the quantum co-processor, if available.

    For many operations, the quantum co-processor would not be needed. But for certain tasks, it would provide orders-of-magnitude speed boosts. If quantum co-processors became commonplace, we would see improvements in all kinds of parallel-processing tasks (matrix operations, simulations, graphics, maybe even search?).
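
    The dispatch layer described above can be sketched in a few lines of Python. Everything here is invented for illustration: there is no real `detect_quantum_coprocessor`, and the quantum path is a stub; the point is only that callers never need to know which path ran.

    ```python
    # Sketch of a library-level dispatch layer for a hypothetical quantum
    # co-processor. All names here are invented for illustration.

    def detect_quantum_coprocessor():
        """Pretend hardware probe; returns None when no co-processor exists."""
        return None  # no quantum hardware on this (classical) machine

    def factor(n):
        """Factor n, offloading to the quantum co-processor when available."""
        qpu = detect_quantum_coprocessor()
        if qpu is not None:
            # e.g. Shor's algorithm would run here on the co-processor
            return qpu.factor(n)
        # Classical fallback: simple trial division
        factors, d = [], 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    print(factor(15))  # classical path runs, since no co-processor is found
    ```

    A caller just calls `factor()`; whether the work happened classically or on a co-processor is hidden behind the library boundary, exactly as the comment suggests.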
  • by goombah99 (560566) on Wednesday September 19, 2007 @11:23AM (#20668103)
    If you accept the statement I just made about Moore's Law being sustained because of economics, then here's a corollary that makes an observable prediction.
    Moore's Law stays fixed because the industry invests enough research dollars -- and not one dollar more -- to keep it at that rate. Their entire economic model is built on this.

    Therefore, if we ever do reach a point where we simply are running out of available physics and computer science (multiprocessing), then the first sign of this will be an increasing fraction of research dollars spent to sustain Moore's Law.

    Plot the industry's margin, smooth the curve, and you will be able to extrapolate to the point where the research dollars cross the profit line. Somewhere shortly before that is when Moore's Law will end.

    The only way that would not be true is if the nature of innovation changes from frequent small leaps to massive leaps spaced far apart.
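
    The extrapolation proposed above can be sketched numerically. All figures below are made up purely for illustration; the real exercise would use actual industry R&D and margin data.

    ```python
    # Sketch of the proposed extrapolation: fit a trend to the fraction of
    # revenue spent on process R&D and find where it crosses the profit
    # margin. All numbers are invented for illustration.

    years = [1995, 2000, 2005]
    rnd_fraction = [0.10, 0.14, 0.18]   # hypothetical R&D share of revenue
    profit_margin = 0.30                 # hypothetical, assumed flat

    # Simple linear trend from the endpoints (data assumed pre-smoothed)
    slope = (rnd_fraction[-1] - rnd_fraction[0]) / (years[-1] - years[0])
    crossing_year = years[0] + (profit_margin - rnd_fraction[0]) / slope

    print(round(crossing_year))  # year where R&D spend would eat the margin
    ```

    With these invented numbers the lines cross in 2020; shortly before whatever year the real data gives is, per the argument above, when Moore's Law would end.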

  • Re:Again? (Score:4, Interesting)

    by krray (605395) on Wednesday September 19, 2007 @11:25AM (#20668135)
    I do recall someone telling me that no CPU would ever run at more than 2GHz, as it would then start emitting microwave radiation...

    I remember making a similar claim myself way back when -- with the 486/33 and 486/66 being the hot systems of the day. I predicted they'd have a hard time getting above ~80 MHz because of FM radio interference / shielding problems. Boy was I wrong.... :*)

    Today I predict "Moore's Law" will hold pretty true -- even in 10 or 15 years. IBM has been playing with using atoms as the gate / switch, which will make today's CPUs look like Model T's.

    In the 90's they had http://www-03.ibm.com/ibm/history/exhibits/vintage/vintage_4506VV1003.html [ibm.com]
    Not too long ago they've done http://domino.watson.ibm.com/comm/pr.nsf/pages/news.20040909_samm.html [ibm.com]
    And recently it has been http://www.physorg.com/news107703707.html [physorg.com]

    This will be a boon both for storage and for the chips themselves IMHO (not to mention my stock :).
  • by tiktok (147569) on Wednesday September 19, 2007 @11:38AM (#20668331) Homepage
    Why buy a computer this year, when I can get a faster one next year? [thetoque.com]

    The fuzzy logic behind not buying a computer due to Moore's Law.
  • by Cutie Pi (588366) on Wednesday September 19, 2007 @11:39AM (#20668339)
    As the parent implied, Moore's Law will likely not end because of technological constraints but rather economic ones.

    We reached a wall a few years ago in terms of transistor speed, mostly due to the thin gate oxides giving rise to significant leakage current, which translates into heat. The upcoming high-k metal gate technology mitigates but doesn't eliminate this problem. Thus, Intel and the like are putting those smaller transistors to work in redundant cores rather than faster, monolithic circuits.

    However, Moore's Law is still marching on, from 65nm, to very soon 45nm, and then 32nm and 22nm. Each technology node effectively halves the area requirement, or (more realistically) doubles the number of transistors that can fit in the same area. This translates into lower cost per transistor. 32nm technology is in the final stages of development, and 22nm is believed to be possible, although much more difficult.

    The real limit right now is the optical lithography process used to pattern the circuits. There is no high-volume solution available for the 16nm node. We can certainly make patterns this small (using electron beam lithography, for example), but it would be prohibitively expensive: each chip would take many hours to expose. There are alternatives such as step-and-flash imprint lithography (SFIL), but the industry isn't really putting money into them.

    My belief is that the 2-year cycle we're on will end after 22nm, or possibly 16nm. Circuits will probably continue to get smaller, but at a slower pace, as development and technology costs become prohibitive. Even now, there is debate in the industry about whether to delay 22nm and instead do a 28nm "half node". If that were to happen across the entire industry, the 2-year Moore's Law as we know it would end.

    BTW, "45nm" and "32nm" don't directly refer to the size of the transistors, but rather to 70% of the half-pitch being printed with lithography. Thus, the 45nm node has a 65nm half-pitch, which means the wires are spaced at 130nm. This spacing decreases to 90nm and 65nm for the 32nm and 22nm nodes, respectively. The actual transistor size (channel length) can be much smaller than the technology node designation.
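
    The node-naming arithmetic above can be checked directly: if the node designation is ~70% of the metal half-pitch, the wire spacing (pitch) is roughly 2 * node / 0.7.

    ```python
    # Check: node designation ~= 70% of the half-pitch, so the wire
    # pitch (spacing) is ~ 2 * node / 0.7, rounded to the nearest 5 nm.
    def wire_pitch(node_nm):
        return round(2 * node_nm / 0.7 / 5) * 5

    for node in (45, 32, 22):
        print(f"{node}nm node -> ~{wire_pitch(node)}nm wire spacing")
    # -> 130, 90 and 65 nm, matching the figures quoted above
    ```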
  • Re:Gordon Moore (Score:4, Interesting)

    by Daniel_Staal (609844) <DStaal@usa.net> on Wednesday September 19, 2007 @11:49AM (#20668491)
    Somewhere, once upon a time, I saw an article that took the opposite approach: it worked out what the absolute maximum transistor density was, and from that when Moore's Law had to end. They figured one transistor per Planck unit, in a spherical computer (where the clock speed is proportional to the size of the sphere, governed by the speed of light).

    IIRC, it ended up something like 150 years in the future.
  • by hackstraw (262471) on Wednesday September 19, 2007 @12:40PM (#20669193)
    Whenever one process technology reaches its physical limits, we get a new one, because the new process makes money.

    I kinda agree and kinda disagree.

    Moore's "Law" is clearly stated in terms of physics. It says that the number of transistors will double over time, not that the speed will double.

    However, as Kurzweil and others have observed, the speed of _computation_ was doubling over time even before Moore's Law, and there is no reason or hint that this will stop once Moore's Law is obsolete.

    Take a peek at http://www.kurzweilai.net/articles/art0134.html?printable=1 [kurzweilai.net] specifically http://www.kurzweilai.net/articles/images/chart03.jpg [kurzweilai.net]

    ICs have been good for a while, but then so was the abacus at one time.

    CPUs are simply different than they were a few years ago. Things like the Niagara chip from Sun and the multi-core stuff from AMD and Intel are pretty different designs (SMP on a chip -- yes, that is an oversimplification).

    10-15 years from now lands right around 2020, which seems to be a common focal point for a number of interesting predictions. Physics computations are predicted to be pretty interesting by then. Computers are predicted to be interesting by then. Who knows what else.

    It's not hardware that I think is the problem or challenge; it's the pains of software that seem more challenging. I mean, it's 2007, and what do we have for software? OSes and compilers and whatnot have pretty much stagnated since the early 70s. Sure, we have 4G languages that are easier for us stupid people to program with, but from a performance and efficiency POV they are a step backwards, not forwards. The JIT stuff in .NET and Java is a little interesting, but programming computers is still a PITA.

    I guess we will have to wait and see.

  • Re:Gordon Moore (Score:5, Interesting)

    by kebes (861706) on Wednesday September 19, 2007 @01:09PM (#20669639) Journal
    I'm not sure if this is the same article that you saw previously, but this paper discusses that topic:
    Seth Lloyd, "Ultimate physical limits to computation [nature.com]" Nature 406, 1047-1054 (31 August 2000) | doi: 10.1038/35023282 [doi.org] (for those without access to Nature articles, this arXiv preprint [arxiv.org] appears to be the same article).

    The article reviews the absolute maximum limits for computation, based on current understanding of thermodynamics, relativity, and quantum mechanics.

    The basic conclusion of the paper is that a theoretical 1 kg computer (confined to a volume of 1 liter), operating perfectly at the edge of what is physically possible, could compute 10^51 operations/second on 10^31 bits of information (as compared to our current computers: 10^10 operations/second on 10^10 bits). Naively scaling Moore's law from current sizes, this suggests that we will reach such limits in 250 years. Of course, the paper repeatedly points out that this is for an unrealistically 'perfect' computer that is somehow able to organize all its internal matter solely for performing the computation at hand. For instance, when running a computation it effectively has a temperature of ~10^9 Kelvin, which is considerably hotter than any known material could withstand.
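
    The ~250-year figure follows from a naive doubling calculation, sketched here with an assumed doubling period of ~1.8 years (the exact period assumed in the paper may differ):

    ```python
    import math

    # Naive Moore's-law extrapolation from today's ~1e10 ops/s to Lloyd's
    # ~1e51 ops/s limit, assuming one doubling every ~1.8 years.
    doublings = math.log2(1e51 / 1e10)   # ~136 doublings needed
    years = doublings * 1.8
    print(f"{doublings:.0f} doublings, ~{years:.0f} years")
    ```

    That lands in the same ballpark as the paper's 250-year estimate.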

    Nevertheless, it's interesting to see what the fundamental principles of relativity and quantum mechanics indicate as a boundary for any sort of computation. The article is an interesting read.
  • Re:it's the law (Score:2, Interesting)

    by kesuki (321456) on Wednesday September 19, 2007 @04:58PM (#20672641) Journal
    The best measurement of the drop in 'internet IQ' is the dollar value of cybercrime: as the average internet user's IQ falls, the dollar value of money stolen from them rises. A smart person sees an 'email' from 'paypal' asking them to log in at a URL that looks like PayPal in the email, notices that the URL bar at the top actually shows some hotel in Russia, and doesn't log in. As more and more people who can't recognize simple phishing emails log in and have their identity and cash stolen, the overall cybercrime figure increases.

    In a less obvious scam, someone goes to Google looking for cheap software, sees an ad for $40 unlimited software downloads, or a site selling 'download only' software, and foolishly gets their credit card info stolen by crooks while paying for pirated software. I'm personally careful enough to google the company's name and find it in a database of cybercrooks, but what about the millions who just think the deal is real and wind up screwed over? Well, that's why cybercrime has reached new records...

    America has the highest reported rate of computer viruses simply because finding and using security software that really works is too 'hard' for the average user; they just buy whatever the guy at the computer warehouse said was good, and some are stupid enough not to even do that. All the 'user friendly' suites have shortcomings that virus writers and hackers can find workarounds for, because they generally take a less secure approach to default security settings to avoid having tech support overwhelmed with users who can't get their internet game or websites to load.

    And oftentimes the techies at the warehouse stores use default fileshare names and turn filesharing on, and then hackers can easily find and compromise those systems.

    People wind up throwing away systems that got trojaned, because the cost of a new computer is less than having a computer guy fix it. Usually those systems have to get so bad with malware and viruses that they're unusable; probably they were compromised for years.

    Hackers get good training on compromising firewalls, traversing NATs, and placing backdoors on vulnerable systems, with all the computers out there... which is why an unprotected Windows system takes about 12 minutes to get compromised on the net.
  • Sounds About Right (Score:3, Interesting)

    by YetAnotherBob (988800) on Thursday September 20, 2007 @12:09AM (#20677033)
    ICs today are made photographically, on a flat surface. Manufacturers keep working to reduce the area needed for a component, be it transistor, resistor, capacitor or trace wire. We already know from lab work what the minimum possible sizes are for each basic component. We've come up on the minimum possible size several times in the past; each time, it was related to the possibilities of the light source we were using. Now we are up in the extreme UV range, with minimum feature sizes that are actually smaller than the wavelength used. The best commercial plants use a 45 nm wavelength. At about 30 nm, the traces (on-chip wires) become unstable, and may no longer be conductors. That is a fundamental limit that clever plant engineering will not be able to surmount. Current commercial plants use a 60 to 90 nm minimum feature size, if memory serves. That means we have about 6 or 7 doublings left (each doubling is about a 70% reduction in feature size and takes 2 to 3 years to realize). That gives us 12 to 20 years.

    Going to still smaller wavelengths means that the photons pack more punch. It's like trying to play billiards by shooting the cue ball with a high-powered rifle: you get pieces of cue ball everywhere. When random photon collisions are pushing random atoms by several dozen radii, your nicely ordered atomic lattice becomes a horrid mess. We are nearing the limits of what nature allows for photolithography now.

    Increasing chip size is not a viable solution, as the full wafer is used now. Increase chip size, and yield drops quickly. Yes, they could double the size of the chip to increase transistor count, but that would mean increasing the cost of the chip by 4X. That's not the direction we want chip cost to go.

    Off in the distance, there are more real hard boundaries, beyond which no amount of effort will yield additional benefits. One of those is component size. Minimum transistor size is 7 atoms (it's been done). Minimum diode size is about 5. Minimum trace size varies with material. The best I've seen is benzene, at about 6 atoms width. Keep in mind that at room temperature, benzene is a liquid; it's going to be very hard to make wires of the stuff. We really need a solid. Aluminum, silver, gold, all have been used, and all need to be 30 to 60 atoms wide or more, and several thick, to be even a poor conductor. Some creative metallo-insulator engineered materials might allow for smaller trace sizes, but probably not. Please note that this is still smaller than buckytubes, which are also as tall as they are wide, creating other connection problems, so don't peddle those as a panacea. That means that the trace sizes required will probably be the final limit. Real capacitors are larger than the traces, but their size is really controlled by the number of electrons needed to operate the transistor/switch. I'm still betting on the traces as establishing the limit.

    Heat dissipation is also a problem, and it gets to be more of a problem as densities go up. Current best designs are operating halfway to melt now. Switching to silicon carbide would let us go hotter, say 400 to 800 C. Diamond/graphite bases would let it get higher still, though diamond heated to 1,200 C in an oxygen atmosphere isn't going to last very long; you'd need some creative packaging there. Heat dissipation is the real reason we can't go 3D. The systems that tried to be true 3D, or near to it, all relied on the chips being immersed in some coolant and having channels for the coolant through the chip. Liquid nitrogen cooled some that IBM did a few years ago; bubbles were a problem, and if you move the coolant fast enough to transport the heat before it bubbles, erosion is a problem.

    Some of these issues can be fixed; some can never be fixed. So when we are fully at 30 nm component sizes, it all stops. It's a problem with the wiring. Solve that, and we would be close to being able to compute with atoms. But with what we think we can do now, the shrinkage stops in about 20 years.

    Enjoy it while you can.

  • by mschuyler (197441) on Thursday September 20, 2007 @11:31PM (#20692153) Homepage Journal
    I agree. The real point is that Moore's Law is not dependent on Moore, nor on silicon. If researchers in the past had fixated on the vacuum tube, they never would have reached beyond the vacuum-tube paradigm to make the advances that happened. I am encouraged by the results other research labs have already achieved with these new media. It's not so much that they still need to be invented as that their discoveries need to be developed. I think it was William Gibson who said, "The future is already here. It's just unevenly distributed."
