Hardware Technology

Limits to Moore's Law Launch New Computing Quests

tringtring alerts us to news that the National Science Foundation has requested $20 million in funding to work on "Science and Engineering Beyond Moore's Law." The PC World article goes on to say that the effort "would fund academic research on technologies, including carbon nanotubes, quantum computing and massively multicore computers, that could improve and replace current transistor technology." tringtring notes that quantum computing has received funding on its own lately, and work on multicore chips has intensified the hunt for parallel programming. Also, improvements are still being made to current transistor mechanics.
  • Is this necessary? (Score:2, Insightful)

    by nebaz ( 453974 )
    I don't really think a prize is necessary for this technology. Unlike space travel, research in chip design has proven profitable at the commercial level, and there is also no government monopoly to stifle progress in this area. Whether or not a prize is offered, faster computers and better technology are what we as consumers expect in this area, and what we will pay for.
    • Re: (Score:3, Insightful)

      Capital investment matters most with things like this. The purpose of the prize, however, might not be necessity but acceleration: speeding development up. The benefits of improving the technology would of course be significant: more powerful, faster computers.
      • by mrxak ( 727974 )
        If there actually were a prize, which it doesn't sound like from TFA, it would probably serve more as bragging rights than anything else. Similar to how SpaceShipOne cost $25 million and only won a $10 million prize: the publicity and fame were probably worth it whether the prize was there or not, but the prize is what got everybody's attention.

        But, it's just a request for funding, not a prize, so it doesn't matter.
    • by aztektum ( 170569 ) on Sunday February 17, 2008 @02:51PM (#22455416)
      It isn't a $20mil prize, it's a budget request.
    • by Mike1024 ( 184871 ) on Sunday February 17, 2008 @03:23PM (#22455608)
      I don't really think a prize is necessary for this technology.

      Who said anything about a prize? The PC World article talks about 'funding for research', i.e. cash given to researchers to develop new technology.

      Unlike space travel, research in chip design has proven profitable at the commercial level, [...] Whether or not a prize is offered, faster computers and better technology are what we as consumers expect in this area, and what we will pay for.

      It's true that a lot of commercial effort goes into current chips and the improvement thereof, but there isn't much commercial effort going into areas like quantum computing because the potential rewards are a loooooong way off. Your money is much safer invested in designing a 32-core Core2ThirtyTwo to be made in 3 years, compared to quantum computing, a technology that faces substantial scalability roadblocks and that no-one knows how to design algorithms for.

      Most of the quantum computers demonstrated so far rely on nuclear magnetic resonance (NMR), but it is thought this technique will not scale well - it is believed fewer than 100 qubits would be possible. As of 2006, the largest quantum computer ever demonstrated was 12 qubits (making it capable of such tasks as quickly finding the prime factors of a number... as long as that number is less than 4096).
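      A rough classical illustration of just how small that problem is (plain Python of my own, nothing quantum about it): an n-qubit register can only represent integers up to 2^n - 1, so a 12-qubit machine tops out below 4096, and naive trial division factors numbers that size instantly.

      # Classical sketch only: the largest value a 12-qubit register could hold,
      # and how trivially a classical machine factors numbers that small.
      def trial_factor(n):
          """Prime factors of n by naive trial division."""
          factors, d = [], 2
          while d * d <= n:
              while n % d == 0:
                  factors.append(d)
                  n //= d
              d += 1
          if n > 1:
              factors.append(n)
          return factors

      largest = 2**12 - 1                      # 4095, the biggest 12-qubit value
      print(largest, trial_factor(largest))    # 4095 [3, 3, 5, 7, 13]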

      In summary, promising future technologies often make poor investments because they are (a) experimental and (b) a long way off. So some funding to make research possible wouldn't go amiss.

      Just my $0.02.
    • by Xiph1980 ( 944189 ) on Sunday February 17, 2008 @03:46PM (#22455820)
      Moore just happened to make a prognosis that transistor density would double every 2 years.
      It just happened to work out that way. We're about to reach a point where current transistors won't cut it anymore. At that point we'll either stagnate, because we can't shrink the process below 10 nanometers and can't find a different functional technology, or we'll make an enormous jump in performance because we'll find something in a different field, be it optics or nanotubes, that makes processors a lot faster.

      Moore's law isn't a law, and should never have been called one. It's merely a prognosis.
      Microprocessor technology is driven by the market. If the general consumer thinks their PC is fast enough, manufacturers will focus on energy efficiency to sell more CPUs, and speed will become a secondary concern.
      • by Tablizer ( 95088 )
        That does not make it "bullshit", but just less than a mathematical or economic certainty. It's an observation that has held up.
      • by Nullav ( 1053766 )

        If the general consumer thinks their PC is fast enough, manufacturers will focus on energy efficiency to sell more CPUs, and speed will become a secondary concern.
        Sounds good to me.
      • Sure, Moore expressed his observations in terms of transistor density, and the doubling period of 1-2 years has varied a bit over the last few decades, but what it's really about is the price-performance of technology in a positive-reinforcement market. If you want to sell more chips, you either have to make them faster or cheaper or both, unless you're the only player in an underserved market.

        So the expensive fast chips get faster to sell to customers with the need for speed, and the production technology gets r

      • by iocat ( 572367 )
        Or maybe we'll go back to respecting efficiently written code, as opposed to tolerating things like chat programs that require ungodly amounts of RAM (Trillian, I'm looking at you), or OSes that require GBs of RAM (ahem, MS) and waste trillions of cycles due to shoddy programming, just because we have computers that can run 1000s of times faster than they did in the 80s... but provide essentially the same user experience and speed of execution.
      • by mdwh2 ( 535323 )
        Moore's law isn't a law, and should never have been called one. It's merely a prognosis.

        Actually, I believe he based it on observed past behaviour, so even though he may have intended it to also be a prediction, calling it a law is fine.

        I'm also confused by "merely" - I'd argue that saying it is a prognosis carries the implication that it will hold in future, whilst "law" implies that, just like other laws, it is merely a generalisation of observed behaviour.
      • I have to add: Moore's law is not about density in a physical sense, but about transistor count per fixed amount of money. That is, the cost of a fixed number of transistors halves every two years.

        So it's more like an economic law.

        About your argument 'prognosis vs law', well, almost anything in economics is more a prognosis than a law, but whatever, nobody cares.
  • by syncrotic ( 828809 ) on Sunday February 17, 2008 @02:48PM (#22455394)
    ...is a mentality that probably won't work here.

    Intel sank billions into the development of Itanium on the premise that if they built a VLIW architecture, compiler developers would find a way to automatically extract the parallelism necessary to make good use of it. A company with the size, resources, and engineering knowledge of Intel made the mistake of assuming that a fundamental shift in thinking could be driven by money and sheer desire, but it turns out that the problem is not just hard - that would make it solvable given sufficient effort and money - it's actually impossible. Those compiler advances never materialized; you can't draw blood from a stone.

    The quest for parallelism in ordinary software might just be similar. Developing tools to make this automated and easy with low overhead is akin to putting a dozen smart people in a room and saying "think up the next big idea that will make me millions." Innovation doesn't work that way; it can't be forced... and money isn't going to make the impossible into the possible.

    I think we'll see a move to eight and then maybe even sixteen cores on a consumer-level chip before we see things start going back in the other direction. This will necessarily mean a slowdown in the development of processors as CPU manufacturers go back to wringing every last bit of single-threaded performance out of their designs.

    Thoughts?
    • I'm just making this up as I go along, but it sounds plausible. I suspect Itanium's EPIC design failed because it targeted too small a niche. Supercomputers and larger servers could get better cost/benefit from using desktop processors in larger numbers: first because those processors had more research money behind them, so they improved faster than a niche processor, and second because the compilers were already there. IBM learned, and is using that lesson with the Cell in the PS3. Video games and the mass m
    • by ZorbaTHut ( 126196 ) on Sunday February 17, 2008 @03:32PM (#22455692) Homepage
      While I agree with some of your points, I disagree with your details. There's no proof that compilers can't be made smart enough for that - just because it didn't happen doesn't mean it couldn't. The biggest issues with Itanium were that it was incredibly slow on "non-native" code and incredibly expensive. There was no reason for anyone to buy one without having a compiler built specifically for it, and there was no reason for anyone to spend the effort to write a compiler for it without someone having bought one.

      It's possible that if Itanium had been able to execute x64 or even x86 code at a competitive speed, we'd all be using IA-64 by now (or at least hoping that new programs were recompiled with it.)

      Also, I don't actually think we'll have a shift back to single-threaded apps. The fact is that most programs run "fast enough" now, even single-threaded on quadcore systems. The ones that don't (mostly games and some professional software) are frequently relatively easy to multithread. I suspect most programs will stay single-threaded, and the ones that need maximum speed will become extremely multithreaded.
      • As an anecdote, the single application most important to my work (and hobby) is Eclipse. For the amount of machine assistance it can provide towards productive work, you pay heavily in RAM and CPU power. For a particularly prolonged session on a large project, I can have my entire operating system occupying 100MB of RAM, and Eclipse occupying as much as 800MB, leaving very little room even on a 1GB stick.

        It's not that Eclipse has always been this heavy, rather, innovations in machine assistance expand the s
      • "Also, I don't actually think we'll have a shift back to single-threaded apps"

        Not completely true. We might still see single-threaded code at the conceptual level, with languages supporting latent parallelism even though the program flow is conceptually single-threaded. Think parallel "for" loops and futures. The burden of actually distributing the parallel execution would be the language runtime's responsibility. This way you have code that acts and behaves as if it is single-threaded but actually scales on proc
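        A minimal sketch of that idea in Python (my own illustration, not from the post), using the standard concurrent.futures module: the code reads like a plain sequential map, and the executor - playing the role of the "language runtime" - decides how to spread the work across cores.

        # Latent parallelism sketch: written like a serial map, executed in parallel.
        from concurrent.futures import ProcessPoolExecutor

        def heavy(x):
            # stand-in for an independent, CPU-bound piece of work
            return sum(i * i for i in range(x))

        if __name__ == "__main__":
            inputs = [200_000, 300_000, 400_000, 500_000]
            with ProcessPoolExecutor() as pool:              # runtime picks the worker count
                results = list(pool.map(heavy, inputs))      # reads like a plain map
            print(results)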
      • by epine ( 68316 )
        This is even sillier than your reply indicates. By the same logic we could conclude from the failure of the Pentium 4 that x86 was doomed. Then Intel came up with the Core Duo to show what should have been achieved in the first place (and saved the world many gigawatt-years of unnecessary power generation in the meantime).

        Intel botched their first hack at Itanium. They weren't willing to pony up another couple of billion to get it right the second time. By then their performance war against AMD had set the
      • by ykardia ( 645087 )

        While I agree with some of your points, I disagree with your details. There's no proof that compilers can't be made smart enough for that - just because it didn't happen doesn't mean it couldn't.

        In fact, I think people are working on dealing with things like Nested Data Parallelism [microsoft.com] (pdf) in compilers right now. I think this will happen in functional languages very, very soon (Haskell, someone below mentioned Erlang). Simpler things, like dealing with flat data parallelism via the compiler (+ a special library), have been possible for a while (see e.g. OpenMP [wikipedia.org]).

    • Thoughts?
      Stop pushing on bits of string. Start pulling them...

       
    • You should have researched the Actor model, Erlang, and automatic parallelization of purely functional programs before you made that assertion.

      Parallelism for ordinary software is already here; it's only a matter of time before it is adopted by mainstream applications.
    • Itanic failed because the machines had horrible price/performance except in very tiny niches. One of the things that killed Itanic is x86 clusters - aka. parallel programs.

      Multicore processors, in contrast, are free. What I mean by that is this: Dual core processors cost basically the same as single core processors at the same core speed. You can still buy single core processors today, but nobody does - there's no reason not to take the free second core. For a variety of reasons, the same thing will be tru

    • by rbanffy ( 584143 )
      I think Itanic missed its window of opportunity.

      By the time the systems were shipping and there was a mainstream OS (read "Windows") to run applications on them, AMD64 and multi-core x86 processors were already appearing.

      Had HP invested more on HP-UX over the years (making it escape the narrow niche they carved for it), had Linux been more mainstream by that timeframe (read "a decent desktop OS", which it kind of wasn't), had Intel invested a lot of resources making GCC deliver the promised performance o
  • by Anonymous Coward on Sunday February 17, 2008 @02:48PM (#22455398)
    How much experience is this quest worth?
  • patents? (Score:3, Interesting)

    by Neffirithion ( 950526 ) on Sunday February 17, 2008 @02:50PM (#22455410)
    Say they do develop this carbon nanotube technology and the other stuff that would massively accelerate the technology world... Would they hold patents on it as well as the $20 million? If so, why have the prize? You'd just have to license the technology from them anyway, so whoever does will be dirt rich + $20 million in pocket... If there is a hole in my thinking... please point it out to me.

    ~Neff
    • Re:patents? (Score:4, Insightful)

      by Jasin Natael ( 14968 ) on Sunday February 17, 2008 @10:02PM (#22458304)

      It's not a prize. It's funding; a budget. This is the older-than-dirt story of "If you build it, they will come!" vs. "I have a 0.01% chance of succeeding if I try to build it, so who's going to feed my family in the 99.99%-probable case that I fail?"

  • by fpgaprogrammer ( 1086859 ) on Sunday February 17, 2008 @02:54PM (#22455446) Homepage
    Moore's law is an observation about the cost per transistor in a circuit. Making faster computation is all about transistor density and the distance signals must travel. Even after the 2-D transistor density levels off, the race will be on to make cheaper 3-D chips using wafer-bonding methods, giving us a new dimension to increase density and thus speed up computation:
    http://mtlweb.mit.edu/researchgroups/icsystems/3dcsg/ [mit.edu]

    And we'll still see the same exponential benefits to GOPS/$ for a long time after 3-D transistor density maxes out. The economics that drive the exponential cost-per-computation trend are related more to the volume of demand, which offsets high fixed production costs, and less to our ability to actually cram more transistors onto a chip.
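    As a back-of-the-envelope sketch of that cost trend (the baseline and the clean two-year doubling are my own illustrative assumptions, not figures from the parent):

    # Illustrative only: if cost per transistor halves every two years, a fixed
    # budget buys exponentially more transistors.
    def transistors_per_dollar(years, baseline=1.0, doubling_period=2.0):
        return baseline * 2 ** (years / doubling_period)

    for years in (0, 2, 4, 10, 20):
        print(f"after {years:2d} years: {transistors_per_dollar(years):,.0f}x the baseline")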
    • by mean pun ( 717227 ) on Sunday February 17, 2008 @03:19PM (#22455598)
      Even if we keep getting exponential growth of transistors per dollar in the coming years, the question is what to do with them. Arranging them in useful circuits is increasingly difficult because at a certain point adding cache and execution units to a processor just isn't very helpful (hence multi-core). Adding more cores is also not going to help at some point. Moreover, power dissipation can't keep growing proportionally, which means that with increasing transistor counts each transistor will have to dissipate less, which means lowering the average number of switching events per transistor, and how are we going to arrange for that?
      • by fpgaprogrammer ( 1086859 ) on Sunday February 17, 2008 @05:03PM (#22456356) Homepage
        The hard part is, of course, how we program it; there are plenty of applications that benefit from parallelization (graphics processing, SDR, FEM). Parallelization tends to offer equivalent throughput at a lower rate of switching, and we need to ask whether high-frequency switching is really worth the power when you have trillions of transistors in a cubic centimeter. At today's price for a 1-million-gate FPGA, a 1-trillion-gate FPGA array would cost about $10-20M; I expect this to come down by a factor of 1000 in under 10 years. Operating at 100 MHz, it would be hard not to have a petaflop of computation.

        Lower frequency requirements let us get creative with power. Power density (temperature) is directly related to the amount of switching capacitance in a region. Lower-frequency circuits and asynchronous circuits can reduce the effect of the biggest source of switching (the clock). With adiabatic logic you can actually use charge pumping and charge recovery to eliminate capacitive loss during switching, but these circuits operate slower; when you combine asynchronous and adiabatic logic, you can use the REQ-ACK handshake as a charge pump to power on functional units. And if you really want high-frequency switching, you'll need to remove thermal energy: one of the early uses of carbon nanotubes in circuits may be as thermal channels, and it's also possible to create submersible circuitry using microfluidic ducts to cool wafer-stacked chips.
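        A quick sanity check on that petaflop figure (the gates-per-FPU number below is my own rough assumption, not something from the post):

        # Rough arithmetic behind "1 trillion gates at 100 MHz ~ a petaflop".
        total_gates   = 1e12    # "1 trillion gate FPGA array"
        clock_hz      = 100e6   # 100 MHz
        gates_per_fpu = 1e5     # assumed: ~100k gates per floating-point unit

        fp_units = total_gates / gates_per_fpu
        flops    = fp_units * clock_hz          # one op per unit per cycle (optimistic)
        print(f"{flops:.1e} FLOPS")             # ~1.0e+15, i.e. about a petaflop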
      • Re: (Score:3, Informative)

        by Eivind ( 15695 )
        Actually, transistors can also become more efficient, and have been for decades. If not, you'd be right: double the computing power would mean double the power consumption, which would mean double the heat production and spell heaps of trouble.

        We're still -far- away from the theoretical limits though.

        Flipping a -single- bit MUST consume at least kT·ln(2) joules (the Landauer limit), where T is the temperature in kelvin and k is the Boltzmann constant, about 1.38 x 10^-23 J/K.

        So if your cpu runs at 300K (cooling it more won't help because then you'll spend
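        Plugging numbers into that limit (the switching rate below is my own illustrative assumption):

        # Landauer bound: at least k*T*ln(2) joules per irreversible bit flip.
        import math

        k = 1.380649e-23        # Boltzmann constant, J/K
        T = 300.0               # room temperature, K

        e_min = k * T * math.log(2)          # ~2.9e-21 J per flip
        flips_per_second = 1e18              # assumed: ~1e9 transistors switching at ~1 GHz
        print(f"{e_min:.2e} J per flip, floor of {e_min * flips_per_second * 1e3:.1f} mW")
        # A real ~100 W CPU sits several orders of magnitude above this floor.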
      • Even if we keep getting exponential growth of transistors per dollar in the coming years, the question is what to do with them. Arranging them in useful circuits is increasingly difficult because at a certain point adding cache and execution units to a processor just isn't very helpful (hence multi-core).

        I disagree with at least part of the above.

        The problem is that you're not thinking outside the current box we're in. It's not that we have too many transistors and too much "cache", but rather that we have too few, and will continue to have too few for a few Moore's law generations yet.

        Consider the effect once that "cache" reaches the half-gig and higher level (and consider that the current cache sizes are per-core, so multiply by the number of cores to get the total per chip size, since that's what we're

    • by jelle ( 14827 )
      3D stacking of wafers with transistors is not the solution:

      If someone were to try it, they better get working on methods to cool those stacks of wafers well, and ways to make the wafers cheaper...

      If you make a chip with a stack of, say, 10 wafers, you've also had to diffuse 10 wafers, costing, well, the same as ten single-wafer chips... Diffusion doesn't magically get cheaper when you stack the wafers afterwards. I'm sure the wafer bonding costs some dough too.

      And it generates the heat of 10 chips of one
  • Killer app? (Score:5, Funny)

    by sakdoctor ( 1087155 ) on Sunday February 17, 2008 @03:02PM (#22455498) Homepage
    And what would be the killer app that needed all that extra power?
    Moore's Law might be exponential, but who's to say that demand for processing power is also... ...scratch that, Microsoft just released a new operating system. The minimum spec is 640 quantum cores.
    • by deadline ( 14171 )

      "The minimum spec is 640 quantum cores."

      Actually, it reads 640 universes. You are guaranteed to get the right answer in one of these.

    • Re: (Score:3, Interesting)

      by mangu ( 126918 )

      And what would be the killer app that needed all that extra power?

      The short answer is all the applications that run in these computers [top500.org].

      I can think of at least two applications that are often in the news: protein folding and physical simulations of continuous media, like weather and climate, aerodynamics, water, oil, and gas flow in porous rocks, etc.

      But I think the future applications for personal supercomputers haven't been invented yet. We don't have the brains to predict what super-human artificial intel

      • by HiThere ( 15173 )
        You don't need to wait for those. By the time the computers arrive there'll be LOTS of jobs that they could do. E.g., given a genome and a blood sample, predict the best drug to use to treat disease X.

        (Yeah, we don't have the database that program would need yet. But it's already being worked on.)

        You can also use them to do ray tracing in a changing 3-D environment. (Think realistic games. Lots of people will pay for that one.)
    • Re: (Score:1, Redundant)

      by squeeze69 ( 756427 )
      640 quantum cores should be enough for anybody.... :-)
    • Re:Killer app? (Score:4, Insightful)

      by master_p ( 608214 ) on Sunday February 17, 2008 @04:55PM (#22456302)
      How about:

      1) parallel search
      2) accurate text translation
      3) accurate human speech rendering
      4) raytracing for 3d graphics
      5) advanced physics in 3d applications
      6) more dynamic programming languages
      7) better video and audio decompression
      8) much faster compression
      9) ultra-fast repagination of large Word documents
      etc
    • Re: (Score:1, Insightful)

      by Anonymous Coward

      And what would be the killer app that needed all that extra power?

      Well, there are a couple, graphics processing for example. Governments in particular however might be interested in two different areas which would profit considerably from massively parallel computing: (semi-)brute force cryptanalysis and simulation (think weapons, in particular, nuclear ones since it's difficult and expensive to do real tests with them).

  • The Hunt (Score:1, Flamebait)

    and work on multicore chips has intensified the hunt for parallel programming.
    Try down the back of the couch. Whenever born again Christians come to the door and ask me if I've found Jesus, that's where I tell them he was all along. Everything ends up there eventually.
  • by Anonymous Coward
    So, is Moore's Law a law, or a quota the industry needs to meet?

    Officer : "Sir, I'll have to arrest you for breaking Moore's Law"

    Intel exec : "Oh noes!"
  • If you can come up with a technology that continues Moore's Law beyond the limits of silicon, you will make a lot of money, and everyone knows this. That is why Intel, HP, IBM, etc. are investing billions in this kind of research. I don't have any complaints about the NSF funding this research, but it's a little insignificant compared to the rest of the market-funded research going on, and I think this money could be better spent on things the free market is not already funding.
  • Everyone in the computing industry these days seems to be mesmerized by multi-core computers. What about improving single-core performance? Many papers published back in the 1970s showed that multiple cores are really not effective at improving the performance of a single application once it has more than 1% non-parallelizable code. Moreover, as the number of cores increases, it has been shown that thread management starts consuming the majority of the system's time, not the actual app
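    The 1% figure is essentially Amdahl's law; a quick sketch of the ceiling it implies (the core counts are just examples):

    # Amdahl's law: with serial fraction s, speedup can never exceed 1/s,
    # no matter how many cores you add (and the thread-management overhead
    # the parent mentions only makes things worse).
    def amdahl_speedup(serial_fraction, cores):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    s = 0.01  # 1% of the work cannot be parallelized
    for cores in (2, 16, 64, 1024):
        print(f"{cores:4d} cores -> {amdahl_speedup(s, cores):5.1f}x speedup")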
    • Reminds me of PATA and SATA. We are moving from PATA to SATA because it's faster to just run through all the instructions in one go (single core) than to attempt to bring all the different answers together at the end of the I/O operation (PATA). I would rather have a single 32GHz CPU than a 1GHz CPU with 32 cores, and I can bet it will run all my real-world apps a hell of a lot faster.
  • Being able to automate the task of sending threads off to various cores is pretty much an impossibility. The number of exceptions to any set of rules available to the compiler, or even to a runtime environment with managed code, would be so large that the MCP would be in a constantly busy state just figuring out whether it was possible to send various threads off to n cores, much less keeping all the threads synced and sorting out the wait times for the various threads on various CPUs to finally all be finished

  • What happened to gallium arsenide technology? It's supposed to be 10 times faster than silicon.

    And what about silicon germanium?
