
End of Moore's Law Forcing Radical Innovation

dcblogs writes "The technology industry has been coasting along on steady, predictable performance gains, as laid out by Moore's law. But stability and predictability are also the ingredients of complacency and inertia. At this stage, Moore's Law may be more analogous to golden handcuffs than to innovation. With its end in sight, systems makers and governments are being challenged to come up with new materials and architectures. The European Commission has written of a need for 'radical innovation in many computing technologies.' The U.S. National Science Foundation, in a recent budget request, said technologies such as carbon nanotube digital circuits will likely be needed, or perhaps molecular-based approaches, including biologically inspired systems. The slowdown in Moore's Law has already hit high-performance computing. Marc Snir, director of the Mathematics and Computer Science Division at the Argonne National Laboratory, outlined in a series of slides the problem of going below 7nm on chips, and the lack of alternative technologies."
This discussion has been archived. No new comments can be posted.

  • by JoshuaZ ( 1134087 ) on Wednesday January 08, 2014 @01:37AM (#45895127) Homepage

    This is ok. For many purposes, software improvements, in the form of new algorithms that are faster and use less memory, have done more for heavy-duty computation than hardware improvements have. Between 1988 and 2003, linear programming on a standard benchmark improved by a factor of about 40 million. Of that improvement, a factor of about 40,000 came from software and only about 1,000 from hardware (these numbers are partially not well-defined, because there's some interaction between how one optimizes software for hardware and the reverse). See this report []. Similar remarks apply to integer factorization and a variety of other important problems.

    The other important issue related to this is that improvements in algorithms provide ever-growing returns, because they can actually improve the asymptotics, whereas any hardware improvement is a one-time event. And for many practical algorithms, asymptotic improvements are still occurring. Just a few days ago a new algorithm was published that is much more efficient at approximating max cut on undirected graphs. See [].

    If all forms of hardware improvement stopped today, there would still be massive improvement in the next few years on what we can do with computers simply from the algorithms and software improvements.
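    The asymptotics point can be illustrated with a toy example (a hypothetical sketch, unrelated to the linear-programming benchmark in the report): the same "does any pair sum to a target?" question answered by an O(n^2) scan of all pairs versus an O(n) pass with a hash set. Doubling the hardware speed halves the quadratic version's runtime once; the algorithmic change improves it by a factor that grows with n.

```python
def pair_sum_quadratic(nums, target):
    """O(n^2): check every pair explicitly."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False

def pair_sum_linear(nums, target):
    """O(n): remember values seen so far and look up each complement."""
    seen = set()
    for x in nums:
        if target - x in seen:
            return True
        seen.add(x)
    return False
```

Both functions agree on every input; only the growth rate of the work differs.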

  • Implications (Score:4, Interesting)

    by Animats ( 122034 ) on Wednesday January 08, 2014 @01:38AM (#45895131) Homepage

    Some implications:

    • We're going to see more machines that look like clusters on a chip. We need new operating systems to manage such machines. Things that are more like cloud farm managers, parceling out the work to the compute farm.
    • Operating systems and languages will need to get better at interprocess and inter-machine communication. We're going to see more machines that don't have shared memory but do have fast interconnects. Marshalling and interprocess calls need to get much faster and better. Languages will need compile-time code generation for marshalling. Programming for multiple machines has to be part of the language, not a library.
    • We'll probably see a few more "build it and they will come" architectures like the Cell. Most of them will fail. Maybe we'll see a win.
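    The marshalling point above can be sketched with a toy fixed wire format (a hypothetical layout, not any particular RPC system): when the message shape is known ahead of time, the pack/unpack code can be generated once, so no per-field reflection happens on each call.

```python
import struct

# Hypothetical wire format for one RPC message: opcode (u16) + argument (f64),
# little-endian. The Struct object is the "generated" marshalling code.
MSG = struct.Struct("<Hd")

def marshal(opcode, arg):
    """Serialize a call into a fixed-size byte buffer."""
    return MSG.pack(opcode, arg)

def unmarshal(buf):
    """Deserialize the buffer back into (opcode, arg)."""
    return MSG.unpack(buf)
```

In a language with compile-time code generation, the equivalent pack/unpack routines would be emitted by the compiler from the message declaration, which is the direction the comment argues for.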
  • Re:Rock Star coders! (Score:5, Interesting)

    by lgw ( 121541 ) on Wednesday January 08, 2014 @01:43AM (#45895165) Journal

    I think the next couple of decades will be mostly about efficiency. Between mobile computing and the advantage of ever-more cores, the benefits from lower power consumption (and the reduced heat load that comes with it) will be huge. And unlike element size, we're far from the basic physical limits on efficiency.

  • by jones_supa ( 887896 ) on Wednesday January 08, 2014 @01:50AM (#45895209)

    Would you rather that your CPU and memory were always underutilized by software, going to waste?

    Of course, because then we would either save in power consumption or alternatively do more interesting stuff with the extra free resources that we get.

  • Moore's "law" & AI (Score:3, Interesting)

    by globaljustin ( 574257 ) on Wednesday January 08, 2014 @01:52AM (#45895217) Journal

    In my mind it was an interesting statistical coincidence, *when it was first discussed*.

    Then the hype took over, and we know what happens when tech and hype meet up...

    Out-of-touch CEOs get harebrained ideas from non-tech marketing people about what makes a product sell, then the marketing people dictate to the product managers what benchmarks they have to hit... then the new product is developed, and any regular /. reader knows the rest.

    It's bunk. We need to dispel these kinds of errors in language instead of perpetuating them, because it has tangible effects on the engineers in the lab who actually do the damn work.

    Part of what made the Moore's "Law" meme so sticky is how it was used, usually in a simple line graph, by "futurists" who can barely check their own email, to pen melodramatic, overhyped predictions about *when* we would have 'AI'.

    AI hype is tied to computer performance, and Moore's "Law" was something air-headed journalists could easily source, complete with a nice graph from a tech "expert".

    I know my view of AI as a fiction is in the minority, but IMHO we need to grow up, stop with the reductive notion that computing is progressing towards some kind of 'AI' singularity and focus on making things that help people do work or play.

    Our industry loses **BILLIONS** of dollars and hundreds of thousands of work-hours chasing a fiction when we could be making more useful, powerful, and imaginative things that meet actual, real-world human needs.

    To bring this back to Moore's Law, let's work on better explaining the value of tech to non-techies. Let's give air-headed journalists something to sink their teeth into that will help our industry progress, not play the bullshit/hype game like every other industry.

  • by radarskiy ( 2874255 ) on Wednesday January 08, 2014 @01:54AM (#45895229)

    The defining characteristic of the 7nm node is that it's the one after the 10nm node. I can't remember the last time I worked in a process where there was a notable dimension that matched the node name, either drawn or effective.

    Marc Snir gets bogged down in an analysis of gate length reduction, which is quite beside the point. If it gets harder to shrink the gate than to do something else, then something else will be done. I've worked on processes with the same gate length as the "previous" process, and I've probably even worked on a process that had a larger gate than the previous process. The device density still increased, since gate length is not the only dimension.

  • by gweihir ( 88907 ) on Wednesday January 08, 2014 @02:10AM (#45895317)

    As somebody who has watched what has been going on in that particular area for more than two decades, I do not expect anything to come out of it. FPGAs are suitable for doing very simple things reasonably fast, but so are graphics cards, and with a much better interface. But as soon as communication between computing elements or large memory is required, both FPGAs and graphics cards become abysmally slow in comparison to modern CPUs. That is not going to change, as it is an effect of the architecture. There will not be any "massive" performance increase anywhere now.

  • by Taco Cowboy ( 5327 ) on Wednesday January 08, 2014 @03:06AM (#45895511) Journal

    Between 1988 and 2003, linear programmng on a standard benchmark improved by a factor of about 40 million. Out of that improvement, about 40,000 was from improvements in software and only about 1000 in hardware improvements (these numbers are partially not well-defined because there's some interaction between how one optimizes software for hardware and the reverse).

    I downloaded the report at the link that you have so generously provided - - but I found the figures somewhat misleading

    In the field of numerical algorithms, however, the improvement can be quantified. Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later, in 2003, this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.

    Professor Grötschel's citation was in regard to "numerical algorithms", and no doubt there have been some great improvements achieved thanks to new algorithms. But that is just one tiny aspect of the whole spectrum of the programming scene.

    Outside that tiny segment of numerical crunching, bloatware has emerged everywhere.

    While hardware speed has accelerated 1,000x (as claimed by the kind professor), the speed of software in solving the myriad of other problems hasn't exactly kept up.

    I have invested more than 30 years of my life in the tech field, and compared to what we achieved in software back in the late 1970s, what we have today is astoundingly disappointing.

    Back then, RAM was counted in KB, and storage in MB was considered "HUGE".

    We had to squeeze every single ounce of performance out of our programs just to make them run at a decent speed.

    No matter whether it was a game of Pong or numerical analysis, everything had to be considered, and more often than not we got down to the machine level (yes, we coded one step below assembly language) to minimize the "waste", counting each and every single cycle.

    Yes, many of the younger generation will look at us as though we old farts are crazy, but our quest to fight against hardware limitations was, at least to those of us who went through it all, extremely stimulating.

  • by InvalidError ( 771317 ) on Wednesday January 08, 2014 @03:08AM (#45895517)

    Programming FPGAs is far more complex than programming GPGPUs and you would need a huge FPGA to match the compute performance available on $500 GPUs today. FPGAs are nice for arbitrary logic such as switch fabric in large routers or massively pipelined computations in software-defined radios but for general-purpose computations, GPGPU is a much cheaper and simpler option that is already available on many modern CPUs and SoCs.

  • Re:Rock Star coders! (Score:5, Interesting)

    by Forever Wondering ( 2506940 ) on Wednesday January 08, 2014 @03:16AM (#45895535)

    There was an article not too long ago (can't remember where) that mentioned that a lot of the performance improvement over the years came from better algorithms rather than faster chips (e.g. one can double the processor speed, but that pales next to changing an O(n^2) algorithm to an O(n log n) one).
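    A back-of-the-envelope calculation (illustrative numbers, not from the article) shows why the parenthetical holds: at n of one million, the operation-count ratio between n^2 and n log n work is in the tens of thousands, while doubling the clock only ever buys a factor of 2.

```python
import math

n = 1_000_000

# Rough operation counts for the two algorithm classes at this input size.
ops_quadratic = n ** 2            # O(n^2) algorithm
ops_nlogn = n * math.log2(n)      # O(n log n) algorithm

# Speedup from the algorithmic change alone, versus 2x from a faster clock.
speedup = ops_quadratic / ops_nlogn
```

Constant factors in real code shift this number, but not the conclusion: the algorithmic factor grows without bound as n grows, and the hardware factor doesn't.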

    SSDs based on flash aren't the ultimate answer. Ones that use either magneto-resistive memory or ferroelectric memory show more long-term promise (e.g. MRAM can switch as fast as L2 cache, faster than DRAM but with the same cell size). With near-unlimited memory at that speed, a number of multistep operations can be converted to a single table lookup. This is done frequently in custom logic, where the logic is replaced with a fast SRAM lookup table (LUT).
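    The trade-a-computation-for-a-lookup idea can be sketched with a classic example (my illustration, not from the comment): population count. Instead of looping over all 32 bits, precompute an 8-bit table once and answer any 32-bit query with four lookups and three adds, which is exactly the role a fast SRAM LUT plays in custom logic.

```python
# Precompute the popcount of every possible byte value, once.
TABLE = [bin(i).count("1") for i in range(256)]

def popcount32(x):
    """Number of set bits in a 32-bit word: four table lookups, no bit loop."""
    return (TABLE[x & 0xFF]
            + TABLE[(x >> 8) & 0xFF]
            + TABLE[(x >> 16) & 0xFF]
            + TABLE[(x >> 24) & 0xFF])
```

With cheap enough fast memory the table can grow (16-bit or wider indices), converting ever larger multistep computations into single lookups.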

    Storage systems (e.g. NAS/SAN) can be parallelized but the limiting factor is still memory bus bandwidth [even with many parallel memory buses].

    Multicore chips that use N-way mesh topologies might also help. Data is communicated via a data channel that doesn't need to dump to an intermediate shared buffer.

    Or hybrid cells that have a CPU but also programmable custom logic attached directly. That is, part of the algorithm gets compiled to RTL that can then be loaded into the custom logic as fast as a task switch (e.g. on every OS reschedule). This is why realtime video encoders use FPGAs: they can encode video at 30-120 fps in real time, where a multicore software solution might be 100x slower.

  • by Katatsumuri ( 1137173 ) on Wednesday January 08, 2014 @08:56AM (#45896739)

    It's true that we may not see another 90s-style MHz race on our desktops. But there is an ongoing need for faster, bigger, better supercomputers and datacenters, and there is technology that can help there. I did quote some examples where this technology is touching the market already. And once it is adopted and refined by government agencies and big-data companies, it will also trickle down into the consumer market.

    I/O will get much faster. Storage will get much bigger. Computing cores may still become faster or more energy-efficient. New specialized co-processors may become common, for example for neural networks (NN) or quantum computing (QC). Then some of them may get integrated, as happened with FPUs and GPUs. So computing will most likely improve in different ways than before, but it is still going to develop fast and remain exciting.

    And some technology may stay out of the consumer market, similar to your supersonic flight example, but it will still benefit the society.

  • Re:Rock Star coders! (Score:4, Interesting)

    by ultranova ( 717540 ) on Wednesday January 08, 2014 @08:47PM (#45903263)

    Yes, today, but really there hasn't been much focus on anything except ways to reduce element size for decades.

    True but misleading. A smaller element has less surface area through which to dissipate its heat, so it must either generate less heat or run hotter. And silicon ran into the upper limits of the material years ago, which is why processors have required active cooling for a long time now. But that has practical limits, so the TDP of new flagship processors stays around the same (100-200 W), and after a few generations their tech gets recycled into a power-optimized model.

    In other words, you can't reduce element size without also reducing power usage or the damn thing will melt.
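    The scaling argument above can be put into rough numbers (a purely illustrative sketch with made-up figures, not measured data): shrinking linear feature size by a factor s packs 1/s^2 as many transistors into the same die area, so unless power per transistor also falls by s^2, power density rises.

```python
def power_density_ratio(shrink, per_transistor_power_ratio):
    """Ratio of new to old power density at a fixed die area.

    shrink: linear feature-size ratio (e.g. 0.7 for a classic node step)
    per_transistor_power_ratio: new/old power drawn per transistor
    Transistor count per unit area scales as 1/shrink**2.
    """
    return per_transistor_power_ratio / (shrink ** 2)

# Classic Dennard-style scaling: per-transistor power fell with area (~s^2),
# so power density stayed flat across node steps.
flat = power_density_ratio(0.7, 0.49)

# Post-Dennard regime: supply voltage stopped scaling, so per-transistor
# power falls more slowly and the chip runs hotter per unit area.
hotter = power_density_ratio(0.7, 0.8)
```

With the hypothetical 0.8x per-transistor figure, each shrink multiplies power density by roughly 1.6x, which is exactly why TDP caps now force either lower power per element or thermal throttling.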
