
End of Moore's Law Forcing Radical Innovation

dcblogs writes "The technology industry has been coasting along on steady, predictable performance gains, as laid out by Moore's law. But stability and predictability are also the ingredients of complacency and inertia. At this stage, Moore's Law may be more analogous to golden handcuffs than to innovation. With its end in sight, systems makers and governments are being challenged to come up with new materials and architectures. The European Commission has written of a need for 'radical innovation in many computing technologies.' The U.S. National Science Foundation, in a recent budget request, said technologies such as carbon nanotube digital circuits will likely be needed, or perhaps molecular-based approaches, including biologically inspired systems. The slowdown in Moore's Law has already hit high-performance computing. Marc Snir, director of the Mathematics and Computer Science Division at the Argonne National Laboratory, outlined in a series of slides the problem of going below 7nm on chips, and the lack of alternative technologies."
This discussion has been archived. No new comments can be posted.

  • Rock Star coders! (Score:5, Insightful)

    by Anonymous Coward on Wednesday January 08, 2014 @01:15AM (#45895001)

    The party's over. Get to work on efficient code. As for the rest of all you mothafucking coding wannabes, suck it! Swallow it. Like it! Whatever, just go away.

  • by Osgeld ( 1900440 ) on Wednesday January 08, 2014 @01:16AM (#45895007)

    It's more of a prediction that has mostly stayed on target because of its challenging nature.

  • by ka9dgx ( 72702 ) on Wednesday January 08, 2014 @01:16AM (#45895013) Homepage Journal

    Now the blind ants (researchers) will need to explore more of the tree (the computing problem space)... there are many fruits out there yet to be discovered; this is just the end of the very easy fruit. I happen to believe that FPGAs can be made much more powerful because of some premature optimization. Time will tell if I'm right or wrong.

  • by Taco Cowboy ( 5327 ) on Wednesday January 08, 2014 @01:17AM (#45895019) Journal

    The really sad thing about Moore's Law is that while the hardware has kept getting faster and more power efficient, the software that runs on it has kept becoming more and more bloated.

    Back in the pre-8088 days we already had music notation software running on the Radio Shack TRS-80 Model III.

    Back then, due to the constraints of the hardware, programmers had to use every trick in the book (and off it) to make their programs run.

    Nowadays, even the most basic "Hello World" program weighs in at the megabyte range.

    Sigh!

  • And best of all... (Score:5, Insightful)

    by pushing-robot ( 1037830 ) on Wednesday January 08, 2014 @01:18AM (#45895023)

    We might even stop writing everything in Javascript?

  • by lister king of smeg ( 2481612 ) on Wednesday January 08, 2014 @01:32AM (#45895099)

    Would you rather that your CPU and memory were always underutilized by software, going to waste?

    Yes, more efficient and faster code would be much better.

  • by bloodhawk ( 813939 ) on Wednesday January 08, 2014 @01:32AM (#45895101)
    I don't find that a sad thing at all. The fact that people have to spend far less effort on code to make something that works is a fantastic thing; it has opened up programming to millions of people who would never have been able to cope with the complex tricks we used to play to save every byte of memory and prune every line of code. This doesn't mean you can't still do those things, and I regularly do when writing server-side code. But why spend man-years of effort optimizing memory, CPU, and disk footprint when the average machine has an abundant surplus of all three?
  • by Z00L00K ( 682162 ) on Wednesday January 08, 2014 @01:38AM (#45895143) Homepage Journal

    Efficient code and new ways to solve computing problems using massive multi-core solutions.

    However, many "problems" with performance today are I/O-bound, not computation-bound. It's time for storage systems to catch up with processors in performance, and with SSDs they are on the way.

  • by MightyMartian ( 840721 ) on Wednesday January 08, 2014 @01:45AM (#45895173) Journal

    Pointless cycles burned by poor code and missing compiler optimizations are hardly what I would call "utilization".

  • by jarfil ( 1341877 ) on Wednesday January 08, 2014 @02:12AM (#45895327) Homepage

    How about spending 20x the man-hours for a 10,000% performance gain? I've recently experienced exactly that, in reverse: an embedded device interface was rewritten to require 20x fewer man-hours to maintain... at a 100x performance hit. Suffice it to say it went from quite snappy to completely useless, but apparently it's my fault for not upgrading the hardware.

  • by K. S. Kyosuke ( 729550 ) on Wednesday January 08, 2014 @02:35AM (#45895425)
    There hasn't been a computing power revolution for quite some time now. All the recent development has been rather evolutionary.
  • by Anonymous Coward on Wednesday January 08, 2014 @03:46AM (#45895627)

    (e.g. one can double the processor speed, but that pales next to changing an O(n**2) algorithm to an O(n*log(n)) one).

    In some cases. There are also a lot of cases where overly complex functions are used to manage lists that usually contain three or four items and never reach ten.
    Analytical optimization is great when it can be applied, but just because one has more than one data item to work on doesn't automatically mean that an O(n*log(n)) solution will beat an O(n**2) one. The O(n**2) solutions are often faster per iteration, so it is a good idea to consider how many items one will usually be working with.
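    To make the constant-factor point concrete, here is a small, self-contained Python sketch (function names are illustrative, not from any library): both sorts below are correct, but the O(n**2) insertion sort carries so little per-step overhead that it typically wins on lists of a handful of items, which is why CPython's own Timsort falls back to insertion sort for short runs.

```python
import random

def insertion_sort(items):
    """O(n**2) worst case, but a tiny constant factor: no allocation,
    no recursion, just adjacent swaps within one list."""
    out = list(items)
    for i in range(1, len(out)):
        j = i
        while j > 0 and out[j - 1] > out[j]:
            out[j - 1], out[j] = out[j], out[j - 1]
            j -= 1
    return out

def merge_sort(items):
    """O(n*log(n)), but pays allocation and recursion overhead on
    every call, which dominates for very short inputs."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

# Both agree with the built-in sort; which is faster depends on n.
data = [random.randrange(100) for _ in range(8)]
assert insertion_sort(data) == merge_sort(data) == sorted(data)
```

    Timing either function on 4-item versus 100,000-item inputs makes the crossover visible: asymptotic complexity only starts to matter once n is large enough to outweigh the constants.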

  • by lister king of smeg ( 2481612 ) on Wednesday January 08, 2014 @04:17AM (#45895753)

    Would you rather that your CPU and memory were always underutilized by software, going to waste?

    Yes, more efficient and faster code would be much better.

    Then you should be using a 20-year-old computer, with its lean software and scarce resources. Why buy more powerful hardware if you have no use for its capabilities? The rest of us will keep using hardware that allows possibilities that were unheard of a few years ago.

    To quote an old adage: "What Moore's law giveth, Gates taketh away."

    I would prefer to use lean software on powerful hardware, so as to actually gain the advantages of said hardware rather than let bad code and bloat roll back the advantages new hardware has given us.

  • by Anonymous Coward on Wednesday January 08, 2014 @04:32AM (#45895795)

    We might even stop writing everything in Javascript?

    Indeed. JavaScript is the assembly language of the future, and we need to stop coding in it directly. There are already many nicer languages that are compiled into JavaScript, ready for execution in any computing environment.

    You were modded insightful rather than funny. I weep for the future.

  • by gweihir ( 88907 ) on Wednesday January 08, 2014 @05:54AM (#45896035)

    You may see them, but no actual expert in the field does.

    - 3D chips are decades old and have never materialized. They do not really solve the interconnect problem either and come with a host of other unsolved problems.
    - Memristors do not enable any new approach to computing, as there are neither many problems that would benefit from this approach, nor tools. The whole idea is nonsense at this time. Maybe they will have some future as storage, but not anytime soon.
    - Photonics is a dead-end. Copper is far too good and far too cheap in comparison.
    - Spintronics is old and has no real potential for ever working at this time.
    - Quantum computing is basically a scam perpetrated by some part of the academic community to get funding. It is not even clear whether it is possible for any meaningful size of problem.

    So, no. There really is nothing here.

  • by TheRaven64 ( 641858 ) on Wednesday January 08, 2014 @06:32AM (#45896149) Journal

    You can still write software that efficient today. The downside is that you can only write software that efficient if you're willing to have it be about that complex too. Do you want your notes application to just store data directly on a single disk from a single manufacturer, or would you rather have an OS that abstracts the details of the device and provides a filesystem? Do you want the notes software to just dump the contents of memory, or do you want it to store things in a file format that is amenable to other programs reading it? Do you want it to just handle plain text for lyrics, or would you like it to handle formatting? What about Unicode? Do you want it to be able to render the text in a nice clean antialiased way with proportional spacing, or are you happy with fixed-width bitmap fonts (which may or may not look terrible, depending on your display resolution)? The same applies to the notes themselves. Do you want it to be able to produce PostScript for high-quality printing, or are you happy for it to just dump the low-quality screen version as a bitmap? Do you want it to do wavetable-based MIDI synthesis or are you happy with just beeps?

    The reason modern software is bigger is that it does a hell of a lot more. If you spent as much effort on every line of code in a program with all of the features that modern users expect as you did in something where you could put the printouts of the entire codebase on your office wall, you'd never be finished and you'd never find customers willing to pay the amount it would cost.

  • by Katatsumuri ( 1137173 ) on Wednesday January 08, 2014 @07:00AM (#45896253)
    It may not be an instant revolution that's already done, but some work really is in progress.

    - 3D chips are decades old and have never materialized.

    24-layer flash chips are currently produced [arstechnica.com] by Samsung. IBM is working on 3D chip cooling. [ibm.com] Just because it "never materialized" before doesn't mean it won't happen now.

    - Memristors do not enable any new approach to computing, as there are neither many problems that would benefit from this approach, nor tools. The whole idea is nonsense at this time. Maybe they will have some future as storage, but not anytime soon.

    Memristors are great for neural network (NN) modelling. MoNETA [bu.edu] is one of the first big neural modelling projects to use memristors for that. I do not consider NNs a magic solution to everything, but you must admit they have plenty of applications in computationally expensive tasks.

    And while HP reconsidered its previous plans [wired.com] to offer memristor-based memory by 2014, they still want to ship it by 2018. [theregister.co.uk]

    - Photonics is a dead-end. Copper is far too good and far too cheap in comparison.

    Maybe fully photonic-based CPUs are way off, but at least for specialized use there are already photonic integrated circuits [wikipedia.org] with hundreds of functions on a chip.

    - Spintronics is old and has no real potential for ever working at this time.

    MRAM [wikipedia.org] uses electron spin to store data and is coming to market. Application of spintronics for general computing may be a bit further off in the future, but "no potential" is an overstatement.

    - Quantum computing is basically a scam perpetrated by some part of the academic community to get funding. It is not even clear whether it is possible for any meaningful size of problem.

    NASA, Google [livescience.com] and NSA [bbc.co.uk], among others, think otherwise.

    So, no. There really is nothing here.

    I respectfully disagree. We definitely have something.

  • by martin-boundary ( 547041 ) on Wednesday January 08, 2014 @09:10AM (#45896821)

    However many "problems" with performance today are I/O-based and not calculation based. It's time for the storage systems to catch up [...]

    Meh, that's a cop-out. I/O has always been a bottleneck. Why do you think Knuth spent so much time optimizing sorting algorithms for tapes? It's not a new issue; solve it by changing your algorithm (i.e. the calculation).

    The current generation of programmers is so used to doing cookie-cutter work, gluing together lower-level libraries they don't understand in essentially trivial ways, that when they face an actual mismatch between the problem and the assumptions of the lower-level code, there is nothing they know how to do. Here's a hint: throw away the lower-level crutches and design a scalable solution from scratch. Most problems can be solved in many ways, and the solution that uses your favourite framework is almost never the most efficient.

    /rant
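    The tape-era technique the comment alludes to still applies to disks: when the data doesn't fit in memory, sort it in memory-sized chunks, spill each sorted chunk to a file, then k-way merge the spills, reading each file sequentially. A minimal Python sketch of that external merge sort (the function name and chunk size are illustrative, not from any framework):

```python
import heapq, os, tempfile

def external_sort(lines, chunk_size=1000):
    """Sort an iterable of text lines that may not fit in memory:
    sort fixed-size chunks, spill each to a temp file, k-way merge."""
    chunk, paths = [], []

    def spill(chunk):
        # Write one sorted chunk to its own temporary file.
        fd, path = tempfile.mkstemp(text=True)
        with os.fdopen(fd, "w") as f:
            f.writelines(sorted(chunk))
        paths.append(path)

    for line in lines:
        chunk.append(line)
        if len(chunk) >= chunk_size:
            spill(chunk)
            chunk = []
    if chunk:
        spill(chunk)

    # heapq.merge streams the files: only one line per file in memory.
    files = [open(p) for p in paths]
    try:
        yield from heapq.merge(*files)
    finally:
        for f in files:
            f.close()
        for p in paths:
            os.unlink(p)

# Usage: tiny chunk_size forces several spill files.
data = [f"{n:05d}\n" for n in (42, 7, 99, 3, 15)]
result = list(external_sort(data, chunk_size=2))
assert result == sorted(data)
```

    The point of the design is that every pass is sequential I/O, which is exactly what tapes demanded and what spinning disks and SSDs still reward.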

  • by ShanghaiBill ( 739463 ) on Wednesday January 08, 2014 @10:35AM (#45897381)

    I never asked for anyone to put a 10-million app capability on my phone

    Yet you bought a phone with that capability.

    or any of the other 37 now-standard features that suck the life out of my phone battery

    You can buy a dumb phone with a battery that lasts a week or more, for a lot less than you paid for your smart phone.

    The only thing we're better at, is greed feeding excess.

    It was silly of you to pay extra for features that you didn't want. It is even sillier to then whine that you were somehow a victim of "greed".
