Hardware Technology

End of Moore's Law Forcing Radical Innovation

dcblogs writes "The technology industry has been coasting along on steady, predictable performance gains, as laid out by Moore's law. But stability and predictability are also the ingredients of complacency and inertia. At this stage, Moore's Law may be more analogous to golden handcuffs than to innovation. With its end in sight, systems makers and governments are being challenged to come up with new materials and architectures. The European Commission has written of a need for 'radical innovation in many computing technologies.' The U.S. National Science Foundation, in a recent budget request, said technologies such as carbon nanotube digital circuits will likely be needed, or perhaps molecular-based approaches, including biologically inspired systems. The slowdown in Moore's Law has already hit high-performance computing. Marc Snir, director of the Mathematics and Computer Science Division at the Argonne National Laboratory, outlined in a series of slides the problem of going below 7nm on chips, and the lack of alternative technologies."

  • Rock Star coders! (Score:5, Insightful)

    by Anonymous Coward on Wednesday January 08, 2014 @12:15AM (#45895001)

    The party's over. Get to work on efficient code. As for the rest of all you mothafucking coding wannabes, suck it! Swallow it. Like it! Whatever, just go away.

    • by Z00L00K ( 682162 ) on Wednesday January 08, 2014 @12:38AM (#45895143) Homepage Journal

      Efficient code and new ways to solve computing problems using massive multi-core solutions.

      However, many "problems" with performance today are I/O-based and not calculation-based. It's time for the storage systems to catch up in performance with the processors, and they are on their way with SSDs.

      • Re:Rock Star coders! (Score:5, Interesting)

        by lgw ( 121541 ) on Wednesday January 08, 2014 @12:43AM (#45895165) Journal

        I think the next couple of decades will be mostly about efficiency. Between mobile computing and the advantage of ever-more cores, the benefits from lower power consumption (and reduced heat load as a result) will be huge. And unlike element size, we're far from basic physical limits on efficiency.

        • But efficiency is largely based on element size.
        • I think the next couple of decades will be mostly about efficiency. Between mobile computing and the advantage of ever-more cores, the benefits from lower power consumption (and reduced heat load as a result) will be huge. And unlike element size, we're far from basic physical limits on efficiency.

          Efficiency in consumer products will have to outweigh greed first.

          I never asked for anyone to put a 10-million app capability on my phone, or any of the other 37 now-standard features that suck the life out of my phone battery just by looking at it.

          If today's smartphone hardware had to run with functions circa 10 years ago, the batteries would likely last for weeks. Our technology even today is far better than we think. The only thing we're better at, is greed feeding excess.

          • by ShanghaiBill ( 739463 ) on Wednesday January 08, 2014 @09:35AM (#45897381)

            I never asked for anyone to put a 10-million app capability on my phone

            Yet you bought a phone with that capability.

            or any of the other 37 now-standard features that suck the life out of my phone battery

            You can buy a dumb phone with a battery that lasts a week or more, for a lot less than you paid for your smart phone.

            The only thing we're better at, is greed feeding excess.

            It was silly of you to pay extra for features that you didn't want. It is even sillier to then whine that you were somehow a victim of "greed".

        • TFA is more about the problem of using commodity parts in high-performance supercomputers, since most of the industry is now more focused on smaller and lower-power chips.
      • by gweihir ( 88907 )

        You still do not get it. There will be no further computing power revolution.

        • by K. S. Kyosuke ( 729550 ) on Wednesday January 08, 2014 @01:35AM (#45895425)
          There hasn't been a computing power revolution for quite some time now. All the recent development has been rather evolutionary.
          • by Katatsumuri ( 1137173 ) on Wednesday January 08, 2014 @03:50AM (#45895857)

            I see many emerging technologies that promise further great progress in computing. Here are some of them. I wish some industry people here could post some updates about their way to market. They may not literally prolong Moore's Law with regard to the number of transistors, but they promise great performance gains, which is what really matters.

            3D chips. As materials science and manufacturing precision advances, we will soon have multi-layered (starting at a few layers that Samsung already has, but up to 1000s) or even fully 3D chips with efficient heat dissipation. This would put the components closer together and streamline the close-range interconnects. Also, this increases "computation per rack unit volume", simplifying some space-related aspects of scaling.

            Memristors. HP is ready to produce the first memristor chips but delays that for business reasons (how sad is that!) Others are also preparing products. Memristor technology enables a new approach to computing, combining memory and computation in one place. They are also quite fast (competitive with the current RAM) and energy-efficient, which means easier cooling and possible 3D layout.

            Photonics. Optical buses are finding their way into computers, and network hardware manufacturers are looking for ways to perform some basic switching directly with light. Some day these two trends may converge to produce an optical computer chip that would be free from the limitations of electrical resistance/heat and EM interference, and could thus operate at a higher clock speed. It would be more energy-efficient, too.

            Spintronics. Probably further in the future, but a potentially very high-density and low-power technology actively developed by IBM, Hynix and a bunch of others. This one would push our computation density and power efficiency limits to another level, as it allows performing some computation using magnetic fields, without electrons actually moving as electrical current (excuse my layman's understanding).

            Quantum computing. This could qualitatively speed up whole classes of tasks, potentially bringing AI and simulation applications to new levels of performance. The only commercial offering so far is D-Wave, and it's not a QC in the classic sense, but so many labs are working on this that results are bound to come soon.

            • by gweihir ( 88907 ) on Wednesday January 08, 2014 @04:54AM (#45896035)

              You may see them, but no actual expert in the field does.

              - 3D chips are decades old and have never materialized. They do not really solve the interconnect problem either and come with a host of other unsolved problems.
              - Memristors do not enable any new approach to computing, as there are neither many problems that would benefit from this approach, nor tools. The whole idea is nonsense at this time. Maybe they will have some future as storage, but not anytime soon.
              - Photonics is a dead-end. Copper is far too good and far too cheap in comparison.
              - Spintronics is old and has no real potential for ever working at this time.
              - Quantum computing is basically a scam perpetrated by some part of the academic community to get funding. It is not even clear whether it is possible for any meaningful size of problem.

              So, no. There really is nothing here.

              • by Katatsumuri ( 1137173 ) on Wednesday January 08, 2014 @06:00AM (#45896253)
                It may not be an instant revolution that's already done, but some work really is in progress.

                - 3D chips are decades old and have never materialized.

                24-layer flash chips are currently produced [arstechnica.com] by Samsung. IBM is working on 3D chip cooling [ibm.com]. Just because it "never materialized" before doesn't mean it won't happen now.

                - Memristors do not enable any new approach to computing, as there are neither many problems that would benefit from this approach, nor tools. The whole idea is nonsense at this time. Maybe they will have some future as storage, but not anytime soon.

                Memristors are great for neural network (NN) modelling. MoNETA [bu.edu] is one of the first big neural modelling projects to use memristors for that. I do not consider NNs a magic solution to everything, but you must admit they have plenty of applications in computation-expensive tasks.

                And while HP reconsidered its previous plans [wired.com] to offer memristor-based memory by 2014, they still want to ship it by 2018. [theregister.co.uk]

                - Photonics is a dead-end. Copper is far too good and far too cheap in comparison.

                Maybe fully photonic-based CPUs are way off, but at least for specialized use there are already photonic integrated circuits [wikipedia.org] with hundreds of functions on a chip.

                - Spintronics is old and has no real potential for ever working at this time.

                MRAM [wikipedia.org] uses electron spin to store data and is coming to market. Application of spintronics for general computing may be a bit further off in the future, but "no potential" is an overstatement.

                - Quantum computing is basically a scam perpetrated by some part of the academic community to get funding. It is not even clear whether it is possible for any meaningful size of problem.

                NASA, Google [livescience.com] and NSA [bbc.co.uk], among others, think otherwise.

                So, no. There really is nothing here.

                I respectfully disagree. We definitely have something.

                • by Kjella ( 173770 )

                  I respectfully disagree. We definitely have something.

                  That there's research into exotic alternatives is fine, but just because they've researched flying cars and fusion reactors for 50 years doesn't mean it will ever materialize or be usable outside a very narrow niche. If we hit the limits of copper there's no telling if any of these will materialize or just continue to be interesting, but overall uneconomical and impractical to use in consumer products. Like for example supersonic flight, it exists but all commercial passengers go on subsonic flights since th

                  • by Katatsumuri ( 1137173 ) on Wednesday January 08, 2014 @07:56AM (#45896739)

                    It's true that we may not see another 90s-style MHz race on our desktops. But there is an ongoing need for faster, bigger, better supercomputers and datacenters, and there is technology that can help there. I did quote some examples where this technology is touching the market already. And once it is adopted and refined by government agencies and big-data companies, it will also trickle down into the consumer market.

                    I/O will get much faster. Storage will get much bigger. Computing cores may still become faster or more energy-efficient. New specialized co-processors may become common, for example for NN or QC. Then some of them may get integrated, as happened with FPUs and GPUs. So computing will most likely improve in different ways than before, but it is still going to develop fast and remain exciting.

                    And some technology may stay out of the consumer market, similar to your supersonic flight example, but it will still benefit society.

              • by interkin3tic ( 1469267 ) on Wednesday January 08, 2014 @11:50AM (#45898679)
                Hmm... yes good points. A bit off topic, but flying machines too are nonsense. No expert in the field sees them happening. People have been talking about flying machines, and we've had balloons for decades, but flying contraptions didn't materialize. And they don't solve any problem really that we don't already have a solution to, but do introduce new problems, like falling. The whole idea is nonsense. It's a dead end too! Ships and trains are far too cheap to ever let flying machines even be competitive. It's old and has no real potential for ever working. It's basically a scam perpetrated by some bike builders, and it's not clear it will ever be useful for any meaningful problem.

                In conclusion, you're right. There's no chance of any revolutionary computing technology coming forward, and there's no chance that humans will ever fly.
          • by gweihir ( 88907 )

            Indeed. Just my point. And that evolution is going slower and slower.

          • Amen. The most used CPU architectures in the world today are directly descended from microcontroller architectures designed in the late 1960s and early 1970s, based on the work of a handful of designers. None of those designers could have planned for or envisaged their chips as being the widely used CPUs of today.

      • Re:Rock Star coders! (Score:5, Interesting)

        by Forever Wondering ( 2506940 ) on Wednesday January 08, 2014 @02:16AM (#45895535)

        There was an article not too long ago (can't remember where) that mentioned that a lot of the performance improvement over the years came from better algorithms rather than faster chips (e.g. one can double the processor speed but that pales with changing an O(n**2) algorithm to O(n*log(n)) one).
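        To make that concrete, here is a small, hypothetical C sketch (mine, not from any article): checking an array for duplicates with an O(n**2) pairwise scan versus sorting a copy first and scanning neighbours in O(n*log(n)). For large n the second version wins by orders of magnitude no matter how fast the chip is.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* O(n^2): compare every pair. */
        static int has_dup_quadratic(const int *a, size_t n) {
            for (size_t i = 0; i < n; i++)
                for (size_t j = i + 1; j < n; j++)
                    if (a[i] == a[j])
                        return 1;
            return 0;
        }

        static int cmp_int(const void *p, const void *q) {
            int x = *(const int *)p, y = *(const int *)q;
            return (x > y) - (x < y);
        }

        /* O(n log n): sort a copy, then any duplicates are adjacent. */
        static int has_dup_sorted(const int *a, size_t n) {
            int *tmp = malloc(n * sizeof *tmp);
            if (!tmp)
                return -1;
            memcpy(tmp, a, n * sizeof *tmp);
            qsort(tmp, n, sizeof *tmp, cmp_int);
            int dup = 0;
            for (size_t i = 1; i < n; i++)
                if (tmp[i] == tmp[i - 1]) { dup = 1; break; }
            free(tmp);
            return dup;
        }

        int main(void) {
            int data[] = { 4, 8, 15, 16, 23, 42, 15 };
            size_t n = sizeof data / sizeof data[0];
            printf("%d %d\n", has_dup_quadratic(data, n), has_dup_sorted(data, n));
            return 0;
        }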

        SSDs based on flash aren't the ultimate answer. Ones that use either magneto-resistive memory or ferroelectric memory show more long-term promise (e.g. MRAM can switch as fast as L2 cache--faster than DRAM but with the same cell size). With near-unlimited memory at that speed, a number of multistep operations can be converted to a single table lookup. This is done a lot in custom logic, where the logic is replaced with a fast SRAM/LUT.
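        A toy illustration, likewise my own, of the "multistep operation becomes a single table lookup" idea in C: counting set bits one bit at a time versus a 256-entry lookup table, the same trade custom logic makes with a fast SRAM/LUT.

        #include <stdint.h>
        #include <stdio.h>

        /* Multistep version: loop over the bits. */
        static unsigned popcount_loop(uint8_t x) {
            unsigned c = 0;
            while (x) { c += x & 1u; x >>= 1; }
            return c;
        }

        /* Table version: one memory access replaces the whole loop. */
        static uint8_t lut[256];

        static void build_lut(void) {
            for (int i = 0; i < 256; i++)
                lut[i] = (uint8_t)popcount_loop((uint8_t)i);
        }

        int main(void) {
            build_lut();
            printf("%u %u\n", popcount_loop(0xB7), lut[0xB7]); /* both print 6 */
            return 0;
        }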

        Storage systems (e.g. NAS/SAN) can be parallelized but the limiting factor is still memory bus bandwidth [even with many parallel memory buses].

        Multicore chips that use N-way mesh topologies might also help. Data is communicated via a data channel that doesn't need to dump to an intermediate shared buffer.

        Or hybrid cells that have a CPU but also have programmable custom logic attached directly. That is, part of the algorithm gets compiled to RTL that can then be loaded into the custom logic just as fast as a task switch (e.g. on every OS reschedule). This is why realtime video encoders use FPGAs. They can encode video at 30-120 fps in real time, but a multicore software solution might be 100x slower.

        • Re: (Score:3, Insightful)

          by Anonymous Coward

          (e.g. one can double the processor speed but that pales with changing an O(n**2) algorithm to O(n*log(n)) one).

          In some cases. There are also a lot of cases where overly complex functions are used to manage lists that usually contain three or four items and never reach ten.
          Analytical optimization is great when it can be applied, but just because one has more than one data item to work on doesn't automatically mean that an O(n*log(n)) solution will beat an O(n**2) one. The O(n**2) solutions will often be faster per iteration, so it is a good idea to consider how many items one will usually work with.
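          The usual compromise looks something like the C sketch below (the cutoff is arbitrary and purely illustrative): keep the cheap-per-iteration O(n**2) sort for tiny inputs and only switch to the O(n*log(n)) routine when n is large.

          #include <stdio.h>
          #include <stdlib.h>

          #define SMALL_N 16  /* arbitrary cutoff; real libraries tune this empirically */

          static void insertion_sort(int *a, size_t n) {
              for (size_t i = 1; i < n; i++) {
                  int key = a[i];
                  size_t j = i;
                  while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; j--; }
                  a[j] = key;
              }
          }

          static int cmp_int(const void *p, const void *q) {
              int x = *(const int *)p, y = *(const int *)q;
              return (x > y) - (x < y);
          }

          /* O(n^2) below the cutoff (cheap per iteration), O(n log n) above it. */
          static void hybrid_sort(int *a, size_t n) {
              if (n <= SMALL_N)
                  insertion_sort(a, n);
              else
                  qsort(a, n, sizeof *a, cmp_int);
          }

          int main(void) {
              int a[] = { 9, 3, 7, 1 };
              hybrid_sort(a, sizeof a / sizeof a[0]);
              for (size_t i = 0; i < sizeof a / sizeof a[0]; i++)
                  printf("%d ", a[i]);
              printf("\n");
              return 0;
          }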

      • by martin-boundary ( 547041 ) on Wednesday January 08, 2014 @08:10AM (#45896821)

        However, many "problems" with performance today are I/O-based and not calculation-based. It's time for the storage systems to catch up [...]

        Meh, that's a copout. IO has always been a bottleneck. Why do you think that Knuth spends so much time optimizing sorting algorithms for tapes? It's not a new issue, solve it by changing your algorithm (aka calculation).

        The current generation of programmers are so used to doing cookie-cutter work, gluing together lower-level libraries that they do not understand in essentially trivial ways, that when they are faced with an actual mismatch between the problem and the assumptions of the lower-level code, there is nothing they know how to do. Here's a hint: throw away the lower-level crutches, and design a scalable solution from scratch. Most problems can be solved in many ways, and the solution that uses your favourite framework is probably never the most efficient.

        /rant

  • by Osgeld ( 1900440 ) on Wednesday January 08, 2014 @12:16AM (#45895007)

    It's more of a prediction that has mostly stayed on target because of its challenging nature.

    • Re: (Score:3, Funny)

      by crutchy ( 1949900 )

      now now don't you go spreading propaganda that laws aint laws... next there will be idiots coming out of the woodwork claiming that einstein may have got it wrong and that it's ok to take a dump whilst being cavity searched by the police

    • In my mind it was an interesting statistical coincidence, *when it was first discussed*

      Then the hype took over, and we know what happens when tech and hype meet up...

      Out-of-touch CEOs get hare-brained ideas from non-tech marketing people about what makes a product sell, then the marketing people dictate to the product managers what benchmarks they have to hit... then the new product is developed, and any regular /. reader knows the rest.

      It's bunk. We need to dispel these kinds of errors in language instead o

      • by fatphil ( 181876 )
        > To bring this back to Moore's Law, let's work on better explaining the value of tech to non-techies.

        We can now do stupid things more quickly and in vaster quantity?
  • by ka9dgx ( 72702 ) on Wednesday January 08, 2014 @12:16AM (#45895013) Homepage Journal

    Now the blind ants (researchers) will need to explore more of the tree (the computing problem space)... there are many fruits out there yet to discover; this is just the end of the very easy fruit. I happen to believe that FPGAs can be made much more powerful because of some premature optimization. Time will tell if I'm right or wrong.

    • So true. I also happen to believe that adding an FPGA coprocessor to general purpose CPUs, that applications could reconfigure on the fly to perform certain tasks, could lead to massive increases in performance.

      • by gweihir ( 88907 ) on Wednesday January 08, 2014 @01:10AM (#45895317)

        As somebody who has watched what has been going on in that particular area for more than 2 decades, I do not expect anything to come out of it. FPGAs are suitable for doing very simple things reasonably fast, but so are graphics cards, and with a much better interface. But as soon as communication between computing elements or large memory is required, both FPGAs and graphics cards become abysmally slow in comparison to modern CPUs. That is not going to change, as it is an effect of the architecture. There will not be any "massive" performance increase anywhere now.

      • by InvalidError ( 771317 ) on Wednesday January 08, 2014 @02:08AM (#45895517)

        Programming FPGAs is far more complex than programming GPGPUs and you would need a huge FPGA to match the compute performance available on $500 GPUs today. FPGAs are nice for arbitrary logic such as switch fabric in large routers or massively pipelined computations in software-defined radios but for general-purpose computations, GPGPU is a much cheaper and simpler option that is already available on many modern CPUs and SoCs.

        • by TheRaven64 ( 641858 ) on Wednesday January 08, 2014 @05:20AM (#45896105) Journal
          Speaking as someone who works with FPGAs on a daily basis and has previously done GPGPU compiler work, that's complete nonsense. If you have an algorithm that:
          • Mostly uses floating point arithmetic
          • Is embarrassingly parallel
          • Has simple memory access patterns
          • Has non-branching flow control

          Then a GPU will typically beat an FPGA solution. There's a pretty large problem space for which GPUs suck. If you have memory access that is predictable but doesn't fit the stride models that a GPU is designed for then an FPGA with a well-designed memory interface and a tenth of the arithmetic performance of the GPU will easily run faster. If you have something where you have a long sequence of operations that map well to a dataflow processor, then an FPGA-based implementation can also be faster, especially if you have a lot of branching.

          Neither is a panacea, but saying a GPU is always faster and cheaper than an FPGA makes as much sense as saying that a GPU is always faster and cheaper than a general-purpose CPU.

      • FPGAs are relatively expensive compared to graphics chips, actually most chips. It's still not clear what FPGAs can accomplish in a general computing platform that will be of value, considering the other lower-cost options available.

        The other half of the problem here is that, in comparison to GPU programming interfaces such as OpenCL and Cuda, there is relatively little effort in bringing FPGA development beyond ASIC-Lite. The tool chains and development processes (what FPGA vendors like to call "design fl

        • by ka9dgx ( 72702 )

          All of this points out what I'm saying... they've optimized for small(ish) systems that have to run very quickly, with a heavy emphasis on "routing fabric" internally. This makes them hard to program, as they are heterogeneous as all get out.

          Imagine a grid of logic cells, a nice big, homogeneous grid that was symmetric. You could route programs in it almost instantly; there'd be no need for custom tools to program it.

          The problem IS the routing grid... it's a premature optimization. And for big scale stuff

    • by crutchy ( 1949900 ) on Wednesday January 08, 2014 @12:53AM (#45895225)

      just need to shoot more advanced alien spaceships down near roswell

  • by Taco Cowboy ( 5327 ) on Wednesday January 08, 2014 @12:17AM (#45895019) Journal

    The really sad thing regarding this "Moore's Law" thing is that, while the hardware has kept on getting faster and even more power-efficient, the software that runs on it has kept on becoming more and more bloated.

    Back in the pre-8088 days we already had music notation software running on the Radio Shack TRS-80 Model III.

    Back then, due to the constraints of the hardware, programmers had to use every trick in the book (and off) to make their programs run.

    Nowadays, even the most basic "Hello World" program comes up in megabyte range.

    Sigh!

    • by bloodhawk ( 813939 ) on Wednesday January 08, 2014 @12:32AM (#45895101)
      I don't find that a sad thing at all. The fact that people have to spend far less effort on code to make something that works is a fantastic thing that has opened up programming to millions of people who would never have been able to cope with the complex tricks we used to play to save every byte of memory and prune every line of code. This doesn't mean you can't do those things, and I still regularly do when writing server-side code. But why spend man-years of effort to optimise memory, CPU and disk footprint when the average machine has an abundant surplus of all three?
      • The sad part is not that it's easier to code, but that many people have grown complacent with their code executing at abysmal performance rates. And I also blame compilers/interpreters that don't mind bloating things up.

      • by MrL0G1C ( 867445 )

        1. Is hello world easier to code now? When I had an 8-bit machine the program was

        (type)
        print "Hello world" (hit enter)

        That didn't seem hard. Now, what steps do you have to go through on a modern PC you've just bought? Not as easy, is it.

        2. Agree with the other poster: why are compilers throwing 99.999% redundant code into the software? Pathetic.

    • Back in the pre-8088 days we already had music notation software running on the Radio Shack TRS-80 Model III.

      In 4-bit color on a 640x480 screen, with ugly fonts (if you could even get more than one!) and lousy, cheap-sounding MIDI playback. Seriously, TRS-80 music notation software was severely limited compared to what we have today.

    • Nowadays, even the most basic "Hello World" program comes up in megabyte range.

      The most basic "Hello World" program doesn't have a GUI (if it has a GUI, you can make it more basic by just printing with printf), so let's see:

      $ ed hello.c
      hello.c: No such file or directory
      a
      #include <stdio.h>

      int
      main(void)
      {
      printf("Hello, world!\n");
      }
      .
      w
      67
      q
      $ gcc -o hello -Os hello.c
      $ size hello
      __TEXT __DATA __OBJC others dec hex
      4096 4096 0 4294971392 4294979584 100003000

      I'm not sure what "others" is, but I suspect there's a bug there (I'll take a look). 4K text, 4K data (that's the pag

      • by u38cg ( 607297 )
        Have you seen GNU's hello.c? Worth a look.
      • On OS X, printf is pretty huge. It is locale aware and so will call localeconv() to find out various things like currency separators. This, in turn, may load character set definitions and so on. For your example, you should use puts(), which will just write the string as-is, without trying to parse it and insert tokens.
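        A minimal side-by-side sketch of that suggestion (standard C only; how much the locale machinery actually costs varies by platform): printf() has to scan its format string for conversion specifiers, while puts() just writes the bytes and appends a newline.

        #include <stdio.h>

        int main(void) {
            /* printf parses the format string looking for %-conversions first... */
            printf("Hello, world!\n");
            /* ...puts writes the string as-is and appends a newline itself. */
            puts("Hello, world!");
            return 0;
        }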
      • by dacut ( 243842 )

        This guy [timelessname.com] managed to get it into 145 bytes (142 on his website, but he printed "Hi World" instead of "Hello world") with no external dependencies.

        The smallest ELF executable I've seen is this 45 byte example [muppetlabs.com]. It doesn't print anything and it violates the ELF standard, but Linux (or at least his version) is still willing to execute it.
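        For flavour, here is a rough sketch of the same sort of trick (mine, not the linked 45-byte example; it assumes Linux on x86-64, built with something like gcc -nostdlib -static): a "Hi World" that skips libc entirely and issues the write and exit system calls directly.

        /* tiny.c - libc-free hello for Linux x86-64 (sketch only) */
        static long sys_call3(long n, long a, long b, long c) {
            long ret;
            __asm__ volatile ("syscall"
                              : "=a"(ret)
                              : "a"(n), "D"(a), "S"(b), "d"(c)
                              : "rcx", "r11", "memory");
            return ret;
        }

        void _start(void) {
            static const char msg[] = "Hi World\n";
            sys_call3(1, 1, (long)msg, sizeof(msg) - 1);  /* write(1, msg, len) */
            sys_call3(60, 0, 0, 0);                       /* exit(0) */
        }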

        That said, there isn't much point in optimizing away libc except as an academic exercise. Yes, it's a few megabytes in size, but it's shared across every running userspace pr

    • by JanneM ( 7445 )

      What do you suggest we do with all the computing power we've gained then? It seems perfectly reasonable to use it to make software development easier and faster, and make more beautiful and more usable user interfaces.

    • Agreed. Seems amazing now that you could get spreadsheets and word processors running in 8 to 48k. The only time in recent years I've seen amazing efficiency was a graphics demo that drew a fully rendered scene using algorithmically generated textures. The demo ran for about 5 minutes, scrolling around the buildings and hills, and was only about 150k.
    • by Greyfox ( 87712 )
      I did a static-compiled hello world program a while back and found that it was only a few kilobytes, which is still a lot but way less than I expected. A C function I wrote to test an assembly language function I also wrote recently came in at about 7000 bytes. That's better...

      There have been several occasions where I've seen a team "solve" a problem by throwing another couple of gigabytes at a Java VM and adding a task to reboot the system every couple of days. I've lost count of the times where simply optimiz

      • There's plenty of room to get more performance out of existing hardware, that's for sure!

        But that's exactly the point... For decades, Moore's Law has made it seem like a crime to optimize software. No matter how inefficient the software, hardware was cheaper than developer time, and the next generation of hardware would be fast enough to run the bloated, inefficient crap.

        The end of Moore's Law... if it actually happens this time, unlike the last 1,000 times it was predicted... will mean good people who

    • by TheRaven64 ( 641858 ) on Wednesday January 08, 2014 @05:32AM (#45896149) Journal

      You can still write software that efficient today. The down side is that you can only write software that efficient if you're willing to have it be about that complex too. Do you want your notes application to just store data directly on a single disk from a single manufacturer, or would you rather have an OS that abstracts the details of the device and provides a filesystem? Do you want the notes software to just dump the contents of memory, or do you want it to store things in a file format that is amenable to other programs reading it? Do you want it to just handle plain text for lyrics, or would you like it to handle formatting? What about unicode? Do you want it to be able to render the text in a nice clean antialiased way with proportional spacing, or are you happy with fixed-width bitmap fonts (which may or may not look terrible, depending on your display resolution)? The same applies to the notes themselves. Do you want it to be able to produce PostScript for high-quality printing, or are you happy for it to just dump the low-quality screen version as a bitmap? Do you want it to do wavetable-based MIDI synthesis or are you happy with just beeps?

      The reason modern software is bigger is that it does a hell of a lot more. If you spent as much effort on every line of code in a program with all of the features that modern users expect as you did in something where you could put the printouts of the entire codebase on your office wall, you'd never be finished and you'd never find customers willing to pay the amount it would cost.

  • And best of all... (Score:5, Insightful)

    by pushing-robot ( 1037830 ) on Wednesday January 08, 2014 @12:18AM (#45895023)

    We might even stop writing everything in Javascript?

    • by Urkki ( 668283 ) on Wednesday January 08, 2014 @01:51AM (#45895481)

      We might even stop writing everything in Javascript?

      Indeed. JavaScript is the assembly language of the future, and we need to stop coding in it. There already are many nicer languages which are then compiled into Javascript, ready for execution in any computing environment.

      • Re: (Score:3, Insightful)

        by Anonymous Coward

        We might even stop writing everything in Javascript?

        Indeed. JavaScript is the assembly language of the future, and we need to stop coding in it. There already are many nicer languages which are then compiled into Javascript, ready for execution in any computing environment.

        You were modded insightful rather than funny. I weep for the future.

  • by xtal ( 49134 ) on Wednesday January 08, 2014 @12:20AM (#45895031)

    ..took us in directions we hadn't considered.

    I forget the exact quote, but what a time to be alive. My first computer program was written on a Vic-20. Watching the industry grow has been incredible. I am not worried about the demise of traditional lithographic techniques; I'm actually expecting the next generation to provide a leap in speed, as now there's a strong incentive to look at different technologies.

    Here's to yet another generation of cheap CPU.

  • We might stop seeing ridiculous gains in computing power, and might have to start making gains in software efficiency.
    • We might stop seeing ridiculous gains in computing power, and might have to start making gains in software efficiency.

      I agree in spirit, but you perpetuate a false dichotomy based on a misunderstanding of *why* software bloat happens, in the broad industry-wide context.

      Just look at Windows. M$ bottlenecked features willfully because it was part of their business plan.

      Coders, the people who actually write the software, have always been up to the efficiency challenge. The problem is the biggest money wasn't pay

  • by JoshuaZ ( 1134087 ) on Wednesday January 08, 2014 @12:37AM (#45895127) Homepage

    This is ok. For many purposes, software improvements in terms of new algorithms that are faster and use less memory have done more for heavy-duty computation than hardware improvement has. Between 1988 and 2003, linear programming on a standard benchmark improved by a factor of about 40 million. Out of that improvement, about 40,000 was from improvements in software and only about 1000 in hardware improvements (these numbers are partially not well-defined because there's some interaction between how one optimizes software for hardware and the reverse). See this report http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-nitrd-report-2010.pdf [whitehouse.gov]. Similar remarks apply to integer factorization and a variety of other important problems.

    The other important issue related to this is that improvements in algorithms provide ever-growing returns because they can actually improve the asymptotics, whereas any hardware improvement is a single event. And for many practical algorithms, asymptotic improvements are still occurring. Just a few days ago a new algorithm was published that was much more efficient for approximating max cut on undirected graphs. See http://arxiv.org/abs/1304.2338 [arxiv.org].

    If all forms of hardware improvement stopped today, there would still be massive improvement in the next few years on what we can do with computers simply from the algorithms and software improvements.

    • Misleading.

      Yes, I've got 100-fold improvements on a single image processing algorithm. It was pretty easy as well.
      However, that only speeds up that one algorithm; 10x faster hardware speeds everything up 10x.

      Use of interpreted languages and bloated code has more than equalled the point gains in algorithms.

      The net result overall is that 'performance' increase has been mostly due to hardware, not software.

    • Re: (Score:3, Interesting)

      by Taco Cowboy ( 5327 )

      Between 1988 and 2003, linear programming on a standard benchmark improved by a factor of about 40 million. Out of that improvement, about 40,000 was from improvements in software and only about 1000 in hardware improvements (these numbers are partially not well-defined because there's some interaction between how one optimizes software for hardware and the reverse).

      I downloaded the report at the link that you have so generously provided - http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-nitrd-report-2010.pdf - but I found the figures somewhat misleading.

      In the field of numerical algorithms, however, the improvement can be quantified. Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later, in 2003, this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.

      Professor Grötschel's citation was in regard to "numerical algorithms", and no doubt there have been some great improvements achieved due to new algorithms. But that is just one tiny aspect of the whole spectrum of the programming scene.

      Out of the tiny segment of the numerical crunching, blo

  • Implications (Score:4, Interesting)

    by Animats ( 122034 ) on Wednesday January 08, 2014 @12:38AM (#45895131) Homepage

    Some implications:

    • We're going to see more machines that look like clusters on a chip. We need new operating systems to manage such machines. Things that are more like cloud farm managers, parceling out the work to the compute farm.
    • Operating systems and languages will need to get better at interprocess and inter-machine communication. We're going to see more machines that don't have shared memory but do have fast interconnects. Marshalling and interprocess calls need to get much faster and better. Languages will need compile-time code generation for marshalling. Programming for multiple machines has to be part of the language, not a library. (A rough hand-written sketch of what such marshalling involves follows this list.)
    • We're going to see more machines that look like clusters on a chip. We need new operating systems to manage such machines. Things that are more like cloud farm managers.
    • We'll probably see a few more "build it and they will come" architectures like the Cell. Most of them will fail. Maybe we'll see a win.
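    As a rough, hand-written sketch of what "compile-time code generation for marshalling" would be automating (the message type and wire format below are invented for illustration), here is the manual version in C: packing a small request struct into a length-prefixed, fixed-endian byte buffer before it crosses a process or machine boundary.

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    struct request {          /* hypothetical message type */
        uint32_t id;
        uint16_t opcode;
        uint16_t flags;
    };

    /* Write fixed-width little-endian fields so both ends agree on the wire format. */
    static void put_u32le(uint8_t *p, uint32_t v) {
        p[0] = (uint8_t)v;         p[1] = (uint8_t)(v >> 8);
        p[2] = (uint8_t)(v >> 16); p[3] = (uint8_t)(v >> 24);
    }
    static void put_u16le(uint8_t *p, uint16_t v) {
        p[0] = (uint8_t)v; p[1] = (uint8_t)(v >> 8);
    }

    /* Marshal one request into a length-prefixed buffer; returns bytes written. */
    static size_t marshal_request(const struct request *r, uint8_t *buf) {
        put_u32le(buf, 8);                /* payload length */
        put_u32le(buf + 4, r->id);
        put_u16le(buf + 8, r->opcode);
        put_u16le(buf + 10, r->flags);
        return 12;
    }

    int main(void) {
        struct request r = { 42, 7, 1 };
        uint8_t buf[12];
        size_t n = marshal_request(&r, buf);
        for (size_t i = 0; i < n; i++)
            printf("%02x ", buf[i]);
        printf("\n");
        return 0;
    }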
    • by dkf ( 304284 )

      Some implications:

      • We're going to see more machines that look like clusters on a chip...
      • We're going to see more machines that look like clusters on a chip...

      Looks like you had a parallelism problem right there!

  • This quote from the article: " But it may be a blessing to say goodbye to a rule that has driven the semiconductor industry since the 1960s." is surely the dumbest thing I've read all day.

    Seriously? It's like, people wake up and say, "it would be such a blessing if I could never get a faster computer." Does that make sense at all?
    • Tell that to the XP holdouts.

      No newer technology means no change, and using the best ever made solely because it is familiar.

      • I can understand the XP holdouts, sort of. They have some features they like or something.

        With Moore's law, we're talking about faster processors. No changes necessary, other than your motherboard. I've never met anyone who was in love with their motherboard.
    • That may have been the dumbest thing you've read all day but to be fair that was before your comment was written.

      The intent behind that sentence seems fairly clear, that the end of predictable speed increases may lead to greater focus on whole other avenues of development and other unpredictable and exciting ideas popping up.
      • The intent behind that sentence seems fairly clear, that the end of predictable speed increases may lead to greater focus on whole other avenues of development and other unpredictable and exciting ideas popping up.

        Yes, why don't we go back to the abacus, and see what new ideas come up!

        Seriously, do you remember when microcomputers came out? Academics complained that they were setting the computer world back three decades. That's basically how you sound.

        • by Urkki ( 668283 )

          Yes, why don't we go back to the abacus, and see what new ideas come up!

          Logical fallacy: nobody is suggesting going back. If you want to take the abacus as an example, then what you are saying is: let's just keep adding beads and rows to our abacuses, and come up with ingenious bead-sliding mechanisms to allow faster and faster movement of beads.

          Now we're soon hitting the diminishing-returns limit of this road; we need to start inventing something else to make calculations faster, and perhaps open up entirely new ways of doing calculations, analogous to what you can do with a slide rule.

  • by radarskiy ( 2874255 ) on Wednesday January 08, 2014 @12:54AM (#45895229)

    The defining characteristic of the 7nm node is that it's the one after the 10nm node. I can't remember the last time I worked in a process where there was a notable dimension that matched the node name, either drawn or effective.

    Marc Snir gets bogged down in an analysis of gate length reduction, which is quite beside the point. If it gets harder to shrink the gate than to do something else, then something else will be done. I've worked on processes with the same gate length as the "previous" process, and I've probably even worked on a process that had a larger gate than the previous process. The device density still increased, since gate length is not the only dimension.

  • If, and just if, anything can be done, governments will not play any significant role in this process. They do not even seem to understand the problem; how could they ever be part of the solution? And that is just it: it is quite possible that there is no solution, or that it may take decades or centuries for that one smart person to be in the right place at the right time. Other than blanket research funding, governments cannot do anything to help that. Instead, scientific funding is today only given to concret

  • Picture a supercomputer so massive that components fail as fast as we can replace them. Now that is a big supercomputer. This is the issue. Supercomputers have a physical limit. If the node power doesn't grow, we will reach a limit on simulation power. It will be interesting to see how the CPU matures. That means more features will be developed beyond raw power.
  • ... time to stop writing garbage in Visual Basic, man up, and use proper languages again that are actually efficient, isn't it?
  • Perhaps writing efficient code will come back into style.

  • The International Technology Roadmap for Semiconductors is published regularly and has information on the maturity of emerging technologies like carbon nanotubes. There are many possibilities for "more than Moore" improvement. http://www.itrs.net/Links/2012ITRS/Home2012.htm [itrs.net]
  • by KonoWatakushi ( 910213 ) on Wednesday January 08, 2014 @05:03AM (#45896071)

    The refinement of process has postponed this for a long while, but the time has come to explore new architectures and technologies. The Mill architecture [ootbcomp.com] is one such example, and aims to bridge the enormous chasm of inefficiency between general purpose CPUs and DSPs. Conservatively, they are expecting a tenfold improvement in performance/W/$ on general purpose code, but the architecture is also well suited to wide MIMD and SIMD.

    Another area ripe for innovation is memory technologies, which have suffered a similar stagnation limited to refinement of an ancient technology. The density of both cache and main memory can be significantly improved on the same process with Thyristor-RAM or Z-RAM. Considering the potential benefits and huge markets, it is vexing that more resources aren't expended toward commercializing better technologies. Some of the newer technologies also scale down better.

    Something to replace the garbage which is NAND flash would also be welcome, yet sadly there appears to be no hurry there either. One point is certain, there is a desperate need to find a way to commercialize better technologies rather than perpetually refining inferior ones. Though examples abound, perhaps none is more urgent than the Liquid fluoride thorium reactor [wikipedia.org]. Molten salt reactors could rapidly replace fossil fuels with clean and abundant energy while minimizing environmental impact, and affordable energy is the basis for all prosperity.
