Hardware Technology

End of Moore's Law Forcing Radical Innovation 275

Posted by Soulskill
from the how-many-transistors-can-dance-on-the-head-of-a-pin dept.
dcblogs writes "The technology industry has been coasting along on steady, predictable performance gains, as laid out by Moore's law. But stability and predictability are also the ingredients of complacency and inertia. At this stage, Moore's Law may be more analogous to golden handcuffs than to innovation. With its end in sight, systems makers and governments are being challenged to come up with new materials and architectures. The European Commission has written of a need for 'radical innovation in many computing technologies.' The U.S. National Science Foundation, in a recent budget request, said technologies such as carbon nanotube digital circuits will likely be needed, or perhaps molecular-based approaches, including biologically inspired systems. The slowdown in Moore's Law has already hit high-performance computing. Marc Snir, director of the Mathematics and Computer Science Division at the Argonne National Laboratory, outlined in a series of slides the problem of going below 7nm on chips, and the lack of alternative technologies."
  • Rock Star coders! (Score:5, Insightful)

    by Anonymous Coward on Wednesday January 08, 2014 @01:15AM (#45895001)

    The party's over. Get to work on efficient code. As for the rest of all you mothafucking coding wannabes, suck it! Swallow it. Like it! Whatever, just go away.

    • by Z00L00K (682162) on Wednesday January 08, 2014 @01:38AM (#45895143) Homepage

      Efficient code and new ways to solve computing problems using massive multi-core solutions.

      However, many "problems" with performance today are I/O-based and not calculation-based. It's time for the storage systems to catch up in performance with the processors, and they are on their way with SSDs.

      • Re:Rock Star coders! (Score:5, Interesting)

        by lgw (121541) on Wednesday January 08, 2014 @01:43AM (#45895165) Journal

        I think the next couple of decades will be mostly about efficiency. Between mobile computing and the advantage of ever-more cores, the benefits from lower power consumption (and reduced heat load as a result) will be huge. And unlike element size, we're far from basic physical limits on efficiency.

        • by jones_supa (887896) on Wednesday January 08, 2014 @01:44AM (#45895169)
          But efficiency is largely based on element size.
        • by geekmux (1040042) on Wednesday January 08, 2014 @07:32AM (#45896387)

          I think the next couple of decades will be mostly about efficiency. Between mobile computing and the advantage of ever-more cores, the benefits from lower power consumption (and reduced heat load as a result) will be huge. And unlike element size, we're far from basic physical limits on efficiency.

          Efficiency in consumer products will have to outweigh greed first.

          I never asked for anyone to put a 10-million app capability on my phone, or any of the other 37 now-standard features that suck the life out of my phone battery just by looking at it.

          If today's smartphone hardware had to run only the functions of circa 10 years ago, the batteries would likely last for weeks. Our technology even today is far better than we think. The only thing we're better at is greed feeding excess.

          • by ShanghaiBill (739463) on Wednesday January 08, 2014 @10:35AM (#45897381)

            I never asked for anyone to put a 10-million app capability on my phone

            Yet you bought a phone with that capability.

            or any of the other 37 now-standard features that suck the life out of my phone battery

            You can buy a dumb phone with a battery that lasts a week or more, for a lot less than you paid for your smart phone.

            The only thing we're better at is greed feeding excess.

            It was silly of you to pay extra for features that you didn't want. It is even sillier to then whine that you were somehow a victim of "greed".

        • by complete loony (663508) <Jeremy,Lakeman&gmail,com> on Wednesday January 08, 2014 @07:46AM (#45896439)
          TFA is more about the problem of using commodity parts in high-performance supercomputers, since most of the industry is now more focused on smaller, lower-power chips.
      • by gweihir (88907) on Wednesday January 08, 2014 @02:05AM (#45895289)

        You still do not get it. There will be no further computing power revolution.

        • by K. S. Kyosuke (729550) on Wednesday January 08, 2014 @02:35AM (#45895425)
          There hasn't been a computing power revolution for quite some time now. All the recent development has been rather evolutionary.
          • by Katatsumuri (1137173) on Wednesday January 08, 2014 @04:50AM (#45895857)

            I see many emerging technologies that promise further great progress in computing. Here are some of them. I wish some industry people here could post some updates about their way to the market. They may not literally prolong Moore's Law with regard to the number of transistors, but they promise great performance gains, which is what really matters.

            3D chips. As materials science and manufacturing precision advances, we will soon have multi-layered (starting at a few layers that Samsung already has, but up to 1000s) or even fully 3D chips with efficient heat dissipation. This would put the components closer together and streamline the close-range interconnects. Also, this increases "computation per rack unit volume", simplifying some space-related aspects of scaling.

            Memristors. HP is ready to produce the first memristor chips but delays that for business reasons (how sad is that!) Others are also preparing products. Memristor technology enables a new approach to computing, combining memory and computation in one place. They are also quite fast (competitive with the current RAM) and energy-efficient, which means easier cooling and possible 3D layout.

            Photonics. Optical buses are finding their way into computers, and network hardware manufacturers are looking for ways to perform some basic switching directly with light. Some day these two trends may converge to produce an optical computer chip that would be free from the limitations of electrical resistance/heat and EM interference, and could thus operate at a higher clock speed. It would be more energy efficient, too.

            Spintronics. Probably further in the future, but a potentially very high-density and low-power technology actively developed by IBM, Hynix and a bunch of others. This one would push our computation-density and power-efficiency limits to another level, as it allows performing some computation using magnetic fields, without electrons actually moving as electrical current (excuse my layman's understanding).

            Quantum computing. This could qualitatively speed up whole classes of tasks, potentially bringing AI and simulation applications to new levels of performance. The only commercial offering so far is D-Wave, and it's not a general-purpose QC, but with so many labs working on it, results are bound to come soon.

            • by gweihir (88907) on Wednesday January 08, 2014 @05:54AM (#45896035)

              You may see them, but no actual expert in the field does.

              - 3D chips are decades old and have never materialized. They do not really solve the interconnect problem either and come with a host of other unsolved problems.
              - Memristors do not enable any new approach to computing, as there are neither many problems that would benefit from this approach, nor tools. The whole idea is nonsense at this time. Maybe they will have some future as storage, but not anytime soon.
              - Photonics is a dead-end. Copper is far too good and far too cheap in comparison.
              - Spintronics is old and has no real potential for ever working at this time.
              - Quantum computing is basically a scam perpetrated by some part of the academic community to get funding. It is not even clear whether it is possible for any meaningful size of problem.

              So, no. There really is nothing here.

              • by Katatsumuri (1137173) on Wednesday January 08, 2014 @07:00AM (#45896253)
                It may not be an instant revolution that's already done, but some work really is in progress.

                - 3D chips are decades old and have never materialized.

                24-layer flash chips are currently produced [arstechnica.com] by Samsung. IBM works on 3D chip cooling. [ibm.com] Just because it "never materialized" before, doesn't mean it won't happen now.

                - Memristors do not enable any new approach to computing, as there are neither many problems that would benefit from this approach, nor tools. The whole idea is nonsense at this time. Maybe they will have some future as storage, but not anytime soon.

                Memristors are great for neural network (NN) modelling. MoNETA [bu.edu] is one of the first big neural modelling projects to use memristors for that. I do not consider NNs a magic solution to everything, but you must admit they have plenty of applications in computationally expensive tasks.

                And while HP reconsidered its previous plans [wired.com] to offer memristor-based memory by 2014, they still want to ship it by 2018. [theregister.co.uk]

                - Photonics is a dead-end. Copper is far too good and far too cheap in comparison.

                Maybe fully photonic-based CPUs are way off, but at least for specialized use there are already photonic integrated circuits [wikipedia.org] with hundreds of functions on a chip.

                - Spintronics is old and has no real potential for ever working at this time.

                MRAM [wikipedia.org] uses electron spin to store data and is coming to market. Application of spintronics for general computing may be a bit further off in the future, but "no potential" is an overstatement.

                - Quantum computing is basically a scam perpetrated by some part of the academic community to get funding. It is not even clear whether it is possible for any meaningful size of problem.

                NASA, Google [livescience.com] and NSA [bbc.co.uk], among others, think otherwise.

                So, no. There really is nothing here.

                I respectfully disagree. We definitely have something.

                • by Kjella (173770) on Wednesday January 08, 2014 @08:28AM (#45896603) Homepage

                  I respectfully disagree. We definitely have something.

                  That there's research into exotic alternatives is fine, but just because they've researched flying cars and fusion reactors for 50 years doesn't mean they will ever materialize or be usable outside a very narrow niche. If we hit the limits of copper, there's no telling whether any of these will materialize or just continue to be interesting but overall uneconomical and impractical to use in consumer products. Take supersonic flight, for example: it exists, but all commercial passengers have flown subsonic since the Concorde was retired. You can't have exponential growth forever, not even in computers.

                  • by Katatsumuri (1137173) on Wednesday January 08, 2014 @08:56AM (#45896739)

                    It's true that we may not see another 90s-style MHz race on our desktops. But there is an ongoing need for faster, bigger, better supercomputers and datacenters, and there is technology that can help there. I did cite some examples where this technology is touching the market already. And once it is adopted and refined by government agencies and big-data companies, it will also trickle down into the consumer market.

                    I/O will get much faster. Storage will get much bigger. Computing cores may still become faster or more energy-efficient. New specialized co-processors may become common, for example for NN or QC. Then some of them may get integrated, as it happened to FPUs and GPUs. So the computing will most likely improve in different ways than before, but it is still going to develop fast and remain exciting.

                    And some technology may stay out of the consumer market, similar to your supersonic flight example, but it will still benefit society.

              • by interkin3tic (1469267) on Wednesday January 08, 2014 @12:50PM (#45898679)
                Hmm... yes good points. A bit off topic, but flying machines too are nonsense. No expert in the field sees them happening. People have been talking about flying machines, and we've had balloons for decades, but flying contraptions didn't materialize. And they don't solve any problem really that we don't already have a solution to, but do introduce new problems, like falling. The whole idea is nonsense. It's a dead end too! Ships and trains are far too cheap to ever let flying machines even be competitive. It's old and has no real potential for ever working. It's basically a scam perpetrated by some bike builders, and it's not clear it will ever be useful for any meaningful problem.

                In conclusion, you're right. There's no chance of any revolutionary computing technology coming forward, and there's no chance that humans will ever fly.
          • by gweihir (88907) on Wednesday January 08, 2014 @05:48AM (#45896021)

            Indeed. Just my point. And that evolution is going slower and slower.

          • Amen. The most used CPU architectures in the world today are directly descended from microcontroller architectures designed in the late 1960s and early 1970s, based on the work of a handful of designers. None of those designers could have planned for or envisaged their chips as being the widely used CPUs of today.

      • Re:Rock Star coders! (Score:5, Interesting)

        by Forever Wondering (2506940) on Wednesday January 08, 2014 @03:16AM (#45895535)

        There was an article not too long ago (can't remember where) that mentioned that a lot of the performance improvement over the years came from better algorithms rather than faster chips (e.g. one can double the processor speed, but that pales next to changing an O(n**2) algorithm to an O(n*log(n)) one).

        SSDs based on flash aren't the ultimate answer. Ones that use either magneto-resistive memory or ferroelectric memory show more long-term promise (e.g. MRAM can switch as fast as L2 cache--faster than DRAM but with the same cell size). With near-unlimited memory at that speed, a number of multistep operations can be converted to a single table lookup. This is done a lot in custom logic, where the logic is replaced with a fast SRAM/LUT.

        Storage systems (e.g. NAS/SAN) can be parallelized but the limiting factor is still memory bus bandwidth [even with many parallel memory buses].

        Multicore chips that use N-way mesh topologies might also help. Data is communicated via a data channel that doesn't need to dump to an intermediate shared buffer.

        Or hybrid cells that have a CPU but also have programmable custom logic attached directly. That is, part of the algorithm gets compiled to RTL that can then be loaded into the custom logic just as fast as a task switch (e.g. on every OS reschedule). This is why realtime video encoders use FPGAs. They can encode video at 30-120 fps in real time, but a multicore software solution might be 100x slower.
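
        To make the table-lookup point concrete, here is a minimal C sketch (the popcount example and all names are purely illustrative, not taken from any particular product): a byte-wise bit count computed step by step versus the same operation replaced by a single 256-entry table lookup, which is the kind of multistep-to-lookup conversion described above.

        #include <stdio.h>
        #include <stdint.h>

        /* 256-entry lookup table: popcount of every possible byte value.
         * Built once; afterwards a multistep bit-counting loop becomes one load. */
        static uint8_t popcount_lut[256];

        static void build_lut(void)
        {
            for (int v = 0; v < 256; v++) {
                int bits = 0;
                for (int b = v; b; b >>= 1)
                    bits += b & 1;
                popcount_lut[v] = (uint8_t)bits;
            }
        }

        /* Multistep version: loops over the bits of every byte. */
        static unsigned popcount_loop(const uint8_t *p, size_t n)
        {
            unsigned total = 0;
            for (size_t i = 0; i < n; i++)
                for (uint8_t b = p[i]; b; b >>= 1)
                    total += b & 1;
            return total;
        }

        /* LUT version: one table lookup per byte. */
        static unsigned popcount_table(const uint8_t *p, size_t n)
        {
            unsigned total = 0;
            for (size_t i = 0; i < n; i++)
                total += popcount_lut[p[i]];
            return total;
        }

        int main(void)
        {
            uint8_t data[] = { 0x00, 0xFF, 0xA5, 0x3C };
            build_lut();
            printf("loop: %u  table: %u\n",
                   popcount_loop(data, sizeof data),
                   popcount_table(data, sizeof data));
            return 0;
        }

        Whether the lookup actually wins depends on how close the table sits to the logic, which is why memory that switches at L2-cache speed would matter.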

        • by Anonymous Coward on Wednesday January 08, 2014 @03:46AM (#45895627)

          (e.g. one can double the processor speed but that pales with changing an O(n**2) algorithm to O(n*log(n)) one).

          In some cases. There are also a lot of cases where overly complex functions are used to manage lists that usually contain three or four items and never reach ten.
          Analytical optimizing is great when it can be applied, but just because one has more than one data item to work on doesn't automatically mean that an O(n*log(n)) solution will beat an O(n**2) one. The O(n**2) solution will often be faster per iteration, so it is a good idea to consider how many items one will usually be working with.
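
          As a rough illustration of that trade-off, here is a small C sketch (the sizes and repetition counts are arbitrary, and the crossover point is machine-dependent): sorting many four-element arrays with a plain O(n**2) insertion sort versus the library's O(n*log(n)) qsort, then repeating with one large array where the asymptotics take over.

          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>
          #include <time.h>

          /* Simple O(n**2) insertion sort: bad asymptotics, tiny constant factor. */
          static void insertion_sort(int *a, size_t n)
          {
              for (size_t i = 1; i < n; i++) {
                  int key = a[i];
                  size_t j = i;
                  while (j > 0 && a[j - 1] > key) {
                      a[j] = a[j - 1];
                      j--;
                  }
                  a[j] = key;
              }
          }

          static int cmp_int(const void *x, const void *y)
          {
              int a = *(const int *)x, b = *(const int *)y;
              return (a > b) - (a < b);
          }

          /* Sort copies of one array of length n, reps times, with each method. */
          static void bench(size_t n, size_t reps)
          {
              int *buf = malloc(n * sizeof *buf);
              int *work = malloc(n * sizeof *work);
              if (!buf || !work)
                  return;
              for (size_t i = 0; i < n; i++)
                  buf[i] = rand();

              clock_t t0 = clock();
              for (size_t r = 0; r < reps; r++) {
                  memcpy(work, buf, n * sizeof *work);
                  insertion_sort(work, n);
              }
              clock_t t1 = clock();
              for (size_t r = 0; r < reps; r++) {
                  memcpy(work, buf, n * sizeof *work);
                  qsort(work, n, sizeof *work, cmp_int);
              }
              clock_t t2 = clock();

              printf("n=%5zu  insertion: %.3fs  qsort: %.3fs\n", n,
                     (double)(t1 - t0) / CLOCKS_PER_SEC,
                     (double)(t2 - t1) / CLOCKS_PER_SEC);
              free(buf);
              free(work);
          }

          int main(void)
          {
              srand(42);
              bench(4, 2000000);   /* the "three or four items" case            */
              bench(5000, 100);    /* large case, where O(n*log(n)) pulls ahead */
              return 0;
          }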

      • by martin-boundary (547041) on Wednesday January 08, 2014 @09:10AM (#45896821)

        However, many "problems" with performance today are I/O-based and not calculation-based. It's time for the storage systems to catch up [...]

        Meh, that's a copout. IO has always been a bottleneck. Why do you think that Knuth spends so much time optimizing sorting algorithms for tapes? It's not a new issue, solve it by changing your algorithm (aka calculation).

        The current generation of programmers is so used to doing cookie-cutter work, gluing together lower-level libraries they don't understand in essentially trivial ways, that when faced with an actual mismatch between the problem and the assumptions of the lower-level code, they have no idea what to do. Here's a hint: throw away the lower-level crutches and design a scalable solution from scratch. Most problems can be solved in many ways, and the solution that uses your favourite framework is probably never the most efficient.

        /rant

  • by Osgeld (1900440) on Wednesday January 08, 2014 @01:16AM (#45895007)

    It's more of a prediction that has mostly been on target because of its challenging nature.

    • by crutchy (1949900) on Wednesday January 08, 2014 @01:52AM (#45895213)

      now now don't you go spreading propaganda that laws aint laws... next there will be idiots coming out of the woodwork claiming that einstein may have got it wrong and that it's ok to take a dump whilst being cavity searched by the police

    • Moore's "law" & AI (Score:3, Interesting)

      by globaljustin (574257) <justinglobal@gmai l . com> on Wednesday January 08, 2014 @01:52AM (#45895217) Homepage Journal

      In my mind it was an interesting statistical coincidence, *when it was first discussed*

      Then the hype took over, and we know what happens when tech and hype meet up...

      Out-of-touch CEOs get harebrained ideas from non-tech marketing people about what makes a product sell, then the marketing people dictate to the product managers what benchmarks they have to hit... then the new product is developed, and any regular /. reader knows the rest.

      It's bunk. We need to dispel these kinds of errors in language instead of perpetuating them, because it has tangible effects on the engineers in the lab who actually do the damn work.

      Part of what made the Moore's "Law" meme so sticky is how it was used, usually in a simple line graph, by "futurists" who can barely check their own email, to pen melodramatic, overhyped predictions about *when* we would have 'AI'.

      AI hype is tied to computer performance, and Moore's "Law" was something air-headed journalists could easily source, complete with a nice graph from a tech "expert".

      I know my view of AI as a fiction is in the minority, but IMHO we need to grow up, stop with the reductive notion that computing is progressing towards some kind of 'AI' singularity and focus on making things that help people do work or play.

      Our industry loses **BILLIONS** of dollars and hundreds of thousands of work-hours chasing a fiction when we could be making more useful, powerful, and imaginative things that meet actual, real-world human needs.

      To bring this back to Moore's Law, let's work on better explaining the value of tech to non-techies. Let's give air-headed journalists something to sink their teeth into that will help our industry progress, not play the bullshit/hype game like every other industry.

  • by ka9dgx (72702) on Wednesday January 08, 2014 @01:16AM (#45895013) Homepage Journal

    Now the blind ants (researchers) will need to explore more of the tree (the computing problem space)... there are many fruits out there yet to discover; this is just the end of the very easy fruit. I happen to believe that FPGAs can be made much more powerful, because the current ones suffer from some premature optimization. Time will tell if I'm right or wrong.

    • by jarfil (1341877) on Wednesday January 08, 2014 @01:37AM (#45895123) Homepage

      So true. I also happen to believe that adding an FPGA coprocessor to general purpose CPUs, that applications could reconfigure on the fly to perform certain tasks, could lead to massive increases in performance.

      • by gweihir (88907) on Wednesday January 08, 2014 @02:10AM (#45895317)

        As somebody who has watched what has been going on in that particular area for more than 2 decades, I do not expect anything to come out of it. FPGAs are suitable for doing very simple things reasonably fast, but so are graphics cards, and with a much better interface. But as soon as communication between computing elements or large memory is required, both FPGAs and graphics cards become abysmally slow in comparison to modern CPUs. That is not going to change, as it is an effect of the architecture. There will not be any "massive" performance increase anywhere now.

      • by InvalidError (771317) on Wednesday January 08, 2014 @03:08AM (#45895517)

        Programming FPGAs is far more complex than programming GPGPUs and you would need a huge FPGA to match the compute performance available on $500 GPUs today. FPGAs are nice for arbitrary logic such as switch fabric in large routers or massively pipelined computations in software-defined radios but for general-purpose computations, GPGPU is a much cheaper and simpler option that is already available on many modern CPUs and SoCs.

        • by TheRaven64 (641858) on Wednesday January 08, 2014 @06:20AM (#45896105) Journal
          Speaking as someone who works with FPGAs on a daily basis and has previously done GPGPU compiler work, that's complete nonsense. If you have an algorithm that:
          • Mostly uses floating point arithmetic
          • Is embarrassingly parallel
          • Has simple memory access patterns
          • Has non-branching flow control

          Then a GPU will typically beat an FPGA solution. There's a pretty large problem space for which GPUs suck. If you have memory access that is predictable but doesn't fit the stride models that a GPU is designed for then an FPGA with a well-designed memory interface and a tenth of the arithmetic performance of the GPU will easily run faster. If you have something where you have a long sequence of operations that map well to a dataflow processor, then an FPGA-based implementation can also be faster, especially if you have a lot of branching.

          Neither is a panacea, but saying a GPU is always faster and cheaper than an FPGA makes as much sense as saying that a GPU is always faster and cheaper than a general-purpose CPU.
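
          The memory-access point is worth lingering on, and a shadow of it shows up even on a plain CPU. The following C sketch is only a cache analogy added for illustration (the sizes and the multiplicative index are arbitrary choices of mine), not a GPU or FPGA measurement: it sums the same array twice, once with unit-stride access and once with a scattered but fully deterministic access pattern, so the arithmetic is identical and only the access pattern changes.

          #include <stdio.h>
          #include <stdlib.h>
          #include <time.h>

          #define N (1u << 24)   /* 16M ints (~64 MB), well beyond any cache */

          int main(void)
          {
              int *data = malloc(N * sizeof *data);
              if (!data)
                  return 1;
              for (size_t i = 0; i < N; i++)
                  data[i] = (int)(i & 0xFF);

              long long sum = 0;
              clock_t t0 = clock();
              for (size_t i = 0; i < N; i++)      /* predictable, unit-stride access */
                  sum += data[i];
              clock_t t1 = clock();
              for (size_t i = 0; i < N; i++) {    /* same arithmetic, scattered access:  */
                  size_t j = (i * 2654435761u) & (N - 1);  /* odd multiplier mod 2^24    */
                  sum += data[j];                          /* is a permutation of 0..N-1 */
              }
              clock_t t2 = clock();

              printf("sum=%lld  sequential: %.3fs  scattered: %.3fs\n", sum,
                     (double)(t1 - t0) / CLOCKS_PER_SEC,
                     (double)(t2 - t1) / CLOCKS_PER_SEC);
              free(data);
              return 0;
          }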

      • by linearz69 (3473163) on Wednesday January 08, 2014 @03:30AM (#45895577)

        FPGAs are relatively expensive compared to graphics chips, actually most chips. It's still not clear what FPGAs can accomplish in a general computing platform that will be of value, considering the other, lower-cost options available.

        The other half of the problem here is that, in comparison to GPU programming interfaces such as OpenCL and CUDA, there is relatively little effort in bringing FPGA development beyond ASIC-Lite. The tool chains and development processes (what FPGA vendors like to call "design flow") are miles apart between FPGAs and software code. Right now, SystemC is the only thing close to software development for FPGAs (mainly because of C++ syntax), but it really isn't that close. Also consider that there are really no common architectures - RTL synthesis can vary from part to part, and place & route is different for every flippen part number. This makes it nearly impossible for any third party, beyond pricey CAD vendors with cozy relationships with the FPGA manufacturers, to develop the libraries required to cleanly integrate FPGAs into software development.

        The FPGA manufacturers have done quite well in the low-volume, high-margin game. They have no incentive to drop the cost required for consumer volumes. GPUs are a completely different story....

        • by ka9dgx (72702) on Wednesday January 08, 2014 @05:50AM (#45896027) Homepage Journal

          All of this points out what I'm saying... they've optimized for small(ish) systems that have to run very quickly, with a heavy emphasis on "routing fabric" internally. This makes them hard to program, as they are heterogeneous as all get out.

          Imagine a grid of logic cells, a nice big, homogeneous grid that was symmetric. You could route programs onto it almost instantly; there'd be no need for custom tools to program it.

          The problem IS the routing grid... it's a premature optimization. And for big scale stuff it definitely gets in the way.

          I would have a 4-bits-in, 4-bits-out lookup table as the basis of this, and I call it the "bitgrid".... I've been writing about it for years; feel free to make the chip, and send me an email (or preferably a sample, please), because that puppy is disclosed as far as patents go.... I have none, and can't now.

          You should be able to get a 64k x 64k grid on a chip for a few bucks, in any kind of quantity. It should do Exaflops, or consume almost nothing if you idle it.
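
          For anyone curious what that might look like in software, here is a toy C model sketched from the description above: a small grid of cells, each one a 4-bit-in, 4-bit-out lookup table taking one bit from each neighbour. The grid size, the neighbour wiring and the random cell contents are illustrative guesses, not a specification of the actual bitgrid design.

          #include <stdio.h>
          #include <stdint.h>
          #include <stdlib.h>

          /* Toy "bitgrid": every cell is a 4-bit-in / 4-bit-out lookup table.
           * Each cell takes one bit from each of its four neighbours (N, E, S, W),
           * forms a 4-bit index, and its 16-entry table gives the 4 bits it drives
           * back out, one per direction.  Edges wrap around.  8x8 for illustration. */
          #define GW 8
          #define GH 8

          typedef struct {
              uint8_t lut[16];   /* maps 4-bit input -> 4-bit output          */
              uint8_t out;       /* current 4-bit output, bit d = direction d */
          } cell_t;

          static cell_t grid[GH][GW], next[GH][GW];

          /* Direction encoding: 0 = north, 1 = east, 2 = south, 3 = west. */
          static int bit_from(int y, int x, int dir)
          {
              int ny = y, nx = x, back;
              switch (dir) {
              case 0:  ny = (y + GH - 1) % GH; back = 2; break; /* north neighbour drives south */
              case 1:  nx = (x + 1) % GW;      back = 3; break;
              case 2:  ny = (y + 1) % GH;      back = 0; break;
              default: nx = (x + GW - 1) % GW; back = 1; break;
              }
              return (grid[ny][nx].out >> back) & 1;
          }

          /* One synchronous step: every cell looks up its next output at once. */
          static void step(void)
          {
              for (int y = 0; y < GH; y++)
                  for (int x = 0; x < GW; x++) {
                      int in = 0;
                      for (int d = 0; d < 4; d++)
                          in |= bit_from(y, x, d) << d;
                      next[y][x] = grid[y][x];
                      next[y][x].out = grid[y][x].lut[in];
                  }
              for (int y = 0; y < GH; y++)
                  for (int x = 0; x < GW; x++)
                      grid[y][x] = next[y][x];
          }

          int main(void)
          {
              srand(7);
              for (int y = 0; y < GH; y++)          /* random LUTs and initial state */
                  for (int x = 0; x < GW; x++) {
                      for (int i = 0; i < 16; i++)
                          grid[y][x].lut[i] = (uint8_t)(rand() & 0xF);
                      grid[y][x].out = (uint8_t)(rand() & 0xF);
                  }
              for (int t = 0; t < 4; t++) {
                  step();
                  printf("t=%d  cell(0,0) drives 0x%X\n", t + 1, grid[0][0].out);
              }
              return 0;
          }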

    • by crutchy (1949900) on Wednesday January 08, 2014 @01:53AM (#45895225)

      just need to shoot more advanced alien spaceships down near roswell

  • by Taco Cowboy (5327) on Wednesday January 08, 2014 @01:17AM (#45895019) Journal

    The really sad thing regarding this "Moore's Law" thing is that, while the hardware has kept getting faster and more power-efficient, the software that runs on it has kept getting more and more bloated.

    Back in the pre-8088 days we already had music notation software running on the Radio Shack TRS-80 Model III.

    Back then, due to the constraints of the hardware, programmers had to use every trick in the book (and off it) to make their programs run.

    Nowadays, even the most basic "Hello World" program comes in at the megabyte range.

    Sigh !

    • by bloodhawk (813939) on Wednesday January 08, 2014 @01:32AM (#45895101)
      I don't find that a sad thing at all. The fact that people have to spend far less effort on code to make something that works is a fantastic thing that has opened up programming to millions of people who would never have been able to cope with the complex tricks we used to play to save every byte of memory and prune every line of code. This doesn't mean you can't do those things, and I still regularly do when writing server-side code. But why spend man-years of effort to optimise memory, CPU and disk footprint when the average machine has an abundant surplus of all three?
      • by jarfil (1341877) on Wednesday January 08, 2014 @01:48AM (#45895193) Homepage

        The sad part is not that it's easier to code, but that many people have grown complacent with their code executing at abysmal performance rates. And I also blame compilers/interpreters that don't mind bloating things up.

      • by MrL0G1C (867445) on Wednesday January 08, 2014 @05:55AM (#45896045) Journal

        1. Is hello world easier to code now? When I had an 8-bit machine the program was

        (type)
        print "Hello world" (hit enter)

        That didn't seem hard. Now, what steps do you have to go through on a modern PC you've just bought? Not as easy, is it.

        2. Agree with the other poster: why are compilers throwing 99.999% redundant code into the software? Pathetic.

    • by phantomfive (622387) on Wednesday January 08, 2014 @01:52AM (#45895215) Journal

      Back in the days of pre-8088 we already had music notation softwares running on Radio Shack TRS-80 model III.

      In 4-bit color on a 640x480 screen, with ugly fonts (if you could even get more than one!) and lousy, cheap-sounding MIDI playback. Seriously, TRS-80 music notation software was severely limited compared to what we have today.

    • by Guy Harris (3803) <guy@alum.mit.edu> on Wednesday January 08, 2014 @02:06AM (#45895297)

      Nowadays, even the most basic "Hello World" program comes in at the megabyte range.

      The most basic "Hello World" program doesn't have a GUI (if it has a GUI, you can make it more basic by just printing with printf), so let's see:

      $ ed hello.c
      hello.c: No such file or directory
      a
      #include <stdio.h>

      int
      main(void)
      {
      printf("Hello, world!\n");
      }
      .
      w
      67
      q
      $ gcc -o hello -Os hello.c
      $ size hello
      __TEXT __DATA __OBJC others dec hex
      4096 4096 0 4294971392 4294979584 100003000

      I'm not sure what "others" is, but I suspect there's a bug there (I'll take a look). 4K text, 4K data (that's the page size), which isn't too bad; the bulk of the work is done in a library, though - it's a shared library, and this OS doesn't support linking statically with libSystem, so it's hard to tell how much code is dragged in by printf. The actual file size isn't that big:

      $ ls -l hello
      -rwxr-xr-x 1 gharris wheel 8752 Jan 7 21:58 hello

    • by JanneM (7445) on Wednesday January 08, 2014 @03:13AM (#45895529) Homepage

      What do you suggest we do with all the computing power we've gained then? It seems perfectly reasonable to use it to make software development easier and faster, and make more beautiful and more usable user interfaces.

    • by clickclickdrone (964164) on Wednesday January 08, 2014 @03:19AM (#45895545)
      Agreed. Seems amazing now that you could get spreadsheets and word processors running in 8 to 48k. The only time in recent years I've seen amazing efficiency was a graphics demo that drew a fully rendered scene using algorithmically generated textures. The demo ran for about 5 minutes, scrolling around the buildings and hills, and was only about 150k.
    • by Greyfox (87712) on Wednesday January 08, 2014 @03:22AM (#45895553) Homepage Journal
      I did a static-compiled hello world program a while back and found that it was only a few kilobytes, which is still a lot but way less than I expected. A C function I wrote to test an assembly language function I also wrote recently came in at about 7000 bytes. That's better...

      There have been several occasions where I've seen a team "solve" a problem by throwing another couple of gigabytes at a Java VM and adding a task to reboot the system every couple of days. I've lost count of the times where simply optimizing SQL I was looking at (sometimes by rewriting it, sometimes by adding an index) has resulted in hour-long tasks suddenly completing in a minute or two. There's plenty of room to get more performance out of existing hardware, that's for sure!

      • by evilviper (135110) on Wednesday January 08, 2014 @06:31AM (#45896143) Journal

        There's plenty of room to get more performance out of existing hardware, that's for sure!

        But that's exactly the point... For decades, Moore's Law has made it seem like a crime to optimize software. No matter how inefficient the software, hardware was cheaper than developer time, and the next generation of hardware would be fast enough to run the bloated, inefficient crap.

        The end of Moore's Law... if it actually happens this time, unlike the last 1,000 times it was predicted... will mean good people who can write more efficient code will be worth far more, and the code monkeys who got by only thanks to a culture that was utterly unconcerned with performance/optimization, will be seen as the hacks they are...

    • by TheRaven64 (641858) on Wednesday January 08, 2014 @06:32AM (#45896149) Journal

      You can still write software that efficient today. The downside is that you can only write software that efficient if you're willing to have it be about that complex too. Do you want your notes application to just store data directly on a single disk from a single manufacturer, or would you rather have an OS that abstracts the details of the device and provides a filesystem? Do you want the notes software to just dump the contents of memory, or do you want it to store things in a file format that is amenable to other programs reading it? Do you want it to just handle plain text for lyrics, or would you like it to handle formatting? What about Unicode? Do you want it to be able to render the text in a nice clean antialiased way with proportional spacing, or are you happy with fixed-width bitmap fonts (which may or may not look terrible, depending on your display resolution)? The same applies to the notes themselves. Do you want it to be able to produce PostScript for high-quality printing, or are you happy for it to just dump the low-quality screen version as a bitmap? Do you want it to do wavetable-based MIDI synthesis or are you happy with just beeps?

      The reason modern software is bigger is that it does a hell of a lot more. If you spent as much effort on every line of code in a program with all of the features that modern users expect as you did in something where you could put the printouts of the entire codebase on your office wall, you'd never be finished and you'd never find customers willing to pay the amount it would cost.

  • And best of all... (Score:5, Insightful)

    by pushing-robot (1037830) on Wednesday January 08, 2014 @01:18AM (#45895023)

    We might even stop writing everything in Javascript?

  • by xtal (49134) on Wednesday January 08, 2014 @01:20AM (#45895031)

    ..took us in directions we hadn't considered.

    I forget the exact quote, but what a time to be alive. My first computer program was written on a VIC-20. Watching the industry grow has been incredible... I am not worried about the demise of traditional lithographic techniques... I'm actually expecting the next generation to provide a leap in speed, as now there's a strong incentive to look at different technologies.

    Here's to yet another generation of cheap CPU.

  • by ItMustBeEsoteric (732632) <.ryangilbert. .at. .gmail.com.> on Wednesday January 08, 2014 @01:22AM (#45895047)
    We might stop seeing ridiculous gains in computing power, and might have to start making gains in software efficiency.
    • by globaljustin (574257) <justinglobal@gmai l . com> on Wednesday January 08, 2014 @02:01AM (#45895269) Homepage Journal

      We might stop seeing ridiculous gains in computing power, and might have to start making gains in software efficiency.

      I agree in spirit, but you perpetuate a false dichotomy based on a misunderstanding of *why* software bloat happens, in the broad industry-wide context.

      Just look at Windows. M$ bottlenecked features willfully because it was part of their business plan.

      Coders, the people who actually write the software, have always been up to the efficiency challenge. The problem is the biggest money wasn't paying for ninja-like efficiency of executing user instructions.

      It was about marketing and ass-backwards profit models forced onto the work of making good code.

      I've often observed that in order to do the most desirable work, a coder would have to sacrifice the very thing that made them want to work on the best software in the first place...

  • by JoshuaZ (1134087) on Wednesday January 08, 2014 @01:37AM (#45895127) Homepage

    This is ok. For many purposes, software improvements in terms of new algorithms that are faster and use less memory have done more for heavy-duty computation than hardware improvement has. Between 1988 and 2003, linear programming on a standard benchmark improved by a factor of about 40 million. Of that improvement, a factor of about 40,000 came from improvements in software and only about 1,000 from improvements in hardware (these numbers are partially not well-defined because there's some interaction between how one optimizes software for hardware and the reverse). See this report http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-nitrd-report-2010.pdf [whitehouse.gov]. Similar remarks apply to integer factorization and a variety of other important problems.

    The other important issue related to this, is that improvements in algorithms provide ever-growing returns because they can actually improve on the asymptotics, whereas any hardware improvement is a single event. And for many practical algorithms, asymptotic improvements are occurring still. Just a few days ago a new algorithm was published that was much more efficient for approximating max cut on undirected graphs. See http://arxiv.org/abs/1304.2338 [arxiv.org].

    If all forms of hardware improvement stopped today, there would still be massive improvement in the next few years on what we can do with computers simply from the algorithms and software improvements.
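
    To put rough numbers on the asymptotics argument, here is a tiny C sketch (the 1000x hardware figure is the one quoted above; constant factors are ignored, so this is only the shape of the argument): a hardware upgrade is a fixed multiplier, while replacing an O(n**2) algorithm with an O(n*log(n)) one yields a gain of roughly n/log2(n) that keeps growing with problem size.

    #include <stdio.h>
    #include <math.h>

    /* A hardware upgrade buys a fixed constant factor, while going from an
     * O(n**2) to an O(n*log(n)) algorithm buys a factor of roughly n/log2(n),
     * which keeps growing with the input size.  Build with: cc asymptotics.c -lm */
    int main(void)
    {
        const double hardware_speedup = 1000.0;   /* e.g. 15 years of faster chips */
        for (double n = 1e3; n <= 1e9; n *= 1000.0) {
            double algo_speedup = n / log2(n);    /* n*n ops vs n*log2(n) ops */
            printf("n = %.0e: algorithmic gain ~%.0fx, hardware gain %.0fx\n",
                   n, algo_speedup, hardware_speedup);
        }
        return 0;
    }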

    • by Anonymous Coward on Wednesday January 08, 2014 @03:04AM (#45895507)

      Misleading.

      Yes, I've got 100-fold improvements on a single image processing algorithm. It was pretty easy as well.
      However, that only speeds up that one algorithm; 10x faster hardware speeds up everything 10x.

      Use of interpreted languages and bloated code has more than offset the gains from better algorithms.

      The net result overall is that 'performance' increase has been mostly due to hardware, not software.

    • by Taco Cowboy (5327) on Wednesday January 08, 2014 @03:06AM (#45895511) Journal

      Between 1988 and 2003, linear programming on a standard benchmark improved by a factor of about 40 million. Of that improvement, a factor of about 40,000 came from improvements in software and only about 1,000 from improvements in hardware (these numbers are partially not well-defined because there's some interaction between how one optimizes software for hardware and the reverse).

      I downloaded the report at the link that you so generously provided - http://www.whitehouse.gov/sites/default/files/microsites/ostp/pcast-nitrd-report-2010.pdf - but I found the figures somewhat misleading.

      In the field of numerical algorithms, however, the improvement can be quantified. Here is just one example, provided by Professor Martin Grötschel of Konrad-Zuse-Zentrum für Informationstechnik Berlin. Grötschel, an expert in optimization, observes that a benchmark production planning model solved using linear programming would have taken 82 years to solve in 1988, using the computers and the linear programming algorithms of the day. Fifteen years later, in 2003, this same model could be solved in roughly 1 minute, an improvement by a factor of roughly 43 million. Of this, a factor of roughly 1,000 was due to increased processor speed, whereas a factor of roughly 43,000 was due to improvements in algorithms! Grötschel also cites an algorithmic improvement of roughly 30,000 for mixed integer programming between 1991 and 2008.

      Professor Grötschel's observation was in regard to "numerical algorithms", and no doubt there have been some great improvements achieved thanks to new algorithms. But that is just one tiny aspect of the whole spectrum of the programming scene.

      Outside that tiny segment of number crunching, bloatware has emerged everywhere.

      While the hardware speed has accelerated 1,000x (as claimed by the kind professor), the speed of the software in solving the myriad problems hasn't exactly been keeping up.

      I have invested more than 30 years of my life in the tech field, and compared to what we achieved in software back in the late 1970s, what we have today is astoundingly disappointing.

      Back then, RAM was counted in KB, and storage in MB was considered "HUGE".

      We had to squeeze every single ounce of performance out of our programs just to make them run at decent speed.

      Whether it was a game of "pong" or numerical analysis, everything had to be considered, and more often than not we went down to the machine level (yes, we coded one step below assembly language) to minimize the "waste", counting each and every single cycle.

      Yes, many of the younger generation will look at us as though we old farts are crazy, but our quest to fight the hardware limitations was, at least to those of us who went through it all, extremely stimulating.

  • Implications (Score:4, Interesting)

    by Animats (122034) on Wednesday January 08, 2014 @01:38AM (#45895131) Homepage

    Some implications:

    • We're going to see more machines that look like clusters on a chip. We need new operating systems to manage such machines. Things that are more like cloud farm managers, parceling out the work to the compute farm.
    • Operating systems and languages will need to get better at interprocess and inter-machine communication. We're going to see more machines that don't have shared memory but do have fast interconnects. Marshalling and interprocess calls need to get much faster and better. Languages will need compile-time code generation for marshalling (a hand-written sketch of what that marshalling looks like follows this list). Programming for multiple machines has to be part of the language, not a library.
    • We'll probably see a few more "build it and they will come" architectures like the Cell. Most of them will fail. Maybe we'll see a win.
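
    As a concrete illustration of the marshalling point above (the struct, field layout and wire format are invented for the example), this is the kind of per-field byte-shuffling that compile-time marshalling generation would emit automatically:

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* Hypothetical message passed between nodes that share a fast interconnect
     * but no memory.  Marshalling means flattening it into a defined wire format
     * (here: little-endian, fixed field order) so the receiver can rebuild it
     * regardless of its own struct padding or byte order. */
    typedef struct {
        uint32_t task_id;
        uint16_t priority;
        double   deadline;   /* seconds */
    } work_item;

    static void put_u16(uint8_t *p, uint16_t v)
    {
        p[0] = (uint8_t)(v & 0xFF);
        p[1] = (uint8_t)(v >> 8);
    }

    static void put_u32(uint8_t *p, uint32_t v)
    {
        p[0] = (uint8_t)(v & 0xFF);
        p[1] = (uint8_t)((v >> 8) & 0xFF);
        p[2] = (uint8_t)((v >> 16) & 0xFF);
        p[3] = (uint8_t)(v >> 24);
    }

    /* 4 + 2 + 8 = 14 bytes on the wire, independent of in-memory layout. */
    #define WORK_ITEM_WIRE_SIZE 14

    static size_t marshal_work_item(const work_item *w, uint8_t *buf)
    {
        uint64_t bits;
        memcpy(&bits, &w->deadline, sizeof bits);        /* raw IEEE-754 bits */
        put_u32(buf + 0, w->task_id);
        put_u16(buf + 4, w->priority);
        put_u32(buf + 6, (uint32_t)(bits & 0xFFFFFFFFu));
        put_u32(buf + 10, (uint32_t)(bits >> 32));
        return WORK_ITEM_WIRE_SIZE;
    }

    int main(void)
    {
        work_item w = { 42, 7, 1.5 };
        uint8_t wire[WORK_ITEM_WIRE_SIZE];
        size_t n = marshal_work_item(&w, wire);
        printf("marshalled %zu bytes, first byte 0x%02X\n", n, wire[0]);
        return 0;
    }

    Writing this by hand, and keeping it in sync with the matching unmarshal side for every message type, is exactly the overhead that compile-time generation would remove.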
  • by phantomfive (622387) on Wednesday January 08, 2014 @01:48AM (#45895191) Journal
    This quote from the article: " But it may be a blessing to say goodbye to a rule that has driven the semiconductor industry since the 1960s." is surely the dumbest thing I've read all day.

    Seriously? It's like, people wake up and say, "it would be such a blessing if I could never get a faster computer." Does that make sense at all?
  • by radarskiy (2874255) on Wednesday January 08, 2014 @01:54AM (#45895229)

    The defining characteristic of the 7nm node is that it's the one after the 10nm node. I can't remember the last time I worked in a process where there was a notable dimension that matched the node name, either drawn or effective.

    Marc Snir gets bogged down in an analysis of gate length reduction, which is quite beside the point. If it gets harder to shrink the gate than to do something else, then something else will be done. I've worked on processes with the same gate length as the "previous" process, and I've probably even worked on a process that had a larger gate than the previous process. The device density still increased, since gate length is not the only dimension.

  • by gweihir (88907) on Wednesday January 08, 2014 @02:02AM (#45895277)

    If, just if, anything can be done, governments will not play any significant role in the process. They do not even seem to understand the problem; how could they ever be part of the solution? And that is just it: it is quite possible that there is no solution, or that it may take decades or centuries for that one smart person to be in the right place at the right time. Other than blanket research funding, governments cannot do anything to help with that. Instead, scientific funding today is only given to concrete applied research that promises specific results. That is not going to help make any fundamental breakthrough; quite the opposite.

    Personally, I expect that this is it for computing hardware for the next few decades, or possibly permanently. I do not see any fundamental issue with that. And there is quite a bit of historical precedent for a technology slowly beginning to mature.

    As software is so incredibly unrefined these days, that would be a good thing. It would finally be possible to write reasonable standard components for most things, instead of the bloated, insecure mess so common these days. It would also be possible to begin restricting software creation to those who actually have a gift for it, instead of having software created by the semi-competent and the incompetent (http://www.codinghorror.com/blog/2010/02/the-nonprogramming-programmer.html). In the end, the fast progress of computing (which burned through a few centuries' worth of fundamental research) was not a good thing. Things will move much more slowly while new fundamental research results are created instead of merely consumed.

  • by Demonantis (1340557) on Wednesday January 08, 2014 @02:24AM (#45895371)
    Picture a supercomputer so massive that components fail as fast as we can replace them. Now that is a big supercomputer. This is the issue: supercomputers have a physical limit. If node power doesn't grow, we will reach a limit on simulation power. It will be interesting to see how the CPU matures. That means more features will be developed beyond raw power.
  • by smash (1351) on Wednesday January 08, 2014 @03:27AM (#45895571) Homepage Journal
    ... time to stop writing garbage in visual basic, man up, and use proper languages again that are actually efficient, isn't it?
  • by Ultracrepidarian (576183) on Wednesday January 08, 2014 @04:16AM (#45895749)

    Perhaps writing efficient code will come back into style.

  • by yrrah (1247500) on Wednesday January 08, 2014 @05:38AM (#45895973)
    The International Technology Roadmap for Semiconductors is published regularly and has information on the maturity of emerging technologies like carbon-based devices. There are many possibilities for "more than Moore" improvement. http://www.itrs.net/Links/2012ITRS/Home2012.htm [itrs.net]
  • by KonoWatakushi (910213) on Wednesday January 08, 2014 @06:03AM (#45896071)

    The refinement of process has postponed this for a long while, but the time has come to explore new architectures and technologies. The Mill architecture [ootbcomp.com] is one such example, and aims to bridge the enormous chasm of inefficiency between general purpose CPUs and DSPs. Conservatively, they are expecting a tenfold improvement in performance/W/$ on general purpose code, but the architecture is also well suited to wide MIMD and SIMD.

    Another area ripe for innovation is memory technologies, which have suffered a similar stagnation limited to refinement of an ancient technology. The density of both cache and main memory can be significantly improved on the same process with Thyristor-RAM or Z-RAM. Considering the potential benefits and huge markets, it is vexing that more resources aren't expended toward commercializing better technologies. Some of the newer technologies also scale down better.

    Something to replace the garbage which is NAND flash would also be welcome, yet sadly there appears to be no hurry there either. One point is certain, there is a desperate need to find a way to commercialize better technologies rather than perpetually refining inferior ones. Though examples abound, perhaps none is more urgent than the Liquid fluoride thorium reactor [wikipedia.org]. Molten salt reactors could rapidly replace fossil fuels with clean and abundant energy while minimizing environmental impact, and affordable energy is the basis for all prosperity.

"Say yur prayers, yuh flea-pickin' varmint!" -- Yosemite Sam

Working...