
End of Moore's Law Forcing Radical Innovation

dcblogs writes "The technology industry has been coasting along on steady, predictable performance gains, as laid out by Moore's Law. But stability and predictability are also the ingredients of complacency and inertia. At this stage, Moore's Law may be more analogous to golden handcuffs than to innovation. With its end in sight, systems makers and governments are being challenged to come up with new materials and architectures. The European Commission has written of a need for 'radical innovation in many computing technologies.' The U.S. National Science Foundation, in a recent budget request, said technologies such as carbon nanotube digital circuits will likely be needed, or perhaps molecular-based approaches, including biologically inspired systems. The slowdown in Moore's Law has already hit high-performance computing. Marc Snir, director of the Mathematics and Computer Science Division at the Argonne National Laboratory, outlined in a series of slides the problem of going below 7nm on chips, and the lack of alternative technologies."
This discussion has been archived. No new comments can be posted.

  • by Katatsumuri ( 1137173 ) on Wednesday January 08, 2014 @04:50AM (#45895857)

    I see many emerging technologies that promise further great progress in computing. Here are some of them. I wish some industry people here could post updates on their progress toward market. They may not literally prolong Moore's Law in terms of transistor count, but they promise great performance gains, which is what really matters.

    3D chips. As materials science and manufacturing precision advance, we will soon have multi-layered chips (starting with the few layers Samsung already ships, but eventually up to thousands) or even fully 3D chips with efficient heat dissipation. This would put components closer together and shorten the close-range interconnects. It also increases "computation per rack unit volume", simplifying some space-related aspects of scaling.

    Memristors. HP is ready to produce the first memristor chips but is delaying them for business reasons (how sad is that!). Others are also preparing products. Memristor technology enables a new approach to computing, combining memory and computation in one place. Memristors are also quite fast (competitive with current RAM) and energy-efficient, which means easier cooling and makes a 3D layout feasible.

    Photonics. Optical buses are finding their way into computers, and network hardware manufacturers are looking for ways to perform some basic switching directly with light. Some day these two trends may converge to produce an optical computer chip free from the limitations of electrical resistance/heat and EM interference, which could therefore operate at a higher clock speed. It would be more energy-efficient, too.

    Spintronics. Probably further in the future, but a potentially very high-density and low-power technology, actively developed by IBM, Hynix and a bunch of others. It would push our computation-density and power-efficiency limits to another level, as it allows some computation to be done with electron spin and magnetic state rather than with charge flowing as electrical current (excuse my layman's understanding).

    Quantum computing. This could qualitatively speed up whole classes of tasks, potentially bringing AI and simulation applications to new levels of performance. The only commercial offering so far is D-Wave, and it's a quantum annealer rather than a general-purpose gate-model QC, but so many labs are working on this that results are bound to come soon.

  • by TheRaven64 ( 641858 ) on Wednesday January 08, 2014 @06:20AM (#45896105) Journal
    Speaking as someone who works with FPGAs on a daily basis and has previously done GPGPU compiler work, that's complete nonsense. If you have an algorithm that:
    • Mostly uses floating point arithmetic
    • Is embarrassingly parallel
    • Has simple memory access patterns
    • Has non-branching flow control

    Then a GPU will typically beat an FPGA solution. But there's a pretty large problem space for which GPUs suck. If your memory access is predictable but doesn't fit the stride models a GPU is designed for, then an FPGA with a well-designed memory interface and a tenth of the GPU's arithmetic performance will easily run faster. If you have a long sequence of operations that maps well to a dataflow processor, an FPGA-based implementation can also be faster, especially if there's a lot of branching.
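    For concreteness, here is a minimal CUDA sketch (my own made-up SAXPY-style example, not something from the parent post) of a workload that meets all four criteria above: pure floating-point arithmetic, one independent output element per thread, unit-stride (coalesced) memory access, and no data-dependent branching.

        #include <cuda_runtime.h>
        #include <cstdio>

        // y[i] = a * x[i] + y[i]: one independent floating-point result per
        // thread, unit-stride loads/stores, no divergent branching beyond
        // the bounds check.
        __global__ void saxpy(int n, float a, const float *x, float *y)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                y[i] = a * x[i] + y[i];
        }

        int main()
        {
            const int n = 1 << 20;
            float *x, *y;
            cudaMallocManaged(&x, n * sizeof(float));
            cudaMallocManaged(&y, n * sizeof(float));
            for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

            saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
            cudaDeviceSynchronize();

            printf("y[0] = %f\n", y[0]);  // expect 4.0
            cudaFree(x);
            cudaFree(y);
            return 0;
        }

    Replace the unit-stride index with a data-dependent gather, or add heavily divergent branches, and GPU utilization collapses; that is exactly the territory where a well-pipelined FPGA design can win despite far lower peak FLOPS.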

    Neither is a panacea, but saying a GPU is always faster and cheaper than an FPGA makes as much sense as saying that a GPU is always faster and cheaper than a general-purpose CPU.

  • by Katatsumuri ( 1137173 ) on Wednesday January 08, 2014 @10:07AM (#45897163)

    I looked up some companies by name (too bad you posted as AC and didn't mention them), and here is what I found:

    Intel reveals a neuromorphic chip design based on memristors and spintronics [technologyreview.com]

    HP and Hynix postpone memristor-based memory to avoid cannibalizing their flash business [xbitlabs.com]

    This pearl deserves to be quoted:

    "In terms of commercialization, we will have something technologically viable by the end of next year. Our partner, Hynix, is a major producer of flash memory, and memristors will cannibalize its existing business by replacing some flash memory with a different technology. So the way we time the introduction of memristors turns out to be important," said Stan Williams, Hewlett-Packard senior fellow and director of the company's cognitive systems laboratory, during a conversation at the Kavli Foundation.

    SanDisk and Toshiba are testing a ReRAM (memristor memory) chip [theinquirer.net]

    HP working with AMD, Intel, ARM and others to release memristor-based "nanostores" [thinlinedata.com].

    A working memristor has already been demonstrated in the lab by HP, and the company is now working with AMD, Intel, ARM and others to release what they call "nanostores". A chip that combines memristor memory with CPU logic could end up replacing all current microprocessor and memory architectures.

    A startup named "Crossbar" will try to beat HP to market with memristor-based ReRAM. [crossbar-inc.com]
