HP Hardware

HP Answers The Question: Moore's Law Is Ending. Now What? (hpe.com)

Long-time Slashdot reader Paul Fernhout writes: R. Stanley Williams, of Hewlett Packard Labs, wrote a report exploring the end of Moore's Law, saying it "could be the best thing that has happened in computing since the beginning of Moore's law. Confronting the end of an epoch should enable a new era of creativity by encouraging computer scientists to invent biologically inspired devices, circuits, and architectures implemented using recently emerging technologies." This idea is also explored in a broader, shorter article by Curt Hopkins, also of HP Labs.
Williams argues that "The effort to scale silicon CMOS overwhelmingly dominated the intellectual and financial capital investments of industry, government, and academia, starving investigations across broad segments of computer science and locking in one dominant model for computers, the von Neumann architecture." And Hopkins points to three alternatives already being developed at Hewlett Packard Enterprise -- neuromorphic computing, photonic computing, and Memory-Driven Computing. "All three technologies have been successfully tested in prototype devices, but MDC is at center stage."
  • by mykepredko ( 40154 ) on Sunday July 02, 2017 @09:15AM (#54729203) Homepage

    Yes, I know "Moore's Law" isn't a law but an observation.

    When I RTFA, it seems the author is looking at different technologies to continue growing computing capability for a given unit of space. I also get the impression that Mr. Williams is looking to fund projects he has an eye on by saying that Si-based chips will soon no longer be economically improvable and that VC/investment money should be going to alternative technologies rather than continued shrinking of Si chip features.

    Unfortunately, I don't see a fundamental shift in what Mr. Williams is looking for the resulting devices to do. I would think that if he was really planning on dealing with the end of Moore's Law, he would be looking at new paradigms in how to perform the required tasks, not new ways of doing the same things we do now.

    Regardless, the physical end of our ability to grow the number of devices on a slab of Si has been forecast for more than forty years now. Don't forget that as the devices have gotten smaller, the overall wafer and chip sizes have grown, as have yields, which means a continuing drop in cost per Si capacitor/transistor along with an increase in capability per chip. I would be hesitant to invest in technologies that depend on Si chips ending their trend of becoming cheaper and more capable over the next few years.
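    To put the parent's cost argument in concrete terms, here is a minimal back-of-the-envelope sketch. Every number in it is a hypothetical placeholder rather than real foundry data, but it shows how larger wafers, better yields, and denser dies keep pushing cost per transistor down:

```cpp
// Hypothetical back-of-the-envelope cost-per-transistor estimate.
// All numbers below are made-up placeholders, not real foundry figures.
#include <cstdio>

int main() {
    const double wafer_cost_usd      = 9000.0;   // assumed cost of one processed 300 mm wafer
    const double dies_per_wafer      = 600.0;    // assumed gross dies per wafer
    const double yield_fraction      = 0.80;     // assumed fraction of good dies
    const double transistors_per_die = 2.0e9;    // assumed transistor count per die

    const double good_dies = dies_per_wafer * yield_fraction;
    const double cost_per_transistor =
        wafer_cost_usd / (good_dies * transistors_per_die);

    // Larger wafers, better yields, and denser dies all push this number down,
    // which is the "continuing drop in cost per transistor" described above.
    std::printf("cost per transistor: %.3e USD\n", cost_per_transistor);
    return 0;
}
```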

    • A slight disagreement: R. Stanley Williams is interested in other solutions, as he specifically refers to options other than von Neumann architecture computing. Considering he is from HP, one might surmise he is looking to MDC as well as their (to me) vague concept of The Machine. I have yet to read the other article that is concerned with The Machine.

      The issue he offers up for consideration is that the additional billions needed to move Moore one more step, to 5nm or beyond, would be better spent looking to other
      • Update: the Williams piece does expand on and explain The Machine idea. I had thought it was in the other article, but that was mostly fluff (no insult intended) and linked to the Williams piece.

        Slogging through The Machine. Better than nothing, but I tire of not having actual gear to use; same for 3D XPoint, for that matter.
  • by Anonymous Coward

    Stopping Moore's law means that millennial programmers' addiction to bloatware like react.js and Electron will be curbed and we will be forced to write lean, mean programs again. Remember when a gigabyte of RAM was a high-end workstation? Now it is a cheap netbook.

    • Don't tell them they might have to bin their handholding, inefficient, bloated frameworks, or have to trade in their scripting or VM languages for something that compiles to machine code, where they might - horrors! - actually have to have a clue about how memory (de)allocation, threading, multi-process code, DB normalisation, and sockets actually work. Or know how to pick the best sorting algorithm for the data size and complexity they're working with and not just hope the 21 year old hipster who wrote the dUdeF
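      On the point about picking a sorting algorithm for the data size, here is a minimal C++ sketch; the cutoff value is an assumption chosen for illustration, not a tuned constant:

```cpp
// Minimal sketch of "pick the sort for the data size": for tiny inputs a simple
// insertion sort often beats a general-purpose sort because of lower overhead;
// for anything larger, fall back to the standard library's introsort.
#include <algorithm>
#include <cstddef>
#include <vector>

void sort_by_size(std::vector<int>& v) {
    constexpr std::size_t kSmall = 32;  // assumed cutoff for "small" inputs
    if (v.size() <= kSmall) {
        // Insertion sort: O(n^2) worst case, but low overhead and cache-friendly
        // for small n.
        for (std::size_t i = 1; i < v.size(); ++i) {
            int key = v[i];
            std::size_t j = i;
            while (j > 0 && v[j - 1] > key) {
                v[j] = v[j - 1];
                --j;
            }
            v[j] = key;
        }
    } else {
        std::sort(v.begin(), v.end());  // introsort: O(n log n)
    }
}
```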

  • by haruchai ( 17472 ) on Sunday July 02, 2017 @09:33AM (#54729255)

    Sounds like HP is about to make an itanic breakthrough

  • We start to megazord the chips, kinda like those HBM memory modules, but for several things like cores, GPUs, or even lower-level components like ALUs, pipelines, etc.
    Of course, we're talking about a cooling and interconnecting hell here, but it will probably be more economically feasible than trying to make sub-atomic transistors.

  • It's not good for anyone else. Fast, simple, cheap improvements mean my computer today is absurdly faster than the C64 I had 30 years ago. And my dad can tell you how much faster those were than vacuum tubes 60 years ago. A friend of mine has two classic cars, ~30 and ~50 years old. Maybe they're not quite as reliable or safe as modern cars, but they go fast enough to mingle well with other cars. I think it'd be pretty sad if in 2047 base performance is pretty much the same and we just do it "smarter"

    • by swilver ( 617741 )

      I wouldn't expect too much improvement on the algorithm side of things; we've already been optimizing those as much as possible for years because current hardware can't keep up.

      As for more cores, eventually power and heat requirements will become too large. Even if you can generate free power, the heat has to go somewhere, and within just a few more iterations that becomes a practical limit if we want to keep following Moore's law. At the point where your computer is providing the heat for your entire house

      • I wouldn't expect too much improvement on the algorithm side of things; we've already been optimizing those as much as possible for years because current hardware can't keep up.

        I think that they're talking about doing more low-level programming instead of things like Java.

    • I don't know how quickly stuff will change, but we have two fronts: first, new main memory technologies, like magnetic-based RAMs; and second, different operating systems to exploit them. Better/different operating systems are probably the slow part.

      Magnetic-based RAM (memristor, see: https://en.wikipedia.org/wiki/... [wikipedia.org]; magnetic-based main memory that holds its contents when switched off) will blur the distinction between RAM and "hard drive" or any other external memory.

      Bottom line: that means with a new operating sys
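      As an illustration of how persistent main memory would blur the RAM/storage line, here is a minimal sketch that treats a memory-mapped file like an ordinary in-memory variable that survives a restart. It assumes a POSIX system, the file name is a placeholder, and real non-volatile main memory (memristor, MRAM, etc.) would not need the explicit msync():

```cpp
// Minimal sketch of the "RAM vs. storage blurs" idea: map a file into the
// address space and use it like ordinary memory. Error handling is minimal.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>

int main() {
    const char* path = "counter.bin";            // hypothetical backing file
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) return 1;
    if (ftruncate(fd, sizeof(long)) != 0) return 1;  // reserve space for one counter

    long* counter = static_cast<long*>(
        mmap(nullptr, sizeof(long), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    if (counter == MAP_FAILED) return 1;

    ++*counter;                                  // looks like a plain in-memory update
    msync(counter, sizeof(long), MS_SYNC);       // but the value survives a power cycle
    std::printf("this program has run %ld times\n", *counter);

    munmap(counter, sizeof(long));
    close(fd);
    return 0;
}
```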

    • by HiThere ( 15173 )

      Considering the alternatives that he is proposing, and the increasing costs of fabs as features shrink, it might be a good thing.

      OTOH, either neuromorphic chips or memory centered chips will drastically redefine the art of programming, possibly rendering current skills obsolete. So it's not likely to be good for the current programmers. (Even multiprocessing needs changes that aren't yet widespread.) The third option, photonic computing, might well not redefine programming, but that's a "might" depending

  • There's nothing good about this at all for a consumer. Maybe for a lazy, has-been corporation like HP, which can wallow in its simplistic and outdated designs that barely need to change. This is a sign that a market has reached stagnation and has nowhere else to go. To a hardware and computer nerd, this is a dark age.

  • by Anonymous Coward

    We've all observed that as CPU and RAM have increased in speed/capacity, software has bloated horribly. Maybe now it's time to rethink crap like .net* and layers of virtualization, and go back to efficient code writing.

    * (I've seen many major software projects more than quadruple in size, become buggier, and run much slower when they switched to .net)

    • Oh yeah, the useless VM layers: combat the problem of JIT emulation being slower than native by making everything run as slow as JIT emulation. Google is particularly nasty on this, because they refuse to ship ahead-of-time compiled binaries (for Dalvik apps) even for the ARM 32-bit and ARM 64-bit ISAs. ISA egalitarianism, I guess. Obscure ISAs should have the same opportunities as dominant ones.
  • by Anonymous Coward

    Moore's Law has allowed too many to justify lazy design decisions and programming in computer architecture. With the end of Moore's Law, progress will come by offloading the main CPUs as much as possible, echoing the construction of mainframes way back when. By coprocessing I/O, audio, and even most of the GUI and OS kernel, the main CPUs can be reserved for actual user processes. Much of the housekeeping of a modern OS does not demand the degree of processing power of the main CPUs, but often hardware has bee

    • And it always bites them in the end, because the market tends to take advantage of Moore's Law to shift to ever-smaller and cheaper computers. The fatness of X.org made it impossible to sell most variants of Unix with ordinary PCs, and the Great Fattening(tm) of Windows NT during the development of Vista made it impossible to make Windows run well in tablet and subnotebook form factors for a long time, which made it hard for Microsoft to compete with the iPad and the Macbook Air. Even Google's inefficient d
  • Moore's Fork (Score:5, Interesting)

    by chewie2010 ( 2551696 ) on Sunday July 02, 2017 @10:29AM (#54729521)
    Moore's law might not directly hold true with multi-core x86s, but we now live in a world of differentiated processor power: ARMs specialized for HD streaming, or gaming, or AI, or autonomous cars, or sensors for a wearable. You can buy an $80 tablet that will stream HD better than a nice 4-year-old laptop. The reason is that engineers are now focused on low-cost processors for specific purposes. See Intel's purchase of Nervana for how Moore's law has forked.
    • by GuB-42 ( 2483988 )

      In fact, it is an old trend that is coming back.
      Old systems had plenty of specialized chips for sound, graphics, IO, etc. As CPUs became more powerful, these specialized features ended up as software run by the CPU. Now these accelerator chips are coming back, the difference being that they tend to be on the same die as the main CPU (SoC).

  • Your grandfather's Hewlett Packard made calculators that were the envy of engineers everywhere. The pilgrims of NASA jet-packed to the blast-proof Taj Mahal by the Boeing load.

    Your father's HP made printer ink that was the envy of Rupert Murdoch. Bean counters sprouted sturdy beanstalks, and spouted unto the clouds in ecstasy. (This was before the one true cloud to rule them all.)

    Today's HP makes drivel that's the envy of one last, eccentric greybeard who lives in a ratty shack near the beach, with old n

  • What it really means is that "AI" won't happen. Autonomous car driving won't happen. Lots of things that AI and Space Nutters want to happen won't happen. We have been spoiled by Moore's Law, and recent technological progress has depended on it. If you look at the claims of AI nutters, they all say "well, computers are X times as fast now as they were in the last decade, imagine how fast computers will be in the next 10 years!!!" The answer is "not much faster". In fact, the computer you own now isn't significantly faster than the one you had 5 years ago.
    • The answer is "not much faster". In fact, the computer you own now isn't significantly faster than the one you had 5 years ago.

      No, the answer is "much faster" using dedicated hardware designs for AI applications. The first generation of Google's TPU was already 15 to 30 times faster than a generic GPU, and used much less power. This was their first attempt, and Google's not even an experienced high performance chip designer.

      There is still a lot to gain, not only by limiting precision and implementing special functions, but also by recognizing that occasional mistakes are not a big problem, which allows a more efficient design.
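      To make the "limiting precision" point concrete, here is a minimal sketch of the kind of 8-bit integer dot product that inference accelerators trade 32-bit floating point for; the quantization scales and values are made up for illustration, since real systems calibrate them from data:

```cpp
// Minimal sketch of reduced-precision inference arithmetic: quantize float
// weights/activations to 8-bit integers, do the multiply-accumulate in cheap
// integer math, then rescale.
#include <cstdint>
#include <cstdio>
#include <vector>

int32_t int8_dot(const std::vector<int8_t>& a, const std::vector<int8_t>& b) {
    int32_t acc = 0;                     // 32-bit accumulator avoids overflow
    for (std::size_t i = 0; i < a.size(); ++i)
        acc += static_cast<int32_t>(a[i]) * static_cast<int32_t>(b[i]);
    return acc;
}

int main() {
    // Hypothetical quantization scales: real_value is roughly q * scale.
    const float scale_a = 0.05f, scale_b = 0.02f;
    std::vector<int8_t> a = {20, -14, 7, 127};   // quantized activations
    std::vector<int8_t> b = {10, 33, -8, 5};     // quantized weights

    float approx = int8_dot(a, b) * scale_a * scale_b;
    std::printf("approximate dot product: %f\n", approx);
    // The result is only approximate, but occasional small errors are
    // acceptable for neural-network inference, and 8-bit multiply-accumulates
    // are far cheaper in silicon than 32-bit floating point.
    return 0;
}
```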

      • Negative. You won't get "much faster", certainly not by Moore's Law standards. Face it, the digital age is flattening out. You have been spoiled. We would need some other sort of leap to get back to the progress we enjoyed with Moore's Law.
  • Ever question why there are constantly so many versions, updates, patches, and problems? Because the hardware keeps getting updated, which gives us new potential features and introduces new problems.

    Old phones keep slowing down, and losing battery life because of this.

    With consistent hardware, we can:

    a) Take more time and develop software WITHOUT bugs. Yes, I know this sounds ridiculous, because no one can take years to develop software when their competitors do it in weeks. No longer a problem once the hardw

  • by SvnLyrBrto ( 62138 ) on Sunday July 02, 2017 @01:31PM (#54730367)

    For starters, I've been reading "the sky is about to fall" articles for at least 20 years about how: "In 2-3 more years Moore's Law is going to slam into a barrier imposed by the laws of physics.". The entire world of computing will come crashing down and burn, the beast will rise from the pit, the keymaster and gatekeeper will find each other, the dead will dig themselves up from their graves, dogs and cats living together, mass hysteria. The doom and gloom crowd have been wrong every time so far. Every time, some clever person at IBM or Intel has figured a way to cross the streams and save the world. So why should I trust the chicken littles that we're living in the end times now?

    And even if Moore's Law slows down or pauses, there's plenty of room in the hardware we have today for continued improvement on the software side. Developers will just have to rely less on "pay no attention to the man behind the curtain" languages and frameworks and go back to optimizing their code for performance... like they used to before crap like Java, .Net, and Rails encouraged everyone to be lazy and rely on ever-improving CPUs to make their apps not suck. Why should the hardware guys do all the work, after all? Hell, they can start by writing their code to be properly multi-threaded. My desktop, for example, has a core i7 with 4 physical cores and 8 virtual ones via hyper-threading. I couldn't begin to count the number of times I've watched some program or another run a single core up to 100%, stop there, and ignore the 7 other threads it could be running simultaneously to improve its performance nearly 8-fold, no new or faster hardware needed.
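    As a concrete version of the multi-threading point, here is a minimal sketch that spreads a job across however many hardware threads the machine reports; the workload (summing a large array) is just a placeholder for illustration:

```cpp
// Minimal sketch of splitting an embarrassingly parallel job across all
// available hardware threads instead of pinning one core at 100%.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <future>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<std::uint64_t> data(10'000'000, 1);

    unsigned n_threads = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = data.size() / n_threads;

    std::vector<std::future<std::uint64_t>> parts;
    for (unsigned t = 0; t < n_threads; ++t) {
        auto begin = data.begin() + t * chunk;
        auto end   = (t == n_threads - 1) ? data.end() : begin + chunk;
        // Each chunk is summed on its own thread.
        parts.push_back(std::async(std::launch::async, [begin, end] {
            return std::accumulate(begin, end, std::uint64_t{0});
        }));
    }

    std::uint64_t total = 0;
    for (auto& p : parts) total += p.get();
    std::printf("sum = %llu using %u threads\n",
                static_cast<unsigned long long>(total), n_threads);
    return 0;
}
```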

    • Moore's Law is already dead. It has been dead for multiple years. Perhaps you haven't noticed.
  • As transistors get closer to the minimum size allowed, and with CPUs unable to push past 4.3 GHz (unless extreme cooling is used), the CPU has reached stagnation, and it's just clever marketing, with little features built into each processor to differentiate it (along with the stupid naming conventions now used, which mean nothing to many people). That means we should be looking elsewhere.

    We need to dump silicon, or find a hybrid way of using the current nm production process with a new material to increase computing p

  • So if Moore's law is ending, we have Betteridge's law to let us know it is not. Thanks for putting that question mark on the end of the headline! We are saved.
  • Lay off 50% of the employees every 18 months.
  • is Apple doomed, or will it be the Year of the Linux Desktop?
