Why Intel Insists Rumors Of The Demise Of Moore's Law Are Greatly Exaggerated (fastcompany.com)

From an article on FastCompany: Intel hasn't lost its zeal for big leaps in computing, even as it changes the way it introduces new chips and branches beyond the PC processor into other areas like computer vision and the internet of things. "Number one, too many people have been writing about the end of Moore's law, and we have to correct that misimpression," Mark Bohr, Intel's technology and manufacturing group senior fellow and director of process architecture and integration, says in an interview. "And number two, Intel has developed some pretty compelling technologies ... that not only prove that Moore's law is still alive, but that it's going to continue to provide the best benefits of density, cost performance, and power." But while Moore's law soldiers on, it's no longer associated with the types of performance gains Intel was making 10 to 20 years ago. The practical benefits of Moore's law are not what they used to be. [...] For each new generation of microprocessor, Intel used to adhere to a two-step cycle called the "tick-tock." The "tick" is where Moore's law takes effect, using a new manufacturing process to shrink the size of each transistor and pack more of them onto a chip. The subsequent "tock" introduces a new microarchitecture, which yields further performance improvements by optimizing how the chip carries out instructions. Intel would typically go through this cycle once every two years. But in recent years, shrinking the size of transistors has become more challenging, and in 2016, Intel made a major change. The latest 14 nm process added a third "optimization" step after the architectural change, with modest performance improvements and new features such as 4K HDR video support. And in January, Intel said it would add a fourth optimization step, stretching the cycle out even further. The move to a 10 nm process won't happen until the second half of 2017, three years after the last "tick," and Intel expects the new four-step process to repeat itself. This "hyper scaling" allows computing power to continue to increase while needing fewer changes in the manufacturing process. If you measure density by the number of transistors per unit area in two common logic cells, as Intel now proposes, the rate of improvement still works out to more than doubling every two years, keeping Moore's law on track. "Yes, they've taken longer, but we've taken bigger steps," Bohr said during his three-hour presentation.
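A quick way to sanity-check the "bigger steps, longer cadence" arithmetic is to normalize each node transition to the classic two-year Moore's-law period. Here is a minimal Python sketch; the 2.7x density gain is an illustrative assumption for the 14 nm to 10 nm jump, not Intel's published figure:

```python
# Normalize a node transition to the classic two-year Moore's-law period.
# The density multipliers below are illustrative assumptions, not Intel's
# published numbers.

def two_year_equivalent(density_gain: float, cadence_years: float) -> float:
    """Density multiple per two-year period for a given node transition."""
    return density_gain ** (2.0 / cadence_years)

# Classic tick-tock: 2x density on a 2-year cadence -> exactly 2x per 2 years.
print(two_year_equivalent(2.0, 2.0))   # 2.0

# "Hyper scaling" (assumed): a bigger 2.7x jump, but on a 3-year cadence.
print(two_year_equivalent(2.7, 3.0))   # ~1.94, just shy of a clean doubling
```

Whether the longer cadence keeps pace with a two-year doubling comes down to whether each node's density gain exceeds 2 raised to (cadence / 2).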
  • by Tablizer ( 95088 ) on Tuesday April 04, 2017 @12:45PM (#54171317) Journal

    Seems the size of the paragraph doubles every 2 stories.

    • Seems the size of the paragraph doubles every 2 stories.

      Meanwhile, processor speed has been stuck at 4 GHz for several years. Adding cores improves performance only while you have a separate process you can assign in parallel to each core.
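      This point can be made precise with Amdahl's law, which caps the speedup from extra cores by the serial fraction of the workload. A minimal sketch; the 90% parallel fraction is an assumed example, not a measurement:

      ```python
      # Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction
      # of the work that can run in parallel and n is the core count.

      def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
          serial = 1.0 - parallel_fraction
          return 1.0 / (serial + parallel_fraction / cores)

      # Even a workload that is 90% parallel tops out well below n-times speedup.
      for cores in (2, 4, 8, 64):
          print(f"{cores:>2} cores -> {amdahl_speedup(0.9, cores):.2f}x")
      # 2 -> 1.82x, 4 -> 3.08x, 8 -> 4.71x, 64 -> 8.77x
      ```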

  • Lack of competition, not difficulty, is what is slowing processor speed growth. One or two major competitors just don't provide enough competition, so there is no reason for them to innovate. They hold onto new technologies for years and years until AMD makes an announcement; then all of a sudden Intel reduces prices or makes an announcement of its own. Innovation comes from competition, not from patents that seem to last way too long or force expensive licensing, and large corporations gobbling up any business they even has
  • Hm.. (Score:5, Insightful)

    by TFlan91 ( 2615727 ) on Tuesday April 04, 2017 @12:56PM (#54171405)

    "This "hyper scaling" allows computing power to continue to increase while needing fewer changes in the manufacturing process."

    This "hyper scaling" allows Intel to continue to milk customers who expect more than modest gains with every generation.

    FTFY

  • Moore or Less Law (Score:5, Insightful)

    by Oswald McWeany ( 2428506 ) on Tuesday April 04, 2017 @12:58PM (#54171429)

    Transistor density may still be increasing, but computers aren't seeing the revolutionary jumps in power and performance; it's not scaling through to us end users. I have a 5 year old PC at home I built, and it rivals most of the mainstream PCs being put out today. Even if Moore's law is still holding true, it's not really relevant anymore.

    Computers aren't getting much faster any more. Processors may be getting smaller as transistor density gets higher, but your average home PC isn't getting much better.

    • I have a 5 year old PC at home I built, and it rivals most of the mainstream PCs being put out today.

      Yes, it has never been a better time to buy a used computer. Scoring a used system with Windows 7 (or earlier) is a bonus.

      • by rakslice ( 90330 )

        I realized recently this situation is happening with laptops: reasonable quality quad core 1920x1080 systems from 4 generations ago are showing up for under $500 US, and at that price Big Box still wants to sell you a 1366x768 dual core system (in light of how stagnant it's been, it almost seems like an accomplishment that they've even gotten past wanting to only put 4GB of RAM in them).

    • by Anonymous Coward

      I think what's happening is they do get twice the performance per area, but it's also twice as expensive per area. Who cares about Moore's law if "performance per dollar" stays the same? Heck, the best value per dollar [cpubenchmark.net] for a processor scoring at least 10000 on PassMark is a processor from 2012.

    • by Anonymous Coward
      My wife's computer is not only about 10% faster, but consumes about 30% less peak power and about 90% less idle power. Not to mention it was about 20% cheaper in raw dollar amounts, excluding inflation. Speed is not the only metric worth measuring.
      • Typically the payback for replacing based upon power savings alone is too long to be worth it. If she leaves her computer on 24/7 then maybe, but you'd realize a much better savings by having it go to sleep when it's not being used. You're usually better off just to keep using what you've got until it either is truly obsolete or breaks down.
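        The payback arithmetic is easy to run. A rough sketch; every input below is an illustrative assumption, not a measured figure:

        ```python
        # Rough payback period for replacing a PC purely for its power savings.
        # All inputs are illustrative assumptions.

        watts_saved = 90          # assumed: old machine draws 90 W more on average
        hours_per_day = 8         # assumed: machine sleeps when not in use
        usd_per_kwh = 0.12        # assumed electricity rate
        replacement_cost = 500.0  # assumed cost of the new machine, USD

        kwh_per_year = watts_saved / 1000 * hours_per_day * 365
        usd_saved_per_year = kwh_per_year * usd_per_kwh

        print(f"saved per year: ${usd_saved_per_year:.2f}")                   # ~$31.54
        print(f"payback: {replacement_cost / usd_saved_per_year:.1f} years")  # ~15.9
        ```

        Under those assumptions the payback is well over a decade, which supports the point about keeping what you've got.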

    • The average home PC not getting any better is driven by market demand.
      From the '90s to about 2010 or so, I would upgrade my parents' machine every 2 to 3 years.

      Now that they are on an SSD and 16 gig of RAM there is absolutely no reason to.
    • by Mal-2 ( 675116 )

      Same situation here. I have a machine I built in mid-2011 around the best AMD CPU available (Phenom II x6 1090T, released in 2010 -- couldn't find the 1100T at that moment) and it still runs in the middle of the i5 pack today. Even Ryzen is only a tempter. I don't need more CPU if the cost is a new motherboard and RAM as well.

  • by ilsaloving ( 1534307 ) on Tuesday April 04, 2017 @01:05PM (#54171483)

    Is it just me or does this whole diatribe just ooze "pathetic marketing maneuver"?

    It's one thing to admit that things are getting more challenging cause the low-hanging fruit is gone and Intel's having to put more time and effort into their manufacturing, but for the love of Pete, redefining Moore's Law is just lame.

    I really wish Apple had a tightly held patent on their reality distortion field cause now everyone else is trying to use it and it's just... cringeworthy.

    • by epine ( 68316 )

      Is it just me or does this whole diatribe just ooze "pathetic marketing maneuver"?

      For every action there's an equal and opposite reaction. Now that half the population has levelled-up on aggressive ignorance, the standard for those who still pretend to know better only becomes that much higher.

      Intel is very much one of those pretenders.

      Paging Gordon Moore.

      Gordon—dressed exactly like Bob Pinciotti—steps into the Tardis that just materialized outside his office, and grabs the ringing phone—

  • They claim that the emperor still has clothes...

    • Seems like a really pathetic response to Ryzen's debut. More cores with nearly equal performance per core for less $$$ is hard to argue with, so they spew this marketing blather.

      The 10-20% per year performance increase Intel has been offering is just sad and pathetic. More cores should have been the next step, but they have been slapping huge markups on anything with >4 cores for years. At least now there is some actual competition and they might wake up and start trying again.

      • This is a remake of a 1999 movie. Plot summary: AMD had a chronically weak offering, Intel was in the habit of dribbling out the performance gains. AMD suddenly came on very strong with Athlon, a completely new chip which was arguably faster than Intel and definitely cheaper. Almost overnight, Intel suddenly figured out how to make much faster chips, and so did AMD. Performance doubled, tripled, with AMD being the first to crack the 1GHz barrier the next year. That spiral continued for a few years and the users were happy, but AMD ultimately fell behind and Intel went back to their old tick-tock.

        • by dj245 ( 732906 )

          This is a remake of a 1999 movie. Plot summary: AMD had a chronically weak offering, Intel was in the habit of dribbling out the performance gains. AMD suddenly came on very strong with Athlon, a completely new chip which was arguably faster than Intel and definitely cheaper. Almost overnight, Intel suddenly figured out how to make much faster chips, and so did AMD. Performance doubled, tripled, with AMD being the first to crack the 1GHz barrier the next year. That spiral continued for a few years and the users were happy, but AMD ultimately fell behind and Intel went back to their old tick-tock.

          Obtaining higher and higher chip performance seems analogous to natural resource extraction. If it is harder and harder to keep getting the same gains, let those gains sit in the ground until they are actually needed.

      • More cores should have been the next step, but they have been slapping huge markups on anything with >4 cores for years. At least now there is some actual competition and they might wake up and start trying again.

        Until you try running something that isn't optimized for multiple-core support. Then no matter how many cores you have it doesn't help a bit. I'm not arguing that we shouldn't strive for more cores- I'm merely saying that's only one part of the puzzle.

  • Has parallel processing gone about as far as it can go due to difficulty in programming for it?
  • I originally read the title as "Why The Rumors Of Demi Moore's Demise Are Greatly Exaggerated".

  • Moorephy's Law: "If the processing power of a CPU can double every 2 years, it WILL double, and in the worst way possible. You will have a plurality of CPU cores that each want to do their own thing. And your compiler will not be able to get those cores to work with each other properly. If you code in Assembly, of course, things are very different. Your CPU cores WILL eventually learn to talk to each other, but by the time that happens in any meaningful way, you will unfortunately be a patient living in a p
  • by sinij ( 911942 ) on Tuesday April 04, 2017 @02:26PM (#54172035)
    We haven't had any noticeable gains in computing for a long while. Other than SSDs, nothing got notably faster or bigger. What is not clear to me is whether we hit diminishing returns, or whether a lack of competition allowed market leaders to rest on their laurels.

    If this is a diminishing return on hardware, then the next area is software optimization. So far, most of our progress was carried by hardware. This is not going to be a popular view among programmers, but the default mode of operation in software engineering is "how many resources do I have? Let's use them all." There is no thought given to making software leaner and more efficient, because it used to be that hardware gains over time would make such effort moot. Well, there might not be any more notable gains. We will hit the next nm fab level, get a 3D layout process in place, and not have a good way to move forward other than occasional specialized optimizations (e.g., AES acceleration). These might take the form of optional co-processors.
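    As a concrete illustration of the "specialized optimizations" point: on x86 Linux, a program can check whether the CPU advertises accelerators such as AES-NI before choosing a code path. A minimal sketch, assuming a Linux system with the usual /proc/cpuinfo layout:

    ```python
    # Check /proc/cpuinfo for specialized-instruction flags such as AES-NI.
    # Assumes Linux on x86; other platforms expose this information differently.

    def cpu_flags(path: str = "/proc/cpuinfo") -> set:
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    for feature in ("aes", "avx2", "sse4_2"):
        print(feature, "yes" if feature in flags else "not reported")
    ```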
    • Most of this is because of the proliferation of virtualization and "cloud computing."
      If most of what you do is in a web app, a virtual machine, or a compatibility layer,
      you're never going to see the hardware gains.
      Intel processors may be 10X faster than they were 10 years ago, but your ISP damn sure isn't.
      When basically all programs have to phone home before they do anything, hardware improvements become irrelevant.

  • I generally feel that Moore's Law is still well in affect. While Moore's law is strictly that the number of transistors will double every 18 months, I take a broader look at this. I see it as: processing power will double every 18 months.

    How do I define 'processing power'? That depends on a lot of aspects. Obviously, there's a CPU's raw clock speed, bogoMIPS, etc. But there are other ways we can gain power/efficiency too: multiple cores and SMT. But also power efficiency should be considered, if we can do more processing p

    • Well, you also don't know the difference between affect and effect. The fact is that Moore's Law is dead. Kaput. Processing power isn't doubling at all. It is increasing around 5-10% every 12 months.
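      For scale, here is what those growth rates mean in doubling time; a minimal sketch (41% per year is the rate that corresponds to doubling every two years):

      ```python
      # Doubling time at various annual improvement rates.
      import math

      for annual_gain in (0.05, 0.10, 0.41):  # 41%/year ~ doubling every 2 years
          years = math.log(2) / math.log(1 + annual_gain)
          print(f"{annual_gain:.0%}/year -> doubles in {years:.1f} years")
      # 5%/year  -> 14.2 years
      # 10%/year ->  7.3 years
      # 41%/year ->  2.0 years
      ```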
      • In modern American usage "affect" and "effect" mean the same thing.
      • by jwhyche ( 6192 )

        Moore's Law isn't just dead. It's dead, buried, flowers have been laid, songs have been sung, and the mourners have left.

    • by sinij ( 911942 )
      Aside from data centers and smartphones, why would power efficiency matter at all? I really don't care that my desktop now uses 75W to power the CPU at max load instead of the 150W it used to a couple of years ago. Thing is, parallelization has yet to deliver outside of a few very specific circumstances; most computing tasks will still take about the same time if you double the number of cores.
      • by Hydrian ( 183536 )

        Just because some processing power gains don't apply to you doesn't mean they don't happen or exist.

        And lucky you, to live in a place that has reliable and cheap power. Not everyone has that luxury. Places in the world that need reliable computing with unreliable power have to make sure they have properly scaled generators and power storage. This limits what they can run. If some lower-power chip comes out with the same GHz, it extends their ability to process.

  • by OneHundredAndTen ( 1523865 ) on Tuesday April 04, 2017 @02:31PM (#54172071)
    However, what matters to me is that a 10-year-old desktop computer is not very far behind, performance-wise, a 2017 desktop computer.
    • by Ramze ( 640788 )

      I agree. I have an 8 year old desktop Linux PC that plays Netflix, streams youtube, twitch, etc. just fine, and can play 1080p video both x264 and x265 just fine. The x265 stresses the dual core CPU a bit, but no frames lost. It's not my only PC, and it's more of a Linux toy and a backup machine for when my other PCs are busy... but, there's really not much it can't do other than play serious games (though I could upgrade the vid card on the cheap). I could put Win10 on it and it'd likely get more

  • I wonder if PC making is entering the same regime as the aircraft industry has been in for quite some time? Fifty years ago, one could do London to New York as fast as it can be done today - faster, for Concorde was debuting. Today, we have the so-called Dreamliner, an airplane that is said to be revolutionary. What do we, travelers, get to see? Well, its windows are very small, rather than teeny-weeny; the air pressure in the cabin is slightly higher than before, but still not quite sea level; and the humi
  • It was wrongly stated anyway. Should have said by a golden ratio every year instead of doubles every 18 months (which is only a 4% difference). And it's much more believable that it would grow proportionate to Fibonacci numbers every year (parallel to release cycles) because the rate of tech progress is a recurrence relation to the level of existing tech.
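    A quick check of that arithmetic: this sketch compares the annualized rate of "doubles every 18 months" against the golden ratio, and the gap comes out in the low single digits per year:

    ```python
    # Compare "doubles every 18 months" (annualized) with the golden ratio.
    phi = (1 + 5 ** 0.5) / 2        # ~1.6180 per year
    annualized = 2 ** (1 / 1.5)     # ~1.5874 per year

    print(f"golden ratio per year:    {phi:.4f}")
    print(f"2x per 18 months, /year:  {annualized:.4f}")
    print(f"gap: {phi / annualized - 1:.1%} per year")  # ~1.9%
    ```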
  • Why Intel Insists Rumors Of The Demise Of Moore's Law Are Greatly Exaggerated
    Because Intel wants you to buy their CPUs which haven't seen worthwhile improvements in over 5 years.

    Unless you need bleeding edge performance, just pay half the money and get a Ryzen CPU.

    • No. If you want bleeding edge performance, find a Ryzen CPU/mobo/memory combo that lets you boost the memory to 3600 and the CPU to 4.1 GHz and remain stable. At that point you will be equal or better in performance to the 7700K @ 5 GHz in games (interestingly, memory speed increases beyond 2400 have very little effect on Kaby Lake performance) and better than everything else in the other metrics.

      • The 7700k isn't the performance king. Some "Extreme" series (6870x or whatever) is. The 7700k is the popular gaming CPU.

        Most Ryzen chips can't sustain 5.1 GHz on air. The highspeed memory situation is getting better, but it's still a crapshoot in many cases.

        Another issue is that as you crank up the memory speed, increasing the speed of the "infinity fabric" connecting the CCX units, you have more power draw and more heat to deal with, which can actually hurt performance in certain workloads.

        For anything

  • I remember Intel trying to tell us that you improve the efficiency of a computer by making it run harder to complete tasks faster. They love to rewrite logic using words that happen to sound like they're filling some sort of gap. Truth is, Moore's law has only ever been about increasing transistor density, nothing else. Optimizing architectural changes has nothing to do with it. Intel lies again.
