Ask Slashdot: Why Are There No Huge Leaps Forward In CPU/GPU Power?

dryriver writes: We all know that CPUs and GPUs and other electronic chips get a little faster with each generation produced. But one thing never seems to happen -- a CPU/GPU manufacturer suddenly announcing a next generation chip that is, say, 4-8 times faster than the fastest model they had 2 years ago. There are moderate leaps forward all the time, but seemingly never a HUGE leap forward due to, say, someone clever in R&D discovering a much faster way to process computing instructions. Is this because huge leaps forward in computing power are technically or physically impossible/improbable? Or is nobody in R&D looking for that huge leap forward, and rather focused on delivering a moderate leap forward every 2 years? Maybe striving for that "rare huge leap forward in computing power" is simply too expensive for chip manufacturers? Precisely what is the reason that there is never a next-gen CPU or GPU that is, say, advertised as being 16 times faster than the one that came 2 years before it due to some major breakthrough in chip engineering and manufacturing?

  • One word (Score:5, Insightful)

    by sl3xd ( 111641 ) on Friday March 03, 2017 @08:52PM (#53973241) Journal

    Physics

    • Re:One word (Score:5, Informative)

      by sl3xd ( 111641 ) on Friday March 03, 2017 @09:04PM (#53973319) Journal

      To elaborate: We can't reliably clock Silicon much faster than we're doing right now.

      There are other semiconductors (such as GaAs) which can operate reliably at higher frequencies, but they are absurdly expensive, produce too much heat, consume too much power, and so on -- not to mention the fact our tiny process sizes for silicon don't exactly work for entirely different materials (chemistry bites again).

      We're running into a similar wall for die shrinkage, on multiple fronts:

        - We're getting into the size territory where bits flip due to quantum tunneling, which tends to hurt reliability. Flash storage has started to reach that territory, if my colleagues working for ${SSD MANUFACTURER} are telling me the truth.
        - Yields of working units are going down significantly as the die shrinks, and it's taking a lot longer to figure out how to bring yields back up.

      In the end, every material has its limits, and we're starting to run into them with Silicon, and there isn't a material that 'stands out' as worth betting the business on.

      • Yields of working units are going down significantly as the die shrinks, and it's taking a lot longer to figure out how to bring yields back up.

        Actually no, the yield is higher for smaller die sizes at a given technology, since the likelihood of having a defect on your die is lower.

        On the other hand, time-to-ramp is longer for advanced tech nodes, although some companies like TSMC have shown impressive numbers.
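
        A quick way to see both sides of the yield argument is the classic Poisson yield model, Y = exp(-D * A), where D is the defect density per unit area and A is the die area: smaller dies on the same process yield better, but a freshly ramped process usually starts with a higher D. Here is a minimal sketch in C; the defect densities and die areas are made-up illustrative numbers, not figures from any foundry.

        /* Toy Poisson yield model: Y = exp(-D * A).
         * Smaller dies on the SAME process (same defect density D) yield better,
         * but a new process typically starts with a higher D -- the "time to ramp"
         * part of the argument. All numbers below are illustrative only. */
        #include <math.h>
        #include <stdio.h>

        static double yield(double defects_per_cm2, double die_area_cm2)
        {
            return exp(-defects_per_cm2 * die_area_cm2);
        }

        int main(void)
        {
            const double big_die = 4.0, small_die = 1.0;   /* cm^2, hypothetical */
            const double mature_D = 0.1, early_D = 0.5;    /* defects/cm^2, hypothetical */

            printf("mature node: big die %.0f%%, small die %.0f%%\n",
                   100 * yield(mature_D, big_die), 100 * yield(mature_D, small_die));
            printf("early node:  big die %.0f%%, small die %.0f%%\n",
                   100 * yield(early_D, big_die), 100 * yield(early_D, small_die));
            return 0;
        }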

        • Was about to write something like you did, however:
          since the likelihood of having a defect on your die is lower.
          No, the "likelihood" is the same. You have X defects on a wafer, so up to X chips will end up with a defect. However, as you increase the number of chips you produce from one wafer, the percentage of defective chips shrinks. In other words: the smaller the chips, the more you get from one wafer. The number of defective chips stays the same, though.

        • by dbIII ( 701233 )
          The critical flaw size shrinks along with the die.
          Thus, with all else being equal, more flaws matter: flaws that previously were too small to matter now do.
          I hope I made that clear enough.
      • Comment removed (Score:5, Informative)

        by account_deleted ( 4530225 ) on Friday March 03, 2017 @10:22PM (#53973655)
        Comment removed based on user account deletion
        • by rtb61 ( 674572 )

          Not the physics you are thinking of, though. Let's use the favoured Slashdot car analogy. Why are cars not getting more and more powerful? Why isn't the average consumer car capable of 300 km/h? Because it could not be used safely or legally, it is a waste of energy and resources, and it serves no purpose except to allow a few to stroke their egos whilst stroking their private parts. Sure, there is a market for it, a very tiny market (heh heh), but not sufficient to continue development; most people want more fuel

          • Central high-power cloud machines are just a disaster waiting to happen; how many times does this have to be proven?

            Once would be a good start. Do you really think that people are not designing fault-tolerant network infrastructure?

    • No, the reason is marketing. If you were Intel and had created a processor 4-8 times faster in workload than current ones, would you sell it? Or would you sell a lesser model that was only 2x as fast, then the next year release the 3x-as-fast edition? You can stretch one development out for 10 years this way. And if the military makes it, we'll never see it.
    • by mjwx ( 966435 )

      Physics

      Yes and no.

      Yes, as in there is a limit to what we can do with silicon and transistors, but also no, because of the way innovation tapers off after a few decades. It's the same reason that we don't see huge leaps in car, aeroplane and oven technology. It's because the design has matured to a point where, for the most part, we're just adding minor improvements to tried and tested designs. Intel/AMD/NVIDIA have pretty much reached this point and it will take a disruptive technology to change that.

      Said disrupt

      • Computers were first built back in the 40's... we didn't get them into the home until the 70's.

        The computers built in the 1940's used valves, not silicon. The first transistor-based computers were in the early 1950's so that's when the clock should start ticking since valve-based computers were clearly never going to be a consumer item. The same may be true of the next generation of computer technology - the current tech for quantum computers is not really consumer friendly if that turns out to be the next generation technological platform.

        • by mjwx ( 966435 )

          Computers were first built back in the 40's... we didn't get them into the home until the 70's.

          The first transistor-based computers were in the early 1950's so that's when the clock should start ticking since valve-based computers were clearly never going to be a consumer item. The same may be true of the next generation of computer technology - the current tech for quantum computers is not really consumer friendly if that turns out to be the next generation technological platform.

          Fair enough, it's 2:30 where I live and I didn't feel like reading the Wikipedia article on computers to get exact dates. However, that's still 20 years from prototype to home product, so I stand by my point.

          I forget where I read it (back in high school, which is a while ago for some of us), but it takes 25 years from the point where a new technology becomes available for it to integrate into our lives. Replacements for silicon are largely still theoretical.

  • Those leaps are in the works, in the form of spintronics, quantum computing, and photonics.
  • Market (Score:5, Informative)

    by Shaman ( 1148 ) <shaman@@@kos...net> on Friday March 03, 2017 @08:55PM (#53973261) Homepage

    Most likely, there is no major competition in the market, and PC sales on the whole have slowed considerably. A modern 6800K processor is as close as you'll come to a leap forward, but it's $1100 Canadian and requires a similarly expensive motherboard + memory. Same with similar chips.

    Meanwhile, the cheapest system on the market is as fast as a moderately high-grade enthusiast computer from 2010 and probably has reasonable 3D graphics onboard; with an SSD it will feel quite snappy.

    So, a) not a lot of market demand for faster systems, b) lots of tablets and game consoles for entertainment out there, c) moderately faster systems exist but cost keeps them low-volume, d) very low-percentage demand for faster computers - definitely less than 1% that will pay a premium for it, e) the majority of gamers are young-ish and they play largely twitch games even on PCs which are more GPU limited than CPU limited.

    • Re:Market (Score:5, Interesting)

      by Billly Gates ( 198444 ) on Friday March 03, 2017 @09:05PM (#53973327) Journal

      Dude, gamer GPUs are increasing in performance incredibly fast. They double in speed every 2 years. The only reason the desktop is not innovating is because Intel has a monopoly and won. But that is changing, starting with Kaby Lake, thanks to AMD Ryzen. It is back to 15% every year again, and maybe even more, as graphics shows no slowdown anytime soon.

      Shoot, for $185 you can get what $399 got you in late 2014/2015, at all max settings in games.

      • by Mashiki ( 184564 )

        GPUs are increasing incredibly fast because of a couple of reasons. First, they're not anywhere close to the same die size as a CPU. They're roughly 2 generations behind CPUs in shrinking; that means the tolerances can be off and it won't make a huge difference, and they can "run wild" without the danger of causing errors. But they can benefit from all the advances that AMD and Intel have gone through with each die shrink. The second is GPUs are able to increase their die size and transistor count as well as hav

        • That's not true; GPUs basically always use the latest process technology available, just like CPUs. Recently, there have been some degenerate cases where a new process is (at least initially) slower and more expensive than the previous one; but in general, they always move to the latest and greatest process, once that process is capable of making a better product.

          As for die size, the big GPUs are way bigger than CPUs. A 22-core Xeon Broadwell E5 from 2016 is 7.2 billion transistors, and 456 mm^2. The NVIDIA

    • by Kjella ( 173770 )

      Most likely, there is no major competition in the market, and PC sales on the whole have slowed considerably.

      Sorry, but I think this is plain wrong, because they're always working to lower their own costs. Even in the absence of competition, if Intel could make a processor twice as fast, they'd make it half the size and sell the same performance at a much higher profit margin. And while the PC market has shrunk, it's still 270 million PCs/year, or about 75% of its all-time high; it's a huge market even if it's not a growth market anymore.

  • Business decision (Score:3, Insightful)

    by BoFo ( 518917 ) on Friday March 03, 2017 @08:57PM (#53973277)
    Every advance has to be paid for by the consumer. Each incremental advance comes as the previous one is marketed.
  • Limitations (Score:3, Informative)

    by fozzy1015 ( 264592 ) on Friday March 03, 2017 @08:57PM (#53973279)
    Instruction-level parallelism in superscalar core designs has hit a limit. More pipeline stages become counterproductive when a misprediction requires a flush. Thread-level parallelism exploited by multi-core designs can only go so far; only certain tasks can exploit massive parallelism (e.g. ray tracing).
    Increases in clock speed have hit a wall with current silicon-based semiconductors. Exotic semiconductors and incredible cooling systems aren't practical for the mass market.
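
    As a rough illustration of the misprediction cost mentioned above, here is a minimal sketch in plain C (timing via clock(); exact numbers vary by CPU and compiler, and a good optimizer may turn the branch into a conditional move and hide the effect): summing the same values is typically much slower when the data-dependent branch is unpredictable (random data) than when it is predictable (sorted data).

    /* Rough demo of branch misprediction cost: the same loop over the same
     * values runs faster when the data-dependent branch is predictable. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 24)

    static int cmp_int(const void *a, const void *b)
    {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    static long long sum_big(const int *v, int n)
    {
        long long s = 0;
        for (int i = 0; i < n; i++)
            if (v[i] >= 128)        /* data-dependent branch */
                s += v[i];
        return s;
    }

    int main(void)
    {
        int *v = malloc(N * sizeof *v);
        for (int i = 0; i < N; i++)
            v[i] = rand() & 255;

        clock_t t0 = clock();
        long long a = sum_big(v, N);   /* random data: branch mispredicts often */
        clock_t t1 = clock();
        qsort(v, N, sizeof *v, cmp_int);
        clock_t t2 = clock();
        long long b = sum_big(v, N);   /* sorted data: branch is predictable */
        clock_t t3 = clock();

        printf("unsorted: %lld in %.3f s, sorted: %lld in %.3f s\n",
               a, (double)(t1 - t0) / CLOCKS_PER_SEC,
               b, (double)(t3 - t2) / CLOCKS_PER_SEC);
        free(v);
        return 0;
    }
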
  • by JoeyRox ( 2711699 ) on Friday March 03, 2017 @09:00PM (#53973291)
    NVIDIA's 2016 Pascal architecture was significantly faster than their previous Maxwell architecture.

    "Relative to GTX 980 then, we're looking at an average performance gain of 66% at 1440p, and 71% at 4K. This is a very significant step up for GTX 980 owners,"

    http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/32 [anandtech.com]
    • by Misagon ( 1135 ) on Friday March 03, 2017 @09:13PM (#53973367)

      Architecture-wise, Pascal was mostly an incremental upgrade to Maxwell.
      The big difference from Maxwell to Pascal was a process upgrade from 28 nm to 16/14 nm which allowed the clock speed to bump 50% from around 1 GHz to around 1.5 GHz.
      Couple that with improved memory and a good balance of different types of units for the best performance in typical games of its time.

    • Most of Pascal's increases come from dropping to a much smaller node size which allowed them to add a lot more cores in a smaller thermal envelope. That's why it bugs me that they jacked up the prices and are fusing them off to create artificial tiers - it's mostly more of the same. And they'll continue to be able to do that because there is almost no limit to the number of cores you can throw at the types of problems GPUs are used for.
  • by redelm ( 54142 ) on Friday March 03, 2017 @09:01PM (#53973303) Homepage

    The poster asks a question that assumes breakthroughs can be planned just like any other development project. But breakthroughs are not, or rather, those that can be planned and worked on already have been. The computer science field has been operating awash with funding for at least 55 years.

    I'm not saying there are no breakthroughs out there; what I'm saying is that our current project methodology has already discovered all it can, and most future breakthroughs will come from some other methodology.

    The target, CPU/GPU power is also not especially compelling -- compared to the past, there is much less pressure to increase performance, and considerable uncertainty how the increase will be helpful.

    • by sl3xd ( 111641 ) on Friday March 03, 2017 @09:05PM (#53973329) Journal

      I'd mod you up if I could... at this point, it's starting to look like we need a material breakthrough - Silicon appears to be reaching its limits.

    • Re: (Score:2, Interesting)

      by ghoul ( 157158 )

      Huge breakthroughs happen when some option has not been tried due to lack of funds, vision, laziness, monopoly markets or some other crap. In a field where smart people have been exploring all options at the cutting if not bleeding edge, there won't be an overlooked angle which can suddenly give a 16x jump.
      In short, a huge breakthrough is not a sign of greatness; rather, it is a sign that there was something wrong with the field and someone figured out how to fix it.
      Huge breakthroughs will never happen in a heal

    • by Dahamma ( 304068 )

      Two excellent points in this comment - the obvious one about breakthroughs not being a planned project, and the other, also important: there just isn't a huge financial motivation for a company like Intel to make a chip an order of magnitude faster right now.

      That's especially true if you look at the inevitable tradeoffs - if they could make a chip 10x faster using 10x more power, would they bother? Or 10x more power with 10x cost? Probably not, since the market would be so limited. These days - both in m

    • The CIV games make young minds think that technological breakthroughs are simply a matter of money and time, then BANG tech advance!
      Somebody needs to start airing "Connections" again: http://topdocumentaryfilms.com... [topdocumentaryfilms.com]

  • by Billly Gates ( 198444 ) on Friday March 03, 2017 @09:02PM (#53973309) Journal

    The sole reason Kaby Lake got hot and clocked in so fast is that AMD was just around the corner, and it worked to beat Ryzen. I expect the CPU race to heat back up again, as physics has not killed innovation yet.

    Proof is that GPUs and phones are still improving at breakneck speed. It is only because of an Intel monopoly that the desktop has come to a standstill.

    • I think you will find that Intel is paddling as fast as it can; Qualcomm, among others, is snapping at their heels.

  • by DatbeDank ( 4580343 ) on Friday March 03, 2017 @09:06PM (#53973331)

    Right around 2008/2009, computer hardware became "good enough" to meet people's basic needs, which really only centered on having a simple window to the internet. Netbooks became available and smartphones started to become good enough to browse the internet on their own. At the end of the day, consumers really only want a platform that serves as a window to the internet.

    Someone can correct me, but I believe such innovation is still occurring for server technology and niche fields like a/v production, cad, and animation. Though, I do yearn for the olden days when consumer technology was cool and exciting. Being a tech nerd in the 90s was something else!

  • by Ramze ( 640788 ) on Friday March 03, 2017 @09:09PM (#53973345)

    I remember when Pentiums were first coming out. P75, P90, P100, P133, P166. They were faster than the 386s and the 486SX and 486DX models. The P166 was noticeably more than twice as fast as the P75 on lots of tests. The MHz and GHz races are over.

    We can't just ramp up cycles anymore with silicon. It puts out too much heat. Multicore doesn't magically make programs faster unless they lend themselves well to parallelization & are coded properly for it. New architectures have been tried, but ultimately fail because they're costly or proprietary. ARM was a pretty good leap forward for mobile use. New instructions are being included in CPUs all the time -- especially ARM. Try to play a HEVC 1080p video on a 2013 tablet vs one today... you'll notice a difference right away. Check the CPU usage -- one's at 100% and dropping frames left and right while the other barely nudges past 15%.

    Intel or AMD could sell you a chip with 256 cores on it, but unless you do a lot of video encoding or physics rendering, it'd be wasted on you... and super expensive b/c they have no incentive to make it in volume. Maybe when VR or AI becomes commonplace, you'll drive demand for such architectures.

    CPUs are fast enough for just about anything one could think to do with them at a consumer level. GPUs can be made better, but market forces push for low power that's "good enough" for most users. CPUs and even GPUs aren't the bottlenecks anymore -- it's RAM, SSD, PCI-express lanes, various busses like USB, thunderbolt, HDMI, SATA, etc. Doesn't do much good to stuff a really fast CPU or GPU into a system if you can't feed it data fast enough to max it out. Most CPUs already have several layers of cache as well as branch prediction to help with the crippling latency from other I/O, but it's still not enough.

    Changes are usually evolutionary, not revolutionary... and we've tweaked so much with CPUs and GPUs, you're not going to see a big bump until we move away from silicon and PCB to say... diamond or carbon nano-wires and optical computing.
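
    To put the "can't feed it data fast enough" point above in numbers, here is a minimal sketch; the bandwidth figures are rough assumptions for illustration (on the order of 0.5 GB/s for SATA, a few GB/s for NVMe, ~16 GB/s for a PCIe 3.0 x16 link, tens of GB/s for dual-channel DDR4), not measurements.

    /* Ballpark: how long it takes just to move a working set to the processor
     * over various links. All bandwidth numbers are rough assumptions. */
    #include <stdio.h>

    int main(void)
    {
        double gigabytes = 8.0;   /* hypothetical working set */
        struct { const char *name; double gb_per_s; } links[] = {
            { "SATA SSD         ", 0.55 },
            { "NVMe SSD         ", 3.0  },
            { "PCIe 3.0 x16     ", 16.0 },
            { "dual-channel DDR4", 40.0 },
        };
        for (size_t i = 0; i < sizeof links / sizeof links[0]; i++)
            printf("%s: %5.2f s to move %.0f GB\n",
                   links[i].name, gigabytes / links[i].gb_per_s, gigabytes);
        return 0;
    }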

  • by imgod2u ( 812837 ) on Friday March 03, 2017 @09:09PM (#53973353) Homepage

    CPU architect here. I'll try to provide some insight.

    Performance for CPU/GPU or any computational tool isn't exactly just a number you hit. It's not like bandwidth for storage or communications nor is it like a battery's capacity.

    A CPU, and to a lesser extent a GPU, is able to perform all sorts of computational functions (anything logical). Each of these involves different usage patterns of the different computational paths inside a piece of silicon. And thus, speeding up each of these usage patterns requires different structures.

    A single piece of code running something complex like launching an app or opening a webpage will generate hundreds of millions of instructions with lots of different patterns. Think about all those APIs you call. How much code do you think is similar between them?

    And thus the problem of improving "performance". The goalpost is a shifty one. Speed up one code pattern, and you risk your changes hurting another. Or you can spend extra transistors making a specialized accelerator for that code pattern. But then...it'll be idle 95% of the time.

    And if you speed up a particular function by 1000x (it's happened), your average speed increase for a typical benchmark or API call will still be 0-1%. Because that function is only a small piece of the larger codebase.

    Think about how many non-similar libraries and functions there are in typical software, and think about how there's any way to speed them *all* up. You can make memcpy or memset (malloc uses these) faster by 5x and that'll speed up javascript processing by....0.01% or so.

    The reason "performance" doesn't increase as drastically in the computer world is because computing "performance" is very very multifaceted. Much like how "intelligence" can't just be increased by 5x -- someone can get 5x better at specific tasks, like memorizing or image recognition, but that doesn't make them 5x more "intelligent".

    Compare this with a simple metric like 0-60 acceleration or network bandwidth.
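
    The "1000x on one function, still 0-1% overall" arithmetic above is just Amdahl's law: overall speedup = 1 / ((1 - f) + f/s), where f is the fraction of time spent in the improved part and s is its local speedup. A minimal sketch with made-up fractions:

    /* Amdahl's law: speeding up one piece of a workload by a huge factor
     * barely moves the total unless that piece dominates the runtime.
     * The fractions below are illustrative, not measurements. */
    #include <stdio.h>

    static double amdahl(double fraction, double local_speedup)
    {
        return 1.0 / ((1.0 - fraction) + fraction / local_speedup);
    }

    int main(void)
    {
        /* e.g. memcpy/memset at ~2% of runtime, made 5x faster */
        printf("2%% of time, 5x faster      -> %.3fx overall\n", amdahl(0.02, 5.0));
        /* a specialized unit making 0.1% of the work 1000x faster */
        printf("0.1%% of time, 1000x faster -> %.4fx overall\n", amdahl(0.001, 1000.0));
        /* the only way to win big: speed up something that dominates */
        printf("90%% of time, 10x faster    -> %.2fx overall\n", amdahl(0.90, 10.0));
        return 0;
    }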

    • And more transistors means lower yield.
    • Yes, but I think you're missing the point that the OP is really making: they are asking why improvements to processor speed are so danged incremental. Processors are maybe 200 times faster now than they were 25 years ago, but the point is that we got here, so it was physically possible. What stopped us from condensing the last 25 years of progress into 5 years? Or 1 year? Why is the progress of Moore's Law supposedly so inexorable? Does this indicate a "learned helplessness" of the industry, transitioning
  • by skogs ( 628589 )

    AMD, to be fair, has pretty much done this just now with the Ryzen chips.

    • AMD didn't do shit with the Ryzen chips.

      AMD moved from 32nm and 28nm to 14nm, and amazingly experienced the same performance increases Intel saw when it moved from one node to another.

      I realize that, sadly for some of you guys, CPUs are inexplicable magic boxes, but they just aren't. Put some effort into understanding, or turn in your geek card.
  • No context (Score:4, Interesting)

    by RubberDogBone ( 851604 ) on Friday March 03, 2017 @09:15PM (#53973373)

    This question lacks context. In terms of desktop PCs and common everyday usage, we don't NEED more speed or power. Nothing is going to speed up webpages or Facebook or whatever people typically do on their PCs. And even if you did, then you become constrained by the speed of the internet and there won't be much perceived benefit.

    On the mobile side, there is room for more speed, but it comes at the expense of power and is still constrained by connection speeds and website performance on mobile devices, which often suck. Throwing faster and more processing at the problem isn't necessarily the fix that is needed.

    There are cases where rendering and other heavy duty uses might benefit but the vast majority of people never use those things. Even gaming is usually constrained by other things like the GPU, the game engine, connection speed, and human performance.

    The major places where computing power is much more important are in things like supercomputing but those machines don't run desktop programs and don't work the same way. Only the people directly using those machines would ever have any idea how fast they are or how much faster they wish they could be.

    So, to recap, desktop PCs are adequate, mobile devices are still finding a balance between power and power usage, gamers are off on their own island but sheer CPU isn't a magic fix, and supercomputing, where extra power would matter, is so far removed from everyday users, there is no way to relate to it.

    • Re:No context (Score:4, Insightful)

      by Lumpy ( 12016 ) on Friday March 03, 2017 @09:26PM (#53973439) Homepage

      You need a netbook.

      I need a 6 GHz 8-core because I do actual work on the computer, like compiling and rendering.

      PCs are not adequate because software today is complete shit; almost none of it is written well for multithreading.

      Again, mostly because programmers coming out of colleges are poorly trained, and then companies want them to bang out trash rather than well-optimized code that takes advantage of the hardware.

  • by swm ( 171547 ) <swmcd@world.std.com> on Friday March 03, 2017 @09:19PM (#53973407) Homepage

    Moore's law had a great run: ~40 years from early 60s to early 00s.
    During that time, every generation boosted density, gate count, clock speed, and value per dollar.
    The (exponential!) rule of thumb was 2x more every 18 months.

    Everyone knew it had to stop sometime: you can't make things smaller than atoms.
    What finally did stop it (considerably north of atom-scale) was gate tunnelling current.
    In a MOS-FET, the gate is separated from the channel by an insulator (SiO2).
    As you scale the transistor down, that insulator gets thinner, along with everything else.
    When the insulator thickness is less than the wavelength of an electron, you start to get significant tunnelling current.
    This acts like a short circuit from power to ground.

    The technology hit the wall around 2003.
    Gate tunnelling current was then over half of total power dissipation.
    The power density of the CPU chip was 150 W/cm^2 (like a stove top),
    and going further was clearly impractical.

    As it happens, the clock speed at that design node was 3 GHz,
    and that's pretty much where we are today.
    Everything since then has been building bigger, not faster: multi-core, caches, SoC;
    plus architecture tweaks and optimizations, like pipelining and super-scalar.

    It was a great run while it lasted, but it's over,
    and we're not getting another one without a fundamental scientific/technological breakthrough,
    on the order of coal, or steel, or quantum mechanics.
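
    For a sense of what that exponential meant in practice, a minimal arithmetic sketch of the rule of thumb above (2x every 18 months over roughly 40 years; both numbers are the approximations from the comment, not exact history):

    /* Compounding a 2x-every-18-months rule of thumb over ~40 years. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double years = 40.0;
        const double doubling_years = 1.5;                 /* 18 months */
        double factor = pow(2.0, years / doubling_years);  /* 2^(40/1.5) */
        printf("about %.1f doublings, roughly %.0fx improvement over %.0f years\n",
               years / doubling_years, factor, years);
        return 0;
    }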

    • by Anonymous Coward on Friday March 03, 2017 @11:34PM (#53973939)

      Excellent (and accurate) observations, but
      can I just say?
      The way you did your line-breaks
      made me think at first glance that you had written your
      Comment in verse. Maybe,
      "An Ode to Moore's Law"? :)

    • by Raenex ( 947668 )

      It's nice to get the real answer amidst all the bullshit. I experienced nearly 20 years of those processor speedups, and it was glorious. Too bad it came to an end. If the trend had continued, we'd all be using some terahertz CPUs by now.

  • by LeftCoastThinker ( 4697521 ) on Friday March 03, 2017 @09:23PM (#53973423)

    Risk-averse CEOs who don't want to sink in the R&D to make carbon-based chips because there is a risk of it not working.

    A synthetic diamond transistor was first built and tested over 13 years ago at 81GHz: http://www.geek.com/blurb/81gh... [geek.com]

    More recently they developed a 300GHz Graphene transistor, but that was still 7 years ago: https://www.bit-tech.net/news/... [bit-tech.net]

    The technology is there and proven, but scaling it up to processor scale would be a massive investment and a big risk.

    • Re: (Score:3, Informative)

      by gantry ( 180560 )

      The chip manufacturers are funding research on these and other technologies, but they are all a long way from viability. It is easy to forget that silicon CPUs with a billion transistors are the outcome of 60 years' research, development, and investment.

      Silicon processing is made easier because silicon's oxide is an extremely good insulator. For diamond and graphene, the oxide is a gas, and so insulating areas cannot be created by oxidising the material: another substance must be deposited.

    • by Goldsmith ( 561202 ) on Saturday March 04, 2017 @03:47AM (#53974685)
      The timeline for carbon electronics is really, really long, predating transistors and silicon by decades. Carbon based electronics has had more than enough R&D for us to understand the basic properties and scaling challenges. The proof of this is that there are commercial products out there using these materials, made in commercial fabs. You just don't hear about them, because they have very little to do with the digital world (right now). Typically, you'll find these products in sensors and analog components. [nanomedica...ostics.com] The particular strengths of carbon based electronics are an ability to carry lots of current in small channels (this is not just about resistivity, but also relates to chemical stability and thermal conductivity), and an ability to integrate seamlessly with biological material (this was initially just about carbon-carbon chemistries, but has grown to also encompass superior integrations of electronics with living systems).

      These are different kinds of transistors, and don't operate the way (digitally) MOSFET silicon transistors do.

      Diamond is a wide bandgap semiconductor (that's physics for insulator). In special conditions, it can perform well, but those conditions (ranges for temperature, humidity, and field strength) are not practical for consumer devices. Doping diamond is possible, but very difficult, and it still results in a material that is a pretty good insulator. Sorry, it's going to be a lab toy for a long time.

      Graphene is a zero-bandgap semiconductor. That means that it never turns off, it just has varying amounts of "on." It's got great numbers on paper (resistivity, mobility). Doping graphene is something immoral scientists talk about doing. The reality is that doping graphene creates a different material that lacks the speed and chemical stability of normal graphene. Your conduction mechanism changes, your gating mechanism changes, your noise sources change. It's a mess. Also, it's really easy to dope graphene on accident and lose your high-end performance. It's the newest material in this space, and the one least understood in the manufacturing realm (despite that, it forms the basis for the commercial product linked above, so obviously it's understood well enough).

      You didn't mention carbon nanotubes, but I will, because what was the point of getting a PhD in carbon nanotube electronics if I can't talk about them on Slashdot?! Carbon nanotubes remain the unattainable holy grail of digital electronics. You can have it all: the speed of graphene, the on-off ratio of silicon, low power requirements... It's just that you almost need to assemble your circuit by hand. It's been >25 years we've been working with these materials, and we still don't know how to properly control where they go on a wafer (well, maybe these guys [carbonicsinc.com] know). The problem is that nanotubes want to make a heterogeneous mixed metal-semiconductor plate of spaghetti on the wafer, when you want clean rows of uniform semiconductor. The best guys in the world at this are up to producing postage stamp sized patches in the middle of the wafer. So... there's some work to be done there before anyone starts designing a processor.
  • Moore's law ended in 2006 (heard it straight from an Intel engineer). In its place they have been focusing on multi-processing and power savings.* In doing so, they learned they could make even more money on a much slower upgrade timetable. They do have tech on the back burner to roll out that will have huge improvements on performance (optical interconnects, for instance), but they are going to roll that stuff out like molasses going up a hill. Greed has really taken hold of everything these days.

    * (D

  • Instead of thinking about processing power in terms of Hz, you should be looking at a CPU's/GPU's overall computational throughput. When you look at things that way, you will see there has been a massive uptick in processing power in GPUs. x86 CPUs have stagnated a bit due to lack of serious competition at the high end, but everywhere else it's thriving. Massive parallel processing is the real future of computing, so get ready for chips with thousands of sub-GHz cores running independent and identical ta

  • by erice ( 13380 ) on Friday March 03, 2017 @09:50PM (#53973527) Homepage

    This kind of thing was rather common until about 2000. Each process node was better in every way than the last. Big jumps in performance at each node advance. Power went down too. And, of course it was much cheaper per gate. You could get doubled performance and 1/4 the cost by just porting over the same design, trace for trace, to the next full node. These "die shrinks" were quite common. Through the 90's you got an extra bonus for new designs. That is because the industry was brimming with ideas that were known to work but were just not practical to implement because they took too much silicon area.
    First the idea spigot sputtered. The good mainframe ideas had already been implemented. It was no longer clear what to do with all those gates. New ideas were tried. Some worked. Some didn't. Also, about this time, complexity started to threaten the ability to make chips that actually worked. Bugs became more common. Design progress slowed.

    Then process started acting up. Power scaling stopped. More transistors were available, but if you used them, your chip consumed proportionally more power. Run the transistors faster and you had the same problem, only worse. A hot chip was no longer a marketing problem; it was a chip that would not work. More effort and more complexity were needed to tame power. A simple die shrink wouldn't do that much.

    Then process started getting messier. The new nodes were not better in every way. Leakage current went up instead of down. Variability went up. Performance scaling slowed. Getting any improvement at all required more development time and money. Progress always slows when development time and cost rise.

    Then 20nm planar came and it was awful. Terrible leakage. Required double patterning. Double patterning means more masks, which means more expense up front and during manufacturing. It actually cost more per transistor than 28nm. What was the point, really?

    That is pretty much the mess we are in now. Can't significantly increase clock rate. Can't throw gates at the problem, and wouldn't really know what to do with the gates if we had them. FinFETs temporarily tamed power but are only available in nodes hobbled by the need for multi-patterning.

       

  • The people who are actually paying for the products are interested in

    a) Power in: do the same amount of computation at half the power so my battery will last longer.
    b) Power out: do the same amount of computation at half the power so I can use twice as many devices without blowing my power budget.

    Data centers are limited by how much heat you can extract per square foot. Desktops are limited by how loud the fan is. Mobile is limited by the battery size.

    Therefore, the designers are designing what people are ac

  • Perhaps the article poster is not familiar with recent history? There have been significant gains in both CPU and GPU power, especially GPU. However, improvements tend to be focused where they are needed most, e.g. performance per watt.
  • You're just looking in the wrong markets. If you're "just" looking at x86, obviously you have a blueprint you need to follow. Any breakthrough will take quite a few years to integrate and fab. But even then, comparing 5 or 10 year old CPUs to now, you can see quite a bit of new circuitry.

    Look at AES acceleration and virtualization: we can now fully virtualize a machine including its hardware, as if they were separate machines, including networking. There is quite a bit of logistics to make that h

  • by Misagon ( 1135 ) on Friday March 03, 2017 @10:45PM (#53973755)

    For a long time, Intel and Microsoft Windows have ruled the computing world. At the bottom of that platform has been Intel's instruction set architecture.
    Intel leaped from a 16-bit to a 32-bit architecture and then from 32-bit to 64-bit, but the basic execution model remains the same. Most of the advances that Intel has made from the Pentium onwards in the early '90s have been stopgaps to get as much as possible out of that execution model while still being limited by it.

    There are other processors out there, DSPs, that are much faster than x86 at specialized tasks because they are pipelined and parallel. GPUs could be seen as massively parallel DSPs.
    But raw computing power is not the problem. The problem is to run general-purpose code well - and general-purpose code has many branches between code paths and that can't be parallelized.

    A company called Mill Computing [millcomputing.com] is working on a general-purpose CPU architecture inspired by DSPs and by what they think the Intel IA-64 (Itanium) should have been.
    By being vastly different in several significant ways from x86, they claim to be able to achieve a significantly higher performance per watt and performance per clock overall than Intel and AMD's x86.

  • The main reason is money. Each generation costs billions to develop and produce, and manufacturers are going to make sure they get a return on their investment. These investments stretch back years, and designs have to be made with assumptions about what will be workable at the current process node at the time the chip is ready to produce. That said, not quite all the low hanging fruit has been picked yet. Ryzen could not carry a 50% IPC improvement over the FX if there was nothing left to work with. Maybe

  • by NothingWasAvailable ( 2594547 ) on Friday March 03, 2017 @11:08PM (#53973847)

    The gates are now so small that the electron wave function has a pretty high probability of being "on the other side" of the gate. As gates shrink, leakage power goes up very rapidly. Even when they're "off", the gates are consuming too much power (leaking it to ground.)

    Also, think about 5 GHz, IBM's fastest chips. At 5 GHz, the clock period is 200 picoseconds, and a 10-deep pipeline can allocate about 20 ps to each gate transition. That's a lot to ask, given that resistance and capacitance don't scale down linearly with dimensions. You also have to populate your chip with a lot of decoupling capacitors in order to hold the charge locally for each transition (because you can't get the power from off chip in 20 ps). To fight the increased RC load (proportionally), you're putting in more buffers (big amplifiers).

    As if that weren't enough, you have the fact that a 14 nm gate is about 20 silicon atoms across. When you start doping the substrate, your actual behavior is all over the place because one or two more dopant atoms represent a 10-20% shift, up or down (total shifts of 40-50%.)

    So, your gates are too small, they all behave differently, they have to drive a relatively larger load, and the suckers are too hot.
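
    The timing and dopant-count arithmetic above can be sanity-checked in a few lines; the dopant count per gate region below is a made-up illustrative number, and the 1/sqrt(N) spread is just Poisson counting statistics, not a device model.

    /* Back-of-the-envelope for the parent comment: clock period at 5 GHz,
     * per-stage budget for a 10-deep pipeline, and the relative fluctuation
     * expected when a gate region contains only a handful of dopant atoms. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double f = 5e9;                           /* 5 GHz */
        double period_ps = 1e12 / f;              /* 200 ps */
        double per_stage_ps = period_ps / 10.0;   /* ~20 ps per pipeline stage */

        double dopants = 25.0;                    /* hypothetical atoms per gate region */
        double rel_sigma = 1.0 / sqrt(dopants);   /* Poisson spread: ~20% */

        printf("clock period: %.0f ps, per-stage budget: %.0f ps\n",
               period_ps, per_stage_ps);
        printf("~%.0f dopant atoms -> ~%.0f%% device-to-device variation\n",
               dopants, 100.0 * rel_sigma);
        return 0;
    }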

  • Competition (academic and free market) makes big jumps unlikely.

    Most of the improvements that any one company is trying to make to get 2X or more performance have already been done, by the time they get to market, by other companies trying to beat them to market. Only a percentage of the things they manage to do differently (perhaps things that other companies didn't think were worth doing) differentiate the performance of any one company's product.

  • When too much money is invested in the status quo, you are much more likely to see a slightly improved status quo next year rather than something completely different. Look at the resistance to changing our health care, our education system, our infrastructure, our.... Only when some newcomer finds a new way to do something and starts cleaning their clocks...do the entrenched players try to switch gears.
  • Why should a company that did all the hard work face competition from new brands?
    Former CPU and GPU staff starting their own brands?
    The way to stop that is to control the entire sector. No advanced games or codecs will be offered to support any new startups.
    Any tech that is usable and considered free will be open-sourced by the original brand to control, brand and shape the free end of the market.
    Zilog https://en.wikipedia.org/wiki/... [wikipedia.org] pricing spreading around the world was the reason why the C
  • Intel's shady tactics (Score:5, Interesting)

    by bongey ( 974911 ) on Friday March 03, 2017 @11:35PM (#53973945)

    Intel is up to their shady tactics again with AMD's new Ryzen release. Maybe not outright paying off computer makers; now they are sponsoring reviewers. The reviewers jump through all kinds of hoops to make sure that Intel is on top of the benchmark graphs, and their reviews read like an Intel marketing brochure. None of the reviewers disclose that they are sponsored by Intel.
    Examples of oddities from reviewers that are sponsored by Intel:

    1) Tom's Hardware: complains about the power consumption being higher than spec, but leaves out that the result was from an overclocked test and an MSI board that has additional CPU power.
    2) GamersNexus (one of the worst of them):
    a) Had to compare the 1800X to 6 different Intel processors that were overclocked, with the 6900K overclocked by 700 MHz.
    b) Only one AMD processor was OC'd, and by -100 MHz (yep). Their OC vs. stock results were almost exactly the same.
    c) Makes the 6900K pop to the top of the benchmarks.
    d) The 1800X only loses 6 vs. 8 to the Intel 6900K at stock speeds, with only 2 benchmarks where the 1800X loses by more than 7 fps.
    e) Pretty much all benchmarks by the same author never included OC tests, but suddenly he had to include 6 different OC comparisons. http://www.gamersnexus.net/gam... [gamersnexus.net] http://www.gamersnexus.net/gam... [gamersnexus.net]
    f) Outright lied, saying AMD told him not to benchmark Ryzen at 1920x1080. AMD just asked him to benchmark at multiple resolutions, not just 1080p.

  • Vectorization (Score:4, Informative)

    by JBMcB ( 73720 ) on Saturday March 04, 2017 @12:58AM (#53974297)

    For certain operations, AVX made a huge difference. AVX2 made an even huge-r difference. Depending on what you're doing, you can see a 2x to 10x speedup, at the outside, vs. a chip without AVX2 but with otherwise similar performance characteristics.
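
    A minimal sketch of the kind of loop AVX2/FMA helps with (a simple saxpy, y = a*x + y, eight floats per step). The intrinsics are standard Intel ones from immintrin.h; the actual speedup depends heavily on whether the loop is compute-bound or memory-bound, and modern compilers will often auto-vectorize the scalar version anyway.

    /* Scalar vs. AVX2+FMA saxpy. Compile with e.g. gcc -O2 -mavx2 -mfma. */
    #include <immintrin.h>
    #include <stddef.h>

    void saxpy_scalar(float a, const float *x, float *y, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

    void saxpy_avx2(float a, const float *x, float *y, size_t n)
    {
        __m256 va = _mm256_set1_ps(a);
        size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 vx = _mm256_loadu_ps(x + i);
            __m256 vy = _mm256_loadu_ps(y + i);
            _mm256_storeu_ps(y + i, _mm256_fmadd_ps(va, vx, vy));  /* a*x + y */
        }
        for (; i < n; i++)            /* scalar tail for leftover elements */
            y[i] = a * x[i] + y[i];
    }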

  • by ET3D ( 1169851 ) on Saturday March 04, 2017 @01:34AM (#53974411)

    There have been many breakthroughs in the PC industry, incredibly clever inventions which allowed things to move forward. And that's the thing: the smartest things in the industry don't make for a huge processing leap; they enable making progress at all. Each of these developments takes years. Ideas may be simple, but implementing them, especially at the level required for mass production, is hard. Each development also requires more accurate tools. Also, complexity is now so high that, as imgod2u said, even a huge change in some part leads to an overall small change.

    So, as others have said, physics, but I think the above is a more nuanced answer. I remember when people said that it wouldn't be possible to make transistors under a micron in size. The very fact that we've gotten this far is miraculous.

  • It DOES happen (Score:4, Informative)

    by SoftwareArtist ( 1472499 ) on Saturday March 04, 2017 @01:51AM (#53974441)

    It happened about ten years ago with the rise of GPUs for general purpose computing. Suddenly we could do a lot of things 10-100 times faster than before. You program GPUs really differently than CPUs, so we had to rewrite a lot of code and design new algorithms. But the benefit was huge.

    It may be happening again with specialized chips for deep learning, like Google's TPU. These chips are designed for just one class of applications, but it's a really important class, and they can be 10x faster or more efficient for those applications.

    There've been other times when a new generation brought a sudden major improvement in speed, like with vector units or multicore CPUs. But always at the cost of having to rewrite how your code works.

    Now if you want new chips that work just like the old ones and run the same programs as before, just 10x faster, sorry. That isn't likely to happen. Huge jumps like that require major changes of approach.

  • by Pete Smoot ( 4289807 ) on Saturday March 04, 2017 @02:01AM (#53974453)

    I think the real issue is, semiconductors are so competitive, the current shipping product is always very close to the state of the manufacturing and physics arts. Intel, AMD, nVidia, Samsung, Toshiba, Apple, and others spend billions pushing the processes and architectures to the limit in every product so it stays competitive as long as possible.

    To get a 4x or 8x improvement in size, power, or speed would imply there's a revolutionary way to do things that we just don't quite know yet. And it better be something which can be quickly turned to production because Moore's Law hasn't stopped yet. If you have a 4x improvement idea but it takes five years to release, it won't get funded. Plain CMOS silicon has too good a chance of catching up.

    There's plenty of times people rolled the dice on processor moon shots. I was at HP when Itanium was first developed (~95). We thought we'd have working silicon in a few years (~98 or 99) at the astounding clock rate of 500 MHz (oh, and that was potentially retiring something like 6 to 12 instructions per cycle, I forget the details). This was when a good Pentium processor ran at around 45 MHz. We thought Itanium was going to be so frickin' fast there was no way Intel could compete. Then AMD started a clock rate war, x86 got faster really fast, Itanium took much longer to produce than we anticipated, and the rest was history.

    I think the bottom line is, it's really hard to produce a system which really is even 2x faster than the competition. 4x is incredible and 8x probably has never been done.

    As an analogy, consider cars and mileage. My car, a diesel Passat (which shortly will not be road legal :() actually exceeds 50 MPG on a good day. What would it take to make a car which gets 100 MPG with a 600 mile range? How about 200 MPG? With no compromises? And a sales price of $28k? It's pretty hard to imagine.
