Intel Hardware

Intel Confirms 8th Gen Core On 14nm, Data Center First To New Nodes (anandtech.com) 78

Ian Cutress, writing for AnandTech: Intel's 8th Generation Core microarchitecture will remain on the 14nm node. This is an interesting development with the recent launch of Intel's 7th Generation Core products being touted as the 'optimization' behind the new 'Process-Architecture-Optimization' three-stage cadence that had replaced the old 'tick-tock' cadence. With Intel stringing out 14nm (or at least, an improved variant of 14nm as we've seen on 7th Gen) for another generation, it makes us wonder where exactly Intel can promise future performance or efficiency gains on the design unless they start implementing microarchitecture changes.
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Translation... (Score:2, Insightful)

    by Lumpy ( 12016 )

    8th gen will suck as bad as 7th gen, so that means the 4th gen stuff will STILL outperform it.

    • by Anonymous Coward

      Except benchmarks show you are an idiot.

      http://core0.staticworld.net/images/article/2016/12/kaby_lake_cinebench_multi_threaded_oc-100700619-orig.jpg

      • Re:Translation... (Score:5, Interesting)

        by Glarimore ( 1795666 ) on Friday February 10, 2017 @11:11AM (#53839773)
        Okay fine, so 4th gen isn't literally faster than 8th gen, but I agree with what OP is getting at... What the graph you posted is best at showing is that Intel CPU performance improvements have been paltry for the past six years.

        According to your graph, the new Kaby Lake 7700K is only ~55% faster than my 2nd-generation Sandy Bridge 2600K, which means that between January 2011 and January 2017, Intel's performance improvements for like-for-like CPUs have averaged about 7.5% per year, which is pretty shitty. It's not that 8th gen is going to suck as bad as 7th gen -- it's that both 7th gen and 8th gen suck as bad as everything Intel has released for the past six years.
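
        A quick sanity check on that per-year number, as a minimal arithmetic sketch: the ~55% total gain comes from the graph above and the six-year span from the dates; everything else is just compounding.

            # Annualized (compound) improvement from a total gain over N years
            total_gain = 0.55   # ~55% faster overall, per the linked Cinebench graph
            years = 6           # January 2011 -> January 2017
            annual = (1 + total_gain) ** (1 / years) - 1
            print(f"{annual:.1%} per year")   # prints roughly 7.6% per year
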
        • Re:Translation... (Score:5, Interesting)

          by alvinrod ( 889928 ) on Friday February 10, 2017 @01:10PM (#53840845)
          It's actually worse than that, though, if you're just looking at the architecture. The 7700K has a 20% clock speed advantage over a 2700K, which means that their architectural improvements aren't even 7.5% per year. Both of those chips sit within a similar TDP bracket as well (91W vs. 95W), so it isn't as though Intel has been using process improvements to offer the same performance at lower power consumption either. And that's only in certain benchmarks; there are others where Intel's older chips perhaps only fare worse by single-digit percentages once you account for clock speed differences. Sandy Bridge was a great chip for overclocking, and it wasn't difficult to get as much as 4.4 GHz without putting a lot of effort into it. Some enthusiasts have been able to get up to 5 GHz with a good chip and cooling solution. The newer Core i7 chips usually require de-lidding since Intel uses a substandard TIM which doesn't transfer heat effectively enough.
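
          To make the per-clock (architectural) share explicit, here is a rough sketch that just divides the clock bump out of the benchmark gain; it reuses the ~55% and ~20% figures quoted in this thread and assumes the gains compound:

              # Split the total benchmark gain into clock-speed and per-clock parts
              total_gain = 0.55   # ~55% overall gain, 2600K/2700K -> 7700K (from the graph upthread)
              clock_gain = 0.20   # ~20% higher clocks on the 7700K vs. the 2700K
              ipc_gain = (1 + total_gain) / (1 + clock_gain) - 1
              annual_ipc = (1 + ipc_gain) ** (1 / 6) - 1
              print(f"per-clock gain: {ipc_gain:.0%} total, ~{annual_ipc:.1%} per year")
              # roughly 29% per-clock over six years, i.e. about 4-5% per year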

          Intel needs a new microarchitecture to replace Core. Core was an exceptional design, especially considering what it replaced and how large the early performance gains were if you bought an early Nehalem CPU. Hell, even Core itself traces its roots back to the P6 microarchitecture, which Intel fell back on after abandoning Prescott (sold as the Pentium 4 back in the wild days of the clock speed wars), so the lineage goes back decades. It's pretty clear that Core is tapped out in terms of what can be squeezed out of it, and Intel needs to go back to the drawing board like AMD did and use all of the lessons they've learned to make a new architecture.

          Even if AMD's offerings aren't quite as good as Intel's, they'll still be closer than they ever have been before, and that will allow AMD to challenge Intel in their high-margin consumer market segments or in markets where AMD hasn't been relevant in years. Intel could afford to tread water while AMD was stuck on their failed Bulldozer architecture, since Intel would just as gladly sell you a 4-year-old CPU as a new one given that the prices hadn't moved much, but now AMD is going to erode those price points or offer a competing product if Intel doesn't undercut them. Intel will still have a process advantage with their own fabs, but they need a new architecture to widen the gap if they want to have any hope of maintaining their profit levels.

          • Fantastic summary!

            The elephant in the room is that silicon doesn't scale past 5 GHz. Everyone knows [psu.edu] about it, but no one in the commercial sector is interested in doing anything about it. :-(

            Hell, even back in 2007 SiGe was proposed [toronto.edu] to get up past 50 GHz.

            What's really freaky is that a close friend of mine was playing with 1+ GHz CPUs in the (late) '70s. I guess we'll never have those 100 GHz Gallium Arsenide CPUs anytime soon ... :-/

            • but no one in the commercial sector is interested in doing anything about it.

              Why not?

              • Because it has a HUGE Risk for very little Reward. Silicon is literally dirt cheap.

                99% of people don't know or care; silicon CPUs do everything they need. They will never be able to justify the cost of a CPU that is 10x or 100x what they currently pay. The current tech is "good enough" for 99% of people -- that's where the bread and butter is.

                This creates a chicken-and-egg scenario. No one wants to risk investing billions into alt. tech when the status quo is much more profitable. i.e. Who is goi

              • It's difficult. A manufacturer would have to see so obvious a business case for making a super-speed non-silicon processor that the worries about risk would be swept aside. (And from a paranoid viewpoint, the military might want to keep a super-speed process tightly under its own control.) That said, IBM has been working with SiGe for decades and may have a viable process. https://arstechnica.com/gadgets/2015/07/ibm-unveils-industrys-first-7nm-chip-moving-beyond-silicon/ [arstechnica.com]

                Be aware that SiGe is mostly used fo

            • Gallium Arsenide was useful down to the 0.35 micron node. Below that, other factors meant that it was no longer faster than silicon. (I'm not knowledgeable about the details, but IIRC the superior transconductance available from GaAs was offset by an inability to develop high electron velocity over short distances.)
          • As much as 4.4 GHz? There are a ton of us running overclocked Sandys (2500K/2600K/2700K) at 4.7-5 GHz. Comparing balls-to-the-wall overclocked Sandy and Kaby Lake parts (non-HT vs non-HT, HT vs HT), you'll see at most ~30% more absolute performance, assuming a slight clockspeed advantage to Kaby.
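
            As a rough back-of-the-envelope check on that ~30% figure, you can back out the implied per-clock difference from assumed overclocks; the 4.8 GHz and 5.0 GHz numbers below are illustrative assumptions, not figures from this thread:

                # Implied per-clock (IPC) difference at matched overclocks
                sandy_oc = 4.8    # GHz, assumed typical 2600K overclock
                kaby_oc = 5.0     # GHz, assumed typical 7700K overclock
                total_gap = 0.30  # ~30% absolute performance difference claimed above
                per_clock = (1 + total_gap) / (kaby_oc / sandy_oc) - 1
                print(f"implied per-clock difference: {per_clock:.0%}")   # ~25%
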
          • Kaby Lake isn't even a new architecture. People did some sleuthing and discovered it's literally a new stepping of Skylake, given a new CPUID string to signify it's a "new generation" of Intel Core CPUs. The reality is it's just Skylake with HEVC decode blocks added to the GPU, and a 100/200MHz clock bump. https://www.cpchardware.com/co... [cpchardware.com]

            People have benched the i7-6700K and 7700K at the same clocks and lo and behold, the results were identical on every bench they ran...because they are literally two differ

          • It's obvious the future is putting smarts directly in the memory for massively parallel processing with no memory bottlenecks. Everything else is just incremental improvement.
    • 8th gen will suck as bad as 7th gen, so that means the 4th gen stuff will STILL outperform it.

      Except it will have 6 cores. I assume they are talking about the old news of Coffee Lake, which is a Skylake architecture with 6 cores and will be the desktop and high-end laptop CPU of the "8th gen", while Cannon Lake would only be on ultrabooks.

    • 8th gen will suck as bad as 7th gen, so that means the 4th gen stuff will STILL outperform it.

      Nah, they will buy AMD's Ryzen chips in a modified package, rebrand them as their own, and resell those.

  • by Ecuador ( 740021 ) on Friday February 10, 2017 @10:35AM (#53839433) Homepage

    The hope is that AMD's Ryzen will be good enough to compete with Intel on performance - not just price. That will wake Intel up again, since they always relax when there is no competition, i.e. no motive to do anything more.

    • Well, a new node implies heavy infrastructure investment, so it's understandable. Issues / delays with EUV lithography aren't helping either.

      • by Anonymous Coward

        You mean FAB42, which was being discussed internally back in 2006...

        Yeah, since it will take 2 years just to complete the interior, and EUV is still a lab project, the timing actually looks pretty good to me. But is this a desktop/laptop play?

        Or is Intel planning to attack the mobile space with truly revolutionary chipsets? Like a SoC with a mobile side that is quick, mobile-focused, and low power, and then a desktop side that is better than M3, preferably i5-i7 capable, waiting to be turned on and go all desktop whe

    • If the benchmarks leaked today turn out to be legit, it is looking very good indeed for Ryzen:
      http://wccftech.com/amd-ryzen-... [wccftech.com]

  • Good Analysis (Score:5, Interesting)

    by Anonymous Coward on Friday February 10, 2017 @10:48AM (#53839543)

    Yesterday prices were leaked on AMD Ryzen. For equal performance, the AMD parts are about 70 percent cheaper. Intel has been goofing off for several years now. Tweaking process improvements is not innovation. Intel's architecture is tired and needs to be rethought. I'm really surprised that Intel has been caught with their pants down.

    • Intel needs to have its ass kicked for cutting PCIe lanes: a $400 chip that in the last gen had way more, and now you need to go up to a $600 chip to get them back, and that's on the last-gen workstation/server sockets. The desktop boards have been stuck at the same PCIe lane count for years while maxing out at quad core.

      AMD is going to have more PCIe lanes and more cores on its desktop boards than what Intel has, with the server/workstation parts likely to have even more than the AMD desktop boards.

  • by Joe_Dragon ( 2206452 ) on Friday February 10, 2017 @10:52AM (#53839581)

    Consumer products need more PCIe lanes, and AMD is doing better with Ryzen: 16+4+4 (chipset link?). USB 3 may be in the CPU as well.

    Ryzen server/workstation parts may have even more PCIe lanes. Will there be 1-socket systems with 32 or more PCIe lanes + chipset link + 4 that can go after Intel's high-end consumer products, which are a gen behind their mainstream consumer parts?

  • Meh, if Intel would just remove most of the debugging crap almost nobody uses anymore because it was superseded by newer debugging crap(!), and dedicate the 8th gen just to bug-fixing, they'd save a lot of transistors and power, and also get a lot of goodwill.

    Can you imagine an Intel processor where the errata sheet is not a mile long? One you could trust your embedded products to without the fear of it being a time bomb, as has happened several times already (the Atom C2000 is just the latest incarnation)?

    • An IOMMU is quite useful to users since you can map hardware between VMs, so that is a good feature. For debugging, you do need things like being able to single-step and to trap instructions, which is also important for VMs. I understand most performance-related things have nothing to do with the ISA and are more of an electrical engineering and physics thing.

  • by Anonymous Coward

    Hasn't the whole move from 14nm to 10nm kind of been BS, because they didn't actually shrink the transistor size, just the size of the interconnects between the transistors? No one has a true 10nm transistor right now, or at least that's been my reading of it.

  • by cfalcon ( 779563 ) on Friday February 10, 2017 @01:32PM (#53841127)

    When Intel struggled to get Broadwell out, their die shrink to 14nm using the architecture they made in Haswell, you knew that they were having at least some issues. When it turned out that Haswells almost exclusively didn't properly support the new "transactional memory", to the point that the opcodes had to be patched out, that was also kinda depressing. Skylake, their next in line and the newest architecture update, was the last time they were even vaguely on schedule.

    Right after Skylake, they announced that, instead of a die shrink to 10nm, they would add a new "optimization" step and continue to tweak Skylake instead of shrinking it. This is Kaby Lake, which just came out properly in desktop and laptop (Xeons normally lag behind: the full suite of Skylake Xeons should be launching in a few months). They redid all their slides to show a full new arrow, giving them effectively another year to do the die shrink. Now that we are getting close to seeing what would be the next one ("Cannon Lake"), which properly should be launching later this year on 10nm, we first heard that they were going to insert a "Coffee Lake", which would be another optimization at 14nm, for desktop, and that only laptop and low-power chips would actually be on the 10nm "Cannon Lake". And now we find out that the first 10nm parts will be for the data center, which means an even further push back.

    Summary: their older slides used to show around a summer 2016 launch for their 10nm process. Then it became a summer 2017 launch, then that became only a partial launch, and now it is looking like a spring 2018 launch. The words change, but the message is the same: "We aren't close to having 10nm be actually profitable, or possibly even all that functional".

    • Summary: their older slides used to show around a summer 2016 launch for their 10nm process. Then it became a summer 2017 launch, then that became only a partial launch, and now it is looking like a spring 2018 launch. The words change, but the message is the same: "We aren't close to having 10nm be actually profitable, or possibly even all that functional".

      tbh I'll be happy if they get there by 2020.

  • The new plant they are building in Arizona is slated for 7nm dies, so smaller chips are coming eventually.

    • by hipp5 ( 1635263 )

      The new plant they are building in Arizona is slated for 7nm dies, so smaller chips are coming eventually.

      Those chips are destined for mobile markets, no?

  • Looks like Moore's Law is starting to fall apart. There's what? 7nm, then carbon transistors, then 3nm, then 1nm CPU processes left in the bag. Beyond that they can't make transistors smaller, and clock speed on carbon transistors is going to be massively heat-bound. (Yes, carbon sublimates at 5000C, but the solder on your motherboard only goes to about 300-350C. Even assuming you could make steel traces, that still only gets you to 1200C before your board traces melt, let alone capacitors and other component
    • by Khyber ( 864651 )

      A carbon atom is roughly 0.3nm in diameter. I imagine one could make a three-atom long transistor from carbon given how it can be made conductive or non-conductive, putting it at just under 1nm total package size.

    • No reason for the CPU temperature to ever reach the solder. Tungsten is a better conductor of electricity than iron (steel) and has a higher melting point. Some forms of carbon are superb heat conductors; how'd you like to have a diamond heat spreader?

      I suppose liquid cooling - flowing right over the die - is the ultimate solution for heat dissipation.

      • by Agripa ( 139780 )

        I suppose liquid cooling - flowing right over the die - is the ultimate solution for heat dissipation.

        At least with water, power densities 10 years ago already exceeded the point where film boiling is a problem, so a heat spreader has to be used. We are already limited by copper heat spreaders, leaving either higher-thermal-conductivity materials or improved heat pipes.

  • No reason to upgrade when there aren't going to be significant performance increases over 4- and 6-year-old machines.
