Intel Confirms 8th Gen Core On 14nm, Data Center First To New Nodes (anandtech.com) 78
Ian Cutress, writing for AnandTech: Intel's 8th Generation Core microarchitecture will remain on the 14nm node. This is an interesting development with the recent launch of Intel's 7th Generation Core products being touted as the 'optimization' behind the new 'Process-Architecture-Optimization' three-stage cadence that had replaced the old 'tick-tock' cadence. With Intel stringing out 14nm (or at least, an improved variant of 14nm as we've seen on 7th Gen) for another generation, it makes us wonder where exactly Intel can promise future performance or efficiency gains on the design unless they start implementing microarchitecture changes.
Translation... (Score:2, Insightful)
8th gen will suck as bad as 7th gen, so that means the 4th gen stuff will STILL outperform it.
Re: (Score:1)
Except benchmarks show you are an idiot.
http://core0.staticworld.net/images/article/2016/12/kaby_lake_cinebench_multi_threaded_oc-100700619-orig.jpg
Re:Translation... (Score:5, Interesting)
According to your graph, the new Kaby Lake 7700K is only ~55% faster than my 2nd generation Sandy Bridge 2600K. Which means that between January 2011 and January 2017, Intel's performance improvements for like-for-like CPUs have been about 7.5% per year, which is pretty shitty. It's not that 8th gen is going to suck as bad as 7th gen -- it's that both 7th gen and 8th gen suck as bad as everything Intel has released for the past six years.
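The per-year figure above can be checked with a quick compound-growth calculation (the 55% total gain is taken from the linked benchmark; everything else follows from the dates):

```python
# Quick check of the annualized figure above: a ~55% total gain
# between January 2011 and January 2017 (6 years), compounded.
total_gain = 1.55   # 7700K vs. 2600K, per the linked benchmark
years = 6
annual = total_gain ** (1 / years) - 1
print(f"{annual:.1%} per year")  # roughly 7.6% per year
```

Compounding gives ~7.6% per year, which matches the rough 7.5% figure; a simple average (55%/6) would overstate it at ~9.2%.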
Re:Translation... (Score:5, Interesting)
Intel needs a new microarchitecture to replace Core. Core was an exceptional design, especially considering what it replaced and how large the early performance gains were if you bought an early Nehalem CPU. Hell, even Core itself traces its roots back to the P6 microarchitecture, which Intel returned to after abandoning Prescott (sold as the Pentium 4 back in the wild days of the clock speed wars) -- a lineage that goes back decades. It's pretty clear that Core is tapped out in terms of what can be squeezed out of it, and Intel needs to go back to the drawing board like AMD did and use all of the lessons they've learned to make a new architecture.
Even if AMD's offerings aren't quite as good as Intel's, they'll still be closer than they have ever been before, and that will allow AMD to challenge Intel in its high-margin consumer market segments or in markets where AMD hasn't been relevant in years. Intel could afford to tread water while AMD was stuck with its failed Bulldozer architecture, since Intel would just as gladly sell you a four-year-old CPU as a new one when prices hadn't moved much, but now AMD is going to erode those price points or offer a competing product if they don't undercut Intel. Intel will still have a process advantage with its own fabs, but it needs a new architecture to widen the gap if it wants any hope of maintaining its profit levels.
Re: (Score:2)
Fantastic summary!
The elephant in the room is that Silicon doesn't scale past 5 GHz. Everyone knows [psu.edu] about it but no one in the commercial sector is interested in doing anything about it. :-(
Hell, even back in 2007 SiGe was proposed [toronto.edu] to get up past 50 GHz.
What's really freaky is that a close friend of mine was playing with 1+ GHz CPUs in the late '70s. I guess we'll never have those 100 GHz Gallium Arsenide CPUs anytime soon ... :-/
Re: (Score:2)
but no one in the commercial sector is interested in doing anything about it.
Why not?
Re: (Score:2)
Because it has HUGE risk for very little reward. Silicon is literally dirt cheap.
99% of people don't know or care -- silicon CPUs do everything they need. They will never be able to justify the cost of a CPU that is 10x or 100x the price of what they currently pay. The current tech is "good enough" for 99% of people -- that's where the bread and butter is.
This creates a chicken-and-egg scenario. No one wants to risk investing billions into alt. tech when the status quo is much more profitable. i.e. Who is goi
Re: (Score:2)
It's difficult. A manufacturer would have to see so obvious a business case for making a super-speed non-silicon processor that the worries about risk would be swept aside. (And from a paranoid viewpoint, the military might want to keep a super-speed process tightly under its own control.) That said, IBM has been working with SiGe for decades and may have a viable process. https://arstechnica.com/gadgets/2015/07/ibm-unveils-industrys-first-7nm-chip-moving-beyond-silicon/ [arstechnica.com]
Be aware that SiGe is mostly used fo
Re: (Score:1)
Kaby Lake isn't even a new architecture. People did some sleuthing and discovered it's literally a new stepping of Skylake, given a new CPUID string to signify it's a "new generation" of Intel Core CPUs. The reality is it's just Skylake with HEVC decode blocks added to the GPU, and a 100/200MHz clock bump. https://www.cpchardware.com/co... [cpchardware.com]
People have benched the i7-6700K and 7700K at the same clocks and lo and behold, the results were identical on every bench they ran...because they are literally two differ
Re: (Score:2)
8th gen will suck as bad as 7th gen, so that means the 4th gen stuff will STILL outperform it.
Except it will have 6 cores. I assume they are talking about the old news of Coffee Lake, which is a Skylake architecture with 6 cores and will be the desktop and high-end laptop CPU of the "8th gen", while Cannon Lake would only be in ultrabooks.
Re: (Score:2)
Aren't there more improvements, like more lanes and other shit like that? Bus speed? Fuck, other shit besides 'GHz'.
People tend to focus on single, simplistic things. Kinda shows a very shallow understanding of the subject matter. Like the new MacBook.
Maybe, but that is more in the integrated chipset. It was the only thing upgraded in Kaby Lake, so they will probably upgrade it in minor ways again.
Re: (Score:2)
8th gen will suck as bad as 7th gen, so that means the 4th gen stuff will STILL outperform it.
Nah, they will buy AMD's Ryzen chips in a modified package, rebrand them as their own, and resell those.
The hope is that RYZEN will be good (Score:4, Insightful)
The hope is that AMD's Ryzen will be good enough to compete with Intel on performance -- not just price. That would wake Intel up again, since they always relax when there is no competition, i.e. no motive to do something more.
Re: The hope is that RYZEN will be good (Score:1)
Well, a new node implies heavy infrastructure investment, so it's understandable. Issues / delays with EUV lithography aren't helping either.
Re: (Score:1)
You mean FAB42, which was being discussed internally back in 2006...
Yeah, since it will take 2 years just to complete the interior, and EUV is still a lab project, the timing actually looks pretty good to me. But is this a desktop/laptop play?
Or is Intel planning to attack the mobile space with truly revolutionary chipsets? Like an SoC with a mobile side that is quick, mobile-focused, and low power, and then a desktop side that is better than an m3, preferably i5/i7 capable, waiting to be turned on and go all desktop whe
Re: (Score:2)
If the benchmarks leaked today turn out to be legit, it is looking very good indeed for Ryzen:
http://wccftech.com/amd-ryzen-... [wccftech.com]
Re: (Score:2)
hydrogen being the smallest one.
Really? [crystalmaker.com]
Re: (Score:3)
Competition is quite a bit behind Intel at the moment, so no reason to move forward while they can milk this current generation. Once competition starts getting *near* 14nm.... Intel will nudge forward to keep a few steps ahead.
What's beyond 7nm though?
It's another confirmation that Moore's Law is dead. If Moore's Law were still in effect, Intel would make their new chips at a smaller geometry regardless of competition, because it would be cheaper to do so and that would make for fatter profits. Cost per transistor is the driver of Moore's Law. That stalled at 28nm, because that was the last node that could be made without resorting to multi-patterning. Scaling worked in the past because the cost to make a wafer was roughly constant. By making features smal
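The cost-per-transistor argument above can be sketched with a toy model. All numbers here are invented purely for illustration (they are not actual foundry costs); the point is the shape of the trend, not the figures:

```python
# Toy model of the cost-per-transistor argument: while wafer cost
# stays roughly flat and transistor density doubles per node, cost
# per transistor falls; once multi-patterning inflates wafer cost
# faster than density grows, it stops falling.
def cost_per_transistor(wafer_cost_usd, transistors_per_wafer):
    return wafer_cost_usd / transistors_per_wafer

nodes = [
    # (node, wafer cost $, transistors per wafer) -- made-up values
    ("40nm", 3000, 1.0e12),
    ("28nm", 3000, 2.0e12),  # last single-patterned node
    ("20nm", 5000, 3.0e12),  # multi-patterning raises wafer cost
    ("14nm", 8000, 4.5e12),
]
for node, cost, count in nodes:
    print(f"{node}: ${cost_per_transistor(cost, count):.2e} per transistor")
```

With these hypothetical inputs, cost per transistor halves from 40nm to 28nm, then creeps back up once the wafer cost jump outpaces the density gain -- which is the "stalled at 28nm" claim in the comment.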
Re: (Score:1)
Are you sure MOORE is better?
I'm pretty sure less is Moore
Re: (Score:2)
Agree. I have a 14-core machine with 128GB RAM. When I quickly fire up a Debian VM to test some Ansible script or something, I simply give the entire VM 64GB RAM. Huge, huge difference. The thing boots in less than a second from a cold start. Very useful test rig.
Re: (Score:2)
There are many tradeoffs involved in RAM design, but one basic principle is this: the bigger it is, the slower it is. This cannot be escaped. Bigger RAM means more row drivers and/or more levels of column multiplexers. Faster RAM means bigger row drivers and bigger cells. Put it all together and speed*size = heat, and RAM already needs heat sinks to be able to respond in ~20 CPU cycles.
Basically, you're never going to see big RAM fast enough to respond in a single fast CPU cyc
Re: (Score:2)
I think they are making the chips smaller, but who really cares? (Remember: the package is far larger than the chip itself.)
This can best be seen in the Xeon chips, where they use their ability to pack even more transistors into a CPU to include more and more cores (they are up to 26 now, I think).
They don't do it for consumer chips, because it's really difficult to sell an 8-core chip if each core is even 10% slower than in the 4-core version, and there is very little consumer software that can use that many cores.
Good Analysis (Score:5, Interesting)
Yesterday, prices were leaked for AMD Ryzen. For equal performance, the AMD parts are about 70 percent cheaper. Intel has been goofing off for several years now. Tweaking process improvements is not innovation. Intel's architecture is tired and needs to be rethought. I'm really surprised that Intel has been caught with their pants down.
Intel needs its ass kicked for cutting PCIe lanes (Score:3, Interesting)
Intel needs its ass kicked for cutting PCIe lanes on a $400 chip that had way more in the last generation; now you need to go up to a $600 chip to get them back, and that's on the last-gen workstation/server sockets. The desktop boards have been stuck on the same number of PCIe lanes for years, and max out at quad core.
AMD is going to have more PCIe lanes and more cores on desktop boards than Intel has, with the server/workstation parts likely to have even more than the AMD desktop boards.
Consumer products need more PCIe lanes; AMD is (Score:3)
Consumer products need more PCIe lanes, and AMD is doing better with Ryzen: 16+4+4 (chipset link?). USB 3 may be in the CPU as well.
Ryzen server/workstation parts may have even more PCIe lanes. Will there be single-socket systems with 32 or more PCIe lanes, plus the chipset link + 4, that can go after Intel's high-end consumer products, which are a generation behind their mainstream consumer parts?
Re: (Score:2)
The USB controller may be in the CPU and not behind the chipset link.
Remove the spyware/outdated debug crap for savings (Score:1)
Meh, if Intel would just remove most of the debugging crap almost nobody uses anymore because it was superseded by newer debugging crap(!), and dedicated the 8th gen just to bug-fixing, they'd save a lot of transistors and power, and also earn a lot of goodwill.
Can you imagine an Intel processor where the errata sheet is not a mile long? One you could trust your embedded products to, without the fear of it being a time bomb, as has happened several times already (the Atom C2000 is just the latest incarnation)?
Re: (Score:2)
An IOMMU is quite useful to users, since you can map hardware between VMs, so this is a good feature. For debugging, you do need things like the ability to single-step and to trap instructions, which is also important for VMs. I understand most performance-related things have nothing to do with the ISA and are more of an electrical engineering and physics thing.
Tell me if I'm wrong but (Score:1)
Hasn't the whole move from 14nm to 10nm kind of been BS, because they didn't actually shrink the transistor size, just the size of the interconnects between the transistors? No one has a true 10nm transistor right now, or at least that's been my reading of it.
Re: (Score:3)
Intel has been talking about 10nm for at least 3 years. They "pretended" to show off a 10nm chip recently, but all indications point to maybe 2018... the launch of 10nm has been delayed at least three times (official announcements).
When did intel announce 10nm chips || date range [2015 - 2016] [google.com]
Kinda bad news (Score:3)
When Intel struggled to get Broadwell out -- their die shrink to 14nm using the architecture they made for Haswell -- you knew that they were having at least some issues. When it turned out that Haswell chips almost universally didn't properly support the new transactional memory extensions, to the point that the opcodes had to be patched out, that was also kinda depressing. Skylake, their next in line and the newest architecture update, was the last time they have been even vaguely on schedule.
Right after Skylake, they announced that, instead of a die shrink to 10nm, they would add a new "optimization" step and continue to tweak Skylake instead of shrinking it. This is Kaby Lake, which just came out properly on desktop and laptop (Xeons normally lag behind: the full suite of Skylake Xeons should be launching in a few months). They redid all their slides to show a full new arrow, giving them effectively another year to do the die shrink. Now that we are getting close to what should be the next part ("Cannon Lake"), which properly should be launching later this year on 10nm, we first heard that they were going to insert a "Coffee Lake" -- another optimization at 14nm -- for desktop, and that only laptop and low-power chips would actually be on the 10nm "Cannon Lake". And now we find out that the first 10nm parts will be for the datacenter, which means an even further push back.
Summary: their older slides used to show around a summer 2016 launch for their 10nm process. Then it became a summer 2017 launch, then that became only a partial launch, and now it is looking like a spring 2018 launch. The words change, but the message is the same: "We aren't close to having 10nm be actually profitable, or possibly even all that functional".
Re: (Score:2)
Summary: their older slides used to show around a summer 2016 launch for their 10nm process. Then it became a summer 2017 launch, then that became only a partial launch, and now it is looking like a spring 2018 launch. The words change, but the message is the same: "We aren't close to having 10nm be actually profitable, or possibly even all that functional".
tbh I'll be happy if they get there by 2020.
7nm (Score:2)
The new plant they are building in Arizona is slated for 7nm dies, so smaller chips are coming eventually.
Re: (Score:2)
The new plant they are building in Arizona is slated for 7nm dies, so smaller chips are coming eventually.
Those chips are destined for mobile markets, no?
Re: (Score:2)
I am not sure they have said definitively.
So much for the singularity (Score:2)
Re: (Score:2)
A carbon atom is roughly 0.3nm in diameter. I imagine one could make a three-atom long transistor from carbon given how it can be made conductive or non-conductive, putting it at just under 1nm total package size.
Re: (Score:2)
Meanwhile, we're already using similar tech in fabrication of LEDs to control the Auger effect so we can obtain higher efficiency, so....... no, I'll keep being based in reality while you stick around with your head up your ass.
Re: (Score:2)
No reason for the CPU temperature to ever reach the solder. Tungsten is a better conductor of electricity than iron (steel) and has a higher melting point. Some forms of carbon are superb heat conductors - How'd you like to have a diamond heat spreader?
I suppose liquid cooling - flowing right over the die - is the ultimate solution for heat dissipation.
Re: (Score:2)
I suppose liquid cooling - flowing right over the die - is the ultimate solution for heat dissipation.
At least with water, power densities 10 years ago already exceeded the point where film boiling is a problem, so a heat spreader has to be used. We are already limited by copper heat spreaders, leaving either higher-thermal-conductivity materials or improved heat pipes.
Well this postpones my next builds. (Score:2)
No reason to upgrade when there aren't going to be significant performance increases over 4 and 6 year old machines.