Intel Unveils Full Details of Kaby Lake 7th Gen Core Series Processors (hothardware.com) 95
Reader MojoKid writes: Intel is readying a new family of processors, based on its next-gen Kaby Lake microarchitecture, that will be the foundation of the company's upcoming 7th Generation Core processors. Although Kaby Lake marks a departure from Intel's "tick-tock" release cadence, tweaks to its 14nm manufacturing process (called 14nm+) have yielded significant gains in performance through clock speed boosts and other optimizations. In addition, Intel has incorporated a new multimedia engine into Kaby Lake that adds hardware acceleration for 4K HEVC 10-bit transcoding and VP9 decoding. Skylake could handle 1080p HEVC transcoding, but it didn't accelerate 4K HEVC 10-bit transcoding or VP9 decode and had to fall back on CPU resources. The new multimedia engine gives Kaby Lake the ability to handle up to eight 4Kp30 streams, and it can decode HEVC 4Kp60 content in real time at up to 120Mbps. The engine can also offload 4Kp30 real-time encoding to a dedicated fixed-function engine. Finally, Intel has made some improvements to its Speed Shift technology, which now takes the processor from low power states to maximum frequency in 15 milliseconds. Clock speed boosts of 400-500 MHz across the 7th gen Core i and Core m series, in combination with the Speed Shift optimizations, result in what Intel claims are 12-19 percent performance gains in the same power envelope as the previous-generation Skylake series, along with even more power-efficient video processing.
HALT (Score:3, Insightful)
Re: (Score:1)
Copied from the web:
"In March 2016 in a Form 10-K report, Intel announced that it had deprecated the Tick-Tock cycle in favor of a three-step "process-architecture-optimization" model, under which three generations of processors will be produced with a single manufacturing process, adding an extra phase for each with a focus on optimization."
That means, Cannonlake will be process shrink (14nm to 10nm) of Kabylake. Is it worth waiting a year for negligible improvements?
Re: (Score:2)
Copied from the web:
"In March 2016 in a Form 10-K report, Intel announced that it had deprecated the Tick-Tock cycle in favor of a three-step "process-architecture-optimization" model, under which three generations of processors will be produced with a single manufacturing process, adding an extra phase for each with a focus on optimization."
That means, Cannonlake will be process shrink (14nm to 10nm) of Kabylake. Is it worth waiting a year for negligible improvements?
Is it probable that 10nm technology is too flaky at high (3 GHz) clock frequencies, with signal connections and transistors so close to each other? I would have concerns about 10nm technology. In my opinion, it needs a year of being in the field (say, for 2019) before I would trust 10nm.
Re: (Score:2)
Wikipedia says H2 2017 for Cannonlake, but my gut says that's too soon for actual product. 10nm would be nice to have in a laptop, but for the desktop?? If someone requires 4K hardware acceleration, don't most discrete GPUs do that today?
Re: (Score:3)
Despite the extra year Intel has budgeted, I wouldn't count on 10nm being ready by the second half of 2017.
AMD May Nearly Catch Up (Score:4, Insightful)
It's an interesting time in CISC processors. With fabs having to spend exponential amounts of money for incremental gains in performance and power savings, a smaller company like AMD may be able to make a chip that's 90% as fast, at a much lower price, which I hope it does because it's good for customers on both sides.
Re: (Score:3)
...And I applaud Intel for supporting VP9.
Re: (Score:2)
Now if only they would support their older hardware. They seem to be cutting the support cycle shorter and shorter, and the only real software-side support seems to be coming on the Linux side.
Intel does its part; perhaps you have them confused with Microsoft? They want to terminate Win7, but it is, after all, a seven-year-old OS in the extended support phase. The problem here isn't that it's running out of support; it's that I don't want any of their newer products. And I don't really have any right to demand they make products the way I want them, or support age-old software, just because I don't like the new. I suppose I'll eventually have to "upgrade" it to a Wintendo because I don't really give
Re: (Score:2)
At this point it's Apple and ARM. My Samsung and Apple devices already do 95% of what I want a computer to do, and a lot of the time it's a lot more than what my laptops do.
Re: (Score:1)
and code for some other system to compile then sure
What, still living in the mainframe era? We've had good incremental compilers since the 1970s/80s. I mean, every time the focus of computing shifted, the newcomers had to learn everything from scratch, but nobody is really preventing you from doing whatever you want. (Well... maybe Apple is. They really don't like compilers on iDevices. So there's that, but that only means that iDevices are not "real computers".)
Re: (Score:2)
Re: (Score:2)
It's an interesting time in CISC processors. With fabs having to spend exponential amounts of money for incremental gains in performance and power savings, a smaller company like AMD may be able to make a chip that's 90% as fast, at a much lower price, which I hope it does because it's good for customers on both sides.
But AMD, when it wasn't fabless, never had state-of-the-art fabs, and letting go of its fabs didn't make things better either. Intel still has the world's best fabs, and nothing AMD does comes even close. The reason the fabs are spending gobs of cash is that they are well past the point of diminishing returns, where shrinks would translate into cost reductions. They no longer do. Once Intel gets to depreciate those fabs, its margins should improve considerably again!
Re: (Score:1)
AMD hasn't been able to do "90% as fast" for over a decade... hell, not even 75% within the same power envelope. What makes you think they would be able to do that now, or anytime in the near or distant future? Intel charges more, yes, because it can; AMD's failure to compete allows it to.
Re: (Score:2)
Re: (Score:3)
There are no CISC processors, only CISC instruction sets. That ignorant fanboy feud died back in the 90's. Processor architecture is not driven by instruction set.
Nor are the "interesting times" unique to CISC. All processors have this issue unless they are uncompetitive.
AMD hasn't been competitive in quite a while and there's nothing new there. What has changed is the inherent need for x86 processors at all. Intel's threat is from ARM, not AMD.
Re: (Score:1)
It's an interesting time in CISC processors. With fabs having to spend exponential amounts of money for incremental gains in performance and power savings, a smaller company like AMD may be able to make a chip that's 90% as fast, at a much lower price, which I hope it does because it's good for customers on both sides.
It would be nice if AMD would catch up. Their biggest problem currently is their massive debt.
Re: (Score:2)
Re: (Score:2)
IBM is still making Power chips. I imagine they aren't cheap in the research & development department, but they do have a non-consumer focus...
Re: (Score:2)
I bet you are a blast at parties.
Big disappointment anymore (Score:3, Insightful)
It's like telling me the Sun will be brighter tomorrow. Nothing in chips is so outstanding in improvements anymore. It's just more claims and numbers that most people don't even care about. Who cares about Intel graphics? If you're a gamer, you're not using Intel for graphics and probably never will. My Skylake was an incredible disappointment; I could have saved a hundred or more dollars buying a Haswell and gotten almost as good performance. It's really not the chip anymore, because OSes have improved to accommodate tablets and slower CPUs. Windows 10, Linux distributions, and OS X have all improved resource consumption and power use. It's really not an issue anymore, and Intel can only improve those numbers slightly. Any dramatic claims are not happening.
Re: (Score:2)
Re: (Score:2)
There is still plenty of room for improvements with specialized design and software even if physics is limiting the improvements in general purpose cpus.
New interconnect technology like on chip photonics, specialized hardware like artificial neurons, and new software designs to take advantage of new capabilities provide plenty of room for future growth.
Even if hardware stays still there is plenty of improvement to be made in algorithms, coding and working with ever larger networks.
I'm betting on A.I. and au
Re: (Score:2)
I want more cores. If every machine shipped with 8 cores today, software would find a way to use them before too long. Most higher-end Skylakes have 40% dead silicon in the form of a crappy GPU that is never used. Why not use that space for more cores, bigger caches, or virtually ANYTHING else?
A 9-12% improvement per year is a giant yawn, as we saw with Skylake and so on. Intel is mired in molasses; its prices stay high while its improvements are awesomely negligible.
Re: (Score:2)
I'm hoping that Chinese 64-Core ARM server processor starts to make its way to the consumer space...
Re: (Score:2)
That "crappy GPU" is more cores. Specialized cores, but even the Intel GPU is ridiculously fast for the right kind of code. Now that we're getting Vulkan and DX 12 software should be able to run GPU compute on the Intel or AMD integrated GPU while doing video on the discrete card.
I predict a future with a lot more OpenCL code in it. I also predict a future with more idiot gamers who complain that using all of the CPU cores plus the integrated GPU ruins their 4.6 GHz overclocks.
Re: (Score:2)
Do I give your post recognition for being mostly on point, or do I completely freak out at another example of one of those very weird uses of "anymore"? Sorry, it's too jarring. Stop that; you're doing it wrong.
"Anymore" is only interchangeable with "nowadays" SOMETIMES, not all the time.
Re: (Score:2)
Microarchitectural details? (Score:4, Interesting)
I'm sure the graphics and video playback specs are important, but I'd like to know what changes they've made architecturally in the processor core. Maybe I missed it, but this article seems light on those details.
Re:Microarchitectural details? (Score:5, Interesting)
According to Anandtech, there are no core architectural improvements, the IPC is the same as Skylake. Clocks per watt is substantially improved, though.
Re: (Score:2)
Huh. All this time, I thought Intel was touting this as being predominantly about architectural improvements while staying on the same process. Obviously, they have improved their process, but this seems like a departure from what I'd read about (or assumed?) previously.
Re: (Score:3)
Nope. Officially it's now: Process-Architecture-Optimization, but tick-tack-tock is what some people are calling it, with the tack having been tacked on there to allow selling 'refreshes' of processors with the same architecture and process whilst giving the impression of meaningful progress.
Re: (Score:2)
It does not look like much at all. The 10-19% seems to come from the clock bump.
500/3000 = 19%
Just like the old days. We rode the wave from 1 MHz to 4GHz.
Re: (Score:2)
Are you running your calculations on a Pentium 1?
500/3000 = 0.16667
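For what it's worth, the arithmetic both posts are arguing over is easy to check (using the 3.0 GHz base clock and 500 MHz bump assumed in the parent post; these are the thread's illustrative numbers, not official figures):

```python
# Relative speedup from a clock bump is delta / base.
base_mhz = 3000   # assumed base clock, from the parent post
bump_mhz = 500    # claimed Kaby Lake clock bump

speedup = bump_mhz / base_mhz
print(f"{speedup:.4f}")  # prints 0.1667, i.e. ~16.7%, not 19%
```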
Re: (Score:2)
Graphics! (Score:1)
I'll probably build my next gaming machine with KBL to replace my IVB machine. As with my current CPU, the 60% of die area for graphics will sit idle while a Nvidia card does its job.
It would be nice to have a graphics-less gaming version with more cores and cache.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Actually, how does Kaby Lake's graphics compare to the latest from either NVIDIA or AMD?
Intel : Real GPU :: potato gun : howitzer
Re: (Score:2)
What I find interesting is that all the desktop Intel chips have this massively powerful coprocessor sitting right next to them. If you don't have a graphics card, then it provides mid range graphics. If you DO have a graphics card then... it just... sits there...
But there's nothing forcing that. You COULD have an application that uses the graphics card for graphics, and the coprocessor on the chip for some other kinds of math. In practice, this would be a big hassle: it wouldn't work great on any chip
Re: (Score:2)
A lot of AMD's chips have embedded GPUs; AMD calls the whole thing an APU.
They also integrate it fairly intelligently with direct access to shared resources and shit. Their whole HSA push.
With DX12 and Vulkan games should in theory be able to access all GPUs and use them opportunistically, across discrete/embedded and even across vendors. The most common use now is to use the discrete GPU as your GPU and use the embedded GPU to encode video. If Nvidia hadn't locked down hardware accelerated PhysX to thei
Re: (Score:2)
Re: (Score:2)
Kaby Lake (Score:3, Interesting)
Re: (Score:2)
I still have no reason to leave my overclocked i7 2600k.
Re: (Score:2)
Re: (Score:2)
It's still a Skylake refresh: a slightly tweaked GPU (mostly software, I suspect), a slight clock boost, and a new chipset. My expectations for IPC increases are 0%, or maybe 3% if they bothered to create a new wafer. Trust me, Kaby Lake will underwhelm.
IPC changes are none, because the architecture is the same. They can get 5-10% higher frequencies in the same power envelope, but MHz for MHz it is CPU-wise identical to Skylake.
CORE? (Score:2)
Time to remove the word Core from the processor name. It adds nothing!
Re: (Score:2)
These are the Intel Core line of chips. It's a stupid name, but it does tell you what broad family of chips you are dealing with.
https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:3)
Re: (Score:2)
Which basically means that Apple will take advantage of it, but Microsoft won't. This was touted as one of the prime features of the Surface Pro 4, along with the deep sleep states. Except that the W10 OS is so horribly managed that it never turns clock-speed control over to the CPU, and any process of any type can prevent the OS from allowing the CPU to go into a deep sleep state. And, by all user accounts, that's exactly what happens on the Surface Pro. The SP4 team has pretty much given up on th
Re: (Score:1)
Summary based on benchmarks:
1) Makes the CPU 25% "faster" for very short lived workloads by quickly ramping up from idle
2) Makes the CPU 2% faster for sustained workloads
3) Only consumes 0.8% more power under load and saves power for the short lived loads by completing them more quickly.
And trust me, there was a fuckton of work putting all the low latency power state switching support in all the bits of the chip. For systems on a battery, it means better battery life, which is what makes it worthwhile.
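A toy model makes the pattern in those benchmark numbers intuitive: a faster ramp out of idle mostly pays off for short bursts, and barely matters for sustained work. All of the numbers below (the legacy ramp time, the half-speed assumption) are illustrative assumptions, not Intel's measurements; only the 15 ms figure comes from the article.

```python
# Toy model: work runs at half speed until the CPU ramps to max frequency,
# then at full speed. Ramp times and the slow factor are assumptions.

def run_time(work_ms, ramp_ms, slow_factor=0.5):
    """Time to finish `work_ms` of full-speed work when the first
    `ramp_ms` milliseconds execute at `slow_factor` of full speed."""
    done_during_ramp = ramp_ms * slow_factor
    if work_ms <= done_during_ramp:
        return work_ms / slow_factor          # finishes before ramp completes
    return ramp_ms + (work_ms - done_during_ramp)

for work in (10, 100, 10_000):                # short, medium, sustained
    old = run_time(work, ramp_ms=100)         # assumed slow legacy ramp
    new = run_time(work, ramp_ms=15)          # claimed Kaby Lake ramp
    print(f"{work} ms of work: {old / new:.2f}x faster with the fast ramp")
```

Running it shows a double-digit percentage win for the short bursts and essentially nothing for the sustained workload, which matches the shape of the benchmark summary above.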
Re: (Score:1)
That feature has been around for several generations of Core procs. It's nothing special to Skylake.
p-states are definitely the bomb, though
Re: (Score:2)
Please use a good tech site. (Score:1)
Where to improve in future (Score:2)
In the past, there has been a shift from outboard hardware and coprocessors to CPUs, then from single core to multi-core. I envisage offloading the main CPUs as much as possible: having lightweight RISC cores (e.g. what you find in a cheap smartphone) doing things like menial OS duties, possibly even much of what a kernel traditionally does, plus I/O and sound; moving the GUI and much rendering (e.g. what OS X's Quartz "display PDF" layer did, and stuff like window management) to a RISC core on the GPU; and so on