Intel Hardware

Intel Unveils Full Details of Kaby Lake 7th Gen Core Series Processors (hothardware.com) 95

Reader MojoKid writes: Intel is readying a new family of processors, based on its next-gen Kaby Lake microarchitecture, that will be the foundation of the company's upcoming 7th Generation Core processors. Although Kaby Lake marks a departure from Intel's "tick-tock" release cadence, there have been tweaks to its 14nm manufacturing process (now called 14nm+) that have resulted in significant gains in performance, based on clock speed boosts and other optimizations. In addition, Intel has incorporated a new multimedia engine into Kaby Lake that adds hardware acceleration for 4K HEVC 10-bit transcoding and VP9 decoding. Skylake could handle 1080p HEVC transcoding, but it didn't accelerate 4K HEVC 10-bit transcoding or VP9 decode and had to fall back on CPU resources. The new multimedia engine gives Kaby Lake the ability to handle up to eight 4Kp30 streams, and it can decode HEVC 4Kp60 content in real time at up to 120Mbps. The engine can also now offload 4Kp30 real-time encoding to a dedicated fixed-function engine. Finally, Intel has made improvements to its Speed Shift technology, which now takes the processor from low-power states to maximum frequency in 15 milliseconds. Clock speed boosts of 400-500MHz across the 7th Gen Core i and Core m series, in combination with the Speed Shift optimizations, result in what Intel claims are 12 to 19 percent performance gains in the same power envelope as the previous-generation Skylake series, along with even more power-efficient video processing.
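For a sense of scale, the decode capabilities quoted above can be sanity-checked with back-of-envelope arithmetic. This is a sketch assuming "4K" means 3840x2160 (UHD); the stream counts come from the summary, while the pixel-rate framing is illustrative, not Intel's:

```python
# Rough pixel throughput implied by the quoted decode capabilities.
# Assumes 4K = 3840x2160 (UHD); stream counts are from the article summary.
WIDTH, HEIGHT = 3840, 2160
pixels_per_frame = WIDTH * HEIGHT            # 8,294,400 pixels per 4K frame

eight_4kp30 = 8 * pixels_per_frame * 30      # eight simultaneous 4Kp30 streams
one_4kp60 = pixels_per_frame * 60            # one real-time 4Kp60 stream

print(f"8 x 4Kp30: {eight_4kp30 / 1e9:.2f} Gpixel/s")  # 1.99 Gpixel/s
print(f"1 x 4Kp60: {one_4kp60 / 1e9:.2f} Gpixel/s")    # 0.50 Gpixel/s
```

In other words, the eight-stream case is roughly four times the raw pixel rate of a single 4Kp60 stream, which is why it calls for fixed-function hardware rather than CPU-side decode.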
  • HALT (Score:3, Insightful)

    by negRo_slim ( 636783 ) <mils_orgen@hotmail.com> on Tuesday August 30, 2016 @09:18AM (#52795855) Homepage
    Isn't Cannonlake coming out in the second half of 2017? Why not wait a bit for these to drop in price, or make the jump to 10nm if the performance is there?
    • by Anonymous Coward

      Copied from the web:

      "In March 2016 in a Form 10-K report, Intel announced that it had deprecated the Tick-Tock cycle in favor of a three-step "process-architecture-optimization" model, under which three generations of processors will be produced with a single manufacturing process, adding an extra phase for each with a focus on optimization."

      That means Cannonlake will be a process shrink (14nm to 10nm) of Kaby Lake. Is it worth waiting a year for negligible improvements?

      • Copied from the web:

        "In March 2016 in a Form 10-K report, Intel announced that it had deprecated the Tick-Tock cycle in favor of a three-step "process-architecture-optimization" model, under which three generations of processors will be produced with a single manufacturing process, adding an extra phase for each with a focus on optimization."

        That means Cannonlake will be a process shrink (14nm to 10nm) of Kaby Lake. Is it worth waiting a year for negligible improvements?

        Is it probable that 10nm technology at high (3 GHz) clock frequencies is too flaky, with signal connections and transistors so close to each other? I would have concerns about 10nm technology. In my opinion, it needs a year of being in the field (say, by 2019) before I would trust 10nm.

    • by MTEK ( 2826397 )

      Wikipedia says H2 2017 for Cannonlake, but my gut says that's too soon for actual product. 10nm would be nice to have in a laptop, but for the desktop?? If someone requires 4K hardware acceleration, don't most discrete GPUs do that today?

    • Broadwell was originally supposed to be released 2014Q3, but (aside from Core M) was delayed until 2015Q1 for mobile and 2015Q2 for desktop processors due to problems Intel had getting enough yield out of the 14nm process. It was delayed so long that most manufacturers (and Intel for some of their product lines) skipped it entirely and moved straight from Haswell to Skylake which came out 2015Q3.

      Despite the extra year Intel has budgeted, I wouldn't count on 10nm being ready by the second half of 2017.
  • by BrendaEM ( 871664 ) on Tuesday August 30, 2016 @09:24AM (#52795893) Homepage

    It's an interesting time in CISC processors. With fabs having to spend exponential amounts of money for incremental gains in performance and power savings, a smaller company like AMD may be able to make a chip that's 90% as fast, at a much lower price, which I hope it does because it's good for customers on both sides.

    • ...And I applaud Intel for supporting VP9.

    • At this point it's Apple and ARM. My Samsung and Apple devices already do 95% of what I want a computer to do, and a lot of the time that's more than what my laptops do.

    • It's an interesting time in CISC processors. With fabs having to spend exponential amounts of money for incremental gains in performance and power savings, a smaller company like AMD may be able to make a chip that's 90% as fast, at a much lower price, which I hope it does because it's good for customers on both sides.

      But AMD, back when it wasn't fabless, never had state-of-the-art fabs, and letting go of its fabs didn't make things better either. Intel still has the world's best fabs, and nothing AMD does comes even close. The reason the fabs are spending gobs of cash is that they are well past the point of diminishing returns, where shrinks would translate into cost reductions. They no longer do. Once Intel gets to depreciate those fabs, its margins will again improve considerably!

    • by Anonymous Coward

      AMD hasn't been able to do "90% as fast" for over a decade... hell, not even 75% within the same power envelope. What makes you think they would be able to do that now, or anytime in the near or distant future? Intel charges more, yes, because it can; AMD's failure to compete allows it to.

      • There are ultimate limits to a technology, and the closer we are to the limits, the smaller and slower the improvements come. Since Intel's biggest technological edge is their semiconductor process, as their process advantage gets smaller their performance advantage will get smaller also. When, some day in the future, process advantage entirely disappears, the manufacturer with the best architecture and the best layout optimization will be making the best CPUs.
    • by dfghjk ( 711126 )

      There are no CISC processors, only CISC instruction sets. That ignorant fanboy feud died back in the 90's. Processor architecture is not driven by instruction set.

      Nor are the "interesting times" unique to CISC. All processors have this issue unless they are uncompetitive.

      AMD hasn't been competitive in quite a while and there's nothing new there. What has changed is the inherent need for x86 processors at all. Intel's threat is from ARM, not AMD.

    • It's an interesting time in CISC processors. With fabs having to spend exponential amounts of money for incremental gains in performance and power savings, a smaller company like AMD may be able to make a chip that's 90% as fast, at a much lower price, which I hope it does because it's good for customers on both sides.

      It would be nice if AMD would catch up. Their biggest problem currently is their massive debt.

      • We are stuck with the Intel standard because no other company can afford to spend a billion dollars a year on R&D and building new fabs. Sparc, MIPS, PowerPC, PA-RISC etc. all were good ideas at the time, but there wasn't the money to keep improving them to keep pace with Intel. ARM looks to be Intel's only viable competitor at this point. As others have pointed out, the RISC/CISC argument is meaningless at this point.
        • by rthille ( 8526 )

          IBM is still making Power chips. I imagine they aren't cheap in the research & development department, but they do have a non-consumer focus...

  • by Anonymous Coward on Tuesday August 30, 2016 @09:26AM (#52795899)

    It's like telling me the Sun will be brighter tomorrow. Nothing in chips is an outstanding improvement anymore. It's just more claims and numbers that most people don't even care about. Who cares about Intel graphics? If you're a gamer you're not using Intel for graphics and probably never will. My Skylake was an incredible disappointment, and I could have saved a hundred or more dollars buying a Haswell and gotten almost as good performance. It's really not about the chip anymore, because OSes have improved to accommodate tablets and slower CPUs. Windows 10, Linux distributions, and OS X have all improved resource consumption and power use. It's really not an issue anymore, and Intel can only improve those numbers slightly. Any dramatic claims are not happening.

    • As I keep explaining to people on here: it is because of physics. CPUs aren't going to get faster and faster forever. Performance growth is slowing; you can see it in this 9-12% improvement over the previous generation. Of course this makes people angry at me when I tell them technological progress isn't guaranteed, because the reality is we won't be seeing things like AI or autonomous cars that depend on ever-increasing processing power.
      • There is still plenty of room for improvements with specialized design and software even if physics is limiting the improvements in general purpose cpus.
        New interconnect technology like on chip photonics, specialized hardware like artificial neurons, and new software designs to take advantage of new capabilities provide plenty of room for future growth.
        Even if hardware stays still there is plenty of improvement to be made in algorithms, coding and working with ever larger networks.

        I'm betting on A.I. and au

      • I want more cores. If every machine shipped with 8 cores today, software would find a way to use them before too long. Most higher-end Skylakes have 40% dead silicon in the form of a crappy GPU that is never used. Why not use that space for more cores, bigger caches, or virtually ANYTHING else?

        9-12% improvement per year is a giant yawn, as we saw with Skylake, and so on. Intel is mired in molasses; their prices stay high while their improvements are awesomely negligible.

        • by rthille ( 8526 )

          I'm hoping that Chinese 64-Core ARM server processor starts to make its way to the consumer space...

        • by Zan Lynx ( 87672 )

          That "crappy GPU" is more cores. Specialized cores, but even the Intel GPU is ridiculously fast for the right kind of code. Now that we're getting Vulkan and DX 12, software should be able to run GPU compute on the Intel or AMD integrated GPU while doing video on the discrete card.

          I predict a future with a lot more OpenCL code in it. I also predict a future with more idiot gamers who complain that using all of the CPU cores plus the integrated GPU ruins their 4.6 GHz overclocks.

    • Do I give your post recognition for being mostly on point, or do I completely freak out at another example of one of those very weird uses of "anymore"? Sorry, it's too jarring. Stop that, you're doing it wrong.

      "Anymore" is only interchangeable with "nowadays" SOMETIMES, not all the time.

    • by jemmyw ( 624065 )
      The Sun will be brighter tomorrow. On average, anyway; it gains 1% in luminosity every 100 million years.
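      For the pedants: that rate is real but utterly negligible day to day. A quick back-of-envelope check, treating the 1%-per-100-million-years figure as a simple linear rate:

```python
# Daily solar brightening implied by ~1% luminosity gain per 100 million years,
# treated as a linear rate (a fine approximation at this timescale).
gain_per_100myr = 0.01
days_per_100myr = 100e6 * 365.25
gain_per_day = gain_per_100myr / days_per_100myr

print(f"fractional brightening per day: {gain_per_day:.2e}")  # ~2.74e-13
```

      So tomorrow's Sun is brighter by roughly three parts in ten trillion, which no instrument on your desk will notice.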
  • by Theovon ( 109752 ) on Tuesday August 30, 2016 @09:27AM (#52795911)

    I'm sure the graphics and video playback specs are important, but I'd like to know what changes they've made architecturally in the processor core. Maybe I missed it, but this article seems light on those details.

    • by Freedom Bug ( 86180 ) on Tuesday August 30, 2016 @09:37AM (#52795963) Homepage

      According to Anandtech, there are no core architectural improvements, the IPC is the same as Skylake. Clocks per watt is substantially improved, though.

      • by Theovon ( 109752 )

        Huh. All this time, I thought Intel was touting this as being predominantly about architectural improvements while staying on the same process. Obviously, they have improved their process, but this seems like a departure from what I'd read about (or assumed?) previously.

        • Nope. Officially it's now Process-Architecture-Optimization, but "tick-tack-tock" is what some people are calling it, the "tack" having been tacked on to allow selling refreshes of processors with the same architecture and process while giving the impression of meaningful progress.

    • The summary specifically identifies Speed Shift technology as the big change. It will make no difference when gaming or running a server, but laptops and other low-power devices should get a big lift. Waking up faster implies the CPU can drop into sleep (or a low-frequency mode) more often, so total power consumption is reduced. And for those who did not click through to the article: the delay was previously ~95ms and has been reduced to ~15ms.
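      A toy model makes the intuition concrete. Everything here is invented for illustration (the burst lengths, wake rate, and the assumption that the CPU averages half speed while ramping are ours, not Intel's methodology), but it shows why cutting the ramp from ~95ms to ~15ms matters for bursty workloads:

```python
# Toy model: throughput of a bursty workload vs. frequency-ramp latency.
# Assumptions (illustrative only): the CPU wakes several times per second,
# each burst lasts burst_ms, and it averages half speed while ramping up.
def effective_work(burst_ms, ramp_ms, wakeups_per_s):
    """Work units completed per second (1 unit = 1 ms at full frequency)."""
    busy_ms = wakeups_per_s * burst_ms
    ramp_ms_total = wakeups_per_s * min(ramp_ms, burst_ms)
    full_speed_ms = busy_ms - ramp_ms_total
    return full_speed_ms + 0.5 * ramp_ms_total  # half credit while ramping

old = effective_work(burst_ms=100, ramp_ms=95, wakeups_per_s=5)  # ~95ms ramp
new = effective_work(burst_ms=100, ramp_ms=15, wakeups_per_s=5)  # ~15ms ramp

print(f"throughput ratio (15ms vs 95ms ramp): {new / old:.2f}x")  # 1.76x
```

      The benefit shrinks as bursts get long relative to the ramp, which matches the point above: sustained gaming or server loads won't notice, but short bursty interactive work will.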
  • by Anonymous Coward

    I'll probably build my next gaming machine with KBL to replace my IVB machine. As with my current CPU, the 60% of die area for graphics will sit idle while a Nvidia card does its job.

    It would be nice to have a graphics-less gaming version with more cores and cache.

    • Actually, how does Kaby Lake's graphics compare to the latest from either NVIDIA or AMD?
      • Actually, how does Kaby Lake's graphics compare to the latest from either NVIDIA or AMD?

        Intel : Real GPU :: potato gun : howitzer

        • by cfalcon ( 779563 )

          What I find interesting is that all the desktop Intel chips have this massively powerful coprocessor sitting right next to them. If you don't have a graphics card, then it provides mid range graphics. If you DO have a graphics card then... it just... sits there...

          But there's nothing forcing that. You COULD have an application that uses the graphics card for graphics, and the coprocessor on the chip for some other kinds of math. In practice, this would be a big hassle: it wouldn't work great on any chip

          • A lot of AMD's chips have embedded GPUs; AMD calls the whole thing an APU.
            They also integrate it fairly intelligently, with direct access to shared resources and shit. That's their whole HSA push.

            With DX12 and Vulkan games should in theory be able to access all GPUs and use them opportunistically, across discrete/embedded and even across vendors. The most common use now is to use the discrete GPU as your GPU and use the embedded GPU to encode video. If Nvidia hadn't locked down hardware accelerated PhysX to thei

    • You may be served by AMD Zen: 8 cores, 16 threads, Broadwell-class IPC, on a 14 or 16nm process. Probably much, much cheaper than the 6- and 8-core Intel options. No iGPU.
    • Valid point. Gamers looking to win penis size contests at LAN parties will be buying GTX 1080 cards and replacing them with the latest and greatest in two years, so on-chip graphics are irrelevant and unnecessary for them. For gaming, Kaby Lake doesn't show any real improvement in benchmarks over Skylake (at the same clock speed), although it may be easier to overclock. I'm probably going to wait, because I don't need to win any dick size contests in the near future.
  • Kaby Lake (Score:3, Interesting)

    by John Smith ( 4340437 ) on Tuesday August 30, 2016 @09:39AM (#52795977)
    Is still a Skylake refresh: slightly tweaked GPU (mostly software, I suspect), a slight clock boost, and a new chipset. My expectations for IPC increases are 0%, or maybe 3% if they bothered to create a new wafer. Trust me, Kaby Lake will underwhelm.
    • I still have no reason to leave my overclocked i7 2600k.

    • Is still a Skylake refresh: slightly tweaked GPU (mostly software, I suspect), a slight clock boost, and a new chipset. My expectations for IPC increases are 0%, or maybe 3% if they bothered to create a new wafer. Trust me, Kaby Lake will underwhelm.

      There are no IPC changes, because the architecture is the same. They can get 5-10% higher frequencies in the same power envelope, but MHz for MHz it is identical to Skylake on the CPU side.

  • Time to remove the word Core from the processor name. It adds nothing!

  • HotHardware? Seriously, if the posting is from them (and it is), always link Anandtech and TechReport as well when available. http://www.anandtech.com/show/... [anandtech.com] http://techreport.com/review/3... [techreport.com]
  • In the past, there has been a shift from outboard hardware and coprocessors into the CPU, then from single-core to multi-core. I envisage offloading the main CPUs as much as possible: having lightweight RISC cores (e.g. what you find in a cheap smartphone) handle menial OS duties, possibly even much of what a kernel traditionally does, plus I/O and sound, and moving the GUI and much of the rendering (e.g. what OS X's Quartz 'display PDF' layer did, and things like window management) to a RISC core on the GPU, and so on
