Intel's Just Launched 8th Gen 'Coffee Lake' Processors Bring the Heat To AMD's Ryzen

bigwophh writes: The upheaval of the high-end desktop processor segment continues today with the official release of Intel's latest Coffee Lake-based 8th Generation Core processors. The flagship in the new lineup is the Core i7-8700K. It is a 6C/12T beast, with a base clock of 3.7GHz, a boost clock of 4.7GHz, and 12MB of Intel Smart Cache. The Core i5-8400 features the same physical die, but has only 9MB of Smart Cache, no Hyper-Threading, and base and boost clocks of 2.8GHz and 4GHz, respectively. The entire line-up features more cores, support for faster memory speeds, and leverages a fresh platform that's been tweaked for more robust power delivery and, ultimately, more performance. The Core i7-8700K proved to be an excellent performer, besting every other processor in single-threaded workloads and competing favorably with 8C/16T Ryzen 7 processors. The affordably-priced 6-core Core i5-8400 even managed to pull ahead of the quad-core Core i7-7700K in some tests. Overall, performance is strong, especially for games, and the processors seem to be solid values in their segment.


  • More more more! (Score:5, Insightful)

    by Chris Katko ( 2923353 ) on Thursday October 05, 2017 @07:32PM (#55318609)

    More cores! More RAM! More performance! ... and more cost.

    Oh, and fewer PCIe lanes while we're at it. And let me guess, no NVMe because us plebeians don't deserve it.

    • Re:More more more! (Score:4, Informative)

      by thegreatbob ( 693104 ) on Thursday October 05, 2017 @09:53PM (#55319099) Journal
      Currently running crusty old X79 stuff, a PCIe -> M.2 adapter, running a Samsung 960 Evo 250GB. Pretty sure NVMe just implies a standardized controller interface stitched to PCIe; I've been under the impression that software support is the main issue with it, as it's basically just another PCIe card as far as the hardware is concerned. I see it suggested on the internet (probably old forum posts) that X79 stuff should not be able to use it as a boot device, but my system begs to differ.

      The piddly PCIe provisions are a shame though... no improvement (in lane count) whatsoever since they pulled the controllers onto the CPU die (LGA1156, Nehalem). Note that the addition of each lane requires no less than two additional pins on the socket, so they'd have to re-purpose some pins to do it, and there aren't really a lot to spare. I know there were a fair number (20+) on the 1155 that weren't marked RSVD or anything else, but I'm having some difficulty finding data on 1151. From the images I have found, it appears that practically every pin is connected to something, and fewer than 20 RSVD pins remain at all.

      Site I'm referencing [eteknix.com]

      It looks like they ate about a dozen RSVD pins for more power...

      Perhaps the bigger nuisance is that Coffee Lake breaks compatibility with the 100/200 series chipset motherboards.
      • Currently running crusty old X79 stuff, a PCIe -> M.2 adapter, running a Samsung 960 Evo 250GB. Pretty sure NVMe just implies a standardized controller interface stitched to PCIe; I've been under the impression that software support is the main issue with it, as it's basically just another PCIe card as far as the hardware is concerned. I see it suggested on the internet (probably old forum posts) that X79 stuff should not be able to use it as a boot device, but my system begs to differ.

        The piddly PCIe provisions are a shame though... no improvement (in lane count) whatsoever since they pulled the controllers onto the CPU die (LGA1156, Nehalem).

        Prove that the PCIe lanes are being maxed out for gaming, daily computing, video editing, etc. and then I'll care. Yes, PCIe lanes matter for specific applications but the vast majority of gaming and higher end enthusiast systems are not maxing out the existing PCIe 3.0 lanes and DMI 3.0 bandwidth.

        Current benchmarks show very little difference between SATA SSDs and NVMe drives in boot times and gaming performance. The main difference is in transferring large files and loading large files into memory (i.e. video editing).

        • That's the point, I suppose, the x16 PCIe connections come nowhere near being maxed out... I've seen plenty of numbers suggesting that 3.0 x8 is more than adequate for most people's use. The bigger concern is with everything connected to the PCH. If you have two gigabit network interfaces, an NVMe boot drive, and a few SATA disks, one can at least come close to maxing out the ~4GB/s available bandwidth on the DMI connection. So you're absolutely correct, the vast majority of folks do no
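As a back-of-the-envelope sketch of the oversubscription point above: the device mix and throughput figures below are hypothetical round numbers, not measurements, but they show how a PCH's worth of peripherals can exceed the DMI 3.0 ceiling.

```python
# Back-of-the-envelope: can PCH-attached devices oversubscribe DMI 3.0?
# DMI 3.0 is electrically a PCIe 3.0 x4 link: 4 lanes * 8 GT/s with
# 128b/130b encoding, i.e. roughly 3.94 GB/s each direction.
DMI3_GBPS = 4 * 8 * (128 / 130) / 8  # GB/s

# Hypothetical device mix streaming at theoretical peak simultaneously
# (throughput figures are illustrative ballparks, not benchmarks).
devices_gbps = {
    "gigabit NIC #1": 0.125,  # 1 Gb/s = 0.125 GB/s
    "gigabit NIC #2": 0.125,
    "NVMe boot drive": 3.2,   # consumer NVMe sequential read, ballpark
    "SATA SSD #1": 0.55,      # ~SATA III ceiling after overhead
    "SATA SSD #2": 0.55,
}

total = sum(devices_gbps.values())
print(f"DMI 3.0 ceiling ~{DMI3_GBPS:.2f} GB/s, aggregate device peak {total:.2f} GB/s")
print("oversubscribed" if total > DMI3_GBPS else "headroom remains")
```

With these particular numbers the aggregate peak (~4.55 GB/s) exceeds the link ceiling, which is the commenter's "come close to maxing out" scenario; real workloads rarely peak on every device at once.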
      • Some M.2 SSDs have a legacy option ROM that can be used to boot. Other boards may have a UEFI driver available, but it's not going to work out of the box on every system and every drive on the X79 platform.

        • In all likelihood that's what I'm dealing with. I ran into precisely zero hangups in the process; just had to install the driver for it (Windows 8). Perhaps Samsung made it too simple, and I never actually had to fight for it, leaving me with the perception that it's easier than it would normally be.
  • by shellster_dude ( 1261444 ) on Thursday October 05, 2017 @07:37PM (#55318621)
    I openly admit that I'm a fan of AMD. However, I think it's reasonable to ask why Intel CPUs have not seen any large jump in performance or features until they had to, due to AMD competition, again. The R&D time and cost on these new chips is multiple years. That means that Intel can't just roll out a chip in response to AMD, unless they either have good corporate intelligence and knew one to two years ago that AMD was coming back in a big way, or the much more likely answer that they've been sitting on new features and performance because they wanted to milk the previous generation for all it was worth. I find the latter to be reprehensible, which is why I will be building a new AMD system, even if it doesn't give me quite the top performance I might get from an Intel chip, because I appreciate them driving competition again (P.S. my last system was Intel because AMD wasn't really competing when I built it).
    • by Jeremi ( 14640 )

      It's Intel's R&D investment; they can sell it or sit on it as they see fit. They are a for-profit corporation, not a public service, and are under no obligation to anyone to sell their technology on any set schedule.

      That said, Intel-vs-AMD is a good example of the value of competition to improve products for the consumer. Without AMD on their heels, God only knows how long Intel would have coasted.

      • by oic0 ( 1864384 ) on Thursday October 05, 2017 @08:14PM (#55318769)
        Yes, but we are under no obligation to buy from a company that behaves in a manner we don't agree with. I'll always buy AMD when it's a valid option because of all of Intel's past behavior. Sandbagging is just a drop in the bucket.
        • Re: (Score:3, Interesting)

          by Anonymous Coward

          Unless you are concerned about data security...
          In which case both Intel and AMD should be viewed with some level of suspicion (as well as AMD/Nvidia video cards), as all of the above hardware is using signed firmware and user-inaccessible DRM/NSA processors that could be spying on you, either now or in the near future when they finally feel penetration is deep enough to turn them on.

          Most people scoff at these concerns, but an example that hasn't been brought up often enough: Tor and other privacy networks are

          • Open Power, FTW. Costs an arm and a leg, but you can build a powerful system that's open from the bottom up.

        • Nobody is under an obligation to buy any CPU from any company. If you think AMD is lily-white then you are naive. Personally, I buy what works best for my use case at the time of purchase. I try to leave imaginary ethical stuff aside. If AMD had an edge then they would 'sandbag' too. It is sound corporate strategy.

        • This is how smart businesses operate.

          I remember years ago when the GeForce was brand new. As in: the original was fresh to market by a year or so. Those were the days of the Coppermine 333 which you could clock to 500 without breaking a sweat. Quake 3 Arena absolutely *flew*. It was glorious.

          Anyway, the GeForce. Another company, I believe it was Voodoo at the time, came out with their response to the GeForce which ran a tad faster... lo and behold, Nvidia released a driver update which significantly bo

      • by epine ( 68316 )

        It's Intel's R&D investment; they can sell it or sit on it as they see fit. They are a for-profit corporation, not a public service, and are under no obligation to anyone to sell their technology on any set schedule.

        If we replace "Intel R&D" with "Mylan", does your comment still stand? If not, why not?

        I'm almost libertarian enough to agree with you if the company in question operates on trade secrets and claims no patent protection.

        Patent protection, however, is a two-way street: you're granted a r

        • by gweihir ( 88907 )

          Patents stopped being for the public good a long time ago. They serve as a mechanism for intellectual theft from anybody else now, and nothing else. The same, incidentally, goes for copyright. I bet most people here do not know that copyright was created to prevent publishers from ripping off artists by publishing their stuff without permission or payment.

          • At least they expire relatively quickly. MP3 is already public domain, as is most of MPEG2.
            • by jabuzz ( 182671 )

              My calculations show that MPEG2 patents expire 18th February 2018, which is a mere 135 days from now.
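The 135-day figure is plain date arithmetic; it checks out if we assume the comment was written on 6 October 2017 (the thread is dated the evening of 5 October, US time):

```python
from datetime import date

# Assumed posting date: 6 October 2017 (the thread is dated 5 October,
# US evening, so this is a guess at the commenter's "now").
expiry = date(2018, 2, 18)  # claimed MPEG2 patent expiry date
posted = date(2017, 10, 6)

days_left = (expiry - posted).days
print(days_left)  # → 135
```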

        • Intel doesn't patent their process. They protect it via trade secrets instead, and it makes sense to do so because of how long it takes to spin up a fab.
      • I don't disagree. Intel can do what they want. It's their product. They don't owe me anything, and I don't owe them any of my money. I find their business practices scummy (though perfectly legal and within their rights), which is exactly why I'm voting with my dollar and going elsewhere.
    • AMD v Intel (Score:4, Interesting)

      by harvey the nerd ( 582806 ) on Thursday October 05, 2017 @07:49PM (#55318683)
      Intel the dairy farmer, milking the world.

      One wonders whether we would still be running 286s if there were no AMD. It has been AMD that has made Intel actually compete in x86 space for 35+ yrs.
      • Re:AMD v Intel (Score:5, Interesting)

        by gweihir ( 88907 ) on Thursday October 05, 2017 @10:10PM (#55319153)

        There is a reason the current instruction set is called AMD64 and not Intel64. Intel actually licenses it from AMD, because they failed to come up with anything competitive. AMD cares more about engineering and Intel more about profits. Now, if only MS would get a credible competitor, maybe this atrocity going on with Windows would finally stop.

        • Intel and AMD have had a cross-licensing agreement in place since the 1970s... Intel gives WAY more in that exchange than AMD ever does. It cracks me up when people trot out AMD64 like it means anything. AMD's entire worth is a rounding error compared to Intel. Hell, Intel spent as much in R&D as AMD's entire worth for decades. AMD's market capitalization is at $12 billion, Intel's is $183 billion. They are not peers in any way, Intel utterly dominates AMD in technology and processes. AMD doesn't even have t
          • mmm fanboi trolling Mail me your tears from all the butthurt you feel that ryzen is eating Intel's lunch. mmm
            • For most applications, without overclocking, an i7-7700K beats any Ryzen by a few percent. Intel's chip overclocks far better than AMD's. The i7-8700K is a few percent better than the i7-7700K. I like that AMD is back in the running, but they're not in the lead.
          • by gweihir ( 88907 )

            And yet, AMD drives x86 CPU technology forward and not Intel. Apparently you missed that little detail, probably because you mistake money for skill.

      • Pick up a copy of the book "Inside The AS/400". The design of the hardware is fascinating and years beyond this PC garbage we've been using for decades.

      • Intel would be producing nothing of any noteworthy speed or performance if not for DEC's intellectual property. In the 90s DEC had the speed crown. Intel "borrowed" their IP, got sued and lost. But in the meantime DEC was gobbled up by Compaq and Compaq was much more interested in Intel as a partner than producing VMS or Unix servers.
        • DEC got speed by pumping more electricity into a CPU chip than anyone else at the time dared to. Yes, their design was excellent, but that wasn't the breakthrough factor.
      • Intel the dairy farmer, milking the world.

        How appropriate. You uh... make CPUs like a cow.

        > People fall at my feet when they see me coming!

    • by aliquis ( 678370 )

      I think it's reasonable to ask why Intel CPUs have not seen any large jump in performance or features until they had to, due to AMD competition, again.

      You've answered that yourself.
      Because of lacking AMD competition they didn't have to.

      The R&D time and cost on these new chips is multiple years. That means that Intel can't just roll out a chip in response to AMD, unless they either have good corporate intelligence and knew one to two years ago that AMD was coming back in a big way

      No problem.
      Intel have improved their cores and production the whole time and have had quad-core desktop chips since Core 2. They launched six-core Sandy Bridge-E processors back in 2011 and have had Xeons with many cores too for a very long time.
      So it's been around. Intel just haven't put it into the mainstream market. The six-core i7 5820 from 2014 didn't really cost more than the i7 4790K though ($10-$20?) so even back t

    • by flatulus ( 260854 ) on Thursday October 05, 2017 @09:01PM (#55318929)
      Intel's schedule for Coffee Lake may have been moved up a bit due to Ryzen, but this is not a "rabbit out of a hat" move for Intel.

      See here http://marketrealist.com/2017/... [marketrealist.com] which says "There are rumors that Intel may launch its HEDT (high-end desktop) processors and chipsets and its Coffee Lake microarchitecture a few months earlier than anticipated in response to AMD’s Ryzen 5 and 7 processors. "

      That web page is dated April 28, 2017.
      Here's another article: https://www.pcworld.com/articl... [pcworld.com] which shows Coffee Lake in 2H17. This article is dated Feb 13, 2017.

      So Intel has been executing according to plan since the first of this calendar year.

    • by gweihir ( 88907 )

      You do not need to ask. Intel had this design basically ready, except for the optimization of the last production steps. This means for years, Intel has screwed over its customers with a sub-standard design at vastly inflated prices. The funny thing is that many of these screwed over customers think Intel can do no wrong.

      Sadly, customers that buy from the largest vendor only and do not even consider the competition are the death of competition and quality.

    • by tlhIngan ( 30335 )

      I openly admit that I'm a fan of AMD. However, I think it's reasonable to ask why Intel CPUs have not seen any large jump in performance or features until they had to, due to AMD competition, again. The R&D time and cost on these new chips is multiple years. That means that Intel can't just roll out a chip in response to AMD, unless they either have good corporate intelligence and knew one to two years ago that AMD was coming back in a big way, or the much more likely answer that they've been sitting

    • My next rig will be AMD from stem to stern no matter what Intel is shipping.
    • I openly admit that I'm a fan of AMD. However, I think it's reasonable to ask why Intel CPUs have not seen any large jump in performance or features until they had to, due to AMD competition, again.

      Your confusion is this:

      There has not been a large jump in performance for Intel parts. These chips are just higher core count, and you pay more for those extra cores.

      You will not see a big boost in performance from Intel until they figure out 10nm, which is 2+ years late now, and things still aren't looking good. The last word from Intel was "Q4 2017" but that was over a year ago. Intel is in big trouble. They beat everyone to 14nm/16nm by a large margin but then got stuck chasing a 3D transistor fantasy

    • Everything I've read in a couple reviews (I know, citation needed) indicates that they basically just bolted on 2 more cores. It doesn't appear there's much new. The extra heat is, apparently, above what AMD shows (though Intel runs at a higher clock speed).
      I could argue that the world doesn't need more computing power on each desktop, but "640K is enough!" so...

      Tangentially: What I'd like to see are benchmarks run on chipsets that keep the performance numbers pegged at a certain rate and then measure all t

      • Exactly this. The single-core performance isn't that much improved, which has been the story since Sandy Bridge. The performance improvements come almost entirely from the fact that they have more cores. Keep in mind they've already been building chips like this in the Xeon line, so really it's just a matter of taking the Xeons they were going to sell, disabling some features, and calling them i7s and i5s. And then taking what was going to be the i5 and making it the i3. There really isn't anything that

    • I've been building nothing but AMD for a number of years. They've always run better than the equivalently priced Intel. We won't even get into the fact that AMD drives the x86 technology these days, not Intel.
      • by msi ( 641841 )
        If you only run AMD how do you know how they compare to an Intel solution?
  • Submitter likely meant the base clock, but the base frequency isn't even very relevant, since the all-core boost is higher and power saving brings it lower anyway.

    The i5 8400 has an all-core boost of 3.8 GHz and a single-core boost of 4.0 GHz, so the 2.8 base isn't as bad as it seems.

    It's a great gamer's chip.

  • Meh (Score:5, Funny)

    by Aighearach ( 97333 ) on Thursday October 05, 2017 @07:42PM (#55318651) Homepage

    I don't think "bringing the heat" is going to scare the competition very much in this market. ;)

    • by Z80a ( 971949 )

      Of course it will scare AMD, you know, when they put a machine running the new chip overclocked next to the AMD's headquarters.

  • Need more PCIe lanes (DMI is oversubscribed) + needing a new chipset is a joke with no PCIe boost.

    AMD has more PCIe and USB on the CPU die.

    • DMI 3.0 is trash tier. Fucking 4 PCIe 3.0 links for all that I/O (including USB)? bottleNECK!

    • by aliquis ( 678370 )

      Ryzen has 24 lanes of which 16x will/may be used for graphics and 4x for the chipset and then another 4x from the CPU and Intel have "20" of which 4 (DMI 3 8 GT/s) is for the chipset right?

      So yeah, a lead of 4 for Ryzen.

      But if you NEED more PCI-express then Intel have HEDT and AMD have Threadripper. But sure, more would be nice, and Zen+, people speculate, will have PCI-express 4.0, which is of course nice. I feel you're right pointing out this flaw, and it is one, but I don't really know what I should be doing abou
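The lane tally above can be sketched as follows; the per-slot allocations are the parent comment's numbers, and actual board layouts vary:

```python
# Tallying CPU-attached PCIe 3.0 lanes as described in the parent comment
# (a sketch of the claimed allocations, not a board-level spec).
ryzen_lanes = {"graphics (x16)": 16, "chipset link (x4)": 4, "NVMe from CPU (x4)": 4}
coffee_lake_lanes = {"graphics (x16)": 16, "DMI 3.0 chipset link (x4)": 4}

r = sum(ryzen_lanes.values())
i = sum(coffee_lake_lanes.values())
print(f"Ryzen: {r} lanes, Coffee Lake: {i} lanes, Ryzen lead: {r - i}")  # → 24, 20, 4
```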

    • by gweihir ( 88907 )

      AMD always had vastly superior integration, e.g. the memory controller on the CPU half a decade before Intel.

    • by aliquis ( 678370 )

      Was it you who posted as AC about SATA and USB from the CPU?
      At least it's mentioned here.

      Can you link some information?

  • Forget Ryzen. I'd like to see one of the latest CPUs benchmarked against a Core i7-3960X. 6C/12T, 3.3GHz base clock, 15MB of cache, fully buzzword-compliant. Oh, and it's almost six years old.

    Honestly, it's hard to get excited about "bringing the heat" when we're talking about single-digit percentage gains. There hasn't been a breakthrough in either clock speed or IPC in years, and even core counts have remained pretty much the same.

    • It's about 50% faster in both single and multi-threaded benchmarks with only 20% higher clock speed.
      It's 95W instead of 130W TDP. That 95W TDP includes a GPU as well, the 3960X doesn't have one.
      And it's being released at less than half the price the 3960X was. (The 3960X had an RRP of $1059; the 8700K has an RRP of $359.)

      50% faster, 60% cheaper with 30% less power isn't single-digit. But then that is 6 years.
      You're also comparing an "extreme" edition with k-series CPU. Even though the 3960X had quad channel memory (so desp
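Recomputing the rough percentages from the quoted launch RRPs and rated TDPs: the exact figures come out to about 66% cheaper and 27% lower TDP, so the "60% cheaper with 30% less power" above is loose rounding.

```python
# Relative differences from the figures quoted in the parent comment.
price_3960x, price_8700k = 1059, 359  # launch RRP, USD
tdp_3960x, tdp_8700k = 130, 95        # rated TDP, watts

cheaper = (price_3960x - price_8700k) / price_3960x
lower_tdp = (tdp_3960x - tdp_8700k) / tdp_3960x
print(f"{cheaper:.0%} cheaper, {lower_tdp:.0%} lower TDP")  # → 66% cheaper, 27% lower TDP
```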

      • It's about 50% faster in both single...... threaded benchmarks with only 20% higher clock speed.

        How did they do that? Serious question.

        • More instructions per cycle?
          Through the different Core generations they've been refining/tweaking the number of ports and the execution units behind them.

          Don't know what they've done with Coffee Lake, but Skylake/Kaby Lake is here:
          https://en.wikichip.org/wiki/i... [wikichip.org]

          • Oh that's really great, thanks. I didn't realize how much complexity there is in there. It's also fascinating that most of the complexity already there had the purpose of (and was successful at) making the chip go faster....and yet it can be optimized more.

            btw SGX looks like a security nightmare, but MPX looks like it could be useful.
        • One element is better branch prediction.

          Here's a paper detailing the branch prediction improvements of the last Intel platforms and their impact on scripting language execution: https://hal.inria.fr/hal-01100647/document [inria.fr]

          Our measures clearly show that, on Nehalem,
          on most benchmarks, 12 to 16 MPKI are encountered, that is about 240 to 320 cycles are lost every 1 kiloinstructions. On the next processor generation Sandy Bridge, the misprediction rate is much lower: generally about 4 to 8 MPKI on Javascript applications for instance, i.e. decreasing the global penalty to 80 to 160 cycles every Kiloinstructions. On the most recent processor generation Haswell, the misprediction rate further drops to 0.5 to 2 MPKI in most cases, that is a loss of 10 to 40 cycles every kiloinstructions.

          That's an order of magnitude difference. In their Python benchmark the Nehalem has an average of ~1.5 instructions per clock (IPC) while Haswell has an average of ~2.4 IPC, roughly a 60% increase over Nehalem. The branch predictor is likely a big factor in this.

          Of course scripting languages
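The quoted figures imply a penalty of roughly 20 cycles per mispredicted branch (e.g. 12 MPKI → 240 cycles lost per kiloinstruction). A quick sketch reproducing that arithmetic:

```python
# Cycles lost per kiloinstruction = MPKI * penalty per misprediction.
# The quote's pairs (12-16 MPKI -> 240-320 cycles) imply ~20 cycles/miss;
# that penalty is inferred from the numbers, not stated in the paper text above.
PENALTY = 20  # cycles per mispredicted branch (inferred)

mpki_ranges = {
    "Nehalem": (12, 16),
    "Sandy Bridge": (4, 8),
    "Haswell": (0.5, 2),
}
for uarch, (lo, hi) in mpki_ranges.items():
    print(f"{uarch}: {lo * PENALTY:g}-{hi * PENALTY:g} cycles lost per kiloinstruction")
```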

      • by bongey ( 974911 )
        95W TDP, wink wink: add 65W. 159W in a torture loop at the rail, and it always consumes more power than an 1800X OC except when idle http://www.tomshardware.com/re... [tomshardware.com].
        • It appears to consume 20% more power than an 1800x, while producing 50% higher benchmark scores. Sounds like better performance per watt to me.

          But what does anything I said before have to do with an AMD CPU? All the numbers I put up are comparing two Intel CPUs.

    • by gweihir ( 88907 )

      There are no major performance gains to be expected. This technology is now approaching maturity, which means it will slowly get cheaper and consume less power, but it will not get much faster anymore. Maybe we will eventually see something 2x as fast for desktop loads, but that could take half a century or longer. There is still significant potential in writing software better, in particular in making it more parallel, but even that is limited and it is hard to do.

    • My overclocked i7-3820 (LGA2011-3000 CPUs are Sandy Bridge, if anyone ever runs into that confusion), which clocks all cores to 4.1GHz (could do 4.3 on all cores, but it gets a tad warm, so I keep it to one core @ 4.3), seems to keep up with modern workloads. The things to take away from any results are not just the obvious gains in efficiency and performance, but also the radical decline in the pace of improvements over the last ~7 years. I have a preference to refit my stuff whenever the cost/performance ratio (subje
  • Summary notes that Hyper-Threading is removed. What is the point? Did that technology become a liability?
    • The summary does not use the term 'removed.' The Core i5, to which the 'no Hyper-Threading' comment applies, never had Hyper-Threading to start with. The i7 models continue to have Hyper-Threading.
      • One should also note that laptop parts muddy that a bit (both i5 and i7 come as 2C/4T, and i7 also as 4C/8T), though it's somewhat outside the context of this discussion.
    • Still there on the i7; the i5 now has six physical cores, up from four physical cores. IIRC I saw today that at least one of the i3 processors will now have four physical cores, up from two cores plus two hyperthreaded ones, so I guess it's what the i5 used to be thought of as being. But yeah, where are the four core + four hyperthread CPUs? I'm guessing they get introduced later, and the same with two core/two hyperthread lower-end i3s. How will Intel brand the four core + four hyperthread CPUs,
  • Maybe I missed it, but I didn't see the price of these guys. Any information out there? And what about the cost of motherboards?
  • As long as Intel continues to try and push the VROC scam, I know I'll be taking my business to AMD.

    AMD also now offers RAID-0 for NVMe, with similar performance - and without the extra cost of a (still non-existent) "upgrade key".

    Even better, M.2 sockets on X399 motherboards are running CPU lanes, where X299 motherboard-mounted M.2s are DMI and actually require an add-on card for full performance.

  • Back in the 386-486 days it mattered. Now, except for benchmarks & high-end 3D games, does it really matter?
    • Slight, nit-picky, correction - games with a lot of AI processing. Not necessarily 3D.
    • Photographic sharpening by deconvolution. A typical run might take 20 minutes, then you look at the result to see if it's any better (it's usually not). Try a new set of parameters, rinse and repeat.
  • I could swear that HT was a feature of all their chips since the last generation of Pentiums. When did that stop? Or am I wrong about all chips having it?
    • by GuB-42 ( 2483988 )

      Not all Intel CPUs have hyperthreading.
      Usually, i5s don't and i7s do. That is often the main difference between these two product lines.
      Lower-end CPUs usually have it to make up for fewer physical cores.

      • I thought they did, but I never have Intel money so I guess I never looked hard enough. Thanks for the answer.
