Intel Hardware Technology

Intel Launches 9th Generation Core Processors; Core i9-9900K Benchmarked (hothardware.com) 130

MojoKid writes: Intel lifted the embargo veil today on performance results for its new Core i9-9900K 9th Gen 8-core processor. Intel claims the chip is "the best CPU for gaming" due to its high clock speeds and monolithic 8-core/16-thread design that has beefier cache memory (now 16MB). The chip also has 16-lanes of on-chip PCIe connectivity, official support for dual-channel memory up to DDR4-2666, and a 95 watt TDP. Intel also introduced two other 9th Gen chips today. Intel's Core i7-9700K is also an 8-core processor, but lacks HyperThreading, is clocked slightly lower, and has 4MB of smart cache disabled (12MB total). The Core i5-9600K takes things down to 6 cores / 6 threads, with a higher base clock, but lower boost clock and only 9MB of smart cache. In benchmark testing, the high-end Core i9-9900K's combination of Intel's latest microarchitecture and boost frequencies of up to 5GHz resulted in the best single-threaded performance seen from a desktop processor to date. The chip's 8-cores and 16-threads, larger cache, and higher clocks also resulted in some excellent multi-threaded scores that came close to catching some of Intel's many-core Core X HEDT processors in a few tests. The Core i9-9900K is a very fast processor, but it is also priced as such at $488 in 1KU quantities. That makes it about $185 to $225 pricier than AMD's Ryzen 7 2700X, which is currently selling for about $304 and performs within 3% to 12% of Intel's 8-core chip, depending on workload type.
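
For a rough sense of the trade-off being described, here is some back-of-the-envelope arithmetic sketched in C, using only the prices and the 3% to 12% performance spread quoted in the summary (street prices vary, so treat it as illustrative):

/* Back-of-the-envelope comparison using the figures quoted in the summary above:
 * $488 (1KU) for the i9-9900K vs ~$304 street for the Ryzen 7 2700X, with the
 * 2700X landing within 3% to 12% of the 9900K depending on workload. */
#include <stdio.h>

int main(void) {
    const double price_intel = 488.0;        /* Core i9-9900K, 1KU pricing */
    const double price_amd   = 304.0;        /* Ryzen 7 2700X, approximate street price */
    const double gaps[]      = {0.03, 0.12}; /* AMD is 3% to 12% behind */

    printf("Dollar premium for the 9900K: $%.0f\n", price_intel - price_amd);

    for (int i = 0; i < 2; i++) {
        /* "Within X%" means 2700X perf = (1 - X) * 9900K perf,
         * so the 9900K is 1/(1-X) - 1 faster in relative terms. */
        double speedup = 1.0 / (1.0 - gaps[i]) - 1.0;
        printf("AMD within %2.0f%% -> Intel roughly %4.1f%% faster\n",
               gaps[i] * 100.0, speedup * 100.0);
    }
    return 0;
}
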
This discussion has been archived. No new comments can be posted.

  • What security? (Score:5, Interesting)

    by sinij ( 911942 ) on Friday October 19, 2018 @07:59PM (#57507300)

    which is currently selling for about $304 and performs within 3% to 12% of Intel's 8-core chip, depending on workload type.

    Is it really going to be any faster after inevitable microcode and OS patching to address gross security flaws?

    • If by "gross security flaws" you mean the techniques that require that you are already infected by malware in order to function, then - no, probably not. If, OTOH, you maintain clean systems and do not install said microcode and OS patches, then rock and roll!!
      I, for one, think that the whole Meltdown/Spectre nonsense is a hyper over-reaction to a most obscure vulnerability that - again!! - requires that your machine already be infected. The whole JS scripting shit is nonsense also, BTW.
  • Wow, for a flagship chip, with the i7-9700K lacking HyperThreading Intel must *finally* be starting to be concerned about security. Guess performance isn't everything when you can get p0wned. =P

    • by Tough Love ( 215404 ) on Friday October 19, 2018 @09:36PM (#57507574)

      i7 is the new i5. i9 is the new i7. Pretty cynical of Intel. BTW, hyperthreading only speeds up Meltdown; Meltdown will still get your passwords even without hyperthreading, it just takes a bit longer. This is because of the way cache is shared between processor cores.

      • Meltdown is as much a risk for the average joe as death by meteorite.
        The side-channel has abysmal transfer rates, can in no way be executed in a way that doesn't impact the performance of the machine, and can be easily mitigated when used in vectors that are going to affect the average person.
        Assuming worst case, where it's allowed to run while someone doesn't notice it, a program's virtual address space is *huge*.
        These attacks are real, but their real-world applicability is highly hypothetical.
        I think
    • by Anonymous Coward

      Is that a joke, or are they serious?

      So a single decent GPU can't even be fully fed if you'd also like any other type of I/O??

      The Ryzen at least has 24. And Threadripper a whopping 64!!

      If that does not outweigh their single core performance in practical applications, I'll eat my hat^WIntel CEO.

      Oh, and any reason single-core performance matters more in gaming is solely due to developers optimizing for the low-thread-count consoles and doing shitty lazy ports later. As soon as commonly used consoles start

  • Too soon (Score:5, Insightful)

    by Snotnose ( 212196 ) on Friday October 19, 2018 @08:19PM (#57507354)
    They haven't had time to fix Spectre and Meltdown, I think I'll pass.
    • Re:Too soon (Score:5, Informative)

      by iggymanz ( 596061 ) on Friday October 19, 2018 @08:36PM (#57507406)

      the list of different things going into the "Spectre" bucket keeps growing

    • Right. There are supposedly some mitigations in this Coffee Lake release, but I seriously doubt that they are real hardware mitigations, probably just microcode hacks that cost performance. I am highly skeptical that Intel had enough time to develop and qualify the fundamental cache circuitry changes they need to fix Meltdown properly, let alone change all the masks.

    • They haven't had time to fix Spectre and Meltdown, I think I'll pass.

      Incidentally, there are now some 6 different speculative execution attacks on processors. According to the test script, my brand new AMD is vulnerable to 5 of them. When I got news of this, the first thing I did was... disable the workarounds in Windows. Screw the performance hits. My enemies are the script kiddies and trojan makers of the internet, not the NSA or other well-funded organisations that actually *may* have the capability to do something *useful* with these exploits.

  • PCIe (Score:5, Interesting)

    by darkain ( 749283 ) on Friday October 19, 2018 @08:21PM (#57507358) Homepage

    "The chip also has 16-lanes of on-chip PCIe connectivity" - this actually sounds EXTREMELY low. And here I am, on a CPU with 40 lanes, and a chipset that provides another 5... in a system that is several years old. This sounds like a massive downgrade. Though, most people I guess only populate 1 slot for the GPU nowadays, and nothing else. Consumer 10gbe isn't quite there yet. Add-on sound cards have gone to the wayside (onboard audio is still shit quality in comparison, but since people only listen to low bit rate streaming MP3s anyways, I guess it doesnt matter!?) The only thing I question is the NVMe craze right now, and how this chip will be able to keep up with that, since most recent ones are usually PCIe (though some are DIMM socket now as well)

    • by pjrc ( 134994 )

      Any chip that's old by "several years" would have at best pcie2, at roughly half the bandwidth per lane of pcie3.

      Your 40 pcie2 lanes could still be considered better than only 16 pcie3 lanes, but really not by very much, certainly not enough to call the I/O capability of these newer chips "EXTREMELY low".

      • Re:PCIe (Score:5, Informative)

        by WaffleMonster ( 969671 ) on Friday October 19, 2018 @09:59PM (#57507638)

        Any chip that's old by "several years" would have at best pcie2, at roughly half the bandwidth per lane of pcie3.

        Nonsense, CPUs with 16x 3.0 lanes were available more than 5 years ago.

        https://ark.intel.com/products... [intel.com]

        There is no excuse for 16 lanes in 2018.

        • The Ryzen 2700X has 20 lanes. 1x16 and 1x4.

          My Epyc system has 64 lanes per processor and dual processors (although I understand the single-processor systems get 128 lanes because they don't use 64 of them for inter-processor communication).

      • by _merlin ( 160982 )

        My 2014 Xeon has a lot more lanes than that, and they're PCIe 3rd-generation. The chipset splits some up into PCIe 2nd-generation slots though. I currently have two Quadros in 16x 3rd-gen slots, a 2x40Gbps Ethernet NIC in an 8x 3rd-gen slot, and a SAS controller in a 4x 2nd-gen slot. The SAS controller could use an 8x 3rd-gen slot but I don't have one spare.

      • by darkain ( 749283 )

        My several year old system is indeed 40x PCIe3 straight from the CPU, with the additional 5x lanes from the north bridge being PCIe2. Nice assumption though!

      • Any chip that's old by "several years" would have at best pcie2, at roughly half the bandwidth per lane of pcie3.

        PCIe 3.0 is over 8 years old. Dual-GPU systems would consume all 16 available lanes, which is precisely why AMD upped the lane count to 20 even for its entry-level Ryzen chips. Gotta leave enough for storage.

    • "The chip also has 16-lanes of on-chip PCIe connectivity" - this actually sounds EXTREMELY low. And here I am, on a CPU with 40 lanes, and a chipset that provides another 5... in a system that is several years old. This sounds like a massive downgrade. Though, most people I guess only populate 1 slot for the GPU nowadays, and nothing else. Consumer 10gbe isn't quite there yet. Add-on sound cards have gone to the wayside (onboard audio is still shit quality in comparison, but since people only listen to low bit rate streaming MP3s anyways, I guess it doesnt matter!?) The only thing I question is the NVMe craze right now, and how this chip will be able to keep up with that, since most recent ones are usually PCIe (though some are DIMM socket now as well)

      Actually, as I understand it, the i9 has 40 platform PCIe lanes (16 CPU + 24 PCH). 16 lanes are dedicated to devices needing fast access to the CPU, like 16x/8x graphics card slots. 24 chipset lanes handle other connectivity, like M.2 slot, network interface, SATA, and other PCIe slots, etc. How the lanes are allocated is based on the motherboard design.

      The Nvidia 2080 is the first graphics card that can max out 8x PCIe 3.0 lanes, and just barely. There is only a 1% to 2% improvement when running with

      • by darkain ( 749283 )

        For reference, 10gbe is ~1GB/sec. That is sustainable on burst reads from an 8-drive SATA array. I'm currently running over 20 drives in a home server with a 10gbe link back to the networking core, and my desktop with a 10gbe link to that core as well. It is trivially easy to saturate a 10gbe link nowadays.
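
        (A back-of-the-envelope check of that claim, sketched in C; the ~150 MB/s per-drive sequential figure is an assumed typical number for spinning disks, not a measurement from this setup:)

        /* Can an 8-drive SATA array saturate a 10GbE link? Rough arithmetic only. */
        #include <stdio.h>

        int main(void) {
            const double link_gbits = 10.0;                    /* 10GbE line rate */
            const double link_bytes = link_gbits * 1e9 / 8.0;  /* = 1.25e9 B/s raw */
            const double overhead   = 0.94;                    /* rough Ethernet/IP/TCP framing loss */

            const double per_drive  = 150e6;  /* assumed sequential B/s for one SATA HDD */
            const int    drives     = 8;

            double usable_link = link_bytes * overhead;        /* ~1.17 GB/s usable */
            double array_rate  = per_drive * drives;           /* ~1.2 GB/s aggregate reads */

            printf("Usable 10GbE throughput: ~%.2f GB/s\n", usable_link / 1e9);
            printf("8-drive sequential read: ~%.2f GB/s\n", array_rate / 1e9);
            printf("Array %s the link\n",
                   array_rate >= usable_link ? "can saturate" : "cannot saturate");
            return 0;
        }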

          For reference, 10gbe is ~1GB/sec. That is sustainable on burst reads from an 8-drive SATA array. I'm currently running over 20 drives in a home server with a 10gbe link back to the networking core, and my desktop with a 10gbe link to that core as well. It is trivially easy to saturate a 10gbe link nowadays.

          If you have a RAID array and the device(s) on the other side have fast storage to handle it, then yes, you can saturate a 10gbe link. But again, you have the necessary components to make use of it. Most people who talk about wanting 10Gbps on a consumer motherboard have no idea.

          Also, would you really rely on a 10Gbps chipset built into the motherboard vs a dedicated PCIe card that can handle the network offloading? Most consumer motherboards use the CPU for network processing. Server motherboards are a

      • 24 chipset lanes handle other connectivity, like M.2 slot

        And that along with Spectre/Meltdown is why you shouldn't go Intel for systems which require fast storage I/O. You'll note that Ryzens also only dedicate 16 PCIe lanes to graphics, but have an additional 4 dedicated to NVMe.

    • by Anonymous Coward

      Threadripper offers 128 PCIe lanes and support for ECC memory.

    • Though, most people I guess only populate 1 slot for the GPU nowadays, and nothing else.

      16 lanes for GPU. If you have a second GPU, which some do, that's 32 lanes already. NVMe storage is 4 lanes, and the GbE gets at least 1 lane and possibly 4 lanes. 16 lanes is a bad joke.

      • Though, most people I guess only populate 1 slot for the GPU nowadays, and nothing else.

        16 lanes for GPU. If you have a second GPU, which some do, that's 32 lanes already. NVMe storage is 4 lanes, and the GbE gets at least 1 lane and possibly 4 lanes. 16 lanes is a bad joke.

        No, that's not how it works. If you put two GPUs in the two 16x slots, they both downgrade to 8x. Only the newly released 2080 can saturate an 8x PCIe 3.0 slot. Tests show that a 2080 will perform 1% to 2% faster in a PCIe 3.0 slot at 16x vs 8x. So, while having 32 lanes would give you a bit of a graphics boost, it's not that much. NVMe usually gets its lanes from the 24 PCH lanes. But it depends on the motherboard design.
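
        (To make the lane arithmetic being argued here concrete, a quick tally in C of a hypothetical dual-GPU build against the 16 CPU + 24 PCH lanes discussed above; the device list and lane counts are illustrative, not a real board layout:)

        #include <stdio.h>

        struct device { const char *name; int lanes; };

        int main(void) {
            const int cpu_lanes = 16, pch_lanes = 24;   /* 9900K platform split */

            /* Hypothetical wish list from this thread: two x16 GPUs, NVMe, a fast NIC. */
            const struct device wanted[] = {
                {"GPU #1",    16},
                {"GPU #2",    16},
                {"NVMe SSD",   4},
                {"10GbE NIC",  4},
            };
            const int n = sizeof wanted / sizeof wanted[0];

            int total = 0;
            for (int i = 0; i < n; i++) {
                total += wanted[i].lanes;
                printf("%-10s wants %2d lanes\n", wanted[i].name, wanted[i].lanes);
            }
            printf("Requested %d lanes vs %d CPU + %d PCH = %d available\n",
                   total, cpu_lanes, pch_lanes, cpu_lanes + pch_lanes);

            /* With only 16 CPU lanes, two GPUs bifurcate to x8/x8 off the CPU, while the
             * SSD and NIC hang off the chipset and share its uplink to the CPU. */
            printf("Dual GPUs off the CPU run at x%d/x%d\n", cpu_lanes / 2, cpu_lanes / 2);
            return 0;
        }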

  • Thank god for AMD. Intel faces stiff competition once again and still charges 50% more for a 10% faster CPU. Remember the days before good competition? The P66 was introduced at $1000 in 1k quantities back in '94, which is about $1800 now. I mean even the terrible P4s were being sold at a premium (ok using dubious means, but still).

    • Actually, i9-9900K is 90% more expensive than Ryzen 2700X. And Intel had to fiddle the gaming benchmarks [tomshardware.com] to make it look faster than it really is. These are on Intel's 14nm process, they were hoping to be on 10nm by now but that isn't happening until some time next year. Meanwhile Ryzen 2 on 7nm will be out while Coffee Lake is still shipping, oops. Ryzen 2 will probably put AMD even in IPC and ahead in GHz. Intel's last remaining bragging points gone. And Intel isn't going to catch up any time soo

      • Actually, i9-9900K is 90% more expensive than Ryzen 2700X. And Intel had to fiddle the gaming benchmarks [tomshardware.com] to make it look faster than it really is. These are on Intel's 14nm process, they were hoping to be on 10nm by now but that isn't happening until some time next year. Meanwhile Ryzen 2 on 7nm will be out while Coffee Lake is still shipping, oops.

        The Ryzen 2 (2xxx) series is based on a 12nm process and is already on the market. You are referring to the Zen 2 architecture (which will probably be released as the Ryzen 3 series).

        • Zen+ is Ryzen 2000 series, Zen 2 will be Ryzen 3000. It's a bit confusing. Ryzen+ means Zen+ mainstream desktop. I think AMD intended Ryzen 2 to mean Zen 2, not Zen+, but there's so much confusion about that now that it's better to stick to the thousands terminology.

          • Zen+ is Ryzen 2000 series, Zen 2 will be Ryzen 3000. It's a bit confusing. Ryzen+ means Zen+ mainstream desktop. I think AMD intended Ryzen 2 to mean Zen 2, not Zen+, but there's so much confusion about that now that it's better to stick to the thousands terminology.

            Yes, perhaps you should do that, I already did (2xxx). What is clear is that the 7nm process is named Zen 2, while the product using the architecture is yet to be named

            • Yes, perhaps you should do that, I already did (2xxx).

              I'm just going to have to go ahead and point out that your retort qualifies as kind of snippy, considering that you actually said "Ryzen 2 (2xxx)" which is wrong, or at best adds to the confusion.

              • Yes, perhaps you should do that, I already did (2xxx).

                I'm just going to have to go ahead and point out that your retort qualifies as kind of snippy, considering that you actually said "Ryzen 2 (2xxx)" which is wrong, or at best adds to the confusion.

                Well, I kind of am already using the thousands terminology that you pointed out above, and AMD's website refers to the series as 2nd Generation Ryzen Processors [amd.com], while many publications have dubbed them either Ryzen 2nd Gen, Ryzen+, or Ryzen [tomshardware.com] 2 [youtube.com]. Asking me to use the thousands terminology without addressing your own use of "Ryzen 2 on 7nm" is, well, not fair. What exists today is the Zen 2 architecture on 7nm, and whatever product uses it is not yet named.

                • What exists today is the Zen 2 architecture on 7nm, and whatever product uses it is not yet named.

                  Pretty safe bet they will call it Ryzen 3, a change from their original plan, which was on the dumb side. Confusion is not helpful. BTW, there is already a 5nm Zen 3 on the roadmap; I bet that gets the damnatio memoriae treatment too, and they will reimagine it as Ryzen 5 (skipping Ryzen 4 as originally planned, because 4 sounds like "dead" in Chinese).

                  • Oh, and another source of confusion: what is a Ryzen 3? Is that zen-3-formerly-known-as-zen-2 or is it the cheap budget PC bin?

                    • Oh, and another source of confusion: what is a Ryzen 3? Is that zen-3-formerly-known-as-zen-2 or is it the cheap budget PC bin?

                      I think that is actually why they formally use the over-complicated "2nd Generation Ryzen" instead of Ryzen 2 or Ryzen+. Intel does this as well. To make things worse, the first gen mobile and APU parts were based on the 14nm Zen instead of 12nm Zen+ and are named 2xxxH and 2xxxG respectively.

                      Of course, I meant to write "they will reimagine it as Zen 5 skipping Zen 4", there's that confusion at work. To make that seem somewhat legit, they can go with "5nm means Zen 5, right?" Then just grin and go with Zen 6 for the 3nm generation.

                      Or introduce yet another code name.

                    • the first gen mobile and APU parts were based on the 14nm Zen instead of 12nm Zen+

                      The 12nm node name surely counts as one of the most egregious terminology abuses in the process wars so far. It uses all the same dimensions as 14nm but tweaks some details for better clocks and power efficiency. It really really should be called 14nm+, but maybe they just felt a compelling need to distinguish it from Intel's unrelated 14nm. And 12nm is better than 14nm, right? And 12nm must be better than 14nm+, so that settles that. What we need to be clear on is, nm no longer means "nanometer", it means


      • Intel Core i9-9900K 9th Gen CPU Review: Fastest Gaming Processor Ever [tomshardware.com]

        Then again, if money is no object and you have the need for speed, Core i9-9900K is the CPU to buy.

        Also, out of curiosity, where are you getting this "90% more expensive" from?
        The 9900K has an MSRP of $488, the 2700X has an MSRP of $329.
        Now, I'm no mathematician, but $488 != $625.
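
        (For what it's worth, the arithmetic behind both claims, as a quick C sketch using the MSRPs quoted here and the ~$304 street price from the summary:)

        /* Checking the "90% more expensive" claim against the prices quoted in this thread. */
        #include <stdio.h>

        static double premium(double intel, double amd) {
            return 100.0 * (intel - amd) / amd;   /* percent price premium */
        }

        int main(void) {
            const double i9_9900k        = 488.0;   /* MSRP / 1KU price */
            const double r7_2700x_msrp   = 329.0;
            const double r7_2700x_street = 304.0;   /* street price from the summary */

            printf("vs 2700X MSRP ($329):   +%.0f%%\n", premium(i9_9900k, r7_2700x_msrp));    /* ~48 */
            printf("vs 2700X street ($304): +%.0f%%\n", premium(i9_9900k, r7_2700x_street));  /* ~61 */
            printf("90%% more than $329 would be about $%.0f\n", 1.9 * r7_2700x_msrp);        /* ~625 */
            return 0;
        }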

  • by Anonymous Coward

    Currently I have:

    1 x 16 lane graphics card
    1 x 4 lane USB3 controller (four independent USB controllers)
    1 x 1 lane USB3 controller

    As a result, the GPU is currently only able to use 8 of the 16 lanes on my circa-2013 i7. Here it is 5+ years later and NOTHING has changed.

    No way will I be spending money on a new CPU with only 16 lanes.
    No way will I be spending money on a new CPU without ECC memory.
    No way will I be spending money on a new CPU without security bugs fixed.
    No way will I be spending money on a new CPU that doe

    • Weird. My GPU is using all 16 lanes of my CPU PCIe bus.
      Everything else is on my chipset.

      Wait, yours are too. Sucks being stupid, doesn't it?
  • Beastly Xeon W-3175X (Score:4, Interesting)

    by Tough Love ( 215404 ) on Friday October 19, 2018 @09:57PM (#57507630)

    Beastly 28-core Xeon W-3175X, obviously targeted at AMD's 32-core Threadripper 2990WX, which you can buy right now on Amazon for $1,720. I'd like to know Intel's price; I guess it's not remotely close.

    Note that with these top-heavy core counts you always get lower clock frequencies because of bus contention. Not a stopper by any means, if you have the use case. But personally I'm a lot more interested in the higher-clocked 16-core AMD parts, specifically the 2950X at $900. Slightly higher cost per core, but clocked about 10% higher. Boost frequency of 4.4 GHz; the technical term for that is awesome.
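
    (The cost-per-core comparison is easy to put numbers on, using just the prices quoted above:)

    /* Cost-per-core arithmetic for the two Threadripper parts mentioned in this comment. */
    #include <stdio.h>

    int main(void) {
        const double tr_2990wx_price = 1720.0; const int tr_2990wx_cores = 32;
        const double tr_2950x_price  =  900.0; const int tr_2950x_cores  = 16;

        printf("2990WX: $%.2f per core\n", tr_2990wx_price / tr_2990wx_cores);  /* ~$53.75 */
        printf("2950X:  $%.2f per core\n", tr_2950x_price  / tr_2950x_cores);   /* ~$56.25 */
        return 0;
    }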

  • by Anonymous Coward on Friday October 19, 2018 @10:45PM (#57507732)

    That's $304 per SINGLE AMD processor, and $488 apiece if you buy a thousand units of the Intel. Unless you're building a thousand computers this makes no sense to compare, and even then, the cost of the AMD processor goes down at those volumes too. This reveals a stupid level of bias in this article.

  • I'd buy one, but it won't run Windows 7.
  • While you are so busy developing top speed CPUs for gaming, could you remember once in a while to release something new for the few of us who still have to spend their life *working* ?!? Thank you...
    • While you are so busy developing top speed CPUs for gaming, could you remember once in a while to release something new for the few of us who still have to spend their life *working* ?!? Thank you...

      Don't worry, AMD has got you covered with loads of PCIe lanes, and encrypted ECC RAM.

  • I'm not a gamer, but I suspect that games are shipped as generic binaries that work on both Intel and AMD CPUs. This means the vendors will have used compiler options so that the code runs on both, which also means it might run faster on one. I have seen compilers generate code that tests which CPU it is running on and then executes one set of instructions or another. How much does that favour one CPU type over the other?
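
    (For anyone curious what that runtime CPU test looks like in practice, here is a minimal sketch using GCC/Clang built-ins on x86-64; the sum_* functions are hypothetical stand-ins for whatever hot loop a game might specialize, not code from any real engine:)

    #include <stddef.h>
    #include <stdio.h>

    /* Baseline path: compiled for generic x86-64, runs on any CPU. */
    static float sum_generic(const float *v, size_t n) {
        float s = 0.0f;
        for (size_t i = 0; i < n; i++) s += v[i];
        return s;
    }

    /* AVX2 path: the compiler may emit AVX2/FMA instructions for this function only. */
    __attribute__((target("avx2,fma")))
    static float sum_avx2(const float *v, size_t n) {
        float s = 0.0f;
        for (size_t i = 0; i < n; i++) s += v[i];   /* auto-vectorized at -O2/-O3 */
        return s;
    }

    int main(void) {
        float data[8] = {1, 2, 3, 4, 5, 6, 7, 8};

        __builtin_cpu_init();   /* read the CPUID feature flags */

        /* Runtime dispatch: both Intel and AMD report AVX2 through CPUID, so the
         * same binary picks the fast path on either vendor's chips that have it. */
        float (*sum)(const float *, size_t) =
            __builtin_cpu_supports("avx2") ? sum_avx2 : sum_generic;

        printf("sum = %f\n", sum(data, 8));
        return 0;
    }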

  • I'm not a gamer, so I'm prepared to be flogged for my ignorant question. This is advertised as "the best gaming CPU". But at any resolution over 1080p every modern title is GPU bound. Every benchmark I've seen at 1440p or higher shows absolutely no difference in frame rate between this CPU and one that costs 1/2 as much.

    So my question is: who spends $580, on the CPU alone, to build a gaming PC that only plays at 1080p? I understand that 1080p is the most common gaming resolution, but for people spend
  • Comment removed based on user account deletion
