Intel Launches New Core i9-9980XE 18-Core CPU With 4.5GHz Boost Clock (hothardware.com) 192

MojoKid writes: When Intel officially announced its 9th Generation Core processors, it used the opportunity to also unveil a refreshed line-up of 9th Gen-branded Core-X series processors. Unlike other 9th Gen Core i products, however, which leverage an updated Coffee Lake microarchitecture, new processors in Intel's Core-X series remain based on Skylake-X architecture but employ notable tweaks in manufacturing and packaging of the chips, specifically with a solder TIM (Thermal Interface Material) under their heat spreaders for better cooling and more overclocking headroom. The Core i9-9980XE is the new top-end CPU that supplants the Core i9-7980XE at the top of Intel's stack. The chip features 18 Skylake-X cores (36 threads) with a base clock of 3.0GHz that's 400MHz higher than the previous gen. The Core i9-9980XE has max Turbo Boost 2.0 and Turbo Boost Max 3.0 frequencies of 4.4GHz and 4.5GHz, which are 200MHz and 100MHz higher than Intel's previous gen Core i9-7980XE, respectively.

In the benchmarks, the new Core i9-9980XE is easily the fastest many-core desktop processor Intel has released to date, outpacing all previous-gen Intel processors and AMD's Threadripper X series processors in heavily threaded applications. However, the 18-core Core i9-9980XE typically trailed AMD's 24- and 32-core Threadripper WX series processors. Intel's Core i9-9980XE also offered relatively strong single-threaded performance, with per-core IPC that remains superior to that of any current AMD Ryzen processor.

  • Pricing (Score:5, Insightful)

    by TFlan91 ( 2615727 ) on Tuesday November 13, 2018 @01:03PM (#57638116)

    The pricing [hothardware.com] though... AMD still edges Intel out in my book.

    • If that's the price of bragging rights then I'll skip this one.

      • by Kjella ( 173770 )

        If that's the price of bragging rights then I'll skip this one.

        The price of bragging rights is always more than what most can afford. Otherwise, what's there to brag about?

          • For bragging rights it's tough to beat a 32-core 2990WX Threadripper, [techradar.com] now going for $1,730, that is, $150 less than the Intel part, with 14 more cores. For that matter, a 16-core 1950X for $650 probably still makes you the best desktop on the block.

          Of course, what we all really want is a 7nm 32 core Castle Peak Threadripper, possibly going to be announced about eight weeks from now. The ultimate desktop hotrod. Still TR4, so can do the build now with a 1950X as a placeholder, or a 1900X for $353, still a highly r

    • by gweihir ( 88907 )

      As soon as you look at prices and availability, Intel is utterly naked.

  • So Intel finally adopts something the modding community has been doing for years? Seriously late to the game, guys. There's a reason Intel de-lidding is frequently done, while there's borderline no point in doing it on AMD's high-end offerings.

  • by Crashmarik ( 635988 ) on Tuesday November 13, 2018 @01:20PM (#57638236)

    For almost all desktop use.

    Unless your desktop is doing something that parallelizes really well, you will probably never notice the benefits of this.
    Even things that benefit from parallel processing are far better served by running them on truly parallel architectures. If you have an application that can support fine-grained parallelism, why run it on 18 x86 cores when you can run it on 1,500 cores on a graphics card?
     

    • What someone could use this for is virtualization on your desktop. But at that price you might as well get a Xeon proc and call it a day.

      • Sounds about right. I am sure it would be the cat's pajamas at simulating a small network of processors and testing your program on them.

    • Shared-memory parallel codes (OpenMP) could benefit, though. Many originally single-threaded or homemade scientific applications run in this space: you get some parallelism for relatively little work (insert pragmas, be careful to be thread safe, and test, test, test), without all the extra work of redesigning those simulations for efficient message passing.

      You certainly find problems where it is much better bang for the buck to throw an expensive processor and OpenMP (on the order of $100 to $1,000) at a problem than to
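      As a rough sketch of the pragma-level approach described above (the arrays and loop here are made up for illustration, and it assumes a compiler with OpenMP enabled, e.g. g++ -fopenmp):

        #include <omp.h>
        #include <cstdio>
        #include <vector>

        int main() {
            const int n = 1000000;
            std::vector<double> a(n, 1.0), b(n, 2.0);
            double sum = 0.0;

            // One pragma spreads the serial loop across all available cores;
            // the reduction clause keeps the accumulation thread safe.
            #pragma omp parallel for reduction(+:sum)
            for (int i = 0; i < n; ++i)
                sum += a[i] * b[i];

            std::printf("dot = %f using up to %d threads\n", sum, omp_get_max_threads());
            return 0;
        }

      That is the appeal: an 18-core part speeds this sort of loop up with a pragma and careful testing, without redesigning the simulation around message passing.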

    • I don't agree with your blanket generalization, ending in an unresolved comparison. Better for what?

      The processor is made and optimized for running parallel tasks. Chores like ray tracing, video transcoding and export, photo-editing filters, science applications, and web servers generally like this kind of processor.

      I am in no way an Intel fan, but a 4.5GHz boost clock is quite respectable. I should think that it can handle single-core and low-core tasks well.

      • Unless your desktop is doing something that parallelizes really well

        I thought I was pretty clear saying that. Just how much of a performance boost is your web browser going to get from extra cores when it should just stop running scripts/videos in windows/tabs that don't have focus? Or, for that matter, your word processor or video game?

        Chores like raytracing, video transcoding and export, photo editing filters, science applications, and web-servers generally like this kind of processor

        Well, ray tracing is almost certainly better handled by a GPU these days; same for video transcoding. A web server that needs parallelism is a server application, not a desktop application. Some science applications, certainly, but if they paralle

    • You have an application that can support fine grained parallelism, why run it on 18 cores of X86 when you run it on 1500 cores off a graphics card ?

      Because a graphics card is not just 375 traditional CPUs jammed into a single package, and just because something can scale to 18 cores doesn't mean it would run better on 1,500 GPU cores.

      • You have an application that can support fine grained parallelism , why run it on 18 cores of X86 when you run it on 1500 cores off a graphics card ?

        Because a graphics card is not just 375 traditional CPUs jammed into a single package and just because something can scale to 18 cores doesn't mean it would run better on 1500 GPU cores.

        You really have a desperate need to learn how to read or at least learn what the relevant terminology actually means.

        • And you desperately need to understand the differences between a CPU and a GPU in the way that their processing actually works. Or you need to go home and spin up 1500 VMs using only a single core on your graphics card. Good luck getting that to boot before Christmas.

          "I have cores" != "I can do anything you can do" and regardless of how parallel your application gets they will not necessarily run faster or better on a GPU, a specific device designed to run a very VERY specific subset of instructions compare

          • And you desperately need to understand the differences between a CPU and a GPU in the way that their processing actually works. Or you need to go home and spin up 1500 VMs using only a single core on your graphics card. Good luck getting that to boot before Christmas.

            You can't seriously be that stupid?

            I have cores" != "I can do anything you can do" and regardless of how parallel your application gets they will not necessarily run faster or better on a GPU, a specific device designed to run a very VERY specific subset of instructions compared to a GPU.

            I guess you can be that stupid. But at least you looked up the meaning of Fine Grained Parallelism. Unfortunately you failed to comprehend.

            You have heard of this new thing that was invented called math, maybe? Why don't you just do the numbers and work out just how much of a per-core performance advantage would be needed for 18 processors to outperform 1,500, or say 3,000 if you run 2; well, you get the idea.

            Oh, and this part, "I have cores" != "I can do anything you can do", that is

            • Fine-grained parallelism does not have anything to do with whether something is better or worse to do on a CPU than on a GPU. But since you're all insults and no substance, I'm sure you realised that a while back too. But whatever, I'll go down to your level.

              Oh, and this part, "I have cores" != "I can do anything you can do", that is just fundamentally wrong.

              Oh wow. I can't believe you called me stupid and then wrote a line like that when we were discussing performance. Tell you what, go dig out the old Turing machine (which, as I think I may need to point out to you anyway, is actually Turing complete), have it proces

              • Oh wow. I can't believe you called me stupid

                Why is that difficult to believe? With your attitude, I am sure lots of people call you stupid.

                • Interesting, given who it was that started with the name-calling. I think everyone has learnt a bit about you today. Your other post just now was equally retarded.

    • Your desktop is almost always doing something that parallelizes really well. For example, browsing: each tab runs in a separate thread. And with Vulkan/DX12, games will now use as many cores as you have to feed a big GPU. The classic one is gaming and streaming; that used to be an issue before Ryzen.

      If you are compiling or doing anything with video, there is no such thing as too many cores.
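      As a rough sketch of why those workloads scale with core count (the busy-work function below is a made-up stand-in for a compile unit or a video chunk; plain C++17, standard library only):

        #include <algorithm>
        #include <cstdio>
        #include <future>
        #include <thread>
        #include <vector>

        // Stand-in for one independent, CPU-heavy job (a compile unit, a video chunk, ...).
        static long crunch(int job) {
            long x = 0;
            for (long i = 0; i < 50000000; ++i) x += (i ^ job) % 7;
            return x;
        }

        int main() {
            const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
            std::printf("spreading work over %u hardware threads\n", cores);

            std::vector<std::future<long>> jobs;
            for (unsigned j = 0; j < cores; ++j)
                jobs.push_back(std::async(std::launch::async, crunch, static_cast<int>(j)));

            for (auto& f : jobs) f.get();  // wall time shrinks roughly with core count
            return 0;
        }

      Compiles and batch transcodes behave the same way, which is why the core count keeps paying off as long as the jobs stay independent.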

        • Browsing separate tabs really shouldn't be taking CPU at all. I haven't looked at Firefox's source, but I will typically have a hundred-plus tabs open and notice very little draw on my CPU. Right now it's pulling 5.1% on a quad-core CPU with a guess of around 100 tabs open and at least 5 that are interactive. For streaming, once again, I suspect the optimum use of the dollars is buying a higher-end graphics card.

          • Play videos in multiple tabs. Access multiple crappy JavaScript sites. There are any number of ways to consume CPU in multiple tabs. I can only presume that you never looked at CPU consumption while browsing. In theory, browsing should be efficient. In practice, it isn't.

            • Actually I just did; that's where my numbers came from. Can't say I am ever playing more than 2 videos at a time.

        • And you don't seem to be clear on the distribution of work between CPU and GPU. It takes more cores to feed a bigger GPU. You go tell the streamers that they don't need multiple cores. They know otherwise.

          • And you don't seem to be clear on the distribution of work between CPU and GPU. It takes more cores to feed a bigger GPU. You go tell the streamers that they don't need multiple cores. They know otherwise.

            Yeah, that's a function of memory bandwidth more than anything else. You can throw all the cores you want at it; it doesn't matter if you don't have the bandwidth. You may have noticed that GPUs generally have much, much wider memory buses?

            Anyway, just for reference: you're taking data from the frame buffer, ideally encoding it on the card using the card's encoder, then moving it out and maybe formatting it with the CPU.

            You might want to look up how the architecture is actually laid out and how this works.

            • Re: (Score:2, Informative)

              by Tough Love ( 215404 )

              You are still confused. GPUs have high on-board memory bandwidth because they use it internally for texel and vertex fetching, etc. Graphics features like filtering are highly memory-intensive, with typically multiple accesses per texel per raster op in on-board memory. The bandwidth the CPU uses to upload primary data to the GPU is comparatively much less. Unless you made a major mistake, like not populating both controller channels, your streaming setup is unlikely to bottleneck on memory, including reading

              • You are still confused. GPUs have high on-board memory bandwidth because they use it internally for texel and vertex fetching etc.

                Says the guy who can't separate outcome from cause. Why they were initially designed that way is irrelevant.

                You don't encode video on the GPU while rendering unless you are OK with dropping the frame rate.

                Well, seeing as encoding on a GPU is 4 to 5 times faster than on a CPU, and it saves the time of fetching unencoded video from the frame buffer to system memory, I'll be glad to trade the overhead. That is the trade.

                I feel that you are just burping out random factoids

                Projection seems to be strong with you; so is feeling over actual understanding.

                  • So, crashing the party to showcase your awe-inspiring intellect, then.

                  You don't encode video on the GPU while rendering unless you are OK with dropping the frame rate.

                    seeing as encoding on a GPU is 4 to 5 times faster than on a CPU and it saves the time of fetching unencoded video from the frame buffer to system memory. I'll be glad to trade the overhead.

                  Why is it necessary to explain it to you in words of one syllable? Lose frame rate.

                  • Thanks for showing me wrong.

                    I should have gone with my first impression that you were an idiotic troll when you said this

                    Play videos in multiple tabs. Access multiple crappy javascript sites. There are any number of ways to consume cpu in multiple tabs
                    https://slashdot.org/comments.... [slashdot.org]

                    But I gave you the benefit of the doubt. My bad.

                    Now you are coming up with this

                    Why is it necessary to explain it to you in words of one syllable? Lose frame rate.

                    And you have removed all doubt.

                    You turn on streaming, you are going to lose frame rate no matter what you do. The questions are: how are you going to lose more, and what is the best use of resources to build the system?

                      • the questions are: how are you going to lose more, and what is the best use of resources to build the system?

                      Tying up GPU compute units isn't it. You obviously are no gamer. But you are a loudmouth.

                        Geez, can't you even get the terms you're talking about right? It isn't the set of gamers but the subset of gamers that stream.

                        Instead of being an asshole, try actually learning about what you are talking about. Then you won't wind up saying stupid things and cherry-picking horrifically bad examples like "trying to watch videos in closed tabs" to show you're not an idiot.

                    • Did you really just type "watch videos in closed tabs"? You're losing it, go take your meds.

                        No, you did:

                        Play videos in multiple tabs. Access multiple crappy javascript sites. There are any number of ways to consume cpu in multiple tabs
                        https://slashdot.org/comments.... [slashdot.org]

                      So you are lecturing others on not being an asshole, got it.

                      I expect you get lectured about being an asshole a hell of a lot.

                    • Just curious, how did "multiple tabs" become "closed tabs" in your mind?

                      • Still don't comprehend how little you know, do you?

                      • I comprehend that you have a screw loose.

                      • And I comprehend that you don't know anything about how a browser works.

                    • Perhaps your brain fever leads you to imagine that videos stop decoding when you switch tabs.

                      • Mine, no. The people who make the browser could be.

                    • So you understand that a browser can consume CPU per tab by decoding a video per tab.

                    • So you understand that a browser can consume CPU per tab by decoding a video per tab.

                      HAHAHA

                      You still are so fucking stupid.

                      Try doing a little research on how this actually works and how CPU video decoding actually works.

                    • Just in case, I verified using appropriate tools. I doubt you are capable of that.

                    • Unh hunh

                      Somehow I doubt you even knew what to check

                    • You've made it abundantly clear your technical skill rounds to zero. And you're delusional, that's heady stuff. Must be confusing to be inside you.

                    • You've made it abundantly clear your technical skill rounds to zero. And you're delusional, that's heady stuff. Must be confusing to be inside you.

                      Unh hunh. Just for the fun of it, I am going to suggest you bounce your position off someone else.

                      I am going to guess they will do exactly what I did, which is first try to explain why you are wrong and then write you off as the moron you are.

                      Ciao

                    • Your attempt at explanation was exactly what confirmed you have no technical skill. You're the old fart shouting at the cloud.

                    • Oh I don't know

                      I am not the guy who thinks playing videos you can't see is a good reason to have more cores.

                    • So the straw you're hanging onto is, exactly one use case doesn't apply to you.

    • by AmiMoJo ( 196126 )

      Ryzen and Threadripper have more than just extra cores. More PCIe lanes, for example. Stuff that matters for workstations.

      • Ryzen and Threadripper have more than just extra cores. More PCIe lanes, for example. Stuff that matters for workstations.

        Absolutely true. Without looking, I would also bet the supporting chipsets are higher-end as well.

    • The main reason to have more cores is to be able to do more things at once. Remember when you used to be limited by your CPU in how many things you could do at once without bogging everything down? You no longer have to worry about closing Chrome when you go to play your game for a few hours, provided you have the RAM overhead to make up for Chrome's memory leaks (seriously, they have had memory leaks for a decade, literally since day 1; what gives, Google??).

      • Yeah, that's true.
        It's just that, for most people, the number of things they want to do at the same time is less than what their rigs can currently handle.

  • Vulnerabilities (Score:5, Insightful)

    by cyberchondriac ( 456626 ) on Tuesday November 13, 2018 @01:20PM (#57638240) Journal

    I didn't see any mention of addressing Meltdown, Spectre, or L1TF... so I assume those general architecture issues are not yet addressed; this is still Skylake.
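    For what it's worth, on Linux the kernel reports its mitigation status per issue under /sys/devices/system/cpu/vulnerabilities/, so this is easy to check on whatever chip you end up with. A minimal reader, assuming Linux and C++17:

      #include <filesystem>
      #include <fstream>
      #include <iostream>
      #include <string>

      int main() {
          // Each file here (meltdown, spectre_v1, spectre_v2, l1tf, ...) holds a one-line
          // status string such as "Mitigation: PTI" or "Not affected".
          const std::filesystem::path dir = "/sys/devices/system/cpu/vulnerabilities";
          for (const auto& entry : std::filesystem::directory_iterator(dir)) {
              std::ifstream in(entry.path());
              std::string status;
              std::getline(in, status);
              std::cout << entry.path().filename().string() << ": " << status << '\n';
          }
          return 0;
      }

    Whether a given entry reads "Not affected" or names a software/microcode mitigation is the quickest way to answer this for a specific stepping.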

    • And no one running these processors will care. In fact most of the people affected by Spectre and Meltdown are likely running Xeons.

      • Fact is, people do care. Whether it is a perception of sloppy Intel engineering, or security, people do care.

          • Nope. A couple of angry nerds care, and a couple of system administrators of large virtual servers care. If there is one thing that has been made 100% clear by people's reaction to this, Intel's share price, and Intel's market share, it's that people in the general case, meaning the vast majority of computer users, most definitely do NOT care.

          • You are out of touch. Take a quick run around the comments section on any Intel vs AMD article and you will find Meltdown frequently cited. And Meltdown has gone mainstream. [theguardian.com] Even the business pages talk about it because it is affecting Intel's stock price.

            See, it's like GMO: it may or may not affect you directly, but it is always a concern and a source of endless debate, such as this. The only way out of this for Intel is to fix it definitively in hardware, as opposed to papering over by minor circuit tweaks,

            • Frequently cited means nothing ultimately. Bitcoin and blockchain technology are frequently cited too, and ultimately that was a management discussion fad that went nowhere. While we frequently cite things, it means nothing when ultimately business practices haven't changed.

              Now, that isn't universal. There has definitely been work in the cloud space, which makes perfect sense too, since they actually have direct exposure to the issue, as their business model relies on having people run their code on machines you

              • actual working exploits outside of carefully controlled lab experiments, or balls out just prove I can copy some random bits which I can't identify as belonging to something have yet to be seen or developed... a full year later.

                Wow, where have you been? [github.com]

                Q: Has Meltdown or Spectre been abused in the wild?
                A: We don't know. [meltdownattack.com]

                • Thanks for proving my point. You just linked me to a whole series of lab experiments which require up-front knowledge of the computer in question.

                  If someone is in a position to gain enough knowledge about your machine to use any of the examples you just linked to, then, to pardon my French, you're already properly fucked... or you're a cloud/VM provider, which, as I pointed out earlier, is exactly the kind of operation that is actively at risk here.

                  In terms of security risk for the 99.9% of people out there, this r

                  • In terms of security risk for the 99.9% of people out there, this ranks lower than...

                    Says random internet guy, knowing better than the security researchers.

  • Does i9 cure, or even address, Spectre [wikipedia.org] or Meltdown [wikipedia.org]?
  • The question is, does a Threadripper outperform a dual- or quad-CPU Xeon system, and can you justify the price of the Xeon system for the amount of extra performance you get? If my code doesn't support distributing processing out to the network, having a huge machine with 4 Xeons and a mind-boggling amount of RAM on it might be the only way to accomplish what I need to accomplish. You just have to expect you're going to spend a LOT of money for that machine.

    I have a system encoding 8 1080p video streams
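    As a rough sketch of that kind of load (assuming ffmpeg with libx264 is installed; the file names are placeholders), the point is simply that each encode is an independent, CPU-hungry process, so core count decides how many streams you can keep up with:

      #include <cstdlib>
      #include <future>
      #include <string>
      #include <vector>

      int main() {
          // Kick off eight independent software encodes in parallel; more cores means
          // more of them can run at full speed simultaneously.
          std::vector<std::future<int>> encodes;
          for (int i = 0; i < 8; ++i) {
              const std::string cmd =
                  "ffmpeg -y -i cam" + std::to_string(i) + ".ts"
                  " -c:v libx264 -preset veryfast out" + std::to_string(i) + ".mp4";
              encodes.push_back(std::async(std::launch::async,
                                           [cmd] { return std::system(cmd.c_str()); }));
          }
          for (auto& e : encodes) e.get();  // wait for all eight jobs
          return 0;
      }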

  • by Z80a ( 971949 ) on Tuesday November 13, 2018 @01:37PM (#57638342)

    I see you got a fancy new power curve, soldered TIM and nothing else!

  • "We all know Linux is great... it does infinite loops in 5 seconds." -- LinusTorvalds

    Now it can do it in under 2 seconds!
  • It's not the fastest desktop processor when it trails 24 and 32 core ThreadRippers.

    That's not how it works. Fastest doesn't mean slower.

    • by EvilSS ( 557649 )

      It's not the fastest desktop processor when it trails 24 and 32 core ThreadRippers.

      That's not how it works. Fastest doesn't mean slower.

      In the benchmarks, the new Core i9-9980XE is easily the fastest many-core desktop processor Intel has released to date

      I didn't realize Intel was releasing ThreadRipper CPUs.

  • I'm holding out for all the cores.

  • Why will people shell out an extra $300 for a processor that is 10% faster, but they won't pay $10 for a new software program that runs twice as fast as the one they are using?
    • This also kills me with cell phones. People will pay ~$1000 for a phone... but refuse to buy $1 apps to use on it. They will go WAY out of their way to find a "free" app that does something similar...

      I absolutely cannot understand this phenomenon.

      • by Bert64 ( 520050 )

        Many do the opposite: they will buy (or pirate) expensive software because "it's the thing to have" but then skimp on the hardware to run it on.

    • by Bert64 ( 520050 )

      I'd rather pay nothing for a pirated version that removes DRM code and thus runs faster than the paid version...

  • The "best" CPU is always the best one for *your* workload. If that's max single-threaded performance and money doesn't matter, then that means Intel, and it likely will for awhile. If we're talking about a workload that can be processed massively in parallel, then AMD has earned a seat at the table. I like the "High End CPUs - Intel vs AMD" [cpubenchmark.net] benchmarks at PassMark -- should enable plenty of dick-waving no matter who you are. Take the time to understand your workload in detail, set your budget, and choose
    • AMD is claiming a 29% IPC lift for Zen 2. If it's anything like what they said before the release of Zen 1, we may see a 35-40% IPC lift. Even the former puts Intel in 2nd place. Hopefully Intel can get 10nm working sooner rather than later to keep innovation flowing.

  • SteveJobs R&D proved cores greater than 2X exhibit diminishing throughput on Intel for Darwin. A lot has changed; Darwin included as well as MacOS X with GPU onboard processing et. al. with cores doing look ahead, graphics, memory, etc...

    Could a generous anonymous type Avie Tevenian kernel nerd step in to raise all knowledge; level to the state of art on silicon? Are Hz marketing ' Intel' real world throughputs 'Inside'.
