Intel Hardware Technology

Intel's 9th Gen Processors Rumored To Launch In October With 8 Cores (theverge.com) 233

According to a new report from Wccftech, Intel will introduce new Core i9, i7, and i5 chips on October 1st that will be branded as 9th generation processors. The Verge reports: The mainstream flagship processor, Intel's Core i9-9900K, is expected to ship with 8 cores and 16 threads. Leaked documents show that this will be the first mainstream Core i9 desktop processor, and will include 16 MB of L3 cache and Intel's UHD 620 graphics chip. Even Intel's 9th gen Core i7 processor is expected to ship with 8 cores and 8 threads (up from the current 6 cores), with the Core i5 shipping with 6 cores and 6 threads. Intel is reportedly launching its unlocked overclockable processors first, followed by more 9th generation processors early next year.
  • by JoeyRox ( 2711699 ) on Sunday August 12, 2018 @11:44PM (#57114462)
    By just fixing Meltdown and Spectre :)
    • It's not like these have ANY hardware changes; Intel has chips to sell, don't harsh their buzz when they're dancing as fast as they can.

      That won't be fixed for a few more Mark Leaching cycles.

    • by Anubis IV ( 1279820 ) on Monday August 13, 2018 @12:41AM (#57114600)

      The hardware fixes for those flaws apparently won’t be included until Ice Lake. Intel is saying Ice Lake won’t be available in volume until 2H 2019, but it was originally scheduled for 2016, so I’ll believe that when I see it. Realistically, don’t expect those fixes in Intel chips until 2020 or later.

    • Meltdown and Spectre don't depend on hyperthreading, they just run faster with it.

  • by barc0001 ( 173002 ) on Sunday August 12, 2018 @11:50PM (#57114474)

    AMD seems to have really shaken up Intel's complacent little world over the last 18 months with the Ryzen.

    • by Tough Love ( 215404 ) on Monday August 13, 2018 @04:19AM (#57115058)

      A blow to the ego, certainly, but real damage? I doubt it. ARM bites Intel a whole lot harder.

      • by GuB-42 ( 2483988 )

        I don't think GP was thinking about AMD damaging Intel.
        Rather that AMD is now a serious, and much needed competitor, just like in the Athlon days.

        As for ARM biting Intel, I don't think that's really the case. Compared to Intel/AMD, ARM plays in the cutthroat market of mobile devices, with low-power, low-performance, low-price chips. In fact, Intel has an ARM license, and they used it to make the XScale line. It is not the only half-assed attempt from Intel at the mobile market, but I guess the margins are to

        • by Tough Love ( 215404 ) on Monday August 13, 2018 @06:16AM (#57115300)

          For Intel, ARM is the invisible bite that doesn't show up directly because it is related to the PC market decline, which is entirely explained by people doing without PCs in favor of mobile ARM devices. If it wasn't for ARM, Intel would be selling a billion more processors a year than it now does; think about it. Intel badly wanted that mobile market and were utterly defeated. They could easily start peddling ARMs themselves, but not for the margin they got used to and are now dependent on.

          Meanwhile, AMD is nice enough not to undercut too much. Not because they don't want to, but because they can't afford it. The last thing Intel should do now is go kill AMD with antitrust thuggery again; that would be extremely unwise, it would just push AMD into the arms of somebody much richer, and much more of a threat. A curious kind of détente we have going on now, with AMD enthusiasts the big winners.

          • For Intel, ARM is the invisible bite that doesn't show up directly because it is related to the PC market decline, which is entirely explained by people doing without PCs in favor of mobile ARM devices.

            The processing power is in the cloud [pewinternet.org] instead of in people's hands. The market for high-end processors may well have shrunk as virtualization means higher resource utilization, but it hasn't shrunk so much as the increasing dependence on handhelds translating into a reduced dependence on desktops might imply.

            Meanwhile, AMD is nice enough not to undercut too much. Not because they don't want to, but because they can't afford it.

            That's not nice, though. That's just being responsive to market forces.

            A curious kind of détente we have going on now, with AMD enthusiasts the big winners.

            As if we weren't already the big winners :D

        • In fact, Intel has an ARM license, and they used it to make the XScale line.

          They gave up because, ironically, they couldn't get the power consumption down as low as other ARM devices, even at lower clock rates. This is ironic because Intel is known for lower-power-consumption x86 processors these days, which is also ironic because they were previously known for the Pentium melting its socket during testing.

      • by AmiMoJo ( 196126 )

        AMD is making real inroads in the server market now.

        AMD parts can do a lot of stuff that Intel can't, such as encrypted RAM and offering a huge number of PCIe lanes. And of course, they offer more threads at a better price.

        • AMD parts can do a lot of stuff that Intel can't, such as encrypted RAM and offering a huge number of PCIe lanes. And of course, they offer more threads at a better price.

          On the other side, ECC in consumer parts really ups the ante.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      4.5 years ago, Intel announced it was cutting $350 million from its R&D budget and putting $350 million into diversity programs. Just a coincidence of course.

      Also a coincidence that 4+ years later, Intel has stumbled badly and been overtaken by the likes of Samsung.

      Lesson: when that idiot from HR comes to you with a proposal for super-duper diversity that will absolutely improve your decision making and lead to higher profits, as it's the latest fad among all the useful fucking college intake born of Gender

      • by barc0001 ( 173002 ) on Monday August 13, 2018 @12:41PM (#57117322)

        > 4.5 years ago, Intel announced it was cutting $350 million from its R&D budget and putting $350 million into diversity programs. Just a coincidence of course.

        You know, if you're gonna lie about things, you should pick something that a quick Google search or two won't show to be a porker:

        First of all, it's $300M for diversity programs, to be spent from 2015 to 2020, so they have not spent $350M on it, and no more than 70% of that $300M has been spent so far.

        Secondly on the R&D, they've been ADDING to the R&D budget year over year:

        https://www.fool.com/investing/2018/04/17/heres-how-intel-corp-cut-its-marketing-spending-by.aspx

        "During the year, Intel's research and development (R&D) spending grew by just $358 million, a slowdown from the $612 million increase that it saw there during 2016."

        I get it, you think diversity programs are a waste of time because Western society is 100% perfect and we totally didn't have racists and Nazis parading this weekend, but don't blame a company going into the toilet on them spending $60m a year on a program when their income for that same year was almost 70 Billion with a B. That's like saying you weren't able to make your $1000 rent this month because you spent a buck on coffee.

    • actually.... no.

      onboard gfx is lagging seriously behind, along with power savings.

      I bought only Intel for many of the last few generations of chips; I have a fanless heatpipe-cooled i7 on my desk. Can't do that with Ryzen; for one, there is no good high-end APU (yet), and those with onboard gfx are not power efficient yet.

      I want to buy ryzen. been wanting a 1700 for the 16 thread perf, but I refuse to buy a gfx CARD (I'm not a gamer) and a waste of a slot is a big deal today. I need that pcie slot

      • Why not just get one of those USB-to-display adapters? No GPU needed, plus it's almost as good as the Intel iGPU, just without 3D.

  • Thank you AMD (Score:3, Interesting)

    by Anonymous Coward on Monday August 13, 2018 @12:00AM (#57114498)

    Thank the old gods and the new that we have a competitive AMD again. How long has Intel sat on quad-core CPUs for the consumer market? And now, suddenly, when AMD has competitive 8-core chips on the market, Intel thinks that's what the market is ready for... only now?? 8 cores should have come out a long, long time ago, so screw you, Intel, for holding back the computer industry.

    These 8-core chips coming soon from Intel had better be very competitively priced too, because they still have the problems with Spectre and Meltdown, which won't be fixed until Intel makes a major redesign of their chips.

    • These 8-core chips coming soon from Intel had better be very competitively priced too

      Let's speculate... I can't see the 8-core i9 selling for $329, which is where AMD has the 2700X right now. Mind you, the i9 does have a (lame but functional) graphics core, while you must install a GPU for the 8-core Ryzen, so that's a slight point on Intel's side. Very slight. It would be great to see AMD respond with some minimal GPU on their 8-core Zen 2 parts next year, but AMD called it right anyway: everybody who plugs in that 16-thread beast also happily plugs in a GPU. On the lighter side, I do

      • by voss ( 52565 )

        If you want built-in graphics you are probably gonna go with the Ryzen 5 with graphics.
        4 cores, 8 threads and graphics for $150 is not a bad deal.

        • Yes, it's a no-brainer for a budget build, even if you plan to stick in a GPU at some point. Give your wife or kid a highly respectable sit-down PC that costs maybe $500 to build, or $600 for the deluxe version. For myself it is equally a no-brainer that I want 8 Zen cores, and now, being thoroughly addicted, I will go on to 16 (above that the core clock starts to drop), which is not something I need, it is something I want.

          • I love my 1700.. Now that I've tasted blood... I want the new 32 core TR.. This hobby is expensive..

            • Haha, same with me. My 1700 build is a budget workstation, about $1k of parts. Now I intend to drop more than twice that on a 16 core TR2 and I don't bother justifying. The build itself is the point.

              Well, I do max out all my cores on a regular basis, so it's not just for fun. But dammit, it is fun. I'm going for 16 cores instead of 32 because it's clocked higher, so on balance, better for my mix of serial and parallel loads. But I can certainly see the value of the 32 core part, it will be the go to part fo

              • I have a number of builds here of various kinds that I justified with "because I need it for activities!", but when Ryzen came out I restrained myself because I already had enough powerful PCs; I could have just done a few things on each. However, my wife said that I should go ahead and build one (so I would quit talking to her about it, I assume, lol) and out the door to Fry's I was. I did the bad customer thing and used their return policy, and the segfault issue, to bin myself a very nice performer. although o

          • The only downside for a later upgrade with the 2200G/2400G is that it only has an 8x PCIe link. All the other (non-integrated-graphics) parts can support the full PCIe 16x.

            But the performance hit is fairly small even for a beefy graphics card. Even so, it's giving me a bit of pause.

            • I see your point. Maybe this part [notebookcheck.net] solves the problem? Not sure about the socket or number of PCIe lanes. Anyway, it's hard to complain about the 2400G at $150 as a throwaway placeholder, maybe even a keeper.

              • Yea, the $20 price drop has made it almost irresistible... if RAM weren't so !#$^ expensive right now I'd have probably pulled the trigger in spite of the PCIe thing.

                • If memory price is too painful just at the moment then you could pick up 2x4GB for $90 to get started and add another 2x8GB later when the price comes down.

                  • That's true, although stepping down in RAM size for an upgrade is a bitter pill to swallow :)

                    • So start with 2x8GB and add 2x8GB later, that's what I did. Now the remaining issue is, if I want to go higher my slots are already fully populated. Waste! Can't stand it. My solution is to fantasy-build a TR, partially populated with 64GB, and hand this rather nice machine down.

      • There is no way that the 8 threaded i7 competes with the 16 threaded 2700X unless it is half the price, which isn't going to happen.

        It will continue to compete, because enough people want the fastest, and that is simply the i7, period.
        My 8700K kills a 2700X in every game I've ever seen benchmarked.
        I fully understand that aggregate core performance of the 2700X per dollar is superior- but you need to accept that that metric just doesn't fucking matter to a whole lot of people.

        • There is no way that the 8 threaded i7 competes with the 16 threaded 2700X unless it is half the price, which isn't going to happen.

          It will continue to compete, because enough people want the fastest, and that is simply the i7, period.

          That's a fool's game. There is always a faster one next quarter. But I agree, there is a great supply of fools. And inertia is a thing too, it is sometimes wise. But I still see Intel coming out on the wrong side of the decision tree here.

          My 8700K kills a 2700X in every game I've ever seen benchmarked.

          You mean crap games, not yet using Vulkan/DX12. And "kills" is a wild exaggeration. The 2700X is widely recognized as a perfectly good gaming part, built for the future instead of the past, and also the best workstation in class. Cores to spare, aggregate throughput without pee

          • That's a fool's game.

            That's like, your opinion, man.

            There is always a faster one next quarter.

            So? There's always a faster car next year too. Liking fast cars doesn't make me stupid. It means I have disposable income.

            You mean crap games, not yet using Vulkan/DX12.

            No. That's not what I meant at all. And that's a pretty bone-headed correlation for you to make. Do I need to point out how stupid it is, or can we forget you said it?

            Beyond that- you *are* correct that the margin is very much reduced or inverted with the 2 DX12/Vulkan games out there.

            And "kills" is a wild exaggeration.

            No, it's not. Usually between a 10-30% lead in performance. That's kill

            • you *are* correct that the margin is very much reduced or inverted with the 2 DX12/Vulkan games out there.

              Thanks for that, now try to connect the dots. I really don't care if an obsolete game engine runs 10% slower, I care about what happens with the new architecture that everybody is moving to. And you can hardly call any of those old crap engines unplayable on Ryzen, at least not without losing whatever cred you have, which is looking a bit thin at the moment to be honest.

              • You don't care about a damn thing other than not feeling stupid. Unfortunately, reality is dealing you some soul-crushing cognitive dissonance in that department.
                I'm happy that you're happy with your AMD. It's still not going to come close to knocking the crown off of Intel's head. I'm sorry you're too dim-witted to see the numbers for what they are.

                Being that you seem to think that engines that don't use Vulkan are obsolete, I'm not really sure there's any point in going back and forth with you anymore
            • 2700X is widely recognized as a perfectly good gaming part,

              Steam CPU Statistics [steampowered.com]
              Look at all that recognition!

              What, AMD Steam share up from 11% in March to 15% in July? Undone by your own link.

              By the way, you have a crap posting style, you come across as a pimply teen taking revenge on the internet for being bullied.

              • Not undone at all. There was a 5% bump in share- which is awesome for AMD.
                15% against 85%.
                And all recent gains have been for high end Intel procs, while AMD has declined in every segment except Linux users.
                You need to learn to read.
                • AMD has declined in every segment except Linux users

                  You need to lay off that crack pipe.

                  • The numbers are there, and they don't lie, imbecile.
                    • Your numbers say that AMD share on Steam increased from 11% in March to 15% in July, the most recent numbers. Look, disability with math does not mean you are a complete imbecile. But other signs do point in that direction.

                    • 11.15% 15.96% 16.33% 16.22% 15.17%
                      +4.81% +0.37% -0.11% -1.05%

                      Intel's:
                      88.86% 84.04% 83.63% 83.74% 84.79%
                      -4.82% -0.41% +0.11% +1.05%

                      You literally cherry picked AMD's initial bump, which saturated in literally a month, and has been declining since, while Intel has been increasing.
                      You are literally a fucking moron.
  • by lordlod ( 458156 ) on Monday August 13, 2018 @12:01AM (#57114500)

    The Intel Core i9 line has the same architecture and features as the i7 processors.

    This is a move to show the market that Intel has something new and innovative to offer. Unfortunately, the emperor isn't wearing any clothes.

    • More cores (Score:2, Interesting)

      by Anonymous Coward

      They're slapping much-needed cores in it; unfortunately they price it accordingly, which is a mistake. They should keep the price to the floor at this point, and keep hitting on performance.

      I do some serious number crunching on a dipole model; it's done on a cluster of Android TV boxes, each 8-core 64-bit, 30 of them to give 240 cores. Each has its own storage, networking, and RAM, making it totally scalable. It's the performance of a supercomputer from 15 years ago. And in total it costs around $2000. Sur

      • Having processors twice as fast as ARM cores isn't any good if they're more than twice the price.

        It's time for a car analogy. In a world of fARM tractors that can't go over 40 mph, a car that can go 80 mph is worth far more than twice as much.

    • The Intel core i9 line has the same architecture and features as the i7 processors.

      This has always been true. Intel does not design different cores for i3 vs i5 vs i7 vs i9. If there is any difference in features it is because those features failed testing, whether for speed or functionality, and were fused off. If there is any difference in last level cache size it is a chop.

  • by Anonymous Coward

    Even Intel’s 9th gen Core i7 processor is expected to ship with 8 cores and 8 threads (up from the current 6 cores)

    Current i7s are hyperthreaded 6-cores. It'll be interesting to see if a non-hyperthreaded 8-core outperforms a hyperthreaded 6-core.

  • by Fencepost ( 107992 ) on Monday August 13, 2018 @12:29AM (#57114562) Journal
    Are these new chips going to have all the same speculative execution and related issues? Did they have enough time to do any revamping? Or is this going to be the final generation that gets a big chunk of its performance improvements crippled?
  • I mean, normally those chips take years to engineer, so if they were already pre-producing when Spectre/Meltdown came up... there was probably no chance to change them?
    • Major changes require years. Spectre/Meltdown fixes (if my understanding is correct) require detecting and enforcing permissions for all memory accesses, which should be relatively simple. A couple of months to change the design and have a bunch of people check that the change fixes the problems and doesn't introduce new problems. A month to go from schematic or Verilog description or whatever to mask, a month to go from mask to packaged silicon. A couple of months to check that the hardware does what it's
  • by rl117 ( 110595 )
    Why would you buy this over a Ryzen chip? I just built up a new system, Ryzen 2700X with 32GiB RAM. The CPU is reasonably priced and pretty power efficient for the number of cores/threads.
    • I just built a similar system. Ryzen 2700X, 32GB 3200MHz RAM, 1TB Samsung 970 Pro NVMe drive, Vega 64... all water cooled. The system SCREAMS! Intel might have a few points of single-threaded performance up on my system, but I doubt anyone sitting down at my system could even tell the difference!
    • Because it performs better?
      My i7-8700K outperforms a 2700X in every game I play, why would I want a 2700X?
      The 2700X is better bang for buck in terms of aggregate core performance, but I'm really not sure that matters to enough people to matter.
  • In essence (Score:5, Insightful)

    by Artem S. Tashkinov ( 764309 ) on Monday August 13, 2018 @07:16AM (#57115478) Homepage
    Intel launches Skylake for the fourth time. Wow.
    • by sinij ( 911942 )
      Insofar as marketing goes, these are all different CPUs. Insofar as performance goes, all these are about the same since Haswell.
  • by jellomizer ( 103300 ) on Monday August 13, 2018 @09:15AM (#57115958)

    The big problem is most applications suck at parallel processing. 4 Cores
    1 for the OS
    3 for the applications

    Is what seems to suit home usage rather well. Having an 8th-gen i7 with 6 cores and a total of 12 threads is underutilized by most applications, and it will in general not run at its full potential. So moving an application that you want to work faster from 1 core to, in essence, 12 or 16 threads will not give Moore's-law speed improvements, because the program is often stuck on a single core, which hasn't been increasing in speed.

    The problem is multi-fold
    1. Little education in parallel-processing programming. This is still mostly relegated to 300-400-level CS classes for undergrads, and mostly designed to aid CS students taking it as an area of study in their master's degree.

    2. Most programming languages have poor implementations of parallel processing. Threading is one way to do parallel processing; there are other methods as well. I have seen languages such as MPL (for an early parallel-processing system) that actually had an elegant structure of plural variables, where you can code parallel processing without threads, using standard language constructs.
    This is pseudo-code, as I haven't used MPL in over 20 years.

    plural int x;
    plural int holder;
    int didchange = 1;
    x = randint(maxcpu);
    while (didchange) {
          didchange = 0;
          if (cpu % 2 == 0) {
                if (x > x[cpu+1]) {
                      holder = x;
                      x = x[cpu+1];
                      x[cpu+1] = holder;
                      didchange = 1;
                }
          }
          if (cpu % 2 == 1) {
                if (x[cpu-1] > x) {
                      holder = x[cpu-1];
                      x[cpu-1] = x;
                      x = holder;
                      didchange = 1;
                }
          }
    }

    Locking conditions and timing all handled easily without a lot of thought of the details. Yet using all the processors.
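    The plural-variable sketch above is an odd-even transposition sort. For readers who want to try the same data-parallel style today, here is a rough equivalent using NumPy whole-array operations; NumPy stands in for MPL's plural variables, and the function name is just illustrative:

```python
import numpy as np

def odd_even_sort(a):
    """Odd-even transposition sort using whole-array compare-and-swap."""
    a = np.asarray(a).copy()
    n = len(a)
    changed = True
    while changed:
        changed = False
        for start in (0, 1):          # even phase, then odd phase
            left = a[start:n - 1:2]   # every second element (views into a)
            right = a[start + 1:n:2]  # their right-hand neighbours
            swap = left > right       # which pairs are out of order
            if swap.any():
                # Boolean-mask indexing on the RHS returns copies,
                # so this swaps in place without a temp array.
                left[swap], right[swap] = right[swap], left[swap]
                changed = True
    return a
```

    Each phase performs all of its compare-and-swaps as a single vectorized step, which gives the same "no explicit locking" property as the MPL version.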

    • The big problem is most applications suck at parallel processing.

      Literally the only applications the average user is likely to encounter which require a lot of CPU time and fit this description are games. While games are now usually multithreaded, they are typically not sufficiently multithreaded to highly utilize many cores. One exception might be ports from modern consoles, which seem to have six or so cores available to the application. Console games which fully utilize the hardware ought to benefit from having multiple cores available on the PC as well.

      The on

    • The problem is multi-fold 1. Little education in parallel-processing programming.

      2. Most programming languages have poor implementations of parallel processing. ... I have seen languages such as MPL (for an early parallel-processing system) that actually had an elegant structure of plural variables, where you can code parallel processing without threads, using standard language constructs.

      I learned Fortran 90 around the turn of the millennium. Its native vector/matrix/complex math meant that decent compilers could automatically parallelize things like matrix multiplication. After all, the math construct is parallel to begin with. Using math this way helps solve both 1 and 2.

      In contrast, most languages started out with loop constructs, and later added hints to parallelize them. To a physics/math person this looks completely backwards. When two vectors are added together in real life, natur
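      A minimal illustration of that contrast, sketched in NumPy rather than Fortran (the array names are made up):

```python
import numpy as np

a = np.linspace(0.0, 1.0, 10_000)
b = np.linspace(1.0, 2.0, 10_000)

# Loop style: the serial element order is spelled out, so a compiler or
# runtime must prove the iterations are independent before parallelizing.
c_loop = np.empty_like(a)
for i in range(len(a)):
    c_loop[i] = a[i] + b[i]

# Array style: one whole-vector expression carries the parallel intent,
# much like Fortran 90's native vector math (c = a + b).
c_vec = a + b

assert np.allclose(c_loop, c_vec)
```

      The whole-vector form is what lets a decent compiler or library dispatch the work across SIMD lanes or cores without any hints from the programmer.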

    • by m00sh ( 2538182 )

      The big problem is most applications suck at parallel processing. 4 Cores 1 for the OS 3 for the applications

      Is what seems to suit home usage rather well. Having an 8th-gen i7 with 6 cores and a total of 12 threads is underutilized by most applications, and it will in general not run at its full potential. So moving an application that you want to work faster from 1 core to, in essence, 12 or 16 threads will not give Moore's-law speed improvements, because the program is often stuck on a single core, which hasn't been increasing in speed.

      The problem is multi-fold 1. Little education in parallel-processing programming. This is still mostly relegated to 300-400-level CS classes for undergrads, and mostly designed to aid CS students taking it as an area of study in their master's degree.

      2. Most programming languages have poor implementations of parallel processing. ... Locking conditions and timing all handled easily without a lot of thought of the details. Yet using all the processors.

      For application developers, threading will utilize multiple cores.

      For library developers, it's not just multiple cores. It's SIMD and specialized instruction sets for different architectures, and even GPU acceleration. Most library developers go very deep into these, and most very popular libraries already do a lot of work on this.

      The only problem where I had something hitting one core while the others are doing nothing is the C++ linker. But LLVM and gold are working on those problems.

  • with 5th generation bugs.
