
Intel Launches 11th-Gen Rocket Lake-S CPUs (venturebeat.com)

The new generation of Intel Core CPUs is here. Intel is using a new architecture on its ancient 14nm process to power the 11th-generation Rocket Lake-S processors. From a report: That results in some significant performance improvements, but it also means that Intel can only fit 8 cores on its flagship Core i9-11900K. That sacrifice in core count looks bad next to the 12-core AMD Ryzen 9 5900X, or even the last-gen 10-core i9-10900K. But Intel is also promising massive improvements in per-clock efficiency that should keep the Rocket Lake-S parts competitive -- especially in gaming. Rocket Lake-S CPUs launch March 30. The $539 Core i9-11900K has 8 cores and 16 threads, with a single-core Thermal Velocity Boost of 5.3GHz and a 4.8GHz all-core boost. The slightly more affordable $399 i7-11700K boosts up to 5GHz, and the $262 i5-11600K has 6 cores at a 4.9GHz boost.

While the lack of cores is going to hurt Rocket Lake-S CPUs in multi-threaded applications, Intel claims that its 19% improvement in instructions per clock (IPC) will make up much of the difference. The UHD graphics processor in the CPUs also delivers 50% better performance than the last generation. Of course, Intel is focusing on games because that is where its processors remain most competitive against AMD, and that should continue with the Rocket Lake-S chips. These high-clocked parts with improved per-core performance should keep up with, and in certain games even exceed, AMD's Zen 3 chips -- Microsoft's Flight Simulator, for example (according to Intel).

This discussion has been archived. No new comments can be posted.
  • by ZiggyZiggyZig ( 5490070 ) on Tuesday March 16, 2021 @12:31PM (#61165244)

    I wonder how easy it will be to implement Spectre-class data capture code on those shiny new ones...

    • by Tailhook ( 98486 )

      Indeed. The huge IPC improvement may involve the sort of design compromises that Intel has been guilty of in the past.

      • by gweihir ( 88907 )

        For quite a while, Intel was only "ahead" of AMD because they simply did not care about the security of their customers. Spectre/Meltdown-type attacks were predicted a long time ago at the relevant CPU design conferences. AMD was careful enough to avoid Meltdown and to make Spectre so hard that nobody really knows whether it can work in practice on their CPUs. Intel just did not care, and got better speed than AMD pretty much by defrauding their customers.
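
        For reference, the published Spectre v1 gadget (Kocher et al.) is nothing more exotic than a bounds check the CPU speculates past. A minimal sketch in C, not a working exploit (the array names follow the paper's convention):

            #include <stdint.h>
            #include <stddef.h>

            uint8_t array1[16];
            uint8_t array2[256 * 512];   /* probe array: one cache line per byte value */

            void victim(size_t x, size_t array1_size) {
                if (x < array1_size) {                 /* attacker trains this branch */
                    uint8_t secret = array1[x];        /* speculative out-of-bounds read */
                    volatile uint8_t tmp = array2[secret * 512]; /* leaves a cache footprint */
                    (void)tmp;                         /* footprint is later timed to recover the byte */
                }
            }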

    • Just as complex as before, which has rendered the exploit completely useless up to this point.

    • Is it fixed yet? I must hand it to Intel: they could sell a new car with dings, splotchy paintwork, and a three-speed auto gearbox. They have had years to fix the problem. Thankfully Apple is about to show them the error of their ways.
  • I know Intel is behind on fabs, but 14nm went into full production around 2014. How is that ancient?
    • 7 years is an eternity in chip time. Think about how outdated a year-2000 chip was in 2007. That's going from something like a Core 2 Quad Q6600 (still a very usable CPU even in 2021) to a 600MHz Pentium III (not usable with modern software at all). Go 7 years before that and you were looking at a 486.

      So yeah, 7 years is a very long time to be refining the same basic design AND using the same process. Especially so when competitors like Apple are making chips on the 7nm process (two process improvements ahead).

      • 7 years is an eternity in chip time... That's going from something like a Core 2 Quad Q6600 (still a very usable CPU even in 2021) to a 600MHz Pentium III (not usable with modern software at all).

        If a CPU from 2007 is still usable in 2021 (14 years later), then 7 years isn't an eternity in "chip time."

        • It's only usable if you don't care about power consumption. Somebody must be defending a recent decision to stick with Intel.

          • by dryeo ( 100693 )

            The Q6600 I had before this i5 used slightly less power. You're perhaps thinking of the NetBurst chips, which were slow (despite high GHz numbers) and power hungry.

          • It's also usable if you care about completely removing the Intel ME. Because the 2008 Intel chips are the last generation that can be Librebooted.
        • Well, I think part of it is that development has slowed considerably compared to what we were used to in the 90s and 00s. But the fact remains that Apple and AMD are 1-2 generations ahead in terms of die shrinks.

      • by Entrope ( 68843 )

        Apple is not making chips with a 7nm process. Apple's A14 and M1 are both on a 5nm process, as is Qualcomm's latest (Snapdragon 888 5G). AMD and Nvidia are mostly at 7nm, although AMD's I/O chiplets are 14nm.

        14nm for a 2021 CPU pretty much screams "process failure".

        • by ChoGGi ( 522069 )

          nm has been a marketing term since 28nm (I believe); it doesn't refer to gate length anymore.

          • by Puls4r ( 724907 )
            That may be true, but the fact that Intel can't even advertise being close to the others in this artificial measure is becoming a major problem - even if it is just optics.
          • According to this article, everything under 32nm is a very approximate measurement, and mostly marketing these days.

            https://prog.world/7-nm-proces... [prog.world]

            The new measurement is fairly complicated, because the fundamental structure of transistors has changed in newer chips.

            This sort of reminds me of when the latest trend was "bits." We had 32-bit processors, which were obviously way better than 16-bit processors. Then marketers got hold of it, and they started claiming 64 bits, then 128 bits, but only for very selective operations.

        • Intel made investments back in the day to support the needs of competitors when they were customers; those customers have since taken chips back in-house and, being late, have gone to a far smaller process. Power consumption does not really matter to the buyers of the Intel chips that go for more than $200 and 85W per package; the market segments that highlighted it 5-10 years ago were mobile handsets, home entertainment (set-top boxes), and green cloud. Energy-efficient cloud is far behind time-to-market concerns
        • Besides their 5nm chips, Apple is still building and shipping iPad, iPad mini, iPhone XR, and iPhone 11 units with the A12, A12X, and A13, which are 7nm.

      • Re:Ancient 14nm? (Score:4, Interesting)

        by teg ( 97890 ) on Tuesday March 16, 2021 @03:11PM (#61165788)

        So yeah, 7 years is a very long time to be refining the same basic design AND using the same process. Especially so when competitors like Apple are making chips on the 7nm process (two process improvements ahead).

        Intel seems to be sort of where they were back in the late Pentium 4 days, when they were at the end of the line for what they could do with an old technology and were being bested by competitors as a result.

        First, Apple is using 5 nm [anandtech.com] for their latest A14 (iPhone) and M1 (low end Macs) CPUs - not 7 nm.

        Second, Pentium 4 wasn't Intel being at the end of the line for what they could do with an old technology - they made a bet on a completely new technology, the Netburst microarchitecture [wikipedia.org]. This had a lot of new stuff and initially good performance. But Intel's forecast that their processes could scale to 10 GHz absolutely did not pan out. Thus, they had to revert to their old architecture, in a form that had been evolved for use in laptops while the Netburst architecture was their focus for servers and desktops.

        This old architecture - as opposed to their new, troubled one - was then turned into Intel's Core microarchitecture [wikipedia.org]. This led to a decade of Intel domination of the laptop, desktop, and server CPU markets - and that was based on going back to their old architecture, not on their new one.

        This domination might even be one of the reasons why Intel is in so much trouble now. Sure, they have massive process issues - but they've also seemingly spent most of their effort on market segmentation and creating an insane amount of SKUs as opposed to general improvements and a simple, easy to understand product line. Got to keep companies willing to pay top dollar for the highest end SKUs. When competition then adds a lot of features in silicon that is missing from pretty much the entire Intel range, they risk their market share being eaten from below.

        • Re:Ancient 14nm? (Score:4, Interesting)

          by cheesybagel ( 670288 ) on Tuesday March 16, 2021 @03:57PM (#61165946)

          Actually, the first Netburst processor was a disaster. The Willamette core ran non-P4-optimized code really poorly because it had a really long pipeline and small L1 caches. It also had no barrel shifter, so a lot of code optimized for the original Pentium, which turned multiplies into shifts and adds, ran slow as molasses. Then there is the fact that it only ran optimally at all if you had a motherboard with dual-channel Rambus DRAM - while Rambus DRAM was in short supply and expensive, and those motherboards were hellishly expensive too. Instead of offering motherboards with much cheaper and more widely available DDR SDRAM, Intel deliberately released motherboards with a single channel of much slower, older SDRAM. So the performance was indeed crap for most people. Things only got better once motherboards with DDR SDRAM became widely available, like the ones using VIA chipsets. The CPU also got better with the next iteration of the Netburst family, the Northwood core, which scaled better and had larger L1 caches, and then Intel released its own DDR SDRAM motherboards too. Even that was short-lived, since the P4 core Intel released afterwards was Prescott, which had awfully high power consumption and didn't clock that much higher. It was a hog. So one could claim that of the whole Pentium 4 family, only Northwood was any good.
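
          (For the curious: the shift-and-add trick in question looks like the sketch below. On CPUs with a single-cycle barrel shifter the shift form was a win; on Willamette, which had to emulate shifts, it was the slow path. Function names are mine, purely illustrative.)

              #include <stdint.h>

              /* Multiply by 10 via shifts and adds: x*10 == x*8 + x*2. */
              static inline uint32_t mul10_shift(uint32_t x) {
                  return (x << 3) + (x << 1);   /* fast on the original Pentium */
              }

              static inline uint32_t mul10_plain(uint32_t x) {
                  return x * 10;                /* preferable on Willamette */
              }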

          • Oh, and then there was the hardware bug in Intel's Rambus DRAM chipsets. That was a treat too.

          • by dryeo ( 100693 )

            I don't know. I had a P4D at 2.8GHz, a Northwood I believe, and swapped it (only the CPU) for a 1.86GHz C2D and experienced a speedup, close to double for compiling IIRC when using both cores.

        • Is this a paid-for post? "could scale to 10 GHz absolutely did not pan out" - it NEVER panned out, and nobody is anywhere near 10GHz. Even Apple's M1 has pushed that envelope as far as it can go. Memory and bus are also behind. Nvidia did not use P4 either - they went with a RISC-V variant, a descendant of IBM. BTW the SPARC chip was bloody good too. What Intel has is broken speculative/pre-execution pipelines that have errors and can leak, and they have not even announced IF it will ever be fixed. ARM has some legacy 'gift' speculation
          • You must take into account two things:
            - "Moore's Law" at that time had been alive and kicking for over 30 years
            - the increase in frequency over the preceding decade had been copious:
            Pentium: 60MHz in 1993, 100MHz in 1994, 120MHz in 1995, 200MHz in 1996, 233MHz in 1997
            Pentium II: 300MHz in 1997, 450MHz in 1998
            Pentium III: 600MHz in 1999, up to 1100MHz in 2000, 1400MHz in 2001
            Pentium 4: 2000MHz in 2001 as well
            Pentium 4 (Northwood, the one to have) reached 3GHz in 2002-2003

            Now, "Moore's Law" has slowed down lately, but back then it really held

        • "Pentium 4 wasn't Intel being at the end of the line for what they could do with an old technology - they made a bet on a completely new technology"

          Everything after Pentium 4 was based on the Pentium M fork of the Pentium III from prior to the development of the P4. That's the definition of "end of the line".
          Yonah was pretty much a mobile core that beat the desktop P4 at everything.

          "Intel's forecast that their processes could scale to 10 GHz did absolutely not pan out. "

          The Cedarmill EXEC stack was designed

          • by teg ( 97890 )

            "Pentium 4 wasn't Intel being at the end of the line for what they could do with an old technology - they made a bet on a completely new technology"

            Everything after Pentium 4 was based on the Pentium M fork of the Pentium III from prior to the development of the P4. That's the definition of "end of the line". Yonah was pretty much a mobile core that beat the desktop P4 at everything.

            This was exactly my point: the description "the late Pentium 4 days, when they were at the end of the line for what they could do with an old technology," which I replied to, was wrong. Going back to the old technology and building on it - instead of on their new, failing Netburst microarchitecture - is what secured Intel's dominance for a decade.

      • "That's going from something like a Core 2 Quad 6600 (still a very usable CPU even in 2021)"
        I'm running an i3-3120 or something like that (3+ GHz) and its usability is sometimes questionable.

    • Given that I've actually used micrometer scale chips, I guess that makes me... what... immortal?

    • Apple M1 uses 5nm
      AMD uses 7nm

      Intel has gotten slow and lazy, while its competitors are passing them by.

      The Core ix design is starting to show its age. Either Intel has something really big in its pipeline (like when they went from the Pentium to the Core processors, nearly 15 years ago), or they are just riding on the brand name, to the point where they will be so far behind that they cannot catch up - and will be relegated to the likes of the PowerPC and Alpha chips.

      • They can still patent troll and cut x86 apps from Apple's OS. No Rosetta for you. Windows on ARM is NEXT.

      • Not quite right. TSMC is at 5nm, Global Foundries is at 7nm. Apple and AMD are just buying the process and designing to tightly constrained rules given to them by the foundry. While these numbers are really just marketing labels at this point, the process development is really hard, and Apple and AMD shouldn't be getting credit for it. Also, unless someone has a really bright idea, getting another factor of 2 in real minimum feature size reduction is going to be very unlikely.
        • by DarkOx ( 621550 )

          and Apple and AMD shouldn't be getting credit for it.

          I don't think it's about credit so much as the ability to deliver product based on those processes and put it on the shelf, so to speak. I think everyone knows Intel engineers could probably modify core designs into something TSMC and Global Foundries could make on their respective 5 and 7nm processes fairly quickly. The reality is that the very vertically integrated Intel can't do that, for economic and supply-chain reasons.

        • Comment removed based on user account deletion
    • by Zak3056 ( 69287 )

      I know Intel is behind on fabs, but 14nm went into full production around 2014. How is that ancient?

      Moore's Law is that the number of transistors per area of silicon will double every 18 months. Framed in the context of the above, that's four entire cadences (pushing five) that Intel has missed. "Ancient" may be overstating the case slightly, but that is a HUGE miss for a company whose entire business model is built on that idea.
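
      Back-of-the-envelope, assuming the 18-month cadence stated above:

          #include <math.h>
          #include <stdio.h>

          /* Doublings "owed" under an 18-month cadence, 2014 to 2021. */
          int main(void) {
              double years = 7.0;
              double doublings = years / 1.5;          /* ~4.67 cadences */
              printf("%.2f doublings -> %.0fx density\n",
                     doublings, pow(2.0, doublings));  /* prints ~25x */
              return 0;
          }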

    • by GuB-42 ( 2483988 )

      Current tech is TSMC's 5nm (Apple M1), which is 2 nodes finer than Intel's 14nm.

      It is ancient not because of the timescale, but because it is two generations late. It would be like selling a smartphone that only supports 3G as we are rolling out 5G.

      • Basically Intel sells 3G into a billion-unit market, where the competition can only produce 100 million 5G devices.

    • Intel was able to protect its effective monopoly for three decades in large part by being a half or full node ahead of the rest of the processor industry, roughly an 18 month lead. Now they are bringing up the rear by four or five of those 18 month cycles. Oops.

    • But Intel is not on 14nm. They are on 14++ +++ ++++ nm.
      (I thought their nanometers were better than other foundries' nanometers - I read somewhere that Intel's 7nm would be comparable to TSMC's future 5nm.)
      Of course, neither Intel nor TSMC has those technologies yet (5nm for TSMC, 7nm for Intel), and Intel's 10nm is apparently worse than their own 14++ +++ ++++ nm.

  • by DarkOx ( 621550 ) on Tuesday March 16, 2021 @12:40PM (#61165296) Journal

    A 19% IPC speedup is certainly noteworthy this late in the game. I wonder how consistent that is, or if it's a narrow benchmark on a fairly specific workload.

    The price doesn't look dear next to what AMD is asking these days, either.

    • Re: (Score:3, Informative)

      by mjdrzewi ( 1477203 )
      It is in quite specific cases. See the linked review: https://www.anandtech.com/show... [anandtech.com]
      In the SPECint2017 suite, we're seeing the new i7-11700K able to surpass its desktop predecessors across the board in terms of performance. The biggest performance leap is found in 523.xalancbmk, which consists of XML processing, at a large +54.4% leap versus the 10700K. The rest of the improvements range from +0% to +15%, with an average total geomean advantage of +15.5% versus the 10700K. The IPC advantage
      • AMD's current 6-core 5600X actually is very near to the new 11700K, but consuming a fraction of the power.

        This isn't what I saw at all.
        And "a fraction," while technically correct for literally any value, is certainly not normally used to mean 1/2.
        The 11700K appears to perform around 8% faster per clock than the 5600X. Paired with its higher clock, that gives it about a 15% advantage.
        This was in an average of the benchmarks I looked at.

        Can you show otherwise?
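
        For the record, here's the arithmetic behind that 15% figure (the boost clocks are advertised single-core numbers, so treat them as illustrative rather than measured):

            #include <stdio.h>

            /* ~8% per-clock edge times the clock ratio gives the composite. */
            int main(void) {
                double ipc_ratio  = 1.08;   /* 11700K vs 5600X, per clock */
                double clk_11700k = 4.9;    /* GHz, advertised boost (assumed) */
                double clk_5600x  = 4.6;    /* GHz, advertised boost */
                printf("overall: +%.0f%%\n",
                       (ipc_ratio * clk_11700k / clk_5600x - 1.0) * 100.0);  /* ~15% */
                return 0;
            }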

        • Apparently asking someone to back up an assertion in the face of conflicting information is trolling.
          I see a lot of people being accused of "defending" Intel in this thread, but at least that defense appears to be on the merits. If you have to resort to downmoderation of inconvenient questions, I suspect you're just some sad little political creature.
    • by ChoGGi ( 522069 )

      Wait till you see the power usage for AVX-512 :)

  • At least the crap Intel produces has gotten somewhat cheaper. But there really is no sane reason to buy their stuff for _any_ application these days.

    • by Tailhook ( 98486 )

      But there really is no sane reason to buy their stuff for _any_ application these days.

      AVX512 is a thing. If you need it, AMD won't do.

      • Re:Great, more crap (Score:4, Interesting)

        by CaptainLugnuts ( 2594663 ) on Tuesday March 16, 2021 @02:13PM (#61165596)
        If your workload is SIMDable like that, a cheap GPU will crush it.
        • That depends entirely on the workload.
          I have written both GPU kernels and AVX-512 code.
          In bulk, it's not even a question: even an Intel GPU will lay waste to it.
          However, the latency of initiating a GPU compute kernel is the computer-time equivalent of a geological era.
          If your workload needs AVX-512 code for calculations used sparingly, alongside other dynamic calculations, you're going to find that your workload doesn't favor a GPU at all.
          • True, but if you're only sprinkling in AVX-512 code, it's not a big win either.
            • Depends on the rate of "sprinkling," as it were.
              You can have relatively heavy AVX code in a reactive loop, which is going to be more flexible than GPU kernels.
              AVX is definitely no replacement for GPUs, and it's definitely not the right tool if you have a trillion vectors you want to crunch with a well-defined pipeline of kernels; but if you're looking for vector acceleration in tight loops, it's quite useful.
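
              The kind of thing I mean, roughly (a minimal sketch: assumes AVX-512F hardware, a compiler flag like -mavx512f, and that n is a multiple of 16; the function name is mine):

                  #include <immintrin.h>
                  #include <stddef.h>

                  /* Sum a float buffer inline in a reactive loop: no kernel
                     launch, no host/device copies, latency is a few cycles. */
                  float sum_avx512(const float *p, size_t n) {
                      __m512 acc = _mm512_setzero_ps();
                      for (size_t i = 0; i < n; i += 16)
                          acc = _mm512_add_ps(acc, _mm512_loadu_ps(p + i));
                      return _mm512_reduce_add_ps(acc);  /* horizontal sum */
                  }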
        • by Tailhook ( 98486 )

          If your workload is SIMDable like that a cheap GPU will crush it.

          Maybe you should share your brilliant insight with AMD before they waste any more time putting AVX512 into Zen 4.

          Or maybe you just don't know what you're talking about.

          • I've been specializing in parallel processing, heterogeneous computing, parallel algorithms, and down-to-the-metal optimization since the early 90's. I know what the hell I'm talking about.
      • AMD wisely decided to spend those transistors on something more useful.

        • by Tailhook ( 98486 )

          AMD wisely decided to spend those transistors on something more useful.

          Also, AMD will deliver AVX512 in Zen 4 because they've run out of useful things to do with transistors.

          • Shutting down Intel's last remaining talking point is useful enough for AMD. Not sure about the rest of us.

    • Comment removed based on user account deletion
    • If you need a handful of processors, you don't.
      On the other hand, if you need 100,000 or more of them delivered yesterday, Intel is (unfortunately) the only game in town.

  • I thought threads were independent of cores, while cores and processes were more aligned. I guess this must have changed since I last looked at these things. And do most apps make use of multiple cores effectively now? Not really a hardware person, even if I try to keep a surface-level overview every few years. For full disclosure, my newest machine has an AMD processor.
    • by Anonymous Coward

      Threads have to do with the number of logical processors presented. You could think of it as the number of decoders and register sets, really. Processes don't really align to cores, except in the sense that these multi-thread-per-core MIMD designs are not able to run two threads on one core if both of those threads have instructions needing the same underlying resource close to each other, temporally speaking - because these are not single-cycle machines.

      If instruction XYZ needs the ALU and takes 3 cycles, then if

      • So, per my own understanding, threads are just time slicing made accessible by languages higher-level than assembler, with processor optimizations for them. I've written assembler to time-slice, a long time ago, granted on x86 chips. And processes don't time-slice, with the exception of what the OS forces for resources it needs (unless it is a real-time system). And each core has its underlying architecture for processing, like you say: the registers, memory, instruction sets, etc. So my confusion
        • I've kinda always felt that Intel's threading has been underrated lately. I've liked AMD for a while - it's my personal choice. But networking early AMD-based machines with comparable Intel-based machines really showed me how tight Intel's threading was. Benchmarks be darned.
        • by brm ( 100455 )

          Threads are a program "thread of control," with several possible implementations. Time slicing is just one possible implementation. Explicit transfer of control is another (cooperative multithreading). Another is a separate processor for each thread/process. "Processes" are generally threads with different ownership/privileges that usually can't be trusted to cooperate.

          Simultaneous MultiThreading (SMT) or Hyperthreading cores have separate thread-local instances of some processor state (some sort of
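
          To make the distinction concrete, here's a minimal POSIX sketch (my own illustration): the program only asks for threads of control; whether they time-slice on one core, spread across cores, or pair up on SMT siblings is up to the kernel and the hardware.

              #include <pthread.h>
              #include <stdio.h>
              #include <unistd.h>

              /* Four threads of control; the scheduler decides how they map
                 onto the machine's logical CPUs. */
              static void *work(void *arg) {
                  printf("thread %ld running\n", (long)arg);
                  return NULL;
              }

              int main(void) {
                  pthread_t t[4];
                  for (long i = 0; i < 4; i++)
                      pthread_create(&t[i], NULL, work, (void *)i);
                  for (int i = 0; i < 4; i++)
                      pthread_join(t[i], NULL);
                  printf("logical CPUs: %ld\n", sysconf(_SC_NPROCESSORS_ONLN));
                  return 0;
              }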

  • That puts their integrated graphics in line with AMD's Vega. Nothing I'd want to game with, but it'll do in a pinch, I guess. Both still lose out to a GT 1030, which is a video card meant to add an extra display port to a PC with only one.

    It's kind of too bad; I wish they could at least hit GTX 660 levels of performance on integrated graphics. It would really supercharge PC gaming if you could do entry-level gaming on them. But it would also probably bite into sales of discrete GPUs, a market Intel is now eyeing.
    • Comment removed based on user account deletion
      • Comment removed based on user account deletion
      • I own one in a laptop.
        Well, not the GH that is in the NUC, but the GL in the laptop version.
        I bought it for no other reason than to support the venture... I liked the idea of the better AMD GPU on-die.

        That being said, its performance against even a cheap discrete GPU is a joke - but not because the GPU sucks. It's because it shares a power and thermal domain with the CPU, and there is simply no way to tax it without throttling, short of liquid nitrogen.
  • The 11th gen offerings from Intel contain absolutely nothing I want.
