Intel Launches 11th-Gen Rocket Lake-S CPUs (venturebeat.com)
The new generation of Intel Core CPUs is here. Intel is using a new architecture on its ancient 14nm process to power the 11th-generation Rocket Lake-S processors. From a report: That results in some significant performance improvements, but it also means that Intel can only fit 8 cores on its flagship Core i9-11900K. That sacrifice in core count looks bad compared to the 12-core AMD Ryzen 9 5900X or even the last-gen 10-core i9-10900K. But Intel is also promising massive improvements to efficiency that should keep the Rocket Lake-S parts competitive -- especially in gaming. Rocket Lake-S CPUs launch March 30. The $539 Core i9-11900K has 8 cores and 16 threads with a single-core Thermal Velocity Boost of 5.3GHz and a 4.8GHz all-core boost. The slightly more affordable $399 i7-11700K boosts up to 5GHz, and the i5-11600K is $262 with 6 cores at a 4.9GHz boost.
While the lack of cores is going to hurt Rocket Lake-S CPUs in multi-threaded applications, Intel claims that its 19% improvement to instructions per clock (IPC) will make up much of the difference. The UHD graphics processor in the CPUs also delivers 50% better performance than last generation. Of course, Intel is focusing on games because that is where its processors remain the most competitive versus AMD. And that should continue with its Rocket Lake-S chips. These high-clocked parts with improved performance should keep up with and even exceed AMD's Zen 3 chips in certain games, like Microsoft's Flight Simulator (according to Intel).
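A rough back-of-the-envelope check (my own arithmetic, not from the report, and ignoring memory and scheduling effects): if multi-threaded throughput scales roughly with core count times IPC at a comparable all-core clock, the claimed 19% IPC gain nearly, but not quite, offsets the two cores given up versus the i9-10900K:

    \frac{\mathrm{perf}_{11900K}}{\mathrm{perf}_{10900K}} \approx \frac{8 \times 1.19}{10 \times 1.00} \approx 0.95

Single-threaded work, where core count drops out, is where the IPC gain should show up in full.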
Such new, much wow (Score:3)
I wonder how easy it will be to implement Spectre-class data capture code with those new shiny ones...
Re: (Score:3)
Indeed. The huge IPC improvement may involve the sort of design compromises that Intel has been guilty of in the past.
Re: (Score:3)
For quite a while Intel was only "ahead" of AMD because they just did not care about the security of their customers. Spectre/Meltdown-type attacks were predicted a long time ago at the relevant CPU design conferences. AMD was careful enough to avoid Meltdown and make Spectre so hard that nobody really knows whether it can work in practice on their CPUs. Intel just did not care and got better speed than AMD pretty much by defrauding their customers.
Re: (Score:3)
Just as complex as before, which has rendered the exploit completely useless up until this point.
Is it fixed yet? (Score:2)
Ancient 14nm? (Score:1)
Re: (Score:3)
7 years is an eternity in chip time. Think about how outdated a year 2000 chip was in 2007. That's going from something like a Core 2 Quad Q6600 (still a very usable CPU even in 2021) back to a 600MHz Pentium III (not usable with modern software at all). Go 7 years before that and you were looking at a 486.
So yeah, 7 years is a very long time to be refining the same basic design AND using the same process. Especially so when competitors like Apple are making chips on the 7nm process (two process improvements ahead).
Re: (Score:2)
If a CPU from 2007 is still usable in 2021 (14 years later), then 7 years isn't an eternity in "chip time."
Re: (Score:2)
It's only usable if you don't care about power consumption. Somebody must be defending a recent decision to stick with Intel.
Re: (Score:2)
The Q6600 I had before this i5 used slightly less power. You're perhaps thinking of Netburst chips that were slow (high GHz numbers though) and power hungry.
Re: (Score:2)
Re: (Score:2)
Well, I think part of it is that development has slowed considerably compared to what we were used to in the 90s and 00s. But the fact remains that Apple and AMD are 1-2 generations ahead in terms of die shrinks.
Re: (Score:2)
Apple is not making chips with a 7nm process. Apple's A14 and M1 are both on a 5nm process, as is Qualcomm's latest (Snapdragon 888 5G). AMD and Nvidia are mostly at 7nm, although AMD's I/O chiplets are 14nm.
14nm for a 2021 CPU pretty much screams "process failure".
Re: (Score:2)
Intel fanbois always say that. There's not much evidence for it, particularly since Intel stopped talking about its achieved gate density a while ago. One can infer that that metric doesn't look good for them.
Re: (Score:2)
Intel fanbois always say that. There's not much evidence for it, particularly since Intel stopped talking about its achieved gate density a while ago. One can infer that that metric doesn't look good for them.
Not only Intel fanboys. However, since the competition has moved on to 5 nm (Apple) and 7 nm (AMD), that still leaves Intel far behind these days.
Re: (Score:2)
Intel hasn't brought EUV online yet, that's a massive fail. The tick tock clock is tick tick ticking away until that inevitable day when Intel finally has to give up on its bespoke fabs like everybody else.
Re: (Score:2)
nm has been a marketing term since 28nm (I believe); it doesn't refer to gate length anymore.
Re: (Score:2)
Re: (Score:2)
According to this article, everything under 32nm is a very approximate measurement, and mostly marketing these days.
https://prog.world/7-nm-proces... [prog.world]
The new measurement is fairly complicated, because the fundamental structure of transistors has changed in newer chips.
This sort of reminds me of when the latest trend was "bits." We had 32-bit processors, which were obviously way better than 16-bit processors. Then marketers got hold of it, and they started claiming 64 bits, then 128 bits, but only for very selective
Re: (Score:1)
Re: (Score:2)
If they start claiming more watts = better CPU, I quit.
Re: (Score:2)
Re: (Score:2)
Besides their 5nm chips, Apple is still building and shipping iPad, iPad mini, iPhone XR, and iPhone 11 units with the A12, A12X, and A13, which are 7nm.
Re:Ancient 14nm? (Score:4, Interesting)
So yeah, 7 years is a very long time to be refining the same basic design AND using the same process. Especially so when competitors like Apple are making chips on the 7nm process (two process improvements ahead).
Intel seems to be sort of where they were back in the late Pentium 4 days, when they were at the end of the line for what they could do with an old technology and were being bested by competitors as a result.
First, Apple is using 5 nm [anandtech.com] for their latest A14 (iPhone) and M1 (low end Macs) CPUs - not 7 nm.
Second, Pentium 4 wasn't Intel being at the end of the line for what they could do with an old technology - they made a bet on a completely new technology - the Netburst microarchitecture [wikipedia.org]. This had a lot of new stuff and initially good performance. Intel's forecast that their processes could scale to 10 GHz absolutely did not pan out. Thus, they had to revert to their old architecture in a form that had been evolved for use in laptops while their Netburst architecture was their focus for servers and desktops.
This old architecture - as opposed to their new, troubled one - was then turned into Intel's Core microarchitecture [wikipedia.org]. This led to a decade of Intel domination of the laptop, desktop and server CPU markets - and that was based on going back to their old architecture, not on their new.
This domination might even be one of the reasons why Intel is in so much trouble now. Sure, they have massive process issues - but they've also seemingly spent most of their effort on market segmentation and creating an insane amount of SKUs as opposed to general improvements and a simple, easy to understand product line. Got to keep companies willing to pay top dollar for the highest end SKUs. When competition then adds a lot of features in silicon that is missing from pretty much the entire Intel range, they risk their market share being eaten from below.
Re:Ancient 14nm? (Score:4, Interesting)
Actually, the first Netburst processor was a disaster. The Willamette core ran non-P4-optimized code really poorly because it had a very long pipeline and small L1 caches. It also had no barrel shifter, so a lot of code optimized for the original Pentium, which turned multiplies into shifts and adds, ran slow as molasses.

Then there is the fact that it only ran optimally at all if you had a motherboard with dual-channel Rambus DRAM, while RDRAM was scarce and expensive and those motherboards cost a fortune as well. Instead of offering motherboards with much cheaper and more available DDR SDRAM, Intel deliberately released motherboards with a single channel of much slower, older SDRAM. So the performance was indeed poor for most people.

Things only got better once motherboards with DDR SDRAM became widely available, like the ones that used VIA chipsets. The CPU also improved with the next iteration of the Netburst family, the Northwood core, which scaled better and had larger L1 caches, and then Intel released its own DDR SDRAM motherboards too. Even that was short-lived, since the P4 core Intel released afterwards was Prescott, which had awfully high power consumption and didn't clock that much higher. It was a hog. So one could argue that, of the whole Pentium 4 family, only Northwood was any good.
Re: (Score:2)
Oh and then there is the hardware bug on Intel Rambus DRAM chipsets. That was a treat too.
Re: (Score:2)
I don't know. I had a P4D at 2.8GHz, a Northwood I believe, and swapped it (only the CPU) for a 1.86GHz C2D and experienced a speedup, close to double for compiling IIRC when using both cores.
Re: (Score:2)
Re: (Score:2)
You must take into account two things:
- "Moore's Law" at that time had been alive and kicking for over 30 years
- the increase in frequency over that decade was copious
Pentium: 60MHz in 1993, 100MHz in 1994, 120MHz in 1995, 200MHz in 1996, 233MHz in 1997
Pentium II: 300MHz in 1997, 450MHz in 1998
Pentium III: 600MHz in 1999, up to 1100MHz in 2000, 1400MHz in 2001
Pentium 4: 2000MHz in 2001 as well
Pentium 4 (Northwood, the one to have) reached 3GHz in 2002-2003
Now, Moore's Law has slowed down lately, but it really held
Re: (Score:2)
"Pentium 4 wasn't Intel being at the end of the line for what they could do with an old technology - they made a bet on a completely new technology"
Everything after Pentium 4 was based on the Pentium M fork of the Pentium III from prior to the development of the P4. That's the definition of "end of the line".
Yonah was pretty much a mobile core that beat the desktop P4 at everything.
"Intel's forecast that their processes could scale to 10 GHz did absolutely not pan out. "
The Cedarmill EXEC stack was designe
Re: (Score:2)
"Pentium 4 wasn't Intel being at the end of the line for what they could do with an old technology - they made a bet on a completely new technology"
Everything after Pentium 4 was based on the Pentium M fork of the Pentium III from prior to the development of the P4. That's the definition of "end of the line". Yonah was pretty much a mobile core that beat the desktop P4 at everything.
This was exactly my point. That the description "the late Pentium 4 days, when they were at the end of the line for what they could do with an old technology" which I replied to was wrong, and that going back to the old technology and building on that - instead of their new, failing Netburst microarchitecture - was what secured Intel dominance for a decade.
Re: (Score:2)
"That's going from something like a Core 2 Quad 6600 (still a very usable CPU even in 2021)"
I'm running a i3-3120 or something like that (3+ GHz) and its usability is sometimes questionable.
Re: (Score:2)
Given that I've actually used micrometer scale chips, I guess that makes me... what... immortal?
Re: (Score:2)
Apple M1 uses 5nm
AMD uses 7nm
Intel has gotten slow and lazy, while its competitors are passing them by.
The Core ix design is starting to show its age. Either Intel has something really big in its pipeline (like when they went from the Pentium to the Core processors, nearly 15 years ago), or they are just riding on brand name, to the point where they will be so far behind that they cannot catch up and will be relegated to the likes of the PowerPC and Alpha chips.
they can still patent troll and cut X86 apps apple (Score:2)
They can still patent troll and cut x86 apps from Apple's OS. No Rosetta for you. Windows on ARM is NEXT.
Re: (Score:1)
Re: (Score:2)
and Apple and AMD shouldn't be getting credit for it.
I don't think it's about credit so much as the ability to deliver product based on those processes and put it on the shelf, so to speak. I think everyone knows Intel engineers could probably modify core designs into something TSMC and GlobalFoundries could make on their respective 5nm and 7nm processes fairly quickly. The reality is that the very vertically integrated Intel can't do that, for economic and supply-chain reasons.
Re: (Score:2)
Re: (Score:2)
I think you thought right.
Re: (Score:2)
Revisionist history. No one wanted to base products on Alpha because (1) 64 bit was not yet important, (2) Alpha has the most threadbare instruction set ever, and (3) power consumption was absolutely nuts. Alpha was a "technology failure" in addition to its other failures. Compaq bought DEC to acquire their talent, not their Alpha processors.
Re: (Score:2)
So uh yeah sure you meant heat, not Watts
This is so painful to read I winced.
Power must be dissipated, and it's dissipated as heat. How hot the heatsink gets is a function of the power used by what it is cooling and how quickly it can shed that heat passively and actively.
Energy per unit of time is power, and that power shows up as heat. This goes for the heating part and for the cooling part.
The difference is temperature.
So I'm guessing you meant temperature?
Re: (Score:2)
Yes I knew some OCD type would go there. Like fishing in a barrel.
There's no OCD involved here: you try to argue that it doesn't use a lot of power, but then say it generates a lot of heat.
Heat output is power. If something generates a lot of heat, it's because it uses a lot of power.
However, if you meant temperature, then that's not a function of power alone, but of the ability to dissipate that power.
These are important distinctions.
None of this has jack shit to do with the Alpha being a great chip in its day, or why RISC is not a flawed architecture because it is "thread bare", nor with it consuming an "absolutely nuts" amount of power.
Sure, but if you say shit as dumb as what you said above, how are we supposed to give your opinion on the CPU the attention you think it deserves?
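For reference, the steady-state relationship both posters are circling (a standard first-order thermal model, not something stated in the thread) is

    T_{\mathrm{junction}} \approx T_{\mathrm{ambient}} + P_{\mathrm{dissipated}} \cdot R_{\theta}

so the same number of watts can produce very different temperatures depending on the thermal resistance R_theta of the heatsink and airflow, which is exactly the power-versus-temperature distinction being argued here.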
Re: (Score:2)
It was the CPU to get for those who wanted real power. However, it got crowded out by the newer 64-bit chips (UltraSPARC, PowerPC, Itanium), and then Intel made x86-compatible Core 2 chips that were 64-bit as well, while AMD gained a good foothold with its 64-bit x86 chips. All these specialized CPUs for mainframes, minicomputers, and high-end workstations (mostly Unix-based) died out or diminished rather quickly in favor of x86-based servers and desktop workstations.
Now performance per watt was an important factor
Re: (Score:2)
I know Intel is behind on fabs, but 14nm went into full production around 2014. How is that ancient?
Moore's Law is that the number of transistors per area of silicon will double roughly every 18 months. When framed in the context of the above, that's four entire cadences (pushing five) that Intel has missed. "Ancient" may be overstating the case slightly, but that is a HUGE miss for a company whose entire business model is built on that idea.
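To spell out that arithmetic (mine, using the poster's 18-month figure): 2014 to 2021 is seven years, or

    \frac{7\ \mathrm{years}}{1.5\ \mathrm{years\ per\ doubling}} \approx 4.7\ \mathrm{doublings}, \qquad 2^{4.7} \approx 26\times

so on that cadence, transistor density "should" be roughly 26 times higher than it was when 14nm launched.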
Re: (Score:2)
Current tech is TSMC's 5nm (Apple M1), which is 2 nodes finer than Intel's 14nm.
It is ancient not because of the timescale, but because it is two generations late. It would be like selling a smartphone that only supports 3G as we are rolling out 5G.
Re: (Score:2)
Basically Intel sells 3G in a billion-sized market where the competition can only produce 100 million 5G devices.
Re: (Score:2)
Intel was able to protect its effective monopoly for three decades in large part by being a half or full node ahead of the rest of the processor industry, roughly an 18 month lead. Now they are bringing up the rear by four or five of those 18 month cycles. Oops.
Re: (Score:2)
But Intel is not on 14nm. They are on 14++ +++ ++++ nm
(I thought their nanometers were better than other foundry nanometers - I read somewhere about Intel's 7nm as comparable to TSMC's future 5nm).
Of course, neither Intel nor TSMC have those technologies yet (5nm for TSMC and 7nm for Intel), and Intel's 10nm is apparently worse than their own 14++ +++ ++++ nm.
19% IPC boost (Score:3)
A 19% IPC speedup is certainly noteworthy this late in the game. I wonder how consistent that is, or whether it's a narrow benchmark on a fairly specific workload.
The price doesn't look dear next to what AMD is asking these days, either.
Re: (Score:3, Informative)
In the SPECint2017 suite, we’re seeing the new i7-11700K able to surpass its desktop predecessors across the board in terms of performance. The biggest performance leap is found in 523.xalancbmk which consists of XML processing at a large +54.4% leap versus the 10700K. The rest of the improvements range in the +0% to +15% range, with an average total geomean advantage of +15.5% versus the 10700K. The IPC advantag
Re: (Score:1)
AMD’s current 6-core 5600X actually is very near to the new 11700K, but consuming a fraction of the power.
This isn't what I saw at all.
And "a fraction" while technically correct for literally any value, is certainly not normally used for 1/2
The 11700 appears to perform around 8% faster per clock than the 5600X. Paired with its higher clock, gives it about a 15% advantage.
This was in an average of benchmarks I looked at.
Can you show otherwise?
Re: (Score:2)
I see a lot of people being accused of "defending" Intel in this thread, but at least that defense appears to be on the merits. If you have to resort to downmoderation of inconvenient questions, I suspect you're just some sad little political creature.
Re: (Score:3)
Wait till you see the power usage for AVX-512 :)
Great, more crap (Score:1)
At least the crap Intel produces has gotten somewhat cheaper. But there really is no sane reason to buy their stuff for _any_ application these days.
Re: (Score:2)
But there really is no sane reason to buy their stuff for _any_ application these days.
AVX512 is a thing. If you need it, AMD won't do.
Re:Great, more crap (Score:4, Interesting)
Re: (Score:2)
I have written both GPU kernels and AVX512 code.
In bulk, it's not even a question. Even an Intel GPU will lay waste to it.
However, the latency for initiating a GPU calc kernel is the computer time equivalent of a geological era.
If your workload needs AVX512 code to do calculations sparingly along with other dynamic calculations, you're going to find that your workload doesn't favor a GPU at all.
Re: (Score:2)
Re: (Score:2)
You can have relatively heavy AVX code in a reactive loop that's going to be more flexible than GPU kernels.
AVX is definitely no replacement for GPUs, and it's definitely not the right tool if you have a trillion vectors you want to crunch with a well-defined pipeline of kernels; but if you're looking for vector acceleration in tight loops, it's quite useful.
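As a concrete illustration of the kind of tight loop being described, here is a minimal AVX-512 sketch in C (my own example, not from the thread; it assumes a CPU with AVX-512F and a compiler flag such as -mavx512f, and the simple fused multiply-add kernel is purely illustrative):

    #include <immintrin.h>
    #include <stddef.h>

    /* y[i] = a * x[i] + y[i], processing 16 single-precision floats per iteration. */
    void saxpy_avx512(float a, const float *x, float *y, size_t n)
    {
        __m512 va = _mm512_set1_ps(a);           /* broadcast a into all 16 lanes */
        size_t i = 0;
        for (; i + 16 <= n; i += 16) {
            __m512 vx = _mm512_loadu_ps(x + i);  /* unaligned 512-bit loads */
            __m512 vy = _mm512_loadu_ps(y + i);
            vy = _mm512_fmadd_ps(va, vx, vy);    /* vy = va * vx + vy */
            _mm512_storeu_ps(y + i, vy);
        }
        for (; i < n; ++i)                       /* scalar tail for the last n % 16 */
            y[i] += a * x[i];
    }

A loop like this runs with no launch latency and can sit inside branchy, reactive control flow, which is the flexibility argument above; for a billion elements pushed through a fixed pipeline, a GPU kernel still wins on raw throughput.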
Re: (Score:2)
If your workload is SIMDable like that a cheap GPU will crush it.
Maybe you should share your brilliant insight with AMD before they waste any more time putting AVX512 into Zen 4.
Or maybe you just don't know what you're talking about.
Re: (Score:2)
Re: (Score:2)
AMD wisely decided to spend those transistors on something more useful.
Re: (Score:2)
AMD wisely decided to spend those transistors on something more useful.
Also, AMD will deliver AVX512 in Zen 4 because they've run out of useful things to do with transistors.
Re: (Score:2)
Shutting down Intel's last remaining talking point is useful enough for AMD. Not sure about the rest of us.
Re: (Score:2)
Re: (Score:2)
This time round there's a global chip shortage.
Re: (Score:2)
If you need a handful of processors, you don't.
On the other hand, if you need 100,000 or more of them delivered yesterday, Intel is (unfortunately) the only game in town.
Cores and Threads (Score:2)
Re: (Score:1)
Threads have to do with the number of logical processors presented. You could think of it as the number of decoders and register sets, really. They don't really map one-to-one onto cores, in the sense that these multi-thread-per-core MIMD designs are not able to run two threads on one core at full speed if both of those threads have instructions needing the same underlying resource close to each other in time, because these are not single-cycle machines.
If instruction XYZ needs the ALU and takes 3 cycles, then if
Re: (Score:2)
Re: (Score:1)
Re: (Score:1)
Threads are a program "thread of control", with several possible implementations. Time slicing is just one possible implementation. Explicit transfer of control is another (cooperative multithreading). Another is a separate processor for each thread/process. "Processes" are generally threads with different ownership/privileges, that usually can't be trusted to cooperate.
Simultaneous MultiThreading (SMT) or Hyperthreading cores have separate thread-local instances of some processor state (some sort of
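A minimal C sketch of that thread/process distinction (my own illustration, assuming a POSIX system; the names are arbitrary): a thread shares the parent's address space, while a forked process gets its own copy, so a write made by the child process is not visible to the parent:

    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int counter = 0;

    static void *bump(void *arg)
    {
        (void)arg;
        counter++;                    /* same address space: the parent sees this write */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, bump, NULL);
        pthread_join(t, NULL);
        printf("after thread:  %d\n", counter);   /* prints 1 */

        pid_t pid = fork();
        if (pid == 0) {               /* child process: its own private copy of counter */
            counter++;
            _exit(0);
        }
        waitpid(pid, NULL, 0);
        printf("after process: %d\n", counter);   /* still 1 */
        return 0;
    }

Build with the -pthread flag; the point is only that "thread" and "process" differ in what state is shared, not in how the CPU schedules them.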
A 50% boost (Score:2)
It's kind of too bad; I wish they could at least hit GTX 660 levels of performance on integrated. That would really supercharge PC gaming if you could do entry-level gaming on them. But it would also probably bite into sales of discrete GPUs, a market Intel is now eyeing.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Well- not the GH that is in the NUC, but the GL in the laptop version.
Bought it for no other reason than to support the venture... I liked the idea of the better AMD GPU on-die.
That being said, its performance against even a cheap discrete GPU is a joke, but not because the GPU sucks: it shares a power and thermal domain with the CPU, and there is simply no way to tax it without throttling with anything short of liquid nitrogen.
Re: (Score:2)
Re: (Score:2)
As someone willing to wait for Ryzen 5950X (Score:1)
The 11th gen offerings from Intel contain absolutely nothing I want.