Intel's 13th-Gen 'Raptor Lake' CPUs Are Official, Launch October 20 (arstechnica.com)
Intel says it has made some improvements to the Raptor Lake CPU architecture and the Intel 7 manufacturing process, but the strategy for improving its performance is both time-tested and easy to understand: add more cores, and make them run at higher clock speeds. From a report: Intel is announcing three new CPUs today, each available with and without integrated graphics (as usual, the models without GPUs have an "F" at the end): the Core i9-13900K, Core i7-13700K, and Core i5-13600K will launch on October 20 alongside new Z790 chipsets and motherboards. They will also work in all current-generation 600-series motherboards as long as your motherboard maker has provided a BIOS update, and will continue to support both DDR4 and DDR5 memory.
Raptor Lake uses the hybrid architecture that Intel introduced in its 12th-generation Alder Lake chips last year -- a combination of large performance cores (P-cores) that keep games and other performance-sensitive applications running quickly, plus clusters of smaller efficiency cores (E-cores) that use less power -- though in our testing across laptops and desktops, it's clear that "efficiency" is more about the number of cores that can be fit into a given area on a CPU die, and less about lower overall system power consumption. There have been a handful of other additions as well. The amount of L2 cache per core has been nearly doubled, going from 1.25MB to 2MB per P-core and from 2MB to 4MB per E-core cluster (E-cores always come in clusters of four). The CPUs will officially support DDR5-5600 RAM, up from a current maximum of DDR5-4800, though that DDR5-4800 maximum can easily be surpassed with XMP memory kits in 12th-generation motherboards. The maximum officially supported DDR4 RAM speed remains DDR4-3200, though the caveat about XMP applies there as well. As far as core counts and frequencies go, the Core i5 and Core i7 CPUs each pick up one extra E-core cluster, going from four E-cores to eight. The Core i9 gets two new E-core clusters, boosting the E-core count from eight all the way up to 16. All E-cores have maximum boost clocks that are 400MHz higher than they were before.
Security (Score:3, Insightful)
Re: (Score:2)
Intel? Not at all. They hope their fanbois will not look and if something bad comes up will ignore it. So far that strategy has worked out well for them.
Re: (Score:2)
By 'fanbois' I'm assuming you're referring in kind to some 3-letter agency, since speculative execution became a 'feature' after the Clipper chip controversy got a little too hot...
Re: (Score:3)
While I think this is a nice conspiracy theory, I do not think it pans out. Intel did this all by themselves because really the only thing they have (well had) going was speed. Most people are stupid and cannot understand details, so all they looked at was speed when making a buying decision. Hence Intel optimized very hard for speed and ignored everything else. AMD did know about the risks of speculative execution at the same time as Intel did and decided to be a lot more careful (Ever noted that the publi
Re: (Score:2)
Agreed, but IF Intel themselves are still making hardware that is vulnerable, then it's not so much a theory anymore.
I hope not, and not just for speed's sake.
Re: (Score:2)
Agreed, but IF Intel themselves are still making hardware that is vulnerable, then it's not so much a theory anymore.
I hope not, and not just for speed's sake.
Agreed.
Re: (Score:2)
I don't understand these CPU-level security vulnerabilities. It seems to me that some Internet application using JavaScript in the browser doesn't have that kind of low-level CPU access to exploit such vulnerabilities. And if a piece of malware made it onto my computer to exploit the CPU, well since it made it to launch some process on my computer in the first place, I already lost, with or without CPU exploit. It also sounds like these exploits are extremely hard to weaponize, because you can't really targ
Re: (Score:2)
I don't understand these CPU-level security vulnerabilities. It seems to me that some Internet application using JavaScript in the browser doesn't have that kind of low-level CPU access to exploit such vulnerabilities.
Actually, it does. There are enough papers and texts on this. Don't be lazy; read up on the issue.
Re: (Score:2)
Don't be such a stuck-up. I'm just trying to make friendly conversation here.
Re: (Score:2)
That's what I thought too. Read the Spectre paper [spectreattack.com] (and behold the impressive quality of today's Javascript compilers).
Re: (Score:2)
If I must, I will produce a comprehensive list of AMD CPU vulnerabilities, including a notable faux pas [bleepingcomputer.com] they made when they claimed that retpoline would make their processors safe, where Arm and Intel produced architectural fixes that actually worked (whoops, turns out they were wrong there, and Intel was right).
Here is just a start [cipher.com].
Bored security researchers are now tearing into AMD's microarch looking for branch misprediction and spec
Re: (Score:2)
Please include practically working demonstration code for AMD. All I have ever seen for AMD is plausibility arguments and some estimates what amount of time would actually be needed for some of the attacks. None of these estimates made them practical. It is, of course, still important to know about them, because somebody can have an idea to make the attacks a lot more effective.
But I guess you lack the insight to actually see what is going on. Here is a hint: There is a massive difference between a known vu
Re: (Score:2)
Please include practically working demonstration code for AMD. All I have ever seen for AMD is plausibility arguments and some estimates what amount of time would actually be needed for some of the attacks. None of these estimates made them practical. It is, of course, still important to know about them, because somebody can have an idea to make the attacks a lot more effective.
Please include practically working demonstration code for Intel (that isn't Meltdown).
All Spectre code is essentially only PoC- which 1 link I provided contained. Seems you didn't read them.
I'd hate for you to run into information that causes you cognitive dissonance.
But I guess you lack the insight to actually see what is going on. Here is a hint: There is a massive difference between a known vulnerability, a vulnerability that has exploit code and a vulnerability that has _practical_ exploit code.
Lack insight? Horseshit. I have clarity, you have the cloudy fever dreams of a fanboi.
Here's a hint for you, shithead. There is an article on this very fucking site that contains links to 2 of my CVEs for the linux kernel.
You have no idea w
Re: (Score:2)
Ignorant? Fucking spare me your drooling drivel.
Re:Security (Score:4, Funny)
What they lack in performance they make up for in winter heating. They started measuring the TDP in BTUs this year.
Re: (Score:1)
For years I ran antminers during the winter to offset the heating bill, and then would sell them off in the spring to get the latest version again in the fall.
Electricity is relatively cheap in WA state; it actually lowered my heating bill versus the natural gas heater. Just opened up the central heating return air duct in my garage and piped the exhaust from the antminers into the duct system. The heater would only come on if the temp dropped below 25F. I stopped doing it a couple years ago, but I had 50 amps ins
Re: (Score:2)
Just opened up the central heating return air duct in my garage and piped the exhaust from the antiminers into the duct system.
Who installed your HVAC system? Mickey Mouse? I find it hard to believe this would pass inspection anywhere.
Re: (Score:2)
What about the low-end, for gaming? I'm thinking i3-12100F right now, is there something on AMD's side that can compete on both price and power requirements? (i.e. around $150 CAD and around 60 watts)
I need a new motherboard and RAM either way, so right now I don't even care if I go with AMD or intel.
Re: (Score:2)
Re: (Score:2)
It's all partial patches, and some of the patches have significant performance penalties. As far as I'm aware, they've done nothing to actually fix the problem, as a real fix would likely also have performance problems.
before anyone says it (Score:5, Interesting)
Yes, there are applications that can use the amount of power being offered by a new generation of processors, and yes, we are all glad your Core 2 Duo runs everything you want on some version of Linux
Re: (Score:2)
Seems like most of the highly parallel tasks that a newer generation of processors would do well at are handled even better by GPU cores if the code is optimized for them. And Intel quit that game before they even really got started.
Re: (Score:2)
It's a wonder they cannot mount a serious challenge to NVidia even with the sky-high prices NVidia is charging.
Re: (Score:2)
So... RISC-V?
Re: (Score:2)
RISC-V is currently just a toy and there is no indication it will ever be anything more than that. It needs to evolve a whole lot more to be relevant and they're already way behind. I don't see it happening.
RISC-V is a market where a smart (possibly Chinese) player could come in, re-engineer it, and release a cheap competitive full platform based on it. The problem is, that will take a lot of design innovation and the most likely player to pull something off like this (China) is not known for those skills.
TSMC is too busy raking in money to bother with RISC-V. They like the current situation.
So you don't follow slashdot.
https://science.slashdot.org/s... [slashdot.org]
https://techport.nasa.gov/view... [nasa.gov]
https://www.sifive.com/press/n... [sifive.com]
Re: (Score:2)
Or what are we measuring the "sucks" against?
AMD's Ryzen CPUs have been pretty good since at least the 3rd generation, when they got rid of NUMA after the majority of software developers didn't seem to want to adopt NUMA awareness into their code, leading to performance issues in situations where there was a lot of concurrency.
Even though their newest Ryzen 7000 series seems to "
Re: (Score:2)
Because from anecdotal evidence, since Zen 2, none of the about 50 computers that I've built for customers with Zen 2 to Zen 3 CPUs have had issues. And certainly not my Ryzen 9 3900X which has been running pretty much 24/7 since Fall 2019.
Re: (Score:2)
AMD on servers sucks balls compared to Intel. Intel smashes them in performance. AMD chipsets are buggy as hell (USB, video, etc).
Dollar for dollar, AMD has always punked Intel, although admittedly the K6 was pretty bad at being a P2 until the /3. Intel has often had better single thread performance, but that rarely matters with servers. USB and video also rarely matter with servers. AMD has traditionally offered superior performance when the number of cores and/or processors has been high. What are you on about?
AMD Drivers are good now? (Score:2)
Re: (Score:2)
I don't use them either (mostly because of NVIDIA proprietary lock-in stuff that you'll find almost everywhere in professional applications). But the topic here is CPUs.
Chipsets are a fair criticism in this context, because you can't exactly run an AMD CPU with an Intel chipset. AMD chipset drivers do have some issues from time to time from what I read in the news. Compared to Intel chipset drivers, they'd arguably be worse. There's still improvement from my perspective thoug
Re: (Score:2)
Together with the predecessor Bulldozer and the successors Steamroller and Excavator, AMD did have a line of CPUs that wasn't very good, one I associate with a lot of user complaints about gaming performance (CPU-heavy games). From what I can tell this paved the way for Intel to be lazy and hardly make any improvements to their designs until AMD's Zen architecture
Re: (Score:2)
Making a CPU that could, in very, very limited workloads, perform unexpectedly well, while falling flat on its face on real-world workloads.
But ya, that era is absolutely what led Intel to rest on its laurels and get punked by Zen.
Re: (Score:2)
Zen on the other hand is pretty much universally useful.
Even Zen and Zen+ were fine enough in my book.
I worked with them myself and if you either didn't need a lot of concurrency or you went the extra mile to code in some CPU architecture awareness into your thread scheduling, where you're aware that if you put concurrent threads "too far apart" you'll be running into additional latencies, they r
Re: (Score:2)
Zen on the other hand is pretty much universally useful.
Zen3 is phenomenal. 1+2's NUMA architecture wasn't a great solution.
Even Zen and Zen+ were fine enough in my book.
Fine enough, sure. But the YMMV asterisk list was long when discussing the performance of the units.
AMD could have kept NUMA as far as I'm concerned, because from my perspective whether you lose performance doing it one way or gain a performance boost doing it another depends, well, on the perspective you're looking at it from. I draw a rough analogy here to RAM topologies: T-topology vs. daisy-chain. The latter does have performance advantages, where the (usually two) memory banks with the closest connection can be run at higher clocks and tighter timings, improving performance. But if you want a ton of memory in your system, then due to the higher latency of the other banks you'll have to settle for an overall lower performance than with T-topology.
Ooof. Disagree with you, there.
The cross-CCX latency was the cause of several large performance problems on some software projects I was working on. It was surprisingly bad for such a local interconnect.
If the CCXs had, say, 16 cores in them, I would have cared a lot less (of course all consumer CPUs would have had a single NUMA domain that
Re: (Score:2)
Of course if we go into the nitty gritty of the details of each situation, like many analogies, it quickly starts to fall apart.
Zen1's and Zen1+'s NUMA did lead to a lot of issues in practical application, there's no denying that.
Though I do argue that for the concurrency dependent consum
Re: (Score:2)
With Zen3 the shared L3 cache per CCD brought some huge improvements, where their CPUs were finally capable of beating comparable Intels even in gaming, without asking for anything from the side of the programmers as far as I can tell. Qualitatively there are still some interconnect issues between CCDs, which can become apparent in CPUs with more than one CCD, but it's much less of a problem than in previous generations.
There are, but having a central IO die was always going to have that trade-off, the trick was finding a sweet spot. 8 cores, I think, is pretty good for extant realistic multiprocessing workloads that aren't embarrassingly parallel.
The work I was doing (high performance IP traffic translation) had certain non-vectorable requirements (though mostly vectorable). While the Zen2 could handle disconnected workloads with admirable performance, as soon as you setup a pretty standard semaphore protected mailbox (F
Still has major issues... (Score:3)
Re: (Score:2)
Intel i3 for the win, no efficiency cores! (on the 12th gen, anyway)
More E-Cores than P-Cores??? (Score:2)
The Core i9 goes from 8 E-Cores to 16 E-Cores... and only 8 P-Cores???
The Core i7 has the same amount of E-Cores as it does P-Cores, 8 each...
With the Core i5 also having 8 E-Cores, but only 6 P-Cores, it follows the Core i9 wi
what's in a name (Score:2)
All these tech names are ridiculous. What is wrong with these people? Just use numbers.