AMD Launches Zen 4 Ryzen 7000 CPUs (tomshardware.com) 156
AMD unveiled its 5nm Ryzen 7000 lineup today, outlining the details of four new models that span from the 16-core $699 Ryzen 9 7950X flagship, which AMD claims is the fastest CPU in the world, to the six-core $299 Ryzen 5 7600X, the lowest bar of entry to the first family of Zen 4 processors. Tom's Hardware reports: Ryzen 7000 marks the first 5nm x86 chips for desktop PCs, but AMD's newest chips don't come with higher core counts than the previous-gen models. However, frequencies stretch up to 5.7 GHz -- an impressive 800 MHz improvement over the prior generation -- paired with an up to 13% improvement in IPC from the new Zen 4 microarchitecture. That results in a 29% improvement in single-threaded performance over the prior-gen chips. That higher performance also extends to threaded workloads, with AMD claiming up to 45% more performance in some of them. AMD says these new chips deliver huge generational gains over the prior-gen Ryzen 5000 models, with 29% faster gaming and 44% more performance in productivity apps. Going head-to-head with Intel's chips, AMD claims the high-end 7950X is 11% faster overall in gaming than Intel's fastest chip, the 12900K, and that even the low-end Ryzen 5 7600X beats the 12900K by 5% in gaming. It's noteworthy that those claims come with a few caveats [...].
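A quick back-of-the-envelope check of those numbers (my assumptions, not AMD's: the prior flagship 5950X boosts to 4.9 GHz, and IPC and frequency gains multiply):

    ipc_gain = 1.13              # claimed Zen 4 IPC uplift
    freq_gain = 5.7 / 4.9        # 5.7 GHz vs. the 5950X's 4.9 GHz boost
    print(ipc_gain * freq_gain)  # ~1.31, in the ballpark of the claimed 29%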
The Ryzen 7000 processors come to market on September 27, and they'll be joined by new DDR5 memory products that support new EXPO overclocking profiles. AMD's partners will also offer a robust lineup of motherboards - the chips will snap into new Socket AM5 motherboards that AMD says it will support until 2025+. These motherboards support DDR5 memory and the PCIe 5.0 interface, bringing the Ryzen family up to the latest connectivity standards. The X670 Extreme and standard X670 chipsets arrive first in September, while the more value-oriented B650 options will come to market in October. That includes the newly announced B650E chipset that brings full PCIe 5.0 connectivity to budget motherboards, while the B650 chipset slots in as a lower-tier option. The Ryzen 7000 lineup also brings integrated RDNA 2 graphics to all of the processors in the stack, a first for the Ryzen family.
Which one has the integrated graphics? (Score:2)
The Ryzen 7000 lineup also brings integrated RDNA 2 graphics to all of the processors in the stack, a first for the Ryzen family. These new iGPUs are designed to provide a basic display output, so they aren't suitable for gaming. However, this feature does improve Ryzen's positioning in several market segments.
After that, the author seems to have forgotten he even mentioned the terms graphics and GPU. The next mention of GPU is in the phrase "M.2 and GPU slots", followed by the title of an article on Intel's support for "Arc GPUs on Linux".
Re: (Score:2)
Something I hate a lot myself, because it leads to a lot of inaccurate advice on building computers, where people think they're smart for buying one of those cheaper SATA M.2 SSDs while expecting NVMe performance from them.
Come on, even LTT explained it correctly in 2017 https://www.youtube.com/watch?... [youtube.com], while Tom's Hardware's Paul Alcorn seemingly doesn't understand it and perpetuates the confusion.
Re: (Score:2)
will provide PCIe 5.0 to both the M.2 and GPU slots, while the standard B650 will only have 5.0 to the M.2 slot
Before you call someone computer illiterate, maybe read just a couple of additional words; you don't even need to read the entire sentence. There is nothing at all wrong, confusing, or technically inaccurate about what was said.
Why you bring up something completely unrelated to the topic of which PCIe version goes to which slots on the motherboard is beyond comprehension. *YOU*, sir, are the one sowing confusion.
Re: (Score:2)
I looked it up in the article myself; you are correct. It was quoted out of context.
Re: (Score:2)
All of the Ryzen 7000 chips will have a basic iGPU within them. Without a graphics card in your system, the iGPU will act as your basic display output. When you have a GPU installed, the iGPU will instead be used for various floating-point operations. Remember that RDNA2 specifies various parts within the graphics and compute array (GCA). You've got all kinds of programmable pipes that RDNA2 dictates, from shader cores (the counterparts of Nvidia's CUDA cores) to geometry processors; now the iGPU is more than likely going to very much skimp on the
Re:Which one has the integrated graphics? (Score:5, Interesting)
After that, the author seems to have forgotten he even mentioned the terms graphics and GPU.
That's because it's not worth mentioning. The only news here is that all Ryzen processors in the series now have an iGPU whereas previously only some of them did. There's nothing additional to say.
Re: (Score:2)
Re: (Score:2)
They didn't mention it again because it's not very interesting. It probably has about enough 3d performance to run the pipes screensaver. For instance on my Ryzen 3 budget laptop there is Vega graphics with either 2 or 3 cores, I forget. It struggles to play even ancient games. But it's 100% fine for daily non-gaming use. It has enough acceleration features for desktop acceleration and the like. If you hope to do anything else, you will need a real GPU. From what I can tell RDNA is only about 60% faster than Vega (per core) so if they are only including one or two cores then this is still going to hold true. Maybe you'll be able to play some really old low-poly games like Quake smoothly.
Re: (Score:2)
They didn't mention it again because it's not very interesting. It probably has about enough 3d performance to run the pipes screensaver. For instance on my Ryzen 3 budget laptop there is Vega graphics with either 2 or 3 cores, I forget. It struggles to play even ancient games. But it's 100% fine for daily non-gaming use. It has enough acceleration features for desktop acceleration and the like. If you hope to do anything else, you will need a real GPU. From what I can tell RDNA is only about 60% faster than Vega (per core) so if they are only including one or two cores then this is still going to hold true. Maybe you'll be able to play some really old low-poly games like Quake smoothly.
That slow? I was under the impression that the AMD iGPUs could run any open-source game smoothly at low to medium graphics settings. FWIW, my fastest CPU is just an i3. I forget what generation, but it's fine enough for most open-source games, a notable exception being 0 AD [play0ad.com]. No Steam for me. Most of the browsing is done on a Pentium NUC.
Some of those processors (Score:2)
Will Wait For Reviews Other Than Tom's (Score:2)
Are the CPU bugs fixed Yet? (Score:2)
Fuck them (Score:2)
Years and years of "cache hierarchy errors" in both 5000- and 3000-series high-end CPUs.
Wake me when they have a CPU that works reliably instead of crashing quickly.
And for those of you out of the loop, just google the hundreds of forum threads and thousands of posts about it. After 10 years with Intel, I splurged on a 5950X, only to be given a kick in the nuts.
Yeah yeah, speed, frequency, IPC, backdoors, Intel Management Engine, unfair practices, blah ... I just want the computer to not randomly crash, is that too much to ask?
Re:Fuck them (Score:4, Interesting)
Maybe you got unlucky with a bad chip? I have a 3950X; it runs mprime (GIMPS), and after 40 hours of uninterrupted computation at full power, the final checksum (the Lucas-Lehmer residue) is always correct. This does not necessarily mean all AMD chips are as reliable as mine, but certainly not all of them are as bad as yours. Maybe your problem is not the CPU but the RAM; a simple memtest86 run is known to pass even on some kinds of faulty RAM.
Re: (Score:2)
it's a known issue with all Zen CPUs (all of them)
and the stress test you mention is irrelevant
the problem manifests when the CPU is idle or near idle
Re: (Score:2)
Wake me when they have a CPU that works reliably instead of crashing quickly.
Crash? Sounds like you got quite unlucky. I did a quick Google search, and while I found a whole series of general complaints, the overwhelming majority not only seem unrelated to any hardware-specific problem, they also pale in comparison to complaints about Intel. So ... yeah.
WAKE UP! THINGS ARE FINE! RMA YOUR DODGY PIECE AND MOVE ON!
Underclock? (Score:3)
How low can it go? AMD has been developing Zen 4 while electricity prices have doubled around the world.
The payback could be very quick on this generation if the power budget is tight.
Most of the time my computer does nothing that an RP4 couldn't handle, but then I need to compile or render something that takes hours. It's silly to spin at 45W most of the time, but buying a 90W or 125W part sometimes makes sense.
Our current architectures simply don't let me have my data and memory on multiple CPUs of different architectures yet, with the narrow exception of CUDA for special tasks. Eventually we'll get there.
Europeans may also be very interested in this dichotomy. Imagine the RP4 shortage when they figure out an RP4K runs at 9W and can handle all most people need!
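To put rough numbers on that payback idea (all assumed: a 36 W near-idle saving, the machine on 20 hours a day, 0.40 EUR/kWh, roughly 2022 European prices):

    watts_saved = 45 - 9                          # assumed near-idle delta
    kwh_per_year = watts_saved * 20 * 365 / 1000  # ~263 kWh per year
    print(kwh_per_year * 0.40)                    # ~105 EUR saved per year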
Re: (Score:2)
Anyway, in the same article they mention that in CPU-Z (a CPU info tool) they found the same CPU codenames but without an "X". That usually means lower-power versions of the CPUs, so it's likely AMD will eventually release them.
Re: (Score:2)
Maybe he meant the whole system. My potato idled at 100W last I checked, not counting the monitor. It's an FX-8350; it was running SLI at the time and now it's not, but I also doubled the RAM...
Re: (Score:2)
My complete 8-core Ryzen system draws 16 watts from the wall at idle to normal load. Recent AMD motherboards, processors, and GPUs are even better; time to measure again.
Comment removed (Score:4, Funny)
Re: (Score:2)
You've never run folding@home, encoded something with handbrake, or compiled the Linux kernel?
Some of those hit I/O limitations... How about "regenerate the distro's /etc/ssh/moduli"? Send them off into Chongo's tar pit. :)
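(For reference, on a reasonably recent OpenSSH that regeneration is a two-step job, something along the lines of ssh-keygen -M generate -O bits=3072 moduli.candidates followed by ssh-keygen -M screen -f moduli.candidates moduli; the exact flags vary by OpenSSH version, and the screening step will happily peg every core you give it for hours.)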
Re: (Score:2)
Some of those hit I/O limitations...
M.2 fixes that. Plus you're going to want 32GB of RAM at minimum.
Re: (Score:2)
I've got a Ryzen 9 3900X which I bought in 2019. It's been running flawlessly ever since then.
However, its high CPU thread count has not been as useful to me as I thought it would be. Because in practice, if you have some workload where the high thread count shines, an OpenCL implementation is usually a lot faster on a mid-range GPU from 2015, or it takes so long anyway, that it essentially locks up my computer for anything else productive that I'll have to take a break anyway
Re:What software can use this power? (Score:5, Insightful)
As a (music) composer I exploit the crap out of my 16 Ryzen cores running hundreds of simultaneous VST plugins in my DAW.
Signal processing tends to be a poor fit for GPUs because it's serial, not parallel, data. SOME algorithms can be parallelized, but most have a time-domain component that just generates a huge amount of dependency on previous samples (i.e., a reverb effect), and there's no easy way to parallelize that. It's a field under hot research, but for now most effect plugins are solidly CPU stressors rather than GPU stressors. What IS parallel is using a lot of those damn plugins. I'll have 70-80 tracks of SATA-smashing sample playback (some of those orchestral instrument libraries can be 60+ gigs). I'll have that folding down into maybe 20 subchannels (various positions within the orchestra submixes), which will likely have a high-end convolution reverb to create the illusion of 3D space, a high-end channel strip emulation (Softube Console 1 usually), and possibly sundry effects (like a harmonic exciter, or filters if I'm doing something a bit whacko like "underwater piano" or whatever).
All of which adds up to a hell of a lot of core-smashing, RAM-busting, SATA-stressing fun. And the GPU? Mostly there to keep the monitors running and/or let me alt-tab over to Stellarium when the boredom kicks in.
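To make the reverb point concrete, here is a minimal sketch of a feedback comb filter, a basic building block of algorithmic reverbs (the delay and feedback values are made up). Each output sample depends on an output computed earlier, which is exactly the serial dependency that resists naive GPU parallelization:

    def comb_filter(x, delay=1051, feedback=0.84):
        # y[n] depends on y[n - delay], so the loop cannot simply be
        # split across thousands of GPU threads.
        y = [0.0] * len(x)
        for n in range(len(x)):
            delayed = y[n - delay] if n >= delay else 0.0
            y[n] = x[n] + feedback * delayed
        return y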
Re: (Score:2)
Though I still do wonder if a heterogeneous approach wouldn't be more efficient in that regard, because if you run hundreds of tasks simultaneously, it doesn't sound like any single task needs that "full-sized core" single-threaded performance potential. Did I understand something wrong there?
Re: (Score:2)
As the commenter above pointed out, not everything runs well on a GPU. GPUs excel at large tasks that can be expressed through SIMD, while you're usually a lot better off running your mostly mixed operations on the CPU, perhaps with some limited SIMD using AVX.
But that has very little to do with heterogeneous CPU architecture. My point there being that having a lot of energy-efficient, less heat-producing smaller cores
Re: (Score:2)
GPUs excel at large tasks that can be expressed through SIMD...
But think about what a modern GPU actually is: a box full of simple CPUs that are getting less simple and more like general-purpose CPUs all the time. Each one is SIMD, yes, but the SIMD is pretty narrow -- just 4x, more rarely up to 16x. They are tied together into larger parallel blocks by executing a single instruction stream, which cuts down on instruction-decoding hardware and instruction memory bandwidth. But each GPU generation makes these cores more independent of each other, with finer-grained
Re: (Score:2)
So from my perspective it looks very much like you're dragging the GPU into this to build some kind of strawman to argue against.
Thus I'm still wondering
Re: (Score:2)
What I'd expect is lower production cost, because you don't need all highly binned cores to build your high-core-count CPU from, especially if the entire thing isn't just one monolithic die but a chiplet design
Re: (Score:2)
Quick off-topic question: is movie music right now generated by composers on a DAW, or played by bands? Maybe expensive movies and telenovelas make different choices? I heard some years ago that you'd have to pay a lot for a skilled electronic musician to make it sound realistic, and it still takes a lot of time to optimize, while a local band would do a fair job in one practice and 2 takes and be done before lunch break, so the physical band was more cost-effective. Has this changed recently with more powerful
Re: (Score:2)
Re: (Score:2)
Signal processing tends to be a poor fit for GPUs because it's serial, not parallel, data.
Hmm, the GPU hardware itself can handle serialized processing, but I would not be surprised if there are a bunch of language obstacles in the way that make it difficult to set up a nice hardware pipeline explicitly. I haven't tried this myself, but I am willing to accept the challenge, as I also have an interest in software synthesis.
What really kills GPGPU is random branching, which is where traditional CPUs will continue to hold the advantage for a long time to come.
Re: (Score:2)
...takes so long anyway, that it essentially locks up my computer for anything else productive that I'll have to take a break anyway
Ah, life in Windows land. I don't miss it.
Re: (Score:2)
Theoretically I could keep the system responsive by setting the priority of the processes to low in the Task Manager. I've often done it in the past when I rendered something on the CPU that didn't offer a CUDA/OpenCL alternative.
But I've come to prefer taking a coffee break, because often the task that's taking up so many resources is required for me to proceed, meaning that the amount of useful work I can do besides writing
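For what it's worth, the same low-priority trick can be scripted; a minimal sketch, assuming the third-party psutil package is installed:

    import os
    import psutil  # third-party: pip install psutil

    p = psutil.Process(os.getpid())
    # Lowest scheduling priority: IDLE class on Windows, nice 19 elsewhere,
    # so a long CPU render leaves the desktop responsive.
    p.nice(psutil.IDLE_PRIORITY_CLASS if psutil.WINDOWS else 19)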
Re: (Score:2)
Who needs faster CPUs, you ask?
You've never run folding@home, encoded something with handbrake, or compiled the Linux kernel? :D
Turn in your nerd card!
Oh wow, speaking of turning in your nerd card: one of those runs on a GPU, one runs on dedicated encoding hardware, and the last one hasn't really been relevant since the kernel was modularised. ;-)
Okay I jest, but really I gave up on CPU encoding a long time ago. It's one of the reasons I haven't made the switch to AV1 yet, still waiting for dedicated encoding hardware to be present on my GPU.
Image editing, my man. That's still incredibly CPU-bound, as anyone who ever hits the "export" button knows.
Re: (Score:2)
Just wanted to add my $0.02 about video encoding.
More specifically, where hardware encoding is really supposed to help you: real-time encoding (capture).
Yes, of course it is nice to have a video hardware encoder in your graphics card, but here are several caveats I have run into and which I'd like to share.
_Generation mismatch: the card encodes 1080p60, but your screen is 1080p144. Or 2160p60.
_Fixed resolution: your workspace is composed of several monitors and the total exceeds your card's standard encoding resolution
Re: (Score:2)
the last one hasn't really been relevant since the kernel was modularised
Complete fucking bullshit. Does posting that tripe give you a boner?
Re:What software can use this power? (Score:5, Insightful)
This is coming from someone who still uses an Intel i7 6800K as his main PC CPU...
When you buy a new PC or upgrade to a new one, you have a choice to spend less and obtain a certain potential performance, or spend more and get more potential performance. The difference between option 1 and option 2 could amount to, say, $2000, or even $3000. You might think it's not worth it, and, depending on use case, you might be right.
I prefer my new/upgraded PC to be as future-proof as I can make it. Therefore, I'm not buying new hardware for what it can do right now, but for how well it will hold up 5 years from now. When I buy a PC that I want to hold up well for 5 years with minimal upgrades in the meantime (except, maybe, storage), I divide that difference in price by the usage time.
$3K more to pay, divided by 60 months is 50 bucks per month, which is well worth it, in my opinion.
Another point: new hardware releases push the price down for previous generations. You can get a Threadripper 1950x for $200 nowadays. Hell, you can get a 2990WX for less than a grand, and it used to be twice as much at release.
Re: (Score:2)
This is coming from someone who still uses an Intel i7 6800K as his main PC CPU...
Luxury! Seriously though, my main desktop is an i7 2700k. My laptop is only a year newer than that.
Now that the gap in performance is significant, and more importantly technologies like NVMe SSDs and Thunderbolt are mature, I'm feeling it might be time to upgrade. I could keep going until Windows 10 falls out of support, I suppose, or try yet again to switch to Linux.
Re: (Score:2)
You are in for a HUGE treat when you finally upgrade. I used an i7-4770K for about 10 years before upgrading to my Threadripper 3960X. AMD has had some amazing IPC uplift for the past few generations of Ryzen.
You don't say what your workload [slashdot.org] is but I recommend these 3 tiers:
* Budget: Ryzen 7600X
* High-End Gaming/Professional work: Ryzen 7950X
* HEDT/nice: Threadripper Pro (next gen coming in 2023)
Re: (Score:2)
I do sometimes do heavy loads, but much of it can be done on a GPU. Main thing is I need a lot of RAM.
So I was thinking of getting a Ryzen laptop with Thunderbolt.
Re: (Score:2)
A Cezanne laptop is not your granddaddy's laptop.
Re: (Score:2)
I know :)
I built a mITX water-cooled PC for my wife a couple of years ago, with a Ryzen 3600X in it; that CPU is 30% faster than mine.
Unfortunately, I am using quite a few PCI Express lanes, and the only upgrade path is Threadripper, unless I am willing to sacrifice a couple of PCI Express cards (which I am not, at the moment).
Re: (Score:2)
I'm finding Ryzen runs so cool under moderate load I can drop the fans to zero rpm. You can also get fanless X570 motherboards now, a real treat.
Re: (Score:3)
Luxury! Seriously though, my main desktop is an i7 2700k. My laptop is only a year newer than that.
This isn't surprising. I've been saying for years that we passed the point of real-world performance gaps. Well, with the exception of planned obsolescence like Windows 10 to 11.
Other than that, your average user won't be able to tell the difference between a modern processor and one made 10 years ago. Current processor specs show that new processors are more than twice as powerful as my i9-9900K. But that power is really useless if I can't use it. Hell, I don't even use all the power my current workstation has.
Re: (Score:2)
The hardware quality has dropped year by year, though. It used to be you'd buy a computer and it would keep working for more than a decade. Now you're lucky if it makes it that long. And I'm buying fancy pantsy motherboards with solid caps and extra copper, too.
Re: (Score:2)
As someone who worked in an IT repair shop during the late '90s and early 2000s, I would not be so sure. The number of motherboards with bulging caps after 6 months to a year was staggering. In all fairness, they were mostly budget boards, but I've seen a fair share of "top-of-the-line" mobos with severe issues.
This was partly the PSUs' fault as well; back in that day voltage ripple was atrocious and most PSUs were shit.
Re: (Score:2)
I've never retired a desktop machine for not working, only for being too slow or lacking modern interfaces like M.2.
Re:What software can use this power? (Score:5, Interesting)
I'll bet very few slashdotters would even notice the difference between this and their current high end cpu without a stop watch or performance test software.
If you have a "current" high-end CPU you're probably not upgrading it to one of these. But if your "current" CPU isn't particularly current and is actually a few years old, then this could be a good upgrade depending on what you do. I upgraded from my ~4-year-old Xeon W-2195 to a Threadripper 3970X, and that provided a huge performance boost for compilation, rendering, etc. -- basically any parallel work that doesn't exclusively leverage the GPU. I can't see myself needing an upgrade this generation (or probably the next either), but I'm sure 4-5 years from now there will be something that provides a substantial benefit again.
Re: (Score:2)
> I'll bet very few slashdotters would even notice the difference between this and their current high end cpu without a stop watch or performance test software.
Without benchmarks, determining whether Ryzen 7000 is worth upgrading to is kind of pointless; it is just pulling numbers out of thin air. It also depends on the type of workload. Context matters:
* Web Browsing / Email? Hell, any CPU from the past decade+ is complete overkill.
* Gaming? More than 6 cores is basically a waste, although the IPC uplift may be
Re: (Score:2)
I guess you don't blend.
Re:What software can use this power? (Score:4, Interesting)
To answer your question, I can only look back at what my reasons were for upgrading my CPUs in the past, and suppose that can give an idea of why someone would need one today.
_Decode higher-resolution videos (several occurrences)
_Set up an MMO server (multi-threaded SQL performance)
_Get higher frame rate from a single-threaded physics engine
_Get a safety margin for a multi-window java-based real-time program.
Re: (Score:3)
Case in point. One of my current desktops is a Dell XPS 420 (circa 2007) w/8GB RAM that a friend gave me, and it runs Windows 10 and the apps I use just fine -- Firefox (and sometimes Chrome, Edge), Thunderbird, Adobe Reader, GIMP, Office 2010, Lotus SmartSuite (1-2-3 and Word Pro), etc...
My main Linux box (Ubuntu 18.04) is a DIY ASRock Z77 (from a friend), i7-3770, 32GB RAM that also runs a generic Windows 10 VM just fine. I also have a Dell PowerEdge T110 (circa 2010) with 32GB ECC RAM that I'm not currently
Re: (Score:2)
If you're coming from InDesign, Scribus will make you angry.
Even compared to Pagemaker it's a bit grumpifying.
Re: (Score:2)
The most sophisticated (or, at least, expensive) publishing software I've used is FrameMaker *way* back in the early 1990s on a Sun Workstation at NASA LaRC. Personally, I've been using Publisher 2010 for a while to make greeting cards and it's pretty good for that ...
Re: (Score:2)
Who -needs- these things?
It doesn't matter. If they can be made faster for a similar price, then they will be; that's the way capitalism works.
You don't have to upgrade anything, just be happy that when your computer eventually breaks then the next one will be faster/cheaper.
Re: What software can use this power? (Score:2)
Re:What software can use this power? (Score:5, Interesting)
Is there some other major use these powerhouses will be used for?
GCC and LLVM! Being able to simultaneously compile 16 files at lot-o-GHz decreases the amount of time you have to wait for formidably large projects.
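(With GNU make that is just the jobs flag, e.g. make -j32 to keep all 16 cores and 32 threads fed, assuming the build graph has enough independent targets to go around.)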
Re: (Score:2)
Aaaand those projects should probably be doing what SQLite does. SQLite's official source code, like most projects, is spread across a bunch of files, but the official release version of the same product is a single combined 1MB .c file plus the .h include file. Back when they first did the monolithic .c file release, the devs noticed that SQLite's runtime performance increased 10%. The only thing different? One .c file instead of many. The compiler was able to further optimize an already heavily optimized
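The same idea generalizes as a "unity build"; a toy sketch of the concatenation step, with hypothetical file names (SQLite's real amalgamation script is considerably more involved):

    import pathlib

    units = ["parser.c", "btree.c", "vdbe.c"]  # hypothetical translation units
    combined = "\n".join(pathlib.Path(f).read_text() for f in units)
    pathlib.Path("amalgamation.c").write_text(combined)
    # Compiling the single file lets the optimizer inline and specialize
    # across what used to be separate object files.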
Re: (Score:2)
Hey, might as well throw in a unicorn and unobtainium because I can get it just as easily. -_-
Re: (Score:2)
Reading some of your responses, you move your goalposts a fair amount to maintain your mythical "normal user".
Fact of the matter is, for a while now many of the biggest hobbyist power users have been non-computer geeks. It's been the hobby 3D artists, the musicians, the hobby boat designers (like my brother), the people doing editing and compositing of video just for fun, and, nowadays, people dabbling with machine learning. Even if they don't do the application coding themselves, they use it, and build datasets for
Re: (Score:2)
Plus don't underestimate the satisfaction that comes from having the shiniest, coolest rig on the block. I mean, users don't need programmable LEDs either, right? And what's that glass window on the case doing there?
Re: (Score:2)
I built back in 2021. My last machine was 11+ years old and had started having disk controllers fail.
I do lots of SQL, VM and data conversion work.
As such, having lots of CPU "grunt" helps speed my job up and allows me to work in a deeply parallel fashion.
So my current machine is now a Ryzen 9 5950X, 64GB RAM and all SSD local storage (with a relatively sizeable NAS for cold storage)
Works out well for me.
Re: (Score:2)
You say "Who -needs- these things?" but then you add "I game a lot but my several generations old system still plays almost everything fine.".
So, did you need all the power of your several-generations-old system at the time you bought it? I bet no game was able to fully use it at the time.
Same thing here. People still using 10-year-old gaming rigs who want to buy something new also hope that their new rig will be good enough until 2032.
Re: (Score:2)
Yeah and you don't need a turbocharged sportscar either. But is life worth living without one?
Re: (Score:2)
With that said, if you're building a new system, going AMD is not the best idea.
Huh?
AMD is way ahead of everybody else.
Re: (Score:2)
AMD is ahead in general performance. It's behind on per core performance. Most games are mainly limited by the primary thread as long as the CPU otherwise has enough cores, which means that performance per core is more important for gaming than the general performance that comes from having more cores.
That said, I'd still buy AMD today for gaming because of all the problems that Intel's new big.LITTLE-style architecture has with games. Even though many of the early DRM and anticheat problems have been fixed, older games haven't been. AMD is
Re: (Score:2)
Most games are mainly limited by the primary thread as long as the CPU otherwise has enough cores
That would be a great talking point if it were 2010. No, most games these days scale exceptionally well with core count. The ones that don't aren't in any way CPU-bound in the first place, and UE5 will make multithreading even easier for developers, which is something to think about as you buy something that you intend to use going forward rather than take back in time. Sure, there are still some games out there that are limited by the primary thread, but they are also poorly optimised to the point where a faster CPU
Re:What software can use this power? (Score:5, Insightful)
So yeah, the entire idea that AMD is behind on "per core performance" is outdated as well. They've actually gotten pretty good there.
And as far as scaling on CPU cores goes, you can achieve really great performance if you manage to run your physics asynchronously, for example, which might be the UE5 feature you are referring to.
I write code in UE4 manually myself for my projects as I let UE5 mature, so I do have some first-hand experience there.
In general, though, that means you'll have to deal with concurrency, where UE4 by default sends the values of your async task/physics back to the threads where your game logic runs at the start of the next frame.
That can be awesome: you can have millions of simple collisions per second while still running somewhere north of 120 FPS. But that doesn't work for everything. Sometimes you need your values on the same frame, which is especially true for higher-fidelity physics simulations. This stuff you can't easily optimize away.
Thus at the end of the day it strongly depends on the case, where the general approach is to do as much as possible asynchronously while you try to limit the synchronous bottlenecks that will sooner or later arise out of necessity.
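As a toy illustration of that pattern (plain Python, not the UE4 API): physics for the next frame runs on a worker thread while game logic consumes the previous frame's results, trading one frame of latency for parallelism:

    from concurrent.futures import ThreadPoolExecutor

    def simulate_physics(state):
        return state + 1  # stand-in for a heavy physics step

    pool = ThreadPoolExecutor(max_workers=1)
    state, pending = 0, None
    for frame in range(5):
        if pending is not None:
            state = pending.result()  # async results land at the next frame
        pending = pool.submit(simulate_physics, state)
        print(f"frame {frame}: game logic sees physics state {state}")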
In such cases there's a great benefit in CPUs with high IPC, high clock and large caches, like the Ryzen 7 5800X3D.
On that CPU even software like Arma 3 with piece-of-crap mods (I've looked into the source code of some of them, where for example a LUT is used to save some computations, but then the interpolation uses multiple pow calls and is called 100 times per second for thousands of actors on the map) runs relatively well and gives the i9-12900K a run for its money.
Re: (Score:2)
You should take a look at the latest Spider-Man game, which does real-time shader compilation in its PC port. It loves extra cores.
Re: (Score:2)
Isn't it gross having the CPU compiling from source during real-time rendering? Something needs to be done about this, urgently.
Re: (Score:2)
Threadrippers for example
Why is it that anytime something is said, an idiot will come out with an extreme and completely irrelevant, off-topic example. Hint: Threadrippers are not just higher-core-count standard CPUs, they don't remotely achieve the same clocks as lower-core parts, and while games do scale well with threading, they are not an embarrassingly parallel problem.
Literally no one other than you is talking about 64-core, 128-thread CPUs. Come back to real life and post something relevant if you want to do something other than look like
Re:What software can use this power? (Score:5, Interesting)
Only an idiot uses something like a Threadripper for gaming.
That's not what they're designed for.
They're designed for massively parallel function.
CAN you game on a Threadripper?
Sure. But that's like buying a Dodge Demon as a daily commuter.
It'll get you there. It's just not optimal.
If all you're looking for is a gaming rig, snag a Ryzen 5 or Ryzen 7 setup.
Ryzen 9 is HEDT territory.
Threadripper Pro is dedicated workstation stuff.
And Epyc is for server-grade stuff.
In short, use the right tool.
Otherwise, the problem is a nut loose on the keyboard.
AKA an ID 10 T error.
Re: (Score:2)
Mostly true at the moment, though the pendulum is sure to swing back the other way as game engine developers notice they can get a significant extra edge by fully loading all cores, on both the CPU and GPU. There is also a trend towards more code that is not easily parallelizable, like massive numbers of NPC scripts.
Re: (Score:2)
Thank you for trying to formulate a rule for a phenomenon that depends on multiple factors.
What I'm saying is that you've noticed the phenomenon, yet haven't actually mapped out exactly what the factors are, besides simply "IT IS ONE CORE!".
As such, you're right about throwing a multi-thousand dollar WORK CPU at gaming.
Not that it CANNOT game.
It simply won't provide maximal values.
But your basic conclusion is incompletely supported.
Therefore wrong.
Re: (Score:2)
Re: (Score:2)
He obviously didn't. But then he also didn't even think about the 5800X3D, which was already ahead in gaming and in many other workloads that love L3 cache.
Re: (Score:2)
Hint: announcements by the company making the product are not the same as reality.
Actual chance of AMD getting ahead of intel's new P-cores on the current gen is low but possible. Chance of AMD's next gen being better than intel's next gen P-cores is exceedingly low.
Re: (Score:2)
It happened, and there is no real prospect of Intel ever getting that halo back.
Re: (Score:2)
Re: (Score:2)
Fanboys: they'll read the first paragraph and then have a meltdown about something you pre-emptively covered in the second paragraph.
I want to once again remind everyone that companies are not your friends, and fanboying for one is anti-consumer behavior at its dumbest.
Re: (Score:2)
Re:What software can use this power? (Score:4, Interesting)
Even not taking Zen4 into account, the 5800X3D is the fastest gaming CPU out there. Raphael will unseat it handily, and it's unlikely that Raptor Lake will be good enough to catch up. And on the off chance that the 13900k in 350W mode is able to compete, AMD will hit with Raphael-X (Zen4 + 3d cache) in early 2023 to seal the deal. Intel is mostly DoA in gaming.
As for "performance per core", that's a bit of a laugh, since Intel seems so keen on selling people scads of Atom cores but will only offer you 8 proper performance cores (Golden Cove, soon to be Raptor Cove).
Re: (Score:2)
AMD is ahead in general performance. It's behind on per core performance.
Earth to Rip van Winkle: that was then, this is now.
Re: (Score:3)
AMD doesn't use PBO in its standard 142W PPT presets. There are many benchmarks out there with PBO disabled (it is not enabled by default) that show the 5950X in a positive light. If your sample is having cache hierarchy errors, you may actually have a defective part, and you can get a replacement. You are not the only one to have had that problem, though it's admittedly pretty rare.
Re: (Score:2)
Anyone for whom IPC is important isn't using Geekbench for their benchmarking suite. Anyone for whom Geekbench offers a proper benchmarking equivalent doesn't need a computer more powerful than a potato.
Re: (Score:2)
However a heap of butthurt apple fanboys want to claim all that matters is efficiency as that is the only benchmark they are convincingly ahead on.
Even then, the laptop with the longest battery life isn't made by Apple and it has an AMD Ryzen CPU.
Apple's storage and RAM markup kills it price-wise (Score:2)
Apple's storage and RAM markup kills it price-wise, Apple's pro workstation can't keep up with Intel or AMD systems, and high-end video cards kill it.
Re:AMD is still lag behind Apple's M1 (Score:5, Informative)
IPC is what matters to most users. AMD is still way behind Apple's M1 in that department.
The big problem with the M1 is that you have to own a computer made by Apple.
Re: (Score:2)
Agreed.
The M1 is an amazing chip. I just noticed the Apple M1 8 Core is the only CPU to appear on both the first page of the Passmark single-core performance chart [cpubenchmark.net] and the first page of the Passmark performance-per-watt chart [cpubenchmark.net]. Heck, this chip can almost emulate x86 fast enough to be competitive. Unfortunately, it is coupled with a closed-source proprietary operating system. I would love to see someone make a Windows-based machine running on an M1, but Apple will never allow it. But if Linux added x86 emulation
Re: (Score:3)
IPC is what matters to most users.
Most users don't have a clue what IPC is. What matters to most users is: does it do what I want it to, and how much does it cost? Some will care whether it's AMD or Intel, but not as many as there used to be.
Re: (Score:2)
What matters to most users is: does it do what I want it to, and how much does it cost?
You forgot one: how will the brand affect my image? Will I be seen as "cool", or will people think "he/she bought a cheap one"?
Re: (Score:2)
Re: (Score:2)
Otherwise, the performance in what you actually do with your machine matters, and for most that's not chasing some synthetic benchmark high score.
*Yes, there are people who could be considered professional benchmark runners. Though most of them aren't that interested in IPC, but more in how IPC combined with an overclocked clock works out in those benchmarks, for which they then use their "overclocking" skills with fancy stuff
Re: (Score:2)
of course it is
the M1 is not carrying 40 years of legacy crap
Re: (Score:2)
The Ryzen is not carrying 40 years of reality distortion.
Re: (Score:2)
IPC is what matters to most users.
I bought one of those M1 chip computers. I started it up. It didn't even have Windows on it, so I returned it. *THAT* is what matters most. "Users" largely don't give a shit about IPC. They care whether their computer runs the software they want to run in a way that is familiar to them. Bonus points for long battery life.
IPC is relevant only after you cover the most important thing: software and use. Comparing M-anything to Wintel systems is utterly irrelevant. The M2 could have 20x the IPC performance and
Re: (Score:3)
IPC is what matters to most users.
What? Absolutely not. Ops are what matter to most users. They don't give a shit what the clock rate is. They just want their operations retired.
To be fair, battery life also matters, so there is a case in which Apple's CPU is relevant, and that case is portable devices. And there's a big market there, so it has a reason to exist. But their processor only competes favorably with mobile CPUs today, and pretending otherwise is fanboy behavior.
Re: (Score:2)