

AMD Ryzen 9 9950X3D With 3D V-Cache Impresses In Launch Day Testing (hothardware.com)
MojoKid writes: AMD just launched its latest flagship desktop processor, the Ryzen 9 9950X3D. The Ryzen 9 9950X3D is a 16-core/32-thread, dual-CCD part with a base clock of 4.3GHz and a max boost clock of 5.7GHz. There's also second-gen 3D V-Cache on board. Standard Ryzen 9000 series processors feature 32MB of L3 cache per compute die, but with the Ryzen 9 9950X3D, one compute die is outfitted with an additional 64MB of stacked 3D V-Cache, bringing that die's L3 to 96MB and the total L3 up to 128MB (144MB total cache including L2). The CCD outfitted with 3D V-Cache operates at more conservative voltages and frequencies, but the bare compute die is unencumbered.
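A quick sanity check on the cache totals quoted above, assuming the standard Zen 5 figure of 1MB L2 per core (the per-CCD L3 and V-Cache sizes are from the summary):

```python
# Cache math for the Ryzen 9 9950X3D as described in the summary.
L3_PER_CCD_MB = 32        # standard Zen 5 CCD L3
V_CACHE_MB = 64           # stacked 3D V-Cache on one CCD
L2_PER_CORE_MB = 1        # 1MB L2 per Zen 5 core
CORES = 16

total_l3 = 2 * L3_PER_CCD_MB + V_CACHE_MB          # 32 + 32 + 64 = 128MB
total_cache = total_l3 + CORES * L2_PER_CORE_MB    # 128 + 16 = 144MB

print(total_l3, total_cache)  # 128 144
```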
The Ryzen 9 9950X3D turns out to be a high-performance, no-compromise desktop processor. Its complement of 3D V-Cache provides tangible benefits in gaming, and AMD's continued work on the platform's firmware and driver software ensures that even with the Ryzen 9 9950X3D's asymmetrical CCD configuration, performance is strong across the board. At $699 it's not cheap, but it's a great CPU for gaming and content creation, and one of the most powerful standard desktop CPUs currently available.
Flipped (Score:4, Interesting)
It's cool that they flipped the cores to the top so they can be cooled more efficiently.
Sounds like they're getting ready for Backside Power Delivery in '26 too, maybe on A16.
I'll likely be buying one of these Baby Threadrippers when BPD hits, if there are enough PCIe lanes. Especially if they can be efficiently downclocked dynamically to save power.
Re: (Score:2)
How many PCIe lanes are you expecting? There probably won't be a major expansion of I/O for consumer platforms.
Re: Flipped (Score:2)
Never underestimate what the bleeding edge overclockers will try to do.
Anything to shave off network lag for example.
Re: Flipped (Score:4, Insightful)
Wait what
How are overclockers going to expand available PCIe lanes? That doesn't make any sense.
This is terrible! (Score:5, Funny)
As an avid Intel fan, let me tell you about all the things you will miss out on by going with AMD:
* highly dubious benchmark results and advertised speed gains that don't pan out
* anti-competitive business practices
* new flashy instructions that are implemented without regard for security
* microcode updates that will drag down the processing speed
* a compiler designed specifically to hamper non-Intel processors
* a buddy-buddy relationship with Microsoft
* the Intel Management Engine with a checkered security record and probably has a backdoor for the NSA
* NDAs for literally anyone doing anything with our stuff
Without all these great things, why would you even bother with AMD?
Re: (Score:2)
Spotted the Intel employee
Re:This is terrible! (Score:4, Insightful)
Wrong industry.
Let's see, here.
* highly dubious benchmark results and advertised speed gains that don't pan out
AMD is famous for this, lol. Check.
* anti-competitive business practices
Would be silly for the underdog to be accused of engaging in anticompetitive business practices, so 100% Intel on this one.
* new flashy instructions that are implemented without regard for security
This one doesn't even apply to Intel, I don't think.
Side-channel speculative exploits exist for every current superscalar CPU manufacturer. So what instruction in particular are you bitching about here?
But can we count how AMD gaslit the entire software industry except for MS into believing that retpoline was safe, while quietly patching it for Zen3?
Some of the best crow eating I've ever seen when Linux had to move to Intel's original Spectre-V2 guidance after Linus loudly shit all over it.
* a compiler designed specifically to hamper non-Intel processors
I'll give you this one, but that's still a gross misrepresentation of what happened.
Refused to optimize for non-Intel? Yes. Bullshit move? Yes. Hamper non-Intel? No.
* a buddy-buddy relationship with Microsoft
I'd say they both have a pretty buddy-buddy relationship with Microsoft.
AMD literally builds custom chips for them.
* the Intel Management Engine with a checkered security record and probably has a backdoor for the NSA
AMD's PSP has dozens of CVEs for it.
* NDAs for literally anyone doing anything with our stuff
Yes, because AMD doesn't require everyone to sign an NDA either, lol.
Re: (Score:2)
* The industry-leading warranty and return policy
Re: (Score:2)
You missed:
1) new flashy instructions that are implemented to win benchmarks, hyped to hell and with no regard to competent ISA design, so there needs to be a v2 extension in the next gen. And then a v3. And then a v4. (Which then transition from "Intel Mouthpiece says new instructions will cure cancer etc etc" to "We feel non-standard extensions like these are harmful to the x86 ecosystem as a whole" once #4 happens).
2) new flashy instructions that require the CPU to downclock.
3) new flashy instructions th
Why bother? (Score:4, Insightful)
You have to be absolutely pushing the bounds of gaming or work to need something like this. Few of us are. My daily grind is almost always drive speed and network bandwidth limited. Even if I quadruple my CPU speed, it won't matter much. CPU usage rarely goes over 50% as is, with a 13th gen i7 mobile. My desktop is faster, but I barely use it, and it's the same story there, except that maybe sometimes it's GPU limited with an AMD 6800.
I'd take fewer cores and more battery life, or fewer cores and a faster SSD, or fewer cores and a better network.
Re:Why bother? (Score:4, Interesting)
You have to be absolutely pushing the bounds of gaming or work to need something like this. Few of us are.
*Few of us are, so far.* - Homer Simpson meme.
The idea of high end graphics cards was laughable, then came 4K monitors and now they are basically mandatory to make modern games playable. The same applies with CPUs. Over time new games have increasingly challenged the CPU. You can see that in comparison graphs benchmarking different games with different CPU platforms. You may not need it today. And you probably won't pay this price for it. But I for one am excited to buy this chip second hand in a few years to keep the latest ordinary games running well.
As we get more powerful hardware, games start optimising less.
Even if I quadruple my CPU speed, it won't matter much. CPU usage rarely goes over 50% as is, with a 13th gen i7 mobile.
I'm willing to bet that if you stop alt tabbing and looking at graphs, and instead do detailed performance monitoring you'll find that even on games where you think the CPU is not doing much, a different CPU will have impact to frame time, FPS, or other metrics that actually influence your gameplay. This has been shown time and time again, even in games which appear to not tax the CPU - it still has an impact.
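For anyone unclear on what "detailed performance monitoring" means versus eyeballing a CPU-usage graph: tools like CapFrameX or PresentMon capture per-frame times, from which you derive average FPS and the "1% low" figure that actually reflects stutter. A rough sketch with made-up frame times:

```python
# Hypothetical per-frame times in milliseconds; real data would come from a
# frame-time capture tool. One 25ms hitch barely moves the average but
# dominates the 1% low.
frame_times_ms = [8.3, 8.5, 8.2, 9.1, 8.4, 25.0, 8.3, 8.6, 8.2, 8.4]

# Average FPS over the whole capture.
avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)

# "1% low" FPS: the FPS implied by the slowest 1% of frames
# (with only 10 samples here, that's just the single worst frame).
worst = sorted(frame_times_ms)[-max(1, len(frame_times_ms) // 100):]
low_fps = 1000 * len(worst) / sum(worst)

print(f"avg {avg_fps:.0f} fps, 1% low {low_fps:.0f} fps")
```

This is exactly why a faster CPU can matter even when average utilization looks low: it shrinks the worst frames, not the average.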
Re: (Score:1)
I don't think the parent doubts it will have an impact.
I think he is more saying that as you move beyond budget tier parts in general you'd almost always be better served by allocating that extra $200 to more/faster memory, the chipset with more PCIE lanes, better GPU, or even a better performing ssd (or array).
If the subject is gaming, when it comes to getting the most performance out of any titles available today, buying into the top line of the current CPU generation is going to buy you less experienced i
Re: (Score:2)
When it comes to gaming, chipsets and PCIe lanes have little to no impact. More/faster memory can, but it depends where you're starting from (there are very heavy diminishing returns, especially if you have a lot of CPU cache). SSD will have little to no impact. GPU, on the other hand... If you're talking high-end GPUs, then no, because $200 won't let you jump up a tier in the high-end. But in the mid-range, $200 can make a big difference. Assuming you can find a GPU in the first place.
However, $200 can mak
Re: (Score:2)
SSD matters most, with loading the OS, levels, apps, etc. 32GB of memory is plenty today, 16GB is adequate, 8GB has performance issues but works, and 4GB is survivable but painful. 64GB and up are for specific memory hog uses. Faster memory? Doesn't matter so much. Barely makes a difference. A $200 GPU is going to be surprisingly close to a $400 GPU, which is not that far off from an $800 GPU. Only if you're focused on 4k with all details at max do you need to spend 800+ and get the fastest CPU and a
Re: (Score:2)
Having an SSD *at all* matters for that, having a faster SSD, not so much. You're not going to get any meaningful improvement out of your game buying a $400 SSD instead of a $200 SSD. Level load times are not really bottlenecked by raw disk throughput, and DirectStorage is still a mess (and largely a liability).
$200 on a GPU *can* make a big difference when you're at low-end or mid-range cards, but it depends on the cards. Comparing a closer-to-MSRP card to a "fancier cooler" card can be $200 with basically
Re: (Score:2)
The jump from spinny to SSD is big, yes, but the jump from slow SSD to fast SSD is pretty big too.
Re: (Score:1)
PCIe lanes make a ton of difference; it is easy to max out on the lower end chipsets. A couple NVMe-type SSDs, a 16x GPU, and a NIC, and you're full up with no room at all for expansion, and you might already be giving some of that hardware fewer lanes than it could support.
Re: (Score:2)
All AMD chipsets, from the cheapest to the most expensive, have a single PCIe 4.0x4 link back to the CPU. Ultimately, the CPU's PCIe lanes are your bottleneck. Certain peripherals are also directly connected to the CPU, making the chipset irrelevant. That includes (at least) the GPU, and the first m.2 slot.
In fact, it's much easier to max out the bandwidth on the higher-end chipsets, because they try to hang way more stuff off that single x4 link. The cheaper chipsets are less likely to bottleneck the chips
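Back-of-the-envelope numbers for that x4 uplink, using the published PCIe 4.0 rate of 16 GT/s per lane with 128b/130b encoding (the NVMe read figure is a typical spec-sheet number, not a measurement):

```python
# Rough bandwidth sketch for the chipset's single PCIe 4.0 x4 link to the CPU.
GT_PER_S = 16.0                        # PCIe 4.0 raw rate per lane
EFFICIENCY = 128 / 130                 # 128b/130b line encoding overhead
lane_gbps = GT_PER_S * EFFICIENCY / 8  # ~1.97 GB/s usable per lane
uplink_gbps = 4 * lane_gbps            # ~7.88 GB/s for the whole x4 uplink

# A single fast PCIe 4.0 NVMe drive can read around 7 GB/s sequentially, so
# two such drives plus a NIC behind the chipset can oversubscribe the link.
print(f"{uplink_gbps:.2f} GB/s")
```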
Re: (Score:2)
An impact is not the same as a big or even significant impact.
I tend to game in about 1/3 of my 48" 4k screen, because I sit close enough that I get nauseous otherwise.
With unlimited funds, faster is better, sure, but I have found that a passmark of around 3000 per cpu gets the job done quickly in everything I want to do for the past few years. It's not like I'm a competitive gamer.
The CPU sweet spot these days is far, far lower down the food chain.
The good and the bad (Score:2)
The benefit to the X3D chips is having extra cache for programs that benefit from it. If the programs you use don't benefit from the extra cache, you pay more money for something you won't actually see a benefit from. So, there's nothing "BAD" about X3D, but there are limited programs that benefit from it in the consumer space. The hype generated by gamers for the X3D chips has convinced some people to go with it, rather than saving some money by getting the normal Ryzen 9 9950X CPU.
Re: (Score:2)
The advice from reviewers has generally been, if you only play games, get the 9800X3D, if you only do productivity, get the 9950X, if you do both, get the 9950X3D.
However, at MSRP, it's a relatively minor price difference, $649 versus $699. At that point, I'd argue it's worth it just for the extra flexibility. But at street price, the gap appears to be wider (I'm seeing $550-600 versus $700), and with that sort of a price difference, yeah, get the 9950X if you're really productivity focused.
cache (Score:1)
Re: (Score:2)
It's not like a program gets to choose what goes into cache.
If your software has a large working set in hot loops, more cache will probably help you, but that same working set will hurt you on every other CPU out there.
This is why most games aren't even impacted by this- but the ones that are, are majorly.
There isn't going to be a transition toward programming that favors CPUs that have this feature, of which there is 1, and there isn't going to be a transition to CPUs that have this feature because of the drawbacks.
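To make the working-set point above concrete, here's a minimal sketch of the only question that matters: does the hot data fit in L3? The L3 sizes are the ones discussed in this thread (32MB on a plain Zen 5 CCD, 96MB on the V-Cache CCD); the workload numbers are hypothetical:

```python
# Does a hot loop's working set fit in a given L3 size?
def fits_in_l3(elements, bytes_per_element, l3_mb):
    return elements * bytes_per_element <= l3_mb * 1024 * 1024

# Hypothetical workload: 8 million 8-byte entries = ~64MB of hot data.
ws_elements, ws_bytes = 8_000_000, 8

print(fits_in_l3(ws_elements, ws_bytes, 32))  # False: spills on a plain CCD
print(fits_in_l3(ws_elements, ws_bytes, 96))  # True: resident on the X3D CCD
```

Workloads that straddle that boundary are exactly the ones where X3D wins big; everything else barely notices.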
Memory Gap Between Ryzen and Threadripper (Score:2)
Re: (Score:2)
Just let the fluid dynamics sim run longer if you need more precision?
I NEED BTU OUTPUT STATS (Score:2)
I never thought I'd even ask for this, but after buying a 13900K and having it turn my office into a sauna, I need to ask: how much heat does this thing pump out? Or to put it another way, is it more thermally efficient than the higher end Intel chips?
It's seriously a problem when you have to locate your case in another damn room.
Why "gaming" (Score:2)