AMD Ryzen 9 9950X3D With 3D V-Cache Impresses In Launch Day Testing (hothardware.com)

MojoKid writes: AMD just launched its latest flagship desktop processor, the Ryzen 9 9950X3D. The Ryzen 9 9950X3D is a 16-core/32-thread, dual-CCD part with a base clock of 4.3GHz and a max boost clock of 5.7GHz. There's also 64MB of second-gen 3D V-Cache on board. Standard Ryzen 9000 series processors feature 32MB of L3 cache per compute die, but with the Ryzen 9 9950X3D, one compute die is outfitted with an additional 64MB of stacked 3D V-Cache for 96MB on that die, bringing the total L3 up to 128MB (144MB of total cache including L2). The CCD outfitted with 3D V-Cache operates at more conservative voltages and frequencies, but the bare compute die is unencumbered.
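
The asymmetry is visible from software. As a minimal sketch (assuming Linux and its standard sysfs cache layout), the following prints the L3 size each core reports; on an X3D part, cores on the V-Cache CCD should show a larger L3 than cores on the bare die:

    /* Print the L3 cache size visible to each CPU (Linux sysfs).
     * index3 is conventionally the unified L3 on x86. */
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char path[256], buf[64];
        for (int cpu = 0; cpu < 1024; cpu++) {
            snprintf(path, sizeof(path),
                     "/sys/devices/system/cpu/cpu%d/cache/index3/size", cpu);
            FILE *f = fopen(path, "r");
            if (!f)
                break; /* no such CPU entry: stop */
            if (fgets(buf, sizeof(buf), f)) {
                buf[strcspn(buf, "\n")] = '\0';
                printf("cpu%-3d L3 = %s\n", cpu, buf);
            }
            fclose(f);
        }
        return 0;
    }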

The Ryzen 9 9950X3D turns out to be a high-performance, no-compromise desktop processor. Its complement of 3D V-Cache provides tangible benefits in gaming, and AMD's continued work on the platform's firmware and driver software ensures that even with the Ryzen 9 9950X3D's asymmetrical CCD configuration, performance is strong across the board. At $699 it's not cheap, but it's a great CPU for gaming and content creation, and currently one of the most powerful standard desktop CPUs money can buy.

  • Flipped (Score:4, Interesting)

    by bill_mcgonigle ( 4333 ) * on Tuesday March 11, 2025 @10:19PM (#65226937) Homepage Journal

    It's cool that they flipped the cores to the top so they can be cooled more efficiently.

    Sounds like they're getting ready for Backside Power Delivery in '26 too, maybe on A16.

    I'll likely be buying one of these Baby Threadrippers when BPD hits, if there are enough PCIe lanes. Especially if they can be efficiently downclocked dynamically to save power.

  • by Gravis Zero ( 934156 ) on Wednesday March 12, 2025 @12:31AM (#65227033)

    As an avid Intel fan, let me tell you about all the things you will miss out on by going with AMD:

    * highly dubious benchmark results and advertised speed gains that don't pan out
    * anti-competitive business practices
    * new flashy instructions that are implemented without regard for security
    * microcode updates that will drag down the processing speed
    * a compiler designed specifically to hamper non-Intel processors
    * a buddy-buddy relationship with Microsoft
    * the Intel Management Engine, which has a checkered security record and probably a backdoor for the NSA
    * NDAs for literally anyone doing anything with our stuff

    Without all these great things, why would you even bother with AMD?~

    • 80% of those apply to AMD as well. You cheerleaders are so fucking sad.
      • Spotted the Intel employee

        • by DamnOregonian ( 963763 ) on Wednesday March 12, 2025 @03:14AM (#65227147)
          Not remotely.
          Wrong industry.
          Let's see, here.

          * highly dubious benchmark results and advertised speed gains that don't pan out

          AMD is famous for this, lol. Check.

          * anti-competitive business practices

          Would be silly for the underdog to be accused of engaging in anticompetitive business practices, so 100% Intel on this one.

          * new flashy instructions that are implemented without regard for security

          This one doesn't even apply to Intel, I don't think.
          Side-channel speculative exploits exist for every current superscalar CPU. So what instruction in particular are you bitching about here?
          But can we talk about how AMD gaslit the entire software industry, except for MS, into believing that retpoline was safe, while quietly patching it for Zen 3?
          Some of the best crow-eating I've ever seen, when Linux had to move to Intel's original Spectre-V2 guidance after Linus loudly shit all over it.

          * a compiler designed specifically to hamper non-Intel processors

          I'll give you this one, but that's still a gross misrepresentation of what happened.
          Refused to optimize for non-Intel? Yes. Bullshit move? Yes. Hamper non-Intel? No.

          * a buddy-buddy relationship with Microsoft

          I'd say they both have a pretty buddy-buddy relationship with Microsoft.
          AMD literally builds custom chips for them.

          * the Intel Management Engine, which has a checkered security record and probably a backdoor for the NSA

          AMD's PSP has dozens of CVEs against it.

          * NDAs for literally anyone doing anything with our stuff

          Yes, because AMD doesn't require everyone to sign an NDA either, lol.

    • Don't forget:
      * The industry-leading warranty and return policy
    • by arQon ( 447508 )

      You missed:
      1) new flashy instructions that are implemented to win benchmarks, hyped to hell, and designed with no regard for competent ISA design, so there needs to be a v2 extension in the next gen. And then a v3. And then a v4. (Which then transition from "Intel Mouthpiece says new instructions will cure cancer etc etc" to "We feel non-standard extensions like these are harmful to the x86 ecosystem as a whole" once #4 happens; see the detection sketch below this comment.)
      2) new flashy instructions that require the CPU to downclock.
      3) new flashy instructions th
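
      A quick illustration of that treadmill (a sketch assuming gcc or clang on x86; __builtin_cpu_supports() and the feature-name strings are the real GCC builtins): every new extension family means yet another runtime check before software can safely use it.

          /* Print which extension generations this CPU actually has.
           * Compile with gcc or clang on x86. */
          #include <stdio.h>

          int main(void) {
              __builtin_cpu_init();
              printf("sse4.2 : %d\n", __builtin_cpu_supports("sse4.2") != 0);
              printf("avx    : %d\n", __builtin_cpu_supports("avx") != 0);
              printf("avx2   : %d\n", __builtin_cpu_supports("avx2") != 0);
              printf("avx512f: %d\n", __builtin_cpu_supports("avx512f") != 0);
              return 0;
          }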

  • Why bother? (Score:4, Insightful)

    by TheMiddleRoad ( 1153113 ) on Wednesday March 12, 2025 @01:55AM (#65227077)

    You have to be absolutely pushing the bounds of gaming or work to need something like this. Few of us are. My daily grind is almost always limited by drive speed and network bandwidth. Even if I quadrupled my CPU speed, it wouldn't matter much. CPU usage rarely goes over 50% as is, with a 13th-gen mobile i7. My desktop is faster, but I barely use it, and it's the same story there, except that maybe sometimes it's GPU-limited with an AMD 6800.

    I'd take fewer cores and more battery life, or fewer cores and a faster SSD, or fewer cores and a better network.

    • Re:Why bother? (Score:4, Interesting)

      by thegarbz ( 1787294 ) on Wednesday March 12, 2025 @05:49AM (#65227289)

      You have to be absolutely pushing the bounds of gaming or work to need something like this. Few of us are.

      *Few of us are, so far.* - Homer Simpson meme.

      The idea of high-end graphics cards was once laughable; then came 4K monitors, and now they are basically mandatory to make modern games playable. The same applies to CPUs. Over time, new games have increasingly challenged the CPU. You can see that in comparison graphs benchmarking different games on different CPU platforms. You may not need it today. And you probably won't pay this price for it. But I for one am excited to buy this chip second-hand in a few years to keep the latest ordinary games running well.

      As we get more powerful hardware, games start optimising less.

      Even if I quadruple my CPU speed, it won't matter much. CPU usage rarely goes over 50% as is, with a 13th gen i7 mobile.

      I'm willing to bet that if you stop alt-tabbing to look at graphs, and instead do detailed performance monitoring, you'll find that even in games where you think the CPU is not doing much, a different CPU will have an impact on frame time, FPS, or other metrics that actually influence your gameplay. This has been shown time and time again: even in games which appear not to tax the CPU, it still has an impact.
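
      To make that concrete, here's a minimal sketch of that kind of monitoring (the frame times are made-up sample data; real captures would come from a tool like PresentMon): average FPS can look fine while the worst 1% of frames, one common "1% low" metric, tells the real story, and that is where a faster CPU shows up.

          /* Compute average FPS and a "1% low" from frame-time samples. */
          #include <stdio.h>
          #include <stdlib.h>

          static int cmp_double(const void *a, const void *b) {
              double d = *(const double *)a - *(const double *)b;
              return (d > 0) - (d < 0);
          }

          int main(void) {
              /* hypothetical frame times in ms: mostly smooth, a few spikes */
              double ft[] = {16.6, 16.7, 16.5, 16.8, 16.6, 33.9, 16.7, 16.6,
                             16.5, 41.2, 16.6, 16.7, 16.8, 16.6, 16.5, 16.7};
              int n = sizeof(ft) / sizeof(ft[0]);

              double sum = 0;
              for (int i = 0; i < n; i++)
                  sum += ft[i];
              double avg_fps = 1000.0 * n / sum;

              /* sort ascending: the slowest frames land at the tail */
              qsort(ft, n, sizeof(double), cmp_double);
              int worst = n / 100 > 0 ? n / 100 : 1; /* at least one frame */
              double worst_sum = 0;
              for (int i = n - worst; i < n; i++)
                  worst_sum += ft[i];
              double low1_fps = 1000.0 * worst / worst_sum;

              printf("average: %.1f FPS, 1%% low: %.1f FPS\n", avg_fps, low1_fps);
              return 0;
          }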

      • by DarkOx ( 621550 )

        I don't think the parent doubts it will have an impact.

        I think he is more saying that as you move beyond budget-tier parts, in general you'd almost always be better served by allocating that extra $200 to more/faster memory, a chipset with more PCIe lanes, a better GPU, or even a better-performing SSD (or array).

        If the subject is gaming, when it comes to getting the most performance out of any titles available today, buying into the top line of the current CPU generation is going to buy you less experienced i

        • by Guspaz ( 556486 )

          When it comes to gaming, chipsets and PCIe lanes have little to no impact. More/faster memory can, but it depends on where you're starting from (there are very heavy diminishing returns, especially if you have a lot of CPU cache). An SSD will have little to no impact. The GPU, on the other hand... If you're talking high-end GPUs, then no, because $200 won't let you jump up a tier in the high-end. But in the mid-range, $200 can make a big difference. Assuming you can find a GPU in the first place.

          However, $200 can mak

          • SSD matters most, for loading the OS, levels, apps, etc. 32GB of memory is plenty today, 16GB is adequate, 8GB has performance issues but works, and 4GB is survivable but painful. 64GB and up are for specific memory-hog uses. Faster memory? Doesn't matter so much; it barely makes a difference. A $200 GPU is going to be surprisingly close to a $400 GPU, which is not that far off from an $800 GPU. Only if you're focused on 4K with all details at max do you need to spend 800+ and get the fastest CPU and a

            • by Guspaz ( 556486 )

              Having an SSD *at all* matters for that; having a faster SSD, not so much. You're not going to get any meaningful improvement out of your game buying a $400 SSD instead of a $200 SSD. Level load times are not really bottlenecked by raw disk throughput, and DirectStorage is still a mess (and largely a liability).

              $200 on a GPU *can* make a big difference when you're at low-end or mid-range cards, but it depends on the cards. Comparing a closer-to-MSRP card to a "fancier cooler" card can be $200 with basically

          • by DarkOx ( 621550 )

            PCIe lanes make a ton of difference; it is easy to max out on the lower-end chipsets. A couple of NVMe SSDs, an x16 GPU, and a NIC, and you're full up with no room at all for expansion, and you might already be giving some of that hardware fewer lanes than it could support.

            • by Guspaz ( 556486 )

              All AMD chipsets, from the cheapest to the most expensive, have a single PCIe 4.0 x4 link back to the CPU. Ultimately, the CPU's PCIe lanes are your bottleneck. Certain peripherals are also directly connected to the CPU, making the chipset irrelevant. That includes (at least) the GPU and the first M.2 slot.

              In fact, it's much easier to max out the bandwidth on the higher-end chipsets, because they try to hang way more stuff off that single x4 link. The cheaper chipsets are less likely to bottleneck the chips
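
              For a rough sense of that shared uplink's budget, a back-of-envelope sketch using the published PCIe 4.0 figures (16 GT/s per lane, 128b/130b encoding):

                  /* Usable bandwidth of a PCIe 4.0 x4 chipset uplink. */
                  #include <stdio.h>

                  int main(void) {
                      double gt_per_s = 16.0;          /* PCIe 4.0: 16 GT/s per lane */
                      double encoding = 128.0 / 130.0; /* 128b/130b line encoding    */
                      int lanes = 4;                   /* chipset uplink width       */

                      double gbytes = gt_per_s * encoding * lanes / 8.0;
                      printf("x%d PCIe 4.0 uplink: ~%.2f GB/s shared\n", lanes, gbytes);

                      /* one PCIe 4.0 x4 NVMe SSD alone can burst roughly this same
                       * figure, so a single fast drive hanging off the chipset can
                       * saturate the entire uplink by itself */
                      return 0;
                  }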

      • An impact is not the same as a big or even significant impact.

        I tend to game in about 1/3 of my 48" 4k screen, because I sit close enough that I get nauseous otherwise.

        With unlimited funds, faster is better, sure, but I have found that a PassMark score of around 3000 per CPU has gotten the job done quickly in everything I've wanted to do for the past few years. It's not like I'm a competitive gamer.

        The CPU sweet spot these days is far, far lower down the food chain.

  • The benefit to the X3D chips is having extra cache for programs that benefit from it. If the programs you use don't benefit from the extra cache, you pay more money for something you won't actually see a benefit from. So, there's nothing "BAD" about X3D, but there are limited programs that benefit from it in the consumer space. The hype generated by gamers for the X3D chips has convinced some people to go with it, rather than saving some money by getting the normal Ryzen 9 9950X CPU.

    • by Guspaz ( 556486 )

      The advice from reviewers has generally been: if you only play games, get the 9800X3D; if you only do productivity, get the 9950X; if you do both, get the 9950X3D.

      However, at MSRP, it's a relatively minor price difference, $649 versus $699. At that point, I'd argue it's worth it just for the extra flexibility. But at street price, the gap appears to be wider (I'm seeing $550-600 versus $700), and with that sort of a price difference, yeah, get the 9950X if you're really productivity focused.

  • There aren't a lot of programs today that make use of that much cache, but that's because programs aren't designed to. Tomorrow's programs will be, and when they're designed for it, it makes a big, big difference.
    • Nah.

      It's not like a program gets to choose what goes into cache.
      If your software has a large working set in hot loops, more cache will probably help you, but that same design will hurt you on every other CPU out there.
      This is why most games aren't even impacted by this, but the ones that are, are impacted majorly.
      There isn't going to be a transition toward programming that favors CPUs with this feature, of which there is one, and there isn't going to be a transition to CPUs that have this feature, because of the drawbacks.
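
      The working-set effect is easy to demonstrate with a pointer-chase (a sketch: the sizes, step count, and use of rand() are illustration choices, and it assumes POSIX clock_gettime and a large RAND_MAX as on glibc). Per-access latency jumps as the ring outgrows each cache level, and the region past 32MB is exactly where the extra V-Cache buys headroom.

          /* Time random pointer-chasing at growing working-set sizes.
           * Compile with optimization, e.g. gcc -O2. */
          #include <stdio.h>
          #include <stdlib.h>
          #include <time.h>

          int main(void) {
              srand(12345);
              for (size_t mb = 1; mb <= 256; mb *= 4) {
                  size_t n = mb * (1 << 20) / sizeof(size_t);
                  size_t *ring = malloc(n * sizeof(size_t));
                  if (!ring)
                      return 1;

                  /* Sattolo's algorithm: one big cycle through the whole
                   * ring, so the hardware prefetcher can't help */
                  for (size_t i = 0; i < n; i++)
                      ring[i] = i;
                  for (size_t i = n - 1; i > 0; i--) {
                      size_t j = (size_t)rand() % i;
                      size_t t = ring[i]; ring[i] = ring[j]; ring[j] = t;
                  }

                  /* chase pointers for a fixed number of steps and time it */
                  size_t steps = 10 * 1000 * 1000, idx = 0;
                  struct timespec a, b;
                  clock_gettime(CLOCK_MONOTONIC, &a);
                  for (size_t s = 0; s < steps; s++)
                      idx = ring[idx];
                  clock_gettime(CLOCK_MONOTONIC, &b);

                  double ns = (b.tv_sec - a.tv_sec) * 1e9
                            + (b.tv_nsec - a.tv_nsec);
                  /* printing idx keeps the loop from being optimized away */
                  printf("%3zuMB working set: %.1f ns/access (idx=%zu)\n",
                         mb, ns / steps, idx);
                  free(ring);
              }
              return 0;
          }
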
  • For my CFD tests, I need more than 256GB of memory, but the jump to Threadripper costs a lot. Threadripper motherboards are about three times more expensive, and for the same number of cores, the chips are more than twice the price. Sigh. [Anyone got an old Threadripper or Epyc they don't need? It doesn't need to be that fast, but it needs at least 256GB, or better 512GB or more, of memory. Currently, doing CFD on a car body, I can get the mesh down to 9.5mm. It really needs to be 4mm, and the point of diminishing returns might be 2mm.]
  • I never thought I'd even ask for this, but after buying a 13900K and having it turn my office into a sauna, I need to ask: how much heat does this thing pump out? Or to put it another way, is it more thermally efficient than the higher-end Intel chips?

    It's seriously a problem when you have to locate your case in another damn room.

  • Why does everyone still mention just "gaming" for high-performance CPUs? Occasionally also "content creation". Why not "programming", particularly on Slashdot? Half an hour for an LLVM rebuild on a 16-core CPU is a pain. Six minutes or so for OpenJDK is also not fun. Even my own C++ project takes minutes to rebuild when I touch some *.hpp file.
