AMD Launches 16-Core Ryzen 9 3950X At $750, Beating Intel's $2K 18-Core Chip (hothardware.com) 67

MojoKid writes: AMD officially launched its latest many-core Zen 2-based processor today, a 16-core/32-thread beast known as the Ryzen 9 3950X. The Ryzen 9 3950X goes head-to-head against Intel's HEDT flagship line-up, such as the 18-core Core i9-9980XE, but at a much more reasonable price point of $750 (versus over $2K for the Intel chip). The Ryzen 9 3950X has base and boost clocks of 3.5GHz and 4.7GHz, respectively. The CPU cores at the heart of the Ryzen 9 3950X are grouped into two 7nm 8-core chiplets, each with two four-core compute complexes (CCXs). Those chiplets link to an IO die that houses the memory controller, PCI Express lanes, and other off-chip IO. The new 16-core Zen 2 chips also use the same AM4 socket and are compatible with the same motherboards, memory, and coolers currently on the market for lower core-count AMD Ryzen CPUs. Throughout all of Hot Hardware's benchmark testing, the 16-core Ryzen 9 3950X consistently finished at or very near the top of the charts in every heavily-threaded workload, and handily took Intel's 18-core chip to task, beating it more often than not.
This discussion has been archived. No new comments can be posted.

  • One word (Score:2, Funny)

    by The-Ixian ( 168184 )

    thundercougarfalconbird [urbandictionary.com]

  • I don't know why they're opting not to increase the bandwidth to the chip. This has been a problem with the AMD platform at least since the first Opterons came out: they had DDR2/3 while Intel had DDR3/4. Even now, the bandwidth and latency between the processor and memory matter in real-life situations. Intel has quad-channel memory; these have dual channel.

    Chip design is hard, but these days it's not just raw calculations anymore that are important; anything I/O related (networking etc) will require much more memory bandwidth than it requires CPU cycles.

    • You are asking why AMD didn't fall into the trap Intel is in.
    • by AmiMoJo ( 196126 )

      Performance in consumer apps is on par with or better than Intel chips costing 3x as much. It even supports ECC RAM for workstations.

      I doubt AMD really care about memory benchmark numbers. App and game benchmarks speak for themselves.

    • by crgrace ( 220738 )

      Cost. This is one key way that AMD is able to sell their chip for less than Intel.

    • It's all about backwards compatibility and market segmentation. The Zen 2 parts are designed to slot into existing AM4 motherboards, which means dual channel memory - and frankly, for most consumer applications, you don't need more memory bandwidth. Memory latency matters a lot, but there isn't a whole lot AMD can do about that aside from throwing more L3 cache on their chips to hide it as best as possible.

      If you need more memory bandwidth, then you'll want to look at their Threadripper or EPYC lineup. Same

    • by willy_me ( 212994 ) on Thursday November 14, 2019 @05:53PM (#59415338)

      Intel has quad-channel memory; these have dual channel.

      True, but Threadrippers have quad-channel memory, and it looks like they have been announced for the end of the month. If you follow the link in the article, it leads to the announcement. Or you can click here [hothardware.com].

      Chip design is hard, but these days it's not just raw calculations anymore that are important; anything I/O related (networking etc) will require much more memory bandwidth than it requires CPU cycles.

      Yes, but anything I/O related will still be limited by I/O. Memory bandwidth is not really an issue when it comes to data streaming because memory is significantly faster than the I/O it is feeding. Any work the CPU must perform on the data will typically fit within the CPU cache, so there will be no additional memory overhead.

      Where memory bandwidth becomes a concern is with things such as FFTs, where the CPU does little work but that work is spread out over a large memory area. In this scenario, the CPU cannot make effective use of the cache and ends up waiting on memory. But even in this case, the number of memory channels is not the biggest problem. Memory latency is more important.

      There are a few benchmarks where the quad-channel Intel solutions are fantastic. But they are the exception and not the norm. This demonstrates that quad channel is great - but not required for most applications.
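
      As a rough sanity check on that argument, here is a back-of-the-envelope, roofline-style sketch; the bandwidth and FLOP figures are illustrative assumptions, not measured numbers for any specific chip:

```python
import math

# Illustrative assumptions, not measured figures for any specific CPU:
DUAL_CHANNEL_BW = 50e9   # bytes/s, ballpark DDR4-3200 dual-channel peak
PEAK_FLOPS = 1e12        # FLOP/s, ballpark for a many-core desktop chip

def bound_by(flops_per_byte):
    """Roofline-style test: below the machine-balance ratio, memory is the limit."""
    machine_balance = PEAK_FLOPS / DUAL_CHANNEL_BW  # FLOPs affordable per byte moved
    return "memory-bound" if flops_per_byte < machine_balance else "compute-bound"

# Streaming 10 Gb/s of network traffic barely dents memory bandwidth:
print(f"10GbE uses {10e9 / 8 / DUAL_CHANNEL_BW:.1%} of dual-channel bandwidth")

# A large out-of-cache FFT does ~5*log2(n) FLOPs per complex element (16 bytes):
n = 2**26  # 64M points, far larger than any L3 cache
fft_intensity = 5 * math.log2(n) / 16
print(f"FFT: ~{fft_intensity:.1f} FLOPs/byte -> {bound_by(fft_intensity)}")
```

      The streaming case supports the parent's point that I/O rates are tiny next to DRAM bandwidth, while the FFT lands below the machine-balance line and ends up waiting on memory.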

    • by Kjella ( 173770 )

      I don't know why they're opting not to increase the bandwidth to the chip. (...) Intel has quad-channel memory; these have dual channel.

      Because this isn't AMD's HEDT platform. It's their mainstream platform, and it blurs the line if all you need is CPU cores. On November 25th they're launching their Threadripper 3 platform with 24/32 cores, quad memory channels and a ton of PCIe lanes. There'll probably be a lower-end processor eventually if you just need the memory/PCIe lanes. Those are the ones intended to kick Xeon's ass; this one is just a boxer punching above his weight. Of course it's not exactly cheap anymore but you'll be able to get

    • Because, when you're fast enough, you don't have to remember things.
    • by Frobnicator ( 565869 ) on Thursday November 14, 2019 @06:46PM (#59415454) Journal

      Chip design is hard, but these days it's not just raw calculations anymore that are important; anything I/O related (networking etc) will require much more memory bandwidth than it requires CPU cycles.

      You started down the WHY path there. It depends on your workload.

      For tasks that are compute-intensive, AMD has been a distant second behind Intel for about fifteen years.

      At work we just added a Ryzen 9 3900X to our content cooking servers, basically the one notch below this new 3950X in the article. When I first saw they were going to buy an AMD chip for a server I was shocked, wondering if someone was trying to save a buck at a serious penalty to performance. I was pointed to the Zen 2 architecture changes, studied them, and thought "maybe it won't be a complete failure."

      One of Intel's strengths was that the core's execution ports could nearly always be kept busy. AMD's decision to split off integer and floating point gained some strength in some computing uses, but not for us. Comparing the two, they still have many similarities, both decoding up to 4 instructions per cycle; I worried that would hurt AMD's ability to keep the CPU fed, since on Intel's systems hyperthreading in effect feeds up to 8 instructions per cycle across two threads. But AMD's SMT system is basically the same, and we subsequently verified that it keeps the CPU well fed, on par with what Intel has done for years. The Zen 2 design still has that integer/floating point split, which I think means it spends more time idle internally. But a few things in the specs intrigued me: a beefier scheduler, better caches and predictors, and, I think most critically, two loads and one store per cycle, which addresses a data bottleneck.

      Once we started running builds on it, I was shocked at how it kept up with the workload. A full data cook for us is a massive number-crunching operation on a heavily optimized, moderately cache-friendly dataset. An incremental data cook is about 7 hours; a full data cook is a full day, give or take. Among our build servers, the Intel 9900K machine we recently added was fast, always finishing an hour or two ahead of the other machines. But the Ryzen 9 3900X beats all our other machines and is now routinely the first to finish by a fair margin, and in some situations can complete double the work of our less powerful machines. While keeping the cook servers fed with data is a challenge and NVMe drives help, when it comes to raw number-crunching power I've been pleasantly surprised by the Zen 2 architecture.

      Intel hasn't faced serious competition in our compute server marketplace for a decade and a half. This new architecture has reversed that trend.
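
      For a sense of scale on the "two loads and one store per cycle" point above, here is a quick illustrative calculation; the 4 GHz all-core clock and 256-bit access width are assumptions for the sketch, not quoted specs:

```python
# Peak L1 data traffic implied by "2 loads + 1 store per cycle", per core.
# Assumptions: 4.0 GHz all-core clock, 256-bit (32-byte) AVX2 accesses.
CLOCK_HZ = 4.0e9
ACCESS_BYTES = 32

load_bw = 2 * ACCESS_BYTES * CLOCK_HZ    # bytes/s of loads per core
store_bw = 1 * ACCESS_BYTES * CLOCK_HZ   # bytes/s of stores per core

print(f"per core: {load_bw / 1e9:.0f} GB/s loads + {store_bw / 1e9:.0f} GB/s stores")
# ~256 GB/s of loads per core versus ~50 GB/s of DRAM for the whole chip:
# hot loops live or die by the cache hierarchy, not by DRAM channels.
```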

    • by jon3k ( 691256 )
      Because, along with PCIe lanes, that is what differentiates their HEDT (i.e., Threadripper) parts from their desktop parts, like the 3950X.
    • Comment removed based on user account deletion
    • by Anonymous Coward

      Intel gave up security for performance. Why would AMD want to fall into that trap?

      • Because it is what sells. I would gladly buy an AMD chip at AMD prices with the (completely irrelevant to me) security benefits traded off for improved performance over Intel.

    • by fintux ( 798480 )
      This is a mainstream desktop part. Intel's mainstream flagship also has dual-channel DDR4, but rated at 2666 MHz whereas the 3950X is rated at 3200 MHz. Sure, this chip is capable of taking on the Intel HEDT parts, which have quad-channel memory. But so do AMD's ThreadRipper CPUs (and again, they support 3200 MHz RAM vs. 2666 from Intel). Also, the ThreadRipper supports 1 TB of RAM while Intel's HEDT platform only supports a max of 128 GB (the same amount that is supported by the 3950X). So in all regards AMD has the advantage.
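
      For reference, the peak theoretical bandwidth of the configurations cited above works out as follows (DDR4 moves 8 bytes per channel per transfer):

```python
# Peak theoretical DDR4 bandwidth: megatransfers/s * 8 bytes * channels.
def ddr4_bw_gbs(mt_per_s, channels):
    return mt_per_s * 1e6 * 8 * channels / 1e9

print(ddr4_bw_gbs(3200, 2))  # 51.2 GB/s  -- 3950X, dual channel
print(ddr4_bw_gbs(2666, 2))  # 42.7 GB/s  -- Intel mainstream, dual channel
print(ddr4_bw_gbs(3200, 4))  # 102.4 GB/s -- ThreadRipper, quad channel
print(ddr4_bw_gbs(2666, 4))  # 85.3 GB/s  -- Intel HEDT, quad channel
```
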
  • .. is an SSD chiplet, a networking chiplet, and a southbridge/PCH chiplet.

    Then maybe, if they were vertically mounted like blades ... and user-replaceable ... In some kind of ... desktop case ...

  • by UnknownSoldier ( 67820 ) on Thursday November 14, 2019 @05:37PM (#59415296)

    As Gamer's Nexus points out @2:16 [youtu.be], the 4.7 GHz boost clock only applies to a single core. Clock speed under full load was 3,924 MHz @3:55 [youtu.be] -- a far cry from the advertised 4.7 GHz boost.

    Likewise, the advertised 105 W TDP @ 2:38 [youtu.be] is neither accurate nor useful. To be fair, Intel plays shenanigans with its TDP numbers as well.

    With all that said, a 16-core / 32-thread CPU for $750 is amazing -- this is approaching HEDT territory. I'm looking to pick up a ThreadRipper soonish, and this throws a real monkey wrench into the works. Even more so given that a TR 1920X [amazon.com] is only $200!

    It's funny to see AMD's 3950X with 16C/32T beating Intel's i9-7980XE 18C/36T in Blender. And when overclocked, it compares (10.3 minutes) with Intel's $1,949 i9-9980XE 18C/36T (10.2 minutes) @13:00 [youtu.be]. Of course Blender rendering is embarrassingly parallel, so its performing extremely well is to be expected.

    For gaming, the i7-9700K is far faster and cheaper, but part of picking a good CPU is knowing HOW it will be used. Rendering? Transcoding? Gaming? Decompression? Future-proofing?

    It will be REALLY interesting to see how the 32C/64T ThreadRipper 3970X performs compared to the R9 3950X, and, if AMD releases a 64C/128T Threadripper 3990WX in 2020, how the 3rd-gen TR scales. The end of November can't come fast enough!
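
    A quick Amdahl's-law sketch shows why thread count alone decides little outside embarrassingly parallel jobs like rendering; the parallel fractions below are illustrative, not measurements of any real workload:

```python
# Amdahl's law: speedup vs. thread count for a given parallel fraction p.
def amdahl(p, n):
    return 1.0 / ((1 - p) + p / n)

for p in (0.99, 0.95, 0.80):
    s32 = amdahl(p, 32)  # Ryzen 9 3950X: 32 threads
    s36 = amdahl(p, 36)  # Core i9-9980XE: 36 threads
    print(f"p={p:.2f}: 32T -> {s32:.1f}x, 36T -> {s36:.1f}x")
# p=0.99: 24.4x vs 26.7x -- four extra threads buy less than 10%.
# p=0.80:  4.4x vs  4.5x -- core count barely matters; clocks and IPC decide.
```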

    • by AmiMoJo ( 196126 )

      The new Threadripper is going to have to be incredible to avoid people buying this instead. Unless you need loads of PCIe lanes it's going to be hard to justify spending 2-3x as much on a new Threadripper.

      • Few people will buy Threadripper because they need it.

        I built a new PC this summer and put the bottom end Ryzen 3K (6 core) processor in it. Someday I hope to be able to afford the 3950X, though truth be told, I don't need it either :-)

        But the primary benefit of my PC upgrade is that it means I get to retire my Skylake build. I had to leave the AMD family for a few years until something worth building came along.
        • Few people will buy Threadripper because they need it.

          Are you trying to say that server workloads don't exist? Plenty of people will buy Threadripper because they *need* it.

          • If they are running server workloads, wouldn't they buy EPYC?
            • Depends on what the server is doing. Threadripper offers different cores per dollar, I/O per dollar, and memory throughput per dollar than EPYC. Pick the chip to suit the application.

      • Yes, it will be extremely interesting to see how the 3rd gen ThreadRippers play out.

      • My gut feeling is that for heavily parallel compute tasks it will offer kinda crazy performance, as it's been confirmed to be a 4+1 die part. In addition to the two extra "compute" dies, the IO die is even beefier and there's a new chipset, giving third-gen Threadripper a grand total of 88 PCIe 4.0 lanes, of which 72 are available to peripherals.

        If you look at the pricing you can see that the Ryzen 9-series (3900 and 3950) are going for Intel's very high end, but not workstation CPUs while the new Threadrip
    • by jon3k ( 691256 )
      "Far faster" is quite a stretch, but certainly less expensively if all you want to do is game. For anyone buying these CPUs they are GPU bound anyway so the difference won't matter. For example, World of Tanks [anandtech.com]. Either you play at 1080p and get 395 vs 345 frames per second, which your monitor cannot even display either way or you play at 4K and the fps difference is less than 1fps.

      Also worth mentioning that Gamer's Nexus had the worst results of pretty much any of the reviews that came out today. Some people think they lost the "silicon lottery."
      • Also worth mentioning that Gamer's Nexus had the worst results of pretty much any of the reviews that came out today. Some people think they lost the "silicon lottery."

        By "some people", you should include Gamer's Nexus themselves. You know what they didnt do? They didnt try to find a better chip. Most of the "reviewers" are trying to compete on performance, while Gamer's Nexus is just trying to do a good job as a pc tech news source.

        • by jon3k ( 691256 )

          They didn't try to find a better chip.

          Not sure what you are suggesting. No one could try to "find a better chip". They aren't available via retail yet, so the reviewers had whatever review sample they were provided.

    • This reminds me of the time Intel first came out with their clock-multiplied 486 DX2-66 and everyone was crying foul because it wasn't a "real" 66 MHz since the motherboard was only running at 33 MHz.
      Intel and AMD seem to be doing more and more to squeeze the most MHz out of their CPUs at any given moment, further blurring the lines when it comes to what speeds the CPU, motherboard, and RAM are actually running at.

    • by fintux ( 798480 )

      The 4.7 GHz boost clock only applies to a single core. Clock speed under full load was 3,924 MHz @3:55 [youtu.be] -- a far cry from the advertised 4.7 GHz boost.

      From AMD's specifications for the product: "Max boost for AMD Ryzen processors is the maximum frequency achievable by a single core on the processor running a bursty single-thread workload." Also, 3,924 MHz is well above the specified 3.5 GHz base clock.

      Likewise, the advertised 105 W TDP @ 2:38 [youtu.be] is neither accurate nor useful. To be fair, Intel plays shenanigans with its TDP numbers as well.

      You're right, and this number has kind of lost its meaning. However, on AMD, as far as I know, the number more realistically describes the max power draw when running in-spec, whereas with Intel the number is measured at the base clock running a defined workload, so the actual draw under boost can be much higher.

      • Thanks for confirming that the boost is only for a single core!

        • by fintux ( 798480 )
          To be pedantic here, the maximum boost is for a single core only, and some (unspecified) amount of boost applies to multiple cores, even to all cores. However, the base clock is the minimum sustained clock. For Intel, the base clock is just the frequency at which the TDP has been measured; their CPUs can drop below the base frequency, for example when executing AVX-512 code.
          • by fintux ( 798480 )
            Whoops, didn't mean to emphasize all of the end of the post, sorry for almost yelling :P
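
      If you want to watch this boost behavior yourself, here is a minimal Linux-only sketch; it assumes the standard cpufreq sysfs layout, which varies by kernel and governor:

```python
import glob

def core_freqs_mhz():
    """Sample each core's current frequency from the cpufreq sysfs files."""
    freqs = {}
    for path in sorted(glob.glob(
            "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq")):
        cpu = path.split("/")[5]                # e.g. "cpu12"
        with open(path) as f:
            freqs[cpu] = int(f.read()) / 1000   # kHz -> MHz
    return freqs

if __name__ == "__main__":
    freqs = core_freqs_mhz()
    print(f"min {min(freqs.values()):.0f} MHz, max {max(freqs.values()):.0f} MHz")
    # Under a single-threaded load, one core should approach the advertised
    # max boost; under an all-core load, expect a lower sustained figure.
```
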
  • by steveha ( 103154 ) on Thursday November 14, 2019 @05:46PM (#59415322) Homepage

    This chip has two TDP numbers: 105 W and 65 W. You can run it at 65 W by limiting the maximum performance. If you want to use the chip at its full potential, AMD recommends water cooling.

    It strikes me that instead of water cooling you could use something like Apple's "trash can" form factor from the 2013 Mac Pro [anandtech.com]. That form factor is designed around one really big heat sink, in a vertical "chimney" configuration with one big slow fan at the top to air-cool the system.

    The heat sink has three faces; one has the mainboard and the other two have matched GPU boards. This turned out to be a poor solution [theverge.com] for actual pro users, who would rather have one really powerful GPU than two medium-powerful GPUs; but sold at a reasonable cost, it would make a fantastic solution for most users.

    So I wished even back in 2013, and still wish now, that someone would make an open standard that works the same way. One really big heat sink, perhaps with heat pipes inside it. A vertical orientation. One big slow (and quiet) fan to cool everything. It can be a squared-off box instead of a beautiful gleaming cylinder. Call the motherboard size standard "Tower-ITX" or something.

    I guess it won't happen. The market for it would be "people who want lots of desktop CPU power with only quiet air cooling," while both pros and hardcore gamers want the maximum available CPU power combined with the maximum available GPU power (and will put up with cooling fans that sound like leaf blowers).

    But I think we are reaching the point where designing a total system around cooling the CPU will be a good idea.

    P.S. I think Apple could sell a lot of a "Mac Pro Mini", which would be just the 2013 Mac Pro design with a desktop processor and a mainstream GPU (suitable for casual gaming) at a reasonable price. Apple is extracting maximum money from its customers by not offering such a product (they can buy an iMac Pro for $5000 instead), so Apple won't do it.

    • I wish my old gaming rig had been able to interface with my central heating system; I would have saved some serious money!
    • Just good old vertically mounted boards. Like the mainboard.
      I always wondered why they didn't mount the PCI(e) cards vertically in tower designs. Maybe because the connectors would have to be on a side other than the back. This could have been an advantage though, as it would make cases with a locked panel, to prevent connectors from being removed, more natural.

    • So I wished even back in 2013, and still wish now, that someone would make an open standard that works the same way. One really big heat sink, perhaps with heat pipes inside it. A vertical orientation. One big slow (and quiet) fan to cool everything.

      If my Brix VR [gigabyte.com] is anything to go by, even with heat pipes, a shared heat sink for this chip and an RTX 2080 would have to be the size of an actual trash can to keep the fan speed you desire. The Brix VR sounds like a jet engine when you max out its chips.

      You could build your own and find out. You'll need access to a milling machine, but once you fabricate your custom heatsink, you can wrap the system around it using PCIe 'riser' cables [amazon.com]. Custom system builders now use these things as a matter of course. You could

    • Comment removed based on user account deletion
      • TDP is a totally meaningless figure for CPUs these days, as is the number of watts you can dissipate in a heatsink. AMD's 7nm line has significantly higher thermal-transfer requirements, which is why their GPUs have vapour chambers rather than heatpipes and why they recommend AIO watercoolers rather than stupidly oversized heatsinks.

    • by Khyber ( 864651 )

      " If you want to use the chip at its full potential, AMD recommends water cooling."

      I don't trust AMD to tell me what they fucking need for cooling. They told me to get a water cooler for my FX-9370. They neglected to tell me that the fucking water block would block all the airflow that goes over the VRM heatsinks on their recommended motherboard, so the entire fucking thing kept overheating no matter what.

      Threw on a HSF and that 225W TDP processor runs at 65C fully loaded and hasn't had an overheating issue since.

    • by AmiMoJo ( 196126 )

      The advantage with water cooling is the immense thermal mass of the water. It allows the CPU to sustain very high boost clocks for longer than air cooling, simply because the water takes longer to come up to temperature.

      It's also hard to beat 3x 120mm fans for good cooling and low noise. Some air coolers have 2x 120mm fans, but they aren't as efficient as a 2x 120mm radiator.
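
      Rough numbers behind the thermal-mass argument; the loop volume, temperature headroom, and excess-heat figures are illustrative assumptions:

```python
# How long can the coolant alone soak up heat the radiator is not yet
# removing? This ignores the radiator entirely, so reality is kinder.
WATER_HEAT_CAPACITY = 4186   # J/(kg*K)
LOOP_MASS_KG = 0.4           # ~400 ml of coolant in a small AIO loop
EXCESS_HEAT_W = 150          # boost heat beyond steady-state removal
HEADROOM_K = 20              # allowed coolant temperature rise

seconds = LOOP_MASS_KG * WATER_HEAT_CAPACITY * HEADROOM_K / EXCESS_HEAT_W
print(f"{seconds:.0f} s of extra boost absorbed by the water alone")  # ~223 s
# A few hundred grams of aluminium fins (~0.9 J/(g*K)) store far less,
# which is why an air cooler reaches its equilibrium temperature sooner.
```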

    • It sounds like you might be interested in the Compulab Airtop 3 [fit-iot.com]. Unfortunately, it's not modular and they don't currently sell it with AMD chips, but it manages to pack a ton of power into an extremely small case while using only passive cooling and maintaining minimal thermal throttling. Linus Tech Tips gave it a very favorable review [youtu.be].
    • This chip has two TDP numbers: 105 W and 65 W.

      Both of these numbers are completely irrelevant. The chip draws close to 200W when running full tilt. Both Intel and AMD advertise TDP numbers that they pull out of the darkest crevices of their rears and that have nothing to do with reality.

      It strikes me that instead of water cooling you could use something like Apple's "trash can" form factor from the 2013 Mac Pro [anandtech.com]. That form factor is designed around one really big heat sink, in a vertical "chimney" configuration with one big slow fan at the top to air-cool the system.

      Watercooling is not just about the total watts removed. A large air cooler could comfortably radiate away the required heat from the chip. The problem with a lot of modern designs is that the heat source keeps getting smaller, and it starts being harder to physically absorb the heat from such a small area.
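
      You can check the real draw yourself on Linux by sampling the RAPL energy counter through the powercap interface; the path below is the common layout for recent Intel parts (newer kernels expose Zen 2 the same way), but it may differ on your system:

```python
import time

# Package-0 energy counter in microjoules; often requires root to read.
RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_uj():
    with open(RAPL) as f:
        return int(f.read())

e0, t0 = read_uj(), time.time()
time.sleep(5)                     # run an all-core load in the meantime
e1, t1 = read_uj(), time.time()

# The counter wraps eventually; that is fine for short samples like this.
print(f"average package power: {(e1 - e0) / 1e6 / (t1 - t0):.1f} W")
```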
