AMD Launches Lower Cost 12- and 24-Core 2nd Gen Ryzen Threadripper Chips (hothardware.com)

MojoKid writes: AMD launched its line of second-generation Ryzen Threadripper CPUs over the summer, but at the time the company offered only 16-core and 32-core versions. Today, however, it began shipping 12-core and 24-core versions of the high-end desktop and workstation chips, dubbed Ryzen Threadripper 2920X and 2970WX, respectively. All 2nd Generation Ryzen Threadripper processors feature the enhanced boost algorithm that came with AMD's Zen+ architecture, which is more opportunistic and can boost more cores more often. They also offer higher clocks and lower latency, and are somewhat more tolerant of higher memory speeds. All of AMD's Ryzen Threadripper processors feature 512KB of L2 cache per core (6MB total on the 2920X and 12MB on the 2970WX), quad-channel memory controllers (2+2), and are outfitted with 64 integrated PCI Express Gen 3 lanes. The new Ryzen Threadripper 2920X has a 180W TDP, while the 2970WX has a beefier 250W TDP. In highly threaded workloads, the Threadripper 2920X outpaces the far more expensive 10-core Intel Core i9-7900X, while the 24-core / 48-thread Threadripper 2970WX is the second most powerful desktop processor money can buy right now: it's faster than Intel's flagship Core i9-7980XE, trailing only AMD's own 32-core Threadripper 2990WX. Pricing comes in at $649 for the 12-core 2920X and $1299 for the 24-core Threadripper 2970WX.
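
The per-core cache and pricing figures are easy to sanity-check (a quick sketch; core counts, cache size, and prices are taken from the summary above):

L2_PER_CORE_KB = 512

chips = {
    "Threadripper 2920X":  {"cores": 12, "price_usd": 649},
    "Threadripper 2970WX": {"cores": 24, "price_usd": 1299},
}

for name, c in chips.items():
    total_l2_mb = c["cores"] * L2_PER_CORE_KB / 1024   # 512KB per core
    print(f"{name}: {total_l2_mb:.0f}MB L2 total, ${c['price_usd'] / c['cores']:.0f} per core")

# Threadripper 2920X: 6MB L2 total, $54 per core
# Threadripper 2970WX: 12MB L2 total, $54 per core
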
  • by Anonymous Coward

    Is all you need to know. (Oh yeah, and PCI-E lanes, and they don't have the money to bribe benchmarkers, and their PSP is a far cry from the full Intel IME. Oh yeah, and hyperthreading lol.)

    • by Anonymous Coward on Monday October 29, 2018 @09:00PM (#57559385)

      Well, it's saner than Intel's crippled offerings. Intel wants you to buy the Xeon platform just to get enough lanes to do SLI.

      There is no reason the CPU should not have sufficient lanes for two PCIe video cards, let alone four, plus four NVMe M.2 PCIe x4 drives. So 4x16 = 64, plus 16 for hard drives, and 4 lanes per Thunderbolt USB-C port (most boards only have one right now): you really need about 84 lanes to cover every use case short of a dual-processor server board (tallied below).

      Most people will not be running four GeForce RTX 2080s and/or four NVMe devices, but the fact is that many people have money to burn, and the Intel platform is not sufficient even with the Xeons. You can't access more than 64 PCIe lanes even on the most insanely overpriced chip.
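
      A quick tally of the lane budget above (a sketch; the per-device lane counts are the poster's assumptions, not a spec):

      gpus        = 4 * 16   # four x16 graphics cards
      nvme        = 4 * 4    # four NVMe M.2 x4 drives (the "16 for hard drives")
      thunderbolt = 1 * 4    # one Thunderbolt/USB-C port at x4

      print(gpus + nvme + thunderbolt)   # 84 lanes, matching the figure above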

      • Someone mod this coward up.
      • by JBMcB ( 73720 )

        Are 16 lanes per GPU that important? There was an old benchmark from a few years ago that compared SLI using 2x16 to 1x16/1x8, and there was hardly any difference. Granted, this was on relatively old hardware (I think 8xx series GeForce), but from what I remember the important part was the bandwidth for loading textures into the GPU. Since modern GPUs have gobs of RAM, and the faster SLI bridges let them pool memory better, it's less of an issue.
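
        For reference, the raw link bandwidth gap between x16 and x8 is easy to work out (approximate figures; PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding):

        GT_PER_S   = 8.0          # PCIe 3.0 transfer rate per lane
        EFFICIENCY = 128 / 130    # 128b/130b line-coding overhead

        def pcie3_gbytes_per_s(lanes):
            return GT_PER_S * EFFICIENCY * lanes / 8   # bits -> bytes

        print(f"x16: {pcie3_gbytes_per_s(16):.1f} GB/s")   # ~15.8 GB/s
        print(f"x8:  {pcie3_gbytes_per_s(8):.1f} GB/s")    # ~7.9 GB/s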

        • by aliquis ( 678370 )

          There never were any desktop 8xx parts, only laptop ones; desktop went straight from 7xx to 9xx.

          The PCI Express version is still 3.0; it may have been 2.0 when Linus Tech Tips or whoever tested it back then, but graphics cards have become faster.

          On the other hand, AMD has abandoned the CrossFire branding completely, and SLI is pretty dead/unused too. In DX12 you may even be able to force rendering onto two cards, but really close to no one is using two cards for gaming, and even closer to no one is using four for that pur

      • Most people will not be running four GeForce RTX 2080s and/or four NVMe devices, but the fact is that many people have money to burn

        Remember also that we're talking about people buying Threadrippers / 9900Ks. In the Venn diagram, people with money to burn and customers for these products overlap greatly.

        • But still, 64 PCIe lanes for the Threadripper family vs. 16 for the 9900K is just stupid.

          We just set up a 32-core Threadripper with 64GB of ECC memory (yes, ECC!), SLIed video cards, a beefy x8 PCIe SAS controller for bulk storage, and an x4 NVMe boot SSD. Used for scientific computing. Cost about $6k. It compares pretty favorably with the dual-CPU 16-core Epyc setup we installed earlier this year for about $9.5k. Amazing.

          • No "but still" about it. I fully agree with you. I was supporting the AC's statements that the type of customers using these chips are precisely the type of customers who would use the extra PCI-E lanes.

      • by aliquis ( 678370 )

        Close to 0% buy two graphics cards. Even closer for four.

        As for the number of PCI Express lanes: if we remove what's used to connect the chipset, the Ryzen CPU offers:
        20 lanes, of which 16 go to the GPU slot(s).
        Whereas the Intel one offers:
        16 lanes.
        So that's one extra M.2 drive connected to the CPU on Ryzen.
        However, on the chipset side they offer:
        X470: 8x PCI Express 2.0.
        Z370: 24x PCI Express 3.0.
        Typically a secondary M.2 slot on an X470 board only runs at 4x PCI Express 2.0 instead; on some boards it takes la
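
        Putting those numbers side by side (a sketch using the lane counts quoted above):

        platforms = {
            "Ryzen + X470": {"cpu_gen3_lanes": 20, "chipset_lanes": 8,  "chipset_gen": "2.0"},
            "Intel + Z370": {"cpu_gen3_lanes": 16, "chipset_lanes": 24, "chipset_gen": "3.0"},
        }

        for name, p in platforms.items():
            print(f"{name}: {p['cpu_gen3_lanes']}x Gen3 from the CPU "
                  f"+ {p['chipset_lanes']}x Gen{p['chipset_gen']} from the chipset")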

    • Far more expensive and not available. Right now Amazon doesn't even have a preorder page up; that disappeared last week when scalpers were flipping chips at $1000 each.

  • by SuperKendall ( 25149 ) on Monday October 29, 2018 @08:52PM (#57559359)

    Revised Mac Mini, offering an AMD chip.

    Maybe even the redesigned Mac Pro...

    To me it's been quite odd that Apple is so keen on AMD GPUs, while never using them for primary processors.

    • While I agree that Apple giving AMD a spin would be welcome, I can also see the reasons why there hasn't been any movement:

      - Apple's relationship with ATI is positively ancient, and AMD buying ATI didn't affect it. That explains the continued use of the AMD Radeon line of GPUs.

      - The transition from IBM PowerPC to Intel Core CPUs was done with classic Steve Jobs flamboyance, and with it Apple got the preferential Intel treatment that had been the domain of Dell. (Whole other story there, methinks.)

      • The GPU integration has me scratching my head. AMD's integrated graphics slaughter Intel's in any fair (equal-dollar) comparison. I don't see how the deal with Intel benefits AMD.
        • The GPU integration has me scratching my head. AMD's integrated graphics slaughter Intel's in any fair (equal-dollar) comparison. I don't see how the deal with Intel benefits AMD.

          Because Intel still sells a lot more desktop chips than AMD. AMD probably won't be making any serious moves in the low to mid-range desktop CPU market any time soon. Intel has that locked up. So you may as well make money selling AMD graphics on those low end chips.

          There is something to be said for making strategic decisions on not partnering with potential competition. There is also something to be said for selling as much product as you can to make money.

          • AMD probably won't be making any serious moves in the low to mid-range desktop CPU market any time soon. Intel has that locked up.

            What makes you think that? For example, an HP Pavilion with a Ryzen 2400G goes for $488. Looks like low-to-midrange to me.

          • Comment removed based on user account deletion
      • And with that Apple got preferential Intel treatment that had been the domain of Dell. (Whole other story there me thinks.) Apple won't be in any rush to jeopardise this position.

        Just spitballin' here, but something tells me that whatever preferential treatment Apple got when they switched the Macbooks over may well be back in Dell's hands.

        Yes, the MacBook is still the darling laptop of college kids... but Apple's bread and butter has been iOS devices, which have all used in-house CPUs for years (and off-the-shelf ARM designs before that). Intel doesn't get a slice of that very, very large pie. Moreover, Apple's versions of Intel processors are at least somewhat custom runs, since they are all soldered in n

        • And with that Apple got preferential Intel treatment that had been the domain of Dell. (Whole other story there me thinks.) Apple won't be in any rush to jeopardise this position.

          I think you are correct, considering that A) Apple sells new machines with 4-to-5-year-old CPU models, and B) Apple is supposedly designing its own desktop and laptop chips to be rolled out in about two years.

    • by tlhIngan ( 30335 )

      To me it's been quite odd that Apple is so keen on AMD GPUs, while never using them for primary processors.

      Easy: part shortages. One problem Apple had during the PowerPC era was that neither Motorola (later Freescale) nor IBM could supply the processors Apple wanted. It was so bad that the top-end models Apple offered would be delayed for weeks because yields were terrible. This happened so consistently it was predictable - if you wanted a high-end configuration, you reloaded

      • by Bert64 ( 520050 )

        Intel has been offering integrated third-party GPUs for a while; several of the Atom chips were available with a PowerVR GPU.

  • So long as these hefty core counts per socket don't end up in my per-core-licensed devices, I should be OK....
    • by torkus ( 1133985 )

      And I think that's what AMD is missing here. Cramming 24 or more cores into a CPU has long since passed the point of diminishing returns for anything but highly optimized, heavily threaded applications... and those same applications typically cost FAR more per core to license than the CPUs themselves. For the average consumer, and even the performance kiddies, I don't see how they're really winning much in real-world terms.
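
      Amdahl's law makes the diminishing-returns point concrete (a sketch; the 95% parallel fraction is just an illustrative assumption):

      def speedup(p, n):
          """Amdahl's law: speedup on n cores with parallel fraction p."""
          return 1.0 / ((1.0 - p) + p / n)

      p = 0.95   # assume 95% of the workload parallelizes
      for n in (8, 12, 16, 24, 32):
          print(f"{n:2d} cores: {speedup(p, n):4.1f}x")

      # 8: 5.9x, 12: 7.7x, 16: 9.1x, 24: 11.2x, 32: 12.5x --
      # the jump from 24 to 32 cores buys barely 1.3x more.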

      • by Anonymous Coward

        The world is a great deal bigger than you think.

        There are _plenty_ of parallel workloads out there other than Oracle or whatever you're thinking of. I have a hard time understanding why you want to deny "average consumers" playing with Blender, encoding video, or even some of the FEA packages out there, just to give a few examples.

        Not that the "average consumer" does much more than browse and watch YouTube or Netflix, for which you have absolutely no use for a Threadripper or one of Intel's counterparts.

        And

  • Oft overlooked (Score:4, Interesting)

    by NewtonsLaw ( 409638 ) on Monday October 29, 2018 @11:19PM (#57559791)

    About six months ago I built a new budget video-editing rig. I was torn between going with an i7-8700 or an AMD 2700, but opted for Intel because of QSV.

    QSV allows decoding and encoding H.264 and H.265 video using the on-chip video hardware. It's brilliant watching my six cores idle while rendering 4K video into H.265 files at realtime speeds. Try that with your AMD processor :-)

    However, these days I'd probably go for the 1950X Threadripper (cheaper and almost as good as the 2950X), because those extra cores *are* useful in good video NLEs such as DaVinci Resolve.
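
    That hardware path looks roughly like this with ffmpeg (a minimal sketch; it assumes an ffmpeg build with Quick Sync support, and the file names are placeholders):

    import subprocess

    # Software decode, Quick Sync (QSV) hardware HEVC encode; the CPU cores stay
    # mostly idle while the media engine does the heavy lifting.
    subprocess.run([
        "ffmpeg", "-i", "input_4k.mp4",
        "-c:v", "hevc_qsv",    # Intel QSV hardware H.265 encoder
        "-b:v", "10M",         # target bitrate
        "-c:a", "copy",        # pass audio through untouched
        "output_hevc.mp4",
    ], check=True)

    # On AMD APUs the rough equivalent is the VCE block, reachable through
    # VAAPI encoders (e.g. hevc_vaapi), though that path needs extra device flags.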

    • It's brilliant watching my six cores idle while rendering 4K video into H.265 files at realtime speeds. Try that with your AMD processor

      Doesn't QSV rely on the GPU? In which case the Ryzen 2400G's Vega 11 considerably outpowers the 8700K's UHD 630.

      • No. Yes. No. Wait!

        QSV lives in the part of the CPU that has a dedicated video engine. AMD's APUs have the same thing (called VCE). However, this feature is fairly irrelevant on a high-performance desktop, which will have a dedicated GPU in it anyway.

        • Right, really only relevant to laptops. But I bet there are way more people encoding video on laptops than desktops.

    • I am very curious as to why QSV exists on desktop processors. QSV makes sense for mobile chips, and AMD's APUs have similar functionality. But frankly, QSV is a slow dog compared to offloading rendering onto the compressor in even an old GTX 1060.

      These features already exist in video hardware, so it makes no sense to duplicate them in the CPU, especially given that video cards typically have a shorter lifetime (for the power-hungry) than a typical CPU. (At least for me personally, I've bought twice as many graphics c

      • by ytene ( 4376651 )
        Is it to cater for system-on-a-chip systems, or maybe laptops, where there is no dedicated GPU? For example, something like what we see in Intel's Core i7-7700T and 8700T processors, which bundle integrated graphics with the CPU cores on one chip?
        • Yes, exactly. As I said, this is why AMD APUs have the same functionality, as does my several-year-old Core i5 with Iris graphics. However, it makes no sense on chips in the class we are discussing.

      • Comment removed based on user account deletion
          • You missed my point. What kind of weird-arse computer build specs out something like a Ryzen Threadripper and then doesn't have a dedicated GPU? If you think it's the top-of-the-line CPUs that are finding their way into industry, then it's not my perception of reality that needs to be questioned.

          Posted from my work PC with a low end processor where it makes perfect sense to include VCE on the AMD APU

          • " You missed my point. What kind of a weird arse computer build specs out something like a Ryzen Threadripper and then doesn't have a dedicated GPU. "

            CPU based render farms.

            • CPU based render farms.

              A render farm... where a hardware H.265 encoder is a critical application... A friend of mine owns a landscaping business and could rent you a nice backhoe. Not that I think you aren't doing a fine job digging yourself and your argument into a hole, but I'm sure he'd offer his services all the same.

    • I'm glad you find encoding to licensed codecs so enthralling.
      • I'm glad you find encoding to licensed codecs so enthralling.

        I think you'll find he's not enthralled, but rather, like most people, simply doesn't give a shit.

  • What matters is cache size, L2 and L1. Losing a few cores and bolstering cache will improve performance in quite a lot of cases.

    The key is that threads don't talk that much. The amount of shared information needed to justify cores in close proximity and a huge shared cache isn't there a lot of the time.

    Cheaper SMP - not difficult with PCI-E's design - would leave much more room for the critical L1 cache, reduce the heat burden on a CPU, and potentially quadruple the number of cores (since 4-way SMP is not t

    • Comment removed based on user account deletion
    • What matters is cache size, L2 and L1. Losing a few cores and bolstering cache will improve performance in quite a lot of cases.

      Do you have any links to research on this? Outside of CPUs, cache very quickly hits diminishing returns, and I'd be curious to know what effect it has inside them.

      I think one of the best things Threadripper has done is finally introduce NUMA to consumer processors. It's going to take a hit because a lot of code still hasn't been updated to support it, but it's a smart long-term addition.
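
      For code that does care, keeping workers on one node's cores is cheap to do on Linux (a minimal sketch; os.sched_setaffinity is Linux-only and the core list is made up - read the real topology from /sys/devices/system/node/):

      import os

      NODE0_CORES = {0, 1, 2, 3, 4, 5}      # placeholder: cores local to NUMA node 0
      os.sched_setaffinity(0, NODE0_CORES)  # 0 = current process
      print("running on cores:", sorted(os.sched_getaffinity(0)))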

    • by Agripa ( 139780 )

      What matters is cache size, L2 and L1. Losing a few cores and bolstering cache will improve performance in quite a lot of cases.

      This is easier said than done, especially for the L1 cache. L1 cache size is limited by the cycle time and load-to-use latency of the instruction pipeline. If they could make the L1 cache larger without sacrificing clock rate and load-to-use latency, they would.

      This is why complex out-of-order instruction pipelines *must* be used to achieve high clock rates. They increase the allowable load-to-use latency, which in turn increases the allowable latency of the L1 cache. But good luck pushing the load-to-use latenc
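
      Back-of-the-envelope numbers for that load-to-use argument (illustrative assumptions: a 4 GHz clock and a 4-cycle L1 hit):

      clock_ghz     = 4.0   # assumed core clock
      l1_hit_cycles = 4     # assumed L1 load-to-use latency in cycles

      cycle_ps = 1000.0 / clock_ghz               # 250 ps per cycle
      print(f"cycle time:     {cycle_ps:.0f} ps")
      print(f"L1 load-to-use: {l1_hit_cycles * cycle_ps / 1000:.2f} ns")   # 1.00 ns

      # A bigger L1 means more wire, more tag comparators and more decode inside
      # that same handful of cycles, which is why capacity can't simply be doubled.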

  • by ytene ( 4376651 ) on Tuesday October 30, 2018 @03:45AM (#57560303)
    On the one hand, it is always good to see innovation and improvement in technology. Kudos to AMD and Intel for continuing to develop and evolve new technologies.

    On the other hand, am I the only one that thinks that both companies have completely lost the plot when it comes to model/variant naming conventions?

    In fairness, a big part of the problem is not entirely the fault of the chip makers... As the core computing world (desktop/mobile/server) matures, we are seeing the most successful companies achieve dominance through an ability to tweak their designs to more closely match the demands of their clients. Everything is up for optimisation - clock speed, core and thread counts, L1 and L2 cache, TDP, power consumption, the works. This generates a *lot* of different processor models.

    The problem is that when many of these chip permutations then make their way into the retail channel, the resulting model naming conventions and "chip families" just cause endless confusion. Whilst it's also fair to say that it is not too difficult to figure out low-, medium- and high-performance models [start by looking at prices within a given range, then dig for details], we increasingly need to become chip specialists with a very clear idea of our intended use cases if we want to have confidence that we've bought the best chip for our desired task profile.

    I'm curious to know if Slashdot readers think this is a fair criticism and/or whether there would be any interest in having a more uniform way of assessing the relative merits of different chips. For example, if I compare the Intel Core i7-7700T with the Core i7-8700T, not only is the move from 7th generation to 8th generation relatively easy to spot, but when we look at the specifications, we can see the improvements delivered by the newer generation in pretty much everything except the base processor frequency (a rough side-by-side follows below). That sort of direct comparison just doesn't seem possible with the latest product announcements...

    What would you do differently? Or are the current naming conventions from AMD and Intel easy enough to follow?
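
    For the 7700T-vs-8700T example, the generational change really is easy to read off (figures are approximate, from memory, so treat them as illustrative rather than authoritative):

    specs = {
        "Core i7-7700T": {"cores": 4, "threads": 8,  "base_ghz": 2.9, "tdp_w": 35},
        "Core i7-8700T": {"cores": 6, "threads": 12, "base_ghz": 2.4, "tdp_w": 35},
    }

    for name, s in specs.items():
        print(name, s)

    # Everything moves forward except the base clock -- exactly the pattern the
    # parent describes, and exactly what a sane naming scheme should surface.
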
    • Naming conventions stopped conveying anything beyond the machine's target market a long time ago, and by that I mean designations for overclocking, laptops, etc.

      In general there are so many different variants and feature sets, suited to such a large number of different ideal workloads, that no sane naming convention could keep up. Not unless you start following industrial product-naming conventions such as:

      AMD-GENERATION-CORECOUNT-PACKAGE-SMT-FEATURE1-FEATURE2-FEATURE3-FEATURE4... etc.
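
      A scheme like that could at least be generated mechanically (a toy sketch; every field value below is made up for illustration):

      def part_name(vendor, generation, cores, package, smt, *features):
          fields = [vendor, generation, f"{cores}C", package,
                    "SMT" if smt else "NOSMT", *features]
          return "-".join(fields)

      print(part_name("AMD", "ZEN+", 24, "TR4", True, "ECC", "PCIE64"))
      # AMD-ZEN+-24C-TR4-SMT-ECC-PCIE64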

    • My approach to PCs has been this:
      1. Wait. Don't buy the newest tech, it's just not worth it.
      2. Figure out my budget
      3. Look at benchmarks - with older tech, they're well established.
      4. Get the most of #3 that #2 will buy.

      In the last year I bought three 'new to me' systems, one for me and two for my kids. They are used Dell Inspiron 7010s with 8GB RAM and i5-3570 processors. They were $100 each. I picked up Nvidia GTX cards (750/460/460 Ti) for cheap, ~$25 each. All told, I spent less than $400 on three computers.

      A

  • by Anonymous Coward

    The obsession these days is more cores, I guess. I'm sure it's somewhat useful for a small percentage of PC users, but it's sort of like having a 190 mph car to drive on a 70 mph road. It's there if you need it, but will you ever use it?

    • by Bert64 ( 520050 )

      There are many advantages to a car capable of much higher speeds than the legal speed limits...

      Operating an engine close to its maximum power output is inefficient and increases wear and tear. If a car is capable of 190 mph but spends most of its life at 70 mph, there is very little stress on the engine and it's likely to last a long time.

      A car with a higher top speed typically has better acceleration too; you may not drive any faster than 70 mph, but your time from a standing start to 70 mph will be lower.

      • Operating an engine close to its maximum power output is inefficient and increases wear and tear. If a car is capable of 190 mph but spends most of its life at 70 mph, there is very little stress on the engine and it's likely to last a long time.

        They solve this problem now with either 7-speed (or more) transmissions, or with CVTs. You can't compare computers to cars. It never works. Stop it.

        A car with a higher top speed typically has better acceleration too,

        They're solving this problem with mild hybridization. The starter and alternator are replaced with one belt-driven motor/generator, which gets the vehicle moving while the engine is stopped, eliminating the drawbacks of auto start/stop. It can also torque-fill.

        Most people don't need lots of CPU, by modern definitions. The average user is just web browsing. They c

    • My next system is a 16-core, liquid-cooled Threadripper paired with 64GB of memory, a pair of 1080 Ti GPUs, and a few M.2 SSDs. Should keep the system relevant for a few years at least (which is the goal).

      Clock speed is higher on the 16-core vs. the 32-core for applications (or portions thereof) that aren't multithreaded (Maya modeling, rigging and/or animating). The extra cores are useful for CPU-based rendering (Arnold, Keyshot, Brazil, etc.) as those are all heavily multithreaded. So I went with a bal

      • Clock speed is higher on the 16-core vs. the 32-core for applications

        That's what the core disable function is for.
