AMD Launches Lower Cost 12- and 24-Core 2nd Gen Ryzen Threadripper Chips (hothardware.com) 151
MojoKid writes: AMD launched its line of second generation Ryzen Threadripper CPUs over the summer, but at the time the company offered only 16-core and 32-core versions. Today, however, the company began shipping 12-core and 24-core versions of the high-end desktop and workstation chips, dubbed Ryzen Threadripper 2920X and 2970WX, respectively. All 2nd Generation Ryzen Threadripper processors feature the enhanced boost algorithm that came with AMD's Zen+ architecture, which is more opportunistic and can boost more cores, more often. They also offer higher clocks and lower latency, and are somewhat more tolerant of higher memory speeds. All of AMD's Ryzen Threadripper processors feature 512KB of L2 cache per core (6MB total on the 2920X and 12MB on the 2970WX), quad-channel memory controllers (2+2), and are outfitted with 64 integrated PCI Express Gen 3 lanes. The new Ryzen Threadripper 2920X has a 180W TDP, while the 2970WX has a beefier 250W TDP. In highly threaded workloads, the Threadripper 2920X outpaces the far more expensive 10-core Intel Core i9-7900X, while the 24-core / 48-thread Threadripper 2970WX is the second most powerful desktop processor money can buy right now: it's faster than Intel's flagship Core i9-7980XE and trails only AMD's own 32-core Threadripper 2990WX. Pricing for the new chips falls in at $649 for the 12-core 2920X and $1299 for the 24-core Threadripper 2970WX.
Re: (Score:1, Interesting)
Re: (Score:1, Informative)
Comment removed (Score:5, Informative)
Comment removed (Score:4, Informative)
Re: (Score:2)
After having 4 to 8 cores, the extra money would be better thrown towards higher clock rates.
If you're a gamer, that's certainly true. But most heavy tasks are highly parallel today, and if they aren't, they will be tomorrow. If you're doing anything that requires heavy lifting, odds are good that it will benefit more from more cores than from a small increase in single-thread performance.
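As a minimal sketch of what "highly parallel" means in practice (the chunked-hashing workload here is purely illustrative, standing in for any job that splits into independent pieces):

```python
# Sketch: an embarrassingly parallel job fanned out across cores.
# hashlib releases the GIL while hashing large buffers, so the chunks
# genuinely run concurrently even with a thread pool.
import hashlib
import os
from concurrent.futures import ThreadPoolExecutor

def hash_chunk(chunk: bytes) -> str:
    # Each chunk is hashed independently of the others, so throughput
    # scales with core count rather than single-thread clock speed.
    return hashlib.sha256(chunk).hexdigest()

chunks = [bytes([i]) * 1_000_000 for i in range(16)]
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    digests = list(pool.map(hash_chunk, chunks))
print(len(digests))  # 16
```

The pattern is the same one renderers, encoders, and compilers use: split the work, keep every core busy, merge the results.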
Re: (Score:2)
That's one of the reasons AMD released a "Game Mode" profile for Threadripper chips, where half of the cores are disabled and the extra thermal and electrical headroom is dedicated to higher core clock boosts.
Threadrippers get a nice speed boost as a result of this feature. Benchmarks have shown, though, that it is actually a hindrance for many of the games on the market on something like the 2700X, which only has 8 cores.
Re: (Score:3)
Re: (Score:1)
Seems like I was just living in the past where there was some relationship between price and single thread speed. Apparently now, it seems the single thread speed is pretty flat across the price ranges, and you pay more for the
Re: (Score:2)
Re:Hello intel my old friend (Score:5, Informative)
the single core performance seemed underwhelming on the new AMD processors, especially for the price.
Especially for the price? You've got to be kidding. The 8 core 2700 sells for $265 right now, the 6 core 2600 for $160. And single core performance is respectable; I have no complaints at all. Multicore smoothness is great even if you aren't running compiles for a living. You never get some out-of-control web page slowing down everything the way it used to be. Mind you, I'm looking forward to the Zen 2 announcement, less than 3 months from now. Most likely equivalent IPC to Intel parts while soundly beating them on every other measure.
Re: (Score:2)
If you're running true single core workloads then you're missing the core boost advantage of Zen+. Yeah, at stock frequency they are outpaced by Intel. However, when I look at my own Zen+ Ryzen, if I run single core loads I typically see 300MHz over a loaded multi-threaded situation on that core, and over 600MHz above the base clock rate. This is also the reason why overclockers pushing the hell out of the chip don't see much single threaded improvement in their benchmarks.
In terms of performance vs Intel the
Re: Hello intel my old friend (Score:2)
Yes, multi threaded software; we ALWAYS code for multiple CPUs.
Re: (Score:1)
This reminds me how Edison developed the electric chair to prove AC was dangerous!
Re: Hello intel my old friend (Score:2, Informative)
A video from 2014. Great work, fanboy. Can't you find anything newer?
Anyway, with Intel there's no money left to buy the eggs. Hehe.
Re: (Score:3)
Re:Hello intel my old friend (Score:4, Interesting)
Reports are that the 9900K draws way more than 95 watts when running overclocked for the fiddled benchmarks. A lot of complaints about cooling problems out there. A lot of doubt about accuracy of benchmarks. And the chip is out of stock everywhere, so a lot of people are calling it a paper release. A lot of talk about cancelling orders and going with 2700X or Threadripper instead.
Re: (Score:2)
German magazine c't did some measurements in their current 23rd volume, pages 100 to 102.
Running AVX2 code, the 9900K draws 148 W for up to 28 seconds, 50% over TDP.
The 28 seconds are just enough to complete a run through multithreaded Cinebench.
Previous Intel processors only went 25% over TDP for up to 8 seconds.
Re: (Score:2, Informative)
"Running AVX2 code, the 9900K draws 148 W for up to 28 seconds, 50% over TDP."
That is meaningless. Intel TDP is for base clocks only.
Look at the cooling recommendation (130W) for a better idea of medium term power usage.
It can burst above this for short periods.
Re: (Score:2)
That's not just for Intel. Those are general recommendations from processor vendors.
Re: (Score:2)
It's called a "paper release". And now "ex-fans".
Re: (Score:2)
Many reports that a normal water cooler can't cool the 9900K in *normal* use. Many. Rage against it if you like, but Intel's attempt to shoehorn 8 cores into a sunset process is shaping up as mere theatre. Not going to end well.
Re: (Score:2)
some tasks still perform better on Intel
Single threaded only... about 15% better for Intel's high-end parts. Not worth the nearly 100% extra cost. Ryzen 2 got nearly even with Intel's 8000 series; then, more than half a year later, Intel put out the aggressively clocked 9000s and got 15% ahead. But way behind in value even for single thread, never mind multicore, which is what actually matters these days with multicore rendering and video encoding now being the common loads; single core is mostly legacy, or things that just don't matter with these chips
Remember Pentium IV ? (Score:3)
AMD chips are also great for frying eggs.
Which is a pretty easy trick to achieve, given that egg protein already starts to set somewhere north of 50-60°C - you could achieve the same with the warm water from your faucet, try it! Note that the egg will not have been cooked at a high enough temperature and will not be sterilized: it might not be safe to eat due to bacterial risks.
You could do the same trick as the video with any piece of electronics beefier than a Raspberry Pi
And while digging at old stuff, Intel was at the receivi
"outpaces a far more expensive Intel Core" (Score:2, Informative)
Is all you need to know. (Oh yeah, and PCI-E lanes, and they don't have the money to bribe benchmarkers, and their PSP is a far cry from the full Intel IME. Oh yeah, and hyperthreading lol.)
Re:"outpaces a far more expensive Intel Core" (Score:4, Insightful)
Well it's more sane than Intel's crippled offerings. Intel wants you to buy the Xeon platform so you can get enough lanes to do SLI.
There is no reason why the CPU should not have sufficient lanes for two PCIe video cards, let alone four, plus four NVMe M.2 PCIe x4 drives. So 4x16 = 64, plus 16 for hard drives, and 4 lanes per Thunderbolt USB-C port (most only have one right now). So you really need 84 lanes to cover every use case other than a dual processor server board.
Most people will not be doing 4 GeForce RTX 2080s and/or 4 NVMe devices, but the fact is that many people have money to burn, and the Intel platform is not sufficient even with the Xeons. You can't access more than 64 PCIe lanes even on the most insanely overpriced chip.
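That lane math can be tallied in a few lines (a rough budget under the poster's own assumptions; the per-device lane counts are use-case maximums, not any real board's wiring):

```python
# Back-of-the-envelope PCIe lane budget from the comment above.
# All counts are the poster's illustrative maximums.
gpus        = 4 * 16   # four x16 graphics cards       = 64 lanes
nvme        = 4 * 4    # four x4 NVMe M.2 drives       = 16 lanes
thunderbolt = 1 * 4    # one x4 Thunderbolt/USB-C port =  4 lanes

total = gpus + nvme + thunderbolt
print(total)  # 84 -- more than even Threadripper's 64 CPU lanes
```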
Re: (Score:2)
Lanes (Score:2)
Are 16 lanes per GPU that important? There was an old benchmark from a few years ago that compared SLI using 2x16 to 1x16/1x8, and there was hardly any difference. Granted this was on relatively old hardware (I think 8xx series GeForce) but from what I remember the important part was bandwidth loading textures into the GPU. Since modern GPUs have gobs of RAM, and the faster SLI bridges let them pool memory better, it's less of an issue.
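A quick sanity check on why x8 often suffices: even half the lanes still move a lot of data. A rough per-direction PCIe 3.0 calculation (real throughput is a bit lower once protocol overhead is counted):

```python
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding.
GTS = 8e9                        # transfers per second per lane
eff = 128 / 130                  # encoding efficiency
lane_gbps = GTS * eff / 8 / 1e9  # bits -> bytes, expressed in GB/s

x16 = round(lane_gbps * 16, 2)
x8  = round(lane_gbps * 8, 2)
print(x16, x8)  # 15.75 7.88
```

So an x8 slot still offers roughly 7.9 GB/s each way, which is why x16-vs-x8 SLI benchmarks showed such small deltas.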
Re: (Score:1)
There never were any desktop 8xx parts, only laptop; the desktop line went straight from 7xx to 9xx.
The PCI-express version is still 3.0; it may have been 2.0 when Linus Tech Tips or whoever tested it back then, but graphics cards have become faster.
On the other hand AMD has abandoned CrossFire branding completely and SLI is pretty dead/unused too. In DX12 you may be able to even force rendering onto two cards but really close to no-one is using two cards for gaming and very close to no-one is using four for that pur
Re: (Score:1)
Yeah, I didn't count anything, just thought it was a weird number.
I was looking a lot into the i7 5820K back in late 2014, wanting to buy one even though people said the 4790K was better and the 4690K enough. I then regretted not buying one, because I was stuck with my old junk when the SEK tanked vs the soaring USD and the deals went away.
The processors cost basically the same but the X99 and DDR4 cost twice as much as Z97 and DDR3.
By now latest rumors are Epyc will have 8 core dies of 8 cores each and 32 MB cache a
Re: (Score:3)
Most people will not be doing 4 GeForce 2080 RTX's, and/or 4 NVME devices, but the fact is that many people have money to burn
Remember also we're talking about people buying Threadrippers / 9900Ks. In the Venn diagram, people with money to burn and customers for these products overlap greatly.
Re: (Score:2)
But still, 64 PCIe lanes for the Threadripper family vs 16 for the 9900K is just stupid.
We just set up a 32 core Threadripper with 64GB ECC memory (yes, ECC!) and SLIed video cards and a beefy 8x PCIe SAS controller for bulk storage and a 4x NVMe boot SSD. Used for scientific computing. Cost about $6k. It compares pretty favorably with the dual-CPU 16 core Epyc setup we installed earlier this year for about $9.5k. Amazing.
Re: (Score:2)
No "but still" about it. I fully agree with you. I was supporting the AC's statements that the type of customers using these chips are precisely the type of customers who would use the extra PCI-E lanes.
Re: (Score:1)
Close to 0% buy two graphics cards. Even closer to 0% buy four.
As for the number of PCI-express lanes, if we remove what's used to connect the chipset, then the Ryzen CPUs offer:
20 lanes, of which 16 go to the GPU slot(s).
Whereas the Intel ones offer:
16 lanes.
So that's one extra M.2 drive connected to the CPU on Ryzen.
However on the chipset side they offer:
X470: 8x PCI-express 2.0.
Z370: 24x PCI-express 3.0.
Typically a secondary M.2 slot on an X470 board only runs at 4x PCI-express 2.0 instead; on some boards it takes la
Re: (Score:2)
Far more expensive and not available. Right now Amazon doesn't even have a preorder page up, that disappeared last week when scalpers were flipping at $1000/chip.
What I would love to see... (Score:3, Interesting)
Revised Mac Mini, offering an AMD chip.
Maybe even the redesigned Mac Pro...
To me it's been quite odd that Apple is so keen on AMD GPUs, while never using them for primary processors.
Re: (Score:1)
What the hell are you talking about? Intel's chips are pushing higher temps these days.
Re: (Score:2)
Ryzens don't run hot but you do need to pay attention to the TDP as always. Now we are seeing some fanless Ryzen designs [quietpc.com] coming into the market. Looks like the Ryzen 3 2200U [wikipedia.org] would be fine in a NUC form factor, with its 15-25 Watt power envelope. A respectable 2 core/4 thread part. I haven't seen any Ryzen NUC-like offerings yet but there seem to be a lot of folks asking for them.
Re: (Score:2)
The Ryzen 2200U, 2500U, 2700U are fairly weak on CPU performance.
Compared to what, have you got numbers? Looks like 2200U clobbers Celeron N3050 in performance while being in the same ballpark in power consumption. Didn't look deep, but this does suggest it has a place in SFF.
ATI tie-in runs deep (Score:2)
While I agree that Apple giving AMD a spin would be welcome, I can also see the reasons why there hasn't been any movement:
- Apple's relationship with ATI is positively ancient. AMD buying ATI didn't affect that. So that explains the continued use of the AMD Radeon line of GPUs.
- The transition from IBM Power to Intel Core CPUs was done with classic Steve Jobs flamboyance. And with that, Apple got the preferential Intel treatment that had been the domain of Dell. (Whole other story there, methinks.)
Re: (Score:2)
Selling More (Score:3)
The GPU integration has me scratching my head. AMD's integrated graphics slaughter Intel's in any fair (equal $) comparison. I don't see how the deal with Intel benefits AMD.
Because Intel still sells a lot more desktop chips than AMD. AMD probably won't be making any serious moves in the low to mid-range desktop CPU market any time soon. Intel has that locked up. So you may as well make money selling AMD graphics on those low end chips.
There is something to be said for making strategic decisions on not partnering with potential competition. There is also something to be said for selling as much product as you can to make money.
Re: (Score:2)
AMD probably won't be making any serious moves in the low to mid-range desktop CPU market any time soon. Intel has that locked up.
What makes you think that? For example, HP Pavilion with Ryzen 2400G, $488. Looks like low to midrange to me.
Re: (Score:3)
Re: (Score:2)
And with that Apple got preferential Intel treatment that had been the domain of Dell. (Whole other story there me thinks.) Apple won't be in any rush to jeopardise this position.
Just spitballin' here, but something tells me that whatever preferential treatment Apple got when they switched the Macbooks over may well be back in Dell's hands.
Yes, the Macbook is still the darling laptop of college kids...but Apple's bread and butter has been iOS devices, which have all been in-house CPUs for years (and ARM before that). Intel doesn't get a slice of that very, very large pie. Moreover, Apple versions of Intel processors are at least somewhat custom runs, since they are all soldered in n
Re: (Score:2)
And with that Apple got preferential Intel treatment that had been the domain of Dell. (Whole other story there me thinks.) Apple won't be in any rush to jeopardise this position.
I think you are correct considering that A) Apple sells new machines with 4 to 5 year old CPU models, and B) Apple is supposedly designing their own desktop and laptop chips to be rolled out in about 2 years.
Re: (Score:3)
Easy. Part shortages. One problem Apple had during the PowerPC era was that neither Motorola (now Freescale) nor IBM would supply Apple with the processors Apple wanted. It was so bad that there would be delays of weeks for the top end models Apple offered, because the yields were terrible. This happened so consistently it was predictable - if you wanted a high end configuration, you reloaded
Re: (Score:3)
Intel has been offering integrated third-party GPUs for a while; several of the Atom chips were available with a PowerVR GPU.
Per core licensing (Score:2)
Re: (Score:2)
And I think that's what AMD is missing here. Cramming 24 or more cores into a CPU has already long passed diminishing returns for anything but very highly optimized parallel-thread applications. ... and those same applications typically cost FAR more per core to license than the CPUs themselves. For the average consumer and even the performance kiddies I don't see how they're really winning much in real-world terms.
Re: (Score:1)
The world is a great deal bigger than you think.
There are _plenty_ of parallel workloads out there other than Oracle or whatever you're thinking of. I have a hard time understanding why you want to deny "average consumers" playing with Blender, or encoding video, or even some of the FEA packages out there, just to give a few examples.
Not that the "average consumer" does much more than browse and watch YouTube or Netflix, for which you have absolutely no use for a Threadripper or one of Intel's counterparts.
And
Re: Per core licensing (Score:2)
Oft overlooked (Score:4, Interesting)
About 6 months ago I built a new budget video editing rig. I was torn between going with an i7 8700 or an AMD 2700, but opted for Intel because of QSV.
QSV allows for decoding and encoding H264 and H265 video in hardware using the on-chip video engine. It's brilliant watching my 6 cores idling while rendering 4K video into H265 files at realtime speeds. Try that with your AMD processor :-)
However, these days I'd probably go for the 1950X Threadripper (cheaper and almost as good as the 2950X) because those extra cores *are* useful in good video NLEs such as DaVinci Resolve.
Re: (Score:3)
It's brilliant watching my 6 cores idling while rendering 4K video into H265 files at realtime speeds. Try that with your AMD processor
Doesn't QSV rely on the GPU? In which case the Ryzen 2400G's Vega 11 considerably outpowers the 8700K's UHD 630.
Re: (Score:2)
No. Yes. No. Wait!
QSV is part of the CPUs which have a dedicated video engine. AMD's APUs have the same thing (called VCE). However, this feature is quite irrelevant on a high performance desktop, which will have a dedicated GPU in it anyway.
Re: (Score:3)
Right, really only relevant to laptops. But I bet there are way more people encoding video on laptops than desktops.
Re: (Score:2)
In graphics. Does it overpower it in video encoding? Does it overpower it in software support for said video encoding?
Don't know. Maybe. The Ryzen APUs have VCE 4.0, also a video ASIC, as part of the Vega GPU. I don't know how the GPU figures into it, if at all, but one thing seems clear: QSV is not a reason to stick with Intel, even if transcoding is your main thing.
Re: (Score:3)
I am very curious as to why QSV exists on desktop processors. QSV makes sense for mobile chips, and AMD's APUs have similar functionality. But frankly QSV is a slow dog compared to offloading encoding onto the hardware encoder in even an old GTX 1060.
These features exist in video card hardware, so it makes no sense to duplicate them in the CPU, especially given that video cards typically have a shorter lifetime (for the power hungry) than a typical CPU. (At least for me personally, I've bought twice as many graphics c
Re: (Score:2)
Re: (Score:2)
Yes exactly. As I said this is why AMD APUs have the same functionality, as does my several year old Core i5 with Iris graphics. However it makes no sense on chips in the class which we are discussing.
Re: (Score:3)
Re: (Score:2)
You missed my point. What kind of a weird arse computer build specs out something like a Ryzen Threadripper and then doesn't have a dedicated GPU. If you think it's the top of the line CPUs that are finding their way to industry then it's not my perception of reality that needs to be questioned.
Posted from my work PC with a low end processor where it makes perfect sense to include VCE on the AMD APU
Re: Oft overlooked (Score:2)
" You missed my point. What kind of a weird arse computer build specs out something like a Ryzen Threadripper and then doesn't have a dedicated GPU. "
CPU based render farms.
Re: (Score:2)
CPU based render farms.
Render farm ... where hardware h.265 encoder is a critical application... A friend of mine owns a landscaping business and could rent you a nice backhoe, not that I think you're not doing a fine job digging yourself and your argument in a hole, but still I'm sure he'd offer his services.
Re: (Score:2)
Re: (Score:2)
I'm glad you find encoding to licensed codecs so enthralling.
I think you'll find he's not enthralled, but rather like most people simply doesn't give a shit.
Large core count has limited value (Score:2, Interesting)
What matters is cache size, L2 and L1. Losing a few cores and bolstering cache will improve performance in quite a lot of cases.
The key is that threads don't talk that much. The amount of shared information needed to justify cores in close proximity and a huge shared cache isn't there a lot of the time.
Cheaper SMP - not difficult with PCI-E's design - would leave much more room for the critical L1 cache, reduce the heat burden on a CPU, and potentially quadruple the number of cores (since 4-way SMP is not t
Re: (Score:2)
Re: (Score:2)
What matters is cache size, L2 and L1. Losing a few cores and bolstering cache will improve performance in quite a lot of cases.
Do you have any links researching this? Outside of CPUs cache very quickly has diminishing returns, and I'd be curious to know what effects it has on CPUs.
I think one of the best things Threadripper has done is to finally introduce NUMA to consumer processors. It's going to take a hit because a lot of code still hasn't been updated to support it, but it's a smart long-term addition.
Re: (Score:2)
What matters is cache size, L2 and L1. Losing a few cores and bolstering cache will improve performance in quite a lot of cases.
This is easier said than done, especially for the L1 cache. L1 cache size is limited by the cycle time and load-to-use latency of the instruction pipeline. If they could make the L1 cache larger without sacrificing clock rate and load-to-use latency, they would.
This is why complex out-of-order instruction pipelines *must* be used to achieve high clock rates. They increase the allowable load-to-use latency which increases the allowable latency of the L1 cache. But good luck pushing the load-to-use latenc
Is There A Pattern In The Data (Score:3)
On the other hand, am I the only one that thinks that both companies have completely lost the plot when it comes to model/variant naming conventions?
In fairness, a big part of the problem is not entirely the fault of the chip makers... As the core computing world (desktop/mobile/server) matures, we are seeing the most successful companies achieve dominance through an ability to tweak their designs to more closely match the demands of their clients. Everything is up for optimisation - clock speed, core and thread counts, L1 and L2 cache, TDP, power consumption, the works. This generates a *lot* of different processor models.
The problem is that when many of these chip permutations then make their way into the retail channel, the resultant model naming conventions and "chip families" just result in endless confusion. Whilst it's also fair to say that it is not too difficult to figure out low, medium and high performance models [start by looking at prices within a given range, then dig for details], we're increasingly needing to become chip specialists with a very clear idea of our intended use cases if we want to have confidence that we've bought the best chip for our desired task profile.
I'm curious to know if slashdot readers think this is a fair criticism and/or whether there would be any interest in having a more uniform way of assessing the relative merits of different chips. For example, if I compare the Intel Core i7-7700T with the Core i7-8700T, not only is the move from 7th generation to 8th generation relatively easy to spot, but when we look at the specifications, then with pretty much everything except the base processor frequency, we can see the improvements delivered by the later generation. That sort of direct comparison just doesn't seem possible with the latest product announcements...
What would you do differently? Or are the current naming conventions from AMD and Intel easy enough to follow?
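For what it's worth, the 7700T-vs-8700T comparison described above can literally be scripted. A sketch (the spec figures are from memory and only illustrative; check Intel's ARK pages for the real numbers):

```python
# Side-by-side spec diff of two hypothetical generations.
# Values are illustrative approximations, not authoritative data.
specs = {
    "i7-7700T": {"cores": 4, "threads": 8,  "base_ghz": 2.9, "tdp_w": 35},
    "i7-8700T": {"cores": 6, "threads": 12, "base_ghz": 2.4, "tdp_w": 35},
}

old, new = specs["i7-7700T"], specs["i7-8700T"]
# '+' = newer part improves, '-' = regresses, '=' = unchanged
diff = {k: "+" if new[k] > old[k] else "-" if new[k] < old[k] else "="
        for k in old}
for k, mark in diff.items():
    print(f"{k:9s} {old[k]:>4} -> {new[k]:>4}  {mark}")
```

With these figures, everything improves except the base frequency, which is exactly the kind of at-a-glance comparison the naming conventions fail to convey.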
Re: (Score:3)
Naming conventions stopped being relevant beyond the target for the machine a long time ago, and by that I mean designations for overclocking or for laptops etc.
In general there are so many different variants and feature sets in any CPU suited to such a large number of different ideal workloads that no sane naming convention could keep up. Not unless you start following industrial product naming conventions such as:
AMD-GENERATION-CORECOUNT-PACKAGE-SMT-FEATURE1-FEATURE2-FEATURE3-FEATURE4 . etc .
Here's what I do... (Score:2)
My approach to PCs has been this:
1. Wait. Don't buy the newest tech, it's just not worth it.
2. Figure out my budget
3. Look at benchmarks - by getting older tech they are well established.
4. Get the most #3 for #2
In the last year I bought 3 'new to me' systems, one for me and two for my kids. They are used Dell Inspiron 7010s, with 8GB RAM and i5-3570 processors. They were $100 each. I picked up Nvidia GTX cards (750/460/460ti) for cheap, ~ $25 each. All said I spent less than $400 on 3 computers.
A
Throw another set of cores on the die (Score:1)
The obsession these days is more cores, I guess. I'm sure it's somewhat useful for a small percentage of PC users. But it's sort of like having a 190 MPH car to drive on a 70 MPH road. It's there if you need it, but will you ever use it?
Re: (Score:2)
There are many advantages to a car capable of much higher speeds than the legal speed limits...
Operating an engine close to its maximum power output is inefficient and increases wear and tear. If a car is capable of 190mph but spends most of its life at 70mph, then there is very little stress on the engine and it's likely to last a long time.
A car with a higher top speed typically has better acceleration too, you may not drive any faster than 70mph but your time to go from standing to 70mph will be lower
Re: (Score:2)
Operating an engine close to its maximum power output is inefficient and increases wear and tear, if a car is capable of 190mph but it spends most of its life at 70mph then there is very little stress on the engine and it's likely to last a long time.
They solve this problem now with either 7-speed (or more) transmissions, or with CVTs. You can't compare computers to cars. It never works. Stop it.
A car with a higher top speed typically has better acceleration too,
They're solving this problem with mild hybridization. The starter and alternator are replaced with one belt-driven motor/generator which gets the vehicle moving while the engine is stopped, eliminating the drawbacks of auto start-stop. It also can torque fill.
Most people don't need lots of CPU, by modern definitions. The average user is just web browsing. They c
Re: Throw another set of cores on the die (Score:2)
My next system is a 16 core liquid cooled Threadripper paired with 64GB of memory, a pair of 1080 Ti GPUs and a few M.2 SSDs. Should keep the system relevant for a few years at least (which is the goal).
Clock speed is higher on the 16 core vs the 32 for applications ( or portions thereof ) that aren't multithreaded. ( Maya modeling, rigging and / or animating ) The extra cores are useful for CPU based rendering ( Arnold, Keyshot, Brazil, etc ) as they're all heavily multithreaded. So I went with a bal
Re: (Score:2)
Clock speed is higher on the 16 core vs the 32 for applications
That's what the core disable function is for.
Re: (Score:2)
Did you know Jack the Ripper has been identified as... the Loch Ness Monster [youtube.com]?
Bullshit or not?