
AMD Unveils Ryzen 4000 Mobile CPUs Claiming Big Gains, 64-Core Threadripper (hothardware.com)

MojoKid writes: Yesterday, AMD launched its new Ryzen 4000 Series mobile processors for laptops at CES 2020, along with a monstrous 64-core/128-thread third-generation Ryzen Threadripper workstation desktop CPU. On the graphics front, the oft-leaked Radeon RX 5600 XT, which targets 1080p gamers in the sweet spot of the GPU market, was also made official. In CPU news, AMD claims the Ryzen 4000 series mobile processors offer 20% lower SoC power, 2X performance-per-watt, 5X faster power-state switching, and significantly improved iGPU performance versus its previous-gen mobile Ryzen 3000 products. AMD's U-Series flagship, the Ryzen 7 4800U, is an 8-core/16-thread processor with a max turbo frequency of 4.2GHz and an integrated Vega-derived GPU with 8 compute units.

Along with architectural enhancements and the frequency benefits of producing the chips at 7nm, AMD is underscoring up to 59% improved performance per graphics core. AMD is also claiming superior single-thread CPU performance versus current Intel processors, and significantly better multi-threaded performance. The initial Ryzen 4000 U-Series line-up consists of five processors, starting with the 4-core/4-thread Ryzen 3 4300U and topping off with the aforementioned Ryzen 7 4800U. On the other end of the spectrum, AMD revealed some new information regarding its 64-core/128-thread Ryzen Threadripper 3990X processor. The beast of a chip will have a base clock of 2.9GHz and a boost clock of 4.3GHz, with a whopping 288MB of cache. The chip will drop into existing TRX40 motherboards and be available on February 7th for $3,990. AMD showcased the chip against a dual-socket Intel Xeon Platinum system in the V-Ray 3D rendering benchmark, beating the Xeon system by almost 30 minutes on a roughly 90-minute workload, even though the Intel system retails for around $20K.

  • Not wishing to hijack this topic, but does anyone have a recommendation for an affordable laptop (Win10) with an SSD and decent graphics? Not Intel.
    • HP: choose the series that use Ryzen (they used the A-series before that) and don't have a brand name like Pavilion or Envy, just a model number. Search on Amazon. They're 100% maintainable; memory and disk are replaceable and should be replaced, because by default these are deliberately crippled to make Intel look "good".

      I use these with Linux and I have bought a couple for other people who use them with Windows. They rock.

    • by Arkham ( 10779 ) on Tuesday January 07, 2020 @05:58PM (#59597080)

      Honestly, wait a couple of months for one with these new 4000-series CPUs. They're a lot more powerful and efficient than the 3000 series. They're built on the same 7nm process as the Ryzen 3xxx desktop series, so they should provide a lot of power for the money. AMD is eating Intel's lunch in the mid-range desktop market, and with this announcement the laptop market is in play. Asus, Dell, and MSI have announced laptops based on these new CPUs already, and others will follow.

      • In looking at the specifications, it seems that AMD has some utterly bonkers parts in the pipe. The 4800U is an 8C/16T APU with 512 GPU shaders that's got a 15W TDP. Granted, the base clock frequency is fairly low, but it's still quite impressive to be able to pack that kind of performance into a low power notebook chip. I don't know if there's any clear information about power draw at idle yet, but one would think that AMD would prioritize that for their APUs.
      • by ath1901 ( 1570281 ) on Tuesday January 07, 2020 @07:12PM (#59597314)

        I think AMD could already be eating Intel's laptop lunch, at least according to Linus.

        This was going to be a standard showcase until we realized just how good the Ryzen 5 3500U processor was.

        https://www.youtube.com/watch?... [youtube.com]

        I'm almost sad my 5 year old Asus Zenbook with an i5 4200U is still good enough for whatever laptops do.

      • by bill_mcgonigle ( 4333 ) * on Tuesday January 07, 2020 @11:27PM (#59597944) Homepage Journal

        The AMD parts also seem to be far less vulnerable to speculative-execution attacks. Intel seems to have been highly dependent on that one optimization for performance, but now they're getting killed for it. TPM was a bill of goods too, and people are pissed.

      • They're built on the same 7nm process as the Ryzen 3xxx desktop series, so they should provide a lot of power for the money.

        Kinda confirms what I've suspected for a decade: that Intel's "lead" in processors wasn't really due to better processor designs, but to being ahead of everyone else in process shrinks (which allow the same performance at lower power, or superior performance at the same power). Once their die-shrink progression stalled, their "lead" evaporated.

        Same thing for Apple's Ax SoCs. Due to

        • It's more complicated than that. Intel put all their (ahem) chips down on single threaded throughput. AMD went more cores for cheaper. AMD was already better value than Intel long before Ryzen, and it was only recently that Ryzen took the single threaded crown too, in large part because of process advantage, but not only that. It's complicated.

    • I have a Ryzen 5 3500U laptop from Lenovo with a Vega 8 iGPU, 16 GB of RAM, a 256 GB PCIe SSD, and a 15" 1080p screen. It's decent and low priced. The Vega iGPU definitely beats Intel integrated graphics by a long shot, but if it's really gaming you're after, you're better off buying a desktop with a Ryzen 5 CPU and a dedicated graphics card.
      I can play most recent games on low settings, if you're happy with 30 fps.
      You get what you pay for, but at that price you're certainly well ahead of an Intel i5 with integrated graphics.
    • by AHuxley ( 892839 )
      Wait and see what AMD offers.
      Then make sure the display and keyboard are good.
      Also consider the display's aspect ratio and resolution.
      There should be some real innovation on the way.
    • Can this be counted as affordable? Asus TUF FX505DT https://www.amazon.com/R5-3550... [amazon.com]
  • I have not been into gaming PCs for a long time now, but the new AMD chip series (both CPU and GPU) seem pretty awesome. In particular, I am wondering how the AMD GPU updates compare to what Nvidia has been doing. Are there any good articles comparing at least the initial specs (since the most recent update is not quite out yet)?

    • They have a slight price advantage at every level where they're competing, more significantly at the lower end. They do lack features for those prices (mid-range NVIDIA cards offer ray tracing as well as VirtualLink as standard, at least on the reference PCB). I highly recommend them for anyone but the most serious of gamers.

      They still offer no competition to NVIDIA in the high end, or in features. Their RX 5700 and Vega 64 are about comparable to an RTX 2060 Super and RTX 2070 and cheaper than both, how

      • by AmiMoJo ( 196126 )

        For me the attraction of AMD over Nvidia is that they are not Nvidia.

        • For me the attraction of AMD over Nvidia is that they are not Nvidia.

          Fair point. Though currently Nvidia has a massive amount of dominance in deep learning, which is why I keep buying 2080 Tis and TITAN RTXs. I wish AMD would get their act together and port some of the toolkit backends to their GPUs.

        • by Kokuyo ( 549451 )

          Same here.

          Although I do hope their next Navi, while probably not doing to Nvidia what they did to Intel, will bring a bit more action into this competition. I'm definitely planning on replacing my 4670K with the next Ryzen desktop as soon as they come out. Hoping to go all team red, primarily to be able to give both Intel and Nvidia the finger simultaneously.

          • by AmiMoJo ( 196126 )

            My main machine is even older than that (a 2700k) and I'm struggling to justify upgrading it...

            • by Kokuyo ( 549451 )

              I'm not. Granted, it's not a MUST, but whenever I try working with HandBrake I yearn for more IPC and cores.

              I may or may not be expecting too much of a performance hike, but eh.

        • For me the attraction of AMD over Nvidia is that they are not Nvidia.

          I'm actually curious as to why. I know the reasons people do this for Intel (I disagree with them, but that's a different topic), but what has NVIDIA done to warrant your animosity?

          Is it Linux driver related? I know that has been an issue for some.

          • by AmiMoJo ( 196126 )

            Long ago their Windows drivers annoyed me, and I had issues with some of their cards.

            Later AMD were a lot better for certain compute tasks, particularly hashing.

            Then there is the proprietary crap like their dynamic sync tech.

        • For me, the open source drivers are a big deal. Fuck Nvidia.

      • by fintux ( 798480 )

        Well, you're forgetting that Nvidia is also lacking features: FreeSync over HDMI, Radeon Image Sharpening, Radeon Chill, Radeon Boost, Anti-Lag, and AMD Link, to name a few. These are all features that are actually useful and beneficial. RTX on mid-range hardware is basically useless unless you want to play at 720p and 60 Hz to get just a small improvement in the graphics. Even on the 2080 Ti, the RTX hardware is under-powered compared to the rest of the card.

        DLSS is also capping the frame rates, and thus is not very

        • Well, you're forgetting that Nvidia is also lacking features: FreeSync over HDMI, Radeon Image Sharpening, Radeon Chill, Radeon Boost, Anti-Lag, and AMD Link, to name a few.

          Well, not quite. The features I am talking about are hardware; what is enabled in software drivers is a completely different kettle of fish. Specifically, many of that list are just performance enhancements that have counterparts achieving the same result on NVIDIA cards, e.g. Anti-Lag - not an issue since AMD is the one that historically had frame-time issues; Radeon Image Sharpening is called DSR in NVIDIA terms; Radeon Boost and Chill are just a local quality drop to boost performance, not much

      • They still offer no competition to NVIDIA in the high end, or in features.

        That's not true. The Radeon VII is a highly respectable card loaded with features and roughly ties with the Nvidia 2080; e.g., the VII outperforms it in Far Cry. And it's almost 300 dollars cheaper. It's true that AMD has nothing in the 2080 Ti space, but let's be honest: A) you don't need a Ti, and B) you can't afford it.

        The only feature Nvidia has to boast about that AMD doesn't offer is ray-tracing acceleration, and that doesn't matter because, frankly, it sucks on Nvidia's current-generation cards. And AI filtering... sorry, pass.

    • Not really an article, but my own observations from spending a few months getting back into the hardware market:

      Apparently the GPU vendors are similar [videocardbenchmark.net] in benchmark scores, though AMD has fewer SKUs.

      Unlike the Intel/AMD competition, the prices seem to be on par for the performance, so purchase decisions are more based on features. The green guys have done quite a lot of nice things with computation (CUDA, Tensor Cores) and ray-tracing... but most applications (including games) aren't written to take advanta

    • I have not been into gaming PCs for a long time now, but the new AMD chip series (both CPU and GPU) seem pretty awesome. In particular, I am wondering how the AMD GPU updates compare to what Nvidia has been doing.

      They're competitive in the midrange on price, performance, and features. They do not have the ceiling Nvidia has at present; the RTX 2080 Ti stands alone (and is extremely effective at sucking money out of status-seekers). Their software is always a little bit worse than Nvidia's, though Nvidia's software isn't always great. AMD hopes the latest iteration of Navi will actually stand up to the 2080, but those parts aren't due out until the middle of the year, and odds are Nvidia's position is firmly entrenched wit

      • It would be nice for AMD to release a Zen 2 desktop APU instead of this laptop crap.

        The 3200G and 3400G are still running Zen 1-class cores; the "2nd-gen refresh" just moved them from 14nm to 12nm.

        I want an 8-core/16-thread desktop 7nm APU.
      • The 'laptop crap' is a much bigger market than desktop APUs. That said, I'm sure AMD will release a 4000-series APU sometime this year.

        AMD is constrained by TSMC's wafer production volumes. AMD couldn't release all the chips at the same time even if they wanted to; TSMC just doesn't have the capacity to handle the rush.

        Remember how long it took for the 3800X and 3900X to settle down to their MSRPs? It took at least four months, if not longer. And the 3950X is only just now becoming widely available at +$40 ove

  • Mac Pro is toast (Score:4, Informative)

    by Joe_Dragon ( 2206452 ) on Tuesday January 07, 2020 @06:16PM (#59597138)

    Mac Pro is toast

  • Who fucking cares? (Score:5, Insightful)

    by AndyKron ( 937105 ) on Tuesday January 07, 2020 @06:54PM (#59597250)
    And the software programmers have already sucked up that extra power and then some, so we can continue to wait while the bullshit eye candy animates.
    • There's nothing wrong with eye candy per se... you could argue eye candy is your reward for HAVING a high-end PC while doing mundane tasks. What needs to improve is Windows' (and Linux's) ability to intelligently lay off the eye candy when it genuinely starts eating into performance. If my PC would otherwise just throttle down to two cores @ 800MHz, screw it... keep the CPU pegged, and give me realtime-raytraced window-translucency effects like we all *thought* we'd have by 2015, until Windows & Linux su

      • by Anonymous Coward

        RIP Aero Glass, Beryl, and Compiz :-(

        More like "good riddance".

      • by epine ( 68316 )

        If my PC would otherwise just throttle down to two cores @ 800 MHz, screw it ... keep the CPU pegged, and give me realtime-raytraced window-translucency effects like we all *thought* we'd have by 2015 ...

        As promised by Bing Crosby: The Voice of Christmas [wikipedia.org].

        Was Fake Snow Made from Asbestos Marketed as Christmas Decor? [snopes.com] — 22 December 2017

        Asbestos was once marketed as artificial snow and sprinkled on trees and wreaths and ornaments. Although those products have not been produced for many years, the oldest d

    • It's not eye candy that soaks up performance. It's lazy programming that does, following the "optimize later" mantra, which can be correctly translated as "optimize never."
    • And the software programmers have already sucked up that extra power and then some, so we can continue to wait while the bullshit eye candy animates.

      Is this what passes for Insightful [slashdot.org] on Slashdot nowadays?
  • They use those words as if the current-generation Threadripper were backwards compatible with the old sTR4 socket. No, fellas, literally no one is upgrading an existing sTRX4 processor to this one, as anyone who is even remotely interested in this chip has waited for it rather than settling for second best over the past few months.

    This chip will never see a reused motherboard.

    • The motherboards are a couple of years old already, and the new boards have updates, so of course people will get a new board. To be honest, I can count on one hand the number of times I've done a CPU upgrade on a workstation in the last 20 years. 99.9% of the time, once the chip is on the board, they stay together for the life of the machine. This isn't the '90s, when Moore's Law was in full effect and your 4-year-old computer got a 5-6x boost from a CPU upgrade.

      • The motherboards are a couple of years old already

        You're thinking of TR4. The TRX40 platform was announced less than 3 months ago and is NOT compatible with TR4 despite the socket having the same pin layout. That's my point. There are no "existing TRX40" motherboards.

        • True, but that still plays into my point that it's irrelevant for almost everyone. Nearly nobody outside of reviewers and a few enthusiasts is re-using motherboards. You buy a chip, you buy a board, they get connected and stay connected pretty much for life. So for the TRX40 chips, 99.99% of people were going to buy a new board *anyway*.

    • by Kjella ( 173770 )

      Existing as in "previously announced" motherboards for the 24- and 32-core TR3s. AMD is now basically covering the entire range with two sockets: AM4 for 2-16 cores and sTRX4 for 24-64. It's hard to get mad considering the upgrades from the previous generation, both in terms of cores and PCIe 4.0 lanes. Of course, you're also heading way outside even generous "prosumer" ranges with a $4,000 CPU.

    • Any TRX40 motherboard can run the 3990X. It is true that older X399 motherboards cannot run the new TR3 chips, or vice versa, and I agree it kinda sucks a little. But it's hard to be angry at AMD considering what they packed into TRX40. They didn't just force people onto a new TR socket gratuitously, unlike Intel.

      The TRX40 motherboards have 4x the data bandwidth between CPU and chipset that X399 had. That's four times the bandwidth. Not the same, not twice... four times. It means that all the PCIe

      • Any TRX40 motherboard can run the 3990X.

        Yes, my point entirely. The previous generation used the TR4 socket, and those boards can't. There are precisely two existing chips with TRX40 support, and no one will be upgrading either of those to the new Threadripper.

        Workstations get upgraded between generations less often than normal desktops do. If it's not backwards compatible with TR4, then it's not worth mentioning as a plus.

        • I'm sure many people will be replacing their older TR systems with newer TRX40 systems. I'm not sure why you believe people wouldn't. The TR3 chips are considerably more powerful than the TR2 chips core-for-core, older TR systems have value on the used market, and not everyone with a TR2 system is running the highest-end TR2 chips.

          Someone with a 2970WX or 2990WX system probably wouldn't be upgrading (except possibly to a 3990X), but I would say that many people with a 1900X, 1950X, 2920X, or 2950X will de

    • by fintux ( 798480 )
      There were rumors about the 64-core Threadripper requiring a separate sTRX80 motherboard with 8-channel memory among other things. That is not the case, thus "existing motherboards" makes sense here.
  • Now that CPUs have become mostly constrained by their thermal budgets, but can run single-core at 4GHz and above, how about allowing us to run with two or more CPUs again, and selling the i7/i9/Xeon semi-rejects (that have one or more bad cores, but one good one) with true SMP enabled so we can run with TWO CPUs in two sockets pegged at max speed, instead of today's "one at 4.5GHz, two at 3.6GHz, three at 2.5GHz, or four at 2GHz" compromise we're forced to endure (rough numbers are sketched below).

    Or... let us have an i9 Xeon with 4+ cores a
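
    To put rough numbers on the trade-off described in the comment above, here's a back-of-the-envelope sketch (my own illustration, not from the article or the comment) using the turbo bins quoted there. It naively assumes throughput scales with active cores times clock, and that a two-socket box could hold its single-core turbo on every core; both are optimistic simplifications.

    ```python
    # Naive aggregate-clock comparison using the turbo bins quoted above.
    # Assumes throughput scales linearly with (active cores * clock), which
    # real workloads rarely achieve; it only illustrates the shape of the gap.

    turbo_bins = {1: 4.5, 2: 3.6, 3: 2.5, 4: 2.0}  # active cores -> GHz (from the comment)

    print("Single socket, thermally limited:")
    for cores, ghz in turbo_bins.items():
        print(f"  {cores} core(s) @ {ghz} GHz -> {cores * ghz:.1f} 'aggregate GHz'")

    # Hypothetical two-socket box: two 4-core chips, each with its own thermal
    # budget, optimistically pegged at the single-core turbo speed.
    pegged = 2 * 4 * turbo_bins[1]
    print(f"Two sockets, all cores pegged at {turbo_bins[1]} GHz -> {pegged:.1f} 'aggregate GHz'")
    ```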

    • by Guspaz ( 556486 ) on Tuesday January 07, 2020 @10:12PM (#59597796)

      AMD makes Zen 2 consumer CPUs with TDPs ranging from 15W to 280W, where they top out at 64 cores, and the server variants support two-CPU operation to cover the 280-560W range. CPUs do all-core turbo these days, limited primarily by their thermal constraints. Your complaints make sense for 2010, not 2020. Considering the performance hit you take from the additional memory complexity of having more NUMA nodes on a dual-processor system, there is very little reason to want two 4-core CPUs instead of a single 8- or 16-core CPU.

      If you want your CPU to run at 4.5 GHz across all cores at all times instead of just one core at a time, then put a bigger cooler on it.

      • CPUs do all-core turbo these days, limited primarily by their thermal constraints. Your complaints make sense for 2010, not 2020.

        I don't think many people understand this, even the overclockers.

        These days you can "overclock" your AMD CPU simply by installing a high-end aftermarket cooler. No settings need to be changed. Intel is just now also releasing some chips that do the same thing.

        Sure, you can overclock further... but you ain't really accomplishing shit until you have the best coolers on top of it.

      • by m.dillon ( 147925 ) on Wednesday January 08, 2020 @03:35AM (#59598260) Homepage

        Yes and no. Yes, a better cooler will result in better performance, but there are three problems.

        First, there are limits to just how quickly heat can be dissipated from the silicon, due to the transistor density. As geometries get smaller, power density continues to increase, and ambient coolers (whether air- or liquid-based) max out. Going sub-ambient is generally a non-starter for regular use, and even if you do, you still can't go below freezing without causing serious condensation. Not for regular use, anyway.

        The second problem is power consumption. Power climbs steeply as the frequency goes past its sweet spot (around 3.8GHz or so on Zen 2), because the voltage has to rise along with the clock. This is fine if only one core is being boosted, but try to do it on all cores and you can easily start pulling 200-300W for the CPU socket alone (see the rough sketch below).

        The third problem is electromigration... basically, the more current you throw into the CPU die on these smaller nodes, the lower the "safe" voltage winds up being. Where the two curves cross gives you the maximum safe frequency you can actually run the CPU at. So when you are trying to push a higher all-core frequency, you wind up in a rat race: higher frequencies require higher voltages, but the maximum safe voltage drops the more cores you try to run at those higher frequencies.

        These problems also apply to Intel's 10nm and will likely apply to all future (smaller) nodes as well for both Intel and other foundries.

        -Matt
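
        To illustrate the power/frequency point above, here's a minimal sketch of the usual dynamic-power relation, P ≈ C·V²·f. The effective-capacitance constant and the voltage/frequency points are illustrative placeholders, not measured Zen 2 values; only the shape of the curve matters.

        ```python
        # Why all-core boost blows the power budget: dynamic power scales
        # roughly as P ~ C * V^2 * f, and the voltage V has to rise with the
        # frequency f, so per-core power grows much faster than linearly past
        # the sweet spot. C_EFF and the V(f) points are illustrative guesses,
        # not measured Zen 2 values.

        C_EFF = 1.1e-9  # effective switched capacitance (illustrative)

        # Assumed voltage needed to hold a given frequency (GHz -> volts).
        vf_curve = {3.0: 0.95, 3.5: 1.00, 3.8: 1.05, 4.2: 1.20, 4.5: 1.35}

        def core_watts(freq_ghz, volts, c=C_EFF):
            """Dynamic power of one core: P = C * V^2 * f (f in Hz)."""
            return c * volts ** 2 * freq_ghz * 1e9

        for freq, volts in sorted(vf_curve.items()):
            one = core_watts(freq, volts)
            sixteen = 16 * one  # e.g. a 16-core part boosting every core
            print(f"{freq:.1f} GHz @ {volts:.2f} V: "
                  f"{one:4.1f} W/core, {sixteen:5.1f} W for 16 cores")
        ```

        In this toy model, a 1.5x clock bump (3.0 GHz to 4.5 GHz) costs roughly 3x the power once the required voltage increase is factored in, which is the rat race described above.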

    • What for? I have a four-core chip, and everything I do uses some of one core and a little of another. The mostly single-threaded daily apps just can't use SMP much.

      • Even when running apps that aren't optimized for multiple threads/cores, having 2-4 cores/CPUs is a definite step up from one, if only because Windows Explorer *still* has a few single-threaded chokepoints that are mitigated if you have 2+ cores at your disposal. But, with laptops, at least, invoking a second core destroys your single-thread performance due to thermal constraints. By moving to TWO CPUs, you'd separate out your two heat sources enough so that even a shitty, inadequate, crippled laptop coolin

        • by m.dillon ( 147925 ) on Wednesday January 08, 2020 @03:54AM (#59598278) Homepage

          The CPUs in TODAY's laptops beat the holy living crap out of what we had in the Sandy Bridge era, even when running at lower frequencies. It isn't even a contest. Yes, laptop vendors put physically smaller batteries in the thinner laptops... they still put large batteries in 'gaming' laptops, though, and even the smaller batteries generally have twice the watt-hours of capacity that older laptops from that era had.

          In addition, the CPU performance has very little to do with battery life unless the laptop is actually being loaded down. Most of the battery's power consumption is eaten up by the display.

          Just playing Video or browsing around puts basically ZERO load on a laptop CPU. The video is handled by dedicated decode hardware in the iGPU, and having a ton of browser windows open doing animations won't even move the needle on CPU use. The only way to actually load a laptop CPU down these days is to do some sort of creator type of work... batch photoshop, rendering, VR, or other work.

          Almost nothing running on a modern laptop is single-threaded, not even a browser that has only one tab open. At a minimum the graphics pipe will use a second core (whether using GPU HW acceleration or not), which means that software logic and screen updates get their own cores. Even for a single-threaded program. There are no bottlenecks outside of the storage subsystem, so if that's an SSD, a modern laptop is going to have lightning response under almost all conditions.

          Any real browser, such as chrome or firefox, is pretty seriously multi-threaded. I have four chrome windows open on my workstation right now with not very many tabs... maybe 6 tabs open at the moment, and ps shows 182 program threads associated just with chrome across 21 discrete processes. 182 program threads. (There's a small sketch below showing one way to count these yourself.)

          Where there is bloat on today's systems, it tends to be with memory use, particularly when running browsers. Getting a laptop with at least 8GB (and better, 16GB) of RAM is definitely an important consideration. My relatively minimal browser use is eating... 5GB of RAM. Of course, my workstation has 32GB, so I don't really feel it. But the same issue exists on a laptop. Get more RAM and things will run more smoothly. You can swear at the software... but still, get more RAM :-).

          -Matt
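
          For anyone who wants to repeat the thread-counting observation above, here's a minimal sketch using the psutil package (the `ps` approach in the comment, e.g. `ps -eLf | grep chrome | wc -l` on Linux, works just as well). The browser process name below is an assumption about how the binary appears on your system.

          ```python
          # Count processes and threads belonging to a browser, similar to
          # what inspecting `ps` output shows. Requires: pip install psutil
          import psutil

          BROWSER = "chrome"  # adjust for your system: "chromium", "firefox", ...

          procs = 0
          threads = 0
          for p in psutil.process_iter(["name", "num_threads"]):
              name = (p.info.get("name") or "").lower()
              if BROWSER in name:
                  procs += 1
                  threads += p.info.get("num_threads") or 0

          print(f"{procs} {BROWSER} processes, {threads} threads in total")
          ```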

          • by Anonymous Coward

            Just playing Video or browsing around puts basically ZERO load on a laptop CPU. The video is handled by dedicated decode hardware in the iGPU, and having a ton of browser windows open doing animations won't even move the needle on CPU use.

            Depends on how new the hardware we're talking is. Anything in the past year or so, sure, 100% right. You start going back much further than that, though, and support for H.265 and VP9 starts getting very spotty or even absent entirely. And software-decoded video does stra

            • Web clients and servers negotiate the format. Most CDNs can distribute video in numerous formats and resolutions (some even transcode on the fly so they don't have to store them all). Anything with hardware video acceleration from the last 10 years is likely to be supported.

              -Matt

          • I have four chrome windows open on my workstation right now with not very many tabs... maybe 6 tabs open at the moment, and ps shows 182 program threads associated just with chrome across 21 discrete processes. 182 program threads.

            Mind you, that's 6 discrete processes for the 6 tabs, and 15 indiscreet ones observing your every move, analyzing it, and when you look away, the gist of it all is quietly sent to Google.

            I did a quick cross check, and Firefox with the same tabs open shows just 6 discrete p

          • Almost nothing running on a modern laptop is single-threaded, not even a browser that has only one tab open. [...] I have four chrome windows open on my workstation right now[...] and ps shows 182 program threads

            Sure, but there's a reason AMD was made fun of with the whole "moar cores" meme, and why Threadrippers only excel with certain software (even though Chrome uses 182 threads): CPU load is not usually evenly distributed among threads, and those 182 threads are not all active at the same time, making a CPU's single-thread performance very important.

      • Please explain to us which single-threaded daily app needs an overclock... it's a simple question.

        The fact is that you don't sit there waiting for single-threaded anything these days that isn't I/O bound (which can only very marginally benefit from faster single-threaded performance).

        What people sit there waiting for is either all multi-threaded (video encoding, raytracing, etc.) or I/O bound (network speed, disk speed, etc.). There is no universe where the typical person will even notice single thread perf
        • That was my point: most people neither video encode nor ray trace, so some chips are as useless as tits on a bull.

          Sure, *some* people could benefit.

    • The AMD chiplet design is better than a dual-processor setup because it avoids having multiple NUMA domains. They do offer dual-socket solutions for their server-grade Epyc platform, but the consumer and professional platforms, Ryzen and Threadripper, are single-socket. I can't imagine what single task you need to run on a workstation that requires more than 64 cores/128 threads.

      • I can't imagine what single task you need to run on a workstation that requires more than 64 cores/128 threads.

        My whole argument is that "(a lot) more cores/threads" WON'T make a bigger difference to performance than a small number of individually faster cores. Two cores are always better than one, and 3 or 4 equally fast cores will make the scheduling easier, if nothing else. But diminishing returns rapidly kick in.

        With today's technology, two or four physical CPUs -- each running at max Turbo sp

      • I can't imagine what single task you need to run on a workstation that requires more than 64 cores/128 threads.

        64 cores ought to be enough for everyone.
