AMD Ryzen Threadripper Launched: Performance Benchmarks Vs Intel Skylake-X (hothardware.com) 121

Reader MojoKid writes: AMD continues its attack on Intel in the desktop CPU market today, with the official launch of the company's Ryzen Threadripper processors. Threadripper is AMD's high-end, many-core desktop processor, which leverages the same Zen microarchitecture that debuted with Ryzen 7. The top-end Ryzen Threadripper 1950X is a multi-chip module featuring 16 processor cores (two discrete dies), with support for 32 threads. The base frequency for the 1950X is 3.4GHz, with all-core boost clocks of up to 3.7GHz. Four of the cores will regularly boost up to 4GHz, however, and, power and temperature permitting, those four cores will reach 4.2GHz when XFR kicks in. The 12-core Threadripper 1920X has very similar clocks; its boost and XFR frequencies are exactly the same, but its base clock is 100MHz higher than its big brother's, at 3.5GHz. In a litany of benchmarks with multi-threaded workloads, the Threadripper 1950X's and 1920X's high core counts, combined with strong SMT scaling, produce the best multi-threaded scores seen from any single CPU to date. Threadripper also offers massive amounts of memory bandwidth and more I/O than Intel's competing processors. Though absolute power consumption is somewhat high, Threadrippers are significantly more efficient than AMD's previous-generation processors. In lightly-threaded workloads, however, Threadripper trails Intel's latest Skylake-X CPUs, which translates to lower performance in applications and games that can't leverage all of Threadripper's additional compute resources. Threadripper 1950X and 1920X processors are available starting today at $999 and $799, respectively. On a per-core basis, they're less expensive than Intel's Skylake-X and very competitively priced.

  • Let's just say this thing is going to put out some serious heat at 180 watts TDP. You will need a big and loud fan. And any money you save on the cost of the CPU, you will pay to the electric company. And you will have to hope that you do not use the CPU for long, because the longer you use it, the more this space heater will cost you over an Intel CPU.

    • Re: (Score:3, Insightful)

      by rahvin112 ( 446269 )

TDP is calculated differently by the two companies; it's essentially a worthless metric for comparisons between manufacturers because it doesn't give you any real information. Wait for the power consumption tests.

    • by Anonymous Coward

I seem to remember AMD and Intel use different definitions of TDP. For AMD, it is the absolute maximum power you can make the chip dissipate, no matter the code you are running; a cooling system that can dissipate this amount of heat guarantees that performance will not be impacted by lack of cooling. For Intel, it is the power for an above-average use case, but not a worst-case one, allowing some throttling to occur.

      TDP doesn't really tell you anything about expected average power consumption, certa

    • What matters is the idle power consumption. Desktop CPUs are idle over 99% of the time.

      • Re: (Score:3, Informative)

        Spoiler: It's not great [hothardware.com]

        • But about what you'd expect for two Ryzen 7's.
        • by Kjella ( 173770 )

Probably not unexpected; you have the quad-channel memory controller, the CCX interconnect, and all the PCIe lanes, so it's probably hard to power everything down. Since this is mostly a spin-off of the server chips, I doubt they've given idle power that much care; a server with 32 cores is rarely idle. If you really wanted to bring the idle consumption down, I think you'd have to do some kind of heterogeneous computing, but then you'd need a lot of OS/application support. I don't think AMD should bet on that, we saw how their A

    • by AmiMoJo ( 196126 )

AMD strongly recommends liquid cooling for this CPU.

      • by Khyber ( 864651 )

        And what do they recommend for the VRMs if you go liquid cooling, now that the airflow over them has essentially been stripped away? That's been a common problem most mobo makers don't consider.

The last machine I watercooled had heatsinks put over the VRMs (something like 15 cents apiece), and a big Noctua fan right in front of my hard drives and SSD. But I agree, some people forget to sink the VRMs when they go water

          • by Khyber ( 864651 )

VRM heatsinks haven't stopped my fiancé's FX-9370 from overheating, and those came stock on the mobo. That thing's got a 240mm radiator with huge airflow venting upwards out of the case, two intake fans on the front going over the hard drives, one intake fan on the back blowing over the heat-sinked VRMs, and one side panel fan that blows directly atop the VRMs. The GPU is shrouded and blows out the back, as does the bottom-mounted PSU. There's tons of airflow over those VRMs.

            The bitch still overheats. Primarily

AMD recommends liquid cooling for this chip, not air cooling
    • Until you wrote "over an Intel CPU" [hardwarecanucks.com], you were being borderline rational.
    • Re: (Score:3, Informative)

The Intel i9 is hotter, and the wattage difference is more like 25 watts. No, you won't save $1000 in electrical costs.

Keep in mind these are HUUUGGE 12-core dies. If you care about wattage, then the Ryzen series, which uses fewer watts than the i7, may be more in your budget, as these are workstation-oriented processors, not desktop parts.

The i9 sucks too, with lots of heat and watts compared to the desktop-oriented counterparts. Keep in mind these are new-generation CPUs and not the crappy Bulldozer architecture that

You can't just measure power consumed. You also need to look at the work done for the amount of power consumed. If the Intel CPU consumes 40W less (we'll assume both companies' measurements are spot-on accurate for max draw) but takes twice as long to complete some task, then it's probably less efficient over the long run.

If you're really worried about power consumption, it's probably best to undervolt and use less aggressive turbo settings. When looking at their Ryzen desktop parts there is considerable
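The parent's efficiency point can be made concrete with a back-of-envelope energy calculation. All of the numbers here are illustrative assumptions, not measurements of any real chip:

```python
# Energy per task = average power draw x time to finish the task.
# Hypothetical chips: A draws 180 W and finishes a job in 1 hour;
# B draws 40 W less (140 W) but, being slower, takes 2 hours.
power_a_w, hours_a = 180, 1.0
power_b_w, hours_b = 140, 2.0

energy_a_wh = power_a_w * hours_a  # 180 Wh
energy_b_wh = power_b_w * hours_b  # 280 Wh

# The "lower power" chip burns more energy per completed task.
print(energy_a_wh, energy_b_wh)  # 180.0 280.0
```

So wattage alone says nothing; it's watts multiplied by runtime, per unit of work, that shows up on the electric bill.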
  • This is interesting now but in 4-8 months is when the market will begin to really adjust and the competitors will be more squared off. Expect a very interesting holiday season. Now if we can just get that memory price down.
Is a server/workstation version in the works? Motherboards that will run them with ECC?

Or a server/workstation system on the desktop socket?

AMD needs to have a server system for the users that don't need the full EPYC but still want server-class hardware. Systems in the $1000-$1200 range (the server case + PSU can be $100-$300 of that price), or just towers at $800-$1000 with a server board (IPMI), and maybe the x16 (CPU) slot cut into x8/x8 + x4, or even x4/x4/x4/x4/x4, for storage (HBA / RAID card / PCIe) and networking (10G). And be better than the Intel parts in the same class, which only have x16 + DMI off the CPU and fewer cores.

    • by Kokuyo ( 549451 )

      As far as I can see, the Asrock X399 Taichi supports ECC already and that's a consumer board.

      My main problem is that I want high clocked ECC memory and that seems to be a market that has been pretty much neglected so far.

    • by eWarz ( 610883 )

AMD needs to have a server system for the users that don't need the full EPYC but still want server-class hardware. Systems in the $1000-$1200 range (the server case + PSU can be $100-$300 of that price), or just towers at $800-$1000 with a server board (IPMI), and maybe the x16 (CPU) slot cut into x8/x8 + x4, or even x4/x4/x4/x4/x4, for storage (HBA / RAID card / PCIe) and networking (10G). And be better than the Intel parts in the same class, which only have x16 + DMI off the CPU and fewer cores.

      Except EPYC CPUs have fairly similar pricing, so why not just go with EPYC?

Server systems in the $800-$1200 range (full hardware cost) are desktop-class. The next level up for 1P is more like $1300-$2000.

And right now there are really no 1P EPYC boards out.

For lower-end needs, AMD needs to have stuff in the $800-$1200 range.

  • I've been burned too many times by AMD's claims of performance with CPUs and GPUs only to find that my games actually run better on Intel.

    The money saved is never worth it, to me. I always end up wishing I had Intel.

    • by jimbo ( 1370 )

That's entirely your own fault. Marketing in any industry will never be trustworthy; that's not its job.

AMD vs Intel is a good example: there were times when AMD was the better performer and Intel had many more errata, and vice versa. You have to evaluate for every purchase.

      Do your own research before you spend your money. Never be loyal to any brand. Research, then buy for your use cases and budget.

    • > The money saved is never worth it, to me. I always end up wishing I had Intel.

      Then you're not doing your research properly. I've bought both Intel and AMD chips and been very pleased with what I purchased because I do the research.

      Case in point, here you're looking at Threadripper and then mention games. You don't buy a Threadripper for games, you buy it for workstation tasks. You want games, wait for one of the new 8th Gen i5s in a few weeks or a Ryzen 5 1600X and then take all the money you would

      • Research requires knowing what matters.

        Unfortunately many people, even here, can't separate the meaningless from the important.

For instance, some will drone on and on about single-threaded performance when the reality is that they never sit waiting for any non-I/O-bound single-threaded tasks. Others will marvel at video encoding speeds, but the only time they actually encode video is when using the machine as a DVR that only needs to be fast enough for real-time video.

        The ultimate question is: What
    • Games? You want the Ryzen 1700 or the upcoming i7 8700k from Intel (we'll get actual info on the 21st).

    • If you're buying a Ryzen Threadripper or Skylake-X for gaming, I can say you've already messed up. [car analogy] That's like buying an 18-wheeler to haul your weekly groceries.[/car analogy]
      • by Kjella ( 173770 )

        If you're buying a Ryzen Threadripper or Skylake-X for gaming, I can say you've already messed up. [car analogy] That's like buying an 18-wheeler to haul your weekly groceries.[/car analogy]

        ...and then complain that everybody talked about cargo capacity and didn't mention it would be hard to park, doesn't have seats to take your kids to soccer practice and has terrible MPG for your commute.

      • The 1920 looks like an attractive all-round chip with reasonable thermal profile. It's ready for the next generation of thread-heavy Vulkan (and M$'s Vulkan clone) games and it's also good for heavy lifting, if you want that. I do.

Well, there's no price yet for the 1920, so it's hard to gauge whether its pricing is reasonable. Based on the currently known specs, it has only a slight advantage in memory and PCIe lanes over the 1800, but it also requires more power. If it is priced way above the 1800 (which I think it will be), it's not worth it as an all-purpose chip. It's more like workstation-lite at that point.
If it is priced way above the 1800 (which I think it will be), it's not worth it as an all-purpose chip. It's more like workstation-lite at that point.

            What is not general purpose about a workstation-lite?

Pricing and purpose. I'm not encoding videos all day on a general-purpose machine if that's my job. I'm going to ask for a workstation to do that. Of course the company is going to spend more on my desktop than on the receptionist's, who doesn't need 8-16 cores.
I'm not encoding videos all day on a general-purpose machine if that's my job. I'm going to ask for a workstation to do that. Of course the company is going to spend more on my desktop than on the receptionist's, who doesn't need 8-16 cores.

                OK, well I'm glad your receptionist is ok with that, but anybody who tries to foist off a budget piece of crap on me as a "general purpose" machine can sit on my nether thumb.

Well, all of the benchmarks seem to put the Ryzen 7s a little slower than the best i7s on game FPS/thread performance. But those games and programs that can take advantage of the additional threads will naturally stomp the i7, since there are twice as many threads available. So that's how I rationalized my purchase. No, I'm not getting the best game performance, but it's not that far behind either.
      • I have a question since you seem to have more of a clue about these things than me: Do the number of threads automatically scale? So, if a game (or program) is designed to take advantage of two threads, or three, or four, will it automatically take advantage of a dozen threads?

        I do digital music production on a Xeon, and my DAW (Cockos Reaper) is designed to use multiple threads, as well as remote processors via ethernet. I'm about ready for a new music system, so maybe these new Ryzens would be just the

        • No, it's not automatic. Windows can automatically assign different processes (executables) to different threads but that's about the limit to "automatic" cpu load distribution. Beyond that, the individual processes are responsible for their own load distribution. I don't know much about Cockos Reaper or why music production would be especially costly in terms of CPU usage. Typically programs for rendering high resolution graphics or video editing or scientific calculations are designed to take advantage
          • No, it's not automatic.

Wrong. It's not automatic unless it is programmed to be automatic. There is no technical obstacle to doing this in Vulkan; it's just a detail that some engine vendors may not yet have covered because four-core processors are so common as of today. You can bet that variable thread count will be standard in all major engines in the near future.
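To illustrate the point about scaling not happening by itself: the program has to ask how many cores exist and size its own worker pool. A minimal Python sketch (`render_chunk` is a hypothetical stand-in for a real unit of engine work):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def render_chunk(chunk_id):
    # Hypothetical stand-in for one independent unit of engine work.
    return chunk_id * chunk_id

# A program hard-coded for four worker threads ignores any extra cores:
with ThreadPoolExecutor(max_workers=4) as fixed_pool:
    fixed_results = list(fixed_pool.map(render_chunk, range(100)))

# A variable-thread-count design queries the hardware at startup and
# scales to whatever is there, a 4-core desktop or a 32-thread Threadripper:
with ThreadPoolExecutor(max_workers=os.cpu_count() or 1) as scalable_pool:
    scalable_results = list(scalable_pool.map(render_chunk, range(100)))

# Same answers either way; only the available parallelism differs.
print(sum(scalable_results))  # 328350
```

The engine-vendor work being discussed is exactly the second pattern: sizing worker pools from the detected core count instead of a baked-in constant.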

No. Crysis 3, surprisingly, scales well and kicks ass on a Ryzen, since it is 8 cores/16 threads, but most games do not scale super well beyond 3 to 4 cores.

Is your workflow dependent on an audio card, or is it all CPU? But from what I read about Ryzen, you are right to be a little cautious, as it is very new. In a nutshell, after their disastrous Bulldozer failed, AMD rehired its former Alpha CPU architect, who designed the Athlon XP back in the day. The new architecture is 52% faster per core than its predecessor.

          Problem is bugs

          • Is your workflow dependent on an audio card or is it all CPU?

The VST and VSTi plugins I use eat up a lot of processor. The Xeon in my current music system can handle it no problem. The main bottleneck is disk throughput. I stream the recorded tracks from a Linux machine with a RAID array, and I've been throwing SSDs into the system as I go along. Now that I think about it, everything's running just fine and the only reason I would think to build a new DAW system is because I'm used to doing it every 3-4 years.

            • Is your workflow dependent on an audio card or is it all CPU?

              The VST and VSTi plugins I use eat up a lot of processor. The Xeon in my current music system can handle it no problem. The main bottleneck is disk throughput. I stream the recorded tracks from a Linux machine with a RAID array, and I've been throwing SSDs into the system as I go along. Now that I think about it, everything's running just fine and the only reason I would think to build a new DAW system is because I'm used to doing it every 3-4 years.

              I'll just wait a bit and watch the DAW forums to see what people say about the Ryzen. I've learned my lesson about being the early adopter.

Hey, we're nerds. That is why we are here!

I would love to have something like this [youtube.com] (skip to 5 minutes) and impress all the ladies with my build (in my dreams). But I too own an i7-4770K from 2014 and have no reason to change besides specs. I just want it :-)

I think an NVMe drive would be nicer for booting virtual machines, but they already load in a few seconds on my RAID 0 SSDs, so no need to change. What I have, 3 years old or not, works fine, and there has never been a moment where I cursed that it was too slow.

  • by BenJeremy ( 181303 ) on Thursday August 10, 2017 @12:43PM (#54983975)

It looks like AMD will have some sort of RAID support in the X399 chipset, but at launch, they don't have bootable RAID-0 support for NVMe drives.

    Intel promises this on the X299 motherboards, but hobbles it with the DMI interface for motherboard-mounted M.2 slots, and the need for an expensive "VROC Upgrade Key" (i.e. DRM nonsense) just to run non-Intel parts in a bootable RAID-0 array... oh, and the "Key" isn't actually available yet, at any price.

VROC was the last nail driving me away from their platforms. Sad, really, considering their RAID technology promised an almost direct multiplying of bandwidth in RAID-0, up to 25+ GB/s. Intel has crippled RAID support moving forward, and there is little point to using their stuff when AMD has managed to catch up and costs much less. I just wish AMD would move faster to provide decent RAID support in the X399 chipset... though apparently there is a promise to deliver support in a future update.
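For context on the bandwidth-multiplying claim: RAID-0 stripes reads across all member drives, so ideal sequential throughput is roughly per-drive bandwidth times drive count. A rough sketch; the 3.5 GB/s per-drive figure is an assumption for a fast PCIe 3.0 x4 NVMe SSD, not a quoted spec:

```python
PER_DRIVE_GBPS = 3.5  # assumed sequential read per NVMe drive, GB/s

def raid0_throughput(drive_count, per_drive=PER_DRIVE_GBPS):
    # Striping lets every member drive stream in parallel, so the
    # zero-overhead ideal is a straight multiple of the drive count.
    return drive_count * per_drive

for n in (2, 4, 8):
    print(n, raid0_throughput(n))  # 2 -> 7.0, 4 -> 14.0, 8 -> 28.0 GB/s
```

On those assumed numbers, reaching the quoted 25+ GB/s takes seven or eight drives' worth of CPU-attached lanes, which is exactly why routing M.2 slots through the shared DMI link caps what the array can actually deliver.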

  • “Threadripper” is obviously a compound word. I know what a “dripper” is, but what is a “threa”?

  • He called them "Lab Queens".

In other words, they were fantastic in the engineering lab, where conditions could be tightly controlled and optimized; but in the real world, they just didn't work out so well.

So, what we have with the AMD Ryzen CPUs are chips which, when benchmarks are constructed like virtually NO software actually is, appear to kick ass. But with software written the way 99% of developers and their development toolchains actually write it, they are LOWER-performing than their "slower" counterparts.

    • He called them "Lab Queens".

In other words, they were fantastic in the engineering lab, where conditions could be tightly controlled and optimized; but in the real world, they just didn't work out so well.

So, what we have with the AMD Ryzen CPUs are chips which, when benchmarks are constructed like virtually NO software actually is, appear to kick ass. But with software written the way 99% of developers and their development toolchains actually write it, they are LOWER-performing than their "slower" counterparts.

      TL;DR: LOL!!!

Actually not true. Where Ryzen helps in the real world is keeping the system responsive when you have a million things open that use threads, like Chrome, for example. Ryzen is basically a Skylake i5 with several more cores. An i7 has more performance per core; on that you are correct.

Keep in mind phones today have more cores than an i7, and Intel has now woken up and redesigned Coffee Lake to include 6 and 8 cores.

A Ryzen R3/R5 is cheaper than an i5 and has SMT (hyperthreading), where with Intel you need an i7. I t

      • He called them "Lab Queens".

In other words, they were fantastic in the engineering lab, where conditions could be tightly controlled and optimized; but in the real world, they just didn't work out so well.

So, what we have with the AMD Ryzen CPUs are chips which, when benchmarks are constructed like virtually NO software actually is, appear to kick ass. But with software written the way 99% of developers and their development toolchains actually write it, they are LOWER-performing than their "slower" counterparts.

        TL;DR: LOL!!!

Actually not true. Where Ryzen helps in the real world is keeping the system responsive when you have a million things open that use threads, like Chrome, for example. Ryzen is basically a Skylake i5 with several more cores. An i7 has more performance per core; on that you are correct.

Keep in mind phones today have more cores than an i7, and Intel has now woken up and redesigned Coffee Lake to include 6 and 8 cores.

A Ryzen R3/R5 is cheaper than an i5 and has SMT (hyperthreading), where with Intel you need an i7. I think in the real world Ryzen is a great value, as even the budget R3 1200 is a true quad-core CPU at an i3 price! The speed difference is not that great for single-tasking, but things are changing, and having 30+ tabs in Chrome, a game running, and a virus scan in the background with Office will show some benefit from SMT and more than 4 cores.

You could be right, actually, especially with OSes like macOS and iOS, which use Grand Central Dispatch (GCD) to dole out threads to various cores basically automatically and transparently to the application developer.

        https://developer.apple.com/li... [apple.com]

        https://en.wikipedia.org/wiki/... [wikipedia.org]

BTW, Apple open-sourced GCD under the Apache License model, so there's really no excuse for any OS not to use it! But apparently gcc doesn't support "blocks", so neither Linux nor Android use it (or at least not generally). Bu
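For readers outside the Apple ecosystem, the GCD model being described (submit small work items to a queue and let the runtime decide how many cores to fan them across) can be sketched with a Python analogy; `concurrent.futures` here merely stands in for libdispatch, and `block` is a hypothetical work item:

```python
from concurrent.futures import ThreadPoolExecutor

def block(x):
    # Stand-in for a GCD "block": one independent unit of work.
    return x * 2

# Analogous to dispatch_async onto a global concurrent queue: the
# caller never picks a thread count; the executor's default sizing
# decides how wide to run, so extra cores get used transparently.
with ThreadPoolExecutor() as queue:
    futures = [queue.submit(block, i) for i in range(8)]
    results = [f.result() for f in futures]

print(results)  # [0, 2, 4, 6, 8, 10, 12, 14]
```

That queue-of-work-items shape is what lets a many-core chip like Threadripper help without the application developer hand-managing threads.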
