AMD Opteron Vs EPYC: How AMD Server Performance Evolved Over 10 Years (phoronix.com)

New submitter fstack writes: Phoronix has carried out tests comparing AMD's high-end EPYC 7601 CPU to AMD Opteron CPUs from about ten years ago, looking at EPYC/Opteron Linux performance and power efficiency. On both raw performance and performance-per-Watt, the numbers are quite staggering, though single-threaded performance hasn't evolved nearly as much. The EPYC 7601 is a $4,200 USD processor with 32 cores / 64 threads. The first of many tests was with the NAS Parallel Benchmarks: "For a heavily threaded test like this, going from a single Opteron 2300 series to the EPYC 7601 yielded around a 40x increase in performance," reports Phoronix. "Not bad when also considering it was only a 16x increase in the thread count (4 physical cores to 32 cores / 64 threads). The EPYC 7601 has a lower base clock frequency than the Opteron 2300 CPUs tested but a higher turbo/boost frequency, among many architectural advantages over these K10 Opterons. With the NASA test's Lower-Upper Gauss-Seidel solver, going from the dual Opteron 2384 processors to a single EPYC 7601 yields around a 25x improvement in performance over the past decade of AMD server CPUs. Or, looking at the performance-per-Watt with the LU.C test, it's also around a 25x improvement over these older Opterons."
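As a quick sanity check on those headline numbers, here is a minimal sketch of the scaling arithmetic (the inputs come from the summary above; the per-thread breakdown is back-of-the-envelope, not something Phoronix reports):

```python
# Rough scaling math for the NAS Parallel Benchmarks figures quoted above.
opteron_threads = 4    # one Opteron 2300-series chip: 4 cores, no SMT
epyc_threads = 64      # EPYC 7601: 32 cores / 64 threads
observed_speedup = 40  # heavily threaded NPB result from the summary

thread_ratio = epyc_threads / opteron_threads      # 16x more hardware threads
per_thread_gain = observed_speedup / thread_ratio  # ~2.5x per hardware thread

print(f"thread ratio: {thread_ratio:.0f}x")
print(f"implied per-thread gain: {per_thread_gain:.1f}x")
```

That ~2.5x per-thread figure lumps together IPC, clocks, SMT efficiency and memory bandwidth, so treat it as a crude aggregate rather than a single-threaded benchmark result.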
  • The article notes speedups ranging from about 3X to 40X depending on the test... While that initially sounds like a lot, it's only after 10 years of development. If performance had doubled every 18 months, the speedup should be roughly 100X (the arithmetic is sketched after the comments).
    • by ERJ ( 600451 ) on Wednesday September 20, 2017 @06:21PM (#55234717)
      I'll be that guy...

      Technically Moore's Law (I assume that is what you are referencing) says the number of transistors per square inch of wafer will double every 18 months, not that performance will double.
      • I'll be that guy...

        Technically Moore's Law (I assume that is what you are referencing) says the IC complexity for minimum component costs will double every two years, not that the number of transistors per square inch of wafer will double.

      • I'll be that guy... Technically Moore's Law (I assume that is what you are referencing) says the number of transistors per square inch of wafer will double every 18 months, not that performance will double.

        Ha, yes, I saw that coming. I know what Moore's law says, which is why I didn't directly reference it. That said, the industry's success has been rooted in its historical exponential performance gains. As a practical matter, no one cares how many transistors are on a chip, only what that chip can do.

    • Re: (Score:3, Insightful)

      by night ( 28448 )

      Moore's "law" was about doubling transistors, not performance. But it's not a law, and it has already failed in both senses of the word. Intel just delayed their 10nm CPUs again, and AMD announced that their next-generation shrink will be 12nm; given that the current CPUs are 14nm, that's a rather slow advance. Generation-to-generation improvements are on the order of 10-33% on most metrics and take quite a bit longer than 18 months to arrive.

      • by barc0001 ( 173002 ) on Wednesday September 20, 2017 @06:55PM (#55234873)

        It's been dead for some time. My home rig is an i5 2500K that I bought in the spring of 2011. It's only this year that I've found a decent midrange "doubling" candidate to build a new machine around: interestingly enough, the Ryzen 5 1600. Benchmarks suggest that I'll get about double the performance out of it, and it's in the same price bracket as the 2500K was when I bought it. 6.5 years and only 2x the oomph on the desktop in the midrange price bracket. I used to see that sort of improvement (and upgrade accordingly) every 2 years, but no longer.

        • by night ( 28448 ) on Wednesday September 20, 2017 @07:17PM (#55234975)

          Agreed, which, if you think about it, justifies more expensive desktops on a longer replacement cycle. Go ahead and spend the extra few hundred on RAM, CPU, motherboard and case, then plan on keeping it for 7-10 years. The next doubling is going to take even longer than the last one.

        • by Kjella ( 173770 ) on Wednesday September 20, 2017 @08:59PM (#55235583) Homepage

          It's been dead for some time. (...) 6.5 years and only 2x the oomph on the desktop in the midrange price bracket. I used to see that sort of improvement (and upgrade accordingly) every 2 years, but no longer.

          It's getting very near the end. 10nm is already shipping, though not in desktop chips; 7nm gets exotic with EUV but is probably doable; 5nm is a "maybe, if we get all the crazy quantum effects worked out." Even if they pull another rabbit out of the hat, silicon's lattice constant is 0.543 nm, which is a far more fundamental problem than all the other issues they've found workarounds for. You're literally down to counting atoms (see the napkin math after the comments); my guess is that by 2025 they'll have reached the end of the line. Not just a speed bump, but permanently. At least for anything remotely resembling the processors we have today.

        • It's been dead for some time.

          We're still too close to it to tell when Moore's law really ended. My guess is that historians will later peg it somewhere in 2010-2015.

    • by Anonymous Coward

      Unless I'm mistaken, 10 years = 120 months, which is 6.66 doubling iterations, so somewhere between 64x and 128x (about 100x), even if we ignore the fact that Moore's "Law" refers to transistor counts rather than performance.

    • It's still pretty impressive, especially when you account for most of the growth coming from more cores, which run into diminishing returns much sooner than single-core improvements do.

      Anything else improving 3x in ten years would be pretty much miraculous. Imagine if cars had improved their fuel efficiency that much over the same period. We're so spoiled it's not even funny.
      • Anything else improving 3x in ten years would be pretty much miraculous. Imagine if cars had improved their fuel efficiency that much over the same period. We're so spoiled it's not even funny.

        This is a good point, but the entire computing industry has been predicated on exponential growth. It'll take centuries to get real AI with linear growth of the underlying hardware.

  • by Anonymous Coward

    I want it to have my babies.

  • by Gravis Zero ( 934156 ) on Wednesday September 20, 2017 @07:01PM (#55234911)

    They did a comparison between the highest-end Intel chips and the EPYC 7601. [phoronix.com] Not to spoil it, but EPYC blew the panties straight off of Intel's chips while using less power. It's no wonder Intel has been flailing in the media. [slashdot.org]

  • So, will this shit run the new Wolfenstein II: The New Colossus? My old Gen 1 i5 doesn't have enough cores or threads or gigabytes or some bullshit.
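As promised above, the Moore's-law extrapolation the first comment thread argues over is easy to sketch (the 18- and 24-month cadences are the two commonly quoted doubling periods, not figures from the article):

```python
months = 10 * 12  # the decade separating the Opteron 2300 series and EPYC 7601

for cadence in (18, 24):  # doubling every 18 vs. every 24 months
    doublings = months / cadence
    print(f"every {cadence} months: {doublings:.2f} doublings -> {2 ** doublings:.0f}x")

# every 18 months: 6.67 doublings -> 102x
# every 24 months: 5.00 doublings -> 32x
```

Either cadence puts the extrapolated speedup at or well above the observed 3X-40X, which is the point the top comment is making.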
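Kjella's atom-counting remark can be put in numbers the same way (0.543 nm is the textbook lattice constant of crystalline silicon; treating a node name like "5nm" as a physical feature size is a simplification, since marketing node names stopped tracking real gate lengths some time ago):

```python
SI_LATTICE_NM = 0.543  # cubic lattice constant of crystalline silicon

for node_nm in (14, 10, 7, 5):
    cells = node_nm / SI_LATTICE_NM
    print(f"{node_nm}nm feature: ~{cells:.0f} silicon unit cells across")

# 14nm: ~26 cells, 10nm: ~18, 7nm: ~13, 5nm: ~9
```

At single-digit unit-cell counts there is very little room left for the kinds of workarounds that carried previous shrinks, which is the substance of the "counting atoms" argument.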
