AMD Hardware

AMD Launches 3 Second-Gen Epyc Processors With 50% Lower Cost of Ownership (venturebeat.com)

Advanced Micro Devices said it is adding three new 2nd-Gen AMD Epyc server processors that can deliver up to 50% lower cost of ownership than rival Intel Xeon processors. From a report: The chips are part of AMD's attempt to grab technology leadership away from Intel, which has long dominated the server chip market. AMD has had an advantage lately with its high-performance Zen 2 cores, designed to handle database, high-performance computing, and hyper-converged infrastructure workloads, Dan McNamara, senior vice president of AMD's server business unit, said in a press briefing. The three new processors are the AMD Epyc 7F32 (8 cores), Epyc 7F52 (16 cores), and Epyc 7F72 (24 cores). They have up to 500MHz of additional base frequency and large amounts of cache memory. AMD said the design makes Epyc the x86 server central processing unit with the world's highest per-core performance. The previous chips in the second generation of Epyc processors debuted in the third quarter of 2019. [...] The 7F32 is priced at $2,100, the 7F52 at $3,100, and the 7F72 at $2,450.
  • Why would the higher core count CPU cost $650 less than the mid-tier CPU?
    • by Guspaz ( 556486 ) on Tuesday April 14, 2020 @04:49PM (#59946990)

      Because the 16-core chip is a 64-core chip with 75% of the CPU cores disabled in order to increase the power and cache available to each core, while the 24-core chip is a 48-core chip with 50% of the CPU cores disabled.

      As a result, the 16-core chip has a total of 256MB of L3 cache (16MB/core), while the 24-core chip only has 192MB of cache (8MB/core).

      Put in a bit more detail, AMD's Zen 2 processors have 4 cores per CCX and two CCXes per chiplet. The EPYC 7F72 has 6 chiplets with 2 cores per CCX active (6x2x2=24), while the EPYC 7F52 has 8 chiplets with 1 core per CCX active (8x2x1=16).

      These chips are really specialized for people who need the maximum possible single-threaded performance in an enterprise-grade processor. They may be shipping chips where there are defects in the disabled cores, so this gives them another way to make a useful product out of them.
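
      As a rough illustration of the chiplet arithmetic above, here is a small Python sketch; the function name is just for illustration, and the 16MB-per-CCX L3 figure is taken from the cache numbers quoted in this thread:

          def epyc_config(chiplets, ccx_per_chiplet, active_cores_per_ccx, l3_per_ccx_mb=16):
              """Return (active cores, total L3 in MB, L3 per core in MB)."""
              cores = chiplets * ccx_per_chiplet * active_cores_per_ccx
              # Each CCX keeps its full L3 slice even when some of its cores are fused off.
              l3_total = chiplets * ccx_per_chiplet * l3_per_ccx_mb
              return cores, l3_total, l3_total / cores

          print(epyc_config(6, 2, 2))  # EPYC 7F72: (24, 192, 8.0)
          print(epyc_config(8, 2, 1))  # EPYC 7F52: (16, 256, 16.0)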

      • by alvinrod ( 889928 ) on Tuesday April 14, 2020 @05:15PM (#59947100)
        Another part of it is software licensing costs. Some software has a per-core license fee, so it can be better to buy a CPU with fewer, more performant cores. Cache is certainly a part of that, but the other thing to notice is that the base clock speed for that part is 3.5 GHz, which is considerably higher than the rest of the usual lineup. The 8-core chip with a 3.7 GHz base clock is only $350 less ($2,100 vs. $2,450) than the 24-core part, even though it has less cache.

        It's also important to understand that a lot of these parts are likely created artificially, by disabling cores on otherwise working chiplets. There may be some that are just naturally defective and couldn't be used for some other part, but with AMD moving away from monolithic dies that's a lot less likely, especially if they want to sell these at volume. This means that the price reflects, in part, what those chiplets could otherwise have been sold as. This is especially the case if AMD is selling every one of those better configurations that it can make.
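
        To put a number on the licensing argument, here is a hypothetical sketch; the per-core license fee is an invented figure, not a quote from any vendor, and the CPU prices are the list prices from the summary:

            license_per_core = 7000  # assumed per-core software license fee (USD)

            parts = {
                # name: (cores, list price in USD, from the article summary)
                "EPYC 7F32": (8, 2100),
                "EPYC 7F52": (16, 3100),
                "EPYC 7F72": (24, 2450),
            }

            for name, (cores, price) in parts.items():
                licenses = cores * license_per_core
                print(f"{name}: CPU ${price:,} + licenses ${licenses:,} = ${price + licenses:,}")

        Once licensing dwarfs the CPU price, per-core performance on the lower-core-count parts is what actually determines value.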
        • Some? They are certainly shipping products with defects in the deactivated cores. That is basically the whole reason they do this.

          The chips are made en masse, with a certain number of defects. If the defect rate is 10% per core, then out of 100 sets of 4 cores you get roughly:
          0.9^4 ≈ 66 sets with 4 perfect cores,
          4 x 0.9^3 x 0.1 ≈ 29 sets with 3 perfect cores and 1 defective,
          6 x 0.9^2 x 0.1^2 ≈ 5 sets with 2 perfect cores and 2 defective,
          and statistically none with 3 or 4 defective cores.

          In a case like this (10% defective cores), then by selling
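
          A minimal sketch of that binomial arithmetic, assuming an independent 10% defect rate per core (real fabs track defects per unit of wafer area rather than per core, so this is purely illustrative):

              from math import comb

              defect_rate, cores = 0.10, 4
              for good in range(cores, -1, -1):
                  p = comb(cores, good) * (1 - defect_rate) ** good * defect_rate ** (cores - good)
                  print(f"{good} good cores out of {cores}: ~{100 * p:.0f} per 100 sets")

              # ~66 sets with all 4 cores good, ~29 with 3, ~5 with 2, essentially none worse.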

          • by sixoh1 ( 996418 )

            Even 90% yield (10% defects) would be a catastrophically bad process line for something that is the core of your business. Blah blah "it's all sand", but the amount of energy and raw materials that goes into semiconductor processing, between the raw silicon boule/wafer and a processed wafer ready to be diced into chips, is pretty big unless you have a completely dedicated line and can afford to run processing boats all day to make up the lossage.

            Since nearly every major semiconductor company is "Fabless" [wikipedia.org] like AMD,

            • This article claims a 60% effective rate for the newest stuff is considered normal. Yes, once you get the machines running well the defect rate goes down, which is why I used 90% working.

              https://electronics.stackexcha... [stackexchange.com]

              • by sixoh1 ( 996418 )

                This article claims a 60% effective rate for the newest stuff is considered normal. Yes, once you get the machines running well the defect rate goes down, which is why I used 90% working.

                https://electronics.stackexcha... [stackexchange.com]

                Read farther down: 60% might be plausible for a bleeding-edge process, but that's claimed against Intel, who owns their own fabs and so can in fact run the process line indefinitely to recoup the investment in the line itself and the R&D cost of the device (or of multiple devices if they can share the same process line). Not making enough money? When you own the fab you can add a third shift for not a lot of marginal cost and make up the difference.

                Claiming you can have lots of defective chips in the transaction for a fabless company doesn't meet the smell test, the economics for AMD on TSMC is much worse than for a vertically integrated Intel.

                • "when you own the fab you can add a third shift"

                  Semiconductor fabs already have to run 24 hours a day. You can't stop them without losing the wafers in process. In addition, all leading technologies are supply limited. There's no excess capacity; you'd have to purchase more equipment for another line.

                  Also, fab shifts are 12 hours long, so you can't get a third shift if the day is only 24 hours long.

                • by epine ( 68316 )

                  Claiming you can have lots of defective chips in the transaction for a fabless company doesn't meet the smell test, the economics for AMD on TSMC is much worse than for a vertically integrated Intel.

                  You are sadly deluded if you think vertical integration is a magic sauce of this magnitude.

                  I'm sure both sides work very hard to ramp the yield as fast as humanly possible, but fabrication has a thousand variables, and there's no magic bullet but to tweak and tune as you go.

                  * Almost certainly there's a negotiate

                • by Guspaz ( 556486 )

                  In April 2019, reports came out that Zen 2 yields were 70%, and Intel's biggest monolithic Xeon chips had a 35% yield. This was still a few months away from product launch on Zen 2 in July. You can imagine what yields on a monolithic die would have been.

                  Their yields probably improved by the time the product launched in July, and considering that the mobile Zen 2 chips *do* use monolithic dies, yields must have improved a ton since then.
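
                  For a rough sense of why small chiplets yield better than one big die, here is a toy Poisson yield model; the defect density is a made-up number and the ~74 mm^2 Zen 2 chiplet area is a commonly reported figure, so treat this as an illustration rather than real TSMC data:

                      from math import exp

                      defects_per_mm2 = 0.005             # assumed defect density
                      chiplet_area    = 74                # approx. Zen 2 chiplet (CCD) area in mm^2
                      monolithic_area = 8 * chiplet_area  # hypothetical monolithic 64-core die

                      print(f"per-chiplet yield: {exp(-defects_per_mm2 * chiplet_area):.0%}")     # ~69%
                      print(f"monolithic yield:  {exp(-defects_per_mm2 * monolithic_area):.0%}")  # ~5%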

        • by Kjella ( 173770 )

          Another part of it is software licensing costs. Some software has a per-core license fee, so it can be better to buy a CPU with fewer, more performant cores.

          That's pretty much exactly and only what these chips are for. The question is "We're paying $$$/core, what's the best possible performance per core you can give us? Max cache, max memory bandwidth, max frequency, the power envelope of a server with twice the cores, max everything except for more cores." And these chips are the answer. They're full of cherry picked, pimped out cores in an extremely imbalanced processor and the intended customers are happy to pay a premium for it. Everyone else would rather j

  • Are they really *that* different to the consumer CPUs with the same amount of cores?

    Also, saying 50% lower cost and then revealing it is only compared to Intel, instead of to the previous AMD chips, does not manipulate us into thinking it is better than it is, like you planned. It only makes it look like manipulative lying.

    • by stikves ( 127823 ) on Tuesday April 14, 2020 @04:51PM (#59946998) Homepage

      I have used old servers as workstations, and I can tell you there is a significant difference. You might be only looking at gigahertz numbers and core counts; if so, the desktop processor might be a better choice. However, server processors offer much more.

      They have multiple RAM controllers. It is easy to build a 256GB or even 1TB RAM machine for cheap. That is not possible on desktop (most motherboards stop at 64GB, if not 32GB). And don't forget that they can use ECC RAM, which is helpful when you work with large datasets.

      They have more PCIe lanes. Your motherboard might have four x16 physical connectors, but only one of them (two if you are lucky) will run at the full 16-lane width. The server will run all of them at full bandwidth (although older servers are limited to PCIe 2.0, which defeats the purpose).

      They can use more than one CPU at a time. A dual Xeon server with 32 or even 64 cores can go for $1,000 (used). A similar Threadripper CPU itself can cost more.

      And they also get to be more stable. The CPU is usually built from the better silicone, and can run 24x7 for years on end.

      • by thegarbz ( 1787294 ) on Tuesday April 14, 2020 @06:07PM (#59947252)

        The CPU is usually built from the better silicone

        That just means they bounce when dropped and are waterproof. ;-)

      • by fintux ( 798480 )
        You're probably thinking about Intel with these figures. Ryzen PRO CPUs support ECC (desktop and mobile), and so do the Threadripper (workstation) CPUs. A lot of X570 motherboard + Ryzen 3000 series combinations support up to 128 GB of RAM on the desktop side. Threadripper supports 1TB of RAM. EPYC, however, supports 4TB of RAM per socket.
        • by Guspaz ( 556486 )

          Ryzen PRO (which has no products in the current generation mobile lineup) doesn't have any extra features, the regular Ryzen chips also theoretically support ECC. I say theoretically because in reality board support is almost non-existent (users report conflicting info from manufacturer support, ECC options in bioses appearing and disappearing between revisions, ECC working but being treated as non-ECC), and no board has it on the QVL.

          It's worth noting that real-world memory limits are lower than AMD's spec

      • But THAT much of a price difference? Smells like "Add one zero for all corporate sales" to me.
        They should sell to the military! There you can add three zeroes! :D

    • by Guspaz ( 556486 ) on Tuesday April 14, 2020 @05:00PM (#59947042)

      The 16-core EPYC 7F52 has 256MB of L3 cache and a 240W TDP, and supports 2TB of ECC RAM in an 8-channel configuration with 128 PCIe lanes.

      The 16-core Ryzen 9 3950X has 64MB of L3 cache and a 105W TDP, and supports 128GB of non-ECC* RAM in a 2-channel configuration, with 16 PCIe lanes.

      The EPYC chip has 4x as much cache, 4x as much memory bandwidth, 2.3x as much available power per core, 16x as much RAM, 8x as many PCIe lanes, adds proper ECC support, and a bunch of other features that enterprise users care about but consumers never would.

      So yes, there is quite a large difference from the consumer chips.

      *: The 3950X technically supports ECC RAM, but this requires motherboard support, which is rare, and even those that support it often don't have any ECC memory on the QVL list.
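
      For reference, the ratios above fall straight out of the quoted specs; the numbers below are as given in this comment, not independently verified:

          epyc  = {"cores": 16, "l3_mb": 256, "tdp_w": 240, "mem_channels": 8, "pcie_lanes": 128}
          ryzen = {"cores": 16, "l3_mb": 64,  "tdp_w": 105, "mem_channels": 2, "pcie_lanes": 16}

          print("L3 cache ratio:      ", epyc["l3_mb"] / ryzen["l3_mb"])                # 4.0
          print("Memory channel ratio:", epyc["mem_channels"] / ryzen["mem_channels"])  # 4.0
          print("PCIe lane ratio:     ", epyc["pcie_lanes"] / ryzen["pcie_lanes"])      # 8.0
          print("Power per core ratio:", (epyc["tdp_w"] / epyc["cores"])
                                         / (ryzen["tdp_w"] / ryzen["cores"]))           # ~2.3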

      • And "wastes 2.3 times as much power" also isn't exactly a plus. :)
        Especially in the server room.

        • by Guspaz ( 556486 )

          It's not a waste, though. It means that a lot more power is available to each core when all cores are active. CPUs have power and thermal budgets, and those budgets are much lower than the maximum power consumed by one core multiplied by the number of cores.

          These new chips have the same TDPs as their fully-enabled counterparts, so in practice it just means the same power spread over fewer cores, which is why they clock so much higher than the fully-enabled versions. Of course, it's a game of diminishing returns.

    • What? It's not manipulative if it's aimed at applications where Xeons, not AMD chips, were previously used. Please show me the high-per-core-performance chip from AMD that companies have been using to save on database per-CPU licenses (or other similarly licensed software) that you think this comparison is ignoring.
      • by Guspaz ( 556486 )

        Even the normal EPYC chips have a much higher per-core performance than Xeon, because Intel's Xeon chips are normally several generations behind their own consumer chips.

    • 1) If you are using your consumer PC to read Facebook, then no, there isn't a difference compared to a workstation or server, if that's all you use it for. If you are using it as a server or workstation with multiple workloads running concurrently, your consumer PC would buckle under that much pressure, so yes, there is a difference for professionals and companies.

      2) Comparing the enterprise-grade EPYC to Xeon is an apples-to-apples comparison, as they are meant for the same workloads. Using any

      • Don't insult me by associating me with Facebook! I have proudly managed to get them to ban me and delete all my profile data! :)
        So even if I wanted, I could not make an account.

        Using a fake name doesn't work. That is why I got banned in the first place. :)
        But I was told that spamming fake news works too, if you manage to do it convincingly for Facebook while not making your peer believe you were serious. :)
        Even works in the US.
        Still, you need to block web beacons though.

        • The point, which you clearly missed, is that the workload on a consumer PC is not comparable to that of a server or workstation. That's why these chips are far more expensive than consumer versions. You don't have to buy the enterprise chip, but people do because they have different needs than you. The second point is that comparing a server chip to a server chip is the correct comparison. Otherwise anyone could compare a Core i3 to a Xeon and say that Intel is ripping you off and that you
    • Are they really *that* different to the consumer CPUs with the same amount of cores?

      Yes they are, oh ignorant one. RAM controllers, error checking, inter-IC PCIe lanes, fatter and lower-latency I/O, many times the number of PCIe lanes...

      Man there's really no topic covered on Slashdot where you don't display an incredible amount of ignorance. Why do you even come to this site? It seems you've not an interest in any of the topics ever discussed here.

      • I had to find out, for this, that "ignorant" is not an insult in English like it is in German. :)
        In German, it always implies *willful* ignorance.
        So I assume that isn't what you meant. :)

        No need to call me out though. I knew myself that I did not know. That's why I asked. :) I did not act like I knew.

    • by AHuxley ( 892839 )
      Yes. The products can work on different math without the limits of consumer CPU hardware.

"The whole problem with the world is that fools and fanatics are always so certain of themselves, but wiser people so full of doubts." -- Bertrand Russell

Working...