AMD Launches 3 Second-Gen Epyc Processors With 50% Lower Cost of Ownership (venturebeat.com) 36
Advanced Micro Devices said it is adding three new 2nd-Gen AMD Epyc server processors that can deliver up to 50% lower cost of ownership than rival Intel Xeon processors. From a report: The chips are part of AMD's attempt to grab technology leadership away from Intel, which has long dominated the server chip market. AMD has had an advantage lately with its high-performance Zen 2 cores, designed to handle database, high-performance computing, and hyper-converged infrastructure workloads, Dan McNamara, senior vice president of AMD's server business unit, said in a press briefing. The three new processors are the AMD Epyc 7F32 (with 8 computing cores), Epyc 7F52 (16 cores), and Epyc 7F72 (24 cores). They have up to 500MHz of additional base frequency and large amounts of cache memory. AMD said the design gives Epyc the highest per-core performance of any x86 server central processing unit. The previous chips in the second generation of Epyc processors debuted in the third quarter of 2019. [...] The 7F32 is priced at $2,100, the 7F52 at $3,100, and the 7F72 at $2,450.
Those prices seem wrong (Score:2)
Re:Those prices seem wrong (Score:5, Informative)
Because the 16-core chip is a 64-core chip with 75% of the CPU cores disabled in order to increase the power and cache available to each core, while the 24-core chip is a 48-core chip with 50% of the CPU cores disabled.
As a result, the 16-core chip has a total of 256MB of L3 cache (16MB/core), while the 24-core chip only has 192MB of cache (8MB/core).
Put in a bit more detail, AMD's Zen 2 processors have 4 cores per CCX, and two CCXs per chiplet. The EPYC 7F72 has 6 chiplets with 2 cores per CCX active (6x2x2=24), while the EPYC 7F52 has 8 chiplets with 1 core per CCX active (8x2x1=16).
These chips are really specialized for people who need the maximum possible single-threaded performance in an enterprise-grade processor. They may be shipping chips where there are defects in the disabled cores, so this gives them another way to make a useful product out of them.
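A tiny Python sketch of that arithmetic, if it helps. The chiplet and CCX counts, and the 16MB of L3 per CCX, are taken from the comments above rather than from AMD's spec sheets:

```python
# Rough sketch of the core/cache arithmetic described in the comments above.
# The chiplet/CCX counts and the 16MB of L3 per CCX are as claimed there,
# not pulled from AMD spec sheets.

def epyc_config(chiplets, active_cores_per_ccx, ccx_per_chiplet=2, l3_per_ccx_mb=16):
    """Return (active cores, total L3 in MB, L3 per core in MB)."""
    cores = chiplets * ccx_per_chiplet * active_cores_per_ccx
    l3_total = chiplets * ccx_per_chiplet * l3_per_ccx_mb
    return cores, l3_total, l3_total / cores

# EPYC 7F72: 6 chiplets, 2 of 4 cores active per CCX -> 24 cores, 192MB L3
print(epyc_config(chiplets=6, active_cores_per_ccx=2))   # (24, 192, 8.0)

# EPYC 7F52: 8 chiplets, 1 of 4 cores active per CCX -> 16 cores, 256MB L3
print(epyc_config(chiplets=8, active_cores_per_ccx=1))   # (16, 256, 16.0)
```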
Re:Those prices seem wrong (Score:4, Informative)
It's also important to understand that it's likely that a lot of these parts are created artificially, by disabling cores on otherwise working chiplets. There may be some that are just naturally defective and couldn't be used for some other part, but with AMD moving away from monolithic dies that's a lot less likely, especially if they want to sell these at volume. This means that part of the price reflects what those chiplets could otherwise have been sold as. This is especially the case if AMD can sell every one of the better configurations it can make.
Some defective? No, most. (Score:3)
Some? They are certainly shipping products with defects in the deactivated cores. That is practically the whole reason they do this.
The chips are made en masse, with a certain rate of defects. If the defect rate is 10% per core, then 0.9 x 0.9 x 0.9 x 0.9 ≈ 0.66, so in 100 sets of 4 cores you get roughly:
65 perfect sets with 4 working cores,
29 sets with 3 working cores and 1 defective,
6 sets with 2 working cores and 2 defective,
and none (statistically) with 3 or 4 defective cores.
In a case like this (10% defective cores), then by selling
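If anyone wants to check those numbers, here is a quick binomial sketch using the same assumed 10% per-core defect rate (the real rate is not public):

```python
from math import comb  # Python 3.8+

P_DEFECT = 0.10        # assumed per-core defect rate from the parent comment
CORES_PER_SET = 4

for good in range(CORES_PER_SET, -1, -1):
    bad = CORES_PER_SET - good
    # binomial probability of exactly `good` working cores out of 4
    p = comb(CORES_PER_SET, good) * (1 - P_DEFECT) ** good * P_DEFECT ** bad
    print(f"{good} good / {bad} defective: {p * 100:5.1f} per 100 sets")

# Prints roughly 65.6 / 29.2 / 4.9 / 0.4 / 0.0 -- close to the
# 65 / 29 / 6 split quoted above.
```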
Re: (Score:2)
Even 90% yield (10% defects) would be a catastrophically bad process line for something that is the core of your business. Yes, yes, "it's all sand," but the amount of energy and raw materials that goes into semiconductor processing between the raw silicon boule/wafer and a processed wafer ready to be diced into chips is pretty big, unless you have a completely dedicated line and can afford to run processing boats all day to make up the lossage.
Since nearly every major semiconductor company is "Fabless" [wikipedia.org] like AMD,
Re: (Score:2)
This article claims 60% effective rate for the newest stuff is considered normal. Yes, once you get the machines running well it goes down, which is why I used 90% working.
https://electronics.stackexcha... [stackexchange.com]
Re: (Score:2)
This article claims 60% effective rate for the newest stuff is considered normal. Yes, once you get the machines running well it goes down, which is why I used 90% working.
https://electronics.stackexcha... [stackexchange.com]
Read further down: 60% might be plausible for a bleeding-edge process, but that claim is made against Intel, who owns their own fabs and so can in fact run the process line indefinitely to recoup the investment in the line itself and the R&D cost of the device (or of multiple different devices, if they can share the same process line). Not making enough money? When you own the fab you can add a third shift for not a lot of marginal cost and make up the difference.
Claiming you can have lots of defective chips in the t
Re: (Score:2)
"when you own the fab you can add a third shift"
Semiconductor fabs already have to run 24 hours a day. You can't stop them without losing the wafers in process. In addition, all leading technologies are supply limited. There's no excess capacity; you'd have to purchase more equipment for another line.
Also, fab shifts are 12 hours long, so you can't get a third shift if the day is only 24 hours long.
Re: (Score:2)
You are sadly deluded if you think vertical integration is a magic sauce of this magnitude.
I'm sure both sides work very hard to ramp the yield as fast as humanly possible, but fabrication has a thousand variables, and there's no magic bullet but to tweak and tune as you go.
* Almost certainly there's a negotiate
Re: (Score:2)
In April 2019, reports came out that Zen 2 chiplet yields were around 70%, while Intel's biggest monolithic Xeon chips had a 35% yield. That was still a few months before the Zen 2 product launch in July. You can imagine what AMD's yields would have been on a monolithic 64-core die.
Their yields probably improved by the time the product launched in July, and considering that the mobile Zen 2 chips *do* use monolithic dies, yields must have improved a ton since then.
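To see why small chiplets change the yield picture so much, here is an illustrative sketch that reuses the crude "independent per-core defect" assumption from upthread. Real foundries model defects per unit area, and the rates below are made up:

```python
# Illustrative only: assumes defects hit cores independently at a fixed
# per-core rate, a big simplification of real defect-density models,
# and the rates below are invented.

def perfect_die_yield(cores, per_core_defect_rate):
    """Probability that every core on a die is defect-free."""
    return (1 - per_core_defect_rate) ** cores

for rate in (0.01, 0.02, 0.05):
    chiplet = perfect_die_yield(8, rate)     # one 8-core Zen 2 chiplet
    monolith = perfect_die_yield(64, rate)   # hypothetical 64-core monolithic die
    print(f"defect rate {rate:.0%}: 8-core chiplet {chiplet:.0%} perfect, "
          f"64-core monolith {monolith:.0%} perfect")

# Even at a 2% per-core defect rate an 8-core chiplet is ~85% perfect while a
# 64-core monolithic die is only ~27% perfect -- which is why small chiplets
# (plus core harvesting) make such a difference to usable yield.
```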
Re: (Score:2)
Another part of it is software licensing costs. Some enterprise software is licensed per core, so it can be better to buy a CPU with fewer, more performant cores.
That's pretty much exactly and only what these chips are for. The question is "We're paying $$$/core, what's the best possible performance per core you can give us? Max cache, max memory bandwidth, max frequency, the power envelope of a server with twice the cores, max everything except for more cores." And these chips are the answer. They're full of cherry picked, pimped out cores in an extremely imbalanced processor and the intended customers are happy to pay a premium for it. Everyone else would rather j
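A back-of-the-envelope illustration of that trade-off. The $3,000/core license fee is invented for the example (real per-core fees for databases and hypervisors vary a lot), the 7F52 price is the list price from the summary, and the cheaper 32-core part is hypothetical:

```python
# Hypothetical numbers: an invented $3,000/core software license, the 7F52
# list price from the summary, and an invented cheaper 32-core alternative.
LICENSE_PER_CORE = 3_000

def total_cost(cpu_price, cores):
    """Hardware price plus per-core software licensing."""
    return cpu_price + LICENSE_PER_CORE * cores

print(total_cost(3_100, 16))   # 16-core EPYC 7F52: 51,100
print(total_cost(2_000, 32))   # hypothetical 32-core part: 98,000

# The license bill dwarfs the CPU price, so paying a premium for the fastest
# possible 16 cores is cheap next to licensing 16 more, slower cores.
```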
Still pretty expensive to me (Score:1)
Are they really *that* different from the consumer CPUs with the same number of cores?
Also, saying "50% lower cost" and then revealing it is only compared to Intel, instead of to the previous AMD chips, does not manipulate us into thinking it looks better than it is, like you planned. It only makes it look like manipulative lying.
Re:Still pretty expensive to me (Score:5, Informative)
I have used old servers as workstations, and I can tell you there is a significant difference. You might be looking only at gigahertz numbers and core counts. If so, the desktop processor might be a better choice. However, server processors offer much more.
They have multiple RAM controllers. It is easy to build a 256GB or even multi-TB RAM machine for cheap. That is not possible on a desktop (most motherboards stop at 64GB, if not 32GB). And don't forget that they can use ECC RAM, which is helpful when you work with large datasets.
They have more PCIe lanes. Your desktop motherboard might have four x16 physical connectors, but only one, or if you are lucky two, of them will run with a full 16-lane connection. The server will run all of them at full bandwidth (although older servers are limited to PCIe 2.0, which defeats the purpose).
They can use more than one CPU at a time. A dual Xeon server with 32 or even 64 cores can go for $1,000 (used). A similar Threadripper CPU itself can cost more.
And they also get to be more stable. The CPU is usually built from the better silicone, and can run 24x7 for years on end.
Re:Still pretty expensive to me (Score:5, Funny)
The CPU is usually built from the better silicone
That just means they bounce when dropped and are waterproof. ;-)
Re: (Score:2)
Re: (Score:3)
Ryzen PRO (which has no products in the current-generation mobile lineup) doesn't have any extra features here; the regular Ryzen chips also theoretically support ECC. I say theoretically because in reality board support is almost non-existent (users report conflicting info from manufacturer support, ECC options in BIOSes appearing and disappearing between revisions, and ECC working but being treated as non-ECC), and no board has it on the QVL.
It's worth noting that real-world memory limits are lower than AMD's spec
Re: Still pretty expensive to me (Score:2)
But THAT much of a price difference? Smells like "Add one zero for all corporate sales" to me. :D
They should sell to the military! There you can add three zeroes!
Re:Still pretty expensive to me (Score:5, Informative)
The 16-core EPYC 7F52 has 256MB of L3 cache and a 240W TDP, and supports 2TB of ECC RAM in an 8-channel configuration with 128 PCIe lanes.
The 16-core Ryzen 9 3950X has 64MB of L3 cache and a 105W TDP, and supports 128GB of non-ECC* RAM in a 2-channel configuration, with 16 PCIe lanes.
The EPYC chip has 4x as much cache, 4x as much memory bandwidth, 2.3x as much available power per core, 16x as much RAM, 8x as many PCIe lanes, adds proper ECC support, and a bunch of other features that enterprise users care about but consumers never would.
So yes, there is quite a large difference from the consumer chips.
*: The 3950X technically supports ECC RAM, but this requires motherboard support, which is rare, and even those that support it often don't have any ECC memory on the QVL list.
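For anyone who wants to double-check, the ratios fall straight out of the figures quoted in this comment (these are the comment's numbers, not independently verified against AMD's spec sheets):

```python
# Specs as quoted in the comment above (not verified against AMD's own sheets).
epyc_7f52   = dict(l3_mb=256, tdp_w=240, cores=16, ram_gb=2048, mem_channels=8, pcie_lanes=128)
ryzen_3950x = dict(l3_mb=64,  tdp_w=105, cores=16, ram_gb=128,  mem_channels=2, pcie_lanes=16)

def ratio(key):
    return epyc_7f52[key] / ryzen_3950x[key]

print(f"L3 cache:        {ratio('l3_mb'):.1f}x")          # 4.0x
print(f"memory channels: {ratio('mem_channels'):.1f}x")   # 4.0x
print(f"max RAM:         {ratio('ram_gb'):.1f}x")         # 16.0x
print(f"PCIe lanes:      {ratio('pcie_lanes'):.1f}x")     # 8.0x

# Power available per core (both chips have 16 cores):
print(f"TDP per core:    {(240 / 16) / (105 / 16):.1f}x") # ~2.3x
```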
Re: Still pretty expensive to me (Score:2)
And "wastes 2.3 times as much power" also isn't exactly a plus. :)
Especially in the server room.
Re: (Score:2)
It's not a waste, though. It means that a lot more power is available to each core when all cores are active. CPUs have power and thermal budgets, and those budgets are much lower than the maximum power consumed by one core multiplied by the number of cores.
These new chips have the same TDPs as their fully-enabled counterparts, so in practice it just means the same power spread over fewer cores, which is why they clock so much higher than the fully-enabled versions. Of course, it's a game of diminishing retur
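Rough numbers, assuming (as this comment says) the 240W envelope quoted upthread for the 7F52 also applies to its fully-enabled siblings:

```python
TDP_W = 240  # 7F52 TDP quoted upthread; assumed to hold for its bigger siblings

for active_cores in (64, 48, 24, 16):
    print(f"{active_cores:2d} active cores -> ~{TDP_W / active_cores:4.1f} W per core")

# 64 cores -> ~3.8 W/core, 48 -> 5.0, 24 -> 10.0, 16 -> 15.0. The same socket
# power spread over far fewer cores is what lets the F-series parts hold much
# higher clocks, until frequency/voltage scaling hits diminishing returns.
```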
Re: (Score:2)
Re: (Score:2)
Even the normal EPYC chips have a much higher per-core performance than Xeon, because Intel's Xeon chips are normally several generations behind their own consumer chips.
Re: (Score:2)
Re: (Score:2)
1) If you are using your consumer PC to read Facebook, then no, there isn't a difference compared to a workstation or server, if that's all you use it for. If you are using it as a server or a workstation with multiple workloads happening concurrently, your consumer PC would buckle under that much pressure, so yes, there is a difference for professionals and companies.
2) Comparing the enterprise-grade EPYC to Xeon is an apples-to-apples comparison, as they are meant for the same workloads. Using any
Re: Still pretty expensive to me (Score:2)
Don't insult me by associating me with Facebook! I have proudly managed to get them to ban me and delete all my profile data! :)
So even if I wanted, I could not make an account.
Using a fake name doesn't work. That is why I got banned in the first place. :) :)
But I was told that spamming fake news works too, if you manage to do it convincingly for Facebook while not making your peer believe you were serious.
Even works in the US.
Still, you need to block web beacons though.
Re: (Score:2)
Re: (Score:2)
Are they really *that* different from the consumer CPUs with the same number of cores?
Yes they are, oh ignorant one. RAM controllers, error checking, inter-IC PCIe lanes, fatter and lower-latency I/O, many times the number of PCIe lanes...
Man, there's really no topic covered on Slashdot where you don't display an incredible amount of ignorance. Why do you even come to this site? It seems you have no interest in any of the topics ever discussed here.
Re: Still pretty expensive to me (Score:2)
I had to look up, because of this, that "ignorant" is not an insult in English the way it is in German. :) :)
In German, it always implies *willful* ignorance.
So I assume that isn't what you meant.
No need to call me out, though. I knew myself that I did not know; that's why I asked. :) I did not act like I knew.
Re: (Score:1)
Re: (Score:2)
Are you just bad at writing software, or poor at optimising it for a CPU? It can only be one of the two if you think this runs no faster than an Opteron.
If your problem is embarrassingly parallel, get it off the CPU; if it's not, then fire your programming team.
Re: (Score:2)
On the contrary: getting the same code to run as slowly on a 32- or 64-core Zen 2 chip as on a 6-core Lisbon chip (the biggest Opteron available 10 years ago) would be incredibly impressive. You would need to be an extraordinarily good developer to accomplish that. I'm not even sure it's possible. Maybe if you leveraged 3DNow!, which would have been supported on the Opteron but would need to be emulated on the EPYC? That instruction set was dropped shortly after those 10-year-old Opterons came out.
More likely