AMD Launches Rome Second Generation EPYC CPUs (anandtech.com) 142
"Today, AMD launched its Rome Second Generation EPYC CPUs, the AMD EPYC 7001 & 7002 series," writes Slashdot reader SolarAxix. "Was the hype real? According to Anandtech's review of the top-of-the-line EPYC 7742 with 64 cores and 128 threads (for a total of 128 cores and 256 threads), it seems to be the case." From the report: ...So has AMD done the unthinkable? Beaten Intel by such a large margin that there is no contest? For now, based on our preliminary testing, that is the case. The launch of AMD's second generation EPYC processors is nothing short of historic, beating the competition by a large margin in almost every metric: performance, performance per watt and performance per dollar. "
Competition == Good. (Score:1, Insightful)
Hopefully both sides will compete with each other, ignore RISC-V, and then be overtaken when x86 finally dies the death it should have died 10 years ago.
Re: (Score:2)
This. The Intel 4004 (and its derivatives) have lasted long enough by now.
Re: (Score:1)
You mean the CTC Datapoint 2200. The 4004 and 8008 aren't all that alike. It's the Datapoint 2200 that led to the 8008, which led to the 8080, which led to the 8086.
The instruction set encoding is almost irrelevant when you can manufacture chips with over 1 billion transistors (Core i3/i5/i7). The execution units, pipelining, branch prediction, bus architectures, and cache size are all way bigger pieces of the performance puzzle.
What really needs to die is Von Neumann architecture and all the software that
Motorola 68000 isn't that good (Score:5, Insightful)
Don't get me wrong: it's a very, very clever instruction set, a breeze to write assembler on, and it's neat that every instruction fits into 2 bytes while still managing 16 registers via the data/address register split. I can't say enough nice things about the m68k, but sadly scaling up isn't one of them.
Re: (Score:2)
*assembly
its still risc (Score:2)
actually (Score:3)
As for x86, instructions are decoded into a wide, fixed-size format that needs little or no further decoding once it hits the scheduler/execution part of the pipeline, easily 4 bytes just for the opc
Re: (Score:2)
High-frequency traders today are more concerned with communication latency. It seems some are using HF radio links instead of the internet to overcome the latency introduced by intermediate routers across the Atlantic.
And HFT traders should be taken outside and shot (Score:1)
HFT should not even exist! It is a cancer in our economy!
Re: (Score:2)
Re: (Score:2)
You're wrong; only idiots care about single-thread performance above everything else, as even games these days are increasingly multithreaded. And if you happen to run something particularly boneheadedly single-threaded, there's _always_ a plethora of other crap running in the background which more cores will help with, speeding up your main task by spreading the load.
You're missing the point, which is that you want the maximum-performance single-thread core, and then multiply that core by whatever optimal factor to get your CPUs. That's what the OP of this thread was driving at.
Re: (Score:2)
Not really. At some point, you'll have to double the size of a core to squeeze out 10% higher performance, or you can have a second core and get nearly double the performance in the real world.
Re: (Score:1)
It's this year, but no one told node.js and vert.x developers that. Node is single core only. You have to run multiple copies on the same box and proxy to get any parallel performance.
The single threadedness of Node is all thanks to Javascript being single threaded. Blame Javascript for that, not Node. Node was designed as an async programming environment so it scales really well for async events and async I/O. Because the underlying Javascript engine is single threaded, though, it is not suitable for CPU-intensive loads.
Anyone trying to use Node for compute workloads is about as smart as someone trying to use a darning needle to drive a nail into concrete. They deserve all the pain and
Re: (Score:2)
It's approximately a tie now, and EPYC slaughters on every other metric. No AVX-512; that's roughly the end of the list. AVX-512 will be in Zen 3. However, the raw power of 64 fast cores, plus memory and I/O bandwidth out the yinyang, is worth way, way more than AVX-512.
Re: (Score:2)
A blender scene?
Re: (Score:3)
Google's stressapptest will let you max out your memory; it's one of my favorite stress programs. It should even be in the Debian repos, and it's also on GitHub.
Re: (Score:2)
Yes, it's a pretty awesome program, and I believe it even compiles on Windows. It allows you to test a lot of different things quickly and easily. Great for stability testing. The Phoronix Test Suite is pretty good too, but it's kind of a pain in the ass to use, in my opinion.
Well Done (Score:1)
Congratulations to AMD. The hard work is paying off.
Just remember the original Opteron and Athlon 64. Don't slack off and party for years like a bunch of amateurs this time. Keep working hard if you want to keep the lead.
Re: (Score:2)
Re: (Score:2)
I am still using such a processor and it's still plenty for me. And I put a big fancy cooler on it. I even still have my Phenom II X6 as a backup. That thing was a beast for the money, too.
Re: (Score:3)
The 8350 is a workhorse. I just retired my 8350 from my home server after 6 years of constant use. I didn't retire it because of any form of technical issue; it just couldn't keep up with the increased workload I put on it. It isn't going into complete retirement, though: I'm going to repurpose it as a sound-mixing station for my son-in-law.
Re: (Score:2)
I'm running even older hardware: a Thuban 1090T (6 cores, 3.2 GHz, boost to 3.6, but it runs fine 24/7/365 at 3.6). I've added RAM and an SSD and have changed video cards several times in eight years, but this build definitely deserves the "workhorse" moniker. When I originally built it, the intent was to provide enough processor muscle for Cubase, but it turns out Cubase really likes clock speed over threads (I think it can only run two threads at a time) and although I get adequate performance, it wasn't as
Re:Well Done (Score:4)
Thuban. 1090T
I remember when that came out. That thing is a beast too. I wanted one, but finances being what they were at the time, it just wasn't in the cards.
I think it's safe to say that over the last 10 years, if you've kept up with technology, we are often disappointed by our expectations, especially if you came up building systems in the '90s, when a new generation of CPU usually meant at least double the performance over the older systems.
Over the years we will probably continue to eke out a few more MHz and toss more cores at the problem. But nothing will beat the thrill I had going from a 33 MHz 486 to a 100 MHz Pentium.
Well done AMD (Score:4, Interesting)
I know you won't be able to keep this up, but you have my vote (via my pocketbook). I bought a Ryzen 5 2400G early this year and am very pleased with it. Cheaper than paying the Intel tax.
Do hope that AMD is keeping track of all the CPU security issues, and fixing them when possible. (Development pipelines may preclude fixing the latest, but at least get rid of older ones.)
not as illiterate as you (Score:2)
A homonym != illiteracy [wikipedia.org], but they can be quite funny.
Through the storm I see a weasel - the hilarious homonym resonating on stage for 2500 years.
Re:Well done AMD (Score:4, Interesting)
Nope, I got a 2990WX and it sucks bigtime compared to much smaller Intel CPUs in most loads I tried. It works well for C compiles but most C projects build fast enough that speeding them up is not cost-effective. For anything else, that big box chokes under memory and cross-chiplet bandwidth.
Re: (Score:3)
NUMA can be both an advantage and a disadvantage. If the application and OS are designed with NUMA in mind, they may benefit from it by spreading data traffic across different memory blocks/units, so that two different threads avoid competing for the same memory bus.
But for random generic applications, the NUMA architecture can be a problem.
Re:Well done AMD (Score:4, Interesting)
I don't know how much AMD said but Anandtech [anandtech.com] was pretty honest:
In order to take full advantage of this setup, the workload has to be memory-light. In workloads such as particle movement, ray tracing, scene rendering, and decompression, having all 32 cores shine means that we set new records in these benchmarks.
In true Janus style, for other workloads that historically scale with cores, such as physics, transcoding, and compression, the bi-modal core design caused significant performance regressions. Ultimately, there seems to be almost no middle ground here: either the workload scales well, or it sits towards the back of our high-end testing pack.
It is a very special chip with a very special memory configuration. And it was really just that Threadripper chip; all the server chips had local memory. So yeah, that's a lemon if you didn't know what you were buying it for. They kick ass in general, though, and that chip kicks ass for a few things, but only a few.
Re:Well done AMD (Score:4, Interesting)
It's interesting that they used 3200 14-14-14 memory, though. I think it was Linus Tech Tips that found that going to 3600 and keeping the Infinity Fabric clock in lock-step with it helped.
I wonder how much difference it would make here.
Re: Well done AMD (Score:1)
I drink Pepsi, not Coke!
H'yuk!
Re: (Score:2, Interesting)
Honestly, I know several streamers and other folks who want to use lots of add-in cards for external video capture, encoding, etc., who are highly interested in the EPYC series because of the ludicrous number of PCIe lanes it has.
When you're trying to juggle a 16-lane GPU, 4 lanes for capturing 1080p60 console footage at full 4:4:4 instead of 4:2:0, and another 4 lanes for an NVMe SSD alone, you've already blown past what a LOT of modern CPUs on the market can offer without spending thousands.
At the chea
Re: (Score:3)
Being able to hang a respectable amount of N
Re: (Score:2)
The ARK page for the Xeon E-2186M says it supports up to 16 PCIe lanes, and the 7730 pairs it with a CM246 chipset which supports up to 24, so we'll assume the 7730 has 40 lanes to work with initially.
Dell's spec sheet explicitly states that the M.2 2280 slots are PCIe x4 rather than x2 (which is an option for NVMe, but obviously a potential bottleneck), so that's 16 PCI
Re: (Score:2)
Indeed. The Intel X99 platform was great, supporting 28 or 40 PCI Express lanes depending on the CPU. Too bad I am stuck with the 6800K I bought back in 2016 because of Intel's refusal to reduce prices on the 6900K.
Re: (Score:2)
I've been very happy with the Ryzen 5 2600X I bought late last year. Good performance and significantly cheaper than comparable Intel offerings. I'm slowly being converted over to AMD, at least for CPUs. Still on the fence with their GPUs though.
Re: Well done AMD (Score:2)
Re: (Score:3)
... the lame choices Intel has currently fostered on us.
I think you meant foisted:
foist /foist/
verb
past tense: foisted; past participle: foisted
impose an unwelcome or unnecessary person or thing on.
"don't let anyone foist inferior goods on you"
No Cinebench or Blender Results? (Score:2)
Re:No Cinebench or Blender Results? (Score:5, Informative)
Re: (Score:1)
Another shout out for Phoronix and the great service they are giving to the Linux community.
Phoronix is a great resource for anyone interested in real performance metrics; you can even run the exact same tests yourself and compare your own machine to their results. They run BSD comparisons every now and then, which are interesting too. Often they are the first to show the real impact of security vulnerability patches. The quality of journalism on the regular "tech news" sites has dropped, it was of
AMD and TSMC (Score:1)
Combined, they are beating the crap out of Intel. Last time AMD was ahead, Intel had the fab advantage. Now AMD is again ahead (or at least on par) in architecture, but Intel is far behind in fab tech. TSMC is already planning to go to 5nm; Intel can't reach good 10nm yields and won't get to 7nm for another 1-2 years. They need to spin that off and go fabless ASAP.
Re: AMD and TSMC (Score:2)
Re: (Score:2)
Uh, no. Intel no longer has the advantage over TSMC; their actually-functioning processes are basically equivalent at this time. Without a process technology advantage, Intel has nothing, since we have already seen that the remainder of their competitive advantage came from compromising security. They clearly have inferior staff working for them, and are clearly mismanaging them to boot. Frankly, they should be sued into a smoking hole in the ground for fraudulently claiming that the processor had meaningful s
Re: (Score:1)
AMD is definitely the smarter purchasing choice for the vast majority of people at this point, even for any gamer who isn't a hard-core competitive player. The reason I say that is that if you compare even a first-gen Ryzen against its price-competitive Intel offering in gaming, the Ryzen is now notably better due to the lack of cores on the Intel side. Games are utilizing threads better and better.
Unless you are gaming at 720p with a 2080 Ti, buying a Ryzen makes way more sense. The latest ones use much les
Re: (Score:3)
None of this means Intel is doomed. Intel has fucked up many times in the past and come back stronger from even their earliest days.
Yeah, but back in those earliest days, they didn't have competition like this. The world has looked very different since the K5.
On the process node side, Intel used to be seriously ahead in process technology, but their transition to the next node has been a total disaster.
Well, that's my point. Intel's technical advantage boiled down to two things, their process advantage and their compromising security for performance. The world is no longer amused at the latter, and they no longer have the former. Plus, there's no indication that they're going to ever have superior process technology again! There is not even a whisper of a rumor of that happening.
Re: (Score:2)
If this is true, Intel's ability to execute is dismal. According to laptop manufacturers like Apple & Lenovo, the lack of timely Intel product releases and physical product due to yield issues is hurting their bottom lines. Intel is proving to be an unreliable partner, at least in the short-term.
Re: (Score:2)
This is a $7000 server CPU, not something for home gamers.
Re: (Score:2)
It is large, but keep in mind it's a multi-chip package; there are actually 9 "chiplets" plus a bunch of passive elements under the heatsink.
supermicro needs to put the aplus link on the main (Score:2)
Supermicro needs to put the Aplus link on the main page and stop hiding it. AMD IS KING NOW!!!!
Re: (Score:2)
Re: (Score:3)
There is no "retooling" at this level because no product of theirs is manufactured long enough to warrant "tools" to be made, and even if they got it in their head to do that, they would then be years late to the market and would immediately go bankrupt at the expense combined with zero money ever coming in for that expense.
But you know what was once definitely up: Intel colluding with
Re: supermicro needs to put the aplus link on the (Score:2)
Re: (Score:2)
You think components are soldered to high-end motherboards by hand??
No, but there's nothing that requires retooling. The parts are positioned by high-precision pick-and-place machines, and then run through a convection reflow oven to solder the parts. The only thing that has to change when the factory produces a different board is the program that tells the pick-and-place machine which parts to put where.
Creating new mobo designs takes some time, but not much.
Comment removed (Score:4, Insightful)
Re: (Score:2)
If they already have the design, then they can switch a production line to make a different board within an hour, and have thousands of them available by tomorrow morning.
So going by this, they could make 100k boards in about 5 days.
Then why do you state:
As a random citizen, I can get 100K boards manufactured in China [pcbway.com] for competitive prices and 6 weeks lead time.
Are they working 6x slower just for you?
And how does board complexity play into this?
Are they really capable of pushing out 10+ layer densely populated motherboards every 3.6 seconds with just one line?
If so, how do you know? I mean, what is the source of this information?
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
But will ARM spoil the party? (Score:2, Interesting)
I think AMD has some mojo going for them, but Intel is a sick company, dealing with all the side-channel issues coming out more frequently now. How long before hyperthreading must be disabled to fully protect a PC? But the quiet question mark is ARM: as AMD stretches out its cores onto its chips, ARM is quietly making more efficient chips that might just dethrone both AMD and Intel in many markets. ARM is not there yet, but Microsoft, the server industry, even Apple are lookin
Re:But will ARM spoil the party? (Score:4, Informative)
That story has been developing, and recently, at least in the EPYC space, there has been a retreat from it in aggregate.
AMD, Qualcomm, Cavium, Marvell, and others were all doing 'transform the datacenter' ARM processors (either in production or in development). AMD has discontinued its effort, Qualcomm's offering was cancelled before it could ever ship, and Marvell's was stopped when they bought Cavium. Cavium I *think* is still in the game, but they haven't shown anything since ThunderX2, and that did not have the market success many were anticipating. Ampere is currently pushing its acquired Skylark design, to limited success.
The general problem is that ARM's traditional benefits have not really been relevant at datacenter scale. ARM excelled at offering lower-performance chips with lower power envelopes than Intel bothered to offer, and from that heritage went higher-performance, with both a compatibility advantage in the mobile space and a market driving them to develop much better sleep-state behavior in their architecture. In the datacenter, sleep states don't matter, the compatibility advantage goes to x86, and it's exclusively about high-performance processors and consolidation as the answer to 'too much processor'.

The remaining advantage is 'cheap processor', but AMD can apply pricing pressure to a significant extent, and otherwise being relegated to a low-cost processor vendor in that space means terrible margins, which is going to be unappealing to any manufacturer. The hyperscale datacenters certainly have an interest in the business side of the ARM ecosystem (the ability to have a lot of hungry chip vendors where at least one is willing to take a loss to make a 'big sale', basically what they do to server vendors today), but technically speaking, there hasn't been any sign of an *advantage* to that architecture as of yet.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
These two things only sound similar, and don't just assume that either means ARM might "do something" to be competitive in server racks. If it ever happens, there is only a chance that it will actually be ARM's doing.
To understand this you need to understand ARM's business model. ARM not only does not manu
I am disappointed in /. (Score:3)
Re: (Score:1)
You seen the size of that thing? It's already a beowulf cluster.
Re:I am disappointed in /. (Score:4, Funny)
Re: (Score:2)
Oh a Slashdot meme within a Slashdot meme. My kingdom for some modpoints :D
Re: I am disappointed in /. (Score:1)
Yes, but can it run Linux?
Re: (Score:2)
That old Beowulf meme does not merely circulate one solitary drain. No, it circles the analytic continuation of reflected absolute(gamma(z)) in the negative half plane.
File:Gamma abs 3D.png [wikipedia.org]
50 to 100% higher performance at 40% less (Score:2)
Which is a pretty bold claim for them. It has my attention at least.
Threatening cloud? (Score:2)
Re: (Score:2)