Could AMD's Upcoming EPYC 'Rome' Server Processors Feature Up To 162 PCIe Lanes? (tomshardware.com)
jwhyche (Slashdot reader #6,192) tipped us off to some interesting speculation about AMD's upcoming Zen 2-based EPYC Rome server processors. "The new Epyc processor would be Gen 4 PCIe where Intel is still using Gen 3. Gen 4 PCIe features twice the bandwidth of the older Gen 3 specification."
And now Tom's Hardware reports: While AMD has said that a single EPYC Rome processor could deliver up to 128 PCIe lanes, the company hasn't stated how many lanes two processors could deliver in a dual-socket server. According to ServeTheHome.com, there's a distinct possibility EPYC could feature up to 162 PCIe 4.0 lanes in a dual-socket configuration, which is 82 more lanes than Intel's dual-socket Cascade Lake Xeon servers. That even beats Intel's latest 56-core 112-thread Platinum 9200-series processors, which expose 80 PCIe lanes per dual-socket server.
Patrick Kennedy at ServeTheHome, a publication focused on high-performance computing, and RetiredEngineer on Twitter have both concluded that two Rome CPUs could support 160 PCIe 4.0 lanes. Kennedy even expects there will be an additional PCIe lane per CPU (meaning 129 in a single socket), bringing the total number of lanes in a dual-socket server up to 162, but with the caveat that this additional lane per socket could only be used for the baseboard management controller (or BMC), a vital component of server motherboards... If @RetiredEngineer and ServeTheHome did their math correctly, then Intel has even more serious competition than AMD has let on.
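To put the summary's numbers side by side, here is a minimal sketch in Python. It only restates figures quoted above (160 usable lanes plus 2 BMC-only lanes for Rome, 80 lanes for the Intel dual-socket parts); the per-lane throughput constants are the commonly cited approximate PCIe 3.0/4.0 payload rates, so treat the aggregate figures as ballpark estimates rather than anything AMD or Intel has published.

    # Approximate usable bandwidth per lane, per direction (GB/s).
    GEN3_PER_LANE = 0.985   # PCIe 3.0: 8 GT/s with 128b/130b encoding
    GEN4_PER_LANE = 1.969   # PCIe 4.0: 16 GT/s with 128b/130b encoding

    platforms = {
        # name: (general-purpose lanes per dual-socket server, GB/s per lane)
        "EPYC Rome (speculated)": (160, GEN4_PER_LANE),  # +2 BMC-only lanes not counted
        "Cascade Lake Xeon 2S":   (80,  GEN3_PER_LANE),
        "Xeon Platinum 9200 2S":  (80,  GEN3_PER_LANE),
    }

    for name, (lanes, per_lane) in platforms.items():
        print(f"{name:24s} {lanes:4d} lanes  ~{lanes * per_lane:6.0f} GB/s aggregate")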
No. (Score:5, Funny)
It could not, would not, on a boat.
It will not, will not, compute your float.
It will not have them in the rain.
It will not have them on a train.
Not in the dark! Not in a tree!
Not in a car! Listen to AMD!
It will not run your Firefox.
It will not run programs on your box.
It will not be inside your house.
It will not 'shop you with Mickey Mouse.
It does not have them here or there.
It does not have them anywhere!
163 lanes (Score:2)
I don't know how one gets to 162. 128+32+2? 2*9*9? Since it's not a power of 2, a simple sum of powers of 2, or a simple multiple of a power of 2, it seems unusual. Even if they were trucking around parity bits it would have been something like 2*8*9, not 2*9*9. Maybe they are planning for having processor cores in multiples of 3?
Anyhow, Intel loves meaningless spec wars like megahertz counts, so I'm sure we'll hear about the one with 163. Or more likely they will just double up two of thei
Re: (Score:2)
If the two processors are both accessing the same region of memory at the same time, they need to be able to pass the changes back and forth to ensure memory state is consistent. I believe they do it by directly connecting a bunch of PCIe lanes between the processors.
You've got two issues at play here. When you get to the processors with tons of cores, there are often several chips inside the package. A 32-core processor might have 4 chips inside it, each with 8 cores. Those chips need to talk to each other,
Re: (Score:2)
10x x16 lanes + 2x x1 lanes used exclusively for the management interface (one for each chip) so that the x16 interfaces don't need to be robbed of a lane. That may not seem relevant until you realise there are no PCIe x15, PCIe x7 or PCIe x3 devices out there.
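For what it's worth, that layout also squares with the totals in the story; here is a quick sanity check in Python. The 129-lanes-per-CPU figure comes from the speculation above, and the assumption that 48 lanes per socket get repurposed for the inter-socket link is just one allocation that makes the arithmetic close; nothing here is confirmed by AMD.

    # Speculated dual-socket Rome lane budget; every figure is an assumption.
    lanes_per_cpu = 129          # 128 general-purpose lanes + 1 BMC-only lane
    sockets = 2
    inter_socket_per_cpu = 48    # assumed lanes repurposed for the socket-to-socket link

    total = sockets * lanes_per_cpu                      # 258
    exposed = total - sockets * inter_socket_per_cpu     # 162
    layout = [16] * 10 + [1] * 2                         # 10 x16 groups + 2 x1 BMC lanes
    assert exposed == sum(layout) == 162
    print(f"{exposed} lanes exposed: 10 x16 groups + 2 x1 BMC lanes")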
what about quad socket?? with the faster links? (Score:2)
what about quad socket?? with the faster links?
Re: (Score:3)
The question of course is why this would be different *now* than before. From the second we had our first dual core processor, I've been hearing people proclaim the death of multi-socket designs with every core count bump. Yet here we are...
The blade vs. 1U argument is a bit moot. Dense form factors can already double up the density. I've seen proposals for 'dual system' boards to gain some marginal economy of scale for the physical components, but not the complexity of having the sockets interconnected.
Of
Re: (Score:2)
Note that this was said the second we had dual-core processors, and is generally repeated every so often.
We now can have a single-socket system that is a 32-way server, and yet there is still push for 8 socket and beyond. Parts of the industry just stubbornly don't want to let go.
Some of it is adherence to tradition, but at least some of it is due to memory-capacity-intensive workflows that will eat every byte of memory you can feed them.
A strong argument for the relatively few applications that do this
Re: (Score:2)
The question I have is who is pushing for quad, hex, or octo socket servers and what the use case for them is. Obviously such systems are vastly more costly to build. Simple things like power supplies and cooling are far more complicated. It is probably way cheaper to use multiple servers at that point.
I can see only one real use case. For supercomputers or render farms where high CPU density is a necessity. Or for more limited scenarios where you physically can only have one server rack and you need
Re: (Score:2)
I would assume Supermicro and the others. It costs more to manufacture, therefore a higher markup. Why would you let that slip and not try to push it at every chance?
I don't know, mr viral marketing intern (Score:1)
Re: (Score:2)
You'll find that "Could x be the next y" is the type of shit that editors come up with when they rewrite user submitted stories for more clickbait.
As for how specific the numbers are: 5 x16 groups per chip, plus speculation from articles on the server control hub that each chip will get a dedicated lane rather than having one of the groups of 16 split up. Even as speculation, that makes perfect sense.
Computational Numerology (Score:3)
from the article
See, that's why Intel is not using PCIe 4.0. The same theories that regard 8-way servers as "auspicious" also fear the number four, and with it the fourth iteration of PCIe.
Re: (Score:3)
Are they like Star Trek films, where alternate ones are good and then cursed?
Desktop CPU Lanes (Score:5, Interesting)
I've been waiting for 3 years now for a relatively affordable desktop CPU with enough PCI Express Lanes.
My current CPU is an Intel i7 6800K, using the X99 motherboard chipset, and it has 28 PCI Express lanes. The 6850K has 40 PCI Express lanes but otherwise brings no performance increase. The next step in the upgrade process is the 6900K which, albeit on an EOL platform, has enough meat to satisfy my requirements... but costs a fortune. As a matter of fact, it costs as much as, if not more than, a 2nd-gen Threadripper (2920X), which has 12 cores and 24 threads available, compared to Intel's 8/16. But that requires changing the motherboard as well, and those are pricey too.
Only the HEDT CPUs have enough PCI Express lanes if you have 2x GPUs and a minimum of 2x NVMe SSDs. There are regular desktop solutions which allow you to use such a hardware combo, but one GPU will run at 8x, the other at 4x, one SSD will run at 2x, and the other would most likely use motherboard-provided PCI Express lanes, reducing the data throughput or providing variable performance. The 9900K from Intel has 16 PCI Express lanes. The Ryzen 2700X has 16 lanes as well. You need more PCI Express lanes? Tough luck, cough up a couple grand on CPU+motherboard alone.
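For anyone wanting to see the squeeze laid out, a small sketch: the lane counts below are the CPU-attached figures quoted in this thread (a reply further down argues the mainstream Ryzen number is effectively 20 once you count the dedicated NVMe x4), and the demand line is just this poster's own 2x GPU + 2x NVMe configuration.

    # CPU-attached PCIe lanes as quoted in this thread (chipset lanes excluded).
    cpu_lanes = {
        "i7-6800K": 28,
        "i7-6850K / 6900K": 40,
        "i9-9900K": 16,
        "Ryzen 2700X": 16,
        "Threadripper": 64,
    }

    # Desired config: two GPUs at x8 each plus two NVMe SSDs at x4 each.
    demand = 2 * 8 + 2 * 4   # 24 lanes

    for cpu, lanes in cpu_lanes.items():
        verdict = "fits" if lanes >= demand else f"short by {demand - lanes}"
        print(f"{cpu:18s} {lanes:2d} lanes -> {verdict}")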
Re: (Score:2)
What about Threadripper?
Re: (Score:2)
Never mind, missed seeing you refer to it. A new MB is in the cards either way. May as well accept that.
Re: (Score:1)
You can get a TR4 board for $250 right now. Pricey? Fuck that, the CPU is $800-1200 and you might as well get 8 sticks of RAM too.
Re: (Score:2)
No, friend, YOU can get a TR4 board for 250 bucks. I can get the same one for 350 EUR if lucky. Same with CPU, a $1000 CPU is 1200 EUR here easily.
Re: (Score:3)
I'd jump to Threadripper. Intel goes through sockets way too fast to be worth sticking with. Threadripper is going to have 64 PCIe lanes, compared to only 40 on the 6900K, and the socket will most likely be around for years with decent support for future models of CPU.
Re: (Score:2)
I don't get where you only get 16 lanes from on Ryzen. It's 16 to the slots: the first slot runs at 16x, if you use the first two it's 8x/8x, and if you use three it drops to 8x/4x/4x, while the NVMe slot has 4 dedicated lanes. Plus the chipset gives you 4x PCIe 2.0, which is fine for most things other than GPU/NVMe, ultimately giving you 20 PCIe lanes, which can be used as 8x/8x/4x if you use a riser for the NVMe. And as another user pointed out, if you change processors with Intel there is a VERY good chance you will be buying a new moth
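A compact way to restate that split (this is just a summary of the comment above, not a datasheet; board vendors can wire things differently):

    # Mainstream Ryzen (AM4) CPU lane allocation as described above.
    slot_splits = {1: [16], 2: [8, 8], 3: [8, 4, 4]}   # x16 slot lanes by populated slot count
    nvme_dedicated = 4                                  # CPU lanes reserved for one M.2 slot
    chipset_pcie2 = 4                                   # extra PCIe 2.0 lanes off the chipset

    for populated, split in slot_splits.items():
        assert sum(split) == 16
        widths = "/".join(f"x{w}" for w in split)
        print(f"{populated} slot(s): {widths} + x{nvme_dedicated} NVMe (+ x{chipset_pcie2} gen2 via chipset)")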
Re: (Score:2)
I have 2x GPUs in SLI. I also have 2x NVMe SSDs which reach 3.4 GB/s transfer rates (each). I also own a 16x PCI Express card which can fit 4 NVMe SSDs (not using it for obvious reasons). If I want to use 8x+8x for GPUs and 4x+4x for NVMe, I need 24 lanes.
The 6800K already has 28 lanes available (CPU only), so if I upgrade, I want to upgrade to something better from this point of view as well. Ryzen 2 would be a downgrade, unless I go Threadripper.
Re: (Score:2)
Pull the other one.
No, it's not equivalent, because you still have only 16 lanes for the primary slots, and still have only 4 lanes for NVMe. You must have a PCIe 4.0-capable device for there to be any bandwidth equivalence, otherwis
Re: (Score:2)
Well, AC pretty much summed it up better than I would have. But with everything you are trying to run, a desktop platform is not for you and you should be jumping to workstation or server platforms. There is no other way, unless you don't use them all at once, in which case a PCIe switch may work for you, but those are not cheap at all and it would be cheaper to go TR4.
Re: (Score:2)
I have 2x GPUs in SLI.
Even if they were 2x Titan X GPUs, the difference between running 8x/8x and 16x/16x is a rounding error at best in all but a very peculiar and very specific set of circumstances, even in incredibly graphics-intensive games.
The NVMe case is far more compelling, not that I see a reason to have that many high speed devices for anything other than posting brag numbers. If there was a real reason you'd likely be running an EPYC or Threadripper.
Re: (Score:2)
That's not the point. The point is 8x+8x is still impossible to achieve if you have even ONE NVMe SSD. I don't need 16x+16x for the GPUs. I need 8x+8x, hell even 8x+4x would be fine, provided both my SSDs get 4x CPU lanes each. That's not the case with any current desktop platform except Intel's X299 and AMD's X399 platforms.
And there's another point I am trying to convey. Back in 2017 when I bought the CPU and motherboard, they set me back around $700 (both), in my country, which means USA prices would hav
Re: (Score:2)
Well, with Ryzen you could have 8x/8x/4x
I currently have a 1070 at 16x and an NVMe at 4x. I don't understand how you're not getting the math on this. If I were to put another GPU in I would then be at 8x/8x with a 4x NVMe, nothing sharing lanes.
Re: (Score:2)
provided both my SSDs get 4x CPU lanes each
Sorry, I missed that. In this case you could stick one NVMe in the board, one in your add-in card, then do 8x/4x/4x + 4x NVMe.
Re: (Score:2)
The point is 8x+8x is still impossible to achieve if you have even ONE nVME SSD.
Err x8+x8 + dedicated x4 for NVMe is the main selling point of Ryzen.
I need 8x+8x, hell even 8x+4x would be fine, provided both my SSDs get 4x CPU lanes each.
There are desktop motherboards that provide what you want. Asus's DIMM.2 system carries PCIe lanes over a DIMM-style slot, leaving the GPU lanes free, to give 2x x4 links to M.2 drives, and many motherboards on the market have PLX chips that switch idle PCIe lanes between slots, allowing you to put 2 or even 4 M.2 slots at full speed on a motherboard with x8+x8 graphics at the same time.
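To illustrate the PLX trade-off mentioned here, a toy model (my own simplification, not any specific board's topology): the switch lets every M.2 slot keep its electrical x4 link, but simultaneous traffic is capped by the upstream width feeding the switch.

    GEN3_PER_LANE_GBPS = 0.985   # approximate PCIe 3.0 payload rate per lane, per direction

    def per_device_bandwidth(upstream_lanes, active_device_lanes):
        """Rough per-device throughput (GB/s) when all listed devices stream at once."""
        upstream = upstream_lanes * GEN3_PER_LANE_GBPS
        demand = sum(active_device_lanes) * GEN3_PER_LANE_GBPS
        scale = min(1.0, upstream / demand)   # fair-share cap, ignoring protocol overhead
        return [lanes * GEN3_PER_LANE_GBPS * scale for lanes in active_device_lanes]

    # Four x4 NVMe drives behind an x8 uplink: full speed alone, roughly half when all stream.
    print(per_device_bandwidth(8, [4]))
    print(per_device_bandwidth(8, [4, 4, 4, 4]))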
Your requirement doesn't seem to include transferring at 12GB/s while maintaini
Desktop GPU Lanes (Score:1)
Except that Crossfire (multigpu) setups in gaming are a bust and didn't work out like people wanted.
Re: (Score:2)
There are already options out there to do that. And I'm sure once some devices use PCIe 4 there will be a splitter for it.
Re: (Score:2)
She builds furniture and does woodworking among other things. So yes, we both have hobbies, I'm learning her crafts and teaching her mine (PC modding, 3D design, PC hardware).
Re: (Score:3)
One person here is well-versed in microcomputing components and has good earning potential.
Another spends his time anonymously attempting to troll on /.
It's easy to tell which has trouble dating women.
If 2 extra PCIe lanes matters.... (Score:1)
Re: (Score:2)
Or maybe *you* are understanding it wrong.
The difference the 2 extra PCIe lanes make is that the SCH gets a dedicated lane to each chip. What this means in practice is that the group otherwise divisible by 16, which gets split down into 2x8 or 4x4 lanes, suddenly has the bandwidth to support 2 additional full-speed NVMe drives (2x x4 lanes, as required for NVMe), instead of giving you a 2x x3 interface (which for NVMe would only run at x2 for compatibility reasons) and splitting 2x x1 lanes off for sys
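The width-rounding detail is the easy part to miss, so here is a minimal sketch of the rule (my own illustration of link-width negotiation in general, not of any particular chipset's behaviour): a link only trains at a width both ends support, so odd lane counts fall back to the next smaller standard width.

    # Standard PCIe link widths; anything else negotiates down.
    SUPPORTED_WIDTHS = (32, 16, 12, 8, 4, 2, 1)

    def negotiated_width(wired_lanes, device_max=16):
        usable = min(wired_lanes, device_max)
        return next(w for w in SUPPORTED_WIDTHS if w <= usable)

    # Carving x1 system links out of lanes that would otherwise feed two x4 NVMe
    # drives leaves x3 wired per drive, which only trains at x2; keeping the two
    # extra dedicated lanes preserves a full x4 link per drive.
    print(negotiated_width(3))   # -> 2
    print(negotiated_width(4))   # -> 4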
Filtering on slashdot? (Score:2)
Re: (Score:2)
Yes, yes you are..
*backs away slowly*