
Could AMD's Upcoming EPYC 'Rome' Server Processors Feature Up To 162 PCIe Lanes? (tomshardware.com)

jwhyche (Slashdot reader #6,192) tipped us off to some interesting speculation about AMD's upcoming Zen 2-based EPYC Rome server processors. "The new EPYC processor would use Gen 4 PCIe, where Intel is still using Gen 3. Gen 4 PCIe features twice the bandwidth of the older Gen 3 specification."

And now Tom's Hardware reports: While AMD has said that a single EPYC Rome processor could deliver up to 128 PCIe lanes, the company hasn't stated how many lanes two processors could deliver in a dual-socket server. According to ServeTheHome.com, there's a distinct possibility EPYC could feature up to 162 PCIe 4.0 lanes in a dual-socket configuration, which is 82 more lanes than Intel's dual-socket Cascade Lake Xeon servers. That even beats Intel's latest 56-core 112-thread Platinum 9200-series processors, which expose 80 PCIe lanes per dual-socket server.

Patrick Kennedy at ServeTheHome, a publication focused on high-performance computing, and RetiredEngineer on Twitter have both concluded that two Rome CPUs could support 160 PCIe 4.0 lanes. Kennedy even expects there will be an additional PCIe lane per CPU (meaning 129 in a single socket), bringing the total number of lanes in a dual-socket server up to 162, but with the caveat that this additional lane per socket could only be used for the baseboard management controller (or BMC), a vital component of server motherboards... If @RetiredEngineer and ServeTheHome did their math correctly, then Intel has even more serious competition than AMD has let on.
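
For anyone who wants to sanity-check the totals, here is a minimal sketch of the speculated arithmetic (the figures come from the article; the variable names are ours, and the 160-lane dual-socket figure presumably reflects some lanes being repurposed as socket-to-socket links, as on first-generation EPYC):

```python
# Speculated EPYC "Rome" PCIe 4.0 lane math, per the article.
lanes_per_socket = 128      # AMD's stated figure for a single Rome CPU
sockets = 2
dual_socket_general = 160   # ServeTheHome / @RetiredEngineer's dual-socket estimate
bmc_lanes = 1 * sockets     # one speculated BMC-only lane per CPU

total = dual_socket_general + bmc_lanes
print(lanes_per_socket + 1)  # 129 lanes in a single socket, per Kennedy
print(total)                 # 162 lanes in a dual-socket server
print(total - 80)            # 82 more than Intel's 80-lane dual-socket Xeons
```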


Comments Filter:
  • No. (Score:5, Funny)

    by Gravis Zero ( 934156 ) on Sunday April 07, 2019 @10:11PM (#58401088)

    It could not, would not, on a boat.
    It will not, will not, compute your float.
    It will not have them in the rain.
    It will not have them on a train.
    Not in the dark! Not in a tree!
    Not in a car! Listen to AMD!
    It will not run your Firefox.
    It will not run programs on your box.
    It will not be inside your house.
    It will not 'shop you with Mickey Mouse.
    It does not have them here or there.
    It does not have them anywhere!

    • Thank you for your post. I have been looking for such an article for a long time, and today I finally found it. This post gives me lots of advice; it is very useful to me.
  • What about quad socket, with the faster links?

  • But those are some really specific numbers for a "could x be the next version?" post.
    • You'll find that "Could x be the next y" is the type of shit that editors come up with when they rewrite user submitted stories for more clickbait.

      As for how specific the numbers are: it's 8 x 16 lanes per chip, plus speculation from articles that the server control hub will get a dedicated lane to each chip rather than having one split off from a group of 16. Even as speculation, that makes perfect sense.

  • by Jeremy Erwin ( 2054 ) on Sunday April 07, 2019 @11:08PM (#58401276) Journal

    from the article

    The Intel Xeon Scalable 8-socket servers are very popular in China because the number 8 has special significance, as does having the largest server.

    See, that's why Intel is not using PCIe 4.0. The same numerology that regards 8-way servers as "auspicious" also fears the fourth iteration of PCIe, since four is considered an unlucky number.

  • Desktop CPU Lanes (Score:5, Interesting)

    by war4peace ( 1628283 ) on Monday April 08, 2019 @01:00AM (#58401558)

    I've been waiting for three years now for a relatively affordable desktop CPU with enough PCI Express lanes.
    My current CPU is an Intel i7-6800K on the X99 chipset, and it has 28 PCI Express lanes. The 6850K has 40 lanes but otherwise brings no performance increase. The next step up is the 6900K, which, albeit on an EOL platform, has enough meat to satisfy my requirements... but costs a fortune. As a matter of fact, it costs as much as, if not more than, a 2nd-gen Threadripper (2920X), which has 12 cores and 24 threads compared to Intel's 8/16. But that requires changing the motherboard as well, and those are pricey too.

    Only the HEDT CPUs have enough PCI Express lanes if you have 2x GPUs and a minimum of 2x NVMe SSDs. There are regular desktop solutions that allow you to use such a hardware combo, but one GPU will run at 8x and the other at 4x; one SSD will run at 2x, and the other will most likely use motherboard-provided PCI Express lanes, reducing data throughput or delivering variable performance. Intel's 9900K has 16 PCI Express lanes. The Ryzen 2700X has 16 lanes as well. Need more PCI Express lanes? Tough luck: cough up a couple of grand on the CPU and motherboard alone.
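
    To make the lane budgeting above concrete, here is a hypothetical sketch (the device counts and lane widths are from this comment, the CPU figures are the ones quoted in the thread, and the helper itself is ours):

    ```python
    # Hypothetical lane-budget check: two GPUs at 8x plus two NVMe SSDs at 4x.
    cpu_lanes = {"i7-6800K": 28, "i7-6850K": 40, "i7-6900K": 40,
                 "i9-9900K": 16, "Ryzen 7 2700X": 16}

    devices = [("GPU", 8), ("GPU", 8), ("NVMe SSD", 4), ("NVMe SSD", 4)]
    needed = sum(width for _, width in devices)  # 24 lanes

    for cpu, lanes in cpu_lanes.items():
        verdict = "fits" if lanes >= needed else f"short by {needed - lanes}"
        print(f"{cpu}: {lanes} CPU lanes, need {needed} -> {verdict}")
    ```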

    • What about Threadripper?

      • Never mind, I missed where you referred to it. A new motherboard is in the cards either way; may as well accept that.

        • by Anonymous Coward

          You can get a TR4 board for $250 right now. Pricey? Fuck that, the CPU is $800-1,200, and you might as well get 8 sticks of RAM too.

          • No, friend, YOU can get a TR4 board for 250 bucks. I can get the same one for 350 EUR if I'm lucky. Same with the CPU: a $1,000 CPU is easily 1,200 EUR here.

            • I can't even buy a motherboard + Threadripper processor combo without being forced to pay double the price charged in the US, and that's when I can find a store that sells these kinds of parts at all.
    • by AmiMoJo ( 196126 )

      I'd jump to Threadripper. Intel goes through sockets way too fast to be worth sticking with. Threadripper is going to have 64 PCIe lanes, compared to only 40 on the 6900K, and the socket will most likely be around for years with decent support for future models of CPU.

    • I don't get where you're only getting 16 lanes from on Ryzen. It's 16 to the slots: the first slot runs at 16x alone; use the first two and it's 8x/8x; use three and it drops to 8x/4x/4x. The NVMe slot has 4 dedicated lanes. Plus the chipset gives you 4x of PCIe 2.0, which is fine for most things that aren't GPU/NVMe, ultimately giving you 20 PCIe lanes, which can be used as 8x/8x/4x if you use a riser for the NVMe. And as another user pointed out, if you change processors with Intel, there is a VERY good chance you will be buying a new motherboard.
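
      A small sketch of the bifurcation being described, under this comment's assumptions (16 general-purpose lanes to the slots plus a dedicated 4x for NVMe; the table itself is ours):

      ```python
      # Ryzen desktop slot bifurcation as described in the comment above.
      slot_configs = {
          "one slot":    [16],
          "two slots":   [8, 8],
          "three slots": [8, 4, 4],
      }

      for name, widths in slot_configs.items():
          assert sum(widths) <= 16, "slot lanes cannot exceed the 16-lane group"
          layout = "/".join(f"{w}x" for w in widths)
          print(f"{name}: {layout} + 4x NVMe (dedicated) + 4x PCIe 2.0 (chipset)")
      ```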

      • I have 2x GPUs in SLI. I also have 2x NVMe SSDs that reach 3.4 GB/s transfer rates (each). I also own a 16x PCI Express card that can hold 4 NVMe SSDs (not using it, for obvious reasons). If I want 8x+8x for the GPUs and 4x+4x for the NVMe drives, I need 24 lanes.
        The 6800K already has 28 lanes available (CPU only), so if I upgrade, I want something better from this point of view as well. Ryzen 2 would be a downgrade, unless I go Threadripper.

        • Well, AC pretty much summed it up better than I would have. But given everything you're trying to run, a desktop platform is not for you, and you should be jumping to a workstation or server platform. There is no other way, unless you don't use them all at once, in which case a PCIe switch might work for you; but those are not cheap at all, and it would be cheaper to go TR4.

        • I have 2x GPUs in SLI.

          Even if they were 2x Titan X GPUs, the difference between running 8x/8x and 16x/16x is a rounding error at best in all but a very peculiar and very specific set of circumstances, even in incredibly graphics-intensive games.

          The NVMe case is far more compelling, not that I see a reason to have that many high-speed devices for anything other than posting brag numbers. If there were a real reason, you'd likely be running an EPYC or Threadripper.

          • That's not the point. The point is that 8x+8x is still impossible to achieve if you have even ONE NVMe SSD. I don't need 16x+16x for the GPUs. I need 8x+8x; hell, even 8x+4x would be fine, provided both my SSDs get 4x CPU lanes each. That's not the case with any current desktop platform except Intel's X299 and AMD's X399 platforms.

            And there's another point I'm trying to convey. Back in 2017, when I bought the CPU and motherboard, they set me back around $700 (both) in my country, which means US prices would have been even lower.

            • Well, with Ryzen you could have 8x/8x/4x.

              I currently have a 1070 at 16x and an NVMe at 4x. I don't understand how you're not getting the math on this. If I were to put another GPU in, I would then be at 8x/8x with a 4x NVMe, nothing sharing lanes.

              • provided both my SSDs get 4x CPU lanes each

                Sorry, I missed that. In that case, you could stick one NVMe in the board, one in your add-in card, then do 8x/4x/4x + 4x NVMe.

            • The point is that 8x+8x is still impossible to achieve if you have even ONE NVMe SSD.

              Err, 8x+8x plus a dedicated 4x for NVMe is the main selling point of Ryzen.

              I need 8x+8x; hell, even 8x+4x would be fine, provided both my SSDs get 4x CPU lanes each.

              There are desktop motherboards that provide what you want. Asus's DIMM.2 system routes PCIe lanes through a DIMM-style slot next to the memory, leaving the GPU slots free while feeding 2x 4x links to M.2 drives, and many motherboards on the market have PLX chips that switch idle PCIe lanes between slots, allowing you to put 2 or even 4 M.2 slots at full speed on a motherboard with 8x+8x graphics at the same time.

              Your requirement doesn't seem to include transferring at 12 GB/s while maintaining full graphics bandwidth at the same time.

    • I am pretty sure it is not aimed at the desktop market... But interesting post...
    • by Anonymous Coward

      Except that Crossfire (multi-GPU) setups in gaming are a bust and didn't work out like people wanted.

  • ...you are doing it wrong!
    • Or maybe *you* are understanding it wrong.

      The difference the 2 extra PCIe lanes make is that the SCH gets a dedicated lane to each chip. What this means in practice is that a group of 16 lanes, which would otherwise be split down into 2x 8x or 4x 4x links, suddenly has the bandwidth to support 2 additional full-speed NVMe drives (2x 4x lanes, as NVMe requires) instead of giving you a 2x 3x interface (which for NVMe would only run at 2x for compatibility reasons) and splitting 2x 1x lanes off for system management.
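
      A toy model of that argument, with our own illustrative numbers (the point being that keeping management traffic off a 16-lane group leaves it cleanly divisible into full-speed 4x NVMe links):

      ```python
      # Toy model: how many full-speed 4x NVMe links fit in a 16-lane group,
      # with and without a management lane carved out of the group.
      GROUP = 16
      NVME_WIDTH = 4  # NVMe wants a 4x link

      def nvme_links(available_lanes):
          return available_lanes // NVME_WIDTH

      # Management lane carved out of the group: 15 lanes left, 3 stranded.
      print(nvme_links(GROUP - 1))  # 3

      # Management controller on its own dedicated lane: group stays intact.
      print(nvme_links(GROUP))      # 4
      ```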

  • Why do I get to see the metadata on the posts that are hidden by the filter??? Really... Are you trying to impress me by showing all the hard work you are doing? How good you are at counting? Do your job, shut up, get the fuck out of my attention span! Am I the only one having in-head conversations with user interfaces :)?

