
PCI Express 6.0 Specification Finalized: x16 Slots To Reach 128GBps (anandtech.com)

PCI Special Interest Group (PCI-SIG) has released the much-awaited final (1.0) specification for PCI Express 6.0. From a report:

The next generation of the ubiquitous bus is once again doubling the data rate of a PCIe lane, bringing it to 8GB/second in each direction -- and far, far higher for multi-lane configurations. With the final version of the specification now sorted and approved, the group expects the first commercial hardware to hit the market in 12-18 months, which in practice means it should start showing up in servers in 2023.

First announced in the summer of 2019, PCI Express 6.0 is, as the name implies, the immediate follow-up to the current-generation PCIe 5.0 specification. Having made it their goal to keep doubling PCIe bandwidth roughly every 3 years, the PCI-SIG set about work on PCIe 6.0 almost immediately after the 5.0 specification was completed, looking at ways to once again double the bandwidth of PCIe. The product of those development efforts is the new PCIe 6.0 spec, and while the group has missed their original goal of a late 2021 release by mere weeks, today they are announcing that the specification has been finalized and is being released to the group's members.

As always, the creation of an even faster version of PCIe technology has been driven by the insatiable bandwidth needs of the industry. The amount of data being moved by graphics cards, accelerators, network cards, SSDs, and other PCIe devices only continues to increase, and bus speeds must keep pace to keep these devices fed. As with past versions of the standard, the immediate demand for the faster specification comes from server operators, who are already regularly using large amounts of high-speed hardware. But in due time the technology should filter down to consumer devices (i.e. PCs) as well.
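The headline numbers fall straight out of the doubling rule. A back-of-the-envelope sketch (the exact-doubling assumption is a simplification: real Gen 1-3 links pay 8b/10b or 128b/130b encoding overhead, so this slightly overstates them):

```python
# Rough per-direction PCIe bandwidth from the "double every generation"
# rule described above. PCIe 6.0 runs 64 GT/s per lane with PAM4
# signaling, i.e. roughly 8 GB/s per lane in each direction.

GEN6_LANE_GBS = 8.0  # GB/s per lane, per direction, at PCIe 6.0

def pcie_bandwidth(gen: int, lanes: int) -> float:
    """Approximate one-direction bandwidth (GB/s) of a PCIe link."""
    per_lane = GEN6_LANE_GBS / (2 ** (6 - gen))  # halve per older gen
    return per_lane * lanes

print(pcie_bandwidth(6, 16))  # x16 slot at Gen 6 -> 128.0 GB/s
print(pcie_bandwidth(5, 16))  # x16 slot at Gen 5 -> 64.0 GB/s
```

The rule of thumb matches the 128GB/s headline figure for a Gen 6 x16 slot and the spec's stated 8GB/s per lane.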

  • I really have a hard time imagining what they could be for.

    In a private setting, no GPU will saturate this in the next three decades at the rate we're going.
    NVMe? Even if they made a drive, nobody is gonna notice a difference compared to even PCIe 3.0.
    Fringe use cases? What could they be?

    In a professional environment... I really can only think of storage arrays but we just recently had to buy additional arrays because upgrading the current one would have yielded only a 20% increase in performance. The limiting factor being CPU.

    • by Junta ( 36770 )

      There's also the potential to reduce lanes for a given device. If a GPU is content with PCIe Gen 4 x16, then it would be content with PCIe Gen 6 x4. Similarly, NVMe drives might opt for x1 instead of x4.
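That trade-off is just the doubling rule run in reverse: two generation steps let you quarter the lane count for the same bandwidth. A quick sketch, assuming the nominal 8 GB/s per Gen 6 lane and ignoring protocol overhead:

```python
# How many lanes a device needs at each PCIe generation to match a
# given one-direction bandwidth target. Assumes 8 GB/s per lane at
# Gen 6, halving for each older generation; real links also pay
# protocol overhead, so treat these as lower bounds.

import math

def lanes_needed(target_gbs: float, gen: int) -> int:
    """Smallest power-of-two lane count reaching target_gbs one-way."""
    per_lane = 8.0 / (2 ** (6 - gen))
    return max(1, 2 ** math.ceil(math.log2(target_gbs / per_lane)))

gen4_x16 = 16 * 2.0               # a Gen 4 x16 GPU link: 32 GB/s
print(lanes_needed(gen4_x16, 6))  # -> 4: a Gen 6 x4 link matches it
print(lanes_needed(8.0, 6))       # an 8 GB/s NVMe target -> x1
```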

      • One word: networking. Especially ISP-side intelligent devices that process traffic. Virtualizing firewalls and DoS mitigation requires a lot of CPU on the packet stream, and big network cards are the best way to get it there.
        • by Junta ( 36770 )

          True, and HPC also wants the bandwidth, and enterprise storage wants tons of NVMe.

          But in terms of how it might matter more to the home computing segment, more affordable components with fewer lanes required may be the realistic relevant upshot.

      • Socket size and more compact PCIe cards seems like a useful benefit.

        But it also seems like the kind of thing that will trickle down to the unwashed masses far enough into the future that it's hard to get too excited.

        It seems like it used to be these kinds of things quickly got baked into accessible products. But between vendors holding back features for their eNtERprISe products and whatever low level of actual consumer demand exists for the new feature, it seems like they take forever to actually be avail

    • GPUs no longer would need to be x16. They could be x1 or x2, allowing for many GPUs to be connected to the same slot.
      Same goes for storage, imagine 32 NVMe drives connected to a single motherboard.

      It's no longer about speed, rather quantity.

      • by dfghjk ( 711126 )

        "...allowing for many GPUs to be connected to the same slot."

        All other things remaining equal, doing this doesn't allow for a single extra GPU, much less extra GPUs connected TO THE SAME SLOT.

        Bandwidth of a wire means nothing on its own; the data still has to be produced and consumed. Also, in PCIe a "slot" is merely a single point-to-point connection to a downstream port of a switch, so increasing the data rate of a channel doesn't change the number of GPUs that can be connected to a slot.

        Once upon a time there were /. posters who knew things.

        • Actually.... for the sake of argument, there's no reason you couldn't bifurcate a x16 slot to 4 x4 (already common for some NVMe cards on the market), and put 4 GPUs on it, each one connected to 4 lanes.

          They could be socketed, but wouldn't have to be.

          Such a thing might not be practical, but in a custom-designed machine it could very well make sense, and allow use of a standard PCIe x16 card edge connector instead of something proprietary (even if the carrier board itself doesn't adhere to any standard form

          • by tlhIngan ( 30335 )

            Actually.... for the sake of argument, there's no reason you couldn't bifurcate a x16 slot to 4 x4 (already common for some NVMe cards on the market), and put 4 GPUs on it, each one connected to 4 lanes.

            There is no argument on this.

            This is supported by PCIe since the beginning. In fact, during slot interrogation, the PCIe root complex interrogates to see which lanes are used by what devices during training so it can bond those lanes together.

            It's why when talking about PCIe root complexes and bridges, we ta

            • The only argument would be a physical one, if one thinks of GPUs as complete PCIe cards (which, while they're commonly sold as such, is not how I think of them).

              It's true that two standard graphics cards physically can't occupy the same slot, bifurcation or not, so we would need a riser/breakout type card, or a different form factor of GPU, or both. Electrically/logically, putting 4 devices on one x16 slot (or even one x8 slot) is perfectly doable, but there's some physical interfacing that has to be worke

        • by edwdig ( 47888 )

          A GPU is essentially a lot of relatively small processors in one chip. You could ramp up the core count and still have the bandwidth to keep them busy.

        • Once upon a time there were /. posters who knew things.

          Yes, such as the fact that many motherboards don't allow for PCI Express bifurcation (intentional design, but still).
          Also, it's one thing to use PCI Express 3.0 and need at least 4 lanes per GPU (otherwise you choke GPU bandwidth), compared to using PCI Express 6.0 with 1 lane per GPU (and bandwidth room to spare).

    • - Dual-port 400Gb NICs
      - Single-port 800Gb NICs
      - NVMe devices (which EASILY max out Gen 3 x4 today, and several can max out Gen 4 x4)
      - GPUs (either more bandwidth/lower latency to each, or more GPUs stuffed into one box using lower link widths)
      - Inter-CPU communication
      - Disaggregated/composable infrastructure.
      - Internals of flash-based storage arrays
      - ...the list goes on.

      Not much use for Gen6 at home, b

    • Two others already commented about reducing the number of lanes. Doing that reduces the complexity of designing the boards (easier to design a graphics card with x8 PCIe 6.0 than x16 PCIe 5.0, or a x1 PCIe 6.0 SSD than a x4 PCIe 4.0 SSD) and the SoCs (think Graviton, A64FX, Yitian 710, etc.)

      Another use case can be networking.

      The IEEE 802.3 guys are not standing still. Currently there are NICs with two 400Gbps ports, and you need two NICs per server.

      IIRC, the next step for Eth is 1Tbps, followed by 4Tbps. You are n

      • Next step after 400Gb is 800Gb (already standardized and on the roadmap if not actually available from multiple switch vendors). After that, I'm not sure... my guess would be they'll double it to 1.6Tb since that seems to be the way of things recently, for the most part (double the speed of each lane and/or double the count of lanes in the optics). Next-gen switches should have 112Gb SerDes, which enables 800Gb-SR8 easily.

        Power is going to be the limiting factor soon, as well as cost - the higher data rat
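Tying the NIC speeds above back to PCIe link widths, a rough sizing sketch (assuming the nominal 8 GB/s per Gen 6 lane and ignoring both Ethernet and PCIe protocol overhead, so real NICs need some headroom beyond these numbers):

```python
# Minimum PCIe 6.0 link width to carry an Ethernet port at line rate.
# Assumes 8 GB/s per lane per direction; protocol overhead ignored.

def min_gen6_lanes(port_gbit: float) -> int:
    """Smallest power-of-two Gen 6 lane count covering port_gbit."""
    need_gbs = port_gbit / 8       # Gbit/s -> GByte/s line rate
    lanes = 1
    while lanes * 8.0 < need_gbs:  # 8 GB/s per Gen 6 lane
        lanes *= 2
    return lanes

print(min_gen6_lanes(400))      # single 400GbE port -> x8
print(min_gen6_lanes(800))      # single 800GbE port -> x16
print(min_gen6_lanes(2 * 400))  # dual-port 400GbE NIC -> x16
```

By the same arithmetic a Gen 5 x16 link tops out around 64 GB/s per direction, which is why a dual-port 400Gb NIC (100 GB/s of line rate) is one of the first places Gen 6 matters.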

    • I really have a hard time imagining what they could be for. ... In a private setting, no GPU will saturate this in the next three decades at the rate we're going. ... What use cases can you think of?

      UltraRAM? UltraRAM Breakthrough Moves Us Closer to a Unified RAM and Storage Solution [gizmodo.com]

      This next-gen component could provide the speeds of RAM with the storage capabilities of SSDs.

      New chip making techniques, faster SSD storage, and the use of AI and machine learning are some of the latest innovations helping increase the performance and efficiency of modern computers, and now a research team at Lancaster University in the U.K. might have invented the next big thing.

      It’s called UltraRAM and it combines the best of RAM and SSD storage to deliver an ultra-fast version of RAM that doesn’t dump data when turned off. But let’s rewind for a second and look at the two most commonly used components today.

    • Well, for one reason it can be fun to push things to the extreme? Why do people climb Mount Everest? Why do people do crossword puzzles? Why did Galileo waste his time documenting the behavior of the moons of Jupiter? Why did Isaac Newton develop the math to describe how moving objects and forces behave?

    • by stikves ( 127823 )

      Interconnects.

      There are already NVMe cards with 4 slots, which require BIOS support for "bifurcation" and are x16-sized right now. With the upgrades, we can have server storage controllers with 10+ NVMe slots on a single PCIe card.

      Or other I/O devices, like 10GbE (which requires 2 lanes per port in PCIe 3.0). You could have a quad 10GbE card on your router with only a single lane.

      Or ... run 16 Ethereum miners from a single motherboard... okay scratch that one :)

    • Indeed. 640k should be enough for anybody.

    • by edwdig ( 47888 )

      I really have a hard time imagining what they could be for.

      In a private setting, no GPU will saturate this in the next three decades at the rate we're going.
      NVMe? Even if they made a drive, nobody is gonna notice a difference compared to even PCIe 3.0.
      Fringe use cases? What could they be?

      In a professional environment... I really can only think of storage arrays but we just recently had to buy additional arrays because upgrading the current one would have yielded only a 20% increase in performance. The limiting factor being CPU. It turns out that with mirroring, compression and deduplication, the CPU becomes the bottleneck. Perhaps EPYC could fare better than Intel Xeon these days, but even so, saturating that bandwidth... You'd probably need a quad socket EPYC machine to even remotely have a chance.

      Since nobody, as far as I am aware, has made such a thing, I can only assume it isn't that simple either.

      So are we talking supercomputers here? What use cases can you think of?

      Look at the PlayStation 5 design for your answers. The goal is to have a fast enough SSD that you can stream in textures as needed rather than preload everything into RAM. It handles data compression on the SSD controller chip so the CPU doesn't get overloaded.

      Going even further, take a look at the tech coming in Unreal Engine 5. The engine is designed to take 3D scans of environments with massive polygon counts and enormous textures. It streams that data in at runtime and generates 3D models and textures s

    • by jd ( 1658 )

      Hmmm. I don't know if this would work, but here's two possibilities.

      InfiniBand-driven RDMA. The maximum theoretical speed for InfiniBand is 1200 Gbps. PCIe uses ten-bit encoding, so your maximum bus speed is just about the same as your maximum network speed. Since InfiniBand is a bus master, you don't invoke the CPU or OS at all; data just goes straight to/from RAM. Then you've got the question of whether you can arrange RAM in such a way that it could support such a data rate.

      Very large numbers of bus mas

    • by Shinobi ( 19308 )

      Supercomputers/clusters, firewalls, some routers. A giant RAM disk could also be useful. Just hope the PCIe root complexes can keep up. In many I/O or GPU compute heavy systems nowadays, that's where you bottleneck.

    • What use cases can you think of?

      The benefit here isn't your graphics card, nor is it your NVMe drive. It's the CPU connectivity to external peripherals which currently for high end systems can be a system wide bottleneck. Running 2 GPUs, and a couple of NVMe drives? You're limited in bandwidth. All those devices connected to your PCH sharing a PCIe connection to the GPU? Limited in bandwidth.

      The point isn't talking to one device, it's talking to all at once.

  • The chart that Ars has shows that it should be more than 128GB/s. https://cdn.arstechnica.net/wp... [arstechnica.net]
  • My gaming PC is only PCI Express 2.0, you insensitive clod!

