Hardware

NVMe 2.1 Specifications Published With New Capabilities (phoronix.com)

At the Flash Memory Summit 2024 this week, NVM Express published the NVMe 2.1 specifications, which aim to enhance storage unification across AI, cloud, client, and enterprise. Phoronix's Michael Larabel writes: New NVMe capabilities with the revised specifications include:

- Enabling live migration of PCIe NVMe controllers between NVM subsystems.
- New host-directed data placement for SSDs that simplifies ecosystem integration and is backwards compatible with previous NVMe specifications (a toy sketch of the idea follows after this list).
- Support for offloading some host processing to NVMe storage devices.
- A network boot mechanism for NVMe over Fabrics (NVMe-oF).
- Support for NVMe over Fabrics zoning.
- Ability to provide host management of encryption keys and highly granular encryption with Key Per I/O.
- Security enhancements such as support for TLS 1.3, a centralized authentication verification entity for DH-HMAC-CHAP, and post sanitization media verification.
- Management enhancements including support for high availability out-of-band management, management over I3C, out-of-band management asynchronous events and dynamic creation of exported NVM subsystems from underlying NVM subsystem physical resources.
You can learn more about these updates at NVMExpress.org.
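
To make the host-directed data placement bullet above a little more concrete: the idea is that the host tags each write with a placement hint and the drive keeps data sharing a hint in the same physical region, so data with similar lifetimes can be reclaimed together. The toy Python model below is purely illustrative; the names (ToyPlacementDevice, write, discard_hint) are invented for the sketch and are not the NVMe 2.1 command set.

    # Toy model of host-directed data placement. Invented names; not NVMe commands.
    from collections import defaultdict

    class ToyPlacementDevice:
        def __init__(self) -> None:
            self.regions = defaultdict(list)   # one region per placement hint

        def write(self, data: bytes, hint: int) -> None:
            # Data sharing a hint (same tenant, file, or expected lifetime)
            # lands in the same region instead of being interleaved with
            # unrelated data across the device.
            self.regions[hint].append(data)

        def discard_hint(self, hint: int) -> None:
            # Reclaiming a whole region at once is the payoff: less data to
            # copy around during garbage collection, so lower write amplification.
            self.regions.pop(hint, None)

    dev = ToyPlacementDevice()
    dev.write(b"temp-log-entry", hint=7)   # short-lived data
    dev.write(b"user-document", hint=1)    # long-lived data
    dev.discard_hint(7)                    # drop the short-lived region wholesale
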
  • by ctilsie242 ( 4841247 ) on Tuesday August 06, 2024 @11:24PM (#64686842)

    Offloading I/O onto the NVMe components is a nice thing. This means the CPU works less, because the NVMe controller can handle some of the work itself. This reminds me of the old school IBM DASD where the disk wasn't just a "dumb" storage device, but had its own processors as well.

    Booting over NVMe is a really nice item. This means that the SAN can handle 100% of the I/O, and machines can be completely diskless. Instead of providing RAID to every machine via a BOSS card, one could clone the OS to a number of boot images, which will take up a small amount of space with deduplication and/or copy-on-write. With out of band management, those images can be backed up or even scanned for malware outside of the OS, which can be a win.
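
    As a rough back-of-the-envelope on that space claim (the numbers below are made up purely for illustration), copy-on-write clones of a shared golden image grow with the per-machine deltas rather than with the number of machines:

      # Illustrative copy-on-write boot-image math; every number is invented.
      golden_image_gb = 30       # one shared OS image on the SAN
      delta_per_host_gb = 2      # unique writes per diskless machine
      hosts = 200

      full_clones_gb = hosts * golden_image_gb                     # 6000 GB
      cow_clones_gb = golden_image_gb + hosts * delta_per_host_gb  # 430 GB

      print(f"independent full clones: {full_clones_gb} GB")
      print(f"CoW/deduplicated clones: {cow_clones_gb} GB")
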

    Overall, this appears to be a complete replacement for iSCSI, but with TLS-based authentication (beats CHAP any day), encryption in transit, and handoff between controllers.

    NVMe also offers the ability to separate a controller and storage, so a presented LUN can be split up into a chunk of disk for the boot OS and a chunk for the data, with each handled separately.

    Overall, it is definitely going to be competition to iSCSI.

    • But is this like iSCSI direct to disk, or iSCSI to some kind of RAID system?

      • This already exists for large NVMe SAN systems. It's called NVMe-oF. The only difference with the new revision is boot support.

        iSCSI is just another network protocol that already supported boot but wasn't as good for anything else with faster flash storage.

    • Support for offloading some host processing to NVMe storage devices.

      Yeah, this was intriguing and made me think of mainframe stuff. Hopefully we see more mainframe concepts coming into the PC world again now that we're bumping up against harder-won gains in the CPU/GPU space; there's still a lot of room for improvement in other parts of the machine, especially as the added complexity gets cheaper and cheaper to integrate.

    • Heat is already a big problem with the latest gen NVMe drives. The spec may support offloads but if they keep moving more stuff to the controller they're going to have to come up with different ways to cool the package (huh huh).
    • by JBMcB ( 73720 )

      This reminds me of the old school IBM DASD where the disk wasn't just a "dumb" storage device, but had its own processors as well.

      DASD controllers are one of the reasons modern mainframes can handle tens of thousands of transactions a second. They understand file systems, so an application on a mainframe's CPU can ask for a record in a database, and the DASD controller will look up the location of that record and drop it directly into memory without the CPU context switching. This is one of those performance details that make no difference at all when doing regular desktop or server things, but make a huge difference when you are serving tens of thousands of transactions a second.
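
      As a purely illustrative sketch of that division of labor (nothing like a real channel program; the names are invented), the host hands the controller a key, and the controller does the lookup and places the record into host memory itself:

        # Invented sketch of a controller that understands the record layout
        # and deposits the result straight into a host buffer, so the host CPU
        # never runs the lookup path.
        class SmartController:
            def __init__(self, records: dict):
                self.index = records                # controller-resident key -> record map

            def fetch(self, key: str, host_buffer: bytearray) -> None:
                record = self.index[key]            # lookup happens "on the device"
                host_buffer[:len(record)] = record  # stand-in for DMA into host memory

        controller = SmartController({"cust:42": b"ACME Corp, net-30"})
        buf = bytearray(64)
        controller.fetch("cust:42", buf)            # the host just issues one request
        print(bytes(buf).rstrip(b"\x00"))           # b'ACME Corp, net-30'
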

      • Even on regular desktops or servers, being able to throw something into RAM without needing the CPU is important. Things like games which load textures would greatly benefit from this, where textures can be scooped directly from the SSD and stuffed into the GPU without needing the CPU to assist.

        Zoned NVMe drives would be great. This would completely get rid of Fibre Channel, while still providing the stability of having a dedicated storage fabric, where no matter what compromises the network, the zones will still be enforced.

  • by PeeAitchPee ( 712652 ) on Wednesday August 07, 2024 @07:58AM (#64687300)
    For years, we were stuck on PCIe 3. This effectively limited max throughput from a single x16 slot to about 32 GB/s bidirectional (roughly 16 GB/s per direction). This wasn't much of a problem internally but e.g. did limit the effective top speed of large *networked* RAID arrays of spinning rust and early SSDs. Same thing with SATA-based SSDs -- OK for consumers but the interface forces you to leave a lot of the device's inherent performance on the table. The current generation of NVMe devices is so fast that a single drive can now saturate a 100G duplex fiber NIC in a single PCIe 4 x16 slot. That's never before been possible. We're now rapidly progressing through PCIe 4 and PCIe 5, both of which are mainstream. We should start to see the first commercially available PCIe 6 products sometime in 2025. In all, it's the greater overall bus bandwidth that's making the difference more than any other component in the chain. It's fascinating to see the bottleneck shift to other parts of the infrastructure as we can finally run these devices at full speed, as well as more of them in the same machine.
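
    For anyone who wants the rough per-direction numbers behind that comparison, here is a quick sketch (line-encoding overhead only; real protocol overhead shaves off a bit more):

      # Approximate usable PCIe bandwidth per direction (128b/130b encoding,
      # gen 3 and later); protocol overhead is ignored for simplicity.
      GENS = {3: 8.0, 4: 16.0, 5: 32.0}   # transfer rate in GT/s per lane

      def gb_per_s(gen: int, lanes: int) -> float:
          return GENS[gen] * (128 / 130) * lanes / 8   # bits -> bytes

      print(f"PCIe 3.0 x16: {gb_per_s(3, 16):.1f} GB/s per direction")  # ~15.8
      print(f"PCIe 4.0 x4:  {gb_per_s(4, 4):.1f} GB/s per direction")   # ~7.9
      print(f"PCIe 5.0 x4:  {gb_per_s(5, 4):.1f} GB/s per direction")   # ~15.8
      print("100G NIC:     12.5 GB/s per direction (100 Gb/s / 8)")
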
  • - A network boot mechanism for NVMe over Fabrics (NVMe-oF).
    - Support for NVMe over Fabrics zoning.

    For those of us too lazy to use a search engine or LLM, what is this Fabrics thing?

  • "AI" has jumped the shark.

"Mach was the greatest intellectual fraud in the last ten years." "What about X?" "I said `intellectual'." ;login, 9/1990

Working...