NVMe 2.1 Specifications Published With New Capabilities (phoronix.com) 22
At the Flash Memory Summit 2024 this week, NVM Express published the NVMe 2.1 specifications, which aim to enhance storage unification across AI, cloud, client, and enterprise. Phoronix's Michael Larabel writes: New NVMe capabilities with the revised specifications include:
- Enabling live migration of PCIe NVMe controllers between NVM subsystems.
- New host-directed data placement for SSDs that simplifies ecosystem integration and is backwards compatible with previous NVMe specifications.
- Support for offloading some host processing to NVMe storage devices.
- A network boot mechanism for NVMe over Fabrics (NVMe-oF).
- Support for NVMe over Fabrics zoning.
- Ability to provide host management of encryption keys and highly granular encryption with Key Per I/O.
- Security enhancements such as support for TLS 1.3, a centralized authentication verification entity for DH-HMAC-CHAP, and post sanitization media verification.
- Management enhancements including support for high availability out-of-band management, management over I3C, out-of-band management asynchronous events and dynamic creation of exported NVM subsystems from underlying NVM subsystem physical resources. You can learn more about these updates at NVMExpress.org.
Re: (Score:3)
Your "fact" sounds more like an article of faith. Perhaps you'd like to back it up with some evidence or explanation?
Re:"post sanitization media verification"? (Score:4, Interesting)
Depends on how "sanitized" one wants to go. For physical media and compliance, yes, one has to destroy all media. However, almost all SSDs have built-in encryption, either OPAL or some other standard. So nuking the drive with a secure erase via blkdiscard or an nvme format will ensure all the data that was on the drive is gone, especially if the controller takes the time to free up all the SSD's pages so previously written data is completely gone.
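As a rough sketch of the commands mentioned above (device names here are hypothetical, and every one of these destroys data — run them only on a drive you intend to wipe):

```shell
# Hypothetical device names; all of these destroy data on the target drive.

# Secure discard of every block (the drive marks all LBAs as unwritten):
blkdiscard --secure /dev/nvme0n1

# NVMe format with Secure Erase Settings = 1 (user-data erase):
nvme format /dev/nvme0n1 --ses=1

# Stronger still: a block-erase sanitize, which scrubs all media
# including spare and retired blocks (--sanact=2 selects block erase):
nvme sanitize /dev/nvme0 --sanact=2
```

The sanitize variant is the one that matters for "post sanitization media verification": it operates on the whole physical medium, not just the LBAs the host can see.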
There are times where this can come in handy. For example, if servers are being auctioned off, this should be something kicked off in BIOS.
Sometimes parts of the SSD can be securely erased, which is important for virtualization, so if one VM is deleted or moved, the space it took up on the SSD array is effectively zeroized, so it can't be recovered.
Overall, it is a good thing to have, but it definitely won't replace good ol' physical destruction in a crusher or a shredder when it comes to disposing of things per a lot of standards.
Re: (Score:2)
Sometimes parts of the SSD can be securely erased, which is important for virtualization, so if one VM is deleted or moved, the space it took up on the SSD array is effectively zeroized, so it can't be recovered.
NVMe drives support multiple namespaces and I almost never see this used. Though if the drive implements the security correctly this would make what you described very easy. Each virtual drive is just a namespace on the SSD, sharing wear-leveled blocks with the other namespaces. And then each would have a unique encryption key.
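A minimal sketch of carving a drive into per-VM namespaces with nvme-cli (controller, namespace IDs, and sizes are made up; nsze/ncap are counted in logical blocks):

```shell
# Give each VM its own namespace; sizes are in logical blocks.
# 32 GiB at a 4096-byte block size = 8388608 blocks:
blocks=$(( 32 * 1024 * 1024 * 1024 / 4096 ))

# Create the namespace (which flbas index maps to a 4 KiB LBA format
# is drive-specific; check `nvme id-ns` first):
nvme create-ns /dev/nvme0 --nsze=$blocks --ncap=$blocks --flbas=0

# Attach it to controller 0 so the host sees it as a new /dev/nvme0nY:
nvme attach-ns /dev/nvme0 --namespace-id=2 --controllers=0
```

Deleting the namespace (or crypto-erasing it, with Key Per I/O) is then the per-VM equivalent of a secure erase.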
Although you don't need hardware support if the VM drives are encrypted at the software layer. It just adds a little more overhead.
Re: (Score:2)
There are already plenty of drives that are always encrypted. A secure erase just involves telling the drive to throw away the key and generate a new one, marking all blocks as unused. No extra media wear.
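On a self-encrypting drive that key-discard erase is a single command (device name hypothetical, and destructive):

```shell
# Cryptographic erase: the controller discards the media encryption
# key and generates a new one; existing data becomes unreadable
# ciphertext. No blocks are rewritten, so there is no media wear.
nvme format /dev/nvme0n1 --ses=2
```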
Re: (Score:2)
Yah, I don't see why secure erase confuses people or is considered problematic.
With functional encryption, you simply need to erase the key. Unless you have a few trillion years, the data is irrecoverable. The only exception would be a poor crypto implementation ... which I worry becomes more likely when you add all this extra complexity. But otherwise it is quite straightforward and common.
Not exactly shiny but a lot of nice features... (Score:4, Informative)
Offloading I/O onto the NVMe components is a nice thing. This means the CPU works less, because the NVMe controller can handle some items. This reminds me of the old school IBM DASD where the disk wasn't just a "dumb" storage device, but had its own processors as well.
Booting over NVMe-oF is a really nice item. This means the SAN can handle 100% of the I/O, and machines can be completely diskless. Instead of providing RAID to every machine via a BOSS card, one could clone the OS to a number of boot images, which will take up a small amount of space with deduplication and/or copy-on-write. With out-of-band management, those images can be backed up or even scanned for malware outside of the OS, which can be a win.
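For reference, here is what the fabric attach looks like from a running OS with nvme-cli over TCP (addresses and the subsystem NQN are invented for the example); the new spec's contribution is standardizing how firmware does the equivalent before the OS loads:

```shell
# Ask the target what subsystems it exports:
nvme discover -t tcp -a 192.0.2.10 -s 4420

# Connect to one; its namespaces then show up as local
# /dev/nvmeXnY block devices, so the machine needs no local disk:
nvme connect -t tcp -a 192.0.2.10 -s 4420 \
    -n nqn.2024-08.org.example:boot-image
```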
Overall, this appears to be a complete replacement for iSCSI, but with TLS-based authentication (beats CHAP any day), encryption in transit, and handoff between controllers.
NVMe also offers the ability to separate a controller and storage, so a presented LUN can be split up into a chunk of disk for the boot OS, and a chunk for the data, and each storage device handled separately.
Overall, it is definitely going to be competition to iSCSI.
Re: (Score:2)
But iSCSI direct to disk, versus iSCSI to some kind of RAID system?
Re: (Score:2)
This already exists for large NVMe SAN systems. It's called NVMe-oF. The only difference with the new revision is boot support.
iSCSI is just another network protocol; it already supported booting, but it isn't as well suited to everything else now that flash storage has gotten so fast.
Re: (Score:2)
Support for offloading some host processing to NVMe storage devices.
Yeah this was intriguing and made me think of mainframe stuff. Hopefully we see more mainframe concepts coming into the PC world again now that we're bumping up against diminishing returns in the CPU/GPU space; there's still a lot of room for improvement in other parts of the machine, especially as the added complexity gets cheaper and cheaper to integrate.
Re: (Score:2)
This reminds me of the old school IBM DASD where the disk wasn't just a "dumb" storage device, but had its own processors as well.
DASD controllers are one of the reasons modern mainframes can handle tens of thousands of transactions a second. They understand file systems, so an application on a mainframe's CPU can ask for a record in a database, and the DASD controller will look up the location of that record and drop it directly into memory without the CPU context switching. This is one of those performance roadbumps that make no difference at all when doing regular desktop or server things, but make a huge difference when you are se
Re: (Score:2)
Even on regular desktops or servers, being able to throw something into RAM without needing CPU is important. Things like games which load textures would greatly benefit from this, where textures can be scooped directly from the SSD and stuffed into the GPU without needing the CPU to assist.
Zoned NVMe drives would be great. This would completely get rid of fiber channel, while still providing the stability of having dedicated storage fabric, where no matter what compromises the network, the zones will be
It's the bus bandwidth (Score:4, Interesting)
Re: (Score:3)
You're right, but your numbers are wrong. An ancient PCI bus is 132MB/s. A gen 3 x16 slot gets 16GB/s.
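Those figures fall out of the bus arithmetic (a back-of-the-envelope check, ignoring protocol overhead):

```shell
# Classic 32-bit/33 MHz PCI: 33 million transfers/s x 4 bytes each:
awk 'BEGIN { printf "PCI: %d MB/s\n", 33 * 4 }'

# PCIe gen 3 x16: 8 GT/s per lane, 16 lanes, 128b/130b encoding,
# 8 bits per byte:
awk 'BEGIN { printf "Gen3 x16: %.2f GB/s\n", 8 * 16 * 128 / 130 / 8 }'
# -> 15.75 GB/s, i.e. roughly the 16 GB/s quoted above
```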
What are Fabrics? (Score:2)
- A network boot mechanism for NVMe over Fabrics (NVMe-oF).
- Support for NVMe over Fabrics zoning.
For those of us too lazy to use a search engine or LLM, what is this Fabrics thing?