Hardware

PCIe 5.0 SSDs Promising Up To 14GB/s of Bandwidth Will Be Ready In 2024 (arstechnica.com)

An anonymous reader quotes a report from Ars Technica: Most companies still haven't shifted their entire NVMe SSD lineups to use PCI Express 4.0, but PCIe 5.0 SSDs for PCs are already on the horizon. Storage company Silicon Motion said in a recent earnings call that it expects its PCIe 5.0-capable SSD controllers for consumer SSDs will be available sometime in 2024, opening the door to a wide variety of high-performance drives from different manufacturers. SSD manufacturer ADATA teased some PCIe 5.0 SSDs at CES last month (albeit without an expected release date), boasting of read speeds up to 14GB/s and write speeds of up to 12GB/s using a Silicon Motion SM2508 controller. Current high-end PCIe 4.0 SSDs like Samsung's 980 Pro top out at roughly half those speeds.

Other reports have suggested that these PCIe 5.0 consumer SSDs are coming later in 2022, but according to the call transcript, that only applies to the latest version of Silicon Motion's PCIe 5.0 controller for enterprise SSDs -- the products that end up in servers and data centers, not what typically ends up in the PC on your desk or lap. Early PCIe 4.0 SSDs for consumer PCs were also demonstrated at CES a couple of years before they became products that you could actually buy. For 2022 and 2023, Silicon Motion will continue to focus on those PCIe 4.0 SSDs. Budget SSDs like Western Digital's WD Black SN770 SE are only beginning to transition to PCIe 4.0, and according to reviews like this one from Tom's Hardware, their controllers and flash memory aren't yet fast enough to benefit much from the extra bandwidth. Silicon Motion also says that PCIe 4.0 SSDs have only become common in pre-built PCs within the last year because of "extensive verification and testing" requirements.
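As a rough sanity check on those figures (the x4 link width and 128b/130b encoding assumed below are the norm for consumer NVMe drives, not details from the article), here is a short Python sketch of the theoretical bus ceiling for each PCIe generation:

LANES = 4                      # a consumer M.2 NVMe drive uses a x4 link
ENCODING = 128 / 130           # 128b/130b line encoding, used since PCIe 3.0

for gen, gt_per_s in {"PCIe 3.0": 8, "PCIe 4.0": 16, "PCIe 5.0": 32}.items():
    gb_per_s = gt_per_s * ENCODING / 8 * LANES   # GT/s per lane -> GB/s for a x4 link
    print(f"{gen} x4 ceiling: ~{gb_per_s:.2f} GB/s")   # ~3.94, ~7.88, ~15.75 GB/s

The output shows why a 14GB/s drive only fits on a PCIe 5.0 x4 link, and why today's roughly 7GB/s PCIe 4.0 drives are already brushing up against their bus limit.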

  • No new PC for me until 2024
    • And you better buy some dry ice to keep the thing chilled

      • These new M2 SSDs now need to have heatsinks? Such a step backwards. I guess that's ok if you need the speed but I prefer reliability.

        The next new innovation will be a cable to move the M2 slots off the motherboard so that they are away from all the heat. Haha progress.
        • The next new innovation will be a cable to move the M2 slots off the motherboard so that they are away from all the heat.

          Or at least put it topside. The slots underneath don't leave space for a real heatsink, and let's not discuss accessibility

          • Look at some of the new motherboard designs, they include a heatsink (well a flat metal plate) that screws down over the now not so convenient M2 slot. Some heatsinks run the full width of the motherboard.

            Now you have to remove not just the graphics card, but unbolt a heatsink, THEN unscrew the M2 card. This is laptop levels of stupidity except now on a desktop.
        • by JackAxe ( 689361 )
          A drive that transfers 14GB/s is announced, and you're bothered about the need for cooling? Cooling is what increases reliability. And they already have M.2 extender cables. :)

          Heat has always been an issue for storage, especially with the higher RPM mechanical drives. My 10k drives and even my 7200 RPM drives were very hot to the touch without cooling -- I used a USB fan to cool my 2.5" external FireWire drives, and my 3.5" external drives had aluminum cases with external fins and an internal fan.

          I
          • by NFN_NLN ( 633283 )

            > Heat has always been an issue for storage

            Tell me about it. You never want to have an open flame around a stack of punch cards.

    • by vadim_t ( 324782 )

      This won't really benefit normal people.

      It takes serious work and planning to actually consume data this fast. Unless you're simply copying blocks around, most of the time you're doing something with the data: parsing, decompressing, decrypting, interpreting some data structure and allocating memory for the various parts of it... and that's nigh impossible to do in real time for this amount of data.

      Most games will for instance get bottlenecked on the CPU just trying to load the game data even on older NVMe

      • My home server's main job is to back up data from itself and other systems on our network. It's indeed copying lots of blocks around, it has SSDs, and over time I've found them less than sufficiently reliable for this purpose.

        So what I'm really interested in, for this use case at least, is less improved speed, and more improved reliability.

        I should probably add a heat sink, and should probably consider a spinning-rust drive for the important data (/home, etc.) and leave only the OS and/or re-c

  • by ickleberry ( 864871 ) <web@pineapple.vg> on Thursday February 03, 2022 @05:07PM (#62234945) Homepage
    Couldn't they just have released this in 2006 or whenever PCIe was starting to gain traction?

    This slow-but-steady drip-feeding of improvement over the course of decades is after getting quite tiresome and has cost me many thousands.

    Are the likes of Chipzilla sitting on a 30+ year reserve of technology that they can just feed to us over a long period to maximise their profits? Perhaps the founder was visited by aliens who gave him the technology. Anyways this "every 18 months" business over the past few decades is too suspect altogether
    • Re: (Score:2, Informative)

      Couldn't they just have released this in 2006 or whenever PCIe was starting to gain traction?

      Do you have any idea how technology works, or are you just dense and/or trolling?

      That's like asking NASA to launch the space shuttle right after Russia launched sputnik, or asking Sony to release the PS5 right after the launch of the Intellivision.

      And "every 18 months, everything roughly doubles" is not black magic or alien tech conspiracies, it's just the average speed at which silicon technologies progresses are m

      • Sorry. I was hoping my "Intel CEO visited by aliens" comment was enough to plant my post firmly in the "non-serious" category but it didn't work. I'll make an even more ridiculous suggestion the next time
        • In my defense, I didn't read your post to the end. And mostly, this comes from the fact that I heard a very similar comment from some idiot in a Walmart, about why Microsoft couldn't just have launched the Xbox 360 years ago instead of the first Xbox. The problem is: that guy was dead serious.

      • Well it is a pretty good question. Why wait until recently to start using the bandwidth of PCIe?

  • by DontBeAMoran ( 4843879 ) on Thursday February 03, 2022 @05:10PM (#62234953)

    I'm still using SATA III SSDs, you insensitive clod!

  • SATA SSDs have similar real-world performance to NVMe drives despite the latter's massive bandwidth advantage. Few are going to tangibly benefit from this.

    • Some of us may be more interested in the NIC side of things, where PCIe 5.0 allows more bandwidth for the adapter.

    • Demonstrably, objectively, and completely untrue.

      Boot times, application startup, responsiveness during multiple activities - all very much better with NVMe over SATA SSDs.
      The bandwidth absolutely helps, especially for large files, but the response time is several times faster, and the CPU needs to do a lot less - no pretending to think about heads, sectors and all the other legacy nonsense that NVMe removes the need for. It all helps.
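If anyone wants to measure the response-time difference rather than the headline bandwidth, a rough, Linux-only Python sketch along the following lines times single-threaded 4 KiB random reads with the page cache bypassed. The test file path is hypothetical, and O_DIRECT needs the page-aligned buffer that mmap provides:

import mmap
import os
import random
import time

PATH = "/mnt/testdrive/bigfile.bin"   # hypothetical: a multi-gigabyte file on the drive under test
BLOCK = 4096                          # 4 KiB reads; O_DIRECT needs block-aligned sizes and offsets
READS = 2000

fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)   # bypass the Linux page cache
size = os.fstat(fd).st_size
buf = mmap.mmap(-1, BLOCK)                      # anonymous mapping: a page-aligned buffer for O_DIRECT

start = time.perf_counter()
for _ in range(READS):
    offset = random.randrange(size // BLOCK) * BLOCK   # random, 4 KiB-aligned offset
    os.preadv(fd, [buf], offset)
elapsed = time.perf_counter() - start
os.close(fd)

print(f"average 4 KiB random read: {elapsed / READS * 1e6:.0f} microseconds")

A sketch like this exposes the per-operation latency, which is what drives the "feel" of a system, rather than the sequential MB/s printed on the box.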

  • by ffkom ( 3519199 ) on Thursday February 03, 2022 @06:03PM (#62235141)
    An ever-increasing share of SSDs store so many bits per cell that they become slow at writing (once their cache is full). The quad-level-cell ("quadruple bit") flash in contemporary SSDs is already slower than the faster magnetic disks when written sequentially for extended periods of time. Instead of yet another faster bus, I for one would rather see SSDs that can actually be written end to end at sustained PCIe 3 speeds. (A rough way to see that write cliff for yourself is sketched after the replies below.)
    • I think you can buy SLC ones, but they're pricey. Personally, I'm in it for the high random read speed.

    • by AmiMoJo ( 196126 )

      Slow at reading too. Sometimes a read fails and the drive has to recalibrate the voltages for a block and re-read it.
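A rough way to see the write cliff described above (fast while the SLC cache absorbs writes, then a sharp drop once it fills) is to stream a large amount of incompressible data to the drive and time each chunk. A minimal Linux-oriented Python sketch, with a hypothetical mount point for the drive under test:

import os
import time

PATH = "/mnt/testdrive/write_test.bin"   # hypothetical path on the SSD under test
CHUNK = 256 * 1024 * 1024                # write in 256 MiB chunks
TOTAL = 64 * 1024**3                     # 64 GiB total, enough to exhaust most SLC caches
data = os.urandom(CHUNK)                 # incompressible data, so the controller can't compress it away

with open(PATH, "wb") as f:
    written = 0
    while written < TOTAL:
        start = time.perf_counter()
        f.write(data)
        f.flush()
        os.fsync(f.fileno())             # push the chunk to the drive before timing it
        elapsed = time.perf_counter() - start
        written += CHUNK
        print(f"{written / 1024**3:5.1f} GiB written: {CHUNK / elapsed / 1e6:7.0f} MB/s")

os.remove(PATH)

On a cache-limited QLC drive the per-chunk figure typically starts high and then drops sharply partway through a run like this; the exact shape depends on the drive.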

  • Is it going to be more cost effective to upgrade SSD, motherboard, CPU and RAM to get PCIe5, or just set up a RAID array of more affordable SSD drives, in the near term? I'm guessing the latter.

  • by bloodhawk ( 813939 ) on Thursday February 03, 2022 @08:13PM (#62235477)
    nice, but seems kinda pointless at least for the next 4 or 5 years; current SSD speeds and bandwidth aren't really an issue, and drives aren't hitting the bandwidth limits of 4.0.
  • Pointless (Score:4, Informative)

    by Solandri ( 704621 ) on Friday February 04, 2022 @04:00AM (#62236299)
    We don't perceive storage speed as MB/s. We perceive it as wait time - how long do I need to wait before the drive is finished with an operation? And for a fixed amount of data, wait time is inversely proportional to MB/s. Say you need to read 1 GB of data.
    • 125 MB/s HDD = 8 sec
    • 250 MB/s SATA 2 SSD = 4 sec (4 sec faster than previous)
    • 500 MB/s SATA 3 SSD = 2 sec (2 sec faster than previous)
    • 1000 MB/s early PCIe SSD = 1 sec (1 sec faster than previous)
    • 2000 MB/s NVMe SSD = 0.5 sec (0.5 sec faster than previous)
    • 4000 MB/s current NVMe SSD = 0.25 sec (0.25 sec faster than previous)
    • 14000 MB/s PCIe 5.0 SSD = 0.07 sec (0.18 sec faster than current NVMe SSD)
    • Magical instantaneous storage = 0 sec (0.25 sec faster than current NVMe SSD, 4 sec faster than SATA 2 SSD)

    Notice how every time MB/s doubles, the wait time saved is halved? In other words, the bigger MB/s gets, the less it matters.

    Also notice that the wait times are a converging series with each subsequent step halved. In other words, if you compared a HDD to a SATA 2 SSD to magical instantaneous storage, the wait time reduction going from the HDD to the SATA 2 SSD is the same as the wait time reduction going from SATA 2 to instantaneous storage. Again, the bigger MB/s gets, the less it matters.

    People already can't tell [youtu.be] if their system contains a 2 GB/s NVMe SSD or a 500 MB/s SATA 3 SSD. And that's a 1.5 sec reduction to read 1 GB. Going from a 2 GB/s NVMe SSD to this 14 GB/s SSD would only reduce the wait time to read 1 GB by about 0.43 sec, less than a third of a wait-time reduction that most people already can't distinguish.

    Imagine you're given a task that requires reading 1 GB of sequential data and 200 MB of 4k data. Which do you think will complete it faster: an NVMe SSD with 4 GB/s sequential speeds and 30 MB/s 4k speeds, or a SATA SSD with 500 MB/s sequential speeds and 45 MB/s 4k speeds? Well, there's 5x as much sequential data to read, and the NVMe drive is 8x faster at sequential reads, while only being 1.5x slower at 4k reads. So obviously the NVMe SSD will be faster, right?

    • NVMe: 1000 MB / 4000 MB/s = 0.25 sec. 200 MB / 30 MB/s = 6.7 sec. Total = 6.9 sec
    • SATA: 1000 MB / 500 MB/s = 2 sec. 200 MB / 45 MB/s = 4.4 sec. Total = 6.4 sec

    Surprise! the SATA SSD is faster. That's because the bigger MB/s becomes, the less difference it makes. You'll notice that despite there being only 1/5 as much 4k data, both drives spent more time (a lot more for the NVMe SSD) working on the 4k data. That's because (repeat one more time), the bigger MB/s becomes, the less difference it makes. And it's the small MB/s operations which consume the most time and thus make the biggest difference.

    tl;dr - If you want an SSD that feels fast, the stat you want to maximize is the lowest MB/s spec. That's the 4k read/write speeds. Get a drive with fast 4k speeds and it'll make a much bigger difference in real-world use than a drive with fast sequential speeds. The sequential speeds only really matter if you're working with lots of big files (e.g. video editing). The arithmetic above is easy to reproduce; there's a short sketch at the end of this thread.

    • by vadim_t ( 324782 )

      This isn't for consumer tech. This is high end enterprise tech that'll just happen to percolate down to the consumer domain because both use PCIe buses, and if you can sell to many rather than to a select few, why wouldn't you?

      In the enterprise domain it very much will make a difference. If you have a 100 Gbps network connection, you need to reliably feed it roughly 12 GB/s of data to saturate it. It also saves PCIe lanes.

      • by jabuzz ( 182671 )

        Sure PCIe5 will save PCI lanes, and will be useful for high end servers with 100Gbps+ network connections.

        However, for an SSD, any SSD, it's pointless beyond the fact that instead of a PCIe 3 x4 link I could use a PCIe 5 x1 link, which is of course of value: on a consumer level it's cheaper, and on an enterprise level I can attach many more NVMe SSDs to my server.

        However, the original poster makes a very, very valid point: when you purchase an SSD, the figure you really want to be looking at is how many random IOPS can
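Solandri's arithmetic further up the thread is easy to reproduce. A short Python sketch using the same quoted speeds (illustrative figures from the comment, not measurements):

# Wait time to read 1 GB (1000 MB) at each quoted sequential speed.
drives_mb_s = {
    "125 MB/s HDD": 125,
    "250 MB/s SATA 2 SSD": 250,
    "500 MB/s SATA 3 SSD": 500,
    "1000 MB/s early PCIe SSD": 1000,
    "2000 MB/s NVMe SSD": 2000,
    "4000 MB/s current NVMe SSD": 4000,
    "14000 MB/s PCIe 5.0 SSD": 14000,
}

previous = None
for name, speed in drives_mb_s.items():
    wait = 1000 / speed
    saved = f" ({previous - wait:.2f} sec faster than previous)" if previous else ""
    print(f"{name}: {wait:.2f} sec{saved}")
    previous = wait

# Mixed workload from the example: 1000 MB sequential plus 200 MB of 4k random reads.
for name, seq_mb_s, four_k_mb_s in [("NVMe", 4000, 30), ("SATA", 500, 45)]:
    total = 1000 / seq_mb_s + 200 / four_k_mb_s
    print(f"{name}: {total:.1f} sec total")   # NVMe ~6.9 sec, SATA ~6.4 sec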

  • Must... stop.... salivating...

    I only hope I'll actually be able to afford a high-end GPU by 2024. But then again, maybe we'll all have flying cars.

  • North of 2TB, there aren't many choices for MLC SSDs; the options are few and expensive. No, I don't use quad-bit (QLC) drives. Creating video content, doing 3D design, and using science/engineering apps such as FEM/CFD, I had two drives with over 25TB written against a 150TB endurance rating, so endurance is a real consideration.
