Data Storage

Seagate Unveils First Ever PCIe NVMe HDD (techradar.com) 70

An anonymous reader quotes a report from TechRadar: Seagate has unveiled the first ever hard disk drive (HDD) that utilizes both the NVMe protocol and a PCIe interface, which have historically been used exclusively for solid state drives (SSDs). As explained in a company blog post, the proof-of-concept HDD is based on a proprietary controller that plays nice with all major protocols (SAS, SATA and NVMe) without requiring a bridge. The NVMe HDD was demoed at the Open Compute Project Summit in a custom JBOD enclosure, with twelve 3.5-inch drives hooked up via a PCIe interface. Although the capacity of the drive is unconfirmed, Seagate used images of the Exos X18 for the presentation, which has a maximum capacity of 18TB.

According to Seagate, there are a number of benefits to bringing the NVMe protocol to HDDs, such as reduced total cost of ownership (TCO), performance improvements, and energy savings. Further, by creating consistency across different types of storage device, NVMe HDDs could drastically simplify datacenter configurations. While current HDDs are nowhere near fast enough to make full use of the latest PCIe standards, technical advances could mean SATA and SAS interfaces are no longer sufficient in future. At this juncture, PCIe NVMe HDDs may become the default. That said, it will take a number of years for these hard drives to enter the mainstream. Seagate says it expects the first samples to be made available to a small selection of customers in Autumn next year, while full commercial rollout is slated for 2024 at the earliest.

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Nice to see that you can now play "Russian Disk Roulette" on NVME as well!

    Background: Seagate is the "flaky" option of disks, some good, some really bad. Do not buy if you do not want problems!

    • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Friday November 12, 2021 @09:11AM (#61981097) Homepage Journal

      Seagate is the "flaky" option of disks, some good, some really bad. Do not buy if you do not want problems!

      And they literally always have been, back in the day we called them "Seizegate" because they had so many stiction problems. It's amazing they're still around after all these years of mediocre product.

      • by gweihir ( 88907 )

        Seagate is the "flaky" option of disks, some good, some really bad. Do not buy if you do not want problems!

        And they literally always have been, back in the day we called them "Seizegate" because they had so many stiction problems. It's amazing they're still around after all these years of mediocre product.

        Indeed. It does not say good things about the average customer and about markets.

      • I remember some UltraSCSI disks they made back in the day that would make sounds like geese honking. Never knew what the hell was going on inside those things, but sure knew I wasn't buying from them ever again. Basically every rotating drive I've bought in the last 10 years has been a Toshiba, and I've been super happy with them. I've got some 2TB Toshiba drives that still work perfectly with 50000+ hours on them.

        • I remember some UltraSCSI disks they made back in the day that would make sounds like geese honking.

          Ooh, Barracuda!

          • >"Ooh, Barracuda!"

            LOL!

            • Thanks for your comment, that is specifically what I am aiming for in the majority of mine :)

              I was working for Silicon Engineering, Inc. (nee Sequoia Semiconductor, later Creative Silicon, a division of Creative Labs) when the Seagate Barracuda drives were popular. We had a couple of 'em attached to our Sun workstation/servers. Almost every single machine was not just serving data, but also a user workstation. Try that today and see what happens, but these were the SunOS4 days (we were just moving to SunOS5

              • Oh, I have used many Seagate Barracuda drives. They were actually pretty good. Fast, reasonable price, reasonable warranty, reasonable reliability. Oh the good 'ol days.

                And I love the song.

                • They were very fast for their day, but they had nonstandard mounting (they were missing the center holes) and they ran very hot if installed as if they were normal drives, as in too-hot-to-touch. We had two of 'em and finally resorted to putting them into full height 5.25" cases with adapter rails to solve both problems.

      • Ah, the good old days of taking a drive out of a box after it sits for a few weeks and having to hold it in one hand with power applied and torque it to get it to start spinning... I had one of their earlier 3.5" models testing on a bench, just sitting there with cables plugged in. It was working but making strange noises, then went bang and actually spun around a bit as something inside fell apart and smashed into the spinning platters.

        Thanks for the memories, Seagate, and I'm glad to see that your conti

        • Wow, you got them to spin by just rotating them? I had to whack them with a screwdriver, obviously in or counter to the rotation direction, to get them to release the stiction. Once I even took the cover off a 40MB half-height RLL disk and rotated the stepper manually. Same disk burned a trace off the board (probably stepper power) and then burned my replacement jumper wire off the board as well.

    • Always worthwhile to read the latest report from Backblaze https://www.backblaze.com/b2/h... [backblaze.com]

      • by gweihir ( 88907 )

        Always worthwhile to read the latest report from Backblaze https://www.backblaze.com/b2/h... [backblaze.com]

        Well, Seagate takes all the top spots for crappy disks, except for that low number of Toshibas that come in 3rd. Absolutely no surprise there.

  • Use cases (Score:5, Insightful)

    by enriquevagu ( 1026480 ) on Friday November 12, 2021 @09:20AM (#61981123)

    Before everyone begins to condemn HDDs, please remember that they offer ~4x lower price per byte, and this is not likely to significantly change soon. For many use cases, this price difference is a key driver, such as online backup services (someone else has already posted a link to Backblaze [backblaze.com]).

    The NVMe interface will likely simplify the design of hybrid arrays that combine HDDs and SSDs and transparently migrate hot data to SSDs and cold data to HDDs.
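
    A minimal sketch of the kind of hot/cold tiering policy such a hybrid array might apply (the threshold and helper names here are made up for illustration, not taken from any shipping product):

        from collections import Counter

        # Toy hot/cold tiering: frequently accessed blocks get promoted to the SSD tier,
        # rarely touched blocks stay on (or fall back to) the HDD tier.
        access_counts = Counter()
        HOT_THRESHOLD = 8  # accesses per monitoring window; arbitrary for this example

        def record_access(block_id: int) -> None:
            access_counts[block_id] += 1

        def tier_for(block_id: int) -> str:
            return "ssd" if access_counts[block_id] >= HOT_THRESHOLD else "hdd"

        # Block 42 is read repeatedly, block 7 only once.
        for _ in range(10):
            record_access(42)
        record_access(7)
        print(tier_for(42), tier_for(7))  # -> ssd hdd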

    • I just don't get what NVMe supposedly offers to HDDs today. Nor does the author of TFA:

      While current HDDs are nowhere near fast enough to make full use of the latest PCIe standards, technical advances could mean SATA and SAS interfaces are no longer sufficient in future.

      "nowhere near fast enough" ... "could mean"
      IoW there is no point whatsoever today.

      • Re:Use cases (Score:5, Informative)

        by Junta ( 36770 ) on Friday November 12, 2021 @10:27AM (#61981285)

        Asking that is like asking why keyboards aren't all still PC-AT; humans can't type fast enough to use the gigabits of bandwidth offered by the ports they plug into nowadays.

        Sure, the keyboard doesn't use the potential of the connector, but conversely it's a waste to keep making a special port just for the keyboard when the keyboard could make do with the nicer port. It isn't a "waste", particularly if ditching the PC-AT port makes room for one or two extra USB ports that are more useful.

        SATA/SAS can't keep up with SSDs, so NVMe over PCIe provides what SSDs need, and an HDD can in theory service the same protocol over the same medium (it just can't drive it at full speed). At one point the split still made practical sense, because you had many-disk controllers to bridge a lot of SATA/SAS ports to a few PCIe lanes. But the uptake of SSDs has produced a rich ecosystem of PCIe lanes coming off of processors and out of chipsets, and if that's not enough, PCIe switch chips serve the role of SAS expanders. Economies of scale flip the cost picture, so it may become more expensive to provide SAS connectivity to disks than PCIe.
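
        For a rough sense of the bandwidth economics in this argument, here is a back-of-the-envelope comparison (all figures are approximate usable-payload numbers assumed for illustration, not from the article):

            # Approximate usable payload bandwidth per link (GB/s), after encoding/protocol overhead.
            links = {
                "SATA 3 (6 Gb/s, 8b/10b)": 0.55,
                "SAS-3 (12 Gb/s)":         1.1,
                "PCIe 3.0 x1":             0.95,
                "PCIe 4.0 x1":             1.9,
                "PCIe 4.0 x4 uplink":      7.6,
            }

            hdd_sustained = 0.27  # GB/s on the outer tracks of a large 7200 rpm drive (rough)

            for name, bw in links.items():
                print(f"{name:24s} ~{bw:4.2f} GB/s  -> ~{bw / hdd_sustained:.1f} HDDs to saturate")

        Even one x1 lane is more than a single spindle can use; the argument above is that the lane is cheap and generic, while the dedicated SATA/SAS plumbing is the part that stops paying for itself.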

        • At one point it still made practical sense as you had many-disk controllers to bridge a lot of SATA/SAS ports to few pcie lanes, but the uptake of SSDs

          ...is wholly irrelevant to these HDDs.

          There are basically two common use cases for HDDs. One is to get a whole lot of storage for not too much money, almost always involving RAID. The other is to warehouse data, and speed doesn't matter much. In the former case you will need lots of connections and you won't want to spend PCIe lanes for each of them, and using a switch will erase benefits. In the latter case the speed doesn't matter anyway. Consequently it's difficult to figure out who will benefit from thi

          • by Junta ( 36770 )

            you won't want to spend PCIe lanes for each of them, and using a switch will erase benefits

            The point is that while a switch erodes the benefits of high-speed drives through contention, as you say that's hardly an issue here, where there's plenty of headroom for HDDs. If you start loading up a bunch of enclosures with SSDs and suffer the switches... it's still not as bad as SAS (which, in addition to tighter bandwidth constraints than PCIe, has an overly limited queuing facility compared with NVMe). At some point, SATA/SAS controllers/expanders/cabling/backplanes will be wasteful when you can construct t

      • Re:Use cases (Score:4, Informative)

        by AmiMoJo ( 196126 ) on Friday November 12, 2021 @10:59AM (#61981351) Homepage Journal

        HDDs are now getting fast enough to exceed SATA's 6 Gbps bandwidth, which works out to about 500 MB/sec. They have a large DRAM cache, and some are hybrid types with flash memory built in too.
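
        For reference, the conversion from the 6 Gb/s line rate to roughly 500-600 MB/s of payload goes like this (the ~7% protocol-overhead factor is a rule of thumb, not an exact figure):

            # SATA 3: 6 Gb/s on the wire, 8b/10b encoding, then framing/protocol overhead.
            line_rate_gbps = 6.0
            encoding_efficiency = 8 / 10   # 8b/10b: 10 bits on the wire per data byte
            payload_mb_s = line_rate_gbps * 1000 * encoding_efficiency / 8
            print(f"Theoretical payload: {payload_mb_s:.0f} MB/s")          # 600 MB/s
            print(f"Typical real world:  ~{payload_mb_s * 0.93:.0f} MB/s")  # ~560 MB/s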

      • The supposed point is simplification - if the HDD can connect directly to NVMe, then we don't have to include SATA / SAS controllers anymore (someday). However, it does mean that precious PCIe lanes are being wasted on slow rotating rust instead of efficiently used by servicing a whole lot of slow rotating rust on a few lanes servicing a SATA / SAS controller.

        I suppose they can use a PCIe bridge / mux to get around that the same way that Thunderbolt does, but it still seems like a whole lot of faffing arou

        • However, it does mean that precious PCIe lanes are being wasted on slow rotating rust instead of efficiently used by servicing a whole lot of slow rotating rust on a few lanes servicing a SATA / SAS controller.

          Thanks for getting my point!

          I suppose they can use a PCIe bridge / mux to get around that the same way that Thunderbolt does, but it still seems like a whole lot of faffing around to not just include proven technologies that have been handling this work for decades without issue.

          And again!

          I think the real problem at the heart of this development, is that Seagate would like large enterprise customers to throw away the thousands of existing racks of disks that have SAS interfaces on them, in favor of buying Seagate's brand new NVMe solutions. It's basically an answer to a question nobody asked.

          Yep.

          • by tlhIngan ( 30335 )

            However, it does mean that precious PCIe lanes are being wasted on slow rotating rust instead of efficiently used by servicing a whole lot of slow rotating rust on a few lanes servicing a SATA / SAS controller.

            Thanks for getting my point!

            You only need an x1 link per NVMe HDD. A typical home PC with one HDD will only use a single PCIe lane. And they don't have to consume the fastest on-processor PCIe lanes; they can use the slower PCIe lanes attached to the southbridge.

            Though, these days PCIe lanes aren't exactly

        • by EvilSS ( 557649 )

          I think the real problem at the heart of this development, is that Seagate would like large enterprise customers to throw away the thousands of existing racks of disks that have SAS interfaces on them, in favor of buying Seagate's brand new NVMe solutions. It's basically an answer to a question nobody asked.

          While I'm sure Seagate wouldn't object to someone doing that, I suspect that most people (not you apparently, but most) would expect the new interfaces to come in with new equipment during a normal refresh cycle. I don't see any companies going "Oh no! A new interface is available, we must throw out everything we have and buy new!"

          • Having NVMe over a PCIe bus will not require you to give up the SAS/SATA you currently have... I don't see how it forces me to do anything. I still have USB-to-IDE adapters shelved away just in case, but SATA will be available for ages. Aging SAS expanders and JBODs will be supported for the next decade; if you aren't migrated off in 10 years... suck it.
        • by Junta ( 36770 )

          However, it does mean that precious PCIe lanes are being wasted on slow rotating rust instead of efficiently used by servicing a whole lot of slow rotating rust on a few lanes servicing a SATA / SAS controller.

          Of course, PCIe lanes are less rare than they used to be. A dual socket Ice Lake SP has 128 Gen4 lanes native, a modern AMD server has 64 (whether single or dual processor).

          I suppose they can use a PCIe bridge / mux to get around that the same way that Thunderbolt does, but it still seems like a whole lot of faffing around to not just include proven technologies that have been handling this work for decades without issue.

          Of course, when you get to big JBODs, you have lots of 'faffing' about with SAS expanders as well, for the same reasons that it's tricky to connect to hundreds of disks straight to a controller and it would be 'wasteful' to try to have full SAS connectivity and matching number of PCIE lanes for every disk.

          I think the real problem at the heart of this development, is that Seagate would like large enterprise customers to throw away the thousands of existing racks of disks that have SAS interfaces on them, in favor of buying Seagate's brand new NVMe solutions. It's basically an answer to a question nobody asked.

          Which would be why they are not

            • Correction for you: AMD Epyc is 128 lanes whether dual or single CPU, not 64. https://www.anandtech.com/show... [anandtech.com] A single-CPU AMD system has 128 lanes; in a dual configuration, 64 from each CPU are used for the socket interconnect, leaving 128 available. Epyc is always 128. Like you said though, PCIe gen 5 will change everything and SAS expanders die with CXL. Everything will be on network-attached PCIe going forward, as you implied.
            • by Junta ( 36770 )

              Whoops, don't know why that was messed up in my head, thanks for correction.

              The one thing I'll say is I think the trend away from "SAN" thinking will continue, and while we *can* make PCIe-attached enclosures, and we *can* do NVMe over Ethernet, I think neither will be as prevalent as SAS, which was less prevalent than Fibre Channel in its day. Servers with direct-attached storage, with solutions such as Ceph or vSAN and similar, provide a superset of the benefit but are easier to manage, particularly in

        • The supposed point is simplification - if the HDD can connect directly to NVMe, then we don't have to include SATA / SAS controllers anymore (someday).

          Yup! This is a good thing.
          NVMe supersedes SATA and SAS. There's no sense in continuing to use them.

          However, it does mean that precious PCIe lanes are being wasted on slow rotating rust instead of efficiently used by servicing a whole lot of slow rotating rust on a few lanes servicing a SATA / SAS controller.

          Nope.
          You'd feed them off of the South Bridge lanes, not the CPU lanes, same as the SATA/SAS controller.
          Nothing precious about those.

          I suppose they can use a PCIe bridge / mux to get around that the same way that Thunderbolt does, but it still seems like a whole lot of faffing around to not just include proven technologies that have been handling this work for decades without issue.

          That's called the South Bridge.

      • It means that SATA 3 was released in 2008 and transfer speeds from HDDs have been stuck at 6 Gb/s for consumers since then. In the same time, HDD capacity has gone from 1TB to 20TB. While SATA 3 is still adequate for most consumer usage like showing vacation photos, backup/restore takes an exceedingly long amount of time. Games have already pushed that limit, with game makers having to find ways to hide loading screens over the years.
        • SATA 3 was released in 2008 and transfers speeds from HDDs have been stuck at 6 Gb/s for consumers since then. In the same time HDD capacity has gone from 1TB to 20 TB

          HDD speed isn't just limited by that interface, which is only interfering with some short transfers which are mostly coming from cache anyway — data which can also be cached on the host. The longer a transfer from an HDD is, the lower the achievable throughput tends to be. Meanwhile, SSDs are far superior at short transfers anyway, and if you're doing a lot of them you want at least some SSD in the mix to handle that traffic. I get why you'd put the SSD on NVMe, but the HDDs still aren't being substant

            • While HDDs have other limits to their transfer speed, the current maximum is still 6 Gb/s on SATA 3. That hinders what HDD makers can do to speed up transfers. I do not see Seagate or WD or Toshiba investing much time or effort into speeding up HDDs if the bottleneck is a 12-year-old standard.
            • I do not see Seagate or WD or Toshiba investing much time or effort to speed up HDDs if the bottleneck is a 12 year old standard.

              I don't seem them doing it regardless, because their product would be of interest only to a vanishing few. It makes more sense to put the data that needs to be accessed rapidly on SSD, or in an array whose aggregate bandwidth is much higher. There are very, very few use cases where you need HDD but can't use multiple HDDs. And in general, if you're using enough data to where the cost of SSD is prohibitive, then you're using multiple HDDs anyway.

              • By "vanishing few" do you mean anyone with a NAS or SAN. This is a first step in transitioning off the SATA interface.
        • K, fine, show me an HDD that can sustain SATA 2 speeds, let alone 3, for more than the few MB worth of RAM on the drive itself.

          • Seagate's newest HDD can hit 5.24 Gb/s [extremetech.com]. While it is not sustained transfer, HDDs would be limited by SATA 3. This drive uses SAS 12 and the technology could never be moved to a SATA model.
            • by Osgeld ( 1900440 )

              its not sustained cause its using ram, simple fact is that a modern (consumer) hard drive with its cache turned off wouldn't be able to saturate a sata 1 interface and the best and brightest can just tip the SATA2 interface

              • its not sustained cause its using ram, simple fact is that a modern (consumer) hard drive with its cache turned off wouldn't be able to saturate a sata 1 interface and the best and brightest can just tip the SATA2 interface

                What kind of stupid fucking point is this?

                A modern CPU with one of its memory channels disabled wouldn't be able to saturate a single core.
                What the fuck is your point?

              • >"its not sustained cause its using ram"

                From the article: " The first-generation Mach.2 drive is expected to nearly saturate the SATA bus, which makes SAS 12Gb/s a better option for the long term".

                simple fact is that a modern (consumer) hard drive with its cache turned off wouldn't be able to saturate a sata 1 interface and the best and brightest can just tip the SATA2 interface

                1) The article says otherwise. 2) Are you trying to shift the goalposts? You asked for a drive that would exceed SATA 2. This one does. It took me about a minute to google it.

                My point again: the SATA interface is limiting what HDD manufacturers can do. Thus they have to use SAS for this model and could never make a consumer version due to the interface
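
                Rough numbers for why a dual-actuator drive crowds SATA 3 (the per-actuator figure is an assumption for illustration, not a quoted spec):

                    # Illustrative: a dual-actuator HDD versus interface headroom (approximate).
                    per_actuator_mb_s = 260   # assumed sustained throughput of one actuator
                    actuators = 2
                    drive_mb_s = per_actuator_mb_s * actuators   # ~520 MB/s

                    sata3_usable = 560    # MB/s usable on SATA 3
                    sas3_usable = 1100    # MB/s usable on SAS 12 Gb/s

                    print(f"Dual-actuator drive: ~{drive_mb_s} MB/s")
                    print(f"Headroom on SATA 3:  ~{sata3_usable - drive_mb_s} MB/s")
                    print(f"Headroom on SAS-3:   ~{sas3_usable - drive_mb_s} MB/s")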

      • It looks like their proof of concept is an external RAID enclosure that presents itself to the OS as a single drive. That kind of OS-agnostic abstraction would be nice. And multiple HDDs in tandem can definitely make use of the interface's speed. I really kind of like the sound of a plug and play RAID that is invisible to the whole computer, regardless of what interface it uses.

        • It looks like their proof of concept is an external RAID enclosure that presents itself to the OS as a single drive. That kind of OS-agnostic abstraction would be nice.

          You might think so, but you'd be wrong. TBC below.

          And multiple HDDs in tandem can definitely make use of the interface's speed.

          That ALREADY provides more than the limit for a single SATA controller, by using multiple interfaces. The SATA controller is already connected via PCIe. Literally nothing is gained there.

          I really kind of like the sound of a plug and play RAID that is invisible to the whole computer, regardless of what interface it uses.

          Yeah, it sounds great, until you have a problem. Then you're having to dick around with vendor-specific formats and/or interfaces. That's why software RAID on common interfaces is ultimately superior. If you need to recover data, you don't need special vendor-specific tools,

          • by Junta ( 36770 )

            That ALREADY provides more than the limit for a single SATA controller, by using multiple interfaces. The SATA controller is already connected via PCIe. Literally nothing is gained there.

            Note that NVMe over PCIe is two things: a different block protocol, and endorsing PCIe as a transport. SATA and SAS both have some limited queuing nowadays, to facilitate a bit of out-of-order I/O request fulfillment on the drive's actuator (though the queue ultimately must be satisfied in order, the drive can prepare future replies on the way to servicing the head of the line). SSDs opened the door to massive potential benefits if I/Os could be satisfied out of order and had even more awarene
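
            For context on the queuing gap being described, the protocol-level limits compare roughly like this (spec maxima, not what any particular drive or controller actually exposes; the SAS figure is a typical per-device depth):

                # Command-queue limits by protocol (approximate spec maxima).
                queue_limits = {
                    "SATA (AHCI + NCQ)": (1, 32),         # 1 queue, 32 outstanding commands
                    "SAS (TCQ)":         (1, 254),        # typical per-device tagged-queue depth
                    "NVMe":              (65535, 65536),  # up to ~64K I/O queues, ~64K entries each
                }
                for proto, (queues, depth) in queue_limits.items():
                    print(f"{proto:18s} {queues:>6} queue(s) x {depth:>6} = {queues * depth:>13,} outstanding")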

      • by CityZen ( 464761 )

        Due to RAM buffering on the drive, it still makes sense to use the fastest interface possible for HDDs.

    • Used to be 100x lower, then 20x, then 10x (which is about where people started to use SSDs as the only drives for mainstream desktops). That 4x is likely to keep shrinking significantly, making HDDs less and less relevant. A quick look at PCPartPicker to assess the market today gives SSDs at 4.41x more expensive per GB than HDDs for cheapo QLC SSDs that I would never buy, and 4.7x for cheap but reasonable-quality TLC SSDs (considering 1TB+ capacity drives in both cases). SSD prices are down about 20% per GB in the
      • While at some point SSD prices may become lower than HDD prices, those SSDs may also have 8 or more level cells and wear out after being rewritten a few times.
        Or they may be good, but current QLC drives (at least the cheap ones) are not that good - if I do not need the random performance, building a 10 or more TB file server makes more sense with HDDs. Or some HDDs and a couple of SSDs for L2ARC and SLOG.

        • HDDs work for the file server, although I wouldn't use them for something like a home media server unless it was in a closet somewhere I didn't have to hear it. Or maybe you can get HDDs engineered for quiet operation; I haven't really looked into that market. There are definitely already comically bad SSDs on the market. I just built a new system and was shopping for an SSD for the main system drive. Saw a 2TB QLC drive with a rated write endurance of 40 terabytes! It might well exceed that, but if the rated endura
          • I don't care about the noise (within reason, but I have a few servers in a rack), so HDDs make more sense to me than SSDs for movies and such. I use RAID (well, raidz) and I sometimes back my files up to tape to be safe. I keep the hard drives from unloading their heads and they seem to work just fine. As for noise, the drives are rather quiet, the fan noise is pretty much the only thing I can hear from my servers.

            I would not want to use the crap SSDs for anything, but normal ones are expensive and would fe

      • Seems to me that we are at the "because it's better" price-point era of SSDs. The reason to buy an HDD may be because the SSD of your required size is still painful to afford, but SSDs won't be coming down much further relative to HDDs because there has to be a premium.
        • Seems to me that we are at the "because it's better" price-point era of SSDs. The reason to buy an HDD may be because the SSD of your required size is still painful to afford, but SSDs won't be coming down much further relative to HDDs because there has to be a premium.


          It increasingly will do just that, especially with QLC, PLC, and novel placement modes including ZNS and NDP. HDD vendors know that their days are limited (one admitted that to me explicitly last year)

          SSDs sometimes offer competitive TCO even today. It is common to only look at the drive's $/TB, but that isn't TCO. Factor in:

          * HDD capacity has increased tenfold in the last handful of years. SATA hasn't (and SAS market share continues to dwindle)

          * It is not uncommon to limit HDD capacity to 8TB beca

    • Comment removed based on user account deletion
      • But... that said, is NVMe really all there yet for these kinds of applications? A lot of work was put into making SATA and SAS hot swappable, but NVMe requires workarounds to make that viable. Certainly it's not something you can do with standard M.2 drives

        The pic from Seagate in TFA shows a disk with SATA type connectors. Perhaps they are delivering NVMe through the same connector? This would imply that M.2 is not relevant here. Also this:

        The NVMe HDD was demoed at the Open Compute Project Summit in a custom JBOD enclosure, with twelve 3.5-inch drives hooked up via a PCIe interface.

        It still of course raises the question of how/if they are supporting hotswap on NVMe at this time, and how they will handle it in the future.

      • Comment removed based on user account deletion
    • by klui ( 457783 )

      I see the benefit, but it's probably best to have the drives connected to an expander, as I wouldn't want to waste a full x4 connection on a slow spinning drive. Best to have a bunch of them share the full bandwidth.

    • That price point is offset by the lower energy consumption of solid state disks and probably also by the higher reliability of solid state disks (at least better ones).

      IMHO, anyone who has so much data that the lower price per byte for HDDs means anything could probably just push it all into cheap Amazon S3, unburden themselves from the big infrastructure required for a large rotational-media array, and come out breaking even at worst on cost.

      Last place I worked we basically quit selling arrays with HDDs i

  • by DarkOx ( 621550 ) on Friday November 12, 2021 @10:08AM (#61981253) Journal

    It's kind of heartwarming in a way to see something akin to a hardcard come back into vogue.

  • SAS backplanes seem better for slower HDDs for now.

    Someone needs to work on switches that can be fed from 2-3 PCIe slots and then linked to a bank of disks, not just "disks 1-8 are switched to PCIe slot 1 and disks 9-16 are switched to slot 2".

  • NVMe (Score:4, Interesting)

    by stikves ( 127823 ) on Friday November 12, 2021 @03:36PM (#61982205) Homepage

    They could actually be onto something.

    The NVMe protocol allows parallel access to storage media, and is overall more "bare metal" than SATA, especially now that the physical layout no longer matches the heads/cylinders/tracks of the old DOS times.

    So, *if*, and a large if, they allow full NVMe fanciness, the drives could actually become much faster. For example they could allow parallel writes to multiple platters at the same time. i.e.: 4 heads = 4x write speeds (with some overhead).

    Not to mention being able to drop-in SSD or HDD in the same slots would be a huge time saver for datacenter setups.
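
    Back-of-the-envelope for the "4 heads = 4x (with some overhead)" idea, and how it sits against the interface limits discussed above (per-head throughput and the overhead factor are assumptions for illustration):

        # How parallel actuators/heads could stack up against interface limits (illustrative).
        per_head_mb_s = 260   # assumed sustained throughput per head/actuator
        overhead = 0.10       # assumed coordination overhead

        for heads in (1, 2, 4):
            total = per_head_mb_s * heads * (1 - overhead)
            print(f"{heads} head(s): ~{total:.0f} MB/s "
                  f"(SATA 3 usable ~560 MB/s, PCIe 4.0 x1 ~1970 MB/s)")

    With four actuators the drive would outrun SATA 3 but still fit comfortably within a single PCIe 4.0 lane, which is presumably where the NVMe pitch comes in.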

    • So, *if*, and a large if, they allow full NVMe fanciness, the drives could actually become much faster. For example they could allow parallel writes to multiple platters at the same time. i.e.: 4 heads = 4x write speeds (with some overhead).

      This is already how some enterprise HDDs operate. All HDDs write a full cylinder, before stepping over to the next track.

  • by Chas ( 5144 )

    Why would you use such high throughput interfaces to connect such a low throughput storage medium?

    This is like building a 12 lane super-highway and only allowing bicycle traffic...

    • Why would you use such high throughput interfaces to connect such a low throughput storage medium?

      No, it's more like building a second highway for slower cars. I wonder if we could brainstorm some reasons as to why that isn't the most efficient use of resources...

      Why would you plug a keyboard into a USB port? PS/2 is more than sufficient.

      • by Chas ( 5144 )

        Maybe if you're putting in some form of multi-drive controller into the M.2 slot and building a drive array.
        For bulk storage, spinning rust is still king.
        Other than that...

        • You miss the point.
          M.2 isn't relevant.
          SAS and SATA buses suck. Beyond the bus limitations, the protocols and host-controller specifications (AHCI in the case of SATA) themselves *also* suck: single locking interrupt per controller, limited queues.
          mSATA over an m.2 sucks too. m.2 is just a port.

          NVMe is an overwhelmingly superior protocol and mechanism for storage. Spinny, SSD, even optical media.
          What kind of port it travels over isn't relevant.

          All these claims about "Why would this be needed?!" are
      • by bardrt ( 1831426 )

        Why would you plug a keyboard into a USB port? PS/2 is more than sufficient.

        I'd argue that instead of "more than sufficient", PS/2 is actually *superior* in many ways.

        n-key rollover, hardware interrupt-based, no chance of it being delayed by other devices hogging the bus, oh, and the drivers load much earlier in the boot process so you don't have to worry about not being able to get into the BIOS, like can sometimes happen with USB keyboards.

  • Just because you can, doesn't mean you should.

    This is a solution looking for a problem. NVMe was developed to overcome the limitations of a SATA or SAS interface, interfaces that were designed explicitly for rotating media.

    In the last ten years, HDDs have not massively increased in bandwidth, have not massively increased in IOPS, and have not massively decreased in latency - whereas SSDs have. NVMe was developed to offer more bandwidth, more IOPS and lower latency for flash media.
    Any single spindle can't ev

    • Wrong on all counts.

      NVMe will absolutely result in better performance for hard drives. It's simply a more efficient protocol, full stop.
      Beyond that, it allows things like the drives being able to use host RAM for big buffers on non-enterprise drives.
      There is precisely no reason what-so-fucking-ever to continue to use SATA or SAS.
      They are trash in comparison.
  • There aren't many data hoarders left from the 90s and 00s; nowadays people just dump their stuff in the cloud or on a single USB hard drive.

    I like SATA, because I can buy 8 and 16 port controllers at not too expensive a price, put them in a giant case, and use the right software to get the most out of it.

    I don't need physically large or expensive controllers for this.
    Show me an 8 port NVMe controller for the same price as SATA.
