Seagate Unveils First Ever PCIe NVMe HDD (techradar.com) 70
An anonymous reader quotes a report from TechRadar: Seagate has unveiled the first ever hard disk drive (HDD) that utilizes both the NVMe protocol and a PCIe interface, which have historically been used exclusively for solid state drives (SSDs). As explained in a company blog post, the proof-of-concept HDD is based on a proprietary controller that plays nice with all major protocols (SAS, SATA and NVMe), without requiring a bridge. The NVMe HDD was demoed at the Open Compute Project Summit in a custom JBOD enclosure, with twelve 3.5-inch drives hooked up via a PCIe interface. Although the capacity of the drive is unconfirmed, Seagate used images of the Exos X18 for the presentation, which has a maximum capacity of 18TB.
According to Seagate, there are a number of benefits to bringing the NVMe protocol to HDDs, such as reduced total cost of ownership (TCO), performance improvements, and energy savings. Further, by creating consistency across different types of storage device, NVMe HDDs could drastically simplify datacenter configurations. While current HDDs are nowhere near fast enough to make full use of the latest PCIe standards, technical advances could mean SATA and SAS interfaces are no longer sufficient in future. At this juncture, PCIe NVMe HDDs may become the default. That said, it will take a number of years for these hard drives to enter the mainstream. Seagate says it expects the first samples to be made available to a small selection of customers in Autumn next year, while full commercial rollout is slated for 2024 at the earliest.
Finally, a really bad HDD again! (Score:2)
Nice to see that you can now play "Russian Disk Roulette" on NVMe as well!
Background: Seagate is the "flaky" option of disks, some good, some really bad. Do not buy if you do not want problems!
Re:Finally, a really bad HDD again! (Score:5, Informative)
Seagate is the "flaky" option of disks, some good, some really bad. Do not buy if you do not want problems!
And they literally always have been, back in the day we called them "Seizegate" because they had so many stiction problems. It's amazing they're still around after all these years of mediocre product.
Re: (Score:2)
Seagate is the "flaky" option of disks, some good, some really bad. Do not buy if you do not want problems!
And they literally always have been, back in the day we called them "Seizegate" because they had so many stiction problems. It's amazing they're still around after all these years of mediocre product.
Indeed. It does not say good things about the average customer and about markets.
Re: (Score:2)
I remember some UltraSCSI disks they made back in the day that would make sounds like geese honking. Never knew what the hell was going on inside those things, but sure knew I wasn't buying from them ever again. Basically every rotating drive I've bought in the last 10 years has been a Toshiba, and I've been super happy with them. I've got some 2TB Toshiba drives that still work perfectly with 50000+ hours on them.
Re: (Score:3)
I remember some UltraSCSI disks they made back in the day that would make sounds like geese honking.
Ooh, Barracuda!
Re: (Score:2)
>"Ooh, Barracuda!"
LOL!
Re: (Score:2)
Thanks for your comment, that is specifically what I am aiming for in the majority of mine :)
I was working for Silicon Engineering, Inc. (nee Sequoia Semiconductor, later Creative Silicon, a division of Creative Labs) when the Seagate Barracuda drives were popular. We had a couple of 'em attached to our Sun workstation/servers. Almost every single machine was not just serving data, but also a user workstation. Try that today and see what happens, but these were the SunOS4 days (we were just moving to SunOS5
Re: (Score:2)
Oh, I have used many Seagate Barracuda drives. They were actually pretty good. Fast, reasonable price, reasonable warranty, reasonable reliability. Oh the good 'ol days.
And I love the song.
Re: (Score:2)
They were very fast for their day, but they had nonstandard mounting (they were missing the center holes) and they ran very hot if installed as if they were normal drives, as in too-hot-to-touch. We had two of 'em and finally resorted to putting them into full height 5.25" cases with adapter rails to solve both problems.
Re: (Score:2)
Ah, the good old days of taking a drive out of a box after it sits for a few weeks and having to hold it in one hand with power applied and torque it to get it to start spinning... I had one of their earlier 3.5" models testing on a bench, just sitting there with cables plugged in. It was working but making strange noises, then went bang and actually spun around a bit as something inside fell apart and smashed into the spinning platters.
Thanks for the memories, Seagate, and I'm glad to see that your conti
Re: (Score:2)
Wow, you got them to spin by just rotating them? I had to whack them with a screwdriver. Obviously in the rotation or counter-rotation direction, to get them to release stiction. Once I even took the cover off a 40MB half-height RLL disk and rotated the stepper manually. Same disk burned a trace off the board (probably stepper power) and then burned my replacement jumper wire off the board as well.
Re: (Score:3)
Always worthwhile to read the latest report from Backblaze https://www.backblaze.com/b2/h... [backblaze.com]
Re: (Score:2)
Always worthwhile to read the latest report from Backblaze https://www.backblaze.com/b2/h... [backblaze.com]
Well, Seagate takes all the top spots for crappy disks, except for that low number of Toshibas that come in 3rd. Absolutely no surprise there.
Use cases (Score:5, Insightful)
Before everyone begins to condemn HDDs, please remember that they offer ~4x lower price per byte, and this is not likely to significantly change soon. For many use cases, this price difference is a key driver, such as online backup services (someone else has already posted a link to Backblaze [backblaze.com]).
The NVMe interface will likely simplify the design of hybrid arrays that combine HDDs and SSDs and transparently migrate hot data to SSDs and cold data to HDDs.
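A minimal sketch of what such a tiering policy could look like (Python; the thresholds, epoch model, and class name here are hypothetical, and real hybrid arrays track far richer heat statistics):

    from collections import Counter

    PROMOTE_THRESHOLD = 64  # reads per epoch before a block counts as "hot" (assumed)
    DEMOTE_THRESHOLD = 4    # reads per epoch below which a block goes "cold" (assumed)

    class TieringPolicy:
        def __init__(self):
            self.heat = Counter()  # block id -> reads seen this epoch
            self.on_ssd = set()    # block ids currently living on the SSD tier

        def record_read(self, block: int):
            self.heat[block] += 1

        def end_of_epoch(self):
            """Pick migrations once per epoch, then reset the heat counters."""
            promote = [b for b, n in self.heat.items()
                       if n >= PROMOTE_THRESHOLD and b not in self.on_ssd]
            demote = [b for b in self.on_ssd if self.heat[b] < DEMOTE_THRESHOLD]
            self.on_ssd.update(promote)
            self.on_ssd.difference_update(demote)
            self.heat.clear()
            return promote, demote  # the array would then copy these blocks between tiers

With both tiers speaking NVMe behind the same ports, that copy step is the only place the two device types need to be told apart.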
Re: (Score:2)
I just don't get what NVMe supposedly offers to HDDs today. Nor does the author of TFA:
"nowhere near fast enough" ... "could mean"
IoW there is no point whatsoever today.
Re:Use cases (Score:5, Informative)
Asking that is like asking why keyboards aren't all still PC-AT; humans can't type fast enough to use the gigabits of bandwidth offered by the ports they plug into nowadays.
Sure, the keyboard doesn't use the potential of the connector, but conversely it's a waste to continue to make a special port just for the keyboard when a keyboard could make do with the nicer port. It isn't a 'waste', particularly if ditching PC-AT makes room for one or two extra USB ports that are more useful.
SATA/SAS can't keep up with SSDs, so NVMe over PCIe provides what SSDs need, and an HDD can service the same protocols over the same medium in theory (just unable to drive the full protocol). At one point it still made practical sense, as you had many-disk controllers to bridge a lot of SATA/SAS ports to few PCIe lanes, but the uptake of SSDs has produced a rich ecosystem of lots of PCIe lanes coming off of processors and out of chipsets, and if that's not enough, PCIe switch chips that serve the role of SAS expanders. Economies of scale flip the cost picture, so that it may become more expensive to provide SAS connectivity to disks than PCIe.
Re: (Score:2)
At one point it still made practical sense, as you had many-disk controllers to bridge a lot of SATA/SAS ports to few PCIe lanes, but the uptake of SSDs
...is wholly irrelevant to these HDDs.
There are basically two common use cases for HDDs. One is to get a whole lot of storage for not too much money, almost always involving RAID. The other is to warehouse data, and speed doesn't matter much. In the former case you will need lots of connections and you won't want to spend PCIe lanes for each of them, and using a switch will erase benefits. In the latter case the speed doesn't matter anyway. Consequently it's difficult to figure out who will benefit from this.
Re: (Score:2)
you won't want to spend PCIe lanes for each of them, and using a switch will erase benefits
The point is while a switch erodes benefits from high speed drives with contention, as you say that's hardly an issue here where there's plenty of room for HDDs. If you start loading up a bunch of enclosures with SSDs and suffer the switches... it's still not as bad as SAS (which in addition to bandwidth constraints being tighter than PCIe, has an overly limited queuing facility compared with NVMe). At some point, SATA/SAS controllers/expanders/cabling/backplanes will be wasteful when you can construct t
Re:Use cases (Score:4, Informative)
HDDs are now getting fast enough to exceed SATA's 6 Gb/s bandwidth, which works out to roughly 600 MB/s (more like 550 MB/s usable). They have a large DRAM cache, and some hybrid models have flash memory built in too.
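For reference, the arithmetic behind that ceiling (a sketch; 8b/10b line coding is part of the SATA spec, while the extra ~10% protocol overhead figure is a rough assumption):

    # SATA 3 link budget: 6 Gb/s line rate with 8b/10b encoding,
    # so only 8 of every 10 bits on the wire carry payload.
    line_rate_gbps = 6.0
    payload_mb_s = line_rate_gbps * (8 / 10) * 1000 / 8  # 600 MB/s, decimal MB

    # Framing and command overhead costs very roughly another 10% (assumed).
    usable_mb_s = payload_mb_s * 0.9                     # ~540 MB/s
    print(f"payload ceiling: {payload_mb_s:.0f} MB/s, usable: ~{usable_mb_s:.0f} MB/s")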
Re: (Score:3)
The supposed point is simplification - if the HDD can connect directly to NVMe, then we don't have to include SATA / SAS controllers anymore (someday). However, it does mean that precious PCIe lanes are being wasted on slow rotating rust, instead of a few lanes being used efficiently to service a whole lot of slow rotating rust through a SATA / SAS controller.
I suppose they can use a PCIe bridge / mux to get around that the same way that Thunderbolt does, but it still seems like a whole lot of faffing around to not just include proven technologies that have been handling this work for decades without issue.
Re: (Score:2)
However, it does mean that precious PCIe lanes are being wasted on slow rotating rust, instead of a few lanes being used efficiently to service a whole lot of slow rotating rust through a SATA / SAS controller.
Thanks for getting my point!
I suppose they can use a PCIe bridge / mux to get around that the same way that Thunderbolt does, but it still seems like a whole lot of faffing around to not just include proven technologies that have been handling this work for decades without issue.
And again!
I think the real problem at the heart of this development is that Seagate would like large enterprise customers to throw away the thousands of existing racks of disks that have SAS interfaces on them, in favor of buying Seagate's brand new NVMe solutions. It's basically an answer to a question nobody asked.
Yep.
Re: (Score:2)
You only need an x1 link per NVMe HDD. A typical home PC with one HDD will only use a single PCIe lane. And they don't have to consume the fastest on-processor PCIe lanes; they can use the slower PCIe lanes attached to the southbridge.
Though, these days PCIe lanes aren't exactly
Re: (Score:2)
I think the real problem at the heart of this development is that Seagate would like large enterprise customers to throw away the thousands of existing racks of disks that have SAS interfaces on them, in favor of buying Seagate's brand new NVMe solutions. It's basically an answer to a question nobody asked.
While I'm sure Seagate wouldn't object to someone doing that, I suspect that most people (not you apparently, but most) would expect the new interfaces to come in with new equipment during a normal refresh cycle. I don't see any companies going "Oh no! A new interface is available, we must throw out everything we have and buy new!"
Re: (Score:2)
However, it does mean that precious PCIe lanes are being wasted on slow rotating rust, instead of a few lanes being used efficiently to service a whole lot of slow rotating rust through a SATA / SAS controller.
Of course, PCIe lanes are less rare than they used to be. A dual socket Ice Lake SP has 128 Gen4 lanes native, a modern AMD server has 64 (whether single or dual processor).
I suppose they can use a PCIe bridge / mux to get around that the same way that Thunderbolt does, but it still seems like a whole lot of faffing around to not just include proven technologies that have been handling this work for decades without issue.
Of course, when you get to big JBODs, you have lots of 'faffing' about with SAS expanders as well, for the same reasons that it's tricky to connect to hundreds of disks straight to a controller and it would be 'wasteful' to try to have full SAS connectivity and matching number of PCIE lanes for every disk.
I think the real problem at the heart of this development is that Seagate would like large enterprise customers to throw away the thousands of existing racks of disks that have SAS interfaces on them, in favor of buying Seagate's brand new NVMe solutions. It's basically an answer to a question nobody asked.
Which would be why they are not
Re: (Score:2)
Whoops, don't know why that was messed up in my head, thanks for the correction.
The one thing I'll say is I think the trend of reduction of 'SAN' thinking will continue, and while we *can* make PCIe-attached enclosures, and we *can* do NVMe over Ethernet, I think neither will be as prevalent as SAS, which was less prevalent than Fibre Channel in its day. Servers with direct-attached storage, with solutions such as Ceph or vSAN, provide a superset of the benefit while being easier to manage, particularly in
Re: (Score:2)
The supposed point is simplification - if the HDD can connect directly to NVMe, then we don't have to include SATA / SAS controllers anymore (someday).
Yup! This is a good thing.
NVMe supersedes SATA and SAS. There's no sense in continuing to use them.
However, it does mean that precious PCIe lanes are being wasted on slow rotating rust, instead of a few lanes being used efficiently to service a whole lot of slow rotating rust through a SATA / SAS controller.
Nope.
You'd feed them off of the South Bridge lanes, not the CPU lanes, same as the SATA/SAS controller.
Nothing precious about those.
I suppose they can use a PCIe bridge / mux to get around that the same way that Thunderbolt does, but it still seems like a whole lot of faffing around to not just include proven technologies that have been handling this work for decades without issue.
That's called the South Bridge.
Re: (Score:2)
SATA 3 was released in 2008, and transfer speeds from HDDs have been stuck at 6 Gb/s for consumers since then. In that same time, HDD capacity has gone from 1 TB to 20 TB.
HDD speed isn't just limited by that interface, which only interferes with some short transfers that are mostly coming from cache anyway (data which can also be cached on the host). The longer a transfer from an HDD is, the lower the achievable throughput tends to be. Meanwhile, SSDs are far superior at short transfers anyway, and if you're doing a lot of them you want at least some SSD in the mix to handle that traffic. I get why you'd put the SSD on NVMe, but the HDDs still aren't being substant
Re: (Score:2)
I do not see Seagate or WD or Toshiba investing much time or effort to speed up HDDs if the bottleneck is a 12-year-old standard.
I don't see them doing it regardless, because their product would be of interest only to a vanishing few. It makes more sense to put the data that needs to be accessed rapidly on SSD, or in an array whose aggregate bandwidth is much higher. There are very, very few use cases where you need HDD but can't use multiple HDDs. And in general, if you're using enough data to where the cost of SSD is prohibitive, then you're using multiple HDDs anyway.
Re: Use cases (Score:1)
K fine, show me an HDD that can sustain SATA 2 speeds, let alone SATA 3, for more than the few MB worth of RAM on the drive itself
Re: (Score:1)
It's not sustained because it's using RAM. Simple fact is that a modern (consumer) hard drive with its cache turned off wouldn't be able to saturate a SATA 1 interface, and the best and brightest can just barely top the SATA 2 interface.
Re: (Score:2)
It's not sustained because it's using RAM. Simple fact is that a modern (consumer) hard drive with its cache turned off wouldn't be able to saturate a SATA 1 interface, and the best and brightest can just barely top the SATA 2 interface.
What kind of stupid fucking point is this?
A modern CPU with one of its memory channels disabled wouldn't be able to saturate a single core.
What the fuck is your point?
Re: (Score:2)
It's not sustained because it's using RAM.
From the article: " The first-generation Mach.2 drive is expected to nearly saturate the SATA bus, which makes SAS 12Gb/s a better option for the long term".
Simple fact is that a modern (consumer) hard drive with its cache turned off wouldn't be able to saturate a SATA 1 interface, and the best and brightest can just barely top the SATA 2 interface.
1) The article says otherwise. 2) Are you trying to shift the goalposts? You asked for a drive that would exceed SATA 2. This one does. It took me about a minute to google it.
My point again: the SATA interface is limiting what HDD manufacturers can do. Thus they have to use SAS for this model, and could never make a consumer version due to the interface.
Re: (Score:2)
It looks like their proof of concept is an external RAID enclosure that presents itself to the OS as a single drive. That kind of OS-agnostic abstraction would be nice. And multiple HDDs in tandem can definitely make use of the interface's speed. I really kind of like the sound of a plug and play RAID that is invisible to the whole computer, regardless of what interface it uses.
Re: (Score:2)
It looks like their proof of concept is an external RAID enclosure that presents itself to the OS as a single drive. That kind of OS-agnostic abstraction would be nice.
You might think so, but you'd be wrong. TBC below.
And multiple HDDs in tandem can definitely make use of the interface's speed.
That ALREADY provides more than the limit for a single SATA controller, by using multiple interfaces. The SATA controller is already connected via PCIe. Literally nothing is gained there.
I really kind of like the sound of a plug and play RAID that is invisible to the whole computer, regardless of what interface it uses.
Yeah, it sounds great, until you have a problem. Then you're having to dick around with vendor-specific formats and/or interfaces. That's why software RAID on common interfaces is ultimately superior. If you need to recover data, you don't need special vendor-specific tools,
Re: (Score:2)
That ALREADY provides more than the limit for a single SATA controller, by using multiple interfaces. The SATA controller is already connected via PCIe. Literally nothing is gained there.
Note that NVMe over PCIe is two things: a different block protocol, and endorsing PCIe as a transport. SATA and SAS both have some limited queuing nowadays, to facilitate a bit of out-of-order IO request fulfillment that can be done on a drive actuator (though the queue ultimately must be satisfied in-order, the drive can prepare future replies on the way to service the head-of-line). SSDs opened the door to massive potential benefits if IOs could be satisfied out of order and had even more awarene
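The queue limits behind that point are spec facts: NCQ gives SATA one queue of 32 commands, while NVMe allows up to 65,535 I/O queues, each up to 65,536 commands deep. The toy model below is purely illustrative of why reorder freedom matters; the service-time distribution is invented:

    import random

    SATA_NCQ_DEPTH = 32       # one queue, 32 commands (SATA spec limit)
    NVME_MAX_QUEUES = 65_535  # NVMe spec limits
    NVME_QUEUE_DEPTH = 65_536

    random.seed(1)
    service_ms = [random.uniform(1, 20) for _ in range(1000)]  # toy request costs

    def mean_completion(order):
        """Serve requests serially in the given order; return mean finish time."""
        t = total = 0.0
        for s in order:
            t += s
            total += t
        return total / len(order)

    # Same total work either way, but the freedom to reorder (what a deep
    # queue buys the drive) noticeably cuts mean completion time.
    print(f"in arrival order: {mean_completion(service_ms):.0f} ms")
    print(f"shortest-first:   {mean_completion(sorted(service_ms)):.0f} ms")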
Re: (Score:2)
Due to RAM buffering on the drive, it still makes sense to use the fastest interface possible for HDDs.
Re: (Score:2)
While at some point SSD price may become lower than HDD, those SSDs may also have 8 or more level cells and wear out after being rewritten a few times.
Or they may be good, but current QLC drives (at least the cheap ones) are not that good - if I do not need the random performance, building a 10-or-more-TB file server makes more sense with HDDs. Or some HDDs and a couple of SSDs for L2ARC and SLOG.
Re: (Score:2)
I don't care about the noise (within reason, but I have a few servers in a rack), so HDDs make more sense to me than SSDs for movies and such. I use RAID (well, raidz) and I sometimes back my files up to tape to be safe. I keep the hard drives from unloading their heads and they seem to work just fine. As for noise, the drives are rather quiet, the fan noise is pretty much the only thing I can hear from my servers.
I would not want to use the crap SSDs for anything, but normal ones are expensive and would fe
Re: (Score:1)
Seems to me that we are at the "because it's better" price point era of SSDs. The reason to buy an HDD may be because the SSD of your required size is still painful to afford, but SSDs won't be coming down much further relative to HDDs because there has to be a premium.
It increasingly will do just that, especially with QLC, PLC, and novel placement modes including ZNS and NDP. HDD vendors know that their days are limited (one admitted that to me explicitly last year).
SSDs sometimes offer competitive TCO even today. It is common to only look at the drive's $/TB, but that isn't TCO. Factor in:
* HDD capacity has increased tenfold in the last handful of years. SATA hasn't (and SAS market share continues to dwindle)
* It is not uncommon to limit HDD capacity to 8TB beca
Re: (Score:2)
But... that said, is NVMe really all there yet for these kinds of applications? A lot of work was put into making SATA and SAS hot-swappable, but NVMe requires workarounds to make that viable. Certainly it's not something you can do with standard M.2 drives.
The pic from Seagate in TFA shows a disk with SATA type connectors. Perhaps they are delivering NVMe through the same connector? This would imply that M.2 is not relevant here. Also this:
It still of course raises the question of how/if they are supporting hotswap on NVMe at this time, and how they will handle it in the future.
Re: (Score:2)
I see the benefit, but probably best to have the drives connected to an expander, as I wouldn't want to waste a full x4 connection on a slow spinning drive. Best to have a bunch of them share the full bandwidth.
Re: (Score:2)
That price point is offset by the lower energy consumption of solid state disks, and probably also by their higher reliability (at least for the better ones).
IMHO, anyone who has so much data that the lower price per byte for HDDs means anything could probably just push it all into a cheap Amazon S3 tier, unburden themselves from the big infrastructure required for a large rotational-media array, and come out break-even at worst on cost.
Last place I worked we basically quit selling arrays with HDDs i
Return of the HardCard! (Score:5, Insightful)
It's kind of heartwarming, in a way, to see something akin to a HardCard come back into vogue.
Re: (Score:2)
They don't seem to be mounting the drive on a PCIe card, though; it's got some kind of cabled interface that is supposed to be multi-protocol (so you can plug into SATA/SAS or NVMe).
SAS backplanes seem better for slower HDDs for now (Score:2)
SAS backplanes seem better for slower HDDs for now.
They need to work on switches that can be fed from 2-3 PCIe slots and then linked to a bank of disks, not just "disks 1-8 are switched to PCIe slot 1 and disks 9-16 are switched to slot 2".
Re: (Score:1)
PCIe expanders exist. Just ask the bitcoin miners.
NVMe (Score:4, Interesting)
They could actually be onto something.
NVMe protocol allows parallel access to storage media, and is overall more "bare metal" than SATA. Especially when the physical layout no longer matches the heads/cylinders/tracks of the old DOS times.
So *if*, and it's a large if, they allow full NVMe fanciness, the drives could actually become much faster. For example, they could allow parallel writes to multiple platters at the same time, e.g. 4 heads = 4x write speed (with some overhead).
Not to mention being able to drop-in SSD or HDD in the same slots would be a huge time saver for datacenter setups.
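Back-of-the-envelope numbers for the "4 heads = 4x" scenario (the per-head rate and efficiency factor are assumptions; 4x is the idealized ceiling):

    per_head_mb_s = 250  # assumed sustained rate for a single head/surface
    efficiency = 0.9     # assumed loss to servo and scheduling overhead

    for heads in (1, 2, 4):
        print(f"{heads} head(s): ~{per_head_mb_s * heads * efficiency:.0f} MB/s")

    # Four heads lands near 900 MB/s: past SATA 3's ~600 MB/s ceiling, yet
    # still comfortable on a single PCIe 3.0 x1 link (~985 MB/s per lane).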
Re: (Score:2)
So *if*, and it's a large if, they allow full NVMe fanciness, the drives could actually become much faster. For example, they could allow parallel writes to multiple platters at the same time, e.g. 4 heads = 4x write speed (with some overhead).
This is already how some enterprise HDDs operate. All HDDs write a full cylinder before stepping over to the next track.
WTF? (Score:2)
Why would you use such high throughput interfaces to connect such a low throughput storage medium?
This is like building a 12-lane superhighway and only allowing bicycle traffic...
Re: (Score:2)
Why would you use such high throughput interfaces to connect such a low throughput storage medium?
No, it's more like building a second highway for slower cars. I wonder if we could brainstorm some reasons as to why that isn't the most efficient use of resources...
Why would you plug a keyboard into a USB port? PS/2 is more than sufficient.
Re: (Score:2)
Maybe if you're putting in some form of multi-drive controller into the M.2 slot and building a drive array.
For bulk storage, spinning rust is still king.
Other than that...
Re: (Score:2)
M.2 isn't relevant.
SAS and SATA buses suck. Beyond the bus limitations, the protocols and host controller specifications (AHCI, in the case of SATA) *also* suck: a single lock, a single interrupt per controller, limited queues.
SATA over an M.2 slot sucks too. M.2 is just a port.
NVMe is an overwhelmingly superior protocol and mechanism for storage. Spinny, SSD, even optical media.
What kind of port it travels over isn't relevant.
All these claims about "Why would this be needed?!" are
Re: (Score:1)
Why would you plug a keyboard into a USB port? PS/2 is more than sufficient.
I'd argue instead of "more than sufficient", PS/2 is actually *superior* in many ways.
N-key rollover, hardware interrupt-based, no chance of it being delayed by other devices hogging the bus, oh, and the drivers load much earlier in the boot process so you don't have to worry about not being able to get into the BIOS, as can sometimes happen with USB keyboards.
Just because you can... (Score:2)
Just because you can, doesn't mean you should.
This is a solution looking for a problem. NVMe was developed to overcome the limitations of a SATA or SAS interface, interfaces that were designed explicitly for rotating media.
In the last ten years, HDDs have not massively increased in bandwidth, have not massively increased in IOPS, and have not massively decreased in latency - whereas SSDs have. NVMe was developed to offer more bandwidth, more IOPS and lower latency for flash media.
Any single spindle can't ev
Re: (Score:2)
NVMe will absolutely result in better performance for hard drives. It's simply a more efficient protocol, full stop.
Beyond that, it allows things like the drives being able to use host RAM for big buffers on non-enterprise drives.
There is precisely no reason what-so-fucking-ever to continue to use SATA or SAS.
They are trash in comparison.
Please don't get rid of SATA. (Score:2)
There aren't many data hoarders left from the '90s and '00s; nowadays people just dump their stuff in the cloud or on a single USB hard drive.
I like SATA, because I can buy 8 and 16 port controllers at not too expensive of a price, put it in a giant case and use the right software to get the most out of it.
I don't need physically large or expensive controllers for this.
Show me an 8 port NVMe controller for the same price as SATA.
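For what it's worth, the bandwidth budget for hanging many HDDs off one PCIe switch is forgiving; the port count, drive rate, and uplink width below are assumptions, not a shipping product:

    # Hypothetical enclosure: 8 NVMe HDDs behind a PCIe switch whose uplink
    # is Gen3 x4. PCIe 3.0 moves ~985 MB/s per lane after 128b/130b coding.
    LANE_MB_S = 985
    uplink_mb_s = 4 * LANE_MB_S  # ~3940 MB/s shared by every drive
    hdd_mb_s = 280               # assumed sustained rate per HDD

    demand_mb_s = 8 * hdd_mb_s   # all eight drives streaming at once
    print(f"worst-case demand {demand_mb_s} MB/s vs uplink {uplink_mb_s} MB/s "
          f"({uplink_mb_s - demand_mb_s} MB/s headroom)")

The missing piece is exactly what the parent asks for: a cheap host adapter or switch card exposing that many downstream ports.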