Data Storage

Seagate's New Mach.2 Is the World's Fastest Conventional Hard Drive (arstechnica.com) 93

An anonymous reader quotes a report from Ars Technica: Seagate has been working on dual-actuator hard drives -- drives with two independently controlled sets of read/write heads -- for several years. Its first production dual-actuator drive, the Mach.2, is now "available to select customers," meaning that enterprises can buy it directly from Seagate, but end-users are out of luck for now. Seagate lists the sustained, sequential transfer rate of the Mach.2 as up to 524MBps -- easily double that of a fast "normal" rust disk and edging into SATA SSD territory. The performance gains extend into random I/O territory as well, with 304 IOPS read / 384 IOPS write and only 4.16 ms average latency. (Normal hard drives tend to be 100/150 IOPS and about the same average latency.)

The added performance requires additional power; Mach.2 drives are rated for 7.2 W idle, while Seagate's standard Ironwolf line is rated at 5 W idle. It gets more difficult to compare loaded power consumption because Seagate specs the Mach.2 differently than the Ironwolf. The Mach.2's power consumption is explicitly rated for several random I/O scenarios, while the Ironwolf line is rated for an unhelpful "average operating power," which isn't defined in the data sheet. Still, if we assume -- probably not unreasonably -- a similar expansion of power consumption while under load, the Mach.2 represents an excellent choice for power efficiency since it offers roughly 200% of the performance of competing traditional drives at roughly 144% of the power budget. Particularly power-conscious users can also use Seagate's PowerBalance mode -- although that feature decreases sequential performance by 50% and random performance by 10%.
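The summary's efficiency claim can be sanity-checked with a quick back-of-the-envelope calculation. Note the 144% power figure is an assumption the summary itself flags (loaded power is not specced comparably across the two product lines):

```python
# Back-of-the-envelope check of the summary's claim:
# ~200% of the performance at ~144% of the power budget.
perf_ratio = 2.00   # Mach.2 throughput vs. a fast conventional drive
power_ratio = 1.44  # loaded power vs. a conventional drive (assumed, per summary)

efficiency_gain = perf_ratio / power_ratio
print(f"Performance per watt: {efficiency_gain:.2f}x a conventional drive")
```

On those assumptions the Mach.2 delivers roughly 1.39x the performance per watt of a conventional drive.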

This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Reliability? (Score:3, Interesting)

    by kmoser ( 1469707 ) on Friday May 21, 2021 @11:34PM (#61409230)
    Twice as many actuators means double the chance an actuator will fail.
    • Re: (Score:2, Interesting)

      Twice as many actuators means double the chance an actuator will fail.

      Only if head failures are independent events, which they aren't.

      • Why aren't they? Surely all the mechanics of the r/w assemblies are independent of each other when assessing the probability that they will fail?

        • Why aren't they?

          There are many things that can cause failures to be correlated:
          1. Same manufacturing batch
          2. Same thermal and vibration environment
          3. Exposed to the same shocks
          4. Platter defects
          5. Helium leaked out
          6. Same power supply
          7. Same control electronics

          So actuators likely to fail will not be randomly distributed. They will be more likely to be paired in a single drive. The corollary is that good actuators will also be more likely to be paired.

          • Those are what you would call examples of dependent failures. They are not an exhaustive list of failure mechanisms, and not in any way a proof or counterpoint to the fact that independent failures do exist, and that these failure modes in this device have doubled in likelihood.

            So actuators likely to fail will not be randomly distributed.

            Except they will be as well. Every electromechanical component has a random failure mode. EVERY SINGLE ONE. Just because you can list correlated failure modes does not make the independent failure mode disappear.
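A minimal sketch of the independent-failure arithmetic both posters are circling (the per-actuator failure probability below is invented purely for illustration):

```python
# Probability that at least one of n independent actuators fails,
# given a per-actuator failure probability p: 1 - (1 - p)^n.
def any_failure(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

p = 0.01  # hypothetical annual failure probability per actuator
print(any_failure(p, 1))  # 0.01
print(any_failure(p, 2))  # 0.0199 -- just under double, not exactly 2x
```

So even in the fully independent case, doubling the actuators slightly less than doubles the failure probability; correlated failure modes shift it further still.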

      • by vivian ( 156520 )

        Twice as many actuators means double the chance an actuator will fail.

        Only if head failures are independent events, which they aren't.

        They need to go to 5 heads, two Aloe Vera lubricating strips and think about making the second one self lather. That should sort the competition out for good.

    • by dgatwood ( 11270 )

      Twice as many actuators means double the chance an actuator will fail.

      On the flip side, if a head doesn't physically explode and start scarring the platter, it means it could ostensibly detect a mechanical or electrical failure of the head or head motor and go into "limp mode" and recover all the data off the disk. Also, dual head preamps doubles your chances of being able to read data.

      • From the marketing picture I saw in the linked article, it looks like it's a stack of two drives on top of each other - so one actuator and set of heads reading the top half of the platter stack, and another actuator and set of heads reading the bottom half. I'm sure the firmware is then performing a transparent remapping of sectors so that, it's something like odd numbered sectors are on the top stack and even numbered sectors are on the bottom stack (i.e RAID 0 across the two stacks). This way, they get d

        • Correct. They are reading two platters at the same time with two sets of arms stacked on a common axis. So they put RAID into a drive? Seems dumb, oh wait we're talking Winchesters here. Hahahaha "The key to Mach.2's increased performance is a second set of actuator arms, which can be positioned independently from the first set. Essentially, this makes a Mach.2 "two drives in one chassis.""
  • by gweihir ( 88907 ) on Friday May 21, 2021 @11:37PM (#61409234)

    Seagate sometimes has drives that are reliable. The other vendors sometimes have drives that are not.

  • > ...edging into SATA SSD territory. The performance gains extend into random I/O territory as well, with 304 IOPS read / 384 IOPS write and only 4.16 ms average latency

    If my SSD could only handle 300 IOPS, I'd break it and blame it on the dog. Cause that is dog **** garbage for SSD IOPS.

    I know, they were talking about sequential read speed, but SSDs were limited more by the SATA bridge for sequential read, IIRC.

    • It's also very dependent on the data you want being perfectly scattered so that the independent read heads can always fill in each other's latency, seeking the next bit of data. SSDs (even the slowest, crappiest) will always get that random rate for any data you want.

      This doesn't, in reality, get anywhere close to SSD speeds.

    • 380-odd IOPS? That's abysmal. I've seen SATA laptop drives (just barely) sustain 2x this load.

  • Doesn't mean much (Score:5, Informative)

    by Guspaz ( 556486 ) on Saturday May 22, 2021 @12:06AM (#61409262)

    The "fastest conventional hard disk" is like trying to ride your "fastest horse" down the autobahn. Even a budget consumer SATA SSD like the 870 EVO can do 98,000 IOPS on 4K QD32 reads... and SATA SSDs are increasingly rare since NVMe SSDs fell so far in price.

    This thing doesn't even work as a single fast hard disk, it presents itself as two *separate* logical devices. You need to RAID0 the two devices just to get the advertised performance. If you're going to resort to RAID anyhow, why not just use regular RAID? Maybe throw some SSDs in front of them for caching?
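The RAID0 arrangement described above amounts to a simple logical-block-to-device mapping. A toy sketch of how striping routes addresses across the two sub-drives (the stripe size and two-device layout are illustrative, not Seagate's actual firmware behavior):

```python
# Toy RAID0 mapping: route a logical block address (LBA) to one of
# two sub-drives, striping in fixed-size chunks. Numbers are made up.
STRIPE_BLOCKS = 128  # blocks per stripe chunk

def map_lba(lba: int) -> tuple[int, int]:
    """Return (device_index, device_lba) for a logical block address."""
    chunk, offset = divmod(lba, STRIPE_BLOCKS)
    device = chunk % 2            # alternate chunks between the two halves
    device_chunk = chunk // 2     # chunk index within that half
    return device, device_chunk * STRIPE_BLOCKS + offset

print(map_lba(0))    # (0, 0)
print(map_lba(128))  # (1, 0)
print(map_lba(256))  # (0, 128)
```

Sequential reads alternate between the two devices chunk by chunk, which is how RAID0 across the two actuators reaches the advertised combined throughput.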

    • From the summary, power consumption is one advantage. 100% increase in performance for only a 44% increase in power. It makes sense - you can use the same amount of energy to spin the platters (which is presumably the biggest drain), while the only extra power consumption is the second actuator's hardware and associated control circuitry.
      • What about price? That power difference isn't going to add up to a big difference in acquisition cost.

      • by Guspaz ( 556486 )

        I don't believe that performance is a terribly relevant factor for magnetic storage at this point, capacity is. So if this thing holds 14TB and uses more power than something like a traditional 14/16/18 TB helium drive, then it's less power efficient per terabyte stored. If energy-efficient performance is what you're looking for, then SSDs are way more energy-efficient per unit of performance.

    • Re:Doesn't mean much (Score:5, Informative)

      by blahplusplus ( 757119 ) on Saturday May 22, 2021 @12:51AM (#61409286)

      The "fastest conventional hard disk" is like trying to ride your "fastest horse" down the autobahn.

      The reality is that unless there is a revolution in SSD size vs cost, HDDs will be with us a long time. Video storage is exploding; remember when we had to compress video/audio to an insane degree so it would fit on our teeny hard drives 20+ years ago? Well, youtube/videogame/movie companies have been upping file sizes.

      The standard download of a movie/anime rip is 1080p if torrents are to be believed, and even bigger if you are doing Blu-rays/4K. Those files are much larger than their 480p counterparts: bigger files because the compression is higher quality and less lossy.

      Many videogames now are pretty giant. Doom 2016 is 5x+ the size of the entire hard drives of 1998.

      http://www.redhill.net.au/d/72... [redhill.net.au]

      • There's been said revolution. I would have agreed with you when a 500GB SSD cost $600, but seriously, you can get a 2TB drive for $150. Hard drives are still a bit cheaper, but they're not the order of magnitude they once were. Frankly, I know very few people with need for 2TB of storage. Those that do tend to be professionals with the ability and need to pay for big 8/16TB SSDs.

        • There's been said revolution. I would have agreed with you when a 500GB SSD cost $600, but seriously, you can get a 2TB drive for $150. Hard drives are still a bit cheaper, but they're not the order of magnitude they once were. Frankly, I know very few people with need for 2TB of storage. Those that do tend to be professionals with the ability and need to pay for big 8/16TB SSDs.

          HDDs will be around a long time, and when VR finally takes off, games are going to get really huge. You're making the same mistake many of us made in the 90's: many of us said XXX amount of RAM or HDD space would be enough for everyone. But here we are 20+ years later; Doom 2016 is 60GB+. For reference, Doom 2 was around 50 megabytes. Doom 2 was released October 1994, Doom 2016 in May 2016. I expect app demands to grow the need for storage because we just aren't smart enough to know what they are yet.

          • In principle you are correct, games are getting larger and larger in general. I do have to call out the VR thing though. VR has no bearing on game size. VR fundamentally changes only the field of view, resolution, and input devices (being controllers, and position sensors for the headsets). Ultimately the underlying games are still the same. They have similar texture resolutions, similar geometry, and use all the same graphics that any other game does.

        • by ClueHammer ( 6261830 ) on Saturday May 22, 2021 @03:07AM (#61409440)
          "I know very few people with need for 2TB of storage" All I can say is stop hanging out with "jocks". The average IT person has backups of everything. Their entire physical media collection in digital form. And various VMs doing whatever this week's weird project is... Multiply that by having actually lived longer than 5 minutes... and you easily need many, many TB.
        • by thegarbz ( 1787294 ) on Saturday May 22, 2021 @05:45AM (#61409614)

          There's been said revolution.

          No there hasn't. There's been lots of little evolutions that have done nothing to shift the power balance.

          SSDs currently occupy a sweet spot of 1TB with 2TB slowly catching up. These in their *cheapest* form cost $89 / TB.
          HDDs currently occupy a sweet spot of 4TB (for conventional) and 8TB for archival workloads. These in their cheapest form are $19/TB

          For an SSD revolution there will need to be a 5x drop in price before you have a hope in hell of HDDs actually disappearing. In many cases not looking at just the cheapest products HDDs often clock in at an order of magnitude cheaper.

          but seriously, you can get a 2TB drive for $150.

          Citation required. The cheapest 2TB SSD I found anywhere is this piece of shit: https://www.newegg.com/mushkin... [newegg.com] and its RRP is $30 higher than what you quote.
          Thing is, when I'm talking about storing that many TB of data, unless I'm doing video workloads, high IOPS / speed may not be the most critical factor. And for $150 you can buy an 8TB HDD.
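Taking the parent's price figures at face value, the gap it describes works out as follows (the $/TB numbers come from the post above, not from any current price survey):

```python
ssd_per_tb = 89   # $/TB, cheapest SSDs per the parent post
hdd_per_tb = 19   # $/TB, cheapest HDDs per the parent post

ratio = ssd_per_tb / hdd_per_tb
print(f"SSDs cost ~{ratio:.1f}x more per TB")
```

Roughly a 4.7x gap, which is where the post's "5x drop in price" requirement comes from.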

          • by sphealey ( 2855 )

            Another point being that you can turn an HDD off, put it on the shelf for 5 years, and have a reasonable chance that the data is at least readable when you plug it back in. SSDs, not so much.

            • by fahrbot-bot ( 874524 ) on Saturday May 22, 2021 @09:56AM (#61410024)

              Another point being that you can turn an HDD off, put it on the shelf for 5 years, and have a reasonable chance that the data is at least readable when you plug it back in.

              I did that with an enterprise-grade SCSI drive after 6 years of nearly 24/7 use in my home PC. When I tried to spin it up after 4 years on the shelf, the spindle was frozen. All I heard was "click... click... click..." I whacked it on the side with a screwdriver handle and it finally spun up. All the data was fine.

              • The GP's argument holds for ferrite cores too. The SCSI drive you mentioned -- Quantum 105S?
                • The SCSI drive you mentioned -- Quantum 105S?

                  5 GB WD SCSI from 199[89] or so -- they were expensive back then, but faster than ATA drives. Also had a SCSI CD drive, which was fast (back then anyway). PC had an add-on Adaptec controller.

          • You can get cheaper SSD and enterprise hard drives are already 2-3x more expensive than regular disks. Once you take into account power consumption, mechanical failure and heat production, enterprise SSD are pretty much on par with enterprise HDD.

            • If you want to compare something horrendously nasty and underperforming with overpriced "enterprise" grade devices (which many commercial companies flat out don't use), then you're making my point for me, not countering it.

          • That analysis smells like a low-duty-cycle desktop workload. In the enterprise / DC space, layering, QLC, and in a few years PLC will continue to erode the perceived gap. For years I've seen this sort of analysis that only considers drive unit cost. Rarely is the cost of the _drive bay_ factored in: power/cooling, rack RUs, switch ports, chassis, CPU, shitty HBAs, AFR. Consider a 1U server with only front drive bays. Four of your 3.5" 8TB HDDs fit. Ten 2.5" SSDs fit at up to 30TB. Consider addi
        • by Junta ( 36770 )

          Note this particular drive targets datacenters, where the fact that you can get 4x the capacity for same cost continues to matter greatly. You are right that it's no longer 10x cost/byte, but 4x is still nothing to sneeze at.

          I will say I was surprised they bothered with multi-actuator. Sure decoupling the top from bottom half of drive makes your IO performance better (for data that happens to be spread across distinct platters managed by distinct actuators) but it seems hardly worth the effort when your max

          • SSDs are nice but the cost adds up.
            Safe cluster size is 16.67 TB (for 6 nodes with 10 TB each, say 1TB x10); at $100 per 1TB disk that is $1000 / node in disk cost.

        • Good for you! Now let the real nerds continue the conversation

          I have 448TB of spinning rust in a home setting

        • Sure, you probably could get a 2TB SSD for that price, although around $200 is more common. And for the same price you could get a 6TB 7200 rpm conventional drive, probably an 8TB if we include bargain hunting. SSDs are still going to be 2-3 times more expensive per terabyte. The main advantage SSDs have right now is performance at the lower-end of storage, where a person with modest storage needs would be better off losing some capacity for the performance boost. At larger capacities conventional drives s

        • Frankly, I know very few people with need for 2TB of storage. Those that do tend to be professionals with the ability and need to pay for big 8/16TB SSDs.

          And for $150 I can get an 8TB conventional HD. Anybody who wants an offline media collection is going to need serious storage space for high def too (mine is occupying 29 of my 32TB). But even gamers now, have you seen the size of modern games? 2TB is rapidly getting to be the absolute floor for anyone besides grandma who checks her e-mail and Facebook and that's it.

      • by Guspaz ( 556486 )

        I think you've missed my point. I'm not arguing that people should buy SSDs for bulk storage instead of this thing, I'm arguing that people should buy conventional HDDs for bulk storage instead of this thing. This thing will cost more and consume more power than an equivalent capacity HDD, and only offers a performance advantage if you RAID its sub-drives (since it shows up as two separate 7TB disks to the host system).

        If you need more storage, get a conventional HDD. If you need more performance, get an SS

    • This thing doesn't even work as a single fast hard disk, it presents itself as two *separate* logical devices. You need to RAID0 the two devices just to get the advertised performance. If you're going to resort to RAID anyhow, why not just use regular RAID? Maybe throw some SSDs in front of them for caching?

      I was wondering about that, if the firmware was performing a transparent mapping of sectors to do RAID 0 internally, but apparently not. The advantage to using this over 2x regular drives is that the power consumption is less than 2x a single drive, so in high density deployments you're potentially using 72% of the power compared to using twice as many physical disks.
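That 72% figure follows directly from the idle ratings quoted in the summary, using idle power as a proxy since the loaded figures aren't specced comparably:

```python
mach2_idle_w = 7.2       # one Mach.2 (two actuators), idle, per the summary
ironwolf_idle_w = 5.0    # one conventional Ironwolf drive, idle

fraction = mach2_idle_w / (2 * ironwolf_idle_w)
print(f"{fraction:.0%} of the power of two separate drives")  # 72%
```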

      • by Guspaz ( 556486 )

        The capacity is not 2x that of a single drive, though. You need just as many of these things to get equivalent capacity. So it actually ends up using *more* power than the equivalent capacity in regular drives.

        This thing only saves energy if performance is all you care about, and in that scenario, SSDs are way more power efficient per throughput or IOPS.

    • > This thing doesn't even work as a single fast hard disk, it presents itself as two *separate* logical devices.

      Golly, there are very few actual use cases for this, then. Seagate is right to only sell it direct to OEMs.

      I am sure there is some device somewhere with a need for high IOPS, tons of storage, and only one drive bay, at a cost much lower than nvme.

    • by Anonymous Coward

      The "fastest conventional hard disk" is like trying to ride your "fastest horse" down the autobahn. Even a budget consumer SATA SSD like the 870 EVO can do 98,000 IOPS on 4K QD32 reads... and SATA SSDs are increasingly rare since nvme SSDs fell so far in price.

      This thing doesn't even work as a single fast hard disk, it presents itself as two *separate* logical devices. You need to RAID0 the two devices just to get the advertised performance. If you're going to resort to RAID anyhow, why not just use regular RAID? Maybe throw some SSDs in front of them for caching?

      Throw SSDs in front for caching? Well, if it's enterprise hybrid storage, of course, they've been doing that for years. And yes, an SSD is faster. When you're buying petabytes of storage, and your needs don't demand all flash, how much is an 18 TB HDD versus say a 15 TB SSD? Trust me, Seagate is still investing in hard drives, because there are still use cases for hard drives. Even my home NAS with its limited storage has hard drives. I really don't need expensive NVMe SSDs to store ripped movies that get

      • by Guspaz ( 556486 )

        The point here is not that HDDs are cheaper than SSDs for bulk storage, the point is that this thing doesn't make sense compared to conventional HDDs. If you care about cheap capacity, get a regular HDD, it will be cheaper than this thing, thus giving you more storage per dollar. If you care about cheap performance, get an SSD, it will offer you more throughput and IOPS per dollar. In what scenario does this thing, which basically just crams two independent HDDs in a single enclosure (it shows up as two slo

    • What about eight SSDs in a Raid000 configuration?

    • They might sell these to consumers but they really aren't the target market I'm sure.

      This is for high-density enterprise storage where mechanical disks still feature heavily and will for a long time.

      When you want to put together 1PB in storage with a mirrored RAID the cost difference of high density SSDs vs HDDs is major.

      Also SSDs have crap write endurance compared to HDDs. That doesn't tend to matter when you install some games and then don't really write much else to the drive for a while, the average con

      • by Guspaz ( 556486 )

        You seem to be completely missing the point. I'm not arguing for SSDs instead of HDDs for bulk storage, I'm arguing for conventional HDDs instead of these dual-actuator HDDs. Compared to an equivalent amount of storage, these things cost more and draw more power, and don't have any performance advantage per logical drive (as the Mach.2 is a 14TB drive that appears to the host system as two 7TB drives).

  • by crunchy_one ( 1047426 ) on Saturday May 22, 2021 @01:10AM (#61409312)
    The IBM 3370 DASD featured two actuator mechanisms per drive, circa 1979. Perhaps something older? Try the IBM 350 from 1956, it came standard with two actuators and a third as an option.
  • Has anybody done any research on solid-state heads along the entire usable radius of the disk? Analogy is roughly to the linear array in a flatbed scanner vs the single scanning point in barcode reader; however, I am envisioning something that is not simply an array of one head per track. Perhaps actually relevant to optical media with phased arrays; not sure how one might "steer" the sensing point of a magnetic sensor.
    • The track width is so small that a device with that many rw sites would be cost prohibitive to produce.

    • by gweihir ( 88907 )

      Yes. This is called "drum memory": https://en.wikipedia.org/wiki/... [wikipedia.org]

      It is slow, unreliable and has low storage capacity.

      • He is obviously talking about multiple heads per platter ...

        And yes, such drives exist(ed?).

        • by gweihir ( 88907 )

          Yes. But drums were there before that and the approach was scrapped afterwards because it is _known_ to not work well. Drums and platters are not really fundamentally different here. If anything, the idea works worse with platters.

          • Erm ... drums got replaced somewhere around 1950?
            So why argue with them?

            And of course drums with a read head for every track would work exceptionally well.
            Same for platters. But the cost or complexity probably makes no sense.

  • by billyswong ( 1858858 ) on Saturday May 22, 2021 @01:33AM (#61409350)

    So it's in effect two 7TB drives sharing the same slot and same set of cables. Not an actual drive that is twice as fast. No wonder it is not available in retail, as it is only useful for datacenters with huge numbers of disks running RAID or ZFS. For a single server / workstation, plugging in more drives is simpler.

  • It was in the early to mid 1990s, but one HDD maker had drives out with two sets of heads. The heads were active/active, and if one set failed, the other could handle all the read/write duty. This wasn't two sets of heads on one axle, but two separate axles with sets of heads. Having this wouldn't just help performance because one set could be reading on the inner tracks while the other is on the outer, but also increase MTBF.

    If Seagate wanted to be really fancy, they could add some high-wear, write-most

    • If Seagate wanted to be really fancy, they could add some high-wear, write-mostly SSD to the drives, like 100-250 gigs worth, split between write-through caching and a read cache. If the SSD failed, the drive could drop it and go back to direct HDD reads and writes. I know Seagate has tried SSHD drives, but those max out at 2 TB. It would be nice to see larger drives, with proportionally larger SSD caching. Add encryption to both sides (HDD and SSD), so the drive can work both as a self encrypting drive, as well as when a SECURE ERASE is done, it can ensure both the hard disk and the SSD data are purged, and this could be a useful drive for the enterprise. Done right, the HDD could run at 7200 RPM or even 5400 RPM while the SSD handles the landing zone for the writes and slowly moves them to the disk.

      SSHD drives combined the worst aspects of SSDs with the worst aspects of HDDs. Once the very small flash area wore out, the entire drive stopped accepting writes, as all writes went to the flash first. Because the flash area was so small, it usually wore out pretty quickly.

    • by swilver ( 617741 )

      Problem is probably to keep those heads aligned on the same tracks. You wouldn't want the second set of heads to write tracks slightly differently than the other set of heads.

  • That's on a new, un-fragmented drive reading data on the outer (fastest) part of the disk. In real life, when the drive is at half capacity and files are smaller and fragmented, I would expect this to drop by 50% or more. Sure, disks are still great for storing movies and audio libraries, archives, images & backups, large downloads, and anything else extremely big or where file access speed is not as important. There will be a need for both SSD and HDD until the price and capacity for both are near the same level.
    • by swilver ( 617741 )

      How full a drive is has nothing to do with fragmentation. Write an empty drive to 95% full and almost nothing will be fragmented. What causes fragmentation is when a very full drive is actively used all the time (log files, adding/removing new files).

      It also heavily depends on the filesystem used, as some filesystems will create more fragmentation than others (copy on write causes fragmentation more easily when it could be avoided by in place changes, log structured systems tend to write everything togeth

    • These days files rarely get fragmented. The disk might get slightly fragmented, with empty parts in between. The reason is that modern file systems always try to find a region that is big enough to hold the whole file.

      Of course there are exceptions, e.g. if the size is not known beforehand, such as a log file.

  • Could I just have the "world's most reliable HDD" for my server at a reasonable price? Oh, wait. That's hard to prove? Hmm. Bummer.

  • It's just a bleak start of what is to come.

    We are about to get drives with two heads per surface and R/W capability on multiple heads across one actuator.
    This will bring not just awesome transfer speeds and access times but also redundancy.

    • Can't add much mechanical complexity without eliminating the price advantage over solid state, which is so much faster.
  • Genius at work. I'll abbreviate the name as m.2
  • If you look at the drive's specs, you will note that it is SAS only. I suspect this is because the drive appears to the host as two independent disks. Each disk has half the platters.

    So the performance is the same as two side-by-side HDDs that are half the capacity.

    This is not useless, but less than the specs imply. It also opens the possibility of a "stupid configuration" where the two halves are mirrored into a raid set.

  • I worked on a dual-actuator drive at Conner Peripherals called "Chinook" [wikipedia.org] (after the twin-rotor helicopter [boeing.com] of the same name). Conner was purchased by Seagate in '95 or '96 [sfgate.com]. The Chinook was different from the new Seagate product -- With the new Seagate product, each actuator can read/write only half the available surfaces; with the Conner Chinook, each actuator had full access to all surfaces. That made a "read-after-write" operation take only 1/2 a revolution instead of a full rev. A big time savings th
