
New Seagate Shingled Hard Drive Teardown 93

New submitter Peter Desnoyers writes: Shingled Magnetic Recording (SMR) drives are starting to hit the market, promising larger drives without heroic (and expensive) measures such as helium fill, but at a cost — data can no longer be over-written in place, requiring SSD-like algorithms to handle random writes.

At the USENIX File and Storage Technologies conference in February, researchers from Northeastern University (disclaimer — I'm one of them) dissected shingled drive performance both figuratively and literally, using both micro-benchmarks and a window cut in the drive to uncover the secrets of Seagate's first line of publicly-available SMR drives.

TL;DR: It's a pretty good desktop drive — with write cache enabled (the default for non-server setups) and an intermittent workload it performs quite well, handling bursts of random writes (up to a few tens of GB total) far faster than a conventional drive — but only if it has long powered-on idle periods for garbage collection. Reads and large writes run at about the same speed as on a conventional drive, and at $280 it costs less than a pair of decent 4TB drives. For heavily-loaded server applications, though, you might want to wait for the next generation. Here are a couple videos (in 16x slow motion) showing the drive in action — sequential read after deliberately fragmenting the drive, and a few thousand random writes.
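The behavior described in the summary (random writes absorbed into a persistent cache, then cleaned up during powered-on idle time) can be sketched as a toy drive-managed translation layer. This is purely illustrative and not Seagate's actual firmware; the band size, class, and method names are all invented.

```python
# Toy model of a drive-managed SMR translation layer: small random
# writes land in a "media cache", and idle-time garbage collection
# merges them back into shingled bands, which can only be rewritten
# as a whole (no in-place overwrite inside a band).

BAND_SIZE = 4  # sectors per shingled band (tiny, for illustration)

class SMRSim:
    def __init__(self, num_bands):
        self.bands = [[None] * BAND_SIZE for _ in range(num_bands)]
        self.media_cache = {}          # lba -> data; absorbs random writes

    def write(self, lba, data):
        # Random writes cannot overwrite in place inside a band,
        # so they are staged in the cache instead.
        self.media_cache[lba] = data

    def read(self, lba):
        if lba in self.media_cache:    # freshest copy wins
            return self.media_cache[lba]
        band, off = divmod(lba, BAND_SIZE)
        return self.bands[band][off]

    def gc(self):
        # Idle-time cleaning: rewrite each dirty band in full,
        # merging cached sectors with the band's old contents.
        dirty = {lba // BAND_SIZE for lba in self.media_cache}
        for b in dirty:
            merged = [self.media_cache.pop(b * BAND_SIZE + i,
                                           self.bands[b][i])
                      for i in range(BAND_SIZE)]
            self.bands[b] = merged     # whole-band sequential rewrite
```

Until `gc()` runs, every random write is just a cache insert, which is why bursts are fast; the cost is deferred to the whole-band rewrites during idle periods.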
  • Sounds like an interesting idea to increase capacity, but the downsides are huge. This really needs filesystem level optimisations to get any performance out of it. For rarely modified bulk storage, this should be fine.
    • by cheater512 ( 783349 ) <nick@nickstallman.net> on Monday March 02, 2015 @06:58PM (#49168039) Homepage

      Oh look here are some SSD optimised file systems already. Incidentally they apply to these drives rather well.

      • ... except in some cases where these filesystems make some wrong assumptions about the wear leveling mechanism below and perform worse than a generic, disk-oriented fs.
    • Yeah. Also, this would be way more prone to data loss when there's a sudden power cut than a more traditional hard drive.
    • by Yomers ( 863527 )

      If they add ~32 gigs of SSD cache for delayed writes (with faster reads as a bonus, and reliability in case of power failures), it'll be an overall winner.

      • by Cramer ( 69040 )

        Because that's worked so well for the Seagate hybrid drives. (Hint: it doesn't.)

    • Re: (Score:3, Insightful)

      The main downside is the disk becoming much more algorithmically complex (read: bug-prone) for a less-than-radical improvement in performance.
    • by Peter Desnoyers ( 11115 ) on Monday March 02, 2015 @08:27PM (#49168559) Homepage

      Drive performance is kind of like airplane legroom - people gripe about it, but in the end they ignore it and buy the cheap ticket.

      Shingled drives aren't better - they're bigger, and that's what people pay for. WD's 10TB helium drive is shingled, and I would guess that every drive over 10TB will be shingled for the foreseeable future. By the time HAMR and BPM come out, SSDs will probably have killed off the high-performance drive market, so those technologies will probably be released as capacity-optimized shingled drives, too.

    • by mlts ( 1038732 )

      If this technology becomes commonplace, I can see it used as a third tier of storage between normal HDDs and tape: either as a live landing zone until the data gets copied to tape, or in concert with a higher-tier landing zone, where the data is written onto the platters already deduplicated and aimed at staying there for long-term storage.

      Even operating systems are starting to become storage tier aware. Windows Server 2012R2 can autotier between SSD and HDD, and Windows Server 10 has improved

  • This seems like it would be beautiful for a backup server that backs up every few weeks.

    • It also sounds good for a video server. I have one attached to my PC-based DVR, with playback clients in other rooms. For 99.9% or more of the data, it's very large files (100s of MB to multiple GB) that are written once and read many times until deleted.

      However, since this server is also a backup server, it's a RAID array. I wonder if this shingled format has any effect on RAID performance. A lot of "green" drives do not work well in this RAID setup, causing stuttering video playback when they continual

      • I would think so, at least for drive-managed devices: since writing to a virtual sector doesn't necessarily write to that physical sector, seek and read times could be inconsistent, throwing the RAID read and write algorithms into disarray. With host-aware or host-managed drives, you could probably tune for better performance, matching logs in the FS and stripes on the RAID to fit evenly into the shingled tracks.
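The tuning idea in the comment above (fitting FS logs and RAID stripes evenly into shingled tracks) comes down to aligning writes to zone boundaries, since a host-managed SMR zone must be written sequentially from its write pointer. A hypothetical sketch, with an invented zone size and helper names:

```python
# Illustrative zone-alignment helpers for a host-managed SMR drive.
# The zone size is made up for the example; real drives report their
# zone layout to the host.

ZONE_SECTORS = 524288  # e.g. a 256 MiB zone with 512-byte sectors

def zone_of(lba):
    """Which zone an LBA falls in."""
    return lba // ZONE_SECTORS

def align_down(lba):
    """Round an LBA down to the start of its zone, so a full-stripe
    write can begin at the zone's write pointer."""
    return lba - (lba % ZONE_SECTORS)

def crosses_zone(lba, length):
    """True if a write of `length` sectors at `lba` spans a zone
    boundary and must be split into per-zone sequential writes."""
    return zone_of(lba) != zone_of(lba + length - 1)
```

A RAID or filesystem layer built on these rules would pick stripe and log-segment sizes that divide the zone size evenly, so no write ever straddles a boundary.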
  • Drive needles (Score:3, Interesting)

    by phorm ( 591458 ) on Monday March 02, 2015 @07:04PM (#49168083) Journal

    One thing I've tended to wonder: why have a single read/write needle on conventional drives (especially in multi-platter situations)? Why not have two needles, one on either side, so they can't touch?
    Alternatively, why not a "track" that runs across the drive with shuttles on either side to perform the reads/writes? You could have two perpendicular tracks to increase performance.

    • I had the same idea long ago. It turned out that the platters and the spindle are a small part of the disk's cost, so instead of such a complex setup you can just buy twice as many disks and get the performance you want.
    • Re:Drive needles (Score:4, Informative)

      by vadim_t ( 324782 ) on Monday March 02, 2015 @07:18PM (#49168149) Homepage

      More than a head per side? It's been attempted, and it turned out it's not really worth it. It's a lot of extra complication for not that much benefit. Heads are expensive and generate heat, so it works out to close to 2X the price anyway, plus an increased chance of failure. Easier and safer to just add another drive.

      These days there are SSDs too.

    • by mlts ( 1038732 )

      About 10-15 years ago, some drives had two read/write stacks of heads, each independent from the other. This was killed due to people wanting cheap drives.

    • by AaronW ( 33736 )

      I remember seeing a Conner hard drive like that many years ago. Tom's Hardware [tomshardware.com] discusses it.

      • by phorm ( 591458 )

        Conner. There's a name I haven't heard in awhile.

        It's interesting that they also tried the sweeping-servo read/write head design (similar to a CD-ROM drive), once upon a time.

  • This reminds me of the original 'Log Filesystem' research in the 80s, back when drive geometry was known to the OS, and the OS could take steps to optimize for it.

    The Log Filesystem concept was to write all data sequentially to disk, and update metadata during idle times.

    The basic research influenced a number of filesystems, such as NetApp's WAFL, Sun's ZFS, Linux's JFFS2, etc.

    Interesting to see the concept implemented 'in hardware'.
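The log-structured approach described above can be sketched as a toy append-only store: every write goes to the tail of a sequential log, an in-memory map tracks where the newest copy of each block lives, and idle-time cleaning drops obsolete versions. All names here are illustrative, not any particular filesystem's API.

```python
# Toy append-only log in the spirit of the original LFS research:
# writes never overwrite in place, which maps naturally onto a
# shingled drive's sequential-write constraint.

class LogFS:
    def __init__(self):
        self.log = []       # the sequential on-disk log
        self.latest = {}    # block id -> log position of newest copy

    def write(self, block_id, data):
        self.latest[block_id] = len(self.log)  # remember newest copy
        self.log.append((block_id, data))      # always append

    def read(self, block_id):
        return self.log[self.latest[block_id]][1]

    def clean(self):
        # Segment cleaning (done during idle time): copy only live
        # blocks forward and drop obsolete versions.
        live = [(b, d) for i, (b, d) in enumerate(self.log)
                if self.latest[b] == i]
        self.log = live
        self.latest = {b: i for i, (b, _) in enumerate(live)}
```

A drive-managed SMR firmware "implements this in hardware" in the sense that the translation layer, not the OS, maintains the log and does the cleaning.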

  • by Anonymous Coward

    Why is the camera a potato?

  • The summary says "Reads and large writes run at about the same speed as on a conventional drive, and at $280 it costs less than a pair of decent 4TB drives." One of the 7 links in the summary mentions a 5TB model, and 5TB for the price of two 4TB drives doesn't sound that great.

  • Nope..... (Score:5, Insightful)

    by wbr1 ( 2538558 ) on Monday March 02, 2015 @07:39PM (#49168253)
    From TFA

    The real question is whether or not Seagate can maintain similar full drive performance compared to a non-SMR drive.

    No. The real question is longevity. Per Backblaze and my own anecdotal experience, Seagate drives already have a higher failure rate. Looking at this, any firmware bug or flaw could result in the loss of an entire 'band' if it is written incorrectly.

    I understand that in any environment backups are crucial, but I live in the real world: a world where small and medium-sized businesses (for good or ill) neglect IT until it bites them. At least with regular drives recovery is often possible with block-for-block copies, and barring that, a clean room has a good chance of recovering crucial data.

    If a user has a performance need, I can suggest an SSD or SSD+HDD config with appropriate redundancy and backups. For pure space, large HDDs in an appropriate RAID or ZFS setup work fine. Per TFS, this is not ready for heavily loaded server configs yet, and I do not see a need in residential, small-business, or workstation use where other solutions are far better. To me this is currently a product in search of a problem, and one that is risky to data to boot.

    • While I pragmatically agree, it is neither Seagate's fault nor its burden to take care of companies that don't treat their data as important. In the same way, it's not a clothes-washer company's fault if you never change the lint trap and it catches on fire (the cause of some 15K fires a year in the US).

      There are clear, defined, industry-standard ways to use a product, and if you refuse to do so because you are cheap and lazy, the ramifications are solely your own.

      Can't afford to replace large hard drives? Get smaller o
      • by wbr1 ( 2538558 )
        I never said it was Seagate's fault or burden. A user's actions are their own. That said, Seagate does have a much higher failure rate, and this tech could greatly complicate data recovery in a failure scenario. Given this, I cannot in good conscience recommend it to my customers.
  • Those two videos have less pep than an empty soda can.
  • 1 minute and 46 seconds of my life back. Please?
  • Impatience (Score:4, Funny)

    by Anonymous Coward on Tuesday March 03, 2015 @03:42AM (#49169855)

    How impatient must one be to tear down a Seagate hard drive before it breaks down?

  • data can no longer be over-written in place, requiring SSD-like algorithms to handle random writes.

    Good, now when my clients get hit by ransomware there is still hope that the "over-written" file can be recovered.
