Is the Time Finally Right For Hybrid Hard Drives?

a_hanso writes "Hard drives that combine a traditional spinning platter for mass storage with solid-state flash memory for frequently accessed data have always been an interesting concept. They may be slower than SSDs, but not by much, and they are a lot cheaper gigabyte-for-gigabyte. CNET's Harry McCracken speculates on how soon such drives may become mainstream: 'So why would the new Momentus be more of a mainstream hit than its predecessor? Seagate says that it's 70 percent faster than its earlier hybrid drive and three times quicker than a garden-variety, non-hybrid disk. Its benchmarks for cold boots and application launches show the new drive to be just a few seconds slower than an SSD. Or, in some cases, a few seconds faster. In the end, hybrid drives are compromises, neither as cheap as ordinary drives — you can get a conventional 750GB Momentus for about $150 — nor as fast and energy-efficient as SSDs.'"
  • First hand (Score:4, Informative)

    by jamesh ( 87723 ) on Wednesday November 30, 2011 @04:26AM (#38211824)

    I have one. It works great, but "chirps" occasionally, which I think is the sound of the motor spinning down. None of the firmware updates I've applied that claim to fix the chirp actually fix it.

    It runs much faster than my previous drive, but I'm also comparing a 7200RPM drive to a 5400RPM drive, so the speed increase isn't just because it's a hybrid.

    I guess the advantage of the SSD cache is that if you use it in a circular fashion you can avoid a lot of the 'read-erase-rewrite' cycles... but I don't know for sure how the cache is organised.

  • by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Wednesday November 30, 2011 @04:32AM (#38211848) Homepage

    There are only two things a drive cache can help with significantly. The first is rebooting: memory is empty, and you can get it primed with the most common parts of the OS faster if most of that data can be read from the SSD. Optimizers that reorder the boot files will get you much of the same benefit if they can be used.

    The second is disk cache used for writes, which is extremely helpful because it allows write combining and elevator sorting to improve random write workloads, making them closer to sequential. However, you have to be careful, because things sitting in those caches can be lost if the power fails. That can be a corruption issue for things that expect writes to really be on disk, such as databases. Putting some flash in place to cache those writes, with a supercapacitor to ensure all pending writes complete on shutdown, is a reasonable replacement for the classic approach: a larger battery-backed power source that retains the cache across power loss or similar temporary failures. The risk with the old way is that the server stays off-line long enough for the battery to discharge. Hybrid drives should be able to flush to SSD on their capacitor buffer alone, so you're consistent with the filesystem state only a moment after the server powers down.
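
    If you want to see what your own drive is doing with its volatile write cache, hdparm can report and toggle it on Linux; /dev/sda below is just an example device name, so substitute your own:

    #hdparm -W /dev/sda
    #hdparm -W0 /dev/sda

    The first reports whether write caching is on; the second turns it off, trading speed for safety, which is what the paranoid database setups do when there's no battery or flash backing the cache.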

    As for why read caching doesn't normally help: the operating system's filesystem cache is giant compared to any size the drive's cache might be. When OS memory is measured in gigabytes and the drive's cache in megabytes, you'll almost always be in a double-buffer situation: whatever is in the drive's cache will also still be in the OS's cache, and therefore never be requested. The only way you're likely to get any real benefit from the drive cache is if the drive does read-ahead. Then it might return only the blocks requested to the OS, while caching the ones it happened to pass over anyway. If you then ask for those next, you get them at cache speeds. On Linux at least, even this is a futile effort; the OS read-ahead is smarter than any of the drive logic, and it may very well ask for things in that order in the first place.
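
    For the curious, the Linux read-ahead setting is easy to inspect and tweak with blockdev; again, /dev/sda is only a placeholder:

    #blockdev --getra /dev/sda
    #blockdev --setra 4096 /dev/sda

    The value is in 512-byte sectors, so 4096 means 2MB of read-ahead.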

    One relevant number for improving read speeds is command queue depth. You can get better throughput by ordering reads better, so they seek around the mechanical drive less. There's a latency issue here, though: requests at the far edge of the platter can starve if the queue gets too big, so excessive tuning in that direction isn't useful either.
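
    On Linux you can see both the drive's negotiated queue depth and the OS-side queue via sysfs, assuming a SATA/SCSI disk showing up as sda:

    #cat /sys/block/sda/device/queue_depth
    #cat /sys/block/sda/queue/nr_requests

    NCQ-capable SATA drives typically report 31 or 32 for the first; the second is how many requests the I/O scheduler will hold for sorting.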

  • by UnknownSoldier ( 67820 ) on Wednesday November 30, 2011 @04:33AM (#38211850)

    While I love the speed of SSDs (and prices are hitting the "magic" $1/GB), you're forgetting the HUGE elephant in the room with SSDs that almost no one seems to notice...

    SSDs have a TERRIBLE failure rate.

    http://www.codinghorror.com/blog/2011/05/the-hot-crazy-solid-state-drive-scale.html [codinghorror.com]

    He purchased eight SSDs over the last two years... and all of them failed. The tale of the tape is frankly a little terrifying:

            Super Talent 32 GB SSD, failed after 137 days
            OCZ Vertex 1 250 GB SSD, failed after 512 days
            G.Skill 64 GB SSD, failed after 251 days
            G.Skill 64 GB SSD, failed after 276 days
            Crucial 64 GB SSD, failed after 350 days
            OCZ Agility 60 GB SSD, failed after 72 days
            Intel X25-M 80 GB SSD, failed after 15 days
            Intel X25-M 80 GB SSD, failed after 206 days

    and ...

    http://translate.googleusercontent.com/translate_c?hl=en&ie=UTF8&prev=_t&rurl=translate.google.com&sl=fr&tl=en&twu=1&u=http://www.hardware.fr/articles/843-7/ssd.html&usg=ALkJrhjecZZv1F6d_oT-dr41FPFYOIkVCw [googleusercontent.com]

    - Intel: 0.1% (vs. 0.3% previously)
    - Crucial: 0.8% (vs. 1.9%)
    - Corsair: 2.9% (vs. 2.7%)
    - OCZ: 4.2% (vs. 3.5%)

    Intel confirms its first place with the most impressive return rate. It is followed by Crucial, which improves its rate significantly, though it should be said that the earlier figure was heavily weighed down by the M225; the C300 only reaches 1%. Failure-return rates are up for Corsair and especially for OCZ, which confirms its last place by a wide margin. Eight SSDs are above the 5% mark:

    - 9.14% OCZ Vertex 2 240 GB
    - 8.61% OCZ Agility 2 120 GB
    - 7.27% OCZ Agility 2 40 GB
    - 6.20% OCZ Agility 2 60 GB
    - 5.83% Corsair Force 80 GB
    - 5.31% OCZ Agility 2 90 GB
    - 5.31% OCZ Vertex 2 100 GB
    - 5.04% OCZ Agility 2 3.5" 120 GB

    At the _current_ price point & abysmal failure rate, SSDs sadly have a ways to go before they catch on with the mainstream.

  • This Drive is CRAP (Score:5, Informative)

    by rdebath ( 884132 ) on Wednesday November 30, 2011 @05:13AM (#38211998)

    This Drive is CRAP
    ASSUMING that it still only does read caching.

    I bought one of the Gen-1 drives and was very underwhelmed. I wanted write caching; 4GB of non-volatile memory with the performance of SLC flash could allow Windows (or whatever) to write to the drive flat out for many seconds without a single choke due to the drive.

    In addition, 4GB of write-back cache is enough to give a significant performance boost for continuous random writes across the whole drive, and even more so across a small extent such as a database or a .NET native image cache.

    But for reading it's insignificant compared to the 3-16GB of (so much faster) main memory that most systems contain, except at boot time when, unlike RAM, it will already contain some data. The problem is that it will contain the most recently read data, whereas the boot files can quite reasonably be described as the least recently read.

    So in the real world it's useless for anything except a machine that's rebooted every five minutes ...

  • by evilviper ( 135110 ) on Wednesday November 30, 2011 @05:20AM (#38212022) Journal

    They may be slower than SSDs, but not by much

    That's horribly incorrect. I liked the sound of hybrid drives as well when I saw the price... a 500GB laptop hard drive with 4GB of flash for $150 should be awesome... But I, not being an idiot, did some research, and sure enough, the reviews say it's not remotely comparable to a real SSD.

    e.g. http://www.storagereview.com/seagate_momentus_xt_review [storagereview.com]

    It's faster than a drive without such a cache, and it might be a good option for a laptop, but even there I'd say a 32GB SD card would be cheaper, and will work wonders on FreeBSD with ZFS configured for L2ARC...
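
    For anyone tempted by the ZFS route: adding a flash device as L2ARC is a one-liner, where "tank" and the device name below are just examples standing in for your own pool and SD/SSD device:

    #zpool add tank cache /dev/ada1
    #zpool iostat -v tank

    The second command lists the cache device with its own usage stats, so you can watch it warm up.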

    I have no particular interest in what anyone buys, but the comparison to real SSDs is massively dishonest.

  • by jimicus ( 737525 ) on Wednesday November 30, 2011 @05:20AM (#38212026)

    The cache on a hard disk is often used as a write cache: store incoming data in cache, and leave actually committing it to disk until a convenient opportunity arises.

    32MB of cache doesn't take that long to flush. 1GB, OTOH...
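
    Rough arithmetic: at the 100MB/s or so a modern drive can stream sequentially, 32MB flushes in about a third of a second, while 1GB of write-combined data is ten seconds or more, and far longer if it degenerates into random seeks.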

  • by abigsmurf ( 919188 ) on Wednesday November 30, 2011 @05:24AM (#38212044)
    Yep, had an OCZ drive fail after 3 months. It's the first time I've had a drive that wasn't DOA fail before at least 2-3 years of usage.

    It wasn't even one of those gradual failures you tend to get with HDDs, where faults start appearing for a while before the drive dies, giving you a chance to get the data off it and order a replacement. One day it was working normally; the next day it wasn't even recognised by the BIOS.

    Just to add insult to injury, OCZ have an awful returns policy: I had to pay to send it by recorded delivery to the Netherlands, which cost me £20. It's going to be a few years before I take the plunge again, and I won't be buying OCZ. Paying premium prices for something so unreliable isn't on, especially given how much of an impact a sudden drive failure has on just about every type of user.
  • by cgenman ( 325138 ) on Wednesday November 30, 2011 @05:38AM (#38212090) Homepage

    MTBF is a complete BS statistic. Take the first week of a hard drive's life, make a linear extrapolation of that over the next 1000 years, and post a marketing statistic that is grossly divergent from reality. The Western Digital drive listed in the thread below has an MTBF of 171 years. Anyone working in a real environment will confirm that is just ludicrous. What you're measuring is that for the first week of a hard drive's life, it behaves like it would live for 171 years. After the first week, it's all downhill. Back in the real world I kill laptop drives at least every 2 years, and desktop drives every 5.

    This makes MTBF an OK but not great cross-device comparison statistic, under the assumption that all hard drives age in roughly the same way. SSDs really don't age like hard drives. They're less prone to total catastrophic failure; they lose a little capacity on a regular basis; they don't have spindle bearings or dust to worry about. They will age and have electrical problems, but nowhere near the mechanical problems of hard drives, and they will age in a more linear fashion. A 50-year MTBF for an SSD is actually a plausibly useful data point, whereas a 200-year MTBF for a hard disk is a BPOS.
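
    To put rough numbers on it: 171 years is about 1.5 million hours (171 x 8,766), which is the figure vendors actually quote; read as an annualized failure rate that's roughly 8,766 / 1,500,000, or about 0.6% a year, which is exactly why it looks nothing like field experience.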

  • by Rockoon ( 1252108 ) on Wednesday November 30, 2011 @05:59AM (#38212170)

    .... when they consistently surpass 1 or 2 million write-cycles per block ............
    (P.S. Please don't lecture me about wear-leveling, etc. I know how they work.)

    The last flash process-size reduction took away ~39% of the overall erase cycles but added ~64% more capacity per mm^2.

    In your view the latest-generation SSDs are even worse than the previous generation because they only have 61% of the erase cycles, right?

    If you really knew how SSDs worked, you wouldn't be talking about SSDs with millions of erase cycles per block. I mean, what the fuck...

  • by jimicus ( 737525 ) on Wednesday November 30, 2011 @06:13AM (#38212206)

    Would add far too much cost to the hard drive, but this is essentially what server-class hardware RAID controllers do. The battery doesn't power the hard disk, it just keeps the cache running.

  • by Shivetya ( 243324 ) on Wednesday November 30, 2011 @06:35AM (#38212284) Homepage Journal

    Intel's Z68 chipset already does this, and their upcoming Ivy Bridge platform will take it even further. Both allow for the use of a small SSD as a cache in front of a larger traditional hard drive.

    Per the Wikipedia page on their chipsets, the Z68 also added support for transparently caching hard disk data onto solid-state drives (up to 64GB), a technology called Smart Response Technology.

    SRT link is http://en.wikipedia.org/wiki/Smart_Response_Technology [wikipedia.org]

  • by a_hanso ( 1891616 ) on Wednesday November 30, 2011 @07:16AM (#38212442) Journal
    OP here. To be honest, the original submission did not contain the "but not by much" bit or the sentences following the one about the CNET article.
  • by Joce640k ( 829181 ) on Wednesday November 30, 2011 @07:33AM (#38212522) Homepage

    LOL! Where did you get that from?

    If I bought a million hard drives I'd expect several of them to not even power on. By your definition I'm sure the MTBF of all consumer hardware would be zero.

    PS: MTBF means "Mean time between failures" not "Mean time before failure".

  • by olau ( 314197 ) on Wednesday November 30, 2011 @07:56AM (#38212634) Homepage

    That's because it doesn't do anything good for hard drives. There was a paper about it some years ago, I'm too lazy to google it up, but even 32 MB is too much (I think the sweet spot was around 2 MB).

    If you think about it, it's not surprising: what good would it do that the disk cache in main memory, managed by the OS, doesn't already do?

    A large on-disk cache would only make sense if it were combined with a battery or something so you don't lose data on crashes.

  • by LoRdTAW ( 99712 ) on Wednesday November 30, 2011 @09:49AM (#38213304)

    It sucks, but there's an easy (if time-consuming) fix that leaves the drive contents intact:
    Boot a live Linux distro, hook a USB HDD to the system and mount it. The USB HDD can even be formatted NTFS if the live distro has FUSE installed along with the ntfs-3g driver; most live distros already have them or will let you install them. Assuming your SSD is the primary or only disk in your system:

    (You need to be root or use sudo, on most live distros you simply type "su root" or "sudo -s")

    #dd if=/dev/sda of=/path/to/backup/disk/ssd_backup.img conv=sync,noerror bs=1024k

    /dev/sda is the first disk in the system. You may have to run ls /dev/sd* to get a list of disks and partitions. Note that sda is the entire disk, block for block, while sda1 is a partition, just like sda2, sda3, etc. If you have more than one disk and don't know which letter it is, simply type fdisk -lu /dev/sdX (X being the letter you want to check) and it will dump the drive info.

    It may take 5+ hours, assuming a 512GB SSD and an optimal USB transfer rate of 25MB/s to the backup disk (in my experience the average for USB 2.0 write speeds). Faster backup disks and smaller-capacity SSDs will back up much faster.
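
    (That figure is just 512GB, roughly 524,000MB, divided by 25MB/s: about 21,000 seconds, or a shade under six hours.)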

    Once complete, you now have a bit-for-bit, block-level copy of the SSD. It doesn't matter what OS you had on it, how many partitions there were or what file system you used; the boot sector, boot loaders, partitions and file systems are all just bits and get copied regardless. If you're very paranoid and want to wait hours more, then run a comparison of the disk against the disk image file to be sure they are an exact copy (I never have and never will).
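
    If you do want to verify, cmp is the simple way; it reads both copies end to end and prints nothing if they match (conv=sync may have padded the last block, in which case expect a harmless EOF message right at the end):

    #cmp /dev/sda /path/to/backup/disk/ssd_backup.img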

    Now reboot and upgrade the firmware the way the manufacturer tells you. So now your data is wiped out; big stinkin' deal. Fire up the live Linux distro again, attach your backup disk, and enter the following command:

    #dd if=/path/to/backup/disk/ssd_backup.img of=/dev/sda conv=notrunc bs=1024k

    This writes the image file back to the SSD, and if all goes well (it has never failed me yet, and I have done this dozens of times on various systems) you now have your upgraded firmware with its original contents fully intact.

    You can even mount one or more of the partitions contained within the disk image (under Linux, of course) if you do a bit of homework (search Google for mounting dd images) or just go here: http://darkdust.net/writings/diskimagesminihowto [darkdust.net]. That tutorial is how I started playing with dd images.
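
    The short version of the homework: find the partition's start sector with fdisk, multiply by the sector size (usually 512 bytes), and hand that to mount as an offset. The 2048 below is only the common default start sector; read the real value from your own image:

    #fdisk -lu ssd_backup.img
    #mount -o loop,ro,offset=$((2048*512)) ssd_backup.img /mnt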

    You can also move the contents of a smaller, cramped disk to a larger drive. Works for Windows/NTFS too! You simply dd the entire smaller drive to the new drive (works best when both drives are hooked up via SATA), then use gparted or some other parted-based GUI to grow the file system on the new drive. Shut down, remove the Linux CD/thumb-drive, remove the old disk and move the SATA cable from the old disk to the new one. Boot your PC, and if you're using Windows (2000, XP, Vista, 7) it will run check disk to verify the volume (DON'T SKIP IT!) and reboot. Once it reboots into Windows, open up Explorer and see that you now magically have all that shiny new space without formatting, reinstalling, adding new drive letters or mounting drives under folders, etc. It's transparent!

    Example command:

    #dd if=/dev/sda of=/dev/sdb conv=sync,noerror bs=1024k

    sda is the small disk and sdb is the new large disk. I have done that trick multiple times as well with a 100% success rate. My friends were amazed.
