Data Storage Hardware

Are SSD Accelerators Any Good? 331

Posted by Soulskill
from the newton's-second-law-objects dept.
MrSeb writes "When solid-state drives first broke into the consumer market, there were those who predicted the new storage format would supplant hard drives in a matter of years, thanks to radically improved performance. In reality, the shift from hard drives (HDDs) to SSDs has thus far been confined to the upper end of the PC market. For cost-conscious buyers and OEMs, the higher performance they offer is still too expensive and the total capacity is insufficient. SSD cache drives have emerged as a means of addressing this situation. They are small, typically containing between 20 and 60GB of NAND flash, and are paired with a standard hard drive. Once installed, drivers monitor which applications and files are accessed most often, then cache those files on the SSD. It can take the software 1-2 runs to start caching data, but once this process is complete, future access and boot times are significantly enhanced. This article compares the effect of SSD cache solutions — Intel Smart Response Technology and Nvelo Dataplex — on the performance of a VelociRaptor and a slow WD Caviar drive. The results are surprisingly positive."
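The monitor-then-promote behavior described in the summary can be sketched as a tiny frequency-based cache. This is a hypothetical illustration, not Intel's or Nvelo's actual algorithm; the class name, promotion threshold, and eviction policy are all assumptions:

```python
from collections import Counter

class FrequencyCache:
    """Toy model of an SSD cache layer: promote the most frequently
    accessed files to fast storage once they cross a threshold."""

    def __init__(self, capacity_files, promote_after=2):
        self.capacity = capacity_files
        self.promote_after = promote_after  # mirrors the "1-2 runs" warm-up
        self.counts = Counter()
        self.cached = set()

    def access(self, path):
        self.counts[path] += 1
        if self.counts[path] >= self.promote_after and path not in self.cached:
            if len(self.cached) >= self.capacity:
                # evict the least frequently used cached file
                victim = min(self.cached, key=lambda p: self.counts[p])
                self.cached.discard(victim)
            self.cached.add(path)
        return "ssd" if path in self.cached else "hdd"

cache = FrequencyCache(capacity_files=2)
cache.access("boot.ini")          # first touch: served from the HDD
print(cache.access("boot.ini"))   # second touch: promoted, prints "ssd"
```

Real products track access patterns at the block level and persist the hot set across reboots, but the promote-on-repeated-access idea is the same.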
This discussion has been archived. No new comments can be posted.

Are SSD Accelerators Any Good?

  • bcache (Score:5, Informative)

    by Anonymous Coward on Tuesday August 07, 2012 @08:15PM (#40912057)

    For Linux users: http://bcache.evilpiepirate.org/

    Lets you use any SSD as a cache in front of another filesystem.

  • Re:bcache (Score:5, Informative)

    by drinkypoo (153816) <martin.espinoza@gmail.com> on Tuesday August 07, 2012 @08:43PM (#40912463) Homepage Journal

    Lets you use any SSD as a cache in front of another filesystem.

    It would be niftier if it would let you use it as a block cache in front of any filesystem, instead of just one located on a specially-prepared partition. dm-cache will do this but isn't up to date.

  • by pla (258480) on Tuesday August 07, 2012 @08:48PM (#40912541) Journal
    To me it is not worth it to watch your os boot faster.

    First of all, putting the OS on a disk by itself doesn't only mean that Windows runs faster - The OS reads and writes to its files on a near continuous basis. For years before SSDs, we've known that simply getting that activity segregated onto its own disk, away from "real" file activity, gives a decent performance boost across the board; moving it to an ultra-fast random-access media helps even more (and even if you don't care about boot time, how about "responsiveness"? Every time Windows needs to wait for some stupid little icon to load, you need to wait for Windows to wait for some stupid little icon to load).

    Second, SSDs have gotten a lot bigger and a lot cheaper. You no longer need to decide between spending a fortune or segregating your apps out; a $60 SSD will hold the OS and every app you could ever possibly run, with plenty of room to spare. Yes, you'll still want that second big-slow-and-cheap HDD for general purpose storage, but you haven't needed to carefully weigh "on which disk should I install this program" for at least a year.


    Flash ram is not a permanent solution and will die due to the limited number of writes.

    And you think a drive with actual moving parts will live forever?


    Make no mistake, SSDs have their flaws, and cost definitely still counts as one of them. But once you really use a system set up with SSD system / HDD data, you'll never even consider going back. And mere boot time has nothing to do with it.
  • by cdrudge (68377) on Tuesday August 07, 2012 @08:50PM (#40912565) Homepage

What also is not addressed in the article is the reliability of the SSDs. Flash ram is not a permanent solution and will die due to the limited number of writes. If you use mysql or MS access, or run low on space and use XP, that thing will be dead in a matter of months. It can only handle so much paging and writing before it dies. Tricks in the firmware move the written bits to random places in memory to prevent this, but as the drive fills up the paging needs to keep hitting the same memory addresses.

There are a variety of different ongoing tests to look at how long drives actually last [xtremesystems.org]. Looking at a fairly standard older Intel 320 40GB drive, it went 190TB written before the MWI threshold was reached, and continued on until 685TB. That means it completely rewrote the drive 17,000+ times.
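As a sanity check on those endurance numbers (a sketch; the 20 GB/day write load is an assumed figure, not from the test):

```python
# Back-of-the-envelope endurance math for the Intel 320 40GB figures above.
capacity_gb = 40
written_before_mwi_tb = 190      # media wearout indicator threshold reached
written_at_death_tb = 685        # total data written before the drive failed

full_rewrites = written_at_death_tb * 1000 / capacity_gb
print(f"{full_rewrites:.0f} full-drive rewrites")   # 17125

# At a heavy (assumed) 20 GB/day of writes, years to reach the failure point:
years = written_at_death_tb * 1000 / 20 / 365
print(f"~{years:.0f} years at 20 GB/day")           # ~94 years
```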

No, it won't last forever. And it's not ideally suited for every single industry and use. But typical users are more likely to need a larger drive or otherwise upgrade than to wear out the drive.

  • by Nemilar (173603) on Tuesday August 07, 2012 @08:54PM (#40912619) Homepage

It seems that SSD accelerators can be hit or miss. If you take a look at http://www.theregister.co.uk/2012/07/12/velobit_demartek/ [theregister.co.uk] for example, some of these products don't seem to do anything at all, while some seem to actually work.

Like any young industry, it'll probably take a while for the field to shake out until only a few decent contenders remain.

  • Surprisingly why? (Score:4, Informative)

    by Sycraft-fu (314770) on Tuesday August 07, 2012 @08:56PM (#40912647)

    It would be surprising if it weren't the case. We've been doing the same thing with memory for years. Our CPUs need memory that can perform in the realm of 100GB/sec or more with extremely low latency, but we can't deliver that with DRAM. So we cache. When you have multiple levels of proper caching you can get like 95%+ of the theoretical performance you'd get having all the RAM be the faster cache, but at a fraction of the price.

    This is just that taken to HDDs. Doesn't scale quite as well but similar idea. Have some high speed SSD for cache and slower HDD for storage and you can get some pretty good performance.
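A rough sketch of why a high hit rate recovers most of the fast tier's performance; the latency figures here are assumed ballpark values, not measurements from the article:

```python
# average access time = hit_rate * fast + (1 - hit_rate) * slow
ssd_latency_ms = 0.1   # assumed: typical SSD random access
hdd_latency_ms = 8.0   # assumed: typical HDD seek + rotational delay

for hit_rate in (0.80, 0.90, 0.95):
    avg = hit_rate * ssd_latency_ms + (1 - hit_rate) * hdd_latency_ms
    print(f"hit rate {hit_rate:.0%}: avg access {avg:.2f} ms")
```

At a 95% hit rate the average access time drops from 8 ms to roughly 0.5 ms, which is why a small cache can deliver most of the benefit of an all-flash setup.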

I love Seagate's little H-HDDs for laptops. I have an SSD in my laptop, but only 256GB. Fine for apps, but I can't hold all my data on there (music, virtual instruments, etc). They are just too pricey to get all the storage I'd need. So I also have an H-HDD (laptop has two drive bays). Its performance is very good, quite above what you'd expect for a laptop drive, but it was only $150 for 750GB instead of $900 for 600GB (the closest I can find in SSDs).

  • Pointless (Score:5, Informative)

    by AlienIntelligence (1184493) on Tuesday August 07, 2012 @09:08PM (#40912793)

SSDs were recently @ $1/Gig. That's when I upgraded everything.

I've seen them as low as 55-65c a gig now. Yeah... gotta love how tech drops in price RIGHT AFTER you decide to adopt.

    Buy a WHOLE SSD drive. Put all the programs you use daily on it.

    120G ~ $70

    That is all.

    FWIW, except for bulk storage, I will NEVER buy a spinning HD again.
    I experienced a RIDICULOUS speed up, going from a 7200rpm drive.

    -AI

  • Re:bcache (Score:5, Informative)

    by antifoidulus (807088) on Tuesday August 07, 2012 @09:19PM (#40912909) Homepage Journal
    Flashcache is a block-level caching algorithm, which means that it will work with any device, but it takes a metric TON of memory as it has to retain cache info for every block on the device. If you have the memory then yeah, you can get some speedup from it, but if you are memory constrained eating up that much memory for the small performance boost isn't worth it.
  • Re:bcache (Score:4, Informative)

    by Anonymous Coward on Tuesday August 07, 2012 @09:19PM (#40912913)

    For Linux users: http://bcache.evilpiepirate.org/

    Lets you use any SSD as a cache in front of another filesystem.

    For ZFS users:
    * read cache: zpool add {pool} cache {device}
    * write cache: zpool add {pool} log {device}

  • by antifoidulus (807088) on Tuesday August 07, 2012 @09:38PM (#40913123) Homepage Journal
The article doesn't mention any other software (mostly OS) requirements for the accelerators, which is a pretty big deal. Basically there are 2 ways to cache:
1. On the file level, which isn't very resource intensive (there aren't nearly as many files as blocks on a disk), but requires that the accelerator be able to read file system metadata (and of course be able to intercept OS calls), which severely restricts which file systems, and really even which operating systems, you can use with the accelerator
or
2. Block-level caching. Much more generic; it can be used with any file system, since the blocks, not any file system metadata, are the only thing used. However, managing all that block information comes at a cost, either in main memory or more expensive hardware. For instance, Flashcache requires about 500 megs of memory to manage a 300GB disk. Depending on your usage this may be acceptable (though is memory really that much cheaper than SSDs nowadays?), but for most it isn't.

    From the article I can assume that they only tested Windows, and that really limits its usefulness.
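A quick sanity check on the Flashcache figure quoted above (a sketch; the 4 KB block size is an assumption):

```python
# Roughly how much metadata per block does ~500 MB of RAM for a
# 300 GB disk imply, assuming one entry per 4 KB block?
disk_bytes = 300 * 10**9
block_size = 4096            # assumed cache block size
metadata_ram_bytes = 500 * 2**20

blocks = disk_bytes // block_size
per_block = metadata_ram_bytes / blocks
print(f"{blocks:,} blocks, ~{per_block:.1f} bytes of metadata each")
```

About 7 bytes per block, which is plausible for a hash entry plus state bits, and it scales linearly with disk size, which is exactly the cost the parent describes.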
  • by lightknight (213164) on Tuesday August 07, 2012 @10:04PM (#40913419) Homepage

Exactly. I have a 240GB SSD for my laptop and desktop's main drives, with oodles of secondary storage (7200 RPM, of course). The difference is magnificent. If you've never used an SSD before, you simply do not understand: Adobe Photoshop CS5 loads in only 3 or 4 seconds. Try doing that on a mechanical hard drive, and it's just PAIN.

  • Re:No. (Score:4, Informative)

    by DigiShaman (671371) on Tuesday August 07, 2012 @11:37PM (#40914261) Homepage

Standard desktop chipsets can get really flaky with 16GB of RAM and above. So be sure you get a single quad kit and not 2x dual kits like most people get (because it's cheaper). But then again, if you're serious about needing that much RAM, I suggest going workstation level with an Intel Xeon or AMD chip. Those are the only two lines of CPUs that will support ECC. The last thing you want to have to worry about is some bit flips happening someplace and the corruption then being committed back to disk. Ugh!!! The thought alone is enough to give me ulcers. Seriously, go with ECC when working with that much memory.

  • by gman003 (1693318) on Wednesday August 08, 2012 @12:03AM (#40914447)

    I would recommend buying an SSD, putting the OS and all applications on it, and then using a magnetic drive as the "users" volume. Any sanely laid out OS makes this very easy.

    Just an FYI for everyone, Windows does not count as a sane OS for this purpose. I managed to render a Windows install unusable trying to do that.

The best trick is to move just specific users' folders, not the whole Users directory, and then symlink them back to the original location. Trying to move the entire \Users folder almost always breaks something, often rendering it impossible to log in. Other methods either require setting up your own unattended install disks with odd config files, or do not work completely.

    The general process:
    1) Install Windows to the SSD as normal
    2) Create a user account and a backup account. For this demonstration, their original, default home folders will be C:\Users\GMan and C:\Users\Admin
    3) Reboot (to log both out completely)
    4) Log into the backup account, otherwise the system will choke while copying your registry files
    5) use Robocopy to copy the user folder to the hard drive (robocopy C:\Users\GMan D:\Users\GMan /COPYALL /E)
6) Delete the original user folder (rmdir /S /Q C:\Users\GMan — plain rmdir fails on a non-empty folder)
    7) Symlink the folder on the hard drive back to the SSD (mklink /J C:\Users\GMan D:\Users\GMan)
    8) Repeat for any other users, but note that you only need one "backup" account

    This gives a few advantages:
    1) It works transparently with programs that assume you are at C:\Users\[username]
    2) It copies all user data, not just documents/images/videos
    3) If the hard drive fails, you don't break the OS - you can log in using the alternate account (Admin in my example) to try to recover things
    4) If you really wanted to, you could try to set some specific files in your user directory to be on the SSD
    5) If you have a Windows install, or at least recovery partition, on the hard drive, either drive can fail without rendering the system unusable.
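For what it's worth, the move-and-link pattern in those steps can be sketched cross-platform in Python. This is a toy illustration using a plain symlink in a temp directory, not a substitute for the robocopy/mklink procedure; the paths and helper name are made up:

```python
import os
import shutil
import tempfile

def relocate_and_link(src, dst):
    """Move a user folder to another volume, then link the old path
    to the new location (the same pattern as robocopy + mklink /J)."""
    shutil.copytree(src, dst, symlinks=True)        # like robocopy /COPYALL /E
    shutil.rmtree(src)                              # like rmdir /S /Q
    os.symlink(dst, src, target_is_directory=True)  # like mklink /J

# Demonstration with temp dirs standing in for the C: and D: volumes.
root = tempfile.mkdtemp()
ssd_home = os.path.join(root, "ssd", "Users", "GMan")
hdd_home = os.path.join(root, "hdd", "Users", "GMan")
os.makedirs(ssd_home)
with open(os.path.join(ssd_home, "notes.txt"), "w") as f:
    f.write("hello")

relocate_and_link(ssd_home, hdd_home)

# Programs still using the old path transparently hit the new volume:
with open(os.path.join(ssd_home, "notes.txt")) as f:
    print(f.read())  # prints: hello
```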

  • by sortius_nod (1080919) on Wednesday August 08, 2012 @12:24AM (#40914575) Homepage

Not only this, the claims are at least 12 months out of date. SSDs are now less than $1/GB, & the average drive sold now is 120 or 128GB.

I upgraded my desktop with a cheap solution (AMD A8, 990FX, 16GB RAM, 128GB SSD, 2TB HDD), all for less than my last upgrade cost ($669 vs $955) 3 years ago. SSDs are definitely part of the norm now; we order many machines with dual 128GB SSDs in them, both laptops & desktops. The price difference is negligible, so this article seems more like a cry by someone attempting to hold onto the old way of doing things.

  • by obarthelemy (160321) on Wednesday August 08, 2012 @01:11AM (#40914817)

    Not all of us are gamerz ?

  • Re:No. (Score:4, Informative)

    by tlhIngan (30335) <slashdotNO@SPAMworf.net> on Wednesday August 08, 2012 @01:18AM (#40914851)

    Now keep in mind, a regular HDD is about 70MB/s for sustained data. Put that in a raid 1 (mobo hardware, or software) and you can see 120-130, so an SSD on a bad connection may not be that much better than much less expensive RAID.

    You're ignoring a very important fact.

    An SSD is at least an order of magnitude faster still because the seek time is in the microsecond range. So even an SSD on a bad interface can easily peg the interface for random I/O, while the super RAID array doing the same accesses can bog down to a halt.

    The reason? Seek times. A typical hard drive is around 7ms or so, which means if you're doing lots of seeks, you'll never get that sustained transfer rate. Worst case, you can easily get less than 1MB/sec if the hard drive is reading 4kB blocks at random locations.
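That worst-case figure checks out with simple arithmetic (a sketch using the post's assumed numbers):

```python
# Worst-case random-read throughput for a disk with ~7 ms average seek,
# reading 4 KB at each random location:
seek_time_s = 0.007
block_kb = 4

ios_per_second = 1 / seek_time_s                  # ~143 IOPS
throughput_mb_s = ios_per_second * block_kb / 1024
print(f"~{throughput_mb_s:.2f} MB/s")             # well under 1 MB/s
```

Compare that to the same drive's ~70 MB/s sequential rate: fully random 4 KB access throws away more than 99% of the throughput, which is the gap an SSD's microsecond seeks close.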

A hard drive is great for long reads and writes. An SSD excels at random I/O, and OS/application usage tends to be random I/O. It just makes the whole system feel "snappier," purely because read requests are fulfilled immediately versus skittering the head over the platters.

    In fact, Windows 7 does a quick test to determine if it's running on an SSD - it does a bunch of random I/O. If the drive is capable of more than 50MB/sec, it's an SSD because no spinning rust can meet that requirement due to seek time.

  • Re:bcache (Score:5, Informative)

    by Gaygirlie (1657131) <gaygirlie@hotmail. c o m> on Wednesday August 08, 2012 @03:13AM (#40915443) Homepage

    You should not expect much speedup from using a 10Mbyte/s memory card in front of a standard 150 MByte/s sustained transfer drive.

You said it yourself: "sustained." The whole point of ReadyBoost is that it uses these flash devices for cases where low latency matters more than sustained transfer rate. It doesn't even try to cache multi-megabyte files; it caches small files and details that are accessed frequently: a regular HDD is quite bad at reading dozens of small files from all over the disk due to seek times.

    If the small files are stored in your cache, you might save some seek time. But you can't compare some ultra-slow USB / SDHC card to a 2-300 Mbyte/s SSD.

    That's what I said.

    I tried the SDHC, did not work well. A fast USB 3.0 stick in a USB 2.0 port was way better, but still does not compare to SSD.

    If you were expecting SSD-level performance then you clearly didn't understand fully what you were doing in the first place. It is not meant to replace an SSD, it is simply meant to speed up your system as compared to only using a regular HDD.

  • by PeterKraus (1244558) <peter.kraus@member.fsf.org> on Wednesday August 08, 2012 @04:17AM (#40915733) Homepage

    That goes with the "AMD A8" part.

  • by jawtheshark (198669) * <slashdot&jawtheshark,com> on Wednesday August 08, 2012 @04:33AM (#40915801) Homepage Journal

AMD A8 (and A6 and A4) have a graphics core on board. Those graphics cores are even quite sufficient for non-hardcore gaming. I'm a fan of the FM1 platform, and think it is way underrated. It has decent processing power, decent graphics, is very quiet (even with the stock cooler) and not expensive at all. Example: A6-3650 (86.90€), 2x8GB kit RAM DDR3-1333 from ADATA (67.98€), motherboard GIGABYTE GA-A75-D3H (99.90€). That's quite some power for 254.78€ (including taxes, excluding shipping — prices taken at Alternate [alternate.de]). Like many of us, we just reuse the disk and the case we already own. There is no need for a graphics card in such a system, unless you're a hardcore gamer.

  • Re:bcache (Score:5, Informative)

    by Skal Tura (595728) on Wednesday August 08, 2012 @05:37AM (#40916051) Homepage

USB latency is actually rather high. In fact, VERY high.

The absolute minimum latency for a fetch over USB used to be 16ms. It seems this has had some work done on it, the default polling rate now being 125Hz instead of 90.
But that's still 8ms to send the request for a file. The device receives it, and even assuming it's ultra fast and takes just 3ms to find, fetch and prep the reply packet (and that the reply fits in one packet), 16ms has been spent BEFORE the data can be sent back, 24ms for the whole round trip.

HDDs seek faster than this, so if your HDD has no other activity, for a single fetch your HDD is faster. Unless it's a Caviar Green.
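The polling arithmetic in the parent can be modeled in a few lines (a sketch; the 3ms device time and single-slot transfer are the parent's assumptions):

```python
import math

# Rough model of USB polling latency: the host polls at a fixed rate,
# so each leg of a transaction waits for the next polling slot.
poll_hz = 125
slot_ms = 1000 / poll_hz        # 8 ms between polls
device_prep_ms = 3              # assumed: time to locate and prepare the data

request_ms = slot_ms                                  # request reaches the device
ready_at = request_ms + device_prep_ms                # 11 ms
reply_ms = math.ceil(ready_at / slot_ms) * slot_ms    # wait for next slot: 16 ms
round_trip = reply_ms + slot_ms                       # data transferred: 24 ms
print(f"~{round_trip:.0f} ms round trip")
```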

  • Re:bcache (Score:2, Informative)

    by Anonymous Coward on Wednesday August 08, 2012 @07:48AM (#40916597)

    You'll want /tmp and (depending on installed servers and how your distro is configured) /var on the HDD; if you have enough RAM, /tmp and /var/tmp on a tmpfs and the rest of /var on the HDD. So, / on the SSD, /home and /var on the HDD, and /tmp and /var/tmp either on tmpfs or on the HDD. /usr and /boot as dedicated partitions on the SSD may be a good idea too, all with sane defaults (for example, /boot should be mounted only when you update the kernel, / can be read-only and remounted in place when doing your weekly updates, etc).
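For reference, a hypothetical /etc/fstab along those lines; device names, filesystems, and tmpfs sizes are placeholders to adapt to your own system:

```
# /etc/fstab sketch — hypothetical devices
/dev/sda1  /         ext4   defaults,noatime   0 1   # SSD
/dev/sda2  /boot     ext4   defaults,noauto    0 2   # SSD, mount only for kernel updates
/dev/sdb1  /home     ext4   defaults           0 2   # HDD
/dev/sdb2  /var      ext4   defaults           0 2   # HDD
tmpfs      /tmp      tmpfs  defaults,size=2G   0 0
tmpfs      /var/tmp  tmpfs  defaults,size=1G   0 0
```

Note that /var/tmp on tmpfs sacrifices the persistence across reboots it is normally expected to have, so only do that if you have the RAM and don't rely on it.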

  • Re:No. (Score:3, Informative)

    by SpinyNorman (33776) on Wednesday August 08, 2012 @08:01AM (#40916673)

    Not just for power users ...

    In Linux it's simple enough to, say, mount your root (OS data) folder on an SSD and /home (user data) on a HDD, but Windows 7 isn't so flexible.

    What most people (power users) end up doing under Windows 7 is to install the OS on an SSD, then use a "junction point" (cf Linux hard link) to redirect the /Users folder to a HDD (and reconfigure the Windows TEMP directory to be on the HDD to avoid killing the SSD with excessive temp file create/delete cycles). The trouble with this is that Windows 7 junction points don't play nice with restore points, as you'll find out when having to revert to a restore point and all your user data disappears requiring major hackery to restore.

    So, for Windows 7, a HDD with built in flash cache is a MUCH more convenient solution than using a separate SDD - even for a power user.

  • Re:No. (Score:4, Informative)

    by Miamicanes (730264) on Wednesday August 08, 2012 @09:41AM (#40917393)

    > SSDs have a propensity to just die like a normal HDD.

No, normal drives tend to become flaky, then get super-slow, then start to make grinding noises and die outright a short time later. SSDs just commit data-suicide, then go into "panic" mode and lock the entire drive if they sense that you're trying to do data recovery on them. Ask anybody unfortunate enough to own a drive based on the Sandforce SF-1200 controller, like the OCZ Vertex 2. OCZ's forums are *littered* with post after post after post (continuing to the present) from people who've had the drive just spontaneously decide to fail.

    The problem isn't flash-wear... the problem is a perfect storm of buggy firmware, drive-level encryption, and paranoid firmware that views aggressive attempts to recover data lost due to that buggy firmware as a hacking attempt & locks out the entire drive in a way that can't be fixed by end users (mostly, because Sandforce won't allow the recovery/repair tools to be released to end users). IMHO, it's completely inexcusable. At the VERY least, they should have made the encryption and protection something that can be disabled by end users (probably requiring complete reformatting, but at least present as an option). Then, they could have made a recovery mode that allows drives that had the encryption disabled to just sequentially rip the raw bits from the flash for offline recovery. But no. They have to protect their shit IP that nobody who's been burned by them will EVER purchase again anyway, and casually write off petabytes of lost user data due to their brittle embedded firmware and protection as "not our problem".

    OCZ and Sandforce are the best poster children for a class-action lawsuit since the day HP decided to sell CD writers without cache (that their engineers GUARANTEED would turn at least a quarter of the discs they touched into coasters). The sad part is that such a suit could only have things like piddling amounts of money as the penalty, instead of compelling Sandforce to furnish all source, signing keys, and in-house utilities relevant to the SF-1200 to anybody who's ever had the misfortune of purchasing a drive based on it.
