Data Storage Hardware

Are SSD Accelerators Any Good? (331 comments)

Posted by Soulskill
from the newton's-second-law-objects dept.
MrSeb writes "When solid-state drives first broke into the consumer market, some predicted the new storage format would supplant hard drives within a few years thanks to radically improved performance. In reality, the shift from hard drives (HDDs) to SSDs has so far been confined to the upper end of the PC market. For cost-conscious buyers and OEMs, the higher performance SSDs offer is still too expensive and their capacity insufficient. SSD cache drives have emerged to address this situation. They are small, typically 20-60GB of NAND flash, and are paired with a standard hard drive. Once installed, drivers monitor which applications and files are accessed most often, then cache those files on the SSD. It can take the software one or two runs to start caching data, but once this process is complete, future access and boot times are significantly improved. This article compares the effect of two SSD cache solutions, Intel Smart Response Technology and Nvelo Dataplex, on the performance of a fast VelociRaptor and a slow WD Caviar drive. The results are surprisingly positive."
This discussion has been archived. No new comments can be posted.

  • bcache (Score:5, Informative)

    by Anonymous Coward on Tuesday August 07, 2012 @08:15PM (#40912057)

    For Linux users: http://bcache.evilpiepirate.org/

    Lets you use any SSD as a cache in front of another filesystem.
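For anyone who hasn't tried it, the setup is roughly as follows. This is a sketch only: device names are placeholders, and the exact steps vary by kernel and bcache-tools version, so check the docs before pointing it at real disks.

```shell
# Sketch of a bcache setup. /dev/sdb (backing HDD) and /dev/sdc
# (caching SSD) are placeholder device names -- adjust for your hardware.
make-bcache -B /dev/sdb      # format the backing (HDD) device
make-bcache -C /dev/sdc      # format the caching (SSD) device

# Register both with the kernel (newer udev rules do this automatically):
echo /dev/sdb > /sys/fs/bcache/register
echo /dev/sdc > /sys/fs/bcache/register

# Attach the cache set to the backing device using the cache set's UUID
# (printed by make-bcache -C, or found under /sys/fs/bcache/):
echo <cset-uuid> > /sys/block/bcache0/bcache/attach

# Then put a filesystem on the combined device:
mkfs.ext4 /dev/bcache0
```

Note the backing device has to be formatted by make-bcache, which is exactly the "specially-prepared partition" limitation the reply below complains about.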

    • Re:bcache (Score:5, Informative)

      by drinkypoo (153816) <martin.espinoza@gmail.com> on Tuesday August 07, 2012 @08:43PM (#40912463) Homepage Journal

      Lets you use any SSD as a cache in front of another filesystem.

      It would be niftier if it would let you use it as a block cache in front of any filesystem, instead of just one located on a specially-prepared partition. dm-cache will do this but isn't up to date.

      • by mitgib (1156957)

        Lets you use any SSD as a cache in front of another filesystem.

        It would be niftier if it would let you use it as a block cache in front of any filesystem, instead of just one located on a specially-prepared partition. dm-cache will do this but isn't up to date.

        Maybe Flashcache [github.com] would be a better choice for some. I use this as a read-cache on several VPS nodes and the results are impressive.
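For reference, a rough sketch of setting one up; device names are placeholders and the exact flags may differ by version, so check the project's README for the real invocation:

```shell
# Sketch: create a Flashcache device in writeback mode.
# /dev/ssd and /dev/hdd are placeholder device names.
modprobe flashcache
flashcache_create -p back cachedev /dev/ssd /dev/hdd

# The combined device appears via device-mapper:
mkfs.ext4 /dev/mapper/cachedev
mount /dev/mapper/cachedev /data
```

Because it sits at the block layer (via device-mapper), it works in front of any filesystem, unlike bcache's prepared-partition requirement.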

        • Re:bcache (Score:5, Informative)

          by antifoidulus (807088) on Tuesday August 07, 2012 @09:19PM (#40912909) Homepage Journal
          Flashcache is a block-level caching algorithm, which means that it will work with any device, but it takes a metric TON of memory as it has to retain cache info for every block on the device. If you have the memory then yeah, you can get some speedup from it, but if you are memory constrained eating up that much memory for the small performance boost isn't worth it.
    • Re:bcache (Score:4, Informative)

      by Anonymous Coward on Tuesday August 07, 2012 @09:19PM (#40912913)

      For Linux users: http://bcache.evilpiepirate.org/

      Lets you use any SSD as a cache in front of another filesystem.

      For ZFS users:
      * read cache: zpool add {pool} cache {device}
      * write cache: zpool add {pool} log {device}
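In context, the full sequence looks something like this (a sketch; pool and device names are placeholders, and in practice you'd use stable paths from /dev/disk/by-id):

```shell
# Create a pool on the slow disk, then add an SSD partition as L2ARC
# (read cache) and another as a separate intent log (write cache).
zpool create tank /dev/sdb
zpool add tank cache /dev/sdc1   # L2ARC read cache
zpool add tank log /dev/sdc2     # ZIL / separate intent log
zpool status tank                # cache and log devices are listed here
```

If an L2ARC device dies, ZFS just falls back to the main vdevs, so a single cheap SSD is fine there; the log device is usually mirrored since it holds in-flight writes.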

    • by ilkahn (6642)

      well this is great.

    • Re:bcache (Score:5, Interesting)

      by ShoulderOfOrion (646118) on Wednesday August 08, 2012 @12:50AM (#40914727)

      Why bother? I have an HDD mounted as /, and an SSD mounted as /usr on my Gentoo system. Using atop I consistently see the HDD receive 10-20 times the writes the SSD receives but only about 2x the reads. In other words, on Linux the SSD is already serving primarily as a read-only caching filesystem just by mounting it correctly.

    • Re:bcache (Score:4, Interesting)

      by ToasterMonkey (467067) on Wednesday August 08, 2012 @01:35AM (#40914959) Homepage

      For Linux users: http://bcache.evilpiepirate.org/ [evilpiepirate.org]

      Lets you use any SSD as a cache in front of another filesystem.

Solaris and Windows have been shipping production-ready L2 filesystem caches for years already: L2ARC and ReadyBoost. I'll give Apple a pass because their systems are mostly not designed for adding drives, and they were apparently betting on high-capacity SSDs coming down in price by now; desktops have less need for caches in the tens of GB anyway. Linux, as a server OS, doesn't have much of a good excuse. Why wasn't L2 cache worked out years ago, when everyone was racing for TRIM support? Using smaller, cheaper SSDs as L2 cache almost makes too much sense: it covers up the short write-cycle lifetime and poor sequential read performance. 60-some-odd GB of cache starts to look pretty dang good for a lot of server workloads.

      I feel I should point this out because these cheesy Linux +1 MeToo posts are _really_ aggravating to people who use it professionally. It's a tool. We're not in love with it.

      http://arstechnica.com/civis/viewtopic.php?f=21&t=1114013 [arstechnica.com]
      The developer apparently didn't even know what the ARC algorithm is... which is just bizarre, like developing a race car without knowing what variable valve timing is. Not saying it is needed, but what level of quality do you expect out of this?

      • by Skal Tura (595728)

Mac OS X is basically a highly modified FreeBSD/NetBSD, so it might actually already have ZFS support, and therefore L2ARC.
Knowing Apple, though, they've probably disabled it and give you no way to even try using ZFS.
Besides that, they've probably locked SSD support down to a few select drives as well.

Linux does not have these options by default, but several are available. I bet some vendors do include them.
For ZFS you don't need kernel mods in many distros.

  • No. (Score:5, Insightful)

    by Anonymous Coward on Tuesday August 07, 2012 @08:19PM (#40912129)

    Hybrid drives or mixed mode setups kinda suck ass now that actual ss drives are getting to a reasonable price/size.

    SSD for os/programs.

    Giant TB+ drive for storage and media files.

    • Re:No. (Score:5, Insightful)

      by wbr1 (2538558) on Tuesday August 07, 2012 @08:51PM (#40912579)
The average joe doesn't want to manage putting OS/apps/frequent files on one drive and splitting the rest elsewhere. Software that automagically does this and keeps the cache up to date is a boon for the non-power user.
      • by artor3 (1344997)

        Bingo. I've been manually managing the shifting of files back and forth between my HDD and SSD for a couple years now, and while it's not particularly hard, it's not something I'd want to guide a non-techie through. Getting the OS on one drive and the user folders (my documents, videos, music, etc.) on the other isn't particularly well documented, and moving individual Steam games seems to require console commands, a rarity in Windows.

        Even though I can manage it all myself, I would absolutely switch to ha

        • I have an SSD in my new work machine, with a big disc drive as D:. For this, though, it was pretty simple to install data files (p4 sync, PCB projects) on the HDD and use the SSD for OS and applications.

      • by gman003 (1693318)

        After getting it initially set up (which was, I will concede, a pain in three hundred asses), my own dual-drive system has required me to think about which drive to put something on precisely once, when installing the Unreal Anthology off CD (which was a case of picking "D:\Unreal Anthology" instead of "C:\Unreal Anthology" when installing).

        Maybe it's because I could afford an SSD big enough that I don't worry *too* much about space, and having a hard drive faster than normal (it's a 7200rpm drive in a lapt

      • Re: (Score:3, Informative)

        by SpinyNorman (33776)

        Not just for power users ...

        In Linux it's simple enough to, say, mount your root (OS data) folder on an SSD and /home (user data) on a HDD, but Windows 7 isn't so flexible.

        What most people (power users) end up doing under Windows 7 is to install the OS on an SSD, then use a "junction point" (cf Linux hard link) to redirect the /Users folder to a HDD (and reconfigure the Windows TEMP directory to be on the HDD to avoid killing the SSD with excessive temp file create/delete cycles). The trouble with this is t

    • by Lumpy (12016)

Odd, the hybrid drive I just bought doesn't suck ass. In fact it's faster than ANY 750GB hard drive you can buy. Unless "suck ass" is the new hipster slang for "really fast". It made my old out-of-date quad-core i5 laptop a whole lot faster.

      • Re:No. (Score:4, Interesting)

        by afidel (530433) on Tuesday August 07, 2012 @10:03PM (#40913409)

        I assure you that there are MANY 800 GB SAS/SATA SSD's that can beat your hybrid drive, they just cost more than most people will spend on their entire computer =)

        • ^ This

          Also, a Quad Core i5 isn't out of date, last time I checked. :)

          BTW, a pair of 512GB SSDs are no longer crazy expensive, they can be purchased for less than $1,000 total, which is less than many gamers spend on their computers.

      • by TheLink (130905)
        Seems to me the current hybrid drives don't do write-caching, they only do read-caching.

        I can see why read-caching would be a lot simpler to implement, but I bet decent write-caching will really make hybrid drives as fast as SSDs for most desktop use.

        Copying files from one location to another at SSD speeds till the write-cache fills up. Then while you do something else the drive flushes the cache to disk.
    • Re: (Score:2, Interesting)

      by cpu6502 (1960974)

      I'd rather spend my money on RAM. Up the system memory from 8 GB to 32GB, and you eliminate the slowdown caused by hard drive accesses.

      • Re:No. (Score:4, Interesting)

        by Sir_Sri (199544) on Wednesday August 08, 2012 @12:08AM (#40914465)

That was true back when you were talking about MB numbers. It's definitely not true when talking in that range of GB.

        To use windows as an example, it will try and cache what it thinks you're going to load based on I think a fairly simple algorithm. Which means it's usually wrong. If you never access more than 32 GB worth of data from your HDD then sure, 32 GB of Ram will do the trick. But, for example, if you play WoW or SWTOR (both of which flutter around 20GB), any other game + web browser + windows you could fairly easily waltz past 32 GB of data, at which point you're into 'cache misses'. And yes, this is conceptually the exact same problem as cache hit ratios, just working at a different level (logical files or directories rather than lines of memory).

        I had virtually no performance increase going from 12 to 24 GB of RAM on general disk use.

        You can get a big boost from an SSD, and especially, getting something that will actually work a SATAIII connection at full speed. My x58 board is lucky to pull more than 200MB/s from even a very good SSD, whereas the same drive on a sandy bridge board will do 450-550 range.

        Now keep in mind, a regular HDD is about 70MB/s for sustained data. Put that in a raid 1 (mobo hardware, or software) and you can see 120-130, so an SSD on a bad connection may not be that much better than much less expensive RAID.

        • Re:No. (Score:4, Informative)

          by tlhIngan (30335) <slashdotNO@SPAMworf.net> on Wednesday August 08, 2012 @01:18AM (#40914851)

          Now keep in mind, a regular HDD is about 70MB/s for sustained data. Put that in a raid 1 (mobo hardware, or software) and you can see 120-130, so an SSD on a bad connection may not be that much better than much less expensive RAID.

          You're ignoring a very important fact.

          An SSD is at least an order of magnitude faster still because the seek time is in the microsecond range. So even an SSD on a bad interface can easily peg the interface for random I/O, while the super RAID array doing the same accesses can bog down to a halt.

          The reason? Seek times. A typical hard drive is around 7ms or so, which means if you're doing lots of seeks, you'll never get that sustained transfer rate. Worst case, you can easily get less than 1MB/sec if the hard drive is reading 4kB blocks at random locations.
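The arithmetic backs that worst case up. Assuming every 4kB random read costs a full 7ms seek:

```shell
# Worst-case throughput: one 4 kB block per 7 ms seek.
awk 'BEGIN { printf "%.2f MB/s\n", 4096 / 0.007 / 1000000 }'
# -> 0.59 MB/s
```

So well under 1MB/sec, from a drive that does 70MB/s sequentially. An SSD with ~0.1ms access time does the same workload roughly 70x faster before its interface even becomes the bottleneck.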

A hard drive is great for long reads and writes. An SSD excels at random I/O, and OS/application usage tends to be random I/O. It just makes the whole system feel "snappier", purely because read requests are fulfilled immediately instead of skittering the head over the platters.

          In fact, Windows 7 does a quick test to determine if it's running on an SSD - it does a bunch of random I/O. If the drive is capable of more than 50MB/sec, it's an SSD because no spinning rust can meet that requirement due to seek time.

        • The Windows algorithm is anything but simple. It's actually quite damn good, and the system is measurably faster while using Readyboost.
    • by Cinder6 (894572)

      I built a system two weeks ago, and I put in a 1TB drive and a 60GB cache SSD. The SSD definitely makes certain things nice--Windows boot time is down to ~7 seconds (I don't use Intel Rapid Start), and loading games (primary purpose of this machine) is also very quick, particularly on games with long load times, such as SWTOR.

      That said, for common, everyday use, SSD cache drives are kind of meh. Shaving off a second from Chrome launch time, or even 20 seconds off of Windows boot time, isn't that big a dea

Agreed. The hybrid drives arrived at the market too late. What's more, the 'caching' mechanism is a bit of a joke; a smarter, but more technically complicated, approach would have been to implement two drives in one package, one flash, one standard mechanical; however, I do not know if the SATA spec is cool with that. One drive, two partitions, then?

As it is right now, the lack of control over what gets put on that all-too-small cache is killing the market for these things. Yes, your most often accessed files a

    • by OzPeter (195038)

      Hybrid drives or mixed mode setups kinda suck ass now that actual ss drives are getting to a reasonable price/size.

      SSD for os/programs.

      Giant TB+ drive for storage and media files.

      I have a laptop with a single drive bay you insensitive clod

    • by DragonTHC (208439)

      I may have to look into that.

      I'm using a 60GB SSD for my boot drive with a 1TB sata drive for storage.
      I have anything requiring mass storage mapped via symlink to the sata drive.
      steamapps, origin games, certain large program files too.

      works decent enough, and faster than sata alone. Windows boots in about 15 seconds.
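For anyone wanting to do the same, the junction trick looks roughly like this (Windows cmd, run as administrator; the paths are examples, not canonical Steam locations):

```shell
rem Move the Steam library to the big SATA drive, then junction it back
rem so Steam still sees it at the original path. Paths are placeholders.
robocopy "C:\Program Files (x86)\Steam\steamapps" "D:\steamapps" /E /MOVE
rem robocopy /MOVE usually removes the source tree; if an empty folder
rem remains, delete it before creating the junction:
rmdir "C:\Program Files (x86)\Steam\steamapps"
mklink /J "C:\Program Files (x86)\Steam\steamapps" "D:\steamapps"
```

The junction is transparent to applications, so nothing in Steam needs reconfiguring.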

  • by Anonymous Coward on Tuesday August 07, 2012 @08:24PM (#40912199)

240GB SSDs are bouncing around $200. Two bills for the boot SSD, your old drive gets the data partition, and you're beating these hybrids on performance AND price.

  • by Nemilar (173603) on Tuesday August 07, 2012 @08:54PM (#40912619) Homepage

It seems that SSD accelerators can be hit or miss. If you take a look at http://www.theregister.co.uk/2012/07/12/velobit_demartek/ [theregister.co.uk], for example, some of these products don't seem to do anything at all, while others actually work.

Like any young industry, it'll probably take a while for the field to shake out until only a few decent contenders remain.

  • Surprisingly why? (Score:4, Informative)

    by Sycraft-fu (314770) on Tuesday August 07, 2012 @08:56PM (#40912647)

    It would be surprising if it weren't the case. We've been doing the same thing with memory for years. Our CPUs need memory that can perform in the realm of 100GB/sec or more with extremely low latency, but we can't deliver that with DRAM. So we cache. When you have multiple levels of proper caching you can get like 95%+ of the theoretical performance you'd get having all the RAM be the faster cache, but at a fraction of the price.

    This is just that taken to HDDs. Doesn't scale quite as well but similar idea. Have some high speed SSD for cache and slower HDD for storage and you can get some pretty good performance.

I love Seagate's little H-HDDs for laptops. I have an SSD in my laptop, but only 256GB. Fine for apps, but I can't hold all my data on there (music, virtual instruments, etc.); they are just too pricey to get all the storage I'd need. So I also have an H-HDD (the laptop has two drive bays). Its performance is very good, quite above what you'd expect from a laptop drive, but it was only $150 for 750GB instead of $900 for 600GB (the closest I can find in SSDs).

    • by fermion (181285)
I think that RAM is dirt cheap, and storing local copies in RAM is always the better way to deal with HDD delays. It seems it would be better to put 8GB of RAM in the computer for $50 than to spend money on an SSD to accelerate a HDD.

I have had laptops with up to 750GB HDDs. My current laptop has a 256GB SSD, and my next will have at least 500GB. I hate to be buzzword-compliant, but I don't need all the video and music on my computer all the time. I can leave that in external storage, the cloud, and dow

      • by dbIII (701233)

        I think that RAM is dirt cheap, and storing local copies in RAM

A decent OS (back down, MS fanboys, Win7 sometimes fits too) uses RAM to cache a lot of stuff anyway, without the hassle of a ramdisk. That file you opened yesterday may still be in RAM and open very quickly if you have enough spare memory.
Sometimes ramdisks do work well, though. It's nice to have a 20GB ramdisk as scratch space for software that is too braindead to use memory when it's available and just hammers the disk instead. I've got two orders

    • Cache levels.

      L1 and L2 on die per CPU core.
      L3 is on die and shared among all CPU cores.
      L4 is RAM
      L5 is SSD accelerator
      L6 is HDD.
      L7 is Cloud storage off in Internet lala land.
      L8 is stored on my old HDDs someplace in the dumpster. It might be found in a few hundred years from now.

  • Pointless (Score:5, Informative)

    by AlienIntelligence (1184493) on Tuesday August 07, 2012 @09:08PM (#40912793)

    SSD's were recently @ $1/Gig. That's when I upgraded everything.

    I've seen them as low as 55-65c a gig now. Yeah... gotta love how
    tech drops in price RIGHT AFTER you decide to adopt.

    Buy a WHOLE SSD drive. Put all the programs you use daily on it.

    120G ~ $70

    That is all.

    FWIW, except for bulk storage, I will NEVER buy a spinning HD again.
    I experienced a RIDICULOUS speed up, going from a 7200rpm drive.

    -AI

    • by Mashiki (184564)

To be honest, despite the dire doom-and-gloom warnings that the "end times will come" for first-generation adopters of SSDs, I've got a first-generation OCZ drive that's still chugging along and working like the day it was new. Heck, my page file is on it. It hasn't even used any of the backup blocks yet, 3 years on now, and no complaints yet. I have a second 60GB drive that I transfer stuff onto if I'm using it a lot, like MMOs and some programs that use large textures (Shogun 2, Skyrim and so o

      • To be honest, despite the dire doom and gloom warnings of people that the "end times will come" to first generation adopters of SSD's, I've got a first generation OCZ drive that's still chugging along and working like the day it was new.

        Thanks for that.

        I did jump both feet into the SSD thing, knowing I could hit that switch
        one day and be met with the same silence, just less screen =)

        It's good to hear from a person that their SSD hasn't died on them.

        FWIW, I hope you didn't jinx yourself.

        -AI

        • by Mashiki (184564)

          All too possible, I do have a backup in place just in case. But if it dies, it dies. 3 years won't be bad, it's hard enough to find mechanical drives with a 3 year warranty on them anymore.

  • by jibjibjib (889679) on Tuesday August 07, 2012 @09:10PM (#40912819) Journal

    > In reality, the shift from hard drives (HDDs) to SSDs has thus far been confined to the upper end of the PC market.

    Not entirely. I have the cheapest netbook I could find, and I replaced its hard drive with a cheap low-capacity SSD. I don't keep much big stuff on it so the capacity isn't a problem. In terms of performance and power usage and not having to worry about my data when I drop my computer, it's been entirely worth it.

Indeed. I seriously doubt it's the 'upper end' of the market. In reality, it's probably the more technically literate part of the market, which understands the difference between an SSD and a HDD.

  • by phoebus1553 (522577) on Tuesday August 07, 2012 @09:34PM (#40913085) Homepage

    This is exactly what has been going on in the enterprise storage space for a while. I only know much about two vendors, but they both have a solution like this. High end IBM storage has EasyTier, which while originally for the mix of FCAL/SAS to SATA, it works with SSD too, and in the latest revs all 3 tiers at the same time. NetApp used to have a PAM card which is now called... FlashCache? FlexCache? F-Something-Cache anyway, which is essentially an SSD drive on a PCI card.

    Good to see the high end tech being applied to consumer level workloads.

  • by antifoidulus (807088) on Tuesday August 07, 2012 @09:38PM (#40913123) Homepage Journal
The article doesn't mention any other software (mostly OS) requirements for the accelerators, which is a pretty big deal. Basically there are two ways to cache:
1. At the file level, which isn't very resource-intensive (there aren't nearly as many files as blocks on a disk), but requires that the accelerator be able to read file-system metadata (and of course intercept OS calls), which severely restricts which file systems, and really even which operating systems, you can use with the accelerator;
or
2. Block-level caching. Much more generic; it can be used with any file system, since only the blocks, not any file-system metadata, are used. However, managing all that block information comes at a cost, either in main memory or in more expensive hardware. For instance, Flashcache requires about 500MB of memory to manage a 300GB disk. Depending on your usage this may be acceptable (though is memory really that much cheaper than SSDs nowadays?), but for most it isn't.

From the article I can assume that they only tested Windows, and that really limits its usefulness.
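That 500MB figure is a plausible back-of-envelope: assuming 4kB cache blocks, a 300GB disk has roughly 73 million blocks, so the metadata works out to only a handful of bytes per block:

```shell
# Implied per-block metadata overhead for 500 MB of RAM over a 300 GB
# disk, assuming 4 kB blocks (the block size is an assumption).
awk 'BEGIN { blocks = 300e9 / 4096;
             printf "%.0f bytes/block\n", 500 * 1024 * 1024 / blocks }'
# -> 7 bytes/block
```

A few bytes per block sounds tiny, but multiplied across every block on a large device it adds up fast, which is exactly the memory-constrained objection raised above.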
  • by kwerle (39371) <kurt@CircleW.org> on Tuesday August 07, 2012 @09:43PM (#40913167) Homepage Journal

    In reality, the shift from hard drives (HDDs) to SSDs has thus far been confined to the upper end of the PC market.

In reality, 100% of smartphones and tablets, many (all?) ultrabooks, and many notebooks now ship with SSDs. In a short time, virtually all laptops will ship with SSDs.
Disks will go the way of tapes: you'll be able to get them, but the practical uses will be few.

    In reality, I imagine that more computers (yeah, I count smart phones and tablets) are now sold with SSDs than disks.

    As to your actual question about accelerators - I have no idea. I went solid state a couple of years ago and won't be going back.

  • by Above (100351) on Tuesday August 07, 2012 @09:59PM (#40913347)

    SSD prices have fallen quickly, while hard drives have gone up. If you don't need large amounts of storage it's better to just go SSD. But what if large amounts of storage are needed?

    I would recommend buying an SSD, putting the OS and all applications on it, and then using a magnetic drive as the "users" volume. Any sanely laid out OS makes this very easy. The OS and Apps will load quickly, the large items (like video) will be stored on the cheaper, larger disk storage. No "hybrid" algorithm to worry about working. Two separate parts that can be upgraded independently. No OS support required. Perhaps some acceleration of some small data files will be missed, but the large ones would have never fit in the accelerated flash anyway.

I do think that file systems need to evolve in a new direction. ZFS is a preview of the right direction, but it would be nice to have a file system where you could add a RAM disk or flash disk and tell it to be used as a cache for the underlying disk, write-through or write-back. Easy to do in software. Plus better backup and replication support. I'd really like to configure my laptop with a 2TB spinning disk and a 256GB super-fast SSD, and give 1GB of RAM to the file system. Tell the file system to write everything through to magnetic, and cache frequently used data in SSD and RAM. When I'm on the home network, replicate the spinning disk to my NAS bit for bit. Perform incremental backups to my cloud backup service when connected to a fast enough network, using compressed incrementals to save space. Give me all that with ZFS's other features and it would be sysadmin filesystem nirvana...

    • ZFS already supports flash devices for caches. For read caching (L2 ARC), you can create striped cache volumes. You get better speed that way, and if one of the devices fails, ZFS knows it and just goes straight to the main storage volume (the one being cached). Meanwhile, the other drive continues. For write caching (ZIL), since the data is "worth" more, you can create a mirror of flash devices. The benefit of the ZIL is realized even if the cache is small, but unfortunately SSD write speed can be worse th

    • by gman003 (1693318) on Wednesday August 08, 2012 @12:03AM (#40914447)

      I would recommend buying an SSD, putting the OS and all applications on it, and then using a magnetic drive as the "users" volume. Any sanely laid out OS makes this very easy.

      Just an FYI for everyone, Windows does not count as a sane OS for this purpose. I managed to render a Windows install unusable trying to do that.

The best trick is to move just specific users' folders, not the whole Users directory, and then symlink each back to its original location. Trying to move the entire \Users folder almost always breaks something, often rendering it impossible to log in. Other methods either require setting up your own unattended install disks with odd config files, or do not work completely.

      The general process:
      1) Install Windows to the SSD as normal
      2) Create a user account and a backup account. For this demonstration, their original, default home folders will be C:\Users\GMan and C:\Users\Admin
      3) Reboot (to log both out completely)
      4) Log into the backup account, otherwise the system will choke while copying your registry files
      5) use Robocopy to copy the user folder to the hard drive (robocopy C:\Users\GMan D:\Users\GMan /COPYALL /E)
      6) Delete the user folder (rmdir C:\Users\GMan)
      7) Symlink the folder on the hard drive back to the SSD (mklink /J C:\Users\GMan D:\Users\GMan)
      8) Repeat for any other users, but note that you only need one "backup" account

      This gives a few advantages:
      1) It works transparently with programs that assume you are at C:\Users\[username]
      2) It copies all user data, not just documents/images/videos
      3) If the hard drive fails, you don't break the OS - you can log in using the alternate account (Admin in my example) to try to recover things
      4) If you really wanted to, you could try to set some specific files in your user directory to be on the SSD
      5) If you have a Windows install, or at least recovery partition, on the hard drive, either drive can fail without rendering the system unusable.

      • by Above (100351)

Uh, Windows is sane for this purpose, and this is commonly done in corporate environments, keeping all user data on a CIFS share.

        http://www.pcworld.com/article/190286/move_your_data_to_a_safer_separate_partition_in_windows_7.html

        Basically install windows to C:, the SSD.

        Spin up D:, the magnetic storage.

        Create D:\Users\

        Change the users home directory (My Documents) in the user properties to D:\Users\ (corp environments would be something like \\userserver\Users\).

        It's more or less the same thing as changing

I don't know about Windows, but wouldn't a more elegant way to accomplish this be paging? Having a very large swap on the SSD and a very high swappiness value would sort of do what this intends to do, without such an end-run around the entire cache architecture of the OS.

You'd end up breaking your SSD with excessive write cycles.

Here's a hint: never use swap on an SSD. With more than 4-8GB of RAM you really should not need swap on a modern Linux desktop at all.
Here is a good idea for Linux:

plug your SSD into SATA slot one, and a large magnetic disk into slot two.

Install the bootloader and OS onto the SSD and use it for /, then mount your magnetic disk as /home.

Problem solved. Of course this works for any pair of "large but slow" and "small but fast" disks.
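In fstab terms that's just two lines (a sketch; the UUIDs are placeholders, found with blkid, and the mount options are one reasonable choice for a 2012-era SSD, not gospel):

```shell
# /etc/fstab -- SSD as root, HDD as /home. UUIDs are placeholders.
UUID=aaaa-ssd-root   /      ext4  defaults,noatime,discard  0  1
UUID=bbbb-hdd-home   /home  ext4  defaults                  0  2
```

noatime cuts needless metadata writes on the SSD, and discard enables TRIM if the drive and kernel support it.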
  • complete BS (Score:5, Interesting)

    by slashmydots (2189826) on Wednesday August 08, 2012 @12:44AM (#40914695)
SSDs aren't just for high-end systems. Out of my 300 or so past customers, approximately 3 filled their hard drives past 60GB total. I've built several systems with Kingston HyperX 90GB and OCZ Agility 4 128GB drives without problems, and they were all $500-600 final cost. I use an H77MA-G43 from MSI + 4GB G.Skill 1333-CL7 memory and an i3-2100, or 4GB 1333-CL9 and a Pentium B940-960. Put it in a decent $30-40 case, use an Antec VP450 or Basiq or another respectable mid-range PSU, and wait for a sale on Win7 64-bit OEM copies at $80 instead of $100, and you've got yourself an unbeatable machine with a 7-year anticipated lifetime. Here's the kicker.

    I have an i5 (sandy) ridiculous gaming computer with a GTS450, 8GB of CL7 RAM, P67 chipset, and a pretty fast 7200 RPM 1TB Seagate main drive. It's custom built and would be around $1000 retail at my shop (at the time at least). It takes over a minute to log in and it takes forever to load games.

    I also built a system I'm selling for $520 with a Pentium B950, 4GB of pretty standard RAM, and a Maplecrest 60GB SSD. It logs into Windows in 4 seconds. The glowing balls don't even touch while loading the Windows 7 logo.

    SSDs are not for high end systems only! They're specifically exactly the opposite. They're the best way to make a really cheap budget PC seem extremely fast.
  • by WaffleMonster (969671) on Wednesday August 08, 2012 @01:49AM (#40915033)

Hybrid drives have been on the market for years. It seems to me your risk exposure only increases by combining the two: you now have to worry about the perils of spinning platters, oxide-eating flash write operations, and new management technology gluing the two together that isn't widely deployed.

Last I checked, about a year ago, comments on hybrid-drive reliability were overwhelmingly negative. Even assuming all the bugs have since been worked out, it seems like such a fleeting, pointless stop-gap measure as to not be worthwhile.

I have enough memory that most applications load instantly from the operating system cache. 32GB of DDR is readily available for less than $200... nothing involving a SATA bus can be faster than the operating system's main-memory disk cache.

Hopefully memristors or other technologies will pan out soon and we can be done with slow, power-hungry, inherently unreliable storage media once and for all.

  • by Terrasque (796014) on Wednesday August 08, 2012 @05:40AM (#40916067) Homepage Journal

When I got my new Z77 board last week, I managed to slice my 120GB SSD into two parts: 18GB for Intel's cache system, and the rest for "Data", aka the Windows install.

I configured the SSD to cache my 2TB spinney a bit, and it generally worked as expected. Performance ranged from clean SSD speed in some places to, in the worst case, old HDD speed.

    In other words, worst-case scenario was same as not having cache, and best-case scenario made it look like a 2TB SSD at no extra cost :)

    I've currently disabled it, since currently I'm re-installing my steam games (over 300 in total..), but will re-enable it when the data is a bit more static again.

    So far I consider the experiment a huge success, even though it complicated the install somewhat (SSD cache can only be configured from Windows, and only if the SSD is not running the OS).

  • by gelfling (6534) on Wednesday August 08, 2012 @07:39AM (#40916545) Homepage Journal

    The cost benefit of SSD's barely makes sense - the makers got greedy and decided to tack absurd price premiums on their gear far in excess of their benefit. And they've stayed there.
