Hybrid Storage Solutions Compared
Vigile writes "While few would argue with the performance advantages of solid-state drives, their cost relative to spindle-based disks still makes them a luxury item. The promise of hybrid storage solutions is to combine the benefits of both — large capacities from standard drive technology and the performance advantages of solid state. PC Perspective published an article comparing several solutions that vary in their approach to hybrid storage. The OCZ RevoDrive Hybrid combines a standard 2.5-in drive with a PCI Express-based SSD, offering the best overall performance and the largest cache size. Seagate's new Momentus XT 2.5-in solution embeds the cache on the drive's PCB, allowing notebook users to install it easily. Finally, the Intel chipset-based caching option combines either a 2.5-in or mSATA SSD with a standard hard drive on desktop or mobile platforms, allowing the most flexibility of any hybrid solution. Each has advantages for specific consumers, with varying performance levels to match."
Um (Score:1)
Re: (Score:2, Informative)
An SSD's maximum-write limit is high enough that, over a typical lifespan, a mechanical hard drive is more likely to fail first.
Wrong approach. (Score:2)
Putting both in the same can is the wrong way to do it.
A better way is to use the SSD and hard disk as tiered storage. The OS is more likely to make good decisions about what to store where, so think in terms of overmounting the SSD onto the HD file system.
When the OS reads a file from the HD, it looks at the access pattern, and if the file gets accessed frequently, it writes it out to the SSD. If a file hasn't been accessed for a while, it gets deleted from the SSD if it's clean, or written back to the HD and then deleted if it's dirty.
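The parent's tiering scheme can be sketched in a few lines. Everything here (the promotion threshold, the eviction timeout, the in-memory dicts standing in for devices) is an illustrative assumption, not any shipping OS design:

```python
import time

class TieredStore:
    """Toy model of OS-level SSD/HDD tiering based on access frequency."""

    PROMOTE_HITS = 3        # reads before a file is copied to the SSD (assumed)
    EVICT_AFTER = 3600.0    # seconds of inactivity before SSD eviction (assumed)

    def __init__(self):
        self.hdd = {}       # path -> data (authoritative copy)
        self.ssd = {}       # path -> (data, dirty_flag)
        self.stats = {}     # path -> (hit_count, last_access_time)

    def read(self, path):
        now = time.time()
        hits, _ = self.stats.get(path, (0, now))
        self.stats[path] = (hits + 1, now)
        if path in self.ssd:
            return self.ssd[path][0]        # fast path: served from SSD
        data = self.hdd[path]
        if hits + 1 >= self.PROMOTE_HITS:
            self.ssd[path] = (data, False)  # hot file: promote a clean copy
        return data

    def evict_idle(self):
        now = time.time()
        for path in list(self.ssd):
            _, last = self.stats.get(path, (0, 0.0))
            if now - last > self.EVICT_AFTER:
                data, dirty = self.ssd.pop(path)
                if dirty:
                    self.hdd[path] = data   # write back before dropping
```

The dirty flag is what makes eviction cheap for read-mostly files: a clean copy can simply be dropped, as the comment describes.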
Re: (Score:1)
Re: (Score:2)
Yeah, well, you're wrong: http://maxschireson.com/2011/04/21/debunking-ssd-lifespan-and-random-write-performance-concerns/ [maxschireson.com]
Re: (Score:1)
That analysis is for high-end enterprise-class SSDs that use SLC memory. The lifespan for consumer-class MLC-based SSDs is much worse.
Re: (Score:2)
That analysis is for high-end enterprise-class SSDs that use SLC memory. The lifespan for consumer-class MLC-based SSDs is much worse.
And consumer drives are generally less write-heavy as well. Newer enterprise-class SSDs are MLC too. Given how cheap MLC is, you can increase the amount of flash available to make up for the lower erase-cycle count, and with deduplication like the SandForce drives use, you write less anyway (I wonder how SF drives do GC).
The simple fact is that firmware errors or user errors will lose your data before the flash wears out.
Re:Um (Score:4, Insightful)
SSD controllers are good enough now that I wouldn't worry about the MLC flash in my laptop's SSD for general use, but I'd take a very close look at the numbers if I was using it to do anything that was write heavy (like video work or building a big codebase).
Re: (Score:3)
According to Seagate the Momentus XT will fail back to being a regular hard drive if flash failure is detected by the controller. All data in flash is also stored on the drive, the SSD part only caches a copy of already stored data for faster read performance.
Re: (Score:2)
According to Seagate the Momentus XT will fail back to being a regular hard drive if flash failure is detected by the controller. All data in flash is also stored on the drive, the SSD part only caches a copy of already stored data for faster read performance.
That's obviously true for reads, but is it true for writes?
These hybrid devices typically have a battery (excuse me, "super capacitor") to flush any cached writes out to disk.
But what if the OS thinks data was written (because it went to the SSD cache successfully), but flushing from cache to disk fails because something broke on the SSD side?
SSD controllers haven't been exactly stellar in terms of reliability so far.
For my money, I just got two 256 GB Crucial M4s, and I do daily full-image backups (excludi
Re: (Score:2)
That's obviously true for reads, but is it true for writes?
The Momentus XT only caches reads. Writes (i.e., data from the computer to the drive) completely bypass the cache.
If you can add 4-8GB of RAM, you're better off spending your money on that and a standard 3.5" mechanical hard drive. The Momentus XT is really only useful in laptops.
Re: (Score:2)
SSD controllers haven't been exactly stellar in terms of reliability so far.
Yeah. This post is quasi a dupe of one from a few days ago [slashdot.org] where this eye-opening message was posted [slashdot.org].
I have converted most of my systems to SSDs for the system disk, and now I'm scared. I'm sure there's an OCZ somewhere in there... Add to this the risk of mechanical failure and the added complexity, and I'm really not sure I want a hybrid.
Re:Um (Score:5, Informative)
Doesn't really matter.
Anand from Anandtech writes [anandtech.com]:
My personal desktop sees about 7GB of writes per day. That can be pretty typical for a power user and a bit high for a mainstream user but it's nothing insane. ...
If I never install another application and just go about my business, my drive has 203.4GB of space to spread out those 7GB of writes per day. That means that in roughly 29 days, if my SSD wear-levels perfectly, I will have written to every single available flash block on my drive. Tack on another 7 days if the drive is smart enough to move my static data around to wear-level even more thoroughly. So we're at approximately 36 days before I exhaust one of my ~10,000 write cycles. Multiply that out and it would take 360,000 days of using my machine for all of my NAND to wear out; once again, assuming perfect wear leveling. That's 986 years. Your NAND flash cells will actually lose their charge well before that time comes, in about 10 years.
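Anand's arithmetic checks out; a quick reproduction using only the figures from the quote:

```python
# Figures from the quote above; the erase-cycle count is the ~10,000
# the author assumes for his drive.
capacity_gb = 203.4
writes_per_day_gb = 7.0
erase_cycles = 10_000

days_per_pass = capacity_gb / writes_per_day_gb   # ~29 days per full pass
days_per_pass += 7                                # credit for shuffling static data
total_days = days_per_pass * erase_cycles         # ~360,000 days
total_years = total_days / 365                    # close to the ~986 years quoted
```

The exact year count depends on how you round the 36-day figure, but the order of magnitude (centuries, not years) is the point of the quote.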
Re: (Score:2)
Multiply that out and it would take 360,000 days of using my machine for all of my NAND to wear out
The number to be concerned about isn't when all of the cells wear out, it's when one more than you need for cell redundancy wears out.
Flashcache (Score:2, Interesting)
Facebook released their Flashcache solution some time ago. It works similarly to ReadyBoost et al., except it works on Linux and "pairs" an SSD with a hard drive. Very useful.
http://www.facebook.com/note.php?note_id=388112370932
Re: (Score:2)
See also bcache: http://bcache.evilpiepirate.org/MoreInformationAboutBcache [evilpiepirate.org]
(though I'm not sure why you wouldn't just put your fs journal on the ssd directly if you could)
Seagate Momentus XT *not* new (Score:3)
Seagate's new Momentus XT 2.5-in solution
I've had a 500 GB version in my laptop since they came out last year (summer, I think). And yes, it's much faster than a typical 7200 RPM drive.
Re: (Score:2)
Yes, but the technology is essentially the same (as the article summary described it) - flash memory on the PCB (the main change is an upgraded SATA 6Gb/s interface).
Filesystem (Score:3, Interesting)
What I want is a filesystem that can use a partition of an SSD and a partition of a rotating magnetic disk. Metadata, directories and small files on the SSD, big files on the rotating disk.
This ought to be fairly simple - anyone fancy hacking ext4?
Re: (Score:1)
Since it ought to be fairly simple: Simply do it and mail the diffs.
Re: (Score:1)
Mirror an SSD and HDD, with either interleaved reads or reads exclusively from the SSD - voilà.
Writes are slow (though they may look fast, depending on the mirroring algorithm), but reads are going to be either faster than HDD (when interleaving) or a lot faster than HDD (when only reading from the SSD).
To ensure they remain identical, some reads should be from both drives, so don't expect everything to be SSD-speed, though.
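A minimal sketch of the read-scheduling idea above. The policy names and the scrub fraction are invented knobs for illustration, not options of any real RAID implementation:

```python
import random

def choose_read_device(policy="ssd_only", scrub_fraction=0.05):
    """Pick which mirror leg services a read.

    A small fraction of reads goes to *both* legs so silent divergence
    between the copies can be detected, per the parent comment.
    """
    if random.random() < scrub_fraction:
        return "both"                          # verify the copies agree
    if policy == "ssd_only":
        return "ssd"                           # always use the fast leg
    if policy == "interleave":
        return random.choice(["ssd", "hdd"])   # spread load across both legs
    raise ValueError(f"unknown policy: {policy}")
```

With `scrub_fraction` above zero, average read latency sits between pure-SSD and pure-HDD speed, which is the trade-off the comment warns about.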
Adaptec Hybrid RAID (Score:3)
Adaptec has a new technology called Hybrid RAID, which uses 50% SSD/50% HDD in RAID 1 or RAID 10. Reads are serviced by the SSD(s) only. It seems to me you might need to short-stroke the HDD in order to make this a reasonable approach.
Their new 6E series is very inexpensive for 6Gb SAS cached (128MB) hardware RAID; the 6805E (8 int. ports) is about $225 retail:
https://www.adaptec.com/en-us/products/controllers/hardware/sas/entry/sas-6805e [adaptec.com]
I'm no shill or fanboy; I just found this interesting and relevant.
Re: (Score:2)
It seems to me you might need to short-stroke the HDD in order to make this a reasonable approach.
Well gee, I'll install a shortened crank just as soon as I finish my valve job.
I'm no shill or fanboy; I just found this interesting and relevant.
Well, when you get done short-stroking, maybe you could explain what you were talking about.
Re: (Score:2)
Well gee, I'll install a shortened crank just as soon as I finish my valve job.
For what purpose? Please elaborate.
Well, when you get done short-stroking, maybe you could explain what you were talking about.
Did I say I was going to short-stroke a disk? I was talking about anyone who buys an Adaptec 6E controller for Hybrid RAID.
The reason I suggested short-stroking the HDDs, is due to the capacity and price/capacity differences between SSDs and HDDs... SSDs and HDDs of equal capacity will likely be either very slow or very expensive. Given that the 6E series is aimed at budget workstations, both of those options are probably not ideal.
Re: (Score:2)
Re: (Score:1)
Short-stroking is restricting a HD to the outer area of the disk, which is higher-performance. This also improves access times, since the maximum distance the head has to travel is shorter. The drive's capacity is decreased significantly depending on how much you restrict the area; e.g., restricting 1 TB to 100 GB might cut access times by 50% and bring your minimum and average transfer rates close to your max rate.
Tom's hardware article here: http://www.tomshardware.com/reviews/short-stroking-hdd,2157.html [tomshardware.com]
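A back-of-the-envelope check of the "cut access times by 50%" claim, using assumed (not measured) drive figures:

```python
# Illustrative 7200 RPM drive figures; none of these are measurements
# from the Tom's Hardware article.
full_seek_ms = 8.5    # average seek over the whole surface (assumed)
rotation_ms = 4.17    # half a revolution at 7200 RPM

# Confining the head to a small outer band shrinks the average seek
# distance; a 4x seek reduction is an assumed, plausible figure.
short_seek_ms = full_seek_ms / 4

full_access = full_seek_ms + rotation_ms
short_access = short_seek_ms + rotation_ms
improvement = 1 - short_access / full_access   # roughly the 50% cut claimed
```

Note that rotational latency is unchanged by short-stroking, which is why the access-time gain flattens out no matter how hard you restrict the capacity.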
Re: (Score:2)
Short-stroking a HDD refers to telling the firmware that a disk has fewer sectors than it really has. An easy example is setting a 1.5TB drive as a 500GB one. That means you only use the INNER 1/3 of the disk (the fastest part). Additionally, you reduce head movement.
Re: (Score:2)
Short-stroking a HDD refers to telling the firmware that a disk has fewer sectors than it really has. An easy example is setting a 1.5TB drive as a 500GB one. That means you only use the INNER 1/3 of the disk (the fastest part). Additionally, you reduce head movement.
But but but...how is the situation when the HDD has multiple platters? Assuming that this 1.5TB disk has 3 x 500GB platters, wouldn't short-stroke like that mean that the whole disk area is used, but only one platter? Or is the data interleaved across platters?
Re: (Score:1)
But but but...how is the situation when the HDD has multiple platters? Assuming that this 1.5TB disk has 3 x 500GB platters, wouldn't short-stroke like that mean that the whole disk area is used, but only one platter? Or is the data interleaved across platters?
The data is interleaved across the platters. If you look at the access times Tom's hardware got from short-stroking, you can see that they drop considerably. This is because the head moves over a much shorter range.
http://www.tomshardware.com/reviews/short-stroking-hdd,2157-5.html [tomshardware.com]
Re: (Score:2)
One criticism. It is the outer edge of the disk that is used, because it is faster.
Re: (Score:2)
Has Adaptec come up with a *usable* utility to manage these HBAs? Last I checked, all they had was arcconf which, if you could find it, sucked even worse than LSI's megacli.
I'm sorry, I have no idea; I want a 6805E but I don't have one yet. I've only used Adaptec's x64 Windows GUI utility with a Supermicro-branded Adaptec 3Gb SAS ZCR adapter. I had no complaints with Adaptec's software; the ZCR card's performance was bad enough to keep me from noticing. The non-RAID LSI U320 and Tekram U160 cards I used before that had firmware-based configuration only.
In what way were Adaptec and LSI's CLI utilities deficient (reliability, lack of control, etc.)?
Re: (Score:2)
Pointless (Score:2)
Reading from SSD is insanely faster than reading from SAS. Writing to SSD is much slower. There is no way around that.
These hybrid products are simply a futile attempt to come up with a cheap alternative to what is really needed: adding a decent cache to SSD controllers so the write buffer is big enough to mitigate the write penalty, and adding enough processing power to perform destaging properly.
This being said - if someone comes up with a way to read from SSD and write to SAS, then it's a winner... b
Re: (Score:2)
Reading from SSD is insanely faster than reading from SAS. Writing to SSD is much slower. There is no way around that.
These hybrid products are simply a futile attempt to come up with a cheap alternative to what is really needed: adding a decent cache to SSD controllers so the write buffer is big enough to mitigate the write penalty, and adding enough processing power to perform destaging properly.
This being said - if someone comes up with a way to read from SSD and write to SAS, then it's a winner... but the magic part that brings the written bit from the SAS to be read by the SSD is the million-dollar catch.
Until then, those hybrids are just a Fisher Price implementation of sub-volume tiering.
It is just a little more code on the controller: direct writes to the HD and reads to the SSD; sync happens in the background, using a flag (a toggled bit in the header) on the SSD's copy of the file to indicate that changes are being written to the HD and that reads must be directed there until the file is synced again.
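The scheme described above can be sketched as a toy model. The dicts and the flag set are illustrative stand-ins for on-disk structures, not real controller firmware:

```python
class HybridBlock:
    """Toy model of the read-from-SSD / write-to-HD idea above.

    The 'syncing' set stands in for the per-block header bit the comment
    describes: while a block is in it, the SSD copy is stale and reads
    must be redirected to the spindle.
    """
    def __init__(self):
        self.hdd = {}          # authoritative copies
        self.ssd = {}          # fast read copies
        self.syncing = set()   # blocks whose SSD copy is stale

    def write(self, block, data):
        self.hdd[block] = data     # writes always land on the spindle
        self.syncing.add(block)    # mark the SSD copy stale

    def read(self, block):
        if block in self.syncing or block not in self.ssd:
            return self.hdd[block]   # redirect until resynced
        return self.ssd[block]       # fast path

    def background_sync(self):
        for block in list(self.syncing):
            self.ssd[block] = self.hdd[block]   # refresh the SSD copy
            self.syncing.discard(block)
```

As the reply below this comment points out, the hard part hidden in `background_sync` is doing this while both devices are under live I/O; the sketch ignores all of that.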
Re: (Score:2)
Reading from SSD is insanely faster than reading from SAS. Writing to SSD is much slower. There is no way around that.
These hybrid products are simply a futile attempt to come up with a cheap alternative to what is really needed: adding a decent cache to SSD controllers so the write buffer is big enough to mitigate the write penalty, and adding enough processing power to perform destaging properly.
This being said - if someone comes up with a way to read from SSD and write to SAS, then it's a winner... but the magic part that brings the written bit from the SAS to be read by the SSD is the million-dollar catch.
Until then, those hybrids are just a Fisher Price implementation of sub-volume tiering.
It is just a little more code on the controller: direct writes to the HD and reads to the SSD; sync happens in the background, using a flag (a toggled bit in the header) on the SSD's copy of the file to indicate that changes are being written to the HD and that reads must be directed there until the file is synced again.
Yeah, on paper this is easy enough; anyone can draw a quick diagram... however, reality is much trickier. Everything hidden under that "sync" concept is incredibly complex, because both nodes are actively working on I/O and, because of physics, the backend spindles are slow; it's like reordering balls while you are juggling. You cannot just pick a source and a destination, or order things like a process scheduler; you need to take into account the evolving state and capabilities of many components (s
Re: (Score:2)
Exactly. It is easy to do... but difficult to do well.
Re: (Score:2)
Where do I find a SAS drive that can do a couple of thousand write IOPS?
I wouldn't claim to be an SSD exper
Re: (Score:1)
Where do I find a SAS drive that can do a couple of thousand write IOPS?
I wouldn't claim to be an SSD expert, but I'm pretty sure that happened a year or two ago.
Enterprise-grade SSD will NOT give you 2000 IOPS on writes. Best-case scenario, 5-10x less. If all the planets are aligned and the wind is blowing from the right direction, on reads you might get 1500 IOPS. Might. And we are talking about a five-digit price tag for that kind of drive. As for consumer-grade, well, you get what you pay for.
The only way to get 1000+ IOPS on write, on any storage technology that is not a military secret, is via some kind of cache (or a power surge...). There are just no controllers tha
Re: (Score:2)
I think you need to expand a bit more on exactly what and how you're benchmarking, because even consumer-grade (i.e., buy it from Newegg) SLC drives are reported to hit a couple of thousand IOPS in 4K random-write benchmarks. 1500 read also seems ludicrously pessimistic.
(Not to mention even 350-400 IOPS is still twice the performance you'd get out o
Re: (Score:2)
I think you need to expand a bit more on exactly what and how you're benchmarking, because even consumer-grade (i.e., buy it from Newegg) SLC drives are reported to hit a couple of thousand IOPS in 4K random-write benchmarks. 1500 read also seems ludicrously pessimistic.
(Not to mention even 350-400 IOPS is still twice the performance you'd get out of a 15k spindle.)
Wise quote from Wikipedia here: "As with any benchmark, IOPS numbers published by storage device manufacturers do not guarantee real-world application performance". To get real numbers, you need to call a vendor such as Hitachi or IBM and ask what they feel comfortable putting in an SLA for sustained IOPS (not the theoretical maximum). If you get an answer, it will match the numbers any storage admin will give you:
SSD: Read 1500, Write 250-400
FC or SAS2: 200
SATA: 80
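Taking those sustained figures at face value, the write gap works out like this (the burst size and the 325 IOPS midpoint for the SSD's 250-400 write range are illustrative choices):

```python
# Sustained-IOPS figures from the comment above (SLA-style numbers,
# not benchmark peaks). 325 is the midpoint of the 250-400 SSD write range.
iops = {"ssd_write": 325, "fc_sas2": 200, "sata": 80}

ios = 100_000   # a hypothetical burst of small random writes
seconds = {dev: ios / rate for dev, rate in iops.items()}
# SSD finishes in ~308 s vs 500 s for FC/SAS2 and 1250 s for SATA:
# better, but nothing like the 7-8x gap those same numbers give on reads.
```

This is the parent's argument in miniature: by these figures, SSDs win on sustained writes by less than 2x over 15k spindles, while winning on reads by nearly an order of magnitude.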
If you look
Re: (Score:2)
adding a decent cache on SSD controllers so the write buffer is big enough to mitigate the write penalty
You mean like using an HDD as a cache?
Re: (Score:2)
adding a decent cache on SSD controllers so the write buffer is big enough to mitigate the write penalty
You mean like using an HDD as a cache?
The key word here would be "decent".
Performance numbers (Score:3)
You can purchase 24 GB of RAM for less than $200 today. I would love to see a side-by-side performance and energy-consumption comparison with someone who decided to spend their extra money on DRAM.
I suspect what you'll see is that after a day all read operations are resolved instantly from the OS disk cache without the performance and power hit of flash. If you have write intensive workloads the current crop of SSDs would not be for you anyway.
Re:Performance numbers (Score:5, Insightful)
Re: (Score:2)
From a performance perspective it might be interesting, but from a power consumption perspective the SSD blows away a handful of DDR3 sticks
Higher density, smaller process nodes, and lower operating voltages have cut DRAM power consumption many times over in the last few years. The latest DDR3 can run 24GB on 7 watts.
Whether you see better or worse power consumption depends on your workload (how much longer or shorter the computer needs to run to get the same job done) under each SSD-vs-DRAM usage scenario.
Re: (Score:2)
The DDR3 just can't compete. It might score favorably against specific SSDs, but that would be cherry-picking bad SSDs to make stacking up DDR3 look better. Your proposition was that "after a day all read operations are resolved instantly from the OS disk cache", which means you've been burning 7W all day long on your cache. It's simply not possible, in that scenario, for DDR3 to win on power consumption.
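The "burning 7W all day long" point is easy to quantify; the SSD idle figure below is an assumed ballpark, not a measurement:

```python
dram_watts = 7.0        # 24 GB of recent DDR3, per the comment above
ssd_idle_watts = 0.6    # assumed idle draw for a typical consumer SSD

hours_per_day = 24
dram_wh = dram_watts * hours_per_day       # 168 Wh/day to keep a RAM cache hot
ssd_wh = ssd_idle_watts * hours_per_day    # ~14 Wh/day under the same assumption
```

Under these assumptions the always-on RAM cache costs roughly an order of magnitude more energy per day than an idling SSD, which is the crux of the disagreement in this subthread.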
Re: (Score:2)
Re: (Score:2)
Just allow the OS to flip a bit and gain direct access to the flash part, too, for those OSes that are aware of such.
These products seem to be for people who can't do that.
I'm one - in my laptop anyway, I have a Momentus XT. It's a small boost, but I find it worthwhile. And I have nowhere else to put some flash.
But at the office, I have a few SSD's inside my servers. I use them for ZFS L2ARC and ZIL. They can also be used for bcache and ext4 journals. I have extra power and SATA headers in those things
thunk different (Score:2)
for some reason I thought this article might be about Priuses or wind/solar power, but.....
Onboard SSD (Score:2)
I just built a system with this motherboard, an