Wear Leveling, RAID Can Wipe Out SSD Advantage
storagedude writes "This article discusses using solid state disks in enterprise storage networks. A couple of problems noted by the author: wear leveling can eat up most of a drive's bandwidth and make write performance no faster than a hard drive's, and using SSDs with RAID controllers brings up its own set of problems. 'Even the highest-performance RAID controllers today cannot support the IOPS of just three of the fastest SSDs. I am not talking about a disk tray; I am talking about the whole RAID controller. If you want full performance of expensive SSDs, you need to take your $50,000 or $100,000 RAID controller and not overpopulate it with too many drives. In fact, most vendors today have between 16 and 60 drives in a disk tray and you cannot even populate a whole tray. Add to this that some RAID vendors' disk trays are only designed for the performance of disk drives and you might find that you need a disk tray per SSD drive at a huge cost.'"
Slightly flawed study. (Score:5, Insightful)
This assumes that RAID controller manufacturers won't be making any changes, though.
RAID controllers have relied for years on millisecond access times, so why spend a lot of money on an ASIC and subsystem that can go faster? Take a RAID card designed for (relatively) slow spinning disks, attach it to SSDs, and of course the RAID card is going to be a bottleneck.
However, subsystems are going to be designed to work with SSDs and their much lower access times. When that happens, this so-called 'bottleneck' is gone. You know every major disk subsystem vendor is working on these. It sounds like a disk vendor is sponsoring 'studies' to convince people not to invest in SSD technology now, knowing that a lot of companies are looking at big purchases this year because of the age of their equipment after the downturn.
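To put rough numbers on the access-time point (the per-command controller cost below is an assumption for illustration, not a measurement): a fixed firmware overhead that is noise against millisecond media dominates against microsecond media.

    # Why a controller sized for millisecond media bottlenecks
    # microsecond media: its fixed per-command cost stops being
    # negligible. All latencies are illustrative assumptions.

    CONTROLLER_OVERHEAD_S = 50e-6  # assumed ASIC/firmware cost per I/O

    for name, media_latency_s in [("15K RPM HDD", 5e-3), ("SSD", 100e-6)]:
        total = media_latency_s + CONTROLLER_OVERHEAD_S
        print(f"{name}: controller overhead is "
              f"{CONTROLLER_OVERHEAD_S / total:.0%} of each I/O")

    # 15K RPM HDD: controller overhead is 1% of each I/O
    # SSD: controller overhead is 33% of each I/O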
Re:Slightly flawed study. (Score:3, Insightful)
The article is talking about what's available today. It isn't saying "SSDs will never be suitable"; it's saying they aren't suitable today. Why? Because none of the hardware infrastructure available is fast enough.
This study seems deeply confused in a specific way (Score:5, Insightful)
"Even the highest-performance RAID controllers today cannot support the IOPS of just three of the fastest SSDs. I am not talking about a disk tray; I am talking about the whole RAID controller. If you want full performance of expensive SSDs, you need to take your $50,000 or $100,000 RAID controller and not overpopulate it with too many drives. In fact, most vendors today have between 16 and 60 drives in a disk tray and you cannot even populate a whole tray. Add to this that some RAID vendor's disk trays are only designed for the performance of disk drives and you might find that you need a disk tray per SSD drive at a huge cost."
That sounds pretty dire. And it does in fact mean that SSDs won't be neat drop-in replacements for some legacy infrastructures. However, step back for a minute: Why did traditional systems have $50k or $100k RAID controllers connected to large numbers of HDDs? Mostly because the IOPS on an HDD, even a 15K RPM monster, sucked horribly. If 3 SSDs can swamp a RAID controller that could handle 60 drives, that is an overwhelmingly good thing. In fact, you might be able to ditch the pricey RAID controller entirely, or move to a much smaller one, if 3 SSDs can do the work of 60 HDDs.
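A quick sanity check with ballpark per-drive figures (assumed here, not taken from the article):

    # How many 15K RPM HDDs' worth of random I/O do three SSDs
    # represent? Both per-drive figures are ballpark assumptions.

    HDD_IOPS = 180        # ~15K RPM drive on small random I/O
    SSD_IOPS = 30_000     # a fast enterprise SSD of the day

    ssd_total = 3 * SSD_IOPS
    print(f"3 SSDs ~= {ssd_total:,} IOPS ~= {ssd_total // HDD_IOPS} HDDs")
    # 3 SSDs ~= 90,000 IOPS ~= 500 HDDs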
Now, for systems where bulk storage capacity is the point of the exercise, the ability to hang tray after tray full of disks off the RAID controller is necessary. However, that isn't the place where you would be buying expensive SSDs. Even the SSD vendors aren't pretending that SSDs can cut it as capacity kings. For systems that are judged by their IOPS, though, the fact that the tradition involved hanging huge numbers of HDDs (often mostly empty, short-stroked so that reads and writes only touch the parts of the platter with the best access times) off extremely expensive RAID controllers shows that the past sucked, not that SSDs are bad.
For the obligatory car analogy: shortly after the début of the automobile, manufacturers of horse-drawn carriages noted the fatal flaw of the new technology: "With a horse-drawn carriage, a single buggy whip will serve to keep you moving for months, even years with the right horses. If you try to power your car with buggy whips, though, you could end up burning several buggy whips per mile, at huge expense, just to keep the engine running..."
Re:This study seems deeply confused in a specific way (Score:5, Insightful)
And we don't have to use Highlander Rules when considering drive technologies. There's no reason one has to build a storage array right now out of purely SSDs or purely HDDs. Sun showed in some of their storage products that by combining a few SSDs with several slower, large-capacity HDDs and ZFS, they could satisfy many workloads for a lot less money. (Pretty much the only thing a hybrid storage pool like that can't do is sustain very high IOPS of random reads across a huge pool of data with no read locality at all.)
I hope we see more filesystems support transparent hybrid storage like this...
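As a toy illustration of the idea (hypothetical code; ZFS's actual ARC/L2ARC machinery is far more sophisticated than an LRU cache):

    # Toy hybrid pool: a small, fast SSD cache in front of a large HDD
    # pool. Hypothetical sketch only; ZFS's ARC/L2ARC is much smarter.
    from collections import OrderedDict

    class HybridPool:
        def __init__(self, ssd_blocks):
            self.ssd = OrderedDict()   # block -> data, kept in LRU order
            self.ssd_blocks = ssd_blocks
            self.hdd = {}              # big, slow, authoritative store

        def read(self, block):
            if block in self.ssd:              # fast path: SSD hit
                self.ssd.move_to_end(block)
                return self.ssd[block]
            data = self.hdd.get(block)         # slow path: HDD read
            self._promote(block, data)         # cache it for next time
            return data

        def write(self, block, data):
            self.hdd[block] = data
            self._promote(block, data)

        def _promote(self, block, data):
            self.ssd[block] = data
            self.ssd.move_to_end(block)
            if len(self.ssd) > self.ssd_blocks:
                self.ssd.popitem(last=False)   # evict least recently used

Any workload with read locality mostly hits the SSD; the pathological case above (uniform random reads over a huge pool) misses the cache almost every time, which is exactly the parent's caveat.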
ZFS sidesteps the whole RAID controller problem (Score:4, Insightful)
If you use ZFS with SSDs, it scales very nicely. There isn't a bottleneck at a RAID controller. You can slam a pile of controllers into a chassis if you have bandwidth problems because you've bought 100 SSDs; by doing the RAID management in software rather than on a controller, ZFS can unify the whole lot into one giant high-performance array.
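Schematically (all throughput numbers assumed), the pool's ceiling becomes the sum of the host bus adapters rather than any single controller:

    # Host-side RAID (ZFS-style) stripes across every HBA in the box,
    # so the ceiling is the sum of the controllers, not any one of
    # them. All numbers are assumptions for illustration.

    HBA_IOPS_LIMIT = 100_000
    SSD_IOPS = 30_000
    hbas = 4                    # plain controllers, no RAID ASIC needed
    ssds_per_hba = 3

    per_hba = min(HBA_IOPS_LIMIT, ssds_per_hba * SSD_IOPS)
    print(f"pool ceiling ~{hbas * per_hba:,} IOPS")   # ~360,000 IOPS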
Re:This study seems deeply confused in a specific way (Score:4, Insightful)
I'll be interested to see, actually, how well the traditional 15K RPM SCSI/SAS enterprise screamer style HDDs hold up in the future. For applications where IOPS are supreme, SSDs (and, in extreme cases, DRAM-based devices) are rapidly making them obsolete in performance terms, and price/performance is getting increasingly ugly for them. The cost of fabricating flash chips continues to fall; the cost of building mechanical devices that can handle what those drives can isn't falling nearly as fast. For applications where sheer size or cost/GB is supreme, the fact that you can put SATA drives on SAS controllers is super convenient. It allows you to build monstrous storage capacity for impressively small amounts of money, and such arrays are still pretty zippy for loads that are low on random read/write and high on sustained read or write (like backups and nearline storage).
Is there a viable niche for the very high-end HDDs, or will they be murdered from above by their solid-state competitors, and from below by vast arrays of their cheap, cool-running, fairly low-power, consumer-derived SATA counterparts?
Also, since no punning opportunity should be left unexploited, I'll note that most enterprise devices are designed to run headless without any issues at all, so Highlander rules cannot possibly apply.
Re:Correction: (Score:4, Insightful)
I see this rather often on Slashdot and elsewhere. It's becoming a part of our collective culture, it seems.
Increasingly, it's not good enough that you said what you did say, and chose not to say what you clearly haven't said. There's this unspoken expectation that you also have to actively disclaim things you clearly are not claiming; otherwise some clever individual who really wants to be "right" is going to assume that your lack of a disclaimer amounts to tacit support of whatever was not disclaimed. This leads to a great deal of both intentional trolling and unintentional creation of strawmen. Both invite unnecessary follow-up posts designed to correct unfounded assumptions.
I wonder if this comes from modern politics where the audience is generally "hostile" in the sense that it's eager to twist words and demagogue positions with which it may disagree. That's a poor substitute for good reasoning, for showing that there are substantive reasons to disagree. So much of politics is done by handling complex, nuanced issues with 20-second soundbites that I can see how it happens there. On Slashdot, it seems to lower the quality of discussion for no good reason.
Re:Seek time (Score:5, Insightful)
One of the fastest platter drives on the market today is Seagate's 15,000 RPM Cheetah, and that one runs about $1/GB. Some of the 15K drives go for $3/GB.
SSDs are running about $3/GB across the board at the top end, a cost not dissimilar from the top-end platters, but they perform much better.
I understand that many people don't want to drop more than $120 on a drive, but many of the vocal ones are letting their unwillingness to do so contaminate their criticism. SSDs are actually priced competitively vs. the top-performing platter drives.
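The comparison gets even more lopsided if you price by IOPS instead of by capacity (the $/GB figures come from above; capacities and per-drive IOPS are rough assumptions):

    # $/GB vs $/IOPS at the price points quoted above. Capacities and
    # per-drive IOPS are rough assumptions for illustration.

    drives = {
        #  name             $/GB  GB    IOPS (small random I/O)
        "15K RPM Cheetah": (1.0,  450,    180),
        "enterprise SSD":  (3.0,  160, 30_000),
    }

    for name, (per_gb, gb, iops) in drives.items():
        price = per_gb * gb
        print(f"{name}: ${price:.0f} -> ${price / iops:.3f} per IOPS")

    # 15K RPM Cheetah: $450 -> $2.500 per IOPS
    # enterprise SSD: $480 -> $0.016 per IOPS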
Re:RAID for what? (Score:3, Insightful)
289 *days* of constant 24/7 writing to use up the flash.
This assumes the case of repeated sequential write to blocks 1 to n, where no wear levelling occurs. Consider that I first write once to 100% of the disk, then repeatedly: write sequentially to the first 25% of the disk n times, then write to the remaining 75% of the disk once. Dynamic wear levelling is out. How is a typical static wear levelling algorithm likely to kick in in a way which prevents an unacceptable slowdown during one pass, while at the same time squeezing out max writes to all physical blocks?
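For what it's worth, a toy static wear-leveller might look like the sketch below (entirely hypothetical; real FTL firmware is proprietary and far more involved). The threshold is what bounds the slowdown: migration runs roughly once per threshold's worth of erases of the hot region, not once per pass.

    # Toy static wear leveling: when the erase-count spread exceeds a
    # threshold, migrate cold data off the least-worn block so future
    # hot writes can land there. Hypothetical sketch only.

    WEAR_DELTA = 100  # assumed allowed spread before migration triggers

    def pick_migration_victim(erase_counts, cold_blocks):
        """erase_counts: per-physical-block erase tallies.
        cold_blocks: blocks holding rarely rewritten (static) data."""
        if max(erase_counts) - min(erase_counts) <= WEAR_DELTA:
            return None   # spread is fine; writes run at full speed
        # Relocating this block's contents is the background copy that
        # costs bandwidth, but it happens at most once per WEAR_DELTA
        # erases of the hot region, not once per sequential pass.
        return min(cold_blocks, key=lambda b: erase_counts[b])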
Now... and this is the key point... will a platter drive survive 289 days of constant max-throughput writing? The answer is no.
According to whom? Where are the independent test results for various specified duty cycles, performed in real time?
(Although perhaps all that matters is whether, at any time before m years is up, I will get a warranty replacement for my drive.)
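For reference, figures like "289 days" fall out of simple endurance arithmetic; every parameter below is an illustrative assumption, and real-world write amplification would shrink the result.

    # A "lifetime in days" endurance figure: total program/erase
    # capacity divided by sustained write rate. All parameters are
    # illustrative assumptions.

    capacity_gb = 256      # drive size
    pe_cycles   = 10_000   # P/E cycles per cell (MLC-era ballpark)
    write_mb_s  = 100      # sustained sequential write speed

    seconds = capacity_gb * 1_000 * pe_cycles / write_mb_s
    print(f"~{seconds / 86_400:.0f} days of nonstop writing")  # ~296 days

    # Write amplification from wear leveling and garbage collection
    # divides this; the figure assumes perfect leveling.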
Re:Seek time (Score:3, Insightful)
Which Seagate drives would this be? Those numbers sound very high for typical desktop drives.
Besides, sustained sequential speed is one thing, but what really gives a responsive "feel" on the desktop is random access, and any one of the post-JMicron-stutter SSDs will stomp even a small RAID of dual-ported enterprise drives into the dirt on random reads and writes, especially combined with the order-of-magnitude faster access time of an SSD.
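Rough numbers make the point (both latencies are assumptions, not benchmarks): anything that touches a few thousand scattered blocks is dominated by access time, not sequential speed.

    # Time to service 1,000 scattered 4K reads; dominated by access
    # time. Both latencies are rough assumptions, not benchmarks.

    reads = 1_000
    hdd_access_s = 13e-3    # ~8 ms seek + ~4 ms rotational + transfer
    ssd_access_s = 0.1e-3   # ~0.1 ms flash read

    print(f"HDD: {reads * hdd_access_s:.1f} s")   # HDD: 13.0 s
    print(f"SSD: {reads * ssd_access_s:.1f} s")   # SSD: 0.1 s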