Are RAID Controllers the Next Data Center Bottleneck?
storagedude writes "This article suggests that most RAID controllers are completely unprepared for solid state drives and parallel file systems, all but guaranteeing another I/O bottleneck in data centers and another round of fixes and upgrades. What's more, some unnamed RAID vendors don't seem to even want to hear about the problem. Quoting: 'Common wisdom has held until now that I/O is random. This may have been true for many applications and file system allocation methodologies in the recent past, but with new file system allocation methods, pNFS and most importantly SSDs, the world as we know it is changing fast. RAID storage vendors who say that IOPS are all that matters for their controllers will be wrong within the next 18 months, if they aren't already.'"
distribution (Score:2)
with things like Hadoop and CloudStore, pNFS, Lustre, and others, storage will be distributed. There will no longer be the huge EMC, NetApp, Hitachi, etc. central storage devices. There's no reason to pay big bucks for a giant single point of failure when you can use the Linus method of upload to the internet and let it get mirrored around the world. (In a much more localized manor.)
Re: (Score:3, Insightful)
That's fine for some things but I really don't want my confidential client work-product mirrored around the world. Despite all the cloud hype there is still a subset of data that I really do NOT want to let outside my corporate walls.
Re: (Score:3, Informative)
This is correct; there are laws on the books in most countries that prohibit exposing medical and other data to risk by putting it out in the open. Some have even moved to private virtual circuits, and a SAN with fast access to active files via solid-state storage works fine, moving less-accessed data to drive storage that is nonetheless quite fast, and SAS technology is faster than SCSI tech in throughput.
Re: (Score:3, Informative)
An example of SAS throughput pushing out 6 Gbps.
http://www.pmc-sierra.com/sas6g/performance.php [pmc-sierra.com]
Re: (Score:3, Informative)
SAS technology is faster than SCSI tech in throughput
"SCSI" does not mean "parallel cable"!
Sorry, pet peeve, but obviously Serial Attached SCSI [wikipedia.org] (SAS) is SCSI. All Fibre Channel storage speaks SCSI (the command set), as does all USB storage. And iSCSI? Take a wild guess. Solid state drives that plug directly into PCIe slots with no other data bus? Still the SCSI command set. Fast SATA drives? The high-end ones often have a SATA-to-SCSI bridge chip in front of SCSI internals (and SAS can use SATA cabling anyhow these days).
Pardon me, I'll just be over here
Re: (Score:2)
SCSI started life as a command set AND a physical signaling specification. The physical layer has evolved several times, but until recently was easily recognizable as a natural evolution of the original parallel SCSI. At the cost of some performance degradation and additional limitations (such as the number of devices), the generations of SCSI have interoperated with simple adapters.
SaS uses the same command set, but the physical is a radical departure (that is, it bears no resemblance) from the original SCSI and it's d
Re: (Score:2)
I remember the days when people reading Slashdot wanted to use precise terminology about technology - don't you? Sure you do. But go on with your "Serial Attached SCSI drives are not SCSI" and your "I double-clicked on the internet, but it's broken" and so on. Those of us who are still nerds will pedantically point out that all these storage technologies are "really SCSI drives, if you look closely" and we'll be right. Grumble grumble grumble.
Re: (Score:2)
In my world the T10 Technical Committee defines SCSI and to them SAS is a SCSI protocol. QED.
BTW, wtf is SaS?
Re: (Score:2)
Uhh, are you dense? Distributed storage doesn't mean you use someone else's servers. The software mentioned above is for internal use. Hadoop is used by yahoo for their internal cloud, and Lustre is used by a number of scientific labs that do military work.
Re: (Score:2)
That's what strong crypto is for.
Re: (Score:2)
Re: (Score:3)
For my own personal data, I'd consider that adequate. For data I'm legally required to keep secret - absolutely not. Your physical security design should force an attacker to steal both your keys and your data, each from a separate physical location, so that you can destroy one as soon as the other is stolen to prevent data loss. Electronic security of course focuses on compartmentalization and auditing, so that an inside attacker can only steal a small portion of the data, and can be caught and jailed aft
Re: (Score:2)
A 512-bit key basically provides you a security guarantee.
Only salesmen talk about guarantees in security. Everything is vulnerable, it's just a question of effort.
There are several multi-key solutions you can buy from reputable vendors for at-rest data encryption (3 of 5 keycards needed, or 2 of 5, or whatever). That's a good approach to protection against insiders. It wouldn't justify making the at-rest data publicly accessible, nor failing to compartmentalize access.
And yeah, the "Store the key and the data in different buildings" approach is just one securi
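For anyone curious what those k-of-n keycard products are doing underneath, here is a bare-bones sketch of Shamir-style threshold sharing in Python. It is an illustration of the idea only; real products wrap this in tamper-resistant hardware, key wrapping and auditing, and the field size and share handling here are just placeholders.

    # Minimal k-of-n threshold secret sharing (Shamir's scheme).
    # Requires Python 3.8+ for pow(x, -1, p) modular inverses.
    import random

    PRIME = 2**127 - 1  # field large enough for a ~128-bit secret

    def make_shares(secret, k, n):
        # Random polynomial of degree k-1 with f(0) = secret.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n + 1)]

    def recover(shares):
        # Lagrange interpolation at x = 0 recovers the secret.
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
        return total

    secret = random.randrange(PRIME)
    shares = make_shares(secret, k=3, n=5)   # the "3 of 5 keycards" case
    assert recover(shares[:3]) == secret     # any 3 of the 5 suffice
    assert recover(shares[1:4]) == secret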
Re: (Score:2)
Eh, properly designed systems using the big disk arrays certainly don't have a single point of failure. And their data is replicated to other big disk arrays in other locations. That's why they cost "the big bucks". Your cloud is fine for relatively low-speed, low-security, read-mostly data, but not for high-volume financial and healthcare systems.
Re: (Score:2)
In a much more localized manor
We're going to start putting data centers in big houses now?
Wait. You mean my SAN is Dead? (Score:5, Insightful)
Hardware RAID arrays are not exactly hopping off the shelf, and I think many shops are happy with Fibre Channel.
Let's do another reality check: this is enterprise class hardware. Are you telling me you can get SSD RAID/SAN in a COTS package at a cost comparable to whatever is available now? Didn't think so....
Let's face it, in this class of hardware things move much more slowly.
Re: (Score:2)
So previously it looked like (slowest to fastest): SATA (near-line) -> Fibre Channel (online) -> RAM cache
Now we'll have: SATA -> FC -> SSD -> RAM
And in a few years after the technology gets better a
Re: (Score:2)
Exactly: in the context of SANs, you typically don't have a RAID controller.
I think you underestimate performance requirements if you don't see a need for SSDs. Typical SANs operate in microsecond latencies across the cable plant, whereas mechanical disks have seek latencies in the milliseconds. Also, SSD throughput is already twice that of mechanical disks, and (read) IOPS aren't even comparable; SSDs are an order of magnitude faster. And j
Re: (Score:2)
BAD MATH (Score:5, Interesting)
I'm not going to believe an article that assumes that because you can do 55K IOPS for 512-byte reads, you can do the same number of IOPS for 8K reads, which are 16X larger, and then just extrapolate from there. Especially since most SSDs (at least SATA ones) right now top out around 200MB/s and the SATA interface tops out at 300MB/s. Besides, there are already real-world articles out there where guys with simple RAID0 SSDs are getting 500-600 MB/s with 3-4 drives using motherboard RAID, much less dedicated hardware RAID.
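For the skeptical, the back-of-the-envelope looks like this. The 55K IOPS and the 512B/8K block sizes are the article's figures; 300MB/s and ~200MB/s are the interface and drive ceilings mentioned above.

    # Why the 512B -> 8K IOPS extrapolation runs into a wall.
    iops = 55_000
    for block in (512, 8192):
        mb_per_s = iops * block / 1e6
        print(f"{block:>5} B blocks at {iops} IOPS -> {mb_per_s:.0f} MB/s")
    # 512 B : ~28 MB/s  (easily within SATA 3Gb/s)
    # 8 KiB : ~451 MB/s (well past the ~300 MB/s interface and the drive's
    #                    ~200 MB/s sequential limit, so the IOPS can't hold)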
Re:BAD MATH (Score:5, Insightful)
The last part of that sentence is particularly interesting in the context of this article. "Motherboard RAID" is, outside of the very highest end motherboards, usually just bog-standard software RAID with just enough BIOS goo to make it bootable. Hardware RAID, by contrast, actually has its own little processor and does the work itself. Of late, general purpose microprocessors have been getting faster, and cores in common systems have been getting more numerous, at a substantially greater rate than hardware RAID cards have been getting spec bumps (outside of the super high end stuff; I'm not talking about whatever EMC is connecting 256 Fibre Channel drives to, I'm talking about anything you could get for less than $1,500 and shove in a PCIe slot). Perhaps more importantly, the sophistication of OS support for nontrivial multi-disk configurations (software RAID, ZFS, storage pools, etc.) has been getting steadily greater and more mature, with a good deal of competition between OSes and vendors. RAID cards, by contrast, leave you stuck with whatever firmware updates the vendor deigns to give you.
I'd be inclined to suspect that, for a great many applications, dedicated hardware RAID will die (the performance and uptime of a $1,000 server with a $500 RAID card will be worse than a $1,500 server with software RAID, for instance) or be replaced by software RAID with coprocessor support (in the same way that encryption is generally handled by the OS, in software, but can be supplemented with crypto accelerator cards if desired).
Dedicated RAID of various flavors probably will hang on in high end applications (just as high end switches and routers typically still have loads of custom ASICs and secret sauce, while low end ones are typically just embedded *nix boxes on commodity architectures); but the low end seems increasingly hostile.
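To put a number on how cheap that work is for a modern CPU: the heaviest thing a software RAID layer computes is RAID5-style parity, which is just a byte-wise XOR across the stripe members. A toy Python version follows; the chunk size and member count are invented for the example.

    # Toy illustration of RAID5-style parity on a general-purpose CPU.
    import os

    CHUNK = 64 * 1024                                    # hypothetical 64K chunk
    data_chunks = [os.urandom(CHUNK) for _ in range(3)]  # 3 data chunks + 1 parity

    def xor_parity(chunks):
        parity = bytearray(chunks[0])
        for chunk in chunks[1:]:
            for i, b in enumerate(chunk):
                parity[i] ^= b
        return bytes(parity)

    parity = xor_parity(data_chunks)
    # "Rebuild" a failed chunk from the survivors plus parity:
    rebuilt = xor_parity([data_chunks[0], data_chunks[2], parity])
    assert rebuilt == data_chunks[1]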
Re: (Score:2)
The current problems with "motherboard RAID" are:
1. They can't take a BBU, so you either leave write caching turned on on the drives and lose data on an unexpected shutdown (possibly corrupting your array), OR turn write caching off on the drives and have incredibly poor write speeds.
2. The software (and probably the hardware) are nowhere near smart enough. They might tell you a drive is failing, they might not. If they do, they might rebuild the array successfully or may just corrupt it (and if it's your boot
Re: (Score:2)
Re: (Score:2)
It doesn't matter that SATA can do 300MB/s. That's just the interface line rate. Last I did benchmarks of 1TB drives (Seagate ES.2), they topped out at around 100MB/s. Drives still have a long way to go before they saturate the SATA bus. The only way that happens is if you are using port multipliers to reduce the number of host channels.
Re: (Score:3, Interesting)
I'm using a 30GB OCZ Vertex for my main drive on my windows machine and it benchmarks around 230MB/s _AVERAGE_ read speed. It cost $130 ($4.30/GB) when I bought
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Here's an HP 146GB 15K FC drive for over $1,000 [cdwg.com].
Are you sure you don't have your prices mixed up?
I'm sorry, maybe I wasn't clear enough. I totally agree it isn't for every workload; I think it works in a lot of instances,
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
When SATA 600 goes live, expect Intel and OCZ to jump right up to the 520MB/sec area as if it were trivial to do so... (because it is!)
Fusion-io has a PCIe flash solution that goes several times faster than these SATA300 SSDs. The problem is SATA. The problem is SATA. The problem is SATA.
Re: (Score:2)
Besides, there are already real-world articles out there where guys with simple RAID0 SSDs are getting 500-600 MB/s with 3-4 drives using motherboard RAID, much less dedicated hardware RAID.
It is unlikely "dedicated hardware RAID" would be meaningfully faster.
enterprise storage (Score:4, Insightful)
Storage has been the performance bottleneck for so long that it's a happy problem if you actually must increase the bus speeds, CPU processing, or memory on RAID cards to keep up. Seems to me the article (or at least the summary) was written by someone who hadn't been following enterprise storage for very long...
Re: (Score:3, Interesting)
That's kind of what I was thinking too. When you really start pushing the 300MB/s SATA gives, it's hard to find something to complain about. Most of my hard drives max out at like 60-100MB a second, and even the 15,000 RPM drives are not a great deal faster. Low latency, fast speeds, increased reliability. This could get interesting in the next few years. Heck, why not just build a RAID 0 controller into the logic card with a SATA connection and break the SSD into a bunch of little chunks and RAID 0 them all max p
Re: (Score:2)
Cost, mostly; you'd need tons of controllers, cache, etc. Plus you can already nearly saturate SATA 3G with any decent SSD (Intel, Vertex, etc.), so it's kind of pointless. The new Vertex and Intel SSDs are benchmarking at 250MB/s. No point in making them much faster until we have SATA 6G.
Re: (Score:2)
Heck why not just build a raid 0 controller into the logic card with a sata connection and break the ssd into a bunch of little chunks and raid 0 them all max performance right out of the box so you get the performance advantages of raid without the cost of a card and the waste of a slot?
Because an error anywhere nukes the whole shebang.
Re:enterprise storage (Score:5, Interesting)
Ah... pointing the finger at the storage... My favorite activity. Listening to DBAs, application writers, etc. point the finger at the EMC DMX with 256GB of mirrored cache and 4Gb/s FC interfaces. You point your finger and say, "I need 8Gb Fibre Channel!" Yet when I look at your HBA utilization over a 3-month period (including quarter end, month end, etc.), I see you averaging a paltry 100MB/s. Wow. Guess I could have saved thousands of dollars by going with 2Gb/s HBAs. Oh yeah, and you have a minimum of two HBAs per server. Running a Nagios application to poll our switch ports for utilization, the average host is running maybe 20% utilization of the link speed, and as you beg, "Gimme 8Gb/s FC", I look forward to your 10% utilization.
We've taken whole databases and loaded them into dedicated cache drives on the array, and surprise, no performance increase. DBAs and application writers have gotten so used to yelling "Add hardware!" that they forgot how to optimize their applications and SQL queries.
If storage was the bottleneck, I wouldn't be loading up storage ports (FAs) with 10-15 servers. I find it funny that the only devices on my 10,000 port SAN that can sufficiently drive IO are media servers and the tape drives (LTO-4) that they push.
If storage was the bottleneck there would be no oversubscription in the SAN or disk array. Let me know when you demand a single storage port per HBA, and I'm sure my EMC rep will take us all out to lunch.
I have more data than you. :)
Re:enterprise storage (Score:4, Insightful)
Ah... pointing the finger at the storage... My favorite activity. Listening to DBAs, application writers, etc. point the finger at the EMC DMX with 256GB of mirrored cache and 4Gb/s FC interfaces. You point your finger and say, "I need 8Gb Fibre Channel!" Yet when I look at your HBA utilization over a 3-month period (including quarter end, month end, etc.), I see you averaging a paltry 100MB/s. Wow. Guess I could have saved thousands of dollars by going with 2Gb/s HBAs. Oh yeah, and you have a minimum of two HBAs per server. Running a Nagios application to poll our switch ports for utilization, the average host is running maybe 20% utilization of the link speed, and as you beg, "Gimme 8Gb/s FC", I look forward to your 10% utilization.
You do sound like you know what you're doing, but there is quite a difference between average utilization and peak utilization. I have some servers that average less than 5% usage on a daily basis, but will briefly max out the connection about 5-6 times per day. For some applications, more peak speed does matter.
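To put numbers on how misleading a daily average can be, here is a toy example with all figures invented: a link that idles near 5% but saturates for six ten-minute bursts a day still averages under 10%.

    # How a low average hides saturating bursts (all numbers invented).
    samples = [5.0] * 1440                 # 1440 one-minute samples, ~5% baseline
    for start in range(0, 6 * 240, 240):   # six bursts spread across the day
        for minute in range(start, start + 10):
            samples[minute] = 100.0        # pinned at 100% for 10 minutes

    avg = sum(samples) / len(samples)
    peak = max(samples)
    print(f"average {avg:.1f}%  peak {peak:.0f}%")   # average ~9%, peak 100%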
Re: (Score:2)
What's the cost-benefit analysis of buying hardware that has headroom for those .1% peak events, vs data housekeeping and app/sql profiling? This is a management problem, not a technical one.
Re: (Score:2)
Depends on the business requirements and the number of end-points, but I wouldn't rule it out completely. For example, for production companies moving large amounts of video for short periods of time, it might be worth the difference between 4Gb and 8Gb Fibre Channel; I don't know. You're also assuming those peak eve
Re: (Score:2)
In my experience, DBAs and their fellow travelers in the application group like to point their finger at SANs and virtualization and scream about performance, not because the performance isn't adequate but because SANs (and virtualization) threaten their little app/db server empire. When they no longer "need" the direct attached storage, their dedicated boxes get folded into the ESX clusters and they have to slink back into their cubicles and quit being server & networking dilettantes.
Re:enterprise storage (Score:4, Insightful)
Sort of true, but not entirely accurate.
Is the on-demand response slow? Stats lie. Stats mislead. Stats are only stats. The systems I'm monitoring would use more I/O if they could. Those basic read/write graphs are just the start. How's the latency? Any errors? Pathing setup good? Are the systems queuing I/O requests while waiting for I/O service response?
And traffic is almost always bursty unless the link is maxed - you're checking out a nice graph of the maximums too, I hope? That average looks mighty deceiving when long periods are compressed. At an extreme, over months or years, data points can be days. Overnight + workday could = 50%. No big deal on the average.
I have a similar usage situation on many systems, but the limits are generally still storage-dependent issues like I/O latency (apps make a limited number of requests before requests start queuing), poorly grown storage (a few LUNs here, a few there, and everything is suddenly slowing down due to striping in one over-subscribed drawer), and sometimes unexpected network latency on the SAN (switch bottlenecks on the path to the storage).
Those graphs of i/o may look pitiful, but perhaps that's only because the poor servers can't get the data any faster.
Older enterprise SAN units (even just 4 or 5 years ago) kinda suck performance-wise. The specs are lies in the real world. A newer unit, newer drives, newer connects, and just like a server, you'll be shocked. What'cha know, those 4Gb cards are good for 4Gb after all!
Every year, there's a few changes and growth, just like in every other tech sector.
Re: (Score:2)
Well, boy, did I not expect this kind of reaction... I'm kinda on your side, really. I mean, here's someone saying that SSDs mean you're no longer starving for spindles... And I say "well, that's good, they were holding us back, we can do something better now, that's not a problem." On the other hand, it seems it's a lot more loaded politically in places that don't do this with just three admins and no dedicated storage admins, so I'll just shut up now cuz I hate politics. You guys have a nice day.
Re: (Score:2)
Who cares about average use? The cost is driven by the PEAK use. That is why the average use for HBAs is almost nothing, but you are paying double the money or more because of the 8 hours a month you need it to smoke. And woe betide the architect who suggests postponing a business meeting for 48 hours every month so he can save $20 million a year. Seriously.
Re: (Score:2)
Well, that was unexpected for me too. And you know, you are right. Real-world applications behave quite differently from how academic models say they would, because the models don't account for team limitations and the unavoidable mistakes (from the techies and from HR) that add up to something very significant on any project.
Too bad I hadn't let that academic misconception go yet. That is why I was surprised.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I've been waiting for the same thing; unfortunately SLC flash-based drives (the more expensive NAND flash with the higher lifespan) are still exceptionally expensive. But the good news is major SAN vendors are already offering SSD options. Everyone from EMC [emc.com] to Sun Microsystems [sun.com] is starting to include SSD drives in their storage products. While it would be very unusual for us to g
Hardware RAID becoming less relevant every day. (Score:2, Insightful)
The first question is really, why RAID an SSD? It's already more reliable than a mechanical disk, so that argument goes out the window. You might get some increased performance, but that's often not a big factor.
The second question is, with processors coming with 8 cores, why have some separate specialized controller that handles RAID and not just do it in software?
Re: (Score:3, Informative)
The second question is, with processors coming with 8 cores, why have some separate specialized controller that handles RAID and not just do it in software?
I much prefer s/ware RAID (Linux kernel dm_mirror); it removes a complicated piece of h/ware which is just another thing to go wrong. It also means that you can see the real disks that make up the mirror and so monitor them with the SMART tools.
OK: if you do raid5 rather than mirroring (raid1) you might want a h/ware card to offload the work to, but for many systems a few terabyte disks are big and cheap enough to just mirror.
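To make the monitoring point concrete, here is a rough sketch that flags degraded Linux md arrays by reading /proc/mdstat. The exact format shifts a little between kernel versions, so treat it as illustrative rather than production monitoring.

    # Flag software-RAID (md) arrays with a missing or failed member.
    import re

    def degraded_arrays(path="/proc/mdstat"):
        with open(path) as f:
            text = f.read()
        bad = []
        # Each array's status ends with something like "[2/2] [UU]";
        # an underscore marks a dead member, e.g. "[2/1] [U_]".
        for m in re.finditer(r"^(md\d+)\b.*?\[([U_]+)\]", text, re.M | re.S):
            if "_" in m.group(2):
                bad.append(m.group(1))
        return bad

    if __name__ == "__main__":
        print(degraded_arrays() or "all md arrays look healthy")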
Re: (Score:2, Informative)
Well, ZFS is great, but don't get it mixed up with software RAID. It's not. The storage redundancy algorithms used by ZFS are not the RAID algorithms, and using ZFS is much better than using EITHER hardware or software RAID.
ZFS provides performance and data integrity assurance that standard RAID does not, primarily because filesystem-level data is checksummed; it should be almost impossible for silent data corruption to occur at the storage device level, except in cases where the data writ
Re: (Score:2)
> OK: if you do raid5 ... .. you deserve to be shot.
There's nothing wrong with RAID5 in the right circumstances (large home server?), but if you use it instead of a backup you deserve to be shot.
Re: (Score:2)
Of course if you lose two drives everyone has to wait for you to restore it all from backup. There's also RAID6, but it just gives you a bit more leeway in the number of drives you can use.
Let's try that again (Score:2)
Re: (Score:2)
The first question is really, why RAID an SSD? It's already more reliable than a mechanical disk, so that argument goes out the window. You might get some increased performance, but that's often not a big factor.
The second question is, with processors coming with 8 cores, why have some separate specialized controller that handles RAID and not just do it in software?
RAID0 for speed. SSDs in RAID0 can perform 2.5-3X faster [hothardware.com] than a single drive. A RAID SSD array can challenge the speed of a Fusion-io card [hothardware.com] that costs several thousand dollars.
Now that the new, faster 34nm Intel SSDs can be preordered for under $250, it's reasonable for an enthusiast to buy 3-4 of them and throw them in a RAID0 array. Also, software (or built-in MB RAID) is fine -- a lot of sites have shown that 3 SSD drives is the sweet spot for price/performance using standard MB RAID controllers.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
If someone is spending the money on SSD then performance had better be a big factor!
Re: (Score:2)
That's only true to a point. If the reliability of the SSD gets to the point where it's about as likely as the RAID controller to fail, then the RAID controller is just an extra point of failure that will not increase your availability at all. However, AFAIK SSDs aren't that reliable yet so the RAID controllers are still worth it.
Re: (Score:2)
Please stop spreading baseless FUD.
Re: (Score:2)
What FUD? I said AFAIK. I haven't been following them closely. If I'm wrong please feel free to correct me instead of jumping down my throat.
Re: (Score:2)
the on board chips are not built for high speed / (Score:2)
the on board chips are not built for high speed / using all the ports at the max at one time.
Re: (Score:2)
This is where ZFS has some potential to become even more important than it already is.
The reason you RAID an SSD is to protect against silent data corruption, which SSDs are not immune from. While you don't necessarily need RAID for this with ZFS, it certainly makes it easier.
The point about the insane abundance of CPU power is one that ZFS specifically takes advantage of right out of the starting gate.
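A toy version of that end-to-end checksum idea, just to show its shape; this is not ZFS's actual on-disk format or checksum choice, and ZFS adds self-healing from redundant copies on top of detection.

    # Detecting silent corruption with a per-block checksum.
    import hashlib, os

    def write_block(store, key, data):
        store[key] = (hashlib.sha256(data).digest(), data)

    def read_block(store, key):
        checksum, data = store[key]
        if hashlib.sha256(data).digest() != checksum:
            raise IOError(f"silent corruption detected in block {key!r}")
        return data

    store = {}
    write_block(store, "blk0", os.urandom(4096))
    read_block(store, "blk0")                       # passes

    # Simulate a bit flip that the drive reports as a successful read:
    chk, data = store["blk0"]
    store["blk0"] = (chk, data[:100] + bytes([data[100] ^ 1]) + data[101:])
    try:
        read_block(store, "blk0")
    except IOError as err:
        print(err)                                  # the corruption is caught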
Re: (Score:2)
Re: (Score:2)
The second question is, with processors coming with 8 cores, why have some separate specialized controller that handles RAID and not just do it in software?
Transparency and simplicity. It's a lot easier dealing with a single device than a dozen.
iscsi, 10gig (Score:2)
Multiple interfaces and lots of block servers.
Does anyone actually still use NFS?
Re: (Score:2, Informative)
Of course. NFS provides an easy to use concurrent shared filesystem that doesn't require any cluster overhead or complication like GFS or GPFS.
Re: (Score:3, Informative)
Does anyone actually still use NFS?
Of course. It's nearly always fast enough, trivially simple to setup, and doesn't need complicated and fragile clustering software so that multiple systems can access the same disk space.
Re: (Score:2)
Where I work, we've only got a few petabytes of NFS storage. And it's only used for mission critical (in the literal meaning of the term -- no access to data, no work gets done, literally $millions lost if a deadline is blown) data.
NetApp doesn't seem to be having any trouble selling NFS, either.
So no, I don't think anyone uses NFS anymore.
Re: (Score:2)
Not everybody (hardly anyone) needs a single block device in a work environment. You might as well hang the hard drive in their systems if that's all you need; it's cheaper, faster and simpler. Also, block devices don't separate very well. You have to assign and reserve a certain block of space whether or not it's used.
NFS is much more granular that way: you put everything on a large block device, give it some permissions and you're good to go. Also, for shared data, sharing block devices might not be a good ide
Not quite (Score:4, Informative)
There may need to be some minor rethinking of controller throughput for read applications on smaller data sets for SSD. But right now, I regularly saturate the controller or bus when running sequential read/write tests against a large number of physical drives in a RAID 10 array, so it's not like that's anything new. Using SSD just makes it more likely that will happen even on random workloads.
There are two major problems with this analysis though. The first is that it presumes SSD will be large enough for the sorts of workloads people with RAID controllers encounter. While there are certainly people using such controllers to accelerate small data sets, you'll find just as many people who are using RAID to handle large amounts of data. Right now, if you've got terabytes of stuff, it's just not practical to use SSD yet. For example, I do database work for a living, and the only place we're using SSD right now is for holding indexes. None of the data can fit, and the data growth volume is such that I don't even expect SSDs to ever catch up--hard drives are just keeping up with the pace of data growth.
The second problem is that SSDs rely on volatile write caches in order to achieve their stated write performance, which is just plain not acceptable for enterprise applications where honoring fsync is important, like all database ones. You end up with disk corruption if there's a crash [mysqlperformanceblog.com], and as you can see in that article, once everything was switched to rely only on non-volatile cache, the performance of the SSD wasn't that much better than the RAID 10 system under test. The write IOPS claims of Intel's SSD products are garbage if you care about honoring write guarantees, which means it's not that hard to keep up with them after all on the write side in a serious application.
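A crude way to see that gap for yourself is to time writes with and without fsync; the absolute numbers depend entirely on the device, its cache settings and the filesystem, and the scratch filename below is arbitrary.

    # Micro-benchmark: cached writes vs. writes that honor fsync.
    import os, time

    PATH = "fsync_test.bin"          # scratch file on the device under test
    BLOCK = os.urandom(8192)
    N = 200

    def timed_writes(sync):
        fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
        start = time.perf_counter()
        for _ in range(N):
            os.write(fd, BLOCK)
            if sync:
                os.fsync(fd)         # force the block out to stable storage
        elapsed = time.perf_counter() - start
        os.close(fd)
        return N / elapsed           # effective write operations per second

    print(f"cached : {timed_writes(sync=False):10.0f} writes/s")
    print(f"fsynced: {timed_writes(sync=True):10.0f} writes/s")
    os.unlink(PATH)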
Mod Parent Up (Score:2)
The fact that SSD performance drops like a rock when you actually need to be absolutely sure the data makes it to disk is a huge factor in enterprise storage. No enterprise storage customer is going to accept the possibility that their data goes down the bit-bucket just because somebody tripped over the power cord. Enterprise databases are built around the idea that when the storage stack says data has been written, it has, in fact, been written. Storage vendors spend a great deal of money, effort, and complexity guar
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
You can't turn fsync into a complete no-op just by putting a cache in the middle. An fsync call on the OS side that forces that write out to cache will block if the BBWC is full, for example, and if the underlying device can't write fast enough without its own cache being turned on, you'll still be in trouble.
While the cache in the middle will improve the situation by coalescing writes into the form the SSD can handle efficiently, the published SSD write IOPS numbers are still quite inflated relative to what y
Re: (Score:2)
Yes, there's an improvement, but comparing read IOPS from an enterprise SSD to a short-stroked SATA disk on a purely performance basis isn't even close. We're talking orders of magnitude slower.
I think SSDs really shine when you get into situations where your performance requirements vastly outweigh your capacity requir
Re: (Score:2)
First a quick clarification: Intel X25 series SSDs do not use their RAM as a data writeback cache. Intel ships racks full of both M and E series drives, with those drives living in a RAID configuration. They couldn't pull that off if the array was corrupted on power loss. The competition had to start using large caches to reduce write stutters and increase random write performance, mostly in an attempt to catch up to Intel.
The parent article is a bit 'off' as far as bandwidth vs. IOPS on RAID controllers
Re: (Score:2)
There are two major problems with this analysis though. The first is that it presumes SSD will be large enough for the sorts of workloads people with RAID controllers encounter. While there are certainly people using such controllers to accelerate small data sets, you'll find just as many people who are using RAID to handle large amounts of data. Right now, if you've got terabytes of stuff, it's just not practical to use SSD yet. For example, I do database work for a living, and the only place we're using SSD right now is for holding indexes.
That's probably true for your databases, but are databases that measure in terabytes really the norm?
None of the data can fit, and the data growth volume is such that I don't even expect SSDs to ever catch up--hard drives are just keeping up with the pace of data growth.
The latest Intel SSDs already have room for 320 GB. These are low-end consumer disks. Once these things get popular you'll see a sharp increase in production volume. The growth *rate* of flash SSD is very, very high. They haven't caught up yet, but I'm quite sure they will, if only because hard disks seem to have only three advantages left (size, price and many years of experience with them).
All wrong. (Score:3, Informative)
1) Most high-end RAID controllers aren't used for file serving. They are used to serve databases. Changes in filesystem technology don't affect them one bit, as most of the storage allocation decisions are made by the database.
2) Assuming that an SSD controller that can pump 55K IOPS with 512B I/Os can do the same with 4K I/Os is stupid and probably wrong. That is Cringely math; could this guy possibly be as lame?
3) The databases high-end RAID arrays get mostly used for do not now, and never have, used much bandwidth. They aren't going to magically do so just because the underlying disks (which the front-end server never even sees) can now handle more IOPS.
All SSDs do is flip the Capacity/IOPS equation on the back end. Before, you ran out of drive IOPS before you ran out of capacity. Now, you get to run out of capacity before you run out of IOPS on the drive side.
Even if you have sufficient capacity (due to the rapid increase in SSD capacity), you are still going to run out of IOPS capacity on the RAID controller before you run out of IOPS or bandwidth on the drives. The RAID controller still has a lot of work to do with each I/O, and that isn't going to change just because the back-end drives are now more capable.
SirWired
Re: (Score:2)
Thank you so much for summarizing that point so succinctly, I'm stealing that line, hope you don't mind
Re: (Score:3, Interesting)
Well said. I've found using an ICH-10R kills that overhead, and I have seen excellent IOPS scaling with SSDs right on the motherboard controller. I've hit over 190k IOPS (single sector random read) with queue depth at 32, using only 4 X25-M G1 units. The only catch is the ICH-10R maxes out at about 650-700 MB/sec on throughput.
Allyn Malventano
Storage Editor, PC Perspective
Re: (Score:2)
For a file server, sure ZFS is a great solution, because most of the data just sits there and is never modified. NetApp has used copy-on-write for years in WAFL for this reason.
But the writers of databases are not morons, and techniques such as copy-on-write are not new; the DBs already do what they can to optimize how writes are committed to the database. They don't need the help of a filesystem to optimize this process, as the possible optimizations have already been made. If random writes were a prob
Re: (Score:3, Insightful)
I think we need a mod option to mod down the article summary: -1, stupid editor.
You had your chance [slashdot.org].
Re:I/O is random? What have you been smoking? (Score:5, Insightful)
All the important operations tend to be random. For a file server, you may have twenty people accessing files simultaneously. Or a hundred, or a thousand. For a webserver, it'll be hitting dozens or hundreds of static pages and, if you have database backend, that's almost entirely random as well.
For people consolidating physical servers to virtual servers, you now have two, three, ten or twenty VMs running on one machine. If every one of those VMs tries to do a "sequential" IO, it gets interleaved by the hypervisor with all the other sequential IOs. No hypervisor would dare tell all the other VMs to sit back and wait so that every IO is sequential. That delay could be seconds or minutes or hours.
Now imagine all that, and take into account that the latest Intel SSD gets around 6600 IOPS read and write. A good, fast hard drive gets 200. So you could put thirty three hard drives in RAID 0 and have the same number of IOPS, and your latency would still be worse. All the RAID0 really does for you is give you a nice big queue pipeline, like in a CPU. Your IO doesn't really get done faster, but you can have many more running simultaneously.
Given that SSDs are easily three to four times faster on sequential IO and an order of magnitude faster on random IO, I don't think it's that implausible to believe that the industry isn't ready.
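For reference, the arithmetic behind that comparison, using the 6600 and 200 IOPS figures above (real numbers vary by drive and workload):

    # How many spindles it takes to match one SSD on random IOPS.
    ssd_iops, hdd_iops = 6600, 200
    drives_needed = ssd_iops / hdd_iops
    print(f"{drives_needed:.0f} such disks in RAID 0 to match one SSD on IOPS")
    # -> 33, and each individual request still pays a mechanical seek,
    #    so per-request latency stays in the milliseconds.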
Re: (Score:2)
So you could put thirty three hard drives in RAID 0 and have the same number of IOPS, and your latency would still be worse.
Actually, that's incorrect. Here's why:
When you calculate IOPS, a good portion of small reads and writes get executed at random places on the disks. When you make one filesystem write on a RAID0 set (depending on how smart the RAID0 controller is), it will be locking up several or ALL of the disk spindles for that individual read/write.
The IOPS are negligibly better on a 33-disk RAID0 set, and depending on your disk controller, it might be worse (every write equates to 33 DMA requests).
It is faster f
Re: (Score:3)
When you calculate IOPS, a good portion of small reads and writes get executed at random places on the disks. When you make one filesystem write on a RAID0 set (depending on how smart the RAID0 controller is), it will be locking up several or ALL of the disk spindles for that individual read/write.
Actually, that's incorrect. Here's why:
When you make a RAID0 array, you stripe large blocks between all the disks, usually 64K-256K in size. If your operation does not cross a block boundary, you only access a single drive. Assuming those random small accesses are evenly distributed, your IOPS scale almost linearly with drive count.
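A small sketch of that mapping, with an arbitrary chunk size and member count:

    # Which RAID0 member serves a given byte offset.
    CHUNK = 128 * 1024          # arbitrary chunk (stripe unit) size
    DRIVES = 4                  # arbitrary member count

    def raid0_member(offset):
        chunk_index = offset // CHUNK
        return chunk_index % DRIVES          # drive holding this chunk

    for off in (0, 4096, 200 * 1024, 999 * 1024):
        print(f"offset {off:>8} -> drive {raid0_member(off)}")
    # Independent 4K reads at random offsets land on different members,
    # so random IOPS scale with drive count; a single read only spans two
    # drives when it happens to cross a chunk boundary.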
Re: (Score:2)
Good points, though of course some problems are more a matter of server design/allocation than any gross inadequacy on the part of the RAID controller. You can always try faster hardware to solve a performance problem, but a lot of time it's just due to bad software/configuration.
For example, no one in their right mind would share physical disks between 10-20 VMs in any application where disk performance is critical - a good server architect builds a system that works with the hardware available. Problem
Re: (Score:2)
I'd rather have an ioDrive.
See: http://hothardware.com/Articles/Fusionio-vs-Intel-X25M-SSD-RAID-Grudge-Match/?page=9 [hothardware.com]
With ludicrously high IOPS, your CPU doesn't have to do much waiting, which pretty much defeats any RAID solution. RAID usually raises overhead, because your CPU has to decide which device the requests go to - unless you use expensive hardware RAID controllers, all of which have IOPS caps. Most RAID solutions also go through slower interfaces - although compared to PCIe 2.0 4x, every interface
Re: (Score:2)
Hmm, maybe I don't even want to know the pricetag on that
Re: (Score:2)
Re: (Score:2)
Really? Why has virtually every production parallel file system implementation I've ever seen (using GPFS, Lustre, and PVFS) been done on top of hardware RAID controllers?
How do you guarantee consistency? (Score:2)
The main speed-up provided by hardware RAID is reliable deep write buffering ... I don't see how parallel file systems will make that advantage go away.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
In a real datacenter the only raid seen is a raid 1 for the boot drives to get the server up into the operating system. The data lives on the SAN.
Hint: Do you think that's a raw drive you're seeing? No, you're seeing... a RAID5 volume presented as a drive by the array.
Not having RAID is simply not feasible in a 'real' datacenter, because you'll lose a disk or two each week -- if not day.
But then, what would I know -- I only work on a team handling several petabytes of space, having come from a team handling several *more* petabytes.
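The "disk or two each week" figure is just fleet size times failure rate; a back-of-the-envelope with assumed numbers, not this particular shop's:

    # Expected weekly drive failures at scale (fleet size and AFR assumed).
    drives = 4000               # e.g. a few PB of 1TB spindles
    afr = 0.03                  # assumed 3% annualized failure rate
    per_week = drives * afr / 52
    print(f"~{per_week:.1f} failed drives per week")   # ~2.3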
Re: (Score:2)
It will also deal with all of these smug statistical analyses that talk about RAID rebuild times growing (in line with spindle size growth) such that a second disk failure, before the rebuild of the original failed disk completes, takes out the entire array.
If you aren't using RAID6, I will point my finger and laugh when this happens to you. :)
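The back-of-the-envelope those analyses run, with every figure here assumed, is the chance that a second member of the group dies while the first is still rebuilding:

    # Probability of a second failure during a rebuild window (all assumed).
    # Ignores correlated failures and unrecoverable read errors, which make
    # reality worse, not better.
    afr = 0.03                   # 3% annualized failure rate per drive
    rebuild_hours = 24           # assumed rebuild window for a large spindle
    survivors = 7                # an 8-drive RAID5 group minus the dead disk

    p_one = afr * rebuild_hours / (365 * 24)          # one drive, one window
    p_second = 1 - (1 - p_one) ** survivors
    print(f"~{p_second * 100:.3f}% per rebuild event")
    # Small per event, but multiplied by rebuilds per year across hundreds
    # of arrays it stops being hypothetical; hence RAID6.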
Re: (Score:2)