Data Storage Hardware

Are RAID Controllers the Next Data Center Bottleneck?

storagedude writes "This article suggests that most RAID controllers are completely unprepared for solid state drives and parallel file systems, all but guaranteeing another I/O bottleneck in data centers and another round of fixes and upgrades. What's more, some unnamed RAID vendors don't seem to even want to hear about the problem. Quoting: 'Common wisdom has held until now that I/O is random. This may have been true for many applications and file system allocation methodologies in the recent past, but with new file system allocation methods, pNFS and most importantly SSDs, the world as we know it is changing fast. RAID storage vendors who say that IOPS are all that matters for their controllers will be wrong within the next 18 months, if they aren't already.'"
  • by Anonymous Coward on Saturday July 25, 2009 @12:35PM (#28819435)

    It is very common when doing disk benchmarks to have separate tests for small random reads/writes and large sequential reads/writes. The numbers are often different.

    And while you can't always predict which disk sector is going to be read next, often you can, which is why predictive RAID controllers with lots of memory are very useful.

    I think we need a mod option to mod down the article summary: -1, stupid editor.

  • BAD MATH (Score:5, Interesting)

    by adisakp ( 705706 ) on Saturday July 25, 2009 @12:57PM (#28819631) Journal
    FTA: "Since a disk sector is 512 bytes, requests would translate to 26.9 MB/sec if 55,000 IOPS were done with this size. On the other end of testing for small block random is 8192 byte I/O requests, which are likely the largest request sizes that are considered small block I/O, which translates into 429.7 MB/sec with 55,000 requests."

    I'm not going to believe an article that assumes that because you can do 55K IOPS for 512-byte reads, you can do the same number of IOPS for 8K reads, which are 16X larger, and then just extrapolate from there. Especially since most SSDs (at least SATA ones) right now top out around 200MB/s and the SATA interface tops out at 300MB/s. Besides, there are already real-world articles out there where guys with simple RAID 0 SSD setups are getting 500-600 MB/s with 3-4 drives using motherboard RAID, much less dedicated hardware RAID.
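    The arithmetic the article uses (and the parent disputes) is just IOPS times block size. A quick sketch reproduces the article's two figures, with the caveat the parent raises: real drives do not sustain the same IOPS count at 8 KiB as at 512 B, so the larger number is an extrapolation, not a measurement.

    ```python
    # The article's throughput extrapolation: MB/s = IOPS * block size.
    # The flaw pointed out above: the same 55,000 IOPS figure is assumed
    # to hold at a block size 16x larger, which real hardware rarely does.

    def throughput_mb_s(iops: int, block_bytes: int) -> float:
        """Convert an IOPS figure at a given block size to MB/s (MiB-based)."""
        return iops * block_bytes / (1024 * 1024)

    print(round(throughput_mb_s(55_000, 512), 1))   # 26.9 MB/s for 512 B sectors
    print(round(throughput_mb_s(55_000, 8192), 1))  # 429.7 MB/s only IF IOPS held constant
    ```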
  • by ZosX ( 517789 ) <zosxavius AT gmail DOT com> on Saturday July 25, 2009 @01:45PM (#28820027) Homepage

    That's kind of what I was thinking too. When you really start pushing the 300MB/s SATA gives, it's hard to find something to complain about. Most of my hard drives max out at like 60-100MB/s, and even the 15,000 RPM drives are not a great deal faster. Low latency, fast speeds, increased reliability: this could get interesting in the next few years. Heck, why not just build a RAID 0 controller into the logic card with a SATA connection, break the SSD into a bunch of little chunks, and RAID 0 them all for max performance right out of the box? You'd get the performance advantages of RAID without the cost of a card and the waste of a slot. PCIe SSD is quite interesting too.
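    The "break the SSD into chunks and RAID 0 them" idea is classic RAID 0 address striping across internal flash channels. A minimal sketch of the mapping, assuming a purely illustrative 4-channel, 64 KiB stripe-unit geometry (these names and constants are mine, not from any real controller):

    ```python
    # Hypothetical RAID 0 address mapping: a logical byte offset is spread
    # round-robin across N channels in fixed-size stripe units. Geometry
    # below is an illustrative assumption, not any vendor's layout.

    STRIPE_UNIT = 64 * 1024  # bytes per chunk before moving to the next channel
    N_CHANNELS = 4

    def map_lba(offset: int) -> tuple:
        """Map a logical byte offset to (channel, offset-within-channel)."""
        stripe_index = offset // STRIPE_UNIT   # which stripe unit overall
        channel = stripe_index % N_CHANNELS    # round-robin across channels
        # Within the channel: full rounds already placed there, plus the
        # position inside the current stripe unit.
        local = (stripe_index // N_CHANNELS) * STRIPE_UNIT + offset % STRIPE_UNIT
        return channel, local

    print(map_lba(0))           # (0, 0)
    print(map_lba(64 * 1024))   # (1, 0)
    print(map_lba(256 * 1024))  # (0, 65536)
    ```

    A large sequential read fans out across all channels at once, which is where the "RAID without the card" speedup would come from; modern SSD controllers do interleave flash channels internally along these lines.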

  • by HockeyPuck ( 141947 ) on Saturday July 25, 2009 @01:51PM (#28820073)

    Ah... pointing the finger at the storage... my favorite activity. Listening to DBAs, application writers, etc. point the finger at the EMC DMX with 256GB of mirrored cache and 4Gb/s FC interfaces. You point your finger and say, "I need 8Gb FibreChannel!" Yet when I look at your HBA utilization over a 3-month period (including quarter end, month end, etc.) I see you averaging a paltry 100MB/s. Wow. Guess I could have saved thousands of dollars by going with 2Gb/s HBAs. Oh yeah, and you have a minimum of two HBAs per server. Running a Nagios application to poll our switch ports for utilization, the average host is running maybe 20% utilization of the link speed, and as you beg, "Gimme 8Gb/s FC," I look forward to your 10% utilization.
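    The utilization math here checks out. Fibre Channel at these speeds uses 8b/10b encoding, so the usual rule of thumb is roughly 100 MB/s of payload per 1 Gb/s of line rate (an approximation, not an exact spec figure). A sketch under that assumption:

    ```python
    # Rough FC link-utilization math behind the comment above. Assumes the
    # common 8b/10b rule of thumb: ~100 MB/s usable payload per 1 Gb/s.

    def fc_utilization(observed_mb_s: float, link_gbit: float) -> float:
        """Fraction of usable FC bandwidth consumed."""
        usable_mb_s = link_gbit * 100  # 8b/10b: 1 Gb/s line rate ~ 100 MB/s payload
        return observed_mb_s / usable_mb_s

    print(f"{fc_utilization(100, 4):.0%}")   # a 100 MB/s average on 4Gb FC is 25%
    print(f"{fc_utilization(100, 8):.1%}")   # the same load on 8Gb FC drops to 12.5%
    ```

    So a host averaging 100 MB/s sits at roughly a quarter of a 4Gb link, consistent with the ~20% the poller reports, and doubling the link speed only halves the utilization percentage.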

    We've taken whole databases and loaded them into dedicated cache drives on the array, and surprise, no performance increase. DBAs and application writers have gotten so used to yelling "Add hardware!" that they forgot how to optimize their applications and SQL queries.

    If storage was the bottleneck, I wouldn't be loading up storage ports (FAs) with 10-15 servers. I find it funny that the only devices on my 10,000 port SAN that can sufficiently drive IO are media servers and the tape drives (LTO-4) that they push.

    If storage was the bottleneck there would be no oversubscription in the SAN or disk array. Let me know when you demand a single storage port per HBA, and I'm sure my EMC will take us all out to lunch.

    I have more data than you. :)

  • Re:BAD MATH (Score:3, Interesting)

    by jon3k ( 691256 ) on Saturday July 25, 2009 @02:35PM (#28820355)
    You forgot about SSDs, consumer versions of which are already doing over 250MB/s reads for less than $3.00/GB. And we're still essentially talking about second-generation products (the Vertex switched from JMicron to Indilinx controllers, and Intel basically just shrunk down to 34nm for their new ones, although their old version did 250MB/s as well).

    I'm using a 30GB OCZ Vertex for my main drive on my windows machine and it benchmarks around 230MB/s _AVERAGE_ read speed. It cost $130 ($4.30/GB) when I bought it a couple months ago, and prices are falling. The new Intel X25-M is $225 for 80GB ($2.81/GB).
  • Re:All wrong. (Score:3, Interesting)

    by AllynM ( 600515 ) * on Saturday July 25, 2009 @04:37PM (#28821315) Journal

    Well said. I've found using an ICH-10R kills that overhead, and I have seen excellent IOPS scaling with SSDs right on the motherboard controller. I've hit over 190k IOPS (single sector random read) with queue depth at 32, using only 4 X25-M G1 units. The only catch is the ICH-10R maxes out at about 650-700 MB/sec on throughput.

    Allyn Malventano
    Storage Editor, PC Perspective
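    The numbers in this comment are easy to sanity-check. Splitting 190k IOPS evenly across four drives (even scaling is my assumption) and converting single-sector reads to throughput shows why small random I/O never touches the controller's ~650-700 MB/s ceiling:

    ```python
    # Back-of-the-envelope check on the figures above (190k IOPS, 4 drives,
    # single-sector = 512 B random reads). The even per-drive split is an
    # assumption; the post only gives the aggregate.

    total_iops = 190_000
    drives = 4
    sector = 512  # bytes

    per_drive = total_iops / drives
    total_mb_s = total_iops * sector / (1024 * 1024)

    print(per_drive)             # 47500.0 IOPS per X25-M G1 at QD32
    print(round(total_mb_s, 1))  # 92.8 MB/s -- far below the ICH-10R's
                                 # ~650-700 MB/s cap, which only binds
                                 # at larger block sizes
    ```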
