Data Storage Hardware

Wear Leveling, RAID Can Wipe Out SSD Advantage

storagedude writes "This article discusses using solid-state disks in enterprise storage networks. A couple of problems noted by the author: wear leveling can eat up most of a drive's bandwidth and make write performance no faster than a hard drive's, and using SSDs with RAID controllers brings its own set of problems. 'Even the highest-performance RAID controllers today cannot support the IOPS of just three of the fastest SSDs. I am not talking about a disk tray; I am talking about the whole RAID controller. If you want the full performance of expensive SSDs, you need to take your $50,000 or $100,000 RAID controller and not overpopulate it with drives. In fact, most vendors today have between 16 and 60 drives in a disk tray, and you cannot even populate a whole tray. Add to this that some RAID vendors' disk trays are designed only for the performance of disk drives, and you might find that you need a disk tray per SSD at a huge cost.'"
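A back-of-the-envelope sketch of the controller-saturation claim, in Python. Every figure below (controller ceiling, per-drive IOPS, tray size) is an illustrative assumption rather than a vendor spec:

    # Rough sketch: how quickly do SSDs saturate a single RAID controller,
    # compared with 15k hard drives? All numbers are illustrative assumptions.
    CONTROLLER_MAX_IOPS = 100_000   # assumed ceiling for a high-end RAID controller
    SSD_IOPS = 35_000               # assumed random IOPS for a fast SSD
    HDD_IOPS = 180                  # assumed random IOPS for a 15k hard drive
    TRAY_SLOTS = 60                 # a large disk tray

    ssds_to_saturate = CONTROLLER_MAX_IOPS / SSD_IOPS   # ~2.9 drives
    hdds_to_saturate = CONTROLLER_MAX_IOPS / HDD_IOPS   # ~556 drives

    print(f"SSDs needed to saturate the controller: {ssds_to_saturate:.1f}")
    print(f"HDDs needed to saturate the controller: {hdds_to_saturate:.0f}")
    print(f"usable fraction of a {TRAY_SLOTS}-slot tray with SSDs at full speed: "
          f"{ssds_to_saturate / TRAY_SLOTS:.0%}")

With these assumed numbers, roughly three SSDs max out the controller while hundreds of hard drives would be needed to do the same, which is the article's point about not being able to populate a whole tray.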
  • Duh (Score:2, Interesting)

    by Anonymous Coward on Saturday March 06, 2010 @01:39PM (#31381690)

    RAID means "Redundant Array of Inexpensive Disks".

  • by vadim_t ( 324782 ) on Saturday March 06, 2010 @02:14PM (#31381898) Homepage

    Sure, but why do you put 60 drives in a RAID?

    Because hard disks, even the high-end ones, have quite low IOPS. You can reach the same performance level with far fewer SSDs. If what you need is IOPS rather than lots of storage, that's even better: you hit the required level with far fewer drives, so you need less power, less space, and less cooling.
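    A rough sketch of the drive-count point, with assumed per-drive figures (not measured values):

        # How many drives does a hypothetical 50,000-IOPS workload need?
        # Per-drive IOPS and power figures are assumptions for illustration.
        import math

        target_iops = 50_000
        hdd = {"iops": 180, "watts": 12}      # assumed 15k hard drive
        ssd = {"iops": 35_000, "watts": 3}    # assumed enterprise SSD

        for name, drive in (("HDD", hdd), ("SSD", ssd)):
            count = math.ceil(target_iops / drive["iops"])
            print(f"{name}: {count} drives, roughly {count * drive['watts']} W")

    Hundreds of spindles versus a couple of SSDs, with the power, space, and cooling gap to match.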

  • by Anpheus ( 908711 ) on Saturday March 06, 2010 @02:41PM (#31382088)

    I agree. 60 drives in RAID 0 are going to see between 150 and 200 IOPS per drive, maybe more for 2.5" drives, right? So that's around 12,000 IOPS.

    The X25-E, the new Sandforce controller, and I believe some of the newer Indilinx controllers can all do that with one SSD.

    $/GB is crap, $/IOPS is amazing.
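    A quick $/GB versus $/IOPS comparison along the parent's lines; prices and specs are rough 2010-era assumptions, not quotes:

        # $/GB vs $/IOPS for an assumed 15k SAS drive and an assumed SLC SSD.
        drives = {
            "15k SAS HDD": {"price": 300, "gb": 300, "iops": 180},
            "SLC SSD":     {"price": 700, "gb": 64,  "iops": 35_000},
        }

        for name, d in drives.items():
            print(f"{name:12s} ${d['price'] / d['gb']:.2f}/GB   "
                  f"${d['price'] / d['iops']:.4f}/IOPS")

    The SSD loses badly on $/GB and wins by roughly two orders of magnitude on $/IOPS.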

  • by itzdandy ( 183397 ) on Saturday March 06, 2010 @02:56PM (#31382230) Homepage

    You missed half the point. SSDs use wear leveling and other techniques that are very effective on the desktop, but in a high-I/O environment the current wear-leveling techniques reduce SSD performance to well below what you get on the desktop.

    I really think this is just a result of the current trend of putting high-performance SSDs on the desktop. When the market refocuses, these problems will dissolve.

    This also goes for RAID controllers. If you have 8 ports with 3Gb/s SAS links, you need to process 24Gb/s plus the IOPS of current 15k SAS drives. Let's just assume, for easy math, that this requires a 500MHz RAID processor. What would be the point of putting in a 2GHz processor? But what if you increase the IOPS by 100x and double the bandwidth? Now you need to handle 48Gb/s of throughput and 100x the I/O, and that requires something like two 3GHz processors (a rough sketch of this scaling follows below).

    It just takes time for the market players to react to each technology jump. New RAID controllers will come out that can handle these things. Maybe the current RAID CPUs have been commodity chips (often enough PowerPC) because they were fast enough, and the new technologies are going to require more specialized processors. Maybe you need to get Cell chips or Nvidia GPUs in there, whatever it takes.

    I admit it would be pretty interesting to see a new "Dell/LSI 100Gb SAS powered by Nvidia" logo in Gen12 Dell servers.
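    A toy model of the scaling argument above. The constants are arbitrary, tuned only so the disk-era case lands near the assumed 500MHz baseline; this is a sketch, not a controller design:

        # Crude model: controller CPU demand = bandwidth term + per-I/O term.
        def required_mhz(gbps, iops, mhz_per_gbps=20.0, mhz_per_kiops=20.0):
            return gbps * mhz_per_gbps + (iops / 1000) * mhz_per_kiops

        disk_era = required_mhz(gbps=24, iops=8 * 180)        # 8 ports of 15k drives
        ssd_era  = required_mhz(gbps=48, iops=8 * 180 * 100)  # 2x bandwidth, 100x IOPS

        print(f"disk-era controller: ~{disk_era:.0f} MHz")
        print(f"SSD-era controller:  ~{ssd_era:.0f} MHz ({ssd_era / disk_era:.1f}x)")

    Doubling the bandwidth barely moves the number; the 100x jump in IOPS is what blows the CPU budget.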

  • by Anonymous Coward on Saturday March 06, 2010 @03:00PM (#31382256)

    > If you use ZFS with SSDs, it scales very nicely. There isn't a bottleneck at a RAID controller. You can slam a pile of controllers into a chassis if you have bandwidth problems because you've bought 100 SSDs - by having the RAID management outside the controller, ZFS can unify the whole lot in one giant high-performance array.

    If performance is that critical, you'd be foolish to use ZFS. Get a real high-performance file system. One that's also mature and can actually be recovered if it ever does fail catastrophically. (Yes, ZFS can fail catastrophically. Just Google "ZFS data loss"...)

    If you want to stay with Sun, use QFS. You can even use the same filesystems as an HSM, because SAMFS is really just QFS with tapes (don't use disk archives unless you've got more money than sense...).

    Or you can use IBM's GPFS.

    If you really want to see a fast and HUGE file system, use QFS or GPFS and put the metadata on SSDs and the contents on lots of big SATA drives. Yes, SATA. Because when you start getting into trays and trays full of disks attached to RAID controllers, arrays that consist of FC or SAS drives aren't much if any faster than arrays that consist of SATA drives. But the FC/SAS arrays ARE much smaller AND more expensive.

    Both QFS and GPFS beat the living snot out of ZFS on performance. And no, NOTHING free comes close. And nothing proprietary, either, although an uncrippled XFS on Irix might do it, if you could get real Irix running on up-to-date hardware. (Yes, the XFS in Linux is crippleware...)

  • by petes_PoV ( 912422 ) on Saturday March 06, 2010 @03:27PM (#31382554)
    Disks are cheap. There's no reason to use the full GB (or TB) capacity, especially if you want fast response. If you just use the outside 20% of a disk, random I/O performance increases hugely. ISTM the best mix is some sort of journalling system, where the SSDs are used for read operations and updates get written to the spinning storage (or NVRAM/cache). Then, at predetermined times, perform bulk updates back to the SSD. If some storage array manufacturer came up with something like that, I'd expect most performance problems to simply go away.
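    A minimal sketch of the hybrid scheme described above: serve reads from the SSD copy, journal updates to spinning storage (or NVRAM), and periodically flush them back to the SSD in bulk. Purely illustrative; a real array would do this at the block layer, not with Python dicts:

        class HybridStore:
            def __init__(self, flush_threshold=1024):
                self.ssd = {}        # block -> data, the read-optimized copy
                self.journal = {}    # block -> data, pending updates on HDD/NVRAM
                self.flush_threshold = flush_threshold

            def read(self, block):
                # Recent updates win; otherwise read from the SSD copy.
                return self.journal.get(block, self.ssd.get(block))

            def write(self, block, data):
                self.journal[block] = data   # journal write, sequential-friendly
                if len(self.journal) >= self.flush_threshold:
                    self.flush()

            def flush(self):
                # Bulk-apply the journal to the SSD at a predetermined point,
                # turning many small random writes into one batched update.
                self.ssd.update(self.journal)
                self.journal.clear()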
  • by itzdandy ( 183397 ) on Saturday March 06, 2010 @04:12PM (#31382964) Homepage

    I don't think I missed the point. I am just a little more patient than most, I guess. I don't think SSDs are ready from a cost/performance standpoint versus enterprise 15k SAS drives, due to the market's focus.

    The OP may not have listed the hardware and disks, but each controller has published specs for maximum throughput.

    This is very comparable to running U320 SCSI disks on a U160 card. The performance bottleneck is often NOT the U160 interface, but rather that the controller was not over-engineered for its time. The difference is that today's interface bandwidth is fast enough for the throughput of SSDs, but the controllers aren't fast enough to take advantage of the very low access times, especially when many drives are used (a rough which-part-saturates-first calculation is sketched below).

    I suspect that the next generation of RAID controllers will be capable of handling a larger array of SSDs. Until then, you can run MORE RAID controllers with smaller arrays, but that will increase costs significantly.

    SSDs are a disruptive technology, so the infrastructure needs an equally disruptive adaptation in controller design and/or CPU speed.
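    As promised above, a rough which-part-saturates-first calculation for a hypothetical 8-port controller. All figures are assumptions for illustration:

        PORTS = 8
        LINK_GBPS = 3                    # 3Gb/s SAS per port
        CONTROLLER_MAX_IOPS = 100_000    # assumed controller processing ceiling
        IO_SIZE_KB = 4                   # assumed random I/O size

        def first_bottleneck(per_drive_iops, drives):
            wanted_iops = per_drive_iops * drives
            wanted_gbps = wanted_iops * IO_SIZE_KB * 8 / 1_000_000   # KB -> Gb
            if wanted_iops > CONTROLLER_MAX_IOPS:
                return "controller IOPS"
            if wanted_gbps > PORTS * LINK_GBPS:
                return "interface bandwidth"
            return "the drives themselves"

        print("8 x 15k HDD:", first_bottleneck(per_drive_iops=180, drives=8))
        print("8 x SSD    :", first_bottleneck(per_drive_iops=35_000, drives=8))

    With hard drives the controller and links are loafing; with SSDs the controller's IOPS ceiling gives out long before the interface bandwidth does.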

  • by gfody ( 514448 ) on Saturday March 06, 2010 @05:09PM (#31383464)
    This is why software-based RAID is the way to go for ultimate performance. The big SAN providers ought to be shaking in their boots when they look at what's possible using software like StarWind or Open-E with host-based RAID controllers and SSDs. Just for example, look at this thing [techpowerup.com] - if you added a couple of CX4 adapters and ran Open-E, you'd have a 155,000 IOPS iSCSI target in what's basically a $30k workstation. 3PAR will sell you something the size of a refrigerator for $500,000 that wouldn't even perform as well.
  • by itzdandy ( 183397 ) on Saturday March 06, 2010 @05:23PM (#31383600) Homepage

    How about OpenSolaris with ZFS? You get a high-performance iSCSI target and a filesystem with reordered writes that improve I/O performance by reducing seeks, plus optional deduplication and compression (a toy illustration of the seek savings follows below).

    Additional gains can be had from separate log and cache disks, and with 8+ core platforms already available you can blow a traditional RAID card out of the water.

    One nice thing about software RAID is that it is completely agnostic to controller failure. If you need to recover an array after a controller failure, you can even do it with SATA-to-USB adapters if you used SATA disks, or you can use ANY other SAS/SATA controller that supports your disks.
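    The toy illustration mentioned above: reordering queued writes by offset (as elevator-style schedulers do) cuts total head movement. The offsets are random; only the relative distance matters:

        import random

        random.seed(0)
        queue = [random.randrange(1_000_000) for _ in range(64)]   # pending write LBAs

        def total_seek(lbas, start=0):
            pos, dist = start, 0
            for lba in lbas:
                dist += abs(lba - pos)
                pos = lba
            return dist

        print("seek distance, arrival order:", total_seek(queue))
        print("seek distance, sorted order :", total_seek(sorted(queue)))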

  • by Anonymous Coward on Saturday March 06, 2010 @05:38PM (#31383708)

    You're thinking of Sun/Oracle "Open Storage," which works precisely as you describe. Volatile SSDs, or "readzillas," are used as L2 read caches, and non-volatile SSDs, or "logzillas," are used to store the filesystem intent logs. The intent logs and, to a certain extent, the nature of the filesystem itself ensure that nearly all disk writes are sequential, so you can go with 7200rpm SATA disks -- which are actually usually faster than 15k SAS disks for sequential I/O, due to the higher data density on the platters (rough numbers sketched below).

    Something sort of similar is also used in Oracle's new Exadata platform, though the implementation is completely different.
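    The rough numbers behind the sequential-I/O claim: sequential throughput is approximately bytes-per-track times revolutions-per-second. The track sizes below are guesses for 2010-era 3.5-inch SATA versus 2.5-inch SAS platters, not measured values:

        drives = {
            "7200rpm 3.5in SATA": {"rpm": 7200,  "mb_per_track": 1.0},   # assumed
            "15k rpm 2.5in SAS ": {"rpm": 15000, "mb_per_track": 0.4},   # assumed
        }

        for name, d in drives.items():
            print(f"{name}: ~{d['mb_per_track'] * d['rpm'] / 60:.0f} MB/s sequential")

    Higher areal density can more than make up for the slower spindle, at least for large sequential transfers.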

  • Re:RAID for what? (Score:3, Interesting)

    by Rockoon ( 1252108 ) on Saturday March 06, 2010 @06:27PM (#31384108)
    How about this for an argument.

    A 500GB SSD can be entirely overwritten ("changing all the data on the medium") more than 10,000 times. No wear leveling is even needed here; 10K program/erase cycles is the low end for modern flash.

    Let's suppose you can write 200MB/sec to this drive. That's about average for the top-end drives right now.

    It will take 2,500 seconds to overwrite this entire drive once. That's about 42 minutes.

    So how long to overwrite it 10,000 times?

    That's 25,000,000 seconds.
    That's 416,667 minutes.
    That's 6,944 hours.
    That's 289 days.

    289 *days* of constant 24/7 writing to use up the flash.

    Now, and this is the key point: will a platter drive survive 289 days of constant max-throughput writing? The answer is no. You will burn out the platter drive's mechanical components well before that.
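    The endurance arithmetic above, worked through in Python (drive size, write speed, and cycle count are the commenter's own figures):

        capacity_gb = 500      # drive size
        write_mb_s  = 200      # sustained write speed
        pe_cycles   = 10_000   # full-drive overwrites the flash can take

        seconds_per_pass = capacity_gb * 1000 / write_mb_s    # ~2,500 s per overwrite
        total_seconds    = seconds_per_pass * pe_cycles

        print(f"one full overwrite: {seconds_per_pass:,.0f} s (~{seconds_per_pass / 60:.0f} min)")
        print(f"{pe_cycles:,} overwrites: {total_seconds:,.0f} s = "
              f"{total_seconds / 3600:,.0f} h = {total_seconds / 86400:,.0f} days")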
  • by petermgreen ( 876956 ) <plugwash@nOSpam.p10link.net> on Saturday March 06, 2010 @10:27PM (#31385772) Homepage

    WOW, NICE motherboard there: TWO I/O hubs to give seven x8-electrical/x16-mechanical slots, along with an x8 link for the onboard SAS and an x4 link for the onboard dual-port gigabit.

    http://www.supermicro.com/products/motherboard/QPI/5500/X8DTH-i.cfm [supermicro.com]

  • Re:Correction: (Score:3, Interesting)

    by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Sunday March 07, 2010 @04:34AM (#31387630)

    I work for an ISP; rack units are way too expensive to waste three on a server. Heck, they're too expensive to waste one on a server.

    The advantage over enterprise SATA/SAS SSDs isn't large enough, for us at least. We would have to go to six-socket motherboards to get the same CPU density.
