Hardware

IDE RAID Examined

Bender writes "The Tech Report has an interesting article comparing IDE RAID controllers from four of the top manufacturers. The article serves as more than just a straight product comparison, because the author has included tests for different RAID levels and different numbers of drives, plus a comprehensive series of benchmarks intended to isolate the performance quirks of each RAID controller card at each RAID level. The results raise questions about whether IDE RAID can really take the place of a more expensive SCSI storage subsystem in workstation or small-scale server environments. Worthwhile reading for the curious sysadmin." I personally would love to hear any IDE RAID stories that Slashdotters might have.
This discussion has been archived. No new comments can be posted.

  • by snowtigger ( 204757 ) on Wednesday December 04, 2002 @09:38PM (#4815498) Homepage
    A friend of mine set up a RAID 0 (striped) array using the built-in RAID controller on his motherboard. Later, the motherboard had to be replaced. To our great surprise, the RAID configuration was stored only on the motherboard, so the array was permanently lost. This is a good thing to know: make sure your data isn't lost if the controller fails.

    Personally, I run several software RAID arrays under Linux and it works very well. It's easy to manage and gives me decent performance on my rather old machine.

    I feel very confident mirroring system/boot partitions on my Linux machines =)
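
    In case it helps anyone, here is a minimal sketch of what a two-disk mirror looks like with mdadm (the device names /dev/hda1 and /dev/hdc1 are just examples, not my actual layout):

    # create a RAID-1 array from one partition on each IDE channel
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
    # put a filesystem on it and confirm both mirror halves are active
    mke2fs -j /dev/md0
    mdadm --detail /dev/md0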
  • IDE RAID (Score:5, Interesting)

    by 13Echo ( 209846 ) on Wednesday December 04, 2002 @09:45PM (#4815546) Homepage Journal
    My experiences with IDE RAID have been pretty darn good. Benchmarking my Deskstar 60GXP drives in Windows 2000 last year showed that I was getting read speeds in striping mode (between two drives) at faster rates than the fastest Seagate Cheetah SCSI drives. Times have probably changed now though.

    I started with a KT7A-RAID mobo. The important thing is that you get the cluster sizes just right for your particular partition. I used Norton Ghost to image my drive and try all sorts of different variables. In the end I had very satisfying results. Since I switched to Linux, I stopped using RAID 0 (yes, it is supported with this device!). I found that ReiserFS and Linux's multi-drive md setup on these drives seemed to be just about as fast without the hassle of the soft-RAID controller. It is probably due to my system RAM though. I couldn't seem to get Windows 2000 to make the most of 1024 MB without using that swapfile. Linux seems to avoid swap altogether and keeps things in physical RAM instead. It is very nice having the extra IDE channels though. Without them, I probably wouldn't have 4 HDs hooked up right now.
  • by T-Ranger ( 10520 ) <jeffw@nOSpAm.chebucto.ns.ca> on Wednesday December 04, 2002 @09:51PM (#4815577) Homepage
    True, but both cheap IDE drives and expensive SCSI drives are cheap compared to something like a 7133 Serial Disk System [ibm.com] today. And especially cheap compared to the "enterprise" storage solutions of yesteryear, when the term RAID was coined.
  • by tcc ( 140386 ) on Wednesday December 04, 2002 @09:52PM (#4815587) Homepage Journal
    I bought mine a while ago, when the Maxtor 160GB 5400RPM drives started to ship.

    I had to build a datacenter and storage price was the main issue. I had to have something cheap that could still hold a LOAD of data. Problem is, I personally hate Maxtor drives; I've always found them only more or less reliable (but drive experiences vary from one person to another, so...). Anyway, at that time Maxtor was the only one offering 160GB drives at a decent price/meg, and although 5400RPM means fairly slow access times, the main issue was cost, so I could take a hit on access speed as long as "streaming" speed was fast enough.

    The Adaptec 2400A card was the best at the time: simple, cheap, efficient. It had 3 downsides for my application: no 48-bit LBA support (needed for drives over about 137GB), no 64-bit PCI version (I was using a K7 Thunder, and that chipset slows the PCI bus down to the slowest card connected to it, and since I wanted all available bandwidth thrown at the 64-bit gigabit card, I couldn't accept a 32-bit card), and finally, no more than 4 drives. I wanted to break the terabyte limit, so say I had used 2 of those cards: it wouldn't have made sense price/performance-wise, since the 2 would have shared the bus and I would have lost 2 drives to RAID 5 instead of one with an 8-drive setup. But the performance of the Adaptec 2400A was the best. It still looks like the best overall today, though I don't know whether they support 48-bit LBA yet.

    Anyway, the 3ware 7850 was an excellent choice. Although their tech support is only more or less good (like most tech support), especially for real bugs rather than standard driver-reinstallation issues, the response time and sales people were very nice and professional. I got surprising results from the array: where I thought it would run like molasses, I was getting over 50MB/sec sustained non-sequential reads, if I recall correctly. And the tools are very good; rebuild time is about 3-4 hours with 8x160GB drives and 400GB filled, and there are email alert tools and a web interface to the host machine for checking diagnostics. Overall it's a nice system and I'm sure the 7500 series is even better.

    Oh, and on a "funny" note, Windows shows 1.1TB available in the Explorer window, not 1134GB :) Reminds me of when I plugged my first gigabyte drive into my Amiga and saw big numbers :)

    As for the Maxtor drives, I didn't take any chances: I ordered 10 to get 2 spares. 2 failed in less than a month, but I haven't had any problems since, so I guess if you can afford the time, doing a 1-month burn-in test with non-critical data isn't overkill. Usually they SHOULD fail one at a time so you can rebuild the array :).

  • by snowtigger ( 204757 ) on Wednesday December 04, 2002 @09:54PM (#4815597) Homepage
    HP has developed a pretty cool type of RAID: an automatic RAID level that organizes your disks for best performance while maintaining redundancy.

    When a friend explained it to me, it sounded like a mixture of RAID 5 and 0+1. For example, if you replace a disk with a larger one, the extra capacity will be used to duplicate some other part of the array.

    White papers here [hp.com]
  • by trandles ( 135223 ) on Wednesday December 04, 2002 @09:56PM (#4815612) Journal
    We've run several big RAID-5 setups on 3ware cards. When I say big I mean 1TB+ on each card. To do this we've used the 100GB+ drives available (120GB - 160GB). The biggest problem has been drive failures. Out of the 40 drives I think we've lost 6 in less than a year. In only one case have 2 drives gone bad at once (with RAID-5 we're covered if 1 drive fails), but that time we lost around 1TB of data. Luckily the data could be reproduced, but it took two weeks to regenerate.

    It's WAY too easy to build massive arrays using these devices. How the hell are you supposed to back them up? You almost have to have two: one live array and one hot-spare array. If you think you're going to put 1TB on tape, forget about it. If you have the cash to buy tape technology with that capacity and the speed to be worthwhile, you should be buying SCSI disks and a SCSI RAID controller.
  • by ostiguy ( 63618 ) on Wednesday December 04, 2002 @10:21PM (#4815750)
    Have you had any drives go south yet? My experience with Promise 33/66 cards a generation or two ago was that with 2 drives on a cable, one bad drive meant both drives' data got corrupted. So two cards and 4 drives (1a, 1b, 2a, 2b) in RAID 10 meant one drive dies, all is lost. So much for RAID.

    ostiguy
  • by lanner ( 107308 ) on Wednesday December 04, 2002 @10:22PM (#4815751)

    Holy cow. Sistina LVM (Logical Volume Manager) rocks. It is the volume management layer of the future, sitting between your partitions and your filesystems, and it really makes dedicated RAID hardware sort of unnecessary. It is true that it is done by the host OS, but when integrated right it does not matter.

    Documentation for LVM is great. It is stable and works without quirks. It does all of the things that I would typically desire from a RAID 0,1,5 setup. Administration tools are awesome and give output just as I hoped. Expand partition sizes LIVE (ext2resize needs to unmount though, that is not LVM's problem), move a file system to another physical drive, mirror partitions, spread partitions over various devices. LVM is NUTSO!

    It has been built into the Linux kernel since around 2.4.7, though I have heard that it was inspired by the LVM in HP-UX. I can't say much about this.

    Understanding the concept of how LVM works can be a little hard at first, but once you get past that and then actually use it on a system, you will be totally blown away by what it does and the performance.

    Here is the website for LVM
    http://www.sistina.com/products_lvm.htm

    I personally use Sistina LVM on a Debian GNU/Linux system that has two 60GB IDE hard disks. I can change the sizes of partitions, move data around, move to a new hard drive on the fly, and do tons of things that I don't think I could do with even the highest-end RAID controllers. As for performance, it is software RAID, but it does not have any of the typical software RAID slowness or cruft factor. I initially chose LVM as a cheap alternative to buying an IDE RAID card. Now I don't even want an IDE RAID controller.
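
    For the curious, the basic workflow is roughly this (the volume group and logical volume names below are invented for the example, not my real setup):

    # initialise partitions as LVM physical volumes
    pvcreate /dev/hda2 /dev/hdc2
    # group them into one volume group spanning both disks
    vgcreate vg0 /dev/hda2 /dev/hdc2
    # carve out a logical volume, then grow it later while the system is running
    lvcreate -L 40G -n home vg0
    lvextend -L +10G /dev/vg0/home
    # finally, resize the filesystem to match (ext2resize/resize2fs for ext2)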
  • by PetiePooo ( 606423 ) on Wednesday December 04, 2002 @10:44PM (#4815842)
    I've got a friend who has a FileZerver [filezerver.com] NAS device. It does RAID 0/1/5/JBOD on up to 12 IDE devices. As easy to use as a toaster.

    He initially bought it with six 100GB drives, giving him a formatted capacity of 477GB using RAID 5. He ripped his CD collection, restored all his scanned images and textbooks, and filled the sucker up to about 75% capacity.

    The only problem is that he used only 3 of the 6 channels to connect his 6 drives: 3 as master, 3 as slave. One channel had a momentary glitch and 2 of the 6 drives dropped out of the RAID. Can anyone tell me what happened next? Anyone? Anyone?

    After a bit of investigation, we found out the Zerver runs a version of Linux and uses the same md drivers modern Linux distros use. We pulled the drives out and, one by one, slapped them into a spare Linux PC to update the superblocks. Brought it back up, and after a 24-hour fsck the system was back up and stable. And now each drive has its own IDE channel!!!
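
    For reference, a sketch of how you can do roughly the same thing in place with mdadm, by force-assembling the array with the stale members (the array and drive names here are hypothetical, not the Zerver's actual layout):

    # force md to accept the two members that dropped out, then let it resync
    mdadm --assemble --force /dev/md0 /dev/hda1 /dev/hdb1 /dev/hdc1 /dev/hdd1 /dev/hde1 /dev/hdf1
    # check the filesystem before trusting the data again
    fsck -f /dev/md0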
  • by cluge ( 114877 ) on Wednesday December 04, 2002 @10:47PM (#4815857) Homepage
    Our tests of the Promise RAID under Red Hat Linux with the "open source" drivers (2.4.19 vanilla), compared with the 3ware product, gave VERY different results.

    I don't have the exact numbers on hand, but the 3ware product was roughly 3 times faster at reading (RAID 0+1 and RAID 1). The 3ware was also faster at writing, although those numbers were much closer. The number that DOES stick in my head was from the postmark [netapp.com] benchmark from NetApp that we ran. The Promise did 2500 files, from 2 to 200k, with 500 operations in about 35 seconds. The 3ware product did the same in 12.
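
    If anyone wants to run something similar, a postmark session along those lines looks roughly like this (the original size bounds are ambiguous, so the numbers below are a guess, in bytes):

    $ postmark
    pm>set number 2500
    pm>set transactions 500
    pm>set size 2 200000
    pm>run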

    The moral of the story is TEST, TEST, TEST; these types of articles only give you an idea. Promise worked great for me personally in several applications. After testing it for a production machine at work, we went with the 3ware because the Promise did not perform well for our application. Test for yourself, or forever be disappointed.

    Cluge
  • by Anonymous Coward on Wednesday December 04, 2002 @11:16PM (#4815966)
    I have a dual Xeon 2.4GHz 4U with dual 8 channel IDE controllers connected to 16 160GB IDE drives under Windows 2000 arranged as two separate logical drives.

    I'm able to read sequentially from very large files (20GByte+ files) at a continuous rate of over 180Mbytes/sec.

    The controllers are 64-bit, 33MHz PCI cards and the high speed sequential reads are exactly what my application demands. SCSI would have added nothing to the performance of the system except an additional 60% to the cost.

    Find me a 2.5TByte dual Xeon 4GByte RAM 4U box with SCSI drives for well under $10K and I'll give SCSI another look.

    Once Serial ATA comes out, I think you'll see even more IDE-based RAID being used.
  • by Akilla.Net ( 631529 ) on Wednesday December 04, 2002 @11:32PM (#4816040) Homepage
    I work in the Radiology department of a mid-size hospital. We recently decided to get a single image server to store all of our CT/MRI images in one place. We figured that if we got a 700GB system, it would hold about 9 months of data at a time. Since we are not running a PACS yet, this is fine. We looked at pricing options, and since it wasn't mission-critical data (we had backups elsewhere, just not quite as accessible) we decided to go with IDE RAID.

    We ended up going with the Promise UltraTrak SX8000 [promise.com], which is an external RAID cabinet that holds up to 8 IDE drives and connects to the host computer via SCSI. We then got 8 120GB Western Digital drives for around $150 each. The RAID set up quickly, and within an hour we had a formatted 7-drive RAID 5 array with a hot spare in case things went badly.

    In the 4 months since installation the cabinet has given us zero problems and worked flawlessly, with quick transfer rates and extremely easy setup. Considering the price compared to an equivalent SCSI system, we feel that we got 90% of the value of a SCSI system (the only differences being that IDE drives tend to fail sooner than SCSI drives and that SCSI drives are moderately faster, neither of which really mattered for us).

    If your system contains mission-critical data, go the more expensive route and get a full SCSI raid system with multiple hotspares and pay a guy to sit in a corner and maintain it. If, like us, you just need a large amount of very-reliable storage without much hassle, go the IDE RAID route. It's working great for us.

  • by Futurepower(R) ( 558542 ) on Wednesday December 04, 2002 @11:33PM (#4816047) Homepage

    From the Slashdot story: "I personally would love to hear any IDE RAID stories that Slashdotters might have." I also would like to hear about this.

    Here's my story: I have extensive experience with Promise controllers. An IDE mirror makes data reads faster. If you are about to do a possibly damaging operation, it is good to break the mirror, pull out one of the hard drives, and do the operation on the remaining drive only. Then, when craziness happens, the drive you pulled is a complete backup.

    A mirroring controller is also a convenient way to clone a Windows XP operating system hard drive. Windows XP otherwise prevents this; normally third-party software that runs under DOS is needed to make a usable full hard drive backup. See the section "Backup Problems: Windows XP cannot copy some of its own files" in the article Windows XP Shows the Direction Microsoft is Going [hevanet.com]. (The article was updated today. To all those who have read the article, sorry for the previously poor wording of the section "Hidden Connections". Expect further improvements to that section later.)

    But Promise controllers are quirky. Sometimes things go wrong, and there is no explanation available from Promise. Promise tech support is surprisingly ignorant of the issues. The setup is quirky; it is difficult to train a non-technical person to deal with the controller's interface.

    Mirrors are a GREAT idea, but Promise is un-promising. That's my opinion. I'm looking for another supplier, so I want to hear others' stories.
  • by Nintendork ( 411169 ) on Wednesday December 04, 2002 @11:46PM (#4816107) Homepage
    Whatever dude.

    Winmodems do the calculations in software because they lack the chips on the card. That's a horrible comparison. These ATA RAID cards have everything built onto the card. The Promise SX6000 even has an on-board Intel i960RM RISC processor for XOR calculations.

    CPU utilization of these ATA RAID cards is negligible, so if you really need that extra 2 or 3 percent, just get a faster CPU.

    The main advantages that SCSI has for performance are individual drive performance (15,000 RPM and 4.5ms access times as opposed to 8.5ms) and command queueing. The transfer rate isn't a big issue if you're moving the data over the network; you're still limited by your PCI bus speed and the network speed. Even on a gigabit backbone, that's roughly 65MB per second of throughput in real-world performance. Drive performance is only a factor for local reads/writes and access time.
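
    A quick back-of-the-envelope on that figure (my arithmetic, not a measurement):

    $ echo "$((1000 / 8)) MB/s theoretical gigabit line rate"
    125 MB/s theoretical gigabit line rate
    # protocol, filesystem, and PCI overhead typically cut that roughly in half,
    # which is where a real-world number in the 60-70MB/sec range comes from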

    The cost of a 1TB RAID 5 IDE setup (6 200GB drives, Promise SX6000 card, removable enclosures for the drives, and 128MB cache) = $2,450

    The cost for a 1TB RAID 5 SCSI setup (8 10,000 RPM 146GB Cheetahs and an Adaptec 2200S dual-channel card, plus the hot-swappable enclosures, which add at least $700) = at least $9,350

    If price is no object, go with SCSI. If you're running an enterprise SQL or WWW server with thousands of users, the access time of the drives is a huge benefit, so go SCSI. If each server must have more than 1TB of fault tolerant storage space, go SCSI because it can house enough drives per card to accomplish this. For everything else, go IDE.

    As an FYI, I'm running the described ATA RAID 5 setup with 120GB WD Caviars with 8MB buffers, a dual-port 3Com teaming NIC, 512MB RAM, and an Athlon XP processor as a highly utilized file server. Runs like a champ. No issues, and the boss is incredibly happy with the price tag: $2,800 to build the whole server. It's rackmounted under our incredibly expensive Compaq ProLiant ML530, which just does SQL. If a drive goes out, I'll get an email notification. I simply remove the dead drive, replace it, and rebuild. No rebooting needed.

    -Lucas

  • by jonbrewer ( 11894 ) on Thursday December 05, 2002 @12:07AM (#4816237) Homepage
    Using IDE Raid is like using a winmodem. Unlike with modems, where everyone has one, RAID has a basic educational entry point. I seriously doubt IDE Raid will ever overtake SCSI in any area where knowledgeable people are doing the administration.

    To You, Unbeliever: In 1999 I set up a file server in a factory in Connecticut. I used a four-channel Adaptec card and four 76GB IBM Deskstar disks to create a RAID 0+1 array (they were the biggest IDE drives on the market at the time). The array lost one drive after a few months, which was replaced without incident. It has faithfully served a 50+ node network for almost four years now. And at the time, it cost that factory $2500 in hardware and 7 hours of labor for a 150GB volume. That was less than 25% of the cost of the cheapest SCSI RAID.

    SCSI raid is for those who don't keep up with the times, and find it easier to throw money at a problem than to actually find a good solution.

    Maybe you're one of these people?
  • Fibre Channel RAID (Score:4, Interesting)

    by nuxx ( 10153 ) on Thursday December 05, 2002 @12:52AM (#4816469) Homepage
    Utilizing eBay and a few vendors that I dug around for, I was able to assemble a blazingly fast fibre channel RAID system for home for around $500. If you take a look at http://www.nuxx.net/gallery/fibrechannel [nuxx.net] you can see the assembly of the box. There are also benchmarks detailing the RAID 5 array bursting to >160MB/sec (image at http://www.nuxx.net/gallery/fc_benchmarks/aad [nuxx.net]).

    The box is set up as follows:

    o Mylex eXtremeRAID 3000 ($200 via eBay)
    o Crucial 256MB DIMM for Cache (~$50 from Crucial)
    o 4 x Seagate ST39102FC 9GB 10,000 RPM drives ($9/ea on eBay)
    o Venus-brand 4-disk external enclosure (~$35 on eBay)
    o Custom made FC-AL backplane for disks (~$200 from a site I can't remember at this time)
    o 35m FC-AL cable (HSSDCDB9) (~$40 for two on eBay)

    The best part? The box is located in my basement, so I have this incredibly fast disk access with no noise and no extra heat inside my case. That also allows me to cool the case more efficiently. Sure, IDE RAID may be cheaper, but the per-disk performance, coupled with the reduced noise in my office and the reduced heat in the case, is a big plus. Also, I might eventually pick up a second backplane for another four disks and do RAID 0+1. Since each channel is capable of 100MB/sec (without caching), a set created across two channels would be amazing.
  • My experience. (Score:3, Interesting)

    by WeThree ( 2688 ) on Thursday December 05, 2002 @01:22AM (#4816588) Homepage
    I've got 12 WD 120GB 7200rpm special edition drives (8mb cache on each).

    They're all hooked up to a 3ware Escalade 7500-12 card, RAID 5, with a hot spare. The application is storage of large amounts of raw digital images, 7-8MB each.

    It's been going for a few weeks now with no problems; the 2.4.19 kernel's built-in drivers light the array right up as sda1.

    bfair@deathstar:~$ df -h /dev/sda1
    Filesystem            Size  Used Avail Use% Mounted on
    /dev/sda1             1.1T  543G  574G  49% /storage1

    SCSI subsystem driver Revision: 1.00
    3ware Storage Controller device driver for Linux v1.02.00.025.
    scsi0 : Found a 3ware Storage Controller at 0x10d0, IRQ: 5, P-chip: 1.3
    scsi0 : 3ware Storage Controller
    Vendor: 3ware Model: 3w-xxxx Rev: 1.0
    Type: Direct-Access ANSI SCSI revision: 00
    Attached scsi disk sda at scsi0, channel 0, id 0, lun 0
    SCSI device sda: -1951238656 512-byte hdwr sectors (100477 MB)
    sda: sda1

    reiserfs: checking transaction log (device 08:01) ...
    Using r5 hash to sort names
    ReiserFS version 3.6.25


    I would show you more, but I'm ssh'd in and the power just went out. The 300VA UPS running this box while I'm testing it probably just let its smoke out. D'oh.

    Anyway, I like it. If it's not fried. :\
  • by leek ( 579908 ) on Thursday December 05, 2002 @02:21AM (#4816775)
    The article is misleading because the Adaptec 2400A actually supports RAID 1+0 (striped mirrors), which is more fault-tolerant than RAID 0+1 (mirrored stripes). There's a useful article on the subject of RAID 1+0 vs. RAID 0+1 [ofb.net].

    But this is probably Adaptec's fault, since they label RAID 1+0 and RAID 0+1 the opposite of standard convention.

    I connect different power supply lines to each of the mirrors' halves, so that one half of each mirror is powered by one line, and the other half is supplied by another line.

    If a power supply fails only partially, it usually does so on one of the peripheral power lines. With the right power supply wiring, and the 2400A set up in RAID 1+0 mode, a power supply failure will not usually result in any lost data, since it will be isolated to one half of each mirror.

    Power supplies have been failing on me more often than drives have lately, even when they are used well within their rated limits.

    Don't power both drives of a mirror with the same peripheral power cable!!! On many power supplies, those separate peripheral power connector lines are on separate circuits, which means one may fail while the other doesn't. It's best to spread the chances of failure out as evenly as possible across the RAID.

    Two-channel IDE RAID cannot support RAID 1+0, only RAID 0+1. Four IDE channels are necessary for RAID 1+0 to be effective, because if one drive fails in a two-channel configuration, the other drive sharing the same channel can stop working too, especially if the failing drive was the master.

    Adaptec also offers open-source drivers [adaptec.com] for the 2400A, which the article neglects to mention, implying that only 3ware and HighPoint do.

    Also, the article's table has read/write speeds of the Promise FastTrak shown backwards (133 vs 100).

    Nonetheless, the article's comments about the 2400A's slow rebuild time are accurate. It takes around 8 hours to rebuild my 120GB RAID 1+0 array (four 60GB 7200 RPM drives).

    And keep in mind that the 2400A is a SCSI RAID solution retrofitted onto an IDE interface -- some of the 2400A's firmware is shared with Adaptec's SCSI RAID firmware. So the 2400A is not really built or optimized for IDE from the ground up.

    But if you need RAID 1+0 or RAID 5 data protection, and you have 4 inexpensive IDE drives to use, the 2400A is nice. It has twice saved me from losing any data. Don't expect blazing-fast performance, though -- just consistently good performance, very low CPU usage, and very strong reliability.

  • by Anonymous Coward on Thursday December 05, 2002 @04:17AM (#4817109)
    I have every byte of data on the main file server on a RAID array of some sort.

    There are 6 drives. Each has a common prefix (/boot and swap partitions), then the rest is divided thus:

    2 are striped RAID-0 for /var/cache and user temp space. This is stuff that can be recovered easily in the event of a drive failure. It's not backed up; it's for the squid cache, user mp3s, unpacked source trees, and the like.

    4 are in a RAID-10 array. That is, a pair of mirrors, striped together. This is done with two Promise IDE cards, and the mirrors span controllers, so even a controller card failure can't take the system down. (This has been experimentally verified when one acted up. Reseating it seemed to fix it, but there's a spare sitting on the shelf for next time it gets out of line.) The disaster plans also involve splitting the mirror and walking out the door with half of it if we have to evacuate for some reason. Instant, up-to-the-minute backup.

    Swap is over three mirrored pairs, and the kernel stripes them, so we have the equivalent of RAID-10 there, as well. /boot is normally mounted read-only, and contains a full (text-mode) system installation with all the goodies you could want to reconstruct a messed-up system, recover tape backups, etc. This is a 6-way mirror, so killing it would be extremely difficult. LILO's mirror support means that each of those 6 drives is bootable.
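
    For the curious, the shape of that layout in mdadm terms (device names below are invented; each mirror pairs one disk on the first Promise card with one on the second):

    # two mirrors spanning the controllers, striped together for the data volume
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hde3 /dev/hdi3
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdg3 /dev/hdk3
    mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1
    # swap striping: give each mirrored swap area the same priority in /etc/fstab
    # /dev/md3  none  swap  sw,pri=1  0 0
    # /dev/md4  none  swap  sw,pri=1  0 0
    # /dev/md5  none  swap  sw,pri=1  0 0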

    This plus ext3 has given me a very robust system.
    Using software RAID lets me easily replace any broken part of the system without worrying that the original vendor might have gone out of business or lost interest in the RAID product.

    It's saved my ass a few times already, so I'm happy.
  • by Anonymous Coward on Thursday December 05, 2002 @05:19AM (#4817232)
    Also at the source of the "SCSI is robust" myth is how the drives are treated.

    An operating drive can dissipate quite a bit of heat.

    SCSI has always been the server domain: up 24x7, always at the same temperature.

    IDE is used the most in desktop systems: powered down each night, they go through many more thermal cycles.
  • How about this (Score:3, Interesting)

    by TheLink ( 130905 ) on Thursday December 05, 2002 @07:45AM (#4817513) Journal
    18GB SCSI 10K rpm drive vs 120GB ATA 7200 rpm drive.

    Partition the 120GB drive so that you only use the fastest 18GB of it.

    Now compare random access seek times. You're only seeking across 15% of the 120GB drive ;).

    If a 120GB ATA drive is too expensive, test with an 80GB drive.

    Not sure what the results will be, but it's worth trying, don't you think?

    Some drives would probably be better at short seeks than others (settling time etc). Don't see much info on this tho.
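
    A rough sketch of the setup (untested; the device name and parted syntax are illustrative and vary by version):

    # put a single ~18GB partition at the start of the disk, on the fast outer tracks,
    # and leave the remaining ~100GB unpartitioned
    parted -s /dev/hdb mklabel msdos
    parted -s /dev/hdb mkpart primary ext2 1MiB 18GiB
    mke2fs /dev/hdb1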
  • Re:experience (Score:2, Interesting)

    by Xyd ( 631642 ) on Thursday December 05, 2002 @12:20PM (#4818710)
    Be wary of blanket statements that RAID5 performs poorly for writes. While this is probably true for the RAID cards mentioned here, some storage systems (e.g. EMC Clariion, Dell Clariion) have two mechanisms for increasing performance.

    First, write cache. When performing a write, the storage enclosure puts the data in its cache (e.g. 8GB) and ACKs the write back to the host before it even touches a disk. So, unless the write is huge there is no performance loss for RAID5. However, for huge writes....

    Some storage enclosures (again, Clariion -- that's what I know :P) use enhanced write algorithms that compute the parity and write to all disks virtually simultaneously. (Yeah, that's open to flame.) There's a good whitepaper on this at EMC's site.

    Granted, none of us have these enclosures at home, but making a blanket statement that RAID5 performs poorly for writes is short-sighted.
  • by mccormick ( 40772 ) on Thursday December 05, 2002 @01:36PM (#4819383)
    That's why it's suggested that your RAID controller have a separate UPS-like power supply, so that in the event of a system failure the controller can still make sure the drives flush their caches to disk and shut down properly. However, I'm not currently aware of an IDE RAID solution that does this. And I was suggesting the large cache for performance reasons, not necessarily for reliability.
