IDE RAID Examined
Bender writes "The Tech Report has an interesting article comparing IDE RAID controllers from four of the top manufacturers. The article serves as more than just a straight product comparison, because the author has included tests for different RAID levels and different numbers of drives, plus a comprehensive series of benchmarks intended to isolate the performance quirks of each RAID controller card at each RAID level. The results raise questions about whether IDE RAID can really take the place of a more expensive SCSI storage subsystem in workstation or small-scale server environments. Worthwhile reading for the curious sysadmin." I personally would love to hear any IDE RAID stories that Slashdotters might have.
IDE Raid, inexpensive but major hassle (Score:4, Insightful)
Even those so-called rounded cables can clutter the hell out of a tower case if you have a 4-channel RAID controller.
In my case it's the Adaptec 2400A four-channel, with four 120GB Western Digital hard drives, RAID 1+0.
SCSI for workstations? (Score:3, Insightful)
I mean, I know the best drives are SCSI flavor, but it seems like there are so many other things you could spend money on first that would get you way better performance, like a dual Athlon setup or something.
Re:A little story (Score:5, Insightful)
So how did he decide which of the 5 drives he was going to pull?
Re:IDE Raid, inexpensive but major hassle (Score:1, Insightful)
teeny tiny leetle cables...
Re:IDE Raid, inexpensive but major hassle (Score:2, Insightful)
Re:A little story (Score:4, Insightful)
I let out a yelp when I got to
puts it on the broken machine, formats and loads windows on it *
One of the things that really chaps my ass, more than anything else, is people asking my advice (and they do so specifically because of my experience in whichever field they're inquiring about), patiently listening to what I have to say, asking intelligent questions... then doing something completely or mostly against my recommendations.
More often than not, something ends up going wrong that would/could not have occurred had they followed my advice in the first place, and then I hear about it.
It sucks the last drop of willpower from my soul to hold myself back from saying "I told you so!" and charging them a stupidity fee. It's tempting to do so even to friends, if/when I get sucked into the resulting mess. [Hear that, Jared?
* Linux zealots: For a more warm-and-cozy feeling, disregard the first eight words of this quote.
Re:My experience with IDE RAID.... (Score:5, Insightful)
Two 80GB WD Special Edition drives in RAID 0 (7200RPM, 8MB cache) rarely burst over 90MB/s. They usually have a sustained transfer of ~50-65MB/s.
Additionally, your seek time is going to suck. I guarantee it's not going to be under 11ms. Your CPU utilization during transfers will probably be around 4% in the absolute best-case scenario and 11% on average. This is because, no matter what you think, all RAID cards under ~$140 do the calculations for the transfers in software, not hardware. All you have is a controller card with special drivers. You won't come even close to beating the overall performance of an Ultra160 SCSI drive, or an Ultra160 SCSI RAID 0 setup.
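If you'd rather measure than argue, here's a minimal sketch for checking sustained sequential reads yourself (the device path and sizes are assumptions; point it at your own array, and note that the page cache will inflate the number unless you read more data than you have RAM or drop caches first):

import os
import time

DEVICE = "/dev/md0"          # assumption: your RAID block device, or any big file on it
BLOCK = 1024 * 1024          # read in 1MB chunks
TOTAL = 512 * BLOCK          # read 512MB total

fd = os.open(DEVICE, os.O_RDONLY)
start = time.time()
done = 0
while done < TOTAL:
    buf = os.read(fd, BLOCK)
    if not buf:              # hit end of device/file early
        break
    done += len(buf)
os.close(fd)
elapsed = time.time() - start
print("sustained read: %.1f MB/s" % (done / (1024.0 * 1024.0) / elapsed))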
Re:SCSI for workstations? (Score:5, Insightful)
Try getting sustained data transfer rates out of an IDE RAID under load. It won't happen. You'll stutter. *boom* goes your realtime process.
SCSI RAID, on the other hand, streams happily along with very little CPU load.
just like winmodems (Score:4, Insightful)
RAID5 doesn't need a RAID Card! (Score:2, Insightful)
One customer ordered a system from a vendor who insisted on installing an ATA RAID card, and it was a remarkable disappointment. Linux was able to identify the array as a SCSI device and mount it. Then, for some reason, the customer rebooted his system. During BIOS detection, the RAID card started doing parity reconstruction and ran for over 24 hours before finally allowing the system to boot! For comparison, the same-sized array would resync in the background under Linux in about 3 hours.
Also, the reconstruction tools built into the RAID cards are pretty limited. If you have a problem with a Linux software RAID array, at least you can use the normal low-level tools to access the drives and try to diagnose the problem. Just my opinion.
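For what it's worth, here's a rough sketch of watching an md resync from userland on Linux (the /proc/mdstat parsing below is deliberately naive, so treat it as illustrative only):

import re
import time

def resync_status():
    with open("/proc/mdstat") as f:
        text = f.read()
    # progress lines look like: "[=>....]  recovery = 12.3% (...) finish=182.4min speed=..."
    m = re.search(r"(recovery|resync)\s*=\s*([\d.]+)%.*finish=([\d.]+)min", text)
    return (float(m.group(2)), float(m.group(3))) if m else None

while True:
    status = resync_status()
    if status is None:
        print("no resync/recovery in progress")
        break
    pct, eta_min = status
    print("resync %.1f%% done, roughly %.0f minutes to go" % (pct, eta_min))
    time.sleep(60)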
Re:SCSI for workstations? (Score:4, Insightful)
I keep telling them to wait a couple of years, and we'll see who is wasting money.
Agreed. This is not always easy to back up with facts (by quoting mfgr specs, etc), but in both recent and long-term (10+ years) experience, my systems with SCSI drives have tended to fail less often, and usually less suddenly, than IDE.
Generally, in 24x7 server usage, a SCSI disk will run for years, then either slowly develop bad blocks, or you start getting loud bearing noise, and after powering down, the drive fails to spin back up. In the old days we'd blame that failure mode on stiction, and could usually get the drive to come back one last time (long enough to make a backup) by giving the server a good solid thump in just the right spot.
Background:
My first SCSI-based PC was a 286 with an 8-bit Seagate controller and a 54MB Quantum drive recovered from my old Atari 500 "sidecar".
Careful, there's a gotcha with IDE RAID... (Score:3, Insightful)
Each IDE controller can support up to two drives, a master and a slave. What happens if you hang two drives off one controller, and the "master" drive dies?
If it dies badly enough, the "slave" drive can go offline. Now you've got TWO drives in your array that aren't talking. There goes your redundancy.
If your purpose in using RAID is to have a system that can continue operating after a single drive failure, then you better think again before you hang two drives off any one controller.
As the Linux software RAID docs point out, you should only have one drive per IDE channel if you're really concerned about uptime. That would imply that a "4 channel" RAID card should host at most four drives, one per channel, all set to "master" with no "slaves".
Note that this does not apply to SATA drives, as there isn't really a master-slave relationship with SATA -- all drives have separate cables and controller circuits. SATA drives are enumerated the same way as older drives for backwards compatibility with drivers and other software, but they are otherwise independent. (At least that's what I hear, I haven't actually seen one of these beasts yet...)
And of course none of this touches on controller failures, which are another issue. But if you are worried about losing drives and still staying up, then you'd better take this into consideration when you design your dream storage system.
(I don't know about you guys, but I have lost several drives over the years, and not one controller...)
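To make the shared-channel point concrete, here's a toy model of a 4-drive RAID 1+0 where a dying master takes its slave down with it. The two cabling layouts are hypothetical, and whether you even get to choose depends on the card:

# Mirror pairs are (0,1) and (2,3); the striped array survives as long as
# each mirror pair still has at least one working drive.
MIRRORS = [(0, 1), (2, 3)]

def survives(dead_drives):
    return all(not set(pair) <= set(dead_drives) for pair in MIRRORS)

# Losing a channel means losing BOTH drives cabled to it.
layouts = {
    "mirror partners share a channel":      [(0, 1), (2, 3)],
    "mirror partners on separate channels": [(0, 2), (1, 3)],
}

for name, channels in layouts.items():
    outcomes = ["channel %d loss -> %s" % (i, "OK" if survives(ch) else "ARRAY DEAD")
                for i, ch in enumerate(channels)]
    print("%s: %s" % (name, "; ".join(outcomes)))

Put mirror partners on the same channel and one bad master can take out both halves of a mirror; split them across channels and a channel loss only degrades each mirror.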
Re:A little story (Score:3, Insightful)
We use both a Mylex960 and Promise card here (Score:2, Insightful)
However, more recently we needed more builds/CD-image space, so we picked up a Promise FastTrak100 (TX2) RAID controller ($150CA) plus a couple of 7200RPM 80GB Maxtors (~$150CA each), and have been living happily ever since. Now for sure we'd never put this in the main server, but for a cheap, reliable solution that gives you tons of space on a server with only medium load, it can't be beat.
The point is, examine your needs and see what fits!
-Malloc
Re:SCSI for workstations? (Score:5, Insightful)
So yeah, you could probably spend your money on other things to get better performance, but that's entirely beside the point. What could you spend that money on to get better data reliability?
But you should use one anyway... (Score:2, Insightful)
Then again, a RAID _card_ may not help here, since the disks are at the mercy of the system power. Best to use a real array, if you have the bucks.
Re:SCSI for workstations? (Score:5, Insightful)
Re:SCSI for workstations? (Score:1, Insightful)
Re:A little story (Score:2, Insightful)
-Ben
Re:IDE Raid, inexpensive but major hassle (Score:4, Insightful)
Re:IDE Raid, inexpensive but major hassle (Score:2, Insightful)
Back up to an off-machine disk or tape. NOW. You will thank me later.
Re:SCSI for workstations? (Score:2, Insightful)
And if you own an IBM hard drive, the operational lifespan is a few months.
Re:experience (Score:3, Insightful)
Simple... You purchase a different controller, put the drives on it, build the RAID, and restore the data onto it from your backups.
RAID is meant to increase overall reliability; it is not meant as a substitute for backups.
Re:I'd have to say, yr wrong (Score:2, Insightful)
For my money, it's IDE RAID all the way.
On SCSI drives and RAID controllers (Score:4, Insightful)
What good is it to have a RAID 5 without a hot spare, when you can only guard against a single drive failure? So, I really hope IDE RAID supports hot spares; otherwise I question the sanity of mind of the admins who implement such solutions.
As for IDE vs SCSI drives, I have to say that I will always go with SCSI, as long as I am in a multiuser environment where seek times are critical. Apparently (experience shows), if you put your database space on a RAID, seek times are critical for the performance of your application. In this context, I think this review/comparison would have benefited from benchmarking a real-life application, with a database hosted on the RAID.
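As a back-of-the-envelope illustration of why the hot spare matters (the MTBF, rebuild time, and swap delay below are assumed numbers, not measurements, and the exponential failure model is crude):

import math

def second_failure_prob(surviving_drives, window_hours, mtbf_hours):
    # crude model: each surviving drive fails independently at a constant rate
    return 1.0 - math.exp(-surviving_drives * window_hours / mtbf_hours)

MTBF = 500000.0      # assumed drive MTBF in hours
REBUILD = 6.0        # assumed hours to rebuild onto a replacement drive
SWAP_DELAY = 48.0    # assumed hours before someone notices and swaps in a cold spare

with_hot_spare = second_failure_prob(7, REBUILD, MTBF)
cold_swap      = second_failure_prob(7, REBUILD + SWAP_DELAY, MTBF)
print("8-drive RAID 5, hot spare: %.4f%% chance of a second failure" % (100 * with_hot_spare))
print("8-drive RAID 5, cold swap: %.4f%% chance of a second failure" % (100 * cold_swap))

The hot spare doesn't change the math of RAID 5; it just shrinks the window during which a second failure loses the array.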
Re:SCSI for workstations? (Score:3, Insightful)
Consider that the seek time on those 160GB IDE drives is around 9-12ms, compared to IBM's 146GB SCSI drive with a seek time of 4.7ms, 133MB/s burst vs 320MB/s burst, and 7200 vs 10,000RPM. And the thing most businesses love: a 5-year warranty for SCSI vs 1 year for the IDE drives. Once Serial ATA comes out I think you'll see even more IDE-based RAID being used.
In workstations, yes; in high-usage servers, no. Even in the small department I work in, we'd rather pay 60% more for SCSI and get a 5-year warranty and proven long-term reliability.
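A quick worked example of what those seek and spindle numbers mean for random I/O, using the usual rough model that average access time is seek time plus half a rotation:

def rough_iops(seek_ms, rpm):
    rotational_latency_ms = 0.5 * 60000.0 / rpm   # half a revolution, in ms
    return 1000.0 / (seek_ms + rotational_latency_ms)

print("7200RPM IDE, ~9ms seek:   ~%d IOPS" % rough_iops(9.0, 7200))
print("10kRPM SCSI, 4.7ms seek:  ~%d IOPS" % rough_iops(4.7, 10000))

Roughly 75 vs 130 random operations per second per spindle, which is why the SCSI box feels so much faster under a seek-heavy database load even when the sequential numbers look similar.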
Re:IDE Raid, inexpensive but major hassle (Score:1, Insightful)