Hardware

IDE RAID Examined

Bender writes "The Tech Report has an interesting article comparing IDE RAID controllers from four of the top manufacturers. The article serves as more than just a straight product comparison, because the author has included tests for different RAID levels and different numbers of drives, plus a comprehensive series of benchmarks intended to isolate the performance quirks of each RAID controller card at each RAID level. The results raise questions about whether IDE RAID can really take the place of a more expensive SCSI storage subsystem in workstation or small-scale server environments. Worthwhile reading for the curious sysadmin." I personally would love to hear any IDE RAID stories that Slashdotters might have.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward on Wednesday December 04, 2002 @09:22PM (#4815386)
    IDE can only handle one or two hard drives per channel, which makes the cabling a real nasty hassle as opposed to SCSI-based RAID.

    Even those so-called rounded cables can clutter the hell out of a tower case if you have a 4-channel RAID controller.

    In my case it's the Adaptec 2400A four-channel, with four 120GB Western Digital hard drives, RAID 1+0.
  • by autopr0n ( 534291 ) on Wednesday December 04, 2002 @09:23PM (#4815389) Homepage Journal
    What's the point in having SCSI RAID in most workstations these days? I mean, RAM is so cheap now you can throw in a couple gigs for much less than the price difference between SCSI RAID and IDE RAID.

    I mean, I know the best drives are SCSI flavor, but it seems like there are so many other things you could spend money on first that would get you way better performance, like getting a dual Athlon setup or something.
  • Re:A little story (Score:5, Insightful)

    by tmark ( 230091 ) on Wednesday December 04, 2002 @09:34PM (#4815470)
    their big old file server had 5 hard drives in it, but was only using 1 in windows! Being the smart boy that he is, he dutifully shuts down the machine, removes one of the drives, puts it on the broken machine, formats and loads windows on it.

    So how did he decide which of the 5 drives he was going to pull?
  • by Anonymous Coward on Wednesday December 04, 2002 @09:35PM (#4815481)
    see serial ata raid (coming soon to a world near you)

    teeny tiny leetle cables...
  • by PlanetX 00 ( 623339 ) on Wednesday December 04, 2002 @09:42PM (#4815531)
    The next generation of IDE will be Serial ATA (SATA). These drives will have a small cable going from the controller to the drive, getting rid of all the cable clutter. Also, these controllers will allow you to use more than four drives; the more ports, the more drives. Finally, these controllers will have improved electronics allowing the card to do more work, making them less of a CPU resource hog. Continuing to use SCSI will get you higher speeds and greater drive MTBFs, but with IDE RAID you might not have to worry about the drive MTBFs (I can buy several larger IDE drives for the same cost as a smaller SCSI drive).
  • Re:A little story (Score:4, Insightful)

    by alexburke ( 119254 ) <alex+slashdot@@@alexburke...ca> on Wednesday December 04, 2002 @09:50PM (#4815573)
    Oh. My. God.

    I let out a yelp when I got to
    puts it on the broken machine, formats and loads windows on it *

    One of the things that really chaps my ass, more than anything else, is people asking my advice (and they do so specifically because of my experience in whichever field they're inquiring about), patiently listening to what I have to say, asking intelligent questions... then doing something completely or mostly against my recommendations.

    More often than not, something ends up going wrong that would/could not have occurred had they followed my advice in the first place, and then I hear about it.

    It sucks the last drop of willpower from my soul to hold myself back from saying "I told you so!" and charging them a stupidity fee. It's tempting to do so even to friends, if/when I get sucked into the resulting mess. [Hear that, Jared? :P]

    * Linux zealots: For a more warm-and-cozy feeling, disregard the first eight words of this quote.
  • by Phosphor3k ( 542747 ) on Wednesday December 04, 2002 @09:54PM (#4815596)
    That's bullshit. Post some links to benchmarks that back that up.

    Two 80GB WD special edition drives in RAID 0 (7200RPM, 8mb cache) rarely burst over 90MB/s. They usually have a sustained transfer of ~50-65MB/s.

    Additionally, your seek time is going to suck. I guarantee it's not going to be under 11ms. Your CPU utilization during transfers will probably be around 4% in the absolute best-case scenario and 11% on average. This is because, no matter what you think, all RAID cards under ~$140 do the calculations for the transfers in software, not hardware. All you have is a controller card with special drivers. You won't come even close to beating the overall performance of a SCSI 160 drive, or a SCSI 160 RAID 0 setup. (A rough scaling model is sketched below.)
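    As an illustrative sketch of where figures like those come from, here is the usual scaling model for striping, in Python (the ~30 MB/s per-drive rate and the ~133 MB/s 32-bit/33MHz PCI ceiling are assumptions for illustration, not numbers from the article):

        # Idealized RAID 0 sequential throughput: drives scale roughly
        # linearly until the bus or controller becomes the bottleneck.
        def stripe_throughput(n_drives: int, per_drive_mb_s: float,
                              bus_ceiling_mb_s: float = 133.0) -> float:
            return min(n_drives * per_drive_mb_s, bus_ceiling_mb_s)

        # Two ~30 MB/s drives land right in the 50-65 MB/s sustained range above.
        print(stripe_throughput(2, 30.0))   # 60.0
        print(stripe_throughput(4, 30.0))   # 120.0 ideal; real arrays fall short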
  • by aussersterne ( 212916 ) on Wednesday December 04, 2002 @09:56PM (#4815610) Homepage
    Ummm, no.

    Try getting sustained data transfer rates out of an IDE RAID under load. It won't happen. You'll stutter. *boom* goes your realtime process.

    SCSI RAID, on the other hand, streams happily along with very little CPU load.
  • by Suppafly ( 179830 ) <slashdot@s[ ]afly.net ['upp' in gap]> on Wednesday December 04, 2002 @09:56PM (#4815613)
    Using IDE RAID is like using a winmodem. Unlike with modems, where everyone has one, RAID has a basic educational entry point. I seriously doubt IDE RAID will ever overtake SCSI in any area where knowledgeable people are doing the administration.
  • by mprinkey ( 1434 ) on Wednesday December 04, 2002 @10:01PM (#4815644)
    I have about 5 TB of RAID5 storage online at various customer sites. They are all using Linux software RAID and Promise ATA66/100/133 controllers. Even when using two drives per IDE channel, we still see very good performance. A RAID5 system with eight 120GB 5400-RPM Maxtor drives gives about 55 MB/sec write and 80 MB/sec read performance under Bonnie. Those eight drives were on two Promise ATA100 controllers. Cabling is fairly easy if you use 24" UltraATA cables. And it will get much easier with Serial ATA.

    One customer ordered a system from a vendor who insisted on installing an ATA RAID card, and it was a remarkable disappointment. Linux was able to identify the array as a SCSI device and mount it. Then, for some reason, the customer rebooted his system. During the BIOS detection, the RAID card started doing parity reconstruction and ran for over 24 hours before finally allowing the system to boot! For comparison, the same-sized array would resync in the background under Linux in about 3 hours (rough arithmetic sketched below).

    Also, the reconstruction tools built into the RAID cards are pretty limited. If you have a problem with a Linux software RAID array, at least you can use the normal low-level tools to access the drives and try to diagnose the problems. Just my opinion.
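    A back-of-the-envelope way to sanity-check those resync times (assumed model: a resync scans each member drive once at some sustained rate, and members proceed in parallel, so wall-clock time is roughly one drive's capacity over the rate; the rates below are picked to match the times quoted above, not measured):

        GB = 1000 ** 3
        MB = 1000 ** 2

        def resync_hours(drive_gb: float, rate_mb_s: float) -> float:
            """Hours to scan one member drive end to end at a sustained rate."""
            return (drive_gb * GB) / (rate_mb_s * MB) / 3600

        # One 120GB member bounds the wall-clock time.
        print(f"{resync_hours(120, 12):.1f} h")   # ~2.8 h, like the 3 h Linux resync
        print(f"{resync_hours(120, 1.4):.1f} h")  # ~23.8 h, like the card's 24+ h rebuild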
  • by Anonymous Coward on Wednesday December 04, 2002 @10:10PM (#4815697)
    The "Enterprise Server Group" at my Fortune 500 employer keeps telling me I should be purchasing $1,200 "SunFire V100" servers with IDE instead of wasting $2K+ on the V120 with hot-swap SCSI.

    I keep telling them to wait a couple of years, and we'll see who is wasting money.

    There is also the reliability factor. SCSI drives tend to be more robust.

    Agreed. This is not always easy to back up with facts (by quoting mfgr specs, etc.), but in both recent and long-term (10+ years) experience, my systems with SCSI drives have tended to fail less often, and usually less suddenly, than IDE.

    Generally, in 24x7 server usage, a SCSI disk will run for years, then either slowly develop bad blocks, or you start getting loud bearing noise, and after powering down, the drive fails to spin back up. In the old days we'd blame that failure mode on stiction, and could usually get the drive to come back one last time (long enough to make a backup) by giving the server a good solid thump in just the right spot.

    Background:
    My first SCSI-based PC was a 286 with an 8-bit Seagate controller and a 54MB Quantum drive recovered from my old Atari 500 "sidecar".

  • by dougie404 ( 576798 ) on Wednesday December 04, 2002 @10:28PM (#4815770)
    ...so be alert.

    Each IDE channel can support up to two drives, a master and a slave. What happens if you hang two drives off one channel, and the "master" drive dies?

    If it dies badly enough, the "slave" drive can go offline too. Now you've got TWO drives in your array that aren't talking. There goes your redundancy.

    If your purpose in using RAID is to have a system that can continue operating after a single drive failure, then you'd better think again before you hang two drives off any one channel (there's a sketch of this failure mode below).

    As the Linux software RAID docs point out, you should only have one drive per IDE channel if you're really concerned about uptime. That would imply that a two-channel card marketed as a "4 drive" RAID card should only be used with a maximum of two drives, both set to "master", and no "slaves".

    Note that this does not apply to SATA drives, as there isn't really a master-slave relationship with SATA -- all drives have separate cables and controller circuits. SATA drives are enumerated the same way as older drives for backwards compatibility with drivers and other software, but they are otherwise independent. (At least that's what I hear, I haven't actually seen one of these beasts yet...)

    And of course none of this touches on controller failures, which is another issue. But if you are worried about losing drives and still staying up, then better take this into consideration when you design your dream storage system.

    (I don't know about you guys, but I have lost several drives over the years, and not one controller...)
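    To make that failure mode concrete, here is a small hypothetical sketch (the drive names and the worst-case assumption that a dead drive hangs its whole channel come from the comment above, not from any particular controller):

        def members_lost(channels: list[list[str]], failed: str) -> set[str]:
            """Drives that go offline when `failed` dies, assuming a dead
            drive can hang every drive sharing its channel (worst case)."""
            for channel in channels:
                if failed in channel:
                    return set(channel)
            return set()

        shared   = [["hda", "hdb"], ["hdc", "hdd"]]      # two drives per channel
        separate = [["hda"], ["hdc"], ["hde"], ["hdg"]]  # one master per channel

        for layout in (shared, separate):
            lost = members_lost(layout, layout[0][0])
            # RAID 5 tolerates exactly one lost member.
            print(len(lost), "offline ->", "survives" if len(lost) <= 1 else "array lost")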
  • Re:A little story (Score:3, Insightful)

    by shaitand ( 626655 ) on Wednesday December 04, 2002 @10:32PM (#4815786) Journal
    I have a number of these stories to go along with my success stories. The problem is that at that point the technician is a salesman: you're their technical advisor and also the one who directly profits from their decision, which causes a certain amount of inherent distrust. No matter how well he explains it, he can't "force" the customer to do anything; it's their money. Only monopolistic corporations like, say... Microsoft (just a random pick) try to force their customers. That company was free to ignore the tech and put themselves out of business.
  • by malloc ( 30902 ) on Wednesday December 04, 2002 @10:37PM (#4815805)
    Here at work our main R&D server has been using a SCSI Mylex960 with RAID 1 36GB drives. This has worked dandily for the past several years. This machine gets hit pretty hard with tons of small I/O, so I wouldn't consider IDE for it.

    However, more recently we needed more build/CD-image space, so we picked up a Promise FastTrak100 (TX2) RAID controller ($150CA) plus a couple of 7200RPM 80GB Maxtors (~$150CA each), and have been living happily ever since. Now for sure we'd never put this in the main server, but for a cheap, reliable solution that gives you tons of space on a server with only medium load, it can't be beat.

    The point is, examine your needs and see what fits!

    -Malloc
  • by GT_Alias ( 551463 ) on Wednesday December 04, 2002 @10:57PM (#4815888)
    Ehhhh...RAID vs. RAM/dual CPUs? I was under the impression people used RAID for data integrity (at least, that's what I use it for). Unless you're striping, I suppose.

    So yeah, you could probably spend your money on other things to get better performance, but that's entirely beside the point. What could you spend that money on to get better data reliability?

  • by Anonymous Coward on Wednesday December 04, 2002 @11:07PM (#4815933)
    RAID 5 in software can be dangerous. If a parity write fails (disk/system dies), you'll likely have data corruption and not even know it. Best to trust reliable hardware to do the XORs.

    Then again, a RAID _card_ may not help here, since the disks are at the mercy of the system power. Best to use a real array, if you have the bucks.
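    For the curious, the XOR itself is simple enough to sketch in a few lines of Python (a toy model of a single RAID 5 stripe, not any card's firmware):

        def xor_blocks(blocks: list[bytes]) -> bytes:
            """XOR equal-sized blocks together byte by byte."""
            out = bytearray(len(blocks[0]))
            for block in blocks:
                for i, byte in enumerate(block):
                    out[i] ^= byte
            return bytes(out)

        data = [b"AAAA", b"BBBB", b"CCCC"]   # data blocks in one stripe
        parity = xor_blocks(data)            # the parity block RAID 5 writes

        # Lose any one block and the survivors plus parity rebuild it:
        rebuilt = xor_blocks([data[0], data[2], parity])
        assert rebuilt == data[1]

        # The danger above: if that parity write never hit the disk, this same
        # reconstruction silently returns garbage instead of data[1].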
  • by prisoner-of-enigma ( 535770 ) on Wednesday December 04, 2002 @11:17PM (#4815970) Homepage
    You apparently didn't read the article, and have no current experience with IDE RAID systems. Take a look at the sustained transfer rates of the 3Ware system. They match just about any SCSI controller you're likely to find when paired with good 7200RPM drives. The myth that SCSI is the only way to get reliable sustained transfers is just that: a myth. SCSI's only advantages now are reduced cable clutter and having up to 15 drives on one controller, but who needs that many drives these days when 120GB drives are available for next to nothing?
  • by Anonymous Coward on Wednesday December 04, 2002 @11:20PM (#4815987)
    Also note that you took the worst-performing card of the bunch and used it as your yardstick. All of the rest have under 10% utilization. Trying awfully hard to find something to gripe about, aren't we?
  • Re:A little story (Score:2, Insightful)

    by binner1 ( 516856 ) <bdwalton&gmail,com> on Wednesday December 04, 2002 @11:45PM (#4816093) Homepage
    I think a new, more accurate moderation is in order: Sad

    -Ben
  • by I Am The Owl ( 531076 ) on Wednesday December 04, 2002 @11:46PM (#4816104) Homepage Journal
    You are confusing serial communications with parallel communications. Please research further before flaming. Thx.
  • by ZorinLynx ( 31751 ) on Thursday December 05, 2002 @01:14AM (#4816560) Homepage
    I hope you keep an off-machine backup. It only takes one violent power supply failure to make all your data suddenly vanish as multiple drives meet their maker in a blinding flash of light.

    Back up to an off-machine disk or tape. NOW. You will thank me later.

  • by Regul8or ( 603030 ) on Thursday December 05, 2002 @01:23AM (#4816590)
    "IDE hard drives have a market lifespan of a few months."

    And if you own an IBM hard drive the operational lifespan is a few months.
  • Re:experience (Score:3, Insightful)

    by ZorinLynx ( 31751 ) on Thursday December 05, 2002 @01:25AM (#4816601) Homepage
    >What do you do if the controller card itself dies?

    Simple... You purchase a different controller, put the drives on it, build the RAID, and restore the data onto it from your backups.

    RAID is meant to increase overall reliability; it is not meant as a substitute for backups.

  • by bonezed ( 187343 ) on Thursday December 05, 2002 @02:04AM (#4816733)
    I am the sysadmin at my company and I built the network and servers by hand. The servers all run 3ware Escalade cards (Escalade 6800, RAID 10, 8 drives). In the 2 years they have been running I have had only 1 drive fail. Now my experience with SCSI is slightly different: IBM SCSI drives and Mylex 150/160 controllers, lots of drive failures (in a high-end RAID cage, too) and then lots of trouble getting the RAID sets to rebuild.

    For my money, it's IDE RAID all the way.

  • by haggar ( 72771 ) on Thursday December 05, 2002 @04:40AM (#4817161) Homepage Journal
    I like reading the comments here; I am humble enough to know I can always learn something. But there's something I didn't see mentioned in all these IDE RAID setups that people describe: can you have a hot spare disk? A hot spare is critical for data reliability. If you have a large RAID 5 or RAID 0+1 (not advised; always do 1+0 whenever possible), you can do the math (sketched below) and see how darn important it is to have the hot spare.

    What good is it to have a RAID 5 without a hot spare, when it can only guard against a single drive failure? So, I really hope IDE RAID supports hot spares; otherwise I question the sanity of the admins who implement such solutions.

    As for IDE vs SCSI drives, I have to say that I will always go with SCSI, as long as I am in a multiuser environment where seek times are critical. Apparently (experience shows), if you put your database space on a RAID, seek times are critical for the performance of your application. In this context, I think this review/comparison would have benefited from benchmarking a real-life application, with a database hosted on the RAID.
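    A rough version of "the math" (assumed model: independent drive failures at a constant annual rate; the 3% rate and the rebuild windows are illustrative guesses, not vendor figures). The hot spare's whole benefit is shrinking the window during which a degraded RAID 5 has no redundancy left:

        def p_second_failure(survivors: int, annual_rate: float, window_h: float) -> float:
            """Chance that another of `survivors` drives fails during the
            hours in which the degraded array has zero redundancy."""
            hourly = annual_rate / (365 * 24)
            return 1 - (1 - hourly) ** (survivors * window_h)

        # Degraded 8-drive RAID 5 (7 survivors), 3% annual failure rate per drive:
        print(f"{p_second_failure(7, 0.03, 4):.4%}")   # hot spare: ~4 h automatic rebuild
        print(f"{p_second_failure(7, 0.03, 72):.4%}")  # no spare: ~3 days to a manual swap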
  • by clarkc3 ( 574410 ) on Thursday December 05, 2002 @12:13PM (#4818654)
    SCSI would have added nothing to the performance of the system except an additional 60% to the cost.

    Consider that the seek time on those 160GB IDE drives is around 9-12ms, compared to an IBM 146GB SCSI drive's 4.7ms; 133MB/s vs. 320MB/s burst; 7,200 vs. 10,000 RPM. And the thing most businesses love: a 5-year warranty for SCSI vs. 1 year for the IDEs.

    Once Serial ATA comes out I think you'll see even more IDE-based RAID being used.

    In workstations, yes; in high-usage servers, no. Even in the small department I work in, we'd rather pay 60% more for SCSI and get a 5-year warranty and proven long-term reliability.

  • by mccormick ( 40772 ) on Thursday December 05, 2002 @01:40PM (#4819412)
    Actually, the large 8MB-cache, 7200RPM, 100GB+ drives from Western Digital and IBM, as well as others, are still cheaper than equivalent SCSI. Besides, while the warranties may not be as long, SCSI and IDE drives are usually just about identical with the exception of the disk controller, which is where the distinction between IDE and SCSI is created. Funny, I've personally enjoyed my ATA/66 RAID-1 array for about the same amount of time. Enjoyed it most thoroughly, in fact. The price tag was really enjoyable as well.

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...