Hardware

Ask Slashdot: IDE Software RAID?

Edward Schlunder asks: "After setting up Software RAID on a SCSI system at work, I want to do the same at home for fun. Call me crazy, but I'm just completely geeked up about this after seeing it working. The Software RAID documentation says that each hard disk should be on a separate IDE cable and that RAID5 requires at least 3 hard drives. I want to use my two existing IDE hard drives and get the large, fast, and cheap IBM IDE ATA/66 Deskstar 22GXP hard drive to make up the third..." There's one small problem though.

"My motherboard only has two IDE ports. So, my question is, what IDE controller card can I get that satisfies the following:

  • Supports Linux (obviously!)
  • High speed, preferably ATA/66 and PCI
  • Lets you use multiple controllers in one system (that is, it can co-exist with the onboard IDE controller on my SuperMicro P6DBE motherboard)

Please refrain from suggesting that I should just use SCSI -- the goal here isn't absolute greatest speed and reliability, but a cheap way to teach myself more about RAID5 and provide a test system to blow things up on without causing users unnecessary grief ;-)"
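For reference, here is roughly what the /etc/raidtab for the three-disk RAID5 he describes would look like, assuming the 0.90 raidtools that come with the raid patches; the device names are illustrative (two drives on the motherboard channels, plus the new Deskstar as master on an add-on card's first channel):

    raiddev /dev/md0
        raid-level              5
        nr-raid-disks           3
        nr-spare-disks          0
        persistent-superblock   1
        parity-algorithm        left-symmetric
        chunk-size              32
        # existing drive, onboard channel 0
        device                  /dev/hda2
        raid-disk               0
        # existing drive, onboard channel 1
        device                  /dev/hdc2
        raid-disk               1
        # new Deskstar on the add-on card
        device                  /dev/hde1
        raid-disk               2

After that, 'mkraid /dev/md0' builds the array (you can watch the background reconstruction in /proc/mdstat), then mke2fs it and mount as usual.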

  • The RAID5 module of the Vinum driver is not distributed with Vinum - likely because the author wants to maintain a more restrictive license. While this is good for the author, it's bad for FreeBSD. This also shows the fundamental problem with BSD software: arbitrary 'valuable' chunks can be taken off and distributed 'separately'. The Linux RAID0/RAID1/RAID4/RAID5 driver is completely GPL. (AFAIK maintained by some RedHat guy) The Linux software RAID driver also appears to be much more feature-complete than Vinum: background reconstruction [no need to wait for the RAID array to reconstruct before fscking/mounting the filesystem], hot sparing and hot-add/hot-remove are supported. According to the configuration options the Linux RAID5 driver also has an MMX checksumming feature - this reportedly speeds RAID5 operation up.
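    For the curious, the hot-remove/hot-add cycle with the 0.90 raidtools looks something like the following sketch (device names are hypothetical):

        # kick the failed disk out of the array, then add a fresh one;
        # reconstruction runs in the background -- watch /proc/mdstat
        raidhotremove /dev/md0 /dev/hdc1
        raidhotadd /dev/md0 /dev/hde1
        cat /proc/mdstat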
  • by Anonymous Coward
    That limitation was fixed when EIDE came out; now you can talk to both drives on one controller at once, though apparently it still doesn't do that as well as SCSI does. So if he just wants to play, he should be able to do it with just two controllers.

    Otherwise the Promise PCI DMA/66 card sounds nice.

    -dantheperson
  • by Anonymous Coward on Wednesday June 23, 1999 @02:11PM (#1836118)
    For controllers, Western Digital has URLs at this main link [westerndigital.com]. Click on the "Solutions" link to go to suggested solutions for common problems with UDMA/66. For a prebuilt IDE RAID system, try here [raidzone.com].
  • FreeBSD can definitely have performance advantages over Linux, especially in this area. I brought up this suggestion because I wouldn't limit myself to one good, open OS. If better performance could be had elsewhere, with stability as well, why not? (Note: I am _not_ saying Linux is unstable; Linux is much more stable nowadays than it used to be, although nothing's perfect.)
    I'd like to point out (or rather, I hoped I had already made this clear earlier) that a 'problem' (in this case, getting great performance with little money) can be solved in multiple ways. Until you try all of the (at least free) solutions, are you really done looking? He limited himself by saying "no SCSI", but SCSI is probably still a better idea.
  • Vinum's very good software, but I don't doubt that Linux's software RAID could be as fast, and possibly faster. The main point was that SoftUpdates and a very tunable kernel (NBUF, et al.) could provide a huge boost in performance, and the striping's there. He has a good solution for inexpensive RAID, and a good platform. Why not try something new? I didn't suggest something Linux-specific because I don't want to see people limiting themselves (choose what you like best, right?)
    Vinum does implement RAID-5 in a separate version. As an aside, ccd and other much more primitive striping systems have been doing at least the lowest levels of RAID for years.
  • Was this possibly in the 2.2.X branch? I know that above that, Promise is _very_ well-supported.
  • by Brian Feldman ( 350 ) <green@FreBOHReBSD.org minus physicist> on Wednesday June 23, 1999 @02:02PM (#1836122)
    As long as you have good IDE controllers (no huge bottlenecks), try FreeBSD's RAID/LVM system "Vinum." It would require trying an OS other than the media baby of today, but that's definitely worth it anyway.
    If you _REALLY_ want to see great performance, try FreeBSD using Vinum with SoftUpdates enabled on the Vinum volume.

    (Now just watch this be moderated down for being a troll, because I suggested something different...)
  • The definition I usually see is: Redundant Array of Independent Disks. The Inexpensive has been changed/dropped by a lot of people because RAID in general is everything but inexpensive (just look at hardware RAID racks--mucho $$$).
  • I'm running out of drive space, and I've run out of IDE locations. I've also run out of power connectors, but that's easier to work around (snip, melt, mmm, smell that resin :). I've also got the problem of where to mount an extra drive. As it is, my fourth HD is taped to another, with an old floppy slide cover providing support. Gets a tad warm.
  • Thanks, but after a little ferreting, WD's site takes you here [promise.com].
  • Replacing the drives isn't really a good option. I'd rather add to what I've got. SCSI is actually an option if I can find a cheap PCI or ISA card, as I've got 6 unused SCSI drives (inherited, totalling something over 2 gig I think). I've actually got a card, but it's VLB, and the only VLB slot in a working computer is taken by my video card; the computer with the card has no memory (DECpc, needs 36-bit memory, standard FP memory won't work). The biggest reason I want more IDE slots is so I can replace my aging CD-ROM (SBPro interface, Matsushita(?)).

    I've got old h/w coming out my ears, but I can't use it for various reasons :(. However, I've had fun getting things going :)

  • Actually, what he's proven is that you can't allow division by zero.


    ...phil
  • Posted by The Technical Revolutionary:

    In my experience you should not have a problem adding a third IDE controller, as long as you have the resources - especially if it's PCI. Linux does have to support it, though, but I think that Linux supports tertiary and quaternary IDE controllers. Basically, plug it in and it should work.
  • We don't know of any Linux RAIDframe ports; we've occasionally toyed with the idea of doing one, but we really don't have time to do it now (our efforts are focused on the NASD [cmu.edu] project these days). If you're interested in porting it to Linux, though, please let us know.
  • Mainly because you provide ample support for my statements that FreeBeasties are just as nasty as any "Linux bigot" around.
  • > Correct me if I'm wrong, but I don't think I am.

    Incoming correction.

    > The IDE standard allows for only two IDE controllers in one system. Newer motherboards have both of the allowed controllers built in (hence the four IDE devices). If you plug in another PCI controller it will not work because of the two controllers already operating in the system.

    The BIOS can only deal with (currently) two controllers. However, the IDE spec allows at least 4 controllers to be present. Whether this conflicts with anything else in the system is another matter.

    > Everything I have read on RAID (I'm not an expert but I have read a lot), has said it will not work on IDE systems. Here are the two reasons I can think of;
    > 1) RAID tries to write across at least 3 drives at once. Exactly what it writes to each drive depends on which type of RAID (0, 5...).

    s/3/2/ -- RAID 0 and 1 require only two drives. RAID 4 and 5 require at least 3, and RAID 5 performs better with more drives.

    > This is no problem for SCSI drives on the same cable, because each drive operates separately. On IDE the drives work in the master/slave fashion and the slave is truly dependent on the master and must wait for the master to respond. Because of the master/slave issue, each drive would need to be on another cable. Which brings up problem...

    There is no reason in software RAID that each drive needs to be on its own cable. You may not get as good speed, but it is possible to run software RAID 0 or 1 on a single-controller system, while you need 2 controllers for RAID 4, 5, or "10" (RAID 0/1 combined).

    > If I'm wrong, I'd like to know, because I wouldn't mind running RAID on IDE's too.

    It can be done, but it's not recommended.

  • I could set the IRQ to 13 or 11

  • SIIG makes decent and inexpensive SCSI controllers. Not only that, but they were cooler than most back in the day and spontaneously decided to provide in-house-developed drivers for the 2.0.x kernels.

    I've got a couple SCSI HDs and a burner running off mine. Works like a charm, no coasters on burns and the card cost maybe $80cdn from London Drugs when I got it.

    --
    rickf@transpect.SPAM-B-GONE.net (remove the SPAM-B-GONE bit)

  • Do you have those kernel modifications handy? I was investigating using two Ultra33s to build a cheap RAID server about a year ago, but never found time to get multiple Promise cards working. Has support for multiple Ultra33/66 cards found its way into the standard 2.2 kernel tree?
  • As many people have already noted, using the Promise Ultra33 is an excellent way to approach Software RAID with IDE drives. There is some very useful information at Erik Hendriks' website at NASA Goddard:

    http://www.beowulf.org/bds/disks.html [beowulf.org]

    They found that most of the "dual" channel IDE ports built into motherboards are not truly independent because of a shared buffer in the controller. This is a "feature" of the IDE controllers used and effectively limits the collective performance of the two IDE channels to roughly that of a single channel. The IDE channels on the Promise board are truly independent though. As their benchmarks show, placing one drive on one of the motherboard controllers and one on each of the Promise controllers yielded nearly three times the disk bandwidth of a single channel. Of course, this is for data striping, not RAID5, but the principle is the same.

    For those interested in building a RAID5 server, this configuration makes a lot of sense. Use two disks on each channel of the Promise and two disks on the motherboard controller... five data and one parity, and roughly 3x the bandwidth of a single drive.
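    If you want to check whether your own channels are truly independent, a crude test (assuming hdparm and GNU dd are installed; device names are illustrative) is to time each drive alone and then both at once:

        # per-drive baseline
        hdparm -t /dev/hda
        hdparm -t /dev/hdc
        # now read from both drives at the same time; on truly
        # independent channels the aggregate rate should approach
        # the sum of the two baselines
        dd if=/dev/hda of=/dev/null bs=1024k count=64 &
        dd if=/dev/hdc of=/dev/null bs=1024k count=64 &
        wait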

  • IDE has no notion of "disconnect" like SCSI, so the bus is held for the full duration of a read or write; in practice, you only get full bus bandwidth with one device per bus. Technically there's no reason why you couldn't run a RAID5 on three or four IDE devices on two IDE busses, but it isn't practical since you're basically halving your bandwidth per device.

    This shouldn't stop you from playing around, though... I once made a RAID0 with two old 80mb IDE drives on the same bus. It was slower than a single drive, but had twice the capacity. :)

  • I have a RAID 5 system built with 4 SCSI drives. This system is a dual P150 and is my mp3 player for my house. Problem is that when booted with an SMP kernel, it pops once in a while and logs a possible DMA/IRQ conflict. If I boot with a UP kernel, it never has a problem.

    So.. I tried upgrading to the newest kernel 2.2.10 but then I lost my raid device. So two questions here..

    • Is sound not SMP safe?
    • Is the auto run raid specific to RedHat kernels?
  • About a year ago I played with the raid0 and md linux stuff and found that I was able to raid any set of devices. I had scsi and ide drives on the same raid set.

    I remember reading that it didn't even care if you use MFM drives, although that would really slow things down. Since it's software, I don't see any reason why you couldn't raid0 a set of floppy drives... just to see how slow you can make your disk access :) Remember, this is *nix; any mountable /dev should work. It's up to you to decide how silly to make it.
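    Since md works on block devices, a raidtab that mixes buses is perfectly legal. An illustrative (untested) stanza, with hypothetical partitions that just have to be the same size:

        raiddev /dev/md0
            raid-level              0
            nr-raid-disks           2
            persistent-superblock   1
            chunk-size              32
            # one SCSI partition...
            device                  /dev/sda5
            raid-disk               0
            # ...striped with one IDE partition
            device                  /dev/hdb2
            raid-disk               1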
  • Thought nobody would catch on to the head-seek issue.
  • All very true. In my own defence, look again: I said "...might see (marginally) better...".

    A while back I tried raid0 (2D/1C) with EIDE drives on an early UDMA controller and got roughly 1.3:1.0 speedup. Simply put, the drives were a fair amount slower than the controller.

  • by Robert Bowles ( 2733 ) on Wednesday June 23, 1999 @07:15PM (#1836142)
    Sorry for the top-level post. Instead of several 2nd level replies, I thought I'd try to answer a bunch of questions at once...
    1. Do you need 3 channels?
      No. In fact, with 2D/1C (2 disks on 1 ctrlr), you will still get better performance than 1D/1C in most cases. Under general use, you're doing small-size reads distributed across the disk, so the real bottleneck is head-seek. Even with big contiguous-block reads, you'll still notice an improvement.
    2. RedHat Kernels and Autodetect
      First off, RH-kernels are far from stock Linux kernels. Do an 'rpm -qpl [file].src.rpm' on one of their kernel SRPMs and you'll see a bunch of (non-dist) patches. Among them is the raid patch [kernel.org].
    3. Promise Cards
      Support for the new Ultra/66 hasn't hit the 2.2.x tree yet (I think). Check 2.3.5+ for new Ultra/(33,66) support.
    4. Raid 5 on Three Disks
      1. Doable: yes. Advisable: no.
        ( I've never tried it ) I suspect that you might see (marginally) better read speeds, but you might even see degradation on write or mixed read/write performance ( since every write yanks two out of three heads across the platters )
      2. Two small and one big?
        This won't work for raid5, not unless you want most of the large disk unprotected. Consider instead striping (for example) hda3+hdb3==md0, and then making a raid0 or raid1 volume md0+hdc3==md1 (see the sketch at the end of this comment).
        Better yet, get four disks of the same size...
    5. Identical Drive Myth
      For hardware raid controllers, yes, go with identical disks. This is not needed for any kind of s/w raid I've dealt with (linux, disksuite, veritas-vm, xlv). For linux s/w-raid, you should be safe making a raid5-vol by mixing two ide-partitions, a scsi-disk, a loopback off of a file and a few NBD's (so long as they are the same size).
    6. Hotswap-IDE
      Scary, risky and very unwise.
      So I'm not the only one...
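    A sketch of the layered setup from point 4.2, assuming md devices can themselves appear as raidtab components (I haven't tried this; partition names are hypothetical):

        # md0: stripe the two small partitions into one virtual disk
        raiddev /dev/md0
            raid-level              0
            nr-raid-disks           2
            persistent-superblock   1
            chunk-size              32
            device                  /dev/hda3
            raid-disk               0
            device                  /dev/hdb3
            raid-disk               1

        # md1: mirror the stripe against the big disk's partition
        raiddev /dev/md1
            raid-level              1
            nr-raid-disks           2
            persistent-superblock   1
            chunk-size              32
            device                  /dev/md0
            raid-disk               0
            device                  /dev/hdc3
            raid-disk               1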
  • I run it at work, 4 IBM OEM Deskstar 22GXP @ RAID-5, in our NFS server (dual Celeron 450, 256 MByte RAM). 10 diskless clients mount their root from it; works great.
  • I'll probably be playing with vinum soon... But I'd be really surprised if there was any real performance difference between vinum and linux software raid.

    The thing that kills disk performance, be it raid or not, is moving the disk heads, the speed of the disks and the busses, and memory bandwidth in the system.

    Parity calculation, and calculating what block goes to what disk, seem not to be an issue at all with modern CPUs. A PII with Linux raid-5 is able to do parity calculations on several hundred megabytes per second. More than the memory/PCI bus can do anyway.

    However, AFAIK vinum does not yet implement raid-5. Since this guy wanted to play with raid-5 vinum may not be the choice for him. Please correct me if I'm mistaken here.
  • you can use any type of drive on a linux software RAID system. the only constraint is partition size. for instance, if you had a SCSI RAID 5 system, you could have a spare IDE as a failover device.
    at my old workplace at Austin Community College we had no budget, so with an aha2940uw and a couple 4.3gb wide scsi drives they had "just lying around," i set up 2 RAID arrays: a 6GB RAID 0 array which was our main SAMBA share, and a 1GB RAID 1 array for nightly backups (no money so no tape drive...). works great. no problems at all.


    -l
  • According to what you just quoted, the answer would be "no." It says separate cables, not separate controllers.
  • As I recall, from the RAID I looked at in the 2.0.x kernels, it works on a partition level. If you go and use it with two drives on the same IDE "chain" it's gonna be slow (due to IDE's request limitation), but I would think it should still work. This would allow you to screw around with it, but I can imagine a working system would be pretty painful to use ...

    My advice is try it and see. You are gonna buy the extra disk anyway, so it's not going to cost you anything. Maybe even save you from having to screw with another controller if it works fast enough for your needs.

    /dev

  • I tried installing Red Hat 5.2 on a new Gateway 2000 with a Promise 66 IDE card and the installer claimed there were no hard drives in the system.

    I haven't tried the suggestions in the responses, but I just wanted to point out that the Promise card won't work out of the box. If you buy one, make sure you can return it.
  • I recently got a Promise Ultra/66 controller card and it works beautifully under Linux. It's supported under 2.3, but the driver has been backported to the 2.2 kernels (see http://www.dyer.vanderbilt.edu/server/udma/ [vanderbilt.edu]). I use it with a Western Digital 20.4 GB ATA/66 drive.

    The difference in speed while compiling a kernel on this controller/drive vs my SCSI-3 (80 MB/sec) Tekram DC-390U2W controller and a Quantum 9.1 GB SCSI-3 drive is minimal.

    The EIDE drive runs much quieter, cooler, and costs only about 1/3 as much per megabyte.

    Daniel Butler

  • striping - yeah, you waste one disk, but read performance is awesome. On a home system, so long as you back up your stuff, the fact that it increases failure possibilities shouldn't be too much of a problem. I would rather do this than RAID5.

    Of course, if fault tolerance is actually what you want, and not speed, then maybe you should just try simple mirroring.

    I do not think you can do RAID5 because there is no way to have more than 2 ATA controllers that I know of.
  • > The drives should be identical for best results.

    Actually, the drives _have_ to be identical for mirroring or RAID-5. Only striping allows disparate disk sizes. To elaborate for RAID beginners:

    RAID-5 uses a number of disks (minimum three) to store data. The bits are spread across the disks, with one last disk acting as a 'parity' disk (in real situations, parity information is spread across the disks). When data is written to the disk, the parity bit is calculated and written to the final disk. When data is read (in normal circumstances), the parity information is ignored and the normal data read off.

    When the parity disk fails, nothing special happens except that the parity information is not stored. When another disk fails, performance dies, as reads have to be 'reverse-engineered' from the parity information. Once the disk is replaced, the information is rebuilt from the parity data.
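    The parity math is nothing more than XOR, which you can poke at from any shell; the byte values here are made up:

        $ echo $(( 0xA5 ^ 0x3C ^ 0xF0 ))   # parity of three data bytes
        105
        $ echo $(( 105 ^ 0xA5 ^ 0x3C ))    # XOR parity with the survivors...
        240                                # ...rebuilds the lost byte, 0xF0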

    Mirroring is simply writing the data to two places and reading from a random disk; if one disk dies, data is simply read from the second disk. Since there is no calculation involved (the data is simply written to two places), reads and writes are much faster. However, this is more expensive in terms of hardware required.

    > Of course someone is going to suggest "put 3 2gig partitions on the 6 gig and 11 on the 22 gig and now you have 28 gigs!"

    As you've pointed out, this is a bad idea! The reason for using RAID-5 is reliability, and dumping 11 partitions of your RAID volume on one disk is asking for trouble! You will gain a little reliability if your disks tend to get bad sectors, but that's about it. Since RAID-5 will slow down your disk writes (and, to a lesser extent, reads), you only ever use it for reliability in the face of disk failures.

    In addition, that 22GB disk is going to slow down the rest of the system; writes and reads will require data from the entire length of the disk, which is not good.

    On a home system, it's not really worth the headaches and performance hit for it. Just take regular backups and you'll be ok. If you really want extra reliability, use mirroring; it's a lot faster.
    --

  • As I understand it, the reason for it being slow has something to do with being on the same IDE channel as the CD-ROM drive. It is for this reason that several usually reliable sources recommend putting your CD-ROM drive on your secondary IDE channel (IRQ 15) and your "C" drive on the primary (IRQ 14), even if those 2 are your only IDE devices.

  • > RAID: Redundant Array of Disks.

    You missed the I out. It's generally Redundant Array of Inexpensive Disks, although I've seen other words for the I.
  • Well, there's the IRQ holders / PCI steering thing for using multiple devices on a single IRQ; I've got a Matrox Millenium sharing IRQ 3 with my Yamaha sound card and a TNT sharing with my PS/2 mouse, and things work fine. And yes, I am using Windows on my game computer with multiple monitors. The only trouble I can see with the IRQ sharing is that when I disable the second monitor, the mouse quits until a restart. That wouldn't be a problem if I never had to disable the second monitor, but Q3 seems to not like the dual situation.

    matguy
    Net. Admin.
  • I tried one of those Promise cards "waybackintheday"(tm), when Win95 was still the standard for me and cards like the Ultra33 had just come out. I bought one because I had an abundance of 1.2 gig IDE drives just sitting around, I wanted to use as many of them as possible, and I hated putting a bunch of drives on the same cable. First, it was pretty finicky as to what would fit where, but I figured it all out and things ran... well, for a while. After a few weeks I started getting protection errors constantly -- yeah, usually on startup, but in Win(whatever) you restart a lot, so it was a pretty big annoyance. So I ended up returning it and then just relied on SCSI for large numbers of drives.

    Something else I had wondered about: would it be possible to have a small adapter type of thing to put an IDE drive on a SCSI chain? Now there you would have something, as long as it was pretty low priced. I do think it would be feasible, and it would solve the problem. All it would take would be some kind of translation and an autodetecting chipset; it should work real well in an external case. It's something I've been thinking about since my Mac days back in school: we had these Macs with these little 200mb drives, and our PCs over there with their 800mb drives that cost less. I always thought it just made sense, but no one in the industry would want to do it, I'm sure.

    matguy
    Net. Admin.
  • > IDE has no notion of "disconnect" like SCSI

    Ah, but it does. In fact the IBM disks mentioned actually implement the ATA-4 disconnect/reconnect and tagged command queueing (depth 32). Kudos to IBM.

    This would be a great addition to the Linux IDE driver.
  • A few weeks ago, I installed RAID-0 across two IDE disks--one 8G the other 6G. Other than booting (doesn't load up at boot--have to do it manually), it's performed flawlessly (but slowly).

    I've RAID-0'd the two disks on one controller (master-slave) so it is SLLLLOOOOOOOOOWWWW. The other controller runs the system (RH 6.0) OS disk. It may be slow but it's still functional which is what I was shooting for.
  • Unless your card is something really, really odd... that is two controllers. Actually, probably only one controller, but with two "channels". Each 40 pin header is one channel and can support two devices. I've never seen otherwise, and I've seen lots of oddball hardware. P.S. If that is a CMD640 chip on it, consider replacing it.
  • we did have IDE RAID cards from Promise here at my work.

    we are out of them at this time, but here is the ad for them:

    http://www.compgeeks.com/cgi-bin/details.asp?cat=Drives&sku=205-1033
  • Back when I had my 486, I had a Promise caching IDE controller card (2 IDE channels per card for a total of 4 devices). It would allow you to have 2 of those cards in your system (or your onboard controller plus the Promise board). Not exactly sure how it worked, but when I had just the one card in my system it was fast! You could put up to 16MB on the card (I had 4MB and was quite happy with it). Not sure if Promise still makes it, but worth looking into!
  • Yes, having 2 IDE controllers, each with support for a master and a slave, will give you 4 drives, but you can't access all 4 of them simultaneously; having all of your drives accessible simultaneously is probably very desirable in a RAID system.

    One IDE controller can only control one disk at a time, so you can't read a file from two disks simultaneously, and you can't actually read from one disk and write to another at the same time, even though it sort of looks like it when you do something like copy a file from one drive to another, because the task is being swapped between disks so quickly.
  • by GoRK ( 10018 ) on Wednesday June 23, 1999 @02:17PM (#1836163) Homepage Journal
    To clarify the point, software RAID under Linux (any mode) does not absolutely require that each hard disk be on a separate controller. I have had plenty of success using software RAID on drives on the same controller. I haven't seen system performance bog down too much with this configuration, either. On the newer bus-mastering Ultra33/Ultra66 controllers, CPU time for IDE access isn't really as big a problem as it used to be. So, if you're just talking three drives for a test machine, I don't know that the extra expense for a slick PCI IDE controller is going to be all that justified. Try it with your onboard controllers and then upgrade if you decide you need it.

    Another question is this: Is there any support in Linux for IDE Hardware RAID controllers like the Promise FastTrack, FastSwap Pro, or SuperTrak? Obviously, Hardware IDE RAID solutions are much less expensive than traditional SCSI RAID controllers and drives and can offer comparable performance on smaller workstations or smaller workgroup servers.

    ~GoRK
  • As far as I know, with onboard IDE controllers, it can only be writing to or reading from one of the drives at a time. So in effect, yes, he would need 3 different controllers to be writing to all 3 drives at once (which is what makes RAID so fast).

    8Complex
  • Be careful with this one or that fryguy might be visiting you sooner than you think ;)

    The safest way to handle an IDE hotswap is to unplug the power first and let the drive totally spin down, and then unplug the data cable. When powering a hotswapped drive back up do it the other way - data cable then power cable. Never mess with that data cable if the drive is running. Unplugging the data cable first can cause bad things to happen, or so I have been told. We had a hardware course at the college I attended. The guy who was teaching it really knew his hardware and that was the way he recommended doing it. He explained why but it has been too long... something about toasting the controller.

    A neat trick based on this is hotswapping for data recovery. If you lose a hard drive to a bad ondisk controller and you have another identical hard drive, boot from the good drive, then follow the above steps to swap them - chances are that you can get your data back this way unless the controller on your bad drive is really fried.

    Just my $0.02
  • He said 3 controllers. Having RAID with multiple drives on the same IDE controller is a waste. Unlike SCSI, IDE can only access one drive on a controller at a time.

    The point of RAID is to read from multiple drives at once. Having those disks on the same IDE controller means you only read from 1 at a time, basically killing any speed benefits from RAID.
  • Actually, I *believe* you can have 3-4 IDE controllers on a single mobo. Most only have 2 for economic reasons, and the BIOS only controls the 2 on the mobo. There are adapter cards that can be plugged into the motherboard, which may have separate BIOS setups (like SCSI), I'm not sure.
  • I do think there is a limit in memory mapping or bus mapping or something silly like that. It just happens to be 3 or 4. But I could be wrong or misremembering.

    Also, you run out of IRQs quickly, but I don't recall if an IRQ is used per drive or per controller. I think things like Promise's IDE HW RAID use some form of IRQ overlays, using one IRQ for multiple drives/controllers.
  • Typical bigot Freebsd user response.

    And not just that, a developer too :) .. ooh.. get the big guns out to face the Linux mob :D

    There's SOOOOOO much trolling about whose OS is better than whose .. give it a rest, ppl, and appreciate life and love and happiness and live in harmony together ...
  • Every BIOS I have ever used only supports two controllers (each with a master and a slave), each using its own IRQ. The offboard cards have their own BIOSes to detect drives and know how to talk to the drives. You do not have to disable the onboard IDE; they work fine together.

    My Ultra/33 has two connectors and supports up to four drives.

    It has not really been a problem since around 2.0.35; even the dreaded products of M$ will let you use these things. I ran two HDs, a Zip drive, a CD-ROM, a CD-ROM changer, and a CD burner -- no problem. But there was a notable difference in speed when you transferred between things on the same cable.
  • You might try striping (RAID-0) instead. It offers performance advantages like real RAID but does not provide redundancy. It only requires two disks, though more disks give better concurrency (unless they're on the same IDE chain!).

    It's less reliable than real RAID for sure; it's actually even less reliable than using separate disks as separate disks. If one fails, you effectively lose the contents of both.

    Still, for an experimental box, or a box where you care more about performance than data safety (USENET server), it's cheaper than RAID-5.

  • Can you not put the system OS disk and one of the striped disks on the primary controller, and the other striped disk on the secondary?

    The idea is to split things up so that disks you are likely to be accessing at the same time will be on separate controllers. The system disk probably doesn't matter as much as the striped disks, because the frequently accessed stuff on the system disk tends to stay in RAM anyway.

    By the way, do you have to do anything special to keep your system cool with three disks? I had a 1 GB and a 340 MB disk in my system not long ago, and they warmed the inside of the whole box, even with two fans. My single 13 GB is cooler. (I guess 5400 RPM produces less heat than 3600 RPM + 3600 RPM)


  • Places I've worked at commonly throw old Adaptec ISA controllers in the junk bins never to be seen again until someone rips them off. You might want to check Ebay - an ISA SCSI card + older CD-ROM shouldn't set you back that much. There are also newer 'budget' PCI SCSI cards with no BIOS, which I think is OK if you are booting from IDE.
    --
  • I have four IDE drives on 2 interfaces -- they're 17.4 gig Maxtors -- and they seem to work just fine. I think that you do not need to have a separate channel for each drive. As a matter of fact... this is the 3rd machine I have built this way, and they were all built the same way... working.
    So my advice is... if you want to use 3 drives, go ahead. ;-)

    Kevin
  • If you're interested in doing research on raid, you might want to have a look at RAIDframe [cmu.edu], which is a system for prototyping disk arrays. It was added to NetBSD [netbsd.org] last November [netbsd.org]. It includes a simulator as well as a device driver for doing RAID on real disks, and supports levels 0, 1, 4, 5, hot spares, and more. The base code for level 6 and parity logging is also in there, though I don't know how well it's working.

    There's a web page with current notes on RAIDframe on NetBSD here [usask.ca].

    cjs

  • I have the exact same motherboard, and I simply filled up all the on board controllers and needed more hard drive space. I got the Promise Ultra33 card, and it works wonderfully. With it, I can even boot off of hde3! If you want a *lot* of drives, the Promise card is well behaved enough that you can have more than one of them in your system and they won't step on each other. I highly recommend it!
  • A typical motherboard today has two onboard (E)IDE controllers, called the primary and the secondary. The primary controller is usually assigned IRQ14, and the secondary IRQ15.

    Each controller can control two (E)IDE devices, but can only actually read or write to ONE of those devices at a given moment. The first device is called the 'master' and the second device the 'slave'.

    However, the controllers are independent of each other, meaning that you can access a drive on the primary controller while simultaneously accessing a drive on the secondary controller.

    For example, one way to essentially double your disk swapping performance is to put half your swapspace on a drive attached to the primary controller, and the other half on a drive attached to a secondary controller. (Note that when 'swapon'ing the swapfiles or swap partitions, they need to be assigned the same priority. 'man swapon' for details.)
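    For instance, in /etc/fstab (partition names hypothetical), giving both halves the same priority should make the kernel stripe swap across them:

        /dev/hda2   none   swap   defaults,pri=1   0 0
        /dev/hdc2   none   swap   defaults,pri=1   0 0

    then 'swapon -a' activates both.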

    Hope this helps.

    mdm
  • Here [agnhardware.com] is a review of the FastTrak IDE RAID controller on AGN Hardware [agnhardware.com]. It costs between $60 and $80, and in a RAID 0 it did increase performance a bit. It supports RAID 0, 1, and 0+1. Not sure if it'll work in Linux, though.
  • I got the impression he didn't care much about overall throughput, he's just trying to have fun.

    And remember: He'll still get the fault-tolerance advantage of RAID, no matter how he hooks up the drives. Heck, he could even use floppy drives. ;-)

  • Actually, definitely not true; recent kernels come with RAID, you just have to check it as something you want to compile into your kernel.

    Actually, in my experience Red Hat 6 didn't come with RAID, but then it might have been a module that I didn't see.
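    From memory, the 2.2-era config options you want look like the following; the autodetect/autorun option comes from the raid0145 patch rather than the stock tree, which would explain arrays that only auto-start on Red Hat's patched kernels:

        CONFIG_BLK_DEV_MD=y
        CONFIG_MD_LINEAR=y
        CONFIG_MD_STRIPED=y
        CONFIG_MD_MIRRORING=y
        CONFIG_MD_RAID5=y
        # only present with the raid0145 patch applied:
        CONFIG_AUTODETECT_RAID=y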
  • I have an ATA card that has 2 cable plugs on it. Does this count as one controller or two?
  • The only thing Linux needs the BIOS for, in terms of hard drives, is booting the kernel. After you have the kernel in memory, it takes over and doesn't bother to use the (often cruddy) code in the BIOS.

    At home I've been running an old Digital 386DX/16 workstation as a server for a while. It initially came with an Adaptec SCSI controller and a 40MB SCSI hard drive. It used the SCSI BIOS to boot the HD; the system BIOS itself could only select a floppy and the SCSI drive as boot options. Right now, I don't have any SCSI hard drives in it at all. It has 2 small IDE drives and a Hardcard, with a copy of the kernel in the floppy drive. It boots off the floppy and then uses the IDE drives, despite the fact that the BIOS doesn't support them.

    The only thing IMHO that could prevent you from having eight IDE controllers is a lack of IRQs, or some other limitation I don't know of, placed by the PCI bus.

    This is explained in more detail in the Large-Disk mini-HOWTO [unc.edu]

  • > Actually, I *believe* you can have 3-4 IDE controllers on a single mobo. Most only have 2 for economic reasons, and the BIOS only controls the 2 on the mobo. There are adapter cards that can be plugged into the motherboard, which may have separate BIOS setups (like SCSI), I'm not sure.

    I believe you are correct. As a matter of fact, I would bet a chunk of pretty polly that the number of IDE channels allowed has nothing to do with the mobo or the BIOS. It is merely a function of how many IDE channels are on the mobo and how many are on your "additional" controller.

    Of course having more than one "extra" controller in a box might be a bit tricky...

  • Just needed to tell the community: IBM has released a few drivers for their ServeRAID controllers (model I, II, 3H and 3L) under GPL. It can be found here [ibm.com]
  • It slows down because on the IDE interface, once a command is sent out, NOTHING else can happen until that command is acted upon. That means that with two drives, when a command is sent to one drive, the other one can't do anything until the original drive responds to its request. Admittedly, it's very slight, as it's only milliseconds, but that is your answer.
    That is why SCSI is better (at least as far as multiple drives are concerned): SCSI can bunch up the commands, so all of the drives can take care of their requests whenever they get them, and send them back whenever they feel like it, without slowing down the rest of the system.

    Check out Thresh's FiringSquad [firingsquad.com]'s IDE vs. SCSI [firingsquad.com] review for the complete info. (It goes very in depth.)

  • And you're helping whichever OS you use... Of course, you're an "Anonymous Coward", so you are probably just one of the 'net's great trolls... Jeez, if you REALLY want a troll, how 'bout this one?


    linux sucks, Linus sucks, ESR sucks, Windows rules forever!!!!
    Hmmm, I wonder if I can set the record for lowest score on /.? This should EASILY get moderated down to a -2 or -3.

    [rant on] But seriously, the Linux community is in danger of falling into the same trap that the Macintosh community fell into. Fans of the system are becoming so rabid in their fanaticism that they take offense at any slight against it, even if it's true. Take the [ominous music here] infamous Mindcraft survey; other, independent sources (Ziff-Davis -- maybe not the greatest of sources, but still independent) have confirmed that under the testing conditions supplied, Linux really is slower than Windows NT Server. But do most Linux users sit down and say, "Hmmm, well, that's a surprise. Now how do we go about fixing Linux so it is faster?" No, the vast majority of the posts on Slashdot were to the effect of "Mindcraft is evil, they must be burned at the stake for heresy!"

    Remember, in your religious pursuit, don't go so far as to refuse to accept facts, just because they go against your beliefs. Personally, I think that the worst at this is none other than good 'ol Eric S. Raymond. Yup. He is the Rev. Falwell of the Open Source movement. Fine, fine, he did plenty of good things, but he should stick to coding, as he does not make a good spokesperson.

    [rant off] Remember, we should not only be open source, but open minded.

  • WARNING: This is from personal experience only; take what I say with caution and don't try this unless you have experimental hardware. You have been warned.

    I have done hot swapping of ide drives at my work place. I didn't want to shut the system down (was erasing about 30 ide drives and rebooting would have been a pain). As far as sw goes, linux can reprobe for ide drives provided you compiled ide as a module (there goes booting, but this was an nfs root booting from floppy). I hear there is an ioctl() you can call to have it do the same thing. Basically, just rmmod ide-probe and insmod ide-probe.

    Now as far as hardware goes, this has always worked for me but I won't be responsible for someone trying this on their hardware and frying it! The way I have always done this is to plug in the power cable first and let the drive spin up (this goes for all ide and scsi drives). Once they have spun up, plug in the ide cable. Make sure you plug it in right! rmmod and insmod the ide-probe and you should be in business. I did hot adding of a cdrom to a linux server today (scsi bus) and didn't have any problems, had to reboot since the sync rate was set in the scsi bios to 5mb/sec instead of 20.

    WARNING: You could fry your drives. I have been lucky, and I'm waiting for the day when the fryguy comes =)
  • RAID5 is not striping and mirroring.
  • One IRQ per controller, typically IRQ 14 for IDE 1 and 15 for IDE 2. If you're looking for spare IRQs, try 10 and 11; they're usually SCSI controllers 1 and 2.
    -Ted
  • Promise (and most likely other companies) makes a port expander card. This is an older card, but it still does the job: EIDE compatible, no UDMA on this one. This will give you 4 IDE channels (most modern motherboards have 2 built in; this gives you an ADDITIONAL 2 channels). It can be found here [promise.com]. Note: This is an ISA card.

    Promise also makes their Ultra33 expander card. This card supports UDMA33, and once again, adds an additional 2 channels. It can be found here [promise.com]. Note: This is a PCI card.

    For those who really want speed, once again, Promise comes through with their Ultra66 expander card. This card supports UDMA66, and, like their previous cards, adds 2 channels, leaving your original 2 free for other devices (or more hard drives). It can be found here [promise.com]. Note: This is a PCI card.

    By giving your machine 4 IDE channels, you will have the option of connecting up to 8 IDE devices, including hard drives, cd-rom drives, and the like. You should (if I'm thinking correctly..) be able to read/write from 4 of these devices simultaneously (one device from each channel). This is probably what the HOWTO or whatever is talking about (needing 3 controllers/channels/whatever). Accessing 2 devices on the same channel will be somewhat slower.

  • Have you tested the RAID5 in a crash situation (for example, removing one of the disks from the system to simulate a crash)? I am very interested in the way the system recovers.

    Stefan.
  • Personally, I use a Promise Ultra33 [promise.com] controller and two 8.4 gig Western Digital drives under FreeBSD's vinum (striped) with great results. I picked up the controller for $26 at a computer show recently. On a single channel (master/slave configuration), performance was about equal to a single drive. With each drive set up as the master on each of the Promise's channels, I can get ~13 M/sec (dd-stone) continuously. I had to hack the driver a bit on FreeBSD, but Linux didn't seem to have a problem with it.
  • If I understand your description correctly, this is one controller. Each EIDE controller can control up to four devices, two primary and two secondary. The primary devices exist on one cable and the secondary on another, therefore you can communicate with only one primary and one secondary device at any particular time.
  • Actually, my understanding is that you can't write to two disks on the same /channel/ at the same time; however, most modern EIDE controllers have two channels, each one capable of controlling two drives. Therefore, it is possible to write to two disks, provided they are on different channels.
  • The newer Intel chipsets include the PIIX4 in the south bridge, which doesn't suffer from the shared-buffer problem mentioned in the Beowulf document. Most Pentium boards (ie TX chipset) and pretty much all P2/Celeron/Slot1/Socket370 boards have independent IDE channels. They also support UltraDMA, so the CPU isn't loaded down managing the drives. If you can limit yourself to one drive per IDE channel, there isn't much of a performance penalty in using IDE over SCSI. It can easily win many bang-for-the-buck contests.

    Read-ahead in the drive firmware can sort of supply some of the benefits of SCSI's bus disconnection feature, too. ATA-33 and now ATA-66 can manage transfers from the drive's buffer cache to the system at more than twice the media rate, so the dual-drive channel isn't hurting for bandwidth... you just need to keep it from stalling. If the RAID chunk size is fairly small, say 4K, then while you're transferring that data from drive 0 to memory, drive 1 will be filling its buffer with the next 4K from automatic read-ahead. This means on purely sequential transfers the combined transfer rate approaches the sum of the media rates. The bus will still stall on random seeks, of course... it's still no SCSI subsystem. But it does well enough to surprise a lot of SCSI fans.

    I'd rather have a new SCSI drive than a new IDE drive, but I'd take a new IDE drive over a SCSI drive a generation behind.
  • Correct me if I'm wrong, but I don't think I am.

    The IDE standard allows for only two IDE controllers in one system. Newer motherboards have both of the allowed controllers built in (hence the four IDE devices). If you plug in another PCI controller, it will not work because of the two controllers already operating in the system.

    Everything I have read on RAID (I'm not an expert, but I have read a lot) has said it will not work on IDE systems. Here are the two reasons I can think of:
    1) RAID tries to write across at least 3 drives at once. Exactly what it writes to each drive depends on which type of RAID (0, 5...). This is no problem for SCSI drives on the same cable, because each drive operates separately. On IDE the drives work in the master/slave fashion, and the slave is truly dependent on the master and must wait for the master to respond. Because of the master/slave issue, each drive would need to be on another cable. Which brings up problem...
    2) Only two controllers work in one system, but three are needed to avoid slaving any drives. Three cables, three master drives, three controllers.

    IMHO it can't be done.

    If I'm wrong, I'd like to know, because I wouldn't mind running RAID on IDE's too.
  • If you're just looking to screw around with it, do you need to have three controllers, or is that just to get better performance out of it? What happens if you put two of them on one controller and the third on the second?
  • Umm... may be time to consider an external case or buying bigger drives to replace the ones you have... or maybe a cheap SCSI controller? Sounds like you're trying to do more than IDE was really designed to do.
  • Does Linux support this card?
  • BIOS can only deal with (currently) two controllers. However, the IDE spec allows at least 4 controllers to be present. Whether this conflicts with anything else in the system is another matter.

    Linux doesn't need drives to be on a BIOS-recognized controller, does it? I thought it talked directly to the controller... I've been running more (and larger) drives on my 486 than my BIOS recognizes for a long time. I thought this was possibly because Linux does the conversing on a lower level (with my newer controller)... maybe not.

    If so, then provided there is no limitation in the (E)IDE specifications as bkosse thought, and you can get the IRQs, etc. to agree, it should be possible to address the drives... Perhaps this is different on newer systems (PCI, etc.), but I thought the controller would just be seen as another adapter by the BIOS. I know that's how it worked before they started sticking them on motherboards...
  • Actually, you can have FOUR IDE controllers on a motherboard (1-Primary, 2-Secondary, 3-Tertiary, 4-Quaternary). I had four cards in a system one time. The only problem is finding a controller that will allow you to set up the interrupts and the I/O address. Sometimes you could 'stack' the interrupts on top of the ones for the primary and secondary IRQs (14/15).

    You may be able to find controller cards that came with CD-ROMs that are dedicated tertiary controllers. I found that they no longer make these boards (too bad). You also had to get special drivers or OS support for them.

    The BIOS will only recognize the primary and secondary controllers. It used to be that it would only recognize the primary controller. They made boards with on-board BIOS extensions to recognize the secondary and/or tertiary/quaternary cards. Then you could add hard drives and/or CD-ROMs to any system. (Anyone remember the 40MB HARDcards?)

  • Two words: pipe strap

    You can get it in rolls in the plumbing department at a hardware store. It is cheap, metal, has holes about every 1/2 inch, and can be cut with a pair of side cutters. I used it to hang two extra drives in my tower system.

  • I'm running a software IDE raid on my box, Pentium 233 MMX, and it runs fine, if a little slow. Using the default 2 channels built into the motherboard, I have PriMaster as the boot drive and the PriSlave+SecMaster+SecSlave as the Raid5. I'm sure it would be faster with separate controllers / cables for each drive, but I was looking for redundancy, not speed.
  • http://www.pdl.cs.cmu.edu/RAIDframe/ is the URL for CMU's RAIDframe project.

    Dunno whether it works with Linux, but it's already been integrated into NetBSD. (http://www.cs.usask.ca/staff/oster/raid.html)
  • I was actually talking to my friend about this the other day, and he said that when he researched it, there were limitations in the x86 architecture that allowed for only 4 controllers, and he had it working on his machine. That is, two on his motherboard and a card with two. I think he did it because he had 8 drives they were throwing away at work, and a card. So he threw them in an old server case and loaded Linux on it; it worked fine, apparently.
  • > X=Y;X^2=XY;X^2-Y^2=XY-Y^2;(X+Y)(X-Y)=Y(X-Y);X+Y=Y; 2Y=Y;2=1

    This is all true except for the "2=1" thing if X=0 and Y=0.

    i.e. you have proven nothing.
  • I don't think I know of a PCI IDE controller that doesn't work in Linux, actually. The controller should have its own BIOS on it and show a little init screen after your system's original BIOS does its thing. The only problems I've had are that lilo doesn't know how to write the MBR on it, and the 2.2.x kernel option to have the drives on it show up as primary/secondary instead of tertiary/4th doesn't work; but it doesn't seem that this will cause a problem for what you're trying to do.

    ~Kevin
    :)
  • How hard are external cases to use? I've heard of them for SCSI, but will they work for IDE too? Does it just require some special cabling?
  • AFAIK you can only have 2 IDE channels on a motherboard. You'll notice in the BIOS that there is only room to identify 4 drives - Primary Master & Slave and Secondary Master & Slave. I've never come across any hardware which allows for three (or more) channels.

    That being said, there could be some weird freaky mobo which allows it... I've had experience with several types & versions of SuperMicro boards (they are THE BEST(tm) IMHO) and I'm fairly sure none of them will support it -- it's a "you have to disable the onboard IDE channel to get an offboard one to work" type idea...

    Perhaps someone knows of a mobo/bios combo which allows this? Or has a hack around it? heh, feel like coding in hex? (or whatever they make BIOSes out of these days)
  • And ABSOLUTELY NO LINUX SUPPORT!
  • Yes, you can do more with SCSI. But I see good performance on modern IDE in the only combo I have tried, using 2.2.10 kernel, BX onboard UDMA, and Maxtor 92048D8 (20G/7200RPM/1Mcache) drives. It seems a shame to ignore the potential of adding fast, inexpensive storage on channels already available. Note that I only tried one drive per channel, so I don't know what effects lack of scsi disconnect and other features might otherwise cause. I suspect the 20MB/s+ reads from the Maxtors would make the 33MB/s UDMA a bottleneck with more than one per channel anyway.
  • A Linux driver would be nice. I didn't have much luck with the card in a multiprocessor NT 4.0 situation, though; two different versions of the FastTrak card, actually. Despite all efforts over about two weeks, the filesystems quickly deteriorated even as NT was being loaded -- with two different motherboards and sets of disks, cables, et cetera. The problem disappeared when only a single processor was used.


  • "Typical bigot Freebsd user response."

    This is what I refer to as 'that guy who brings us down'. You get them everywhere; the best thing to do is 1) ignore the prick, 2) assure he cannot reproduce if possible, and 3) get him kicked off the net, so he bothers people who aren't trying to do something important.

    As for the response "appreciate life and love and happiness and live in harmony together ..."
    If we do that, that means we can't do any micros~1 bashing anymore.. :(
  • There are three reasons to use RAID:

    1. Reliability
    You need to buy another controller and get disks as similar as possible. Use RAID 5 or 1 (mirroring).

    2. Performance
    Buy another controller (as MB controllers aren't independent (as mentioned)) and run RAID 0 _and_back_it_up_to_tape_ for fastest reads/writes, or RAID 1 for faster reads.
    Again, get similar disks, as you will be limited by the weakest link in the chain.

    3. Just want to play/can say you have RAID
    If you can't justify buying a controller/disks (read: no real reason to run RAID), do Linear Append on two disks on your "separate" IDE cables (see the sketch below).
    Because of the way the ext2 FS statistically distributes data across the disks, you should get slightly better performance if both disks are reasonably fast (don't do this with an old slow dog and a fast new disk).

    The real answers for "the best RAID setup" depend on exactly what you want to do with it. E.g., most web servers want fast reads and don't care too much about writes, vs. production database servers, which want fast reads and writes but care most about data integrity (RAID 5 or 0+1).

    or in mantra form:
    If you want performance, stripe it; if you want reliability, mirror it; if you need space, append it.
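    An untested sketch of what the linear-append stanza would look like in /etc/raidtab with the 0.90 raidtools (partition names hypothetical):

        raiddev /dev/md0
            raid-level              linear
            nr-raid-disks           2
            persistent-superblock   1
            chunk-size              32
            device                  /dev/hda3
            raid-disk               0
            device                  /dev/hdc3
            raid-disk               1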
  • I'm curious about IDE software RAID as well. Here are a couple bits of info that may be relevant. I haven't tried any of this yet.

    First, http://www.linuxhq.com/doc23/ide.txt
    This claims 2.1/2.2 kernels have:

    > - support for up to *four* IDE interfaces on one or more IRQs
    > - support for any mix of up to *eight* IDE drives

    And further in the document there is info on configuring such a system, which claims that you can run as many as 6 interfaces (3 controllers?):

    > This is the multiple IDE interface driver, as evolved from hd.c.
    > It supports up to six IDE interfaces, on one or more IRQs (usually 14 & 15).
    > There can be up to two drives per interface, as per the ATA-2 spec.
    >
    > Primary: ide0, port 0x1f0; major=3; hda is minor=0; hdb is minor=64
    > Secondary: ide1, port 0x170; major=22; hdc is minor=0; hdd is minor=64
    > Tertiary: ide2, port 0x1e8; major=33; hde is minor=0; hdf is minor=64
    > Quaternary: ide3, port 0x168; major=34; hdg is minor=0; hdh is minor=64
    > fifth.. ide4, usually PCI, probed
    > sixth.. ide5, usually PCI, probed

    For UDMA/66, the only controller I know of is the Promise one. From the Ultra-DMA Mini-Howto:

    > 5.2 Promise Ultra66
    >
    > This is essentially the same as the Ultra33 with
    > support for the new UDMA mode 4 66 MB/sec transfer
    > speed. Unfortunately it is not yet supported by
    > 2.2.x
    >
    > There is a patch for 2.0.x and 2.2.x kernels
    > available at
    > http://www.dyer.vanderbilt.edu/server/udma/, and
    > support is included in the 2.3.x development
    > kernel series at least as of 2.3.3.
    >
    > However to get far enough to patch or upgrade the
    > kernel you'll have to pull the same dirty tricks
    > as for the Ultra33 as in the section above.
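    If you do go beyond the two BIOS-visible interfaces, you generally have to tell the kernel where the extras live. Judging from the same ide.txt, boot parameters of this shape should do it (the base/control ports are the conventional tertiary/quaternary ones quoted above; the IRQs are just examples -- check what your card actually uses):

        LILO: linux ide2=0x1e8,0x3ee,11 ide3=0x168,0x36e,10

    or the equivalent append="ide2=0x1e8,0x3ee,11" line in lilo.conf.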

    You may also want to check out the linux raid mailing list archives: http://linuxwww.db.erau.edu/mail_archives/

    Good luck! Please post your results to the mailing list and/or comp.os.linux.hardware.

    Joel Auslander
    ausland@digital-integrity.com
  • Promise makes a great IDE RAID controller... it's about 200 bucks Canadian. It will allow you to have 4 cards in a system, and you can still use your onboard motherboard IDE drive ports as well. I've used two of these cards, with 6 drives. Wow and FAST are all I can say. The neatest thing: multiple cards. The site link is http://www.promise.com/Products/ideraid/fasttrak.htm -- some stuff from their page:
    IDE RAID 0, 1, 0+1 card
    Supports up to 4 UDMA/EIDE drives
    Up to 25MB/sec sustained data transfers
    Fault-tolerant data protection for entry-level networks
  • the company PROMISE makes an ATA66-compatible IDE RAID controller card called FASTRACK66 (http://www.promise.com). like the original FASTRACK card, it allows you to own yer own RAID 0 or RAID 1 array. pretty neat. only thing is i don't know if it will let you combine ATA66 and ATA33 drives. i wouldn't think so. there is a review of it in the new maximum pc (http://www.maximumpc.com). i have the original hooked up to 4 WD 6.4 drives, and it works flawlessly. i am in the process of building a new box and am salivating over the new ATA66 drives. hope this helps.
