Software RAID (Score:5, Informative)
There, I have a significant choice: use the on-board RAID, or do it entirely in software (e.g. OpenMediaVault)?
If you use hardware RAID -- especially one built into a motherboard -- and the hardware dies, you're screwed. Using software RAID, like the LVM RAID Linux provides, makes your RAID configuration hardware-independent, allowing you to move your disks to another system as you want or need. If you decide to use HW RAID, get a well-known dedicated RAID card that you can replace more easily than a motherboard with built-in RAID.
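The hardware independence the parent describes can be seen in how Linux's mdadm works. A minimal sketch, assuming hypothetical device names and a root shell (mdadm is one common software RAID tool; LVM RAID behaves similarly):

```shell
# Hypothetical devices /dev/sdb and /dev/sdc -- create a two-disk mirror:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# mdadm writes its metadata (superblock) onto the member disks themselves,
# so after moving the drives to a completely different machine you can
# reassemble the array just by scanning for that metadata -- no matching
# controller or motherboard required:
mdadm --assemble --scan
```

The key point is that the array description lives on the disks, not in any controller's NVRAM, which is exactly what makes the configuration portable.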
Re: (Score:2)
THIS!
Still, there is a lot to be said for some of the off-the-shelf home RAID boxes. I haven't researched them, but I know a friend who likes the Synology NAS boxes. Most of those types of solutions are really Linux systems underneath, but whether the RAID is really software or hardware, I don't know, so I don't know if you could pull the drives and use Linux to recover your data. That should be a standard question to ask vendors before purchasing, but I doubt many people consider it.
Re: (Score:3)
I have a couple of Dell systems at home that have HW RAID on the motherboard, and when I wondered whether I should use them, my research made me decide against it (for the reasons I mentioned earlier). The documentation seems to indicate that these on-board systems are kind of picky, requiring (or strongly recommending) that you use the exact same disks, etc., and it's unclear whether disks could be moved to a different model of system. Even for a Windows system, using the SW mirroring Windows provides seems like the better choice.
Re: (Score:3)
I'm the admin for some online servers running on older Dell server chassis with Dell "PERC" HW RAID. One motherboard fried (impressive smoke residue inside the lid), and I simply pulled the drives, popped them into an older chassis, and it booted up and ran (Windows). I forget if it was the same PERC controller version (PERC 3, 4, 5...), but it just plain worked.
I've mixed and matched drive brands, sizes, etc. When you create the array (RAID 5 or 6), every member is treated as if it were the size of the smallest drive, so your usable capacity is the smallest drive times the number of drives, minus one drive's worth of parity for RAID 5 or two for RAID 6.
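That capacity rule can be sketched in a few lines of Python (the function name and the example drive sizes are illustrative, not from the original post):

```python
def usable_capacity(drive_sizes_tb, parity_drives):
    """Usable capacity of a RAID 5 (parity_drives=1) or RAID 6
    (parity_drives=2) array. Every member counts only as much as the
    smallest drive; parity consumes the equivalent of parity_drives
    members."""
    smallest = min(drive_sizes_tb)
    return smallest * (len(drive_sizes_tb) - parity_drives)

# Mixed sizes: a 4 TB, two 6 TB, and an 8 TB drive in RAID 5 behave
# like four 4 TB drives with one drive's worth of parity:
print(usable_capacity([4, 6, 6, 8], parity_drives=1))  # 12
```

So mixing sizes works, but the capacity above the smallest drive is simply wasted.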
Re: (Score:2)
HP controllers are also generally good; the metadata is stored on the disks themselves, so if you move them to another similar controller it will detect the array and boot from it.
Some servers actually have proper RAID controllers built in; what you want to avoid is "fakeraid," a proprietary version of software RAID that just happens to be implemented in the BIOS and has its own drivers.
Re:Software RAID (Score:2)
Back in the day, there was no metadata written to the disks. If we had to pull the disks, we had to keep track of where they went or the array controller would treat the array as failed.
I don't know what took so long to get metadata written to disks. In theory the entire array configuration need not exist in the controller's memory at all; it could just be read from the disks as part of the total array integrity check at startup.
Despite this, even the "high end" PERC-type controllers are still somewhat stupid and limited in features. You're really best off just using them for physical disk integrity with double parity and then using something else as a logical volume manager. People swear by operating system software RAID, but every time I've tried it, I wind up with a lot of problems with really basic events like disk failures, replacements, and rebuilds.
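The split described above -- controller for disk integrity, something else for volume management -- might look like this with LVM. A hedged sketch: the device name and volume names are hypothetical, and the commands assume a root shell:

```shell
# Hypothetical: /dev/sda is the single logical disk the PERC-style
# controller exports from its double-parity (RAID 6) set. Let LVM
# handle all the carving-up and resizing on top of it:
pvcreate /dev/sda                 # mark the RAID volume as an LVM physical volume
vgcreate data /dev/sda            # create a volume group on it
lvcreate -n shares -L 500G data   # carve out a resizable logical volume
mkfs.ext4 /dev/data/shares        # put a filesystem on the logical volume
```

The controller never needs to know about any of this; it just presents one reliable block device, and LVM provides the flexible volume features the controller lacks.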