Well, in theory. According to their docs you can recover an array by finding a desktop with enough SATA ports and booting it from an Ubuntu stick. Everything is done with mdadm and LVM, so it should work. The one time I had to do it, it didn't work out so well, possibly because there was a combination of an almost-dead drive and a pair of cache SSDs. It was simply not possible to mount or repair the array. The only thing that got us back up with all our data was replacement hardware, which worked immediately.
Software RAID (Score:5, Informative)
There, I have a significant choice — to use the on-board RAID, or do it entirely in software (e.g. OpenMediaVault)?
If you use hardware RAID -- especially one built into a motherboard -- and the hardware dies, you're screwed. Using software RAID, like the LVM RAID Linux provides, makes your RAID configuration hardware-independent, allowing you to move your disks to another system as you want or need. If you decide to use HW RAID, get a well-known dedicated RAID card that you can replace more easily than a motherboard with built-in RAID.
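To make the portability point concrete, here's a minimal sketch using Linux's mdadm (the device names are placeholders; adapt to your own disks, and note these commands need root and real block devices):

```shell
# Build a two-disk mirror; mdadm writes its superblock metadata onto the
# member disks themselves, not into any controller's NVRAM.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# After moving the disks to a completely different machine, the array can
# be rediscovered from that on-disk metadata alone:
mdadm --assemble --scan

# Inspect the superblock on any member disk to see which array it belongs to:
mdadm --examine /dev/sdb
```

That on-disk metadata is exactly what makes the disks independent of any one machine.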
Re:Software RAID (Score:2)
THIS!
Still, there is a lot to be said for some of the off-the-shelf home RAID boxes. I haven't researched them, but I know a friend who likes the Synology NAS boxes. Most of those types of solutions are really Linux systems, but I don't know whether the RAID is really software or hardware, so I don't know if you could pull the drives and use Linux to recover your data. That should be a standard question to ask vendors before purchasing, but I doubt many people consider it.
Re: (Score:3)
I have a couple of Dell systems at home that have HW RAID on the motherboard, and I wondered if I should use it, but my research made me decide against it (for the reasons I mentioned earlier). The documentation seems to indicate that these on-board systems are kind of picky, requiring (or strongly recommending) that you use the exact same disks, etc., and it's unclear whether disks could be moved to a different model of system. Even for a Windows system, using the SW mirroring Windows provides seems like the better choice.
Re: (Score:3)
I'm admin for some online servers running on older Dell server chassis with Dell "PERC" HW RAID. One motherboard fried (impressive smoke residue inside the lid) and I simply pulled the drives, popped them into an older chassis, and it booted up and ran (Windows). I forget if it was the same PERC version controller (PERC 3, 4, 5...) but it just plain worked.
I've mixed and matched drive brands, sizes, etc. When you create the array (RAID 5 or 6), your array will be limited to the size of the smallest drive (times the number of drives, minus the drives' worth of space lost to parity: one for RAID 5, two for RAID 6).
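A quick way to sanity-check that sizing rule (my own back-of-the-envelope sketch, not any vendor tool):

```shell
# Usable capacity of a RAID 5/6 array, limited by the smallest member drive.
# Usage: raid_capacity LEVEL SIZE [SIZE...]
# RAID 5 loses one drive's worth of space to parity, RAID 6 loses two.
raid_capacity() {
    level=$1; shift
    case "$level" in
        5) parity=1 ;;
        6) parity=2 ;;
        *) echo "unsupported level" >&2; return 1 ;;
    esac
    n=$#
    smallest=$1
    for size in "$@"; do
        [ "$size" -lt "$smallest" ] && smallest=$size
    done
    echo $(( smallest * (n - parity) ))
}

# A 4 TB drive in a set of 6 TB drives drags every member down to 4 TB:
raid_capacity 5 6 6 6 4   # prints 12 (TB usable)
raid_capacity 6 6 6 6 6   # prints 12 (TB usable)
```

So mixing sizes works, but the odd small drive costs you capacity on every other member.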
Re: (Score:2)
HP controllers are also generally good: the metadata is stored on the disks themselves, so if you move them to another similar controller it will detect the array and boot it.
Some servers actually have proper RAID controllers built in. What you want to avoid is "fakeraid", a proprietary version of software RAID that just happens to be implemented in the BIOS and have its own drivers.
Re: (Score:2)
Back in the day, there was no metadata written to the disks. If we had to pull the disks, we had to keep track of where they went or the array controller would treat the array as failed.
I don't know what took so long to get metadata written to disks. In theory, the entire array configuration need not exist in the controller's memory at all; it could simply be read from the disks as part of the total array integrity check at startup.
Despite this, even the "high end" PERC-type controllers are still somewhat stupid.
Re: (Score:2)
The low end Dell PERC RAID cards perform really badly for RAID 5 (anything other than RAID 0 or 1 really). Unless you are splashing out for a fairly high end HW RAID controller, you are probably better off using software RAID.
Re: (Score:2)
Yeah, thanks. It's all older hardware, on a tight budget, but all works well. Yes, all RAID 5. RAID performance is pretty good, not the bottleneck. But I agree- they seem much slower than they should be. I have some Mylex RAID controllers I've wanted to try someday. I used them a lot in the 90s and they were lightning fast.
The CPUs are likewise older and slower, and I'm not sure software RAID would do well, as the CPUs get saturated running PHP crap.
All that said, I have newer, much faster machines sitting idle.
Re:Software RAID (Score:5, Informative)
My Synology box (which is indeed a Linux box) has been humming along for over a decade now. Synology uses software RAID. They've implemented their own solution which lets you mix and match different sized disks, and it will intelligently make use of all the extra space (assuming you're using three or more disks). This lets you increase your RAID disks over time, a feature I've taken advantage of over the years as disk sizes have increased, while prices have come down.
I've had a couple of drives fail over that time, and I ordered new, larger drives for replacements. The replacement procedure was easy, as the drives are mounted in front of the box, and are hot-swappable. So pop the old one out, mount and push the new one in, then use the configuration control panel and tell it to rebuild the drive array using the new drive. After x hours of formatting and copying data, the job is done, with zero downtime.
If someone wants a NAS box, I'll always recommend Synology.
Re: (Score:2)
I believe Synology also makes it possible to recover data in case of hardware failure - either using another Synology device, or using a special Linux distribution that can mount all the disks.
Data protection is key - disk failure is the most likely cause of data loss, so RAID helps protect against it. However, you also want to protect against hardware failure - if the RAID hardware fails, how would you recover from that? Motherboard RAID is only recoverable in RAID1 (mirroring) mode - basically, stick one of the mirrored disks in another machine and read the data straight off it.
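For the mirrored case that recovery really can be that simple. A RAID 1 member carries a full copy of the data, and most BIOS-RAID formats keep their metadata at the end of the disk, so (with a placeholder device name, and no guarantee for every controller's format) a lone mirror half can often be read directly on any Linux box:

```shell
# Attach one of the mirrored disks to another machine, then mount its data
# partition read-only straight off the single disk; no RAID hardware needed.
mount -o ro /dev/sdb1 /mnt/recovered
```

With striped or parity modes (RAID 0/5/6) no single disk holds readable data, which is why those modes die with the controller.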
Re: (Score:2)
Synology claims to use a proprietary RAID format, but, for the most part, you can use mdadm to manage them from any Linux box.
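A hedged sketch of what that looks like with the disks moved to a generic Linux box (the volume group and logical volume names below are assumptions; Synology's layout varies by model and DSM version, so check with lvs first):

```shell
# Assemble the md arrays from the metadata Synology wrote onto the disks:
mdadm --assemble --scan

# Synology typically layers LVM on top of md; activate the volume groups
# and mount the data volume read-only to avoid disturbing a fragile array:
vgchange -ay
mount -o ro /dev/vg1/volume_1 /mnt/synology
```

Mounting read-only is the cautious default for any recovery attempt.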
Re: (Score:2)
Synology is just standard Linux software RAID, which is not necessarily a bad thing.
Re: (Score:2)
I gave them a passing consideration - they cost several weeks of food above the cost of the discs. At which point I might just as well buy the discs and manage them manually.