What have your rebuild times been for a failed drive, if you've had any? I've read articles recommending against RAID5 for large drives because the rebuild times can be very long. Those articles favored RAID10 or RAID01 instead (can't remember which one was listed first).
Not bad at all. I haven't had any failures with this setup since I only put it together last year, but on my previous setup, which was around the same size, the rebuild ran overnight and I was good to go the next morning. Anecdotally, I've noticed that a rebuild with a dedicated RAID controller is significantly faster than with two hard drives in RAID1 on the motherboard.
I think RAID5 is a good solution for home use. We obviously use RAID60 and higher at work, but for less than 100TB of data in an
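For scale, a back-of-envelope sketch of a best-case rebuild time for an 8TB drive. The 150 MB/s sustained rate is an assumed figure for illustration, not one from the thread; real rebuilds on a loaded array can take far longer.

```python
# Best-case (sequential, idle-array) rebuild time for one 8 TB drive.
# The 150 MB/s sustained transfer rate is an assumed figure.
drive_bytes = 8e12      # 8 TB replacement drive
bytes_per_s = 150e6     # assumed ~150 MB/s sustained

hours = drive_bytes / bytes_per_s / 3600
print(f"{hours:.1f} h")  # ~14.8 h, consistent with an overnight rebuild
```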
How is this on Slashdot? (Score:-1, Flamebait)
Re: (Score:5, Insightful)
If you don't know, why post?
I'm interested as well, and I want to read suggestions from people who have been there.
Re: (Score:-1)
Is hardware RAID better than software RAID?
Is a GPU miner better than a CPU miner?
Is an H264 decoder chip better than software?
Is AES-NI better than general CPU encryption/decryption?
Re: (Score:5, Informative)
In this case you have to define "better".
Yes, a hardware RAID controller will (or should) be faster than a software solution, but unlike your other examples, a HW RAID configuration is tied to the hardware: if the controller dies, you can't access your data without identical (or confirmed-compatible) replacement hardware. In that case, software RAID is "better", since you can simply move your configuration to another system as needed.
Your other examples are not really good comparisons for values of "better" other tha
Re: (Score:2)
Like many people out there, you have a fundamental misconception about RAID's raison d'être.
RAID is NOT a backup solution.
RAID is an uptime solution. If one of your hard drives fails on Monday afternoon, you can keep using your system (or NAS) until the weekend, when you have time to put in a new drive. But at no point should you worry about things like moving the hardware controller to a different system to recover data, because you would just pull the data from backups.
I continually encounte
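The uptime argument above can be put in rough numbers. This is a sketch under assumed figures (a 5% annualized per-drive failure rate, a 4-drive RAID5, and a Monday-to-weekend replacement window); none of these numbers come from the thread.

```python
import math

afr = 0.05        # assumed 5% annualized failure rate per drive
window_h = 120    # failure Monday afternoon, replacement at the weekend
survivors = 3     # drives left in a degraded 4-drive RAID5

rate_h = afr / (365 * 24)  # per-drive hourly failure rate
# Exponential model: chance a second drive dies before the swap.
p_second = 1 - math.exp(-survivors * rate_h * window_h)
print(f"{p_second:.3%}")   # ~0.2% under these assumptions
```

The point being that a few days of degraded operation carries a small (but nonzero) risk, which is exactly the window RAID is buying you.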
Re:How is this on Slashdot? (Score:5, Informative)
Like many people out there, you have a fundamental misconception about RAID's raison d'être.
No, not really.
RAID is NOT a backup solution.
Ya, I know.
I'm not talking about a failed drive, but a failed HW RAID controller. The fundamental point is that HW RAID ties your configuration to the controller, whereas SW RAID does not. If that controller is on the motherboard, the problem is even worse. Bringing a failed system back up, or simply moving your configuration to another system, becomes more problematic with HW RAID without resorting to a restore from backup -- which is never 100% complete unless the system failed immediately after a backup finished. Minimizing dependence on specific hardware reduces the need for a restore in a failure situation, and SW RAID helps with that.
It's easy to find a new drive, less easy to find a replacement discrete RAID controller, and even harder to find a compatible replacement motherboard (if the RAID controller is on-board). That said, I've used and relied on both SW and HW RAID: the former usually in home systems, the latter in enterprise/production situations where the maintenance/service contract meant I could reliably get replacement parts from the vendor in a timely fashion (at one company, usually within 4 hours, 24/7/365).
Re: (Score:2)
That said, I've used and relied on both SW and HW RAID: the former usually in home systems, the latter in enterprise/production situations where the maintenance/service contract meant I could reliably get replacement parts from the vendor in a timely fashion (at one company, usually within 4 hours, 24/7/365).
Note that the larger systems had redundant SCSI and RAID controllers (and redundant NICs, etc.) with automatic failover... They were not inexpensive.
Re: How is this on Slashdot? (Score:2)
Larger.
More expensive.
Too expensive compared to just doing it in software and spending the money on a better CPU.
I mean that a RAID controller is also just a slower, less capable CPU, with a tiny bit of different bus wiring.
Re: (Score:2)
Larger. More expensive. Too expensive compared to just doing it in software and spending the money on a better CPU.
The systems were three HP 9000 T600 (T-Class) machines running HP-UX 11 with 8 CPUs and fully redundant hardware (except for the bus). They could fail over automatically for anything except a CPU, in which case the system would configure it out and reboot... Storage was on 10 HP AutoRAIDs, an EMC 3500, and HP XP256 disk arrays. There were also HP A-, L-, and N-Class systems (3 of each), with the N-Class systems sharing the disk arrays. Two-thirds of the systems were in Boston and the rest in Norfolk, VA, where I administered them all. At
Re: (Score:2)
A failed HW RAID controller is much less likely than a failed drive; that's the point of hardware RAID. Even then, it's not a big deal if it happens. The only reason you think someone is tied to their controller is that their backups are inadequate. If they're running CDP (real-time) or even daily backups, then recovering from backup becomes a routine chore, not an ordeal. Your idea of backup "not being 100% complete" might be true for someone who runs it once a week (or worse) - but then they probably s
Re: (Score:2)
with four 8TB drives in RAID5
What have your rebuild times been for a failed drive, if you've had any? I've read articles recommending against RAID5 for large drives because the rebuild times can be very long. Those articles favored RAID10 or RAID01 instead (can't remember which one was listed first).
Re: (Score:2)
Re: (Score:2)
You may as well be running RAID0.
With 4x8TB drives, you have about a 1 in 6 chance of NOT encountering an unrecoverable error during a rebuild (which will likely take north of a week).
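For what it's worth, the "about 1 in 6" figure is consistent with the commonly quoted consumer-drive spec of one unrecoverable read error (URE) per 10^14 bits read. That rate is an assumption (and how well the spec reflects real-world URE behavior is debated), but the arithmetic goes like this:

```python
import math

ure_per_bit = 1e-14              # assumed spec: 1 URE per 10^14 bits read
drive_bytes = 8e12               # 8 TB member drives
bits_read = 3 * drive_bytes * 8  # a 4-drive RAID5 rebuild reads all 3 survivors

# Poisson approximation: probability of reading everything with no URE.
p_clean = math.exp(-ure_per_bit * bits_read)
print(f"{p_clean:.2f}")          # ~0.15, i.e. roughly a 1-in-6 to 1-in-7 shot
```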