Exactly this. Software is much easier to work with and recover, and it's future-proof. Hardware RAID might have its place, but I question the value of RAID at all. You can use a software stack like Gluster that just duplicates and distributes files on standard filesystems, or do something super basic like rsync your files once a week to a backup drive (local or remote). What you lose in functionality you might make up for in simplicity. It just depends on how much value you put on different features. I've definitely seen more data lost on failed hardware RAID, which loses the ability to see some drives and uses proprietary algorithms, than on software RAID.
Why can't there just be hardware ACCELERATION? Like a programmable RAID controller. Or features/interfaces that make it hardware fast but with whatever code you want to happen.
An interesting idea, but it seems like it tries to occupy a strange niche between a pure software stack like ZFS, and just splitting your storage out to a separate box entirely.
How have your rebuild times been for a failed drive, if you've had any? I read articles that recommend against RAID5 for large drives, as the rebuild times can be long. Those articles favored RAID10 or RAID01 instead (can't remember which one first).
Not bad at all - I haven't had any failures with this setup since I just put it together last year, but on my previous setup which was around the same size, the rebuild ran overnight, and I was good to go the next morning. I've noticed anecdotally that rebuild with a dedicated RAID controller is significantly faster than when I have two hard drives in RAID1 using the motherboard.
I think RAID5 is a good solution for home use. We obviously use RAID60 and higher at work, but for less than 100TB of data in an
How is this on Slashdot? (Score:-1, Flamebait)
Re: (Score:5, Insightful)
If you don't know, why post?
I'm interested as well, and I want to read suggestions from people who have been there.
Re:How is this on Slashdot? (Score:-1)
Is hardware RAID better than software RAID?
Is a GPU miner better than a CPU miner?
Is an H264 decoder chip better than software?
Is AES-NI better than general CPU encryption/decryption?
Re:How is this on Slashdot? (Score:5, Informative)
In this case you have to define "better".
Yes a hardware RAID controller will (or should) be faster than a software solution but, unlike your other examples, a HW RAID configuration is tied to the hardware and if the HW dies you can't access your data w/o identical, or confirmed compatible, replacement hardware. In this case, software RAID is "better" as you can simply move your configuration to another system as needed.
Your other examples are not really good comparisons for values of "better" other than "faster".
Re: How is this on Slashdot? (Score:5, Informative)
My preference is software RAID because you typically get added features like snapshots, dedup, etc. that would cost a ton if done in hardware, and software RAID still leaves plenty of compute resources left over for applications like Plex. Oh, and ZFS is pretty much the gold standard for a home NAS.
Re: How is this on Slashdot? (Score:5, Interesting)
Re: How is this on Slashdot? (Score:2)
Why can't there just be hardware ACCELERATION?
Like a programmable RAID controller. Or features/interfaces that make it hardware fast but with whatever code you want to happen.
Re: (Score:2)
Re: (Score:2)
sorry, I wanted to reply to your post but replied to the post above yours instead, see here:
https://hardware.slashdot.org/... [slashdot.org]
Re: (Score:2)
Like many people out there, you have a fundamental misconception about RAID's raison d'etre.
RAID is NOT a backup solution.
RAID is an uptime solution. If one of your hard drives fails on Monday afternoon, you can continue using your system (or NAS) until the weekend when you get time to put in a new hard drive. But at no time should you really worry about things like moving the hardware controller to a different system and recovering data - because you would just pull that from backups.
I continually encounte
Re:How is this on Slashdot? (Score:5, Informative)
Like many people out there, you have a fundamental misconception about RAID's raison d'etre.
No, not really.
RAID is NOT a backup solution.
Ya, I know.
I'm not talking about a failed drive, but a failed HW RAID controller. The fundamental point is that HW RAID ties your configuration to the controller whereas SW RAID does not. If that controller is on the motherboard, the problem is even worse. For whatever reason, bringing back up a failed system or simply moving your configuration to another system, becomes more problematic with HW RAID w/o resorting to a restore from backup situation -- which is never 100% complete, unless your system fails immediately after a backup completes. To minimize the need for a restore in a failure situation, minimizing dependence on specific hardware helps and SW RAID can help with that.
It's easy to find a new drive, less easy to find a replacement discrete RAID controller and even harder to find a compatible replacement motherboard (if the RAID controller is on-board). That said, I've used and relied on both SW and HW RAID. The former usually in home systems and the latter in enterprise / production situations where the maintenance/service contract meant I could reliably get replacement parts from the vendor in a timely fashion (at one company it was usually within 4 hours 24/7/365).
Re: (Score:2)
That said, I've used and relied on both SW and HW RAID. The former usually in home systems and the latter in enterprise / production situations where the maintenance/service contract meant I could reliably get replacement parts from the vendor in a timely fashion (at one company it was usually within 4 hours 24/7/365).
Noting that the larger systems had redundant SCSI and RAID controllers (and redundant NICs, etc...) with automatic fail over ... They were not inexpensive.
Re: How is this on Slashdot? (Score:2)
Larger.
More expensive.
Too expensive compared to just doing it in software and spending the money on a better CPU.
I mean, that RAID controller is also just a slower, less capable CPU. With a tiny bit of different bus wiring.
Re: (Score:2)
Larger. More expensive. Too expensive compared to just doing it in software and spending the money on a better CPU.
The systems were 3 HP 9000 T-Class T600s running HP-UX 11 with 8 CPUs and all redundant hardware (except for the bus). They could auto-fail over for anything except a CPU, in which case the system would configure that out and reboot... Storage was on 10 HP AutoRAIDs, an EMC 3500 and HP XP256 disk arrays. There were also HP A/L/N Class systems (3 each) -- with the N-Class systems sharing the disk arrays. Two-thirds of the systems were in Boston and the rest in Norfolk, VA, where I administered them all. At
Re: (Score:2)
A failed HW RAID controller is much less likely than a failed drive. That's the point of hardware RAID. Even then, it's not a big deal if it happens. The only reason you think someone is tied to their controller is because their backups are inadequate. If they're running CDP (real-time) or even daily backups, then recovering from backup becomes a routine chore, not an ordeal. Your idea of backup "not being 100% complete" might be true for someone who runs it once a week (or worse) - but then they probably s
Re: (Score:2)
with four 8TB drives in RAID5
How have your rebuild times been for a failed drive, if you've had any? I read articles that recommend against RAID5 for large drives, as the rebuild times can be long. Those articles favored RAID10 or RAID01 instead (can't remember which one first).
Re: (Score:2)
Re: (Score:2)
You may as well be running RAID0.
With 4x8TB drives, you have about a 1 in 6 chance of NOT encountering an unrecoverable error during a rebuild (which will likely take north of a week).
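The arithmetic behind that figure can be sketched as follows, assuming the commonly quoted consumer-drive spec of one unrecoverable read error (URE) per 1e14 bits read; the URE rate is an assumption, not something stated in the thread:

```python
# Back-of-the-envelope check of the "1 in 6" claim for a 4x8TB RAID5 rebuild.
# Rebuilding the failed member means reading the 3 surviving drives in full.
bits_read = 3 * 8e12 * 8        # 24 TB of survivor data, expressed in bits
p_ure_per_bit = 1e-14           # assumed consumer-drive URE spec

# Probability of reading every bit without hitting an unrecoverable error
p_clean_rebuild = (1 - p_ure_per_bit) ** bits_read
print(round(p_clean_rebuild, 3))   # roughly 0.15, i.e. about 1 chance in 6-7
```

Enterprise drives rated at 1 URE per 1e15 bits change the picture considerably, which is one reason the "RAID5 is dead" articles target large consumer disks specifically.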
Re: How is this on Slashdot? (Score:1)
Re: (Score:2)
Considering software RAID worked fine back in the Pentium 200 days I wonder how much performance you would really lose over hardware RAID these days. Sure if you're raiding SSDs you may need a hardware RAID controller but for normal spinning disks I can't imagine there'd be a performance difference these days. Heck these days it's even recommended to enable disk compression for performance reasons as the cost of processing power is less than the cost of reading the extra bits from the platter.
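As a rough sketch of why the parity math is so cheap in software (a toy illustration of the principle, not how md actually lays out stripes): RAID5 parity is just a byte-wise XOR across the data chunks of a stripe, and any one lost chunk is recovered by XORing the parity with the survivors:

```python
import os

CHUNK = 64 * 1024                              # one 64 KiB stripe chunk
data = [os.urandom(CHUNK) for _ in range(3)]   # three simulated "data disks"

def xor_chunks(chunks):
    # Byte-wise XOR across equal-sized chunks via big-integer arithmetic
    acc = 0
    for c in chunks:
        acc ^= int.from_bytes(c, "little")
    return acc.to_bytes(CHUNK, "little")

parity = xor_chunks(data)                      # what the parity chunk stores

# Simulate losing disk 0: rebuild its chunk from parity plus the survivors
rebuilt = xor_chunks([parity, data[1], data[2]])
assert rebuilt == data[0]
```

Even in interpreted Python this runs in microseconds per stripe; the kernel's md driver does it with SSE/AVX, so on any modern CPU the XOR is nowhere near the bottleneck compared to the disks themselves.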
Re: (Score:3)
Hardware RAID controllers have battery/flash-backed write caches. This makes a HUGE difference for things which perform writes and wait for them to sync (eg databases).
Without a write cache, your database has to wait for the drive to commit the data to its platters. When you're doing lots of small writes this soon adds up.
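To make the sync-write point concrete, a database-style durable write looks like the sketch below (a generic illustration, not any particular database's code). The fsync() call blocks until the storage stack reports the data durable; a battery-backed controller cache can acknowledge from RAM immediately, while a bare spinning disk makes the caller wait for the platters:

```python
import os
import tempfile

# Simulate committing a transaction record durably to disk
fd, path = tempfile.mkstemp()
try:
    os.write(fd, b"commit record\n")
    # Blocks until the device (or a battery-backed write cache)
    # acknowledges the data as durable. On raw spinning rust this
    # is a platter round-trip on every commit.
    os.fsync(fd)
finally:
    os.close(fd)
    os.remove(path)
```

Thousands of small transactions mean thousands of these round-trips, which is exactly where the controller's cache earns its keep.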
Re: (Score:2)
Excuse my ignorance, but this doesn't make a difference practically for SSDs does it? I understand how cache is critical for spinning rust, but am I understanding this correctly?
Re: (Score:2)
A cache based on battery-backed RAM would still be faster than an SSD, although the difference would usually be much smaller.
A cache would also help in corner cases, eg where the SSD has to first wipe the block before rewriting it which could cause slow writes.
Re: (Score:2)
Wouldn't RAM be able to serve this purpose in a system? I'm struggling to see why you'd need dedicated hardware for this. Unless your key point is being battery backed, and therefore being able to commit to the disk before the power goes out; that I do see as a unique benefit of a dedicated hardware solution.
Re: (Score:2)
Battery backed is the whole point. Most (all?) controllers won't do write caching by default if there is no battery or the battery has failed; most operating systems won't do write caching for the same reason.
Software such as databases and journaling filesystems won't cache writes in RAM, and won't consider a write as completed until the controller reports it saved.
Re: (Score:2)
Gotcha, thanks.
Who'd have thought you can still learn something on Slashdot. I better go put the universe back in order and start another Trump vs Biden fight :-)
Re: (Score:2)
It is quite possible to run ZFS on top of hardware RAID and not use the data resiliency/redundancy features of ZFS. You still get the snapshots and all other nifty features. You can also use both; example: 2 LUNs, each backed by hardware RAID, with ZFS using the 2 LUNs as mirrors ("Use hardware RAID or ZFS redundancy or both"):
https://docs.oracle.com/cd/E53... [oracle.com]
Keep in mind that I would be inclined not to use hardware RAID and to let ZFS handle the raw drives, but it is quite possible to use hardware RAID with ZFS.
Re: (Score:2)
Just because you can doesn't mean you should.
Zfs should always have raw access to the drives, otherwise there isn't much point in it.
Re: (Score:2)
Listen, consider the origin of the link I gave then, try again. Hardware RAID being bad for zfs is a myth, plain and simple!
Re: (Score:2)
Depends a lot on your use case. RAID controllers often mask the drives themselves behind a sort of "virtual JBOD" where they create RAID0 virtual disks for each individual drive. This works just fine, but there is an unknown element here then which is HOW the controller does the writing (and reading) to that virtual disk and that it is not directly under the control of the ZFS software. It might honour the cylinder/head/track write command or it might interpret that command and you have no way of knowing fr
Re: (Score:2)
Yes a hardware RAID controller will (or should) be faster than a software solution but, unlike your other examples, a HW RAID configuration is tied to the hardware and if the HW dies you can't access your data w/o identical, or confirmed compatible, replacement hardware. In this case, software RAID is "better" as you can simply move your configuration to another system as needed.
I'll add that at least in some cases, it's not only the RAID controller hardware, but even the exact BIOS version that counts.
So, hell yeah, go software RAID. Which is exactly what I've got running for my self-built home NAS. The only drawback is the monthly check, which takes quite a while to complete. Considering this is for home use, and the way it's scheduled, I can easily live with it.
Re: (Score:2)
Re:How is this on Slashdot? (Score:5, Insightful)
Let's look at those:
GPU vs CPU miner: Yes the GPU miner is far more performant in practically every blockchain currently out there. No question.
H264 decoder vs software: Define your requirements. Is the goal fastest speed? Then the H264 decoder is better. Is the goal best quality? Then software is better. The decoders (and encoders) have been tweaked regularly over time, and hardware CODECs are on a far slower update cadence than software.
AES-NI vs CPU: Yes, it's better than the CPU, if your goal is only to use AES. If your goal is security or encryption in general, then the CPU is likely a better option, for the same reason as the H264 codec above.
Which brings us to RAID: The ability to do something in software is far more flexible and less proprietary than hardware RAID. Hardware RAID is slightly more performant, but I suspect this is only relevant when you're RAIDing SSDs. Anyone who has ever had a hardware RAID controller die on them will vow never again to play around with proprietary shit for the minuscule gain in speed, so the answer here is almost universally software.