Tell me how your software RAID stands up to hundreds of concurrent users connecting to an IMAP service like Cyrus with Horde webmail. I had a system like that years ago with good SCSI drives, and it had performance issues. We switched to a new server with a hardware RAID controller and performance was excellent. I think it's partly the better latency you get with a dedicated RAID disk controller.
So... I stopped seeing new SCSI installations around 2007.
Around 2003-2005, Intel's XScale added an XOR engine, and a Linux kernel developer updated md to support offloading to multiple XOR engines.
That became the core of several SOHO NAS products.
Around the same time, Intel and AMD vector instructions were delivering enough XOR throughput to saturate the drive bandwidth of a 1-2U system.
Yes, you could get custom ASICs that ran faster... and the price tended to be on par with a second machine.
That led to... if you have the budget
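For anyone curious what those XOR engines actually offload: RAID-5 parity is just a byte-wise XOR across the data blocks in a stripe, and the same XOR reconstructs a lost block. A toy sketch (illustrative only; the md driver does this over large aligned buffers using the vector units or XOR engines mentioned above):

```python
# Toy illustration of RAID-5 XOR parity. Real implementations work on
# big aligned buffers with SSE/AVX or dedicated XOR engines; this pure
# Python loop only shows the math.

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-sized blocks together, byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Three data blocks, one per drive...
d0, d1, d2 = b"\x10" * 4, b"\x0f" * 4, b"\xf0" * 4
parity = xor_blocks(d0, d1, d2)      # written to the parity chunk

# ...drive holding d1 dies: rebuild its block from survivors + parity.
rebuilt = xor_blocks(d0, d2, parity)
assert rebuilt == d1
```

This XOR-everything-together step is what runs on every full-stripe write and every rebuild, which is why moving it from a busy CPU to an offload engine mattered back then.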
"I have just one word for you, my boy...plastics."
- from "The Graduate"
Hobby Linux user: doesn't matter, Enterprise: does (Score:2)
Re: (Score:3)
Tell me how your software RAID stands up to hundreds of concurrent users connecting to an IMAP service like Cyrus with Horde webmail.
I doubt the OP is going to have that on his personal NAS... :-)
As to the subject of your post, I would offer that for a hobby user it does matter, and software RAID would be the better long-term option, with a well-known dedicated hardware RAID card as the alternative. Using HW RAID ties your configuration to that hardware. An enterprise user will have access to identical, or vendor-confirmed compatible, replacement hardware as part of their maintenance/service agreement; the hobby user won't. This will m
Re: (Score:1)
Re: (Score:2)
My fault - I got a SCSI card and a couple of drives in 2006. Sorry about that.
I have a very good nose for which way to jump with hardware decisions - I always jump the wrong way.
Re: (Score:1)
Correct. Although software RAID is practically hardware-independent, it has slower reads/writes than a dedicated hardware RAID solution (one that is not part of the motherboard).
The number of users able to connect simultaneously is also more limited with software RAID, and resilvering a software RAID is likely to be a (tiny) bit slower too.
Having said all that, though, software RAID would be the preferable solution. Or consider something completely different. There seems to be a new project: MinIO [min.io] (https://
Re: (Score:2)
Is two enough? One machine for each hand.
Re: (Score:2)
Going back a few years, I had a system with a real hardware RAID card and enterprise drives, yet performance was horrible. The RAID card vendor agreed that the configuration should have worked well, but ultimately blamed the performance issues on the type of workload.
So, in my experience, dedicated RAID disk controllers do not guarantee good performance.
Re: Hobby Linux user: doesn't matter, Enterprise: (Score:1)
Re: Hobby Linux user: doesn't matter, Enterprise: (Score:2)
Webmail must die in a fire.
All of it.
Web$anything that isn't hypertext, must die in a fire, for that matter.
Re: (Score:2)
Back when SCSI drives were a thing, so were single- or dual-core CPUs and 1 GB of RAM. You *needed* hardware RAID because the CPU was too busy.
Nowadays we have multicore CPUs with idle time. We have many gigabytes of RAM. We don't use swap. The embedded CPU on a hardware RAID card is probably slower than the host CPU, and it certainly has less RAM to help with buffering.
Your software RAID in Linux and ZFS is getting updated for bugs, security and new features. The underlying algorithms are getting looked at for performance. This wi
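To put a rough number on the idle-CPU point above: even naive XOR on the host is quick relative to disk bandwidth. A toy measurement (illustrative only; Python's bignum XOR here stands in for the optimized routines the md driver benchmarks and picks at boot):

```python
# Toy benchmark: XOR a 4 MiB chunk in one shot and report throughput.
# This is only a stand-in for the kernel's SSE/AVX XOR paths, which
# are far faster still; the point is that host-CPU parity math is
# cheap next to spinning-disk bandwidth.
import time

BLOCK = 4 * 1024 * 1024                       # one 4 MiB chunk
a = int.from_bytes(b"\xa5" * BLOCK, "little")
b = int.from_bytes(b"\x5a" * BLOCK, "little")

t0 = time.perf_counter()
parity = a ^ b                                # XOR the whole chunk at once
dt = max(time.perf_counter() - t0, 1e-9)      # guard against a zero reading

mib_per_s = (BLOCK / (1024 * 1024)) / dt
print(f"XORed 4 MiB in {dt * 1000:.2f} ms ({mib_per_s:.0f} MiB/s)")
```

Numbers will vary by machine, but on anything modern this runs in milliseconds, which is the comment's point: the host CPU has headroom to spare for parity.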
Re: (Score:2)
This whole thread is filled with hobbyists working on their community college PC Tech certifications and playing enterprise with a pile of junk hardware they picked up off Craigslist.
I've worked in IT consulting for nearly 20 years, and I have seen only ONE install running a ZFS software RAID setup, and the entire environment was a disaster of home-rolled scripts and glued-together open-source crap.
Every single other place was running hardware RAID at some level -- whether it was physical controllers driving