


Ask Slashdot: Is a Software RAID Better Than a Hardware RAID? (wikipedia.org) 359
RockDoctor (Slashdot reader #15,477) wants to build a personal network-attached storage solution, maybe using a multiple-disk array (e.g., a RAID). But unfortunately, "My hardware pool is very shallow."
I eBay'd a desktop chassis, whose motherboard claims (I discovered, on arrival) RAID capabilities. There, I have a significant choice — to use the on-board RAID, or do it entirely in software (e.g. OpenMediaVault)?
I'm domestic — a handful of terabytes — but I expect the answer to change as one goes through the petabytes into the exabytes. What do the dotters of the slash think?
Share your own thoughts in the comments. Is a hardware RAID better than a software RAID?
Software? RAID (Score:5, Informative)
Unless you've got a dedicated controller with a CPU to do the RAID calculations, the RAID that comes off your motherboard is still software RAID, it's just implemented in the driver.
If you're going to go down this path, do it in the OS or in UnRAID or something; otherwise you might find you're tied to that driver for the RAID implementation. If you do it in software, you should be able to plug those drives (as a set) into any other hardware and have them recognised.
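A rough sketch of what that portability looks like with Linux md software RAID (device and mount names here are just placeholders):
    # On the replacement machine, after moving the whole set of member disks over:
    cat /proc/mdstat            # any arrays the kernel already auto-assembled at boot
    mdadm --assemble --scan     # assemble arrays from the metadata stored on the disks themselves
    mdadm --detail /dev/md0     # confirm the array is complete and clean
    mount /dev/md0 /mnt/data    # mount as usual; no particular controller required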
Re: (Score:3)
Unless you've got a dedicated controller with a CPU to do the RAID calculations, the RAID that comes off your motherboard is still software RAID, it's just implemented in the driver.
Let's not split hairs here. The thing on the motherboard is a whole different beast: all the downsides of hardware RAID vs actual software RAID without any of the upsides. With motherboards you're still at the mercy of a proprietary solution that can take your RAID array with it on failure.
IMO software RAID for its flexibility > hardware RAID. And motherboard RAID is an abomination that should die in a fire.
Re:Software? RAID (Score:5, Informative)
The other name for motherboard RAID used to be "FakeRAID." And yes, you should avoid it at all costs. It's really software, it's proprietary, and ties you to that motherboard.
That said, I have a friend who used to do hardware RAID, and he got proper hardware RAID cardS - yes, plural. He would always have a back-up RAID card, so he could always recover his data. He did that through several generations of hardware RAID cards, and eventually gave up and just went with Linux kernel-level RAID. I've been running Linux RAID-1 for between one and two decades now, myself.
Software RAID (Score:5, Informative)
There, I have a significant choice — to use the on-board RAID, or do it entirely in software (e.g. OpenMediaVault)?
If you use hardware RAID -- especially one built into a motherboard -- and the hardware dies, you're screwed. Using software RAID, like the LVM RAID Linux provides, makes your RAID configuration hardware-independent, allowing you to move your disks to another system as you want/need. If you decide to use HW RAID, get a well-known dedicated RAID card that you can replace more easily than the motherboard with built-in RAID.
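As a hedged illustration of that hardware independence, roughly what a mirrored volume looks like with LVM RAID (the volume group, size and device names are made up for the example):
    pvcreate /dev/sdb /dev/sdc                    # prepare the two data disks
    vgcreate vg_nas /dev/sdb /dev/sdc             # pool them into a volume group
    lvcreate --type raid1 --mirrors 1 -L 500G -n lv_data vg_nas   # mirrored logical volume
    mkfs.ext4 /dev/vg_nas/lv_data                 # any filesystem on top
    lvs -a -o name,copy_percent,devices vg_nas    # watch the initial sync / check health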
Re:Software RAID (Score:5, Interesting)
If you decide to use HW RAID, get a well-known dedicated RAID card that you can replace more easily than the motherboard with built-in RAID.
TL;DR: Software RAID is the way to go, with a dedicated well-known HW RAID controller as the alternative. I wouldn't even consider using the on-board MB RAID.
Re: (Score:3)
Re: Software RAID (Score:3)
And a spare server, a spare building and a spare planet, of course.
Oh, I am irreplaceable.
*is promptly stabbed to death by one of the listeners*
Re: (Score:2)
THIS!
Still, there is a lot to be said for some of the off-the-shelf home RAID boxes. I haven't researched them, but I know a friend who likes the Synology NAS boxes. Most of those types of solutions are really Linux systems, but whether it's really software or hardware RAID, I don't know, so I don't know if you could pull the drives and use Linux to recover your data. That should be a standard question to ask vendors before purchasing, but I doubt many people consider it.
Re: (Score:3)
I have a couple of Dell systems at home that have HW RAID on the motherboard and wondered if I should use them, and my research made me decide against it (for the reasons I mentioned earlier). The documentation seems to indicate that these on-board systems are kind of picky, requiring (or strongly recommending) you use the exact same disks, etc... and it's unclear if disks could be moved to a different model system. Even for a Windows system, using the SW mirroring Windows provides seems like the better choice.
Re: (Score:3)
I'm admin for some online servers running on older Dell server chassis, Dell "PERC" HW RAIDs. One MB fried (impressive smoke residue inside the lid) and I simply pulled the drives, popped them into an older chassis, and it booted up and ran (Windows). I forget if it was the same PERC version controller (PERC 3, 4, 5...) but it just plain worked.
I've mixed and matched drive brands, sizes, etc. When you create the array (RAID 5 (or 6)) your array will be limited to the size of the smallest drive (times the number of drives minus the parity drives).
Re: (Score:2)
HP are also generally good, the metadata is stored on the disks themselves so if you move them to another similar controller it will detect the array and boot it.
Some servers actually have proper raid controllers built in, what you want to avoid is "fakeraid" which is a proprietary version of software raid that just happens to be implemented in the bios and have its own drivers.
Re: (Score:2)
Back in the day, there was no metadata written to the disks. If we had to pull the disks, we had to keep track of where they went or the array controller would treat the array as failed.
I don't know what took so long to get metadata written to disks. In theory, the array configuration need hardly exist in the controller's memory at all; it could just be read from the disks as part of the total array integrity check at startup.
Despite this, even the "high end" PERC-type controllers are still somewhat stupid an
Re:Software RAID (Score:5, Informative)
My Synology box (which is indeed a Linux box) has been humming along for over a decade now. Synology uses software RAID. They've implemented their own solution which lets you mix and match different sized disks, and it will intelligently make use of all the extra space (assuming you're using three or more disks). This lets you increase your RAID disks over time, a feature I've taken advantage of over the years as disk sizes have increased, while prices have come down.
I've had a couple of drives fail over that time, and I ordered new, larger drives for replacements. The replacement procedure was easy, as the drives are mounted in front of the box, and are hot-swappable. So pop the old one out, mount and push the new one in, then use the configuration control panel and tell it to rebuild the drive array using the new drive. After x hours of formatting and copying data, the job is done, with zero downtime.
If someone wants a NAS box, I'll always recommend Synology.
Re: (Score:2)
Synology claims to use a proprietary RAID format, but, for the most part, you can use mdadm to manage them from any Linux box.
Re: (Score:3)
That is my experience as well. Hardware RAID makes the most sense with a dedicated (and trustworthy) controller, fairly large array, and an expectation that your CPU will be busy doing other things. Prefer using software RAID unless you are pretty sure you have all those.
I would also add: Be VERY careful if you try to reconfigure your RAID array. If you do so, take a snapshot of as much detail as you can (for example, with "mdadm --detail -v /dev/md0", and fdisk to print the partition table for each disk).
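Roughly what that record keeping might look like (array and disk names are placeholders):
    mdadm --detail -v /dev/md0  > md0-layout.txt        # RAID level, chunk size, member order
    mdadm --examine /dev/sd[abcd]1 >> md0-layout.txt    # per-member superblock metadata
    for d in /dev/sd[abcd]; do fdisk -l "$d"; done > partition-tables.txt
    cp /proc/mdstat mdstat-before.txt                   # current state of all md arrays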
Re: (Score:2)
Also, if you buy a second-hand RAID controller with a battery, immediately replace the battery because it's already dead.
Re: (Score:2)
If it matters to you, buy a couple of spares. If you're buying hardware off ebay the controllers won't be too expensive.
There are cases where it doesn't matter: I have a hardware RAID controller for a test lab where I run a hypervisor and a bunch of virtual machines for testing/experimentation. The hardware RAID controller makes a big difference to performance because of the write caching, and if the array fails it's of no consequence as I can just build it again next time I need to test something. It's also
My understanding is (Score:3)
From what I've read in the past, software RAID is "preferred" in the event something fails - either one of the HDDs, or other parts of the hardware itself (especially the RAID hardware).
The reasoning I read was that with software RAID, you could use any hardware/machine you wanted, and could re-build your setup without having to worry too much about compatibility issues, since software would run on pretty much anything (within reason), whereas a hardware RAID setup might require the exact same replacement parts (especially of the RAID cards) for you to be able to get back up and running, should any of those fail (vs. the actual drives failing).
I'm sure there's valid counterarguments however.
Re: (Score:3)
I ran nothing but hardware RAID for a long time. I liked the hot-swappable for obvious reasons.
However, a couple or three times when a drive failed, I had to pony up extra because the original drive specs were no longer available and I had to buy drives that could hold way more than I could use.
I know nothing about software RAID, but I'm gonna learn right here.
Thanks for the question.
Re: (Score:2)
Hardware RAID controllers are often backwards compatible so the next generation controller can read arrays created by the previous generation controller.
I like Software RAID (Score:2)
I set up an NFS server running OpenSolaris with a RAID-1 setup (two identical 300G drives -- about ten years ago), and it worked beautifully. It worked so well that it wasn't until about six months after one of the drives died that I happened to check the system's health, and discovered that the files I'd casually been copying over were now saved on just a single drive. I just preferred the software solution because it was simpler to set up. I imagine a hardware solution might require a software cost,
Re: (Score:2)
Working well would include notifications of failed drives.
Hardware RAID can be "locked" to controller? (Score:2)
ZFS for General Purpose (Score:4, Informative)
If you need basic NAS, use ZFS to protect your data. RAIDz2 is good enough for large home storage.
The OpenZFS 2.x Debian packages are excellent nowadays but FreeNAS exists if you need an appliance.
If you have skillz, running a basic Linux server is easier in the long run than trying to map your needs to an appliance.
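For reference, a minimal sketch of such a pool with OpenZFS (pool, dataset and device names are placeholders; by-id device paths are usually preferable in practice):
    zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
    zfs create -o compression=lz4 tank/media     # a dataset for the NAS share
    zpool status tank                            # check pool health
    zpool scrub tank                             # periodic scrub to catch silent corruption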
Re: (Score:2)
What am I missing?
Re: (Score:2)
I use software RAID on all my machines (well, not laptops) because disks are so cheap. And what it protects me from is the "minus the 1 day of data you'll be out" if a drive dies.
Re: (Score:2)
ZFS also does snapshots, providing some level of protection against ransomware, accidental deletion, or corruption. Yes, we should be doing backups, but the honest truth is that most people don't. RAID with snapshots is probably the best you can get if you're not reliably backing up.
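A rough sketch of that snapshot workflow (dataset and snapshot names are placeholders):
    zfs snapshot tank/media@2021-05-01       # cheap, read-only point-in-time copy
    zfs list -t snapshot                     # see what you can roll back to
    zfs rollback tank/media@2021-05-01       # undo an accidental deletion or corruption
    # or recover individual files without rolling back:
    ls /tank/media/.zfs/snapshot/2021-05-01/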
Re: (Score:2)
I went with NAS4Free, the fork of FreeNAS when they changed the license (now renamed XigmaNAS). Then it choked on my network card when I upgraded at some point, so I ended up installing Linux and running the NAS in a VM, so Linux handles the hardware (and I wanted to run some other VMs anyway on the same system). I'm very happy with how it sits as an appliance and I never think about it.
I used the budget we had for a new entertainment center when I realized that what we were going to be paying for was sto
Re: (Score:2)
I have been running ZFS on Linux since Ubuntu 16.04 LTS. It was my first experience with ZFS.
So far no problems, and I have worked around two failed drives. If you are encountering bugs it could be that your use case is too complex.
Re: (Score:2)
Please be more specific. What bugs are these? My experience with ZFS on Linux has been rather good. Particularly on Ubuntu. Even using ZFS as root.
Re:ZFS for General Purpose (Score:5, Informative)
Please do not use ZFS on Linux. ZFS is absolutely beautiful, but not on Linux. It has a lot of bugs.
"ZFS on Linux" doesn't exist as a separate thing any more. Not only has ZFS on Linux been stable for years now, it is now effectively mainlined as part of the OpenZFS project, which means you are running the same code base on Linux and BSD. Officially there has been no distinction between running ZFS on Linux vs anywhere else as of 6 months ago.
Re: (Score:2)
> ZFS is absolutely beautiful, but not on Linux. It has a lot of bugs.
I've used ZoL on many machines for about a decade and never had data loss from it. It's used everywhere - from hobbyists to national labs to corps that manage petabytes of data.
By contrast, my experience with BSD was a disaster. GELI will barf randomly destroying a vdev and sometimes a dataset. This cost me multiple weeks of work over a year.
People report this and the BSD devs say their ECC isn't working. Same hardware works great
Re: ZFS for General Purpose (Score:3)
Please do not use ZFS on Linux. ZFS is absolutely beautiful, but not on Linux. It has a lot of bugs. Been on zfs for 4 years. Including a system migration. My experience suggests that you're an idiot.
Hobby Linux user: doesn't matter, Enterprise: does (Score:2)
Re: (Score:3)
Tell me how your software RAID stands up to hundreds of concurrent users connecting to an IMAP service like cyrus with Horde webmail.
I doubt the OP is going to have that on his personal NAS... :-)
As to the subject of your post, I would offer that for a hobby user it does matter and software RAID would be the better long-term option, with a well-known dedicated hardware RAID card as the alternative. Using HW RAID ties your configuration to that hardware. An enterprise user will have access to identical, or vendor-confirmed compatible, replacement hardware as part of their maintenance/service agreement, the hobby user won't. This will m
Re: (Score:2)
If we are going back a few years, I had a system with a real hardware RAID card and enterprise drives, yet performance was horrible. The RAID card vendor agreed that the configuration should work well, but ultimately blamed the performance issues on the type of workload.
So, in my experience, dedicated RAID disk controllers do not guarantee good performance.
Software raid advantages (Score:2)
No specialized controllers required
No constraints on matching disk drives required by some hardware controllers
No worrying about sustained software support/drivers for hardware controllers
Quick Note On Drives (Score:5, Informative)
If you’re planning on setting up a dedicated server for home use (apart from noting that after a few days you’ll wonder how you ever managed without it), I would strongly recommend that you give particular attention to your choice of drives.
All of the well-known manufacturers offer drives particularly designed for NAS use. You will quickly find that they are more expensive (often quite a bit more expensive) than ‘desktop’ drives of similar capacity. Do not be tempted to forego proper NAS drives, even if it means that you scale down your capacity requirements to start with. The reason I make this recommendation is simply that, if your NAS delivers on its promise, it will soon fade to invisibility on your home network and you’ll forget it is there. Right up to the point where you experience your first drive loss... at which point you’re going to wish you’d bought the best drives you could find. I can’t speak to any make other than Western Digital (which I’ve found to be excellent) but their range of “Red” NAS drives comes in a regular variant [5,400rpm] and a Pro variant [7,200rpm]. I’m running RAID6 and can comfortably stream 4K content from that setup, but you might want to get a bit more advice or read up on performance if this is important to you.
Don’t buy all your drives in a single order or from the same supplier. Not because you should expect defects in modern drives [these are thankfully extremely rare] but because every now and then you’ll get an idiot packing shipments and receive a consignment of drives in a loose box without packing. Who knows what sort of treatment they have had to survive on their journey to you.
To get a decent level of redundancy you should possibly set your sights on a RAID-5 configuration with 4 drives as a starting point [which will give you capacity equal to 3 of the drives], but if you can stretch a bit further, RAID 6 will give you enough resiliency to allow for the simultaneous loss of two drives.
When you come to cable up your setup, do take the time to carefully read the technical specification of your RAID controller. I’m honestly not sure how this will work between ‘hardware’ and ‘software’ RAID, but at least some of these options, coupled with the right hardware [if you are in luck] will allow you to have a ‘hot swap’ capability.
Initially setting up a RAID [or recovering from a volume loss] can take a fair old bit of time [it will depend on your IO rates and the drive performance], which means that my last suggestion might well be unpalatable to you: the reason you are investing in a RAID setup is because you have data that you don’t want to lose in the event of a head failure. But for that preservation to happen, you’re going to need to know how to recover your RAID in the event of an error. So think about simulating it. Easy to do if you have a hot-swap capability... but either way find out about drive swapping and, before you put any data on your RAID, try a drive swap. Make notes / take screen shots.
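With Linux md RAID, that rehearsal can be simulated entirely in software before any real data is at risk; a sketch, with array and device names as placeholders:
    mdadm /dev/md0 --fail /dev/sdc1      # mark one member as failed
    mdadm /dev/md0 --remove /dev/sdc1    # pull it out of the array
    cat /proc/mdstat                     # the array should now show as degraded
    mdadm /dev/md0 --add /dev/sdc1       # add the "replacement" back in
    watch cat /proc/mdstat               # watch the rebuild and note how long it takes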
And it’s hopefully obvious, but when you do buy your drives, buy enough for *at least* one full round of drive swaps. In other words, if you’re going RAID5, buy one drive more than you need. If you’re going RAID6, buy two spares. Mark them up and keep them safe.
Lastly, it’s kinda redundant... but they haven’t yet invented a single-unit RAID that you can build at home that will survive a house fire. So keep going with your off-premises backups, no matter how good your RAID setup.
Re: (Score:2)
Re: (Score:2)
Chiming in to second the choice of WD Red. That's what I've got in my Synology NAS. Also made sure they were CMR, not SMR drives.
Do NOT get SMR drives for RAID.
Re: (Score:3)
Chiming in to second the choice of WD Red. That's what I've got in my Synology NAS. Also made sure they were CMR, not SMR drives.
Do NOT get SMR drives for RAID.
^^^^ this ^^^^
Trouble is, it can be difficult to know if a drive is SMR, and some drive makers were hiding the fact that the drives were SMR.
Re: Quick Note On Drives (Score:4, Interesting)
Jabuzz, before posting this reply I went back and read a few of your comments and I find most of them to be very well informed, constructive and to add real value to the thread in which you make them.
I hope you don’t mind me chipping in here, but I think in this case you have something of real value to add but you have not done so because, in your own words, you “can’t be bothered”. But the OP to this article was someone who specifically came to ‘Ask Slashdot’ for advice on setting up a RAID on a home server: someone who implicitly doesn’t understand storage.
I understand that you might not want to sit and spend the time it would take to write out a lengthy explanation of your own, but I’m also pretty sure that you would be able to point someone to a web page somewhere that did a decent enough job to convey the point you would like the OP to understand.
I don’t mean to be rude or condescending, but clearly you have relevant, topical experience to share. Just a thought.
Re: (Score:2)
Yes, buy drives in separate orders, but for a reason not stated: Drive failures are not independent. If you have a bunch of drives from the same batch, and run them with the same load, they'll tend to fail at the same time. The math behind RAID assumes independent failures. The nature of RAID results in nearly identical write patterns to all the drives. Also, reconstructing a lost drive puts significant stress on the remaining drives. All told, this means that if all your drives are from the same batch
I can give you more questions :) (Score:3)
You will end up with data loss.
That said, there are of course endless kinds of ways to do software RAID.
If you are low on resources, you can do that in the BIOS these days.
If you want, you can do several kinds of software RAID using a Linux system.
You can use a NAS system with network file systems, trading off reasonably cheap vs reliable
(which is usually a kind of Linux anyway, but with the tools and signalling embedded and working).
Hardware RAID is a different game: how do you size the data storage need? Do you require hot spares? What speed? What application? How many IOPS? How fast do you want a replacement disk (brand)?
Then there is the question of reliability: do you require a certain amount of data retention? Backups?
There is no "sw raid is better or worse than hw raid" question to answer really.
There is though, the determination that you may need to do manual tasks if you are going to use a kind of software RAID.
In hardware RAID the system is usually already equipped with signalling and self-repair, that is what you pay for.
Hardware for OS, Software for Data (Score:3)
Put the system boot media on a dedicated RAID controller, NVMe or SAS HBA with built-in RAID1, or hardware-mirrored device (RAID1) to provide a reliable boot volume.
For the "data disks" of a NAS, however, it is best to use a software RAID solution such as ZFS in order to allow the self-healing capabilities of the filesystem; in addition, the speed/performance will be better. The purpose of the hardware RAID solution for boot disks is to manage the failure of a disk and ensure the system still boots successfully if a reboot needs to occur while one of the boot disks has failed -- hardware fault management during the boot process is difficult and calls for a hardware solution.
Re: Hardware for OS, Software for Data (Score:2)
For a home NAS, put /boot on a USB drive and the rest on the software RAID array. Have a cron script rsync /boot onto the software RAID array daily. The number of writes to the USB drive over the lifetime of the hardware is well below the expected lifetime of the USB drive, and should anything go wrong, boot from recovery media, copy /boot onto a new boot USB drive and reinstall your boot loader. Makes setting up software RAID a whole bunch easier and you can have your OS install on a RAID6 if you want.
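A hedged sketch of that cron job (paths are placeholders):
    #!/bin/sh
    # /etc/cron.daily/backup-boot -- copy the USB-resident /boot onto the RAID array once a day
    rsync -a --delete /boot/ /srv/raid/boot-backup/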
Software RAID for the V (Score:2)
Re: (Score:2)
> With software raid you can grab the drives from one machine
--
I do that w/HW raid -- just move the raid card to the other sys (or buy a 2nd one). I think the real turning point might be how many cores your RAID card has. If SW raid, then RAID 0, 1, 10 are fine. If you want 1 disk for parity like in RAID5, have 1 processor on the controller - and if you want RAID6,
having 2 CPUs on the RAID card can allow both parity disks to be run at the same time. If you have almost any RAID, you need to be sur
Re: Software RAID for the V (Score:2)
There is no issue with RAID6 in terms of data loss, unless you are spinning thousands of drives. With RAID5 you have to be super careful with what you are doing. There are some sweet spots with 3-drive RAID5 setups with acceptable MTTDL; however, unless you really know what you are doing don't go there. Note if they upped the BER by a factor of 10 to 1e16 then RAID5 would be completely acceptable in many many more configurations. Interestingly, SSDs have much better BERs than spinning disks. So a RAID5 comp
Software RAID is great for non-parity RAID (Score:2)
tradeoffs (Score:2)
As with anything else where there are multiple popular choices, there are trade-offs. There are reasons both options exist. For a home user, I would recommend software RAID, though.
1. Hardware RAID makes it easier and safer to RAID your boot device.
2. As others have said, hardware RAID ties you to a particular hardware implementation. That's fine if you have a data center with a lot of duplicate hardware and sparing, but not so good if you're a home user.
3. Hardware RAID limits you to the RAID levels that the controller supports.
Hardware RAID as long as you use RAID 6 (Score:2)
It will be faster, even if the Hardware RAID is in the end done by the BIOS/UEFI/Driver SW.
Also, how do you dualboot AND get visibility of all your partitions using SW RAID?
And, for those people that prefer SW RAID because, they say, in case of HW failure they can take the drives and recover the data on another system, I reply: I can do that too, using the backups I diligently make every night. Because, you see, RAID is about availability, not about backup or disaster recovery.
Just remember to use RAI
Are Trucks better than Cars? (Score:3)
It's a dumb question. Depending on hardware, OS, software stacks, and WHY you are RAIDing, the answer can be yes or no. There are so many permutations to this question; it's so broad it can't even be answered with a series of ifs and qualifiers.
Probably Intel RSTe... (Score:2)
If you use Linux, in fact, then the software raid is used when Intel RAID is set up.
The only benefit is that your firmware would understand how to boot off of the raid volume more easily, but it's pretty trivial to have a RAID 1 /boot in an otherwise RAID5/RAID6 setup in more purely software RAID. It matters a bit more in Windows, where Microsoft withholds implementation of RAID5/RAID6 for more expensive editions of Windows and the driver based RAID gives access to that capability with cheaper Windows.
If yo
What about backup (Score:2)
For what it sounds like he is doing, a software raid will be easier for him to maintain and repair if something breaks. Cheap hardware raid is evil.
However, he says "My hardware pool is very shallow."
This makes me suspect he is thinking of depending upon the RAID to protect his data instead of backup.
It is far more important to have and use a well thought out backup solution. Backup protects you from many things that RAID cannot.
It depends on what you want (Score:2)
Most hardware-RAID controllers are very limited in what they can do, and monitoring a hardware RAID can be anywhere from a pain to next to impossible. On the other hand, they are simple to use. Software RAID gives you more flexibility and better monitoring, but you need to do and understand more.
Oh, and do not even think about ElCheapo mainboard/BIOS RAID. That stuff is just crap.
Is your "hardware RAID" really hardware? (Score:4, Interesting)
Many motherboards advertise "Hardware RAID", but are in fact what we call "fake raid". They have some hardware acceleration features on the motherboard, but the heavy lifting of the RAID is done in the driver. Some of these are Windows only as a result, while some are supported by various Open Source drivers now. See ABMX Servers [abmx.com] for an article on the differences.
If you use RAID, hardware, software, or fake, you also need to consider the sort of drives. Drive firmware is different for RAID arrays than for single drive applications. The most important difference is that TLER/ERC [abmx.com] drives will retry a bad read/write MUCH fewer times before erroring out. If you use firmware configured for many retries in a single drive application it can absolutely destroy the performance of your RAID. Rather than the RAID being able to move on to the other drive(s) and/or remap the sector, that failing drive just hangs the whole thing with retries. For a while this was a user-configurable parameter in many drives with SMART, but most manufacturers have now limited it to RAID-capable drives.
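Where a drive still exposes the setting, the ERC timers can be read and set with smartmontools; a sketch (the device name is a placeholder, values are in tenths of a second, and on many drives the setting does not survive a power cycle, so it has to be reapplied at boot):
    smartctl -l scterc /dev/sda          # show current read/write error-recovery timeouts
    smartctl -l scterc,70,70 /dev/sda    # set both to 7.0 seconds, a typical value for RAID members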
Why are "RAID" drives more money? Well, there are several reasons, but one of the big ones is vibration. When you have multiple drives in a chassis doing synchronized tasks they can end up vibrating each other into poor performance. A rather famous video [youtube.com] of a guy screaming into a RAID array proves the point. Non-RAID drives often omit some of the vibration dampening features and it leads to worse performance and premature failure, particular when using 5+ drives in a single chassis. Obviously this does not apply with modern SSD's, another case where SSD's are superior.
Generally my advice would be to use a pair of disks in a RAID1 mirror with a software driver for an end user machine, like a desktop used to play games. In a server application where multiple drives are required for capacity, I'd recommend a dedicated hardware RAID card from a quality vendor driving RAID-spec hard drives. YMMV, plenty of folks get away with other configurations.
Re: (Score:2)
It's pretty well known (among guitarists) that the coils in guitar pickups can be microphonic, picking up vibrations from the air and often causing acoustic feedback. The solution is usually to soak such a pickup in hot wax for a while, to stabilize the wiring so it can't move.
The voice coils that drive head positioning are not tremendously different from guitar pickups. Perhaps they are also capable of mistaking vibration for movement of the head, or the circuitry just can't deal with transients that occur
depends on the hardware (Score:2)
Re: (Score:2)
Note the key is *enough* money. I once worked at a little place that wouldn't pay for a supported card but bought a random second hand, end-of-life hardware raid controller. They had been bitten by a NAS appliance that used software RAID on an IDE system where they got hit by a failure that took out two disks. I informed them that if they want to continue to be cheap, they could do a hardware design with dedicated channels per disk to avoid that risk, but the president of the company declared that hardware
Maybe I'm just an old fogey but... (Score:2)
...what is the usage case for RAID these days?
I can cook up some obscure edge cases where it might be helpful, but blistering performance is achieved via SSD and storage is so cheap that one can just hook up a 2TB+ USB drive to your system and run some sort of sync on it periodically, likely at a fraction of the cost of getting a RAID card.
a quick google search seems to confirm my suspicion that RAID has indeed fallen out of favor... prove me wrong?
Re: (Score:2)
Making it a NAS vs a USB is a significant improvement. That adds about $500 to the cost, at least with Synology or QNAP, but is well worth it. Having two slots so you can at least snapshot between devices is great.
I still use a spinning rust NAS for backups though— I need about 10TB at home for that.
RAID 101 (Score:3)
RAID does not protect your data. It protects your uptime.
Hardware RAID with a battery can protect against certain types of data loss/corruption during power failures. You can mitigate most of the same risks with a good UPS and well-tested UPS scripts. You can take it even further with DRBD (in modes that wait for remote confirmation).
Software RAID is cheaper, easier, more flexible, more portable and gives better performance in most circumstances.
RAID-6 is pretty much only for situations when the drives are not externally hot-swappable. If you need to shut the server down to replace a failed drive, you may want to let it ride the 2nd redundancy for a few days to schedule the downtime. If you can replace the drive without shutting down, you have no excuses for not getting the drive replaced within about 12 hours. With big drives, rebuilds can take several days. You may want extra redundancy during that time too.
If you ever get software RAID into an un-runnable state, remove your hands from the keyboard. Take a few deep breaths, and write "I will type NO unplanned commands" on a sign by your keyboard. You should have backups for this situation, but you probably don't. Despite that, your situation is almost certainly recoverable, and most or all of your data can be saved. Your #1 job at that point is to avoid making the situation worse.
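In that situation a reasonable first step is to gather facts with read-only commands only, for example (device names are placeholders):
    cat /proc/mdstat                 # what the kernel currently thinks of each array
    mdadm --detail /dev/md0          # array state, failed or missing members
    mdadm --examine /dev/sd[abcd]1   # per-disk superblocks: event counts, device roles
    dmesg | tail -n 100              # recent kernel errors from the disks or controller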
real hardware raid is best (Score:2)
Like everything else, "it depends" (Score:5, Informative)
A number of these things have already been echoed in the thread, but here are my thoughts...
1.) If you're doing this for home use, spend a few bucks and get a Synology to put your drives in. They're simple to use, they've got good support, they have a bunch of apps that give lots of functionality with zero subscriptions or anything. If you're looking for something you can set-and-forget, get a Synology and thank me later.
1b.) Some may say "what about QNAP!". I used to like QNAP, but they have [expletive]'d me over, twice. Once, we had a 12-bay, rack-mount unit (i.e. clearly a business-grade unit) with a bad bay, which they confirmed. They wouldn't do an advance replacement, even though it was in warranty, and even though we were willing to put up 100% of the purchase price as collateral, and even though they told us that the lead time for the repair was over a month. In another case, we needed to do an SSD cache... but they don't support anything but the first-party cards, and zero of the compatible cards are available for purchase, even though these units are less than three years old. So, QNAP is on my s!!t list.
Other vendors exist - Buffalo (I dislike their WebUI and they seem very limited if you're looking for anything more than SMB or FTP), Asustor ('the best of the rest', but they are in the uncanny valley in my experience for some reason), Drobo (some swear by them, but the one I used once could barely do more than 10MBytes/sec on a gigabit connection and WD Black drives; no onboard apps, either), and a few other minor ones, but my experience has led me to recommend Syn pretty much exclusively.
2.) If you're looking to do a DIY job, you're not getting a set-and-forget situation. That's not a bad thing, and I've had some closer than others (ran a FreeNAS for my mom for nearly a decade), but it will be hands-on, no matter what you do. Just assume that.
3.) Now, to directly answer your question... don't use motherboard RAID. It's the worst of both worlds. As others have said, it's not 'real' hardware RAID, so you don't get the performance you think you do. I've got one in production because I inherited it... and I'm telling you, it's slower than if I had the drives by themselves and simply made a spanned partition. If you're looking to do hardware RAID, you'll need an actual RAID card. A real one. With onboard RAM and a battery backup. Good news is that used ones are cheap on eBay; you won't win any benchmark prizes with a Dell H710, but they're going for $40 or less, easy.
4.) If you're going software, your big question is whether you're looking to dedicate the machine to being a NAS. It sounds like you are, but if you're planning to run a desktop OS in addition to it being a NAS, that's going to matter. If you are, both Windows and Linux have ways to do that, but you'll want to present the drives to the OS directly, rather than through the motherboard's RAID functions. Either way, there are a bunch of tutorials on doing software RAID on both OSes; Google and Youtube have you covered.
5.) ...If you are going with a DIY software build, here's my personal rundown...
--TrueNAS (formerly known as FreeNAS) - What I use. It's come a long way since I started in terms of being user friendly; most of my CLI usage is vestigial now. However, it is fundamentally a frontend for the ZFS file system running on FreeBSD. It's widely regarded as being one of the most stable available, but if you're used to little more than "partitions" at this point, "pools", "volumes", and "datasets" are going to be confusing at first. Not insurmountable of course, but just know what you're signing up for.
--XigmaNAS - A fork of the FreeNAS 8 code (though running on current releases of FreeBSD as well), it too is built on ZFS. It has fewer features than TrueNAS does, but it is still solid, still based on ZFS, still does all the core stuff well, and is well maintained. A solid runner-up.
--UnRAID - A paid product, this one has a loyal following for good reason - it has the most s
Re: (Score:2)
[used RAID cards are] going for $40 or less
That's because the batteries are dead. That can cost way more than $40 to replace.
Remember RAID is not backup. (Score:2)
Remember and follow the Rule of 3s for important data.
Those (often small business users!) who substitute RAID for backup are famous for learning those lessons the hard way.
depends (Score:2)
Depends on your goals and your budget. I like hardware but you'll want more than one set of the same hardware.
Software RAID is better. (Score:2)
In the old days, it made a lot of sense to have hardware raid so that you could offload the work to a dedicated processor. It was still software raid, just on another piece of hardware.
Nowadays, software raid has almost no impact on the load of the system. It's just WAY simpler to use software raid, and most Linux distros will recognize a software raid volume from another system.
I manage dozens of machines with RAID 1 pairs. They read at the maximum speed that the hardware interface will allow on very busy
No RAID (Score:2)
For most lightweight use, the spare disks of a RAID are better assigned to backup resources, backups protected from "rm -rf /" but with snapshots exposed for file recovery. There are many sophisticated technologies for this; a simple "rsync" based copy on another host is often invaluable when content is accidentally deleted. RAID provides no protection against this, and for most environments it's a far higher risk.
Re: (Score:2)
One thing I would add, in case someone doesn't realize, is that a backup system should be rsyncing *from* your primary storage rather than your primary storage rsyncing *to* the backup. This way you can set things up so that the primary system does not have write access to your backup, and thus a scheme to do rsync-based snapshots is protected from attacker/malware potentially reaching into the backup system and doing more damage (a sketch follows below).
I'll say that I do like RAID because a lost disk is easier to deal with than restoring from backup.
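A minimal sketch of that pull arrangement, run from the backup host's crontab (hostnames and paths are placeholders):
    # Pull from the NAS nightly; the NAS holds no credentials for the backup host,
    # so malware on the NAS cannot reach into the backups.
    0 3 * * * rsync -a --delete nas:/srv/data/ /backups/nas/current/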
Re: (Score:2)
> This way you can set things up so that the primary system does not have write access to your backup
For rsync or similar direct mirroring technologies, please permit me to agree wholeheartedly. It's very useful to have the backups _accessible_ from the primary server for file or configuration recovery, but potentially deadly if a poorly formed "rsync --delete" expunges all backups.
Re: (Score:2)
On the flip side, if someone compromises your backup server then they can potentially use it to reach into all your servers. You should ensure that its access is read-only so they cannot disrupt operation, but your data is still compromised at that point. If you're using windows boxes, the attacker will usually be able to pass the hash from the backups to get onto the live systems too.
Companies leaving backups unsecured (often on open file shares) is a big risk.
Backups are better than RAID but I like UNRAD also (Score:3)
Option 3? None? (Score:2)
Honestly, in the... 20 years or so I've been using RAID systems, in 2021, I'd pick none. Put the data in cloud storage, and do whatever you need to do... in the cloud. It's probably not that much more expensive at the end of the day.
Every single RAID device I've ever owned (and I've owned quite a few), has failed catastrophically and suffered data loss. Corrupted drives, failed devices, even LVM raid that went south. No matter what solution you pick, make sure you have a great backup solution, and if you're
Re: (Score:2)
Aside from Buffalo units, I have never had a RAID system of any flavor crap out in the last 15 years. A Synology box with a few TB of storage will be orders of magnitude cheaper than any cloud storage, especially if the data does need to go back and forth.
Thought on RAID... (Score:2)
So... (begin appeal to personal authority) Storage is actually one of the few things I get to claim some professional expertise in. At multiple companies, several of which are mentioned in other posts here. (end appeal to personal authority)...
1. RAID is not a back up. As others have stated it protects your uptime, not your data. If you have to pick one or the other, pick a good solid backup. HINT: DO NOT PROCEED PAST HERE IF YOU CAN'T SATISFY THIS STEP.
2. RAID 5 is dead. DEAD! If you're using drives la
Cheap RAID is not good, hardware OR software (Score:2)
If you are running any kind of system a regular person can afford, the RAID code (whether software or hardware) is not going to be high quality. It's going to be just good enough to call it "RAID." Can you really hot-swap a failed drive? On desktop systems, you're supposed to shut down the entire system to swap out a drive.
Real RAID costs a LOT of money. It's best suited for commercial purposes that need to be able to keep running even when a drive fails.
Even for true high availability systems, I'd prefer r
SW is always better (Score:3, Informative)
Re:How is this on Slashdot? (Score:5, Insightful)
If you don't know, why post?
I'm interested as well, and I want to read suggestions from people who have been there.
Re:How is this on Slashdot? (Score:5, Informative)
I had a hardware RAID controller die and take its array with it in the process, thanks to proprietary crap not being interchangeable. That pain in the arse alone made me say never again to hardware RAID.
I switched to software RAID using mdadm and then eventually to ZFS which combines a file system and disk volume management into one.
Re: (Score:2, Funny)
So let me get this straight, you're opposed to conversations on a message forum?
Used to be "it depends". Now software is better (Score:5, Insightful)
I used to own a backup company and we did a lot of testing, testing both software and the major hardware vendors. Including some $2,000+ raid cards.
Once upon a time there were plusses and minuses.
Software raid is better. Specifically the Linux MD raid, with LVM on top. It's much more flexible / featurefull and doesn't make your data dependent on a specific controller.
Back in the day, hardware raid had the advantage of not using CPU cycles. At full write bandwidth, raid could use up 10% or even 20% (during a rebuild) of your 500 MHz CPU. That was a consideration.
10% of a 500 MHz CPU is, of course, 50 MHz. Less than 1% of any modern CPU, in the worst case. That's 50 MHz on one core, so if you have four cores @ 2.5 GHz the raid will max out at 0.5% of your total CPU.
Re:Used to be "it depends". Now software is better (Score:5, Informative)
Excellent arguments, and for the most part I agree with you.
However, in those rare situations when we're still building actual physical servers instead of deploying cloud-based infrastructure it sometimes does make sense to use hardware RAID, if only because we don't have the chance to use a software-based solution.
In my case this happens when deploying hypervisors. If I'm installing a VMware vSphere cluster on bare metal, the software won't give me any way to set up a software RAID and we have to rely on the hardware RAID controller in the machine.
This is just an example, and I'm sure there are several others (people using different operating systems, or reusing older hardware for home-based file/media servers, etc).
Re: (Score:2)
Thanks for pointing that out. I had actually intended to include a footnote mentioning that sometimes decent software raid isn't available, such as with some type 1 hypervisors.
There are also bound to be a few weird situations where hardware is better in some way for some very specific application.
The bigger footnote would probably be that I didn't test Windows software raid. Our solution was based on the Linux stack that I mentioned. I would assume the Windows implementation isn't 100X worse than the Lin
Re:Used to be "it depends". Now software is better (Score:4, Insightful)
Remember, VMware and EMC are the same company. They like selling storage hardware.
Re:Used to be "it depends". Now software is better (Score:5, Insightful)
Software raid is better.
I've seen enough bespoke systems with (admittedly excellent, until they weren't) ancient controllers break after 10 years and someone having to scrape Ebay for a replacement to say yes, this is the correct answer in most all current use cases.
Re: (Score:3)
RAID is not backup. Why should you have to go and buy an ancient controller if the card fails? Start fresh and restore from backup.
Re:How is this on Slashdot? (Score:5, Informative)
In this case you have to define "better".
Yes a hardware RAID controller will (or should) be faster than a software solution but, unlike your other examples, a HW RAID configuration is tied to the hardware and if the HW dies you can't access your data w/o identical, or confirmed compatible, replacement hardware. In this case, software RAID is "better" as you can simply move your configuration to another system as needed.
Your other examples are not really good comparisons for values of "better" other than "faster".
Re: How is this on Slashdot? (Score:5, Informative)
My preference is software raid because you typically get added features like snapshots, dedup, etc that would cost a ton if done in hardware, and software raid still leaves plenty of compute resources left over for applications like plex. Oh and ZFS is pretty much the gold standard for home nas.
Re: How is this on Slashdot? (Score:5, Interesting)
Re: (Score:2)
Like many people out there, you have a fundamental misconception about RAID's raison d'etre.
RAID is NOT a backup solution.
RAID is an uptime solution. If one of your hard drives fails on Monday afternoon, you can continue using your system (or NAS) until the weekend when you get time to put in a new hard drive. But at no time should you really worry about things like moving the hardware controller to a different system and recovering data - because you would just pull that from backups.
I continually encounte
Re:How is this on Slashdot? (Score:5, Informative)
Like many people out there, you have a fundamental misconception about RAID's raison d'etre.
No, not really.
RAID is NOT a backup solution.
Ya, I know.
I'm not talking about a failed drive, but a failed HW RAID controller. The fundamental point is that HW RAID ties your configuration to the controller whereas SW RAID does not. If that controller is on the motherboard, the problem is even worse. For whatever reason, bringing back up a failed system or simply moving your configuration to another system, becomes more problematic with HW RAID w/o resorting to a restore from backup situation -- which is never 100% complete, unless your system fails immediately after a backup completes. To minimize the need for a restore in a failure situation, minimizing dependence on specific hardware helps and SW RAID can help with that.
It's easy to find a new drive, less easy to find a replacement discrete RAID controller and even harder to find a compatible replacement motherboard (if the RAID controller is on-board). That said, I've used and relied on both SW and HW RAID. The former usually in home systems and the latter in enterprise / production situations where the maintenance/service contract meant I could reliably get replacement parts from the vendor in a timely fashion (at one company it was usually within 4 hours 24/7/365).
Re: (Score:2)
That said, I've used and relied on both SW and HW RAID. The former usually in home systems and the latter in enterprise / production situations where the maintenance/service contract meant I could reliably get replacement parts from the vendor in a timely fashion (at one company it was usually within 4 hours 24/7/365).
Noting that the larger systems had redundant SCSI and RAID controllers (and redundant NICs, etc...) with automatic fail over ... They were not inexpensive.
Re: (Score:2)
Considering software RAID worked fine back in the Pentium 200 days I wonder how much performance you would really lose over hardware RAID these days. Sure if you're raiding SSDs you may need a hardware RAID controller but for normal spinning disks I can't imagine there'd be a performance difference these days. Heck these days it's even recommended to enable disk compression for performance reasons as the cost of processing power is less than the cost of reading the extra bits from the platter.
Re: (Score:3)
Hardware raid controllers have battery/flash backed write caches. This makes a HUGE difference for things which perform writes and wait for them to sync (eg databases).
Without a write cache, your database has to wait for the drive to commit the data to its platters. When you're doing lots of small writes this soon adds up.
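One way to see the effect is a small synchronous-write test with fio, run once against the cached controller and once against a plain disk; a rough sketch (the file path and sizes are placeholders):
    fio --name=synctest --filename=/data/fio-test \
        --rw=randwrite --bs=4k --size=256m --fsync=1
    # --fsync=1 flushes after every write, the pattern databases rely on; a battery- or
    # flash-backed write cache can acknowledge those flushes almost immediately.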
Re: (Score:2)
Re:How is this on Slashdot? (Score:5, Insightful)
Let's look at those:
GPU vs CPU miner: Yes the GPU miner is far more performant in practically every blockchain currently out there. No question.
H264 hardware decoder vs software: Define your requirements. Is the goal fastest speed? Then the H264 decoder is better. Is the goal best quality? Then software is better. The decoders (and encoders) have been tweaked regularly over time and hardware CODECs are on a far slower update cadence than software.
AES-NI vs CPU: Yes it's better than the CPU, if your goal is only to use AES. If your goal is security or encryption in general then the CPU is likely a better option, for the same reason as the H264 codec above.
Which brings us to RAID: The ability to do something in software is far more flexible and less proprietary than hardware RAID. Hardware RAID is slightly more performant but I suspect this is only relevant when you're RAIDing SSDs. Anyone who has ever had a hardware RAID controller die on them will vow never again to play around with proprietary shit for the miniscule gain in speed, so the answer here is almost universally software.
Re: (Score:2)
And NVMe means software RAID, since an intermediate controller defeats the direct access to the cpu.
Re: (Score:2)
If you're running a database on top of RAID, you've already lost. That's what was done in the 90's.