
Ask Slashdot: Is a Software RAID Better Than a Hardware RAID? (wikipedia.org) 359

RockDoctor (Slashdot reader #15,477) wants to build a personal network-attached storage solution, maybe using a multiple-disk array (e.g., a RAID). But unfortunately, "My hardware pool is very shallow." I eBay'd a desktop chassis, whose motherboard claims (I discovered, on arrival) RAID capabilities. There, I have a significant choice — to use the on-board RAID, or do it entirely in software (e.g. OpenMediaVault)?

I'm domestic — a handful of terabytes — but I expect the answer to change as one goes through the petabytes into the exabytes. What do the dotters of the slash think?

Share your own thoughts in the comments. Is a hardware RAID better than a software RAID?
  • Software? RAID (Score:5, Informative)

    by bernywork ( 57298 ) <bstapleton&gmail,com> on Sunday April 04, 2021 @06:56PM (#61236664) Journal

    Unless you've got a dedicated controller with a CPU to do the RAID calculations, the RAID that comes off your motherboard is still software RAID, it's just implemented in the driver.

    If you're going to go down this path, do it in the OS or in UnRAID or something, otherwise you might find you're tied to that driver for the RAID implementation. If you do it in software, you should be able to plug those drives (as a set) into any other hardware and have the array recognised.
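    As a minimal sketch of what that portability looks like with Linux md (assuming the usual mdadm tooling; the device names are examples and will differ per system):

      cat /proc/mdstat          # see whether the kernel already auto-assembled the set
      mdadm --examine --scan    # read the array metadata stored on the disks themselves
      mdadm --assemble --scan   # assemble any arrays described by that metadata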

    • Unless you've got a dedicated controller with a CPU to do the RAID calculations, the RAID that comes off your motherboard is still software RAID, it's just implemented in the driver.

      Let's not split hairs here. The thing on the motherboard is a whole different beast: all the downsides of hardware RAID versus actual software RAID, without any of the upsides. With motherboards you're still at the mercy of a proprietary solution that can take your RAID array with it on failure.

      IMO software RAID for its flexibility > hardware RAID. And motherboard RAID is an abomination that should die in a fire.

    • Re:Software? RAID (Score:5, Informative)

      by dpilot ( 134227 ) on Sunday April 04, 2021 @09:32PM (#61237100) Homepage Journal

      The other name for motherboard RAID used to be "FakeRAID." And yes, you should avoid it at all costs. It's really software, it's proprietary, and ties you to that motherboard.

      That said, I have a friend who used to do hardware RAID, and he got proper hardware RAID cardS - yes, plural. He would always have a back-up RAID card, so he could always recover his data. He did that through several generations of hardware RAID cards, and eventually gave up and just went with Linux kernel-level RAID. I've been running Linux RAID-1 for between one and two decades now, myself.

  • Software RAID (Score:5, Informative)

    by fahrbot-bot ( 874524 ) on Sunday April 04, 2021 @06:57PM (#61236668)

    There, I have a significant choice — to use the on-board RAID, or do it entirely in software (e.g. OpenMediaVault)?

    If you use hardware RAID -- especially one built into a motherboard -- and the hardware dies, you're screwed. Using software RAID, like the LVM RAID Linux provides, makes your RAID configuration hardware-independent, allowing you to move your disks to another system as you want/need. If you decide to use HW RAID, get a well-known dedicated RAID card that you can replace more easily than a motherboard with built-in RAID.
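    For reference, a minimal LVM RAID-1 sketch along those lines (disks, size, and names are example assumptions, not a prescription):

      pvcreate /dev/sdb /dev/sdc                          # label both disks for LVM
      vgcreate nas_vg /dev/sdb /dev/sdc                   # pool them into one volume group
      lvcreate --type raid1 -m 1 -L 500G -n data nas_vg   # a mirrored logical volume
      mkfs.ext4 /dev/nas_vg/data                          # filesystem on top of the mirror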

    • Re:Software RAID (Score:5, Interesting)

      by fahrbot-bot ( 874524 ) on Sunday April 04, 2021 @07:09PM (#61236698)

      If you decide to use HW RAID, get a well-known dedicated RAID card that you can replace more easily than the motherboard with built-in RAID.

      ... *and* a dedicated RAID card can be moved to another system if/when you need to upgrade, whereas your RAID configuration is tied to the specific motherboard when using the on-board RAID -- unless you can find a new MB that's backwards compatible.

      TL;DR: Software RAID is the way to go, with a dedicated well-known HW RAID controller as the alternative. I wouldn't even consider using the on-board MB RAID.

      • I prefer hardware, but I now keep a duplicate spare controller with the same firmware version (very important), and everything has been swap-tested.
    • by crow ( 16139 )

      THIS!

      Still, there is a lot to be said for some of the off-the-shelf home RAID boxes. I haven't researched them, but I know a friend who likes the Synology NAS boxes. Most of those types of solutions are really Linux systems, but whether it's really software or hardware RAID, I don't know, so I don't know if you could pull the drives and use Linux to recover your data. That should be a standard question to ask vendors before purchasing, but I doubt many people consider it.

      • I have a couple of Dell systems at home that have HW RAID on the motherboard and wondered if I should use them, and my research made me decide against it (for the reasons I mentioned earlier). The documentation seems to indicate that these on-board systems are kind of picky, requiring (or strongly recommending) you use the exact same disks, etc... and it's unclear if disks could be moved to a different model system. Even for a Windows system, using the SW mirroring Windows provides seems like the better choi

        • by bobby ( 109046 )

          I'm admin for some online servers running on older Dell server chassis, Dell "PERC" HW RAIDs. One MB fried (impressive smoke residue inside the lid) and I simply pulled the drives, popped them into an older chassis, and it booted up and ran (Windows). I forget if it was the same PERC version controller (PERC 3, 4, 5...) but it just plain worked.

          I've mixed and matched drive brands, sizes, etc. When you create the array (RAID 5 (or 6)) your array will be limited to the size of the smallest drive (times the

          • by Bert64 ( 520050 )

            HP controllers are also generally good: the metadata is stored on the disks themselves, so if you move them to another similar controller it will detect the array and boot it.

            Some servers actually have proper RAID controllers built in; what you want to avoid is "fakeraid", which is a proprietary version of software RAID that just happens to be implemented in the BIOS and has its own drivers.

            • Back in the day, there was no metadata written to the disks. If we had to pull the disks, we had to keep track of where they went or the array controller would treat the array as failed.

              I don't know what took so long to get metadata written to disks. In theory the entire array configuration need not exist in the controller's memory at all, and could just be read from the disks as part of the total array integrity check at startup.

              Despite this, even the "high end" PERC-type controllers are still somewhat stupid an

      • Re:Software RAID (Score:5, Informative)

        by Dutch Gun ( 899105 ) on Sunday April 04, 2021 @09:12PM (#61237034)

        My Synology box (which is indeed a Linux box) has been humming along for over a decade now. Synology uses software RAID. They've implemented their own solution which lets you mix and match different sized disks, and it will intelligently make use of all the extra space (assuming you're using three or more disks). This lets you increase your RAID disks over time, a feature I've taken advantage of over the years as disk sizes have increased, while prices have come down.

        I've had a couple of drives fail over that time, and I ordered new, larger drives for replacements. The replacement procedure was easy, as the drives are mounted in front of the box, and are hot-swappable. So pop the old one out, mount and push the new one in, then use the configuration control panel and tell it to rebuild the drive array using the new drive. After x hours of formatting and copying data, the job is done, with zero downtime.

        If someone wants a NAS box, I'll always recommend Synology.

      • by kriston ( 7886 )

        Synology claims to use a proprietary RAID format, but, for the most part, you can use mdadm to manage them from any Linux box.

    • by Entrope ( 68843 )

      That is my experience as well. Hardware RAID makes the most sense with a dedicated (and trustworthy) controller, fairly large array, and an expectation that your CPU will be busy doing other things. Prefer using software RAID unless you are pretty sure you have all those.

      I would also add: Be VERY careful if you try to reconfigure your RAID array. If you do so, take a snapshot of as much detail as you can (for example, with "mdadm --detail -v /dev/md0", and fdisk to print the partition table for each disk
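      A sketch of the kind of snapshot the parent describes, assuming a Linux md array at /dev/md0 with example member names sdb1..sde1:

        mdadm --detail -v /dev/md0 > raid-layout.txt        # the array-level view
        mdadm --examine /dev/sd[bcde]1 >> raid-layout.txt   # each member's own metadata
        sfdisk -d /dev/sdb > sdb-table.dump                 # partition table in a restorable dump format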

    • by kriston ( 7886 )

      Also, if you buy a second-hand RAID controller with a battery, immediately replace the battery because it's already dead.

  • by Sebby ( 238625 ) on Sunday April 04, 2021 @06:58PM (#61236672)

    From what I've read in the past, software RAID is "preferred" in the event something fails - either one of the HDDs, or other parts of the hardware itself (especially the RAID hardware).

    The reasoning I read was that with software RAID, you could use any hardware/machine you wanted, and could re-build your setup without having to worry too much about compatibility issues, since software would run on pretty much anything (within reason), whereas a hardware RAID setup might require the exact same replacement parts (especially the RAID cards) for you to be able to get back up and running, should any of those fail (vs. the actual drives failing).

    I'm sure there are valid counterarguments, however.

    • I ran nothing but hardware RAID for a long time. I liked the hot-swap capability for obvious reasons.

      However, a couple or three times when a drive failed, I had to pony up extra because the original drive specs were no longer available and I had to buy drives that could hold way more than I could use.

      I know nothing about software RAID, but I'm gonna learn right here.

      Thanks for the question.

    • Hardware RAID controllers are often backwards compatible so the next generation controller can read arrays created by the previous generation controller.

  • I set up an NFS server running OpenSolaris with a RAID-1 setup (two identical 300G drives -- about ten years ago), and it worked beautifully. It worked so well that it wasn't until about six months after one of the drives died that I happened to check the system's health, and discovered that the files I'd casually been copying over were now saved on just a single drive. I just preferred the software solution because it was simpler to set up. I imagine a hardware solution might require a software cost,

    • It worked so well that it wasn't until about six months after one of the drives died that I happened to check the system's health, and discovered that the files I'd casually been copying over were now saved on just a single drive.

      Working well would include notifications of failed drives.

  • Maybe I can throw in my question too, since it's super relevant to the whole debate. I heard a hardware RAID could be linked/locked to the hardware controller itself, meaning that if the motherboard dies, you need to find another one to get the RAID working. Software RAID, meanwhile, creates identical disks that can be plugged into any other machine... If that's the case, I would definitely go with the software RAID. Having a motherboard that breaks is less common than the HD itself, but in that case he might be in serious trouble as the mot
  • by bill_mcgonigle ( 4333 ) * on Sunday April 04, 2021 @07:05PM (#61236684) Homepage Journal

    If you need basic NAS, use ZFS to protect your data. RAIDz2 is good enough for large home storage.

    The OpenZFS 2.x Debian packages are excellent nowadays but FreeNAS exists if you need an appliance.

    If you have skillz, running a basic Linux server is easier in the long run than trying to map your needs to an appliance.
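    A minimal RAIDz2 sketch under those assumptions, with six example disks and an example pool name (in practice, /dev/disk/by-id paths are safer so the pool survives device renumbering):

      zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
      zpool status tank                          # verify vdev layout and health
      zfs create -o compression=lz4 tank/media   # a dataset to share from the NAS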

    • Somebody school me: what is the use of RAID for home use at all? The only thing it accomplishes is preventing the downtime of restoring from backup (which of course is important in the enterprise). But you still need incremental backups, since RAID doesn't protect you against ransomware, accidental deletion, or incorrect modification. Meanwhile the backup does protect you against a drive failure (minus the 1 day of data you'll be out).

      What am I missing?

      • by dskoll ( 99328 )

        I use software RAID on all my machines (well, not laptops) because disks are so cheap. And what it protects me from is the "minus the 1 day of data you'll be out" if a drive dies.

      • by crow ( 16139 )

        ZFS also does snapshots, providing some level of protection against ransomware, accidental deletion, or corruption. Yes, we should be doing backups, but the honest truth is that most people don't. RAID with snapshots is probably the best you can get if you're not reliably backing up.
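        A sketch of that snapshot workflow, with hypothetical pool and dataset names:

          zfs snapshot -r tank@daily-2021-04-04     # recursive, cheap, near-instant
          zfs list -t snapshot                      # see what restore points exist
          zfs rollback tank/media@daily-2021-04-04  # revert a dataset after a mishap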

    • by crow ( 16139 )

      I went with NAS4Free, the fork of FreeNAS when they changed the license (now renamed XigmaNAS). Then it choked on my network card when I upgraded at some point, so I ended up installing Linux and running the NAS in a VM, so Linux handles the hardware (and I wanted to run some other VMs anyway on the same system). I'm very happy with how it sits as an appliance and I never think about it.

      I used the budget we had for a new entertainment center when I realized that what we were going to be paying for was sto

  • Tell me how your software RAID stands up to hundreds of concurrent users connecting to an IMAP service like Cyrus with Horde webmail. I had a system like that years ago with good SCSI drives which had performance issues. Switched the system to a new server with a hardware RAID controller, and performance was excellent. I think it's partially the better latency you get with dedicated RAID disk controllers.
    • Tell me how your software RAID stands up to hundreds of concurrent users connecting to an IMAP service like cyrus with Horde webmail.

      I doubt the OP is going to have that on his personal NAS... :-)

      As to the subject of your post, I would offer that for a hobby user it does matter and software RAID would be the better long-term option, with a well-known dedicated hardware RAID card as the alternative. Using HW RAID ties your configuration to that hardware. An enterprise user will have access to identical, or vendor-confirmed compatible, replacement hardware as part of their maintenance/service agreement, the hobby user won't. This will m

    • If we are going back a few years, I had a system with a real hardware RAID card and enterprise drives, yet performance was horrible. The RAID card vendor agreed that the configuration should work well, but ultimately blamed the performance issues on the type of workload.

      So, in my experience, dedicated RAID disk controllers do not guarantee good performance.

  • No specialized controllers required
    No constraints on matching disk drives required by some hardware controllers
    No worrying about sustained software support/drivers for hardware controllers

  • Quick Note On Drives (Score:5, Informative)

    by ytene ( 4376651 ) on Sunday April 04, 2021 @07:18PM (#61236728)
    In your post you note that this is going to be a RAID for home use - domestic purposes - but you don’t really say if you’re planning on running this as a dedicated NAS or whether you just want to have extra security with data being archived in your running machine.

    If you’re planning on setting up a dedicated server for home use, then (apart from noting that after a few days you’ll wonder how you ever managed without it) I would strongly recommend that you give particular attention to your choice of drives.

    All of the well-known manufacturers offer drives particularly designed for NAS use. You will quickly find that they are more expensive (often quite a bit more expensive) than ‘desktop’ drives of similar capacity. Do not be tempted to forego proper NAS drives, even if it means that you scale down your capacity requirements to start with. The reason I make this recommendation is simply that, if your NAS delivers on its promise, it will soon fade to invisibility on your home network and you’ll forget it is there. Right up to the point where you experience your first drive loss... at which point you’re going to wish you’d bought the best drives you could find. I can’t speak to any make other than Western Digital (which I’ve found to be excellent) but their range of “Red” NAS drives comes in a regular variant [5,400rpm] and a Pro variant [7,200rpm]. I’m running RAID6 and can comfortably stream 4K content from that setup, but you might want to get a bit more advice or read up on performance if this is important to you.

    Don’t buy all your drives in the same order or from the same supplier. Not because you should expect defects in modern drives [these are thankfully extremely rare] but because every now and then you’ll get an idiot packing shipments and receive a consignment of drives in a loose box without packing. Who knows what sort of treatment they have had to survive on their journey to you.

    To get a decent level of redundancy you should possibly set your sights on a RAID-5 configuration with 4 drives as a starting point [which will give you capacity equal to 3 of the drives], but if you can stretch a bit further, RAID 6 will give you enough resiliency to allow for the simultaneous loss of 3 volumes.

    When you come to cable up your setup, do take the time to carefully read the technical specification of your RAID controller. I’m honestly not sure how this will work between ‘hardware’ and ‘software’ RAID, but at least some of these options, coupled with the right hardware [if you are in luck] will allow you to have a ‘hot swap’ capability.

    Initially setting up a RAID [or recovering from a volume loss] can take a fair old bit of time [it will depend on your IO rates and the drive performance], which means that my last suggestion might well be unpalatable to you: the reason you are investing in a RAID setup is because you have data that you don’t want to lose in the event of a head failure. But for that preservation to happen, you’re going to need to know how to recover your RAID in the event of an error. So think about simulating it. Easy to do if you have a hot-swap capability... but either way find out about drive swapping and, before you put any data on your RAID, try a drive swap. Make notes / take screen shots.

    And it’s hopefully obvious, but when you do buy your drives, buy enough for *at least* one full round of drive swaps. In other words, if you’re going RAID5, buy one drive more than you need. If you’re going RAID6, buy two spares. Mark them up and keep them safe.

    Lastly, it’s kinda redundant... but they haven’t yet invented a single-unit RAID that you can build at home that will survive a house fire. So keep going with your off-premises backups, no matter how good your RAID setup.
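    If the array ends up on Linux md rather than on a controller, the drill the parent suggests looks roughly like this (a sketch with example member names; the card-based equivalent depends on the vendor's tooling):

      mdadm /dev/md0 --fail /dev/sdc1     # simulate the failure of one member
      mdadm /dev/md0 --remove /dev/sdc1   # pull the "failed" member from the array
      mdadm /dev/md0 --add /dev/sdd1      # add the marked spare you kept safe
      watch cat /proc/mdstat              # follow the rebuild to completion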
    • by ytene ( 4376651 )
      Eek - typo. RAID6 will allow you to recover from the simultaneous loss of TWO drives, not three. Sorry!
    • by sconeu ( 64226 )

      Chiming in to second the choice of WD Red. That's what I've got in my Synology NAS. Also made sure they were CMR, not SMR drives.

      Do NOT get SMR drives for RAID.

      • by bobby ( 109046 )

        Chiming in to second the choice of WD Red. That's what I've got in my Synology NAS. Also made sure they were CMR, not SMR drives.

        Do NOT get SMR drives for RAID.

        ^^^^ this ^^^^

        Trouble is, it can be difficult to know if a drive is SMR, and some drive makers were hiding the fact that the drives were SMR.

    • by crow ( 16139 )

      Yes, buy drives in separate orders, but for a reason not stated: Drive failures are not independent. If you have a bunch of drives from the same batch, and run them with the same load, they'll tend to fail at the same time. The math behind RAID assumes independent failures. The nature of RAID results in nearly identical write patterns to all the drives. Also, reconstructing a lost drive puts significant stress on the remaining drives. All told, this means that if all your drives are from the same batch

  • by delirious.net ( 595841 ) on Sunday April 04, 2021 @07:19PM (#61236730)
    There is no reason to do any kind of RAID if you don't have the resources to replace a disk.
    You will end up with data loss.

    That said, there are of course endless kinds of ways to do software RAID.
    If you are low on resources, you can do that in the BIOS these days.
    If you want, you can do several kinds of software RAID using a Linux system.
    You can use a NAS system with network file systems at a reasonable cheap-vs-reliable trade-off
    (which is usually a kind of Linux anyway, but with the tools and signalling embedded and working).

    Hardware RAID is a different game: how do you size the data storage need? Do you require hot spares? What speed? What application? How many IOPS? How fast do you want a replacement disk (brand?)

    Then there is the question of reliability, do you require a certain amount of data retention? back-up?

    There is no "sw raid is better or worse than hw raid" question to answer really.
    There is though, the determination that you may need to do manual tasks if you are going to use a kind of software RAID.
    In hardware RAID the system is usually already equipped with signalling and self-repair, that is what you pay for.
  • by mysidia ( 191772 ) on Sunday April 04, 2021 @07:21PM (#61236738)

    Put the system boot media on a dedicated RAID controller, NVMe or SAS HBA with built-in RAID1, or hardware-mirrored device (RAID1) to provide a reliable boot volume.

    For the "data disks" for a NAS, however, it is best to use a software RAID solution such as ZFS, in order to allow the self-healing capabilities of the filesystem -- in addition, the speed/performance will be better. The purpose of the hardware RAID solution for boot disks is to manage the failure of a disk and ensure the system still boots successfully if a reboot needs to occur while one of the boot disks is still failed -- hardware fault management during the boot process is difficult and calls for a hardware solution.

  • With software RAID you can grab the drives from one machine, put them in another, and work with the RAID set. With hardware RAID you need to worry about which controllers are compatible with each other, and what happens if some controller is discontinued and you don't have a spare to replace it. For many years now, CPUs have been more than powerful enough to do any RAID calculations required without working hard. The only special thing the higher-end (and expensive) hardware RAID controllers can claim is the optional
    • by lpq ( 583377 )

      > With software raid you can grab the drives from one machine
      --
      I do that w/ HW RAID -- just move the RAID card to the other sys (or buy a 2nd one). I think the real turning point might be how many cores your RAID card has. If SW RAID, then RAID 0, 1, 10 are fine. If you want one disk for parity as in RAID5, have one processor on the controller; and if you want RAID6,
      having two CPUs on the RAID card can allow both parity disks to be run at the same time. If you have almost any RAID, you need to be sur

    • There is no issue with RAID6 in terms of data loss, unless you are spinning thousands of drives. With RAID5 you have to be super careful with what you are doing. There are some sweet spots with 3-drive RAID5 setups with acceptable MTTDL; however, unless you really know what you are doing, don't go there. Note that if they upped the BER by a factor of 10 to 1e16, then RAID5 would be completely acceptable in many, many more configurations. Interestingly, SSDs have much better BERs than spinning disks. So a RAID5 comp

  • Striping and mirroring and combinations of the two (RAID 1+0 and RAID 0+1) work well with software RAID. They are very much on-par with hardware RAID. Hardware RAID with battery backed cache is far superior for parity based RAID types like RAID3, RAID5 and RAID6. This is especially true if you lose a disk.
  • As with anything else where there are multiple popular choices, there are trade-offs. There are reasons both options exist. For a home user, I would recommend software RAID, though.

    1. Hardware RAID makes it easier and safer to RAID your boot device.
    2. As others have said, hardware RAID ties you to a particular hardware implementation. That's fine if you have a data center with a lot of duplicate hardware and sparing, but not so good if you're a home user.
    3. Hardware RAID limits you to the RAID levels tha

  • It will be faster, even if the Hardware RAID is in the end done by the BIOS/UEFI/Driver SW.

    Also, how do you dualboot AND get visibility of all your partitions using SW RAID?

    And, for those people that prefer SW RAID because, they say, in case of HW failure they can take the drives and recover the data on another system, I reply: I can do that too, using the backups I diligently make every night. Because, you see, RAID is about availability, not about backup or disaster recovery.

    Just remember to use RAI

  • by Glasswire ( 302197 ) on Sunday April 04, 2021 @07:42PM (#61236780) Homepage

    It's a dumb question. Depending on the hardware, OS, software stack, and WHY you are RAIDing, the answer can be yes or no. There are so many permutations to this question that it's so broad it can't even be answered with a series of ifs and qualifiers.

  • If you use Linux, in fact, the software RAID stack is what gets used when Intel RAID is set up.

    The only benefit is that your firmware would understand how to boot off of the raid volume more easily, but it's pretty trivial to have a RAID 1 /boot in an otherwise RAID5/RAID6 setup in more purely software RAID. It matters a bit more in Windows, where Microsoft withholds implementation of RAID5/RAID6 for more expensive editions of Windows and the driver based RAID gives access to that capability with cheaper Windows.

    If yo

  • For what it sounds like he is doing, a software raid will be easier for him to maintain and repair if something breaks. Cheap hardware raid is evil.

    However, he says ""My hardware pool is very shallow."
    This makes me suspect he is thinking of depending upon the RAID to protect his data instead of backup.
    It is far more important to have and use a well thought out backup solution. Backup protects you from many things that RAID cannot.

  • Most hardware-RAID controllers are very limited in what they can do, and monitoring a hardware RAID can be anywhere from a pain to next to impossible. On the other hand, they are simple to use. Software RAID gives you more flexibility and better monitoring, but you need to do and understand more.

    Oh, and do not even think about ElCheapo mainboard/BIOS RAID. That stuff is just crap.

  • by Above ( 100351 ) on Sunday April 04, 2021 @07:50PM (#61236806)

    Many motherboards advertise "Hardware RAID", but are in fact what we call "fake raid". They have some hardware acceleration features on the motherboard, but the heavy lifting of the RAID is done in the driver. Some of these are Windows only as a result, while some are supported by various Open Source drivers now. See ABMX Servers [abmx.com] for an article on the differences.

    If you use RAID -- hardware, software, or fake -- you also need to consider the sort of drives. Drive firmware is different for RAID arrays than for single-drive applications. The most important difference is that TLER/ERC [abmx.com] drives will retry a bad read/write MUCH fewer times before erroring out. If you use firmware configured for many retries in a single-drive application it can absolutely destroy the performance of your RAID. Rather than the RAID being able to move on to the other drive(s) and/or remap the sector, that failing drive just hangs the whole thing with retries. For a while this was a user-configurable parameter in many drives with SMART, but most manufacturers have now limited it to RAID-capable drives.
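    Where a drive still exposes it, ERC can be inspected and set through SMART with smartctl from smartmontools (a sketch; values are in units of 100 ms, and on many drives the setting does not survive a power cycle):

      smartctl -l scterc /dev/sda        # report current read/write ERC timeouts
      smartctl -l scterc,70,70 /dev/sda  # cap both at 7.0 seconds, a common RAID value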

    Why are "RAID" drives more money? Well, there are several reasons, but one of the big ones is vibration. When you have multiple drives in a chassis doing synchronized tasks they can end up vibrating each other into poor performance. A rather famous video [youtube.com] of a guy screaming into a RAID array proves the point. Non-RAID drives often omit some of the vibration dampening features and it leads to worse performance and premature failure, particular when using 5+ drives in a single chassis. Obviously this does not apply with modern SSD's, another case where SSD's are superior.

    Generally my advice would be to use a pair of disks in a RAID1 mirror with a software driver for an end-user machine, like a desktop used to play games. In a server application where multiple drives are required for capacity I'd recommend a dedicated hardware RAID card from a quality vendor driving RAID-spec hard drives. YMMV, plenty of folks get away with other configurations.

    • by Mal-2 ( 675116 )

      It's pretty well known (among guitarists) that the coils in guitar pickups can be microphonic, picking up vibrations from the air and often causing acoustic feedback. The solution is usually to soak such a pickup in hot wax for a while, to stabilize the wiring so it can't move.

      The voice coils that drive head positioning are not tremendously different from guitar pickups. Perhaps they are also capable of mistaking vibration for movement of the head, or the circuitry just can't deal with transients that occur

  • If you are spending enough money with dedicated RAID Hardware then hardware RAID is generally better, but on the cheap end software RAID is significantly safer and better than any of the shit built into motherboards etc.
    • by Junta ( 36770 )

      Note the key is *enough* money. I once worked at a little place that wouldn't pay for a supported card but bought a random second hand, end-of-life hardware raid controller. They had been bitten by a NAS appliance that used software RAID on an IDE system where they got hit by a failure that took out two disks. I informed them that if they want to continue to be cheap, they could do a hardware design with dedicated channels per disk to avoid that risk, but the president of the company declared that hardware

  • ...what is the usage case for RAID these days?

    I can cook up some obscure edge cases where it might be helpful, but blistering performance is achieved via SSD, and storage is so cheap that one can just hook up a 2TB+ USB drive to your system and run some sort of sync on it periodically, likely at a fraction of the cost of getting a RAID card.

    a quick google search seems to confirm my suspicion that RAID has indeed fallen out of favor... prove me wrong?

    • Making it a NAS vs a USB is a significant improvement. That adds about $500 to the cost, at least with Synology or QNAP, but is well worth it. Having two slots so you can at least snapshot between devices is great.

      I still use a spinning rust NAS for backups though— I need about 10TB at home for that.

  • by Orgasmatron ( 8103 ) on Sunday April 04, 2021 @08:09PM (#61236878)

    RAID does not protect your data. It protects your uptime.

    Hardware RAID with a battery can protect against certain types of data loss/corruption during power failures. You can mitigate most of the same risks with a good UPS and well-tested UPS scripts. You can take it even further with DRBD (in modes that wait for remote confirmation).

    Software RAID is cheaper, easier, more flexible, more portable and gives better performance in most circumstances.

    RAID-6 is pretty much only for situations when the drives are not externally hot-swappable. If you need to shut the server down to replace a failed drive, you may want to let it ride the 2nd redundancy for a few days to schedule the downtime. If you can replace the drive without shutting down, you have no excuses for not getting the drive replaced within about 12 hours. With big drives, rebuilds can take several days. You may want extra redundancy during that time too.

    If you ever get software RAID into an un-runnable state, remove your hands from the keyboard. Take a few deep breaths, and write "I will type NO unplanned commands" on a sign by your keyboard. You should have backups for this situation, but you probably don't. Despite that, your situation is almost certainly recoverable, and most or all of your data can be saved. Your #1 job at that point is to avoid making the situation worse.
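    In that spirit, a sketch of strictly read-only commands to run first on a Linux md set before attempting any repair (member names are examples):

      cat /proc/mdstat                 # what the kernel currently thinks
      mdadm --detail /dev/md0          # array state, if the md device still exists
      mdadm --examine /dev/sd[bcde]1   # each member's metadata: event counts, device roles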

  • Hardware RAID is way better, but not to be confused with the in-BIOS RAID on many motherboards; that is not hardware RAID. The main issue is when the RAID needs to check or rebuild drives: you don't want it taking all of your CPU for an extended period of time. E.g., my software RAID system takes 48 hours to rebuild the array! The machine is mostly unusable during that time.
  • by Voyager529 ( 1363959 ) <.voyager529. .at. .yahoo.com.> on Sunday April 04, 2021 @08:31PM (#61236938)

    A number of these things have already been echoed in the thread, but here are my thoughts...

    1.) If you're doing this for home use, spend a few bucks and get a Synology to put your drives in. They're simple to use, they've got good support, they have a bunch of apps that give lots of functionality with zero subscriptions or anything. If you're looking for something you can set-and-forget, get a Synology and thank me later.

    1b.) Some may say "what about QNAP!". I used to like QNAP, but they have [expletive]'d me over, twice. Once, we had a 12-bay, rack-mount unit (i.e. clearly a business-grade unit) with a bad bay, which they confirmed. They wouldn't do an advance replacement, even though it was in warranty, and even though we were willing to put up 100% of the purchase price as collateral, and even though they told us that the lead time for the repair was over a month. In another case, we needed to add an SSD cache... but they don't support anything but the first-party cards, and zero of the compatible cards are available for purchase, even though these units are less than three years old. So, QNAP is on my s!!t list.

    Other vendors exist - Buffalo (I dislike their WebUI and they seem very limited if you're looking for anything more than SMB or FTP), Asustor ('the best of the rest', but they are in the uncanny valley in my experience for some reason), Drobo (some swear by them, but the one I used once could barely do more than 10MBytes/sec on a gigabit connection and WD Black drives; no onboard apps, either), and a few other minor ones, but my experience has led me to recommend Syn pretty much exclusively.

    2.) If you're looking to do a DIY job, you're not getting a set-and-forget situation. That's not a bad thing, and I've had some closer than others (ran a FreeNAS for my mom for nearly a decade), but it will be hands-on, no matter what you do. Just assume that.

    3.) Now, to directly answer your question... don't use motherboard RAID. It's the worst of both worlds. As others have said, it's not 'real' hardware RAID, so you don't get the performance you think you do. I've got one in production because I inherited it... and I'm telling you, it's slower than if I had the drives by themselves and simply made a spanned partition. If you're looking to do hardware RAID, you'll need an actual RAID card. A real one. With onboard RAM and a battery backup. Good news is that used ones are cheap on eBay; you won't win any benchmark prizes with a Dell H710, but they're going for $40 or less, easy.

    4.) If you're going software, your big question is whether you're looking to dedicate the machine to being a NAS. It sounds like you are, but if you're planning to run a desktop OS in addition to it being a NAS, that's going to matter. If you are, both Windows and Linux have ways to do that, but you'll want to present the drives to the OS directly, rather than through the motherboard's RAID functions. Either way, there are a bunch of tutorials on doing software RAID on both OSes; Google and Youtube have you covered.

    5.) ...If you are going with a DIY software build, here's my personal rundown...

    --TrueNAS (formerly known as FreeNAS) - What I use. It's come a long way since I started in terms of being user friendly; most of my CLI usage is vestigial now. However, it is fundamentally a frontend for the ZFS file system running on FreeBSD. It's widely regarded as being one of the most stable available, but if you're used to little more than "partitions" at this point, "pools", "volumes", and "datasets" are going to be confusing at first. Not insurmountable of course, but just know what you're signing up for.
    --XigmaNAS - A fork of the FreeNAS 8 code (though running on current releases of FreeBSD as well), it too is built on ZFS. It has fewer features than TrueNAS does, but it is still solid, still based on ZFS, still does all the core stuff well, and is well maintained. A solid runner-up.
    --UnRAID - A paid product, this one has a loyal following for good reason - it has the most s

    • by kriston ( 7886 )

      [used RAID cards are] going for $40 or less

      That's because the batteries are dead. That can cost way more than $40 to replace.

  • Remember and follow the Rule of 3s for important data.

    Those (often small business users!) who substitute RAID for backup are famous for learning those lessons the hard way.

  • Depends on your goals and your budget. I like hardware but you'll want more than one set of the same hardware.

  • In the old days, it made a lot of sense to have hardware raid so that you could offload the work to a dedicated processor. It was still software raid, just on another piece of hardware.

    Nowadays, software raid has almost no impact on the load of the system. It's just WAY simpler to use software raid, and most Linux distros will recognize a software raid volume from another system.

    I manage dozens of machines with RAID 1 pairs. They read at the maximum speed that the hardware interface will allow on very busy

  • For most lightweight use, the spare disks of a RAID are better assigned to backup resources: backups protected from "rm -rf /" but with snapshots exposed for file recovery. There are many sophisticated technologies for this; a simple "rsync"-based copy on another host is often invaluable when content is accidentally deleted. RAID provides no protection against this, and for most environments it's a far higher risk.

    • by Junta ( 36770 )

      One thing I would add, in case someone doesn't realize: a backup system should be rsyncing *from* your primary storage rather than your primary storage rsyncing *to* the backup. This way you can set things up so that the primary system does not have write access to your backup, and thus a scheme to do rsync-based snapshots is protected from attackers/malware potentially reaching into the backup system and doing more damage.

      I'll say that I do like RAID because a lost disk is easier to deal with than resto

      • > This way you can set things up so that the primary system does not have write access to your backup

        For rsync or similar direct mirroring technologies, please permit me to agree wholeheartedly. It's very useful to have the backups _accessible_ from the primary server for file or configuration recovery, but potentially deadly if a poorly formed "rsync --delete" expunges all backups.
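        A sketch of the pull-style snapshot scheme described above, run from the backup host (the hostname and paths are placeholders):

          today=$(date +%Y-%m-%d)
          # unchanged files are hard-linked against the previous snapshot, so each
          # dated directory is a full tree but only changed files consume new space
          rsync -a --delete --link-dest=/backup/latest primary:/srv/data/ /backup/$today/
          ln -sfn /backup/$today /backup/latest   # advance the "latest" pointer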

      • by Bert64 ( 520050 )

        On the flip side, if someone compromises your backup server then they can potentially use it to reach into all your servers. You should ensure that its access is read-only so they cannot disrupt operation, but your data is still compromised at that point. If you're using windows boxes, the attacker will usually be able to pass the hash from the backups to get onto the live systems too.
        Companies leaving backups unsecured (often on open file shares) is a big risk.

  • It really depends on the amount of data vs. the running costs of data storage, and what kind of data you have. If the data doesn't change but is very important, you want to back it up regardless of RAID. If you're talking about a giant movie collection that is mostly inconvenient to replace, but not impossible, then RAID can be useful, though you could always rip or download the collection again. If you work on the data a lot, like video editing, you might want something that has versioning capacity if the app doesn't have it built in (more apps should just build in better versioning, IMO).

    I think UnRAID is a pretty nice solution, but you could also just set up a Linux or even Windows box, or FreeNAS. I don't trust Windows RAID as much, but it might have gotten better while I wasn't looking. ;) I use UnRAID for a Plex backend and (buggy) Nvidia Shields for a front end, for something like 20 TBs. It's solid and pretty easy to use once you get used to it. It's pretty modular, forgiving, and easy to rebuild. Data on every drive is still accessible even if not mounted in an array; I think for DIY home RAID that should be a minimal requirement unless you really know what you're doing and want max speed, though I'd argue you don't get much more speed and that's just about picking good hardware. I can stream multiple 4K streams to the Nvidia Shield no problem, which is about the most I'll ever need to do.

    With UnRAID you dedicate one (largest) drive as parity and just add whatever size drives into one big array, and if you lose any single drive you can rebuild. You can add more parity drives for more redundancy, and you get a simple web-based interface with help bubbles and decent forums for support to help you or get advice. A dedicated NAS OS can make support easier to find, because a lot of people are all using it for the same few things, vs. Linux desktop where there are a billion times more support questions clogging up your ability to find the one you need.

    If you plan on 20+ TBs of data you will probably want to plan to eventually migrate to a SAS expansion card, to not just rely on motherboard ATA ports. This makes you even more modular and lowers the motherboard requirements. You don't NEED a server-level motherboard or desktop or anything, but you do need a reasonably stable one. I'm just using some old ASRock board and it works fine.

    This is a complex question. I think maybe you should post it on Reddit somewhere if you want a lot of detailed responses, or some other site where user comments are the focus vs. news commentary; the FreeNAS, UnRAID, and other NAS subreddits should give you a ton of info. Also, stop buying stuff unless you're pretty sure what you're doing and what the real plan is; you might just make it harder on yourself if you get stuck with the wrong hardware. You really need stable hardware to not wind up hating your life.
  • Honestly, in the... 20 years or so I've been using RAID systems, in 2021, I'd pick none. Put the data in cloud storage, and do whatever you need to do... in the cloud. It's probably not that much more expensive at the end of the day.

    Every single RAID device I've ever owned (and I've owned quite a few), has failed catastrophically and suffered data loss. Corrupted drives, failed devices, even LVM raid that went south. No matter what solution you pick, make sure you have a great backup solution, and if you're

    • Aside from Buffalo units, I have never had a RAID system of any flavor crap out in the last 15 years. A Synology box with a few TB of storage will be orders of magnitude cheaper than any cloud storage, especially if the data does need to go back and forth.

  • So... (begin appeal to personal authority) Storage is actually one of the few things I get to claim some professional expertise in. At multiple companies, several of which are mentioned in other posts here. (end appeal to personal authority)...

    1. RAID is not a back up. As others have stated it protects your uptime, not your data. If you have to pick one or the other, pick a good solid backup. HINT: DO NOT PROCEED PAST HERE IF YOU CAN'T SATISFY THIS STEP.

    2. RAID 5 is dead. DEAD! If you're using drives la

  • If you are running any kind of system a regular person can afford, the RAID code (whether software or hardware) is not going to be high quality. It's going to be just good enough to call it "RAID." Can you really hot-swap a failed drive? On desktop systems, you're supposed to shut down the entire system to swap out a drive.

    Real RAID costs a LOT of money. It's best suited for commercial purposes that need to be able to keep running even when a drive fails.

    Even for true high availability systems, I'd prefer r

  • SW is always better (Score:3, Informative)

    by zib123 ( 7721916 ) on Monday April 05, 2021 @12:41AM (#61237414)
    Just don't forget to enable journaling in Linux MD, or just use ZFS, to avoid write-hole problems. HW RAID is only faster if your CPU is 10 years old.
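    For reference, a sketch of creating an md RAID5 with a write journal to close the write hole (mdadm's --write-journal option; the device names are examples, and the journal device should be a fast SSD/NVMe partition):

      mdadm --create /dev/md0 --level=5 --raid-devices=4 \
            --write-journal /dev/nvme0n1p1 /dev/sd[bcde]1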

"Everything should be made as simple as possible, but not simpler." -- Albert Einstein

Working...