Ask Slashdot: What Network-Attached Storage Setup Do You Use?

"I've been somewhat okay about backing up our home data," writes long-time Slashdot reader 93 Escort Wagon.

But they could use some good advice: We've got a couple separate disks available as local backup storage, and my own data also gets occasionally copied to encrypted storage at BackBlaze. My daughter has her own "cloud" backups, which seem to be a manual push every once in a while of random files/folders she thinks are important. Including our media library, between my stuff, my daughter's, and my wife's... we're probably talking in the neighborhood of 10 TB for everything at present. The whole setup is obviously cobbled together, and the process is very manual. Plus it's annoying since I'm handling Mac, Linux, and Windows backups completely differently (and sub-optimally). Also, unsurprisingly, the amount of data we possess does seem to be increasing with time.

I've been considering biting the bullet and buying a NAS [network-attached storage device], and redesigning the entire process — both local and remote. I'm familiar with Synology and DSM from work, and the DS1522+ looks appealing. I've also come across a lot of recommendations for QNAP's devices, though. I'm comfortable tackling this on my own, but I'd like to throw this out to the Slashdot community.

What NAS do you like for home use? And what disks did you put in it? What have your experiences been?

Long-time Slashdot reader AmiMoJo asks "Have you considered just building one?" while suggesting the cheapest option is a low-power Chinese motherboard with a soldered-in CPU. And in the comments on the original submission, other Slashdot readers shared their examples:
  • destined2fail1990 used an AMD Threadripper to build their own NAS with 10Gbps network connectivity.
  • DesertNomad is using "an ancient D-Link" to connect two Synology DS220 DiskStations.
  • Darth Technoid attached six Seagate drives to two MacBooks. "Basically, I found a way to make my older Mac useful by simply leaving it on all the time, with the external drives attached."

But what's your suggestion? Share your own thoughts and experiences. What NAS do you like for home use? What disks would you put in it? And what have your experiences been?

  • by bazorg ( 911295 ) on Saturday August 17, 2024 @03:11PM (#64714094)

    It has just worked so far, and I particularly like the backup of photos when my Android phone is in WiFi range. This way I never need more than the free Google storage I get.

    I think that in 5 years or so I'll get a new NAS with bigger drives and then retire this one to eBay. That's what I did with the old Netgear ReadyNAS, and I really enjoyed the significant software upgrade.

    • by Misanthrope ( 49269 ) on Saturday August 17, 2024 @05:27PM (#64714350)

      Synology's stuff just works. I don't have the free time at this stage of my life to mess with a home-grown setup, even using available solutions like FreeNAS, TrueNAS, OpenMediaVault, or Unraid.

      • by ls671 ( 1122017 )

        I always set up my own. Setting up Samba and NFS shares backed by mdraid or ZFS isn't rocket science, you know, so there's no need for FreeNAS, TrueNAS, OpenMediaVault, Unraid, etc. either. I just use whatever old computer I have in my junk room for the file server and install Slackware or Debian on it.
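
        A minimal sketch of that kind of setup on Debian (device names, network range, and share paths here are hypothetical examples, not a prescription):

          # Mirror two spare disks and put a filesystem on the array
          mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
          mkfs.ext4 /dev/md0
          mkdir -p /srv/share && mount /dev/md0 /srv/share

          # Export over NFS
          echo '/srv/share 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
          exportfs -ra

          # Share over Samba
          printf '[share]\n  path = /srv/share\n  read only = no\n' >> /etc/samba/smb.conf
          systemctl restart smbd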

      • by skegg ( 666571 )

        Another Synology fan right here.

        Their phone apps allow backup of data to the NAS (photos, documents, etc).
        Synology Drive Client allows good background syncing of desktops.
        HyperBackup allows (encrypted) backup of one's data with a third party (AWS, Wasabi, etc).

        My current one has 4 drives; next one will have 5 ... I'll use the 5th as a hot spare.
        (Current one is 8 years old but running well ... I couldn't justify the upgrade to my domestic CFO.)

        I'd also say: go for the extra RAM.
        I was pleasantly surprise

    • by AmiMoJo ( 196126 )

      I'd go open source for all that. Build a NAS using an N100-based motherboard and run either TrueNAS or Windows as you prefer. Something like OwnCloud for storing files, with their Android apps for sync. It has a decent interface for browsing photos, or you can add a dedicated gallery web app. You can also use it to sync all your calendars and other stuff.

      Hardware-wise: an N100 or similar motherboard/CPU, and an ITX case designed for NAS use; Jonsbo ones are generally pretty good. Look for CMR HDDs, and a small SSD

      • Look for CMR HDDs, and a small SSD for the OS.

        On Synology and I imagine others there is a copy of the OS on every drive, so as long as there is still one functional drive left the NAS will boot.

    • Yeah, I just have a DS420j with a bank of 2TB SATA SSDs. It just works. My only gripe is it's an ARM model, so it doesn't have the Docker stuff. (Well, it does have Docker, but it's not really built for it like the Intel ones, so the perf ain't really worth it.)

    • by dgatwood ( 11270 )

      Ostensibly, mine is a Synology DS1515+ with a repaired motherboard, but in actuality, it's an IOSafe 1515+, which is a DS1515+ with the drives packed into a fireproof enclosure.

      I recently migrated to new drives with the aid of a hacked-up DS1515+ to which I added a mechanical switch shorting the power-supply pin so it would stay on, because it kept shutting off randomly. Thanks, Intel, for the Atom disaster. (This is also why my IOSafe 1515+ has a repaired motherboard.)

    • The problem with Synology is outdated tech that lies just at the edge of what is necessary to operate. It also uses Btrfs, and the bootloader for the OS and UI is a GPL violation (proprietary code compiled into the Linux kernel).

      TrueNAS makes appliances as well which don't have any of those problems.

  • Rolled my own (Score:5, Informative)

    by thegarbz ( 1787294 ) on Saturday August 17, 2024 @03:39PM (#64714140)

    Hardware: It's getting long in the tooth for everything except the HDDs.
    CPU: Intel Avoton C2550 - an ancient passively cooled quad-core CPU which does the job.
    Motherboard: ASRockRack C2550D4I - chosen for its BMC, dual NICs, dedicated BMC NIC, 12x SATA ports, and mini-ITX form factor.
    Drives: a couple of SSDs for the OS and cache, and then 3x16TB Hitachi HDDs.

    Software: Ubuntu Server - though any flavour of Linux will do.
    The rest depends on what you want to do:
    Seafile - for my own private cloud system
    NFS / SMB - for file sharing
    Plex - media streaming - though if you want to transcode you need either modern hardware or a GPU thrown in; the Avoton is just not powerful enough for that.

    And then a crap ton of other things the box is doing:
    Collating and serving email via Roundcube
    Downloading torrents with Deluge Core
    Home Assistant for the smart home and power-logging stuff.
    InfluxDB for trending of the server information as well as the smart home stuff.

    I'm sure Synology offers something with an out-of-the-box working interface, but rolling your own is quite satisfying.

    • by Dohmar ( 1017428 )
      Rolled my own too, running ZFS. Wanted an ECC RAM platform, so I selected a Supermicro X11SSM-F mobo with a Xeon 1275v6 and 64GB of ECC RAM. PCIe slot 1 is a 9211-8i for the 8x spinning rust I have, slot 2 is a dual 10GbE SFP+ card (Chelsio), and slots 3 and 4 are running NVMe M.2 adaptors for a fast SSD share. Got boot SSDs and VM SSDs running off the internal SATA ports. No SLOG or L2ARC. Built this thing back in 2016 and it's been rock solid. Only have to swap the drives out when I run out of space or they go out of warranty. Doesn'
      • Forgot to mention ECC and ZFS. Yes, running both as well. A ZFS scrub led to my first upgrade from 2TB drives to 4TB drives 8 years ago when it started identifying checksum errors (that said, a check of kernel messages also showed SATA errors at the same time, so you can't exclusively credit ZFS if you're paying attention - which I wasn't).

        But now I am. Important for any NAS is that you monitor drive health. I now have 3x16TB drives because late last year one of my 4TB drives started reporting SMART errors, b

    • I have even older hardware, but if it works and is fast enough...
      Chassis - Supermicro 4U with 24 3.5" drive slots and two power supplies
      CPU - 2xAMD Opteron 270
      RAM - 24GB DDR ECC
      MB - Tyan Thunder K8WE
      SATA HBA - 3x Supermicro AOC-SAT2-MV8 (8 port PCI-X SATA2 HBA)
      HDD - 12x 3TB hard drives (mainly WD RED) for data

      It runs Debian 10 with zfs. The data drives are arranged as two 6-drive RAIDZ2 vdevs. I use it mainly for file storage (samba) and for backing up my other servers.
      I'm close to running out of space thou
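
      For reference, a pool laid out like that can be created in one command (a sketch; short hypothetical device names shown, where stable /dev/disk/by-id paths are better in practice):

        zpool create tank \
          raidz2 sda sdb sdc sdd sde sdf \
          raidz2 sdg sdh sdi sdj sdk sdl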

      • Had that expansion thought recently. I opted for a primary drive upgrade instead. The reasons were multiple: I wanted to keep the case tiny and the heat output low, the older drives were getting some real years put on them and approaching end of life, and for a system that is on 24x7 I didn't want superfluous energy consumption. Three extra drives was ~20 watts, so ~120 EUR/year in running costs (the last upgrade happened while our electricity prices were still ****ed from the Russian invasion).

    • With a used 2014 Mac Mini, 1TB SSD, 256GB NVMe for the OS, and an external HDD. Secondary backup is two other external HDDs normally stored in another building.

      Anything newer than Core 2 Duo can fling files about. USB 3 is probably the limiting factor.

    • by tlhIngan ( 30335 )

      The problem with rolling your own tends to be an interface issue. If a hard drive goes bad - how do you tell which one it is?

      I've had many NAS and other storage appliances over the years. When a drive goes bad, they tell me which one it is - either an LED, or a display, or a message like "Drive #3 is dead".

      I then remove the drive because it's clearly labelled by the LED or printed on a label or slot, and replace it.

      The DIY system is far less accommodating to that. Maybe it's only 3 drives so you can hunt by

      • The problem with rolling your own tends to be an interface issue. If a hard drive goes bad - how do you tell which one it is?

        Given that every upgrade I've had has been due to an HDD going bad, I can answer this easily: I do not require perfect 100% uptime, so I can power down the system. The bad drive can be easily identified by serial number. Instead of doing everything with the sda, sdb, sdc, sdd, etc. symlinks, use the /dev/disk/by-id symlinks and your drive becomes uniquely identifiable, e.g. ata-TOSHIBA_MG08ACA16TE_14W0A052FWTG. If that threw up a SMART error and needed replacing, I'd power down and quickly have a run through the drives lo
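
        A quick illustration of the by-id approach (the listing shape is an example):

          ls -l /dev/disk/by-id/ | grep ata-
          # ata-TOSHIBA_MG08ACA16TE_14W0A052FWTG -> ../../sdc
          smartctl -i /dev/sdc | grep -i serial   # cross-check against the label on the drive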

      • by AmiMoJo ( 196126 )

        There are other ways to tell which drive it is. I put a sticker with the serial number on the end of the drive so I know which one to pull. You can also do it by SATA/SAS port number, by simply arranging your drives in port order. Cheap NAS cases with hot-swap backplanes make that really easy to do.

        You can also get a cheap HBA off eBay with the ability to flash the LEDs on slightly more expensive cases.
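
        For what it's worth, on backplanes with SES-2/IBPI enclosure management, the ledmon package can blink a slot LED in software (a sketch; requires a compatible HBA and enclosure):

          ledctl locate=/dev/sdc       # blink the locate LED on sdc's slot
          ledctl locate_off=/dev/sdc   # stop blinking after the swap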

  • With 8 x 14 TB drives in RAID-Z2.
    Using a now fairly old i5 6600K and Z170 motherboard, 32 GB of RAM, and a 10GbE network card.
    It's relatively noisy and power hungry, so I don't leave it on 24/7. I use WoL.

  • by ukoda ( 537183 ) on Saturday August 17, 2024 @04:33PM (#64714236) Homepage
    WARNING: Before buying any drives learn what a SMR drive is and how not to buy one. For a NAS they are pure evil disguised as a bargain.

    I have run my own NASes for many years now. It's not hard to set up either an off-the-shelf unit, like QNAP, or a Linux PC built with several large HDDs and software RAID5.

    The hard part is a few years down the line when the drives are near full or near EOL. Simply swapping drives is a modest drama and unpleasantly leaves the RAID array in a degraded state for a long time while it syncs the replacement drive. Add to that, you have to do it multiple times, once for each drive, and then work out how to resize, and it can be a royal pain.

    There may be a better way, or a better option than RAID5. If people have suggestions or links about how to make this easier, I'm all ears, as I am in the process of building a new NAS and want to address this at the setup stage.
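
    One way to soften the degraded-state problem with mdadm, assuming a free port for the incoming disk: --replace rebuilds onto the new drive while the old one stays in service, so redundancy is never lost (a sketch with hypothetical device names):

      mdadm /dev/md0 --add /dev/sde        # new, larger disk joins as a spare
      mdadm /dev/md0 --replace /dev/sdb    # copy sdb's data onto the spare in place
      # ...repeat for each member, then claim the extra capacity:
      mdadm --grow /dev/md0 --size=max
      resize2fs /dev/md0                   # or xfs_growfs for XFS
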
    • by OneOfMany07 ( 4921667 ) on Saturday August 17, 2024 @07:51PM (#64714576)

      I got curious. Dropbox loves them (from my skimming of the article below). But write speed is worse, and a manufacturer silently switched to these (WD Red) and made their customers angry. If you can schedule the updates/saves (give them lots of time to complete) and/or do them intelligently (only what changed)... I'd assume SMR might be fine too.

      https://dropbox.tech/infrastru... [dropbox.tech]

      https://www.reddit.com/r/DataH... [reddit.com]

      "FrederikNS 3y ago Edited 3y ago

      SMR is perfectly fine for some applications. A backup drive or a stand alone media drive. It should also work decently as a secondary game storage drive, though most gamers will prefer a snappy SSD anyway.

      The penalty for SMR is when writing, and only when you write enough data. If you only write a few gigabytes occasionally you will probably not even notice, but if you write 100s of gigabytes at a time, your write speed will absolutely tank. SMR doesn't affect read speeds or seek times. So as long as you use it in a more or less "write once, read many times" fashion it's perfectly fine.

      The big controversy was that WD started silently including SMR drives in their WD RED lineup, which are meant for NAS use.

      NAS use can also often be very write-once-like, however the big problem arises in RAID setups. If you have a disk failure, and need to replace a disk, and you happen to replace the failed disk with an SMR disk, your recovery time easily explodes from a single day ordeal to a possibly week-long restore. That's a lot of time to wait anxiously hoping your data isn't corrupted and another disk doesn't fail, ruining the entire RAID. The remaining disks are also more likely to fail during the restore, as the restore is quite intensive, and will raise the temperature of the disks over an extended period.

      For some types of RAIDs, such as ZFS and BTRFS, you somewhat routinely run rebalance and resilver operations. These rewrite all the data on the disk, and ensures that all the data is intact, as well as keep the filesystem in good health. If you use SMR for that the resilver or rebalance suddenly takes a week instead of a day or two.

      Additionally there have been many reports of people having resilvers on ZFS outright fail, due to the individual write operations timing out.

      I personally run a BTRFS on 3 SMR disks and 3 CMR disks. It's OK... But it's far from a pleasant experience when maintenance operations take so long to complete. In case one of my disks fail, I will definitely be replacing them with CMR, both to speed up the recovery but also to get rid of the SMR disks over time."

      • by ukoda ( 537183 )
        Back before I knew what SMR drives were, I used one to replace a drive that had failed. Shortly after, I went and bought 4 more large SMR drives for a new array. Not long after that, the first SMR drive 'failed', taking the array into a degraded state. Investigating why a new drive failed was when I learnt the expensive truth: I had bought 5 expensive paperweights. The core problem being the SMR drives periodically stall while they do their internal bullshit to pretend they are real HDDs. The Linux RAID
        • I got bitten by this exact same thing when I was experimenting with RAID 1 on external USB drives in the 2.5" form factor. Found that almost all small drives are SMR. To boot, if I tried to use them with LUKS + dm-integrity, it could take weeks to "format" the drive with all the ECC needed. I wound up just using them as drives for offsite, GFS backups, using LUKS with btrfs using sha256 hashes for the checksumming, so there is some type of authenticated encryption present, even though btrfs's checksums w

        • by wwphx ( 225607 )
          SO... don't buy WD drives for NAS applications. Got it! And thanks!
      • by Bert64 ( 520050 )

        SMR disks will often suffer from severely degraded read performance too when the drive is reorganising itself.

      • The penalty for SMR is when writing, and only when you write enough data. If you only write a few gigabytes occasionally you will probably not even notice, but if you write 100s of gigabytes at a time, your write speed will absolutely tank. SMR doesn't affect read speeds or seek times. So as long as you use it in a more or less "write once, read many times" fashion it's perfectly fine.

        Most of what you've written is correct and helpful. This... I think needs some fine-tuning.

        You're technically correct, but there's a huge practical issue that's being missed. When - not if - a drive fails, you need to undergo a redundancy rebuild. Some implementations do allow throttling back rebuild speed, but I haven't seen one that goes as low as SMR's actual sustained throughput. For the vast majority of the drive's lifespan you're totally fine, but when you're undergoing a rebuild you're going to exceed the ~30G o

    • (Sorry, I don't know of a solution you can use. But it seems logically 'possible'.)

      I wonder if any RAID system allows you to add an 'extra' drive, and then mirror one of the existing drives. Like a RAID 1, but only for one of the drives in the other array. When done you could remove the original and never need to degrade current performance (while you wait for the replication). Heck it could accept reads on data it's copied already while mirroring.

      That'd assume you had extra drive capacity you didn't us

      • by unrtst ( 777550 )

        FWIW, you can do this with Linux DM raid 10. For example, you can start with 2 disks:
        mdadm -C /dev/md2 -c 256 -n 2 -l 10 -p f2 /dev/sda3 /dev/sdb3

        You can later modify that by adding another drive, changing the number of drives ("-n 2" to "-n 3"), and changing the layout to include 3 copies of all data blocks (-p f3). That'll do what you said and copy all that data over, and make it useful for reads (performance-wise).

        You can also set any number of hot-spare drives. If a drive fails, it will rebuild using the
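
        Per the parent's description, that later change would look something like this (a sketch; RAID10 reshape support depends on your kernel and mdadm versions):

          mdadm /dev/md2 --add /dev/sdc3
          mdadm --grow /dev/md2 --raid-devices=3 --layout=f3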

      • Yes, Unraid is perfect for that.

        You need one parity drive in the array (it should be the largest drive in the array, obviously). When you want to replace a HDD with a bigger one, yank that HDD, place a new one in, and tell Unraid that the new HDD is supposed to be replacing the old one. The data will be rebuilt from the parity information, and Bob's yer uncle.
        You can expand the array any time you want, and each expansion rebuilds the parity, so make sure the parity drive is a good, reliable HDD. You can use

    • Yes, Unraid's solution is beautiful.
      The main array is simply a bunch of HDDs, with one or two parity drives.
      You can take HDDs out, replace them with newer ones, you can replace the parity drive(s), etc.

  • Got a used one a few years back with a known fault that could be fixed by soldering a resistor.

    I certainly could've rolled my own but I'm glad I didn't. It "just works" out of the box but still has enough flexibility to run whatever services you want, as you can run VMs or Docker containers. Happy I went with the 4-bay model and started with 2 drives, I ended up using 1 more older drive as a non-redundant storage for media, and still have one slot to expand the array.

    It's an older Atom CPU which doesn't hav

  • ... running on an ASRock Intel Atom motherboard, a 2x4TB RAID array, an additional 4TB for backup, and a 120GB SSD as the system disk. Has worked flawlessly for about five years, together with a Docker container for Nextcloud and a few other things.
  • There is no need to waste money putting a Threadripper in a NAS box. All the desktop versions of Ryzen from 7000 series on include the capability of PCIe bifurcation. Combined with a B650 motherboard and a Hyper M.2 card, you can easily put together a blazing fast NAS with 12TB RAID5 SSDs (16TB raw) for about $1500.
    • Well, that depends on one's needs.
      I want to buy an EPYC Zen 2nd gen setup (second-hand) for my main all-in-one server.
      Reasons:
      - ECC RAM
      - Enough cores for multiple VMs
      - An ample amount of RAM for RAM disks (yes, I know, /tmp is RAM-backed in many distros)
      - Enough cores to run hundreds of Docker containers at the same time (currently I have 25 always on, and about 30 more which I use infrequently)
      - Enough PCI Express lanes and slots for multiple PCIe devices (GPUs, LSI controllers, SFP+ NIC, NVMe SSD cards, etc)
      - PCIe bifurcat

  • TrueNAS (Score:4, Informative)

    by RazorSharp ( 1418697 ) on Saturday August 17, 2024 @05:27PM (#64714348)

    It is open source, leverages ZFS, and is scalable. One version is based on FreeBSD and the other on Debian.

    • TrueNAS Scale is great. I've got it running on my primary NAS and on my backup NAS (my primary NAS takes backups from my other systems, then it is synced to my backup NAS, so there are multiple copies of everything, plus a week's worth of snapshots). Its UI is nice and clean, it makes it easy to manage disks, run a couple of VMs, and so on; with the next major release they're moving to Docker as well.

      I just had an HDD die on me and I simply got an email from my TrueNAS system informing me of this and a co

  • Why NAS? (Score:4, Interesting)

    by Anubis IV ( 1279820 ) on Saturday August 17, 2024 @05:44PM (#64714380)

    You can save a lot if you already have an always-on machine by simply using direct-attached storage (e.g. a $100-200 four-bay enclosure from Mediasonic can hold ~60TB even with a parity drive) and setting up a network share in a few minutes. You can avoid vendor lock-in or other issues, plus Backblaze will back all of it up without any gymnastics. If you don’t have that always-on machine though, then yeah, NAS makes sense.

    • by dfghjk ( 711126 )

      "If you don’t have that always-on machine though, then yeah, NAS makes sense."
      Right, because the NAS is the always-on machine, and it's nothing more than another machine.

      There are essentially no home users that can justify a NAS, nor can they justify a need for 10 TB of data backup. A good backup plan includes off-site storage to protect against the large set of physical disasters that would take out on-premises backups. A NAS is not off-site backup, and when you include off-site backup you no longer need

  • We are using both Synology (commercial) and also TrueNAS Scale (free). Both are very nice and have their strengths. Both seem to be very feature-full, powerful, robust/stable, and regularly enhanced.

    If you have existing hardware around and want to build something yourself, TrueNAS Scale is very appropriate. Completely open-sourced, tons of apps, well-known, Linux-based.

    If you just want to buy something all pre-done, then the Linux-based Synology is also a nice option. Synology also includes 2 free camer

  • Rather than some limited corporate solution. Avoid ZFS because it broke on me in an unrepairable way and there was zero support. Linux + mdraid + XFS is bulletproof.
    • The ironic thing is that I had the opposite happen. At a place I worked at, I set up a Supermicro box as a backup destination using ZFS with a SLOG as a landing zone. One of the SAS controllers glitched and wrote garbage data at random spots in the array. A ZFS scrub was not just able to find the damaged files: because I was using RAID-Z2 and RAID-Z3 for different things, ZFS was able to find the damage and completely repair everything.

      Nothing against mdraid. It is simple, very light on the CPU, bec

  • Got a DS1019+ with 3x 8TB drives on SHR (available space is about 16TB total).

    My original plan was to add 2 more, bigger drives as my needs increase. Currently I have about 4TB available to use. I figure once I'm using about 12TB, I may either add new drives or change the hardware and use the current NAS as a backup to my new NAS (and maybe add drives to the current NAS to back up all the new data as well).

    I am looking for something with better hardware, and Synology's new NAS's that cater to the home / prosumer users

  • TrueNAS (Score:4, Informative)

    by doodleboy ( 263186 ) on Saturday August 17, 2024 @06:25PM (#64714440)

    I first started with a 6-drive RAIDZ2 array on FreeNAS 10+ years ago on a Supermicro X9-based system, IIRC when 8.3.2 came out. I'm now on the current FreeBSD release of TrueNAS. Over the years I replaced the drives with larger ones and migrated to a hot-swap case. No issues in all this time aside from the occasional drive failure. With spinning hard drives it's not exactly fast, but it provides reliable iSCSI storage for my VMware homelab and NFS storage for my various Linux boxes.

    It's true there is a bit of a learning curve to TrueNAS, but I don't think that will be a problem for anyone reading this. Based on my positive results with my homelab, we started using 36-bay Supermicro boxes for NFS and for Veeam backups at work. We currently have 10 Supermicro boxes for various things, but mostly Veeam. The latest pair are 60-bay E1CTR60L boxes with 22TB drives. If you don't need NVMe-type speeds, these provide a lot of storage for not much money.

  • Honestly, you can roll your own, and that's great for a tinkerer and someone who manages their own infrastructure like a home lab or the like. It works, it can be cheaper, and you get what you spec. Maybe it's too easy actually, since you can screw it up and if you have the right backups then it's time wasted setting it straight, and if you didn't have the backups, then WTF get your head on straight.
    You can buy something pre-rolled. Synology has great software, and while I think the hardware is a bit und

    • I know most people here can get a NAS together somehow. It doesn't take much... a Raspberry Pi, two USB drives, perhaps USB enclosures, a network cable, and Linux on the Pi's SD card can get RAID 1 and Samba going. It won't be great, but md-raid is fairly forgiving and can do okay.

      However, for a lot of people, having a device that is a lot easier to admin is a must. Synology is good in this, because the software is easy for even a relative novice to get going. Take the NAS, add drives, go to find.synology.co

  • I don't leave it on or connected unless I'm using it because lightning strikes and power outages are so common in my area. I either boot it to run VMs or access files but storage is so cheap it's trivial to have a few TB on each of my other computers.

    Hanging the 1U case like a painting keeps it easy to access and out of my way with zero footprint. (I use two common L-hooks which cost pennies.) I also load files directly using USB adapters by pulling the appropriate drive caddy so ready physical access is us

    • I don't leave it on or connected unless I'm using it because lightning strikes and power outages are so common in my area.

      A good UPS + an SFP+ network card with an optical-fiber connection isolates your NAS well enough.
      Three weeks ago a close lightning strike partially broke an Ubiquiti switch which powered my PoE cameras. Two ports went tits up (completely dead) and the third port had its PoE indicator LED snuffed out (PoE still worked on it, though). However, it was connected to the rest of the network via a SFP Optic Fiber cable, which passes no electric current.
      Fortunately, the PoE cameras were all fine.

  • I originally had a couple Drobo DAS [wikipedia.org] units:

    - 5 bay connected to a Mac Pro for storage and Time Machine
    - 8 bay connected to a Mac mini. It held media for iTunes, purchased via iTunes as well as rips from my physical media. The mini also ran my HDHomeRun DVR software [silicondust.com], both the server (recordings saved to the Drobo) and client (playback on the TV).

    I liked the Drobos, especially how you could expand storage by replacing a small drive with a new larger drive. Sadly Drobo went bankrupt due to the supply chains

    • At the time, Drobo was pretty unique, especially with the fact that it could expand the RAID array when drives were added, and to the computer it was attached, it just appeared as a USB drive. For the time, it was the best of breed. For network stuff, an iSCSI model was offered which had a bunch of nice features as well, if someone wanted a home SAN (perhaps for Windows), it worked well.

      What killed (IMHO) Drobo was that Synology and QNAP hit the market with relatively inexpensive NAS models that used Linu

  • I'd considered things like FreeNas or Unraid before, and when I do a search today I tend to get good information from posts on Reddit's /r/DataHoarder area https://www.reddit.com/r/DataH... [reddit.com]

    Guessing a post or two from there could be helpful. And they'll have loads more focused experiences to offer. Both about the hardware and software choices.

    Though I don't know if searching while on Reddit.com or searching with Google will find better Reddit results. I've preferred my Google results usually. With the to

  • I can readily back up my important files on a tiny USB drive, so the idea of a NAS is comical.

    • If all you're doing with a NAS is backups, you're wasting it; stick with multiple USB drives and a 3-2-1 strategy.
      But most NAS users are sharing files, running containers for home automation, and a plethora of other interesting uses, backup just being one of them

      • Part of what makes a NAS nice is that most models come with backup software, and the ability to download restore/decryption software so the files can be pulled even if the NAS is destroyed. Manual USB backups work, but having an automated mechanism that can back up to an external drive automatically, as well as to a cloud provider (Amazon Glacier, Wasabi, Backblaze B2), is nice because one can set it up to be "fire and forget", perhaps with an email every so often to let you know all

    • Back in the early 2010s, when one had a Drobo, which presented the drives as a DAS via USB (they eventually went into iSCSI and full-fledged NAS models before imploding), I'd probably say that makes sense, but a NAS is more than just a way to share files. A lot of people use them for media storage with Plex, a place to evacuate/archive files (since Synology and QNAP have decent backup utilities which can use cloud providers), authentication via LDAP or even AD, iSCSI, or just a place with RAID protection

    • You realize the OP was actually asking for a method of backing up their systems that's LESS manual, right? As a result, your entire comment is pretty much irrelevant.

  • Four drive slots, three of which are populated as RAID 5 with about 15TB of usable storage. I back up to an external hard drive every weekend, and to Amazon Glacier every few months or so. Works for me.
  • by _Shorty-dammit ( 555739 ) on Saturday August 17, 2024 @10:17PM (#64714784)

    Been using unRAID for quite a few years now, and I am very happy with it. I've currently got 12 drives in it, with space for a couple more. I have two of them set as parity drives, so there's some redundant safety net. That'll keep it going if a drive or two die, and it will only lose data if a third happens to die at the same time. Of course it is not the same as having multiple backups, I am aware. But it is enough of a safety net for my not-really-worth-anything file backups at home. I like how it only powers up drives that are in use, versus having to have all drives in the array running at the same time. And the fact that they're actually just solo drives you can individually pop into any machine and read the contents of, versus having to have a fully functioning array to read conventional RAID drives. Which basically means if I *did* actually have a third drive fail all at once, I would still be able to read the fully intact files that exist on all the drives that did not fail. If that happened it wouldn't suck as badly as a conventional RAID setup having a similar failure, where the entirety would be basically useless.

    unRAID isn't perfect, but it works very well for a home gamer NAS setup. At least, it does everything I want it to do, and I like the way it does it. When I have had drives die over the years it is quite nice to just pop in a new one and continue on as if nothing happened. That part isn't unique to unRAID, exactly, but I like how it does everything, including how it does that.

    It's built with leftover hardware from one of my gaming PCs from many, many years ago. AMD Phenom II X4 965, a very old quad-core CPU, with just 8 GB of DDR2 RAM, most of which is just disk cache. I think it only ever uses about 1/3 of that for non-cache usage. An old ADATA 480 GB SATA SSD as a cache drive sitting in front of the platter array to speed things up a bit. And the 12 platter drives behind that, connected to the motherboard SATA ports and an SAS 9211-8i 8-port internal 6Gb/s SATA+SAS PCIe 2.0 controller. I imagine there are cheaper/better controllers around these days, too. Looks like I threw that together back at the end of 2017, and it is still going. If anything major dies, another nice thing is unRAID just boots off a USB thumb drive. All the drives could simply be moved to a new motherboard and controller without having to do anything else besides swap everything over and plug that thumb drive into it and boot. It doesn't seem to want to die on me yet, though, haha. I may swap to something newer that's dirt cheap just to gain some efficiency one day, but I have sort of just been waiting for something to die before doing so. And so far, other than an old drive every now and then, it just keeps running.

    Oh, that's another thing. Your drives can be any capacity, and if an old 2 TB drive finally bites the dust you can swap in a new 10 TB or whatever, and just use it. The only limitation is that your data drives need to be the same size or smaller than your parity drives. In a pinch you could use a data drive that is larger, but it will limit its usable size to the size of parity, so at some point you'd also want to upgrade your parity drives accordingly. You're not limited to the smallest drive in the array like conventional RAID.

    • The ability to use different sized drives is one nice advantage to UnRAID. Other than btrfs, there isn't really anything that has that dynamic functionality. unRAID also supports LUKS which is one of the best ways to encrypt things, just because it is block based, so someone can't guess the contents. Add dm-integrity with sha256 hashing, and you have authenticated encryption.

      I personally keep with ZFS, but I generally save up, buy the NAS, buy the drives to stick in the NAS, and once set up, it stays tha

  • Mine's a dual Xeon server that I also happen to use as a NAS. It's a SuperMicro dual-Xeon server board sitting in a SuperMicro 2RU chassis that has hot-swap SATA slots. It's running FreeBSD with ZFS, and I have 8x4TB SATA drives in it, in ZFS' version of RAID 10. It's packed full of dual-port 10G NICs as well and runs a pile of FreeBSD jails (DNS server, sendmail server, web server, DB server, etc, etc) along with serving storage. Works wonderfully for my use cases.

    • That is a hard-to-beat setup. Add a pair of SSDs as a ZIL/SLOG, and you now have incoming I/O landing at SSD speed, so some process that is doing a ton of random stuff is not going to affect writes from other items.

      Backups are easy as well. A ZFS send to an external drive, and all data is saved off securely, and one can do a GFS rotation.
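
      A sketch of that send-based backup, assuming a pool named tank and a pool named external on the attached drive:

        zfs snapshot -r tank@weekly1
        zfs send -R tank@weekly1 | zfs receive -Fu external/backup
        # later runs ship only the delta between snapshots:
        zfs snapshot -r tank@weekly2
        zfs send -R -i @weekly1 tank@weekly2 | zfs receive -Fu external/backup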

  • Some years ago there were a variety of prosumer SATA RAID chassis that connected via Fibre Channel or iSCSI. Typically they had an Ethernet port and a web interface for controlling the pools and volumes. Last year I had some need of attaching storage to an existing server, but was surprised that I could hardly find any chassis like that. There are plenty of questionable USB RAID disk arrays, but nothing with iSCSI or Fibre Channel. I found some old used Dell PERC arrays. I was quite surprised. What do pe

  • by SeaFox ( 739806 ) on Saturday August 17, 2024 @11:16PM (#64714860)

    I've been running TrueNAS for about a decade. I'm still on the FreeBSD variant (TrueNAS Core) for now, but that will have to change soon. Hardware is a Supermicro Mini-ITX board/CPU combo with a Xeon D-1541 and 32 GB of ECC RAM. Storage is two mirrored pairs of drives (2x14TB and 2x18TB) I shucked from Western Digital external hard drives I bought on sale, with the OS and some apps running off solid-state storage. The case is an Ablecom CS-M50, which I think Supermicro now owns the rights to and sells as the SuperChassis CSE-721TQ.

    This setup is like eight years old now, but running fine. This is obviously overkill for just a file server, but I run some media server apps as well and for personal use it suits me fine, even without any hardware accelerated video features. At this point I'm needing to get a new case at least that allows more spinning rust so I can increase storage.

    The NAS is considered the "secondary" storage in my home, and for an offsite backup I just copy files from the primary machines to external hard drives in travel cases I store elsewhere, collecting them occasionally to update their contents. This isn't as fancy as Backblaze, but I don't have a fiber internet connection to upload everything in any practical time frame.

  • I have a few Synology units, but recently I bought a QNAP NAS that is x86 and has HDMI out. I added two SSDs: I threw the OS on one, and the other is purposed as a SLOG, so a ton of random I/O sent to the pool will hit the SSD and be handled by that. For the OS, I went with Ubuntu, but Debian works just as well, and of course it is set up with LUKS for the root filesystem, and the ability to SSH in and put in the encryption key on boot. ZFS is configured to have encrypted subvolumes with a keyfile, so after the enc

  • My file server is a 36c/72t Lenovo SR630 with an LSI 9600-24e running to a pair of 24-bay SAS chassis. There's ~20TB of U.2 drives connected and another ~250TB of too-old-to-use-in-production SAS drives. That system is mostly a VM host, but it has 40Gb connections to my video editing workstation and my partner's gaming PC.
    I have an LTO6 changer and important files are sync'd back to my datacenter and/or Crashplan.

    Shockingly, none of this is loud except the first minute or so of the SR630 powering on.

  • Hardware:
    - two old PCs with four hard disks (maybe 15 years old by now).

    Software:
    - Devuan Linux and btrfs RAID1
    - NFS shared data volume (samba optional for Windows)
    - a sense of simplicity: keeping the amount of data in check by deciding what is truly important to keep and what will not be used more than once or can be obtained/reinstalled easily.
    See also Swedish death cleaning.

  • by awwshit ( 6214476 ) on Saturday August 17, 2024 @11:33PM (#64714886)

    I went with my own PC and FreeNAS, now TrueNAS Core.

  • by RitchCraft ( 6454710 ) on Saturday August 17, 2024 @11:36PM (#64714890)

    I started out with a Synology DS213 with a pair of 2TB WD Red drives about 13 years ago. When that ran out of room I purchased a Synology DS216 with a pair of 4TB WD Red drives. When that ran out of room I purchased a Synology DS218 with a pair of 8TB WD Red drives. All three Synology NAS boxes still run perfectly! About 6 years ago one of the 4TB drives died, but it was still under warranty. I put the new drive in and, after the NAS rebuilt the array, I was back in business. I use the 2TB NAS for storing and transferring large files around on my network and don't use it for backup any longer due to its age. Synology, at least for me, just keeps on ticking.

  • CM Stacker 830 - FX 8320 - M5A99FX Pro R2 - 16GB HyperX 1600 CL9 - XFX 5670 - SeaSonic S12G-750 - 6x CF-V12HP - 16x HGST Ultrastar HC550 20TB - DW1640
    Entire fileserver is "RAID-1"ed (so to speak) at a different physical location
  • I use an old Dell R610 + a DAS chassis, running TrueNAS as my main file server.

  • Not a NAS, but I use a self-built computer with Windows 11 Pro, an AMD Ryzen 5 3400G CPU, 64GB RAM, a Dell PERC H710P hardware RAID controller, and 8 x 6TB Hitachi enterprise-class SATA hard drives. I have the hard drives configured in a hardware RAID6 with no hot spare (I have 1 cold spare) for a total of around 32.7TB of storage space, and I boot off of an NVMe drive. I've got the server built in a Fractal Design Define case; the case is sound-insulated with 140mm fan mounts (the server is very quiet) and has 8 h

  • Mine is a Slackware machine with 12 disks (three RAID-5s) with NFS and SMB enabled.

    • by Bob_Who ( 926234 )

      Mine is a Slackware machine with 12 disks (three RAID-5s) with NFS and SMB enabled.

      The real deal. Classic, stoic, plain, like steel-cut oats that need to cook a while because they're not instant oats. This is the genuine article: fundamental, reliable, and high fiber. You can be very regular over time and not worry about the system taking a dump. A system for the ages. Almost Amish, not pink, and extra slack.

  • I personally prefer Synology over QNAP at home; I have both, but the QNAP gets in the way much more.

    But OP really needs to separate local storage from backups. If you go with a Synology as local storage, then Backblaze has some options for backup that made sense for me... although I will admit I never got around to finishing up my project or living with it. With everything in one place, backing up to Backblaze is more trivial.

    I'd also think talking about a personal cloud a la Nextcloud is an appropriate discus

    • I would say QNAP has better hardware; Synology's software is better. With QNAP's x86 units that have an HDMI port, you can install your own OS, which makes for a solid setup, and if one wants a web UI, there is always installing Cockpit, or going with TrueNAS SCALE.

      QNAP does have a lot of apps that are good in niche situations. For example, my WordPress sites are easily backed up via the QNAP plugin to a local NAS, and this is completely independent of the backend of WordPress, be it PostgreSQL, or whatever

  • This QNAP TS-563 NAS review [tomshardware.com] sold the machine to me in 2016. It still has 5 Western Digital 6TB disks in it. I bought the cheaper version that came with 2GB of RAM, along with a separate 8GB RAM module to replace the included 2GB.

    I bought it via Amazon directly from QNAP I believe. The first one that arrived flat out didn't work for some reason but QNAP support could not have been better. Upon diagnosis over the phone the guy quickly decided the unit wasn't solid and immediately arranged a replacement whic

    • Adding to my notes: QNAP has upgraded the OS over the years and still sends updates. The device is almost 9 years old and outdated, unsupported software has never been a problem, although QNAP sometimes does replace their older software with newer stuff. Now I use BOXAFE as I noted, but years ago before it was released I used some other, now unsupported QNAP software for the same purpose. I know of no other computer hardware so old that still gets manufacturer software updates keeping it so fresh and easy t

  • TLDR: KISS, a system that is simple to set up, easy to fix when it breaks, and does not get obsolete

    Based on life lessons, my home setup is a very simple roll-my-own. Raspberry Pi 5 with an SSD on the USB 3 port. [Yes, I know, there is PCIe support on the RPi5; as far as I recall when I was setting it up, support for that was still somewhat experimental and Jeff Geerling was still struggling to figure that one out. For me, SSD on the fastest USB port is good enough, and KISS plus life lessons told me that t

  • I use it primarily for media, so there are a few reasons why I moved to it from my QNAP.

    - It's pretty simple.
    - Individual files are fully contained on single drives. If you lose two drives, you still have whatever is on the remaining drives.
    - Any size drives can be added at any time with the caveat being that your parity drive controls the maximum drive size
    - Runs well on cheap hardware

    To be clear, my QNAP kicked Unraid's ass from a performance perspective. Not even close. And you don't get the speed benefi

  • Use spinning rust if you need the space, SATA SSDs if you can afford them, and NVMe if you have the PCIe lanes.

    ZFS lets you cache on faster media, so that helps. Get 16GB+ RAM. Heh, I just bought a 32GB DIMM for $65 for a laptop - 16 sounds old-fashioned.

    ZFS encryption lets you keep your data safe and ZFS mirroring lets you easily split a drive for offsite (buy 3 14TB e.g. drives for an onsite mirror, one remote, one in transit).
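
    A sketch of that mirror rotation (hypothetical disk names; zpool split requires every vdev in the pool to be a mirror):

      zpool attach tank disk1 disk3     # add a third side to the mirror, then wait for resilver
      zpool split tank offsite disk3    # peel disk3 off as a standalone pool named "offsite"
      # carry the offsite disk away; at the other site: zpool import offsite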

    Hot-swap SATA drive bays are good. A midtower case fits in a rack. Get an efficien

  • zrep every 1/2 hour
    The work laptop does zrep on demand, as it's not always connected.

    The only problems I've ever had with it were due to the fans in the hard drive cages eating up all the 12-volt supply.
    Once I gave them their own separate connection back to the PSU, instead of powering off of the cage backplane, there were no problems.

    Motherboards are ThinkServer TS140s with Core i3 and ECC memory.
    Disks are Seagate.
    Cases are Rosewill 12-bay hot-swappable.

  • A tiny headless ARM quad core, USB hub, and a pile of internal hard drives running over that USB hub.

    Cheap, fast enough, can run a few containers (torrent, MeTube, Pi-hole).

    You can spend a lot of money on faster machines, but this pushes around 800 Mb/s just fine over the LAN and uses around 25W of power.

  • 8 * 8 TB HD
    4 * 2 TB SSD
    64GB RAM

  • Have three QNAP devices -- a TR004 storage expander attached to my Windows server for network backups, and a TS-251+ and TS-431P2 on my gigabit network for extended storage. Have managed to escape from the curse of SMR, found out quickly how poor their performance was and returned the drives to WD. QNAP hardware seems well behaved but the software not so much. And error reporting is a function they have yet to discover. For months, one of the TS devices has complained it cannot get a time from the windows

  • I was surprised that the hatred for ZFS was not more obvious. Many times when ZFS is brought up, people hop on to say it's not good because of: not using the GPL; not built into the Linux kernel; violates storage layering; does not have the features desired (not that those "features" are common elsewhere); etc...

    On the other hand, BTRFS is not close to feature parity with ZFS. Neither BCacheFS nor Hammer(1/2) is even close to BTRFS, let alone ZFS. All 3 still need far too much development and testing time. I
    • by tbuskey ( 135499 )

      Auto error correction on your data
      Live adjustment of "partition" size
      Auto snapshots for oops! moments; multiple snapshots because of COW
      Built-in compression speeds up compressible data
      It manages storage at all layers. I don't care about layering and artificial separation of function.
      The CLI works well enough that a GUI isn't needed.

      I'm pragmatic to not care about the CDDL vs GPL debate. There are distributions that ship it included. And BSD and Solaris distributions that ship it w/o issues because they do not u
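
      Those features map to one-liners (a sketch, using a hypothetical tank/data dataset):

        zfs set compression=lz4 tank/data     # built-in compression
        zfs set quota=500G tank/data          # live "partition" resize
        zfs snapshot tank/data@pre-upgrade    # cheap COW snapshot for the oops case
        zfs rollback tank/data@pre-upgrade    # the recovery
        zpool scrub tank                      # verify checksums and self-heal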

  • Well, the first priority is to identify what you're going to use as a more automated backup method, and then find the best platform for your tools from among the ones that support that software.

    For client-based backups (Mac, Windows, and Linux) I personally use Nextcloud. It's pretty flexible because it can run on multiple NAS platforms, but notably not all of them. You can also roll your own solution very easily with it. You can direct local folders to be hosted on the Nextcloud instance and then your cli

  • Don't use any. Isn't worth it.

    I use physically small HDDs/SSDs that run on the current fastest USB port I have on the devices I need to back up or archive data from.
    These days they're so small you actually run the risk of losing or misplacing them. I just got my girlfriend two SSDs, one 0.5TB for macOS Time Machine backups and the other 2TB for archiving. They are about as small as a cigarette lighter.

    Sure, I by now have a whole stack of SSD/HDDs accumulated over the years and consolidate those once every two ye

  • I have a large storage drive with its own AC power connection plugged into a Raspberry Pi. The drive is formatted with ZFS. The whole thing cost me less than $200 and it's been running for about eight years now. I can upgrade the drive as I run low on space.
  • Disks are binary devices: they have two states, new and full. So go big.
    Buy at least 4x15TB disks, an AMD processor, an iLO motherboard, and Linux + RAID5 over the top. I've run a setup similar to this for 8 years with NFS, and Bacula for backups, or simple tar and scp.
    The alternative: the same hardware and TrueNAS.
    You gain ads, but miss out on flexibility.

  • If you're really into tinkering with stuff, sure, you can build your own and you will have fun doing that. Most people don't need ridiculous-spec hardware at home; you are not running data-center storage loads, and your network speed is likely to be slower than anything the NAS itself can do internally anyway.

    I'm not really into tinkering at that level anymore, so I just use a Synology NAS that backs up to Backblaze B2. It's all automated, works great. I have a small VM that I use to collect logs and do

  • I currently have an old 2U Rackmount IBM X3650 M3 sitting in my basement. It's huge, and power hungry, but it's got 16 cores, 72GB of ECC RAM, and is holding 10x5TB SMR drives and 6x1TB CMR drives, all 2.5" (as it only takes 2.5" drives), running TrueNAS Core. The server itself was cheap when I got it 3rd hand (CAD$100), the software's free, and it did what I needed it to, but I will soon be upgrading to a 36-bay Supermicro Skylake system, because 3.5" drives are just plain better, and the increase in both
  • HDDs mirrored. The only real drawback is the processor/RAM are anemic - I did add a RAM stick which went bad after a year or two and caused some data corruption, but I was able to figure out wtf was happening and remove it, and the DS920 went back to being rock solid. I run a Plex server on it, but mostly just use it for storage. Their web-based UI is slick and pretty straightforward. Their updates to software can be insanely behind.

    I occasionally download and install a newer Plex build for it directly from Plex because Synology's shit is so far behind, but it doesn't matter much. They have a desktop client for Windows machines which is also straightforward.

    I'm not familiar with QNAP but I have read stories of their shit getting nasty remote access hacks. I would never leave this thing facing the internet. Everything sits behind a pfSense firewall which has 0 open ports. I don't care about trying to stream PLEX outside my local network.

    Sure, you could build something with nicer hardware for the money but I'm fucking tired and it's not worth my time to manage - that it Just Works is the main benefit.

  • Isn't this basically the "killer app" for old systems that are more or less perfectly functional but have been replaced for their original use by something newer?

    I have two - maybe actually three now? - sturdy servers that were beefy at the time and I haven't had the heart to toss them out. They'd certainly need bigger drives, and obviously I'd replace the Windows OS... but otherwise wouldn't they be basically perfect to sit in a corner of my home network, chugging away backing crap up?

    I cannot imagine tha

  • by kriston ( 7886 ) on Sunday August 18, 2024 @11:32AM (#64715890) Homepage Journal

    I use two Synology devices. I recommend using the "+" versions because they have a much, much stronger Intel processor and support expandable memory using SODIMMs.

    Avoid the "j" and non-"+" versions of these devices.

  • DS1515+. 3 drives in RAID5 (if storage grows too much, I will switch to a 3-way mirror).

    2 64GB SSDs as cache.

    2 external 2.5" drives as hyperbackup.

    2 1GbE Ethernet cables run directly from the NAS to the desktop as a bonded pair.

    iSCSI drives for the Steam folder and other such stuff, SMB shares for docs and stuff.

    It will go out of SW support around October next year; I will re-evaluate then and there.

  • I'm a Unix sysadmin and created my 1st NAS at home in 2000 on a used Sun system given to me by work.
    The 2nd was a dual-PII Compaq running Linux with software RAID across the drives, LVM for partitioning, and ext3 IIRC.

    I have NFS, Samba, and a web server. The OS drives were mirrored. If I ran out of space on a partition, I could increase the size with LVM and some downtime. Compared to the NetApp I admined at work, it was a pain.

    Downtime is ok on a home system. The drives failed often enough before TB sized drives wer

  • And it turned out to be more than a simple NAS. An EPYC 7282 on a Supermicro H11SSL-i motherboard with 128GB ECC RAM, currently 5x8TB HDDs in RAID-Z, with a 32GB Optane drive as cache, plus 2x512GB (RAID-1) and 4x1TB (RAID-10) NVMe SSDs (home and Docker storage respectively). 10Gbit network, in a nice and quiet Fractal Design Define R5 case. Also used for virtualizing some regularly fired-up environments.
