In your post you note that this is going to be a RAID for home use - domestic purposes - but you don't really say whether you're planning to run this as a dedicated NAS or whether you just want the extra security of having data archived within your running machine.
Backblaze's published drive statistics show enterprise drives to be either no more reliable than consumer drives, or only so marginally more reliable that the extra cost can't be justified.
As I recall, SMR drives have to do a period of (I think the term is) "track trimming" between the initial write to the drive and the data reaching its final location, crammed like dehydrated sardines under a steam hammer - hence the "S" in 'Sardine Magnetic Resonance'. So that would impact quite badly on write-heavy uses - say, the company sales database server - but less badly on (say) a media server, after the initial population of the device.
"I have just one word for you, my boy...plastics."
- from "The Graduate"
Quick Note On Drives (Score:5, Informative)
If you're planning on setting up a dedicated server for home use, then (apart from noting that after a few days you'll wonder how you ever managed without it) I would strongly recommend that you give particular attention to your choice of drives.
All of the well-known manufacturers offer drives particularly designed for NAS use. You will quickly find that they are more expensive (often quite a bit more expensive) than 'desktop' drives of similar capacity. Do not be tempted to forego proper NAS drives, even if it means that you scale down your capacity requirements to start with. The reason I make this recommendation is simply that, if your NAS delivers on its promise, it will soon fade to invisibility on your home network and you'll forget it is there. Right up to the point where you experience your first drive loss... at which point you're going to wish you'd bought the best drives you could find. I can't speak to any make other than Western Digital (which I've found to be excellent), but their range of "Red" NAS drives comes in a regular variant [5,400rpm] and a Pro variant [7,200rpm]. I'm running RAID6 and can comfortably stream 4K content from that setup, but you might want to get a bit more advice or read up on performance if this is important to you.
Don't buy all your drives in a single order or from the same supplier. Not because you should expect defects in modern drives [these are thankfully extremely rare] but because every now and then you'll get an idiot packing shipments and receive a consignment of drives loose in a box without packing material. Who knows what sort of treatment they have had to survive on their journey to you.
To get a decent level of redundancy you should probably set your sights on a RAID-5 configuration with 4 drives as a starting point [which will give you capacity equal to 3 of the drives], but if you can stretch a bit further, RAID 6 will give you enough resiliency to survive the simultaneous loss of two drives.
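To make the capacity arithmetic above concrete, here is a minimal sketch. The drive count and 8 TB size are illustrative assumptions, not recommendations:

```python
# Hypothetical helper: usable capacity for simple parity RAID levels.
# The parity count is also the number of simultaneous drive losses the
# array survives: one for RAID5, two for RAID6.

def usable_tb(drives: int, drive_tb: float, parity: int) -> float:
    """Usable capacity = (number of drives - parity drives) * drive size."""
    if drives <= parity:
        raise ValueError("need more drives than parity devices")
    return (drives - parity) * drive_tb

# RAID5 (one parity drive's worth of overhead): 4 x 8 TB -> 24 TB usable.
print(usable_tb(4, 8, parity=1))  # 24
# RAID6 (two parity drives' worth of overhead): 4 x 8 TB -> 16 TB usable.
print(usable_tb(4, 8, parity=2))  # 16
```

Note that the overhead is a fixed number of drives, so parity costs proportionally less as the array grows.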
When you come to cable up your setup, do take the time to carefully read the technical specification of your RAID controller. I’m honestly not sure how this will work between ‘hardware’ and ‘software’ RAID, but at least some of these options, coupled with the right hardware [if you are in luck] will allow you to have a ‘hot swap’ capability.
Initially setting up a RAID [or recovering from a volume loss] can take a fair old bit of time [it will depend on your IO rates and the drive performance], which means that my last suggestion might well be unpalatable to you: the reason you are investing in a RAID setup is because you have data that you don’t want to lose in the event of a head failure. But for that preservation to happen, you’re going to need to know how to recover your RAID in the event of an error. So think about simulating it. Easy to do if you have a hot-swap capability... but either way find out about drive swapping and, before you put any data on your RAID, try a drive swap. Make notes / take screen shots.
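If you end up on Linux software RAID (md), one way to make the drive-swap drill above routine is to script a check for a degraded array. This is a sketch only: the /proc/mdstat excerpt below is made up for illustration, and on a real system you would read the file itself.

```python
# Sketch: spot a degraded Linux md (software RAID) array by parsing
# /proc/mdstat. md prints one letter per member in the status line:
# 'U' for up, '_' for a failed or absent drive, so [UU_U] means the
# third member is gone.

import re

SAMPLE_MDSTAT = """\
Personalities : [raid6]
md0 : active raid6 sdd1[3] sdc1[2] sdb1[1] sda1[0]
      15627725824 blocks level 6, 512k chunk, algorithm 2 [4/3] [UU_U]
"""

def degraded_arrays(mdstat_text: str) -> list[str]:
    """Return md device names whose status line shows a missing member."""
    bad = []
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+)\s*:", line)
        if m:
            current = m.group(1)
        elif current and (s := re.search(r"\[([U_]+)\]", line)):
            if "_" in s.group(1):
                bad.append(current)
            current = None
    return bad

print(degraded_arrays(SAMPLE_MDSTAT))  # ['md0']
```

For the drill itself, mdadm's --fail, --remove and --add options on a scratch array are the usual way to rehearse a swap before real data is at stake.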
And it's hopefully obvious, but when you do buy your drives, buy enough for *at least* one full round of drive swaps. In other words, if you're going RAID5, buy one drive more than you need. If you're going RAID6, buy two spares. Mark them up and keep them safe.
Lastly, it’s kinda redundant... but they haven’t yet invented a single-unit RAID that you can build at home that will survive a house fire. So keep going with your off-premises backups, no matter how good your RAID setup.
Re: (Score:2)
Re: (Score:2)
Chiming in to second the choice of WD Red. That's what I've got in my Synology NAS. Also made sure they were CMR, not SMR drives.
Do NOT get SMR drives for RAID.
Re: Quick Note On Drives (Score:1)
Screw NAS drives, they are junk for idiots. Just do yourself a favour and get enterprise drives; they cost a bit more but are worth every penny. If you don't understand why they are better I can't be bothered explaining because meh you don't understand storage in the slightest. As for software vs. hardware RAID it all depends because it always depends. However for home use software RAID is almost certainly the way to go. My favourite trick mind you is to put /boot on a usb drive. My day job involves looking after more st
Re: Quick Note On Drives (Score:4, Interesting)
Jabuzz, before posting this reply I went back and read a few of your comments, and I find most of them to be very well informed, constructive, and genuinely valuable to the threads in which you make them.
I hope you don't mind me chipping in here, but I think in this case you have something of real value to add and you have not done so because, in your own words, you "can't be bothered". But the OP of this article was someone who specifically came to 'Ask Slashdot' for advice on setting up a RAID on a home server: someone who, implicitly, doesn't understand storage.
I understand that you might not want to sit and spend the time it would take to write out a lengthy explanation of your own, but I'm also pretty sure that you would be able to point someone to a web page somewhere that did a decent enough job of conveying the point you would like the OP to understand.
I don’t mean to be rude or condescending, but clearly you have relevant, topical experience to share. Just a thought.
Re: (Score:2)
Depends on where you're putting the NAS. For my own NAS, it's in a far-off storage area, so I don't mind the loud Seagate Exos drives. But they are LOUD and thunky. If it's going directly in an office, I'll use the IronWolf.
Re: (Score:2)
Re: (Score:2)
Which part of
or
or
was unclear?
When I deal with professionals, one of the first things they'd do is (now, this is difficult, concentrate!) read the fucking job specification.
Re: (Score:2)
Hmm, Slashdot's editors use systems that insert characters that Slashdot's execrable non-ASCII-handling system can't handle. Naughty "EditorDavid"! That emdash certainly wasn't in my submission.
Don't tell me that he's using something like Word to edit these things? And he's forgotten to disable the default typesetting interference options.
"Why" is the obvious question.
Re: (Score:3)
Chiming in to second the choice of WD Red. That's what I've got in my Synology NAS. Also made sure they were CMR, not SMR drives.
Do NOT get SMR drives for RAID.
^^^^ this ^^^^
Trouble is, it can be difficult to know if a drive is SMR, and some drive makers were hiding the fact that the drives were SMR.
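On Linux there is one partial check worth knowing: the kernel reports a drive's zoned model through sysfs. The sketch below just reads that file; the big caveat, and it is exactly the problem described above, is that drive-managed SMR hides the shingling from the host and reports "none", so a "none" answer does not prove the drive is CMR.

```python
# Sketch: ask the Linux kernel whether a block device advertises zoned
# (SMR) behaviour via /sys/block/<dev>/queue/zoned.
#   "host-aware" or "host-managed"  -> definitely SMR
#   "none"                          -> CMR *or* drive-managed SMR
#                                      (check the datasheet)

from pathlib import Path

def zoned_model(device: str, sysfs: str = "/sys/block") -> str:
    """Return the kernel's zoned model string for a block device."""
    path = Path(sysfs) / device / "queue" / "zoned"
    return path.read_text().strip()

# Usage (device name is an example):
#   zoned_model("sda")  ->  "none" / "host-aware" / "host-managed"
```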
Re: (Score:1)
Not that hard to tell CMR from SMR.
https://blog.westerndigital.co... [westerndigital.com]
Re: (Score:1)
https://www.seagate.com/intern... [seagate.com]
Re: (Score:2)
Not that hard to tell CMR from SMR.
https://blog.westerndigital.co... [westerndigital.com]
When they're willing to admit to it.
https://blocksandfiles.com/2020/04/20/western-digital-smr-drives-statement/ [blocksandfiles.com]
One place I do work for has a computer I use with a drive (I can't remember the brand; Toshiba maybe?) that is 1 TB, and every so many seconds it churns and gyrates when nothing else is going on. If software wants disk access, it gets blocked for the several seconds of gyrations.
I did much searching about that drive and found mostly complete specs, but essentially nothing talking about SMR or CMR.
Re: (Score:2)
Yeah, I thought about that question, but didn't weigh it too
Re: (Score:2)
Yes, buy drives in separate orders, but for a reason not stated: Drive failures are not independent. If you have a bunch of drives from the same batch, and run them with the same load, they'll tend to fail at the same time. The math behind RAID assumes independent failures. The nature of RAID results in nearly identical write patterns to all the drives. Also, reconstructing a lost drive puts significant stress on the remaining drives. All told, this means that if all your drives are from the same batch
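Here is a back-of-envelope sketch of the independence assumption described above. All the numbers (a 2% annual failure rate, a 24-hour rebuild window) are illustrative assumptions, not measurements:

```python
# P(another drive dies during the rebuild), assuming failures are
# independent -- precisely the assumption that same-batch drives under
# identical load quietly violate.

def second_failure_prob(drives: int, afr: float, rebuild_hours: float) -> float:
    """Chance that at least one surviving drive fails in the rebuild window."""
    p_window = afr * rebuild_hours / (24 * 365)   # per-drive prob in window
    survivors = drives - 1                        # drives still in the array
    return 1 - (1 - p_window) ** survivors

# 4-drive RAID5, 2% AFR, 24 h rebuild: roughly 0.016% -- tiny *if* the
# failures really are independent. Correlated same-batch failures can be
# orders of magnitude more likely, and this formula cannot capture that.
print(round(second_failure_prob(4, 0.02, 24), 6))
```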
Re: (Score:2)
Drive failures are not independent.
Even more to the point, the discovery of drive failures is not independent either.
When you discover that a drive has failed, it is not because it just failed. It may have failed moments ago. It may have failed hours ago. It may have failed weeks ago. The data coming off the disk wasn't right, but that alone isn't the failure; the failure is when the error can no longer be corrected.
So here you are rebuilding your array... and what's actually happening? The entire array is being read, and therefore verified. Every single bit.
Re: (Score:2)
WD Red used to be a good choice. But they now use SMR drives in that range which are completely unsuitable for ZFS - resilvering time is measured in weeks/months.
I have built with WD Reds in the past, and they were good, but my latest build was Seagate IronWolf.
Re: (Score:2)
I second WD drives, except I only use their Enterprise line. I actually sold a cluster of 500TB back in 2011 or 2012 that utilized Coraid (now defunct, although they're making a comeback) SANs (commodity Supermicro with their own O/S based on Plan 9). During the 7 years I serviced this installation, of 288 2TB WD Enterprise drives NONE had completely failed. I had about 10 which had thrown sector re-mapping errors, and because of my deal with my client, I replaced all 10 and ended up using them around fo