Ideas for a Home Grown Network Attached Storage?
Ken asks: "It seems that consumer level
1TB+ NAS boxes
are all the rage
right now. Being a digital packrat, with several computers/entertainment devices on my home network, I am becoming more interested in getting one of these for my home. Unwilling to dish out 1K or more up front, and possessing a little of the DIY spirit, I would like to build my own NAS and am interested in hardware/software ideas. While the small form factor PC cases are attractive, my NAS will dwell in the basement so I am thinking of a cheap/roomy ATX case with lots of power. I think that integrated gigabit Ethernet capabilities and PCI-Express on the motherboard are a must, as well as Serial ATA HDDs, but what processor/RAM? How strong does a computer really need to be to
serve files? What about the OS? Win2K3 server edition? WinXP Pro? Linux?"
"I have been using Red Hat and then Fedora Core since it came out but only in a workstation role, and I have little experience with other flavors. What file system should I use for maximum compatibility? I will need it to work with Windows, Linux and several UPnP devices. I am planning on starting out with two or three HDDs in a RAID 5 config. and I would like to be able to add more HDDs as space is needed without any major changes. Thanks for any ideas."
Why? (Score:4, Informative)
Re:Why? (Score:2)
Re:Why? (Score:5, Interesting)
Seriously, why buy something when you could 1) build it (probably cheaper) yourself and 2) learn more from building it? Most DIY projects have a habit of benefiting you at some point in the future in ways that you can't predict when you start them.
Either you - not you personally, the rhetorical you - 1) don't have the time, which is acceptable, or 2) you don't have the knowledge, which you should be trying to gain, or 3) you are lazy, which is really quite sad.
There's more to life than just spending money on a problem. There's actually figuring out the solution to the problem.
$.02
Re:Why? (Score:2)
In my personal case, I do not have the time to invest right now, but I do have the knowledge. I actually was looking at building a NAS box recently, but since then I've booked a ton of new contract work. In my case, it is easier to spend money on a known, working solution.
Finally, I thin
Re:Why? (Score:2)
If you have rolled your own, you have a better idea of how things hook up and can find errors more easily. If you have something store-bought, you may not be able to save your data.
And it's my experience that for stuff like NAS this is especially true. Hard disks will fail; it's just a question of time.
Items like this NAS often portray themselves as "turn key" solutions. In my experience there is no such thing. And in many cases you spen
Re:Why? (Score:2)
Re:Why spend $1000 on $750 worth of hardware? (Score:1)
$ 50 case
$100 motherboard with Gigabit ethernet
$100 1 GB of memory
$500 (4x250 GB HD @ $125 each)
$0 to $100 Linux with Samba package
------------
$750 to $850.
My solution (Score:3, Interesting)
If you do want it all on one big raid5 partition, good luck finding a way to add additional disks into it without rebuilding.
Re:My solution (Score:5, Informative)
When you get new disks, simply create a new RAID5 array, add it to the volume group, extend your current logical volume, and grow the FS on it.
You don't want everything on one big RAID0; I lost 200G of data that way. I can say I'll never make that mistake again.
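For the curious, the grow path looks roughly like this with Linux mdadm and LVM. A sketch only: the device names and the volume group "vg0" are made-up examples, and an ext3 filesystem is assumed.
# Sketch only: /dev/sd[def]1 are the new disks, vg0/data the existing volume.
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdd1 /dev/sde1 /dev/sdf1
pvcreate /dev/md1                  # make the new array an LVM physical volume
vgextend vg0 /dev/md1              # add it to the existing volume group
lvextend -L +230G /dev/vg0/data    # grow the logical volume (size is an example)
resize2fs /dev/vg0/data            # grow the ext3 filesystem to match (umount first on older kernels)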
See: (Score:2, Interesting)
Re:See: (Score:1)
A simple flash Linux distro with a converter board that plugs into an IDE slot. Supports all the standard RAID setups. I recommend investing in cooling for hard drives -- not things you want to have fail on a NAS system.
Re:See: (Score:1)
Geez... (Score:2)
?
Be aware (Score:5, Interesting)
Tony Battersby posted a patch to the LBD mailing list recently to address the ones he could find, but lacking a full audit, you probably shouldn't use any filesystem other than XFS.
Considering the gravity of these bugs, you might consider using XFS for everything; if the developers left such critical bugs in for so long, it makes you wonder about the general quality of those filesystems.
Do you know the bug references? (Score:2)
Re:Do you know the bug references? (Score:1)
Re:Do you know the bug references? (Score:3, Informative)
Patch 1 [unsw.edu.au]
Patch 2 [unsw.edu.au]
Says Tony:
"Here is an "example" patch to fix some of the LBD issues with various
filesystems (ext3, xfs, reiserfs, afs). Unfortunately it looks like
there are many more LBD problems with the filesystems that I didn't fix,
so I am just calling this an "example" patch that shows some of what
needs to be done, but doesn't fix everything."
He later mentions the only XFS fix is in some debugging
What of IBM's JFS? (Score:1)
Re:What of IBM's JFS? (Score:1)
Re:Be aware (Score:2)
Re:Be aware (Score:2)
Is data loss a risk? sure, but what is the most likely cause?
Re:Be aware (Score:2)
I believe Tivo units use the XFS filesystem for storing their multimedia, which from what you say makes sense.
Re:Be aware (Score:2)
Small form-factor is not a problem (Score:2, Informative)
The machines are fast enough to do anything a fileserver would need (and then some), they're quiet, as they use Duron chips for low heat/power, and they look good enough to put on your desk or wherever you
Re:Small form-factor is not a problem (Score:2, Informative)
before posting misinformation.
I just checked the spec. Like most other SFFs, it has only ONE internal 3.5" storage bay.
Samba has the best performance. (Score:3, Interesting)
As for hardware, for small servers I like Linux software RAID, but for a big multidisk farm, you can't beat 3Ware cards. They take nice cheap IDE drives and turn them into a SCSI RAID. Moderately expensive, but beautifully functional. Finally, I've been having good luck with Seagate and WD drives, and bad luck with Maxtors. Your mileage may vary.
[1] Samba beats the MS implementations of SMB/CIFS. No guarantees about Samba vs NFS, GFS, Coda, whatever.
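For reference, a minimal Samba share is only a few lines. A sketch only: the share name, path, and user are made-up examples, and Samba is assumed to be installed already.
# Sketch only: append a share to smb.conf and enable a user.
cat >> /etc/samba/smb.conf <<'EOF'
[storage]
   path = /srv/storage
   writable = yes
   valid users = ken
EOF
smbpasswd -a ken             # give an existing Unix user a Samba password
/etc/init.d/samba restart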
Re:Samba has the best performance. (Score:1)
Therefore, Samba is a must, but other protocols should be considered too.
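For the Linux clients, for instance, exporting the same tree over NFS is one line in /etc/exports (a sketch; the network and path are examples):
# Sketch only: allow the local subnet read/write access over NFS.
echo '/srv/storage 192.168.1.0/255.255.255.0(rw,sync)' >> /etc/exports
exportfs -ra                 # re-read /etc/exports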
Re:Samba has the best performance. (Score:3, Insightful)
When building your own, you're looking at unique specs. If you were buying something for a corporate environment I would highly recommend getting something with Samba, even if you don't use any Windows at the moment (for when the marketing consultant with a laptop needs to upload the large video file or whatever).
But for home use? "Look at your needs" is better than "here's the best".
--
Evan
Re:Samba has the best performance. (Score:2)
Log in.
--
Evan
Re:Samba has the best performance. (Score:2)
Mac Mini + Firewire Enclosure (Score:5, Interesting)
I was thinking of using a mini and a single firewire disk for a somewhat similar project.
But, OS X has RAID capability, so you could use something like this:
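On the software side, a sketch from Terminal (the disk identifiers and set name are made-up examples, and this erases both disks):
# Mac OS X software mirror across two external FireWire disks; sketch only.
diskutil createRAID mirror MediaMirror HFS+ disk1 disk2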
Re:Mac Mini + Firewire Enclosure (Score:2)
Doh.. invisible link
This Device [cooldrives.com]
Re:Mac Mini + Firewire Enclosure (Score:1)
I say buy a big-ass case (it's going in the basement, after all), install your favorite distro and
Why not... (Score:2)
Re:Why not... (Score:1)
Re:Why not... (Score:1)
Re:Why not... (Score:2)
I want to build a 2.8TB storage array (Score:4, Interesting)
Re:I want to build a 2.8TB storage array (Score:2)
This won't be the best solution noise-wise, but it will extend the drives' lifetime:
Cut extra holes in the case and build an air-flow tunnel to help cool the drives.
I measured a drop from 46C to 25C with a 12cm Nexus low-speed fan.
My setup looks roughly like this from above:
| _ ____   |
|/ |    |  :
 ==| HD |  :
 | |    |  : holes to allow flow through
 ==|____|  :
|\_        |
=========== -front panel
So basically there are two 12cm fans,
Take a look at the PetaBox (Score:4, Interesting)
When we developed the PetaBox [petabox.org] at The Archive, the idea was to use off-the-shelf PC hardware and maximize GB/buck, while keeping cooling and power costs low. It's worked out pretty well. See also my unofficial PetaBox web page [ciar.org].
It turns out that you really don't need much of a PC to serve files. We underclocked the cheap little Via C3 processors to 800MHz to reduce power and heat, and they still troop along nicely. SATA is not necessary, since you're going to be bottlenecked on the network connection anyway.
We used 512MB of RAM per node, but only because our system runs a gaggle of perl scripts to provide a variety of services (file searches, XML-based metadata updates, etc). If you're just going to be running NFS or Samba, 256MB is probably plenty (unless you choose to run Gigabit over a mere 32-bit PCI bus, in which case 512MB or 1GB would be better, so that you're reading more from filesystem cache and pounding the hard drives over your overloaded bus less). Gigabit ethernet is a must (we used 100bT for the PetaBox, which is annoying at times, but the cheaper 100bT 48-port switches were instrumental in keeping the overall price of the system low).
We stuck four hard drives in each case, mostly from previous bad experiences trying to work with eight-disk machines. I can't say too much about the disk failure rate statistics which incited us to switch to Hitachi Deskstars, but I will say that I'm glad our PetaBox is using Deskstars and I will only use Deskstars in my workstation at home.
If you really, really want to keep the gigabit pipe full while pounding on your disks, then a newer bus like PCI-Express is necessary. Otherwise, I'd be tempted to go with an older, cheaper (and imo, more reliable) Pentium-II or -III based PC. You can get solid, reliable, well-cooled and well-dustfiltered early model VA Linux servers with 500MHz Pentium-III's for $200 or less. I must stress the importance of buying a really solid, rigid case. Over time, normal computer cases get all bendy-wendy, turning every part into a moving part, including parts you don't want to have moving at all. Fans will start sticking, motherboard traces will start breaking, etc. Most of the rack-mountable cases are made of good thick solid steel panels, which makes them heavy as f**kall, but IMO that's a small price to pay for a system that will run forever.
For operating system, the most important thing is to get something you know how to run and maintain, or can get help running and maintaining. If you have geek friends who are willing to provide technical assistance, find out what they know best and use that. A well-known operating system will probably be of more use to you than a technically better, but less well understood, operating system.
Having said that, my personal preference is Slackware Linux, because I appreciate its philosophy of keeping things simple, and preferences for packages which are the most stable, as opposed to newest versions or lots of features. My second choice would be FreeBSD. Third would be the OS we decided to use at The Archive for the PetaBox nodes, Debian Linux. But if all you know is Windows, then go ahead and use Windows.
Regarding RAID, it's been my experience working at The Archive that RAID is often more trouble than it's worth, especially when it comes to data recovery. In theory, recovery is easy: you just replace a bad disk, it rebuilds the missing data, and you're good to go. In practice, though, you will often not notice that one of your disks is borked until two disks are borked (or however many it takes for your RAID system to stop working), and then you have a major pain in the ass on your hands. At least with one filesystem per disk, you can attempt to save the filesystem by dd'ing the entire raw partition contents onto a different physical drive of the same make and model, skipping bad sectors, and then running fsck on the good drive. But if you have
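(A sketch of that dd rescue, with hypothetical device names; run fsck only on the copy, never on the failing original:)
# Sketch only: /dev/hda1 is the failing disk, /dev/hdc1 a same-model spare.
dd if=/dev/hda1 of=/dev/hdc1 bs=64k conv=noerror,sync   # keep going past bad sectors
fsck -y /dev/hdc1                                       # repair the copy
mount /dev/hdc1 /mnt/rescue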
Re:Take a look at the PetaBox (Score:2)
Just out of curiosity, why did you end up going with your third choice for OS (Debian) rather than your first or second choices?
Re:Take a look at the PetaBox (Score:2, Interesting)
Interesting comments regarding RAID. They seem to defy common sense, but common sense is not always correct.
Yeah, though I'm not necessarily correct, either. There are plenty of smart IT professionals who disagree with The Archive's conclusions regarding RAID. It may just be a contextual thing -- our data storage clusters are friggin' huge, and we only have three sysadmins, two of whom work part-time. A smaller system with more manpower and better discipline about following good procedures ma
Re:Take a look at the PetaBox (Score:2)
You can easily adapt the RedHat scripts to run on Slackware. Personally I would recommend setting up nagios or some ot
How Much Can One Box Do (Score:2)
It'd also be nice if I could set the box up as a Myth back-end, then put a smaller, nicer, quieter Mac Mini on as a Myth front-end. And if the closet box could do some low-load web serving over cable, that'd be nice too.
But is this asking too much of one box? Will I have to get a hardware
Re:I want to build a 2.8TB storage array (Score:2)
Why use RAID50 instead of RAID5? You're not going to get any meaningful performance benefit, and you're "wasting" a drive that could otherwise be used for more space or a hot spare.
Incidentally, the guy on that web page has got some very, very strange ideas. His whole reasoning for not having multiple drives on the same chan
Re:I want to build a 2.8TB storage array (Score:1)
You're right; I'm planning to go with RAID 5, not 50. I neglected to make that clear.
Re:I want to build a 2.8TB storage array (Score:2)
It looks reasonable. If you look in the motherboard's manual, on page 1-18 there is a block diagram showing the logical layout of the buses, etc. Note that each PCI-X slot gets its own bus, which it shares with one other item (the SATA and LAN controllers). "Everything else" (regular PCI slot, IDE, USB, etc.) gets its own standard 32-bit/33MHz bus.
Also, after looking around a bit myself it
Re:I want to build a 2.8TB storage array (Score:1)
How about something like this [ebay.com]? Assuming I can find two four-channel PCI/66 ATA controller cards (two-channel PCI/66 cards are easy to find, I know). I'm not necessarily looking for the ne plus ultra of performance, but rather some reasonable combination of price, performance, and stability that will let me serve HDTV streams and below
Re:I want to build a 2.8TB storage array (Score:2)
Wait, wait, wait... (Score:4, Funny)
Oh, wait.
I suppose this is Slashdot. Nevermind.
Re:Wait, wait, wait... (Score:1)
One word: (Score:3, Informative)
Buy everything piecemeal. I just priced out a 900GB NAS for $800, shipping included. Slap it all together, put your favorite Linux distro on it, and run Samba.
You won't be able to beat the price of the real thing by much, though: big hard drives are still expensive, and so are RAID cards (if you go that route).
USB Drive Enclosures... (Score:2)
I realize that this is a bit different from your original question, but it might be an interesting stop-gap solution, particularly as the enclosures are about $25 each (without disk).
Re:USB Drive Enclosures... (Score:2)
Re:USB Drive Enclosures... (Score:2)
I had to go through 3 different USB chipsets (different motherboards) before my external enclosure would write data without random corruption. The nForce2 motherboards are notorious for having strange timing issues, and making this problem even more apparent.
Firewire's no better, either. I had an Adaptec firewire card (Texas Instruments chipset, I believe) and it worked with my external drives, yet after 5 or 10 minutes, w
There's only one problem with storage (Score:1)
Backing it up. HDs have far outpaced backups in price/speed.
1TB NAS? 400GB HD?
No problem. Want it backed up to ONE tape? Every day? Have fun.
Re:There's only one problem with storage (Score:1)
I think an ideal solution would be a small RAID array (possibly with 2.5" drives) in an external enclosure with an Ethernet connection in a small form factor. Plug it into the network, run your backups to it, unplug it and put it in a fire safe.
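Something like rsync would handle the backup run to such a box (the host and paths are made-up examples):
# Sketch only: push home directories to the backup box before unplugging it.
rsync -a /home/ backupbox:/backups/home/   # note: this keeps only the latest copy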
Re:There's only one problem with storage (Score:1)
What happens when you need a file from:
Yesterday
Last Monday
Last year?
Re:There's only one problem with storage (Score:1)
In any case, "last year" seems unrealistic. I don't know of *companies* that keep 365-day backups; the tapes are just too damn expensive. I mean, if the data is really so important, wouldn't someone have noticed it missing before the next year rolls around?
Re:There's only one problem with storage (Score:2)
Re:There's only one problem with storage (Score:2)
Anyway, we can recover every project we ever did, and yes, sometimes we do need those old projects.
Re:There's only one problem with storage (Score:1)
Tapes aren't that expensive. One set a month/year isn't extreme.
They're really cheap when you consider a $40 tape vs spending a week re-creating a document from a hard copy (if you have it)
Re:There's only one problem with storage (Score:2)
They're really cheap when you consider a $40 tape vs spending a week re-creating a document from a hard copy (if you have it)
Yeah, but it's the drives that jack up the price.
Re:There's only one problem with storage (Score:2, Interesting)
Old hardware will do (Score:2, Informative)
It's a fairly large box, in a full size ATX case, and the disks are also stored in a rack which I built and bolted onto the side of the case
Power saving? (Score:2, Interesting)
Is this doable?
Re:Power saving? (Score:2, Informative)
take your time (Score:1)
I bought a 160 gig drive last year and for various tedious technical reasons the c
What I did (Score:2)
My fileserver runs 24/7 and has been doing that for about 3 years (minus downtime for moving).
I use 4 40GB SCSI drives in a RAID 5 configuration, using Linux software RAID. (Obviously I would use large IDE drives now, but these were the cheapest per GB at the time, and I already had the SCSI controller lying around.)
This gives me about 136GB of usable space. The partition is running ext3 as its filesystem.
I have had one disk fail because of a bad solde
GMS P502 Spider + PCI-X + RAID5 (Score:3, Interesting)
Imagine this with a high-performance SATA raid controller [1] [tomshardware.com] [2] [tomshardware.com], in an enclosure barely bigger than the 4 hard drives alone.
Does anyone know where to buy this motherboard? What about practical experience with this sort of configuration?
geeze... (Score:3, Informative)
That said... if all you're doing is file serving, a tiny machine by modern standards is fine. 64 megs of RAM in a P3/400 would make a very solid home server. If you want to use software RAID, though, it's a good idea to go faster... you'd want at least 1GHz for that, maybe 2GHz, depending on how much traffic you were sending to the box and how patient/impatient you are.
Since it's going in your basement and you have no worries about size or noise levels, get a big whompin' case with lots of 5.25" slots. Cremax makes some nice enclosures that will let you put 5 3.5" drives into 3 5.25" bays, with good fans for cooling. They have multiple variants. I'm using the SCSI flavor, but you can get them in SATA too (and IDE, I think, but I'm less sure about that.)
I have an older 3ware 8500 RAID card, and it's dismally slow at RAID 5, even though it's supposedly 'optimized' for it. I don't know if the newer SATA versions are better, but while they are well-supported in Linux, and, being hardware RAID, are a total no-brainer from an admin perspective, my generation of cards was horribly, horribly slow. I get at least four times the performance using Software RAID on an Athlon 1900+.
This is how my network server looks:
Big case;
400W PC Power and Cooling power supply;
ASUS A7V333 motherboard;
Athlon 1900+, I think just 266MHz FSB (not sure);
1 gig of RAM (nice for caching, not at all necessary to have this much);
Ancient video card, Matrox Millennium II, I think;
3com 3c509 network card;
ICP Vortex 32-bit RAID controller, bought used. The first one I got was dead... had to replace it. I got it pretty cheap, intending it for another project that fell through, and so I ended up using it at home instead. I think it was about $100, but I'm not sure now. These boards KICK ASS. Great linux support, VERY fast. Awesome hardware.
6 18-gig 10KRPM SCSI drives; machine boots from this array, and Debian is installed here;
2 Cremax 5-in-3 SCSI enclosures;
1 3ware 8500+, in JBOD mode (software RAID is WAY faster);
4 80 gig IDE drives (small, but I set this part of the system up a long time ago)
The SCSI array is damn fast, an excellent spot for interactive, disk-intensive things like IMAP or big compiles, while the slower IDE array is ideal for filesharing.
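(For the curious, that comparison comes from crude tests along these lines; the device and path are made-up examples:)
hdparm -tT /dev/sda                                  # raw vs. cached read throughput
dd if=/dev/zero of=/array/testfile bs=1M count=1024  # rough sequential write test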
You should be able to set up a similar system for, oh, $1500? And keep in mind... this is HUGE overkill for a home network, it would be a solid backbone for a company up to about 50 people... though it might need more drive space, and I'd probably want redundant power supplies in a really central machine. You could run mail, internal DNS, DHCP, a squid proxy, internal webserver, and Samba for that many people without it even working that hard.
File sharing is fundamentally a tremendously simple thing, and it just hardly takes anything at all to do a perfectly fine job. Once upon a time this was akin to rocket science, but at this point, even a garbage $200 PC from Walmart would probably be an okay fileserver.
Again: the specs on the machine above are wild overkill... swatting a fly with a sledgehammer. But if you want to spend that much money, or you have most of the parts laying around the house anyway, it'll do a damn good job.
Re:geeze... (Score:2)
This makes a heck of a lot more sense than the original poster's requirement of a PCIe slot on the server. Why would you need a PCIe slot on something that's just serving as a NAS and sitting in your basement?
Re:geeze... (Score:2)
Well, in this case, so you can add more really really fast disk controllers, I'd say.
Re:geeze... (Score:2)
This makes a heck of a lot more sense than the original poster's requirement of a PCIe slot on the server. Why would you need a PCIe slot on something that's just serving as a NAS and sitting in your basement?
A 32-bit, 33MHz PCI bus can handle at most 133MB/s, and that includes all your hard disks and network card. With a fast hard disk and a gigabit network card you can saturate the PCI bus, so a PCIe requirement (cheaper than PCI-X, which is used for server motherb
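To spell out the arithmetic behind that 133MB/s figure:
32 bits x 33MHz = 4 bytes x 33,000,000 transfers/s = ~133MB/s, shared by every device on the bus
Gigabit Ethernet = 1000Mbit/s / 8 = 125MB/s, so one NIC alone nearly fills it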
more than you need (Score:5, Informative)
-Get the best-value processor that you can find. You won't need the fastest thing out there, but it's better to have a little more "oomph" than you need. If you end up using an encrypted filesystem at some point, you'll want enough power to decrypt and keep the network "fed".
-Have a plan for adding a second network interface. Maybe you don't need it now, but once the DIY bug bites, you may find yourself wanting to use the machine as your NAT box or as a wireless access point or something like that.
-Think about noise and power use. Yeah, those WD Raptors are fast, but they're really loud, too, particularly if you buy a pile of them. You might want to think about acoustic material for the inside of the case -- your local car customizing shop can hook you up. You'll also want an "overkill" power supply for the case so that you don't have problems when you add more drives later.
-Think about heat and airflow. At this time of the year, it's easy to ignore (Dear Australia: yes, I know it's summer there now), but during the summer, stuffing the fileserver into the closet might not be such a good idea.
-Consider underclocking. If you do buy a better processor than you need, bump the speed down for now. Less power, less heat, less noise.
-Get a BIOS or hardware-level RAID mirror for your "root" disk. You can use software RAID for the data disks, but you want to be absolutely certain that you can recover the disk with information about the software RAID. The RAID does no good if you don't know how to access it (see the sketch after this list for recording that information).
-If you use Linux, LVM will become your new best friend.
-Consider buying hard drives that are carried by your nearest Best Buy/CompUSA/other computer store. You don't actually have to buy the initial batch from there, but if a drive in the RAID set goes bad, you'll want to replace it ASAP. It's nice if you can do that tonight rather than "in a few days".
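On that RAID-recovery point, a sketch of recording and watching the array (assumes Linux mdadm; file locations vary by distro, and the mail address is an example):
mdadm --detail --scan >> /etc/mdadm.conf                    # record the array layout for reassembly
mdadm --monitor --scan --daemonise --mail=you@example.com   # get mail the moment a disk drops
cat /proc/mdstat                                            # quick health check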
Home Server Boxen (Score:2)
Why can't it be silent? I mean "drive noise + just about nothing" silent. The file serving can't use much more than a P2 for a family unit, so it strikes me that there should be a readily available fanless option.
If it's just running the file system + maybe a minor family intranet, I'd think you could run this off of
Re:Home Server Boxen (Score:2)
Just watch your heat - a bunch of 7200+RPM drives in an enclosed space will generate a lot of heat & require decent cooling (if you want the drives to last any length of time). This is where most of your noise is going to come from.
Re:more than you need (Score:2)
You don't need a fast processor (Score:2)
USB Samba Server (Score:1)
http://estore.itmm.ca/product_info.php?products_i
Solaris 10 (Score:1)
Serving files on a home LAN is not really that heavy on the CPU, so anything that's on the supported Solaris x86 list should do, just make sure you stick enough RAM in it. Personally I'd go for somethin
We have the same specs (Score:2)
So far it seems it's gonna be Fedora Core, on an IBM xSeries 206 (real cheap in Canada), with an Adaptec SATA RAID card (8 or 16 connectors) and Maxtor MaxLine II drives (300GB). We'll get 5 drives first, to make 1.2TB with RAID5, and the onboard drive will just host the OS in 80GB. We have a spare gigabit NIC...
total price $2000 CDN or so..
the only downside for now is it's only possible to put in 4 additional drives
Buy two Mac minis when they come out. (Score:2)
Re:Buy two Mac minis when they come out. (Score:2)
my solution (Score:1)
I was doing this years ago; easy and cheap. (Score:2)
So why haven't I upgraded since then? I haven't needed to! It's still my fileserver, and has never had a problem. I have been especially happy with 3ware's linux software, and in their latest u
Jesus (Score:2)
Old, solid hardware (Score:2)
Pentium 233
768MB RAM
2 Promise adapters
2 60GB drives - RAID 1
2 120GB drives - RAID 1
100Mb Ethernet
el cheapo video card
300 watt power supply
1 big case
I care not about video - I haven't looked at the screen in months. It serves files reliably, cost little or nothing. I left space between each drive, added 2 extra case fans, and let it run. It has been rock solid reliable since day 1, which was about 6 years ago. Sexy? No. Effective? Yes.
Harddrive Quality (Score:1)
My two 120s are identical, but my two 200s are differing brands (yes, it works: software RAID, not hardware). It seems to me that going with the same brand, while better for performance, means that if there is a manufacturing defect (these are home dr
Re:Harddrive Quality (Score:2)
REDUNDANCY! (Score:2)
eBay PII (Score:1)