Ideas for a Home Grown Network Attached Storage?

Ken asks: "It seems that consumer-level 1TB+ NAS boxes are all the rage right now. Being a digital packrat, with several computers/entertainment devices on my home network, I am becoming more interested in getting one of these for my home. Unwilling to dish out $1K or more up front, and possessing a little of the DIY spirit, I would like to build my own NAS and am interested in hardware/software ideas. While the small form factor PC cases are attractive, my NAS will dwell in the basement, so I am thinking of a cheap/roomy ATX case with lots of power. I think that integrated gigabit Ethernet and PCI-Express on the motherboard are a must, as well as Serial ATA HDDs, but what processor/RAM? How strong does a computer really need to be to serve files? What about the OS? Win2K3 Server Edition? WinXP Pro? Linux?"
"I have been using Red Hat and then Fedora Core since it came out but only in a workstation role, and I have little experience with other flavors. What file system should I use for maximum compatibility? I will need it to work with Windows, Linux and several UPnP devices. I am planning on starting out with two or three HDDs in a RAID 5 config. and I would like to be able to add more HDDs as space is needed without any major changes. Thanks for any ideas."
  • Why? (Score:4, Informative)

    by CommanderData ( 782739 ) * <kevinhi@yahoo.com> on Friday January 21, 2005 @09:33AM (#11431400)
    If you think you can beat a device like the Buffalo TeraStation [buffalotech.com] go for it, you will be rich! It was shown at CES, and goes on sale next month in the USA for $999. Gigabit Ethernet, 4 250GB hard drives (RAID 0, 1 or 5 support), 4 USB ports to attach additional external storage devices, built in print server for sharing a USB printer, blah blah blah. I'm going to buy 2 of them!
    • Doh, go ahead and mod me down! I didn't realize he already saw the TeraStation, that'll learn me to RTFA! Seriously though, my WHY? is still a valid question...
      • Re:Why? (Score:5, Interesting)

        by log0n ( 18224 ) on Friday January 21, 2005 @10:16AM (#11431859)
        Some of us have the DIY spirit...

        Seriously, why buy something when you could 1) build it (probably cheaper) yourself and 2) learn more from building it? Most DIY projects have a habit of benefiting you at some point in the future in ways that you can't predict when you start them.

        Either you - not you personally, the rhetorical you - 1) don't have the time, which is acceptable, or 2) you don't have the knowledge, which you should be trying to gain, or 3) you are lazy, which is really quite sad.

        There's more to life than just spending money on a problem. There's actually figuring out the solution to the problem.

        $.02
        • I can appreciate why people might want to do something for the experience of it. I do find that solving problems is enjoyable. I've written my own web browser and wifi sniffers just because I wanted to learn how things worked.

          In my personal case, I do not have the time to invest right now, but do have the knowledge. I was actually looking at building a NAS box recently, but have since booked a ton of new contract work. In my case, it is easier to spend money on a known, working solution.

          Finally, I thin
          • by Hast ( 24833 )
            There is one more aspect to consider. What do you do when it goes wrong?

            If you have rolled your own, you have a better idea of how things hook up and can track down errors more easily. If you have something store-bought, you may not be able to save your data.

            And in my experience, for stuff like NAS this is especially true. Hard disks will fail; it's just a question of time.

            Items like this NAS often portray themselves as "turn key" solutions. In my experience there is no such thing. And in many cases you spen
          • For some of us, small and attractive aren't issues. He said it'd go in his basement, after all.
    • Yeah, that's EASY to beat:

      $ 50 case

      $100 motherboard with Gigabit ethernet

      $100 1 GB of memory

      $500 (4x250GB HD @ $125 each)

      $0 to $100 Linux with Samba package

      ------------

      $750 to $850.

  • My solution (Score:3, Interesting)

    by keiferb ( 267153 ) on Friday January 21, 2005 @09:35AM (#11431421) Homepage
    If you're not worried about having it all in one big partition, do what I did. Get a big case that can hold lots of drives, and just keep adding in SATA or IDE expansion cards and drives. It's worked well so far.

    If you do want it all on one big raid5 partition, good luck finding a way to add additional disks into it without rebuilding.
    • Re:My solution (Score:5, Informative)

      by Hast ( 24833 ) on Friday January 21, 2005 @10:22AM (#11431930)
      You don't. Take a bunch of disks and turn them into a RAID 5 array. Make a logical volume (LVM on Linux) and add the RAID array to it. Create a growable device on the LVM and format it with a standard growable FS.

      When you get new disks, simply create a new RAID 5 array, add that to the logical volume, and grow the FS on it.

      You don't want everything on one big RAID 0; I lost 200G of data that way. I can say I'll never make that mistake again.
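
      To make that concrete, here is a minimal sketch of the mdadm/LVM steps (the device names and the volume group name vg_nas are placeholders; adjust for your disks):

        # Build a 3-disk RAID 5 array
        mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

        # Put it under LVM so the volume can grow later
        pvcreate /dev/md0
        vgcreate vg_nas /dev/md0
        lvcreate -l 100%FREE -n storage vg_nas
        mkfs.xfs /dev/vg_nas/storage
        mount /dev/vg_nas/storage /mnt/storage

        # Later, with three new disks: second array, extend the volume, grow the FS
        mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdd1 /dev/sde1 /dev/sdf1
        pvcreate /dev/md1
        vgextend vg_nas /dev/md1
        lvextend -l +100%FREE /dev/vg_nas/storage
        xfs_growfs /mnt/storage    # XFS grows while mounted; pass the mount point
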
  • See: (Score:2, Interesting)

    by virid ( 34014 )
    Samba.
    • Rebyte. [rebyte.com]
      A simple flash Linux distro with a converter board that plugs into an IDE slot. Supports all the standard RAID setups. I recommend investing in cooling for hard drives -- not things you want to have fail on a NAS system.
      • Cooling is certainly important, but it really depends on how hard you are pounding it. I couldn't imagine myself hitting a NAS box too hard on my home network.
  • .. If you're willing to fork out that kind of money for what I basically read as a pretty nifty desktop machine, you could just go off-the-peg and get a custom-built 1RU or 2RU NAS device (or a small box) for a similar kind of price, sans the hassle of building the thing and setting it up. Plug in and go....

    ?

  • Be aware (Score:5, Interesting)

    by GigsVT ( 208848 ) on Friday January 21, 2005 @09:37AM (#11431443) Journal
    Common Linux file systems (ext, reiser, etc.) contain critical data-loss bugs on file systems bigger than 2TB; XFS is the exception. This was found to be the case even in the most recent 2.6 kernels.

    Tony Battersby posted a patch to the LBD mailing list recently to address the ones he could find, but lacking a full audit, you probably shouldn't use any filesystem other than XFS.

    Considering the gravity of these bugs, you might consider using XFS for everything. If the developers left such critical bugs in for so long, it makes you wonder about the general quality of the other filesystems.
    • You seem to know the issue(s) quite well, would you care to point out where to find out more? Specific bug numbers or URLs to specific documentation of the flaws would be nice.
      • Yes, I would like to read more about this, too.
      • Well, the LBD mailing list archive might be a good place to start, since I specifically mentioned it.

        Patch 1 [unsw.edu.au]

        Patch 2 [unsw.edu.au]

        Says Tony:
        "Here is an "example" patch to fix some of the LBD issues with various
        filesystems (ext3, xfs, reiserfs, afs). Unfortunately it looks like
        there are many more LBD problems with the filesystems that I didn't fix,
        so I am just calling this an "example" patch that shows some of what
        needs to be done, but doesn't fix everything."

        He later mentions the only XFS fix is in some debugging
    • There's no reason the NAS box has to have all the files in one file system. Just create multiple partitions or logical volumes. You export directory trees across the network on NAS, not file systems.
    • I had many years of digital photos on a partition that used the Reiser file system. When I upgraded my version of Slackware, I wrote down my partition table wrong and ended up re-making the Reiser file system over my pictures three times. After realizing what I did, I unmounted the partition, read the man page, played the journal back, and got all but five pictures back.

      Is data loss a risk? Sure, but what is the most likely cause?
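
      If you ever need to repeat that kind of recovery, it maps onto reiserfsck; a hedged sketch (the partition name is just an example, and you should always work on an unmounted partition, ideally on a dd image of it):

        umount /dev/hda3                     # never fsck a mounted filesystem
        reiserfsck --check /dev/hda3         # read-only pass: report the damage
        reiserfsck --fix-fixable /dev/hda3   # repair what doesn't need a tree rebuild
        reiserfsck --rebuild-tree /dev/hda3  # last resort: scan the disk, rebuild the tree
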
    • "...contains critical data-losing type bugs on file systems bigger than 2TB, except XFS."

      I believe TiVo units use the XFS filesystem for storing their multimedia, which from what you say makes sense.

      • I don't think there are any 2TB+ Tivos. If they use XFS, it's probably because it deletes even very large files instantaneously, whereas most other filesystems take longer the larger the file is. This is a clear advantage if you want to be able to delete a large movie file from disk at the same time that you want to record TV to that disk.
  • At work, we just got a slew of Aopen XCCube [pcstats.com] machines (they were out of the white ones, so I got black), and I have to say, they're quite potent. Onboard gig-E, onboard SATA, DVD drive, and room for several HDs internally (and more if you get a stack of firewire enclosures and slap in some 300-500GB drives).

    The machines are fast enough to do anything a fileserver would need (and then some), they're quiet, as they use Duron chips for low heat/power, and they look good enough to put on your desk or wherever you
  • by -dsr- ( 6188 ) on Friday January 21, 2005 @09:53AM (#11431631) Homepage Journal
    I feel strange advocating an MS-originated protocol -- but the truth is, serving files via Samba on Linux is going to be the best-performing[1], most-compatible remote file system available.

    As for hardware, for small servers I like Linux software RAID, but for a big multidisk farm, you can't beat 3Ware cards. They take nice cheap IDE drives and turn them into a SCSI RAID. Moderately expensive, but beautifully functional. Finally, I've been having good luck with Seagate and WD drives, and bad luck with Maxtors. Your mileage may vary.

    [1] Samba beats the MS implementations of SMB/CIFS. No guarantees about Samba vs NFS, GFS, Coda, whatever.
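
    For reference, a bare-bones smb.conf for serving one storage directory might look something like this (the share name, path, and user are just example values):

      # /etc/samba/smb.conf -- minimal read/write share
      [global]
          workgroup = HOME
          security = user

      [storage]
          path = /srv/storage
          read only = no
          valid users = ken

    Add the account with "smbpasswd -a ken", restart smbd, and the share shows up as \\server\storage from Windows.
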
    • Ideally, a NAS solution should support all protocols used on your network. That way, you can centralize storage and easily centralize backups. If you only support one or a few protocols, you'll have to adapt all unsupported devices or use a more complex backup scheme.

      Therefore, Samba is a must, but other protocols should be considered too.
    • Why would I use Samba? I don't have a single Windows machine, nor do any of my roommates.

      When building your own, you're looking at unique specs. If you were buying something for a corporate environment I would highly recommend getting something with Samba, even if you don't use any Windows at the moment (for when the marketing consultant with a laptop needs to upload the large video file or whatever).

      But for home use? "Look at your needs" is better than "here's the best".

      --
      Evan

    • Microsoft also has an NFS connectoid, I think, if you wanted to go that route, in case you *did* get Win machines at some point in the future.
  • by MaxQuordlepleen ( 236397 ) <el_duggio@hotmail.com> on Friday January 21, 2005 @09:58AM (#11431683) Homepage

    I was thinking of using a mini and a single firewire disk for a somewhat similar project.

    But, OS X has RAID capability, so you could use something like this:

    • Doh.. invisible link

      This Device [cooldrives.com]
    • Unfortunately the Mac mini doesn't provide for much expansion. A single memory slot doesn't go very far. The small form factor starts to become a problem when you're attaching a ton of external drives; it makes a mess. There is also something to be said for the value of hardware-based RAID vs. software-based. Although cute and cheap, it'll end up costing more later (read: external drives are more expensive).

      I say buy a big-ass case (it's going in the basement, after all), install your favorite distro and
  • just add some hard disks and a RAID controller to one of your existing computers? Saves adding yet another device, and you probably already have a machine that's on 24/7 anyway.
    • I was thinking the same thing. Just get a dedicated external (SCSI) box to house the drives if there's no more room in the main box. If your main box isn't on 24/7 it'd most likely still be cheaper to leave it on than to add a new box running 24/7.
      • I think the main reason for setting up a separate box for your server is that you are less likely to "play around" with it. So your files are always available. You can tear down and rebuild your main box at will without the worry of losing anything critical (keep your home directory nfs-mounted, etc.)
  • by Yeechang Lee ( 3429 ) on Friday January 21, 2005 @10:09AM (#11431781)
    I'm very interested in this subject, and recently began a Usenet thread on the topic with this post [google.ca]:

    BACKGROUND:

    Inspired by http://www.finnie.org/terabyte/ [finnie.org], a few months ago I started a thread [google.com] to discuss the idea of building my own 1.5TB storage array using software RAID50 to hold video files.

    The main hitch keeping me from going ahead was that I had trouble finding eight 250GB drives at the price I wanted. Clearly, I wasn't thinking big enough; just before Christmas, I lucked out and bought nine Seagate *400GB* drives at $230 each (plus a $30 rebate on the first one) from CompUSA. I now have 3.6TB of raw storage sitting in a shipping carton in my apartment. Even with RAID 5 and keeping a drive as a spare, I'll have 400GB*8-400GB=2.8TB of space.

    PURPOSE:
    Video files (episodes of TV shows I already watch and enjoy, plus rips of TV shows on DVD sets I own). I'd like to build a MythTV system too, but the storage array comes first. No games.

    PRIORITIES, in order:
    * Stability. I'm very much in favor of build-right-and-leave-it-be as opposed to constant hardware tinkering.
    * Minimize heat/noise. I have a studio apartment.
    * Price. I've already spent a fortune on the drives; I don't want to spend more on the rest than I need to.
    * Performance. Not that I'm against a fast machine, but I know that a storage server doesn't need the latest-and-greatest in terms of horsepower.

    PARTS:
    Advice is always appreciated. All prices are from ZipZoomFly.com unless otherwise specified.

    * Case: Antec SX1040BII, $92. I almost went with an Antec PlusView1000AMG ($72), but decided that a) the SX1040BII's 430W power supply might be enough for my purposes and b) if it isn't, a quality Antec supply for $20 that I can use someplace else is hard to pass up.
    * Motherboard: Gigabyte GA-7N400 Pro2 Rev 2, $98. I'm building a system with *massive* amounts of PCI traffic, and I'm hoping an Nvidia-chipset board will prove more stable than the hordes of Via-based models out there.
    * CPU: AMD Mobile Athlon XP 2400+, $89 at Newegg. The 2200+ is $10 cheaper but they're both rated at 35W. If there's a sub-35W processor that supports a 266-MHz FSB I'd like to hear about it.
    * CPU heat sink: I'm lost here. I've had a good experience with a Thermalright SLK-800 I installed three years ago, but current Thermalright heat sinks all seem to specify Athlon 2500+ and up. What gives?
    * CPU fan: A leftover Vantec 80mm fan. Loud but effective.
    * Memory: One 512MB DDR PC3200 DIMM. $80 at Crucial. My leftover 256MB PC133 168-pin DIMMs aren't going to work with the motherboard, right?
    * Power supply: Thermaltake PurePower 560W, $102. In case the Antec 430W supply mentioned above proves insufficient.
    * Drives: Eight Seagate Barracuda 7200.8 400GB ATA drives plus one cold spare, $230 each at CompUSA without rebate; currently $230 each after $70 rebate. Lite-On DVD+-RW drive, $60-100. Leftover Maxtor 13GB ATA drive for booting.
    * ATA controller: Two Highpoint RocketRAID 454, $87 each at Newegg. Unlike Ryan Finnie I am *not* planning on using the hardware RAID features; rather, I'm simply looking for high-quality ATA controller cards. If anyone can recommend high-quality non-RAID controller cards with four channels (or more) on each, I'd like to hear about it. For that matter, if four two-channel ATA controller cards are doable with my motherboard setup, I'd like to hear about that too.

    So, what do y'all think?
    • This won't be the best solution noise-wise, but it should extend drive lifetime.

      Cut extra holes in the case and build an air-flow tunnel to help cool the drives.
      I measured a drop from 46C to 25C with a 12cm Nexus low-speed fan.

      My setup looks roughly like this from above:

      | _ ____   |
      |/ |    |  :
       ==| HD |  :
       | |    |  :  holes to allow flow through
       ==|____|  :
      |\_        |
      ===========  <- front panel

      So basically there are two 12cm fans,

    • by TTK Ciar ( 698795 ) on Friday January 21, 2005 @11:28AM (#11432712) Homepage Journal

      When we developed the PetaBox [petabox.org] at The Archive, the idea was to use off-the-shelf PC hardware and maximize GB/buck, while keeping cooling and power costs low. It's worked out pretty well. See also my unofficial PetaBox web page [ciar.org].

      It turns out that you really don't need much of a PC to serve files. We underclocked the cheap little Via C3 processors to 800MHz to reduce power and heat, and they still troop along nicely. SATA is not necessary, since you're going to be bottlenecked on the network connection anyway. We used 512MB of RAM per node, but only because our system runs a gaggle of perl scripts to provide a variety of services (file searches, XML-based metadata updates, etc). If you're just going to be running NFS or Samba, 256MB is probably plenty (unless you choose to run Gigabit over a mere 32-bit PCI bus, in which case 512MB or 1GB would be better, so that you're reading more from filesystem cache and pounding the hard drives over your overloaded bus less). Gigabit ethernet is a must (we used 100bT for the PetaBox, which is annoying at times, but the cheaper 100bT 48-port switches were instrumental in keeping the overall price of the system low). We stuck four hard drives in each case, mostly from previous bad experiences trying to work with eight-disk machines. I can't say too much about the disk failure rate statistics which incited us to switch to Hitachi Deskstars, but I will say that I'm glad our PetaBox is using Deskstars and I will only use Deskstars in my workstation at home.

      If you really, really want to keep the gigabit pipe full while pounding on your disks, then a newer bus like PCI-Express is necessary. Otherwise, I'd be tempted to go with an older, cheaper (and imo, more reliable) Pentium-II or -III based PC. You can get solid, reliable, well-cooled and well-dustfiltered early model VA Linux servers with 500MHz Pentium-III's for $200 or less. I must stress the importance of buying a really solid, rigid case. Over time, normal computer cases get all bendy-wendy, turning every part into a moving part, including parts you don't want to have moving at all. Fans will start sticking, motherboard traces will start breaking, etc. Most of the rack-mountable cases are made of good thick solid steel panels, which makes them heavy as f**kall, but IMO that's a small price to pay for a system that will run forever.

      For operating system, the most important thing is to get something you know how to run and maintain, or can get help running and maintaining. If you have geek friends who are willing to provide technical assistance, find out what they know best and use that. A well-known operating system will probably be of more use to you than a technically better, but less well understood, operating system.

      Having said that, my personal preference is Slackware Linux, because I appreciate its philosophy of keeping things simple, and preferences for packages which are the most stable, as opposed to newest versions or lots of features. My second choice would be FreeBSD. Third would be the OS we decided to use at The Archive for the PetaBox nodes, Debian Linux. But if all you know is Windows, then go ahead and use Windows.

      Regarding RAID, it's been my experience working at The Archive that RAID is often more trouble than it's worth, especially when it comes to data recovery. In theory, recovery is easy: you just replace a bad disk, it rebuilds the missing data, and you're good to go. In practice, though, you will often not notice that one of your disks is borked until two disks are borked (or however many it takes for your RAID system to stop working), and then you have a major pain in the ass on your hands. At least with one filesystem per disk, you can attempt to save the filesystem by dd'ing the entire raw partition contents onto a different physical drive of the same make + model, skipping bad sectors, and then running fsck on the good drive. But if you have
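
      A sketch of that dd rescue, assuming /dev/hda1 is the failing partition and /dev/hdb1 is a fresh partition of at least the same size (both names are placeholders):

        # Copy the raw partition; pad unreadable sectors with zeros instead of aborting
        dd if=/dev/hda1 of=/dev/hdb1 bs=64k conv=noerror,sync

        # Then repair the copy, never the dying original
        fsck /dev/hdb1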

      • Interesting comments regarding RAID. They seem to defy common sense, but common sense is not always correct.

        Just out of curiosity, why did you end up going with your third choice for OS (Debian) rather than your first or second choices?
        • Interesting comments regarding RAID. They seem to defy common sense, but common sense is not always correct.

          Yeah, though I'm not necessarily correct, either. There are plenty of smart IT professionals who disagree with The Archive's conclusions regarding RAID. It may just be a contextual thing -- our data storage clusters are friggin' huge, and we only have three sysadmins, two of whom work part-time. A smaller system with more manpower and better discipline about following good procedures ma

      • As to your RAID thoughts: you are clinically insane. man "mdadm". On a Red Hat machine, "service mdmonitor start".

        # Record the existing arrays, then tell mdadm which devices to scan
        mdadm --scan --detail > /etc/mdadm.conf
        echo "DEVICE /dev/[sh]d[a-z][1-9]" >> /etc/mdadm.conf
        echo "DEVICE /dev/[sh]d[a-z][a-z][1-9]" >> /etc/mdadm.conf
        # Where mdmonitor sends mail when an array degrades
        echo "MAILADDR alert_email@domain.com" >> /etc/mdadm.conf
        # Start the monitor now and at every boot
        chkconfig mdmonitor on
        service mdmonitor start

        You can easily adapt the RedHat scripts to run on Slackware. Personally I would recommend setting up nagios or some ot

    • I'm interested in a project that will be very similar to the original poster's. Ideally what I'd like to do is set up a nice RAID 5 array that can hum away in the closet, serving video and allowing me to rsync backups to it.

      It'd also be nice if I could set the box up as a Myth back-end, then put a smaller, nicer, quieter Mac Mini as the Myth front-end. And if the closet box could do some low-load web serving over cable, that'd be nice too.

      But is this asking too much of one box? Will I have to get a hardware
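
      The rsync half of that plan is a one-liner; a sketch, assuming the array is mounted at /mnt/raid on a host called closetbox (both names hypothetical):

        # Mirror a home directory to the NAS, pruning files deleted locally
        rsync -av --delete /home/me/ closetbox:/mnt/raid/backup/me/
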
    • Inspired by http://www.finnie.org/terabyte/, a few months ago I started a thread to discuss the idea of building my own 1.5TB storage array using software RAID50 to hold video files.

      Why use RAID50 instead of RAID5 ? You're not going to get any meaningful performance benefit and you're "wasting" a drive that could be otherwise used for more space or a hotspare.

      Incidentally, the guy on that web page has got some very, very strange ideas. His whole reasoning for not having multiple drives on the same chan

      • Why use RAID50 instead of RAID5 ? You're not going to get any meaningful performance benefit and you're "wasting" a drive that could be otherwise used for more space or a hotspare.

        You're right; I'm planning to go with RAID 5, not 50. I neglected to make that clear.

        I would suggest using a motherboard with multiple PCI buses. Basically, look for something that's got two (or more) 64 bit PCI-X slots, as these boards nearly always have multiple PCI buses. It will be in the detailed specs - you want at least

        • Would something like this qualify? It has two PCI-X slots, but I don't see any mention of multiple PCI buses per se.

          It looks reasonable. If you look in the motherboard's manual, on page 1-18 there is a block diagram showing the logical layout of the buses, etc. Note that each PCI-X slot gets its own bus, which it shares with one other item (the SATA and LAN controllers). "Everything else" (regular PCI slot, IDE, USB, etc) gets its own standard 32bit/33Mhz bus.

          Also, after looking around a bit myself it

          • You wrote:

            Far more common is the configuration this board has, with one 64/133, one 64/100 and one 32/33. Realistically, for your scenario, that should be more than adequate.

            How about something like this [ebay.com]? Assuming I can find two four-channel PCI/66 ATA controller cards (two-channel PCI/66 cards are easy to find, I know). I'm not necessarily looking for the ne plus ultra of performance, but rather some reasonable combination of price, performance, and stability that will let me serve HDTV streams and below

            • It will be rock stable, but it will bottleneck at the buses. I'm not sure what sort of bandwidth you need for serving up HDTV though, so it may well be sufficient. However, moving things around internally on the machine and rebuilding the RAID array(s) will be negatively impacted. Of course, if you're just going to stick this on a 100Mb network for the forseeable lifetime of the machine, then it's all pretty irrelevant, as even my dodgy old 366Mhz Celeron with a ZX motherboard (pilfered from an old Gatew
  • by Xaroth ( 67516 ) on Friday January 21, 2005 @10:11AM (#11431799) Homepage
    Lemme get this straight. You asked Slashdot whether you should use Linux or Windows? Do you never read the comments around here?

    Oh, wait.

    I suppose this is Slashdot. Nevermind. ;)
  • One word: (Score:3, Informative)

    by saintp ( 595331 ) <{stpierre} {at} {nebrwesleyan.edu}> on Friday January 21, 2005 @10:12AM (#11431802) Homepage
    Newegg [newegg.com]

    Buy everything piecemeal. I just priced out a 900GB NAS for $800, shipping included. Slap it all together, put your favorite Linux distro on it, and run Samba.

    You won't be able to beat the price of the real thing by much, though: big hard drives are still expensive, and so are RAID cards (if you go that route).

  • So, I have been wondering about this myself, particularly as I have a large number of almost-big-enough hard drives sitting around. An inexpensive alternative (with an obvious performance hit) might be to use USB or FireWire disk enclosures and add them en masse to whatever system you are using.

    I realize that this is a bit different from your original question, but it might be an interesting stop-gap solution, particularly as the enclosures are about $25 each (without disk).

    • If you want to do this, FireWire may be a better bet than USB2. I've got some external drives that have both FW400 and USB2, and they're about 30% faster when connected via FireWire. Also, IIRC FW handles multiple disks on one bus better than USB.
      • The state of external enclosures, USB chipsets and firewire chipsets is a sad thing.

        I had to go through 3 different USB chipsets (different motherboards) before my external enclosure would write data without random corruption. The nForce2 motherboards are notorious for having strange timing issues, and making this problem even more apparent.

        Firewire's no better, either. I had an Adaptec firewire card (Texas Instruments chipset, I believe) and it worked with my external drives, yet after 5 or 10 minutes, w

  • Backing it up. HDs have far outpaced backup media in price/speed.

    1TB NAS? 400GB HD?

    No problem. Want it backed up to ONE tape? Every day? Have fun.

    • True, but it's probably cheaper to do a RAID solution and just swap out hard drives when they die rather than buying a DLT drive and the associated tapes.

      I think an ideal solution would be a small RAID solution (possibly with 2.5" drives) in an external enclosure with an Ethernet connection in a small form factor. Plug it into the network, run your backups to it, unplug it and put it in a fire safe.
    • When you're dealing with that much storage, you really need to categorize your files into what needs to be backed up and what doesn't. In this type of application (if it was me), most of the storage is likely to be filled with DVD rips & MythTV recordings, or backups from your main system(s). So you would want to back up a list of what you have, but you can always recover from original media (in the case of DVD rips, or off of re-runs for TV shows). Also, on a storage server you're more likely to have
  • Old hardware will do (Score:2, Informative)

    by madaxe42 ( 690151 )
    My current fileserver is an aging 600MHz P3 with 4 PCI slots, each occupied by a RAID card. I've decked it out with 250GB disks, 4 on each card, 2 on the mobo, and 2 100GB disks on the mobo for swap, boot, and root. I've got onboard 100Mb Ethernet, no graphics card, no sound. I've installed Gentoo with a stripped-down kernel, running Samba, and it all works beautifully.

    It's a fairly large box, in a full size ATX case, and the disks are also stored in a rack which I built and bolted onto the side of the case
  • Power saving? (Score:2, Interesting)

    by shooz ( 309395 )
    I would also like to build such a thing, but a box full of disks spinning 24/7 is likely to use a lot of power and give off a lot of heat. Are there any power-saving solutions to this? It would be nice if there were some intelligent software that, when you try to play a movie off the disk, spins up only the disk that has the file, reads a large chunk of it into memory, and spins the disk back down.

    Is this doable?
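
    The spin-down half is doable with stock tools: hdparm can set a per-drive standby timeout, so idle disks stop spinning on their own (the read-a-chunk-ahead part would need custom software). A sketch; /dev/hdb is a placeholder:

      # Spin down after 10 minutes idle (-S counts in units of 5 seconds, so 120 = 600s)
      hdparm -S 120 /dev/hdb

      # Or force an immediate spin-down
      hdparm -y /dev/hdb
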
  • Hard disk storage is still accurately described by Moore's Law, so it doesn't make any sense to buy more storage than you need right now. If you need a terabyte now, then get whatever you need. If it is going to take a year to fill up a terabyte, then start small and build up to a terabyte. When you get to a terabyte, you can take the money you saved and buy a 2-terabyte NAS, back up the first one, and have another terabyte to fill up.

    I bought a 160 gig drive last year and for various tedious technical reasons the c
  • I did this a while back. (3+ years ago, so it's obviously not 1TB.)
    My fileserver runs 24/7 and has been doing that for about 3 years (minus downtime for moving).

    I use 4 40GB SCSI drives in a RAID 5 configuration, using Linux software RAID. (Obviously I would use large IDE now, but these were the cheapest per GB at the time, and I already had the SCSI controller laying around.)
    This gives me about 136GB of usable space. The partition is running ext3 as its filesystem.

    I have had one disk fail because of a bad solde
  • by forged ( 206127 ) on Friday January 21, 2005 @10:54AM (#11432318) Homepage Journal
    The P502 "Spider" [gms4vme.com] motherboard from GMS features a 800 MHz PowerPC processor, 2x onboard GigE LAN, optional PCI-X expansion bus, 256MB ECC memory, 16MB onboard flash, runs Linux, consumes 10W of power and measures about the size of a regular pack of cigarettes.

    Imagine this with a high-performance SATA raid controller [1] [tomshardware.com] [2] [tomshardware.com], in an enclosure barely bigger than the 4 hard drives alone.

    Does anyone know where to buy this motherboard? What about practical experience with this sort of configuration?

  • geeze... (Score:3, Informative)

    by Malor ( 3658 ) on Friday January 21, 2005 @10:58AM (#11432354) Journal
    Wow, a front page article on Slashdot that amounts to, "gee, how do I build a server?" Spiffed up a bit with trendy, techie-sounding words, but cripes. This is FP news-worthy?

    That said... if all you're doing is file serving, a tiny machine by modern standards is fine. 64 megs of RAM in a P3/400 would make a very solid home server. If you want to use software RAID, though, it's a good idea to go faster... you'd want at least 1GHz for that, maybe 2, depending on how much traffic you were sending to the box and how patient/impatient you are.

    Since it's going in your basement and you have no worries about size or noise levels, get a big whompin' case with lots of 5.25" slots. Cremax makes some nice enclosures that will let you put 5 3.5" drives into 3 5.25" bays, with good fans for cooling. They have multiple variants. I'm using the SCSI flavor, but you can get them in SATA too (and IDE, I think, but I'm less sure about that.)

    I have an older 3ware 8500 RAID card, and it's dismally slow at RAID 5, even though it's supposedly 'optimized' for it. I don't know if the newer SATA versions are better, but while they are well-supported in Linux, and, being hardware RAID, are a total no-brainer from an admin perspective, my generation of cards was horribly, horribly slow. I get at least four times the performance using Software RAID on an Athlon 1900+.

    This is how my network server looks:

    Big case;
    400W PC Power and Cooling power supply;
    ASUS A7V333 motherboard;
    Athlon 1900+, I think just 266MHz FSB (not sure);
    1 gig of RAM (nice for caching, not at all necessary to have this much);
    Ancient video card, Matrox Millennium II, I think;
    3com 3c509 network card;
    ICP Vortex 32-bit RAID controller, bought used. The first one I got was dead... had to replace it. I got it pretty cheap, intending it for another project that fell through, and so I ended up using it at home instead. I think it was about $100, but I'm not sure now. These boards KICK ASS. Great linux support, VERY fast. Awesome hardware.
    6 18-gig 10KRPM SCSI drives; machine boots from this array, and Debian is installed here;
    2 Cremax 5-in-3 SCSI enclosures;
    1 3ware 8500+, in JBOD mode (software RAID is WAY faster);
    4 80 gig IDE drives (small, but I set this part of the system up a long time ago)

    The SCSI array is damn fast, an excellent spot for interactive, disk-intensive things like IMAP or big compiles, while the slower IDE array is ideal for filesharing.

    You should be able to set up a similar system for, oh, $1500? And keep in mind... this is HUGE overkill for a home network, it would be a solid backbone for a company up to about 50 people... though it might need more drive space, and I'd probably want redundant power supplies in a really central machine. You could run mail, internal DNS, DHCP, a squid proxy, internal webserver, and Samba for that many people without it even working that hard.

    File sharing is fundamentally a tremendously simple thing, and it just hardly takes anything at all to do a perfectly fine job. Once upon a time this was akin to rocket science, but at this point, even a garbage $200 PC from Walmart would probably be an okay fileserver.

    Again: the specs on the machine above are wild overkill... swatting a fly with a sledgehammer. But if you want to spend that much money, or you have most of the parts laying around the house anyway, it'll do a damn good job.
    • Ancient video card, Matrox Millennium II

      This makes a heck of a lot more sense than the original poster's requirement of a PCIe slot on the server. Why would you need a PCIe slot on something that's just serving as a NAS and sitting in your basement?

      • Well, in this case, so you can add more really really fast disk controllers, I'd say.

      • Ancient video card, Matrox Millennium II

        This makes a heck of a lot more sense than the original poster's requirement of a PCIe slot on the server. Why would you need a PCIe slot on something that's just serving as a NAS and sitting in your basement?

        A 32-bit 33MHz PCI bus can handle at most 133MB/s (33 million transfers/s x 4 bytes), and that bandwidth is shared by all your hard disks and the network card. With a fast hard disk and a gigabit network card you can saturate the PCI bus, so a PCIe requirement (cheaper than the PCI-X that is used for server motherb

  • more than you need (Score:5, Informative)

    by beegle ( 9689 ) on Friday January 21, 2005 @11:06AM (#11432445) Homepage
    A few comments about this:

    -Get the best-value processor that you can find. You won't need the fastest thing out there, but it's better to have a little more "oomph" than you need. If you end up using an encrypted filesystem at some point, you'll want enough power to decrypt and keep the network "fed".

    -Have a plan for adding a second network interface. Maybe you don't need it now, but once the DIY bug bites, you may find yourself wanting to use the machine as your NAT box or as a wireless access point or something like that.

    -Think about noise and power use. Yeah, those WD Raptors are fast, but they're really loud, too, particularly if you buy a pile of them. You might want to think about acoustic material for the inside of the case -- your local car customizing shop can hook you up. You'll also want an "overkill" power supply for the case so that you don't have problems when you add more drives later.

    -Think about heat and airflow. At this time of the year, it's easy to ignore (Dear Australia: yes, I know it's summer there now), but during the summer, stuffing the fileserver into the closet might not be such a good idea.

    -Consider underclocking. If you do buy a better processor than you need, bump the speed down for now. Less power, less heat, less noise.

    -Get a BIOS or hardware-level RAID mirror for your "root" disk. You can use software RAID for the data disks, but you want to be absolutely certain that you can recover the disk with information about the software RAID. The RAID does no good if you don't know how to access it.

    -If you use Linux, LVM will become your new best friend.

    -Consider buying hard drives that are carried by your nearest Best Buy/CompUSA/other computer store. You don't actually have to buy the initial batch from there, but if a drive in the RAID set goes bad, you'll want to replace it ASAP. It's nice if you can do that tonight rather than "in a few days".
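
    On that recovery point, it's worth keeping the RAID and LVM metadata somewhere that isn't on the array; a minimal sketch (the destination host is just an example):

      # Record the array layout so it can be reassembled from a rescue disk
      mdadm --detail --scan >> /etc/mdadm.conf

      # Dump volume group metadata (lands in /etc/lvm/backup by default)
      vgcfgbackup

      # Copy both off the box
      scp /etc/mdadm.conf /etc/lvm/backup/* otherbox:/root/nas-metadata/
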
    • I've been wondering where these products are. Like just about everyone else on /., I'd LOVE to get a 1TB server going with (virtually) no processor, a decent amount of RAM, and so on, BUT:

      Why can't it be silent? I mean "drive noise + just about nothing" silent. The file serving can't use much more than a P2 for a family unit, so it strikes me that there should be a readily available fanless option.

      If it's just running the file system + maybe a minor family intranet, I'd think you could run this off of
      • If that's what you want/need, go for a mini-ITX EPIA board. Small, low power & several of the models operate fine with passive cooling.

        Just watch your heat - a bunch of 7200+RPM drives in an enclosed space will generate a lot of heat & require decent cooling (if you want the drives to last any length of time). This is where most of your noise is going to come from.
    • My fileserver sounds like a Harrier jet, but it doesn't matter when it's stored in the basement. It's _network_ storage; it doesn't have to be nearby.
  • I'm doing the same thing you are. I have a case with five drives in it. One 13GB drive that's my OS and junk disk, and four 230GB drives in a RAID5 array. The machine is a Celeron 700MHz with 256MB of RAM. It works great for what I use it for which is:
    • Firewall for my wireless AP (via Matrox quad NIC)
    • File server via samba
    • Postgres
    • Java development
    • Personal email and shell stuff
    • DHCP
    • Caching DNS
    For just a file server you can get away with a much slower machine.
  • Cheap USB-based Samba server. Just add USB drives and go:

    http://estore.itmm.ca/product_info.php?products_id=135 [estore.itmm.ca]
  • You might want to look into Solaris 10 with ZFS. It's free (as in beer), comes with source, and ZFS sounds just about like something from the book of black magic. http://www.sun.com/software/solaris/10/ds/zfs.jsp [sun.com]
    • 128-bit filesystem (zettabyte?)
    • Automagic volume management
    • Rock-stable NFS implementation

    Serving files on a home LAN is not really that heavy on the CPU, so anything that's on the supported Solaris x86 list should do, just make sure you stick enough RAM in it. Personally I'd go for somethin
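
    For the curious, ZFS collapses the whole partition/RAID/volume-manager/mkfs dance into a couple of commands; a sketch with Solaris-style device names (c0t0d0 etc. are placeholders):

      # One pool, single-parity raidz across four disks
      zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0

      # Filesystems are cheap; make one and export it over NFS
      zfs create tank/media
      zfs set sharenfs=on tank/media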

  • We were also looking to build a 1TB+, SATA RAID-based NAS server with a PCI-X gigabit Ethernet card...

    So far it seems it's gonna be Fedora Core, on an IBM xSeries 206 (real cheap in Canada), with an Adaptec SATA RAID card (8 or 16 connectors) and Maxtor MaxLine II drives (300GB). We'll get 5 drives first, to make 1.2TB with RAID 5, and the onboard drive will just host the OS in 80GB. We have a spare gigabit NIC...

    Total price $2000 CDN or so..

    The only downside for now is it's only possible to put 4 additional drives
  • Buy two $499 minis when they are released. Upgrade the size of the hard drives in both (there are already pics out that show how); this will satisfy your DIY urge. Have one rsync to the other daily with a cron job; I doubt that you'll be writing critical information frequently enough that you require RAID. They are small, quiet, sleek, and beautiful. From what I've read, you can also buy video adapters for S-Video out to connect to your TV, so you can make them multi-task as: 1) media boxes, 2) NAS,
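
    The cron half of that idea is a single crontab entry; a sketch assuming the second mini is reachable as mini2 and the data lives on /Volumes/Data (both names are assumptions):

      # 3am daily: mirror the data volume to the other mini
      0 3 * * * rsync -a --delete /Volumes/Data/ mini2:/Volumes/Data/
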
  • I guess it depends on your needs, but I found this to be an excellent solution in that it's quiet, cheap, and low on power consumption, plus it acts as an iTunes server. http://www.nslu2-linux.org/wiki/OpenSlug/HomePage
  • My last effort in this area was a dual Pentium 933 with 1GB RAM and a 3Ware Escalade 6400 hardware RAID 5 controller (real RAID; it has a RISC processor on it) handling four IDE drives. My total cost three years ago was $500 for the server, $240 for the drives, and $300 for the 3ware controller. I installed Red Hat 7.2 at the time.

    So why haven't I upgraded since then? I haven't needed to! It's still my fileserver, and has never had a problem. I have been especially happy with 3ware's linux software, and in their latest u
  • You must be clueless; all it takes is Microsoft Active Disk, or LVM, or Linux software RAID, or some hardware implementation. I mean really, it's about an hour's work no matter what you use.

  • For home use, I use:

    Pentium 233
    768MB RAM
    2 Promise adapters
    2 60GB drives - RAID 1
    2 120GB drives - RAID 1
    100Mb Ethernet
    el cheapo video card
    300 watt power supply
    1 big case

    I care not about video - I haven't looked at the screen in months. It serves files reliably, cost little or nothing. I left space between each drive, added 2 extra case fans, and let it run. It has been rock solid reliable since day 1, which was about 6 years ago. Sexy? No. Effective? Yes.
  • I, like most around here, have probably been fighting with the same thing... right now I have a Gentoo server with two 120s and two 200s, both set up in RAID 1 (so 320GB total space). I'm not worried so much about performance as ensuring reliable data.

    My two 120s are both identical, but my two 200s are differing brands (yes, it works: software RAID, not hardware). It seems to me that going with the same brand, while better for performance, means that if there is a manufacturing defect (these are home dr
    • Actually, I've had a few IDE hard disks fail at home over the last few years, so now I'm buying SCSI hard disks for my home server. Yes, they are far more expensive (with the added overhead of a good SCSI card), but much more reliable. Now 74GB is plenty of space for my needs.....
  • Please, for the love of god, if you're not going to back your array up, MAKE SURE IT HAS SOME SORT OF REDUNDANCY! Merely striping the data is going to get you really pissed off when a disk fails. Note I said when, not if. I replace hard drives in our NetApps at work with frightening regularity. I no longer trust hard drives.
  • One solution that has worked for some people is to look for a large, cheap Pentium II server on eBay. You can usually get one for less than $200. 300MHz is enough for casual file sharing if you put in the maximum amount of RAM and a few IDE cards. Server 2K runs easily on these, but Linux doesn't always, because of proprietary hardware.
