Ask Slashdot: Best Kit For a Home Media Server?

First time accepted submitter parkejr writes "I started off building a media library a few years ago with an old PC running Ubuntu: folders for photos, Ogg Vorbis music from my CD collection, and x264-encoded MKV movies. I have a high-spec machine for encoding, but over the years I've moved the server to a bigger case with 8 TB of disk capacity and reverted to Debian, still running the same AMD Sempron processor and 2GB RAM. It's working well; it's also the family mail server, the kids are starting to use it for network storage, and it runs both link and twonkyserver. But my disks are almost full, and there are no more internal slots. The obvious option to me is to add a couple of SATA PCI cards to give me 4 more drives, and buy an externally powered enclosure, but that doesn't feel very elegant. I'm a bit of an amateur, so I'd like some advice. Should I start looking at a rack system? Something that can accommodate, say, ten 3.5" drives (I'm thinking long term, and some redundancy)? Also, what about location? I could run some Cat6 to the garage and move it out of the house, in case noise is an issue. Finally, what about file format, file system, and OS/software? I'm currently running with ext3 and Debian Squeeze. I'm happy with my audio encoding choice, but not sure about x264 and MKV. I'd also consider different media server software, too. Any comments appreciated."

  • by InterestingFella ( 2537066 ) on Sunday December 25, 2011 @11:07PM (#38491694)
    Why would you change away from x264 and MKV? They are the industry standards, not just on computers but everywhere in the distribution chain. Moving away from them for some FOSS reason is just stupid because the files are only for your own use, not for distribution. You would either spend double the space or get half the quality by going with something other than H.264, and on top of that you introduce additional problems for yourself because they are not what everyone else uses.
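    (For reference, a typical x264/MKV encode from the command line, just as a sketch with example file names, looks something like this with a reasonably recent ffmpeg:

      # CRF ~20 is a common quality target; lower = better quality, bigger file.
      # -c:a copy keeps the existing audio track untouched.
      ffmpeg -i input.vob -c:v libx264 -preset slow -crf 20 -c:a copy output.mkv

    Adjust -crf and -preset to trade quality against encode time and file size.)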
  • Arduino. (Score:5, Funny)

    by Anonymous Coward on Sunday December 25, 2011 @11:09PM (#38491706)

    Just kidding.

    • Re: (Score:3, Funny)

      by Anonymous Coward

      I think you would need to imagine a Beowulf cluster of Arduinos...

      sysadmin'ed by natalie portman with grits in her pants who gets hugged by cowboyneal

    • and for a serious take on it, net.search on 'spinmaster DIY'.

      I designed and built that to solve some staggered spin-up power supply issues, but it does have a back-end CLI, and if you connect a USB-to-TTL serial cable and run a terminal program, you can spin up/down any disk you want at the 5/12V Molex power point.

      Add in a database that knows which disk file X is on and you have a nice power-aware system that keeps noise down and only spins up the drives (media drives) that are needed for a 'session'.

      (the arduino par

  • With drive capacities soaring, I wonder if you'll really need 10 drives.
    You might want to try something like a Fractal Design Array. It's a small HTPC case for a micro-ATX board, with mounts for six 3.5" drives, and these days a micro-ATX board will have everything you need, including integrated video for all your playback needs.

  • raid (Score:2, Informative)

    by Anonymous Coward

    Start playing with Linux mdadm and run a RAID 5 array with 1 or 2 spare drives. I built my first 1TB system about 9 years ago with 14 120GB drives; it ran without major failure for 6 years. Play with the array (remove drives, add new drives, etc.) BEFORE you have a drive failure so you understand how to repair it. Let me say again: learn BEFORE you have a failure. Otherwise, you'll freak out at losing all your data because you screwed something up during the repair.
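    A minimal sketch of that sort of practice run, assuming four data disks plus one spare on /dev/sdb1 through /dev/sdf1 (device names are examples only):

      # create a 4-disk RAID 5 with one hot spare
      mdadm --create /dev/md0 --level=5 --raid-devices=4 --spare-devices=1 \
          /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
      cat /proc/mdstat                     # watch the initial sync
      # rehearse a failure and repair before it happens for real
      mdadm --manage /dev/md0 --fail /dev/sdc1
      mdadm --manage /dev/md0 --remove /dev/sdc1
      mdadm --manage /dev/md0 --add /dev/sdc1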

    • by swalve ( 1980968 )
      That's what I do, and I'm about to split my "server" into two machines: one a quasi-SAN with just shitloads of hard drives RAIDed and accessed via iSCSI, and the other doing all the heavy lifting.
    • by jedidiah ( 1196 )

      You should not be dependent on a single array.

      You can't just depend on RAID to save your butt. You need to have a backup. Given the size of these arrays, it has to be another array. There's really no other good option. It's kind of painful but unavoidable.

      You should never be in the position to panic about your data. Your stuff should be safe even if you need to rebuild an array from scratch.

  • Larger disks (Score:5, Informative)

    by Anonymous Coward on Sunday December 25, 2011 @11:17PM (#38491730)

    Larger disks. 4 TB should be available very soon (maybe now?).
    http://www.extremetech.com/computing/108665-hitachi-ships-worlds-first-4tb-hard-drive-sticks-it-to-thor

  • by JustNiz ( 692889 ) on Sunday December 25, 2011 @11:22PM (#38491752)

    I use mythtv. It does pretty much everything. I love it.

    • by jamesh ( 87723 )

      I use mythtv. It does pretty much everything. I love it.

      Out of space? Just add another box. The idea that your entire media store is attached to a single server seems a bit old fashioned.

      • Putting everything in a single server has its advantages - lower power consumption for one (because you do not need to heat additional CPUs, RAM etc) and less physical space used.

        I used the "just add another box" method some years ago, so now I have ~3TB (in >10 hard drives) and 6 computers (including the main one), total power consumption ~1.2kW. I would love to replace 5 of them with a single server, but servers are really expensive, especially the ones that have more than 4 HDD slots and a rackmount (

        • by jamesh ( 87723 )

          Putting everything in a single server has its advantages - lower power consumption for one (because you do not need to heat additional CPUs, RAM etc) and less physical space used.

          Do you think the OP really needs to keep his/her 8TB of disk powered up and online _all_ the time, just on the off chance they might want to watch something right now that can't wait 30 seconds for a second box to boot up?

          We're still using DVDs at our house, and the kids seem to watch a new movie a few times (or a few hundred times... :) in the first month or so and then, with a few exceptions, not really watch it again for ages. All those older movies could go on a box that only boots up when required. In fact

          • I use LTO-2 tapes to archive the stuff that I already watched (as they take up less space than DVDs).

            How many movies have you ever seen that you'd want to watch again?

            Movies? Probably not that many; I do not watch that many movies anyway, mainly TV shows. TV shows? Sure, I watched B5 and DS9 like 6 times each.

            As for the archiving, I archive because I do not know if I am able to download the same movie in the future. Maybe everybody stops seeding (the downside of BitTorrent compared to, say, Gnutella or ed2k, is that you have to specifically seed the file, instead of point

            • by jamesh ( 87723 )

              As for the archiving, I archive because I do not know if I am able to download the same movie in the future.

              Ah. I made the flawed assumption that people would actually own the shows/movies they were watching and could just re-load from the original DVD.

  • Instead of tacking function after function onto the same server, I'd encourage you to use several small ones, including the $35 Raspberry Pi. That way, if one piece of software starts to go haywire, it doesn't bring the whole shebang down with it.
    Also, I'd go for fewer, larger disks. As long as you do backups it's not more risky, and it's a lot more practical. HDs reach 4TB these days, so you're talking 2 HDs for your existing data, a 3rd one for more capacity, and double that for backups.

  • by Joe_Dragon ( 2206452 ) on Sunday December 25, 2011 @11:26PM (#38491772)

    10 disks in a RAID 0 type setup is a big risk. Also, lots of PCI cards eat up PCI bus I/O, maybe even the same I/O used for the network, and your board likely only has 100M Ethernet. Now, if your system has a PCIe slot, then a 10-port non-RAID card + software RAID may work and is cheaper than a RAID card. But you may also want to get a newer motherboard + CPU; most new AMD and Intel boards max out at 8 SATA ports. 8 ports may work out OK, or you can get a new board with a dual-core CPU + a RAID or non-RAID card; a new motherboard will also get you gigabit Ethernet + PCIe I/O.

    • by sirsnork ( 530512 ) on Sunday December 25, 2011 @11:38PM (#38491848)

      Yup, start again.

      Pick a board with plenty of SATA ports and put a modest amount of RAM and CPU in it. Make sure it's got PCI-E slots (what hasn't these days?) and go from there.

      Use bigger drives than you have currently. It's a bad time to buy drives, so wait if you can, but build a new box from scratch and save yourself the headache of trying to migrate drives or retain data while upgrading drives one at a time in an existing array.

      New machine, 3TB drives x as many as you want (6 would about double your capacity), add a 4-port PCI-E SATA card if you need it and rsync all the data across. Job done.
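      Something along these lines would do the copy, assuming the old and new arrays are mounted at /mnt/old and /mnt/new (paths are examples):

        # -a preserves permissions/times/symlinks, -H preserves hard links,
        # -P shows progress and lets an interrupted copy resume
        rsync -aHP /mnt/old/ /mnt/new/

      Run it a second time afterwards to pick up anything that changed during the first pass.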

      • Using software RAID is OK. Most boards have about 6 ports, so if you want, say, 10, then an x4 or better PCIe card may be needed.

        • by nabsltd ( 1313397 ) on Monday December 26, 2011 @12:46AM (#38492136)

          Using software RAID is OK. Most boards have about 6 ports, so if you want, say, 10, then an x4 or better PCIe card may be needed.

          Or, get an actual server board (this is gonna be a server, right?), like this one [supermicro.com]. That's six SATA ports and 8 SAS ports. If you flash the SAS ROM to the "no-RAID" version, the controller is recognized natively by Linux. In addition, you get lots of PCIe connectivity, a pair of Gigabit Ethernet ports, and IPMI (allowing remote power cycle).

          Then, find a full-tower case with lots of 5-1/4" drive bays, and add hot swap bays [newegg.com]. There are smaller versions, as well...just budget what you need for drives.

          I use the motherboard I referenced along with an add-on 8-port SATA card (anything supported by Linux would be fine) and two of the drive bays for ten 2TB drives in RAID-10. I boot Fedora off a pair of SSDs in RAID-1 and also have four 2-1/2" 750GB drives in RAID-10. The 10TB array serves iSCSI over 10Gbit Ethernet to ESX systems that hold all my VMs, with the 1.5TB array as local and NFS storage. There are still PCIe slots available if you need more controller cards.

          With this setup, the VMs are how everything is accessed, so you can pick whatever OS you want to face client machines.

          • You do know any ROM-flash RAID is just software RAID, right? Even SAS controllers don't have hardware RAID unless you buy a real RAID card for $$$. Real RAID cards have write-back memory and a BBU.

            That being the case, you're better off using something like Linux software RAID, so if your SAS controller happens to die you can still recover your array by simply plugging the drives into another machine.

    • Re: (Score:2, Insightful)

      RAID seems silly in a home setup. People see it as a backup solution when it's not, and I doubt a few extra tenths of a percent of uptime is really significant at home. Just have one or two big disks for all your media and stuff, then buy the equivalent to use for regular rsync backups. You can get a third to take "off site" occasionally (e.g. your sister's house, a friend's house, a drawer at work) if you want to be really careful; just rotate the backup disks every so often.
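      A rough sketch of that kind of backup, assuming the media lives under /data and the backup disk is mounted at /backup (paths are examples), is a nightly cron entry like:

        # /etc/crontab: mirror /data to the backup disk at 02:30 every night
        30 2 * * * root rsync -a --delete /data/ /backup/data/

      plus swapping the /backup disk with the "off site" one every so often.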

      • Right now just fitting the data on one set takes like 3 HDDs, and then RAID 5 or 6 starts to look like a good idea.

      • by swalve ( 1980968 )
        I like it for expediency (no need to restore when a drive fails) and for simplicity. I found myself with single drives in a bunch of machines, and my free space management got to be a huge waste of time. It became easier to just concatenate all my storage into one volume with one "xxxGB free space" indicator and when that starts to fill up, I start replacing hard drives.
      • RAID seems silly in a home setup.

        Once you start to use more than one hard drive's worth of space, RAID is pretty much required unless you really like to spend days restoring data when a hard drive fails.

      • The failure rate of disk drives in consumer environments is about 3% per year. With N drives you'll see roughly N x 3% failures per year. You do the math.
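        (Worked out: the chance of at least one failure in a year is 1 - 0.97^N, which is about 17% with 6 drives and about 26% with 10, close to the N x 3% rule of thumb for small N.)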

  • HP Microserver (Score:5, Informative)

    by quenda ( 644621 ) on Sunday December 25, 2011 @11:36PM (#38491834)

    For a slightly more sane solution than rackmounting at home, consider the HP MicroServer.
    Very low power (12W CPU), small, quiet, cheap, server-grade, no Windows tax, and it holds four pluggable 3.5" drives plus an optical drive (which some people swap for a 5th HDD for RAID 5).

    http://blog.thestateofme.com/2011/05/14/review-hp-microserver/ [thestateofme.com]
    http://www.silentpcreview.com/HP_Proliant_MicroServer [silentpcreview.com]
    http://h10010.www1.hp.com/wwpc/us/en/sm/WF05a/15351-15351-4237916-4237918-4237917-4248009.html [hp.com]
    http://forums.overclockers.com.au/showthread.php?t=905262 [overclockers.com.au]

    If 8TB is full, you need to stop the obsessive collection of warez/pr0n/torrentz you are never likely to watch again.

    • Re:HP Microserver (Score:5, Interesting)

      by EdIII ( 1114411 ) on Monday December 26, 2011 @12:37AM (#38492118)

      If 8TB is full, you need to stop the obsessive collection of warez/pr0n/torrentz you are never likely to watch again.

      As opposed to the obsessive collecting of physical media that can be scratched and takes up 10x+ the space?

      Since the 80s my family has amassed literally 10k CDs/DVDs, as well as almost 100 LaserDisc titles, not to mention a buttload of VHS tapes that we offloaded years ago.

      It has all been converted to digital storage. Since it is on multiple RAID 5 devices, and I run a cron job that checks the MD5 sigs against a database, I know that it is in good condition.
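      (A bare-bones version of that kind of check, assuming the media lives under /srv/media (the paths are examples, and this uses a flat manifest file rather than a real database), looks something like:

        # build the manifest once
        find /srv/media -type f -print0 | xargs -0 md5sum > /srv/media-manifest.md5
        # nightly cron job: re-verify, reporting only mismatches and missing files
        md5sum -c --quiet /srv/media-manifest.md5

      with new files appended to the manifest as they are added.)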

      Of course this requires constant rollover of the data from hard drive to hard drive. Half the drives have failed over the years and it has moved between multiple NAS systems. We still have all the data.

      In addition to that, we have over 100k family photos collected from all of our relatives scanned and tagged as well.

      Our collection is nearing 20 TB. With the low cost of drives, we have backups in lead-lined containers in safe deposit boxes at two banks. We swap them out every year or so, adding to it. I am really looking forward to long-term archival storage that is write-once and designed to last 100 years plus. I'll pay for that.

      Now I know you may be thinking obsession, but we *paid* for it. Paying twice for music or movies is just plain insane and we never fell for the HD/Bluesuck shit they were shoving down our throats. Well my parents did, but Spiderman solved that problem the first time it could not be played because the encryption changed. Since then they are back on DVD only and we are waiting for a HD storage method that does not involve constant Big Brother monitoring and DRM in our houses.

      Then there is the most obvious benefit of all. You only have to rip the music or movie one time. Been years since we bought an actual CD, but you get my point.

      The convenience of having all of your media at your fingertips without touching physical media is pretty damn nice.

      Guess how much storage space you need for thousands of DVDs/CDs when they are packed into spindles and put into storage? A heck of a lot less than you would expect. It fits in a closet.

      • by BLKMGK ( 34057 )

        Umm, why not strip the DRM from the BDs and store those too? The picture really is much better, even if you re-encode them for smaller storage. I have a post above about this, but yeah, it's doable. I also have about the same amount of storage as you, but it's not backed up other than by the unRAID software's protections. It's simply too expensive for me to keep a full dupe :(

        • by EdIII ( 1114411 )

          BD is never an option for multiple reasons.

          1) Even if doable, which keeps changing, a true BD backup takes up a *lot* more storage. I have seen torrents that are 25-30GB for a BD rip. Why even download it or rip it?
          2) To rip it, I would have to purchase it. I would never do so because that would be voting with my wallet in the wrong direction.
          3) Re-encoding has variable quality depending on the algorithm and the parameters. Piracy groups do it better, so I would never do it myself. In the end I would

  • No Garage (Score:3, Insightful)

    by WoodburyMan ( 1288090 ) on Sunday December 25, 2011 @11:56PM (#38491934)
    I would think twice about putting the box in the garage. Yes, it seems like a great location; it's out of the way and such. However, it might not be the cleanest place in the house. I for one know my garage to be one of the dirtiest places. In the winter the car drags in massive amounts of sand from the winter roads, and leaves in the spring. Spiders and other insects, not to mention baby snakes and rodents, also make their way in from time to time and would love a nice warm dark place inside the case to live... In city areas it could also attract roaches from outside (despite sonic repellents and traps they still get in). Combine all that with being near moisture (a wet car or rainy days) and I don't see the case lasting long there. It would need to be cleaned out fairly often to keep the fans and heat sinks from gunking up. Of course I understand some people's garages are nice and clean, and not subject to some of these things, but I'm just saying I know that for me it would not work out well.
  • Since you're not complaining about processing power or RAM, you're in the market for a NAS. There are several good brands; I personally use Synology. It's a bit pricey, but you get what you pay for. Personally I'd just add a few external USB drives until prices fall (they're pretty outrageous now). When prices fall, get a nice 5-bay and stock it with 3TB drives; that will give you ~12 TB in RAID 5 and ~9 TB in RAID 6 (recommended unless you like living on the edge).

    You'll probably find the Synology c

  • by Wrath0fb0b ( 302444 ) on Monday December 26, 2011 @12:06AM (#38492004)

    These are ballpark figures, not exact; redo them with your preferred constants. I'm just trying to explain my reasoning against huge enclosures with >10 drives:

    Standard drive idle power ~ 10W [1]
    Low-power (green) drive idle power ~ 5W [1]
    Cost of power ~ $0.20/kWh
    Cost of running an older drive per year ~ $17
    Cost of running a green drive per year ~ $8.50
    Savings from replacing 6x500GB older drives with one 3TB green drive ~ $95/yr
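    (Where the per-year numbers come from: 10W x 8760 hours = 87.6 kWh, which at $0.20/kWh is about $17.50/yr per older drive, or roughly $105/yr for six of them, versus about $8.75/yr for one 5W green drive.)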

    So think about that for a sec. At $150[2] for a 3TB drive, you cover the price in power savings in 18 months. That's assuming there is zero fixed cost per drive. At the point where you are talking about adding SATA controllers or fancy multi-bay enclosures or, worse, external enclosures with their own PSUs (and fans!), the break-even point against the older drives comes far sooner.

    I'm a hobbyist; I understand that it's really cool to make do with older hardware and feel like you aren't letting anything go to waste, but sometimes using old hardware instead of buying new is penny-wise and pound-foolish. Spending money on increasing how many hard drives you can accommodate instead of just buying newer high-capacity, lower-wattage drives is absolutely batty, especially when you get into the price for anything remotely good in the RAID department.

    My advice: move everything to the largest-capacity drives that are reasonably priced (after the flood damage is sorted). Replace the drives when you can do between a 4:1 and 6:1 replacement, which should be every 3-4 years. Live happily, quietly, and more simply. Save money on power transparently.

    [1] http://hothardware.com/Reviews/Western-Digital-2TB-Caviar-Green-Power-Hard-Drive/ [hothardware.com]
    [2] I bought some Hitachi 3TBs before the Thailand floods at $130 on Newegg. Of course you would be silly as heck to buy hard drives now for your hobby storage project before they at least fall back to pre-flood level.
    [3] http://www.newegg.com/Product/Product.aspx?Item=N82E16817182221 [newegg.com]
    [4] Older drives need not go to waste, they can become offline storage with a simple USB dock[3] -- make a backup, throw it in an anti-static bag, leave it at your relative's house when you visit!
    [5] http://en.wikipedia.org/wiki/File:Hard_drive_capacity_over_time.svg [wikipedia.org]

  • There's the ZaReason MediaBox (http://zareason.com/shop/MediaBox-4220.html), but you have clearly outgrown that already... Both ZaReason (http://zareason.com/shop/Servers/) and System76 (http://www.system76.com/servers/) have server models that look like they would meet your needs. I usually prefer to buy from one of them (ZaReason will ship with Debian already installed). Another thing I would look at is using a video card for encoding. I couldn't find a link for how to do this; if anyone has one, please chime in. As for file
  • If the issue is just plain physical space for more HDDs, get one of these Supermicro storage solutions [supermicro.com.tw]. There are chassis from 15 up to 36 HDDs, so take your pick. It will cost "only" a few hundred bucks. Then, yes, you don't want this to sit in your main room; it would be too noisy. If there's no humidity in your garage, and it doesn't get too hot in there in the summer, then running Cat6 out to it should be fine. If you don't want to change your motherboard right away, then just use the old existing one y
  • Once your data grows past a certain level, redundancy becomes a must. For about 3 years I have been running an unRAID server, and I don't know how I survived before it. It allowed me to start with 3 drives (approx 250GB each) and grow to my current configuration of 14 hard drives (mostly 1TB, but also a couple of smaller drives and three 2TB drives). After you have the system set up you don't need a monitor. I have it in my basement, which helps with the noise issues that result from having 12 drives spinning 24/7. Because
  • by jafo ( 11982 ) on Monday December 26, 2011 @12:52AM (#38492168) Homepage

    I wrote about the latest storage server I built back in 2008, and a lot of my thoughts at the time are written up in http://www.tummy.com/Community/Articles/ultimatestorage2008/ [tummy.com]

    However, to answer a few of your questions...

    External disk enclosures? Avoid them like the plague. My initial experience with the 5-bay eSATA enclosures was pretty good -- sometimes it wouldn't pick up the external drives, but usually I could get it to find them after some tweaking, rebooting, etc... I ended up getting 3 of them, the AMS DS-2350S, which at the time were well reviewed, etc... I have since pulled all 3 of them out of active use and have them just sitting around. I don't know exactly the mode of the failures, but eventually, after replacing some with others, I finally put the drives in internal SATA enclosures, which have been very reliable (I used the Supermicro CSE-M35T-1).

    Also note that eSATA connectors don't really hold on that well. If anything, they're not as robust as internal SATA connectors, despite being outside the case where they can get banged around.

    If I were to do it over again, I'd probably stick with the case I started with, with 5 internal 3.5" bays, and 3 front 5.25" bays, and put the Supermicro in there. I'd also probably go with fewer big drives rather than more smaller drives like I did previously (even though at the time the drives were free, I had them from another project).

    As far as running it in the garage: don't even think about it, unless your garage is not where you store your cars. I have some computers that I've run in the garage for the last 9 months, and they are filthy; I've had a lot of fan failures, lots of dust, insects, and random other crap. I put mine in our furnace room, which has enough extra space.

    As far as using a server case? Hard to see the payback there unless you have a cabinet. Most server cases are HUGE, heavy, and expensive. A 3U case with 12 drive bays likely costs $500, plus you usually have to deal with special form-factor power supplies, expect to spend another $200 on one of those. I wouldn't do it, and I have a 3U 12-bay Chenbro case just sitting at my office that I could re-purpose.

    As far as the file system, I selected ZFS (via zfs-fuse under Linux) and I've been VERY happy with it. The primary benefit is that it checksums *ALL* data and can recover from some types of corruption, or at least alert about corruption it can't correct. So, if you are storing photos or home videos that you may not be accessing very often, that's good peace of mind to have; I know that in 10 years I won't go to look at some photographs I've taken and find they were silently corrupted. Of course, you could get similar benefits by saving off a database of file checksums and checking and alerting if they are bad. Really the only downside of ZFS that I've seen is that if you need to do a RAID rebuild, it is a seek-heavy task rather than just streaming. I have an 8x2TB drive array that I'm currently rebuilding (drive failure, at work), and it's 33% done after 31 hours. A normal RAID-5 array would have rebuilt that in what, 10? The system is idle except for the rebuild.

    If you care about the data going into it, make sure you checksum and verify the files regularly.
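    With ZFS that periodic verification is built in; a sketch, assuming the pool is named "tank" (the name is an example):

      # read and verify every block against its checksum, repairing from redundancy where possible
      zpool scrub tank
      # check scrub progress and whether any errors were found
      zpool status -v tank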

    The 8-port PCI SATA card I got is fantastic; it's a Supermicro with the Marvell chipset and is very well supported (even supported by Nexenta).

    Finally, all this data is encrypted, so if someone were to burgle us I only have to worry about them getting the hardware; I don't have to worry about them now having scanned bills and other documents and other personal and private data, etc... This is why I'm running ZFS on Linux: it gave me encryption plus ZFS (not available otherwise in 2008), as well as being an OS I'm very familiar with.

    As far as OS, I am personally running CentOS on my system because that means I can install and set it up and then forget about it for quite a few years, except for regularly running "yum update". Debian should be fine, but you will get/have to track upstream changes more frequently.

  • by account_deleted ( 4530225 ) on Monday December 26, 2011 @02:05AM (#38492414)
    Comment removed based on user account deletion
    • by JayAEU ( 33022 )

      Very true indeed. On top of the great set of features already included, it's easy to enhance those things with extra packages. It's just so much less hassle to maintain these things if your main job isn't being a sysadmin. Anybody can basically do it and in a fraction of the time needed for regular server management, too.

      Some might argue that these NAS things are too expensive compared to self-built systems. If you consider the time spent on them however, NAS systems beat DIY-systems hands down.

  • As many others have already stated, a NAS definitely is the way to go here. There are 2 good manufacturers that accommodate any need and have vibrant communities providing excellent support on top of what the manufacturers themselves offer: QNAP and Synology.

    Both of them basically use custom Linux builds on their otherwise very PC-like hardware that is open to all sorts of tweaking and readily allows for adding all sorts of extra software.

  • You seemed unsure about encoding. I suggest H.264 video and AAC audio, since that's the one combination supported by Flash, Silverlight, iOS, and Android.
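    Just as a sketch, an H.264 + AAC encode with a recent ffmpeg might look like this (file names and audio bitrate are examples only):

      # H.264 video + AAC audio in an MP4 container for broad device compatibility
      ffmpeg -i input.mkv -c:v libx264 -preset slow -crf 20 -c:a aac -b:a 160k output.mp4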
  • by drolli ( 522659 ) on Monday December 26, 2011 @04:57AM (#38492852) Journal

    First figure out which of these applies:

    a) I am running out of HD space

    b) I feel the processor is a little slow (movies don't play without stuttering)

    c) I am bored over the Christmas holidays

    d) I am worried the thing will explode or fall apart

    If only a): attach a NAS and make an archiving system.

    If a) and b) or d): there are enough shopping guides for off-the-shelf PCs out there. Your priority should be energy consumption, reliability, and space for more HDs. Consider external SATA boxes.

    If c): play around with something different.
