Data Storage

RAID Vs. JBOD Vs. Standard HDDs 555

Ravengbc writes "I am in the process of planning and buying some hardware to build a media center/media server. While there are still quite a few things on it that I haven't decided on, such as motherboard/processor, and windows XP vs. Linux, right now my debate is about storage. I'm wanting to have as much storage as possible, but redundancy seems to be important too." Read on for this reader's questions about the tradeoffs among straight HDDs, RAID 5, and JBOD.

At first I was thinking about just putting in a bunch of HDDs. Then I started thinking about doing a RAID array, looking at RAID 5. However, some of the stuff I was initially told about RAID 5, I am now learning is not true. Some of the limitations I'm learning about: RAID 5 arrays are limited to the size of the smallest drive in the array. And the way things are looking, even if I gradually replace all of the drives with larger ones, the array will still read the original size. For example, say I have 3x500GB drives in RAID 5 and over time replace all of them with 1TB drives. Instead of reading one big 3TB drive, it will still read 1.5TB. Is this true? I also considered using JBOD simply because I can use different-size HDDs and have them all appear to be one large one, but there is no redundancy with this, which has me leaning away from it. If y'all were building a system for this purpose, how many drives and what size drives would you use, and would you do some form of RAID, or what?
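For reference, a quick sanity check on the capacity arithmetic (the drive count and sizes come from the question; the formula is the standard one for RAID 5):

```shell
# RAID 5 usable capacity = (number of drives - 1) * smallest drive.
# Sizes in GB, matching the 3x500GB example in the question.
n=3
smallest=500
usable=$(( (n - 1) * smallest ))
echo "usable: ${usable} GB"   # prints: usable: 1000 GB
```

So three 500GB drives in RAID 5 give 1TB of usable space, not 1.5TB.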
  • by Richard McBeef ( 1092673 ) on Monday June 04, 2007 @08:22PM (#19389577)
    Nothing can possibly go wrong. Especially if you use, like, 10 disks.
    • Re: (Score:2, Funny)

      by erik umenhofer ( 782 )
      Place it under a rain gutter as well. Uptime and data retention increase 59.544%.
    • Nuh-Uh (Score:5, Funny)

      by rustalot42684 ( 1055008 ) <fake@acco u n> on Monday June 04, 2007 @08:40PM (#19389827)
      You're wrong. What if:
      • a psychopath hires a hitman to destroy his media center? The hitman comes in, destroys all 10 drives with a large axe, and leaves.
      • a crazed velociraptor claws open the case and destroys all 10 drives, then mauls him.
      • His power supply suffers extreme spontaneous combustion and explodes all 10 drives.
      • Steve Ballmer is angered by that fucking pussy Eric Schmidt and throws a chair which flies across the country and smashes into his computer.
      • a meteor crashes into the house and destroys all the drives, but leaves everything else untouched.
      • he becomes prone to sleep-sysadmining and accidentally formats them all.
      • His house is the target of a nuclear attack. (Didn't think of that one, did you, bitches?)

      Raid 0 won't protect you, man!
    • by CajunArson ( 465943 ) on Monday June 04, 2007 @08:45PM (#19389883) Journal

      Two words: RAID 0 Nothing can possibly go wrong. Especially if you use, like, 10 disks.

      For the love of God and all that's holy will someone mod this 'Funny' instead of Informative? I get the joke, but there's always somebody who won't!
      (Then again... maybe people who won't oughta make a 10 disk RAID 0, hell mod it insightful sucka!)
    • by Guido von Guido ( 548827 ) on Monday June 04, 2007 @11:35PM (#19391539)
      Crap, that reminds me--I gotta do some backups.
    • Re: (Score:3, Informative)

      by WoLpH ( 699064 )
      If you use Linux or FreeBSD software RAID (the FreeBSD variant in particular is very good) you'll be able to use different drive sizes.
      Or... get an Intel motherboard with a Matrix RAID chip; it'll allow you to add more drives to an array or increase its size when you add storage.

      However... not using RAID would make things more flexible for you; whether it's worth it is up to you. Personally I'd go for RAID 5 (I have done the same at home).
  • by bi_boy ( 630968 ) on Monday June 04, 2007 @08:27PM (#19389639)
    Wikipedia has a very informative article regarding RAID and the various levels.
    • Re: (Score:3, Informative)

      by Anonymous Coward
      While you're there, check out how ZFS can make most of the issues other posters are pointing out irrelevant, or at least nothing to be worried about.

      While Solaris might be a dirty word among the Slashdot crowd, if all the OP needs is a way to store a bunch of files, ZFS is an excellent solution. Check out Sun's ZFS pages, and in particular the demos linked on the left side.

      Then, if you're still not convinced how appropriate ZFS might be for a so
    • by grcumb ( 781340 ) on Tuesday June 05, 2007 @02:08AM (#19392539) Homepage Journal

      Wikipedia has a very informative article regarding RAID and the various levels.

      Nonsense! Everything you need to know is in the RAID 5 song:

      10 TB of disk on the wall, 10 TB of disk
      You take one down
      Pass it around
      10 TB of disk on the wall!

      10 TB of disk on the wall, 10 TB of disk
      You take one down
      Pass it around
      0 TB of disk on the wall!

      (My friend Rich actually came up with this. I like him too much to slashdot him, though.)

  • Duh (Score:5, Insightful)

    by phasm42 ( 588479 ) on Monday June 04, 2007 @08:28PM (#19389647)

    I'm wanting to have as much storage as possible, but redundancy seems to be important too.
    Given that RAID 0 and JBOD give you no redundancy, RAID 5 is the only one you listed that has redundancy.

    That said, RAID is not a replacement for proper backup. RAID is just a first line of defense to avoid downtime.
    • Re: (Score:3, Insightful)

      by geedra ( 1009933 )
      That said, RAID is not a replacement for proper backup. RAID is just a first line of defense to avoid downtime.

      A good point. Consider, though, that most people don't run terabyte-size tape backup at home. It's not like it's business critical data, so RAID-5 is probably sufficient.
    • Re: (Score:3, Informative)

      by TheRaven64 ( 641858 )
      ZFS could be the solution to both problems. You can mix different RAID types across the same volume (no redundancy for unimportant stuff, like /tmp, mirror really important stuff, and RAID-Z the rest). It can also do snapshots, which gets rid of a big part of the reason for needing backups on top of RAID (accidental deletion). Of course, a virus, or kernel bug could still wipe out your data, so you still need backup stuff, you're just less likely to need to pull out the backups.
    • That said, RAID is not a replacement for proper backup. RAID is just a first line of defense to avoid downtime.

      RAID-5 is "good enough" for home use though. If you're paranoid then build a second box that just backs up the first via rsync or rdiff-backup. The second box doesn't necessarily even have to have a RAID array, you could LVM a bunch of disks together. If the backup array dies then oh well, just install a new drive and rsync from your production server again. Personally I don't even bother to b

    • Re:Duh (Score:4, Insightful)

      by Fred_A ( 10934 ) on Tuesday June 05, 2007 @04:45AM (#19393467) Homepage

      That said, RAID is not a replacement for proper backup. RAID is just a first line of defense to avoid downtime.

      RAID is just a first line of defense to reduce downtime.
  • by NotQuiteReal ( 608241 ) on Monday June 04, 2007 @08:28PM (#19389655) Journal
    You can just download them again, right?
    • Re: (Score:3, Insightful)

      by noidentity ( 188756 )
      Even though it was modded funny, it's good advice: if most of your data is not something you created on your own, either directly or indirectly as a part of using the computer, it's possible to replace it from an outside source if lost. All you really need a backup of is your unique data.
  • Design for today. (Score:2, Interesting)

    by Joe U ( 443617 )
    Design for what you want to use today and in the near future; don't design for a few years from now, or you'll never get it built.

    That being said, mirroring might be the easiest solution to upgrade, but you'll sacrifice speed and space.

    If you want speed and redundancy, you'll have to go with something like RAID 5 or RAID 10 and just have a painful upgrade in the future.
  • I'm running a few arrays, all over 1TB. The largest is 8 drives in a RAID 6 config. Everything uses software RAID. Be sure to use LVM, so that you can snapshot your drives. Once you're properly RAIDed, you're more likely to lose your data to an accidental file deletion than to an unfixable hardware failure.
  • RAID (Score:2, Informative)

    by Anonymous Coward
    If you have 3x500GB disks in RAID5, you only have 1TB of usable space, as one drive is used as parity (and therefore not for effective data storage). If you replace the disks with larger ones, the array is not increased in size if you replace each disk one at a time and let the array rebuild itself. However, you can just plug in your new drives (if you have enough ports), create a new array, and then copy data across to the new array. Alternatively, if you are using software RAID, as you increase the size o
  • It depends (Score:2, Informative)

    by gbaldwin2 ( 548362 )
    It all depends on how the RAID is implemented. Most inexpensive controllers require a rebuild when you change sizes. It is not a big deal. I would never implement anything important as JBOD; the chance of failure is too large. I have replaced too many disks. Do RAID 5 or RAID 1. Over 99% of my disk is RAID 5, and I manage just over 500TB.
  • by Talez ( 468021 )
    RAID 5 drives are limited to the size of the smallest drive in the array.

    Yes... Duh....

    And the way things are looking, even if I gradually replace all of the drives with larger ones, the array will still read the original size. For example, say I have 3x500gb drives in RAID 5 and over time replace all of them with 1TB drives. Instead of reading one big 3tb drive, it will still read 1.5tb. Is this true?

    Yes... Fucking duh.... Have you even read the RAID 5 Wikipedia article?

    I also considered using JBOD simply becau
    • by Kaenneth ( 82978 ) on Monday June 04, 2007 @09:00PM (#19390023) Homepage Journal
      You can put RAID 5 on varying size disks.

      I had 4 300GB drives, and 2 200GB drives.

      I broke them up into 100GB partitions, and laid out the RAID arrays:

      A1 = [D1P1 D2P1 D3P1 D5P1]
      A2 = [D1P2 D2P2 D4P1 D6P1]
      A3 = [D1P3 D3P2 D4P2 D5P2]
      A4 = [D2P3 D3P3 D4P3 D6P2]

      Then I concatenated the arrays together, giving a little less than 1.2 TB of space from 1.6 TB of drives; if I had just RAID'd the 4 300 gig drives, and mirrored the 200's I would have only had 1.1 TB available, and the drive accesses would be imbalanced.

      I could also grow the array, since it was built as concatenated, so later when I got 4 400GB drives I raided them then tacked them on for 2.4 TB total.
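      The arithmetic behind this layout, as I read it (all sizes in GB; each RAID 5 array spans four 100GB partitions and loses one to parity):

```shell
# Four RAID 5 arrays of four 100GB partitions each:
per_array=$(( (4 - 1) * 100 ))       # 300 GB usable per array
combined=$(( 4 * per_array ))        # concatenated: 1200 GB
# The simpler layout (RAID 5 over the four 300s, mirror the 200s):
simple=$(( (4 - 1) * 300 + 200 ))    # 1100 GB
echo "layout: ${combined} GB vs simple: ${simple} GB"
```

That matches the parent's figures: a little under 1.2 TB versus 1.1 TB.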
  • by CPE1704TKS ( 995414 ) on Monday June 04, 2007 @08:32PM (#19389737)
    This is what you do: buy 2 drives exactly the same size and mirror them. End of story. If you're worried about a blown RAID controller, then buy another hard drive, stick it in another computer, and run a weekly cron job to copy everything. Right now you can get a 500 GB hard drive for about $150. Get two of them and mirror them. (If you need more than 500 GB I would highly suggest encoding your porn into a different format than MPEG2.) By the time you run out of space, you will be able to get 1 TB drives for about $150. Migrate over to two 1 TB hard drives. Repeat every few years.

    With computers, the stupidest thing you can do is spend extra money to prepare for your needs for tomorrow. Buy for what you need now, and by the time you outgrow it, things will be cheaper, faster and larger.

    By the way RAID 5 is a pain in the ass unless you have physical hotswap capability, which I highly doubt.
    • by QuesarVII ( 904243 ) on Monday June 04, 2007 @08:47PM (#19389911)
      By the way RAID 5 is a pain in the ass unless you have physical hotswap capability, which I highly doubt.

      With recent kernels, you can hotswap drives on nvidia sata controllers (common onboard). I believe several other chipsets had support for this added in recent kernels too. Then you can swap drives live and rebuild as needed.

      One more important note: if you're using more than about 8 drives (personally I'd draw the line at 6), I would use RAID 6 instead of 5. You often get read errors from one of your "good" drives during a rebuild after a single drive failure. Having a second parity drive (that's what RAID 6 gives you) solves this problem.
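      For comparison, the capacity cost of that second parity drive (the drive count and size below are illustrative, not from the parent):

```shell
# Usable space: RAID 5 keeps one drive's worth of parity, RAID 6 two.
# Example: 8 drives of 500 GB each.
n=8
size=500
raid5=$(( (n - 1) * size ))   # 3500 GB, tolerates 1 drive failure
raid6=$(( (n - 2) * size ))   # 3000 GB, tolerates 2 drive failures
echo "RAID 5: ${raid5} GB, RAID 6: ${raid6} GB"
```

The wider the array, the smaller the relative cost of the extra parity drive, which is why the advice kicks in at larger drive counts.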
    • Re: (Score:3, Insightful)

      by Grishnakh ( 216268 )
      (If you need more than 500 GB I would highly suggest encoding your porn into a different format than MPEG2)

      500 GB isn't that much space any more. If he's thinking of making an HDTV MythTV box, for instance, full-res HDTV streams will require a lot of space to store in real-time. It would probably be too computationally intensive to recode them into MPEG4 on the fly.
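      To put a rough number on that (assuming a full-rate ATSC broadcast stream of about 19.4 Mbit/s; actual bitrates vary):

```shell
# Storage needed per hour of raw HDTV recording at ~19.4 Mbit/s.
# Integer math, working in tenths of a megabit.
bits_per_sec=$(( 194 * 1000000 / 10 ))        # 19,400,000 bit/s
bytes_per_hour=$(( bits_per_sec / 8 * 3600 )) # 8,730,000,000 bytes
echo "~$(( bytes_per_hour / 1000000000 )) GB per hour"
```

At that rate a 500 GB drive holds on the order of 57 hours of unrecoded HD, which fills up fast on a DVR.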
    • This is what you do: buy 2 drives exactly the same size and mirror them. End of story.

      NO! That's NOT the end of the story. You need to do what is called "scrubbing" the array periodically, because drives "silently" fail, where areas become unreadable for various reasons. Guess when one usually discovers the bad data? When one drive screeches to a halt, and you confidently slap in another and hit "rebuild". Surpriiiiiiiiise.

      You can do it a variety of ways. The most harmless is probably to run a re

    • Re: (Score:3, Informative)

      by vinn ( 4370 )
      Yup. People overdesigning their drive systems are likely first time sys admins. There's a time and a place for complicated drive mechanisms and your porn collection is not one of them. I'd just do software RAID 1 with a possible backup to an external USB hard drive. NAS devices are cheap and interesting too.
    • by SanityInAnarchy ( 655584 ) on Tuesday June 05, 2007 @01:18AM (#19392195) Journal

      This is what you do: buy 2 drives exactly the same size and mirror them. End of story.

      Until another few years go by and you want to buy more storage. Then you're basically stuck with doubling it, clumsily -- or migrating away and essentially throwing out the old drives.

      RAID 5 is better in the short run. Even with a three disc array, you're getting more storage for your money, and you can always restripe it onto a fourth disc.

      (If you need more than 500 GB I would highly suggest encoding your porn into a different format than MPEG2)

      It's not all porn, and some of it is high def, in h.264. And I don't even edit videos, I just watch 'em.

      With computers, the stupidest thing you can do is spend extra money to prepare for your needs for tomorrow.

      That is true. However, I would fill a terabyte easily, and right now, I'm guessing it's cheaper to buy three 500 gig drives than two 1 tb drives.

      By the way RAID 5 is a pain in the ass unless you have physical hotswap capability, which I highly doubt.

      You highly doubt he's got SATA?

      The one thing I will say is, either have another disk (even a USB thumb drive) to boot off of, or do some sort of RAID1 across them. You almost certainly want software RAID on Linux, and you don't want to try to teach a BIOS to boot off of your array.

  • by GFree ( 853379 ) on Monday June 04, 2007 @08:33PM (#19389741)
    Out of all the details you're still working on, you decided to ask Slashdotters about storage?

    Why not the "windows XP vs. Linux" bit? Do you want 100 responses or 1000?
  • by foooo ( 634898 ) on Monday June 04, 2007 @08:33PM (#19389743) Journal
    Media Server: n. A euphemism for digital porn storage.
  • by Spirilis ( 3338 ) on Monday June 04, 2007 @08:34PM (#19389745)
    With Linux you can create a RAID 5 md device, say /dev/md0, then run LVM on top of that (pvcreate /dev/md0 ; vgcreate MyVgName /dev/md0) and use that to carve out your storage. The key here is to create partitions on each drive, e.g. filling up the entire disk, and create your RAID 5 with those.

    If you buy 1TB drives further down the road, here's what you do- With each disk, create a partition identical in size to the partitions on the smaller disks, then allocate the rest of the space to a second partition.
    Join the first partition of the disk to the existing RAID set. Let it rebuild. Swap the next drive, etc. etc. Then once you've done this switcharoo to all the drives, create another raid set using the 2nd partition on your new disks--call it /dev/md1. So now you have /dev/md0, pointing to the first 500GB of each disk, and /dev/md1, pointing to the 2nd 500GB of each disk.

    Take that /dev/md1 and graft it onto your LVM volume group. (pvcreate /dev/md1 ; vgextend MyVgName /dev/md1). Now your LVM VG just doubled in size, and you can use all that new space. Whatever you do though, do NOT create any "striped" logical volumes (the "-i2" option to lvcreate; LVM's Poor Man's RAID0, basically) because you will suffer terrible performance, since you'll be striping across different volumes on the same physical spindles (a big no-no for any striped configuration). But if you use the extra space by creating new filesystems or growing existing ones, you shouldn't see any trouble.

    Just be sure that any replacement drives you have to buy... you must partition them out similarly. I'd recommend pulling back on the partition sizes a bit, maybe 5%, to account for any size differences between the drives you bought right now and some replacement drives you may purchase later on which might be slightly lower in capacity (different drive manufacturers often have differing exact capacities).
  • It depends on the implementation (and possibly the raid level). Some raid cards will let you expand the container after you've replaced all of the drives with new ones of a larger size. Then you have to expand the partition, or put another partition into the new space. I've done this with Compaq hardware running Win2k in a RAID 1 (mirrored pair).

    The "Raid 5 can't do what I heard" isn't quite what's going on, again, depending on the implementation. Most raid cards I've used allow you to add drives to the
  • Linux, RAID 5, md (Score:5, Informative)

    by Pandaemonium ( 70120 ) on Monday June 04, 2007 @08:35PM (#19389757)
    Go RAID5. RAID5 = Hardware failure resilience + maximum storage.
    Go Linux. The Linux md driver allows you to control how you RAID: over disks or partitions. There are advantages. We will discuss.

    First, don't get suckered into a hardware RAID card. Most are *NOT* really hardware cards: they rely on a software driver to do the RAID 5 calculations on your CPU. Software RAID is JUST AS FAST. Unless you blow the big bucks for a card with a real dedicated ASIC to do the work, you're fooling yourself.

    Now, you want to go Linux. By using the md driver, you can stripe over PARTITIONS, and not the whole disk. By doing this, you can get MAXIMUM storage capacity out of your disks, even in upgrades.

    Say you have 3 500GB disks. You create a 1TB array, with 1 disk as parity. On each of these disks is a single partition, each the size of the drive. Now, you want to upgrade? SURE! Add 3 more disks. Create three partitions of EQUAL size to the original, and tack them on to the first array. Then, with the additional space, you can create a WHOLE NEW array, and now you have two separate RAID 5s, each redundant, each fully using your space.

    Another advantage with MD is flexibility. In my setup, I use 5x 250 drives right now. On each is a 245GB partition, and a 5GB partition. I use RAID1 over the 5's, and RAID5 over the rest. Why? Because each drive is now independently bootable! Plus, I can run the array off two disks, upgrade the file system on the other 3, and if there's a problem, I can always revert to the original file system. So much flexibility, it's not even funny.

    I recommend using plain old SATA, in conjunction with SATA drives, and just stick with the MD device. For increased performance, watch your motherboard selection. You could grab a server oriented board, with dedicated PCI buses for slots, and split the drives over the cards. Or, you can get a multiproc rig going, and assign processor affinity to the IRQ's- one card calls proc 1 for interrupts, the other card calls proc 0. If you have multiple buses, then performance is maximized.

    The last benefit? Portability. If your hardware suffers a failure, then your software RAID can move to any other system. Using ANY hardware RAID setup will require you to use the EXACT same card no matter what to recover data. Even the firmware will have to stay stable or else your data can be kissed goodbye.

    Windows? Forget about it.

    Good luck!
    • by tjstork ( 137384 )
      This is REALLY cool. Can you steer a poor man over to a FAQ on setting up such an MD device?
    • Re:Linux, RAID 5, md (Score:4, Interesting)

      by ptbarnett ( 159784 ) on Monday June 04, 2007 @09:08PM (#19390111)
      I did exactly this for a new server recently. The only thing I would add is to use RAID 6 instead of RAID 5. That way, you can tolerate 2 drive failures, giving you time to reconstruct the array after the first one fails.

      I have 6 320 GB disks. The /boot partition is RAID 1, mirrored across all 6 (yes, 6) devices, and grub is configured so that I can boot from any one of them. The rest of the partitions are RAID 6, with identical allocations on each disk.

      There's a RAID HOWTO for Linux: it tells you everything you need to know about setting it up.

      • Re:Linux, RAID 5, md (Score:4, Informative)

        by guruevi ( 827432 ) <evi&evcircuits,com> on Tuesday June 05, 2007 @11:22AM (#19397103) Homepage
        But then again, RAID6 is terrible in performance compared to RAID5 (especially on write operations) just as RAID5 is terrible in comparison on the same criteria to RAID10 (although it could be faster on non-sequential reads).

        Higher RAID-levels are not always THE ultimate solution and depending on your solution you might just have to go for a non-secure RAID level (RAID0) for large media storage with nightly snapshotting to your backup device. Usually it's not all that bad to lose a single day worth of data and if it is for these applications, use RAID10 or so. I do it as follows: get media on RAID0 (HD streams are large and fast on 10k drives) and then as soon as job is done, I copy it to the storage area which is RAID5 on cheap SATA storage and then a nightly copy to an offline backup station (HW-RAID5 with ATA100) of the data I want to keep.
  • by Erwos ( 553607 ) on Monday June 04, 2007 @08:36PM (#19389769)
    I really can't believe this made the front page. The questions are badly written, and the question itself could have been answered with some basic Internet research. RAID isn't an esoteric topic anymore, folks!

    This place has really gone downhill. I thought Firehose was supposed to stop stuff like this, not increase it!

    Anyways, just to be slightly on topic: there's no one answer to this question. It depends on your budget, your motherboard, your OS, and, most importantly, your actual redundancy needs. This kind of thing is addressed by large articles/essays, not brief comments.
  • Bad assumption (Score:2, Informative)

    by tfletche ( 708699 )
    You write that if you have 3 500GB disks in a RAID 5 you will have 1.5TB, etc. Don't you realize that (N x C) - C = Total? I.e. (3 x 500) - 500 = 1000, or 1 terabyte. That's only the first problem with your logic...
  • I have a lot of data (500 GB of music/movies/pictures/wallpapers/audiobooks/ebooks; filled to the last GB)

    I'm a student and I do not have the money for redundant storage.

    I rsync my documents and pictures over the two drives and burn my favourite movies to DVD. I use ffmpeg to turn DVDs into Xvids and oggenc to turn flacs into ogg q5s.

    If I lose one of the harddrives; that's life.

    So for those who do not have the luxury that the poster has; make sure that you backup what is really important and risk what you
  • If y'all were building a system for this purpose, how many drives and what size drives would you use and would you do some form of RAID, or what?

    For what purpose?! You haven't said word one about how this storage will be used. What is it for? Email back end, shared file systems, RDBMS (OLTP or OLAP), streaming loads, D2D backup, etc. Define your use case, please! Post after post on this topic and not one of you ever think to specify what the @!%*$ it is you're trying to do.

    Agonizing over the ability to incrementally upgrade an array is a sure sign you have cost at the very top of your list of concerns, with everything else far below. Learn ab

  • by Solder Fumes ( 797270 ) on Monday June 04, 2007 @08:42PM (#19389849)
    Hardware WILL get old, WILL die, and better stuff WILL become available. So it only makes sense to recognize this and plan for it.

    Here's the way I do it (for a home storage server, not a solution for business-critical stuff):

    Examine current storage needs, and forecast about two years into the future.

    Build new server with reliable midrange motherboard, and a midrange RAID card. These days you could do with a $100-$300 four-port SATA card, or two.

    Add four hard disks in capacities calculated to last you for two years of predicted usage, in RAID 5 mode. Don't worry about brand unless you know for a fact that a particular drive model is a lemon.

    Since manufacturers' warranties are about one year, and you may have difficulty finding an unused drive of the same type for replacement, buy two more identical drives. These will be your spares in the event of a drive failure.

    When the two years are up, you should be using 80 to 90 percent of your total storage.

    At this point, you build an entirely new server, using whatever technology is current at that time.

    Transfer all your files to the new server.

    Sell your entire old storage server along with any unused spare drives. A completely prebuilt hot-to-trot RAID 5 system, with new matching spare disk, only two years old, will still be very useful to someone else and you can recoup maybe 30 to 40 percent of the cost of building a new server.

    Lather, rinse, repeat until storage space is irrelevant or you die.
  • When did slashdot become a substitute for usenet/google/wiki or (gawd forbid) a fucking manual? Why do editors feel inclined to post the drivel of every clueless newbie who needs handholding, while rejecting important/interesting news stories?

    As to the poster's question: read the fucking manual, kid.
  • No, really. It's all about cost. Even with hardware accelerated RAID, you can expect a steep performance hit. If you're going for a massive data repository I'd suggest several RAID 1+0 setups in hardware with a decent volume manager & file system (not NTFS).

    2x500GB drives in a RAID 1 (for peace of mind). Then double that in a RAID 0 stripe (for speed). That's 4 drives per TB. Then use a decent file system, like ZFS, to chain your RAID 1+0 clusters into a single volume 1TB at a time.

    Whatever you choose t
  • Some RAID controllers allow you to enlarge a RAID 5 array. If the OS also allows you to enlarge the partitions, then you are set. I think currently both are possible under Linux.

    However, the better approach would be to recreate the array on disk upgrades. After all for any kind of reliability, you need backup anyways. RAID is not a replacement for backup!
  • It would be nice to know just how much data you are trying to store. If this is going to be a whole bunch of MP3s, then you might look into a RAID 1 array of that new 1TB drive from Hitachi.
    At 1TB, it is still gonna be pretty hard to fill this with DIVX encoded movies. I guess though, if you need more space, do a 0+1. Meaning a redundant array of a data-striped set.

    If you are talking about some sort of seriously whacked out array of like some Blu-Rays or HDDVDs or some crazy thing like that....then i wou
  • by Fallen Kell ( 165468 ) on Monday June 04, 2007 @09:01PM (#19390027)
    If you are going to do this, do it right. It will cost you some up front; however, in the long run, doing it right will be cheaper. Get a real RAID card, as in hardware RAID. Get something that supports multiple volumes and at least 8 disks. I personally just got the Promise SuperTrak EX8350. Now, why, you ask, do you need 8 disks? So you can upgrade, that is why. Use your current 3 or 4 disks in a RAID volume. In a couple of years when bigger disks are dirt cheap, pick up four 1TB+ disks and build a second volume on the RAID array using the new disks. Now you can offload all the old data onto the new RAID volume and either ditch the old disks or keep them around (up to you; however, I recommend ditching them to other computers or whatever, so that you have 4 empty slots on the RAID card and can rinse/repeat the whole process again in another few years...)

    Again, doing it right up front takes care of upgrade options down the line. It also gives you room for a monster-sized volume if you ever need that much space (8-disk array). Most of these RAID solutions are also OS independent, so if you want to dual boot, the volume would be recognized by Windows, Linux, Unix, BSD, etc., and you are also not dependent on the exact same motherboard if your motherboard dies or needs to be upgraded (you would lose all your data if you used the built-in RAID on the motherboard when changing to a new motherboard other than the exact same model).

    These better cards also can be linked together (i.e. you always get a second card assuming your motherboard has a slot for it, and add more disks to the array that way as well).
    • Re: (Score:3, Interesting)

      by nxtw ( 866177 )
      What is the advantage of having a proprietary RAID card? With PCI Express, there is enough bandwidth between the HBAs and CPU to perform software RAID5/RAID6/RAID-Z2 without a problem. Any recent CPU can perform RAID5 calculations much faster than the hard drives can supply data. The bottleneck becomes either PCI bandwidth or hard drive speed.

      I haven't seen a consumer level motherboard that has real (hardware) RAID. It's all software RAID with a fancy driver & BIOS support to allow Windows to boot.

      I'm not s
  • One acronym - ZFS (Score:3, Interesting)

    by GuyverDH ( 232921 ) on Monday June 04, 2007 @09:06PM (#19390081)
    Get a small box, install OpenSolaris on it, configure your JBOD as either raidz or raidz2, and configure either iSCSI or Samba to share the files over a gigabit link.

    Your data should be perfectly safe; with raidz2 you can lose up to 2 drives without data loss.
  • The simple answer is (Score:3, Interesting)

    by Mad Quacker ( 3327 ) on Monday June 04, 2007 @09:33PM (#19390445) Homepage
    that there is no good solution I can find. Every solution is flawed for this purpose, including ZFS.

    I have been giving much thought to writing yet another filesystem, which would fill the needs of home/archival/media box users. Essentially it would be like ZFS, except it would improve upon ZFS's dynamic striping. I would have dynamic parity, such that the number of disks in the stripe-set and the number of recovery blocks is completely independent per-file, a la PAR2. ZFS is still just as bone-headed as older filesystems because the vdevs are still atomic: you make a raidz, and it stays that way. The integrity would be on a per-file basis only. So you could add and remove disks at will, with no dangerous re-striping operations, and protection and recovery from on-disk corruption. If you lose too many disks, you only lose the information on those disks. A file need not be striped on every disk. Only when a particular file has fewer parity blocks than missing blocks, wherever such blocks may be, is the file gone. Files on disk should always be recoverable, regardless of "corrupt superblocks" or anything similar. This could probably be done using FUSE and some quick and dirty code.


    1. We want a lot of storage
    2. We want it expandable, no dangerous restriping or filesystem expansion. There can be NO BACKUPS!
    3. We don't want to wake up in the middle of the night and wonder if the next fsck is the last.
    4. We only care about enough performance to run the media center, i.e. record TV and play movies.
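    The per-file parity idea can be sketched with plain XOR parity. This is a toy stand-in for PAR2's Reed-Solomon codes (which can repair more than one missing block); all names here are illustrative:

```python
from functools import reduce

BLOCK = 4  # toy block size in bytes

def split_blocks(data: bytes, size: int = BLOCK) -> list[bytes]:
    """Split a file into fixed-size blocks, zero-padding the last one."""
    padded = data + b"\x00" * (-len(data) % size)
    return [padded[i:i + size] for i in range(0, len(padded), size)]

def xor_parity(blocks: list[bytes]) -> bytes:
    """One recovery block: the byte-wise XOR of all the given blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def recover(blocks: list, parity: bytes) -> list:
    """Rebuild a single missing block from the survivors plus parity."""
    missing = [i for i, b in enumerate(blocks) if b is None]
    assert len(missing) <= 1, "XOR parity can only repair one lost block"
    if missing:
        survivors = [b for b in blocks if b is not None]
        # XOR of survivors and parity cancels out, leaving the lost block.
        blocks[missing[0]] = xor_parity(survivors + [parity])
    return blocks

data = b"media file data!"        # 16 bytes -> 4 data blocks
blocks = split_blocks(data)
parity = xor_parity(blocks)       # the per-file recovery block

blocks[2] = None                  # simulate losing the disk holding block 2
repaired = recover(blocks, parity)
assert b"".join(repaired) == data
```

    Because the parity is computed per file, disks can come and go without a global restripe: losing a disk only dooms the files whose missing blocks outnumber their recovery blocks, which is exactly the property described above.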
  • by merreborn ( 853723 ) on Monday June 04, 2007 @09:47PM (#19390591) Journal

    I am in the process of planning and buying some hardware to build a media center/media server

    The advantages of RAID 0 versus RAID 1 versus RAID 5 have already been covered in detail, here, and in many books and websites.

    However, allow me to address the issue of how they relate to a media center:

    Firstly, when you say "media center/media server", do you mean "I just want to build myself a kickass Tivo?", or do you mean "I want to serve video for everyone in my frat house, simultaneously?"

    If the former, consider that TiVos ship with 5400 RPM drives for several reasons:
    1) They're cheaper than faster drives
    2) They run cooler than faster drives
    3) They run quieter than faster drives
    4) They use less power than faster drives
    5) They're more than fast enough for streaming a single video to your TV while recording another

    Long story short, if you're just building a "free" Tivo with a kickass drive array, performance is *not* an issue. Keep in mind that if you're building a set-top box of sorts, the low heat and low noise features are *very* big benefits. You probably want RAID 5, and/or JBOD.

    If, however, you're planning on serving video to more than a handful of stations simultaneously, you may need to consider performance. This is a vote for RAID 0 and/or RAID 10.

    Now, the second axis: How important to you is this data? Really?

    I've got over 300 gigs of drive space on my Tivo. Most of it is the last two weeks of television reruns (Scrubs, 6 copies of last Thursday's Daily Show, etc.), movies I recorded but won't watch, etc. There are about 10 gigs (3%) of video on there that's been saved for a few months, and frankly, I couldn't tell you a single thing on there that I'd miss if my drives went belly up tomorrow. So: do you *really* need to save all those Seinfeld reruns on a highly-redundant storage array? How *much* of the stuff on the server do you really need to keep?

    Assuming it's less than 50% (in the Tivo scenario, it probably is), consider using JBOD for most of your storage, and maintaining a single backup drive, or small backup drive array. Or just backing up the good stuff to DVD.

    In summary: If you're just building a Tivo, you probably don't really need the performance, or redundancy that RAID offers.
  • by JoeShmoe ( 90109 ) on Monday June 04, 2007 @09:52PM (#19390659)
    Infrant (wow, just checked their website and it looks like they were bought by NetGear) created their own version of RAID that specifically addresses the issue of capacity and expansion. It's a nice transitional blend from RAID-1 to RAID-5 and does offer the ability to increase the total capacity (albeit with a lot of drive swapping).

    Buy an Infrant RAID with the two biggest drives you can afford. Let's say two 750GB drives, or whatever's on sale that week. It starts out acting as RAID-1, with the drives mirroring. So you have 750GB of "safe" storage. Now you add another 750GB drive. Okay, now you have 1500GB of storage with one of the drives acting as a parity drive (RAID-5). Add a fourth drive and now you have 2250GB of "safe" storage. Now you come back and just replace one of the original 750GB drives with a 1TB drive. Do you get extra capacity? No... not initially. But the drive is fully formatted and integrated as X-RAID. What this means is that eventually, after you have piecemeal or onesie-twosie upgraded all four drives, the X-RAID resizes itself to match the capacity of the new drives with no transfer or downtime. So in theory, if you wanted to upgrade your RAID, buy four 1TB drives, swap them out one at a time (letting each one rebuild the array), and at the end you'll have a 3TB RAID instead of the old 2250GB RAID, with all the data intact.
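    The capacities in that walkthrough all follow the usual single-parity formula: usable space = (n − 1) × smallest drive, with a two-drive mirror as the n = 2 case. A quick sketch of the arithmetic:

```python
def usable_gb(drive_sizes_gb: list) -> int:
    """Usable capacity of a single-parity (RAID-5-style) array:
    one drive's worth of space goes to parity, and every member
    only counts for the size of the smallest drive."""
    n = len(drive_sizes_gb)
    assert n >= 2, "need at least two drives"
    return (n - 1) * min(drive_sizes_gb)

assert usable_gb([750, 750]) == 750              # two-drive mirror stage
assert usable_gb([750, 750, 750]) == 1500        # third drive added (RAID-5)
assert usable_gb([750, 750, 750, 750]) == 2250   # fourth drive added
assert usable_gb([1000, 750, 750, 750]) == 2250  # one drive upgraded: no gain yet
assert usable_gb([1000] * 4) == 3000             # all upgraded: X-RAID expands
```

    This also shows why replacing a single drive with a bigger one buys nothing until the whole set has been upgraded: the smallest member caps everyone.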

    I have three ReadyNAS units and love them to death. They are a little fussy about drive temperatures (I guess that's a good thing, but I may get like 40 emails during the course of the day about it, and it's not like I'll drive home from work to turn up the A/C in my house). My only sadness is that Infrant doesn't have a higher-capacity unit than four drives (oh please oh please, eight drives with a RAID-6-type protective hot spare in one nice rack-mountable unit would be my ultimate dream).

  • unRAID (Score:3, Informative)

    by Coppit ( 2441 ) on Tuesday June 05, 2007 @12:08AM (#19391815) Homepage

    I can't believe no one has suggested an unRAID server. You get redundancy, storage that can grow by just adding another drive, low power consumption, affordability, and the ability to telnet in. (Plus it runs Linux!) I really like this solution since the data isn't spread out over a bunch of disks in a way that only the RAID controller can understand. Instead it's just a bunch of files on a bunch of disks, with an extra parity drive for reliability.

    If a drive goes down, you can just pop a new one in and recover the lost data from the parity drive. If two drives simultaneously fail (unlikely), you lose the data on the drives that failed. Compare that to the nightmare if your RAID controller fails.

    Here's my unRAID server, built for $400 plus the drives. I love being able to do backups by just running rsync. Once the author gets sshd built into the system, I can even do automatic incremental snapshot backups using rsync --link-dest.

    • AMEN! (Score:3, Informative)

      by BLKMGK ( 34057 )
      I should've searched for unRAID in the discussion before posting my own endorsement - I'd be able to mod you up then (doh!).

      I 100% agree with you on unRAID - it rox! I'm not using any sort of background Linux stuff to do my backups but I do use an unattended backup package that works just fine with unRAID (Acronis). The speed could be better (I'm not yet on the 2.6 based version) but it keeps up stutter free with my XBMC box so it's fast enough for me. :-) Not having striped data also rocks, lets me spin do
  • by BLKMGK ( 34057 ) on Tuesday June 05, 2007 @07:46AM (#19394417) Homepage Journal
    I used whatever disks I had lying around to build my NAS, but the data is still protected. The software I use is developed by Lime-Technology. It's NOT RAID; instead it's a JBOD setup with the first drive being a PARITY drive. This means that if one of my drives fails I still have access to the data. If TWO drives fail I lose TWO drives' worth of data - *not* the whole damned thing. The data is not striped and is stored in a ReiserFS filesystem, so I can pull a drive and mount it elsewhere if I desire. This also means that if a drive isn't being actively used it can be spun down - try that with a striped RAID :-) When you write, only the parity disk and the disk being written to need to be spinning; love that. The system can hold more than 12 disks if you use their top-of-the-line software - mine only holds 12 total, for a bit over 4.5TB worth of storage. It boots a customized Linux off of a memory stick, and yes, source for mods is distributed but not the source for the web management stuff - he appears to be GPL compliant.

    Some limitations: the parity drive must be as big as or bigger than all the others. Each drive is a separate mount point unless you use a funky sort of shared-folder feature. The system doesn't have as high a transfer speed as a RAID would; however, it streams video to an XBMC XBOX1 for me just fine. It doesn't have a super robust system to notify you of failed drives out of the box, although some users have added this functionality. There's not a whole lot of security, although I've met someone who has added this on, and the developer is also working on expanding it in the future. Pretty decent support overall IMO, and he's just moved to the 2.6 kernel - I've yet to upgrade though.

    All in all this system seems to be perfect for HTPCs and I also use it to store backup images of all my workstations. All of my music and DVDs are stored on it and I'm about to build a second one as I need still more storage and have "spare" drives that I've pulled from the existing one as I've upgraded that I'd like to put to good use :-) Check out the user forums on the site, the developer is pretty responsive...
  • by raw-sewage ( 679226 ) on Tuesday June 05, 2007 @09:45AM (#19395551)

    First, forget hardware RAID solutions. While their effectiveness is debatable for commercial and enterprise applications, it's definitely overkill for a home solution (particularly a media server). (Unless of course you have more money than sense.) But Linux RAID (md, multi-disk) is mature, stable, and well-tested. It's portable from one machine to another. It's free. With even modest hardware, it will be plenty fast for a home media server. Don't even bother with those pseudo RAID solutions that are built into your motherboard (or implemented via firmware or a proprietary driver): Linux software RAID and true hardware RAID beat these solutions in just about every conceivable way.

    Now, do you really need RAID? Many people equate RAID and backup. They are not equal. RAID is no substitute for a good backup. In the case of a media library, you do own all the media, right? :) There's your backup. Worst case, you lose the time spent ripping the media. So there's an argument to just use JBOD. However, I do use RAID5 for a bit of safety. If two drives fail simultaneously, I fall back on the media. But if only one drive fails, then I can replace the drive, rebuild the array, and lose very little time. It's quasi-backup. It's just too expensive for an individual to maintain multiple live copies of this much data.
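    The Linux md workflow described above (build the array, lose a drive, swap it, rebuild) is only a handful of commands. A sketch, with placeholder device names - run against real block devices, this destroys their contents:

```shell
# Build a three-disk RAID 5 array from partitions of equal size.
# Device names are examples; substitute your own.
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
  /dev/sdb1 /dev/sdc1 /dev/sdd1

# After a single drive fails: mark it failed, pull it from the
# array, swap the hardware, then add the replacement.
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
mdadm /dev/md0 --add /dev/sde1

# md rebuilds parity onto the new drive; watch the progress.
cat /proc/mdstat
```

    Because the array metadata lives on the member disks rather than on a controller card, the same disks can be moved to another Linux box and reassembled with `mdadm --assemble` - which is the portability argument made above.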

    If I were to build a fileserver for someone right now, this is what I'd use:

    • Case: Lian Li PC-A16B, with an additional hard drive module (I actually have one of these on order right now)
    • Motherboard: Biostar TForce TF7025-M2 (on-board gigabit LAN, high-quality solid capacitors, low-power single chip north- and south-bridge, integrated video)
    • Cheapest AM2 processor (single core is fine for a strictly fileserver)
    • 1 GB RAM (even 512 MB would probably be fine, but RAM is cheap right now!)
    • Seasonic S12-400 power supply
    • 4x Western Digital Caviar SE16 WD5000AAKS 500GB hard drives (500 GB is pretty sweet for price/capacity right now; SilentPCReview is currently recommending the Western Digitals as the coolest/quietest high-capacity drives)
    • A PATA-to-CompactFlash adapter, and a 1 GB or bigger CompactFlash card to use as your "system" disk (i.e. install the OS there).

    I have another post on this thread where I went into more detail about the choice of case. Quick summary: if you care about noise, don't cram your drives close together, or you'll have to use an obscenely loud high-speed fan to keep them cool. If you allow at least 0.5" between each drive, you can keep your drives cool with a low-speed (quiet) fan. That's why I'm buying the Lian Li case mentioned above: room for up to nine drives, with adequate spacing between each.

  • Forget RAID (Score:3, Insightful)

    by Reapman ( 740286 ) on Tuesday June 05, 2007 @11:15AM (#19397015)
    I used to run both mirroring and RAID 5 in the past (not at the same time), but I found it overly complex for simple usage. Plus, it doesn't allow for what happens if the controller card fails or the system goes up in smoke. And once you build a RAID you can't just add a drive to it easily or cheaply (I'm oversimplifying this, I know).

    I find the best approach is to have another computer, or possibly external drives, sitting somewhere, and just make weekly/daily/monthly/whatever rsync copies between them. This lets you recover from user error like accidental deletions, and if the entire system goes down you're covered. Want more space? Add a drive and presto, more space. No special configuration required. No expensive controller cards (or cheap and slow controller cards) required.

    And if you're like me, you have another set of drives stored offsite... but I'm pretty paranoid about such things. =P
