Data Storage

RAID Vs. JBOD Vs. Standard HDDs

Ravengbc writes "I am in the process of planning and buying some hardware to build a media center/media server. While there are still quite a few things I haven't decided on, such as motherboard/processor and Windows XP vs. Linux, right now my debate is about storage. I want as much storage as possible, but redundancy seems important too." Read on for this reader's questions about the tradeoffs among straight HDDs, RAID 5, and JBOD.


At first I was thinking about just putting in a bunch of HDDs. Then I started thinking about doing a RAID array, looking at RAID 5. However, some of the stuff I was initially told about RAID 5, I am now learning is not true. Some of the limitations I'm learning about: every drive in a RAID 5 array is limited to the capacity of the smallest drive in the array. And the way things are looking, even if I gradually replace all of the drives with larger ones, the array will still report the original size. For example, say I have 3x500GB drives in RAID 5 (about 1TB usable after parity) and over time replace all of them with 1TB drives. Instead of showing roughly 2TB usable, it will still show the original 1TB. Is this true? I also considered using JBOD, simply because I can use different-size HDDs and have them all appear as one large drive, but there is no redundancy, which has me leaning away from it. If y'all were building a system for this purpose, how many drives and what size would you use, and would you do some form of RAID, or what?
  • Design for today. (Score:2, Interesting)

    by Joe U ( 443617 ) on Monday June 04, 2007 @08:31PM (#19389705) Homepage Journal
    Design for what you want to use today and in the near future; don't design for a few years from now, or you'll never get it built.

    That being said, mirroring might be the easiest solution to upgrade, but you'll sacrifice speed and space.

    If you want speed and redundancy, you'll have to go with something like RAID 5 or RAID 10 and just have a painful upgrade in the future.
  • by QuesarVII ( 904243 ) on Monday June 04, 2007 @08:47PM (#19389911)
    > By the way RAID 5 is a pain in the ass unless you have physical hotswap capability, which I highly doubt.

    With recent kernels, you can hotswap drives on NVIDIA SATA controllers (a common onboard chipset). I believe support for this was added for several other chipsets in recent kernels too. Then you can swap drives live and rebuild as needed.

    One more important note: if you're using more than about 8 drives (personally I'd make the switch at 6), use RAID 6 instead of 5. You often get read errors from one of your "good" drives during a rebuild after a single drive failure. Having a second parity drive (that's what RAID 6 gives you) solves this problem.
  • by Kaenneth ( 82978 ) on Monday June 04, 2007 @09:00PM (#19390023) Journal
    You can put RAID 5 on varying size disks.

    I had 4 300GB drives, and 2 200GB drives.

    I broke them up into 100GB partitions and laid out the RAID arrays:

    A1 = [D1P1 D2P1 D3P1 D5P1]
    A2 = [D1P2 D2P2 D4P1 D6P1]
    A3 = [D1P3 D3P2 D4P2 D5P2]
    A4 = [D2P3 D3P3 D4P3 D6P2]

    Then I concatenated the arrays together, giving a little less than 1.2TB of space from 1.6TB of drives. If I had just RAIDed the four 300GB drives and mirrored the 200s, I would have had only 1.1TB available, and drive accesses would have been imbalanced.

    I could also grow the array, since it was built as concatenated; later, when I got four 400GB drives, I RAIDed them and tacked them on for 2.4TB total.
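    A rough sketch of what that layout could look like with mdadm plus LVM for the concatenation (device, volume group, and filesystem names here are made up for illustration; the poster may equally well have used md linear mode instead of LVM):

        # one RAID 5 array per group of four 100GB partitions (matching A1 and A2 above)
        mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sde1
        mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdd1 /dev/sdf1
        # ...and so on for the remaining arrays...

        # concatenate the arrays so they appear as one big volume
        pvcreate /dev/md0 /dev/md1
        vgcreate bigvg /dev/md0 /dev/md1
        lvcreate -l 100%FREE -n media bigvg
        mkfs.ext3 /dev/bigvg/media

    Growing it later is just another mdadm --create for the new drives, then pvcreate/vgextend/lvextend and a filesystem resize.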
  • One acronym - ZFS (Score:3, Interesting)

    by GuyverDH ( 232921 ) on Monday June 04, 2007 @09:06PM (#19390081)
    Get a small box, install OpenSolaris on it, configure your JBOD as either raidz or raidz2, and configure either iSCSI or Samba to share the files over a gigabit link.

    Your data should be safe: with raidz2 you can lose up to two drives without data loss.
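    A minimal sketch of the pool setup (disk names are placeholders; on Solaris they would typically be cXtYdZ devices):

        # six-disk raidz2 pool: any two disks can fail without data loss
        zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
        zfs create tank/media
        zpool status tank      # check pool health / resilver progress

    You would then point Samba (or an iSCSI target) at /tank/media.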
  • Re:Linux, RAID 5, md (Score:4, Interesting)

    by ptbarnett ( 159784 ) on Monday June 04, 2007 @09:08PM (#19390111)
    I did exactly this for a new server recently. The only thing I would add is to use RAID 6 instead of RAID 5. That way, you can tolerate 2 drive failures, giving you time to reconstruct the array after the first one fails.

    I have 6 320 GB disks. The /boot partition is RAID 1, mirrored across all 6 (yes, 6) devices, and grub is configured so that I can boot from any one of them. The rest of the partitions are RAID 6, with identical allocations on each disk.

    There's a RAID HOWTO for Linux: it tells you everything you need to know about setting it up.
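    A rough sketch of that layout with mdadm (partition names are illustrative; the old-style 0.90 metadata is only there so legacy GRUB can read the /boot mirror):

        # /boot: RAID 1 mirrored across the first partition of all six disks
        mdadm --create /dev/md0 --level=1 --raid-devices=6 --metadata=0.90 /dev/sd[abcdef]1
        # everything else: RAID 6, so any two disks can fail
        mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[abcdef]2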

  • Re:go for RAID-5 (Score:3, Interesting)

    by bl8n8r ( 649187 ) on Monday June 04, 2007 @09:11PM (#19390139)
    > I would go RAID 5.

    Yep. And if you boot something like Knoppix you can keep the OS on cdrom and storage on the raid device. Samba config goes on a usb key. I have two servers in a corporate environment running software raid 5 and booting knoppix. Updates are nearly impossible, but you can keep the updates on the usb key (tzdata) and untar right over the top of UNIONFS after boot. Either that or just download a fresh Knoppix version (I've gone through 3 versions now). The software raid in Linux is surprisingly stable. I had one drive go bad on one of the servers a couple months ago. mdadm emailed me, I informed the dept. of the downtime, and at the end of the day I replaced the drive and rebuilt the array. Everything worked like the howto said. Very nice.

  • by nxtw ( 866177 ) on Monday June 04, 2007 @09:31PM (#19390415)
    What is the advantage of having a proprietary RAID card? With PCI Express, there is enough bandwidth between the HBAs and the CPU to do software RAID 5/6 or RAIDZ/RAIDZ2 without a problem. Any recent CPU can do RAID 5 parity calculations much faster than the hard drives can supply data; the bottleneck becomes either PCI bandwidth or hard drive speed.

    I haven't seen a consumer level motherboard that has real (hardware) RAID. It's all software RAID with a fancy driver & BIOS support to allow Windows to boot.

    I'm not sure how many people would want to dual-boot a large fileserver. There's no universal filesystem suitable for large volumes across multiple OSes besides ZFS, and that's only if you're using Solaris, FreeBSD, and OS X... with (slow) userland support on Linux. Otherwise you've got ext2/3 on Linux and a myriad of varying-quality implementations for other operating systems.
  • Re:go for RAID-5 (Score:2, Interesting)

    by damacus ( 827187 ) on Monday June 04, 2007 @09:31PM (#19390417) Homepage
    Negative, ghostrider.

    With proper planning and the right skills, it's not hard to build a RAID 5 system that can grow with you. This solution is Linux-based, but can be applied on any system with flexible, abstracted filesystem support. The tricks: (1) a big case, (2) Linux with LVM.

    For the case, check out: http://www.xoxide.com/cooler-master-stacker-case-black.html [xoxide.com]

    It's huge: 12 5.25" slots, and it supports dual power supplies. You can get modules with fans that hold 4 3.5" drives in 3 5.25" slots. That's up to 4x4 drives (or more realistically 3x4 drives, since you have controller units and presumably an optical drive).

    Anyway, you could start with 4x750GB: RAID 5, LVM on top. Later, say you fill it. You could then buy 4x1.25TB (or whatever the latest size is), RAID 5 those new disks, put an LVM PV on the RAID, and join it to the first RAID's volume group. Extend your FS, and there you go. Also, you can now have up to two drives fail at the same time (so long as it's just one dead per RAID 5) and not lose data.

    Say you want to upgrade again? Do the same thing. And keep in mind, so long as the space is there, you could work LVM and filesystem resize magic to remove the oldest set of 4 drives from the logical volume group and replace them with newer drives.
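    Roughly, the "add a second set" step described above might look like this (device and volume-group names are hypothetical):

        # build the new 4-disk RAID 5 and fold it into the existing volume group
        mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[efgh]1
        pvcreate /dev/md1
        vgextend mediavg /dev/md1
        lvextend -l +100%FREE /dev/mediavg/media
        resize2fs /dev/mediavg/media    # grow the filesystem (ext3 here)

    Retiring the oldest set later is a pvmove off its PV, then a vgreduce to drop it from the group.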

    It takes a little Linux skill, but it's extensible and flexible. You of course don't have to go four at a time; I just chose that since the drive cages support four drives apiece.

    Anyway: start with the largest drives you can afford, otherwise you'll be getting back into administrivia earlier than you'd probably prefer. I don't know your storage needs or finances, but most geeks should be able to swing 500GB drives (or 250GB, if one must).

    PS - Don't forget to have at least one spare drive on hand in case one dies. Remember, it has to be the same size as, or larger than, the drives in the RAID. This is especially advisable if all your drives are about the same age.
  • Re:Two words: RAID 0 (Score:1, Interesting)

    by bofkentucky ( 555107 ) <bofkentucky.gmail@com> on Monday June 04, 2007 @09:32PM (#19390431) Homepage Journal
    3.5" drives but still cool floppy raid [macworld.com]
  • The simple answer is (Score:3, Interesting)

    by Mad Quacker ( 3327 ) on Monday June 04, 2007 @09:33PM (#19390445) Homepage
    that there is no good solution I can find. Every solution is flawed for this purpose, including ZFS.

    I have been giving much thought to writing yet another filesystem, which would fill the needs of home/archival/media box users. Essentially it would be like ZFS, except it would improve upon ZFS's dynamic striping. I would have dynamic parity, such that the number of disks in the stripe set and the number of recovery blocks are completely independent per file, a la PAR2. ZFS is still just as bone-headed as older filesystems because the vdevs are still atomic: you make a raidz, and it stays that way. The integrity would be on a per-file basis only, so you could add and remove disks at will, with no dangerous re-striping operations, and with protection and recovery from on-disk corruption. If you lose too many disks, you only lose the information on those disks. A file need not be striped on every disk; only when a particular file has fewer parity blocks than missing blocks, wherever such blocks may be, is the file gone. Files on disk should always be recoverable, regardless of "corrupt superblocks" or the like. This could probably be done using FUSE and some quick and dirty code.

    Why?

    1. We want a lot of storage
    2. We want it expandable, no dangerous restriping or filesystem expansion. There can be NO BACKUPS!
    3. We don't want to wake up in the middle of the night and wonder if the next fsck is the last.
    4. We only care about enough performance to run the media center, i.e. record TV and play movies.
  • by JoeShmoe ( 90109 ) <askjoeshmoe@hotmail.com> on Monday June 04, 2007 @09:52PM (#19390659)
    Infrant (wow, just checked their website and it looks like they were bought by NetGear) created their own version of RAID that specifically addresses the issue of capacity and expansion. It's a nice transitional blend from RAID-1 to RAID-5 and does offer the ability to increase the total capacity (albeit with a lot of drive swapping).

    Buy an Infrant RAID with the two biggest drives you can afford. Let's say two 750GB drives or whatever's on sale that week. It starts out acting as RAID-1 with the drives mirroring, so you have 750GB of "safe" storage. Now you add another 750GB drive: okay, now you have 1500GB of storage with one drive's worth of capacity acting as parity (RAID-5). Add a fourth drive and now you have 2250GB of "safe" storage. Now you come back and replace one of the original 750GB drives with a 1TB drive. Do you get extra capacity? No... not initially. But the drive is fully formatted and integrated as X-RAID. What this means is that eventually, after you have piecemeal or onesie-twosie upgraded all four drives, the X-RAID resizes itself to match the capacity of the new drives with no transfer or downtime. So in theory, if you wanted to upgrade your RAID, you'd buy four 1TB drives, swap them out one at a time (letting each one rebuild the array), and at the end you'll have a 3TB RAID instead of the old 2250GB RAID, with all the data intact.

    http://www.infrant.com/products/products_details.php?name=About%20X-RAID [infrant.com]

    I have three ReadyNAS units and love them to death. They are a little fussy about drive temperatures (I guess that's a good thing, but I may get 40 emails over the course of a day about it, and it's not like I'll drive home from work to turn up the A/C in my house). My only sadness is that Infrant doesn't have a higher-capacity unit than four drives (oh please oh please: eight drives with a RAID-6-style protective hot spare in one nice rack-mountable unit would be my ultimate dream).

    -JoeShmoe
  • by espressojim ( 224775 ) <eris@NOsPam.tarogue.net> on Monday June 04, 2007 @09:52PM (#19390661)
    X-RAID does this as well, if you buy an Infrant box. I have one for my media center. You can put 1 disk in the box and it's just an enclosure; 2 disks means mirroring; 3+ disks means RAID 5. You can upgrade each disk like RAIDCore, and when they are all at the new size, the total RAID size is larger.
  • by Smackintosh ( 1009941 ) on Monday June 04, 2007 @10:18PM (#19390891)
    Don't bother with dedicated RAID hardware controllers. I've seen the Linux md disk driver mentioned, and while that's a viable option, the better option IMO is Solaris x86 using ZFS. Basically you've got an industrial-strength piece of storage software, loaded with features begging to be used in this situation... for free.

    If you're interested in an industrial strength hardware platform to go with the software, go for one of these [sun.com].

    If you're interested in rolling your own, then simply put together an x86 box with as many SATA controllers and buses as you can stuff in a box, and set the disks up as JBOD (just make sure the hardware is Solaris x86 compatible, of course). Create some ZFS pools with whatever RAID level suits your needs, and sit back and enjoy data glory.

    Oh, and simply pick a protocol of your choosing to serve up the data to your clients...iSCSI, SMB, NFS, whatever.
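    Sharing from ZFS is mostly one property per dataset; a small sketch (pool and dataset names are made up):

        zfs create tank/media
        zfs set sharenfs=on tank/media       # export over NFS
        # for SMB, point Samba at the dataset's mountpoint (/tank/media);
        # for block storage, point an iSCSI target at a zvol

    (Later Solaris builds also grew sharesmb/shareiscsi properties, but plain Samba works fine.)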
  • Re:Two words: RAID 0 (Score:3, Interesting)

    by __aaclcg7560 ( 824291 ) on Monday June 04, 2007 @11:14PM (#19391329)
    I used to use 20 floppies to back up a 20MB hard drive. It was so nice to be able to buy a box of 25 floppy disks, keep 20 in the box as one data set and use the other five for whatever. These days I'm backing up to local USB/FireWire drives attached to each machine, and storing copies on a network RAID server.
  • Re:KISS it (Score:3, Interesting)

    by dwater ( 72834 ) on Tuesday June 05, 2007 @12:26AM (#19391941)
    > Another issue, RAID is NOT a backup solution

    It (RAID1) can be, with some caveats. Just 'fail' one of the mirrors and take it off site (same as you would a tape).

    I'm sure it works very nicely in some situations.
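    With Linux software RAID, that rotation is just a fail/remove/re-add cycle; a sketch with a hypothetical mirror member:

        mdadm /dev/md0 --fail /dev/sdc1      # mark one half of the mirror failed
        mdadm /dev/md0 --remove /dev/sdc1    # pull it out and take the disk off site
        # ...later, when the disk comes back...
        mdadm /dev/md0 --add /dev/sdc1       # re-add it; the mirror resyncs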
  • by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Tuesday June 05, 2007 @01:18AM (#19392195) Journal

    > This is what you do: buy 2 drives exactly the same size and mirror them. End of story.

    Until another few years go by and you want to buy more storage. Then you're basically stuck with doubling it, clumsily -- or migrating away and essentially throwing out the old drives.

    RAID 5 is better in the short run. Even with a three disc array, you're getting more storage for your money, and you can always restripe it onto a fourth disc.

    > (If you need more than 500 GB I would highly suggest encoding your porn into a different format than MPEG2)

    It's not all porn, and some of it is high def, in h.264. And I don't even edit videos, I just watch 'em.

    > With computers, the stupidest thing you can do is spend extra money to prepare for your needs for tomorrow.

    That is true. However, I would fill a terabyte easily, and right now, I'm guessing it's cheaper to buy three 500 gig drives than two 1 tb drives.

    > By the way RAID 5 is a pain in the ass unless you have physical hotswap capability, which I highly doubt.

    You highly doubt he's got SATA?

    The one thing I will say is, either have another disk (even a USB thumb drive) to boot off of, or do some sort of RAID1 across them. You almost certainly want software RAID on Linux, and you don't want to try to teach a BIOS to boot off of your array.

  • by asc99c ( 938635 ) on Tuesday June 05, 2007 @08:18AM (#19394623)
    I'm not disagreeing with any point you raise here, but for a media server, this is off-topic. You'll need at most ~20Mb/sec for high bit-rate 1080p videos. Even running multiple streams to different media centers, even the most basic RAID cards have enough performance.
  • Re:go for RAID-5 (Score:3, Interesting)

    by WuphonsReach ( 684551 ) on Tuesday June 05, 2007 @10:49AM (#19396557)
    I actually put those (15) drives inside a SuperMicro 4U rackmount/tower case with the triple 760W PSU (that's a $600 case). But for a backup system or a less critical system where a few hours of downtime doesn't matter, the Lian-Li case is suitable along with a regular PSU. Using a modular-plug PSU with a spare in the closet can be a serviceable approach instead of buying a large, expensive enclosure.

    I was skeptical of the 5:3 backplanes too, but they actually do a pretty good job of cooling. They're aluminum trays with all metal-to-metal contact points, so the heat spreads out a bit. There's also an 80mm fan on the back of each backplane that pulls a small amount of air through the drives. Some (most?) backplanes also come with a temperature sensor you can set to 50/55/60 Celsius, which sounds a buzzer in the unit when it gets too warm.

    As for (3) way RAID1 vs (2) way RAID1 + hot-spare... Well, if I'm going to dedicate the drive to being available for the RAID as a hot-spare, why not get use out of it and make it active? Then, when a disk fails in the RAID1, I'm not depending on a single disk while the hot-spare gets synchronized.

    Which is one of the downsides of software RAID: it works at the partition level, rather than the whole-disk level, so it's more difficult to share hot spares between different types of arrays. OTOH, it provides a lot more flexibility compared to hardware RAID. If you were doing a (4) disk RAID, you could do the first few partitions (for /boot, /, and swap) as RAID 1 across all 4 disks, then use the rest of each disk as a RAID 5 or RAID 10 volume.
