RAID Vs. JBOD Vs. Standard HDDs
Ravengbc writes "I am in the process of planning and buying some hardware to build a media center/media server. While there are still quite a few things I haven't decided on, such as motherboard/processor and Windows XP vs. Linux, right now my debate is about storage. I want as much storage as possible, but redundancy seems to be important too." Read on for this reader's questions about the tradeoffs among straight HDDs, RAID 5, and JBOD.
At first I was thinking about just putting in a bunch of HDDs. Then I started thinking about doing a RAID array, looking at RAID 5. However, some of the stuff I was initially told about RAID 5, I am now learning is not true. Some of the limitations I'm learning about: RAID 5 drives are limited to the size of the smallest drive in the array. And the way things are looking, even if I gradually replace all of the drives with larger ones, the array will still read the original size. For example, say I have 3x500GB drives in RAID 5 and over time replace all of them with 1TB drives. Instead of reading one big 3TB drive, it will still read 1.5TB. Is this true? I also considered using JBOD simply because I can use different size HDDs and have them all appear to be one large one, but there is no redundancy with this, which has me leaning away from it. If y'all were building a system for this purpose, how many drives and what size drives would you use, and would you do some form of RAID, or what?
Do some research first? (Score:5, Informative)
I would use (and do use) linux software raid (Score:2, Informative)
RAID (Score:2, Informative)
Personally, I would just buy a Buffalo TeraStation or Netgear StorageStation and let that do the hard work. Just plug it into your network, then share the data. Just have a single 500GB drive on your media centre for recording TV, and then anything you want to keep just copy over to your NAS box.
It depends (Score:2, Informative)
Linux, raid5, LVM on top, can use extra capacity (Score:5, Informative)
If you buy 1TB drives further down the road, here's what you do- With each disk, create a partition identical in size to the partitions on the smaller disks, then allocate the rest of the space to a second partition.
Join the first partition of the disk to the existing RAID set. Let it rebuild. Swap the next drive, etc. Then once you've done this switcharoo to all the drives, create another RAID set using the 2nd partition on your new disks.
Take that second RAID set, add it to your LVM volume group, and grow the logical volume so the extra capacity actually becomes usable.
Just be sure that any replacement drives you have to buy... you must partition them out similarly. I'd recommend pulling back on the partition sizes a bit, maybe 5%, to account for any size differences between the drives you bought right now and some replacement drives you may purchase later on which might be slightly lower in capacity (different drive manufacturers often have differing exact capacities).
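A rough sketch of the commands that procedure boils down to (device names, the volume group "vg0", and the logical volume "media" are all hypothetical - adjust to your own layout):

  # After swapping in a bigger drive and partitioning it (sdb1 sized like the
  # old members, sdb2 holding the leftover space), rejoin the existing array:
  mdadm /dev/md0 --add /dev/sdb1
  # ...let it resync, repeat for each drive, then build a second array from
  # the leftover partitions:
  mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb2 /dev/sdc2 /dev/sdd2
  # Hand the new array to LVM and grow the filesystem:
  pvcreate /dev/md1
  vgextend vg0 /dev/md1
  lvextend -l +100%FREE /dev/vg0/media
  resize2fs /dev/vg0/media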
depends on the raid implementation (and level?) (Score:2, Informative)
The "Raid 5 can't do what I heard" isn't quite what's going on, again, depending on the implementation. Most raid cards I've used allow you to add drives to the array and expand the array to the new drive(s) without downing the server or requiring a rebuild.
So RTFM for the card you're going to use.
Linux, RAID 5, md (Score:5, Informative)
Go Linux. The Linux md driver allows you to control how you RAID- over whole disks or over partitions. There are advantages. We will discuss.
First, don't get suckered into a hardware RAID card. Most are *NOT* really hardware cards- they rely on a software driver to do the RAID5 calculations on your CPU. Software RAID is JUST AS FAST. Unless you blow the big bucks for a card with a real dedicated ASIC to do the work, you're fooling yourself.
Now, you want to go Linux. By using the md driver, you can stripe over PARTITIONS, and not the whole disk. By doing this, you can get MAXIMUM storage capacity out of your disks, even in upgrades.
Say you have 3 500GB disks. You create a 1TB array, with one disk's worth of capacity going to parity. On each of these disks is a single partition, each the size of the drive. Now, you want to upgrade? SURE! Add 3 more disks. Create three partitions of EQUAL size to the original, and tack them on to the first array. Then, with the additional space, you can create a WHOLE NEW array, and now you have two separate RAID5's, each redundant, each fully using your space.
Another advantage with MD is flexibility. In my setup, I use 5x 250 drives right now. On each is a 245GB partition, and a 5GB partition. I use RAID1 over the 5's, and RAID5 over the rest. Why? Because each drive is now independently bootable! Plus, I can run the array off two disks, upgrade the file system on the other 3, and if there's a problem, I can always revert to the original file system. So much flexibility, it's not even funny.
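For anyone who wants to copy that layout, a minimal sketch with mdadm (drive names hypothetical):

  # RAID1 across the small partitions (bootable from any disk),
  # RAID5 across the big ones for data:
  mdadm --create /dev/md0 --level=1 --raid-devices=5 /dev/sd[abcde]1
  mdadm --create /dev/md1 --level=5 --raid-devices=5 /dev/sd[abcde]2
  # Record the layout so the arrays assemble at boot:
  mdadm --detail --scan >> /etc/mdadm.conf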
I recommend using plain old SATA controllers with SATA drives, and just stick with the md device. For increased performance, watch your motherboard selection. You could grab a server-oriented board, with dedicated PCI buses for the slots, and split the drives over the cards. Or, you can get a multiproc rig going, and assign processor affinity to the IRQs- one card calls proc 1 for interrupts, the other card calls proc 0. If you have multiple buses, then performance is maximized.
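If you do chase the interrupt-affinity idea, the knobs live in /proc (the IRQ numbers below are made up; check /proc/interrupts on your own box):

  grep -i sata /proc/interrupts          # find which IRQs your controllers use
  echo 1 > /proc/irq/24/smp_affinity     # pin one controller's IRQ to CPU0
  echo 2 > /proc/irq/25/smp_affinity     # pin the other to CPU1 (values are CPU bitmasks)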
The last benefit? Portability. If your hardware suffers a failure, then your software RAID can move to any other system. Using ANY hardware RAID setup will require you to use the EXACT same card to recover your data. Even the firmware version may have to match, or else your data can be kissed goodbye.
Windows? Forget about it.
Good luck!
Re:Duh (Score:3, Informative)
Bad assumption (Score:2, Informative)
LVM is your friend (Score:3, Informative)
Not at all, these days one does have better options than rebuilding a blank array. Read up on LVM, it is powerful stuff.
Replace the drives in the array one at a time, allowing time for the array to rebuild. Then you can grow the volume to make use of the extra capacity. Yes it will require some planning and will probably take a week to slowly merge in the new set of drives, but it sure beats a bare metal restore because you can still be recording and watching video while all this rebuilding and resizing is happening.
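As a rough sketch of what that final grow step looks like on Linux (md plus LVM assumed; device and volume names here are hypothetical):

  # Once the last, larger drive has finished resyncing:
  mdadm --grow /dev/md0 --size=max       # let md use the full size of the new members
  pvresize /dev/md0                      # let LVM see the bigger physical volume
  lvextend -l +100%FREE /dev/vg0/media   # grow the logical volume...
  resize2fs /dev/vg0/media               # ...and the filesystem on top, all online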
Don't really know how much of the above applies to Windows; I haven't seriously used it in a decade, so someone else will have to supply details on its volume management flexibility.
Infrant (Score:1, Informative)
Re:go for RAID-5 (Score:2, Informative)
Hardware vs Software RAID (Score:1, Informative)
Also, since you mentioned that you haven't chosen an OS, I believe MS will be releasing Windows Home Server this fall. It's based off the 2003 Server system, so it's well proven and has no problem with drivers or any of the issues Vista is currently having. Also, since it's built on top of 2003, there's already lots of industry support out there. The UI they've grafted onto it is very friendly, and the backup system it has is awesome (incl. full-disk restore using only a boot CD!). It's actually being designed to fill more or less exactly the role you seem to be seeking a solution for.
I know I'll get blasted for having suggested an MS sol'n on Slashdot.
Anyway, for your system, I'd make a software RAID1 partition for your OS (whichever it is) and install a hardware RAID5 solution for your data. Since it's hardware RAID5, you can break it up however you like, and still have redundancy AND a minimal loss of space. You could consider RAID6 for increased safety, but I haven't seen a hardware RAID6 controller out there anywhere yet... (RAID 6 is like RAID 5, but has 2 drives' worth of parity, thus enabling up to two drives in the array to fail whilst it remains operational).
-AC
mdadm (Score:3, Informative)
If it is a Linux server, you're already using mdadm, which has a monitoring daemon with e-mail notification.
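For example, something along these lines (the address is obviously a placeholder) will get you mail when a drive drops out of an array:

  # Run the monitor daemon against every array in mdadm.conf and mail on events:
  mdadm --monitor --scan --daemonise --mail=you@example.com --delay=1800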
Re:I would use (and do use) linux software raid (Score:2, Informative)
If you're going to dump hundreds of $$ into hard drives, cough up a bit more for a HW raid controller!
Says the person who's never done any real benchmarking of these things...
Unless you buy the right raid card, you'll likely get worse performance from it than you would from software raid. I'm talking the name brands too - LSI, Adaptec, 3Ware. They all suck. Of the 3, 3ware is the best. On a LSI SAS raid controller I recently tested, I only got a 30% I/O speedup going from a single drive to a 6 drive raid 5. That's pathetic! Software raid at least gave me 140% improvement.
If you really want good numbers, get an Areca [areca.com.tw] controller. They perform very well and have drivers right in the linux kernel (2.6.19+).
The older RocketRaid cards (Highpoint) performed fairly well, but were not really hardware raid - they were "hardware assisted" raid. Most of the work was really software raid in the driver. As long as you had a fairly fast cpu, you got great numbers. I believe the newer ones are true hardware raid now, but I haven't benchmarked them yet as they only had up to 8 port controllers in PCIe last I checked.
Re:Linux, RAID 5, md (Score:4, Informative)
I recommend Gentoo to do this with. Other distros don't include the latest mdadm tools required to manage and migrate RAID5 md devices. Ubuntu is catching up, I believe.
Here are some places to start:
http://gentoo-wiki.com/HOWTO_Gentoo_Install_on_So
http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2
http://linas.org/linux/Software-RAID/Software-RAI
http://linas.org/linux/raid.html [linas.org]
http://evms.sourceforge.net/ [sourceforge.net]
http://www.tldp.org/HOWTO/Software-RAID-HOWTO.htm
Re:Linux, RAID 5, md (Score:1, Informative)
Re:go for RAID-5 (Score:3, Informative)
Why? Linux software RAID (md) does a fine job with excellent performance, assuming you are not saturating the PCI bus (solution: use PCI Express or PCI-X instead). With sufficient bus bandwidth, software RAID outperforms the majority of "hardware-assisted" RAID controllers (RocketRAID and the like) and most hardware RAID controllers.
You can do this with md without having to deal with quirky RAID hardware that leaves you in the cold if you have a controller failure.
Re:RAID (Score:3, Informative)
root @ backup (/usr/src/linux) cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
md0 : active raid5 hdj1[3] hdi1[4] hdg1[2] hdf1[1] hde1[0]
468880896 blocks level 5, 4k chunk, algorithm 2 [5/5] [UUUUU]
unused devices: <none>
[U] is Up. [_] is Down.
Re:Do some research first? (Score:3, Informative)
While Solaris might be a dirty word among the Slashdot crowd, if all the OP needs is a way to store a bunch of files, ZFS is an excellent solution. Check out http://www.opensolaris.org/os/community/zfs/whati
Then, if you're still not convinced how appropriate ZFS might be for a somewhat clueless user, read about how it can save your ass from flaky hardware and data corruption: http://blogs.sun.com/elowe/entry/zfs_saves_the_da
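To give a sense of how little ceremony is involved, a redundant raidz pool is roughly one command (the disk names are Solaris-style placeholders):

  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0   # builds the pool and mounts it at /tank
  zpool status tank                              # health and self-healing status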
Re:Is Google broken today? (Score:2, Informative)
"Why does stupid shit like this keep getting posted to the front page?"
Usually it's because n00bs are born every day...that is the effect of life going on.
Remember, you were a n00b at one time. Did you get help from greybeards when you were a n00b? If you did, then back off the unhelpful attitude!
Elitist wannabe asshats like you are a dime a dozen, and no help to the community at large...either get over your attitude, or crawl back in your hole/mom's basement.
to use a fscked up analogy... if it were all your way, all we would have in our Armed Forces would be Generals and Admirals... there would be no NCOs or privates, Airmen, or Seamen.
To use the typical car analogy:
"Running by February, 1893 and ready for road trials by September, 1893 the car built by Charles and Frank Duryea, brothers, was the first gasoline powered car in America. The first run on public roads was made on September 21, 1893 in Springfield, MA. They had purchased a used horse drawn buggy for $70 and installed a 4 HP, single cylinder gasoline engine. The car (buggy) had a friction transmission, spray carburetor and low tension ignition." from:(http://www.ausbcomp.com/~bbott/cars/carhist
Back on topic... all you had to do was either not post, or suggest he stick with RAID 5, 6, or 10 and Google them to make a decision.
And yes, I do not apologise for calling you an asshat...if the shoe fits....! (yes, I love mixing metaphors!- so sue me!)
Re:Get what you need for *NOW* not for later (Score:3, Informative)
Re:go for RAID-5 (Score:4, Informative)
I concur. You would be crazy not to have redundancy--without it, one disk failure will pull down a good chunk of your data.
As for growing the array: from what I understand (and I have not tested this), you can grow the size of the array if you replace all the disks (one at a time, with a resync each time, obviously). Also, as of Linux 2.6.17, you can add a disk to the RAID and grow it that way.
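The add-a-disk grow mentioned above is roughly this with mdadm (device names hypothetical, and do have a backup first - reshapes take a long time):

  # Add the new disk as a spare, then reshape the RAID5 onto it:
  mdadm /dev/md0 --add /dev/sde1
  mdadm --grow /dev/md0 --raid-devices=4
  # Once the reshape finishes, grow the filesystem that sits on top:
  resize2fs /dev/md0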
I would caution against making your array very large (either in disks or in space). Consider the case of a 3 disk RAID array where each disk has a probability of failing in any given second of 10^-10 (you would do this analysis using the reconstruction time of your array as the time window). The probability of both drives in a given pair not failing is (1-10^-10)^2. The total number of 2-drive pairs in a 3 disk RAID is 3, thus the probability of the array not failing in any given second is (1-10^-10)^6=0.99999999940. Over a period of five years, the overall probability of no two drives failing is (1-10^-10)^(6*157680000)=0.909729. If you increase the array size to 10 disks, the overall probability of no two drives failing drops to 0.241927 (the number of 2-drive combinations is 45, so you replace the 3 with 45).
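If you want to play with those numbers yourself, it's a one-liner in awk (exp() is the usual approximation of (1-p)^n for tiny p; same 10^-10 per-second figure and five-year window as above):

  # pairs=3 for a 3-disk array (~0.9097); set pairs=45 for 10 disks (~0.2419):
  awk 'BEGIN { p = 1e-10; secs = 5*365*24*3600; pairs = 3;
               printf "P(no two drives fail) = %f\n", exp(-2 * pairs * secs * p) }'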
Re:go for RAID-5 (Score:3, Informative)
The 4:3 device modules are cute, but a pain to deal with when you want to replace a drive (you have to rip apart 4 sets of cables). I'm not entirely satisfied with the 4:3 modules that I have; I prefer the older 3:2 units with an 80mm fan. Stick to only putting 2 drives in those old 3:2 units and you get superior airflow, because there's no strange grillwork between the intake fan and the hard drives. You might get the same out of those 4:3 units if you install a drive in the middle upside down to create a decent gap.
But if you want maximum storage density, go look at the SuperMicro (or others) SATA 5:3 backplanes. Hot-swap (assuming your chipset supports it) SATA trays that fit (5) drives into (3) 5.25" bays. Merge that with a 4U rackcase or one of the (9) 5.25" bay cases from Lian Li (PC A16) and you have (15) SATA slots to play with. Or you could do (3) 5:3 and (1) 3:2 in that CM Stacker case for a total of (18) drives. (There are 3 types of SATA hotswap backplanes, 5:3 for cases with clean sides in the 5.25" bays, 4:3 for cases with guide-rails or tabs in between the 5.25" bays, and 3:2 units. Some cases have metal tabs designed to guide 5.25" devices into place, they'll interfere with 5:3 backplanes.)
My preferred setup for (15) drives? A (3) active disk RAID1 for the OS and misc partitions, then either a (10) disk RAID10 w/ (2) hot spares or a pair of RAID6 volumes with (2) hot spares. RAID5 is too risky once you get into the 1TB+ range. Rebuilding onto a hot-spare takes too long and leaves you vulnerable to a 2nd drive failure during the rebuild window. RAID6 is at least better in that regard, but with RAID10, rebuild times are static (they're based on the time required to rebuild a single RAID1 pair) no matter the # of spindles in the array.
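In mdadm terms, the RAID10-with-hot-spares option would look something like this (twelve drives, names hypothetical):

  # 10 active disks in RAID10 plus 2 hot spares that get pulled in automatically:
  mdadm --create /dev/md1 --level=10 --raid-devices=10 --spare-devices=2 /dev/sd[b-m]1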
Re:KISS it (Score:1, Informative)
Another issue: RAID is NOT a backup solution. It is only for speed and/or availability; it only prevents you from losing data from a failed HD. Do you really need the availability that RAID offers for that ripped DVD collection? I doubt it. There are many more ways to lose data than an HD failure: a slipped mouse, FS corruption, a goofy RAID controller, an accidental delete, etc. When I upgrade my file server or add disks, it is simple: mount my disks, point my new smb.conf to those mounts, and I am done. Imagine using an onboard SATA controller with RAID and having to upgrade because your motherboard failed. You better hope that new SATA chipset is compatible or your data is lost. How's that for availability?
linux lets you resize the array (Score:1, Informative)
so you can start with 3x500GB (1TB usable) and replace them with 3x1TB (2TB usable)
however, seriously consider RAID 6 instead of RAID 5. It eats up an extra drive (the minimum usable array is 4 drives, giving you the capacity of 2 drives), but with today's large drives it can take long enough to rebuild the array that you run a very real risk of a second drive failing while you are rebuilding from the first one.
currently md on Linux will not let you switch on the fly from RAID5 to RAID6
David Lang
Re:SCRUB your arrays! (Score:4, Informative)
Re:Linux, RAID 5, md (Score:3, Informative)
What's rather humorous about this statement is that ultimately, all firewalls are implemented in software. What is firmware, again?
There are three different implementations of RAID on PC class hardware- software RAID, fake-hardware RAID, and hardware RAID.
When I said that software is just as fast, I'm comparing it to fake RAID cards, cost under ~$200 US. These cards rely on drivers to get their work done, and rely on the CPU just as much as software RAID. The only benefit they bring to the table is the ability to have the RAID exist in a pre-OS environment- you can boot off the RAID no matter what OS.
Ultimately, the advantages (a dedicated pre-OS BIOS) do not outweigh the disadvantages (extra cost, no speed increase, buggy drivers, et al.).
The latest versions of software RAID support a snapshotting feature which makes it impossible for the array to become out-of-sync. Batteries are only required when you are caching information from the disk onto the controller for performance reasons. At this point, you're talking about a REAL hardware RAID card, which is most likely doing parity calculations on a dedicated processor. Cost is now over $200 for a *GOOD* 4-port card.
I bought an xSeries IBM chassis (two, in fact), hacked out the SCSI backplane, and added 5 250GB drives and 2 SATA controllers. Total cost: $800. I still have room for 5 more drives. I also have two processors and 3GB of RAM. Cost-effective? You betcha. Hardware RAID? Nope. And it's designed to handle the heat.
Sounds like they may not have thought through their implementation. Cost-effective means maximizing space, maximizing life, and maximizing versatility. Cost isn't just the initial outlay- it's the life of the implementation. I'll take my 4TB array for $1700 US over anything custom. Did I mention that my quad-Xeon 700MHz can stream 1080p?
Sure, it's expensive in electricity. But over the life of the server, I'll get more use out of it than just storage. I'll have excess processor capacity when writes are not occurring. And I have a vendor-independent implementation that can be moved to any system, any time, for any purpose, including data recovery. Using fake RAID or hardware RAID will just encumber that, and add unnecessary cost.
Re:KISS it (Score:5, Informative)
3ware [3ware.com] made some pretty good cards.
unRAID (Score:3, Informative)
I can't believe no one has suggested an unRAID [lime-technology.com] server. You get redundancy, storage that can grow by just adding another drive, low power consumption, affordability, and the ability to telnet in. (Plus it runs Linux!) I really like this solution since the data isn't spread out over a bunch of disks in a way that only the RAID controller can understand. Instead it's just a bunch of files on a bunch of disks, with an extra parity drive for reliability.
If a drive goes down, you can just pop a new one in and recover the lost data from the parity drive. If two drives simultaneously fail (unlikely), you lose the data on the drives that failed. Compare that to the nightmare if your RAID controller fails.
Here's my unRAID server [coppit.org], built for $400 plus the drives. I love being able to do backups by just running rsync. Once the author gets sshd built into the system, I can even do automatic incremental snapshot backups using rsync --link-dest.
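For the curious, the --link-dest trick looks roughly like this (paths and dates are made up): each run creates a new dated snapshot directory, and unchanged files become hard links into the previous snapshot instead of full copies:

  rsync -a --delete --link-dest=/mnt/unraid/backup/2007-06-01 \
        /home/ /mnt/unraid/backup/2007-06-02/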
Re:Two words: RAID 0 (Score:3, Informative)
If you use fake RAID [linux-ata.org], then you basically have no guarantee that the on disk format will be the same from one motherboard to the next, even within a particular vendor.
I would suggest not using fake RAID if you have any intentions of moving the disks to a new system (or really, at all... the only potential plus is Windows compatibility). Fake RAID uses a vendor-specific proprietary on disk format, and is typically slower than both software RAID, and hardware RAID.
RAID in any form has minimal impact on disk seeks, so if you're reading lots of tiny files, you'll notice minimal (if any) performance gain. Where RAID really shines is reading or writing large sequential files, where your performance increases more or less linearly with the number of disks in the array (although this depends on what RAID level you use).
RAID 0 drastically increases your chance of data loss. For example, with 8 100G disks in RAID 0, if you lose any single disk, you lose all 800G of data stored in that RAID 0 partition. I wouldn't suggest RAID 0 for anything you can't completely recreate painlessly, unless you don't care about losing the data (and if you don't, then why do you have it in the first place?).
Re:Two words: RAID 0 (Score:3, Informative)
3ware makes fine hardware RAID cards that have between 2 and 16 or more SATA ports on them. Of course the costs go up with more ports. Adaptec has some also, but I had problems with the last one I used, and according to the Internet I wasn't alone. If I remember right, with a hardware RAID there is information about the RAID configuration and all the drives in the boot sector of the drives, so you should be able to rebuild the array from this. And the newer RAID cards have hot-swap spares, where you have an extra drive or two sitting there waiting for a drive to fail. The firmware then rebuilds the data onto one of the spare drives and you go on uninterrupted. You can then use a program to turn the failed drive off and replace it, all without rebooting. It has saved my ass a couple of times.
Something people don't look at when building a RAID system is the power demands. A normal power supply has only one 12 volt rail that feeds both the main board and the drives. You can get some really weird things happening when you run out of power, like drives reporting as dead but still working. Some supplies split that into two rails, but this could still leave you with too little power for your application. Spend some money and get a good power supply with separate 12 volt feeds, at least two of them if not three. You will see this on the power output chart for the power supply.
Re:Two words: RAID 0 (Score:3, Informative)
Or... get an Intel motherboard with the Matrix RAID chipset; it'll allow you to add more drives to an array or increase its size when you increase the storage space.
However... not using RAID would make it more flexible for you; whether it's worth it is up to you. Personally I'd go for RAID 5 (I have done the same at home).
Re:KISS it (Score:5, Informative)
Eh?
LVM [wikipedia.org] and RAID [wikipedia.org] are orthogonal solutions, and don't do the same thing. LVM will let you make a single larger partition out of a number of real partitions, and before anyone says that's the same as RAID0, I should point out that RAID0 is not a real RAID level (as it has no redundancy). The circumstances for failure for LVM and RAID0 (JBOD too) are basically the same - if one part fails, you will quite possibly lose the whole lot.
As for hardware RAID, I would not necessarily recommend that either, as it moves the single point of failure without resolving the problem. Replacing a broken controller with something compatible some years down the road can prove impossible, especially with onboard controllers. There's also the fact that a number of RAID controller cards are buggy, and others do most of the work in software drivers anyway! Performance is also no longer a reason to use a pure hardware RAID solution, especially now that multi-core machines are available cheaply.
Hot-swap is still something that requires a good hardware solution, but that's about it. Good (and well supported) RAID products cost good money too, and for most of us it's just not worth doing - better to use software RAID, buy more RAM, and pocket the rest.
-- Steve
Re:KISS it (Score:3, Informative)
Re:KISS it (Score:2, Informative)
What the fuck are you smoking? Software RAID vs hardware RAID isn't about CPU performance. It's about I/O. It's about reliability. You will never reach the same levels of I/O using onboard or expansion controllers as you will using a real hardware controller. EVER. Software RAID will never be able to touch the performance of ASICs for calculating parity. Software RAID can't utilize a BBU to protect the I/O stream and save it to disk upon power restoration. Software RAID can't benefit from the multi-channel read performance available with RAID 1, 0+1, 1+0, 5, 10, 30 or 50.
Your comment about buggy RAID controllers is pure bullshit. Name one buggy hardware RAID controller. I've used hardware RAID controllers from Adaptec, AMI, LSI, and 3Ware on 4 different platforms. I've even used fake hardware RAID controllers from Promise and Highpoint (I've got a Highpoint 404 going real cheap). The real hardware RAID controllers have always worked flawlessly. I have yet to encounter a buggy hardware RAID controller, even back when 3Ware support wasn't yet in the kernel and the driver was a canned offering from 3Ware. The fake hardware RAID controllers from Promise and Highpoint suck. Like you said, the driver does all the work on the non-hardware RAID controllers. The driver does nothing on the real hardware RAID controllers.
Re:KISS it (Score:4, Informative)
Simple: unRAID and use your JBOD (Score:3, Informative)
Some limitations: the parity drive must be as big as or bigger than all the others. Each drive is a separate mount point unless you use a funky sort of shared folder feature. The system doesn't have as high a transfer speed as a RAID would; however, it streams video for me to an XBMC XBOX1 just fine. It doesn't have a super robust system to notify you of failed drives out of the box, although some users have added this functionality. Not a whole lot of security either, although I've met someone who has added this on, and the developer is also working on expanding it in the future. Pretty decent support overall IMO, and he's just moved to the 2.6 kernel - I've yet to upgrade though.
All in all this system seems to be perfect for HTPCs, and I also use it to store backup images of all my workstations. All of my music and DVDs are stored on it, and I'm about to build a second one, as I need still more storage and have "spare" drives, pulled from the existing one as I've upgraded, that I'd like to put to good use.
AMEN! (Score:3, Informative)
I 100% agree with you on unRAID - it rox! I'm not using any sort of background Linux stuff to do my backups but I do use an unattended backup package that works just fine with unRAID (Acronis). The speed could be better (I'm not yet on the 2.6 based version) but it keeps up stutter free with my XBMC box so it's fast enough for me.
Software RAID5 or Manual Redundancy (Score:3, Informative)
First, forget hardware RAID solutions. While their value is debatable even for commercial and enterprise applications, they're definitely overkill for a home solution (particularly a media server) - unless of course you have more money than sense. Linux RAID (md, multi-disk) is mature, stable, and well-tested. It's portable from one machine to another. It's free. With even modest hardware, it will be plenty fast for a home media server. Don't even bother with those pseudo RAID solutions that are built into your motherboard (or implemented via firmware or a proprietary driver): Linux software RAID and true hardware RAID beat them in just about every conceivable way.
Now, do you really need RAID? Many people equate RAID and backup. They are not equal. RAID is no substitute for a good backup. In the case of a media library, you do own all the media, right? :) There's your backup. Worst case, you lose the time spent ripping the media. So there's an argument to just use JBOD. However, I do use RAID5 for a bit of safety. If two drives fail simultaneously, I fall back on the media. But if only one drive fails, then I can replace the drive, rebuild the array, and lose very little time. It's quasi-backup. It's just too expensive for an individual to maintain multiple live copies of this much data.
If I were to build a fileserver for someone right now, this is what I'd use:
I have another post on this thread where I went into more detail about the choice of case. Quick summary: if you care about noise, don't cram your drives close together, or you'll have to use an obscenely loud high-speed fan to keep them cool. If you allow at least 0.5" between each drive, you can keep your drives cool with a low-speed (quiet) fan. That's why I'm buying the Lian Li case mentioned above: room for up to nine drives, with adequate spacing between each.
Re:KISS it (Score:3, Informative)
By the way, does anyone have recommendations on 4-port SATA controllers?
Re:KISS it (Score:3, Informative)
Re:Linux, RAID 5, md (Score:4, Informative)
Higher RAID levels are not always THE ultimate solution. Depending on your needs, you might just go for a non-redundant RAID level (RAID0) for large media storage with nightly snapshotting to your backup device. Usually it's not all that bad to lose a single day's worth of data, and if it is for these applications, use RAID10 or so. I do it as follows: get media onto RAID0 (HD streams are large and fast on 10k drives), then as soon as the job is done, copy it to the storage area, which is RAID5 on cheap SATA storage, and then do a nightly copy of the data I want to keep to an offline backup station (HW RAID5 with ATA100).