Building a 10 TB Array For Around $1,000
As storage hardware costs continue to plummet, the folks over at Tom's Hardware have decided to throw together their version of the "Über RAID Array." While the array still doesn't stack up against SSDs for access time, a large array is capable of higher throughput via striping. Unfortunately, the amount of work required to assemble a setup like this seems to make it too much trouble for anything but a fun experiment. "Most people probably don't want to install more than a few hard drives into their PC, as it requires a massive case with sufficient ventilation as well as a solid power supply. We don't consider this project to be something enthusiasts should necessarily reproduce. Instead, we set out to analyze what level of storage performance you'd get if you were to spend the same money as on an enthusiast processor, such as a $1,000 Core i7-975 Extreme. For the same cost, you could assemble 12 1 TB Samsung Spinpoint F1 hard drives. Of course, you still need a suitable multi-port controller, which is why we selected Areca's ARC-1680iX-20."
We do this now (Score:5, Interesting)
We needed a solution for backups. Performance is therefore not important, just reliability, storage space, and price.
I reviewed a number of solutions with acronyms like JBOD, with prices that weren't cheap... I ended up going to the local PC shop and getting a fairly generic mobo with 6 SATA ports plus a SATA daughter card (for another 4 ports), running CentOS 5. The price dropped from thousands of dollars to hundreds, and it took me a full workday to get set up.
It's currently got 8 drives in it, cost a little over the thousand quoted in TFA, and the parts were all easy to obtain. It has a script that backs up everything nightly, and we have some external USB HDDs that we use for archival monthly backups.
The drives are all redundant, backups are done automatically, and it works quite well for our needs. It's near zero administration after initial setup.
Re:Why This Article Is Stupid (Score:5, Interesting)
I actually did something similar around a year ago: 12 x 750 GB of disk space, including disks, controllers, system and everything, for around 2,000 dollars. It uses Linux software RAID, but I still get an easy 400 megaBYTES/s from it. I have some pictures here:
http://www.tmm.cx/~hp/new_server [www.tmm.cx]
Tom's Hardware's idea is very late to the party ;)
What for? (Score:3, Interesting)
People who just want massive amounts of data storage for private use simply buy a few NAS units, plug them into a gigabit Ethernet switch or USB hub, and keep the most frequently needed data on the internal HDDs.
On the other side, people who want fast, reliable storage in large amounts buy something like an HP ProLiant, IBM, or similar rack server with redundant PSUs, a battery-backed RAID controller, and 10-15k RPM SAS HDDs (and possibly a tape drive).
The latter setup costs more in the short run, but you spare yourself a lot of headaches (repair service, configuration, downtime, data loss) in the long run, as this hardware is designed for these kinds of tasks.
So who is the article targeted at: wannabe computer leet folks? And why on earth is this article on the Slashdot front page?
Half uber raid setup (Score:2, Interesting)
I've got half the uber setup they talked about and it works great for me: 6 SATA ports on my mobo and another 2 on a card in a PCIe x1 slot (I found a regular PCI card for $10 with two ports). I've got plenty of space with only an additional $30 spent on the card. I use mdadm in a RAID 5 with 6 x 1 TB drives plus one spare, one 300 GB drive for the OS, and I had the rest of the parts lying around. You could assemble the setup I've got for $500 if you have any old system with a large enough case; add a backplane for another $90 if your case is only a mid-size. I have never benchmarked the speeds, but it seems fast. The price was certainly right.
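For anyone wanting to reproduce this, the mdadm side is only a few commands. A sketch, assuming the six data drives appear as /dev/sdb through /dev/sdg and the spare as /dev/sdh (device names will differ on your system, and these commands need root and will destroy data on those disks):

```shell
# Create a 6-drive RAID 5 array with one hot spare (device names assumed)
mdadm --create /dev/md0 --level=5 --raid-devices=6 \
      --spare-devices=1 /dev/sd[b-h]

# Put a filesystem on it and persist the config so it assembles at boot
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm.conf

# Watch the initial sync, and later, keep an eye out for dropped drives
cat /proc/mdstat
```

With the hot spare configured, md starts rebuilding onto it automatically if one of the six active drives fails.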
Some advice (Score:4, Interesting)
For those who are concerned about backing up large amounts of data: call your local data storage company. Yes, they do exist, but I'll skip naming names as I don't like to shill for free.
Simply ask them about external storage devices you can use. They'll often lease you the equipment for a small fee in return for a yearly contract.
For 3 years I simply paid a $30-a-month fee for a weekly backup to DLT tape (no limit on space, and I used a lot back then). They gave me a nice SCSI card and the tape drive, with 10 tapes in a container that I could drop off locally on my way to work. I did encrypted backups on a 2-month (8-week) rotation with a monthly full backup. With the lower-cost LTO drives that came out a few years later, the costs should be minimal. Can't wait till all this FiOS stuff is deployed; I'm hoping to start a data storage facility.
If you have your own backup software and media, don't forget to check with your local bank for TEMPERATURE-CONTROLLED SAFETY DEPOSIT BOXES. Yes, banks do have some locations with temperature-sensitive storage. Some of those vaults can take up to 2,000 degrees for short periods of time without cooking the interior contents.
Where I currently am, NetOps is kind enough to provide me some shelf space in the server room for the external 1 TB backup drive that I store my monthlies on. I have 3 externals, giving me 3 full monthly backups (sans the OS files, since I have the original CDs/DVDs in the bank).
For home-brewed off-site storage I suggest a parent's or sibling's basement, but elevated. I used a sister's unfinished basement, up in the floor joists inside an empty Coleman lunchbox (annual backups).
Nowadays, with my friends also having sick amounts of disk space, we tend to just rsync our system backups to one another in a ring (A -> B -> C -> D -> A), each node syncing its full backups to the next on separate days, during the day when we are not home.
PSEUDO CODE
===========
CHECK IF I AM "IT" IF SO
SSH TO TARGET NODE
CAT CURRENT TIME INTO STARTING.TXT
RSYNC BACKUPS FOLDER TO TARGET
CAT CURRENT TIME INTO FINISHED.TXT
TELL TARGET "TAG, YOU'RE IT"
BACKUPS\ ...
A_BACKUPS\
B_BACKUPS\
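The pseudocode above maps to a short shell script. A sketch; the host names, the ring order, and the "IT" token file are all hypothetical placeholders:

```shell
#!/bin/sh
# Tag-passing backup ring sketch. Each node runs this from cron; only
# the node currently holding the token file actually does anything.
ME=nodeA
RING="nodeA nodeB nodeC nodeD"
TOKEN=/var/backups/IT

# next_node: print the node after $ME in the ring, wrapping around
next_node() {
    set -- $RING
    first=$1
    prev=""
    for n in "$@"; do
        [ "$prev" = "$ME" ] && { echo "$n"; return; }
        prev=$n
    done
    echo "$first"
}

TARGET=$(next_node)

if [ -f "$TOKEN" ]; then                          # am I "it"?
    date > STARTING.TXT
    rsync -az /var/backups/ "$TARGET:/var/backups/${ME}_BACKUPS/"
    date > FINISHED.TXT
    ssh "$TARGET" touch "$TOKEN"                  # tag, you're it
    rm -f "$TOKEN"
fi
```

This assumes passwordless SSH keys between the nodes; the STARTING/FINISHED timestamps give you a crude way to spot a sync that hung or never ran.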
Put each node's backup folder under a quota if needed, to ensure nobody hoards space.
To really crunch the space, you could try storing deltas: keep B's copy of A's backup as a diff, and propagate only the diffs to the subsequent nodes (this might matter most for full-disk backups, where a lot of the data is common between the systems).
Redundant Array of INEXPENSIVE Disks (Score:4, Interesting)
I've done this every 2-3 years, three times now for personal use and a couple of times for work. My first was 7 x 120 GB using two 4-port ATA controllers and software RAID 5. My second was 7 x 400 GB using a HighPoint RocketRAID card. My third is 8 x 750 GB and also uses a HighPoint card.
Lessons learned:
1. Non-RAID-rated drives cause unpredictable and annoying performance issues as the RAID ages and fills with data.
1a. The drives can potentially drop out of the RAID group (necessitating an automated rebuild) if they don't respond for too long.
1b. A single drive with some bad sectors can drag down performance to a crawl.
2. Software RAID is probably faster than hardware RAID for the money: a fast CPU is much cheaper than a very high-performance RAID card, and low-end cards like the HighPoint are likely slower for the money.
3. Software RAID setup is usually more complicated.
4. Compatibility issues between HighPoint cards and motherboards are no fun.
5. For work purposes use RAID approved drives and 3Ware cards or software.
6. Old PCI will max out your performance: 33 MHz * 32 bits = 132 MB/sec, minus overhead, minus passing through the bus a couple of times == ~30 MB/sec in practice.
7. If you go with software RAID you'll need a fat power supply; if you choose a RAID card, most of them support staggered start-up and you won't really need much. Spin-up power is typically 1-2 amps per drive, but once they're running the drives don't take a lot of power.
8. Really cheap cases that hold 8 drives are hard to find. Be careful to get enough mounting brackets, fans, and power Y-adapters online so you don't spend too much on them at your local Fry's.
For my 4th personal RAID I will probably choose RAID 6 and go back to software RAID: likely at least 9 x 1.5 TB if I were to do it today. 1.5 TB drives can be had for $100 on discount, so that's roughly $800 for ~10 TB formatted as RAID 5, or $900 as RAID 6, plus case/CPU/etc.
I'd love to hear others feedback on similar personal use ULTRA CHEAP RAID setups.
Re:Try FreeBSD and ZFS (Score:3, Interesting)
As one of the techs behind the solution linked to on FreeBSD forums, I just wanted to chime in with a "definitely give ZFS a try". Whether you run it on FreeBSD or Solaris (or even Linux via FUSE if you don't really care about the throughput) doesn't really matter.
You don't even need to use RAID controllers like we did (and since the individual drives are configured as "Single Drive" "arrays", none of the actual RAID hardware was used anyway). Just throw some good SATA controllers into PCIe or PCI-X slots and you're set. (We used RAID controllers for the management features and the extra level of cache.)
ZFS takes care of the RAID setup (RAID1, RAID5, RAID6, with built-in striping across arrays/vdevs), detects data corruption via end-to-end checksumming, can alert you to when a drive has issues (and tell you which one), gives you in-filesystems snapshots, filesystem compression, and a whole bunch more.
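For concreteness, a double-parity pool with most of those features turned on is only a handful of commands. A sketch, assuming six drives show up as da0 through da5 (FreeBSD-style device names, purely illustrative):

```shell
# Double-parity (RAID6-like) pool, striped across six drives
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Per-filesystem compression and an in-filesystem snapshot
zfs create tank/backups
zfs set compression=on tank/backups
zfs snapshot tank/backups@nightly

# Checksum errors are reported per device, so you know which drive is bad
zpool status tank
```

Adding a second raidz2 vdev to the same pool later stripes across both, which is the built-in striping across arrays/vdevs mentioned above.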
Add in rsync for network transfers (the built-in snapshot send/receive feature still needs a bit of work) and you have a very nice backup setup, even across redundant servers.
Add iSCSI and you have a very nice SAN setup.
Add Samba or NFS and you have a very nice NAS setup.
There's even support for thin-provisioning (create a volume that's 500 GB in size, but only give it 100 GB of actual disk space) making it ideal for virtualisation setups.
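As a sketch, thin provisioning is just one flag at volume-creation time (the pool and volume names here are assumptions):

```shell
# -V creates a block volume, -s makes it sparse: it advertises 500 GB
# to the consumer, but only blocks actually written consume pool space
zfs create -s -V 500G tank/vm-disk0

# Compare advertised size against space actually used
zfs get volsize,referenced tank/vm-disk0
```

Such a volume is what you would then export over iSCSI or hand to a virtual machine as its disk.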
And you can "stack" storage boxes to create a virtually infinite storage setup (create a pair of storage servers using disks and ZFS, export a single iSCSI volume -- then use those iSCSI exports on a third server to create a storage pool -- when you need more storage, just add another pair of storage servers).
You can also replace the drives with larger drives and get (almost) instant access to the extra space.
Finally, since it's a copy-on-write, transactional filesystem, you don't lose any write speed, since you always write to new blocks; this also eliminates the "RAID5/6 write hole".
Once you start using ZFS and pooled storage, you'll find the whole Linux storage stack (disks -> md -> lvm pv -> lvm vg -> lvm lv -> filesystem) to be unbelievably unwieldy and wonder how you ever managed TBs of disk before.
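To illustrate the difference, here is the same job done both ways (all device, volume-group, and pool names are placeholders):

```shell
# Linux stack: four layers to build before you have a mountable filesystem
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
pvcreate /dev/md0
vgcreate vg_storage /dev/md0
lvcreate -L 2T -n data vg_storage
mkfs.ext4 /dev/vg_storage/data

# ZFS: pool, redundancy, and a mounted filesystem in a single command
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
```

The Linux chain is more flexible layer by layer, but every layer is one more thing to configure, monitor, and grow separately.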
Re:How does the home user back this up? (Score:3, Interesting)
Or how would a photographer archive this, so that your kids could show your pictures to your grandkids, the way you can go through a shoebox full of negatives and still get good quality?
1st, you'll want to partition your data: this I can lose (the TV shows you recorded on your DVR), that I want to keep forever (photos & movies of the kids, the 1st house), these I want to protect in case of disaster (taxes, resumes, scans of bills, current work projects).
Don't bother with the 1st category. Archive the "forever" data to multiple media, and do regular backups of the last.
Hopefully, backups are the smallest chunk. Back up often, but you don't need to keep more than 2-3 copies. If you want to retrieve something from x/y/zz, that's an archive, not a backup.
Archives should be made as multiple copies (DVDs?) kept in diverse locations. Not on magnetic media, unless you redo it periodically.
Offline media (tapes, optical, printouts) haven't kept up with online media capacity. *sigh*