HOWTO: 0.5TB RAID on a Budget

Compu486 writes "Inventgeek.com has a new how-to article titled 'The Poor Mans Raid Array.' The article details how to make a modular .5 terabyte RAID 5 array for under $250 (USD), and it all runs on the Mandriva flavor of Linux." Drive prices being what they are, this seems cooler than it is practical. Update: 06/25 23:31 GMT by T : If that's not enough storage, Yeechang Lee writes "Let me show off the 2.8TB Linux-powered RAID 5 array I built for home use a few months ago. I provide lots of details on how I did it, what I used, and the results. The Usenet thread has good followup posts from others, too."
  • typical? (Score:5, Insightful)

    by mnemonic_ ( 164550 ) <jamec@u m i ch.edu> on Saturday June 25, 2005 @04:11PM (#12910587) Homepage Journal
    this seems cooler than it is practical.
    Perfect for slashdot!
    • Re:typical? (Score:5, Informative)

      by Rei ( 128717 ) on Saturday June 25, 2005 @05:05PM (#12910814) Homepage
      Many home RAID setups that I have seen are more for show than for speed. Most of them are RAID 0 - i.e., they're for "performance", not redundancy (I assume the reason is that most people don't want half their space to disappear with RAID 1, but don't want to use enough drives to make RAID 5 effective). Yet the primary limits on a home PC's disk performance (apart from operating-system issues like the filesystem and caching) are latency-based, not throughput-based; RAID, if anything, will increase your latency.

      If you want a high-performance system, spend the money to get a small, top-of-the-line drive for your root partition (15k RPM SCSI drives are nice, if you have a SCSI card - you can get a 9 gig for $30 including shipping, or an 18 gig for $55), and then put all of your space-consuming files (movies, music, etc.) on your cheap bulk storage. Get enough spare RAM to have good disk caching. And, of course, choose a good filesystem for small files - ReiserFS works well for me, but there are a lot of good options.

      You only need major throughput if you're doing a lot of very long file reads that need to occur at top speed (i.e., not playing video or listening to music; more like what you need for running a large relational database, or being a fileserver on a crazy-fast network). To the "Raid 0" crowd: Does this really fit your disk's typical usage patterns?
      • Yup, lots of video editing; RAID 0 works well and saves a lot of time.
      • Re:typical? (Score:3, Informative)

        by egburr ( 141740 )
        I use RAID1 because I'm much more concerned about loss of data than about performance. Yeah, that means I buy two drives for the space of one, but for my personal data, it's worth it.

        Really, if you need that much storage, I would hope your hardware budget is a little bigger than what I allocate for my stuff at home.

  • by FireballX301 ( 766274 ) on Saturday June 25, 2005 @04:11PM (#12910588) Journal
    The Poor Man's Redundant Array of Inexpensive Disks.

    That aside, a decent motherboard will come with a RAID IDE controller, so you could easily just grab a pair of 250GB WD Caviars. Or go the cheapo route and do Maxtor.
    • by James Cape ( 894496 ) on Saturday June 25, 2005 @04:20PM (#12910623) Homepage
      Actually, for RAID 5 you'd need a minimum of 3 drives.

      http://www.acnc.com/04_01_05.html [acnc.com]
      http://en.wikipedia.org/wiki/Redundant_array_of_independent_disks [wikipedia.org]
    • I always go the cheapo route and do Maxtor anyway. Every WD drive I've ever owned has failed me.

      • Keep those drives cool! Mount a fan next to them, using plastic straps for flexibility so as not to vibrate the drive.

        I like WD, but now Seagate has 5-year warranties.
      • So far, I have two (out of two) failed HDDs from Quantum, one failure and one oddly behaving drive (erratic performance) from Maxtor, and none from WD or Seagate, though my only Seagate drive (2.1GB, 4500 RPM) sounds like a lawn mower - I was certain it would fail, but it annoyingly hummed along 24/7 until I retired the PC it was in, 5+ years later.

        Thankfully, all four failures occurred under warranty. Also, the three total failures were progressive: after each restart, I could copy some (decreasin
    • by Anonymous Coward on Saturday June 25, 2005 @04:22PM (#12910637)
      You are really better off just using software RAID as provided by the operating system than the fake RAID provided by those on-board IDE/SATA RAID controllers. Then if your mobo dies you don't have to find one with the same RAID chipset, worry about proprietary drivers, etc. You just get another mobo and everything works fine. I played around with the nvraid, the Silicon Image RAID, and one other brand, and they all pretty much suck. The best part is that without a special driver it doesn't matter how you configure the devices in the RAID BIOS; they show up to the OS as individual drives, not as a RAID drive.
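
      For what it's worth, a minimal sketch of that portability (device names are assumptions): the md superblock lives on the disks themselves, so after a mobo swap you can just rescan and reassemble:

        # list any md arrays found on the attached disks
        mdadm --examine --scan
        # bring them back up, regardless of whose IDE/SATA chipset you're on now
        mdadm --assemble --scan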
      • If you set up both entire drives as a software raid array under Linux, with exactly the same parameters as in the motherboard raid, and use the Linux mdpart patch (2.6.6 or later, do NOT use 2.6.5 or below), you can get Linux and Windows to share the same array.

        Getting it to boot is a bit of a bitch though. You need to use a ramdisk and experiment with LILO an awful lot. LILO also won't work with anything other than RAID 1, for obvious reasons.
      • >> Then if your mobo dies you don't have to find one with the same raid chipset, worry about proprietary drivers etc

        umm, if you're smart enough to set up RAID, you'll do backups, won't you?
      • Many years ago, back in the days of MFM and RLL hard drive controllers, I got a really cool device by a company called Perstor which would take any MFM drive and convert it to ARLL, yielding a huge size increase of about 90%. So I put in a shiny new 40 MB (yep, megabytes in those days) and got 76 MB of capacity. Whee!

        Then the CONTROLLER failed. The drive itself was fine. But Perstor, in the meantime, had gone out of business. Bye bye data.
    • The Poor Man's Redundant Array of Inexpensive Disks.
      Well, many believe that the I is for Independent. See the wikipedia [wikipedia.org] for the debate.
    • How exactly do you do RAID 5 with a pair of drives?

      --
      Evan

    • Unless the drivers improved a LOT, most (all?) RAID IDE controllers are unsupported in Linux. Since the RAID is really all done in software, developers see no reason to code RAID support for every driver; instead they point you to the md driver.

      At the same time, the md driver is faster than the cheapo drivers. Or so they said last time I had one.
  • Howto (Score:5, Insightful)

    by zabagel ( 731529 ) on Saturday June 25, 2005 @04:11PM (#12910589)
    Possible new Slashdot Category?
  • by roman_mir ( 125474 ) on Saturday June 25, 2005 @04:13PM (#12910597) Homepage Journal
    Nothing for you to see here. Please move along.
  • Cool? Naah, old (Score:3, Insightful)

    by riflemann ( 190895 ) <riflemann@Nospam.bb.cactii.net> on Saturday June 25, 2005 @04:15PM (#12910603)
    I seriously doubt that this is cool nowadays. A huge case, a lot of fans, and the heat it generates aren't in any way impressive anymore.

    It takes just TWO modern disks to get 1/2 terabyte of space, and not much more to get them in RAID 5, plus you can have a compact box (the one in TFA is very boxy and ugly) and a lot less noise and power consumption.

    Not impressive. Sorry.
    • Actually, Maxtor now has a single-drive 500GB solution. However, either a pair of 250GB disks or a single 500GB disk will cost more than twice what this array cost the guy to build (including power, case, and controller). $0.50/GB is a pretty decent rate (though he did get some decent deals on some of the eBay parts).
  • by Seumas ( 6865 ) * on Saturday June 25, 2005 @04:16PM (#12910607)
    Okay, half a terabyte? Hardly worth lifting a finger for. I have more than 2.5 terabytes almost entirely of porn. And not only that, but it's all stored on 20 IDE drives of various sizes, in external USB cases, plugged into three 7-port D-Link USB hubs, plugged into a PC.

    That's a lot of storage.

    That's balls-to-the-wall.

    I'll take a picture of all the drives stacked up on one another on the desk (5 rows, 4 drives tall).

    I take my porn seriously.
    • by Seumas ( 6865 ) * on Saturday June 25, 2005 @04:20PM (#12910625)
      Who marked this "Funny"?

      I'm serious. And it should be "Informative", you insensitive clods!
      • I'm serious. And it should be "Informative", you insensitive clods!


        He's telling the truth! I won't tell you how I know, but... he's using an insufficiently patched Windows XP. :)
    • You're the California Pimp?
    • by MustardMan ( 52102 ) on Saturday June 25, 2005 @04:22PM (#12910633)
      I'll take a picture of all the drives stacked up on one another on the desk (5 rows, 4 drives tall).

      With that much porn, I think the last picture /. geeks want to see is of the drives
  • by OverlordQ ( 264228 ) on Saturday June 25, 2005 @04:17PM (#12910610) Journal
    The only reason it's budget is that they bought drives off eBay... personally, I think I'll skip eBay if I'm buying drives.
    • On the other hand, if you are building a big RAID array, you can probably deal with a few drives failing. $70 for fourteen 50.1GB drives is a hell of a deal, though if I were him I'd try to get a few more of them so he could deal with failures when they crop up.
  • by Anonymous Luddite ( 808273 ) on Saturday June 25, 2005 @04:18PM (#12910612)
    That seems like a lot of screwing around.

    Why not just hang four *large* drives in a workstation with a motherboard that does RAID 1+0? Yeah, it'll cost more than $249, but it won't involve a 50 lb box of drives..
  • by darthpenguin ( 206566 ) * on Saturday June 25, 2005 @04:25PM (#12910655) Homepage

    This is a bit off-topic, but I want to share my most recent experience with linux-raid.

    A few months ago, I decided I'd put together a RAID 5 system in a dedicated box, to be used as network storage. I put together a Duron 1.6 on an ECS (I know!) K7VTA3, 512MB RAM, a Promise IDE controller, and 4 200GB drives. I figured the kernel-based software RAID would be fine for my purposes.

    I installed Linux to a normal partition, then set up the RAID array. Everything seemed fine. I set up Samba/NFS shares and FTP. Files seemed to transfer just fine. But for some reason, if I transferred a large file over the network directly to the RAID, the md5sum would have changed, no matter how I transferred it. To make things even stranger, if I transferred to a non-RAID partition, then directly used mv or cp to place it on the RAID partition, it worked great. Strange.

    I never quite figured out what was wrong, and I scrapped the project, with the intention to try again with some more decent hardware. Any ideas as to what happened?
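
    One way to narrow down where the corruption creeps in (filenames here are purely illustrative) is to checksum at each hop and compare:

      # on the sending machine
      md5sum bigfile.iso > bigfile.iso.md5
      # after copying both files over the network, run this on the RAID box
      md5sum -c bigfile.iso.md5    # prints 'bigfile.iso: OK' on success

    If the file survives the network copy to a non-RAID partition but fails after landing on the array, that points at the controller/md layer rather than Samba or the NIC.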

    • by Anonymous Coward
      My only guess is that you used a VIA chipset instead of an SiS or nForce to do this.
      Actually, SiS beats out nForce on PCI bus and IDE throughput hands down on the Athlon XP platform.

      Been there, done this, tons of times.
      I currently have a Tyan Tiger MP board with a FastTrak 100 and 4 120GB Maxtor drives in software RAID 5. It's about 40% full or so and I've had zero problems with it. The whole system is underclocked EXCEPT the hard drives (wish I could underclock those).
    • My guess is it was the piece of shit Promise IDE controller. I'm convinced those things eat drives, amongst other nastiness.
    • Yes! I mean NO! ...

      I too have this same exact problem and haven't been able to figure out what causes it. The array in question is a 1.6TB software RAID 5 array with eight 250GB Maxtor SATA drives on Promise SATA150 TX4 controllers. A few months ago I noticed files that I'd copied via Samba being corrupted once they got to the array. As such, I now checksum every file that I dump on it before copying from my Windows box. I would say somewhere between 1/5 and 1/10 of the large (350MB+) files I copy to it
  • Ridiculous (Score:4, Insightful)

    by tabdelgawad ( 590061 ) on Saturday June 25, 2005 @04:36PM (#12910700)
    This project looks like a giant, hot, slow, old-tech, loud, power-hog of a 500 Gig 'drive' for $250 (low-ball estimate with all the eBay pricing and special batch price on the drives the author got, and not counting time/labor).

    A 400 Gig drive (probably of equal or better reliability overall and a warranty) costs about $260 on newegg.

    Reminds me of people using 486's as routers/firewalls when you can pick up a Linksys or D-Link for $20 or $30.

    Thanks, but no thanks.

    • Re:Ridiculous (Score:4, Insightful)

      by HermanAB ( 661181 ) on Saturday June 25, 2005 @04:54PM (#12910776)
      Well, a 486 with Linux and iptables has better throughput than the little ARM processor in a Linksys/D-Link, and you can run a proxy filter, since you have a hard disk for the cache. There is just no comparison, really.
    • Re:Ridiculous (Score:3, Insightful)

      by lawpoop ( 604919 )
      "400 Gig drive (probably of equal or better reliability overall and a warranty).."

      Not really. Most drives coming out now have a 1-year warranty (some have 3). Modern drives pack more data into a smaller space, so they are more likely to lose data than older drives. Small imperfections will be more noticeable, and will cause more and greater problems. They are not the quality level of the old Seagate SCSI drives used in this setup. Those SCSI drives originally came with a 5-year warranty. If those SCSI drives are s

      • Bottom line, you need RAID 5 for data reliability

        That's a common misconception. RAID 5 will give you greater data reliability in some cases (sudden drive failure - RAID 5 will keep you from going down), but won't help you in many others (e.g., accidental rm -rf /, someone rootkitting you just so they can spam people in Brazil (happened to me), accidentally running an SQL UPDATE command without a WHERE clause (also happened to me), etc.).

        Bottom line, you need incremental backups for data reliability. D

        • Bottom line, you need incremental backups for data reliability. Doesn't matter how you do it, you can do it on top of RAID 5 to give you more peace of mind if you want, but it's not really necessary. Instead, at a bare minimum, you must be able to go back to several points in time to recover as recent of data as possible.

          See, for instance:
          http://www.mikerubel.org/computers/rsync_snapshots/ [mikerubel.org]

          This document describes a method for generating automatic rotating "snapshot"-style backups on a Unix-based sy
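
          The core of that method is a hard-link rotation, roughly like this (paths are assumptions; see the linked page for the real scripts):

            rm -rf /backup/daily.2                       # drop the oldest snapshot
            mv /backup/daily.1 /backup/daily.2
            cp -al /backup/daily.0 /backup/daily.1       # hard-link copy: near-instant, almost free
            rsync -a --delete /home/ /backup/daily.0/    # update the newest snapshot in place

          Unchanged files are shared between snapshots via hard links, so each extra point in time costs only the space of what actually changed.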

      • Re:Ridiculous (Score:3, Informative)

        by ashayh ( 636057 )
        Wrong.
        No manufacturer is giving less than a 3-year warranty. [newegg.com]
        The 10,000 RPM WD Raptor and all Seagate drives come with a 5-year warranty.
        I think you wanted to refer to the not-so-recent attempt by some major players to cut warranties to 1 year. That didn't last long, I guess because their sales must have suffered.
    • A 400 Gig drive (probably of equal or better reliability overall and a warranty) costs about $260 on newegg.

      A 400GB drive isn't redundant, unlike the setup in the article, so if it fails, you've lost 400GB of data. The drives are SCSI, which means they will be very reliable, and a lot more so than a consumer-grade SATA drive. I would imagine accessing large files on that thing would be incredibly fast.

      Reminds me of people using 486's as routers/firewalls when you can pick up a Linksys or D-Link for $20 or
    • Not a good comparison; if you get an old machine to run as a 'router', you get a few more features than the cheapo dedicated units..

      I've also found that those cheap 'home routers' that you get for 50 bucks or less are absolute garbage.

    • It's not ridiculous when you account for the failure scenario.

      It has always been cheaper to buy a single big disk than it is to buy a RAID, but do tell us, how do you expect to:

      1. Replace the disk while still allowing access to its contents?
      2. Recover your data after your single disk has failed?

      RAID has more advantages than just size, and although it's easy to point out that the storage size could be had more cheaply, that's not the same thing as saying that a hardware-controller-based RAID 5 system cou
  • I've kept away from the hardware trend since processors hit 1 GHz (back in 2000, I guess?), and turned from hardware junkie into casual hard drive and memory shopper.

    That said, is there any RAID controller similar to the one in the article (one of which I have lying somewhere) but for IDE PATA/SATA drives? You know, in order to set up a similar project but with 160-200GB SATA drives instead?

    Regards,
  • eh? (Score:2, Informative)

    by cryptoz ( 878581 )
    What's with the .5TB? Is it not more standard to call it 512 GB, which, at least in my opinion, sounds far more impressive than .5 TB?
    • Do it in bytes, that's even more impressive. I have about 2598455214080 bytes of storage in my Linux box, all stuffed in a midi tower (8 drives total)
    • Re:eh? (Score:3, Informative)

      by aliquis ( 678370 )
      Seems like it was fourteen 50GB disks, but let's make it easy and say it was RAID 0 of two 250GB disks; that's not 512GB, is it? And also it's really only about 465GB of data (most manufacturers count 1GB as 1,000,000,000 bytes).
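
      For the arithmetic: 500,000,000,000 bytes / 2^30 bytes per GB ≈ 465.7, which is where the 465GB figure comes from once the OS counts in powers of two.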
  • ...the "ST118273LC" 18.6 GB drive is readily available on eBay for about $5.00...

    But shipping and handling, as well as heat, would make this too much hassle. Why not just get a leftover PC and put in a pair of 250GB drives? Cooler, faster, and about the same price or less. And if you ever needed to double or triple it, many PCs will hold up to 3 drives and a CD-ROM for 4 devices. Or if you really need a lot, put in 3 x 400GB = 1.2 TB. Use Linux for mirroring and Samba for NT sharing. Maybe even put a wireles

  • Why? (Score:2, Informative)

    by Jailbrekr ( 73837 )
    This is not economical, cutting-edge, or cool, nor is it practical. Why?

    1) The drives are used. If you want to impress us, do it with new components with warranties (even refurb). Used makes it impractical and unreliable, even more so because you didn't use hot-swap.
    2) It is only 500GB. This can be achieved in a RAID 5 configuration with 3 NEW, UNDER-WARRANTY 250GB drives.
    3) Heat. This negates the whole "cool" (both figurative and literal) label.
    4) Power. Old drives suck up a lot of power. Putting a lot of them in
  • Price issues... (Score:2, Insightful)

    by flatface ( 611167 )
    So all geeking aside, for this project we choose the Seagate Barracuda SCA SCSI line of drives. There are several models you can choose from, the "ST118273LC" 18.6 GB drive is readily available on eBay for about $5.00 a drive, or the "ST150176LC" 50.1 GB at about $15.00 each. I was fortunate enough to get a "Bulk Lot" of the 50 GB model for about $70.00.

    And what about those of us not fortunate enough to stumble upon deals like this?

    • Just for clarification, I was talking about not being able to get a .5TB array for this price. Even the 50GB one lists for $33 each on Pricewatch. That'll bring the price of the drives alone up to $330.
  • Bah! (Score:3, Informative)

    by KenFury ( 55827 ) <kenfury&hotmail,com> on Saturday June 25, 2005 @04:47PM (#12910744) Journal
    Nice project, but... at $15 for a 50GB drive, 250GB raw will cost you $75. Add in shipping and I bet you are at $100+. I can get a Seagate SATA 250GB from Newegg for $120. I would rather have three of those RAID 5'd for 500GB usable than some big, loud, hot, power-hungry drive array.
  • by HiyaPower ( 131263 ) on Saturday June 25, 2005 @04:56PM (#12910782)
    After paying for the electricity to power this thing, you would be much better off with a RR1820A and some SATA drives for about $1000. Not only would it use a lot less power, it would give you a lot more storage. The bucks now are not so much in the hardware (8 250GB drives + a RR1820A at ~$1100, vs. ~$250 for the size array this guy made), but in powering the beasts and keeping your house cool in summer at the same time. The way I figure it, you get about a 20:1 power saving on an equivalent SATA array.

    $60 a barrel oil? What $60 a barrel oil? Must be nice not to have to pay your electricity bills...
  • This was exciting... 3 years ago. I understand that this is on a "budget", but only 500GB?
  • Fourteen SCSI disk drives - the combined MTBF will be rather bad...

    A more practical RAID is to put 4 large IDE disk drives in an old PC and run software RAID1, to give you two virtual disk drives.

    That means that you can't use a CDROM drive, since all IDE ports are used, but you can do a network install using either a boot floppy disk or a USB key.
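
    A minimal sketch of that four-drive layout (device names are assumptions): pair each mirror across the two IDE channels, so a dead channel or cable only degrades both arrays instead of killing one outright:

      # hda/hdb = primary channel, hdc/hdd = secondary channel
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
      mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hdb1 /dev/hdd1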
    • That means that you can't use a CDROM drive, since all IDE ports are used, but you can do a network install using either a boot floppy disk or a USB key.

      I've done stuff like this. The best thing to do is to get a 5.25" USB 2.0 external drive enclosure, and put the CD/DVD/CD-RW/whatever drive of your choice in it. The enclosure usually only runs about $25-$30. Many computers can boot from USB now, and modern Linux distros have no problems with USB drives. As a bonus, after you are done installing, you
  • My shorter HOWTO: (Score:5, Informative)

    by Saeger ( 456549 ) <farrellj@nOSPam.gmail.com> on Saturday June 25, 2005 @05:08PM (#12910828) Homepage
    HOWTO make a 500MB software RAID5 array for about $250:

    1. Buy 3 250GB EIDE or SATA HD's very cheaply. [pricewatch.com]
    2. Plug them into your cheap Linux PC (with at least a 400-watt power supply). If EIDE, then make sure each drive is on its own (master) channel. If your BIOS supports "hardware" RAID, disable it.
    3. Use a low-level drive diagnostic fitness test to burn the drives in, so you can be sure they won't fail right away. A great tool for this is the Ultimate Boot CD [ultimatebootcd.com], as well as the 'badblocks' Linux util.
    4. Assuming your 3 new drives are sdb, sdc, and sdd, with your boot drive on sda (or hda), you should now partition each of them (instead of raiding the entire disk). I recommend creating one primary partition which is slightly smaller than the full size of the hard disk, so that if you buy a replacement drive of another brand and it isn't the EXACT same size, you won't be SOL when adding it. Mark the partition type as "FD", which is the RAID autodetect type.
    5. Verify that your kernel supports software RAID by checking that /proc/mdstat exists, or by checking for the multidisk "md" module in the output of "lsmod | grep md" after attempting to "modprobe md" and "modprobe raid5". If not supported, then... figure that out yourself.
    6. Now the fun part (assuming mdadm's installed):
      mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
      View the status of the raidset construction by cat'ing /proc/mdstat
    7. Put a filesystem on the md0 device with mke2fs /dev/md0 (or mkreiserfs, or whatever)
    8. Add a line to your /etc/fstab to automount your new raid array at /raid5 or wherever.
    9. Oh, and if your distro doesn't automatically detect your array on reboot, you need to fix that by putting this in your init scripts somewhere:
      mdadm --assemble --scan
    Now, wasn't that easy? :)
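
    A possible addition to step 9 (the config path varies by distro; /etc/mdadm.conf is an assumption): recording the array in mdadm's config file also lets most distros assemble it at boot without extra init-script hacking:

      # append the running array's definition to mdadm's config
      mdadm --detail --scan >> /etc/mdadm.conf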
    • HOWTO make a 500MB software RAID5 array for about $250:


      Even better, just send the $250 to me, and I'll send you a _1000_ MB RAID. :)
    • Re:My shorter HOWTO: (Score:3, Informative)

      by Spoing ( 152917 )
      HOWTO make a 500MB software RAID5 array for about $250:

      OK... let's do the math...

      1. Buy 3 250GB EIDE or SATA HD's very cheaply. [pricewatch.com]

      (looks up prices) $98.00 * 3 = $294.00.

      Reminds me of a friend who keeps insisting that he can build a full-sized house for $10,000.00 if he only had the land.

    • Re:My shorter HOWTO: (Score:3, Interesting)

      by swillden ( 191260 ) *

      I recommend creating one primary partition which is slightly smaller than the fullsize of the harddisk

      I've built a ~600GB RAID array for my home video jukebox, and I'd modify your recipe in one way: Rather than creating just two partitions on each drive, create many, then create many small RAID arrays and glue them together with LVM. The result is much more flexible. You can use different partition sets with different RAID levels for different purposes, and it also makes adding additional storage int

  • by aussersterne ( 212916 ) on Saturday June 25, 2005 @05:25PM (#12910905) Homepage
    1. Get a big server tower case w/5+ 5.25" bays.
    2. Get 4 250GB EIDE drives (cheap these days!)
    3. Get 4 $20.00 CompUSA lockable EIDE drive trays.
    4. Get an SMP board + CPUs and slap 'em in there.

    Ta-da. One power supply, four quiet drives, one case, software RAID-5 easily swappable with 2 dedicated fans per drive, looks professional, comparatively quiet, with the benefit of included scalable SMP workstation. And .7TB to boot. Or get a PCI EIDE raid card compatible with both Linux and Windows and go to town with RAID-0 and 1TB.

    There was a time when a SCSI array of many, many drives in a separate case at 10k RPM was something to lust after at home, but these days it just isn't. You can get close enough at home while saving space, using less power, and getting better overall performance.

  • While it's cool to grab dirt-cheap drives, I would think that if this is running 24/7, the impact on power costs and heat added to the room would make this less cost-effective long term. Also, I bet it's a lot noisier than just grabbing a few 300GB drives and RAID 5'ing them for 0.6TB of storage. Also, with that many drives (and older ones), there are a lot more points of failure.
  • 4x250G SATA
    Motherboard with onboard 4 port sata raid
    1G ram
    AMD 3000+
    ATI 9800pro

    Put this all together several months ago for around 1100 bucks; nothing out of this world. No redundancy, just straight RAID 0 and 960 gigs or so of usable space (I'm just storing music and movies; if the drives go south, oh well).

    My other PC that runs my 57" projection TV has 4x120 and 3x80 gig drives in one bigass volume set (yeah, boo, hiss), and the machine next to it has 4x160 gig on a 3ware IDE RAID controller to hold even more
  • I wonder how long before one of them fails, or whether there are any incompatibilities between them... That'll be fun to debug.
  • From TFA:

    But I like to buy and build systems I can use for years and years without having to bother with upgrading, and figure I've made a long-term (at least 4-5 years, which is long term in the computer world) investment that provides me with much more than just storage functionality. And again, $1.46/GB is hard to beat.


    Sure, call me in two years, when they come out with $5 laser-etched holographic 3D memory cubes, which store an unlimited amount of data in a space the size of a few cubic inches...
  • I did this about 6 months ago with 10x250GB IDE drives. It's a bit of a cabling nightmare, but with a Dremel, a drill, and some determination, I crafted a 5x2 "plane" setup. As the article points out, cooling is a major factor. The 4 fans I slapped on the front of the array keep it at a very nice 80 degrees year-round. I haven't had a disk die yet, but I've got spares sitting in shrink wrap just in case.

    http://www.schaefer.nu/pics/nikita/ [schaefer.nu]
  • by The Optimizer ( 14168 ) on Saturday June 25, 2005 @07:38PM (#12911434)
    I just recently built a file server for my home. The most important considerations for me were data protection (I've got too much to lose), reliability, economy of operation and quietness, since the server would be in my office running 24/7.

    First off, low noise is my new religion (with 8 PCs in my office, it makes a huge difference), and secondly, I don't believe in skimping... being frugal and practical, yes, but cutting quality to save a buck (a la Walmart)... NO.

    So to achieve that I acquired the following:
    - Antec Sonata Lifestyle case
    - nForce 2 motherboard without a chipset cooling fan (just a heat sink)
    - ATI Radeon 9200SE video card without a cooling fan (just a heat sink)
    - Mobile Athlon XP 2400+ CPU - 35 watts
    - 22 dB Socket A heat sink/cooling fan unit
    - 22 dB 12cm fan
    - Gigabit NIC
    - 512MB RAM
    - Combo optical drive
    - Samsung 120GB drive (to hold the OS and work space)
    - 3ware Escalade 7504-LP RAID controller
    - 4x Maxtor 300GB 5400 RPM drives (chosen for lower heat output over 7200 RPM)
    - APC 1000VA UPS

    So put it all together and you get a system that has a total of only 4 fans in it, including the one in the power supply. It is the quietest PC I have. The case has a nice rack to hold the 4 RAID drives, with cushions to reduce vibration/noise, and mounts a 12cm fan to draw air directly across them, as well as another at the back, producing decent airflow despite their lower CFM ratings.

    It runs cool and very quiet. I can't hear *anything* out of that system if my ears are more than a foot away from it. I can transfer large files like .isos to/from it at more than 40MB a second. It's protected and will safely shut down in an extended power outage.

    It wasn't $250, but it's good enough for me to do real production work on and sleep better at night.

    So I may not have the fastest possible server, but it's still more than enough

    You could replicate this using 400GB drives for 1.2TB of storage, trading off for the slightly higher heat of 7200 RPM.

  • Take it farther... (Score:4, Interesting)

    by HockeyPuck ( 141947 ) on Saturday June 25, 2005 @07:49PM (#12911478)
    I spend my entire life managing large SANs, so RAID is done in the array (EMC, HDS) while basic volume management is done on the host (LVM, VxVM)... so when I first read this I thought that somebody had used Linux and a Fibre Channel HBA running in target mode (http://www.emulex.com/ts/docfc/linux/430l/target_mode_intro.htm [emulex.com])

    Put that up on /. and you'll have something, because you'll have shown more than a 'look what Linux can do' feature that the other OSes have had for years...

    And then go on to mount those LUNs on another system (say a Solaris, AIX, or another Linux box). Instead, I was disappointed to find out that you took a Linux box and created enough software RAID for a TB or more. If this was done with Windows, it would be rejected... so why does doing it with Linux make it front-page news?
  • by mpeg4codec ( 581587 ) on Saturday June 25, 2005 @07:52PM (#12911495) Homepage
    I'm surprised that nobody has mentioned the Linux Logical Volume Manager subsystem. It has many of the features of RAID arrays [such as spanning across multiple drives] with the added flexibility of being able to dynamically add [and theoretically remove] drives.

    Unfortunately, aside from RAID'ing the volumes or something similar, I haven't been able to find any information on making the system redundant.

    Read about it more on TLDP [tldp.org]. It's a very robust system that works well on both servers and desktops.
    • I'm surprised that nobody has mentioned the Linux Logical Volume Manager subsystem.

      As mentioned, my 2.8TB setup uses LVM2 on RAID 5 (mdadm, not raidtools). I think anyone building one of these babies would be crazy to not use LVM; why limit your future expansion options?
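
      For anyone curious what that looks like, a minimal sketch of LVM2 on top of an md array (volume group and size names are assumptions):

        pvcreate /dev/md0                  # make the RAID 5 device an LVM physical volume
        vgcreate storage /dev/md0          # build a volume group on it
        lvcreate -L 200G -n media storage  # carve out a logical volume
        mke2fs /dev/storage/media          # the filesystem goes on the LV, not the raw md device

      Growing later is then a matter of lvextend plus a filesystem resize, instead of repartitioning.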
  • RAIFs (Score:3, Informative)

    by ChodeMonkey ( 65149 ) on Saturday June 25, 2005 @09:04PM (#12911702) Homepage
    I think it would be more interesting to consider a redundant array of independent flash cards. Since solid-state drives will clearly be included in PCs and laptops in the near future, it would be nice to address the speed and reliability issues associated with them. This would also help with the heat and all.

    Just a thought.

  • by MagnusDredd ( 160488 ) on Sunday June 26, 2005 @02:57AM (#12912929)
    1) Free -- Machine was an old K6-2 500 (192MB RAM, 1.4GB boot drive) that I had lying around.
    2) Free -- I got a full tower case from my brother-in-law (no faceplate).
    3) Free -- I had a few 120mm fans lying around, which I have cooling the drives.
    4) $1040 -- 8 Maxtor 250 GB PATA HDs. (8MB cache, 7200 RPM)
    5) $215 -- 3Ware 7810 (8 port PATA hardware RAID 5 card).
    6) $140 -- APC RS 1500 battery backup. (You don't want the array to suddenly lose power for any reason!)

    Total Cost $1395.

    What it got me: I have 1400GB of usable redundant storage with a hot spare. If a drive fails at 1:00am, the computer will automatically start the rebuild on the spare drive, even if I'm not home. This was more important than the additional storage. I also know that I can get 40 minutes of power out of the APC if the power goes out. The machine is set up to shut itself down in the event that the battery runs low.

    I didn't have to fight with any software configs. The driver is included in the Linux kernel source, and can be compiled into the kernel. I don't have to worry about figuring out SMART data. "tw_cli info c0" gives me easily readable output on all of the drives plugged into the RAID card. It's simple, does the job, is stable as all hell, and was fairly cheap. It would have cost nearly as much to have bought 4 PATA cards (ones not using the flawed Silicon Image controller) as it cost for the 3ware card off of eBay.

    More information here [scatteredbits.net].
