Building a 10 TB Array For Around $1,000

As storage hardware costs continue to plummet, the folks over at Tom's Hardware have decided to throw together their version of the "Über RAID Array." While the array still doesn't stack up against SSDs for access time, a large array is capable of higher throughput via striping. Unfortunately, the amount of work required to assemble a setup like this seems to make it too much trouble for anything but a fun experiment. "Most people probably don't want to install more than a few hard drives into their PC, as it requires a massive case with sufficient ventilation as well as a solid power supply. We don't consider this project to be something enthusiasts should necessarily reproduce. Instead, we set out to analyze what level of storage performance you'd get if you were to spend the same money as on an enthusiast processor, such as a $1,000 Core i7-975 Extreme. For the same cost, you could assemble twelve 1 TB Samsung Spinpoint F1 hard drives. Of course, you still need a suitable multi-port controller, which is why we selected Areca's ARC-1680iX-20."
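For a rough sense of the throughput and cost claims, here is a back-of-the-envelope sketch in Python; the ~$85 per drive and ~100 MB/s per-drive sequential figures are assumptions, not numbers from the article:

    # Back-of-the-envelope for a 12-drive striped array (assumed per-drive figures).
    DRIVES = 12
    PRICE_PER_DRIVE = 85       # USD, rough 2009 street price for a 1 TB Spinpoint F1
    SEQ_MBPS_PER_DRIVE = 100   # MB/s sequential per drive, a rough assumption

    print(f"Drive cost:     ${DRIVES * PRICE_PER_DRIVE}")
    print(f"Raw capacity:   {DRIVES} TB")
    print(f"RAID 0 (ideal): ~{DRIVES * SEQ_MBPS_PER_DRIVE} MB/s sequential")
    print(f"RAID 5 (ideal): ~{(DRIVES - 1) * SEQ_MBPS_PER_DRIVE} MB/s sequential")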
  • by eldavojohn ( 898314 ) * <eldavojohn at gmail.com> on Monday July 13, 2009 @03:05PM (#28680841) Journal
    One: The title is a borderline lie. Yes, you can buy 12x 1 TB drives for about a grand. But if I'm going to build an array, benchmark it, and constantly compare it to buying a Core i7-975 Extreme, the drives alone don't do me any good! (And I love how you continually reiterate with statements like "The Idea: Massive Hard Drive Storage Within a $1,000 Budget")

    Two: Said controller does not exist. They listed the controller as ARC-1680ix-20. Areca makes no such controller [areca.com.tw]. They make 8-, 12-, 16-, and 24-port models, but no 20, unless they've got some advanced product that isn't listed anywhere.

    Three: Said controller is going to easily run you another grand [newegg.com]. And I'm certain most controllers that accomplish what you're asking are pretty damned expensive, and they will have a bigger impact on your results than the drives.

    Four: You don't compare this hardware setup with any other setup. You built the "Uber RAID Array," you claim. Uber compared to what, precisely? How does a cheap Adaptec compare [amazon.com]? Are you sure there's not a better controller for less money?

    All you showed was that we increase our throughput and reduce our access times with RAID 0 & 5 compared to a single drive. So? Isn't that what's supposed to happen? Oh, and you split it across seven pages like Tom's Hardware loves to do. And I can't click print to read the article uninterrupted anymore without logging in. And those Kontera ads that pop up whenever I accidentally cross them with my mouse to click your next page links, god I love those with all my heart.

    So feel free to correct me but we are left with a marketing advertisement for an Areca product that doesn't even exist and a notice that storage just keeps getting cheaper. Did I miss anything?
  • $1000 my ass (Score:3, Insightful)

    by Anonymous Coward on Monday July 13, 2009 @03:09PM (#28680905)

    That'll buy the disks. But nothing else. "Hey, look at my 10TB array. It's sitting there on the table in those cardboard boxes."

  • by zaibazu ( 976612 ) on Monday July 13, 2009 @03:16PM (#28681015)
    Another thing with RAID arrays that have quite a few drives: you have no way of correcting a flipped bit. You need at least RAID 6 to correct these errors. With such vast amounts of data, a flipped bit isn't that unlikely.
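    For a rough illustration of why this matters, a sketch; the 1-in-10^14 unrecoverable read error rate is an assumed, typical consumer-drive spec, not a figure from the comment or the article:

        import math

        URE_PER_BIT = 1e-14          # assumed consumer-drive unrecoverable read error rate
        ARRAY_BYTES = 10 * 10**12    # 10 TB
        bits = ARRAY_BYTES * 8

        p_clean = math.exp(bits * math.log1p(-URE_PER_BIT))  # P(no error over a full read)
        print(f"P(at least one unrecoverable error reading 10 TB) ~= {1 - p_clean:.2f}")  # ~0.55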
  • by jo42 ( 227475 ) on Monday July 13, 2009 @03:18PM (#28681045) Homepage

    They need to keep 'publishing' something to justify revenue from their advertisers. Us schmucks in the IT trenches know better than to take the stuff they write without a bag of road salt. A storage array of that size is going to need at least two redundant power supplies and a real RAID card with battery backup and a proven track record -- unless you want a solid guarantee to lose that amount of data at some point in the near future.

  • by HockeyPuck ( 141947 ) on Monday July 13, 2009 @03:26PM (#28681141)

    Ok, so let's say you built one of these monsters, or you rolled your own with Linux and a bunch of drives... How would a home user back this up? They've got every picture/movie/mp3/resume/recipe etc. that they've ever owned on it.

    • Blu-ray disc? Those have a capacity of 50 GB.
    • An old LTO-3 drive from eBay? Those have a native (no compression) capacity of about 400 GB, so you'd still need a couple dozen tapes for all your data. This will cost you over a grand, plus you'll need to buy an LVD external SCSI adapter.
    • Online/internet backup? Backup and restore times would be brutal.

    Anybody got any reasonable ideas?
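    For scale, a quick back-of-the-envelope (media capacities from the list above; the 1 Mbit/s uplink is an assumption about a typical 2009 DSL line, not a quoted figure):

        # Rough counts and times for backing up a full 10 TB array.
        ARRAY_GB = 10_000

        media_gb = {"Blu-ray (50 GB)": 50, "LTO-3 tape (400 GB native)": 400}
        for name, cap in media_gb.items():
            print(f"{name}: ~{-(-ARRAY_GB // cap)} needed")

        uplink_mbit = 1                                # assumed DSL upload speed
        seconds = ARRAY_GB * 1000 * 8 / uplink_mbit    # GB -> Mbit (decimal units)
        print(f"Online backup at {uplink_mbit} Mbit/s: ~{seconds / 86400 / 365:.1f} years")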

  • by gweihir ( 88907 ) on Monday July 13, 2009 @03:26PM (#28681151)

    but you won't have anything to connect them to, as the controller itself is another $1100.

    You don't need that. Get a board with enough SATA ports and add more via cheap PCI-E controllers, then use Linux software RAID. I did this for several research data servers, and it's quite enough to saturate GbE unless you have a lot of small accesses.
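    A rough sanity check on the GbE point (a sketch; the ~100 MB/s per-drive sequential figure is an assumption, and RAID 5 writes or small random I/O will be slower):

        # How many plain SATA drives does it take to saturate gigabit Ethernet?
        GBE_LIMIT_MBPS = 1000 / 8      # ~125 MB/s ceiling for GbE
        PER_DRIVE_MBPS = 100           # assumed sequential throughput of one drive

        for n in range(1, 5):
            total = n * PER_DRIVE_MBPS
            verdict = "saturates" if total >= GBE_LIMIT_MBPS else "below"
            print(f"{n} drive(s): ~{total} MB/s sequential ({verdict} GbE)")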

  • by BiggestPOS ( 139071 ) * on Monday July 13, 2009 @03:31PM (#28681207) Homepage

    Build an identical one and keep it far enough away that you feel safe? Ideally at least a few blocks away; sync them over a short-haul wireless link (encrypted, of course!) and take the same precautions as you would with anything else?

    Oh yeah, don't do a flat file store; make it an SVN repository, of course.

  • Sigh... (Score:3, Insightful)

    by PhotoGuy ( 189467 ) on Monday July 13, 2009 @03:34PM (#28681257) Homepage

    From the .com bust, I have two leftover NetApp filers, with a dozen or so shelves, about 2 TB of storage. Each unit was about $250,000 new. A half million dollars' worth of gear. Sitting in my shed. It's not worth the cost of shipping to even give the units away anymore. I guess it'll probably just go to the recycling depot. It seems a bit sad for such a cool piece of hardware.

    On the cheerier side, it is nice to enjoy the benefits of the new densities; I have two 1 TB external drives I bought for $100 each, mirrored for redundancy, that sit in the corner of my desk, silently, drawing next to no power. (Of course the NetApp would have better throughput in a major server environment, but for most practical purposes, a small RAID of modern 1 TB drives is just fine.)

  • by Kjella ( 173770 ) on Monday July 13, 2009 @03:55PM (#28681613) Homepage

    A storage array of that size is going to need at least two redundant power supplies and a real RAID card with battery backup and a proven track record -- unless you want a solid guarantee to lose that amount of data at some point in the near future.

    Depends on what you want it for. I've got a 7 TB server w/ 12 disks using a single power supply and JBOD. I could use RAID 1 if I wanted, but I prefer manual double copies and knowing at once when a disk has failed; the last time I messed with RAID I lost a RAID 5 set because the warnings never reached me. Works like a charm with all disks running cool and stable as a rock, and it's much cheaper than this. I'm also very aware of the limitations of this setup; it's in no way redundant in any sense. If I wanted 10 TB of highly available, enterprise-grade information, then all of the following would apply:

    a) I wouldn't use my cheap gaming case
    b) I wouldn't use my single non-redundant PSU
    c) I'd get a server mobo with surveillance
    d) I'd get a real RAID card with staged boot etc.
    e) I'd get hotswap drive bays
    f) I wouldn't be using consumer SATA drives

    This sounds like a halfway house, being neither really cheap nor really reliable. What good is that?
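    A minimal sketch of the "manual double copies" approach described above: copy each file to a second disk and verify the copy by checksum. The paths are hypothetical; a real setup would walk whole directory trees and log failures:

        import hashlib, shutil
        from pathlib import Path

        def sha256(path: Path) -> str:
            h = hashlib.sha256()
            with path.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        def mirror(src: Path, dst: Path) -> None:
            """Copy src to dst, then verify the copy via checksum."""
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            if sha256(src) != sha256(dst):
                raise IOError(f"verification failed for {dst}")

        # Hypothetical mount points for the two disks:
        # mirror(Path("/mnt/disk1/photos/img001.jpg"), Path("/mnt/disk2/photos/img001.jpg"))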

  • by ocularDeathRay ( 760450 ) on Monday July 13, 2009 @04:21PM (#28681977) Journal
    well I suppose you could build two of them. I still wouldn't trust important data to that setup... but I don't know of any cheaper setup in the long run if you just want to make one copy of everything. What I was just thinking is, for a home user, how would you ever collect that much data worth saving... then I remembered that my shitty Verizon DSL is the problem (only real connection where I live). I suppose if I had a fast connection I could collect that much porn or something. seriously though, it seems like for most of the "home users" I know who have that much data, it's just a collection of free (maybe illegal, but free) downloaded crap. I think to a certain extent the original source is your backup. For example, if I download every ep of ST:TNG from BT, I am not going to bother backing that up at all, because I assume I will probably just be able to download it again, and the quality will probably be better when I do. Most users really don't have very much truly irreplaceable data. A few gigs of pics maybe, some digital media you actually purchased, a collection of resumes and letters. I have been using computers since I was a kid and I only have maybe 2 or 3 gigs of data I believe is actually important, and that is really a stretch. So this article is stupid; it's not a solution for enterprise stuff, and very few "home users" really need that kind of storage.
  • by Anonymous Coward on Monday July 13, 2009 @04:26PM (#28682055)

    Seriously, this is not news. It's an advertisement to sell gear that doesn't exist.

  • by relguj9 ( 1313593 ) on Monday July 13, 2009 @04:35PM (#28682181)
    Exactly... you can even set it up to automatically identify which HD has failed (with like 2 or 3 drive parity), hot swap out the hard drive (or add more), and have it rebuild the array without a reboot. This article is st00pid. Also, the guy who says you need an $1,100 controller is st00pid.
  • by MartinSchou ( 1360093 ) on Monday July 13, 2009 @04:36PM (#28682203)

    GbE is 1,000 megabits/s in theory. That's no more than 125 megabytes/s. With four Intel X25-E drives you'll hit 226 MB/s random read and 127 MB/s random write [anandtech.com] throughput.

    I'm fairly certain you can settle for the four on-board SATA ports for that. And those four drives combined will more or less eat a few thousand IO/s as hors d'oeuvres.

  • by iamhassi ( 659463 ) on Monday July 13, 2009 @04:57PM (#28682483) Journal
    "Did I miss anything?"

    You forgot reason Five, which is stated in the article: "we decided to create the ultimate RAID array, one that should be able to store all of your data for years to come while providing much faster performance than any individual drive could."

    If this is supposed to be storing data for years, why am I dropping $1,000 on it today? Why am I (or anyone) buying "the next several years" of storage all at once? Did I win a huge settlement from suing myself? [slashdot.org] Did I win the lottery? Did the economy suddenly rebound?

    And in several years, when you actually use all 10 TB, you're gonna be the douche with twelve old 1 TB drives while your buddies are cruising along with single 5 and 7 TB drives that they spent $100-$200 on.

    Wouldn't it make more sense to buy more when I fill what I already have? What's the point of having 10 TB with 95% of it empty? Spending a grand on storage that will sit largely empty for several years, all the while burning up electricity to keep those drives running, doesn't make sense. Might as well leave them in the box and lower the electric bill a bit for a few years.

    And I'm surprised they even bothered with testing RAID 0. 12 drives, no redundancy? Good way to lose 10 TB of data if you ask me.

    Just for shits and grins I decided to look up what drive the $85 they spent on a 1 TB drive would have bought 5 years ago, to see how this article would have gone if it were July 2004. Looks like they'd have twelve 120 GB SATA drives [archive.org] or twelve 160 GB IDE [archive.org]. The IDE drives would be sadly outdated by now, and the SATA drives would have given you about 1.4 TB of storage, all for $1,000. I imagine we'll be looking at this article 5 years from now and thinking "WTF were they thinking??"
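    For the curious, the price-per-gigabyte arithmetic behind that comparison (the drive prices are the rough figures cited above, not exact listings):

        # Rough $/GB, 2004 vs. 2009, using the figures from the comment.
        drives = {"2004: 120 GB SATA": (85, 120), "2009: 1 TB Spinpoint F1": (85, 1000)}
        for name, (usd, gb) in drives.items():
            print(f"{name}: ${usd / gb:.3f}/GB")
        # Roughly an 8x drop in cost per GB over five years.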
  • Re:We do this now (Score:1, Insightful)

    by Anonymous Coward on Monday July 13, 2009 @05:16PM (#28682765)

    The price dropped from thousands of dollars to hundreds, and took me a full workday to get set up.

    How much do they pay you? With benefits, you could easily cost your employer $500-$1000/day.

  • by vlm ( 69642 ) on Monday July 13, 2009 @05:49PM (#28683225)

    Lessons learned:

    9. Software RAID is much easier to admin remotely, online, over SSH and the Linux command line. Hardware RAID often requires downtime and reboots.

    10. Your hardware RAID card manufacturer may go out of business, replacements may be unavailable, etc. Linux software RAID will be available until approximately the end of time; much lower risk.

    11. The more drives you have, the more you'll appreciate installing them all in drive caddies/shelves. With internal drives you'll have to disconnect all the cables, haul the box out, unscrew it, open it, then unscrew all the drives -- downtime measured in hours. With some spare drive caddies, you can hit the power, pull the old caddy, slide in the new caddy with the new drive, hit the power -- downtime measured in seconds to minutes. Also, I prefer installing new drives into caddies at my comfy workbench rather than crawling around the server case on the floor.
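    On point 9, a minimal sketch of what remote software-RAID monitoring can look like on Linux: md array state is plain text in /proc/mdstat, so a few lines run over SSH are enough to spot a degraded array (the script name and host below are hypothetical):

        # Print each Linux md (software RAID) array line and flag degraded status markers.
        from pathlib import Path

        def md_status(mdstat: str = "/proc/mdstat") -> None:
            for line in Path(mdstat).read_text().splitlines():
                if line.startswith("md"):                       # e.g. "md0 : active raid5 sdl1[11] ..."
                    print(line.strip())
                elif "[" in line and "_" in line.rsplit("[", 1)[-1]:
                    print("  DEGRADED:", line.strip())          # e.g. "... [12/11] [UUUUUUUUUUU_]"

        # md_status()   # e.g. run as: ssh fileserver python3 check_mdstat.py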

  • by TClevenger ( 252206 ) on Monday July 13, 2009 @07:51PM (#28684511)

    9. Software RAID is much easier to admin remotely, online, over SSH and the Linux command line. Hardware RAID often requires downtime and reboots.

    I would imagine it's also easier to move a software array from one system to another. If your specialty RAID card dies, at a minimum you'll have to find another card to replace it with, and at worst the configuration is stored in the controller instead of on the disks, making the RAID worthless.
