Data Storage Hardware

Build Your Own $2.8M Petabyte Disk Array For $117k

Posted by Soulskill
from the we-know-exactly-what-you'd-do-with-that-much-storage dept.
Chris Pirazzi writes "Online backup startup BackBlaze, disgusted with the outrageously overpriced offerings from EMC, NetApp and the like, has released an open-source hardware design showing you how to build a 4U, RAID-capable, rack-mounted, Linux-based server using commodity parts that contains 67 terabytes of storage at a material cost of $7,867. This works out to roughly $117,000 per petabyte, which would cost you around $2.8 million from Amazon or EMC. They have a full parts list and diagrams showing how they put everything together. Their blog states: 'Our hope is that by sharing, others can benefit and, ultimately, refine this concept and send improvements back to us.'"
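The summary's arithmetic is easy to sanity-check; a quick back-of-the-envelope sketch using the pod figures quoted above (decimal terabytes, 1 PB = 1000 TB):

```python
# Sanity check of the summary's figures: 67 TB per pod at a
# $7,867 material cost, scaled up to a petabyte.
POD_TB = 67
POD_COST = 7_867

pods_per_pb = 1000 / POD_TB            # ~14.9 pods per petabyte
cost_per_pb = pods_per_pb * POD_COST   # ~$117,400

print(f"{pods_per_pb:.1f} pods -> ${cost_per_pb:,.0f} per petabyte")
```

Which lands right on the roughly $117,000/PB claimed in the summary; the $2.8M figure is the vendors' price for the same capacity, not something derived here.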

  • Not ZFS? (Score:2, Insightful)

    by pyite (140350)

    Good luck with all the silent data corruption. Shoulda used ZFS.

    • by Anonymous Coward

      I love free shipping, even if it costs me more !! I like FREE STUFF !!

    • Re: (Score:3, Insightful)

      by Lord Ender (156273)

      Are you saying that with the more expensive system, disks never fail and nobody ever has to get up in the night?

      • Re: (Score:3, Interesting)

        by chudnall (514856)

        What do you mean by more expensive? OpenSolaris [opensolaris.org] with ZFS costs the same as Linux. And yes, you'll have to get up a lot less often in the middle of the night, since a few bad sectors aren't going to force a failure of the entire disk.

      • Re:Not ZFS? (Score:5, Interesting)

        by ajs (35943) <ajs@@@ajs...com> on Wednesday September 02, 2009 @11:15AM (#29285921) Homepage Journal

        Are you saying that with the more expensive system, disks never fail and nobody ever has to get up in the night?

        Well... yes and no. When you've worked with high-end arrays, you learn that storage is only the beginning. NetApp and EMC provide far, far more. I was damned impressed when I first heard a presentation from NetApp about their technology, but the day that they called me up and told me that the replacement disk was in the mail and I answered, "I had a failure?" ... that was the day that I understood what data reliability was all about.

        Since that time (over 10 years ago), the state of the art has improved over and over again. If you're buying a petabyte of storage, it's because you have a need that breaks most basic storage models, and the average sysadmin who thinks that storage is cheap is going to go through a lot of pain learning that he's wrong.

        Someday, you'll have a petabyte disk in a 3.5" form factor. At that point, you can treat it as a commodity. Until then, administering that much storage places demands on you that call for a very different class of device than a Linux box with a bunch of RAID cards.

        As evidence of that, I submit that dozens of companies like the one in this article have existed over the years, and only a handful of them still exist. Those that still do have either exited the storage array business, or have evolved their offerings into something that costs a lot more to build and support than a pile of disks.

        • Re:Not ZFS? (Score:4, Insightful)

          by NotBornYesterday (1093817) * on Wednesday September 02, 2009 @12:05PM (#29286775) Journal

          As evidence of that, I submit that dozens of companies like the one in this article have existed over the years, and only a handful of them still exist. Those that still do have either exited the storage array business, or have evolved their offerings into something that costs a lot more to build and support than a pile of disks.

          Or they have been bought by one of the bigger storage companies.

        • Re: (Score:3, Interesting)

          by iphayd (170761)

          On a similar note, they claim that they will back up any one computer for $5/month. Well, my one computer happens to be the backup node for my SAN, so they're going to need about 15 TB (it's a small SAN) to keep 30 days of backups for me. Please note that all of the files on my SAN are under 4GB, and I have a SAN, not a NAS, so my servers see it as a native hard drive.

    • by leoc (4746) on Wednesday September 02, 2009 @12:05PM (#29286777) Homepage

      I like how you dismiss a detailed real world design example based simply on a claimed feature without any further substantiation. Very classy. I'm not saying you are wrong, but would it kill you to go into a little more detail about why these folks need "luck" when they are clearly very successful with their existing design?

      • by pyite (140350) on Wednesday September 02, 2009 @12:13PM (#29286895)

        are you a project manager by any chance?

        Of course not. A project manager would look at this and go, "wow, we saved a lot of money!" It's pretty simple. ZFS does what most other filesystems do not: it guarantees data integrity at the block level through checksums. When you're dealing with this many spindles and dense, non-enterprise drives, you are virtually guaranteed to get silent corruption. The article never once mentions the words corrupt.*, checksum, or integrity. The server doesn't use ECC RAM. The project, while well intentioned, should scare the crap out of anyone thinking about storing data with this company.
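The guarantee the parent is referring to, catching silent corruption by checksumming every block, can be illustrated in a few lines. This is a toy sketch of the idea only, not ZFS's actual mechanism (ZFS stores each block's checksum in its parent block pointer and can self-heal from redundant copies):

```python
import os
import zlib

BLOCK = 4096  # illustrative block size

def write_blocks(data):
    """Store each block alongside its CRC-32 checksum."""
    return [(zlib.crc32(data[i:i + BLOCK]), data[i:i + BLOCK])
            for i in range(0, len(data), BLOCK)]

def read_blocks(blocks):
    """Re-verify every block on read; a mismatch is silent corruption
    that a plain filesystem would have passed up without complaint."""
    for stored, chunk in blocks:
        if zlib.crc32(chunk) != stored:
            raise IOError("checksum mismatch: silent corruption detected")
    return b"".join(chunk for _, chunk in blocks)

blocks = write_blocks(os.urandom(10 * BLOCK))
stored, chunk = blocks[3]
blocks[3] = (stored, bytes([chunk[0] ^ 1]) + chunk[1:])  # flip one bit
try:
    read_blocks(blocks)
except IOError as e:
    print(e)  # caught on read instead of silently returned to the app
```

A real implementation also has to decide where to keep the checksums so that a corrupted block can't vouch for itself, which is exactly what ZFS's block-pointer design addresses.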

        • by profplump (309017) <zach-slashjunk@kotlarek.com> on Wednesday September 02, 2009 @02:28PM (#29289003)

          What failure rate are you using to "virtually guarantee" that you'll get data corruption with 45 drives?
          What failure rate in your RAM, CPU, and motherboard are you using to guarantee that the ZFS checksums are not themselves corrupted? Not to mention the real possibility of bugs in a younger filesystem, and the different performance characteristics among FSes.

          I'm not saying ZFS is a bad plan, at least if you're running enough spindles, but if you're going to "virtually guarantee" silent corruption with fewer than 100 drives, I'd like to see some documentation for the non-detectable failure rates you're expecting.

          It's also worth noting that for a lot of data, a small number of bit flips might not be worth protecting against at all. Or they might be better protected at the application level instead of the block level. For example, if the data will be transmitted to another system before it is consumed, as would be typical for a disk host like this, a single checksum of the entire file (think md5sum) could be computed at the end-use system, rather than computing a per-block checksum at the disk host and then just assuming the file makes it across the network and through the other system's I/O stack without error.
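The whole-file, application-level check described above is a one-liner with standard tools (`md5sum file` on the receiving host); a Python equivalent, for illustration:

```python
import hashlib

def file_digest(path, algo="md5"):
    """Checksum an entire file, md5sum-style. Run where the data is
    actually consumed, this covers the network hop and the receiving
    host's I/O stack as well as the disks, with one hash per file
    instead of one per block."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()
```

The sender publishes the digest alongside the file; the receiver recomputes it after the transfer and compares.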

        • *sigh* (Score:5, Insightful)

          by upside (574799) on Wednesday September 02, 2009 @02:29PM (#29289005) Journal

          How about reading the section "A Backblaze Storage Pod is a Building Block".

          <snip> the intelligence of where to store data and how to encrypt it, deduplicate it, and index it is all at a higher level (outside the scope of this blog post). When you run a datacenter with thousands of hard drives, CPUs, motherboards, and power supplies, you are going to have hardware failures — it's irrefutable. Backblaze Storage Pods are building blocks upon which a larger system can be organized that doesn't allow for a single point of failure. Each pod in itself is just a big chunk of raw storage for an inexpensive price; it is not a "solution" in itself.

          Emphasis mine. I believe there are quite a few successful and reliable storage vendors not using ZFS. We get the point, you like it. Doesn't mean you can't succeed without it. Be more open minded.

    • Re: (Score:3, Interesting)

      I have news for you. The high end boxes from EMC, NetApp and the like have silent data corruption too!
    • Re: (Score:3, Insightful)

      by rnturn (11092)
      Because, you know, ZFS cures cancer and stops bad breath, too. Not to be too snarky, but jeez... what did everybody do before ZFS came along?
  • by Nimey (114278) on Wednesday September 02, 2009 @10:12AM (#29284995) Homepage Journal

    Support.

    • by bytethese (1372715) on Wednesday September 02, 2009 @10:31AM (#29285237)
      For the 2.683M difference, that support better come with a "happy ending" for the entire staff...
    • by Richard_at_work (517087) <richardprice@nOSPam.gmail.com> on Wednesday September 02, 2009 @10:47AM (#29285501)
      And backup, redundancy, hosting, cooling etc etc. The $117,000 cost quoted here is for raw hardware only.
      • by interval1066 (668936) on Wednesday September 02, 2009 @10:56AM (#29285625) Homepage Journal

        Backup: depends on the backup strategy. I could make this happen for less than an additional 10%. But ok, point taken.

        Redundancy: You mean plain redundancy? These are RAID arrays, are they not? You want redundancy at the server level? Now you're increasing the scope of the project, which the article doesn't address. (Scope error)

        Hosting: Again, the point of the article was the hardware. That's a little like accounting for the cost of a trip to your grandmother's, and factoring in the cost of your grandmother's house. A little out of scope.

        Cooling: I could probably get the whole project chilled for less than 6% of the total cost, depending on how cool you want the rig to run.

        I think you're looking for a wrench in the works where none exists.

      • by MrNaz (730548) * on Wednesday September 02, 2009 @10:56AM (#29285627) Homepage

        Redundancy can be had for another $117,000.
        Hosting in a DC will not even be a blip in the difference between that and $2.7m.

        EMC, Amazon etc are a ripoff and I have no idea why there are so many apologists here.

        • by Score Whore (32328) on Wednesday September 02, 2009 @02:32PM (#29289053)

          Redundancy can be had for another $117,000.
          Hosting in a DC will not even be a blip in the difference between that and $2.7m.

          EMC, Amazon etc are a ripoff and I have no idea why there are so many apologists here.

          First, these aren't even storage arrays in the same sense that EMC, Hitachi, NetApp, Sun, etc. provide. The only protocol you can use to access your data is HTTPS? WTF! Second, the Hitachi array in my data center doesn't put 67 TB of storage behind half a dozen single points of failure the way this thing does. Third, the Hitachi array in my data center doesn't put 67 TB behind a dinky gigabit Ethernet link. My Hitachi will provide me with 200,000 IOPS at 5 ms latency. I can hook a whole slew of hosts up to my SAN. I can take off-host, change-only copies of my data so backups don't bog down my production work. I can establish replication between the Hitachi here in this building and the second array four hundred miles away with write-order fidelity and guaranteed RPOs.

          Comparing this thing to enterprise-class storage is like a sixteen-year-old adding a cold air intake and a coat of red paint to his Honda Civic, then running around bragging that his car is somehow comparable to a Ferrari ("look, they're both red!"). Every time I see something like this, the only thing I learn is that yet another person doesn't actually "Get It" when it comes to storage.

          HelloWorld.c is to the Linux kernel as this thing is to the Hitachi USP-V or EMC Symmetrix.

          • Re: (Score:3, Informative)

            by ToasterMonkey (467067)

            My Hitachi will provide me with 200,000 IOPS with 5 ms latency.

            While that is just a TAD overkill for disk backup, these guys' $0.11/GB is not something I'd trust my backups to.

            HelloWorld.c is to the Linux kernel as this thing is to the Hitachi USP-V or EMC Symmetrix.

            You nailed it.

            Service time/IOPS matters less here than trustworthy, proven controller hardware and software, plus built-in goodies like replication. That's why I would trust disk backups to Sun, NetApp, Hitachi, EMC, and not these people. Possibly for home systems, I guess, but bragging about homemade storage is a real turnoff.

        • Re: (Score:3, Informative)

          by Sandbags (964742)

          "Redundancy can be had for another $117,000."
          ...plus the inter-SAN connectivity
          ...plus the SAN-fabric-aware write-splitting hardware and licensing
          ...plus the redundancy-aware servers connected to that SAN fabric
          ...plus the multipath HBA licensing for the servers
          ...plus multiple redundant HBAs per server and twice as many SAN fabric switches
          ...plus journaling and rollback storage, and block-level deduplication within it (having a real-time copy is useless if you get infected with a virus)
          ...plus anothe

      • by MoonBuggy (611105) on Wednesday September 02, 2009 @10:59AM (#29285699) Journal

        The lowest cost of an (apparently) comparable solution on their site is from Dell, at $826,000 per PB. That includes hardware and support but still requires hosting, cooling and so on at extra cost. To quote backup and redundancy as part of the cost seems misleading, since none of the solutions appear to include that.

        Basically, comparing favourably with the Dell units simply requires that one can get support for less than $709,000. If you want to throw in backup and redundancy, then buy twice as many units; you've still got change from half a million compared to the single Dell unit to cover the extra power, support and cooling costs, not to mention that support costs don't necessarily scale linearly.

    • by johnlcallaway (165670) on Wednesday September 02, 2009 @10:51AM (#29285571)
      It's great having someone tell you they will be there in three hours to replace your power supply, when you then have to dedicate a staff person to escort them on the shop floor because some moron in security requires it. If they had just left a few spare parts, you could do it yourself, because everything just slides into place anyway.

      That 2.683M also pays for salaries, pretty building(s), advertising, research, conventions, and more advertising.

      I could hire a couple of dedicated staff to have 24x7 support for far less than 2.683M, plus a duplicate system worth of spare parts.

      This stuff isn't rocket science. Most companies don't need high-speed, fiber-optic disk array subsystems for a significant amount of their data, only for a small subset that needs blindingly fast speed. The rest can sit on cheap arrays. For example, all of my network-accessible files that I open very rarely but keep on the network because they get backed up. Or all of the five copies of database backups and logs that I keep because it's faster to pull them off of disk than to request a tape from offsite. And it's faster to back up to disk, then to tape.

      BackBlaze is a good example of someone that needs a ton of storage, but not lightning-fast access. Having a reliable system is more important to them than one that has all the tricks and trappings of an EMC array, which probably only 10% of all EMC users actually use, but which they all pay for.
  • by eldavojohn (898314) * <eldavojohn@nOspAM.gmail.com> on Wednesday September 02, 2009 @10:12AM (#29284999) Journal

    Before realizing that we had to solve this storage problem ourselves, we considered Amazon S3, Dell or Sun Servers, NetApp Filers, EMC SAN, etc. As we investigated these traditional off-the-shelf solutions, we became increasingly disillusioned by the expense. When you strip away the marketing terms and fancy logos from any storage solution, data ends up on a hard drive.

    That's odd; where I work, we pay a premium for what happens when the power goes out, what happens when a drive goes bad, what happens when maintenance needs to be performed, what happens when the infrastructure needs upgrades, etc. The article left out a lot of buzzwords, but it also left out the people who manage these massive beasts. I mean, how many hundreds (or thousands) of drives are we talking here?

    You might as well add a few hundred thousand a year for the people who need to maintain this hardware and also someone to get up in the middle of the night when their pager goes off because something just went wrong and you want 24/7 storage time.

    We don't pay premiums because we're stupid. We pay premiums so we can relax and concentrate on what we need to concentrate on.

    • by SatanicPuppy (611928) * <Satanicpuppy@g m a i l .com> on Wednesday September 02, 2009 @10:23AM (#29285127) Journal

      The focus of the article was only on the hardware, which was extremely low cost to the point of allowing massive redundancy...This is not an inherently flawed methodology.

      If you can deploy cheap 67 terabyte nodes, then you can treat each node like an individual drive, and swap them out accordingly.

      I'd need some actual uptime data to make a real judgment on their service vs their competitors, but I don't see any inherent flaws in building their own servers.

    • by staeiou (839695) * <staeiou@@@gmail...com> on Wednesday September 02, 2009 @10:27AM (#29285187) Homepage

      We don't pay premiums because we're stupid. We pay premiums so we can relax and concentrate on what we need to concentrate on.

      They actually do talk about that in the article. The difference in cost for one of the homegrown petabyte pods from the cheapest suppliers (Dell) is about $700,000. The difference between their pods and cloud services is over $2.7 million per petabyte. And they have many, many petabytes. Even if you do add "a few hundred thousand a year for the people who need to maintain this hardware" - and Dell isn't going to come down in the middle of the night when your power goes out - they are still way, way on top.

      I know you don't pay premiums because you're stupid. But think about how much those premiums are actually costing you, what you are getting in return, and if it is worth it.

    • In the article he does mention that this solution is not for everyone and that failover and other features are outside the scope of the article. However, for his particular usage this is a nice solution.

      My question is, where does one acquire the case he uses? My company currently stores a lot of video and the 10TB 4U machines I have been building are quickly running out of space. This would be an ideal solution for my needs.
    • by Tx (96709) on Wednesday September 02, 2009 @10:28AM (#29285197) Journal

      We don't pay premiums because we're stupid. We pay premiums because we're lazy.

      There, fixed that for you ;).

      Ok, that was glib, but you do seem to have been too lazy to read the article, so perhaps you deserve it. To quote TFA: "Even including the surrounding costs, such as electricity, bandwidth, space rental, and IT administrators' salaries, Backblaze spends one-tenth of the price in comparison to using Amazon S3, Dell Servers, NetApp Filers, or an EMC SAN." So they aren't ignoring the costs of IT staff administering this stuff as you imply; they're telling you the costs including the admin costs at their datacentre.

    • by Overzeetop (214511) on Wednesday September 02, 2009 @10:30AM (#29285223) Journal

      Yeah, this only works if you're the geeks building the hardware to begin with. The real cost is in setup and maintenance. Plus, if the shit hits the fan, the CxO is going to want to find some big butts to kick. 67TB of data is a lot to lose (though it's only about 35 disks at max capacity these days).

      These guys, however, happen to be the geeks, the maintainers, and the people-whose-butts-get-kicked-anyway. This is not a project for a one- or two-man IT group that has to build a storage array for their 100-200 person firm. These guys are storage professionals with the hardware and software know-how to pull it off. Kudos to them for making it and sharing their project. It's a nice, compact system. It's a little bit of a shame that there isn't OTS software, but at this level you're going to be doing grunt work on it with experts anyway.

      FWIW, Lime Technology (lime-technology.com) will sell you a case, drive trays, and software for a quasi-RAID system that will hold 28TB for under $1500 (not including the 15 2TB drives, another $3k on the open market). It is only one-drive fault tolerant, though failure is more graceful than with a traditional RAID. I don't know if they've implemented hot spares or automatic failover yet (which would put them up to two-drive fault tolerance, like RAID 6).

    • Re: (Score:3, Interesting)

      by parc (25467)

      At 67T per chassis and 45 drives documented per chassis, they're using 1.5T drives. 1 petabyte would then be 667 drives.

      The worst part of this design that I see (and there's a LOT of bad to see) is the lack of an easy way to get to a failed drive. When a drive fails you're going to have to pull the entire chassis offline. Google did a study in 2007 of drive failure rates (http://labs.google.com/papers/disk_failures.pdf) and found the following failure rates over drive age (ignoring manufacturer):
      3mo: 3% =
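The drive arithmetic in the comment above checks out; a quick check in Python (decimal units, nominal 1.5 TB drives as inferred by the parent):

```python
import math

# From the pod specs quoted above: 67 TB per chassis, 45 drives each.
CHASSIS_TB = 67
DRIVES_PER_CHASSIS = 45

drive_tb = CHASSIS_TB / DRIVES_PER_CHASSIS     # ~1.49 TB -> nominal 1.5 TB drives
drives_per_pb = math.ceil(1000 / 1.5)          # 667 drives for a petabyte
chassis_per_pb = math.ceil(1000 / CHASSIS_TB)  # 15 chassis

print(drive_tb, drives_per_pb, chassis_per_pb)
```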

      • by Anarke_Incarnate (733529) on Wednesday September 02, 2009 @10:57AM (#29285671)

        You will more than likely NOT have to take a node offline. The design looks like it places the drives into slip-down hot-plug enclosures. Most rack-mounted hardware is on rails, not screwed to the rack. You roll the chassis out, log in, fail the drive that is bad, remove it, hot-plug another drive, and add it to the array. You are now done.

        They went RAID 6, even though it is slow as shit, for the added failsafe mechanisms.

    • Re: (Score:3, Informative)

      by fulldecent (598482)

      >> You might as well add a few hundred thousand a year for the people who need to maintain this hardware and also someone to get up in the middle of the night when their pager goes off because something just went wrong and you want 24/7 storage time.

      >> We don't pay premiums because we're stupid. We pay premiums so we can relax and concentrate on what we need to concentrate on.

      Or... you could just buy ten of them and use the left over $1m for electricity costs and an admin that doesn't sleep

    • by rijrunner (263757) on Wednesday September 02, 2009 @11:50AM (#29286549)

          Having a couple decades of working both sides of the Support Divide, I am now of the opinion that the sole purpose of a Support Contract is to have someone at the other end of the phone to yell at. It makes people feel better and gives them a warm fuzzy. But having had to schedule CEs to come on site to replace failed hardware, I have generally found that it adds hours to any repair job. I would guess that you could power off this array, remove every single drive, move them to a new chassis, reformat them in NTFS, then back to JFS, and still finish before a CE shows up on site. I recall that in the winter of 1994, *every* Seagate 4GB drive in our Sun boxes died.

          What happens now when a drive goes bad is that a drive goes bad. You spot it through some monitoring software. You pick up the phone and call a 1-800 number. Someone asks a few questions like "What is your name? What is your quest? What is your favorite color?", then you hear typing in the background. After a bit, if you're lucky, they have you in the system correctly and can find your support contract for that box. Then they give you a ticket number and put you on hold. Then, after a bit, an "engineering" rep appears and says "What is the nature of the emergency?", and you tell them the same stuff, except you get to add words like "var adm messages" or something. They'll tell you to send them some email so they can do some troubleshooting. You send them what they ask for. About an hour or so later, you get an email or a call back saying that the drive has gone bad and needs to be replaced, which is pretty much the same thing you told them when you called in. They then tell you that you are on a Gold Contract with 24/7 support and that the CE has a 4-hour callback requirement from the time the call is dispatched. By this point, you are about 3-4 hours past the disk drive failing in the first place. Finally, the CE will call back after some amount of time to schedule a replacement. And here comes the real kicker... In almost every instance for the last 10 years, we have had to do all maintenance during a scheduled window. At 1AM.

          What happens now when something breaks is that someone fixes it.

          Any business is faced with a Buy-It-Or-Build-It dilemma for any service or equipment. Since this was their core business, it certainly makes sense. And, it makes sense for any business of a certain size or set of skills. The reality is that the math is favoring consumer electronics for most applications because they are good enough for 85% of the business needs out there. The whole Cost-Benefit analysis must be periodically re-addressed. If you do not have $1 million a year in billed repair from a Support contract, is it worth $1 million a year for the contract? Seriously.. Even if you have a support contract, you're probably going to get billed time and materials on top of everything else.

          With the math on this unit, you can build in massive layers of redundancy to greatly reduce even the possibility of the data being inaccessible, and still come in far, far cheaper than any support contract, and you can schedule downtime because you have redundancy across multiple chassis.

      • Re: (Score:3, Interesting)

        by PRMan (959735)

        I used to work at a company that paid a 20% premium on hardware for support from HP that was COMPLETELY WORTHLESS. I told them they would be better off just ordering a 6th computer for every 5 that they bought.

        The guy would show up with no tools, not even a screwdriver, and then he would need to come back the next day (with a screwdriver). Then he didn't have the part (say RAM) that we told them in the first call and the day before. Then he showed up the next day with RAMBUS instead of DDR RAM. After 3

    • Please.... (Score:3, Interesting)

      by mpapet (761907)

      where I work we pay a premium for what happens when the power goes out, what happens when a drive goes bad,

      Whoever spec'd your systems should have accommodated obvious failures like these: paying for colo, using servers with dual power supplies that fail over, a sensible RAID strategy. Giving money to EMC in this situation is not sensible.

      but they also left out the people who manage these massive beasts. I mean, how many hundreds (or thousands) of drives are we talking here?
      I have a couple of hundre

  • Ripoff (Score:5, Insightful)

    by asaul (98023) on Wednesday September 02, 2009 @10:14AM (#29285027)

    Looks like a cheap downscale undersized version of a Sun X4500/X4540.

    And as others have pointed out, you pay a vendor because in 4 years they will still be stocking the drives you bought today, whereas with this setup you will be praying they are still on eBay.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      why wouldn't you just build an entirely new pod with current disks and migrate the data? You could certainly afford it.

      • by pyite (140350)

        why wouldn't you just build an entirely new pod with current disks and migrate the data? You could certainly afford it.

        Maybe because there's no need to update and you just want to be able to replace broken drives?

        • Re: (Score:3, Interesting)

          by PAjamian (679137)

          Fine then, replace just the broken drives. As far as I'm aware, Linux software RAID 6 does not require that the drives be the same model, or even the same size. You can get newer drives for the same or less cost than the old drives and just plug them in. Who cares if they have more capacity? Let it go to waste if you must, but it'll work just fine, and you certainly won't have to be scrounging drives off of eBay.

          Also consider that five years down the road we may have 10tb drives or better, but 1.5 tb drive
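What the parent describes is the standard Linux md workflow; a sketch, assuming an existing software RAID 6 array at /dev/md0 and a failed member /dev/sdq (the device names are illustrative, not from the article):

```shell
# Replace a failed member of a Linux software RAID 6 array with any
# drive of equal or larger capacity; md only uses the array's member
# size, so extra capacity on a bigger replacement simply goes unused.
# /dev/md0 and /dev/sdq are illustrative names.

mdadm --manage /dev/md0 --fail /dev/sdq     # mark the dying drive failed
mdadm --manage /dev/md0 --remove /dev/sdq   # drop it from the array

# ...physically swap in the replacement drive, then:
mdadm --manage /dev/md0 --add /dev/sdq      # kicks off the RAID 6 rebuild

cat /proc/mdstat                            # watch resync progress
```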

    • cheap drives too (Score:3, Informative)

      by pikine (771084)

      Reliant Technology sells you a NetApp FAS 6040 for $78,500 with a maximum capacity of 840 drives, without the hard drives (source: Google Shopping). If you buy a FAS 6040 with the drives, most vendors will use more expensive, lower-capacity 15k rpm drives instead of the 7200 rpm drives the Backblaze pod uses, and this makes up a lot of the price difference. The point is, you could buy NetApp and install it yourself with cheap off-the-shelf consumer drives and end up spending about the same magnitude amount of

    • Re:Ripoff (Score:5, Interesting)

      by timeOday (582209) on Wednesday September 02, 2009 @10:42AM (#29285427)
      Depends on how it works. Hopefully (or ideally) it's more like the google approach - build it to maintain data redundancy, initially with X% overcapacity. As disks fail, what do you do then? Nothing. When it gets down to 80% or so of original capacity (or however much redundancy you designed in), you chuck it and buy a new one. By then the tech is outdated anyways.
    • Re: (Score:3, Informative)

      by ciroknight (601098)
      Since most modern commercial-grade HDs come with a 3-5 year or better warranty these days [1] [wdc.com], it's easier just to cash those in when the drives go bad and build a new box around the newer-model drives they ship you in return.

      This is truly RAID, as Google, etc. have realized and developed. When the drives die, you don't cry over having the exact same drive stocked. You don't cry at all. At $8k a machine, you could actually afford to flat-out replace the entire box every 4 years and not affect your bottom
  • Cool. (Score:2, Interesting)

    by SatanicPuppy (611928) *

    Nominally a Slashvertisement, but the detailed specs for their "pods" (watch out guys, Apple's gonna SUE YOU) are pretty damn cool. 45 drives on two consumer-grade power supplies gives me the heebie-jeebies though (powering up in stages sounds like it would take a lot of manual cycling if you were rebooting a whole rack, for instance), and I'd be interested to know why they chose JFS (a perfectly valid choice) over some other alternative... There are plenty of petabyte-capable filesystems out there.

    Very intere

    • by XorNand (517466)
      It's not all that interesting, IMHO. If you read the description, all network I/O is done using HTTPS. The comparison to Amazon's S3 is fair, but it's ridiculous to compare this to NetApp or any of the other SANs they have listed; no iSCSI, no fiber channel.
      • 67 terabytes for under 8000 dollars isn't interesting? Ooookay...

        I don't give a damn about iSCSI; this isn't a database server, it's just a flat data file server...Most datacenters are limited by their network bandwidth anyway, not their internal bandwidth, and https isn't any worse than sftp. Paying Amazon a thousand times more, and I'd still be limited by MY bandwidth, not their internal bandwidth.

        If they can deliver more storage for less price, then more power to 'em.

  • by grub (11606) * <slashdot@grub.net> on Wednesday September 02, 2009 @10:17AM (#29285057) Homepage Journal

    AHhh, this is why the EMC guy committed suicide. It wasn't because he was dying of cancer.
  • by elrous0 (869638) * on Wednesday September 02, 2009 @10:20AM (#29285081)

    Soon I shall have a single media server with every episode of "General Hospital" ever made stored at a high bitrate. WHO'S LAUGHING NOW, ALL YOU WHO DOUBTED ME!!!!

    And how big is a petabyte you ask? There have been about 12,000 episodes of General Hospital aired since 1963. If you encoded 45 minute episodes at DVD quality mpeg2 bitrate, you could fit over 550,000 episodes of America's finest television show on a 1 petabyte server, enough to archive every episode of this remarkable show from its auspicious debut in 1963 until the year 4078.
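Quick back-of-the-envelope check of that archive math in Python (the ~1.8 GB per 45-minute DVD-quality MPEG-2 episode is my assumption; the comment never states a figure):

```python
# Sanity check of the General Hospital archive arithmetic above.
# Assumed (not stated in the comment): ~1.8 GB per 45-minute episode
# at DVD-quality MPEG-2 bitrate.
GB_PER_EPISODE = 1.8
EPISODES_AIRED = 12_000          # since 1963, per the comment
YEARS_AIRED = 2009 - 1963

episodes_per_pb = int(1e6 / GB_PER_EPISODE)        # ~555,000 episodes
episodes_per_year = EPISODES_AIRED / YEARS_AIRED   # ~260 episodes/year
archive_until = 1963 + episodes_per_pb / episodes_per_year

print(f"{episodes_per_pb} episodes fit in 1 PB; "
      f"enough to archive until ~{archive_until:.0f}")
```

Which lands in the same ballpark as the year 4078 quoted above, so the math checks out.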

    • by ShadowRangerRIT (1301549) on Wednesday September 02, 2009 @10:27AM (#29285175)
      But what about storing the new episodes in HD? Clearly a masterpiece of TV such as this should not be stored at mere SD quality!
    • by RMH101 (636144) on Wednesday September 02, 2009 @10:29AM (#29285213)
      I think we have a new metric unit of storage, to rival the (now deprecated) Library Of Congress SI unit.
    • I wouldn't watch Genital Hospital with a gun to my head! Give me All My Children, or give me Death!

      Well, maybe Tea and Cake instead of Death, but you get the idea.
      • by elrous0 (869638) *
        I think you need to show more respect for a show that gave both Rick Springfield and John Stamos their acting debuts. These episodes also have incredible historic value. Years from now, when historians are needing footage of Demi Moore before plastic surgery, you'll thank me!
    • by ari_j (90255) on Wednesday September 02, 2009 @10:33AM (#29285263)

      Soon I shall have a single media server with every episode of "General Hospital" ever made stored at a high bitrate. WHO'S LAUGHING NOW, ALL YOU WHO DOUBTED ME!!!!

      And how big is a petabyte you ask? There have been about 12,000 episodes of General Hospital aired since 1963. If you encoded 45 minute episodes at DVD quality mpeg2 bitrate, you could fit over 550,000 episodes of America's finest television show on a 1 petabyte server, enough to archive every episode of this remarkable show from its auspicious debut in 1963 until the year 4078.

      Of all the computer systems out there, yours is the one for which becoming self-aware terrifies me the most.

    • by WMD_88 (843388)
      General Hospital was only 30 minutes originally; it didn't become 60 until the late 70s. And even then, the number of commercials per hour has surely changed over time. So, your estimate is quite off. I prefer One Life to Live anyway ;D
    • I'm holding out for the porn version, Genital Horse Spittle.

      Great donkey scenes.

    • Re: (Score:3, Interesting)

      by MartinSchou (1360093)

      You raise an "interesting" train of thought in my mind.

      Encoding in 720p x264 you get something like 45 minutes in 1.1 GB. This gives you 60,900 episodes per 4U unit or 609,000 episodes per 40U rack.

      In 1080p x264 you get something like 45 minutes in about 2.5 GB. This is 27,000 episodes per 4U unit or 270,000 episodes per 40U rack.
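Those figures are easy to verify (assuming 67 TB usable per 4U pod and ten 4U pods per 40U rack, with decimal TB/GB):

```python
# Back-of-the-envelope check of the episodes-per-pod figures above.
# Assumed: 1.1 GB per 720p episode, 2.5 GB per 1080p episode,
# 67 TB (decimal) usable per 4U pod, 40U rack = ten 4U pods.
POD_GB = 67 * 1000
PODS_PER_RACK = 40 // 4

for label, size_gb in [("720p", 1.1), ("1080p", 2.5)]:
    per_pod = int(POD_GB / size_gb)
    per_rack = per_pod * PODS_PER_RACK
    print(f"{label}: {per_pod} episodes/pod, {per_rack} episodes/rack")
```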

      Assuming 22 episodes per season and a five year average run time, you end up with 220 episodes per show (typical science fiction shows).
      Assuming 5 shows per week, 40 weeks a year,

  • Disk replacement? (Score:4, Insightful)

    by jonpublic (676412) on Wednesday September 02, 2009 @10:20AM (#29285089)

    How do you replace disks in the chassis? We've got 1,000 spinning disks and we've got a few failures a month. With 45 disks in each unit you are going to have to replace a few consumer grade drives.

    • Re: (Score:2, Informative)

      by markringen (1501853)
      slide it out on a rail, and drop in a new one. and there is no such thing as consumer grade anymore; consumer drives are often just as stable as server-specific drives these days.
    • yeah, the lack of ANY kind of hot swap on those chassis is laughable.

      totally the wrong way to go. this guy is hell bent on density, but he let that override common sense!

    • by LordKazan (558383)

      be like google - hardware redundancy and software handling the failover.

      take down the node with a bad drive, swap the drive, rebuild that pod's RAID (preferably i would RAID6 them, as it has better error recovery than RAID5 at the expense of usable capacity being [drive size]*[number of drives - 2] instead of RAID5's [drive size]*[number of drives - 1]). when it comes back up it syncs to its other copy.

      i would also get LARGE write cache drives and any databases would be running with LARGE ram buffers for perf
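The RAID5/RAID6 capacity trade-off mentioned above works out like this (a sketch that ignores filesystem overhead and hot spares; the single-array layout is my simplification, not necessarily how the pods are actually partitioned):

```python
# Usable capacity for the RAID levels discussed above.
def usable_tb(drive_tb, n_drives, parity_drives):
    """RAID5 sacrifices 1 drive's worth of capacity to parity; RAID6, 2."""
    return drive_tb * (n_drives - parity_drives)

# 45 x 1.5 TB drives, as in the Backblaze pod:
raid5 = usable_tb(1.5, 45, 1)   # 66.0 TB usable
raid6 = usable_tb(1.5, 45, 2)   # 64.5 TB usable
print(f"RAID5: {raid5} TB usable, RAID6: {raid6} TB usable")
```

So going to RAID6 costs only one more drive's worth of capacity in exchange for surviving a second failure during a rebuild.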

    • Re: (Score:3, Interesting)

      by TooMuchToDo (882796)
      What kind of drives are you using? We've got 4800+ spinning drives, and we only have 1-2 failures a month.
  • wtf? (Score:5, Insightful)

    by pak9rabid (1011935) on Wednesday September 02, 2009 @10:23AM (#29285123)
    FTA...

    But when we priced various off-the-shelf solutions, the cost was 10 times as much (or more) than the raw hard drives.

    Um..and what do you plan on running these disks with? HDs don't magically store and retrieve data on their own. The HDs are cheap compared to the other parts that make up a storage system. That's like saying a Ferrari is a ripoff because you can buy an engine for $3,000.

    • RTFA. That $117,000 figure includes the whole rack, not just the raw HDs (which come to $81,000 according to their chart). They priced out everything in what they refer to as a "storage pod" in detail, so you can see for yourself. My primary concern is that the boot disk (priced separately) doesn't appear to have a drop-in backup. If one of the 45 storage HDs goes down, you can replace it (presumably it supports hot swapping), but if the boot drive goes you've got downtime.
      • by corsec67 (627446)

        Looking at the case, where they have a vibration reducing layer of foam under the lid screwed down onto the drives, and with the pods stacked in the frame like they are, you have to pull a whole unit out anyways to replace a drive.

        So, no hot-swap of anything anyways. PSUs fail pretty commonly in my experience, and not only do they not have redundant PSUs, they have 2 non-redundant power supplies. (RAID 0 for PSUs..... what happens when the 12V rail gets a huge surge that fries the boards on all of the drive

    • Re: (Score:3, Interesting)

      by Rich0 (548339)

      Yup.

      You can do even better than the price quoted in this article. On Newegg I found a 1TB drive for $95 - that is only $95k/PB. What a bargain!

      Except that I don't have a PB of space with my solution. I have 0.001PB of space. If I want 1PB of space then I need hundreds of drives, and some kind of system capable of talking to hundreds of drives and binding them into some kind of a useful array.

      This sounds like criticizing the space shuttle as being wasteful as you can cover the same distance in a truck fo

  • by TheGratefulNet (143330) on Wednesday September 02, 2009 @10:32AM (#29285253)

    where's the extensive stuff that sun (I work at sun, btw; related to storage) and others have for management? voltages, fan-flow, temperature points at various places inside the chassis, an 'ok to remove' led and button for the drives, redundant power supplies that hot-swap and drives that truly hot-swap (including presence sensors in drive bays). none of that is here. and these days, sas is the preferred drive tech for mission critical apps. very few customers use sata for anything 'real' (it seems, even though I personally like sata).

    this is not enterprise quality no matter what this guy says.

    there's a reason you pay a lot more for enterprise vendor solutions.

    personally, I have a linux box at home running jfs and raid5 with hotswap drive trays. but I don't fool myself into thinking its BETTER than sun, hp, ibm and so on.

    • by N1ck0 (803359) on Wednesday September 02, 2009 @10:58AM (#29285673)
      It's better at what they need it for. Based on the services and software they describe on their site, it looks like they store data in the classic redundant chunks distributed over multiple 'disposable' storage systems. In this situation most of the added redundancy that vendors put in their products doesn't add much value to their storage application. Thus having racks and racks of basic RAIDs on cheap disks and paying a few on-site monkeys to replace parts is more cost-effective than going to a more stable/tested enterprise storage vendor.
    • by SatanicPuppy (611928) * <Satanicpuppy@g m a i l .com> on Wednesday September 02, 2009 @11:04AM (#29285769) Journal

      This sort of attitude is how Sun got its lunch eaten in the market in the first place.

      Yes, your hardware rocks. It's so fucking sexy I need new pants when I come into contact with it.

      It also costs more than a fucking italian sports car.

      Turns out that if your awesome hardware is 10 times better than commodity hardware, but also 25 times as expensive, people are just going to buy more commodity hardware.

      I've got some Sun data appliances and I've got some Dell data appliances, and the only difference I've seen between them is purely one of cost. The only thing that ever breaks is drives.

      • Re: (Score:3, Funny)

        by BobMcD (601576)

        And speaking of sexy, sports cars, and Sun, there is one huge factor that sets apart the purchase decisions -

        Sun has nothing on Ferrari for getting you laid.

    • by swillden (191260) <shawn-ds@willden.org> on Wednesday September 02, 2009 @11:12AM (#29285875) Homepage Journal

      personally, I have a linux box at home running jfs and raid5 with hotswap drive trays. but I don't fool myself into thinking its BETTER than sun, hp, ibm and so on.

      I don't think these folks believe their solution is better -- just cheaper. MUCH cheaper. So much cheaper that you can employ a team of people to maintain the "homebrew" solution and still save money.

  • Since you can now get 2TB drives you should be able to fit 90TB in one of these boxes :)

    And I thought I was doing well with a few terabytes in my home server (but hey, ZFS should save me from silent data corruption when the drives inevitably start to fail).

  • If you build a petabyte stack using 1.5TB disks you need about 800 drives including RAID overhead. With an MTBF for consumer drives of 500,000 hours, a drive will fail roughly every 26 days (and field failure rates typically run worse than datasheet MTBF), even if your design is good and you create no hotspots/vibration issues.

    Rebuild times on large RAID sets are such that it is only a matter of time before they run a double drive failure and lose their customers data. The money they saved by going cheap will be spent on lawyers when they get the liability claims in.
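The failure-interval arithmetic above is simple to sketch, naively treating MTBF as an exponential failure rate across independent drives:

```python
# Expected time between drive failures in a large population,
# assuming independent failures and the quoted datasheet MTBF.
# Note: observed field failure rates for consumer drives tend to be
# worse than what datasheet MTBF implies.
MTBF_HOURS = 500_000
N_DRIVES = 800

hours_between_failures = MTBF_HOURS / N_DRIVES        # 625 hours
days_between_failures = hours_between_failures / 24   # ~26 days
print(f"Expect one drive failure roughly every "
      f"{days_between_failures:.0f} days")
```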

  • Though I don't run a datacenter, I do rely heavily on one. My co-manager is in charge of keeping my 80 TB of data online 24/7 using redundant HP StorageWorks 8000 EVA units. [hp.com]

    These cost a bit and have drives which fail fairly infrequently. It doesn't hurt that the data center is kept at 64 degrees by two (redundant) chillers and has 450 KVa redundant power conditioners keeping the electricity on at all times. (We do shut off the power to the building once a month to check these and the diesel generat

  • by fake_name (245088) on Wednesday September 02, 2009 @10:50AM (#29285559)

    If an article went up describing how a major vendor released a petabyte array for $2M the comments would full of people saying "I could make an array with that much storage far cheaper!"

    Now someone has gone and done exactly that (they even used linuxto do it) and suddenly everyone complains that it lacks support from a major vendor.

    This may not be perfect for everyone's needs, but it's nice to see this sort of innovation taking place instead of blindly following the same path everyone else takes for storage.

  • by xrayspx (13127) on Wednesday September 02, 2009 @10:53AM (#29285583) Homepage
    These guys build their own hardware, think it might be able to be improved on or help the community, and they release the specs, for free, on the Internet. They then get jumped on by people saying "bbbb-but support!". They're not pretending to offer support, if you want support, pay the 2MM for EMC, if you can handle your own support in-house, maybe you can get away with building these out.

    It's like looking at KDE and saying "But we pay Apple and Microsoft so we get support" (even though, no you don't). The company is just releasing specs, if it fits in your environment, great, if not, bummer. If you can make improvements and send them back up-stream, everyone wins. Just like software.

    I seem to recall similar threads whenever anyone mentions open routers from the Cisco folks.
  • by pedantic bore (740196) on Wednesday September 02, 2009 @02:23PM (#29288913)
    Forgive me; I've committed the sin of working for one of those name-brand storage companies.

    The real value in a data storage system isn't in the hardware, it's in the data. And the real cost incurred in a data storage system is measured in the inability of the customer to access that data quickly, efficiently and (in the case of a disaster) at all.

    If you need to crunch the data quickly, a higher-performing system is going to save you money in the end. Look at all the benchmarks: no home-grown systems are anywhere on the lists. If you want to stream through your data at several gigabytes per second, you need to pay for a fast interconnect. Putting 45 drives behind a single 1GbE just doesn't cut it.
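The interconnect mismatch called out above is easy to quantify (the per-drive and link throughput numbers here are rough assumptions of mine, not figures from the article):

```python
# How badly 45 SATA drives oversubscribe a single 1GbE uplink.
# Assumed: ~100 MB/s sequential throughput per drive,
# 1 Gbit/s ~= 125 MB/s theoretical line rate.
DRIVES = 45
MB_S_PER_DRIVE = 100
GIGE_MB_S = 125

aggregate = DRIVES * MB_S_PER_DRIVE   # 4500 MB/s of raw disk bandwidth
print(f"Disks can stream ~{aggregate} MB/s; a 1GbE uplink caps at "
      f"~{GIGE_MB_S} MB/s ({aggregate // GIGE_MB_S}x oversubscribed)")
```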

    Similarly, if you want to ensure that the data is protected (integrity, immutable storage for folks who need to preserve data and be certain it hasn't been tampered with, etc) and stored efficiently (single instance store, or dedupe, so you don't fill your petabytes of disks with a bajillion copies of the same photos of Anna Kournikova) then you need to pay for the extra goodness in that software and hardware as well.

    Finally, if you want extremely high availability, then the cost of the hardware is miniscule compared to the cost of downtime. We had customers that would lose millions of dollars per service interruption. They're willing to pay a million dollars to eliminate or even reduce downtime.

    These folks are essentially just building a box that makes a bunch of disks behave like a honking big tape drive. It's a viable business--that's all some folks need. But EMC et al are not going to lose any sleep over this.
