Build Your Own $2.8M Petabyte Disk Array For $117k
Chris Pirazzi writes "Online backup startup BackBlaze, disgusted with the outrageously overpriced offerings from EMC, NetApp and the like, has released an open-source hardware design showing you how to build a 4U, RAID-capable, rack-mounted, Linux-based server using commodity parts that contains 67 terabytes of storage at a material cost of $7,867. This works out to roughly $117,000 per petabyte, which would cost you around $2.8 million from Amazon or EMC. They have a full parts list and diagrams showing how they put everything together. Their blog states: 'Our hope is that by sharing, others can benefit and, ultimately, refine this concept and send improvements back to us.'"
Not ZFS? (Score:2, Insightful)
Good luck with all the silent data corruption. Shoulda used ZFS.
Yeah, but with Amazon you get FREE SHIPPING !! (Score:2, Insightful)
I love free shipping, even if it costs me more !! I like FREE STUFF !!
Re: (Score:3, Insightful)
Are you saying that with the more expensive system, disks never fail and nobody ever has to get up in the night?
Re: (Score:3, Interesting)
What do you mean by more expensive? OpenSolaris [opensolaris.org] with ZFS costs the same as Linux. And yes, you'll have to get up a lot less often in the middle of the night, since a few bad sectors aren't going to force a fail of the entire disk.
Re:Not ZFS? (Score:5, Interesting)
Are you saying that with the more expensive system, disks never fail and nobody ever has to get up in the night?
Well... yes and no. When you've worked with high-end arrays, you learn that storage is only the beginning. NetApp and EMC provide far, far more. I was damned impressed when I first heard a presentation from NetApp about their technology, but the day that they called me up and told me that the replacement disk was in the mail and I answered, "I had a failure?" ... that was the day that I understood what data reliability was all about.
Since that time (over 10 years ago), the state of the art has improved over and over again. If you're buying a petabyte of storage, it's because you have a need that breaks most basic storage models, and the average sysadmin who thinks that storage is cheap is going to go through a lot of pain learning that he's wrong.
Someday, you'll have a petabyte disk in a 3.5" form-factor. At that point, you can treat it as a commodity. Until then, administering that much storage places demands on you that call for a very different class of device than a Linux box with a bunch of RAID cards.
As evidence of that, I submit that dozens of companies like the one in this article have existed over the years, and only a handful of them still exist. Those that still do have either exited the storage array business, or have evolved their offerings into something that costs a lot more to build and support than a pile of disks.
Re:Not ZFS? (Score:4, Insightful)
As evidence of that, I submit that dozens of companies like the one in this article have existed over the years, and only a handful of them still exist. Those that still do have either exited the storage array business, or have evolved their offerings into something that costs a lot more to build and support than a pile of disks.
Or they have been bought by one of the bigger storage companies.
Re: (Score:3, Interesting)
On a similar note, they claim that they will back up any one computer for $5/month. Well, my one computer happens to be the backup node for my SAN, so they're going to need about 15 TB (it's a small SAN) to have 30-day backups for me. Please note that all of the files on my SAN are under 4GB, and I have a SAN, not a NAS, so my servers see it as a native hard drive.
Re: (Score:3, Informative)
So you are saying that they're happy to get their return on investment on their hardware alone in 44 years? I doubt it.
Re: (Score:3, Interesting)
You need to look at the grand scheme of things. Sure, you may get 5-10% of customers using massive amounts of data (over 500GB), but when 90-95% of your customers are home users and small businesses who don't have their own data centers, and may only have a 50MB backup, their lack of use offsets the heavy users.
Imagine if in a 1PB server, 750TB of data was used by 10,000 individuals paying $5/mth and the other 250TB was used by 50 individuals paying $5/mth. I failed at mathematics at school, but I'm sur
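[For reference, the hypothetical split above can be finished in a few lines. The subscriber numbers are the commenter's made-up example; the $117,000-per-petabyte figure is the article's hardware-only build cost (power, bandwidth, and staff excluded).]

```python
# Finishing the arithmetic above: 10,050 subscribers at $5/month against
# the article's ~$117,000-per-petabyte hardware cost. All subscriber
# figures are the hypothetical split from the comment, not real data.
subscribers = 10_000 + 50
monthly_revenue = subscribers * 5           # $50,250/month for the petabyte
pod_cost_per_pb = 117_000                   # hardware only, per the article
months_to_recoup = pod_cost_per_pb / monthly_revenue
print(monthly_revenue, round(months_to_recoup, 1))  # 50250 2.3
```

On those (invented) numbers the hardware pays for itself in a few months, which is the offset the commenter is gesturing at.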
Re:Not ZFS? (Score:4, Interesting)
They're betting on the MTTF of the drives, on RAID, and on redundant system backups.
Yes, it's cheap hardware. Yes, cheap hardware fails more often than expensive hardware. Yes, cheap hardware is slower than expensive hardware. But you have to look at the offsets: they are building a backup service, where they don't need "instant" data access speeds. As for drive failures, I have some experience there. I have 57,000 cheap-ass consumer drives in service, and over 10,000 of them are 11 years old. They're dying at the rate of about ten failures per day. The key is to build your processes to tolerate and handle failures.
As long as your redundant systems are keeping copies of the data, and you understand exactly what the impact is of a failed component as well as have a recovery plan in place, why not use cheap hardware? Let's do a bit of math. The guy had a photo of himself standing behind about 18 of these boxes. That's 810 drives. If we lowball cheap drives at 300,000 hours MTBF, he'll see an average of two failures per month. It might take him $200 and an hour to recover each failed drive. We could keep doing the math on each component, but I suspect this is still a complete and total bargain that will meet his business needs very well.
It may not be as shiny as EMC or NetApp, and you have to do the legwork yourself, but why spend the extra money on a system that would provide him with "too much service"? From an ROI perspective, this guy is probably going to do very well, even though he may drive a few sysadmins crazy in the process.
are you a project manager by any chance? (Score:5, Insightful)
I like how you dismiss a detailed real world design example based simply on a claimed feature without any further substantiation. Very classy. I'm not saying you are wrong, but would it kill you to go into a little more detail about why these folks need "luck" when they are clearly very successful with their existing design?
Re:are you a project manager by any chance? (Score:5, Informative)
are you a project manager by any chance?
Of course not. A project manager would look at this and go, "wow, we saved a lot of money!" It's pretty simple. ZFS does what most other filesystems do not: it guarantees data integrity at the block level by the use of checksums. When you're dealing with this many spindles and dense, non-enterprise drives, you are virtually guaranteed to get silent corruption. The article does not mention the words corrupt.*, checksum, or integrity even once. The server doesn't use ECC RAM. The project, while well intentioned, should scare the crap out of anyone thinking about storing data with this company.
Re:are you a project manager by any chance? (Score:4, Insightful)
What failure rate are you using to "virtually guarantee" that you'll get data corruption with 45 drives?
What failure rate in your RAM, CPU, and motherboard are you using to guarantee that the ZFS checksums are not themselves corrupted? Not to mention the high possibility of bugs in a younger file system, and the different performance characteristics among FSes.
I'm not saying ZFS is a bad plan, at least if you're running enough spindles, but if you're going to "virtually guarantee" silent corruption with fewer than 100 drives, I'd like to see some documentation for the non-detectable failure rates you're expecting.
It's also worth noting that in a lot of data, a small amount of bit-flips might not be worth protecting against at all. Or they might be better protected at the application level instead of the block level -- for example, if the data will be transmitted to another system before it is consumed, as would be typical for a disk-host like this, a single checksum of the entire file (think md5sum) could be computed at the end-use system, rather than computing a per-block checksum at the disk host and then just assuming the file makes it across the network and through the other system's I/O stack without error.
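[A minimal sketch of the application-level check described above: one digest per file, recomputed at the system that finally consumes the data, so it covers the network hop and the receiving I/O stack as well as the disks. SHA-256 stands in for the md5sum example; the chunked read is just so large files don't have to fit in memory.]

```python
import hashlib

def file_digest(path, chunk_size=1 << 20):
    """Whole-file checksum, intended to be computed once when the file is
    stored and recomputed at the consuming end; a mismatch means a bit
    flipped somewhere along the entire path, not just on one disk."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```

The trade-off versus per-block checksums is exactly the one the parent names: you learn *that* the file is bad end-to-end, but not *which* block, and only at read time.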
*sigh* (Score:5, Insightful)
How about reading the section "A Backblaze Storage Pod is a Building Block".
<snip> the intelligence of where to store data and how to encrypt it, deduplicate it, and index it is all at a higher level (outside the scope of this blog post). When you run a datacenter with thousands of hard drives, CPUs, motherboards, and power supplies, you are going to have hardware failures — it's irrefutable. Backblaze Storage Pods are building blocks upon which a larger system can be organized that doesn't allow for a single point of failure. Each pod in itself is just a big chunk of raw storage for an inexpensive price; it is not a "solution" in itself.
Emphasis mine. I believe there are quite a few successful and reliable storage vendors not using ZFS. We get the point, you like it. Doesn't mean you can't succeed without it. Be more open minded.
Re:Not ZFS? (Score:5, Interesting)
Get both Debian and ZFS: Nexenta. Links in my sig.
Re:Not ZFS? (Score:5, Insightful)
And I think I would use dual micro ATX motherboards, perhaps in their own cases, to make them replaceable in case of failure.
I realize that the layout of the drives was done with an eye toward airflow, but I personally don't like to see drives set on their edges. It's probably a personal bias, but I like to see drives set flat. The bearings seem to last longer that way. Just my personal experience.
And, one final point, storage density is reaching the point where we can jam a lot of storage into a small space. Perhaps we have reached the point where we can start to spread things out and do things like put the drives in a separate enclosure or multiple enclosures. It makes designing, installing, and servicing easier. Use eSATA ports on the SATA cards to make external storage easier.
You know why Amazon charges that much? (Score:5, Insightful)
Support.
Re:You know why Amazon charges that much? (Score:5, Funny)
Damn. I was going to offer support for half of that price until I saw this new requirement...
Re: (Score:3, Funny)
For 2.683M, you can probably afford to outsource that part.
Re:You know why Amazon charges that much? (Score:5, Insightful)
Backup: depends on the backup strategy. I could make this happen for less than an additional 10%. But ok, point taken.
Redundancy: You mean as in plain redundancy? These are RAID arrays, are they not? You want redundancy at the server level? Now you're increasing the scope of the project, which the article doesn't address. (Scope error)
Hosting: Again, the point of the article was the hardware. That's a little like accounting for the cost of a trip to your grandmother's, and factoring in the cost of your grandmother's house. A little out of scope.
Cooling: I could probably get the whole project chilled for less than 6% of the total cost, depending on how cool you want the rig to run.
I think you're looking for a wrench in the works where none exists.
Re:You know why Amazon charges that much? (Score:5, Insightful)
Redundancy can be had for another $117,000.
Hosting in a DC will not even be a blip in the difference between that and $2.7m.
EMC, Amazon etc are a ripoff and I have no idea why there are so many apologists here.
Re:You know why Amazon charges that much? (Score:5, Interesting)
First these aren't even storage arrays in the same sense that EMC, Hitachi, NetApp, Sun, etc. provide. The only protocol you can use to access your data is https? WTF! Second the Hitachi array in my data center doesn't put 67 TB storage behind half a dozen single points of failure the way this thing does. Third the Hitachi array in my data center doesn't put 67 TB behind a dinky gigabit ethernet link. My Hitachi will provide me with 200,000 IOPS with 5 ms latency. I can hook a whole slew of hosts up to my SAN. I can take off-host, change-only copies of my data so backups don't bog down my production work. I can establish replication between the Hitachi here in this building and the second array four hundred miles away with write order fidelity and guaranteed RPOs.
Comparing this thing to enterprise class storage is like some sixteen year old adding a cold air intake and a coat of red paint to his Honda civic then running around bragging that his car is somehow comparable to a Ferrari ("look they're both red!") Every time I see something like this the only thing I learn is that yet another person doesn't actually "Get It" when it comes to storage.
HelloWorld.c is to the Linux kernel as this thing is to the Hitachi USP-V or EMC Symmetrix.
Re: (Score:3, Informative)
My Hitachi will provide me with 200,000 IOPS with 5 ms latency.
While that is just a TAD overkill for disk backup, these guys' $0.11/GB is not something I'd trust my backups on.
HelloWorld.c is to the Linux kernel as this thing is to the Hitachi USP-V or EMC Symmetrix.
You nailed it.
Service Time/IOPS is less important here than trustworthy and proven controller hardware & software, and built in goodies like replication. That's why I would trust disk backups to Sun, NetApp, Hitachi, EMC, and not these people. Possibly home systems I guess, but bragging about homemade storage is a real turnoff.
Re: (Score:3, Informative)
"Redundancy can be had for another $117,000." ...plus the inter SAN connectivity ...plus the SAN Fabric aware write plitting hardware and licensing ...plus the redundancy aware server connected to that SAN fabric ...plus the multipath HBA licensing for the servers ...plus multiple redundant HBAs per server and twice as many SAN fabric switches ...plus journaling and rollback storage, and block level deduplication within it (having a real-time copy is useless if you get infected with a virus). ...plus anothe
Re:You know why Amazon charges that much? (Score:5, Insightful)
That 2.683M also pays for salaries, pretty building(s), advertising, research, conventions, and more advertising.
I could hire a couple of dedicated staff to have 24x7 support for far less than 2.683M, plus a duplicate system worth of spare parts.
This stuff isn't rocket science. Most companies don't need high-speed, fiber-optic disk array subsystems for a significant amount of their data, only for a small subset that needs blindingly fast speed. The rest can sit on cheap arrays. For example, all of my network accessible files that I open very rarely but keep on the network because it gets backed up. All of my 5 copies of database backups and logs that I keep because it's faster to pull it off of disk than request a tape from offsite. And it's faster to backup to disk, then to tape.
BackBlaze is a good example of someone who needs a ton of storage, but not lightning-fast access. Having a reliable system is more important to them than one that has all the tricks and trappings of an EMC array that probably 10% of all EMC users actually use, but they all pay for.
A Very Shortsighted Article (Score:3, Insightful)
Before realizing that we had to solve this storage problem ourselves, we considered Amazon S3, Dell or Sun Servers, NetApp Filers, EMC SAN, etc. As we investigated these traditional off-the-shelf solutions, we became increasingly disillusioned by the expense. When you strip away the marketing terms and fancy logos from any storage solution, data ends up on a hard drive.
That's odd; where I work we pay a premium for what happens when the power goes out, what happens when a drive goes bad, what happens when maintenance needs to be performed, what happens when the infrastructure needs upgrades, etc. This article left out a lot of buzzwords, but it also left out the people who manage these massive beasts. I mean, how many hundreds (or thousands) of drives are we talking here?
You might as well add a few hundred thousand a year for the people who need to maintain this hardware and also someone to get up in the middle of the night when their pager goes off because something just went wrong and you want 24/7 storage time.
We don't pay premiums because we're stupid. We pay premiums so we can relax and concentrate on what we need to concentrate on.
Re:A Very Shortsighted Article (Score:5, Informative)
The focus of the article was only on the hardware, which was extremely low cost to the point of allowing massive redundancy...This is not an inherently flawed methodology.
If you can deploy cheap 67 terabyte nodes, then you can treat each node like an individual drive, and swap them out accordingly.
I'd need some actual uptime data to make a real judgment on their service vs their competitors, but I don't see any inherent flaws in building their own servers.
Re: (Score:3, Informative)
I'd need some actual uptime data to make a real judgment on their service vs their competitors,
I did an extensive interview with the Backblaze CEO. No hard data on uptime, but he says they lose one drive a week from the whole 1.5-petabyte system and have never had a pod fail. They've been running for a year. Here's the link to the story, which also has comments about the designing/testing process. http://www.crn.com.au/News/154760,want-a-petabyte-for-under-us120000.aspx [crn.com.au]
Re:A Very Shortsighted Article (Score:4, Insightful)
Why would you bother? Just start off by writing the data to three nodes, and then you can swap new ones in and out silently. If your space really is cheap, then that's not a problem.
Re:A Very Shortsighted Article (Score:5, Informative)
We don't pay premiums because we're stupid. We pay premiums so we can relax and concentrate on what we need to concentrate on.
They actually do talk about that in the article. The difference in cost for one of the homegrown petabyte pods from the cheapest suppliers (Dell) is about $700,000. The difference between their pods and cloud services is over $2.7 million per petabyte. And they have many, many petabytes. Even if you do add "a few hundred thousand a year for the people who need to maintain this hardware" - and Dell isn't going to come down in the middle of the night when your power goes out - they are still way, way on top.
I know you don't pay premiums because you're stupid. But think about how much those premiums are actually costing you, what you are getting in return, and if it is worth it.
Re: (Score:2)
My question is, where does one acquire the case he uses? My company currently stores a lot of video and the 10TB 4U machines I have been building are quickly running out of space. This would be an ideal solution for my needs.
Re:A Very Shortsighted Article (Score:4, Informative)
From the credits list: "Protocase for putting up with hundreds of small 3-D case design tweaks", which I assume is http://www.protocase.com/ [protocase.com].
Re:A Very Shortsighted Article (Score:5, Informative)
We don't pay premiums because we're stupid. We pay premiums because we're lazy.
There, fixed that for you ;).
Ok, that was glib, but you do seem to have been too lazy to read the article, so perhaps you deserve it. To quote TFA, "Even including the surrounding costs—such as electricity, bandwidth, space rental, and IT administrators' salaries—Backblaze spends one-tenth of the price in comparison to using Amazon S3, Dell Servers, NetApp Filers, or an EMC SAN." So they aren't ignoring the costs of IT staff administering this stuff as you imply; they're telling you the costs including the admin costs at their datacentre.
Not that shortsighted for their purposes (Score:5, Insightful)
Yeah, this only works if you're the geeks building the hardware to begin with. The real cost is in setup and maintenance. Plus, if the shit hits the fan, the CxO is going to want to find some big butts to kick. 67TB of data is a lot to lose (though it's only about 35 disks at max cap these days).
These guys, however, happen to be the geeks, the maintainers, and the people-whose-butts-get-kicked-anyway. This is not a project for a one- or two-man IT group that has to build a storage array for their 100-200 person firm. These guys are storage professionals with the hardware and software know-how to pull it off. Kudos to them for making it and sharing their project. It's a nice, compact system. It's a little bit of a shame that there isn't OTS software, but at this level you're going to be doing grunt work on it with experts anyway.
FWIW, Lime Technology (lime-technology.com) will sell you a case, drive trays, and software for a quasi-RAID system that will hold 28TB for under $1500 (not including the 15 2TB drives - another $3k on the open market). This is only one-fault tolerant, though failure is more graceful than in a traditional RAID. I don't know if they've implemented hot spares or automatic failover yet (which would put them up to two-fault tolerant on the drives, like RAID6).
Re: (Score:3, Interesting)
At 67T per chassis and 45 drives documented per chassis, they're using 1.5T drives. 1 petabyte would then be 667 drives.
The worst part of this design that I see (and there's a LOT of bad to see) is the lack of an easy way to get to a failed drive. When a drive fails you're going to have to pull the entire chassis offline. Google did a study in 2007 of drive failure rates (http://labs.google.com/papers/disk_failures.pdf) and found the following failure rates over drive age (ignoring manufacturer):
3mo: 3% =
Re:A Very Shortsighted Article (Score:5, Insightful)
You will more than likely NOT have to take a node offline. The design looks like they place the drives into slip down hot plug enclosures. Most rack mounted hardware is on rails, not screwed to the rack. You roll the rack out, log in, fail the drive that is bad, remove it, hot plug another drive and add it to the array. You are now done.
They went RAID 6, even though it is slow as shit, for the added failsafe mechanisms.
Re: (Score:3, Informative)
>> You might as well add a few hundred thousand a year for the people who need to maintain this hardware and also someone to get up in the middle of the night when their pager goes off because something just went wrong and you want 24/7 storage time.
>> We don't pay premiums because we're stupid. We pay premiums so we can relax and concentrate on what we need to concentrate on.
Or... you could just buy ten of them and use the leftover $1m for electricity costs and an admin that doesn't sleep
Re:A Very Shortsighted Article (Score:5, Interesting)
Having a couple decades of working both sides of the Support Divide, I am now of the opinion that the sole purpose of a Support Contract is to have someone at the other end of the phone to yell at. It makes people feel better and have a warm fuzzy. But, having had to schedule CE's to come onto site to replace failed hardware, I have generally found that that adds hours to any repair job. I would guess that you could power off this array, remove every single drive, move them to a new chassis, reformat them in NTFS, then back to JFS and still finish before a CE shows up on site. I recall that in the winter of 1994, *every* Seagate 4GB drive in our Sun boxes died.
What happens now when a drive goes bad is that a drive goes bad. You spot it through some monitoring software. You pick up the phone and call a 1-800 number. Someone asks a few questions like "What is your name? What is your quest? What is your favorite color?", then you hear typing in the background. After a bit, if you're lucky, they have you in the system correctly and can find your support contract for that box. Then, they give you a ticket number and put you on hold. Then, after a bit, an "engineering" rep will appear and say "What is the nature of the emergency?" and you then tell them the same stuff, except you get to add words like "var adm messages" or something. They'll tell you to send them some email so they can do some troubleshooting. You send them what they ask for. About an hour or so later, you get an email or call back saying that the drive has gone bad and needs to be replaced, which is pretty much the same thing you told them when you called in. They then tell you that you are on a Gold Contract with 24/7 support and that the CE has a 4-hour callback requirement from the time the call is dispatched to the CE. By this point, you are about 3-4 hours after the disk drive failed in the first place. Finally, the CE will call back after some amount of time to schedule a replacement. And here comes the real kicker... In almost every instance for the last 10 years, we have had to do all maintenance during a scheduled window. At 1AM.
What happens now when something breaks is that someone fixes it.
Any business is faced with a Buy-It-Or-Build-It dilemma for any service or equipment. Since this was their core business, it certainly makes sense. And, it makes sense for any business of a certain size or set of skills. The reality is that the math is favoring consumer electronics for most applications because they are good enough for 85% of the business needs out there. The whole Cost-Benefit analysis must be periodically re-addressed. If you do not have $1 million a year in billed repair from a Support contract, is it worth $1 million a year for the contract? Seriously.. Even if you have a support contract, you're probably going to get billed time and materials on top of everything else.
With the math on this unit, you can build in massive layers of redundancy to greatly reduce even the possibility of the data being inaccessible and still come in far, far cheaper than any support contract, and you can schedule downtime because you have redundancy across multiple chassis.
Re: (Score:3, Interesting)
I used to work at a company that paid a 20% premium on hardware for support from HP that was COMPLETELY WORTHLESS. I told them they would be better off just ordering a 6th computer for every 5 that they bought.
The guy would show up with no tools, not even a screwdriver, and then he would need to come back the next day (with a screwdriver). Then he didn't have the part (say RAM) that we told them in the first call and the day before. Then he showed up the next day with RAMBUS instead of DDR RAM. After 3
Please.... (Score:3, Interesting)
where I work we pay a premium for what happens when the power goes out, what happens when a drive goes bad,
Whoever spec'd your systems should have accommodated obvious failures like these. As in, paying for colo, using servers with dual power supplies that fail over, and a sensible RAID strategy. Giving money to EMC in this situation is not sensible.
but they also left out the people who manage these massive beasts. I mean, how many hundreds (or thousands) of drives are we talking here?
I have a couple of hundre
Ripoff (Score:5, Insightful)
Looks like a cheap downscale undersized version of a Sun X4500/X4540.
And as others have pointed out, you pay a vendor because in 4 years they will still be stocking the drives you bought today, whereas for this setup you will be praying they are still on eBay
Re: (Score:3, Insightful)
why wouldn't you just build an entirely new pod with current disks and migrate the data? You could certainly afford it.
Re: (Score:2)
why wouldn't you just build an entirely new pod with current disks and migrate the data? You could certainly afford it.
Maybe because there's no need to update and you just want to be able to replace broken drives?
Re: (Score:3, Interesting)
Fine then, replace just the broken drives, but as far as I'm aware, Linux software RAID 6 does not require the drives to be the same model, or even the same size. You can get newer drives for the same or less cost than the old drives and just plug them in. Who cares if they have more capacity? Just let it go to waste if you must, but it'll work just fine, and you certainly won't have to be scrounging drives off of eBay.
Also consider that five years down the road we may have 10TB drives or better, but 1.5 tb drive
cheap drives too (Score:3, Informative)
Reliant Technology sells the NetApp FAS 6040 for $78,500 with a maximum capacity of 840 drives, without the hard drives (source: Google Shopping). If you buy the FAS 6040 with the drives, most vendors will use more expensive, lower-capacity 15k rpm drives instead of the 7200rpm drives the Backblaze Pod uses, and this makes up a lot of the price difference. The point is, you could buy NetApp and install it yourself with cheap off-the-shelf consumer drives and end up spending about the same magnitude amount of
Re: (Score:3, Informative)
This is truly RAID, as Google, etc. have realized and developed. When the drives die, you don't cry over having the exact same drive stocked. You don't cry at all. At $8k a machine, you could actually afford to flat-out replace the entire box every 4 years and not affect your bottom
Cool. (Score:2, Interesting)
Nominally a Slashvertisement, but the detailed specs for their "pods" (watch out guys, Apple's gonna SUE YOU) are pretty damn cool. 45 drives on two consumer-grade power supplies gives me the heebie-jeebies though (powering up in stages sounds like it would take a lot of manual cycling if you were rebooting a whole rack, for instance), and I'd be interested to know why they chose JFS (a perfectly valid choice) over some other alternative... There are plenty of petabyte-capable filesystems out there.
Very intere
Re: (Score:2)
67 terabytes for under 8000 dollars isn't interesting? Ooookay...
I don't give a damn about iSCSI; this isn't a database server, it's just a flat data file server...Most datacenters are limited by their network bandwidth anyway, not their internal bandwidth, and https isn't any worse than sftp. Paying Amazon a thousand times more, and I'd still be limited by MY bandwidth, not their internal bandwidth.
If they can deliver more storage for less price, then more power to 'em.
It's all clear now. (Score:4, Funny)
AHhh, this is why the EMC guy committed suicide. It wasn't because he was dying of cancer.
My plan comes to fruition! (Score:5, Informative)
Soon I shall have a single media server with every episode of "General Hospital" ever made stored at a high bitrate. WHO'S LAUGHING NOW, ALL YOU WHO DOUBTED ME!!!!
And how big is a petabyte you ask? There have been about 12,000 episodes of General Hospital aired since 1963. If you encoded 45 minute episodes at DVD quality mpeg2 bitrate, you could fit over 550,000 episodes of America's finest television show on a 1 petabyte server, enough to archive every episode of this remarkable show from its auspicious debut in 1963 until the year 4078.
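[For the curious, the year-4078 figure above actually checks out. The per-episode size is inferred from the comment's own numbers, not stated anywhere (1 PB / 550,000 episodes ≈ 1.8 GB per 45-minute MPEG-2 episode), and the ~260 episodes/year assumes one episode per weekday.]

```python
# Sanity-checking the archive math above. gb_per_episode is derived
# from the comment's figures, not from any published bitrate.
episodes_per_pb = 550_000
gb_per_episode = 1_000_000 / episodes_per_pb        # ~1.8 GB per episode
episodes_per_year = 5 * 52                          # one weekday episode, ~260/yr
print(1963 + episodes_per_pb // episodes_per_year)  # 4078
```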
Re: (Score:2)
Well, maybe Tea and Cake instead of Death, but you get the idea.
Re:My plan comes to fruition! (Score:5, Funny)
Soon I shall have a single media server with every episode of "General Hospital" ever made stored at a high bitrate. WHO'S LAUGHING NOW, ALL YOU WHO DOUBTED ME!!!!
And how big is a petabyte you ask? There have been about 12,000 episodes of General Hospital aired since 1963. If you encoded 45 minute episodes at DVD quality mpeg2 bitrate, you could fit over 550,000 episodes of America's finest television show on a 1 petabyte server, enough to archive every episode of this remarkable show from its auspicious debut in 1963 until the year 4078.
Of all the computer systems out there, yours is the one for which becoming self-aware terrifies me the most.
Re: (Score:2, Interesting)
William Shatner has continued to be awesome well into his 70s. He even went on Conan and mocked Sarah Palin (while gently ribbing himself).
Of the personalities in Hollywood, he is one I like quite a bit.
Re: (Score:2)
Re: (Score:3, Funny)
I'm holding out for the porn version, Genital Horse Spittle.
Great donkey scenes.
Re: (Score:3, Interesting)
You raise an "interesting" train of thought in my mind.
Encoding in 720p x264 you get something like 45 minutes in 1.1 GB. This gives you 60,900 episodes per 4U unit or 609,000 episodes per 40U rack.
In 1080p x264 you get something like 45 minutes in about 2.5 GB. This is 27,000 episodes per 4U unit or 270,000 episodes per 40U rack.
Assuming 22 episodes per season and a five year average run time, you end up with 220 episodes per show (typical science fiction shows).
Assuming 5 shows per week, 40 weeks a year,
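The back-of-envelope math above can be checked with a short script. This is just a sketch; the per-episode sizes (1.1 GB for 720p, 2.5 GB for 1080p) are the poster's estimates, and TB/GB are taken as decimal units:

```python
# Episode-capacity math from the comment above.
# Assumes the 67 TB pod from the article and 10 pods per 40U rack.
POD_TB = 67
PODS_PER_RACK = 10  # 40U rack / 4U per pod

def episodes(per_episode_gb, tb):
    """Whole episodes that fit in `tb` terabytes (decimal units)."""
    return int(tb * 1000 / per_episode_gb)

print(episodes(1.1, POD_TB))                  # ~60,900 per pod at 720p
print(episodes(2.5, POD_TB))                  # ~27,000 per pod at 1080p
print(episodes(1.1, POD_TB * PODS_PER_RACK))  # ~609,000 per rack at 720p
```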
Disk replacement? (Score:4, Insightful)
How do you replace disks in the chassis? We've got 1,000 spinning disks and we've got a few failures a month. With 45 disks in each unit you are going to have to replace a few consumer grade drives.
Re: (Score:2, Informative)
Re: (Score:2)
yeah, the lack of ANY kind of hot swap on those chassis is laughable.
totally the wrong way to go. this guy is hell bent on density but he let that over-ride common sense!
Re: (Score:2)
be like google - hardware redundancy and software handling the failover.
take down the node with a bad drive, swap the drive, rebuild that pod's RAID (preferably i would RAID6 them as it has better error recovery than RAID5 at the expense of storage size being [drive size]*[number of drives - 2] instead of [drive size]*[number of drives - 1] of RAID5). when it comes back up it syncs to its other copy.
i would also get LARGE write cache drives and any databases would be running with LARGE ram buffers for perf
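The RAID5/RAID6 capacity trade-off mentioned above works out as follows. A sketch, assuming one flat array of 45 1.5 TB drives (the article's pods actually carve the 45 drives into smaller arrays, so this only illustrates the formulas):

```python
# Usable capacity: RAID5 loses one drive to parity, RAID6 loses two.
def raid5_usable(drive_tb, n_drives):
    return drive_tb * (n_drives - 1)

def raid6_usable(drive_tb, n_drives):
    return drive_tb * (n_drives - 2)

# 45 drives of 1.5 TB, as in the BackBlaze pod:
print(raid5_usable(1.5, 45))  # 66.0 TB usable
print(raid6_usable(1.5, 45))  # 64.5 TB usable -- one extra drive's
                              # worth spent on the second parity
```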
Re: (Score:3, Interesting)
wtf? (Score:5, Insightful)
But when we priced various off-the-shelf solutions, the cost was 10 times as much (or more) than the raw hard drives.
Um..and what do you plan on running these disks with? HDs don't magically store and retrieve data on their own. The HDs are cheap compared to the other parts that create a storage system. That's like saying a Ferrari is a ripoff because you can buy an engine for $3,000.
Re: (Score:2)
Re: (Score:2)
Looking at the case, where they have a vibration reducing layer of foam under the lid screwed down onto the drives, and with the pods stacked in the frame like they are, you have to pull a whole unit out anyways to replace a drive.
So, no hot-swap of anything anyways. PSUs fail pretty commonly in my experience, and not only do they not have redundant PSUs, they have 2 non-redundant power supplies. (RAID 0 for PSUs..... what happens when the 12V rail gets a huge surge that fries the boards on all of the drive
Re: (Score:3, Interesting)
Yup.
You can do even better than the price quoted in this article. On Newegg I found a 1TB drive for $95 - that is only $95k/PB. What a bargain!
Except that I don't have a PB of space with my solution. I have 0.001PB of space. If I want 1PB of space then I need hundreds of drives, and some kind of system capable of talking to hundreds of drives and binding them into some kind of a useful array.
This sounds like criticizing the space shuttle as being wasteful as you can cover the same distance in a truck fo
Re: (Score:2)
This is from someone who has to maintain these things, my Clariion is slower and harder to maintain than my Linux storage server. FC vs SATA, both over iSCSI. Ingenuity and innovation for the win.
Inquiring minds want to know...why would yall spend the money on FC drives only to be run over iSCSI? Why not just use SATA drives in your Clariion? I'm sure they would have been cheaper.
they are missing hardware mgmt (Score:5, Interesting)
where's the extensive stuff that sun (I work at sun, btw; related to storage) and others have for management? voltages, fan-flow, temperature points at various places inside the chassis, an 'ok to remove' led and button for the drives, redundant power supplies that hot-swap and drives that truly hot-swap (including presence sensors in drive bays). none of that is here. and these days, sas is the preferred drive tech for mission critical apps. very few customers use sata for anything 'real' (it seems, even though I personally like sata).
this is not enterprise quality no matter what this guy says.
there's a reason you pay a lot more for enterprise vendor solutions.
personally, I have a linux box at home running jfs and raid5 with hotswap drive trays. but I don't fool myself into thinking its BETTER than sun, hp, ibm and so on.
Re:they are missing hardware mgmt (Score:5, Insightful)
Re:they are missing hardware mgmt (Score:5, Informative)
This sort of attitude is how Sun got its lunch eaten in the market in the first place.
Yes, your hardware rocks. It's so fucking sexy I need new pants when I come into contact with it.
It also costs more than a fucking italian sports car.
Turns out that if your awesome hardware is 10 times better than commodity hardware, but also 25 times as expensive, people are just going to buy more commodity hardware.
I've got some Sun data appliances and I've got some Dell data appliances, and the only difference I've seen between them is purely one of cost. The only thing that ever breaks is drives.
Re: (Score:3, Funny)
And speaking of sexy, sports cars, and Sun, there is one huge factor that sets apart the purchase decisions -
Sun has nothing on Ferrari for getting you laid.
Re:they are missing hardware mgmt (Score:5, Insightful)
personally, I have a linux box at home running jfs and raid5 with hotswap drive trays. but I don't fool myself into thinking its BETTER than sun, hp, ibm and so on.
I don't think these folks believe their solution is better -- just cheaper. MUCH cheaper. So much cheaper that you can employ a team of people to maintain the "homebrew" solution and still save money.
You can get 2TB drives now (Score:2)
Since you can now get 2TB drives you should be able to fit 90TB in one of these boxes :)
And I thought I was doing well with a few terabytes in my home server (but hey, ZFS should save me from silent data corruption when the drives inevitably start to fail).
Or wait 5 years and buy it at newegg for $280 (Score:2, Funny)
Re: (Score:2)
Cool for home pr0n collection, but business? (Score:2)
These cost a bit and have drives which fail at a fairly infrequent rate. It doesn't hurt that the data center is kept at 64 degrees by two (redundant) chillers and has 450 KVa redundant power conditioners keeping the electricity on at all times. (We do shut off the power to the building once a month to check these and the diesel generat
Lets try to be a bit more supportive here! (Score:5, Insightful)
If an article went up describing how a major vendor released a petabyte array for $2M the comments would be full of people saying "I could make an array with that much storage far cheaper!"
Now someone has gone and done exactly that (they even used Linux to do it) and suddenly everyone complains that it lacks support from a major vendor.
This may not be perfect for everyone's needs, but it's nice to see this sort of innovation taking place instead of blindly following the same path everyone else takes for storage.
What's all the hate? (Score:5, Insightful)
It's like looking at KDE and saying "But we pay Apple and Microsoft so we get support" (even though, no you don't). The company is just releasing specs, if it fits in your environment, great, if not, bummer. If you can make improvements and send them back up-stream, everyone wins. Just like software.
I seem to recall similar threads whenever anyone mentions open routers from the Cisco folks.
Re:What's all the hate? (Score:5, Interesting)
Running on the cheapest hardware possible and engineering the software to gracefully deal with hardware failure is exactly how Google runs their datacenters, as well. As long as you've got the talent to pull it off, it's much more cost effective than buying a prefab solution.
Don't forget where the real value is (Score:3, Insightful)
The real value in a data storage system isn't in the hardware, it's in the data. And the real cost incurred in a data storage system is measured in the inability of the customer to access that data quickly, efficiently and (in the case of a disaster) at all.
If you need to crunch the data quickly, a higher-performing system is going to save you money in the end. Look at all the benchmarks: no home-grown systems are anywhere on the lists. If you want to stream through your data at several gigabytes per second, you need to pay for a fast interconnect. Putting 45 drives behind a single 1GbE just doesn't cut it.
Similarly, if you want to ensure that the data is protected (integrity, immutable storage for folks who need to preserve data and be certain it hasn't been tampered with, etc) and stored efficiently (single instance store, or dedupe, so you don't fill your petabytes of disks with a bajillion copies of the same photos of Anna Kournikova) then you need to pay for the extra goodness in that software and hardware as well.
Finally, if you want extremely high availability, then the cost of the hardware is minuscule compared to the cost of downtime. We had customers that would lose millions of dollars per service interruption. They're willing to pay a million dollars to eliminate or even reduce downtime.
These folks are essentially just building a box that makes a bunch of disks behave like a honking big tape drive. It's a viable business--that's all some folks need. But EMC et al are not going to lose any sleep over this.
Re:My math is a bit rusty... (Score:5, Informative)
Linux-based server using commodity parts that contains 67 terabytes of storage at a material cost of $7,867.
Re: (Score:3, Informative)
(1000 TB / 67 TB) * $7,867 = $117417.91
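The parent's figure can be reproduced directly (a quick check, using the 67 TB and $7,867 numbers from the summary):

```python
# Cost per petabyte, scaled up from one 67 TB pod at $7,867.
pod_tb, pod_cost = 67, 7867
per_pb = (1000 / pod_tb) * pod_cost
print(round(per_pb, 2))  # 117417.91 -- roughly the $117k/PB in the summary
```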
Re: (Score:2, Informative)
Re:That's great but what about all the hidden cost (Score:2, Insightful)
They designed and built it so they should know how to support it. If someone else builds one, just learning how to get that beast up and running is excellent hands on training.
Re: (Score:2)
Re: (Score:3, Informative)
they used incredibly cheap-ass HBAs for no good reason.
In their defence:
A note about SATA chipsets: Each of the port multiplier backplanes has a Silicon Image SiI3726 chip so that five drives can be attached to one SATA port. Each of the SYBA two-port PCIe SATA cards has a Silicon Image SiI3132, and the four-port PCI Addonics card has a Silicon Image SiI3124 chip. We use only three of the four available ports on the Addonics card because we have only nine backplanes. We don't use the SATA ports on the motherboard because, despite Intel's claims of port multiplier support in their ICH10 south bridge, we noticed strange results in our performance tests. Silicon Image pioneered port multiplier technology, and their chips work best together.