PetaBox: Big Storage in Small Boxes 295
An anonymous reader writes "LinuxDevices.com is reporting that a Linux-based system comprising more than a petabyte of storage has been delivered to the Internet Archive, the non-profit organization that creates periodic snapshots of the Internet. The PetaBox products, made by Capricorn Technologies, are based on Via mini-ITX motherboards running Debian or Fedora Linux. The IA's PetaBox installation consists of about 16 racks housing 600 systems with 2,500 spinning drives, for a total capacity of roughly 1.5 petabytes, according to the article. Now to strap one of those puppies to my iPod!" The Internet Archive continues to astound.
archive.org (Score:5, Interesting)
They do a lot more than that! I've just been downloading some Warren Zevon [archive.org] shows from their Live Music Archive.
copyright (Score:5, Interesting)
1.5 Petabytes? (Score:4, Interesting)
The math doesn't work out when you multiply the per-system capacity either: 600 systems * 1.6TB/system = 960TB. That's just under a petabyte. Am I missing something?
Also, if you've got those in a RAID5 setup, you're 'only' talking about approx 800TB of usable space. That's far less than the 1.5 petabytes claimed.
800TB is a lot of space, but there must be a cheaper/easier way than purchasing 600 systems to do it.
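The parent's arithmetic can be checked in a few lines. The 600 systems, 1.6TB/system, and 2,500 drives are from the article; the 4-drive RAID 5 grouping is an assumption for illustration, which is why this sketch lands a bit below the ~800TB estimate above:

```python
# Back-of-the-envelope check of the capacities quoted in the thread.
systems = 600
tb_per_system = 1.6
drives = 2500

raw_tb = systems * tb_per_system          # 960.0 TB, just under a petabyte
drives_per_system = drives / systems      # ~4.17 drives per node
# With (assumed) 4-drive RAID 5 arrays, one drive per array goes to parity:
usable_tb = raw_tb * (4 - 1) / 4          # 720.0 TB

print(raw_tb, round(drives_per_system, 2), usable_tb)
```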
The MPAA and RIAA (Score:3, Interesting)
Re:copyright (Score:3, Interesting)
Besides, the IA only archives HTML pages and the small images in them, nothing else. If you consider your HTML content to be copyrighted material that must not be reproduced, might I ask why the hell it is publicly accessible on the Web in the first place?
Re:copyright (Score:3, Interesting)
Re:What's wrong with hot swap and RAID 5? (Score:3, Interesting)
They don't use hot swap and RAID 5 for the same reason Google doesn't run on mainframes:
It's just cheaper to let higher-level logic take care of that stuff instead of strapping redundancy onto every node...
Why hot swap if it isn't needed? The rest of the node will be mirrored somewhere else, so for the cost of fitting everything out with hot-swap bays you could get 5 or 10% more nodes...
Same for RAID 5: good high-performance RAID 5 controllers would increase the system cost by 50% or so. And then it's no less expensive than just mirroring nodes.
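The cost trade-off in the comment above can be sketched numerically. All prices here are invented; only the "50% premium" figure is from the comment:

```python
# Rough cost comparison: RAID 5 on every node vs. just buying more plain
# nodes and mirroring them. node_cost is a hypothetical figure.
node_cost = 1000            # plain JBOD node (made-up price)
raid_premium = 0.5          # "increase the system cost by 50%"
budget = 600 * node_cost

raid_nodes = budget / (node_cost * (1 + raid_premium))   # 400 RAID 5 nodes
plain_nodes = budget / node_cost                         # 600 plain nodes

# Mirroring pairs of plain nodes yields 300 usable nodes, but survives
# whole-node failures (PSU, controller), which per-node RAID 5 does not.
print(int(raid_nodes), int(plain_nodes))
```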
Re:What's wrong with hot swap and RAID 5? (Score:2, Interesting)
GOK, I have 3 PB of storage synchronised across two data centres here, all in 7+1 RAID 5. Mostly self-healing too: if a drive pops, a spare drive in the same array builds itself into that stripe set, enabling hot replacement of the dead drive.
I would love to know what their "painful experience" was!
Using JBOD for this seems a tad courageous, to say the least.
And then, of course, there's backup...
They don't like RAID (Score:5, Interesting)
Also, the article says they don't like RAID, due to bad experiences with RAID 5, and the system is configured as JBOD (Just a Bunch Of Disks). It doesn't say why, or what users should do to get equivalent protection. My guess is that depending on RAID within a box means you're still vulnerable if the box's CPU or disk controller decides to scribble the disks, or the power supply decides to catch fire or short out and deliver 240VAC on the +5V line or whatever. So if you want RAID-like redundancy, set up your applications or file system to compute the parity data in software and hand it off to another 1U box for storage.
The overhead of the motherboards here is not that high - they're about $150-200, and support 4 disks that probably cost $200-300 each, so they're only about 20% of the cost, which is not bad. The article didn't say they're using SATA, and it sounded like it's some IDE variant instead, but if you're only using 100 Mbps Ethernet to connect to the box and not the optional GigE, it's not the bottleneck anyway. If you wanted an alternative design, you could probably do something with a couple of 4-way SATA controllers per CPU, with a lot of disks stacked vertically in a 3-4U box looking like an X-serve or something. But that wouldn't necessarily have much of an advantage.
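The cross-node parity idea suggested above can be sketched in a few lines: compute the parity block in software and ship it to a separate box, so no single node's controller or PSU failure takes out both data and parity. The block size and three-node layout are invented for illustration:

```python
# Software RAID-5-style parity across nodes, not within one box.
from functools import reduce

BLOCK = 64 * 1024  # 64 KiB stripe unit (arbitrary choice)

def parity(blocks):
    """XOR all data blocks together, RAID-5 style."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def recover(parity_block, surviving_blocks):
    """Any one lost block is the XOR of the parity and the survivors."""
    return parity(surviving_blocks + [parity_block])

data = [bytes([i]) * BLOCK for i in range(3)]  # blocks bound for 3 storage nodes
p = parity(data)                               # this goes to a 4th box
assert recover(p, data[1:]) == data[0]         # node 0 died; rebuild its block
```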
Re:copyright (Score:2, Interesting)
Did you ask this question when Google introduced site cache several years ago?
Not a big improvement... (Score:2, Interesting)
Re:Good to see. (Score:5, Interesting)
5 petabytes of storage is enough for a brief five-minute DVD-quality sex scene for each person of legal age in the US (two to a scene). 100 petabytes would be five minutes of porn of every pair of people in the world.
I actually wonder about this a little; how many women have posed nude on the internet? There seem to be an awful lot; I haven't been able to see them all (though I will continue to try). Where do they mostly come from, I wonder.
Two points (Score:5, Interesting)
First off, this isn't quite an example of a company suddenly deciding to donate stuff to the Archive. As can be seen on their own website [capricorn-tech.com], Capricorn was spun off from the Archive on July 1, 2004. To a large extent, Capricorn exists for the specific purpose of providing storage to the Archive, and if that same storage can be sold to others so much the better.
Second, what about interconnects and performance? The product descriptions say nothing about SCSI or FC or other storage-oriented connectivity, so one must assume that the connection to these boxes is through a network. That would mean each node is an NFS server (or similar), serving up 1.6TB using a 1GHz C3 processor, a maximum of 1GB of memory (for caching etc.) and what appears to be a single GigE link. Can you say unbalanced? The Internet Archive might be the only system with an access pattern so sparse that the ratio between capacity and performance wouldn't be crippling. Don't try using one of these with any other kind of application if performance is a concern...and BTW they don't seem to say anything about high availability or other storage functionality (e.g. integrated backup or snapshots) either. Capricorn's big play seems to be power consumption, but there are other players that can beat them on density (e.g. Copan with 224TB per rack [copansys.com]) and multitudes who can offer better performance/functionality. I hate to sound negative, but this is a product so specialized as to be uninteresting.
Disclaimer: I think I met some of the Copan guys once and they seemed cool enough, but there's no other relationship between me and them. That just happened to be the first name I thought of in this space.
Re:No redundancy? WTF? (Score:2, Interesting)
Archive.org [archive.org] maintains its archives in several geographically separate locations, and files are mirrored between those sites. If one disk or node breaks, you still have two or more copies of that material.
If you archive serious amounts of data, redundancy within a node is not the best solution; it is better to distribute the information between systems. For very important data, you can have as many copies as you have nodes; less important data may have just a single copy. If it gets lost, then OK, shit happens, but so what. For example, I have just a single copy (no backups, partly RAID) of 10 TiB of data (and that data is not available from any P2P shop) because it is not economically viable to make backups. On the other hand, I have some data in 5 geographically diverse copies, both on-line and off-line.
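The tiered-replication policy described above can be sketched as a simple mapping from importance to replica count, with each copy on a distinct site. The tier names and site list are invented for illustration:

```python
# Tiered replication: more copies for important data, one copy for bulk data.
SITES = ["sf", "amsterdam", "alexandria"]   # hypothetical data centres
REPLICAS = {"critical": 3, "normal": 2, "bulk": 1}

def placements(importance):
    """Return the sites that should hold a copy of this item."""
    n = REPLICAS[importance]
    # Simple policy: first n sites, so each copy lands on a different site.
    return SITES[:n]

print(placements("critical"))  # ['sf', 'amsterdam', 'alexandria']
print(placements("bulk"))      # ['sf']
```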