A Terabyte In A Cigar Box 691
Anonymous Howard writes "LaCie has introduced a 1 Terabyte (capacity) disk for (get this) only $1,199.00 (USD)! It is external and equipped with FireWire 800, FireWire 400, iLink/DV, Hi-Speed USB 2.0, or USB 1.1 to connect to both PC and Mac. Take a look here."
wow... (Score:5, Insightful)
Missing bytes growing fast (Score:5, Insightful)
I know this is "just the way" drives are measured, but all those missing 24 bytes are really starting to add up. --H
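For anyone counting, the gap is just decimal vs. binary prefixes. A quick sketch of the arithmetic, assuming the advertised 1 TB means 10^12 bytes (the convention drive makers use):

```python
# Drive makers count in decimal units (1 TB = 10^12 bytes);
# operating systems traditionally count in binary (1 "TB" = 2^40 bytes).
advertised = 10**12          # bytes, as printed on the box
binary_tb = 2**40            # bytes in one binary terabyte (TiB)

reported = advertised / binary_tb
shortfall = binary_tb - advertised

print(f"OS reports: {reported:.4f} TB")            # ~0.9095
print(f"'Missing':  {shortfall / 10**9:.1f} GB")   # ~99.5 GB short of a binary TB
```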
Man... (Score:3, Insightful)
Sorry, nothing terribly insightful to say here. Just amazed at how far storage has come. This particular device would have been interesting for Weta to have during production of RotK; they used many, many terabytes of data. They'd probably have been quite happy to hand-carry a terabyte of data. (Faster than a gigabit network in many ways...)
proprietary controller (Score:5, Insightful)
Hey Epson, (Score:5, Insightful)
Of course, for a grand and some change, this thing better make the bed the next morning, you follow...
Great (Score:1, Insightful)
200 GB for a Hello World program, here we come.
Re:Not a 1TB *disk* (Score:2, Insightful)
Re:wow... (Score:3, Insightful)
One could hope for redundancy within the "disk". Since it seems to contain four 250 GB disks, it's on the same stupidity level as that guy's 1 TB FireWire setup from a story a while back.
Re:wow... (Score:2, Insightful)
Yes, if you measure individual components of a product, it's not very interesting. For example, humans are just organic pain collectors.
$1/GB (Score:5, Insightful)
What's so amazing about that? HD space has been under one dollar per gigabyte [pricewatch.com] for a few years now. Add the cost of RAID and it's still under a buck a gig.
--
Re:Sorry.. (Score:5, Insightful)
Especially when you consider that the size will make this a "portable" drive. The jostle-and-drop action can wear out drives already... very bad.
Every /. story like this has to have a post like (Score:2, Insightful)
Obviously since I can't see a need for such massive amounts of storage, there's no reason anybody should waste their time making this. They should build stuff that solves my problems.
Re:Sorry.. (Score:5, Insightful)
Because after all, we haven't been doing RAID for a long time now. Oh wait, doesn't RAID mean Redundant Array of Inexpensive Disks?
Come on, it certainly has its reliability concerns, but if you mirror one to another, where's the difference between this and two racks of smaller disks? Seems to me that 4 points of failure on each side of the mirror rather than a dozen or two could actually HELP reliability.
Re:No, only 0.9094 TB (Score:1, Insightful)
Damn standards institutes. What a bunch of arrogant bastards.
Re:Slow interface = bottleneck (Score:2, Insightful)
Re:unfortunately the drives are mounted vertically (Score:2, Insightful)
Re: Not as much space as you think (Score:4, Insightful)
That's actually not a lot of space once you get into multimedia.
But backup/recovery of a terabyte of data is not exactly trivial. Re-scanning and re-syncing a large disk array can take over a day, and moving that data across 100 Mbps Ethernet would require anywhere from 38 to 60 hours.
The cost isn't too bad (close to $1/GB), but I'd prefer to see it reconfigured as a RAID5 unit.
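That 38-to-60-hour figure is easy to sanity-check. A rough sketch, assuming a 100 Mbps line rate and hypothetical 40-60% sustained efficiency figures chosen to bracket the claim:

```python
# Back-of-the-envelope time to move 1 TB over 100 Mbps Ethernet.
data_bits = 10**12 * 8            # 1 TB expressed in bits
line_rate = 100 * 10**6           # 100 Mbps

ideal_hours = data_bits / line_rate / 3600
print(f"At full line rate: {ideal_hours:.1f} h")   # ~22.2 h, never seen in practice

# Real-world sustained throughput is well below line rate; these
# efficiency values are assumptions, not measurements.
for efficiency in (0.6, 0.4):
    print(f"At {efficiency:.0%} efficiency: {ideal_hours / efficiency:.0f} h")
```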
Re:Sorry.. (Score:2, Insightful)
Re:Not a 1TB *disk* (Score:3, Insightful)
Just an FYI: the real scale [wikipedia.org] is what hard drive manufacturers have been using all along.
We've been using an incorrect variation that the standards people finally fixed [wikipedia.org]... 5 years ago.
Re:Slow interface = bottleneck (Score:5, Insightful)
I do think this product would be a lot better with built-in RAID though.
Re:Sorry.. (Score:3, Insightful)
This has 4 250 GB drives in it. There is no redundancy. This is an AID.
But take two, they're small. Now you have a mirror. As the poster below pointed out, RAID over FireWire is possible. You can also chain many of these together to form all kinds of configurations, and FireWire is hot-swappable.
Re:wow... (Score:4, Insightful)
Let's say that the MTBF for each of the drives they are using is 500,000 hours/drive (which is what is rated for the Maxtor DiamondMax 16).
If you have 4 drives, you have an average of 8 failures in 1,000,000 hours. That is 1,000,000/8 = 125,000 hours average MTBF.
Note that that doesn't include failure rates for any of the other components including the enclosure (physical USB port, etc).
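The arithmetic above generalizes: under the usual exponential-failure assumption, per-drive failure rates add, so the combined MTBF is just the single-drive MTBF divided by the number of drives. A minimal sketch:

```python
# Combined MTBF of n independent drives (exponential-failure assumption):
# each drive fails at rate 1/MTBF, rates add, so the array's MTBF is MTBF/n.
def combined_mtbf(single_mtbf_hours: float, n_drives: int) -> float:
    return single_mtbf_hours / n_drives

print(combined_mtbf(500_000, 4))   # 125000.0 hours, matching the figure above
```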
BTW, how can a hard drive last 500,000 hours? Easy. Sell 100,000 hard drives. Run them for 10 hours. See how many fail.
What's that? You've had MANY hard drives die on you in the past, and there is no way that ANY of them ran 500,000 hours (that's only 57 years)? How many of them were past their warranty? Did you report the failure back to the company? Remember the 1-out-of-10 rule.
That fits with my experience in the last few years: I'm lucky to average 50,000 hours (1/10th of the supposed norm under these assumptions) on a drive before death, assuming an average life of 4 years per drive. I have a few drives that have -never- died, but in general I have to replace the inexpensive IDE drives in various machines roughly every 7 years on average (meaning some last only a few months while others run for over 5 years before being upgraded into obsolescence, which I'll count as a "0" for number of failures).
That would put the average "real" MTBF at 12,500 hours. That's less than 18 months. Combine that with the horrible time for backing up such a box, the overhead of running over USB/FireWire (which in turn runs over PCI instead of the drives attaching directly), along with the flakiness that a lot of USB/FireWire devices have, and you have a LOT of reasons to spend the extra money to build it yourself.
I would much rather buy a case with a low-end CPU, room for more than 4 drives, and build a RAID system with a hot-spare or two. Cost more? Yeah
Thank you, Captain Obvious. (Score:4, Insightful)
Re:Man... (Score:4, Insightful)
Disk consumption recipe:
Re:RAID and what happens if a drive in it goes bad (Score:5, Insightful)
Your data isn't any more protected on this drive than on any other hard drive.
With this device you probably have to send everything back to them to fix with no guarantee of data preservation.
Just like any other hard drive.
Even though this device "looks cool" I'll stick to the RAID system that I built in my fileserver at home. It holds almost as much data, costs less, and if something in it breaks I can fix it quickly without any loss of data.
A RAID array is not a backup solution. It's a fault tolerance solution. There are several scenarios where you could lose everything on even a RAID5 array (controller failure, multiple disk failure, etc). So your ability to "fix it quickly without any loss of data" is by no means certain.
But, I think you are missing a major point here: unlike your fileserver-based RAID array, this drive is small, quiet, and portable.
I currently have a bigass fileserver at home in a big, loud, power-sucking server case with 8 case fans and dual power supplies (and it sounds like a jet engine). It houses my video library (among other roles) on a 400GB RAID5 array built from six 80GB drives in hotswap drive cages connected to a Promise SX6000 controller. It was relatively cheap, it holds a lot of stuff, and I can replace faulty components off the shelf. It's great. Except for the noise and power requirements of having to house the thing in a big server.
I'm looking at this LaCie 1TB drive as a way to scale down my server to a desktop case just big enough to hold two mirrored system disks, a CD drive, and a DAT drive. The rest of my storage would be in external, self-contained drives.
As for backups, I back up my system disks (where the home directories live) nightly to DAT, but the data in my library (like most) is write-once, read-many. I back up my data to DVD before it gets stored on the array, rendering periodic backups unnecessary. If the disk crashes and dies, no big deal. I just have to endure a few hours (or days) of restoring files from DVD archives.
And in the event that my home catches fire, I can grab an external drive on the way out the door. Try that with a 100lb server.
Re:Sorry.. (Score:3, Insightful)
Oh, and the LaCie pocket drive you mention was based on a better performing laptop drive and incorporated a rubber bumper protection design and both Firewire AND USB interfaces.
Still waiting (Score:2, Insightful)
Does it come preformatted?
How long does it take to perform a defragment?
I think the hard drive metaphor for storage is starting to reach its limits...
78 years later: Analysis complete. 78% defragmented. Would you like to defragment now?
We use the 500 GB models (Score:2, Insightful)
Re:Sorry.. (Score:2, Insightful)
I think he gets it, but his point is that there are still as many points of potential failure. Two of these drives, for example, are effectively eight drives, and if any given IDE drive has, say, a 5% chance of failing per month (obviously, I'm making this up to illustrate the math involved, rather than trying to show real life failure rates), then two drives would have a 10% chance of failure. This isn't actually two drives though: it's eight drives, meaning you have a 40% chance of at least one sub-drive failing.
Wouldn't it be more robust to be able to treat each of these devices as a single four-disc RAID array of 250 GB drives? If you want to store 1 TB of data, then four of these, configured as RAIDs rather than monolithic nodes, seem like they would be more reliable.
I mean, I see what you're saying, but the earlier point is still valid. Your suggestion treats two of these as a Redundant Array of Inexpensive Discs, but I'd argue that a $1200 disc doesn't fit most people's idea of "inexpensive". On the other hand, a quartet of 250 GB "more traditional" RAIDs would consist of sub-drives of about $180 each [cnet.com] -- even if you have to replace all four discs in one of the RAID nodes here, that's still cheaper than the $1200 unit.
Like I say, I see your point, but I think that to do what you're suggesting would be both more expensive and less reliable than other approaches. I'd be willing to consider well-reasoned counter-arguments, though :-)
Re:Sorry.. (Score:3, Insightful)
I knew something didn't look right, but didn't bother to sit down and do the math properly. And now this is on my permanent record. Oh well -- thank you for the correction; in the future I'll double-check my math before spouting off in public like this...
Hopefully my point stands otherwise, even if I screwed up the details of the demonstration: with more points of failure, the probability of failure rises quickly, and a design that aims to compartmentalize parts of the system will tend to be more robust & fault-tolerant. The math seems valid, even if my particular demonstration of that math was, well, stupid :-)
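For the record, the correct version of that calculation is just the complement rule: the chance that at least one of n independent drives fails is 1 - (1 - p)^n, not n times p. A sketch, reusing the made-up 5%-per-month figure from the earlier post:

```python
# Probability that at least one of n independent drives fails,
# given per-drive failure probability p over the same period.
def p_any_failure(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# With the (invented) 5%-per-month figure from the earlier post:
print(p_any_failure(0.05, 2))   # just under 10% for two drives
print(p_any_failure(0.05, 8))   # roughly 34% for eight drives, not 40%
```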