Hard Drives Made for RAID Use 201
An anonymous reader writes "Hard drive giant Western Digital recently released a very interesting product: hard drives designed to work in a RAID. The Caviar RE SATA 320 GB is an enterprise-level drive without native command queuing that uses a SATA interface. It works better in RAID than other drives because of features like its time-limited error recovery and 32-bit CRC error checking, so it is an option where previously only SCSI drives would be considered."
Typo! (Score:3, Informative)
You should change "In" to "It"
Thank you very much.
earth to 11 year old kid (Score:1, Informative)
p.s. Pay attention in English class.
Summary of article:
The Good (+)
- Very good performance
- Looks cool (for a hard drive)
- Optimized for RAID use
The Bad (-)
- High initial investment
SATA version may be new, but features are not new (Score:5, Informative)
http://www.wdc.com/en/products/Products.asp?DriveID=92 [wdc.com]
I bought one to replace what I thought was a bad drive in a RAID configuration about a year ago.
TechReport (Score:5, Informative)
Go read. Now!
Re:Slashdot: Stories Made For Ad Use (Score:5, Informative)
It's not an error by NewEgg. Follow the link to the manufacturer's site, and you'll see the same specification:
http://www.wdc.com/en/products/Products.asp?DriveID=114 [wdc.com]
Re:No NCQ? (Score:3, Informative)
In short, without NCQ, SATA drives are going to be slower than SCSI. The other two features probably just offset/mitigate the speed differences, but I would probably hold out for something that has NCQ (or just go SCSI) if I were building a RAID today.
Re:Slashdot: Stories Made For Ad Use (Score:5, Informative)
MTBF is defined as [short time period] * [number of drives tested] / [number of drives which failed within that time period]. An MTBF of 114 years doesn't mean that half of the drives will survive for 114 years without a failure; it means that if you run 114 drives for a year, you should expect to have 1 failure.
A more intuitive way of conveying the same information is to say that the drives have an expected failure rate of no more than 1E-6 per hour.
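The parent's arithmetic checks out; a few lines of Python make the conversion explicit (the 1,000,000-hour MTBF is the figure from the spec sheet being discussed):

```python
# Convert a quoted MTBF into an hourly failure rate and a "years" figure.
mtbf_hours = 1_000_000          # manufacturer-quoted MTBF

failure_rate_per_hour = 1 / mtbf_hours
mtbf_years = mtbf_hours / (24 * 365)

print(f"failure rate: {failure_rate_per_hour:.0e} per hour")   # 1e-06 per hour
print(f"MTBF: {mtbf_years:.0f} years")                         # ~114 years

# The parent's "run 114 drives for a year, expect one failure":
drives, hours = 114, 24 * 365
expected_failures = drives * hours * failure_rate_per_hour
print(f"expected failures: {expected_failures:.2f}")           # ~1.00
```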
Re:Slashdot: Stories Made For Ad Use (Score:5, Informative)
Easy: You, like most people, don't know what MTBF means. MTBF is only meaningful in context with the expected lifespan of the device. This is probably somewhere in the neighborhood of 5 years, or about 43,800 hours. Essentially, what the manufacturer is saying is "Based on some data, we estimate that if you run x number of these drives, the average time between failures will be 1,000,000/x hours, up until the expected lifespan of the drive, at which point all bets are off."
For computer hardware this is always some sort of extrapolated estimate, since they have of course not actually been testing the drive for its expected lifespan, or it would be obsolete by the time they released it.
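Putting numbers on the parent's reading (the 5-year lifespan is the parent's guess, not a spec-sheet number, and the 100-drive fleet is made up for illustration):

```python
# Expected failures for a fleet of drives over an assumed service life.
mtbf_hours = 1_000_000        # quoted MTBF
lifespan_hours = 5 * 8760     # assumed 5-year lifespan (~43,800 hours)
fleet = 100                   # hypothetical number of drives you run

expected_failures = fleet * lifespan_hours / mtbf_hours
print(f"expected failures over the lifespan: {expected_failures:.2f}")
# ~4.38 -- roughly 4 dead drives before the fleet ages out
```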
NCQ.. (Score:2, Informative)
Re:About time (Score:3, Informative)
Re:Network RAID? (Score:3, Informative)
It also depends what you want to be doing with it. I've played with both hardware and software RAID 5, at home and at work. Software RAID offers excellent bandwidth and seems to use very little CPU time, which is why I think a P3 should work. However, the seek time is terrible. Perhaps it has something to do with the RAID intelligence being located so much farther away from the drives than it would be with a dedicated RAID card. I've tried running an SQL server on soft IDE RAID on a dual Xeon 3.2, and it had the snot kicked out of it by a dual P3 700 with an ancient MegaRAID-driven SCSI array.
As for running it as a home directory for Win/Mac/Linux, between Samba and NFS you should be just fine. You may even be able to go the fancy route and set up a few logical volumes as iSCSI targets and run your own SAN.
Re:About time (Score:3, Informative)
http://www.adaptec.com/sas/index.html?source=home
The pro level is already moving to it, but I suspect it will be OK for home use too, given the enterprise features it offers.
I checked a bit you know
Re:About time (Score:3, Informative)
You could get around that if you were to use an Adaptec Serial ATA RAID 2810SA with 8 ports or a more expensive Adaptec Serial ATA RAID 21610SA with 16 ports.
You might look at the price and say it's too expensive, but the speed and available configurations should make up for it. Besides, I got mine for around $425, which is less than their suggested price. Also, both these cards can use the wasted space from mismatched drive sizes, as well as run multiple RAID volumes on each drive. What I like the most is the hot-swap and hot-spare support, where you can just leave a blank drive in, and if another drive fails it automatically recovers with the spare, and you can replace the bad drive without rebooting. Another thing I like about the card is that it is a full controller and not one of these host-based things. Your computer will just see it as a hard drive (or drives) without any special drivers. You can even access them from DOS, most Linux kernels, as well as Windows 95 and 3.11 (note the drives had to be small for 3.1 and 95 to see them correctly).
BTW, I don't work for Adaptec or sell their stuff. I'm just impressed with a product that finally took away a lot of the frustration that has been associated with cheaper IDE and SATA add-on cards. I'm sure there are better solutions available; this is just one that I have found. Most of the cheaper (under $100) IDE, SATA, or RAID controllers I have found depend on the host system to do their work. This is why you need a special driver in Windows or Linux to use them correctly, and the extra communication there could be something saturating your PCI bus (or helping it saturate).
Re:Network RAID? (Score:2, Informative)
Buffalo TeraStation (Score:5, Informative)
Supports RAID 5.
I emailed asking whether external USB hard drives could be added to and swapped in a RAID 5 array, and whether it can be done "on the fly"...
but all I got was this lousy message:
"Please call (800) 456-9799 x. 2013 between 8:30 and 5:30 CT and our presales guys will be able to assist you."
I'm one of those weird people who would rather communicate in writing. Oh well - no sale.
synchronized spindles? (Score:4, Informative)
I would think if these drives are really designed for RAID (like other drives have been in the past), then they would have support for synchronized spindles.
The idea behind synchronized spindles is that in order to read data from a disk, you have to wait for the platter to come around part of a revolution for your data to become available, just like picking up your suitcase on the luggage carousel at the airport. How long you need to wait is a matter of luck, because the disk can be assumed to be in a random position when you decide you want your data. When you have RAID without synchronized spindles and you want data that's bigger than the stripe width (or when you're writing and need to update the parity), you have to wait for multiple disks, and they will tend to be spread out so that you tend to wait longer than if you were just waiting for one. With synchronized spindles, as soon as the whole group hits the right position, you've got what you're looking for, and you're done.
So, the point is, not having synchronized spindles tends to increase average access time, so having synchronized spindles is a desirable feature for a drive designed specifically for RAID.
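The luggage-carousel argument above is easy to verify with a quick Monte Carlo sketch (7200 RPM and a 4-drive array are assumed for illustration; they aren't from the article):

```python
import random

# Rotational latency: each platter sits at an independent random angle.
# Without synchronized spindles, a full-stripe access waits for the
# slowest drive (the max of several random waits).  With synchronized
# spindles, all platters are at the same angle, so the wait is the same
# as for a single drive.
rev_ms = 60_000 / 7200        # one revolution at 7200 RPM: ~8.33 ms
drives = 4
trials = 200_000
rng = random.Random(42)

single = sum(rng.uniform(0, rev_ms) for _ in range(trials)) / trials
unsync = sum(max(rng.uniform(0, rev_ms) for _ in range(drives))
             for _ in range(trials)) / trials

print(f"synchronized (= single drive): {single:.2f} ms")   # ~4.17 ms (T/2)
print(f"unsynchronized, 4 drives:      {unsync:.2f} ms")   # ~6.67 ms (4T/5)
```

The analytic answer matches: the mean of one uniform wait is T/2, while the mean of the maximum of N independent uniform waits is T·N/(N+1), so an unsynchronized 4-drive stripe waits about 60% longer on average.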
Re:looking for an inexpensive raid5 tower (Score:1, Informative)
Re:Sal Cangeloso is a moron (Score:2, Informative)
OK, so it was just to niggle.
Not slower than SCSI (Score:3, Informative)
If lower access times are needed, SCSI drives beat SATA drives just because you can only get 15,000 RPM with a SCSI interface. May also make sense to have 15,000 RPM drives if you're already spending a lot of money on 16GB of RAM.
The question about this drive which interests me is whether drive write caching can be easily turned off and will stay off, so you don't lose database data when the database thinks the data has been flushed to the surface but it hasn't really been flushed. If you can't do that, it's unsuitable for a lot of database work - certainly unsuitable for use with RAID controllers with battery-backed write caches, where you have the battery to make sure you don't lose cached data if the power goes off. Anyone who thinks colo power and a UPS will protect against loss of power hasn't suffered enough yet...
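For anyone unclear on why this matters, here's a minimal Python sketch of the layering involved (the file name is made up; the point is the comments):

```python
import os, tempfile

# Why drive write caching matters to databases: flush() only pushes data
# from the user-space buffer to the OS page cache, and fsync() only
# pushes it from the OS to the drive.  If the drive's own write cache
# is enabled and "lies", even a successful fsync() can return before
# the bits actually hit the platter -- exactly the failure mode the
# parent is worried about.
path = os.path.join(tempfile.mkdtemp(), "journal.log")  # hypothetical log
with open(path, "w") as f:
    f.write("COMMIT txn 42\n")
    f.flush()                 # user-space buffer -> OS page cache
    os.fsync(f.fileno())      # OS page cache -> drive... we hope
```

Software can't fix this from above: once `fsync()` has returned, everything below it is trusting the drive, which is why being able to disable the drive's write cache (and have it stay disabled) matters.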
RAID 0 is fine, in its place (Score:3, Informative)
You consider RAID 0 when you don't care about losing the data if there's a drive failure and want the benefits of striping and the extra space available for a given number of drive bays, compared to other RAID levels. RAID 5 can get you some of the space but it's slower for database work.
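The space trade-off the parent mentions is simple arithmetic; a small sketch for identically sized drives (the 4-drive count is just an example):

```python
# Usable capacity for n identical drives, per common RAID level.
def usable_gb(level: int, n: int, size_gb: int) -> int:
    if level == 0:                 # striping, no redundancy
        return n * size_gb
    if level == 1:                 # mirroring: every drive holds the same data
        return size_gb
    if level == 5:                 # striping + one drive's worth of parity
        return (n - 1) * size_gb
    raise ValueError("unsupported level")

# Four of the 320 GB drives from the article:
for level in (0, 1, 5):
    print(f"RAID {level}: {usable_gb(level, 4, 320)} GB")
# RAID 0: 1280 GB, RAID 1: 320 GB, RAID 5: 960 GB
```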