
Hard Drives Made for RAID Use

An anonymous reader writes "Hard drive giant Western Digital recently released a very interesting product: hard drives designed to work in a RAID. The Caviar RE SATA 320 GB is an enterprise-level drive without Native Command Queuing that uses a SATA interface. It works better in RAID than other drives because of features like its time-limited error recovery and 32-bit CRC error checking, so it is an option where previously only SCSI drives would be considered."
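For reference, a 32-bit CRC is a checksum computed over each block of data; a single flipped bit almost always changes the result, which is how corrupted transfers get detected and retried. A minimal sketch in Python (purely illustrative -- the drive computes CRCs in hardware over its interface traffic, not with zlib):

    import zlib

    block = b"512 bytes of sector payload..."
    corrupt = b"513 bytes of sector payload..."      # one character off
    print(hex(zlib.crc32(block)))                    # checksum of the good block
    print(zlib.crc32(block) == zlib.crc32(corrupt))  # False: corruption detected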
This discussion has been archived. No new comments can be posted.

  • by andrewman327 ( 635952 ) on Saturday September 17, 2005 @02:00PM (#13585644) Homepage Journal
    Does anyone have any benchmarks to back up this claim? This seems very vague.
  • About time (Score:5, Interesting)

    by Tuor ( 9414 ) <tuor.beleg@gCOBOLmail.com minus language> on Saturday September 17, 2005 @02:06PM (#13585673) Homepage
    While I've been a proponent of SCSI for a long time -- Apple really was thinking ahead when it had it in Macs all those years -- it has been getting threadbare. Ultra-wide-tall-double-hex-SCSI is just getting to be too much!

    SATA is the right technology, especially for controllers, since each channel is dedicated. The only alternative is FireWire, and there are no drives with native FireWire controllers.
  • by fimbulvetr ( 598306 ) on Saturday September 17, 2005 @02:08PM (#13585688)
    On the Newegg link they list the MTBF as 1 million hours. Google tells me that is about 114 years. How can it have such a high MTBF? Is Newegg's data just incorrect, or is there something special about these drives (or are they designed to be "used" less)?
  • by garat ( 899448 ) on Saturday September 17, 2005 @02:19PM (#13585750) Homepage
    Here's an interesting quote from Tom's Hardware [tomshardware.com]:

    "In sum, we must state that all Command Queuing enabled drives have an advantage over those that do not support this feature. At the same time, CPU load is also slightly higher when Command Queuing technologies are used. However, considering the performance of today's processors, the additional CPU load is a marginal factor."

    Basically, you trade a little processor load for increased disk performance... so why not include it?
  • Re:About time (Score:3, Interesting)

    by $RANDOMLUSER ( 804576 ) on Saturday September 17, 2005 @02:29PM (#13585792)
    > ...especially for controllers since each channel is dedicated...

    I generally agree with that, but as a guy running eight 200 GB SATA drives on four controllers, I can tell you that the PCI bus gets saturated _way_ too quickly for my tastes.

  • by v1 ( 525388 ) on Saturday September 17, 2005 @02:31PM (#13585803) Homepage Journal
    These buggers are hard to find for anywhere near a decent price. I've found one model that is fairly popular, going by several different names and brands, but nobody seems to have them in stock. They look like a GREAT deal, loaded with most or all of the best RAID 5 features (hot swap, live rebuild, live GROW, etc.). Has anyone seen one IN STOCK anywhere?

    Same exact models:

    http://www.raidweb.com/fb605fw.html [raidweb.com]
    http://www.micronet.com/General/prodList.asp?CatID=45&Cat=Product [micronet.com]
    http://www.firewiremax.com/fire-wire-1394-ilink/miharasyfor5.html [firewiremax.com]
    http://www.pcrush.com/prodspec.asp?ln=1&itemno=77919&refid=1057 [pcrush.com]
    http://www.cooldrives.com/firewire-raid-5-enclosure-mini.html [cooldrives.com]
    http://www.topmicrousa.com/combo-205.html [topmicrousa.com]

    same internals, different enclosure:

    http://fwdepot.com/thestore/product_info.php/products_id/657 [fwdepot.com]
    http://www.cooldrives.com/fii13toatade.html [cooldrives.com]

    Everyone I call says they have them in stock. Then I ask them to check, and they suddenly change their mind and say no, it's not really in stock (despite what their web page says), and they expect it in the generic "1-2 weeks" (retail-speak for "we don't know when it'll be in, please call back later").

    Two of them actually told me they have yet to receive any of these units, so I don't think they've shipped from the manufacturer yet. (Vaporware?)
  • Network RAID? (Score:5, Interesting)

    by Eccles ( 932 ) on Saturday September 17, 2005 @02:33PM (#13585808) Journal
    Is there a reasonably priced, relatively low-power RAID-5 setup for home networks? I'd love to set up a file server with gigabit Ethernet and RAID 5 to serve as the home directories for my multiple machines. Things like the Buffalo LinkStation are a step in the right direction, but no RAID, etc. Is my only solution a Celeron- or Pentium M-based PC? If so, is it possible to set up such a system to act as home directories for a combo of Windows, Mac, and/or Linux machines?
  • Dumb Drives (Score:3, Interesting)

    by Doc Ruby ( 173196 ) on Saturday September 17, 2005 @02:47PM (#13585851) Homepage Journal
    EIDE drives are the cheapest type. But AFAIK, each drive has a controller card onboard, which seems redundant when all the drives are being controlled in conjunction. Software RAIDs seem to have parity (pun intended ;) with HW raid controllers, but wouldn't a real "Made for RAID" drive have nearly no controller logic of its own (maybe just data separator and head/spindle speed/position calibration)? Lots of logic for controlling the RAID drive will be on the central controller card, or running on the CPU. So why have more on the drive? The cheaper the drives, the bigger the array at the same budget (shared overhead of common controller).

    Am I correct, or are some RAID drive makers already doing this? Or have I just got all the controller:drive economics wrong?
  • by tomhudson ( 43916 ) <barbara,hudson&barbara-hudson,com> on Saturday September 17, 2005 @02:50PM (#13585870) Journal
    Theoretically, any cheap drive used in a RAID will experience less wear per gig of RAID data storage, since it is only storing a portion of the data. It's a cheat. Also, MTBF is a theoretical extrapolation from the failure times of individual components. In the hard disk industry, its relation to reality is about the same as Harry Potter's. But we should be used to that, just like a megabyte ain't a megabyte when they calculate capacities.
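    A quick check of that last point's arithmetic (Python, purely illustrative):

        # "320 GB" on the box is decimal gigabytes (10^9 bytes); most
        # operating systems report binary units (2^30 bytes) instead.
        print(320 * 10**9 / 2**30)  # ~298.0 -- the "missing" 22 GB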

    It's like this quote from the article:

    It works better in RAID than other drives because of features like its time-limited error recovery and 32-bit CRC error checking, so it is an option where previously only SCSI drives would be considered."
    It's all bullshit. Sure, it might be better than another drive for use in a RAID, but it's not like people couldn't consider IDE drives in the past, or that this is some miracle cure.

    Just look at what RAID means - Redundant Array of Inexpensive Disks. Lots of people use cheapie IDE hard disks in RAID setups. We've got a 4-drive terabyte RAID. Why would we consider expensive drives when the whole idea is to use cheap drives in a redundant array?

    Fuck the marketing departments. And fuck the PHBs who make their buying decisions based on them. Oh, right, the PHBs *ARE* getting fucked by the marketing departments. Sorry lads, carry on.

  • by Anonymous Coward on Saturday September 17, 2005 @03:04PM (#13585939)
    Specifically RAID 1, mirrored. It would be nice to be able to split one of those oversized drives into a mirrored pair, using the opposite sides of the disk platter as mirrors. You'd get better reliability, with the slight trade-off of giving up all that surplus disk space you never use.
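    A toy sketch of that idea in Python, using a file as a stand-in for the disk (a real setup would more likely run software RAID 1 across two partitions of the same drive; either way, both copies still share one spindle and one set of electronics):

        import os

        BLOCK = 512
        DISK_BLOCKS = 1024          # pretend disk: 1024 blocks of 512 bytes
        HALF = DISK_BLOCKS // 2     # the second half mirrors the first

        def mirrored_write(disk, lba, data):
            assert len(data) == BLOCK and lba < HALF
            for base in (0, HALF):  # the same block lands in both halves
                disk.seek((base + lba) * BLOCK)
                disk.write(data)

        def mirrored_read(disk, lba):
            disk.seek(lba * BLOCK)
            a = disk.read(BLOCK)
            disk.seek((HALF + lba) * BLOCK)
            b = disk.read(BLOCK)
            # without per-block checksums there is no way to tell which
            # copy is correct when they disagree; real RAID 1 shares
            # this limitation
            return a if a == b else b

        with open("disk.img", "w+b") as disk:
            disk.truncate(DISK_BLOCKS * BLOCK)
            mirrored_write(disk, 7, b"x" * BLOCK)
            print(mirrored_read(disk, 7)[:4])  # b'xxxx'
        os.remove("disk.img")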
  • Re:Network RAID? (Score:1, Interesting)

    by Anonymous Coward on Saturday September 17, 2005 @03:31PM (#13586079)
    I use an old P3/600 with four 250 GB drives (two RAID 0+1 sets). File serving is generally not CPU intensive, and I can achieve saturation on two 100 Mbit Ethernet lines while reading data from multiple machines. That is a relatively cheap setup, but what do you consider cheap? The only thing I do at home that needs more bandwidth than my server can handle is video capture, and a second 200 GB drive in the local machine I do my video work on is more than adequate. My final video edit and any work in progress gets copied automatically at night to the file server for backup, which eliminates the expense of and need for gigabit and a faster RAID setup.

    A side note: in roughly two years with this setup, I've had three assorted RAID failures or instances of reduced RAID functionality (drives marked bad that were not bad, or a corrupt RAID config). None of these problems were the drives themselves. The RAID setup has caused more problems than it has prevented, so use caution when trying to achieve a "cheap" RAID setup. Next time it happens, I'll probably get rid of the RAID setup and go back to four individual drives, using rsync between two similarly sized drives via a cron job.

    Assuming you really need gigabit speed and want RAID 5, a reliable setup can get expensive quickly. Don't forget to look into your data recovery options if you decide on hardware RAID and your card fails!
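    For what it's worth, the rsync-over-cron fallback mentioned above amounts to a nightly one-way mirror, roughly like this Python stand-in (mount points are made up; real rsync also handles deletions, permissions, and partial transfers):

        import filecmp, os, shutil

        SRC, DST = "/mnt/drive_a", "/mnt/drive_b"  # hypothetical mount points

        def mirror(src, dst):
            os.makedirs(dst, exist_ok=True)
            for name in os.listdir(src):
                s, d = os.path.join(src, name), os.path.join(dst, name)
                if os.path.isdir(s):
                    mirror(s, d)          # recurse into subdirectories
                elif not os.path.exists(d) or not filecmp.cmp(s, d, shallow=True):
                    shutil.copy2(s, d)    # copy new/changed files with metadata

        mirror(SRC, DST)                  # run nightly from cron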
  • Re:About time (Score:3, Interesting)

    by Fweeky ( 41046 ) on Saturday September 17, 2005 @03:32PM (#13586088) Homepage
    Quite. 32-bit/33 MHz PCI (especially shared among on-board devices *and* multiple card slots) is amazingly feeble these days, so consumer-level PCI Express comes not a minute too soon. Of course, if you can afford and appreciate eight 200 GB drives you can probably also afford and appreciate a half-decent workstation/server board with PCI-X, but even a pair of modern drives can completely saturate the bus, and if you're into file sharing over GigE even one drive is more than it can handle.

    For that matter, even sharing /dev/zero over GigE on PCI is... disappointing.
  • by whoever57 ( 658626 ) on Saturday September 17, 2005 @03:45PM (#13586134) Journal
    > An MTBF of 114 years doesn't mean that half of the drives will survive for 114 years without a failure; it means that if you run 114 drives for a year, you should expect to have 1 failure.
    That is a good explanation. Many people confuse MTBF with lifetime.

    Most products (and especially electronics) have a failure rate that, when plotted over time, looks like a bathtub. There is a high initial failure rate (infant mortality) that drops over time to a base rate (the random failure rate described by MTBF). This low failure rate continues until one reaches the end of the product's useful life, when the failure rate rises once again as age and wear cause the device to fail.

    Note that most extended warranties are designed by the seller to kick in after the early failure rate has dropped, but expire before the end-of-life failures.
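    To put a number on it, here is the back-of-the-envelope conversion from MTBF to an annualized failure rate, assuming the constant-failure-rate (exponential) model that applies on the flat part of the bathtub curve:

        import math

        mtbf_hours = 1_000_000            # the quoted 1 million hour MTBF
        hours_per_year = 24 * 365
        afr = 1 - math.exp(-hours_per_year / mtbf_hours)
        print(f"annualized failure rate: {afr:.2%}")  # about 0.87%

    That is roughly one failure per 114 drive-years, matching the figure above; it says nothing about whether any single drive will still be spinning in year ten.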

  • by alc6379 ( 832389 ) on Saturday September 17, 2005 @03:46PM (#13586145)
    Mod me offtopic, or whatever, but this has to be the most dumb-ass review I've ever read. It's a drive meant for RAID use, as in RAID 5 or RAID 1, in servers, where data integrity is very important. But what does this guy do?

    ...he puts it through the paces of a desktop hard drive. Where's the test of how it could run under MySQL? It's been replaced by a comment about how you can never have too much space "in this age of DVD-burning, file-sharing, and 40 GB MP3 players." Who the fuck cares about that on a server?

    Where's the review of how well it facilitates serving pages through Apache? Oh, that's replaced by "Look how neat the drive looks!"

    ...Nope. This FA was a waste of time, not just for the reader, but for the author, and for Western Digital to have even sent the drives to this guy. He should go back to playing UT2k4OMFGBF2 and find someone who actually knows something about industry usage patterns on hard drives like this to write a thoughtful review.

  • by MarkTina ( 611072 ) on Saturday September 17, 2005 @04:55PM (#13586474)
    What do you mean, sat on the wayside? It's been out and about for donkey's years; I've been involved in storage for 9 years and it pre-dates me by a LOOOONG time.
  • by rizzo320 ( 911761 ) on Saturday September 17, 2005 @05:00PM (#13586496)
    Wow, did Western Digital plot to have your family killed? What a vendetta!

    All hard drive manufacturers have gone through cycles of poor quality and reliability. Maxtor, Seagate, and IBM/Hitachi (remember the "DeathStar"?) have all had the same problems. In all my years of repairing and building desktops, I can say I have had the most problems with Seagates and (the now Maxtor-owned) Quantum drives. Ask someone else and they'll give you a different answer, too.

    This drive has a 5-year warranty. Most other Western Digital drives have a 3-year warranty, even if you buy the OEM versions (in most cases). And read the comments above for what a 1 million hour MTBF means!
  • by swmccracken ( 106576 ) on Saturday September 17, 2005 @05:02PM (#13586509) Homepage
    Recent Promise RAID cards have a "gigabyte boundary" mode, where they round the size of the array down to the nearest whole gigabyte.

    This allows for minor variations in replacement disk sizes, at the cost of wasting some disk space. (It'd make a 250 GB array instead of a 250.23 GB one.)
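    The rounding itself is trivial; a sketch of what such a mode presumably does:

        GB = 10**9
        raw = 250_230_000_000         # a hypothetical "250.23 GB" array
        usable = (raw // GB) * GB     # clamp down to a whole-gigabyte boundary
        print(usable)                 # 250000000000: any 250+ GB replacement fits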
  • by dtfinch ( 661405 ) * on Saturday September 17, 2005 @05:40PM (#13586659) Journal
    I personally trust WD more than I trust Maxtor, but all manufacturers have bad years and bad models. This year I only trust Seagate, and only certain specific models.
  • by TheOrquithVagrant ( 582340 ) on Saturday September 17, 2005 @06:57PM (#13587013)
    What part of "don't care about data loss" did you fail to understand?
    Why would I want to waste 25% of my volume's storage capacity to get better data security on something where I don't _care_ about data security? And no, RAID 5 doesn't match RAID 0 for speed even on reads, at least not in my Linux software RAID setup. No "probably" about it: I have a RAID 5 volume running on the same drives, where I keep data I actually care about.
  • by Anonymous Coward on Saturday September 17, 2005 @08:17PM (#13587330)
    I really wish there were some sort of standardisation of reviews, so that ones like this could be filtered out. How it got on Slashdot is beyond me!

    They review a RAID edition drive yet don't even test it in RAID 5! Unless reviews are thorough, how are we supposed to draw anything but the vaguest conclusions? This review's test set should have included all of these combinations:

    - software vs. hardware RAID solutions (including hybrid semi-hardware cards like the RocketRAID 1820A)
    - 2-, 4-, and 8-drive tests for RAID 0, 1, 1+0, and 5
    - synthetic tests such as the one they used (or HDTach or similar), as well as real-world tests such as a database benchmark and a file server test

    I hate to rant, but these thoughtless reviews really are a waste of time.
