Hard Drives Made for RAID Use

An anonymous reader writes "Hard drive giant Western Digital recently released a very interesting product: hard drives designed to work in a RAID. The Caviar RE SATA 320 GB is an enterprise-level SATA drive that ships without native command queueing. In works better in RAID than other drives because of features like its time-limited error recovery and 32-bit CRC error checking, so it is an option where previously only SCSI drives would be considered."
  • by Anonymous Coward on Saturday September 17, 2005 @01:57PM (#13585632)
    Sheesh, this is a VERY thinly disguised ad. Here's a direct link to NewEgg [newegg.com] $169. Has the same details as this "story."
    • Does anyone have any benchmarks to back up this claim? This seems very vague.
    • by fimbulvetr ( 598306 ) on Saturday September 17, 2005 @02:08PM (#13585688)
      On the NewEgg link they list the MTBF as 1 million hours. Google tells me that is about 114 years. How can it have such a high MTBF? Is NewEgg's data just incorrect, or is there something special about these drives (or are they designed to be "used" less)?
      • by ptbarnett ( 159784 ) on Saturday September 17, 2005 @02:15PM (#13585724)
        Is NewEgg's data just incorrect, or is there something special about these drives (or are they designed to be "used" less)?

        It's not an error by NewEgg. Follow the link to the manufacturer's site, and you'll see the same specification:

        http://www.wdc.com/en/products/Products.asp?DriveID=114 [wdc.com]

      • by cperciva ( 102828 ) on Saturday September 17, 2005 @02:26PM (#13585782) Homepage
        On the NewEgg link they list the MTBF as 1 million hours. Google tells me that is about 114 years. How can it have such a high MTBF?

        MTBF is defined as [short time period] * [number of drives tested] / [number of drives which failed within that time period]. An MTBF of 114 years doesn't mean that half of the drives will survive for 114 years without a failure; it means that if you run 114 drives for a year, you should expect to have 1 failure.

        A more intuitive way of conveying the same information is to say that the drives have an expected failure rate of no more than 1E-6 per hour.
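
        To make the arithmetic concrete, here is a quick back-of-the-envelope sketch in Python (the fleet sizes below are invented purely for illustration):

          # What a 1,000,000-hour MTBF implies for a fleet, assuming a
          # constant failure rate (a model that only holds within the
          # drive's rated service life, not beyond it).
          MTBF_HOURS = 1e6
          HOURS_PER_YEAR = 24 * 365              # 8760

          failure_rate = 1 / MTBF_HOURS          # ~1e-6 failures per drive-hour

          def expected_failures(num_drives, years=1.0):
              return num_drives * years * HOURS_PER_YEAR * failure_rate

          print(expected_failures(114))    # ~1.0 failure per year across 114 drives
          print(expected_failures(1000))   # ~8.8 failures per year across 1000 drives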
        • by whoever57 ( 658626 ) on Saturday September 17, 2005 @03:45PM (#13586134) Journal
          An MTBF of 114 years doesn't mean that half of the drives will survive for 114 years without a failure; it means that if you run 114 drives for a year, you should expect to have 1 failure.
          That is a good explanation. Many people confuse MTBF with lifetime.

          Most products (and especially electronics) have a failure rate that, when plotted over time, looks like a bathtub. There is a high initial failure rate (infant mortality) that drops over time to a base rate (the random failure rate described by MTBF); this low failure rate continues until the end of the product's useful life, when the failure rate rises once again as age and wear cause the device to fail.

          Note that most extended warranties are designed by the seller to kick in after the early failure rate has dropped, but expire before the end-of-life failures.
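
          For the curious, the bathtub shape is easy to sketch as a hazard function. Every constant below is invented purely to show the shape; real drives publish nothing like this:

            import math

            # Toy bathtub curve: hazard (instantaneous failure rate) vs. age.
            def hazard(t_years):
                infant = 0.20 * math.exp(-t_years / 0.25)   # early failures, decay fast
                base = 0.01                                 # flat region the MTBF describes
                wearout = 0.02 * math.exp(t_years - 5.0)    # rises sharply near end of life
                return infant + base + wearout

            for t in [0.1, 0.5, 1, 2, 3, 4, 5, 6]:
                print("year %.1f: %.3f failures/drive/year" % (t, hazard(t)))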

        • Correct. MTBF is not designed as an index of reliability for any one specific drive in use. It is designed as an index for manufacturers and repair facilities to estimate how many spares are required per year for any widescale deployment. So if you have 114 drives deployed in your enterprise, you would need to stock 1 spare drive to replace the estimated failures in one year.
        • thank you for that information. i assumed incorrectly that at least this one spec from the manufacturer wasn't a complete and utter sham. now i shall remain ever more vigilant.

          gee, one wonders if anything the manufacturers of products say is true.

          LCD monitor manufacturers lie about just about everything on their spec sheets. Hard drive manufacturers lie about an enormous amount regarding their products.
          software vendors lie a ton about their products and "fitness or lack thereof for a particular purpose" (then why the hell are you
      • by Rasta Prefect ( 250915 ) on Saturday September 17, 2005 @02:28PM (#13585791)
        On the NewEgg link they list the MTBF as 1 million hours. Google tells me that is about 114 years. How can it have such a high MTBF? Is NewEgg's data just incorrect, or is there something special about these drives (or are they designed to be "used" less)?

        Easy: You, like most people, don't know what MTBF means. MTBF is only meaningful in context with the expected lifespan of the device. This is probably somewhere in the neighborhood of 5 years, or about 43,800 hours. Essentially, what the manufacturer is saying is "Based on some data, we estimate that if you run x number of these drives, the average time between failures will be 1,000,000/x hours, up until the expected lifespan of the drive, at which point all bets are off"

        For computer hardware this is always some sort of extrapolated estimate, since they have of course not actually been testing the drive for its expected lifespan, or it would be obsolete by the time they released it.

      • Theoretically, any cheap drive used in a RAID will experience less wear per gig of RAID data storage, since it is only storing a portion of the data. It's a cheat. Also, MTBF is a theoretical extrapolation from the failure times of individual components. In the hard disk industry, its relation to reality is about the same as Harry Potter's. But we should be used to that, just like a megabyte ain't a megabyte when they calculate capacities.

        It's like this quote from the article:

        In (sic) works better in RAID than

        • RAID - Redundancy Across Independent Disks
          • i've seen "redundant array of inexpensive disks" (which i believe was the original) and "redundant array of independent disks" but never the one you mention. Care to cite a source?
            • When RAID was first deployed, the I was originally "inexpensive". "Inexpensive" in this context meaning "cheaper than the $100,000 washing machine sized disks you have now". Think old-school VAXen and PDP-11s. "Inexpensive" transformed to "independent" when the industry moved away from big iron to modern server hardware using commodity SCSI disks.
      • They don't run a drive for a million hours; they run 1,000 drives for 1,000 hours, and one breaks. Drives are getting better and better. I don't see how this is unusual; in my experience, if a drive works for the first two weeks it is installed, it usually lasts for decades if it is properly stored (not running in dust, properly cooled, etc.). That is, of course, unless it is a Maxtor, in which case, get your fire extinguisher.
      • It also says "24/7 reliability", which I think means "100% duty cycle", so ostensibly they are not designed to be used less (as most IDE desktop-type drives are).

        IIRC Seagate is the only other company to offer a 5-year warranty on IDE-type drives (also subject to proper use -- no desktop drives in servers).

        Due to fluid dynamic bearings, better motors and other former SCSI-only technology, the reliability of IDE-type drives has gone up a lot in recent times (thank god).
      • MTBF is the failure rate of drives that are neither defective nor worn out. If I had to guess, it's an estimate of the lowest point on the failure-rate bathtub curve, maybe around 2 years into the life of the drive. Defective drives usually fail within 1-2 years, and the rest start to wear out after 3-5 years.
  • by Cerdic ( 904049 ) on Saturday September 17, 2005 @02:02PM (#13585655)
    If they would stop eating around the hard drives, leaving crumbs in them, we wouldn't need to use Raid to take care of the cockroaches in them. Ugh.
    • I got an alpha hardware version of one of these bad boys and it came with a very handy extra that they assumed the end users wouldn't need: a de-bug port. Now I don't even NEED raid, but it's nice to have the option.
  • No NCQ? (Score:5, Insightful)

    by gbjbaanb ( 229885 ) on Saturday September 17, 2005 @02:03PM (#13585658)
    Interesting that they don't have NCQ, whereas SCSI drives generally do (well, called TCQ on SCSI IIRC)

    Is this just marketing speak, has it truly included SCSI features, or could it actually be better performing than SCSI in a RAID array?
    • Re:No NCQ? (Score:3, Informative)

      by rsborg ( 111459 )
      Is this just marketing speak, has it truly included SCSI features, or could it actually be better performing than SCSI in a RAID array?

      In short, without NCQ, SATA drives are going to be slower than SCSI. The other two features probably just offset/mitigate the speed differences, but I would probably hold out for something that has NCQ (or just go SCSI) if I were building a RAID today.

      • Re:No NCQ? (Score:2, Insightful)

        by keltor ( 99721 ) *
        I think that all SCSI RAID controllers disable it on the drives, as the controller takes care of all queueing. Remember, most SATA drives now have NCQ. WD chose to specifically disable this, as their regular Caviar SE drives have queueing.
      • Not slower than SCSI (Score:3, Informative)

        by Jamesday ( 794888 )
        Since I wanted some facts, Wikipedia ordered two systems for database service, both dual Opterons with 4GB of RAM and six drives. One with 10,000 RPM SCSI drives and one with 10,000 RPM SATA drives. The SATA system, without NCQ, was generally faster and ended up with a higher proportion of the site load assigned to it. The SCSI system was sometimes faster in mixtures which included lots of writes with lots of reads and that made it lag a bit less in replication of bulk update operations, so newer systems ha
  • Typo! (Score:3, Informative)

    by Anonymous Coward on Saturday September 17, 2005 @02:05PM (#13585671)
    "In works better in RAID..."

    You should change "In" to "It"

    Thank you very much.
  • About time (Score:5, Interesting)

    by Tuor ( 9414 ) <tuor.beleg@gTWAINmail.com minus author> on Saturday September 17, 2005 @02:06PM (#13585673) Homepage
    While I've been a proponent of SCSI for a long time -- Apple really was thinking ahead when it had it in Macs all those years -- it has been getting thread-worn. Ultra-wide-tall-double-hex-SCSI is just getting to be too much!

    SATA is the right technology, especially for controllers, since each channel is dedicated. The only alternative is Firewire, and there are no drives with native Firewire controllers.
    • This might be a good alternative to SCSI drives, except it is only 7,200 RPM!?!

      Why would Western Digital market THIS drive for RAID configs when they have 10K RPM SATA drives (Raptors) they could have used instead?
      • Re:About time (Score:3, Informative)

        by Glonoinha ( 587375 )
        RTFA - they used a different type of encoding on these drives in order to implement the 'time-limited error recovery.' The problem is that the encoding is done on three-vector bi-furate substrate instead of the two-vector bi-furate substrate used in the Raptors, and the 3V stuff can't handle speeds of the 10k RPM (the lateral acceleration at 10k RPM is significantly more than at 7,200 RPM, and the 3V stuff is taller than the 2V stuff - hence the problem.)
      • Why would Western Digital market THIS drive for RAID configs when they have 10K RPM SATA drives (Raptors) they could have used instead?
        AFAIK the Raptor is still stuck at 74GB, no? That's getting downright pathetic.

        Individually, each of these new drives is slightly slower than a Raptor. But the cheap price for high capacity would allow liberal use of RAID-1. A pair of these in RAID-1 should destroy a single Raptor in every read benchmark.

        • Well, yeah, the sizes of higher end drives tend to be pretty poor because they use smaller, less dense platters to aid seek times (less distance for the head to travel, less time waiting for the head to settle on a track). These drives quote latencies of ~9ms; Raptors and SCSI drives start at around 4.5ms and get as low as 3.

          In a server environment where you're limited by the number of drives you can fit in your icky pizzabox (and power and cool effectively) you can't always just say "let's throw twice as
    • Re:About time (Score:3, Interesting)

      > ...especially for controllers since each channel is dedicated...

      I generally tend to agree with that, but as a guy running 8 200GB SATA drives on four controllers, I can tell you that the PCI bus gets saturated _way_ too quickly for my tastes.

      • Re:About time (Score:3, Informative)

        by sumdumass ( 711423 )
        8 drives on four controllers.

        You could get around that if you were to use an Adaptec Serial ATA RAID 2810SA with 8 ports or the more expensive Adaptec Serial ATA RAID 21610SA with 16 ports.

        You might look at the price and say too expensive, but the speed and available configurations should make up for it. Besides, I got mine for around $425, which is less than their suggested price. Also, both these cards can use the wasted space from mismatched drive sizes as well as run multiple RAID volumes on each drive. What I lik
        • They're AAC devices, I believe. If their SCSI models are anything to go by, I'd say... avoid like the plague. Between buggy firmware, overheating hardware, crappy drivers and even crappier management tools, you're probably better off spending that money on a motherboard with a decent bus (which you'll need to drive any non-trivial RAID card anyway) and just using software RAID on a known-good controller. This is based on several years' experience with a number of 2120S cards and various systems and OS's.

          Se
          • I cannot seem to find where it says that is an AAC. Here are some specs for it [adaptec.com]. I'm not exactly sure what AAC is anyway. I just know these are better than the $70 cards that need the array rebuilt for each operating system installed. With the 2810sa, I was able to access it from Windows, Linux installed or not (old versions as well as new) and even DOS without a special driver or software program installed.

            Seriously, this is the company which sells a £50 dual port SATA card based on the same SiI chi

            • AAC is "Adaptec AdvancedRAID Controller", and yeah, I just checked and that model is indeed one.

              It is indeed a real hardware controller, but it's quite likely they have a bog-standard SATA controller behind the 80303; I think they have an AIC79XX behind the SCSI models, but I dunno, I've never had a good look at the physical card.

              They're not *awful*; our master database has run one for years with few problems (FreeBSD; uptime currently 218 days, with ~5.2 billion queries), the main one being disks randomly p
      • Re:About time (Score:3, Interesting)

        by Fweeky ( 41046 )
        Quite. 32bit 33MHz PCI (especially shared among on-board stuff *and* multiple card slots) is amazingly feeble these days, so consumer-level PCI Express comes not a minute too soon. Of course if you can afford and appreciate 8 200G drives you can probably also afford and appreciate a half-decent workstation/server board with PCI-X, but even a pair of modern drives can completely saturate the bus, and if you're into file sharing over GigE even one drive is way too much.

        For that matter even sharing /dev/zero
    • Re:About time (Score:3, Informative)

      by Ilgaz ( 86384 )
      Well, Serial Attached SCSI has started to ship:

      http://www.adaptec.com/sas/index.html?source=home_story1a_SAS_technology_home [adaptec.com]

      The pro level is already moving, but I suspect it will be OK for home use too, with the enterprise features it offers.

      I checked a bit you know ;)
  • by Nom du Keyboard ( 633989 ) on Saturday September 17, 2005 @02:08PM (#13585689)
    How does the lack of Native Command Queuing improve RAID performance? Generally I thought NCQ improved all drives' performance, and TFA says that NCQ is normally part of enterprise high-performance drives.
    • The RAID card handles it. Please remember I am just guessing here; I don't know. However, I know there used to be PATA IDE RAID cards that did this. The discs didn't support any kind of special reading; they just processed requests in order. OK, no problem: the controller, which had a processor, RAM, etc. did all that. It would implement scatter-gather and so on. Basically it was a SCSI RAID controller with IDE connectors instead.

      So perhaps the thought here is since you have a controller that'll handle it
  • by ptbarnett ( 159784 ) on Saturday September 17, 2005 @02:11PM (#13585702)
    Western Digital has been selling an EIDE version with this feature set for a while:

    http://www.wdc.com/en/products/Products.asp?DriveID=92 [wdc.com]

    I bought one to replace what I thought was a bad drive in a RAID configuration about a year ago.

  • TechReport (Score:5, Informative)

    by JohnnyBigodes ( 609498 ) <morphine AT digitalmente DOT net> on Saturday September 17, 2005 @02:14PM (#13585719)
    TechReport's proper review is here [techreport.com].

    Go read. Now!
  • by garat ( 899448 ) on Saturday September 17, 2005 @02:19PM (#13585750) Homepage
    Here's an interesting quote from Tom's Hardware [tomshardware.com]:

    "In sum, we must state that all Command Queuing enabled drives have an advantage over those that do not support this feature. At the same time, CPU load is also slightly higher when Command Queuing technologies are used. However, considering the performance of today's processors, the additional CPU load is a marginal factor."

    Basically, you put some load on the processor for increased disk performance... Why not include it?
    • Probably because of the mirage of software RAID cards that use the system's processor and memory. Are we talking about an extra load on the system's processor, the RAID controller's, or the little processor on the drive? (Yes, there is a small processor on most IDE or SATA drives that does LBA.)

      Many of the cheaper-level controllers do this exact thing but appear to be a hardware controller. OTOH, I'm not sure if a true hardware controller would be able to take advantage of it either. Are there current SATA RAID controllers
    • Because that functionality is supposed to be handled by the RAID controller, and if the drive has it enabled you can end up losing performance as the two command queue controllers fight each other. Theoretically you can turn it off (most SCSI-based solutions do), but in practice many ATA drives lie to you when you tell them to turn off certain features (like write caching) and leave them on.
    • CPU load is also slightly higher when Command Queuing technologies are used.

      I'm not sure if the command queuing is done on the physical drive (seems reasonable to me) or in the SATA driver (could be). But either way, the command queuing is not going to be a load on the CPU. The observed load on the CPU is probably because it is free from waiting on the disk and is going about its business doing what it's supposed to do.
  • by laing ( 303349 ) on Saturday September 17, 2005 @02:22PM (#13585765)
    The manufacturer specifically says to only use these in a RAID-1 configuration (mirroring). They have a reason for this: the error recovery mechanism is abbreviated. So what does Sal do... He connects two drives in a RAID-0 configuration. Now his data reliability has gone to about 1/4 of a regular drive.
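
    To put rough numbers on the striping penalty (illustrative only; the single-drive survival probability is invented, and the extra cost of the abbreviated error recovery is not modeled):

      # Survival probability over some fixed period for one drive vs.
      # a two-drive mirror (RAID-1) vs. a two-drive stripe (RAID-0).
      p = 0.95                     # chance one drive survives the period

      raid1 = 1 - (1 - p) ** 2     # mirror fails only if BOTH drives fail
      raid0 = p ** 2               # stripe fails if EITHER drive fails

      print("single drive: %.4f" % p)      # 0.9500
      print("RAID-1 pair:  %.4f" % raid1)  # 0.9975
      print("RAID-0 pair:  %.4f" % raid0)  # 0.9025 -- ~2x the failure odds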
    • Yep, RAID-0 is the configuration you want to use if you want to risk losing data. You want RAID-1. I'm not saying it's an excuse to skip a proper backup, though; RAID merely keeps you going should a drive die.
    • Damn right. For those who don't get it, RAID-0 is not even really RAID. RAID is Redundant Array of Independent Disks. RAID-0 does not have any redundancy! It should be NRAID-0 (Non Redundant Array of Independent Disks). Only an idiot would even consider RAID-0 for any purpose.
      • If you need speed, but don't care about data loss raid-0 is entirely appropriate. For example, I keep my games installed on a 4-stripe raid-0 volume, since it helps considerably with level load times, and should a drive fail, it's easy enough to reinstall the games.
      • Not to niggle, but your assessment is incorrect. RAID-0 is just as redundant as RAID-1. There are two disks performing a job that either one of them could do alone. The second disk is redundant. RAID-0 uses the redundancy for performance instead of continuous backup.

        OK, so it was just to niggle.

      • Only an idiot would even consider RAID-0 for any purpose.
        If the consequences of total drive or card failure are very small it isn't a problem. I have a production machine with a couple of striped 200GB disks, but can afford to lose it for up to a week if necessary. With that system a full bare metal restore would only take a few hours unless I have to spend time finding a new card.
      • So, I have seven database servers, all with identical copies of the data. Do I really care if I lose all the data on one of them because one drive in a RAID 0 set fails? The completely redundant systems do the job better than any RAID setup can.

        You consider RAID 0 when you don't care about losing the data if there's a drive failure and want the benefits of striping and the extra space available for a given number of drive bays, compared to other RAID levels. RAID 5 can get you some of the space but it's slo
  • NCQ.. (Score:2, Informative)

    NCQ allows the hard drive to reorder various commands/accesses to suit its current head position. Depending on your app you might not see a lot of benefit from it, e.g. when you do serial access all the time, but the lack of it will certainly cause degradation when multiple apps are active. Also, by using one big hard drive instead of multiple smaller ones you're putting all your eggs in one basket. Mechanical problems are more frequent than magnetic ones for a hard drive.
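
    A toy model of what that reordering buys (simplified to head travel over track numbers, ignoring rotation; the queue size and track count are made up):

      import random

      def travel(start, order):
          """Total head movement to service requests in the given order."""
          pos, total = start, 0
          for track in order:
              total += abs(track - pos)
              pos = track
          return total

      random.seed(1)
      queue = [random.randrange(10000) for _ in range(32)]  # queued request tracks
      head = 5000

      # Greedy reordering: always service the nearest outstanding request.
      pending, pos, reordered = list(queue), head, []
      while pending:
          nxt = min(pending, key=lambda t: abs(t - pos))
          pending.remove(nxt)
          reordered.append(nxt)
          pos = nxt

      print("FIFO head travel:     ", travel(head, queue))
      print("Reordered head travel:", travel(head, reordered))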
  • by v1 ( 525388 ) on Saturday September 17, 2005 @02:31PM (#13585803) Homepage Journal
    These buggers are hard to find for anywhere near decent cash. I've found one model that is fairly popular, going by several different names and brands, but nobody seems to have them in stock. They look like a GREAT deal and are loaded with most or all of the best features of RAID 5 (hot swap, live rebuild, live GROW, etc). Has anyone seen one IN STOCK anywhere?

    Same exact models:

    http://www.raidweb.com/fb605fw.html [raidweb.com]
    http://www.micronet.com/General/prodList.asp?CatID=45&Cat=Product [micronet.com]
    http://www.firewiremax.com/fire-wire-1394-ilink/miharasyfor5.html [firewiremax.com]
    http://www.pcrush.com/prodspec.asp?ln=1&itemno=77919&refid=1057 [pcrush.com]
    http://www.cooldrives.com/firewire-raid-5-enclosure-mini.html [cooldrives.com]
    http://www.topmicrousa.com/combo-205.html [topmicrousa.com]

    same internals, different enclosure:

    http://fwdepot.com/thestore/product_info.php/products_id/657 [fwdepot.com]
    http://www.cooldrives.com/fii13toatade.html [cooldrives.com]

    Everyone I call says they have them in stock. Then I ask them to check and they suddenly change their mind and say no, it's not really in stock (despite what their web page says), and they expect it in the generic "1-2 weeks" (retail-speak for "we don't know when it'll be in, please call back later").

    Two of them actually told me they have yet to receive any of these units, so I don't think they've shipped from the manufacturer yet? (vaporware?)
    • Seems like you could save quite a bit of money by going with something like this (assuming it was SATA you're looking for):

      http://www.macgurus.com/productpages/sata/satakits.php [macgurus.com]

      They have 2-, 3-, 4-, and 8-bay kits to suit your needs. Get 'em with or without drives, cables, etc. The only drawback I see is the lack of a controller card (might have to go with something like the Sonnet further down the page). Then again, this may not be such a drawback, since you're not stuck with a built-in RAID controller
      • I've already got a pair of 8-bay towers, same design as your link but beige instead of black. Been using them as software mirrors, which has worked in the past but is becoming very cumbersome. I was just hit with a DOUBLE drive failure that very nearly cost me 250GB of data, so I am looking for a RAID-5 self-contained solution. I could really use the improved efficiency of usable space in RAID 5 - 80% on a 5-drive system, as opposed to 50% on my mirrors. I also want a box that I stuff drives into, and plu
    • Seems kind of expensive for what they are. We use these at the office: http://www.infortrend.com/ [infortrend.com] which can be purchased from Adjile Systems [adjile.com].
      • Those use quite a few more drives than I require, but worth looking into. Shame they have no prices on their web site; I'll have to give them a ring on Monday and see what they can do.

        In my brief browse I didn't see a big writeup on what those enclosures can do, though I did find several good descriptions of what those 5-bay units I found can do. They seem to do everything we need... firewire 800, no controller card, hot swap, hot rebuild, and though I wouldn't trust it... hot grow. That feature set has
  • Network RAID? (Score:5, Interesting)

    by Eccles ( 932 ) on Saturday September 17, 2005 @02:33PM (#13585808) Journal
    Is there a reasonable cost, relatively low power RAID-5 setup for home networks? I'd love to set up a file server with gigabit ethernet and RAID-5 to serve as the home directories for my multiple machines. Things like the Buffalo LinkStation are a step in the right direction, but no RAID, etc. Is my only solution a Celeron or Pentium-M based PC? If so, is it possible to set up such a system to act as home directories for a combo of Windows, Mac, and /or Linux machines?
    • buffalo has a terabyte 4x250GB drive raid capable of raid5 (750GB usable). has gige too, but a relatively slow processor, and though the box clearly states it supports nfs it doesn't and they don't plan to. if you're pure windows and just looking for a nice solution it sounds good though, very small and easy to manage. mac and linux will have to use samba, which is a decent bit slower, and without the same permissions, but how much slower depends on your workload. For things like a central backup for important data
    • Re:Network RAID? (Score:3, Informative)

      I really think your best bet would be to use an old P3 or Athlon running software RAID on Linux or BSD. You can add several PATA/SATA cards to your machine and stack it full of drives and fans, and I think you'll find the performance to be acceptable. Of course, no software RAID can compete with an expensive SCSI RAID card with a dedicated XScale chip or whatever, but it's a heck of a lot cheaper.

      It also depends what you want to be doing with it. I've played with both hardware and software RAID5 and home an

      • For a small SATA RAID setup for a small server I've been very happy running a system with a 4 port, SATA-150 LSI MegaRAID card. It supports RAID 0, 1, 5, and 10 in hardware. It's not the cheapest way to go but with some OEM SATA drives the card shipped with its own SATA cables so I was good to go. I've run both SuSE Linux 9.2 and 9.3 on my server and the LSI controller was supported out of the box. My needs were pretty simple and so I've been using two SATA drives in a RAID 1 (mirrored) setup. I guess I reg
    • Re:Network RAID? (Score:2, Informative)

      by xlsior ( 524145 )
      One of the major reasons for the high price of most hardware RAID-5 solutions is the hot-swap backplane. If you are OK with a solution where you have to shut down the server in order to replace a bad drive (which would be OK for most home use, I would imagine), you can find some *very* cheap hardware RAID controllers ($50, for both ATA and SATA) that will do the job just fine...
    • Buffalo TeraStation (Score:5, Informative)

      by chocolatetrumpet ( 73058 ) <(moc.treblifnahtanoj) (ta) (todhsals)> on Saturday September 17, 2005 @03:40PM (#13586120) Homepage Journal
      Buffalo TeraStation [buffalotech.com]

      Supports RAID 5.

      I emailed to ask if external USB hard drives could be added and swapped into a RAID 5 array, and if it could be done "on the fly"...

      but all I got was this lousy message:

      "Please call (800) 456-9799 x. 2013 between 8:30 and 5:30 CT and our presales guys will be able to assist you."

      I'm one of those weird people that would rather communicate in writing. Oh well - no sale.
      • Please call (800) 456-9799 x. 2013 between 8:30 and 5:30 CT and our presales guys will be able to assist you.

        translation: call us on the phone so we can lie with impunity, and you can't prove it.

        par for the course anymore
    • What's your idea of reasonable cost? You could probably do just that for less than $1500 on a PC using Linux with the various SMB configs around. Or you might be able to use some Windows version with Cygwin exporting NFS or something. Anyway, $1500 sounds a little expensive for most (it is the price of a new midlevel gaming rig).

      I'm sure there is some NAS stuff, but it tends to be what I would consider pricey too. Of course, reasonable cost is a relative term, so my idea might be lower than yours.
    • Is there a reasonable cost, relatively low power RAID-5 setup for home networks?

      RAID-5 for home networks is a solution looking for a problem. RAID-5 is nice for minimizing downtime, but for a home network downtime is very seldom the real problem.

      You see, the problem is usually not that my harddisk failed, but that I need to get an older version of a file, or get a file I deleted by accident. RAID-5 is utterly useless for this. For most home users it's better to use something like rsnapshot [rsnapshot.org] and take daily/hourly
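
      rsnapshot itself is a Perl wrapper around rsync; purely to illustrate the idea, a minimal hardlink-snapshot scheme looks roughly like this (the paths are hypothetical, and rsync must be installed):

        import os
        import subprocess
        import time

        SRC = "/home/"                 # what to back up (example path)
        DEST = "/backup/snapshots"     # where snapshots live (example path)

        def take_snapshot():
            """Make a dated snapshot; unchanged files are hardlinked against
            the previous snapshot, so each snapshot looks complete but only
            changed files consume new space."""
            os.makedirs(DEST, exist_ok=True)
            snaps = sorted(os.listdir(DEST))
            new = os.path.join(DEST, time.strftime("%Y-%m-%d_%H%M%S"))
            cmd = ["rsync", "-a", "--delete"]
            if snaps:
                cmd.append("--link-dest=" + os.path.join(DEST, snaps[-1]))
            subprocess.check_call(cmd + [SRC, new])

        take_snapshot()   # run from cron for the daily/hourly copies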

    • A Celeron isn't really amazingly low power. I'm thinking something more like VIA EPIA. You don't really need a lot of CPU as long as the only thing the system is doing is filesharing and handling the RAID. Generally speaking a fairly current CPU will knock the socks off of the RAID performance of even fairly expensive controllers because they really don't have all that much CPU on them. Using linux md or similar you can create whatever kind of RAID levels you want. Pentium-M would work fine, but the power c
  • Dumb Drives (Score:3, Interesting)

    by Doc Ruby ( 173196 ) on Saturday September 17, 2005 @02:47PM (#13585851) Homepage Journal
    EIDE drives are the cheapest type. But AFAIK, each drive has a controller card onboard, which seems redundant when all the drives are being controlled in conjunction. Software RAIDs seem to have parity (pun intended ;) with HW RAID controllers, but wouldn't a real "Made for RAID" drive have nearly no controller logic of its own (maybe just a data separator and head/spindle speed/position calibration)? Lots of the logic for controlling a RAID drive would be on the central controller card, or running on the CPU. So why have more on the drive? The cheaper the drives, the bigger the array at the same budget (shared overhead of a common controller).

    Am I correct, or are some RAID drive makers already doing this? Or have I just got all the controller:drive economics wrong?
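
    For what it's worth, the parity in question is plain XOR, which is cheap enough to run almost anywhere. A quick sketch of how a RAID-5 set regenerates a lost block (toy 16-byte blocks):

      def xor_blocks(a, b):
          """XOR two equal-length byte blocks."""
          return bytes(x ^ y for x, y in zip(a, b))

      d0 = b"block on drive 0"
      d1 = b"block on drive 1"
      d2 = b"block on drive 2"

      parity = xor_blocks(xor_blocks(d0, d1), d2)  # stored on the parity drive

      # Drive 1 dies; rebuild its block from the survivors plus parity:
      rebuilt = xor_blocks(xor_blocks(d0, d2), parity)
      assert rebuilt == d1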
  • by rongage ( 237813 ) on Saturday September 17, 2005 @03:18PM (#13586027)

    Is it just me, or did this review stink for lack of proper testing and comparison...

    If I were comparing this product and its performance, I certainly would not be benchmarking a SATA-based RAID setup against a single Parallel ATA drive. Something in this arrangement just doesn't seem... well, logical.

    If you were really going to try to impress me with its performance, then you would have to show me how it compares to "non-RAID" optimized drives of near-similar characteristics. Show me how this drive performs against, say, Hitachi SATA 320 gig drives using an identical test rig. Also show me how this drive compares to 320 gig SCSI drives. Show me the results as JBOD, RAID-0, RAID-1 and RAID-5. You know, like the real world.

    While the graphs are pretty, I'm afraid this "review" is fairly content-free.

    • but it's fairly good at getting ad impressions, which is all it's designed to do.

      most "reviews" on the web are are extremely basic done by people with little knowledge in the methodology of testing hardware/software.

      it's useful in that it exemplifies how not to review products.
  • by jandrese ( 485 ) * <kensama@vt.edu> on Saturday September 17, 2005 @03:51PM (#13586165) Homepage Journal
    IMHO, the biggest thing manufacturers could do to make drives more RAID-friendly is to change the name (even with just a v1, v2, etc...) when they change platters.

    Nothing is worse than buying a bunch of drives and a couple of spares, building the array, and then discovering down the road that one of your spares came from a different production run and has a slightly different (maybe 3 blocks smaller) geometry and can't be used in your array. Usually there is absolutely no indication on the box or the drive that one of your drives is different unless you decode the cryptic serial number.

    For that matter, just printing the exact LBA count on the back of the box would be a huge boon.

    This isn't limited to ATA drives either. I've seen it plenty of times in professional SCSI solutions too, especially as the arrays start to get older.
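
    Until then you can check a spare yourself before trusting it. On Linux, for instance, a block device's exact size (and hence its LBA count) can be read like this (device paths are examples; needs root):

      import os

      def lba_count(device):
          """Return (bytes, 512-byte LBAs) for a block device."""
          fd = os.open(device, os.O_RDONLY)
          try:
              size = os.lseek(fd, 0, os.SEEK_END)
          finally:
              os.close(fd)
          return size, size // 512

      # Compare these across "identical" drives before adding one to an array.
      for dev in ["/dev/sda", "/dev/sdb"]:
          size, lbas = lba_count(dev)
          print("%s: %d bytes, %d LBAs" % (dev, size, lbas))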
  • by adrianmonk ( 890071 ) on Saturday September 17, 2005 @03:59PM (#13586204)

    I would think if these drives are really designed for RAID (like other drives have been in the past), then they would have support for synchronized spindles.

    The idea behind synchronized spindles is that in order to read data from a disk, you have to wait for the platter to come around part of a revolution for your data to become available, just like picking up your suitcase on the luggage carousel at the airport. How long you need to wait is a matter of luck, because the disk can be assumed to be in a random position when you decide you want your data. When you have RAID without synchronized spindles and you want data that's bigger than the stripe width (or when you're writing and need to update the parity), you have to wait for multiple disks, and they will tend to be spread out so that you tend to wait longer than if you were just waiting for one. With synchronized spindles, as soon as the whole group hits the right position, you've got what you're looking for, and you're done.

    So, the point is, not having synchronized spindles tends to increase average access time, so having synchronized spindles is a desirable feature for a drive designed specifically for RAID.
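
    The size of the effect is easy to estimate with a simulation (idealized: full-stripe reads, rotational latency only, platter positions uniformly random):

      import random

      def avg_wait(n, trials=100000):
          """Mean rotational wait, in revolutions, for a full-stripe read
          across n unsynchronized drives: you wait for the worst-positioned
          platter, and the mean max of n uniforms is n/(n+1)."""
          total = 0.0
          for _ in range(trials):
              total += max(random.random() for _ in range(n))
          return total / trials

      for n in [1, 2, 4, 8]:
          print("%d unsynced drives: %.3f rev (synchronized: 0.500)" % (n, avg_wait(n)))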

    • On a mirrored RAID, having them out of sync can be better, at least in theory, if probably not in practice. There's no easy way for software to know where an SATA drive is in its rotation, but if you request the same block at the same time from both drives, one of them will respond first, and it will be sooner on average than if both drives were in sync.
      • If you're after data from two different stripe positions on a RAID 1 set, sending one drive to each place will get you the data faster than having both go to one place then both go to the other.
  • by Rev.LoveJoy ( 136856 ) on Saturday September 17, 2005 @04:14PM (#13586282) Homepage Journal
    In my whole IT career (some ... christ ... 13 years now) I have seen no other vendor of HDD that comes close to WD for sheer volume of failed drives (Maxtor is a distant 2nd). That they resort to cheap marketing gimmicks like this (1 million hours mean time between failure, puhleeze, these are the people who pioneered the 1 year warranty) is only so much more indication of their propensity to manufacture garbage.

    Buy their gear if you must but I would not put my data on it.

    -- RLJ

  • 3 platters (Score:3, Insightful)

    by dtfinch ( 661405 ) * on Saturday September 17, 2005 @04:52PM (#13586467) Journal
    I'm no expert, but I look forward to mostly buying 2-platter drives from now on. Early failures seem to double when you add a third platter, and 5 platters is just scary. You can get 250GB SATA 2-platter Seagate drives for about $110 each, which seem to have a great record for reliability so far. But when I need real SCSI reliability I'll just get a real SCSI. The warranty for most SATA drives may be 5 years, but usually it's void if you put it in a server.
  • I have a serious question. I have had *VERY* bad experience with SATA drives. I had a two-drive SATA RAID-0 for temp and cache and video processing. One drive crashed within 2 years; click of death. So of course the RAID is gone, but I reformatted the remaining drive and used it by itself. Gone in under another year.

    I don't remember the manufacturer, but I realize it could just be a bad manufacturer.

    BUT just recently the SATA drive in a friend's factory-spec Dell crashed hard. No click-death but seems li

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...