Data Storage Hardware

Chipset Serial ATA RAID Performance Exposed 359

TheRaindog writes "Serial ATA RAID has become a common check-box feature for new motherboards, but The Tech Report's chipset Serial ATA and RAID comparison reveals that the performance of Intel, NVIDIA, SiS, and VIA's SATA RAID implementations can be anything but common. There are distinct and sometimes alarming performance differences between each chipset's Serial ATA and RAID implementations. It's also interesting to see performance scale from single-drive configurations to multi-disk arrays, which don't offer as much of a performance gain in day-to-day applications as one might expect."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • All too common (Score:3, Insightful)

    by Jedi1USA ( 145452 ) on Monday June 14, 2004 @10:07AM (#9420056)
    To put "compatable" above "performance" just to save time and a couple of pennies a chipset.

    • Quick question (Score:5, Insightful)

      by sczimme ( 603413 ) on Monday June 14, 2004 @10:34AM (#9420312)

      You are a hardware vendor. Would you rather sell a) 10,000 units that are broadly compatible but offer [arbitrary number] 80% performance or b) 3,000 narrowly-focused units that offer 100% performance at a slight price premium?

      I believe the revenue generated by selling 10,000 units would outweigh that of the 3,000 higher-priced units, even if the technology in a) is inferior.

      I'm not saying this is the best/worst/right/wrong way of looking at the situation; I'm saying this is probably the compromise the vendor has to make when offering such items.
      • Correct. Compatibility means a hit on performance. If you want the ultimate performance, you invariably end up with a highly customized system.

        In an important way, performance==customization.

        Look at overclockers. For the increase in performance, significant incompatible customizations have to be done.
      • by AHumbleOpinion ( 546848 ) on Monday June 14, 2004 @11:25AM (#9420804) Homepage
        I'm not saying this is the best/worst/right/wrong way of looking at the situation

        Choosing compatibility over performance is probably the smarter decision when you are dealing with integrated devices. Those who want top performance can add the appropriate PCI/PCI-X/PCIe card.

        Also, machines that need top performance often also need low downtime. When that RAID hardware goes bad, replacing the card is far easier, and less expensive, than replacing the motherboard.
    • by Anonymous Coward on Monday June 14, 2004 @10:37AM (#9420352)
      Why do folks act shocked when commodity hardware behaves like commodity hardware?

      Why should computer hardware be exempt from the "you get what you pay for" dictum that dominates other markets?

      And when you make millions and millions of any one thing, a "couple of pennies a chipset" adds up. Once again, that's what you get when you buy a commodity.

  • by jefe7777 ( 411081 ) on Monday June 14, 2004 @10:08AM (#9420067) Journal
    it's not about pure transfer rate, as newbs, and even an alarming number of techies, often think...
    • yes and the world isn't all about britney spears, but she sure is hot...

      and pure transfer rate is an extremely important statistic and consideration for many storage uses

      by grouping newbs and "an alarming number of techies", are you suggesting you represent a new and improved species of techie? oh yeah? well what's your max transfer rate? huh? eh?
  • Surprised much? (Score:4, Insightful)

    by Anonymous Coward on Monday June 14, 2004 @10:08AM (#9420070)
    SiS, nVidia and Via are hardly world renowned for their RAID controllers, so why should we all act surprised that a consumer-level product from low-cost manufacturers with very little experience designing these types of devices doesn't exactly have screaming fast performance?
    • Re:Surprised much? (Score:3, Interesting)

      Because oftentimes the SATA controller is not included in the integrated southbridge. Unfortunately the site is slashdotted so I don't know exactly which motherboards we're dealing with, but an example I have off hand is the Tyan S2885, which uses an AMD 8111 southbridge and a separate Silicon Image SATA controller. Sort of like how many mainboards with three or four IDE disk connectors, or perhaps SCSI support, use Promise, Highpoint, Adaptec, or other PCI devices for the added functionality.
      • Re:Surprised much? (Score:2, Informative)

        by Anonymous Coward
        No, these are all embedded SouthBridge devices, nothing from SI is on test.
    • Indeed. In fact, onboard RAID is what I look to have absent from any motherboard I buy, since the BIOSes (what's the plural of BIOS?) tend to conflict. This means that adding a third-party PCI RAID card to a system with onboard RAID doesn't always work.
  • Best Upgrade (Score:5, Insightful)

    by swordboy ( 472941 ) on Monday June 14, 2004 @10:08AM (#9420071) Journal
    I think that the hard drive is the most overlooked upgrade for a "power user". If at all possible, go out and pick up a 15krpm Ultra SCSI hard drive [overstock.com] and controller for the boot partition. Use that slow ATA crap for storage of non-performance type stuff.

    18 or 36 gig drives aren't exactly too expensive given the performance that they offer.
    • Re:Best Upgrade (Score:3, Interesting)

      by Anonymous Coward

      >[...]for the boot partition.

      I boot once a day. I'm typically in the bathroom while the machine goes up. Seems like a darn waste to put the boot partition on a RAID-0.

      I run all my games off a RAID-1, and it does help loading time in most games. Game resources are ever-increasing in size.

    • Re:Best Upgrade (Score:5, Interesting)

      by Lead Butthead ( 321013 ) on Monday June 14, 2004 @10:18AM (#9420156) Journal
      Good deal, ONLY if the manufacturers are being honest with drive specs. Several coworkers who used to work for Quantum have indicated that the actual drive mechanisms used in SCSI and ATA drives frequently share common mechanical parts (platters, spindle motors, etc.). Their differences are ENTIRELY artificial. SCSI drive specs at times looked "better" in part because of firmware differences and cheating on the spec; for instance, seek times for SCSI drives are computed differently to create the illusion that somehow SCSI drives seek faster...
      • Re:Best Upgrade (Score:3, Insightful)

        by rhinoX ( 7448 )
        This is pretty well known. But I would hardly consider the difference between a 15k rpm drive and a 7200 rpm drive "artificial".

        Plus, everyone knows Quantum drives were total crap.

        • Re:Best Upgrade (Score:3, Insightful)

          by GooberToo ( 74388 )
          And, by far, most users are going to see pretty much the same difference between a 7200rpm drive and a 10k drive. A power user should spend their money on a 10k drive and spend the difference they saved on more memory and/or a better video card.

      • by john_uy ( 187459 ) on Monday June 14, 2004 @11:00AM (#9420567)
        There are physical differences in the manufacturing of the drives:

        1. The platter surfaces are different between IDE and SCSI; the SCSI platters are more reflective than the IDE ones, though I am not sure if this affects reliability.

        2. The platter diameter is much smaller in SCSI than in IDE, which probably helps them achieve a higher RPM than their IDE counterparts.

        3. The head actuator is much sturdier in SCSI (probably attributable to better magnets); I find it much more difficult to move the heads on a SCSI drive than on an IDE drive.

        4. There are more chips underneath a SCSI drive than an IDE one, though that alone doesn't tell you much. In FC drives, however, there are two DSP chips: one handles internal drive functions like the motor and heads, while the other handles host I/O requests, making them much faster!

        5. SCSI drives have a higher MTBF. That may not be the only gauge of quality, but SCSI drives are much better built.

        • I find this hard to believe. SCSI and IDE are just interfaces. They have nothing to do with MTBF and the color of the platters. You probably just had two different drives that happened to be IDE and SCSI. As far as I know, it's entirely possible to get the same seek times and reliability in SCSI and IDE. The only difference is how the firmware and wires work.
          • by Jeff DeMaagd ( 2015 ) on Monday June 14, 2004 @01:12PM (#9422024) Homepage Journal
            While SCSI and IDE are just interfaces, that often isn't the only difference, because they are sold to different markets.

            The IDE drives are sold to a consumer market where they don't need to be tested as rigorously. SCSI drives are often tested more rigorously from a mechanical, electrical, and firmware standpoint. Because SCSI drives are often sold for heavy server use, they must be able to withstand constant use, around the clock, for years.

            While it is possible to get the same mechanicals in both SCSI and IDE formats, I don't think that is done for any of the cheapest drives; IIRC, the WD Raptor is one case where it is. So far as I know, there aren't any 15k RPM SATA or IDE drives. It could be done, but it wouldn't be that much cheaper.

            10k and 15k RPM drives also have different platters, cases and mechanicals - the platters are more like 2" in diameter than 3".

            Generally a SCSI drive is expected to last for five years, and I suspect that there really is an improved build quality to justify the 5-year warranties that drive makers put on SCSI drives, in a day when a typical IDE drive gets one year or, if you are lucky, three.

            I know it isn't much to say, but I've yet to have any of my SCSI drives fail on me, something I can't say for the IDEs.
      • Re:Best Upgrade (Score:4, Informative)

        by Isao ( 153092 ) on Monday June 14, 2004 @11:13AM (#9420694)
        Another reason that SCSI drives perform better in RAID arrays is that SCSI permits out-of-order I/O request execution.

        If a read request goes out to drive 3 and waits for rotational latency, the channel is not blocked. Another request for a read on drive 1 can be executed and satisfied while still waiting on drive 3.

        IDE performs blocking I/O, so everything would have to wait until drive 3's read was complete. I don't know if this also applies to SATA.

        • Re:Best Upgrade (Score:4, Informative)

          by stilwebm ( 129567 ) on Monday June 14, 2004 @01:15PM (#9422062)
          Another reason that SCSI drives perform better in RAID arrays is that SCSI permits out-of-order I/O request execution.

          It also has great command queuing as part of that out-of-order command execution. Serial ATA supports Native Command Queuing, providing these features plus First Party DMA and Interrupt Aggregation. Hardware support is relatively new; Seagate was the first to make a drive that supported it. My understanding is that the majority of Serial ATA drives out there essentially have parallel IDE controllers with a Serial ATA converter.

          Here is a great article from Intel on NCQ: PDF [intel.com] HTML [216.239.39.104].

          IDE performs blocking I/O, so everything would have to wait until drive 3's read was complete. I don't know if this also applies to SATA.

          Interrupt Aggregation and First Party DMA were designed to limit the effects of this. SCSI still has an advantage with its offloading controller, though. I also understand that the maximum queue depth for commands on SATA is 32, while it is 256 for SCSI.
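
          As a rough illustration of the queuing benefit described above, here is a minimal Python sketch. The track numbers and the greedy nearest-first policy are assumptions made up for this example; real NCQ/TCQ firmware is considerably smarter, but the effect on total head travel is the same kind of thing.

          # Compare total head travel for a queue of requests serviced in arrival
          # (FIFO) order vs. reordered shortest-seek-first, which is roughly the
          # kind of reordering TCQ/NCQ let the drive do on its own.

          def total_travel(start_track, requests):
              """Sum of head movements when requests are serviced in the given order."""
              travel, pos = 0, start_track
              for track in requests:
                  travel += abs(track - pos)
                  pos = track
              return travel

          def reorder_nearest_first(start_track, requests):
              """Greedily service the nearest pending request each time."""
              pending, order, pos = list(requests), [], start_track
              while pending:
                  nearest = min(pending, key=lambda t: abs(t - pos))
                  pending.remove(nearest)
                  order.append(nearest)
                  pos = nearest
              return order

          queue = [880, 12, 873, 955, 30, 901]   # pending reads (example track numbers)
          print("FIFO travel:     ", total_travel(100, queue))
          print("Reordered travel:", total_travel(100, reorder_nearest_first(100, queue)))
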
    • LOUD much? (Score:2, Interesting)

      by Anonymous Coward
      go out and pick up a 15krpm Ultra SCSI hard drive

      Riiiight, I want a quieter computer not a turbo-fan-jet-in-a-box.

      Of all the benchmarks I've seen, with a configuration of 4 or fewer drives, modern UDMA ATA drives can keep up with the best SCSI drives. They are cheaper and use newer, quieter technologies.
    • Hello.. did anyone bother to mention the difference between a basic RAID chipset, a RAID card, and a RAID card with XOR engine?

      My somewhat bitter and basic description:

      RAID Chipset - cheap RAID controller chip and the sacrifice is that it uses system processor power to work the RAID calculations.

      RAID card - a little more expensive, however, has a basic RAID controller chip which offloads some of the processor requirement.

      RAID card with XOR engine - a full blown chip that controls and processes the RAID
    • Might also be a good place to put your swap.
  • by biscuit67 ( 517991 ) on Monday June 14, 2004 @10:12AM (#9420102)
    Does the RAID driver typically allow two independent seeks on the separate drives with mirroring enabled? I would expect this to significantly improve things like boot times, as most of the time is spent seeking for new data. I would have expected a 50% drop in seek time. If they don't do independent seeks, why the hell not?
    • by cduffy ( 652 ) <charles+slashdot@dyfis.net> on Monday June 14, 2004 @10:21AM (#9420184)
      It's not as big of a boost as you might think, because not infrequently you'll be reading enough data to require two consecutive stripes to be read (anything that crosses a stripe boundary, typically at 64k).

      Then you can be penalized for seeking your heads independently, because you pay your seek time separately for the second 64k of a given read.
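
      To see the parent's point concretely, here is a small Python sketch, assuming a two-disk RAID 0 with 64KB stripes (the offsets below are arbitrary example values): a read that stays inside one stripe touches one drive, while a read of the same size that crosses a stripe boundary forces both drives to seek.

      STRIPE = 64 * 1024   # bytes per stripe (the typical default mentioned above)
      DISKS = 2            # two-disk RAID 0 assumed for this example

      def drives_touched(offset, length):
          """Return the set of drives a read at (offset, length) must hit."""
          first_stripe = offset // STRIPE
          last_stripe = (offset + length - 1) // STRIPE
          return {stripe % DISKS for stripe in range(first_stripe, last_stripe + 1)}

      print(drives_touched(offset=0,         length=32 * 1024))   # {0}    -> one seek
      print(drives_touched(offset=48 * 1024, length=32 * 1024))   # {0, 1} -> two seeks
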
      • However, Windows is typically accessing more than one file. Part of the slowdown in boot times, I suspect, is due to the thrashing caused on the disk array. It's most certainly not CPU bound. If there are multiple seeks outstanding (versus linear reads) then there will be a significant performance benefit from doing independent seeks. Now, I don't have any figures to show what this would do. Then again, I don't have any figures to the contrary. Whether or not a RAID array has its own independent process
    • by Tranzig ( 786710 ) <voidstar@freemail.hu> on Monday June 14, 2004 @10:23AM (#9420214)
      Don't forget that those RAID controllers are just toys for the kiddies. Industrial-grade RAID controllers have an on-board processor and memory, and they do optimize read access for RAID 1 arrays. Though they don't halve seek time on two-disk arrays, they still provide a noticeable speedup for reading.
  • ..been a sore spot with me. Most users do a RAID 0 setup so their cool rig is hottie fast. Truth is, I can't see much real-world performance difference. For my money, a large SATA drive and an external FireWire drive for backups is the way to go. Simple setup, no worries about drive failure and losing data, and still fast enough for UT2004.
  • Re: (Score:2, Insightful)

    Comment removed based on user account deletion
    • by LookSharp ( 3864 ) on Monday June 14, 2004 @10:33AM (#9420305)
      So why make a RAID?

      They are not talking about mirroring (RAID 1) exclusively. They are talking about RAID 0 so people can stripe drives and achieve considerable performance increases.

      As for me, I have an 8-channel IDE raid card with 8 x 120GB drives, hardware RAID 5, and in 24 months have blown oh about 4 drives (3 on the same channel til I found a faulty cable)... I have really appreciated the 860GB array having fault tolerance. And yes, I do some backups of critical data, but I can't afford the storage required for regular full backups.
      • Comment removed based on user account deletion
        • by riptide_dot ( 759229 ) * on Monday June 14, 2004 @11:44AM (#9421022)
          1) There's no such thing as a "normal home setup".

          2) Whatever setup you can afford that accomplishes what you want it to is ideal for you.

          3) RAID arrays have benefits outside of the fault tolerance, mainly higher transfer rates.

          4) You don't have to be a multimillionaire to afford multiple hard drives. They are still around $1 per gigabyte, so the last time I checked, one can buy a 60GB drive for about the cost of dinner for two at a nice restaurant. Skip two nice meals and you have enough money for a nicely performing RAID0 array, provided you have the motherboard/daughter card that supports it.

          I understand your feeling that maybe having 8 120+ GB drives in a "home" configuration might be a little overkill, but keep in mind that everyone has different uses for their computer.

          I do a little video editing at home (not professionally by any means), and having the benefit of faster throughput without the expense of buying 10K RPM Ultra320 SCSI drives is a beautiful thing. If I didn't have the RAID array, encoding a video to burn to DVD would probably take me about four hours, compared with the two it takes right now because of the killer transfer rates I get with my RAID0 configuration.

          CAUTION: the above mentioned behavior of skipping nice dinners with your significant other in order to buy computer hardware is not endorsed and/or recommended by the author. Use at your own risk.
  • by SilentChris ( 452960 ) on Monday June 14, 2004 @10:20AM (#9420173) Homepage
    I recently put together a rig with a K8V SE Deluxe. The board includes two SATA RAID controllers: the standard VIA one and a Promise one. I've been absolutely floored by the Promise's performance (easily the fastest desktop RAID I've ever tested) and I don't see it anywhere in this review.

    For those hankering for another opinion, setting up the SATA RAID was a breeze. It was literally set it and forget it. The servers at work were much more difficult to set up. If you have the extra money for a spare drive (mine are two WD 10,000 RPM HDs :) ), it's worth it. Nearly double the speed.
  • by dangerweasel ( 576874 ) on Monday June 14, 2004 @10:21AM (#9420185)
    Must...Stop...Reading.
  • Could someone who can actually see the article please post the text? I know without the graphs it will be somewhat incomplete but I would like to see the meat and can skip the potatoes.
  • by fugu ( 99277 ) on Monday June 14, 2004 @10:27AM (#9420252)
    Storage Review did a writeup a while ago comparing RAID 0 performance to that of a single drive. More often than not you're better off getting a single, faster drive [storagereview.com] if you're looking for desktop performance.
  • ATA should be enough for everybody right?
  • by davejenkins ( 99111 ) <slashdot&davejenkins,com> on Monday June 14, 2004 @10:30AM (#9420283) Homepage
    From what I can see in the market right now,
    1. Everyone says they need more storage, so the market for it should be huge
    2. SAN or NAS configurations are always more expensive than people think (even though they are radically cheaper than they were two or three years ago).
    3. Because of the sticker-shock, a lot of people actually spend their first swipe at the problem cleaning out the cruft and streamlining their business processes and data management rather than drop coinage on storage kit
    4. Storage companies are having a very hard time here in Japan, probably from the influx of vendors (see #1 above).
    • One thing I always wondered is why there are no inexpensive boxes that do IDE RAID on one side and USB 2.0/Firewire on the other?

      Say a $100 box that can fit 5 IDE drives - would be perfect for bulk data storage.

  • PCI-E RAID (Score:2, Informative)

    by drfishy ( 634081 )
    That's what I'm waiting for... A nice hardware RAID controller on a 4x or more PCI-E slot would rock! And should be available on your typical consumer board pretty soon... No more wishing PCI-X wasn't just on expensive server boards... Check these out: http://www.areca.com.tw/products/html/pciE-sata.htm *drool*
  • RAID Perfomance (Score:5, Interesting)

    by Berylium ( 588468 ) * on Monday June 14, 2004 @10:41AM (#9420399)
    For the past 3 years I've had a RAID array set up on my home computer. It is a RAID 5 array with four 18GB Seagate X15 hard drives on an AcceleRAID 170 PCI card. I'm on the computer several hours a day, during which time I play various video games, program in Visual Studio, and transfer a bunch of MP3-sized files and very large video files (~2GB). From my experience, the RAID 5 is definitely faster at some tasks than a high-performance ATA drive (like game loads), but for the types of activities I'm doing, the expense of the SCSI drives and the noise they generate is more costly to me than the (perceived) slight speed disadvantage of a single-disk Serial ATA drive.

    Don't get me wrong, the RAID 5 array is sweet and certainly amps up geek appeal, but I don't have enough friends who know what the hell a RAID array is to really impress them.

    -Berylium
  • by gsfprez ( 27403 ) * on Monday June 14, 2004 @10:46AM (#9420444)
    I know that on my Mac - if i slap in an additional identical HD to the one that shipped with it...

    1. i go to Disk Utility (standard issue with OS X)
    2. select the two blank drives (with the mouse, clicking on them)
    3. click "RAID 1" or "RAID 0"
    4. repartition them with a GUI (not required)

    then the RAID is mounted automatically on the desktop, ready for use. period. end of issue.

    that's basically 4 steps - none of which require any "understanding" beyond your average emailer's brainpower. (i'm not including the "Are you sure?" dialogs - those don't count as steps)

    it's things like this test that bake my brain... and why Mac users get so rabidly assholish when it comes to stuff like this.

    All this geek speak about a few kbps difference between the various choices out there - but when it comes down to it - it's a motherfscker to try to set it up in Windows and, unfortunately, Linux, which takes the cake for scoring highest on the "WTF Does That Mean?"-o-meter for disk partitioning.

    And the PROBLEM with all the difficulties in setting up such a ... setup... is that many people who would benefit from such technology will NEVER USE IT because it's inaccessible to them.

    How useful is that? It's not.

    It's a classic GSFPREZ Axiom on System Performance...

    "A Mac Plus will always outperform a Pentium 100 when the Pentium is experiencing an IRQ conflict between the video card and the modem card"

    while i KNOW that IRQ issues are a thing of the past - the idea stands: a superfast desktop computer that is difficult to get functioning is no gawddamned use - and by definition is an anchor compared to a Model-T Macintosh... at least the Model T moves, whereas anchors don't.

    all the speed and power in the world is useless to those who are more interested in DOING work with their computer than WORKING ON the computer to get it functional.

    My RAID on my G5 may be slower than yours - but it took me about 2 minutes total, including the installation of the 2nd hard drive, but most importantly...

    (Mitch Hedberg =+5) this thing is useful, motherfscker!(/mitch)

    laugh, its funny.
    • by dave420 ( 699308 ) on Monday June 14, 2004 @11:26AM (#9420815)
      And it's just that easy on a Windows box. Don't get above yourself :)

      In my experience, I've found most mac users only scrape the surface of the potential their mac holds. When I'm trying to sort out some OSX networking issues, I can never find information on mac sites. I have to go to BSD sites to find the goodies. It seems mac users just use their macs to shout at non-mac users and try to rub their faces in their macness, instead of actually USING their computers.

      Don't think macs are anything they're not. They're not easier to use, not faster to set up. They just look pretty and cost an arm and a leg.

      • What's your definition of use? You are assuming most mac users care about the BSD subsystem. Most do not, never will. I've been using UNIX for 12 years and OSX has basically replaced my need for 99% of the command line garbage I used to think was cool. I had no idea how unproductive I was under linux and freebsd until I started USING my mac rather than playing with it. No wonder IT is so down when the vast majority of IT people just don't get it. The point is productivity, not nifty little gadgets that 99.9
  • by Anonymous Coward on Monday June 14, 2004 @10:47AM (#9420452)
    Since every SATA RAID controller (bar the i960-based one from Adaptec) is done using software, I reckon that what is actually benchmarked here is how well-optimized the drivers are, not the hardware performance. Besides, I'm guessing (as I only read the conclusions page) that each of these interfaces is connected off of a crummy 32-bit 33MHz PCI interface... That's the real killer right there.

    I have a Dell PowerEdge in the back room with 2 15k SCSI drives running Linux and RAID 0 - with hdparm -t this thing gets 125-128 MB/sec! The HD interface on that machine is definitely hung off of a PCI-E interface or something better, as the maximum theoretical transfer rate of PCI is about 33*32 million bits per second, or 132 megabytes per second (worked out below).

    What would be really nice is if the filesystem was put on the i960-based Adaptec card...
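
    As a quick sanity check of the PCI ceiling mentioned above (simple arithmetic, not a measurement):

    # Peak bandwidth of a shared 32-bit / 33 MHz PCI bus.
    bus_width_bits = 32
    clock_hz = 33_000_000
    peak_mb_per_sec = bus_width_bits * clock_hz / 8 / 1e6
    print(f"Theoretical PCI peak: {peak_mb_per_sec:.0f} MB/s")   # ~132 MB/s
    # Two fast drives in RAID 0 measuring 125-128 MB/s with hdparm -t would indeed
    # be bumping right up against that limit if they shared a single PCI bus.
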
  • Software RAID... (Score:2, Interesting)

    by Tyranny12 ( 717899 )
    Frankly, I have yet to see an implementation of a motherboard-based RAID 0 array ever provide a noticeable increase in performance compared to the hit your CPU takes to implement it. If you want performance out of RAID, get a hardware RAID card.

    That said, IMO, looking for performance out of an IDE RAID array is futile. There are rare cases, or people who have two screaming drives in RAID 0 and a perfect setup, but for the most part IDE and RAID aren't for performance - the drives and common file usage
  • by sane? ( 179855 ) on Monday June 14, 2004 @10:56AM (#9420539)
    Rather than several hundred graphs, most of which just show the same shape from test to test - why not throw software and hardware RAID 5 into the mix?

    I couldn't care less about a few percentage points' difference in real-world speed, but being able to up the reliability would be useful.

    Specifically,

    1. What is the hit in doing RAID 5, and how does it scale with load and CPU usage?
    2. How does the number of drives affect things?
    3. Software/Hardware - what's the real difference and if you're going the NAS route, does it matter?
    4. Which saturates first in NAS, network, processing or hard disk performance? Do you need 1000BaseT, or just how well does 100BaseT do in the real world?
    5. If you really want better performance, how do you go about getting it? Which cache size has the biggest effect?
    I'm sure that the graphs were easy to make, after the data was gathered, but putting a little more thought into the study would have yielded results that were more useful.

    To sum it up, don't bother with RAID if you are looking for performance - buy more memory instead.

  • by kill $(pidof explore ( 778891 ) on Monday June 14, 2004 @11:04AM (#9420599)
    Let's talk about performance. Most SATA drives are still low-end IDE drives; 8ms seek times are nothing to get excited about. The one SATA drive fanboys talk about all the time is the Western Digital Raptor, but, hey, they are the same price as 10K SCSI U320 drives, so what's the point?

    I agree the Raptors are great disks - 2 of them will outrun PCI bus bandwidth - but would you go PCI-X for SATA RAID? A good PCI-X RAID card will cost $300+ for 4 ports. No thanks, I will stay with my SCSI solution.

    The bottom line is SATA doesn't even have a bus.

  • by jeffmeden ( 135043 ) on Monday June 14, 2004 @11:09AM (#9420648) Homepage Journal
    "Of course, for all its prowess, I'm still a little troubled that the ICH5R's RAID 1 arrays crashed out of IOMeter under our highest load level. A load of 256 outstanding IOs is quite a bit beyond what most desktops and workstations will encounter, but it's well within the realm of possibility for servers" Can anyone confirm or deny that this occurs in real world settings? Its definitely troubling that the crash condition was consistent, but I am suspicious that it was simply an incompatibility between the benchmarking tool and the raid controller. Does someone know more? Jeff
  • Benchmarking with only the default block size is absolutely useless. It's ridiculous that they didn't even do a full test of all the common (16, 32, 64, 128) block sizes. No empirical data is obtained here - no direct comparisons may be made of the tested devices because of the laziness of the reviewer. By leaving the defaults, he's assuming the user has no idea what their own data delivery needs are.

    The only users who should even contemplate deploying a RAID array will certainly do the research to come up with the ideal stripe block size, given their usage patterns and requirements.
  • RAID 0,1,5 (Score:5, Informative)

    by mr_rizla ( 758012 ) on Monday June 14, 2004 @11:17AM (#9420735)
    RAID 0 = striped disks for improved performance. No redundancy. In fact, it increases your chances of losing data, because if one drive goes down there is no chance of data recovery. (Total storage = total of all disks.)

    RAID 1 = mirrored disks, writing the same data to all disks so that if one fails you simply replace it with no loss of data. (Total storage = 1/2 of the disks.)

    RAID 5 = redundant striped disks. The equivalent of one disk is used to store XOR parity (see the toy sketch below), so that basically any one of the disks can go down and, once it is replaced, the RAID system will rebuild the data onto that disk. (Total storage = total storage of all disks minus one.)

    With RAID 1 and RAID 5, which are used in business servers, you really want hot-swappable drives so that any drive going kaka will not impact the server in any way; just replace the hard drive under warranty without even rebooting the server, and the RAID system will rebuild the drive.

    RAID 5 is most effective in a business situation, offering a good compromise of speed, capacity and redundancy.
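
    As a toy illustration of the XOR parity idea, here is a short Python sketch (the byte values are arbitrary, and a real RAID 5 works on whole stripes and rotates parity across the drives rather than keeping it on one disk):

    from functools import reduce

    data_disks = [b"\x10\x22\x35", b"\x01\x0f\xff", b"\xaa\x00\x5c"]

    def xor_blocks(blocks):
        """Byte-wise XOR of equal-length blocks."""
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

    parity = xor_blocks(data_disks)

    # Pretend disk 1 dies: XOR of the survivors plus parity rebuilds it exactly.
    rebuilt = xor_blocks([data_disks[0], data_disks[2], parity])
    assert rebuilt == data_disks[1]
    print("rebuilt disk 1:", rebuilt.hex())
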
    • Re:RAID 0,1,5 (Score:5, Informative)

      by bobv-pillars-net ( 97943 ) * <bobvin@pillars.net> on Monday June 14, 2004 @12:06PM (#9421231) Homepage Journal
      RAID 5 is most effective in a business situation, offering a good compromise of speed, capacity and redundancy.

      Nope. In a real business situation, i.e. data-warehousing or ISP hosting environment, nobody trusts RAID 5. It's slow and fragile. Instead, everybody I know goes with RAID 10 (striped mirrors). Here's a typical 8-drive configuration:

      Stripe:
      1. Disk 1 mirrored with Disk 2
      2. Disk 3 mirrored with Disk 4
      3. Disk 5 mirrored with Disk 6
      4. Disk 7 mirrored with Disk 8

      Total storage equals the same as a 4-drive RAID-0 system. Performance should be slightly better, on a high-end dedicated controller, as the mirrors should be able to seek to different files independently for concurrent read requests (thus lowering latency), while the stripes should be able to operate simultaneously for large-block i/o (thus raising the streaming i/o rate).

      Reliability is better than Raid-5, for two reasons:
      1. When a drive fails and is replaced, only that particular stripe is rebuilt. That means that until the rebuild is done, one drive will be doing streaming-reads, and the other will be doing streaming writes. None of the other drives are affected. Contrast this with Raid-5, where one drive is doing block-writes and all the others are doing block-reads, interspersed with CPU checksum calculations, until the entire drive array is rebuilt. The result is that RAID-10 has much shorter disaster recovery times.
      2. In a RAID-10 system, up to half the drives can fail simultaneously without data loss, as long as one drive in each stripe remains functional. In a RAID-5 system, the loss of two drives guarantees loss of all your data.
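
      Point 2 can be checked with a short Python sketch, assuming the 8-drive layout listed above (disks 0+1, 2+3, 4+5 and 6+7 as the mirrored pairs):

      from itertools import combinations

      MIRROR_PAIRS = [(0, 1), (2, 3), (4, 5), (6, 7)]

      def raid10_loses_data(failed):
          # Data is lost only if both halves of some mirror are gone.
          return any(a in failed and b in failed for a, b in MIRROR_PAIRS)

      def raid5_loses_data(failed):
          # Any two simultaneous failures are fatal for a single RAID 5 set.
          return len(failed) >= 2

      pairs = list(combinations(range(8), 2))
      fatal_r10 = sum(raid10_loses_data(set(f)) for f in pairs)
      fatal_r5 = sum(raid5_loses_data(set(f)) for f in pairs)
      print(f"2-disk failures that kill RAID 10: {fatal_r10}/{len(pairs)}")   # 4/28
      print(f"2-disk failures that kill RAID 5:  {fatal_r5}/{len(pairs)}")    # 28/28
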
      • Re:RAID 0,1,5 (Score:3, Insightful)

        by mr_rizla ( 758012 )
        Sure, RAID 10 is even more reliable than RAID 5, but at some point budget comes into play, and you've got other points of failure such as CPU, PSU, memory or fans anyway. I've got to be honest, I've seen a lot more RAID 5 installations than RAID 10. Your experience might be different. But it's definitely horses for courses - different situations call for different RAID setups.
    • Re:RAID 0,1,5 (Score:5, Insightful)

      by dfghjk ( 711126 ) on Monday June 14, 2004 @02:48PM (#9422991)
      Technically, the only justification for hot-swap is a zero-downtime requirement. If downtime can be scheduled, then an online spare is all you need, and spares are good in any case. The need for hotplug is consistently overstated.

      RAID 5 is increasingly marginalized by the low cost of drives and high capacity they offer. RAID 1 *should* increasingly replace RAID 5 in the minds of people who understand the issues but sadly it does not. Many people believe that RAID 5 is simply "four better". Those same people also like hot-swap.
      • Let me put it to you simply:

        You have 6 bays in which to insert a (cold/hot) swap disk. Would you rather have 6*250GB/2 = 750GB of space, or (6-1)*250GB = 1250GB of space? (See the quick comparison below.)

        Keep in mind that in either case, if you lose a disk (doesn't matter which one), you're probably bringing the machine offline unless you're using hotswap (which you say is superfluous).

        Get a decent RAID-5 hardware controller. Seriously.

        Fewer wasted disks = less noise, less power, less heat, more room in the rack, and more storage.
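
        For reference, the capacity trade-off above spelled out in a few lines of Python (simple formulas only; hot spares and controller overhead are ignored):

        def usable_gb(n_disks, disk_gb, level):
            """Usable capacity for a few common RAID levels."""
            if level == "raid0":
                return n_disks * disk_gb
            if level in ("raid1", "raid10"):
                return n_disks * disk_gb // 2
            if level == "raid5":
                return (n_disks - 1) * disk_gb
            raise ValueError(level)

        for level in ("raid10", "raid5"):
            print(level, usable_gb(6, 250, level), "GB")   # raid10 750 GB, raid5 1250 GB
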
  • Linux support (Score:3, Interesting)

    by yet another coward ( 510 ) <yacowardNO@SPAMyahoo.com> on Monday June 14, 2004 @11:20AM (#9420758)
    How good is Linux support for any of these chipsets? Are they real RAID?

    Many motherboards come with RAID controllers that actually expect the operating system to handle them. The Intel ICH-5R did have rather poor Linux support last time I checked. Although it exists, installation is a pain. It seems that many SATA and consumer RAID solutions demand running in legacy mode, if they work at all. I did not see this issue addressed in the review. I would like to know how support stands now.
    • Re:Linux support (Score:3, Informative)

      by Dimensio ( 311070 )
      Linux supposedly supports many SATA RAID chipsets, though I have yet to get it to successfully mount an existing NTFS partition on a RAID 0 disk set with a Silicon Image 3114 chipset.

      Fortunately my chipset does not require a separate driver when running in RAID mode. My boyfriend's computer uses a Promise SATA chipset that requires a RAID BIOS switch and a completely different driver (Windows AND Linux) if you want to use it in single-disk mode. I can't imagine the mess I'd have if I used that.
  • by rwa2 ( 4391 ) * on Monday June 14, 2004 @12:27PM (#9421521) Homepage Journal
    Anyone know where to find these kinds of benchmarks for Linux software RAID systems? I almost always set up 2-disk RAID 0 and 1 on my Linux boxes, and haven't run into as many problems as they describe here. The performance scales up fairly linearly.

    I've always wanted to compare the Linux SW RAID to the HW RAID controllers, to see if it's worth the extra CPU cycles. My guess is that it is, but it'd be great to have some numbers to back this up.

    I suppose I could do it myself with hdparm and bonnie++ if it really came down to it, though... any interest in that?
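
    One minimal way to collect those numbers would be something like the following Python sketch (the device names are assumptions for a typical two-disk md setup; adjust them for your own box and run it as root):

    import subprocess

    DEVICES = ["/dev/md0", "/dev/sda"]   # software RAID array vs. a single member disk
    RUNS = 3                             # hdparm -t varies a bit, so average a few runs

    for dev in DEVICES:
        print(f"=== {dev} ===")
        for _ in range(RUNS):
            out = subprocess.run(["hdparm", "-t", dev],
                                 capture_output=True, text=True).stdout
            # hdparm prints a line like " Timing buffered disk reads: ... MB/sec"
            for line in out.splitlines():
                if "Timing" in line:
                    print(line.strip())
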
  • by farrellj ( 563 ) * on Monday June 14, 2004 @12:34PM (#9421611) Homepage Journal
    I found out a few months back some interesting things about the state of SATA RAID... most of the SATA chipset RAIDs are not hardware RAID controllers.

    If you check Linux Mafia [linuxmafia.com]'s web page on SATA controllers, you will find that very few of the SATA RAID controllers are actually hardware RAID. What their "drivers" really are is proprietary software RAID pretending to be hardware RAID. I think of all the SATA RAID controllers and chipsets being offered, there are only three that are really hardware RAID. And 3Ware's offering is the least expensive of the real hardware RAID cards.

    ttyl
    Farrell
  • A few questions (Score:3, Insightful)

    by mabu ( 178417 ) on Monday June 14, 2004 @12:53PM (#9421810)
    I'm curious if those of you out there may have some recommendations based on your personal experience?

    I've been snooping around for a stand-alone RAID array. Ideally I'd like it to be SCSI-compatible, so I can plug it into a SCSI port on a server and it would be relatively OS-independent. RAID 5.

    What are the most economical options in this area? Any recommendations for brands/manufacturers? Are there IDE-based RAID 5 drive arrays that have a SCSI interface and are they worth exploring?

  • 3Ware - or SCSI (Score:4, Informative)

    by rainer_d ( 115765 ) * on Monday June 14, 2004 @01:53PM (#9422497) Homepage
    I can't believe how many people fall for this "onboard RAID" crap.
    In most, if not all, cases, the RAID is really a software RAID that the hardware's driver implements.
    Only 3Ware [3ware.com] seems to offer real RAID-in-hardware these days (and some high-end Adaptec cards).

    Rainer

  • by Axello ( 587958 ) on Monday June 14, 2004 @03:07PM (#9423165)
    I've been using a couple of 3Ware [3ware.com] hardware RAID cards in my FreeBSD servers. More expensive than the onboard crap, but Very Nice. Full hardware RAID 0,1,10,5,50, remote control, hot swap, hot spare, email notification on failure, the works.
    You can configure your RAID remotely while your server is running. (But always be careful with your boot disc ;-) Or you can install your OS while the RAID is building in the background. Works with Linux & Windows as well, unfortunately not with MacOS X.
    But for MacOS X (& linux) geeks, the XRaid [apple.com] RuleZ!
  • by jgarzik ( 11218 ) on Monday June 14, 2004 @05:05PM (#9424136) Homepage
    Being the person implementing Serial ATA for Linux...

    Most "SATA RAID" is a bunch of marketing malarkey. It is provided by the BIOS and OS, not the hardware.

    There are a few "true" hardware RAID controllers, such as 3ware or some of the more advanced Adaptec controllers.

    In the middle is Promise, which produces controllers with what I call "RAID offload" features -- not true RAID, but faster than non-RAID if you use Promise-specific features.

    Finally, the third group of SATA controllers is the vast majority -- no RAID support whatsoever, but they are being sold as RAID.

    Any benchmark of SATA RAID simply benchmarks the OS- or vendor-provided software RAID driver.
