Data Storage

Comparison of Nine SATA RAID 5 Adapters

Robbedoeske writes "Tweakers.net has put online a comparison of nine Serial ATA RAID 5 adapters. Can the establishment counter the attack of the newcomers? Which of the contestants delivers the best performance, offers the best value for money and has the best feature set?"
  • Eight or Nine? (Score:4, Interesting)

    by SirTwitchALot ( 576315 ) on Tuesday March 08, 2005 @12:19PM (#11877842) Homepage Journal
    TFA says nine adapters, but the graphic says eight, whoops.
    • Re:Eight or Nine? (Score:5, Interesting)

      by pbranes ( 565105 ) on Tuesday March 08, 2005 @12:25PM (#11877910)
      They must have started counting at 0. Stupid off-by-1 errors. ;-)

      Seriously, though, I have been seeing many servers start to come in with SATA drives. Right now it is low-end and off-brand servers; Dell even ships SATA drives in their cheapest server line. Sure, SCSI has high spin rates and throughput, but it is freakin' expensive. A good SCSI RAID controller costs close to $1000 and a good SCSI hard drive can cost $400. It is so expensive that it is really worth it sometimes to get the SATA drives in servers. I haven't seen evidence that SATA reliability is a problem compared to SCSI. I'm truly hoping that SCSI goes the way of the dodo. It's a pain to use. Who knows what kind of cable you're supposed to use with that external SCSI device. SCSI, in its current form, is just opening itself up to becoming antiquated.

      • Yeah, but what sort of real-world operating performance improvement will you get with SATA? I can understand using non-SCSI RAID to add redundancy, but to improve performance seems kind of silly. In the real world, your delays on the vast majority of files are not in throughput, but in seek time and latency. And as far as these things go, even the best IDE drives are pretty bad compared to even moderate-performance SCSI drives.

        If you just want redundancy, go ahead. But if you want better system
      • SCSI vs SATA (Score:5, Informative)

        by sjbe ( 173966 ) on Tuesday March 08, 2005 @12:54PM (#11878184)
        SCSI, in its current form, is just opening itself up to becoming antiquated.

        Perhaps, though personally I've had far more trouble getting SATA (and IDE) drives to work than SCSI drives and I've used both extensively. Driver issues mostly. SCSI's performance is better in multi-user systems, it's easy to set up, drivers tend to be less problematic especially on systems other than Windows, and it can have more devices attached. People claim it's more reliable though I have no evidence of this, and frankly am a bit dubious of the claim. SATA is also easy to set up and is a lot cheaper, though the drivers are still less ubiquitous than with SCSI and performance doesn't match SCSI yet for multi-user systems. (on a single user system it doesn't matter much)

        That said, the next generation of SCSI is Serial Attached SCSI [adaptec.com] which is compatible with SATA. A SAS controller will be able to use SATA drives if you don't need the extra features of SAS. SCSI isn't going away, it's just adapting.
        • Re:SCSI vs SATA (Score:5, Informative)

          by gmezero ( 4448 ) on Tuesday March 08, 2005 @02:21PM (#11879243) Homepage
          Here's how I separate SCSI vs SATA: I use SATA RAID setups for video workstations that need large drive space and cheap drives... and I don't care if the drive pops after a year of abuse.

          I put SCSI in my servers (RAID or otherwise) when I want the box to run for years and years under heavy load and not have to worry about replacing drives regularly.

          With SCSI, you're paying for the quality control/quality assurance more than anything else.

          From what I understand, a good SATA drive has the same TTL quality as a good IDE drive, just faster performance.
          • More like SATA/IDE is the ONLY option for high capacity.

            SCSI drives available to the general consumer don't go higher than 180GB. SATA and IDE are way beyond that mark. If someone has a link to a 400GB single SCSI drive, let me know.

      • I have an onboard SATA RAID controller on two of my 1U servers. So I configured it, and its BIOS said, hey, you've got a mirrored drive using these two physical disks.

        Linux sees them as completely separate hard drives. Turns out the SATA RAID controller relies on a Windows driver and is nothing more than software RAID. I'm not even sure it's accelerated in any way.

        So I just used linux software mirroring and it works fine. (Had to use a sarge nightly to recognize all the hardware.)
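
        For reference, a minimal sketch (not from the thread) of the kernel-managed mirror described above, driven from Python; the device names are illustrative and the commands need root:

        ```python
        # Sketch only: build a two-disk mirror with Linux software RAID and
        # confirm the kernel (not a Windows fake-RAID driver) is managing it.
        # /dev/sda and /dev/sdb are illustrative device names -- adjust to taste.
        import subprocess

        subprocess.run(
            ["mdadm", "--create", "/dev/md0", "--level=1",
             "--raid-devices=2", "/dev/sda", "/dev/sdb"],
            check=True,
        )

        # A healthy kernel-managed mirror shows up here with a "[UU]" status.
        print(open("/proc/mdstat").read())
        ```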
      • > I haven't seen evidence that SATA reliability is a problem compared to SCSI.

        And did you know that rebuilding a SATA RAID can take ages (which overlaps with times when you need every bit of performance you can get)?

        > Who knows what kind of cable you're supposed to use with that external SCSI device.

        How many companies have a single disk or JBOD (or RAID without enclosure) on an external SCSI connection without enclosure?

        > SCSI, in its current form, is just opening itself up to becoming antiquated.

        Ev
      • SCSI, in its current form, is just opening itself up to becoming antiquated

        It would be more accurate to say that the SCSI pricing model is becoming antiquated. Vendors have gotten used to being able to charge a 300%+ premium for SCSI hardware because, until recently, it was the only game in town for serious server storage.

        The current generation of SATA gives you roughly 90% of the performance of SCSI for less than 50% of the price. Unless you absolutely need every shred of I/O throughput money can

      • A good SCSI raid controller costs close to $1000 and a good SCSI hard drive can cost $400.

        Depends on size and features. Our Adaptec U320 controllers cost $150. 36GB Fujitsu SCA drives cost $100 a shot. Speed, reliability and hot-swap capability are well worth the money when your server is actually doing something worthwhile...
    • by ackthpt ( 218170 ) * on Tuesday March 08, 2005 @12:43PM (#11878082) Homepage Journal
      TFA says nine adapters, but the graphic says eight, whoops.

      It was a parity bit, ignore it.

    • Re:Eight or Nine? (Score:2, Informative)

      by Phantom69 ( 758672 )

      From Page 2 of TFA:

      Note: Since the original Dutch article was published in late January, we have finished tests of the 16-port Areca ARC-1160 using 128MB, 512MB and 1GB cache configurations and RAID 5 arrays of up to 12 drives. The ARC-1160 was using the latest 1.35 beta firmware. Furthermore, a non-disclosure agreement on the LSI MegaRAID SCSI 320-2E PCI Express x8 SCSI RAID adapter was lifted. The performance graphs have been updated to include the Areca ARC-1160 and LSI MegaRAID SCSI 320-2E results. Di

    • Re:Eight or Nine? (Score:2, Informative)

      by FemmeT ( 803927 )
      There are really nine adapters: 3ware Escalade 8506-8, 3ware Escalade 9500S-8, Areca ARC-1120, Areca ARC-1160, HighPoint RocketRAID 1820A, LSI MegaRAID SATA 150-4, LSI MegaRAID SATA 150-6, Promise FastTrak S150 SX4 and RAIDCore BC4852.

      The results of the LSI MegaRAID SATA 150-4 and MegaRAID SATA 150-6 have been combined in the graphs since there is basically no performance difference between the two in configurations of up to four drives.
      • They did miss a good feature of the LSI MegaRAID.

        It supports a daughterboard for controller redundancy.

        Note, I don't have the daughterboard and I can't test how well it works. Overall, I think the feature set was a bit understated. (It's definitely in the affordable range too.)

        Though I would have liked a non-host-based option for RAID access, it was one of my more appealing choices when I had a new system come in for backups.
  • 32 pages? No thanks. (Score:5, Informative)

    by Mr. Sketch ( 111112 ) * <<moc.liamg> <ta> <hcteks.retsim>> on Tuesday March 08, 2005 @12:19PM (#11877846)
    After 32 pages, it's probably just best to skip to the conclusion:
    http://www.tweakers.net/reviews/557/32 [tweakers.net]

    Where it has the executive summary:

    Areca ARC-1120: highly recommended
    RAIDCore BC4852: recommended
    HighPoint RocketRAID 1820A: recommended

    For several reasons, we will refuse recommendations on the remaining adapters in this comparison


    I think that pretty much covers the gist of the article.
  • by tabkey12 ( 851759 ) on Tuesday March 08, 2005 @12:20PM (#11877853) Homepage
    quite badly! They've been synonymous with quality in the RAID industry for many years. Look at this:

    3ware Escalade 8506-8 is lagging far behind the competition. Moreover, it lacks important features such as online capacity expansion, online RAID level migration and RAID 50 support.

    http://www.tweakers.net/reviews/557/6 [tweakers.net]

    What they say in the article is almost damning really...

    • 3ware needs to develop new chips for their SATA RAID controllers. They are using ATA bridges on these SATA controllers, and of course the controller chips are several years old now.

      3ware needs to step it up with SATA controllers.
    • They've been synonymous with quality in the RAID industry for many years.

      I've had my share of 3ware cards drop a RAID pack and need to be rebuilt from the BIOS, while doing nothing special at all but running a RAID-0 as a big storage mountpoint. When the online rebuild tools fail, you have massive downtime.
      • RAID 0 is not the most reliable thing in the world. Couple that with the unreliability of SATA (yes, I work in a Validation Lab and we go through dozens of these a day) and you would *NEVER NEVER* want SATA in RAID 0 storing anything valuable. Swap, sure, but never data! That said, we test dozens of SATA RAID controllers as well. The best performer in my experience has been the 3Ware 9500-8. Does it have many advanced features? How many people who will be using SATA RAID really NEED those advanced features?
      • I am going to chime in with my damnation of 3ware's cards too. We have about 20 8500s and 8506s, with either Maxtor or Seagate drives. The things are horribly unreliable. Almost every day at least one array needs to be rebuilt. On a few occasions, we've even seen the controllers spontaneously lose an entire array - just poof, not accessible anymore and not visible through the administrative tools. Reboot and there's the array again.

        Most of our support has been through a VAR (who sucks too, but that's
        • I had the rebuild problems with my Promise SX6000 controller too, though strangely more often in the beginning, and when I used ReiserFS on my RAID-5 system (running Red Hat 9).

          The same goes for upgrading the controller's firmware.

          It seems that both companies make most of their money off a market of small companies who want to give themselves an air of professionalism by implementing their own storage solutions.

    • by arivanov ( 12034 ) on Tuesday March 08, 2005 @12:50PM (#11878148) Homepage
      Well... as someone who has both of the reviewed 3ware adapters in production, I am not amazed. They are nice, but nothing to shout about. They also have LOADS of PROBLEMS not mentioned in the article.
      • 8506 SATA series boards prior to a certain revision are extremely susceptible to bus noise. As a result you have to find a way to bastardize the PCI bus down to 33MHz and provide additional grounding. Even so, they are likely to cause random system deaths and serious memory corruption in most Opteron MSI and Asus motherboards, as well as some other designs. Using them in 1U and 2U chassis with riser cards is a no-no for the same reason (exception for some buffered risers). As a side note, most resellers will try to stuff you with an old board despite the fact that they know about this problem.
      • The 9506 board and Linux driver, at least as of 2.6.9, default to no read cache, only write cache, which is outright daft. That is also the major reason for low performance, at least under Linux.

      Both are nice cards, but I would not recommend them to anyone who does not have extensive PC hardware knowledge. They are fussy, capricious and very hard to troubleshoot when they go wrong.

      • Gotta agree on the 8506's being flaky. I could not get them to work reliably with three different mobos (one was a Supermicro, a very nice server board). After calls to Supermicro, 3ware, and much hair pulling, I purchased the HighPoint listed in the article, and have been very happy with it, especially considering the price delta for the 8506-8's... So I currently use the HPT, and have two 8506's sitting in boxes that I'll never use because of all the trouble they caused. Maybe time to head out to the 'ol gun
      • Both are nice cards, but I would not recommend them to anyone who does not have extensive PC hardware knowledge. They are fussy, capricious and very hard to troubleshoot when they go wrong.

        I wouldn't say that. For one thing, I had my 3ware controller running just fine with Debian out of the box and I didn't have to tweak anything. While it might not be the best performing, it at the very least has excellent support in Linux; for example, smartctl can work with 3ware (but not any other RAID controller) not to
    • Just as an aside, and sticking up for 3ware: 3ware is one of the few companies that has good driver support for Linux and FreeBSD. For two-port SATA mirroring I always recommend 3ware as my first choice - performance is good enough.

      Obviously if you're looking at a RAID 5 solution, you're moving more towards higher-end stuff, so it would be hard to recommend anything that performs poorly there. Rather disappointing, but probably not that surprising since their SATA cards seem very similar to the ATA ca
  • by GoodNicsTken ( 688415 ) on Tuesday March 08, 2005 @12:27PM (#11877923)
    I had a RocketRAID 100 (IDE, 4-drive RAID 1/0) and a RocketRAID 1640 (4-channel SATA RAID 0/1/5) card. With nothing connected to the 1640 and two mirrored drives on the RR100, the disks attached to the RR100 show up in the BIOS on the 1640, and when Windows gets to the boot screen it locks up.

    When I removed the drives in Windows, it booted up without problems. HighPoint has sent me diag tools to run rather than reproducing this in their lab!

    I'm not too impressed with them so far.
    • I've used a LOT of different IDE RAID cards (Promise, Silicon Image based, HighPoint based... most of the stuff around).

      The only one that ever gave me a problem of all of them was a RocketRAID 100 (HPT370A based). Promise has the best card of them all IMHO (performance-wise, good drivers... good overall). The Silicon Image stuff is generic, but it's very inexpensive and works surprisingly well (slow in DOS compared to Promise cards though). A nice thing about them is most of the time they have a jumper to make t
  • I dread to think where my 'vanilla' dual channel SATA controller would come on the evaluation list but, hey, it's working fine and only cost £25!!!
  • by lanc ( 762334 ) on Tuesday March 08, 2005 @12:37PM (#11878026)

    Well, cheap+reliable == Linux + softraid + Enhanced Network Block Device [uc3m.es] + Enterprise Volume Management System [sourceforge.net] (or LVM2). It is often faster than non-hardware-RAID ("fake hardware" [linux.yyz.us]) controllers.
  • Drivers? (Score:5, Insightful)

    by sjbe ( 173966 ) on Tuesday March 08, 2005 @12:40PM (#11878047)
    While I've admittedly not read the entire article (it's really long), I couldn't find much info about drivers. It seems the author basically assumed one would be running Windows, which for servers (the most likely place for a RAID array) is a pretty poor assumption. I've tried a number of SATA RAID cards on my Linux server (SuSE 9.1) and keep getting driven back to SCSI due to crappy/non-existent driver issues. Thank god for Addonics SATA-SCSI adaptors [addonics.com], which work great and have saved me a bunch of money.

    It's a nice article comparing performance but without a serious analysis of drivers along with it for Windows AND linux (and Mac if applicable) the article isn't complete. I don't really care which one is fastest if I can't run it on my system.
    • Re:Drivers? (Score:5, Informative)

      by LWATCDR ( 28044 ) on Tuesday March 08, 2005 @12:44PM (#11878093) Homepage Journal
      3ware has very good support for Linux.
    • Re:Drivers? (Score:2, Informative)

      by ajrs ( 186276 )
      Read the rest of the article. The fine article says which ones have drivers for different versions of Windows, prebuilt drivers for Linux, BSD (which must not be dead), and Mac, and whether the source code is available.
  • If it doesn't say (Score:3, Interesting)

    by cmefford ( 810011 ) * <cpm&well,com> on Tuesday March 08, 2005 @12:50PM (#11878149)
    3ware, it's a waste of /my/ time. So I really don't care what the tweakers think. The 3ware cards are reliable, easy to deal with, have brilliant drivers, good software, and they WORK! Always! I have 5 of them. I have a friend who has 40; he agrees. I use a 2-channel as a backup-to-SATA drive (cheaper than tape), another 2-channel in an IIS server for payroll stuff, one 4-channel for a mail server and two 8-channels for file/web. I love 'em. Nuff said.
    • I have to agree. I've deployed 3ware SATA controllers at work and at home. Disk recovery has been straightforward, driver support for Linux excellent (it's in the stock kernel sources), and performance more than satisfactory.

      It would be nice if one could expand the array "hot," rather than having to copy data around and redefine/reformat the array, but in terms of reliability in protecting my data against disk faults (I've had several disks die, and replacement was a breeze, with zero downtime). As oth
  • My thoughts (Score:5, Informative)

    by tonsofpcs ( 687961 ) <slashback@tonsofpc s . com> on Tuesday March 08, 2005 @12:53PM (#11878167) Homepage Journal
    Areca ARC-1120 looks better on each and every page except for the sequential read/write tests where it tends to come in third [I'm just reading off the graphs].
    The RAIDCore BC4852 seems fastest for sequential reads/writes.

    BOTH of these have Linux support. The Areca supports Mandrake (9.0), Red Hat (7.3, 8.0, 9.0, AS 3.0), Fedora Core (2, 2 AMD64) and SuSE (7.3, 9.1 Pro, 9.0 SLES, 9.0 SLES AMD64).
    The RAIDCore: Red Hat (9.0, AS 3.0), Fedora Core (1)
    The Areca also supports Windows XP and Server 2003 64-bit versions and BSDs: 4.2R, 4.4R, 5.2.1 (incl. source).

    Also, the Areca ARC-1160 (they finished testing it after the original article was written, so it didn't make it into most of the text) appears at the top of all of the index/performance tests, except for "Fileserver - Large Filesize - RAID 1/10" [tweakers.net] and "MySQL - Data Drive - RAID 1/10" [tweakers.net].
  • by dfn5 ( 524972 ) on Tuesday March 08, 2005 @12:55PM (#11878192) Journal
    I can say from my own experience that 3ware sucks. We decided that we wanted to use S-ATA because we could get a lot of disk cheap. The problem was that these Escalade cards didn't do parallel I/O very well, and by that I mean if one user was doing a long write operation, the entire RAID array would go unresponsive to other users. For example, if I created a large 20G Oracle datafile, the entire system would seem unresponsive until the operation completed. I wouldn't even be able to ssh into the server. And this was Red Hat AS, in case anyone wanted to know.

    Moral of this story? You get what you pay for. SCSI should be used for servers.

    To be fair, however, I was never able to determine if it was a result of using S-ATA, 3Ware or the linux device driver.

    • I have no experience with 3ware, but I've heard that they are not that great, and that they are basically software RAID that depends on the drivers to do the work for you. This is probably why the system is unresponsive during a large write.

      OTOH, I have an Apple Xserve RAID that uses SATA drives with a Fibre Channel interface. In using it, I cannot tell it's not a SCSI system.
      • I've had some pretty bad experiences with the 3Ware 8506-4 controllers on a SuperMicro MB running FC2. When running 4 WD2500JD drives as RAID-5, one of them pooped out and the RAID was running degraded. Got a replacement WD2500JD, which happened to have some hard errors on it out-of-the-box. Had to use the mhdd.com utility to zero out and remap the 14 hard errors, then had to reload FC2 before the machine would boot. At least it was just the OS. The data was on a separate partition, and was OK.

        Another serv

  • by Anonymous Coward on Tuesday March 08, 2005 @01:05PM (#11878293)
    I am running RAID 5 in my computer right now.

    Linux software RAID. Makes all this crap obsolete except for some specific cases.

    I can have as many drives as I want, I can have hot-swappability, I can have hot spares and all sorts of fun stuff.

    Add LVM on top of that and you have a solution that is much superior to going out and buying any RAID controller, except for the very fastest.

    Linux software RAID is actually VERY nice; I don't know of any OS that has a better setup.
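
    A rough sketch of the layering described above (array creation plus LVM on top); the device names, volume names and sizes here are made up for illustration, and everything needs root:

    ```python
    # Sketch: Linux software RAID 5 with a hot spare, with LVM layered on top.
    # /dev/md0, vg0 and the /dev/sd* names are illustrative, not prescriptive.
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # RAID 5 across three disks plus one hot spare.
    run("mdadm", "--create", "/dev/md0", "--level=5",
        "--raid-devices=3", "--spare-devices=1",
        "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde")

    # LVM on top, so volumes can be grown later without touching the array.
    run("pvcreate", "/dev/md0")
    run("vgcreate", "vg0", "/dev/md0")
    run("lvcreate", "-L", "100G", "-n", "data", "vg0")
    ```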
    • "Linux software RAID. Makes all this crap obsolete except for some specific cases."

      So, you're saying that somehow your software RAID is calculating XOR bits and such without putting a serious hurt on your CPU and memory? Interesting.

      "...I can have hot swapability..."

      You're also saying that your motherboard has hot-swap capabilities built into it? Because it takes nothing short of specialized hardware controllers and BIOSes to be able to hot-plug a drive. (ATA/SATA drive initialization is done d

      • by Phil Wherry ( 122138 ) on Tuesday March 08, 2005 @04:25PM (#11880732) Homepage
        So, you're saying that somehow your software RAID is calculating XOR bits and such without putting a serious hurt on your CPU and memory? Interesting.
        I'm not the original poster to whom you're responding, but the answer to your question is actually kind of surprising (it was to me, in any case).

        At any given point in time, your system is in one of three states:
        • partially idle (there's unused CPU and disk I/O capacity),
        • CPU-bound (the CPU is fully utilized but there's disk I/O bandwidth available), or
        • I/O bound (the CPU has spare cycles, but the disk can't provide data fast enough to put them to use).
        I suppose that a heavily overloaded system could be both CPU and I/O bound, but it would require a mix of CPU- and disk-intensive processes that isn't usually seen in practice.

        Let's ignore the partially idle case, in which there's ample disk and CPU to go around, as it doesn't really matter in this scenario whether the CPU or the disk controller performs the XOR operations.

        In the case of a CPU-bound process, you're going to incur the additional CPU overhead of the XOR operation. XOR is almost absurdly fast, particularly if the data is in the CPU's cache. I'm pretty sure that modern CPUs execute XOR on at least one byte per clock cycle. But let's say, for the sake of argument, that it takes three cycles per byte. On a CPU clocked at 3 GHz, you'd be able to perform XORs on one gigabyte of data per second if you ignore memory and cache issues. Given moderate memory bandwidth, you're also able to transfer over a gigabyte of data to or from the CPU per second. Given a more reasonable amount of data (say, one megabyte, to transfer), you'd be looking at a CPU impact of around one millisecond to perform the XOR. That's a 0.1% impact at most in a CPU-bound environment, and that's presuming you're doing a megabyte of disk I/O per second.

        Now let's look at the I/O-bound case. Here, the CPU is sitting around waiting for the disk I/O to finish up. In this case, it clearly doesn't matter who's doing the XOR operations, since the CPU isn't fully utilized. PCI bus utilization is going to be increased by up to 100% (in the worst-case scenario involving drive mirroring; the worst-case RAID5 scenario is a 50% increase). A typical server's 66 MHz 64-bit PCI bus has a capacity of around 533 megabytes per second (PCI Express increases this dramatically, but let's stick with pessimistic examples for now). At the moment, a SCSI bus tops out at 320 megabytes per second, and those transfer rates are only achievable with at least four drives on the channel and an almost exclusively sequential I/O mix (the best-case numbers for a 15,000-RPM drive are about 100 megabytes/second). So there's generally bus bandwidth to spare.

        You raise a number of other points in your note that are potentially issues (hot swappability, for example). But I've become convinced that the CPU/machine performance argument against software RAID really only made sense when CPUs/memory/bus bandwidth were much more constrained.
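
        His one-millisecond-per-megabyte figure is easy to sanity-check; here is a rough timing sketch (mine, not the poster's; Python's big-int XOR executes in native code, so it stands in for a tuned byte-wise loop):

        ```python
        # Rough sanity check of the XOR cost estimated above: XOR 1 MiB and time it.
        import time

        a = bytes(range(256)) * 4096            # 1 MiB of sample data
        b = bytes(reversed(range(256))) * 4096  # another 1 MiB

        start = time.perf_counter()
        parity = (int.from_bytes(a, "little") ^
                  int.from_bytes(b, "little")).to_bytes(len(a), "little")
        elapsed = time.perf_counter() - start

        print(f"XOR of 1 MiB took {elapsed * 1000:.2f} ms")
        ```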
  • TRUE raid? (Score:3, Interesting)

    by fire-eyes ( 522894 ) on Tuesday March 08, 2005 @01:05PM (#11878294) Homepage
    All I care about is whether these are 100% hardware RAID, unlike a seemingly increasing number of cards. In Windows you might do all right, but anything else, look out.

    In Linux you will be treating such cards as a software RAID array. Kind of defeats the point of buying "hardware" in the first place.

    Wankers (the manufacturers).
  • Beware hardware RAID (Score:5, Interesting)

    by puke76 ( 775195 ) on Tuesday March 08, 2005 @01:19PM (#11878462) Homepage
    Sure, everyone buys a few spare drives... but make sure you buy more than one RAID card. If the RAID card goes, unless you replace it with an identical make and model, you can kiss your data goodbye.

    That's what I like about software RAID on Linux - you can mount the array on another Linux box if you need to.

    I have yet to see a good comparison between low-end hardware RAID and Linux software RAID...
    • Anecdotal, because I'm not paid to do this stuff...

      We bought a 3ware controller for a large and somewhat valuable datastore (high resolution images of Alan Turing's personal papers which include all the text available elsewhere plus handwritten annotations, scribbled diagrams, etc.)

      In the end I only used it as a fast and not particularly full-featured ATA controller, running Linux software RAID on top because it was not only _faster_ in every test I could think of, but also simpler to set up and maintain.

      T
    • by justins ( 80659 ) on Tuesday March 08, 2005 @02:21PM (#11879237) Homepage Journal
      If the RAID card goes, unless you replace it with an identical make and model, you can kiss your data goodbye.

      If you are dumb enough to use RAID as a substitute for backing up, that is.
      • Well, that might save your data, but availability is still a problem. You can have a great backup (probably using tapes for a RAID array), but if you need to get an exact copy of a RAID card on a Sunday, you are in trouble.

        Anyway, it's rather hard to back up something like 1 TB of data using cheap storage solutions, which is why RAID as backup is currently viable. Just don't use it for sensitive information, and beware of software issues...
          • Anyway, it's rather hard to back up something like 1 TB of data using cheap storage solutions

          It's not really, anymore - or at least, it shouldn't be.

          For what I used to pay for a single DLT tape a few years ago I can get an external USB drive of greater size. The problem, and it is a big one, is that most backup software doesn't have any idea how to properly take advantage of something as advanced as hot-swappable, external, fast USB mass-storage.
    • I've actually swapped 3ware controllers of similar (not entirely exact) make and model, and it has always been able to rebuild the RAID and just work. Same with my ICP (SCSI) RAID controllers.

      My biggest problem with software RAID is that it is very alpha software. Problems occur, and there's little documentation for the new tools when you need help - and trust me, when your data is on the line you won't want to putz around with something that isn't well documented if you don't have a support network to h
  • my 2 cents (Score:2, Informative)

    by Ankou ( 261125 )
    We have used three LSI 150-6 MegaRAID cards and I must say it's the most incredible card/bang-for-the-buck you can get. Works perfectly in Linux (Slackware 10.0 - 10.1 in our case), using either the megaraid or megaraid2 driver (for those that want verbose information) right from the stock kernel compile. In each server we put in six Seagate SATA drives of 250 GB each, totalling an impressive 1.2 TB of space. For under a grand (card + 6x 250 GB drives) you can't get a cheaper, more reliable alternative. The thing ai
    • by falser ( 11170 )
      Just last week I bought the 6-channel LSI card, and will be receiving the rest of my drives this week. After reading the relevant parts of the article I was a little worried I had made the wrong choice. But your comments have eased my paranoia. Thanks.
  • by windowpain ( 211052 ) on Tuesday March 08, 2005 @01:40PM (#11878686) Journal
    Now you could argue that a car review in Car and Driver doesn't bother explaining what a transmission does, but RAID is several orders of magnitude more complex and esoteric.

    There are so many different flavors of RAID it can be hard to keep them straight if you're not working with them every day.

    Anyway there are good explanations of RAID here [techtarget.com] and here [prepressure.com].
    • by justins ( 80659 ) on Tuesday March 08, 2005 @02:16PM (#11879193) Homepage Journal
      Now you could argue that a car review in Car and Driver doesn't bother explaining what a transmission does, but RAID is several orders of magnitude more complex and esoteric.

      Are you kidding?

      RAID 5 can be explained in a few pages - the math, the implementation, the whole bit. Have you ever seen a technical drawing of a transmission? Modern slushboxes are about the most advanced mechanical engineering application that the average person ever comes in contact with (when they aren't at the airport).

      You won't find an article that does justice to most of the issues involved in designing and implementing a transmission. I know you just meant it as an example, but still. :)
  • 32 pages of ?? and nothing regarding compatibility with Linux.

    Actually, I only searched the conclusion...
  • Finally, a controller that supports RAID 6. RAID 6 is just like RAID 5 but with an extra parity drive, so that you can have two drives (instead of just one) fail in an array and be OK. RAID-50 is slightly less robust (two drives on the same RAID5 chain can break and then you're up shit creek), but faster (for the same card implementation).

    The interesting thing is that the Areca card is in fact SATA II. Things like NCQ and port multipliers can really elevate its usefulness. Buy a cheap 4-port multiplier card an
    • Nitpicky, I know, but it's not an additional parity drive; it's an additional parity calculation that's distributed among the disks evenly, just as in RAID 5. It's sometimes called two-dimensional parity.

      http://www.acnc.com/04_01_06.html
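
      To make the correction concrete, a toy sketch of single-parity XOR (the RAID 5 case; RAID 6's second parity is an independent calculation and is not shown):

      ```python
      # Toy illustration of XOR parity as used in RAID 5: the parity block is
      # the XOR of the data blocks, so any one missing block can be rebuilt
      # from the survivors.
      from functools import reduce

      def xor_blocks(blocks):
          """XOR equal-length byte blocks together, column by column."""
          return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*blocks))

      data = [b"AAAA", b"BBBB", b"CCCC"]   # stripe chunks on three data disks
      parity = xor_blocks(data)            # what the parity chunk stores

      # Lose one chunk: XOR of the survivors plus parity recovers it.
      rebuilt = xor_blocks([data[0], data[2], parity])
      assert rebuilt == data[1]
      ```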
  • The review looks nice, I'm convinced. If I want to buy an Areca card in the US, where would I go?

    Google doesn't help. Pricewatch doesn't help. Tom's Hardware didn't provide an answer that I could see. Nothing on eBay but palm trees. What appears to be the US distributor [areca.us] has a "Where to buy" link that points to the Taiwanese site which points to... the US distributor.

  • While informative, my brief read-through of the article revealed numerous typos and spelling mistakes. The pictures and research here are cheapened by the lack of proper editing and proofreading.

    Samples from TFA:

    "...caused by differences in I/O processor and I/O controller performance, cache memory, available bus bandwidth etcetera."

    If you're not going to use the traditional abbreviation "etc.," at least use it properly: "et cetera."

    "You can't make judgements by simply..."

    Judgment is spelled with one "e."
  • Not hardware RAID (Score:3, Informative)

    by jgarzik ( 11218 ) on Tuesday March 08, 2005 @03:52PM (#11880330) Homepage
    Note that several of the cards in this review are not hardware RAID. SATA RAID is famous for being a non-RAID controller plus a RAID software driver.

    See my SATA RAID FAQ [linux.yyz.us] for a listing of the most common SATA chipsets which are sold as RAID, but are really software RAID (a.k.a. "fake RAID").

    I'm also rather amazed that this wasn't mentioned in the review, but I admit I did not read all of the 32 pages.

  • Saying that S-ATA is worse than SCSI is mostly not about the protocol (see the various S-ATA-is-not-as-good-as-SCSI comments) but about the cards and, more importantly, the drives.

    SCSI cards and drives have been created specifically for enterprise use. That means that they perform, and that they last. The only way to compare these technologies is to use an expensive S-ATA controller with fast hardware XOR, large cache and controllers and drives that support command queueing. Furthermore the drives sho
  • by swb ( 14022 ) on Tuesday March 08, 2005 @03:54PM (#11880367)
    I'm sure that me and the few drunks I've managed to hoodwink with this concept are the only market for it, but why not a USB2/1394 hub that's actually a RAID controller?

    The hub could present whatever defined logical volumes to the OS as additional mass storage devices on the hub, and a configuration application would be all that was needed, since the logical volumes would be presented to the OS as generic mass storage devices.

    I think this could have a real market; while the bus would certainly be a limitation in performance (perhaps 1394b would help), it:

    * Wouldn't require a massive case with internal bays and power taps for the drives. (S)ATA RAID is cheap, but scaling beyond 3 or 4 drives is a huge challenge in all but the biggest cases. Using external connectors like 1394/USB2 would solve this easily.

    * Wouldn't require any drivers beyond existing USB/1394 generic mass storage support. Yes, you would need a special application to configure the hub's logical volumes or to perform stupid RAID tricks, but beyond that you wouldn't.

    * Portability to other systems: since it doesn't require drivers once configured, it could be moved to another platform that supports only generic mass storage devices - in the event of a host failure, for example.

    * OK, speed would suck, but it's about adding big, reliable mass storage with a trivial interface, not about transfer rates. The hub could actually have distinct USB/1394 channels to individual ports, since it's not really a _real_ hub and the host OS wouldn't see the individual disks, just the defined logical volumes presented as mass storage devices.

    I think this would be great for "backup" applications or other small-time/home-user data warehousing (keeping your native DV-AVI files, DVD backups, CD backups, MP3 backups, yadda...). Tape is nice, but SDLT or LTO drives are expensive, as are the media. For $600 you can do better than half a terabyte of RAID-5 disk, but you need almost an entire PC to house the internal disks.

    Given how cheap RAID cards are, I can't believe that merging RAID into a hub would be all that expensive, especially since you're actually removing a lot of the disk control logic from the controller.

  • Never heard of them until now. Reading TFA and visiting their web site has made them unforgettable. 10/10 for getting it right and 15/10 for supporting most of the major open source OSes out there (I nearly said all, and risked getting modded as a troll), including FreeBSD. We could do with more companies like these.
  • I came across this project called SCST
    http://scst.sourceforge.net/
    It lets you take direct storage (LVM, RAID, plain disks) or files on the system and serve them out over Fibre Channel to clients. So you can take four SATA disks of 200GB each, RAID 5 that up and get 600GB of usable space. Break that 600GB into ten 60GB partitions and serve those out, and you have absolutely failsafe storage for your systems. Windows systems can use any old supported (most are) Fibre Channel card ($25 on eBay), plug into a Linux box
