Comparison of Nine SATA RAID 5 Adapters
Robbedoeske writes "Tweakers.net has put online a comparison of nine Serial ATA RAID 5 adapters. Can the establishment counter the attack of the newcomers? Which of the contestants delivers the best performance, offers the best value for money and has the best featureset?"
Eight or Nine? (Score:4, Interesting)
Interesting that the 3ware offerings performed... (Score:5, Interesting)
The 3ware Escalade 8506-8 lags far behind the competition. Moreover, it lacks important features such as online capacity expansion, online RAID level migration, and RAID 50 support.
http://www.tweakers.net/reviews/557/6 [tweakers.net]
What they say in the article is almost damning really...
Re:3ware me (Score:2, Interesting)
Re:Eight or Nine? (Score:5, Interesting)
Seriously, though, I have been seeing many servers start to come in with SATA drives. Right now it is low-end and off-brand servers. Dell even ships SATA drives in its cheapest server line. Sure, SCSI has high spin rates and throughput, but it is freakin' expensive. A good SCSI RAID controller costs close to $1000 and a good SCSI hard drive can cost $400. It is so expensive that it is sometimes really worth it to get SATA drives in servers. I haven't seen that the reliability of SATA versus SCSI is a problem. I'm truly hoping that SCSI goes the way of the dodo. It's a pain to use. Who knows what kind of cable you're supposed to use with that external SCSI device. SCSI, in its current form, is just opening itself up to becoming antiquated.
Re:3ware me (Score:5, Interesting)
You know the cheap-reliable-fast triangle. (Score:4, Interesting)
Well, cheap+reliable == linux + softraid + Enhanced Network Block Device [uc3m.es] + Enterprise Volume Management System [sourceforge.net] (or LVM2). It is often faster than non-hw-raid (fake-hw [linux.yyz.us]) controllers.
If it doesn't say (Score:3, Interesting)
My experience with 3ware (Score:4, Interesting)
Moral of this story? You get what you pay for. SCSI should be used for servers.
To be fair, however, I was never able to determine if it was a result of using S-ATA, 3Ware or the linux device driver.
TRUE raid? (Score:3, Interesting)
In linux you will be treating such cards as a software raid array. Kind of defeats the point of buying "hardware" in the first place.
Wankers (the manufacturers).
Beware hardware RAID (Score:5, Interesting)
That's what I like about software RAID on Linux - you can mount the array on another linux box if you need to.
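That portability can be sketched with the standard Linux md tools (a hypothetical session; device names and the mount point are illustrative, and older distributions may ship raidtools instead of mdadm):

```shell
# On the replacement box, scan attached disks for md superblocks
# and assemble any software RAID arrays found:
mdadm --assemble --scan

# Or assemble explicitly from known member partitions
# (device names here are examples, not a real layout):
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Then mount the array like any other block device:
mount /dev/md0 /mnt/recovered
```

A hardware RAID array, by contrast, generally needs a compatible controller (often the same model and firmware family) before any other box can read the disks.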
Have yet to see a good comparison between low-end hardware RAID and Linux software RAID..
Re:Comparison of Nine SATA RAID 5 Adapters (Score:4, Interesting)
I, personally, would completely avoid any card manufactured by Promise or Highpoint as I've had crap luck with them in the past. They're just not very good cards, imho. And I'm not talking about their performance in Linux. I'm talking about their performance in general. They're crap by my estimation regardless of platform. After losing data on my Windows 2000 box because of a crappy Highpoint card, I'll never buy another.
Anyway, your assertions are not even germane. You point to the problem with "trick-BIOS" software RAID cards, which have been around for years and are not exclusive to SATA-RAID. They are shit cards, period...have been from the day they were made. Most of the cards in this review, however, are true hardware-based SATA-RAID cards.
And, again, they all are supported on Linux. 3Ware, for example, has been a bastion of Linux support for ages.
As for the whole winmodem issue, who cares? What has it to do with a freaking troll blathering incorrectly about Linux not supporting SATA-RAID cards? Besides, the fact is, winmodems are NOT real modems. They're telecom interfaces, but not modems. You need software to make them modems. And I'm not talking about driver software to give access to the cards' functions. I'm talking software that has to implement the modem functionality itself...because the modem functionality doesn't exist on the "winmodem"...because it's not really a modem. Just because we now have linmodems.org and such to provide that software, it doesn't automagically make them "real" modems.
Re:My experience with 3ware (Score:2, Interesting)
OTOH, I have an Apple Xserve RAID that uses SATA drives with a fibre channel interface. In using it, I cannot tell it's not a SCSI system.
Re:Related Question (Score:3, Interesting)
External enclosures can be had for less than $30 and 250GB drives are under $140 each. Is your data worth $340?
My RAID fantasy: 1394/USB2 Raid hub (Score:3, Interesting)
The hub could present whatever defined logical volumes to the OS as additional mass storage devices on the hub, and a configuration application would be all that was needed, since the logical volumes would be presented to the OS as generic mass storage devices.
I think this could have a real market; while the bus would certainly be a limitation in performance (perhaps 1394b would help), it:
* Wouldn't require a massive case with internal bays and power taps for the drives. (S)ATA RAID is cheap, but scaling beyond 3 or 4 drives is a huge challenge in all but the biggest cases. Using external connectors like 1394/USB2 would solve this easily.
* Wouldn't require any drivers beyond existing USB/1394 generic mass storage support. Yes, you would need a special application to configure the hub's logical volumes or to perform stupid RAID tricks, but beyond that you wouldn't.
* Portability to other systems: in the event of a host failure, or, since it doesn't require drivers once configured, it could be moved to another platform that only supports the generic mass storage device.
* OK, speed would suck, but it's about adding big, reliable mass storage with a trivial interface, not about transfer rates. The hub could actually have distinct USB/1394 channels to individual ports, since it's not really a _real_ hub and the host OS wouldn't see the individual disks, just the defined logical volumes presented as mass storage devices.
I think this would be great for "backup" applications or other small-time/home user data warehousing (keeping your native DV-AVI files, DVD backups, CD backups, MP3 backups, yadda...) Tape is nice, but SDLT or LTO drives are expensive, as are the media. For $600 you can do better than half a terabyte of RAID-5 disk, but you need almost an entire PC to house internal disks.
Given how cheap RAID cards are, I can't believe that merging RAID into a hub would be all that expensive, especially since you're actually removing a lot of the disk control logic from the controller.
Re:waste of time and money. (Score:5, Interesting)
At any given point in time, your system is in one of three states: CPU-bound, I/O-bound, or partially idle.
Let's ignore the partially idle case, in which there's ample disk and CPU to go around, as it doesn't really matter in this scenario whether the CPU or disk controller perform the XOR operations.
In the case of a CPU-bound process, you're going to incur the additional CPU overhead of the XOR operation. XOR is almost absurdly fast, particularly if the data is in the CPU's cache. I'm pretty sure that modern CPUs execute XOR on at least one byte per clock cycle. But let's say, for the sake of argument, that it takes three cycles per byte. On a CPU clocked at 3 GHz, you'd be able to perform XORs on one gigabyte of data per second if you ignore memory and cache issues. Given moderate memory bandwidth, you're also able to transfer over a gigabyte of data to or from the CPU per second. Given a more reasonable amount of data to transfer (say, one megabyte), you'd be looking at a CPU impact of around one millisecond to perform the XOR. That's a 0.1% impact at most in a CPU-bound environment, and that's presuming you're doing a megabyte of disk I/O per second.
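The XOR in question is the heart of RAID 5 parity: the parity block is the XOR of the data blocks in a stripe, so any one lost block can be rebuilt by XOR-ing the survivors. A minimal illustrative sketch (the three-disk stripe and block contents here are made up, and real implementations XOR a machine word at a time, not byte by byte in Python):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

# Hypothetical 8-byte stripe across three data disks:
data = [b"disk0...", b"disk1...", b"disk2..."]
parity = xor_blocks(data)

# Simulate losing disk 1 and rebuilding it from the survivors plus parity:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Whether this runs on the host CPU or on a dedicated XOR engine on the controller is exactly the trade-off the post is weighing.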
Now let's look at the I/O-bound case. Here, the CPU is sitting around waiting for the disk I/O to finish up. In this case, it clearly doesn't matter who's doing the XOR operations, since the CPU isn't fully utilized. PCI bus utilization is going to be increased by up to 100% (in the worst-case scenario involving drive mirroring; the worst-case RAID5 scenario is a 50% increase). A typical server's 66 MHz 64-bit PCI bus has a capacity of around 533 megabytes per second (PCI Express increases this dramatically, but let's stick with pessimistic examples for now). At the moment, a SCSI bus tops out at 320 megabytes per second, and those transfer rates are only achievable with at least four drives on the channel and an almost exclusively sequential I/O mix (the best-case numbers for a 15,000-RPM drive are about 100 megabytes/second). So there's generally bus bandwidth to spare.
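The bandwidth arithmetic above can be checked directly (these are the post's own round figures, not measurements; the quoted ~533 MB/s comes from the exact 66.67 MHz PCI clock):

```python
# 66 MHz x 64-bit PCI bus vs. one fully loaded Ultra320 SCSI channel.
pci_clock_mhz = 66
pci_width_bytes = 8                              # 64-bit bus
pci_mb_per_s = pci_clock_mhz * pci_width_bytes   # 528 MB/s with the round 66 MHz figure

scsi_peak_mb_per_s = 320                         # Ultra320 channel peak

# Even a saturated SCSI channel leaves over 200 MB/s of PCI headroom,
# which is the "bus bandwidth to spare" claim in numbers:
headroom = pci_mb_per_s - scsi_peak_mb_per_s
print(headroom)  # 208
```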
You raise a number of other points in your note that are potentially issues (hot swappability, for example). But I've become convinced that the CPU/machine performance argument against software RAID really only made sense when CPUs/memory/bus bandwidth were much more constrained.