Chipset Serial ATA RAID Performance Exposed 359
TheRaindog writes "Serial ATA RAID has become a common check-box feature for new motherboards, but The Tech Report's chipset Serial ATA and RAID comparison reveals that the performance of Intel, NVIDIA, SiS, and VIA's SATA RAID implementations can be anything but common. There are distinct and sometimes alarming performance differences between each chipset's Serial ATA and RAID implementations. It's also interesting to see performance scale from single-drive configurations to multi-disk arrays, which don't offer as much of a performance gain in day-to-day applications as one might expect."
All too common (Score:3, Insightful)
Quick question (Score:5, Insightful)
You are a hardware vendor. Would you rather sell a) 10,000 units that are broadly compatible but offer [arbitrary number] 80% performance or b) 3,000 narrowly-focused units that offer 100% performance at a slight price premium?
I believe the revenue generated by selling 10,000 units would outweigh that of the 3,000 higher-priced units, even if the technology in a) is inferior.
I'm not saying this is the best/worst/right/wrong way of looking at the situation; I'm saying this is probably the compromise the vendor has to make when offering such items.
Re:Quick question (Score:2)
In an important way, performance==customization.
Look at overclockers. For the increase in performance, significant incompatible customizations have to be done.
Compat over perform is probably smarter decision (Score:4, Insightful)
Choosing compatibility over performance probably is the smarter decision when you are dealing with integrated devices. Those who want top performance can add the appropriate PCI/PCI-X/PCIe card.
Also, machines that need top performance often also need low downtime. When that RAID hardware goes bad replacing the card is far easier, and less expensive, than replacing the motherboard.
Surprise, surprise, surprise (Score:5, Insightful)
Why should computer hardware be exempt from the "you get what you pay for" dictum that dominates other markets?
And when you make millions and millions of any one thing, a "couple of pennies a chipset" adds up. Once again, that's what you get when you buy a commodity.
not about transfer rate (Score:3, Insightful)
Re:not about transfer rate (Score:2)
and pure transfer rate is an extremely important statistic and consideration for many storage uses
by grouping newbs and "an alarming number of techies," are you suggesting you represent a new and improved species of techie? oh yeah? well, what's your max transfer rate? huh? eh?
Re:Many techies are little more than newbs (Score:3, Interesting)
Re:not about transfer rate (Score:5, Interesting)
And for the record, this article only cements in my mind that SATA is seriously no better than IDE, just a faster version of IDE, with all its inherent problems.
Firewire is another one that's basically DOA for HDs. SCSI really is your only solution, especially if you're looking for RAID performance. (Of course, that's not your normal consumer purchase, but I have 3 SCSI RAIDed systems, so I'm not your normal consumer;)
For hard drives, read/write & reliability. (Score:5, Interesting)
But then you have to hook your drives to a controller. And controllers have the read/write & reliability factors that hard drives do AND they also have CPU utilization.
Ideally, you'll want hard drives with fast read/writes and high reliability hooked to a controller that does fast read/writes and has high reliability AND very low CPU utilization.
But if you're just looking at hard drives, you're correct in your statement.
But for best utilization of the hard drive, you at least have to look at the controller, also.
And cables.
Re:not about transfer rate (Score:3, Informative)
Surprised much? (Score:4, Insightful)
Re:Surprised much? (Score:3, Interesting)
Re:Surprised much? (Score:2, Informative)
Re:Surprised much? (Score:2)
Best Upgrade (Score:5, Insightful)
18 or 36 gig drives aren't exactly too expensive given the performance that they offer.
Re:Best Upgrade (Score:3, Interesting)
>[...]for the boot partition.
I boot once a day. I'm typically in the bathroom while the machine goes up. Seems like a darn waste to put the boot partition on a RAID-0.
I run all my games off a RAID-1, and it does help loading time in most games. Game resources are ever-increasing in size.
Re:Best Upgrade (Score:5, Informative)
Re:Best Upgrade (Score:2)
Re:Best Upgrade (Score:2)
Ewan
Re:Best Upgrade (Score:5, Informative)
Seriously though, the proper RAID level all depends on how much money you're willing to spend for the speed and/or performance you require. The types of operations (mixed read/write, read-only, or write-only) and reliability needs (can you afford to lose the filesystem, or do you need fault protection?) along with your budget usually determine the RAID level for a given system. Throw in the choice between hardware and software RAID and the decision becomes even more difficult.
Personally, I tend to mirror the OS and application filesystems and use RAID5 for data, but these are systems we deploy and need a high degree of reliability and performance (pretty even mix of read/write data transactions).
RAID 5 is a combination of the two in some ways, but it requires at least 3 hard disks.
This is the minimum configuration; RAID 5 really just costs you one disk's worth of capacity for parity. You lose capacity for the sake of reliability (example: a 5-disk setup stores 4 disks' worth of data, with the remaining disk's worth of capacity holding parity). Optionally you can add "spare" pool disk(s) as failover, automatically taking the place of a failed disk until it is replaced (to ensure availability; you wouldn't want a two-disk failure, rare but possible).
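The parity idea described above can be sketched in a few lines of Python (a toy illustration with made-up block contents, not how a real controller lays data out): XOR the data blocks to get parity, and XOR the survivors to rebuild a lost block.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-sized byte blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Four "data disks" worth of blocks, plus one computed parity block.
data = [b"\x01\x02", b"\xff\x00", b"\x10\x20", b"\x0f\xf0"]
parity = xor_blocks(data)

# Lose disk 2, then rebuild its block from the survivors plus parity.
survivors = data[:2] + data[3:]
rebuilt = xor_blocks(survivors + [parity])
print(rebuilt == data[2])  # True
```

This is why losing any one disk is survivable but losing two is not: with two blocks missing, the XOR no longer pins down either one.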
Re:Best Upgrade (Score:2, Insightful)
I hope this helps further your understanding of RAID 1.
Re:Best Upgrade (Score:3, Informative)
In essence, that's correct. However, because the same data is written to two (or more) disks, chunks of a single file can be read from multiple disks at once, much like with RAID 0. So while write times are the same as for a single disk, read speeds are higher.
Re:Best Upgrade (Score:5, Informative)
Mirroring generally improves performance, which most users and most inexperienced engineers don't realize. Because you have the exact same data on at least two different spindles, you can transfer data with twice the concurrency, and at times approaching twice as fast. When reading a large file, for instance, if each disk can transfer, say, 10 MB/second and the file is 20 MB in size, the file can be loaded in one second with mirroring and two seconds without.
In addition, concurrency allows you to load two different files simultaneously on different disks. Not only do you get faster transfer times, you don't suffer from disk head seeks back and forth as you read the files. This can actually improve "load time" by much more than twice.
Since most filesystem operations are reads, the concurrency gained by mirroring usually helps immensely. However, writes do not suffer significantly either. When you write to a file on a mirrored filesystem, it obviously must be written out to both sides of the mirror. But, it doesn't take twice as long, as one might immediately think. Data can be written simultaneously to both drives, at a cost which is only marginally slower than writing to a single disk (assuming they are attached to different disk controllers/buses, as best practices dictate).
All-around, mirroring is very good for performance.
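The arithmetic from the example above (a 20 MB file, each spindle streaming 10 MB/s) is easy to sanity-check; the numbers are idealized, ignoring seek and rotational latency:

```python
file_mb = 20.0
per_disk_mb_s = 10.0

# Single disk: the whole file comes off one spindle.
single_s = file_mb / per_disk_mb_s            # 2.0 seconds

# Two-way mirror: each spindle streams half the file concurrently.
mirrored_s = (file_mb / 2) / per_disk_mb_s    # 1.0 second

print(single_s, mirrored_s)
```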
Re:Best Upgrade (Score:3, Informative)
Unfortunately, this is only true for high-end
Re:Best Upgrade (Score:3, Insightful)
I recently moved my boot RAID 1 from the Intel ICH5R controller to the Promise controller on my Asus P4C800-E Deluxe motherboard which is connected to the PCI bus. Based on benchmarks before and after the move, the Promise controller supports reading from both drives in a RAID 1 (I was actually able to watch the drive activity lights to verify this) while the Intel controller does not. In addition, the
Re:Best Upgrade (Score:3, Informative)
Write performance *should* be the same or very close to a single drive, provided both mirror drives are equal in performance, and the controller is able to dispatch the writes simultaneously. (This will depend on the head pl
Re:Best Upgrade (Score:3, Interesting)
I'm talking about reading from the drives. Given a mirror of two identical drives, you have your choice of which to read from. This will not cause delays. It will speed things up. It is speedier than striping for reads overall, because you have your choice of which spindle to read from, and there i
Re:Best Upgrade (Score:2)
And, unless you are dealing with very large files (some games I guess) the striping size is probably too large to really help you. You are probably just putting one big data drive on there, so you aren't forcing it to stripe stuff.
RAID-1 is probably better for performance, if you are going for performance instead of size.
Re:Best Upgrade (Score:5, Interesting)
Re:Best Upgrade (Score:3, Insightful)
Plus, everyone knows Quantum drives were total crap.
Re:Best Upgrade (Score:3, Insightful)
we've opened scsi drives and ide (Score:5, Informative)
1. The platter surfaces are different: the SCSI platters are much more reflective than the IDE ones, though I am not sure if this affects reliability.
2. The platter diameter is much smaller in SCSI drives than in IDE; this probably helps them achieve a higher RPM than their IDE counterparts.
3. The head assembly is much sturdier in SCSI (probably thanks to better magnets); I find it much harder to move the heads by hand in a SCSI drive than in an IDE one.
4. There are more chips underneath a SCSI drive than an IDE one, though that alone doesn't tell you much. In FC drives there are two DSP chips: one handles internal drive functions like the motor and heads, while the other handles host I/O requests, making them much faster!
5. SCSI drives have a higher MTBF. That may not be the only gauge of quality, but SCSI drives are much better built.
Re:we've opened scsi drives and ide (Score:3, Interesting)
Re:we've opened scsi drives and ide (Score:5, Informative)
The IDE drives are sold to a consumer market where they don't need to be tested as rigorously. SCSI drives are often tested more rigorously from mechanical, electrical, and firmware perspectives. Because SCSI drives are often sold for heavy server use, they must be able to withstand constant use, around the clock, for years.
While it is possible to get the same mechanicals in both SCSI and IDE formats, I don't think that is done for any of the cheapest drives, IIRC. The WD Raptor is one example. As far as I know, there aren't any 15k RPM SATA or IDE drives. It could be done, but it wouldn't be that much cheaper.
10k and 15k RPM drives also have different platters, cases and mechanicals - the platters are more like 2" in diameter than 3".
Generally a SCSI drive is expected to last for five years, and I suspect there really is an improved build quality to justify the 5yr warranties that drive makers put on SCSI drives, in a day when a typical IDE drive gets 1 yr, or if you are lucky, three.
I know it isn't much to say, but I've yet to have any of my SCSI drives fail on me, something I can't say for the IDEs.
Re:Best Upgrade (Score:4, Informative)
If a read request goes out to drive 3 and waits for rotational latency, the channel is not blocked. Another request for a read on drive 1 can be executed and satisfied while still waiting on drive 3.
IDE performs blocking I/O, so everything would have to wait until drive 3's read was complete. I don't know if this also applies to SATA.
Re:Best Upgrade (Score:4, Informative)
It also has great command queuing as part of its out-of-order command execution. Serial ATA supports Native Command Queuing, providing these features plus First Party DMA and Interrupt Aggregation. Hardware support is relatively new; Seagate was the first to make a drive that supported it. My understanding is that the majority of Serial ATA drives out there are essentially parallel IDE controllers with a Serial ATA converter.
Here is a great article from Intel on NCQ: PDF [intel.com] HTML [216.239.39.104].
> IDE performs blocking I/O, so everything would have to wait until drive 3's read was complete. I don't know if this also applies to SATA.
Interrupt Aggregation and First Party DMA were designed to limit the effects of this. SCSI still has an advantage with its offloading controller though. I also understand that the maximum queue depth for commands on the SATA is 32, while it is 256 for SCSI.
LOUD much? (Score:2, Interesting)
Riiiight, I want a quieter computer not a turbo-fan-jet-in-a-box.
Of all the benchmarks I've seen, with a configuration of four or fewer drives, modern UDMA ATA drives can keep up with the best SCSI drives. They are cheaper and use newer, quieter technologies.
Re:Best Upgrade (Score:3, Informative)
My somewhat bitter and basic description:
RAID Chipset - cheap RAID controller chip and the sacrifice is that it uses system processor power to work the RAID calculations.
RAID card - a little more expensive, however, has a basic RAID controller chip which offloads some of the processor requirement.
RAID card with XOR engine - a full blown chip that controls and processes the RAID
Re:Best Upgrade (Score:2)
Re:*Boot* partition? A real OS doesn't boot too mu (Score:2)
It's not just about booting.
Question about striped/mirrored raid (Score:5, Interesting)
Re:Question about striped/mirrored raid (Score:5, Informative)
Then you can be penalized for seeking your heads independently, because you need to pay your seek time separately for the second 64k of a given read.
Re:Question about striped/mirrored raid (Score:3, Interesting)
Re:Question about striped/mirrored raid (Score:2)
(BTW, our new DB server here uses a 3ware Escalade; we're very, very happy with its performance)
Re:Question about striped/mirrored raid (Score:5, Informative)
Built-in RAID chipset performance has always... (Score:5, Interesting)
Re:Built-in RAID chipset performance has always... (Score:2)
Re: (Score:2, Insightful)
Re:Well lets see here (Score:5, Interesting)
They are not talking about mirroring (RAID 1) exclusively. They are talking about RAID 0 so people can stripe drives and achieve considerable performance increases.
As for me, I have an 8-channel IDE RAID card with 8 x 120GB drives in hardware RAID 5, and in 24 months have blown oh about 4 drives (3 on the same channel til I found a faulty cable)... I have really appreciated the 840GB array having fault tolerance. And yes, I do some backups of critical data, but I can't afford the storage required for regular full backups.
Re: (Score:2)
Re:Well lets see here (Score:4, Insightful)
2) Whatever setup you can afford that accomplishes what you want it to is ideal for you.
3) RAID arrays have benefits outside of the fault tolerance, mainly higher transfer rates.
4) You don't have to be a multimillionaire to afford multiple hard drives. They are down around $1 per gigabyte, so the last time I checked, one can buy a 60GB drive for about the cost of dinner for two at a nice restaurant. Skip two nice meals and you have enough money for a nicely performing RAID0 array, provided you have the motherboard/daughter card that supports it.
I understand your feeling that maybe having 8 120+ GB drives in a "home" configuration might be a little overkill, but keep in mind that everyone has different uses for their computer.
I do a little video editing at home (not professionally by any means), and having the benefit of faster throughput without the expense of buying 10K RPM Ultra320 SCSI drives is a beautiful thing. If I didn't have the RAID array, encoding a video to burn to DVD would probably take me about four hours, compared with the two it takes right now because of the killer transfer rates I get with my RAID0 configuration.
CAUTION: the above mentioned behavior of skipping nice dinners with your significant other in order to buy computer hardware is not endorsed and/or recommended by the author. Use at your own risk.
Seemed to miss an important implementation (Score:5, Informative)
For those hankering for another opinion, setting up the SATA RAID was a breeze. It was literally set it up and forget about it. The servers at work were much more difficult to set up. If you have the extra money for a spare drive (mine is two WD 10,000 RPM HDs
Re:Seemed to miss an important implementation (Score:2, Informative)
True, these RAID solutions aren't as "robust" as true enterprise RAID solutions... but you're making it sound worse than it really is.
Re:Seemed to miss an important implementation (Score:2)
Of course what I'd *really* want is a motherboard with a real, fast RAID controller built in, connected to fast interconnects like these chips are.
Promise's chips are closest to this. I would have loved to see them compared to the south bridge crap they went through.
Re:Seemed to miss an important implementation (Score:2)
I stopped reading around there. The Promise controller is totally hardware. If it wasn't, I would be using software RAID (Windows 2000 in this case) or none at all.
Re:Seemed to miss an important implementation (Score:3, Informative)
RAID-0 came
Too...Many...Graphs (Score:4, Funny)
Since it is already slashdotted, (Score:2)
RAID vs. single drive performance (Score:5, Informative)
In the words of our immortal forefathers (Score:2)
Re:In the words of our immortal forefathers (Score:3, Funny)
Forget ATA. What about MFM? Or RLL? Oh wait, this is slashdot. Everyone here was in diapers when these were used.
storage market sluggishness? (Score:5, Insightful)
1. Everyone says they need more storage, so the market for it should be huge
2. SAN or NAS configurations are always more expensive than people think (even though they are radically cheaper than they were two or three years ago).
3. Because of the sticker-shock, a lot of people actually spend their first swipe at the problem cleaning out the cruft and streamlining their business processes and data management rather than drop coinage on storage kit
4. Storage companies are having a very hard time here in Japan, probably from the influx of vendors (see #1 above).
Re:storage market sluggishness? (Score:3, Interesting)
Say a $100 box that can fit 5 IDE drives - would be perfect for bulk data storage.
PCI-E RAID (Score:2, Informative)
RAID Perfomance (Score:5, Interesting)
Don't get me wrong, the RAID 5 array is sweet and certainly amps up geek appeal, but I don't have enough friends who know what the hell a RAID array is to really impress them.
-Berylium
neat - but who knows how to set this RAID up??? (Score:3, Interesting)
1. i go to Disk Utility (standard issue with OS X)
2. select the two blank drives (with the mouse, clicking on them)
3. click "RAID 1" or "RAID 0"
4. repartition them with a GUI (not required)
then the RAID is mounted automatically on the desktop, ready for use. period. end of issue.
that's basically 4 steps - none of which require any "understanding" beyond your average emailer's brainpower. (i'm not including the "Are you sure?" dialogs - those don't count as steps)
it's things like this test that bake my brain... and why Mac users get so rabidly asshole-ish when it comes to stuff like this.
All this geek speek about a few kbps difference between the various choices out there - but when it comes down to it - its a motherfscker to try to set it up in windows and, unfortunately, Linux, which takes the cake for scoring highest on the "WTF Does That Mean?"-o-meter for disk partitioning.
And the PROBLEM with all the difficulties in setup of such a
How useful is that? It's not.
It's a classic GSFPREZ Axiom On System Performance...
"A Mac Plus will always outperform a Pentium 100 when the Pentium is experiencing an IRQ conflict between the video card and the modem card"
while I KNOW that IRQ issues are a thing of the past, the idea stands: a superfast desktop computer that is difficult to get functioning is no gawddamned use, and by definition is an anchor compared to a Model-T Macintosh... at least the Model T moves, whereas anchors don't.
all the speed and power in the world is useless to those who are more interested in DOING work with their computer than WORKING ON the computer to get it functional.
My RAID on my G5 may be slower than yours, but it took me about 2 minutes total, including the installation of the 2nd hard drive, but most importantly...
(Mitch Hedberg =+5) this thing is useful, motherfscker!(/mitch)
laugh, it's funny.
Re:neat - but who knows how to set this RAID up??? (Score:4, Interesting)
In my experience, I've found most mac users only scrape the surface of the potential their mac holds. When I'm trying to sort out some OSX networking issues, I can never find information on mac sites. I have to go to BSD sites to find the goodies. It seems mac users just use their macs to shout at non-mac users and try to rub their faces in their macness, instead of actually USING their computers.
Don't think macs are anything they're not. They're not easier to use, not faster to set up. They just look pretty and cost an arm and a leg.
Re:neat - but who knows how to set this RAID up??? (Score:3, Interesting)
Driver software is probably key (Score:5, Interesting)
I have a Dell PowerEdge in the back room with 2 15k SCSI drives running Linux and RAID 0; with hdparm -t this thing gets 125-128 MB/sec! The HD interface on that machine is definitely hung off of a PCI-E interface or something better, as the maximum theoretical transfer rate of PCI is about 33*32 million bits per second, or 132 megabytes per second.
What would be really nice is if the filesystem was put on the i960 based adaptec card...
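The bus-bandwidth ceiling quoted above checks out (nominal figures; real-world PCI throughput is lower once protocol overhead is counted):

```python
pci_clock_hz = 33_000_000   # 33 MHz conventional PCI
bus_width_bits = 32

max_bytes_per_s = pci_clock_hz * bus_width_bits // 8
print(max_bytes_per_s / 1_000_000)  # 132.0 MB/s, so 125-128 MB/s is near saturation
```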
Software RAID... (Score:2, Interesting)
That said, IMO, looking for performance out of an IDE RAID array is futile. There are rare cases, or people who have two screaming drives in RAID 0 and a perfect setup, but for the most part IDE and RAID aren't for performance - the drives and common file usage
RAID 5 would have been useful (Score:4, Informative)
I couldn't care less about a few percentage points' difference in real-world speed, but being able to up the reliability would be useful.
Specifically,
To sum it up, don't bother with RAID if you are looking for performance; buy more memory instead.
SATA RAID? I pass that (Score:4, Insightful)
I agree Raptors are great disks; 2 of them will outrun the PCI bus bandwidth. Would you go PCI-X for SATA RAID? A good PCI-X RAID card will cost $300+ for 4 ports. No thanks, I will stay with my SCSI solution.
The bottom line is that SATA doesn't even have a BUS.
Intel RAID crashing under load (Score:5, Interesting)
The article is completely useless (Score:3, Insightful)
The only users who should even contemplate deploying a RAID array will certainly do the research to come up with the ideal stripe block size, given their usage patterns and requirements.
RAID 0,1,5 (Score:5, Informative)
Raid 1 = Mirrored disks, writing the same data to all disks, so if one fails you simply replace it with no loss of data. (Total storage = 1/2 of disks)
Raid 5 = Redundant striped disks. One disk's worth of capacity is used to store XOR parity, so basically any one of the disks can go down, and once it is replaced the RAID system will rebuild the data onto that disk. (Total storage = total storage of (all disks minus one))
In RAID 1 and RAID 5, which are used in business servers, you really need hot-swappable drives so a drive going kaka will not impact the server in any way; just replace the hard drive under warranty without even rebooting the server and the RAID system will rebuild the drive.
RAID 5 is most effective in a business situation, offering a good compromise of speed, capacity and redundancy.
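The capacity figures in the summary above can be captured in a small helper (a sketch assuming identical disks; real arrays reserve a little space for metadata):

```python
def usable_gb(level, n_disks, disk_gb):
    """Usable capacity for the RAID levels summarized above."""
    if level == 0:                      # striping: everything usable
        return n_disks * disk_gb
    if level == 1:                      # mirroring: half the raw space
        return n_disks * disk_gb // 2
    if level == 5:                      # one disk's worth goes to parity
        if n_disks < 3:
            raise ValueError("RAID 5 needs at least 3 disks")
        return (n_disks - 1) * disk_gb
    raise ValueError("unsupported level")

print(usable_gb(5, 8, 120))  # 840: an 8 x 120GB RAID 5 yields 840GB usable
```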
Re:RAID 0,1,5 (Score:5, Informative)
Nope. In a real business situation, i.e. data-warehousing or ISP hosting environment, nobody trusts RAID 5. It's slow and fragile. Instead, everybody I know goes with RAID 10 (striped mirrors). Here's a typical 8-drive configuration:
Stripe: Total storage equals the same as a 4-drive RAID-0 system. Performance should be slightly better on a high-end dedicated controller, as the mirrors should be able to seek to different files independently for concurrent read requests (thus lowering latency), while the stripes should be able to operate simultaneously for large-block I/O (thus raising the streaming I/O rate).
Reliability is better than RAID-5, for two reasons:
Re:RAID 0,1,5 (Score:3, Insightful)
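The striped-mirror layout the parent describes can be sketched like this (a hypothetical 8-drive arrangement; the drive names are made up):

```python
drives = [f"d{i}" for i in range(8)]
# Pair the drives into mirrors, then stripe logical blocks across the pairs.
mirror_pairs = [drives[i:i + 2] for i in range(0, len(drives), 2)]

def pair_for_block(block_no):
    """Which mirrored pair a logical block lands on (round-robin stripe)."""
    return mirror_pairs[block_no % len(mirror_pairs)]

print(pair_for_block(0), pair_for_block(5))  # ['d0', 'd1'] ['d2', 'd3']
```

Every block lives on two physical drives, so any single-drive failure (and many two-drive failures, as long as both halves of a pair don't die) is survivable.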
Re:RAID 0,1,5 (Score:5, Insightful)
RAID 5 is increasingly marginalized by the low cost of drives and high capacity they offer. RAID 1 *should* increasingly replace RAID 5 in the minds of people who understand the issues but sadly it does not. Many people believe that RAID 5 is simply "four better". Those same people also like hot-swap.
How is this insightful? (Score:3, Interesting)
You have 6 bays in which to insert a (cold/hot) swap disk. Would you rather have 6*250GB/2 = 750GB of space, or (6-1)*250GB = 1250GB of space?
Keep in mind that in either case, if you lose a disk (doesn't matter which one), you're probably bringing the machine offline unless you're using hotswap (which you say is superfluous).
Get a decent RAID-5 hardware controller. Seriously.
Fewer wasted disks = less noise, less power, less heat, more room in the rack, and more storage.
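For the record, the bay arithmetic in the comparison above works out like this (250GB disks, matching the comment):

```python
bays, disk_gb = 6, 250

raid10_gb = bays * disk_gb // 2      # 750GB usable: half is lost to mirrors
raid5_gb = (bays - 1) * disk_gb      # 1250GB usable: one disk's worth to parity

print(raid10_gb, raid5_gb)
```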
Linux support (Score:3, Interesting)
Many motherboards come with RAID controllers that actually expect the operating system to handle them. The Intel ICH-5R had rather poor Linux support last time I checked; although it exists, installation is a pain. It seems that many SATA and consumer RAID solutions either demand running in legacy mode or don't work at all. I did not see this issue addressed in the review. I would like to know how support stands now.
Re:Linux support (Score:3, Informative)
Fortunately my chipset does not require a separate driver when running in RAID mode. My boyfriend's computer uses a Promise SATA chipset that requires a RAID BIOS switch and a completely different driver (Windows AND Linux) if you want to use it in single-disk mode. I can't imagine the mess I'd have if I used that.
Software RAID under Linux (Score:4, Interesting)
I've always wanted to compare the Linux SW RAID to the HW RAID controllers, to see if it's worth the extra CPU cycles. My guess is that it is, but it'd be great to have some numbers to back this up.
I suppose I could do it myself with hdparm and bonnie++ if it really came down to it, though... any interest in that?
They are not Hardware RAID! (Score:5, Informative)
If you check Linux Mafia [linuxmafia.com]'s web page on SATA controllers, you will find that very few of the SATA RAID controllers are actually hardware RAID. What their "drivers" really are is proprietary software RAID pretending to be hardware RAID. I think of all the SATA RAID controllers and chipsets being offered, there are only three that are really hardware RAID. And 3Ware's offering is the least expensive of the real hardware RAID options.
ttyl
Farrell
A few questions (Score:3, Insightful)
I've been snooping around for a stand-alone RAID array. Ideally I'd like it to be SCSI-compatible so I can plug it into a SCSI port on a server and have it be relatively OS-independent. RAID 5.
What are the most economical options in this area? Any recommendations for brands/manufacturers? Are there IDE-based RAID 5 drive arrays that have a SCSI interface and are they worth exploring?
3Ware - or SCSI (Score:4, Informative)
In most, if not all, cases, the RAID is really software RAID that the hardware's driver implements.
Only 3Ware [3ware.com] seems to offer real RAID-in-hardware these days (and some high-end Adaptec-cards).
Rainer
3Ware s-ata hardware RAID (Score:5, Informative)
You can configure your RAID remotely while your server is running. (But always be careful with your boot disc
But for MacOS X (& linux) geeks, the XRaid [apple.com] RuleZ!
Exposed? Everybody knows it's software RAID. (Score:5, Informative)
Most "SATA RAID" is a bunch of marketing malarkey. It is provided by the BIOS and OS, not the hardware.
There are a few "true" hardware RAID controllers, such as 3ware or some of the more advanced Adaptec controllers.
In the middle is Promise, which produces controllers with what I call "RAID offload" features -- not true RAID, but faster than non-RAID if you use Promise-specific features.
Finally, the third group of SATA controllers is the vast majority: no RAID support whatsoever, but they are being sold as RAID.
Any benchmark of SATA RAID simply benchmarks the OS- or vendor-provided software RAID driver.
Re:Too Many Checkbox feature (Score:5, Funny)
Holy shit, at this rate I will have 1 new input per year. Why can't we wait a couple of years and all agree on 1 super format?
Yeah, then they'll come out with double sided bluetooth and the upgrade cycle will start again!
Re:Too Many Checkbox feature (Score:5, Funny)
Yeah, when all you had to worry about was MFM or RLL? ST506, IDE, E-IDE, Western Digital IDE, ATA, ATA-2, ATA-3, ATA-4, ATA-5, ATA-6, SCSI, SCSI-2, SCSI-3, Wide SCSI, Fast SCSI, Fast SCSI-2, UltraWide SCSI, Ultra160 SCSI? Connectors were just as simple: 40-pin, 44-pin, or 80-pin? 25-pin D connector for external SCSI, male or female? How about a dense 50-pin D connector, or wait, maybe 68-pin? 50- or 68-pin cable for internal drives; your choice.
Don't forget to set up your SCSI bus and wave that chicken. Does your SCSI controller boot from SCSI ID 0 or 7? Maybe 6 or 4? Did you set the master and slave jumpers on those IDE devices properly? Your IDE performance sucks; you didn't put a PIO device as a slave on the same channel as your screaming-fast UDMA133 120GB hard drive, now did you? By the way, does your BIOS support 48-bit LBA for that drive? Got SCSI terminators? Need a terminator block, or is it an internal jumper perhaps?
Oh boy, things were so much simpler back then..
Re:Too Many Checkbox feature (Score:5, Funny)
Dear god I'm having sysadmin flashbacks now. Gonna be thinking of sendmail.cf all day...
Bastard.
Re:Too Many Checkbox feature (Score:3, Interesting)
Re:Too Many Checkbox feature (Score:2, Funny)
Anything that involves mixing SCSI-2 and SCSI-3 devices, internal and external chains or RAID is a full chicken experience. If you're using anything made by Iomega,
Re:RAID is for redundancy, not performance (Score:2, Insightful)
True for writes at RAID 4/5, not true for reads under any RAID level. If two pieces of data are on different drives, you can get the different heads seeking independently. RAID 0, 1, and 3 have the seek efficiency of a single drive and the data transfer efficiency of multiple drives. Since data processing accesses are dominated by seek time, RAID 4 and 5, which allow multiple independent seeks, will beat single drives.
Re:RAID is for redundancy, not performance (Score:4, Informative)
Yes, I realize that the name is somewhat misleading, but just because RAID was originally intended for redundancy does not mean that it does not have performance enhancing modes. I happen to have a RAID-0 array on my home PC.
Wrong (Score:5, Informative)
I normally don't respond to ACs, but this one is just incorrect.
Yes, RAID {1|5|10} are generally used for their redundancy purposes, but RAID 0 is used because it offers improved I/O performance. It is certainly not used for redundancy because - guess what - it doesn't offer any on its own*. Go read this [recoverdata.com] before you provide more misinformation.
* it can be used in combination with other levels - e.g. RAID 0+1 - to provide performance and redundancy.
Question on RAID 0 being used (Score:2)
Now assuming that someone buying two large hard disks doesn't want to buy yet a third disk to boot from and store vital files (e-mails, save games, documents, whatever), I can imagine them wanting to 'format' the disks in 3 partitions (per disk). Then they would back up A1 to B3 and B1 to A3 us
Re:No, you're wrong (Score:2, Informative)
Re:RAID is for redundancy, not performance (Score:2, Informative)
This couldn't be less true. RAID 0 is *all* about performance. Its only other benefit is increasing the size of a virtual disk to N*disk size. RAID 1 is mainly about redundancy, of course, but the reason people use RAID 1 over RAID 5 is almost solely performance. It's safe to say that, in most cases, RAID 0/1 yield better performance than single-disk access. That's why people use them.
Just because you read Slashdot doesn't mean
Re:Unfair comparison (Score:3, Informative)