IDE, SCSI And Recording Everything 581
Raju writes: "For many years we were told that SCSI is superior to IDE. I always made my systems with SCSI and the others in the household got el-cheapo IDE disks. In the past SCSI beat IDE hands-down but now according to Simson Garfinkel, "today's IDE drives are significantly faster than SCSI drives". In the article at O'Reilly Network he talks about the tests they had run for storage of network data on disks. In the light of this article does anyone see any reason for going with SCSI in a desktop machine? For servers with heavy disk usage patterns it might be different due to command queuing." Disk types aren't what the article's really about, though -- it's a top-level look at network forensics (including advice on building a traffic-analysis system), and makes some interesting points about the unbalanced growth of storage and bandwidth.
IDE is not necessarily worse than SCSI (Score:3, Interesting)
Speed (Score:2, Troll)
Re:Speed (Score:5, Insightful)
I can run three instances of grip and rip/encode from all three drives simultaneously. The desktop still runs like a champ; it doesn't bog down. Rip from one IDE drive and it does OK.
Sure, I may pay $400 for an 18GB SCSI drive, but it's worth it.
Re:To add to the myth... (Score:3, Interesting)
I'm confused... you say you have 2 drives, striped, and then talk about copying big files between them? If they are striped, they are one volume, and you can't copy things between them.
That was a mistype on my part, and what I meant to type was that while I have two 60GB drives off of a RAID controller, I haven't taken the plunge and striped them yet. As such they're both hanging as the single drive on their own bus, on two separate buses obviously.
I think the only reason IDE is more cpu intensive
It should be pointed out that while this is constantly restated, repetition doesn't count as evidence. Ironically, just prior to seeing this debate, I saw this [tomshardware.com] page, which shows significantly higher CPU utilization for the two SCSI drives (mind you, they're extremely high performance drives, but not on a scale that would justify the difference between them and the IDEs).

Each time I replace my workstation I go through the whole IDE versus SCSI debate because I want to go with what's best (SCSI just has an air of superiority around it, much like Honda enthusiasts feel about their 115 lb-ft of torque VTEC engines: enthusiasm, again, doesn't indicate that it's rational or based on any truths). But it seems that, firstly, it's extremely hard to find cold hard facts on the matter (i.e. basic metrics -- most of the evidence is anecdotal or based on uneven systems), and secondly, a lot of SCSI enthusiasts are very emotional about it.

I have zero faith in anyone's personal opinion about the "feel" of one over the other: I remember back in the BBS days when a program made the rounds that promised to "convert your 386 to a 486!" and people would argue with me and ASSURE me that, yup, it made their system that much faster and smoother. A little persuasion and predisposition goes a long way when it comes to subjective measures, which is why I usually discount them.
Re:Speed (Score:4, Interesting)
On the surface, I would agree with you. However, the planned usage of the disk space in question becomes an important point.
I had this conversation with Greg Oster, a friend from university who wrote the NetBSD RAIDframe [usask.ca] implementation. We were considering setting up a large network server. After doing some number crunching, something became very, very clear: unless we were going to be moving to Gigabit Ethernet, 3 IDE disks in a RAID configuration were going to be more than sufficient to fill our 100 Mbit LAN.
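A quick back-of-the-envelope version of that number crunching (with my own assumed figures, not Greg's actual ones):

```python
# Can a few IDE disks saturate a 100 Mbit/s LAN?

LAN_MBIT_S = 100
lan_mb_s = LAN_MBIT_S / 8            # ~12.5 MB/s payload ceiling on the wire

ide_disk_mb_s = 30                    # assumed sustained rate of one decent IDE disk
disks = 3

aggregate = disks * ide_disk_mb_s     # 90 MB/s raw from the array
print(aggregate >= lan_mb_s)          # True -- the network, not the disks, is the bottleneck
```

The exact per-disk figure doesn't matter much; even one drive of that era could outrun Fast Ethernet on sequential reads.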
The point is, whether IDE will be "good enough" depends on what you're using it for. For a large fileserver, IDE RAID may well be good enough, depending on your local LAN. For video editing and other purposes where the data is used on the machine where the disks reside, SCSI's command queueing may be the better choice.
Re:Speed (Score:2, Interesting)
Sure, the question was "is there any reason to use SCSI anymore in a desktop?", to which my answer would be "there never was one for 99% of users." Of course, gamer/benchmark freaks who need 200+ FPS (why?) will likely disagree with me.
Re:Speed (Score:3, Informative)
Look at "TCQ" -- Tagged Command Queueing -- that has been worked on by Andre Hedrick in the past, and is currently going into the Linux 2.5.x kernel series due to the work of Jens Axboe.
TCQ is where SCSI gets a lot of its speed, by allowing multiple device commands to be outstanding on the bus at any given time. TCQ really levels the playing field for IDE and SCSI... assuming your IDE driver supports it (most do not).
This is NOT what the article is about (Score:4, Insightful)
Western Digital's new 120 GB IDE Drive (Score:2)
I want one!
Re:Western Digital's new 120 GB IDE Drive (Score:3, Insightful)
However, I want to see not one, but eight IDE drives outperforming eight SCSI drives doing heavy I/O. That's the crux of the question for servers. For desktops, just go IDE and that's it.
Re:Western Digital's new 120 GB IDE Drive (Score:4, Informative)
If you check out StorageReview's File Server Benchmark database, you'll see that the fastest ATA drive scores well below half what a 15,000 rpm Fujitsu drive does.
Um, duh? (Score:2)
IDE still has a long way to go.
- A.P.
High Quality (Score:2, Insightful)
Re:High Quality (Score:2)
Re:High Quality (Score:2)
Re:High Quality (Score:3, Informative)
I've also had a 2GB IDE Western Digital (which I'm told is crap) that is still alive and well after 6 years.
I'm by no means an even distribution for proving or disproving any statistics, but this is my personal experience so far...
Does anyone have statistics on this? (Score:2, Interesting)
My Sys. Admin. swears by SCSI drives, horrified by the possibility of maintaining cheaper IDE RAID systems. We probably have about 2-3 TB of SCSI RAID disks, with an average of maybe 70 GB per drive? So that's about 30-40 units, and they get heavy (24 hours a day) use. I think roughly one drive fails a month. But we don't have any experience with IDE drives in the same environment, and it probably makes a big difference what the usage patterns and even temperature/cooling and humidity are for the drives. Our RAID arrays are in a climate-controlled room, which should help somewhat.
We have also been buying thin rack-mounted dual-P-III systems running Linux to do most of the processing (~24 computers so far), and these are built with SCSI local disks as well. I've never entirely agreed with the sys admin on this choice, since the local disks don't get all that much use and it's not critical if they fail. IDE would be much cheaper and for that many units, the savings would add up. But his primary reasoning is that SCSI is more reliable, and he doesn't want to waste time replacing failed drives.
So, is it really true that SCSI is more reliable than IDE? Or is this based on either ancient history or our instinctive need to justify the price? They say "You get what you pay for." I say "either that or less" -- it's an inequality.
Performance Depends (Score:2, Informative)
Depends upon usage (Score:2)
The age-old debate... (Score:5, Insightful)
As if the tens of thousands of times this has been hashed out weren't enough already...
The question of IDE vs. SCSI is not (or should not be) about speed. Really. There are nice, fast drives in each camp. If speed is all that matters to you, go with IDE; it'll be a lot cheaper.
So are there any advantages to SCSI? Sure. But not for the majority of people. SCSI's beauties are:
- You can hook a LOT of drives to one controller
- You can hook most any kind of device to the controller
- You can hook devices up both inside and outside of the case
- You can use much longer cables
- When the controller is waiting on one command, it can issue other commands while it's waiting
SCSI was designed for systems where you would either have many, many devices connected to the controller, or where many different processes (or users) would be accessing the hardware simultaneously - and in either of those situations, it *does* perform better than IDE. However, the portion of systems that will actually enter into that area are very, very few. In general, "if you have to ask, you don't need it."
As for straight speed, if you're looking for all-out throughput, don't rely on a single drive, get a RAID array - be it IDE or SCSI. By getting a faster drive, you can increase your throughput by what - 10%? 20%? A two-drive array will nearly double your throughput, and with quality controllers, it's fairly linear up through three to five drives - again, depending on the quality of the controller.
steve
Re:The age-old debate... (Score:4, Interesting)
As for arrays, beware of the benefits of striping. RAID 0 (striping) has the problem that the more drives you add, the less reliable your array becomes. RAID 0+1 (or RAID 10) mirrors the data as well and keeps your data secure in the event of a single disk failure (and RAID 10 can occasionally survive multiple disk failures).
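The reliability claim is easy to sketch numerically. A rough model (assuming independent failures and a made-up 95% per-disk annual survival rate, purely for illustration):

```python
# RAID 0: the whole stripe dies if ANY member dies.
# RAID 10: a mirrored pair only dies if BOTH of its members die.

def raid0_survival(p_disk_ok: float, n: int) -> float:
    """Probability the stripe survives: every disk must survive."""
    return p_disk_ok ** n

def raid10_survival(p_disk_ok: float, pairs: int) -> float:
    """Probability the array survives: every pair must keep one disk."""
    p_pair = 1 - (1 - p_disk_ok) ** 2
    return p_pair ** pairs

p = 0.95  # assumed chance one disk survives the year
print(round(raid0_survival(p, 4), 3))   # 4-disk stripe: ~0.815
print(round(raid10_survival(p, 2), 3))  # same 4 disks as RAID 10: ~0.995
```

So a four-disk stripe is noticeably less reliable than a single disk, while the mirrored layout is more reliable, at the cost of half the capacity.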
Re:The age-old debate... (Score:2, Informative)
For example, SCSI-2 device types include:
Direct Access Devices (Hard Drives)
Sequential Access Devices (Tape Drives)
Printer Devices
Processor Devices(*)
Write Once Devices (WORM, CD-R)
Optical Memory Devices (CD-RW, MO)
Medium-Changer Devices (Jukeboxes, Tape Libraries)
Communications Devices
(*) Yes, you can do processor-processor communications using SCSI. You do, however, need a target-mode driver.
Re:The age-old debate... (Score:3, Interesting)
This is simply not true for all SCSI busses. Each device will use the speed it is capable of. All devices are not forced to the slowest speed. It is true that slow devices may tie up the bus for longer periods than a time-sensitive device can tolerate, but then you shouldn't have placed the two onto the same bus anyway!
One thing that is true is that mixing single ended (SE) and low voltage differential (LVD) devices on a bus will cause all devices to behave as SE, with a possible lowering in the maximum speed possible for the LVD devices, but again, this does not necessarily mean they will all run at the same speed.
Re:The age-old debate... (Score:2)
That would be HVD, "High Voltage Differential," sir. 25m max, to be more exact. It is a standard that is not supported by anything but some tapes (as in your example) and some SANs. You are not connecting anything else to this controller (except in SCSI-2 compatibility mode via the 50-pin internal connector, assuming you have an AHA-2944). It is also obscenely expensive (low volume production).
The more common varieties that are priced at more normal prices do much shorter distances.
You are mostly correct about the speed, though. Connecting an old device to a chain with new ones makes the chain go from differential to SE mode. This change does not necessarily force a drop to SCSI-1, if I recall correctly -- it usually does, but it is not obliged to. But it does impose a speed penalty.
Re:The age-old debate... (Score:4, Funny)
A two-drive array can double your throughput, but halve your reliability, since if one of the drives fails, you lose all your data.
That sort of RAID is neat, but it's just inviting disaster. You need to move to the higher levels of RAID, which involve more drives and offer parity as well as striping!
Re:The age-old debate... (Score:2)
I don't mean to jump on you in particular, but
Re:The age-old debate... (Score:2)
The drives are just as reliable on their own as they are in a pair. The way his post was worded, he seemed to be insinuating that there's some kind of degraded reliability of the drives while running in a RAID configuration.
Re:The age-old debate... (Score:2)
And as another poster pointed out, yes, I was talking about RAID striping two drives here, not a two-drive mirror configuration.
Re:The age-old debate... (Score:3, Insightful)
Re:The age-old debate... (Score:4, Interesting)
Take a 7200 rpm SCSI drive. Take a 7200 rpm IDE drive. Rip off the electronics.
You now have two identical drives.
That's how it's been for most vendors for years now. SCSI does offer higher spindle speeds (10 and 15k RPM) and the various other benefits spoken of, but reliability is not one of them. The electronics rarely fail on HDs; instead it's a failure of a mechanical component (the motor, the heads, etc).
SCSI really doesn't serve much purpose on desktop machines anymore. Three times the cost for little or no performance gain. The days of IDE being vastly slower (even on the desktop) are gone, as are the days of IDE CD-R/RWs spitting out coasters if you so much as moved the mouse. There are a few people who will go out and buy the fastest SCSI drives out there, toss them in a RAID array, and then play games on it (no, I'm not kidding... a friend of mine did), but the cost-benefit there is so small as to be ludicrous.
Re:The age-old debate... (Score:4, Insightful)
Re:The age-old debate... (Score:3, Interesting)
Obviously this works better if you look at older drives, since there aren't many 7200 rpm SCSI drives still being manufactured.
Sorry, but anyone thinking otherwise is trying to convince themselves that there's something magical about a physical transport medium that has the same performance requirements and characteristics.
They're also trying to convince themselves they're not being ripped off for buying SCSI.
Re:The age-old debate... (Score:2)
Re:The age-old debate... (Score:2)
- When the controller is waiting on one command, it can issue other commands while it's waiting
This is exactly why it's NOT a good idea to have two IDE devices on the same cable, if you expect that both will be used at the same time. Such as your HD and CDROM.
IDE allows only one transaction to be active on the cable until it's entirely completed. So, for example, you cannot issue a read to the HD and then, while waiting for the HD to become ready for transfer, issue a read to the CDROM. You have to wait until the HD is ready, transfer the data, and only then can you access the next device.
A big shortcoming -- that's why it's nice to have a board with a RAID controller, just for the extra IDE interfaces.
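A toy timing model of why that serialization hurts (all the millisecond figures here are invented for illustration):

```python
# Two devices share one cable: a hard disk and a much slower CDROM.
# Each operation = time spent getting ready (seek) + time on the bus (transfer).

hd_seek, hd_xfer = 8.0, 2.0      # ms
cd_seek, cd_xfer = 100.0, 5.0    # ms

# IDE-style: the second command can't even be issued until the first
# transaction completes, so the two seeks run back-to-back.
serialized = (hd_seek + hd_xfer) + (cd_seek + cd_xfer)

# Queued/disconnect-style (what SCSI allows): both devices seek at the
# same time; only the bus transfers themselves must take turns.
overlapped = max(hd_seek, cd_seek) + hd_xfer + cd_xfer

print(serialized, overlapped)    # 115.0 vs 107.0
```

The gap grows as seek/ready times dominate transfer times, which is exactly the HD-plus-CDROM case.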
Google... (Score:2)
-kwishot
Re:Google... (Score:2)
It's all RAM across several thousand machines. The IDE disk system is a backup for power outages.
Similar to how many companies use a tape backup in case the hard drive dies.
Re:Google... (Score:2)
hahahaha you haven't worked in the corporate sector much, have you?
SCSI Advantage.. (Score:5, Insightful)
I think one of the big things is that processor speeds have kept on shooting up, meaning that while IDE has been considered a serious contender for small to mid-sized servers increasingly over the past few years, it's now becoming much more plausible to use it on higher-scale systems.
Re:SCSI Advantage.. (Score:2, Informative)
Re:SCSI Advantage.. (Score:2)
It depends on what you need. (Score:2)
If you run a multi-user computer, high-end server, or a system where hardware reliability is at a premium, SCSI is still the way to go, though - but you pay a premium for it. Features like command queueing and disconnect/reconnect are really helpful when running a server that has to manage a heavy load, or a complex multi-user application. And the best RAID systems are still SCSI-based.
But if you are running a server box that runs some sort of brain-damaged inefficient server or client OS [microsoft.com], IDE is more than enough for you.
Misinterpretation (Score:4, Insightful)
That's probably true. For example, you can buy an 80GB Western Digital 7200RPM drive for $150. That is $1.88/GB. The only 7200RPM SCSI drive made these days is the Seagate Barracuda, which is $300 for 36GB: $8.33/GB.
That really isn't the point of SCSI though. I'll accept that IDE wins on a money-per-GB basis. But, IDE has a performance ceiling that SCSI doesn't have. You can't get 10000RPM and 15000RPM drives for IDE at any price, period.
There is a point, when building RAID systems, where SCSI exceeds IDE in the $-per-I/O-per-second metric. In desktop systems, you probably won't exceed this point. But if you intend to have stripe sets of 4 or more disks, SCSI will win the price wars again.
Anyway it really isn't a matter of SCSI being expensive and IDE being cheap. It's the drives that are expensive/cheap and it simply works out that expensive drives get SCSI connections and cheap drives get IDE connections.
P.S. Have fun trying to get your 4-disk IDE RAID all within 18 inches of your IDE controller :)
Re:Misinterpretation (Score:2)
Maxtor's product page [maxtor.com]
I like the metal on metal sound of SCSI! (Score:5, Funny)
Re:I like the metal on metal sound of SCSI! (Score:4, Funny)
Give the Western Digital ATA-100 drives a shot. They sound like stones mixed with sand being ground together.
CD to CD copy (Score:2)
For CD to CD copy you'd be hard pressed to beat the speed you could get with SCSI. The multiple IO at the same time makes that rock.
Re:CD to CD copy (Score:2, Informative)
The SCSI drive burned CDs in half the time.
It's All About MTBF (Score:2, Informative)
For example, the Seagate Cheetah X15 36LP has MTBF of 1.2 million hours, whereas the Seagate Barracuda ATA IV has an MTBF of 0.6 million hours.
Longer life = better ROI
Re:It's All About MTBF (Score:3, Informative)
All this means is that for any given drive in any given year, you have a 1.4705% chance of your drive failing, on average.
If you have 68 drives in your system, then, it is likely that one will fail per year.
That's stats for you.
So higher MTBF can actually work out to better reliability.
Of course, this is a simplistic analysis, which doesn't take into account the actual distribution of mortality for those drives (which, for any hardware, tends to have the stillborn/geriatric ends of the spectrum with the most failures)
Simon
how many drives (Score:2)
ide is now faster, but has been limited by the number of ide channels your cards/motherboard give you...
granted, with the new abit max [abit.com.tw] boards coming out, with 12 ide devices, that's not a problem...
if you need more than 12 hard drives, when you're building a perfectly NEW system, i would use SCSI... if not, just go with the 'new level' of motherboards coming out, and smack some IDE drives into the case....
now if i could only get a better power supply for all of them.
The Big Picture is Much More Important (Score:2)
The ramifications are important.
Also - how does this storage boon impact other kinds of surveillance?
This whole line of thought is a big part of making big brother a reality.
Just a thought.
Data integrity (Score:2, Informative)
Yes, the performance delta between SCSI and EIDE drives seems to be shrinking, but I would take a 15k SCSI drive over a 7200RPM 8MB-cache EIDE drive any day.
Just my $0.02
No. (Score:2)
SCSI drives (from the same manufacturer) use exactly the same physical mechanism as EIDE ones, but with different controller cards (or sometimes, just different firmware and different physical connector)
SCSI and EIDE do exactly the same ECC mechanism and exactly the same reserved-bad-blocks mechanism.
I wish it was possible... (Score:5, Insightful)
The speed comparison of SCSI vs. IDE was most certainly referenced within the context of the story; however, that was by no means the intended takeaway the author had for his readers -- it was but a supporting factoid for his other conclusions and thoughts. The article was a very well-written analysis, history, and summation of the practice of network forensics. While it did cover a wide range of technologies (including hard disks) that aid in the collection of such forensic intelligence, by no means was his observation of the increased speed of IDE drives intended to monopolize the reader's attention or be the central focus!
Even worse, the majority of posters have (unsurprisingly) focused on everything but the article's intended subject matter. Now ensues the typical flame-war of people supporting their preferred technology instead of having intelligent discourse concerning this exciting and evolving new field of IT security...
Oh well...if you can't beat them, I suppose you might as well join them! For the record, my vote remains with the tried and true performance and quality of SCSI...
How appropriate... (Score:5, Funny)
Hello SCSI my old friend
It's getting very near the end
Some people like it both ways (IDE+SCSI) (Score:3, Interesting)
Over the years the benefit of running SCSI decreased. First, bus-mastering IDE channels came along and got rid of the annoying pauses. Then they started turning up the clock speeds with UDMA 66, 100, and so forth, until my aging SCSI drives could barely compete with even an average IDE drive.
Naturally, I did what any self-respecting bithead would do: I upgraded my SCSI components. By that time (circa 1999) the price gap between IDE and SCSI had narrowed somewhat (this was before IDE storage prices bottomed out) and I was able to purchase two 18Gb SCSI drives for a mere 25% more than the equivalent IDE drives would have cost me. And once again, I was happy with decent performance, low latency and high throughput.
Two weeks ago, I found myself scrabbling to free up a few megs and realized it was that time again, time to upgrade my storage. Looking at Pricewatch, I noticed that IDE drives are now cheaper than Big Macs and come in similarly absurdly-sized portions. Would you like 160Gb of space for your MP3s? No problem--they've got you covered, at $200 a pop! Meanwhile, relatively few vendors have stayed on the SCSI bandwagon, demand for SCSI drives is mostly limited to legacy systems that don't support an IDE bus, and a 160Gb SCSI drive will cost you $900.
In the face of this incredible price ratio, I did what any self-respecting bithead would do: I threw in the damn towel. Now I'm in a transitional period where I run 36Gb of fast UW SCSI storage and 160Gb of even faster IDE storage; I have a SCSI DVD-ROM drive, a SCSI CD burner, and an IDE DVD+RW burner, I/O controllers are fighting each other to the death to secure an interrupt, and the inside of my case looks like the aftermath of a tragic explosion at a cabling factory. I'm damned lucky my system is water-cooled, because I doubt any system fan could pull enough air through that morass of ribbon cables to make a difference in cooling.
The moral of the story: SCSI had its glory days, but it just ain't cost-effective anymore. And with Serial ATA looming on the horizon and promising God's own transfer rates, it just doesn't make any sense to buy SCSI.
Re:Some people like it both ways (IDE+SCSI) (Score:2)
Expandable Storage... (Score:2)
I do a lot of work with 3D Rendering and Digital Video, etc. I have tons of high quality footage that needs to be stored. The reason I'm running SCSI is because it's really easy to add new devices. SCSI has enough channels that you can have one card control a bunch of disks. I have 5 SCSI drives at work and a couple of Firewire drives for transporting data around to other computers.
At home I have 1 SCSI and 2 IDE hard disks, and now an external Firewire drive. The SCSI drive is my performance capture drive. I have a 14 gigger that's reasonably fast, and an 80 gigger that's slow. The 80 gigger is for archival of the compressed video, or the uncompressed I don't need to get as quickly. Then I have the Firewire 80 gig drive (also slow) that I attach and do backups to occasionally. The drive stays off when it's not in use. I figure it's more reliable that way.
I can foresee the day coming, before too long, when I have only high performance IDE drives and Firewire drives, but no more SCSI.
What keeps most people on scsi? (Score:2, Interesting)
Not to mention, there is also the smartness of the SCSI controller. I've used "good" USB scanners, and they take over your computer when scanning. With SCSI, you can burn a CD, scan a picture, and play Quake 3 without a hiccup. Now, who would do all that? Well, think enterprises. Think 14 15,000 RPM SCSI drives in a RAID 5 (or whatever). Or think media people having to render images while saving to a file and other stuff.
Oh yeah, and there's nothing like sending in an older (3 year old) SCSI drive for RMA, with no questions asked other than "how can I help you?"
But I have to admit, these days I keep questioning my continued buying of SCSI for home. Then I just look at all the things I have on SCSI, and think of how I would "try" to do it without it. And I can't.
Oh yeah, something you IDE people can't do:
1 10k rpm hd OS
1 10k rpm hd swap/tmp
1 10k rpm hd data
1 10k rpm hd applications/games
1 10k rpm hd mp3/downloads, etc
SCSI vs. IDE is not the issue (Score:3, Interesting)
Manufacturers produce the fastest disks on the planet with SCSI interfaces only. There are no 10K/15K RPM IDE disks, period. If one wants the lowest access time available today, coupled with respectable transfer rates, one must purchase a 15K RPM drive, and those are only available with SCSI interfaces.
For single-user access patterns, the author is correct to state that IDE drives have the lead today. StorageReview.com recently reviewed the latest 7200 RPM Seagate SCSI offering, and it was beaten down in single user tests by half a dozen of the newer IDE drives; however, when tested with server access patterns, it was the clear leader (excluding higher-RPM offerings, of course.) Still, 7200 RPM drives can't beat 15K RPM drives in any access pattern.
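The rotational-latency arithmetic behind that spindle-speed gap is simple (standard formula; seek time comes on top of this):

```python
# Average rotational latency = half a revolution, on average the head
# must wait that long for the target sector to come around.

def avg_rotational_latency_ms(rpm: int) -> float:
    ms_per_revolution = 60_000 / rpm
    return ms_per_revolution / 2

print(round(avg_rotational_latency_ms(7200), 2))   # 4.17 ms
print(round(avg_rotational_latency_ms(10000), 2))  # 3.0 ms
print(round(avg_rotational_latency_ms(15000), 2))  # 2.0 ms
```

Halving the latency on every random I/O is exactly what server access patterns reward, which matches the benchmark split described above.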
And I noticed the author was RAIDing drives -- 3ware's RAID products are very high quality, and their performance exceeds each and every other RAID card out there, SCSI or IDE interface. That surely contributed to his conclusion that current IDE drives are faster than their SCSI counterparts.
Re:SCSI vs. IDE is not the issue (Score:3, Informative)
Actually, SSA is rated at 180Mbps, whilst SCSI 3 is 160Mbps. Technically, the fastest drives are RAM drives. DATARAM used to make boxes (8 or 12 U, as I remember) of nothing but static ram. Blazing speed, sky high prices.
OK, I'm nit picking here.
Re:SCSI vs. IDE is not the issue (Score:3, Insightful)
Faster controllers are SCSI, too (Score:3, Interesting)
IDE RAID is fine, it's cheap, but with newer IDE drives pushing 50 MB/sec (sustained) you could max out a standard PCI bus with three drives. Need more throughput? Then you're stuck waiting for PCI-X IDE RAID controllers, or at least 64-bit/66 MHz versions. And in the meantime, SCSI will just get faster.
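The bus math, sketched with nominal figures (theoretical peaks; real-world PCI efficiency is lower still):

```python
# Theoretical peak bandwidth of a PCI bus: width in bits / 8 * clock in MHz.

def pci_bandwidth_mb_s(bus_bits: int, bus_mhz: int) -> float:
    return bus_bits / 8 * bus_mhz

plain_pci = pci_bandwidth_mb_s(32, 33)    # ~132 MB/s, standard desktop PCI
wide_fast = pci_bandwidth_mb_s(64, 66)    # ~528 MB/s, 64-bit/66 MHz slot

drive_mb_s = 50                            # sustained rate claimed for newer IDE drives
print(3 * drive_mb_s > plain_pci)          # True: 150 MB/s of drives vs ~132 MB/s of bus
print(3 * drive_mb_s > wide_fast)          # False: plenty of headroom on the wider bus
```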
When they fix the jagged mouse pointer problem.... (Score:2)
Heavy use of SCSI drives does not noticeably impact system performance. When I say "noticeably," I mean those intermittent pauses a computer experiences during disk usage. That is, when you're moving your mouse and the pointer skids across the screen, making it incredibly difficult to get any work done. I absolutely hate this. If anyone knows of an IDE setup that will solve this problem, just THIS problem, I'll dump my ridiculously expensive Seagate X15 in a heartbeat. Until then, it's worth it to me to shell out an extra $200/box and deal with smaller capacity drives.
All worry, no substance (Score:2)
This is completely irrelevant unless you have an application which is tremendously hard drive bound, and you've done benchmarking to determine which type of drive or specific model of drive will work best for your purposes. Otherwise this is just the typical, meaningless fretting that some geeks have made a hobby of, such as buying a new, expensive video card so they can get 327fps in Quake instead of 270.
Whatever... (Score:2)
- tagged command queuing (multiple outstanding I/O requests to a single drive)
- disconnect (drive does not "hog the bus" while waiting for an I/O to complete)
- you can have up to 15 drives per channel (compared to 2 on IDE) with minimal performance impact
- 15,000 RPM SCSI drives are available, although they do require extensive cooling.
It really burns me when some idiot claims SCSI is dead just because he doesn't see any reason to use it on his POS desktop system. A friend of mine recently set up a PCI-X based system with 8 SCSI channels and lotsa drives, and benchmarked it at over a gigabyte a second transfer rates (yes, that's 1024+ MB/s). It'll be a long time before you see that with IDE anything.
(Serial-ATA does promise to bring many improvements to the low end of storage, but by the time it gets common, SCSI will be even further along with Ultra320, etc)
Neither is perfect. (Score:2, Insightful)
For one, most of ATA133/ATA100 is a lot of hype. On long transfers (or any transfers that exceed the cache size of the drive) I have yet to see an IDE drive break 30-40 MB/s. In fact, testing an "ATA133" drive on an ATA133 controller vs. an ATA100 controller, I saw no gain in speed. There was a gain over ATA66, because the ATA66 bus can't quite sustain a constant 30-40 MB/s.
Which brings me to another point: like all buses, the 66/100/133 is the peak allowed; it is usually not nearly that fast.
The spindle speeds could also be higher on IDE. You can get top-notch SCSI drives that run at 15,000 rpm; the best you find with IDE is 7200 rpm. The drives would obviously be a little better at filling the bus if they had faster motors.
The IDE bus lacks any intelligence. It is the intelligence you are really paying for with SCSI: the command queues, the multitasking bus, etc., etc.
Lastly, SCSI drives are obviously way more expensive, as are their controllers. Of course, you are getting a higher quality (read: better built, not faster) product.
Basically, what it comes down to in real world performance is that no matter what you choose, IDE or SCSI, your disk drives will be the biggest bottleneck in your system by a long shot. If you run a single drive system, or have enough buses that you don't share them, SCSI doesn't really provide enough to justify the cost on a desktop, in my opinion.
Costs: Why SCSI > IDE? (Score:2, Insightful)
I can understand that in the '80s and '90s SCSI electronics were expensive, but I would have expected those electronics prices to have fallen by now. How complex is a SCSI controller? Does it have a chip running at 600MHz or something?!? (Guess not.)
Any input about the reasons why SCSI $> IDE is welcomed.
Simon & Garfunkel Reunited!! (Score:2, Funny)
What? Simson Garfinkel? Who the hell is that? I thought it said... oh hell, never mind...
Don't forget disconnection and hot-swapping! (Score:2)
Of course, those who would try this should be using another great feature of SCSI: the single connector attachment (SCA) plug, which allows SCSI drives to be hotswapped, and often assigned a SCSI ID on the fly.
While many have spoken about the ability for SCSI drives to be used in RAID configurations, a huge benefit is the fact that the drives can be swapped off of the bus/host without turning the host off. This is a huge boon for server environments, where uptime is king. IDE does not have any features like this.
SCSI also has the ability to be used in a "simple" cluster of two machines. Sorry, but I'm hardly an expert on this, so I can't fill in the specifics. But you basically have two identical machines each with a RAID controller, and then these are both hooked up to the same disk array. That way, if one machine goes down, the other still has the current file data.
use the best technology for the job! (Score:5, Insightful)
Beta is technically superior to VHS.
Novell Netware is technically superior to Windows NT.
SCSI is technically superior to IDE.
Does any of this matter to most of the market? Not really, since most people look primarily at up-front cost. I've been telling my customers (mainly small businesses) that mirrored IDE drives are the best value for general purpose data storage. The gap has narrowed; IDE definitely makes more sense for most people (and even most servers) these days.
If I were spec'ing out a system for high-end video editing, or a system that absolutely had to process thousands of transactions a second, or a general purpose file or e-mail server that supported thousands of users, or a GIANT SAN, I'd go with SCSI. SCSI shines in really big storage pools, or in places where you absolutely need the fastest possible speed. But for most things, IDE undercuts SCSI by a long shot.
That said, there is one major problem with IDE, and it's not bandwidth (most "higher-end" IDE-RAID controllers, such as some of the new ones by Adaptec, have multiple channels for multiple drives) - it's the lack of truly standard chipsets & APIs for accessing IDE block devices. The original spec has been hacked onto so many times that you're really at the mercy of the manufacturers' drivers for any "sophisticated" IDE implementations. This has gotten me into trouble several times. SCSI drivers tend to be more plentiful than high-end IDE drivers, and the testing cycles seem to be better because OS vendors actually care about them.
But again, people who buy SCSI just on the technical merits of it may as well throw their money away. I wish the situation were different, but I don't think it will change unless drive vendors DRASTICALLY lower SCSI drive prices. Right now they're getting away with charging lots of extra dough simply because managers are hearing "SCSI is way better!" from their employees when purchasing hardware. That may have been true a few years ago, but it'll take a few years for the general consensus to swing in the other direction. (I really, really like SCSI too, and I think IDE sucks as a technology... but money talks) :(
SCSI Adapters for IDE drives (Score:2)
Heavy IDE disk load = poor performance (Score:4, Interesting)
My previous machine was a single PPro-200 with SCSI disks. Under heavy CPU load, it crawled horribly. However, under heavy disk load, it remained much more responsive than my current system.
Therefore I conclude that SCSI really does perform better, even if the drives themselves are matched on throughput and access times. I think most benchmarks suffer a little from tunnel-vision and focus only on the raw disk performance without really taking into consideration what it all means in real world situations.
I put up with the worse overall performance of IDE because it's so much cheaper. Of course, I'm up to my limit (4 devices) and need a new controller if I want to add any more. And I have to remember to be careful about tying up the IDE bus attached to my CD-RW when I'm burning discs. I can't see that last point being a problem with SCSI.
Another Reason IDE sucks (Score:5, Informative)
FreeBSD 4.3 flirted with turning off IDE write caching. This reduced write bandwidth to IDE disks but was considered necessary due to serious data consistency issues introduced by hard drive vendors. Basically the problem is that IDE drives lie about when a write completes. With IDE write caching turned on, IDE hard drives will not only write data to disk out of order, they will sometimes delay some of the blocks indefinitely when under heavy disk loads. A crash or power failure can result in serious filesystem corruption. So our default was changed to be safe. Unfortunately, the result was such a huge loss in performance that we caved in and changed the default back to on after the release.
[...]
There is a new experimental feature for IDE hard drives called hw.ata.tags (you also set this in the bootloader) which allows write caching to be safely turned on. This brings SCSI tagging features to IDE drives. As of this writing only IBM DPTA and DTLA drives support the feature. Warning! These drives apparently have quality control problems and I do not recommend purchasing them at this time.
So, SCSI is better both for performance and for data integrity.
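For reference, the knobs described in the quote above are boot-loader tunables, not runtime sysctls. On a FreeBSD 4.x box of that era the /boot/loader.conf entries would look roughly like this (a sketch; check the ata(4) man page for your release before relying on it):

```
# /boot/loader.conf
hw.ata.wc="0"      # disable IDE write caching: slower writes, safer filesystems
hw.ata.tags="1"    # experimental tagged queueing; only certain IBM drives support it
```

With tags enabled on a supported drive, write caching can be left on without the out-of-order-write corruption risk the quote describes.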
Look! They are *different*, not better/worse. (Score:5, Insightful)
If however you have 100 people all accessing different pieces of the disk, some reading, some writing, then IDE will just not cut the mustard. It requires too much CPU involvement. With SCSI, the CPU just says "here, you handle this" to the SCSI interface and gets on with something else instead. In addition, with SCSI I can have 15 devices on a single bus; with IDE, I can have 2.
So basically:
SCSI = scalability & heavy loads.
IDE = low cost & single user access.
Use the one appropriate to your application. For most people that'll be IDE, for other people chucking a lot of data around and lots of processes doing different things, SCSI would be better.
Just a quick rant about laptops. People think that a 1GHz laptop is as fast as a 1GHz desktop. It isn't. Laptop disks are designed with power management in mind and are often significantly slower than even normal IDE. So if your management thinks that everyone should have laptops, tell them not to complain when their Oracle client runs like shit.
Why I'm a SCSI Bigot (Score:3, Interesting)
I've been a SCSI bigot since my Amiga days. Just 15 short years ago, all that was really available for consumer-level computers was SCSI, ESDI, and ST-506.
ST-506 was hardly an interface at all. You had to tell the BIOS the number of cylinders, heads, and sectors the drive had (sound familiar?), so that it could do the multiplication and convert logical block addresses into positioning information for the drive. You also had to enter, by hand, the bad-block list printed on a sticker affixed to the drive. An ST-506 interface was available for the Amiga 2000, and setting it up was predictably a bear.
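The multiplication the BIOS did to turn a logical block address into positioning information can be sketched in a few lines (the 4-head, 17-sector geometry below is a hypothetical example, not any particular drive):

```python
def lba_to_chs(lba, heads, sectors_per_track):
    """Split a logical block address into (cylinder, head, sector).

    Sectors are numbered from 1, following the old BIOS convention.
    """
    cylinder, rest = divmod(lba, heads * sectors_per_track)
    head, sector = divmod(rest, sectors_per_track)
    return cylinder, head, sector + 1

# Hypothetical 4-head, 17-sectors-per-track geometry:
print(lba_to_chs(0, 4, 17))   # (0, 0, 1) -- the very first block
print(lba_to_chs(68, 4, 17))  # (1, 0, 1) -- first block of cylinder 1
```

On SCSI (and later LBA-mode IDE) the drive's own firmware does this translation, which is exactly the "intelligence on the drive" point made below.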
SCSI saw its first consumer deployment on the Mac, and Amiga got it not too long after. No more CHS crap. No more typing in lists of bad blocks. All that intelligence was on the drive itself. Just plug the drive into the chain, tell the OS what SCSI address it had, and you were ready to start partitioning and using the drive.
So when it came time for PCs to get intelligent drives, SCSI was the obvious choice. But no, they invented this new thing called IDE. What was different about it? As far as anyone could tell, the cable. You still had to feed CHS addresses to it; SCSI used LBA from the start. IDE drives from different manufacturers wouldn't work together; SCSI mandated interoperability. IDE now let you have two drives in your machine; SCSI already allowed up to seven.
IDE was touted as much cheaper, but it wasn't. SCSI and IDE drive prices were at near parity for years. Manufacturers were offering drives in both IDE and SCSI flavors (all other characteristics identical), with the SCSI flavor costing only ten dollars more (for a $600.00 drive, a typical price in those days, this was epsilon). It's only in the last few years or so that SCSI drive prices have skyrocketed for no readily discernible reason.
Add to that the fact that, even on a modern SCSI controller, all your old drives will still work. I have an old 600M 5-1/4-inch full-height Hewlett-Packard drive with a SCSI-I (asynchronous) interface. I plug it into the Adaptec AHA2940-U2W controller in my main rig, and Linux sees and mounts it just fine. Same with all my other old SCSI drives; I don't have to leave any of my data behind. It Just Works.
I also have an HP Omnibook 800CT laptop, which has SCSI built-in. All my drives work on that, too.
Apart from the artificially inflated costs, SCSI's only real headache is bus termination [scsita.org]. But aside from that, the increased speed, flexibility, expandability, and reliability, for me, make SCSI an obvious choice.
Schwab
SCSI is not desktop. (Score:5, Insightful)
IMHO, the SCSI bus system is better than everything IDE/ATA can offer to date. It's not necessarily the devices that need to be put up against each other. Most recent SCSI disks in "acceptable" sizes are so expensive that you can easily build a RAID system from IDE disks for the same or even lower price. However what's really bad about IDE is the short bus. Face it, length and size do matter in some cases.
You can have a 12m LVD-SCSI bus with 15 devices plus controller running at full speed. But that's not desktop. You'll have trouble just cramming the disks in your average-sized tower, and you still need one or two additional PSUs to get them spinning. And now you take the sucker out to a LAN party; don't forget to call your chiropractor and book appointments for the next two weeks straight.
Then there's IDE. With today's U-ATA133 specs you're limited to, like, 50cm of bus length. Heck, that's about the height of a midi-tower! But it gets the job done. No external devices for you, sorry. And you're down to 4 devices on your average motherboard, but most users can live with a CD-ROM, a CD-RW and one or two disks. With onboard RAID controllers coming up, there's an additional four disks possible and you can even plug in a separate DVD drive. You don't need a nuclear plant to get it running, you have lots of storage for a desktop machine, and you can still carry it around. Perfect.
To sum it up, I think SCSI is still great, but it's losing on the desktop nowadays. The disks might last longer, it might be more flexible, but in the end, it's way too expensive and overkill. And then there's serial ATA on the horizon.
Comment removed (Score:3, Insightful)
Re:Stability... (Score:2)
Re:Stability... (Score:2)
The IDE drive's performance SUCKS. It's horrible. My PII 266MHz (state of the art at the time I bought it) at home with SCSI drives just kicks the crap out of this thing on file copies, compiles, etc.
Re:SCSI (Score:3, Insightful)
Ever try to add 8 IDE devices to a system? With SCSI it's a snap as long as your power supply is large enough.
I think this is very application specific though.
Re:SCSI (Score:2)
Re:SCSI (Score:3, Informative)
Re:SCSI (Score:4, Insightful)
That's only true if the program is doing disk I/O asynchronously. If your program is doing I/O inline with its execution, it will be paused just as long regardless of where the disk I/O computation is being done.
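The distinction can be sketched in a few lines. This is a toy illustration: a thread stands in for a controller completing a transfer on its own, and the file path and sizes are made up.

```python
import os
import tempfile
import threading

def slow_write(path, data):
    # Stands in for a long transfer that a smart controller
    # could complete without the CPU babysitting it.
    with open(path, "wb") as f:
        f.write(data)

path = os.path.join(tempfile.gettempdir(), "scratch.bin")

# Asynchronous: kick off the transfer, then keep computing while it runs.
t = threading.Thread(target=slow_write, args=(path, b"x" * 10_000_000))
t.start()
total = sum(range(1_000_000))  # CPU-bound work overlaps the I/O
t.join()

# Done inline instead (just calling slow_write() before the sum), the
# program would block for the full write no matter which chip moved
# the bytes -- which is the parent comment's point.
print(total)
```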
Re:Can't scsi support multiple disks? (Score:2)
Re:Can't scsi support multiple disks? (Score:2)
Re:Can't scsi support multiple disks? (Score:2)
IDE is nice, but not when you are talking about 5+ drives.
Re:Can't scsi support multiple disks? (Score:2)
If you're really looking at many devices on a controller, you'd be better advised to look into fibre channel or SSA (Serial Storage Architecture; normal on IBM, rare elsewhere). I'm not sure of the limits on FC, but SSA is >40 per loop IIRC.
Re:Can't scsi support multiple disks? (Score:2)
Re:Can't scsi support multiple disks? (Score:2)
really though, sometimes quality is just worth it.
Re:yep, I know this one... (Score:2)
Re:yep, I know this one... (Score:2, Funny)
Don't count out FireWire (Score:4, Informative)
Sure, USB 2.0 is about the same speed as FireWire, but FireWire hasn't been standing still - its next version calls for speeds of 800Mbps and 1.2Gbps. There are even plans for fiber and wireless based versions.
However, even more important is that FireWire is PEER based. A computer is not required to transfer video from one device to another. There's already a bunch of video equipment with FireWire support; camcorders as well as the PlayStation 2 (Sony calls it i.LINK instead of FireWire or IEEE 1394) come to mind.
While it might be possible to hack USB 2.0 for use without a computer, USB 2.0 wasn't designed for it. I suspect such a hack would be as successful as the "patched on security" we see in Windows.
Re:From experience (Score:2)
Re: USB 2.0 v. Firewire (Score:2)
Re:Apple realized that a long time ago (Score:2)
This was a dark day in Apple history, IMO. :( Apple seems to have done this in an effort to drive down costs, to try and compete better in the low-end PC market. As a result, while you can pick up an SE/30 with a still-functional original hard drive, you don't have to go far to find some iBook user who needed their drive replaced.
Re:Apple realized that a long time ago (Score:2)
Earlier. Large batches of early PPC Performas for the education sector went out outfitted with IDE. I have to deal with one such anti-computer from time to time (my significant half's machine) and it is a c*** of s***t. It also has SCSI, but the disk and the CD are connected to the IDE bus.
The G3 was the first Mac to have only IDE and no SCSI. Otherwise, Apple was quietly putting in IDE for a while before that.
Considering that most Apple cult followers do not check the hardware they had no problem doing this. And the machines still had the SCSI connector on the back for Apple branded external devices.
Re:Apple realized that a long time ago (Score:3, Informative)
Apple didn't stop using SCSI as standard equipment because of its speed. They used it in their Macs for YEARS because of better speeds than any drives of the time. Apple chose IDE later (when Jobs returned) for reasons of cost, just as PC makers do. Removing SCSI as standard brought down Mac prices by a few hundred dollars.
For general daily use, and because of recent advances in IDE, there was no advantage to using SCSI as standard any longer for Apple.
However, SCSI, particularly the LVD (SCSI-3) will SMOKE any hard drive interface today, which is why Apple still equips various SCSI configs on build-to-order workstations and their Server models.
FireWire (1394) is theoretically as fast as SCSI-3, but few people can afford a true FireWire drive with genuine FW controllers (earlier FW drives were some IDE or SCSI to FW translator or used slow drives on a FW interface).
Apple is overdue to upgrade their logic boards (motherboards) to the faster buses found in the best PC boards now, so there should be improvements in their performance for that platform in the coming months.
Re:external connections, length and number of cabl (Score:2)
Nope. That is HVD. LVD is less: 12m if you do not have SE devices on the bus. Even one SE device drops it further, down to 1.5m. Check the FAQs on http://www.cablemakers.com [cablemakers.com]. BTW: they are the only ones I found who supply HVD parts.
Hmm.. (Score:4, Informative)
Firewire is 400Mbps, which is 50 MBps. That's faster than Ultra2 SCSI, but slower than Wide Ultra2, Ultra3 and Ultra160/320 SCSI. Check out this [scsita.org] link for details. Firewire is still nice tech, and a fair bit smarter than USB2.0, but it's not the bandwidth king that SCSI is.
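The unit conversion behind that comparison is just bits to bytes; a quick sanity check over the interface speeds cited in this thread (peak signaling rates, not sustained throughput):

```python
def mbps_to_mbytes_per_sec(mbps):
    """Convert megabits per second to megabytes per second (8 bits/byte)."""
    return mbps / 8

# Ultra160 SCSI is specified in MB/s, so express it in Mbps for comparison.
for name, mbps in [("FireWire 400", 400),
                   ("USB 2.0", 480),
                   ("Ultra160 SCSI", 160 * 8)]:
    print(f"{name}: {mbps_to_mbytes_per_sec(mbps):.0f} MB/s")
```

Which puts FireWire at 50 MB/s, ahead of Ultra2 SCSI's 40 MB/s but well behind Ultra160, matching the parent comment's ranking.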