
The Book of SCSI, 2nd Edition

Craig Maloney contributes the review below of a book he says is long overdue -- the second edition of Gary Field's The Book of SCSI. It probably won't be long until someone is reviewing The Book of Firewire, but SCSI remains the most widespread standard for high-quality, reasonably priced storage. I know it's gotten better since the last time I struggled with termination issues and bad cables, but if you rely on SCSI every day, you may need this book.

The Book of SCSI, 2nd Edition: I/O for the New Millennium
Authors: Gary Field, Peter M. Ridge
Pages: 456
Publisher: No Starch Press
Rating: 7.5
Reviewer: Craig Maloney
ISBN: 1-886411-10-7
Summary: A one-stop resource for the SCSI protocol.

What's Good?

For those in a hurry, Appendix A (The All-Platform Technical Reference) is the entire book in a nutshell. I think Appendix A should be included with every SCSI card sold. It includes pin-out descriptions of the major and not-so-major SCSI interfaces, tables for bus timings, and a quick description of termination rules. The pages that surround Appendix A are also quite good.

The chapter on connecting devices to a PC talks at length about one of the more troubling aspects of SCSI: termination. Anyone who has had to troubleshoot SCSI installation problems will appreciate how thoroughly Chapter 6 deals with troubleshooting. (It even shows what a SCSI signal should look like on an oscilloscope.) Programmers will find a chapter on programming with ASPI, as well as protocol specifications for those looking for more low-level information. You'd be very hard pressed to find a more complete and readable treatment of the SCSI protocol than this book.
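
To give a flavor of that protocol-level material (a sketch of my own, not taken from the book): the 6-byte INQUIRY command descriptor block, the first thing most initiators send to a newly discovered target, can be assembled in a few lines.

    # Illustrative sketch (not from the book): a 6-byte SCSI INQUIRY CDB.
    # Opcode 0x12 is INQUIRY; the allocation length is the size of the reply
    # buffer the target may fill with vendor, model, and revision data.
    INQUIRY = 0x12

    def build_inquiry_cdb(alloc_len=96, evpd=False, page_code=0):
        """Return the six CDB bytes for a standard INQUIRY."""
        return bytes([
            INQUIRY,                 # byte 0: operation code
            0x01 if evpd else 0x00,  # byte 1: EVPD bit selects a vital product data page
            page_code,               # byte 2: VPD page code (0 when EVPD is clear)
            0x00,                    # byte 3: reserved in the original 6-byte form
            alloc_len & 0xFF,        # byte 4: allocation length
            0x00,                    # byte 5: control
        ])

    print(build_inquiry_cdb().hex())  # -> 120000006000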

What's Bad?

Unfortunately, completeness can lead to information overload. Novice users will find themselves at a disadvantage with the sheer amount of material presented.

When discussing how to set up a SCSI adapter, the book mentions the various PC busses from the earliest IBM PC to draft revisions of PCI and everything in between. Had I been a novice reader, I would have been overwhelmed by all the information about historical PC busses that are no longer in use. (When was the last time you used VLB or EISA?) In the interest of completeness, the authors also include a chart comparing these interfaces. I question whether this is really necessary. Some may also be put off by the hand-drawn diagrams in the earlier chapters.

On the CD

The CD includes items such as the SCSI FAQ, ASPI Development Files, ASPI tar, SCSI disk driver source for MSDOS, Western Digital SCSI Utilities, SCSITool, Postmark I/O benchmark source code, and Linux SCSI information. Of note, the CD also includes a PDF file of the entire book.

What's in it for me?

The Book of SCSI is definitely written by SCSI enthusiasts. In the early pages, the authors include a bit of SCSI poetry, and the CD includes a text file entitled "SCSI: A Game With Many Rules and No Rulebook?". This book reads with an excitement only an enthusiast can project. If you have ever been curious about SCSI, I encourage you to sit down and read the first few chapters of this book. If you are in a position to use SCSI components more than occasionally, I recommend you purchase this book and keep it on your reference shelf for those times when troubleshooting is necessary.

My biggest complaint? I wish the authors had written this book ten years ago. However, it is still a welcome addition to my library today.

  • Chapter Listing
  • Chapter 1: Welcome to SCSI
  • Chapter 1.5: A Cornucopia of SCSI Devices
  • Chapter 2: A Look at SCSI-3
  • Chapter 3: SCSI Anatomy
  • Chapter 4: Adding SCSI to Your PC
  • Chapter 5: How to Connect Your SCSI Hardware
  • Chapter 6: Troubleshooting Your SCSI Installation
  • Chapter 7: How the Bus Works
  • Chapter 8: Understanding Device Drivers
  • Chapter 9: Performance Tuning Your SCSI Subsystem
  • Chapter 10: RAID: Redundant Array of Independent Disks
  • Chapter 11: A Profile of ASPI Programming
  • Chapter 12: The Future of SCSI and Storage in General
  • Appendix A: All-Platform Technical Reference
  • Appendix B: PC Technical Reference
  • Appendix C: A Look at SCSI Test Equipment
  • Appendix D: ATA/IDE versus SCSI
  • Appendix E: A Small ASPI Demo Application
  • Glossary
  • Index


You can purchase this book at Fatbrain.

This discussion has been archived. No new comments can be posted.

  • by crumbz ( 41803 )
    Has been my favorite drive interface since ~1986. Once you get used to the quirks and setup nonsense, it is very reliable and FAST. Fuck IDE and that garbage; if you want multiple drives in your machine, this is the way to go.
    Ultra Fast Wide LVD SCSI-3!!!

    my 2 cents...
    • Unless money is an issue, of course.. IDE drives/controllers are dirt cheap.
      • Yes they are. And this annoys me greatly. Is it really that SCSI parts are twice as expensive to manufacture? I doubt it. It's more likely that they want to keep a large price difference between them so they can justify SCSI=better. Which it is.
        • Is it really that SCSI parts are twice as expensive to manufacture?

          I know that IBM manufactures the same hardware for IDE and SCSI. With the exception of a small part of the external circuitry (i.e., the connector, etc.) and the EEPROM contents, they are the same.

          Companies charge a larger markup on SCSI devices because they can. They're "not consumer" devices - they're more for specialists etc..

          Chicken and egg! I know the retail volume is lower, but if someone came out with a SCSI drive at an "IDE" price, I think it would change.

          Quick comparison from the local store:

          IDE: IBM Deskstar 60GXP 60.0GB UDMA100 7200rpm 8.5msec 2MB DM 430
          SCSI: IBM Ultrastar 36LP 36.9GB Ultra160 7200rpm 6.8msec 4MB DM 730
          SCSI: IBM Ultrastar 36LZX 36.9GB Ultra160 10000rpm 4.9msec 4MB DM 960

          It's not just the size! There's no question which is faster, and I know SCSI has less overhead/more bandwidth etc, but for most of us it's not worth it. I wonder why we don't see larger lower-speed SCSI drives. (Those two are the largest ones any shop around here has to offer.)

          -- Steve
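
          To put the quoted prices in per-gigabyte terms (a quick back-of-the-envelope script using only the figures listed above; prices in DM):

              # Rough price-per-gigabyte comparison of the drives quoted above.
              drives = [
                  ("IBM Deskstar 60GXP (IDE)",   60.0, 430),
                  ("IBM Ultrastar 36LP (SCSI)",  36.9, 730),
                  ("IBM Ultrastar 36LZX (SCSI)", 36.9, 960),
              ]
              for name, gigabytes, price_dm in drives:
                  print(f"{name}: {price_dm / gigabytes:.1f} DM/GB")
              # -> roughly 7 DM/GB for the IDE drive vs. 20-26 DM/GB for the SCSI drives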

        • SCSI drives are manufactured with a 2% error tolerance on the disk. That means that up to 2% of the disk can be bad sectors in the factory format, and it can still be shipped.

          IDE drives are manufactured with a 40% error tolerance - up to 40% of the disk can be bad and the drive can still be sold.

          This forces SCSI disks to be more reliable, simply by natural selection - disks not good enough to be SCSI can always be downgraded to IDE and sold on. Also, the manufacturing standards have to be much higher.

          Because these disks are so much better made and more reliable, it is safe to spin them faster and bash the heads about quicker. Hence the much higher performance of SCSI disks compared to IDE, even when the electronics inside may be the same.

  • by Monte ( 48723 ) on Thursday August 30, 2001 @11:24AM (#2234836)
    Which chapter has the instructions for sacrificing the goat?

  • Funny, I've always heard it pronounced "scuzzy". =P

    Perhaps there are even more ways? Feel free to reply with weird pronunciations you've heard.

    -Kasreyn
  • Termination (Score:2, Informative)

    What's so difficult about termination anyway? Terminate both ends of the bus, nothing else. If you have both internal and external devices, check if your controller uses the same or separate buses for them. That seems to be it, to me...
    • Re: (Score:3, Insightful)

      Comment removed based on user account deletion
      • Frankly, I think that termination problems with SCSI have had more to do with its demise in high-end consumer PCs than any other factor.

        Generally I don't see termination as being any stickier than the master/slave/solo jumpering you do with IDE... but then you occasionally run across some sticky little SOB that's determined to be a pain.

        How's this for termination hell: I bought (at a good price) a large (read: full height) 10G SCSI drive in its own external case with power supply et al. Great, except it was terminated, it wasn't going to be the last thing on my chain, and the only way to turn off the termination was to open the case and void the manufacturer's warranty.

        It's enough to make you go IDE...
        • How about this... contact the manufacturer.

          A few years ago I bought a Travan4 drive from APS. It was more expensive than if I bought it in a generic case but I liked their case design at the time (built-in active termination, small footprint, stackable).

          The problem is once it arrived I found out they had connected the SCSI ID jumpers oddly, so I could only do 0 or... 7 I think. As it turned out they'd reversed the grounds. I contacted them, explained what I thought was happening, and asked if it'd be OK to open the case and switch the jumpers around. They said it'd be okay, marked it in my customer record, and of course I dutifully saved the emails.

          The other option is to send it back to them so they can make the change, but usually if you're a stickler and make them pay shipping costs both ways they'll relent and let you do the work yourself.

          Though I think the biggest problem is the need for you to impress on them you're not a yokel who feels the need to stick screwdrivers into power supplies for no particular reason.
      • Re:Termination (Score:1, Informative)

        by Anonymous Coward
        I do not think that termination is a reason for the decline of SCSI on the consumer market. I think cost is. Most users do not understand the benefits of SCSI. Most users also do not understand that SCSI under Windows 9X/ME is simply useless, and find its performance lackluster because of the lack of OS support.

        Termination has, if anything, gotten simpler over the past few years. Wide devices other than mass storage are becoming rarer, and few devices other than tape still ship in narrow formats.

        Most current SCSI HBAs do an excellent job of auto-termination. With most new SCSI cables shipping with built-in multi-mode terminators, the job is even simpler. Removing terminators from LVD devices also simplified matters.

        There are some instances of designers incorrectly implementing auto-termination schemes and bus widths, but all in all, a good designer can get it right. The only complexity comes when people do not read the documentation. Serialized busses such as SATA and serial SCSI will make life even easier going forward, since every point-to-point link is terminated by design.
      • Some controllers and devices are incredibly picky about the quality of the termination, refusing to work reliably with anything but the best quality active termination.

        Well, obviously buying the cheap stuff gets you in trouble. But I would think you wouldn't buy SCSI at all if you're picky about the price.. (Unless it's for something like an A3000, which simply needs SCSI, but it works with seriously incorrect termination, so it's still not an issue.. (The A3000 SCSI controller is well known for being picky, but that's about the SCSI-protocol, not termination.))

        Then you have SCSI devices of varying bus widths, and you have to terminate one half of the data bus or the other.

        Well... I suppose my experience is limited to the above-mentioned A3000 (with an 8-bit bus) and server-type stuff where everything is nicely streamlined and just works. =) (Brand servers)

        Frankly, I think that termination problems with SCSI have had more to do with its demise in high-end consumer PCs than any other factor.

        Make that termination problems with incorrectly designed stuff and bad docs, not SCSI as such, and I might even agree.. =)

    • I find it extremely annoying that varying types of SCSI terminators with the same number of pins are not labeled by the OEM as HVD, LVD, etc... I have a zillion terminators here for normal SCSI devices and a few HVD for my tape libraries and it's impossible to determine (to my knowledge) which is which... ugh
    • Re:Termination (Score:3, Insightful)

      by singularity ( 2031 )
      And then you get into devices that attempt to have "built-in" termination. Or devices that, for whatever reason, have to be at the end of the chain (due to termination issues) and you need to attach two of them. Or your motherboard (at one end of the bus sometimes) is not providing decent termination.

      And that is not even getting into cable length.

      At one time I had four external SCSI devices attached to my computer. Placement of the four items would cause the chain to work or not. I am not talking about placement on the chain (which can definitely make the difference between working and not), but rather on my desk. If I moved the Zip drive too far to the right the chain would fail. If I tried to move the hard drive under the desk the chain would fail.

      Luckily I was running separate busses for internal and external.

      "There are very technical reason why you need to sacrifice a goat to get your SCSI chain working properly."

      As for the other post - I have always heard it as "scuzzy." I always thought that it was appropriate for how messed-up SCSI was. Of course, I would take SCSI over parallel and slow serial any day of the week.

      Now Firewire and USB... I still have too much invested in SCSI to go over just yet. Looks like good specs. Now if they would only keep USB as a low-speed powered bus and not try to get in over their heads I will be fine. I just want something I can attach keyboards, mice, and printers to. Having two separate busses makes sense (one slower and powered, the other faster). Yes, I understand that Firewire is powered.

      • As for the other post - I have always heard it as "scuzzy."

        Back in the very early days, when people could still remember SASI, there was actually a debate about whether SCSI should be "scuzzy" or "sexy". The former pronunciation prevailed, and I never thought it was a coincidence. ;-)

      • At one time I had four external SCSI devices attached to my computer. Placement of the four items would cause the chain to work or not. I am not talking about placement on the chain (which can definitely make the difference between working and not), but rather on my desk. If I moved the Zip drive too far to the right the chain would fail. If I tried to move the hard drive under the desk the chain would fail.


        Sounds like you had a bad connection somewhere. Of course, faulty hardware will give you nothing but trouble, no matter what you're running.

        FYI, SCSI doesn't care where a device is physically on the chain; it only cares about the device's ID. Just make sure the chain is properly terminated, you don't try to make two or more devices share an ID, and you keep slower devices and faster devices on separate chains.
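
        One quick way to see which IDs are actually in use is to read /proc/scsi/scsi on a Linux box (a minimal sketch, assuming the usual "Host: scsi0 Channel: 00 Id: 00 Lun: 00" line format that kernels of this era print):

            # List the SCSI IDs in use per host adapter/channel by parsing /proc/scsi/scsi.
            import re
            from collections import defaultdict

            LINE = re.compile(r"Host:\s*scsi(\d+)\s+Channel:\s*(\d+)\s+Id:\s*(\d+)\s+Lun:\s*(\d+)")
            ids_in_use = defaultdict(set)

            with open("/proc/scsi/scsi") as f:
                for line in f:
                    match = LINE.search(line)
                    if match:
                        host, channel, scsi_id, _lun = map(int, match.groups())
                        ids_in_use[(host, channel)].add(scsi_id)

            for (host, channel), ids in sorted(ids_in_use.items()):
                print(f"scsi{host} channel {channel}: IDs in use {sorted(ids)}")
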
    • It sounds simple, but keep in mind that there are multiple *types* of termination, and different versions of the SCSI standard require different types of termination. It's easy to get confused, and I found the quick-reference table to be invaluable.
    • Then you have the fact that there are 8- and 16-bit versions of almost every SCSI protocol, and most controllers seem to be very picky about what kind of termination you need, depending on the drive.

      I've seen controllers that don't seem to like Seagate and IBM drives on the same bus.

      And heaven help you if for some reason you need to put SCSI-2 narrow devices alongside wide devices.
    • If only that were true, if only that were true. Apparently you have never tried to terminate the SCSI chain on old Macs. I have had to deal with chains that work fine on an old Mac clone while the same chain would refuse to work on an Apple Mac. Also, Apple put crap terminators in PowerBooks that were insufficient for any more than one external device. Any more than that and you were asking for trouble. Doing so was an ideal time to sacrifice a goat to your favorite deity/demonic entity.

      There is also the whole terrifying experience of the O'Hare SCSI controller. I guess Apple was thinking: "Let's package IDE, serial, and SCSI into one ASIC for our new laptop, and make the SCSI part of the ASIC just like the old MESH SCSI chip." Now that was a very bad idea. First off, Apple had only designed a barely passable Fast SCSI controller that hung rather ridiculously off of the I/O controller on its own bus instead of off of the PCI bus. Additionally, the MESH controller is picky about termination even when used internally. On top of that, the MESH controller on O'Hare is run at Slow SCSI speeds. Just to throw into the mix, there is a hardware-timing bug that shows up with some scanners.

      Next, some Einstein decided to share the love and put the O'Hare controller in desktop computers and leave them unterminated on the motherboard, but put a terminator right after the internal SCSI CD-ROM. Yes, some of these machines had a SCSI CD-ROM drive, but an IDE internal hard drive. What was really scary was that there were some with both an IDE CD-ROM and hard drive, which was achieved by making them both master drives on the same bus. These machines are actually more stable when you put a terminator on the external SCSI port regardless of whether or not there are any external devices connected.
  • I have to deal with VLB every time I have to service an old, but usable, 486 machine.
  • OK, every time I say "IDE seems good enough for me", there is a SCSI enthusiast who cringes in horror. Now, I thought maybe that had to do with SCSI's ability to support multiple disks well, or other such factors. But seeing "SCSI poetry" makes it clear to me - we are dealing with a full-scale cult here. Hide the wimmin and chillin, folks.
    • Well, common sense will tell you that if you only need 1 or 2 HDD's and 1 or 2 CD/DVD's then IDE is the way to go if you are the least little bit concerned about price. You get something like 90% of the speed and 90% of the reliability in a machine that will serve you well and cost possibly thousands less than a SCSI setup.

      Even if you need another 9.9% reliability, IDE RAIDs are becoming more and more common.

      Now, if you're doing 'mission critical' stuff (I hate that term.) you'll know that you'll get that extra reliability and speed, but you'll pay through the ass for it.

      Price versus quality, folks.
  • I love SCSI! (Score:4, Informative)

    by aussersterne ( 212916 ) on Thursday August 30, 2001 @11:46AM (#2234934) Homepage
    IDE is clumsy and slow compared to SCSI when you start to get many devices in the same machine.

    I have a 3-channel LVD SCSI controller in my video system and it's talking to devices of all vintages:

    1) Three 18.2GB Barracuda LVD drives in a RAID-0.
    2) Four 9.1GB Micropolis UW drives in a RAID-0.
    3) 8x CD-R (not CD-RW) drive.
    4) Brand new DVD-R drive (whoopee!)
    5) Two 1.3GB 5.25" Magneto-Optical drives.
    6) 7/14GB 8mm tape drive.
    7) 12/24GB 4mm tape drive.
    8) Very old (but needed) Archive 2150S (QIC-150).
    9) 100 MB Zip drive.
    10) 300 DPI scanner (for rough stuff).
    11) 1200 DPI scanner (for more important stuff).

    The system lives in a server case with dual 450W power supplies, so of these devices, only the two optical drives and the two scanners are external. There are only three cables inside the case for the lot. Theoretically, there are 28 more SCSI IDs available for use.
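
    A quick sanity check on that "28 more IDs" figure (assuming all three channels are wide, so 16 IDs each, and that each channel's host adapter occupies one ID for itself):

        # Back-of-the-envelope check of the free-SCSI-ID count claimed above.
        channels = 3
        ids_per_wide_channel = 16
        host_adapter_ids = channels  # the HBA takes one ID per channel
        attached_devices = 3 + 4 + 1 + 1 + 2 + 1 + 1 + 1 + 1 + 1 + 1  # the 17 devices listed

        free_ids = channels * ids_per_wide_channel - host_adapter_ids - attached_devices
        print(free_ids)  # -> 28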

    Now, the nice thing about this is that I can have damn near all of them running at the same time without any appreciable slowdown -- something that never happens on my "play" system with IDE drives.

    On my IDE system, I've got two hard drives, a CD-RW and an IDE tape, and the IDE channels often seem to slow each other down and fight for control when I start to burn, backup, and do lots of disk I/O at the same time. I've been told that this is because a single IDE interface doesn't do concurrent access to both drives.

    Either way, I love using the SCSI system. It's an I/O monster. And I love being able to just hang whatever kind of device I need to use off of the external connector and know with reasonable certainty that Linux will support it. Long live SCSI.
    • WOW you are in for it with that Micrapolis RAID. Those drives are destined to fail! Take a look at the PC board and let me know if there are little green wires running between pins on surface-mount ICs. Every Micrapolis UW drive I bought was missing traces on the PCB, fixed apparently by enslaved child labor with tiny soldering irons. Anyway none of mine lasted more than 1 month.

      I see you don't have any SCSI printers. Hah!

      Anyway the greatest benefit of SCSI is that you get to put all your peripheral devices right on your desk. Nothing is quite like a stack of disk drives sitting next to your monitor. Handy for CD-Rs as well. At least we can still keep stuff on the desk with ieee-1394!

    • What's the 150MB QIC drive for?
    • Re:I love SCSI! (Score:3, Insightful)

      by Howie ( 4244 )
      Did you get free ear defenders with that system?
    • 1) 4 25gb IDE drives (it'd actually be more cost effective to get 60gb drives @ $150 each, but I digress)
      3) 8x CDR
      4) DVD-R drive
      5) Zip drive
      6) Tape drive (why you use two I dunno...)
      7) odd external SCSI devices

      Most new mobos have 4 IDE channels on them, 2 of which can be dedicated to a RAID config

      In channels 1 & 2 install your 40gb drives in a raid-0 config
      In channel 3 place your CDR & Zip drive
      In channel 4 place your DVDR drive & Tape drive

      Get a cheap SCSI card to hook up your scanners and other external devices (IDE can't really be used with external stuff :)).

      This isn't as fast as your configuration, but it will be close. Additionally, you get more storage space and it costs a hell of a lot less.

      If that extra performance means that much to you, and it's worth the extra cost -- that's great, but if cost ever enters the equation IDE setups can come close to that of a decent SCSI setup.
    • IDE is clumsy and slow compared to SCSI when you start to get many devices in the same machine.

      This is true. IDE isn't designed to support a large number of devices. Then again, do you really need a large number of devices? Your system looks like a lot of old parts kludged together, and a lot of the parts look redundant to the point of being useless.

      1) Three 18.2GB Barracuda LVD drives in a RAID-0.
      2) Four 9.1GB Micropolis UW drives in a RAID-0.

      Replace these old drives with one or two newer high-capacity drives. Select IDE or SCSI as your needs require, but if you don't end up needing a lot of devices, IDE is likely fine, and much cheaper. You obviously aren't really concerned about losing some of the data on these since they're RAID 0. I realize you can back up the really important data on one of your 8 different methods of data backup (9 if the system has a floppy), but if you lose a drive, you've lost all the data on that RAID set.
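
      To put a number on that RAID-0 point (a minimal sketch; the per-drive failure probability is made up purely for illustration, and drives are assumed to fail independently):

          # With striping and no redundancy, the set survives only if every drive does.
          def raid0_loss_probability(p_single_drive: float, n_drives: int) -> float:
              return 1 - (1 - p_single_drive) ** n_drives

          p = 0.03  # hypothetical 3% chance that one drive fails in a given year
          for n in (1, 3, 4, 7):
              print(f"{n} drive(s): {raid0_loss_probability(p, n):.1%} chance of losing the whole set")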

      3) 8x CD-R (not CD-RW) drive.
      4) Brand new DVD-R drive (whoopee!)

      Do you really still need the CD-R? If you're making CD-to-CD copies I guess this could be useful. Does your DVD-R write CDs as well, or just DVDs?

      5) Two 1.3GB 5.25" Magneto-Optical drives.
      Are you using these to make disk-to-disk copies, or are they just parts you used to need but don't feel like throwing out?

      6) 7/14GB 8mm tape drive.
      7) 12/24GB 4mm tape drive.
      8) Very old (but needed) Archive 2150S (QIC-150).

      Ok, I'm getting a picture of a system where you've used too many different formats in the past to backup data, and are now paying the price to have access to that data. Sooner or later as some of those tape drives start to fail it's going to come back and haunt you.

      9) 100 MB Zip drive.
      Why not? You've got everything else. Why not add a compact flash reader too?

      10) 300 DPI scanner (for rough stuff).
      11) 1200 DPI scanner (for more important stuff).

      The 300 DPI scanner has to be both old and slow. Just use the 1200 DPI one. Quit being such a pack rat and get rid of the old junk.

      You might even be able to pay for a new system with big IDE drives with the savings on your electric bill. This monster system you have right now must be a power hog.

      On my IDE system, I've got two hard drives, a CD-RW and an IDE tape, and the IDE channels often seem to slow each other down and fight for control when I start to burn, backup, and do lots of disk I/O at the same time. I've been told that this is because a single IDE interface doesn't do concurrent access to both drives.

      Newer CD-RWs have firmware and drivers that do an excellent job of hiding this. Busmastering IDE drivers for your motherboard also help a lot. IDE implemented poorly is far from a high performance system. But when it's implemented well, it can challenge SCSI in many (but not all) cases for small systems.

      Either way, I love using the SCSI system. It's an I/O monster. And I love being able to just hang whatever kind of device I need to use off of the external connector and know with reasonable certainty that Linux will support it. Long live SCSI.

      I suspect SCSI will live a long life yet, though it's a high end product, and Fibre Channel may squeeze it out from the high end eventually because of SCSI's reliance on parallel cables.

      For the low end a system with IDE and USB 2.0 would do quite nicely. All the recordable media types could easily hook up to USB 2.0 along with the scanner. They would also be easily used on another machine if necessary, which would reduce costs, and provide an alternative if this non-fault-tolerant system goes down. They could also be swapped in and out hot, which would reduce downtime. You could use firewire instead of USB 2.0, but I haven't seen a lot of firewire devices. I don't recommend USB 1.x because it's just too slow for things like DVD writers, and even 1200 DPI scanners. USB 2.0 devices are just coming out, so this isn't a real practical solution yet.
      • You're right, I've got a lot of old data but mine is all on DDS -- the rest is people bringing me still image or video data on these formats (i.e. Zip, QIC-6150, 8mm!) I've got to get a 250 Zip to replace the 100 Zip. The QIC drive rarely gets used these days (though I did have someone last Friday) but Zip and 8mm are still heavily used.

        The CD-R drive is still there because I burn at least a hundred a week and I don't want to kill the new DVD-R (*grrrrr*) with that workload.

        The two optical drives are connected because I'm working with someone using a large set of data which has been stored over the years that way: for each disc containing database text, there is a second, matching disc containing the related image data. To get at the stuff seamlessly, both must be mounted!

        The scanners are both used very heavily so I'm not ready to do away with either one yet. The 300dpi unit is actually much faster at 300dpi (it "starts" a scan quicker, if that means anything). The drives could (admittedly) be replaced by a newer drive configuration (I have a pair of 75GB SCSI waiting to replace all), but I keep putting it off and depending on my DAT24 backups because the move will be a pain and will shut me down (by completely occupying my attention and waiting for everything to copy) for a day or two while I make the switch.

        I do have a CF/SmartMedia reader because I get digital camera stuff in here all the time, too. Unfortunately, it's on the parallel port. :P
        • Wow, it sounds like you actually have a use for all that stuff. It sounds strange to me to have it all on one system. It seems like keeping it all working would take a significant amount of your time. It also leaves you with one point of failure for all those different tasks. Maybe in your case this all makes sense.

          The drives could (admittedly) be replaced by a newer drive configuration (I have a pair of 75GB SCSI waiting to replace all), but I keep putting it off and depending on my DAT24 backups because the move will be a pain and will shut me down (by completely occupying my attention and waiting for everything to copy) for a day or two while I make the switch.

          I can imagine that if I got all that equipment working well together, I wouldn't want to play with it any more than I had to either. "If it works, don't screw with it" is a good rule to live by in many cases. If you decide you don't need those 75 GB SCSI drives sitting around taking up space, let me know.
  • by jurgen ( 14843 ) on Thursday August 30, 2001 @11:48AM (#2234948)
    ..you're behind the times. Fiberchannel, firewire, and yes, IDE, have made SCSI obsolete. IDE made SCSI obsolete? Heresy! So I would have said myself only a couple of years ago, but today the cost/benefit ratio puts me firmly behind IDE for anything on the low end... and on the high end, let's give SCSI a well deserved retirement, with fanfare and honors, and replace it with more modern stuff, please.

    On the low end, the cost difference between IDE and SCSI has been increasing (i.e. prices for IDE drop faster than SCSI) and IDE has also been getting better, to the point where the benefits of SCSI simply aren't enough anymore. IDE drives have gotten smarter, too, making up for some of the performance and reliability differences. If you want a high-performance, cost-effective, "low-end" RAID solution, look to, e.g., 3Ware, which makes some absolutely superb RAID cards for IDE drives... even though it needs an IDE controller dedicated to each drive, it's still cheaper than a comparable SCSI solution, even before factoring in the cost of the drives! And it performs at least as well.

    As to the high end... Fiberchannel is a step forward, but not enough. Forget all these special purpose buses anyway... my suggestion would be to put a gigabit ethernet interface and an IP stack directly in the drive. In fact, I hear that people are doing exactly that and using something called "SCSI over IP", which sounds like an interesting idea but probably not optimal. Better to run something like GFS directly on the drive.

    In other words, my objection to SCSI is: not enough brains per drive! On the low end this can be accomplished with fewer drives per brain... instead of huge RAID arrays with one smart control node (like NetApps, etc), use lots of PCs with small IDE RAIDs... call it RAIIS (redundant array of independent inexpensive servers) if you will. Fewer drives per brain means more brains per drive. On the high end take this to its logical extreme... one drive per brain, a full computer in each drive, each drive a full node on the network.

    Either way SCSI is not the answer.

    -j

    • The project's home is here [alphanet.ch] but hasn't been updated in a lonnnng time.

      Basically, take 2 computers with a scsi card in each, and use a scsi cable to connect the two machines. I don't know how this solution compares to myrinet [myri.com] or gigabit ethernet in terms of performance, but the idea is a nice one.

    • ".you're behind the times. Fiberchannel, firewire, and yes, IDE, have made SCSI obsolete. IDE made SCSI obsolete? ... on the high end, let's give SCSI a well deserved retirement, with fanfare and honors, and replace it with more modern stuff, please..."

      I'm sorry you feel this way. I could not disagree more vehemently. You appear to be basing your statement solely on "price/performance" points rather than hard technical facts. For my part, I've been using SCSI exclusively, in every system I have, since 1990 and I have not the slightest regret about it.

      I will grant that SCSI is not for everyone, but take it in context. It was never DESIGNED to suit the demands of Joe/Jane Consumer. It was designed to be a versatile and (relatively) simple-to-use I/O bus for just about any type of computer system or data processing device. With versatility and power comes complexity; It's as unavoidable as breathing, and it has always been true that SCSI requires a little more in the way of technical know-how to take full advantage of.

      No matter how many "enhancements" are kludged into it, IDE was still never designed, from the GROUND UP, to be a multi-device, multi-tasking I/O system. Where else can you find a system where, if you have two drives, the second one is almost entirely dependent on the electronics of the first to do its job while its own onboard electronics go largely unused?

      Compare that with a SCSI bus where every device, if properly designed, has the smarts to become an initiator or a target, and where such devices can do direct transfers to/from each other without intervention from the system CPU. Given that, and especially comparing it with IDE's truly brain-dead interface (IDE is an interface, NOT a true bus), I don't see how you can possibly come to the conclusion that SCSI devices don't have "enough brains per drive."

      SCSI has been around, in one form or another, since at least 1982. It has been, and continues to be, used on everything from PC's to mainframes. As for your "Price/Performance" points, I would say that the used/surplus market can easily undercut what little advantage IDE may have in this area.

      SCSI is indeed an excellent answer for many applications. You don't have to take my word for it: I think the mountain of equipment Out There that uses it, and how long said equipment has been around, AND the fact that ANSI continues to develop the spec, shows that SCSI has stood the test of time, and will continue to do so.

      I'm sorry if this upsets you. The complaint department is upstairs, third door on the right.
      • I'd mod you up if I could.
      • Fibre channel does rock...and is EXPENSIVE. But, oh, the scalability, performance and sheer sexiness of 24 x 36G fibre channel drives with 8 servers (multi platform) over 2Gb fibre... I may need to excuse myself.... heh.
      • SCSI is still the best way to go. Drives are still somewhat expensive (I bought a 19GB IBM drive for $150, while a 40GB IBM EIDE cost only $110). But the SCSI drive is better - it has a 4MB cache, and 4.5ms access. The EIDE drive is only 1MB cache with 8.5ms access. Still, a 19GB drive for $150 is still cheap, just not dirt cheap.

        Next, SCSI has progressed - it is now at 320MB/s. IDE and Firewire are stuck at 100MB/s.

        If you have a simple pc (floppy, CD-ROM, and HD), by all means go IDE. But don't put anything else in - if you want a second drive, then you have to put it with the first drive, and only one drive is active at any one time - slowing drive-drive copies or moves. Put it with the CD-ROM, then the second drive bites whenever the CD is accessed. Or, add a CD-RW. Now where? Put it with the HD? No, because you'll make a lot more coasters when burning from the drive. So, put it with the CD-ROM, and so any CD-CD copies need to make a stop-over on the HD. SCSI doesn't have these problems.
    • Um, SCSI is still invaluable as a protocol for addressing block storage. I realize the lower-end user doesn't care as long as the drive will format with NTFS and store his MP3s, but we who use Fibre Channel still talk to drives using SCSI semantics.

      "GFS on the drive"? Dude, we're talking about the drives here, put whatever embedded widget in front of them that you want, but at some point the block-addressed device will exist behind it.

      BTW, When we're talking SCSI, we're talking about aggregating storage devices here, and that goes _behind_ the filesystem (at least, given today's meaning of the word)....
    • by Salamander ( 33735 ) <jeff AT pl DOT atyp DOT us> on Thursday August 30, 2001 @02:38PM (#2235762) Homepage Journal
      As to the high end... Fiberchannel is a step forward, but not enough. Forget all these special purpose buses anyway... my suggestion would be to put a gigabit ethernet interface and an IP stack directly in the drive.

      IP is a poor match for storage needs, IMO. TCP in particular was designed - and designed rather well - for the high-latency small-packet environment of the Internet, but storage is a low-latency large-packet world. It's also a world where the hardware must cooperate in ensuring a high level of data integrity, where robust and efficient buffer management is critical, etc. etc. etc. Even on cost, the equation does not clearly favor storage over IP. Sure, you get to use all of your familiar IP networking gear, but it will need to be upgraded to support various storage-related features already present in FC gear. Even on the controller end, do you really think a GigE interface plus an embedded IP stack is easier or cheaper to incorporate into a controller design than FC? I could go on, but I hope you get the point. "One size fits all" is a bankrupt philosophy. Let IP continue to be designed to suit traditional-networking needs, and for storage use something designed to suit storage needs.

      Better to run something like GFS directly on the drive.

      No, not better at all. Who wants the drive to be a bottleneck or SPOF? The whole point of something like GFS is to avoid those problems via distribution. Putting an IP stack on the drive is bad enough, and now you want to put a multiple-accessor filesystem on it? Dream on. People used to put things like networking stacks and filesystems on separate devices, because main processors were so wimpy, but they stopped doing that more than a decade ago. For a reason.

      huge RAID arrays with one smart control node (like NetApps, etc)

      NetApp doesn't make disk arrays. If you look at the people who do make high-end disk arrays, you'll see that they have far more than one brain. A big EMC, IBM, or Hitachi disk array is actually a very powerful multiprocessing computer in its own right, that just happens to be dedicated to the task of handling storage.

      one drive per brain, a full computer in each drive, each drive a full node on the network

      ...at which point you're back to distributed systems as they exist today, wondering how to connect each of those single brains to its single drive with a non-proprietary interface. Going around in circles like that doesn't seem very productive to me.

    • by Anonymous Coward
      SCSI is very much the answer for quite a few market spaces. While IDE does cost less, the performance gap between IDE and SCSI has grown, not shrunk. Simply look at the latest versions of SCSI and see how they compare to IDE.

      Disks, available now
      40 MB/s media rate
      15,000 RPM
      158MB/s cache rates
      16+MB cache sizes
      21,000 IOP/s
      Extremely high MTBF

      Available in next 6 months
      60 MB/s per disk media rate
      22,000 RPM
      300 MB/s cache rates
      64 MB cache sizes
      40,000 IOP/s
      Extremely high MTBF

      Now look at top end IDE disks
      30 MB/s disk media rate
      7,200 RPM
      70 MB/s cache rates
      4MB caches
      6000 IOP/s

      6 months from now, assuming SATA
      50 MB/s media rate
      10,000 RPM
      120 MB/s cache rates
      8 MB caches
      8,000 IOP/s

      There is a staggering difference in the performance of the drives, not to mention the controllers: Ultra-320 has nearly 4 times the IOP performance and over 3 times the media rate of comparable SATA drives. When you factor in the fact that SCSI latencies are about a third of IDE's, performance is substantially better across the board. If you want to save money, go IDE. But if you want performance, SCSI is still the only choice. FC is not a viable cost alternative for internal connection, and only competes today with SCSI for external mass storage attach, where it excels in multi-disk configurations. There is no effective method for doing IDE externally.
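
      Translating a couple of the quoted figures into wall-clock terms (simple arithmetic on the numbers above; the workload sizes are arbitrary):

          # Time to stream a 10 GB file and to service a million small random I/Os,
          # using the "available now" media rates and IOP/s quoted above.
          drives = {
              "current SCSI": {"media_mb_s": 40, "iops": 21_000},
              "current IDE":  {"media_mb_s": 30, "iops": 6_000},
          }
          sequential_mb = 10_000
          random_ios = 1_000_000

          for name, d in drives.items():
              print(f"{name}: {sequential_mb / d['media_mb_s']:.0f} s sequential, "
                    f"{random_ios / d['iops']:.0f} s for the random workload")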

      As for iSCSI: iSCSI will not compete directly with SCSI in any market. The cost disadvantage of iSCSI on the disks is huge. It makes more sense for JBOD, RAID, and switch vendors to go the last meter using IDE or SCSI, since GbE has horrible performance with iSCSI compared even to ATA 66.
    • I tend to agree. I've had just as many failures with SCSI as IDE lately. I just got an IBM DMVS 18 gig drive back (from RMA - this same drive has been replaced twice already), and now it's no longer large enough considering the space (and the drive size) in the drive cage. One thing for it is that it is brutally fast - I think the last time I benchmarked it, it did over 35 megs per second sustained.


      Anyhoo, it got replaced with a bunch of $89 40 gig Seagate IDE drives - which have so far performed just as well as the IBM drives and seem to be just as reliable. And on top of that they seem to run an awful lot cooler.


    • In fact, I hear that people are doing exactly that and using something called "SCSI over IP", which sounds like an interesting idea but probably not optimal

      I've been mulling over the pros and cons of NAS vs SAN lately, as our environment is moving to FC-AL SANs connecting our servers, but 1000 BSX for the desktop LAN.


      Just today, though, I caught notice of this iSCSI site [iscsistorage.com], which looks kind of interesting.


      I thought GFS looked pretty good, but wondered why, for example, Coda had achieved greater buy-in from the Linux crowd.

    • The problem with the fibre channel argument is that most of these systems still run SCSI over fibre or some variant thereof. I work on uber-high end systems day in and day out (Sun Enterprise systems, Compaq 4+ processor servers) and EVERYTHING that I've seen always comes back to SCSI. IDE is nice, but in reality your "not enough brains" comment regarding the SCSI disk controller vs IDE/ATA is moot, since the drives are the bottleneck. Seeking to the correct cylinder and retrieving data takes much longer than the SCSI bus switching from drive 0 to drive 14, back to 0, off to 2, back to 14 etc etc etc.

      Gigabit Fibre w/IP -> Drive = Bad Idea.

      To use IP, you have to fragment the data, create checksums, encapsulate the data, then find where you're going to transfer it to, wait for the "IP-BUS" to become available, then transmit at a hardware layer (after potentially doing a DNS lookup and arp/rarp request), have some sort of transmission acceptance and queuing (TCP vs UDP), de-encapsulate, check the checksums, defragment the data and utilize it.

      That's a lot of overhead for something that SCSI does in much fewer clock-ticks.
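
      For a rough sense of the per-packet cost being described (a sketch using textbook header sizes for TCP/IP over standard 1500-byte Ethernet frames; no jumbo frames, TCP options, or hardware offload assumed):

          # Header bytes added to one 64 KB disk transfer carried over TCP/IP/Ethernet.
          ETH_HEADER, ETH_FCS = 14, 4
          IP_HEADER, TCP_HEADER = 20, 20
          MTU = 1500
          payload_per_frame = MTU - IP_HEADER - TCP_HEADER  # 1460 data bytes per frame

          transfer_bytes = 64 * 1024
          frames = -(-transfer_bytes // payload_per_frame)  # ceiling division -> 45 frames
          header_bytes = frames * (ETH_HEADER + ETH_FCS + IP_HEADER + TCP_HEADER)
          print(f"{frames} frames, {header_bytes} header bytes "
                f"({header_bytes / transfer_bytes:.1%} overhead), plus a checksum and "
                f"interrupt per frame")
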
  • Anyone care to comment on the 3ware parts? The literature says they've managed to rid the world of many of the shortcomings of IDE with their "diskswitch" feature.

    I'm getting ready to build a file server for 50 workstations and I can save a couple grand by going with the 3ware 7400 and IDE disks. Not to mention the MB/$ ratio.

    -sid
    • by Anonymous Coward
      Hi Sid:

      I'm in the process of having a file server built for myself using similar technology. It is not built (and thus is not in my hot little hands) so I cannot speak from experience. You might be interested in the data at Storage Review [storagereview.com]. Although Storage Review focuses on timings under a Microsoft O/S, the IOMeter measures are interesting, and they have a nice database of measures that allows you to query for a comparison.

      One interesting note is that 3Ware's 7400 series appears (according to their analysis) to be weak at Raid 5 performance (I've decided not to go Raid 5 so it is not currently an issue for me). If you need Raid 5, you might want to consider an Adaptec 2400 series which allows you to plug in extra cache memory on the card for write buffering.

      The FreeBSD mailing lists have recently had some tales of woe for a Raid install. One speculation is that the IDE drives don't have staggered spin up like their SCSI counterparts, so if you have a large number of drives, you may need extra power to get the system to startup reliably (get a redundant or high capacity supply and offload some drives perhaps).
    • Yes, they are very nice. I got a couple of the 4-way (6410) cards for testing and will get more. As drives are cheaper you can do things you wouldn't do with SCSI, like use RAID 1 instead of RAID 5 and get much better performance too. The only annoying thing is that the monitoring tools are not open source and are web based: give me a command line version please! They are supported by recent installers, and appear as a SCSI drive, so you can just disable your motherboard IDE.

      They are supposed to do hot swap too, but I haven't tried it yet.
    • I've had good luck with 3ware's 4 channel RAID card - it's hardware RAID so the drivers are lightweight. It's plunking away in a FreeBSD box and is really fast with FreeBSD soft updates.
    • My company sells/supports these cards and as such we use them in our own servers.

      We are using the 3ware 6000 series cards for our Win2k Domain controller, our Lotus Domino mail/app server, and our database server (Linux/PostgreSQL).

      These servers support 50 users and performance is very good. So as SysAdmin I am very happy with the 3ware cards.

  • Paying $200 for 34gb of storage isn't what I would consider reasonable. Especially considering I can buy an IDE drive, with similar performance features and twice the storage space, for $150.
    • People buy SCSI drives for their performance. It is true that you can buy a SCSI device and an IDE device with equal performance. But, the highest performance SCSI device is a lot faster than the highest performance IDE device. In fact, a high-end SCSI device from 5 years ago is likely to beat any IDE device you can buy today.

      For example, the tired old Seagate Cheetah 4LP, introduced in 1996, is still faster than the fastest IDE disk you can buy today, the WD800BB. The Cheetah delivers 50% more performance in the IOMeter file server benchmark (2.21 MB/s vs. 1.40 MB/s), responding on average 700ms before the WD does.

    • That would be cheap - for a reasonably fast SCSI drive I paid $247 for an 18 gig IBM drive the other month.
  • The reviewer comments that there may actually be too much information in the book, and that newcomers to the subject of SCSI may get lost. My response is that the book itself was never really written for beginners; It was written, IMO, as a technical reference for folks who are in the range between decently hardware-literate (able to build a system without too much trouble) and engineering technician. Witness the oscilloscope examples. How many would-be SCSI users in the Joe/Jane Consumer arena have even seen an O-scope, let alone could guess how to use one or what they're useful for?

    Speaking as a second-year EE student, and as someone who's spent 20+ years doing hands-on with all kinds of electronics, the book came as a very welcome reference for me. I would not, however, recommend it for someone who just wants to find out enough about SCSI just to make use of it. For that, I would suggest http://www.scsifaq.org

    I would suggest to the reviewer to place a book in context before writing said review. It just plain looks better in print.

  • ``When was the last time you used VLB or EISA?''


    I'll bet that there's still quite a few EISA systems alive and kicking out there (maybe hidden behind some drywall :-) ).

    I had one on the home network up until just last month. It was, alas, decommissioned after ten years of service and replaced with a PIII/733. Originally purchased with an Adaptec 1740 adapter (later switched to a 2740) and 420MB of disk space (later up to 12GB) to run Coherent and SVR4.2, it ran various flavors of Linux (mostly Slackware and RedHat) beginning in 1996. If it weren't for what appeared to be problems developing with the memory (hard to find that old stuff) it'd probably still be performing some useful function on the home network. (I haven't tossed it yet, so there's still that possibility.)

    Cheers...

    • > maybe hidden behind some drywall

      Ooooh... The Black Cat [online-literature.com] of computing...

    • I still have both running...

      • ALR VEISA 433 - The hard disk died on it years ago, but it runs as a TV-SVGA converter so I can watch DVDs on the projector from the office. Right now it boots off of a 360KB 5-1/4" Floppy to MS-DOS 6.0 and auto-runs the TSR.
      • VLB - This is my pride and joy:
        • 486 DX 50 MHz (NOT dx2, DX!)
        • VLB runs at 50 MHz native as opposed to 33 MHz on DX2 and DX4 systems
        • 40 MB, 70ns, 30 pin SIMMS
        • Promise Ultra IDE controller with 16MB disk cache onboard
        • ATI VGAWonderXL (with the passthru header)


      The VLB system still runs as a netware print server, happily chugging away after ~7 years.
    • I use my IBM Model 8595 PS/2 for many things including Firewall, HTTP Proxy, NFS & Samba, DNS, VPN, DHCP, SMTP & IMAP, SSH, etc. It has ~27 GB of SCSI disks inside and outside of the frame. Not bad for a 486-50 with 64MB RAM.

  • I purchased this book before it was published and promptly read it from cover to cover when I received it. Using that knowledge, I was able to help an out-of-state friend fix his system. At the time, he could connect his scanner or his CD-RW drive, but not both at the same time. The problem turned out to be that the scanner had a single-ended, 25-pin Mac-style connector and was messing up the rest of the system. Once we configured his host adapter correctly, and got the scanner connected to the end of the bus with a short cable and appropriate terminators, his problems were fixed.

    The path of SCSI standards is convoluted. And this book does an extremely good job of sorting through it all and presenting it in an understandable manner.

    Highly recommended.

    -- Chad

    • I bought this book about a year ago, & also read it cover-to-cover. It is good in explaining the hardware issues with SCSI, but it has a major oversight.

      When it talked about Operating Systems, & SCSI programming, it was extremely Wintel centric.

      The point of my criticism is not that Field, et alia, devoted room to getting SCSI to work with Windows 95, NT & 2000, but that they kept in a number of pages from the first edition that talked about SCSI & DOS. (Who is going to lay out several hundred dollars in hardware then run an antiquated OS with it?) This wouldn't be so irritating if it weren't for how little space they devoted to UNIX-like systems -- less than five pages in total, which amounted to saying "there are issues, & learn what they are by talking to your OS vendor."

      The authors devoted an entire chapter to writing SCSI drivers under Windows using one vendor's SDK, but failed to even mention that one could study how to code for UNIX by looking at *BSD or Linux code -- that was available for study to all.

      And as pathetic as the UNIX coverage was, Mac SCSI users received only a pair of by-the-way mentions in the text. And the hardware discussions focussed on common, Intel-based systems; for instance, there is no mention of the Mac 25-pin SCSI cable. Perhaps a beginning SysAdmin could use Appendix A to troubleshoot her/his Sparc, PowerPC, or Alpha systems, but I would recommend Evi Nemeth et alia's _Unix System Administration Handbook_ as the first reference to turn to. Nemeth's book covers many of the same hardware issues in less space, & in a far more hardware-agnostic manner.

      And the material on the CD, although Linux-oriented, is out of date -- as a simple "ls -l" will show.

      There are strengths in this book, but the weaknesses in it bothered me far more. I hope in the next edition much of the DOS-related stuff is flushed out, & far more useful UNIX-related information is included. And that would make it a definite buy for any computer nerd's library, instead of a strong maybe.

      Geoff
  • i once had one external scsi drive that needed a terminator if it sat on one side of the computer but not if i put it on the other. freaky.

    isn't part of firewire/1394 actually based on scsi?
  • The concept of needing a 400 page book for end users of SCSI devices is appalling.
  • Sorry to be provocative here, but there seem to be a number of people extolling the virtues of IDE over SCSI whilst overlooking one of the most important features:
    Mean Time Between Failures.
    If you go to your favourite disk manufacturer, here's mine: http://www.seagate.com/cda/products/discsales/index/1,1123,,00.html
    and compare the MTBF values of IDE and SCSI drives, you'll see a glaring difference.
    One comparison that stands out:
    Cheetah 73LP (Fibre Chan 160): 1,200,000-hour MTBF
    Barracuda ATA III (IDE 40): 500,000-hour MTBF

    Reliability and seek times are the main differences, not capacity and burst speeds, which is why they are still the only real choice for professional video/audio systems.
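
    For comparison's sake, those MTBF figures can be converted into a rough annualized failure rate (a sketch assuming a constant failure rate over the service life, i.e. AFR = 1 - exp(-hours_per_year / MTBF)):

        from math import exp

        HOURS_PER_YEAR = 24 * 365
        for name, mtbf_hours in [("Cheetah 73LP", 1_200_000), ("Barracuda ATA III", 500_000)]:
            afr = 1 - exp(-HOURS_PER_YEAR / mtbf_hours)
            print(f"{name}: about {afr:.2%} chance of failing in a year of 24/7 duty")
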
  • http://www.cinonic.com/ [cinonic.com]

    Yes a shameless plug...

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...