The Book of SCSI, 2nd Edition

Craig Maloney contributes the review below of a book he says is long overdue -- the second edition of Gary Field's The Book of SCSI. It probably won't be long before someone is reviewing The Book of FireWire, but SCSI remains the most widespread standard for high-quality, reasonably priced storage. I know it's gotten better since the last time I struggled with termination issues and bad cables, but if you rely on SCSI every day, you may need this book.

The Book of SCSI, 2nd Edition: I/O for the New Millennium
author: Gary Field, Peter M. Ridge
pages: 456
publisher: No Starch Press
rating: 7.5
reviewer: Craig Maloney
ISBN: 1-886411-10-7
summary: A one-stop resource for the SCSI protocol.

What's Good?

For those in a hurry, Appendix A (The All-Platform Technical Reference) is the entire book in a nutshell. I think Appendix A should be included with every SCSI card sold. It includes pin-out descriptions of the major and not-so-major SCSI interfaces, tables for bus timings, and a quick description of termination rules. The pages that surround Appendix A are also quite good.

The chapter on connecting devices to a PC talks at length about one of the more troubling aspects of SCSI: termination. Anyone who has had to troubleshoot SCSI installation problems will appreciate how thoroughly Chapter 6 covers troubleshooting. (It even shows what a SCSI signal should look like on an oscilloscope.) Programmers will find a chapter on programming with ASPI, as well as protocol specifications for those looking for more low-level information. You'd be very hard-pressed to find a more complete and readable treatment of the SCSI protocol than this book.
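For a taste of what that kind of low-level work looks like, here is a minimal sketch (not from the book, whose programming chapter covers ASPI on Windows) that sends a SCSI INQUIRY command through the Linux generic SCSI driver's SG_IO ioctl. The /dev/sg0 device path is an assumption; adjust it for your system.

    /* Minimal SCSI INQUIRY via the Linux sg driver (SG_IO ioctl).
     * Illustrative sketch only -- not code from the book or its CD. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <scsi/sg.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char cdb[6]    = { 0x12, 0, 0, 0, 96, 0 }; /* INQUIRY, 96-byte allocation length */
        unsigned char data[96]  = { 0 };
        unsigned char sense[32] = { 0 };
        struct sg_io_hdr io;

        int fd = open("/dev/sg0", O_RDWR);   /* assumed device node */
        if (fd < 0) { perror("open"); return 1; }

        memset(&io, 0, sizeof(io));
        io.interface_id    = 'S';
        io.cmd_len         = sizeof(cdb);
        io.cmdp            = cdb;
        io.dxfer_direction = SG_DXFER_FROM_DEV;
        io.dxfer_len       = sizeof(data);
        io.dxferp          = data;
        io.mx_sb_len       = sizeof(sense);
        io.sbp             = sense;
        io.timeout         = 5000;           /* milliseconds */

        if (ioctl(fd, SG_IO, &io) < 0) { perror("SG_IO"); close(fd); return 1; }

        /* Standard INQUIRY data: bytes 8-15 hold the vendor ID, 16-31 the product ID. */
        printf("vendor: %.8s  product: %.16s\n", data + 8, data + 16);
        close(fd);
        return 0;
    }

The same pattern -- a command descriptor block plus a data buffer and a sense buffer -- is roughly what ASPI's request blocks wrap on the Windows side.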

What's Bad?

Unfortunately, completeness can lead to information overload. Novice users will find themselves at a disadvantage given the sheer amount of material presented.

When discussing how to set up a SCSI adapter, the book mentions the various PC busses from the earliest IBM PC to draft revisions of PCI and everything in between. Had I been a novice reader, I would have been overwhelmed by all the information about historical PC busses that are no longer in use. (When was the last time you used VLB or EISA?) In the interest of completeness, the authors also include a chart comparing these interfaces; I question whether it is really necessary. Some may also be put off by the hand-drawn diagrams in the earlier chapters.

On the CD

The CD includes items such as the SCSI FAQ, ASPI Development Files, ASPI tar, SCSI disk driver source for MSDOS, Western Digital SCSI Utilities, SCSITool, Postmark I/O benchmark source code, and Linux SCSI information. Of note, the CD also includes a PDF file of the entire book.

What's in it for me?

The Book of SCSI is definitely written by SCSI enthusiasts. In the early pages, the authors include a bit of SCSI poetry, and the CD includes a text file entitled "SCSI: A Game With Many Rules and No Rulebook?". This book reads with an excitement only an enthusiast can project. If you have ever been curious about SCSI, I encourage you to sit down and read the first few chapters of this book. If you are in a position to use SCSI components more than occasionally, I recommend you purchase this book and keep it on your reference shelf for those times when troubleshooting is necessary.

My biggest complaint? I wish the authors had written this book ten years ago. However, it is still a welcome addition to my library today.

  • Chapter Listing
  • Chapter 1: Welcome to SCSI
  • Chapter 1.5: A Cornucopia of SCSI Devices
  • Chapter 2: A Look at SCSI-3
  • Chapter 3: SCSI Anatomy
  • Chapter 4: Adding SCSI to Your PC
  • Chapter 5: How to Connect Your SCSI Hardware
  • Chapter 6: Troubleshooting Your SCSI Installation
  • Chapter 7: How the Bus Works
  • Chapter 8: Understanding Device Drivers
  • Chapter 9: Performance Tuning Your SCSI Subsystem
  • Chapter 10: RAID: Redundant Array of Independent Disks
  • Chapter 11: A Profile of ASPI Programming
  • Chapter 12: The Future of SCSI and Storage in General
  • Appendix A: All-Platform Technical Reference
  • Appendix B: PC Technical Reference
  • Appendix C: A Look at SCSI Test Equipment
  • Appendix D: ATA/IDE versus SCSI
  • Appendix E: A Small ASPI Demo Application
  • Glossary
  • Index


You can purchase this book at Fatbrain.

  • by jurgen ( 14843 ) on Thursday August 30, 2001 @11:48AM (#2234948)
    ..you're behind the times. Fibre Channel, FireWire, and yes, IDE, have made SCSI obsolete. IDE made SCSI obsolete? Heresy! So I would have said myself only a couple of years ago, but today the cost/benefit ratio puts me firmly behind IDE for anything on the low end... and on the high end, let's give SCSI a well-deserved retirement, with fanfare and honors, and replace it with more modern stuff, please.

    On the low end, the cost difference between IDE and SCSI has been increasing (i.e., prices for IDE drop faster than for SCSI), and IDE has also been getting better, to the point where the benefits of SCSI simply aren't enough anymore. IDE drives have gotten smarter, too, making up for some of the performance and reliability differences. If you want a high-performance, cost-effective, "low-end" RAID solution, look to a vendor like 3Ware, which makes some absolutely superb RAID cards for IDE drives... even though it needs an IDE channel dedicated to each drive, it's still cheaper than a comparable SCSI solution, even before factoring in the cost of the drives, and it performs at least as well.

    As to the high end... Fibre Channel is a step forward, but not enough. Forget all these special-purpose buses anyway... my suggestion would be to put a gigabit Ethernet interface and an IP stack directly in the drive. In fact, I hear that people are doing exactly that with something called "SCSI over IP", which sounds like an interesting idea but probably not an optimal one. Better to run something like GFS directly on the drive.

    In other words, my objection to SCSI is: not enough brains per drive! On the low end this can be accomplished with fewer drives per brain... instead of huge RAID arrays with one smart control node (like NetApps, etc), use lots of PCs with small IDE RAIDs... call it RAIIS (redundant array of independent inexpensive servers) if you will. Fewer drives per brain means more brains per drive. On the high end take this to its logical extreme... one drive per brain, a full computer in each drive, each drive a full node on the network.

    Either way SCSI is not the answer.

    -j

  • Re:Termination (Score:3, Insightful)

    by singularity ( 2031 ) on Thursday August 30, 2001 @12:09PM (#2235067) Homepage Journal
    And then you get into devices that attempt to have "built-in" termination. Or devices that, for whatever reason, have to be at the end of the chain (due to termination issues) and you need to attach two of them. Or your motherboard (at one end of the bus sometimes) is not providing decent termination.

    And that is not even getting into cable length.

    At one time I had four external SCSI devices attached to my computer. The placement of the four items would determine whether the chain worked or not. I am not talking about placement on the chain (which can definitely make the difference between working and not), but placement on my desk. If I moved the Zip drive too far to the right, the chain would fail. If I tried to move the hard drive under the desk, the chain would fail.

    Luckily I was running separate busses for internal and external.

    "There are very technical reason why you need to sacrifice a goat to get your SCSI chain working properly."

    As for the other post: the pronunciation I have always heard is "scuzzy." I always thought that it was appropriate for how messed-up SCSI was. Of course, I would take SCSI over parallel and slow serial any day of the week.

    Now FireWire and USB... I still have too much invested in SCSI to switch over just yet, but they look like good specs. Now if they would only keep USB as a low-speed powered bus and not try to get in over their heads, I will be fine. I just want something I can attach keyboards, mice, and printers to. Having two separate busses makes sense (one slower and powered, the other faster). Yes, I understand that FireWire is powered.

  • by KC7GR ( 473279 ) on Thursday August 30, 2001 @12:10PM (#2235074) Homepage Journal
    The reviewer comments that there may actually be too much information in the book, and that newcomers to the subject of SCSI may get lost. My response is that the book itself was never really written for beginners; it was written, IMO, as a technical reference for folks who fall somewhere between decently hardware-literate (able to build a system without too much trouble) and engineering technician. Witness the oscilloscope examples. How many would-be SCSI users in the Joe/Jane Consumer arena have even seen an O-scope, let alone could guess how to use one or what they're useful for?

    Speaking as a second-year EE student, and as someone who's spent 20+ years doing hands-on work with all kinds of electronics, the book came as a very welcome reference for me. I would not, however, recommend it for someone who just wants to learn enough about SCSI to make use of it. For that, I would suggest http://www.scsifaq.org

    I would suggest that the reviewer place a book in context before writing the review. It just plain looks better in print.
  • by KC7GR ( 473279 ) on Thursday August 30, 2001 @12:37PM (#2235214) Homepage Journal
    ".you're behind the times. Fiberchannel, firewire, and yes, IDE, have made SCSI obsolete. IDE made SCSI obsolete? ... on the high end, let's give SCSI a well deserved retirement, with fanfare and honors, and replace it with more modern stuff, please..."

    I'm sorry you feel this way. I could not disagree more vehemently. You appear to be basing your statement solely on "price/performance" points rather than hard technical facts. For my part, I've been using SCSI exclusively, in every system I have, since 1990 and I have not the slightest regret about it.

    I will grant that SCSI is not for everyone, but take it in context. It was never DESIGNED to suit the demands of Joe/Jane Consumer. It was designed to be a versatile and (relatively) simple-to-use I/O bus for just about any type of computer system or data processing device. With versatility and power comes complexity; it's as unavoidable as breathing, and it has always been true that SCSI requires a little more technical know-how to take full advantage of.

    No matter how many "enhancements" are kludged into it, IDE was still never designed, from the GROUND UP, to be a multi-device, multi-tasking I/O system. Where else can you find a system where, if you have two drives, the second one is almost entirely dependent on the electronics of the first to do its job while its own onboard electronics go largely unused?

    Compare that with a SCSI bus where every device, if properly designed, has the smarts to become an initiator or a target, and where such devices can do direct transfers to/from each other without intervention from the system CPU. Given that, and especially comparing it with IDE's truly brain-dead interface (IDE is an interface, NOT a true bus), I don't see how you can possibly come to the conclusion that SCSI devices don't have "enough brains per drive."

    SCSI has been around, in one form or another, since at least 1982. It has been, and continues to be, used on everything from PCs to mainframes. As for your price/performance points, I would say that the used/surplus market can easily undercut what little advantage IDE may have in this area.

    SCSI is indeed an excellent answer for many applications. You don't have to take my word for it: I think the mountain of equipment Out There that uses it, how long said equipment has been around, AND the fact that ANSI continues to develop the spec all show that SCSI has stood the test of time, and will continue to do so.

    I'm sorry if this upsets you. The complaint department is upstairs, third door on the right.
  • Re:I love SCSI! (Score:3, Insightful)

    by Howie ( 4244 ) on Thursday August 30, 2001 @01:14PM (#2235419) Homepage Journal
    Did you get free ear defenders with that system?
  • "As to the high end... Fibre Channel is a step forward, but not enough. Forget all these special-purpose buses anyway... my suggestion would be to put a gigabit Ethernet interface and an IP stack directly in the drive."

    IP is a poor match for storage needs, IMO. TCP in particular was designed - and designed rather well - for the high-latency, small-packet environment of the Internet, but storage is a low-latency, large-packet world. It's also a world where the hardware must cooperate in ensuring a high level of data integrity, where robust and efficient buffer management is critical, etc., etc. Even on cost, the equation does not clearly favor storage-over-IP. Sure, you get to use all of your familiar IP networking gear, but it will need to be upgraded to support various storage-related features already present in FC gear. Even on the controller end, do you really think a GigE interface plus an embedded IP stack is easier or cheaper to incorporate into a controller design than FC? I could go on, but I hope you get the point. "One size fits all" is a bankrupt philosophy. Let IP continue to be designed to suit traditional networking needs, and for storage use something designed to suit storage needs.

    "Better to run something like GFS directly on the drive."

    No, not better at all. Who wants the drive to be a bottleneck or SPOF? The whole point of something like GFS is to avoid those problems via distribution. Putting an IP stack on the drive is bad enough, and now you want to put a multiple-accessor filesystem on it? Dream on. People used to put things like networking stacks and filesystems on separate devices, because main processors were so wimpy, but they stopped doing that more than a decade ago. For a reason.

    "huge RAID arrays with one smart control node (like NetApps, etc)"

    NetApp doesn't make disk arrays. If you look at the people who do make high-end disk arrays, you'll see that they have far more than one brain. A big EMC, IBM, or Hitachi disk array is actually a very powerful multiprocessing computer in its own right, that just happens to be dedicated to the task of handling storage.

    "one drive per brain, a full computer in each drive, each drive a full node on the network"

    ...at which point you're back to distributed systems as they exist today, wondering how to connect each of those single brains to its single drive with a non-proprietary interface. Going around in circles like that doesn't seem very productive to me.

  • by Anonymous Coward on Thursday August 30, 2001 @02:59PM (#2235869)
    SCSI is very much the answer for quite a few market spaces. While IDE does cost less, the performance gap between IDE and SCSI has grown, not shrunk. Simply look at the latest versions of SCSI and see how they compare to IDE.

    SCSI disks, available now:
    40 MB/s media rate
    15,000 RPM
    158 MB/s cache rate
    16+ MB cache size
    21,000 IOP/s
    Extremely high MTBF

    SCSI disks, available in the next 6 months:
    60 MB/s media rate
    22,000 RPM
    300 MB/s cache rate
    64 MB cache size
    40,000 IOP/s
    Extremely high MTBF

    Top-end IDE disks today:
    30 MB/s media rate
    7,200 RPM
    70 MB/s cache rate
    4 MB cache size
    6,000 IOP/s

    IDE disks 6 months from now, assuming Serial ATA:
    50 MB/s media rate
    10,000 RPM
    120 MB/s cache rate
    8 MB cache size
    8,000 IOP/s

    There is a staggering difference in the performance of the drives, not to mention the controllers: Ultra-320 has nearly 4 times the IOP performance, and over 3 times the media rate, of comparable SATA setups. When you factor in that SCSI latencies are roughly a third of IDE's, performance is substantially better across the board. If you want to save money, go IDE. But if you want performance, SCSI is still the only choice. FC is not a cost-viable alternative for internal connection, and today it competes with SCSI only for external mass-storage attach, where it excels in multi-disk configurations. There is no effective method for doing external IDE.
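    As a rough back-of-the-envelope sketch (not part of the post above): average rotational latency is half a revolution, i.e. 30,000/RPM milliseconds, which is one component of the latency gap between a 15,000 RPM SCSI drive and a 7,200 RPM IDE drive.

        /* Average rotational latency (half a revolution) for the spindle
         * speeds quoted above. Illustrative sketch only. */
        #include <stdio.h>

        int main(void)
        {
            const int rpm[] = { 7200, 10000, 15000, 22000 };
            int i;

            for (i = 0; i < 4; i++) {
                double latency_ms = 30000.0 / rpm[i]; /* half of 60000/RPM ms per revolution */
                printf("%5d RPM -> %.2f ms average rotational latency\n", rpm[i], latency_ms);
            }
            return 0;
        }

    Seek time and command queueing add to that, so rotational speed is only part of the overall latency difference.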

    As for iSCSI: it will not compete directly with SCSI in any market. The cost disadvantage of putting iSCSI on the disks themselves is huge. It makes more sense for JBOD, RAID, and switch vendors to go the last meter using IDE or SCSI, since GbE has horrible performance with iSCSI compared even to ATA/66.
