



The Book of SCSI, 2nd Edition: I/O for the New Millennium
author | Gary Field, Peter M. Ridge
pages | 456
publisher | No Starch Press
rating | 7.5
reviewer | Craig Maloney
ISBN | 1-886411-10-7
summary | A one-stop resource for the SCSI protocol.
What's Good?
For those in a hurry, Appendix A (The All-Platform Technical Reference) is the entire book in a nutshell. I think Appendix A should be included with every SCSI card sold. It includes pin-out descriptions of the major and not-so-major SCSI interfaces, tables for bus timings, and a quick description of termination rules.
The pages that surround Appendix A are also quite good. The chapter on connecting devices to a PC talks at length about one of the more troublesome aspects of SCSI: termination. Anyone who has had to troubleshoot SCSI installation problems will appreciate how thoroughly Chapter 6 deals with troubleshooting. (It even shows what a SCSI signal should look like on an oscilloscope.) Programmers will find a chapter on programming with ASPI, as well as protocol specifications for those looking for more low-level information. You'd be very hard pressed to find a more complete and readable treatment of the SCSI protocol than this book.
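To give a flavor of the territory the ASPI chapter covers, here is a minimal sketch of issuing a SCSI INQUIRY through ASPI on Win32 -- my sketch, not code from the book, and it assumes the Adaptec ASPI SDK (wnaspi32.h) for the structure and constant names:

    /* Minimal ASPI INQUIRY sketch (assumes the Adaptec ASPI SDK:
     * wnaspi32.h declares SendASPI32Command() and SRB_ExecSCSICmd). */
    #include <windows.h>
    #include <string.h>
    #include "wnaspi32.h"                 /* link against wnaspi32.lib */

    static BYTE inquiry_data[96];

    int aspi_inquiry(BYTE adapter, BYTE target)
    {
        SRB_ExecSCSICmd srb;
        memset(&srb, 0, sizeof(srb));
        srb.SRB_Cmd        = SC_EXEC_SCSI_CMD;     /* execute a SCSI command */
        srb.SRB_HaId       = adapter;              /* host adapter number */
        srb.SRB_Target     = target;               /* target SCSI ID, LUN 0 */
        srb.SRB_Flags      = SRB_DIR_IN;           /* data flows device -> host */
        srb.SRB_BufPointer = inquiry_data;
        srb.SRB_BufLen     = sizeof(inquiry_data);
        srb.SRB_SenseLen   = SENSE_LEN;
        srb.SRB_CDBLen     = 6;                    /* INQUIRY uses a 6-byte CDB */
        srb.CDBByte[0]     = 0x12;                 /* INQUIRY opcode */
        srb.CDBByte[4]     = sizeof(inquiry_data); /* allocation length */

        SendASPI32Command((LPSRB)&srb);            /* returns right away... */
        while (srb.SRB_Status == SS_PENDING)       /* ...so poll for completion */
            Sleep(1);
        return (srb.SRB_Status == SS_COMP) ? 0 : -1;
    }

Real code would use SRB_EVENT_NOTIFY with an event handle instead of polling, but the polling form keeps the sketch short.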
What's Bad?
Unfortunately, completeness can lead to information overload. Novice users will find themselves at a disadvantage with the sheer amount of material presented.
When discussing how to set up a SCSI adapter, the book mentions the various PC busses from the earliest IBM PC to draft revisions of PCI and everything in between. Had I been a novice reader, I would have been overwhelmed by all the information about historical PC busses that are no longer in use. (When was the last time you used VLB or EISA?) In the interest of completeness, the authors also include a chart comparing these interfaces; I question whether it is really necessary. Some may also be put off by the hand-drawn diagrams in the earlier chapters.
On the CD
The CD includes items such as the SCSI FAQ, ASPI Development Files, ASPI tar, SCSI disk driver source for MSDOS, Western Digital SCSI Utilities, SCSITool, Postmark I/O benchmark source code, and Linux SCSI information. Of note, the CD also includes a PDF file of the entire book.
What's in it for me?
The Book of SCSI is definitely written by SCSI enthusiasts. On the early pages, the authors include a bit of SCSI poetry, and the CD includes a text file entitled "SCSI: A Game With Many Rules and No Rulebook?". This book reads with an excitement only an enthusiast can project. If you have ever been curious about SCSI, I encourage you to sit down and read the first few chapters of this book. If you are in a position to use SCSI components more than occasionally, I recommend you purchase this book and keep it on your reference shelf for those times when troubleshooting is necessary.
My biggest complaint? I wish the authors had written this book ten years ago. However, it is still a welcome addition to my library today.
- Chapter Listing
- Chapter 1: Welcome to SCSI
- Chapter 1.5: A Cornucopia of SCSI Devices
- Chapter 2: A Look at SCSI-3
- Chapter 3: SCSI Anatomy
- Chapter 4: Adding SCSI to Your PC
- Chapter 5: How to Connect Your SCSI Hardware
- Chapter 6: Troubleshooting Your SCSI Installation
- Chapter 7: How the Bus Works
- Chapter 8: Understanding Device Drivers
- Chapter 9: Performance Tuning Your SCSI Subsystem
- Chapter 10: RAID: Redundant Array of Independent Disks
- Chapter 11: A Profile of ASPI Programming
- Chapter 12: The Future of SCSI and Storage in General
- Appendix A: All-Platform Technical Reference
- Appendix B: PC Technical Reference
- Appendix C: A Look at SCSI Test Equipment
- Appendix D: ATA/IDE versus SCSI
- Appendix E: A Small ASPI Demo Application
- Glossary
- Index
You can purchase this book at Fatbrain.
SCSI (Score:1)
Ultra Fast Wide LVD SCSI-3!!!
my 2 cents...
Re:SCSI (Score:1)
Re:SCSI (Score:2)
Re:SCSI (Score:1)
Is it really that SCSI parts are twice as expensive to manufacture?
I know that IBM manufactures the same hardware for IDE and SCSI. With the exception of a small part of the external circuitry (i.e. connector, etc.) and the EEPROM contents, they are the same.
Companies charge a larger markup on SCSI devices because they can. They're "not consumer" devices - they're more for specialists etc..
Chicken and egg! I know the retail volume is lower, but if someone came out with a SCSI drive at an "IDE" price, I think it would change.
Quick comparison from the local store:
IDE: IBM Deskstar 60GXP 60.0GB UDMA100 7200rpm 8.5msec 2MB DM 430
SCSI: IBM Ultrastar 36LP 36.9GB Ultra160 7200rpm 6.8msec 4MB DM 730
SCSI: IBM Ultrastar 36LZX 36.9GB Ultra160 10000rpm 4.9msec 4MB DM 960
It's not just the size! There's no question which is faster, and I know SCSI has less overhead/more bandwidth etc, but for most of us it's not worth it. I wonder why we don't see larger lower-speed SCSI drives. (Those two are the largest ones any shop around here has to offer.)
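Running the price-per-gigabyte arithmetic on those three (reading the trailing DM figures as prices):

    430 / 60.0 GB = ~ 7.2 per GB (IDE)
    730 / 36.9 GB = ~19.8 per GB (SCSI, 7200rpm)
    960 / 36.9 GB = ~26.0 per GB (SCSI, 10000rpm)

So call it roughly a 3x premium per gigabyte before you even get to the controller.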
-- Steve
Re:SCSI (Score:1)
IDE drives are manufactured with a 40% error tolerance - up to 40% of the disk can be bad and the drive can still be sold.
This forces SCSI disks to be more reliable, simply by natural selection - disks not good enough to be SCSI can always be downgraded to IDE and sold on. Also, the manufacturing standards have to be much higher.
Because these disks are so much better made and more reliable, it is safe to spin them faster and bash the heads about quicker. Hence the much higher performance of SCSI disks compared to IDE, even when the electronics inside may be the same.
Re:how ironic that this post get mod'ed (Score:1)
Chapter list (Score:4, Funny)
Re:Chapter list (Score:1)
Secsi? (Score:2)
Perhaps there are even more ways? Feel free to reply with weird pronunciations you've heard.
-Kasreyn
Re:Secsi? (Score:1)
Re:Secsi? (Score:1)
Re:Secsi? (Score:2)
"sexy and sassy" go together better than "scuzzy and sassy" in my opinion.
*shrug*
Re:Secsi? (Score:1)
Secsi is pronounced sexy.
he's just trying to be funny
Termination (Score:2, Informative)
Re: (Score:3, Insightful)
Re:Termination (Score:1)
Generally I don't see termination as being any stickier than the master/slave/solo jumpering you do with IDE... but then you occasionally run across some sticky little SOB that's determined to be a pain.
How's this for termination hell: I bought (at a good price) a large (read: full height) 10G SCSI drive in its own external case w/ power supply et al. Great, except it was terminated, it wasn't going to be the last thing on my chain, and the only way to turn off the termination was to open the case and void the manufacturer's warranty.
It's enough to make you go IDE...
Re:Termination (Score:1)
A few years ago I bought a Travan4 drive from APS. It was more expensive than if I bought it in a generic case but I liked their case design at the time (built-in active termination, small footprint, stackable).
The problem is once it arrived I found out they had connected the SCSI ID jumpers oddly, so I could only do 0 or... 7 I think. As it turned out they'd reversed the grounds. I contacted them, explained what I thought was happening, and asked if it'd be OK to open the case and switch the jumpers around. They said it'd be okay, marked it in my customer record, and of course I dutifully saved the emails.
The other option is to send it back to them so they can make the change, but usually if you're a stickler and make them pay shipping costs both ways, they'll relent and let you do the work yourself.
Though I think the biggest problem is the need for you to impress on them you're not a yokel who feels the need to stick screwdrivers into power supplies for no particular reason.
Re:Termination (Score:1, Informative)
Termination has, if anything, gotten simpler over the past few years. Devices other than wide mass-storage devices are becoming rarer. Few devices other than tape still ship in narrow formats.
Most current SCSI HBAs do an excellent job of auto-termination. With most new SCSI cables shipping with built-in multi-mode terminators, the job is even simpler. Removing terminators from LVD devices also simplified the matter.
There are some instances of designers incorrectly implementing auto-termination schemes and bus widths, but all in all, a good designer can get it right. The only complexity comes when people do not read the documentation. Serialized busses such as SATA and serial SCSI will make life even easier going forward, as all points will require termination.
Re: (Score:2)
Re:Termination (Score:1)
Well, obviously buying the cheap stuff gets you in trouble. But I would think you wouldn't buy SCSI at all if you're picky about the price.. (Unless it's for something like an A3000, which simply needs SCSI, but it works with seriously incorrect termination, so it's still not an issue.. (The A3000 SCSI controller is well known for being picky, but that's about the SCSI protocol, not termination.))
Then you have SCSI devices of varying bus widths, and you have to terminate one half of the data bus or the other.
Well.. I suppose my experience is limited to the above-mentioned A3000 (with an 8-bit bus) and server-type stuff where everything is nicely streamlined and just works. =) (Brand servers)
Frankly, I think that termination problems have had more to do with SCSI's demise in high-end consumer PCs than any other factor.
Make that termination problems with incorrectly designed stuff and bad docs, not SCSI as such, and I might even agree.. =)
Re: (Score:2)
how about different types? (Score:2, Interesting)
...of SCSI terminators with the same number of pins that are not labeled by the OEM as HVD, LVD, etc. I have a zillion terminators here for normal SCSI devices and a few HVD for my tape libraries, and it's impossible to determine (to my knowledge) which is which... ugh
Re:Termination (Score:3, Insightful)
And that is not even getting into cable length.
At one time I had four external SCSI devices attached to my computer. Placement of the four items would cause the chain to work or not. I am not talking about placement on the chain (which can definitely make the difference between working and not), but rather on my desk. If I moved the Zip drive too far to the right, the chain would fail. If I tried to move the hard drive under the desk, the chain would fail.
Luckily I was running separate busses for internal and external devices.
"There are very technical reason why you need to sacrifice a goat to get your SCSI chain working properly."
As for the other post - what I have always heard is "scuzzy." I always thought that was appropriate for how messed-up SCSI was. Of course, I would take SCSI over parallel and slow serial any day of the week.
Now, Firewire and USB... I still have too much invested in SCSI to go over just yet. They look like good specs. Now if they would only keep USB as a low-speed powered bus and not try to get in over their heads, I'll be fine. I just want something I can attach keyboards, mice, and printers to. Having two separate busses makes sense (one slower and powered, the other faster). Yes, I understand that Firewire is powered.
Re:Termination (Score:2)
Back in the very early days, when people could still remember SASI, there was actually a debate about whether SCSI should be "scuzzy" or "sexy". The former pronunciation prevailed, and I never thought it was a coincidence. ;-)
Re:Termination (Score:1)
Re:Termination (Score:1)
Re:Termination (Score:1)
Re:Termination (Score:1)
I've seen controllers that don't seem to like Seagate and IBM drives on the same bus.
And heaven help you if for some reason you need to put SCSI-2 narrow devices alongside wide devices.
Re:Termination (Score:1)
When was the last time you used VLB or EISA? (Score:1, Offtopic)
Re:When was the last time you used VLB or EISA? (Score:2)
I also have a server with an EISA bus supporting 4 slots.
Re:When was the last time you used VLB or EISA? (Score:1)
Just because Windows 2000 won't run on my 486/33 doesn't mean it doesn't make a damn fine firewall/gateway. Why replace hardware when you can just use more efficient software?
And yes, the 486 is VLB, and so is my web server.
MadCow.
Re:When was the last time you used VLB or EISA? (Score:1)
Re:When was the last time you used VLB or EISA? (Score:1)
Of course, VLB SCSI cards are few and far between. I've got a fully-decked-out VLB machine (just for the heck of it). It's got a VLB video card and a VLB caching IDE controller, but though I've scrounged the dustbins for a long time, I've never even seen a VLB SCSI card.
Apparently they were made (Adaptec claims [adaptec.com] to have made some) but they seem to have all evaporated.
Re:When was the last time you used VLB or EISA? (Score:2)
Now I know why (Score:1)
Re:Now I know why (Score:2)
Even if you need another 9.9% reliability, IDE RAIDs are becoming more and more common.
Now, if you're doing 'mission critical' stuff (I hate that term) you'll know that you'll get that extra reliability and speed, but you'll pay through the ass for it.
Price versus quality, folks.
I love SCSI! (Score:4, Informative)
I have a 3-channel LVD SCSI controller in my video system and it's talking to devices of all vintages:
1) Three 18.2GB Barracuda LVD drives in a RAID-0.
2) Four 9.1GB Micropolis UW drives in a RAID-0.
3) 8x CD-R (not CD-RW) drive.
4) Brand new DVD-R drive (whoopee!)
5) Two 1.3GB 5.25" Magneto-Optical drives.
6) 7/14GB 8mm tape drive.
7) 12/24GB 4mm tape drive.
8) Very old (but needed) Archive 2150S (QIC-150).
9) 100 MB Zip drive.
10) 300 DPI scanner (for rough stuff).
11) 1200 DPI scanner (for more important stuff).
The system lives in a server case with dual 450W power supplies, so of these devices, only the two optical drives and the two scanners are external. There are only three cables inside the case for the lot. Theoretically, there are 28 more SCSI IDs available for use.
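(That figure checks out, assuming all three channels are wide, 16-ID buses: 3 x 16 = 48 IDs, minus 3 for the controller itself leaves 45, and minus the 17 devices above leaves 28.)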
Now, the nice thing about this is that I can have damn near all of them running at the same time without any appreciable slowdown -- something that never happens on my "play" system with IDE drives.
On my IDE system, I've got two hard drives, a CD-RW and an IDE tape, and the IDE channels often seem to slow each other down and fight for control when I start to burn, backup, and do lots of disk I/O at the same time. I've been told that this is because a single IDE interface doesn't do concurrent access to both drives.
Either way, I love using the SCSI system. It's an I/O monster. And I love being able to just hang whatever kind of device I need to use off of the external connector and know with reasonable certainty that Linux will support it. Long live SCSI.
Re:I love SCSI! (Score:1)
I see you don't have any SCSI printers. Hah!
Anyway the greatest benefit of SCSI is that you get to put all your peripheral devices right on your desk. Nothing is quite like a stack of disk drives sitting next to your monitor. Handy for CD-Rs as well. At least we can still keep stuff on the desk with ieee-1394!
Re:I love SCSI! (Score:1)
Re:I love SCSI! (Score:3, Insightful)
Re:I love SCSI! (Score:1)
Re:I love SCSI! (Score:2)
3) 8x CDR
4) DVD-R drive
5) Zip drive
6) Tape drive (why you use two I dunno...)
7) odd external SCSI devices
Most new mobos have 4 IDE channels on them, 2 of which can be dedicated to a raid config
In channels 1 & 2 install your 40gb drives in a raid-0 config
In channel 3 place your CDR & Zip drive
In channel 4 place your DVDR drive & Tape drive
Get a cheap SCSI card to hook up your scanners and other external devices (IDE can't really be used with external stuff).
This isn't as fast as your configuration, but it will be close. Additionally, you get more storage space and it costs a hell of a lot less.
If that extra performance means that much to you, and it's worth the extra cost -- that's great, but if cost ever enters the equation IDE setups can come close to that of a decent SCSI setup.
Re:I love SCSI! (Score:2)
This is true. IDE isn't designed to support a large number of devices. Then again, do you really need a large number of devices? Your system looks like a lot of old parts kludged together, and a lot of the parts look redundant to the point of being useless.
1) Three 18.2GB Barracuda LVD drives in a RAID-0.
2) Four 9.1GB Micropolis UW drives in a RAID-0.
Replace these old drives with one or two newer high-capacity drives. Select IDE or SCSI as your needs require, but if you don't end up needing a lot of devices, IDE is likely fine, and much cheaper. You obviously aren't real concerned about losing some of the data on these since they're RAID 0. I realize you can back up the really important data on one of your 8 different methods of data backup (9 if the system has a floppy), but if you lose a drive, you've lost all the data on that RAID set.
3) 8x CD-R (not CD-RW) drive.
4) Brand new DVD-R drive (whoopee!)
Do you really still need the CD-R? If you're making CD-to-CD copies I guess this could be useful. Does your DVD-R write CDs as well, or just DVDs?
5) Two 1.3GB 5.25" Magneto-Optical drives.
Are you using these to make disk-to-disk copies, or are they just parts you used to need but don't feel like throwing out?
6) 7/14GB 8mm tape drive.
7) 12/24GB 4mm tape drive.
8) Very old (but needed) Archive 2150S (QIC-150).
Ok, I'm getting a picture of a system where you've used too many different formats in the past to backup data, and are now paying the price to have access to that data. Sooner or later as some of those tape drives start to fail it's going to come back and haunt you.
9) 100 MB Zip drive.
Why not? You've got everything else. Why not add a compact flash reader too?
10) 300 DPI scanner (for rough stuff).
11) 1200 DPI scanner (for more important stuff).
The 300 DPI scanner has to be both old and slow. Just use the 1200 DPI one. Quit being such a pack rat and get rid of the old junk.
You might even be able to pay for a new system with big IDE drives with the savings on your electric bill. This monster system you have right now must be a power hog.
On my IDE system, I've got two hard drives, a CD-RW and an IDE tape, and the IDE channels often seem to slow each other down and fight for control when I start to burn, backup, and do lots of disk I/O at the same time. I've been told that this is because a single IDE interface doesn't do concurrent access to both drives.
Newer CD-RWs have firmware and drivers that do an excellent job of hiding this. Busmastering IDE drivers for your motherboard also help a lot. IDE implemented poorly is far from a high-performance system. But when it's implemented well, it can challenge SCSI in many (but not all) cases for small systems.
Either way, I love using the SCSI system. It's an I/O monster. And I love being able to just hang whatever kind of device I need to use off of the external connector and know with reasonable certainty that Linux will support it. Long live SCSI.
I suspect SCSI will live a long life yet, though it's a high end product, and Fibre Channel may squeeze it out from the high end eventually because of SCSI's reliance on parallel cables.
For the low end, a system with IDE and USB 2.0 would do quite nicely. All the recordable media types could easily hook up to USB 2.0 along with the scanner. They would also be easily used on another machine if necessary, which would reduce costs and provide an alternative if this non-fault-tolerant system goes down. They could also be swapped in and out hot, which would reduce downtime. You could use Firewire instead of USB 2.0, but I haven't seen a lot of Firewire devices. I don't recommend USB 1.x because it's just too slow for things like DVD writers, and even 1200 DPI scanners. USB 2.0 devices are just coming out, so this isn't a real practical solution yet.
Re:I love SCSI! (Score:2)
The CD-R drive is still there because I burn at least a hundred a week and I don't want to kill the new DVD-R (*grrrrr*) with that workload.
The two optical drives are connected because I'm working with someone using a large set of data which has been stored over the years that way: for each disc containing database text, there is a second, matching disc containing the related image data. To get at the stuff seamlessly, both must be mounted!
The scanners are both used very heavily so I'm not ready to do away with either one yet. The 300dpi unit is actually much faster at 300dpi (it "starts" a scan quicker, if that means anything). The drives could (admittedly) be replaced by a newer drive configuration (I have a pair of 75GB SCSI waiting to replace all), but I keep putting it off and depending on my DAT24 backups because the move will be a pain and will shut me down (by completely occupying my attention and waiting for everything to copy) for a day or two while I make the switch.
I do have a CF/SmartMedia reader because I get digital camera stuff in here all the time, too. Unfortunately, it's on the parallel port.
Sounds like it works for you (Score:2)
The drives could (admittedly) be replaced by a newer drive configuration (I have a pair of 75GB SCSI waiting to replace all), but I keep putting it off and depending on my DAT24 backups because the move will be a pain and will shut me down (by completely occupying my attention and waiting for everything to copy) for a day or two while I make the switch.
I can imagine that if I got all that equipment working well together, I wouldn't want to play with it any more than I had to either. If it works, don't screw with it is a good rule to live by in many cases. If you decide you don't need those 75 GB SCSI drives sitting around taking up space, let me know.
If you rely on SCSI every day... (Score:3, Insightful)
On the low end, the cost difference between IDE and SCSI has been increasing (i.e. prices for IDE drop faster than SCSI) and IDE has also been getting better, to the point where the benefits of SCSI simply aren't enough anymore. IDE drives have gotten smarter, too, making up for some of the performance and reliability differences. If you want a high-performance, cost-effective, "low-end" RAID solution, look to e.g. 3Ware, which makes some absolutely superb RAID cards for IDE drives... even though it needs an IDE controller dedicated to each drive, it's still cheaper than a comparable SCSI solution, even before factoring in the cost of the drives! And it performs at least as well.
As to the high end... Fibre Channel is a step forward, but not enough. Forget all these special-purpose buses anyway... my suggestion would be to put a gigabit ethernet interface and an IP stack directly in the drive. In fact, I hear that people are doing exactly that with something called "SCSI over IP", which sounds like an interesting idea but probably not optimal. Better to run something like GFS directly on the drive.
In other words, my objection to SCSI is: not enough brains per drive! On the low end this can be accomplished with fewer drives per brain... instead of huge RAID arrays with one smart control node (like NetApps, etc), use lots of PCs with small IDE RAIDs... call it RAIIS (redundant array of independent inexpensive servers) if you will. Fewer drives per brain means more brains per drive. On the high end take this to its logical extreme... one drive per brain, a full computer in each drive, each drive a full node on the network.
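To make the idea concrete, here's a toy sketch (protocol invented here for illustration, not any real standard) of what a "full node" drive might look like: a little server that exports a disk image over TCP and answers raw block-read requests.

    /* Toy "brain per drive" node: exports a disk image over TCP.
     * Invented wire protocol: the client sends two network-order
     * 32-bit words (starting block, block count) and gets back
     * count * 512 bytes of raw data. */
    #include <arpa/inet.h>
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define BLKSZ 512

    int main(int argc, char **argv)
    {
        if (argc != 3) { fprintf(stderr, "usage: %s image port\n", argv[0]); return 1; }
        int disk = open(argv[1], O_RDONLY);
        if (disk < 0) { perror("open"); return 1; }

        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons((uint16_t)atoi(argv[2]));
        if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }
        listen(srv, 4);

        for (;;) {                               /* serve one client at a time */
            int c = accept(srv, NULL, NULL);
            uint32_t req[2];
            while (read(c, req, sizeof(req)) == (ssize_t)sizeof(req)) {
                uint32_t blk = ntohl(req[0]), cnt = ntohl(req[1]);
                char buf[BLKSZ];
                for (uint32_t i = 0; i < cnt; i++) {
                    if (pread(disk, buf, BLKSZ, (off_t)(blk + i) * BLKSZ) != BLKSZ)
                        memset(buf, 0, BLKSZ);   /* short read past EOF: zeros */
                    write(c, buf, BLKSZ);
                }
            }
            close(c);
        }
    }

A real version would obviously need writes, error reporting, and concurrency, but the point stands: the "drive" is just another node you talk to over the network.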
Either way SCSI is not the answer.
-j
SCSI over IP... what about IP over SCSI? (Score:3, Informative)
Basically, take 2 computers with a scsi card in each, and use a scsi cable to connect the two machines. I don't know how this solution compares to myrinet [myri.com] or gigabit ethernet in terms of performance, but the idea is a nice one.
Re:If you rely on SCSI every day... (Score:2, Insightful)
I'm sorry you feel this way. I could not disagree more vehemently. You appear to be basing your statement solely on "price/performance" points rather than hard technical facts. For my part, I've been using SCSI exclusively, in every system I have, since 1990 and I have not the slightest regret about it.
I will grant that SCSI is not for everyone, but take it in context. It was never DESIGNED to suit the demands of Joe/Jane Consumer. It was designed to be a versatile and (relatively) simple-to-use I/O bus for just about any type of computer system or data processing device. With versatility and power comes complexity; it's as unavoidable as breathing, and it has always been true that SCSI requires a little more in the way of technical know-how to take full advantage of it.
No matter how many "enhancements" are kludged into it, IDE was still never designed, from the GROUND UP, to be a multi-device, multi-tasking I/O system. Where else can you find a system where, if you have two drives, the second one is almost entirely dependent on the electronics of the first to do its job while its own onboard electronics go largely unused?
Compare that with a SCSI bus where every device, if properly designed, has the smarts to become an initiator or a target, and where such devices can do direct transfers to/from each other without intervention from the system CPU. Given that, and especially comparing it with IDE's truly brain-dead interface (IDE is an interface, NOT a true bus), I don't see how you can possibly come to the conclusion that SCSI devices don't have "enough brains per drive."
SCSI has been around, in one form or another, since at least 1982. It has been, and continues to be, used on everything from PC's to mainframes. As for your "Price/Performance" points, I would say that the used/surplus market can easily undercut what little advantage IDE may have in this area.
SCSI is indeed an excellent answer for many applications. You don't have to take my word for it: I think the mountain of equipment Out There that uses it, and how long said equipment has been around, AND the fact that ANSI continues to develop the spec, shows that SCSI has stood the test of time, and will continue to do so.
I'm sorry if this upsets you. The complaint department is upstairs, third door on the right.
Amen, Brother... (Score:1)
Re:If you rely on SCSI every day... (Score:1)
Re:If you rely on SCSI every day... (Score:1)
Next, SCSI has progressed - it is now at 320MB/s. IDE and Firewire are stuck at 100MB/s.
If you have a simple pc (floppy, CD-ROM, and HD), by all means go IDE. But don't put anything else in - if you want a second drive, then you have to put it with the first drive, and only one drive is active at any one time - slowing drive-to-drive copies or moves. Put it with the CD-ROM, and the second drive bites whenever the CD is accessed. Or, add a CD-RW. Now where? Put it with the HD? No, because you'll make a lot more coasters when burning from that drive. So, put it with the CD-ROM, and any CD-to-CD copies need to make a stop-over on the HD. SCSI doesn't have these problems.
Re:If you rely on SCSI every day... (Score:1)
Maxtor recently announced that it has released the ATA/133 spec. Check out: http://www.maxtor.com/Maxtorhome.htm
-neel.
Re:If you rely on SCSI every day... (Score:1)
"GFS on the drive"? Dude, we're talking about the drives here, put whatever embedded widget in front of them that you want, but at some point the block-addressed device will exist behind it.
BTW, When we're talking SCSI, we're talking about aggregating storage devices here, and that goes _behind_ the filesystem (at least, given today's meaning of the word)....
Re:If you rely on SCSI every day... (Score:4, Insightful)
IP is a poor match for storage needs, IMO. TCP in particular was designed - and designed rather well - for the high-latency small-packet environment of the Internet, but storage is a low-latency large-packet world. It's also a world where the hardware must cooperate in ensuring a high level of data integrity, where robust and efficient buffer management is critical, etc. etc. etc. Even on cost, the equation does not clearly favor storage over IP. Sure, you get to use all of your familiar IP networking gear, but it will need to be upgraded to support various storage-related features already present in FC gear. Even on the controller end, do you really think a GigE interface plus an embedded IP stack is easier or cheaper to incorporate into a controller design than FC? I could go on, but I hope you get the point. "One size fits all" is a bankrupt philosophy. Let IP continue to be designed to suit traditional-networking needs, and for storage use something designed to suit storage needs.
No, not better at all. Who wants the drive to be a bottleneck or SPOF? The whole point of something like GFS is to avoid those problems via distribution. Putting an IP stack on the drive is bad enough, and now you want to put a multiple-accessor filesystem on it? Dream on. People used to put things like networking stacks and filesystems on separate devices, because main processors were so wimpy, but they stopped doing that more than a decade ago. For a reason.
NetApp doesn't make disk arrays. If you look at the people who do make high-end disk arrays, you'll see that they have far more than one brain. A big EMC, IBM, or Hitachi disk array is actually a very powerful multiprocessing computer in its own right, that just happens to be dedicated to the task of handling storage.
...at which point you're back to distributed systems as they exist today, wondering how to connect each of those single brains to its single drive with a non-proprietary interface. Going around in circles like that doesn't seem very productive to me.
Re:If you rely on SCSI every day... (Score:1)
Re:If you rely on SCSI every day... (Score:1, Insightful)
Top-end SCSI disks versus top-end IDE disks, available now and (assuming SATA arrives on schedule) in the next 6 months:

                   SCSI (now)       SCSI (+6 mo)     IDE (now)      IDE/SATA (+6 mo)
    Media rate     40 MB/s          60 MB/s          30 MB/s        50 MB/s
    Spindle speed  15,000 RPM       22,000 RPM       7,200 RPM      10,000 RPM
    Cache rate     158 MB/s         300 MB/s         70 MB/s        120 MB/s
    Cache size     16+ MB           64 MB            4 MB           8 MB
    IOP/s          21,000           40,000           6,000          8,000
    MTBF           extremely high   extremely high   --             --
There is a staggering difference in the performance of the drives, not to mention the controllers: Ultra-320 has nearly 4 times the IOP performance, and over 3 times the media rate, of comparable SATA drives. When you factor in that SCSI latencies are about a third of IDE's, performance is substantially better across the board. If you want to save money, go IDE. But if you want performance, SCSI is still the only choice. FC is not a viable cost alternative for internal connection, and today competes with SCSI only for external mass storage attach, where it excels in multi-disk configurations. There is no effective method for doing IDE externally.
As for iSCSI: iSCSI will not compete directly with SCSI in any market. The cost disadvantage of iSCSI on the disks themselves is huge. It makes more sense for JBOD, RAID, and switch vendors to go the last meter using IDE or SCSI, since GbE has horrible performance with iSCSI compared even to ATA 66.
Re:If you rely on SCSI every day... (Score:1)
anyhoo it got replaced with a bunch of $89 40 gig Seagate IDE drives - which have so far performed just as well as the IBM drives and seem to be just as reliable. And on top of that they seem to run an awful lot cooler.
Re:If you rely on SCSI every day... (Score:2)
In fact, I hear that people are doing exactly that and using something called "SCSI over IP", which sounds like an interesting idea but probably not optimal
I've been mulling over the pros and cons of NAS vs SAN lately, as our environment is moving to FC-AL SANs connecting our servers, but 1000BaseSX for the desktop LAN.
Just today, though, I caught notice of this iSCSI site [iscsistorage.com], which looks kind of interesting.
I thought GFS looked pretty good, but wondered why, for example, Coda had achieved greater buy-in from the Linux crowd.
Re:If you rely on SCSI every day... (Score:2)
Gigabit Fibre w/IP -> Drive = Bad Idea.
To use IP, you have to fragment the data, create checksums, encapsulate the data, then find where you're going to transfer it to, wait for the "IP bus" to become available, then transmit at the hardware layer (after potentially doing a DNS lookup and an ARP/RARP request), have some sort of transmission acceptance and queuing (TCP vs UDP), decapsulate, check the checksums, defragment the data, and use it.
That's a lot of overhead for something that SCSI does in much fewer clock-ticks.
Re:If you rely on SCSI every day... (Score:1)
3ware RAID cards vs SCSI (Score:1)
I'm getting ready to build a file server for 50 workstations and I can save a couple grand by going with the 3ware 7400 and IDE disks. Not to mention the MB/$ ratio.
-sid
Re:3ware RAID cards vs SCSI (Score:1, Informative)
I'm in the process of having a file server built for myself using similar technology. It is not built (and thus is not in my hot little hands) so I cannot speak from experience. You might be interested in the data at Storage Review [storagereview.com]. Although Storage Review focuses on timings under a Microsoft O/S, the IOMeter measures are interesting, and they have a nice database of measures that allows you to query for a comparison.
One interesting note is that 3Ware's 7400 series appears (according to their analysis) to be weak at Raid 5 performance (I've decided not to go Raid 5 so it is not currently an issue for me). If you need Raid 5, you might want to consider an Adaptec 2400 series which allows you to plug in extra cache memory on the card for write buffering.
The FreeBSD mailing lists have recently had some tales of woe for a RAID install. One speculation is that the IDE drives don't have staggered spin-up like their SCSI counterparts, so if you have a large number of drives, you may need extra power to get the system to start up reliably (get a redundant or high-capacity supply and offload some drives, perhaps).
Re:3ware RAID cards vs SCSI (Score:1)
They are supposed to do hot swap too, but I haven't tried yet.
Re:3ware RAID cards vs SCSI (Score:1)
Re:3ware RAID cards vs SCSI (Score:1)
We are using the 3ware 6000 series cards for our Win2k Domain controller, our Lotus Domino mail/app server, and our database server (Linux/PostgreSQL).
These servers support 50 users and performance is very good. So as SysAdmin I am very happy with the 3ware cards.
Since when was SCSI reasonably priced? (Score:2)
Re:Since when was SCSI reasonably priced? (Score:3, Informative)
For example, the tired old Seagate Cheetah 4LP, introduced in 1996, is still faster than the fastest IDE disk you can buy today, the WD800BB. The Cheetah delivers 50% more performance in the IOMeter file server benchmark (2.21 MB/s vs. 1.40 MB/s), responding on average 700ms before the WD does.
Re:Since when was SCSI reasonably priced? (Score:2)
Re:Since when was SCSI reasonably priced? (Score:1)
Re:Since when was SCSI reasonably priced? (Score:1)
Re:Since when was SCSI reasonably priced? (Score:2)
If SCSI drives were $20 more expensive than the same IDE drive, I'd be all over SCSI -- but they're not.
In the book's defense... (Score:2, Insightful)
Speaking as a second-year EE student, and as someone who's spent 20+ years doing hands-on work with all kinds of electronics, the book came as a very welcome reference for me. I would not, however, recommend it for someone who just wants to learn enough about SCSI to make use of it. For that, I would suggest http://www.scsifaq.org
I would suggest the reviewer place a book in context before writing said review. It just plain looks better in print.
Hey! Don't count out those old EISA boxes! (Score:2)
I'll bet that there are still quite a few EISA systems alive and kicking out there (maybe hidden behind some drywall :-) ).
I had one on the home network up until just last month. It was, alas, decommissioned after ten years of service and replaced with a PIII/733. Originally purchased with an Adaptec 1740 adapter (later switched to a 2740) and 420MB of disk space (later up to 12GB) to run Coherent and SVR4.2, it ran various flavors of Linux (mostly Slackware and RedHat) beginning in 1996. If it weren't for what appeared to be developing memory problems (hard to find that old stuff), it'd probably still be performing some useful function on the home network. (I haven't tossed it yet, so there's still that possibility.)
Cheers...
Re:Hey! Don't count out those old EISA boxes! (Score:1)
> maybe hidden behind some drywall
Ooooh... The Black Cat [online-literature.com] of computing...
Re:Hey! Don't count out those old EISA boxes! (Score:1)
The VLB system still runs as a netware print server, happily chugging away after ~7 years.
Re:Hey! Don't count out those old EISA boxes! (Score:2)
I use my IBM Model 8595 PS/2 for many things including Firewall, HTTP Proxy, NFS & Samba, DNS, VPN, DHCP, SMTP & IMAP, SSH, etc. It has ~27 GB of SCSI disks inside and outside of the frame. Not bad for a 486-50 with 64MB RAM.
This is a valuable book (Score:1)
I purchased this book before it was published and promptly read it from cover to cover when I received it. Using that knowledge, I was able to help an out-of-state friend fix his system. At the time, he could connect his scanner or his CD-RW drive, but not both at the same time. The problem turned out to be that the scanner had a single-ended, 25-pin Mac-style connector and was messing up the rest of the system. Once we configured his host adapter correctly, and got the scanner connected to the end of the bus with a short cable and appropriate terminators, his problems were fixed.
The path of SCSI standards is convoluted, and this book does an extremely good job of sorting through it all and presenting it in an understandable manner.
Highly recommended.
-- Chad
This could have been a more valuable book (Score:2)
When it talked about operating systems & SCSI programming, it was extremely Wintel-centric.
The point of my criticism is not that Field, et alia, devoted room to getting SCSI to work with Windows 95, NT & 2000, but that they kept in a number of pages from the first edition that talked about SCSI & DOS. (Who is going to lay out several hundred dollars in hardware and then run an antiquated OS with it?) This wouldn't be so irritating if it weren't for how little space they devoted to UNIX-like systems -- less than five pages in total, which amounted to saying ``there are issues, & learn what they are by talking to your OS vendor."
The authors devoted an entire chapter to writing SCSI drivers under Windows using one vendor's SDK, but failed to even mention that one could study how to code for UNIX by looking at *BSD or Linux code -- which is available to all for study.
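For instance (my sketch, not the book's): on Linux, the sg driver exposes the full SCSI command set to user space, and a handful of lines of C are enough to fire an INQUIRY at a device -- exactly the kind of thing the book never shows. This assumes a kernel new enough to have the SG_IO ioctl:

    /* INQUIRY via the Linux sg driver (illustrative sketch). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <scsi/sg.h>

    int main(void)
    {
        unsigned char cdb[6] = { 0x12, 0, 0, 0, 96, 0 };   /* INQUIRY, 96 bytes */
        unsigned char data[96], sense[32];
        sg_io_hdr_t io;
        int fd = open("/dev/sg0", O_RDONLY);     /* first generic SCSI device */
        if (fd < 0) { perror("open"); return 1; }

        memset(&io, 0, sizeof(io));
        io.interface_id    = 'S';                /* required magic value */
        io.cmdp            = cdb;
        io.cmd_len         = sizeof(cdb);
        io.dxfer_direction = SG_DXFER_FROM_DEV;  /* data-in command */
        io.dxferp          = data;
        io.dxfer_len       = sizeof(data);
        io.sbp             = sense;              /* sense bytes on error */
        io.mx_sb_len       = sizeof(sense);
        io.timeout         = 5000;               /* milliseconds */

        if (ioctl(fd, SG_IO, &io) < 0) { perror("SG_IO"); return 1; }
        printf("vendor/product: %.8s %.16s\n", data + 8, data + 16);
        return 0;
    }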
And as pathetic as the UNIX coverage was, Mac SCSI users received only a pair of by-the-way mentions in the text. The hardware discussions focussed on common, Intel-based systems; for instance, there is no mention of the Mac 25-pin SCSI cable. Perhaps a beginning SysAdmin could use Appendix A to troubleshoot her/his SPARC, PowerPC, or Alpha systems, but I would recommend Evi Nemeth et alia's _Unix System Administration Handbook_ as the first reference to turn to. Nemeth's book discusses many of the same hardware issues in less space, & in a far more hardware-agnostic manner.
And the material on the CD, although Linux-oriented, is out of date -- as a simple ``ls -l" will show.
There are strengths in this book, but the weaknesses bothered me far more. I hope in the next edition much of the DOS-related stuff is flushed out, & far more useful UNIX-related information is included. That would make it a definite buy for any computer nerd's library, instead of a strong maybe.
Geoff
scsi chicken entrails (Score:1)
isn't part of firewire/1394 actually based on scsi?
Appalling (Score:2)
SCSI vs IDE.... Just look at the MTBF. (Score:1)
Mean Time Between Failures.
If you go to your favourite disk manufacturer, here's mine: http://www.seagate.com/cda/products/discsales/ind
and compare the MTBF values of IDE and SCSI drives, you'll see a glaring difference.
One comparison that stands out:
Cheetah 73LP (Fibre Channel 160): 1,200,000-hour MTBF
Barracuda ATA III (IDE 40): 500,000-hour MTBF
Reliability and seek times are the main differences, not capacity and burst speeds, which is why SCSI drives are still the only real choice for professional video/audio systems.
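To put those figures in perspective, a rough back-of-the-envelope conversion (assuming around-the-clock operation, 8,760 hours per year):

    8,760 / 1,200,000 = ~0.7% expected failures per drive-year (Cheetah)
    8,760 /   500,000 = ~1.8% expected failures per drive-year (Barracuda ATA)

MTBF is a population statistic, not a lifetime, but the ratio tells the story.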
Got FIbre Channel? (Score:1)
Yes a shameless plug...
Re:Windows XP SCSI support (Score:1)
Re:Windows XP SCSI support (Score:1)
Re:Windows XP SCSI support (Score:1)
Re:Windows XP SCSI support (Score:1)
Re:robbie (Score:1)
Re:Yup (Score:1)
Haven't used the MacOS in the last few years, have you? As I recall, Apple had built-in support for SCSI in 1986, Firewire (an Apple trademark, BTW; it's 1394 in Windows) in 1996, and USB in 1998.
Not that I use it anymore, but Apple was a few years ahead of MS where connectivity and multimedia are concerned.
But then, you are just trolling, anyway...
Re:Yup (Score:1)
they already HAVE hit the ceiling (Score:2)
SCSI devices have gone to the LVD technology because every time a faster SCSI standard emerged, the max cable length was reduced to a fraction of the previous standard's. LVD and HVD were introduced to combat this problem while maintaining speed. I'm not sure about the max cable length of LVD, but HVD is at least 25 meters max length, which makes it more than sufficient for future desktop devices should cable lengths start to shrink again with future speed increases...
Re:they already HAVE hit the ceiling (Score:2)
Assuming you don't use Ultra2 or above. HVD is at most 25m, and doesn't support Ultra160 at all.
Re:they already HAVE hit the ceiling (Score:2)
Generally speaking, the bulk data is transferred in synchronous mode, i.e. with a window like TCP's. I doubt a few nanoseconds of delay would make much difference.
The limit for SE SCSI went down to 1.5m. Add 0.5m of internal wiring in each box and you could effectively add only one external peripheral, a max of 0.5m away!
Re:This post will bring you luck! (Score:1)
Although it does seem to be an interesting way to get a post of yours up a few points using the stupidity of others.