Serial SCSI Standard Coming Soon

rchatterjee writes "SCSI is very close to joining ATA in leaving a parallel interface design behind in favor of a serial one. Serial Attached SCSI, as the standard will be known, is expected to be ratified sometime in the second quarter of this year, according to this article at Computerworld. Hard drive manufacturers Seagate and Maxtor have already said that they will have drives conforming to the new standard shipping by the end of the year. The new standard will shatter the current SCSI throughput limit of 320 megabit/sec with a starting maximum throughput of 3 gigabit/sec. But before this thread turns into a SCSI fanboy vs. ATA fanboy flame war, this other article states that Serial Attached SCSI will be compatible with SATA drives, so you can have the best of both worlds."
  • SASCSI (Score:3, Insightful)

    by epyT-R ( 613989 ) on Sunday March 09, 2003 @06:18PM (#5472998)
    Well, at least we can get rid of those hard-to-route ribbon cables. That alone is worth the switch, IMHO.
    • Re:SASCSI (Score:3, Interesting)

      by chrisseaton ( 573490 )
      I've always wondered: why are they ribbons? Why not simply roll the ribbons up into cables? Can anyone enlighten me?
      • Re:SASCSI (Score:2, Interesting)

        by sirsex ( 550329 )
        I believe it is due to the inductive crosstalk between the channels. With ribbons, only the two adjacent wires are a major concern, while a cable would have many more conductors in close proximity. The interference will add linearly with each noise source.
      • Re:SASCSI (Score:5, Informative)

        by Ryan Amos ( 16972 ) on Sunday March 09, 2003 @06:30PM (#5473058)
        To reduce crosstalk between the wires so that you can run at faster speeds. Indeed, the "rounded" IDE cables often reduce performance by 5% or so. We're getting better at data throughput though, so we can use serial technologies and actually get faster transfer rates. Good riddance to ribbon cables :P
      • Re:SASCSI (Score:5, Informative)

        by shepd ( 155729 ) <> on Sunday March 09, 2003 @06:31PM (#5473067) Homepage Journal
        >Why not simply roll the ribbons up into cables?

        Impedance, crosstalk (mentioned) and price.

        It takes seconds to crimp a ribbon cable. Cheap and easy. You can even do it yourself!

        Taking a bunch of twisted pair wires (which is what would be required to keep the impedance and crosstalk bearable) and soldering them onto connectors individually takes a lot more effort, and therefore costs more.

        Not to mention fabbing individual strands of insulated wire and twisting them together costs more than running 5 wires parallel to each other and simply coating them all at the same time with PVC.
      • I have some rounded cables. But:

        1) The connectors must still be huge
        2) As a consequence, the connector -> cable area is big.
        3) There are so many conductors that the cable is thick and inflexible.

        Basically, I couldn't fit the cables the way I wanted to have the disks, because they were so inflexible they collided with my GF4. So I had to rearrange the disks instead. With ribbon cables it'd be much more of a mess, but it would have worked. Serial ATA is much better designed for this.

      • Re:SASCSI (Score:3, Interesting)

        by Waffle Iron ( 339739 )
        Another major advantage of ribbon cables is that they are dirt cheap. They can be stamped out in one step without handling the individual wires. You can also attach connectors to all ~50 wires just by shoving the sharp teeth through the ribbon in one motion. No soldering or advanced tools required.
    • Re:SASCSI (Score:3, Insightful)

      by iotaborg ( 167569 )
      If you've taken a look inside Apple's cases, you'll notice how much better ribbon cables can be than rounded ones. With the ribbon cables routed along the side of the case and under the motherboard, completely out of your way thanks to their flat profile, the result is much cleaner than what you get with rounded cables (and especially ribbon cables just dangling in mid air). However, I do not think this is easy to do in an ATX format.
  • by Anonymous Coward on Sunday March 09, 2003 @06:19PM (#5473002)
    Serial ATA Network = SATAN
  • bits vs. bytes (Score:5, Informative)

    by David Jao ( 2759 ) <> on Sunday March 09, 2003 @06:20PM (#5473013) Homepage
    Guys (meaning submitters and editors), the current version of SCSI delivers 320 megabytes per second of interface transfer rate, not megabits.

    320 megabytes is about 2.5 gigabits ... which is a lot closer to 3 gigabits than the erroneous 320 megabits figure.
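The conversion above can be sanity-checked with a few lines (a sketch in Python, just to make the units explicit):

```python
# U320 SCSI moves 320 million bytes per second (decimal units).
u320_bytes_per_sec = 320_000_000
u320_bits_per_sec = u320_bytes_per_sec * 8   # 2,560,000,000 bits/sec
print(u320_bits_per_sec / 1e9)               # 2.56 gigabits/sec
```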

    • Re:bits vs. bytes (Score:3, Insightful)

      by g4dget ( 579145 )
      Yeah, but what drive actually delivers 320 Mbytes/second? As long as the connection between the controller and each drive can keep up with the drive, the connection is fast enough.

      Of course, a really fast connection may allow you to daisy chain and still get almost full transfer rates from each drive, but that's not really such a big deal, in particular when the cables are as small as they are for serial connections.

      • Re:bits vs. bytes (Score:4, Insightful)

        by Daniel Phillips ( 238627 ) on Sunday March 09, 2003 @07:08PM (#5473239)
        Yeah, but what drive actually delivers 320 Mbytes/second? As long as the connection between the controller and each drive can keep up with the drive, the connection is fast enough.

        SCSI is a bus. I have a box here with 5x10K drives, at 49 MB/s each, easily able to saturate its Ultra160 bus. These days, that box is nothing special.
        • Yes, I'm aware of that; that was, in fact, the point of my posting: if you attach each of those drives individually to a controller using a slower connection, you don't need an expensive, high-speed bus standard and still get the same aggregate bandwidth. You also reduce the risk of breaking something when you add or remove drives. Sorry if I didn't spell it out clearly enough.
  • SCSI = ... (Score:4, Funny)

    by product byproduct ( 628318 ) on Sunday March 09, 2003 @06:21PM (#5473015)
    If I understand the title correctly, SCSI = Standard Coming Soon Interface?
  • by tjstork ( 137384 ) <> on Sunday March 09, 2003 @06:24PM (#5473025) Homepage Journal
    Will this new standard be able to do things in parallel the way SCSI can? Or will it turn my server into a PC-like box that seemingly pauses every time the swap file gets touched?
  • by taliver ( 174409 ) on Sunday March 09, 2003 @06:25PM (#5473027)
    If the article meant to say 3 GBytes, then how in the world will the PCI bus (at 64 bits and 133 MHz, it's 1 GB/sec transfer) keep up? Or even RAMBUS memory, which has a bandwidth of 4.2 GB/sec. (So that kinda means you couldn't have more than one SCSI system at a time and get full bandwidth from both.) You may have to have memory banks for each SCSI component... ick.

    • PCI Plus proposed by Intel is promising 2GB/sec dedicated channel per device on the PCI Plus bus.... this doesn't fully meet the needs of the drives but is certainly a step in the right direction.
    • by torre ( 620087 ) on Sunday March 09, 2003 @06:54PM (#5473175)
      With PCI-X 1066, 8.6 GB/s bus transfers are possible, so that shouldn't be too much of a problem. Also, InfiniBand aims to solve that problem. One can see that 6 GB/s buses were planned, as even this older Dell whitepaper suggests.
    • The PCI and Rambus buses don't need to keep up. The peak throughput only needs to be serviced within the SCSI chain. Buffering on the SCSI adapter could deliver a relatively high sustained transfer rate from the SCSI chain to the PCI bus, within PCI limitations.
  • U320 SCSI (Score:2, Informative)

    Greetings fellow geeks :-)

    U320/LVD SCSI is capable of 320 MB/sec, not 320 Mbit/sec.

    3gbps ~= 300MB/sec; therefore it would not be quite as fast as U320 SCSI.

    Naturally 320MB/sec is the theoretical max bandwidth for the SCSI bus not the individual drives in the SCSI chain.

    Live long and prosper
    • Sorry, but your math is off. Remember the whole 1024 bytes to a kbyte thing. 3 gigabits a second should come out to about 384 megabytes a second, which is faster than U320's 320 megabytes a second, but not by much. The added speed here is not a big issue, as SCSI drives don't typically max out their available bandwidth for very long.
    • Re:U320 SCSI (Score:2, Interesting)

      Indeed, my math was wrong and for that I apologize!

      However I did say:

      3gbps ~= 300MB/sec which was meant to indicate it was "[very] approximately" 300MB/sec

      300 MiB/sec = 300*1024^2 = 314,572,800 bytes per second, while 3gbps = 3,000,000,000 bits per second (3 billion bits per second).

      3,000,000,000 / 8 = 375,000,000 bytes/sec ≈ 357 MiB/sec (1KiB = 1024 bytes, 1MiB = 1024^2 bytes, 1GiB = 1024^3 bytes)
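The conversion in this subthread can be reproduced mechanically (a quick Python check, using the same decimal-bits / binary-bytes conventions):

```python
bits_per_sec = 3_000_000_000            # 3 Gbit/sec, decimal prefixes
bytes_per_sec = bits_per_sec // 8       # 375,000,000 bytes/sec
mib_per_sec = bytes_per_sec / 1024**2   # binary megabytes (MiB)
print(round(mib_per_sec, 1))            # 357.6 MiB/sec
```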

      In the real world we also run into: encoding overhead, protocol overhead, errors, bus resets, cache misses, interference and many other factors which impact actual throughput.

      FYI: Studies I have observed myself during a research project indicated that the maximum total throughput under GigE is approx. 80 MiB/sec under ideal conditions, even though the raw line rate works out to roughly 119 MiB/sec (1,000,000,000 / 8 / 1024^2).
      Of course it all varies depending on the network adapter used, packet size, processor "speed", RAM, Operating System [!!!], 64bit x 66MHz PCI vs. 64bit x 33MHz PCI vs. 32bit x 33MHz PCI, copper vs. MMF or SMF, HD vs FD, and about a bazillion other factors.

      Believe it or not, at an undisclosed, fully accredited, state-owned university somewhere in the US, they taught us in a senior-level networking class, of all places, that due to those factors it is wiser to divide by 10 when converting bits to bytes.

      Go figure! I am NOT making this up!

      Peace and Long Life
  • by thadeusPawlickiROX ( 656505 ) on Sunday March 09, 2003 @06:26PM (#5473035)
    Sure, this definitely looks like it could be a great setup: fast, and compatible with multiple systems. But how much will this technology cost? Standard, run-of-the-mill IDE hard drives are about a dollar per gig. Regular SCSI is a few times higher, especially as drives grow in size. This will be a great advantage if the price lands in the middle of that range, but I doubt it will. Now, this won't matter to those with plenty of money to burn on their servers, but would that added price be worth the new types of hard drives? I still don't even see a huge advantage to going Serial ATA right now, so this seemingly good idea could just be another good idea that won't pan out for most users.
    • Look, the whole point of SCSI anymore is just to differentiate between "industrial-strength" drives with high markup and long warranties, and consumer-grade drives. Without some clear boundary, the "server hard drive" market would die. In other words, bringing SCSI to the masses would defeat the whole point.
  • Horray! (Score:5, Interesting)

    by norton_I ( 64015 ) <> on Sunday March 09, 2003 @06:26PM (#5473039)
    Hopefully this will eventually lead to the elimination of the distinction between ATA and SCSI interfaces. Already the feature distinctions between the two are blurring; hopefully soon the interface will be the same and people will just decide whether they need fast or cheap drives. That would improve the quality of desktop-class drives and lower the price of workstation/server drives, as well as make system management a bit easier.
    • Seeing as I can put a serial-ATA or a serial-SCSI drive on a serial SCSI bus...

      What is the exact technical difference between a serial ATA and a serial SCSI drive? I read somewhere that the only difference between IDE drives and SCSI drives are the interfaces and electronics, while the actual storage mechanisms are identical. So if both can now work on a SCSI bus, what the heck is the difference between ATA and SCSI???

  • For more info (Score:3, Informative)

    by Anonymous Coward on Sunday March 09, 2003 @06:29PM (#5473048)
    For more info on Serial attached SCSI check out this page: d.html []
  • Is this a trend? (Score:5, Interesting)

    by Anonvmous Coward ( 589068 ) on Sunday March 09, 2003 @06:33PM (#5473075)
    I've only paid attention to HD controllers for the last couple of years or so. But I'm starting to wonder if we're seeing a pattern here. "We'll make everything more efficient by making it serial, and then years later when that's not enough we'll make it parallel to send even MORE data through!"

    Anybody think we'll have a massive parallel trend in a few years?
    • Re:Is this a trend? (Score:5, Informative)

      by TheShadow ( 76709 ) on Sunday March 09, 2003 @06:40PM (#5473109)
      I don't think so. The reason there is a tremendous push towards serial right now is because parallel interfaces create more interference at higher frequencies. The theory with serial is that you can push the frequency as high as you want without the interference.
      • "I don't think so. The reason there is a tremendous push towards serial right now is because parallel interfaces create more interference at higher frequencies."

        That problem is on its way to being rectified. My company has a large LCD monitor that needs lots of data to drive it. We've got a $700 optical cable that extends its range to some ridiculous length like 30 meters. They've made a small adapter that converts the electrical impulses to light, and back to electricity again on the other end, so that the monitor itself doesn't have to be modified to use the cable.

        Light doesn't cause this type of interference, so they'd be able to (in theory) apply this to other technologies as well. I don't think it'll be long before we see hard drives using something like it.
        • They already have this: Fibre Channel, which works over either fibre or copper.

          The thing is, for short distances there is almost no reason to use fibre over copper. Ever notice that not many workstations have fibre gigabit ethernet? It's great for connecting two routers that happen to be a mile apart, but over short distances it makes no sense.

        • Light doesn't cause this type of interference, so they'd be able to (in theory) apply this to other technologies as well. I don't think it'll be long before we see hard drives using something like it.

          I don't. We're nowhere near saturating the potential bandwidth of a fiber with consumer electronics. So there's not a great benefit in putting a bunch of fibers next to each other to aggregate their bandwidth--especially since (regardless of the optical path) the electrical signals going into the electrical-optical converters would be subject to the same high frequency timing issues that're causing the push away from parallel busses in the first place.
      • Right, and the setup/hold deltas become irrelevant. As the switching speeds go up, the time wasted waiting for all the pin signals to be valid becomes a hard limit for the bus; after all, parallel is just a workaround for slow inverters. As long as the cables aren't so long that signal distortion arises, serial can really push the limit, and by the time that happens I expect Si optoelectronic frontends will be mature enough to substitute for electrical transmission lines in consumer electronics ;-)
    • The very reason interfaces are changing to serial is that it's very problematic to keep signals synchronized in parallel. So it's either "fast" serial or "slow" parallel, and it looks like serial is winning. While fast parallel would obviously be the best, don't hold your breath for it.

      • What I don't get is there already is a serial SCSI, it is called Fibre Channel. Right now it is clocked at 4 Gbits/sec. and there is no reason it can't go faster.

        But I do agree about the problems with parallel. Think about the interfaces called "parallel" and "serial", the old ports on the back of the computer. Sure, the LPT ports were faster, but they were very limited in the distance they could run because of interference.

        Also, to get IDE over 33 MB/sec they had to add an extra ground wire between each data wire to keep the noise down. SCSI always had extra wires, but they had to go to twisted pair (aka LVD) within the cables to get any distance.

        But FC is here today; it supports high speeds, huge cable lengths on optical cable, and respectable lengths on copper.
        • What I don't get is there already is a serial SCSI, it is called Fibre Channel. Right now it is clocked at 4 Gbits/sec. and there is no reason it can't go faster.

          Fiber optic hardware is more expensive. What I'd like to know is why *Firewire* doesn't serve the purpose.
          • If all of the SCSI market were moved to FC, the cost would drop considerably. But by creating a competing standard they'll never see the widespread adoption of Fibre Channel that would be needed to get the cost down.

            I haven't used multiple Firewire devices on the same bus to find out how they perform. That is my main reason for using SCSI and FC now. I just hope any new standard that comes out doesn't suffer like ATA does with 2 devices on the same channel.
        • Fibre channel is godawful expensive. Its only practical use is for multi-attached devices and where long cable runs are necessary.

          The same economics are behind copper Gig ether dominating fibre for everything but long haul links.

      • The very reason interfaces are changing to serial is that it's very problematic to keep signals synchronized in parallel. So it's either "fast" serial or "slow" parallel, and it looks like serial is winning. While fast parallel would obviously be the best, don't hold your breath for it.

        Hypertransport is a good example of a serial/parallel interface. To get more bandwidth, you add more links in parallel, each of which is a serial link capable of carrying the whole traffic on its own, just slower.
  • It's too bad... (Score:3, Insightful)

    by Quaoar ( 614366 ) on Sunday March 09, 2003 @06:33PM (#5473076)
    ...that the speed limitation on data access is mostly the fault of the DRIVE, not the interface. Show me a drive that can achieve 3 gigabytes/sec and I'll be impressed.
    • Re:It's too bad... (Score:2, Insightful)

      by Anonymous Coward
      Informative? How about ignorant?

      That bandwidth can be shared between many drives. The drive itself has cache, so it isn't always returning data from the platters. And it's gigabits, not gigabytes. Get a freakin clue.

    • Show me a drive that can achieve 3 gigabytes/sec and I'll be impressed.

      The interface runs at 3Gbps, not 3GBps. A standard SCSI interface can support at least 7 drives. That's only ~45MBps per drive on a U320 channel, not counting protocol overhead. Quite a few SCSI drives can handle that speed.
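The per-drive figure is just the shared bus divided evenly (a rough sketch; real-world arbitration and protocol overhead make it lower):

```python
bus_mb_per_sec = 320   # U320 shared-bus bandwidth in MB/sec
drives = 7             # the "at least 7 drives" case from the comment
per_drive = bus_mb_per_sec / drives
print(round(per_drive, 1))   # 45.7 MB/sec per drive when all stream at once
```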

    • by Quaoar (614366)

      According to this, you might be a planet soon.

    • Why? What conceivable use is there for a single drive that can kick out 300MB/sec?

      Haven't we learned this already a dozen times now? Bandwidth is EASY. If you want a high-bandwidth link between NY and LA, charter a few trucks and fill them with DDS4 tapes. If you want a high-bandwidth disk subsystem, fill it with a few dozen drives. If you want more memory bandwidth, add another channel or three.

      Latency, not bandwidth, is the problem in nearly all applications. You want a drive that can sustain 3GB/sec? Well, I'll give you a hypothetical drive that can transfer data instantaneously. With a 5ms access time, it can still only transfer 100KB/sec if it reads 512-byte sectors randomly. A drive with half the latency but only a 10MB/sec transfer rate would come within 95% of doubling the first drive's performance under those conditions.
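The numbers in that thought experiment check out. A short Python sketch of the same arithmetic, where each random read costs access latency plus transfer time:

```python
# Random 512-byte reads: per-read time = access latency + transfer time.
SECTOR = 512  # bytes

def random_read_bytes_per_sec(access_ms, transfer_mb_per_sec):
    seconds_per_read = access_ms / 1000 + SECTOR / (transfer_mb_per_sec * 1e6)
    return SECTOR / seconds_per_read

instant = random_read_bytes_per_sec(5.0, 1e12)  # effectively instantaneous transfer
slower = random_read_bytes_per_sec(2.5, 10.0)   # half the latency, 10 MB/sec link
print(round(instant))               # 102400 B/sec, i.e. about 100 KB/sec
print(round(slower / instant, 2))   # 1.96: nearly double, despite the slow link
```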

      Until you approach petabits/second, bandwidth is not a technical problem, it is a financial problem. I have to go, so thus endeth the lesson.
  • SCSI is very close to joining ATA in leaving a parallel interface design behind in favor of serial one.

    If I'm not mistaken, doesn't SCSI stand for "small computer serial interface"?

  • Ok, this is not a fanboy post on either side. But I'm wondering what the benefits of SCSI are in this day and age. Is it just the ability to have more than four drives? If that's it, are IDE/SATA drives somehow hard-limited to just four connections, or is that a motherboard limitation that hardware vendors stubbornly refuse to leave behind?

    I keep hearing that SCSI drives are better for hardcore media editing and for servers, but I'm curious why. Is there a compelling advantage for desktop users (or even servers)?

    I have to admit, I've got a box with two IDE drives and two CD/DVD drives, and I'm irritated that I can't keep my IDE ZIP drive installed or add another drive (transferring data is a pain in the butt...). It would be awfully nice just to throw another drive in the chassis, and add the free space to my existing partitions.

    I dunno, I'll be in the market for a new desktop in the next year or so, so I'm trying to figure out now what the best hardware arrangement is.

    • Uhmmm ... you CAN have more than 4 IDE devices ... what you need is more IDE channels.

      Each IDE channel can have only 2 devices, a master and a slave.

      The more IDE channels you have, the more devices you can have. My current motherboard has 4 channels (2 for "standard" IDE connections, for 4 devices, and 2 for "RAID" IDE connections, for another 4 devices).

      In fact, there are a couple of mobo manufacturers that offer 6 channels (2 + 4 RAID channels; for maximum throughput you would have only 1 device per RAID channel). However, you don't need to configure a RAID array, and could have 12 IDE devices.

      Currently, I have:

      • 60G - master - channel 1
      • 60G - slave - channel 1
      • CDRW - master - channel 2
      • DVD - slave - channel 2
      • 40G - master - channel 3

      BTW, it's really nice not to partition anything, and have a whole drive dedicated to an OS.
      • I run a similar 2+4 channel setup: my optical drives are on the 2-channel controller and my HDDs on the RAID controller, all master devices.
      • Why limit yourself to controllers on the motherboard? With SCSI the use of an add-on controller is almost assumed; no reason not to do the same with IDE. 2 channels = $36, and that'll support 4 devices if you're willing to do the master/slave thing.
    • Re:Benefits of SCSI? (Score:2, Informative)

      by khuber ( 5664 )
      Better drives that are designed to run 24/7 with load. The drives usually have lower seek times/lower rotational latency. Some of this comes at the cost of heat and noise which Joe Consumer might not tolerate. Seek times are incredibly underrated, btw. The SCSI interface itself really doesn't have much advantage over ATA, but the industry builds its best drives for SCSI/FCAL.
    • There is no "hard-limited" maximum of 4 IDE drives per motherboard. Most boards have two IDE channels built in, and IDE will only support 2 devices per channel, so you get four devices from the board. However, you can buy many, many boards that have more than that, especially lately (Abit's AT7/IT7 models come to mind).

      Most board manufacturers include only two IDE channels because that's how many are generally built into north-bridge chipsets. The Abit boards mentioned above use an additional HighPoint HPT374 chip to provide FOUR extra IDE channels, for a total of TWELVE IDE devices altogether.

      If you want more IDE devices than your board supports natively, you can just buy PCI cards that have more IDE channels. Promise, SIIG, and Highpoint all make really cheap cards that have an extra two channels, or four more devices.

      SCSI limitations are similar. You only get 15 devices PER BUS, but you can add as many devices to your system as you have PCI slots and IRQs for. You can buy an Adaptec 29160 card (dual buses) and plug 30 hard drives into it. Buy four of them, and you can have more than 100 drives.
    • The SCSI bus supports more drives per channel (limited by bus type, up to 126 right now using Fibre Channel; ATA is stuck at 2). SCSI drives support fancy things such as command queuing, and the controllers are optimized to handle things like high numbers of small transactions with greater efficiency. A nice explanation to set you in the right direction is found here.

      I run both ATA and SCSI drives. My take is that if you're using small numbers of drives or just doing straight, simple high-bandwidth sequential transfers, ATA is fine. SCSI will show its strength when you have differing loads that are more realistic. Personally, I'm much happier with SCSI for just about anything. The fact is that ATA proponents can only compare against current SCSI technology by trying to be "good enough" for the job. They're not. It's all an issue of price vs. performance - but take out the issue of price, and SCSI wins.

    • Re:Benefits of SCSI? (Score:3, Informative)

      by drsmithy ( 35869 )
      I keep hearing that SCSI drives are better for hardcore media editing and for servers, but I'm curious why. Is there a compelling advantage for desktop users (or even servers)?

      For desktops, not really. For servers, yes. SCSI, due to (generally) lower latencies, higher rotational speeds and a smarter interface, destroys IDE in high-load multi-user style scenarios (lots of random reads & writes all over the disk). Very few (if any) desktop users generate the sort of usage patterns that allow SCSI to shine, so on the desktop it has little advantage (particularly taking into account the cost).

      Most people who say SCSI gives them a good boost on their desktop machines are usually comparing quite new SCSI drives to quite old IDE ones, are dealing with poorly-configured IDE setups (more than one device on a channel) or are using an older, slower machine (probably with a crappy IDE controller). For the vast, vast majority of users (and that includes high-end users) SCSI offers little benefit.

    • Here's a shot (Score:4, Insightful)

      by 0x0d0a ( 568518 ) on Sunday March 09, 2003 @07:55PM (#5473477) Journal
      Okay. SCSI lets you do command queuing and reordering. Serial ATA will have this too. Theoretically, if you have a bunch of things doing sequential accesses at once, this can help. SCSI can have outstanding requests to multiple devices on a bus at once, and ATA cannot. This is a pretty big deal for environments where you can have heavy disk load, since if you have two drives on an ATA bus, one can get starved if the other is doing lots of work. I'm not sure if Serial ATA addresses this. For Average Joe's desktop, it's not a big deal because he's usually only doing one thing at once -- loading a game up, or copying a file.

      SCSI is generally used to allow price discrimination by vendors. SCSI drives have a reputation for being more reliable, and much more expensive.

      SCSI supports many more devices on a bus. This is a big deal to me -- it's a royal pain to buy another controller to add another device or two.

      It's unlikely that the two will be merged any time soon, because there's tremendous financial incentive to prevent "enterprise-class" drives from becoming commoditized. SCSI is one of the industry's last useful tools to avoid this.

      If you're getting a desktop, use ATA, almost certainly. If you're getting a server with a lot of drives, it may be worth your while to get SCSI, for the abovementioned benefits.

      If I had some extra money and just wanted some extra reliability, I'd probably have a mirrored RAID pair of IDE drives, if I were building a desktop without a ton of drives.
    • Re:Benefits of SCSI? (Score:3, Interesting)

      by SETIGuy ( 33768 )
      I'm not sure I get what the problem is. No room left in your case? No PCI slots for additional controllers?

      My current desktop setup is...

      • IDE Channel 1A: 100 GB master
      • IDE Channel 2A: 100 GB master
      • IDE Channel 3A: IDE ZIP 100 (Bay 6)
      • IDE Channel 4A: DVD-RW (Bay 1)
      • IDE Channel 4B: DVD-ROM (Bay 2)
      • SCSI 0 id 4: Jaz 1GB (external)
      • SCSI 0 id 5: CD-RW (Bay 3)
      • FD0: 3.5" (Bay 5)
      • One 5" bay free. (Bay 4)

      The additional cost to get the extra two IDE channels was $25 for a dual channel IDE RAID card. For a home machine, IDE is perfectly adequate for the main drives. I keep SCSI around in hopes of acquiring a reasonably priced backup solution at some point. (My current backup is to copy modified files to another machine in the garage with an eventual dump to DVD). If I need more storage in the near term, I'd probably pick up a firewire drive.

      "Next year or so" the arrangement I'd choose would likely be entirely different. We'll see where serial ATA and SASCSI are at that point.

  • Firewire? (Score:3, Interesting)

    by TheRaven64 ( 641858 ) on Sunday March 09, 2003 @06:54PM (#5473174) Journal
    SCSI is expensive, FireWire is proven technology. Wouldn't it be more sensible to use FireWire? []
    • Re:Firewire? (Score:5, Informative)

      by torre ( 620087 ) on Sunday March 09, 2003 @07:02PM (#5473212)
      SCSI is expensive, FireWire is proven technology. Wouldn't it be more sensible to use FireWire? []

      Firewire is a low-end consumer product... even with its successor (which is taking longer than expected to ship) running at 800Mbit/s (100 megabytes/second), it falls short of current SCSI technology running at 320MB/s. As such, no one would seriously consider Firewire for a large-scale server handling many gigabytes/terabytes of data. Firewire is just too slow a bus for big needs, but it fills its convenience niche in the consumer market. Everything has its own niche... that's why heavily marked-up servers/mainframes/supercomputers still exist alongside cheaper home machines, which just can't fill the requirements.

      • Creating another serial bus is pretty dumb though. Surely we should be moving towards a world where you can plug your hard disk into any computer, your scanner into your mobile phone, your printer into your digital camera?

        Why not create a serial bus with many different speeds depending on the application required, and make a wireless version too? Why do we need Bluetooth, 802-whatever, USB, FireWire, Serial ATA and now serial SCSI? It's just in the interests of hardware vendors to make all these different technologies.
    • SCSI is expensive, FireWire is proven technology. Wouldn't it be more sensible to use FireWire? []

      I quote from the article you posted: The current generation supports transfer speeds of 800Mb/s (100MB/s, the same as most ATA controllers).

      This discussion is about Serial SCSI which will have a peak throughput of 384MB/s. Clearly, firewire is insufficient.
    • What I'd really like to know is why not use 3GIO / PCI Express, the upcoming variable-width PCI bus that can shrink to a 250 million byte per second point-to-point "one lane" configuration. It sounds like it could replace USB, FireWire, Ethernet, Serial ATA and serial SCSI. The drive would be "directly" on the PCI bus. I would think that this approach would involve the least amount of silicon on a computer that already had PCI Express.

      n.b.: Putting the controller logic back in the drive unit harkens back to the original Integrated Drive Electronics approach.

  • If SCSI is pronounced "scuzzy."

    And the full acronym for "Serial attached SCSI" is SASCSI..

    How exactly would we pronounce that? Sacksie? Sasky? Oh God, I bet it will be a silent C. .. "Sassy."

    Yay, my computer iss really sspeedy now that I've upgraded to the new SSSASSSSSY DRIVE !@#!@^#^$^$#! []

    Jason Fisher. :P []
  • > But before this thread turns into a SCSI
    > fanboy vs. ATA fanboy flame war...

    FWIW, the alternative name for fanboy is "fanboi". An even more disrespectful version of the term. (As if fanboy wasn't disrespectful enough for some people.)
  • A couple of notes (Score:4, Informative)

    by Jordy ( 440 ) <jordan@snocap . c om> on Sunday March 09, 2003 @07:40PM (#5473391) Homepage
    There are a couple of notes about Serial Attached SCSI (SAS) that I think are important.

    First, SAS uses a point-to-point topology similar to Serial ATA instead of a shared bus like parallel SCSI. This means each drive has access to the full link bandwidth rather than sharing one bus (the bottleneck then becomes the controller card itself).

    Second, according to the SAS working group, SAS comes in three speeds; 150, 300 and 600 MB/s. I'm not sure where that 3 Gbps figure came from.

    Third, unlike Serial-ATA or parallel SCSI, SAS is full duplex like fibre channel. This should have some interesting effects on latency.

    Fourth, SAS uses the same physical connector as Serial-ATA and in fact can use Serial-ATA drives in legacy mode.
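    One plausible reconciliation of the 3 Gbps figure with the working group's MB/s numbers: serial links of this family typically use 8b/10b encoding, where every 8-bit data byte is sent as 10 bits on the wire, so a 3 Gbit/s line rate carries about 300 MB/s of payload. A quick sketch, assuming 8b/10b encoding (not stated in the article itself):

    ```python
    # Convert a raw serial line rate to usable payload throughput,
    # assuming 8b/10b encoding (10 line bits per 8-bit data byte).
    def payload_mb_per_s(line_rate_gbps):
        bits_per_byte_on_wire = 10  # 8b/10b overhead
        return line_rate_gbps * 1000 / bits_per_byte_on_wire

    print(payload_mb_per_s(3.0))  # 3 Gbit/s line rate -> 300.0 MB/s
    print(payload_mb_per_s(1.5))  # 1.5 Gbit/s line rate -> 150.0 MB/s
    ```

    Under that assumption, "3 Gbps" and "300 MB/s" describe the same link, one before and one after encoding overhead.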
  • by Anonymous Coward on Sunday March 09, 2003 @07:43PM (#5473415)
    Serial Storage Architecture, that is. It's a 40 MB/s serial hardware layer that runs the SCSI protocol. It's configured in a loop, so there's automatic redundancy in case one link gets disconnected. And a single segment can be up to 20 meters long. Anyone have a Shark (ESS)? It's all SSA inside.

  • Ummmm.... (Score:3, Informative)

    by psyconaut ( 228947 ) on Sunday March 09, 2003 @08:30PM (#5473608)
    SCSI is already at "SCSI320"....which is 320Mbyte/sec NOT 320Mbits/sec!!!!

    That's already ~2.5Gbits/sec.

    And isn't there a SCSI640 working group, too?
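    The arithmetic behind the parent's numbers, as a sketch (assuming the usual Ultra320 parameters: a 16-bit-wide bus clocked at 80 MHz with double-transition signaling):

    ```python
    # Where Ultra320's 320 Mbyte/s comes from, and its equivalent in bits.
    bus_width_bytes = 16 // 8    # 16-bit bus = 2 bytes per transfer
    clock_mhz = 80
    transfers_per_cycle = 2      # double-transition (DDR-style) clocking

    mb_per_s = bus_width_bytes * clock_mhz * transfers_per_cycle
    print(mb_per_s)              # 320 (MB/s)
    print(mb_per_s * 8 / 1000)   # ~2.56 (Gbit/s)
    ```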

  • I thought IEEE-1394, a.k.a. FireWire, was the serial version of SCSI-3. No?
  • by berwyn ( 409396 ) on Sunday March 09, 2003 @08:38PM (#5473633)
    There is a good article here: 03-03-03.asp?article_id=211

    The article states that SAS drives won't work on a SATA channel, but a SATA drive will work on a SAS channel.

    I wonder if mobo makers like ASUS, ABIT, MSI and the likes will choose to put SAS chips on the mobo instead of SATA, as a performance feature?

    Let's hope so; it would sure open up a lot of options for upgrading a PC over time.
  • by jkorty ( 86242 ) on Monday March 10, 2003 @12:40AM (#5474642) Homepage
    Didn't anybody read the PDF whitepaper? The only things common between serial SCSI and SATA are the connector and the power and ground pins on the connector. The two protocols use entirely different signal waveforms and higher-level protocols on the signaling pins. The article specifically states that plugging the wrong device into the connector results in a nonfunctional unit.
  • by AbRASiON ( 589899 ) on Monday March 10, 2003 @02:02AM (#5474863) Journal
    Contrary to popular belief, the SATA cables are approx 1mm thick, 6-7.5mm wide and quite "awkward" to work with :(

    I for one will be doing my best to hunt down a supplier that makes precise lengths so I can have mine cut to size, as they aren't as easy to route as a ribbon cable (seriously!)

    Plus, if you have six devices, that's SIX cables in the box instead of three... one of the small shortcomings of SATA :(

    (when I first heard about it, I was under the impression it daisy-chained with an "in" and an "out" port - boy did I think that was FANTASTIC... but I was sorely disappointed when I discovered I was incorrect) :(

  • Is Serial faster? (Score:3, Interesting)

    by samdu ( 114873 ) <samdu@ron i n> on Monday March 10, 2003 @02:28AM (#5474944) Homepage
    I remember a while back that there were some parallel modems (I think I actually have one in my closet). The spin was that parallel modems had higher throughput. In addition, Maximum PC just did a benchmark test between Parallel ATA and Serial ATA, and the parallel drive/interface beat the serial one in all but one test. Is serial actually faster, and why?

God made the integers; all else is the work of Man. -- Kronecker