Hardware

Serial SCSI Standard Coming Soon 328

rchatterjee writes "SCSI is very close to joining ATA in leaving a parallel interface design behind in favor of a serial one. Serial Attached SCSI, as the standard will be known, is expected to be ratified sometime in the second quarter of this year, according to this article at Computerworld. Hard drive manufacturers Seagate and Maxtor have already said that they will have drives conforming to the new standard shipping by the end of the year. The new standard will move past the current Ultra320 SCSI shared-bus limit of 320 Mbyte/sec with a starting signaling rate of 3 gigabit/sec (roughly 300 Mbyte/sec) per device link. But before this thread turns into a SCSI fanboy vs. ATA fanboy flame war, this other article states that Serial Attached SCSI will be compatible with SATA drives, so you can have the best of both worlds."

  • Re:SASCSI (Score:3, Interesting)

    by chrisseaton ( 573490 ) on Sunday March 09, 2003 @06:19PM (#5473007) Homepage
    I've always wondered: why are they ribbons? Why not simply roll the ribbons up into round cables? Can anyone enlighten me?
  • by tjstork ( 137384 ) <todd DOT bandrowsky AT gmail DOT com> on Sunday March 09, 2003 @06:24PM (#5473025) Homepage Journal
    Will this new standard be able to do things in parallel the way SCSI can? Or will I turn my server into a PC-like box that seemingly pauses every time the swap file gets touched?
  • Horray! (Score:5, Interesting)

    by norton_I ( 64015 ) <hobbes@utrek.dhs.org> on Sunday March 09, 2003 @06:26PM (#5473039)
    Hopefully this will eventually lead to the elimination of the distinction between ATA and SCSI interfaces. Already the feature distinctions between the two are blurring; hopefully the interface will soon be the same and people will just decide whether they need fast or cheap drives. That would improve the quality of desktop-class drives and lower the price on workstation/server drives, as well as make system management a bit easier.
  • Re:SASCSI (Score:2, Interesting)

    by sirsex ( 550329 ) on Sunday March 09, 2003 @06:29PM (#5473047)
    I believe it is due to the inductive crosstalk between the channels. With ribbons, only the two adjacent wires are a major concern, while a round cable would have many more conductors in close proximity. The interference adds linearly with each noise source.
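
    A toy back-of-the-envelope model of that point (the coupling value is made up purely for illustration, not measured from any cable):

    # Toy model: if each adjacent conductor couples roughly the same amount of
    # noise onto a wire, total crosstalk just scales with neighbour count.
    coupling_per_neighbour = 0.05   # arbitrary relative units (illustrative)

    ribbon_neighbours = 2    # flat ribbon: one neighbour on each side
    bundle_neighbours = 6    # round bundle: wires packed on all sides

    print("ribbon:", ribbon_neighbours * coupling_per_neighbour)
    print("bundle:", bundle_neighbours * coupling_per_neighbour)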
  • Is this a trend? (Score:5, Interesting)

    by Anonvmous Coward ( 589068 ) on Sunday March 09, 2003 @06:33PM (#5473075)
    I've only paid attention to HD controllers for the last couple of years or so. But I'm starting to wonder if we're seeing a pattern here: "We'll make everything more efficient by making it serial, and then years later when that's not enough we'll make it parallel to send even MORE data through!"

    Anybody think we'll have a massive parallel trend in a few years?
  • Re:SASCSI (Score:3, Interesting)

    by Waffle Iron ( 339739 ) on Sunday March 09, 2003 @06:45PM (#5473130)
    Another major advantage of ribbon cables is that they are dirt cheap. They can be stamped out in one step without handling the individual wires. You can also attach connectors to all ~50 wires just by shoving the sharp teeth through the ribbon in one motion. No soldering or advanced tools required.
  • Firewire? (Score:3, Interesting)

    by TheRaven64 ( 641858 ) on Sunday March 09, 2003 @06:54PM (#5473174) Journal
    SCSI is expensive, and FireWire is proven technology. Wouldn't it be more sensible to use FireWire? [sucs.org]
  • Re:Horray! (Score:1, Interesting)

    by g4dget ( 579145 ) on Sunday March 09, 2003 @06:59PM (#5473202)
    Why would adopting a serial standard lead to "the elimination of the distinction"? When both were parallel, they were different.

    The distinction is largely one of software and controller standards. SerialATA looks like an IDE controller, and SerialSCSI looks like a SCSI controller. The fact that both use a handful of wires in a thin cable to attach them doesn't change that.
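
    To make "looks like an IDE controller" vs. "looks like a SCSI controller" concrete at the software level, here's a rough sketch (field layouts are simplified rather than byte-accurate, and the helper names are mine):

    # An ATA device is driven with a small task-file style command (opcode,
    # LBA, sector count); a SCSI device gets a Command Descriptor Block (CDB).
    def ata_read(lba, count):
        # ATA "READ DMA EXT"-style command (opcode 0x25), shown as a plain dict
        return {"opcode": 0x25, "lba": lba, "count": count}

    def scsi_read10(lba, count):
        # SCSI READ(10) CDB: opcode 0x28, big-endian LBA and transfer length
        cdb = bytearray(10)
        cdb[0] = 0x28
        cdb[2:6] = lba.to_bytes(4, "big")
        cdb[7:9] = count.to_bytes(2, "big")
        return bytes(cdb)

    print(ata_read(2048, 16))
    print(scsi_read10(2048, 16).hex())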

  • Re:Benefits of SCSI? (Score:3, Interesting)

    by ProfMoriarty ( 518631 ) on Sunday March 09, 2003 @07:05PM (#5473228) Journal

    Uhmmm ... you CAN have more than 4 IDE devices ... what you need is more IDE channels.

    Each IDE channel can have only 2 devices, a master and a slave.

    The more IDE channels you have, the more devices you can have. My motherboard currently has 4 channels: 2 for "standard" IDE connections (4 devices) and 2 for "RAID" IDE connections (another 4 devices).

    In fact, there are a couple of motherboard manufacturers that offer 6 channels (2 standard + 4 RAID; for maximum throughput you would put only 1 device per RAID channel). However, you don't have to configure a RAID array, so you could have 12 IDE devices.
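
    In case the arithmetic isn't obvious, a throwaway sketch (the channel counts are just the examples above, nothing mandated by the ATA spec):

    # Each parallel IDE channel supports at most 2 devices (master + slave),
    # so total capacity is simply channels * 2.
    def max_ide_devices(channels):
        return channels * 2

    print(max_ide_devices(4))   # 2 standard + 2 "RAID" channels -> 8 devices
    print(max_ide_devices(6))   # 2 standard + 4 "RAID" channels -> 12 devices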

    Currently, I have:

    • 60G - master - channel 1
    • 60G - slave - channel 1
    • CDRW - master - channel 2
    • DVD - slave - channel 2
    • 40G - master - channel 3

    BTW, it's really nice not to have to partition anything and to have a whole drive dedicated to an OS.
  • Re:U320 SCSI (Score:2, Interesting)

    by Unix_Geek_65535 ( 625946 ) on Sunday March 09, 2003 @07:46PM (#5473432)
    Indeed, my math was wrong and for that I apologize!

    However I did say:

    3 Gbps ~= 300 MB/sec, which was meant to indicate it was "[very] approximately" 300 MB/sec.

    300 MiB/sec = 300*1024^2 = 314,572,800 bytes per second, while 3 Gbps = 3,000,000,000 bits per second (3 billion bits per second).

    3,000,000,000/8 = 375,000,000 bytes/sec ≈ 357 MiB/sec (1 KiB = 1024 bytes, 1 MiB = 1024^2 bytes, 1 GiB = 1024^3 bytes)
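
    If anyone wants to check the unit conversion themselves, a quick sketch (pure arithmetic, no protocol or encoding overhead included):

    # Convert a raw line rate in bits/sec to decimal MB/sec and binary MiB/sec.
    def bits_to_bytes(bits_per_sec):
        bytes_per_sec = bits_per_sec / 8
        return bytes_per_sec / 1_000_000, bytes_per_sec / 2**20

    mb, mib = bits_to_bytes(3_000_000_000)          # 3 Gbps
    print("%.0f MB/s, %.1f MiB/s" % (mb, mib))      # 375 MB/s, 357.6 MiB/s

    # The divide-by-10 rule of thumb mentioned below roughly bakes overhead in:
    print(3_000_000_000 / 10 / 1_000_000, "MB/s")   # 300.0 MB/s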

    In the real world we also run into: encoding overhead, protocol overhead, errors, bus resets, cache misses, interference and many other factors which impact actual throughput.

    FYI: Tests I ran myself during a research project indicated that the maximum total throughput over GigE is approx. 80 MiB/sec under ideal conditions, even though 1.0625*10^9 bits/sec ÷ 8 works out to roughly 126.6 MiB/sec on paper.
    Of course it all varies depending on the network adapter used, packet size, processor "speed", RAM, operating system [!!!], 64-bit x 66 MHz PCI vs. 64-bit x 33 MHz PCI vs. 32-bit x 33 MHz PCI, copper vs. MMF or SMF fiber, half vs. full duplex, and about a bazillion other factors.

    Believe it or not, at an undisclosed, fully accredited, state-owned university somewhere in the US, they taught us (in a senior-level networking class, of all places) that due to those factors it is wiser to divide by 10 when converting bits to bytes.

    Go figure! I am NOT making this up!

    Peace and Long Life
  • by Adam J. Richter ( 17693 ) on Sunday March 09, 2003 @08:25PM (#5473592)
    What I'd really like to know is why not use 3GIO / PCI Express [pcisig.com], the upcoming variable-width PCI bus that can shrink down to a 250 million byte per second point-to-point "one lane" configuration. That sounds like it could replace USB, FireWire, Ethernet, Serial ATA and Serial SCSI. The drive would be "directly" on the PCI bus. I would think that this approach would involve the least amount of silicon on a computer that already has PCI Express.

    n.b.: Putting the controller logic back in the drive unit harks back to the original Integrated Drive Electronics (IDE) approach.
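
    For scale, that 250 MB/sec figure is just the one-lane arithmetic (assuming, as I understand it, a 2.5 gigatransfer/sec line rate with 8b/10b encoding, i.e. 10 bits on the wire per data byte):

    # Usable bytes/sec for a single first-generation PCI Express lane.
    line_rate = 2.5e9                 # bits on the wire per second
    bytes_per_sec = line_rate / 10    # 8b/10b: 10 wire bits carry 1 data byte
    print(bytes_per_sec / 1e6, "MB/s per lane, per direction")   # 250.0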

  • Re:Benefits of SCSI? (Score:3, Interesting)

    by SETIGuy ( 33768 ) on Sunday March 09, 2003 @08:55PM (#5473689) Homepage
    I'm not sure I get what the problem is. No room left in your case? No PCI slots for additional controllers?

    My current desktop setup is...

    • IDE Channel 1A: 100 GB master
    • IDE Channel 2A: 100 GB master
    • IDE Channel 3A: IDE ZIP 100 (Bay 6)
    • IDE Channel 4A: DVD-RW (Bay 1)
    • IDE Channel 4B: DVD-ROM (Bay 2)
    • SCSI 0 id 4: Jaz 1GB (external)
    • SCSI 0 id 5: CD-RW (Bay 3)
    • FD0: 3.5" (Bay 5)
    • One 5" bay free. (Bay 4)

    The additional cost to get the extra two IDE channels was $25 for a dual channel IDE RAID card. For a home machine, IDE is perfectly adequate for the main drives. I keep SCSI around in hopes of acquiring a reasonably priced backup solution at some point. (My current backup is to copy modified files to another machine in the garage with an eventual dump to DVD). If I need more storage in the near term, I'd probably pick up a firewire drive.

    "Next year or so" the arrangement I'd choose would likely be entirely different. We'll see where serial ATA and SASCSI are at that point.

  • Re:Is this a trend? (Score:2, Interesting)

    by daoine_sidhe ( 619572 ) on Sunday March 09, 2003 @09:04PM (#5473720)
    Actually, it's more mission-dependent than that. The distance isn't as much of a factor (though it is in some applications) as the bandwidth future-proofing capability. In the vast majority of telecommunications specs that have been coming out for the last few years (at least in New England) it's fiber for ANY kind of backbone application, even if the closets (for some godawful reason, though it does happen) happen to be 50 ft apart. 12-strand multimode fiber is what you'll usually see. It makes sense even over short distances because of the bandwidth capability: in 10 years, that backbone fiber can be pushing 10 Gbps between those closets, per transmit/receive pair.

    Now, I know they are working on a 10 Gbps standard for Cat6 copper, but who wants to run a copper backbone? Hell, fiber put in 15 years ago can push the new 10 Gbps fiber standard right now! And you've got no crosstalk and no interference (ever see what happens to your bandwidth when your 200-pair copper is running too close to a bank of fluorescent lighting?). I've even seen a dozen or so specifications for fiber-to-the-desktop projects in schools. I've managed two of those installations in the last year, and the reason they do it isn't because it's some newfangled technology, it's for future-proofing. They know that later on, they'll be able to push more and more bandwidth down those same fibers and just have to swap out the active components in the closets. Any idea how much it costs to swap out a cable plant?

    Anyway, what it boils down to is this: even internally in workstations, the actual interface itself could be fiber, and not only would it allow much greater bandwidth immediately, it would future-proof the specification as well. I think 5 or 10 years down the road, we're going to see optical links for data begin to really take over. There's only so much that can be pushed through copper, and only so much interference before data integrity is severely degraded.
  • by nicotinix ( 648645 ) on Sunday March 09, 2003 @10:30PM (#5474103) Journal
    Great, the interface will be faster, but what about actual drive speed? We are currently maxed out at 15k RPM and ~3 ms access times. Compared to the improvements made in other PC parts (CPU, memory, video, etc.), hard drives are limping way behind. Today's drives cannot even come close to saturating the existing interfaces (except in RAID configurations).
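
    The mechanical part of that is easy to put rough numbers on (simple averages, ignoring seek details):

    # Average rotational latency is half a revolution, so at 15,000 RPM the
    # platter alone costs ~2 ms before any seek or transfer happens. That is
    # why ~3 ms access times are already close to the mechanical floor.
    rpm = 15000
    ms_per_revolution = 60000.0 / rpm        # 4.0 ms per revolution
    avg_rotational_latency = ms_per_revolution / 2
    print(avg_rotational_latency, "ms average rotational latency")   # 2.0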

    What I would really like to see is some kickass desktop performance improvements for drives. Not just 15-25%, no, I want 4x, 10x performance improvements.

    Seagate, Maxtor, do you hear me???
  • Re:Horray! (Score:1, Interesting)

    by Anonymous Coward on Monday March 10, 2003 @01:29AM (#5474784)
    There is still a huge difference between SATA and SAS drives. A SATA drive is designed to be a desktop drive. The SAS standard just allows SATA drives to be plugged into a SAS backplane. There is translation from the SAS STP protocol to the SATA protocol in the expander that the SATA drive is attached to. You can't just plug a SATA drive into a SAS HBA; you need an expander, so it isn't a cost-effective setup for just one drive. Who knows if the STP feature of SAS will take off, since the SATA guys are inventing their own SATA expanders. SAS is just the follow-on to U320 parallel SCSI, and it was derived from Fibre Channel (but designed so the expanders won't need a processor and should be cheap).
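
    A rough picture of the topology described above, as a throwaway data structure (purely illustrative, not any real API; protocol names as I understand them from the SAS drafts):

    # In the setup the parent describes, SAS drives talk SSP to the HBA, while
    # a SATA drive hangs off an expander that tunnels its traffic via STP.
    sas_domain = {
        "hba": {
            "port0": {"device": "SAS drive", "protocol": "SSP"},
            "port1": {
                "device": "expander",
                "attached": [
                    {"device": "SAS drive", "protocol": "SSP"},
                    {"device": "SATA drive", "protocol": "STP -> SATA"},
                ],
            },
        }
    }

    for port, dev in sas_domain["hba"].items():
        print(port, "->", dev["device"])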
  • Why do we need this? (Score:1, Interesting)

    by Anonymous Coward on Monday March 10, 2003 @02:05AM (#5474870)
    The article leaves me asking why. This is the closest answer I got:

    "IDC analyst Robert Grey said the ability to mix serial SCSI and ATA drives in servers and arrays has the potential to lower total costs of ownership for corporate users while also letting them customize storage setup to meet their needs."

    OK, so SCSI costs more. But I see not a single technical reason for this other than economies of scale and the possibly higher quality and longer warranties that go into SCSI drives.

    So why create a new SCSI? Didn't SATA take all the best things SCSI offered and add them to the ATA standard, like queuing? Is there any technical reason SATA can't add whatever SASCSI has? They added DMA to parallel ATA-33, which IMHO killed the biggest advantage SCSI had; I see no reason they can't come out with a SATA 2004 and add whatever is needed from SASCSI instead of making a second standard.

    What does this mean? "Serial Attached SCSI complements Serial ATA by adding device addressing"

    That's the only advantage SASCSI has over SATA I got when I read the FAQ [scsita.org].

    The rest of the advantages seem to be "it lets you use SCSI drives, which everybody knows are more reliable and cost 10x more", but there is no reason for having SCSI drives. Just build better ATA drives!
  • Is Serial faster? (Score:3, Interesting)

    by samdu ( 114873 ) <samduNO@SPAMronintech.com> on Monday March 10, 2003 @02:28AM (#5474944) Homepage
    I remember a while back that there were some parallel-port modems (I think I actually have one in my closet). The spin was that parallel modems had higher throughput. In addition, Maximum PC just did a benchmark test between parallel ATA and Serial ATA, and the parallel drive/interface beat the serial one in all but one test. Is serial actually faster, and why?
