SATA 3.0 Release Paves the Way To 6Gb/sec Devices

An anonymous reader writes "The Serial ATA International Organization (SATA-IO) has just released the new Serial ATA Revision 3.0 specification. With the new 3.0 specification, the path has been paved to enable future devices to transfer up to 6Gb/sec, as well as provide enhancements to support multimedia applications. Like other SATA specifications, the 3.0 specification is backward compatible with earlier SATA products and devices. This makes it easy for motherboard manufacturers to go ahead and upgrade to the new specification without having to worry about their customers' legacy SATA devices. This should make adoption of the new specification fast, like previous adoptions of SATA 2.0 (or 3Gb/sec) technology."
  • Re:Ah! (Score:2, Informative)

    by prjt ( 1369213 ) on Wednesday May 27, 2009 @06:50PM (#28116631) Homepage Journal
    My bank account is dead.
  • by Jason Pollock ( 45537 ) on Wednesday May 27, 2009 @06:50PM (#28116633) Homepage

    Devices that aggregate themselves as a striped array behind a single eSATA/SATA interface. While an individual drive may not be able to pump out enough data, together they can.

  • by evanbd ( 210358 ) on Wednesday May 27, 2009 @06:51PM (#28116653)

    Wow, both your numbers are wrong. SATA 2.0 has a theoretical transfer rate of 3Gb/s, not 3GB/s. It also uses an 8b/10b encoding [wikipedia.org], so 3.0Gb/s translates to 300MB/s. Data throughput will be less than that, thanks to control protocol overhead, though the overhead is very small.

    Modern drives do seriously better than 25MB/s. Seriously, go look at benchmarks. Also, SSDs, which are a very real design influence on things like SATA, are already getting close to the 300MB/s mark.
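    As a quick sanity check of the encoding math above, here's a minimal sketch (the helper function is hypothetical; the 8b/10b ratio and line rates are from the comment and the SATA generations it references):

    ```python
    # 8b/10b encoding sends every 8 data bits as 10 line bits,
    # so usable bandwidth is 80% of the raw line rate.
    def sata_payload_mb_s(line_rate_gb_s: float) -> float:
        """Approximate payload bandwidth in MB/s, ignoring the
        (small) control protocol overhead."""
        data_bits_per_s = line_rate_gb_s * 1e9 * 8 / 10  # strip 8b/10b overhead
        return data_bits_per_s / 8 / 1e6                 # bits -> bytes -> MB

    for gen, rate in (("SATA 1.0", 1.5), ("SATA 2.0", 3.0), ("SATA 3.0", 6.0)):
        print(f"{gen}: {sata_payload_mb_s(rate):.0f} MB/s")
    # SATA 1.0: 150 MB/s, SATA 2.0: 300 MB/s, SATA 3.0: 600 MB/s
    ```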

  • by BeardedChimp ( 1416531 ) on Wednesday May 27, 2009 @06:53PM (#28116679)
    Exactly, it's not like technology advances or anything. [tomshardware.com]
  • Re:isn't it time for (Score:5, Informative)

    by Wesley Felter ( 138342 ) <wesley@felter.org> on Wednesday May 27, 2009 @06:54PM (#28116699) Homepage

    No, because SAS will always be more expensive than SATA.

  • by Wesley Felter ( 138342 ) <wesley@felter.org> on Wednesday May 27, 2009 @06:58PM (#28116741) Homepage

    Current SSDs are very close to the SATA 2.0 limit and the performance of flash is about to double thanks to ONFI 2.0, so we can expect SSDs to quickly adopt SATA 3.0.

  • by ichigo 2.0 ( 900288 ) on Wednesday May 27, 2009 @07:04PM (#28116789)
    Actually the limit is 300MB/s [wikipedia.org] which some of the new drives are very close to reaching [anandtech.com]. One more generation of SSDs and they'll be bottlenecked by SATA 2.0.
  • by geekoid ( 135745 ) <dadinportlandNO@SPAMyahoo.com> on Wednesday May 27, 2009 @07:07PM (#28116811) Homepage Journal

    Not true. SSDs are approaching that now.

    HP has an enterprise SSD that does 800MB/s (note the large B, as opposed to b), so that drive could saturate even SATA 3.0's 6Gb/s link.

  • Re:isn't it time for (Score:5, Informative)

    by kaiser423 ( 828989 ) on Wednesday May 27, 2009 @07:26PM (#28116981)
    You do realize that at either end of a parallel link you'd have to re-serialize, right? That's what PATA does. So you still need the high clock rate regardless of how much you parallelize it on the wires. That's extra hardware, and another piece that needs to be really fast. Then you also have issues with maintaining clocking integrity over parallel lines, which gets tricky at high data rates.

    Right now, our technology is better suited to pure serial. In the past, it was parallel. It might swing back and forth a couple of times between the two in the future. But make no mistake: right now, on commodity hardware for drives connected via cables, serial is pulling ahead in the speed war.
  • Re:isn't it time for (Score:5, Informative)

    by Wrath0fb0b ( 302444 ) on Wednesday May 27, 2009 @07:27PM (#28116993)

    The problem with parallel is that you can't crank up the clock speed, because you have to make sure that the signal on each line is combined with the ones from the other lines that were sent at the same time. This limits how fast you can send the bits (if the time between bits is comparable to the skew time, the receiver will not be able to reliably reassemble the data) and how long the interconnect can be (skew being linearly amplified by length). It's not for nothing that PCI has been replaced with PCI-E, PATA with SATA, SCSI with SAS. USB and IEEE 1394 would be impossible with parallel. Serial communications are more reliable and more scalable (one big exception -- wireless RF, but that's not what we are discussing here).
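    To put rough numbers on the skew argument, here's an illustrative sketch (the figures are typical PCB values assumed for the example, not from any spec):

    ```python
    # Why skew caps parallel bus speed: once the arrival-time spread
    # between lines approaches one bit period, bits can no longer be
    # reliably matched to the same clock edge.
    bit_rate = 1e9                      # 1 Gb/s per line (assumed)
    bit_period_ns = 1e9 / bit_rate      # 1 ns per bit

    # Signals travel roughly 15 cm/ns in FR4 PCB traces, so a 3 cm
    # trace-length mismatch between two lines adds ~0.2 ns of skew.
    propagation_cm_per_ns = 15.0
    mismatch_cm = 3.0
    skew_ns = mismatch_cm / propagation_cm_per_ns

    print(f"bit period: {bit_period_ns:.1f} ns, skew: {skew_ns:.1f} ns")
    # Doubling the clock rate halves the bit period while the skew
    # stays fixed -- which is exactly the wall parallel buses hit.
    ```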

    Multiprocessing, incidentally, has nothing to do with it -- the software interface to a storage device hides all the implementation details (PATA/SATA, for instance) anyway. The hard part in multi-threading IO-intensive apps has quite a bit more to do with latency issues and atomicity guarantees (the complete lack thereof) rather than the inability of the storage device to do 2 things at once (which, for a physical disk, is impossible anyway, meaning that it would have to back-convert into a serial process anyway).

  • Re:SSD (Score:5, Informative)

    by Wrath0fb0b ( 302444 ) on Wednesday May 27, 2009 @07:31PM (#28117039)

    "If my understanding of the technology is correct, the seek time on most hard drives already limits drive access speed to typically be slower than 3Gb/sec. Would this rely on a transition to Solid State Drives for any noticeable difference in performance?"

    The seek time has nothing to do with the throughput. The seek time refers to the latency between when a read command is issued and when it begins to be fulfilled. The throughput refers to the data transferred per unit time during fulfillment.

    Here's a nice car analogy for those of us in New England -- consider the Mass Pike versus I-93. The Mass Pike has a very long seek time from the onramp because of the toll lanes (and the mouth breathers that won't get a transponder even though they are now free and clog the automatic lanes) but once you get on the highway, you can go 80 MPH until your exit. On I-93, by contrast, you can get right on, but you will be going 30 MPH for the duration. Of course, if you drive down to CT and get on I-84, you have a low-latency AND high throughput highway but if you drive too far down to, say, the Bronx, it becomes high-latency and low throughput.
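    The split between the two quantities is easy to see in a back-of-the-envelope model (assumed figures; the ~9 ms seek and 300 MB/s link are merely plausible examples):

    ```python
    # Total time to service one read: latency (seek) + size / throughput.
    def read_time_ms(size_bytes: float, seek_ms: float, mb_per_s: float) -> float:
        return seek_ms + size_bytes / (mb_per_s * 1e6) * 1e3

    # 4 KB random read: dominated by seek; a faster link barely helps.
    print(read_time_ms(4096, seek_ms=9.0, mb_per_s=300))   # ~9.0 ms
    # 1 GB sequential read: dominated by throughput; seek is noise.
    print(read_time_ms(1e9,  seek_ms=9.0, mb_per_s=300))   # ~3342 ms
    ```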

  • by Clover_Kicker ( 20761 ) <clover_kicker@yahoo.com> on Wednesday May 27, 2009 @07:46PM (#28117181)

    Prepare for mass storage connected to the north bridge.

    /me wanks furiously!

  • by Firehed ( 942385 ) on Wednesday May 27, 2009 @07:52PM (#28117233) Homepage

    Sequential reads on large-capacity drives are often in the 70-90MB/s range (yes MB, not Mb), bursting into the 200MB/s range. Hell, I've seen 50MB/s+ for at least the last half a decade. High-quality (read: expensive) SSDs can roughly double that.

    And of course, the spec is in gigabits per second, not gigabytes, and the figure includes encoding overhead. Actual sustained transfer tops out at 150MB/s, 300MB/s, and 600MB/s on SATA I-III respectively.

  • Re:isn't it time for (Score:4, Informative)

    by morgan_greywolf ( 835522 ) on Wednesday May 27, 2009 @08:09PM (#28117389) Homepage Journal

    Actually, there really isn't much difference. The main difference is that hard drive manufacturers build their SCSI/SAS drives better than their IDE/SATA drives, because most SCSI/SAS drives are going into servers.

    Historically the performance difference was much larger, and that's the reason SCSI was used in server hardware, but now it's mostly a matter of economics and pricing.

  • by yachius ( 1348219 ) on Wednesday May 27, 2009 @08:13PM (#28117429) Homepage
    Same here. I plugged in a drive a few weeks ago with a regular straight cable, bent the cable up to fit in the case, and the connector promptly snapped off.
  • Re:isn't it time for (Score:3, Informative)

    by Barny ( 103770 ) on Wednesday May 27, 2009 @09:27PM (#28118059) Journal

    Firstly, multi-platter is not the best approach: increased heat and increased complexity both increase the rate of failure.

    Secondly, you are now making a standard for the number of platters/heads in a drive, when in reality everyone wants something different (reliability over density).

  • Re:Forget Heads... (Score:4, Informative)

    by vadim_t ( 324782 ) on Wednesday May 27, 2009 @10:22PM (#28118421) Homepage

    Ok, so let's say those 4MB/minute (IO writes/s would be a better measure) are made up of 64KB requests. So that's 64 requests/minute, or about one a second.

    That's not terribly high, so let's double it to 2 requests a second.

    1,310,720,000 max block erases, at 2 per second, will last 7,585 days, or about 20 years.

    This assumes an MLC drive with 16GB available for reallocation. If you use an SLC drive, you probably won't live long enough to see the disk wear out, and even with MLC it's doubtful you're going to keep the same drive around for 20 years. Twenty years ago you'd have been running a 386 or a 486 with maybe 200MB of disk space, and you can't even plug a hard disk from back then into most modern computers.
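    The arithmetic behind those figures can be reconstructed roughly like this (the 128KB erase-block size and 10,000-cycle MLC rating are assumptions, chosen because they reproduce the comment's numbers from its stated 16GB pool):

    ```python
    # Flash wear estimate: total block erases available in the pool,
    # divided by the erase rate, gives a lifetime estimate.
    pool_bytes   = 16 * 2**30      # 16 GB reallocation pool (from the comment)
    block_bytes  = 128 * 2**10     # 128 KB erase blocks (assumed)
    erase_cycles = 10_000          # typical MLC rating (assumed)

    total_erases = (pool_bytes // block_bytes) * erase_cycles
    print(total_erases)            # 1310720000, matching the comment

    erases_per_second = 2
    days = total_erases / erases_per_second / 86_400
    print(f"{days:.0f} days, ~{days / 365:.0f} years")  # 7585 days, ~21 years
    ```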

  • Re:isn't it time for (Score:2, Informative)

    by Anonymous Coward on Thursday May 28, 2009 @03:04AM (#28120035)

    "IIRC, a 1x PCI Express channel is a single differential pair for data. (I think there's a side band channel and some other stuff.) This is just like DVI and SATA."

    Actually, I remember looking this up just last night. Each PCIe lane consists of two transmit pairs and two receive pairs, for a total of four pairs. DVI also uses multiple pairs, both in single-link and dual-link mode. I'm not sure about SATA.

    "At the electrical level, each lane consists of two unidirectional LVDS or PCML pairs at 2.525 Gbit/s. Transmit and receive are separate differential pairs, for a total of 4 data wires per lane."

    Thus sayeth the Wiki. [wikipedia.org]

    I think the only other thing you missed was a discussion of crosstalk. IIRC, crosstalk was one of the major limitations of the old IDE cabling standard, since transmitting many high speed signals in parallel was a recipe for interference. The cables were therefore required to be of a particular length (18 inches), shape (flat), and electrical characteristics. SATA and especially eSATA aren't anywhere near so picky.

  • by Colin Smith ( 2679 ) on Thursday May 28, 2009 @03:24AM (#28120137)

    With any RAID stripe on a reasonable controller, the 300MB/s SAS/SATA bus will be the I/O bottleneck. Not much point going beyond 4-5 drives at the moment.

    What I want though is for 10G ethernet to drop a little in price. Then it'll just be the one technology, and when 10G is too slow for storage I/O, the kit can be reused on the other side of the machine. iSCSI has made FC a legacy technology.
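    The 4-5 drive figure follows from simple arithmetic (the ~70MB/s per-drive sequential rate is an assumption, in line with figures quoted elsewhere in this thread):

    ```python
    # Aggregate stripe throughput vs. a 300 MB/s SATA 2.0 link.
    bus_limit_mb_s = 300
    per_drive_mb_s = 70

    for n in range(1, 8):
        aggregate = n * per_drive_mb_s
        limiter = "bus" if aggregate > bus_limit_mb_s else "drives"
        print(f"{n} drives: {min(aggregate, bus_limit_mb_s)} MB/s ({limiter}-limited)")
    # Past 4-5 drives the 300 MB/s link, not the disks, caps throughput.
    ```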


  • Re:Forget Heads... (Score:3, Informative)

    by asdf7890 ( 1518587 ) on Thursday May 28, 2009 @04:50AM (#28120601)

    "Did I miss the memo that says flash no longer has a limit on how many times it can be written upon?"

    No, but the limits are sufficiently high with current technology revisions that it isn't really a problem.

    For good solid state drives, in all but the most convoluted use cases the expected average time before failure is of about the same order as, or by some claims better than, spinning-disk-based drives. I emphasize the word "good" in that last sentence, as this may not extend to cheap USB sticks, which could be using old memory designs and controllers and are generally subject to harsher physical conditions than an internal drive (even in a laptop/netbook).

    The key issues with solid state drives at the moment are relative cost (though this will change as the tech matures further), write speed for many small writes (though better drives with more intelligent controllers are coming now, which mitigate this issue somewhat), and write speeds in general, particularly after some use (but again, this issue is being actively worked on).

    Unless you have a specific use that you think will punish individual flash cells, the write limits should not be a concern when comparing SSDs to spinning disks - instead pick the technology that best fits your desired I/O, power use and noise profiles in your price range.

  • by Rockoon ( 1252108 ) on Thursday May 28, 2009 @05:04AM (#28120695)
    It's at least a year old now, but look up the "Battleship MTRON" guy who mounted eight SSDs in RAID 0. This was before OCZ and Intel changed the dynamics of the SSD market, and even then he was very near 1GB/sec sustained transfer rates once he found the right RAID controller.

    SATAx isn't a RAID controller. While people without good solid RAID controllers can get away with decent RAID 0 performance, the serious people never rely on a single SATAx controller for RAID 0, since that is not its purpose.
