
SATA 3.0 Release Paves the Way To 6Gb/sec Devices

An anonymous reader writes "The Serial ATA International Organization (SATA-IO) has just released the new Serial ATA Revision 3.0 specification. The 3.0 specification paves the way for future devices to transfer up to 6Gb/sec, and adds enhancements to support multimedia applications. Like other SATA specifications, the 3.0 specification is backward compatible with earlier SATA products and devices. This makes it easy for motherboard manufacturers to go ahead and upgrade to the new specification without having to worry about their customers' legacy SATA devices. This should make adoption of the new specification fast, like previous adoptions of SATA 2.0 (3Gb/sec) technology."
Comments:
  • isn't it time for (Score:1, Interesting)

    by markringen ( 1501853 ) on Wednesday May 27, 2009 @06:43PM (#28116521)
Isn't it about time for us to switch to SAS (Serial Attached SCSI)?
  • by markringen ( 1501853 ) on Wednesday May 27, 2009 @06:53PM (#28116685)
    SSDs will probably end up being connected to a form of RAM socket with an on-CPU controller (like system RAM) in the future. Eventually flash could be half as fast as system RAM, so there would be no real reason not to have it connected directly to the CPU.
  • Worth noting (Score:4, Interesting)

    by earnest murderer ( 888716 ) on Wednesday May 27, 2009 @06:54PM (#28116695)

    The spec, as we have seen with most other transfer specs, has little to do with real-world device designs. Hardware interfaces (much less devices) languish in "has to cost less than x per part" hell... But you can bet your ass they'll put a "SATA 3.0, up to 6Gb per second" label on a device that isn't designed to transfer more than a fifth (peak) of the spec's data rate.
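
    For scale, a quick back-of-the-envelope check of that "a fifth" figure, in Python. The 120 MB/sec sustained number is an assumption, roughly a fast desktop drive of the day, not a measurement:

        # SATA uses 8b/10b encoding, so only 8 of every 10 bits on the wire
        # carry data: a 6 Gb/s line rate tops out near 600 MB/s of payload.
        line_rate_bps = 6e9
        payload_mb_s = line_rate_bps * (8 / 10) / 8 / 1e6   # 600.0 MB/s
        sustained_mb_s = 120.0                              # assumed drive throughput
        print(payload_mb_s, sustained_mb_s / payload_mb_s)  # 600.0 0.2 -- a fifth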

  • by Anonymous Coward on Wednesday May 27, 2009 @07:02PM (#28116775)

    http://www.serialata.org/developers/naming_guidelines.asp

    Here's a clue: If you have to post a web page explaining the proper way to refer to your products, your products are poorly named.

    Here's another clue: If there's a shorter/easier/faster way to refer to your product, people are going to go with that. Insisting that they do otherwise indicates delusions of grandeur.

    Get the hell over it already.

  • by pembo13 ( 770295 ) on Wednesday May 27, 2009 @07:03PM (#28116777) Homepage

    I've lost 3 drives due to plugs breaking off inside the SATA ports on 3.5" drives.

  • Re:isn't it time for (Score:2, Interesting)

    by Brian Gordon ( 987471 ) on Wednesday May 27, 2009 @07:13PM (#28116887)
    I'd say if it's bandwidth we're after, we shouldn't be reducing the number of signal lines. Do things in parallel [wikipedia.org] instead of serializing everything and depending on astronomical clock speeds. Obviously PATA is obsolete, but especially with the rising importance of multiprocessing, we should be focusing on more parallel solutions, perhaps allowing multiple reads at a time on different lines of the connector.
  • Stupid (Score:4, Interesting)

    by TheParadox2 ( 1562593 ) on Wednesday May 27, 2009 @07:19PM (#28116937)
    I think within a year's time frame we could see 6Gb/s passed, the way SSDs are going, so making this the standard is dumb. If we're looking for speed, SATA 6Gb/s is not it, and this ancient CHS scheme has to go, to accommodate a better way to map, access, and control data. Ultimately, we need these devices to understand and control the file system. (TRIM does this for SSDs.)

    For example: the OCZ Vertex nearly saturates the 3Gb/s mark already. The only way the drives 'fail' to sustain that speed is with random writes, which typically occur when writing data to a spot marked as available while the NAND isn't zeroed; the drive either has to re-zero or move on. If the drive knows that the OS is deleting a file (not just marking the site as available), then it can zero automatically without you noticing.

    It's only under certain conditions that these drives don't consistently perform at peak: free space not consolidated, free space not zeroed, the swap file creating random writes (which slows performance), and indexing, which is now useless with 0.1 ms seek times. Using write filters, or anything that converts random writes to sequential writes (through buffers, caches, or drivers), greatly enhances speed; see the MFT software or even Windows SteadyState for such devices.

    I like the idea of the 'RAM socket' interface someone mentioned above. I think these devices work better in a parallel manner; most work like this internally anyway.
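
    A minimal sketch of the zeroing behavior described above, in Python. The toy FTL class, block counts, and cost labels here are illustrative assumptions, not any vendor's actual firmware logic:

        # NAND must be erased before it can be reprogrammed, and without TRIM
        # the drive cannot tell "deleted by the filesystem" from "live data".
        class ToyFTL:
            def __init__(self, num_blocks):
                self.erased = set(range(num_blocks))  # blocks known to be blank
                self.live = {}                        # logical address -> block

            def write(self, lba, block):
                if block in self.erased:
                    self.erased.discard(block)
                    cost = "program only"        # fast path: block already blank
                else:
                    cost = "erase + program"     # slow path: erase cycle first
                self.live[lba] = block
                return cost

            def trim(self, lba):
                # The OS tells the drive this address holds garbage, so its
                # block can be erased in the background before the next write.
                block = self.live.pop(lba, None)
                if block is not None:
                    self.erased.add(block)

        ftl = ToyFTL(num_blocks=1)
        print(ftl.write(lba=0, block=0))  # "program only" (blank drive)
        ftl.trim(lba=0)                   # the OS deletes the file and says so
        print(ftl.write(lba=1, block=0))  # "program only" again, thanks to TRIM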
  • Re:isn't it time for (Score:4, Interesting)

    by Ilgaz ( 86384 ) on Wednesday May 27, 2009 @07:55PM (#28117281) Homepage

    Yeah, while swearing at Apple 24/7 for shipping SATA1 on the Quad G5 workstation (the most expensive G5), I purchased a very nicely performing Western Digital Caviar 1TB drive with a 32MB cache. It took a while to figure out that I can't really saturate even the SATA1 bus: a "fill with zeros" format in OS X topped out at 140MB/sec. Of course, Apple expects me to buy an ATTO-like high-end card if I need more bandwidth.

    What matters is SSDs; that is why they're releasing the spec right now. If you have enough money to set up a very high-end (not toy-like) SSD today, you will see that SATA2 is the bottleneck. People were already talking about a different standard, or even getting rid of SATA altogether, for them.

  • Re:isn't it time for (Score:3, Interesting)

    by vadim_t ( 324782 ) on Wednesday May 27, 2009 @08:42PM (#28117687) Homepage

    It's been tried, and didn't work well.

    The drive heads are some of the most expensive parts of a hard disk, so it raises the price considerably. Then you get higher power usage, heat generation, decreased reliability, and higher complexity in exchange for the extra performance.

    The problem is that normal people don't look at speed, they look at capacity. So they won't buy the expensive drives. And the people who do look at things like bandwidth and latency are already running RAID and benefiting from multiple heads. They're also unlikely to want something that's less reliable.

  • Re:Stupid (Score:5, Interesting)

    by afidel ( 530433 ) on Wednesday May 27, 2009 @08:46PM (#28117715)
    I think the most likely outcome is SSDs moving to something like ExpressCard, a physical spec which extends the PCIe bus out to the storage. The drives will show up as a SCSI/SATA controller AND a virtual disk attached to that controller, so that the software layer doesn't have to be changed.
  • Re:isn't it time for (Score:3, Interesting)

    by Penguin Follower ( 576525 ) <scrose1978@NOsPAm.gmail.com> on Wednesday May 27, 2009 @10:03PM (#28118291) Journal

    "For drives of equivalent spec (SAS or SATA, same spindle speed), I suspect that it is largely marketing fluff and a few firmware tweaks; but 15k RPM vs. slower is a nontrivial difference."

    I agree completely. We've got two SANs at work... the older one is full of U320 10k RPM drives and the new SAN is all 15k RPM SAS drives. The new SAN leaves the old one in the dust (and has 20TB more space, too! :D).

  • Re:isn't it time for (Score:4, Interesting)

    by Antique Geekmeister ( 740220 ) on Wednesday May 27, 2009 @10:33PM (#28118519)

    And the OP is, frankly, unaware of the history of SCSI and PATA. Those big wide cables are deprecated for many reasons: one is their expense, another is their fragility, and another is the incredible variety of vaguely distinct, and often stupidly different, specifications for such broad interfaces. I had to deal with that debris for decades, and it was extremely painful.

    The amount of time saved by consistent, small interfaces with fewer things to screw up is enough, by itself, to make up for the expense of any drives lost to the fragility of the SATA connector. I remember what an amazing crapshoot it used to be to design a SCSI chain of devices, the awful incompatibility and expense of the cables even for what were nominally the same type of SCSI, and the tendency of those connectors to bend pins or fail under stress.

    Give me SATA (and its low-cost peer for external devices, USB) any day over the technically superior but less consistent SCSI and FireWire.

  • Huh? (Score:2, Interesting)

    by symbolset ( 646467 ) on Wednesday May 27, 2009 @11:03PM (#28118735) Journal

    There are several SSDs currently that offer more than 1GB/s read/write, which would more than saturate this bus. I mentioned them here [slashdot.org]. The trick is that they don't use this bus. Because that would be silly.
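
    Rough numbers behind that claim, in Python. The 1 GB/sec device rate is taken from the parent, and the per-lane PCIe figure is nominal, not measured:

        import math

        # SATA 3.0 payload ceiling: 6 Gb/s line rate minus 8b/10b overhead.
        sata3_mb_s = 6e9 * (8 / 10) / 8 / 1e6    # 600.0 MB/s
        ssd_mb_s = 1000.0                        # 1 GB/s, as claimed above
        print(ssd_mb_s > sata3_mb_s)             # True: the bus saturates

        # PCIe 1.x carries ~250 MB/s per lane after the same 8b/10b overhead,
        # so a device like that wants at least 4 lanes.
        print(math.ceil(ssd_mb_s / 250.0))       # 4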

  • Re:isn't it time for (Score:3, Interesting)

    by guruevi ( 827432 ) on Thursday May 28, 2009 @12:14AM (#28119111)

    Apparently you are not completely up to snuff with your jargon there.

    I have worked with the guts of computers long enough to have known ESDI drives (in the PS/2, no less); those had, as far as I remember, serial data lines (and a separate control line for head movement). Then came SCSI and IDE (later standardized as ATA, with faster versions as EIDE or ATA-2, ATAPI for CD/DVD/ZIP drives, and recently known as PATA), which were parallel.

    The first SCSI drives I used had 8 data lines (SCSI-2); you could even make your own cables for those things, very robust. Later variants (Wide SCSI) had 16 data lines and a very wide connector with thick cables that would cover the whole side of your case if you were putting in a dual-controller setup. Sometimes those cables would carry so much tension and take up so much space that they wouldn't stay in the drive, and then you could start rerouting the whole cable again to find a "better" way.

    ATA had less of an issue, as the cable wasn't as wide or thick, but you could only get 2 devices on a cable, and the one designated slave (usually the CD-ROM) could tie up your bus and waste valuable time, so you should have run a separate cable for each device.

    The problem with both Parallel SCSI and Parallel ATA was that you could only drive them up to a certain speed before you got synchronization issues between the data lines. Serial is much cheaper in that respect: you can drive up the frequency without caring too much about synchronization (a rough skew calculation follows at the end of this comment).

    FireWire and USB have always been serial. FireWire is technically superior to, and also more expensive than, USB (just as SCSI was far superior to any IDE installation, for the same reasons) because the devices (both host and target) require internal controllers so as not to tie up the CPU. SCSI also required (sometimes manual) termination and (before SCSI-3) SCSI IDs.

    SAS is Serial Attached SCSI, just as SATA is Serial ATA. SCSI (this time serial) is again superior to, but also more expensive than, ATA, and allows much more flexibility (you can, for example, attach SATA devices to a SAS controller, but not vice versa). SAS can also maintain multiple drives on a single cable, while SATA is limited to one device per cable.

    Give me FireWire and SCSI over USB and ATA any time.
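
    A rough illustration of the skew problem mentioned above, in Python. The numbers describe a hypothetical 16-bit bus pushed to SATA 3.0's payload rate, not any shipping design:

        # To move ~600 MB/s over 16 parallel data lines, each line must toggle
        # at 300 MHz, and all 16 edges have to land within a fraction of a cycle.
        target_mb_s = 600.0
        bus_width_bytes = 2                        # 16 data lines, as on ATA
        clock_mhz = target_mb_s / bus_width_bytes  # 300.0 MHz per line
        bit_period_ns = 1000.0 / clock_mhz         # ~3.33 ns
        skew_budget_ns = bit_period_ns / 4         # allow a quarter-period of skew
        print(clock_mhz, round(bit_period_ns, 2), round(skew_budget_ns, 2))

        # Signals travel at ~6-7 ps/mm in FR-4, so ~0.83 ns of budget is gone
        # after roughly 12 cm of trace-length mismatch between lines; a single
        # serial lane has no lane-to-lane skew to manage at all.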
