

SATA 3.0 Release Paves the Way To 6Gb/sec Devices
An anonymous reader writes "The Serial ATA International Organization (SATA-IO) has just released the new Serial ATA Revision 3.0 specification. With the new 3.0 specification, the path has been paved to enable future devices to transfer up to 6Gb/sec, as well as to provide enhancements to support multimedia applications. Like other SATA specifications, the 3.0 specification is backward compatible with earlier SATA products and devices. This makes it easy for motherboard manufacturers to go ahead and upgrade to the new specification without having to worry about their customers' legacy SATA devices. This should make adoption of the new specification fast, like previous adoptions of SATA 2.0 (or 3Gb/sec) technology."
Theoretical != Real World speeds (Score:0, Insightful)
It's a pity that while SATA 2.0 has a theoretical speed of 3Gb/sec, real-world speeds are around 20-25MB/sec.
Re:Only one problem with this: (Score:1, Insightful)
How about they put some equivalent effort into speeding up the actual output of devices that use the interface?
SSDs use the interface, and they're getting close to hitting the 300MB/s throughput mark (the maximum after SATA overhead).
There are also several external RAID enclosures that use eSATA and appear as a single high-throughput drive to the onboard SATA controller.
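For what it's worth, that 300MB/s ceiling falls out of simple arithmetic. Here's a rough sketch; the only assumption is SATA's standard 8b/10b line coding, and command/framing overhead would shave off a bit more in practice:

    # Rough effective-throughput estimate for a SATA 2.0 (3 Gbit/s) link.
    # SATA uses 8b/10b line coding, so every 10 bits on the wire carry 8 data bits.
    line_rate_bits = 3_000_000_000          # raw line rate, bits per second
    payload_bits = line_rate_bits * 8 / 10  # strip the 8b/10b coding overhead
    payload_mb = payload_bits / 8 / 1_000_000
    print(payload_mb)                       # 300.0 MB/s before protocol overhead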
Re:isn't it time for (Score:1, Insightful)
Agreed. What we need to see is some form of in-drive RAID (as a comparison only, not an actual implementation), where there are multiple INDEPENDENT heads reading/writing on multiple platters all at the same time, with each head having its own independent I/O line on the connector.
That would be cool.
Re:isn't it time for (Score:5, Insightful)
Why? Do you have a hard drive that can even saturate a SATA I bus?
Forget Heads... (Score:5, Insightful)
where there are multiple INDEPENDENT heads reading/writing on multiple platters all at the same time
The entire idea of 'heads' should be forgotten. Mechanical drives should be sent to oblivion and we should welcome your idea of parallelism on solid state solutions.
Re:hard drive that can saturate SATA? (Score:3, Insightful)
Agreed that it's eventually going to be on the northbridge. However, SAS isn't there now, either, and SSDs are still likely to saturate that bus in the near future.
SATA vs SAS is a different debate than IDE vs SCSI. Even on servers, it's now easy to justify the cheaper standard compared to the older standards. Not in all cases, of course, but far more often than you could with IDE.
Re:Theoretical != Real World speeds (Score:3, Insightful)
I really wish SATA 3.0 had made a bigger jump than this. 600MB/sec is hardly anything for some of the high-end SSDs and RAM drives available.
If they become affordable, I'm definitely going for PCIe 4x SSDs, since they can hit 8GB/sec (80Gbit) when RAID'd on server boards with tons of PCIe lanes [channelregister.co.uk].
I remember when someone stuck six Fusion-io ioDrives together and got about 2.2GB/sec of bandwidth out of a regular 2-socket server board (like those Tyan ones, which can be had for well under $1000). It seriously makes me drool... though I suppose all I really need out of an SSD is 200MB/sec with minimal latency.
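Just to sanity-check that anecdote, a back-of-the-envelope sketch (the six-card and 2.2GB/s figures are from the story above; everything else is plain division):

    # Aggregate vs. per-card bandwidth from the six-card anecdote above.
    total_gb_per_s = 2.2
    cards = 6
    per_card_mb_per_s = total_gb_per_s * 1000 / cards
    print(round(per_card_mb_per_s))   # ~367 MB/s per card, already past SATA2's ~300 MB/s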
Re:isn't it time for (Score:5, Insightful)
And what you clearly missed from the post you're responding to is that the clock rates that you can get from serial are so much higher than what you can do with parallel that it more than offsets the disadvantage of serialization.
There are two things that limit the speed of parallel interfaces. As the GP mentioned, one is signal skew. The clock rate has to be slow enough so that the receiver can sample all data lines at the same time and get them all within the data eye. The second is that the data lines are single-ended, meaning that there's only one wire per signal. The clock rate has to be slowed down to ensure that the signals have reached full on or full off at the other end and that they're noise free.
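To put a rough number on the skew problem, here's an illustrative sketch only; the skew and timing figures are invented for the example, not taken from any real bus:

    # Illustration: worst-case skew between data lines eats into the data eye,
    # which is what caps the clock rate of a wide single-ended parallel bus.
    skew_ns = 3.0         # hypothetical worst-case skew across the ribbon cable
    setup_hold_ns = 2.0   # hypothetical receiver setup + hold requirement

    for clock_mhz in (33, 66, 133, 266):
        bit_period_ns = 1000.0 / clock_mhz
        usable_eye_ns = bit_period_ns - skew_ns
        ok = usable_eye_ns >= setup_hold_ns
        print(f"{clock_mhz} MHz: eye {usable_eye_ns:.1f} ns -> {'ok' if ok else 'fails'}")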
High-speed serial interfaces use DIFFERENTIAL SIGNALLING. The signal is transmitted over two wires that switch in antiphase. You decode them by comparing them. This has several beneficial effects. One is that noise affects them the same, so even if they're both offset by noise, they compare the same. The other is that now you don't have to wait as much on the effects of resistance, capacitance, and inductance. You can reliably decode the signal before the transitions are complete. (Look up "slew rate".)
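A toy model of that comparison trick (purely illustrative: a real receiver is an analog comparator, and the voltage and noise levels here are made up):

    # Toy model of differential decoding: the same noise couples onto both wires,
    # but the difference between them is unaffected.
    import random

    def send_bit(bit):
        # Two wires driven in antiphase around a common level.
        p = 0.5 if bit else -0.5
        n = -p
        noise = random.uniform(-0.3, 0.3)   # common-mode noise hits BOTH wires
        return p + noise, n + noise

    def receive_bit(p, n):
        # The comparator only looks at the difference, so the noise cancels.
        return 1 if (p - n) > 0 else 0

    bits = [1, 0, 1, 1, 0, 0, 1]
    recovered = [receive_bit(*send_bit(b)) for b in bits]
    print(recovered == bits)   # True despite the injected common-mode noise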
So, using the same basic silicon technology, you can get a single differential pair to transmit data MUCH faster (in bytes/sec) than you can with parallel. It's interesting to see how technology transitioned from serial to parallel (wider means more bits per second), and then back to serial. The reason they didn't just stick with serial was that they didn't have the technology to make the I/O drivers go that fast until recently.
IIRC, a 1x PCI Express channel is a single differential pair for data in each direction. (I think there's a side-band channel and some other stuff.) This is just like DVI and SATA. 16x PCI Express is sixteen 1x channels. The trick here is that although data is interleaved across all 16 channels, those channels are not synchronized with each other. They are out of phase, and the data is just put back into phase at the receiver.
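The "put back into phase at the receiver" part is essentially striping plus per-lane buffering. A rough sketch of the concept (not the actual PCIe framing, just the idea):

    # Sketch of striping data across N independent lanes and reassembling it,
    # even though each lane delivers its bytes with a different delay (skew).
    from collections import deque

    def stripe(data, lanes=16):
        # Round-robin the bytes across the lanes.
        queues = [deque() for _ in range(lanes)]
        for i, byte in enumerate(data):
            queues[i % lanes].append(byte)
        return queues

    def deskew_and_merge(queues):
        # The receiver buffers each lane and pulls bytes back out in the same
        # round-robin order, so per-lane skew stops mattering once every lane
        # has data buffered.
        out = bytearray()
        while any(queues):
            for q in queues:
                if q:
                    out.append(q.popleft())
        return bytes(out)

    payload = bytes(range(64))
    assert deskew_and_merge(stripe(payload)) == payload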
Re:isn't it time for (Score:3, Insightful)
Yes. I do. My single drive has an average sustained transfer rate of 230MB/s. A SATA1 bus would severely constrain the performance of my drive (an Intel X25-M).
There are numerous other SSDs on the market whose manufacturers focused on higher sustained performance rather than random-access performance, and those drives already hit the 300MB/s wall of SATA2. And I expect that Intel's next series of drives will do the same. SATA2 is woefully unprepared for the very near future, let alone the present; it's already slow enough to be constraining high-end performance.
Re:Theoretical != Real World speeds (Score:3, Insightful)
>>Are you seriously moving around THAT much data
Fast boot speeds and load times, man, are the holy grail for PC gaming. When SSDs fall enough in price that they're remotely competitive, I'm slapping an SSD RAID0 into my box.
As it is, my 2x7200RPM RAID0 from late 2004 still outperforms a single SSD drive in my SiSoft benchmarks, so I'm happy for now.
Re:Forget Heads... (Score:3, Insightful)
You are very, very kind to Windows; that 4MB/min of I/O was spread across about 20 different processes, most of which were writing a few bytes a second, not nice neat 64K writes (or even, if you double the count, 32K writes).
Re:isn't it time for (Score:5, Insightful)
Well, this may not be exactly what you were getting at, but I'd like to split hairs here anyway, and divide this into two separate issues that SATA/SAS resolved.
For best results it's important to model the cable as an RF transmission line, with a specific impedance. An ideal transmission line has the important qualities that all the energy you send from one end will arrive at the other, and none will be reflected back to you. To get reasonably close to this ideal, we space the wires we use a fixed distance apart (in relation to the wire's diameter), choose our dielectric (insulating material) carefully, use terminating resistors at both ends, and keep the line a simple line (no tees, etc.)
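The "none reflected back" condition is just impedance matching, which the standard reflection-coefficient formula makes concrete. A quick sketch (the impedance values are arbitrary examples, not any particular cable spec):

    # Reflection coefficient at the end of a transmission line:
    #   gamma = (Z_load - Z0) / (Z_load + Z0)
    # A termination that matches the line's characteristic impedance reflects nothing.
    def reflection_coefficient(z_load, z0):
        return (z_load - z0) / (z_load + z0)

    z0 = 100.0  # example characteristic impedance, in ohms

    print(reflection_coefficient(100.0, z0))  #  0.0 -> matched termination, no reflection
    print(reflection_coefficient(1e9, z0))    # ~1.0 -> open (unterminated) end, full reflection
    print(reflection_coefficient(0.0, z0))    # -1.0 -> shorted end, full inverted reflection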
For those of you who cut your teeth on parallel SCSI, 10base2/10base5 Ethernet, or Apple LocalTalk, you'll wax nostalgic about just how much of a pain in the ass this was.
For those of you who have only messed with parallel IDE, you'll wonder why you never had to deal with this. The reason is that IDE cheated a little bit - they only terminated the controller (motherboard) side of the bus, and let the signals reflect off the other end. This left only a master/slave/cable-select jumper to infuriate you - but it also limited how long an IDE cable could be and prevented them from jacking up the clock rates on it.
SATA/SAS fixes this for good by limiting you to one device per cable ("port", not "bus"). Both ends are hard-wired to always terminate and any cable problems are limited to a single drive.
The other issue you may have been referring to is balanced (differential) vs. unbalanced signalling (where one wire is held at ground and the voltage is read off the other wire). Electrical engineers commonly describe unbalanced signalling as using one wire, because ground is so boring that they never bother to draw it on their schematics, but it does have to be connected in real life, and coax Ethernet/most old SCSI/parallel IDE/RS-232/VGA still used two wires per signal. Balanced/differential signalling (LVD/HVD SCSI, SAS, SATA, 10/100/1000baseT, USB, telephone lines, T1 lines, LocalTalk, etc.) allows for the can't-imagine-life-without-it common-mode noise rejection technique you describe.
Re:Theoretical != Real World speeds (Score:2, Insightful)
A few reasons spring to mind. One is that expanders are cheaper than controllers. Another is that they don't take a slot. That's handy if you're using a case that supports 25 drives. A third is that you want to maximize throughput per slot for various reasons. A last is that you want to attach external storage and you want the maximum amount of external storage per connection - because some people want to connect 48TB of storage to one 4-port SATA card, which ain't going to work directly unless you've got a source for 12TB HDDs.
Was that enough reasons?
Re:isn't it time for (Score:2, Insightful)
Well, no, not a SINGLE disk. But hey, I'm using a backplane/port-multiplier combo that allows me to connect 5 drives to a single SATA connector.
(I think someone actually mentioned something like this, far above in the earlier comments)
Besides, having interfaces be ahead of the drives, performance-wise, is not a bad thing; it's actually a very good idea, so that drives can advance without hitting the ceiling.