SATA 3.0 Release Paves the Way To 6Gb/sec Devices
An anonymous reader writes "The Serial ATA International Organization (SATA-IO) has just released the new Serial ATA Revision 3.0 specification. With the new 3.0 specification, the path has been paved for future devices to transfer up to 6Gb/sec, as well as providing enhancements to support multimedia applications. Like other SATA specifications, the 3.0 specification is backward compatible with earlier SATA products and devices. This makes it easy for motherboard manufacturers to go ahead and upgrade to the new specification without having to worry about their customers' legacy SATA devices. This should make adoption of the new specification fast, like previous adoptions of SATA 2.0 (or 3Gb/sec) technology."
6 Gb/sec? Meh (Score:5, Funny)
Re:6 Gb/sec? Meh (Score:5, Funny)
1.21 Joule Watts?
WTF is 1.21 m^4*kg^2/s^5 good for?
Re: (Score:2, Funny)
Worth noting (Score:4, Interesting)
The spec, as we have seen with most other transfer specs, has little to do with real world device designs. Hardware interfaces (much less devices) languish in the "has to cost less than x per part" hell... But you bet your ass they'll put a "SATA 3.0, up to 6Gb per second" label on it even though the actual device isn't designed to transfer more than a fifth (peak) of the spec'd data rate.
I hope they make the plug stronger (Score:5, Interesting)
I've lost 3 drives due to plugs breaking off into the SATA ports on the 3.5" drives
Re:I hope they make the plug stronger (Score:5, Funny)
Agreed, that's the dumbest physical connector I've seen in the longest time. I'd like to take those broken bits and shove them up the fingernails of the engineer that designed it.
Re: (Score:2, Funny)
Agreed, that's the dumbest physical connector I've seen in the longest time. I'd like to take those broken bits and shove them up the fingernails of the engineer that designed it.
Obviously, you have never used an HDMI connector.
Re:I hope they make the plug stronger (Score:4, Funny)
Re:I hope they make the plug stronger (Score:5, Funny)
Maybe you should stop using a hammer when plugging in a new hard drive?
Re: (Score:3, Informative)
Sata Smata (Score:4, Funny)
What about us using MFM drives with removable platters?
Re: (Score:2)
We are still looking for serial cables with 8ga wire......
Stupid (Score:4, Interesting)
Re:Stupid (Score:5, Interesting)
I love hard drive technology..... (Score:2)
Today at work a brand new 1TB Seagate came in. I went over to my machine to breathe life back into it, only to find that according to Windows it was a 32 megabyte drive. Immediately the cache sprang to mind: the drive was reporting its cache as the actual drive. Well... hell. At first I thought it was just DOA with corrupt firmware, but after some Googling it turns out you can actually reset the size the drive reports with LBA. Hopefully I won't have too many other problems. Not a big fan of the newer
Re: (Score:3, Funny)
SSDs are pulling a whole lot more than that ... at least when they are new ;)
Re: (Score:2, Interesting)
Re: (Score:2)
Or just be RAM with a battery backup.
Re: (Score:2)
SSDs aren't (currently) aiming for the price/GB crown. The power instability is manageable. I'm not saying it's for everyone, but there's definitely a niche.
Re: (Score:2)
Gb!=GB. Divide by 8.
And you should check your drive settings. My old IDE drives beat 20MB/s. I just checked my newest SATA drive and got 113MB/sec in hdparm.
Re:Theoretical != Real World speeds (Score:5, Informative)
Wow, both your numbers are wrong. SATA 2.0 has a theoretical transfer rate of 3Gb/s, not 3GB/s. It also uses an 8b/10b encoding [wikipedia.org], so 3.0Gb/s translates to 300MB/s. Data throughput will be less than that, thanks to control protocol overhead, though the overhead is very small.
Modern drives do far better than 25MB/s. Seriously, go look at benchmarks. Also, SSDs, which are a very real design influence on things like SATA, are already getting close to the 300MB/s mark.
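The conversion is simple enough to do in a couple of lines of Python (a rough sketch: the factor of 10 is the 8b/10b symbol size, and protocol overhead is ignored):

    # Rough payload math for the three SATA generations (8b/10b: 10 bits on
    # the wire per data byte; protocol overhead ignored).
    def sata_payload_mb_per_s(line_rate_gbps):
        bits_per_byte_on_wire = 10
        return line_rate_gbps * 1e9 / bits_per_byte_on_wire / 1e6

    for gen, rate in (("SATA I", 1.5), ("SATA II", 3.0), ("SATA III", 6.0)):
        print(gen, round(sata_payload_mb_per_s(rate)), "MB/s")  # 150, 300, 600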
Re: (Score:3, Insightful)
I really wish SATA 3.0 had a bigger jump than this. 600MB/sec is hardly anything for some of the high end SSDs and RAM-drives available.
If they become affordable, I'm definitely going for PCIe x4 SSDs, since they can hit 8GB/sec (80Gbit on the wire) when RAID'd on server boards with tons of PCIe lanes [channelregister.co.uk].
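Rough sketch of where an 8GB/sec aggregate figure can come from, assuming PCIe 2.0 at 5 GT/s per lane with 8b/10b encoding and four x4 cards striped together (real controllers lose some of this to protocol overhead):

    # Aggregate payload bandwidth of several x4 PCIe 2.0 SSDs striped together.
    GT_PER_LANE = 5.0          # gigatransfers per second per lane (PCIe 2.0)
    PAYLOAD_FRACTION = 8 / 10  # 8b/10b encoding overhead

    def pcie_payload_gb_per_s(lanes):
        return GT_PER_LANE * lanes * PAYLOAD_FRACTION / 8  # line bits -> payload bytes

    print(pcie_payload_gb_per_s(4))      # one x4 card:  2.0 GB/s
    print(4 * pcie_payload_gb_per_s(4))  # four striped: 8.0 GB/s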
I remember when someone stuck six FusionIO IODrives together and got about 2.2GB/sec of bandwidth out of a regular 2-socket server board. (like those Tyan ones, which can be had for well under $1000) It seriously makes me
Re: (Score:2)
This of course begs the question: other than the ability to say your computer is capable of doing this, what the hell use is it? Are you seriously moving around THAT much data that it is even remotely worth spending the kind of money it would take to actually accomplish this? If your reason really is that it's fricking cool, that's great, but I have trouble believing you have a practical use for this.
Re: (Score:3, Insightful)
>>Are you seriously moving around THAT much data
Fast boot speeds and load times, man, are the holy grail for PC gaming. When SSDs fall enough in price that they're remotely competitive, I'm slapping a SSD RAID0 into my box.
As it is, my 2x7200RPM RAID0 from late 2004 still outperforms a single SSD drive in my SiSoft benchmarks, so I'm happy for now.
Re: (Score:2)
Hehe. :)
Well, it would make some of my work go way faster. I imagine creating .ISO files would be whizz bang fast.
But no, I don't have a 24/7 need for it - maybe a 2h/1d need for it. :P It would mostly just be to boast that while most people are stuck at 100MB/sec, my computer is pumping through the gigabytes! ;)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re:Theoretical != Real World speeds (Score:5, Informative)
Sequential reads on large-capacity drives are often in the 70-90MB/s range (yes MB, not Mb), bursting into the 200MB/s range. Hell, I've seen 50MB/s+ for at least the last half a decade. High-quality (read: expensive) SSDs can roughly double that.
And of course, the spec is in gigabits per second, not gigabytes, and includes encoding overhead. Actual sustained transfer tops out at 150MB/s, 300MB/s, and 600MB/s on SATA I-III respectively.
Saturating current SAS/SATA buses is easy (Score:3, Informative)
With any RAID stripe on a reasonable controller, the 300MB/s SAS/SATA bus becomes the I/O bottleneck. Not much point going beyond 4-5 drives at the moment.
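Quick sanity check on the 4-5 drive figure, assuming the 70-90MB/s sustained per spindle quoted elsewhere in this thread:

    import math

    # How many spindles it takes to hit the SATA II payload ceiling.
    BUS_MB_S = 300
    for per_drive in (70, 90):
        print(per_drive, "MB/s per drive ->",
              math.ceil(BUS_MB_S / per_drive), "drives saturate the link")
    # 70 MB/s -> 5 drives, 90 MB/s -> 4 drives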
What I want though is for 10G ethernet to drop a little in price. Then it'll just be the one technology, and when 10G is too slow for storage I/O, the kit can be reused on the other side of the machine. iSCSI has made FC a legacy technology.
Re: (Score:3, Informative)
SATAx isn't a RAID controller. While people without good solid RAID controllers can get away with decent RAID0 performance, the serious people never rely on a single SATAx controller for RAID0, since that is not its purpose.
Re: (Score:2, Informative)
Re: (Score:2)
You don't need to buy a new system. Even if you need SATA3 bandwidth, third-party companies will release interface cards that perform better than the ones built into the mainboard. I'd take a dedicated, configurable SATA card with its own cache over that dumb chip on the mainboard any time.
Of course, if we're talking about a laptop, it's a different matter.
Re:What is the point? (Score:5, Informative)
Devices that present a striped array behind a single eSATA/SATA interface. While an individual drive may not be able to pump out enough data, together they can.
Re: (Score:3, Informative)
Re: (Score:2)
Re:What is the point? (Score:5, Informative)
Re: (Score:2)
Faster, Faster!
More, More!
I think it's some kind of nerdgasm with hard drive space and speed or something.
Re: (Score:2)
RAM drives would hit the SATA 3.0 cap, and they were available in 2005. The Gigabyte I-RAM could manage multiple GB of transfer per second (like all RAM can), but was capped by using a single 150MB/sec SATA1 port. :/
Luckily RAM is so cheap now, that if you really want a RAM drive, you just buy 16GB and create a software RAM drive.
Oh - but in 2005 games didn't average 8GB each. :P
Re:What is the point? (Score:5, Informative)
Current SSDs are very close to the SATA 2.0 limit and the performance of flash is about to double thanks to ONFI 2.0, so we can expect SSDs to quickly adopt SATA 3.0.
Re: (Score:3, Informative)
Not true. SSDs are approaching that now.
HP has an enterprise SSD that does 800MB/s (note the capital B, as opposed to b), so a drive like that could saturate SATA 3's 6Gb/s.
Re: (Score:2)
Re: (Score:2)
No current hard disk or even SSD can do 3Gb/sec so what is the point?
Oh yeah? [gizmodo.com]
Re:isn't it time for (Score:5, Informative)
No, because SAS will always be more expensive than SATA.
Re:isn't it time for (Score:4, Informative)
Actually, there really isn't much difference. The main difference is that hard drive manufacturers build their SCSI/SAS drives better than their IDE/SATA drives, because most SCSI/SAS drives are going into servers.
Historically SCSI was much faster, which is why it ended up in server hardware, but now it's mostly a matter of economics and pricing.
Re: (Score:2)
For drives of equivalent spec, whether on SAS or SATA, at the same spindle speed, I suspect that it is largely marketing fluff and a few firmware tweaks; but 15k RPM vs. slower is a nontrivial difference.
Re: (Score:3, Interesting)
For drives of equivalent spec, whether on SAS or SATA, at the same spindle speed, I suspect that it is largely marketing fluff and a few firmware tweaks; but 15k RPM vs. slower is a nontrivial difference.
I agree completely. We've got two SANs at work... the older one is full of U320 10k RPM drives and the new SAN is all 15k RPM SAS drives. The new SAN leaves the old one in the dust (and has 20TB more space, too! :D).
Speaking of high IOPS and SSD (Score:2)
Until recently, to get decent performance in a reasonable size you needed a huge SAN with hundreds of spindles. Now that you can get stuff like the OCZ Z-Drive [it-review.net], the PhotoFast G-Monster [pclaunches.com] and of course the Fusion-IO IODrive Duo [fusionio.com], that's not really necessary unless you also need >6TB. The 50 microsecond latency is just a bonus.
And oh, joy, there will be more. The SAN vendors who are betting next year's revenue on those $million+ performance SANs had better get a plan B, and quick.
Re: (Score:2, Interesting)
Re:isn't it time for (Score:5, Informative)
Right now, our technology is better suited to pure serial. In the past, it was parallel. It might swing back and forth a couple of times between the two in the future. But make no mistake: right now, on commodity hardware for drives connected via cables, serial is pulling ahead in the speed war.
Re: (Score:2)
at either end of a Parallel link you'd have to re-serialize right?
Why? At the disk end, enough platters/heads will give you bits in parallel. Just buffer each to allow for skew, and you can read a byte on each clock tick.
And why would you need to re-serialise at the main bus end?
NOT that I am saying parallel is a good idea.
Re: (Score:3, Informative)
Firstly, multi-platter is not ideal: increased heat and increased complexity both increase the rate of failure.
Secondly, you are now baking the number of platters/heads into the standard, when in reality everyone wants something different (reliability over density).
Re:isn't it time for (Score:5, Informative)
The problem with parallel is that you can't crank up the clock speed, because you have to make sure that the signal on each line is combined with the ones from the other lines that were sent at the same time. This limits how fast you can send the bits (if the bit time is comparable to the skew time, the receiver will not be able to reliably reassemble the data) and how long the interconnect can be (skew being linearly amplified by length). It's not for nothing that PCI has been replaced with PCI-E, PATA with SATA, SCSI with SAS. USB and IEEE 1394 would be impossible with parallel. Serial communications are more reliable and more scalable (one big exception -- wireless RF, but that's not what we are discussing here).
Multiprocessing, incidentally, has nothing to do with it -- the software interface to a storage device hides all the implementation details (PATA/SATA, for instance) anyway. The hard part in multi-threading IO-intensive apps has quite a bit more to do with latency issues and atomicity guarantees (or the complete lack thereof) than with the inability of the storage device to do two things at once (which, for a physical disk, is impossible anyway, meaning it would have to be back-converted into a serial process regardless).
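To make the skew point above concrete, a toy model with made-up numbers (the exact safety margin varies by receiver design):

    # Toy model of the skew limit: the clock period has to stay comfortably
    # above the worst-case skew between lines, or the receiver can't
    # reassemble the word.
    def max_parallel_clock_mhz(skew_ns, safety_fraction=0.5):
        return safety_fraction / (skew_ns * 1e-9) / 1e6

    print(max_parallel_clock_mhz(1.0))  # 1 ns of skew -> ~500 MHz ceiling
    print(max_parallel_clock_mhz(2.0))  # double the cable length, halve the ceiling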
Re: (Score:2)
That's right, but what the OP is (I think) saying is that we really should just look at overall performance. Generally speaking, you can get more done per clock cycle over a parallel interface than over a serial one. And, of course, what makes serial interfaces beat parallel interfaces (when they do win) is their ability to be clocked at a higher rate.
Re:isn't it time for (Score:5, Insightful)
And what you clearly missed from the post you're responding to is that the clock rates that you can get from serial are so much higher than what you can do with parallel that it more than offsets the disadvantage of serialization.
There are two things that limit the speed of parallel interfaces. As the GP mentioned, one is signal skew. The clock rate has to be slow enough so that the receiver can sample all data lines at the same time and get them all within the data eye. The second is that the data lines are single-ended, meaning that there's only one wire per signal. The clock rate has to be slowed down to ensure that the signals have reached full on or full off at the other end and that they're noise free.
High-speed serial interfaces use DIFFERENTIAL SIGNALLING. The signal is transmitted over two wires that switch in antiphase. You decode them by comparing them. This has several beneficial effects. One is that noise affects them the same, so even if they're both offset by noise, they compare the same. The other is that now you don't have to wait as much on the effects of resistance, capacitance, and inductance. You can reliably decode the signal before the transitions are complete. (Look up "slew rate".)
So, using the same basic silicon technology, you can get a single differential pair to transmit data MUCH faster (in bytes/sec) than you can with parallel. It's interesting to see how technology transitioned from serial to parallel (wider means more bits per second), back to serial. The reason they didn't just stick with serial was that they just didn't have the technology to make the I/O drivers go that fast until recently.
IIRC, a 1x PCI Express channel is a single differential pair for data. (I think there's a side band channel and some other stuff.) This is just like DVI and SATA. 16x PCI Express is sixteen 1x channels. The trick here is that although data is interleaved across all 16 channels, those channels are not synchronized with each other. They are out of phase, and the data is just put back into phase at the receiver.
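A minimal sketch of the striping idea; it ignores real PCIe framing, scrambling, and deskew markers, and just shows that bytes dealt out across lanes come back in order at the receiver:

    LANES = 16

    def stripe(data: bytes):
        # Transmit side: deal bytes round-robin across the lanes.
        return [data[i::LANES] for i in range(LANES)]

    def unstripe(lanes):
        # Receive side: interleave the per-lane streams back into one stream.
        out = bytearray()
        for i in range(max(len(lane) for lane in lanes)):
            for lane in lanes:
                if i < len(lane):
                    out.append(lane[i])
        return bytes(out)

    payload = bytes(range(64))
    assert unstripe(stripe(payload)) == payload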
Re:isn't it time for (Score:5, Insightful)
Well, this may not be exactly what you were getting at, but I'd like to split hairs here anyway, and divide this into two separate issues that SATA/SAS resolved.
For best results it's important to model the cable as an RF transmission line, with a specific impedance. An ideal transmission line has the important qualities that all the energy you send from one end will arrive at the other, and none will be reflected back to you. To get reasonably close to this ideal, we space the wires we use a fixed distance apart (in relation to the wire's diameter), choose our dielectric (insulating material) carefully, use terminating resistors at both ends, and keep the line a simple line (no tees, etc.)
For those of you who cut your teeth on parallel SCSI, 10base2/10base5 Ethernet, or Apple LocalTalk, you'll wax nostalgic about just how much of a pain in the ass this was.
For those of you who have only messed with parallel IDE, you'll wonder why you never had to deal with this. The reason is that IDE cheated a little bit - they only terminated the controller (motherboard) side of the bus, and let the signals reflect off the other end. This left only a master/slave/cable-select jumper to infuriate you - but it also limited how long an IDE cable could be and prevented them from jacking up the clock rates on it.
SATA/SAS fixes this for good by limiting you to one device per cable ("port", not "bus"). Both ends are hard-wired to always terminate and any cable problems are limited to a single drive.
The other issue you may have been referring to is balanced (differential) vs. unbalanced signalling (where one wire is held to ground and the voltage read off the other wire). Electrical engineers do commonly call unbalanced signalling one wire because ground is so boring that they never bother to connect it on their schematics, but it does have to be connected in real life and coax Ethernet/most old SCSI/Parallel IDE/RS-232/VGA still used two wires per signal. Balanced/differential signalling (LVD/HVD SCSI, SAS, SATA, 10/100/1000baseT, USB, telephone lines, T1 lines, LocalTalk, etc.) allows for the can't-imagine-life-without-it common-mode noise rejection technique you describe.
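Going back to the termination point, the textbook reflection formula in a few lines (the 100 ohm figure is purely illustrative, not quoted from any spec):

    # Fraction of a signal reflected from the far end of a transmission line:
    # gamma = (Z_load - Z_line) / (Z_load + Z_line).
    def reflection_coefficient(z_load, z_line):
        return (z_load - z_line) / (z_load + z_line)

    Z_LINE = 100.0  # illustrative differential line impedance, ohms
    print(reflection_coefficient(100.0, Z_LINE))  # matched terminator: 0.0, no echo
    print(reflection_coefficient(75.0, Z_LINE))   # mismatch: ~-0.14 bounces back
    print(reflection_coefficient(1e9, Z_LINE))    # effectively open end: ~1.0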
Re:isn't it time for (Score:4, Interesting)
And the OP is, frankly, unaware of the history of SCSI and PATA. Those big wide cables are deprecated for many reasons: one is their expense, another is their fragility, and another is the incredible variety of vaguely distinct, and often stupidly different, specifications for such broad interfaces. I had to deal with that debris for decades, and it was extremely painful.
The amount of time saved by consistent, small interfaces with fewer things to screw up is enough, by itself, to make up for the expense of any drives lost to the fragility of the SATA connector. I remember the amazing crap shoot it used to be to design a SCSI chain of devices, the awful incompatibility and expense of the cables even for what were nominally the same type of SCSI, and the tendency of those connectors to bend pins or fail under stress.
Give me SATA (and its low cost peer for external devices, USB), any day over the technically superior but less consistent SCSI and firewire.
Re: (Score:3, Interesting)
Apparently you are not completely up to snuff with your jargon there.
I have worked with the guts of computers long enough to have known ESDI drives (in the PS/2, no less); those had, as far as I remember, serial data lines (and a separate control line to control head movements). Then came SCSI and IDE (later standardized as ATA, with faster versions as EIDE or ATA-2, ATAPI for CD/DVD/ZIP drives, and recently known as PATA), which were parallel.
The first SCSI drives I used had 8 data lines (SCSI-2) - you could
Re: (Score:2)
Re: (Score:2)
The problem with the parallel approach is the difficulty in ensuring parallel signals get to their destinations at the same time.
Maybe in the future we'll figure out how to take today's high signaling rates and parallelize them, but the engineering choices made right now are for good reasons.
Forget Heads... (Score:5, Insightful)
where there are multiple INDEPENDENT heads reading/writing on multiple platters all at the same time
The entire idea of 'heads' should be forgotten. Mechanical drives should be sent to oblivion and we should welcome your idea of parallelism on solid state solutions.
Re: (Score:2)
Did I miss the memo that says flash no longer has a limit on how many times it can be written upon?
Re: (Score:3, Insightful)
Re: (Score:2)
It has a limit, but it's hard to reach even with cheap flash, and almost impossible with good flash unless you're doing something really unusual. And the bigger the disk gets the harder it is to reach it, due to wear levelling.
Also, hard disks are not eternal by any measure; they fail mechanically, often without warning and in a much less predictable fashion.
Re: (Score:2)
Hard to reach? Try installing Vista on one, and letting it use it for temp files, system logs and the page file.
In fact, that's all on by default! Not exactly hard.
A better idea is to install on a normal HDD, then copy all the files from Program Files onto an SSD, mount it in an empty folder, and point Windows to that for its Program Files directory. Not the neatest solution, but it stops the drive dying too young and gives a marked speed boost.
Re: (Score:2)
If you're swapping enough to kill an SSD, you're doing it very wrong. Get more RAM, it's cheap.
SSDs do wear levelling, so even if you have a limit of 10K erases, that's a per-block limit, and there are a LOT of blocks on a modern SSD. To kill it, you'll need to make a quite serious effort.
For instance, suppose you leave only 16GB free on your SSD, which has 128K erase blocks. That means there are 131072 blocks the SSD can recycle, not including whatever amount it reserved internally. To kill those, you'll need
Re: (Score:2)
Let's see: with Firefox, Steam and the performance monitor open, I am clocking about 4MB/min of writes (and that's with not a lot of Firefox windows open; it's using about half of that amount).
Note that since most Windows volumes use 64K clusters, every time 64K of data is written, a 128K block needs to be erased and written.
Oh, and my work PC (the one the measurements were taken on) has 4GB of RAM under Vista 64; my home machine has double that.
Re:Forget Heads... (Score:4, Informative)
Ok, so let's say those 4MB/minute (IO writes/s would be a better measure) are made from 64K requests. So that's 64 requests/minute, or about one a second.
That's not terribly high, so let's double it to 2 requests a second.
1310720000 max block erases, at 2 per second will last 7585 days, or 20 years.
This is assuming an MLC drive with 16GB available for reallocation. If you use an SLC drive, you probably won't live long enough to see the disk wear out, and even with MLC it's doubtful you're going to keep the same drive around for 20 years. Twenty years ago you'd have been running a 386 or a 486 with maybe 200MB of disk space, and you can't even plug a hard disk from back then into most modern computers.
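Spelling that arithmetic out, with the same assumptions (16GB spare area, 128KB erase blocks, 10K erases per block, two erases per second):

    SPARE_BYTES = 16 * 2**30      # 16 GB kept free for wear levelling
    ERASE_BLOCK = 128 * 2**10     # 128 KB per erase block
    ERASES_PER_BLOCK = 10_000     # MLC endurance assumed in the thread

    blocks = SPARE_BYTES // ERASE_BLOCK       # 131072 blocks to rotate through
    total_erases = blocks * ERASES_PER_BLOCK  # 1,310,720,000 erases
    days = total_erases / 2 / 86400           # at 2 block erases per second
    print(blocks, total_erases, round(days), "days")  # ~7585 days, roughly 20 years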
Re: (Score:3, Insightful)
You are very, very kind to Windows. That 4MB/min of IO was across about 20 different processes, most of which were writing a few bytes a second, not nice neat 64K writes (or even, at your doubled rate, 32K writes).
Re: (Score:2)
Btw, a good read on some of the things that can go badly for them:
http://forums.slizone.com/index.php?showtopic=29361 [slizone.com]
Re: (Score:2)
Re: (Score:2)
Apparently. At the very least you are making "perfect" the enemy of "damn good".
Re: (Score:3, Informative)
Did I miss the memo that says flash no longer has a limit on how many times it can be written upon?
No, but the limits are sufficiently high with current technology revisions that it isn't really a problem.
For good solid state drives, in all but the most convoluted use cases the expected average time before failure is of about the same order as, or some claim better than, spinning-disk based drives. I emphasize the word "good" in that last sentence, as this may well not extend to cheap USB sticks that could be using old-design memory and controllers and are generally subject to harsher physical conditions t
Re: (Score:3, Interesting)
It's been tried, and didn't work well.
The drive heads are some of the most expensive parts of a hard disk, so it raises the price considerably. Then you get higher power usage, heat generation, decreased reliability, and higher complexity in exchange for the extra performance.
The problem is that normal people don't look at speed, they look at capacity. So they won't buy the expensive drives. And the people who do look at things like bandwidth and latency are already running a RAID and benefitting from multi
Re:isn't it time for (Score:5, Insightful)
Why? Do you have a hard drive that can even saturate a SATA I bus?
Re:isn't it time for (Score:4, Interesting)
Yeah, while swearing at Apple 24/7 for shipping SATA1 with the Quad G5 workstation (the most expensive G5), I purchased a very nicely performing Western Digital Caviar 1TB drive with a 32MB cache. It took a while to figure out that I can't really saturate the SATA1 bus; even with OS X's "fill with zeros" format, it only went up to 140MB/sec. Of course, Apple expects me to buy an ATTO-like high end card if I need more bandwidth.
What matters is SSDs; that is why they're releasing the spec right now. If you have enough money to set up a very high end (not toy-like) SSD right now, you will see SATA2 is the bottleneck. People were already talking about a different standard, or even getting rid of SATA altogether, for them.
Re: (Score:2)
Re: (Score:3, Insightful)
Yes. I do. My single drive has an average sustained transfer rate of 230MB/s. A SATA1 bus would severely constrain the performance of my drive (an Intel x25-m).
There are numerous other SSDs on the market whose manufacturers focused on sustained rather than random access performance, and they already hit the 300MB/s wall of SATA2. And I expect that Intel's next series of drives will do the same. SATA2 is woefully unprepared for the present, let alone the very near future; it's slow enough to al
Re: (Score:3, Informative)
Prepare for mass storage connected to the north bridge.
/me wanks furiously!
Re: (Score:2)
Preparing for how much faster your porn will load?
Only a woman could have (Score:4, Funny)
Re: (Score:3, Insightful)
Agreed that it's eventually going to be on the northbridge. However, SAS isn't there now, either, and SSDs are still likely to saturate that bus in the near future.
SATA vs SAS is a different debate than IDE vs SCSI. Even on servers, it's now easy to justify the cheaper standard compared to the older one. Not in all cases, of course, but far more often than you could with IDE.
Huh? (Score:2, Interesting)
There are several SSDs currently that offer more than 1GB/s Read/Write, which would more than saturate this bus. I mentioned them here [slashdot.org]. The trick is that they don't use this bus. Because that would be silly.
Re: (Score:2)
Re: (Score:2)
If you are doing large database work, need redundancy, or do 2K/4K video work, you may need SAS. In fact, you would still boot the OS and apps from a Serial ATA device and use SAS for the program data (database, movie, etc.). SATA and SAS have compatible connectors for that reason. They don't really replace each other.
Of course, SAS is really expensive, but if you are at a professional studio where speed may actually earn you more money, you wouldn't care.
Interestingly, even SMART like features of SCSI does
Re: (Score:2)
from "Qui audet adipiscitur" (who dares, wins)
Re: (Score:2)
Probably not. Even if you had a device that could supply the data, most (if not all) commercial interfaces aren't actually capable of moving it that fast.
Re:SSD (Score:5, Informative)
If my understanding of the technology is correct, the seek time on most hard drives already limits drive access speed to typically be slower than 3Gb/sec. Would this rely on a transition to Solid State Drives for any noticeable difference in performance?
The seek time has nothing to do with the throughput. The seek time refers to the latency between when a read command is issued and when it begins to be fulfilled. The throughput refers to the data transferred per unit time during fulfillment.
Here's a nice car analogy for those of us in New England -- consider the Mass Pike versus I-93. The Mass Pike has a very long seek time from the onramp because of the toll lanes (and the mouth breathers that won't get a transponder even though they are now free and clog the automatic lanes) but once you get on the highway, you can go 80 MPH until your exit. On I-93, by contrast, you can get right on, but you will be going 30 MPH for the duration. Of course, if you drive down to CT and get on I-84, you have a low-latency AND high throughput highway but if you drive too far down to, say, the Bronx, it becomes high-latency and low throughput.
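Skipping the highways, the same distinction in numbers (illustrative figures, not benchmarks):

    # Latency vs. throughput with illustrative numbers:
    # a disk at ~9ms seek / 80MB/s vs. an SSD at ~0.1ms / 250MB/s.
    def access_time_ms(seek_ms, throughput_mb_s, size_kb):
        return seek_ms + (size_kb / 1024) / throughput_mb_s * 1000

    for size_kb in (4, 4096):
        hdd = access_time_ms(9.0, 80, size_kb)
        ssd = access_time_ms(0.1, 250, size_kb)
        print(f"{size_kb:>5} KB read: hdd {hdd:6.2f} ms, ssd {ssd:6.2f} ms")
    # Small random reads are dominated by seek time; large sequential reads by throughput.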
Re: (Score:2)
and the mouth breathers that won't get a transponder even though they are now free and clog the automatic lanes
Some of us don't like the government being able to monitor our vehicle's location, and another group of us doesn't like the government to have direct access to our bank account.
Re: (Score:2)
Yes, but now they don't need the RFID to track your car. Those barcode readers you see at every toll booth make quick work of scanning your registration sticker every time you pass by.
Just sayin'.
Re: (Score:2)
Ah, this is great. We have the car analogy. Now all we need is for someone to write a post with a Hitler/Nazi reference and we can mark this one complete.
Re: (Score:2)
Maybe they will double the disk cache to something larger like 16MB or 32MB. Even with only 8MB, it really amazes me that the disk drive alone has more RAM than PCs from a decade ago.
Re: (Score:2)
Hard drives have been shipping with 16MB cache for several years now, and plenty are available with 32MB. RAM is so damn cheap though that they should be seriously considering stuff in the 1GB range for high-end drives.
Re: (Score:2)
For hard disks, making them much faster isn't really possible. The disk needs to spin faster, or the information needs to be packed more tightly. Current advances are mostly in the packing, but aren't yet reaching even SATA II levels.
Hard disks will get a slight benefit though because they have a cache and they can transfer data from or to it faster than the platter can handle.
For SSDs, even exceeding SATA 3 is perfectly possible by simply internally parallelizing requests. Also, for SSDs, the interface's
Re: (Score:2)
Yes, I was overlooking the effect of striping multiple drives on the SATA bus, but I doubt if even the fanciest RAID 0 or 5 disk array can come close to saturating even SATA II. SSDs are a much bigger threat, but still pretty costly.
Re: (Score:2)
Except the interface specs and other technology will move forward yet again before the devices themselves ever catch up, as has happened with virtually ALL the SATA-bearing motherboards I have ever bought. I'm paying for an interface that I will never be able to fully utilize before the motherboard becomes obsolete e-waste. I don't think the total cumulative cost of this interface advancement is as cheap as you think it is, and I don't like paying for something I can't even fully use. I can cite
Re: (Score:3)
Re: (Score:2)
That'd be because the SATA specs are coming out every 3 years and the networking ones seem to take 7.
For Ethernet: 10Mbps was 1985, 100Mbps was 1995, 1Gbps was 1999 and 10Gbps was 2006 (ref: Wikipedia 802.3).
SATA went from SATA 1 to SATA 3 between 2003 and 2009 (ref: Wikipedia SATA).
Sorry about the references, CBF doing links today.
Re: (Score:2)
If you increase 10x each generation you have to wait quite a while between generations. You can also end up in a "Goldilocks" situation where 1 Gbps is not enough but 10 Gbps is overkill and too expensive. 2x or 4x per generation is a lot smoother.
Re: (Score:3)
don't worry. by the sound of your attitude towards others it doesn't look like you'll have much of a future anyway. better to give it to someone who deserves it.