The State of Solid State Storage 481
carlmenezes writes "Pretty much every time a faster CPU is released, there are always a few who marvel at the rate at which CPUs get faster but lament the sluggish rate at which storage evolves. Recognizing the allure of solid state storage, especially to performance-conscious enthusiast users, Gigabyte went about creating the first affordable solid state storage device, and they called it i-RAM. Would you pay $100 for a 4GB Solid State Drive that is up to 6x faster than a WD Raptor?"
Am I getting old? (Score:5, Insightful)
New Tech (Score:3, Insightful)
$15,000 for a 500GB solid state drive isn't affordable.
$100 for a 4GB solid state drive is affordable, but not worth the price.
What they need to do is make the tech better, yet affordable. What makes it so expensive to competitively price large solid state storage devices?
On a side note, is anyone actually going to buy a drive that is 4GB and costs 100 bucks? I don't think it's much use to anyone.
Umm, more than that... (Score:5, Insightful)
Re:Let me think. (Score:4, Insightful)
Swap Drive (Score:3, Insightful)
Deja-Vu all over again (Score:3, Insightful)
Looks like they dug up an old PCB design, added a battery backup, and changed the connectors to work with modern RAM.
Among other things, I handle the physical hardware design spec for my company's product (the product is software which is loaded onto hardware to make an "appliance"). I've received emails from quite a few vendors recently offering this sort of solid-state NV storage. I think this market sector is really starting to creep forward, and these might be the kinds of "disks" we see as the norm in the not-so-distant future.
I think first off, though, these will be like caching drives - holding only the data that is most seek-time sensitive to a particular application.
Re:Let me think. (Score:3, Insightful)
-Jesse
Re:More than $100... (Score:2, Insightful)
That means this card uses your old chump-RAM, or very very cheap to buy RAM. It's a good deal, just in that it gives me something to do with all the PC2100 I've got laying around.
I'm afraid... (Score:2, Insightful)
Re:No Way! (Score:3, Insightful)
Re:Let me think. (Score:5, Insightful)
I also wonder if the benchmarks were done with drive caches on or off. I would imagine that this drive would be faster with caches off. With what might as well be zero latency on disk accesses, the benefit of a cache is lost; reading ahead probably will only waste bandwidth reading stuff we may not need.
I'm very disappointed that the article didn't mention SATA2 (300MB/s), which is already available in most new motherboards. With double the bandwidth it would have made a big difference. It's very likely the device doesn't support SATA2. However the Anandtech article makes NO MENTION at all of SATA2, not even to the point of saying "We'd like to see this drive with SATA2 support."
Disk evolution (Score:5, Insightful)
On the contrary, I've always been amazed at the rate of price/performance evolution in HD technology.
Consider that in 1982 a 10 MB disk cost something on the order of $3500, while today you can reasonably expect to get an 80 GB disk for $50. That's a drive with 8000x the storage for 1/70 the price, or a price/MB improvement of roughly 560,000x. And that doesn't take into account the dramatic improvement in reliability and speed (both access and interface) that the newer drives exhibit. Do you think CPUs have kept up with this?
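A quick sanity check of that arithmetic, using the round figures quoted above ($3500 for 10 MB in 1982, $50 for 80 GB today) and nothing else:

```python
# Back-of-the-envelope check of the price/MB improvement claim.
# Figures are the ones assumed in the comment above, not measured data.
old_price, old_mb = 3500.0, 10.0        # 1982: $3500 for 10 MB
new_price, new_mb = 50.0, 80_000.0      # today: $50 for 80 GB (80,000 MB)

capacity_gain = new_mb / old_mb                      # how much more storage
price_drop = old_price / new_price                   # how much cheaper overall
per_mb_gain = (old_price / old_mb) / (new_price / new_mb)  # $/MB improvement

print(f"{capacity_gain:.0f}x capacity, 1/{price_drop:.0f} the price, "
      f"~{per_mb_gain:,.0f}x cheaper per MB")
```

Multiplying the two factors together (8000x the capacity times 70x cheaper) gives the per-megabyte improvement directly.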
I've heard people predict the end of moving-parts mass storage for years now, but it still seems pretty distant considering the great values we're getting with HD technology.
Re:New Tech (Score:2, Insightful)
This type of storage makes for a good
Another good use would be to use it as a filesystem cache for the OS on a diskless network client.
Re:No Way! (Score:2, Insightful)
Re:Non-Technical Users Don't Understand (Score:5, Insightful)
3. Imagine how fast your database server would be with its transaction log installed in a memory file. Hey, throw the tempdb (for SQL Server) on there as well, or since the memory is now just standard memory and won't need a special driver, you can just switch to Linux and use a real database.
Re:Darn straight I would/will! (Score:5, Insightful)
Mmmm, hyper-fast builds that don't depend on the latency of moving parts...
This doesn't make sense. I suspect that you were misled by the incorrect summary. You don't get 4GB of solid state storage for $100. That would actually be a really good deal. All you get is a card which has SATA on one side and RAM slots on the other side.
So instead of buying this card, you could put the $100 toward a motherboard that supports > 4GB of RAM. Then the RAM will be sitting on a bus that can actually sustain data rates WAY higher than SATA.
Since you don't need persistent storage for cache, it makes little sense to stick it on a bus that can theoretically do, what, 150 MB/s, when you can stick it on a bus which can do several GB/s.
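To put some rough numbers on that gap, here is a sketch comparing the time to move the card's full 4 GB over first-generation SATA versus over a memory bus. The bandwidth figures are assumptions (150 MB/s for SATA 1.5Gbps, ~3.2 GB/s for something like dual-channel DDR400), not benchmarks of the i-RAM:

```python
# Assumed round numbers: how long does it take to shuffle the card's
# entire 4 GB over SATA vs. over a commodity DDR memory bus?
MB = 1024**2
data_mb = 4 * 1024                 # 4 GB expressed in MB

sata_bw_mb = 150                   # ~150 MB/s, SATA 1.5Gbps ceiling
ddr_bw_mb = 3200                   # ~3.2 GB/s, dual-channel DDR400 (assumed)

sata_time = data_mb / sata_bw_mb
ddr_time = data_mb / ddr_bw_mb

print(f"SATA: {sata_time:.1f} s, memory bus: {ddr_time:.1f} s")
```

Even granting SATA its theoretical peak, the memory bus moves the same data more than twenty times faster, which is the parent's point about putting cache RAM on the wrong bus.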
I don't really see the point of this card, since it will only keep the data for 16 hours if not powered. In other words, if you leave for a weekend and for some reason the power to your PC is turned off, you're out of luck.
Other cards that I have seen in the past that make more sense, actually have a normal drive for persistent storage. If power fails, there's enough backup power to write everything to disk. That's basically like having cache on the disk equal to the size of the disk.
Bottom line: this is a rehash of what's been done many times before; it didn't really take off then, and considering the relatively stupid implementation, it probably won't take off now.
Could be great for some data servers (Score:3, Insightful)
Re:I'd use Raid (Score:4, Insightful)
Latency comes from three sources:
1) Head latency.
2) Rotational latency.
These are the two sources you have considered. Striping indeed does absolutely nothing to help there.
You forgot the third source of latency:
3) The-disk-is-busy-serving-another-request latency.
Your comment would be true for a primitive OS with a single-threaded I/O method, and/or a RAID system with no command queue.
Given that modern RAID systems are NOT primitive, I/O performance is no longer measured with rotational + head latency vs. throughput, because those measurements no longer make sense.
There are two kinds of performance measurements for modern disk subsystems:
1) MB/sec. (bandwidth) This is what most people think of when they think of throughput.
2) I/O / sec. This measurement is simply the reciprocal of the head+rotational latency in the case of a SINGLE DRIVE. However, in a multi-drive setup, max I/O / sec. increases proportionally with the number of drives, up to a point (eventually you hit the limits for the RAID controller, bandwidth, whatever).
If we measure latency as the time it takes a single drive to physically get the data given a single request, sure, multiple drives don't help. If we measure latency as the amount of time between when the application asks for the data and when the disk delivers it, RAID helps quite a bit, because the different I/Os are distributed to multiple disk heads, each of which can contribute its own I/O handling capacity.
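The third kind of latency above can be sketched with a toy model: per-request (head + rotational) latency stays fixed, but with a command queue spreading requests across spindles, aggregate I/O per second scales with the number of drives. All figures here are assumptions for illustration, not measurements of any real array:

```python
import random

# Toy model of "the-disk-is-busy" latency: each drive serves its own
# queue independently, so total service time is set by the busiest
# drive's queue, and aggregate IOPS grows with the number of spindles.
def aggregate_iops(n_drives, per_request_latency_s=0.008, n_requests=10_000):
    """Distribute n_requests randomly across n_drives; return total IOPS.
    8 ms per request (assumed) approximates head + rotational latency."""
    queues = [0.0] * n_drives
    for _ in range(n_requests):
        queues[random.randrange(n_drives)] += per_request_latency_s
    return n_requests / max(queues)  # array finishes when busiest drive does

random.seed(1)
for n in (1, 2, 4, 8):
    print(f"{n} drive(s): ~{aggregate_iops(n):.0f} IOPS")
```

A single 8 ms drive tops out at 125 IOPS no matter what, while eight drives land close to 8x that, minus a little queue imbalance, which is exactly the distinction between per-request latency and delivered I/O rate.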
SirWired
Re:Disk evolution (Score:3, Insightful)
If I could make a not-so-appropriate industrial comparison to the article summary:
Pretty much every time a faster F1 engine is released, there are always a few who marvel at the rate at which the cars get faster but lament the sluggish rate at which diesel engines evolve.
HDD and CPUs are different beasts that do different tasks, and fight different issues. It is not surprising that one can pick up speed and the other capacity.