Sun Adding Flash Storage to Most of Its Servers 113
BobB-nw writes "Sun will release a 32GB flash storage drive this year and make flash storage an option for nearly every server the vendor produces, Sun officials are announcing Wednesday. Like EMC, Sun is predicting big things for flash. While flash storage is far more expensive than disk on a per-gigabyte basis, Sun argues that flash is cheaper for high-performance applications that depend on high I/O operations per second (IOPS) rates."
Write cycles. again. (Score:4, Insightful)
Good (Score:3, Insightful)
I, for one, welcome our new flash disk overlords
Re:Lifespan? (Score:2, Insightful)
Re:Write cycles. again. (Score:4, Insightful)
Then, in a fit of wisdom, a few posters, all of whom will be modded down as flamebait, will say "There's room for both and price/performance does matter, at least for now."
Re:We are going to have two layers of storage (Score:0, Insightful)
Re:We are going to have two layers of storage (Score:3, Insightful)
Two layers, but not those ones (Score:5, Insightful)
High frequency, low volume operations - metadata journalling, certain database transactions - will go to flash, and low frequency, high volume operations - file transfers, bulk data moves - will go to regular hard drives. SSDs aren't yet all that much faster for bulk data moves, so it makes the most economic sense to put them where they're most needed: where the IOPS are.
Back in the day, a single high-performance SCSI drive would sometimes play the same role in front of a big, cheap, slow array. Then, as now, you'd pay the premium price for the smallest amount of high-IOPS storage you could get away with.
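To make the parent's split concrete, here's a minimal sketch of an I/O router that steers small, journal-style operations to a flash tier and bulk transfers to spinning disks. The 4 KiB threshold and the function names are my own illustrative assumptions, not anything Sun has announced:

```python
# Hypothetical I/O tiering sketch: small/random work goes to the
# high-IOPS flash tier, bulk sequential work goes to the HDD array.
SMALL_IO_BYTES = 4096  # illustrative cutoff for "small" requests

def pick_tier(request_bytes, is_journal_write=False):
    """Return 'flash' for high-IOPS work, 'hdd' for bulk transfers."""
    if is_journal_write or request_bytes <= SMALL_IO_BYTES:
        return "flash"   # metadata journalling, small DB transactions
    return "hdd"         # file transfers, bulk data moves

print(pick_tier(512, is_journal_write=True))  # journal write -> flash
print(pick_tier(8 * 1024 * 1024))             # 8 MiB bulk move -> hdd
```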
Re:Write cycles. again. (Score:2, Insightful)
Re:Samsung 256GB Flash Drive (Score:4, Insightful)
Just because you don't want to doesn't mean nobody else does.
Re:I'm surprised that it is big enough to talk abo (Score:5, Insightful)
MTBF (Score:0, Insightful)
Wha??!?!? (Score:1, Insightful)
(note the difference: GB = gigabytes vs. Gb = gigabits)
Yeah, I know I could buy a 4Gb/s FC RAMSAN unit -- anybody got $50k lying around?
oh yeah, and they don't burn a rack unit either, and their power consumption (and heat) is ridiculously lower.
Not that any of us need high performance storage devices for things like databases, or ZFS journal logs.
Re:Lifespan? (Score:2, Insightful)
The fact that flash is only really well suited for infrequent writes and frequent non-contiguous reads doesn't bode well for its utility in OLTP applications.
Re:Samsung 256GB Flash Drive (Score:3, Insightful)
You can go either way with it, but I think faster (and smaller) drives are more attractive than bigger and slower.
You need to compete against the sequential speed of a 15,000 rpm SCSI drive too (SSDs will beat them hands down on access speed, but not all workloads are small random reads).
RAID 4, anyone? (Score:4, Insightful)
This has benefits for workloads that issue many small randomly located reads and writes: if the requested data size is smaller than the block size, a single disk can service the request. The other disks can independently service other requests, leading to much higher random access bandwidth (though it doesn't help latency).
One of the side effects of this is that the parity disk must be much faster than the data disks, since it must service every write request in the array to keep the parity current. Here SSD shines: its quick random access times are exactly what's needed, and its weaker sequential performance matters less. Interesting, no?
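For the curious, here's a toy model of that RAID-4 behavior: every small write hits the dedicated parity disk via a read-modify-write XOR, while the untouched data disks stay free to serve other reads. Everything here (byte-sized blocks, three data disks) is simplified for illustration:

```python
# Toy RAID-4 sketch: three data "disks" plus one parity disk, one byte
# per block for brevity. Note how every small write updates the parity disk.
data = [bytearray(4) for _ in range(3)]   # 3 data disks, 4 blocks each
parity = bytearray(4)                     # dedicated parity disk

def write_block(disk, block, value):
    old = data[disk][block]
    data[disk][block] = value
    # read-modify-write: new_parity = old_parity XOR old_data XOR new_data
    parity[block] ^= old ^ value          # the parity disk sees every write

def recover(block, failed_disk):
    """Rebuild a block of a failed disk by XORing parity with survivors."""
    v = parity[block]
    for d in range(3):
        if d != failed_disk:
            v ^= data[d][block]
    return v

write_block(0, 2, 0x5A)   # disks 1 and 2 stay free to serve reads...
write_block(1, 2, 0x33)   # ...but parity[2] was updated both times
assert recover(2, 0) == 0x5A
```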
Cheaper than RAM? (Score:4, Insightful)
Okay, you would still need a HDD for backing store, but in many server applications involving databases (high performance dynamic web servers for example) a normal RAID can cope with the writes - it's the random reads accessing the DB that cause the bottleneck. Having 200GB of database in RAM with HDDs for backing store would surely be higher performance than SSD.
For things where writes matter like financial transactions, would you want to rely on SSD anyway? Presumably banks have lots of redundancy and multiple storage/backup devices anyway, meaning each transaction is limited by the speed of the slowest storage device.
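The parent's setup - a big RAM cache absorbing the random reads while an ordinary RAID copes with the writes - can be sketched roughly like this; the capacity, names, and write-invalidate policy are invented for the example:

```python
# Illustrative RAM read cache in front of slow backing storage.
# A dict stands in for the HDD RAID; the LRU policy is an assumption.
from collections import OrderedDict

class RamReadCache:
    def __init__(self, capacity, backing):
        self.capacity = capacity
        self.backing = backing        # dict standing in for the HDD RAID
        self.cache = OrderedDict()    # LRU order: oldest entry first
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)       # mark as recently used
            return self.cache[key]
        self.misses += 1                      # slow path: hit the disks
        value = self.backing[key]
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return value

    def write(self, key, value):
        self.backing[key] = value             # the RAID copes with writes
        self.cache.pop(key, None)             # keep the cache consistent

db = {f"row{i}": i for i in range(1000)}
cache = RamReadCache(capacity=100, backing=db)
for i in range(100):
    cache.read(f"row{i}")      # cold pass warms the cache from "disk"
for i in range(100):
    cache.read(f"row{i}")      # second pass is served entirely from RAM
assert cache.hits == 100 and cache.misses == 100
```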
Re:We are going to have two layers of storage (Score:2, Insightful)
So, as density of flash goes up, write cycle lifetime potentially goes down.
HDDs have the same issue of bits becoming less "durable" as capacity goes up. However, HDD media doesn't wear out from repeated writes. Furthermore, it is already accepted that there will be many bit errors, and these are simply handled with error-correcting codes and by mapping out bad sectors.
As far as reliability goes, everybody talks about it but nobody actually buys on the basis of reliability. At the end of the day it all comes down to dollars per gigabyte for most applications.
Power usage on the other hand is becoming more and more important. That may actually be a strong selling point.
As I've said elsewhere, the first step is going to be OS support for treating flash differently than HDD. That will allow for hybrid storage solutions, and at that point we will see a middle ground between HDDs and flash. Right now, going flash is all or nothing, and in that environment it is going to be hard for flash to gain traction.
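One concrete reason the OS (or the drive's own controller) has to treat flash differently is wear leveling: spreading writes across erase blocks so no single cell burns through its write cycles first. A crude sketch, with the block count and the least-worn-block policy invented for illustration:

```python
# Hypothetical wear-leveling sketch: logical writes are remapped onto
# whichever physical erase block has been erased the fewest times.
NUM_BLOCKS = 8   # illustrative; real devices have thousands of blocks

class WearLeveler:
    def __init__(self):
        self.erase_counts = [0] * NUM_BLOCKS
        self.mapping = {}                       # logical -> physical block

    def write(self, logical_block):
        # steer the write to the least-worn physical block
        phys = self.erase_counts.index(min(self.erase_counts))
        self.erase_counts[phys] += 1
        self.mapping[logical_block] = phys

fl = WearLeveler()
for _ in range(800):
    fl.write(0)        # hammer one logical block, as a journal would
# the wear is spread evenly instead of destroying one physical block
assert max(fl.erase_counts) - min(fl.erase_counts) <= 1
```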
Re:We are going to have two layers of storage (Score:5, Insightful)
Because write caches in RAM go away when your computer crashes, the power fails, etc. Battery-backed RAM is an option, but is a lot harder to get right than a USB flash part connected to an internal USB connector on a motherboard.... In-memory write caching (without battery backup) for more than a handful of seconds (to avoid writing files that are created and immediately deleted) is a very, very bad idea. There's a reason that no OS keeps data in a write cache for more than about 30 seconds (and even that is about five times too long, IMHO).
Write caching is the only way you can avoid constantly spinning up the disk. We already have lots of read caching, so no amount of improvement to read caching is likely to improve things that dramatically over what we have already.
Even for read caching, however, there are advantages to having hot block caches that are persistent across reboots, power failures, crashes, etc. (provided that your filesystem format provides a last modified date at the volume level so you can dispose of any read caches if someone pulls the drive, modifies it with a different computer, and puts the drive back). Think of it as basically prewarming the in-memory cache, but without the performance impact....
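The ~30-second rule the parent describes can be sketched like this; the interval and the tiny API are illustrative, not any particular OS's implementation:

```python
# Sketch of age-limited write caching: dirty data may sit in RAM only
# briefly before it must be pushed to stable storage, but files that are
# created and deleted within the window never get written out at all.
import time

FLUSH_AFTER_SECONDS = 30   # the parent argues even this is ~5x too long

class WriteCache:
    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.dirty = {}            # path -> (data, time first dirtied)

    def write(self, path, data):
        _, since = self.dirty.get(path, (None, self.clock()))
        self.dirty[path] = (data, since)   # keep the original dirty time

    def flush_expired(self, storage):
        """Push anything dirtied more than FLUSH_AFTER_SECONDS ago."""
        now = self.clock()
        for path in [p for p, (_, t) in self.dirty.items()
                     if now - t >= FLUSH_AFTER_SECONDS]:
            storage[path] = self.dirty.pop(path)[0]

# simulate with a fake clock so the example runs instantly
t = [0.0]
wc = WriteCache(clock=lambda: t[0])
disk = {}
wc.write("/tmp/scratch", b"short-lived")
del wc.dirty["/tmp/scratch"]       # simulate delete-before-flush
wc.write("/etc/motd", b"hello")
t[0] = 31.0                        # advance the fake clock past the window
wc.flush_expired(disk)
assert disk == {"/etc/motd": b"hello"}
```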
what drives are for. (Score:3, Insightful)
Also, Fibre Channel drives already cost $1000 apiece, so paying this much is nothing new for enterprise customers. An enterprise server with LESS than $50,000 of storage would be the oddball case.