
Sun Adding Flash Storage to Most of Its Servers

BobB-nw writes "Sun will release a 32GB flash storage drive this year and make flash storage an option for nearly every server the vendor produces, Sun officials are announcing Wednesday. Like EMC, Sun is predicting big things for flash. While flash storage is far more expensive than disk on a per-gigabyte basis, Sun argues that flash is cheaper for high-performance applications that depend on fast I/O operations per second (IOPS)."
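
The per-IOPS argument in the summary is easy to make concrete. A minimal sketch, with assumed 2008-era prices and drive performance figures (none of these numbers come from Sun):

```python
# Back-of-the-envelope $/GB vs. $/IOPS comparison. Every number here
# is an illustrative assumption, not a vendor figure.

drives = {
    # name: (price_usd, capacity_gb, random_iops)
    "15k RPM SAS HDD": (500, 146, 180),
    "32GB flash SSD": (1000, 32, 5000),
}

for name, (price, gb, iops) in drives.items():
    print(f"{name}: ${price / gb:.2f}/GB, ${price / iops:.3f}/IOPS")

# The HDD wins on $/GB ($3.42 vs. $31.25); the SSD wins on $/IOPS
# ($0.20 vs. $2.78), which is the whole argument for IOPS-bound work.
```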


  • by Amiga Lover ( 708890 ) on Wednesday June 04, 2008 @01:27PM (#23655427)
    Cue up 20 comments going "But what about the limited write cycles, these things will fail in a month" and 500 comments replying "this is no longer an issue n00b"
  • Good (Score:3, Insightful)

    by brock bitumen ( 703998 ) on Wednesday June 04, 2008 @01:29PM (#23655457)
    They're pushing new technology on their high-paying customers because they can charge a premium while it's a scarce resource. That will drive up production, drive down costs, and soon we'll all be toting massive flash disks all day long.

    I, for one, welcome our new flash disk overlords
  • Re:Lifespan? (Score:2, Insightful)

    by jo42 ( 227475 ) on Wednesday June 04, 2008 @01:32PM (#23655529) Homepage
    When the old one fails, they sell you a new one. This is The Business Plan (c)(tm).
  • by morgan_greywolf ( 835522 ) * on Wednesday June 04, 2008 @01:34PM (#23655563) Homepage Journal
    You forgot the 1000 comments prognosticating about SSDs replacing HDDs permanently "any day now" with the added bravado of saying "I knew this would happen! See, I told you!", with 3000 comments replying "Yeah, but price/performance!", all of which will be replied to with "but price/performance doesn't matter, n00b. Price makes no difference to anyone."

    Then, in a fit of wisdom, a few posters, all of whom will be modded down as flamebait, will say "There's room for both and price/performance does matter, at least for now."
  • by Anonymous Coward on Wednesday June 04, 2008 @01:36PM (#23655589)
    Why not just run those apps off of a RAM drive? The only benefit I see in doing this would be faster boot/start times, which is kind of pointless when you have servers that stay up for months.
  • by CastrTroy ( 595695 ) on Wednesday June 04, 2008 @01:45PM (#23655735)
    Why would you bother putting the programs and operating system on SSD for a server? Once the files are loaded into memory, you'll never need to access them again. SSD only helps with the OS and programs when you're booting up or opening new programs, which almost never happens on most servers.
  • by clawsoon ( 748629 ) on Wednesday June 04, 2008 @01:47PM (#23655771)
    We are going to have two layers, but they'll be deeper in the filesystem than that.

    High-frequency, low-volume operations - metadata journalling, certain database transactions - will go to flash, and low-frequency, high-volume operations - file transfers, bulk data moves - will go to regular hard drives. SSDs aren't yet all that much faster for bulk data moving, so it makes the most economic sense to put them where they're most needed: where the IOPS are.

    Back in the day, a single high-performance SCSI drive would sometimes play the same role in front of a big, cheap, slow array. Then, as now, you'd pay the premium price for the smallest amount of high-IOPS storage you could get away with.
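
A minimal sketch of the two-tier routing policy described above, assuming a hypothetical size threshold and metadata flag (both illustrative, not any vendor's actual policy):

```python
# Route writes by size and kind: small or metadata-heavy operations go
# to the flash tier (IOPS-bound), bulk transfers go to the HDD tier
# (bandwidth-bound). The 128 KiB cutoff is an arbitrary assumption.

FLASH_THRESHOLD = 128 * 1024  # bytes

def pick_tier(request_bytes: int, is_metadata: bool) -> str:
    """Return 'flash' or 'hdd' for a single write request."""
    if is_metadata or request_bytes < FLASH_THRESHOLD:
        return "flash"  # journal records, database commits
    return "hdd"        # file transfers, bulk data moves

print(pick_tier(4096, is_metadata=True))             # flash
print(pick_tier(64 * 1024 ** 2, is_metadata=False))  # hdd
```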

  • by maxume ( 22995 ) on Wednesday June 04, 2008 @01:55PM (#23655879)
    I'm just glad there is enough interest in paying for the performance to keep development moving at a decent clip. Flash really does look like it will have a big advantage for laptop users who are not obsessed with storing weeks' worth of video.
  • by Archangel Michael ( 180766 ) on Wednesday June 04, 2008 @01:57PM (#23655907) Journal

    Also, their "thumper" server has 48 drives in it. Would you want to pay around $1000 per drive to fill that up?
    Yes. If performance dictated it was necessary.

    Just because you don't want to doesn't mean nobody else does.
  • by gbjbaanb ( 229885 ) on Wednesday June 04, 2008 @02:07PM (#23656051)

    "Adding a flash storage option" is pretty much an engineering nonevent
    but as a marketing event it's a magnificent and almost unbelievable paradigm-shift approach to a massive problem that's been crying out for a reliable storage-based performance solution for years.
  • MTBF (Score:0, Insightful)

    by marafa ( 745042 ) on Wednesday June 04, 2008 @02:11PM (#23656113) Homepage Journal
    What's the MTBF on this kind of stuff? As far as I know, my 256MB flash drive has a lifetime of 100,000 writes. Has that improved?
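
Write endurance (the 100,000-cycle figure) is a different metric from MTBF, and wear leveling spreads those cycles across the whole device. A rough estimate, with every input assumed for illustration:

```python
# Endurance estimate for an ideally wear-leveled flash device.
# All inputs are assumptions, not rated figures for any real drive.

capacity_gb = 32
pe_cycles = 100_000        # program/erase cycles per cell
writes_per_day_gb = 100    # assumed sustained write volume

total_writable_gb = capacity_gb * pe_cycles
lifetime_years = total_writable_gb / writes_per_day_gb / 365
print(f"~{lifetime_years:.0f} years")  # ~88 years at these numbers
```
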
  • Wha??!?!? (Score:1, Insightful)

    by Anonymous Coward on Wednesday June 04, 2008 @02:35PM (#23656521)
    Connected to a PCIe x16 interface, we're talking about a non-blocking device with up to a 4GB/s connection to the bus -- compared to a SATA or SAS interface, which is at best 3Gb/s... if you can even find a device interface that fast.
    (Notice the GB, gigabytes, vs. Gb, gigabits.) Also, seek times of ~50 microseconds really turn me on.

    Yeah, I know I could buy a 4Gb/s FC RamSan unit -- anybody got $50k lying around? Oh wait, I need redundancy, so that's $100k. (And by the way, it's *STILL* slower than one of these cards.)

    oh yeah, and they don't burn a rack unit either, and their power consumption (and heat) is ridiculously lower.

    Not that any of us need high-performance storage devices for things like databases or ZFS journal logs... I've got almost 90 amps of redundant power coming into each of my cabinets right now, so the power savings alone are attractive to me.
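
The GB-versus-Gb point is worth making explicit. A quick conversion, assuming PCIe 1.x lane rates and SATA II's 8b/10b encoding:

```python
# SATA II: 3 Gb/s line rate, 8b/10b encoded, so 10 bits per byte.
sata_mb_s = 3.0e9 / 10 / 1e6
print(f"SATA II: ~{sata_mb_s:.0f} MB/s usable")        # ~300 MB/s

# PCIe 1.x x16: 16 lanes at ~250 MB/s each, per direction.
pcie_gb_s = 16 * 250 / 1000
print(f"PCIe x16: ~{pcie_gb_s:.0f} GB/s")              # ~4 GB/s
print(f"ratio: ~{pcie_gb_s * 1000 / sata_mb_s:.0f}x")  # ~13x
```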

  • Re:Lifespan? (Score:2, Insightful)

    by ShieldW0lf ( 601553 ) on Wednesday June 04, 2008 @02:39PM (#23656585) Journal
    There is no flash wear myth. If it were a myth, they never would have gone to all that trouble. The whole point behind static wear leveling is to mitigate a very significant and real weakness in the storage medium.

    The fact that flash is only really well suited for infrequent writes and frequent non-contiguous reads doesn't bode well for its utility in OLTP applications.
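
A toy illustration of why wear leveling matters: even when one logical block is rewritten constantly, the erases get spread across every physical block. Real flash translation layers are far more sophisticated; this is only a sketch.

```python
import heapq

# Min-heap of (erase_count, physical_block): always write to the
# least-worn free block, then recycle the block we displaced.
NUM_BLOCKS = 8
free_blocks = [(0, pb) for pb in range(NUM_BLOCKS)]
heapq.heapify(free_blocks)
mapping = {}  # logical block -> (erase_count, physical_block)

def write(logical_block: int):
    erases, pb = heapq.heappop(free_blocks)
    if logical_block in mapping:
        heapq.heappush(free_blocks, mapping[logical_block])
    mapping[logical_block] = (erases + 1, pb)

for _ in range(1000):
    write(0)  # hammer a single logical block

# Without leveling, one physical block would absorb all 1000 erases;
# with it, every block ends up with ~125.
print(sorted(count for count, _ in free_blocks))
```
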
  • by Calinous ( 985536 ) on Wednesday June 04, 2008 @03:08PM (#23657053)
    Samsung will have multi-level cell (MLC) drives, which are slower (and cheaper). The single-level cell (SLC) drives are faster (up to twice as fast, I think), but more expensive.
    You can go either way with it, but I think faster (and smaller) drives are more attractive than bigger and slower ones.
    You need to compete against the sequential speed of a 15,000 RPM SCSI drive too (SSDs will beat them dead on access speed, but not all workloads are small random reads).
  • RAID 4, anyone? (Score:4, Insightful)

    by mentaldrano ( 674767 ) on Wednesday June 04, 2008 @03:31PM (#23657407)
    In the time between now and when SSD becomes cheaper than magnetic storage, might we see a resurgence of RAID 4? RAID 4 stripes data across several disks, but stores parity information all on one disk, rather than distributing the parity bits like RAID 5.

    This has benefits for workloads that issue many small randomly located reads and writes: if the requested data size is smaller than the block size, a single disk can service the request. The other disks can independently service other requests, leading to much higher random access bandwidth (though it doesn't help latency).

    One of the side effects of this is that the parity disk must be much faster than the data disks, since every write must update the parity. Here SSD shines, with quick random access mattering far more than sequential performance. Interesting, no?
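
A miniature of the RAID 4 layout being described, under the usual simplification that a strip is just a few bytes: parity is the XOR of the data strips, lives on one dedicated disk, and can rebuild any single lost strip.

```python
def xor_strips(strips):
    """XOR byte strings of equal length together."""
    out = bytearray(len(strips[0]))
    for strip in strips:
        for i, b in enumerate(strip):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]  # three data disks
parity = xor_strips(data)           # the dedicated parity disk

# Disk 1 fails; rebuild its strip from the parity plus the survivors.
rebuilt = xor_strips([parity, data[0], data[2]])
assert rebuilt == data[1]
print("rebuilt:", rebuilt)

# Note the write-side cost: every small write must also update the
# parity disk, which is exactly why that disk needs the IOPS.
```
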
  • Cheaper than RAM? (Score:4, Insightful)

    by AmiMoJo ( 196126 ) on Wednesday June 04, 2008 @03:39PM (#23657555) Homepage Journal
    At the moment high performance SSDs are still more expensive than RAM. Since a 64 bit processor can address vast amounts of RAM, wouldn't it be even better and cheaper just to have 200GB of RAM rather than 200GB of SSD?

    Okay, you would still need a HDD for backing store, but in many server applications involving databases (high performance dynamic web servers for example) a normal RAID can cope with the writes - it's the random reads accessing the DB that cause the bottleneck. Having 200GB of database in RAM with HDDs for backing store would surely be higher performance than SSD.

    For things where writes matter like financial transactions, would you want to rely on SSD anyway? Presumably banks have lots of redundancy and multiple storage/backup devices anyway, meaning each transaction is limited by the speed of the slowest storage device.
  • by Anonymous Coward on Wednesday June 04, 2008 @04:29PM (#23658439)
    As capacity goes up, the feature size on flash gets smaller. This means less energy per bit and a thinner dielectric.

    So, as density of flash goes up, write cycle lifetime potentially goes down.

    HDDs have the same issue of bits becoming less "durable" as capacity goes up. However, HDD media never wears out from writes. Furthermore, it is already accepted that there will be many bit errors, and these are simply corrected with error-correcting codes and by mapping out bad sectors.

    As far as reliability goes, everybody talks about it but nobody actually buys on the basis of reliability. At the end of the day it all comes down to dollars per gigabyte for most applications.

    Power usage on the other hand is becoming more and more important. That may actually be a strong selling point.

    As I've said elsewhere, the first step is going to be OS support for treating flash differently than HDD. This will allow for hybrid storage solutions. At that point we will see a middle ground between HDDs and flash. Right now, going flash is all or nothing, and in that environment it is going to be hard for flash to get going.
  • by dgatwood ( 11270 ) on Wednesday June 04, 2008 @05:39PM (#23659553) Homepage Journal

    Because write caches in RAM go away when your computer crashes, the power fails, etc. Battery-backed RAM is an option, but is a lot harder to get right than a USB flash part connected to an internal USB connector on a motherboard.... In-memory write caching (without battery backup) for more than a handful of seconds (to avoid writing files that are created and immediately deleted) is a very, very bad idea. There's a reason that no OS keeps data in a write cache for more than about 30 seconds (and even that is about five times too long, IMHO).

    Write caching is the only way you can avoid constantly spinning up the disk. We already have lots of read caching, so no amount of improvement to read caching is likely to improve things that dramatically over what we have already.

    Even for read caching, however, there are advantages to having hot block caches that are persistent across reboots, power failures, crashes, etc. (provided that your filesystem format provides a last modified date at the volume level so you can dispose of any read caches if someone pulls the drive, modifies it with a different computer, and puts the drive back). Think of it as basically prewarming the in-memory cache, but without the performance impact....
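
A sketch of the bounded write-back policy the parent describes: dirty blocks are flushed once the oldest of them passes a maximum age (30 seconds here, mirroring the figure above). The class and its names are illustrative, not any OS's actual cache.

```python
import time

MAX_AGE_S = 30.0  # upper bound on how long dirty data may sit in RAM

class WriteCache:
    def __init__(self):
        self.dirty = {}  # block number -> (data, first_dirtied_at)

    def write(self, block: int, data: bytes):
        # Age is measured from when the block first became dirty, so
        # repeated rewrites cannot postpone the flush forever.
        _, t0 = self.dirty.get(block, (None, time.monotonic()))
        self.dirty[block] = (data, t0)

    def flush_expired(self, disk_write):
        now = time.monotonic()
        for block, (data, t0) in list(self.dirty.items()):
            if now - t0 >= MAX_AGE_S:
                disk_write(block, data)  # past here a crash loses nothing
                del self.dirty[block]

cache = WriteCache()
cache.write(7, b"hello")
# A periodic timer would call this; nothing flushes until 30 s pass.
cache.flush_expired(lambda blk, data: print("flushing block", blk))
```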

  • by flaming-opus ( 8186 ) on Wednesday June 04, 2008 @06:08PM (#23660067)
    You're confusing two very different sorts of storage. There is bulk data storage: a fileserver for home directories, video archives, piles of email, that sort of stuff. This is the market where the 1TB SAS drive thrives. Then there's the database backing store. Almost every customer I've sold to wants a huge number of very fast, very small drives for database backing store. The extra capacity is meaningless, as they have to use so many spindles to get decent IOPS performance. In this area, selling drives hasn't been about capacity for 10 years. IOPS, in particular read IOPS, are the throttle point. Now that flash drives are beginning to get traction in high-end laptops, and we have affordable SSDs with industry-standard interfaces, there's no reason NOT to use them.

    Also, Fibre Channel drives already cost $1000, so paying this much is nothing new for enterprise customers. An enterprise server with LESS than $50,000 of storage would be the oddball case.
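
The spindle arithmetic behind "the extra capacity is meaningless," with assumed per-drive IOPS figures:

```python
import math

# Illustrative figures: ~180 random IOPS for a 15k RPM drive, ~5,000
# for an early enterprise SSD. Neither is a rated spec.
target_iops = 50_000
hdds = math.ceil(target_iops / 180)
ssds = math.ceil(target_iops / 5_000)
print(f"{hdds} HDD spindles vs. {ssds} SSDs for {target_iops} IOPS")
# 278 HDD spindles vs. 10 SSDs -- capacity falls out as a side effect.
```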
