
Sun Adding Flash Storage to Most of Its Servers 113

BobB-nw writes "Sun will release a 32GB flash storage drive this year and make flash storage an option for nearly every server the vendor produces, Sun officials are announcing Wednesday. Like EMC, Sun is predicting big things for flash. While flash storage is far more expensive than disk on a per-gigabyte basis, Sun argues that flash is cheaper for high-performance applications that depend on high I/O operations per second (IOPS)."
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • by javilon ( 99157 ) on Wednesday June 04, 2008 @01:27PM (#23655425) Homepage
    I would put the operating systems, binaries and configuration files on the SSD.

    But most of what makes up the volume on current computers (log files, backups, video/audio) can be committed to a regular hard drive.
  • Comment removed (Score:2, Interesting)

    by account_deleted ( 4530225 ) on Wednesday June 04, 2008 @01:30PM (#23655491)
    Comment removed based on user account deletion
  • by QuantumRiff ( 120817 ) on Wednesday June 04, 2008 @01:42PM (#23655675)
    This is just a story about SUN doing something that others have already done for some time now

    Really? What other top 5 computer manufacturer has been putting flash drives in SERVERS? I've seen a few laptops, but I haven't seen any used in servers or storage systems. (EMC and a few others have announced plans to do it, but haven't released anything yet, AFAIK.)

    Also, their "thumper" server has 48 drives in it. Would you want to pay around $1000 per drive to fill that up?

  • by E-Lad ( 1262 ) on Wednesday June 04, 2008 @01:58PM (#23655913)
    Current versions of ZFS have the feature where the ZIL (ZFS Intent Log) can be separated out of a pool's data devices and onto its own disk. Generally, you'd want that disk to be as fast as possible, and these SSDs will be the winner in that respect. Can't wait!
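For the curious, attaching a separate log device to an existing pool is a one-liner; the pool name "tank" and the device names below are placeholders for your own pool and disks:

```shell
# Move the ZIL onto its own (ideally SSD) device.
zpool add tank log c2t0d0

# Or mirror the log device so a single SSD failure can't lose the intent log:
zpool add tank log mirror c2t0d0 c3t0d0

# The device then shows up under a "logs" section:
zpool status tank
```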
  • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday June 04, 2008 @01:58PM (#23655925) Journal
    Given that you can get flash disks that hang off pretty much any common bus used for mass storage(IDE, SATA, SAS, USB, SPI, etc.) "Adding a flash storage option" is pretty much an engineering nonevent, and a very minor logistical task.

    If Sun expects to sell a decent number of flash disks, or is looking at making changes to their systems based on the expectation that flash disks will be used, then it is interesting news; but otherwise it just isn't all that dramatic. While flash and HDDs are very different in technical terms, the present incarnations of both technologies are virtually identical from a system integration perspective. This sort of announcement just doesn't mean much at all without some idea of expected volume.
  • by boner ( 27505 ) on Wednesday June 04, 2008 @02:15PM (#23656151)
    A RAM drive uses DRAM; enterprise-class DRAM runs roughly $100/GB and draws about 8W/GB. Enterprise flash runs roughly $30-80/GB and draws about 0.01W/GB.

    In addition, if you assume that 90% of RAM-drive accesses go to 10% of the storage, you can see that you are effectively burning a lot of energy for zero gain. Multiply that by uptime.

    Flash has the potential of greatly improving performance/watt for most servers.
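A quick back-of-the-envelope sketch of that comparison. The $/GB and W/GB figures are the rough ones from the comment above; the 64GB capacity, the electricity price, and the mid-range flash price are my own illustrative assumptions:

```python
# Compare a DRAM-based RAM drive against enterprise flash using the
# approximate figures above: DRAM ~$100/GB and ~8 W/GB, flash ~$30-80/GB
# and ~0.01 W/GB. Everything below is illustrative, not vendor data.

def yearly_power_cost(capacity_gb, watts_per_gb, dollars_per_kwh=0.10):
    """Energy cost of keeping the storage powered for one year."""
    hours = 24 * 365
    kwh = capacity_gb * watts_per_gb * hours / 1000.0
    return kwh * dollars_per_kwh

CAPACITY_GB = 64  # hypothetical RAM-drive / SSD size

dram_power = yearly_power_cost(CAPACITY_GB, 8.0)
flash_power = yearly_power_cost(CAPACITY_GB, 0.01)

print(f"DRAM  drive: ${CAPACITY_GB * 100:>6} capital, ${dram_power:8.2f}/yr power")
print(f"Flash drive: ${CAPACITY_GB * 55:>6} capital, ${flash_power:8.2f}/yr power")
```

The power gap (hundreds of dollars a year per device versus well under a dollar) is what makes the "multiply by uptime" point bite, before even counting cooling.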
  • by allanw ( 842185 ) on Wednesday June 04, 2008 @02:17PM (#23656207)

    Current versions of ZFS have the feature where the ZIL (ZFS Intent Log) can be separated out of a pool's data devices and onto its own disk. Generally, you'd want that disk to be as fast as possible, and these SSDs will be the winner in that respect. Can't wait!
    As far as I know, contiguous writing of large chunks of data is slower on flash drives than on plain HDDs. I'm guessing the ZIL is some kind of transactional journal log, where all disk writes go before they hit the main storage section of the filesystem? I don't think you'd get much of a speed bonus. SSDs are only really good for random-access reads, like OLTP databases.
  • by boner ( 27505 ) on Wednesday June 04, 2008 @02:21PM (#23656275)
    Ummm, most programs are not completely loaded into memory and inactive pages do get swapped out in favor of active pages. While the most active regions of a program are in memory most of the time, having the whole program in memory is not the general case.

    Also, DRAM burns ~8W/GB (more if FB-DIMMS), Flash burns only 0.01W/GB. Thus swapping inactive pages to Flash allows you to use your DRAM more effectively, improving your performance/W.

    From a different perspective: you have a datacenter and you are energy constrained. Most applications use 10% of the DRAM 90% of the time. It may be an attractive proposition to give the applications less DRAM (at a slight performance loss) and let them swap to Flash (with a significant reduction in power). Multiply by 10000 servers, even a 20W reduction per server becomes significant.
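The fleet-level arithmetic in that last paragraph is easy to work out. The 20W-per-server savings and 10,000 servers come from the comment; the PUE multiplier and electricity price are my own added assumptions:

```python
# Fleet-level savings from trimming DRAM and swapping cold pages to flash.
SERVERS = 10_000
WATTS_SAVED_PER_SERVER = 20
PUE = 1.8                # assumed datacenter overhead (cooling, distribution)
DOLLARS_PER_KWH = 0.10   # assumed electricity price

# Each watt saved at the server also saves the overhead needed to cool it.
total_kw = SERVERS * WATTS_SAVED_PER_SERVER * PUE / 1000.0
yearly_cost = total_kw * 24 * 365 * DOLLARS_PER_KWH

print(f"Facility-level reduction: {total_kw:.0f} kW")
print(f"Yearly savings at ${DOLLARS_PER_KWH}/kWh: ${yearly_cost:,.0f}")
```

Under those assumptions a modest 20W per server compounds to hundreds of kilowatts of facility load, which is real capacity in an energy-constrained datacenter.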

  • by Anonymous Coward on Wednesday June 04, 2008 @02:24PM (#23656313)
    The benchmarks say something like a 200x performance improvement from putting the ZIL onto an alternate high-performance logging device.

    I have been actively researching a vendor who will supply this type of device. Currently we're testing with Gigabyte i-RAM cards, connected through a separate SATA interface. (Note: the Gigabyte cards are battery-backed SDRAM, but I won't have lost power for 12 hours, so it's a non-issue for me.)

    Fusion-IO is a vendor who is making a board for Linux - but as near as I can tell the cards aren't available yet, and when they are - they won't work with Solaris anyway!

    The product which Neil Perrin did his testing with (umem/micromemory), their 5425CN card, doesn't work with current builds of Solaris. Umem is also a pain to work with; they don't even want to sell the cards (I managed to get some off eBay).

    I hope Sun lets me buy these cards separately for my HP ProLiant servers. Of course if they don't, this is one thing that might make me consider switching to Sun hardware! (Hey HP/Dell - are you reading this??)

  • by Anonymous Coward on Wednesday June 04, 2008 @02:29PM (#23656431)
    Most server manufacturers are reluctant to include these drives in servers where disk writes are common because of possible corruption due to sector wear.

    This problem hasn't been solved by the drive manufacturers, although their marketing departments have convinced many that it has!
  • by dgatwood ( 11270 ) on Wednesday June 04, 2008 @02:32PM (#23656481) Homepage Journal

    I was thinking about this at Fry's the other day when trying to decide whether I could trust the replacement Seagate laptop drive similar to the one that crashed on me Sunday, and I concluded that the place I most want to see flash deployed is in laptops. Eventually, HDDs should be replaced with SSDs for obvious reliability reasons, particularly in laptops. However, in the short term, even just a few gigs of flash could dramatically improve hard drive reliability and battery life for a fairly negligible increase in the per-unit cost of the machines.

    Basically, my idea is a lot like the Robson cache idea, but with a less absurd caching policy. Instead of uselessly making tasks like booting faster (I basically only boot after an OS update, and a stale boot cache won't help that any), the cache policy should be to try to make the hard drive spin less frequently and to provide protection of the most important data from drive failures. This means three things:

    1. A handful of frequently used applications should be cached. The user should be able to choose apps to be cached, and any changes to the app should automatically write through the cache to the disk so that the apps are always identical in cache and on disk.
    2. The most important user data should be stored there. The user should have control over which files get automatically backed up whenever they are modified. Basically a Time Machine Lite so you can have access to several previous versions of selected critical files even while on the go. The OS could also provide an emergency boot tool on the install CD to copy files out of the cache to another disk in case of a hard drive crash.
    3. The remainder of the disk space should be used for a sparse disk image as a write cache for the hard drive, with automatic hot files caching and (to the maximum extent practical) caching of any catalog tree data that gets kicked out of the kernel's in-memory cache.

    That last part is the best part. As data gets written to the hard drive, if the disk is not already spinning, the data would be written to the flash. The drive would spin up and get flushed to disk on shutdown to ensure that if you yank the drive out and put it into another machine, you don't get stale data. It would also be flushed whenever the disk has to spin up for some other activity (e.g. reading a block that isn't in the cache). The cache should also probably be flushed periodically (say once an hour) to minimize data loss in the event of a motherboard failure. If the computer crashes, the data would be flushed on the next boot. (Of course this means that unless the computer had boot-firmware-level support for reading data through such a cache, the OS would presumably need to flush the cache and disable write caching while updating or reinstalling the OS to avoid the risk of an unbootable system and/or data loss.)

    As a result of such a design, the hard drive would rarely spin up except for reads, and any data frequently read would presumably come out of the in-kernel disk cache, so basically the hard drive should stay spun down until the user explicitly opened a file or launched a new application. This would eliminate the nearly constant spin-ups of the system drive resulting from relatively unimportant activity like registry/preference file writes, log data writes, etc. By being non-volatile, it would do so in a safe way.

    This is similar to what some vendors already do, I know, but integrating it with the OS's buffer cache to make the caching more intelligent and giving the user the ability to request backups of certain data seem like useful enhancements.
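That flush policy (buffer writes in flash while the platters are down; flush on any spin-up, on a timer, and at shutdown) can be sketched as a toy model. Every name below is invented purely for illustration, not any vendor's API:

```python
class FlashWriteCache:
    """Toy model of the policy described above: writes land in flash while
    the disk is spun down, and dirty blocks are flushed whenever the disk
    spins up for any other reason, periodically, or at shutdown."""

    FLUSH_INTERVAL = 3600.0  # seconds; "say once an hour"

    def __init__(self, now=0.0):
        self.dirty = {}       # block -> data, buffered in flash
        self.disk = {}        # the spinning hard drive
        self.spinning = False
        self.last_flush = now

    def write(self, block, data, now):
        if self.spinning:
            self.disk[block] = data   # disk already up: write through
        else:
            self.dirty[block] = data  # buffer in flash, stay spun down
        self._maybe_periodic_flush(now)

    def read(self, block, now):
        if block in self.dirty:       # serve from flash, no spin-up needed
            return self.dirty[block]
        self._spin_up(now)            # cache miss forces a spin-up...
        return self.disk.get(block)   # ...which also flushes dirty blocks

    def shutdown(self, now):
        self._spin_up(now)            # never leave stale data behind a yank
        self.spinning = False

    def _spin_up(self, now):
        self.spinning = True
        self.disk.update(self.dirty)  # flush: dirty blocks reach the platters
        self.dirty.clear()
        self.last_flush = now

    def _maybe_periodic_flush(self, now):
        if now - self.last_flush >= self.FLUSH_INTERVAL and self.dirty:
            self._spin_up(now)
            self.spinning = False     # spin back down after the hourly flush
```

The interesting property is visible even in the toy: a stream of small writes (logs, preference files) never touches `self.disk` until something else forces the platters up, which is exactly the spin-up reduction described above.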

    Thoughts? Besides wondering what kind of person thinks through this while staring at a wall of hard drives at Fry's? :-)

  • by UpooPoo ( 772706 ) on Wednesday June 04, 2008 @03:00PM (#23656909) Journal
    I work in a company that has a few thousand servers running in a few regional data centers. We are looking into SSDs not because of their superior IOPS (that is a mitigating factor vs. HDD performance) but because of their low power consumption and low heat dissipation. When your operations reach a scale where you are using an entire data center, heat and power become more and more of a cost issue. Right now we are trying to build some hard data on actual savings, but there's lots of spin out there that gives you an idea of what the potential savings could be. Here are a few interesting links; Google around for more information, there's plenty to be had:

    http://www.stec-inc.com/green/storage_casestudy.php [stec-inc.com]
    http://www.stec-inc.com/green/green_ssdsavings.php [stec-inc.com] (You have to request the whitepaper to see this one.)
  • by Anonymous Coward on Wednesday June 04, 2008 @03:26PM (#23657323)
    There are no spare sectors in a 32GiB SSD.
  • by StCredZero ( 169093 ) on Wednesday June 04, 2008 @03:43PM (#23657627)
    A single ioFusion [tgdaily.com] card has the concurrent data-serving ability of a rack full of 1U media servers. They do this by having 160 channels on a drive controller that also incorporates flash memory. Since each channel is a few orders of magnitude faster than a mechanical hard drive, one card can handle a flurry of concurrent random access requests as fast as 1000 conventional hard drives.

    The perfect thing for serving media, where you don't need a few GB per customer; you need the same few GB served out to thousands or millions of users concurrently. So while $/GB stored stinks, $/GB streamed is fantastic.
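A rough sanity check on the "one card ~ 1000 disks" claim. The channel count comes from the comment; the per-channel and per-disk IOPS figures are my own illustrative assumptions, not vendor numbers:

```python
# How many mechanical drives does a many-channel flash card displace for
# concurrent random reads? All figures below are illustrative guesses.
HDD_RANDOM_IOPS = 150     # roughly a fast 2008-era SAS drive
CHANNELS = 160            # from the card described above
IOPS_PER_CHANNEL = 1000   # assumed: ~an order of magnitude over one HDD

card_iops = CHANNELS * IOPS_PER_CHANNEL
equivalent_disks = card_iops / HDD_RANDOM_IOPS
print(f"Card: {card_iops:,} IOPS ~= {equivalent_disks:.0f} mechanical drives")
```

Even with conservative per-channel numbers, the parallelism rather than raw bandwidth is what gets the card into thousand-drive territory for random access.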
