MS Researchers Call Moving Server Storage To SSDs a Bad Idea

An anonymous reader writes "As an IT administrator, did you ever think of replacing disks with SSDs? Or using SSDs as an intermediate caching layer? A recent paper by Microsoft researchers provides a detailed cost/benefit analysis for several real workloads. The conclusion is that, for a range of typical enterprise workloads, using SSDs makes no sense in the short to medium term. Their price needs to decrease by 3-3000 times for them to make sense. Note that this paper has nothing to do with laptop workloads, for which SSDs probably make more sense (due to their ruggedness)."
  • by Timothy Brownawell ( 627747 ) <tbrownaw@prjek.net> on Wednesday April 08, 2009 @02:08PM (#27506603) Homepage Journal

    How about a slashdot policy of not linking to articles behind paywalls?

    Seriously, it's even worse than the "free registration required" links that we used to have problems with.

    Original PDF at http://research.microsoft.com/pubs/76522/tr-2008-169.pdf [microsoft.com].

  • by A. B3ttik ( 1344591 ) on Wednesday April 08, 2009 @02:24PM (#27506879)
    That wasn't hard. [74.125.47.132]
  • Re:XServe (Score:5, Informative)

    by DAldredge ( 2353 ) <SlashdotEmail@GMail.Com> on Wednesday April 08, 2009 @02:26PM (#27506905) Journal
    Page 1: "Microsoft Research Ltd. Technical Report MSR-TR-2008-169, November 2008." Not a thing to do with it.
  • 'Real Workloads' (Score:2, Informative)

    by dchaffey ( 1354871 ) on Wednesday April 08, 2009 @02:27PM (#27506925)
    What a misleading term - I know of companies using enterprise SSD in production precisely because it's financially sound for them to utilise the ridiculous speed improvement it provides.

    Sure, not a lot of companies are using this yet, but as longevity increases with better garbage collection and write-spreading algorithms, and as stability and feature sets improve through maturing software and firmware, it's closer than you think.

    For clarity, the product wasn't an SSD behind SATA II; it was FusionIO's PCI devices.
  • Inaccurate summary (Score:5, Informative)

    by chazzf ( 188092 ) <.gro.thguohtpeed. .ta. .notlufc.> on Wednesday April 08, 2009 @02:28PM (#27506933) Homepage Journal
    Hat tip to the anon for the Google cache link (http://tinyurl.com/d2py5r). The summary doesn't quote exactly from the paper, which actually said this:

    "Our optimization framework is flexible and can be used to design a range of storage hierarchies. When applied to current workloads and prices we find the following in a nutshell: for many enterprise workloads capacity dominates provisioning costs and the current per-gigabyte price of SSDs is between a factor of 3 and 3000 times higher than needed to be cost-effective for full replacement. We find that SSDs can provide some benefit as an intermediate tier for caching and write-ahead logging in a hybrid disk-SSD configuration. Surprisingly, the power savings achieved by SSDs are comparable to power savings from using low-power SATA disks."

  • by lgw ( 121541 ) on Wednesday April 08, 2009 @02:28PM (#27506939) Journal

    These days, hard drives are cheaper than tapes and will hold their data longer and more compatibly.

    That's entirely false.

    Hard drives are vastly cheaper than tape drives, but enterprise-quality tape is still cheaper than enterprise-quality HDDs.

    Enterprise tape has a proven 20-year shelf life, no HDD does.

    I wrote new commercial software that could (and did) work with IBM's 9-track tape format in 1994, 30 years after it was released, and there is still hardware and software in use today that can read that hardware format - 45 years of compatibility. The abstract format - ANSI tape labels - is still in niche use for newly saved data today. DLT format is 25 years old, and while I'm not sure you can buy a new drive that reads the original DLT format, used drives are still easy to come by and you can connect them to new SCSI cards.

    How easy is it to read an MFM drive (assuming there are more than 0 in the world that still work)? That format is 30 years old, and it would be a real challenge to find a slot on a modern PC that would take an MFM controller, vastly harder than reading a DLT tape. FAT is also about 30 years old, but disk formats older than that are basically extinct.

  • Re:XServe (Score:3, Informative)

    by Yvan256 ( 722131 ) on Wednesday April 08, 2009 @02:32PM (#27507001) Homepage Journal

    Since when are we supposed to read the articles?

  • by drsmithy ( 35869 ) <drsmithy&gmail,com> on Wednesday April 08, 2009 @02:40PM (#27507109)

    Windows 2020 will have the same features as Open Solaris 10, just wait and see. They will be able to use a SSD as a cache reader I swear!

    They could call it... ReadyBoost.

  • by kroyd ( 29866 ) on Wednesday April 08, 2009 @02:42PM (#27507149)
    Sun has been making quite a bit of noise in the storage architecture world with their use of SSDs as intermediate cache to improve reading and writing speeds.

    http://blogs.sun.com/brendan/entry/test [sun.com] has some background information, and http://blogs.sun.com/brendan/entry/l2arc_screenshots [sun.com] and http://blogs.sun.com/brendan/entry/my_sun_storage_7410_perf [sun.com] have some performance numbers.

    Basically, what Sun is claiming is that by adding an SSD cache layer you can improve IOPS by about 5x, for what amounts to a really small amount of money for, say, a 100 TB system. This is being marketed quite heavily by Sun as well. (The numbers look convincing, and the prices for the Sun Storage servers are certainly very competitive, well, compared to say NetApp.)

    IMHO this is just a repeat of the well-known Microsoft tactic of spreading massive amounts of FUD about any competing technology that you can't reproduce yourself - you'll have to wait until Windows Server 2013 for this.

  • Re:'Real Workloads' (Score:1, Informative)

    by Anonymous Coward on Wednesday April 08, 2009 @02:44PM (#27507203)

    In the enterprise space you will also see people using SSDs in large SAN-attached storage arrays. In some cases performance requirements can force you to short-stroke disks (use only a fraction of their capacity) to meet I/O-per-second targets. Sometimes weighing the cost of hundreds of mostly empty spinning disks against a few enterprise flash drives can swing the decision in favor of the flash drives. Floor space and cooling also need to be taken into account in this case.
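
The short-stroking trade-off above lends itself to a quick back-of-the-envelope comparison. The sketch below is illustrative only: the per-device IOPS figures, capacities, and prices are assumptions, not measured specs.

    import math

    def array_for_iops(target_iops, dev_iops, dev_capacity_gb, dev_price):
        """Devices, raw capacity, and cost needed to hit an IOPS target."""
        n = math.ceil(target_iops / dev_iops)
        return n, n * dev_capacity_gb, n * dev_price

    target = 20_000   # required random IOPS for the array (assumed)

    # Assumed devices: a 15K RPM disk (~180 IOPS, 300 GB, $260) versus an
    # enterprise flash drive (~3,000 IOPS, 64 GB, $800).
    disks = array_for_iops(target, dev_iops=180, dev_capacity_gb=300, dev_price=260)
    flash = array_for_iops(target, dev_iops=3000, dev_capacity_gb=64, dev_price=800)

    for name, (n, cap, cost) in (("15K disks", disks), ("flash drives", flash)):
        print(f"{name:12s}: {n:4d} devices, {cap:6d} GB raw, ${cost}")

    # With a 2 TB working set, the 112-spindle disk array is bought purely for
    # its IOPS: most of its 33.6 TB of raw capacity sits idle (short-stroked),
    # and rack space, power, and cooling for all those spindles still get paid for.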

  • by Vellmont ( 569020 ) on Wednesday April 08, 2009 @02:45PM (#27507211) Homepage

    Dismissing SSDs because they're only cost-effective for the boot partition is a mistake. Anyone who's put together servers before knows the boot partition is critical to the system, and the hardest part to back up. Once you get a system booted, there are a million things you can do to fix it or restore the relevant data. Getting it bootable when the boot partition is toast is much harder.

  • by David Jao ( 2759 ) <djao@dominia.org> on Wednesday April 08, 2009 @02:46PM (#27507231) Homepage
    This paper is biased and premature even by the prevailing low standards of typical CS papers. For example, they model SSD failure, but completely ignore mechanical drive failure, which is far more devastating and commonplace. I kid you not:

    Since this paper is focused on solid-state storage, and wear is a novel, SSD-specific phenomenon, we include it in our device models. Currently we do not model other failures, such as mechanical failures in disks.

    The correct approach to incomplete data is, of course, to gather complete data, and they have no excuse here, because there is PLENTY of data on mechanical drive failure rates. However, if you are not willing to do that, the least you can do is ignore the data equally on both sides. The authors' failure to treat both sides equally leads to a hopelessly biased and skewed analysis.

  • Read the Paper (Score:5, Informative)

    by kenp2002 ( 545495 ) on Wednesday April 08, 2009 @02:48PM (#27507271) Homepage Journal

    I just finished reading the paper.

    The paper boils down to this:

    SSDs, when measured against IOPS, watts, and capacity in relation to cost across several different server types, are not cost-effective yet. Depending on the type of server, prices need to come down at least 3-fold, and under some scenarios as much as 3000-fold. For hosting MP3s - largely sequential, low-write storage - SSDs are about 3000 times overpriced; for insane random-I/O scenarios they only need to come down about 3-fold to be worth it compared to conventional drives.

    Depending on the type of server they can perform worse than standard mechanical disks.

    They found no advantage to 15k RPM drives versus 10k RPM drives when cost is factored in.

    SSDs pay for themselves in power savings in about 5 years, well past their expected longevity (rough payback arithmetic is sketched after this comment).

    Mechanical disks wear out more or less independently of their data load; SSDs wear out in proportion to their data load.

    SSDs do not handle tiny files very well due to how data is written.

    I see nothing in the paper that is pro-Microsoft, rather straight dealing on the drives themselves.

    I would suggest MOD-TROLL any evangelist on any side of the OS wars, as this paper doesn't seem to deal with OS touting.

    It was a boring but informative read.
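
The ~5-year power-savings payback mentioned above is easy to sanity-check. Every number in the sketch below (wattages, cooling overhead, electricity price, SSD price premium) is an assumption chosen for illustration, not a figure taken from the paper.

    disk_watts, ssd_watts = 12.0, 1.0   # assumed average draw per device
    cooling_factor = 1.8                # assumed datacenter overhead (cooling, PSU losses)
    kwh_price = 0.10                    # assumed electricity cost, $ per kWh
    ssd_premium = 100.0                 # assumed extra purchase cost of the SSD, $

    watts_saved = (disk_watts - ssd_watts) * cooling_factor
    savings_per_year = watts_saved / 1000 * 24 * 365 * kwh_price
    print(f"~${savings_per_year:.0f} saved per year")                  # ~$17/yr
    print(f"payback in ~{ssd_premium / savings_per_year:.1f} years")   # ~5-6 years

With a larger price premium, or without counting cooling overhead, the payback stretches well past any realistic service life, which is the point being made.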

  • I concur (Score:3, Informative)

    by Locke2005 ( 849178 ) on Wednesday April 08, 2009 @03:16PM (#27507741)
    For read data, it makes more economic sense to cache to RAM instead of SSD and just read everything into RAM at startup. For writes, I'm not so sure -- is a write to SSD really that much faster than a write to disk? It might make sense to use SSD for journaling in those cases where a transaction can't complete until you are certain the results have been saved. But in that case, your network latencies are probably much greater than your disk write latency anyway.
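
On the journaling question, the win from a fast log device comes from synchronous-commit latency rather than raw bandwidth. A rough sketch follows; all latencies are assumed ballpark figures, not measurements.

    # A synchronous commit waits for the network round-trip plus the time to get
    # the log record onto stable storage, so the log device's write latency caps
    # the commit rate per connection. All latencies below are assumed.

    latencies_ms = {
        "network RTT": 0.3,
        "disk fsync (seek + rotation)": 8.0,
        "flash SSD fsync": 0.2,
        "DRAM-backed log device fsync": 0.02,
    }

    rtt = latencies_ms["network RTT"]
    for device in list(latencies_ms)[1:]:
        total = rtt + latencies_ms[device]
        print(f"{device:30s}: {total:5.2f} ms/commit, ~{1000 / total:4.0f} commits/s per connection")

So even with a sub-millisecond network, a rotating-disk fsync dominates the commit time; a flash or DRAM-backed log takes it off the critical path.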
  • by HTH NE1 ( 675604 ) on Wednesday April 08, 2009 @03:25PM (#27507889)

    notice how they say 3-3000 times cheaper.

    I don't notice that. The summary says "decrease by 3-3000 times". As in a number of price drops of indeterminate amount per drop.

    The paper's abstract says, "the capacity per dollar of SSDs needs to increase by a factor of 3-3000," which makes more sense.

    Comparing prices: [google.com]

    • A 1000 GB HD goes on sale for about $100 right now (sometimes less), or 10 GB/$
    • A 4 GB SSD sells for $52 (0.077 GB/$, * 129.87 to get to 10 GB/$)
    • A 64 GB SSD can go for $7,095 (0.009 GB/$, * 1111 to get to 10 GB/$)
    • A 16 GB SSD for $3,459 (0.0046 GB/$, * 2162 to get to 10 GB/$)
    • A 12 GB SSD for $3,305.89 (0.0036 GB/$, * 2755 to get to 10 GB/$)

    SSD prices are all over the place.

    Meaning a $3000 SSD would have to cost $1 for them to consider it... Don't you love pulling numbers out of your ass?

    If that $3000 SSD only held 10 GB, I'd see how they'd want it down to $1.

  • by Andy Dodd ( 701 ) <atd7NO@SPAMcornell.edu> on Wednesday April 08, 2009 @03:39PM (#27508107) Homepage

    They're not the only ones pulling numbers out of their ass. You seem to be as well, unless you're finding the absolute most expensive drive in any given capacity class.

    For example, the 80GB Intel X25-M runs around $380, so is better than any of the prices you pulled up.

    Obviously, it doesn't make sense to replace every drive in a server farm with SSDs, especially if you want lots of storage, but you have to keep in mind that while SSDs may suck for GB/$, they have major advantages in other areas, such as MB/s/$ - that Intel X25-M is FAST, and if you are primarily interested in serving lots of small transactions rather than storing big files, it's the way to go.

    For example, Slashdot is probably better off with an array of X25-Ms because it's only storing text and is getting LOTS of hits.

  • by KonoWatakushi ( 910213 ) on Wednesday April 08, 2009 @03:39PM (#27508113)

    Sun has been making quite a bit of noise in the storage architecture world with their use of SSDs as intermediate cache to improve reading and writing speeds.

    You are conflating Sun's claims here, as the performance gains from using SSDs in their configuration are not generally applicable to other Flash-based systems.

    ZFS will use SSDs in two very different ways: as cache (L2ARC) devices, and as log devices. The cache devices are for improving read IOPS on a mostly static working data set, and a large Flash-based SSD is fine in this scenario. The log devices are for reducing the latency of synchronous writes, and a small DRAM-based SSD is used in this case.
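
A toy model of the read-side benefit of an L2ARC-style cache tier: average read latency as a function of where reads are served from. The hit rates and latencies below are illustrative assumptions, not Sun's published numbers.

    def avg_read_latency_ms(ram_hit, ssd_hit, ram_ms=0.001, ssd_ms=0.2, disk_ms=8.0):
        """Average latency when reads are served from RAM, the SSD cache, or disk."""
        disk_hit = 1.0 - ram_hit - ssd_hit
        return ram_hit * ram_ms + ssd_hit * ssd_ms + disk_hit * disk_ms

    without_l2arc = avg_read_latency_ms(ram_hit=0.60, ssd_hit=0.0)    # misses go to disk
    with_l2arc = avg_read_latency_ms(ram_hit=0.60, ssd_hit=0.35)      # SSD absorbs most misses

    print(f"without L2ARC: {without_l2arc:.2f} ms average read")
    print(f"with L2ARC:    {with_l2arc:.2f} ms average read")
    print(f"speedup: ~{without_l2arc / with_l2arc:.1f}x")   # ~7x with these assumed hit rates

That lands in the same ballpark as the ~5x claim upthread, and it only holds when the working set is static enough for the SSD tier to maintain a high hit rate.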

  • by drsmithy ( 35869 ) <drsmithy&gmail,com> on Wednesday April 08, 2009 @03:48PM (#27508261)

    No 350 IOPS is pretty standard for SSD in real world conditions.

    Intel specs their X25-E at 3300 IOPS for random 4K writes. I'm willing to consider there might be a bit of fudge factor in that number (although all the benchmarks I've seen suggest it is conservative, if anything), but certainly not an order of magnitude.

  • by Emnar ( 116467 ) on Wednesday April 08, 2009 @04:05PM (#27508575)

    Enterprise tape has a proven 20-year shelf life, no HDD does.

    That may be, but I've lost track of the number of times (as a storage engineer) that I've seen tape backups go bad. Even "enterprise-quality" tapes. I think the claims don't match the reality.

    Hard drives die too, but in the case of drive storage (1) it's a lot easier to verify your backups on a periodic basis, like every month; and (2) you can suffer a failure or two (depending on your RAID setup -- most people wouldn't run anything more than RAID-5 for backups) and react accordingly to preserve the data in full.

    Of course, if you're really serious about your backups, you back up to disk and THEN offload to tapes and keep those offsite.

  • by DrgnDancer ( 137700 ) on Wednesday April 08, 2009 @04:06PM (#27508597) Homepage

    A well-designed RAID in a robust SAN can survive not just the death of a drive but often the death of an entire enclosure (10-16 drives depending on age and enclosure design). Most of the time a small enterprise-class SAN has 8-12 enclosures' worth of drives. Big ones can span half a dozen or more racks. I don't think this article is talking about a couple of drives thrown into a box with a hardware RAID controller here. When a player like Microsoft starts talking about "storage" they are talking 100 TB or more. The last place I worked had ONLY 25 TB of storage, made up of older storage tech that only gave us 300 GB FC drives. We had 8x14-disk enclosures and could lose an entire enclosure without data loss. The disks were striped in such a way as to ensure that none of our RAID5s had more than one disk in any one enclosure (a placement sketch follows this comment), and 4 spares made sure that up to 4 disks could die before we even had a chance of any long-term performance issues. If you're really paranoid you can build a RAID5+1 to make sure that up to two drives per RAID could die without data loss. I've heard of, but not seen, companies so paranoid that they use RAID5+2.

    The storage system at my current place is even fancier and dynamically handles the RAIDs. We've got about 100 TB spread across two racks' worth of enclosures, and any 20 or so disks could die at one time before we lost data.
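
The enclosure-aware striping rule described above ("no RAID set has two disks in the same enclosure") is easy to express as a placement sketch. This is only an illustration of the rule, not any vendor's actual layout algorithm; the enclosure and slot counts match the 8x14 example in the comment.

    ENCLOSURES = 8
    SLOTS_PER_ENCLOSURE = 14

    # Build RAID sets "vertically": set k takes slot k from every enclosure,
    # giving 14 sets of 8 disks, each with exactly one member per enclosure.
    raid_sets = [
        [(enclosure, slot) for enclosure in range(ENCLOSURES)]
        for slot in range(SLOTS_PER_ENCLOSURE)
    ]

    # Sanity check: no RAID set contains two disks from the same enclosure,
    # so losing a whole enclosure costs each set at most one member.
    for members in raid_sets:
        used = [enclosure for enclosure, _ in members]
        assert len(used) == len(set(used))

    print(f"{len(raid_sets)} RAID sets of {len(raid_sets[0])} disks each")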

  • by symbolset ( 646467 ) on Wednesday April 08, 2009 @04:23PM (#27508909) Journal

    Many new servers (and desktops!) come with internal USB for this, ESXi, and other reasons. Even blade servers.

  • by JakFrost ( 139885 ) on Wednesday April 08, 2009 @04:38PM (#27509163)

    One thing about this research paper: they used only one SSD model, the MemoRight GT MR25.2 in 8/16/32 GB capacities, for all of their testing prior to the paper's 2008-11-11 publication in the United Kingdom.

    I'm concerned that the research tests and results are largely skewed against SSDs because they used only that one model, at only one price point, for all of their testing.

    There is a very large difference in performance between the various SSD drives based on the original flawed JMicron JMF602 chipset (stuttering/freezing on write), the newer JMF602B (less stuttering), Samsung's chipset, Intel's chipset (fastest random writes by 4x), and the newest Indilinx Barefoot chipset (balanced sequential/random read/write). Additionally, the huge drops in prices over the last 6-12 months ($1,500 -> $400) are a big change in the SSD arena. These price, capacity, and performance changes are going to keep shifting for the next few years, yielding much better drives for consumers.

    I believe the research in the paper will shortly be obsolete, if it isn't already, given the latest products and price points on the market and the new products coming from Intel and others in Q3/Q4.

    I'm helping a friend of mine build an all-in-one HTPC / Desktop / Gaming system [hardforum.com] and I've been doing research into SSDs for the past few weeks based on reviews and benchmarks, so I wanted to share my info.

    Basically there are only two drives to consider and I list them below. A good alternative at this time is to purchase smaller SSDs and create RAID-0 (striping) sets to effectively double their performance instead of buying a single large SSD. The RAID-0 article below shows great benchmark results to this effect.

    Intel X25-M

    The Intel X25-M series of drives is the top performance leader right now, and the 80GB drive is barely affordable for a desktop system build if you consider the increased performance of the drive.

    Intel X25-M SSDSA2MH080G1 80GB SATA Internal Solid state disk (SSD) - Retail [newegg.com] - $383.00 USD ($4.79 per GB)

    OCZ Vertex

    The new OCZ Vertex series of drives with the newer 1275 firmware is the price/performance leader, and they are much more affordable than the Intel drives. When you combine two of these smaller 30/60 GB drives into RAID-0 (striping) you get double the performance at still-acceptable prices.

    OCZ Vertex Series OCZSSD2-1VTX30G 2.5" 30GB SATA II MLC Internal Solid state disk (SSD) - Retail [newegg.com] - $129.00 USD ($4.30 per GB)

    OCZ Vertex Series OCZSSD2-1VTX60G 2.5" 60GB SATA II MLC Internal Solid state disk (SSD) - Retail [newegg.com] - $209.00 USD ($3.48 per GB)

    Reviews

    Required Reading:
    AnandTech - The SSD Anthology: Understanding SSDs and New Drives from OCZ [anandtech.com]

    AnandTech - Intel X25-M SSD: Intel Delivers One of the World's Fastest Drives [anandtech.com]

    AnandTech - The SSD Update: Vertex Gets Faster, New Indilinx Drives and Intel/MacBook Problems Resolved [anandtech.com]

    RAID-0 Performance:
    ExtremeTech - Intel X25 80GB Solid-State Drive Review - PCMark Vantage Disk Tests [extremetech.com]

    BenchmarkReviews - OCZ Vertex SSD RAID-0 Performance [benchmarkreviews.com]
    (Be warned about BenchmarkReviews: synthetic benchmark results only, no real-world benchmarks such as PCMark Vantage.)

  • by Cramer ( 69040 ) on Wednesday April 08, 2009 @04:42PM (#27509243) Homepage

    No, it's not. I routinely netboot systems for repairs, upgrades, reimaging, etc. And with USB booting available on almost everything these days, it's just a matter of walking up to it...

  • by Anonymous Coward on Wednesday April 08, 2009 @05:16PM (#27509777)

    Those were actual prices extracted from a Google Product Search. Actual prices being charged on the web.

    I'm sure they were, but you egregiously cherry-picked only the most ridiculously expensive prices for SSDs (i.e. drives which were undoubtedly designed for enterprise storage), and then chose the cheapest $/bit consumer drive you could find.

    Given that the topic was server storage, and one of the motivating reasons for using SSDs in servers is random IO performance, and that rotating disks optimized for random IOPS cost a hell of a lot more than consumer 1TB drives, why didn't you look for those instead? For example, it looks like the cheapest you can do for a brand new 10K RPM 300GB Serial Attached SCSI disk is about $210. 15K RPM 300GB goes for at least $260.

    (If you try to do the same search, you will find cheaper Maxtor SAS disks out there. This is misleading, however: Seagate acquired Maxtor in 2006 and quickly killed off all Maxtor product lines, keeping only the brand name. Since Seagate only chose to sell consumer drives under the Maxtor brand, the cheap Maxtor SAS disks you'll find are leftover 3+ year old stock which is very hard to move because nobody wants to buy enterprise drives from a defunct vendor, even at fire sale prices. So I think it's fair to ignore prices on Maxtor SAS disks.)

    Anyhow, that's the comparison which should be made. Anybody interested in a SSD for a server is looking for random I/O performance, and that means you should restrict your choice of rotating disks to those which are also designed for random IOPS.
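
Using the prices quoted in this thread, the $/GB versus $/IOPS split looks roughly like the sketch below. The random-IOPS figures for the spinning disks and for the X25-M are ballpark assumptions, not verified vendor specs.

    drives = {
        # name:               (price_usd, capacity_gb, assumed_random_iops)
        "10K RPM 300GB SAS":  (210, 300, 140),
        "15K RPM 300GB SAS":  (260, 300, 180),
        "Intel X25-M 80GB":   (380, 80, 3000),
    }

    for name, (price, capacity, iops) in drives.items():
        print(f"{name:20s}  ${price / capacity:5.2f}/GB   ${price / iops:6.3f}/IOPS")

    # Capacity-bound workloads favour the SAS disks on $/GB; IOPS-bound workloads
    # favour the SSD on $/IOPS by roughly an order of magnitude, which is the
    # comparison the parent is arguing for.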

  • If it is just for a temporary cache, wouldn't RAM give you a bigger speed up than Flash?

    Sure, but then you have to worry about losing it to a backhoe incident. A UPS is a good idea, but getting the data to non-volatile storage sooner lets you complete the database commit faster. (And a UPS system for a high-end server deployment is a major chunk of hardware anyway, and there's always the worry that it will fail at the wrong moment...)

  • by kaiwai ( 765866 ) on Thursday April 09, 2009 @06:03AM (#27515735)

    The only real difference between the two in the SSD world is that the 'enterprise' and 'extreme' parts tend to be SLC rather than MLC. It'll only be a matter of time before the performance difference between the two is so minor that it'll be difficult to justify the higher price tag on performance alone.

"If it ain't broke, don't fix it." - Bert Lantz

Working...