
MS Researchers Call Moving Server Storage To SSDs a Bad Idea

An anonymous reader writes "As an IT administrator, did you ever think of replacing disks with SSDs? Or using SSDs as an intermediate caching layer? A recent paper by Microsoft researchers provides detailed cost/benefit analysis for several real workloads. The conclusion is that, for a range of typical enterprise workloads, using SSDs makes no sense in the short-to-medium term. Their price needs to decrease by 3-3000 times for them to make sense. Note that this paper has nothing to do with laptop workloads, for which SSDs probably make more sense (due to SSDs' ruggedness)."
  • by Tumbleweed ( 3706 ) * on Wednesday April 08, 2009 @01:01PM (#27506519)

    News at 11!

    • by VeNoM0619 ( 1058216 ) on Wednesday April 08, 2009 @01:07PM (#27506587)
      That's hardly the issue... notice how they say 3-3000 times cheaper. Meaning a $3000 SSD would have to cost $1 for them to consider it... Don't you love pulling numbers out of your ass?
      • I actually don't think times cheaper makes any sense.

        I hear it all the time, but it is meaningless.

        3000 times cheaper than what? The current price?

        If I am selling something that is now "twice as cheap", is that half the price? Double the discount? Twice as shoddily made?

        • You have to define cheap.
          A common sense definition puts it as the reciprocal of expensive.

          Just like saying twice as slow or twice as fast.
          You logically define slow as the time it takes, and fast as the reciprocal.

          • Re: (Score:2, Insightful)

            by AvitarX ( 172628 )

            Funny, my other complaint is twice as slow.

            The problem I see is that "3 times slower" doesn't multiply anything by 3, it divides it.

            And intuitively, slower and cheaper are not inverses, since we use statements like "5 dollars cheaper".

            • The problem I see is that "3 times slower" doesn't multiply anything by 3, it divides it.

              It divides velocity by 3, but it also multiplies time by 3.

            • Re: (Score:3, Insightful)

              by sexconker ( 1179573 )

              And dollars is a unit, so you add and subtract.

              If you said something was 5 times something, the 5 would be a scalar (on the second something) and you would divide or multiply.

              It makes perfect sense if you just stop to think about what the words slower, faster, cheaper, etc. mean. They all measure something, find out what, and the logically appropriate operation will be obvious.

              5 times slower means something takes 5 times as long, and therefore runs at 1/5th the rate.

              5 times cheaper means something gives 5…

            • by trb ( 8509 ) on Wednesday April 08, 2009 @04:28PM (#27509929)

              Funny, my other complaint is twice as slow.

              Yeah, I prefer "half fast."

        • If A is X times cheaper than B, we must first calculate the cheapness of B. If B should be priced at P, but is really priced at Q, where Q<P, we can calculate B's absolute cheapness as P-Q, and relative cheapness as (P-Q)/P. Therefore if A should cost P', then an A that is X times cheaper than B will cost P' discounted by the relative cheapness of B multiplied by X, that is, P'-P'X(P-Q)/P. Quite simple really.
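
          In Python, following that definition to the letter (all prices below are made up purely for illustration):

              # "X times cheaper", per the parent's tongue-in-cheek definition.
              def x_times_cheaper(p_fair, q_actual, p_prime, x):
                  # B's relative cheapness: (P - Q) / P
                  relative_cheapness = (p_fair - q_actual) / p_fair
                  # An A that is "X times cheaper than B" costs P' - P'X(P-Q)/P
                  return p_prime * (1 - x * relative_cheapness)

              # Suppose B is fairly worth $100 but sells for $90, so B is 10% cheap.
              # An SSD fairly worth $3000 that is "3 times cheaper" than B:
              print(x_times_cheaper(100, 90, 3000, 3))  # -> 2100.0
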
    • by Cormacus ( 976625 ) on Wednesday April 08, 2009 @01:17PM (#27506753) Homepage
      I dunno about that. I'm pretty sure that if your only tool is a hammer, all of your problems start looking like nails . . . allowing the hammer to be "applied" to every application . . .
      • Re: (Score:3, Funny)

        by Tumbleweed ( 3706 ) *

        I dunno about that. I'm pretty sure that if your only tool is a hammer, all of your problems start looking like nails . . . allowing the hammer to be "applied" to every application . . .

        I think an SSD would make for a very expensive hammer. Still, think about the low latency of such a hammer! Plus, with the wear levelling feature, the useful life of an SSD hammer seems like it would be much longer than that of a spinning-disc hammer. And the lower power requirements could pay for themselves very quickly if you h…

      • by macraig ( 621737 )

        Isn't that the very definition of a salesman?

  • by nweaver ( 113078 ) on Wednesday April 08, 2009 @01:03PM (#27506543) Homepage

    This is an ACM article behind a paywall.

    How about a slashdot policy of not linking to articles behind paywalls?

  • by eples ( 239989 ) on Wednesday April 08, 2009 @01:04PM (#27506555)

    Their price needs to decrease by 3-3000 times for them to make sense.

    Hm. I was thinking the same thing about the ACM subscription.

    • Ditto. I got a year free for being inducted into UPE (Smart move on their part, since I've got fsckall time to read anything between work and studies), but a quick scan of it told me that it wasn't worth $200 (or even the $99 to renew just membership).


  • by hxnwix ( 652290 ) on Wednesday April 08, 2009 @01:05PM (#27506567) Journal

    SSD is already cheaper per gig than some SAS drives. Also, 3-3000 times? What the hell sort of estimate is that?

    • by Larry Clotter ( 1527741 ) on Wednesday April 08, 2009 @01:08PM (#27506627)
      It's called "pulling numbers out of your ass".
    • I can only guess they're referring to differently priced SSDs. Some cost in the thousands, but provide top-tier performance. Their price would be justified at approximately 1/3rd the current price, as that's what would be necessary to provide similar cost/performance to a RAID array of rotational drives.

      On the other hand, the low-cost MLC SSDs typically provide lower performance than a single rotational drive, at a cost premium in the range of about 100x the cost of a rotational drive. These lower cost dr…

      • by drsmithy ( 35869 )

        I can only guess they're referring to differently priced SSDs. Some cost in the thousands, but provide top-tier performance. Their price would be justified at approximately 1/3rd the current price, as that's what would be necessary to provide similar cost/performance to a RAID array of rotational drives.

        The interesting thing is, according to the performance table on page 6, the SSD they used only had write performance of ~350 IOPS. Either that number is missing a zero, or something is _seriously_ wrong w…

    • SSD gives phenomenal random read performance, equally good serial read performance, and average write and random write performance (at least if you get a good SSD; the low-end ones using the crap IO chips are worse than budget HDs). The only way to beat the read performance of a good SSD is a really expensive SAS RAID, and even then it's not going to be by much. Yes, you can take a hit on serial write performance, but not much of one (it's on par with most medium-to-high-end HDs, with surprisingly few high e…
      • High-end HDDs still edge out SSDs for serial reads in many setups.

        Keep in mind that write performance degrades over time (goes from great to very good) as the pages get full.

        When you're out of free pages, you have to read an entire block of pages to cache, erase the entire block, then write back the new block.

        Current OSs and controllers do not yet support the "yes, actually delete it" command, and current controllers do not yet support any sort of automatic drive-level page consolidation.

        If money is no obje…

        • Actually the write performance degradation problem is a bit more involved than that, but the way you describe it is essentially correct. There are some applications out there that can consolidate an SSD to get back to the original performance (think of it as SSD defrag), but having support for the new erase command in both hardware and the OS will ultimately be the best solution. I've heard a rumor that Windows 7 is going to have support for it when it ships, and I assume Linux either has, or will have soon s…
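
          To make the read-erase-program cycle described above concrete, here is a toy Python model of one full block. The geometry (128 pages of 4 KiB per erase block) is an assumed example; real drives vary:

            # Toy model of the worst-case rewrite described above.
            PAGES_PER_BLOCK = 128   # assumed geometry, for illustration only
            PAGE_SIZE = 4096        # bytes per page (assumed)

            def rewrite_one_page(block, page_index, new_data):
                """Overwrite one page of a full block: read every page out,
                erase the whole block, then program everything back."""
                cached = list(block)           # 1. read the entire block into cache
                cached[page_index] = new_data  # 2. modify the one page in the cache
                return cached                  # 3. erase block, 4. program all pages

            block = [b"\x00" * PAGE_SIZE] * PAGES_PER_BLOCK
            block = rewrite_one_page(block, 7, b"\x42" * PAGE_SIZE)
            print(PAGES_PER_BLOCK, "pages programmed to change 1")  # 128x amplification

          The "yes, actually delete it" command and drive-level page consolidation both exist to keep free pages around, so this worst case stays rare.
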
      • by vrmlguy ( 120854 )

        If you're primarily going to be doing reads, particularly random reads, or even if you're going to be doing mostly random writes rather than serial writes, an SSD is probably a good idea.

        Which, as you undoubtedly know, is why enterprise data centers use SANs (Storage Area Networks) with multiple tiers of storage. You use big slow drives for archival storage (old emails, for instance), and smaller faster drives for day-to-day use (databases, etc). Flash drives get used when performance really matters, such as database indexes, not the actual data.

    • Of course, SAS drives are also often too expensive to survive a purely cost/benefit driven analysis. For many real-world loads you're better off adding more spindles which can give you similar iops per dollar but with the added benefit of vastly more storage space.

      There's a lot of snake oil and very little quality analysis in enterprise storage these days, so it's good to see at least some attempt to do actual real-world cost/benefit calculations before jumping onto the marketing train.

      • by drsmithy ( 35869 )

        Of course, SAS drives are also often too expensive to survive a purely cost/benefit driven analysis. For many real-world loads you're better off adding more spindles which can give you similar iops per dollar but with the added benefit of vastly more storage space.

        You need 2-3x as many SATA drives as 15k SAS/FC drives to get equivalent IOPS. That means 2-3x as much physical space required, probably around 1.5-2x as much power usage, and decreased reliability overall.

        Storage volume is rarely a concern when the…

        • probably around 1.5-2x as power usage and decreased reliability overall.

          Have you ever heard of a thing called RAID? It is actually designed for inexpensive, crappy, off-the-shelf disks.

          • by drsmithy ( 35869 )

            Have you ever heard of a thing called RAID? It is actually designed for inexpensive, crappy, off-the-shelf disks.

            More disks means less reliability due to more points of failure. A 16-drive array is more likely to have a failure than a 6-drive array by virtue of simple statistics.

            • More disks means less reliability due to more points of failure. A 16-drive array is more likely to have a failure than a 6-drive array by virtue of simple statistics.

              The chance of a single disk failing is irrelevant when the array is capable of handling disk failures without failing as a whole. I was actually kidding when I asked if you hadn't heard of RAID, but you obviously haven't; so read up on it, it will explain the details for you.

              • by drsmithy ( 35869 )

                The chance of a single disk failing is irrelevant when the array is capable of handling disk failures without failing as a whole.

                No, it's not.

                I was actually kidding when I asked if you hadn't heard of RAID, but you obviously haven't; so read up on it, it will explain the details for you.

                I've been working with RAID for ~15 years now. I have a rough idea about how it works. You, OTOH, seem to think an array that has lost a disk suffers no change in ongoing reliability or performance, which strongly sug…

            • Re: (Score:3, Informative)

              by DrgnDancer ( 137700 )

              A well-designed RAID in a robust SAN can survive not just the death of a drive but often the death of an entire enclosure (10-16 drives, depending on age and enclosure design). Most of the time a small enterprise-class SAN has 8-12 enclosures' worth of drives. Big ones can span half a dozen or more racks. I don't think this article is talking about a couple drives thrown into a box with a hardware RAID controller here. When a player like Microsoft starts talking about "storage" they are talking 100 TB or…
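
              For what it's worth, the "simple statistics" both sides are waving at fit in a few lines of Python. The failure rate and rebuild window below are assumptions, not figures from the paper:

                # Chance of seeing any drive failure in a year, vs. chance a
                # single-parity (RAID-5-style) array actually loses data.
                AFR = 0.03           # assumed 3% annual failure rate per drive
                REBUILD_DAYS = 1.0   # assumed rebuild window after a failure

                def p_any_drive_fails(n):
                    return 1 - (1 - AFR) ** n

                def p_raid5_data_loss(n):
                    # Data is lost only if a second drive (of the n-1 left)
                    # dies during the rebuild window after the first failure.
                    p_second = 1 - (1 - AFR * REBUILD_DAYS / 365) ** (n - 1)
                    return p_any_drive_fails(n) * p_second

                for n in (6, 16):
                    print(f"{n} drives: {p_any_drive_fails(n):.1%} see a failure, "
                          f"{p_raid5_data_loss(n):.4%} lose data")

              Which is the sense in which both posters have a point: more spindles do fail more often, but the array as a whole stays reliable as long as rebuilds finish quickly.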

    • by Z00L00K ( 682162 )

      Which to select right now depends on the application, but in the long run SSD drives will have the advantage.

      So soon we may no longer need those noisy hard disks at all.

      And when storage is built on flash memory it may be possible to work with it in segments, where parts of the disk aren't powered, in order to save power and generate less heat. The latter is a huge advantage in datacenters where cooling is expensive.

      The ruggedness is also an advantage, but not in datacenters. What you usually want in…

      • And when storage is built on flash memory it may be possible to work with it in segments, where parts of the disk aren't powered, in order to save power and generate less heat.

        Flash drives all do this right now automatically.

        Unless the controller is actively accessing a particular block, there is no power to that part of the flash, as there is no need to power it in any way. This is the underlying concept behind all non-volatile memory (like flash)...it doesn't have to be continuously powered to maintain its state.

    • by afidel ( 530433 )
      Not ones you'd use in an enterprise! The X25-E is the only SLC-based flash with a decent controller under $1k, and it's still $24/GB. Unless you have a WORM application that needs fast seeks (pretty rare), MLC-based flash isn't a good fit for most enterprise applications. The only areas we've found for them are log drives for high-transaction database servers, where the insane IOPS per $ make sense, and cache for a BI system, which still sees enough writes to rule out MLC. Oh, and their analysis is based on a rather s…
      • by C. E. Sum ( 1065 ) *

        More like $13/GB. HTH! HAND!

        http://www.provantage.com/intel-ssdsa2sh064g101~7ITE90J5.htm [provantage.com]

        X25E SLC 64GB 2.5INCH SATA SSD $827

        And Provantage is rarely a price-leader.

        • by afidel ( 530433 )
          Cool, when I bought mine the 64GB wasn't available and the 32GB was almost that much. That's why SSDs are so cool right now: even at the high end the $/GB is falling rapidly, much more so than enterprise HDDs'. 450GB 15K FC drives cost about three times that much, so about 1/3rd the $/GB but a MUCH higher $/IOPS.
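
          Plugging this subthread's own ballpark figures into a quick comparison (the FC price here is just "about three times" the X25-E's, and the IOPS numbers are the claimed/typical figures cited on this page, not anything I've benchmarked):

            # Rough $/GB and $/IOPS from the numbers quoted in this thread.
            drives = {
                "Intel X25-E 64GB SLC": {"price": 827,  "gb": 64,  "iops": 3300},
                "450GB 15K FC":         {"price": 2481, "gb": 450, "iops": 180},
            }
            for name, d in drives.items():
                print(f"{name}: ${d['price'] / d['gb']:.2f}/GB, "
                      f"${d['price'] / d['iops']:.2f}/IOPS")
            # -> ~$12.92/GB vs ~$5.51/GB, but ~$0.25/IOPS vs ~$13.78/IOPS
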
    • by qoncept ( 599709 )
      What is this, an SSD for ants???
    • SSD is already cheaper per gig than some SAS drives. Also, 3-3000 times? What the hell sort of estimate is that?

      I remember an article a while ago talking about how Windows disk drivers are not optimized for SSD. Now there is a white paper by Microsoft showing how SSD is not practical. So to answer your question, it is a PR estimate.

  • What if... (Score:4, Insightful)

    by Thelasko ( 1196535 ) on Wednesday April 08, 2009 @01:06PM (#27506577) Journal
    they don't use NTFS?
  • 3 to 3000 percent? (Score:4, Insightful)

    by erroneus ( 253617 ) on Wednesday April 08, 2009 @01:10PM (#27506643) Homepage

    My goodness! They have really done their research in order to produce data as accurate as that!

    The fact is, they said the same thing about magnetic tape versus magnetic disks. These days, hard drives are cheaper than tapes and will hold their data longer and more compatibly.

    Microsoft fears change that they do not control. If they don't control the changes, someone might write them out of the story.

    • by lgw ( 121541 ) on Wednesday April 08, 2009 @01:28PM (#27506939) Journal

      These days, hard drives are cheaper than tapes and will hold their data longer and more compatibly.

      That's entirely false.

      Hard drives are vastly cheaper than tape drives, but enterprise-quality tape is still cheaper than enterprise-quality HDDs.

      Enterprise tape has a proven 20-year shelf life, no HDD does.

      I wrote new commercial software that could (and did) work with IBM's 9-track tape format in 1994, 30 years after it was released, and there is still hardware and software in use today that can read that hardware format - 45 years of compatibility. The abstract format - ANSI tape labels - is still in niche use for newly saved data today. DLT format is 25 years old, and while I'm not sure you can buy a new drive that reads the original DLT format, used drives are still easy to come by and you can connect them to new SCSI cards.

      How easy is it to read an MFM drive (assuming there are more than 0 in the world that still work)? That format is 30 years old, and it would be a real challenge to find a slot on a modern PC that would take an MFM controller, vastly harder than reading a DLT tape. FAT is also about 30 years old, but disk formats older than that are basically extinct.

      • Re: (Score:3, Informative)

        by Emnar ( 116467 )

        Enterprise tape has a proven 20-year shelf life, no HDD does.

        That may be, but I've lost track of the number of times (as a storage engineer) that I've seen tape backups go bad. Even "enterprise-quality" tapes. I think the claims don't match the reality.

        Hard drives die too, but in the case of drive storage (1) it's a lot easier to verify your backups on a periodic basis, like every month; and (2) you can suffer a failure or two (depending on your RAID setup -- most people wouldn't run anything more than RA…

    • by blueg3 ( 192743 )

      Probably shouldn't take mathematical advice from someone who confuses "3 to 3000 times" with "3 to 3000 percent".

      Considering that SSD prices vary and performance and workload situations vary more, it's not surprising that there is a range. It's not even surprising that it's a large range. (For example, "if your workload is closer to the optimal profile, the price needs to decrease by a factor of 3; if the workload is closer to the worst-case profile, the price needs to decrease by a factor of 3000". The onl…

    • FUD, mod parent troll please. No kid born in the 90s should talk about the tape wars unless they were there...

  • by dAzED1 ( 33635 ) on Wednesday April 08, 2009 @01:17PM (#27506769) Journal

    Seriously? "We don't have enough people here. We need between 2-2000 times as many people in the configuration department." Does that sound like I have ANY idea how many people we need?

    Sorry, that is a *ridiculous* range to give.

    • For consumer grade telescopes to be able to see the flag planted on the moon, they would need to be 2-2000 times more powerful.

      Makes perfect sense if the item you are talking about varies greatly from one vendor's product to the next.

      • by dAzED1 ( 33635 )

        1) any telescope that needs to be 2000x more powerful to see a flag planted on the moon isn't a telescope.
        2) there is not 1000x the price variance (since what was being discussed was price) between viable SSD vendors for similar products.

        • there is not 1000x the price variance (since what was being discussed was price) between viable SSD vendors for similar products.

          There may, however, be a 1000x total lifetime cost differential between an SSD solution and a standard HD solution, especially in write-intensive, redundant storage applications.

          • by dAzED1 ( 33635 )

            I think you're missing the point. If there's 1000x the total lifetime cost differential, then great! You've got an idea of the difference there.

            That has nothing to do with my post, though. To say "2000-3000x difference" would at least be reasonable. "5-10 times difference" might be as well. But when the range of differential goes from a single digit (3) to a number 1000x larger (3000), then the range of potential differential demonstrates - when the specifics of something can be so easily known, like in…

    • Sorry, that is a *ridiculous* range to give.

      Is it now? Different uses give different read/write profiles for the same server configuration. Different server configurations add to the mix. A write-intensive application on a RAID5 system will have a much different cost/benefit analysis than a read-intensive file server using a single drive. Especially when one factors in the fact that the more often one writes to a flash-memory-based device, the faster that device wears out.

      Your little quote should read: "we…

      • by dAzED1 ( 33635 )

        Their price needs to decrease by 3-3000 times for them to make sense.

        There are plenty of configuration options that are possible...that would not make sense. I could make a mirror set of a RAID 1+0 set where each component is a RAID 5 set composed of.... ...yeah, wouldn't make any sense, would it. Which is why I said it is a ridiculous range. We have the specifics of the situation, there's no reason to go trying to large divergent sets; my very point is that the range of divergence is not reasonable given…

        • by dAzED1 ( 33635 )

          "there's no reason to go trying to large divergent sets"

          should be: "there's no reason to go trying to compare to large divergent sets"

        • Did you bother to read the article?

          There is a recent post [slashdot.org] that you might want to read.

          • by dAzED1 ( 33635 )

            Yes. Doesn't change the position. If the answer is "3-10 times cheaper for most configurations, but 2500-3000 times cheaper for small files" then that would be acceptable. Ranges can have elements within them. Merely saying "3-3000 times", however, means that there are situations near 100x, some near 245x, some near 1221x, some near 2331x...and all of them would "make sense" per the "Their price needs to decrease by 3-3000 times for them to make sense" quote.

  • 'Real Workloads' (Score:2, Informative)

    by dchaffey ( 1354871 )
    What a misleading term - I know of companies using Enterprise SSD in production precisely because it's financially sound for them to utilise the ridiculous speed improvement it provides.

    Sure, it's not a lot of companies that are using this yet, but as longevity increases with better garbage collection and write-spreading algorithms, as well as stability and feature set through maturing software and firmware, it's closer than you think.

    For clarity, the product wasn't SSD behind SATAII, it was FusionIO's PCI dev…
  • Inaccurate summary (Score:5, Informative)

    by chazzf ( 188092 ) <cfulton AT deepthought DOT org> on Wednesday April 08, 2009 @01:28PM (#27506933) Homepage Journal
    Hat tip to the anon for the Google cache link (http://tinyurl.com/d2py5r). The summary doesn't quote exactly from the paper, which actually said this:

    "Our optimization framework is flexible and can be used to design a range of storage hierarchies. When applied to current workloads and prices we find the following in a nutshell: for many enterprise workloads capacity dominates provisioning costs and the current per-gigabyte price of SSDs is between a factor of 3 and 3000 times higher than needed to be cost-effective for full replacement. We find that SSDs can provide some benefit as an intermediate tier for caching and write-ahead logging in a hybrid disk-SSD configuration. Surprisingly, the power savings achieved by SSDs are comparable to power savings from using low-power SATA disks."

  • by HangingChad ( 677530 ) on Wednesday April 08, 2009 @01:34PM (#27507017) Homepage

    Microsoft researchers provides detailed cost/benefit analysis for several real workloads.

    If Microsoft researchers report that SSDs are not cost-effective storage, it means that Microsoft is not getting any revenue from SSD storage. Or that they're behind on incorporating SSDs into the server stack. Or they got caught blind-sided by the trend, like they did with netbooks, and are now scrambling to explain why they didn't see it coming. Oh, we found that wasn't cost effective, so we didn't incorporate it.

    I really miss the days Microsoft had it together. There was a time they were great to work with. Now they seem like the Three Stooges Do IT. SSD, eh? Oh, a wise guy! SMACK! Wo-wo-wo-wo!

  • Something's wrong (Score:3, Insightful)

    by drsmithy ( 35869 ) <drsmithy@gm[ ].com ['ail' in gap]> on Wednesday April 08, 2009 @01:42PM (#27507139)
    They list the write IOPS of their "Enterprise SSD" drive as only ~350. That number seems like it's an order of magnitude too low, which would obviously skew the conclusions.
  • by kroyd ( 29866 ) on Wednesday April 08, 2009 @01:42PM (#27507149)
    Sun has been making quite a bit of noise in the storage architecture world with their use of SSDs as intermediate cache to improve reading and writing speeds.

    http://blogs.sun.com/brendan/entry/test [sun.com] has some background information, and http://blogs.sun.com/brendan/entry/l2arc_screenshots [sun.com] and http://blogs.sun.com/brendan/entry/my_sun_storage_7410_perf [sun.com] has some performance numbers.

    Basically, what Sun is claiming is that by adding an SSD cache layer you can improve IOPS by about 5x, for what amounts to a really small amount of money for, say, a 100 TB system. This is being marketed quite heavily by Sun as well. (The numbers look convincing, and the prices for the Sun Storage servers are certainly very competitive, well, compared to, say, NetApp.)

    IMHO this is just a repeat of the well known Microsoft tactic of spreading massive amounts of FUD about any competing technology that you can't reproduce yourself - you'll have to wait until Windows Server 2013 for this.

    • Re: (Score:3, Informative)

      Sun has been making quite a bit of noise in the storage architecture world with their use of SSDs as intermediate cache to improve reading and writing speeds.

      You are conflating Sun's claims here, as the performance gains from using SSDs in their configuration are not generally applicable to other Flash based systems.

      ZFS will use SSDs in two very different ways: as cache(L2ARC) devices, and log devices. The cache devices are for improving read IOPs on a mostly static working data set, and a large Flash-based SSD is fine in this scenario. The log devices are for reducing the latency of synchronous writes, and a small DRAM-based SSD is used in this case.
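
      The two roles are easy to conflate, so here's a toy sketch of the distinction in Python (a generic illustration of the architecture, not ZFS code):

        # A read cache absorbs re-reads of a mostly-static working set;
        # a write-ahead log makes synchronous writes durable at low latency.
        class HybridStore:
            def __init__(self):
                self.disk = {}        # big, slow, cheap spindles
                self.read_cache = {}  # large flash SSD: the L2ARC-like tier
                self.wal = []         # small, fast log device: the slog-like tier

            def read(self, key):
                if key in self.read_cache:      # fast path: cache hit
                    return self.read_cache[key]
                value = self.disk.get(key)      # slow path: go to disk
                self.read_cache[key] = value    # promote for next time
                return value

            def write_sync(self, key, value):
                self.wal.append((key, value))   # durable as soon as logged
                self.disk[key] = value          # flushed to disk later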

  • by Vellmont ( 569020 ) on Wednesday April 08, 2009 @01:45PM (#27507211) Homepage

    Dismissing using SSD because it's only cost effective for the boot partition is a mistake. Anyone who's put together servers before knows the boot partition is critical to the system, and the hardest part to backup. Once you get a system booted, there's a million things you can do to fix it or restore the relevant data. Getting it bootable if the boot partition is toast is much harder.

    • Shit, these days you can get a boot partition on a 32MB CF/SD card for a server, might have to go all the way up to 128MB for a desktop box... I wonder if they have SD-to-ATAPI adapters...

      Provided, of course, that you're not using an OS that thinks the entire fucking UI needs to live there...

  • by David Jao ( 2759 ) <djao@dominia.org> on Wednesday April 08, 2009 @01:46PM (#27507231) Homepage
    This paper is biased and premature even by the prevailing low standards of typical CS papers. For example, they model SSD failure, but completely ignore mechanical drive failure, which is far more devastating and commonplace. I kid you not:

    Since this paper is focused on solid-state storage, and wear is a novel, SSD-specific phenomenon, we include it in our device models. Currently we do not model other failures, such as mechanical failures in disks.

    The correct approach to incomplete data is, of course, to gather complete data, and they have no excuse here, because there is PLENTY of data on mechanical drive failure rates. However, if you are not willing to do that, the least you can do is ignore the data equally on both sides. The authors' failure to treat both sides equally leads to a hopelessly biased and skewed analysis.

  • Read the Paper (Score:5, Informative)

    by kenp2002 ( 545495 ) on Wednesday April 08, 2009 @01:48PM (#27507271) Homepage Journal

    I just finished reading the paper.

    The paper boils down to this:

    SSDs, when measured on IOPS, watts, and capacity in relation to cost across several different server types, are not cost-effective yet. Depending on the type of server, costs need to come down at least 3-fold, and under some scenarios as much as 3000-fold. For hosting MP3s, which is largely sequential, low-write storage, SSDs are 3000 times overpriced. For insane random-IO scenarios they need to come down only 3-fold to be worth it compared to conventional drives.

    Depending on the type of server, they can perform worse than standard mechanical disks.

    They found no advantage to 15k RPM drives versus 10k RPM drives when cost is factored in.

    SSD drives pay for themselves in power savings in about 5 years, well past their expected longevity (see the back-of-the-envelope check at the end of this comment).

    Mechanical disks wear out more or less independent of their data load; SSDs wear out proportional to their data load.

    SSDs do not handle tiny files very well due to how data is written.

    I see nothing in the paper that is pro-Microsoft; rather, it's straight dealing on the drives themselves.

    I would suggest MOD-TROLL any evangelist on any side of the OS wars, as this paper doesn't seem to deal with OS touting.

    It was a boring but informative read.
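
    A back-of-the-envelope check of that ~5-year power-payback figure, with every input an assumption rather than a number from the paper:

      DISK_W, SSD_W = 15.0, 2.0   # assumed draw: 15K disk vs. SSD
      EFF_KWH = 0.30              # assumed $/kWh including cooling overhead
      SSD_PREMIUM = 200.0         # assumed extra purchase price of the SSD

      saved_per_year = (DISK_W - SSD_W) * 24 * 365 / 1000 * EFF_KWH
      print(f"payback: {SSD_PREMIUM / saved_per_year:.1f} years")  # ~5.9

    Whether it lands near 5 years or 50 depends entirely on the price premium and power assumptions, which is presumably why the paper frames it per workload.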

    • by OzPeter ( 195038 )
      Agree with you on your analysis. Just skimmed the paper as well, and all those people who said the paper's writers were pulling numbers out of their arse should go read it themselves.

      Wish I had mod points for you.
    • Re:Read the Paper (Score:4, Interesting)

      by amorsen ( 7485 ) <benny+slashdot@amorsen.dk> on Wednesday April 08, 2009 @02:21PM (#27507825)

      They tested precisely one brand/model SSD, as far as I can tell from the paper. It did 351 random writes per second, which is pitiful (but probably typical). Intel claims 3300 random writes per second for the X25-E.

      • Re: (Score:3, Interesting)

        by owlstead ( 636356 )

        I would not call an OCZ Vertex drive enterprise level, but you can already see that changes in the controllers can make massive differences. Just compare the random write speeds in this article with those in the latest reviews of Anandtech. Once new controllers start coming out and compete (both in performance and price), the landscape will be entirely rewritten by SSD.

        And although the Memoright is still very expensive, in general you see a massive change in price for SSDs. Currently they seem temporarily…

    • Re:Read the Paper (Score:4, Insightful)

      by careysub ( 976506 ) on Wednesday April 08, 2009 @03:50PM (#27509369)

      It is informative, but there is one aspect that they omit from their analysis - the effect of device performance on the cost of the server farm needed to provide service. The whole analysis is based on storage device cost only (there are good reasons for this, but it limits the relevance of their analysis). If the higher read rates of an SSD translate into higher server transaction rates, then fewer servers are needed, at possibly dramatic additional savings.

      Here is a specific scenario to make this concrete.

      You have a search engine application that accesses a relatively static index (small parts refreshed daily maybe, all of it refreshed monthly). The ability to randomly read blocks determines how many queries per second your server can handle. The 17-fold speed advantage of the SSD over the Cheetah 15K is a huge win here. Of course you can set up a RAID 0+1 of Cheetahs, but your server box only holds 4 data drives (out of 6; you mirror 2 more for redundant storage of the OS and application). So you need to buy four times as many servers using Cheetahs as using SSDs, and they use more than 4x the power and take up extra data center space (which is not free).

      Or you could stuff a dozen or more Cheetahs into a RAID chassis that costs several times more than one server box.

      Either way the cost of the Cheetahs themselves is trivial compared to the cost of the hardware required to actually make use of them.
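
      The server-count math is easy to sketch. The per-drive IOPS and target load below are assumptions; the 17x ratio and 4 data drives per box come from the scenario above:

        import math

        TARGET_IOPS = 40_000          # assumed aggregate random-read load
        CHEETAH_IOPS = 250            # assumed per-drive random reads
        SSD_IOPS = 17 * CHEETAH_IOPS  # the 17-fold advantage cited above
        DRIVES_PER_BOX = 4            # data drives per server, per the scenario

        def boxes(per_drive_iops):
            return math.ceil(TARGET_IOPS / (per_drive_iops * DRIVES_PER_BOX))

        print(boxes(CHEETAH_IOPS), "boxes of Cheetahs")  # 40
        print(boxes(SSD_IOPS), "boxes of SSDs")          # 3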

  • I concur (Score:3, Informative)

    by Locke2005 ( 849178 ) on Wednesday April 08, 2009 @02:16PM (#27507741)
    For read data, it makes more economic sense to cache to RAM instead of SSD and just read everything into RAM at startup. For writes, I'm not so sure -- is a write to SSD really that much faster than a write to disk? It might make sense to use SSD for journaling in those cases where a transaction can't complete until you are certain the results have been saved. But in that case, your network latencies are probably much greater than your disk write latency anyway.
    • Re: (Score:3, Insightful)

      by saiha ( 665337 )

      It really depends on how much data you are talking about and your performance requirements. SSD gives a good medium between RAM and HD for both speed and cost.

  • by JakFrost ( 139885 ) on Wednesday April 08, 2009 @03:38PM (#27509163)

    One thing about this research paper is that they used only one model, the MemoRight GT MR25.2, in 8/16/32 GB capacities for their testing, done before the paper's 2008-11-11 publication in the United Kingdom.

    I'm concerned that the test results are largely skewed against SSDs because they used only that one model for all their testing, based on only one price point for SSDs.

    There is a very large difference in performance between the many SSD drives based on the original flawed JMicron JMF602 chipset (stuttering/freezing on write), the newer JMF602B (less stuttering), Samsung's chipset, Intel's chipset (fastest random writes by 4x), and the newest Indilinx Barefoot chipset (balanced sequential/random read/write). Additionally, the huge drops in prices in the last 6-12 months ($1,500->$400) are a big change in the SSD arena. These price, capacity, and performance changes are going to continue fluctuating for the next few years, yielding much better drives for consumers.

    I believe that the research in the paper will be shortly obsolete, if it isn't already, given the latest products on the market and price points and the Q3/Q4 new upcoming products from Intel and others.

    I'm helping a friend of mine build an all-in-one HTPC / Desktop / Gaming system [hardforum.com] and I've been doing research into SSDs for the past few weeks based on reviews and benchmarks so I wanted to share my info.

    Basically there are only two drives to consider and I list them below. A good alternative at this time is to purchase smaller SSDs and create RAID-0 (striping) sets to effectively double their performance instead of buying a single large SSD. The RAID-0 article below shows great benchmark results to this effect.

    Intel X25-M

    The Intel X25-M series of drives is the top performance leader right now, and the 80GB drive is barely affordable for a desktop system build if you consider the increased performance of the drive.

    Intel X25-M SSDSA2MH080G1 80GB SATA Internal Solid state disk (SSD) - Retail [newegg.com] - $383.00 USD ($4.7875/GB)

    OCZ Vertex

    The new OCZ Vertex series of drives with the newer 1275 firmware is the price/performance leader and they are much more affordable than the Intel drives. When you combine two of these smaller 30/60 GB drives into RAID-0 (striping) you get double the performance at still acceptable prices.

    OCZ Vertex Series OCZSSD2-1VTX30G 2.5" 30GB SATA II MLC Internal Solid state disk (SSD) - Retail [newegg.com] - $129.00 USD ($4.30/GB)

    OCZ Vertex Series OCZSSD2-1VTX60G 2.5" 60GB SATA II MLC Internal Solid state disk (SSD) - Retail [newegg.com] - $209.00 USD ($3.483/GB)

    Reviews

    Required Reading:
    AnandTech - The SSD Anthology: Understanding SSDs and New Drives from OCZ [anandtech.com]

    AnandTech - Intel X25-M SSD: Intel Delivers One of the World's Fastest Drives [anandtech.com]

    AnandTech - The SSD Update: Vertex Gets Faster, New Indilinx Drives and Intel/MacBook Problems Resolved [anandtech.com]

    RAID-0 Performance:
    ExtremeTech - Intel X25 80GB Solid-State Drive Review - PCMark Vantage Disk Tests [extremetech.com]

    BenchmarkReviews - OCZ Vertex SSD RAID-0 Performance [benchmarkreviews.com]
    (Be Warned about BenchmarkReviews! Synthetic benchmark results only, no real-life benchmarks such as PCMark Vantage.)

  • "As an IT administrator did you ever think of replacing disks by SSDs? Or using SSDs as an intermediate caching layer?

    SSDs aren't big enough for some uses as mass storage but they could speed up things if used as a cache.

    Note that this paper has nothing to do with laptop workloads, for which SSDs probably make more sense (due to SSDs' ruggedness)."

    I think laptops are where SSDs can come into their own. There shouldn't be as much need for large mass storage, and SSDs extend battery life. Having said that, I replaced the 160GB HDD in my 1 1/2 year old laptop with a 320GB drive, the biggest I could find.

    Falcon
