
 



Data Storage Hardware

SSD Prices Fall Dramatically In 2012 But Increase In Q4 77

crookedvulture writes "Solid-state drives became much more affordable in 2012. The median price for 240-256GB models fell by about 44% over the course of the year and now sits around 83 cents per gigabyte. Lower-capacity drives also got cheaper, albeit by smaller margins that kept median prices from dipping below the $1/GB threshold. Surprisingly, most drives actually got more expensive over the fourth quarter, despite Black Friday and other holiday sales. This upswing was driven largely by OCZ's decision to back off its strategy of aggressively discounting drives to gain market share, allowing its rivals to raise prices, as well. Although some new models arrived with next-generation 19- and 20-nm NAND that should be cheaper to produce, those drives didn't debut at lower prices. We may have to wait a while before SSD makers pass the savings along to consumers."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • yea they fell by 44% (Score:5, Interesting)

    by Osgeld ( 1900440 ) on Wednesday January 16, 2013 @01:25AM (#42600533)

they also started using 3-bit-per-cell storage, effectively cutting their lifespan to a third while decreasing speed, while still being expensive as jewel-encrusted shit

give me a modern SLC quarter gig drive for 150 bucks and then I might start looking; otherwise I am not looking to replace my expensive drive every 2-7 years while counting every write. I have 3.5-inch drives from as early as 1986, dammit; I expect more for the investment.

    • by Osgeld ( 1900440 )

      ugh quarter TB

    • by Anpheus ( 908711 ) on Wednesday January 16, 2013 @01:39AM (#42600601)

      This is just uninformed. Not all drives use TLC and most drives released in 2012 do not. Some drives did, like the Samsung 840, but the 840 Pro for example did not, nor did the OCZ Vector, etc.

      Anyway, the case has always been that if you're not sure about the reliability of your disks: don't just use one! Software RAID solves the issue of TRIM support and you shouldn't be using parity on SSD drives anyway (due to garbage collection issues) so throw it in a RAID1 or RAID10 and build an even more reliable disk.

      And if you're on a laptop that can only hold one internal disk and you still feel unsafe with just one disk: why aren't you using some sort of network based or "cloud" backed storage to ensure you have copies of your most valuable data? Why aren't you making backups?

      Seriously, these problems didn't just up and appear with the invention of SSDs. It's not like we had a 30 year golden age in which no hard drive ever failed or there weren't bad runs of drives (*cough*DeskStar*cough*) that caught users by surprise. The solution has been and always will be: use RAID for redundancy, make backups for recovery.

      • by blind biker ( 1066130 ) on Wednesday January 16, 2013 @02:39AM (#42600849) Journal

        This is just uninformed. Not all drives use TLC and most drives released in 2012 do not. Some drives did, like the Samsung 840, but the 840 Pro for example did not, nor did the OCZ Vector, etc.

I have to disagree - this is very well informed, because the OP is at least aware that triple-level cell SSD drives were introduced last year, and he/she is aware that TLC is crap waiting to unleash its crappiness.

        Besides, just because "not all drives are TLC", the point still remains that manufacturers are only interested in high margins by selling MLC and now TLC drives, and fuck reliability and longevity.

        • by Anonymous Coward

          It is uninformed.

TLC is not "crap waiting to happen". The Samsung 840 (not Pro), which introduced TLC this year, is a home-user device. While the number of writes per cell is greatly reduced, at 10 GB/day the drive statistically still has far more write endurance than the projected lifetime of a home-use device requires.

Never let math and reason get in the way of uninformed FUD.

For the price offered, the 840 is an awesome deal and I for one will take it to put my game partition on one (which means very few writes beyond the
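The math-and-reason point can be made concrete with a back-of-the-envelope sketch. The P/E cycle count and write amplification below are assumed round numbers for consumer TLC, not measured Samsung specs:

```python
# Rough TLC SSD lifetime estimate at 10 GB/day (all figures assumed).
capacity_gb = 250       # drive capacity
pe_cycles = 1000        # assumed P/E cycles per cell for consumer TLC NAND
host_writes_gb = 10     # daily host writes, as in the comment above
write_amp = 1.5         # assumed write amplification factor

total_endurance_gb = capacity_gb * pe_cycles
years = total_endurance_gb / (host_writes_gb * write_amp * 365)
print(f"~{years:.0f} years")  # roughly 46 years under these assumptions
```

Even halving the assumed cycle count or doubling the write rate leaves the estimate comfortably past a typical drive's service life.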

        • the point still remains that manufacturers are only interested in high margins by selling MLC and now TLC drives, and fuck reliability and longevity.

This is not necessarily a bad thing. TLC drives have dismal write life expectancy, but not all applications are critical or require some stupendous write life. Hell, I don't expect anyone out there to buy a 128GB drive and actually expect to use it next year when Windows 9 will use 200GB just to run the install.

I bought one of the Samsung TLC drives. I'm very happy with it. It sits in a media centre. It has Windows on it. That's it. No database, no critical documents, just an OS, the media platform, and dam

          • by Luckyo ( 1726890 ) on Wednesday January 16, 2013 @10:30AM (#42603753)

A less-than-a-year life expectancy for a home user's internal PC storage medium is okay with you?

Are you also okay with a couple of years' life expectancy for a car? A five-year life expectancy for a house?

            Because that is just an absurd reduction in life expectancy.

Are you OK with a 1-year life expectancy on your car's air filter?
              • by Luckyo ( 1726890 )

Sure. But not the engine block or brakes. Kind of like how I'm okay with dust filters on my PC intakes getting clogged every 6 months or so.

            • Re: (Score:2, Informative)

              by Anonymous Coward

TLC does not have a "less-than-a-year life expectancy".
              See the endurance test here:
              http://www.anandtech.com/show/6459/samsung-ssd-840-testing-the-endurance-of-tlc-nand [anandtech.com]

              • by Luckyo ( 1726890 )

I was replying to the parent's ridiculous hyperbole, not the actual life expectancy of current drives.

A less-than-a-year life expectancy for a home user's internal PC storage medium is okay with you?

Except it's only less than a year depending on use. As mentioned, this is a media server. It gets turned on every couple of days. The most reads and writes it will ever do is installing a Windows update. I fully expect this drive to last far longer than one year. Also, for the price I paid ... yes, a year would be a good run for the speed.

Are you also okay with a couple of years' life expectancy for a car? A five-year life expectancy for a house?

              You're going to think of me as strange but yes, I bought my last car expecting to get less than 1 year out of it. Were it a new car from a reputable manufacturer I'd have expe

        • Re: (Score:3, Informative)

          by David_Hart ( 1184661 )

          This is just uninformed. Not all drives use TLC and most drives released in 2012 do not. Some drives did, like the Samsung 840, but the 840 Pro for example did not, nor did the OCZ Vector, etc.

I have to disagree - this is very well informed, because the OP is at least aware that triple-level cell SSD drives were introduced last year, and he/she is aware that TLC is crap waiting to unleash its crappiness.

          Besides, just because "not all drives are TLC", the point still remains that manufacturers are only interested in high margins by selling MLC and now TLC drives, and fuck reliability and longevity.

          Until this article, I didn't realize that there was a difference in SSD technology (SLC, MLC, TLC). I recently built a new system with two Samsung 840 250GB TLC SSD drives (paid about $170 each). I have one dedicated to the OS, one for programs, and I'm storing my data on standard SATA III hard-drives. As I understand it, this is the current recommended setup for SSD drives. My static usage on each SSD drive is about 80GB with 120GB free and 32GB unallocated. The only data being written to the drives a

          • I have one dedicated to the OS, one for programs, and I'm storing my data on standard SATA III hard-drives. As I understand it, this is the current recommended setup for SSD drives

            Are you really spending that much time seeking in applications and OS files? That stuff typically gets loaded once on boot and then stays in RAM. It's the data where the fast random read/write times are a big win, and that's the stuff that you're storing on the spinning disks.

            The only data being written to the drives are OS generated files and Temporary Internet Files, which I now plan to move off to one of my data drives

            So, having identified something where an SSD is a speedup (lots of small random reads and writes), you're now going to stop using it for that? At which point, why do you even bother with an SSD?

            • by Christian Smith ( 3497 ) on Wednesday January 16, 2013 @08:27AM (#42602121) Homepage

              I have one dedicated to the OS, one for programs, and I'm storing my data on standard SATA III hard-drives. As I understand it, this is the current recommended setup for SSD drives

Urgh, no, if you have 2 SSDs, at least RAID them. Or put OS+apps on one, data on the other, and use the HDD as a live backup for both.

              Are you really spending that much time seeking in applications and OS files? That stuff typically gets loaded once on boot and then stays in RAM. It's the data where the fast random read/write times are a big win, and that's the stuff that you're storing on the spinning disks.

              OS binaries and libraries are often read in a random IO pattern, as the process jumps from one section of code to another. This is where a low latency drive on OS/application startup helps.

              User data, on the other hand, is usually read/written in a sequential IO pattern, from start to finish.

Loading that Word doc? Word will read and parse the file in one fell swoop. Saving the updated document? Why not just write it out in one go, rather than update the document in place (not sure if this is how Word works, BTW).

Viewing pictures, listening to music or watching videos? All sequential reads, which HDDs are good at.

              The only data being written to the drives are OS generated files and Temporary Internet Files, which I now plan to move off to one of my data drives

              So, having identified something where an SSD is a speedup (lots of small random reads and writes), you're now going to stop using it for that? At which point, why do you even bother with an SSD?

              Personally, I'd have gone for a single bigger SSD, put all my OS/Apps on that one, and use the HDD as backup for the SSD as well as for bulk files (media files etc.)

              In fact, I'd have stuck with the small single 128GB SSD for OS/apps + small data, and bought two HDD instead, a fast 7200 rpm one for live big data and backup of SSD, and the other as a backup HDD (which can be a low speed, low power 5400 rpm drive in an external enclosure.)

              • OS binaries and libraries are often read in a random IO pattern, as the process jumps from one section of code to another. This is where a low latency drive on OS/application startup helps.

                The only runtime linker I'm familiar with will prefault the entire binary and then let it be demand paged out, on the basis that binaries are usually small and mostly-used, that reading the entire binary is as cheap as faulting in a few pages, and if some pages aren't used for a while then they can be paged out at no cost later.

                User data, on the other hand, is usually read/written in a sequential IO pattern, from start to finish.

                Since this is the sort of thing that usually deserves a big fat [citation needed], I'll skip that and just point you straight to a peer-reviewed citation that roundly refutes that id

                • OS binaries and libraries are often read in a random IO pattern, as the process jumps from one section of code to another. This is where a low latency drive on OS/application startup helps.

                  The only runtime linker I'm familiar with will prefault the entire binary and then let it be demand paged out, on the basis that binaries are usually small and mostly-used, that reading the entire binary is as cheap as faulting in a few pages, and if some pages aren't used for a while then they can be paged out at no cost later.

And what about libraries? An app could contain hundreds of MB of code, even if only a small fraction of it is referenced. I'd rather that code not push out my working set of data.

                  User data, on the other hand, is usually read/written in a sequential IO pattern, from start to finish.

                  Since this is the sort of thing that usually deserves a big fat [citation needed], I'll skip that and just point you straight to a peer-reviewed citation that roundly refutes that idea:

                  A File is Not a File: Understanding the I/O Behavior of Apple Desktop Applications, published at ACM Symposium on Operating Systems Principles, 2011.

                  The paper above doesn't entirely refute my assertion:

                  Summary: A substantial number of tasks contain purely sequential accesses. When the definition
                  of a sequential access is loosened such that only 95% of bytes must be consecutive, then even more
                  tasks contain primarily sequential accesses. These “nearly sequential” accesses result from metadata
                  stored at the beginning of complex multimedia files: tasks frequently touch bytes near the beginning
                  of multimedia files before sequentially reading or writing the bulk of the file.

                  This was based on observations of IO patterns from the studied applications in the paper.

Loading that Word doc? Word will read and parse the file in one fell swoop. Saving the updated document? Why not just write it out in one go, rather than update the document in place (not sure if this is how Word works, BTW).

                  See the above paper.

                  Yeah, this was a bad example, as Word docs are highly structured. SQLite files operate similarly.

            • by gmack ( 197796 )

OS and application loading both involve a lot of random reads. I have three computers set up with OS/apps on SSD and data on a spinning hard drive, since I have far more data than OS and applications, and a small SSD was a cheap addition to the machines. In all three cases (two Linux machines, one Windows) the result was a huge improvement in both OS start-up time and application load times. It is enough of an improvement that several apps I used to just leave open now get closed when I'm not using the

          • by ars ( 79600 )

            As I understand it, this is the current recommended setup for SSD drives.

            Actually, I think even better than that is to use an SSD as a caching disk, backed by regular disks.

            Then everything can benefit from the speed boost since the computer is better than you at deciding which files need a speedup. You don't want to move so many high write files off of the SSD that you make it pointless.

            If you are on linux see: http://bcache.evilpiepirate.org/ [evilpiepirate.org]
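As a toy illustration of the caching idea (a made-up in-memory sketch of an LRU read cache, not how bcache actually works at the block layer; all names and sizes are invented):

```python
from collections import OrderedDict

class ToyBlockCache:
    """Toy LRU read cache: a small fast tier ("SSD") in front of a slow backing store ("HDD")."""
    def __init__(self, backing, capacity=2):
        self.backing = backing      # dict: block number -> data (the "HDD")
        self.capacity = capacity    # how many blocks fit on the "SSD"
        self.cache = OrderedDict()  # LRU order: least recently used first

    def read(self, block):
        if block in self.cache:           # cache hit: fast path
            self.cache.move_to_end(block)
            return self.cache[block]
        data = self.backing[block]        # cache miss: slow path
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used block
        return data

hdd = {0: "boot", 1: "apps", 2: "media"}
cache = ToyBlockCache(hdd, capacity=2)
cache.read(0); cache.read(1); cache.read(0); cache.read(2)
print(sorted(cache.cache))  # blocks 0 and 2 remain cached; 1 was evicted
```

The point of the design is the same as for a real SSD cache: the computer, not the user, decides which blocks are hot enough to live on the fast tier.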

          • by blind biker ( 1066130 ) on Wednesday January 16, 2013 @10:18AM (#42603549) Journal

            Until this article, I didn't realize that there was a difference in SSD technology (SLC, MLC, TLC).

            "I am an uninformed buyer and will now dispense my lack of wisdom to you."

I recently built a new system with two Samsung 840 250GB TLC SSD drives (paid about $170 each). I have one dedicated to the OS, one for programs, and I'm storing my data on standard SATA III hard-drives. As I understand it, this is the current recommended setup for SSD drives. My static usage on each SSD drive is about 80GB with 120GB free and 32GB unallocated. The only data being written to the drives are OS generated files and Temporary Internet Files, which I now plan to move off to one of my data drives.

            I'm not worried about my setup.

            "I am actually worried about my setup, as I intend to move frequently written datasets to the mechanical drives."

            Based on the TLC numbers, it should last about 7 to 10 years in this configuration, much longer than the expected lifetime of most consumer grade mechanical drives.

            "I am pulling some numbers right out of my ass. Also, my configuration which consists of mechanical drives should last much longer than mechanical drives."

            How did this bullshit get modded up?

        • Besides, just because "not all drives are TLC", the point still remains that manufacturers are only interested in high margins by selling MLC and now TLC drives, and fuck reliability and longevity.

Except that's the point – the reliability and longevity of these drives is still well above the average of a hard disk (you can reasonably get 7-10 years out of even a TLC 250GB SSD).

Add to that the fact that short-term failure rates of SSDs are much lower than those of HDDs (you're talking 0.5-1.5% per year for non-OCZ SSDs, and about 4-5% per year for HDDs), and that the failure mode of SSDs when the flash does finally wear out is that you simply can't write (but can read), and I'll take a TLC SSD over an
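Taking the midpoints of the failure-rate ranges quoted above (1% per year for SSDs, 4.5% for HDDs) and assuming independent failures, a quick comparison:

```python
# Annual probability of losing a drive, using midpoints of the figures above.
afr_ssd = 0.01    # midpoint of the 0.5-1.5% per year range
afr_hdd = 0.045   # midpoint of the 4-5% per year range

# A RAID1 pair only loses data if both mirrors fail in the same period
# (rebuild windows ignored, so this is an optimistic simplification).
mirrored_ssd = afr_ssd ** 2

print(f"single HDD: {afr_hdd:.3f}, single SSD: {afr_ssd:.3f}, "
      f"mirrored SSDs: {mirrored_ssd:.6f}")
```

Under these assumed rates a mirrored SSD pair is orders of magnitude less likely to lose data in a year than a single HDD.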

          • by DrXym ( 126579 )
SSDs haven't been around long enough to judge their reliability or longevity. I bought a bunch of LED bulbs for my house which claimed a 30-year lifespan, and about 6 or 7 have already failed in the course of 3 years.
        • What is TLC?

      • This is just uninformed. Not all drives use TLC

And this is a straw man. The OP never said that "all drives are TLC" - it is you who constructed that straw man in order to imply that the OP is "uninformed", whereas the OP is actually perfectly well informed, and shared valuable info - namely, that TLC drives have been introduced to the market.

The problem with SSDs is not that they fail. It's that they fail completely without warning (or at least mine did), leaving no chance to do emergency backups or order a replacement, and no way of running repair utilities to reconstruct some of the files.

        I've had HDDs die but never complete data loss out of the blue like with the OCZ Vertex 2.
      • Anyway, the case has always been that if you're not sure about the reliability of your disks: don't just use one!

        so I'm supposed to buy two diamond-priced SSDs?

        And if you're on a laptop that can only hold one internal disk and you still feel unsafe with just one disk: why aren't you using some sort of network based or "cloud" backed storage to ensure you have copies of your most valuable data?

        Laptops are as popular as desktops now, maybe more, so you should try to realize that you're potentially addressing a majority of users with this question. And many users have only one computer, and if the main disk goes out, they won't be able to use it whether they have backups or not. Well, unless they're running Linux or similar and have made a bootable backup. And people SHOULD do this, but that doesn't change the fact that some new SSDs are less reliable

    • Re: (Score:3, Interesting)

      by Anonymous Coward

At 40 GB written/read per day, the current estimate for a TLC drive's life expectancy is 7 years. That is, much more than the current median of my past hard drives.
Also, I can expect that in 5 years the equivalent of a 256GB SSD will cost next to nothing, so replacing it wouldn't be *that* hard...

    • they also started using 3 bit per cell storage, effectively making their lives 1/3 as long while decreasing speed, while still being expensive as jewel encrusted shit

      Hey, 70TB write endurance ought to be enough for anyone!

      • Hey, 70TB write endurance ought to be enough for anyone!

Well, if it isn't, then a TLC SSD isn't for you today. Fortunately for you, there are other options involving HDDs and RAID setups.
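For scale, the 70 TB figure converts to years of service at a given write rate; the 10 and 40 GB/day rates below are just illustrative workloads, not anyone's measured usage:

```python
# Years of service from a 70 TB write-endurance budget.
endurance_gb = 70_000

for gb_per_day in (10, 40):
    years = endurance_gb / (gb_per_day * 365)
    print(f"{gb_per_day} GB/day -> {years:.1f} years")
```

Even at a heavy 40 GB/day, the budget lasts nearly five years; at a more typical desktop rate it outlives most systems.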

    • Do you mean Samsung? In 2012 they were the only manufacturer using TLC NAND and in only one line of drives (840). Don't let me steal your thunder though...

And incidentally, the 840 has been shown to do over 400 TB of writes (http://www.xtremesystems.org/forums/showthread.php?271063-SSD-Write-Endurance-25nm-Vs-34nm&p=5163560&viewfull=1#post5163560), which is probably fine for most desktop uses...

    • by janisozaur ( 1465907 ) on Wednesday January 16, 2013 @04:16AM (#42601177)
There's an interesting article on how TLC drives compare to others on AnandTech: http://www.anandtech.com/show/6459/samsung-ssd-840-testing-the-endurance-of-tlc-nand [anandtech.com]
    • give me a modern SLC quarter gig drive for 150 bucks then I might start looking, otherwise I am not looking to replace my expensive drive every 2-7 years while counting every write, I have 3.5inch drives as early as 1986 damit, I expect more for the investment.

      And the seek performance on your 1986 hard drive sucks. That's all SSD is really good for - low-seek optimizations. Boot drives, caches in front of spinning rust. OK, tiny ones for low-power embedded work, but even at that the low reliability make

What are you talking about? Every time you write to flash memory, you have to erase and rewrite the entire 4096-bit block. So 1, 3, whatever. It doesn't matter. Citation?
    • Comment removed based on user account deletion
  • Prediction (Score:4, Interesting)

    by backslashdot ( 95548 ) on Wednesday January 16, 2013 @01:33AM (#42600579)

Here's what's gonna happen... A scandal will break about price fixing. The govt will get involved, a lawsuit will be filed, a fine will be paid. Prices will then stagnate instead of dropping.

    That's the normal pattern.

    • Re:Prediction (Score:5, Insightful)

      by jonnat ( 1168035 ) on Wednesday January 16, 2013 @01:58AM (#42600697)

      You don't seem to know what price fixing is. Prices dropping steeply as more competitors enter a market are indicative of a price war, effectively the opposite of price fixing.

      But don't let this minor detail interfere with your rant about the government.

      • I think backslashdot's talking about the sudden rise after the fall. These days you can very safely read "back off its strategy of aggressively discounting drives to gain market share, allowing its rivals to raise prices, as well" as "work with its rivals to keep prices high so they don't have to worry about those pesky 'competition' or 'can't pay their CEOs bonuses this quarter' things".

        Whether by government fine ("unfortunately, we have to pass the costs of onerous government regulation on to the consume


        • I think backslashdot's talking about the sudden rise after the fall.

          agreed, the gp misunderstood his argument (but didn't let that get in the way of ranting about ranting about government)

          the days of hope for reasonable SSD prices will be (if not are) over.

          In 2020, a 2TB SDD will be under $60 USD(2012).

They're all competing twice as hard as a year ago to bring out the fastest, biggest, cheapest drive, and suddenly an OCZ Vertex costs $139 instead of the $79.99 I paid for the 4 past builds I used them in. That's the definition of price fixing. Companies are killing each other over price and then suddenly they all stop for no apparent reason and raise their prices.
        • by tlhIngan ( 30335 )

They're all competing twice as hard as a year ago to bring out the fastest, biggest, cheapest drive, and suddenly an OCZ Vertex costs $139 instead of the $79.99 I paid for the 4 past builds I used them in. That's the definition of price fixing. Companies are killing each other over price and then suddenly they all stop for no apparent reason and raise their prices.

          That's not price fixing, that's just market economics.

          Your competitor decides they want marketshare and basically dumps product on the marke

          • by gl4ss ( 559668 )

evidently they would have been better off keeping stock instead of starting a price war.

  • by Anonymous Coward

At least magnetic storage has recovered to pre-Thailand prices, and apparently reliability as well.

  • Selection Bias? (Score:5, Informative)

    by Anonymous Coward on Wednesday January 16, 2013 @01:40AM (#42600611)

    Did anyone read their methodology? They only looked at Amazon and Newegg. And only in the US. -1 Misleading.

    • Re:Selection Bias? (Score:5, Interesting)

      by GeekWithAKnife ( 2717871 ) on Wednesday January 16, 2013 @04:08AM (#42601153)

      Why is this a surprise?
      This is how things roll on Slashdot. It's up to us to pick the article apart, dissect the argument, question the premise and finally formulate a succinct rebuttal.

      \o/
      • by swilly ( 24960 )

        All these years, and I never realized that Slashdot is just a crowdsourcing tool for editing and peer reviewing news.

        I feel used.

      • This is how things roll on Slashdot. It's up to us to ignore the article completely and start a massive flamewar on a related but hotly contested topic like evolution or global warming or [insert big tech company here].

        FTFY.

    • by CAIMLAS ( 41445 )

      How is that misleading?

      Most drives are going to be purchased from one of those two sites, or through somewhere like Fry's. Fry's pricing is, effectively, identical to the lower of the two, within a couple dollars. Other than that, you're looking at grossly inflated locales, like Best Buy.

  • by SuperBanana ( 662181 ) on Wednesday January 16, 2013 @01:58AM (#42600693)

    19- and 20-nm NAND that should be cheaper to produce, those drives didn't debut at lower prices.

    I remember reading at one point that the drives with smaller processes sizes had higher failure rates. Has that been addressed, or are drive makers over-provisioning more to compensate?

19nm/20nm has proven to be no worse than the existing 2Xnm processes as far as durability is concerned. So you're still looking at 3,000-5,000 program/erase cycles before NAND cells start giving out.
