SanDisk Announces 4TB SSD, Plans For 8TB Next Year

Lucas123 (935744) writes "SanDisk has announced what it's calling the world's highest-capacity 2.5-in. SAS SSD, the 4TB Optimus MAX line. The flash drive uses eMLC (enterprise multi-level cell) NAND built with 19nm process technology. The company said it plans on doubling the capacity of its SAS SSDs every one to two years and expects to release an 8TB model next year, dwarfing anything hard disk drives can offer over the same period. The Optimus MAX SAS SSD is capable of up to 400 MBps sequential reads and writes and up to 75,000 random I/Os per second (IOPS) for both reads and writes, the company said."

  • Oh goody (Score:5, Funny)

    by Anonymous Coward on Saturday May 03, 2014 @01:06AM (#46906017)

    Now you can pay $4000 for a drive that won't last 2 years! Yeah.. sign me up.

    • by epyT-R ( 613989 )

      only 4k? probably more like 20..

    • Re: Oh goody (Score:4, Informative)

      by Anonymous Coward on Saturday May 03, 2014 @01:13AM (#46906049)

      My primary OS is running on an SSD going on 4 years old now... Out of 5 that I have, only one has had issues, which was actually its controller catastrophically failing and not a NAND issue - could have just as easily happened to a HDD.

      • Re: (Score:2, Insightful)

        by epyT-R ( 613989 )

        I went through three different Intel SSDs within a year before I gave up and went back to RAIDed spinning disks. They're fine for laptop use, and there's a place for them in data centers as caching drives, but they still suck for heavy workstation loads.

        • Re: Oh goody (Score:5, Interesting)

          by Bryan Ischo ( 893 ) * on Saturday May 03, 2014 @01:40AM (#46906155) Homepage

          False. Your one anecdotal story does not negate the collective wisdom of the entire computer industry.

          As far as anecdotal evidence goes, here's some more worthless info: I've owned 8 SSD drives going all the way back to 2009 and not a single one has ever failed. They're all currently in use and still going strong. I have:

          - 32 GB Mtron PATA SLC drive from 2009
          - 64 GB Kingston from 2010 (crappy JMicron controller but it was cheap)
          - 80 GB Intel G2 from 2010
          - 80 GB Intel G3 from 2011
          - 2x 80 GB Intel 320 from 2011
          - 2x 240 GB Intel 520 in my work computer, it gets pretty heavily used, from 2012
          - Whatever is in my Macbook Pro from 2012
          - Just purchased a 250GB Samsung 840 Evo

          Not a single failure on any of them, even the old 32 GB Mtron and the piece of crap JMicron controller Kingston.

          But this evidence doesn't really matter; it's the broad experience of the industry as a whole that matters, and I assure you, the industry has already decided that SSDs are ready for prime time.

          For a recent example, linode.com, my data center host for like 10 years now, just switched over to all SSDs in all of their systems.

          • Re: Oh goody (Score:5, Informative)

            by shitzu ( 931108 ) on Saturday May 03, 2014 @02:52AM (#46906329)

            We have ~100 SSDs installed in our company, in workstations, laptops and servers. Over five years only 3 of them died, all Kingstons. Samsung and Intel have been spotless. All of those that died had the following symptoms: if you accessed a certain sector, the drive just dropped off - as if you switched off its power. The drive did not remap the sector, as it always dropped off before it could do so. Otherwise the drive remained functional. Got them replaced under warranty.

        • I'm using SSDs in a compile farm that builds software 24/7 and no drive has ever failed.

        • I have 2 SSDs in a ZFS mirror that more or less constantly rebuilds the FreeBSD ports tree. The reasons for doing so are silly and not important to this discussion. It may spend 2 or 3 hours a day idle; the rest of the time, it's building ports on those SSDs, with sync=yes (meaning ALL writes are sync, no write caching, so I can see the log leading up to a kernel panic I'm searching for). It's been doing this for over a year already.

          It has never thrown so much as a checksum error.

          So my anecdotal evidence be

    • Re:Oh goody (Score:5, Insightful)

      by beelsebob ( 529313 ) on Saturday May 03, 2014 @01:20AM (#46906083)

      Assuming you write an average of 100GB a day to this drive (which is... an enormous overestimate for anything except a video editor's scratch disk), that's 40,000 days before you write over every cell on the disk 1000 times. That's roughly 100 years before it reaches its write limit. So no, SSDs are far from the 2-year proposition that people who bought first-gen 16/32GB drives make them out to be.
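
      A quick back-of-the-envelope version of that arithmetic, as a Python sketch (the 100GB/day and 1000-cycle figures are the assumptions above, not SanDisk specs):

        capacity_gb = 4000            # 4TB drive
        daily_writes_gb = 100         # assumed host writes per day
        pe_cycles = 1000              # assumed program/erase cycles per cell

        days_per_full_write = capacity_gb / daily_writes_gb   # 40 days to touch every cell once
        lifetime_days = days_per_full_write * pe_cycles       # 40,000 days
        print(lifetime_days / 365)                            # ~110 years, i.e. the "100 years" above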

      • If you only write infrequently (say, for image editing) and then use it as backup storage, how many years would the SSD retain its data?

        • Re:Oh goody (Score:5, Informative)

          by Tapewolf ( 1639955 ) on Saturday May 03, 2014 @07:56AM (#46907043)

          If you only write infrequently (say, for image editing) and then use it as backup storage, how many years would the SSD retain its data?

          If the drive is powered down, I wouldn't bet on it lasting the year. Intel only seem to guarantee up to 3 months without power for their drives: http://www.intel.co.uk/content... [intel.co.uk]

          Note also that retention is said to decrease as P/E cycles are used up. Personally, I think they make great system drives, but I don't use them for anything precious.

      • by dgatwood ( 11270 )

        Of course, in the worst case, with a suitable synthetic workload in which every 512-byte block write causes a 512 KB flash page (again, worst case) to get erased and rewritten, that could translate to only a 40-day lifespan. Mind you, that worst-case scenario isn't likely to occur in the real world, but....
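
        Roughly how that worst case pencils out, as a sketch (it reuses the 100GB/day and 1000-cycle assumptions from the post above, with the pathological 1024x amplification factor):

          amplification = (512 * 1024) / 512      # 512KB erased per 512B written = 1024x
          best_case_days = 40000                  # lifetime from the estimate above
          print(best_case_days / amplification)   # ~39 days in this synthetic worst case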

        • How is that the worst case? Block erasure is only necessary to free up space, not to make a write.

          • Re: (Score:3, Informative)

            by Mr Z ( 6791 )

            If you know something about the drive's sector migration policies, in theory you could construct a worst-case amplification attack against a given drive. Leverage that against the drive's wear leveling policies. But, that seems rather unlikely.

            Flash pages retain their data until they're erased. You can write at the byte level, but you must erase at the full page level. You can't rewrite a byte until you erase the page that contains it. That's the heart of the attack: Rewriting sectors with new data.
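
            A toy model of that constraint, purely illustrative (real NAND programs whole pages and erases whole multi-page blocks, and the drive's FTL hides all of this from the host):

              # One erase block of 128 pages; a page can only be programmed while empty.
              block = [None] * 128
              erase_count = 0

              def program(page_no, data):
                  global erase_count
                  if block[page_no] is not None:
                      # No overwriting in place: salvage the live pages, erase the
                      # whole block, then write the surviving data back.
                      live = [(i, d) for i, d in enumerate(block)
                              if d is not None and i != page_no]
                      block[:] = [None] * len(block)
                      erase_count += 1
                      for i, d in live:
                          block[i] = d
                  block[page_no] = data

              program(0, b"x")
              program(0, b"y")         # rewriting a single page forced a block erase
              print(erase_count)       # 1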

            • So what if you want to prolong the lifetime of an SSD, and want to use it as a long-term backup storage medium that you can bury in your backyard and only dig out once a decade to update? Also, a decade later writable SSDs may no longer be available, so as you have no clue how many writes and rewrites you will do in the future, what's an efficient strategy, an optimum %fill on these drives? As the future is unknown, you cannot predict this accurately, but it's certainly not 99.9% full, and not 0.01% f
              • Re:Oh goody (Score:4, Interesting)

                by jon3k ( 691256 ) on Saturday May 03, 2014 @09:48AM (#46907525)
                You do not want to use SSDs for long term storage: http://www.intel.co.uk/content... [intel.co.uk]

                "In JESD218, SSD endurance for data center applications is specified as the total amount of host data that can be written to an SSD , guaranteeing no greater than a specified error rate (1E - 16) and data retention of no less than three months at 40 C when the SSD is powered off."

            • No matter what you do, you cannot burn through more than the maximum (ideal conditions) write speed, and the strategies you are talking about would ultimately be far from maximum.

              At 400MB/sec max erase throughput and 250 erase cycles per block (conservative?), it would still take 30 days to wear down this 4TB drive.

              Write amplification is a red herring when you are calculating time to failure, because write amplification doesn't magically give the SSD more erase ability. These things aren't constructed to be
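
              Worked through with the parent's numbers (a sketch; the 250-cycle figure is the poster's deliberately conservative assumption):

                drive_mb = 4000 * 1000                # 4TB expressed in MB
                erase_cycles = 250                    # assumed erase cycles per block
                max_mb_per_sec = 400                  # the drive's sequential ceiling
                seconds = drive_mb * erase_cycles / max_mb_per_sec
                print(seconds / 86400)                # ~29 days of flat-out erasing
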
            • Thus, the basic idea [of a write amplification exploit] goes something like this: Fill the disk to 99.9% full.

              Your attack has already failed. A 4 TB drive has 4 TiB (4*1024^4), or 4.4 TB of physical memory, but only 4 TB (4*1000^4) is partitioned. The rest is overprovisioned to prevent precisely the attack you described. You're not going to get it more than 90.95% full. And in practice, a lot of sectors in a file system will contain repeated bytes that the controller can easily compress out, such as runs of zeroes from the end of a file to the end of its last cluster or runs of spaces in indented source code.
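
              The percentage quoted comes straight from the TB-vs-TiB gap, as a sketch (whether this particular drive really carries a full 4 TiB of raw NAND is an assumption; actual overprovisioning varies by model):

                physical_bytes = 4 * 1024**4            # assumed raw NAND: 4 TiB
                logical_bytes = 4 * 1000**4             # 4 TB exposed to the host
                print(logical_bytes / physical_bytes)   # ~0.9095, i.e. at most ~91% fillable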

        • Like someone else already said, that's what the wear levelling algorithms in the controller are for.

      • Assuming you write an average of 100GB a day to this drive (which is... an enormous overestimate for anything except a video editor's scratch disk),

        Doesn't Windows use a swap file, no matter how much memory you have? That could conceivably see any amount of traffic per day.

        • Well, yes and no. Yes, by default it enables a swap file, and you can turn it off. Given enough memory, however, so that disk cache never puts pressure on the rest of the VM system, it will not actually use the swap file. This is true for every modern OS, whether Windows, Linux or *BSD, all of which favor a larger disk cache instead of keeping unused blocks in memory.

        • Just because it uses a swap file doesn't mean it ever writes to it. A lot of operating systems have historically had the policy that every page that is allocated to a process must have some backing store for swapping it to allocated at the same time. If you have enough RAM, however, this most likely won't ever be touched. If you're actually writing out 100GB/day to swap then you should probably consider buying some more RAM...
          • Actually, it's likely to be written... very occasionally. It's likely that when the OS has time to do something other than what you asked it to do, it'll start writing out dirty memory to swap, just because that means that if you do need to swap at a later date, you don't need to page out.

        • Yes it does, so that it can page things out before it needs to page things in. But no, that's not really a conceivable write rate. The average home user (even with Windows' swap file involved) will be closer to 5GB a day; even developers hammering a workstation will only be around 20GB a day in the worst case.

      • by jon3k ( 691256 )
        Depends on the model. The SanDisk Optimus Extreme supports up to 45 (yes, forty-five) full drive writes per day.
    • Going 4 years on my Intel SSD. I am replacing it, but only to gain capacity.
      • Re: (Score:3, Funny)

        by Anonymous Coward
        Just turn on DoubleSpace ... might buy you a couple more years.
        • LOL, does that still exist?

          Guess not. Ended after Windows 98. I remember using it fondly, though my dad got upset when I told him I turned it on.

          • NTFS actually supports compressed folders. The contents are compressed transparently, so applications can work with the files easily.
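
            For example, a Windows-only Python sketch (the folder path is made up; compact.exe is the stock NTFS compression tool):

              import os, stat, subprocess

              folder = r"C:\old_builds"   # hypothetical folder on an NTFS volume

              # Compress the folder and everything under it; applications keep
              # reading and writing the files normally afterwards.
              subprocess.run(["compact", "/c", "/s:" + folder], check=True)

              attrs = os.stat(folder).st_file_attributes
              print(bool(attrs & stat.FILE_ATTRIBUTE_COMPRESSED))
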
    • Now you can pay $4000 for a drive that won't last 2 years! Yeah.. sign me up.

      Huh? What are you blathering on about, AC? From TFA:

      In all, SanDisk announced four new data center-class SSDs. As the drives are enterprise-class, which are typically sold through third parties, SanDisk did not announce pricing with the new drives.[Emphasis added]

      • They didn't announce pricing because it's not for us mortals, but they did announce the technology to make us and the competitors jealous. ;)
      • by fnj ( 64210 )

        Easy there pardner. You can type "sandisk optimus max" into google and up comes an ad selling a Sandisk Optimus Eco 1.6 TB for $3,417.25.

        So while it's true AFAIK you can't find pricing info on the Optimus Max, you can make book that it's gonna be on the high side of that figure. IMHO $4000 is a low estimate.

    • by aussersterne ( 212916 ) on Saturday May 03, 2014 @01:51AM (#46906199) Homepage

      Anecdotal and small sample size caveats aside, I've had 4 (of 15) mechanical drives fail in my small business over the last two years and 0 (of 8) SSDs over the same time period fail on me.

      The oldest mechanical drive that failed was around 2 years old. The oldest SSD currently in service is over 4 years old.

      More to the point, the SSDs are all in laptops, getting jostled, bumped around, used at odd angles, and subject to routine temperature fluctuations. The mechanical drives were all case-mounted, stationary, and with adequate cooling.

      This isn't enough to base an industry report on, but certainly my experience doesn't bear out the common idea that SSDs are catastrophically unreliable in comparison to mechanical drives.

      • I have had the opposite experience: 6 SSDs, of which 4 have failed; the 2 still alive are less than 12 months old. 16 physical 2 and 3TB disks which are currently all running. Both our experiences are anecdotal, though I do believe the current failure rates on SSDs are still significantly higher than on physical disks (at least they were in the last report I read on them early last year).
        • Re: (Score:2, Troll)

          by Bryan Ischo ( 893 ) *

          Let me guess ... you bought OCZ drives because they were cheap, and even though they kept failing, you kept buying more OCZ drives, and they failed too?

          It's a common story. What I don't understand is, why *anyone* buys an OCZ drive after the first one fails.

          • yes...

            Buying an SSD only from SanDisk, Samsung, or Intel is a no-brainer. These are the companies that actually make flash chips.

            OCZ and the various re-branders begin at a competitive disadvantage and then make things worse in their endless effort to undercut each other.
          • by Hadlock ( 143607 )

            I don't know why you got modded down for pointing out that OCZ drives are utter trash, they've consistently outranked all of their competitors combined in number of returns since they came out. They were recently sold to another brand, but the damage to the brand has already been done. It's been known for years that OCZ = ticking time bomb. Nobody has complaints about quality drives like Intel and Samsung.

        • and there's workload and power-on hours and all of that stuff to consider, too. So of course it's not scientific by any stretch of the imagination.

          But we've been very happy with our Intel SSDs and will continue to buy them.

      • I've had 4 (of 15) mechanical drives fail in my small business over the last two years

        Let me guess... Seagate? Let me

        • Ack, I swear that typo wasn't there when I clicked "submit!"
        • Two Seagate 2TB, upon which we switched loyalties, then two WD Green 2TB.

          The Seagates both had spindle/motor problems of some kind; they didn't come back up one day after a shutdown for a hardware upgrade. The WD Green 2TB both developed data integrity issues while spinning and ultimately suffered SMART-reported failures and lost data (we had backups). One was still partially readable, the other couldn't be mounted at all.

          Is there some kind of curse surrounding 2TB drives?

    • Now you can pay $4000 for a drive that won't last 2 years! Yeah.. sign me up.

      With capacity like this they could put in a mirroring (RAID 1-style) option which halves the capacity but increases the reliability by orders of magnitude. If corruption is detected you can grab the shadow copy, remap it somewhere else, and mark the block as bad. The chances of two blocks failing at the exact same time are insignificant.
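
      A crude Python model of that idea, purely as a sketch (a real drive would do this inside the controller, not in host software):

        import zlib

        class MirroredStore:
            """Keep two copies of every block; on a checksum mismatch,
            serve the good copy and heal the bad one from it."""
            def __init__(self, nblocks):
                self.copy_a = [None] * nblocks
                self.copy_b = [None] * nblocks

            def write(self, n, data):
                rec = (zlib.crc32(data), data)
                self.copy_a[n] = rec
                self.copy_b[n] = rec

            def read(self, n):
                for primary, backup in ((self.copy_a, self.copy_b),
                                        (self.copy_b, self.copy_a)):
                    crc, data = primary[n]
                    if zlib.crc32(data) == crc:
                        return data
                    primary[n] = backup[n]          # heal from the mirror
                raise IOError("both copies corrupt")

        s = MirroredStore(16)
        s.write(3, b"payload")
        s.copy_a[3] = (0, b"garbage")               # simulate silent corruption
        print(s.read(3))                            # b'payload', served from the mirror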

  • SSD vendors should be rushing to get NVMe out the door, rather than wasting time on capacity. Flash does not and simply never will scale the same way capacity in recording media (including that mounted in spinning disks) does...

    • Funny, it seems to be doing just that. It just started way behind, so it will take a while to catch up. That, and the abandonment of density increases on spinning media.
    • and what happens when someone figures out how to make flash memory with infinite writes?
      If someone can figure out how to jump a charge across the insulating layer without damaging it, flash memory will never wear out.

      • If someone can figure out how to jump a charge across the insulating layer without damaging it, flash memory will never wear out.

        Limited lifespan [wikipedia.org] is good for the Powers That Be, so even if such a technology exists, it's not for consumers like you. Your role is to run the economic Red Queen's Race [wikipedia.org] in a desperate and ultimately futile attempt to keep your position in the hierarchy, all for the glory of the 1% and their masters.

  • by Billly Gates ( 198444 ) on Saturday May 03, 2014 @01:23AM (#46906097) Journal

    It is so archaic in this day and age of miniaturization to have something mechanical bottlenecking the whole computer. It just doesn't belong in the 21st century.

    Those who have used them will agree with me. It is like night and day, and there is no way in hell you could pay me to do things like run several domain VMs on a mid-20th-century spinning mechanical disk. No more 15-minute waits to start up and shut down all 7 VMs at the same time.

    Not even a 100-disk array can match the IOPS (interrupts and operations per second) that a single SSD can provide. If the price goes down, in 5 years from now only Walmart specials will have any mechanical disks.

    Like tape drives and paper punch cards, I am sure it will live on somewhere in a storage-oriented server IDF closet or something. But for real work it is SSD all the way.

    • IOPS stands for I/O operations per second. Interrupts have nothing to do with it.

  • How fast can data be pumped through the controller interface?

    • On my single SATA3 Samsung Pro with RAPID mode I get about 600 megs a second.

      But that is not the real speed bump. My 270-meg-a-second SanDisk doesn't boot Windows any faster?! Why? It is about latency and IOPS. I can do heavy, heavy simultaneous things like run 5 virtual machines for my domain in my virtual network with VMware Workstation in about 1.5 minutes. This took almost 20 minutes to start and shut down before!

      A 100 meg disk raid will not be as fast as single drive

      • In my laptop, I have an SSD. Upgrading from the HDD made it perform about as well as a new laptop would, and cost significantly less. I've been able to buy 2+ years of time on my old laptop with an upgrade at significantly less cost.

        So the numbers make sense, here!

        We host a heavily database-driven app. Use of an SSD reduces latency by at *least* 95% in our testing. It's a no-brainer. Even if we replaced the SSDs every single year, we'd still come out way ahead. SSDs are where it's at for performance!

    • by smash ( 1351 )
      Apple's PCIe SSD machines are getting 900 MB per second. SSDs are already faster than SATA, but for all but niche applications it's actually IOPS that you're chasing, and the difference between SSD and spinning disk there is absolutely massive.

      For a single user doing "stuff" though, a short-stroked hard drive is about 1/4 the price and well fast enough. And yes, I had a work machine (laptop) with an SSD that I ditched and went back to a Momentus XT hybrid due to lack of capacity.

      • For a single user doing "stuff" though, a short-stroked hard drive is about 1/4 the price and well fast enough. And yes, I had a work machine (laptop) with an SSD that I ditched and went back to a Momentus XT hybrid due to lack of capacity.

        You keep saying that, but that doesn't make it magically true.

        So you had a laptop with an SSD too small for your working set and that makes SSDs bad? No. It makes you or whoever provisioned the machine incompetent. More likely you were using your work machine for shit you shouldn't have, so you were all pissy that your working set was larger than your storage space.

        I'd be willing to bet a month's pay that my 2009 MacBook Pro with SSD will outperform whatever brand new laptop you want to buy with spinning

  • by Nagilum23 ( 656991 ) on Saturday May 03, 2014 @02:06AM (#46906233) Homepage
    Seagate already announced 8-10TB disks for next year: http://www.bit-tech.net/news/h... [bit-tech.net].
    Now if SanDisk can deliver 16TB SSDs in 2016 then they might indeed be ahead of the hard disks, but not in 2015.
  • by swb ( 14022 ) on Saturday May 03, 2014 @08:08AM (#46907105)

    Why do SSD makers only make 2.5" SSDs? It seems like a lot of the capacity limitation is self-enforced by constraining themselves to laptop-sized drives.

    Why can't they sell "yesterday's" flash density at larger storage capacities in the 3.5" disk form factor? For a lot of use cases, the 3.5" form factor isn't an issue. More, cheaper flash would enable greater capacities at lower prices.

    The same thing is true for hybrid drives -- the 2.5" ones I've used have barely enough flash to make acceleration happen, a 3.5" case with a 2.5" platter and 120GB flash would be able to keep a lot more blocks in flash and reserve meaningful amounts for write caching to flash.

    • by Rich0 ( 548339 )

      Why do SSD makers only make 2.5" SSDs? It seems like a lot of the capacity limitation is self-enforced by constraining themselves to laptop-sized drives.

      Why can't they sell "yesterday's" flash density at larger storage capacities in the 3.5" disk form factor? For a lot of use cases, the 3.5" form factor isn't an issue. More, cheaper flash would enable greater capacities at lower prices.

      The same thing is true for hybrid drives -- the 2.5" ones I've used have barely enough flash to make acceleration happen, a 3.5" case with a 2.5" platter and 120GB flash would be able to keep a lot more blocks in flash and reserve meaningful amounts for write caching to flash.

      I doubt anybody really wants these big SSDs anyway. I mean, who buys an SSD when they need to store 1TB of data? I could see it for certain niches, such as for a cache (even an SSD is cheaper than RAM, and is of course persistent as well). Otherwise anybody storing a lot of data uses an SSD for the OS, and an HD for storage, and you don't need a big SSD for the OS. Still, I wouldn't mind a 3.5" drive just for the sake of it using the same mount as my other drives.

      • by Amouth ( 879122 )

        In my work case the need for the larger drive is that you only have one drive, not the option for two. So when you're traveling you need to take a lot with you and you want it to be fast. So larger SSDs are welcome as long as the price remains in the range of sanity.

    • by jon3k ( 691256 ) on Saturday May 03, 2014 @10:08AM (#46907651)
      It's not constrained by size. It's the cost of NAND flash that's the limiting factor. And no one is going to manufacture last generation's NAND; it doesn't make any business sense. Ask Intel why they don't sell last year's CPUs at cut-rate prices. Same reason.
    • by Amouth ( 879122 ) on Saturday May 03, 2014 @10:10AM (#46907663)

      There are a few reasons they don't make 3.5" drives:

      1: Physical size isn't an issue; for the sizes they release that people are willing to pay for, it all fits nicely in 2.5".
      2: 2.5" drives work in more devices, including desktops where 3.5" drives live. If nothing is forcing 3.5" usage, then it would be bad for them to artificially handicap themselves.

      Now, for your comment on larger physical drives being cheaper: flash does not work the way that normal drives do.

      With normal platter drives, areal density directly impacts pricing, as it requires the platter surface to be smoother, the film to be more evenly distributed, the head to be more sensitive, and the actuator to be more precise, all higher-precision requirements that drive up costs by increasing failure rates and manufacturing defects that cause product failures.

      Now, in the flash world, they use the same silicon lithography that they use for making all other chips. There are two costs involved here:

      1: The one-time sunk cost of the lithography tech (22nm, 19nm, 14nm...). This cost is spread across everything that goes through it, and in reality it evens out to no cost increase for the final product, because the more you spend, the smaller the features and the more end product you can get out per unit of raw material put in.
      2: The cost of the raw material going in. No matter what level of lithography you are using, the raw material is nearly exactly the same (some require doping, but costs are on par with each other). So in fact the larger lithographic processes become more expensive per unit of product once there is newer tech on the market.

      Now, please note that in the CPU world, where you have complex logic sets and designs, there is an added cost for newer lithography as it adds to the design costs. But for flash there is nearly zero impact from this, as it is such a simple circuit design.
