
Samsung Demos PCIe NVMe SSD At 5.6 GB Per Second, 1 Million IOPS (hothardware.com) 88

MojoKid writes: Samsung showed off its latest SSD wares at Dell World 2015 with two storage products that are sure to impress data center folks. Up and running on display was the PM1725, a half-height, half-length (HHHL) NVMe SSD that will be one of the fastest on the market when it ships later this year. It sports transfer speeds of 5,500 MB/s for sequential reads and 1,800 MB/s for sequential writes. Samsung had the drive running in a server with Iometer fired up, pushing in excess of 5.6 GB/s. The PM1725 is also rated for up to 1,000,000 random read IOPS and 120,000 random write IOPS. The top-of-the-line 6.4 TB SSD is rated to handle 32 TB of writes per day with a 5-year warranty.
  • by smallfries ( 601545 ) on Thursday October 22, 2015 @04:35AM (#50779585) Homepage

    How many gigadollars?

    • Your name better be "Dell" or "Apple" or "HP" if you want to buy one of these things.
      Samsung has been parading around their OEM-only SSDs for about 3 years, and rarely are consumers able to get their hands on them.

  • Rating vs. Warranty (Score:3, Interesting)

    by Anonymous Coward on Thursday October 22, 2015 @05:14AM (#50779681)
    Samsung's PRO line offers 5 years or a total-bytes-written limit, whichever comes first, as part of its warranty package:
    http://www.samsung.com/global/... [samsung.com]

    While this drive "is rated to handle 32TB of writes, every day for five years without failure", I want to see a warranty to go with that. That's ~58,400 TB total, roughly 200 times the 300 TBW cap on their best warranty offering right now.
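    A quick back-of-the-envelope check of those numbers, as a minimal Python sketch (the 32 TB/day and 5-year figures come from the summary; the 300 TBW cap is the parent's figure for Samsung's best current consumer warranty):

      # Rated endurance over the warranty period vs. a consumer TBW warranty.
      WRITES_PER_DAY_TB = 32        # rated workload from the summary, in TB/day
      WARRANTY_YEARS = 5
      CONSUMER_WARRANTY_TBW = 300   # best consumer TBW cap quoted above

      total_tbw = WRITES_PER_DAY_TB * 365 * WARRANTY_YEARS
      print(f"Rated writes over {WARRANTY_YEARS} years: {total_tbw:,} TB")   # 58,400 TB
      print(f"Ratio to a {CONSUMER_WARRANTY_TBW} TBW warranty: "
            f"{total_tbw / CONSUMER_WARRANTY_TBW:.0f}x")                     # ~195x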
    • by swb ( 14022 ) on Thursday October 22, 2015 @06:46AM (#50779941)

      Given the endurance tests people have performed against "consumer" SSDs, it sure seems like the expected endurance exceeds the warranty by a lot.

      This guy:

      http://blog.innovaengineering.... [innovaengineering.co.uk] ...has 7 PB written to an 850 Pro and it's still going (last blog update was more than a month ago).

      I'd be awfully curious to see what the actual durability of an 850 Pro would be in a real production SAN. My suspicion is that the better-than-rated endurance, coupled with the low replacement cost, might make it worthwhile when you consider the staggering performance you would get.

      There might even be gimmicks you could apply on a per-disk basis to improve durability, such as over-provisioning each drive by 25% (leaving a quarter of its capacity unallocated) so the controller can wear-level across more spare area.
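      A simplistic sketch of that over-provisioning idea, under the hypothetical assumption that the controller wear-levels evenly across all raw NAND and that extra spare area mostly shows up as a lower write-amplification factor; the WAF values below are illustrative guesses, not measurements:

        # Toy model: full-drive write passes the NAND absorbs for a given host workload.
        RAW_CAPACITY_TB = 1.0      # hypothetical 1 TB drive
        HOST_WRITES_TB = 300       # e.g. a 300 TBW workload

        def nand_passes(host_writes_tb, raw_capacity_tb, waf):
            """Write passes over the raw NAND, given a write-amplification factor (WAF)."""
            return host_writes_tb * waf / raw_capacity_tb

        print(nand_passes(HOST_WRITES_TB, RAW_CAPACITY_TB, waf=3.0))   # ~900 passes, little spare area (assumed WAF)
        print(nand_passes(HOST_WRITES_TB, RAW_CAPACITY_TB, waf=1.5))   # ~450 passes, 25% unallocated (assumed WAF)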

      • by Anonymous Coward

        I'm testing this theory right now. I'm using 850 Pros in RAID 10 arrays. We'll see how it goes. My math says I should be fine for 7 years, but the anxiety after only 3 months is palpable.

        The issue isn't that a $500 drive might fail. The issue is that if the drives start failing, I have to chuck and replace $23,000 worth of drives.

        • Re: (Score:2, Interesting)

          by Anonymous Coward

          I've been running 840 Pros in a datacenter environment and I haven't had one fail on me yet in two years. Recently we've switched to ordering 850 Pros for new servers, and in the 3-4 months we've been doing that I haven't seen a dead one either.

        • by swb ( 14022 )

          I'd love to hear more about how you're using them.

          The performance is generally so good that I might be inclined to use a double parity redundancy scheme and hot spare auto rebuild as a hedge against failure. I would generally expect double parity and hot spare rebuild to be fast enough to protect against all but the most catastrophic failures, like all drives somehow failing faster than you can replace spares.

          Do tell how you're actually using them and what kind of write usage you see.

          I built a tiered storag

          • I think this is one of the biggest advantages of using SSDs in the data center. With spinning platters, rebuilding the array after a drive dies can consume a serious amount of disk resources. SSDs are so much faster that rebuilding a drive can be done in a fraction of the time, without putting so much strain on the system while the rebuild is running.
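            To put rough numbers on the rebuild-time difference, a minimal sketch; the per-drive throughput figures are assumed ballpark values, and real rebuilds run slower because the array is also serving production I/O:

              # Best-case time to reconstruct one drive's worth of data,
              # ignoring parity math and production load on the array.
              def rebuild_hours(capacity_gb, rebuild_mb_per_s):
                  return capacity_gb * 1000 / rebuild_mb_per_s / 3600

              print(f"1 TB HDD @ ~120 MB/s: {rebuild_hours(1000, 120):.1f} h")   # ~2.3 h best case
              print(f"1 TB SSD @ ~450 MB/s: {rebuild_hours(1000, 450):.1f} h")   # ~0.6 h best case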

        • by Qzukk ( 229616 )

          but the anxiety after only 3 months is palpable.

          I assume they're under warranty for the next while.

          My fear would be that there is an Intel-esque [techreport.com] hard limit on writes that bricks the drive, and ALL of your drives in a RAID array will hit that limit simultaneously.

          • by swb ( 14022 )

            My fear would be that there is an Intel-esque hard limit on writes that bricks the drive, and ALL of your drives in a RAID array will hit that limit simultaneously.

            All of the drives in that test indicated impending failure via their status monitoring well before they actually died, regardless of how the failure ultimately played out.

            My guess is that part of the trick to using an 850 Pro-type drive is close monitoring of drive error status and aggressive replacement of drives showing any indication of failure. It seems unlikely that simultaneous failures would happen across an entire shelf so closely spaced as to prevent rebuilds to available hot spares (and hot spare replacement).

            Th

        • That's why we ponied up for the 845 DC Pro, just in case.

      • Speak for yourself.

        Last weekend I replaced my 2nd SanDisk SSD after constant disk corruption issues. Only half a petabyte had been written. I replaced lots of components trying to find the culprit, as the other drive had the same problem.

        Another Slashdotter a few months ago mentioned that his team downgrades all their industrial equipment to mechanical disks because the SSDs always fail in that environment. Sure, the benchmarks show how great and reliable they are. Real use dictates otherwise. I only buy Samsung Pros now for re

        • Another Slashdotter a few months ago mentioned that his team downgrades all their industrial equipment to mechanical disks because the SSDs always fail in that environment.

          Ahh yes. I'm sure he completed a thorough RCFA (root cause failure analysis) on the issue and didn't just say "these drives are shit, let's throw these others in there".

          The thing about random failures is that they are not related to end of life, and SSDs are no different from conventional HDDs in that regard. Ask anyone about any failure and they'll have a story.

    • by TheRaven64 ( 641858 ) on Thursday October 22, 2015 @07:28AM (#50780105) Journal
      You don't need an explicit warranty anywhere with sensible consumer protection laws. The Sale of Goods Act in the UK (and its equivalents in most EU countries) allows you to return goods for a full refund if they do not meet the promises made at the time of sale. I had a battery fail in an Apple laptop after four and a half years, but within the number of charge cycles their ads claimed. They replaced it (couriered out a replacement that arrived at 9am the day after I called them at 3pm - better service than I've ever had from them for anything under warranty) as soon as I mentioned the Sale of Goods Act.
      • by JBMcB ( 73720 )

        The Sale of Goods Act in the UK (and its equivalents in most EU countries) allows you to return goods for a full refund if they do not meet the promises made at the time of sale.

        So what's the difference between a 1 year replacement warranty and a promise that a product will last for a year under a Sale of Goods Act?

        • The main difference is that the warranty is usually offered by the manufacturer, whereas the Sale of Goods Act governs interaction between people who buy and sell things. It's often easier to get a refund / repair directly from the manufacturer, but this requires a manufacturer warranty. If you don't have one, then you can return it (under the SoGA) to the retailer, they can return it (also under the SoGA) to the wholesaler, who has to return it to the manufacturer. This can take a lot longer. A lot of
          • by JBMcB ( 73720 )

            The main difference is that the warranty is usually offered by the manufacturer, whereas the Sale of Goods Act governs interaction between people who buy and sell things.

            I understand legally what the difference is, I don't understand how it's different for the consumer.

            • It's different in that what the warranty oh so graciously offers to cover is often vastly different from what the product is claimed to be capable of in advertising. My understanding is that SoGA entitles you to a full refund if the item does not live up to its advertising and not just the portion the manufacturer states in the warranty. Additionally, as explained by the poster before me, the ability to take the product back to the store rather than having to deal with the manufacturer, regardless of the st
      • by tlhIngan ( 30335 )

        You don't need an explicit warranty anywhere with sensible consumer protection laws. The Sale of Goods Act in the UK (and equivalents in most EU countries) allow you to return the goods for a full refund if they do not meet the promises made at time of sale. I had a battery fail in an Apple laptop after four and a half years, but within the number of charge cycles that their ads claimed. They replaced it (couriered out a replacement that arrived at 9am the day after I called them at 3pm - better service tha

    • The trick is that it's a 6.4TB SSD, so that's really only equivalent to writing each sector 9,125 times. It seems reasonable to expect that each sector could handle 10,000 writes easily. You won't see that kind of load advertised on a 256 GB drive, as it would require the disk to endure over 220,000 writes per sector.

      • Size matters. Having plenty of empty space is a huge advantage for SSD drives.

      • by jon3k ( 691256 )
        If it's measured in drive writes per day, why would the size matter? My guess is that the PCIe SSDs are using SLC NAND which has a much higher write endurance.
        • It's measured in bytes written per day. Let's assume the disk was really small, like 1 GB. If you wanted to write 32 TB to that disk every day, you'd have to write to the same sector roughly 32,000 times every day. If the size were 1 TB, you could spread the writes out over more individual sectors, and you would only have to write to the same spot 32 times a day. With a 6.4 TB disk, you can write 32 TB a day and only need to write to the same sector 5 times a day. If you had a 32 TB drive, you cou
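          The parent's arithmetic, sketched in Python (this assumes ideal wear-leveling, so "writing to the same sector N times" really means N full-drive write passes):

            # Full-drive write passes per day implied by a fixed 32 TB/day workload.
            DAILY_WRITES_TB = 32

            for capacity_tb in (0.001, 0.256, 1.0, 6.4, 32.0):   # 1 GB up to a hypothetical 32 TB drive
                passes = DAILY_WRITES_TB / capacity_tb
                print(f"{capacity_tb:8.3f} TB drive: {passes:10.1f} full-drive writes/day")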

    • by jon3k ( 691256 )
      Most likely the PCIe SSD is using SLC NAND. The Samsung Pro line uses some form of MLC (eMLC, HET, whatever).
  • PCIe is great and all, but when are we going to get one of these that fits into a DIMM socket?

    • by Kjella ( 173770 )

      Probably never. DIMMs have the controller in the CPU, and flash drives have the controller on the drive. Today the controller and the NAND chips are tightly paired; to separate the two you'd have to define a standard, get Intel/AMD to implement it on the CPU side, and put bare NAND chips on a DIMM. If you don't do that, PCIe is a better protocol for talking to a controller. There are plenty of downsides to that solution, mostly that you'd be stuck with whatever your CPU supports.

    • by jon3k ( 691256 )
      Why would you want it in a DIMM socket? You can already get M.2 drives that slot directly into the board and operate over PCIe.
  • by swb ( 14022 ) on Thursday October 22, 2015 @06:31AM (#50779881)

    More or less, most storage systems are SAS-based and achieve capacity scale and IOPS by putting many drives on a SAS bus.

    What's the scaling concept behind this? I'm not aware of a (commonly available) storage expansion system based on PCIe connectivity unless you start getting into something like VSAN or the buzzwordy hyperconverged model where compute nodes create a distributed SAN. But this usually requires a lot of nodes.

    This kind of storage seems aimed at single-server gross performance, which I guess means local caching or DBs running on a natively installed OS in the conventional sense. But if you're in a virtualized environment, it seems to run against the grain somewhat -- either DBs using local storage and pinned to the nodes that have it, or, if you're using it as a local cache in front of a more conventional SAN, crippled performance whenever you move a VM until the new node's local cache warms up.

    I guess I'm not seeing how this is better (other than some gross numbers) than conventional SAS bus aggregation that achieves IOPS by aggregating individual drives. A dozen conventional 1 TB SSDs will provide similar IOPS, greater aggregate storage and redundancy, and, with a SAS-3 backplane, probably even greater throughput.

    Educate me, please.

    • Today: lots of spindles with a few SAS/SATA SSDs as cache.

      Tomorrow: lots of SAS/SATA SSDs with a few PCIe NVMe drives as cache.

      Think ZFS and ZIL/L2ARC.

      • by swb ( 14022 )

        Tomorrow: lots of SAS/SATA SSDs with a few PCIe NVMe drives as cache.

        Still not seeing the benefit versus the complexity and overhead. In a 24-drive shelf you're looking at close to a million read IOPS and sequential reads well into the GB/s range *just* from SSDs on a SAS backplane.

        Maybe there's some exotic, single-host database environment that would benefit from this, but an SSD-only solution would saturate 16Gb FC with multipathing. At the point of combining NVMe and SSD, you're now spending more on exotic interconnect fabrics to get the data off the
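        For rough context, a sketch of the aggregate math behind the 24-drive-shelf claim; the per-drive figures are assumed, typical 2015-era SATA/SAS SSD numbers rather than vendor specs, and real arrays deliver a fraction of the raw total once controller, backplane and RAID overhead are counted:

          # Raw aggregate of a 24-drive SSD shelf vs. the single PM1725 from the summary.
          DRIVES = 24
          PER_DRIVE_READ_IOPS = 90_000   # assumed 4K random read IOPS per drive
          PER_DRIVE_READ_MBPS = 500      # assumed sequential read per drive (SATA-limited)

          print(f"Shelf (raw): ~{DRIVES * PER_DRIVE_READ_IOPS:,} IOPS, "
                f"~{DRIVES * PER_DRIVE_READ_MBPS / 1000:.0f} GB/s")
          print("PM1725:      ~1,000,000 IOPS, ~5.6 GB/s on one device")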

        • The million IOPS isn't for getting data off the host, it's for data crunching done on the host; often a host only sends out a fraction of what it reads from disk. In most cases, simple GigE or 10GigE will be sufficient to get data off such a system, but more IOPS means the data gets processed faster and is ready to be retrieved sooner. For datasets under a couple hundred GB there's still nothing that beats a RAM disk, but for larger sets this is where it's at.
          • by swb ( 14022 )

            So where are you still seeing monolithic data processing hosts?

            As far as I can tell, everyone has moved on to virtualization for all the usual benefits of scale-out, high availability, and disaster recovery. The clients I've run across that need DB-crunching IOPS have moved to tiered storage where their DBs live on SSD (like 98% of them), plus a handful that have invested in per-host cache cards; but they're still virtualized, and the cache cards were a side effect of bad storage/fabric decisions they were

            • If you're counting node licenses you're doing it wrong. If you're at the point of needing performance beyond what FOSS can provide, you're at the point where hiring a team to build a solution tailored to your data will cost less in the long-term than buying a solution that was built for someone else's. You'll get better performance that way, as well.

              But if you'd rather pay perpetual licensing to third parties for the rest of your natural life (as implied at the end of your post), you're right, I guess hav
              • by swb ( 14022 )

                But if you'd rather pay perpetual licensing to third parties for the rest of your natural life (as implied at the end of your post), you're right, I guess having a large amount of local storage doesn't make sense.

                It doesn't make a lot of sense, regardless of your FOSS trolling and fantasies. Power, hardware and physical space all cost real money no matter what name is on the login prompt. You live in a fantasy world if you think that managing petabyte-scale storage across dozens of independent compute nodes makes any sense at all with off-the-shelf FOSS tools.

                Sure, your specialized team and its tailored solution could make it work, but I'd love to see the spreadsheet that explains the cost savings of basically going into the SAN business and re-inventing the wheel.

                • basically going into the SAN business and re-inventing the wheel

                  Local caches are useless?

                  • by swb ( 14022 )

                    They're less useful as clusters grow and workloads migrate across nodes, especially if you use automation like DRS to maintain a self-balancing cluster that levels workloads across hosts.

                    I've seen them work OK in very small clusters with more or less static node membership, but they're pretty uncommon outside of specialized markets. The last time I saw one deployed was somewhere that had a crummy, specialized database application that performed poorly, and they were tacked on in desperation. T

    • What's the scaling concept behind this? I'm not aware of a (commonly available) storage expansion system based on PCIe connectivity

      Isn't that what these things are for eventually: http://www.avagotech.com/produ... [avagotech.com].

    • Here's a solution for virtualized environments (e.g. the cloud):

      Think LVM + software RAID + PCIe disk + SAS. When attaching a volume from a SAS array, first allocate a matching partition on the PCIe SSD, then build a RAID 1 from the SAS volume and the local partition. It's on you to figure out how to prioritize the local disk over the SAS for I/O, so that all reads come from the SSD and all writes hit the SSD first for consistency, but there you go. The caveat is that, once the SAS volume is used in the softwa

  • The top-of-the-line 6.4 TB SSD is rated to handle 32 TB of writes per day with a 5-year warranty.

    Finally something that can handle my torrent load.

