Data Storage Hardware

Samsung Unveils First PCIe 3.0 x4-Based M.2 SSD, Delivering Speeds of Over 2GB/s

Deathspawner writes: Samsung's SM951 is an unassuming gumstick SSD — it has no skulls or other bling — but it's what's underneath that counts: PCIe 3.0 x4 support. With that support, Samsung is able to boast speeds of 2,150MB/s read and 1,550MB/s write. But with such speeds comes an all-too-common caveat: you'll probably have to upgrade your computer to take true advantage of it. For comparison, Samsung says a Gen 2 PCIe x4 slot will limit the SM951 to just 1,600MB/s and 1,350MB/s (or 130K/100K IOPS), respectively. Perhaps now is a bad time to point out that a typical Z97 motherboard has only a PCIe Gen 2 x2 (yes, x2) connection to its M.2 slot, meaning one would need to halve those figures again.
This discussion has been archived. No new comments can be posted.

  • by RogueyWon ( 735973 ) on Thursday January 08, 2015 @12:15PM (#48765231) Journal

    It's curious how many relatively recent high-end PCs from prestige brands don't have PCIe 3.0 slots. Alienware are a particular offender here - they were very slow adopters, quite possibly because a lot of their customers don't actually think to check for this when speccing up a machine.

    That said, it's questionable how much it really matters in the real world at the moment. Performance tests on the latest video cards (which can take advantage of PCIe 3.0) have found very little performance gap between 3.0 and 2.0 (and even 1.0) with the likes of the Nvidia 980. The gap is most apparent at extremely high (150+) framerates - which is unlikely to constrain the average gamer, who probably just turns up the graphical settings until his PC can't sustain his target framerate (probably somewhere in the 40-60fps range) any more.

    • Something that held back PCIe 3 support in many high-end systems was the X79 chipset. If you want an E-series processor, Intel's ultra-high-end desktop line, you have to use a different chipset, and they don't rev that every generation. So the X79 came out with the Sandy Bridge-E processors, and then Ivy Bridge-E runs on the same thing.

      There is a new chipset now, the X99, that works with the Haswell-E, but that just launched a few months ago.

      Also with the high end processors, they are out of cycle with t

      • Hence you can have a situation where for things like PCIe and USB the high end stuff is behind.

        USB? yes, SATA? yes, PCIe? no.

        None of Intel's chipsets has PCIe 3.0 on the chipset, not even X99. The only PCIe 3.0 lanes on Intel systems so far have been those from the processor, and the processor lanes have been PCIe 3.0 since Sandy Bridge-E on the high end and Ivy Bridge on the mainstream. So the high end got PCIe 3.0 before the mainstream did. Furthermore, the high-end platforms have a lot more PCIe lanes. One lane of 3.0 is equivalent to 2 lanes of 2.0 or 4 lanes of 1.0, so in terms of total PCIe da
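
        (For reference, the rough per-lane numbers behind that equivalence: PCIe 1.x runs at 2.5 GT/s with 8b/10b encoding, for about 250 MB/s per lane; PCIe 2.0 at 5 GT/s, for about 500 MB/s; and PCIe 3.0 at 8 GT/s with 128b/130b encoding, for roughly 985 MB/s. So a Gen 3 x4 link is good for close to 3.9 GB/s, comfortably above the SM951's 2,150 MB/s rating, while Gen 2 x4 tops out around 2 GB/s and the Gen 2 x2 M.2 links mentioned in the summary around 1 GB/s. These are raw link figures before protocol overhead, so real-world throughput is a bit lower.)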

    • Since when has Alienware been a premium brand? But that's beside the point. My two and a half year old PC would be fine, as it has Gen3 16,16,8 and Gen2 1,1. Not sure I want to dump one of my graphics cards for one of these. Not unless I could get a flex PCIe ribbon connector and use one of the 8x slots without fouling the graphics cards...
    • Another thing to remember is that Intel has not put PCIe 3.0 into their PCH chips yet. So the only PCIe 3.0 lanes are those direct from the processor which are usually used for the big slots intended to take graphics cards. Especially on mainstream (LGA115x) boards.

    • "That said, it's questionable how much it really matters in the real world at the moment."

      On I/O devices? Immensely! Your example with the graphics cards shows little difference because the performance of graphics cards isn't limited by PCIe bandwidth. I/O devices that can read and write data faster than a PCIe 2.0 slot can carry it, on the other hand, will show immediate gains. If an SSD can read at 3GB/sec and the slot only allows 2GB/sec, obviously upgrading to a newer standard is g

    • It's not about having PCIe 3.0, it's about having 4 lanes of PCIe 3.0 piped to an M.2 / SATA Express connector.
      Otherwise you have to buy a flaky adapter and hope it supports the number of lanes (and PCIe revision) you need.

  • "...unfortunately itâ(TM)s currently shipping only to OEMs..."

    ;-(

    • What irony that those are the ones stripping out DIMM slots, putting in the wrong frequency RAM in single sticks instead of pairs, using garbage chipsets, adding in onboard graphics from 8 years ago, and refusing to put SSDs in anything.
  • by azav ( 469988 ) on Thursday January 08, 2015 @12:35PM (#48765397) Homepage Journal

    Then max out the RAM and create a RAM drive.

    On my 2010 iMac, I have a 16 GB RAM drive that gets between 3 and 4 GB/s and still have 16 GB of RAM available for my apps.

    Check this terminal command out before entering it just to be safe.

    diskutil erasevolume HFS+ 'RAM Disk' `hdiutil attach -nomount ram://33554432`

    Under Mac OS X 10.6.8 and 10.9, the above creates a 17 GB RAM disk.

    diskutil erasevolume HFS+ 'RAM Disk' `hdiutil attach -nomount ram://8388608`

    This creates a 4.27 GB RAM disk. Enjoy the speed.
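
    For the curious, the ram:// argument is a count of 512-byte sectors, so the size in bytes is simply that number times 512. A minimal sketch for sizing one yourself and tearing it down afterwards (the disk device name will differ on your machine):

    # 16 GiB worth of 512-byte sectors: 16 * 1024^3 / 512 = 33554432, matching the first command above
    SECTORS=$((16 * 1024 * 1024 * 1024 / 512))
    diskutil erasevolume HFS+ 'RAM Disk' `hdiutil attach -nomount ram://$SECTORS`
    # When finished, eject it and give the memory back (substitute your actual device node):
    # hdiutil detach /dev/diskN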

    • by Anonymous Coward
      You should never do shit like this. Never. All modern operating systems will take full advantage of the RAM and cache any needed information / files into it themselves. By creating a RAMdisk, you're adding an unnecessary layer of complexity and not allowing the OS the flexibility to prioritize things the way it wants to. Again: bad idea.
      • Re: (Score:3, Insightful)

        by Anonymous Coward

        No. You should RARELY do this. If you go back and forth from other tasks where you can expect the cache to be re-used and need the absolutely best performance when you come back to those files, then something like this is a good idea. You're essentially committing content to RAM for cases where you know better than your operating system's optimizations.

      • by azav ( 469988 ) on Thursday January 08, 2015 @01:43PM (#48766115) Homepage Journal

        Well, shame on me. I've been doing it for 3 years on a daily basis.

        I have my RAM drive rsynced to an SSD partition that is the same-ish size.

        And here's one area where you're incorrect. Safari loads web pages. Each page loads javascript. Many of these leak over time or simply never purge their contents. I often end up with 8 GB used in Safari. Safari alone doesn't play by these rules, because each page it loads is a prisoner of the javascript it loads, which often doesn't handle freeing memory properly.

        When I use my RAM as a drive, I get near INSTANT builds on OS X.

        This matters to me more than your claims of "all modern operating systems taking full advantage of the RAM". If the operating system takes full advantage of the RAM, it may not be to my best benefit.

        For example, Apple apps now by default do not quit when you close the last document. They merely stay in memory, hide the UI, and then need to be relaunched to enable the UI again. Why does this matter? In TextEdit, if I close the last document and click elsewhere, then want to open a document from the Open menu, I'm forced to reopen the app, because the OS fake-closes the app (really only hiding the UI) while the rest of the app stays memory resident.

        So, I have to relaunch the app. This takes more time and ONLY just re-enables the UI. How much memory does this save on my 32 GB machine? 1 MB. Now, that's certainly not taking full advantage of the RAM. It's a case of the OS designers thinking that "he wanted to quit the app, so we'll do it for him". But I didn't want to quit the app. The computer is not taking full advantage of the RAM in this case. That's not what I wanted it to do.

        Maybe I have apps in the background that are doing stuff, but I want them to pause completely if another app is running in the foreground. Maybe I want ALL Safari pages to suspend their javascript when in the background, but the app to still process downloads as if it's running at normal priority.

        See, there are many cases where the computer's OS will not take proper advantage of the RAM and the processing power, since it cannot mirror the user's intentions. Even in cases where it tries to, it often gets them wrong. And in some cases where it does (Safari javascript), the computer ends up eating processing power and RAM for tasks that the user doesn't want it to be placing priority on. And in some of these cases it can't allocate RAM and processing power properly, because it relies on other programmers writing their javascript competently and acting as good citizens.

        I can cordon off a small chunk of my computer's RAM (since I have way more than enough) and direct it to do pretty damn much just what I want it to do.

        That's why I bought it. I don't want the OS to prioritize things the way it wants to. I want to tell (parts of) the OS to prioritize things the way I want it to.

        Cheers.

        • This matters to me more than your claims of "all modern operating systems taking full advantage of the RAM". If the operating system takes full advantage of the RAM, it may not be to my best benefit.

          tl;dr: If you don't know what you're doing (or like me are too lazy to care) with respect to memory management (most Mac users) then the OS is likely a better steward than you. For everyone else, there are RAM drives :)

          Why someone would criticize you for using a RAM drive ....doesn't make sense to me.

      • BS.

        Ramdrives have several advantages.

        1: They are explicitly volatile. Application developers don't know your use case and therefore often err on the side of preserving your data over power failures, and so use calls like fsync. Even when the app doesn't use fsync, the OS will usually try to push the data out to disk reasonably quickly. If you know you don't care about preserving the data across power cycles, and you know you have sufficient RAM, then a ramdrive can be a much better option.
        2: operating systems

    • by Anonymous Coward

      This sounds clever but is actually quite stupid.
      A modern OS is going to manage how filesystems are cached in memory way better than a blank ramdisk.
      Plus, you still need to read all of your data OFF your current drives before you can take advantage of this, which means you don't actually get to enjoy the speed until you pay the upfront cost. Plus, none of the things you do on that disk is permanent.

      TL;DR: Ramdisks are a stupid idea

      • I had a ramdisk in my Epson QX-10 and it was amazing. 2 megabytes of pure, blazing speed. TP/M booted and Valdocs loaded in seconds. It had a power supply and battery backup to keep the data from going poof every time the computer was turned off.
      • Ramdisks are not a stupid idea. They are just not useful in most cases.

        Putting temporary files (e.g. /tmp) on a RAM disk can be beneficial, but only if you have significantly more RAM than needed. That can significantly speed up tasks that are known to create lots of temporary files (e.g. compilation). This is also very useful for preventing your (old) SSD or flash disk from wearing out too quickly.

        On Linux, it is not uncommon for /run to be a ram disk (of type tmpfs). This is where most services will put sma
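
        As a rough illustration of that setup (the mount point and the 2G size cap are just example values), a tmpfs can be mounted ad hoc or declared in /etc/fstab:

        mkdir -p /mnt/ramdisk
        mount -t tmpfs -o size=2G tmpfs /mnt/ramdisk
        # or persistently, with a line like this in /etc/fstab for /tmp:
        # tmpfs  /tmp  tmpfs  defaults,size=2G,mode=1777  0  0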

        • That can significantly speed up tasks that are known to create lots of temporary files (e.g. compilation).

          I set up a RAM disk on my Windows machine because of Audacity.

          It creates temp files to store intermediate work (like the decode to PCM of a compressed format, or the output of a filter) instead of using RAM. Even with an SSD, this was not nearly as fast as it should have been, and a serious waste, since the total space used by the temp files is far less than the memory space available to the application. The RAM disk solved the speed problem quite nicely.

          I also store things like Firefox's page cache on th

    • by Anonymous Coward

      RAM disks are a complete waste of resources. If you wish to stuff something into the file cache, just run md5sum on everything you want cached.

      cd make_this_fast
      find . -type f -exec md5sum {} + > /dev/null    # reading every file pulls it into the page cache

      done. Run it once, then if you don't believe me, run it again and you'll see somewhat of a difference. You can substitute something faster for md5sum, like cat.

      RAM disks are a way of allocating space for temporary files, like /tmp or something like that, not a way to cache disk files!

    • by ledow ( 319597 ) on Thursday January 08, 2015 @01:17PM (#48765809) Homepage

      Last time I used a RAM drive, it was for the contents of a floppy disk. My brother was sick of slow compile times and worked out how to use the university DOS computers to produce a RAM drive. Autoexec.bat created it and copied his files into it, and then it ran like greased lightning.

      But that was back when 1.44MB of RAM was a lot, and he was lucky enough to be somewhere where every computer had that spare.

      Last time I saw it was when making a single-floppy Linux distribution that copied itself into RAM because it was often used on diskless workstations. Just like almost every Ubuntu install disk can do now if you select Live CD from the boot menu.

      But on ordinary desktop OS? Since Windows 95, RAMDisks have been dead. Since then, we've been using RAM better to cache all recent filesystem accesses. There's very, very, very, very little that will ever benefit from a RAMDisk over just having that RAM as filesystem cache automatically anyway. You still have to read the data from permanent storage anyway, and once you've done that, it's in RAM until you start to fill up RAM. Read it often enough and it will never drop out of the cache. If you're not reading it often enough, why the hell bother to RAMDisk it?

      And you lose NOTHING if the machine dies mid-way. With a RAMDisk, any changes you make are gone.

      Please. Stop spreading absolute "gold-plated-oxygen-free" junk advice like this.

      Anyone who wants to do this can do it with any bit of freeware on any machine. But why they would bother is beyond me. Hell, next you'll be telling me to enable swapfiles and put them on the RAMDisk....
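
      If anyone wants to watch the page cache doing exactly that, here's a quick Linux-specific experiment, with big_file standing in for any large file you have lying around:

      sync && echo 3 | sudo tee /proc/sys/vm/drop_caches   # start with a cold cache
      time cat big_file > /dev/null                        # first read comes off the disk
      time cat big_file > /dev/null                        # second read comes straight out of RAM, no RAMDisk required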

      • by Mr Z ( 6791 )

        On some Linux distros, /tmp is a tmpfs volume [wikipedia.org], which is effectively a RAM disk. SunOS/Solaris also do that. Many files live in /tmp for very short periods, and have no requirement to persist across a reboot. So, building them in RAM makes sense. The filesystem can still get backed to disk in the swap partition.

        The only other case I can think of where a RAM drive might make sense is if you have a set of files you need access to with tight deadlines, and the total corpus fits in RAM. Of course, you could

        • I've built a handful of web servers hosting live HLS streams for PEG and hospitality customers, and RAM disks are a very simple solution that works great for me. It doesn't take much memory to store just ~30 seconds of a hundred different streams, the encoders can use WebDAV to push the streams onto the server, and Nginx (but probably almost any other webserver) can easily serve tens of Gbps on the cheapest of the E3 Xeons.

          I can't think of a cheaper and easier solution than a RAM disk for this particular app
      • by azav ( 469988 )

        That's actually why I decided to use it. Faster compile times.

        OS X hits the disk so often that I moved my user environment onto the RAM drive.

        Even with 1066 MHz RAM, I would get instant build times as the swap files were now in RAM.

        That, compared to 30-second build times, is a trade-off I'm willing to make.

        And losing my contents? That's what rsync is for. And that's what backup batteries are for. My RAM drive is rsynced to an SSD partition. Happens in the background every 5 mins. I never see t

        • That's actually why I decided to use it. Faster compile times.

          OS X hits the disk so often that I moved my user environment onto the RAM drive.

          Even with 1066 MHz RAM, I would get instant build times as the swap files were now in RAM.

          That, compared to 30-second build times, is a trade-off I'm willing to make.

          I/O-limited compilers? More likely you need to enable parallel builds to hide I/O latency.

          So, yeah. Swap files on the RAM Disk. Insane speed as a result. Disk backed up to an SSD. Battery backup (laptops have batteries too, don't they?) Never a problem.

          Page file + ram disk = oxymoron

      • by Kjella ( 173770 )

        Anyone who wants to do this can do it with any bit of freeware on any machine. But why they would bother is beyond me.

        Actually, I've got an example from work. A utility (that we can't easily change) expects a file on disk. We must make corrections to the file first. So the process is:

        1. Read original file
        2. Apply corrections
        3. Write out temp file
        4. Point utility to file
        5. Delete temp file

        Writing a big file to any persistent media wastes quite a bit of time for no particular reason. So a RAM disk is quite useful if you need to pipe your process through files.
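
        A minimal sketch of that process on a Linux box, using /dev/shm (a tmpfs that's already mounted on most distros) so the temp file never touches persistent media; the sed correction and the utility name are placeholders:

        TMP=/dev/shm/corrected_$$.dat               # temp file lives in RAM
        sed 's/OLD/NEW/g' original.dat > "$TMP"     # steps 1-3: read, apply corrections, write temp file
        some_utility --input "$TMP"                 # step 4: point the utility at it
        rm -f "$TMP"                                # step 5: delete the temp file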

      • by rdnetto ( 955205 )

        But on ordinary desktop OS? Since Windows 95, RAMDisks have been dead. Since then, we've been using RAM better to cache all recent filesystem accesses. There's very, very, very, very little that will ever benefit from a RAMDisk over just having that RAM as filesystem cache automatically anyway. You still have to read the data from permanent storage anyway, and once you've done that, it's in RAM until you start to fill up RAM. Read it often enough and it will never drop out of the cache. If you're not reading it often enough, why the hell bother to RAMDisk it?

        This is consistent with my experience on Linux. When compiling the kernel, I found no significant difference in compilation times between an SSD and tmpfs. If you only have a mechanical hard drive, it might make sense to use a tmpfs, but if you don't have an SSD you probably don't have enough RAM for that anyway.

    • You miss the point.

      The OS boots almost instantly. I have 500 gigs of accelerated storage, not 4. The speed gains are from latency, not bandwidth, so the extra bandwidth won't offer an improvement. With your RAM disk you still wait for everything to load into your tiny RAM disk. An SSD is permanent. Until you use one, you can't comment.

  • by Red_Chaos1 ( 95148 ) on Thursday January 08, 2015 @01:21PM (#48765841)

    Because that's just so terrible, right? :p

  • by WaffleMonster ( 969671 ) on Thursday January 08, 2015 @01:57PM (#48766325)

    How long can you sustain these kinds of I/O rates before burning the thing out?

    Awesome that it is so fast, yet, like LTE with tiny data caps, its utility appears to be substantially constrained by limitations on use.

    For the subset of people with workloads actually needing this kind of performance, how useful is this? Reads can be cached in DRAM, which is quite cheap.

    For those who don't really need it, I can understand how it would be nice to have.

    • > How long can you sustain these kinds of I/O rates before burning the thing out?

      If you were to sustain 1550 MByte/s write for 1 year, you'd write a total of 48 PB (1550*60*60*24*365/1000/1000/1000), or 0.13 PB/day. In Techreport's endurance test, only two drives made it past 1.5 PB. So, if that is the bar, the drive would last only 11 days.

      However, that would give you no time to read the data you'd written. Since you're not likely to write at max speed 24/7, the drive should last considerably longe
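
      Spelling that arithmetic out (same numbers as above), for anyone who wants to check it:

      echo "scale=2; 1550*60*60*24*365/10^9" | bc   # ~48.88 PB written per year at full speed
      echo "scale=2; 1550*60*60*24/10^9" | bc       # ~0.13 PB per day
      echo "scale=1; 1.5/0.134" | bc                # ~11 days to hit the 1.5 PB endurance mark
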
  • Don't overlook NVME! (Score:3, Informative)

    by Anonymous Coward on Thursday January 08, 2015 @01:58PM (#48766341)

    The big news here is that this is pretty much the first consumer-available SSD that supports NVME! There are some super-expensive pro devices that do NVME, but they alone likely cost more than your whole high-end gaming rig.

    http://en.wikipedia.org/wiki/NVM_Express

    NVME is the interface that replaces AHCI, which was designed for spinning-rust devices that can really only service one request at a time. Flash-based devices don't have to wait for moving parts and thus can access many things at once.

    AHCI was designed for magnetic drives attached to SATA. NVME is explicitly designed to accommodate fast devices directly connected to PCI Express. Take a look at the comparison table on the Wikipedia page linked above. Multiple, deep queues and lots of other features remove bottlenecks that don't apply to flash-based storage.

    How useful NVME currently is to consumers, though, is a different question. Only really new operating systems can boot from NVME devices. (Windows 8.1 or later. I don't know the current state of Linux support, but I bet at least someone's got a patched version of the kernel and grub if there's not mainline support already.) And most motherboards don't properly support NVME booting yet either. (I've heard reports that some do with a BIOS/firmware update, but it's currently really spotty.)
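
    On the Linux side, a quick way to check whether a box even sees an NVME device (assuming a kernel recent enough to include the nvme driver) is something like:

    lspci -nn | grep -i 'non-volatile memory'   # NVME controllers show up as their own PCI device class
    ls /dev/nvme*                               # block device nodes appear once the nvme driver binds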

  • More boards come with a 16x PCI-E 3.0 port for graphics than a 3.0-capable 4x slot. In fact, the big ones built for multiple graphics cards have 16x slots that reduce to 8x when both are in use or sometimes just the 2nd slot runs at 8x. They're all PCI-E 3.0 though. The problem is, you drop in that SSD in more basic but modern boards and there goes your aftermarket graphics card. You can never add one because you're taking up the only PCI-E slot that isn't 1x or 2.0.
  • by Solandri ( 704621 ) on Thursday January 08, 2015 @03:51PM (#48767765)

    Samsung is able to boast speeds of 2,150MB/s read and 1,550MB/s write.

    1. These are sequential speeds. They're only relevant when you're dealing with large files. Unless your job is working with video or disk images or other large files, the vast majority of your files are going to be small, and the IOPS matters more. 130k/100k IOPS is really good, but only about a 10%-20% improvement over SATA3 SSDs. It translates into 520/400 MB/s at queued 4k read/writes best case. Current SATA3 drives are already surpassing 400 MB/s queued 4k read/writes.

    2. Like car MPG, the units here are inverted from what actually matters. You don't say "gee, I have 5 gallons in the tank I need to use today, how many miles can I drive with it?", which is what MPG tells you. You say "I need to drive 100 miles, how many gallons will it take?" which is gal/100 miles. Yes they're just a mathematical inverse, but using the wrong one means the scaling is not linear. If you've got a 100 mile trip:

    A 12.5 MPG vehicle will use 8 gallons
    A 25 MPG vehicle will use 4 gallons (a 4 gallon savings for a 12.5 MPG improvement)
    A 50 MPG vehicle will use 2 gallons (a 2 gallon savings for a 25 MPG improvement)
    A 100 MPG vehicle will use 1 gallon (a 1 gallon savings for a 50 MPG improvement)

    See how the fuel saved is inversely proportional to the MPG gain? As you get higher and higher MPG, it matters less and less because MPG is the wrong unit. If you do it in gal/100 miles it's linear. (This is why the rest of the world uses liters / 100 km.)

    An 8 gal/100 mile vehicle will use 8 gallons.
    A 4 gal/100 mile vehicle uses 4 gallons (a 4 gallon savings for a 4 gal/100 mi improvement)
    A 2 gal/100 mile vehicle uses 2 gallons (a 2 gallon savings for a 2 gal/100 mi improvement)
    A 1 gal/100 mile vehicle uses 1 gallon (a 1 gallon savings for a 1 gal/100 mi improvement)

    The same thing is true for disk speeds. Unless you've got a fixed amount of time and need to transfer as much data as you can in that time, MB/s is the inverse of what you want. The vast majority of use cases are a fixed amount of MB that needs to be read/written, and the time it takes to do that is what you're interested in because that's time you spend twiddling your thumbs. If a game needs to read 1 GB to start up:

    A 100 MB/s HDD will read it in 10 sec
    A 250 MB/s SATA2 SSD will read it in 4 sec (a 6 sec savings for a 150 MB/s improvement)
    A 500 MB/s SATA3 SSD will read it in 2 sec (a 2 sec savings for a 250 MB/s improvement)
    A 1 GB/s PCIe SSD will read it in 1 sec (a 1 sec savings for a 500 MB/s improvement)
    This 2 GB/s PCIe SSD will read it in 0.5 sec (a 0.5 sec savings for a 1000 MB/s improvement)

    Again, the actual time savings is inverted from the units we're using to measure. We really should be benchmarking HDDs and SSDs by time per amount of data, e.g. sec/GB.

    A 10 sec/GB HDD will read 1 GB in 10 sec
    A 4 sec/GB SATA2 SSD will read it in 4 sec (a 6 sec savings for a 6 sec/GB improvement)
    A 2 sec/GB SATA3 SSD will read it in 2 sec (a 2 sec savings for a 2 sec/GB improvement)
    A 1 sec/GB PCIe SSD will read it in 1 sec (a 1 sec savings for a 1 sec/GB improvement)
    This 0.5 sec/GB PCIe SSD will read it in 0.5 sec (a 0.5 sec savings for a 0.5 sec/GB improvement)

    That's nice and linear. You see that the vast majority of your speedup comes from switching from a HDD to a SSD - any SSD, even the old slow first gen ones. The next biggest savings is switching to a SATA3 SSD. Beyond that the extra speed is nice, but don't be misled by the huge MB/s figures - the speedup from PCIe drives will never be as big as those first two steps from a HDD to a SATA SSD. Manufacturers just report performance in MB/s (instead of sec/GB) because it exaggerates the importance of tiny increases in time saved, and thus helps them sell new and improved (and more expensive) products. Review sites also report in MB/s because if you report in sec/GB, the benchmark graphs are boring and the speedup from these shiny new SSDs is barely perceptible.
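
    For anyone who wants to reproduce those numbers, the whole argument boils down to dividing a fixed transfer size by the advertised speed:

    # seconds to read a fixed 1 GB (1000 MB) at each drive's sequential speed
    for mbps in 100 250 500 1000 2150; do
        echo "$mbps MB/s -> $(echo "scale=2; 1000/$mbps" | bc) sec per GB"
    done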

    • You don't buy one of these for a regular desktop that just runs email, Word, and the internet. This is for people who need to move large files around even faster. One of my use cases for one of these is a cheap SAN cache for a backup-to-disk system.

  • According to TFA, it's being released to OEMs only. I think Samsung did that for their previous-gen PCIe 2.0 SSD also. Does anyone have an idea where to look for these, or an idea when they might become available to individual system builders? BTW, the Asus X99 MBs have a slot for these. Probably other brands of X99 MBs too.
  • FINALLY! I haven't needed to build a new computer for a while, but Intel's next gen (Skylake, due 2015 supposedly) includes PCIe 4.0 support right from the processor/bridge. Skylake + PCIe 3.0/4.0 SSD + DisplayPort 1.3 for 4K monitors (also due 2015) = finally a reason to build a new system. Skylake also supports DDR4, so we're finally attacking the problem of getting enough data to the processor fast enough!
