New PCIe SSDs Load Games, Apps As Fast As Old SATA Drives

crookedvulture writes: Slashdot has covered a bunch of new PCI Express SSDs over the past month, and for good reason. The latest crop offers much higher sequential and random I/O rates than predecessors based on old-school Serial ATA interfaces. They're also compatible with new protocols, like NVM Express, which reduce overhead and improve scaling under demanding loads. As one might expect, these new PCIe drives destroy the competition in targeted benchmarks, hitting top speeds several times faster than even the best SATA SSDs can muster. The thing is, PCIe SSDs don't load games or common application data any faster than current incumbents—or even consumer-grade SSDs from five years ago. That's very different from the initial transition from mechanical to solid-state storage, where load times improved noticeably for just about everything. Servers and workstations can no doubt take advantage of the extra oomph that PCIe SSDs provide, but desktop users may struggle to find scenarios where PCIe SSDs offer palpable performance improvements over even budget-oriented SATA drives.
This discussion has been archived. No new comments can be posted.
  • So for a segment of the market, data throughput is no longer a competitive advantage.

    Big, established players sometimes have trouble adapting to that kind of competitive shift; they are used to optimizing on one dimension and their engineers are amazing at it, but the market goes in a different direction.

    It sometimes gives newcomers with smaller capital bases and different technologies that would seem silly by high-end standards the ability to disrupt markets.

    • Oh, just you wait. As has been the case with pretty much every hardware advancement in the history of computing, software bloat will offset it. Give it time, and a SATA SSD will feel as old and clunky as a hard drive does today.
      • It will be interesting if at some point games will begin listing a certain minimum disk transfer speed in the system requirements.
  • SATA Slots. (Score:4, Insightful)

    by TechyImmigrant ( 175943 ) on Monday April 20, 2015 @04:38PM (#49514755) Homepage Journal

    A PCIe SSD opens up the sole SATA slot for the backup disk in the small form factor PCs that are currently in vogue.

  • So? (Score:5, Interesting)

    by iamwhoiamtoday ( 1177507 ) on Monday April 20, 2015 @04:41PM (#49514795)

    Most folks who need the throughput of a PCI-E SSD won't use it for just gaming. These same users are likely power users. Everything from running test VMs locally to Video / Audio editing would see a huge improvement from this tech.

    Loading apps? games? That's nice and all, but those are far from the only use cases of fast storage media.

    Personally, I've gotten a good amount of use out of the new PCI-E SSDs as ZFS cache drives, where they've been wonderful for saturating 10 Gbps Ethernet.

    • You're saturating 10 Gbps network legs? I would love to see any part of that setup.

      It's not that I don't believe you, it's that I would love to be able to do it.

      • by jon3k ( 691256 )
        Sure, that's not abnormal at all. You can saturate 10GbE with a handful of off the shelf consumer SSDs in RAID. A single SATA 6G drive is capable of ~4.5Gb/s (550-575MB/s) of throughput.
        • But not with consumer RAID controllers. There you'll saturate the RAID controller instead of the 10 GbE.

          • Don't use hardware RAID. Buy an OEM LSI SAS2008-based controller card (IBM ServeRAID M1015, Dell PERC H310, ...) from eBay
            and flash it to passthrough IT mode. These get 2+ GB/s of bulk throughput each that way.
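
A quick back-of-the-envelope sketch of the 10GbE point above (Python; the ~550 MB/s per-drive figure comes from the comment upthread, and protocol overhead is ignored):

```python
# Rough check: how many SATA SSDs does it take to fill a 10 Gb/s link?
LINK_GBPS = 10                   # nominal 10GbE line rate
SSD_MBPS = 550                   # sequential read of a typical SATA 6G SSD, MB/s

ssd_gbps = SSD_MBPS * 8 / 1000   # MB/s -> Gb/s, ignoring protocol overhead
drives_needed = LINK_GBPS / ssd_gbps

print(f"One SATA SSD is roughly {ssd_gbps:.1f} Gb/s")
print(f"Drives needed to saturate 10GbE: about {drives_needed:.1f}")
# -> roughly 2-3 drives striped, before controller and network overhead
```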
  • Different SSD, different target...

    What would you expect? Loading games and applications is mostly about reading a large block of data as fast as possible and having your CPU process it. The CPU is the bigger limit, not the raw transfer speed. Random I/O (where these drives are much faster) is not that big of an issue for most games.

    • by PRMan ( 959735 )

      Random IO (where these are much faster) is not that big of an issue for most games.

      Then you would expect it not to come in last place on a Windows boot, but it did.

      • Booting from PCIe is not well supported at this point and that may be interfering with the boot times. As for the game loading benchmark results, these drives are usually used for high speed working file space in servers/workstations (e.g. latency critical databases, video editing, scientific computing). If you aren't trying to solve an I/O bottleneck problem for a specific application, PCIe SSDs probably aren't what you're looking for. And even if you are, you have to know exactly what type of I/O is cr

        • You'll note that to produce this crappy summary they skipped over the IOmeter pages, which show the Intel 750 bursting @ 180k IOPS and sustaining 20k, while 90% of consumer SSDs can't sustain more than 8k and the X25-M they're touting struggles to break 2k.

          Load up a slew of VMs on a virtualization lab on that X25-M and compare it to the 750 -- THEN tell me that it's no faster.

        • Comment removed based on user account deletion
          • Yeah, I meant specifically PCIe mass storage support in consumer-grade BIOSes and OSes. If you know what you're doing (and have a motherboard that doesn't lock out most of the chipset's features), it isn't terribly difficult to set up. It's just that it's nowhere near as brain-dead simple as booting from a standard SATA drive.

          • by AcquaCow ( 56720 )

            Booting from RAID is better supported, and the support is baked into nearly every BIOS out there... booting PCI-e via NVMe or UEFI is brand new, very few things support it, and all the code is new.

            Booting through a RAID card that has its own BIOS is nothing like booting off of a native PCI-e device.

      • by AcquaCow ( 56720 )

        Booting from pci-e uses either a UEFI driver or NVMe today, two technologies that are kinda in their infancy.

        The code is not yet fully optimized/etc and you may see reduced speeds at least until you can get into an OS layer and load up a more feature-full driver.

    • And most game files are packed, compressed binary files (PAKs). There is a lot of CPU work to be done once the data is in memory; they have traded CPU cycles for disk space.

      I recall at least one game (A Total War title?) that offered an option to unpack the PAK files so that access was faster, but it took up a lot of disk space.

  • by 0123456 ( 636235 ) on Monday April 20, 2015 @04:49PM (#49514897)

    Since so many games make you sit through crappy videos, copyright screens, and other garbage for thirty seconds while they start up -- or at least make you hit a key or press a mouse button to skip them, plus that damn 'Press Any Key To Start' screen that they couldn't even take five minutes to remove when porting from a console -- faster load times are pointless once you've eliminated the worst HDD delays.

    • by darkain ( 749283 )

      I have absolutely no idea what you're talking about! https://www.youtube.com/watch?... [youtube.com]

  • A guy named Amdahl had something to say on the subject [wikipedia.org]. SSDs excel at IOPS, but that buys you little if you're not IOPS-constrained.

    Examples of things that eat operations as fast as you can throw them at 'em: databases, compilation, most server daemons.

    Examples of things that couldn't care less: streaming large assets that are decompressed in realtime, like audio or video files. Loading a word processing document. Downloading a game patch. Encoding a DVD. Playing RAM-resident video games.

    It should be a shock to roughly no one that buffing an underused part won't make the whole system faster. I couldn't mow my lawn any faster if the push mower had a big block V8, nor would overclocking my laptop make it show movies any faster.

    TL;DR non-IO-bound things don't benefit from more IO.
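
A minimal sketch of that Amdahl's-law point (the 10% and 80% I/O fractions below are made-up illustrations, not figures from the article):

```python
def amdahl(io_fraction, io_speedup):
    """Overall speedup when only the I/O share of a task gets io_speedup times faster."""
    return 1.0 / ((1.0 - io_fraction) + io_fraction / io_speedup)

# A game load spending 10% of its time on storage, moved to a drive 4x faster:
print(f"{amdahl(0.10, 4.0):.2f}x")   # ~1.08x -- barely noticeable
# A database doing 80% I/O gets far more out of the same drive:
print(f"{amdahl(0.80, 4.0):.2f}x")   # ~2.50x
```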

    • I think the problem here isn't that games aren't IO bound but that the testing methodology is flawed. In a PC environment, when you've got multiple browser windows open, IRC, an email client, etc., getting constrained for IOPS is easier than expected.

      When you're just running one task in the background with nothing else competing for IOPS, sure, it's easy to show that there's no performance gained with PCIe vs SATA.

      Do it in a real world environment, and I'm willing to bet PCIe will show its worth. I don't think that

      • In a PC environment, when you've got multiple browser windows open, IRC, an email client, etc., getting constrained for IOPS is easier than expected.

        Generally, I would say that machine would only be IO bound if it had so little memory it was constantly paging.

        Those things once loaded are NOT doing heavy disk IO. Heavy disk IO would be thrashing in all likelihood.

        So you add more RAM. You'd be amazed how many "IO" problems can be fixed with eliminating the IO in the first place by adding RAM.

      • In a PC environment, when you've got multiple browser windows open, IRC, an email client, etc., getting constrained for IOPS is easier than expected.

        An off-the-shelf SATA 840 EVO SSD hits 98,000 read IOPS [storagereview.com], and all those tasks you mention added together wouldn't hit more than 1% of that. They're the very definition of network-bound operations. The average email in my IMAP spool right now is 43KB and would take 11 4KB operations to completely read from or write to storage. Browsers sit there idle 99.9% of the time. IRC? Not that I've ever seen.

        Do it in a real world environment, and I'm willing to bet PCIe will show its worth. I don't think that games will run any faster than the baseline results of no load, but I'm willing to guess it'll do better than the SATA equivalents.

        I haven't bothered to look at their methodology but I tentatively agree with their conclusion: almost no desktop

        • It's worth remembering that that 98k IOPS will rapidly drop to 2-10k for basically every SATA-based SSD on the market. The 750 being advertised here is the first one I've seen sustain 20k.

          • by AcquaCow ( 56720 )

            It's worth remembering that the 98k IOPS figure is measured at a very small block size and will rapidly drop as you increase the block size toward 4K-1MB, since a larger transfer size directly equates to fewer I/Os.

            The real problem is that apps are not written for multi-threaded I/O which is what you really need in order to take advantage of the throughput provided by PCI-e flash.

        • by tlhIngan ( 30335 )

          An off-the-shelf SATA 840 EVO SSD hits 98,000 read IOPS, and all those tasks you mention added together wouldn't hit more than 1% of that. They're the very definition of network-bound operations. The average email in my IMAP spool right now is 43KB and would take 11 4KB operations to completely read from or write to storage. Browsers sit there idle 99.9% of the time. IRC? Not that I've ever seen.

          1% of 98,000 IOPS is 980 IOPS. The best hard drives still only manage around 100 IOPS (10ms seek+latency).

          Applic

          • I'm not sure how any of what I said led you to believe that I don't think SSD is an improvement over HDD. I was specifically responding to the guy talking about needing IOPS for IRC, web browsing, and email. I've personally upgraded every computer in my care to use SSDs for local storage (but I keep huge HDDs in the family NAS, because file services over Wi-Fi aren't going to be disk-bound anyway).
    • Actually, large compiles use surprisingly little actual I/O. Run a large compile... e.g. a parallel buildworld or a large ports bulk build or something like that while observing physical disk I/O statistics. You'll realize very quickly that the compiles are not I/O constrained in the least.

      'Most' server daemons are also not I/O constrained in the least. A web server can be IOPS-constrained when asked to load, e.g., tons of small icons or thumbnails. If managing a lot of video or audio streams a web server

      • Yeah, I exaggerated that for contrast. Most servers are pretty bored, too. But if a build or database server isn't IO constrained, then someone running Photoshop would never notice the difference between PCIe and SATA.
    • It's also probable (though not assured) that a fair chunk of games are carefully designed to avoid IOPS-heavy demands because they are supposed to run from an optical disc in a console, a situation that makes an unremarkable HDD look positively random-access. The PC version will still have more trouble with other processes butting in, but anyone whose game or game engine imposes a load that craters an HDD is not going to have a pleasant time in the console market.
    • by kesuki ( 321456 )

      My single point of reference is that SSDs, while fast, are actually too fast for some video transcoders. (I have converted tape to DVD and Blu-ray with my computer before.) It actually caused a bug that would crash the system when I was using an older, but functional, tool to convert video. And for transcoding video, I/O is a factor, as the processor can write out data faster if the CPU can keep up with the transcode (for instance doing 600 FPS of transcoding with simultaneous audio muxing, I think it's called, t

      • by Jeremi ( 14640 )

        it actually caused a bug that would crash the system

        It would be more accurate to say it revealed a bug. The bug was almost certainly a race condition that had always been present, but it took particular entry conditions (such as an unusually fast I/O device that the transcoder developers never tested against) to provoke the bug into causing a user-detectable failure.

    • Examples of things that couldn't care less: streaming large assets that are decompressed in realtime, like audio or video files. Loading a word processing document. Downloading a game patch. Encoding a DVD. Playing RAM-resident video games.

      Yes, any one of those things. However, if you're downloading a game patch while playing a game and maybe playing some music in the background, at the same time as perhaps downloading a few torrents or copying files, whatever... SSDs kick ass.

      Why? Because singularly those th

      • Darn it, people! I love SSDs. I use them everywhere. I think they're great. But we're discussing the subject of PCIe SSDs versus SATA SSDs, and I still contend that SATA SSDs are so freaking fast (compared to HDDs) that desktop users are highly unlikely to ever bump up against that interface's limits.
    • by LordLimecat ( 1103839 ) on Monday April 20, 2015 @09:39PM (#49516697)

      IOPS are like cores: for any one single task, more doesn't mean faster. But in the real world, on a multitasking OS, the more you have the better things will be, and the fewer times you'll be stuck waiting on your PC to stop thrashing.

    • So what's the limitation? The CPU?

  • Why use a PCI Express slot when you can use SATA anyway?
    • sata has the overhead of, well, the sata and scsi layers.

      the pci-e ssd stuff is going to be based on NVMe, and that cuts through all the old layers and goes more direct. it's also network-able (some vendors).

      this is really NOT for consumers, though. consumers are just fine with ssd on sata ports.

  • While I'm sure there are some people who use the current crop of PCIe SSDs to max out databases, builds, or whatever, the number of people for whom it makes a real difference is pretty small. For the overwhelming majority of people there's just another, different bottleneck they're now hitting, or the speed difference isn't noticeable.

    It currently seems to be hitting a bit of a benchmark mania, where people run disk benchmarks just for the numbers without any actual improvement in usable performance in most are

  • We've reached the limits of the flash technology that drives both the SATA and PCIe versions of the storage device, at least in terms of how fast data can be read from the media (the NAND flash). This is not surprising. Flash is not all that fast, and it quickly becomes the limiting factor on how fast you can read data out of it.

    Just moving from SATA to PCIe wasn't going to change the underlying speed of the media. The slowest device in the chain is what rules the overall speed. We've just move

    • No, the flash isn't the bottleneck. The problem now is the CPU processing the data coming off of the flash. If you have storage-constrained tasks, these drives are 3-4x faster than other SSDs. But loading a game mostly involves decompressing thousands of compressed textures. Your HDD doesn't help with that task.

      • Then this article is worthless, because the benchmark is not measuring anything worth measuring. But Slashdot tends to be that way sometimes.

        However, I'm not so sure you are correct. The decompression of textures, while CPU intensive, is not that much of an issue for a modern multicore CPU (unless the codec being used was poorly implemented or something).

    • We've reached the limits of the flash technology which drives both the SATA and PCIe versions of the storage device

      Individual chips have an upper cap on speed, but that's why every SSD on the market accesses numerous devices in parallel. All you need to do to make an SSD go faster is add more NAND devices in parallel and a slightly faster controller to support them.

      Flash is not all that fast and it quickly becomes the limiting factor on how fast you can read data out of it.

      Maybe if you have no idea what yo

  • I don't understand why people still don't understand the difference between latency and bandwidth, and the fact that a huge amount of the desktop IO load is still a few hundred MB/sec. So the pieces large enough to take advantage of the higher bandwidth are a smaller (and shrinking) portion of the pie.

    Next time you start your favorite game, look at the CPU/disk IO. It's likely the game never gets anywhere close to the max IO performance of your disk, and if it does, it's only for a short period.

    Anyway, i

    • by bored ( 40072 )

      Gosh, stupid html tags ate most of my posting. Anyway here it is.

      I don't understand why people still don't understand the difference between latency and bandwidth, and the fact that a huge amount of the desktop IO load is still less than 4k with a queue depth of basically 1.

      If you look at many of the benchmarks you will notice that the .5-4k IO performance is pretty similar for all of these devices and that is with deep queues. Why is that? Because the queue depth and latency to complete a single command di

      • by m.dillon ( 147925 ) on Monday April 20, 2015 @06:05PM (#49515439) Homepage

        That isn't correct. The queue depth for a normal AHCI controller is 31 (assuming 1 tag is reserved for error handling). It only takes a queue depth of 2 or 3 for maximum linear throughput.

        Also, most operating systems are doing read-ahead for the program. Even if a program is requesting data from a file in small 4K read() chunks, the OS itself is doing read-ahead with multiple tags and likely much larger 16K-64K chunks. That's assuming the data hasn't been cached in ram yet.

        For writing, the OS is buffering the data and issuing the writes asynchronously so writing is not usually a bottleneck unless a vast amount of data is being shoved out.

        -Matt
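
A rough sketch of that queue-depth point, as a Little's-law style calculation (the 32 KB request size and 150 µs completion latency below are illustrative guesses, not measurements):

```python
# throughput ~= queue_depth * request_size / completion_latency
REQUEST_KB = 32          # OS read-ahead chunk size, KB (assumed)
LATENCY_US = 150         # per-request completion latency, microseconds (assumed)

for qd in (1, 2, 3, 8, 31):
    mb_per_s = (qd * REQUEST_KB / 1024) / (LATENCY_US / 1_000_000)
    print(f"QD {qd:2d}: ~{mb_per_s:,.0f} MB/s")
# With these numbers, QD 2-3 already exceeds the ~600 MB/s a SATA link can carry.
```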

  • Not surprising (Score:5, Informative)

    by m.dillon ( 147925 ) on Monday April 20, 2015 @05:16PM (#49515085) Homepage

    I mean, why would anyone think images would load faster? The cpu is doing enough transformative work processing the image for display that the storage system only has to be able to keep ahead of it... which it can do trivially at 600 MBytes/sec if the data is not otherwise cached.

    Did the author think that the OS wouldn't request the data from storage until the program actually asked for it? Of course the OS is doing read-ahead.

    And programs aren't going to load much faster either, dynamic linking overhead puts a cap on it and the program is going to be cached in ram indefinitely after the first load anyway.

    These PCIe SSDs are useful only in a few special, mostly server-oriented cases. That said, it doesn't actually cost any more to have a direct PCIe interface versus a SATA interface, so I think these things are here to stay. Personally, though, I prefer the far more portable SATA SSDs.

    -Matt

  • If what is done with the data as it streams to/from disk is the bottleneck, it doesn't matter if it's sitting in RAM for you before you need it, you'll still be bottlenecked.

    Of course, if you're waiting for disk I/O then there will be a difference.

  • The thing is, PCIe SSDs don't load games or common application data any faster than current incumbents—or even consumer-grade SSDs from five years ago.

    The SATA bus gets saturated for sequential reads and writes, so of course PCIe SSDs can trump SATA SSDs there. But, generally speaking, the controller silicon on PCIe SSDs is no faster than its SATA counterparts, so they offer no improvement for random reads and writes. Still orders of magnitude better than spinning rust, though.
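
For reference, the arithmetic behind that SATA ceiling (a sketch assuming the usual 8b/10b encoding overhead on SATA III):

```python
line_rate_gbps = 6.0                                        # SATA III signalling rate
payload_mb_s = line_rate_gbps * 1e9 * (8 / 10) / 8 / 1e6    # strip 8b/10b, bits -> bytes
print(f"~{payload_mb_s:.0f} MB/s usable")                   # ~600 MB/s, which SATA SSDs already hit
```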

  • by fuzzyfuzzyfungus ( 1223518 ) on Monday April 20, 2015 @06:00PM (#49515403) Journal
    The PCIe devices are faster, but it isn't clear why you'd expect much improvement on application-loading workloads, since they tend to be either substantially similar to SATA devices -- just packaged for the convenience of OEMs who want to go all-M.2 on certain designs and clean up the mini-PCIe / SATA-using-mini-PCIe's-pinout-for-some-horrible-reason / mini-SATA / SATA mess that crops up in laptops and very small form factor systems -- or markedly more expensive, enterprise-oriented devices that focus on IOPS.

    SSDs are at their best, and the difference between good and merely adequate SSDs most noticeable, under brutal random I/O loads, the heavier the better. Those are the loads that make mechanical disks entirely obsolete, where cheap SSD controllers start to drop the ball and more expensive ones really shine. Since application makers generally still have to assume that many of their customers are running HDDs (plus the console ports that may only be able to assume an optical disc and a tiny amount of RAM, and the mobile apps that need to work with cheap and mediocre eMMC flash), they would do well to avoid that sort of load.

    HDD vs. SSD was a pretty dramatic jump because even the best HDDs absolutely crater if forced to seek (whether by fragmentation or by two or more programs both trying to access the same disk); but there aren't a whole lot of desktop workloads where 'excellent at obnoxiously seeky workloads' vs. 'damned heroic at obnoxiously seeky workloads' makes a terribly noticeable difference. Plus, a lot of desktop workloads still involve fairly small amounts of data, so a decent chunk of RAM is both helpful and economically viable. Part of the appeal of crazy-fast SSDs is that they cost rather less per GB than RAM does, while not being too much worse, which lets you attack problems large enough that the RAM you really want is either heroically expensive or just not for sale. On the desktop, a fair few programs in common use are still 32-bit, and much less demanding.
  • I have an Evo 840 for my OS, and I put my games on a RAID1 array built from two 1TB Western Digital Black drives with 64MB of cache. The Windows pagefile and temp directory are on a second RAID1 array with older drives that have 32MB of cache.

    I play a lot of Battlefield 4 and I am frequently one of the first players to join the map, even when I am playing on a server with others who have SSD drives.

    When I am moving files around my system, I often get ~120MB/s read speed out of the RAID1 array.

    While this is o

  • by Solandri ( 704621 ) on Monday April 20, 2015 @07:14PM (#49515937)

    Slashdot has covered a bunch of new PCI Express SSDs over the past month, and for good reason. The latest crop offers much higher sequential and random I/O rates than predecessors based on old-school Serial ATA interfaces.

    That's just it. Their speeds are not "much higher." They're only slightly faster. The speed increase is mostly an illusion created by measuring these things in MB/s. Our perception of disk speed is not MB/s, which is what you'd want to use if you only had x seconds of computing time and wanted to know how many MB of data you could read.

    Our perception of disk speed is wait time, or sec/MB. If I have y MB of data I need read, how many seconds will it take? This is the inverse of MB/s. Consequently, the bigger MB/s figures actually represent progressively smaller reductions in wait times. I posted the explanation [slashdot.org] a few months ago, the same one I post to multiple tech sites. And oddly enough Slashdot was the only site where it was ridiculed.

    If you measure these disks in terms of wait time to read 1 GB, and define the change in wait time from a 100 MB/s HDD to a 2 GB/s NVMe SSD as 100%, then:

    A 100 MB/s HDD has a 10 sec wait time.
    A 250 MB/s SATA2 SSD gives you 63% of the possible reduction in wait time (a 6 sec reduction, to 4 sec).
    A 500 MB/s SATA3 SSD gives you 84% of the possible reduction in wait time (an 8 sec reduction, to 2 sec).
    A 1 GB/s PCIe SSD gives you 95% of the possible reduction in wait time (a 9 sec reduction, to 1 sec).
    The 2 GB/s NVMe SSD gives you 100% of the possible reduction in wait time (a 9.5 sec reduction, to 0.5 sec).

    Or put another way:

    The first 150 MB/s speedup results in a 6 sec reduction in wait time.
    The next 250 MB/s speedup results in an extra 2 sec reduction in wait time.
    The next 500 MB/s speedup results in an extra 1 sec reduction in wait time.
    The next 1000 MB/s speedup results in an extra 0.5 sec reduction in wait time.

    Each doubling of MB/s results in half the reduction in wait time of the previous step. Manufacturers love waving around huge MB/s figures, but the bigger those numbers get the less difference it makes in terms of wait times.

    (The same problem crops up with car gas mileage. MPG is the inverse of fuel consumption. So those high MPG vehicles like the Prius actually make very little difference despite the impressively large MPG figures. Most of the rest of the world measures fuel economy in liters/100 km for this reason. If we weren't so misguidedly obsessed with achieving high MPG, we'd be correctly attempting to reduce fuel consumption by making changes where it matters the most - by first improving the efficiency of low-MPG vehicles like trucks and SUVs even though this results in tiny improvements in MPG.)
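
The wait-time arithmetic above checks out; here's a short Python sketch that just re-runs those numbers for 1 GB of data:

```python
speeds_mb_s = {
    "100 MB/s HDD": 100,
    "250 MB/s SATA2 SSD": 250,
    "500 MB/s SATA3 SSD": 500,
    "1 GB/s PCIe SSD": 1000,
    "2 GB/s NVMe SSD": 2000,
}

hdd_wait = 1000 / speeds_mb_s["100 MB/s HDD"]                      # 10 s baseline
max_reduction = hdd_wait - 1000 / speeds_mb_s["2 GB/s NVMe SSD"]   # 9.5 s total possible

for name, mb_s in speeds_mb_s.items():
    wait = 1000 / mb_s                                             # seconds to read 1 GB
    cut = hdd_wait - wait
    print(f"{name}: wait {wait:.1f} s, {100 * cut / max_reduction:.0f}% of the possible reduction")
```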

    • It's not really the same as MPG, because the distances we travel are fixed, while the amount of data we consume has been growing. Think 480p -> 1080p -> 4K. We need faster transfers to maintain the same perception of performance year over year, given the increased data consumption.

  • Can anyone explain to me why a seek time of 50 microseconds, 200,000+ IOPS read speed, and 1.5GB/s + throughput can't load game files faster and why that's specific to PCI-E SSDs only? That sounds completely ridiculous.
    • by AcquaCow ( 56720 )

      You can't read game files faster because the process that reads the game file is single-threaded rather than multi-threaded... so it can only read as fast as a single thread can read.

      A fully saturated CPU has enough lanes to do about 26-28GB/sec, but a single I/O thread might only be able to do 100-200MB/sec.

      There was never any reason in the OS before today to make that any faster because the spinning disks that fed data to the CPUs couldn't do more than 100MB/sec.

      Now that we have all this great flash, code needs to
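
A hedged sketch of the kind of multi-threaded I/O being described: split one large file into chunks and read them on several threads via os.pread (Unix-only; the file name, chunk size, and worker count are made-up example values):

```python
import os
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4 * 1024 * 1024  # 4 MiB per request (assumed)

def parallel_read(path, workers=8):
    """Read a whole file by issuing positional reads from several threads."""
    size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    try:
        offsets = range(0, size, CHUNK)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            # os.pread is thread-safe because it doesn't move the shared file offset
            parts = pool.map(lambda off: os.pread(fd, min(CHUNK, size - off), off), offsets)
            return b"".join(parts)
    finally:
        os.close(fd)

# data = parallel_read("assets.pak")  # hypothetical pack file
```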

  • In SSD tests I'd like them to try RAGE.
    This game may have been criticized for various good reasons; however, the engine is a bit unusual in the sense that the textures are huge and continuously streamed from the disk. Disk performance made a big difference in gameplay, not just in loading times.

  • NOTE: I'm speaking for myself here, and not for my company, but I have been working full time in the games industry for 23 years.

    Most games use pack-files (sometimes called packages), which are large binary blobs on disk that are loaded contiguously in a seek-free manner. Additionally, these blobs may have ZIP or other compression applied to them (often in an incremental, chunked way). The CPUs can only process the serialization of assets (loading) at a certain speed due to things like allocation of memo
    • by adisakp ( 705706 )
      Game engines are constantly improving on this too... with file read-ahead and multithreaded decompression of chunks, as well as other optimizations. Over time, this process has been gradually getting faster, and at some point SSDs will come out ahead. It's just that the current bottleneck is quite often CPU and memory bandwidth, not HD linear read speed.
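
An illustrative sketch of that chunked, multithreaded decompression idea (not any particular engine's format; it assumes each chunk of a hypothetical pack file is an independently zlib-compressed blob):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def load_pack_chunks(compressed_chunks, workers=4):
    """compressed_chunks: iterable of bytes, each an independent zlib stream.
    zlib releases the GIL while decompressing, so threads genuinely overlap the work."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(zlib.decompress, compressed_chunks))

# Made-up example: four 1 MiB blobs compressed separately, then reloaded in parallel.
chunks = [zlib.compress(bytes(1024 * 1024)) for _ in range(4)]
assert all(len(c) == 1024 * 1024 for c in load_pack_chunks(chunks))
```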
