New PCIe SSDs Load Games, Apps As Fast As Old SATA Drives
crookedvulture writes Slashdot has covered a bunch of new PCI Express SSDs over the past month, and for good reason. The latest crop offers much higher sequential and random I/O rates than predecessors based on old-school Serial ATA interfaces. They're also compatible with new protocols, like NVM Express, which reduce overhead and improve scaling under demanding loads. As one might expect, these new PCIe drives destroy the competition in targeted benchmarks, hitting top speeds several times faster than even the best SATA SSDs can muster. The thing is, PCIe SSDs don't load games or common application data any faster than current incumbents—or even consumer-grade SSDs from five years ago. That's very different from the initial transition from mechanical to solid-state storage, where load times improved noticeably for just about everything. Servers and workstations can no doubt take advantage of the extra oomph that PCIe SSDs provide, but desktop users may struggle to find scenarios where PCIe SSDs offer palpable performance improvements over even budget-oriented SATA drives.
The end of a dimension of competition (Score:2)
So for a segment of the market, data throughput is no longer a competitive advantage.
Big, established players sometimes have trouble adapting to that kind of competitive shift; they are used to optimizing on one dimension and their engineers are amazing at it, but the market goes in a different direction.
That sometimes gives newcomers with smaller capital bases, and technologies that look silly by high-end standards, an opening to disrupt the market.
Re: (Score:2)
Good, now they can focus on getting me more space for less cash.
^^Absolutely this. SSDs have proven themselves to be reliable enough and plenty fast, but they are so anemic in size in relation to price that they are realistically only interesting to use as a system disk.
SATA Slots. (Score:4, Insightful)
A PCIe SSD opens up the sole SATA slot for the backup disk in the small form factor PCs that are currently in vogue.
Re: (Score:3)
Since when is a disk mounted permanently in the computer case considered even remotely a backup option?
When it's a second disk with a copy of files from the first disk, or a raid-0 mirror disk.
Good for recovery from hardware failures. Not so good against malware. Backup against malware needs to be on a remote machine that isn't mounted into the file space of the backed-up machine, so the malware can't infect it. That's why I have both: local mirroring, and a backup system that scp's the files periodically over the network.
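For anyone who wants to copy that second layer, here is a minimal pull-style sketch in Python (hostnames and paths are placeholders, not details from the parent's setup): run it from the backup host on a timer so the source machine never mounts or even sees the backup storage.

    #!/usr/bin/env python3
    """Pull-style backup sketch: run from the backup host on a timer (cron,
    systemd timer) so the source machine never touches the backup storage."""
    import datetime
    import subprocess

    SOURCE = "user@workstation:/home/user/documents"   # placeholder source
    DEST_ROOT = "/srv/backups"                          # placeholder archive dir

    def pull_backup() -> None:
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        dest = f"{DEST_ROOT}/documents-{stamp}"
        # -r: recurse into directories, -p: preserve modification times and modes
        subprocess.run(["scp", "-r", "-p", SOURCE, dest], check=True)

    if __name__ == "__main__":
        pull_backup()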
Re:SATA Slots. (Score:4, Informative)
And that is sooo wrong. RAID 0 is not a mirror of any kind.
RAID 1 - data is mirrored across multiple drives
RAID 0 - data is striped across multiple drives.
Re: (Score:2)
Yes. I meant disk mirroring and I could not be bothered to check the RAID numbers, which I got wrong, since it's been a heck of a long time since I studied them in college, and still a pretty long time since I implemented RAID-3 for a mainframe company.
Re:SATA Slots. (Score:4, Insightful)
Yes. I meant disk mirroring...
So, there's still another mistake then.
Indeed, disk mirroring is not a replacement for backup. Disk mirroring only protects against (some...) hardware failures, but not against human error (such as accidentally removing the wrong file). Backups protect against human errors too (... and natural disasters, if kept offsite, and plenty of other error conditions which mirroring doesn't protect against).
Re: (Score:2)
Er... that's exactly what I said. Read the post before contradicting something it didn't say.
Re: (Score:2)
It's not. RAID is high availability, not backup.
Re: (Score:2)
That's not a backup, it's a copy.
A backup is one step further -- it's a copy that's been physically archived in some way.
Re: (Score:2)
Yes. That's why I use a periodic file backup rather than mirroring.
So? (Score:5, Interesting)
Most folks who need the throughput of a PCI-E SSD won't use it for just gaming. These same users are likely power users. Everything from running test VMs locally to Video / Audio editing would see a huge improvement from this tech.
Loading apps? Games? That's nice and all, but those are far from the only use cases for fast storage media.
Personally, the new PCI-E SSDs have gotten a good amount of use from me as ZFS cache drives, where they've been wonderful for saturating 10gbps Ethernet.
Re: (Score:2)
You are saturating 10Gbps network legs? I would love to see any part of that setup.
It's not that I don't believe you; it's that I would love to be able to do it.
Re: (Score:2)
But not with consumer RAID controllers. There you'll saturate the RAID controller instead of the 10 GbE.
Re: (Score:2)
and flash it to passthrough IT mode. These get +2GB/s bulk throughput each that way.
It all depends on the workload... (Score:2)
Different SSD, different target...
What would you expect? Loading games and applications is mostly about reading a large block of data as fast as possible and your CPU processing it. The CPU is the bigger limit, not the raw transfer speed. Random IO (where these are much faster) is not that big of an issue for most games.
Re: (Score:2)
Random IO (where these are much faster) is not that big of an issue for most games.
Then you would expect it not to come in last place on a Windows boot, but it did.
Re: (Score:3)
Booting from PCIe is not well supported at this point and that may be interfering with the boot times. As for the game loading benchmark results, these drives are usually used for high-speed working file space in servers/workstations (e.g. latency-critical databases, video editing, scientific computing). If you aren't trying to solve an I/O bottleneck problem for a specific application, PCIe SSDs probably aren't what you're looking for. And even if you are, you have to know exactly what type of I/O is critical to your workload.
Re: (Score:2)
You'll note that to produce this crappy summary they skipped over the IOmeter pages, which show the Intel 750 bursting @ 180k IOPS and sustaining 20k, while 90% of consumer SSDs can't sustain more than 8k and the X25-M they're touting struggles to break 2k.
Load up a slew of VMs in a virtualization lab on that X25-M and compare it to the 750-- THEN tell me that it's no faster.
Re: (Score:2)
Yeah, I meant specifically PCIe mass storage support in consumer grade BIOS and OSs. If you know what you're doing (and have a MB that doesn't lock out most of chip set features), it isn't terribly difficult to set up. It's just that it's nowhere near as brain dead simple as a booting from a standard SATA drive.
Re: (Score:2)
Booting from RAID is better supported, and the support is baked into nearly every BIOS out there... booting PCI-e via NVMe or UEFI is brand new; very few things support it, and all the code is new.
Booting through a RAID card that has its own BIOS is nothing like booting off of a native PCI-e device.
Re: (Score:2)
Booting from PCI-e uses either a UEFI driver or NVMe today, two technologies that are kinda in their infancy.
The code is not yet fully optimized, etc., and you may see reduced speeds, at least until you can get into an OS layer and load a more full-featured driver.
Re: (Score:2)
And most game files are packed compressed binary files, PAK. There is a lot of CPU work to be done once it is in memory, they have traded CPU cycles for disk space.
I recall at least one game (A Total War title?) that offered an option to unpack the PAK files so that access was faster, but it took up a lot of disk space.
Re: (Score:2)
A PAK file can be compressed. It should be compressed.
What would be the advantage if it was not?
http://www.file-extensions.org... [file-extensions.org]
Blame the game developers (Score:5, Insightful)
So many games make you sit through crappy videos, copyright screens and other garbage for thirty seconds while they start up, or at least make you hit a key or press a mouse button to skip them, plus that damn 'Press Any Key To Start' screen that they couldn't even take five minutes to remove when porting from a console. Once you've eliminated the worst HDD delays, faster load times are pointless.
Re: (Score:2)
I have absolutely no idea what you're talking about! https://www.youtube.com/watch?... [youtube.com]
ISTR hearing something about that... (Score:5, Insightful)
A guy named Amdahl had something to say on the subject [wikipedia.org]. SSDs excel at IOPS, but that buys you little if you're not IOPS-constrained.
Examples of things that eat operations as fast as you can throw them at 'em: databases, compilation, most server daemons.
Examples of things that couldn't care less: streaming large assets that are decompressed in realtime, like audio or video files. Loading a word processing document. Downloading a game patch. Encoding a DVD. Playing RAM-resident video games.
It should be a shock to roughly no one that buffing an underused part won't make the whole system faster. I couldn't mow my lawn any faster if the push mower had a big block V8, nor would overclocking my laptop make it show movies any faster.
TL;DR non-IO-bound things don't benefit from more IO.
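A couple of lines make the Amdahl arithmetic concrete; the I/O fractions below are made-up illustrations, not measurements:

    def amdahl_speedup(io_fraction: float, io_speedup: float) -> float:
        """Overall speedup when only the I/O portion of a job gets faster."""
        return 1.0 / ((1.0 - io_fraction) + io_fraction / io_speedup)

    # Assumed numbers: a load that spends 15% of its time on storage, with a
    # PCIe SSD 5x faster than SATA at that part, barely moves overall.
    print(amdahl_speedup(0.15, 5.0))   # ~1.14 -> about 14% faster overall
    print(amdahl_speedup(0.90, 5.0))   # ~3.57 -> a genuinely I/O-bound job benefits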
Re: (Score:2)
I think the problem here isn't that games aren't IO bound but that the testing methodology is flawed. In a PC environment, when you've got multiple browser windows open, IRC, email client, etc., getting constrained for IOPS is easier than expected.
When you're just running one task in the background with nothing else competing for IOPS, sure, it's easy to show that there's no performance gained with PCIe vs SATA.
Do it in a real-world environment, and I'm willing to bet PCIe will show its worth. I don't think that games will run any faster than the baseline results of no load, but I'm willing to guess it'll do better than the SATA equivalents.
Re: (Score:2)
Generally, I would say that machine would only be IO bound if it had so little memory it was constantly paging.
Those things once loaded are NOT doing heavy disk IO. Heavy disk IO would be thrashing in all likelihood.
So you add more RAM. You'd be amazed how many "IO" problems can be fixed with eliminating the IO in the first place by adding RAM.
Re: (Score:3)
In a PC environment, when you've got multiple browser windows open, IRC, email client, etc., getting constrained for IOPS is easier than expected.
An off-the-shelf SATA 840 EVO SSD hits 98,000 read IOPS [storagereview.com], and all those tasks you mention added together wouldn't hit more than 1% of that. They're the very definition of network-bound operations. The average email in my IMAP spool right now is 43KB and would take 11 4KB operations to completely read from or write to storage. Browsers sit there idle 99.9% of the time. IRC? Not that I've ever seen.
Do it in a real-world environment, and I'm willing to bet PCIe will show its worth. I don't think that games will run any faster than the baseline results of no load, but I'm willing to guess it'll do better than the SATA equivalents.
I haven't bothered to look at their methodology but I tentatively agree with their conclusion: almost no desktop workload will notice the difference.
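Rough sketch of the arithmetic behind that 1% claim, using the quoted burst IOPS and the 43KB message above; the 500-message mailbox refresh is an invented workload, just to give a sense of scale:

    DRIVE_IOPS = 98_000        # the quoted burst figure for the 840 EVO
    IO_SIZE = 4 * 1024         # 4K operations

    def ops_for(size_bytes: int) -> int:
        return -(-size_bytes // IO_SIZE)       # ceiling division

    email_ops = ops_for(43 * 1024)             # the 43KB message above -> 11 ops
    mailbox_refresh = 500 * email_ops          # invented workload: 500 messages
    print(email_ops, mailbox_refresh / DRIVE_IOPS)   # 11 ops, ~0.056 s of I/O budget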
Re: (Score:2)
It's worth remembering that that 98k IOPS will rapidly drop to 2-10k for basically every SATA-based SSD on the market. The 750 being advertised here is the first one I've seen sustain 20k.
Re: (Score:2)
It's worth remembering that 98k IOPS will be at a very small block size and will rapidly drop as you increase the block size toward 4K-1MB, since a larger transfer size directly equates to fewer I/Os.
The real problem is that apps are not written for multi-threaded I/O which is what you really need in order to take advantage of the throughput provided by PCI-e flash.
Re: (Score:2)
If you're asking "what is my proof", check out any AnandTech review's "consistency" test on SSDs.
If you're asking what the cause is, I would assume there's a buffer that's getting saturated, or else a cache that is exhausted, or perhaps the SSD controller's CPU gets pegged. Whatever the cause, most SSDs will sustain very high IOPS for a short period of time before falling into a "steady state pattern". For some SSDs it is a wildly swinging pattern; others (higher quality) hold a pretty steady rate around 5-
Re: (Score:2)
Interesting, I hadn't seen the 840/50 Pro reviews. They're somewhat exceptional in that regard, though; I'm not aware of general consumer SSDs being able to hold that level of performance.
In any case I was responding to someone discussing the 840 EVO, which is an entirely different animal than the 840 Pro, and certainly cannot hold 30k IOPS.
Re: (Score:2)
1% of 98,000 IOPS is 980 IOPS. The best hard drives still only manage around 100 IOPS (10ms seek+latency).
Applic
Re: (Score:3)
Actually, large compiles use surprisingly little actual I/O. Run a large compile... e.g. a parallel buildworld or a large ports bulk build or something like that while observing physical disk I/O statistics. You'll realize very quickly that the compiles are not I/O constrained in the least.
'Most' server daemons are also not I/O constrained in the least. A web server can be IOPS-constrained when asked to load, e.g., tons of small icons or thumbnails. If managing a lot of video or audio streams, a web server is more likely to run out of network bandwidth than IOPS.
Re: (Score:2)
My single point of reference is that SSDs, while fast, are actually too fast for some video transcoders. (I have converted tape to DVD and Blu-ray with my computer before.) It actually caused a bug that would crash the system when I was using an older, but functional, tool to convert video. And for transcoding, I/O is a factor, as the processor can write out data faster if the CPU can keep up with the transcode (for instance doing 600 FPS of transcoding with simultaneous audio muxing, I think it's called).
Re: (Score:2)
it actually caused a bug that would crash the system
It would be more accurate to say it revealed a bug. The bug was almost certainly a race condition that had always been present, but it took particular entry conditions (such as an unusually fast I/O device that the transcoder developers never tested against) to provoke the bug into causing a user-detectable failure.
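A toy illustration of that kind of latent race (it has nothing to do with any real transcoder): the setup code quietly assumes the device is slow, so the completion path only blows up, as an AttributeError in the worker thread, when the simulated I/O finishes quickly.

    import threading
    import time

    class Transcoder:
        def __init__(self, io_delay: float):
            # BUG: the read is kicked off before the output buffer exists; a slow
            # "device" hides the mistake, a fast one exposes it.
            threading.Thread(target=self._read_done, args=(io_delay,)).start()
            time.sleep(0.001)          # pretend to do other setup work first
            self.frames = []           # output buffer created "later"

        def _read_done(self, delay: float) -> None:
            time.sleep(delay)                # simulated device latency
            self.frames.append("frame")      # AttributeError if it fires too early

    Transcoder(io_delay=0.1)    # "slow HDD": works, but only by accident
    Transcoder(io_delay=0.0)    # "fast SSD": the worker may lose the race and fail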
SSD's and seek times, multiple operations. (Score:2)
Examples of things that couldn't care less: streaming large assets that are decompressed in realtime, like audio or video files. Loading a word processing document. Downloading a game patch. Encoding a DVD. Playing RAM-resident video games.
Yes, any one of those things. However, if you're downloading a game patch while playing a game and maybe playing some music in the background, at the same time as perhaps downloading a few torrents or copying files, whatever... SSDs kick ass.
Why? Because taken singly those things barely tax a drive, but all at once they generate exactly the kind of mixed random I/O where SSDs shine.
Re:ISTR hearing something about that... (Score:4, Insightful)
IOPS are like cores: for any one single task, more doesn't mean faster. But in the real world, on a multitasking OS, the more you have the better things will be and the fewer times you'll ever be stuck waiting on your PC to stop thrashing.
Re: (Score:2)
So what's the limitation? The CPU?
why? (Score:2)
Re: (Score:3)
SATA has the overhead of the, well, SATA and SCSI layers.
The PCI-e SSD stuff is going to be based on NVMe, which cuts through all the old layers and goes more direct. It's also network-able (some vendors).
This is really NOT for consumers, though. Consumers are just fine with SSDs on SATA ports.
Re: (Score:2)
There's certainly a bandwidth penalty.
It does seem benchmark oriented at current (Score:2)
While I'm sure there are some people who use the current crop of PCIe SSDs to max out databases, builds or whatever, the number of people for whom it makes a real difference is pretty small. For the overwhelming number of people there's just another, different bottleneck they're now hitting or the speed difference isn't noticeable.
It currently seems to be hitting a bit of a benchmark mania, where people run disk benchmarks just for the numbers without any actual improvement in usable performance in most areas.
This is the long way to say... (Score:2)
We've reached the limits of the flash technology which drives both the SATA and PCIe versions of the storage device, at least in terms of how fast the data can be received from the media (the nand flash). This is not surprising. Flash is not all that fast and it quickly becomes the limiting factor on how fast you can read data out of it.
Just moving from SATA to PCIe wasn't going to change the underlying speed of the media. The slowest device in the chain is what rules the overall speed. We've just moved the bottleneck from the interface to the flash itself.
Re: (Score:2)
No, the flash isn't the bottleneck. The problem is now the CPU processing the data coming off of the flash. If you have storage-constrained tasks these drives are 3-4x faster than other SSDs. But loading a game mostly involves decompressing thousands of compressed textures. Your HDD doesn't help with that task.
Re: (Score:2)
Then this article is worthless, because the benchmark is not measuring anything worth measuring. But Slashdot tends to be that way sometimes.
However, I'm not so sure you are correct. The decompression of textures, while CPU intensive, is not that much of an issue for a modern multicore CPU. (Unless the codec being used was poorly implemented or something).
Re: (Score:3)
Individual chips have an upper cap on speed, but that's why every SSD on the market accesses numerous devices in parallel. All you need to do to make an SSD go faster is add more NAND devices in parallel and a slightly faster controller to support them.
Maybe if you have no idea what you're talking about.
Latency vs bandwidth (Score:2)
I don't understand why people still don't understand the difference between latency and bandwidth, and the fact that a huge amount of the desktop IO load is still a few hundred MB/sec. So the pieces large enough to take advantage of the higher bandwidth are a smaller (and shrinking) portion of the pie.
Next time you start your favorite game, look at the CPU/disk IO. It's likely the game never gets anywhere close to the max IO performance of your disk, and if it does, it's only for a short period.
Anyway, i
Re: (Score:3)
Gosh, stupid html tags ate most of my posting. Anyway here it is.
I don't understand why people still don't understand the difference between latency and bandwidth, and the fact that a huge amount of the desktop IO load is still less than 4k with a queue depth of basically 1.
If you look at many of the benchmarks you will notice that the .5-4k IO performance is pretty similar for all of these devices, and that is with deep queues. Why is that? Because the queue depth and the latency to complete a single command dictate the performance you see.
Re:Latency vs bandwidth (Score:5, Interesting)
That isn't correct. The queue depth for a normal AHCI controller is 31 (assuming 1 tag is reserved for error handling). It only takes a queue depth of 2 or 3 for maximum linear throughput.
Also, most operating systems are doing read-ahead for the program. Even if a program is requesting data from a file in small 4K read() chunks, the OS itself is doing read-ahead with multiple tags and likely much larger 16K-64K chunks. That's assuming the data hasn't been cached in ram yet.
For writing, the OS is buffering the data and issuing the writes asynchronously so writing is not usually a bottleneck unless a vast amount of data is being shoved out.
-Matt
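To make the read-ahead point concrete, here is a small sketch of a program that issues 4K read() calls while telling the kernel it will scan sequentially; on Linux the page cache detects the pattern and reads ahead even without the posix_fadvise hint, so treat the call as optional documentation of intent:

    import os

    def scan_file(path: str, chunk: int = 4096) -> int:
        """Read a file in small 4K chunks; the kernel still services most of the
        read() calls from its own read-ahead. The fadvise hint (Linux) just makes
        the sequential intent explicit."""
        total = 0
        fd = os.open(path, os.O_RDONLY)
        try:
            os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
            while True:
                buf = os.read(fd, chunk)
                if not buf:
                    break
                total += len(buf)
        finally:
            os.close(fd)
        return total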
Re: (Score:2)
It only takes a queue depth of 2 or 3 for maximum linear throughput.
I haven't any idea why you are so upvoted, because you're flat out wrong. Five minutes with a benchmark like ATTO lets you see the performance with small sequential IO [legitreviews.com] and queue depth. Another benchmark showing ATTO sequential IOs for small transfers [hothardware.com]
And you're sort of right that the OS will do a certain amount of prefetch, etc., but that doesn't help when things are fragmented or the application is requesting things in a pattern that isn't sequential.
Not surprising (Score:5, Informative)
I mean, why would anyone think images would load faster? The cpu is doing enough transformative work processing the image for display that the storage system only has to be able to keep ahead of it... which it can do trivially at 600 MBytes/sec if the data is not otherwise cached.
Did the author think that the OS wouldn't request the data from storage until the program actually asked for it? Of course the OS is doing read-ahead.
And programs aren't going to load much faster either, dynamic linking overhead puts a cap on it and the program is going to be cached in ram indefinitely after the first load anyway.
These PCIe SSDs are useful only in a few special, mostly server-oriented cases. That said, it doesn't actually cost any more to have a direct PCIe interface versus a SATA interface, so I think these things are here to stay. Personally, though, I prefer the far more portable SATA SSDs.
-Matt
wait chains... (Score:2)
If what is done with the data as it streams to/from disk is the bottleneck, it doesn't matter if it's sitting in RAM for you before you need it, you'll still be bottlenecked.
Of course, if you're waiting for disk I/O then there will be a difference.
No kidding (Score:2)
The thing is, PCIe SSDs don't load games or common application data any faster than current incumbents—or even consumer-grade SSDs from five years ago.
The SATA bus gets saturated for sequential reads and writes so of course PCIe SSDs can trump SATA SSDs here. But, generally speaking, the controller silicon on PCIe SSDs is no faster than their SATA counterparts so they offer no improvement for random reads and writes. Still orders of magnitude better than spinning rust, though.
Is this a big surprise? (Score:5, Informative)
SSDs are at their best, and the difference between good and merely adequate SSDs most noticeable, under brutal random I/O loads, the heavier the better. Those are what make mechanical disks entirely obsolete, cheap SSD controllers start to drop the ball, and more expensive ones really shine. Since application makers generally still have to assume that many of their customers are running HDDs(plus the console ports that may only be able to assume an optical disk and a tiny amount of RAM, and the mobile apps that need to work with cheap and mediocre eMMC flash), they would do well to avoid that sort of load.
HDD vs. SSD was a pretty dramatic jump because even the best HDDs absolutely crater if forced to seek (whether by fragmentation or by two or more programs both trying to access the same disk); but there aren't a whole lot of desktop workloads where 'excellent at obnoxiously seeky workloads' vs. 'damned heroic at obnoxiously seeky workloads' makes a terribly noticeable difference. Plus, a lot of desktop workloads still involve fairly small amounts of data, so a decent chunk of RAM is both helpful and economically viable. Part of the appeal of crazy-fast SSDs is that they cost rather less per GB than RAM does, while not being too much worse, which allows you to attack problems large enough that the RAM you really want is either heroically expensive or just not for sale. On the desktop, a fair few programs in common use are still 32 bit, and much less demanding.
Anecdotal Real World Testing (Score:2)
I have an Evo 840 for my OS, and I put my games on a RAID1 array built from two 1TB Western Digital Black drives with 64MB of cache. The Windows pagefile and temp directory are on a second RAID1 array with older drives that have 32MB of cache.
I play a lot of Battlefield 4 and I am frequently one of the first players to join the map, even when I am playing on a server with others who have SSD drives.
When I am moving files around my system, I often get ~120MB/s read speed out of the RAID1 array.
While this is o
That's because they're not much faster (Score:5, Insightful)
That's just it. Their speeds are not "much higher." They're only slightly faster. The speed increase is mostly an illusion created by measuring these things in MB/s. Our perception of disk speed is not MB/s, which is what you'd want to use if you only had x seconds of computing time and wanted to know how many MB of data you could read.
Our perception of disk speed is wait time, or sec/MB. If I have y MB of data I need read, how many seconds will it take? This is the inverse of MB/s. Consequently, the bigger MB/s figures actually represent progressively smaller reductions in wait times. I posted the explanation [slashdot.org] a few months ago, the same one I post to multiple tech sites. And oddly enough Slashdot was the only site where it was ridiculed.
If you measure these disks in terms of wait time to read 1 GB, and define the change in wait time from a 100 MB/s HDD to a 2 GB/s NVMe SSD as 100%, then:
A 100 MB/s HDD has a 10 sec wait time.
A 250 MB/s SATA2 SSD gives you 63% of the reduction in wait time (6 sec saved).
A 500 MB/s SATA3 SSD gives you 84% of the reduction in wait time (8 sec saved).
A 1 GB/s PCIe SSD gives you 95% of the reduction in wait time (9 sec saved).
The 2 GB/s NVMe SSD gives you 100% of the reduction in wait time (9.5 sec saved).
Or put another way:
The first 150 MB/s speedup results in a 6 sec reduction in wait time.
The next 250 MB/s speedup results in an extra 2 sec reduction in wait time.
The next 500 MB/s speedup results in an extra 1 sec reduction in wait time.
The next 1000 MB/s speedup results in an extra 0.5 sec reduction in wait time.
Each doubling of MB/s results in half the reduction in wait time of the previous step. Manufacturers love waving around huge MB/s figures, but the bigger those numbers get the less difference it makes in terms of wait times.
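The same numbers, redone as a few lines of Python so the pattern is easy to check:

    def wait_seconds(mb_per_s: float, payload_mb: float = 1000.0) -> float:
        """Seconds of waiting to read the payload -- the inverse view of MB/s."""
        return payload_mb / mb_per_s

    hdd = wait_seconds(100)                       # 10.0 s for 1 GB
    for mb_s in (250, 500, 1000, 2000):
        saved = hdd - wait_seconds(mb_s)
        print(f"{mb_s:>4} MB/s saves {saved:.1f} s "
              f"({saved / 9.5:.0%} of the possible 9.5 s reduction)")
    # 250 MB/s: 6.0 s (63%), 500: 8.0 s (84%), 1000: 9.0 s (95%), 2000: 9.5 s (100%)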
(The same problem crops up with car gas mileage. MPG is the inverse of fuel consumption. So those high MPG vehicles like the Prius actually make very little difference despite the impressively large MPG figures. Most of the rest of the world measures fuel economy in liters/100 km for this reason. If we weren't so misguidedly obsessed with achieving high MPG, we'd be correctly attempting to reduce fuel consumption by making changes where it matters the most - by first improving the efficiency of low-MPG vehicles like trucks and SUVs even though this results in tiny improvements in MPG.)
Re: (Score:2)
It's not really the same as MPG, because the distances we travel are fixed, while the amount of data we consume has been growing. Think 480p -> 1080p -> 4K. We need faster transfers to maintain the same perception of performance year over year given the increased data consumption.
that can't be true (Score:2)
Re: (Score:2)
You can't read game files faster because the process that reads the game file is single-threaded rather than multi-threaded... so it can only read as fast as a single thread can read.
A fully saturated CPU has enough lanes to do about 26-28GB/sec, but a single I/O thread might only be able to do 100-200MB/sec.
There was never any reason in the OS before today to make that any faster because the spinning disks that fed data to the CPUs couldn't do more than 100MB/sec.
Now that we have all this great flash, code needs to catch up.
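As a sketch of what that catching up could look like, here is a minimal parallel reader that keeps several requests in flight at once; it is Unix-only (os.pread), and whether it actually helps depends on the device, the filesystem, and how much of the file is already cached:

    import os
    from concurrent.futures import ThreadPoolExecutor

    CHUNK = 1 << 20   # 1 MiB per request

    def parallel_read(path: str, workers: int = 8) -> int:
        """Keep several reads in flight at once instead of one at a time."""
        fd = os.open(path, os.O_RDONLY)
        try:
            size = os.path.getsize(path)
            with ThreadPoolExecutor(max_workers=workers) as pool:
                # os.pread takes an explicit offset, so threads never fight over a
                # shared file position, and the GIL is released during each call.
                return sum(pool.map(lambda off: len(os.pread(fd, CHUNK, off)),
                                    range(0, size, CHUNK)))
        finally:
            os.close(fd)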
RAGE (Score:2)
In SSD tests I'd like them to try RAGE.
This game may have been criticized for various good reasons, however, the engine is a bit unusual in a sense that the textures are huge and continuously streamed from the disk. Disk performance made a big difference in gameplay, not just in loading times.
Opinion from a game developer (Score:2)
Most games use pack-files (sometimes called packages) that are large binary blobs on disk that are loaded contiguously in a seek-free manner. Additionally, these blobs may have ZIP or other compression applied to them (often in an incremental, chunked way). The CPUs can only process the serialization of assets (loading) at a certain speed due to things like allocation of memory.
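Not any particular engine's loader, just a sketch of the shape of the problem described above: stream a hypothetical compressed pack in chunks and split the time between reading and decompressing. Once the read column stops dominating, a faster bus cannot shorten the load.

    import time
    import zlib

    def load_pack(path: str, chunk: int = 1 << 20):
        """Stream a zlib-compressed blob, splitting time between reading and
        decompressing; on an SSD the second number is the one that dominates."""
        read_s = cpu_s = 0.0
        decomp = zlib.decompressobj()
        out = bytearray()
        with open(path, "rb") as f:
            while True:
                t0 = time.perf_counter()
                blob = f.read(chunk)
                read_s += time.perf_counter() - t0
                if not blob:
                    break
                t0 = time.perf_counter()
                out += decomp.decompress(blob)   # the CPU-bound part a faster bus can't help
                cpu_s += time.perf_counter() - t0
        return bytes(out), read_s, cpu_s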
Re: (Score:2)
Question I have is, how much of that load time is CPU and how much of it is reading from disk?
Re:SSDs (Score:5, Interesting)
I have an X25-M G2 80GB and a Crucial M500. The Crucial drive has substantially better random IOPS, and the system does feel faster booting off it than the X25-M. But the difference in "feel" is like a 7200rpm platter drive vs a 10,000rpm platter drive... same ballpark, but the 10k is just a bit snappier.
Newer SSDs are definitely faster than earlier ones, but we've kind of hit a wall with needs for even more speed. The slowest (non-broken) SSD you can buy today will be no less beneficial in real-world home-user operation than the fastest SSD you can buy. It's just that there is a little bit of room for improvement over 2008-2009 era SSDs. (Don't take this as a disagreement, just an elaboration.)
Re: (Score:2)
Fundamentally, that doesn't make sense because if it were true, we'd just use SSDs instead of RAM.
Re: (Score:2)
Infinitely fast SSDs would be almost useless as RAM. You would need to make your cache lines 8kiB or more (they are rarely larger than 128 bytes today), and a spinlock would burn out a flash cell in less than a second -- probably less than a millisecond.
Re: (Score:2)
I installed a game on a temporary "RAM drive" after I bought 16 GB of it a few years ago. Load times of Deus Ex HR didn't change at all from my SATA SSD. Was really disappointed.
Re: (Score:3)
This is because you get stuck leaving the CPU to handle all the context switching between virtual block storage in DRAM and memory. The CPU has to copy data out of the block device and into memory before it can actually use it, so by making a RAM disk you end up giving the CPU 2-4x the amount of work to do for what should be a DMA read/write, which would normally be offloaded.
Also, your reads from your game are going to be single-threaded, and a single read/write thread is going to be pretty slow.
Re: (Score:2)
You seem to know what you're talking about. Maybe you can explain the article to me. First it says that the new PCIe SSDs achieve "much higher" speeds and "destroy the competition" and then it says that they don't really load anything faster and average consumers will hardly know a difference?
What?
Re: (Score:2)
I'm not sure why this is news. Sticking any device on the PCIe bus is going to allow for a lot more speed than using the SATA bus, and because SSDs are not limited by any mechanical mechanism, many layers of RAID 0 striping can be used to keep increasing performance.
Where I see this being a big help personally is virtualization [1]. Even with an SSD stuffed into an enclosure and run over USB 3, performance is distinctly better than HDDs, because VMs do a lot of random I/O.
[1]: With all the Web based co
Re: (Score:2)
I'm not sure why this is news. Sticking any device on the PCIe bus is going to allow for a lot more speed than using the SATA bus...
Did you read the summary? It's reporting that new PCIe SSDs are not faster than "old" SATA SSDs as measured by real-world app- and game-loading times (not benchmarks, in which of course PCIe outperforms, as they do mention). By "not faster," I mean "equal," which is what the headline means (somewhat odd usage of the phrase "as fast as" when you already expect the first thing to be faster, so maybe that's where the confusion comes from).
Re: (Score:2)
It's just another situation with diminishing returns. Going from spinning rust to an SSD achieves considerable and noticeable gains, whereas going to the next step does not. The transition to SSD probably already got rid of most of the perceptible bottleneck.
Re: (Score:2)
Did you read the summary? It's reporting that new PCIe SSDs are not faster than "old" SATA SSDs as measured by real-world app- and game-loading times
I'm calling BS on the statistics, which have a 1.2TB SSD as substantially slower than a 120GB SSD from years ago, which is itself substantially slower than consumer-grade drives like the Crucial BX series.
This all points to some horrible firmware issue, or testing problems, or bad methodology. There's no other real way to explain that performance; simply increasing the capacity to 1.2TB should have it topping all of the benchmarks, regardless of the protocol used to connect it.
Re: (Score:2)
The PCI-e native SSDs are indeed faster, the problem is, the code reading data off of them (your application/os) isn't written to take advantage of the increased speeds. Single threaded reads cap out at the read speed of a single thread, and that isn't that fast. This is especially true if they are 4K reads vs 1M reads, as you aren't going to saturate anything until you get up into larger read sizes.
To really take advantage of the bandwidth SSDs enable, you need to be running multiple apps doing parallel I/O at the same time.
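A minimal sketch of that block-size effect: the same file read sequentially with different request sizes. The file name is a placeholder, and the numbers only mean anything with a cold page cache:

    import os
    import time

    def seq_read_mb_s(path: str, block: int) -> float:
        """Read the file sequentially in `block`-byte requests and report MB/s."""
        fd = os.open(path, os.O_RDONLY)
        try:
            total, t0 = 0, time.perf_counter()
            while True:
                buf = os.read(fd, block)
                if not buf:
                    break
                total += len(buf)
            return total / (time.perf_counter() - t0) / 1e6
        finally:
            os.close(fd)

    for block in (4 * 1024, 64 * 1024, 1 << 20):             # 4K, 64K, 1M requests
        print(block, seq_read_mb_s("testfile.bin", block))   # placeholder file name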
Re: (Score:3)
Huh? This sounds like nonsense. Operating systems already cache frequently used data in ram.
-Matt