Samsung Unveils First PCIe 3.0 x4-Based M.2 SSD, Delivering Speeds of Over 2GB/s
Deathspawner writes: Samsung's SM951 is an unassuming gumstick SSD - it has no skulls or other bling - but it's what's underneath that counts: PCIe 3.0 x4 support. With that support, Samsung is able to boast speeds of 2,150MB/s read and 1,550MB/s write. But with such speeds comes an all-too-common caveat: you'll probably have to upgrade your computer to take true advantage of it. For comparison, Samsung says a Gen 2 PCIe x4 slot will limit the SM951 to just 1,600MB/s read and 1,350MB/s write (or 130K/100K IOPS). Perhaps now is a bad time to point out that a typical Z97 motherboard has only a PCIe Gen 2 x2 (yes, x2) connection to its M.2 slot, meaning one would need to halve those figures again.
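For a rough sense of where those numbers come from: per lane and per direction, PCIe 2.0 carries about 500MB/s and PCIe 3.0 about 985MB/s after encoding overhead (approximate figures), so the slot math works out roughly like this:
echo "Gen3 x4: $((4 * 985)) MB/s, Gen2 x4: $((4 * 500)) MB/s, Gen2 x2: $((2 * 500)) MB/s"
That prints 3940, 2000 and 1000 MB/s - plenty of headroom on Gen 3 x4, a ceiling right around the drive's Gen 2 x4 figures, and half that again on a Z97 board's x2 link.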
PCIe 3.0 availability (Score:5, Interesting)
It's curious how many relatively recent high-end PCs from prestige brands don't have PCIe 3.0 slots. Alienware are a particular offender here - they were very slow adopters, quite possibly because a lot of their customers don't actually think to check for this when speccing up a machine.
That said, it's questionable how much it really matters in the real world at the moment. Performance tests on the latest video cards (which can take advantage of PCIe 3.0) have found very little performance gap between 3.0 and 2.0 (and even 1.0) with the likes of the Nvidia 980. The gap is most apparent at extremely high (150+) framerates - which is unlikely to constrain the average gamer, who probably just turns up the graphical settings until his PC can't sustain his target framerate (probably somewhere in the 40-60fps range) any more.
Re: (Score:2)
If you are not doing any data processing then simply increasing the RAM and using an SSD has a huge impact. I have a 2.9 GHz i7-920 and I cannot see the difference between it and a more modern system. It was an excellent buy - 5 years on and it is still great. The addition of an SSD was key to keeping it fast.
Sounds like you know what you're doing with respect to storage but using a modern SSD and maxing out the RAM will likely help. Video cards have also improved significantly in the past 5 years and m
Re: (Score:1)
This SSD is not a PCIe-slot form-factor card like you may be used to seeing. It is an M.2 form-factor SSD (see the picture of the drive in the articles). So having an x16 or even an x4 standard PCIe plug-in slot will not help at all. The motherboard has to list that it has an M.2 slot capable of PCIe Gen 3 speeds.
However, when you do have a real x4 PCIe card, you can plug it into x4, x8, and x16 slots. Just be aware that the unused PCIe lanes may become unavailable.
With the new Haswell-E processors, the
Re:PCIe 3.0 availability (Score:5, Insightful)
With the new Haswell-E processors, the CPU has 40 lanes of PCIe 3.0. So on a lot of high-end X99 motherboards you'll see four PCIe Gen 3 x16 slots. However, since the CPU only has 40 lanes, this means not all of those "x16" slots are truly using 16 lanes of PCIe. Normally when four cards are plugged in, you'll get slot 0 running at x16 and the other three slots running at x8.
That's not really correct; high-end motherboards usually have PLX chips, which act as PCIe switches. For example, the motherboard the GP listed runs at x16/x16/x16/x16 (or x16/x8/x8/x8/x8/x8/x8), and only the total upstream to the CPU is limited to 40 lanes.
Re:PCIe 3.0 availability (Score:5, Informative)
This SSD is not a PCIe-slot form-factor card like you may be used to seeing. It is an M.2 form-factor SSD (see the picture of the drive in the articles). So having an x16 or even an x4 standard PCIe plug-in slot will not help at all. The motherboard has to list that it has an M.2 slot capable of PCIe Gen 3 speeds.
Bollocks. An M2 to PCIE adapter is twenty bucks.
Re: (Score:2)
Bollocks. An M2 to PCIE adapter is twenty bucks.
SanDisk A110 M.2 card with PCIE adapter benchmarks:
http://www.tomshardware.co.uk/... [tomshardware.co.uk]
Re: (Score:1)
You can get fairly inexpensive adapter cards that let you use M.2 devices in regular PCI express slots. The interface is electrically compatible with PCI express.
Plextor sells PCI Express SSDs that go into regular slots and are exactly this (an M.2 drive pre-assembled into an adapter card).
Re: (Score:2)
That's been generally true,
Re: (Score:2)
And yes, most motherboards have a "primary" slot which is a real x16 slot, and 1 or 2 or more SLI or Crossfire slots which are x8 or even x4.
Actually, most motherboards today have auto-switching slots, so that they have 16 lanes assigned to 3 slots, and the motherboard and card negotiate so that each card performs as fast as it can regardless of slot, up to the total maximum of 16 lanes across the three slots. This means there is no "primary" slot any more, and helps you lay out cards where clearance is an issue.
What this means is that you can install that x16 video card in any of the three x16 form factor slots, and it will get all 16 lanes.
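One way to sanity-check what a card actually negotiated, at least on Linux, is to read the link fields out of lspci (the bus address below is made up - use the one lspci reports for your card):
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
LnkCap shows what the card and slot are capable of; LnkSta shows the width and speed that were actually negotiated.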
Re: (Score:2)
Honestly I never got the issues people have with RealTek.
Crappy wi-fi drivers, for a start. If I don't disable pretty much every performance and power-management wi-fi option on my laptop, they disconnect every 30 minutes or so.
Re: (Score:2)
A random Realtek NIC is still better than wifi anyway, sometimes hugely so (no inconsistent latency, dropped packets, or susceptibility to weather or time of day).
Even 100BaseT is mostly enough.
Rule for buying a motherboard: actually get the Realtek NIC, because the Atheros one advertised as a "Killer NIC" is worse (the same NIC is sold without the "Killer" branding too). Yes, an Intel one will be better still.
Well it also depends on chipset (Score:2)
Something that held back PCIe 3 support in many high-end systems was the X79 chipset. If you want an E-series processor, Intel's ultra-high-end desktop line, you have to use a different chipset, and they don't rev that every generation. So the X79 came out with the Sandy Bridge-E processors, and then Ivy Bridge-E ran on the same thing.
There is a new chipset now, the X99, that works with the Haswell-E, but that just launched a few months ago.
Also with the high end processors, they are out of cycle with t
Re: (Score:2)
Hence you can have a situation where for things like PCIe and USB the high end stuff is behind.
USB? yes, SATA? yes, PCIe? no.
None of Intel's chipsets has PCIe 3.0 on the chipset, not even X99. The only PCIe 3.0 lanes on Intel systems so far have been those from the processor, and the lanes on the processor have been PCIe 3.0 since Sandy Bridge-E on the high end and Ivy Bridge on the mainstream. So the high end got PCIe 3.0 before the mainstream did. Furthermore, the high-end platforms have a lot more PCIe lanes. One lane of 3.0 is equivalent to 2 lanes of 2.0 or 4 lanes of 1.0, so in terms of total PCIe da
Re: (Score:1)
Re: (Score:2)
Another thing to remember is that Intel has not put PCIe 3.0 into their PCH chips yet. So the only PCIe 3.0 lanes are those direct from the processor which are usually used for the big slots intended to take graphics cards. Especially on mainstream (LGA115x) boards.
Re: (Score:2)
"That said, it's questionable how much it really matters in the real world at the moment."
On I/O devices? Immensely! Your example with the graphics cards shows little difference because the performance of graphics cards isn't limited by PCIe bandwidth. I/O devices that can read and write faster than a PCIe 2.0 slot can move data, on the other hand, will show immediate gains. If an SSD can read at 3GB/sec and the slot only allows 2GB/sec, obviously upgrading to a newer standard is g
Re: (Score:2)
It's not about having PCIe 3.0, it's about having 4 lanes of PCIe 3.0 piped to an M.2 / SATA Express connector.
Otherwise you have to buy a flaky adapter and hope it supports the number of lanes (and PCIe revision) you need.
Re:Starts to get reasonable (Score:4, Informative)
No! I would say: FORTUNATELY I don't think that's going to happen.
2000MB/s is still an order of magnitude or more slower than DDR3/4 (between 20,000 and 60,000MB/s), so I clearly don't want direct mapping.
Let the OS cache the SSD sectors in RAM pages and everything will be fine (and from the user's point of view, nothing will change).
Re:Starts to get reasonable (Score:4, Insightful)
There's nothing unfortunate about it. Access times for an SSD are around 0.1 ms vs. at worst around 15 nanoseconds for DDR3 RAM. You do realize how significant a performance impact that would be, right?
Re: (Score:2)
Memory-mapped I/O is already a thing. Of course, it is a way to read and write from/to a file with less overhead.
Re: (Score:2)
Memory-mapped I/O is already a thing.
Yes, it is ancient. So what?
Of course, it is a way to read and write from/to a file with less overhead.
Sure, different memory-mapped I/O methods can eliminate certain overheads. DMA eliminates CPU overhead. mmap allows you to pull data from internal/external storage into RAM so that you can access it faster, but you still suffer the overhead of accessing the SSD whenever things are paged into and out of memory. Neither one magically changes the fact that accessing an SSD instead of RAM, which is what the person I responded to wanted, is 1000s of times slower than accessing RAM.
Re: (Score:2)
Now we only need to get them memory mapped and good OS support for not loading constant data to volatile memory.
Even with PCIe 3, access times for RAM vs. an SSD still favor RAM extremely heavily. Enjoy your system slowing to a crawl by doing that.
Re: (Score:2)
Memory mapping the disk will remove the read and throw away part.
Wrong. Assuming by "memory mapping" you mean "mmap", it pages things in and out of memory all the time.
Your argument is that since memory is faster than disk IO it is better to do (1 disk read + 1 memory write + 1 memory read + 1 conversion) rather than (1 disk read + 1 conversion)
It is. It's orders of magnitude faster to keep things in RAM.
Unfortunately... (Score:2)
Re: (Score:2)
If you don't want to upgrade your box (Score:3, Interesting)
Then max out the RAM and create a RAM drive.
On my 2010 iMac, I have a 16 GB RAM drive that gets between 3 and 4 GB/s and still have 16 GB of RAM available for my apps.
Check this terminal command out before entering it just to be safe.
diskutil erasevolume HFS+ 'RAM Disk' `hdiutil attach -nomount ram://33554432`
Under Mac OS X 10.6.8 and 10.9, the above creates a 17 GB RAM disk.
diskutil erasevolume HFS+ 'RAM Disk' `hdiutil attach -nomount ram://8388608`
This creates a 4.27 GB RAM disk. Enjoy the speed.
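If you want a different size, the number after ram:// is a count of 512-byte sectors (which is consistent with the sizes above), so you can work it out with a bit of shell arithmetic; 16 GiB here is just an example:
echo $(( 16 * 1024 * 1024 * 1024 / 512 ))
That prints 33554432, the sector count used above for the ~17 GB (16 GiB) disk.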
Re: (Score:1)
Re: (Score:3, Insightful)
No. You should RARELY do this. If you go back and forth from other tasks where you can expect the cache to be re-used and need the absolute best performance when you come back to those files, then something like this is a good idea. You're essentially committing content to RAM for cases where you know better than your operating system's optimizations.
Re: (Score:2)
Re:If you don't want to upgrade your box (Score:4, Interesting)
Well, shame on me. I've been doing it for 3 years on a daily basis.
I have my RAM drive rsynched to an SSD partition that is the same-ish size.
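A minimal sketch of that sync, with made-up paths (--delete makes the SSD copy an exact mirror of the RAM drive):
rsync -a --delete "/Volumes/RAM Disk/" /Volumes/ramdisk_backup/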
And here's one area where you're incorrect. Safari loads web pages. Each page loads javascript. Many of these leak over time or simply never purge their contents. I often end up with 8 GB used in Safari. Safari alone is a citizen that doesn't play by these rules because each page that loads is a prisoner of the javascript that loads and often doesn't handle memory freeing properly.
When I use my RAM as a drive, I get near INSTANT builds on OS X.
This matters to me more than your claims of "all modern operating systems taking full advantage of the RAM". If the operating system takes full advantage of the RAM, it may not be to my best benefit.
For example, Apple apps now by default do not quit when you close the last document. They merely stay in memory, hide the UI, and then need to be relaunched to enable the UI again. Why does this matter? For TextEdit, if I want to open a document from the Open menu after I close the last document and click elsewhere, this forces me to reopen the app, because the OS fake-closes the app (really only hiding the UI) while the rest of the app stays memory-resident.
So, I have to relaunch the app. This takes more time and ONLY just re-enables the UI. How much memory does this save on my 32 GB machine? 1 MB. Now, that's certainly not taking full advantage of the RAM. It's a case of the OS designers thinking that "he wanted to quit the app, so we'll do it for him". But I didn't want to quit the app. The computer is not taking full advantage of the RAM in this case. That's not what I wanted it to do.
Maybe I have apps in the background that are doing stuff, but I want them to pause completely if another app is running in the foreground. Maybe I want ALL Safari pages to suspend their javascript when in the background, but the app can still process downloads as if it's running at normal priority.
See, there are many cases where the computer's OS will not take proper advantage of the RAM and the processing power, since it cannot mirror the user's intentions. Even in cases where it tries to, it often gets them wrong. And in some cases where it does (Safari javascript), the computer ends up eating processing power and RAM for tasks that the user doesn't want it to be placing priority on. And in some of these cases, it can't allocate RAM and processing power properly, because doing so relies on other programmers writing their javascript competently and acting as good citizens.
I can cordon off a small chunk of my computer's RAM (since I have way more than enough) and direct it to do pretty damn much just what I want it to do.
That's why I bought it. I don't want the OS to prioritize things the way it wants to. I want to tell (parts of) the OS to prioritize things the way I want it to.
Cheers.
tl;dr - if you know what you're doing... (Score:2)
This matters to me more than your claims of "all modern operating systems taking full advantage of the RAM". If the operating system takes full advantage of the RAM, it may not be to my best benefit.
tl;dr: If you don't know what you're doing (or like me are too lazy to care) with respect to memory management (most Mac users) then the OS is likely a better steward than you. For everyone else, there are RAM drives :)
Why someone would criticize you for using a RAM drive... doesn't make sense to me.
Re: (Score:2)
BS.
Ramdrives have several advantages.
1: They are explicitly volatile. Application developers don't know your use case and therefore often err on the side of preserving your data over power failures, and so use calls like fsync. Even when the app doesn't use fsync, the OS will usually try to push the data out to disk reasonably quickly. If you know you don't care about preserving the data across power cycles, and you know you have sufficient RAM, then a ramdrive can be a much better option (see the sketch after this list).
2: operating systems
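A rough way to see point 1, assuming GNU dd and made-up paths: the second command doesn't return until the data is actually on the device, while on a RAM drive that guarantee costs essentially nothing because there is no device behind it.
dd if=/dev/zero of=/mnt/ssd/test.bin bs=1M count=512
dd if=/dev/zero of=/mnt/ssd/test.bin bs=1M count=512 conv=fsync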
Re: (Score:1)
Significant numbers of people actually. People who aren't really bought into the agenda of "everything must be open", but want a good-quality UNIX OS with lots of commercial apps. They're basically the ideal developer's machine. They have really strong native development tools, good support for installing all the random scripting languages you want, and if you happen to do web dev, they support running all major OSes' web browsers (not something you can say for any other machine)... Basically, anyone serious
Re: (Score:1)
And it's interesting that the conversation moved over to Macs, especially since they cannot use this drive at all. No version of OS X supports NVME.
Re: (Score:2)
No they don't, they ship with AHCI drives. There is no support for NVME in OS X. You do understand that NVME is not the form factor, right? Just because they have PCIe drives (basically a Samsung XP941 with a proprietary connector, which is an AHCI drive) does not mean they use NVME.
https://discussions.apple.com/... [apple.com]
and just so you can learn the difference
http://www.anandtech.com/show/... [anandtech.com]
http://en.wikipedia.org/wiki/N... [wikipedia.org]
Re: (Score:2, Insightful)
Have you ever gone to developer conferences and noticed the number of Macs? Methinks you might want to check your assumptions. For example, my Mac is really easy to service. I buy a warranty and if something goes wrong the Apple store fixes it for free, quite often same day. Can't get much easier to service than that.
Re: (Score:3)
Er... welcome to business level warranties.
Which run quite a bit less on business kit when you're buying £300 PCs instead of £2k Macs.
Sorry, but the Apple "service" isn't - it's often a "replace with new". Any business can do that and wait while the replacement comes back from the manufacturer.
Macs at developer courses? Sure. But most of those developers will be doing web stuff mainly, or are forced to use a Mac if they want their stuff to compile and work ON a Mac as an end result.
And
Re: (Score:2)
I was talking about home warranties, where Apple is excellent. And most urban people are near an Apple Store, and those are open on business days plus most non-business days.
For business, where you have replication, I recommend/use redundancy, often with little or no warranty. Warranties are a second resort.
Re: (Score:3)
Not something to base your business on versus a 4hr on-site service response with 24-hour turnaround for less than half the price difference between a PC and Mac of equivalent spec.
Yeah, I had one of those 24-hour warranties, from HP, on a $2000 Elitebook. First of all, you only get 24-hour turnaround if you are located within one of their service areas; if they have to drive more than two hours to reach you, then you get service when they get around to it. After they show up, days after you called, if they botch the repair (they only bring enough parts to make whatever the knowledge base says are the most common repairs), then you have to wait some more days while they get more parts a
Re: (Score:2)
Re:If you don't want to upgrade your box (Score:4, Insightful)
There exists no PC that costs $300 that will match up to a $2k Mac. Even if you plunk down $700 for a Mac Mini with AppleCare, it will be hard to find a similar machine with a similar service contract (think Dell Gold Service Contract).
Apple will come to you within 24h or ship you a new machine overnight, but even after the warranty expires you can still call them and they will answer you. I have dealt with Dell, HP and Lenovo; it doesn't even come close.
Re: (Score:2)
Does it surprise me? No. I've used RAM disks with Linux and older OSes over the years, mostly to trick older software into running faster.
I'm not sure what your point is regarding superstition. There is nothing superstitious about a brand with good customer service, better parts and higher prices.
Re: (Score:2)
I'm not sure you know what the word superstitious means. But I haven't found that people who don't build their own boxes tend to have a worse understanding. The programmers in telco/networking have the best low-level understanding, and they don't build their boxes at all - they work on tiny components. Same with the embedded guys. The device driver guys often do.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
What walled garden? I haven't had any problem installing whatever software I want from wherever I want on a Mac.
Oh, I see, you're trying to claim the Macs are like the phones.
And if Apple stopped selling anything, I still think they would last more than 5 years with the amount of cash they have.
Re: (Score:2)
Yeah, walled garden, right. You are aware that OS X is just *BSD under the hood, right? Moreover, one can run Parallels on top of OS X. I am currently typing this in a full-screen Ubuntu 14.10 session in Parallels on a Mac mini. My first impression of this little beast: why didn't I buy one years sooner? It's silent. It just works, out of the box. And so far it runs Ubuntu fantastically! And with one four-finger swipe I am in the OS X desktop.
Apple out of business within 5 years? Right, that's about the same ti
Re: (Score:1)
This sounds clever but is actually quite stupid.
A modern OS is going to manage how filesystems are cached in memory way better than a blank ramdisk.
Plus, you still need to read all of your data OFF your current drives before you can take advantage of this, which means you don't actually get to enjoy the speed until you pay the upfront cost. Plus, none of the things you do on that disk is permanent.
TL;DR: Ramdisks are a stupid idea
Re: (Score:2)
Re: (Score:2)
Ramdisks are not a stupid idea. They are just not useful in most cases.
Putting temporary files (e.g. /tmp) in a RAM disk can be beneficial, but only if you have significantly more RAM than needed. That can significantly speed up tasks that are known to create lots of temporary files (e.g. compilation). This is also very useful for keeping your (old) SSD or flash disk from wearing out too quickly.
On Linux, it is not uncommon for /run to be a ram disk (of type tmpfs). This is where most services will put sma
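Setting one up by hand is a one-liner on Linux; the size and mount point here are made up:
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=2G tmpfs /mnt/ramdisk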
Re: (Score:2)
That can significantly speed up tasks that are known to create lots of temporary files (e.g. compilation).
I set up a RAM disk on my Windows machine because of Audacity.
It creates temp files to store intermediate work (like the decode to PCM of a compressed format, or the output of a filter) instead of using RAM. Even with an SSD, this was not nearly as fast as it should have been, and a serious waste, since the total space used by the temp files is far less than the memory space available to the application. The RAM disk solved the speed problem quite nicely.
I also store things like Firefox's page cache on th
Re: (Score:1)
RAM disks are a complete waste of resources. If you wish to stuff something into the file cache, just do an md5sum on everything you want cached.
cd make_this_fast
find . -type f -exec md5sum {} \+ > /dev/null
Done. Run it once, then if you don't believe me, run it again and you'll see somewhat of a difference. You can substitute something faster for md5sum, like cat.
RAM disks are a way of allocating space for temporary files, like /tmp or something like that - not a way to cache disk files!
Re:If you don't want to upgrade your box (Score:5, Insightful)
Last time I used a RAM drive, it was for the contents of a floppy disk. My brother was sick of slow compile times and worked out how to use the university DOS computers to produce a RAM drive. Autoexec.bat created it and copied his files into it, and then it ran like greased lightning.
But that was back when 1.44MB of RAM was a lot and he was lucky enough to be somewhere where every computer had that spare.
The last time I saw it was when making a single-floppy Linux distribution that copied itself into RAM because it was often used on diskless workstations. Just like almost every Ubuntu install disk can do now if you select Live CD from the boot menu.
But on ordinary desktop OS? Since Windows 95, RAMDisks have been dead. Since then, we've been using RAM better to cache all recent filesystem accesses. There's very, very, very, very little that will ever benefit from a RAMDisk over just having that RAM as filesystem cache automatically anyway. You still have to read the data from permanent storage anyway, and once you've done that, it's in RAM until you start to fill up RAM. Read it often enough and it will never drop out of the cache. If you're not reading it often enough, why the hell bother to RAMDisk it?
And you lose NOTHING if the machine dies mid-way. With a RAMDisk, any changes you make are gone.
Please. Stop spreading absolute "gold-plated-oxygen-free" junk advice like this.
Anyone who wants to do this can do it with any bit of freeware on any machine. But why they would bother is beyond me. Hell, next you'll be telling me to enable swapfiles and put them on the RAMDisk....
Re: (Score:1)
On some Linux distros, /tmp is a tmpfs volume [wikipedia.org], which is effectively a RAM disk. SunOS/Solaris also do that. Many files live in /tmp for very short periods, and have no requirement to persist across a reboot. So, building them in RAM makes sense. The filesystem can still get backed to disk in the swap partition.
The only other case I can think of where a RAM drive might make sense is if you have a set of files you need access to with tight deadlines, and the total corpus fits in RAM. Of course, you could
Re: (Score:1)
I can't think of a cheaper and easier solution than a RAM disk for this particular app
Re: (Score:2)
That's actually why I decided to use it. Faster compile times.
OS X hits the disk so often that I moved my user environment onto the RAM drive.
Even with 1066 MHz RAM, I would get instant build times as the swap files were now in RAM.
That, when compared to 30-second build times, is a trade-off I'm willing to make.
And losing my contents? That's what rsync is for. And that's what back up batteries are for. My RAM drive is rsynched to an SSD partition. Happens in the background every 5 mins. I never see t
Re: (Score:2)
That's actually why I decided to use it. Faster compile times.
OS X hits the disk so often that I moved my user environment onto the RAM drive.
Even with 1066 MHz RAM, I would get instant build times as the swap files were now in RAM.
That, when compared to 30-second build times, is a trade-off I'm willing to make.
I/O-limited compilers? More likely you need to enable parallel builds to hide I/O latency.
So, yeah. Swap files on the RAM Disk. Insane speed as a result. Disk backed up to an SSD. Battery backup (laptops have batteries too, don't they?) Never a problem.
Page file + ram disk = oxymoron
Re: (Score:2)
A Linux RAM disk (tmpfs) just stores files in the page cache, so there's really no difference. OK, it can also write out those files to the swap partition if you run out of RAM, but you usually shouldn't be using a RAM disk unless you have plenty of RAM.
And that's a big improvement over some old-style RAM disks where you told it you wanted 500MB and it grabbed those 500MB and didn't let anyone else use it.
Re: (Score:2)
Anyone who wants to do this can do it with any bit of freeware on any machine. But why they would bother is beyond me.
Actually, I have an example from work. A utility (that we can't easily change) expects a file on disk. We must make corrections to the file first. So the process is:
1. Read original file
2. Apply corrections
3. Write out temp file
4. Point utility to file
5. Delete temp file
Writing a big file to any persistent media wastes quite a bit of time for no particular reason. So a RAM disk is quite useful if you need to pipe your process through files.
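A rough sketch of that flow, where apply_corrections and the_utility stand in for whatever the real tools are and /mnt/ramdisk is assumed to be a tmpfs or similar:
apply_corrections original.dat > /mnt/ramdisk/corrected.dat
the_utility /mnt/ramdisk/corrected.dat
rm /mnt/ramdisk/corrected.dat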
Re: (Score:2)
But on ordinary desktop OS? Since Windows 95, RAMDisks have been dead. Since then, we've been using RAM better to cache all recent filesystem accesses. There's very, very, very, very little that will ever benefit from a RAMDisk over just having that RAM as filesystem cache automatically anyway. You still have to read the data from permanent storage anyway, and once you've done that, it's in RAM until you start to fill up RAM. Read it often enough and it will never drop out of the cache. If you're not reading it often enough, why the hell bother to RAMDisk it?
This is consistent with my experiences on Linux. When compiling the kernel, I found no significant difference in compilation times on an SSD and on tmpfs. If you only have a mechanical hard drive, it might make sense to use a tmpfs, but if you don't have an SSD you probably don't have enough RAM for that anyway.
Re: (Score:2)
You miss the point.
The OS boots almost instantly. I have 500 gigs of accelerated storage, not 4. The speed gains are from latency, not bandwidth, so the extra bandwidth won't offer an improvement. With your RAM disk you still wait for it to load into your tiny RAM disk. An SSD is permanent. Until you use one you can't comment.
Re: (Score:2)
Currently sitting in front of a pretty bog-standard server for a small school, with the disks writing 130MB/s (yes, bytes) when they're just replicating VMs across the network. Not even particularly high-end, not a particularly brilliant network, in the middle of the working day, just normal load, pretty quiet - the servers are on 5% CPU and 10% RAM.
Pretty sure if I had a 10Gbit network and a serious number of users, I could push them even further just on that simple task. 300-500MB/s write is not unusual wi
"just 1,600MB/s and 1,350MB/s" (Score:3)
Because that's just so terrible, right? :p
Re: (Score:2)
About the only thing this will improve is game load times (which I guess is a market, but not the biggest on earth, especially given the price of this).
I have 32GB of RAM in my gaming PC, so the second time I start a game, it's all in RAM. Still takes an age to get through the damn videos--increasingly, unskippable freaking videos--every game wants to play these days, and then from there to the menu.
I don't even know what games are doing between the point where they stop playing videos and actually get to the menu. The files are cached in RAM, and the CPU isn't doing much. Just crappy code, I guess, and an SSD won't help much with that.
Re: (Score:2)
I don't even know what games are doing between the point where they stop playing videos and actually get to the menu.
It's not worth it for a game you play half a dozen times, but in many cases if you figure out what the video is named in the program directory and either rename or delete it, the video will no longer play.
I have found this speeds up getting to the menu dramatically.
In a few cases where it'll error out if the file isn't found, you can replace the ~15-30 second video with one (of the same general format) that is a second or less.
Re: (Score:2)
True, they might be smarter and try to cache the next level from the latest save file. But there's an obvious difference between the hard drive light glowing red because it has to load everything from disk, and barely flickering because it's all or almost all in RAM.
Daring us to use it (Score:3)
How long can you sustain these kinds of I/O rates before burning the thing out?
Awesome, it is so fast - yet, like LTE with tiny data caps, its utility appears to be substantially constrained by limitations on use.
For the subset of people with workloads actually needing this kind of performance, how useful is this? Reads can be cached in DRAM, which is quite cheap.
For those who don't really need it I can understand how it would be nice to have.
Re: (Score:2)
If you were to sustain 1550 MByte/s write for 1 year, you'd write a total of 48 PB (1550*60*60*24*365/1000/1000/1000), or 0.13 PB/day. In Techreport's endurance test, only two drives made it past 1.5 PB. So, if that is the bar, the drive would last only 11 days.
However, that would give you no time to read the data you'd written. Since you're not likely to write at max speed 24/7, the drive should last considerably longe
Don't overlook NVME! (Score:3, Informative)
The big news here is that this is pretty much the first consumer-available SSD that supports NVME! There are some super-expensive pro devices that do NVME, but they alone likely cost more than your whole high-end gaming rig.
http://en.wikipedia.org/wiki/NVM_Express
NVME is the interface that replaces AHCI, which was designed for spinning-rust devices that can really only read or write one thing at a time. Flash-based devices don't have to wait for moving parts and thus can access many things at once.
AHCI was designed for magnetic drives attached to SATA. NVME is explicitly designed to accommodate fast devices directly connected to PCI Express. Take a look at the comparison table on the wikipedia page linked above. Multiple, deep queues and lots of other features to remove bottlenecks that don't apply to flash-based storage.
How useful NVME currently is to consumers, though, is a different question. Only really new operating systems can boot from NVME devices. (Windows 8.1 or later. I don't know the current state of Linux support, but I bet at least someone's got a patched version of the kernel and GRUB if there's not mainline support already.) And most motherboards don't properly support NVME booting yet either. (I've heard reports that some do with a BIOS/firmware update, but it's currently really spotty.)
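If you do get NVME working under Linux, a quick sanity check looks something like this (the device numbering is whatever your system assigns):
lspci | grep -i 'non-volatile memory controller'
ls /dev/nvme0 /dev/nvme0n1
The first line should show the SSD as an NVMe-class PCI device; the second lists the controller node and its first namespace.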
So use the 16x (Score:2)
Don't get too excited (Score:5, Informative)
1. These are sequential speeds. They're only relevant when you're dealing with large files. Unless your job is working with video or disk images or other large files, the vast majority of your files are going to be small, and the IOPS matters more. 130k/100k IOPS is really good, but only about a 10%-20% improvement over SATA3 SSDs. It translates into 520/400 MB/s at queued 4k read/writes best case. Current SATA3 drives are already surpassing 400 MB/s queued 4k read/writes.
2. Like car MPG, the units here are inverted from what actually matters. You don't say "gee, I have 5 gallons in the tank I need to use today, how many miles can I drive with it?", which is what MPG tells you. You say "I need to drive 100 miles, how many gallons will it take?" which is gal/100 miles. Yes they're just a mathematical inverse, but using the wrong one means the scaling is not linear. If you've got a 100 mile trip:
A 12.5 MPG vehicle will use 8 gallons
A 25 MPG vehicle will use 4 gallons (a 4 gallon savings for a 12.5 MPG improvement)
A 50 MPG vehicle will use 2 gallons (a 2 gallon savings for a 25 MPG improvement)
A 100 MPG vehicle will use 1 gallon (a 1 gallon savings for a 50 MPG improvement)
See how the fuel saved is inversely proportional to the MPG gain? As you get higher and higher MPG, it matters less and less because MPG is the wrong unit. If you do it in gal/100 miles it's linear. (This is why the rest of the world uses liters / 100 km.)
An 8 gal/100 mile vehicle will use 8 gallons.
A 4 gal/100 mile vehicle uses 4 gallons (a 4 gallon savings for a 4 gal/100 mi improvement)
a 2 gal/100 mile vehicle uses 2 gallons (a 2 gallon savings for a 2 gal/100 mi improvement)
a 1 gal/100 mile vehicle uses 1 gallon (a 1 gallon savings for a 1 gal/100 mi improvement)
The same thing is true for disk speeds. Unless you've got a fixed amount of time and need to transfer as much data as you can in that time, MB/s is the inverse of what you want. The vast majority of use cases are a fixed amount of MB that needs to be read/written, and the time it takes to do that is what you're interested in because that's time you spend twiddling your thumbs. If a game needs to read 1 GB to start up:
A 100 MB/s HDD will read it in 10 sec
a 250 MB/s SATA2 SSD will read it in 4 sec (a 6 sec savings for a 150 MB/s improvement)
A 500 MB/s SATA3 SSD will read it in 2 sec (a 2 sec savings for a 250 MB/s improvement)
A 1 GB/s PCIe SSD will read it in 1 sec (a 1 sec savings for a 500 MB/s improvement)
This 2 GB/s PCIe SSD will read it in 0.5 sec (a 0.5 sec savings for a 1000 MB/s improvement)
Again, the actual time savings is inverted from the units we're using to measure. We really should be benchmarking HDDs and SSDs by time per amount of data - sec/GB, say.
A 10 sec/GB HDD will read 1 GB in 10 sec
A 4 sec/GB SATA2 SSD will read it in 4 sec (a 6 sec savings for a 6 sec/GB improvement)
A 2 sec/GB SATA3 SSD will read it in 2 sec (a 2 sec savings for a 2 sec/GB improvement)
A 1 sec/GB PCIe SSD will read it in 1 sec (a 1 sec savings for a 1 sec/GB improvement)
This 0.5 sec/GB PCIe SSD will read it in 0.5 sec (a 0.5 sec savings for a 0.5 sec/GB improvement)
That's nice and linear. You see that the vast majority of your speedup comes from switching from a HDD to a SSD - any SSD, even the old slow first-gen ones. The next biggest savings is switching to a SATA3 SSD. Beyond that the extra speed is nice, but don't be misled by the huge MB/s figures - the speedup from PCIe drives will never be as big as those first two steps from a HDD to a SATA SSD. Manufacturers just report performance in MB/s (instead of sec/GB) because it exaggerates the importance of tiny increases in time saved, and thus helps them sell new and improved (and more expensive) products. Review sites also report in MB/s because if you report in sec/GB, the benchmark graphs are boring and the speedup from these shiny new SSDs is barely perceptible.
Re: (Score:1)
You don't buy one of these for a regular desktop that just runs email, Word and the internet. This is for people who need to move large files around even faster. One of my use cases for one of these is as a cheap SAN cache for a backup-to-disk system.
OEM only--where to find? (Score:1)
This plus Skylake = awesome (Score:1)