Intel Launches SSD 750 Series Consumer NVMe PCI Express SSD At Under $1 Per GB
MojoKid writes: Today Intel took the wraps off its new NVMe PCI Express solid state drives, the first products with this high-speed interface that the company has launched specifically for the enthusiast computing and workstation market. Historically, Intel's PCI Express-based offerings, like the SSD DC P3700 Series, have been targeted at datacenter and enterprise applications, with price tags to match. The Intel SSD 750 Series PCI Express SSD, though based on the same custom NVMe controller technology as the company's expensive P3700 drive, drops in at less than a dollar per GB while offering performance almost on par with its enterprise-class sibling. Available in 400GB and 1.2TB capacities, the Intel SSD 750 hits peak read and write bandwidth of 2.4GB/sec and 1.2GB/sec, respectively. In the benchmarks it easily takes many of the top PCIe SSD cards to task, and at $389 for the 400GB model, you won't have to sell an organ to afford one.
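A quick sanity check on the pricing claim, using only the figure given in the summary ($389 for the 400GB model); note the decimal-GB vs binary-GiB distinction, since the drive squeaks under $1 per GB but not per GiB:

```python
# Price-per-capacity math from the summary's $389 / 400GB figure.
price_usd = 389
capacity_gb = 400                          # marketed decimal gigabytes
capacity_gib = capacity_gb * 1e9 / 2**30   # ~372.5 GiB

print(f"${price_usd / capacity_gb:.3f} per GB")    # ~$0.973
print(f"${price_usd / capacity_gib:.3f} per GiB")  # ~$1.044
```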
Re:How many read/writes? (Score:5, Informative)
Re: (Score:1)
Re:How many read/writes? (Score:4, Interesting)
One step behind bleeding edge is the sweet spot for me. The last gaming rig I built is approaching 3 years old and it's still going strong. The only bleeding-edge part was the X79 Extreme 11 motherboard. I built it with one of the 750 gig Seagate hybrids, which was later replaced with one of their 2TB hybrids. Works plenty fast for me. When I'm gaming, the next level generally finishes loading before the cut-scene is done, so faster load times wouldn't make any difference.
Re: (Score:2)
Re: (Score:2)
A few of my Mitsui Gold and Kodak Gold (similar formulation) burns from 1995 and 1996 went bad in the last few years. Expensive media from back then (when it all was expensive), written at low speed, did seem to last better than the mass-produced media of later years. I also have two cheap Ritek discs burned closer to 2006 that lasted less than 5 years.
Re: (Score:2)
Okay, the flash looks good, how about the controller? Often the controller dies long before the flash does.
Re: (Score:3)
Funny, the specs say 70GB/day; that is significantly different. This appears at the bottom of the linked HotHardware article, in the right-hand column of the spec sheet, and it is for the 1.2TB model.
Re: (Score:1)
Re: (Score:3, Interesting)
From the review:
"We should also note the the SSD 750 Series comes with a 5 year limited warranty and its endurance is rated for 70GB of writes per day, with a total of 219TB written and a 1.2 million hour MTBF or meantime between failures. "
And by looking at some of the SSD endurance tests, I'd be surprised if this card can't beat 1-2PB before dying.
Hopefully Intel didn't add a suicide option into the firmware, like they did with the 335 SSD. As soon as the counter hits 0%, don't reboot or it's a brick.
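The endurance figures quoted above are easy to cross-check; a quick sketch using only the 70GB/day, 219TB, and 5-year numbers from the review:

```python
# Rough endurance arithmetic from the quoted spec figures.
daily_gb = 70
tbw_tb = 219
warranty_years = 5

writes_over_warranty_tb = daily_gb * 365 * warranty_years / 1000
years_to_exhaust_tbw = tbw_tb * 1000 / daily_gb / 365

print(f"{writes_over_warranty_tb:.1f} TB written over the warranty")   # ~127.8 TB
print(f"{years_to_exhaust_tbw:.1f} years to hit 219 TB at 70GB/day")   # ~8.6 years
```

So the rated 70GB/day workload only consumes a bit over half the 219TB write budget within the 5-year warranty window.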
Re: (Score:2)
Does Samsung even have a competing product? Also, your price and size make no sense, as the article is about 400GB and 1.2TB models at around $1/GB (a little less than, but not by much). The Samsung model you mention is also a SATA drive, not a PCIe drive, and most likely would not be able to put up the numbers these drives can push. Apple is using Samsung for mSATA, not PCIe (though there isn't a terrible lot of difference electrically), and theirs doesn't push near the speed of this product.
Re: (Score:3)
Does Samsung even have a competing product?
Yes. SM915.
Which seems to not exist according to Google. Perhaps you mean the SM951? That is a small M.2 gum-stick card that happens to run over PCIe, not a desktop add-in card that slots into a PCIe slot. They are completely different things. Look at the size of the Intel card: it is a full daughterboard with many large flash chips. The Samsung has something like 3 chips on it. I would expect it to be a much lesser part that isn't designed for the enterprise-level loads the Intel card is built for.
Apple is using Samsung for mSATA, not PCIe
Guess which PCIe 3.0 x4 drive the PCIe SSDs in the 2015 MacBook Pro and MacBook are based on.
Which is why only quoting a portion
Linux support? (Score:2)
Re: (Score:3)
Re: (Score:3)
So it looks like I can just drop one of these into my Xubuntu 14.04 LTS desktop. Kewl!
Now if I can just convince SWMBO that I NEED one...
Re: (Score:2)
The specs mention Windows 7, 8, and 8.1, as well as UEFI version 2.3.1 or later. No mention of Linux support, so I guess I won't get this for my ESX box :(
Re:Linux support? (Score:4, Informative)
Those improvements are not necessary to reach the full speed of this drive at 440K IOPS. In my own tests I've even seen a FusionIO drive hit 8GB/s under the old RHEL6 2.6.32 kernel. This new drive is at an amazing price/performance spot, but it's not exploring the upper limits the Linux kernel is shooting for.
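For anyone wanting to sanity-check IOPS claims on their own hardware, here's a minimal single-threaded probe (Linux, Python 3.7+, needs root; /dev/nvme0n1 is a placeholder device path). Real tools like fio drive many queued I/Os at once, so treat this queue-depth-1 number as a floor, not the drive's rated figure:

```python
# Minimal random-read IOPS probe using O_DIRECT to bypass the page cache.
import mmap, os, random, time

DEV = "/dev/nvme0n1"      # placeholder device node; adjust to your system
BLOCK = 4096              # 4 KiB reads, the usual IOPS block size
SECONDS = 5

fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
size = os.lseek(fd, 0, os.SEEK_END)
buf = mmap.mmap(-1, BLOCK)          # page-aligned buffer, required by O_DIRECT

ops, deadline = 0, time.monotonic() + SECONDS
while time.monotonic() < deadline:
    offset = random.randrange(0, size // BLOCK) * BLOCK
    os.preadv(fd, [buf], offset)    # one synchronous 4 KiB read
    ops += 1

os.close(fd)
print(f"{ops / SECONDS:,.0f} IOPS (queue depth 1)")
```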
Re: (Score:2)
You're assuming one slot, but they actually sell multi-slot monstrosities that are aggregated together as a single drive. Here's one similar to what I tested, benchmarked at 5.8GB/s [storagereview.com] on reads.
Re: (Score:1)
Comment removed (Score:4, Insightful)
Re: (Score:2)
Re: (Score:3)
Re: (Score:3)
Re:My problem with SSDs (Score:4, Informative)
Handling power-off issues is a different problem. What the GP was referring to is how drives will fail spectacularly in the face of anything seen as corruption. You can see some examples in longevity failure tests [techreport.com].
The problem in those cases was wearout, but the way that happens is scary. Let's say there's a bug in the firmware that causes a write to fail for no good reason. It's quite likely the drive will kick into a mode where it doesn't trust itself anymore. And the way that plays out on most SSDs, the drive shuts itself down at the firmware level, so it isn't even picked up by the BIOS at boot anymore. What people would expect there is read-only behavior; instead they find everything gone. And unlike most catastrophic spinning-drive failures, you could easily hit the same bug wiping out your data on both halves of a RAID-1 pair at the same time.
Re:My problem with SSDs (Score:4, Interesting)
"Fail spectacularly" is a vague term IMO. What we're talking about is this: when the Intel firmware determines that the SSD has failed, it allows the drive to boot in a read-only state once. After you power off, having received the warning, the drive commits suicide and will no longer boot or respond; in other words, it bricks itself at the firmware layer and there is NO recovery.
What I'd argue is the correct failure mode is to boot read-only, warn that power loss will result in data loss, and then keep booting read-only with a warning at each boot that files may be corrupt or lost. The intentional bricking is just bad design IMO. The data you need could be on a part of the drive that's perfectly fine, and you may get the warning at a time and place where it's simply not feasible to back up everything.
I completely disagree with Intel's failure model and think it's beyond stupid. It should warn the user of corruption and data loss but continue to boot. That way, if the person is off somewhere, they can back up critical files to the cloud or a thumb drive and try to recover the non-critical data when they get back. Intentional bricking is just stupid.
Re: (Score:3)
In cases like that, it's preferable to get some of the data back rather than none.
Even with backups... (Score:3)
I have backups every half hour or so.
Sometimes, though, you can do quite a lot in half an hour that would be really annoying to replicate. That's where it would be nice to at least have the drive give you what it thought it had before it went into a failure state. Even if it's partly corrupted that may be fine, especially for coders who work with lots of little files.
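For what it's worth, the half-hourly habit is easy to automate. A minimal "copy whatever changed" sketch, with placeholder paths; a real setup would more likely use rsync, filesystem snapshots, or a proper backup tool:

```python
# Periodically mirror files whose mtime is newer than the backup copy.
import shutil, time
from pathlib import Path

SRC, DST = Path("/home/me/projects"), Path("/mnt/backup/projects")  # placeholders

def sync_changed(src: Path, dst: Path) -> int:
    copied = 0
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        # Copy only files that are new or have a fresher mtime.
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)
            copied += 1
    return copied

while True:
    print(f"copied {sync_changed(SRC, DST)} files")
    time.sleep(30 * 60)   # every half hour or so
```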
Re: (Score:2)
Did you miss the SSD endurance tests where they abused the hell out of SSDs and found them to be way more durable than the skeptical wags like to say they are?
Given normal precautions like backups, they seem good enough to me, at least the reasonable brands like Samsung and Intel. I plan to make my next NAS/SAN box totally SSD based, which, by the time I get around to doing it in a year or so, will be even more affordable.
Even if the risk of single-disk failure is higher with SSD, performance is so overwhelmingly better
Re: (Score:2)
I bought 5 SSDs in 2014, now in all my machines, so I'll be playing the part of (near) bleeding-edge adopter in the upcoming years. So far, I'm loving the performance.
vs. raid controller + cheap drives (Score:2)
PCI Express RAID controller ~ $100
5 x 256GB SSD ~ $500
$600 vs $1200, assuming $1 per GB for this Intel card (worked numbers below).
Not sure about speed. In theory, the array should be faster due to RAID striping (4 or 5 x 500MB/sec).
Power and cabling are a mess, so definitely a con here.
Fault tolerance is a plus from RAIDing.
Upgradeable storage capacity is a plus.
Otherwise, great for server farms.
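The comparison above, worked through (the prices are the parent's ballpark figures, not quotes):

```python
# Cost-per-GB and theoretical throughput for the proposed array.
raid_controller = 100
cheap_ssd = 100          # per 256GB drive, per the parent's estimate
drives = 5

array_cost = raid_controller + drives * cheap_ssd    # $600
array_capacity_gb = drives * 256                     # 1280 GB raw
intel_cost = 1.2 * 1000 * 1.0                        # 1.2TB at ~$1/GB

print(f"array: ${array_cost} for {array_capacity_gb}GB "
      f"(${array_cost / array_capacity_gb:.2f}/GB raw)")
print(f"Intel 750: ~${intel_cost:.0f} for 1200GB")
# Theoretical striped read: 5 x ~500MB/s = ~2.5GB/s, but only if the
# controller can actually keep up (see the reply below).
```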
Re: vs. raid controller + cheap drives (Score:4, Informative)
You're also using 5 drive slots (2 per U) vs 1 (10 per U). And that assumes your RAID controller can push to the drives at PCIe speed. RAID controllers aren't that fast; even chips from expensive manufacturers push the boundaries at 6Gbps and ~100,000 IOPS for the entire array.
Re: (Score:3)
Re: (Score:2)
IMHO SSDs are all about speed... "2 lanes of PCIe 3.0 offers 3.3x the performance of SATA 6Gb/s"
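That 3.3x figure falls out of the link math once you account for encoding overhead (128b/130b for PCIe 3.0, 8b/10b for SATA):

```python
# Usable bandwidth: PCIe 3.0 is 8 GT/s per lane, SATA III is 6 Gb/s.
pcie3_lane_mbps = 8e9 * (128 / 130) / 8 / 1e6   # ~985 MB/s per lane
sata3_mbps = 6e9 * (8 / 10) / 8 / 1e6           # ~600 MB/s

ratio = 2 * pcie3_lane_mbps / sata3_mbps
print(f"2 lanes of PCIe 3.0: {2 * pcie3_lane_mbps:.0f} MB/s, "
      f"{ratio:.1f}x SATA 6Gb/s")               # ~1970 MB/s, ~3.3x
```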
Re: (Score:2)
I really don't see many cases where a RAID array is better than a drive like this, especially considering Intel's reputation and reliability ($100 256GB SSDs aren't going to be the best fault tolerant ones...).
Re: (Score:3)
I really don't see many cases where a RAID array is better than a drive like this, especially considering Intel's reputation and reliability ($100 256GB SSDs aren't going to be the best fault tolerant ones...).
I've gotten burned by an Intel SSD. The Intel 320's data loss bug bit me, and that was on a drive that had the firmware patch to fix the flaw.
Never trust a single drive. A RAID array is always better because it gives you a chance to recover from a disk failure. Then back that RAID array up to external storage. Then back up the essential data to off-site storage. With this drive, buy two and do a software RAID-1. With write speeds like this you will never notice the performance drop-off.
Re: (Score:2)
I think these PCIe flash drives win for raw performance because they have access to the entire device at PCIe bus speeds.
Maybe some ideal 16x card with a dedicated SATA controller per connection would give you RAID-0 performance of 30 Gbit/sec for five disks, but something tells me you'd be limited by the individual SATA limit of 6 Gbit/sec. In the real world, I don't know that any $100 8x RAID card would do that.
But I would also bet that most workloads are IOPS bound, not mass throughput bound
Re: vs. raid controller + cheap drives (Score:2)
IOPS in RAID controllers are awful. I have an entry-level external EonStor appliance that tops out at ~10,000 IOPS with a 2GB cache. Same goes for Areca. The only way to get good performance out of SSD arrays is to pass the disks through directly and let the CPU handle it.
Re: (Score:2)
The cabling wouldn't be much of a mess if you use something like this:
http://www.newegg.com/Product/... [newegg.com]
Encryption? (Score:2)
Couldn't see OPAL V2 / eDrive support mentioned anywhere. A shame, because I think it's an essential feature of any SSD these days.
OPAL V2 allows the drive to encrypt using a user-supplied key, with near-zero performance loss.
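The near-zero performance loss follows from the design: a self-encrypting drive always encrypts with an internal media key in dedicated hardware, and the user credential merely wraps (locks) that key, so changing the password re-wraps one small key rather than re-encrypting the data. A loose software analogy, not the drive's actual mechanism, assuming the Python cryptography package:

```python
# Conceptual sketch of key wrapping as used by self-encrypting drives.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

media_key = os.urandom(32)     # stands in for the drive's internal data key
salt = os.urandom(16)

def kek_from_password(password: bytes) -> bytes:
    # Derive a key-encryption key from the user's password.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    return kdf.derive(password)

wrapped = aes_key_wrap(kek_from_password(b"hunter2"), media_key)
# At unlock time, the right password recovers the same media key, and
# the bulk data never needs to be touched.
assert aes_key_unwrap(kek_from_password(b"hunter2"), wrapped) == media_key
```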