Intel Unveils Optane SSD DC P4800X Drive That Can Act As Cache Or Storage (hothardware.com)
MojoKid writes from a report via HotHardware: Intel unveiled its first SSD product to leverage 3D XPoint memory technology, the new Optane SSD DC P4800X. The Intel SSD DC P4800X resembles some of Intel's previous enterprise storage products, but this product is all new, from its controller to its 3D XPoint storage media, which was co-developed with Micron. The drive's sequential throughput isn't impressive versus other high-end enterprise NVMe storage products, but the Intel Optane SSD DC P4800X shines at very low queue depths with high random 4kB IO throughput, where NAND flash-based storage products tend to falter. The drive's endurance is also exceptionally high, rated for 30 drive writes per day, or 12.3 petabytes written. Intel provided some performance data comparing its SSD DC P3700 NAND drive to the Optane SSD DC P4800X in a few different scenarios. One test shows read IO latency with the drive under load, and not only is the P4800X's read IO latency significantly lower, it is also very consistent regardless of load. With a 70/30 mixed read/write workload, the Optane SSD DC P4800X offers 5 to 8x better performance than standard NVMe drives. The 375GB Intel Optane SSD DC P4800X add-in card will be priced at $1,520, which is roughly three times the cost per gigabyte of Intel's high-end SSD DC P3700. In the short term, expect Intel Optane solid state drives to command a premium. As availability ramps up, however, prices will likely come down.
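As a quick back-of-the-envelope check on the pricing claim above, using only the figures quoted in the summary (so the implied P3700 figure is an estimate, not an Intel number):

```python
# Back-of-the-envelope cost per gigabyte, using only the figures quoted above.
p4800x_price_usd = 1520
p4800x_capacity_gb = 375

optane_per_gb = p4800x_price_usd / p4800x_capacity_gb
print(f"P4800X: ~${optane_per_gb:.2f}/GB")              # ~$4.05/GB
print(f"Implied P3700: ~${optane_per_gb / 3:.2f}/GB")   # ~$1.35/GB, if the 3x claim holds
```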
XPoint (Score:2)
Stay away from Xpoint (Optane) products for a while.
They're far, far off their initial promises, which points to manufacturing issues.
http://semiaccurate.com/2016/0... [semiaccurate.com]
Re: (Score:2)
Stay away from Xpoint (Optane) products for a while.
They're far, far off their initial promises, which points to manufacturing issues.
http://semiaccurate.com/2016/0... [semiaccurate.com]
I imagine OCZ must have mixed feelings about any Optane story, given that their Octane SSDs debuted 5 years ago.
Re: (Score:2)
I've seen better.
Just a bunch of blocks? (Score:2)
When are we going to get drives that are just a bunch of blocks, so that we can do our own wear leveling? Why should I trust a tiny computer on the SSD when I've got one that can do far more on the other end of the SATA line?
PCIe x2-x4 is a lot faster than SATA (Score:2)
PCIe x2-x4 is a lot faster than SATA.
Re: (Score:2)
Not soon, probably never.
The disk controller interfaces deliberately abstract those writeable areas. You would need to toss SATA/NVMe entirely and develop a new standard.
This leaves the SSD manufacturers free to choose a variety of controllers and flash chips, and they can optimize the firmware for their hardware without worrying about meddling from userspace.
Even if you did develop a standard that ostensibly provides direct access to low-level sectors, there is nothing in the world that you can do to stop
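For what it's worth, "doing our own wear leveling" on the host side would mean the OS keeping bookkeeping along these lines. This is a purely hypothetical sketch; the block pool size, the greedy least-worn allocation policy, and the erase accounting are invented for illustration and don't correspond to any real drive, controller, or standard:

```python
# Purely hypothetical sketch of host-managed wear leveling; nothing here reflects
# a real drive, controller, or storage standard.

class HostManagedFTL:
    def __init__(self, physical_blocks):
        self.erase_counts = [0] * physical_blocks      # wear per physical block
        self.l2p = {}                                  # logical -> physical mapping
        self.free = set(range(physical_blocks))        # unallocated physical blocks

    def write(self, logical_block, data):
        # Retire the old mapping (counting the erase), then steer the new write
        # to the least-worn free block so wear spreads evenly across the device.
        if logical_block in self.l2p:
            old = self.l2p.pop(logical_block)
            self.erase_counts[old] += 1                # erased before reuse
            self.free.add(old)
        target = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(target)
        self.l2p[logical_block] = target
        return target                                  # where the data would land

ftl = HostManagedFTL(physical_blocks=8)
for i in range(24):
    ftl.write(i % 3, b"payload")                       # hammer a few "hot" logical blocks
print(ftl.erase_counts)                                # erases end up spread across blocks
```

The logical-to-physical map and the erase counters in the sketch are exactly the state that today's drive firmware hides behind SATA/NVMe.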
Re: (Score:2)
What you're proposing is effectively ditching Integrated Drive Electronics and going back to CPU or card-based disk controllers.
I look forward to seeing 1701 error codes again.
Cache? (Score:3)
Intel Unveils Optane SSD DC P4800X Drive That Can Act As Cache Or Storage
I might be missing something, but if you're going to mention that it can act as cache it would be nice to include something about that bit in the summary.
No one reads TFA anyway, of course, but in this case it's counter-blocked because I'm using uBlock, so I'm doubly put off.
Re: (Score:3)
Yeah, because it's not true. It doesn't work well as a cache, and if you tried it you'd burn it out fast. It was SUPPOSED to; they promised for a long time that you could use this as both storage and main memory, but they couldn't make it work, and even Intel has stopped talking about Apache Pass.
Basically, this is an evolutionary step in flash, giving you much faster random writes and maybe 2x the endurance for 2x the price, so it's not bad as a disk.
But it's not the revolutionary step Intel was promising.
Re: (Score:2)
The endurance listed is 30 drive writes per day for the 375GB model. That's 11.25TB/day, or about 130MB/s of sustained 24/7 writes. A cache device should be spending 90% of its time being read, rather than written (or it's not doing that good a job as a cache[1]), which means that even if you're reading from it at about half of its peak rate every second of every day, with the corresponding 10% of traffic as writes, it's going to last its rated lifetime. Additionally, for a cache device, we really don't care if it burns out. When it does,
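The arithmetic behind those figures, for anyone who wants to check it (straight from the rated numbers above):

```python
# 30 drive writes per day on a 375 GB device, expressed as a sustained write rate.
capacity_gb = 375
drive_writes_per_day = 30

daily_writes_tb = capacity_gb * drive_writes_per_day / 1000   # ~11.25 TB/day
sustained_mb_per_s = daily_writes_tb * 1_000_000 / 86_400     # TB/day -> MB/s
print(f"{daily_writes_tb:.2f} TB/day, ~{sustained_mb_per_s:.0f} MB/s sustained")
```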
Re: (Score:2)
It's not great for sequential reads though; even the consumer M.2 SSD in my PC is faster for that. It's just really fast on small random writes, or small reads under LOW read/write load (small queue depth), because it really wins on latency there. So you have to be using it on something with lots of writes and low-load small reads to get an amazing benefit over current flash, and then it burns out fast.
I'm sure there will be some very targeted applications where this is perfect, like maybe mostly data acquisition an
Re: (Score:2)
They're still looking at Apache Pass for 2018, but it's dead as a product /now/ for Skylake-EP/Purley and they're playing it down. At IDF 2015 it was 'Apache Pass Apache Pass Apache Pass', at IDF 2016 it was 'Uh, yeah, we'll have DIMMs', and now it's 'Wow, this is great for SSDs.'
I'm not surprised they brought some DIMMs to show (it's not dead dead, just not now), but I'm genuinely curious about this: Ask when you can buy some.
Re: (Score:2)
Honestly, I couldn't care less about another SSD technology; having good NVRAM on the memory bus is one of the most exciting things in system design I've seen in a long time.
I may be wrong, but in my experience a minimum of 10% of runtime under load, and at least 25-30% of OS code, is about hiding the fact that storage speed sucks and that we have to stuff everything through a storage protocol. New IO designs could open up some very cool technologies that are currently hemmed in by clunky NAND/flash limitations.
It's rather exaggerated (Score:5, Interesting)
Intel's claims are rather exaggerated. They have already been torn apart on numerous tech forums. At best we're talking only a ~3-5x reduction in QD1 latency, and they intentionally omit vital information from the specs, forcing everyone to guess what the actual durability of the XPoint devices is. They say '12PB' of durability for the 375GB part but refuse to tell us how much overprovisioning they do. They say '30 drive writes per day' without telling us what the warranty will be.
In fact, over the last 6 months Intel has walked back their claims by orders of magnitude, to the point now where they don't even claim to be bandwidth-competitive. They focus on low queue depths and play fast and loose with the stats they supply.
For example, their QoS guarantee is only 60uS 4KB (99.999%) random access latency, and in the same breath they talk about being orders of magnitude faster than NAND NVMe devices. They fail to mention that, for example, Samsung NVMe devices also typically run around ~60-70uS QD1 latencies. Then Intel mumbles about 10uS latencies but bandies about large factors of improvement over NAND NVMe devices, far larger than the 6:1 one gets simply assuming 10uS vs 60uS.
Then they go on to say that they will have an NVDIMM form of the device later this year, with much faster access times (since in the NVMe form factor access times are constrained by the PCIe bus and block I/O protocol). But with potentially only 33,000 rewrite cycles per cell to failure that's seriously problematic. (And that's the best guess, since Intel won't actually tell us what the cell durability is).
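For reference, that 33,000-cycle guess appears to fall straight out of the published endurance figure, assuming no overprovisioning (which Intel hasn't disclosed):

```python
# Implied full-drive rewrite cycles from the rated endurance, assuming the
# advertised 375 GB is all the media there is (which Intel hasn't confirmed).
endurance_pb = 12.3
capacity_gb = 375

full_drive_writes = endurance_pb * 1_000_000 / capacity_gb    # PB -> GB, divided by capacity
print(f"~{full_drive_writes:,.0f} full-drive writes")         # ~32,800, i.e. roughly 33,000
```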
--
The price point is way too high for what XPoint in the NVMe format appears to actually be capable of doing. The metrics look impossible for an NVDIMM form later this year. We're literally supposed to buy the thing just to get actual performance metrics for it? I don't think so.
It's insane. This is probably the biggest marketing failure Intel has ever had. Don't they realize that nobody is being fooled by their crap specs?
-Matt
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
They say '12PB' of durability for the 375GB part but refuse to tell us how much overprovisioning they do. They say '30 drive writes per day' without telling us what the warranty will be.
Those numbers (12.3PB) work out to be very nearly 3 years [wolframalpha.com], for what it's worth -- perhaps (???) there's a 3-year warranty or something (or expected lifetime).
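The back-of-the-envelope version of that calculation, using the rated figures from the summary:

```python
# Rated lifetime if the drive is written at the full 30 drive-writes-per-day rate.
endurance_tb = 12_300                    # 12.3 PB expressed in TB
daily_writes_tb = 375 * 30 / 1000        # ~11.25 TB/day at 30 DWPD

days = endurance_tb / daily_writes_tb
print(f"~{days:.0f} days, ~{days / 365:.1f} years")   # ~1093 days, ~3.0 years
```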
Re: (Score:2)
That's wrong, and it's not your fault, it's the article's fault. They bought Intel's weaseling about this.
They don't overprovision for /performance/ (which Intel will continually remind you), they overprovision for /endurance/.
The only way they hit their stated life numbers on a 375GB drive is to put 448GB of XPoint memory on it, which you can see by opening one up.
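If that 448GB figure is accurate (it's taken at face value here, not verified), the implied overprovisioning and the per-cell endurance against the raw media work out roughly as follows:

```python
# Hypothetical: treats the 448 GB raw-capacity claim above as given.
raw_gb, usable_gb = 448, 375
endurance_pb = 12.3

overprovisioning = (raw_gb - usable_gb) / usable_gb
cycles_vs_raw_media = endurance_pb * 1_000_000 / raw_gb
print(f"~{overprovisioning:.0%} overprovisioning")                  # ~19%
print(f"~{cycles_vs_raw_media:,.0f} cycles against the raw media")  # ~27,500 rather than ~33,000
```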
Re: (Score:2)
>Intel's claims are rather exaggerated. Their claims have already been torn apart on numerous tech forums.
Because people on tech forums always know more than the people who actually design the process and products, right?
Maybe not, but they may know more than the people writing the marketing brochures or the commercials selling the things. That said, the technology looks to be different enough from everything that came before that it's quite likely a lot of people are incorrectly applying irrelevant knowledge of how past products worked to this new one. It also feels like disk drives are a bad fit for the technology, but also the only one in the short term. Things will truly get interesting when/if they move to the 'large me
Re: (Score:2)
Because people on tech forums always know more than the people who actually design the process and products, right?
Probably not, but they will be a lot more honest than the marketing weasels who cherry-pick numbers to make the drive look as good as possible. Marketing weasels who don't even stick to cherry-picking all the time; sometimes they just make things up.
Both the press releases and the forum chatter should be taken with a grain of salt until in-depth independent reviews are available.
But in the case that brochures and forums disagree, the company brochures are guaranteed to come with a strong bias.
Re: (Score:2)
But with potentially only 33,000 rewrite cycles per cell to failure that's seriously problematic. (And that's the best guess, since Intel won't actually tell us what the cell durability is).
I can guarantee you it's not 33.000 R/W cycles - the only tech that would allow that is SLC, and practically nobody sells SSDs based on SLC anymore. A few manufacturers sell highly overpriced SLC-based SD and microSD cards. Hell, nowadays you'll struggle to even find MLC-based SSDs (~10.000 rewrite cycles AT BEST). Every SSD manufacturer today uses TLC, which means 1000 R/W cycles per cell.
Off topic (Score:2)
Is there any way we can banish xx.xxx (or even worse, xx xxx) notation?
It is incredible that the world doesn't have a single standard for numbers (not counting scientific notation, which is something different, and btw rarely used).
How are we supposed to differentiate 33.000 (i.e. 33 thousand) from 33.000 (33, to an accuracy of 3 decimals)?
We can go to the moon, but can't state numbers unambiguously. Mind officially blown...
Re: (Score:2)
I can guarantee you it's not 33.000 R/W cycles - the only tech that would allow that is SLC, and practically nobody sells SSDs based on SLC anymore. A few manufacturers sell highly overpriced SLC-based SD and microSD cards. Hell, nowadays you'll struggle to even find MLC-based SSDs (~10.000 rewrite cycles AT BEST). Every SSD manufacturer today uses TLC, which means 1000 R/W cycles per cell.
You're still talking about flash technology, as far as I can see... the point of Optane is that it's using this new phase-change stuff. Supposedly it would have a thousand times the write endurance, and be a thousand times faster than NAND flash (though I don't think they ever said whether it was SLC/MLC/TLC they were comparing to). It doesn't look like either promise has come true, though.
Re: (Score:2)
Matt, I respect your work on both FreeBSD and DragonFly, but have you ever seen latency spikes from lots of small writes? My home PC has 2 Samsung Pros in RAID 0. Latency spikes easily go well over 1000 when running several Hyper-V or VMware Workstation sessions. I am sure a server-grade solution would be a little bit better.
From what I have read, low latency is the key, and writes. XPoint doesn't use blocks and does have endurance issues. 550,000 IOPS and lower latency is no joke on a SQL database or virtual
Re: (Score:2)
This is probably the biggest marketing failure Intel has ever had.
This is a marketing failure. Not a market failure. Currently all we have to go by is what Intel says. It won't be a market failure until a) the product is released and b) the product fails in the market.
Re: (Score:2)
Disregard that. I'm off for some much needed coffee.
SSD as cache (Score:2)
This seems like a particularly dumb idea. Wouldn't it be better to just add more RAM?
Re: (Score:2)
Using SSDs for cache is extremely common in the storage server market. Check the section on L2ARC: https://en.wikipedia.org/wiki/... [wikipedia.org]
Re: (Score:1)
You can get 384GB (6x64) of ECC server RAM for $5,202 on Newegg...
Is it more than 2K? Yes. Is it WAY faster than XPoint on every metric? Yes. Does it wear out? No. Do you basically have to think about it at all? No. For a 2K to 5K difference, the RAM beats all, and it is not THAT much more expensive.
If you want under 2K, what about a 2x M.2 PCIe card (~$100-$150) with two 1TB Samsung 950 PRO M.2 drives for ~$1,400? Run that in RAID 0 and you would probably be equal to or better than the current Xp
Re: (Score:2)
Are there hardware RAID controllers that support NVMe drives (U.2 or M.2 or direct PCIe), preferably with battery backup?
I'm tired of being limited by SATA/SAS because hardware RAID is a requirement and I haven't seen anything reputable for the other options.
Re: (Score:2)
Hardware RAID is ancient at this point; I wouldn't expect to see much further development in it. With SSDs there is no reason anymore to use hardware RAID or BBUs, and hardware RAID controllers have been slower than the aggregate of even spinning disks for well over a decade now.
Re: (Score:2)
RAID provides redundancy and speed. ESXi, for example, doesn't let you do RAID across multiple storage devices at the software level. My hardware RAID controller has 8 PCIe 3.0 lanes, cache, and battery backup. Yes, battery backup is still important. Yes, a RAM cache is faster than SSDs (though you'll want to toggle the write cache policy). I don't live in pretend fairy land where SSDs never fail, and I don't shuffle shit off to a SAN/etc. device over network, Fibre Channel, etc. because I want the perform
Re: (Score:2)
I understand the concept behind RAID, but there is no way the slow ASIC with slow 1-4GB of DDR2/3 RAM can compete with the same drives directly attached to the CPU and potentially 100's of GB in DDR4 RAM. In your case your BBU RAID controller is a single point of failure; when your controller, its RAM and/or BBU fail, your file system will be corrupted, so I wouldn't trust it with any 'write cache' policy unless you have a complement of them.
A modern file system with SSD write caches should have at least 2 of them
Re: (Score:2)
Are you kidding me? 2 GB of "slow" DDR3 blows away whatever SSD you're imagining in terms of both speed and latency. And the ASICs in RAID controllers can keep up just fine. They're typically designed to saturate all their internal links, with headroom.
When the battery backup fails, nothing happens unless the server also loses power.
Where are you getting servers with "100's of GB in DDR4 RAM" as a dedicated cache? Why would such amounts of cache matter when just a few GBs can provide a very low cache mis
Re: (Score:1)
While I agree in general, ECC server RAM ain't ECC server RAM. Finding a motherboard/system that supports that amount of RAM can end up more expensive than the RAM itself. And then you'll find that the board doesn't like those sticks you got from Newegg, because server hardware tends to be a lot more picky in that regard. After you've dropped 20 grand on the rest of the system and returned that $5,000 RAM kit, you could find that the only 384GB kit you can get working on your system costs three times as mu
Re: (Score:2)
You can get 384GB (6x64) of ECC server RAM for $5,202 on Newegg...
So you can get 384GB of RAM for the price of a bit more than 8TB of NVMe flash (not sure what the XPoint pricing is going to be).
For a 2K to 5K difference, the RAM beats all, and it is not THAT much more expensive.
It's about 20 times as expensive per GB. You're arguing that the better latency and throughput of the RAM is going to outweigh the increased capacity of the NVRAM. That's by no means clear. If your working set fits entirely into that 384GB RAM cache, then the RAM will definitely be faster, but if your working set is 1-4TB (not that uncommon for a SAN device) then the RAM solution is going to
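Plugging in the numbers quoted in this thread (the $5,202 384GB ECC kit, and roughly 8TB of NVMe flash for similar money, both taken at face value), the ~20x figure checks out:

```python
# Per-gigabyte cost comparison from the numbers cited in this thread.
ram_per_gb = 5202 / 384        # the $5,202 384GB ECC kit -> ~$13.55/GB
nvme_per_gb = 5202 / 8000      # assuming ~8TB of NVMe flash for similar money -> ~$0.65/GB

print(f"RAM ~${ram_per_gb:.2f}/GB, NVMe flash ~${nvme_per_gb:.2f}/GB")
print(f"Ratio: ~{ram_per_gb / nvme_per_gb:.0f}x")      # ~21x, i.e. roughly 20 times
```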
Re: (Score:2)
Re: (Score:2)
Good luck with that.
Re: (Score:2)
Go run a server? For a corporate shared file drive, using this as a tier in Server 2012 R2 or later will put the frequently accessed files on it, greatly improving performance over accessing the slower mechanical disks in the RAID. SQL databases can use this for stored procedures and frequently accessed data too, as a cache in front of the slower SAN, and latency
Re: (Score:2)
RAM is expensive, and a lot of it requires quite a bit of energy to keep refreshing. Also, it's not persistent, so files in memory don't survive a reboot. This is especially true if you are working with data sets in the tens of gigabytes.
Re: (Score:2)
Know when to run
You never count your money
When you're sittin' at the table
STT-MRAM or bust (Score:2)
XPoint is, well... pointless. It can't compete with MRAM, and by the time it matures enough (if it matures enough) to substitute for flash in any significant way, MRAM is likely to have already taken over.
MRAM has effectively infinite read/write endurance, high density and performance characteristics of static ram.
XPoint even if executed perfectly has only a narrow window in which it can hope to remain relevant.
Re: (Score:3)
Re: (Score:2)
There's MRAM available for sale right now. Everspin sells it:
https://www.everspin.com/ [everspin.com]
The problem is that it has lower density than DRAM and it's a lot more expensive. It does have better latency though, so it could be used as a kind of persistent last-level cache.