With Optane Memory, Intel Claims To Make Hard Drives Faster Than SSDs (pcworld.com) 109
SSDs are generally faster than hard drives. However, they are also usually more expensive. Intel wants to change that with its new Optane Memory lineup, which it claims is faster and better performing than SSDs while not requiring customers to break the bank. From a report on PCWorld: Announced Monday morning, these first consumer Optane-based devices will be available April 24 in two M.2 trims: a 16GB model for $44 and a 32GB Optane Memory device for $77. Both are rated for crazy-fast read speeds of 1.2GBps and writes of 280MBps. [...] When the price of a 128GB SATA SSD is roughly $50 to $60 today, you may rightly wonder why Optane Memory would be worth the bother. Intel says most consumers just don't want to give up the capacity for their photos and videos. PC configurations with a hard drive and an SSD, while standard for higher-end PC users, aren't popular with newbies. Think of the times you've had friends or family fill up the boot drive with cat pictures while the secondary drive sits nearly empty. Intel Optane Memory would give that mainstream user the same or better performance as an SSD, with the capacity advantage of the 1TB or 2TB drive they're used to. Intel claims Optane Memory performance is as good as or better than an SSD's, offering latency that is orders of magnitude better and the ability to reach peak throughput at much lower queue depths.
But (Score:2, Funny)
Can wouldn't SSDs be more energy efficient?
Re:But (Score:4, Insightful)
Re:But (Score:5, Funny)
Re: (Score:2, Funny)
They can wouldn't be more as even such as many though.
Re: (Score:2)
That's highly dependent on the woodening process.
Re: (Score:1)
Get a crappier power supply and you'll be able to hear the SSDs being accessed just fine...
Re: (Score:2)
Probably also more physical shock resistance.
However, for data centers, desktops, and large laptops we have another option.
Re: (Score:2)
Can wouldn't SSDs be more rugged/durable?
Re: (Score:2)
The deal is they have a bunch of half-broken XPoint shit they need to sell off in some form to recoup some $.
XPoint (Currently "Optane" products from Intel) isn't fucking ready: http://semiaccurate.com/2016/0... [semiaccurate.com]
If Intel & Micron can get to the point where it fucking works as planned then it'll be great. But who the fuck knows if/when that'll actually happen. What you're seeing now is a broken mess that is shippable only because they're loading it up with tons of redundancy / overprovisioning for when
Re: (Score:3)
This!
My first thought was exactly this. You can have a Samsung 960 EVO, which is three times faster in read and over five times faster in write speeds, for only twice the money of that Intel module. And it has a capacity of 250 GB, not 32 GB. If Samsung made a 960 EVO 128GB model, the entire Intel product line would be dead in the water. Oh, wait. They have, somewhat... the SM961 128GB, which is both faster and about as expensive as 32 Intel GBs.
Sorry Intel, and thanx for the deja-vu moment, for my sec
Re:Yeah, but no (Score:4, Informative)
Certainly faster writing. Read speed is about the same for the EVO (on real blocks of incompressible data, not the imaginary compressible or zeroed blocks that they use to report their 'maximum').
XPoint over NVMe has only two metrics that people need to know about to understand how it fits into the ecosystem: (1) More durability, up to 33,000 rewrites apparently (many people have had to calculate it; Intel refuses to say outright what it is because it is so much lower than what they originally said it would be). (2) Lower latency.
So, for example, NVMe devices using Intel's XPoint have an advertised latency of around 10uS. That is, you submit a READ request, and 10uS later you have the data in hand. The 960 EVO, which I have one around here somewhere... ah, there it is... the 960 EVO has a read latency of around 87uS.
This is called the QD1 latency. It does not translate to the full bandwidth of the device as you can queue multiple commands to the device and pipeline the responses. In fact, a normal filesystem sequential read always queues read-ahead I/O so even an open/read*/close sequence generally operates at around QD4 (4 read commands in progress at once) and not QD1.
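If you want to reproduce this kind of measurement yourself, here is a rough sketch of the idea. This is not the randread tool used below, just an approximation of what a QD1 test does: open the raw device with O_DIRECT, issue one 4KB pread at a random offset at a time, and time each I/O. Running several copies of it against the same device in parallel approximates the higher queue depths.

/* qd1_latency.c - rough QD1 random-read latency sketch (not the randread
 * tool shown below). Opens a block device with O_DIRECT and times one
 * aligned 4KB pread at a random offset per iteration. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BUFSZ  4096
#define NREADS 100000

static double now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
}

int main(int argc, char **argv)
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s <device> <size-in-bytes>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    long long nblocks = atoll(argv[2]) / BUFSZ;  /* note: random() only spans ~2^31 blocks */
    void *buf;
    if (posix_memalign(&buf, 4096, BUFSZ) != 0)  /* O_DIRECT needs aligned buffers */
        return 1;

    double total = 0, lo = 1e9, hi = 0;
    srandom(getpid());
    for (int i = 0; i < NREADS; ++i) {
        off_t off = (off_t)(random() % nblocks) * BUFSZ;
        double t0 = now_us();
        if (pread(fd, buf, BUFSZ, off) != BUFSZ) { perror("pread"); return 1; }
        double dt = now_us() - t0;
        total += dt;
        if (dt < lo) lo = dt;
        if (dt > hi) hi = dt;
    }
    printf("avg=%.2fuS lo=%.2fuS hi=%.2fuS (%d reads, QD1)\n",
           total / NREADS, lo, hi, NREADS);
    free(buf);
    close(fd);
    return 0;
}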
Here's the 960 EVO and some randread tests on it at QD1 and QD4.
nvme1: mem 0xc7500000-0xc7503fff irq 32 at device 0.0 on pci2
nvme1: mapped 8 MSIX IRQs
nvme1: NVME Version 1.2 maxqe=16384 caps=00f000203c033fff
nvme1: Model Samsung_SSD_960_EVO_250GB BaseSerial S3ESNX0J219064Y nscount=1
nvme1: Request 64/32 queues, Returns 8/8 queues, rw-sep map (8, 8)
nvme1: Interrupt Coalesce: 100uS / 4 qentries
nvme1: Disk nvme1 ns=1 blksize=512 lbacnt=488397168 cap=232GB serno=S3ESNX0J219064Y-1
(/dev/nvme1s1b is a partition filled with uncompressible data)
xeon126# randread /dev/nvme1s1b 4096 100 1 /dev/nvme1s1b bufsize 4096 limit 16.000GB nprocs 1
device
11737/s avg= 85.20uS bw=48.07 MB/s lo=66.22uS, hi=139.77uS stddev=7.50uS
11458/s avg= 87.28uS bw=46.92 MB/s lo=68.50uS, hi=154.20uS stddev=7.01uS
11469/s avg= 87.19uS bw=46.98 MB/s lo=69.97uS, hi=151.97uS stddev=6.95uS
11477/s avg= 87.13uS bw=47.01 MB/s lo=69.31uS, hi=158.03uS stddev=7.03uS
And here is QD4 (really QD1 x 4 threads on 4 HW queues):
xeon126# randread /dev/nvme1s1b 4096 100 4 /dev/nvme1s1b bufsize 4096 limit 16.000GB nprocs 4
device
44084/s avg= 90.74uS bw=180.57MB/s lo=65.17uS, hi=237.92uS stddev=16.94uS
44205/s avg= 90.49uS bw=181.05MB/s lo=65.38uS, hi=222.21uS stddev=16.56uS
44202/s avg= 90.49uS bw=181.04MB/s lo=65.19uS, hi=221.48uS stddev=16.72uS
44131/s avg= 90.64uS bw=180.75MB/s lo=64.44uS, hi=245.91uS stddev=16.81uS
44210/s avg= 90.48uS bw=181.08MB/s lo=63.73uS, hi=232.05uS stddev=16.74uS
So, as you can see, at QD1 the 960 EVO is doing around 11.4K transactions/sec and at QD4 it is doing around 44K transactions/sec. If I use a larger block size you can see the bandwidth lift off:
xeon126# randread /dev/nvme1s1b 32768 100 4 /dev/nvme1s1b bufsize 32768 limit 16.000GB nprocs 4
device
19997/s avg=200.03uS bw=655.26MB/s lo=125.02uS, hi=503.26uS stddev=55.24uS
20090/s avg=199.10uS bw=658.23MB/s lo=124.62uS, hi=522.04uS stddev=54.83uS
20034/s avg=199.66uS bw=656.47MB/s lo=123.63uS, hi=495.74uS stddev=55.59uS
20008/s avg=199.92uS bw=655.62MB/s lo=123.50uS, hi=500.24uS stddev=55.92uS
20034/s avg=199.66uS bw=656.47MB/s lo=125.17uS, hi=488.30uS stddev=55.02uS
20000/s avg=200.00uS bw=655.35MB/s lo=123.19uS, hi=504.18uS stddev=55.98uS
And if I use a deeper queue I can max out the bandwidth. On this particular device, random blocks of incompressible data at 32KB top out at around 1 GByte/sec. I'll also show 64KB and 128KB:
xeon126# randread /dev/nvme1s1b 32768 100 64 /dev/nvme1s1
device
Re: (Score:2)
Thanx for the numbers :) Looks quite interesting, especially because I'm in the process of buying a fast SSD soon (a new PC setup replacing my 7-year-old Phenom, and the motherboard will most probably have a PCIe 3.x x4 M.2 slot). Latency increases with block size... but when you're going for bulk data, latency gets less important, I think. It's the commands for very small bits of data, I suppose, that you want to have with as little latency as possible. At 'various levels' of copying my experience (just gut feeli
Re: (Score:2)
Dissecting the test output:
11737/s avg= 85.20uS bw=48.07 MB/s lo=66.22uS, hi=139.77uS stddev=7.50uS
That means the average latency is 85uS (averaged over all reads), the lowest latency measured was 66uS and the highest was 140uS. Another important metric is the standard deviation... that is, how 'tight' access times are around that average latency of 85uS. In this case, a standard deviation of 7.5uS is very good.
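For completeness, here is a minimal sketch (not the actual tool) of how those per-run fields can be accumulated from the individual I/O timings; the standard deviation is just the square root of the mean of the squares minus the square of the mean.

/* Sketch of accumulating avg/lo/hi/stddev (and derived IOPS/bandwidth)
 * from per-I/O latencies in microseconds. Not the actual randread tool. */
#include <math.h>
#include <stdio.h>

void report(const double *lat_us, int n, double elapsed_s, int bufsize)
{
    double sum = 0, sumsq = 0, lo = lat_us[0], hi = lat_us[0];
    for (int i = 0; i < n; ++i) {
        sum += lat_us[i];
        sumsq += lat_us[i] * lat_us[i];
        if (lat_us[i] < lo) lo = lat_us[i];
        if (lat_us[i] > hi) hi = lat_us[i];
    }
    double avg = sum / n;
    double stddev = sqrt(sumsq / n - avg * avg);  /* population stddev */
    double iops = n / elapsed_s;
    double bw_mb = iops * bufsize / 1e6;
    printf("%.0f/s avg=%.2fuS bw=%.2f MB/s lo=%.2fuS, hi=%.2fuS stddev=%.2fuS\n",
           iops, avg, bw_mb, lo, hi, stddev);
}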
Comparing this to the Optane: what Intel has stated is that the average latency over all reads
Thanks for the ad, I guess, but you missed somethi (Score:5, Insightful)
So these high-priced, low-capacity drives are meant to fill the need for low-priced, high-capacity drives?
Shouldn't the summary at least attempt to fill in the gaps here?
Re: (Score:2)
They might fill the need, but until then their R&D costs need to be recouped. So let's look forward a few years or so, to when the people who believed this marketing crap have bought those devices and thereby made them cheaper.
Re:Thanks for the ad, I guess, but you missed some (Score:4, Insightful)
A lot of products flat out fail trying to recover R&D expenses. I am not saying this is one of those, as Intel has huge resources behind any tech it brings to market.
The idea here (in the long run) is that drives and 'memory' become the same space: instant-on, fast access to nonvolatile RAM, with RAM becoming the equivalent of a tier-4 processor cache.
I've long predicted that the memory space is going to be flattened out and everything is going to be mapped as one big logical drive, organized by how quickly frequently needed data can be accessed: closer/faster, further/slower.
Re: (Score:3)
With 64 bit memory addresses, there's no need to differentiate memory vs drive space. Just let the swap manager decide what goes where in the physical world, and each process gets its own dedicated pages of a single memory space.
Re: (Score:3)
I think we're *eventually* going to wind up with a unified memory technology that flattens the memory space, but I don't think Optane is it.
When this was first a thing, the Optane access times were a couple of orders of magnitude off RAM. It really read like a newer/better/faster version of existing flash storage media. Of course the critical thing is "Can you make it price competitive with existing NAND?"
If they can't, it's going to be a tough sell. Existing NAND storage has gotten to be fast, durable,
Re: (Score:2)
When this was first a thing, the Optane access times were a couple of orders of magnitude off RAM.
Optane access times are still too slow to replace DRAM.
While you *can* use faster storage in front of slower capacity storage as a cache, existing NAND is so cheap now that everything is migrating to flash.
Caching works, but it's complex and has overhead penalties, which is one reason why all flash storage has grown in popularity. The consumer wants one drive, not two, and even the enterprise wants speed and simplicity.
I'm curious what Intel's problem is.
Access times on Optane are such that these drives can support their maximum throughput at low queue depths, unlike NAND flash, which requires a large number of queued transactions. In this respect, Optane requires *less* caching and buffering than NAND and apparently less processing in its translation tables. Is that enough? I do not know.
As a form of slow (but faster and lower-latency than NAND flash) non-volatile RAM (random access memory) in the tra
Re: (Score:1)
Re: (Score:2)
More and more memory will be moved on-die as well. Fifty years from now, we'll probably just have a single die that is the computer.
No, for two reasons:
1. Compare the amount of die area that the DRAM takes in a system with a reasonable amount of memory. It is way too much to be integrated onto the CPU die.
2. High-performance logic and bulk DRAM processes are different. Also, operating the DRAM at the temperature of the CPU is a problem, although acceptable in some cases.
The closest you may get is integrating the DRAM as part of a hybrid or multichip module; however, this will only work for systems with low memory requirements. GPUs are st
Re: (Score:3)
It would depend on the relative latency and other characteristics. XPoint is definitely not it, because XPoint can't handle unlimited writing. But suppose that in some future we do have a non-volatile storage mechanism with effectively unlimited durability, like RAM, but which is significantly more dense, like XPoint.
In that situation I can see systems supporting a chunk of that sort of storage as if it were memory.
Latency matters greatly here for several reasons. First, I don't think XPoint is quite fast
Re: (Score:1)
This is pretty much how computers used to be. Just a flat memory space and that's it. Lots of early computers ran the OS out of ROM and all user data was stored in RAM. Cartridge-based game systems simply map the cartridge ROM into the memory space. Before cheap flash storage became available, early Palms and Windows CE devices stored user data and installed programs in battery-backed DRAM, and even had user-added programs specially compiled so they could be executed in place (since they were already stored in fast
Re: (Score:2)
Re: (Score:2)
The idea here (in the long run) is that drives and 'memory' become the same space: instant-on, fast access to nonvolatile RAM, with RAM becoming the equivalent of a tier-4 processor cache.
This idea terrifies me. Currently, a reboot fixes everything but hardware issues. Once this goes live, only reinstalling from scratch will fix things.
These are caches FOR hard drives (Score:5, Interesting)
Yeah, it is not clear from the summary; reading it I thought it was about hybrid drives, but the sizes don't make sense.
So, these are M.2 expansion cards that offer a big and very fast cache for your existing hard drive.
Re: (Score:2)
Re: (Score:2)
In an M.2 slot.
Re: (Score:2)
Intel dabbled in this (as did others) years ago when SSDs were too small for most people. As far as I know, it was kinda shitty and only kinda worked and everyone abandoned it because hybrid drives were simpler (even though they too sucked) and SSDs kept getting bigger, faster, and cheaper.
They called it "Smart Response Technology" when it launched. Maybe it's back? Maybe it never went away? Maybe Windows ReadyBoost has risen from the grave? (I've NEVER seen ReadyBoost in actual use.)
Re: (Score:2)
Re: (Score:2)
If it actually worked very well you wouldn't have noticed it pausing while it waited after a cache miss. Any cache can only help by so much. In the case of hybrid drives, I never understood why drive manufacturers used such a small amount of NAND, besides cost. Sure, it is expensive to use. But if you put more on there I'll pay more, because it will perform better more often.
Re: (Score:2)
Re: (Score:2)
Intel dabbled in this (as did others) years ago when SSDs were too small for most people. As far as I know, it was kinda shitty and only kinda worked and everyone abandoned it because hybrid drives were simpler (even though they too sucked) and SSDs kept getting bigger, faster, and cheaper. They called it "Smart Response Technology" when it launched. Maybe it's back? Maybe it never went away? Maybe Windows ReadyBoost has risen from the grave? (I've NEVER seen ReadyBoost in actual use.)
It's the same as far as I understand, just optimized for a lower-latency, higher-performance SSD. But to be honest, except for gamers I think almost everyone has enough space on the SSD these days. And even most gamers could, if Steam only offered them two storage areas so they could put 1GB on the SSD and the other 29GB with all the media files on an HDD. I've gone all SSD anyway even though it's a waste.
Re: (Score:2)
Intel is blowing (Score:4, Insightful)
Smoke. Total and complete nonsense. Why would I want to buy their over-priced Optane junk versus a Samsung 951* or 960* NVMe drive? Far more storage for around $115-$130, 1.4 GBytes/sec consistent read performance, decent write performance, and decent durability.
P.S. the Intel 600P NVMe drive is also horrid, don't buy it.
http://apollo.backplane.com/DF... [backplane.com]
-Matt
Re: (Score:1)
You apparently either didn't read or didn't comprehend the article. These devices are initially intended for use in hybrid drives - replacing the SSD component of an SSD/HD hybrid. The claim is that the resulting combo will have better than SSD performance at spinning disk size/price points.
And if the approach appears viable, the costs will come down.
Re: (Score:2)
Hybrid drives are a dead segment. If anything, this is geared for their "Smart Response Technology" (which I had assumed was abandoned) and idiots such as OEMs and those that buy from OEMs.
Re: (Score:2)
Re:Intel is blowing (Score:4, Informative)
Right. They are trying to market it as something cool and new, which would be great except for the fact that it isn't cool OR new. A person can already use ANY storage device to accelerate any OTHER storage device. There are dozens of 'drive accelerators' on the market and have been for years. So if a person really wanted to, they could trivially use a small NAND flash based NVMe SSD to do the same thing, and get better results because they'll have a lot more flash. A person could even use a normal SATA SSD for the same purpose.
What Intel is not telling people is that NOBODY WILL NOTICE the lower latency of their XPoint product. At (I am assuming for this product) 10uS the Intel XPoint NVMe is roughly 1/6 the latency of a Samsung NVMe device. Nobody is going to notice the difference between 10uS and 60uS. Even most *server* workloads wouldn't care. But I guarantee that people WILL notice the fact that the Intel device is caching much less data than they could be caching for the same money with a NAND-based NVMe SSD or even just a SATA SSD.
In other words, Intel's product is worthless.
-Matt
Re: (Score:2)
Re: (Score:2)
They don't offer write-back as an option for those that do have a UPS and want to use write-back?
Re: (Score:2)
Re: (Score:2)
And, of course, any Linux or BSD operating system will use all available memory to cache data from storage anyway. I guess Windows needs a little more help to do that.
This certainly shows up in, for example, Chrome startup times. It takes around 4 seconds from a hard drive, uncached, 1 second from an SSD, 1 second from an NVMe drive, and presumably 1 second from any other form of storage, because Chrome itself needs a bit of CPU time to initialize itself, not to mention the time it takes to load a tab (mini
Re: (Score:1)
P.S. the Intel 600P NVMe drive is also horrid, don't buy it.
http://apollo.backplane.com/DF... [backplane.com]
-Matt
According to the Linux kernel, Intel NVMe devices need the block stack to stick to certain alignments for performance reasons. Now quoting the above article: "All tests were done on a DragonFlyBSD". I doubt Intel did the same enabling there as they did for Linux.
Re: (Score:1)
I think you are a little confused by Intel marketing speak. Actually, you are a lot confused.
-Matt
Re: (Score:1)
I think you are a little confused by Intel marketing speak. Actually, you are a lot confused.
-Matt
What the heck are you talking about? Intel devices have a quirky alignment requirement that they made work well in Linux (it's documented in the git logs), but Intel neglected BSD. What part of this do you consider to be marketing?
Re: (Score:3)
Intel devices have quirks, but I think you are mixing apples and oranges here. All modern filesystems have used larger alignments for ages. The only real issue was that the original *DOS* partition table offset the base of the slice the main filesystem was put on by a weird multiple of 512 bytes, which was not even 4K aligned.
This has not been an issue for years. It was fixed long ago on DOS systems and does not exist at all on EFI systems. Regardless of the operating system.
At the same time, all
Re: (Score:2)
Intel devices have quirks, but I think you are mixing apples and oranges here. All modern filesystems have used larger alignments for ages. The only real issue was that the original *DOS* partition table offset the base of the slice the main filesystem was put on by a weird multiple of 512 bytes, which was not even 4K aligned.
NTFS made the same mistake, so it is hardly fair to pick on DOS for this behavior.
Re: (Score:2)
Maybe you should point me at the commitid you are referring to, then I can address your comment more directly. I can tell you straight out, even without seeing it, that you are probably misinterpreting it.
-Matt
Re: (Score:2)
And who the hell do you think I am, Mister Anonymous Coward?
So, as I thought, you don't understand either that commit or the commit later on that simplified it (159b67d7).
It's not a stripe-size limitation per se; it's just a limitation on the maximum physical transfer size per I/O request, which for 99.9% of the NVMe devices out in the wild will be >= 131072 bytes and completely irrelevant for all filesystem I/O and even most softRAID I/O.
More to the point, that particular commit does not apply to the 60
Re: (Score:2)
Just so happens I have an Intel 750 in the pile, here's the issue that the linux NVMe code had to work around:
nvme3: mem 0xc7310000-0xc7313fff irq 40 at device 0.0 on pci4
nvme3: mapped 32 MSIX IRQs
nvme3: NVME Version 1.0 maxqe=4096 caps=0000002028010fff
nvme3: Model INTEL_SSDPEDMW400G4 BaseSerial CVCQ535100LC400AGN nscount=1
nvme3: Request 64/32 queues, Returns 31/31 queues, rw-sep map (31, 31)
nvme3: Interrupt Coalesce: 100uS / 4 qentries
nvme3: Disk nvme3 ns=1 blksize=512 lbacnt=781422768 cap=372GB serno=CVC
Intel Marketing Incorrect (Score:5, Interesting)
The way Intel plans on using Optane memory, yes it will most certainly improve the speed of HDs by caching but to say it will always outperform an SSD is an outright lie. For starters, if you're working with unusually large datasets it likely won't all fit in Optane memory, and unless your cache is highly intelligent and can read ahead, it's likely that things will load slowly on the first attempt. Then for laptops there's also the bonus of not destroying the HD if your laptop gets bumped the wrong way or treated with a bit of abuse while operating. If this worked so well, then Seagate's hybrid SSD/HD drives would be almost everywhere, but they aren't.
Re: (Score:3)
The way Intel plans on using Optane memory, yes it will most certainly improve the speed of HDs by caching but to say it will always outperform an SSD is an outright lie.
Also worth noting that there are SSDs that can exceed the 1.2GBps read / 280MBps write of the Optane.
For instance, Samsung 960 Evo claims 3.2GBps/1.8GBps. (https://www.newegg.com/Product/Product.aspx?Item=N82E16820147595&cm_re=pcie_ssd-_-20-147-595-_-Product)
Requires PCIe 3.0 x4. I work for neither Samsung nor Newegg.
Re: (Score:3)
You're doing it wrong. Rather than looking for a good shot at just the right moment, you shoot lots of pictures hoping at least one out of the hundred looks decent. And you keep the unused 99 others around because you're too lazy to erase them all.
Re: (Score:2)
That sounds like my wife. She will take about 20 photos of the exact same shot from the exact same angle to try and get the best picture and not delete a single one.
I, on the other hand will take three photos from different angles- and then more often than not, I will delete all three photos.
Re: (Score:2)
She will take about 20 photos of the exact same shot from the exact same angle to try and get the best picture and not delete a single one.
I've done that, but usually with a tripod-mounted camera, and there it isn't to pick the best one. When I do that I am planning on combining them and doing things like focus stacking [wikipedia.org], HDR [wikipedia.org], or super resolution [wikipedia.org] photography, or a combination of them. For film I will also scan the negatives multiple times and combine them to reduce the noise and produce images closer to the advertised resolution of the film scanner than can otherwise be achieved. Yes, I have some photographs where I am getting 60-70 me
Re: (Score:2)
Those people are turning their cameras on more often than you do.
HTH.
Being confused... (Score:2)
They are saying that an SSD cache for an HDD is rare because most people only have one device, but somehow, by being more expensive per GB, this has a better chance of becoming a common configuration? This pitch is sufficiently convoluted that I can't help but wonder how worried/challenged they must be to find a wider market for the technology, given the price point.
This seems to be an unfortunate reality of PC storage, the vast majority of the market is entrenched in 'good enough'. Even NVMe is a relative rarity, des
Re: (Score:2)
Motherboard vendors are just now, finally, starting to put M.2 connectors on the motherboard. Blame Intel for the slow rate of adoption. Intel came out with three different formats, all basically incompatible with each other, and created mass confusion.
But now, finally, mobo vendors are settling on a single PCIe-only M.2 format. Thank god. They are finally starting to put one or more M.2 slots and finally starting to put on U.2 connectors for larger NVMe SSDs. Having fewer SATA ports on the mobo is no
Re: (Score:2)
I've seen M.2 modules for a while, but overwhelmingly they are still SATA; M.2 has had PCIe capability, but it has been largely ignored by the device makers.
One challenge with the PCIe connectivity is that 4 lanes of PCIe is an awful lot to spare for a single device, and there isn't a lot of urgent need for better SSD performance, interestingly enough.
Terrible article summary (Score:5, Interesting)
Intel is marketing the Optane Memory M.2 modules as caches for hard drives.
"Lather, rinse, repeat. With each duplicate task, the launching speed accelerated. The load time for Gimp, for example, dropped from about 14 seconds to 8 seconds, and then to 3 or 4 seconds as the Optane Memory cached the task."
That's only speeding up accesses for repeated tasks (which, granted, there are many of).
I think the problem Intel found is that Optane memory is too expensive right now in larger sizes. They came up with this cache module as their best way to market it. Is someone really going to spend $77 for a 32GB cache device when they can just spend $99 for a 256GB SSD?
Re: (Score:2)
If they already own a 1 or 2TB drive that is half full, it makes some sense.
Re:Terrible article summary (Score:5, Interesting)
Actually, if I were building another PC soon, I'd do exactly that. Get a 2TB drive cheap ($50-60) and then this for $77. Cheaper than a $99 SSD plus the same hard drive, and I don't need to worry about a "very large" %APPDATA% directory or have to configure where my media lives, which (large) games are on my SSD versus not, etc. I'm willing to do that now, but I'd be glad to not have to worry about all of that. Just put it all on "C" and then let the Intel "magic" do its job for what I'm running most frequently.
It's the "just make it simple" approach which is good.
Re: (Score:2)
It's the "just make it simple" approach which is good.
But you're adding a whole disk, and also using spinning rust. How is that making it simple?
Re: (Score:2)
By plugging one more thing into a slot that exists but is currently unused, he can avoid trying to migrate all the data on the 2TB spinning rust drive to an SSD, but still get most of the benefits of having the SSD.
Re: (Score:2)
Except that the proposal was to build a new system with a 2TB disk, not to migrate one from an older system.
Your proposed case might make sense.
Re: (Score:2)
Yup. My main $1900 SSD array does about 300MBps (SATA 3 drives).
I'll absolutely spend $80 to put an Optane piece in, split between ZFS log and cache devices, to pump up the performance 20% or so.
Too bad that Intel's PCIe lanes suck on their deskt (Score:2)
Too bad that Intel's PCIe lanes suck on their desktop CPUs.
AMD has x16 or x8/x8 (video) + x4 (storage) + USB 3.x on die + x4 chipset link, versus Intel with x16 or x8/x8 (video) + x4 chipset link.
What is it useful for? (Score:2)
Having a hard time imagining the use case for this.
For consumer gear, almost any SSD sold today will be faster than someone would ever need. Just use that as a cache and save some money.
For pro/enthusiast gear, money would probably be better invested in simply getting more RAM -- with 32GB, in many cases I have 20GB or more of that being used as a filesystem cache. Cache tends to very rapidly exhibit diminishing returns, to the point where I doubt I'd even notice an extra 32GB sandwiched between my RAM and SSD
Optane is cool (Score:5, Informative)
Re: (Score:1)
Re: (Score:2)
Insulator breakdowns on circuit boards happen less often these days, but they are still prevalent in electrolytic caps and anything with windings (transformers, inductors, DC motors, etc.), though it can take 20-50 years to happen and depends on conditions. And the failure mode depends too.
Generally speaking, any component with an insulator which is getting beat up is subject to the issue.
Circuit boards got a lot better as vendors switched to solid state caps. Electrolytics tend to dry out and little arc-th
Why buy SSD when you can get higher capacity IDE? (Score:2)
All new storage technologies start with a significant price premium vs established technology.
$77 for 32GB is not intended for photos and videos (which is all consumers think about); it's intended for servers which need high speed but not a great deal of storage space per drive. $2 per GB is roughly what we saw with SSDs when they first came out.
For someone running a home server, these drives are a feasible replacement for their existing database and web storage to get much better performance.
For commerc
solid state cache for a hard drive? (Score:3)
So far, having a solid-state cache for a hard drive is an idea which looks great on paper, but practically everything that has been offered shows performance - and we're talking about real workloads and real user experience - closer to the hard drive than to the solid-state device. IMHO, since, apparently, we have a fairly large number of cache misses or some other anomalies, having a solid-state cache which is 1000x faster than the traditional NAND-based one won't make too much difference.
On the other hand, having a solid-state device which is only 10 times slower than DDR would make it excellent backing for virtual memory. You can put 64GB of DDR4 in your server and then get a 350GB slab of Optane. For all practical purposes you have 350GB of main memory. Swapping the working sets in and out would happen, for all practical purposes, instantly. But of course that's a solution for the data center, not for the regular user.
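As a rough illustration of that "big slab as near-memory" idea (the mount point and sizes here are hypothetical, and in practice you could simply add the device as a swap partition instead):

/* Minimal sketch of using a fast NVMe/Optane-backed file as an extended
 * working set via mmap. Path and size are hypothetical. Pages are faulted
 * in from the device on demand and written back by the kernel, so the
 * mapping behaves like slow-but-huge RAM. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char  *path = "/optane/workset.bin";       /* hypothetical mount */
    const size_t size = 64ULL * 1024 * 1024 * 1024;  /* 64GB slab */

    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, (off_t)size) < 0) { perror("ftruncate"); return 1; }

    char *mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    mem[0] = 1;        /* touching a page costs one low-latency device I/O */
    mem[size - 1] = 2; /* far pages are paged in lazily as well */

    munmap(mem, size);
    close(fd);
    return 0;
}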
Re: (Score:2)
So far, having a solid-state cache for a hard drive is an idea which looks great on paper, but practically everything that has been offered shows performance - and we're talking about real workloads and real user experience - closer to the hard drive than to the solid-state device. IMHO, since, apparently, we have a fairly large number of cache misses or some other anomalies, having a solid-state cache which is 1000x faster than the traditional NAND-based one won't make too much difference.
You can get SSD-like boot times, but that is about it; the rest is HDD-like
Re: (Score:2)
You say it doesn't make too much difference, but you clearly haven't played with it for a little while. It's not a miracle, but it is quite a difference.
1.2GBps (Score:2)
DDR3-1600 RAM runs at 12.8GB/s. If we wanted to read at 1.2GB/s, couldn't we have a RAM chip, some fancy logic, and a delay line? That is, continuously clock the RAM contents around the delay line and then wait for it to come back in when you want to read it out.
Come to think of it, that just adds read latency; once your patch of delay line comes around you can read it at 12.8GB/s.
Probably costs a ton of power, and of course it's volatile, but if 9/10ths of the memory is on the bus you get a lot of value for
AKA 3D Xpoint (Score:2)