
With Optane Memory, Intel Claims To Make Hard Drives Faster Than SSDs (pcworld.com)

SSDs are generally faster than hard drives, but they are also usually more expensive. Intel wants to change that with its new Optane Memory lineup, which it claims is faster and better performing than SSDs without requiring customers to break the bank. From a report on PCWorld: Announced Monday morning, these first consumer Optane-based devices will be available April 24 in two M.2 trims: a 16GB model for $44 and a 32GB Optane Memory device for $77. Both are rated for crazy-fast read speeds of 1.2GBps and writes of 280MBps. [...] With a 128GB SATA SSD priced at roughly $50 to $60 today, you may rightly wonder why Optane Memory would be worth the bother. Intel says most consumers just don't want to give up the capacity for their photos and videos. PC configurations with both a hard drive and an SSD, while standard for higher-end PC users, aren't popular with newcomers. Think of the times you've had friends or family fill up the boot drive with cat pictures while the secondary drive sat nearly empty. Intel Optane Memory would give that mainstream user the same or better performance as an SSD, with the capacity advantage of the 1TB or 2TB drive they're used to. Intel claims Optane Memory performance is as good as or better than an SSD's, offering latency that is orders of magnitude better and the ability to reach peak throughput at much lower queue depths.
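A quick sanity check on the economics, using only the prices quoted above (a few lines of Python doing illustrative arithmetic, not anything from Intel or PCWorld):

    # Cost-per-gigabyte comparison using the prices quoted in the summary.
    devices = {
        "Optane Memory 16GB": (44, 16),
        "Optane Memory 32GB": (77, 32),
        "128GB SATA SSD (low end)": (50, 128),
        "128GB SATA SSD (high end)": (60, 128),
    }
    for name, (price_usd, capacity_gb) in devices.items():
        print(f"{name}: ${price_usd / capacity_gb:.2f}/GB")
    # Optane lands around $2.40-2.75/GB versus roughly $0.40-0.47/GB for
    # commodity SATA flash, which is why Intel pitches it as a cache in
    # front of a big hard drive rather than as primary storage.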


  • But (Score:2, Funny)

    Wouldn't SSDs be more energy efficient?

  • by Wulf2k ( 4703573 ) on Monday March 27, 2017 @01:12PM (#54120257)

    So these high-priced, low-capacity drives are meant to fill the need for low-priced, high-capacity drives?

    Shouldn't the summary at least attempt to fill in the gaps here?

    • They might fill that need eventually, but until then the R&D costs need to be recouped. So let's look forward to a few years from now, when the people who believed this marketing crap have bought these devices and, in doing so, made them cheaper.

      • by Archangel Michael ( 180766 ) on Monday March 27, 2017 @01:38PM (#54120487) Journal

        A lot of products flat out fail trying to recover R&D expenses. I am not saying this is one of those, as Intel has huge resources behind any tech it brings to market.

        The idea here (in the long run) is that drives and "memory" become the same space: instant-on, fast access to non-volatile RAM, with RAM becoming the equivalent of a fourth tier of processor cache.

        I've long predicted that the memory space is going to be flattened out and everything is going to be mapped as one big logical drive, tiered by how fast frequently needed data can be accessed: closer is faster, further is slower.

        • With 64-bit memory addresses, there's no need to differentiate memory from drive space. Just let the swap manager decide what goes where in the physical world, and each process gets its own dedicated pages of a single memory space.
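          (For illustration: memory-mapping a file already gives a small taste of that unified view, since the kernel pages data between RAM and the drive on demand. A minimal Python sketch; "scratch.bin" is just a placeholder file name:)

            import mmap

            # Map an ordinary file straight into the process address space.
            size = 1 << 20  # 1 MiB
            with open("scratch.bin", "wb") as f:
                f.truncate(size)

            with open("scratch.bin", "r+b") as f:
                mem = mmap.mmap(f.fileno(), size)
                mem[0:5] = b"hello"     # looks like a plain memory write...
                print(bytes(mem[0:5]))  # ...but the kernel backs it with the file
                mem.close()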

        • by swb ( 14022 )

          I think we're *eventually* going to wind up with a unified memory technology that flattens the memory space, but I don't think Optane is it.

          When this was first a thing, the Optane access times were a couple of orders of magnitude off RAM. It really read like a newer/better/faster version of existing flash storage media. Of course the critical thing is "Can you make it price competitive with existing NAND?"

          If they can't, it's going to be a tough sell. Existing NAND storage has gotten to be fast, durable,

          • by Agripa ( 139780 )

            When this was first a thing, the Optane access times were a couple of orders of magnitude off RAM.

            Optane access times are still too slow to replace DRAM.

            While you *can* use faster storage in front of slower capacity storage as a cache, existing NAND is so cheap now that everything is migrating to flash.

            Caching works, but it's complex and has overhead penalties, which is one reason why all flash storage has grown in popularity. The consumer wants one drive, not two, and even the enterprise wants speed and simplicity.

            I'm curious what Intel's problem is.

            Access times on Optane are such that these drives can support their maximum throughput at low queue depths, unlike NAND flash, which requires a large number of queued transactions. In this respect, Optane requires *less* caching and buffering than NAND and apparently less processing in its translation tables. Is that enough? I do not know.

            As a form of slow (but faster and lower latency than NAND flash) non-volatile RAM (random access memory) in the tra
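            (To put rough numbers on the queue-depth point via Little's Law; the latencies below are assumptions for illustration, not vendor specs. Python used only for the arithmetic:)

              # Little's Law: queue depth needed = target IOPS * per-request latency.
              target_iops = 300_000        # ~1.2 GB/s of 4KB reads
              latency_s = {"Optane (assumed 10us)": 10e-6, "NAND (assumed 80us)": 80e-6}
              for name, lat in latency_s.items():
                  print(f"{name}: queue depth ~{target_iops * lat:.0f}")
              # Optane needs a queue depth of ~3 to saturate; NAND needs ~24.
              # Light desktop workloads rarely queue that deep, which is the claim above.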

        • by kfh227 ( 1219898 )
          More and more memory will be moved on die also. 50 years from now, we'll probably just have a single die that is the computer..
          • by Agripa ( 139780 )

            More and more memory will be moved on die also. 50 years from now, we'll probably just have a single die that is the computer..

            No for two reasons:

            1. Compare the amount of die area that the DRAM takes in a system with a reasonable amount of memory. It is way too much to be integrated with the CPU die.
            2. High performance logic and bulk DRAM processes are different. Also operating the DRAM at the temperature of the CPU is a problem although acceptable in some cases.

            The closest you may get is integrating the DRAM as part of a hybrid or multichip module; however, this will only work for systems with low memory requirements. GPUs are st

        • It would depend on the relative latency and other characteristics. XPoint is definitely not it, because XPoint can't handle unlimited writing. But suppose in some future we do have a non-volatile storage mechanism that has effectively unlimited durability, like RAM, but which is significantly more dense, like XPoint.

          In that situation I can see systems supporting a chunk of that sort of storage as if it were memory.

          Latency matters greatly here for several reasons. First, I don't think XPoint is quite fast

        • by Anonymous Coward

          This is pretty much how computers used to be. Just a flat memory space and that's it. Lots of early computers ran the OS out of ROM, and all user data was stored in RAM. Cartridge-based game systems simply map the cartridge ROM into the memory space. Before cheap flash storage became available, early Palms and Windows CE devices stored user data and installed programs in battery-backed DRAM - and even had user-added programs specially compiled so they could be executed in place (since they were already stored in fast

        • That's how the AS/400 works: a single flat address space, every object with a permanent globally unique pointer, auto-loaded on reference.
        • The idea here (in the long run), is that Drives and "memory" become the same space. Instant on, fast access to Nonvolatile RAM, and RAM becomes equivalent to 4 tier processor cache.

          This idea terrifies me. Currently, a reboot fixes everything but hardware issues. Once this goes live, only reinstalling from scratch will fix things.

    • by Ecuador ( 740021 ) on Monday March 27, 2017 @01:33PM (#54120441) Homepage

      Yeah, it is not clear from the summary; reading it, I thought it was about hybrid drives, but the sizes don't make sense.
      So these are M.2 expansion cards that offer a big and very fast cache for your existing hard drive.

      • So it is a cache that sits on the motherboard somewhere instead of the HDD?
      • Intel dabbled in this (as did others) years ago when SSDs were too small for most people. As far as I know, it was kinda shitty and only kinda worked and everyone abandoned it because hybrid drives were simpler (even though they too sucked) and SSDs kept getting bigger, faster, and cheaper.

        They called it "Smart Response Technology" when it launched. Maybe it's back? Maybe it never went away? Maybe Windows ReadyBoost has risen from the grave? (I've NEVER seen ReadyBoost in actual use.)

        • by Jamu ( 852752 )
          I used it for a bit on my desktop machine. My OS was on a newer - at the time - SSD, and my old SSD got used as a cache for my HDD. The cache worked very well, so well, in fact, that my system would occasionally pause while the HDD spun up after a cache miss. However, it wasn't long before I'd switched to just SSDs for my desktop machine and the HDD got stuffed in a NAS (where I wouldn't have to listen to it).
          • If it actually worked very well you wouldn't have noticed it pausing while it waited after a cache miss. Any cache can only help by so much. In the case of hybrid drives, I never understood why drive manufacturers used such a small amount of NAND, besides cost. Sure, it is expensive to use. But if you put more on there I'll pay more, because it will perform better more often.

            • by Jamu ( 852752 )
              The cache worked so well that the HDD would spin down because it wasn't being accessed. Ideally I'd have stopped the HDD spinning down, but at the time it wasn't too much of a problem. Obviously the cache can't provide data it doesn't have, and this can result in processes waiting for the HDD to spin up. The cache was a 64GB SSD, although I can't remember if RST used all of that or only about half.
        • by Kjella ( 173770 )

          Intel dabbled in this (as did others) years ago when SSDs were too small for most people. As far as I know, it was kinda shitty and only kinda worked and everyone abandoned it because hybrid drives were simpler (even though they too sucked) and SSDs kept getting bigger, faster, and cheaper. They called it "Smart Response Technology" when it launched. Maybe it's back? Maybe it never went away? Maybe Windows ReadyBoost has risen from the grave? (I've NEVER seen ReadyBoost in actual use.)

          It's the same as far as I understand, just optimized for a lower-latency, higher-performance SSD. But to be honest, except for gamers I think almost everyone has enough space on the SSD these days. And even most gamers could, if Steam only offered them two storage areas so they could put 1GB of a game on the SSD and the other 29GB of media files on an HDD. I've gone all SSD anyway even though it's a waste.

    • You're already modded 5 for this, but you deserve extra bonus mod points.
  • Intel is blowing (Score:4, Insightful)

    by m.dillon ( 147925 ) on Monday March 27, 2017 @01:13PM (#54120263) Homepage

    Smoke. Total and complete nonsense. Why would I want to buy their overpriced Optane junk versus a Samsung 951* or 960* NVMe drive? Far more storage for around $115-$130, 1.4 GBytes/sec consistent read performance, decent write performance, and decent durability.

    P.S. the Intel 600P NVMe drive is also horrid, don't buy it.

    http://apollo.backplane.com/DF... [backplane.com]

    -Matt

    • by Anonymous Coward

      You apparently either didn't read or didn't comprehend the article. These devices are initially intended for use in hybrid drives - replacing the SSD component of an SSD/HD hybrid. The claim is that the resulting combo will have better than SSD performance at spinning disk size/price points.

      And if the approach appears viable, the costs will come down.

      • Hybrid drives are a dead segment. If anything, this is geared for their "Smart Response Technology" (which I had assumed was abandoned) and idiots such as OEMs and those that buy from OEMs.

    • Comment removed based on user account deletion
      • Re:Intel is blowing (Score:4, Informative)

        by m.dillon ( 147925 ) on Monday March 27, 2017 @02:49PM (#54121067) Homepage

        Right. They are trying to market it as something cool and new, which would be great except for the fact that it isn't cool OR new. A person can already use ANY storage device to accelerate any OTHER storage device. There are dozens of 'drive accelerators' on the market and have been for years. So if a person really wanted to, they could trivially use a small NAND flash based NVMe SSD to do the same thing, and get better results because they'll have a lot more flash. A person could even use a normal SATA SSD for the same purpose.

        What Intel is not telling people is that NOBODY WILL NOTICE the lower latency of their XPoint product. At (I am assuming for this product) 10µs, the Intel XPoint NVMe has roughly 1/6 the latency of a Samsung NVMe device. Nobody is going to notice the difference between 10µs and 60µs. Even most *server* workloads wouldn't care. But I guarantee that people WILL notice the fact that the Intel device is caching much less data than they could be caching for the same money with a NAND-based NVMe SSD or even just a SATA SSD.

        In other words, Intel's product is worthless.

        -Matt
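        (Back-of-the-envelope on why the latency gap is hard to perceive; the request count and latencies below are assumptions, not measurements:)

          # Extra wall-clock time NAND latency costs versus XPoint across an app launch.
          reads = 2_000                  # assumed synchronous small reads at launch
          latency_xpoint_s = 10e-6       # assumed XPoint latency
          latency_nand_s = 60e-6         # assumed NAND NVMe latency
          extra_s = reads * (latency_nand_s - latency_xpoint_s)
          print(f"Extra time on NAND: {extra_s * 1000:.0f} ms")   # ~100 ms
          # A tenth of a second over a whole launch is below what most users notice,
          # while a smaller cache that misses more often is very noticeable.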

        • Comment removed based on user account deletion
        • And, of course, any Linux or BSD operating system will use all available memory to cache data from storage anyway. I guess Windows needs a little more help to do that.

          This certainly shows up in, for example, Chrome startup times. It takes around 4 seconds from a hard drive, uncached, 1 second from an SSD, 1 second from an NVMe drive, and presumably 1 second from any other form of storage, because Chrome itself needs a bit of CPU time to initialize itself, not to mention the time it takes to load a tab (mini

    • P.S. the Intel 600P NVMe drive is also horrid, don't buy it.

      http://apollo.backplane.com/DF... [backplane.com]

      -Matt

      According to the Linux kernel, Intel NVMe devices have the block stack stick to certain alignments for performance reasons. Now quoting the linked article: "All tests were done on a DragonFlyBSD". I doubt Intel did the same enablement there as they did for Linux.

      • I think you are a little confused by Intel marketing speak. Actually, you are a lot confused.

        -Matt

        • I think you are a little confused by Intel marketing speak. Actually, you are a lot confused.

          -Matt

          What the heck are you talking about? Intel devices have a quirky alignment requirement that they made work well in Linux (it's documented in the git logs), but Intel neglected BSD. What part of this do you consider to be marketing?

          • Intel devices have quirks, but I think you are mixing apples and oranges here. All modern filesystems have used larger alignments for ages. The only real issue was that the original *DOS* partition table offset the base of the slice the main filesystem was put on by a weird multiple of 512 bytes, which was not even 4K-aligned.

            This has not been an issue for years. It was fixed long ago on DOS systems and does not exist at all on EFI systems, regardless of the operating system.

            At the same time, all

            • by Agripa ( 139780 )

              Intel devices have quirks, but I think you are mixing apples and oranges here. All modern filesystems systems have used larger alignments for ages. The only real issue was that the original *DOS* partition table offset the base of the slice the main filesystem was put on by a weird multiple of 512 bytes which was not even 4K aligned.

              NTFS made the same mistake so it is hardly fair to pick on DOS for this behavior.

          • Maybe you should point me at the commit ID you are referring to; then I can address your comment more directly. I can tell you straight out, even without seeing it, that you are probably misinterpreting it.

            -Matt

  • by foxalopex ( 522681 ) on Monday March 27, 2017 @01:15PM (#54120279)

    The way Intel plans on using Optane Memory, yes, it will most certainly improve the speed of HDs by caching, but to say it will always outperform an SSD is an outright lie. For starters, if you're working with unusually large datasets, it likely won't all fit in Optane memory, and unless your cache is highly intelligent and can read ahead, it's likely that things will load slowly on the first attempt. Then for laptops there's also the bonus with an SSD of not destroying the HD if your laptop gets bumped the wrong way or treated with a bit of abuse while operating. If this worked so well, then Seagate's hybrid SSD/HD drives should be in almost everything, but they aren't.

    • The way Intel plans on using Optane memory, yes it will most certainly improve the speed of HDs by caching but to say it will always outperform an SSD is an outright lie.

      Also worth noting that there are SSDs that can exceed the 1.2GBps read / 280MBps write of the Optane.
      For instance, Samsung 960 Evo claims 3.2GBps/1.8GBps. (https://www.newegg.com/Product/Product.aspx?Item=N82E16820147595&cm_re=pcie_ssd-_-20-147-595-_-Product)
      Requires PCIe 3.0 x4. I work for neither Samsung nor Newegg.

  • They are saying that an SSD cache in front of an HDD is rare because most people only have one device, but somehow, by being more expensive per GB, this has a better chance of becoming a common configuration? This pitch is sufficiently convoluted that I can't help but wonder how worried/challenged they must be about finding a wider market for the technology, given the price point.

    This seems to be an unfortunate reality of PC storage: the vast majority of the market is entrenched in 'good enough'. Even NVMe is a relative rarity, des

    • Motherboard vendors are just now, finally, starting to put M.2 connectors on the motherboard. Blame Intel for the slow rate of adoption. Intel came out with three different formats, all basically incompatible with each other, and created mass confusion.

      But now, finally, mobo vendors are settling on a single PCIe-only M.2 format. Thank god. They are finally starting to include one or more M.2 slots, and finally starting to put on U.2 connectors for larger NVMe SSDs. Having fewer SATA ports on the mobo is no

      • by Junta ( 36770 )

        I've seen M.2 modules for a while, but overwhelmingly they are still SATA; M.2 has had PCIe capability, but it has been largely ignored by the device makers.

        One challenge with the PCIe connectivity is that four PCIe lanes are an awful lot to ask to spare for a single device, and there isn't a lot of urgent need for better SSD performance, interestingly enough.

  • by jlv ( 5619 ) on Monday March 27, 2017 @01:24PM (#54120367)

    Intel is marketing the Optane Memory M.2 modules as caches for hard drives.

    "Lather, rinse, repeat. With each duplicate task, the launching speed accelerated. The load time for Gimp, for example, dropped from about 14 seconds to 8 seconds, and then to 3 or 4 seconds as the Optane Memory cached the task."

    That's only speeding up accesses for repeated tasks (of which, granted, there are many).

    I think the problem Intel found is that Optane memory is too expensive right now in larger sizes. They came up with this cache module as their best way to market it. Is someone really going to spend $77 for a 32GB cache device when they can just spend $99 for a 256GB SSD?
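    (The warm-up pattern in that quote is what any read cache produces: the first run pays hard-drive latency, later runs mostly hit. A toy Python simulation with entirely made-up latencies and sizes:)

      import random

      CACHE_BLOCKS = 1_000           # cache capacity, in blocks (assumed)
      HDD_MS, CACHE_MS = 10.0, 0.1   # per-block latencies (assumed)

      cache = set()
      launch_blocks = random.sample(range(5_000), 1_200)  # blocks one launch touches

      for launch in range(1, 4):
          total_ms = 0.0
          for block in launch_blocks:
              if block in cache:
                  total_ms += CACHE_MS
              else:
                  total_ms += HDD_MS
                  if len(cache) < CACHE_BLOCKS:
                      cache.add(block)
          print(f"launch {launch}: {total_ms / 1000:.1f} s")
      # Launch 1 is all misses (~12 s); later launches drop to ~2 s, but only for
      # the blocks that actually fit in the cache -- the same shape as the Gimp numbers.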

    • If they already own a 1 or 2TB drive that is half full, it makes some sense.

    • by Erioll ( 229536 ) on Monday March 27, 2017 @01:51PM (#54120577)

      Actually, if I were building another PC soon, I'd do exactly that. Get a 2TB drive cheap ($50-60) and then this for $77. Cheaper than a $99 SSD plus the same hard drive, and I don't need to worry about a "very large" %APPDATA% directory, or about configuring where my media goes, or which (large) games are on my SSD versus not, etc. I'm willing to do that now, but I'd be glad to not have to worry about all of that. Just put it all on "C" and then let the Intel "magic" do its job for what I'm running most frequently.

      It's the "just make it simple" approach which is good.

      • It's the "just make it simple" approach which is good.

        But you're adding a whole disk, and also using spinning rust. How is that making it simple?

        • By plugging one more thing into a slot that exists but is currently unused, he can avoid trying to migrate all the data on the 2TB spinning rust drive to an SSD, but still get most of the benefits of having the SSD.

          • Except that the proposal was to build a new system with a 2TB disk, not to migrate one from an older system.

            Your proposed case might make sense.

      • Yup. My main $1900 SSD array does about 300MBps (SATA 3 drives).

        I'll absolutely spend $80 to put an Optane device in, split between ZFS log and cache devices, to pump up the performance 20% or so.

  • Too bad that Intel's PCIe lanes suck on their desktop CPUs.

    AMD has x16 or x8/x8 (video) + x4 (storage) + USB 3.x on die + an x4 chipset link, versus Intel with x16 or x8/x8 (video) + an x4 chipset link.

  • Having a hard time imagining the use case for this.

    For consumer gear, almost any SSD sold today will be faster than someone would ever need. Just use that as a cache and save some money.

    For pro/enthusiast gear, money would probably be better invested in simply getting more RAM -- with 32GB, in many cases I have 20GB or more of that being used as a filesystem cache. Cache tends to exhibit diminishing returns very rapidly, to the point where I doubt I'd even notice an extra 32GB sandwiched between my RAM and SSD

  • Optane is cool (Score:5, Informative)

    by freeze128 ( 544774 ) on Monday March 27, 2017 @01:32PM (#54120431)
    Optane is Intel's name for 3D XPoint storage. Right now, it's more expensive than NAND storage, and is only available in smaller capacities. That is why they are using it as a cache for conventional hard drives. When it becomes cheaper to produce, and available in higher capacities, it's going to be great. It will be way faster than NAND, and you won't have to worry about wear-levelling because it doesn't suffer from insulator breakdown.
    • I've never heard of insulator breakdown in electronic components. I do know about electromigration eventually ruining a junction.
      • Insulator breakdowns on circuit boards happen less often these days, but they are still prevalent in electrolytic caps and anything with windings (transformers, inductors, DC motors, etc.), though it can take 20-50 years to happen and depends on conditions. And the failure mode varies too.

        Generally speaking, any component with an insulator which is getting beat up is subject to the issue.

        Circuit boards got a lot better as vendors switched to solid state caps. Electrolytics tend to dry out and little arc-th

  • All new storage technologies start with a significant price premium vs established technology.

    A $77, 32GB device is not intended for photos and videos (which is all consumers think about); it's intended for servers that need high speed but not a great deal of storage space per drive. $2 per GB is roughly what we saw with SSDs when they first came out.

    For someone running a home server, these drives are a feasible replacement for their existing database and web storage to get much better performance.

    For commerc

  • by porky_pig_jr ( 129948 ) on Monday March 27, 2017 @02:25PM (#54120875)

    So far, a solid-state cache for a hard drive is an idea that looks great on paper, but practically everything that has been offered shows performance - and we're talking about real workloads and real user experience - closer to the hard drive than to the solid-state device. IMHO, since we apparently have a fairly large number of cache misses or some other anomalies, having a solid-state cache that is 1000x faster than the traditional NAND-based one won't make too much difference.

    On the other hand, a solid-state device that is only 10 times slower than DDR would make excellent virtual-memory storage. You can put 64GB of DDR4 in your server and then add a 350GB slab of Optane. For all practical purposes you have 350GB of main memory. Swapping working sets in and out would happen, for all practical purposes, instantly. But of course that's a solution for the data center, not for the regular user.
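    (A rough effective-access-time calculation for that DRAM-plus-Optane-swap scenario; the latencies below are illustrative assumptions, not measured figures:)

      # EAT = hit_rate * t_dram + (1 - hit_rate) * t_swap
      t_dram_ns, t_optane_ns, t_nand_ns = 100, 10_000, 100_000  # assumed latencies
      for hit_rate in (0.99, 0.999):
          eat_optane = hit_rate * t_dram_ns + (1 - hit_rate) * t_optane_ns
          eat_nand = hit_rate * t_dram_ns + (1 - hit_rate) * t_nand_ns
          print(f"hit rate {hit_rate}: Optane-swap {eat_optane:.0f} ns, NAND-swap {eat_nand:.0f} ns")
      # At a 99.9% hit rate the Optane-backed system averages ~110 ns per access
      # versus ~200 ns with NAND swap -- close enough to "all DRAM" for many workloads.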

    • So far, a solid-state cache for a hard drive is an idea that looks great on paper, but practically everything that has been offered shows performance - and we're talking about real workloads and real user experience - closer to the hard drive than to the solid-state device. IMHO, since we apparently have a fairly large number of cache misses or some other anomalies, having a solid-state cache that is 1000x faster than the traditional NAND-based one won't make too much difference.

      You can get SSD-like boot times, but that is about it; the rest is HDD-like.

    • I have a Drobo with 256GB of flash. The array has stopped crashing my PC since I added the SSD cache - the timeouts used to come before the disks spun up for the Windows network file system. Copying files to the array now happens at 80% of gigabit Ethernet speed.

      You say it doesn't make too much difference, but you clearly haven't played with it for a little while. It's not a miracle, but it is quite a difference.
  • DDR3-1600 RAM runs at 12.8GB/s. If we wanted to read at 1.2GB/s, couldn't we have a RAM chip, some fancy logic, and a delay line? That is, continuously clock the RAM contents around the delay line and then wait for them to come back in when you want to read them out.

    Come to think of it, that just adds read latency; once your patch of the delay line comes around, you can read it at 12.8GB/s.

    probably costs a ton of power, and of course it's volatile, but if 9/10ths of the memory is on the bus you get a lot of value for

  • You know, the tech they said would reach the market in 2016, then late 2016, then December 2016, then early 2017, and that still doesn't show up on shopping.google.com today. When you miss your announced release dates that often, I guess the MO is to change the name and hope nobody notices.
