
Everspin Launches Non-Volatile MRAM That's 500 Times Faster Than NAND

Posted by Soulskill
from the 500-times?-that's-almost-600-times! dept.
MrSeb writes "Alternative memory standards have been kicking around for decades as researchers have struggled to find the hypothetical holy grail — a non-volatile, low-latency, low-cost product that could scale from hard drives to conventional RAM. NAND flash has become the high-speed, non-volatile darling of the storage industry, but if you follow the evolution of the standard, you'll know that NAND is far from perfect. The total number of read/write cycles and data retention if the drive isn't kept powered are both significant problems as process shrinks continue scaling downward. Thus far, this holy grail remains elusive, but a practical MRAM (Magnetoresistive Random Access Memory) solution took a step towards fruition this week. Everspin has announced that it's shipping the first 64Mb ST-MRAM in a DDR3-compatible module. These modules transfer data at DDR3-1600 clock rates, but access latencies are much lower than flash RAM, promising an overall 500x performance increase over conventional NAND."


  • by davidwr (791652) on Wednesday November 14, 2012 @06:12PM (#41985383) Homepage Journal

    non-volatile, low-latency, low-cost

    AGoodThing, AnotherGoodThing, YetAnotherGoodThing, pick any two.

  • Re:power (Score:5, Informative)

    by Anubis IV (1279820) on Wednesday November 14, 2012 @06:28PM (#41985555)

    For those curious, it performs 500x faster than NAND, costs roughly 50x more than NAND, and uses 5x more power than NAND. All in all, not too bad, considering it's new technology and is actually shipping, but it definitely has limited applications at the moment. Assuming they can get the cost down a bit or come up with a few more ideas to reduce power consumption (it's actually worse in older MRAM), it could be something interesting in the near future. I'm guessing MRAM will be showing up more and more often in the next few years, since it seems like it's finally cracked the wall between "cool in the lab" and "semi-practical" after years of being stuck.

  • by viperidaenz (2515578) on Wednesday November 14, 2012 @06:29PM (#41985561)
    The whole point of MRAM is to avoid the limited endurance of flash-type memories. Data is stored as a magnetic field. Flash stores data as an electric charge — but the method flash uses to put that charge there is destructive to the insulating layer that keeps it in place.
  • by Anonymous Coward on Wednesday November 14, 2012 @06:32PM (#41985595)

    It's actually very durable. "In contrast, MRAM requires only slightly more power to write than read, and no change in the voltage, eliminating the need for a charge pump. This leads to much faster operation, lower power consumption, and an indefinitely long 'lifetime'."

  • by Anonymous Coward on Wednesday November 14, 2012 @06:40PM (#41985661)

    Speed = 500x
    Price = 50x
    Density = 1/64x
    Power = 5x

    So what you gain in speed, you lose in density, power, and price. Still, if it makes it 500x faster to boot a device, then you could imagine this being great for the embedded market as a boot-up device where the OS resides. The only problem is the 5x power consumption requirement. Maybe the power consumption should be compared to SDRAM instead, in which case this might be a good replacement — imagine a mobile device that doesn't have to write its state out to flash when powering down completely, yet still resumes instantly. That might be where this technology finds a niche.

  • Re:So NOT Vaporware? (Score:5, Informative)

    by Areyoukiddingme (1289470) on Wednesday November 14, 2012 @06:54PM (#41985821)

    They claim it's shipping, so... yeah, a product. However, not a retail product, from the sound of it. Nobody sells a populate-your-own SSD kit or the like.

    More importantly perhaps, MRAM supposedly doesn't suffer from the page-access problem that NAND has. Individual bits can be conveniently read and written, unlike NAND, which must be written a page at a time. In addition, MRAM is supposedly much more robust than NAND, surviving many more write cycles. It hasn't existed long enough to know this for sure, but in theory these two advantages mean an SSD controller for an MRAM SSD could be vastly simpler than the ones required for NAND: no need for wear-leveling or page-rewrite logic. This should both reduce the expense of SSDs and increase their real-world performance and reliability.
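    To make the page-access point concrete, here's a minimal back-of-the-envelope sketch. The page size and the "one page rewritten per scattered byte" worst case are illustrative assumptions, not Everspin specifics:

    ```python
    PAGE_SIZE = 4096  # hypothetical NAND page size in bytes; real parts vary

    def nand_bytes_touched(scattered_byte_writes):
        # Worst case for page-based NAND: each small scattered write forces
        # a read-modify-write of an entire page (ignoring any controller
        # write-combining or wear-leveling cleverness).
        return scattered_byte_writes * PAGE_SIZE

    def mram_bytes_touched(scattered_byte_writes):
        # Byte-addressable MRAM: only the changed bytes are actually written.
        return scattered_byte_writes

    print(nand_bytes_touched(100))  # 409600 bytes touched for 100 byte-writes
    print(mram_bytes_touched(100))  # 100 bytes touched
    ```

    That 4000x write-amplification gap (under these assumptions) is exactly why the NAND controller needs page-rewrite logic and the MRAM controller wouldn't.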

    However, while the article summary blathers about "from hard drives to main memory", this is not a competitor to modern DDR SDRAM. Assuming the quoted 500X faster than NAND is accurate, MRAM latency should be on the order of 100 nanoseconds for a random read. (NAND read latency is on the order of 50 microseconds.) DDR SDRAM random read latency is on the order of 22 nanoseconds.
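    The arithmetic behind those numbers, using the latency figures quoted above (all are order-of-magnitude estimates, not measured values):

    ```python
    nand_read_ns = 50_000               # ~50 microseconds: typical NAND random read
    mram_read_ns = nand_read_ns / 500   # claimed 500x speedup over NAND
    ddr3_read_ns = 22                   # typical DDR3 SDRAM random read

    print(mram_read_ns)                 # 100.0 ns
    print(mram_read_ns / ddr3_read_ns)  # ~4.5x slower than DDR3
    ```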

    Having said that, it is comparable with SDRAM from a decade ago, which probably translates directly to modern mobile devices. Low power suspend mode using MRAM instead of SDRAM could conceivably lower mobile device power consumption and improve battery life.

    If manufacturers get really silly, in theory a mobile device could be built that doesn't distinguish between its main memory and its mass storage. The two functions would be served by the same solid state circuitry. Obviously accommodating such a hardware design would give the kernel guys fits, but it could simplify things in the software a great deal, and incidentally net an interesting performance gain that's visible to users. Notably, the process of launching a program consists of nothing more than creating a stack and a heap for it somewhere--the program's code can stay right where it is. This also results in the somewhat bizarre (to modern ears) situation where suspend mode consists solely of persisting the CPU's state. Memory state is already persistent, always.

    As a final side effect, once scaled to SSD capacities, a device operating as described above effectively has an absolutely absurd amount of main memory, in theory, equivalent to the entire remaining capacity of the mass storage device.

    MRAM has been around in labs for 20 years now, so the possibility of this being a real, viable, product-ready device is reasonably high. MRAM doesn't suffer from Fusion Power Syndrome.

  • by pipatron (966506) on Wednesday November 14, 2012 @07:24PM (#41986129) Homepage

    It can actually replace both, which is pretty interesting and might change how our current computing model is built.

    There are already applications and systems in place to model the data storage like this, for example memory-mapped file I/O, where you basically tell the operating system that "please let me pretend that this huge file on the hard drive is already in RAM", and let the RAM be some sort of huge cache. The same model would apply to storage here, except we would get rid of the whole RAM layer between storage and CPU.
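    A minimal sketch of that memory-mapped model using Python's standard `mmap` module (the file path is just a temporary stand-in for "mass storage"): writes through the mapping look like plain RAM stores, yet the OS persists them to the backing file, which is essentially the illusion MRAM would make literal.

    ```python
    import mmap
    import os
    import tempfile

    # Create a small backing file -- our stand-in for mass storage.
    path = os.path.join(tempfile.mkdtemp(), "demo.bin")
    with open(path, "wb") as f:
        f.write(b"\x00" * 4096)

    # Map the file into the process address space. Reads and writes now
    # look like ordinary RAM access, with the OS paging data to/from disk.
    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), 0) as mm:
            mm[0:5] = b"hello"  # a "RAM write"...
            mm.flush()          # ...that persists to the backing file

    with open(path, "rb") as f:
        print(f.read(5))  # b'hello'
    ```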

  • by slew (2918) on Wednesday November 14, 2012 @07:42PM (#41986295)

    Everspin has announced that it's shipping the first 64Mb ST-MRAM in a DDR3-compatible module. These modules transfer data at DDR3-1600 clock rates, but access latencies are much lower than flash RAM, promising an overall 500x performance increase over conventional NAND.

    Wait, so, is this to replace RAM (the mention of DDR3) or to replace drive storage?

    MRAM might be a potential candidate to replace current solid-state storage (NAND flash), which is itself a candidate to replace drive storage. In a system with small amounts of DRAM, MRAM might be used to replace the DRAM as well. Unfortunately, because of its current high price and low density, it is not a very good substitute for either one, except perhaps in a very small embedded system.

    These modules transfer data at DDR3-1600 clock rates, but access latencies are much lower than flash RAM

    Isn't that comparing apples (DDR3) and oranges (flash RAM)?

    Instead of implementing the slow standard flash memory electrical interface on MRAMs, they (Everspin) elected to support the same fast electrical interface that DRAMs use (DDR3). They can do this because, just like DRAM, writing data on MRAM is about as quick as reading it (which isn't the case with NAND flash). By choosing the standardized DDR3 interface, chips that might want to use these MRAMs won't have to be specially designed to do so (which wouldn't be the case if Everspin had come up with a non-standard interface). It will apparently just look like a small-capacity DRAM chip that doesn't forget when you take the power away (I'm guessing the MRAMs probably also ignore any refresh requests that come across the interface).

    The reason current flash memory electrical interfaces are slow is that flash memories have pretty slow access times and are read/written in large blocks. This led to an efficient interface that multiplexes the address and data on the same pins. DRAM, however, is accessed more randomly in smaller blocks and has separate address and data pins. This allows a higher duty cycle of data transfer on the data pins for smaller transactions: you don't have to constantly turn the bus around between sending commands and reading data, and you can pipeline new addresses on the address bus while data from older commands is transferred on the data bus.

    By targeting the DRAM interface, it appears that Everspin is positioning their chip as a DRAM+flash replacement for systems that don't require much total storage. They need to target the DRAM interface for this because you can't really do random access efficiently on the flash interfaces (but you can do block transactions on a random-access interface). In fact, in many embedded systems, the first action of the bootstrap code is to copy parts of the NAND into DRAM (for fast access). With MRAM, you could just skip this step.
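    A toy illustration of that bootstrap step (sizes and contents are made up; the bytearrays merely stand in for the physical memories):

    ```python
    # Simulated memories: a firmware image sitting in NAND, and empty DRAM.
    nand = bytes([0x42]) * 1024   # non-volatile, slow random access
    dram = bytearray(1024)        # volatile, fast

    # Conventional embedded boot: copy the image out of NAND into DRAM,
    # then execute from DRAM. This copy costs time on every power-up.
    dram[:] = nand

    # With a DDR3-interfaced MRAM holding the image, the copy disappears:
    # the CPU would fetch directly from the non-volatile array at DRAM-like
    # speeds, so boot skips straight to execution.
    print(bytes(dram) == nand)  # True
    ```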
