'UltraRAM' Breakthrough Could Combine Memory and Storage Into One (tomshardware.com) 99
Scientists from Lancaster University say that we might be close to combining SSDs and RAM into one component. "UltraRAM," as it's being called, is described as a memory technology which "combines the non-volatility of a data storage memory, like flash, with the speed, energy-efficiency, and endurance of a working memory, like DRAM." The researchers detailed the breakthrough in a recently published paper. Tom's Hardware reports: The fundamental science behind UltraRAM is that it exploits the unique properties of compound semiconductors, commonly used in photonic devices such as LEDs, lasers, and infrared detectors, which can now be mass-produced on silicon. The researchers claim that the latest incarnation on silicon outperforms the technology as tested on Gallium Arsenide semiconductor wafers. Some extrapolated numbers for UltraRAM are that it will offer "data storage times of at least 1,000 years," and its fast switching speed and program-erase cycling endurance are "one hundred to one thousand times better than flash." Add these qualities to the DRAM-like speed, energy efficiency, and endurance, and this novel memory type sounds hard for tech companies to ignore.
If you read between the lines above, you can see that UltraRAM is envisioned to break the divide between RAM and storage. So, in theory, you could use it as a one-shot solution to fill these currently separate requirements. In a PC system, that would mean you would get a chunk of UltraRAM, say 2TB, and that would cover both your RAM and storage needs. The shift, if it lives up to its potential, would be a great way to push forward with the popular trend towards in-memory processing. After all, your storage would be your memory -- with UltraRAM, it is the same silicon.
UltraRAM vs UltraRAM (Score:2, Redundant)
I take it that this 'UltraRAM' is different from the 'UltraRAM' that's been in Xilinx UltraScale devices for years, right?
only $5 per GB also nice to have raid and apple's (Score:1)
Only $5 per GB? It would also be nice to have RAID, and Apple would no doubt sell its RAID 0, locked-to-the-board storage at an even higher markup.
Been waiting for this (Score:5, Interesting)
This would be amazing - makes all sorts of persistent object databases/processing systems possible, where objects simply "live" and work in the permanent memory. Hope I live long enough to get to play with this.
Re:Been waiting for this (Score:5, Interesting)
It's about time! In 1985, my computer science teacher told us we would get this some day. I have been waiting ever since.
Re: Been waiting for this (Score:1)
We already have this. RAM from 1985 is significantly slower than a current SSD - but we choose to use even faster RAM instead.
Re: (Score:3)
Maybe, but I have doubts about the random access capabilities of SSDs, as in "Random Access Memory". Think about fetching values to populate CPU registers, said values being distributed randomly in memory. I am not sure modern SSDs would be faster than 1985 RAM for that task, but you could be right. Now, I hear SSDs are slower at random writes; think about saving CPU register values to random spots on the SSD and doing that read/write cycle millions of times every second. Not sure the SSD would be faster a
Re: (Score:3)
HDDs, SSDs, CD-ROMs, floppy disks, ... are "block devices". You cannot access a specific byte on them - you must read or write an entire block (512 bytes for older HDDs, 4 kB for recent HDDs, 2048, 2352, or 2336 bytes for CD-ROMs, depending on whether it's a data, audio, or image/video disc).
This could be useful for things like databases (optimized so that table rows fit nicely into those sectors) - databases don't write individual values anyway, they update entire rows.
Now, not even the current RAM technology is truly "random access" - t
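To make the block-granularity point concrete, here is a minimal sketch in Python (the file path is hypothetical, just standing in for a block device) of the read-modify-write dance a block device forces on you just to change one byte:

import os

BLOCK = 4096  # typical sector size for recent HDDs/SSDs

# Hypothetical demo file standing in for a block device.
fd = os.open("/tmp/blockdev_demo.img", os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, BLOCK * 8)

offset = 5000                          # the single byte we want to change
start = (offset // BLOCK) * BLOCK      # round down to its containing block
os.lseek(fd, start, os.SEEK_SET)
block = bytearray(os.read(fd, BLOCK))  # read the whole block...
block[offset - start] = 0xFF           # ...modify one byte in it...
os.lseek(fd, start, os.SEEK_SET)
os.write(fd, bytes(block))             # ...and write the whole block back
os.close(fd)

True byte-addressable memory skips the whole round trip: a single store instruction to the target address is enough.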
Re: (Score:3)
It's complicated.
An NVMe SSD can be extremely fast, especially if it has a DRAM cache. Latency lower than RAM from 1985 for stuff that is in the cache, and potentially even stuff that is in the flash if you have a good controller and a single bit-per-cell memory chip.
Practically though, cost is an issue. DRAM costs money, and you really want it to be backed up by a big capacitor in case of unexpected power failure. Of course, even if you had special non-volatile RAM, you would need to do things like atomic
Re: (Score:2)
Why have doubts about the "random access" capabilities? The distinction between "random access" and sequential access dates from an era where you either had to wait for a tape to scroll past a read head to get what you want, or wait for a physical platter to rotate the sector you care about under a head. An SSD has none of that - it can go directly to any bit you want, with the exact same latency as any other bit on the device.
Now, having doubts about the longevity and reliability over time of SSD is a
Re: (Score:2)
In 1985 typical 16 bit personal computers of the era (e.g. the Amiga 1000, IBM PC and clones) could read/write a few megabytes per second from/to RAM.
Latency wise, RAM was usually rated for zero wait states (i.e. it could provide new data on every bus cycle, DRAM refresh excepted). For a machine with a 4MHz bus that would be 250ns. I think the Amiga used 100ns DRAM, with a bus speed of about 7MHz. Back then the main bus was synchronous to the CPU and all peripherals - the ISA bus is basically just the 8086 CPU
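A quick sanity check on those cycle times, using nothing beyond the bus speeds quoted above:

for mhz in (4.0, 7.16):  # the generic 4 MHz bus and the Amiga's roughly 7 MHz bus
    print(f"{mhz} MHz bus -> {1e9 / (mhz * 1e6):.0f} ns per cycle")
# 4 MHz -> 250 ns per cycle; ~7.16 MHz -> ~140 ns, so 100 ns DRAM keeps up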
Failure Rate (Score:4, Interesting)
This would be amazing
That depends on exactly what they mean by "data storage times of at least 1,000 years". If that's the mean time to failure of one bit of memory (which would be a reasonable way to measure it in the lab, given that they haven't waited 1,000 years), then in just 1GB of memory you will have an average of 988 bits fail every hour, which would make the device much less useful, since you would need incredible ECC to reduce that to a usable level.
It's certainly a very interesting discovery, but just like with all the new battery technology we keep hearing about that never amounts to much, there is a huge gap between what works well in the lab and what scales up to work in modern machines. I wish them luck and hope it does turn out well (even though I'm a Yorkshireman! ;-) but I would not hold my breath.
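For what it's worth, that back-of-the-envelope checks out; a minimal sketch, assuming "1,000 years" really is the mean time to failure of each independent bit:

BITS = 8 * 2**30             # 1 GB of memory, counted in bits
HOURS = 1000 * 365.25 * 24   # hours in 1,000 years

failures_per_hour = BITS / HOURS   # expected bit failures each hour
print(f"{failures_per_hour:.0f}")  # ~980; the 988 above just comes from a
                                   # slightly different hours-per-year figure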
Re: (Score:2)
There have been a few operating systems that work this way, where RAM is both working memory and an object store. And by object store, I mean not a fixed-size RAM disk - it's a dynamic storage medium where the more you store, the less RAM you have.
Windows CE comes close, but it's basically still
Re: Been waiting for this (Score:2)
PalmOS was amazing to work with outside this though, in my humble opinion. :)
Though we did have to account for the regular hotsync to reinstall software; batteries seemed to be a big problem, and that was one of the major reasons we eventually switched to PocketPC - a system I hated, but one that readily supported memory cards, fixing that problem.
Re: (Score:2)
What kinds of changes do you see this making? I can't think of anything that would change (at least, nothing that would change more than a really fast hard drive). We already have persistent object databases.
Re: (Score:2)
That's essentially what things like MongoDB and OOP databases try to do, right?
Re: (Score:3)
Power savings?
Your CPU now has a memory space equal to your persistent storage space, so a system could be much closer to zero power/off when idle without quitting applications, suspending data to disk or anything else. The whole thing could just pause to a very low power state, and when you come back "on" the entire "memory" state, including applications, is just as you left it.
You'd never quit an application that was working right unless you really had to quit it -- upgrade the code or some kind of malfunction.
Re: (Score:2)
You'd never quit an application that was working right unless you really had to quit it -- upgrade the code or some kind of malfunction.
So there must be another copy in storage somewhere so you can start over if there is some kind of malfunction, right?
Re: (Score:3)
I don't know about another whole copy, but perhaps the original copy has memory segments/pages/whatever which are marked immutable and can't be changed, or they are marked copy-on-write so that in the event you "start over" you go back to the original state of those elements.
I really think that unified memory/storage will require a lot of new thinking about how a number of things are structured in a world where storage and RAM are not different entities.
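A minimal sketch of that idea in Python (hypothetical names, with dictionaries standing in for pages): writes land in an overlay while the baseline stays immutable, so "starting over" is just dropping the overlay:

class CowState:
    """Copy-on-write view over an immutable baseline snapshot."""

    def __init__(self, baseline):
        self._baseline = dict(baseline)  # the untouched "original copy"
        self._overlay = {}               # only modified entries live here

    def __getitem__(self, key):
        return self._overlay.get(key, self._baseline[key])

    def __setitem__(self, key, value):
        self._overlay[key] = value       # writes hit the overlay, not the baseline

    def reset(self):
        self._overlay.clear()            # "start over": back to the original state

An OS-level version would do the same trick with page tables instead of dictionaries, but the recovery story is identical.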
Re:Been waiting for this (Score:5, Informative)
You did live long enough, sort of. The IBM i [wikipedia.org] operating system, originally OS/400, introduced in 1988, had a single-level store. There was no "disk" or "memory", just "storage". Sure, it was backed by hard drives. But from the view of the application programs, it was just one huge piece of (virtual) memory, with a single address space.
All the magic happened in the code in the operating system: how the storage was apportioned between RAM and disk, how programs weren't able to look at objects belonging to other programs, etc. Even the CPU was abstracted away. You could move your programs from System/38 based hardware to Power based hardware, and the system would translate the program from the intermediate representation that the compiler produced to the machine code for the hardware.
Really elegant system, way ahead of its time.
Re: (Score:3)
This was in fact how PalmOS worked. Application data storage was completely record-based, and lived entirely in RAM. In the earlier versions it didn’t really have the concept of a “file” — both applications themselves and data were simply record stores, with OS APIs that could access and iterate over those record stores.
(Eventually the OS also gained some file support to deal with removable media and data exchange, but the core was still live records residing completely in RAM with
Re: (Score:2)
This would be amazing - makes all sorts of persistent object databases/processing systems possible, where objects simply "live" and work in the permanent memory.
Why would this be amazing? Give me a good use case for why this is better than separate working memory and storage.
Re: (Score:2)
If your OS memory was stored in non volatile memory, you would have instant on boot, which means sleep/hibernate become the same thing, and the computer could sleep between your uses of it, for a user facing machine at least. In a server, instead of having batter backed ram on your RAID controller, you could use this, and get rid of the battery completely. You could also get rid of loading times in games, as the whole game would be in "RAM"
Re: (Score:1)
instead of having batter backed ram on your RAID controller, you could use this
Is that an egg based batter or a milk based batter?
Does it increase the efficiency of the RAM or is it just a way to cook it?
Does it make the RAM easier/tastier to eat?
Does it cause any problems the next day when the RAM gets to the other end?
Re: (Score:2)
If someone owns your system (has physical or virtual access), it's game over no matter the technology
How cool is this? (Score:2)
A single point of failure.
Re: (Score:2)
Yeah, the controller goes out and everything dies
Re: (Score:2)
A single point of failure.
just use it with raid /s
Re: (Score:1)
This seems a bit nonsensical. Obviously working memory failing is not as consequential as storage failing for data loss; and for unplanned shutdowns due to hardware failure, computers already have multiple single points of failure, from the processor to the power supply to the cooling.
Re: (Score:2)
His comment wasn't about consequence, it was about how many points of failure.
Note that one point of failure is better than multiple points of failure, so the OP fails to realize that his ignorant criticism identifies an actual strength. If you wanted to make a redundant system, it's easier to provide redundancy for NVRAM than for RAM and NV storage both.
More interesting is algorithms that exploit NVRAM to provide fast recovery from software/hardware failures. But hey, this genius posts on /. and has got
Re: (Score:2)
What you need for a superior system is lots and lots of parts, clearly.
The parts are still there in the unified situation. They are just thrown on a big pile. But in the end they still need to be dealt with separately.
More interesting is algorithms that exploit NVRAM to provide fast recovery from software/hardware failures.
Right, because NVRAM cannot fail or become corrupted by software.
I mean, sure, there will be some uses for it, but the article makes it seem like it's the second coming of jesus.
Re: (Score:2)
A single point of failure by design.
FTFY
Re: (Score:1)
Sure, unlike conventional RAM.
No one will ever mistake you for a computer architect.
I've been waiting for this paradigm shift (Score:2)
Re: (Score:1)
You seem to know a lot about how the brain works. I'd like to understand how memory works in the brain.
Re: I've been waiting for this paradigm shift (Score:2)
Thinking and using short term memory are closely related. Short term memory is generally energy efficient. This leads to all kinds of observable effects like priming.
Funny thing, take some THC and you get energy efficient access to long term memory, at the cost of short term transcription.
Optane? (Score:3)
Optane (3D XPoint) made all these same claims, and basically flopped due to the high cost. How is this going to be any different?
Re:Optane? (Score:5, Informative)
Not only high cost, but not *really* as fast as memory, and lower density than NAND. Optane DIMMs basically poisoned the well for any group claiming to have non-volatile memory that can compare with DRAM. It was (is?) cheaper per GB than RAM, but it occupied a really awkward middle ground.
Intel also invested a lot to try to get developers to 'consider a new paradigm' and explicitly write applications around this concept of 'memory, but not quite memory, but better than NVMe SSDs', to try to make up for the fact that it was an awkward solution without a place in the scheme of things. (NVMe NAND + DRAM is simpler and cheaper, and having something in between turns out not to be that useful, though it *might* have been useful if mass storage were still spinning-disk oriented.)
Re: (Score:1)
"...NVMe NAND + DRAM is simpler and cheaper, and having something in between turns out to not be that useful..."
Until it turns out to be useful. You're talking like the book is closed. For you, perhaps it is, but no one is looking to you for architecture advances.
NVRAM doesn't have to "compare with DRAM" to be valuable. Exploiting it fully is a challenge, one you apparently are not up to.
"Intel also invested a lot to try to get developers to 'consider a new paradigm' and explicitly write applications aro
Re:Optane? (Score:5, Insightful)
Meaning that Optane DIMMs could be configured to appear as memory (e.g. malloc) or as a block device (for things like open()), but Intel tried to make it more special by having a distinct mode and different set of APIs to store and retrieve data from Optane DIMMs in hopes they could get developers to not do open() style I/O (which makes Optane DIMM just a too-expensive SSD) without trashing main memory performance (because Optane DIMMs are hugely slower than DRAM).
I understand the issues, but the performance uplift even in pretty synthetic benchmarks was underwhelming, and while it may be worthwhile to rework architecture for a substantial improvement, Optane DIMMs were not it. There's a reason that the industry at large has yawned at this point, and Micron backed off the joint venture: lots of work for very little uplift.
Maybe this 'UltraRAM' ultimately gets there in a way Optane couldn't, but Optane specifically failed to get traction in the way Intel really wanted it to.
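For the curious, the memory-mapped style of access looks roughly like this; a sketch using ordinary POSIX-style mmap from Python, with a hypothetical path (a real persistent-memory setup would map a file on a DAX-mounted filesystem, and Optane's app-direct mode layered its own APIs on top so stores could be made durable without a syscall):

import mmap

PATH = "/tmp/pmem_demo.bin"  # hypothetical; stand-in for a pmem-backed file
SIZE = 4096

with open(PATH, "wb") as f:
    f.truncate(SIZE)         # reserve the region

f = open(PATH, "r+b")
buf = mmap.mmap(f.fileno(), SIZE)

buf[0:5] = b"hello"          # load/store-style access, no read()/write() calls
buf.flush()                  # make the stores durable (msync() underneath)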
Re: (Score:2)
I have experienced this with a database engine I have d
Re:Von Neumann Architecture (Magnetic Core RAM) (Score:1)
I'm not sure how you connect this to Von Neumann architecture, but this seems to hark back to Magnetic Core Memory. Old timers like myself will sometimes call RAM 'core' memory, because in the old days all RAM (Random Access Memory) was implemented using magnetic core technology. The thing about magnetic core was, when you turned off the power, it kept the last data that was written to it. So that's kind of like what I think this article is talking about, except that core memory was too expensive to use
yea that's a bit ... (Score:1)
Much
I mean, it's 1000x better at everything, but there's nothing proven about it yet, and (no, I am not reading TFA, just going by the summary) the only hard fact about it is that it is better than the previous version.
I am not saying it won't work or we will never have anything similar to it, but I get very skeptical when numbers are used in conjunction with "extrapolated numbers", "will offer", and "in theory".
So, "in theory" I am more likely to be offered a billion dollars while getting a handjob, based on
Re: (Score:2)
This may all pan out, but the big thing that makes me instantly skeptical is the marketing-sounding name 'UltraRAM'. Credible tech research this early on rarely bothers to come up with that sort of moniker; that usually comes closer to product time, from some marketing person. As it stands, the jump to heavy press and marketing effort has a smell of trying to extract value before reality throws some unfortunate kinks in the theory.
More simply (Score:2)
So, essentially what it comes down to is saying that they have invented nonvolatile memory that is orders of magnitude faster than any known nonvolatile memory. Certainly nice, but even if it is true, there are other things, such as that DRAM simply offers far less capacity per physical size than flash memory. Are they suggesting that the capacity per physical size ratio is also comparable to flash memory? In which case we could be looking at terabytes of working memory?
I'm sceptical of this reporting;
10 um Gate length (Score:2)
So how will it perform scaled down to modern standards?
Re: (Score:2)
DRAM cells are about a 1000th that size, with gate lengths at about 10-14nm. I'd imagine that the problem there is not being able to make them smaller; the researchers probably just don't have access to cutting-edge fabs, because that's expensive.
Re: (Score:2)
If DRAM were made on the latest fabs, the cost per wafer would be the same as everything else made on the latest fabs, so you could say goodbye to inexpensive memory. Knowing this, the DDR specs are designed to be met on older fabs.
Most programs would see little benefit from faster memory. The market forces here keep the DDR specs from even allowing mind-blowing performance.
Re: 10 um Gate length (Score:2)
Plenty of universities have electron beam lithography equipment. It probably can't compare with state-of-the-art fab equipment alignment-wise, but they should be able to get below a 100nm gate length with it.
Which is all fine and dandy (Score:5, Funny)
Until you fill up your UltraRAM hard drive, and then you instantly get out of hard drive space, paging file errors and out of memory errors at the same time.
Re: (Score:2)
>"Until you fill up your UltraRAM hard drive, and then you instantly get out of hard drive space, paging file errors and out of memory errors at the same time."
I am sure it would still be partitioned off so that some areas remain acting like RAM. Or just limit how much could act as storage. Depends on how it is treated and addressed. Lots of possibilities - some rather confusing.
Re: (Score:2)
You've just described the fugue state of someone on their first embedded-system project!
Re: (Score:2)
Doubt it can be suppressed if it's viable.
Simply because China or Russia or someone else will start working on similar tech and gain the advantage of being the only provider of that tech for however long it takes others to catch up.
And if it happens to improve the performance of computing devices in a decent way, the US may suddenly find itself on the receiving end, prevented from importing or using it. Or find itself at a disadvantage in supercomputers.
Nice, but don't hold your breath (Score:5, Insightful)
We were told the same back in the 1990s for 'holographic memory' - which was fast enough to take over for RAM, had relatively infinite capacity, and would last forever. It was also just a matter of crossing the t's and dotting the i's before it was commercially available in 5 years... and of course all that came out of it were a couple of holographic disk drives that never really went anywhere. Making something commercially viable is freaking hard.
I know we've also had at least two other technologies in the lab since then which claimed about the same thing (can't remember the names). They never lived up to their promise either.
Basically, this stuff is like fusion power - 5 years away and it always will be. I'd love to be proven wrong!
Re:Nice, but don't hold your breath (Score:4, Interesting)
I remember in the early 00's reading about MRAM, which would combine disks and RAM and all the benefits it would bring. It was also only 5 years away at the time.
Re: (Score:2)
As you two lovebirds reminisce, you fail to realize that these technologies actually exist; they are not 5 years away. There are problems exploiting them, but they are not vaporware.
Also, whether 30 years ago or 20 years ago, RAM performance is a moving target. What exists today could easily replace what existed then, but DRAM advances as well.
The challenge is not making RAM non-volatile, it is replacing the entirety of modern computing built around the notion that RAM is volatile. Until that is done, NVR
Re:Nice, but don't hold your breath (Score:5, Informative)
I remember in the early 00's reading about MRAM, which would combine disks and RAM and all the benefits it would bring. It was also only 5 years away at the time.
Actually, Magnetoresistive RAM (MRAM) does exist and does have the benefits of both, and you can buy it right now! The only issue is that it's about $1 per megabyte. So yeah, if you don't mind paying $1000/GB ($1M/TB) of memory, then you only need to hire a system designer to make it into a reality. This may seem unreasonable, but it was only a couple of decades ago that the same was true of flash memory.
Re: (Score:2)
Unpowered, the memory itself will retain its data for a few months at the higher capacities. That said, if you are having someone design a multi-million dollar system, then why not include supercaps that slowly leak power, giving the MRAM enough power to retain the data for decades?
Honestly, if you haven't powered it up after a decade then you likely don't care about the data on it.
Re:Nice, but don't hold your breath (Score:5, Funny)
You see, clay tablets had much better durability than papyrus, but the write performance was terrible.
On the other hand, producing papyrus was very expensive and took a lot of time
So eventually came the wonderful idea of painting on clay tablets, the tech was so promising that it was called the Mighty Ra Memory, they were advertising it as cheaper than papyrus, as strong as clay tablet, but with papyrus like write performance, some were pushing a little and said they augmented the clay tablet data density to almost infinity since you could now color code the data with infinite nuances
So anyway, we all know what happened: consumers just loved the papyrus because it already had seamless folding technology - you could actually roll your data storage.
Oh wait sorry you meant the '20'00s, my bad, yes this one too, they could have tried with another name this time...
Re: (Score:2)
I know we've also had at least two other technologies in the lab since then which claimed about the same thing (can't remember the names). They never lived up to their promise either.
Basically, this stuff is like fusion power - 5 years away and it always will be. I'd love to be proven wrong!
Was one the memristor [slashdot.org]? It got a lot of hype [wired.com] but fizzled [hpcwire.com] out.
Interesting idea (Score:2)
This scheme works like flash, except instead of tunneling electrons through an oxide that erodes over time, there is a barrier that changes its conductivity when a voltage is applied to it. Apply a voltage and electrons can be added or removed. Remove the voltage and they get stuck.
Re: (Score:2)
Even if the only thing this research yielded was flash-like memory that could handle a lot more write cycles, that would still be pretty good news for SSDs.
Re: (Score:2)
No, it would be terrible news for SSDs because they would all be getting discarded after being replaced! However, SSD and memory manufacturers would be in the black big time.
Didn't an IBM minicomputer already do this? (Score:2)
Didn't an IBM minicomputer already do this several decades ago, unifying RAM and storage? I remember reading a book about it.
Re: (Score:2)
I think it was Fortress Rochester, a book about the iSeries including the AS/400.
Re: (Score:3)
You're probably thinking of Bubble Memory:
https://en.wikipedia.org/wiki/... [wikipedia.org]
It was promising, but flash (and fast hard drives) supplanted it.
Re: (Score:2)
I was thinking of it from a software standpoint. The operating system treated the memory and storage as the same thing. The book was Fortress Rochester: The Inside Story of the IBM iSeries.
Another poster mentioned that the Palm Pilot had the same approach to memory, that is, RAM and non-volatile storage were the same thing.
I'm betting Intel ... (Score:2)
Intel will only offer ECC versions on servers ...
Return of the memristor? (Score:1)
Not an "at scale" test (Score:5, Informative)
Not really a replacement for RAM (Score:2)
program-erase cycling endurance is "one hundred to one thousand times better than flash."
That's a big improvement over flash, but it's not nearly enough to replace RAM. The endurance of flash memory [wikipedia.org] ranges from about 1000 program/erase cycles up to about 100,000. Assume this tech allows each byte to be rewritten 100 million times. For RAM, you could easily hit that limit in a few minutes.
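To put rough numbers on that (the write rate below is a hypothetical hot spot, not a measured figure):

ENDURANCE = 100_000 * 1_000   # generous: 1000x the best ~100k flash cycles
WRITE_HZ = 1_000_000          # hypothetical hot cell (a lock word or counter)
                              # rewritten a million times per second

seconds = ENDURANCE / WRITE_HZ
print(f"{seconds:.0f} s")     # 100 s: that cell is worn out in under two minutes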
At Last! A New Model 100? (Score:2)
The old Radio Shack Model 100 [oldcomputers.net] used RAM (battery-backed low-power static CMOS) as both storage and working memory (up to 32K!). The package was just great, though, for the time: about the size of a 1/2 ream of paper; weighed 2-4 lb. (depending on which version you got); had a full-size keyboard (with an Fn-key embedded number pad); o/s, necessary apps, and BASIC in ROM (32K of that, hence the RAM limit). It revolutionized news reporting, bypassing the typist at rewr
Re: (Score:2)
You left out the fact that it had a built-in acoustic modem! So, yeah, it was the perfect beast for journalists ... I remember reading about it back in the day.
Might replace flash (Score:3)
Will not replace DRAM. Too slow, too large. Might make for great swap-space though.
And as usual: I believe it when I can buy it, not before.
Close but no cigar (Score:5, Informative)
The quoted 1000 times the endurance of current SSD program/erase cycles is still not good enough for RAM. Current RAM cells get written billions of times. So it sounds like it'll mostly be a much better persistent memory (like an SSD) rather than something that unifies RAM and SSD. Maybe in some specific systems it can work as a uniform memory, and it's possible that by the time it's out, it'll make sense to put even larger SRAM and even DRAM caches on the CPU die (or in the CPU package) so the count of main memory reads/writes is reduced.
Let's also not forget that current RAM serves as video RAM in most of the systems sold. Video RAM is especially prone to being rewritten a gazillion times. Maybe the speed isn't there either to act as RAM, or especially VRAM.
Re: (Score:2)
I am curious how many write cycles a standard DRAM cell can undergo before failing, though. Is there an actual metric for that?
In my lifetime I have never seen standard DRAM fail, but I have witnessed DRAM on video cards failing, so I guess you are right about the latter seeing much more intensive use than standard DRAM.
With standard DRAM it is OK to do bit flips forever on the same unit, but with UltraRAM an algorithm to spread the writes over the whole chip will have to be invented. I wonder what
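That algorithm already exists for flash - it's called wear leveling, and SSD controllers do it today. A toy sketch of the idea (hypothetical names, not the paper's scheme; a real controller would also migrate the data when it remaps):

import random

class WearLeveler:
    """Remap logical addresses so repeated writes spread across all cells."""

    def __init__(self, n_cells):
        self.map = list(range(n_cells))  # logical address -> physical cell
        self.wear = [0] * n_cells        # write count per physical cell

    def write(self, logical):
        phys = self.map[logical]
        self.wear[phys] += 1
        # If this cell is far more worn than a randomly chosen peer, swap the
        # mappings so future writes to this logical address land somewhere fresher.
        other = random.randrange(len(self.map))
        if self.wear[phys] > self.wear[self.map[other]] + 100:
            self.map[logical], self.map[other] = self.map[other], self.map[logical]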
But, I've been telling users... (Score:2)
Permanent storage (HDD/SSD/etc) is not the same as RAM memory.
That they are measured in the same units, bytes, is not relevant, just like diesel and beer are measured in the same units (outside the USA, anyway). This is going to confuse them all over again...
So it is persistent memory/dram hybrid? (Score:2)
We've been playing with Intel's persistent memory. It's pretty nice.
Persistent memory is slower than DRAM, but it is much cheaper, both to buy and in terms of power consumption.
So you can easily build machines with a few TB of memory for pretty cheap. It is expected to be very nice for various database and scientific applications.