Data Storage Hardware Technology

'UltraRAM' Breakthrough Could Combine Memory and Storage Into One (tomshardware.com) 99

Scientists from Lancaster University say that we might be close to combining SSDs and RAM into one component. "UltraRAM," as it's being called, is described as a memory technology which "combines the non-volatility of a data storage memory, like flash, with the speed, energy-efficiency, and endurance of a working memory, like DRAM." The researchers detailed the breakthrough in a recently published paper. Tom's Hardware reports: The fundamental science behind UltraRAM is that it uses the unique properties of compound semiconductors, commonly used in photonic devices such as LEDs, lasers, and infrared detectors, which can now be mass-produced on silicon. The researchers claim that the latest incarnation on silicon outperforms the technology as tested on Gallium Arsenide semiconductor wafers. Some extrapolated numbers for UltraRAM are that it will offer "data storage times of at least 1,000 years," and that its fast switching speed and program-erase cycling endurance are "one hundred to one thousand times better than flash." Add these qualities to the DRAM-like speed, energy efficiency, and endurance, and this novel memory type sounds hard for tech companies to ignore.

If you read between the lines above, you can see that UltraRAM is envisioned to break the divide between RAM and storage. So, in theory, you could use it as a one-shot solution to fill these currently separate requirements. In a PC system, that would mean you would get a chunk of UltraRAM, say 2TB, and that would cover both your RAM and storage needs. The shift, if it lives up to its potential, would be a great way to push forward with the popular trend toward in-memory processing. After all, your storage would be your memory -- with UltraRAM, it is the same silicon.

  • by dohzer ( 867770 )

    I take it that this 'UltraRAM' is different from the 'UltraRAM' that's been in Xilinx UltraScale devices for years, right?

  • Only $5 per GB. Also nice to have RAID, and Apple's RAID 0 locked-to-the-board storage with an even higher markup

  • by ugen ( 93902 ) on Wednesday January 12, 2022 @07:09PM (#62168811)

    This would be amazing - makes all sorts of persistent object databases/processing systems possible, where objects simply "live" and work in the permanent memory. Hope I live long enough to get to play with this.

    • by ls671 ( 1122017 ) on Wednesday January 12, 2022 @07:37PM (#62168877) Homepage

      It's about time! In 1985, my computer science teacher told us we would get this some day. I have been waiting ever since.

      • We already have this. RAM from 1985 is significantly slower than a current SSD - but we choose to use even faster RAM instead.

        • by ls671 ( 1122017 )

          Maybe, but I have doubts about the random access capabilities of SSDs, as in "Random Access Memory". Think about fetching values to populate CPU registers, with said values distributed randomly in memory. I am not sure modern SSDs would be faster than 1985 RAM for that task, but you could be right. Now, I hear SSDs are slower at random writes; think about saving CPU register values to random spots on the SSD and doing that read/write cycle millions of times every second. Not sure the SSD would be faster a

          • HDDs, SSDs, CD-ROMs, floppy disks, ... are "block devices". You cannot access a specific byte on them - you must read or write the entire block (512 bytes for older HDDs, 4 kB for recent HDDs, and 2048, 2352, or 2336 bytes for CD-ROMs, depending on data versus audio versus image/video).
            This could be useful for things like databases (optimized so that table rows fit nicely into those sectors) - databases don't write individual values anyway, they update entire rows.
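            To make that granularity concrete, here is a minimal POSIX sketch (the function name is made up, and it assumes an already-opened file descriptor; byte-addressable memory needs none of this):

                #include <sys/types.h>
                #include <unistd.h>

                /* To change one byte on a block device you still move whole
                 * blocks: read the 4 kB sector, patch it in memory, write the
                 * 4 kB back. Drive firmware effectively does the same even
                 * when the OS hides it from you. */
                enum { BLOCK = 4096 };

                int patch_byte(int fd, off_t offset, unsigned char value) {
                    unsigned char buf[BLOCK];
                    off_t block_start = offset - (offset % BLOCK);
                    if (pread(fd, buf, BLOCK, block_start) != BLOCK)
                        return -1;
                    buf[offset % BLOCK] = value;    /* the one-byte change */
                    if (pwrite(fd, buf, BLOCK, block_start) != BLOCK)
                        return -1;
                    return 0;
                }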

            Now, current RAM technology isn't really "random access" either - t

          • by AmiMoJo ( 196126 )

            It's complicated.

            An NVMe SSD can be extremely fast, especially if it has a DRAM cache. Latency lower than RAM from 1985 for stuff that is in the cache, and potentially even stuff that is in the flash if you have a good controller and a single bit-per-cell memory chip.

            Practically though, cost is an issue. DRAM costs money, and you really want it to be backed up by a big capacitor in case of unexpected power failure. Of course, even if you had special non-volatile RAM, you would need to do things like atomic

          • Why have doubts about the "random access" capabilities? The reason for making the distinction between "random access" and not was from an era where you either had to wait for a tape to scroll past a read head to get what you want, or wait for a physical platter to rotate the sector you care about under a head. SSD has none of that - it can go directly to any bit you want, with the exact same latency as any other bit on the device.

            Now, having doubts about the longevity and reliability over time of SSD is a

        • SSDs don't have enough write endurance to replace RAM. They would fail very quickly in that application.
      • by AmiMoJo ( 196126 )

        In 1985, typical 16-bit personal computers (e.g. the Amiga 1000, or the IBM PC and clones) could read/write a few megabytes per second from/to RAM.

        Latency wise, RAM was usually rated to be zero latency (i.e. it can provide new data on every bus cycle, DRAM refresh excepted). For a machine with a 4MHz bus that would be 250ns. I think the Amiga used 100ns DRAM, with a bus speed of about 7MHz. Back then the main bus was synchronous to the CPU and all peripherals - the ISA bus is basically just the 8086 CPU

    • Failure Rate (Score:4, Interesting)

      by Roger W Moore ( 538166 ) on Wednesday January 12, 2022 @08:31PM (#62168959) Journal

      This would be amazing

      That depends on exactly what they mean by "data storage times of at least 1,000 years". If that's the mean time to failure of one bit of memory (which would be a reasonable way to measure it in the lab, given that they haven't waited 1,000 years), then in just 1GB of memory you would have an average of 988 bits fail every hour, which would make the device much less useful, since you would need incredible ECC to reduce that to a usable level.
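      As a quick sanity check of that estimate (a minimal sketch, assuming independent bit failures and reading the 1,000-year figure as a per-bit mean time to failure; the exact result depends on whether you take 1GB as 10^9 or 2^30 bytes):

          #include <stdio.h>

          /* Expected failures per hour ~= (number of bits) / (MTTF in hours),
           * assuming each bit fails independently with a 1,000-year MTTF. */
          int main(void) {
              double bits = 8.0 * 1024 * 1024 * 1024;      /* 1 GiB in bits */
              double mttf_hours = 1000.0 * 365.25 * 24.0;  /* 1,000 years */
              printf("expected bit failures per hour: %.0f\n",
                     bits / mttf_hours);                   /* ~980 */
              return 0;
          }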

      It's certainly a very interesting discovery, but just like with all the new battery technology we keep hearing about that never amounts to much, there is a huge gap between what works well in the lab and what scales up to work in modern machines. I wish them luck and hope it does turn out well (even though I'm a Yorkshireman! ;-), but I would not hold my breath.

    • by tlhIngan ( 30335 )

      This would be amazing - makes all sorts of persistent object databases/processing systems possible, where objects simply "live" and work in the permanent memory. Hope I live long enough to get to play with this.

      There have been a few operating systems that work this way, where RAM is both working memory and an object store. And by object store, I mean not a fixed-size RAM disk - it's a dynamic storage medium where the more you store, the less working RAM you have.

      Windows CE comes close, but it's basically still

      • The Multics Operating System [wikipedia.org] (1964), which inspired Unix, had a memory architecture where there was no differentiation between executable memory and disk storage.

        Multics implements a single-level store for data access, discarding the clear distinction between files (called segments in Multics) and process memory. The memory of a process consists solely of segments that were mapped into its address space. To read or write to them, the process simply uses normal central processing unit (CPU) instructions, and

      • PalmOS was amazing to work with outside of this though, in my humble opinion. :)

        Though we did have to account for the regular hotsync to reinstall software; batteries seemed to be a big problem, and were one of the major reasons we eventually switched to PocketPC. A system I rather hated, but it readily supported memory cards, fixing that problem.

    • What kinds of changes do you see this making? I can't think of anything that would change (at least, nothing that would change more than a really fast hard drive). We already have persistent object databases.

      • For one, optimal structures for memory and for non-volatile storage have historically been different, requiring serialization and deserialization of data. This could potentially make those operations obsolete, as in the sketch below.
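        A minimal sketch of the idea (assuming byte-addressable persistent memory exposed as a memory-mapped file; the path and record type are made up):

            #include <fcntl.h>
            #include <sys/mman.h>
            #include <unistd.h>

            /* A record that today would be serialized to disk; with persistent
             * memory it can simply live at a mapped address. */
            struct account {
                long   id;
                double balance;
            };

            int main(void) {
                int fd = open("/mnt/pmem/accounts.dat", O_RDWR | O_CREAT, 0644);
                if (fd < 0) return 1;
                if (ftruncate(fd, sizeof(struct account)) != 0) return 1;
                struct account *a = mmap(NULL, sizeof *a, PROT_READ | PROT_WRITE,
                                         MAP_SHARED, fd, 0);
                if (a == MAP_FAILED) return 1;
                a->balance += 100.0;          /* update the live object in place */
                msync(a, sizeof *a, MS_SYNC); /* durable: no serialize/deserialize */
                munmap(a, sizeof *a);
                close(fd);
                return 0;
            }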
      • Power savings?

        Your CPU now has a memory space equal to your persistent storage space, so a system could be much closer to zero-power/off when idle without quitting applications, suspending data to disk, or anything else. The whole thing could just pause into a very low-power state, and when you come back "on", the entire "memory" state, including applications, is just as you left it.

        You'd never quit an application that was working right unless you really had to quit it -- upgrade the code or some kind of malf

        • You'd never quit an application that was working right unless you really had to quit it -- upgrade the code or some kind of malfunction.

          So there must be another copy in storage somewhere so you can start over if there is some kind of malfunction, right?

          • I don't know about another whole copy, but perhaps the original copy has memory segments/pages/whatever which are marked immutable and can't be changed, or which are marked copy-on-write, so that in the event you "start over" you go back to the original state of those elements.

            I really think that unified memory/storage will require a lot of new thinking about how a number of things are structured, in a world where storage and RAM are not different entities.

    • by kamitchell ( 1090511 ) on Wednesday January 12, 2022 @10:57PM (#62169225)

      You did live long enough, sort of. The IBM i [wikipedia.org] operating system, originally OS/400, introduced in 1988, had a single-level store. There was no "disk" or "memory", just "storage". Sure, it was backed by hard drives. But from the view of the application programs, it was just one huge piece of (virtual) memory, with a single address space.

      All the magic happened in the operating system code: how the storage was apportioned between RAM and disk, how programs were kept from looking at objects belonging to other programs, etc. Even the CPU was abstracted away. You could move your programs from System/38-based hardware to Power-based hardware, and the system would translate the program from the intermediate representation the compiler produced into machine code for the new hardware.

      Really elegant system, way ahead of its time.

    • This was in fact how PalmOS worked. Application data storage was completely record-based, and lived entirely in RAM. In the earlier versions it didn’t really have the concept of a “file” — both applications themselves and data were simply record stores, with OS APIs that could access and iterate over those record stores.

      (Eventually the OS also gained some file support to deal with removable media and data exchange, but the core was still live records residing completely in RAM with

    • by noodler ( 724788 )

      This would be amazing - makes all sorts of persistent object databases/processing systems possible, where objects simply "live" and work in the permanent memory.

      Why would this be amazing? Give me a good use case for why this is better than separate working memory and storage.

      • If your OS memory was stored in non-volatile memory, you would have instant-on boot, which means sleep/hibernate become the same thing, and the computer could sleep between your uses of it, for a user-facing machine at least. In a server, instead of having batter backed ram on your RAID controller, you could use this and get rid of the battery completely. You could also get rid of loading times in games, as the whole game would be in "RAM"

        • instead of having batter backed ram on your RAID controller, you could use this

          Is that an egg based batter or a milk based batter?
          Does it increase the efficiency of the RAM or is it just a way to cook it?
          Does it make the RAM easier/tastier to eat?
          Does it cause any problems the next day when the RAM gets to the other end?

  • A single point of failure.

    • You can always make copies...
    • Yeah, the controller goes out and everything dies

    • by ls671 ( 1122017 )

      A single point of failure.

      just use it with raid /s

    • This seems a bit nonsensical. Obviously working memory failing is not as consequential as storage failing when it comes to data loss; and for unplanned shutdowns due to hardware failure, computers already have multiple single points of failure, from the processor to the power supply to the cooling.

      • by dfghjk ( 711126 )

        His comment wasn't about consequence, it was about how many points of failure.

        Note that one point of failure is better than multiple points of failure, so the OP fails to realize that his ignorant criticism identifies an actual strength. If you wanted to make a redundant system, it's easier to provide redundancy for NVRAM than for RAM and NV storage both.

        More interesting are algorithms that exploit NVRAM to provide fast recovery from software/hardware failures. But hey, this genius posts on /. and has got

        • by noodler ( 724788 )

          What you need for a superior system is lots and lots of parts, clearly.

          The parts are still there in the unified situation. They are just thrown on a big pile. But in the end they still need to be dealt with separately.

          More interesting are algorithms that exploit NVRAM to provide fast recovery from software/hardware failures.

          Right, because NVRAM cannot fail or become corrupted by software.
          I mean, sure, there will be some uses for it, but the article makes it seem like it's the second coming of jesus.

    • A single point of failure by design.

      FTFY

    • by dfghjk ( 711126 )

      Sure, unlike conventional RAM.

      No one will ever mistake you for a computer architect.

  • No need to incur expensive I/O. In-memory databases will thrive with this.
  • by Guspaz ( 556486 ) on Wednesday January 12, 2022 @07:16PM (#62168827)

    Optane (3D XPoint) made all these same claims, and basically flopped due to the high cost. How is this going to be any different?

    • Re:Optane? (Score:5, Informative)

      by Junta ( 36770 ) on Wednesday January 12, 2022 @07:54PM (#62168907)

      Not only high cost, but not *really* as fast as memory, and lower density than NAND. Optane DIMMs basically poisoned the well for any group claiming to have non-volatile memory that can compare with DRAM. It was (is?) cheaper per GB than RAM, but it occupied a really awkward middle ground.

      Intel also invested a lot to try to get developers to 'consider a new paradigm' and explicitly write applications around this concept of 'memory, but not quite memory, but better than NVMe SSDs' to try to make up for the fact that it was an awkward solution without a place in the scheme of things (NVMe NAND + DRAM is simpler and cheaper, and having something in between turns out to not be that useful, though it *might* have been useful if mass storage was still spinning-disk oriented).

      • by dfghjk ( 711126 )

        "...NVMe NAND + DRAM is simpler and cheaper, and having something in between turns out to not be that useful..."

        Until it turns out to be useful. You're talking like the book is closed. For you, perhaps it is, but no one is looking to you for architecture advances.

        NVRAM doesn't have to "compare with DRAM" to be valuable. Exploiting it fully is a challenge, one you apparently are not up to.

        "Intel also invested a lot to try to get developers to 'consider a new paradigm' and explicitly write applications aro

        • Re:Optane? (Score:5, Insightful)

          by Junta ( 36770 ) on Wednesday January 12, 2022 @09:02PM (#62169019)

          Meaning that Optane DIMMs could be configured to appear as memory (e.g. malloc) or as a block device (for things like open()), but Intel tried to make it more special, with a distinct mode and a different set of APIs to store and retrieve data from Optane DIMMs, in hopes they could get developers to avoid open()-style I/O (which makes an Optane DIMM just a too-expensive SSD) without trashing main-memory performance (because Optane DIMMs are hugely slower than DRAM).
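          For the curious, the app-direct style looked roughly like this (a sketch using PMDK's libpmem; the device path is made up and error handling is minimal):

              #include <libpmem.h>
              #include <string.h>

              /* Load/store access with explicit persistence, instead of
               * open()/write(). Assumes a DAX-mounted device. */
              int main(void) {
                  size_t mapped_len;
                  int is_pmem;
                  char *buf = pmem_map_file("/mnt/pmem/log", 4096,
                                            PMEM_FILE_CREATE, 0666,
                                            &mapped_len, &is_pmem);
                  if (buf == NULL) return 1;
                  strcpy(buf, "stored with CPU stores, not write()");
                  if (is_pmem)
                      pmem_persist(buf, mapped_len); /* flush caches to media */
                  else
                      pmem_msync(buf, mapped_len);   /* fallback on non-pmem */
                  pmem_unmap(buf, mapped_len);
                  return 0;
              }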

          I understand the issues, but the performance uplift even in pretty synthetic benchmarks was underwhelming, and while it may be worthwhile to rework an architecture for a substantial improvement, Optane DIMMs were not it. There's a reason the industry at large has yawned at this point and Micron backed out of the joint venture: lots of work for very little uplift.

          Maybe this 'UltraRAM' ultimately gets there in a way Optane couldn't, but Optane specifically failed to get traction in the way Intel really wanted it to.

          • Which raises the question: just how much better does some technology (hardware or software) have to be than the current offering before people will actually put effort and money into adopting it? No one is going to rewrite their software to take advantage of something like Optane for a simple 10%-15% improvement! Even if you double the speed, it can be a real challenge to get mass adoption if your solution is not a 'drop-in replacement'.

            I have experienced this with a database engine I have d
  • Much

    I mean, it's 1000x better at everything, but there's nothing proven about it yet, and (no, I am not reading TFA, just going by the summary) the only hard fact about it is that it is better than the previous version.

    I am not saying it won't work or that we will never have anything similar to it, but I get very skeptical when numbers are used in conjunction with "extrapolated numbers", "will offer", and "in theory".

    So, "in theory" I am more likely to be offered a billion dollars while getting a handjob, based on

    • by Junta ( 36770 )

      This may all pan out, but the big thing that makes me instantly skeptical is the too-marketing-sounding name 'UltraRAM'. I feel like credible tech research this early on rarely bothers to come up with that sort of moniker; that usually comes closer to product time, from some marketing person. As it stands, the jump to heavy press and marketing effort has the smell of trying to extract value before reality throws some unfortunate kinks into the theory.

  • So, essentially what it comes down to is that they claim to have invented non-volatile memory that is orders of magnitude faster than any known non-volatile memory. Certainly nice, but even if it is true, there are other issues, such as DRAM simply offering far less capacity per physical size than flash memory. Are they suggesting that the capacity-per-physical-size ratio is also comparable to flash memory? In that case we could be looking at terabytes of working memory?

    I'm sceptical of this reporting;

  • So how will it perform scaled down to modern standards?

    • DRAM cells are about 1/1000th that size, with gate lengths at about 10-14nm. I'd imagine the problem there is not being able to make them smaller; rather, the researchers probably don't have access to cutting-edge fabs, because that's expensive.

      • It's price/performance decisions that affect the DDR standards themselves, not just DDR memory.

        If DRAM were made on the latest fabs, the cost per wafer would be the same as everything else made in the latest fabs, so you can say goodbye to inexpensive memory. Knowing this, the DDR specs are designed to be met on older fabs.

        Most programs would see little benefit from faster memory. The market forces here keep the DDR specs from even allowing mind-blowing performance.
      • Plenty of universities have electron-beam lithography equipment; it probably can't compare with state-of-the-art fab equipment alignment-wise, but they should be able to get below 100nm gate lengths with it.

  • by wakeboarder ( 2695839 ) on Wednesday January 12, 2022 @07:29PM (#62168851)

    Until you fill up your UltraRAM hard drive, and then you instantly get out-of-hard-drive-space errors, paging-file errors, and out-of-memory errors all at the same time.

    • >"Until you fill up your UltraRAM hard drive, and then you instantly get out of hard drive space, paging file errors and out of memory errors at the same time."

      I am sure it would still be partitioned off so that some areas remain acting like RAM. Or just limit how much could be acting like storage. Depends on how it is treated and addressed. Lots of possibilities- some rather confusing.

    • by ebh ( 116526 )

      You've just described the fugue state of someone on their first embedded-system project!

  • by Sarusa ( 104047 ) on Wednesday January 12, 2022 @07:43PM (#62168887)

    We were told the same back in the 1990s about 'holographic memory' - which was fast enough to take over for RAM, had relatively infinite capacity, and would last forever. It was also just a matter of crossing the t's and dotting the i's before it was commercially available in 5 years... and of course all that came out of it were a couple of holographic disk drives that never really went anywhere. Making something commercially viable is freaking hard.

    I know we've also had at least two other technologies in the lab since then which claimed about the same thing (can't remember the names). They never lived up to their promise either.

    Basically, this stuff is like fusion power - 5 years away and it always will be. I'd love to be proven wrong!

    • by DrSpock11 ( 993950 ) on Wednesday January 12, 2022 @07:51PM (#62168899)

      I remember in the early 00's reading about MRAM, which would combine disks and RAM and all the benefits it would bring. It was also only 5 years away at the time.

      • by dfghjk ( 711126 )

        As you two lovebirds reminisce, you fail to realize that these technologies actually exist; they are not 5 years away. There are problems exploiting them, but they are not vaporware.

        Also, whether 30 years ago or 20 years ago, RAM performance is a moving target. What exists today could easily replace what existed then, but DRAM advances as well.

        The challenge is not making RAM non-volatile, it is replacing the entirety of modern computing built around the notion that RAM is volatile. Until that is done, NVR

      • by Gravis Zero ( 934156 ) on Wednesday January 12, 2022 @09:35PM (#62169085)

        I remember in the early 00's reading about MRAM, which would combine disks and RAM and all the benefits it would bring. It was also only 5 years away at the time.

        Actually, Magnetoresistive RAM (MRAM) does exist and does have the benefits of both, and you can buy it right now! The only issue is that it's about $1 per megabyte. So yeah, if you don't mind paying $1000/GB ($1M/TB) for memory, then you only need to hire a system designer to make it a reality. This may seem unreasonable, but it was only a couple of decades ago that the same was true of flash memory.

        • by Twinbee ( 767046 )
          How durable is it for long term storage compared to the best SSDs?
          • Unpowered, the memory itself will retain its data for a few months at the higher capacities. That said, if you are having someone design a multi-million-dollar system, then why not include supercaps that will slowly leak power to give the MRAM enough power to retain the data for decades?

            Honestly, if you haven't powered it up after a decade then you likely don't care about the data on it.

      • by Guignol ( 159087 ) on Thursday January 13, 2022 @08:18AM (#62169603)
        Yes, I remember too, in the early 00s, M-Ra-M was about combining clay tablets and papyrus technology.
        You see, clay tablets had much better durability than papyrus, but the write performance was terrible.
        On the other hand, producing papyrus was very expensive and took a lot of time
        So eventually came the wonderful idea of painting on clay tablets, the tech was so promising that it was called the Mighty Ra Memory, they were advertising it as cheaper than papyrus, as strong as clay tablet, but with papyrus like write performance, some were pushing a little and said they augmented the clay tablet data density to almost infinity since you could now color code the data with infinite nuances
        So anyway, we all know what happened: consumers just loved the papyrus because it already had seamless folding technology; you could actually roll your data storage.

        Oh wait sorry you meant the '20'00s, my bad, yes this one too, they could have tried with another name this time...
    • by MikeKD ( 549924 )

      I know we've also had at least two other technologies in the lab since then which claimed about the same thing (can't remember the names). They never lived up to their promise either.

      Basically, this stuff is like fusion power - 5 years away and it always will be. I'd love to be proven wrong!

      Was one the memristor [slashdot.org]? It got a lot of hype [wired.com] but fizzled [hpcwire.com] out.

  • This scheme works like flash, except instead of tunneling electrons through an oxide that erodes over time, there is a barrier that changes its conductivity when voltage is applied to it. Apply a voltage and electrons can be added or removed. Remove the voltage and they get stuck.

    • by dskoll ( 99328 )

      Even if the only thing this research yielded was flash-like memory that could handle a lot more write cycles, that would still be pretty good news for SSDs

      • No, it would be terrible news for SSDs because they would all be getting discarded after being replaced! However, SSD and memory manufacturers would be in the black big time.

  • Didn't an IBM minicomputer already do this several decades ago, unifying RAM and storage? I remember reading a book about it.

    • by kriston ( 7886 )

      I think it was Fortress Rochester, a book about the iSeries including the AS/400.

    • by eriks ( 31863 )

      You're probably thinking of Bubble Memory:

      https://en.wikipedia.org/wiki/... [wikipedia.org]

      It was promising, but flash (and fast hard drives) supplanted it.

      • by kriston ( 7886 )

        I was thinking of it from a software standpoint. The operating system treated the memory and storage as the same thing. The book was Fortress Rochester: The Inside Story of the IBM iSeries.

        Another poster mentioned that the Palm Pilot had the same approach to memory, that is, RAM and non-volatile storage were the same thing.

  • Intel will only offer ECC versions on servers ...

  • Remember that thing?
  • by DDumitru ( 692803 ) <`moc.ocysae' `ta' `guod'> on Wednesday January 12, 2022 @11:07PM (#62169245) Homepage
    The testing in the paper was for 20 um junctions and had program cycle times of 1 ms to 10 ms. Compare this to 10 nm and 100 ns or better for current memory/flash. The paper does imply that scaling down to "final dimensions" should increase the speed to "above DRAM" speeds based on capacitance, but actually scaling something to 1/5000th the volume and expecting linear results seems a little ambitious.
  • program-erase cycling endurance is "one hundred to one thousand times better than flash."

    That's a big improvement over flash, but it's not nearly enough to replace RAM. The endurance of flash memory [wikipedia.org] ranges from about 1,000 program/erase cycles up to about 100,000. Assume this tech allows each cell to be rewritten 100 million times. As RAM, you could easily hit that limit in a few minutes.
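    Rough numbers (a sketch; the rewrite rates are assumptions chosen to show the scale, not measurements):

        #include <stdio.h>

        /* Time for one hot cell to hit an assumed 1e8 program/erase limit
         * at a few plausible rewrite rates. */
        int main(void) {
            double endurance = 1e8;                 /* assumed P/E cycles */
            double rates_hz[] = { 1e3, 1e6, 1e8 };  /* rewrites per second */
            for (int i = 0; i < 3; i++)
                printf("at %.0e writes/s: worn out in %.0f seconds\n",
                       rates_hz[i], endurance / rates_hz[i]);
            /* a counter rewritten 1e6 times/s burns out in ~100 s,
               i.e. the "few minutes" described above */
            return 0;
        }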

    • The question of endurance is followed by "does it need to be managed?". Consider the "worst case" RAM access pattern: a spin-lock, implemented as a ticket lock, that is bouncing between cores on a multi-socket system. The DRAM will get this cache line cycled well over a million times a second. If the memory is "direct mapped", then any amount of endurance other than unlimited is not enough. If you map the memory and wear-level it, then even moderate endurance might be good enough if there are a lot of t
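      For reference, a minimal C11 sketch of the ticket lock being described (both counters share one cache line, so every acquire and every release writes that same line):

          #include <stdatomic.h>

          struct ticket_lock {
              atomic_uint next;     /* ticket dispenser */
              atomic_uint serving;  /* now-serving counter */
          };

          static void lock(struct ticket_lock *l) {
              unsigned my = atomic_fetch_add(&l->next, 1);  /* one write */
              while (atomic_load(&l->serving) != my)
                  ;                                         /* spin */
          }

          static void unlock(struct ticket_lock *l) {
              atomic_fetch_add(&l->serving, 1);             /* another write */
          }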
  • Chip shortage... With what factories ye gonna have these produced?
  • The old Radio Shack Model 100 [oldcomputers.net] used RAM (battery-backed low-power static CMOS) as both storage and RAM (up to 32K!). The package was just great, though, for the time: about the size of a 1/2 ream of paper; weighed 2-4 lb. (depending on which version you got); had a full-size (with Fn-key embedded number pad) keyboard; o/s, necessary apps, and BASIC in ROM (32K of that, hence the RAM limit). It revolutionized news reporting, bypassing the typist at rewr

    • by Joosy ( 787747 )

      You left out the fact that it had a built-in acoustic modem! So, yeah, it was the perfect beast for journalists ... I remember reading about it back in the day.

  • by gweihir ( 88907 ) on Thursday January 13, 2022 @02:17AM (#62169407)

    Will not replace DRAM. Too slow, too large. Might make for great swap-space though.

    And as usual: I believe it when I can buy it, not before.

  • Close but no cigar (Score:5, Informative)

    by robi5 ( 1261542 ) on Thursday January 13, 2022 @03:37AM (#62169447)

    The quoted 1000 times the endurance of current SSD read/write cycles is still not good enough for RAM. Current RAM cells get written billions of times. So it sounds like it'll mostly be a much better persistent memory (like an SSD) rather than something that unifies RAM and SSD. Maybe in some specific systems it can work as a uniform memory, and it's possible that by the time it's out, it'll make sense to put even larger SRAM and even DRAM caches on the CPU die (or in the CPU package) so the number of main memory reads/writes is reduced.

    Let's also not forget that current RAM serves as video RAM in most of the systems sold. Video RAM is especially prone to being rewritten a gazillion times. Maybe the speed isn't there either to act as RAM, or especially VRAM.

    • I am curious how many bit flips a standard DRAM cell can undergo before failing, though. Is there an actual metric for that?

      In my lifetime I have never seen standard DRAM fail, but I have witnessed DRAM on video cards failing, so I guess you are right about the latter getting much more intensive use than standard DRAM.

      With standard DRAM it is OK to do bit flips forever on the same cell, but with UltraRAM an algorithm to spread the writes across the whole chip will have to be invented. I wonder what
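      Flash controllers already do this sort of wear leveling; a toy sketch of the idea (purely illustrative, not any real controller's algorithm, and it glosses over migrating data when the mapping rotates):

          #include <stdint.h>

          /* Rotate the logical-to-physical mapping every N writes so a hot
           * logical address lands on a different physical line over time. */
          #define LINES 1024u

          static uint32_t start;    /* rotating offset */
          static uint64_t writes;   /* writes seen so far */

          static uint32_t to_physical(uint32_t logical) {
              return (logical + start) % LINES;
          }

          static void on_write(void) {
              if (++writes % 100000 == 0)       /* every 100k writes... */
                  start = (start + 1) % LINES;  /* ...shift the map by one */
          }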

  • Permanent storage (HDD/SSD/etc.) is not the same as RAM.

    That they are measured in the same units, bytes, is not relevant, just as diesel and beer being measured in the same units (outside the USA, anyway) is not relevant. This is going to confuse people again...

  • We've been playing with Intel's persistent memory. It's pretty nice.
    Persistent memory is slower than DRAM, but it is much cheaper to buy and consumes less power.
    So you can easily build machines with a few TB of memory pretty cheaply. It is expected to be very nice for various database and scientific applications.
