Hardware Technology

'Universal' Memory Aims To Replace Flash/DRAM 125

siliconbits writes "A single 'universal' memory technology that combines the speed of DRAM with the non-volatility and density of flash memory was recently invented at North Carolina State University, according to researchers. The new memory technology, which uses a double floating-gate field-effect transistor, should enable computers to power down memories not currently being accessed, drastically cutting the energy consumed by computers of all types, from mobile and desktop computers to server farms and data centers, the researchers say."
This discussion has been archived. No new comments can be posted.

  • 10 Years away (Score:2, Insightful)

    by Anonymous Coward

    This technology always seems to be less than 10 years away.

    • Re:10 Years away (Score:5, Interesting)

      by gmuslera ( 3436 ) on Monday January 24, 2011 @10:33PM (#34990278) Homepage Journal
      I hope that xkcd [xkcd.com] is wrong this time. It would be nice to have most new mobile devices with that in 2 years.
      • Finally! (Score:5, Funny)

        by Max Littlemore ( 1001285 ) on Tuesday January 25, 2011 @12:35AM (#34990798)

        Whatever year it comes to market, you can be sure of one thing....
        That will be the year of Multics [wikipedia.org] on the desktop.

        • by Misagon ( 1135 )

          Don't dis Multics! Multics was forward-thinking, but perhaps too much so for its own good. Unix got the upper hand largely because it ran on cheaper hardware that did not have an MMU.
          If someone is planning on creating an OS from scratch to run on mobile or embedded devices, then I think that person should take a look at Multics first instead of creating yet another Unix copy.

          • I agree totally. I was only half fishing for +5 Funny. Absolutely no disrespect meant. I seriously think that high-speed non-volatile memory is the only stumbling block to making Multics really useful.

            I was hoping to get at least one Insightful for my comment, but instead I got a bunch of Funnys and some neck-beard behaving like I just shot all his chickens.....

    • by TopSpin ( 753 )

      This technology always seems to be less than 10 years away.

      There may be hope for this one. These researchers appear to have enough confidence not to adopt the usual 5-year microelectronic SPI [slashdot.org].

    • "This technology always seems to be less than 10 years away."

      Eventually, (less than ten years away) technology to produce technology predicted to be less than ten years away in less than ten years will be fielded.

  • by melikamp ( 631205 ) on Monday January 24, 2011 @10:13PM (#34990156) Homepage Journal
    Volatility is actually useful for certain security policies, such as storing sensitive passwords in computer memory or working with temporarily decrypted files.
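    (For the curious, a minimal sketch of that pattern, assuming a POSIX system: pin the secret in RAM so it never reaches swap, and wipe it explicitly when done. explicit_bzero() needs glibc 2.25+ or a BSD; elsewhere a volatile-memset stand-in is required, and use_secret() is just a made-up example function.)

        /* Keep a passphrase only in non-swappable RAM, then wipe it,
         * so volatility works in our favour. */
        #include <string.h>
        #include <sys/mman.h>

        int use_secret(void)
        {
            char secret[64];

            if (mlock(secret, sizeof secret) != 0)   /* keep it out of swap */
                return -1;

            /* ... read the passphrase into secret[] and use it here ... */

            explicit_bzero(secret, sizeof secret);   /* wipe; not optimized away */
            munlock(secret, sizeof secret);
            return 0;
        }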
    • by pushing-robot ( 1037830 ) on Monday January 24, 2011 @10:19PM (#34990192)

      The first floating-gate in the stack is leaky, thus requiring refreshing about as often as DRAM (16 milliseconds). But by increasing the voltage its data value can be transferred to the second floating-gate, which acts more like a traditional flash memory, offering long-term nonvolatile storage.

      • by Anonymous Coward

        Simtek _used to_ make a memory like that, called nvSRAM, back in the 1990s by combining SRAM with EEPROM.
        I have the databook sitting right in front of me right now. Someday I might fetch some $$$ selling it on ebay.

        I hope they solve the issue of limited write cycles for the FLASH cells. Not sure if it would suffer the same high READ error rates as NAND FLASH.

      • by hitmark ( 640295 )

        So they have crammed two sets of "hardware" onto the same physical chip, and transfer data between them depending on the state wanted. Why not just sell flash in DIMM modules and do the same at the chipset level?

    • by Simon80 ( 874052 ) on Monday January 24, 2011 @11:14PM (#34990488)
      Volatile memory is already vulnerable to reboot attacks, because the data takes a long enough time to rot. Paradoxically, non-volatility could increase security in these cases by making it more obvious that it's not OK to leave sensitive info sitting around in memory.
      • Very true. Don't rely on assumed physical traits. When in doubt, wipe like the $three_letter_agency is at the door.

        • When your memory's nonvolatile
          Nothing is forgot, nothing is forgot, nothing is forgot

          If your bits try to get at you
          flip 'em with a not, flip 'em with a not, flip 'em with a not

          security isn't easy y'all,
          no it's fsckin not, no it's fscking not, no it's fscking not

          With a triple-des key in some volatile ram,
          encrypt all your memory and hide it from the man?

        • FDA?
      • by purpledinoz ( 573045 ) on Tuesday January 25, 2011 @02:12AM (#34991204)
        I read somewhere that if you cool DRAM, the data can stay intact for up to 10 minutes. That's plenty of time to remove the modules and extract the data from them. But if this is really a big concern, I wonder if it's practical to zero the memory after a PC is shutdown. Kind of a background routine. Or maybe even short all the lines to drain the stored charges.
        • by ultranova ( 717540 ) on Tuesday January 25, 2011 @05:22AM (#34991940)

          But if this is really a big concern, I wonder if it's practical to zero the memory after a PC is shutdown. Kind of a background routine. Or maybe even short all the lines to drain the stored charges.

          Why would you sell computers with such features? Are your customers terrorists?

          • by Thing 1 ( 178996 )

            Why would you sell computers with such features? Are your customers terrorists?

            No, bankers. But then I repeat myself.

        • by kasperd ( 592156 )

          I wonder if it's practical to zero the memory after a PC is shutdown. Kind of a background routine.

          If you want the hardware to be modified slightly to achieve it, then it should be completely practical. DRAM doesn't write individual cells at a time. It reads out entire lines of bits into SRAM, modifies them there, and writes them back. Moreover, it even periodically sweeps over the lines, just reading them out and writing them back to refresh them.

          I don't know how long the sweep takes, but for wiping the me

        • It certainly has a reset pin...

        • by tlhIngan ( 30335 )

          I read somewhere that if you cool DRAM, the data can stay intact for up to 10 minutes. That's plenty of time to remove the modules and extract the data from them. But if this is really a big concern,

          One trick I used for debugging (I had no way to log to a serial port in the OS I was using) was to log to memory. The system would crash, then I would simply reboot it, and then dump the log buffer out via the bootloader.

          Even after several seconds, the log was still quite readable.

          That paper on reading hard driv

    • So it's time to think about the next step: overwrite before freeing memory.

      I don't worry at all; it becomes a software problem, not a hardware problem. If only everyone overwrote unused memory...
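      (A rough sketch of that software fix, in C; secure_free() is a made-up helper, and the volatile function pointer is just one common trick to stop the compiler from deciding the buffer is dead and skipping the wipe.)

        #include <stdlib.h>
        #include <string.h>

        /* Zero a buffer before handing it back to the allocator. The caller
         * has to pass the size, since free() doesn't take one. */
        static void secure_free(void *p, size_t n)
        {
            static void *(*const volatile wipe)(void *, int, size_t) = memset;

            if (p == NULL)
                return;
            wipe(p, 0, n);   /* overwrite before freeing */
            free(p);
        }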

      • It could be useful as a hardware feature. The same way a powered-down hard drive parks its head, a chip on your mobo could zero over your RAM using power from a capacitor if the power cuts out.

    • Also, we lose the "just reboot it" fix for all the crappy software we write.

      • Why? I mean, "rebooting" is still possible, it just sucks that much more since there'd no longer be any hardware reason to do so.

        • The hardware reason is because the hardware uses electricity, even in sleep. Yeah, I know, most people don't care about that.

          • It doesn't in hibernation.

            Regardless, this is a technology which would make that hardware reason go away. As I read it, it's basically a much, much faster form of hibernation.

      • by lga ( 172042 )

        Not so; but rebooting would have to include zeroing all of the memory. Starting up and resuming with the contents intact would be more akin to coming out of sleep mode.

        • by kasperd ( 592156 )

          but rebooting would have to include zeroing all of the memory.

          Not necessary. The operating system already has to assume there could be random garbage in all the memory it didn't touch. The operating system has to zero the memory before handing it to applications. And that is the case even if it was zeroed on boot. It could be a long time since the system was booted, and the memory may have been used for something in the meantime. Some operating systems keep a cache of zeroed pages that can be handed to appl
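          (Small illustration of that point on POSIX systems, my example rather than the parent's: anonymous mappings are handed to the process already zero-filled, whatever the physical pages held before.)

            #include <stdio.h>
            #include <sys/mman.h>

            int main(void)
            {
                size_t len = 4096;
                unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (p == MAP_FAILED)
                    return 1;
                printf("first byte: %d\n", p[0]);   /* always 0 */
                munmap(p, len);
                return 0;
            }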

        • Actually, rebooting would just need to zero or replace a few crucial data structures, just as a normal file system format doesn't overwrite all data but only replaces the superblock (or whatever central data structure the file system in question uses) to mark the rest of the covered space as free and usable.

    • Soooo then when you deallocate memory, you overwrite it n times with random data.
  • Early DRAM (Score:5, Interesting)

    by DCFusor ( 1763438 ) on Monday January 24, 2011 @10:22PM (#34990210) Homepage
    though it had a short refresh time spec, would actually hold nearly all the bits for up to a minute, and we made early "digital" cameras out of them, charging up all the bits and letting light discharge the lit-up pixels quicker than the others. It was a bit of a bear to figure out the pixel layout -- it wasn't in order -- but we did it, and even got to two bits or so per pixel by taking more than one shot after a charge with different exposure times.

    One wonders why someone doesn't just work along those lines. Seems to me for most uses simply increasing the refresh interval would save tons of power, and also complexity. If you could get it to a couple of days, I'd think that would be fine for most portable devices, and you'd just use cheap flash as the disk, like now. I am guessing you'd lose some density, as the older, less dense DRAMs had large cells that stored more charge per bit, and new lower-voltage semis are also leakier, but it might be worth looking into anyway.

    I recall one case where the company I worked for designed some very early disk cache controllers. Well, actually I did about 90% of that. We used DRAM, but simply arranged the code so the basic idling operation (for example, looking for IO requests or sorting the cache lookup table) took care of refresh anyway; it wasn't too hard at all to manage, and of course a block read or write always did a full page refresh. Made the thing a little bit faster, as there was never a conflict between refresh and real use in the bargain. This would also be trivial to do in any current OS. Probably happens by accident except in real pathological cases.
    • Unfortunately, my guess is that simply increasing the refresh time is only going to solve one problem. I'm not an expert on DRAM or anything, but it seems to behave like a capacitor. Longer refresh times require larger capacitance. Larger capacitance doesn't necessarily mean more power, but I think it would take more voltage to change the state of a bit (you'd have to reverse a larger charge).

      Also, the biggest problem with DRAM these days is speed (reads/writes per second). The best way to increase speed (w
      • Re:Early DRAM (Score:5, Interesting)

        by sxeraverx ( 962068 ) on Tuesday January 25, 2011 @02:25AM (#34991244)

        You are correct. Currently, DRAM stores information as an N-channel MOSFET attached to a capacitor. This MOSFET is leaky. There's no getting around this leakage. This leakage acts to discharge the capacitor where the bit is stored.

        You can try to decrease this leakage in a number of ways. You can increase the threshold voltage of the gate, but that means you'd have to increase the voltage the DRAM operates at as well, or else you wouldn't be able to charge the capacitor. This means you'd increase the energy-per-operation of the DRAM cell, because you'd have to charge the capacitor up more. You'd burn up more power, because the leakage is proportional to the operating voltage, but the charging energy is proportional to the square of the voltage.

        Alternatively, you could increase the capacitance. But this means that the capacitor would take longer to charge, slowing down every operation. Also, doubling the capacitor size means doubling the energy it stores (and therefore burns with every operation). It also makes the DRAM cells bigger, meaning you can't fit as many on a silicon wafer.

        Neither of these is what you want to do. In fact, you want to do the opposite for traditional DRAMs. It's counterintuitive, but you get more density, more speed, and less power by increasing the refresh rate (or rather, increasing the refresh rate is a side-effect of all of those). Unfortunately, lithography limits and quantum mechanics mean we're having a hard time going any smaller.

        It's truly amazing what we can do. The oxide layer (essentially a layer of quartz glass between metal and silicon) on a MOS these days is 5 atoms thick. We're going to have to come up with something that relies on something other than the traditional semiconductor effects if we want to continue forward.
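        To put rough numbers on that trade-off (my arithmetic, using only the standard capacitor relations):

            E_{write} = \tfrac{1}{2} C V^2, \qquad P_{leak} \propto V, \qquad \tau_{charge} = R C

        So doubling the cell voltage roughly doubles the leakage but quadruples the energy spent charging the cell on every write, while doubling the capacitance doubles both the stored energy and the RC charging time, which is exactly the speed, power, and density penalty described above.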

        • Why don't you want to reduce the conductive area (channel and poly sizes) while increasing the thickness of the oxide layer? That would reduce capacitance and leak rate at the same time, wouldn't it?

          But I guess it would be a bitch to manufacture... Thick and small layers of oxide, those must be quite hard to corrode into the right shape.

          • Increasing the oxide layer thickness was part of the solution, but they couldn't do it with silicon dioxide. The newer deep submicron CMOS processes use metal gates (instead of polysilicon) with high-k gate dielectrics (like hafnium oxide). The thicker high-k materials reduce leakage while still allowing a low turn-on voltage for the transistors.
    • by Bender_ ( 179208 )

      Seems to me for most uses simply increasing the refresh time interval would save tons of power, and also complexity. If you could get it to a couple of days,

      Yes, increasing the refresh time is indeed a way to reduce the power consumption of a DRAM. The problem is that you are dealing with billions of memory cells. The median retention time of typical cells is well within the range of seconds. But there is a tiny fraction of cells (1 in 10,000) that lose their charge much more quickly, and things may get worse at elevate

      • by kasperd ( 592156 )

        There are ways to work around this by introducing on-the-fly error correction. But this will result in a larger device and added latency, which is obviously not desired in many applications.

        Aren't you going to need this on-the-fly error correction in every system where you don't want a random bitflip to happen every once in a while? I would assume the data going between the DRAM and the SRAM in the control part of the chip would always go through some ECC logic both ways, except on those chips where it was

        • Afaict currently server memory generally has ECC and desktop memory doesn't.

          However, ECC is a game of probabilities and block sizes. Let's say your raw bitflip rate is one per 2^30 bits (I suspect in reality it's lower) and that the different bits of each word come from very different parts of the memory array. Your chance of an error in a 64-bit word is around one in 2^24; your chance of two errors in a 64-bit word is around one in 2^48.

          In other words an ECC scheme that works on a word level and can only cor
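          (Quick sanity check of those figures, my arithmetic under the same assumed raw flip rate of one per 2^30 bits and 64-bit ECC words; the double-error term comes out near 2^-49, the same ballpark as the 2^-48 above.)

            #include <math.h>
            #include <stdio.h>

            int main(void)
            {
                const double p = pow(2.0, -30.0);   /* assumed per-bit flip probability */
                const int    n = 64;                /* bits per ECC word */

                double p_one = n * p;                         /* ~P(one flipped bit)  */
                double p_two = (n * (n - 1) / 2.0) * p * p;   /* ~P(two flipped bits) */

                printf("single-bit error per word ~ 2^%.0f\n", log2(p_one));  /* -24 */
                printf("double-bit error per word ~ 2^%.0f\n", log2(p_two));  /* -49 */
                return 0;
            }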

          • by kasperd ( 592156 )
            I don't know what blocksize ECC memory uses. It would make sense to use one line of the DRAM memory, because if you don't actually correct errors during refresh, then the probability of an uncorrectable error increases the longer a memory location has remained untouched. Without correction, you could be reading out bad data and writing it back. And the memory would be accumulating errors.
    • I was reading the technical specs on the Z80 recently, and the Zilog designers were brilliant in their CPU design, which automatically refreshed the dynamic RAM by using a self-incrementing "R" register to traverse enough addresses to do the full refresh. That was back in the late 70s and 80s, and more or less the same dynamic RAM is with us still. We could use some advances in that area. It seems like we are struggling to find the elegant memory solution. When I started with computers, we were writin
  • Interesting (Score:4, Interesting)

    by c0lo ( 1497653 ) on Monday January 24, 2011 @10:26PM (#34990236)
    TFA

    "We believe our new memory device will enable power-proportional computing, by allowing memory to be turned off during periods of low use without affecting performance," said Franzon.

    Huh! A new chapter opens in "program/OS optimization" - heap fragmentation will have an impact on the power your computer consumes, even when not swapping (assuming the high density and non-volatility render the HDD obsolete... a "no more swapping, everything is non-volatile-RAM, with constant addressing cost" scenario becomes plausible).

    • "no more swapping, everything is non-volatile-RAM, with constant addressing cost" becomes plausible

      Wouldn't non-volatile memory just be called memory, especially given that, by definition, memory recalls past events?

      This family of memory is not only plausible, it has existed before -- it is how the model of a "Turing Machine" operates. In fact, our first reel-to-reel magnetic memory systems had this "non-volatile memory" of which you speak, due to the absence of large quantities of RAM (we had sequential-access memory instead); programs were executed as read from tape, and variables were often interleaved with

      • by c0lo ( 1497653 )

        "no more swapping, everything is non-volatile-RAM, with constant addressing cost" becomes plausible

        Wouldn't non-volatile memory just be called memory, especially given that, by definition, memory recalls past events?

        How far back does it have to recall and still be called a memory?

        This family of memory is not only plausible, it has existed before -- it is how the model of a "Turing Machine" operates.

        Yes, I remember them. Density and random-access were indeed lacking.

        What else will change in the mindset of programmers/sysadmins when the RAM (heap and stack) and HDD are (again) not distinguishable anymore? Like:
        1. "Buffer overflow and starting to execute the JPEG file at addr 1.5 TB"
        2. "Hey dude? Where is my C:\ drive?"
        3. "Huh? The memory-mapped-files are deprecated?"
        4. "memory allocation fails. Please try to delete or achive some of you older files"
        5

        • 3. So you'd have to copy everything around instead of letting the MMU alias it for you? Not a good idea.
          4. It's quite inconceivable to have this without any disk quotas.
          6. Any OS other than DOS/Windows had that since basically forever. You can even create the file in a deleted state.

          What else will change in the mindset of programmers/sysadmins when the RAM (heap and stack) and HDD are (again) not distinguishable anymore? Like:
          1. "Buffer overflow and starting to execute the JPEG file at addr 1.5 TB"
          2. "Hey dude? Where is my C:\ drive?"
          3. "Huh? The memory-mapped-files are deprecated?"
          4. "memory allocation fails. Please try to delete or achive some of you older files"
          5. "I want the process with PIDx backed-up"
          6. "Ah... the notion of a smart-file-pointer... the GC deletes the file when no longer referenced".

          I hope you're joking. Just partition the RAM separately and it's no different from any current computer with separate RAM. Early PalmOS devices had the RAM and storage on the same storage device (they stored the OS in ROM and loaded it into RAM once the battery was installed... pulling the battery would wipe the device).

      • Anything called "memory", even in humans, is volatile.

        For permanence, you'd want "clay tablets" or newer technology of that kind.

      • We could call it prescient memory, and it could recall data from the future as well as the past. You wouldn't even have to write the data into it, or give it the address of the data you want because it knows what you want before you ask.
  • by Mr Z ( 6791 ) on Monday January 24, 2011 @10:28PM (#34990250) Homepage Journal
    The memory breakthrough I was working on had the speed of flash and the volatility of DRAM. It was pretty dense though...
  • by Anonymous Coward

    Isn't this a dupe? I thought I saw it last week.

    Actually, don't I see this same article _every_ week?

    • by c0lo ( 1497653 )

      Isn't this a dupe? I thought I saw it last week.

      Actually, don't I see this same article _every_ week?

      Nope... must be that your memory got corrupted... cosmic radiation, I guess (I might be wrong, though... what if somebody rebooted me in the meantime?)

  • Cost/Byte? (Score:4, Insightful)

    by artor3 ( 1344997 ) on Monday January 24, 2011 @11:34PM (#34990570)

    Where does it get the power for the non-volatile write? It would have to have a battery or capacitor built in, in case of sudden loss of power. It would also need low voltage detection for the same reason. How does all of this end up affecting the cost and density? We already have non-volatile SRAM [wikipedia.org] based on the same principles (warning: article sounds like it was lifted from a press release).

    The reason we use DRAM as computer memory is because it's really, really cheap. If nvDRAM ends up having a significantly higher cost per byte, I doubt it'll see much use. Especially when one considers the ever-falling price point for solid-state drives.

    • by fbartho ( 840012 )

      From what I understood from other comments (didn't RTFA), the point of this is more that it acts like RAM continuously until, say, you shut the lid of the laptop; then the laptop pulses a bit of extra power and flash[pun]-freezes the RAM into a stable state. Bam, instant hibernate!

    • Good question on the cost. Can anybody speak to the ratio between production and material costs in any memory type? I'm curious how big an impact using exotic materials such as palladium and hafnium will make to the overall cost.

      Hmm.. Looking at all the layers they used to produce their chip makes me think that the production costs will be high too.

    • by Anonymous Coward

      It "saves" on command, when the chip is also supplied with Vpp. Each DRAM cell has "shadow" Flash cell to which it is directly connected ... in fact, those cells are one single structure with two capacitors, one leaky for DRAM, one better isolated for Flash.
      It doesn't have to have a backup battery or capacitor built in, non-volatile SRAMs don't have them either, at least not on chip die, they are usually just sealed together in molded package for convenience. However, non-volatile "CMOS" configuration param

  • by Anonymous Coward

    Let's just call it nested memory. kthx.

  • ...does it run Linux?  ;-D
  • by White Flame ( 1074973 ) on Tuesday January 25, 2011 @12:43AM (#34990840)

    I think memristors sound a lot more usable than this setup.

    Given the other thoughts about heap fragmentation and such things, I don't know if it's reasonable to expect fine-grained "flush to NV and stop refreshing" application, but rather a system-sleep sort of mechanism. Of course, if memory allocators and GCs are written with an eye to keeping LRU data clumped together, it might be reasonable. The comments say flushing is done on a "line by line" basis, though I don't personally know how big or small that gets.

    One wonders exactly how much juice it takes to flush to NV, vs the standard draw of the DRAM-style mode of operation.

  • Maybe in the mobile sector 1W per SDRAM module is interesting, but on desktop computers it isn't. They should reduce the energy needed to keep ATX boxes switched off(!) to 0W, as it was with AT, where a mechanical switch cut the PC from power. It is simply unacceptable to consume energy (usually over 5W) when something is completely down (yeah I know, there is wake-on-LAN etc., but 99% of people don't use it). That's why I have a big fat red switch on my multi-outlet power strip.

  • The memory marketplace was too much drama (teehee) in the end. At least now we can hope for some economies of scale, which will hopefully be passed on to consumers.
  • What does this mean to users? There's no new functionality; it more or less combines existing functionality.
    So, what is the significance here? (I'm honestly asking; I'm sure there are some interesting consumer benefits.)

    A few I can think of:
    1) Longer battery life on mobile devices
    2) Instant "on", since the state of the OS and applications can remain in memory

    With #2 I would guess that certain programs that maintain clock pulse counters may operate "oddly" and have to be reprogrammed to stay in sync. Tho

    • Actually 2) has interesting connotations. Those of us old enough to remember the 80's will remember when Memory Mapped IO was the norm. This meant that the CPU treated all data as an extension of RAM. Your memory sticks, hard drive, floppy disk and network card buffers etc could all be mapped onto the CPU's memory space. Each had different speeds (obviously) and the total memory could not exceed the addressable space of the CPU (e.g. 4GB, but there were tricks for getting around this). To get something into

  • The universal memory would have the speed of SRAM, the density of Flash, would write directly into the non-volatile memory (i.e. no extra nonvolatile storage step, and certainly no need to refresh), and would have the same price per bit as hard disks. That way you could use it in cache (SRAM speed), as a DRAM replacement (beats DRAM in every category) and as a hard disk replacement (nonvolatile, cheap).

    This "universal" memory would be unsuitable for cache memory, thus it isn't universal.

  • One to utterly wipe RAM... No, encrypting RAM is not an alternative, unless you really enjoy having your system, with the power of a supercomputer of 15 years ago, move with the speed of a *whizbang* 8088....

                    mark

  • I worked at a memory company whose name rhymes with Licron for over 10 years, and about 8 years ago we were working on something similar to this. It was basically high-speed NOR flash memory, but the project didn't get much traction, losing out to ramping up DDR DRAM and Rambus DRAM memory production. Given that DRAM production is an almost completely money-losing venture, you'd think memory companies like Infineon, Hynix, and Samsung would be pushing this technology a bit more aggressively.
  • I didn't know they axed duplicates now..

    "Memory On Demand" Cuts Energy Use
    Posted by CmdrTaco on Wednesday January 26, @09:00AM
    from the cut-it-off dept.
    judgecorp writes
    "Researchers are testing memory that can be powered down when not in use. This could slash the power used by computer memory, combining the benefits of DRAM (speed) and Flash (low power, non-volatile). The memory could also allow "instant-on" computers, according to an IEEE Computer Society report of the research at Carolina State University."

"Hello again, Peabody here..." -- Mister Peabody

Working...