'Universal' Memory Aims To Replace Flash/DRAM
siliconbits writes "A single 'universal' memory technology that combines the speed of DRAM with the non-volatility and density of flash memory was recently invented at North Carolina State University, according to researchers. The new memory technology, which uses a double floating-gate field-effect-transistor, should enable computers to power down memories not currently being accessed, drastically cutting the energy consumed by computers of all types, from mobile and desktop computers to server farms and data centers, the researchers say."
10 Years away (Score:2, Insightful)
This technology always seems to be less than 10 years away.
Re:10 Years away (Score:5, Interesting)
Finally! (Score:5, Funny)
Whatever year it comes to market, you can be sure of one thing....
That will be the year of Multics [wikipedia.org] on the desktop.
Re: (Score:2)
Don't dis Multics! Multics was forward-thinking, but perhaps too much so for its own good. Unix got the upper hand largely because it ran on cheaper hardware that did not have an MMU.
If someone is planning on creating an OS from scratch to run on mobile or embedded devices, then I think that person should take a look at Multics first instead of creating yet another Unix copy.
Re: (Score:2)
I agree totally. I was only half fishing for +5 Funny. Absolutely no disrespect meant. I seriously think that high speed non-volatile memory is the only stumbling block to making multics really useful.
Was hoping to get at least one Insightful for my comment, but instead I get a bunch of Funnys and some neck-beard behaving like I just shot all his chickens.....
Re: (Score:2)
This technology always seems to be less than 10 years away.
There may be hope for this one. These researchers appear to have enough confidence not to adopt the usual 5-year microelectronic SPI [slashdot.org].
Re: (Score:2)
"This technology always seems to be less than 10 years away."
Eventually, (less than ten years away) technology to produce technology predicted to be less than ten years away in less than ten years will be fielded.
Re: (Score:2)
USB is high on CPU IO (Score:2)
USB is heavy on CPU I/O; there are much better buses to use.
Re: (Score:1)
Re: (Score:1)
Re: (Score:2)
Why you would actually want to use USB rather than SATA or eSATA is beyond me though. Maybe USB3 in the short term.
Re: (Score:2)
SATA is better than USB for internal drives, but eSATA can be a PITA to set up - do Average Joes even know how to do all that crap in the BIOS to get it working in the first place, never mind having to hook up a separate power connector?
Re: (Score:2)
I've never had a problem and haven't had to touch the BIOS for eSATA. I just plug the drive in and it works. Then again, I'm running Linux.
-Aaron
Re: (Score:2)
Same here, but most mobos aren't configured to support eSATA out of the box in my experience.
Re: (Score:1)
I wouldn't. I was just saying that computers usually do have internal USB ports of a kind if you need to connect devices internally. I imagine it could be useful for things like copy-protection dongles that you wouldn't want stolen from an office computer.
Regarding eSATA: I prefer it over USB, as any sane person would, but I have encountered a few problems with hot swapping in the past
Re: (Score:2)
I meant "why you would actually want" as a colloquial form of "why anybody would actually want"; maybe I should be more explicit though...
Re: (Score:2)
You still don't want to plug your core memory into a high-latency, DMA-controlled bus.
But I like volatility! (Score:5, Interesting)
Re:But I like volatility! (Score:4, Informative)
The first floating-gate in the stack is leaky, thus requiring refreshing about as often as DRAM (16 milliseconds). But by increasing the voltage its data value can be transferred to the second floating-gate, which acts more like a traditional flash memory, offering long-term nonvolatile storage.
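If it helps to picture it, here is a toy software model of that behaviour: a leaky fast bit that needs refreshing, plus a second bit that only changes when you deliberately commit it. The 16 ms figure comes from the comment above; everything else (names, structure) is made up for illustration and has nothing to do with the actual device physics.

/* Toy model of the dual floating-gate cell described above: a leaky
 * fast tier that needs refreshing (~16 ms, per the comment) plus a
 * second, non-volatile tier written on demand.  Behavioural sketch
 * only; not the real device. */
#include <stdio.h>

struct cell {
    int  fast_bit;          /* leaky "DRAM-like" gate                  */
    int  nv_bit;            /* second gate, survives power loss        */
    long ms_since_refresh;
};

static void tick(struct cell *c, long ms) {
    c->ms_since_refresh += ms;
    if (c->ms_since_refresh > 16)      /* charge leaked before a refresh */
        c->fast_bit = 0;
}

static void refresh(struct cell *c)      { c->ms_since_refresh = 0; }
static void commit_to_nv(struct cell *c) { c->nv_bit = c->fast_bit; }   /* "raise the voltage" */
static void power_cycle(struct cell *c)  { c->fast_bit = c->nv_bit; c->ms_since_refresh = 0; }

int main(void) {
    struct cell c = { 1, 0, 0 };
    tick(&c, 10); refresh(&c);     /* normal DRAM-style operation        */
    commit_to_nv(&c);              /* e.g. just before powering down     */
    power_cycle(&c);               /* power drops and comes back         */
    printf("bit after power cycle: %d\n", c.fast_bit);   /* prints 1     */
    return 0;
}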
Re: (Score:1)
Simtek _used to_ make a memory like that, called nvSRAM, back in the 1990s by combining SRAM with EEPROM.
I have the databook sitting right in front of me right now. Someday I might fetch some $$$ selling it on ebay.
I hope they solve the issue of limited write cycles for the FLASH cells. Not sure if it would suffer the same high READ error rates as NAND FLASH.
Re: (Score:2)
So they have crammed two sets of "hardware" onto the same physical chip, and transfer data between them depending on the state wanted. Why not just sell flash in DIMM modules and do the same at the chipset level?
Re:But I like volatility! (Score:5, Interesting)
Re: (Score:2)
Very true. Don't rely on assumed physical traits. When in doubt, wipe like the $three_letter_agency is at the door.
Re: (Score:3)
When your memory's nonvolatile
Nothing is forgot, nothing is forgot, nothing is forgot
If your bits try to get at you
flip 'em with a not, flip 'em with a not, flip 'em with a not
security isn't easy y'all,
no it's fsckin not, no it's fscking not, no it's fscking not
With a triple-des key in some volatile ram,
encrypt all your memory and hide it from the man?
Re: (Score:2)
LMAO Mod parent Funny! XD
Re: (Score:1)
Re:But I like volatility! (Score:4, Interesting)
Re:But I like volatility! (Score:4, Funny)
Why would you sell computers with such features? Are your customers terrorists?
Re: (Score:2)
Why would you sell computers with such features? Are your customers terrorists?
No, bankers. But then I repeat myself.
Re: (Score:3)
If you want the hardware to be modified slightly to achieve it, then it should be completely practical. DRAM doesn't write individual cells at a time. It reads out entire lines of bits into SRAM, modifies them there, and writes them back. Moreover, it even periodically sweeps over the lines, just reading them out and writing them back, to refresh them.
I don't know how long the sweep takes, but for wiping the me
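For what it's worth, a rough software analogue of that idea looks like this: the refresh sweep already touches every row, so a wipe could piggy-back on the same pass by writing back zeros instead of the value just read. Row size and count here are invented numbers, not anything from a real DRAM controller.

#include <string.h>

#define ROWS      4096
#define ROW_BYTES 1024
static unsigned char dram[ROWS][ROW_BYTES];
static unsigned char row_buf[ROW_BYTES];           /* the SRAM row buffer */

static void sweep(int wipe) {
    for (int r = 0; r < ROWS; r++) {
        memcpy(row_buf, dram[r], ROW_BYTES);        /* read the row out    */
        if (wipe)
            memset(row_buf, 0, ROW_BYTES);          /* ...or clear it      */
        memcpy(dram[r], row_buf, ROW_BYTES);        /* write it back       */
    }
}

int main(void) { sweep(0); sweep(1); return 0; }    /* refresh pass, then a wipe pass */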
Re: (Score:2)
It certainly has a reset pin...
Re: (Score:2)
One trick I used for debugging (I had no way to log to a serial port in the OS I was using) was to log to memory. The system would crash, then I would simply reboot it, and then dump the log buffer out via the bootloader.
Even after several seconds, the log was still quite readable.
That paper on reading hard driv
say goodbye to volatility! or? (Score:2)
So it's time to think about the next step: overwrite before freeing memory.
I don't worry at all, it becomes a software problem, not a hardware problem. If only everyone overwrote unused memory...
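In C, that discipline looks something like the sketch below. Note that a plain memset before free() can legally be optimised away, so this assumes a platform that provides explicit_bzero (glibc, the BSDs); elsewhere you would substitute memset_s or a volatile-pointer loop.

#define _DEFAULT_SOURCE
#include <stdlib.h>
#include <string.h>

void secure_free(void *p, size_t len) {
    if (p == NULL)
        return;
    explicit_bzero(p, len);   /* scrub; guaranteed not to be optimised out */
    free(p);
}

/* usage:
 *   char *key = malloc(32);
 *   ...use key...
 *   secure_free(key, 32);
 */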
Re: (Score:2)
It could be useful as a hardware feature. The same way a powered-down hard drive parks its head, a chip on your mobo could zero out your RAM using power from a capacitor if the power cuts out.
Re: (Score:3)
Also, we lose the "just reboot it" fix for all the crappy software we write.
Re: (Score:2)
Why? I mean, "rebooting" is still possible, it just sucks that much more since there'd no longer be any hardware reason to do so.
Re: (Score:2)
The hardware reason is because the hardware uses electricity, even in sleep. Yeah, I know, most people don't care about that.
Re: (Score:2)
It doesn't in hibernation.
Regardless, this is a technology which would make that hardware reason go away. As I read it, it's basically a much, much faster form of hibernation.
Re: (Score:2)
Not so; but rebooting would have to include zeroing all of the memory. Starting up and resuming with the contents intact would be more akin to coming out of sleep mode.
Re: (Score:2)
Not necessary. The operating system already has to assume there could be random garbage in all the memory it didn't touch. The operating system has to zero the memory before handing it to applications. And that is the case even if it was zeroed on boot. It could be a long time since the system was booted, and the memory may have been used for something in the meantime. Some operating systems keep a cache of zeroed pages that can be handed to appl
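Something like this toy allocator, in other words: pages come back on a "dirty" list and are only handed out again once scrubbed, with a pre-zeroed pool for the fast path. This only illustrates the policy; no real kernel is this simple, and all the names and sizes are invented.

#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

struct page { struct page *next; };

static struct page *free_dirty;        /* freed pages, contents unknown   */
static struct page *free_zeroed;       /* pages already scrubbed to zero  */

static void *alloc_page(void) {
    struct page *p = free_zeroed;
    if (p) {                           /* fast path: a pre-zeroed page    */
        free_zeroed = p->next;
        memset(p, 0, sizeof *p);       /* the link bytes lived in the page */
        return p;
    }
    p = free_dirty;
    if (p) {                           /* slow path: scrub on demand      */
        free_dirty = p->next;
        memset(p, 0, PAGE_SIZE);
        return p;
    }
    return calloc(1, PAGE_SIZE);       /* fresh memory arrives zeroed     */
}

static void free_page(void *mem) {     /* old contents may be sensitive,  */
    struct page *p = mem;              /* so the page sits on the dirty   */
    p->next = free_dirty;              /* list until it gets scrubbed     */
    free_dirty = p;
}

/* What an idle-time scrubber would do: move one dirty page to the
 * zeroed pool so a later alloc_page() hits the fast path. */
static void scrub_one(void) {
    struct page *p = free_dirty;
    if (!p) return;
    free_dirty = p->next;
    memset(p, 0, PAGE_SIZE);
    p->next = free_zeroed;
    free_zeroed = p;
}

int main(void) {
    void *a = alloc_page();            /* guaranteed to read back as zeros */
    free_page(a);
    scrub_one();
    void *b = alloc_page();            /* recycled, but scrubbed first     */
    free_page(b);
    return 0;
}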
Re: (Score:1)
Actually, rebooting would just need to zero/replace a few crucial data structures, just as a normal file system format doesn't overwrite all data but only replaces the superblock (or whatever central data structure the file system in question uses) to mark the rest of the covered space as free and usable.
Re: (Score:1)
Re: (Score:2)
Fast and durable does have its places. Imagine if you could design a computer where you didn't have to worry about losing power for a sh
Re: (Score:2)
Imagine if you could design a computer where you didn't have to worry about losing power for a short period of time because once the power returned, the entire contents of RAM would still be there and you could just resume where it left off.
I have one of those at work, I call it a laptop (well, a netbook). We had a power cut for a while last week, some of the UPSes ran out, but my netbook was fine :)
And it would mean you didn't have to ensure that your laptop was suspended before the battery runs out, and you wouldn't lose what you were working on in case the battery does run out while suspended.
Use hibernate instead of suspend. Admittedly hibernation has been pretty buggy on some of the OS/laptop combinations I've had over the years.
I do think this stuff would be very cool for power savings when the machine is in use though.
Re: (Score:2)
I have seen computers where one works and the other doesn't. But I'll have to admit that most of my issues with it could be resolved by fixing the software, and no new hardware would be required.
With the laptops I currently have, the situation is as follows. My work laptop (MacBook Pro) is configured to automatically suspend to RAM and disk simultaneously. And this works flawlessly until it actually needs to resume from disk, at which point it crashes and reboots. My older
Re: (Score:2)
Re:HP (Score:4, Informative)
Early DRAM (Score:5, Interesting)
Re: (Score:2)
Also, the biggest problem with DRAM these days is speed (reads/writes per second). The best way to increase speed (w
Re:Early DRAM (Score:5, Interesting)
You are correct. Currently, a DRAM cell is an N-channel MOSFET attached to a capacitor. This MOSFET is leaky. There's no getting around this leakage. This leakage acts to discharge the capacitor where the bit is stored.
You can try to decrease this leakage in a number of ways. You can increase the threshold voltage of the gate, but that means you'd have to increase the voltage the DRAM operates at as well, or else you wouldn't be able to charge the capacitor. This means you'd increase the energy-per-operation of the DRAM cell, because you'd have to charge the capacitor up more. You'd burn up more power, because the leakage is proportional to the operating voltage, but the charging energy is proportional to the square of the voltage.
Alternatively, you could increase the capacitance. But this means that the capacitor would take longer to charge, slowing down every operation. Also, doubling the capacitor size means doubling the energy it stores (and therefore burns with every operation). It also makes the DRAM cells bigger, meaning you can't fit as many on a silicon wafer.
Neither of these is what you want to do. In fact, you want to do the opposite for traditional DRAMs. It's counterintuitive, but you get more density, more speed, and less power by increasing the refresh rate (or rather, increasing the refresh rate is a side-effect of all of those). Unfortunately, lithography limits and quantum mechanics mean we're having a hard time going any smaller.
It's truly amazing what we can do. The oxide layer (essentially a layer of quartz glass between metal and silicon) on a MOS these days is 5 atoms thick. We're going to have to come up with something that relies on something other than the traditional semiconductor effects if we want to continue forward.
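To put rough numbers on the voltage and capacitance trade-offs above, using the ordinary capacitor formula E = C*V^2/2 (the 25 fF / 1.2 V starting point is just an illustrative guess, not a real DRAM process parameter):

#include <stdio.h>

int main(void) {
    double C = 25e-15;                 /* farads, illustrative only */
    double V = 1.2;                    /* volts,  illustrative only */
    double E = 0.5 * C * V * V;        /* joules per charge of the cell */

    printf("baseline: %.3g J per cell write\n", E);
    printf("double C: %.3g J (2x energy, slower, bigger cell)\n",
           0.5 * (2 * C) * V * V);
    printf("double V: %.3g J (4x energy, plus higher leakage)\n",
           0.5 * C * (2 * V) * (2 * V));
    return 0;
}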
Re: (Score:2)
Why don't you want to reduce the conductive area (channel and poly sizes) while you increase the thickness of the oxide layer? That would reduce capacitance and leak rate at the same time, wouldn't it?
But I guess it would be a bitch to manufacture... Thick and small layers of oxide, those must be quite hard to etch into the right shape.
Re: (Score:1)
Re: (Score:2)
Seems to me for most uses simply increasing the refresh time interval would save tons of power, and also complexity. If you could get it to a couple of days,
Yes, increasing the refresh time is indeed a way to reduce power consumption of a DRAM. The problem is that you are dealing with billions of memory cells. The median retention time of typical cells is well within the range of seconds. But there is a tiny fraction of cells (1/10000) that lose their charge much quicker, and things may get worse at elevate
Re: (Score:1)
Aren't you going to need this on-the-fly error correction in every system where you don't want a random bitflip to happen every once in a while? I would assume the data going between the DRAM and the SRAM in the control part of the chip would always go through some ECC logic both ways, except on those chips where it was
Re: (Score:2)
Afaict currently server memory generally has ECC and desktop memory doesn't.
However, ECC is a game of probabilities and block sizes. Let's say your raw bitflip rate is one per 2^30 bits (I suspect in reality it's lower) and that the different bits of each word come from very different parts of the memory array. Your chance of an error in a 64-bit word is around one in 2^24; your chance of two errors in a 64-bit word is around one in 2^48.
In other words an ECC scheme that works on a word level and can only cor
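Working those numbers through explicitly (the 2^-30 raw flip rate is the assumption from above, not a measured figure): with flip probability p per bit and 64-bit words, P(>=1 flip) is roughly 64*p and P(>=2 flips) is roughly C(64,2)*p^2.

#include <math.h>
#include <stdio.h>

int main(void) {
    double p   = pow(2, -30);               /* assumed raw bitflip rate  */
    double one = 64 * p;                    /* ~ one error per word      */
    double two = (64.0 * 63 / 2) * p * p;   /* ~ two errors per word     */

    printf("P(1 flip in a word)  ~ 2^%.1f (parent says ~2^-24)\n", log2(one));
    printf("P(2 flips in a word) ~ 2^%.1f (parent says ~2^-48)\n", log2(two));
    return 0;
}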
Re: (Score:1)
Re:Early DRAM (refresh is sometimes easy) (Score:2)
Re: (Score:2)
I don't think the intent is to replace main memory, though. The benefits of two-tier storage like this are actually quite significant. A sizable percentage of writes never get flushed to disk because they are to temporary files.
Combine a smart OS that uses the first tier for write caching and the second tier for permanent storage, and you'd be able to significantly reduce wear (assuming, of course, that there is a wear problem
Re: (Score:2)
Background storage, even FLASH, is far larger than main memory for a reason
That reason is that main memory is more expensive, is volatile, and requires power at all times. If that's no longer true, perhaps it's time to revisit the older designs where the storage and the memory were the same thing.
OSses are not designed with this type memory in mind
And they don't have to be if the BIOS zeros it on boot. However, there are substantial advantages to be had if they DO take advantage. For example the suspend states become much simpler.
At the same time, there are OSes that ARE designed this way. Currently, they use disk backed memory, bu
Re: (Score:2)
If - but that seems a pretty gigantic if to me.
It's *very* common with speed/price/size tradeoffs in engineering, regardless of which technology is chosen. Usually, "pick any 2" is what it boils down to.
Sure, in principle, if you invent something that scores well on all 3 axes, then no trade-off is necessary and all the older systems that score poorly on at least one of the 3 axes are obsolete. But I'm not holding my breath.
For storing large amounts of data for a long time, you want huge and cheap, but need l
Re: (Score:2)
As you say, it's a trade-off. If it's strong enough on 2 axes and not so bad on the other, it might be enough, at least for some applications. After all, the pick-2 rule applies to everything we use now as well.
We already have a trend to using flash as storage (it's all the rage!), so it's not exactly a huge leap to imagine this catching on if it's decently priced and really is more durable.
Re: (Score:2)
- Background storage, even FLASH, is far larger than main memory for a reason
What does this have to do with anything?
OSses are not designed with this type memory in mind
The only possible problem any current OS could have with this is this:
Have you ever counted how often you need to reboot a computer? WTF is this thing going to help?
...which could easily be solved by a BIOS tweak. Empty RAM on powerup. Problem solved.
Density will be far lower than DRAM, causing significantly higher prices and preventing it from competing. Also more complex cells are inherently more expensive and less reliable
Yeah that's why we're all still using EEPROMs and 5 1/4 floppies. Those hard drives the size of microwave ovens are just too damn expensive.
An "emergency flash write" still takes a lot of time and energy, at least partially invalidating the concept
And how often do you do anything like this with current storage technologies?
Interesting (Score:4, Interesting)
"We believe our new memory device will enable power-proportional computing, by allowing memory to be turned off during periods of low use without affecting performance," said Franzon.
Huh! A new chapter opens in "program/OS optimization" - heap fragmentation will have an impact on the power your computer consumes, even when not swapping (assuming the high density and non-volatility render HDDs obsolete... "no more swapping, everything is non-volatile-RAM, with constant addressing cost" becomes plausible).
Re: (Score:3)
"no more swapping, everything is non-volatile-RAM, with constant addressing cost" becomes plausible
Wouldn't non-volatile memory just be called memory, esp. given that, by definition, memory recalls past events?
This family of memory is not only plausible, it has existed before -- it is how the model of a "Turing Machine" operates. In fact, our first reel-to-reel magnetic memory systems had this "non-volatile memory" of which you speak, due to the absence of large quantities of RAM (we had sequential access memory instead); programs were executed as read from tape, and variables were often interleaved with
Re: (Score:2)
"no more swapping, everything is non-volatile-RAM, with constant addressing cost" becomes plausible
Wouldn't non-volatile memory just be called memory, esp. given that, by definition, memory recalls past events?
How far back to recall and still be named a memory?
This family of memory is not only plausible, it has existed before -- it is how the model of a "Turing Machine" operates.
Yes, I remember them. Density and random-access were indeed lacking.
What else will change in the mindset of programmers/sysadmins when the RAM (heap and stack) and HDD are (again) not distinguishable anymore? Like:
1. "Buffer overflow and starting to execute the JPEG file at addr 1.5 TB"
2. "Hey dude? Where is my C:\ drive?"
3. "Huh? The memory-mapped files are deprecated?"
4. "Memory allocation fails. Please try to delete or archive some of your older files"
5
Re: (Score:2)
3. So you'd have to copy everything around instead of letting the MMU alias it for you? Not a good idea.
4. It's quite inconceivable to have this without any disk quotas.
6. Any OS other than DOS/Windows had that since basically forever. You can even create the file in a deleted state.
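The "create the file in a deleted state" trick, for anyone who hasn't seen it: on Unix-like systems you create a file and unlink it immediately, so the data stays reachable through the descriptor and the space is reclaimed automatically when the last descriptor closes -- roughly the "GC deletes the file" behaviour from item 6. A minimal sketch (the scratch-directory handling is just for illustration; Linux also offers O_TMPFILE for the same effect):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int anonymous_scratch_file(const char *scratch_dir) {
    char path[256];
    snprintf(path, sizeof path, "%s/scratch-XXXXXX", scratch_dir);
    int fd = mkstemp(path);            /* create a unique temporary file   */
    if (fd >= 0)
        unlink(path);                  /* drop the name immediately: the
                                          data lives on until the last
                                          descriptor is closed             */
    return fd;
}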
Re: (Score:2)
What else will change in the mindset of programmers/sysadmins when the RAM (heap and stack) and HDD are (again) not distinguishable anymore? Like:
1. "Buffer overflow and starting to execute the JPEG file at addr 1.5 TB"
2. "Hey dude? Where is my C:\ drive?"
3. "Huh? The memory-mapped files are deprecated?"
4. "Memory allocation fails. Please try to delete or archive some of your older files"
5. "I want the process with PID x backed up"
6. "Ah... the notion of a smart-file-pointer... the GC deletes the file when no longer referenced".
I hope you're joking. Just partition the RAM separately and it's no different to any current computer with separate RAM. Early PalmOS devices had the RAM and storage on the same storage device (they stored the OS in ROM and loaded it into RAM once the battery was installed...pulling the battery would wipe the device).
Re: (Score:2)
Anything called "memory", even in humans, is volatile.
For permanence, you'd want "clay tablets" or newer technology of that kind.
Re: PRAM Memory (Score:2)
Oops, I knew I did something wrong... (Score:4, Funny)
Re: (Score:2)
Re: (Score:2)
Dupe (Score:1)
Isn't this a dupe? I thought I saw it last week.
Actually, don't I see this same article _every_ week?
Re: (Score:2)
Isn't this a dupe? I thought I saw it last week.
Actually, don't I see this same article _every_ week?
Nope... must be that your memory got corrupted... cosmic radiation I guess (I might be wrong, though... what if somebody rebooted me meantime?)
Cost/Byte? (Score:4, Insightful)
Where does it get the power for the non-volatile write? It would have to have a battery or capacitor built in, in case of sudden loss of power. It would also need low voltage detection for the same reason. How does all of this end up affecting the cost and density? We already have non-volatile SRAM [wikipedia.org] based on the same principles (warning: article sounds like it was lifted from a press release).
The reason we use DRAM as computer memory is because it's really, really cheap. If nvDRAM ends up having a significantly higher cost per byte, I doubt it'll see much use. Especially when one considers the ever-falling price point for solid-state drives.
Re: (Score:1)
From what I understood from other comments (didn't RTFA), the point of this is more that it acts like RAM continuously until, say, you shut the lid of the laptop; then the laptop pulses a bit of extra power and flash[pun]-freezes the RAM into a stable state. Bam, instant hibernate!
Re: (Score:2)
Good question on the cost. Can anybody speak to the ratio between production and material costs in any memory type? I'm curious how big an impact using exotic materials such as palladium and hafnium will make to the overall cost.
Hmm... Looking at all the layers they used to produce their chip makes me think that the production costs will be high too.
Re: (Score:1)
It "saves" on command, when the chip is also supplied with Vpp. Each DRAM cell has "shadow" Flash cell to which it is directly connected ... in fact, those cells are one single structure with two capacitors, one leaky for DRAM, one better isolated for Flash.
It doesn't have to have a backup battery or capacitor built in, non-volatile SRAMs don't have them either, at least not on chip die, they are usually just sealed together in molded package for convenience. However, non-volatile "CMOS" configuration param
It's no use, no one listens to ACs. (Score:1)
Especially not the technically competent ones.
mv UNIVERSAL_MEMORY NESTED_MEMORY (Score:1)
Let's just call it nested memory. kthx.
But... (Score:1)
Re: (Score:2)
No. But it might store it.
meh... (Score:3)
I think memristors sound a lot more usable than this setup.
Given the other thoughts about heap fragmentation and such things, I don't know if it's reasonable to expect a fine-grained "flush to NV and stop refreshing" application, but rather a system-sleep sort of mechanism. Of course, if memory allocators and GCs are written with an eye to keeping LRU data clumped together, it might be reasonable. The comments say flushing is done on a "line by line" basis, though I don't personally know how big or small a line gets.
One wonders exactly how much juice it takes to flush to NV, vs the standard draw of the DRAM-style mode of operation.
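If the line-by-line flushing mentioned above pans out, the software side could be as dumb as a dirty bitmap: track which lines have been written since the last commit and only push those to the non-volatile tier before sleeping. Everything below -- line size, the bitmap, the nv_commit_line() hook -- is invented for illustration; none of it comes from the paper.

#include <stddef.h>
#include <stdint.h>

#define LINE_BYTES 64
#define LINES      (1 << 16)

static uint8_t  mem[LINES][LINE_BYTES];
static uint64_t dirty[LINES / 64];                 /* one bit per line */

static void mark_dirty(size_t line) { dirty[line / 64] |= 1ULL << (line % 64); }

/* Hypothetical hook that would raise the programming voltage for one line. */
static void nv_commit_line(size_t line) { (void)mem[line]; }

static void flush_to_nv(void) {                    /* e.g. just before sleep */
    for (size_t w = 0; w < LINES / 64; w++) {
        if (!dirty[w]) continue;                   /* 64 clean lines: skip   */
        for (int b = 0; b < 64; b++)
            if (dirty[w] & (1ULL << b))
                nv_commit_line(w * 64 + b);
        dirty[w] = 0;
    }
}

int main(void) {
    mem[42][0] = 7; mark_dirty(42);    /* ordinary DRAM-speed write     */
    flush_to_nv();                     /* commit only what changed      */
    return 0;
}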
"Drastically reduce" the 1W (Score:2)
Maybe in the mobile sector 1W per SDRAM module is interesting, but on desktop computers it isn't. They should reduce the energy needed to keep ATX boxes switched off(!) to 0W, as it was with AT, where a mechanical switch cut the PC off from power. It is simply unacceptable to consume energy (usually over 5W) when something is completely down (yeah I know, there is wake-on-LAN etc., but 99% of people don't use it). That's why I have a big fat red switch on my multi-outlet power strip.
Re: (Score:3)
My entire primary server uses less than 5W when operating, except when running absolutely flat-out, when it eats a whole 7W, so I agree with you!
http://www.theregister.co.uk/2010/11/11/diy_zero_energy_home_server/ [theregister.co.uk]
Rgds
Damon
Re: (Score:2)
Very nice low-power setup. Running complex stuff too. The SheevaPlug looks like a good candidate for a NAS / media server without the pain of big optimisation, and would save me $$$ compared to running PC tower configs to mostly copy files to my box under the TV.
About time... (Score:1)
Impressive stuff, but... (Score:2)
What does this mean to users? There's no new functionality. It more or less "combines" existing functionality.
So, what is the significance here? (I'm honestly asking; I'm sure there are some interesting consumer benefits.)
A few I can think of:
1) Longer battery life on mobile devices
2) Instant "on", since the state of the OS and applications can remain in memory
With #2 I would guess that certain programs that maintain clock pulse counters may operate "oddly" and have to be reprogrammed to stay in sync. Tho
Re: (Score:2)
Actually 2) has interesting connotations. Those of us old enough to remember the 80's will remember when Memory Mapped IO was the norm. This meant that the CPU treated all data as an extension of RAM. Your memory sticks, hard drive, floppy disk and network card buffers etc could all be mapped onto the CPU's memory space. Each had different speeds (obviously) and the total memory could not exceed the addressable space of the CPU (e.g. 4GB, but there were tricks for getting around this). To get something into
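The closest modern software cousin of that idea is mapping a file straight into the process's address space, after which the file's bytes really are "just memory" to the CPU. A minimal POSIX sketch, with error handling trimmed and "data.bin" as a placeholder name:

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.bin", O_RDWR);
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0)
        return 1;

    /* after this call the file's contents are addressed like ordinary memory */
    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    p[0] = 'X';                        /* modifies the file itself     */
    munmap(p, st.st_size);
    close(fd);
    return 0;
}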
Re: (Score:1)
Not the universal memory (Score:1)
The universal memory would have the speed of SRAM, the density of Flash, would write directly into the non-volatile memory (i.e. no extra nonvolatile storage step, and certainly no need to refresh), and would have the same price per bit as hard disks. That way you could use it in cache (SRAM speed), as a DRAM replacement (beats DRAM in every category), and as a hard disk replacement (nonvolatile, cheap).
This "universal" memory would be unsuitable for cache memory, thus it isn't universal.
...and requiring another daemon (Score:2)
One to utterly wipe RAM... No, encrypting RAM is not an alternative, unless you really enjoy having your system, with the power of a supercomputer of 15 years ago, move with the speed of a *whizbang* 8088....
mark
Been there done that... (Score:1)
Say hello to the memory hole (Score:2)
I didn't know they axed duplicates now...
"Memory On Demand" Cuts Energy Use
Posted by CmdrTaco on Wednesday January 26, @09:00AM
from the cut-it-off dept.
judgecorp writes
"Researchers are testing memory that can be powered down when not in use. This could slash the power used by computer memory, combining the benefits of DRAM (speed) and Flash (low power, non-volatile). The memory could also allow "instant-on" computers, according to an IEEE Computer Society report of the research at Carolina State University."