
MRAM Inches Towards Prime Time

levin writes "According to an article over at EETimes, magnetoresistive RAM chips are getting a little more practical. Infineon Technologies released details of a new 16-Mbit MRAM component on Tuesday, and the read and write cycle times of this chip make it 'competitive with established DRAM.' How long before nonvolatile memory becomes the solution to crash-prone software rather than better programming?"
  • No Subject (Score:5, Insightful)

    by Anonymous Coward on Friday June 25, 2004 @03:31AM (#9525937)
    Last time I checked, most of the software crashes aren't caused by memory randomly disappearing.
    • Re:No Subject (Score:3, Insightful)

      by TheLink ( 130905 )
      I think a lot of people have been trolled by the story.

      Hmm, slashdot's filters are pretty annoying too. I have to type slower in order to post successfully. Gack.
    • Re:No Subject (Score:3, Informative)

      No shit. I wonder what kind of havoc a shitty OS will wreak on an NVRAM system? I hope there is always a way to reset the banks, because I don't trust much of anything, especially Windows, to behave well enough to stay "running" like that with no "failsafe" power-cycle option.
    • I think you missed the point. The submitter feels that if MRAM becomes widespread, you could recover from system crashes more easily because the memory is non volatile. This might lead to programmers becoming lazier and not bothering to fix bugs.

      Not that I agree with that viewpoint; just pointing out what I think was meant.

  • huh? (Score:5, Insightful)

    by pe1rxq ( 141710 ) on Friday June 25, 2004 @03:32AM (#9525943) Homepage Journal
    How long before nonvolatile memory becomes the solution to crash-prone software rather than better programming?

    Probably very long......
    There are very few volatile-memory-related software bugs.....
    HINT: You don't want your ram back in the same corrupt state it was in before the reboot.

    Jeroen
    • There are some NVM-related bugs though. Programming Flash can be tricky, especially when you want to do advanced things like implementing a file system using Flash. Not what the submitter was thinking of, I believe - but MRAM could still help make programmers' lives easier.
    • Re:huh? (Score:5, Insightful)

      by gfody ( 514448 ) on Friday June 25, 2004 @04:03AM (#9526034)
      How long before ______________ becomes the solution to crash-prone software rather than better programming?

      I'm sure it was just something he added to the article submission to try and sound smart. After all.. it does make sense for a lot of other articles (faster cpus, faster memory, severely high level programming languages, etc etc).
      • Re:huh? (Score:2, Insightful)

        by Threni ( 635302 )
        >I'm sure it was just something he added to the article submission to try and
        >sound smart

        All the more ironic that it had the opposite effect! "Oh, I dunno..just put something about Microsoft...no, bad programming...that's better." "But how does a different type of RAM magically fix errors in program design?" "Shut up! This is my article and I'll write what I like!"
      • No. Back in the age of core memory, it was very common to diagnose system crashes by dumping core before re-initializing the system. When PCs crash, the information is gone because of the DRAM. This can help operating system development.

        Yay for core memory coming back! (sort of) *wink*
    • by NigritudeUltramarine ( 778354 ) on Friday June 25, 2004 @05:00AM (#9526148)
      There are very few volatile-memory-related software bugs.....

      Oh, are you SURE about that? You should research such statements first, my friend, rather than assuming.

      Take a look at this review from last year [anandtech.com] of power supplies by Anandtech.

      They ran a six-hour memory test 54 times--and found that with 512MB of RAM, after each six hour test there were an average of four bits that had flipped! That means there is a memory error on a 512MB PC--on average--every 90 minutes!

      If that error occurs in a code segment in a driver, you may get a system crash. In a Windows DLL, perhaps some system instability. In an application, perhaps an application crash. If it's in a data segment, your important manuscript may suddenly lose a paragraph or skip a couple pages as a linked list pointer jumps to the wrong spot, or you may find a bunch of junk replacing normal text.

      Memory errors are a serious problem that very few people acknowledge. Why people still buy non-ECC RAM is beyond me. (Of course, even with ECC RAM, there are still various places inside the PC where failure can occur--along the various buses for example, which don't all have ECC. So this is only part of the solution.)

      More reliable RAM would definitely be a step in the right direction.
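
The arithmetic behind that 90-minute figure checks out, and a single flipped bit is enough to silently alter data. A quick sketch (the four-flips-per-six-hours number is the Anandtech result quoted above; the corrupted string is just an illustration):

```python
# Back-of-the-envelope check of the bit-flip rate quoted above.
# Observed: ~4 flipped bits per 6-hour test run on 512 MB of non-ECC RAM.
flips_per_run = 4
hours_per_run = 6

minutes_between_errors = hours_per_run * 60 / flips_per_run
print(minutes_between_errors)   # 90.0 -> one error roughly every 90 minutes

# A single flipped bit is enough to corrupt a pointer or a character:
text = bytearray(b"important manuscript")
text[0] ^= 0x20                 # flip one bit (bit 5 of the first byte)
print(text.decode())            # "Important manuscript": silent, plausible corruption
```

With ECC, a single-bit flip like this would be detected and corrected transparently.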
      • You described a hardware bug (last time I checked RAM was still considered hardware).....
        I was talking about a software bug.

        Jeroen
      • Why people still buy non-ECC RAM is beyond me.

        I at least don't consider any data too safe before it hits disk anyway. The risk that it gets destroyed due to user error (like me accidentally hitting the wrong key), program bug, failing hardware, power outage, ... is IMHO still larger than the bit flip problem (especially since there's a high probability that a bit flip either does a minor error (like changing a single character in a document), or causes a program to crash (if the error is inside program cod

      • by rugger ( 61955 ) on Friday June 25, 2004 @05:46AM (#9526229)
        Err, then the PC and ram Anandtech have been using are dodgy.

        Due to the design of Dynamic RAM chips, memory bit flip errors are not influenced by how long the memory sits "idle". I emphasize idle here because Dynamic RAM is never really idle. Each cell in a DRAM chip contains a capacitor and a transistor. If a DRAM cell is left to its own devices, the capacitor soon discharges and the cell loses its state. To stop this from happening, in the background, the RAM controller on the chip is constantly recharging the capacitors. Each cell is read and rewritten every few milliseconds.

        Because DRAM chips are never idle, the whole methodology of the anandtech test is WRONG, and the most obvious conclusion is that anandtech is using dodgy ram, or is simply pushing the RAM beyond its specs to forcibly generate errors.
        • by NigritudeUltramarine ( 778354 ) on Friday June 25, 2004 @06:17AM (#9526298)
          No, that's wrong. The truth is that errors in dynamic RAM can be introduced on each refresh. As you said yourself, dynamic RAM needs to be refreshed every few milliseconds--read and rewritten. Each time that happens, it's possible for an error to be introduced. If the refresh circuitry reads the value incorrectly, you get an error. If it writes the value incorrectly, you get an error. The longer the RAM sits around, the more refresh cycles, so the greater the chance for errors. If the voltages aren't stable enough, for example, you'll find a "1" bit refreshed with slightly too low of a current so that when the next refresh comes around, it's read as a "0" as it's been discharging over time and falls just below the threshold to be read as a "1".

          As far as errors not being introduced when the memory is "idle," you're thinking of static RAM. Static RAM doesn't need to be refreshed, and thus actually CAN be idle. So it holds a huge advantage here. Without the refresh cycle, there's no place for errors to be introduced except during the actual reads and writes by the processor.
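
The claim above, that more refresh cycles mean more chances for an error, can be put in a toy model. The per-refresh error probability below is purely an illustrative assumption, not a measured DRAM figure:

```python
# Toy model: if each refresh independently corrupts a given cell with
# probability p, the chance of at least one error after n refreshes is
# 1 - (1 - p)**n.  Both p and the refresh interval here are assumptions
# for illustration only.
p = 1e-9                         # assumed per-refresh error probability
refreshes_per_sec = 1.0 / 0.064  # assuming a ~64 ms refresh interval

def p_error(seconds):
    """Probability of at least one error after `seconds` of sitting 'idle'."""
    n = refreshes_per_sec * seconds
    return 1 - (1 - p) ** n

# The longer the RAM sits, the more refresh cycles, the greater the risk:
for t in (1, 60, 3600):
    print(f"after {t:>5} s: P(error) = {p_error(t):.2e}")
```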
    • Thank god there was someone reading with some sense.
    • Re:huh? (Score:2, Insightful)

      by Grizzlysmit ( 580824 )

      How long before nonvolatile memory becomes the solution to crash-prone software rather than better programming?

      Probably very long......
      There are very few volatile-memory-related software bugs.....
      HINT: You don't want your ram back in the same corrupt state it was in before the reboot.

      Yep, and wait till you see how many previously working programs start behaving as buggy as hell with nonvolatile memory. We're going to have to be far more careful about the init state of memory in an instant-on world.

  • Hopefully never. (Score:5, Interesting)

    by Tokerat ( 150341 ) on Friday June 25, 2004 @03:32AM (#9525944) Journal

    How long before nonvolatile memory becomes the solution to crash-prone software rather than better programming?
    Advanced hardware is no excuse for coders to get lazy. We have enough of that already, let's not make it worse by taking things for granted.

    That being said, imagine the power savings and lightning fast startup times! I'd love an "instant on" PC! ( or, erm...Mac :-D )
    • I don't know about you, but the startup times for PCs just don't annoy me.

      It's a daily ritual for me:
      Turn on Computer
      Walk to kitchen
      Make Coffee!
      Walk to desk
      Log on

      I never even see the machine boot up.
    • That being said, imagine the power savings and lightning fast startup times! I'd love an "instant on" PC! ( or, erm...Mac :-D )
      You can already do this, it's called sleep mode :P
      • imagine the power savings

        In SLEEP mode? Yes, there is some savings, but power's still going through the RAM. Maybe you want hibernation - suspend/sleep to disk. Just as stable as sleep on Windows (not very), a little harder to do on Linux, but takes a lot less power with a normal shutdown time and a VERY quick boot.
        • Hibernation = very quick boot? That depends on how much memory you have doesn't it?

          Speed of typical single 7200 rpm drive = 50MB/sec.

          Assuming the typical naive resume:
          RAM = 512MB, resume from disk = 10 seconds.
          RAM = 1GB, resume from disk = 20 seconds
          RAM = 2GB, resume from disk = 40 seconds.

          A non-naive resume would involve rapid compression of the contents of memory and streaming the results to disk + checksum (you'd still have to allocate space for worst case scenario), but I haven't seen anyone do this y
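
The resume-time arithmetic above can be sketched directly; a naive estimate assuming a single 50 MB/sec drive and an uncompressed memory image, as in the parent:

```python
# Naive hibernate-resume estimate from the parent: time = RAM size / disk speed.
# 50 MB/s is the single-7200rpm figure assumed above; real drives, filesystem
# overhead, and compressed images will change the numbers.
DISK_MB_PER_SEC = 50

def resume_seconds(ram_mb):
    return ram_mb / DISK_MB_PER_SEC

for ram_mb in (512, 1024, 2048):
    print(f"{ram_mb} MB RAM -> ~{resume_seconds(ram_mb):.0f} s resume from disk")
```

(2 GB comes out at ~41 seconds; the parent rounds to 40.)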
    • EROS (Score:5, Interesting)

      by nacturation ( 646836 ) <nacturation&gmail,com> on Friday June 25, 2004 @05:23AM (#9526186) Journal
      The EROS [eros-os.org] project (Extremely Reliable Operating System) is an attempt to achieve this -- continual persistence with fine grained capability-based security. Essentially, *everything* is serializable to disk and is done so periodically (eg: every 30 seconds). This has the benefit that you can have the power go out unexpectedly, reboot the system, and only lose half a minute worth of work as all your apps will be restored to their last state. An amusing anecdote about the predecessor to EROS, KeyKOS, from this page [eros-os.org]... true story:
      • At the 1990 uniforum vendor exhibition, key logic, inc. found that their booth was next to the novell booth. Novell, it seems, had been bragging in their advertisements about their recovery speed. Being basically neighborly folks, the key logic team suggested the following friendly challenge to the novell exhibitionists: let's both pull the plugs, and see who is up and running first.


      • Now one thing Novell is not is stupid. They refused.

        Somehow, the story of the challenge got around the exhibition floor, and a crowd assembled. Perhaps it was gremlins. Never eager to pass up an opportunity, the keykos staff happily spent the next hour kicking their plug out of the wall. Each time, the system would come back within 30 seconds (15 of which were spent in the bios prom, which was embarrassing, but not really key logic's fault). Each time key logic did this, more of the audience would give novell a dubious look.

        Eventually, the novell folks couldn't take it anymore, and gritting their teeth they carefully turned the power off on their machine, hoping that nothing would go wrong. As you might expect, the machine successfully stopped running. Very reliable.

        Having successfully stopped their machine, novell crossed their fingers and turned the machine back on. 40 minutes later, they were still checking their file systems. Not a single useful program had been started.

        Figuring they probably had made their point, and not wanting to cause undeserved embarrassment, the keykos folks stopped pulling the plug after five or six recoveries.
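
The EROS-style approach described above, serializing everything periodically, can be sketched in a few lines. The file path, the state dict, and the interval are illustrative; EROS itself checkpoints at the OS level, not with pickle:

```python
# Sketch of the EROS-style idea: serialize all application state periodically,
# so a power cut costs at most one checkpoint interval of work.
import os
import pickle
import tempfile

CHECKPOINT = os.path.join(tempfile.gettempdir(), "state.ckpt")

def checkpoint(state):
    # Write to a temporary file, then atomically rename it over the old
    # checkpoint, so a crash mid-write never destroys the last good copy.
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)

def recover():
    with open(CHECKPOINT, "rb") as f:
        return pickle.load(f)

state = {"open_docs": ["draft.txt"], "cursor": 1234}
checkpoint(state)            # EROS-style: do this every ~30 seconds
# ...someone kicks the plug out of the wall...
restored = recover()         # on reboot, pick up from the last checkpoint
```

The write-then-rename step is what makes the scheme safe against losing power in the middle of a checkpoint.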
  • by SirCrashALot ( 614498 ) <jasonNO@SPAMcompnski.com> on Friday June 25, 2004 @03:33AM (#9525945)
    If a program crashes, its memory is corrupted, so saving its state past a reboot wouldn't help. Assuming the system goes down, there's a chance that you might want to save some of the data, but if it's that far gone, it might not be trustworthy.

    Looks cool for applications such as hibernate.

  • by wankledot ( 712148 ) on Friday June 25, 2004 @03:33AM (#9525946)
    "...How long before nonvolatile memory becomes the solution to crash-prone software rather than better programming?"

    Someone explain to me how MRAM will help with stability if it is simply replacing the same type of functionality that good old fashioned RAM has.

    • by lightray ( 215185 ) <tobin@splorg.org> on Friday June 25, 2004 @03:39AM (#9525973) Homepage
      Exactly.. MRAM will actually encourage better programming, since users will have no reason to reboot other than because of crashes; rebooting will be even more of an onus.

      Nonetheless, I don't know whether that particular aspect of MRAM will make any difference. I can't remember the last time I had to reboot Linux due to a software crash! Virtual/protected memory systems are very good about isolating applications from each other already.

      The real benefits of MRAM are far more exciting. Unlike conventional DRAM, MRAM does not need to be refreshed (it's nonvolatile), yet it's fast enough and could be cheap enough to replace DRAM. The result is a huge POWER SAVINGS since you wouldn't have to use power to run the DRAM refresh cycles. Moreover, MRAM is simpler, so it could have higher integration densities, and thus would be cheaper.

      MRAM falls into the general domain of "spintronics" (which is the name given to technologies which exploit the spin of electrons in addition to their charge). One of the most exciting applications of spintronics is in reconfigurable computing. We could make "real" reconfigurable logic -- cheap nonvolatile FPGA's. Your processor could quite literally rewire itself on the fly, adapting to the task at hand. Very exciting.
      • "Your processor could quite literally rewire itself on the fly, adapting to the task at hand. Very exciting."


        But only if we pull the chip out of his head, and set the switch to the learning mode that skynet had turned off.
        Sorry had to do it.

        Mycroft
  • Wha? (Score:5, Insightful)

    by Anonymous Coward on Friday June 25, 2004 @03:33AM (#9525951)
    How is nonvolatile RAM supposed to prevent crashes? Crashes are the result of unexpected program interaction, hardware incompatibility, or poorly-anticipated user input.
  • Programmer Error (Score:5, Insightful)

    by LakeSolon ( 699033 ) on Friday June 25, 2004 @03:33AM (#9525952) Homepage
    "How long before nonvolatile memory becomes the solution to crash-prone software rather than better programming?"

    Never. Having the same bits in memory after a reboot doesn't help if you wrote the wrong bits in the first place.

    "On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."

    ~Lake
    • How about because suddenly your 'secondary' storage becomes fast enough to periodically save your running state to. Every few seconds should be enough. Maybe write transaction logs in between.

      A crash? system reboots and picks up at the last stable point. Finding that stable point could be a bit tricky though... maybe some memory was incorrectly overwritten 5 minutes ago and is only just now causing a crash.

      But if a crash causes a system reboot which takes only the blink of an eye and picks up almost exactly
  • Not a solution (Score:5, Insightful)

    by Alioth ( 221270 ) <no@spam> on Friday June 25, 2004 @03:33AM (#9525953) Journal
    Non-volatile main memory is unlikely to be a solution against crash-prone software. If the software crashed because there was a bug in how it handled the data in memory, if the data is still there and the application reads it again, it'll just crash in exactly the same place.

    In any case, an application crashing very seldom causes the machine to actually power down, and an application crashing and being restarted never gets to use the same memory the same way anyway, so the point is entirely moot. If your main memory is nonvolatile RAM, the advantage is you can design a system that can be powered down and suspended without having all that lengthy write of the entire machine's state to disk (and read when it comes back up again), which would be extremely useful on a laptop. If you can do this, you can have essentially uptime of years, so the incentive would be to write MORE stable operating systems and applications if the expectation is that even a laptop may go years between reboots.
  • What? (Score:4, Informative)

    by bobintetley ( 643462 ) on Friday June 25, 2004 @03:33AM (#9525954)

    How long before nonvolatile memory becomes the solution to crash-prone software rather than better programming?

    What? A crash-prone program is a crash-prone program, regardless of whether it vanishes or not when you turn the power off.

  • by noidentity ( 188756 ) on Friday June 25, 2004 @03:34AM (#9525958)
    How long before nonvolatile memory becomes the solution to crash-prone software rather than better programming?

    Good, now I'll be able to preserve memory corruption even after a power-cycle! Last time I checked, software crashes weren't due to the fact that DRAM loses its contents when powered down.
  • by bollow (a) NoLockIn ( 785367 ) on Friday June 25, 2004 @03:36AM (#9525963) Homepage
    How long before nonvolatile memory becomes the solution to crash-prone software rather than better programming?

    If your operating system has crashed, it has crashed. You need to reboot it. MRAM cannot change that. The point is that with MRAM you should be able to switch off your computer and switch it on again later without reboot and without need to save RAM contents to disk at power-down and to restore them from disk after the system is switched on again.

    Hence if anything, this technology will increase pressure on operating system vendors to produce OSes which don't crash badly enough to require a reboot.

  • by robvangelder ( 472838 ) on Friday June 25, 2004 @03:37AM (#9525967)
    One step closer to replacing HDD, CDROM, DVD and all those other "moving parts" storage devices.

    In 20 years, we'll all be looking back at DVD and CDROM like we do at Tape Cassette.

    Moving parts and things that go whirr make me cringe.

    I just want to plug it in and get instant access.
    • by Anonymous Coward on Friday June 25, 2004 @04:06AM (#9526042)
      You can already have a solid state PC if you want. I've replaced a broken HD in an older laptop with Compactflash through a 2.5" CF-IDE adapter. I run Linux, with the root fs mounted ro and I only use small apps, so I don't need swap with 192MB Ram. My main workspace is in Ram on a tmpfs (the laptop battery acts as a UPS); when I'm done I save the essentials on a USB pen drive. For backing up bigger archives, I plug in a 2.5" USB HD. That's my only "moving parts" storage device, but I don't use it day-to-day.

      If you can live without bloatware like Windows, OS X or KDE/Gnome, it's easy enough to go solid state today. My CF card is bigger than the HD that I used to run Linux 2.0 with X and Fvwm 1.2 ...
    • Sounds nice in theory, but prices on this will have to be WAY cheaper than RAM for it to work.

      If you have a 5 GB DVD movie, to replace that with regular RAM would cost you about $1000 or so. That vs a 50 cent pressed DVD. MRAM looks to be about the same price as RAM once the bugs get worked out.

      Sure, in 15 years you'll be able to buy 5GB of MRAM for 50 cents, but then our movies will be in 1 terapixel 3D, and will be stored on holographic cubes that hold 50 petabytes for 50 cents. And it will still cos
    • But tape is still the most economical way to store large amounts of data - and often the most efficient for long term archive processes.

  • by torpor ( 458 ) <ibisumNO@SPAMgmail.com> on Friday June 25, 2004 @03:39AM (#9525971) Homepage Journal

    "How long before nonvolatile memory becomes the solution to crash-prone software rather than better programming?"

    Hello. What do you think -hard disks- are?

    I'll give you 5 seconds to come up with a list of operating system 'features' that have been 'standardized' which really resulted from this 'ideology' about how to not write 'safe' code and just let other parts of the system 'deal with it' ...

    Give up? Okay, I'll give you a few:

    1. Swap. Yup, if the program has no idea how much RAM it has or needs, and no idea how to manage it, and the programmer just wants it all ... there's swap. Otherwise known as 'virtual memory', or, as they used to say in the good ol' days "fail-safe swap-over".

    2. "Protected Memory". Yup. Same deal. Let the OS deal with 'bad programming'.

    Non-volatile memory has nothing to do with 'protecting from bad programming' and everything to do with writing 'true' persistent state machines... just like these two 'features'.

    In summary: If it wasn't for 'bad programming', operating systems wouldn't have anything to do ...

    Flame on.
    • Perhaps I'm speaking in the wrong forum, but in coding I often run across events in which I have no idea how much data I'll be reading until I'm done.

      Shit, this is problem shared by kids' lemonade stands and database developers worldwide.

      The world is construed with constantly changing amounts of units. Ask just about any software engineer how many users use his system and he'll say "Ummm, I don't know."
      Is he "stupid" because he doesn't know? No, it's that no real working model can predict the future of s
  • Crash safety (Score:3, Interesting)

    by ooze ( 307871 ) on Friday June 25, 2004 @03:42AM (#9525981)
    Is not the main point of non-volatile memory. The two main advantages are significantly less power consumption (only put energy into it when you want to change the state, not on every single cycle) and having permanent storage at the speed of System Memory (may I see the time coming, when there will be no separate permanent storage devices, like HDs and all this periphery, with all the bus technology and other error-prone parts?...It's a long shot, but this is an important first step)
  • ...nonvolatile memory becomes the solution to crash-prone software rather than better programming...

    I don't see how non-volatile memory will cure crash-prone software. One of the main contributory factors to buggy code is memory leaks. Non-volatile memory would allow the invalid state of memory to be preserved between 'reboots', making defects in code more obvious. If non-volatile memory is adopted, then we'll need higher quality code than we have now.

  • One step closer... (Score:4, Insightful)

    by Chrysophrase ( 621331 ) on Friday June 25, 2004 @03:48AM (#9525994) Homepage

    ... to the "immediately on" computer. Boot times reduced to next to nothing will prove to be a giant leap in the usability of computers, I think.

    • ... to the "immediately on" computer. Boot times reduced to next to nothing will prove to be a giant leap in the usability of computers, I think.
      We already have that to a certain extent....I close the lid on my iBook, and without even having it plugged in I can pop open the clamshell the next day, hit a key and after a few seconds I am back in business
      Vastly superior to this windows hibernate crap...and you can even use the feature in linux, you just gotta run it on mac hardware.
  • by mcrbids ( 148650 ) on Friday June 25, 2004 @03:49AM (#9525997) Journal
    Wouldn't non-volatile RAM actually make programmers more attentive?

    One of the most common programming errors is a memory-leak. Can you imagine what would happen if you couldn't reboot the Windows machine to clear the memory for another few days?

    Non-volatile RAM may be the best excuse yet to switch to something more, ah... tightly coded!

    That said, I think that the current memory/disk model of computing is antiquated. Why distinguish memory from disk? Why not treat it all the same?

    A HDD is the base storage medium. RAM is a cache of that. L2 cache is a cache of RAM. L1 cache caches L2 cache.

    Why the distinction from HDD to memory? Instead of allocating RAM directly, why not follow the *nix philosophy of "everything is a file" and if you want a storage space for some temp values, open a file and write them in.

    The memory allocated for a particular process would then appear as a file (perhaps buried somewhere in /proc ?) like any other file. Then, determining which program was leaking ram could be done with a simple `ls -la`.

    Instead of flushing to special swap partitions, the memory files would simply be committed to disk when you run out of RAM. (moved down the cache chain from RAM to disk)

    Switching to a fundamentally different type of memory may be the right time to reconsider system architectures and challenge our conventional assumptions of computing, especially since memory leaks can be so severe, even in commercial software!
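
The "memory is just another file" model proposed above already has a rough analogue in memory-mapped files; a minimal sketch using mmap, with an illustrative filename:

```python
# Sketch of the parent's "allocate memory by opening a file" model using mmap,
# the closest existing mechanism to unifying RAM and disk.
import mmap
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "scratch.mem")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)        # reserve one page of file-backed "memory"

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), 4096)
    mem[0:5] = b"hello"            # ordinary in-memory byte writes...
    mem.flush()                    # ...pushed down the "cache chain" to disk
    mem.close()

# As the parent suggests, the process's "memory" is now visible as a file:
with open(path, "rb") as f:
    data = f.read(5)
```

The OS pages the mapping in and out on demand, which is essentially the RAM-as-cache-of-disk hierarchy the parent describes.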
    • One of the most common programming errors is a memory-leak. Can you imagine what would happen if you couldn't reboot the Windows machine to clear the memory for another few days?

      Why everyone automatically assumed that memory can't be cleared upon reboot?! WTF???!! What you were smoking today? It's fucking RAM guys! BIOS could clean it for you during reboot. Or operating system could do it before loading itself.

      • Why everyone automatically assumed that memory can't be cleared upon reboot?! WTF???!! What you were smoking today? It's fucking RAM guys! BIOS could clean it for you during reboot. Or operating system could do it before loading itself.

        Sooo... where's the advantage of NVRAM?

        So, we spend years and millions of dollars developing something that we then disable anytime it behaves differently than something widely available?

        With a minimum of profanity, PLEASE EXPLAIN TO ME WHY YOU'D WANT NON-VOLATILE RAM if it's going to be erased on boot anyway?
        • With a minimum of profanity, PLEASE EXPLAIN TO ME WHY YOU'D WANT NON-VOLATILE RAM if it's going to be erased on boot anyway?

          I'd want non-volatile ram for instant on-off like my Palm does without fear of losing memory when battery goes dead. Instant power-on/power-off != reboot.

          • Precisely! And most embedded devices these days do it with battery-backed SRAM. This means that even when "off", they still sip power (albeit at a very slow rate). MRAM would have huge advantages in this market.

            Why does everyone assume every bit of semiconductor/electronics technology has to be for "their PCs"? Hello everyone! Those PC processors are actually in the MINORITY of parts shipped in the overall microprocessor market!
        • PLEASE EXPLAIN TO ME WHY YOU'D WANT NON-VOLATILE RAM if it's going to be erased on boot anyway?

          You wouldn't. But that's not really the question, is it? The question (or rather, the answer) is that you would have a choice of powering down and emptying memory, or just powering down.

          I've decided to omit the "minimum profanity" you've requested, as there's no mention of the desired quantity or measurement listed in your post.

  • Fast and Low Power (Score:2, Insightful)

    by Gigantic1 ( 630697 )
    Hmm...fast and low power - I like it. I don't exactly know how it might be a substitute for my PC's RAM, but I can certainly imagine it being a great way to replace Flash and SRAM.
  • by Thornkin ( 93548 ) on Friday June 25, 2004 @03:52AM (#9526009) Homepage
    Programs don't crash because the memory is cleared during reboots. They crash because they refer to memory that never existed in the first place.
    Perhaps nonvolatile memory will improve startup times (think super-fast hibernate) but crashes? Not a chance.
  • by BortQ ( 468164 ) on Friday June 25, 2004 @03:54AM (#9526015) Homepage Journal
    The real future is in ENRAM. Give it all your money and then it crashes !
  • by ArsenneLupin ( 766289 ) on Friday June 25, 2004 @04:01AM (#9526030)
    How long before nonvolatile memory becomes the solution to crash-prone software rather than better programming?

    At least now, when your Windows crashes, you can reboot your machine, and in extreme cases powercycle it.

    However, with such non-volatile RAM, this is a thing of the past: even leaving the machine unpowered for an hour won't erase the crashed program state...

    • Exactly. The kernel's crashed state will be preserved, so you won't be able to reboot cleanly. Some kind of checkpointing (like in database servers) would be useful here: just reboot to the last valid checkpoint. Of course, this requires a lot more MRAM though...

      • No need for something so complex. All one has to do to recover from such a state is to extend (or emulate one of volatile RAM's 'features', if you wish) the 'reset'-function:

        The moment you push the 'reset' button, not only does the system reboot, but the memory is also wiped, after which a non-corrupted copy is loaded from the 'HDD' (or whatever is used for storage).

        So in other words, the 'power'-button would be used to power the system down, while the entire state would be preserved (like the hibernate
    • However, with such non-volatile RAM, this is a thing of the past: even leaving the machine unpowered for an hour won't erase the crashed program state...

      Two solutions really, I'm no CS major, but I think they ought to work.

      Solution 1: A button on the motherboard (or jumper, or on the front of the case) that clears the memory. I'm not sure what exactly it would want to write. 1's? 0's? 1/2s?

      Solution 2: Bootmenu. Even old versions of Windows know when you didn't finish your bootup sequence and give yo

  • by ColourlessGreenIdeas ( 711076 ) on Friday June 25, 2004 @04:33AM (#9526092)
    Yay! We're going to get instant-on computers, just like home computers in the '80s were. How are we going to achieve it? Some form of jumped-up magnetic core memory!
  • by klagermkii ( 791101 ) on Friday June 25, 2004 @04:38AM (#9526102)
    If the big advantage of non-volatile RAM is the reduction in how many times you have to wait for your PC to perform a full startup and shutdown the last thing you want to have is your software being so crap that you have to reboot it all the time anyway.
  • by salec ( 791463 ) on Friday June 25, 2004 @05:13AM (#9526167)
    The magnetoresistive cell can change the way ANY sequential logic circuit operates. It can make much denser CPUs, ASICs and FPGAs, because now you can make the clock input be THE power supply line.

    It can also make your timepiece battery last ... well, longer.

    You just need to look at it in a different light than Yet Another Non-PowerCycle-Erasable Storage.
  • by doshell ( 757915 ) on Friday June 25, 2004 @05:59AM (#9526262)

    Perhaps I'm being too paranoid, but I see some potential for abuse here. Imagine a program that deals with passwords or credit card numbers... They could be still lying around in your non-volatile memory after the machine is switched off.

    An intelligent program should then zero out those passwords before freeing memory. Even so, would this kind of storage suffer from the security issue already discussed here [slashdot.org] and here [slashdot.org] (ability to retrieve data from many previous writes)?
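The "zero out before freeing" practice the parent suggests can be sketched like this (a simplified illustration, not a hardened library). It only works for mutable buffers such as `bytearray`; immutable `str`/`bytes` objects cannot be scrubbed in place, and in C one would reach for something like `explicit_bzero` so the compiler cannot optimize the writes away.

```python
def wipe_secret(buf: bytearray) -> None:
    """Overwrite a secret in place so it does not linger in
    (non-volatile) memory after the program is done with it."""
    for i in range(len(buf)):
        buf[i] = 0

# Keep secrets in a mutable buffer so they can be scrubbed:
password = bytearray(b"hunter2")
# ... use the password ...
wipe_secret(password)
```

Note the caveat: any temporary copies the runtime made along the way are not covered by this, which is exactly why the write-history question raised above matters.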

  • by erroneus ( 253617 ) on Friday June 25, 2004 @06:14AM (#9526294) Homepage
    While it's possible, a RAM fault is a hardware failure and can rarely be connected with software.

    On the other hand, our handy ability to shut down and clear out bad programming is a luxury that might become more difficult with the new RAM technology.

    This could mean that viruses and other malware could remain even more resistant to removal than before!
  • How long before nonvolatile memory becomes the solution to crash-prone software rather than better programming?

    I don't understand how non-volatile memory would solve the problems of crash-prone software. Okay, so it might make recovering lost work after a crash that little bit easier. I fail to see how it's going to solve the problem of crashing, though.
  • by Julian Morrison ( 5575 ) on Friday June 25, 2004 @06:30AM (#9526331)
    ...but they won't be the same as uses for RAM or for hard disk.

    Using it for RAM would be silly - RAM is supposed to be transient, keeping it around would be a security and stability loss.

    Using it for hard disk would be silly - the price per megabyte would be ridiculous unless you're doing stock-market data crunching or some such.

    Some uses I can immediately see for it:

    - boot the OS, and save a snapshot for an instant reboot

    - use it to store persistent caches of binaries, libraries, etc

    - use it for filesystem and database journals

    - do RAID4 and use it to hold the parity volume
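The journal use case in the list above can be sketched as a minimal write-ahead log (the file format is illustrative: one JSON record per line, with the trailing newline serving as the commit marker). Fast non-volatile memory is attractive for exactly this kind of small, frequent, must-survive-a-crash write.

```python
import json

def journal_append(path, entry):
    """Append one committed record; the trailing newline is the commit marker."""
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
        f.flush()

def journal_replay(path):
    """Recovery: apply every complete record, ignoring a torn final line."""
    entries = []
    try:
        with open(path) as f:
            for line in f:
                if line.endswith("\n"):      # a record without its newline was torn mid-write
                    entries.append(json.loads(line))
    except FileNotFoundError:
        pass
    return entries
```

A record interrupted mid-write has no terminating newline, so replay skips it: the journal is always consistent up to the last committed entry.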
    • How would it be a security loss? Any program concerned about security today is already aware that any of its memory may be swapped out at any time... and that swap files can, on many architectures, survive between boots. The only safe way to ensure that stuff you write in memory is not persisted, today, is to clear it by hand. How is this different with MRAM?

      How would it be a stability loss? I just don't see it... all this talk about 'but when you reboot, you'll be in the same state.' No, when you reboot,
  • Comment removed (Score:3, Informative)

    by account_deleted ( 4530225 ) on Friday June 25, 2004 @06:34AM (#9526344)
    Comment removed based on user account deletion
    • Back in the day of magnetic core, you could boot and choose whether you wanted to execute a little routine that would shuffle zeros from one location to the next to clear out the machine, or continue running with what you had (any other Slashdotters out there ever work on the IBM 1620 or 1720, or the mag-core models in the 360/370 line?). I think it's funny that this may once again be an option.
  • Is for a PC equipped with AAMRAM. Nothing like having an air-to-air missile doubling as RAM on my motherboard. That would certainly make software developers think twice about releasing buggy code.
