Hardware

No More Rebooting? (320 comments)

blankmange writes: "This headline caught my eye: 'The End of Computer Rebooting.' Seems that there have been some new developments in memory technology: The new thin-film technology that could give rebooting the boot is based on resistor logic rather than the traditional transistor logic used in most PCs and other memory-enabled devices. It also is considerably faster than current memory systems and holds the promise of reducing the time required to transfer and download multimedia content and other massive files. This is great news, but what am I going to do with the extra hour or so a day?"
This discussion has been archived. No new comments can be posted.

  • I've been waiting for this technology for over five years...when??? When??

    When it's done, of course! (Please don't sue me Id Software)
  • by Paul E. Loeb ( 547337 ) on Monday April 15, 2002 @08:51AM (#3342527) Homepage Journal
    So I guess this puts a big damper on Microsoft Tech Support. "I don't know what to do, please restart your computer."
    • Oh my god, they'll actually have to train their phone techs with a different answer. Like "I actually have no clue what is wrong with your computer, only being a half-trained half-wit."

      Kierthos
      • Re:Tech Support (Score:3, Insightful)

        by tomstdenis ( 446163 )
        To be fair half of the time the problem is *not* something someone sitting 1000 miles away can help with.

        Say you have lingering threads with open ports or something. How are you supposed to figure that out over the phone [and recall you have to tell some 65 yr old lady trying to write to her grandson how to do this]?

        Most of the time people run stupid third-rate programs like Go!Zilla or Gator or dare I say anything based on linux! They screw up the system and there is not much you can do.

        If on the other hand you said "My modem online light is off" and they retort "reboot your PC" you can be assured they are fairly clueless.

        Tom
        • Most of the time people run stupid third-rate programs like Go!Zilla or Gator or dare I say anything based on linux!

          Let's see... The thread was about microsoft tech support drones. Running linux? They don't support that.

          Or maybe you're saying that running something on a linux box would automagically affect the Windows box? How's that supposed to work?

          Or maybe you're just a dumbshit that doesn't know what the fuck he's trying to say? Yeah, that covers it.
  • Great. (Score:3, Funny)

    by HEbGb ( 6544 ) on Monday April 15, 2002 @08:52AM (#3342532)
    So if I can't reboot, how am I supposed to recover from Windows crashes?
    • Re:Great. (Score:3, Funny)

      by unformed ( 225214 )
      No, what will happen is you'll turn your computer off, but when you turn it back on, it'll still be crashing.

      Oh you said recover.

      Well, the one and only true solution: reformat.
    • by RatOmeter ( 468015 ) on Monday April 15, 2002 @09:43AM (#3342774)
      I see the reboot issue as minor, compared to the other potential advantages of this technology. I will expect to be rebooting, for one reason or another, for years to come and am not too bothered.

      The article glosses over what I consider the important advantages:

      - [assumedly] great power savings. Great for portables and remote embedded systems.
      - No moving parts! If this tech can really replace, and even surpass in speed, hard disk drives, then reliability and performance should gain at least an order of magnitude.

      I've been waiting for years for computers to become electronic-only devices. I've harped before that CRTs (vacuum tubes, for God's sake!) and HDDs need to join the Dodo in oblivion. This new tech, in the common mass storage area (HDDs, CDs, floppies), along with flat panel technology, would put us right on the verge of that ideal. The last hurdle would be cooling without moving parts.

      • If this tech can really replace and even surpass in speed, Hard Disk Drives, reliability and performance should make a gain of at least an order of magnitude.

        Replacing HDDs in terms of speed/performance/reliability is easy, and there are any number of currently available technologies that fit the bill. The reason HDDs still exist is because nothing else even comes close in terms of price/capacity.

    • Not only that, but how do you reboot if anything gets corrupted? What if you're programming and you inadvertently overwrite something in memory? They will need to come up with a way to reset the OS and memory itself.
  • by swordboy ( 472941 ) on Monday April 15, 2002 @08:53AM (#3342535) Journal
    My Win2k boxen are stable enough to be up for months without a reboot. What I need is a box that I can leave on 24-7 and not have to worry about energy consumption. These things are expensive to leave awake all day. Seriously. Do the math.
    • I did, and it's about 30 dollars a year, assuming the worst case, that your computer is consuming 300 full watts all the time, which it isn't.
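The arithmetic here is worth checking. A quick sketch (the $0.08/kWh rate is an assumed figure, not from the thread): at a full 300 W the yearly cost is an order of magnitude above $30; the $30 figure only works out for a much lower average draw.

```python
# Rough yearly electricity cost for an always-on PC.
# The $0.08/kWh rate is an assumed illustrative figure.
RATE_PER_KWH = 0.08

def yearly_cost(watts, rate=RATE_PER_KWH):
    """Dollars to draw `watts` continuously for one year."""
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year * rate

print(round(yearly_cost(300)))  # worst case 300 W: ~210 dollars/year
print(round(yearly_cost(40)))   # a ~40 W average draw: ~28 dollars/year
```

So "about 30 dollars a year" matches a realistic average draw, not the 300 W worst case.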
    • I bet the energy consumption of this is much lower than a hard disk, which means it will be less expensive

      Besides that, yes, having computers turned on 24/7 matters (my box is - I want it to be available *now*, not 5 minutes after I need to look up some info!) - just ask people with computer farms. I'm personally involved in the Ars Technica Distributed Computing Community [infopop.net] and there are a lot of people with pretty large home farms. There are a lot of things to consider if you want to build a farm - the critical part of it is how you get the best performance for the smallest amount of money. This includes the energy bill - removing as much hardware as possible.

  • But..... (Score:2, Interesting)

    by Deag ( 250823 )
    How else are we supposed to fix things when everything just stops working...

    Rebooting is always a great way to fix things.... they even used it in one of the Star Treks once.

    And how the hell is windows supposed to work?
    • Re:But..... (Score:5, Interesting)

      by gazbo ( 517111 ) on Monday April 15, 2002 @09:16AM (#3342650)
      I also read the /. writeup as being some miracle cure for OS crashes or the like. In fact, it's just non-volatile memory.

      So, when you turn off/on your PC, you don't need to reboot, it can just put you right back where you were instantly. Unfortunately, in the context of a crash/instability, this would put you right back in an inoperable/unstable environment.

      Bad writeup.

  • But we can pretty much do this at the moment by using the various suspend and hibernate options. Ok, so it's a different technology but the effect is the same. But nobody not using a laptop ever does.
    • Re:But... (Score:2, Informative)

      by gokulpod ( 558749 )
      True, but suspend still drains power from a battery, and hibernate uses hard disk space. Besides, even resuming from hibernate, you need time to read the data back from the hard disk.
      This solution will mean no power consumption and no data loss. Plus, heat inside the case will greatly decrease, computers can get smaller, and big bulky hard disks vanish.
    • But we can pretty much do this at the moment by using the various suspend and hibernate options. Ok, so it's a different technology but the effect is the same. But nobody not using a laptop ever does.


      That would be because 8/10 times, your computer doesn't come back when you try to wake it up.
  • the downtime (Score:5, Interesting)

    by perdida ( 251676 ) <.moc.oohay. .ta. .tcejorptaerhteht.> on Monday April 15, 2002 @08:54AM (#3342543) Homepage Journal
    Like your computer, you need downtime (sleep, walking the dog, eating, etc).

    If you are an avid computer user, you may only get your downtime when your computer is rebooting. This is especially true in workplaces where people are "chained" to their computers trying to finish a project, etc.

    Those ergonomics posters on the wall do very little to get an average 'puter user to take care of themselves.. reboots served some of this purpose.

    (Maybe that is why windows crashes so much - it's Bill Gates' gift to the employee!)

    In any case, perhaps all offices should institute a staggered mandatory 15 minute inactivity period every couple of hours for each active computer.

    • You are right -- I guess I have never seen it in that light before. Having to reboot is Bill Gates' way of saying "You have spent too much time on the PC, reboot and back away... and don't sue me for your RSI's"... It's a feature, not a bug.... kewl!
    • by sweet reason ( 16681 ) <mbloore AT yahoo DOT com> on Monday April 15, 2002 @09:04AM (#3342592) Homepage
      perhaps all offices should institute a staggered mandatory 15 minute inactivity period every couple of hours for each active computer.

      the sysadmin of a server farm would never move again!
    • Some States have that law, for hourly employees at least. In Utah it was 10 minutes per 4 hours.
    • In any case, perhaps all offices should institute a staggered mandatory 15 minute inactivity period every couple of hours for each active computer.

      This is actually law in some countries (for example Holland). In practice, especially for IT workers, it's not followed - mostly it's a question of work culture and ignorance. Still, around here nobody (much less your manager) will comment if you take a pause once in a while.
      • Here in Poland it's 15 for 2 hours of computer work.

        Of course, that's if you're not a contract employee, like 80% of the people working in the CS industry are.
    • The people I support here love it when I come around to do routine maintenance on their workstations. They get a free 15 minute break or so and wish I could stay around longer.. ah. Now if I could somehow rig my computer to fail once a day so I can spend 2 hours 'working' on it....
  • Easy -- get a second job, 'cause it's probably going to cost all the extra cash you have to get stuff built with this, just because of the patent licensing rights...
  • eh? (Score:3, Insightful)

    by giminy ( 94188 ) on Monday April 15, 2002 @08:55AM (#3342549) Homepage Journal
    This seems like a bad bad title. This stuff is persistent RAM, so it won't help you if you need to reboot after recompiling a kernel. Also, this article doesn't mention getting rid of POST...most computers stuff some data into memory and check it to make sure your ram is still working. Would be kind of hard to check your memory if it has stuff in it that you don't want to lose.
  • by qurob ( 543434 ) on Monday April 15, 2002 @08:56AM (#3342551) Homepage

    It just talks about memory that doesn't lose state when you hit the power button on your PC.

    We've got to invent perfect software that can run forever without needing to be restarted, first.
  • by PineGreen ( 446635 ) on Monday April 15, 2002 @08:56AM (#3342553) Homepage
    Well, rebooting is MUCH more than just recovering the memory content!!

    We could easily dump the memory contents onto the hard drive straight away, and we are not doing it (except in laptops, but even there it doesn't always work). This is because rebooting reinitializes various devices and takes care of the time jump (i.e. crons, anacrons, etc). The more complicated your system is, the less likely it is that you are going to survive without booting.

    Also, computers are now 1000 times faster than 10 years ago and they take much more time to boot (DOS did it in seconds on 286).

    • On your DOS 286, did you ever try to load any TSRs or drivers for, say, sound or a network card (ok, you got me, it was a 386)? Hell, have you ever booted a DOS 6.2 recovery disk? It takes forever to load mouse and CD drivers. It takes just as long for me to load WinXP on my laptop as any other OS. If you are about to say "strip it down": yeah, DOS 6.2 loads great with no drivers or anything, but then I can't do much. Same thing with XP. Now, just because this is Slashdot I need to mention Linux: a stripped down version of Linux will load very fast, unlike Red Hat (also had to mention Red Hat), which loads very slowly unless you tweak it. But that stripped down version of Linux will run very well for what you tweaked it for, and not for other general stuff.... That's just my opinion, I could be wrong
    • by Rogerborg ( 306625 ) on Monday April 15, 2002 @09:56AM (#3342815) Homepage
      • We could easily dump the memory contents onto the hard drive straight away and we are not doing it (except in laptops, but even there it doesn't always work)

      Uh, have you seen WinXP's hibernate feature? On my 256Mb Athlon desktop, it writes the RAM to disk and shuts down in under five seconds, and comes back up (from wakeup keypress, through POST, then writes disk to ram) and is fully usable in twelve seconds. I've hibernated it with dozens of running processes and services, and not yet seen any problems on restore. I even took it down and brought it back up during a game of Deus Ex, and just kept right on playing where I'd left off.

      Given a reasonably reliable OS, you should only be wiping the RAM when the system changes significantly, e.g. switching kernels or hardware. XP's hibernate feature demonstrates that merely turning the power off shouldn't require you to shut anything down. Unfortunately, I've yet to see anything that works as well on my linux boxen, including my laptop. Suggestions gratefully received!

      • by Anonymous Coward
        I have to back that up. Granted, I had problems with it on Win2k, but nowadays it works like a charm (very mysterious, since I've only installed new games and silly stuff like ADSL).

        Linux is way, way behind Win2K and WinXP. Too bad some people's egos are too stuck up where the sun don't shine to admit it. I have much more respect for people who run Linux because it's Free Software (as in GPL/open source). Generally, Linux is playing catch-up when it could have been superior. It's just a matter of standardizing the platform (APIs, toolkits, wheels) a bit more. But how do you do that when everybody is working on their own thing? I think Linux is great when you want to learn CS, but it's not for the common man yet.

        It's a bit unfair to bash "linux" though. "Linux" is doing okay, it's a very nice performing kernel.
      • When's the last time you ran windows 98 or earlier? Remember how long it took to boot? My windows 98 box boots in less than 10 seconds (Sony vaio z505js) and linux on the same box boots in less than 5 seconds after POST (I'm not talking about a typical redhat install that starts every service known to man; it's quite minimal). Anyway, my point is this: 12 seconds for a "Fast restore" blows. It should be *booting* in less time than that, you *can* be booting in less time than that, and you don't have to deal with the annoyance that is windows XP to do it.

        On your linux box you should be able to completely restore your session from boot in less time than windows XP takes to restore from "hibernation". You just need to disable all the stuff that init is running that you don't use, and configure your window manager and applications correctly. There is no reason you shouldn't be able to log out and back in and have your session exactly as you've left it. Unfortunately, if you want hibernation you need to get a PC with BIOS support for it, or you need to get a mac, on which linux will wake from sleep in less than 1 second.
          • My windows 98 box boots in less than 10 seconds (Sony vaio z505js) and linux on the same box boots in less than 5 seconds after POST [...] 12 seconds for a "Fast restore" blows.

          Sigh. 12 seconds, including POST. And that's not "until you see the desktop". It restores to exactly the same state it was in when you shut it down, and is immediately available. That's a big difference; it's a genuine pause button.

          Incidentally, what's your problem with WinXP? You're advocating booting a stripped down linux as an alternative (did you not read my point about "dozens of running processes and services"?) so I assume you'd accept that you can strip XP down until it's effectively Win2K. You seem to be more anti-XP than pro-any other solution. I like linux, but that doesn't mean that I have to hate XP, or to pretend that XP hibernate is a wonderful feature that I'd really like to see on my linux boxen - as I described it. Now, let's go again. Do you know of any linux solution that does effectively what XP hibernate does?

      • Win2000 also has hibernate. It's the primary reason I upgraded from 98 to 2k! Hibernate is the best feature since ... ever! :)

        Seriously though: the ability to turn off my computer at night, and come back in the morning and still have all my windows come up, all my files still open - even winamp will immediately continue playing when the computer boots up. The only drawback is that all your hardware has to support it; if I plug in my TV capture card I can't hibernate anymore.

  • What the hell am I gonna do when the boss comes in and my 'puter is off and I tell him, oh, I'm just rebooting.

    Second, isn't resistor logic analog, and not binary (transistor)?

    • Transistors ain't analog? You go down to a low enough level and everything's analog. The binary part just means there are two states we can map '1' and '0' to: two voltages, two current levels, two different resistor values, two different amounts of charge, etc. These magnetite thingies can probably be put in (at least) two states with different resistances. To sense it, they run a little bit of current through it and measure the voltage on the output. Or something like that.
    • I'm pretty certain that my practice (guitar) amp is solid-state, but not digital. The amplified sound coming out of those transistors sure sounds analog to me, at least. :)
      • True, so is a one-transistor radio - analog as well.

        BUT the way transistor logic, logic being the keyword, is APPLIED to computing is what I spoke of and is certainly digital in nature in that application.

        I however missed the point one of the above posters made of using resistance values in a binary mode. I have built both binary and analog computers, and I have a fondness for analog computing; I never thought to apply only 2 DISTINCT values to a resistance representation.
    • True, you could pull different values (aka 1, 0, hell even 2, 3, 4, 5 if you want) from resistance values.

      I guess never having worked with resistance values as absolutes, it never crossed my mind. Good point.

      I think (I may be wrong) these magnetite cells are almost like "core memory" from days long gone, where a single bit was represented in memory by a single wound ferrite core, and I guess, like you said, you pull distinct values from different resistance values.

      This sounds, upon rereading the article, even more like core memory on a film.

      My dad saw, and showed me pictures (yes, I'm that old) of core memory being produced at IBM: thousands of ferrite cores being hand-tested and added to boards. Not bad in concept. But then again, bubble memory, for those old enough to remember, was a great concept too. There is still one bubble memory manufacturer out there that I am aware of, at upwards of 2 gigs; they're using it as a solid state drive instead of RAM (non-volatile, with something like a 500? G shock rating?!?!)

    • This [redbrick.dcu.ie] seems to be a pretty good intro to resistor logic.

    • Carver Mead at Caltech used to say that transistor logic is digital by design not by nature.

      What he meant is that transistors are inherently analog devices. We just run them at full saturation levels (almost) all the time, so that the output is flat.

      He went on to demonstrate that rather impressively by building the first neural networks in VLSI, by using CMOS transistors operating in the near-linear response range. (This was more than 10 years ago now, so I can't remember exactly when that was.)
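The sensing scheme described in this subthread - store a bit as one of two resistance values, read it back by forcing a small current and thresholding the measured voltage - can be modeled in a few lines. This is a toy sketch; the resistance values and sense current are made-up illustrative numbers, not figures from the article.

```python
# Toy model of reading resistance-coded memory cells: a bit is stored
# as one of two resistance values and read back by forcing a small
# sense current, then thresholding the voltage (Ohm's law: V = I * R).
# All numbers below are illustrative, not from the article.
R_LOW, R_HIGH = 1_000.0, 2_000.0              # ohms; encode 0 and 1
I_SENSE = 0.001                               # amps of sense current
V_THRESHOLD = I_SENSE * (R_LOW + R_HIGH) / 2  # midpoint voltage

def read_bit(resistance):
    """Force the sense current, measure V, compare to the midpoint."""
    v = I_SENSE * resistance
    return 1 if v > V_THRESHOLD else 0

cells = [R_HIGH, R_LOW, R_HIGH, R_HIGH]
print([read_bit(r) for r in cells])  # -> [1, 0, 1, 1]
```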
  • by brokeninside ( 34168 ) on Monday April 15, 2002 @09:01AM (#3342578)
    IBM and Infineon expect to deliver MRAM in 2004 [wirelessnewsfactor.com].

    The press release doesn't really go into detail, so I don't know how similar (or disparate) the respective IBM and Samsung solutions are. They do both have the same net effect for users: non-volatile main memory.

    This is cool stuff, but what hasn't been said is that as long as operating systems and applications leak memory, there will be a need for reboots.

    Ciao.

    • This is cool stuff, but what hasn't been said is that as long as operating systems and applications leak memory, there will be a need for reboots.

      You must be a Microsoft user.

      Some of us don't have that problem.


      [surak@tuxedo surak]$ uptime
      9:23am up 69 days, 15:33, 3 users, load average: 0.89, 0.87, 1.10
      [surak@tuxedo surak]$ uname
      Linux


      See? :-P Who needs reboots? :) And the 69 days is only because I had to put a new hard drive in it due to running out of space on the old one. :)

      • Ah, but I can beat that in just my instance of Opera running.

        [weave@homebox weave]$ uptime
        9:59am up 265 days, 17:11, 2 users, load average: 0.00, 0.00, 0.00
        [weave@homebox weave]$ ps -fp 9399
        UID PID PPID C STIME TTY TIME CMD
        weave 9399 9274 1 2001 ? 07:26:58 opera

        I fired up Opera when I booted the computer, been running ever since. The only reason I shut it down last year was to go on vacation for two weeks and it seemed prudent at the time...

      • As long as we are swinging around our uptimes. True, this box doesn't do a whole lot, but still... Can anybody top this? ;-)>

        ls-1010>sh vers
        Cisco Internetwork Operating System Software
        IOS (tm) LS1010 WA3-7 Software (LS1010-WP-M), Version 11.2(15)WA3(7), RELEASE SOFTWARE (fc1)
        Copyright (c) 1986-1998 by cisco Systems, Inc.
        Compiled Mon 14-Dec-98 16:54 by integ
        Image text-base: 0x600108D0, data-base: 0x60448000

        ROM: System Bootstrap, Version 201(1025), SOFTWARE
        ROM: PNNI Software (LS1010-WP-M), Version 11.2(5)WA3(2b), RELEASE SOFTWARE

        ls-1010 uptime is 3 years, 6 weeks, 5 days, 23 hours, 56 minutes
        System restarted by reload at 07:54:52 MNT Sat Feb 27 1999
        System image file is "slot0:ls1010-wp-mz.112-15.WA3.7", booted via slot0

        cisco LS1010 (R4600) processor with 32768K bytes of memory.
        R4600 processor, Implementation 32, Revision 2.0
        Last reset from power-on
        1 Ethernet/IEEE 802.3 interface(s)
        13 ATM network interface(s)
        125K bytes of non-volatile configuration memory.

        16384K bytes of Flash PCMCIA card at slot 0 (Sector size 128K).
        8192K bytes of Flash internal SIMM (Sector size 256K).
        Configuration register is 0x102

        ls-1010>
    • Interesting highlights:

      The trasentric paper quoted Electronic Buyer's News:

      "Honeywell Inc. and Motorola Inc. are hoping to spin volume quantities of MRAM through a Defense Advanced Research Projects Agency contract that is also shared by IBM. DRAM powerhouses Micron, NEC, and Samsung are said to be developing the technology, while Hewlett-Packard has a design team looking into the viability of chip-level magnetic storage."
      The interesting elements of this:
      1. Much of this research is funded by a DARPA contract which means it is the money of US Taxpayers at work.
      2. Samsung is part of the same contract.
      Methinks that perhaps Samsung and IBM are using the same (or very similar) technology.

      The Wired article is fairly lengthy and also details the biography of Stuart Parkin. Parkin is the IBM fellow that has been driving most of the MRAM research.

      Ciao.

      • Much of this research is funded by a DARPA contract which means it is the money of US Taxpayers at work.

        So are we really talking about desktop PCs, or something more like a missile fire control system on a warship, which needs to work straight after being forcibly power cycled, before the next bomb or antiship missile is launched at it?
    • The press release doesn't really go into detail, so I don't know how similar (or disparate) the respective IBM and Samsung solutions are. They do both have the same net effect for users: non-volatile main memory.

      Pretty much, they might have different edge cases (MRAM might be as sensitive to outside magnetic fields as hard disks...resistor RAM might leak current if not touched for a few years), and they almost definitely will cost different amounts, which may spell life or death for them (unless there are significant speed/density differences).

      This is cool stuff, but what hasn't been said is that as long as operating systems and applications leak memory, there will be a need for reboots.

      True for the OS, not so much for apps because you can restart them without a whole reboot. Some even sort of do that on their own (at one point Apache's child processes would exit after X requests to prevent resource leaks from building up).

      The reduced reboot time might be a big deal for laptops, but the nonvolatile nature of the new RAM types won't matter for desktops until the price is low enough to pose a threat to hard disks. It won't pose a threat to normal RAM until its price approaches that too... which makes it disappointing that none of these articles address the estimated price of these technologies and the projected price of SDRAM and hard drives in 3 years.
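The Apache behavior mentioned above - recycling a worker process after a fixed number of requests so that leaks cannot accumulate - corresponds to Apache's MaxRequestsPerChild directive. The general pattern can be sketched as follows (a toy model, not Apache's actual code; the limit of 3 is illustrative, real defaults are far higher):

```python
# Toy model of the "recycle the worker after N requests" pattern
# (Apache's MaxRequestsPerChild): whatever a handler leaks is
# reclaimed when the worker is thrown away and replaced.
MAX_REQUESTS_PER_WORKER = 3  # illustrative; real defaults are much higher

class Worker:
    def __init__(self):
        self.handled = 0
        self.leaked = []  # stands in for slowly accumulating leaks

    def handle(self, request):
        self.leaked.append(request)  # one "leak" per request
        self.handled += 1
        return f"ok: {request}"

def serve(requests):
    worker, restarts = Worker(), 0
    for req in requests:
        if worker.handled >= MAX_REQUESTS_PER_WORKER:
            worker, restarts = Worker(), restarts + 1  # recycle: leaks gone
        worker.handle(req)
    return restarts

print(serve(range(10)))  # -> 3 recycles for 10 requests with a limit of 3
```

The app-level restart is cheap precisely because it doesn't require a whole-machine reboot, which is the poster's point.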

  • holds the promise of reducing the time required to transfer and download multimedia content and other massive files

    Last time I checked, downloading speed depended on your connection, not how fast your RAM goes. I'm sure my memory can handle more than 1.5 Mb/s but that's as fast as I can download, because that is the limit of my DSL line.

    • I think they meant you won't have to reboot during a download, which as a linux user hasn't ever happened to me.

      I do remember in the old days of Windows it usually did happen....I even had to get programs that dealt with "download management"....
    • Don't you know that every speed improvement will speed up your internet access? Like upgrading your processor to a Pentium 4 will turn that modem into a T1!!! It's just buzzwords! The only speedup will be that the servers now have more memory bandwidth and can handle more connections at once.

      The whole article is mistitled. It won't be an end to rebooting, it will be an end to cold booting.

    If you want to eliminate the reboots from Windows, tell Bill Gates and co. to make it more modular and less inter-dependent, so you can insert and remove drivers just like *nix kernel modules.
  • Looks like NVRAM (Score:3, Insightful)

    by Ed Avis ( 5917 ) <ed@membled.com> on Monday April 15, 2002 @09:03AM (#3342587) Homepage
    What the article seems to be saying is that there could be a way of producing non-volatile memory which is so cheap, you'll be able to use NVRAM instead of ordinary RAM in your computers. But that depends on no further falls in RAM prices - I wouldn't bet on this technology taking over.

    However, a cheap, fast non-volatile memory which can be written and read unlimited times could be a very useful supplement to RAM. Think journalling filesystems for example - put your ext3 journal in a 100Mbyte NVRAM device and you'd hardly need to touch the hard disk for hours at a time, given moderate use. (Eg notebooks could spin down the drive.) This is possible already, but NVRAM devices are relatively expensive and most PCs don't have them.
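The idea in this comment - keep the filesystem journal in a small, fast non-volatile region and touch the slow disk only at checkpoints - can be sketched as a toy write-ahead log. This is an illustration of the access pattern, not ext3's actual journal format; all names and sizes are invented.

```python
# Toy write-ahead journal kept in a fast non-volatile region: writes
# become durable in the journal immediately and reach the slow "disk"
# only in batched checkpoints, so the disk can stay spun down between
# them. Not ext3's real format; capacity of 4 records is illustrative.
JOURNAL_CAPACITY = 4

class JournaledStore:
    def __init__(self):
        self.nvram_journal = []  # fast, survives power loss
        self.disk = {}           # slow; idle between checkpoints
        self.disk_passes = 0

    def write(self, key, value):
        self.nvram_journal.append((key, value))  # durable immediately
        if len(self.nvram_journal) >= JOURNAL_CAPACITY:
            self.checkpoint()

    def checkpoint(self):
        for key, value in self.nvram_journal:  # one batched disk pass
            self.disk[key] = value
        self.disk_passes += 1
        self.nvram_journal.clear()

store = JournaledStore()
for i in range(8):
    store.write(f"block{i}", i)
print(store.disk_passes)  # 8 writes, capacity 4 -> 2 batched disk passes
```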
  • People: Read the frelling article. This isn't 'an end to rebooting', it's highspeed nonvolatile memory that could theoretically be used to replace mass storage and RAM simultaneously. Although this would speed up booting a bit, it would not obviate rebooting entirely.

    In fact, on some OSen (cough, Windows, cough) it could be very dangerous - if there's only one copy of the OS code in this combination memory, you can't reboot and reload a fresh copy from disk - meaning bugs have a significantly greater probability of rendering your system unusable.

    Sounds like fun, right?

    --
    Damn the Emperor!
    • It seems to me that we're not understanding how this could be set up.. Why not have this as a device in your machine, that has an interface to the BIOS, where the user can set/format the unit to fool an OS into treating this nv memory-space as a fat32 or ext2 disk? You go into the BIOS, flush out the nvramdisk and 'reformat' and you are ready to re-install your OS should that become necessary. The rest of the time it runs as a really fast 'C:'... There's no need to replace normal RAM as your actual main memory during operation. Windows need not be aware of what is actually happening, you just boot/use faster, that's all.

    • Your first paragraph showed you at least read the article; the second paragraph is a non sequitur. It's not clear to me why you think Windows would somehow be negatively impacted by this and no other OS would. Look at the number of times changes have had to be made to the Linux kernel in order to get it to boot on new hardware such as the Pentium 4.

      Isn't it likely that if this technology came to pass, the people responsible for various OSen would test their OS in that environment, and make changes as appropriate to support it?
      • The reason for the original poster picking on Windows is that Windows, in most folk's experience, has to be rebooted frequently because of error accumulation. In other words, I could leave my Win98 box running (with no additional applications up) for 12 hours and when I came back it would be locked up. This isn't about shutting down during upgrades or installs, it's about shutting down because of frequent OS corruption during everyday use. In this case, you *need* memory to clear itself out.
    • Most likely, we will still partition disks; but instead of a swap file, you'd probably reserve a couple of gigs for "memory space" where programs make a copy from the "disk space" for running.
    • The article wasn't any better written than the summary. It seems like this is suitable as a replacement for flash memory, not for either disk (which is huge), or for RAM (which is really fast). Of course, having a flash-like technology be cost-effective would change things; you could keep a copy of system memory as it is when it has just been booted (but before it initializes devices) there. Then you "reboot" by copying the virtual memory table from the nvram to main memory, and the system is immediately ready to initialize devices and run.

      It would also be useful if programs could put some of their data in the nvram region, so (for instance), your emacs buffers don't go away when the power goes out. It would also be a good place to put write buffers, such that, as soon as the data is written to nvram, it will definitely make it into the filesystem, whether or not you lose power. This means that you can accumulate more dirty buffers safely and write them out in larger chunks, which is more efficient.

      Keeping everything in nvram (if that were fast enough) may or may not be a good idea. You'd still want to reboot on occasion to refresh the system (load a new kernel, e.g.), but there's no particular reason you'd want to reboot at exactly those times when you power down and back up. Of course, you'd need everything to be hotswappable (replace the processor with programs running?) and restartable (disks have to be told to spin up, e.g.).
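The "reboot by copying a pristine image out of nvram" scheme described in this comment can be sketched in a few lines. A toy model only: the image contents and names are invented, and a real system would still have to re-initialize devices after the copy, as the poster notes.

```python
import copy

# Toy "instant reboot": save a pristine just-booted memory image into a
# non-volatile region once; rebooting then becomes a copy, not a cold
# start. All names and the image contents are invented for illustration.
def boot_from_scratch():
    return {"kernel": "loaded", "drivers": [], "processes": []}

nvram_boot_image = boot_from_scratch()  # saved once, right after first boot

def fast_reboot():
    # Restore the saved image instead of re-running the boot sequence;
    # device re-initialization would still follow this step.
    return copy.deepcopy(nvram_boot_image)

memory = fast_reboot()
memory["processes"].append("editor")  # a session dirties the live image...
memory = fast_reboot()                # ...but a "reboot" starts clean
print(memory["processes"])  # -> []
```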
  • Conspicuous by its absence is the cost. IMO by the time this stuff comes around, customers will demand a PC that never crashes and is constantly on, removing the need for rebooting.

    Is a rare voluntary reboot really worth the unmentioned price?

  • What about when W2K or NT or 95/98, etc. decides to not quite completely clear out a particular area of memory? Will this plan still flush it out for me??? I hope so...if so...this COULD do a lot for Windows stability.
  • "This is great news, but what am I going to do with the extra hour or so a day?"

    Extra hour a day?! So... Err... You're a windows user, right?

  • Core Memory (Score:3, Interesting)

    by rlp ( 11898 ) on Monday April 15, 2002 @09:10AM (#3342625)
    Back at the dawn of time, I was programming a (Data General) Nova II mini-computer which had "core" memory (which is where the term "core dump" comes from). Core consisted of tiny doughnut (ummm, doughnuts) shaped magnets with (read/write) wires threaded through them. It was incredibly slow by today's standards, but it did retain memory even when powered down. I'd shut the machine down at the end of the day. The next morning, I'd turn it on and immediately pick up where I left off.
  • Don't think so... (Score:3, Interesting)

    by mubes ( 115026 ) on Monday April 15, 2002 @09:11AM (#3342631) Homepage
    Hmm. Rebooting nowadays with 'traditional' OSes is done to flush inappropriate state information out of memory - an unusual sequence of events resulting in the system getting into a state it should never be in during regular operation....this might be either accidental (a crash) or semi-deliberate (an upgrade of a software component which needs a reboot to get it co-ordinated with the rest of the system). Having memory which maintains this state information will make the problem worse, not better!

    What's needed here to achieve systems that don't need rebooting is operating systems which deal with all of these unusual events and states correctly..this means they'll catch errors and will be specifically designed to allow things like dynamic update of system components. I'm probably a bit biased, but the best example of a no-more-reboots kind of environment I see today is OSGi [osgi.org].
  • C'mon, I know that I have to reboot windows every couple of days to get rid of libraries that errant programs didn't unload and windows doesn't seem to let go of.

    Also, what if computers weren't *allowed* to reboot. You couldn't run a dual boot system. Which is something I suspect Microsoft would like. (I had to throw in a groundless msoft conspiracy... ;)
  • Very confused (Score:2, Informative)

    by Tottori ( 572766 )
    The article is not even clear on whether this development is supposed to be replacing RAM or hard disks. But either way, it cannot eliminate the need for rebooting. The primary reasons for rebooting are either to reset the operating system to a known state, or to upgrade low-level software (such as the kernel in Linux, or your web browser in Windows). Neither of these necessities goes away with non-volatile RAM, regardless of how fast, cheap, or capacious it may be. These are software issues, and they need software solutions.
    • The article is not even clear on whether this development is supposed to be replacing RAM or hard disks. But either way, it cannot eliminate the need for rebooting. The primary reasons for rebooting are either to reset the operating system to a known state, or to upgrade low-level software (such as the kernel in Linux, or your web browser in Windows). Neither of these necessities goes away with non-volatile RAM, regardless of how fast, cheap, or capacious it may be. These are software issues, and they need software solutions.

      Well it is partly because most computer industry journalists are morons....and partly because this stuff might replace either or both RAM and hard disks depending on the price and speed.

      If it is slow it won't replace RAM. If it is expensive it won't replace disks. If it is fast and cheap it will replace both; if it is neither, it won't replace either. (In most systems, that is; it might hit the target to replace FLASH in cameras, or...)

      It might be hard for people working on this to tell how fast/cheap they will get it, worse yet they don't really know what disk and RAM prices and speeds will do.

  • On many systems, mandatory, periodic rebooting is part of the resource management of the operating system (think of memory leaks, descriptor leaks, and so on). Even if you implement RAM using ferrite-core memory (or something else, like this new approach), these maintenance reboots won't go away, and they won't become faster.

    In any case, such memory devices would be great for storing the journal of certain file systems, or even as replacement for traditional mass storage.
  • How would this have anything to do with faster downloads of multimedia content? I didn't even see that in the article, but this has nothing to do with bandwidth.

    Flash is the speediest memory technology? Surely they mean speedier than EEPROM.

    How does this prevent reboots? I say without any doubt whatsoever, that the majority of reboots has to do with M$'s ~90% marketshare and numerous system level flaws. Does this memory plug its own leaks? Or do third rate OS programmers and ugly billionaire monopolists actually become smarter when exposed to this, sorta like Superman and kryptonite?

    Verdict: Marketing fluff.
  • I suppose this could be useful on systems that can do suspend-to-RAM, like laptops. Such systems still need a trickle of energy from the batteries to keep the data stored in memory from decaying.

    Also, a system with persistent memory would be like the old mainframe and minicomputers that had core memory. In the event of a crash, the memory could be examined. I suppose this could be somewhat beneficial to operating system developers.
  • As I've already seen alluded to in other posts, surely there will be need for Win XX machines to get back to the initial start state after they've crashed. I am starting a company that will sell reset buttons to accomplish this. No *nix users in the target market -- sorry.
  • by cybergibbons ( 554352 ) on Monday April 15, 2002 @09:37AM (#3342751) Homepage
    This is an appalling summary - and the article is no better.

    "The technology is highly suitable for broadband Internet connections, Hsu said, noting that it combines the features of low voltage, high speed and low power consumption."

    Yes, fantastic. That's great for those broadband internet connections. Faster memory is always good, but choosing this as an application is just a moronic use of buzz words.

    "Ignatiev said the new technology is about 1,000 times faster than flash, which is nonmechanical and currently the speediest memory on the market. "

    Flash memory is the fastest type of memory on the market? No, it is a form of non-volatile memory, which is very slow by RAM standards.

    "is based on resistor logic rather than the traditional transistor logic"

    Actually, you'll find that the DRAM in most modern computers is a capacitive device - the techniques used to make it are the same as for MOS transistors, but it does not use switching to store values, IIRC.

    I wish people would not spout such rubbish.

  • Basically, strictly speaking this isn't preventing 'rebooting'; it is enabling a system to boot really fast by loading the state of the OS from non-volatile memory, with that state preserved. This just allows the boot process to skip the OS initialization bit (which is significant, but excludes BIOS startup).

    This has been around (save-to-disk hibernation), though using non-volatile memory could speed the process up dramatically. It seems that they are proposing a non-volatile RAM technology that claims performance comparable to the volatile memory we use today, so it would always be ready to restore from that state, even if the shutdown is unexpected (a power outage, for example).

    However, the annoying part of the boot process to me is the PC BIOS. After its part is done, I can tweak things to start fast, but the BIOS, even after tweaking, is unbearably slow. I presume on restart a computer may still go through the BIOS before restoring state, and even then I presume it needs to offer the option of starting over (you don't want a BSOD to be permanent). I'm more interested in a BIOS that doesn't take forever to come up...
  • no more boot ? (Score:2, Insightful)

    by Chatterton ( 228704 )
    I understand it's booting, not rebooting, that this technology promises to get rid of. But how do you do a hardware reset or IRQ/DMA peripheral assignment without a boot sequence? How is the CPU state stored so you can go back to where you were before shutdown (as CPU registers are not stored in main memory)? Do you need a special OS to detect this kind of memory and work with it?
  • Check out Squeak [squeak.org], the free, portable Smalltalk machine. Like all Smalltalks, Squeak runs in an "image". The image is your entire language, programming environment, and execution environment, all at once.

    The interpreter, programming tools, and even the GUI all exist as long-lived objects in this large (sometimes very, VERY large) memory space. When you aren't using Squeak, the image gets stored as a file on disk.

    There are also projects to run Squeak on bare metal--no intermediate operating system like Windows or Linux. Squeak itself becomes the operating system.

    This memory technology would be ideal for a Squeak machine. The image would always live in NVRAM. In such a case, there isn't a distinction between the operating system as it exists in static form (files on disk) and executing form (code in memory). There are always just objects in memory. Very elegant.
  • This computer's sooo fast I can reboot TWICE as often in half the time!

    (Yeah, yeah, there'll be a glib Windows sux reply to this one, I'm sure.)
  • I assumed they were talking about Windows PCs, and my first thought was "How does memory keep the machine from crashing?"
    Then I realized that they meant "turn on the computer for the first time today" booting, not RE-booting. Doesn't affect me, the only machine I ever turn off anyway is my laptop.
  • by anthony_dipierro ( 543308 ) on Monday April 15, 2002 @10:02AM (#3342841) Journal

    This is great news, but what am I going to do with the extra hour or so a day?

    Find a better operating system.

    • Or better drivers. Myself, I'm pretty pissed off at the proprietary Nvidia drivers for Linux. I want to get a newer video card (TNT1 right now) but I don't have the cash. Anyway, the point of this comment is that shitty drivers can lock up Linux just as much as anything else. Yes, I mean *hard lock*! No ssh! No nothing!

      -l
      Who is saving up for a Radeon, he thinks.
  • reduces time required to transfer and download multimedia content and other massive files.

    Now I'm really curious how in freakin' hell any memory technology can reduce multimedia download times. That's just nonsense; it seems the word "multimedia" must be in everything you want to sell.

    Download times depend on things like your Internet connection, the compression used, your provider's connection, etc., but not on my memory.

    I still remember an Intel guy claiming the Pentium 3 would make the Internet faster... how can somebody even dare to claim nonsense like this? And the really sad thing is: nobody started laughing when he said it...
  • As always, hardware is ahead of software
  • What's so exciting exactly? They invented faster FLASH memory.

    This is not the end of rebooting computers and, unfortunately, not the end of mechanical hard drives.
  • The technology to prevent you from having to reboot your computer daily can be found here [debian.org] and here [freebsd.org]. It's been around since the early 90's, and was invented by a grad student...
  • Gather round, my children, and I will tell you of a time far in the past. A time when computers actually performed the job that they were designed to perform. A time when systems, including both hardware and software, worked pretty much all the time. Most importantly, a time when designers and support engineers understood their products and could diagnose and fix things that went wrong; when "restarting" the system was an act of desperation, not to be tried until all other solutions had failed. When "restarting" the system was a badge of shame that everyone would work to cure as fast as possible.

    Well, that is as cute as I can be this morning, but I hope the point is clear. I was willing to reboot an XT running MS-DOS 2.0 from time to time - it was a crude system and we didn't expect too much of it. But the "reboot" virus has spread FROM Microsoft systems all the way INTO the world of distributed controls. I actually have control system techs say to me "reboot and see what happens". Hello! It isn't supposed to be this way! Systems (particularly embedded systems) are supposed to work, not not work!

    Faster rebooting would be a crime, not an improvement, since it would help take everyone's attention off the problem, which is that the system failed.

    sPh

  • Where do they come up with this stuff? Unless your memory is Extremely slow, and I mean slower than anything used in personal computers since the days of the Altair, AND you are using a faster than 10M net connection (not realistically possible on an altair) on that slow memory, this will have ZERO effect on download speeds. Any computer faster than a 386 can handle damn near GigE where the limiting factor will be the PCI bus, disks, etc. - not the memory. (well, maybe non GigE on a 386, but 100M easily.) Sheesh.
    Considering I can't even get DSL or Cablemodem service in SILICON VALLEY, I don't think we will be seeing memory speeds being the limiting factor in downloads anytime soon - like not in the next 10 years even if computers stopped getting faster.
  • Honestly, at the risk of sounding cliche, real Unix systems that are bound to their hardware have fantastic uptime. The RS6000 we've had for a year has only been taken down for failover testing. If the resolution is hardware-based, then this Wintel duopoly isn't of much use. But the biggest question is: what will help desk people do if they can't tell people "Reboot the system." to take care of the problem? The vast majority of them may actually have to be trained in technical problem resolution. Millions will be thrown out of work (because it obviously exceeds their capabilities). It's the fragile nature of Windows that keeps the economy moving! Hopefully they will rethink this before it's too late.
  • The Houston Chronicle has a writeup, [chron.com] which describes the technology a little:

    UH researchers worked with very thin films of perovskite oxides called manganites. When these thin films are exposed to electrical pulses their resistance can be programmed. The researchers developed an electrical switching process so that the material could be used to store and retrieve bits of information.

    Computer memory using the new technology would look essentially the same as the memory being used in today's computers, Ignatiev said.

    "If you put them under a microscope, they would really look no different," Ignatiev said. "The difference is that traditional memory uses transistors and capacitors, and these use a resistor."

    It will require only a slight modification to the design of the current generation of motherboards. An interface that reads high and low resistance would have to be installed.

    So it is non-volatile RAM. That makes four distinct NV-RAM technologies that I know of: battery-backed SRAM (fast, expensive, and low capacity), Flash and other electrically erasable PROMs (slow writes, wears out), magnetic RAM, and resistive memory. The first two have been on the market for years, and capacity/price are nowhere near competitive with hard drives, although they are used where capacity can be much less than a PC needs and the environment is hostile to hard drives. MRAM is now being sold in small quantities, I think, but it's too young to tell how price and performance will work out.

    What I did not see was any reason at all for thinking that resistive RAM would work out to a low enough price to be a hard drive replacement. I'll believe that, with enough work on the production process, it can beat SRAM on price and Flash on write speed (these aren't hard targets), but it has a very long way to go to compete with DRAM on price or speed, and then the price has to go down another 100 times to compete with hard drives. OTOH, start selling boxes with 256M of NVRAM and good non-bloated instant-on software, and maybe people will prefer them to MS's bloatware offerings on a 30G HD...

    Finally, there have been much ballyhooed nonvolatile memories before that died once they hit the market. Bubble memory was supposed to replace hard drives about 20 years ago, but most slashdotters are too young to even remember it... I do like to see another technology out there, because if MRAM stumbles, now there's another chance of getting NVRAM that doesn't require major compromises.
  • The article says Commercial availability of the chips is expected within three years

    As always, multiply the marketing department's wishes by pi. In this case, 3*3.14, it's something like 9.5 years. I'd say the end of 2011 at best.

  • Have you not upgraded to Ext3 or are you running Windows 9x?
  • Rebooting is sometimes used to refresh the state of the RAM, rather than just to power down. So two new facilities will be needed:
    1) a clear function, which will set the ram to a known good value.
    2) an initialize function to recover from connections that may have been left active, and timed out while the power was off. (And probably to do other recovery that I haven't thought of just yet.)

    In the old core memory machines, the core was frequently cleared without turning off the computer, and then another IPL (initial program load) was done to start a program running. These features will need to be re-invented for more modern environments if RAM becomes non-volatile. Of course, the design could make only some RAM non-volatile, in which case it could be treated as an extremely fast disk (faster than the volatile RAM? That sounds unlikely, but if so it could cause interesting design changes.)
