HP To Introduce Flash Memory Replacement In 2013

Spy Hunter writes "Memristors are the basis of a new memory technology being developed by HP and Hynix. At the International Electronics Forum, Stan Williams, senior fellow at HP Labs, said, 'We're planning to put a replacement chip on the market to go up against flash within a year and a half. We're running hundreds of wafers through the fab, and we're way ahead of where we thought we would be at this moment in time.' They're not stopping at a flash replacement either, with Williams saying, 'In 2014 possibly, or certainly by 2015, we will have a competitor for DRAM and then we'll replace SRAM.' With a non-volatile replacement for DRAM and SRAM, will we soon see the end of the reboot entirely?"
  • Of course not (Score:4, Interesting)

    by Anonymous Coward on Friday October 07, 2011 @07:10AM (#37637378)

    Of course we will not see the end of the reboot entirely. I have yet to encounter a Windows or Linux system that you can upgrade without rebooting. (In practice, that is; in theory it should all work.)
    Memristors will make a dent in the small-scale UPS market, since there will be no need to shut down gracefully, but we will still need large-scale backup systems where you want to keep your operation running during a power outage.
    The real change will come when memristors replace both flash and DRAM, since it will no longer make sense to keep bulk storage in a different memory from the rest of the system. Everything will be memory mapped, always, like it was in the good old ROM-based days.
    The problem is that both Windows and Linux are badly prepared for this, since both of them use executable program structures that require modification upon loading. A lot of programs also use data files in abstract formats that require extensive parsing before use (like XML or other text-based configuration files).
    This makes it hard to transition to an XIP system, where loading is something that doesn't happen. (Did anyone with a battery-backed SRAM PCMCIA card try eXecute In Place on the Amiga? I would like to know if it actually worked or if it was just a term mentioned in the manuals. It should have worked, since it's not really any different from compiling programs for memory-resident operation.)
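
    A minimal sketch of that parse-vs-map distinction, assuming a POSIX system (the file name config.bin and the struct layout are made up for illustration): a fixed-layout record can be mapped and used in place, while an XML config would need a full parse into freshly allocated memory.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct config {            /* fixed-layout record, usable as-is */
        int threads;
        int cache_mb;
    };

    int main(void)
    {
        int fd = open("config.bin", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* Map the file and use the struct in place: no parsing, no copy.
         * With an XML config this step would instead be a full parse into
         * freshly allocated memory -- the "loading" an XIP system avoids. */
        struct config *cfg = mmap(NULL, sizeof *cfg, PROT_READ,
                                  MAP_PRIVATE, fd, 0);
        if (cfg == MAP_FAILED) { perror("mmap"); return 1; }

        printf("threads=%d cache_mb=%d\n", cfg->threads, cfg->cache_mb);
        munmap(cfg, sizeof *cfg);
        close(fd);
        return 0;
    }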

    • Why will Linux have a problem with this? The structures that require modification are copies of the data on disc, why should that change? Copy these from the non-running programme to private pages, modify the process page table, job done. I doubt Windows is any different. Basically the same mechanism in use now, but sourcing the original, non-executing text from a different place: memory rather than the filesystem. I don't think it's mentioned anywhere that programmes will never need loading or initialising…
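
      A quick sketch of that copy-on-write mechanism, assuming Linux/POSIX semantics (text.bin stands in for a program's on-disk text and is a made-up file name):

      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/mman.h>
      #include <unistd.h>

      int main(void)
      {
          int fd = open("text.bin", O_RDONLY);   /* stand-in for program text */
          if (fd < 0) { perror("open"); return 1; }

          /* MAP_PRIVATE: reads come straight from the file's pages; the first
           * write faults, and the kernel gives this process its own private
           * copy of the page -- "copy to private pages, modify the process
           * page table, job done". */
          char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
          if (p == MAP_FAILED) { perror("mmap"); return 1; }

          p[0] = '!';                            /* triggers copy-on-write */

          char orig;
          pread(fd, &orig, 1, 0);                /* the file itself is untouched */
          printf("mapping: %c  file: %c\n", p[0], orig);

          munmap(p, 4096);
          close(fd);
          return 0;
      }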
      • That's why he said "in practice".
      • The structures that require modification are copies of the data on disc, why should that change? Copy these from the non-running programme to private pages, modify the process page table, job done.

        Unless the version of the kernel with security patches is larger than the old version and won't fit in the same pages, or the security patch changes the meaning of a data structure in RAM. Then you need an Oracle product [wikipedia.org] to make the transition.

    • by TheLink ( 130905 )

      A lot of programs also use data files in abstract formats that require extensive parsing before use (like XML or other text-based configuration files).

      This makes it hard to transition to an XIP system, where loading is something that doesn't happen

      Configuration files aren't going away, and I don't see why you think they are a problem. There are good reasons why they exist and a new form of memory does not remove those reasons.

      One man's impedance mismatch is another man's layer of abstraction.

    • by macshit ( 157376 )

      I have yet to encounter a Windows or Linux system that you can upgrade without rebooting. (In practice, that is; in theory it should all work.)

      The only part of an upgrade that requires rebooting on a modern well-designed linux distro (e.g. Debian, where upgrading-without-rebooting is normal practice) is to start the new kernel, if there happens to be one. Of course you can install a new kernel without rebooting, and just keep on using the old one until the next time you happen to reboot—that works fine.

      [Yeah I know there are techniques that try to avoid even the start-the-new-kernel reboot, but so far as I've seen, they're kinda dodgy....]

    • by skids ( 119237 )

      Linux is in the tail end of integrating its embedded forks with the mainline kernel. These come with XIP binary formats, since RAM-constrained systems often want to execute binaries directly from mapped flash. There will be some work involved (e.g. getting ARM ELF-FDPIC implemented in the gcc toolchain, and hashing out the geometric explosion of toolchain tuples in a sane manner so that people tasked with managing libtool et al. don't end up in insane asylums), and all the distros will have a fun time with it…

    • by julesh ( 229690 )

      The problem is that both Windows and Linux are badly prepared for this, since both of them use executable program structures that require modification upon loading.

      Yes and no. Both use executable formats that are designed to cope with the possibility that modifications may be required upon loading, but in both cases no modifications are required if all modules can be loaded at their preferred locations. This should happen for most Windows software that uses only MS-provided DLLs, and can be made to happen for Linux software using the 'prelink' tool. Also, both formats gather the points that are likely to need changing and pack them all together into a minimal number of pages…
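
      A hedged illustration of which binaries fall into which camp: on Linux you can read the ELF header's e_type field directly (64-bit ELF assumed, minimal error handling):

      #include <elf.h>
      #include <stdio.h>
      #include <string.h>

      int main(int argc, char **argv)
      {
          if (argc < 2) { fprintf(stderr, "usage: %s <elf-file>\n", argv[0]); return 1; }

          FILE *f = fopen(argv[1], "rb");
          if (!f) { perror("fopen"); return 1; }

          Elf64_Ehdr eh;
          size_t n = fread(&eh, sizeof eh, 1, f);
          fclose(f);
          if (n != 1 || memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0) {
              fprintf(stderr, "not an ELF file\n");
              return 1;
          }

          /* ET_EXEC binaries were linked for a fixed preferred address and need
           * no fix-ups if they get it; ET_DYN objects carry relocation tables
           * that the loader patches (or that prelink patches ahead of time). */
          switch (eh.e_type) {
          case ET_EXEC: puts("fixed preferred load address"); break;
          case ET_DYN:  puts("relocatable (PIE or shared object)"); break;
          default:      puts("not a loadable object"); break;
          }
          return 0;
      }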

    • by mlts ( 1038732 ) *

      What might be interesting would be a hypervisor technology that can copy a process's memory space and other items to another VM completely. This way, an application sitting on VM "A" gets paused, its data copied to VM "B", and the old VM "A" shut down. Since both VMs would see the same filesystems (except perhaps /boot, which might differ due to the new kernel), an application likely wouldn't know or care about the kernel update unless some call it used got deprecated. Of course, there are plenty of consistency issues…

  • End of the reboot? (Score:5, Insightful)

    by MachineShedFred ( 621896 ) on Friday October 07, 2011 @07:12AM (#37637382) Journal

    Reboots usually don't happen because of hardware, and certainly not because of the type of RAM you're running. It's bad software.

    • Re: (Score:3, Insightful)

      by gman003 ( 1693318 )

      I assume, then, that you never shut your computer down for the night. Or for the weekend.

      • by fph il quozientatore ( 971015 ) on Friday October 07, 2011 @07:19AM (#37637430)

        I assume, then, that you never shut your computer down for the night. Or for the weekend.

        I turn my computer on for the night and the weekend*.

        * you insensitive clod!

      • Re: (Score:3, Interesting)

        by trold ( 242154 )

        You've missed an important point here. Non-volatile RAM means that powering off does not imply a reboot. When power returns the next morning, or after the weekend, the computer is still in the same state as when you pulled the plug Friday evening. /trold

        • This assumes that the CPU, and all the other hardware elements, are also completely non-volatile.
          That's unlikely to be the case, at least in the short term: hardware will absolutely require some shutdown time to get to a stable standby state.

          You do not go from a billion operations a second to zero cleanly, just by yanking the power.

          • by AlecC ( 512609 )

            True, but a CPU ought to be able to store all the state it needs to in a small fraction of a second. Most PSUs will hold power for this long, so if they can give a "power failed" interrupt half a second before dropping the main voltage, it should be OK. You will have to flush dirty cache lines to main memory, but not dirty disk sectors to disk. Similarly, the disk should be able to complete transfers actually in progress in that same fraction of a second.

            The problem will be, as you say, other hardware…
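
            As a rough sketch of what that flush step could look like in software -- borrowing the CLFLUSH+SFENCE pattern persistent-memory code uses; x86-only and purely illustrative, not a real power-fail handler:

            #include <emmintrin.h>   /* _mm_clflush, _mm_sfence */
            #include <stddef.h>
            #include <stdint.h>

            #define CACHE_LINE 64

            /* Push every cache line of [addr, addr+len) out to memory, then
             * fence so the stores have completed before we report "done". */
            static void flush_range(const void *addr, size_t len)
            {
                uintptr_t p   = (uintptr_t)addr & ~(uintptr_t)(CACHE_LINE - 1);
                uintptr_t end = (uintptr_t)addr + len;

                for (; p < end; p += CACHE_LINE)
                    _mm_clflush((const void *)p);
                _mm_sfence();
            }

            int main(void)
            {
                static char buf[256];
                buf[0] = 1;                  /* dirty a line, then flush it */
                flush_range(buf, sizeof buf);
                return 0;
            }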

            • by Entrope ( 68843 )

              You would need some honking huge capacitors to provide a half-second of hold-up for a typical desktop once AC goes away -- but milliseconds is achievable. That is more than a CPU needs to flush cache to RAM, but not enough to cleanly shut down certain USB-powered devices (such as rotating drives).
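
              A rough worked number backing that up, assuming a 200 W machine fed from a 12 V rail that may sag to 9 V before regulation fails:

              \[
              C = \frac{2Pt}{V_1^2 - V_2^2}
                = \frac{2 \cdot 200\,\mathrm{W} \cdot 0.5\,\mathrm{s}}{(12\,\mathrm{V})^2 - (9\,\mathrm{V})^2}
                \approx 3.2\,\mathrm{F},
              \qquad
              t = 10\,\mathrm{ms} \;\Rightarrow\; C \approx 63\,\mathrm{mF}.
              \]

              So a half-second of hold-up really is supercapacitor territory, while a few milliseconds is within reach of ordinary electrolytics.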

              • With a split path -- one with a 1 pF capacitor and low power, isolated by a diode off the mains, and one with a more substantial capacitor bank (a bank of 3×100 µF caps makes my guitar amp hold power for a good 1 second) -- you could get a pretty good "POWER HAS GONE AWAY" signal that would trigger a performance-impacting fluctuation. CPU immediately halts, pushes cache out. BIOS flat out cuts power to disk immediately; the OS should have the data being written still in RAM, and can get a power fault interrupt and…
                • by mlts ( 1038732 ) *

                  From the tinfoil hatter's point of view, it would be nice to have enough power to overwrite the master encryption keys multiple times, then set a flag so that as soon as the CPU is turned on, it moves the register state aside, switches to the hypervisor, and waits for the keys to be re-obtained -- either by scanning the boot path and checking the TPM, checking with the Mandos server, or prompting the user for a passphrase to unlock and reload keys back into memory.

      • I think most people hibernate their computers these days. I definitely do. Ever since the Linux kernel could be updated without a reboot, I haven't had to reboot my machines at all.

      • I never do. Why? Economy of resources?

        The light in my kitchen in the morning is exactly the same light as it was in the evening.

      • by Idbar ( 1034346 )
        I hibernate it. If the headline was meant as the end of hibernation and a new era of powerless sleep, then it's fine. But I really see no correlation with reboots unless the memory was also leak proof.
      • I assume, then, that you never shut your computer down for the night. Or for the weekend.

        The common PC at home is never powered off unless my gf feels "sorry for the machine and wants to give it some rest" (once every few months).

        The PC in my apartment is never powered off.

        • My girlfriend leaves her computers on most of the time. The only time she shuts them down is when there is a thunderstorm: in her neighborhood, the power lines are above-ground including pole mounted transformers, so are prone to lightning hits that can cause significant spikes. In my case, everything is completely underground, with pad-mounted transformers on the ground, so I don't need to worry so much about lightning.

          I've told her the "fix" is to put a 100-foot extension cord on her computer. Acts…

          • by mikael ( 484 )

            Where do you put the 100-foot extension cord? Do you wrap it around the supporting beams of the house, or the desk?

            I'd hate to imagine what kind of electromagnetic field a 100-foot cable at 120 V/240 V is going to generate. Our workplace had urban legends about coils of cable wrapped around objects like metal trashcans heating them to the point that things caught fire (batteries, extension cords, etc.).

      • I reboot at most once a week, more likely once a month. The rest of the time is deep sleep, or whatever Windows calls writing its state to disk and powering down.

      • No, never. I hibernate it instead. Saving the state of my workspace is vital, and saves lots of time. I don't really care about the time it takes to hibernate or come out of hibernation.

    • by gweihir ( 88907 )

      Indeed. Why is this BS always attached to memristor articles?

      Reboots are for updating kernels and for saving power on an unused machine. Or for when you run OSes that cannot run a few years without a reboot...

    • Reboots usually don't happen because of hardware, and certainly not because of the type of RAM you're running. It's Windows.

      Fixed that for you....

    • As I posted below, the stub writer means that this is the end of cold boots, as system state is persistent in memory even with no power.

      Getting rid of reboots for maintenance will require much more than non-volatile RAM.
    • by mark-t ( 151149 )
      If you update certain important system executables or files, there is simply no getting around rebooting: you cannot simply terminate the appropriate executable and restart it without also having to restart all the processes that depend on it, which in some cases is essentially equivalent to rebooting.
  • My bullshit detector is going off - and it's not because I don't believe this sort of technology is just around the corner, because it is. I just don't have confidence that it will be brought to us by HP; given recent evidence, HP seems quite capable of snatching defeat from the jaws of victory.

    • They never mentioned the price. HP might decide to charge a lot of money for this technology, which could make it impractical in many applications. That could effectively screw it up until the patents expire.
  • Windows users will still need them every time they change things, or to clean up tiny software flaws...

  • …they're selling us. It sounds good, but I'll believe it when I see it (vaporware?).
  • Reboots are not necessary on many machines right now - I have to remind my boss to reboot every few weeks when something finally goes wonky in the network settings on his Mac laptop. Standby mode lasts for a very long time now... and most required reboots are from operating system updates. With modern SSDs, you don't even need to wait that long to boot. My work machine with a modern SSD takes about 7 seconds to boot Windows 7. My home machine, with fewer services to start, boots in about 4.

    But honestly…

  • end of the HDD (Score:4, Insightful)

    by gbjbaanb ( 229885 ) on Friday October 07, 2011 @07:24AM (#37637472)

    it won't mean the end of the reboot, stupid editor. This is Slashdot, don't you know you have a Microsoft-Windows-BSOD-Daily-reboot meme to maintain?? :)

    What I think it could mean the end of is the HDD, or rather the distinction between memory and storage. If all your apps and long-term-storage data could be placed into RAM, then you'd do it, wouldn't you? (This assumes a few things, like reliability and long-term unpowered persistence.) But imagine having 500 GB of RAM that just happened to hold all your data, rather than keeping it separate and shuffling it between the two. That could be quite a change in the way we see computers, compared to the way we've been using them for the last 40-odd years.

    • Re:end of the HDD (Score:5, Informative)

      by pz ( 113803 ) on Friday October 07, 2011 @08:43AM (#37638138) Journal

      What is old is new again.

      There was a project at MIT LCS/AI back in the mid-80s to explore what it would mean to have massive amounts of RAM. A machine was designed with 1 GB of main memory. By today's standards that's pathetic, but recall that this was in the era when PCs had 640 KB, max, and 1 GB was not only larger than every hard disk (desktop ones were at 10 MB, and even the big enterprise drives were only on the order of 10 times bigger) but -- and this was the really important part -- would fill out the virtual address space, so there would be no need for a VM system. Hal Abelson and Gerry Sussman were behind these big ideas (the same duo who wrote Structure and Interpretation of Computer Programs). I don't recall if the machine was actually built (maybe it was the Digital Orrery?), but I do recall one of the contrary viewpoints being that VM was considered important not just for simulating a larger memory system, but also because, for type-driven hardware like Lisp Machines, a huge address space was useful: the upper addressing bits could be used to encode type, even if that address space was too large to ever be populated.

      • Re: (Score:3, Informative)

        IMHO the main advantage of a VM system is that a program doesn't need to care where its memory is located. It can always act as if it just owned all the memory up to some maximum address. The VM takes care of mapping that to the right place.

      • Re:end of the HDD (Score:4, Interesting)

        by swb ( 14022 ) on Friday October 07, 2011 @10:51AM (#37639738)

        Even if you decided to maintain a VM system, the idea of a unified storage system (DRAM+DISK as one device) is pretty fascinating.

        You could theoretically install a program in an already running state. All your programs could be running simultaneously -- "quitting" an application may just be telling the kernel to stop scheduling it; launching it again would mean just scheduling it again to execute, where it would pick up exactly where it left off.

        Obviously a lot of software would have to be rewritten to take advantage of this, but some of the possibilities are fascinating.
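
        Today's kernels already have a primitive along those lines: stop scheduling a process, then later resume it exactly where it was. A tiny POSIX sketch (a made-up demo, not a real application lifecycle):

        #include <signal.h>
        #include <stdio.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            pid_t pid = fork();
            if (pid == 0) {                       /* child: "application" that counts */
                for (unsigned i = 0; ; i++) {
                    printf("tick %u\n", i);
                    sleep(1);
                }
            }

            sleep(3);
            kill(pid, SIGSTOP);   /* "quit": kernel simply stops scheduling it */
            puts("child stopped");
            sleep(3);
            kill(pid, SIGCONT);   /* "relaunch": picks up exactly where it left off */
            puts("child resumed");
            sleep(3);

            kill(pid, SIGKILL);
            waitpid(pid, NULL, 0);
            return 0;
        }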

      • ... so there would be no need for a VM system ... VM was considered important not just for simulating a larger memory system ...

        It's a common misconception (reinforced by several GUIs) that "virtual memory" means "using disk as virtual RAM". That's not accurate.

        Virtual memory means the address a program sees is not the real address in storage. It's translated by the MMU (Memory Management Unit). This lets you do any number of things, only some of which have to do with paging/swapping to disk. In particular…
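
        On Linux you can actually watch that translation from user space via /proc/self/pagemap. A sketch (recent kernels zero the frame numbers unless you run it as root):

        #include <fcntl.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(void)
        {
            long page = sysconf(_SC_PAGESIZE);
            unsigned char *buf = malloc(page);
            buf[0] = 1;                         /* touch it so it is mapped */

            int fd = open("/proc/self/pagemap", O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            /* One 64-bit entry per virtual page; bits 0-54 hold the physical
             * frame number when bit 63 ("present") is set. */
            uint64_t entry;
            off_t off = (off_t)((uintptr_t)buf / page) * sizeof entry;
            if (pread(fd, &entry, sizeof entry, off) != sizeof entry) {
                perror("pread"); return 1;
            }

            printf("virtual:  %p\n", (void *)buf);
            if (entry & (1ULL << 63))
                printf("physical: 0x%llx\n",
                       (unsigned long long)((entry & ((1ULL << 55) - 1)) * page
                                            + (uintptr_t)buf % page));
            else
                puts("page not present");

            close(fd);
            free(buf);
            return 0;
        }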

  • Man. I was going to put all of my savings into one of the new cold fusion companies that are going to be popping up at the end of the month. Now I'm going to have to split it with all the HP stock I need to buy.

    • Man. I was going to put all of my savings into one of the new cold fusion companies

      Cold fusion [wikipedia.org]? You could have bought ADBE years ago.

  • by An dochasac ( 591582 ) on Friday October 07, 2011 @07:30AM (#37637520)
    We can finally dump the multiple layers of caching, look-ahead, and other OS complexity designed to hide the several orders of magnitude difference between register/DRAM access and persistent storage (tape/hard drive/core memory...). Operating systems can return to the level of simplicity they had back when everything was uniformly slow. But now everything will be uniformly fast, and we can focus the complexity on multiprocessing.

    It will become practical to implement neural networks in hardware. This will completely change the way we design and program software and databases.

    Persistent and portable user sessions will become the norm. (Look at Sun Ray for an idea of how this works. Sun Ray sessions are typically logged in for months at a time.) This means software has to be better behaved, but it also means we won't have to rely on user memory to restore a desktop and applications to... now where was I?
    • We can finally dump the multiple layers of caching, look-ahead, and other OS complexity

      That sounds unlikely. To replace multiple layers of caching you need to come up with a technology which is not just faster than flash and faster than DRAM, but also faster than cache RAM. A tall order for a single new technology.

      Even if that were the case, it would have to have a uniform speed independent of size and process. Otherwise you would surely have memristor RAMs of different prices and speeds. So you would again use smaller, faster parts as caches…

    • by Ed Avis ( 5917 )

      Operating systems can return to the level of simplicity they had back when everything was uniformly slow.

      When was that exactly? Only the very earliest, very simple computers didn't have at least two kinds of memory (working memory and storage). And they didn't have an operating system.

  • I haven't encountered any OS besides z/OS that didn't require a reboot at least every few weeks in order for the software side to remain stable.
    This includes Windows, Linux, OS X, Android, and iOS. Most likely due to not-quite-perfect applications rather than the OS itself, but they still required an OS reboot.

    • by Hatta ( 162192 )

      The common thread in all these instances is you. What are you doing wrong?

  • Oracle must already be working hard to find out how to make it incompatible with their databases.
  • s/reboot/cold-boot/;

    This will store your system state in memory, meaning you don't need to hibernate.
  • This, if it ever sees the light of day, will probably require a major rethinking of OS architecture. Some things, like volatile RAM vs. permanent "files" on "disks", are logic hardcoded into every major OS and framework (Java, .NET, C++, ...), not only as code but as a major architectural constant. With this, everything changes IMHO, not only boot. For example, "files" become completely obsolete, unless we want to emulate what we know and what we are used to.
    • Some things, like volatile RAM vs. permanent "files" on "disks", are logic hardcoded into every major OS and framework

      For one thing, it's not necessarily as hardcoded as you might think in operating systems that have any concept of "everything is a file". One can mmap a file as a block of memory, or one can mmap /dev/zero to allocate a block of memory. For another, perhaps you want "files" on "repositories" to be backed up, revision controlled, etc.
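
      A small sketch of that point, assuming POSIX (data.bin is a made-up file name): the same mmap interface hands you either a file as memory or plain memory with no file behind it.

      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/mman.h>
      #include <unistd.h>

      int main(void)
      {
          /* 1. A file mapped as a block of memory. */
          int fd = open("data.bin", O_RDONLY);
          if (fd < 0) { perror("open data.bin"); return 1; }
          char *file_mem = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, fd, 0);
          if (file_mem == MAP_FAILED) { perror("mmap file"); return 1; }
          printf("first byte of file: %d\n", file_mem[0]);
          munmap(file_mem, 4096);
          close(fd);

          /* 2. Plain memory from the same interface, via the classic
           * mmap-/dev/zero trick the parent mentions. */
          int zfd = open("/dev/zero", O_RDWR);
          if (zfd < 0) { perror("open /dev/zero"); return 1; }
          char *anon = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE, zfd, 0);
          if (anon == MAP_FAILED) { perror("mmap /dev/zero"); return 1; }
          anon[0] = 42;
          printf("anonymous byte: %d\n", anon[0]);
          munmap(anon, 4096);
          close(zfd);
          return 0;
      }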

    • by MobyDisk ( 75490 )

      Suppose you had infinite non-volatile RAM, so you could open as many documents as you want and never have to close or save them.

      You would still want your documents in some sort of organized, searchable (hierarchical?) format for organizational purposes. By analogy, imagine a desk with infinite surface area. You would still put documents into a filing cabinet because they are easier to find. You would also need this standard storage medium so that other people and other software could get to the document…

    • by ceoyoyo ( 59147 )

      You're assuming this is cheap as well as nonvolatile.

      We have nonvolatile memory now. It hasn't made hard disks disappear. I bet the new kid isn't going to either.

    • by LWATCDR ( 28044 )

      Yes and no. Changing people is a lot harder than changing software. Think about it: "files" are already a throwback to the old idea of paper files.
      You will still need to move data as well, and to know what is where.
      Now, changing the OS is a given, but not as much as you think. We already have disk caches now. This should allow for an extremely persistent cache and a very low-memory sleep state for computers. People are talking about never having to reboot, which is more than a bit silly, because if you power down t…

      • Right now, we have two main models for storage - files and a flat address space. Neither is well suited to flash memory, let alone something like memristors.

        There are other architectures. Burroughs machines from 1960 onward had memory addressing that worked like pathnames. Think of memory addresses as being hierarchical, like "process22/object21/array4[5]". Objects were paged in and out, but were not persistent. The IBM System/38 went further and made such objects persistent. However, such architectures…

  • "HP’s technology allows the memory layers to be put directly on top of the processor layer making for very fast systems on chip" Interestingly this is exactly what John Carmack stated he was hoping would happen in his last interview. It would make development of new game engine technology that takes full advantage of PC systems much easier.
  • Don't get too excited, guys... Apple has already locked up anything HP produces with a long-term contract. They're even going to build the factory for them! On the bright side, if this technology ever gets cheap enough, Apple will switch to it exclusively, meaning flash prices will finally come down for other vendors. Remember, Apple doesn't like flash.
  • It's a good thing too, because it's likely Samsung has been colluding with other NAND flash manufacturers to keep prices high. They bought up a lot of competitors, and the top 3 manufacturers control the vast majority of the market now. The DOJ actually investigated Samsung for collusion in 2007 (http://www.theregister.co.uk/2007/09/18/nand_flash_antitrust_probe_widens/) but abruptly dropped the case in 2009 (http://www.bloomberg.com/apps/news?pid=newsarchive&sid=aWgWSqhs_Jk0). Perhaps coincidentally…
  • Would some of you clever (and snarky) types consider the security issues that persistent RAM brings?

    Powering off such a machine leaves it entirely ready to be used, true, but also entirely ready to be examined by whoever lifts it out of your bag at the airport or Starbucks. Freezing and bumping DRAM now is considered a genuine, if rare, security concern. No one talks much about hiberfil security. All of my RAM still loaded, ready to be dissected by whoever cares to physically take it away?

    How long before…

  • Didn't HP want to put an end to the hardware business and focus on software? Can someone please enlighten me because I'm probably missing something here...

  • by fnj ( 64210 ) on Friday October 07, 2011 @01:37PM (#37642040)

    Everybody here is prattling on about whether we can or cannot eliminate reboots by using memristors - completely missing the point of this new technology. We are talking about a 5-nanometer process here! One in which you can build up many layers! One in which parameters can be traded off in various ways to make either a better DRAM than DRAM, a better SRAM than SRAM, or a better flash than flash. The point is not necessarily to replace all of those with a single part. The point is that there is VAST potential to break through barriers. We are talking about a flash replacement with much higher density, lower power, increased endurance, and (speculatively) lower cost. This could be a damn big breakthrough; a game changer.
