Data Storage

The Next Decade In Storage

Esther Schindler writes: In this article, Robin Harris predicts what storage will be like in 2025. And, he says, the next 10 years will be the most exciting and explosive in the history of data storage. For instance: "There are several forms of [Resistive RAM], but they all store data by changing the resistance of a memory site, instead of placing electrons in a quantum trap, as flash does. RRAM promises better scaling, fast byte-addressable writes, much greater power efficiency, and thousands of times flash's endurance. RRAM's properties should enable significant architectural leverage, even if it is more costly per bit than flash is today. For example, a fast and high endurance RRAM cache would simplify metadata management while reducing write latency."
  • Maybe (Score:5, Insightful)

    by jandrese ( 485 ) <kensama@vt.edu> on Monday January 12, 2015 @05:30PM (#48797597) Homepage Journal
    There are a dozen different memory technologies that "in 10 years time" will revolutionize everything. I'll believe it when I see it. Until then, this gets filed away with Bubble RAM and whatnot in the "would be nice if it ever pans out" file.
    • by sabri ( 584428 )

      There are a dozen different memory technologies that "in 10 years time" will revolutionize everything.

      It doesn't have to be in 10 years time. But did you expect the rise of the All Flash Array [violin-memory.com] 10 years ago?

      Legacy spinning disks will be as dead in 10 years as tape is today.

      • Legacy spinning disks will be as dead in 10 years as tape is today.

        I can't tell if you are being sarcastic or serious. But tape is nowhere near dead as backup media for business. Very few people used tape for home backup. Spinning disks are still in most homes. I don't think spinning disks will be anywhere close to being as "scarce" as tape is today. For cheap massive online storage, it's still pretty hard to beat. For streaming video, it's a fantastic solution. And with the way resolution for video is going, spinning disks are perfect.

        I have a mix of SSD and spi

        • And I will go on to add that tape is a ridiculously overlooked solution for home backup. You can pick up a second-hand LTO3 drive off eBay for under $100, and it will often come with a handful of tapes. Instant, easy, safe backup solution.

          Got photos and camcorder footage you just can't replace? Stick it on a tape and drop a copy at a relative's place. House burns down, full HDD failure/corruption: the tape is still sitting there. God, I picked up a box of 20 unopened LTO3 tapes for $10. I give one to the MIL every month.

          • Too many of us have "gone to the backup tapes" and found them to be corrupted.

            I'm not saying "don't use tape" but I've been burned too many times with only having tape backups (and these were expensive enterprise systems) in the past. If you use tape, also use something else. Belt & suspenders.

            • I do a read-back on each tape before I wipe and reuse it, and it works out that they're roughly on a six-month cycle.

              So far I haven't had a single read-back fail. I don't compress the data, and it's only roughly 100GB that I back up, but I haven't had any issues.

              My theory is that if the backup system is too hard, it won't get used. I have everything on spinning media anyway, and usually I have 2-3 tapes sitting at the MIL's. So the worst-case scenario is the most recent one is dead and I lose an extra month.
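
              That read-back check is easy to script. A minimal sketch in Python, assuming the originals live in one directory and a scratch restore pulled back off the tape lands in another (both paths are hypothetical stand-ins):

              ```python
              import hashlib
              from pathlib import Path

              def sha256(path: Path) -> str:
                  """Hash a file in 1 MiB chunks so large camcorder files don't blow RAM."""
                  h = hashlib.sha256()
                  with path.open("rb") as f:
                      for chunk in iter(lambda: f.read(1 << 20), b""):
                          h.update(chunk)
                  return h.hexdigest()

              def verify_backup(source: Path, restored: Path) -> list[Path]:
                  """Return relative paths that are missing or differ in the restore."""
                  bad = []
                  for f in source.rglob("*"):
                      if f.is_file():
                          rel = f.relative_to(source)
                          copy = restored / rel
                          if not copy.is_file() or sha256(f) != sha256(copy):
                              bad.append(rel)
                  return bad

              # Hypothetical paths: the live data and a scratch restore off the tape.
              failures = verify_backup(Path("/data/photos"), Path("/tmp/tape-restore"))
              print("read-back OK" if not failures else f"{len(failures)} files failed verify")
              ```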

            • by lgw ( 121541 )

              Too many of us have "gone to the backup tapes" and found them to be corrupted.

              1 - That tape was bad when written - verify at least some after write and you're fine.

              2 - Old-school home backup drives were total crap: QIC, crap; DAT, crap. But LTO is solid. Not perfect -- do that verify -- but worlds apart from the low-end crap tape that has all vanished.

              • For a very long time, tape drives and media gave tape drives and media a bad name.

                Consumer QIC — about 1% of tapes actually held any data, total snake oil that took 10 days to "store" 10 megs (immediately unreadable in all cases)
                4mm — Tapes good for one pass thru drive; drive good for about 10 tape passes
                8mm — Tapes good for maybe 10 passes thru drive; drive good for about 100 tape passes before it starts eating tapes

                For all three of the above: Don't bother trying to read a tape on any driv

                • by lgw ( 121541 )

                  Not quite: AIT was also good - that was Sony's take on 8mm. It was still helical scan with the problems that brings, but it was much cheaper than DLT at the time and just as reliable (since DLT went far downhill before the monopoly broke and LTO happened).

                  I used DAT professionally too - it wasn't terrible, but you definitely wanted to verify and reading on other drives was somewhat iffy (but worlds better than QIC), and we could re-use a tape a few times. Still, it beat 9-track reel-to-reel.

              • by Agripa ( 139780 )

                1 - That tape was bad when written - verify at least some after write and you're fine.

                This is not a solution unless you verify the tape in a different drive, and maybe not even then. The second-to-last time I used tape backups, the tapes verified fine, yet none of them could be read later; and when that happened, the drive STILL verified newly written tapes as good even though they were not.

                The drive I used after that worked great for years and then became unsupported on the software side. The 4mm and 8m

          • I like this idea, but:

            LTO3 isn't big enough to be worth the headache of 2-3 tapes per TB. LTO5 would be very usable, but requires an annoying SAS card, which is just expensive enough to make the whole solution a little spendy.

            I'm mostly kidding about USB3, but I sort of wonder why you couldn't have a USB3 interface as the connection medium.

            • 400GB uncompressed / 800GB compressed has been more than enough for me so far. I don't try to back up everything. I have a huge TV and DVD collection, but I'm not worried about that. Those are infrequently synced with friends, so if I lost the lot I could get a lot back from them, and the interwebs will provide the rest.

              The bits I can't replace are photos and camcorder footage, and I haven't cracked 100GB yet.

              As for USB: LTO drives are actually pretty hard to feed with enough data, full stop. You heard the tape spin up

              • by swb ( 14022 )

                On paper at least, USB3 is supposed to support 625 MBytes/sec while LTO6 is only capable of 160 MBytes/sec, so in theory, as a data bus, USB3 should be able to handle it.

                IIRC, USB2 was polling-driven, and running at full rate would cause noticeable spikes in CPU utilization. My guess is that changed in some way for USB3 to support faster data rates, but I don't know how.

                I use a USB3 gigabit NIC on my Surface Pro and I don't notice any network performance issues at all.
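
                Those numbers check out on the back of an envelope; here is the arithmetic as a quick sketch (the 60% effective-throughput figure is an assumption, not a measurement):

                ```python
                # Nominal rates quoted above.
                USB3_MB_S = 625   # 5 Gbit/s signalling / 8, before protocol overhead
                LTO6_MB_S = 160   # LTO6 native (uncompressed) streaming rate

                print(f"raw headroom: {USB3_MB_S / LTO6_MB_S:.1f}x")  # ~3.9x

                # Even if USB3 only delivers ~60% of its raw rate in practice (an
                # assumption), the bus can still keep the tape streaming:
                print(f"pessimistic headroom: {USB3_MB_S * 0.6 / LTO6_MB_S:.1f}x")  # ~2.3x
                ```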

        • by sabri ( 584428 )

          I can't tell if you are being sarcastic or serious. But tape is nowhere near dead as backup media for business.

          I'm serious, but you are right. In the near future, spinning disks will be used for the same applications tape is today, and seen as the dinosaur of technology: backup and low-performance work.

          The truth of the matter is that spinning disks are simply too slow for modern-day technology. Compare your laptop's feel with a 7200rpm disk versus an SSD. Compare your Oracle database query times on a legacy storage vendor versus an all-flash array that can do 1 million IOPS [vmem.com]. It is the performance aspect that matters in modern da

          • You don't buy spinning disks for their speed. You buy them for their capacity relative to cost. In other words, they're really the only practical solution for online bulk storage on a budget. Different technology - different tradeoffs.

            Obligatory car analogy to follow: It's sort of like declaring that semi-trucks are too bulky, slow, and inefficient to be used as modern vehicles. Well, that's true, but only if you're considering the sole use cases to be "commuting" or "family outings", and completely ove

            • At one point you spent huge sums of money on memory, or a smaller large pile of money on lots of drives if you were in the moderate sized database world. With SSD you get excellent performance at a cost that ends up being far cheaper than disk per IOP. There are many applications where flash is replacing both memory and disk.

              • IOP is not the only metric that must be considered. Spinning disks are still at least eight times cheaper than SSDs *right now*, and in some use cases the speed at which that data is retrieved is not as important as the cost per GB. Of course that's going to change eventually, but until it does, if you want 6TB of local storage you can either buy one 6TB spinning disk at $270 or six 1TB SSDs at about $475 each for $2850 total.

                SSDs are simply not as economical for bulk storage (not speed-critical databases
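
                Reducing those figures to cost per GB makes the gap explicit; a quick sketch using the prices quoted above:

                ```python
                # Prices as quoted in the parent comment (early 2015).
                hdd_price, hdd_gb = 270.0, 6000   # one 6TB spinning disk
                ssd_price, ssd_gb = 475.0, 1000   # one 1TB SSD

                hdd_per_gb = hdd_price / hdd_gb   # ~$0.045/GB
                ssd_per_gb = ssd_price / ssd_gb   # ~$0.475/GB
                print(f"HDD ${hdd_per_gb:.3f}/GB vs SSD ${ssd_per_gb:.3f}/GB")
                print(f"SSD premium: {ssd_per_gb / hdd_per_gb:.1f}x")  # ~10.6x on these prices
                ```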

              • At one point you spent huge sums of money on memory,

                But now RAM is cheap, so you can have huge amounts of memory without spending huge sums. Big memory caches are still used in many cases.

                With SSD you get excellent performance at a cost that ends up being far cheaper than disk per IOP.

                It's still a hell of a lot slower than DRAM.

          • Comment removed based on user account deletion
            • by trparky ( 846769 )
              I'd have to call bullshit on the "most give you either a SMART warning or 'delayed write failure' errors long before they die" part. I've had many a drive that was working fine one day, with monitoring software showing no signs of pending drive failure, and then... dead the next day. *click* *click* *click* *click* *cry*

              You say the problem with SSDs is when wafers shrink; well, it seems that manufacturers have thought about that and have gone back to larger lithography processes. In fact, Samsung has done just th
              • Comment removed based on user account deletion
                • by trparky ( 846769 )
                  I was able to recover 95% of the data from the drive after letting it cool in a refrigerator (not a freezer), so I call that a win for me. But I'll never trust a Seagate as long as they exist. And yes, a lot of my drives were Seagate drives. Thanks for that bit of info.

                  Oh, and let me guess.... those SSDs were OCZ SSDs? Right?
          • by Kjella ( 173770 )

            I'm serious, but you are right. In the near future, spinning disks will be used for the same applications tape is today, and seen as the dinosaur of technology: backup and low-performance work. The truth of the matter is that spinning disks are simply too slow for modern-day technology. Compare your laptop's feel with a 7200rpm disk versus an SSD. Compare your Oracle database query times on a legacy storage vendor versus an all-flash array that can do 1 million IOPS. It is the performance aspect that matters in modern-day computing. The bottleneck is storage, not your CPU, not your memory, storage.

            Except that using a computer to store your family photos is a very valid use case; it doesn't have to be about "computing". When people play Angry Birds they're battery-bound, when people stream Netflix they're bandwidth-bound, when people play FPS games they're GPU-bound, when old people want big screens with big text they're interface-bound. And I'm sure there are people doing weather simulations who'll say they're CPU/memory-bound. Yes, if you measured the time the average desktop user spent waiting for I

            • by trparky ( 846769 )
              I beg to differ on the "average user" comment.

              Take my brother's notebook with a slow 5400 RPM hard drive and boot-up times of more than three minutes. I put an SSD into it and it took off like a rocket, with sub-one-minute boot-ups. The same thing happened with my desktop. Even launching a simple desktop program such as Microsoft Word can benefit from an SSD. You double-click the icon, and instead of waiting as the HDD retrieves several different DLLs from all over the drive to load into RAM, the SSD can load it al
          • The truth of the matter is that spinning disks are simply too slow for modern-day technology.

            What? Who told you that? People who need that much speed can afford multiple disks, and then the speeds go up again. As long as SSD is more than twice as expensive as HDD, there will be a strong market for HDD.

            The bottleneck is storage, not your CPU, not your memory, storage.

            It doesn't take that many SATA3 HDDs' sustained throughput before you saturate a typical workstation bus. For some few workloads, you are absolutely correct. For most others, you are absolutely incorrect. If we were limited to one storage device in scientific computing, you would be correct, but array

            • by trparky ( 846769 )
              You're talking about sequential reads. Yes, multiple drives can help with sequential read speeds, but 4K random reads are what spinning hard drives absolutely suck at. And before you mention that I'm just talking about benchmark numbers: yes, I am talking about benchmark numbers, but 4K random read benchmarks very closely mirror real-world activity.

              You can see this in how the average OS boot-up is slow as shit on an HDD. This is because OS boot-up is pulling seemingly random (at least to the HDD)
              • Yes, multiple drives can help with sequential read speeds, but 4K random reads are what spinning hard drives absolutely suck at.

                Stripes improve seek times, too.
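
                A rough model of why stripes close the sequential gap but not the random-read one; the service times and the SSD figure are illustrative assumptions, not benchmarks:

                ```python
                # A random 4K read on a 7200rpm disk pays a seek plus, on average, half
                # a rotation. Striping N disks scales IOPS roughly linearly, assuming
                # the queue is deep enough to keep every spindle busy.
                def hdd_iops(n_disks: int, seek_ms: float = 8.5, rot_ms: float = 4.2) -> float:
                    return n_disks * 1000.0 / (seek_ms + rot_ms)   # ~79 IOPS per spindle

                SSD_4K_IOPS = 90_000   # assumed figure for a consumer SATA SSD of the era

                for n in (1, 4, 8):
                    print(f"{n} striped HDDs: ~{hdd_iops(n):,.0f} random 4K IOPS")
                print(f"spindles to match one SSD: ~{SSD_4K_IOPS / hdd_iops(1):,.0f}")
                ```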

      • by tlhIngan ( 30335 )

        Legacy spinning disks will be as dead in 10 years as tape is today.

        Unlikely, because spinning rust still has a cost advantage over semiconductor storage at the same size. I mean, given $500 or so for 8TB, while a 1TB SSD is around $400 or so, that's three process nodes' worth of improvement (or about five years).

        Remember, storage capacity of semiconductor memory follows Moore's law. Tricks like multi-bit cells do increase capacity, but it's hard to make them reliable. Three bits means eight voltage levels, and that is getting

      • by Guspaz ( 556486 )

        10 years ago? Sure, that's in the ballpark of when consumer SSDs started to become a thing. Intel getting into the game 7 years ago blew it open, but they weren't the first. So ten years ago, you wouldn't have been crazy for following Moore's Law and making a prediction that flash-based storage arrays would eventually make sense.

        In terms of legacy disks being as dead in 10 years as tape is today, there are a few problems with that. First is that tape isn't dead, it's still in widespread use in enterprise (i

        • You are the second guy who thinks the GP wrote that tape is dead. He wrote HDD will be AS dead AS tape is today. No more, no less. I.e., as a backup medium, if that (SSD follows Moore's law, but HDD?).

      • by jedidiah ( 1196 )

        The media loves to push this narrative. It's almost as bad as other ideas they feel compelled to shove down everyone's throats.

        I'm not convinced. Storage requirements only seem to be escalating while storage technology for the most part doesn't seem to be keeping up. It doesn't matter how much the pundits wish their fantasy were true.

        An all flash array is as impractical now as a single SSD drive was 10 years ago (or 15). It's an application limited by its price tag.

        You can build all sorts of crazy things if

        • An all flash array is as impractical now as a single SSD drive was 10 years ago (or 15)

          Flash drives have been on the market for 35 years. [wikipedia.org] Those drives have always been practical for someone, and flash arrays that are shipping right now are practical for someone. The cost equation for flash arrays is the same, including variables like how much money your application earns or saves by running faster and cooler, and whether your application does not actually need a whole lot of space.

      • Legacy spinning disks will be as dead in 10 years as tape is today.

        So, not at all dead? Tape is alive and well, and it is the best medium we have for backups.

      • Most storage vendors make a big deal out of the fact (sometimes with actual data) that a lot of data isn't accessed often enough to warrant spending a premium on the storage medium it sits on, and they sell products that automatically track and tier blocks based on access frequency and writes.

        As long as there is a big price gap between large 10k spinning disks and solid state, won't hybrid arrays still make economic sense? Unless you do something weird, you'll mostly have an all-flash experience but at $/TB pr

        • Your servers may not be able to drive a flash array to 100,000 (or 1,000,000) IOPs, but the array itself can use a good bit of this; most of these all-flash arrays have inline global block deduplication and compression. In a scenario where you have a multi-TB database and Prod/QA/Dev/Test environments, you need a copy of the DB for each environment. With an all-flash array, you need 1 copy of the DB and then leverage snapshots w/no apparent performance hit (all those extra IOPs at work).

          Real example: Whe
          • by swb ( 14022 )

            I'm not sure dedupe and compression are arguments for all-flash. Both of those things are heavily CPU-bound: dedupe requires a block checksum computation and a lookup to find a match, and compression takes computation.

            My experience with dedupe is limited, but inline dedupe writes take a serious performance hit without a massive RAM cache to hold writes. Post-write dedupe makes for faster writes and doesn't really cost I/O, because it can be done on a deferred basis when I/O loads are low, whic
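
            The checksum-plus-lookup core of inline dedup fits in a few lines; a minimal in-memory sketch (a real array keeps the index in RAM and the blocks on flash, which is exactly the cache pressure being discussed; everything here is simplified):

            ```python
            import hashlib

            BLOCK = 4096
            index: dict[bytes, int] = {}   # block checksum -> physical block number
            store: list[bytes] = []        # stand-in for the backing flash

            def write_block(data: bytes) -> int:
                """Store a block, or just reference it if an identical one exists."""
                digest = hashlib.sha256(data).digest()   # the checksum computation
                if digest in index:                      # the lookup to find a match
                    return index[digest]                 # duplicate: no physical write
                store.append(data)                       # new block: one real write
                index[digest] = len(store) - 1
                return index[digest]

            # Two identical 4K blocks consume one slot in the store.
            first = write_block(b"\x00" * BLOCK)
            second = write_block(b"\x00" * BLOCK)
            print(first == second, len(store))           # True 1
            ```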

    • Re:Maybe (Score:5, Interesting)

      by mlts ( 1038732 ) on Monday January 12, 2015 @06:39PM (#48798183)

      Storage is in tiers, and each tier is different, from the stuff in registers to what is stashed on Amazon Glacier and everything in between (RAM, SSD, HDD, etc.). A revolution at one stratum will have a completely different impact than a revolution at another level.

      Take RRAM, MRAM, or some random access memory technology which is up to speed with DRAM, except cheaper and with no need for refresh. This would end up not just supplanting RAM, but also making inroads on SSD, depending on how inexpensive it is. Will this fundamentally change computing? Somewhat, although I doubt that RRAM would ever drop near the price of HDD or even SSD.

      Or, take WAN bandwidth. If the average home had terabit bandwidth, and a phone had the same, this would change things fundamentally. Cloud storage could go from stashing occasional files to being a tier-2 NAS, especially with proper client security and encryption. However, this is extremely unlikely as well.

      Perhaps a tape drive company is able to make reliable media with the bit density of hard disk platters, and is able to fit 100 TB on a cartridge for $10, with drives costing $500. Far-fetched, but if this happens, it would have a different impact on computing than memory costing 1/100 of what it does... but it would be significant.

      Improvements in the middle tiers may or may not help things. Bigger hard drives will have to deal with currently small I/O pipes, making array rebuild times longer, and forcing businesses to go past RAID 6 to ensure the drives have protection when things get degraded. Already, some arrays can take 24 hours to rebuild from one lost HDD, and if capacity increases without I/O coming with it, we might have to have RAID levels that factor in not just two levels of parity, but three or four, perhaps with another level just for bit rot checking.

      So, when someone says that there are storage breakthroughs... it really depends on the tier that the breakthrough happens at.
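
      The rebuild-time worry above is easy to put numbers on: a rebuild has to stream the whole replacement drive at whatever rate the array can spare. A sketch with assumed throughput figures:

      ```python
      def rebuild_hours(capacity_tb: float, mb_per_s: float) -> float:
          """Hours to write a full replacement drive at a given sustained rate."""
          return capacity_tb * 1e6 / mb_per_s / 3600

      # Assumed rates: a drive's full streaming rate vs. a busy array that can
      # only spare a fraction of it for the rebuild.
      for tb, rate in [(4, 150), (10, 150), (10, 40)]:
          print(f"{tb}TB at {rate} MB/s: ~{rebuild_hours(tb, rate):.0f}h")
      # 4TB at 150 MB/s: ~7h; 10TB at 150 MB/s: ~19h; 10TB at 40 MB/s: ~69h
      ```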

    • Re:Maybe (Score:5, Informative)

      by mcrbids ( 148650 ) on Monday January 12, 2015 @07:26PM (#48798525) Journal

      Get off my lawn, blah blah...

      Meanwhile, flash has revolutionized storage. We saw at least a 95% reduction in query times on our DB servers when we switched from RAID5 15K SAS drives to RAID1 flash SSDs. Floppies are history, and 32 GB thumb drives cost $5. SSDs have been catching up to their HDD brethren, now just 2-4 years behind the cost/capacity curve, and spinning rust has just about reached EOL, with shingled hard drives that make you choose between write speed and capacity [techcrunch.com] being a necessary compromise for increased capacity.

      I have no idea why you'd be so dismissive.

      • by sabri ( 584428 )

        Meanwhile, flash has revolutionized storage. We saw at least a 95% reduction in query times on our DB servers when we switched from RAID5 15K SAS drives to RAID1 flash SSDs.

        This, exactly this. HDD will work just fine for your grandparents, while everyone who appreciates performance has moved on to flash.

        The increased low-latency read speeds combined with data deduplication, compression, and instant off-site replication simply can't be matched by legacy spinning drives. And that is technology that is available today [vmem.com]. Assuming that RRAM, as mentioned in TFA, becomes a generic technology that replaces flash, you'll have all the advantages of flash without the (very few) disadvan

      • Flash is definitely the future, but it is still about eight times as expensive per GB. That means if you want to store a lot of data, spinning disks are still the most economical route by far. There's another use case for disks as well: when a drive is required to do a huge amount of writes. Flash is actually very slow to write and, of course, has limited write endurance even with wear-leveling algorithms.

        Eventually, of course, flash will reach price parity, and it's unlikely anyone will continue to manufacture

      • by poet ( 8021 )

        Yeah... although I agree that SSD is going to slaughter a spindle, part of your problem was the extreme ignorance of running RAID5 for a DB.

        • That's some pretty old-school thinking there. For modern storage arrays with decent caching algorithms the level of RAID protection often has little to do with IO response time and throughput. I've dealt with DBAs who have insisted their archive and redo logs live on RAID-1 or RAID-10 storage because that's what they were taught oh-so-long-ago. They want me to carve out and dedicate four spindles for them to do RAID-1 on a storage array with nearly a thousand spinning disks in it. I've got a storage arr
    • Comment removed based on user account deletion
  • For example, a fast and high endurance RRAM cache would simplify metadata management while reducing write latency."

    NSA guy sees "metadata management" and has a wet dream.

  • by io333 ( 574963 ) on Monday January 12, 2015 @05:59PM (#48797811)

    I want my Bubble Memory. I have been waiting 35 years for it.

    • Yeah, I remember the hype back then. I used a Hitachi/GE A4 robot with bubble memory in the 1980s, and like you, I waited for bubble memory to go widespread, but it just seemed to fade away. According to this article, it seems like bubble memory found more industrial & military applications than consumer ones because of the price and power issues:
      http://www.dvorak.org/blog/wha... [dvorak.org]

      • Well, now we have MRAM, which is kinda-similarish and which does the same job exactly. And while we don't have it in our PCs yet, it is being used in consumer-level devices in very tiny quantities. If MRAM ever gets cheap enough to replace storage, then we'll really see a shift.

    • by Tablizer ( 95088 )

      I want my Bubble Memory. I have been waiting 35 years for it.

      They misread the request and gave us a mortgage bubble instead.

    • I see your bubble memory and raise you holographic memory.

  • by Kjella ( 173770 ) on Monday January 12, 2015 @06:11PM (#48797921) Homepage

    Practically, I don't feel it's significantly different anymore. Sure, a little faster CPU, a little faster GPU, a little more RAM, bigger and cheaper SSDs, but it's mostly the same. To feel that big a difference, what you had before must have been rather crap; I still remember how adding a Gravis Ultrasound turned PC sound from shit to excellent. Or adding a new graphics card so you could have transparent, splashing water in Morrowind. Or getting a floppy drive for my C64 so I didn't have to wait ages for the tape player. I hereby predict this will be the least exciting decade for storage, except the ones that follow it.

    • God, I remember that in Morrowind. LOVED that game.

      I will go one further, though: you really wanted a 3dfx card in order to play Mechwarrior in all its glory. And then, with the C64, an Action Replay card that meant you could instantly resume your game just like a console.

      • I will go one further, though: you really wanted a 3dfx card in order to play Mechwarrior in all its glory.

        I presume you mean Mechwarrior 2, which is probably the game engine with the most rendering engines EVAR. They had a version of Ghost Bear's Legacy for pretty much every 3d accelerator on the market at the time, and came bundled with most of those, too.

        • Probably. Going back a little now in the memory banks. All I can remember is I didn't have a 3dfx card and I really really wanted one! I think I had something like a Matrox 3D card that could almost do it.

    • Comment removed based on user account deletion
      • I was the first kid in school to get a CD drive... mono speed... which I brought back from a trip to the US. Six months later everybody had 2X, because importers skipped directly to the faster version :-D

        • by dfsmith ( 960400 )
          When I got my first CD-ROM drive, it was half speed (75kB/s). Oh yeah! It was awesome—I wish I still had some of those early CD-ROM demos. (The Intel monk/book of the dead one sticks in my mind: it talked to you!)
      • Remember when burning a CD took more time than playing it (and maybe took more than 1 disc)?
        • by karnal ( 22275 )

          BUFFER UNDERRUN

          Had a fairly high-end machine at work at the time: Pentium Pro 200 + all SCSI inside, Seagate 1.2GB drives x2, and a CD burner. Oh, how buffer underruns used to piss me off.

      • Going from an IBM PC-compatible system with a 4 MHz CPU and a Hercules Monochrome graphics chipset (16 shades of amber FTW!) over to a friend's house where he had a dual-speed external CD-ROM playing Wing Commander 3 with FMV was a quantum leap in computing power (I think it was a 486?).

        Going from that IBM PC-compatible system to a Compaq Presario all-in-one with a 486SX2 66 MHz CPU, VGA graphics, onboard SB16-compatible sound, and a 19.2K modem was the next quantum leap. Using the computer to browse BBSes

  • everything will be out-of-date and not shiny by then. sometimes i don't read so well.
  • Two points to make:

    1) per-byte accessing doesn't matter for secondary storage, because your filesystem is still going to want to write things in blocks. You'll still want to have logical chunks of data to have checksums for and such.

    2) Modern SSDs already do the whole hybrid approach, mixing SLC and MLC/TLC. And I'm not talking enterprise drives, I'm talking the cheapest budget drives. Samsung, for example, calls this "TurboWrite", and they include it in their "EVO" drives, which are some of the lowest cost
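
    A toy illustration of point 1: once the filesystem keeps a checksum per logical block, a one-byte update still touches the whole block (the 4K block size and CRC32 are stand-ins for whatever a real filesystem would use):

    ```python
    import zlib

    BLOCK = 4096
    block = bytearray(BLOCK)          # one 4K filesystem block
    csum = zlib.crc32(block)          # per-block checksum the filesystem keeps

    def update_byte(offset: int, value: int) -> int:
        block[offset] = value         # the byte-addressable part is trivial...
        return zlib.crc32(block)      # ...but the checksum still covers all 4096 bytes

    csum = update_byte(1234, 0xFF)
    print(f"new block checksum: {csum:#010x}")
    ```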

    • by gl4ss ( 559668 )

      Sure it matters... that is, if they get it scaled to the promised level of replacing RAM and HD with this, for cheaper.

      Surely you wouldn't mind having just 1TB of what was essentially RAM that didn't get wiped if you lost power? But of course, that has to wait until they can produce the thing... even if that was supposed to be four years ago.

      • by Guspaz ( 556486 )

        RAM that doesn't get wiped when I lose power? Well, modern operating systems basically simulate that anyhow. Who actually turns off a laptop instead of just closing the lid to sleep it? And even when you do turn something like Windows off, these days it actually just goes to sleep or hibernates in the background. There are also diminishing returns for throwing more RAM at problems, so going from the current, say, 16GB to 1TB isn't going to change much. Loading games, for example, still would take time, beca

    • per-byte accessing doesn't matter for secondary storage, because your filesystem is still going to want to write things in blocks

      If we had fast online storage we could write in bytes, someone would surely come up with a densely packed filesystem that made use of that functionality.

  • by Headw1nd ( 829599 ) on Monday January 12, 2015 @06:58PM (#48798329)
    I feel the time is fast approaching when there is no difference between RAM and storage, and when that happens it will set the stage for a quantum leap in programming. Not only will it eliminate the ever-present need to shuttle back and forth to some slow long term storage media to retrieve this or that, but it will change the assumption that a program being run is inherently different than one that is stored. I believe the fusion of the two will bring about some revolutionary concepts.
    • by PRMan ( 959735 )
      This is already the case on most phone apps.
    • by Tailhook ( 98486 ) on Monday January 12, 2015 @09:17PM (#48799357)

      HP is marketing these ideas as "The Machine." The basic concept is using Re-RAM (ions) for all storage, fiber optics (photons) for all communications and electronics (electrons) for all processing. Ions, photons and electrons in a flattened crossbar matrix. Look up Martin Fink's recent presentations if you need a Buck Rogers fix.

      The incredibly small, simple and easy to fabricate cell structure that Re-RAM seems to offer is just too compelling to ignore. Crossbar (the company) appears to be solving the Re-RAM problem. All we're trying to do is move ions around with current. There is a long list of possible materials and designs yet to be investigated. Eventually a sweet spot will be found. When that happens non-volatile storage density and speed will leap forward an order of magnitude, and the whole storage stack from the CPU cache to the tape drive will get flatter.

      Or not. It's not like we need this to make the future exciting. Humanoid robots alone will provide more than enough excitement for the rest of my life.

  • by Anonymous Coward

    It has a failure mode other than "the controller barfed and blew away all your data".

    I'm still using magnetic media on all my important systems. I can take the performance hit with RAID 5. I have seen far too many SSDs blow themselves away due to a controller issue. I have never seen a mechanical drive fail like that; the data is usually partially readable if something really bad happens (or you can pay someone to swap the platters and read the data if need be; try doing that with flash, where the levelling

  • RRAM cache would simplify metadata management

    This rings a bell....

    NSA stores metadata of millions of web users for up to a year

    Now you know who is behind the development, and why such memory has a chance on the market.

  • Where can I buy a 100 GB RAM machine?
