Phase Change Memory vs. Storage As We Know It 130

storagedude writes "Access to data isn't keeping pace with advances in CPU and memory, creating an I/O bottleneck that threatens to make data storage irrelevant. The author sees phase change memory as a technology that could unseat storage networks. From the article: 'While years away, PCM has the potential to move data storage and storage networks from the center of data centers to the periphery. I/O would only have to be conducted at the start and end of the day, with data parked in memory while applications are running. In short, disk becomes the new tape.'"
This discussion has been archived. No new comments can be posted.


  • Re:CD-R? (Score:3, Interesting)

    by NeuralAbyss ( 12335 ) on Thursday December 31, 2009 @08:46PM (#30611408) Homepage

    Non-volatile? Like all the other "non-volatile RAM, instant-on" technologies that have gone before? MRAM, SRAM, Holographic storage... and now phase-change memory.

    I've heard this marketing bullshit before. Call me when it's not vapourware.

  • by Paradigm_Complex ( 968558 ) on Thursday December 31, 2009 @09:45PM (#30611722)

    When you can pick up 4GB of RAM for a song, why not load the whole OS into memory?

    For what it's worth, you can do this with most Linux distros if you know what you're doing. Linux is pretty well suited to running from a ramdisk - you can set it up to copy the system files into RAM on boot and continue from there entirely in RAM. I've been doing this on my Debian (stable) boxes since I realized I couldn't afford a decent SSD and wanted a super-responsive system. Firefox (well, Iceweasel) starts cold in about two seconds on an Eee PC set up this way, and it starts cold virtually instantly on my C2D box. In fact, everything seems instant on my C2D box. It's really snazzy.

    As long as you don't suffer a system crash, you can unload it back to disk when you're done.

    Depending on what you're doing, even that may not be an issue. If you're doing massive database work, then yes. But if your disk I/O isn't that heavy, you can set up a daemon to automatically mirror changes made in the ramdisk to the "hard" copy. From your POV everything is instant, and a crash only loses however much data the hard-drive copy was lagging behind. Personally, what little I do need saved is simply text files - my notes in class, my homework, etc. - so I can just write those to a partition on the hard drive that isn't loaded into RAM. It doesn't suffer at all from the hard-drive I/O - I can't really type faster than a hard drive can write.

    tl;dr: It's perfectly feasible for (some) people to do as you've described, and it works quite nicely. It's not really necessary to wait for this perpetually will-be-released-in-5-to-10-years technology, it's available today.
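
    In case it helps, here's a minimal sketch of that mirror-daemon idea in Python (the paths and interval are made up - a real setup would more likely use rsync or inotify): it walks the ramdisk on a timer and copies anything newer than the on-disk copy back to the hard drive.

        #!/usr/bin/env python3
        """Toy mirror daemon: periodically copy files changed on a ramdisk
        back to a directory on the hard drive. Paths are hypothetical."""
        import os
        import shutil
        import time

        RAMDISK = "/mnt/ramdisk"       # tmpfs holding the working copy
        BACKING = "/var/ramdisk-back"  # persistent copy on the hard drive
        INTERVAL = 30                  # seconds between sync passes

        def sync_once():
            for root, _dirs, files in os.walk(RAMDISK):
                dest_dir = os.path.join(BACKING, os.path.relpath(root, RAMDISK))
                os.makedirs(dest_dir, exist_ok=True)
                for name in files:
                    src = os.path.join(root, name)
                    dst = os.path.join(dest_dir, name)
                    # Copy only if the ramdisk copy is newer than the disk copy.
                    if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
                        shutil.copy2(src, dst)

        if __name__ == "__main__":
            while True:
                sync_once()
                time.sleep(INTERVAL)

    Worst case after a crash you lose whatever changed since the last pass, which is exactly the "lagging copy" trade-off described above.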

  • by mysidia ( 191772 ) on Thursday December 31, 2009 @10:01PM (#30611788)

    Power failure happens.

    That's what journaling is for.

    Load the system image into RAM at boot from the "image source".

    Journal changes to user datafiles.

    When a certain number of transactions have occurred, commit them back to the main disk.

    If the system crashes... load the "boot volume" back up, replay the journal.

    No need to journal changes to the "system files" file system (which isn't supposed to change anyway). If a system update is to be applied, the signed update package gets loaded into the journalling area and rolled into the main image at system boot.

    Another possibility would be to borrow a technique from RAID controller manufacturers and give the RAM a battery backup in the form of a NiMH battery pack. If power is lost, then on the next boot the RAM image can be restored to the state it was in at the unexpected shutdown/crash.

    Also avoid clearing the RAM region used for file storage at boot.
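
    A rough sketch of that journal-and-commit loop in Python (the paths and the commit threshold are purely illustrative): each write is appended to the journal with an fsync, the journal gets folded back into the main copy after enough transactions, and replay() is the crash-recovery path.

        #!/usr/bin/env python3
        """Toy write-ahead journal for user data files. All paths and the
        commit threshold are illustrative, not a real on-disk layout."""
        import json
        import os

        JOURNAL = "/var/data-journal.log"  # must survive power loss
        DATA_DIR = "/srv/data"             # the "main disk" copy
        COMMIT_EVERY = 100                 # transactions per commit

        def journal_write(path, text):
            """Record a change durably before acknowledging it."""
            with open(JOURNAL, "a") as j:
                j.write(json.dumps({"path": path, "text": text}) + "\n")
                j.flush()
                os.fsync(j.fileno())
            with open(JOURNAL) as j:
                if sum(1 for _ in j) >= COMMIT_EVERY:
                    commit()

        def commit():
            """Fold journalled changes into the main copy, then truncate the journal."""
            replay()
            open(JOURNAL, "w").close()

        def replay():
            """Crash recovery: re-apply every journalled change to the main copy."""
            if not os.path.exists(JOURNAL):
                return
            with open(JOURNAL) as j:
                for line in j:
                    entry = json.loads(line)
                    dest = os.path.join(DATA_DIR, entry["path"].lstrip("/"))
                    os.makedirs(os.path.dirname(dest), exist_ok=True)
                    with open(dest, "w") as f:
                        f.write(entry["text"])

    At boot you run replay() before loading the system image into RAM, which is the "replay the journal" step above.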

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Thursday December 31, 2009 @11:28PM (#30612138) Homepage Journal

    It probably got tagged vaporware because where the fuck is my system with MRAM for main memory? MRAM is a shipping product, too, but it was "supposed" to be in consumer devices before now, as main System RAM.

  • by HockeyPuck ( 141947 ) on Thursday December 31, 2009 @11:45PM (#30612196)

    Maybe these guys ought to ask someone who was around in the days BEFORE there were SANs. Managing storage back then absolutely sucked. Every server either had its own internal storage with its own RAID controller OR had to be within 9m (the max distance of LVD SCSI) of a storage array.

    There was no standardization; every OS had its own volume managers, firmware updates, patches, etc. Plus, compare the number of management points when using a SAN vs. internal storage: an enterprise would have thousands of servers connecting through a handful of SAN switches to a handful of arrays. Server admins have more important things to do than replace dead hard drives.

    Want to replace a hot spare on a server? What a pain, since you had to understand the volume manager or the particular RAID controller in that specific server. I personally like how my arrays 'call home' and an HDS/EMC engineer shows up with a new drive, replaces the failed one and walks out the door, without me having to do anything about it.

    Two words: low utilization. You'd buy an HP server with two 36GB drives and the OS + app + data would only require 10GB of space, so you'd have this landlocked storage all over the place.

    Moving the storage to the edge? Even if you replace spinning platters with solid state, putting all the data on the edge is a 'bad thing.'

    "But Google does it!"

    Maybe so, but then again they don't run their enterprise on Oracle, Exchange, SAP, CIFS/NFS-based home directories, etc. like almost all other enterprises do.

  • Re:Not really (Score:5, Interesting)

    by StarsAreAlsoFire ( 738726 ) on Friday January 01, 2010 @12:19AM (#30612338)
    "Faster, but there are far more bottlenecks than just disk I/O."

    Generally, I disagree with the statement as written. I would say that there are other LIMITS. Not bottlenecks. Although for something like video encoding you could easily turn things around and say 'Look! Your hard-drive is bottlenecked by your encoder!'. Yeah yeah. So I guess I agree more than I want to admit.

    Almost by definition, there's always going to be a bottleneck somewhere in your system: the chances of ALL of your PC's components working at *exactly* 100% of their capacity are pretty close to zero. And that's for a particular task; randomize the task and it all goes to hell. So the question we are discussing is really 'If I remove bottleneck n, how many seconds does it shave off the time to run task x?', averaged over a set of 'common' tasks. But if we made our external drives as fast as DRAM (or whatever, as above), there would be no other single bottleneck left in the system whose removal would give you even a handful of percentage points of improvement. Except maybe un-installing Outlook. Or banning Subversion repositories from your enterprise environment -_-.

    For most components in a PC you have to square the performance to see a significant difference, all else held equal. Tasks that lag noticeably and that are not dramatically improved by a simple doubling of disk performance (3.5 ms seek, 150 MB/s sustained transfer) are pretty rare. Video encoding, for instance - certainly getting more common, but with a good video card and a cheap hard drive you're getting close to, if not exceeding, the drive's maximum write speed while doing a CUDA rip.

    I think that if Microsoft had released a little monitor that displayed the cumulative time spent blocked on [Disk|CPU|Graphics|Memory|Network] (a column in Task Manager, for instance. Hint, hint) back in Windows 95, spinning disks would be considered quaint anachronisms by now. Look at how much gamers spend on video cards, for almost no benefit.

    Minute 2 of the Samsung SSD advert: http://www.youtube.com/watch?v=96dWOEa4Djs is pretty interesting, if you haven't seen it yet.
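
    To make the "remove bottleneck n, averaged over common tasks" question concrete, here's a toy calculation in Python (the blocked-time numbers are invented, not measurements): for each component it reports how many seconds the average task would save if time spent blocked on that component dropped to zero.

        # Toy version of the "which bottleneck is worth removing?" question.
        # Seconds each task spends blocked per component - invented numbers.
        tasks = {
            "boot":         {"disk": 25.0, "cpu": 4.0,  "net": 0.5},
            "app_launch":   {"disk": 3.0,  "cpu": 1.0,  "net": 0.0},
            "video_encode": {"disk": 6.0,  "cpu": 90.0, "net": 0.0},
            "web_browse":   {"disk": 1.0,  "cpu": 2.0,  "net": 5.0},
        }

        components = {c for blocked in tasks.values() for c in blocked}
        for comp in sorted(components):
            saved = sum(blocked.get(comp, 0.0) for blocked in tasks.values()) / len(tasks)
            print(f"removing the {comp} bottleneck saves {saved:.1f}s per task on average")

    That per-component number is exactly what the hypothetical Task Manager column would have made visible.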
  • The SAN argument (Score:5, Interesting)

    by symbolset ( 646467 ) * on Friday January 01, 2010 @03:04AM (#30612810) Journal

    The SAN argument is that your storage is so precious it must not be stranded. If you're paying $50K/TB with drives, controllers, FC switches, service, software, support, installation and all that jazz, then that's absolutely true. If you're doing something like OpenFiler [openfiler.com] clusters on BackBlaze 90TB 5U Storage Pods [backblaze.com] for $90/TB and 720 TB/rack, you have a different point of view. As for somebody showing up to replace a drive, I think I could ask Jimmy to put his jacket on and shuffle down to the server room to swap out a few failed drives every couple of months - that's what hot and cold spares are for, and he's just geeking on MyFace anyway. Low utilization? Use as much or as little as you like - at $90/TB we can afford to buy more. We can afford to overbuy our storage. We can afford to mirror our storage and back it up too. In practice the storage costs less than the meeting where we talk about where to put it, or the guy who fills it. If you want to pay for a first-tier OEM, that's available, but it costs 10x as much, because first-tier OEMs also sell SANs.

    OpenFiler does CIFS/NFS and offers iSCSI shared storage for Oracle, Exchange and SAP. If you need support, they offer it. [openfiler.com] OpenFiler is nowhere near the only option for this. If you want to pay license fees you could also just run clustered Windows Server. There are BSD options and others as well. Solaris and OpenSolaris are well spoken of, and ZFS is popular, though there are some tradeoffs there. Nexenta [wikipedia.org] is gaining ground. There's also Lustre [wikipedia.org], which HP uses in its large-capacity filers. Since you're building your own solution you can use as much RAM for cache as you like - modern dual-socket servers go up to 192GB per node, but 48GB is the sweet spot.

    Now that we've moved redundancy into the software and performance into the local storage architecture, moving storage to the edge is exactly what we want to do: put it where you need it and if you need a copy for data mining then mirror it to the mining storage cluster. We still need some good dedicated fiber links to do multisite synchronous replication for HA, but that's true of SAN solutions also. We're about 20 years past when we should have ubiquitous metro fiber connections, and that's annoying. Right now without the metro fiber the best solution is to use application redundancy: putting a database cluster member server in the DR site with local shared storage.

    Oh, and if you need a lot of IOPS, then you choose the right motherboard and splurge on the 6TB of PCIe-attached solid state storage [ocztechnology.com] per BackBlaze pod, for over a million IOPS over 10GigE. If you need high IOPS and big storage you can use adaptor brackets [ocztechnology.com] and 2.5" SSDs, or mix in an array of the Colossus [newegg.com], though you're reaching a $6K/TB price point there and cutting density in half - but then an SSD-performance SAN carries a similar cost multiple and some serious capacity problems. If you go with the SSD drives you'd want to cut the SAS expanders down to five drives per 4x SAS link, because those bad boys can almost saturate a 3Gbps link, while normal consumer SATA drives you can multiply 3:1.

    If you're more compute focused then a BackBlaze node with fewer drives and a dual-quad motherboard with 4 GPGPUs is a better answer. At the high end you're paying almost as much for the network switches as you are for the media. If you're into the multipath SAS thing then buy 2x the controllers and buy the right backplanes for that - but
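
    Back-of-the-envelope version of the cost argument, using the per-TB figures above (the target capacity is an arbitrary example, and it ignores switches, power and people):

        # Rough cost comparison using the $/TB figures from this comment.
        SAN_PER_TB = 50_000       # fully burdened SAN cost, $/TB
        POD_PER_TB = 90           # BackBlaze-style storage pod, $/TB
        POD_TB_PER_RACK = 720     # TB of pods per rack

        capacity_tb = 500         # arbitrary example capacity
        san_cost = capacity_tb * SAN_PER_TB
        pod_cost = capacity_tb * POD_PER_TB
        racks = -(-capacity_tb // POD_TB_PER_RACK)  # ceiling division

        print(f"SAN:  ${san_cost:,} for {capacity_tb} TB")
        print(f"Pods: ${pod_cost:,} for {capacity_tb} TB in {racks} rack(s)")
        print(f"That's roughly {san_cost // pod_cost}x, before you even mirror the pods.")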

  • by Anonymous Coward on Friday January 01, 2010 @05:35AM (#30613070)

    I don't think the author knows much about the purpose of a SAN. A SAN is not just a disk array giving you faster access to disks. Local storage that is faster does not help you with concurrent access (clusters), rollback capability (snapshots, mirror copies, point-in-time server recovery), site recovery (off-site mirrors) or the substantial capacity savings from technologies like deduplication.

    As for speed, my SAN gives me write performance in the range of 600 MB/sec per client. I access my storage over a 10Gbit Ethernet backbone. Certainly suboptimal, but my blades have a pair of NICs and no disks. It's cheap, very fast, and I have 3-4 rollback points for my ESX cluster. That's around 200 VMs in two sites, active/active and cross-recoverable.

    The SAN is not going away.

    (In case any of you are designing something similar and want the parts list: a Cisco Nexus 5020 10GbE backbone, a BlueArc Mercury 100 cluster with disks slung off an HDS USP-VM, 64GB of cache on each path and a few hundred TB of disk. Servers are HP BL495 G6s with Chelsio cards; the chassis has BNT (HP) 10GbE switches. I haven't even started with jumbo frames yet, so I can do better, but this is pretty good for now. All up it was just over a million AUD.)

    What's this? It's a faster storage device. That's a fairly small part of a SAN.

  • Re:CD-R? (Score:3, Interesting)

    by TheRaven64 ( 641858 ) on Friday January 01, 2010 @09:01AM (#30613552) Journal

    It isn't faster than DRAM, it's faster than Flash and hard disks. It is also much more expensive per MB than either: about 4 times as expensive as DRAM at the moment, and very few people are thinking of replacing their persistent storage with battery-backed DRAM.

    You seem to be confusing PCRAM with SRAM. Static RAM uses six transistors per cell, while dynamic RAM uses one transistor and one capacitor. That makes SRAM much faster, because you don't have to wait for refresh cycles, but it's a lot less dense and so much more expensive.

    Phase change RAM is much more complicated to make, but can be quite dense. The latest versions use four states so you can store two bits per cell, rather than one. Eventually you may be able to store an entire byte in a cell, which would push the density well above DRAM, but the physical phase change is likely to be slower than an electronic switch for a long time, so I expect to see phase change RAM as part of a three-tier cache hierarchy (under DRAM and SRAM), at least initially.
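
    The cell-density arithmetic is just powers of two - a quick sketch, not tied to any particular PCM part:

        from math import log2

        # Bits per cell for a given number of distinguishable resistance states.
        for states in (2, 4, 16, 256):
            print(f"{states:3d} states -> {int(log2(states))} bits per cell")

        # States you'd need to hold an entire byte in one cell, as speculated above.
        print(f"one byte per cell needs {2 ** 8} distinguishable states")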
