Phase Change Memory vs. Storage As We Know It (130 comments)

storagedude writes "Access to data isn't keeping pace with advances in CPU and memory, creating an I/O bottleneck that threatens to make data storage irrelevant. The author sees phase change memory as a technology that could unseat storage networks. From the article: 'While years away, PCM has the potential to move data storage and storage networks from the center of data centers to the periphery. I/O would only have to be conducted at the start and end of the day, with data parked in memory while applications are running. In short, disk becomes the new tape.'"
  • ... the death of x tech here, it will eventually die once the groundwork has been laid to migrate to a better system.

    • Re: (Score:2, Funny)

      Long-term data storage is dead! All hail long-term data storage!
      • Advances in storage not keeping up with advances in CPU/RAM doesn't make it irrelevant. It puts it squarely on the critical path.

      • Re: (Score:3, Insightful)

        by TheLink ( 130905 )
        Yes, seriously.

        Despite what the article writer thinks, if PCM is that great, the storage manufacturers will just create storage devices that use PCM technology. The other option is to go out of business ;).

        I see lots of "normal" people using external storage drives. These people are far less likely to open up their computer and swap chips on their motherboard.

        Transferring 1TB from my house to my office by hand is faster and more reliable than using my crappy ISP. If the writer thinks storage IO speeds are b
  • When you can pick up 4GB of RAM memory for a song, why not load the whole OS into memory? As long as you don't suffer a system crash, you can unload it back to disk when you're done.

    • by tepples ( 727027 )

      When you can pick up 4GB of RAM memory for a song

      A song costs 99 cents on iTunes. 4 GB of DDR/DDR2/DDR3 RAM costs far more, and it might not even fit in some older or mobile motherboards.

      why not load the whole OS into memory?

      Puppy Linux does, and Windows Vista almost does (see SuperFetch).

      As long as you don't suffer a system crash

      Power failure happens.

      • Re: (Score:3, Interesting)

        by mysidia ( 191772 )

        Power failure happens.

        That's what journaling is for.

        Load the system image into RAM at boot from the "image source".

        Journal changes to user datafiles.

        When a certain number of transactions have occurred, commit them back to the main disk.

        If the system crashes... load the "boot volume" back up, replay the journal.

        No need to journal changes to the "system files" file system (that isn't supposed to change anyways). If a system update is to be applied, the signed update package gets loaded int
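
        (For the curious, here is a minimal C sketch of the journaling scheme described above. It is not the poster's actual design: the record layout, the file name, the 64-transaction commit threshold, and the stubbed-out commit step are all invented for illustration.)

        ```c
        /* Sketch of the parent's idea: user-data changes are appended to an
         * on-disk journal and replayed against the RAM image after a crash.
         * Record format, paths and thresholds are invented for illustration. */
        #include <fcntl.h>
        #include <stdint.h>
        #include <string.h>
        #include <unistd.h>

        struct rec {                    /* one journaled change */
            uint64_t offset;            /* where in the RAM image the write landed */
            uint32_t len;               /* number of valid bytes in data[] */
            unsigned char data[4096];
        };

        static int journal_fd = -1;
        static unsigned pending;        /* records since the last commit */

        int journal_open(const char *path)
        {
            journal_fd = open(path, O_RDWR | O_CREAT | O_APPEND, 0600);
            return journal_fd;
        }

        /* Append a change to the journal; fsync so it survives power loss. */
        int journal_write(const struct rec *r)
        {
            if (write(journal_fd, r, sizeof *r) != (ssize_t)sizeof *r)
                return -1;
            if (fsync(journal_fd) != 0)
                return -1;
            if (++pending >= 64) {      /* "a certain number" of transactions */
                /* a real implementation would flush the RAM image to the main
                 * disk here and then truncate the journal; omitted because it
                 * depends on how the image is stored */
                pending = 0;
            }
            return 0;
        }

        /* After a crash: re-apply every journaled record to the RAM image. */
        void journal_replay(unsigned char *ram_image)
        {
            struct rec r;
            lseek(journal_fd, 0, SEEK_SET);
            while (read(journal_fd, &r, sizeof r) == (ssize_t)sizeof r)
                memcpy(ram_image + r.offset, r.data, r.len);
        }
        ```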

        • Re: (Score:3, Informative)

          by davecb ( 6526 ) *

          Interestingly, this closely resembles the discussion of the system image used in Xerox PARC Smalltalk....

          --dave

        • by fyngyrz ( 762201 )

          That's what journaling is for.

          No, that's what a UPS is for. :)

          • by tepples ( 727027 )
            Then why doesn't a UPS come bundled with name-brand desktop PCs in the way a keyboard, mouse, monitor, and sometimes even a printer do? And why don't sellers of used laptops provide any warranty for the UPS built into the laptop?
            • by hitmark ( 640295 )

              Who buys desktops these days? Most people seem to be getting laptops, even though they run them off mains most of the time. Best thing: they have a built-in UPS ;)

                • Who buys desktops these days?

                People who want a full-size keyboard, video, and mouse, and who don't want to pay for a duplicate mini-keyboard, mini-monitor, and mini-mouse built into a laptop. They either A. drive to work and thus never have enough time as a passenger on mass transit to make using the computer away from the docking station worth it, or B. have a smartphone, a handheld gaming device, an e-book reader, or even a paperback book to pass the time.

                Or people who use video games, certain kinds of CAD software, or other softw

              • I do plan to buy another desktop soon. And a tablet PC too.

                One doesn't negate the other, unless all you do is read email and Facebook.

          • by mysidia ( 191772 )

            Even UPSes have fuses that can blow / breakers that can trip. A UPS can overload.

            Someone can accidentally hit the EPO, or power-off switch on the UPS.

            The UPS battery may be too low to permit a graceful shutdown before power expires.

            The PC power supply can fail.

            Someone could trip over the power cord running to the PC.

            Even with a solid UPS, it doesn't require much imagination at all to recognize how likely a power failure or 'hard down' is to occur eventually.

            Losing all your data/changes in such

        • by tepples ( 727027 )

          Journal changes to user datafiles.

          When a certain number of transactions have occurred, commit them back to the main disk.

          What is "a certain number" that won't require the disk to be spun up all the time committing transactions?

          the RAM image will be restored to the same state it was in as of the unexpected shutdown/crash.

          It will be restored to the same state: a crashed state.

          • by mysidia ( 191772 )

            What is "a certain number" that won't require the disk to be spun up all the time committing transactions?

            Why spun up? Use a write-optimized SSD for the journal, and CompactFlash for the rarely-changing system boot image.

            "A certain number", the exact choice is a design/engineering concern, but probably fairly small values should be used, to avoid data loss.

            It will be restored to the same state: a crashed state.

            Well, of course, the filesystem would be in the same state as at the time of the crash

            • by tepples ( 727027 )

              Why spun up?

              Because you still need to spin up the drive to read in data files that the user is working on if either A. the user hasn't opened them since the computer last came out of sleep (compulsory miss) or B. the files collectively are too big to fit in RAM (capacity miss).

              use a write-optimized SSD for the journal

              That's actually a good idea because a journal can be stored as a ring buffer, and a ring buffer is the theoretical best case for SSD wear and write speed. But one problem with journaling writes to data is that the user expects shutdown to be fast

              • by mysidia ( 191772 )

                the files collectively are too big to fit in RAM (capacity miss).

                I think the premise is that RAM becomes cheap, so you can have enough of it to hold the entire filesystem. E.g. 64 GB or 128 GB of RAM easily meets the needs of most users, with a couple of GB to spare for the kernel/app working memory partition.

                During the boot process, 2-4 GB (or the user's choice) is reserved for OS and application working memory, and the rest is partitioned as the RAMDISK. Probably none of the filesystems currently in exis

                • by tepples ( 727027 )

                  Or even while the system is running.. presumably there should be an option like "Safely remove hard drive"

                  Provided the system is even running. If the operating system will not boot, the "Safely remove hard drive" option would have to be in the BIOS.

                  I'm thinking more along the lines of UNIX systems such as Linux, BSD, MacOS, that don't get updates every Tuesday, though.

                  You're right: Ubuntu gets updates more often than Windows XP does, but granted, fewer of them require a full reboot.

                  Or the kernel started doing stray writes to the RAMDISK region, e.g. buffer overflow. Or a buggy driver hit the wrong memory area with a DMA.

                  Bingo. But with a file system on a separate device, file system corruption doesn't seem quite as likely as it would be with a RAM disk.

                  A hardware IOMMU should be used to split the memory regions

                  Good luck getting Microsoft operating systems to support any MMU functionality beyond what the operating systems currentl

      • >> A song costs 99 cents on iTunes. 4 GB of DDR/DDR2/DDR3 RAM costs far more, and it might not even fit in some older or mobile motherboards.

        4GB worth of music on iTunes is going to cost a hell of a lot more than 4GB of system memory. So memory these days can be had for about 40-80 songs....

      • by nyet ( 19118 )
        Superfetch? You're kidding, right? Real VMs were doing this long before MS figured it out. Unused RAM has always been used as disk cache in proper VMs. Only MS was stupid enough to need an *executable* (smartdrv.exe) to accomplish this most fundamental of tasks.
        • Superfetch? You're kidding, right? Real VMs were doing this long before MS figured it out.

          NT has always had a disk cache. SuperFetch of Windows 6.x just extends it to files that haven't been opened yet, as in Lord Byron II's suggestion of loading more of the operating system into RAM at startup.

        • by drsmithy ( 35869 )

          Superfetch? You're kidding, right? Real VMs were doing this long before MS figured it out. Unused RAM has always been used as disk cache in proper VMs. Only MS was stupid enough to need an *executable* (smartdrv.exe) to accomplish this most fundamental of tasks.

          Are you a traveller from the past? Smartdrv hasn't been relevant to most people for nearly *twenty years*.

    • Re: (Score:2, Informative)

      by mangobrain ( 877223 )

      You may be able to "load the whole OS into memory", but that's missing the point, which is the data people work with once the OS is up and running. If that 4GB was enough to store all the data for the entirety of any conceivable session, on servers as well as desktops, why would anyone ever buy a hard drive larger than that? Hard drives would probably already be obsolete. I bet you own at least one hard drive larger than 4GB - and as the type of person who comments on slashdot, I bet more than 4GB of tha

      • by ls671 ( 1122017 ) *

        > load the whole OS into memory

        Replace that with "load the whole OS into memory, plus the most frequently used disk content".

        Linux and most OSes already do this for you. Look at the free output below on that 8 Gigs machine. Programs only use 969 Meg (.96 GB) of RAM. Linux has swapped 273 Meg of program memory to disk because it is seldom used (memory leaks ?).

        Linux uses 6.9 Gig for buffers/cache which is more than the whole OS loaded into memory. It caches disk content into RAM, so in the end, there is only 45 Meg not

        • by mindstrm ( 20013 )

          a) The bottleneck is pricing: I don't see 64 GB memory modules available cheaply, or supported by any motherboards yet.
          b) The initial load of data (whether prefetch or whatever) that I want to work with is still constrained by whatever it's stored on.

          I'd love to have a few terabytes of RAM. That would work for me... and that's where we're heading. How the OS manages the various levels of RAM (as cache, storage, or whatever) is up for debate; I'm sure we'll see some interesting mechanisms.
          (like how ZFS can have

    • by Paradigm_Complex ( 968558 ) on Thursday December 31, 2009 @09:45PM (#30611722)

      When you can pick up 4GB of RAM memory for a song, why not load the whole OS into memory?

      For what it's worth, you can do this with most Linux distros if you know what you're doing. Linux is pretty well designed to act from a ramdisk - you can set it up to copy the system files into RAM on boot and continue from there all in RAM. I've been doing this on my Debian (stable) boxes when I realized I couldn't afford a decent SSD and wanted a super-responsive system. Firefox (well, Iceweasel) starts cold in about two seconds on an eeepc when set up this way, and it starts cold virtually instantly on my C2D box. In fact, everything seems instant on my C2D box. It's really snazzy.

      As long as you don't suffer a system crash, you can unload it back to disk when you're done.

      Depending on what you're doing, even that may not be an issue. If you're doing massive database stuff, then yes. However, if your disk I/O isn't all heavy you can set a daemon up to automatically mirror changes made in the RAMdisk to the "hard" copy. From your POV everything is instant, but any crash will only result in the loss of data from however far behind the harddrive copy is lagging. Personally, what little I do need saved is simply text files - my notes in class, my homework, etc, and so I can just write to a partition on the harddrive that isn't loaded to RAM. It doesn't suffer at all from the harddrive I/O - I can't really type faster than a harddrive can write.

      tl;dr: It's perfectly feasible for (some) people to do as you've described, and it works quite nicely. It's not really necessary to wait for this perpetually will-be-released-in-5-to-10-years technology, it's available today.
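
      (A rough sketch of the "mirror daemon" idea in C, using Linux inotify to notice files changed on the RAM disk. The watched path and the copy step are placeholders; a real daemon would also handle subdirectories, renames, and batching.)

      ```c
      /* Watch a directory on the RAM disk and react when files are written,
       * so they can be mirrored to the on-disk copy. Paths are placeholders. */
      #include <stdio.h>
      #include <sys/inotify.h>
      #include <unistd.h>

      int main(void)
      {
          int in = inotify_init1(0);
          if (in < 0)
              return 1;

          /* Files written-and-closed or moved into the watched directory. */
          inotify_add_watch(in, "/mnt/ramdisk/work", IN_CLOSE_WRITE | IN_MOVED_TO);

          char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
          for (;;) {
              ssize_t n = read(in, buf, sizeof buf);
              for (char *p = buf; n > 0 && p < buf + n; ) {
                  struct inotify_event *ev = (struct inotify_event *)p;
                  if (ev->len > 0) {
                      printf("changed: %s\n", ev->name);
                      /* hypothetical: copy ev->name from the RAM disk to the
                       * on-disk mirror here (e.g. via rsync or a file copy) */
                  }
                  p += sizeof(struct inotify_event) + ev->len;
              }
          }
      }
      ```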

      • Re: (Score:3, Insightful)

        by puto ( 533470 )
        You were able to buy a 10 meg RAM drive in the late 1980s and do this, so this is nothing new. You just are.
      • by Urkki ( 668283 )

        Depending on what you're doing, even that may not be an issue. If you're doing massive database stuff, then yes. However, if your disk I/O isn't all heavy you can set a daemon up to automatically mirror changes made in the RAMdisk to the "hard" copy. From your POV everything is instant, but any crash will only result in the loss of data from however far behind the harddrive copy is lagging. Personally, what little I do need saved is simply text files - my notes in class, my homework, etc, and so I can just write to a partition on the harddrive that isn't loaded to RAM. It doesn't suffer at all from the harddrive I/O - I can't really type faster than a harddrive can write.

        tl;dr: It's perfectly feasible for (some) people to do as you've described, and it works quite nicely. It's not really necessary to wait for this perpetually will-be-released-in-5-to-10-years technology, it's available today.

        Not just "available", but that's pretty much how all current operating systems work today. Software operates on a copy in memory (wether reading or writing), and OS writes back any changes at it's leisure. It's just a matter of available RAM vs. required RAM, and only if you run out of RAM, only then the disk becomes a bottleneck. I don't think data read from disk to memory is ever discarded even if unused for a long time, unless you run out of RAM (why would it be, that's just unnecessary extra work for OS

    • by drsmithy ( 35869 )

      When you can pick up 4GB of RAM memory for a song, why not load the whole OS into memory?

      On any remotely modern OS, the whole OS is *already* "loaded into memory" if you have enough of it. It's called a disk cache.

  • by Areyoukiddingme ( 1289470 ) on Thursday December 31, 2009 @09:04PM (#30611522)

    How soon we forget. The article is speculative, sure, but the hardware is not only real, it's in mass production by Samsung: http://hardware.slashdot.org/article.pl?sid=09/09/28/1959212 [slashdot.org]

    Just looking at the numbers, the article is a bit overblown. Phase change memory will first be a good replacement for flash memory, not DRAM. It's still considerably slower than DRAM. But it eliminates the erasable-by-page-only problem that has plagued SSDs, especially Intel SSDs, and the article does mention SSDs as a bright spot in the storage landscape. PCM should make serious inroads into SSDs very quickly because manufacturers can eliminate a whole blob of difficult code. With Samsung's manufacturing muscle behind it, prices per megabyte should be reasonable right out of the gate and as Samsung gets better at it, prices should plummet even faster than flash memory did.

    The I/O path between storage and the CPU will get an upgrade, and it could very well be driven by PCM. Flash memory SSDs are already very fast and PCM is claimed to be 4X faster. That saturates the existing I/O paths (barring 16-lane PCIe cards sitting next to the video card in an identical slot). Magnetic hard drives haven't come anywhere close to saturation. Development concentrated for a decade (or two?) on increasing capacity, for which we are thankful, but the successes in capacity development have outrun improvements in I/O speed. In turn, that meant that video cards were the driver behind I/O development, not storage. Now that there's a storage tech in the same throughput class as a video card, I expect there to be a great deal of I/O standards development to deal with it.

    But hard drives == tape? Not for a long long time. The development concentration on increasing capacity will pay off for many years to come. PCM arrays with capacities matching modern hard drives (2 TB in a 3.5" half height case. Unreal!) are undoubtedly a long ways off.

    Hopefully there are no lurking patent trolls under the PCM bridge...

    • Re: (Score:2, Insightful)

      by maxume ( 22995 )

      The only thing plaguing Intel SSDs is price. And I don't think that particular aspect makes Intel real sad.

      • Ultimately it does. If they're making say $20 per unit on something, they'd be better off if that thing was selling for $100 than $1000. Sure in reality the margins usually shrink somewhat as the price goes down, but generally so does the cost of production. It's unlikely indeed that Intel's making more money like this than they would be if they could produce the drives for less money.
        • In reality, they would make buttloads more money at $2 per unit on a $100 product than at $20 per unit on a $1000 product, because far more than ten times as many people would buy at $100 as at $1000. With the quality and usefulness of the Intel product, you could easily put the purchase rate at 1/10th the price anywhere from 100 to 1000 times higher.

          To make the most money in this situation, you basically want the lowest price that still gives you a profit and you can still keep up with demand. That's a little simplistic, but that's

    • Re: (Score:3, Interesting)

      by drinkypoo ( 153816 )

      It probably got tagged vaporware because where the fuck is my system with MRAM for main memory? MRAM is a shipping product, too, but it was "supposed" to be in consumer devices before now, as main System RAM.

    • Re: (Score:3, Insightful)

      by Ropati ( 111673 )

      Kevin has this right, what an obtuse article.

      Henry Newman is talking about PC storage, not enterprise storage. He discusses all disk IO performance in MB/sec, meaning sequential, when in reality very little disk-level IO in the enterprise is sequential. The numbers here are flawed, as is the characterization of storage.

      Storage is where we keep our data. Keeping data is a central requirement of information technology. It will never be a peripheral feature.

      Presently the real IO bottleneck is the spinn

  • > disk becomes the new tape

    Well they got this right even if it was not to be accomplished with the mentioned technology.

    I think that in the medium/long time range this will undoubtedly come true.

    I mean, would any /. reader bet on the chances of hard drives ever coming on par with today's memory access speeds, even with zillions of years of technological advancement?

         

  • by fahrbot-bot ( 874524 ) on Thursday December 31, 2009 @09:18PM (#30611600)
    From TFA:

    There is no method to provide hints about file usage; for example, you might want to have a hint that says the file will be read sequentially, or a hint that a file might be over written. There are lots of possible hints, but there is no standard way of providing file hints...

    Ya, we had that back in the stone age, and Multics would have been the poster child for this type of thinking, but it was a *bitch* and made portability problematic. I think VMS has some of this type of capability with their Files 11 [wikipedia.org] support - any VMS people care to comment? Unix (and most current OSes) sees everything as a stream of bytes, in most cases, and this is much simpler.

    An OS cannot be everything to all people all the time...

    • fadvise (Score:2, Informative)

      by Anonymous Coward

      fadvise and FADV_SEQUENTIAL [die.net] exist in POSIX. Not sure how well different OSes like Linux or BSD use the hints -- I know that some of it has been broken because of bad past implementations.

    • by Guy Harris ( 3803 ) <guy@alum.mit.edu> on Thursday December 31, 2009 @09:55PM (#30611760)

      From TFA:

      There is no method to provide hints about file usage; for example, you might want to have a hint that says the file will be read sequentially, or a hint that a file might be over written. There are lots of possible hints, but there is no standard way of providing file hints...

      Ya, we had that back in the stone-age and Multics would have been poster-child for this type of thinking, but it was a *bitch* and made portability problematic.

      No, Multics would have been the poster child for "there's no I/O, there's just paging" - file system I/O was done in Multics by mapping the file into your address space and referring to it as if it were memory. ("Multi-segment files" were just directories with a bunch of real files in them, each no larger than the maximum size of a segment. I/O was done through read/write calls, but those were implemented by mapping the file, or the segments of a multi-segment file, into the address space and copying to/from the mapped segment.)

      I think VMS has some of this type of capability with their Files 11 [wikipedia.org] support - any VMS people care to comment. Unix (and most current OS) sees everything as a stream of bytes, in most cases, and this is much simpler.

      "Seeing everything as a stream of bytes" is orthogonal to "a hint that the file will be read sequentially". See, for example, fadvise() in Linux [die.net], or some of the FILE_FLAG_ options in CreateFile() in Windows [microsoft.com] (Windows being another OS that shows a file as a seekable stream of bytes).

    • by mysidia ( 191772 )

      We have it today. Tfa's on crack.

      It's called madvise [die.net]

      It allows an application to tell the kernel how it expects to use some mapped or shared memory areas, so that the kernel can choose appropriate read-ahead and caching techniques.

      In Linux there is also fadvise() [die.net]

      Of course... reading from a file (from an app point of view) is really nothing more than accessing data in a mapped memory area. Oh.. I suppose unless you actually use the POSIX mmap call to map the file into memory for reading, y
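
      (For reference, a small self-contained C example of the hint calls mentioned above: posix_fadvise() for a file descriptor and madvise() for a mapping. The file name is made up, and the kernel is free to treat both calls as nothing more than hints.)

      ```c
      /* Hint that a file will be read sequentially, both through read() and
       * through a mapping. These are advisory: the kernel may ignore them. */
      #include <fcntl.h>
      #include <sys/mman.h>
      #include <sys/stat.h>
      #include <unistd.h>

      int main(void)
      {
          int fd = open("bigfile.dat", O_RDONLY);   /* placeholder file name */
          if (fd < 0)
              return 1;

          /* Hint for the file descriptor: whole file, sequential access. */
          posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

          struct stat st;
          fstat(fd, &st);

          /* Same hint for a memory mapping of the file. */
          void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
          if (p != MAP_FAILED) {
              madvise(p, st.st_size, MADV_SEQUENTIAL);
              /* ... scan through p here ... */
              munmap(p, st.st_size);
          }
          close(fd);
          return 0;
      }
      ```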

  • by AllynM ( 600515 ) * on Thursday December 31, 2009 @09:18PM (#30611602) Journal

    Numonyx announced some good advances in PCM a few months back:

    http://www.pcper.com/comments.php?nid=7930 [pcper.com]

    Allyn Malventano
    Storage Editor, PC Perspective

    • by wa7iut ( 1711082 )
      Their marketing has got the phase change physics wrong, though. Water is not a good substance to make an analogy with GST in this case. Ice is crystalline; liquids are neither crystalline nor amorphous. There's no amorphous phase of water analogous to the amorphous phase of GST. The phase change in GST that represents a 0 or 1 is between a crystalline solid phase and an amorphous glass solid phase. Shrinking the bit cell does not make life easier either, certainly not the "slam dunk" they p
  • by Zero__Kelvin ( 151819 ) on Thursday December 31, 2009 @09:27PM (#30611636) Homepage

    "I will assume that this translates to performance (which it does not) ..."

    I was tempted to stop reading right there, but I kept going. While his point about POSIX improvements is not bad, the rest of the article is ridiculous. It essentially amounts to: imagine if we had pretty much exactly what we have today, but used different words to describe the components of the system! We already have slower external storage (networked drives/SANs, local hard disk) and incremental means of making data available locally more quickly (local memory, L2 cache, L1 cache, etc.), at the expense of that data being accessible by other CPUs further away. It turns out I probably should have stopped at the first sentence of the article, which is where I first got that feeling: "Data storage has become the weak link in enterprise applications, and without a concerted effort on the part of storage vendors, the technology is in danger of becoming irrelevant." I can't wait to answer with that one next time and watch jaws drop:

    Boss: Where and how are we storing our database, how do we ensure database availability, and how are we handling backups?
    me: You're behind the times Boss. That is now irrelevant!

    Yeah. That's the ticket ...

  • The real question is whether we need something other than read/write/seek to deal with the various forms of solid-state memory. The usual options are 1) treat it as disk, reading and writing in big blocks, and 2) treat it as another layer of RAM cache, in main memory space. Flash, etc. though have much faster "seek times" than hard drives, and the penalty for reading smaller blocks is thus much lower. Flash also has the property that writing is slower than reading, while for disk the two are about the s

  • Windows is more closely tied to the whole "separate levels of RAM memory and hard disk memory" model than Linux is. I could really see Linux gaining traction if all systems went to PCM tomorrow.

    • Dude, virtual memory architectures of the Linux and Windows kernels (and almost all other modern operating systems) are essentially the same. Everything runs on x86, and so everything utilizes the memory management features of x86 chips.

  • by HockeyPuck ( 141947 ) on Thursday December 31, 2009 @11:45PM (#30612196)

    Maybe these guys ought to ask someone who was around in the days BEFORE there were SANs. Managing storage back then absolutely sucked. Every server had its own internal storage with its own RAID controller OR had to be within 9m (the max distance of LVD SCSI) of a storage array.

    There was no standardization; every OS had its own volume managers, firmware updates, patches, etc. Plus, compare the number of management points when using a SAN vs. internal storage: an enterprise would have thousands of servers connecting through a handful of SAN switches to a handful of arrays. Server admins have more important things to do than replace dead hard drives.

    Want to replace a hot spare on a server? What a pain, as you had to understand the volume manager or the unique RAID controller in that specific server. I personally like how my arrays 'call home' and an HDS/EMC engineer shows up with a new drive, replaces the failed one, and walks out the door, without me having to do anything about it.

    Two words: low utilization. You'd buy an HP server with two 36GB drives and the OS+app+data would only require 10GB of space. So you'd have this landlocked storage all over the place.

    Moving the storage to the edge? Even if you replace spinning platters with solid state, putting all the data on the edge is a 'bad thing.'

    "But Google does it!"

    Maybe so, but then again they don't run their enterprise based upon Oracle, Exchange, SAP, CIFS/NFS based home directories etc like almost all other enterprises do.

    • The SAN argument (Score:5, Interesting)

      by symbolset ( 646467 ) * on Friday January 01, 2010 @03:04AM (#30612810) Journal

      The SAN argument is that your storage is so precious it must not be stranded. If you're paying $50K/TB with drives, controllers, FC switches, service, software, support, installation and all that jazz then that's absolutely true. If you're doing something like OpenFiler [openfiler.com] clusters on BackBlaze 90TB 5U Storage Pods [backblaze.com] for $90/TB and 720 TB/rack you have a different point of view. As for somebody showing up to replace a drive, I think I could ask Jimmy to put his jacket on and shuffle down to the server room to swap out a few failed drives every couple months - that's what hot and cold spares are for and he's just geeking on MyFace anyway. Low utilization? Use as much or as little as you like - at $90/TB we can afford to buy more. We can afford to overbuy our storage. We can afford to mirror our storage and back it up too. In practice the storage costs less than the meeting where we talk about where to put it or the guy that fills it. If you want to pay for the first tier OEM, it's available but costs 10x as much because first tier OEMs also sell SANs.

      Openfiler does CIFS/NFS and offers iSCSI shared storage for Oracle, Exchange and SAP. If you need support, they offer it. [openfiler.com] OpenFiler is nowhere near the only option for this. If you want to pay license fees you could also just run Windows Server clustered. There are BSD options and others as well. Solaris and Open Solaris are well spoken of, and ZFS is popular, though there are some tradeoffs there. Nexenta [wikipedia.org] is gaining ground. There's also Lustre [wikipedia.org], which HP uses in its large capacity filers. Since you're building your own solution you can use as much RAM for cache as you like - modern dual socket servers go up to 192GB per node but 48GB is the sweet spot.

      Now that we've moved redundancy into the software and performance into the local storage architecture, moving storage to the edge is exactly what we want to do: put it where you need it and if you need a copy for data mining then mirror it to the mining storage cluster. We still need some good dedicated fiber links to do multisite synchronous replication for HA, but that's true of SAN solutions also. We're about 20 years past when we should have ubiquitous metro fiber connections, and that's annoying. Right now without the metro fiber the best solution is to use application redundancy: putting a database cluster member server in the DR site with local shared storage.

      Oh, and if you need a lot of IOPS then you choose the right motherboard and splurge on the 6TB of PCIe attached solid state storage [ocztechnology.com] per BackBlaze pod for over a million IOPs over 10Gig E. If you need high IOPS and big storage you can use adaptor brackets [ocztechnology.com] and 2.5" SSDs or mix in an array of The Collossus [newegg.com], though you're reaching for a $6K/TB price point there and cutting density in half but then the SSD performance SAN has an equal multiple and some serious capacity problems. If you go with the SSD drives you would want to cut down the SAS expanders to five drives per 4x SAS link because those bad boys can almost saturate a 3Gbps link while normal consumer SATA drives you can multiply 3:1.

      If you're more compute focused then a BackBlaze node with fewer drives and a dual-quad motherboard with 4 GPGPUs is a better answer. At the high end you're paying almost as much for the network switches as you are for the media. If you're into the multipath SAS thing then buy 2x the controllers and buy the right backplanes for that - but

      • by mindstrm ( 20013 )

        But you don't buy Backblaze storage pods, right? Backblaze is an online service - they built them for themselves as I understand it.

        Yes - there are excellent OSS solutions - if you can keep and maintain an engineering staff who can keep up to speed with things, and build things out, you can absolutely build out lots and lots of storage, and maintain it. Jimmy can swap drives. No problem.

        The problem is - as a business grows (that's what they want to do) - this could become unmaintainable. Staffing becomes m

        • Oh, yeah. We as a team need highly skilled specialists to assemble this stuff and configure it. Guys who know what attaches to which, and what the bandwidths and clock speeds and stuff are. Because that's all really complex and detailed. If we don't handle this ourselves we can shuffle along with much less competent people than can be found at the local voc tech, just by relying on the vendor to steer us right.

          For folks who don't like OSS I did mention Windows Server, which has clustering and management just l

      • by drsmithy ( 35869 )

        If you're doing something like OpenFiler clusters on BackBlaze 90TB 5U Storage Pods for $90/TB and 720 TB/rack you have a different point of view.

        Yes. The point of view that the performance and integrity of the data storage technology is unimportant. I doubt you'll have much luck selling that to most enterprises.

        Your first faulty premise is that redundancy can (and/or should) be moved into the application.
        Your second faulty premise is that what works for Google works for everyone.

    • by Thing 1 ( 178996 )

      I personally like how my arrays 'call home' and an HDS/EMC engineer shows up with a new drive, replaces the failed one and walks out the door, without me having to do anything about it.

      I personally like how my source code doesn't randomly walk out the door, but then that's just me I guess...

  • Access to data isn't keeping pace with advances in CPU and memory, creating an I/O bottleneck that threatens to make data storage irrelevant

    It's because data storage will ALWAYS be relevant (talk to any Alzheimer's patient if you don't believe me) that access speeds are a concern.

  • by Anonymous Coward

    I don't think the author knows much about the purpose of a SAN. A SAN is not just a disk array giving you faster access to disks. Local storage that is faster does not help you with concurrent access (clusters), rollback capability (snapshots, mirror copies, point-in-time server recovery), site recovery (off-sited mirrors), or the substantial data compression gains from technologies like deduplication.

    As for speed, my SAN is giving me write performance in the range of 600 MB/sec per client. I access my stora

  • I mean, what's the advantage of phase change memory in this scenario? If you lose power to your CPU or your system crashes, you will have effectively lost your memory content anyhow. So you might as well open your files with mmap and have lots of RAM. The system will automagically figure out what to swap to disk if RAM isn't enough, and it will regularly back up the contents to disk.
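
    (A minimal illustration of that mmap approach on a Linux/POSIX system; the file name and mapping size are placeholders, and the msync() call is the "regularly back up to disk" step.)

    ```c
    /* Work on a file as if it were memory; the kernel pages it in and out and
     * writes dirty pages back, and msync() forces a checkpoint to disk. */
    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("state.bin", O_RDWR | O_CREAT, 0600);  /* placeholder name */
        if (fd < 0)
            return 1;

        const size_t len = 1 << 20;    /* 1 MiB working set, chosen arbitrarily */
        if (ftruncate(fd, len) != 0)
            return 1;

        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            return 1;

        memcpy(p, "hello", 5);         /* ordinary memory writes */

        msync(p, len, MS_SYNC);        /* explicit "backup to disk" */
        munmap(p, len);
        close(fd);
        return 0;
    }
    ```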

  • Is anyone working on micromachines (MEMS) that set vast arrays of very tiny storage discs into very tiny radio transmitters, each disc transceiving on its own very narrow frequency band? A 1cm^2 chip, perhaps stacked a dozen (or more) layers thick, delivering a couple hundred million discs per layer, each holding something like 32bits per microdisc and a GB per layer, streaming something like 2-200Tbps per layer, seek time 10ns, consuming a few centiwatts per layer.

    Or skip the radio and just max out a multi

  • What the author fails to realize is that the limiting factor on a SAN is most often the host itself, not the disk. A single disk may not have the IO, but an array most certainly does (depending on the array). A standard 33 MHz PCI bus can only transfer 133 MB/s (theoretical max). Even faster buses still do not match the I/O speed or throughput of a SAN.

    The limiting factor on a PC is that southbridge chip, not the storage. The vast majority of the systems typically connected simply can not push the I/O fast en
