Data Storage Hardware

RAM Disk Puts New Spin On the SSD 305

theraindog writes "Although the solid-state storage market is currently dominated by flash-based devices, you can also build an SSD out of standard system memory modules. Hardware-based RAM disks tend to be prohibitively expensive, but ACard has built an affordable one that supports up to 64GB of standard DDR2 memory and features dual Serial ATA ports to improve performance with RAID configurations. And it's driver-free and OS-independent, too. The Tech Report's in-depth review of the ANS-9010 RAM disk pits it against the fastest SSDs around and nicely illustrates the drive's staggering performance potential with multitasking and multi-user loads. However, it also highlights the device's shortcomings, including the fact that SSDs are more practical for most applications."
This discussion has been archived. No new comments can be posted.


  • by Wonko the Sane ( 25252 ) * on Thursday January 22, 2009 @08:09AM (#26558509) Journal
    • Most tasks are not disk IO-bound
    • Despite the fact that this device uses DDR2 RAM running at more than 6 GB/sec, it cannot saturate two SATA interfaces
    • Why bother?
    • by NevarMore ( 248971 ) on Thursday January 22, 2009 @08:21AM (#26558575) Homepage Journal

      I'd bother, because most of *my* tasks are disk I/O bound.

      • by Wonko the Sane ( 25252 ) * on Thursday January 22, 2009 @08:31AM (#26558649) Journal

        But are you really getting your money's worth from this device?

        DDR2 is an order of magnitude faster than SATA. Looking at their numbers, the internal controller is limited to about 400 MB/sec. That is pretty mediocre.

        • by Poltras ( 680608 )
          Though a motherboard that will support 64GB of RAM costs more than this baby here, and that's before you get into OS support for that much memory. In a server environment, I can see a use for faster disks when memory is maxed out.

          • In a server environment, I can see a use for faster disks when memory is maxed out.

            I can see it, but the use is fairly limited. The usage would have to be an environment where you need both extreme performance, and don't care about losing the data on the device. (The battery is only good for 4 hours, not a terribly long amount of time).

            Those situations exist, but are relatively specialized.

            • This looks like an excellent solution for a sound mixer/cd burner box I'm building for a musician friend, studio-quiet and fast. I'm hoping to wow him with the upgrade, from a generic P4 box to an Opteron. Should be a noticeable improvement. Maybe even an ANS9010B, as my friend is typically cheap.

        • That is pretty mediocre.

          For DDR2, maybe, for fixed disk usage? Still pretty good compared to spinning disks or even flash.

          Of course, RAM disks are not anywhere near new. I remember looking at one a while back that used plain old SDRAM. That's a bit hard to find today; it's often actually cheaper to get DDR2 than DDR, and in higher capacities as well.

          Of course; this brings up the question of 'what's the use?' - Seriously, unless you have something weird going on, just install a 64 bit OS and put the extra memory directly into

          • Re: (Score:3, Interesting)

            by Gilmoure ( 18428 )

             Mac OS 7 had a way of creating a RAM disk from installed RAM. It was a sweet way to run Photoshop at top speed back then, along with the dedicated Photoshop video card. Such specialization back then.

            • Re: (Score:3, Interesting)

              by pipatron ( 966506 )
              Of course, the Amiga could do this already in the eighties, and it could also keep its state during a reboot; you could even boot from it.
          • by drsmithy ( 35869 )

            For DDR2, maybe, for fixed disk usage? Still pretty good compared to spinning disks or even flash.

            A relatively modest (by today's standards) 8-spindle RAID10 (eg: a Dell PE2950's internal disk) will exceed 400MB/sec in sequential read operations.

        • by Forge ( 2456 ) <kevinforge AT gmail DOT com> on Thursday January 22, 2009 @10:03AM (#26559641) Homepage Journal
          This brings up an interesting idea.

          What if the ramdisk function was moved into the motherboard chipset? This would achieve 2 things:

          1. It would dramatically cut the cost of a ramdisk, i.e. the cost of the entire motherboard might go up by $5 or so.

          2. It would eliminate that SATA bottleneck, allowing ramdisk to run at full RAM speed.

          If you then figure out a way to have this data loaded into the ramdisk from a hard drive at power-on (or get really clever and mount a flash chip on or near each DIMM that backs up that DIMM just before powerdown), you'd have persistence too.
          • by mcrbids ( 148650 ) on Thursday January 22, 2009 @10:57AM (#26560479) Journal

            This brings up an interesting idea. What if the ramdisk function was moved into the motherboard chipset?

            OMFG! That's an AMAZING idea! This could dramatically change computing as we know it! The implications of this are, eh, well....
             
            .... quite well understood. Somebody thought of this many years ago. Many, many, many years ago. It's called a (ahem) "ram disk" and uses system memory as if it were a drive with a software driver. Here's a howto for Linux [vanemery.com] - I did something similar with so-called "high memory" on an 80286 with DOS 3.x and ramdrive.sys [geocities.com] - that 384k ram disk was small, but //FAST//!!!

            Sorry to break the news to you.

          • This brings up an interesting idea.

            The real question that is not being answered here is why does a 32 GB SD card cost $25 but a 64 GB SATA hard drive cost $800? Why can't the technology that makes SD cards so cheap make cheap SATA hard drives as well?

            If I'm missing something obvious and this sounds like a troll, then please RTFM me with a link, because I'd really like to know.

            • Re: (Score:3, Informative)

              by PitaBred ( 632671 )
              My 16GB SDHC is class 6 [wikipedia.org], which means it'll write at 6MB/s. That's significantly slower than an SSD. It is because of the difference between SLC and MLC [wikipedia.org] types of flash, really.
            • by default luser ( 529332 ) on Thursday January 22, 2009 @02:07PM (#26563667) Journal

              The real question that is not being answered here is why does a 32 GB SD card cost $25 but a 64 GB SATA hard drive cost $800?

              A 32GB SDHC card right now on Newegg (in-stock) is a minimum of $72. I don't know where you got the $25 number (sure, in another year it will be that cheap). As another poster mentioned, Newegg has 32GB SSDs available for the same price range.

              Why can't the technology that makes SD cards so cheap make cheap SATA hard drives as well?

              It already has. The first SSDs on the market used single-level cell (SLC) flash [wikipedia.org], while the inexpensive SD cards and mp3 players you see everywhere use multi-level cell (MLC) flash [wikipedia.org]. The difference is how densely you can pack the data, and it makes a huge difference in price.

              To put it simply: SLC flash is faster, lower-power, and more reliable than MLC flash, but also more expensive (at same capacity) than MLC flash.

              The reason the first SSDs used SLC flash is because new technologies have to convince people to take the plunge: people/companies are usually willing to pay significantly more for something that is much faster and more reliable. Early adopters might have given SSDs the cold shoulder if the first wave of drives reduced capacity and performance in order to be more cost-competitive with existing storage.

              Now that SSDs are firmly off-the-ground, manufacturers are offering all sorts of devices, including cut-rate drives using MLC flash, so the prices at the low-end have dropped like a rock.
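The SLC-versus-MLC price gap described above follows directly from bits per cell. A back-of-the-envelope sketch (the cell count and die cost here are illustrative placeholders, not real die specifications):

```python
# Same die, different bits per cell. Illustrative numbers only --
# not actual die specifications.

CELLS_PER_DIE = 4 * 2**30          # pretend the die has ~4 billion usable cells
DIE_COST_USD = 10.0                # pretend each die costs $10 to make

def capacity_bytes(cells, bits_per_cell):
    """Raw capacity for a flash die storing bits_per_cell per cell."""
    return cells * bits_per_cell // 8

slc = capacity_bytes(CELLS_PER_DIE, 1)   # SLC: 1 bit per cell
mlc = capacity_bytes(CELLS_PER_DIE, 2)   # MLC (2-level): 2 bits per cell

# Same silicon, so cost per byte halves when density doubles.
slc_cost_per_gib = DIE_COST_USD / (slc / 2**30)
mlc_cost_per_gib = DIE_COST_USD / (mlc / 2**30)

print(f"SLC: {slc / 2**30:.1f} GiB at ${slc_cost_per_gib:.2f}/GiB")
print(f"MLC: {mlc / 2**30:.1f} GiB at ${mlc_cost_per_gib:.2f}/GiB")
```

Whatever the real per-die figures, doubling the bits per cell on the same silicon halves the cost per gigabyte, which is why MLC devices undercut SLC at the low end.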

        • by AmiMoJo ( 196126 ) on Thursday January 22, 2009 @10:04AM (#26559655) Homepage Journal

          You can get SSDs with a PCI-e interface that hit 800MB/sec. Why RAM disk manufacturers stick to SATA I don't know.

          PCI-e even has standby power available.

          • You can get SSDs with a PCI-e interface that hit 800MB/sec. Why RAM disk manufacturers stick to SATA I don't know.

            PCI-e even has standby power available.

            Well ... I would imagine that the main reason they stick to SATA is the ubiquity of the interface.

            At this point, just about every motherboard or computer out there contains a SATA drive, even if it doesn't have a PCI-e slot (or a slot available, or a slot of the required bandwidth available). That, theoretically, expands their potential market.

        • Re: (Score:3, Interesting)

          by BikeHelmet ( 1437881 )

          Bingo.

          That's why I'm holding out for a FusionIO. Their cards go through PCIe x4 - not SATA. I want 600MB/sec reads/writes! 60k ops/sec, and cheaper than this thing!

          I just hope their consumer-grade products perform as well as their enterprise ones. Apparently The Tech Report will be reviewing them as soon as they can get their hands on them.

      • "I'd bother, because most of *my* tasks are disk I/O bound"

        I totally agree. For some tasks it's very useful. (If we had memory that was as fast as DRAM, with no write deterioration, but non-volatile like Flash, that kind of memory would make this DDR2-based product obsolete; until then, it's very interesting. Maybe there's hope in time for memristor-based memory, but for now this product does have some advantages.)

        I was interested in the Gigabyte i-RAM but it's too limited at only 4GB.
      • by Ed Avis ( 5917 ) <ed@membled.com> on Thursday January 22, 2009 @10:08AM (#26559709) Homepage

        Why would you spend your money on this device instead of just buying the equivalent amount of RAM and putting it on the motherboard where the processor can access it directly? Even if you had to upgrade to a more expensive motherboard you'd still get way better price-performance by doing that, rather than crippling the RAM by putting it on the other side of the SATA bottleneck.

        If you insist on having a 'disk' you can save files to, well, all OSes support the idea of a RAM disk...

    • The reason it can't saturate the 2 SATA interfaces is most likely the custom chip they use, probably a Xilinx FPGA chip like the one the i-RAM used.

      Had they invested lots of cash in making a custom chip (but that takes time, months), it might have been faster. I assume they either believed it wouldn't be a success, got greedy and wanted to get something to market fast, or figured it was fast enough to make some quick cash.

      Right now the only problem I see is the short 4 hour battery time and maybe the low

      • Re: (Score:3, Informative)

        by Glonoinha ( 587375 )

        it needs about 20 minutes to transfer 16GB to backup card

        That's about 14 megabytes per second - if I had to guess, the bottleneck is the CF transfer rate and has nothing to do with the rest of the device.
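The arithmetic above is easy to check (the 16GB-in-20-minutes figures come from the quoted post):

```python
# Rough transfer-rate arithmetic for the CF backup path:
# 16 GiB dumped in roughly 20 minutes.

size_mib = 16 * 1024        # 16 GiB expressed in MiB
seconds = 20 * 60           # 20 minutes

rate_mib_s = size_mib / seconds
print(f"{rate_mib_s:.1f} MiB/s")   # ~13.7 MiB/s -- typical CF-card territory,
                                   # nowhere near the ~400 MB/s the SATA side sustains
```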

      • by LWATCDR ( 28044 )

        I don't see a huge problem with the battery backup time or the transfer speed.
        I would use this in a database server. Possibly even RAID them for even more speed. If the system is without power for four hours then there are some REAL problems going on.
        It would also be good for rugged embedded systems.
        The speed will come up and RAM prices will keep dropping.

  • by rolfwind ( 528248 ) on Thursday January 22, 2009 @08:11AM (#26558513)

    Without RAM, this costs $380, which is probably more than double the price of the RAM itself if you don't use anything too extravagant. I know other companies offered these in the past, with similarly high prices, always acting as a hard drive with a battery for backup. It was always easy in Linux to make a portion of your memory act as a ramdisk, but many motherboards didn't have enough RAM slots to make splitting memory up like that appealing.

    I wonder if a company like Apple could instead just move its laptops to SSDs, since they are becoming seemingly cheap, exclusively license (for OEMs) a technology like MFT, and get a real speed edge on other makers. I think that would make more sense than a ramdisk, where the bandwidth of RAM versus the hard drive channel seems like overkill.

  • No ECC... (Score:4, Interesting)

    by KonoWatakushi ( 910213 ) on Thursday January 22, 2009 @08:11AM (#26558515)

    so, this is just as worthless as Gigabyte's i-RAM.

    • Re: (Score:2, Informative)

      Actually, this box supports its own ECC function. If you use ECC RAM, the ECC function works just like normal ECC. However, if you don't want to spend extravagant prices on ECC RAM, the box will create its own ECC function. This function does use 1/9th of the total RAM, but it is ECC, and it works. I own 3 of these boxes right now. They are fast as heck. I haven't played with SSDs much (my first SSD drive arrives in the mail today), but I was able to perform tasks at performances that were beyond

      • by Skapare ( 16644 )

        Doing ECC the way this box has to when non-ECC RAM is installed means not only using 1/9 of the memory for ECC, but also storing it separately, requiring extra RAM access cycles and potentially slowing things down as a result.

        The article says ECC DIMMs are NOT supported. Are you saying the article is wrong?

        • The manual I got with mine says ECC RAM is supported. I can't confirm that ECC actually works as I have no ECC memory to test at the present time.

        • Re: (Score:3, Informative)

          by (H)elix1 ( 231155 ) *

          I've got one. Registered ECC is not supported. Unregistered ECC is supported. I saw no real performance decrease in simulated vs real ECC RAM. The SATA interface seemed to be a much greater bottleneck.

      • Sorry, 12 hour work days fragmented my last post.

        The time to install Windows XP SP3 from SP0 was 2 minutes. This is a significant difference from the typical 20-30 minutes (sometimes more) I've seen in the past for the SP3 installation. Everything is so much faster. My computer is a Core i7 920, and when I use my platter drives to boot instead of my ANS-9010, it's a night-and-day difference. The difference in performance to me is akin to telling your family member to upgrade from 512MB of RAM to 2GB. It's

      • by drsmithy ( 35869 )

        This function does use 1/9th the total RAM, but it is ECC, and it works.

        I'll take a stab in the dark and say their chipset is doing RAID3 or RAID5 behind the scenes to "emulate" ECC. So you're probably losing 1/8th (a single DIMM) not 1/9th.

        • Nope. It's 1/9th. Read the manual on their website. They really use ECC. The size detected in the BIOS is 1/9th less than expected.
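The 1/9th figure is consistent with standard SEC-DED ECC geometry: 8 check bits per 64 data bits, so 8 of every 72 stored bits go to correction. A quick sketch (the stick configuration is illustrative):

```python
# Emulated ECC: 8 check bits per 64 data bits (a 72-bit SEC-DED codeword),
# stored in-band, so 1/9 of raw capacity disappears.

def usable_gib(raw_gib, emulated_ecc=True):
    """Usable capacity once in-band ECC overhead is taken out."""
    return raw_gib * 64 / 72 if emulated_ecc else raw_gib

raw = 8 * 4  # eight 4 GiB sticks -- an illustrative configuration
print(f"raw: {raw} GiB, usable with emulated ECC: {usable_gib(raw):.2f} GiB")
# 32 * 64/72 = 28.44 GiB, i.e. the BIOS "sees" 1/9 less than installed
```

This also explains why it is 1/9th rather than 1/8th: the overhead is per 72-bit codeword, not per DIMM.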

  • 1993 was fun. A few buddies and I built out a hardware ram disk with an unused 486 and some serial cables. Granted, it didn't come in as a native HDD, or even as a linux file system, but we built a little API that made it work reasonably well in software with standard file operations and even a crappy little "directory" structure. It worked fairly well... for some reason Mondo2000 pops into my mind.
    • Re: (Score:3, Informative)

      by setagllib ( 753300 )

      These days you can just use iSCSI to any free Unix-like OS and export a memory-backed virtual disk. It's also a nice way to use one machine's memory as swap space for another, and with a fast network link it's like having more RAM in the client machine.

  • This seems like it would be an excellent solution for a swap drive and for a c:\temp or /tmp directory. Fair enough, as a swap disk *nix OSes will reap little benefit, but Windows seems to hit swap no matter how much RAM you have, so there should be some significant performance gains there.

    • by Ginger Unicorn ( 952287 ) on Thursday January 22, 2009 @08:26AM (#26558615)
      The whole thing is pointless - why not just put 64GB of RAM in your PC and let the OS fill it up with disk cache? This makes no sense. If you compare this thing to just putting the RAM in your PC there are NO upsides. The data is vulnerable, it's massively expensive and an inefficient use of the RAM modules. Madness.
      • Also - if you had 64GB in your PC there is plenty of space to create a RAM disk in memory if you specifically needed one.
      • by Kjella ( 173770 )

        I agree. There was a time when it made sense to use an i-RAM to get around the 4GB limitation, but with 64-bit readily available I wouldn't begin to consider this unless you need >16GB RAM. Even then you're likely to find server motherboards with more slots and support for higher densities, not to mention more sockets - I would imagine that most applications which need this amount of RAM could also do with more processors.

      • by ustolemyname ( 1301665 ) on Thursday January 22, 2009 @09:04AM (#26558917)
        I wouldn't say "no" sense. It's battery backed up + connected to a compact flash slot, so when the power goes out it starts backing up your data to permanent storage.

        My apologies - forgot I wasn't supposed to RTFA.
      • by cyberjock1980 ( 1131059 ) on Thursday January 22, 2009 @09:09AM (#26558963)

        If you compare this thing to just putting the RAM in your PC there are NO upsides.

        Ok...
        1. Find me a motherboard that has 8 RAM slots that doesn't require expensive ECC and/or Registered memory
        2. Find me a computer that can boot from its own RAM drive.
        3. Find me a computer that can use a RAM drive that can be persistent through reboots without having to save the contents to something else.

        I have several of these, and I run a power cord that is normally used for one of those SATA/IDE to USB kits in the back of my computer to power my box.

        You just aren't thinking about all of the uses this thing offers.

        • 2. Find me a computer that can boot from its own RAM drive.

          A 4-hour battery life on the ANS-9010 essentially destroys this argument. The remaining two arguments are valid however.

      • Re: (Score:2, Funny)

        by wed128 ( 722152 )

        But just look at those doom 3 level load times! I DON'T HAVE 7 SECONDS TO SPARE!

        (also, i use solid gold SATA cables so my data doesn't get dirty)

      • Re: (Score:3, Interesting)

        by kabocox ( 199019 )

        The whole thing is pointless - why not just put 64GB of RAM in your PC and let the OS fill it up with disk cache? This makes no sense. If you compare this thing to just putting the RAM in your PC there are NO upsides. The data is vulnerable, it's massively expensive and an inefficient use of the RAM modules. Madness.

        Well, from everything on this product that slashdot has mentioned, just sticking RAM on a motherboard would be a better solution. It's not always the best though.

        I've wanted one of these things for

      • Wish I could.... Many current motherboards have an 8GB max, so 4x2GB is all the RAM you can stuff into them. For those who don't have the system RAM to spare for a RAM disk, it is another option.

        For those who do, I've had good luck with this [fortunecity.com].

  • In summary (Score:5, Informative)

    by Max Romantschuk ( 132276 ) <max@romantschuk.fi> on Thursday January 22, 2009 @08:19AM (#26558561) Homepage

    Skimming the article, I'd summarize as follows:

    Real world performance not radically better than fast traditional HDs or SSD solutions, and you can't power off your PC for the night. (Unless you backup to flash every night.)

    I'd say this is a niche product, but could be a very good one for a chosen few applications.

    • by doti ( 966971 )

      what's the difference from a tmpfs then?

      • what's the difference from a tmpfs then?

        There's a battery that lasts a few hours or something, but not through the night. RTFA for proper details. ;)

      • by gmuslera ( 3436 )
        They have a battery (lasting around 4 hours), so it survives turning off the PC or doing some not-very-extensive hardware maintenance. And maybe more important, rebooting (kernel upgrades happen).

        On the other hand, tmpfs has the transfer rate limit of RAM (~6GB/s), while in this device the limit is the SATA one (~400MB/s).
      • If you're using tmpfs on a 32-bit machine, you're limited to 4GB or whatever the installable-RAM limit is. This RAM disk sidesteps that limit by hanging off the SATA ports, but that also caps it at SATA transfer speeds.
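The bandwidth gap the posts above describe is easy to put in concrete terms (the ~6GB/s and ~400MB/s figures are the rough numbers quoted in this thread, not measurements):

```python
# How long does moving 64 GB take at each interface's rough ceiling?
# Bandwidth figures are the approximate numbers quoted in the thread.

RAM_GB_S = 6.0      # ~DDR2 bandwidth, which tmpfs can approach
SATA_GB_S = 0.4     # ~what this device sustains over SATA

def transfer_seconds(size_gb, bandwidth_gb_s):
    return size_gb / bandwidth_gb_s

for name, bw in (("tmpfs (RAM)", RAM_GB_S), ("ANS-9010 (SATA)", SATA_GB_S)):
    print(f"{name}: 64 GB in {transfer_seconds(64, bw):.0f} s")
# tmpfs could stream 64 GB in ~11 s; over SATA the same data takes 160 s.
```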
    • by Agripa ( 139780 )

      and you can't power off your PC for the night.

      I was a little surprised they did not include a way to tap the ATX 5 volt standby voltage.

  • by moteyalpha ( 1228680 ) * on Thursday January 22, 2009 @08:22AM (#26558585) Homepage Journal
    It seems that with a little firmware it could be coaxed to do some content addressability. Considering that it is 10x faster inside than the peak of the SATA interface, it seems to me that there is a lot of potential. I always liked the RAM disks when they were popular ISA cards, and this could be the thing that could use the full power of USB 4.0 [sic]. Applications could be changed to take advantage of this speed. If lists and SQL databases could be sorted on the drive without CPU overhead, it could be very useful.
    • Considering that it is 10x faster inside than the peak of the SATA interface, it seems to me that there is a lot of potential.

      Since the performance bottleneck here seems to be the SATA interface, and not the memory itself, maybe the next evolution of this design could move the RAM directly onto the motherboard, and the CPU could be given rapid access to it through the northbridge?

      Imagine what kind of performance that (purely theoretical for now) configuration could accomplish!

  • Yay, a RAM HD! I'd like to see the pagefile dig into this - Microsoft must be foaming at the mouth. Sorry if that seems like trolling, but I've had it up to here with the constant and painful HD thrashing that Windows always seems to enjoy doing (and probably their less than perfect implementation of it).

    • by Ihlosi ( 895663 )

      Yay, a RAM HD! I'd like to see the pagefile dig into this - Microsoft must be foaming at the mouth. Sorry if that seems like trolling, but I've had it up to here with the constant and painful HD thrashing that Windows always seems to enjoy doing (and probably their less than perfect implementation of it).

      You'll be better off sticking all that RAM in your computer than in a $380 box. (Or, optionally, spend the $380 to upgrade your mainboard to one that can hold all the RAM).

      • by Twinbee ( 767046 )
        Perhaps, but why do I have my suspicions that XP won't even bother to use that memory, especially for the important stuff (like keeping standard requesters in RAM)?
      • Hmm, it would be really interesting to just put 4 more GB of RAM into the computer, make a ramdisk out of it, and then point the Windows pagefile at the ramdisk. That should hopefully once and for all put an end to that painful thrashing.

  • In the real-world benchmarks, the only place where it shines is during file copy operations. It's on par with Intel's SSD when measuring OS load time, game level load times and other application benchmarks. The battery only holds 16GB for 4 hours too, so you'll need that CF card to back up to (which is a neat feature - just the press of a button to backup and restore). Anyway, I'd gladly take it for free.
    • Actually, in the real-world benchmarks the only place it really shined was in transactional processing of multiple client streams. In single-threaded, single-activity processing it isn't really that much faster than even regular hard drives. Shave 10 seconds copying a few gigs of files or making a tar file from your existing data - what good is that? Shave 12 seconds from your boot time - again, what good is that? Now enabling your transaction processing system to handle 10x as many transactions in a

  • I can't see why I'd use this rather than just putting more memory in my machine...

  • Nothing new (Score:3, Interesting)

    by Burdell ( 228580 ) on Thursday January 22, 2009 @08:57AM (#26558865)

    I have a high-load mail server that uses a 2G RAM disk (a Curtis Nitro!Xe [curtisssd.com]) for the queue. It looks like a normal 3.5"/1" high SCSI drive with a SCA hot-swap connector. It was made before high-density CF cards, so it has a 2.5" notebook hard drive inside for storage after shutdown (it has a battery to start the drive, dump the RAM, and shut down). We've had this in service for almost 5 years, and it has really made a difference.

    The point to a RAM disk is not necessarily bulk data throughput, but I/O operations per second. Mechanical drives are limited to 100-200 random IOPS or less, while the RAM disk can easily hit 100,000.
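The IOPS point above is the crux. A minimal random-read microbenchmark in the spirit of that comparison might look like the sketch below (assumptions: it uses the Unix-only `os.pread`, it measures whatever device backs the temp directory, and on a small file the numbers will mostly reflect OS caching rather than raw device seeks):

```python
import os
import random
import tempfile
import time

BLOCK = 4096           # 4 KiB, a typical random-I/O block size
FILE_BLOCKS = 4096     # 16 MiB test file -- small, so likely cache-resident
READS = 2000

# Build a throwaway test file full of random data.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK * FILE_BLOCKS))
    path = f.name

# Issue random 4 KiB reads and count how many complete per second.
fd = os.open(path, os.O_RDONLY)
try:
    start = time.perf_counter()
    for _ in range(READS):
        os.pread(fd, BLOCK, random.randrange(FILE_BLOCKS) * BLOCK)
    elapsed = time.perf_counter() - start
finally:
    os.close(fd)
    os.unlink(path)

iops = READS / elapsed
print(f"~{iops:,.0f} random 4 KiB reads/s")
```

Against a mechanical drive with a cold cache, each read costs a seek, which is where the 100-200 IOPS ceiling comes from; a RAM-backed device has no seek penalty at all.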

  • Good idea (Score:5, Interesting)

    by ledow ( 319597 ) on Thursday January 22, 2009 @09:06AM (#26558939) Homepage

    My university used RAM disks back in the day - it was the only way to get decent performance on older machines. The computers didn't even have hard disks in them. My brother (who went to the same university) has a story where he sped up his large FORTRAN compiles by a factor of 10 just by working out how to use the RAM disk (which was only ever used by the PXE-style boot procedure and then hidden from the OS) for his own purposes, and people couldn't work out how he was doing it because he still took stuff home and brought it in on floppy. This is a nice hark back to those times.

    The killer, however, is the price... the price of a PC, basically, before you add the RAM. If you're REALLY serious, you'll have machines that can just take the extra RAM directly and do this in software. If you're not willing to pay that much, well, nothing will work for you but a few bits of extra RAM and a fast SSD for the same price won't go amiss. However, if you occupy the middle ground... this still doesn't seem worth the effort. It'd be cheaper to just buy an SSD, some extra RAM for cache and maybe even a cheap PC to throw it all in (if NanoITX supported 8Gig chips, this device could almost be made obsolete overnight).

    The interconnect too - yes, it emulates a SATA drive but it emulates two as well and fails to do anything significant with them. So you'd need a RAID0 setup, with independent SATA setup, and an expensive device, with lots of even more expensive RAM just to be a fraction of a second quicker than an off-the-shelf SSD in the same machine. The people for whom it's worth it won't want to be bothered with all this.

    The CF Backup feature is fantastic. I love the idea. But 20 minutes is a long time to wait if the battery is only four hours worth when it's brand new (four hours? At least 24 would have been useful and given you a chance to actually do something with it). You would want to be backing up anything this thing held anyway, so you don't really gain anything because the CF is the most inconvenient backup because of its manual nature.

    I can't see a situation where 64GB of fast storage is worth that amount of money + time + hassle + 64GB of RAM + potential firmware problems + interface cabling + ... The bottlenecks in anything serious are going to be elsewhere.

  • by myxiplx ( 906307 ) on Thursday January 22, 2009 @09:09AM (#26558965)

    First of all, I absolutely love these devices. It's a great idea that's been well executed, and yes, they're a niche product, but we've one or two apps that would notice the increase in speed from these, and if I had the money I'd buy a whole bunch of them to stick in our servers. ... except that you don't get 5.25" bays in an awful lot of modern rack mounted servers. Certainly none of our new kit has them; all that space is taken up with hot swap 3.5" or 2.5" drives.

    And that's what kills it for me. Even if I'm looking at a new server I'd have to make sacrifices to fit one of these. My first choice for a new storage server is going to be one with 24x 2.5" drive bays. I'd have to sacrifice a full 8 drive bays to make room for one of these, and it's just not worth it. Not when I can buy an Intel SSD for the same price, lose just one bay, and have it hot swap to boot.

    And even worse, there are PCIe devices just around the corner, with 3-4x the read and write speed of any SATA device. Those will drop straight into any of our current servers, no problem at all.

    So unfortunately, much as I love the ANS-9010, I just can't see any reason to buy one :(

  • failure mode (Score:3, Informative)

    by Lord Ender ( 156273 ) on Thursday January 22, 2009 @09:14AM (#26559009) Homepage

    How does this thing handle getting the power cord yanked in the middle of a large write operation?

    • Re:failure mode (Score:4, Informative)

      by (H)elix1 ( 231155 ) * <slashdot.helix@nOSPaM.gmail.com> on Thursday January 22, 2009 @09:43AM (#26559389) Homepage Journal

      Same as any HDD - a hard shutdown. The battery pack will then start backing up the current state of the memory to a CF card, so that when power is returned to the system you can run fsck or chkdsk. If you don't have a CF card, it will keep the RAM alive for a few hours, then all is gone if power was not restored.

    • Re: (Score:3, Funny)

      by MightyYar ( 622222 )

      How does this thing handle getting the power cord yanked in the middle of a large write operation?

      Since you didn't RTFA, you'll be happy to know that you can keep on gloating. The thing has absolutely no backup mechanism, no battery, no ability to write to a CF card. If the power so much as winks, all of your data is garbage.

    • by suggsjc ( 726146 )
      It woul
  • Son of iRAM (Score:5, Informative)

    by (H)elix1 ( 231155 ) * <slashdot.helix@nOSPaM.gmail.com> on Thursday January 22, 2009 @09:33AM (#26559231) Homepage Journal
    I got one of these in our lab, and can answer questions on it. Had both units... the 6-slot version and the 8-slot version. This thing is the spiritual successor of Gigabyte's i-RAM. It takes bog-standard DDR2 RAM as storage and lets you connect it as a SATA drive.

    A few of the things it improved over the old i-RAM:

    *Standard DDR2 RAM, with 6-8 slots, taking up to 4GB sticks.
    *A fair-sized battery.
    *A CF backup slot.
    *RAID friendly, with multiple SATA ports on the 8-slot model.
    *Uses a 5.25" bay rather than a PCI slot.
    *ECC

    First off, no special device driver was needed - the drive was OS agnostic. Every mainboard and controller card I used saw the device like any other SATA hard drive you might plug in.

    The RAM slots take bog-standard DDR2 RAM. The documentation mentions speeds of 400/533/667/800 are all supported. Benchmarks with 533- and 800-grade RAM produced identical results, so faster RAM does not appear to have any impact. I also mixed and matched faster and slower DDR2 modules without issue.

    Just like most mainboards, the RAM needed to be installed in pairs if over one stick was used.

    Unbuffered ECC or non-ECC modules are both supported. Registered RAM was not. I tried to pull eight 4GB sticks from one of my Sun boxes to give it the 'full monty' test. No joy. Had to stick with the far cheaper RAM.

    There was an interesting option for those who wanted to have ECC but used 'regular' non-ECC RAM. Eleven percent of the memory could be reserved for error correction. Again, all hardware based - just move a jumper. Performance metrics between ECC and 'simulated ECC' had negligible differences.

    The 8-slot model has two SATA ports. By setting a jumper, you could present the entire RAM capacity as one large drive on one SATA port or split it into two independent drives. If you split the drive, you had to have an even number of RAM sticks installed. Another jumper would dumb the interface down to SATA1 speeds rather than SATA2. Never tested that....

    Did test RAID-0, however. (grin) The synthetic benchmarks don't hit this device's sweet spot - database usage. Reads are fast, and writes are just about as fast. The RAID controller really makes a difference: my 3ware card performed significantly faster than the mainboard-based RAID. Using an EVGA 780i mainboard, it was not crushingly faster than a trio of VelociRaptors.

    For anyone who has installed XP, you know the wait between hitting 'workgroup' and the first reboot? Just over two minutes. By far the fastest install I've ever done. The OS also booted faster than from any other disk or SSD I've used.

    The CF bay was a nifty option. The question came up - what if I want to shut my machine down overnight? You can. If you have a CF card with more capacity than your RAM, it will back up the disk image automagically. You can also push a button to back up the current drive image to CF, and another to restore it. (I was able to go back and forth between Linux and Windows very easily.)

    Anyhow, 'tis a fantastic high-speed scratch disk, or an OS disk when write speed matters. For those of us who have already maxed out our RAM, this covers the gap between a RAM drive sharing memory with the mainboard and a fast disk.
    • Hrm, are you sure you were using the same model as the one in the article? The one in the article had 8 slots that could handle DIMMs up to 8GB each and didn't support ECC RAM.

    • by drsmithy ( 35869 )

      *Uses 5.25" bay rather than PCI slot.

      This swings both ways. If what you have is a typical rackmount server, then a device that goes into a PCIe slot is far more preferable.

      Realistically, they should be able to make two devices - PCIe-mounted and drivebay mounted - using practically identical circuit boards. Ideally, the PCIe device would have a disk controller 'onboard' rather than plugging into the motherboard's SATA ports (and therefore have better performance).


  • Get the device down to about $50 and I'll think about buying one. I don't see any reason a device like this should cost $380. That's just crazy.

    • $380 excludes the ram OUCH!
      (read the conclusions page)

      • Errr.. I already did. That's why I suggested putting the price to $50 before I'd even think about buying one. ;)

  • Ah, Amiga old buddy, I miss you.

    This sounds like DKB's BattDisk [bboah-hardware.de]. Next to a Video Toaster and a SuperGen, the Amiga 2000's best friend. It took the machine from the slowest-booting system of its age to the fastest.

  • Devices like this have been available for at least a decade, possibly much longer. The primary advantages of flash are low price, high reliability, and battery-less non-volatility.

    • Most of what you said is right, but I don't know if I agree with "high reliability". Flash suffers from a limited number of writes per memory cell, a limit RAM doesn't have. Assuming the RAM disk is made with decent production quality, it should be much more reliable in the long term than a flash drive. The difference between this device and previous ones is that this one is priced for the consumer and low-end server market, whereas most previous ones were priced for the enterprise market.
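      To put rough numbers on the write-endurance point (illustrative figures only, not vendor specs), an ideally wear-leveled flash drive's lifetime works out to:

      ```python
      # Back-of-the-envelope estimate: total writable data = capacity x P/E
      # cycles, assuming perfect wear leveling and no write amplification.
      def flash_lifetime_years(capacity_gb: float, pe_cycles: int, writes_gb_per_day: float) -> float:
          total_writable_gb = capacity_gb * pe_cycles
          return total_writable_gb / writes_gb_per_day / 365

      # e.g. a 64GB drive rated for 10,000 P/E cycles, under 50GB of writes a day:
      print(round(flash_lifetime_years(64, 10_000, 50), 1))  # -> ~35.1 years
      ```

      Real-world write amplification and uneven wear cut that figure considerably, while DRAM has no comparable per-cell write limit.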

"The vast majority of successful major crimes against property are perpetrated by individuals abusing positions of trust." -- Lawrence Dalzell

Working...