Data Storage Hardware

Gigabyte Solid-State Storage Reviewed

EconolineCrush writes "The Tech Report has a review of Gigabyte's i-RAM, a relatively affordable solid-state storage device that uses plain old DDR memory modules and plugs into a standard motherboard PCI slot and Serial ATA port. Performance is generally excellent and occasionally jaw-dropping, but the i-RAM's appeal is ultimately curbed by its slower Serial ATA interface and limited capacity. Still, it's an interesting solution for anyone looking for faster I/O, and since it behaves like a normal hard drive without the need for drivers or software, it should work with just about any operating system."

  • Why use ATA at all? (Score:2, Interesting)

    by networkBoy ( 774728 )
    And if it's plugged into a PCI slot, why, pray tell, does it need Serial ATA?
    Why not use the PCI bus and look like a very fast ATA controller?
    Standard PCI has over a gigabit of bandwidth.
    • You can't boot from PCI. This device uses the PCI slot only for power, to keep it behaving like a HDD. It uses SATA to become a "HDD".
      • What do you mean you can't boot to PCI? What do you think connects your ATA *controller*, which then connects to your ATA devices?
        The other poster probably has it right: It's been done this way so that you don't need any drivers.
        • You would need drivers for a storage device on PCIe. If you just made this look like a PCI host controller it would boot fine (natively and without drivers). What do you think that PCI card that comes with your UDMA133 hard drive is for? Adding second drives? No. It's so that if your mainboard only supports UDMA66/100 you can boot from the faster PCI card instead. This would be no different. As long as you're tying up the slot and are using a friggen Spartan 3 FPGA you might as well make use of the things
          • You do need drivers for ATA controllers. It's just that they are normally included already, but if you were to create one from scratch, you would have to include drivers. See the corresponding part of your linux kernel configuration for more info on this.
            • You're making my point in a roundabout way.
              If you make this device look like a plain vanilla ATA controller then the drivers that are already in the kernel should pick it up fine, no need to add the SATA link.
              Simply identify yourself as a fairly common but fast IDE controller and everything from Windows to Linux to OS/2 should pick it up.
          • No, I think the question is: Why can't a disk controller have any speed it wants? The reason you get a card that supports UDMA133 is that the motherboard's controller doesn't support it, not because the bandwidth between the IDE chip and the board is limited to UDMA100 speed.

            I mean, SATA is an interface between the hard disk and the *disk controller*. Then the OS talks to the controller, not to the disk itself. I don't think there's any reason you couldn't have an UDMA133 ISA card, for instance. So wh
            • You may freak out the various OS subsystems that have never been tested to handle high speed data from what appears to be a disk subsystem.

              In theory everything would work fine, but I've found that pushing the edge does uncover lots of bugs, such as when we started building multiterabyte servers with 3+ controller cards. Most drivers aren't very well tested handling that number of cards, and the filesystems pretty much all had 2TB limitations.

              I suspect you might run into the same issues if the OS suddenly g
      • of course you can boot off a pci card. it just needs to have a boot rom which extends the bios, like every scsi card, ata raid card, sata card and many network cards do.

        and if you plan to run an NT-based version of windows you need the correct drivers for that too, again just the same as a scsi raid or sata card. windows provides you with a prompt to load these during setup (which unfortunately only works with the driver on a floppy) or they can be slipstreamed in.

    • my n00b guess: so it appears as a standard hard drive with no need for drivers or OS (article says BIOS can handle it)?
    • It uses the PCI bus to charge the battery. Read the first page of the article.
    • most likely so people can hook it up to their raid controller.

      but i agree it does seem a little stupid
    • by Zeio ( 325157 ) on Wednesday January 25, 2006 @01:14PM (#14559164)
      SATA is 150MB/sec. Standard PCI is (32 * 33) / 8 = 132 (and generally * 0.8 for overhead if other things are present on the bus so more like just around 100).

      You could instead use a single PCI-Express lane: 250MB/sec each direction (500MB/sec full duplex).

      Seriously, look into things before you post - especially when using snarky expressions such as "pray tell"

      Also, direct connect to the PCI bus would require (most likely) funky drivers.

      IDEALLY, marvell/adaptec/lsi or others should just have a back end to one of the common non-fakeraid controllers they make be RAM instead of disks, piggybacking the existing driver support for the raid cards.
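For reference, the bus arithmetic in this subthread can be sanity-checked with a quick sketch. These are theoretical peaks only; the 0.8 derating factor is the rough overhead estimate from the comment above, not a measured value:

```python
# Theoretical peak bandwidth figures quoted in this subthread.

def pci_bandwidth_mb(bus_width_bits=32, clock_mhz=33, derate=1.0):
    """Peak PCI bandwidth in MB/s, optionally derated for bus overhead."""
    return (bus_width_bits * clock_mhz / 8) * derate

SATA1_MB_S = 150                           # SATA 1.0 peak
pci_peak = pci_bandwidth_mb()              # 32-bit/33MHz PCI: 132 MB/s
pci_loaded = pci_bandwidth_mb(derate=0.8)  # ~106 MB/s with other devices on the bus

print(f"SATA 1.0:   {SATA1_MB_S} MB/s")
print(f"PCI peak:   {pci_peak:.0f} MB/s")
print(f"PCI loaded: {pci_loaded:.0f} MB/s")
```

So a direct PCI attachment, not SATA, would be the first bottleneck once anything else shares the bus.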

    • Which is why this review, with their SATA controller clearly being on a PCI bus, is flawed. If they were using a SATA controller on a PCI-X bus, they would get better transfer rates, since they were hitting the PCI bus limit.
    • Re-creating an IDE controller and a boot bios extension costs more money. They realized they could use existing controllers and save money. Smart move. I imagine there might be a pro one that doesn't do this.
  • Dupe (Score:1, Informative)

    by Anonymous Coward
    Dupe from July 2005 []
  • Does stuff like SATA-2 ("virtual" or not) bump up transfers or not in a setup such as this, or is that only applicable to real drives, so to speak? ...Doesn't seem clear whether there's SATA2 involved here.
    • the chip used is programmed for SATA 1, which has the 150MB/s limit. the review does note that it would be nice to have it support SATA II to obtain the 300MB/s transfer rates, but it does not yet.
      As the chip used is an FPGA (Field Programmable Gate Array), this upgrade could be expected in a future version if the product launches well.
      It is worth noting the review also found that to get even better transfer rates, a non-universal PnP interface would be required, needing OS drivers etc.
      • The review also blames the single-chip FPGA for the lack of ECC RAM support, and other things. Hello? If there's demand, I'm sure those things can be added in the future without changes to the board, unless that pesky ECC notch key in the slot prevents the module from fitting...

        Personally, I'm baffled as to why this thing isn't shaped like a drive. If everything that needed power took up a PCI slot, we'd run out of slots pretty quickly. Let it eat power from a drive connector!
  • So you have a 4GB HDD that "FDISK's" itself if you power the machine down overnight?
  • I just *knew* that the SATA interface was just too damn slow.

    Seriously tho',

    Didn't Gigabyte announce this last year for $50-$60? Seems they rethought how much profit they could make with it.

    And I'm pretty bummed about the 4GB limit. Not killer bad, but 8 would have been so much better.

    Probably wouldn't have hurt to have some more interesting tests in the review tho'. Where's the kernel recompile? That would tell me more about real-world performance than the faux-tests that they showed.

    • A kernel compile isn't all that disk intensive compared to the processor usage. Compared to hard drives this ramdrive has a small advantage in transfer rate (150MB/s vs. 60MB/s) and a big advantage in seek times (0ms vs. 9ms).

      For a really simple example, compare two drives with a 60MB/s transfer rate, one drive with 0ms seek time and one with 15ms seek time*. The hard drive falls behind by 900K for every random head seek (not counting sequential seeks which are much faster).

      * This is slower than advertised
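The 900K figure above falls out of a one-line calculation, using the 60MB/s transfer rate and 15ms seek time assumed in the example:

```python
# Data a drive could have streamed during one random seek.
# MB/s multiplied by ms is numerically KB (60 MB/s for 0.015 s = 0.9 MB = 900 KB).

def seek_penalty_kb(transfer_mb_s, seek_ms):
    return transfer_mb_s * seek_ms

print(f"Lost per seek: {seek_penalty_kb(60, 15)} KB")
```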
  • by Godeke ( 32895 ) * on Wednesday January 25, 2006 @11:57AM (#14558149)
    An interesting idea, but the limited size (4GB) makes me wonder what the target market would be. More to the point, where would this solution be better than 4GB of RAM available to the platform? Yes, this thing has battery backup and sips power when the machine is off, so it acts somewhat like a drive, but I would have my doubts about trusting it with anything mission critical.

    The performance tests show it did a great job as a high performance drive for simultaneous requests for data on a web server, for example. But they didn't compare it to using the same 4GB onboard the server, which would be far more interesting... since the data is being "read" over Serial ATA (which is puzzling since they are plugged into the bus), I can't imagine it being faster than using the memory to cache the data traditionally. The other examples, such as operating system boot time, show that the operating system isn't as read bound as one would think on boot.

    I'm sure there are some specialist uses for this that will make sense, but I suspect most of them would be better served with 4GB of RAM disk or cache.

    • so your server has all its motherboard ram slots full to capacity and you add one of these and configure it as swap. not as good as sticking another 4 gigs in the motherboard obviously, but if the motherboard's already maxed out then it could be a lot cheaper than replacing a server class motherboard and transplanting everything into a new case.

      also these have uses that a software ramdrive doesn't. primarily that they will survive an os crash or reboot! so i'd imagine they'd be quite useful for things lik
    • Someone else has already replied pointing out "serious" applications, but I would add gaming. Granted the 4GB size is limiting here, but there are still plenty of games that are played at competition levels that you could fit (along with Windows) into this 4gb. Assuming the rest of your rig is top of the line, it's an advantage over your competitors.

      And gamers are the types who would drop the dough on this, though the iRAM with 4gigs of memory doesn't even come close to a top of the line SLI config so it'
    • As I see it, the strength of this disk is the really short time to sync. Running the Oracle redo log on something like this would be nice. Or the entire database if it fits.

      But if I had the money, I would rather put it here []

    • Actually, I can think of a decent use for it - put a swap file on it. Oh sure, this is less useful now that we have Athlon 64 systems with an assload of DIMM slots but this has the added advantage of taking cheap ram. You can get cheapie 1GB dimms for about $60.

      Also, it would be really handy for anything that requires a lot of rapid temp file creation.

      Finally, for the high-end gamer market, it would be cool to load your most-played game into it. But as per the benchmarks it wouldn't help all that muc

    • Bittorrent?

      Heavily distributed file tasks like bittorrent may run for lengthy periods of time and require low access times and constant writes.

      I have a raid 0 and a huge problem for me is that when I bittorrent multiple files at once my hard drives grind away like crazy, some programs support variable levels of memory buffering but some don't.

      Solid state storage means that writing one sector in 10 different places isn't a problem for wear/tear or destroying access times.

      Admittedly this example refle
    • Well, I am a tad disappointed at the SATA interface, but there are a few possibilities.

      This would make an outstanding "external journal device" for a journaled file system or four.

      Now nothing is good for an NTFS journal even if you could do it to an external device, but for a real journaled file system it would do quite nicely. The device becomes a fast write cache in front of a potentially slow aggregate (software RAID etc). Put a database on the raid and do full data journaling. You would be able to pul
    • Forget the gamers and desk top users. For those users just put the RAM in the system.

      Think mail servers, database servers, web servers and e-commerce servers. Any application that does heavy I/O. Real I/O, not just swap or paging. In particular any application that is write heavy. For true write-it-to-the-disk activities, kernel buffers, if they get used (and they should not), are a loss because the data is not committed to non-volatile memory. Wrote the data to disk. Ya, right. Oops, the system crashe
  • Swapping/Caching (Score:4, Interesting)

    by Twillerror ( 536681 ) on Wednesday January 25, 2006 @11:58AM (#14558164) Homepage Journal
    This sounds like a perfect candidate for a swap partition, especially on Windows. Windows swap is a huge performance hog. I turn it off if the machine has 2 gigs+ of memory. Windows tends to swap memory not based on the lack of it, but the lack of access. So if you let a program sit in the background overnight and then switch to it, your HD goes crazy.

    With swap being on this you'd still get transfer rate problems, but access times should be dramatically better. Especially when the "drive" is fragmented. A defrag program would run pretty fast on one of these as well.

    It's too bad that OSs don't have support for these types of devices yet. I'd rather use it as an actual drive cache and not bother my main RAM. If the OS loaded a file up, it could place it on the RAM drive and read and write to it.

    Related, most of my servers at work have 128 or 256 meg SCSI RAID cards. I wish that technique would make it into the retail market.

    • Swap is not a viable application for what is essentially a slow, persistent ramdisk.
    • "With swap being on this you'd still get transfer rate problems, but access times should be dramatically better. Especially when the "drive" is fragmented. A defrag program would run pretty fast on one of these as well."

      There's no need to defrag data stored on these, there is no performance gain for sequential access over non-sequential access.

    • Where this really shines is with an OS which loads entirely to the drive. I have one of the early ones, and have it on a machine using Slax installed to the drive. Power up to live time is under 35 seconds - as close to instant on as I need.

      Oh, and the "10 hour" battery is more like 8 or so, but who's counting. OK, I am. But hooked into a UPS, the system is rock-solid, and totally silent.

      Pretty cool
    • Only 1G? Keerist in a bucket, why not just add 1G to the motherboard? Why add all the overhead of pretending to be a disk drive, all the extra components and the connectors and extra power supply and all that crap, just for 1G of swap?

      Put it on the motherboard.

      If you have a slow slow system which can't take 1G on the motherboard, you have other problems. 1G swap isn't going to be that much help, and for the expense, just upgrade your motherboard and get it over with.

      This is a useless product.
    • Windows tends to swap memory not based on the lack of it, but the lack of access

      Yes, that is by design. Linux does the same thing, although it is much less aggressive. Remember, the memory is always being used by something. Better to use the free memory as additional disk cache rather than wasting it on memory that hasn't been accessed in a long time.

      World of Warcraft, for example, takes over 1GB of total memory on my system - note that I only have 1GB of memory. But ~700MB of that is swapped out at any on
    • I often find that my Windows machine appears to be caching a whole load of stuff from disk in RAM, and at the same time paging the rest of the RAM out to disk.

      So, we have disk data sitting in RAM which is pretending to be a disk, while transient data are being paged out to disk pretending to be RAM. And now that disk actually is RAM pretending to be a disk pretending to be RAM. What is this? An episode of Scooby Doo?

      Mr Ram tears off a rubber mask. "Haha! I'm really Mr Disk, the janitor! Shaggy tears off a
      • I often find that my Windows machine appears to be caching a whole load of stuff from disk in RAM, and at the same time paging the rest of the RAM out to disk.

        It's called a unified buffer cache, and practically every OS since Mach 2.x has had one. A lot of programs allocate memory and then never access it again due to bugs. A lot more programs allocate memory and then only access it very infrequently. Meanwhile, all of these programs are accessing the disk a fair amount. Would you rather that your RA

  • How does this setup compare to a 2gig flash drive on USB4 or one of the integrated IDE/SATA flash readers?

    • Well... flash is several orders of magnitude slower than DDRAM so I would say it probably compares quite well!

      The sustained transfer rate is 150 MB/s... when was the last time you wrote your whole 128 MB USB flash whatever whatever in under a second?

      Anywhere close to a second?

      Ten seconds?
      • True, current flash standards are 66MB/s, but it has advantages over either other option. Like the DDR solution there is no performance hit for non-sequential transfers, it consumes less power than hard drives, and takes up less space. Its primary advantage over the DDR solution is that it is non-volatile.

        Also, according to this article, IDE and PCMCIA Flash drives should hit 133 MB/s this year. y_5838.html [] And it was reported not too long ago on /. that 16 Mb chips(2M
  • Along those same lines, DB apps (Oracle, DB2, etc), and some server apps (the Apache SSL cache, off the top of my head) use shared memory structures (in *NIX).

    When the structure is created and initialized, an equal amount of swap is allocated (whether it is needed or not). This would be a perfect place for this sort of memory.
  • The least they could do is support 2GB modules to max the card out at 8GB. I don't think I could even copy over my entire World of Warcraft folder to that thing.
  • Yes it's a fresh press release and additional capability but it's a bit stale [] to be real news.
  • Erm. So they take a PCI card, pump data across a SATA cable to the chipset and then through PCI to the processor and memory? Does anyone else see any redundancies here?

    Now really, how many potential users really need these to be attachable from outside?

    Is it really so hard to make that board pretend to be just another SATA controller, pump the data across PCI only once and not waste the SATA connector?

    • it's drawing power to recharge its battery from the pci bus, nothing more. The real transfer is done over the sata bus, thus making it more or less driver free. A great advantage if you are running an OS that would most likely not be supported by the developer...
    • It uses an FPGA. If they designed it right they may have the option to up the memory to a max of 8 gigs, use the PCI bus, and move to SATA2.
      A good reason for SATA is the ease that you could build a raid of these.
      four cards set as a raid 0 would make for a VERY fast 16 gig drive.
      This could be real handy for a database server.
      If you are NOT storing graphics, video, or audio 16 gigs is a LOT of data.
      Use an ATA drive for the OS and programs and keep your data base on the RAID RAM disk. Combine that with a lot of
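The four-card RAID 0 idea above works out roughly as follows. This is a sketch of the theoretical ceiling only, assuming an ideal striping controller on a host bus fast enough to feed four SATA channels:

```python
# Hypothetical four-card i-RAM RAID 0, as proposed above.

CARDS = 4
CARD_GB = 4        # 4 x 1GB DIMMs per card
SATA1_MB_S = 150   # per-channel SATA 1.0 peak

capacity_gb = CARDS * CARD_GB
peak_mb_s = CARDS * SATA1_MB_S  # only reachable if the controller's bus keeps up

print(f"{CARDS} cards striped: {capacity_gb} GB at up to {peak_mb_s} MB/s")
```

In practice the controller's own slot would cap this: on plain 32-bit/33MHz PCI, the ~133 MB/s bus limit dominates long before 600 MB/s.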
  • by b1t r0t ( 216468 ) on Wednesday January 25, 2006 @12:41PM (#14558707)
    The article apparently only links to Gigabyte's home page, and if they do have a deeper link, I couldn't find it.

    So here is a link to their Other Peripherals page [], where they list all three (!) versions of the board. But you still can't order directly from them anyhow.

  • When I read the article title, I thought that the device would only store 1GB of data, not realizing that 'Gigabyte' was the manufacturer.

    After reading TFA, I see that it's got four memory slots that the user fills with his memory of choice. Each slot supports up to 1GB, for a total of 4GB.

  • You can plug multiple cards into multiple PCI slots. If you attach their SATA connections to a SATA raid (or even non-raid) controller that is plugged into a higher performance slot, such as PCI-X or even a 64-bit PCI slot, you will most likely get better performance than this review leads you to believe. Their bottleneck wasn't this card, but the PCI bus holding the SATA controller they were using. This was obvious from the maximum transfer rates being close to the ~133 MB/s theoretical PCI limit.
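For comparison, the theoretical peaks of the slot options mentioned above (a quick sketch; real-world throughput runs noticeably lower):

```python
# Peak bandwidth (MB/s) of the buses discussed above:
# width in bits, times clock in MHz, divided by 8 bits per byte.

def bus_mb_s(width_bits, clock_mhz):
    return width_bits * clock_mhz / 8

buses = {
    "PCI 32-bit/33MHz":    bus_mb_s(32, 33),    # 132 - the review's likely bottleneck
    "PCI 64-bit/33MHz":    bus_mb_s(64, 33),    # 264
    "PCI-X 64-bit/133MHz": bus_mb_s(64, 133),   # 1064
}
for name, mb in buses.items():
    print(f"{name}: {mb:.0f} MB/s")
```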
  • Thousands of small-file retrievals without the seek time overhead.

    Large DNS zone files, millions of e-commerce product images (we have about 1.2 million images that consume less than 4GB of space), and heavily queried LDAP data. Maybe even a MySQL database with many very small tables.

    I see it as kind of the mini ReiserFS of hardware...

    I would love to have some R&D time to work out the possibilities with some of my operations.
  • by millisa ( 151093 ) on Wednesday January 25, 2006 @08:53PM (#14563679)
    I read about these a while ago and have bought one.

    Just like every single other review I've read, great things are claimed about this thing: "It's just like a hard drive", "Linux! Wee!", yadda yadda yadda.

    I question whether most of these reviewers actually touched one.

    I have version 1.2 of the board.
    I had four 512meg pc2700 dimms (kingston) lying around which I figured I'd try it out with. It seemed to work at first: detected in the bios, showed the right size on autodetect.

    I was able to format it once in Windows XP after initializing it. I have never successfully formatted it since. The data corrupted itself shortly thereafter. (I copied an iso back and forth from a standard sata disk and md5'd it.)

    The speed was impressive. Copying to itself from itself did about 500mb in 5-7 seconds.

    Now, the use in windows has some appeal (sql temp db? IIS cache / IIS compression dir?) but I really wanted this for some of my mail servers (spam scanners that need a fairly big glob of temp space) and possibly for some replicated mysql dbs.

    I could not get any of the following linux installs to recognize that there was a disk on the system at sda or hda: fedora (core4), centos (4.2), ubuntu (um, whatever the iso is they have up). However, this was *only* during the installation process . . . I do not know what driver these installs might have needed that would allow it to see this device (they see a maxtor sata drive I have on hand just fine). If I installed onto a regular old sata hard drive, and turned off all PATA ports, I was able to see the I-Ram. I was able to fdisk the I-ram. I was able to mkfs.ext3 the i-ram, sort of... The smaller partitions seemed to go ok, but whenever I made a partition bigger than 200meg, sometimes mkfs would crap out throwing errors about the partition being possibly corrupt.

    I was able to successfully install a 100meg fat partition, with dos on it and it worked quite well...

    Now, because I was getting corruption and not using one of the suggested ram types, I purchased 4 1gig sticks of the exact model and chipset they listed as being tested (kingston kvf400x64c3a/1g).
    This did not change any of the weirdness.

    Now, I firmly believe this product works. I can't see them selling it if it didn't (yeah, I'm an optimist). I called their tech support to make sure there wasn't a firmware update I might need to make. All of my hardware should be supported (ICH6R chipset, right ram, right pci slot, etc), they said. They have not tested it at all in Linux, he said (this didn't matter, since I could show issues in Win32XP). He was not able to immediately RMA a new card, however... all they have on hand in support is apparently one of the prototype cards. I'm having to wait until he gets one of the new ones, one of my chipset boards, and the suggested ram before he can make the call that a replacement would fix the issue.

    I knew ahead of time I'd be dealing with early adopter pain, but there is use even though "SATA is so slow!". Yeah. Well, being able to push all 150mbytes/sec per SATA channel is good enough for me. That'd saturate a gigabit line, and I can put a 4gig ram disk on boards that won't support 4gig of ram total...

    Don't consider this a review. I'm not speaking for or against the thing. This is purely my experience so far with *one* card...
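For anyone who wants to repeat the copy-and-md5 corruption check described above, a minimal sketch. The paths are placeholders, not the poster's actual setup; the original test copied an iso between a SATA disk and the i-RAM:

```python
import hashlib
import shutil

def md5sum(path, chunk=1 << 20):
    """MD5 of a file, read in 1MB chunks so large isos don't need to fit in RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def round_trip_ok(src, dst):
    """Copy src onto the device under test and verify the checksum survives."""
    checksum = md5sum(src)
    shutil.copy(src, dst)  # e.g. dst on the mounted i-RAM volume
    return md5sum(dst) == checksum

# Example (hypothetical mount point):
# round_trip_ok("/data/test.iso", "/mnt/iram/test.iso")
```

Repeating the copy in both directions a few times, as the poster did, catches corruption that a single pass might miss.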
  • ...only 9 years until I have 68GB of solid-state storage! Finally, now I know when I can build my no-moving-parts, silent jukebox for my soundsystem! I wonder how much storage I'll need by then if I need 40GB now, especially if DVD-A becomes popular, let alone once we move to holographic storage [] takes off in September of this year [], although that date seems to make me wonder why the hell we're arguing about HD-DVD and BluRay....
