Data Storage Hardware

The State of Solid State Storage 481

carlmenezes writes "Pretty much every time a faster CPU is released, there are always a few who marvel at the rate at which CPUs get faster but lament the sluggish pace at which storage evolves. Recognizing the allure of solid state storage, especially to performance-conscious enthusiast users, Gigabyte went about creating the first affordable solid state storage device, which they call i-RAM. Would you pay $100 for a 4GB Solid State Drive that is up to 6x faster than a WD Raptor?"
  • I'd use Raid (Score:2, Informative)

    by ttown ( 669945 ) on Tuesday July 26, 2005 @10:51AM (#13165004)
    Having disks in parallel will speed up your storage for much less money. 6x faster is not significant.
  • More than $100... (Score:5, Informative)

    by Anonymous Coward on Tuesday July 26, 2005 @10:51AM (#13165008)
    The card itself goes for $150, not including any RAM. Add four 1GB sticks of RAM and you're looking at $500+ for the whole setup. That works out to about $125 per GB... ouch!
  • Eh (Score:5, Informative)

    by Tranquilus ( 877563 ) on Tuesday July 26, 2005 @10:54AM (#13165045)
    The performance numbers Anand came up with on this are a little disappointing, in my view. It's nice, of course, to get apps or levels loading a few seconds quicker, but I doubt this is really worth it to most of us at this stage (aside from the coolness factor). Once the capacity of these rises enough to make them capable of replacing hard drives, though, they might be really nifty in the entertainment/HTPC space thanks to their silent operation. Basically, an interesting concept, still not quite ready for prime time, but getting a lot closer. Worth a quick read, anyway...
  • Re:I'd use Raid (Score:5, Informative)

    by MasterC ( 70492 ) <cmlburnett@gm[ ].com ['ail' in gap]> on Tuesday July 26, 2005 @10:58AM (#13165090) Homepage
    Having disks in parallel doesn't solve the latency problem; it only increases the throughput.
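
    A toy calculation makes the parent's point concrete (the per-disk seek time and transfer rate below are made-up but plausible numbers, not benchmarks): striping multiplies sequential throughput, but a small random read still pays one full seek.

        # Toy model: striping N disks scales transfer rate, not seek time.
        SEEK_MS = 8.0        # assumed average seek time per disk
        RATE_MB_S = 60.0     # assumed sustained transfer rate per disk

        def sequential_read_ms(size_mb, disks):
            # One seek, then the data streams off all disks in parallel.
            return SEEK_MS + (size_mb / (RATE_MB_S * disks)) * 1000.0

        def random_read_ms(disks):
            # A tiny random read is dominated by the seek; extra disks don't shorten it.
            return SEEK_MS

        for n in (1, 2, 4):
            print(f"{n} disk(s): 100 MB sequential = {sequential_read_ms(100, n):.0f} ms, "
                  f"small random read = {random_read_ms(n):.1f} ms")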
  • Re:Let me think. (Score:3, Informative)

    by archen ( 447353 ) on Tuesday July 26, 2005 @11:08AM (#13165250)
    I was thinking the same thing, but keep in mind that this thing is actually acting like a SATA drive. I'm sure they're hitting the limitations of SATA, not the limitations of RAM. Until they come up with a _standard_ configuration for this type of memory disk that talks as fast as the RAM allows instead of following IDE/SCSI/SATA standards, I'm thinking we're stuck with these speeds for compatibility reasons.
  • by Anonymous Coward on Tuesday July 26, 2005 @11:10AM (#13165267)
    It's done because the PCI slot provides continuous power, even when the system is turned off.
  • by archen ( 447353 ) on Tuesday July 26, 2005 @11:18AM (#13165373)
    Actually I think the FreeBSD memory disks would be superior to this anyway. Mounting things like /tmp as a memory disk is fine, although obviously you lose everything on reboot; that's why there are also memory disk setups that are backed up to physical disk. With a FreeBSD memory disk you can add more space easily and allocate more or less as needed without too much work. This thing looks like you're stuck with whatever you put in it once it's set up (correct me if I'm wrong).

    Moving parts suck, but they're usually pretty reliable, and certainly worthwhile as a backup for a RAM-based system like this.
  • From the summary it sounded like a 2.5" or 3.5" 4GB IDE drive using flash, rather than an IDE emulator with battery-backed RAM drawing power from a PCI slot... and with no memory included!

    I'd pay $100 for 4GB of flash in a PCI or hard drive form factor, for a solid-state BSD or Linux webserver.

    I don't think I'd pay $100 for a 0GB hard drive emulator that takes up both a PCI slot AND a SATA cable, that I still have to populate with RAM, and that will lose all its data if you leave it off too long.

    Given that you can get a 2GB Compact Flash card for $100 or 4GB for around $200, that you can hook those up to PATA with a $40 adapter, and that populating this thing to 4GB will set me back more than that... I don't see the point.
  • by mindmaster064 ( 690036 ) on Tuesday July 26, 2005 @11:20AM (#13165397) Homepage
    Time and time again, the biggest problem with these types of products is that no one stops to figure out what they would be useful for.

    First off, this thing costs WAY too much, in terms of both the card and the memory to populate it. This board should cost about $50, not $150, and I'm saying $50 mostly because it comes with the lithium battery and charging feature.

    Secondly, it is way too small. If it were 8GB I could use it for something like backing up DVDs, which play hell with hard drives and make you defrag them often. I could use this thing at that point, and so could you.

    Third, for a memory-based I/O board I can think of nothing sillier than loading it down with a disk I/O interface. It does make it more "compatible" or whatever, but it also makes this board antiquated in about two years. If they had just made the I/O controller talk directly to the CPU, this board would be smoking, and probably twice the speed.

    I was actually excited to read something about a product like this, but this one is not ready for prime time.

    Anyone remember Boca boards? :)

    - Mind

  • by MobileMrX ( 855797 ) on Tuesday July 26, 2005 @11:22AM (#13165415)
    The 6x comes from random seek access. A Raptor's 72MB/sec falls to ~3MB/sec or less when it has to start seeking, like when the data is separated into different physical locations on the hard drive instead of one continuous chunk o' data. The i-RAM is not affected by seeking (or only negligibly so) because it doesn't have moving parts. -Mr. X
  • Re:I'd use Raid (Score:2, Informative)

    by Apparition-X ( 617975 ) on Tuesday July 26, 2005 @11:24AM (#13165445)
    Erm, not really true. A good RAID array will choose the drive with the head positioned closest to the data. Now I have no idea if this is standard on RAID controllers you would find in a small server, but it is certainly common on shared storage arrays.
  • Re:Swap Drive (Score:3, Informative)

    by Al Dimond ( 792444 ) on Tuesday July 26, 2005 @11:24AM (#13165449) Journal
    For one thing, you're misusing the term "virtual memory", which refers to the concept of separate programs getting their own address space. You don't want to disable that, and I imagine it would be nearly impossible to do so anyway ;-). Using disk space for extra RAM is typically called swapping.

    Point number two: it is perfectly possible to disable swapping in Linux and probably in most other systems. However, in speed tests on systems with lots of RAM, enabling swapping has actually been shown to lead to speed increases in many situations. This has led some people with enough RAM that they don't need to swap to set up half of their RAM as a RAM disk and then use that as a swap partition. Supposedly this yields great performance.

    If this storage device is cheaper than adding a similar amount of RAM to a system then it might give you something of a performance boost.
  • ramdisk comments (Score:5, Informative)

    by NASAdude ( 731949 ) on Tuesday July 26, 2005 @11:32AM (#13165547) Journal

    I submitted this as a story back on June 4. Since it was rejected (too verbose?), I posted it to my /. journal [slashdot.org]. My main question to other folks relates to how this would compare to using a regular ramdisk. The main deficiency with a ramdisk is that you'd have to reload the contents every time you reboot. Here's my article, with all its links:

    Giga-byte Technology recently came out with a DRAM-based PC card that operates as a SATA hard drive. The product, iRAM, uses power from the motherboard to keep memory active when the system is shut down. During power outages, the product uses an on-board battery to retain memory for up to 90 minutes. The iRAM card is being talked about in the news (InfoWorld [infoworld.com], itWorldCanada [itworldcanada.com], engadget [engadget.com], PCWorld [pcworld.com], multiplay forum [multiplay.co.uk]) as a means of booting Windows faster. That is, you install Windows onto the iRAM drive to take advantage of the RAM's faster read-access time. Just hope that you don't lose power for more than 90 minutes.

    Is boot time really that important, since many computers are on all the time? A ramdisk might have better uses, perhaps for caching frequently-accessed files such as databases and webservers. Or, if you insist on having faster bootup, instead of putting Windows on the iRAM disk, why not just store the hibernation file there?

    I implemented a RAM-based database for an internet tool in 1998 to alleviate the read/write load on my local hard drive. It turned out to be a simple solution for the problem. At the time, it was just a matter of using a DOS-based ramdisk driver (ramdisk.sys). On application startup, it copied the database files to the ramdisk. During operation, everything was read/written to the ramdisk, and periodic backups were made to the physical disk. There are some inherent risks, such as loss of data during a crash since data isn't immediately written to a physical hard drive, so it may not be a great solution for a mission-critical production database. The iRAM product would make this type of database even more stable, in that the risk of loss of data is much less.
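
    (A minimal sketch of that pattern in Python; the paths, directory layout, and sync interval are hypothetical, a real setup would need the ramdisk mounted beforehand and more careful crash handling.)

        import shutil
        import threading
        from pathlib import Path

        PERSISTENT_DIR = Path("/var/lib/mydb")    # hypothetical on-disk copy
        RAMDISK_DIR = Path("/mnt/ramdisk/mydb")   # hypothetical ramdisk mount point
        SYNC_INTERVAL_S = 300                     # assumed periodic backup interval

        def load_into_ramdisk():
            """Copy the database files from the physical disk into the ramdisk at startup."""
            if RAMDISK_DIR.exists():
                shutil.rmtree(RAMDISK_DIR)
            shutil.copytree(PERSISTENT_DIR, RAMDISK_DIR)

        def sync_to_disk():
            """Periodically copy the working set back to the physical disk.
            A crash between syncs loses at most SYNC_INTERVAL_S worth of writes."""
            shutil.copytree(RAMDISK_DIR, PERSISTENT_DIR, dirs_exist_ok=True)
            threading.Timer(SYNC_INTERVAL_S, sync_to_disk).start()

        if __name__ == "__main__":
            load_into_ramdisk()
            sync_to_disk()
            # ... the application reads and writes under RAMDISK_DIR from here on ...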

    That was a while ago, so I thought I'd look into setting up a ramdisk in XP for some amusement. What follows are the results of that search. It seems that the options are relatively sparse beyond the DOS-based driver. A few freeware and commercial packages are available, though. One key factor beyond price is the size limit of the ramdisk.

    Microsoft's ramdisk [microsoft.com] offerings since Win2k are limited. Included with the XP OS is a ramdisk sample driver that "provides an example of a minimal driver. Neither the driver nor the sample programs are intended for use in a production environment. Rather, they are intended for educational purposes and as a skeletal version of a driver." Installation isn't simple enough for most users to benefit.

    Alternatives include a shareware ramdisk [majorgeeks.com], AR ramdisk (archive link: http://web.archive.org/web/20041011170408/http://www.arsoft-online.de/products/product.php?id=1 [archive.org]) (freeware, 2GB limit, discontinued [arsoft-online.de], available for download here [nyud.net]), a freeware (64MB limit) and shareware (2GB limit) version here [ramdisk.tk],

  • by Anonymous Coward on Tuesday July 26, 2005 @11:34AM (#13165567)
    What is the point in doing this if even an old PCI based stripe set can already saturate the PCI bus? PCI is about 80 Mbyte per second tops for real world hardware if nothing else has to use the bus at the same time....
  • Re:I'd use Raid (Score:2, Informative)

    by etymxris ( 121288 ) on Tuesday July 26, 2005 @11:38AM (#13165615)
    RAID-1 decreases read latency, since you're effectively reading the data with two drive heads and can just read from whichever drive will deliver the data faster.
  • boot disk (Score:3, Informative)

    by dirvish ( 574948 ) <(dirvish) (at) (foundnews.com)> on Tuesday July 26, 2005 @11:40AM (#13165643) Homepage Journal
    Would you pay $100 for a 4GB Solid State Drive that is up to 6x faster than a WD Raptor?

    Yeah, I would only have my OS and applications on there with everything else on a second hard drive.
  • by Anonymous Coward on Tuesday July 26, 2005 @11:40AM (#13165645)
    Assuming that PCI Express has the always-on feature, you could get much higher bandwidth by having a hardwired interface that emulates a SATA or SCSI RAID array. That would also save a SATA port.
  • Re:New Tech (Score:3, Informative)

    by hackstraw ( 262471 ) * on Tuesday July 26, 2005 @11:56AM (#13165867)
    "$100 for a 4GB solid state drive is affordable, but not worth the price."

    For you maybe, but people do this every day: http://www.nextag.com/serv/main/buyer/outpdir.jsp?search=compact+flash&nxtg=67b8d_D13E150C29EFE508 [nextag.com]

    "What makes it so expensive to competitively price large solid state storage devices?"

    No moving parts. No "spin up" time. No power used when idle. The ability to transfer the storage like a CD/DVD.

    "On a side note, is anyone going to buy this drive that is 4GB and costs 100 bucks? I don't think it's much use to anyone."

    I would buy one in a heartbeat. Better deal than 1 gig at $100.
  • by a_nonamiss ( 743253 ) on Tuesday July 26, 2005 @12:04PM (#13165991)
    Before you go off praising yourself for being so technically savvy, you really should RTFA. Most of these uses were addressed.

    1. In the article, system boot time went from approx. 15 seconds to approx. 10 seconds. Hardly seems worth it.
    2. Specifically addressed in the article. 32-bit Windows XP Pro can only handle 4GB of RAM total (including the swap file). Why not just max out your system with 4GB of physical RAM and kill the swap file altogether? You wouldn't need to buy a $150 card and bottleneck all that memory bandwidth through a SATA controller.
    3. SQL databases and transaction logs may show some notable improvement (this was NOT addressed in the article), but again, you have to consider that Windows and SQL Server already attempt to cram (or cache, as they call it) as much of the database as possible into RAM. I've seen more than a few systems where SQL Server is using up 3.5GB of RAM just caching databases off the disk. Again, 32-bit Windows can only handle 4GB of memory. Sure, you could make the argument that enterprise or data center class devices might see a potential benefit from this, but then you're hardly talking about a mainstream device. Any machine that would rightly be used to run these massive operations would likely cost tens of thousands of dollars and would not be purchased by an average consumer. And again, when you're spending all this loot, why not just max out the system RAM?
    4. Other things. I'd be willing to engage in reasonable debate on these other ideas you have.

    I think this is a phenomenal idea; however, in its current form it borders on useless (essentially $510 for a 4GB drive that makes your OS boot up 5 seconds faster; the price/performance ratio is way off). I'd like to see the manufacturer make some improvements (more banks for RAM, larger theoretical capacity, faster interface) and this could be a truly useful product. (BTW, I don't claim credit for most of this post. It was all in the article.)

  • by sleeper0 ( 319432 ) on Tuesday July 26, 2005 @12:16PM (#13166137)
    You obviously have a Linux slant, which is what led you to this point. I challenge you to set up a Windows computer with 4 gigabytes of RAM (the max). Then run Performance Monitor and monitor the swap file, and run some applications. You'll find the swap file is still being used even though RAM + swap file can't be greater than 4 gig.

    While you are correct that Windows will force-swap idle processes even if there is no demand for system RAM, it is also true that in this situation you can just turn off the Windows swap file and everything will stay in memory and run very well.
  • Re:Disk evolution (Score:3, Informative)

    by Khelder ( 34398 ) on Tuesday July 26, 2005 @12:41PM (#13166499)
    The improvements in disk capacity are amazing and staggering, I agree.

    I only wish it were so for latency. Around 1980, seek times were in the neighborhood of 20ms. CPUs for personal computers were running at about 1 MHz (the Apple ][, for example), or a cycle time of 1 ms. So the computer would wait 20 cycles for a seek.

    Today seek times are around 5ms and CPU speeds are 3+ GHz, or a cycle time of about 1/3 nanoseconds. So now CPUs have to wait 15,000 cycles for a seek. Relatively speaking, disk is a lot slower than it used to be.

  • Re:Disk evolution (Score:2, Informative)

    by HickNinja ( 551261 ) on Tuesday July 26, 2005 @01:19PM (#13167029)

    1 MHz = 1 microsecond cycle time.
    20 ms / 1 us = 20,000.
    Around 1980, the computer would have to wait 20,000 cycles for a seek.

    3 GHz = 333 picosecond cycle time.
    5 ms / 333 ps = 15,000,000.
    Today, the computer would have to wait 15,000,000 cycles for a seek.

    Your point still stands, but your numbers were off by a factor of 1000.
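
    (For anyone who wants to double-check, here's the same back-of-the-envelope in Python, assuming the CPU simply idles one cycle per clock tick while waiting:)

        seek_1980 = 20e-3        # ~20 ms seek time around 1980
        cycle_1980 = 1 / 1e6     # 1 MHz -> 1 microsecond per cycle
        print(seek_1980 / cycle_1980)    # 20000.0 cycles

        seek_2005 = 5e-3         # ~5 ms seek time today
        cycle_2005 = 1 / 3e9     # 3 GHz -> ~333 picoseconds per cycle
        print(seek_2005 / cycle_2005)    # 15000000.0 cycles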

  • Re:No Way! (Score:5, Informative)

    by iamhassi ( 659463 ) on Tuesday July 26, 2005 @01:24PM (#13167107) Journal
    it's a shame it's not actually $100 for the whole unit

    Actually I don't know where they even got $100 from because the article says:
    "Gigabyte has told us that the initial production run of the i-RAM will only be a quantity of 1000 cards, available in the month of August, at a street price of around $150. "

    OH, and did anyone notice the price does not include RAM? So you're paying $150 for a card that can accept up to 4GB, not "$100 for a 4GB Solid State Drive".

    That's got to be the most misleading quote ever in a /. article description, since you'll spend closer to $500+ for the card and four 1GB DIMMs.

  • Re:Let me think. (Score:2, Informative)

    by KillShill ( 877105 ) on Tuesday July 26, 2005 @04:59PM (#13169980)
    Except that modern drives already do bad-sector marking internally; it's part of the S.M.A.R.T. diagnostic/self-monitoring tech.

    If a drive you bought in the last 5-10 years REQUIRES a full format, you might as well just throw it in the dumpster; it isn't going to work right.

    Quick formatting is also more properly called initialization, FYI.
  • Re:No Way! (Score:3, Informative)

    by tylernt ( 581794 ) on Tuesday July 26, 2005 @08:14PM (#13171890)
    [Modern SATA drives easily get 80MB/s, so how is 150MB/s "up to 6x faster"??]

    Seek times and sustained transfer rates. The memory-based disk has essentially 0ms seek times, whereas the Raptor averages 8.6ms. Also, the Raptor can only put out a sustained 63MBps reading start to finish from a contiguous, unfragmented file. If you are doing random seeks (database or file fragmentation) -- and most hard drive access is random -- the memory unit will kick the rotating hard drive in the teeth.
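
    (A rough model of that in Python, using the seek and transfer numbers above; the block sizes are assumptions, and real workloads with caching and command queueing will differ:)

        def effective_mb_per_s(block_kb, seek_ms, sustained_mb_s):
            # Each random read pays one seek plus the transfer time for its block.
            block_mb = block_kb / 1024.0
            return block_mb / (seek_ms / 1000.0 + block_mb / sustained_mb_s)

        for block_kb in (4, 64, 512):
            raptor = effective_mb_per_s(block_kb, 8.6, 63)    # Raptor figures quoted above
            ram = effective_mb_per_s(block_kb, 0.0, 150)      # ~0 ms seek, SATA-limited
            print(f"{block_kb:>3} KB random reads: Raptor ~{raptor:.1f} MB/s, i-RAM ~{ram:.0f} MB/s")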
