Data Storage Hardware

The State of Solid State Storage 481

carlmenezes writes "Pretty much every time a faster CPU is released, there are always a few who marvel at the rate at which CPUs get faster but loathe the sluggish pace at which storage evolves. Recognizing the allure of solid state storage, especially to performance-conscious enthusiast users, Gigabyte went about creating the first affordable solid state storage device, and they called it i-RAM. Would you pay $100 for a 4GB Solid State Drive that is up to 6x faster than a WD Raptor?"
This discussion has been archived. No new comments can be posted.


  • Am I getting old? (Score:5, Insightful)

    by iguana ( 8083 ) * <davep@nospAm.extendsys.com> on Tuesday July 26, 2005 @10:52AM (#13165016) Homepage Journal
    I remember seeing this sort of thing way back in the DOS days. Battery backed RAM on an ISA card. Product died out because RAM was more expensive than HD.
  • New Tech (Score:3, Insightful)

    by pcmanjon ( 735165 ) on Tuesday July 26, 2005 @10:52AM (#13165022)
    Well, this tech will never catch on if they can't make it affordable. Then again, it won't ever catch on if it is affordable but not worth the price.

    $15,000 for a 500GB solid state drive isn't affordable.
    $100 for a 4GB solid state drive is affordable, but not worth the price.

    What they need to do is make the tech better, yet affordable. What makes it so expensive to competitively price large solid state storage devices?

    On a side note, is anyone going to buy this drive that is 4GB and costs 100 bucks? I don't think it's much use to anyone.
  • by thebdj ( 768618 ) on Tuesday July 26, 2005 @10:53AM (#13165026) Journal
    $150 + (4x$90) = $510 for 4 GB of solid state storage. Definitely not worth it.
  • Re:Let me think. (Score:4, Insightful)

    by Pxtl ( 151020 ) on Tuesday July 26, 2005 @10:54AM (#13165039) Homepage
    Speaking of Windows, I would only want this if the OS used it intelligently for caching, hibernation, etc. automatically. If I had to manually juggle files between the magnetic drives and the fast storage, I wouldn't bother.
  • Swap Drive (Score:3, Insightful)

    by smelroy ( 40796 ) on Tuesday July 26, 2005 @10:55AM (#13165059) Homepage
    If you use this to hold your swap and your main partition, I think the speed improvement would be well worth it! Then buy a 300GB drive for your MP3 collection and all the other junk that doesn't need such access speed and you are set.
  • by BrK ( 39585 ) on Tuesday July 26, 2005 @10:56AM (#13165060) Homepage
    Wow, this thing looks almost EXACTLY like the RAM add-in cards we stuck into ISA slots in the mid/late 80's for our zippy '286 and '386 based machines.

    Looks like they dug up an old PCB screen, added a battery backup and changed the connectors to work with modern RAM :)

    Among other things, I handle the physical hardware design spec for my company's product (the product is software which is loaded onto hardware to make an "appliance"). I've received emails from quite a few vendors recently offering this sort of solid-state NV storage. I think this market sector is really starting to creep forward, and these might be the kinds of "disks" we see as the norm in the not-so-distant future.

    I think first off, though, these will be like caching drives - holding only the data that is most seek-time sensitive to a particular application.
  • Re:Let me think. (Score:3, Insightful)

    by Enigma_Man ( 756516 ) on Tuesday July 26, 2005 @10:57AM (#13165078) Homepage
    And this thing is only 6x faster than spinning media? That seems much slower than it ought to be, considering that it is solid-state. I suppose if that's only continuous throughput, and doesn't take latency into account, it might be okay, but still. How about 100x faster?

    -Jesse
  • by Jonsey ( 593310 ) on Tuesday July 26, 2005 @11:15AM (#13165343) Journal
    Actually, the card only addresses the RAM at 100MHz (I think that's considered PC1600, I may be wrong here though).

    That means this card uses your old chump-RAM, or very cheap-to-buy RAM. It's a good deal, just in that it gives me something to do with all the PC2100 I've got lying around.
  • I'm afraid... (Score:2, Insightful)

    by oringo ( 848629 ) on Tuesday July 26, 2005 @11:18AM (#13165379)
    It's not as cheap as $100. If the story submitter had RTFA, the card itself costs $150, and that doesn't include the cost of equipping it with 4GB of RAM, which runs around $90 × 4 = $360. The total cost comes out to $510.
  • Re:No Way! (Score:3, Insightful)

    by gadgetbox ( 872707 ) on Tuesday July 26, 2005 @11:23AM (#13165431)
    What kind of quality are you recording at? At 10 MB per minute (stereo 44.1kHz, 16-bit), assuming 8 mono tracks, that's 40 MB a minute. On a 4000 MB drive (in reality it would probably be less, say 3800), that gives you about 100 minutes of recording time, or just over an hour and a half.

    Now, when recording music it would be more likely (and a good idea) to record at 24 bits and dither later if going to CD, so your record time would be far less. Just a guess, but probably under 50 minutes. So that gets you ten 5-minute songs. But if you do 2 takes of everything (which is plausible), now you have room for 5 songs. You'll probably do more than 2 takes of quite a few tracks, so realistically you'd be able to fit *maybe* two full songs on a 4 gig drive.

    It doesn't appear that a 4 gig drive would really be enough, unless you were prepared to dump your files quite a few times a day. Doesn't seem worth it. Not to mention that a plain old IDE drive can easily handle 8 tracks, even with a moderate CPU. SS storage isn't there yet for media work, at least not from a cost/performance point of view.
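The back-of-the-envelope math above can be checked with a short script (a sketch assuming uncompressed 44.1 kHz PCM and binary megabytes; the function name is just for illustration):

```python
# Rough check of the recording-time arithmetic: uncompressed PCM,
# 44.1 kHz mono tracks, on a drive with ~3800 MiB usable space.
def minutes_of_recording(capacity_mib, tracks, sample_rate=44100, bytes_per_sample=2):
    bytes_per_minute = sample_rate * bytes_per_sample * tracks * 60
    return capacity_mib * 2**20 / bytes_per_minute

print(round(minutes_of_recording(3800, 8)))                      # 94 (16-bit)
print(round(minutes_of_recording(3800, 8, bytes_per_sample=3)))  # 63 (24-bit)
```

The 24-bit figure lands closer to an hour than the 50-minute guess, but the conclusion holds: a handful of multi-take songs fills 4GB quickly.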
  • Re:Let me think. (Score:5, Insightful)

    by Guspaz ( 556486 ) on Tuesday July 26, 2005 @11:23AM (#13165432)
    As the other reply mentioned, it's a SATA drive, so it's limited to 150MB/s (100MB/s in practice). The latency is very low, yes, but that's not the only factor. There is only so much you can do with double the bandwidth, no matter how low the latency is.

    I also wonder if the benchmarks were done with drive caches on or off. I would imagine that this drive would be faster with caches off. With what might as well be zero latency on disk accesses, the benefit of a cache is lost; reading ahead probably will only waste bandwidth reading stuff we may not need.

    I'm very disappointed that the article didn't mention SATA2 (300MB/s), which is already available in most new motherboards. With double the bandwidth it would have made a big difference. It's very likely the device doesn't support SATA2. However the Anandtech article makes NO MENTION at all of SATA2, not even to the point of saying "We'd like to see this drive with SATA2 support."
  • Disk evolution (Score:5, Insightful)

    by JordanH ( 75307 ) on Tuesday July 26, 2005 @11:33AM (#13165558) Homepage Journal
    Pretty much every time a faster CPU is released, there are always a few who marvel at the rate at which CPUs get faster but loathe the sluggish pace at which storage evolves.

    On the contrary, I've always been amazed at the rate of price/performance evolution in HD technology.

    Consider that in 1982 a 10 MB disk cost something on the order of $3500, while today you can reasonably expect to get an 80 GB disk for $50. That's a drive with 8000x the storage at 1/70 the price, or a price/MB improvement of roughly 560,000x. And that doesn't take into account the dramatic improvement in reliability and speed (both access and interface) that the newer drives exhibit. Do you think CPUs have kept up with this?

    I've heard people predict the end of moving-parts mass storage for years now, but it still seems pretty distant considering the great values we're getting with HD technology.
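The price-per-megabyte claim above is easy to verify (a sketch using the comment's own 1982 and 2005 figures):

```python
# Price/MB improvement from a $3500 10 MB drive (1982) to a $50 80 GB drive (2005).
old_price, old_mb = 3500, 10
new_price, new_mb = 50, 80_000

capacity_ratio = new_mb / old_mb           # times more storage
price_ratio = old_price / new_price        # fraction of the price
improvement = (old_price / old_mb) / (new_price / new_mb)
print(capacity_ratio, price_ratio, round(improvement))  # 8000.0 70.0 560000
```

8000x the capacity at 1/70 the price multiplies out to a roughly 560,000x improvement in price per megabyte.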

  • Re:New Tech (Score:2, Insightful)

    by Alien Being ( 18488 ) on Tuesday July 26, 2005 @11:36AM (#13165588)
    I disagree. If it's virtual memory you want, then you would be better off putting the memory right in a DIMM slot.

    This type of storage makes for a good /tmp where you want lower latencies than a disk, but don't care that it's not as fast as RAM.

    Another good use would be to use it as a filesystem cache for the OS on a diskless network client.
  • Re:No Way! (Score:2, Insightful)

    by sp3tt ( 856121 ) <<sp3tt> <at> <sp3tt.se>> on Tuesday July 26, 2005 @11:38AM (#13165617)
    But wouldn't a hyperfast 4GB drive be perfect for virtual memory? But then again, the people who really need that much memory are the ones who need a lot of storage too...
  • by Shotgun ( 30919 ) on Tuesday July 26, 2005 @11:44AM (#13165724)
    2. Imagine how fast your system would be if you took the memory off the card and installed it on your motherboard, thus eliminating the need for a swap file.

    3. Imagine how fast your database server would be with its transaction log installed in a memory file. Hey, throw the tempdb (for SQL Server) on there as well, or since the memory is now just standard memory and won't need a special driver, you can just switch to Linux and use a real database.

  • by slashdot.org ( 321932 ) on Tuesday July 26, 2005 @12:04PM (#13165993) Homepage Journal
    FreeBSD allows you to allocate a dynamically resizable filesystem out of swap (see: md, mfs). I'm thinking of mounting the whole thing as a super-fast swap partition - basically, as a giant L4 cache - and mounting /tmp and a few other speed-critical filesystems out of there.

    Mmmm, hyper-fast builds that don't depend on the latency of moving parts...


    This doesn't make sense. I suspect that you were misled by the incorrect summary. You don't get 4GB of solid state storage for $100; that would actually be a really good deal. All you get is a card with SATA on one side and RAM slots on the other.

    So instead of buying this card you could take the $100 towards a motherboard that supports > 4GB of RAM. Then the RAM will be sitting on a bus that can actually sustain datarates WAY higher than SATA.

    Since you don't need persistent storage for cache it makes little sense to stick it on a bus that can theoretically do, what, 150 MB/s? When you can stick it on a bus which can do several GB/s.

    I don't really see the point of this card, since it will only keep the data for 16 hours if not powered. In other words, if you leave for a weekend and for some reason the power to your PC is turned off, you're out of luck.

    Other cards that I have seen in the past that make more sense, actually have a normal drive for persistent storage. If power fails, there's enough backup power to write everything to disk. That's basically like having cache on the disk equal to the size of the disk.

    Bottom line: this is a rehash of what's been done many times before; it didn't really take off then, and given a relatively stupid implementation, it probably won't take off now.
  • by danharan ( 714822 ) on Tuesday July 26, 2005 @12:23PM (#13166266) Journal
    When you already have lots of RAM and your DB indexes and temp tables are constantly being swapped, this might make sense.
  • Re:I'd use Raid (Score:4, Insightful)

    by sirwired ( 27582 ) on Tuesday July 26, 2005 @12:50PM (#13166607)
    Having disks in parallel doesn't solve the latency problem; it only increases throughput.

    Latency comes from three sources:
    1) Head latency.
    2) Rotational latency.

    These are the two sources you have considered. Striping indeed does absolutely nothing to help there.

    You forgot the third source of latency:
    3) The-disk-is-busy-serving-another-request latency.

    Your comment would be true for a primitive OS with a single-threaded I/O method, and/or a RAID system with no command queue.

    Given that modern RAID systems are NOT primitive, I/O performance is no longer measured with rotational + head latency vs. throughput, because those measurements no longer make sense.

    There are two kinds of performance measurements for modern disk subsystems:
    1) MB/sec. (bandwidth) This is what most people think of when they think of throughput.
    2) I/O / sec. This measurement is simply the reciprocal of the head+rotational latency in the case of a SINGLE DRIVE. However, in a multi-drive setup, max I/O / sec. increases proportionally with the number of drives, up to a point (eventually you hit the limits for the RAID controller, bandwidth, whatever).

    If we measure latency as the time it takes a single drive to physically get the data for a single request, sure, multiple drives don't help. If we measure latency as the amount of time between when the application asks for the data and when the disk delivers it, RAID helps quite a bit, because the different I/Os are distributed to multiple disk heads, each of which can contribute its own I/O handling capacity.

    SirWired
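The distinction drawn above between single-request latency and aggregate I/O capacity can be sketched in a few lines (an idealized model that ignores controller and bus limits; the function name is just for illustration):

```python
# Idealized IOPS model: a single drive's max IOPS is the reciprocal of its
# average (seek + rotational) latency; with a command queue distributing
# independent requests, aggregate capacity scales with the number of spindles.
def max_iops(avg_latency_ms, drives=1):
    per_drive = 1000.0 / avg_latency_ms
    return per_drive * drives

print(max_iops(8.0))            # 125.0  -> one drive, ~8 ms average latency
print(max_iops(8.0, drives=8))  # 1000.0 -> 8-drive stripe under ideal scaling
```

Each request still waits the full ~8 ms, but the array as a whole serves eight times as many of them per second.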
  • Re:Disk evolution (Score:3, Insightful)

    by cvd6262 ( 180823 ) on Tuesday July 26, 2005 @12:54PM (#13166653)
    On the contrary, I've always been amazed at the rate of price/performance evolution in HD technology.

    If I could make a not-so-appropriate industrial comparison to the article summary:

    Pretty much every time a faster F1 engine is released, there are always a few who marvel at the rate at which the cars get faster but loathe the sluggish pace at which diesel engines evolve.

    HDD and CPUs are different beasts that do different tasks, and fight different issues. It is not surprising that one can pick up speed and the other capacity.
