Hardware

Why Not Solid State Hard Drives?

I never quite thought I'd see this in my lifetime, but RAM is now cheaper when it comes to memory-per-unit-of-currency than hard drives. Of course, those of you who have noticed this have also wondered, quite reasonably, whether it might be cheaper to start building Solid State Hard Drives entirely out of RAM, rather than using the standard ole platters. Is there anyone in the market who has also noticed this and is attempting to market a product that will fill this need? Remember this puppy from 2 years ago, and this story, mentioned a year ago? While the first one was a bit of a laugh, the second article does mention a limit to the lifetime of current MO hard drives. Are we closing in on that limit now? Update: 10/11 2a EDT by C: I apologize for not catching the erroneous statement above earlier. What I meant to say was that RAM is at its cheapest point in recent years, not that it is cheaper per unit of currency than hard drives, which is absolutely false. Chalk this one up to too much creative writing in college, lack of sleep, and a long frustrating day. Thanks to brian@pongonova.net for pointing out that error.

waterlogged asks: "I was just wondering if anybody has heard of a cheap RAM-based network drive? Seems to me, with RAM prices at about US$12.00 for 128 megs, that someone should have developed a battery-backed version of this by now, to plug into a network or even a bus. A gig worth of 8ns seek time storage for $120, anyone? That would just about eliminate any wait in loading programs."

BigSlowTarget asks: "There are some previous articles on Slashdot about vendors selling solid state drives, but they all seem to be quite expensive - particularly given the slide in the cost of memory. Has anyone hacked together a solid state drive to take advantage of $60/GB memory prices? I'd really like to be able to boot and run at solid state speed without spending thousands."

Jah-Wren Ryel asks: "In case you haven't noticed, RAM is incredibly cheap, you can put a gigabyte of PC133 RAM into your machine for less than $60. A year ago, that would have cost more like $600. So now it is feasible for one to have a 10-15GB RAM disk, except for one thing - most motherboards won't support more than 2GB total (4 dimm slots x 512MB per dimm). It seems like it wouldn't be too hard to design a PCI card to hold 20-30 dimms and make that available through a hardware windowing scheme (like EMS/EMM back in the old 16-bit days). With the right drivers it could be used as a big RAM disk or for buffercache. Is there such a product out there? The closest I have seen are solid-state disks that sit on the other end of a scsi bus, are too expensive, and aren't anywhere near as fast as a PCI implementation could be."
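The "hardware windowing scheme" mentioned above maps a small, fixed window of the host's address space onto a much larger memory by programming a bank-select register. A toy Python sketch of the addressing arithmetic (all names here are hypothetical illustration, not any real card's driver API):

```python
# Toy sketch of an EMS/EMM-style windowing scheme: the host only ever sees a
# small fixed window of the card's RAM, and a "bank select" register chooses
# which slice of the much larger backing store that window points at.
WINDOW_SIZE = 64 * 1024                 # e.g. a 64 KB window in the host address space
backing = bytearray(256 * WINDOW_SIZE)  # small stand-in for gigabytes of card RAM
bank = 0                                # the bank-select register

def select_bank(linear_addr: int) -> int:
    """Program the bank register so linear_addr falls inside the window;
    return the offset within that window."""
    global bank
    bank = linear_addr // WINDOW_SIZE
    return linear_addr % WINDOW_SIZE

def read_byte(linear_addr: int) -> int:
    offset = select_bank(linear_addr)
    return backing[bank * WINDOW_SIZE + offset]

def write_byte(linear_addr: int, value: int) -> None:
    offset = select_bank(linear_addr)
    backing[bank * WINDOW_SIZE + offset] = value

write_byte(1_234_567, 0xAB)             # lands in bank 18 of the backing store
assert read_byte(1_234_567) == 0xAB
```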

So what technical details (along with the issues of volatile data and price) may be preventing the construction of RAM-based drives, and is there anything else keeping some entrepreneurial soul from bringing such a thing to market?

  • by mutt lynch ( 465565 ) <matt-figroid&cox,net> on Wednesday October 10, 2001 @02:46PM (#2411909) Journal
    You would still need a stable flow of juice to keep from losing everything in case of a blackout or something. I'll stick to the platters for now.
    • by Another MacHack ( 32639 ) on Wednesday October 10, 2001 @02:48PM (#2411918)
      If only there were some sort of device which could store electrical power for later use.

      • Cute. (Score:2, Flamebait)

        by Kasreyn ( 233624 )
        We understand the concept of the battery, smartass.

        We need a UPS instead. And the "u" part is the tough part.

        -Kasreyn
    • How many times has your CMOS been wiped out? Sure, it happens once in a while, but CMOS doesn't really have any safeguards either. Keeping juice flowing into RAM shouldn't really take too much, but you wouldn't want to let it sit on a shelf for very long.
      • Re:Ummm CMOS? (Score:4, Informative)

        by suwain_2 ( 260792 ) on Wednesday October 10, 2001 @02:55PM (#2412009) Journal
        Your CMOS is something different, actually. Most computers use "DRAM", which needs to be "refreshed" often, or it'll "lose its charge"... ROMish stuff is SRAM, which doesn't need the stupid refreshes... But it's more expensive, so a couple gigs of SRAM is sorta out of the question. :(
      • Re:Ummm CMOS? (Score:3, Informative)

        CMOS only consumes power on state changes. DRAM needs to be refreshed every few ms. Thus, the battery power required for DRAM would be much greater than that used to hold your CMOS settings in BIOS.
    • A long time ago, in computer years, the Apple //gs (still have one) had a couple of cards available for it that were "RAM drives". As I recall, they had a rechargeable battery and kept the RAM refreshed while the power was off. This was way back when RAM was over $50/MB and I think they were limited to 4MB or 8MB, but back then that would hold tons of pirated software. :) So, this idea is certainly not new...
    • How about a standard Lithium Polymer rechargeable battery? The battery would never need to be replaced, could be fitted onto the solid state drives with little or no trouble, and could offer battery backup when the system is off or during a power outage.

      Computers could also be designed to bypass the hard drives when the system power is off. I doubt a hard drive would utilize much energy.
    • Actually, there are a number of ways to ensure that the data is non-volatile. Flash or battery backed RAM come to mind. Bitmicro (www.bitmicro.com) is a vendor that currently sells non-volatile flash based drives. I checked them out a little while ago, but found that it was a bit too pricey for me still.

      I'd suggest getting a smallish (1gb or so) flash drive for your windows/linux/amiga/whatever partition, and use some monstrous drive to store all your media files.
  • by Slashdolt ( 166321 ) on Wednesday October 10, 2001 @02:47PM (#2411916)
    I've been saying this for years. Eventually, we need to scrap the spinning platters. Unless I have a butt-load of MP3's and other things I don't really need, I can easily fit most of my stuff into 4GB or less.
  • Solid state drives. (Score:4, Informative)

    by billn ( 5184 ) on Wednesday October 10, 2001 @02:48PM (#2411922) Homepage Journal
    (heh. oops.)

    Cenatek [cenatek.com] seems to be on a good track with these. They offer a PCI card with a handful of DIMM slots and a slap-on rechargeable battery panel, which holds enough power to run a connected hard drive of appropriate size, so that the contents of what is essentially a RAM disk can be dumped in the event of a shutdown or power loss. A little spendy still, for consumer use, but seeing something like this backing busy websites, or storing database file structures, would be pretty slick.
    • by Telek ( 410366 ) on Wednesday October 10, 2001 @03:14PM (#2412149) Homepage
      Why would you bother with one of these?

      According to their website, the sustained data transfer rate is 80-100MB/s (umm, WHY would it vary if it's all solid state?). Add to that the fact that the PCI bus is limited to 133MB/s and there's more than just 1 device using the PCI bus (and a lot of them aren't conservative when it comes to bus usage)...

      Or, for 1/4 the price you can pack together 2x75GB drives in a raid 0 array, get 30x as much space AND get the same bandwidth.

      No, right now there's not much point to solid state drives. Iff (sorry, math hangover, If and Only If) hard drive prices were to stay the same, and memory prices were to fall by an order of magnitude (let's say 10x), THEN I could see there being a market for this. But you'd also need to use either PCI-64 (533MB/s+) or get some other designed bus to support the much higher throughputs.

      But then again this just begs the question, what do you need that much more speed for?

      To take advantage of RAMdisks, you pretty much need to have your computer on all the time, or in standby mode when you're not using it. At this point, what do you need much higher disk bandwidth for?

      Loading your mp3s or movies?

      Loading office in 2s instead of 6s?

      running your games (oh wait, that's CPU/GPU intensive not HD).

      Quite frankly I don't see the technology or the market right now to create solid state HDs.
      • Or, for 1/4 the price you can pack together 2x75GB drives in a raid 0 array, get 30x as much space AND get the same bandwidth.

        That may be better for some applications. No amount of RAID magic can reduce the latency (seek time), though. So this might be good for some database apps, but a RAID would be better for streaming the data. Though, I can't think of very many apps that require a single 80MB/s stream.
      • by digitalEric ( 527320 ) on Wednesday October 10, 2001 @04:30PM (#2412606)
        > ...or get some other designed bus to support the much higher throughputs.

        This is exactly what AGP was designed for -- high-bandwidth I/O to main memory, without blocking the PCI bus. Plus, the AGP GART can do most of the address translation you would need. All modern PC (and even Apple) chipsets have an AGP interface, which is wasted on a headless server. . . until now. AGP even provides extra power (especially the obscene AGP PRO), so an onboard battery/HDD could be used for backup.

        > To take advantage of RAMdisks, you pretty much need to have your computer on all the time, or in standby mode when you're not using it.

        This is true. *Or* you could have your computer net-boot from a server with one of these. Even 100-megabit transfers from memory will feel faster than a local hard disk. And gigabit over copper is becoming very affordable these days.

      • by tcc ( 140386 ) on Wednesday October 10, 2001 @05:03PM (#2412754) Homepage Journal
        --
        To take advantage of RAMdisks, you pretty much need to have your computer on all the time, or in standby mode when you're not using it. At this point, what do you need much higher disk bandwidth for?

        Loading your mp3s or movies?

        Loading office in 2s instead of 6s?

        running your games (oh wait, that's CPU/GPU intensive not HD).
        --

        FORGET ABOUT HOME USE, think a bit.

        There are limiting factors with hard drives, mainly LATENCY issues. This might not be a problem for you or any home users, but for some specific scenarios it is, and a BIG one. I'll give you a specific case where I could benefit from such a system:

        Without going into too much detail: I work with a lot of files; my workstation generates over 200,000 files for a single simulation. No, it can't be put in a database for now; it has to be accessed from different software with no database support. Every other part of the software is optimised to know exactly which file to open, using the maximum of memory, cropping useless data, etc. etc... everything is maximized to a more than good level. The only bottleneck I have in my system right now is the drive's latency, and believe me, if I could go down from milliseconds to nanoseconds or microseconds, it would be over a tenfold increase in speed, and I wouldn't need 10 machines running in parallel to do the job in one day (which unfortunately I don't have :)); one machine could replace those 10 with only that little step up. Thing is, I can't afford 10 machines and the drive subsystem, nor current SSD solutions.
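A rough back-of-the-envelope shows why latency, not bandwidth, dominates a workload like that. The 200,000-file count comes from the comment above; the latency figures are assumptions (typical numbers for the era), not measurements:

```python
# Back-of-the-envelope: 200,000 small file accesses per simulation, dominated
# by per-access latency rather than transfer time. Latencies are assumptions.
files = 200_000
hdd_latency = 0.008        # ~8 ms average seek + rotational latency
ram_latency = 0.000_010    # ~10 microseconds for a RAM-backed device

print(f"disk: {files * hdd_latency / 60:.0f} minutes spent just waiting on seeks")
print(f"RAM : {files * ram_latency:.1f} seconds")
print(f"speedup on the seek component: {hdd_latency / ram_latency:.0f}x")
```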

        Most applications are bandwidth-hungry, but there is some stuff out there requiring LOW LATENCY, and heck, if there wasn't a need for that, you wouldn't see $60,000 solid state drives out there. There's a need, but sometimes you're limited by your R&D budget and you'd gladly take an emerging technology or home-made stuff if it means saving 80% of the cost of the equivalent part and increasing your efficiency by a factor of 10.

        I can already see your answer: "if you need it, and it slows down your R&D, buy it, for the sake of the company." Sometimes it doesn't work like that, for cashflow reasons, and you have to work with what you can get in your specific budget. The issue here (and the title of this forum) is about cheap storage that would offer low latency and high bandwidth (with loads of storage). I'm sure I am not the only one who would GLADLY grab a 30GB solid state drive for a fraction of what it would cost me with the current systems (which are way overpriced considering the price of RAM right now).

        There's a need for Solid State. While I understand that the gap between a home user and a workstation/server class machine is narrowing more and more, just because a home user wouldn't benefit from such a device doesn't mean it's not needed at the corporate or R&D level. Current solutions wouldn't be selling for $50K+ if there wasn't a need for them... heck, they wouldn't exist.
      • Two words: (Score:3, Insightful)

        by Dwonis ( 52652 )
        Noise Pollution
  • won't a loss of power wipe out all of your data? I remember that you could create a RAM disk on Macs many years ago, and it was kinda cool, until you realized that it would disappear with the inevitable "bomb" hard crash.

    Okay, add a UPS and all, but wouldn't this still be much less stable than a HD that you can pull out and ship across the country without it losing data?

    • The biggest use of RAM drives on Macs was for PowerBook users. With a lightweight word processor (Word 5 *cough*) and a lightweight System folder, you could spin down your hard drive, dim the screen, and get gobs of battery time out of those old machines, and Oh! the blissful silence!

      You'd just want to save your files to the hard drive every now and then to prevent Murphy from visiting.

    • hehehe I remember when I got 24MB of ram in my 486 dx2/66. It was so cool to be able to make a 16mb ramdisk and play Doom out of ram entirely =P
  • RAM Drives. (Score:5, Interesting)

    by Gedvondur ( 40666 ) on Wednesday October 10, 2001 @02:48PM (#2411930)
    RAM drives are a great idea; the problem is the IDE or SCSI bus. Seek times and retrieval times can be greatly reduced, but the total bandwidth is still a limitation.

    Seagate had developed a standard years ago called IPI, I think. It was for the 30 and 40 megabyte RAM drives that they had developed. I know it never took off, but it was specifically for static RAM drives.

    What would be really cool would be RAM storage with an InfiniBand interface. It's possible to use it for storage or for regular memory.

    • I agree - even an SDRAM controller right on the PCI bus can't be as fast as the system's main memory.

      Linux, FreeBSD, and MacOSX (I dunno about Windows) all have excellent VM and file system caches (sometimes they're tightly integrated). If you have 4GB of RAM in your system, and your running processes have 64MB resident, then it's like having a 3.94GB RAM disk. That is, of course, unless you routinely access more than 3.94GB of files.

      This is why having lots of RAM is good, even if your processes don't use much.

      It's not perfect - I know that on FreeBSD 4, for example, if you have zillions of small frequently used files in the cache, and then you do a big tar, all those important little files will get pushed out of the cache in favor of the new file, which might only be accessed once. Also, the kernel will swap processes out to make room for file system cache, and there aren't a lot of knobs for tuning all of this. E.g., I don't think you can tell the kernel "keep *all* my processes resident, even if they're idle... no really, I *do* have enough RAM!"

      Anyway I just don't see any use for standalone RAM disks. There are very few real-world applications that need *deterministic* 1ms seek times. If you rely on the OS you will generally get the best performance.
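One way to see how much of that "free RAM disk" the kernel is already giving you is to look at the page-cache counters. A minimal sketch, assuming a Linux /proc/meminfo (other systems expose the same information differently):

```python
# Minimal sketch: show how much RAM a Linux box is already using as file-system
# cache, i.e. the "free RAM disk" the comment above describes.
def meminfo_kb() -> dict:
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0])    # values are reported in kB
    return info

m = meminfo_kb()
cached_mb = (m.get("Cached", 0) + m.get("Buffers", 0)) / 1024
total_mb = m["MemTotal"] / 1024
print(f"{cached_mb:.0f} MB of {total_mb:.0f} MB is currently acting as disk cache")
```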
  • Huh? (Score:4, Insightful)

    by BMazurek ( 137285 ) on Wednesday October 10, 2001 @02:49PM (#2411941)
    RAM is now cheaper when it comes to memory-per-unit-of-currency than hard drives

    Huh? Unless I'm completely out to lunch, I don't see this....

    Is my math wrong, or is Cliff's?

    • Re:Huh? (Score:5, Informative)

      by jtdubs ( 61885 ) on Wednesday October 10, 2001 @02:57PM (#2412024)
      You are right. Cliff is wrong.

      Given his figure of 128MB for $12, that's 10.66MB per dollar.

      From western-digital.com I can get a 40GB 7200RPM UATA/100 Caviar hard drive for $117.00. That's 341.88MB per dollar.

      That puts hard drives in the lead by a factor of 32. So, until 128MB of RAM costs $0.375, hard drives still have the lead.

      Justin Dubs
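The arithmetic behind those figures, for anyone who wants to plug in their own prices (the numbers below are the ones quoted in the comment; drive makers count 1 GB as 1000 MB):

```python
# Reproduce the MB-per-dollar comparison from the comment above.
ram_mb, ram_price = 128, 12.00            # 128 MB of PC133 for $12
hdd_mb, hdd_price = 40 * 1000, 117.00     # 40 GB Caviar for $117

ram_rate = ram_mb / ram_price             # ~10.7 MB per dollar
hdd_rate = hdd_mb / hdd_price             # ~341.9 MB per dollar
print(f"RAM : {ram_rate:.2f} MB/$")
print(f"disk: {hdd_rate:.2f} MB/$")
print(f"disks lead by a factor of {hdd_rate / ram_rate:.0f}")   # ~32
```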
      • Re:Huh? [OT] (Score:4, Insightful)

        by Telek ( 410366 ) on Wednesday October 10, 2001 @03:06PM (#2412092) Homepage
        The Slashdot crew over the past few days/weeks have been extremely out to lunch; has anyone else noticed this?

        Example 1:
        but RAM is now cheaper when it comes to memory-per-unit-of-currency than hard drives -- cliff

        RAM is 30-40x more expensive than HDs, I don't know WHAT he was smoking when he thought that...

        Example 2:

        I suspect a fair number of people never try Linux or one of the BSDs because they're moderately happy with AOL as an ISP -- timothy

        how many people do you know who would be running Linux if it wasn't for the fact that they were using AOL? (Let me rephrase, how many tech savvy people are using AOL (that aren't forced to)?)

        And the anti-Microsoft hysteria has been especially harsh over the past few days. That article about File Extensions And Monopolies [slashdot.org] was so pathetic it didn't even qualify as satire. It should never have seen the light of day on either /. or Salon.

        And /. gets over 200 story submissions per day, and yet the average number of story postings has gone way down, now to about 10/day. What's going on here?
  • Cenatek (Score:4, Informative)

    by [amorphis] ( 45762 ) on Wednesday October 10, 2001 @02:50PM (#2411952)
    Cenatek [cenatek.com] may make exactly what you're looking for. It's a PCI card, and uses standard SDRAM sticks.

    From their site:
    The Rocket Drive stores data in memory modules (standard dynamic random access memory, or DRAM) rather than on magnetic media.
    • It also says that their device can support sustained transfer rates of 100MB/sec and that it's "thousands of times faster" than disks. With 3 striped disks over a 2Gb Fibre Channel link I can get 180MB/sec sustained. There is a huge difference between twice as fast and thousands of times as fast. I doubt that even their seek times are more than 10s of times as fast. The seek times may even be slower if you restrict your hard disk to reads and writes on the outer 4 GB of the platters on a 15,000rpm drive. Considering that hard drives are a proven technology, hot-swappable, and expandable into the terabyte range, I think I'll stick with the disks.

      Maybe they are comparing it to floppy disks?
  • by thryllkill ( 52874 ) on Wednesday October 10, 2001 @02:51PM (#2411961) Homepage Journal
    L337 script kiddies would no longer have to worry about their Hard Drives telling the tale of all of their l337 ownz3r!ngs. As soon as the feds show up yank the plug.

    This would also work for War3z fiends. *again, yanks plug* "What do you mean piracy, I don't even have an OS on there."

    Seriously, I think it would only be useful if you could couple it with a RAID-like (I know it wouldn't be true RAID) system, so if the power goes out for whatever reason (power outage, UPS goes bad, battery dies) your info would still be there; maybe a RAM-drive that does nightly/hourly backups...
    • Two words: RAM remanence [navy.mil].

      k.
    • *again, yanks plug*

      of course, a system crash or a reboot would do about the same thing.

      This by itself would preclude many script kiddies using notoriously unstable OSen, never mind systems that get infected by trojans etc.

      "issue the reboot command now!"

      heh

    • Seriously, I think it would only be useful if you could couple it with a RAID-like (I know it wouldn't be true RAID) system, so if the power goes out for whatever reason (power outage, UPS goes bad, battery dies) your info would still be there; maybe a RAM-drive that does nightly/hourly backups...

      Why not just make a 40GB HD with a 40GB cache? When an access is made to data already accessed, it would just be found in the cache on the device, and (depending on your write-through, etc. technique) this should be the same as a platter-based device in "RAID" with a RAM-based device. You would have the same lag at initial load as the platter-based device, but your load times from that point on should only decrease. The data in the HD cache should be able to remain cached through a system soft-reboot, and possibly, with a switch on the side, remain during a hard reboot (useful if you want to change the sound card and don't mind the pennies' worth of electricity used) or be turned OFF when you go on vacation and there is no need...

      Heck, I'm sure you could get a nice cache hit ratio with only 10GB of cache on the 40GB HD. Those of you with 40 gigers, think about how much of that data is just mp3s and iso's and how much is OS, browser, etc...
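Whether 10GB of cache in front of a 40GB drive gets a good hit ratio depends entirely on how skewed the access pattern is. A quick LRU simulation over a synthetic 90/10 workload (the split is an assumption; real workloads differ) gives a feel for it:

```python
# Quick-and-dirty LRU simulation: how often a 10GB cache in front of a 40GB
# disk would hit, given a synthetic skewed access pattern.
import random
from collections import OrderedDict

DISK_BLOCKS = 40 * 1024      # 1 MB blocks on a 40 GB disk
CACHE_BLOCKS = 10 * 1024     # 10 GB of cache
cache = OrderedDict()
hits = accesses = 0
random.seed(1)

for _ in range(100_000):
    if random.random() < 0.9:                        # 90% of accesses hit the
        block = random.randrange(DISK_BLOCKS // 10)  # "OS, browser, etc." tenth
    else:
        block = random.randrange(DISK_BLOCKS)        # the mp3-and-ISO bulk
    accesses += 1
    if block in cache:
        hits += 1
        cache.move_to_end(block)
    else:
        cache[block] = True
        if len(cache) > CACHE_BLOCKS:
            cache.popitem(last=False)                # evict least recently used

print(f"hit ratio: {hits / accesses:.1%}")
```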
    • This can be done, in a usable way, with a steganographic filesystem (one that doesn't just encrypt the files, but encrypts everything, so you can't tell if there are files (or a partition table, etc)).

      The other (slightly less secure) way is to use a network filesystem for storage, of encrypted files, and decrypt the files in memory on the diskless desktop computer as you were using them. That way the decrypted files couldn't be written out in swap, or any of the other common problems. Once the power was turned off, it'd all be gone. But unlike most systems, the decryption would all be done locally, preventing clear-text from ever being transmitted.

      Ideally your BIOS's POST routines would involve multiple writes to RAM, of patterns and pseudo-random data. So you'd just hit RESET and it'd perform a thorough wipe. (Theoretically data can be recovered from RAM once the computer is off.)
  • New Math? (Score:2, Redundant)

    by MacGabhain ( 198888 )

    $20 gets you about 256 MB of ram. $200 gets you about 75,000 MB of HD space. Ten times the price gets you 300 times the MBs. What are you smoking, and can you give some of it to my credit card companies?

  • by ackthpt ( 218170 ) on Wednesday October 10, 2001 @02:52PM (#2411971) Homepage Journal
    We had a Megastore (core memory!) on a PDP11/45 (which was used as the swapping drive, hence upping the category to 11/50, IIRC) back in the 70's. My Nikon Coolpix uses flash RAM as a formatted disk, something I'm certain others have noticed. Flash RAM is able to store and retain data with the power off, but doesn't appear to transfer very fast. Using SDRAM would be fast, but only so long as: A) you have a constant source of current, and B) you don't test/clear RAM on rebooting the CPU. Certainly old ideas, but as long as you can set up a big ramdisk in your OS and put your large temp/workfiles there, do it.
  • or else it would lose all data if the power goes out. SRAM and Flash ROM are _MUCH_ more expensive per MB than a hard drive is and will most likely stay that way... for god's sake, a 64 MB flash card for your digital camera is $50-$70; can you imagine the cost of a 100GB flash drive?
  • Huh? (Score:5, Informative)

    by Reality Master 101 ( 179095 ) <<moc.liamg> <ta> <101retsaMytilaeR>> on Wednesday October 10, 2001 @02:52PM (#2411980) Homepage Journal

    RAM is now cheaper when it comes to memory-per-unit-of-currency than hard drives.

    According to pricewatch [pricewatch.com], a 40 gig hard drive is $78. Let's say $120 for a good one. That makes RAM 20 times more expensive, at $60/gig.

    It's still really cheap, but let's not get crazy. :)

  • by jmv ( 93421 ) on Wednesday October 10, 2001 @02:53PM (#2411984) Homepage
    Can't comment for other OSes, but Linux tends to be pretty good at using all the RAM you've got to cache disk data. Even though I rarely use more than 256 MB, upgrading to 768 MB made a significant performance improvement for me, as Linux quickly fills the remaining 512 MB with disk cache, without me bothering to set up a ramdrive.
  • by Quixadhal ( 45024 ) on Wednesday October 10, 2001 @02:53PM (#2411985) Homepage Journal
    The problem of how to maintain power to all that RAM indefinitely is still pretty tricky, but how about this idea: why not put enough SDRAM on your hard drive to buffer the whole thing? Whenever you read anything off the platters, hold it in RAM, and whenever anything is written, page it back to disk as usual. Thus as you use your system, the speed will continue to improve (up to a point) without tying up system RAM.
    • Why put your cache on the other end of a slow IDE or SCSI bus from your CPU when you could put it on a fast system bus?
    • I can't believe this was modded interesting. (Not because the poster didn't know, but that the moderator got away with it as well) (Then again I only knew this from an OS design course I took... =))

      Don't we sync disks in Linux/BSD/Unix before shutting down or unmounting a disk to flush the buffers?

      There is even an NT resource kit utility that causes these buffers to be flushed as well.

      The AT&T System V manuals describe a table to indicate what was in the buffers, to ensure files didn't get out of sync.

      Welcome to the technology of the late 70's... =)
  • by weez75 ( 34298 ) on Wednesday October 10, 2001 @02:54PM (#2411995) Homepage
    Imagine the size/number of boards that would be needed to get 80 GB of storage. It may be quicker, but engineering something feasible would quickly drive the cost up so that it wouldn't be that cheap. Further, there's the cost of modifying existing controller technology, or of making a RAM drive fit the controllers currently available. Then there are all kinds of other technical issues, like power.
  • If you ever played with a system with enough RAM to support mounting a ramdrive on /tmp (and soft-linking to /usr/tmp etc... Solaris directly maps /tmp into virtual memory/swap??), you've seen a huge speed increase for some tasks that require generating temporary files.

    If there is such a huge speedup, why not make devices that act like drives but are really memory? Because the software has already been written (ramdrive drivers) and it is faster and cheaper than implementing a completely separate piece of hardware and driver.
    Also consider the fact that you would not only have to create hardware to plug into the SCSI/IDE system, but the SCSI and IDE channel bandwidths aren't nearly as good as straight memory. Plus it is nice not to eat up space in sometimes-crowded cases with another piece of hardware.
  • Recovery (Score:3, Interesting)

    by Darth RadaR ( 221648 ) on Wednesday October 10, 2001 @02:56PM (#2412013) Journal
    I think that you might need a RAID (or RAIM- M == memory :) for RAM in case one of those dimms decides to die on you. Buggered up platters can be rescued in some cases, but if RAM dies, there's no recovery.

  • flash drives (Score:3, Interesting)

    by frknfrk ( 127417 ) on Wednesday October 10, 2001 @02:59PM (#2412043) Homepage

    i have been very pleased with my sandisk [sandisk.com] flashdrives. basically they are IDE-interface drives with flash memory instead of spinning platters. 0 ms seek time is nice, so is -silent- and -very very low power- storage. not to mention that you don't have to treat it like an egg.

    i've used both the flashdrive [sandisk.com] from sandisk, and the IDE flash drives [simpletech.com] from simpletech [simpletech.com].

    the sandisk flashdrives have sizes from very small (4 MB) to big enough for your MP3s (2 GB). of course they get expensive at the high end :) best things about them are (1) can get them semi-cheap from ebay [ebay.com] and (2) standard IDE interface.

    -sam
    • Re:flash drives (Score:3, Informative)

      by jandrese ( 485 )
      Flash memory has a few disadvantages:
      1. It is slow to write to.
      2. It's fairly slow to read, although much better than the writes
      3. It tends to wear out after only a few tens of thousands of writes. Even the fancy new adaptors that spread writes out across the entire memory space get bitten by this (a toy spread-the-writes scheme is sketched after this comment).
      4. It's more expensive than RAM (quite a bit more currently, but that may be an economy of scale).
      5. Most of them use PIO0 for access (at least the ones I've seen, some of them may support DMA, but I've never seen them). This means your processor has to spend a lot of time handling disk reads and writes. This is purely an engineering problem at the moment that would go away if anybody really tried to sell these as HD replacements, but it is still a problem for people using them today.

      I hope this was helpful.
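For the wear-out point in item 3, here is a toy sketch of the spread-the-writes idea those adaptors use; it is purely illustrative, and real flash translation layers are far more involved:

```python
# Toy wear-leveling sketch: logical blocks are remapped so writes are spread
# across physical flash blocks instead of hammering the same cells.
NUM_PHYSICAL = 512                 # physical erase blocks on the device
erase_counts = [0] * NUM_PHYSICAL
mapping = {}                       # logical block -> physical block

def write_block(logical: int, data: bytes) -> None:
    in_use = set(mapping.values())
    free = [p for p in range(NUM_PHYSICAL) if p not in in_use]
    target = min(free, key=lambda p: erase_counts[p])   # least-worn free block
    mapping[logical] = target      # old physical block returns to the free pool
    erase_counts[target] += 1

for i in range(10_000):
    write_block(i % 128, b"x")     # keep rewriting the same 128 logical blocks

print("erase counts  max:", max(erase_counts), " min:", min(erase_counts))
```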
  • I assume MO here means magneto-optical.

    Who the hell has an MO hard drive? MO WORM drives used to be pretty popular . . .

    Hm.

    -Peter
  • ATTO SiliconDisk (Score:3, Informative)

    by DeeKayWon ( 155842 ) on Wednesday October 10, 2001 @03:01PM (#2412067)
    I'll be damned if I can find anything at ATTO's website [attotech.com], but they used to make the SiliconDisk II [smartcomputing.com], essentially a SCSI hard drive made completely of DRAM (yes, it has power outage protection).
    • It was discontinued about a year ago.

      With hard drive speeds where they are nowadays, there's really no point to RAM disks, except in very specialized high-end applications (e.g. databases). Even in those cases, you're probably better off with a machine that can handle huge amounts of RAM (Alpha, Sparc, and Itanium can all handle terabytes of address space, I think) and an OS that can do decent filesystem buffering.
  • Redundant Array of Inexpensive Disks

    or

    Redundant Array of Inexpensive DIMMs
  • These [linuxworld.com.au] are an example (they even have Linux drivers), but an 8GB unit is still over $20k (see CDW [cdw.com]). It's going to be a while until this is affordable (2 orders of magnitude price reduction).

  • For solid state hard drives to catch on, they must be more desirable than platter HDs. All that solid state has going for it is speed. It's far more expensive, holds less data, and unless you get the expensive chips, loses all data when the power is turned off.

    Current HD tech has HDs maxing out at 400GB. I'd prefer the robustness of solid state, but platter drives are simply better at this time.

    Imagine a solid state file server though! Sigh.
  • by defile ( 1059 ) on Wednesday October 10, 2001 @03:16PM (#2412162) Homepage Journal

    There are two ways you can do this.

    Way 1 -- Use a PCI card with 4GB of RAM on it as primary storage. At the end of the day, or week, or whatever, copy all of the data to more "permanent" storage. Like hard disks. This way a power loss (or battery failure) isn't too much of a nightmare.

    The drawbacks are that you need special hardware and you could lose days of work.

    Way 2 -- Cram your machine with as much RAM as possible. Which probably means 4GB. Configure your OS so that it uses about 95% of RAM as a buffer-cache.

    Data will be loaded from disk initially on demand (which means slow startup) but will almost always stay memory resident thereafter. The OS will also commit dirty pages back to disk from time to time ensuring that you don't lose anything important.

    This may be less doable with systems that insist on synchronous writes during file operations, but you can often disable these things if you want to take the risk.

    The benefit of this approach is that you don't need special hardware and you're less likely to lose data than with Way 1. It also means you can experience this now, and probably already have been.

    If your system grinds disk consistently after several hours of use, it's a good indication that you should get more RAM considering how cheap it is.

  • CD-RW Technology (Score:2, Interesting)

    by wardomon ( 213812 )
    I seem to remember that there was a company working on the idea about 2 years ago of using the rewritable film of a CD-RW as memory.


  • Sure these [yahoo.com] are not cheaper by the MB, but they are incredibly cool!!

  • The last CompactFlash card I bought for my digital camera was well under $1/MB (actually about $0.67/MB).

    The first SCSI hard disk I bought for my Mac Plus was over $10/MB, and held less than 1/4 the capacity of that CF card. And it weighed 14 lb.

    Flash isn't cheaper than current technology disks, certainly; for the price of a 1/4 GB CF card you can get an 80GB IDE drive. But the growth of the digital camera and PDA markets has driven the cost/MB of flash down, and will continue to do so.

    What would be cool is a RAID controller for CompactFlash; plug in 6 CF cards in a space the size of a standard hard drive and have it do RAID-5 in hardware. Slower than stock RAM, but non-volatile. The catch there is the number of read/write cycles...and I'm not sure how much work has been done on improving that side of flashRAM performance.
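The "RAID-5 in hardware" idea comes down to XOR parity: any one card can be rebuilt from the XOR of the survivors. A tiny sketch of just that recovery arithmetic (real RAID-5 also rotates parity across members; the card contents here are toy values):

```python
# The recovery math behind RAID-5 style parity: the parity stripe is the XOR
# of the data members, so a single lost card can be rebuilt from the rest.
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

cards = [bytes([17 * i] * 8) for i in range(5)]   # 5 data cards, toy contents
parity = reduce(xor, cards)                       # the 6th card holds parity

lost = 2                                          # pretend card 2 dies
survivors = [c for i, c in enumerate(cards) if i != lost] + [parity]
rebuilt = reduce(xor, survivors)
assert rebuilt == cards[lost]
print("card", lost, "rebuilt from parity:", rebuilt.hex())
```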

  • So.. if you get yourself lots of RAM for a fast disk, then when you run out of working memory, just make some swap on that drive! It's almost as fast as real memory!

    Oh, wait.. why am I recalling the joke about a solar powered flashlight?

  • SSD's aren't new (Score:3, Informative)

    by fooguy ( 237418 ) on Wednesday October 10, 2001 @03:28PM (#2412241) Homepage
    SSD's have been around for quite some time. Compaq had several commercial offerings based on Quantum's SSD. There are also several no-name companies that manufacture solid state drives (Memtech being just one: http://www.memtech.com/Prodinfo.htm).

    We actually got our Alpha vendor to let us try an SSD for 30 days. The drive was fast, but we found that we quickly saturated the controller (something a couple U160 drives can easily do). In that regard, it wasn't that fast at all.

    And, as has been said in other posts, it's not really economically feasible. We tested a 3.2GB SSD last Christmas that cost $25,000. For that application, we thought it was a good fit. But if you're concerned about capacity, we just bought some 180GB drives for our SAN for about $5,000.00 each.

    While the RAM and disk capacity available now is amazing, I don't think we'll ever see the cost-per-megabyte of RAM beat the cost-per-megabyte of disks.

    In 1994, when I had a 486/DX2 66 (which came with 4MB of RAM), I bought 16MB of RAM for $560.00. Quake was 15MB, so I could load it into a RAM drive and play from there. Guess what? It wasn't noticeably faster than my IDE hard drive, but Windows screamed. =)
  • by Anonymous Coward
    There is a company in Sweden developing technology that might make both RAM and HDD's obsolete.

    The swedish R&D site:

    http://www.thinfilm.se/

    The norwegian mothercompany:

    http://www.opticomasa.com/

    Article about it (in Swedish however :-):

    http://www.nyteknik.se/pub/pub26_3.asp?art_id=16012

    More material can be found by searching for Opticom, Plastic memory,thinfilm etc..

    Interfaces should not be a big bottleneck, whatever technology is used to create the RAM disc. ATA-100 (100MB/s) and SCSI U160 (160MB/s) should be sufficient, and U320 and U640 will come within years.

    If the current number of RAM sockets is a limit... one can always network some motherboards stuffed with RAM. :-)

    pbRemove(a)ludd.NospamherEluth.RemovEthisse

    Anyone in need of computer consulting with unix or programming btw? ;)

  • by MattRog ( 527508 ) on Wednesday October 10, 2001 @03:33PM (#2412267)
    You see solid-state disk drives used mainly in relational database management systems as a 'scratch pad' for highly-volatile data.

    In order to explain I'll have to do a quick primer on RDBMS' and how they handle memory management.

    As you're probably aware, there are a multitude of different operations you can perform on a RDBMS; UPDATE, DELETE, SELECT, etc.

    For more efficient queries the RDBMS will cache physical data structures in memory. It may cache parts of the index or recently accessed data. If the cache is full it will kick out the oldest, least used parts to make some room for the new stuff.

    To make a long story short, most servers have way more disk space than RAM. As such, it will use a designated 'temp' or scratch area for some of those sorts (and temporary tables) if there are more important things in RAM or it cannot all fit. In Sybase / MS SQL you create a special database for this called 'tempDB'. I'm sure DB2 / Oracle have similar data structures.

    Here is where solid-state disks enter the picture. You can buy a small solid-state disk (9GB or less) for cheap. You then 'create' tempDB on the solid state device. That way you can completely eliminate the relatively slow disk drive for things like sorting, temp tables, etc. and devote all of your RAM to caching database information.

    To me, this seems a lot better than using solid-state devices exclusively as a storage medium. Initially, when you start up your RDBMS, the cache is clean. After people run a couple of queries, the important (and most-hit) indexes and data are cached anyway, so you don't have to worry about touching the disk unless you perform a write. And most OLTP (online transaction processing, a la web apps) work is mostly selects, so you wouldn't see a benefit from the solid-state device unless the data wasn't already in the cache.

    Most SSDs have a battery backup in them in case of power failure and are generally mated to a corresponding hard drive. When the SSD is idle it will flush the writes to the HD to keep the HD up-to-date. On a power failure it will immediately dump changed data to the HD (also battery-powered).

    For 'home' systems I can't imagine anyone using SSDs as their primary storage. It doesn't make sense - rarely does anyone do anything so 'demanding' as to require solid-state drives. Plus, if you have a single memory error you would lose the entire thing (break one of your DIMMs and tell me what happens when you try and boot). :D
  • A disk array with a big front-end RAM cache effectively gives you RAM-like access speeds for cache hits. You can basically adjust the amount of cache to get as close as you want to RAM speed overall for your workload, while also taking advantage of rotating media's price and durability advantages. Ideally, either the cache is battery-backed or the array has enough of an internal power reserve to dump cache to disk even when external power is lost. This use of a large but safe RAM cache is the main thing that differentiates a Symmetrix or a Shark or a Lightning from some low-end POS that's really no more than a stack of disks with a plain old PC bolted on the front... and don't even get me started on the abomination that is host-based RAID.

  • Is the low price these days due to more efficient manufacturing or market saturation? If it's an efficiency thing then it might make sense to put the effort into doing solid state drives now. But if this a transient glut in the market, then by the time you have something that will do the job, memory may be prohibitively expensive.

    Personally I'm thinking just packing my system full of memory would be the best solution. As others have mentioned, an OS with good disk caching built in can be as good if not better than a RAM disk. It might be useful to have some way to expand memory through a PCI slot but it seems like, for now, solid state storage just isn't worth it.
  • ramdisk.
    Just load your programs into ramdisk.
    Have the data that needs saving tossed onto the hard drive periodically by a script that dumps the data that needs to be saved from a ramdisk directory, to a HDD.
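A minimal sketch of that periodic dump script; the paths and interval are placeholders, and a real setup would likely use rsync or an incremental copy instead of a blunt full-tree copy each pass:

```python
# Minimal sketch of the periodic ramdisk-to-disk dump described above.
import shutil
import time

RAMDISK = "/mnt/ramdisk/work"          # the data that must survive a power loss
BACKING = "/home/user/work-backup"     # directory on the spinning disk
INTERVAL = 15 * 60                     # seconds between dumps

while True:
    shutil.copytree(RAMDISK, BACKING, dirs_exist_ok=True)   # overwrite in place
    time.sleep(INTERVAL)
```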
  • Hey, thanks for the tip (shows how out of touch I am). 256MB for a Powerbook G3 for $89 at MicroWarehouse - awesome!

    (Oh yeah, RAM disks, cool. etc.)

  • I just put 2 100 GB magnetic disks drives into my TiVo.

    I think 200 of the 1 GB SDRAMs would take up quite a bit more space, even if the slots were there.

  • Platypus Technology
    http://platypustechnology.com
    "Platypus Technology has designed a range of storage innovations that free applications from the bottlenecks caused by hard drives.
    You can run mission critical files from silicon, rather from rotating platters".
    The design appears to be quite nice.
    The price appears to be outrageous.
    From www.cdw.com
    "Platypus QikDRIVE8 1GB
    1GB PCI solid state hard drive card for PC and Mac workstations and servers $3229."

  • OS Redesign (Score:3, Interesting)

    by kruczkowski ( 160872 ) on Wednesday October 10, 2001 @03:51PM (#2412400) Homepage
    If this trend continues, OSes should be redesigned. The hard disk has the advantage of keeping data with no power, but RAM has speed. New PCs could have 3GB of RAM and a 40GB IDE HD for storage. When the PC boots it would copy the data into RAM and then execute all programs from RAM; sure, this would make for a long boot, but with a new, stable OS that should not be a problem.

    We would have to do some serious OS and user interface redesign. If the PC is used for video editing, the samples could be kept in memory, which would speed things up a bit, but you would have to save the data to the HD eventually.

    Another great application for this would be cache servers: imagine an organization that does video editing where all the clients have gigabit ethernet. Put servers with 1TB of RAM in front of the data storage server; at night they could sync the data.

    Seriously, we have to think about this. Our current view of the PC is that RAM is far scarcer than HD storage. Diskless clients could make a comeback...
  • Solid State hard-drives date back at least as far as the late 1980's, when (I believe) Watford Electronics released a device called a Solidisk for the BBC Microcomputer.


    As such, they are fairly old technology, and most of the problems have been ironed out. The problem with power can be solved in a number of ways, for example: you can have battery-backed RAM, or you can make the "RAM" non-volatile by using a design that does not decay rapidly with time. (Flash RAM works this way.)


    Another problem has been the capacity of a solid-state hard-drive. This, as has been mentioned, has largely been overcome. I =STILL= believe that wafer-scale chips are the way to go for this, though. You should be able to make wafers that are tens of terabytes in capacity by now.


    (The problem with making wafers has always been the purity and the defect levels. Purity just requires you to use something better than skimming. Double distillation, or atomic mass separation, would give you near 100% purity. You then just cool the resultant in a vacuum flask, so that the defect rate is negligible.)


    Getting back to the modern day, though - how to turn cheap RAM into a quality solidisk. This involves making a card with a whole load of RAM on it. Since you're using conventional RAM, you don't get the non-volatility of core memory. This means the fall-back of using battery-backed RAM.


    You want TWO batteries for this. One will be in discharge/recharge mode, the other will be in operational mode. When the batteries switch over, you want the recharged one to be switched in first, so that the batteries are briefly in parallel, BEFORE switching out the other. That way, there's no loss of power.


    When switching to discharge/recharge mode, the battery must be fully drained, to prevent "memory", where a rechargeable battery fails to recharge correctly from a semi-charged state. Once drained, you recharge it to capacity.


    The switch-over should happen on one of two events:

    • The battery in use is under 25% capacity, OR has less than half the charge of the spare battery
    • The computer is switched on


    This guarantees that you have 175% - 200% of any one battery's lifetime, which should be ample for most purposes. The recharger should tap off the bus' power supply, with the batteries directly powering the RAM at all times. This avoids any problems of messy spikes somehow getting into the computer.
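A small sketch of that switchover policy as code, using the thresholds from the bullet points above; everything else (class and attribute names) is illustrative:

```python
# Sketch of the dual-battery policy described above: switch when the active
# battery is under 25% capacity or holds less than half the spare's charge,
# or whenever the computer is switched on.
class BatteryPair:
    def __init__(self):
        self.charge = {"A": 1.0, "B": 1.0}    # fraction of full capacity
        self.active = "A"

    @property
    def spare(self) -> str:
        return "B" if self.active == "A" else "A"

    def should_switch(self, computer_powered_on: bool) -> bool:
        act, spare = self.charge[self.active], self.charge[self.spare]
        return computer_powered_on or act < 0.25 or act < spare / 2

    def switch(self) -> None:
        # Parallel the freshly charged spare first, then drop the old battery,
        # so the RAM never sees a gap; the old one is then fully drained
        # (to avoid "memory") and recharged.
        old = self.active
        self.active = self.spare
        self.charge[old] = 0.0

pair = BatteryPair()
pair.charge["A"] = 0.2                    # active battery running low
if pair.should_switch(computer_powered_on=False):
    pair.switch()
print("now running on battery", pair.active)
```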


    If you want "extra-long-life" SSD technology, you are probably best off using very low-power RAM for the main disk, and using higher-power fast RAM for the cache. The lower the power of the main disk, the better. Static RAM is worth a glance, for this - I think it's usually more efficient than dynamic.


    Of course, the =ULTIMATE= solution is to go back to using core memory. (For those who never went to computer science classes, "core memory" is one of the earliest non-volatile digital storage systems. It was a form of magnetic storage, and used semi-permanent magnets to retain the data. Data could only be read by destroying the copy in storage, which meant that a read cycle also implied a write cycle. It was slow, but when you had RAM that was guaranteed to retain data for over a century, who cared?)

  • 1. Create a fast bus near the processor for solid state storage - it is silly having to go through IDE, then PCI, then the NB to the CPU for data, even with an IDE solid state disk.

    2. This bus could be HyperTransport from the NB to a HyperTransport enabled memory controller that can control up to 16GB of memory. This will give you massive bandwidth and low latency - the best of all worlds.

    3. 16 DIMM slots in a drive bay somewhere, or whatever, connected to the memory controller. A battery connected to power the DIMMs in case of power-down. Use DDR DIMMs, as they use less power. A large laptop battery should power 16 DIMMs for well over a day on its own.

    Alternatively, just set up a massive RAM drive and cache the HD into it... this rewards uptime, of course!

  • Let me first start off by saying, I thought this was a good idea once, too. Here's why it's a dumb one:
    • Ram is so expensive that having it sit idle is a waste of money and time.
    • Operating systems do an excellent job of keeping most recently used (and hence most likely to be used again) data in memory
    • Keeping files on a ram disk prevents the operating system from using it
    To learn this initially, I took a machine with 512mb of ram and made a 100mb ram disk partition on Win2k. I needed to speed up my compile times (>45 minutes) when using a bad cross compiler to the Nintendo Game Cube and a lot of templated C++ code (I didn't write it). After moving all the source code and object output files and executables to the ram disk volume, it turned out that it went even slower than before. This is because less ram was available, so it swapped out more frequently. Same principle applies when just adding more ram. The less you hit the hard drive, the faster your machine runs.

    The only reasonable purpose I can think of for a fast ram disk is if you can get some relatively slow ram on that device, which is cheap, but won't fit on your motherboard due to it requiring faster/more expensive ram, such as RDRAM or other exotica like ECC Registered SDRAM. But it's still cheaper to get a few hard drives.
  • I see some people debating ram disks.
    The way I see it, the kernel is smart enough to use ram for buffering when it can - certainly smarter than a user creating a ram disk.
    If you need more performance, give your system more ram and let the kernel decide how much of that ram should go to a ram disk.
  • Try DiskOnChip (Score:2, Informative)

    by hum ( 484887 )
    A family of high performance, single-chip flash disks are available in a wide range of capacities from M-systems [m-sys.com].
  • I'm seeing ideas floated here about a PCI card with RAM sticks on it that backs up onto a hard drive in the event of power failure. Why the hell don't we just buy a whole ton of RAM and make a ramdisk that's synced with a hard drive partition, so all the writes go to both a disk buffer and the RAM, while all the reads only access the RAM? If we crank up the size of the write buffer, we've got some pretty impressive performance!
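That is essentially a RAM-primary store with a write-behind buffer to disk: reads only ever touch RAM, while writes land in RAM immediately and are queued for the partition. A minimal sketch, with a plain dict standing in for the disk partition:

```python
# Minimal sketch of the scheme described above: every write lands in RAM and is
# queued into a "write buffer" destined for the disk; every read is served from
# RAM and never touches the disk.
class SyncedRamDisk:
    def __init__(self):
        self.ram = {}            # block number -> data, the fast copy
        self.disk = {}           # stand-in for the hard drive partition
        self.write_buffer = []   # the buffer the comment suggests cranking up

    def write(self, block: int, data: bytes) -> None:
        self.ram[block] = data
        self.write_buffer.append((block, data))   # flushed to disk later

    def read(self, block: int) -> bytes:
        return self.ram[block]                    # reads only access the RAM

    def flush(self) -> None:
        for block, data in self.write_buffer:
            self.disk[block] = data
        self.write_buffer.clear()

d = SyncedRamDisk()
d.write(0, b"hello")
assert d.read(0) == b"hello"
d.flush()
assert d.disk[0] == b"hello"
```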
