Hardware

Linux On Solid State Disk

Blah writes: "A while back Slashdot made reference to The Platypus Solid State Disk. The boys down at LinuxWorld.com.au have scored themselves one and given it a look over. The article has some pictures showing just how much SDRAM this thing has on it, as well as graphs which compare its I/O and transfer rate performance against that of a standard SCSI disk."
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    Yes, you am. Swap does not need backup power when the box dies, so either use it as RAM or as a disk. Swap is a hack to work around the memory overcommit of the UNIX architecture (and other VM architectures).
  • Since it isn't being made anymore, the prices on it are higher per megabyte than SDRAM, which is being produced in massive quantities.

    - A.P.

    --
    * CmdrTaco is an idiot.

  • I would do grievous bodily injury for one of these devices.

    A high percentage of my working day is spent waiting for compiles, as even a single change to a file requires on the order of five minutes of compiling and linking. A lot of that is file read/write time. If I could write it to memory-speed output rather than disk, I would be a happy man. According to the task manager, I'm not hitting virtual memory most of the time, but that hard drive sure is cranking.

    Heck, we should probably pass around a hat and get one for Alan and for Linus...
  • Uh? Put more RAM. Put even more RAM. And some extra RAM.

    My machine is maxed out, unfortunately, at 256 MB. A solid state storage system could be added to an otherwise limited system. (Although at the typical prices they sell for, a new computer would be a cheaper option.)

    Oh, I can guess what your problem is.

    Yeah, it's that the market for what I do isn't generally using Linux.

    But yes, a gig of memory and a RAM drive would be a good approach.
  • Serious SSD's may be "bowel-quiveringly expensive;" they just aren't something that you're likely to want to install on a small desktop box.

    There are applications out there where $60K is a small price to pay to increase performance rather a lot, and $60K starts looking small when you start pricing Sun E10000 servers and such.

    The big value to SSD comes in when you've got one of those situations of heavy database updates where eliminating latency time is a big win. If throwing on a $60K SSD allows downgrading from a $1.5M server to a $1.1M server, that was evidently a very good buy...

    A CF memory card system that doesn't allow you to hit it hard with vast numbers of updates just doesn't compare. And it's still hardly cheap; there aren't $60K units, but there aren't 8GB CF memory cards, either...

  • Spend $8 grand on a decent RAID array, and see how the numbers stack up. They compared the thing to a single U2 drive! For $8K, I can get 2 RAID cards with battery backup and 32 MB cache (Ultra160 Mylex cards for $500-700 each), 6 faster drives (10K RPM Ultra160 9.1 GB for $175 each), a data silo SCSI box, run a mirror set between the 2 striped RAID volumes (one connected to each card), and an extra UPS each for my server and the RAID boxen ($1000). Then see how the speed compares! Oh darn, I still saved some money...
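
    A quick tally of those parts (a rough Python sketch; the per-item prices are the ones listed above, the UPS figure is assumed to be per unit, and the data silo SCSI box is left unpriced):

        # DIY RAID budget from the comment above (upper end of the card price range assumed).
        raid_cards = 2 * 700       # Ultra160 Mylex cards, $500-700 each
        drives     = 6 * 175       # 10K RPM Ultra160 9.1GB drives
        ups_units  = 2 * 1000      # one UPS each for the server and the RAID box (assumed $1000 each)
        known = raid_cards + drives + ups_units
        print(f"Known components: ${known}")                    # $4450
        print(f"Left for the SCSI enclosure: ${8000 - known}")  # ~$3550 of headroom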
  • We just got an SSD for our Sun mail server. The OS won't go on the disk, though; we'll either use it for the POP lock files or for a mail queue - either way we expect the load on the machine to fall dramatically.
  • Why is this better than putting 4GB straight into the server? SQL can be made to allocate and keep it all... I don't see the point.
  • get a nice UPS and then you are set
  • You would be foolish to use an external, lower bandwidth, more expensive interface that had to go through a swapping algorithm just to add more memory. It would make a lot more sense to just spend the 15 grand on big-ass DIMMs and a motherboard that can take them...

    But then again, it prolly would be good for swap, albeit a wasteful kind of good. :)
  • Heh. I've got one of those early production run SB16s. Big thing with an onboard IDE interface. Of course, I don't actually use it, but it is nice to have around.

    Similar pieces of hardware, in terms of size and ugliness, are some of the early graphics accelerators (especially some of the Macintosh ones which used all these little ZIP memory chips), or the original Amiga Video Toaster (now there was a big card).

  • Is Samba a big enough project for you?

  • by BJH ( 11355 ) on Friday February 09, 2001 @04:13AM (#443867)
    I don't think he's talking about systems actually required for flying the plane - things such as targeting systems (where information on which locations are planned targets could be valuable to the enemy) or map data (which would tell the enemy how much of their defense system has been located) would be more likely candidates for such treatment.

    Then again, it could just be that the Air Force doesn't like sharing its sooper-sekret pr0n files with anybody else.

  • Or how about the fact that three of you just consecutively mispelled "grammar?"

    ;) Oh! Shood I have sed "mispelt?"
  • Look, why oh why don't these have standard interfaces?

    Why aren't they using memory interfaces?

    If they used memory interfaces then they could use JFFS or RAMFS.

    What's the advantage?
    BTW, if you trust the newly dubbed IA32 with your mission-critical system then you are a fool in my eyes. Sorry, but it's true. SPARC/SH/ARM are the way to go because of debug in silicon (-;

    regards

    john jones

  • That's a really really expensive hour of battery time. You coulda just bought fifteen extra batteries.
    --
  • How would this stack up to the solid-state drives available here [buymemory.com] ? They use a standard SCSI interface, shouldn't require special drivers, and don't use up a slot.

    Nice performance, though.
  • Yeah, 'cause those people in the non-English-speaking countries aren't very intelligent at all, are they? Look at the Japanese! Idiots - the whole lot of 'em!
  • 5) they also accept a higher shock/vibration load than real disks. Very good for moving vehicles (i.e. airplanes)
  • I mean, think about it... RAM is not _that_ pricey. There must be a lot of research dollars to compensate for. You can get 512 megs of PC133 for ~US$200, so why does it cost outrageous sums for the drives? It's not like SCSI itself can make up all that cost. Heck, even old EDO would give faster performance than our current drives. Slap a bunch of that in a box, put on a SCSI controller and I think a lot of people would be happy. Or even better, put it in a PCI card. The main problem with the PCI card being physical space...
  • Wow, our user ID numbers are close together.

    :)

    ----------
  • If I remember correctly I told you to sign up on /. so you didn't post as an anonymous coward.
    ----------
  • First, it has to be powered up to work. The external power supply is almost redundant. I won't be rebooting my production linux box more than once or twice a year, and even then, the reboots will be planned. I could easily back up the drive without the external power supply. If I were to suddenly lose power, it wouldn't matter anyways since the external power supply would most likely go down as well.

    Second, for 8 gig models, having a separate PCI card holding the memory makes sense. But for less than 2 gigs, you will probably be better off just using a ramdisk. Not only will this allow you to have more control over the actual memory allocation, there shouldn't be any dramatic difference in performance. As I said before, a sudden loss of power for your server is just as likely to take out the power for the drive as well, so you're not in a much safer position the other way.

    Just a few thoughts.

    -Restil
    restil@alignment.net
  • That's SO amazing. I wonder how that could have possibly happened???? :)
  • I don't see what the fuss is all about. I can put a couple of gigs of RAM in my main machine, boot up the OS, and load a gig or so RAM drive, and I've effectively got what everyone's talking about here, without any extra hardware (save the RAM, that is). What's the big deal? Am I just missing something? I'm really not trying to start a flame war here, I just don't get it...

  • But 4GB of RAM should be enough for anyone! ;)

  • Haven't we learned ANYTHING???
    Yes, we learned that you need to work on your humor recognition capabilities.
  • reincarnation of VESA

    is this really true? can you provide a link? thanks

    use LaTeX? want an online reference manager that
  • For the most part I would think the OS should be caching frequently accessed files like web pages and scripts. The Platypus would be better at preventing the /. effect if you accessed the memory as RAM. It's probably all of the spawned httpd processes that kill the servers.

    use LaTeX? want an online reference manager that
  • The VESA Local was hard to work with, and you only had one slot

    I built quite a few machines with more than one VLB slot - where did you get this from? They did make nice graphics cards though. There were also good controllers and SCSI cards too.

    use LaTeX? want an online reference manager that
  • Because a 32-bit machine only has so much address space. By using main memory you are limited to about 4 gigs.
  • http://slashdot.org/comments.pl?sid=01/02/09/1252218&cid=10 [slashdot.org]



    15 grand is a hard bargain, but 50 nanoseconds? how many times a day is that? 3 zillion?

    does it come in blonde?
  • Only then it was called a RamLink. Same concept, about the same hardware, only it was for a Commodore 64. Brilliant idea really, just not viable until RAM gets extremely cheap.

  • Wouldn't it have made more sense to test this against similarly priced storage systems? In the $5000 range you have fibre channel raid controllers with some storage. A good FC card will push around 200 MB/s. Why not have a measure of MB transferred per second per CPU percentage? Put that next to $ per MB storage and you get a real comparison.

    This card is limited to 100 MB/s and is only 32-bit/33MHz, so it can only be grouped one per bus in order to maintain that speed. Meanwhile, most FC RAID cards are 64-bit/66MHz, run around 200 MB/s, and support multiple cards before maxing out their target bus. For $5000, you are going to get much more storage than this thing and it will be faster. I just don't get it.

    I have one technical issue with the article, too. It contains the following line: "Current PCI bus speeds are limited to 33MHz, however, 64 bit PCI bus systems are in development and have speeds of 66MHz." This isn't correct: both 64-bit and 66MHz PCI systems have been around for some time. I was at the Microsoft Plugfest for Windows 2000 and Millennium testing my 64-bit/66MHz fibre channel card in systems from various vendors, and that was back in December of '99. Also, the signal rate and signal width are not automatically linked, although most 32-bit buses only support up to 33MHz and most 64-bit buses support up to 66MHz.
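
    As a rough sanity check on those bus numbers (theoretical peaks only, ignoring protocol overhead; these are not figures from the article):

        # Theoretical peak PCI bandwidth = bus width in bytes x clock rate.
        def pci_peak_mb_per_s(width_bits, clock_mhz):
            return width_bits / 8 * clock_mhz

        print(pci_peak_mb_per_s(32, 33.33))   # ~133 MB/s: the bus this card sits on
        print(pci_peak_mb_per_s(64, 66.66))   # ~533 MB/s: what a 64-bit/66MHz FC card can use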
  • Actually, the I/O bandwidth of an SSD is limited by the fact that many of them have SCSI internally, and even if they have FC back ends you'll only get 35MB/s of throughput; the latency is nice, though.

    A slashdotted server would probably be dealing with a small enough amount of data that it would all be in the host buffer cache.
  • by 1010011010 ( 53039 ) on Friday February 09, 2001 @04:01AM (#443890) Homepage
    The US Air Force uses solid-state disks in at least some of its aircraft. They load the software right before takeoff. The idea is that, if the plane goes down or is captured, the pilot just has to power it down and all the software is lost, and then the plane is useless.

    - - - - -
  • Some info about VLB.

    VLB (VESA Local Bus) is used mostly on 486 computers. It uses a direct connection to the CPU (no bridges like modern PCI computers or older ISA buses). It was designed to be a cheap solution more than anything else, by not requiring a fancy bridge like EISA did.

    VLB has the following characteristics:

    Bus speed is the same as the motherboard bus speed, typically 33MHz.

    When operated at 33mhz, a maximum of 3 cards can be attached.
    At 40 mhz, only 2 cards.
    At 50 mhz, only 1 card.

    This limitation is caused by the amount of current the CPU can send over the I/O lines to the VLB cards. Since there is no bridge between the VLB bus and the CPU, the signal strength from the CPU cannot be amplified to service more cards.

    Cards for VLB include network cards, video cards and disk controllers (SCSI and EIDE).
  • Let's say you have a web server with 1GB of memory and your whole content uses 100MB. How would you set your server up to run only in memory? You would still have a hard drive also.

    I know you need to set up a RAM disk and copy the data over each time the computer boots... but what exactly would need to be copied over to the RAM disk to make it speediest? Everything? ("/usr" "/" "/var") Red Hat's distro is huge; you don't want to copy it all over into memory. What about the Apache bin? A database-driven site with MySQL?

    Would just copying the web server content each time it loads and serving off that speed it up significantly?

    Would it be worth it? Or would using an Apache cache module work just as well?

  • 4) as an fs journal device

    Wouldn't it be kinda scary to put a journal device on a RAM device? If you lose power, your journal device loses power too. I can see that it would be whiz-bang fast, but if you need speed and a journal then you probably want one of those spiffy SCSI RAM drives with battery backup and a storage device built in (just in case).

    ~Sean

    PC133 => 133MHz * 64 bits (8 bytes) = ~1GB/s.
    PC66 => 528MB/s

    The difference in price for standard SDRAM is negligible. (Pricewatch has 512MB sticks for under $200, meaning the $5,000 price tag is most likely NOT mainly DRAM cost.)

    But here's what happens when you underclock: the pipeline gets slowed, so your latency increases.

    Random read access should take a hit of about 5 clock ticks per access (actually more because of intermediate custom hardware). So random single-byte reads suddenly are slowed to 105MB/s, but since we were only interested in a word, we only got 13M accesses/second. By staying at PC133, you effectively double that minimum rate.

    Now the difference in price from CAS3 to CAS2 PC133 is significant, I'll grant you. Probably not worth the premium.

    I admit that most if not all "virtual disk accesses" are going to be in 512B blocks, which comes out to 16 independent cache-line bursts (8B/cycle * 4 cycles), and should thus take overhead (approx 5 cycles) + 4 cycles * 16, or under 70 clock ticks for a full sector read. That's about 1.9 million sectors per second theoretical peak (not bad; a quick sketch of this arithmetic follows below). At that point, we have to contend with main-memory bandwidth saturation and CPU overhead for the disk drivers. I believe that depending on how intelligent the drivers are, the PCI bus isn't the real bottleneck for overall system performance on such a RAM drive.

    I am curious about the prospects of converting this PCI card into a UATA-100 virtual hard drive. You'd have higher peak bandwidth, PLUS you'd be able to perfectly emulate a hard drive. However, there is probably an advantage to putting memory on the PCI bus - namely that the OS drivers could directly access the media in little segments based on the actually requested data instead of duplicating disk block buffers in main memory, just to ultimately copy out to user-space.

    -Michael
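
    For anyone who wants to check that sector-read arithmetic, here is the same calculation as a small Python sketch (the 8 bytes/cycle, 4-cycle bursts, and ~5 cycles of overhead are the figures assumed in the comment above):

        # Sector-read arithmetic for a PC133 SDRAM-based "disk", using the
        # comment's assumptions: 64-bit (8-byte) bus, 4-cycle bursts, ~5 cycles
        # of setup overhead per access.
        bytes_per_cycle = 8
        burst_cycles = 4                  # one cache-line burst = 32 bytes
        overhead_cycles = 5
        sector_bytes = 512

        bursts_per_sector = sector_bytes // (bytes_per_cycle * burst_cycles)    # 16
        cycles_per_sector = overhead_cycles + bursts_per_sector * burst_cycles  # 69

        clock_hz = 133e6                                    # PC133
        sectors_per_sec = clock_hz / cycles_per_sector      # ~1.9 million/s
        print(f"{cycles_per_sector} cycles/sector, "
              f"{sectors_per_sec / 1e6:.1f}M sectors/s, "
              f"{sectors_per_sec * sector_bytes / 1e6:.0f} MB/s raw")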
  • If you notice, there's no rating comment [like informative, troll, funny, etc.]; that means the Score:2 is because the poster has such high karma that they get an automatic point boost.
    I don't know that I would have modded it up, but I thought it was funny.
  • VLB is not limited to one slot. I have three in my server (486-100), one for graphics, one for IDE, and one for network.
  • Not to mention the $5,000 price tag, if you're in a position where you have to worry about things being plugged and unplugged.

    But damn, it sure does look cool. I'd almost want to buy one just to stare at it. It's the first piece of hardware I've seen in a while that hearkens back to the era of the bulkiest, most awesome piece of hardware ever, the ISA Sound Blaster 16. Those things were beasts.

    Of course, I guess these things need room for all that RAM..
  • ...In which case you really gotta re-ask why this particular piece of hardware is such a boon.
  • Microsoft product suspected.
    Linux zealot team scrambled to liquidate target.
    ETA 6 minutes.
  • I think the real question on everyone's minds as we examine this product is obvious. Can a boost in I/O even as massive as promised by this piece of hardware actually save a poor, unwary server from the Slashdot Effect? Now *there's* a benchmark I'd like to see.
  • >>err sorry for all the grammer mistakes
    >
    >funny thing is that your mistakes didn't jump
    >out at me, I had to go back and re-read your post

    I noticed two grammer errors by Taco today but I didn't even notice Kewjoe's mistakes. I guess by the time I'm reading comments, I'm interested enough for my mind to overlook most errors. Unfortunately, Taco's comments usually don't intrigue me enough to bypass the normal grammer rules my mind implements on it's own free will.

  • Proof that I'm just another spell checker junkie ;-)

    Anyone want to write a spell-check module for Slashcode?

  • Insightful??? excuuuuuuuuuuuuuuuuuuuuuse me?
  • The only useful applications of solid state drives I can think of are:

    1) doesn't require backing store, like main memory does
    2) can use slower (cheaper) dram than main memory, since the bus is the bottleneck
    3) if you're out of dimm slots, it could be useful to use as a swap device
    4) as an fs journal device

    unless they can sell these SS drives for less than the same capacity dimms, there's not going to be much of a market.

    Also, the article is wrong about PCI: 64-bit/33MHz and 32-bit/66MHz PCI slots have been available for quite a while.
  • by Bocaj ( 84920 ) on Friday February 09, 2001 @04:02AM (#443905) Homepage
    Why didn't they make this an AGP card? It's a dedicated port designed for fast I/O. I know that AGP is the reincarnation of VESA, but does anyone know any reason why this wouldn't work?
  • The device is a fine thing if you can fit your data in it but what if you want to combine multiple cards? At $8K a pop they're hardly cheap. Through together a few drives and you can match the read throughput of the SSD (I used to have [last job] a dual-proc Linux box with 9x 36GB IBM drives using s/w RAID-5 [big blocks with ext2], reads at 60+MB/s [PCI limited?], writes at 30MB/s, according to bonnie :)
  • That's "throw together a few drives...". Stupid fast typing. Damm monkeys pressed return on me.
  • I'm sure a large part of it has to do with R&D. And don't forget the performance increase: access times under 0.1ms vs SCSI's lowest access time of ~8ms. And then there's the MTBF, which can reach upwards of 300-400% of the MTBF of HDDs. Not only that, but there are different types of RAM they can use, and a lot use non-volatile memory so the machine can be turned off and still retain data. As for the uses of an SSD, well, there's an SSD FAQ right here [caltech.edu]. And an even more in-depth one (with design pictures and whatnot) over here [silicondisk.com].

    ----------------------------------
  • Sort of. It's an SSD, but it connects to the SCSI2 interface on the ProLiant 8500. 4GB of storage, but it isn't seen as a hard drive. What it does is, as you boot and the OS loads, it automatically starts caching everything to the SSD; when it gets full, the controller uses the SSD as much as possible while relying on the regular SCSI drives when the info isn't located in the SSD. Made SQL Server scream though, real nice. $35,000 though, so I doubt we will be getting a lot of 'em.

    ----------------------------------
  • I could be wrong on this, or CompactFlash may not be flash, but last I read somewhere, flash memory is SLOW and has a relatively SHORT lifetime. Hm... not a good candidate for mass solid-state storage at all?
  • >The real selling point to the solid-state hard
    >drive is the speed.

    Hm...my reason to buy a solid state hard drive at all is to put an end to any moving parts in my box.

    I move a lot. And I want peace of mind.
    I want almost 100% shock resistance.
    I want a really high MTBF.
    I want my drive to resist wearing out.
    Lower power consumption. Lower heat.

    My current hard drive does not give me all these. I believe solid state disk can, or almost can.

    In particular, I can shake or drop it any way I want without worrying about damaging my data (a bit exaggerated, but you get the idea).
  • How about a customizable killfile?
    Or a customizable regex link matcher to kill links like the goatse ones only retards post?

    Thanks.

  • > Data is kept intact when you power down your system by powering the QikDRIVE card with an external power supply.

    OK. I hope you've got a UPS on that. I'd hate to see someone get everything configured in an operating system stored in one of these and then see the power go out.
  • Looks like they got a new Domain, www.linuxroms.com
  • You'll have to excuse the troll / off topic message, but I followed the link and got an error message stating "Unable to Connect to SQL Server!"

    Let's just hope that this is a generic "Unable to connect to a database server that uses SQL as its query language" as opposed to "Unable to connect to that one database that will only run on that one operating system that crashes way too often (as supported by this error message)".

    Actually, I'm pretty open minded. If there's some reason they need SQL Server, more power to them for working to integrate with a quick and dirty OS.
  • Isn't that pretty dangerous, though? I'm sure that the engineers have thought of lots of horrible scenarios and worked through them. But what if something happens to the power in the aircraft (hit by lightning, say) and that device loses power? The plane's systems can't reboot then? The pilot is dead in the air?

    I suppose the pilot just ejects and the plane is totalled on account of a faulty power supply. Sounds about right: a billion-dollar plane lost to a couple-thousand-dollar part.
  • ...I've been running Linux on SS disks for some time now. I use the BiTmicro [bitmicro.com] E-Disk [bitmicro.com] line.
  • In fact, why are any SSD devices so expensive?

    Especially this one, that uses normal SDRAM. What in that card costs so much? It's certainly not the RAM. Can the chipset that manages the writing to and from the RAM really cost so much? Shouldn't it be possible to hack something like this together for a couple hundred bucks, much like people do with MP3 players now?
  • What I'm looking for is more like a slightly more modern version (i.e. PCI) of this 4 meg SSD [yahoo.com] without all the EPROM and flash baggage to act as a cheap TRAM cache [redwoodsoft.com] for file system journaling.

    The Platypus SSD doesn't do much for me. My goal would be to speed system recovery in the case of someone kicking out the plug, without going to the extremes of the EROS project [eros-os.org], and without doing the damage to file system performance needed for conventional journaling file systems.

  • by SuiteSisterMary ( 123932 ) <{slebrun} {at} {gmail.com}> on Friday February 09, 2001 @08:05AM (#443920) Journal
    Then again, it could just be that the Air Force doesn't like sharing its sooper-sekret pr0n files with anybody else.
    You don't think that when a pilot says this:
    Bronco Base, this is Big Daddy; we're screaming in on target, and are about to shoot the missile down the pipe....affirmative. Payload has been delivered, we're RTB to cuddle.
    that they're talking about military things, do you? "I've got the ball" indeed....;-)
  • I reviewed a cheap alternative to "serious" SSDs like this a while ago - the review's here [dansdata.com]. It's just an adapter that takes a CompactFlash card; CompactFlash cards can do ATA natively. It's cheap, and so are smaller capacity CF cards - all you need for a lot of SSD applications is 8MB or 16MB, after all. No battery backup or external PSU needed, plenty fast enough for most purposes, fine for everything but swap file use.

    That link again :-): www.dansdata.com/cfide.htm [dansdata.com].

  • VESA Local Bus was a competing standard to PCI.

    PCI was developed by Intel, as was AGP.
    If anything, AGP is a reincarnation of PCI.
  • I thought that Slashdot was supposed to be all about new info. I mean, solid state drives are nothing new. Is the only reason this was accepted to be posted because this article had the word "Linux" in it?????? Lame.
  • > > Oh, I can guess what your problem is.

    > Yeah, it's that the market for what I do isn't generally using Linux.

    > But yes, a gig of memory and a RAM drive would be a good approach.

    Don't know if you are going to read this, but you may try a slightly different approach first:

    A FAT32 partition. Defragmented. Maybe on a different physical disk.

    First, the NTFS file system is slow on writes because of the journalling. FAT32 is not journalled, and has better throughput.

    Second, NTFS fragments very easily (which I find hilarious, because a few years ago it was hyped as good at reducing fragmentation). If you have a lot of files that are created/removed, you end up with a disk full of holes. When writing a big file on that disk (for instance, object files), your perf goes down the toilet [mainly because Windows is stupid enough to find space "backward"]. I saw throughput divided by 10. Think about it: 10ms seek times add up very fast (a rough sketch of this arithmetic follows at the end of this comment). You end up with files that cannot be read fully in less than 1/10 of a second if they are divided into 10 or so fragments. You end up with a _lot_ of delay. The defragmenter in Win2K is not able to really reduce the fragmentation in some non-pathological cases. What is even worse is that defragmentation doesn't prevent future files from being fragmented (i.e., you get a nicely non-fragmented disk, and then the files you create there get fragmented at creation time).

    By using a separate partition for temporary object files (and maybe some often-accessed development tools, like the compiler/linker and the headers) you can re-create it from scratch to get a nicely non-fragmented space, in which link.exe, when it is called into memory, will be loaded at light speed. (Check that your disks are good too: I get 25MB/s sustained reads on an ABIT HotRod + 46GB UDMA-100 IBM disk on Win2K, and the process is CPU-bound.)

    Third, by judiciously splitting the load across 2 disks, you can overlap I/O and get better performance (in particular for the swap file, if you swap a bit while compiling. Avoid having a fragmented swap file as much as you can). YMMV, but checking with the performance monitor for a few hours can give you insight into what access pattern and what kind of throughput you can expect.

    Btw, changing your motherboard would be an efficient move too, as if you are maxed out at 256MB you probably have an old box...

    Cheers,

    --fred
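
    The seek arithmetic mentioned above works out roughly like this (a quick sketch; the 10ms seek and 25MB/s sustained figures come from the comment, and the 256KB object file size is an assumption):

        # Rough model of the fragmentation penalty: every fragment costs a seek
        # before any data is transferred.
        def read_time_ms(file_bytes, fragments, seek_ms=10, throughput_mb_s=25):
            transfer_ms = file_bytes / (throughput_mb_s * 1e6) * 1000
            return fragments * seek_ms + transfer_ms

        obj_file = 256 * 1024                  # hypothetical 256KB object file
        print(read_time_ms(obj_file, 1))       # ~20ms when contiguous
        print(read_time_ms(obj_file, 10))      # ~110ms when split into 10 fragments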
  • by f5426 ( 144654 ) on Friday February 09, 2001 @07:51AM (#443925)
    > A high percentage of my working day is spent waiting for compiles, as even a single change to a file requires on the order of five minutes of compiling and linking. A lot of that is file read/write time. If I could write it to memory-speed output rather than disk, I would be a happy man

    Uh? Put more RAM. Put even more RAM. And some extra RAM. Then use a RAM disk for your object directory, and keep a lot of RAM as the file cache. On a BSD, suppress atime updates on the directory containing system includes/libraries, or mount it read-only, or copy it onto a RAM drive. Remove atime from your sources too.

    > According to the task manager

    Oh, I can guess what your problem is. You use an OS that has a journaled metadata filesystem (so sloow sync writes for each file) and that has *very* high fragmentation (it spends most of its time seeking).

    Cheers,

    --fred

  • ... But did you ever try to BUY one of those Quantum drives? Fat chance. I gave up after being told lead times were over 2 months, and no guarantees. I went to soliddata.com. It's a neat box that has "bigass" RAM cards, an internal UPS, and a hard drive to store the contents of RAM. Standard really-fast SCSI interface (available in a variety of speeds). Totally plug and play, looks like a normal hard drive. Quantum may talk, but they can't deliver.
  • Generally they don't use high-density DIMMs. They use low density, and LOTS of them. But the boards they plug them into are all proprietary. The real answer is more that it's a niche market, so the drives aren't manufactured in large quantities. The common person doesn't have a need for a big RAM drive.

    This may change however, as the speed of processors keeps going up, yet drive speed really hasn't (not to the same degree anyway.) If applications are written specifically for SSD's, things may change. Oracle for example is now supporting the use of SSD's to hold transaction logs.... Speeds up the database a LOT.

  • RAM drives are NOT fault tolerant. They also nuke themselves when the power goes out, or the OS dies for some reason. SS drives are needed when you need the functionality of non-volatile storage yet much higher speeds than traditional disk. So no, a PC with a ramdisk and UPS does NOT equal an SSD.
  • Score 2???

    Bill Gates: 640K should be enough for anyone.
    (Old IBM founder quote): The world will never need more than 4 computers.

    Haven't we learned ANYTHING???

  • by walt-sjc ( 145127 ) on Friday February 09, 2001 @07:40AM (#443930)
    Even large drive arrays can't keep up with an SSD. Don't think "large files, continuous writes"; think opening and closing 100,000 small files and needing to write to all of them. Think directory updates and searches. There are lots of variables. I tried the drive arrays (much larger than your setup, with E4000s, Fibre Channel hardware RAID, etc.) and the SSD still kicks ass. There are also MANY cases where you don't need 1/2 terabyte of storage, yet still need speed. Why should I use a $60,000 drive array when a $20,000 SSD works FASTER?

    Bottom line is that you need to use the right tool for the job. Sometimes it's a SSD, sometimes its real disk.

    Don't forget that ram disks generate less heat and use less power and have no moving parts compared to a drive array.

  • by SupahVee ( 146778 ) <superv@NOSPAm.mischievousgeeks.net> on Friday February 09, 2001 @04:09AM (#443931) Journal
    Quantum [quantum.com] has had solid state drives for almost 4 years now. They pioneered the field and their scsi SSD's blow the doors off anything out there. And with an added benefit, it's native scsi, no special drivers needed, access times in the 50ns range, as opposed to the standard 5-7ms for even Cheetah drives.
  • Although this is quite funny, after Doug Miller's recent comments would the pot please refrain from insulting the kettle.
  • What about SDRAM on your memory bus? Why take a Gig of RAM and throttle it down to SCSI-bus speeds? Just give it to the OS instead. I'm sure it will make better use of a statically-assigned chunk of RAM acting as a disk.
  • >err sorry for all the grammer mistakes

    funny thing is that your mistakes didn't jump out at me, I had to go back and re-read your post.

    i think /. has dulled my perception of gammer and spelling... think I should read less since part of my job is to write and review technical documents.
  • yaknow, I've had one of these devices for about 1.5 years now. I've posted about it, did *all* the US beta testing under NT/2k/FreeBSD/Solaris, and nobody noticed. Now we see this? What happened to cutting edge?

    -JPJ
  • The cost is incurred due to the large-capacity DIMMs. While 4x256MB DIMMs are relatively cheap, 1GB DIMMs are more expensive due to the density.
    -JPJ
  • Why use a swap partition at all if you can just buy more memory? Isn't that the same as buying this?

    Might be good for people that have all their slots filled. All in all, sounds like a very cool product.
  • I just checked Quantum's site, and FYI, it's not 50 nanosecond access... it's 50 microsecond... 3 orders of magnitude is quite a lot... ;)
  • This isn't really a new idea; I've had Linux running on a 16MB solid-state SanDisk for a couple of months now... makes a great item to have in a router. The disk has an IDE interface and performs very well.

  • Although Australia is large by area, the population is less than half that of the UK.
  • This could be made cheaper through motherboard integration.

    I think all you would need to do is have battery backup of one DIMM slot - a feature that, without including the battery, would add less than $1 to a motherboard.

    Basically the motherboard already has the memory controller and DIMM slot. BIOS programs are probably unneeded (for Linux anyway) since kernel startup routines could just scan the RAM.

  • If I had $15,000 and a good reason, I'd buy a Platypus in a heartbeat... What do you mean it's a harddrive? Whats a Harddrive?
  • Question:

    What would happen if someone (me) held a contest with a $10,000 reward for an open source SCSI RAM drive?

    Specs(min):
    SCSI1 interface. May not exceed full-height drive bay dimensions. Must accept one to eight 512MB DIMMs. Drive needs to be OS/BIOS transparent. Parts list, alternative IC list, schematics, any source code, PCB layout, and working prototype must be submitted to the judging body.

    Cash will be held by Slashdot until a winner is chosen by a panel comprised in part of Slashdot admins.

    Just a (rough) idea. The money's not the problem here.

  • Less than 1/3 the population of England. Easily less than 1/4 the population of the whole UK.
    Just over 4 times the population of Nebraska, I think.

    FatPhil
    -- Real Men Don't Use Porn. -- Morality In Media Billboards
  • More like 18 million. And, Vegemite aside, a fair bit of good stuff comes from Australia. They also have a tradition of exporting their best and brightest to work in other countries (funding for universities really sucks under the current Australian government---after living there for a few years, I just moved back because it's easier to pursue a career in the USA). If we could get figures for ex-pat Aussies working on projects I suspect that everyone would be quite impressed for what Australians do with only 18 million people.

  • We had one of these in the office, in a company that ceased to exist. They are interesting for limited-access terminals and such. Hardware in diskless slimlined cases is about a year or two behind and costs just as much. Great thing, though they have an 'expected' diminished breakdown rate because of the absence of mechanical parts... that is, expected.

  • The Linux Router Project should be using these cards/drives... but that's the only REAL use I can think of for them... too tiny for anything more. Definitely aren't going to get a (for instance) SQL-based financial database on them.
  • Am I foolish to think that this would make a great swap partition?
  • I mean, think about it... RAM is not _that_ pricey. There must be a lot of research dollars to compensate for. You can get 512 megs of PC133 for ~US$200, so why does it cost outrageous sums for the drives?

    Solid-state hard drives are so expensive because they use SRAM, not the DRAM you are referring to. SRAM, or static RAM, is an entirely different memory technology from DRAM, or dynamic RAM. Its main selling point is that it is MUCH faster than DRAM. The problem with it is that it is much less dense than DRAM and uses a lot more power. These things make it much more expensive than DRAM, which is why you don't use it to expand the memory of a PC. In fact, SRAM is the kind of memory used in on-chip caches in microprocessors because of its extreme speed.

  • by crgrace ( 220738 ) on Friday February 09, 2001 @08:48AM (#443950)
    What's the big deal? Am I just missing something? I'm really not trying to start a flame war here, I just don't get it...

    You are missing two things: speed and volatility.

    1. SPEED: A solid-state hard disk is made out of static RAM (SRAM), not the dynamic RAM (DRAM) that constitutes the user RAM in a PC. SRAM is what is used in on-chip cache and is MUCH faster than DRAM because it stores information actively and has physical amplifiers in each memory cell (usually SR-latches), rather than passively storing the information on a capacitor as in DRAM. Because of this it is also much more expensive and burns more power than DRAM. That is why these solid-state hard disks are so expensive.

    2. VOLATILITY: When your computer crashes, or you shut it down, your RAM disk is GONE. This means you have to periodically write it to a physical hard-disk. With a solid-state hard disk, it looks to the computer just to be an amazingly fast hard drive, and no memory-management overhead is required. This is a big deal to large data warehouses and data mining operations.

    The real selling point to the solid-state hard drive is the speed. Internal SRAM can operate upwards of 1 GHz, and although it can't communicate with the outside world at that speed of course, with advanced high-speed digital signaling technologies you can achieve latencies and throughput unheard of with regular hard-disks and even DRAM based RAM-disks.

  • A number of things make SSD superior to disk caching.

    1. Latency -- A cache is just that: cached. It must be loaded initially, and this takes time. An SSD has orders of magnitude lower latency and can load a cache much faster. Disk caches are only good if you are reusing data. Applications like databases and streaming applications don't take advantage of caching.

    2. Bandwidth -- While you can get bandwidth equal to that of an SSD out of RAID, memory has so much more bandwidth left over that it can be used across several machines. Check out http://www.texmemsys.com - they have an SSD with up to 3.2GB/s of I/O; that's more than the I/O subsystems of most systems.

    3. Size -- Many computers have realistic limits on how much memory they can hold; disk subsystems are less constrained. Most desktops/workstations top out at 4-8GB. You can add SSDs until your head spins.

    4. Volatility -- While SSDs are volatile, most have battery backups, or can be connected externally to other redundant power supplies. It's much easier to recover memory viewed as a disk from an OS standpoint than it is to recover it from system memory.
  • It's an incredible waste of money to put all your data in RAM. You are much better off having a large solid state cache in front of a traditional disk drive.

    If you have battery backup for the solid state cache and if it's reasonably large, then you get pretty much the same effect as if you had a disk consisting of all RAM. Sun used to sell something like that for speeding up NFS called "PrestoServe".

    Of course, Linux already has non-battery-backed-up RAM caches built in. The interesting thing is that even if the kernel crashes, you can usually recover the unwritten data from RAM and (after verifying something like a checksum) write it out to disk (this fails, of course, on platforms where the BIOS messes around with memory before the OS boots).

    So, the best thing to do is probably to have battery backup for your RAM and processor and use an operating system that recovers information from RAM after a crash. That way, you can use normal RAM for caching, and if your machine crashes, you can recover quickly and with no or minimal loss of data.

  • Well, on reboot, everything on the ramdrive would be lost, wouldn't it?
  • You do realize Australia has only 10 million people, right? And what does English-speaking have to do with anything?

"What the scientists have in their briefcases is terrifying." -- Nikita Khrushchev

Working...