Hardware

Super-Fast Hard Drives

codders writes: "An Australian startup company, Platypus Technology, has launched a range of RAM-based solid state drives. These QikDRIVEs can offer sustained data throughput rates in excess of 110MBps and can be up to 8GB in size."
  • From memory, the bandwidth they're advertising is a significant percentage of the PCI bandwidth - which implies that putting two of these in will not increase your throughput - and even one of these running flat out will mean other devices such as PCI video and networking may struggle.

    I had hoped the quantum leap we benefited from when moving from ISA to PCI would last a bit longer than 7 years, but I suppose that's three Moore's-law doublings, so it's not that bad.

    Compaq 8500s and Sparc e420s have multiple busses. If you can afford one of these puppies you can afford a good platform to run it on.
  • Comment removed based on user account deletion
  • People have been making virtual drives for years now. The only problem is that mobo makers don't create systems with that many DIMM slots (for obvious profit reasons). What I don't understand is why we even bother with virtual memory at all. If you can afford to run a huge amount of RAM on your system, and never even come close to using all of it, why not disable VM entirely? Is there something I'm missing here?

    Seriously, the present hard drive media has to go. It's the bottleneck of the entire system - and one of the few remaining moving parts in a computer. I'm looking forward to the day when it's all solid-state.
  • You're thinking of the POSIX.4 (realtime functions) mlock and munlock functions. NT has a POSIX subsystem, and Win32 probably has an equivalent function that works on W9x and NT.
    #define X(x,y) x##y
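    A minimal sketch of what those calls look like on a POSIX system (the 4 MB buffer size is just for illustration):

    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 4 * 1024 * 1024;       /* 4 MB scratch buffer */
        char *buf = malloc(len);
        if (buf == NULL)
            return 1;

        /* ask the kernel to keep these pages resident in RAM,
           so they never get written out to swap */
        if (mlock(buf, len) != 0)
            return 1;                       /* may need privilege or rlimit headroom */

        memset(buf, 0, len);                /* work with the locked buffer */

        munlock(buf, len);                  /* release the lock before freeing */
        free(buf);
        return 0;
    }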
  • Seems fairly silly. For $2500, you can get 10 9.1GB Quantum Atlas V (7200 RPM) Ultra160 HDDs and, say, a DPT single-channel Ultra160 RAID controller.
    The drives push about 29MB/sec each on the outside track, so in RAID 0, you'd be pretty much hitting the limit of your PCI bandwidth even at the innermost portion of the disk.
    $2500, 90GB of 132MB/sec storage. Since you'd support all the important flavors of RAID, you could sacrifice storage space for redundancy if you like. If you're editing video, you probably don't care. For half the price of this QikDRIVE thing, you could get two controllers, each with 10 drives, and make them redundant (RAID 0+1).
    Not only that, but you could toss in a motherboard with a couple of 64bit PCI slots, and 1-2GB of memory, and STILL be cheaper than the QikDRIVE.
  • the more you utilize the tremendous speed of the RAM HD, the farther behind the disk will fall

    I definitely see this happening in disk space vs. backup media too. Thanks to Diamond, IBM, et al., the capacity of consumer hard drives has left reliable tape backup systems in the dust... Has anyone really worked out how to back up ten or twenty workstations with 60GB of local storage each? A file server with 200GB? Without spending $10,000+ on a tape drive and all day writing to it? With the price of high-end backup devices these days, you could buy all your hard drives over again, maintain them as off-line mirrors, and still have enough $$$ left over for several GB of RAM...

  • It is generally better to add RAM as RAM on the main bus of the computer. That way, you have a lot more flexibility for how to use the extra memory: you can use it for caching, for applications, or as a RAM disk.

    Note that RAM contents already survive reboots; it's the operating system that erases it (some systems take advantage of this fact for fast reboots). If you need power failure protection, you can also back up RAM that sits on the bus with batteries.

    So, I think this card is a kludge, something that gives people a quick fix solution to a performance problem. For a quick fix, however, I would prefer a self-contained external box with a SCSI interface.

  • Wow, this sure dates the age of the average Slashdot reader. No knowledge of Core Memory, Cray SSD drives...
    I feel old.
    The Platypus comes in 3 flavors: the QikCACHE, without an external power supply; the QikDRIVE, with an external power supply; and the QikDATA, which has 'the added protection of automated back-up of data to an independent hard disk drive in the event of complete power loss.'

    And how, pray tell, does the *independent hard drive* function in the event of a complete power loss?

    The QikCache is 'card only' - when you turn the computer off, it gets wiped. But that's irrelevant, since the thread is specifically about the QikDrive.

    The QikDrive is 'card plus external power supply', which allows you to turn your computer off (e.g. to make hardware changes) without wiping your data. It still isn't non-volatile (it wipes if the power is cut) and it costs $9840.00 for 8GB. Quantum's price list is down right now, but I read it was $500-600 per GB, or less than half the price of the QikDrive.

    The QikData has "mirroring capacity" and the "capability" to transfer to a Platypus HDD with a built-in UPS. This increases the price even more -- the HDD/UPS unit is extra. You also have to load the HDD back to the QikData before you can restore service. Not good.

    A nonvolatile Quantum, on the other hand, is not dependent on the power. Yank the cord, pull the Quantum, take it cross country, plug it in, just like a HDD. And you have instant high-speed access.

    So, while I have no strong feeling about either device, I'm hard pressed to see how you give the advantage to Platypus at all, much less $8K worth! No, I'm not being sarcastic. I really am curious why you gave the QikData the advantage. At all.

    The Quantum is faster in "transactions/sec" (who cares about 'peak bandwidth'?) and cheaper. And it can be nonvolatile, straight out of the box, without extra wires and components. (You can come crying to us when a technician, seeing that the server is powered down, feels free to unplug the QikData power cable. A UPS doesn't help if you're not connected to it!)
    _____________
  • 533MBps for a 64-bit PCI bus at 66MHz means that at 110MBps the drive is using just over 20%. Add a 100MBps network card and you're at about 40%, which leaves over 300MBps for other devices even if your NIC and HD are running flat out.
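    (Sanity check on the numbers: 64 bits is 8 bytes, and 8 bytes × 66.7 MHz ≈ 533 MB/s. 110 + 100 = 210 MB/s, or roughly 40% of that, leaving a bit over 320 MB/s for everything else.)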
  • If you were to saturate the PCI bus with one device as you suggest, what happens to your NIC card, sound card, your SLI Voodoo 2 setup (hehe), a second drive, or anything else which happens to be on the PCI bus?
  • I think that they'd last longer, in fact. I mean, how often do you have to replace RAM, vs. how often you have to replace a hard drive - the hard drive goes much quicker because it has moving parts. RAM chips do not (well, transistors are more like, chemical/crystalline or something, aren't they?)

    Mat.
  • Maybe I misread, but these things ARE volatile. They're just regular RAM on a special board. The only thing that makes them less volatile than regular RAM is the separate power supply.
  • Anyways, this would be great to stick your swap file on...

    I guess if the motherboard (and operating system) supported it, then you could just GET that much memory (and get rid of the swap file entirely!).

    Do they use cheap (and relatively slow) memory for this thing, or what? How much would 8GB of 133MHz SDRAM cost if you bought retail?

  • The original article detailing the press release (March 20th) is here [zdnet.com].

    The default configurations at those prices are $1538 for 512 MB (upgradeable to 1 GB) and $9840 for 4 GB (upgradeable to 8).

    I'm curious; does anyone think that having an external power supply on the RAM drives makes them worth the price premium over a software RAM drive?

    Hopefully the prices will drop as the next generation of SDRAM factories comes online... ;-)

  • by HeghmoH ( 13204 ) on Friday May 26, 2000 @11:28AM (#1044172) Homepage Journal
    I don't quite understand why you'd use one of these things. It seems to me that you'd get much better speed by simply putting all 8GB on your motherboard. Surely there are motherboards that can take that much memory, in addition to a 64-bit processor to address it all.

    I guess the part about an independent power supply is useful. If the power goes out, a UPS is going to be able to power a dinky little card a lot longer than an entire server. However, if your server is under enough load that you need one of these things anyway, you probably have multiple safeguards in place should the power die. You could always keep a hard drive on standby and write your ramdisk to it when the UPS notifies the computer that the power's dead.

    So, am I missing something? Is it less practical to cram that much memory onto a motherboard than I thought?
  • Product Availability And Pricing

    QikDRIVE1 (maximum capacity 1GB) and QikDRIVE8 (maximum capacity 8GB) are available now. Pricing varies depending on configuration. As an example, suggested retail price (ex-tax) for 512MB is $2,500; and $16,000 for 4GB. QikCACHE1 (max. 1GB) and QikCACHE8 (max. 8GB) are available from April 2000. QikDATA8 (max. 8GB) will be available for shipment in June 2000.


    Found here [platypustechnology.com]. At today's exchange rate, that's about $1,429 USD for 512MB and $9,145 USD for 4GB.
  • Saw these gizmos at the Linux Expo in Sydney. Apparently they've been running them for quite some time in government departments - I think some were being used for long-term testing by the RAAF at the Defence Force Academy. It's only just recently that they've released them to the public.
  • The QikDrive is on a PCI card, but has its own internal power supply for data security. Presumably to keep the drives from being wiped by a system power failure.

    Does that mean you lose your data if you unplug its power supply? If so, that would make a UPS necessary, since the smallest power fluctuation could zap your data. But the most likely way to lose data would be moving the machine - and forgetting to back up the drive before unplugging it. DOH! : )

  • Anyways, this would be great to stick your swap file on...

    yeah... Until you think... umm, well, if your swap is on ram... Why not just buy more system RAM? and just don't allocate swap. It'll always be faster if the mm layer never has to think about paging out. (Hey, you'll save some ca$h, too.)

    "Never trust a Programmer who carries a screwdriver."

  • Fast access to offline storage - video clips, etc.

    Programs like Premiere and Avid Media Composer don't keep the entire video clips in memory - it would never work with 500MB clips being routine. Instead, they are stored on RAID arrays or fibre channel-connected Storage Area Networks. Although fast, there is still appreciable time involved when it comes time to render the clips, as the information has to be pulled off disk, processed, and rewritten to disk. With a super-fast (i.e. RAM speed :-) disk, video editors and the like will see a large increase in speed.
  • Perhaps someone can do a hardware workaround using an intermediate NVRAM between the SDRAM HD and the hard disk, using principles borrowed from both cache technology and High reliability file systems. But it'll take a bit of work.

    You can get pretty good performance with NVRAM, by tightly integrating the OS, the filesystem, and the RAID subsystem. Hop on over to NetApp's Technical Library [netapp.com] to read how they did it. In particular, check out File System Design for an NFS File Server Appliance [netapp.com] which discusses how the Write Anywhere File Layout (WAFL filesystem) knows (and can take advantage of) how the RAID subsystem works. The NVRAM is mainly used to speed up write performance. They've also got some interesting bits on how the RAM cache works. It gives decent performance, even though NetApp caches tend to be small by modern standards.

    James

  • Replace the SDRAM on these puppies with the NVORAM being developed by Ovonyx Technologies [ovonyx.com], as described in this [slashdot.org] Slashdot article from 5/24. That would solve the volatility problem and (IMHO) the problem of having to buy 8GB of high-performance DIMMs. Granted, I'd rather see the tech using a standard HDD interface (SCSI, IDE, DMA/66, etc.) than being available only for swapfile.
  • Because... what do you do, on the off chance that you run out of memory? It's bound to happen. Even if you have 2 gigs of RAM... you may run out of memory (hell, on my 512 meg machine I frequently have more than 512 megs of processes running). If you turn off a swap file then you're basically fucked...

    Yes, but having a swap file doesn't solve your out-of-memory problem, as you will encounter the exact same situation a bit later, when you run out of swap space. It does delay the onset of the problem for a while--on the other hand, during the time your machine is out of RAM and swapping, your machine's performance will be so poor that it might be better to have it just Panic immediately and be done with the crisis.

    (by Panic I don't mean crash--rather, the system could initiate some kind of emergency procedure for freeing up RAM... maybe by finding the largest non-critical process and killing it. Which processes are considered "non-critical" is left as an exercise for the Sysadmin)
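    A rough user-space sketch of that "emergency procedure" on Linux (it reads VmRSS from /proc; "non-critical" is simplified here to "largest resident set", which a real implementation would obviously refine):

    #include <ctype.h>
    #include <dirent.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>

    /* find the process with the largest resident set and ask it to die */
    int main(void)
    {
        DIR *proc = opendir("/proc");
        struct dirent *de;
        long worst_pid = -1, worst_rss = -1;

        if (proc == NULL)
            return 1;

        while ((de = readdir(proc)) != NULL) {
            if (!isdigit((unsigned char)de->d_name[0]))
                continue;                            /* not a pid directory */

            char path[64], line[256];
            snprintf(path, sizeof path, "/proc/%s/status", de->d_name);
            FILE *f = fopen(path, "r");
            if (f == NULL)
                continue;

            while (fgets(line, sizeof line, f)) {
                long rss;
                if (sscanf(line, "VmRSS: %ld kB", &rss) == 1) {
                    if (rss > worst_rss) {
                        worst_rss = rss;
                        worst_pid = atol(de->d_name);
                    }
                    break;
                }
            }
            fclose(f);
        }
        closedir(proc);

        if (worst_pid > 0) {
            printf("killing pid %ld (%ld kB resident)\n", worst_pid, worst_rss);
            kill((pid_t)worst_pid, SIGTERM);         /* a real version would check a whitelist first */
        }
        return 0;
    }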

  • Second!
  • But on a 32-bit normal PCI bus, at the normal 33MHz speed, this card soaks up not 20% but more like 80%. Also, if it hogs the bus with large data transfers like SCSI can, then the latency of other bus requests will rise. 1500 bytes from a max-size network packet will grab the bus for a far shorter time than a multi-MB collected DMA from disk.
  • Sounds like just the thing for mp3 players.. hoohah!
      • Application structure in general - Lots of applications know they're dealing with *files*, not memory, so you've got to give them something that looks like a disk, either by hanging it off a disk controller (using SCSI or IDE-like), or on a PCI bus (imitating a disk controller), or using RAMDISK drivers, or using hybrid memory/disk file systems like TMPFS.

      Again, this argument just doesn't hold salt.

    Yes, but is it worth its water?

    While I'll agree that in general these devices have limited usefulness, there ARE cases when an application developer can better guess at important usage patterns than a caching algorithm can.

    For example, a given application might be optimizing indexes when there is no demand on it (when the application would be otherwise idle). In such a case, the caches would always be full of arbitrary indexes that are being optimized.

    However, when a user actually attempts to use the application, 99 out of 100 times a particular set of pages might be called on first. In this case, it would be nice if you could guarantee that these pages could be accessed with minimal latency.

    Having said this, I would, in general, rather have the directly addressed RAM (standard memory) rather than the SCSI bus RAM in my system. Even if there is an advantage to RAM drive optimizations, I'd prefer to have the flexibility of RAM drive software to turn segments of the directly addressed RAM into easily reconfigured RAM drives rather than inflexible SCSI bus RAM drives.

    I also think the argument this poster makes best [slashdot.org] - that dropping a RAM drive in gives a boost to existing applications - is a good one. But, again, a RAM drive defined in software would be better here.


    -Jordan Henderson

  • Some modern operating systems actually keep a stubby ramdisk around all the time, basically as a root drive that they can mount other file systems on. It turns out to be useful in diskless environments and CDROM-based systems, where you don't have a real disk to hang a filesystem on. There are other ways to do it, but it's pretty clean for many kinds of environments. OTOH, you can reasonably view it as an extension of a boot loader ramdisk.

    You probably don't count Win 95/98 as modern (:-), but I actually do use a small ramdisk as scratch space for security applications where I don't want the data written to a disk drive. (There is still a risk it'll get written to swap space, but I'm not doing anything at that level of paranoia :-)

    It's also useful for stashing web or mail downloaded zip files where I don't have to worry about cleaning them up later, but having a big enough disk drive with a /tmp on it would do as well. Unix tmpfs file systems would do a better job of that - they avoid writing stuff to disk until necessary, on the assumption that you'll discard most of it pretty soon. That speeds up applications like compiles more effectively than the normal caching strategy, which assumes you're saving something to a disk because you actually *want* it written to the disk.
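    For scratch data that should never need cleaning up, the classic trick (sketched here; the path only helps if /tmp really is tmpfs- or ramdisk-backed) is to create a temp file and unlink it immediately, so the data lives only as long as the file descriptor:

    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        char path[] = "/tmp/scratchXXXXXX";
        int fd = mkstemp(path);           /* create a unique temp file */
        if (fd == -1)
            return 1;
        unlink(path);                     /* name is gone; fd still works */

        const char data[] = "transient working data\n";
        write(fd, data, sizeof data - 1);

        /* ... use fd as scratch space ... */

        close(fd);                        /* last reference: contents discarded */
        return 0;
    }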

  • I don't see too big of a use outside of the overpriced mega-server market, though, and let's just hope that none of them suffer from any brown-outs. In my day-to-day work, on a 500 MHz Pentium III with 256 megs of RAM, it takes approximately five minutes to change one file, recompile, and relink. A faster CPU would help a little, two CPUs slightly more (Anandtech got a 30% speed boost from two CPUs with Visual C++), but I think hard drive access time is the dominating factor. Could I use one of these things? Ooooh yes. The cost would be a hard sell though.
  • If you were to saturate the PCI bus with one device as you suggest, what happens to your NIC card, sound card, your SLI Voodoo 2 setup (hehe), a second drive, or anything else which happens to be on the PCI bus?

    They run off of the *other* 64-bit 66MHz PCI bus, of course.

    Come on, if you're going to spend $8000 on memory, you're probably going to have more than one PCI bus. (Of course, you're probably also not going to have a voodoo 2 setup, but that is a different issue).
  • Would you not be better off with more RAM than having swap effectively using RAM? After all, that's all it is.

    In fact the RAM would almost certainly be cheaper, and a better investment in the long run.

  • Think database servers. They have to keep transaction records which must be saved in case of (power) failure. They usually use hard drives, but this would make it much faster.
  • by Stoutlimb ( 143245 ) on Friday May 26, 2000 @11:41AM (#1044190)
    Microsoft should buy up this company, and then make these drives as cheap as possible for everyone to use. Reboot time would go down significantly! They would save billions in not having to make their code more efficient.

    "The hardest thing to understand is the income tax." - Albert Einstein
  • Macs have 64bit PCI slots :)

    I'm sure I can exceed 110MBps with an Adaptec 39160 SCSI card (64-bit, dual channel) and 2 drives in a striped RAID. For less than half of the price for the 8GB version. Using 2 18.2GB 15,000rpm Ultra3 SCSI drives too!
    --
  • You're right, these have been around for a very long time. In the traditional Unix world they have been used as database accelerators and such.

    It is nice to see this technology trickle down into the PC world, though. Now we just have to wait for PCI's successor to see some real throughput improvements.
  • I remember seeing an ad for a laptop computer a few years back with a solid state hard drive. The idea was that data could be accessed faster, and it would not drain the battery as fast. The only problem was that the drive was only 4 MB in size. It seemed like a good idea, but at only 4 MB, it also seemed like an idea that just wouldn't work.

    Now we have these QikDRIVES capable of holding up to 8GB. Finally, enough space to be useful. But because they are using ECC SDRAM, the prices are going to be too expensive for all but the most serious of consumers. It's generally been accepted that hard drives are slower than RAM, so why not use cheaper RAM that runs at, say, 60ns (like the old EDO RAM)? We would still probably want to use ECC or something similar for data integrity. Would this cut the price sufficiently to make it attractive for the average person? Obviously, it would still cost more than a conventional hard drive, but hopefully not nearly as much as the current line of QikDRIVES.

    Now I'm not saying to dump the existing line, since they will still be attractive to those who need the high performance. I'm just thinking of how to reduce prices. And over time, the prices on these things can be expected to drop anyway, perhaps even making them commonplace. Does this sound plausible?

    --
  • Well, I have to assume that the data is non-volatile. I didn't see anywhere that explicitly said if it was or not.

    If it *is* non-volatile, then the answer should be obvious.
    --

    A mind is a terrible thing to taste.

  • Also, under some operating systems (such as Windows), many applications require a swapfile, because the swapfile can be (and is) used for other purposes, such as memory-mapped file access (for easy sharing of data among applications).
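    For anyone who hasn't run into memory-mapped files: the rough POSIX equivalent of those pagefile-backed shared sections looks like this (file name made up; on Windows the analogous call is CreateFileMapping, which can be backed by the pagefile directly):

    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* two cooperating processes can map the same file and see each other's writes */
    int main(void)
    {
        int fd = open("/tmp/shared.dat", O_RDWR | O_CREAT, 0600);
        if (fd == -1)
            return 1;
        ftruncate(fd, 4096);                       /* one page of shared space */

        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            return 1;

        strcpy(p, "hello from process A");         /* visible to other mappers */

        munmap(p, 4096);
        close(fd);
        return 0;
    }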
  • Just how much backup juice would you need to keep a bunch of sticks running in the event of a crash?

    I would think a reliable UPS and an "old-fashioned" drive on standby would be a pretty sure bet. You'll need a physical drive to load up your Qikdrive at boot time, and save out at shutdown, so that thing will already be spinning when the power cuts out. Then your only worry is that you can copy 8G before your UPS fails ;)
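    (For scale: 8GB at the card's quoted 110MB/s is about 75 seconds; at the ~29MB/s another poster quotes for a fast SCSI disk it's closer to five minutes, which should still be well within what a decent UPS can carry for one machine.)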
  • Last time I checked, RAM was hardware.
  • Come on, people. A number of companies have been manufacturing solid state disks for years (Imperial Technology comes to mind). Where's the news in this? Is Slashdot into free advertising now???
  • And a floppy disk drive is also hardware, but that doesn't make it a hard drive.
  • For me the main thing is having enough *reliable* space to store my digi-junk.

    For you the solution would probably be a four letter word: cdrw.

  • Might be. There is probably a difference between OS based ramdrives and external ramdrives.
  • You know, if you immature zealots spent half the time fixing things on your own preferred system instead of bashing others, you'd get a hell of a lot more done.
  • by Anonymous Coward
    Similar devices proliferated when minis and micros started hitting the 16/20/24-bit addressing limits -- adding a few hundred meg of page-out space to a Xenix-286 box or 68000-based machine was a great way to get around addressing or hardware limitations when you couldn't migrate to a better system for whatever reason.

    Most of those died off when 32/64-bit addressability started getting more commonplace, I guess it's about time for that to happen again as M$ isn't going to have Win64 working for years.

  • This would be lovely for an instant-on boot disk, but the price would have to drop by leaps and bounds.. I could never get by with only 8 gigs.. (doing graphics & video work, and being lazy about cleaning up old versions).. I am somehow out of space here with 160 gigs of HD space - just picked up a 40 gig drive for under $400 Canadian.. For me the main thing is having enough *reliable* space to store my digi-junk..
    -
  • by orpheus ( 14534 ) on Friday May 26, 2000 @11:53AM (#1044205)
    Before anyone gets carried away, please remember that these are volatile RAM drives.

    The Platypus overcomes the HDD's primary liability (read/write latency) at a serious cost to the HDD's primary function: reliable storage. Note that it doesn't even have an on-board battery. It simply has a separate external power supply and (optional) UPS.

    While a UPS is wonderful for keeping my system running, it's much less reliable than it needs to be if an outage (or an office idiot kicking the plug out) means I lose *all* my data (sales for the day, etc.). In a sense, the Platypus drive is not much stabler than having 8GB of system RAM and *no* HDD ("not much" is relative; the MTBF of a UPS is orders of magnitude less than a good HDD).

    I doubt the usual high reliability filesystems could maintain a RAID/HA type redundant backup to disk precisely because the RAM HD is so much faster than the disk. It would be like having a scribe backing up your HD to quill-and-scroll -- the more you utilize the tremendous speed of the RAM HD, the farther behind the disk will fall (and thrash).

    It's a nice product (though hardly a new idea), but I see it having limited application (e.g. as a HD accelerator in some server applications)

    Perhaps someone can do a hardware workaround using an intermediate NVRAM between the SDRAM HD and the hard disk, using principles borrowed from both cache technology and High reliability file systems. But it'll take a bit of work.

    Is there already a solution out there? Or is this essentially just a giant unidirectional HDD cache, good for serving up data faster than an HDD, but not good for critical rewritten data?
    _____________

  • But geez what a great swap partition it would make!

  • by Anonymous Coward
    As long as it's not squidgy, it can be called a hard drive :)
  • Booting? I usually only reboot every two weeks and that's on my laptop... the server will get it every 3-6 months for an upgrade, that's it.
    --
    Jesse Tie Ten Quee - tie@linux.ca - highos@highos.com
    http://highos.dhs.org
  • So basically, this is a souped-up memory expansion board with its own power supply and software that does essentially the same thing as ramdrive? Can Platypus offer any kind of discount that makes this cheaper or more attractive than just buying extra DIMMs and configuring said virtual drive software? I mean... I've already got my PC on a UPS... This is one of those technologies that boils down quite a bit when you analyze it. Try again, fellers...
  • by Xenu ( 21845 )
    The advantage of these things is that you can drop a RAM drive into a system and get immediate performance benefits, without having to modify the applications and operating system. The system's RAM may already be maxed out or it may have a limited address space. The last time this came up, someone mentioned that there were external RAM drives with battery backup and a hard disk that could mirror the contents of the RAM.
  • According to their website it "works like a SCSI drive" and has Linux support. My guess is that it has its own SCSI controller on the card, which may have been tweaked to support the high bandwidth (110MB/sec) of the RAM. What does Ultra2 SCSI support, 80MB/sec? This might be useful as an external SCSI device but you would lose performance, I think.
  • Linux does this, and I am very thankful. (And remember kiddies, leaving a running Ethereal on a busy network while you go to lunch is a baaadd idea! The machine was unresponsive (although still responded to pings in 0.5ms) and constantly swapping for about a half hour after I got back, until the kernel decided to kill the right process.)
  • It might be a much cheaper way to get a large amount of RAMdisk, than buying more system memory.

    Motherboards have only so many memory slots. 256MByte DIMMs are the cheapest per MByte, 512MByte DIMMs are still reasonably priced per MByte. Gbyte DIMMs are way more expensive. 2GByte DIMMs are insanely expensive (SGI sells them for their Octanes, I don't know who else does).

  • It seems to me that with Linux' hard disk buffering algorithms, this card wouldn't really make much difference from simply putting, say, 4 gigs onto the motherboard. All the useful files would simply be loaded into the buffer on the first load (yes, a once-off performance penalty, but no slower than copying the files from a normal hard disk to a solid state one), and then it would be faster even than RAM sitting way out there on the PCI bus.

    On other OSes, perhaps it would be advantageous. But as to the suggestion that it could be used for virtual memory (this was on their Web page!), you have got to laugh. Does it make any sense to anyone? I mean why go through the kernel as a file operation, go through the PCI bus, get the stuff from RAM, bring it back and THEN turn it into a page; rather than using actual RAM?

    Has anyone actually done benchmarks on the supposed applications of these things (say, webserving under Linux)? I could find no benchmarks on their web page. It seems to me that it might be 10 or 20 per cent faster, but given that the bottleneck is likely to be the network and not the machine (and besides, it would be better value to simply *upgrade* the machine), why bother?

  • If you ask me, this is a pretty risky venture for a startup company. First off, it's not like the technology is unheard of, not to speak of the lack of innovation in the drive. Second, if I were running my business on the dependability of this particular drive, I must have gone crazy. Even though HDDs are comparatively much slower, I know that the risk of losing business, time, and money isn't worth the performance increases. In the corporate world, even 99.9% reliability isn't good enough when the next guy behind you is ready to offer 100%. Although this would be great for graphics-intensive purposes like video and animation, I can't see most other companies shelling out the bucks for this riskier option, and there is NO consumer market for it (although many of us at home would drool at the prospect of setting up the ultimate home gaming machine with one of these suckers. Personally, I'd like to see benchmark comparisons just for kicks.....). It certainly is a risky product to profit from, though I am all for production of newer products like this, since the only way to truly move technology is to commercially market it.
  • Seeing as how most MP3's are 128-256Kb/s, what use would you have for that kind of speed in an MP3 player? Sure, you'd have the song loaded in about 1us, but sheesh, I don't think the cost would justify that. Also, there's the fact that if you lose power, you lose the data... if you let your batteries reach "dead", no more songs. So there would be a constant drain on the batteries too. I'm assuming a mobile version of this would run off of 5V. Of course it wouldn't take long to load your 8GB of MP3's back on it at 110MB/s. :) I don't think this sort of "RAM" drive has much of a mobile use. I wouldn't put anything of value on it with anything less than redundant power supplies and a UPS.

    -Slayback

    P.S. Does anyone know of a website that is running off of one of these? Just curious.
  • Yah, a PCI bus. The "external" is limited by the PCI bus which would, imho, make it slower.

    -Slayback
  • by jafac ( 1449 ) on Friday May 26, 2000 @12:03PM (#1044218) Homepage
    The thing I don't understand is, why-oh-why do they insist on using, like, the highest-cost RAM available for this application, when there's plenty of 4-meg SIMMs out there from old machines and memory upgrades that are not being used? Why can't someone create a drive like this that just has arrays of empty SIMM slots, so we can get this currently worthless commodity and put it to good use?

    I just remembered this old Metallica song. . .
  • by darial ( 177051 ) on Friday May 26, 2000 @12:05PM (#1044219)
    Unfortunately, I'm under NDA, so I can't tell you everything about it, but I have a fairly good grasp on the performance of this class of device.

    The biggest problem with 32 bit machines is not the 32 bit int, which is really sufficient for most things, it's the inability to address more than 4 gig of memory. This provides a relatively clean solution to that problem by using this device as swap. The burning question, of course, is performance... how much worse is it than on board memory?

    The answer is, of course, it depends: if you are in a single-process environment, the time it takes to swap pages is somewhat killer, because the machine just sits on its ass while the DMA moves the block. Now, don't get me wrong, it's a lot better than disk, but it's not like real memory.

    However, on multi-process machines like servers, it's great. There is a delay for the page swap, but the other processes keep the CPU busy and the DMA keeps the bus busy. Since throughput is more important than response time, this is almost as good as onboard RAM. But, you say, this is MORE expensive than real RAM? Not really... for an app like this, it will be an SMP machine anyway, and the difference in cost between commodity x86 parts and a 64-bit-plus-8-gig uberboard setup from a proprietary vendor is so great that you could buy one of these things with the spare change. This could easily save many tens of thousands on certain types of server projects.

  • Imperial technology (http://www.imperialtech.com/) has had these drives available for years, and in configurations up to 12 gigs. Saw this link on www.scsi.org...
  • I bet you can't!

    Just because you have a pipeline that can support it doesn't mean you have the technology to sustain that throughput.

    Hard drives are a VERY old technology. I mean seriously!! The most vulnerable parts of the computer are the hard drive, power supply, and CPU fans (if you're x86, anyway)..

    I can't wait until computers are 100% solid state. Moving parts increase points of failure.

    This solid state drive is neat, but looks WAY too risky to me to stick in a production server. I mean, if the only way these things make sure they don't lose the data is to keep power to them all the time, COME ON! We have had the ability to do 8GIG RAM drives for a LONG time now!

    Seriously, what if the "PCI CARD" needs a good-ol-fashioned reboot?? You can't tell me this company GUARANTEES 100% UPTIME on their "NEW" PRODUCT.

    I noticed how they recommend it for "web Proxies, Secondary DNS Servers, ..." All things where the data on them is not marked as "crucial".

    Why can't someone come up with a USABLE product?? Like an 8GIG solid-state drive with NVRAM instead of SDRAM .... or a 10GIG SDRAM solution that has a piggy-backed hardware-mirrored 10GIG hard drive with like a 1GIG cache to write to the hard drive, so if it does reboot, it reads off the 10GIG hard drive into memory and off it goes. (I think that's what Quantum's drives "SORTA" did). . . .. I dunno...

    Why hasn't anyone pursued the NVRAM option??

    Ryan
  • Basically I wanted to construct a dual drive... a HD that had approximately the same size RAM cache in the same box. On boot, the OS loaded on the hard drive would load itself into RAM and would run from there... meanwhile backing itself up to the HD as files were changed, etc.

    Unfortunately I had neither the time, money, nor experience and knowledge to build a microboard that would handle those transactions and report to the MB as a standard HD.. *sigh*

    The things we come up with that are implausible to one's current abilities/situation... *shrug*

    Anyone need an intelligent, intuitive dreamer for their R&D dept? I work like a bastard, am fantastic for company morale and have some of the best ideas on a constant basis... (Sucks waking up at 2am with a fantastic idea and having no one intelligent around who can follow you past the first 10 words out of your mouth. *sigh*)

    Anyway.. Party on!
  • Yeah, but you're not running NT.. =)

    Ryan

    PS. Not like I am either.. but still...
  • Though my idea is a bit different. All it involves is a few gig of normal ram and loading the contents of the harddrive into it on boot. All changes would be done in ram, and then on shutdown it would copy the entire thing back onto the harddrive. Super fast access, 30 minute startup/shutdown. Good tradeoff if you have a long uptime.
  • 8GB of motherboard memory is well and good if you can easily change your OS and hardware. Something like this would be a drop-in performance booster for when you are stuck with a legacy system that has grown beyond its original boundaries.

    Imagine you developed a Windows NT system to keep development simple (yes some things are simpler on Windows), then your system becomes a success. More of a success than the system can handle. Your choices are:
    1. Rewrite it for a more scalable system
    2. Throw heavy duty hardware at it (like an 8 Gig RAM disk)

    Which solution costs less and gets finished faster?

    I look forward to the day when standard hard drives are done with RAM of some flavor. What's all this silly business with spinning disks and moving parts? Feels kind of archaic doesn't it?

    Sure I think I'm right. If I thought I was wrong, I'd change my mind.
  • in an AT case.

    I just remembered this old Metallica song. . .
  • As a cheaper way of speeding up hard disks, why don't drives have multiple heads per platter?

    The problem is keeping the head aligned with the track on the disk. This is difficult enough with a single active head. As soon as you introduce multiple active heads, you have the problem of keeping multiple heads independently aligned with their tracks. Putting two heads on a single positioner arm isn't going to work because of thermal expansion/contraction of the positioner arm and mechanical errors in alignment of the positioner arm and the disk platter. You need independent positioner arms, each with its own servo system and read/write electronics. This is very expensive. Multiple heads on a positioner arm, and head-per-track disks, used to be common in high-performance disk drives when track densities were much lower than today. I used to use a computer that had a 5 MB head-per-track disk drive; it used up a whole 19" rack.

    A lot of the performance issues depend on your usage statistics, and on the driver implementation qualities (their drivers or yours :-). Many applications do more reading than writing, and ramdisks or operating system disk caching are a big win for them. But most applications are also very peaked - caching lets you commit writes near-instantly, but the average rate is low enough that you can catch up in between very busy periods.

    The caching lets you write to the disk in reasonably optimal order, laying down long bursts of stuff on the same track instead of seeking and rotating in between them. Of course, most modern disk drives do that too, though last I checked the caches were typically small, like 1 MB, though that may be enough for many applications. Having a substantially bigger cache on your computer means that you're much more likely to hand the disk drive orderly stuff to write, and waste less interface bandwidth on reads and on idle time.

  • What is needed is a 5 1/4" form factor device with a buttload of DIMM slots. SCSI, IDE, FireWire, whatever. That would be cool.
  • If a service keeps running when an individual server dies, then a rebooting server will have to refresh its database from a running server. For some types of databases, it is unacceptable to use stale data, even if that database only missed one transaction.

    If the database goes stale while a server reboots, it doesn't matter if it is volatile or not.

  • I would guess that the RAM alone on the board costs $5k-$6k, then add in the cost of the hardware (the board that is), and if you get the "Optional Uninterrupted Power Supply" which you would need if you plan to use it to store real data, I'm /guessing/ about $10k. I don't know though, just a guess.

    -slayback
    SDRAM is cheaper than, say, fast page or EDO. If they came back into mainstream production, maybe the price would come back down. Computers are odd like that: just because it's obsolete doesn't necessarily mean it's cheaper. Whatever's being made in bulk is what's going to be the cheapest. However, you do have a good idea. We have LOTS of 486's dying every day with good RAM that'll never be used again. This would be the perfect use for that; the trick is just collecting the old RAM before it goes to a dump or is melted down for the metal.

    -slayback
  • Yeah, but doesn't "Hard Drive" come from "Hard Disk Drive"? What would be an example of a storage device that wasn't a piece of hardware?
  • Actually, it is connected directly to the PCI slot. They aren't bootable and need custom drivers for Windows and Linux.

    John Wiltshire
  • One method to improve the performance, even with current technology, might be to get rid of the well-known structure of hard discs, though. As it has been for years now, discs are platters with a fixed logical structure: 512-byte sectors, N sectors per cylinder, N cylinders per platter.

    It has already happened. The heads, tracks and cylinders that you see in the BIOS setup screen on a PC have no basis in reality. That is just a software compatibility kludge for PC operating systems. If you look at the SCSI specification, you will find no mention of heads, tracks and cylinders, the disk is addressed as an array of logical blocks. IDE drives can use logical block addressing, similar to SCSI, or heads, tracks and cylinders. Due to the use of techniques such as Zoned Bit Recording (ZBR), the number of sectors per track is not a constant. The IDE drive translates the physical sector address provided by the CPU to/from an internal sector address that reflects the actual layout of the drive. I've seen SCSI drives with larger sector sizes but they are rare.
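    For the curious, the translation between that fake geometry and logical blocks is just arithmetic - a sketch of the classic CHS-to-LBA mapping (purely logical values; they say nothing about the real platter layout):

    /* sectors are conventionally numbered from 1, hence the "s - 1" */
    unsigned long chs_to_lba(unsigned c, unsigned h, unsigned s,
                             unsigned heads_per_cyl, unsigned sectors_per_track)
    {
        return ((unsigned long)c * heads_per_cyl + h) * sectors_per_track + (s - 1);
    }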

  • Why can't it use generic sdram?
  • Not quite video cap/editing, but a UK mag called Computer Arts reviewed a solid state drive a few years ago. I think it was external and had a capacity of around one gig.
    They tried it as scratch-disk for Photoshop with, IIRC, a 2 or 3 hundred meg file and said it was like working in RAM.

    *sigh* Yet another thing for my wish-list.

    ---
    "When I was a kid computers were giant walk-in wardrobes served by a priesthood with punch cards."
  • The main reasons are
    • Application structure in general - Lots of applications know they're dealing with *files*, not memory, so you've got to give them something that looks like a disk, either by hanging it off a disk controller (using SCSI or IDE-like), or on a PCI bus (imitating a disk controller), or using RAMDISK drivers, or using hybrid memory/disk file systems like TMPFS.
    • Database-specific needs, particularly commitment - DBMSs really need to know that when they've written something to stable storage, it'll stay there unless they change it - crashing the machine or losing power will trash things in regular RAM, and that just doesn't cut it. Sticking a bunch of RAM in a SCSI shoebox with its own stable power supply and maybe an automagic copy-to-builtin-disk gives you the security you need, and 8GB really *did* hold a large database not very long ago :-) Putting it on an imitation disk controller board required being more careful about armoring it, but you might be able to do it. And building a stable storage device that doesn't need to wait 5-10ms to commit data is a big big performance win. (A tiny sketch of what "commit" means from the application side follows below.)

    There are also scaling issues - motherboards always have some limit on capacity, whether it's address lines or card slots or whatever. With Ultra-Mega-FooBar-SCSI, you can hang 8-16 of these things on the bus if you need to. Living on a bus lets you design boards for your specific application, and isn't limited by the design tradeoffs and compatibility requirements of a general-purpose computer, just by the size, power, and cooling of a shoebox or 1-2U and the creativity of the designer. Can *your* desktop machine address or even hold 8GB of RAM? Mine can't.
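    A minimal sketch of that commit requirement from the application side (POSIX calls; the file name is made up). The write isn't "committed" until it is known to be on stable storage, which is exactly where the 5-10ms mechanical wait hurts and where battery-backed RAM wins:

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /* append a journal record and don't report success until it is on stable storage */
    int commit_record(int fd, const char *rec, size_t len)
    {
        if (write(fd, rec, len) != (ssize_t)len)
            return -1;
        if (fsync(fd) != 0)            /* the actual commit point */
            return -1;
        return 0;
    }

    int main(void)
    {
        int fd = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        if (fd == -1)
            return 1;
        const char rec[] = "txn 42: debit=100 credit=100\n";
        int rc = commit_record(fd, rec, sizeof rec - 1);
        close(fd);
        return rc ? 1 : 0;
    }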

  • by Chairboy ( 88841 ) on Friday May 26, 2000 @11:13AM (#1044259) Homepage
    I read about these in March. Here's some info from then that might have changed in the interim:

    $1,538 for a QikDrive1 with up to 1GB storage.
    $9,840 for a QikDrive8 with up to 8GB storage.

    The QikDrive is on a PCI card, but has its own internal power supply for data security. Presumably to keep the drives from being wiped by a system power failure.

    They support between 15,000 and 20,000 I/O transactions per second (versus 200-300 for Winchester-style drives)
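    (That 200-300 figure works out to roughly 3-5ms per random I/O, in line with the mechanical seek/rotation latencies other posters mention - which is exactly the latency the RAM card eliminates.)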

  • Point me (URL) to a place where I can buy a motherboard with 8GB (or more) RAM support (and NOT with that RAMBUS crap) and a 64-bit CPU that is supported by Linux or FreeBSD.
  • if these would outlast a regular HD. It seems to me RAM chips might degrade even faster than HD wafers.

    Just think how fast Metallica MP3's can be downloaded to these babies! tcd004

    Here's my Microsoft [lostbrain.com] parody, where's yours?

  • Sure, it costs a lot more per GB than rotating machinery, and I wouldn't buy one for home, but in a business context it's a really cheap deal if it works well. You can get a substantial speedup for less than one day's consulting fee for a database wizard. (Of course, if it's not a good match, you might end up spending a few days of consultant time to make it work :-) Also, the performance impact may save you buying another computer, which would cost at least as much for a production server.
  • I don't see too big of a use outside of the overpriced mega-server market, though,

    This thing would seriously rock for video capture. I'm currently having to capture video in compressed form to avoid hard drive bandwidth issues.
  • by InsaneGeek ( 175763 ) <slashdot AT insanegeeks DOT com> on Friday May 26, 2000 @12:36PM (#1044271) Homepage
    Solid state disks have been around for years; they aren't offering anything remotely revolutionary or even a best-of-breed product. They even take away some of the functionality of an SSD by making you plug it into a PCI slot instead of a standard drive connection. Maybe if they've SIGNIFICANTLY reduced the price it might not be that bad, but that's a big if.

    Check out solidstate.com, mti.com, etc.; they have much better solutions than putting a card into your computer.

    I checked out SSDs last year to see about SAN integration, but the cost is VERY prohibitive, i.e. 90k for a 4 gig disk with a fibre channel connection (of course that was battery backed up, disk backed up, etc). If you are running a big database/warehouse they can become very useful; they appear to the system as a regular drive, no drivers, etc. I know of a couple of companies who do a RAID set over multiples of these (think 10x4 gig striped SSDs to do big database billing, then think price).
  • Forgot to mention - Legato used to make an accelerator board for Suns that was basically a meg or two of battery-backed RAM on an S-Bus card, with some appropriate driver support. As with databases, being able to commit a write without waiting 5-10ms for mechanical latency was a big performance win for NFS, and a meg or two was enough buffering to do the job. It could also be used for database journaling files, and similarly helped a lot. Keeping a couple MB of SRAM or DRAM alive doesn't take hulking UPS batteries - a little NiCAD or lithium cell can keep you alive through pretty long power failures, and the drivers were written to write any leftovers to disk at boot time if they hadn't been written already.
  • This is basically just a system with lots and lots of cache. So assuming you are on a system with more memory than disk space, any modern operating system will do this for you, committing changes to disk occasionally.

    Of course, this would have to be a 64-bit architecture to be able to address that much memory, and you would need an OS that supports that much memory.
  • Oh, good! I was worried they would be expensive or something :^P

  • 110MBps? Come on, even PC100 RAM can transmit 800MBps, and DDR and RDRAM get up into the 2-4GBps range. Sure, flash memory is slower, but if they are building gigabytes of it they should be able to use interleaving to speed it up.

    I think that at very least they should be able to run at the 533MBps maximum speed provided by 64-bit 66MHz PCI.
  • Solid State Drives have been around for a while. The only problem is the extreme cost. The fact that they are releasing a range of SSDs is just another step in computer evolution. I remember reading an article (I can't remember if it was from /. or not) where they are using these metallic doughnut rings that are about .5 microns in diameter that hold a magnetic charge when you shoot a pulse down through the hole. And it stays when you power off your computer, so when you turn it back on, it's instant on. No more booting. If anyone remembers where this site is, please give me a post. I also think I remember seeing that it held about 100GB/in^2. So we have a lot more to look forward to than just SSDs.
  • Application structure in general - Lots of applications know they're dealing with *files*, not memory, so you've got to give them something that looks like a disk, either by hanging it off a disk controller (using SCSI or IDE-like), or on a PCI bus (imitating a disk controller), or using RAMDISK drivers, or using hybrid memory/disk file systems like TMPFS.

    Again, this argument just doesn't hold salt. If you want to have something that fast, then just provide lots of RAM and your OS will use it as a cache. There's literally NO reason to use a ramdisk except as an initial boot loader if you're using a modern operating system.
  • "The biggest problem with 32 bit machines is not the 32 bit int, which is really sufficient for most things, it's the inability to address more than 4 gig of memory."

    Actually, Intel-based systems can address up to 64 GB of memory using Physical Address Extensions (PAE) for 36-bit addressing.

    Hardware needs to be 64-bit compliant and/or support Dual Address Cycle (DAC) to address memory above 4 GB.

    "because the machine justs sits on it's ass while the DMA moves the block."

    If you are using a busmaster PCI device (which any SCSI or Fibre Channel card is), the card itself does the DMA. Most modern operating systems support non-blocking I/O. This means that a read/write call can get a pending response. The caller can then go about other things or wait on an event to signal the I/O completion. If the caller waits, the OS can run another thread or process until the event is signaled. If the caller doesn't wait, it can still be interrupted when its quantum expires.

    I have worked on a Fibre Channel/RAID adapter that blows this thing out of the water. 110 MBps? In vanilla direct-connect mode (no RAID), our card did 190 MBps and 20,000 I/Os per second. With RAID and a 64 MB cache, we can beat that. 8GB? We worked with 1TB to 10TB databases. By putting only up to 8GB on one PCI card you are really limited. 64-bit, 66 MHz PCI supports up to 528 MBps. Five of their cards on one bus would give you only 40 GB at a transfer rate of 105.6 MBps per card. You can add busses, but that isn't cost effective. On the other hand, you can put nearly unlimited storage on one of our cards. For the best speed you can spread it out over multiple ports (our card has two) and multiple cards. 3 cards on a 64-bit, 66 MHz PCI bus can still hit 176 MBps each (2 cards can hit 190 MBps each) with MUCH more storage.
    This thing is too small and too slow for their target market of enterprise computing.
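    For anyone who hasn't seen non-blocking I/O from the application side, here's a rough POSIX AIO sketch of "issue the read, keep working, collect the result later" (file name and buffer size are made up; on Linux this links with -lrt):

    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        static char buf[64 * 1024];
        int fd = open("bigfile.dat", O_RDONLY);
        if (fd == -1)
            return 1;

        struct aiocb cb;
        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = sizeof buf;
        cb.aio_offset = 0;

        if (aio_read(&cb) != 0)            /* kick off the transfer; the controller DMAs it in */
            return 1;

        while (aio_error(&cb) == EINPROGRESS) {
            /* ... do other useful work instead of blocking in read() ... */
        }

        ssize_t got = aio_return(&cb);     /* bytes actually read, or -1 */
        close(fd);
        return got < 0 ? 1 : 0;
    }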
  • Actually, it gets worse. Because they only offer 8GB per card, you have to use multiple cards to get to any kind of enterprise size. 5 cards gives you only 40 GB at 105.6 MBps each. Yuck. A good Fibre Channel card can give you 150-200 MBps over 1TB or more. Adding more PCI buses gets expensive and relies on OEM guys to make a special box for you. Check out the cost of multiple 64-bit 66 MHz PCI bus systems. This is not a good solution.
    Good cache management on a RAID card can get this kind of performance per RAID drive using much less RAM. Your computer doesn't have 64 MB of L1, 2, or 3 cache, does it?
  • Actually, you still want a small amount of swap. Most machines have processes (e.g. at and cron) that sit around for a long time doing nothing. There is negligible performance loss in swapping these apps out to disk, leaving more memory for important things. So you almost always want at least a small amount (100MB) of swap.
  • I'm well aware of the use of a RAMdisk as an initial bootloader. I've actually got a neat setup here where I modified the Red Hat install disks to boot a system over NFS after configuring and loading the networking and NFS modules. So I know all about ramdisks as a boot loader.

    But as for the scratch space, I believe there is a function to request that a block of memory never go to swap. I know Linux has this feature; I thought Windows did too.

  • Sounds like these things would be great for my work, which is video editing. The problem with many drives I've used is that they get trashed by capturing video constantly; slamming data onto those drives for sustained periods of time can trash them in a few short months due to the constant accessing. Any of you who've used a Media100 or other hardware packages similar to it may know what I'm talking about.

    My main concerns are:
    A) Is this here to stay, or is it going the way of the SparQ?

    B) Is it going to be cost-effective? Granted an Art Director like myself can just get the company to pay for the drive as part of a new system or upgrade, but many freelancers would have trouble with a super expensive drive. . .

    C) Are they really going to be worth using, or am I going to have constant trouble getting the damned things to work on my Operating System(s) of Choice[tm]?

  • by billstewart ( 78916 ) on Friday May 26, 2000 @11:24AM (#1044310) Journal
    Devices like this have been around for a decade or more, but as memory keeps getting radically cheaper they become more interesting. The big performance win isn't usually how many MB/s throughput you get, though that's nice, but it's the elimination of rotational and seek latency which can be 5-10ms - a long time for a database transaction, and even with journaling filesystems or datastructures you've got to deal with some of it.


    Be sure to use a good UPS with the things, and make sure your powerfail shutdown procedures work well.
