Hardware

Getting Rid of the Disks 305

Kneht writes "Dan's Data has an interesting article on what it would cost to get rid of your HDDs and replace them with SSDs because hard drives suck. Several aspects are examined, such as required UPS, compact flash, etc. Read the article and you may get a new appreciation for your lowly 7200rpm drive." Funny, I was just thinking that I should start using 120GB disks as my removable media.
  • $$$ Money! (Score:4, Insightful)

    by Anonymous Coward on Sunday April 20, 2003 @11:56AM (#5769082)
    Right now, hard drives are the right cost/benefit compromise. Could they be better? Yes. Would it cost a lot more? Yes. When the second changes, let me know.
    • Re:$$$ Money! (Score:3, Interesting)

      Right now, tape drives are the right cost/benefit compromise. Could they be better? Yes. Would it cost a lot more? Yes. Why are you using hard drives over tape, when tape holds so much more for the cost?

      Speed matters. Just because one is more expensive than the other doesn't rule it out, if they're both relatively affordable for the performance.
      • Re:$$$ Money! (Score:5, Informative)

        by paraax ( 126484 ) on Sunday April 20, 2003 @12:32PM (#5769237)
        It might be that tape drives aren't really hugely cheaper than hard drives. Let's go with the 20GB internal Travan from Seagate: $180 for the physical drive and one tape. Compare a Western Digital 20GB at $63.

        So let's set the fact that the cost is stacked 3 to 1 in favour of the hard drive aside for a moment. We also have the performance factor. I've supported these beasties. They are slow, especially if you even think about using them like a hard drive for random access storage (which regrettably HP did at one point)... the benefit comes in easily storable and removable media. It might be cheaper to buy 5 hard drives to do your rotation on, but it's much bulkier and more labor intensive to do. Thus, for some people, the convenience of the slower tape for archival purposes outweighs the 3 to 1 price premium.

        Now, let's assume that this solid state disk is meant to do exactly the same job as a hard drive (which, by the description in the article, it is). We're looking at a 100 to 1 price tradeoff. The only way that kind of increase in price becomes worthwhile is if you're doing some highly critical things which absolutely must be done faster. The average game of Quake doesn't need it.

        Thus, hard drives, could they be better? Yes. But if the next alternative is that much more pricey, chances are they are good enough.
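
        A quick back-of-the-envelope check of those ratios, in Python, using only the prices quoted in this thread (the $5000-per-20GB SSD figure is the article's; none of these are current market numbers):

            # Cost per GB for the three options discussed above.
            options = {
                "Travan tape (drive + 1 tape)": (180.00, 20),
                "WD hard drive": (63.00, 20),
                "solid state disk (per article)": (5000.00, 20),
            }

            hdd_per_gb = options["WD hard drive"][0] / options["WD hard drive"][1]
            for name, (price, gb) in options.items():
                per_gb = price / gb
                print(f"{name:31s} ${per_gb:7.2f}/GB ({per_gb / hdd_per_gb:4.1f}x the hard drive)")

        This prints roughly $9/GB for tape (the ~3:1 ratio above), $3.15/GB for the hard drive, and $250/GB for the SSD - closer to 80:1 than 100:1, but the same order of magnitude.
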
        • Re:$$$ Money! (Score:3, Insightful)

          by fishbowl ( 7759 )
          >The only way that kind of increase in price
          >becomes worthwhile is if you're doing some
          >highly critical things which absolutely must
          >be done faster.

          Or under acceleration, or being rotated, or in zero-g.

      • Re:$$$ Money! (Score:3, Insightful)

        by vsprintf ( 579676 )

        Right now, tape drives are the right cost/benefit compromise. Could they be better? Yes. Would it cost a lot more? Yes. Why are you using hard drives over tape, when tape holds so much more for the cost?

        That's only true if you use lots of tapes. Check out the price of a DLT drive, well over $1000 - and $60 per tape is fairly steep too. High speed and reliable, but not cheap. An external hard drive is well under $200 and is randomly accessible, unlike tape.

        We offer FireWire/USB hard drives as well as

      • OK, it's been a while since I've used tape :-) The typical digital tape things held about 4GB of data, which would cost you about $4 to store on current disk drives. (At this point, the little plastic trays for removable disks are about 10% of the cost of the disks themselves :-) What do tapes cost today, and how big are they?

        The operational costs of using that tape were generally higher than the disk drive - a good tape robot and automation software can reduce them, and to make up for the lack of rando

    • by Andorion ( 526481 ) on Sunday April 20, 2003 @01:06PM (#5769392)
      The price per meg on current hard drives is RIDICULOUSLY low; we're all spoiled.

      It's basically a dollar a GIG, or less... a 200 gig HD costs 200 bucks.

      I'd be willing to pay $200 for a TWENTY gig solid state drive. Ten times the cost, but worth it... too bad no such thing is available.

      ~Berj
      • by Glonoinha ( 587375 ) on Sunday April 20, 2003 @02:02PM (#5769596) Journal
        How about $200 for a TWO GIG solid state drive?

        For SSDs that are smaller than the difference between what your computer has for RAM and what it can hold (i.e. if you have 512M in your system but the board can hold 4G, the difference is 3.5G), the price is roughly $100 per gig.

        Add two Gigs of RAM to your mobo and run ramdrive software (www.superspeed.com) - voila! cheap SSD running at your RAM bus speed.

        Need more than that? Mobo already filled with 4G and you need another 4G? RocketDrive DL (www.cenatek.com): a PCI card with slots for up to four 1G SDRAMs (PC133), viewed by the system as a drive. Retail price $900, plus the $1500 or so for memory (it specifies high quality RAM). So maybe $2500 total to add 4G to the system, and you can stack them if you want more than 4G via software RAID across multiple adapters (i.e. 4 cards would be 16G of SSD for $10,000).

        Ok, so $12,500 for a 20G SSD is a little out of my price range - it offers performance that I just can't justify on a price to performance basis.

        It was worth it to add a Gig of RAM to an old machine (PII/300) and create a 768M RAMdrive though, because when I tried to burn CDs from the hard drive at 12x it always suffered buffer underrun. Most of the time at 8x also. Move the stuff I want to burn to the RAMdrive first and I get a fast clean burn every time, adding $100 worth of RAM to that system saved me from having to buy a whole new computer.
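
        For what it's worth, a tiny sanity check of that arithmetic - all figures are the 2003 prices quoted in this comment, so treat the cenatek.com numbers as quoted rather than gospel:

            # RocketDrive DL stacking math, per the numbers above.
            CARD = 900 + 1500      # bare PCI card + ~4GB of certified PC133 SDRAM
            GB_PER_CARD = 4

            for cards in (1, 4, 5):
                gb, cost = cards * GB_PER_CARD, cards * CARD
                print(f"{cards} card(s): {gb:2d}GB for ${cost:,} (${cost // gb}/GB)")

        One card lands at $2,400 for 4GB, four at $9,600 for 16GB, and five at $12,000 for 20GB - matching the "maybe $2500", "$10,000", and "$12,500" round-offs above.
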
        • Add two Gigs of RAM to your mobo and run ramdrive software
          Ever try to boot from RAM after the power was off overnight?
          Somehow I still see a rotating media device as the boot device.
    • Re:$$$ Money! (Score:3, Insightful)

      by Pharmboy ( 216950 )
      Actually, a much larger cache would help, especially for those of us who work with large graphic files (20MB to 120MB, initial scans up to 400MB)

      The new shiny 8MB caches help, but I would love to see an IDE or SCSI drive with a slot for a DIMM. I know I can use RAID, but the performance is not good enough at a price I can afford. By adding a 256MB or 512MB DIMM for cache, I could. Yes, there are lots of caching cards, etc. out there, but once again: price. It seems that it should be reasonably possible to have a "
      • What is the point of having a larger on-disk buffer, when you can just use an operating system that buffers disk efficiently? I'm no Linux zealot, but I notice a HUGE difference in caching efficiency between the two.

        I have 256MB of main system RAM, which, while not huge, should give plenty of area for disk cache. The difference that I have seen is that Linux will aggressively swap out unused applications to make room for the disk buffer (while Windows will generally only swap out things to make room for m

      • by billstewart ( 78916 ) on Sunday April 20, 2003 @03:26PM (#5769915) Journal
        The purpose of on-disk cache isn't to cache your files - that's your operating system's job, and system RAM is the place to do that. On-disk cache is for speed and latency matching between your disk drives and the request queues from your system, so you can do things like start caching a whole track on the disk wherever the heads are right now, rather than waiting for the disk to rotate around to the bytes you asked for (which lets you work on the next request after one rotation, rather than one and a half), and caching write requests so that you can work on them after finishing the current request. How fancy the software in your disk controller and operating system is can affect the efficiency of these operations, but it's basically for scheduling around the rotational and seek latency of the disk.

        Does anybody know how big disk tracks are these days? If 2MB was enough on a 20GB disk, does a 200GB disk need 20MB because the tracks are 10 times as large, or does the disk have 10 times as many tracks of the same size, or somewhere in between? The price of memory hasn't come down as fast as the price of disks, but it has come down a lot, and 10MB of RAM costs about $1 - even though the price of disks is really competitive, drives might as well have as much cache as makes sense for current geometries and speeds. The sizes are still likely to be on the order of 10MB, not 256MB, and since there's got to be _some_ chip there, it's cheaper as well as more reliable to just make the chip big enough rather than adding sockets for plug-ins.

        Large quantities of write-cache on a disk drive are bad, though, because they're not backed up by battery - the system needs to know that when it's written something to disk, it's really written in some form that can be read back later. Read cache is harmless, because losing it just loses a bit of repeatable fetch work - you need enough to cache a couple of tracks of data, but more than that doesn't usually accomplish much, unless there's a big mismatch between your disk speeds and the bus that transmits to your system memory.

        Caching cards are usually silly, unless they either provide battery backed-up RAM or are part of RAID controllers where they can help in the assembly/disassembly process. Their main purpose is to make up for limitations in operating system caching design (i.e. they help Windows a lot more than Unix) or for other hardware limitations (e.g. CPU RAM limits, bus speed differences, or letting you run server disks off the otherwise-unused AGP port instead of the PCI bus). Their other main purpose is to take advantage of memory speed/price differences - disk caching works just fine with cheap PC100 memory, while system RAM needs to be the fastest Quadruple-Data-Rate Gigahertz-RAMBUS Quadruple-Price memory you can buy to keep the CPU running at maximum speed, so if you're buying large quantities of the stuff, it's sometimes worth spending an extra $50-100 for a card that can hold lots of cheap memory.

        Battery-backed RAM cards are actively useful for applications that need secure writes, such as database commits or NFS writes. A decade or so ago, the Legato Prestoserve NFS accelerator cards had a meg of battery-backed RAM, which was enough to commit writes to while waiting for the disk drive to spin. This meant that you could respond to NFS requests in sub-millisecond time rather than waiting 10ms or more for a disk to seek and spin (seek time was still slower than rotational latency back then, plus your request might be queued behind other disk requests), so you could handle one or two orders of magnitude more requests per second, and a megabyte was more than enough to buffer traffic from a 10Mbps Ethernet. Database transactions might be generated much faster than NFS requests, but it was still enough to handle caching for a lot of disk space.
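
        To put rough numbers on the track-size question: a track passes under the head once per rotation, so sustained transfer rate divided by rotation rate approximates the track size. A minimal sketch, assuming a typical 2003-era 7200rpm drive with ~50MB/s sustained outer-zone transfer (illustrative figures, not any particular drive's spec sheet):

            RPM = 7200
            TRANSFER_MB_S = 50.0

            rev_per_s = RPM / 60                  # 120 revolutions per second
            rotation_ms = 1000 / rev_per_s        # 8.33 ms per full rotation
            track_mb = TRANSFER_MB_S / rev_per_s  # data passing the head per rotation

            print(f"full rotation:          {rotation_ms:.2f} ms")
            print(f"avg rotational latency: {rotation_ms / 2:.2f} ms")
            print(f"approx track size:      {track_mb * 1024:.0f} KB")
            print(f"tracks in an 8MB cache: {8 / track_mb:.0f}")

        That works out to roughly 430KB per track, so even an 8MB buffer already holds about 19 tracks - consistent with the argument that cache sizes on the order of 10MB, not 256MB, are what current geometries call for.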

  • Huh? (Score:5, Funny)

    by SN74S181 ( 581549 ) on Sunday April 20, 2003 @11:58AM (#5769090)

    Get rid of my High Density Diskettes (HDD) and replace them with Single Sided Diskettes (SSD) ???

    That would be expensive, because the old drives are expensive when you find them from collectors on eBay, besides which I would have far less storage capacity (180K instead of the 1.44M I have now).

    It reminds me of the short period back in the day when I ran my BBS on a three-floppy-diskette PC system. The third floppy diskette was a 5-1/4" 720K drive (quad density), but users complained about the slowness - and these were 1200 baud users.
    • That 8080 info you have on your site is interesting. Something like this would be good for some powerful remote data logging applications.
  • Price? (Score:5, Funny)

    by brian728s ( 666853 ) on Sunday April 20, 2003 @11:58AM (#5769091)
    From the point of view of serious corporate customers, $US100,000 can be a great big bargain.
    I think I'll keep my magnetic drives and spend my $999,900 on something else.
  • by HMC CS Major ( 540987 ) on Sunday April 20, 2003 @11:58AM (#5769092) Homepage
    There's a lot of research going on in this area. In particular, there's a newly completed Ph.D. thesis studying a persistent memory/disk hybrid filesystem for Linux, named Conquest [ucla.edu]. The performance is quite impressive, although the reports are that it's nowhere near ready for use - the term 'researchware' gets tossed around a lot.

    Basically, by storing metadata and files smaller than 1MB in memory, the typically accessed information is much quicker to get at, and the larger files left on disk are typically in their 'best case' (it's much more common to read large files than to write them, and typically they're read in some near-linear order: if you watch a movie, you may skip once or twice, but then it's sequential reads). The combination seems to work quite well: "We compare Conquest's performance to ext2, reiserfs, SGI XFS, and ramfs, using popular benchmarks. Our measurements show that Conquest incurs little overhead compared to ramfs. Compared to disk-based file systems, Conquest achieves 24% to 1900% faster performance for working sets that fit in memory, and 43% to 96% faster performance with working sets larger than the memory size."
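
    The placement policy is simple enough to sketch in a few lines of Python - a toy illustration of the idea as described above (my reading of it, not Conquest's actual implementation):

        SMALL_FILE_LIMIT = 1 << 20   # 1MB threshold, per the thesis

        class HybridStore:
            def __init__(self):
                self.ram = {}    # small files + metadata live in memory
                self.disk = {}   # stand-in for the on-disk file system

            def write(self, path: str, data: bytes) -> None:
                if len(data) < SMALL_FILE_LIMIT:
                    self.ram[path] = data       # hot path: no disk I/O at all
                    self.disk.pop(path, None)
                else:
                    self.disk[path] = data      # large files stream to disk
                    self.ram.pop(path, None)

            def read(self, path: str) -> bytes:
                if path in self.ram:            # RAM hit avoids the disk entirely
                    return self.ram[path]
                return self.disk[path]          # large files: near-linear reads

        store = HybridStore()
        store.write("/home/user/notes.txt", b"tiny file")         # stays in RAM
        store.write("/home/user/movie.avi", b"\0" * (700 << 20))  # goes to "disk"
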
    • Wouldn't this be easier just to implement as a disk caching algorithm?
      • No. The logic is that system memory is faster than cache, because LRU caches typically have a high overhead in management and searching. To find data in the LRU cache, you have to search the entire cache, which is much slower than following a pointer already in memory.

        The disk caching helps (quite a bit) for the large files, though.
    • why do they use one moving set of heads when several stationary (or less movable) heads would be much, much faster?

      i remember a very old 10 meg disk i had that had an 18 inch platter on it, and about 200 or so STATIONARY heads. the seek time was determined by the platter spin rate, and it wrote and read data as fast as the disk spun under the heads.

      it couldn't be that hard to build a small (10gig) prototype drive with one platter, and many, many stationary heads. you could probably format the thing in u
  • by roseblood ( 631824 ) on Sunday April 20, 2003 @12:02PM (#5769114)
    I'm finding that the lack of a universal DVD standard has left me looking at HDDs as my removable media of choice as well. CDRs are nice and cheap, but I have files that would span multiple CDRs. It's a bit of a hassle to have to WINRAR up my data into small chunks, only to have to UNRAR it back into one big chunk. DVDs aren't readable everywhere. I'd love to see faster solid state storage available at a price competitive with today's HDDs, but alas, it's just not there. I already have a great deal of respect for my 7200RPM HDDs
  • hmmm, (Score:5, Informative)

    by hfastedge ( 542013 ) on Sunday April 20, 2003 @12:03PM (#5769118) Homepage Journal
    Sigh...
    http://ask.slashdot.org/article.pl?sid=03/02/11/1020256

    "It seems like the most problematic part of any notebook is the speed of the hard drive (and they also get noisy). I noticed http://www.bitmicro.com/products_edide.html [bitmicro.com] selling 2.5" solid state disks SSD. Anybody currently using one of these in a notebook? I can't find pricing anywhere, but they've gotta cost a fortune." How long do you think it will be before the major laptop manufacturers start adopting this technology?
  • by stonebeat.org ( 562495 ) on Sunday April 20, 2003 @12:04PM (#5769119) Homepage
    SSDs are good for research purposes and Software Development Kits. I think Intel's Explorer 2 SDK used to have 128 MB on board, which is useful for Assembly programming.

    I remember when we used to program Motorola 6800... hehe...
    • I think Intel's Explorer 2 SDK used to have 128 MB on board, which is useful for Assembly programming.

      Are you fscking KIDDING me? I could spend YEARS trying to fill up 128MB with assembly instructions. Unless you are doing something that involves large amounts of data, maybe - I could see 3D graphics, possibly, or perhaps engineering applications like CFD, but to be honest I haven't seen anyone use extensive amounts of assembly for that stuff in years... compilers are much, much better nowada

  • by evilviper ( 135110 ) on Sunday April 20, 2003 @12:06PM (#5769126) Journal
    I don't understand the hard drive bashing. Sure, it isn't as fast as DDR, but it's faster than any other storage media... It's not only faster, but cheaper as well.

    In addition, I've had many power supplies and entire motherboards die in the same period as my hard drives have been operating. The best part of all is that they have very obvious signs when they are beginning to die, as well.

    Hard drives are not the fastest or most reliable piece in your computer, but they are definitely not the worst or slowest. Who here can find ECC DDR RAM for anywhere near $1/GB?
    • by Alpha_Nerd ( 565637 ) on Sunday April 20, 2003 @12:10PM (#5769142)
      Sure, it isn't as fast as DDR


      You've never seen me play DDR... Not exactly fast !
    • The quality of hard drives has gone down in recent years. Luckily, I have yet to see a disk failure - I consider myself lucky. As hard drives get more dense, the chances of errors showing up increase and reliability goes down.

      With RAM prices the way they are, in a year you'll be able to buy a 2 gig RAM stick for the same price as a hard drive today. 2 gigs is fine for simple stuff. I could run FreeBSD and all the apps in 2 gigs and use a regular external hard drive for my bloated /home directory where my MP3s are.

      Ram is
      • I was thinking about doing without a hard disk and just booting from a Knoppix CD. All my interesting data is over the network anyway - all a hard disk would be used for is to install software, which Knoppix makes obsolete, and to store temporary files, which can be done in a ramdisk at today's memory prices. The only trouble is finding a suitably huge Knoppix or equivalent (perhaps on DVD rather than CD) that comes with everything you'd ever want to run.

        Alternatively if you have several networked PCs yo

        • I wonder if there is a Linux distribution where you stick in a boot floppy, have the workstation contact a server to

          [SNIP]

          Sure, anyone can do this. :) It all depends on your creativity. Years ago, before we got a CD burner at my office, we used to do all our Slackware installs over NFS. It's not exactly documented, but doesn't take a rocket scientist to figure out how. A few subtle adjustments to the technique would give you machines that boot up and use NFS to mount up their filesystems.

          I'd s
          • I know you _can_ set it up if you know how but I was wondering if there is any plug-and-play solution. Like how Beowulf clusters used to involve some manual setting up until Red Hat with their 'Extreme Linux' product made it as easy as sticking in a CD.

            I agree that floppies suck, but as boot media they're just about acceptable. I mean, if the floppy drive does fail the worst that can happen is the machine fails to boot - you don't lose any data. But the ideal is to have neither floppy nor CD but a boota
    • Hard drives are not the fastest or most reliable piece in your computer, but they are definitely not the worst or slowest

      You so sure about that? There are several dozen computers on my floor of my dorm. I've seen several hard-drive exchanges over the course of this year. Other components repaired/exchanged/etc? Maybe one or two. And those obvious signs of impending failure you talk about? I and many others have had times where the drive works one day, and the next day on boot you just get the old-fashione
    • In the workgroup environment, having a drive spindle for every seat is a waste. Imagine a server serving 30 terminals but having 30 drives inside the server?! It is a popular mindset to have 30 autonomous computers each with their own autonomous operating system complete with autonomous applications on autonomous disk drives. That simply isn't practical, but it is totally acceptable for current IT thinking.

      When I read the article, I realized that it was Windows specific when I got to the part about the guy

  • Seems cheap (Score:3, Informative)

    by ripleymj ( 660610 ) <ripleymj@jmuPERIOD.edu minus punct> on Sunday April 20, 2003 @12:07PM (#5769130)
    When you look at something like this [cenatek.com], it makes $5000 for 20GB seem like a conservative estimate.
    • Re:Seems cheap (Score:3, Interesting)

      I wonder if I can buy a 2gig version for $500. I could then mount my / directory there and use my /home with my large mp3 collection on a regular hard drive. I would have kick ass boot time.

      • If you supply the memory :
        1G version = $599
        2G version = $699
        4G version = $899

        They used to sell the entry level one for $399 but just when I saved enough to buy it they upped the price by $200. Arg.

        Preloaded with certified RAM prices :
        1G version = $1,199
        2G version = $1,999
        4G version = $3,599

  • Instead of RAM... (Score:5, Informative)

    by c_oflynn ( 649487 ) on Sunday April 20, 2003 @12:09PM (#5769139)
    They could use FRAM [ramtron.com] (Ferroelectric Random Access Memory)

    It is as fast as RAM, but is non-volatile. Oh, and its endurance is unlimited. Right now they aren't big enough, but as the technology improves...
  • by Anonymous Coward on Sunday April 20, 2003 @12:12PM (#5769152)
    lynx -dump http://slashdot.org/ | grep "ead the article"
    required UPS, compact flash, etc. Read the article and you may get a
    loop" (read the article.) The goofy loop put about seven miles between


    Two mentions of "read the article" on the front page. Are they trying to start a fad?
  • by ottffssent ( 18387 ) on Sunday April 20, 2003 @12:16PM (#5769164)
    "RAM costs more than disk". There. Now you don't need to read the story, which is probably /.ed by now anyway.
  • RAM swapfile (Score:4, Interesting)

    by Anonymous Coward on Sunday April 20, 2003 @12:18PM (#5769174)
    I find it amusing that it mentioned using a RAM based swapfile. Doesn't that defeat the purpose of a swapfile???

    • Re:RAM swapfile (Score:4, Insightful)

      by vidnet ( 580068 ) on Sunday April 20, 2003 @12:31PM (#5769230) Homepage
      Yes, it would indeed defeat the purpose.

      But quoth the article, "If your operating system's virtual memory management isn't all that it might be (...)".

      So if your OS sucks (I'd insert an example, but it's too obvious), then RAM based swap files could speed things up. If your OS does not suck, then it would be utterly stupid.

      And speaking of OSes that don't suck: I upgraded to 512MB of RAM half a year ago, and Linux hasn't done a disk write since. Love that cache.

      • It's not necessarily just an OS-related problem. I dare you to try to put 60GB of RAM into your favorite Linux box. What's that, you can't do it? But... Linux doesn't mind having multiple gigabytes of swap... What about putting that swap on a 60GB external RAM drive?

        This idea has been used for decades. The C64 had RAM drives of up to 8MB available that did just this, even though the base system could not have more than 64K of system RAM onboard.

    • Lol, true, I didn't pick up on that.
      Imagine making your RAM-based swapfile bigger than your total RAM. Swap in, swap out, swap in, swap out...
    • I find it amusing that it mentioned using a RAM based swapfile.

      If you're trying to get by on some 32-bit system with 4 gigs of RAM and your box is still thrashing, you can't just add memory to it, so throwing swap onto an external RAM device might make sense. Swapping seldom-used stuff in and out would happen much faster.

      Just an idea. Not sure if that is any better than bank-switching would be (or just farming crap out to multiple systems).

    • Yes. But it's not uncommon when you have scads of ram to have /tmp mounted on a ramdisk. I assumed that's what the author meant.
  • by grahamsz ( 150076 ) on Sunday April 20, 2003 @12:21PM (#5769188) Homepage Journal
    This guy mentions that compact flash dies after 100,000 to a million rewrites... and that you'll reach that surprisingly quickly if you put your swap file on it.

    It seems highly unlikely that any sane person on a desktop system would choose to spend money on compact flash to use as swap, when they could spend less money and buy DRAM instead - which should be faster.

    Anyway, potentially you only need fast solid state disk space for your operating system and main applications, since few people need that sort of speed on their 'data files'. I could build a bootable Linux box that ran off a 256MB compact flash card - doesn't seem like it'd be too bad at all.
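
    To put a rough number on how fast swap eats a flash card, here's a small estimate using the 100,000-to-1,000,000 erase-cycle range quoted above; the 1MB/s figure is my own guess at sustained swap traffic on a thrashing desktop:

        CARD_MB = 256
        SWAP_MB_PER_S = 1.0   # assumed sustained swap write rate

        for cycles in (100_000, 1_000_000):
            # Best case: perfect wear leveling spreads writes over every block.
            seconds = (CARD_MB * cycles) / SWAP_MB_PER_S
            print(f"{cycles:>9,} cycles: ~{seconds / 86_400:,.0f} days of swapping")

    With perfect wear leveling that's on the order of 300 to 3,000 days - but early CF cards leveled poorly or not at all, so the hot blocks a swap file hammers can hit their cycle limit in hours. Hence "surprisingly quickly".
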
    • I wonder if you could configure Linux to put the kernel image on a small flash disk (read-only), put the swap on an SSD device (I mean RAM based, not flash based), and then have a RAID 5 array for your hard disks. Should be one fast machine. Also, with some ACPI compliance, maybe hibernate to the SSD device?
      Even better, if some big company started selling those at volume prices, I could probably buy machines which boot in less than 10 seconds...
  • ...they offer permanent stable storage. In the meantime there are all of these hacks to back the SSD with a magnetic disk, battery backups, etc.

    In those cases, you're better off loading up on RAM and relying on the buffer-cache to make the disk slowness transparent.

    Of course, you could be in a position where you already have 4GB of RAM stuck in a machine and it runs an x86 processor and it's still too slow. Then you're up shit creek unless you can deal with a measly 100MB/sec RAID solution. ;)

    x86_64

  • by Billly Gates ( 198444 ) on Sunday April 20, 2003 @12:27PM (#5769212) Journal
    I was thinking about this after creating this thread. [slashdot.org]

    The Tandy Sensation II had a 212MB hard drive, while my current system has 512 megs of RAM. If I upgraded such a beast with my current amount of RAM, I would have twice as much RAM as hard drive space. I still have my old 3.2 gig and 540 meg hard drives in a drawer in my room. I am thinking of using them again in a FreeBSD server. With RAM the way it is today, I can just put a shitload of RAM in the server and set it up to load the whole hard drive into memory.

    I remember a 3-year-old Maximum PC magazine which mentioned something about a ramdisk: an external SCSI device that had 2 gigs of RAM and a power supply to keep its contents from being erased. Basically the computer saw it as a fast hard disk. It was very fast, but got poor remarks because in a catastrophic power outage your whole virtual disk is gone. Also, the storage space is very tiny. However, for the specific servers I mention above, it might make sense. At a price close to $30k it was very expensive, and it was this that earned it the poor review.

    A web server that does not contain a database does not need a lot of disk space. Just Apache, the OS, a J2EE SDK or JavaScript or Perl, and that's it. All the content is created automatically by CGI scripts or Java, depending on what the webadmin uses. A ramdisk might be used for a high-end website if the disk is the bottleneck. My old 3.2 gig drive has about the same capacity as these 2-4 gig RAM drives, so I could easily build a workstation or web server this way.

    A ramdisk today might be perfect: RAM is so cheap now that it would be a lot cheaper to build.

    • RAM is WAY too volatile for any sort of long-term storage with any type of integrity.
    • First of all, if you're thinking of using RAM for long-term storage, remember that it isn't entirely reliable, and make sure you opt for ECC RAM. Even that is not expensive now.

      Second, if you're using FreeBSD, then I suggest you take advantage of the feature that lets you take a snapshot of a mounted FS (i.e. your RAM drive in this case) and make regular backups if you plan on modifying anything on the disk. If you're going for a really huge RAM disk, don't forget that they're looking for testers who have

  • Bah. (Score:5, Interesting)

    by silverhalide ( 584408 ) on Sunday April 20, 2003 @12:29PM (#5769219)
    Until SSDs get an order of magnitude cheaper, HDDs will continue to rule! For the thousands that SSDs cost, you can build a huge striped RAID of quick 120GB drives that will perform more than fast enough for any existing application. Paintbrush and Minesweeper will run like they've never run before.
  • by tinrobot ( 314936 ) on Sunday April 20, 2003 @12:30PM (#5769227)
    We create tons of video and are always hungry for backup. What we've done is simply save our old hard drives as we upgrade, and put the old ones to use in those $6.00 IDE removable cartridges as backup media. We mostly have 2-3 year old 20-40GB drives. We've also bought 5400rpm 120GB drives for incredibly cheap on Pricewatch...

    We figure, as a backup, HDDs last just as long as any other magnetic medium. Because they mostly sit unused on a shelf, we're not that concerned about MTBF of the drive mechanisms. When we do use a backup, we still copy to our RAID/Server before using the files. The backup drives rarely see much use.

    We have CD/DVD writers, but really only use them when sending stuff to clients. With the price of hard drives, it's hard to justify anything else as a backup medium.

  • Poor Math Skills (Score:3, Informative)

    by Nintendork ( 411169 ) on Sunday April 20, 2003 @12:43PM (#5769283) Homepage
    If you want to add more RAM chips for RAID 5-ish protection from damage to any one chip, multiply your chip count by 1.5.

    Last I checked, you would only have to spare one chip's worth of space to store the parity information, regardless of how many chips you start with.

    -Lucas

    • No, parity is almost useless: it tells you you have lost data, but not what.

      ECC is what you want. Someone will probably correct me on this, but the number of extra bits for ECC is not 50%: it is 6 at a 16-bit word width (i.e. 22 bits total), 7 at 32, and 8 at 64. However, it only deals with single-bit errors, and the probability of multi-bit errors increases as word width increases. Even so, the overhead is around 40% max for a 16-bit memory path, rather than 50%.
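
      Those 6/7/8 figures fall out of the Hamming bound: a single-error-correcting code needs the smallest r with 2^r >= k + r + 1 for k data bits, plus one more bit to also detect double-bit errors (SECDED, which is what ECC memory implements). A quick check in Python:

          def secded_bits(k: int) -> int:
              r = 1
              while 2**r < k + r + 1:   # Hamming bound for single-error correction
                  r += 1
              return r + 1              # +1 parity bit for double-error *detection*

          for k in (16, 32, 64):
              c = secded_bits(k)
              print(f"{k:2d} data bits -> {c} check bits ({k + c} total, {100 * c / k:.1f}% overhead)")

      That prints 6, 7, and 8 check bits (37.5%, 21.9%, and 12.5% overhead) - which is also why 64-bit-wide ECC DIMMs carry 72 bits.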

  • Comment removed (Score:3, Interesting)

    by account_deleted ( 4530225 ) on Sunday April 20, 2003 @12:45PM (#5769289)
    Comment removed based on user account deletion
    • by billstewart ( 78916 ) on Sunday April 20, 2003 @04:56PM (#5770196) Journal
      There's no need for the OS to be in non-volatile memory - almost all of it is read-only except for a few log files, things like print spools, /tmp, and swap space when that's needed. So if the operating system does a half-decent job of cache management, it'll keep the stuff it needs in RAM, and it'll be much more efficient if it can decide flexibly what that is, rather than having chunks of memory inflexibly dedicated to specific applications.

      The special cases are things like /tmp, which look like disk drives but mostly contain files that are created, used, and destroyed, and never really need to be saved on disk if there's enough cache space to keep them. The tmpfs file system type was designed to optimise these - it stores files in RAM and uses the virtual memory mechanisms to handle its data rather than a separate disk partition, and can really speed up applications like compiles because there's no need to wait for disk latencies or to even bother the disk bus with writes in most cases.

  • by SunPin ( 596554 ) <slashspam@@@cyberista...com> on Sunday April 20, 2003 @12:59PM (#5769356) Homepage
    The author starts out trying to convince the reader that hard drives suck, makes a weak attempt at defending the alternatives, and concludes that the alternatives are not yet feasible. He compounds the problem by littering the whole piece with annoying tics like "well" and "really", and nonexistent English usage like "that're."

    He obviously knows his stuff but a few more drafts and an editor would have done wonders for this article.

  • I bought a 120GB 3.5 inch drive ($100) a couple of months ago, put it in an external FireWire drive case ($50), and hooked it up to my computer. It's portable, has massive storage (relative to most other removable storage, at least), and has transfer speeds comparable to other removable media.

    The plus is that I can always remove the drive and put a different one in if 120GB ever becomes "small".
  • by Quietti ( 257725 ) on Sunday April 20, 2003 @01:03PM (#5769375) Journal
    modern drives are pretty reliable, and highly compatible with each other
    I wanna have some of what he's smoking, quick!

    Seriously, I have IDE and SCSI drives that are about 10 years old (capacity is obviously small - in the 200-500MB range) with almost no bad sectors; they still do a reliable job in routers and other boxes that don't require a lot of storage. Meanwhile, newer drives of 2GB or larger regularly require replacement. Then there's the problem of recent drive capacities being too large for the BIOSes of my "deprecated" computers, not to mention SCSI connector standards that change more often than the MTV Top 10.

    The real problem for an end-user, though, is the excessively generous storage capacities; as Cringely once pointed out, unless you are a graphic artist, your personal data probably fits well within 500MB of storage. Why the hell is it that the smallest drives I can purchase nowadays are around 30GB (120GB for SCSI), at a time when my data storage needs still have not exceeded that 500MB-per-user quota? And no, my workstations do not suddenly have a use for larger drives either.

    One cannot help but notice how manufacturer warranties reflect the lower quality as well. Where we used to have 5-year warranties (which, in practice, meant that the drive actually performed well for about 10 years), current offerings are guaranteed for 1 year and last exactly that. There have been several recent cases, e.g. with IBM's glass-platter drives, where a replacement was required within 6 months of purchase.

    I don't know about you, but I have better things to do than constantly waste money on replacement drives and time on reinstalling everything on the new drive, only to find out that the BIOS cannot use such a large drive, and to curse that I had to purchase a drive whose capacity is exactly 100 times what I can use.

    Message to drive manufacturers: gimme reliable and quiet 2-4GB drives, using the good old 50-pin connectors in both IDE and SCSI flavours, but providing all the modern refinements of Ultra DMA/100, etc., and guaranteed for 5 years or more. Make them affordable too. We don't want any more stinky throw-away media storage, thank you.

    • Are you for real? (Score:3, Insightful)

      by mindstrm ( 20013 )
      My personal data?

      Let's list some common consumer appliances that offload data to the home computer:

      mp3: 1MB/min of audio

      video from digital video cameras: Lots of GB here
      digital photos: getting bigger all the time.

      DIVX video: almost a gig per movie.

      Video Game: 2 gigs of space, easy.

      PVR: the more space the merrier.

      So seriously, what are you smoking?

      In the old days, there were all kinds of IDE incompatibilities... some drives just would not work with other drives in master/slave configurations. BIOS is
      • My own comments are emphasized between yours:

        My personal data? Let's list some common consumer appliances that offload data to the home computer:

        • mp3: 1MB/min of audio

          Unless you are a recording artist, none of it is your data.

        • video from digital video cameras: Lots of GB here. digital photos: getting bigger all the time.

          Both are best stored on CDRW.

        • DIVX video: almost a gig per movie.

          Unless you are a movie producer, none of it is your data.

        • Video Game: 2 gigs of space, easy.

          Which is still wel

        • 4Gb should be plenty for you too, including OS, applications and your own data.

          Troll much? [Insert 640K RAM joke here]

          My email directory is over 1GB in size.
          Legally bought and paid for applications (not data) take up 8.3GB.
          Offline Usenet messages take up about 2.5GB.

          Not to mention that a large fraction of people who use their PCs for professional or serious amateur work will easily use GBs of space. Burning a DVD of the family's vacation takes up 5GB. Dual booting Linux and Windows will take a couple
        • > Both are best stored on CDRW.

          That is, unless you want to keep those pictures for more than 6-8 years...

          CDRs degrade over time and lose data.
          (Cheaper ones in my experience don't even last 6 years - more like 3-4.)

          I hope writable DVD media doesn't have the same problems, but I fear it will.
          dvd[-/+]r[w] would be a great backup medium if it didn't fail after a number of years.

        • You are describing the worst-case scenario, but you can legitimately (without being a recording artist or a movie producer) need huge amounts of space for relatively common life scenarios involving MP3 and DivX.

          For example, if you want your computer to take dictation, record voice notes, work as an answering machine, or record meetings, classes, etc., and you don't trust (or need more data than) speech-to-text software, MP3 is a good alternative, and in the long run it can take up lots of space.

          And for divx

        • by (H)elix1 ( 231155 )
          >>video from digital video cameras: Lots of GB here. digital photos: getting bigger all the time.

          >Both are best stored on CDRW.

          >>DIVX video: almost a gig per movie.

          >Unless you are a movie producer, none of it is your data.

          Personal Computing is powerful enough to turn the home user into a movie producer. Back in the day, I remember spending six digits for a Mac based Avid - a very large portion of the cost was the vast arrays of SCSI hard drives and raid controllers needed to handle da
        • by Per Wigren ( 5315 )
          "mp3: 1MB/min of audio
          Unless you are a recording artist, none of it is your data."


          Duh, I've ripped all of my legally purchased CDs (about 300 of them) to my computer.. I love the fact that I can now put 50 CDs in my XMMS playlist and play them at random.. I haven't used my regular CD-player in more than 2 years...
    • I mostly agree, if you mean "personal data" == "documents you produced yourself", like spreadsheets, word processor files, etc. However, give a digicam to any modern user and your disk space is *gone*. Same with MP3, and luckily DivX isn't mainstream yet (mainstream as in "non-geeks use it daily").

      However, there is a need for 2-4GB drives, mainly in the corporate market. I've got this nice new Dell workstation at work for development, and to my surprise I noticed it had an 8GB hard disk. We're supposed t

    • It is hard to build a drive with a partial platter, so a single-sided drive is the smallest you'll get in a standard form factor.

      What is left is to shrink the diameter of the platters, leading to microdrives.

    • I dunno what you are talking about. I have a 120GB Western Digital HDD that I bought boxed at Fry's for $89 (one of their sales) - it comes with a 5-year warranty - and I only picked it up a few months ago. Also, it's very quiet.

      Also, my two previous disks (Seagate 40 and 80 gig) are happily serving up data in my file server on reiserfs/LVM - they have been on non-stop for the last year or so and are still clicking along nicely - and they are quiet too.
  • The usual rant on Moore's law, etc.

    As seen in this Scientific American article [sciam.com] (among too many others), the cost per megabyte of disk was measured in dollars per megabyte back in the early 1990s, which seems to be where SSDs are right about now. Presuming a similar price/performance curve for the foreseeable future, these things should be available and affordable in the mass market within the next decade or so.

    We then reach the point where conventional storage is able to be completely absurd. We currently have over 100 time

  • by yuvtob ( 533399 ) on Sunday April 20, 2003 @01:19PM (#5769443)
    The current, basic, and EXTREMELY old computer architecture (CPU, memory, storage device, and IO devices) already solves this!
    You store everything you need on the STORAGE DEVICE and access stuff by copying it to MEMORY. If what you need to access is big (like a database), add more memory.
    Actually, the only difference between MEMORY and STORAGE DEVICE is speed. If they were the same speed, we wouldn't need one of them. Shoving the memory away from the processor is like saying "let's put a hard drive in place of memory - that way we'll have hundreds of GBs of memory!"

    To be fair, I'll add that it might help in 2 cases:
    1. Systems which are memory-limited - like my PC, which is limited to 4GB. But I'm guessing that computer manufacturers will continue to expand this as needed (both for PCs and servers).
    2. Loading up such a system - reading those GBs from an HDD into memory will take longer than loading them from a memory-based storage device.

    But other than that, I think that stuffing the memory in the storage device and saying that you have a fast storage device might be true, but it's plain stupid !
  • I always wondered if this would be a good idea once economies of scale kicked in and design knowhow was put toward it

    Basically, you have your normal 120 GB disk for your files, but the swap file lives on a second hard drive that's only 1% the size of the usual one, specifically manufactured for high speed and a lot of cache. This drive holds the swap file, which probably accounts for the majority of hard drive accesses. It wouldn't cost $5000 like an SSD would, but maybe 30-50 bucks once
  • I don't know, Executor looked pretty damned expensive.
  • by pjrc ( 134994 ) <paul@pjrc.com> on Sunday April 20, 2003 @01:57PM (#5769581) Homepage Journal
    For several months, we've been tossing around the idea of making an "affordable" solid state disk circuit board at PJRC [pjrc.com]. The article asks:

    What if someone started making SSDs for the consumer market, though? How cheap could they be?

    Produced at modest volumes in the USA (not made by the boat-load in China), we've been looking at somewhere in the $250 to $300 (usd) range for the bare board with 16 or 20 DIMM sockets, IDE interface, and power management circuitry with aux power inputs.

    The unit is planned to fit into the form factor of a CD-ROM drive, which allows just enough room for 20 sockets and a couple of inches to pack in all the circuitry, IDE and power connectors. There just isn't room for a battery, so the plan is to have 2 or 3 "aux power" connectors that accept 9 to 12 volts. We'd make a battery pack that fits into a 5.25 or 3.5 inch drive bay and recharges itself from PC power, so you could connect 1, 2, or maybe even 3 battery packs, or maybe a battery pack plus 12 volts from some external source like a "wall-wart" power adaptor plugged into a cheap UPS, or maybe something a bit more "reliable". I'm not sure what the battery pack will cost, but it's hard to imagine it'll be over $50-60 even if we splurge a bit for a fancy microcontroller-based rapid charger and advanced battery monitor.

    Today, 512 meg DIMMs are the most affordable; today's Pricewatch says about $40 for PC100 SDRAM and $46 for PC2100 DDR. Prices fluctuate quite a bit... a few months ago the 512 meg PC100 SDRAM was $30. But assuming you pay $40 each for 20 of them, plus $280 for the bare drive and $60 for a battery pack, that puts you at $1140 for a 10 gig ultra-ultra-fast drive. Ouch. Even if prices drop back to $30, which puts you under four digits, it's still quite expensive.

    But not as expensive as the article claims.
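
    That math, as a quick calculator (all figures are the estimates above - DIMM spot prices, a guessed battery-pack cost, and a hoped-for board price; none of this is a shipping product):

        def drive_cost(dimm_price, dimm_gb=0.5, sockets=20, board=280, battery=60):
            """Total capacity (GB) and cost for a fully populated unit."""
            return sockets * dimm_gb, sockets * dimm_price + board + battery

        for dimm in (40, 30):   # today's PC100 price vs. the dip a few months back
            gb, cost = drive_cost(dimm)
            print(f"${dimm}/DIMM: {gb:.0f}GB for ${cost:,} (${cost / gb:.0f}/GB)")

    At $40 a DIMM that's the $1,140 "Ouch" figure ($114/GB); at $30 it drops to $940 - under four digits, as noted.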

    Anyway, at this point the project is pure vapor. The earliest you might see it would be about one year from now, but 18 months is more likely. Even though DDR is more expensive today, the design will almost certainly use DDR, because it is expected to become cheaper and remain more easily available in the years to come. It's also quite likely I'll do Serial ATA only, as S-ATA is going to become the mainstream down the road, and it's already gaining acceptance now. My hope is that 1 and 2 gig DIMMs will become more common and their price per byte will come into line with the 128/256/512M sizes... 'cause there's no way we're going to get more than 20 DIMM sockets into the 5.25 inch drive bay form factor.

    The project also has a number of technical challenges, including the difficulty of connecting that many unbuffered DIMMs (the design will need 4 or 5 separate memory channels and a lot of buffers & PLLs that there isn't really room for on the board).

    Well, enough vapor for one day.

    • Quit teasing. (Score:4, Insightful)

      by Glonoinha ( 587375 ) on Sunday April 20, 2003 @03:08PM (#5769863) Journal
      Ouch - come on, man, quit teasing us. This is EXACTLY what we want, although I would suggest supporting ATA-133 on down. The reason people want to add an SSD is to make an existing computer a LOT faster... if they have to buy a new computer (that has SATA) simply to use your SSD, then the price isn't just the price of your hardware, it is the price of your hardware PLUS the price of a new computer. A hundred million PCs are being sold this year without SATA support, and that means there are a hundred million computers (customers) out there you are ensuring you can't sell to if you only support SATA.

      Secondly, rather than planning your first release to be the superduper box in 18 months, how about a "pretty good" box that supports regular IDE (ATA-100 on down) in 6 months? Sell some to generate cash flow, learn from the feedback of your early adopters, and fold the engineering changes into your superduper box v2 that still gets released in 18 months.

      Maybe the first generation skips SATA and battery backup, uses PC100 SDRAM, and goes full height (two 5.25" bays) instead of half height if you need the room; perhaps see if a SCSI interface might get you out the door sooner (much less intelligence is needed on the drive in a SCSI implementation)...

      Let's face it, the first generation of anything usually has pain - so plan on your uber release being v2 in 18 months, and release (sell) your first generation in 6 months.
  • I find it quite odd that the title of this story is "Getting Rid of the Disks", considering that the article is about why it would be too expensive for a normal user to switch to SSDs.
  • Works great for me (Score:3, Interesting)

    by digitalgimpus ( 468277 ) on Sunday April 20, 2003 @02:13PM (#5769637) Homepage
    Drives should use intelligent management.

    Since solid state memory is cheaper now than it was, disk drives should use giant amounts of cache... perhaps 512MB... and let the OS put the common stuff there, so that boot time and the like could be quicker.

    Kind of like auxiliary RAM: the OS can put stuff there based on what it thinks should be there - for example, commonly used apps (in most cases a web browser and email client).

    You could also expand on the idea and use solid state as a form of backup, since it's so reliable: have the system automatically compress data from specified directories on the drive and back it up to solid state memory.

    There are so many potential uses. We rely on hard drives too much.
  • get a nice 64-bit system (hello, athlon64),
    slap 8 or so gigs of ram and a four-disk striped raid array on it.
    half of the filesystem would use 2gb on each disk as a database,
    to be updated twice a day.
    the other half would be on that ram, in the form of diffs on the hdds.
    on top of this, you put a more intelligent system
    to handle things like downloads (go direct to hdd)
    or swap (never go to hdd).

    this would be interesting for test machines,
    which would never sync ram to hdd,
    or multi-user full-access boxes,
    which c
  • hmmm (Score:4, Informative)

    by photon317 ( 208409 ) on Sunday April 20, 2003 @07:11PM (#5770725)

    This guy Dan who wrote the article doesn't seem to have all his facts together. He speaks of constructing SSDs from normal RAM chips, with a magnetic hard drive to store the data during powerdown and a long-run UPS needed to prevent data loss - does he realise that SSD drives already exist which use static RAM and don't lose data on power loss? Some of them have intelligent controllers that re-map memory blocks on the fly to spread the write-wear evenly across the drive, too, which eliminates his write-cycle argument.

    As two examples I know of, both BitMicro and M-Systems offer solid state disks with static RAM in them, in IDE and SCSI varieties. Both products are drop-in replacements for standard magnetic drives. The BitMicro SCSI drives can sustain 34MB/sec.

    As far as I can tell, the only reason you *wouldn't* want one of these in your machine is cost, which is a biggie - they are prohibitively expensive. However, if I were worth a lot more money than I am now, and I felt like blowing a solid 5-digit figure just to build the ultimate custom high-speed PC, a static-RAM based HDD would be a major component. It would probably be best to buy a 20G-ish unit as your main system drive (C: for Windows, or /, /usr, etc. on Linux), and then drop in a couple of monster-sized IDE drives for multimedia data, large games, home dirs, or whatever suits you.
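
    The re-mapping trick is simple in principle: the controller sends each logical write to the least-worn free physical block, so no single cell absorbs all the traffic. A toy illustration (purely my sketch of the idea - real BitMicro/M-Systems controllers are far more sophisticated):

        import heapq

        class WearLeveler:
            def __init__(self, physical_blocks: int):
                # min-heap of (erase_count, physical_block)
                self.free = [(0, b) for b in range(physical_blocks)]
                heapq.heapify(self.free)
                self.map = {}   # logical block -> (erase_count, physical_block)

            def write(self, logical: int) -> int:
                if logical in self.map:
                    count, phys = self.map[logical]
                    heapq.heappush(self.free, (count + 1, phys))  # retire old copy
                count, phys = heapq.heappop(self.free)            # least-worn block
                self.map[logical] = (count, phys)
                return phys

        wl = WearLeveler(physical_blocks=8)
        print([wl.write(0) for _ in range(5)])   # -> [0, 1, 2, 3, 4]

    Hammering logical block 0 five times lands on five different physical blocks instead of burning out one spot - which is why the write-cycle argument is weaker than it looks.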
  • by gelfling ( 6534 ) on Sunday April 20, 2003 @07:57PM (#5770909) Homepage Journal
    Our infrastructure requires something on the order of 24-28 MB/sec zone data transfers, which is close to the upper steady-state limit of our SSA arrays. So yeah, there are situations where you would need SSD.

    We've also seen some DFS FLDBs that needed frightfully high-performance disks - a problem that might be solved with SSD.

    We've experimented with virtualized disks inside z/OS partitions connected via the Fibre Channel adapter. But this is a high-kludge solution, and very expensive from a physical port consumption perspective.
