SSD Won't Make Sense In Laptops For Two Years
kgagne writes "While solid-state disk drives can vastly improve random read performance and are perfectly suited to most mobile devices, many operations in laptops and desktops are sequential and involve writes, where SSDs most often lose to magnetic hard disk drives in performance. Introducing multi-channel flash memory controllers and interleaving the NAND flash chips increases performance, but it will still be about two years before the cost-versus-benefit ratio makes sense for installing an SSD in your laptop or desktop PC, according to a Computerworld story. '"I think you need to get to 128GB for around $200, and that's going to happen around 2010. Also, the industry needs to effectively communicate why consumers or enterprise users should pay more for less storage," says Joseph Unsworth, an analyst at Gartner Inc.'"
120GB is too much. (Score:5, Informative)
Try 16GB SDHC, available now for $50, delivered. [newegg.com]
One for the OS and apps, one for the data. Need more? Put the other ones in your pocket.
I disagree (Score:5, Informative)
Complexity, power draw, heat, and failure from kinetic shock: all of these are either reduced or eliminated with a flash device.
If you're looking for non-mobile, or a large storage application, then the disk makes sense.
Re:120GB is too much. (Score:5, Informative)
I guarantee that the SDHC card you mention will not sustain any reasonable transfer speed.
I bought this:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820208418 [newegg.com]
Then I went to the Addonics website and ordered a CF-to-IDE adapter. Well, at first I ordered one on eBay. Turned out it didn't fully support DMA -- like they didn't complete all the traces properly. Anyway, for $70 or so total, I have a diskless machine in my garage that boots Ubuntu and plays music; no more whiny 80GB hard drive there.
I think Linux reported hdparm read speeds of 25 to 30MB per second. Not too shabby; since the PC is only a 900MHz Athlon, I really can't tell if the CF is the limiting factor. It feels just as snappy as with the original hard disk in; it probably boots a bit faster, but I generally just turn it on and don't watch over it.
Re:I disagree (Score:5, Informative)
It's not as cut and dried as you think; from the article you link to:
Check out the graphs on the retest [tomshardware.com]
Re:120GB is too much. (Score:5, Informative)
That was just the cheapest one today. There are dozens there and one will suit. I didn't have time to construct the capacity/price/performance grid and still get a first post. Sorry.
If you need more than 16GB of OS and apps, you don't really need a laptop. Or if you do, you're a power user with unusual needs; you're not in the "most people" zone where the price/performance sweet spot is. An XP install with Office is about 4GB; in 8GB you can fit Ubuntu and a few hundred of your favorite free apps. If your system image is over 12GB, you have other issues and should expect to pay more. 16GB for OS and apps plus 16GB for data is plenty for almost anybody.
Not all SDHC-to-IDE, SDHC-to-PCMCIA, or SDHC-to-SATA converters support booting, but most do, and most SDHC adapters installed in laptops support it. You can always try it; the ones that do are quite proud of the fact, so it won't be hard to tell which is which. The performance on these things can be quite good. I don't know why they don't just put a socket for them on a desktop motherboard; you have to buy an embedded motherboard for that.
Re:I completely agree (Score:5, Informative)
There's a reason those new Dells which boast 19h of battery life have SSDs
No, the new Dells boasting that have a battery pack option that is the same size as the bottom of the laptop. Think of one of those laptop cooler pads, except with 15 pounds of battery inside instead of a couple of fans.
Re:Losing out on performance (Score:2, Informative)
Isn't this highly dependent on the filesystem you use and its strategy for block allocation?
Yes, but...
Wouldn't it be possible to design the block allocation algorithm to favour SSDs...
Well, fragmentation isn't the answer. That seems to be what you're suggesting...
See, fragmentation introduces problems of its own -- for example, the simple overhead of block allocation. If you've got a bunch of blocks that are sequential -- say, blocks 123, 124, 125, 126, and 127 -- you can say that the file is in one extent, from block 123-127, which takes just a start and a length. If, however, your file is stored in blocks 123, 259, 312, 567, and 964, you're going to have to store all five of those addresses -- roughly two and a half times as much disk space spent simply storing addresses.
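The metadata overhead is easy to put numbers on. A minimal sketch, assuming 8-byte block addresses and the five-block file from the example above (both figures are illustrative, not taken from any particular filesystem):

```python
# Compare metadata cost: one extent vs. per-block addresses.
ADDR_BYTES = 8  # assumed size of a block address on disk

# Contiguous file: a single extent stores (start, length) -> 2 values.
extent_cost = 2 * ADDR_BYTES

# Fragmented file: one address per block must be stored.
blocks = [123, 259, 312, 567, 964]
fragmented_cost = len(blocks) * ADDR_BYTES

print(extent_cost, fragmented_cost, fragmented_cost / extent_cost)
# 16 40 2.5  -> the fragmented layout needs 2.5x the address metadata
```

And the gap widens with file size: a contiguous gigabyte is still one extent, while a fully fragmented one is hundreds of thousands of addresses.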
Also, Flash is written to (and read from) in rather large blocks -- so you want a file to be at least contiguous at that level.
No, I would say the simple solution to Flash not appearing to perform as well (for sequential operations) is to defragment just as aggressively, and to increase readahead by a lot. Flash would work great for contiguous sequential reads, assuming the filesystem (or block layer) anticipates that you'll keep reading from the same file.
But first, you need the flash disk itself to support simultaneous reads, and probably some OS support as well.
And for what it's worth, Linux has filesystems optimized for raw Flash, which handle things like wear-leveling on their own. CompactFlash and the newer standards provide an IDE-like interface that does the wear-leveling in hardware -- in other words, they pretend to be a hard disk, mostly for the benefit of Windows.
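That hardware layer is, at heart, a logical-to-physical remap table. A toy sketch of the idea -- nothing resembling a real controller, just the remap-on-write principle:

```python
# Toy flash translation layer: every rewrite of a logical block is
# redirected to the least-worn unmapped physical block (wear leveling).
class ToyFTL:
    def __init__(self, nblocks):
        self.map = {}              # logical block -> physical block
        self.wear = [0] * nblocks  # erase count per physical block

    def write(self, logical):
        # Pick the physical block with the fewest erases that no
        # logical block currently maps to.
        used = set(self.map.values())
        free = [p for p in range(len(self.wear)) if p not in used]
        target = min(free, key=lambda p: self.wear[p])
        self.map[logical] = target
        self.wear[target] += 1
        return target

ftl = ToyFTL(4)
for _ in range(8):
    ftl.write(0)   # hammering one logical block...
print(ftl.wear)    # [2, 2, 2, 2] -- ...still spreads erases evenly
```

The host (or Windows) only ever sees logical block 0; the erases land all over the chip.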
So I predict two things: First, that Linux will solve this problem the "right way", given sufficiently low-level access. And second, that there will be a lot of hardware, firmware, and BIOS hacks to get around the fact that Windows isn't going to be changing its filesystem anytime soon.
Re:120GB is too much. (Score:3, Informative)
I have a hard time finding blank DVDs that last more than 3-4 years. And backing up hundreds of gigabytes of files onto DVDs tends to wear out your DVD burner pretty fast. Not to mention it's a lot easier to lose or damage data stored on hundreds of separate DVDs than on a couple of hard drives.
It's pretty presumptuous to think that everyone has the same needs/preferences as you.
Re:Ah...No. (Score:5, Informative)
How many write cycles are your SSDs good for?
With wear leveling? More than a hard drive. Time to put that myth to rest. And no, I am not trolling.
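Back-of-the-envelope arithmetic supports the point. Every number below is an assumption for illustration (drive size, cycle rating, daily write volume), not a figure from the thread:

```python
# Rough SSD lifetime estimate under ideal wear leveling.
# All figures are illustrative assumptions, not vendor specs.
capacity_gb = 64          # drive size
cycles_per_cell = 10_000  # rated erase/program cycles per cell
writes_gb_per_day = 20    # daily host writes (a heavy desktop load)

total_write_gb = capacity_gb * cycles_per_cell  # 640,000 GB before wear-out
days = total_write_gb / writes_gb_per_day
print(days / 365)  # ~87.7 years, ignoring write amplification
```

Even if write amplification eats an order of magnitude of that, the drive still outlives a typical mechanical disk's service life.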
Re:Random write performance (Score:3, Informative)
120MB/s sustained and sequential read and write. The WD VelociRaptor (the new 10K RPM drive) comes in much lower, at 85MB/s sustained and 68MB/s sequential.
http://benchmarkreviews.com/index.php?option=com_content&task=view&id=149&Itemid=1&limit=1&limitstart=4 [benchmarkreviews.com]
Re:Random write performance (Score:2, Informative)
The Intel drive uses MLC, and while its write speed might appear competitive, the relatively large block erase that happens with MLC will make this drive impractical for typical "system drive" usage. Small writes will bring it to its knees.
Re:I completely agree (Score:2, Informative)
The high-end ones have scored metal cases, too, to produce hand-grenade-style shrapnel.
CF cards and poor DMA support... (Score:3, Informative)
Turned out it didn't fully support DMA...??? Like they didn't complete all the traces properly...
This problem is rampant across many flash card models and brands, even ones that claim to support DMA or UDMA. Search around on the interweb using your Napster machine and you'll find many others with the same issues.
:)
After trying a Transcend 4GB 133x that would only work in PIO mode, I got my hands on a Ridata 4GB 266x that *does* work in DMA. So if anyone is considering that card, maybe that's a good sign.
Mechanical challenges turned out not to be the only ones waiting for me when I worked on connecting the CF cards to the camera. These cards were hanging when the CPU tried to read them using DMA mode (and the card identified itself as supporting DMA mode).

I tried to find the problem using all the tools I had: I added a bunch of printk's to the driver source, tried different speed settings for the DMA, and finally used an oscilloscope to spy on the signals between the CF card and the CPU. What I found was that the card did actually send the data using DMA mode, but always only for two "sectors" (1024 bytes total), regardless of the number of blocks to transfer written to the corresponding register. Then it silently hung, without activating an IRQ line, even if it was asked to transfer just a single block. And the CPU was relying on that interrupt to continue processing the data read from the CF card.

Careful examination of the data on the IDE bus did not reveal any problems (I was expecting something specific to the ETRAX). The same CF card with DMA disabled in the driver worked fine (but slower, of course), as did the IDE hard drive (or SATA through the bridge) with DMA enabled. Googling the issue showed that I'm not the first to have problems with CF cards and DMA; the driver itself had a blacklist of devices that caused problems. -- http://linuxdevices.com/articles/AT5102023409.html [linuxdevices.com]
Re:Why I'm using over 16MB for work (Score:2, Informative)