
Why SSDs Won't Replace Hard Drives

storagedude writes "Flash drive capacities have been expanding dramatically in recent years, but this article says that's about to change, in part because of the limits of current lithography technology. Meanwhile, disk drive densities will continue to grow, which the author says will mean many years before solid state drives replace hard drives — if they ever do. From the article: 'The bottom line is that there are limits to how small things can get with current technology. Flash densities are going to have data density growth problems, just as other storage technologies have had over the last 30 years. This should surprise no one. And the lithography problem for flash doesn't end there. Jeff Layton, Enterprise Technologist for HPC at Dell, notes that as lithography gets smaller, NAND has more and more troubles — the voltages don't decrease, so the probability of causing an accidental data corruption of a neighboring NAND goes up. "So at some point, you just can't reduce the size and hope to not have data corruption," notes Layton.'"
This discussion has been archived. No new comments can be posted.

  • by strangeattraction ( 1058568 ) on Monday July 26, 2010 @05:22PM (#33036976)
    It was plenty for my needs, and it boots Ubuntu in 20 seconds. It barely uses power when not in use. I'm a believer.
  • by Microlith ( 54737 ) on Monday July 26, 2010 @05:33PM (#33037144)

    SSDs already leverage extreme parallelism via 15+ different channels; indeed, they have to, given how slow most NAND chips (especially MLC) are. Eventually you're forced onto the PCIe bus, especially as you approach 18-25 channels (FusionIO) and the SATA bus becomes a bottleneck. (Back-of-the-envelope math below.)
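
    To make that concrete, here is a minimal back-of-the-envelope sketch in Python; the channel count and per-channel throughput are illustrative assumptions, not datasheet figures:

        # Rough sketch of why multi-channel NAND saturates SATA.
        # All figures are illustrative assumptions, not datasheet numbers.
        CHANNELS = 16            # parallel NAND channels (assumed)
        MB_PER_CHANNEL = 40      # sustained MB/s per MLC channel (assumed)
        SATA_II_LIMIT = 300      # ~3 Gbit/s SATA II after 8b/10b overhead, in MB/s

        aggregate = CHANNELS * MB_PER_CHANNEL
        print(f"Aggregate NAND bandwidth: {aggregate} MB/s")   # 640 MB/s
        print(f"SATA II ceiling: {SATA_II_LIMIT} MB/s")
        print("Bottleneck:", "SATA bus" if aggregate > SATA_II_LIMIT else "NAND")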

  • by Anonymous Coward on Monday July 26, 2010 @05:45PM (#33037288)

    SSD devices have been around since the '50s, and in production form since the mid '70s. It's not that the technology is immature; it's that the technology is not cost effective for the vast majority of end users. There are serious issues that have yet to be fully addressed with SSDs, and I'm not just talking about wear leveling and reduced performance as the devices fill.

  • by QuantumRiff ( 120817 ) on Monday July 26, 2010 @05:52PM (#33037360)

    Several SAN vendors do similar things right now, either manually or automatically: moving older, less frequently used data from fast SCSI and Fibre Channel drives to slower SATA drives. Last I looked, they were also planning to add SSDs to the mix, either replacing SCSI or as a very top tier. (A toy sketch of the idea follows.)
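
    A minimal sketch of that tiering idea in Python; the paths, the 30-day threshold, and the file-level granularity are made-up simplifications (real arrays migrate at the block or LUN level):

        import os
        import shutil
        import time

        FAST_TIER = "/mnt/fast"   # e.g., an SSD- or FC-backed volume (assumed path)
        SLOW_TIER = "/mnt/slow"   # e.g., a SATA-backed volume (assumed path)
        MAX_AGE = 30 * 24 * 3600  # demote after 30 days without access

        def demote_cold_files():
            """Move files not read recently from the fast tier to the slow tier."""
            now = time.time()
            for name in os.listdir(FAST_TIER):
                path = os.path.join(FAST_TIER, name)
                if os.path.isfile(path) and now - os.path.getatime(path) > MAX_AGE:
                    shutil.move(path, os.path.join(SLOW_TIER, name))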

  • by Sycraft-fu ( 314770 ) on Monday July 26, 2010 @06:29PM (#33037602)

    What annoys me is that flash seems ideal as a cache for magnetic HDDs. The same principle is already at work in our CPUs:

    A modern CPU is way faster than modern RAM; its access times are much lower. How, then, can we have a system that isn't hamstrung by RAM? The answer is cache. With a good hierarchy of high-speed L1/L2 (and sometimes L3) cache, we can have our cake and eat it too. You have a few megabytes of expensive, high-clock SRAM right on the core and a few gigabytes of cheap DRAM clocked much slower. With proper caching, you then get 90-95% of the expected speed of the SRAM. Nearly all of the speed, at a fraction of the cost.

    Why not HDDs, then? Have the RAM on there (L1) and a couple of gigabytes of flash (L2) paired with the disk. Use an intelligent caching algorithm (as in, not just the first part of the drive) to cache reads and writes; a sketch of one follows this post. This should again offer most of the expected speed of the flash while still keeping the price low.

    I'd pay for that. Say a full magnetic drive is $100 for 1TB and a full SSD is $3,000 for 1TB. A hybrid 1TB drive, which features 4GB of flash, costs $200 but performs 50% faster than the magnetic drive and handles simultaneous reads and writes much better. I'd buy that.

    Unfortunately, all the current hybrids are aimed at laptops and use the flash to save power, not to speed things up.
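
    Here is a minimal sketch of that flash-as-L2 idea: an LRU block cache in Python. The block size, cache size, and the disk_read/disk_write placeholders are assumptions standing in for a few gigabytes of flash and the slow magnetic medium:

        from collections import OrderedDict

        BLOCK_SIZE = 4096
        CACHE_BLOCKS = 1_000_000  # ~4 GB of flash at 4 KiB blocks (assumed)

        class HybridDrive:
            def __init__(self, disk_read, disk_write):
                self.cache = OrderedDict()    # block number -> bytes, LRU order
                self.disk_read = disk_read    # placeholder for the platter read path
                self.disk_write = disk_write  # placeholder for the platter write path

            def read(self, block):
                if block in self.cache:           # hit: served at flash speed
                    self.cache.move_to_end(block)
                    return self.cache[block]
                data = self.disk_read(block)      # miss: pay the seek penalty once
                self._insert(block, data)
                return data

            def write(self, block, data):
                self._insert(block, data)         # write-through keeps it simple
                self.disk_write(block, data)

            def _insert(self, block, data):
                self.cache[block] = data
                self.cache.move_to_end(block)
                if len(self.cache) > CACHE_BLOCKS:
                    self.cache.popitem(last=False)  # evict least recently used

    A real hybrid drive would do this in firmware and would likely use a write-back policy with power-loss protection, but the cache logic is the same shape.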

  • Re:Lets wait and see (Score:2, Informative)

    by Anne_Nonymous ( 313852 ) on Monday July 26, 2010 @08:12PM (#33038624) Homepage Journal

    Oh come on. 640K ought to be enough for anybody.

  • by adonoman ( 624929 ) on Monday July 26, 2010 @09:20PM (#33039136)
    You mean like this [computerworld.com]?
  • by Ruede ( 824831 ) on Monday July 26, 2010 @09:46PM (#33039370)
    @article: yeah, right. But strangely enough, every HDD holding an OS got replaced the minute I could afford it.
  • by DJRumpy ( 1345787 ) on Monday July 26, 2010 @09:47PM (#33039382)

    I'm in agreement with this, except that holographic storage has a few major drawbacks. Although SSD is stellar for smaller storage requirements, platter drives are just too slow to be of much more use. Some highlights of holographic storage should be pointed out first:

    The theoretical limit on the storage density of this technique is several tens of terabytes (1 terabyte = 1024 gigabytes) per cubic centimeter.
    Another factor: photographic media has the longest proven lifespan of any modern medium, over a century. Since there's no physical contact, you can read the media millions of times with no degradation.

    Unfortunately, the current limitations make this a far-off product that probably won't see the light of day for many years.

    The initial prototype was only capable of 20 MB/s. Although that isn't horrible for optical storage, it's hardly a top performer.
    Although the theoretical limits are enormous, the actual prototypes held only about 300 MB. They have already fallen behind platter-based storage.
    Seek times were in the area of 200 ms, which is also pretty poor compared to platter storage.

    With all of that said, there have been viable advances in holographic storage. HVDs (Holographic Versatile Discs) show true promise.

    These discs have the capacity to hold up to 6 terabytes (TB) of information. The HVD also has a transfer rate of 1 Gbit/s (125 MB/s). Sony, Philips, TDK, Panasonic and Optware all plan to release 1 TB capacity discs in 2019, while Maxell plans one for early 2020 with a capacity of 500 GB and a transfer rate of 20 MB/s. Although HVD standards were approved and published on June 28, 2007, no company has released an HVD as of July 2010.

    Ref: http://en.wikipedia.org/wiki/Holographic_Versatile_Disc [wikipedia.org]
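
    To put those quoted specs in perspective, a quick bit of arithmetic (using the 6 TB capacity and 125 MB/s transfer rate above):

        # Time to read a full HVD at the quoted specs.
        capacity_mb = 6 * 1024 * 1024   # 6 TB in MB, using 1 TB = 1024 GB as above
        rate_mb_s = 125                 # the quoted 1 Gbit/s transfer rate
        hours = capacity_mb / rate_mb_s / 3600
        print(f"Reading a full disc takes ~{hours:.0f} hours")   # ~14 hours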

  • Flash is so 2000s (Score:3, Informative)

    by georgewilliamherbert ( 211790 ) on Monday July 26, 2010 @10:02PM (#33039512)

    The hot new solid-state non-volatile memory technologies are phase-change memory (PRAM), memristors, ferroelectric RAM, and resistive RAM.

    Some of these technologies are much more area-efficient than flash and will stack reasonably well in pseudo-3D chips (memristors in particular should stack in full 3D arrays very efficiently).

    The general observation that disks have the lead right now is true, but the other technologies close a lot of the gap, and the growth curves look very similar after that. Who knows whether it will ever get cheap enough to completely replace disks in our lifetimes, but there is hope of seeing it.

    That entirely changes the game on system architecture. Disks are slow and far away from the CPU. Solid-state memory can be as close, or nearly as close, as DRAM, and if it doesn't require a lot of handholding on lifecycle management (wear rates, etc.; flash is horrible here), then it can be used and managed as a simple byte or block array rather than through the whole "filesystem" crap we now use (sketched below). We may still want POSIX-like abstractions for parts of storage management, but life is so much easier if the back-end store is just a block array we read/write than if it's really a spinning disk, behind a cache, behind a controller, behind a SATA/SAS bus, behind a controller, behind a PCI bus, behind a southbridge, ....
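
    A minimal sketch of that flat-byte-array model in Python, using a memory-mapped file as a stand-in; 'store.img' is a made-up placeholder, and a real PRAM or memristor part would need its own driver support:

        import mmap

        # Pre-size a stand-in file so the mapping has something to map.
        with open("store.img", "wb") as f:
            f.write(b"\x00" * 4096)

        with open("store.img", "r+b") as f:
            store = mmap.mmap(f.fileno(), 0)   # the whole store, one flat byte array
            store[0:5] = b"hello"              # write five bytes at offset 0
            print(store[0:5])                  # read them back: no seek, no metadata
            store.flush()                      # make it durable
            store.close()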

  • by fuzzyfuzzyfungus ( 1223518 ) on Monday July 26, 2010 @11:00PM (#33039964) Journal
    Although it hasn't been relevant in ages, technically your old SCSI monster takes up one 5.25-inch bay, and virtually all modern optical drives take up half of a bay.

    Such "Full-height" devices are essentially extinct(if anything, more servers are going with 2.5 inch drives, for zippiness, with 3.5s in the SAN if you need bulk storage); but their descendants are still "half-height"...
  • by symbolset ( 646467 ) on Monday July 26, 2010 @11:13PM (#33040062) Journal

    SSDs already do things now that HDDs could never do, like provide sufficient capacity, I/Os per second, and low enough latency to satisfy the I/O needs of a maxed-out virtual host with internal storage, or a virtual host for VDI. In a next-gen SAN like the WhipTail they beat $1/IOPS, which is necessary for making VDI cost-effective (rough arithmetic at the end of this post). They do it with a power-to-IOPS ratio that's so superior it's not even directly comparable, in a form factor that's like comparing a toaster to a refrigerator.

    On performance, spinning rust was beaten off the line. Storage capacity is almost beaten already (400GB SFF SSD vs. 1TB LFF), and the only reason it isn't flat-out beaten is that engineers rebel against storage media capable of oversaturating their connection bandwidth by such a large factor; they CAN put that many chips in that box, but the idea is offensive. The only issue left of the big three is price. Prices of SSDs are coming down faster than HDD prices, so the trend is clear: SSDs will replace spinning drives in more and more applications. You can plot an intersect if you want; I'm pretty sure that against enterprise spinning disk the intersect is less than the five years out stated in the article. SSD is the new tape.

    And that's without considering those impossible technological evolutions explored in your post and elsewhere in the thread.
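
    For a sense of what beating $1/IOPS means, here is the arithmetic with assumed figures; the prices and IOPS numbers are illustrative, not quotes for any specific product:

        # Illustrative $/IOPS comparison; all figures are assumptions.
        hdd_price, hdd_iops = 300.0, 180      # a 15K RPM enterprise disk (assumed)
        ssd_price, ssd_iops = 700.0, 10_000   # an enterprise SSD (assumed)

        print(f"HDD: ${hdd_price / hdd_iops:.2f}/IOPS")   # ~$1.67/IOPS
        print(f"SSD: ${ssd_price / ssd_iops:.2f}/IOPS")   # ~$0.07/IOPS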
