Are Consumer Hard Drives Headed Into History?

Lucas123 writes "With NAND flash fabricators ramping up production, per GB prices of solid state drives are expected to drop by more than half by this time next year to about 50 cents. Even so, consumers still look at three things when purchasing a computer: CPU power, memory size, and drive capacity, giving spinning disk the edge. SSD manufacturers like Samsung and SanDisk have tried but failed to change consumer attitudes toward choosing SSDs for their performance, durability and lower power use. But, with the release of the new MacBook Air (sans hard disk drive), Steve Jobs has joined the marketing push and may have the clout to shift the market away from hard drives, even if they're still an order of magnitude cheaper."
  • Durability? (Score:1, Informative)

    by Anonymous Coward on Saturday October 23, 2010 @07:15PM (#33999916)
    SSDs are known for their durability? Perhaps if properly set up, with temp and cache in memory instead of on disk, then yes.

    Correct me if I'm wrong, but otherwise constant reads/writes (at least they used to) chew through the "spare blocks".
  • by Anonymous Coward on Saturday October 23, 2010 @07:17PM (#33999930)

    I have had the opposite experience. I bought a small SSD and was really happy with it until it died after 2 months of use. I didn't even have a swap partition :(

  • by Anonymous Coward on Saturday October 23, 2010 @07:23PM (#33999980)
    20% of retail sales. That leaves out all online and corporate purchases.
  • by CaptBubba ( 696284 ) on Saturday October 23, 2010 @07:33PM (#34000042)

    Even with the best wear leveling techniques SSDs will not be able to provide the sort of write cycles that a magnetic drive can withstand. This may not be an issue in most consumer use, but the possibility is there that somebody will hear of a friend of a friend's uncle who had his entire life's work (read: porn collection) wiped out. Something doesn't actually have to be a risk for someone to freak out about it and avoid the technology.

    On the other end of the spectrum of usage scenarios: If the disk is not accessed and rewritten occasionally the issue of disappearing data comes up. In a NAND cell the data may be stored by as few as 100 electrons which are trapped in the floating gate of the transistor. Over the years imperfections in the insulation layers or quantum tunneling through the insulation layers (some of which are merely a few atoms thick) results in the electrons escaping and the cell eventually becoming unreadable. The target minimum data retention time for NAND flash is 10 years, but just due to the absurd number of individual transistors in a SSD some data will be lost before that time period. Suboptimal storage temperatures combined with smaller cell sizes and multi-level-cell NAND flash designs tend to make this effect worse.

    SSDs may find a home in specialized situations where the pros outweigh the cons, like laptops, but I doubt they will ever displace magnetic hard drives in most applications.

  • by NimbleSquirrel ( 587564 ) on Saturday October 23, 2010 @07:36PM (#34000076)
    The MacBook Air is a pretty poor example to choose as a shift to SSDs. In the MacBook Air, the SSD chips are soldered to the logic board. It is not like there is a choice on what kind of drive can be installed. When 64GB isn't enough, there is no way to upgrade. When the SSD gets a fault, there is no drive to swap out - it would be time for a new logic board. With NAND Flash having a finite lifetime, soldering the SSDs to the logic board is a prime example of planned obsolescence. When the SSD dies (when, not if), there is only Apple to turn to, so Apple effectively has vendor lock-in as well, but we have come to expect that from Apple.

    Marketing isn't going to shift far away from traditional hard drives any time soon. Yes, prices for NAND flash are dropping, but there are disadvantages to using flash: low capacities (compared with HDDs), relatively low write performance, and a finite lifetime of write cycles (yes, wear levelling does help, but it doesn't eliminate the core of the problem).
  • by mmcxii ( 1707574 ) on Saturday October 23, 2010 @07:37PM (#34000088)
    Even a Mac site [] doesn't back up your number. Not even by half. Sorry.
  • by linc_s ( 653782 ) on Saturday October 23, 2010 @07:43PM (#34000154) Homepage

    In the MacBook Air, the SSD chips are soldered to the logic board. It is not like there is a choice on what kind of drive can be installed. When 64GB isn't enough, there is no way to upgrade. When the SSD gets a fault, there is no drive to swap out - it would be time for a new logic board. With NAND Flash having a finite lifetime, soldering the SSDs to the logic board is a prime example of planned obsolescence. When the SSD dies (when, not if), there is only Apple to turn to, so Apple effectively has vendor lock-in as well, but we have come to expect that from Apple.

    No, the SSDs are on a removable board. See [] (it's the thing that comes off from above the RAM)

  • by Anonymous Coward on Saturday October 23, 2010 @07:47PM (#34000180)

    finite number of read/writes to flash memory

    This myth needs to die. []

    I would cringe to do secure erases (writing zeroes)

    Problem solved [].

  • by Jeremy Erwin ( 2054 ) on Saturday October 23, 2010 @07:56PM (#34000244) Journal

    The RAM is soldered in

    Let me just repeat that, in case it hasn't quite sunk in yet.

    The RAM is soldered in. If you buy it with 2GB, you can't upgrade it. If you buy it with 4GB, you can't upgrade it.

    However, you can upgrade the SSD.

    source []

    Of course, it comes with a paltry 1.4 GHz Core 2 Duo (soldered in, naturally) or a 1.6 GHz C2D.

    Oh, I see that my new talking points have come in from Apple.

    You don't need a faster processor because it's still faster than an Atom.
    You don't need to upgrade the RAM, because virtual memory on an SSD is so much faster.

    Thanks, Apple! My Fanboy subscription still pays dividends!

  • by thestudio_bob ( 894258 ) on Saturday October 23, 2010 @07:59PM (#34000264)

    He's not making up the 20% number...

    Cook pointed to a study from market research firm NPD that pegs Apple’s current share of the US consumer retail market at 20.7 percent...

    Source: Study: Mac claims 20 percent US consumer market share []

  • by causality ( 777677 ) on Saturday October 23, 2010 @08:09PM (#34000328)

    I tend to hold on to my tech for years. With the finite number of read/writes to flash memory, I don't want to be forced to part with a computer because it uses a proprietary flash storage system or be forced to purchase a proprietary replacement storage module.

    Things like iPods, smart phones, and PDAs are cheaper and easily replaced in whole, but I wouldn't want to face a replacement cost for a laptop.

    I admit I have never owned an SSD and therefore I might be ignorant. Having said that, to the best of my knowledge SSDs use the same standard connectors (SATA) as spinning hard drives. If/when an SSD fails you should be able to buy either another SSD or a spinning hard drive as a drop-in replacement. This situation is no different and no more proprietary than mechanical drives.

    When a question like that is so immediate and obvious, it does occur to me that I have probably misunderstood you. I don't know if maybe laptops are a special case. Can you explain this for me?

    I would cringe to do secure erases (writing zeroes) to a flash memory drive (solid state drives or Apple's flash "drive" module in the new Airs), knowing I was prematurely killing my storage life. Platter-based disks with sudden motion sensors will still be my huckleberry for a few more years...

    That really would be an issue. I'll note that usually a secure erase is more thorough than merely overwriting a file with zeroes. It often involves multiple passes that overwrite it with random data, either exclusively or in conjunction with overwriting it with zeroes. What I don't know is whether that's necessary for an SSD, though I do know it's often done that way for spinning hard drives.

    On a desktop you could balance wear-and-tear and the need for secure deletion by having two drives. You could have an SSD with the operating system and applications installed on it for performance and then a larger mechanical drive for data storage. For a laptop that doesn't sound so practical, unfortunately. Perhaps on a laptop you'd want to have a small partition for sensitive data that uses filesystem encryption. That way sensitive data is never written to the device in plaintext and wouldn't need to be overwritten just to protect your data from someone who obtains the drive.

  • And you could have done even better by just adding a second hard drive to your laptop (most 17" laptops will accommodate 2 drives) and using one for your OS and one for your data, or running them as RAID-1

    AND saved $$$$.

    Just for fun, I just priced a 17" mac laptop (I like my full-sized keyboards). With a 512gig SSD, it's $3,628.00

    For the same price, you can buy not one, not two, not four, but six 17" laptops, plus a second 640-gig HD for each of them.

    So, for the price of ONE 17" mac with half a terabyte of SSD, you get:

    1. 24 gigs of ram
    2. 12 cores
    3. 10 terabytes of storage
    4. 6 displays (imagine the virtual desktop !!!)

    On top of that, if one breaks, you would have 5 spares, plus lots of room to store backups.

    Think about being able to carry a LAN party in one of those large recyclable shopping bags.

    And you won't have to just imagine having your own Beowulf cluster.

  • by klui ( 457783 ) on Saturday October 23, 2010 @08:39PM (#34000536)

    The target minimum data retention time for NAND flash is 10 years

    From a prior discussion here it appears that the retention time will decrease significantly with the newer MLC cells. Rather than 100K rewrite cycles, the 30nm 2-bit/cells are expected to have no more than 3K rewrite cycles; 3-bit/cell chips will have less than 0.5K. []
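A quick back-of-envelope shows what those cycle counts mean in practice (a sketch only; the 100GB capacity, 10 GB/day workload, and the write-amplification factor are illustrative assumptions, not measurements):

```python
def drive_lifetime_years(capacity_gb, pe_cycles, gb_per_day, write_amplification=1.0):
    """Years until the rated program/erase cycles are exhausted,
    assuming perfectly even wear leveling."""
    total_writable_gb = capacity_gb * pe_cycles / write_amplification
    return total_writable_gb / gb_per_day / 365

# Hypothetical 100GB drive at the 3K cycles quoted above, 10 GB written per day:
print(round(drive_lifetime_years(100, 3000, 10)))                         # ~82 years
# Same drive assuming a pessimistic 3x write amplification:
print(round(drive_lifetime_years(100, 3000, 10, write_amplification=3)))  # ~27 years
```

Even at 3K cycles the cells outlast typical consumer use; the sub-0.5K figure for 3-bit cells is where it starts to pinch.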

  • by Anonymous Coward on Saturday October 23, 2010 @08:48PM (#34000580)
    Re: This myth needs to die:

    Please note the difference between SLC and MLC flash as touched on briefly in the article you linked:

    "Depending on the type of flash-memory cells they will fail after only 10,000 (MLC) or up to 100,000 write cycles for SLC, while high endurance cells may have an endurance of 1–5 million write cycles."

    cite: Robert Penz Blog

    SLC is high-speed, and more expensive, while MLC is slower but cheaper. [cite] []. It all depends if you're looking at a small, lean system drive or a larger storage/archival drive. It comes down to the individual vendor's on-board wear leveling and damage mitigation, and typically, the larger the drive, the more room for wear leveling.

    So as with most things in computing, the answer comes down to *what* you are going to be doing with it.

  • Re:This is silly. (Score:3, Informative)

    by arth1 ( 260657 ) on Saturday October 23, 2010 @08:57PM (#34000644) Homepage Journal

    Now, for the failure modes. Let us assume your drive can handle 10,000 writes (a low estimate). Modern drives use wear leveling to avoid writing to the same sector all the time. Thus for a 100GB drive you would have to write over a thousand terabytes before it would start to fail, and even then the failure is a "soft" failure in the sense that reads are fine, so your OS should be able to tell you that the writes are failing, allowing you to copy down unsaved work to your USB stick, mail it to yourself, save it on another drive, whatever.

    You're making a false assumption: That the wear leveling will work perfectly, and spread your writes equally over all blocks. This is only the case if you always delete all files on the drive before writing new ones.

    In reality, one of three things will happen:

    1. The drive controller will move your static files to other places on the disk during idle moments, so the blocks that have been written to the fewest times can be freed for writing again. This causes extra writes, and also increases latency when the drive is busy clearing areas for writes.
    2. The drive does the same, but at the actual write, and not during idle moments. The effect is the same, but in addition, you get increased write latency, especially for random writes.
    3. The drive leaves your static files alone, and only does wear leveling on the unused areas of the disk. If your drive is half full with static files, you have then effectively reduced the life span by half. But until that happens, your performance will be higher, and the SSD manufacturers sell on performance.
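    The grandparent's thousand-terabyte figure and the scenario-3 penalty are easy to reproduce (a sketch; the 10,000-cycle rating and the static-data fraction are the assumptions from the posts above):

```python
def endurance_tb(capacity_gb, pe_cycles, static_fraction=0.0):
    """Total terabytes writable before wear-out, in the case where static
    files are never relocated and only free space absorbs the wear."""
    wearable_gb = capacity_gb * (1 - static_fraction)
    return wearable_gb * pe_cycles / 1000

print(endurance_tb(100, 10_000))       # 1000.0 TB: the "thousand terabytes" figure
print(endurance_tb(100, 10_000, 0.5))  # 500.0 TB: half full, half the lifespan
```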
  • by Shadyman ( 939863 ) on Saturday October 23, 2010 @09:18PM (#34000756) Homepage

    What ever happened to the hybrid drives that were supposed to be the practical solution...

    Seagate Momentus XT drives are available at your favorite computer part reseller in 250GB, 320GB and 500GB flavors.
    See also: Wikipedia - Hybrid drive [] and Seagate's Momentus XT landing page [].

  • Um... Small random reads are the primary pattern in desktop usage. Are you a complete idiot? That's the SUBJECT under discussion, not dumb shit like sequential transfer speed. That's only important for marketing people who like big numbers with MB on the end.

    No, small random reads are NOT the primary pattern in desktop usage. Almost NO file on your file system is under 4k in size, which is the "chunk" size for most 8MB to 64MB HD caches.

    Even DOS didn't have average file sizes that small. And many of today's hard drives also have implemented the elevator algorithm in hardware, so head seek times, especially for small random files, are much less of an issue than they once were.

    4 drives with 32MB hardware caches will outperform your SSD in every scenario, including small random writes - especially since, for the same capacity, they can be grossly under-stroked - limited to the outermost few tracks. Understroke a 1TB drive to 32 gigs and its seek times drop to almost zero. Throw 4 of them into a 4-drive setup as /, /home, /var, and /srv, and you'll beat the 128-gig SSD in small file r/w, and massively beat it in large file r/w.

  • by tirefire ( 724526 ) on Saturday October 23, 2010 @10:54PM (#34001142)

    Speaking as a graduate student who still has a G4 Powerbook, I've loved it but honestly in the past 2 years I've been looking to replace it with something that can actually stream flash videos and show a block of animated gif smilies on a forum reply page without being choppy or using full CPU ... My first choice would be a 13" Macbook Pro, but Apple seems to have left that one useful model on the short bus and gave it a Core 2 Duo while the other pros in the line have decent current-generation chips. I've talked to other friends about it and I know at least 2 other people that would go out and buy a 13" within the next month if only it had a better processor.

    The performance difference between a Core 2 Duo and Core i5/i7 is pretty negligible for this use case :P. Even the 1.66GHz Core Duo in my 4-year-old Mac Mini doesn't choke on web browsing. As I see it, the main advantage of the Core i5/i7 CPUs is that they have an Intel integrated graphics chip on-die that can be used instead of the nVidia graphics in order to conserve battery power.

  • by dgatwood ( 11270 ) on Saturday October 23, 2010 @11:05PM (#34001188) Homepage Journal

    You're close. It's actually Cali-speak. It's just missing some commas to indicate the right pauses.

    It will leave you feeling, like, a 14-year-old girl.

  • by gringer ( 252588 ) on Saturday October 23, 2010 @11:13PM (#34001202)

    No, small random reads are NOT the primary pattern in desktop usage. Almost NO file on your file system is under 4k in size, which is the "chunk" size for most 8MB to 64MB HD caches.

    I differ in that respect. Not sure if my use is typical, but here's a dump of the counts for the smallest file sizes in my home directory:

    ~$ du .* --apparent-size -a 2>/dev/null | awk '{print $1}' | sort | uniq -c | sort -n -k 2,2 | head -n 10
        40006 1
        11237 2
        6862 3
        4831 4
        3554 5
        2964 6
        2783 24
        2619 7
        2477 8
        2229 22

    In other words, the highest-frequency file size is 1kB (blocks are 1kB in my version of du), the next highest 2kB, and so on. I get an odd jump at 24kB and 22kB (and FWIW 0kB comes in at #18), but in general the smaller a file is, the more frequent it is.

  • Re:ridiculous story (Score:5, Informative)

    by beelsebob ( 529313 ) on Sunday October 24, 2010 @02:10AM (#34001902)

    Not true – Intel's current 160GB SSDs, if written continuously at their maximum write speed, will last 10 years; that's twice what most hard disks last. Add to that the fact that lifespan increases linearly with capacity on SSDs, and you're in very, very good territory
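    It's worth sanity-checking what that ten-year figure implies (a sketch; the ~100 MB/s sustained write speed is my ballpark assumption for a 160GB drive of that era, not a quoted spec):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def implied_pe_cycles(capacity_gb, write_mb_per_s, years):
    """Program/erase cycles implied by writing flat-out for the claimed lifetime."""
    total_gb_written = write_mb_per_s * SECONDS_PER_YEAR * years / 1024
    return total_gb_written / capacity_gb

# 160 GB drive, ~100 MB/s sustained writes, 10-year claim:
print(round(implied_pe_cycles(160, 100, 10)))  # ~192,000 cycles
```

    That is far above the 10K-cycle MLC ratings cited elsewhere in this thread, so the ten-year claim presumably bakes in over-provisioning and controller tricks - or some optimism.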

  • Why the hell do you want a half a terabyte of SSD? Because it's the most expensive offering?

    RAID 0 and RAID 1 are nowhere near SSD in terms of power consumption, throughput and IOPS.

    In today's computing environment, RAM is plentiful, CPU cycles are cheap, storage is abundant, yet IOPS will bring even a high-end machine to its knees.

    I was migrating some data from an old laptop (2-year-old MacBook Pro) to a new one (MacBook Pro with a small SSD). I don't know what it's like on Windows or Linux, but on OS X once you're hitting 500-800 IOPS on a 7.2k hard drive everything slows to a crawl. Your CPU utilisation can be idle, your RAM usage can be well within the amount of physical RAM installed, yet too many IOPS and you soon can't do much with the machine.

    On this new machine, I was copying a mail spool to it (mbox folders) installing software and Spotlight (full text indexing) was running in the background. This machine (a laptop mind you, not a workstation) was pulling in 7500 IOPS and not breaking a sweat - it was quick, responsive and completely usable for interactive tasks.

    In order to get 7k IOPS from spinning media, you're talking about Fibre Channel or iSCSI storage arrays costing tens of thousands of dollars.

    I, for one, am more than happy to put up with a small boot drive (40-60GB) if it's an SSD and move my bulk storage to spinning media. After that experience I now carry a laptop with a 64GB SSD and a 500GB FireWire external drive for bulk data and I couldn't be happier with that setup. I've even made the boot drive (and apps drive) in my workstation a small SSD, with bulk data on spinning media. I can boot this machine in mere seconds and launch half a dozen apps at login and it just doesn't slow down.

    If you haven't used a machine with an SSD in real life, don't knock it until you've tried it.

    It used to be that adding more RAM to a machine was the cheapest way to speed it up as just about all machines used to be (more or less) RAM bound. Now it's IOPS and adding an SSD is the cheapest way to have a more responsive machine. Older machines will potentially benefit even more than a newer machine as the relative speedup can be even greater...

  • by julesh ( 229690 ) on Sunday October 24, 2010 @02:56AM (#34002038)

    I want to know in the real world how long a SSD sitting on a shelf with data will last in general.

    General consensus seems to be about 10 years. This data is out there, so I'm not sure why you're still waiting for it...

  • by TheRaven64 ( 641858 ) on Sunday October 24, 2010 @05:19AM (#34002452) Journal

    You're missing the point. Adding USB wasn't the important factor - removing the other ports was. PCs had USB from a year or two earlier (although only the ones with Windows 95 OSR 2.1 could actually use it), but they also had serial, parallel, and PS/2 ports. If you bought a new PC in 1998, it came with a PS/2 keyboard, a PS/2 mouse, and typically a parallel printer. It also had two USB ports doing nothing.

    This meant that peripheral manufacturers wanting to sell to PC users just kept producing the same old stuff they had been making. Ones wanting to sell to Mac users had to support USB. Once they'd done that, they had a peripheral that also worked with PCs, so it was in their interests to try selling it to PC users as well (tiny marketing cost, potentially a large return). Before 1998, USB stuff in shops was quite rare. After, it was common and for the first year or two most of it used that ugly translucent plastic so that it looked like it was designed for an iMac.

    Apple also, accidentally, did something else that spurred the USB peripheral market - they released the iMac with the worst mouse ever designed (and a pretty crappy, but tolerable, keyboard). This meant that a large proportion of people who bought an iMac wanted to buy a new USB mouse.
