Data Storage Hardware

OCZ Releases First 1TB Laptop SSD

Lucas123 writes "OCZ today released a new line of 2.5-in. solid state drives with up to 1TB of capacity. The new Octane SSD line is based on Indilinx's new Everest flash controller, which allows it to cut boot-up times in half compared with previous SSD models. The new SSD line is also selling for $1.10 to $1.30 per gigabyte of capacity, meaning you can buy the 128GB model for about $166."
  • Didn't they have reliability problems in the past? Am I wrong about that or have they finally fixed it?

    • Re:OCZ (Score:5, Informative)

      by Kjella ( 173770 ) on Thursday October 20, 2011 @05:34PM (#37784558) Homepage

      Supposedly they fixed one BSOD bug [anandtech.com] a few days ago. That wouldn't be with this controller anyway, but their record isn't spotless. Then again, Intel managed an SSD blemish too, so... you're seeing an industry moving at breakneck speed; just make sure yours isn't the one on the line.

      • by blair1q ( 305137 )

        How do these things take to RAID configurations?

        • RAID 0, 1, and 10 are fine; with RAID 5 or 6 you quickly start hitting the RAID controller as the bottleneck. It's a whole lot of CPU grunt to do those parity calculations at a gigabyte or more a second.

          • by blair1q ( 305137 )

            Is there a chipset that does RAID 5 or 6 in HW?

            • It's not that easy: they require reads in order to write. Say you have 5 drives; that's 4 blocks of data and 1 block of parity. The OS has to be able to write out as little as a single block, so you have to read 4 blocks and then write out 5. In reality the stripe size is bigger than one block, and the controller has to handle a pile of these at the same time; all real RAID controllers do this in hardware. Not your el cheapo motherboard built-in RAID, but most of the add-in cards over a couple hundred bucks. If you rea
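
              To make the parity cost concrete, here is a minimal sketch (hypothetical Python, not any real controller's code) of the read-modify-write path a RAID-5 array can take for a single-block update: read the old data and old parity, XOR, then write both back.

              # Hypothetical sketch of a RAID-5 partial-stripe write
              # (read-modify-write). Illustrative only; real controllers do
              # this in hardware and juggle many such updates at once.
              def raid5_new_parity(old_data: bytes, old_parity: bytes,
                                   new_data: bytes) -> bytes:
                  # new_parity = old_parity XOR old_data XOR new_data
                  return bytes(p ^ o ^ n for p, o, n in
                               zip(old_parity, old_data, new_data))

              # Usage: read old_data and old_parity from two member disks,
              # compute the new parity, then write new_data and the new
              # parity back out: two reads plus two writes for what the OS
              # sees as one block write.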

            • by amorsen ( 7485 )

              The best RAID accelerators are sold by Intel and AMD under the brand names "Xeon" and "Opteron"...

              • That would be an expensive RAID controller to dedicate a whole Xeon to acceleration.

                • by amorsen ( 7485 )

                  That would be an expensive RAID controller to dedicate a whole Xeon to acceleration.

                  Would it really? At the mid range an extra Xeon is a few hundred USD; obviously a bit more if you have to go for a server with more sockets. At the low end you just pick a 6-core CPU instead of a 4-core. Neither case seems unreasonably expensive to me. With a bit of luck you save an expansion slot.

          • A stripe would be glorious from a performance perspective. It would only take a few to saturate the bus to the CPU.

        • by Sancho ( 17056 ) *

          I think that TRIM isn't supported in RAID5. Not sure about other RAID levels. I would expect that you'd be able to RAID1 just fine, though.

          • by laffer1 ( 701823 )

            TRIM doesn't work with RAID, period. Most drives can work with RAID 0 or 1.

            • Re:OCZ (Score:5, Interesting)

              by Cyberax ( 705495 ) on Thursday October 20, 2011 @06:42PM (#37785372)

              Not so fast!

              You can use my https://github.com/Cyberax/mdtrim/ [github.com] to periodically TRIM unused space on software RAID-1 on Linux (ext3/4 are supported right now).

              Extending it for RAID-0/5/6 is not hard, but right now I don't have time for this.

            • TRIM doesn't work with RAID, period. Most drives can work with RAID 0 or 1.

              I just installed a pair of SSDs in RAID-1 using Fedora 15. I had searched far and wide to see if TRIM was supported, and didn't find anything conclusive, so I installed anyway, since I'm only using 16GB out of the 64GB drive (which means that even non-TRIM-aware wear-leveling would last 4 times as long). After I had installed, I did some more searching, and somewhere I found info that said the 3.0 kernels (which Fedora 15 uses, although renumbered to keep some apps from breaking) supported it. I didn't b

              • by Cyberax ( 705495 )

                Try my https://github.com/Cyberax/mdtrim/ [github.com]; we use it to periodically TRIM empty space. It works fine with md RAIDs on raw devices, though it won't work with LVM.

                Unfortunately, TRIM on stock md devices does NOT work, even though LVM/dm TRIM works. You won't get any warnings from ext4; TRIM just silently does nothing because md doesn't pass TRIM requests through. There's a thread (dated August of this year) on linux-raid where the md maintainer says that TRIM is not his priority.
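
                As a rough way to check whether discard requests can reach a given device at all, you can read the discard limits the kernel exposes in sysfs. A minimal sketch (the device names are only examples):

                # Rough sketch: does a block device advertise discard (TRIM)
                # support? discard_max_bytes == 0 means discards are not
                # passed down to that device.
                from pathlib import Path

                def supports_discard(dev):
                    path = Path("/sys/block") / dev / "queue" / "discard_max_bytes"
                    try:
                        return int(path.read_text()) > 0
                    except (OSError, ValueError):
                        return False

                for dev in ("sda", "md0"):  # example device names
                    print(dev, supports_discard(dev))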

        • So I'm wondering: why, exactly, do people want to do RAID with SSDs? There are really only two reasons (that I'm aware of) for a RAID. Either you need performance, which the SSD delivers, or you need redundant security for your data. Even in a corporate setting, it seems that running the OS and applications from the SSD while keeping data on RAID would be a good solution. Investing in enough SSDs for RAID arrays seems a bit of a waste.

          Servers, on the other hand, might make good use of SSD's in

          • So I'm wondering: why, exactly, do people want to do RAID with SSDs? There are really only two reasons (that I'm aware of) for a RAID.

            And both are as valid for SSD as they are for spinning disks.

            RAID-0 can turn SSDs from fast to insane (if you have the controller bandwidth). Any of the data-protection RAID levels will help if your drive-with-really-new-and-somewhat-untested-technology dies an untimely death.

          • by kmoser ( 1469707 )
            Another reason: to be able to address the RAID as a single, contiguous (virtual) volume.
          • by Cyberax ( 705495 )

            Keeping the OS on SSDs doesn't make any sense for server applications. You don't need to worry about boot speed, and all your code is usually in RAM anyway.

            However, databases and SSDs are a natural fit. We made our server code more than 10 times faster just by moving it to SSDs. And since we're storing critical data, RAID-1 is a must (backups are the last resort; you inevitably lose some data if you have to restore from a backup).

        • by Luckyo ( 1726890 )

          It's worth noting that if you desperately need more speed, you should look at the Revo line of OCZ's drives. Those are SSDs mounted as a cluster of four separate drives linked to a single RAID 0 controller, all on the same PCI-E card.

          The reason I'm not touching those is that the RAID controller doesn't let TRIM commands through, so after a year or so the drive will slow down significantly. That said, with RAID 0 and a PCI-E interface, it will still smoke any and all SATA solutions.

      • Re:OCZ (Score:5, Informative)

        by iamhassi ( 659463 ) on Thursday October 20, 2011 @06:13PM (#37785068) Journal
        10/17/2011 : After months of end user complaints, SandForce has finally duplicated, verified and provided a fix for the infamous BSOD/disconnect issue that affected SF-2200 based SSDs. [anandtech.com]

        wow, that's not something anyone wants to see: a bug in their hard drive. The CPU I can replace, RAM I can replace... pretty much everything I can swap out, but my hard drive is where everything is stored, and I can't risk losing data because of a bug.

        Intel just had problems too that caused loss of data: [storagereview.com]
        "JULY 13TH, 2011 : Intel has recently acknowledged issues with the new SSD 320 series, where by repetitively power cycling the drives, some may become unresponsive or report an 8MB drive capacity."

        I was waiting on an SSD until they worked out the bugs, and there were no articles about problems for a while, but with stories like these I'll keep waiting; it's just not worth the risk.
        • by FyRE666 ( 263011 )

          Well, my experience was that the issue wasn't fixed. I just returned one of these drives due to lockups, "disappearing drive" issues, and random BSODs. This happened with a Corsair 120GB Force 3 SSD, but I know the OCZ drives are also affected. The issues have been going on for months.

          • by dbraden ( 214956 )

            Were you using the firmware update that was just released three days ago? I'm not being a dick, just genuinely curious.

            • by FyRE666 ( 263011 )

              Well, I had used the firmware available on the same day this article was posted. I returned the drive that day, after the update still failed to fix the issues. I had tried the drive in 2 different machines with different motherboards, and in each case the problems occurred in the same way when the drive was used. There are plenty of other people on the forums who have had the same experience, so basically I have no faith whatsoever in Sandforce controller-based SSDs. I've bought several in the past - my gaming

        • Re: (Score:3, Insightful)

          by Anonymous Coward

          SSDs aren't meant for (or ready to be) a mass storage solution. They are a performance product aimed at system drives and portable devices -- situations where replacing them isn't a big deal data-wise.

          Also, if you're storing data on one drive, you're an idiot no matter what kind of drive you are using.

        • >>wow, that's not something anyone wants to see,

          There's been a LOT of "you lose everything" bugs in the SSD market up till now.

          Buying the latest and greatest is nice, but I like to wait until there's a couple hundred reviews of a product on Newegg before buying (filter at 1-egg level to see what can go wrong...)

          • by A12m0v ( 1315511 )

            My SSD is only for the OS and most used Applications, everything else is stored on a network drive that is mirrored in 3 different machines. If my SSD buys the farm right now I will lose nothing of importance.

            • >>My SSD is only for the OS and most used Applications, everything else is stored on a network drive that is mirrored in 3 different machines. If my SSD buys the farm right now I will lose nothing of importance.

              Me, too, actually, but going through the process of reinstalling Windows and my applications is something I'd rather avoid with a little bit of research.

              One of the SSDs I almost bought had a firmware bug which could very occasionally cause you to lose everything. The problem, though, was that p

          • by tlhIngan ( 30335 )

            There's been a LOT of "you lose everything" bugs in the SSD market up till now.

            There's plenty of "you lose everything" bugs in spinning rust drives as well.

            From Western Digital drive failures that led to them being nicknamed "Western Digicrap", to IBM's Deathstar drives, Seagate's logfile one, etc.

            Every manufacturer has come up with a line of drives that proved to be lemons, and we're talking about very mature technology here - it's been around since the IBM RAMAC.

            SSDs are a game changer in that any idiot w

            • >>There's plenty of "you lose everything" bugs in spinning rust drives as well.

              I can't recall the last time there was a firmware bug in a HDD that caused catastrophic loss of data... I can think of a couple RAID controllers, but nothing baked into the drives themselves. There's a lot of such problems in SSDs.

              I know it's a generalization, but HDDs tend to fail more gracefully (though SMART is by no means foolproof, don't get me wrong) whereas SSD failures are characterized by simply not working one day

              • by tlhIngan ( 30335 )

                I can't recall the last time there was a firmware bug in a HDD that caused catastrophic loss of data... I can think of a couple RAID controllers, but nothing baked into the drives themselves. There's a lot of such problems in SSDs.

                I know it's a generalization, but HDDs tend to fail more gracefully (though SMART is by no means foolproof, don't get me wrong) whereas SSD failures are characterized by simply not working one day.

                Seagate was the most recent one - depending on how you rebooted your PC, it could ju

                • >>If you avoid the cutting edge super-high-performance SSDs, then you really ought to be fine

                  Which is basically what I said. I finally bit the bullet, imaged my drive, cloned it over to an SSD, but only after carefully going through all the 1-Egg reviews on Newegg for the models I was interested in, and avoided the ones that had the catastrophic failure bugs.

                  >>in an SSD, practically the only reason it dies prematurely are due to firmware bugs

                  Well, and when it runs out of writes, I guess. Though

        • wow, that's not something anyone wants to see: a bug in their hard drive. The CPU I can replace, RAM I can replace... pretty much everything I can swap out, but my hard drive is where everything is stored, and I can't risk losing data because of a bug.

          So, you're storing important data on a single drive with no backup? And you expect a typical rotating-magnetic hard drive to last forever and never fail?

          Studies show that, overall, long-term failure rates for SSDs are *lower* than for traditional hard drives.

          • by Luckyo ( 1726890 )

            In the vast majority of cases, a modern hard drive's SMART will start screaming its mechanical lungs out at you long before failure comes. Even if you ignore that, physical problems with the drive will often cause noise that leads you to suspect the drive is failing. The "holy shit, it just died" failures are in the minority.

            On the other hand, an SSD dies like that the vast majority of the time. Add to that the unreliability of the new controllers (not the drive itself, which is what you were talking about), and you get a per

            • I seem to remember that a while ago Google published stats on their hard disk failures, and that SMART wasn't particularly useful in predicting when drives would fail.

              I've personally seen several drives fail with no warning whatsoever, and that's without anything dramatic like a nearby electrical storm to fuse the drive controller. If you don't have your data replicated, then either you don't care about the data (it's easy to recreate) or you will eventually become wiser.
              • by epine ( 68316 )

                Yes, IIRC, the Google report indicated that SMART murmured not a peep in about half of all failures.

              • by Alamais ( 4180 )
                Yeah, I've had drives in my research group fail and lose a bunch of data, and then have their SMART kick in hours/days later while we're trying to recover what's left. Thanks a lot.
              • I seem to remember that a while ago Google published stats on their hard disk failures, and that SMART wasn't particularly useful in predicting when drives would fail.

                I've read that study. The trouble with it is that it doesn't distinguish between types of failure. As far as that study is concerned, a drive that died completely, a drive that developed the odd unreadable sector, and a drive that dropped out of a RAID despite testing fine as an individual disk are all "failures".

            • by Anonymous Coward

              Google's analysis (which someone else mentioned) is correct here -- SMART will generally not trip the overall SMART health status from OK to FAIL. Why doesn't this happen sooner? Because the vendors themselves are the ones who decide the thresholds for when an attribute goes from OK/good to FAILING/bad. On most drives -- consumer-grade and "enterprise-grade" (yes, BOTH!) -- the thresholds are set to the point where the drive will have already experienced so many failures that the end-user notices it as I

              • by Luckyo ( 1726890 )

                You have a good point. I've made it a habit to quickly check some important statuses of my drives through gsmartcontrol every couple of months, mainly the reallocated sector count and the other attributes marked pre-failure. Essentially, the attributes that show a drive is starting to really suffer from wear and tear, meaning I should pay more attention to backing it up and probably replace it sometime soon.

                I tend to do that long before it trips the threshold set on the drive because I know that those are artificially hig
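
                A minimal sketch of that kind of periodic check, using smartctl's attribute table (this assumes smartmontools is installed; attribute names and raw-value formats vary by drive, so treat it as illustrative):

                # Sketch: flag wear-related SMART attributes with non-zero
                # raw values. Assumes smartmontools; attribute names vary.
                import subprocess

                WATCH = {"Reallocated_Sector_Ct",
                         "Current_Pending_Sector",
                         "Offline_Uncorrectable"}

                def check(dev):
                    out = subprocess.run(["smartctl", "-A", dev],
                                         capture_output=True, text=True).stdout
                    for line in out.splitlines():
                        f = line.split()
                        # smartctl -A rows have 10 columns; the attribute
                        # name is f[1] and the raw value is f[9]
                        if len(f) >= 10 and f[1] in WATCH and f[9].isdigit():
                            if int(f[9]) > 0:
                                print(dev, f[1], "raw =", f[9])

                check("/dev/sda")  # example device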

            • SMART hasn't predicted a single one of the HD failures that my client's or I have experienced. SMART is useful, but not as a predictor of failure.

              • by Luckyo ( 1726890 )

                I obviously do not know who "your client" is so I cannot address the issue, but for a knowledgeable home user maintaining his own and his family/friends machines, it certainly does.

                  • I meant to type "clients" (plural, not possessive). Many hundreds of HDs and many dozens of failures over the 15+ years that HDs have had SMART, and only one or two of the failures were predicted by SMART. I'll put my much larger sample, and Google's vastly larger sample, up against your friends'/family's machines any day. SMART is NOT a reliable predictor of HD failure and never has been.

                  • by Luckyo ( 1726890 )

                    And when talking about servers, I will be forced to agree with you. My disks don't come close to those levels of wear and tear.

        • I've been using an OCZ Vertex 3 120GB since April. Throughout that time, their firmware releases have variously resolved certain bluescreen issues, while introducing others. Since installing 2.13, I have had major issues, and just last week was complaining that I would have to return the thing to OCZ and demand my money back. However, I installed 2.15 the day it was released, and all bluescreen issues have stopped.

          Through all that, I have never lost any data. I was rather panicked at first (the drive
          • by dbIII ( 701233 )
            I've been paranoid about mine (syncing to other drives every hour), but apart from the annoyance of successive 120GB models not being the same size, they've been OK so far (Vertex and Vertex 2).
            Setting up a couple of striped disks took three attempts before I got something with good read and write speeds. You know you've got the wrong stripe size when it ends up being noticeably slower than a mechanical disk.
        • I was waiting on an SSD until they worked out the bugs, and there were no articles about problems for a while, but with stories like these I'll keep waiting; it's just not worth the risk.

          You didn't think HDDs have similar dumb BIOS errors [sourceforge.net]?

        • by AmiMoJo ( 196126 )

          I have an older Intel SSD (a 160GB X25-M) and have already had it replaced under warranty once because it ran out of spare sectors. Looking at the spec, it is rated for about 17TB of writes, and the SMART attributes reported 650GB of writes after a few months of use.

          At the time I bought it, the expected lifetime was supposed to be at least 5 years, and I only put the OS and apps on it (data is on a traditional HDD). It looks like I will just have to do another warranty claim next year, and then when the warranty
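
          For what it's worth, the rated figure alone would suggest a much longer life; a back-of-the-envelope sketch (the "few months" is assumed to be four here, and the drive above clearly degraded faster than this arithmetic implies):

          # Sketch: rated write endurance divided by observed write rate.
          # OBSERVED_MONTHS is an assumption; the post only says "a few months".
          RATED_WRITES_GB = 17 * 1024       # ~17TB rated endurance
          OBSERVED_WRITES_GB = 650
          OBSERVED_MONTHS = 4

          gb_per_month = OBSERVED_WRITES_GB / OBSERVED_MONTHS
          years = RATED_WRITES_GB / gb_per_month / 12
          print(round(years, 1), "years at that write rate")  # ~8.9 years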

    • The problems with the Sandforce controller have been fixed. But yeah, OCZ were in the SSD game early and had a few flaky firmwares in the wild. All is good now, though.

    • by rhook ( 943951 )

      These drives are not Sandforce based. Did you even read the article?

      "The drive, based on the new Indilinx Everest controller, includes an "instant on" feature, that reduces boot times over previous OCZ SSDs by 50%."

      • Just when their controller matures a bit they switch to a different one....?

        • by tlhIngan ( 30335 )

          Just when their controller matures a bit they switch to a different one....?

          Of course, because for some reason, these guys are making SSDs that push the performance envelope. There seems to be a big vacuum of "slower but stable" drives as everyone is racing to try to get 1GB/sec transfer speeds.

          Then again, Apple probably is soaking up all the "slow, solid, reliable" SSDs. Plenty of computers and plenty of users who can get by with a slower drive that still does 150MB/sec - most of the speedup comes from the

    • by kolbe ( 320366 )

      OCZ's reliability record is no different from that of any other data storage manufacturer, past or present.

      Seagate's recent 1TB woes: ST31000340AS [tomshardware.com]
      Western Digital's recent woes: Caviar Green EARS 1.0TB and 1.5TB SATA [newegg.com]

      Going further back, anyone who's been in IT for a decade or longer recalls the old Micropolis 9GB drive failures that sent the company into bankruptcy. In any case, OCZ is a relatively good company and a notable innovator in SSD technology, and I personally find most of their products to be ju

      • by Kjella ( 173770 )

        You jumped from 1TB to 9GB but forgot the biggest one in between. The IBM DeskStar, aka DeathStar, was a huge scandal and probably led to them selling off the division.

        • by kolbe ( 320366 )

          We all survived the 18GB Deathstar and avoided Fujitsu's sudden death syndrome, which further proves that most manufacturers have had their fair share of failures at one time or another. The only ones I cannot recall any fault with were Quantum and, later, Maxtor drives. I loved my Fireball and Atlas drives!

          • Maxtor had so many QA issues in their latter days that it contributed greatly to the purchase price Seagate got when they bought them. Of course, CEO Bill Watkins promptly moved a huge chunk of Seagate's drive operations to the same plant that had caused issues for Maxtor (as a cost-cutting measure, it was presumed) and denied the 'bricking' issues that resulted for the AS and ES series drives for at least 9 months... until the Seagate Board of Directors fired him [theregister.co.uk] and replaced him with the former CEO from 200
          • by amorsen ( 7485 )

            Quantum had their share of troubles. The Bigfoot drives would once in a while need a good kick to the side to get going again.

        • by mikael ( 484 )

          I remember that - it happened to my workstation.

            Then my laptop hard disk drive (a TravelStar) fried. I had to do the recommended trick of putting the drive in a freezer bag, chilling it, and giving it a good whack on power-up. Managed to get one final incremental backup before it went up to the great server rack in the sky.

        • Not to mention Maxtor's 20GB and 40GB drives. Out of about 20 of those bought over a period of a year (so not just one bad batch), none lasted more than two years and most died in under one.
      • by haruchai ( 17472 )
        Micropolis!! Haven't heard that name in 15 years! I sure hope OCZ doesn't suffer the same fate.
        • by Anonymous Coward

          Miniscribe! Conner! Feel old yet?

          • by haruchai ( 17472 )
            Yeah, I'm pretty old, and I do remember Conner, the Compaq spinoff. Y'know, I might even have a box of Dysan double-sided, double-density 5.25" floppies kicking around. Wonder if the (worthless) data is still readable
      • I had a few 20GB IBM Death/DeskStar drives when they came out... raid-0, fast as sin at the time.. dead in 2 months, both of them.
      • by Luckyo ( 1726890 )

        For the record, Seagate's 7200.11 was the only line in that family with a buggy controller. I have two 7200.7 drives that have clocked over 50,000 hours of uptime in RAID 0 as an OS drive pair with no issues (and, hilariously, one RAID controller failure), and I have several 7200.12 drives (including the warranty replacement for the 7200.11 with the controller bug mentioned on TH.com). No problems.

        The problem with OCZ is that they do not have a proper solution for their clients' woes, and they don't send

    • I ran an OCZ 60GB SSD in my desktop until I ditched it for a laptop. It fully KICKS ASS. I did Windows XP and Windows 7 startup comparisons (that seemed to be the easiest 'controlled' test). The SSD with a clean/fresh/fully patched install of WinXP SP3 or Win7 (not SP1 at the time) was four times faster than a conventional drive. And that includes the POST.

      To be more specific, my test was 'power on to an open Windows Explorer displaying a network drive directory'.

      On a traditional drive (200Gb Seagate Ba
    • I think just about everyone had reliability problems with Sandforce-based drives, but as OCZ was one of Sandforce's closest partners, they generally released drives before others did, often with older firmware containing more bugs.

      It does seem their QA process is not especially robust, though, looking at their track record. Wherever the problem lies, you would think they would catch it and delay shipment until it's fixed (especially given how widespread the issues were).

    • Largest-capacity milestones are presented in the form of an SSD before good old magnetic HDDs?
    • Annual sales (in $$) of SSDs tops that of magnetic HDDs?
    • Price/TB for SSDs drops below that of magnetic HDDs?
    • Major OEM builder announces they're removing magnetic HDDs as an option?
    • Well-known manufacturers do the same, turning magnetic HDDs into a niche product only produced by 1 or 2 specialist companies?
    • Most retailers drop magnetic HDDs from their inventory?
    • Dad, what's a magnetic HDD?
    • by kolbe ( 320366 )

      SSDs must meet or surpass all of the categories you mention, plus overall capacity limits, before magnetic HDDs go the way of the floppy disk drive. Even then, look at how long it took to get rid of the floppy disk drive:

      Beginning of the end for the floppy disk drive: 1998, when Apple's iMac shipped with a CD-ROM drive but no floppy drive
      End of the floppy disk drive: 2009, when Hewlett-Packard, the last supplier, stopped supplying floppy disk drives on all systems

      It could be stated that the HDD is more entwined in technology than the FDD was a

      • Comment removed based on user account deletion
        • 1. When the cost per GB becomes cheaper than what's available in an HDD.

          Heck, I'd probably pay $200 for a 500GB SSD for my laptop. That's way more than a hard drive, but I'd get something more for it. That's forty cents a GB; these guys want triple that currently.

          2. A proven track record of reliability.

          Yeah, I'm only using SSDs for caches now because of their failure mode, and write caches get mirrored. If OCZ wants to make that 500GB drive a 750GB drive and put four controllers onboard with a SATA multilane int

          • Comment removed based on user account deletion
            • I just love booting my PC in 6 seconds, going to my photo manager, and, before you can blink an eye, scrolling through thousands of photos as thumbnails, seeing every one no matter how fast I scroll or what size the pics are. Changing zoom is instantaneous! I have sold 5 SSDs to friends now just based on that!

              I never see an IOwait anymore. Truly magical!

            • That's sick (awesome, maybe?). Now my laptop's bottleneck is the bloody processor.

              Awesome - in my opinion, that's where you always want to be.

          • Heck, I'd probably pay $200 for a 500GB SSD for my laptop

            Laptops aren't the only place people use disks. I've bought two new machines this year. My laptop has a 256GB SSD, and the difference it makes to performance is amazing; I'd hate to go back to using a hard disk. The other machine is a NAS. It has three 2TB disks in a RAID-Z configuration. I am mostly going to be accessing it over WiFi, so as long as the disks can manage 2MB/s I really don't care about performance. I care a lot about capacity, though, because it's having things like snapshotted ba

            • Buy a cheap SSD, partition it, and add part of it as a cache device to your RAID-Z; the L2ARC will be kept fresh and you'll make the cost back in electricity. If you can buy two SLC drives (~$115 each), make a mirror of two partitions and add that as a log device to save on the write side too (but don't trust a single SSD as a ZIL, and MLC probably isn't great for constant writes).

              I have more data on my laptop hard drive than is economically feasible for an SSD right now. Two more years, I figure. For now

              • I have an 8GB USB flash stick working as L2ARC, but even without it I never notice the hard disks being slow. The main advantage is that the disks can spin down completely after filling up the ARC and L2ARC when I've got a nice predictable read-only workload (e.g. watching a movie). Given that the drives consume a total of 15W in normal operation, and about 3W when idle, I doubt that I'd save enough for it to be worthwhile with a better SSD...
                • Yeah, it depends on your workload. IIRC I worked out that a 24/7-busy 5-disk RAID-Z2 costs about $9/mo in electricity here (counting cooling, 40% or so). If Sun's math is to be believed, an SSD L2ARC and ZIL get that down to $2/mo or so.

                  If your disks are mostly idle most of the time, I can't see how the payback would be fast enough. I don't know of any reliable way to figure the carbon load.
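
                  The arithmetic behind that kind of estimate is simple enough to sketch; the wattage and electricity price below are assumptions, not measurements:

                  # Back-of-the-envelope power cost for an always-on array.
                  # All inputs are assumptions for illustration only.
                  DISKS = 5
                  WATTS_PER_DISK = 8.0   # assumed active power per disk
                  COOLING = 1.4          # +40% for cooling, as above
                  USD_PER_KWH = 0.20     # assumed electricity price

                  watts = DISKS * WATTS_PER_DISK * COOLING
                  kwh_month = watts * 24 * 30 / 1000
                  print(round(kwh_month * USD_PER_KWH, 2), "USD/month")  # ~8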

                  • It absolutely makes sense for a busy server, but not really for a home NAS. Adding a SSD for the ZIL might make sense though, because Time Machine backups tend to be about 20MB and they wake up the disks.
        • When the cost per GB becomes cheaper than what's available in an HDD.

          For laptops, it happens once SSDs get cheap enough, at large enough capacities, that you don't feel like you're trying to live with a gold-plated, shoebox-sized amount of storage.

          By a lot of metrics we've hit that point, with 128GB SSDs available for about $1.50/GB. That's plenty for most machines, and the responsiveness puts them far ahead of the competition.

          Once it drops below $1/GB and starts heading down to about $0.50/GB, you
        • by tlhIngan ( 30335 )

          1. When the cost per GB becomes cheaper than what's available in an HDD.

          That won't happen for a LONG time.

          SSDs only get cheaper in line with Moore's law (the number of transistors limits the size of the flash chips).

          Hard drives, it seems, grow in capacity faster than Moore's law. At some point in the future they'll slow down, but that's a huge gap to catch up to.

      • It could be stated that the HDD is more entwined in technology than the FDD was, and so it may be well more than 11 years before we see magnetic HDDs disappear from the consumer marketplace.

        No. If price and capacity are similar, SSD is a complete replacement for HDD. They use the same form factors and same interfaces. FDD survived so long because of existing removable media. CD/DVD drives will persist for the same reason. HDDs have no backward compatibility issues to maintain other than form factor and interface. If/when SSDs approach the price and capacity of HDD, the HDD will die rapidly because the SSD has huge advantages in latency and portable durability, plus advantages in power, weight,

    • I think it will be when the SSD's price per GB/TB drops below the HDD's. That, I believe, will be the biggest milestone, the one that paves the way for the rest by making them inexpensive. I will only switch over when I feel they are extremely reliable, which would be about the time any one of the many options above is true.
      • SSDs are better in enough other respects that I'll bet they take over before they're the rock-bottom $/MB choice. CRT TVs and monitors exited the market, or at least went into a very sharp downturn, even while they were still somewhat cheaper than same-sized LCDs.
      • by jbolden ( 176878 )

        The cost per gig is unlikely to ever get below the HDD's cost. The multiplier used to be around 20x; maybe it is down to about 15x two years later. So if we assume something like the multiplier halving every 4 years, you still need another dozen years before SSD prices are even close, and I don't think the HDD has that long.

        At this point most people experience essentially unlimited storage. Generally the switches (14 in -> 8 in -> 5 1/4 -> 3 1/2...) all happened when the larger drives still had better performance bu
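
        The projection above is easy to reproduce; a sketch using its assumptions (a roughly 15x price multiplier today, halving every four years):

        # Sketch of the projection above: the SSD/HDD price multiplier
        # halving every four years, starting from roughly 15x.
        multiplier, years = 15.0, 0
        while multiplier > 1.0:
            multiplier /= 2
            years += 4
        print(years, "years to rough $/GB parity")  # 16 with these inputs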

    • Interesting list. For an even easier goal, it would be nice to see when most high-end laptops ($1500+) will start to come with an SSD. Only a very select few models currently do.
  • WD and Seagate hard drive prices are going through the roof and will keep going up over the months to come; SSDs will be the way to go as energy costs rise... time for the spinning crap to leave anyway.

    • by Luckyo ( 1726890 )

      What makes you think that energy costs will go up? It's very country-dependent, and at least where I live, electricity is far more likely to go down in price once the next nuclear power plant is built in a couple of years.

  • If/when SSDs get to the price point of mechanical drives, I'll get one. Not worth it now.
  • I was wondering how long this SSD will last in typical use. Let's take the 128GB unit; that's 128GB x 10,000 write cycles.
    Some numbers for my system: I've got 4GB of RAM, and at the moment it's using 1.5GB of swap. Let's say the swap gets overwritten once a day. I hibernate twice a day. New data won't add too much; Time Machine backs up maybe 1GB in a week.
    In total, about 10GB of writing per day. That's roughly 128,000 days; not bad.

    Worst case would be rewriting the entire SSD each day; that's still 5,000-10,000 days. Still good
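
    The same estimate as a quick sketch (the 10,000 program/erase cycles is the assumption above; real drives vary, and write amplification would lower the result):

    # Sketch of the endurance estimate above. Cycle count and daily write
    # volume are the poster's assumptions; write amplification is ignored.
    CAPACITY_GB = 128
    PE_CYCLES = 10_000
    WRITES_GB_PER_DAY = 10    # swap + hibernation + backups, per the estimate

    total_gb = CAPACITY_GB * PE_CYCLES             # 1,280,000 GB of writes
    print(total_gb / WRITES_GB_PER_DAY, "days")    # 128,000 days
    print(PE_CYCLES, "days if the whole drive is rewritten daily")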

  • I know I'm supposed to care that an SSD is unreliable, but the truth is you have to back up everything anyway because hard drives aren't reliable either. I have a server with conventional drives in a RAID array for data security; I want my main machine to fly... and an SSD lets you do that. I just wish it didn't cost so much.
