
Intel Replaces Consumer SSD Line, Nixes SLC-SSD

Lucas123 writes "Intel today launched a line of consumer solid state drives that replaces the industry's best-selling X25-M line. The new 320 series SSD doubles the top capacity over the X25-M drives to 600GB, doubles sequential write speeds, and drops the price by as much as 30%, or $100, on some models. Intel also revealed its consumer SSDs have been outselling its enterprise-class SSDs in data centers, so it plans to drop its series of single-level cell NAND flash SSDs and create a new series of SSDs based on multi-level cell NAND for servers and storage arrays. Unlike its last SSD launch, which saw Intel use Marvell's controller, the company said it stuck with its own controller technology with this series."
  • Generations (Score:5, Interesting)

    by DarkXale ( 1771414 ) on Monday March 28, 2011 @12:47PM (#35642166)
    The 320 series isn't quite as impressive over the X25-M G2 series as I had originally hoped, so will likely be quite some time before I bother replacing the current one (and move that into the laptop instead).
    Still, an update has been due for a long time now; the X25-M G2 is ancient in SSD terms. Just hope the new controller is as reliable as the Intel one found in the old drives.
    • by dc29A ( 636871 ) *

      Same controller.

  • Don't like this (Score:4, Insightful)

    by XanC ( 644172 ) on Monday March 28, 2011 @12:52PM (#35642238)

    MLC the only option on a server? For high-transaction databases, I don't see how it will work.

    • by Chas ( 5144 ) on Monday March 28, 2011 @12:59PM (#35642336) Homepage Journal

      Seriously. Any sort of enterprise-level deployment should be swearing off these things as a storage medium, then. Well, maybe keep one for a boot drive. But anything with massive amounts of writes should be kept as far away from an MLC drive as possible.

      • Comment removed based on user account deletion
        • by XanC ( 644172 ) on Monday March 28, 2011 @01:39PM (#35642952)

          Because SLCs survive for two orders of magnitude more writes than MLCs.

          • by arth1 ( 260657 )

            Also, each cell write is much faster (because it can be "sloppier" with only two states per cell), which greatly affects random write speeds even if the speeds are the same for sequential writes.
            And random writes are often a bottleneck in master databases.

            • by bertok ( 226922 )

              Also, each cell write is much faster (because it can be "sloppier" with only two states per cell), which greatly affects random write speeds even if the speeds are the same for sequential writes.
              And random writes are often a bottleneck in master databases.

              I hear this come up every time even though existing SSDs, both MLC and SLC, already run circles around hard drives for both random read and random write performance.

              I have an old SSD in my laptop that can outperform a very expensive SAN array for database workloads -- I've tested it with the same database and the same query side-by-side with the 16-core production database server with a 48-spindle LUN behind it, and my laptop won every time.

              Stop quoting stuff that was barely true for some first-generation d

              • by arth1 ( 260657 )

                I hear this come up every time even though existing SSDs, both MLC and SLC, already run circles around hard drives for both random read and random write performance.

                That's rather irrelevant when you compare two SSDs, though? Who said anything about HDDs?

              (And besides, it's not true. The average speed is far higher for SSDs, but a short-stroked HDD has a much better worst-case random write speed than any SSD currently on the market. Yes, really. And for some uses, that's what matters. But again, that's not relevant, because we're talking about SLC vs MLC drives here.)

            • by jon3k ( 691256 )
              One thing to consider is the price per capacity and how that affects performance. You can get Intel MLC based SSDs for about $2.15/GB and SLC based SSDs for about $11.70/GB. That can translate into 4x the number of drives at the same capacity, which is four times the controllers working together in a storage system. Then you can front end that with large write caches and you _MIGHT_ end up coming out ahead in performance.
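
              A rough back-of-the-envelope sketch of that tradeoff (Python; the $/GB figures are the parent's, while the budget and drive size are made up purely for illustration):

```python
# Back-of-the-envelope version of the parent's point. The $/GB figures
# are the parent's; BUDGET and DRIVE_GB are hypothetical illustration values.

def drives_for_budget(budget_usd, price_per_gb, drive_gb):
    """How many whole drives of a given size a fixed budget buys."""
    return int(budget_usd // (price_per_gb * drive_gb))

BUDGET = 10_000   # hypothetical storage budget, USD
DRIVE_GB = 160    # hypothetical per-drive capacity

mlc = drives_for_budget(BUDGET, 2.15, DRIVE_GB)    # -> 29 drives
slc = drives_for_budget(BUDGET, 11.70, DRIVE_GB)   # -> 5 drives
print(f"MLC: {mlc} drives, SLC: {slc} drives ({mlc / slc:.1f}x as many)")
```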
          • Because SLCs survive for two orders of magnitude more writes than MLCs.

            I don't work with this sort of stuff, but does that matter? If MLCs have other advantages, then what's the problem with chucking them out and replacing them when they wear out?

            • Because two orders of magnitude is the difference in price between a Honda Civic and a Lamborghini Gallardo.
              • by fnj ( 64210 )

                More to the point, it's the difference in life between one month and 8 years.

              • by ebuck ( 585470 )

                Because two orders of magnitude is the difference in price between a Honda Civic and a Lamborghini Gallardo.

                It's closer to the difference between the top speed of a Honda Civic (117 MPH) and something that can cross the Atlantic in 29 minutes.

          • No. Two orders of magnitude is 100x. Good SLC vs good MLC is 10x, only a single order of magnitude longer lasting.

            http://www.anandtech.com/show/2614/4 [anandtech.com]

            What you forget is that MLC is about half the price of SLC, so you can get 2x the space for the same money. With wear leveling, extra space is extra lifespan, so MLC dies 5x faster than SLC.
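
            A minimal sketch of that wear-leveling arithmetic (Python; the 100k/10k cycle ratings are the commonly cited SLC/MLC figures, and the capacities simply encode the twice-the-space-for-the-money claim):

```python
# Sketch of the wear-leveling argument: total bytes writable over a
# drive's life = capacity x rated cycles, assuming ideal wear leveling.
# Cycle ratings (100k SLC / 10k MLC) and the 2x-space-for-the-money
# ratio are the figures cited in this thread.

def lifetime_writes_gb(capacity_gb, cycles_per_cell):
    return capacity_gb * cycles_per_cell

slc = lifetime_writes_gb(80, 100_000)    # e.g. 80GB of SLC for price X
mlc = lifetime_writes_gb(160, 10_000)    # 160GB of MLC for the same X

print(f"SLC: {slc / 1e6:.1f} PB, MLC: {mlc / 1e6:.1f} PB")
print(f"SLC outlasts MLC by {slc / mlc:.0f}x at the same write rate")   # 5x
```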

            What does that mean for you? I put my money (job) where my mouth is. Our reasonably high traffic OLTP database server uses Intel SSDs as filesystem-level write cache. We get an av

            • Doubling lifespan that way requires that you only use half the disk capacity.

              I have burned out a Major Name Brand SLC SSD with a high traffic OLTP DB in eight months. I have heard the same from Large Internet Companies which tested these for internal use. There are ongoing independent reliability expert studies in FAST, HOTDEP, other conferences which are uniformly highly skeptical of vendors' claims on SSD lifetime.

              If you have not actually tested the drive out to six years service, run an accelerated pi

              • I have burned out a Major Name Brand SLC SSD with a high traffic OLTP DB in eight months.

                Why tip-toe around this? Are you talking about Intel or not? If not, it's not really relevant here because this is about Intel, and I think most people agree that Intel is generally a bit more respected for being a better-tested product with a bit more truth behind their numbers. If you ARE talking about Intel, then I think that's pretty important to know.

              • Comment removed based on user account deletion
    • Well, they DO mention that they tripled the sequential write speed, so it could be that the MLC is now competitive, speed-wise, with SLC. High-transaction databases are the bane of storage devices as it is; you're probably best off going with a large RAM cache, both read and write, if that's what you're doing. Enough cache and the right database system and you can turn random writes into what are effectively sequential writes, improving performance that way.
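
      To make the "turn random writes into sequential ones" idea concrete, here's a toy sketch (Python; the class and on-disk format are invented for illustration, in the spirit of log-structured storage engines, not any specific database):

```python
import os

class LogStructuredWriter:
    """Absorb random page writes in RAM; flush them as one sequential append."""

    def __init__(self, path, flush_threshold=1024):
        self.log = open(path, "ab")
        self.pending = {}                  # page_id -> latest contents
        self.flush_threshold = flush_threshold

    def write_page(self, page_id: int, data: bytes):
        self.pending[page_id] = data       # random writes land in the cache
        if len(self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # One big sequential append instead of many scattered in-place writes.
        for page_id, data in sorted(self.pending.items()):
            self.log.write(page_id.to_bytes(8, "big"))
            self.log.write(len(data).to_bytes(4, "big"))
            self.log.write(data)
        self.log.flush()
        os.fsync(self.log.fileno())        # one durability point per batch
        self.pending.clear()
```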
      • by arth1 ( 260657 )

        Well, they DO mention that they tripled the sequential write speed

        But sequential write speed is rarely a bottleneck - random access small writes are. And those tend to be much worse for MLC than SLC.

          • I know that random writes are a problem - which is why I mentioned things like 'enough cache' and 'right database system' to turn those random writes into sequential ones. It's expensive in terms of storage capacity (your DB will have to be bigger), but if MLC is 'enough' cheaper than SLC, you just buy the additional storage. Plus, with modern MLC and wear-leveling you're looking at years at the drive's maximum write speed to start wearing out the cells. If MLC is around an order of magnitude cheaper than
          • by afidel ( 530433 )
            Huh, with write amplification you can wear out an MLC drive in a matter of months at a small fraction of their write speed. This is why I shelled out the money for FusionIO's SLC-based cards; estimated life based on data from our existing SAN is ~5 years, which means we should be good for our planned replacement time of 3.5-4 years (our current servers, which are to be retired in a few days, are 4.5 years old but have been in production use for just over 4).
            • Hmm... Going by the Intel X25-V,

              35MB write speed, 40GB. 1,143 seconds to write completely. 19 minutes. Research gives a number of write cycles of ~10k. 132 days, about 4 months. I'll note that this is pretty much writing 100% of the time, which should make up for 'write amplification'.

              Get the 160GB model and you get 100MB/s over 160GB. 1,600 seconds for an overwrite.
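
              Redoing that arithmetic in one place (Python; the speeds and capacities are the figures above, ~10k cycles is the commonly cited MLC rating, and write amplification is ignored):

```python
# The wear-out arithmetic: days of writing flat-out before hitting
# the rated cycle count. Speeds and capacities are the figures quoted
# upthread; ~10k cycles is the commonly cited MLC rating; write
# amplification is ignored here.

def days_to_wear_out(capacity_gb, write_mb_s, cycles=10_000):
    seconds_per_full_overwrite = capacity_gb * 1000 / write_mb_s
    return cycles * seconds_per_full_overwrite / 86_400

print(f"40GB @ 35MB/s:   {days_to_wear_out(40, 35):.0f} days")     # ~132
print(f"160GB @ 100MB/s: {days_to_wear_out(160, 100):.0f} days")   # ~185
```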

              I guess it depends on the duty cycle. A 32GB SLC runs about the same price as a 160GB MLC. Around 5 times the price per gig. If y

              • by arth1 ( 260657 )

                35MB write speed, 40GB. 1,143 seconds to write completely. 19 minutes. Research gives a number of write cycles of ~10k. 132 days, about 4 months. I'll note that this is pretty much writing 100% of the time, which should make up for 'write amplification'.

                Except that that is only true if every write gets deleted so garbage collection can work at full efficiency. In reality, you fill large parts of the disk and then have a smaller empty area to work in. As it gets worn, the disk has to use wear leveling to write to other already-used sectors instead, and that means at least three writes for every block, which both slows the drive down and shortens the overall lifespan, in exchange for not wearing any single block out prematurely.
                For a consumer drive, it
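
                To put the read-modify-write effect in numbers, a toy calculation (Python; the 3x factor corresponds to the "at least three writes for every block" above, the other factors are hypothetical, and the baseline comes from the flat-out-writing arithmetic upthread):

```python
# Toy version of write amplification: if a fragmented drive costs roughly
# three physical writes per logical block write, rated endurance is
# consumed about three times faster than the logical write rate suggests.

def lifespan_days(baseline_days, write_amplification):
    return baseline_days / write_amplification

BASELINE = 132   # days at 100% duty cycle, from the arithmetic upthread
for wa in (1.0, 1.5, 3.0):
    print(f"WA = {wa}: {lifespan_days(BASELINE, wa):.0f} days")
```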

                • Consumers are having to decide between Flash and a hard drive, where the HD runs about 1/20th the cost per gig of even an MLC SSD. Yes, it's tough.

                  As for garbage collection and deletions, did you miss my point about using some of the savings to buy a bigger drive? That's where I get the extra performance from. Not to mention that in the GP I essentially mentioned using a 'smart' database that knows what it's using to help reduce the number of write cycles and wear leveling needed.

                  Look, I won't argue

    • by CAIMLAS ( 41445 )

      Why would it be a problem? I haven't looked at the specs on these yet, but some of the SandForce-based MLCs have MTBFs of a million years and can handle something like 100 years of constant writing.

      • by afidel ( 530433 )
        BS, give them a random 4K write workload at 10% of their capability and they will burn out in a few months max; MLC cells are only rated for ~10k writes.
  • by sandytaru ( 1158959 ) on Monday March 28, 2011 @12:54PM (#35642262) Journal
    I'm not going to run out and replace my $100 2TB external backup with one of these any time soon. However, I've been tempted to snag a small 40 gig model and use that as my OS drive, and use my existing internal 1TB HDD for the actual data. I think the article is right, in that the price per gig needs to hit $1 before you start seeing acceptance for mass storage solutions from consumers. 95% of users can't tell the difference between a 5600 RPM HDD and a 10,000 RPM one, so they won't care about SSD speeds that much either.
    • by CastrTroy ( 595695 ) on Monday March 28, 2011 @01:07PM (#35642468)
      Maybe the reason users can't tell the difference between 5600 (5400??) RPM and 10,000 RPM is because for the most part what is slowing things down is the seek latency. In those two drives, the seek latency is going to be about 12 ms and 7 ms respectively, which, you're right, the user probably won't notice. But a solid state drive will give you a seek time of about 0.1 ms, which will make a huge difference in many situations. Most users will probably notice a change like this because seek time is probably what is slowing down the computer most of the time.
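
      The rotational half of that latency is easy to work out - on average the platter must spin half a revolution (Python; this ignores head-seek time, which is why the 12 ms/7 ms totals above are higher):

```python
# Average rotational latency is the time for half a revolution. This is
# only the rotational component; head-seek time comes on top.

def avg_rotational_latency_ms(rpm):
    return 0.5 * 60_000 / rpm   # half a revolution, in milliseconds

for rpm in (5400, 7200, 10_000, 15_000):
    print(f"{rpm:>6} RPM: {avg_rotational_latency_ms(rpm):.2f} ms")
# ~5.6 ms at 5400 RPM vs ~3.0 ms at 10k -- against ~0.1 ms for an SSD.
```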
      • In my experience users can see the difference from just 4500 to 5400 rpm laptop drives. And that's not much of a jump at all.

        From 5400 to 7200 RPM, a Windows PC is quite substantially more responsive. If you don't feel the difference with a 15k RPM enterprise drive your problem may simply be that the drive was already fast enough for the workload you're giving it.

        That said, for intense database applications, every moment counts.

    • Comment removed (Score:4, Interesting)

      by account_deleted ( 4530225 ) on Monday March 28, 2011 @01:09PM (#35642500)
      Comment removed based on user account deletion
      • I've been preaching the max-RAM option to people who are planning on new systems since 2003, when I was able to see the difference it made using Gentoo Linux. The testing method was a bit simplistic, but as it involved bootstrapping the system, the difference in time required with 512MB compared to 1GB was impressive and convinced me at that time to install the most memory I could afford.

        What I find funny now is people are spending their money on High Performance Gaming RAM when actual benchmarks show no improv

        • What I find funny now is people are spending their money on High Performance Gaming RAM when actual benchmarks show no improvement in performance for the same amount of memory.

          Worst of all are the ones with the fancy heatsinks. Add those and the price goes up by 50%.

          • There's also the RAM with the blinky lights on it :)

        • Comment removed based on user account deletion
      • by lgw ( 121541 )

        I turn off my gaming rig when I'm not using it - it's a bit of a power hog even when idle. An SSD reduced the power-switch-to-usable delay to 1/3 of what it was with a fast HDD, especially reducing the time spent grinding from logon to all services started and actually ready.

        Storage performance in general isn't very noticeable on a running gaming system, as other delays tend to dominate (especially on DRM-infested games that need to phone home), but the boot-up time reduction was worth it for me. I only

      • This is why I went with a RAID0 array of 10krpm drives for my gaming machine in early/mid 2009. I get most of the speed and way more capacity (600GB) for a much lower price.

        Early last year I bought a laptop, which as usual came with a hard drive that was too slow, so I was going to get a replacement hard drive at the same time. I figured I needed at least 64GB of storage. An SSD was still way too expensive, so I went with a 160GB 7200RPM drive that only cost me $100 and doesn't make me wait too long for anythin

        • Re: (Score:2, Informative)

          by Anonymous Coward

          I used to have two regular HDs in RAID 0; now I have two SSDs in RAID 0. There is simply no comparison. SSDs absolutely blow away traditional hard drives. It's not about the MB/sec... it's about the I/Os per second, and in this sense SSDs are about 70-100 times faster than traditional disks. Photoshop, Office, Firefox, everything opens instantly. I can even open 5 programs at once and they still all open instantly. This can all be done with 4GB of ram too, no need to buy more memory to make up for

      • RAM is cheap

        Checking one of my favorite suppliers (there are probably cheaper ones out there)

        RAM is about £10 per gig, and if you want to go beyond 16GB it's time to bend over and pay a lot more for your CPU/MB to get support for that extra RAM.
        SSD is about £1.50 per gig
        HDD is about 10p per gig

        Superfetch sounds great if you have a regular schedule, switching between programs at known times every day; for those whose usage patterns aren't so consistent it doesn't seem so useful (still mostly on XP myself, won

      • by PRMan ( 959735 ) on Monday March 28, 2011 @02:09PM (#35643316)

        I already have 8GB on my home server and that makes very little difference from 4GB since it sits idle at ~4GB most of the time. But the SSD made a world of difference. A 2-3 minute boot became 25 seconds. A 1+ minute shutdown became about 5 seconds. I don't worry about reboots anymore, because it's around 30 seconds total (instead of 5 minutes)! Game cutscenes are almost instantly skippable (within 2-3 seconds), if they allow it. EVERY program loads instantly. Installs take mere seconds (even OpenOffice or Office 2007).

        BTW, my RAM maxes out at 24 GB on this board, but if you told me the 24GB would help more than the 64GB SSD (about $90), you would be doing me a horrific disservice.

        • Right on. I picked up a 60GB SSD for 100 bucks back in August. It's not even a new SandForce model, and it's the single best upgrade I've ever seen on a computer. HUGE difference.
        • You're not reading the parent post correctly. He's suggesting that you don't power off the machine anymore. That's when you get big gains with Windows 7 and a big wad of RAM. Obviously, when you reboot, his solution doesn't work anymore.

      • by Ndkchk ( 893797 ) on Monday March 28, 2011 @02:39PM (#35643646)
        SSDs affect other things besides just speed. I put one in my netbook and battery life went from six hours to eight - and it boots in fifteen seconds and starts programs almost instantly. The difference in power consumption matters less in a bigger laptop, but it would still help. I also don't see why you're talking about an SSD and a 2TB drive as a binary choice. The "average user" doesn't need 2TB; they already have enough space with the ~500GB that came with their Dell. They could get an SSD, keep the hard drive they already have, get someone to move the Windows install, and have the best of both worlds.
      • hybrid sleep makes shut downs kinda pointless

        Why? According to the wikipedia article, hybrid sleep puts the machine in 'standby' mode. Isn't that just another term for sleep.. where the computer is still using _some_ more power than turned off?

        • Comment removed based on user account deletion
          • I don't use Windows.. but ok.. "while keeping just the used pages alive in RAM". So that _is_ using more power than shutting down completely.

            Yes, a tiny bit, and yes, probably tiny enough that I would use it... But if I'm done using the computer for the day, or even more than a few hours in most cases, I'll shut down completely.

            BTW, I think you can turn this kind of functionality on in a Mac via a defaults write workaround.

      • Caching solutions are always poor. No system is smart enough to cache everything, and there's a cost to caching - misses, reading the first time, etc. - that produces lag and the characteristic disk churn of mechanical drives.

        I find in everyday usage, most users are disk bound. CPU and RAM are just sitting around waiting for the disk. I've only put in 3 SSDs and the difference is night and day. The low seek times and transfer speeds make the computer feel completely different. Once Joe Average gets to see one o

      • That is why I've been telling my regular customers and those wanting new builds to max out on RAM first, and then, if they still have money to blow after getting the rest of their wish list, get an SSD for an OS drive; frankly, if their choice is RAM or SSD I'd always advise the most RAM, as it'll get more use. How does 24 GB of RAM help if your system uses 2 GB of it? Windows is not going to cache all of your commonly used files in that RAM. Maybe it does if you use Superfetch; I've not tried it.
        I woul
      • I have maximized my RAM for many years now. Since I got an SSD a few years back, I stopped buying the max number of gigs for any new computer - probably about half - and put the savings toward the SSD.

        It's much faster. The biggest improvement is no longer waiting for the disk to spin up to speed as in a normal computer; in a laptop, this is multiplied by putting it to sleep much more often (closing the lid). Superfetch won't do anything there. Startups are MUCH faster too. (But then I don't start up as much as I used to.)

        I also do a lot of browsin

    • If you think it's too pricey for mass storage, just pretend that it's 2005.

    • by pz ( 113803 )

      I've been tempted to snag a small 40 gig model and use that as my OS drive, and use my existing internal 1TB HDD for the actual data.

      I've been doing exactly that for about a year now and highly recommend it. Use 3 (or 4) 1TB HDDs in a RAID for your non-OS storage and you'll add some fault tolerance as well as speed.

    • by dc29A ( 636871 ) * on Monday March 28, 2011 @01:15PM (#35642620)

      I'm not going to run out and replace my $100 2TB external backup with one of these any time soon.

      I am not going to run out and replace my minivan that I use to ferry my four kids and wife with a two seater sports car any time soon!

      • I'd say small high-speed drives are the 2-seater sports cars of storage. No major hauling capacity but you can still fit a decent bit of stuff inside and go plenty fast enough. SSDs are the sportbikes of storage. Costly, finicky and kind of unsafe but ZOMG SO MUCH SPEED!

        And Intel SSDs are the Italian sportbikes of storage - more expensive than the competition because of the name :P

      • by ebuck ( 585470 )

        I'm not going to run out and replace my $100 2TB external backup with one of these any time soon.

        I am not going to run out and replace my minivan that I use to ferry my four kids and wife with a two seater sports car any time soon!

        The analogy falls apart when your two seater sports car can make 80 round trips in the time the minivan makes one. Once you realize the true speed differences, you will start thinking of that two seater sports car as a minivan with seating capacity for 81, or one that can shuffle a "mere" four people around in seconds instead of minutes.

    • in that the price per gig needs to hit $1 before you start seeing acceptance for mass storage solutions from consumers.

      Hmm... Hard to say, hard to say. Personally, I'm thinking more like $.10 per gig. As you mention, HDs are currently around $.05 per gig. I bought a 60gig SSD a while back; it's just not big enough - it constantly forces me to shift stuff to the HD (I LOVE symbolic links!). I can keep the OS, a few applications, and maybe a couple games on it. Performance improvements, at this point, are almost unnoticeable. Personally, I think that a hybrid SSD/HD [storagemojo.com] solution is currently the best idea, at least for the c

    • I'm not going to run out and replace my $100 2TB external backup with one of these any time soon. However, I've been tempted to snag a small 40 gig model and use that as my OS drive, and use my existing internal 1TB HDD for the actual data. I think the article is right, in that the price per gig needs to hit $1 before you start seeing acceptance for mass storage solutions from consumers. 95% of users can't tell the difference between a 5600 RPM HDD and a 10,000 RPM one, so they won't care about SSD speeds that much either.

      The difference between a 10,000 RPM hard drive and an SSD is much bigger than the difference between a 5600 RPM HDD and a 10,000 RPM HDD. They will notice.

      Like many others I went with an SSD for my boot drive when I built my last system (Crucial 64GB RealSSD), in combination with a cheap 7200 RPM HDD for data. The SSD makes a HUGE difference - no waiting for applications to start, very quick startup (the longest part by far is the various BIOS checks that it feels the need to go through), no need to defrag

      • Right. Focusing on read and write speed is misleading. The reason for this is that the perceived speed of SSDs comes from seek times, not R/W speed.

        Think of it like this: ever play a game on a server in Korea with a one-second ping? Even if your connection is 100Mb/s, that feels horrible. This is analogous to a mechanical hard drive. Compare it to the LAN game where the server is 10ms away - even on a 10Mb/s pipe it's far better. That's what an SSD feels like.

      • With the availability of a relatively cheap 40 GB option I can see the start of widespread adoption in the corporate world. In my experience 40 GB is plenty for OS and applications for the vast majority of office drones (myself included), with pretty much all data staying on the server these days.

        I see several adoption points - and the biggest one isn't performance related, but when the cheapest SSD that 'works' is cheaper than the equivalent cheapest HD.

        What does this mean? When the cheapest available HD costs the manufacturer $20 and the equivalent SSD costs $19. It might be a 500GB HD for $20 versus 40GB of SSD for $19, but it'll be cheaper. HDs offer 'enough' performance today, and their vastly cheaper cost per GB still outweighs the fact that SSDs scale 'down' better than HDs. The cheapest HD at the moment

    • Oh they sure can tell the difference (they may not know it's the hard drive, but they definitely know the computer is shit-slow), but they'd just rather keep the cost of their computers down.

    • You're right about the $1/gig price point. My money will remain in my wallet until then. Besides price, I'm disappointed at the lack of a 60GB model. 40GB is too little; 80GB is too much. I guess I have to wait until the next release cycle for my mythical $60 60GB Intel SSD.
    • For laptops, the performance increase is incredible and obvious. I've replaced spinning drives in two MacBooks and a MacPro. The latter was pretty damned fast anyway and going from a 7500 RPM drive to the SSD did make a difference, but not the absolute stunning level of performance increase that I've noticed in the laptops. That might be because, as Macs, they're on the slow end of high performance (they're both circa 2007) and came with pretty sluggish hard drives to boot.

      But it's night and day. For
    • by aussersterne ( 212916 ) on Monday March 28, 2011 @03:04PM (#35643962) Homepage

      There's no comparison between the 5,600-10,000 RPM gap and the HDD-SSD gap.

      I took the plunge last year and installed X-25M drives in my desktop and laptop as OS drives, with secondary drives for user data. The difference is the single greatest performance jump I've ever experienced in 30 years of upgrading, going even back to the days of replacing clock generators on mainboards to overclock 8-bit CPUs by 50 percent.

      There is literally a several-orders-of-magnitude difference in the overall speed of the system. If you haven't experienced it, a description of the difference doesn't sound credible, but a multi-drive RAID-0 array of 10k drives doesn't come close to a single SSD in terms of throughput.

      I can't go back to non-SSD OS installs now. Systems without an SSD literally seem to crawl, as if stuck in a time warp of some kind. Non-SSD systems seem, frankly, absurdly slow.

      • There is literally a several-orders-of-magnitude difference in the overall speed of the system.

        LITERALLY?

        I call BS. Assuming several is equal to at least 3, several orders of magnitude implies an increase of a factor of at least 1000 of your overall system performance.

        Since most performance comparisons between an ordinary hard drive and an SSD show at most a factor of 2 difference in tasks like booting Windows, you are WAY off. Even drive-specific tasks like sustained reads are typically no more than a

        • I would say two orders of magnitude.

          In my case, from a three-digit (100 second+) boot process to a one-digit (8-9 second) boot process, comparing a 1TB WD Scorpio Blue drive to an Intel X-25M drive storing the OS. It was a MASSIVE difference, a ridiculous difference.

    • by glwtta ( 532858 )
      I'm not going to run out and replace my $100 2TB external backup with one of these any time soon.

      And I'm probably not going to replace my RAM with a tape drive; what's your point?
  • by ameline ( 771895 ) <ian.ameline@gma i l .com> on Monday March 28, 2011 @01:10PM (#35642528) Homepage Journal
    It is a bit behind the times with no SATA 3 (6 Gbps) support.
    • They probably see the SATA 3.0 market as currently too small for OCZ to become dominant enough to prevent Intel from successfully entering it later.
  • Looks like Intel has scrapped the "power safe write cache" that was slated for the next generation of drives.

    • by Amouth ( 879122 )

      Humm, what??? They added caps to ensure that there was enough power to finish write operations in the event of power failure. They didn't "scrap" it, they implemented it.

    • No they didn't, read the white paper [intel.com] about it. You can see all the capacitors involved in the anandtech review [anandtech.com] even. In theory, this has finally fixed the problem that made Intel's drive unusable for high-performance databases, that the write cache was useless in that context because it lied about writes.
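
      For context on why a write cache that "lies" matters: a database treats a commit as durable only after a flush is acknowledged by the device. A minimal sketch of that pattern (Python/POSIX; a generic illustration, not Intel-specific code):

```python
import os

def durable_append(path: str, record: bytes):
    """Append a record and only return once it should survive power loss."""
    fd = os.open(path, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0o644)
    try:
        os.write(fd, record)
        # fsync asks the device to flush its volatile write cache. If the
        # firmware acks the flush while data is still only in cache, a power
        # cut loses a "committed" record -- hence the supercapacitors.
        os.fsync(fd)
    finally:
        os.close(fd)
```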

      • Thanks, my mistake. I looked over the datasheet and product brochure and it made no mention of this. Since they were touting it prior to launch, it seems strange that it is no longer a marketing point. Hopefully the feature won't disappear, as has happened with certain other products after launch.

  • So, anyone care to make a forecast as to when SSDs will overtake HDs, even for large multi-TB units, in terms of price+perf?
    • Yes, sir.

      2014 will be that year.
    • Re: (Score:2, Interesting)

      by Anonymous Coward

      It took them 3 years or so to go down 30% in price, maybe. It'll probably take them 2 more years to drop another 30%, and after that 1 more year to drop another 30%. At which point they'll most likely hit a wall and only drop about 30% every year, year after year.

      I speculate 5 - 10 years to beat the price / performance of conventional hard drives. That's the point at which your average consumer does not find any value at all in owning a conventional hard drive. Already, many enthusiasts are will

"Trust me. I know what I'm doing." -- Sledge Hammer

Working...