Intel's First SSD Blows Doors Off Competition

theraindog writes "Intel is entering the storage market with an ambitious X25-M solid-state drive capable of 250MB/s sustained reads and 70MB/s writes. The drive is so fast that it employs Native Command Queuing (originally designed to hide mechanical hard drive latency) to compensate for latency the SSD encounters in host systems. But how fast is the drive in the real world? The Tech Report has an in-depth review comparing the X25-M's performance and power consumption with that of the fastest desktop, mobile, and solid-state drives on the market."
This discussion has been archived. No new comments can be posted.

  • Oh Yeah? (Score:5, Funny)

    by MyLongNickName ( 822545 ) on Monday September 08, 2008 @02:51PM (#24923257) Journal

    My SBDs will blow THEIR doors off.

  • by Anonymous Coward on Monday September 08, 2008 @02:54PM (#24923301)

    A step in the right direction, but at $600 per 1000 I am gonna wait a bit longer before jumping on the SSD bandwagon.

    • Why? They're almost free at 60 cents each :-P

      • Re: (Score:2, Funny)

        by Anonymous Coward

        Why? They're almost free at 60 cents each :-P

        Verizon cents.

    • by adisakp ( 705706 )
      Yup - which means the cost of a single SSD drive will be about 25-30% higher or $900-$1000.

      I'm happy with my WD Velociraptor for right now. The Velociraptor is $300 for 300GB, which is still steep, but it beat or matched the tested SSDs in quite a few tests.

      The Velociraptor even beat the Intel SSD in several tests such as Windows Boot time (and it creamed it on anything that involved large amounts of writing / content creation since the Velociraptor gets 107MB/s write compared to 80MB/s).
      • by adisakp ( 705706 )
        FWIW, this article [reghardware.co.uk] recommends the Velociraptor over SSDs for gamers. The Velociraptor either beats or is close to SSD's in many benchmarks and the price per GB is at least an order of magnitude less.
      • Re: (Score:3, Interesting)

        by AllynM ( 600515 ) *

        The review is slashdotted at the moment so I can't RTFM, but...

        If a velociraptor beat an SSD in boot time, well, something is wrong with their test, or perhaps the bios was waiting on the SSD to initialize (entirely possible based on the added intelligence on their controller chipset). I just went from an SLC SSD to a velociraptor and the difference is painful. Boot time is slower. The system is just 'laggier'.

        You can't judge the differences between SSD and HDD from charts and graphs on review sites. Re

    • A step in the right direction, but at $600 per 1000 I am gonna wait a bit longer before jumping on the SSD bandwagon.

      I'd place an order for one this instant if I could. My company uses a relatively small database, on the order of 40GB of online data. It's running on 4 SCSI-320 Cheetah 32GB, 15K RPM drives in RAID 0. By all accounts, this single SSD would out-seek the Cheetahs, meaning that our website can serve more customers and more quickly. This is a total no-brainer for a lot of applications, even at the current price.
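The seek arithmetic behind that claim can be sketched with textbook numbers (the latencies below are generic assumptions, not figures from the article or the poster's actual hardware):

```python
# Rough random-read rate comparison: four striped 15K drives vs one SSD.
# All numbers are illustrative assumptions, not measured specs.

def hdd_iops(avg_seek_ms, rpm):
    """Random-read ops/sec: one average seek plus half a rotation."""
    half_rotation_ms = 0.5 * 60_000 / rpm
    return 1000 / (avg_seek_ms + half_rotation_ms)

cheetah = hdd_iops(3.5, 15_000)   # ~180 IOPS per 15K RPM drive
raid0_4 = 4 * cheetah             # ~730 IOPS across four spindles

# A flash SSD with an assumed ~0.1 ms random access time:
ssd = 1000 / 0.1                  # ~10,000 IOPS from a single device

print(round(raid0_4), round(ssd))
```

Even granting the 15K drives generous numbers, a single flash device with sub-millisecond access time out-seeks the four-spindle stripe by an order of magnitude, which is the point being made about serving more customers.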

      • by arth1 ( 260657 ) on Monday September 08, 2008 @04:12PM (#24924407) Homepage Journal

        Before rushing to buy these for database use, I would want a good look at MTBF values. Especially MTBF values for really heavy use, which may be completely different from estimated desktop use.

      • Are you sure? (Score:3, Insightful)

        Quote
          4 SCSI-320 Cheetah 32GB, 15K RPM drives in RAID 0.
        End Quote

        What company would really want to run their DB on a Raid 0 (Striped) Disk setup? Does this not put it at risk from a single spindle failure?

        • Re: (Score:3, Insightful)

          by Just Some Guy ( 3352 )

          What company would really want to run their DB on a Raid 0 (Striped) Disk setup?

          One who replicates the data to slower backup systems.

          Does this not put it at risk from a single spindle failure?

          If those were the only spindles involved, sure.

      • by Dancindan84 ( 1056246 ) on Monday September 08, 2008 @04:22PM (#24924555)

        It's running on 4 SCSI-320 Cheetah 32GB, 15K RPM drives in RAID 0.

        I hope you know how volatile RAID 0 can be. A problem with any single one of those drives will take down the whole array until you can restore from a backup. I can understand wanting to avoid RAID 5/6 if there are a lot of writes to your DB, since write performance on those arrays is notoriously bad, and RAID 1 would double the hardware cost, but the ability to stay up and hot-swap drives after a failure is priceless.

        • Re: (Score:3, Interesting)

          by Just Some Guy ( 3352 )

          I hope you know how volatile RAID 0 can be.

          Oh yeah, but we can do a bare-metal recovery in an acceptable amount of time, so a failure is more along the lines of "dangit, break out the tapes".

          To answer other posters while I'm at it:

          That chassis is maxed out on RAM. We could buy a newer, bigger system but this SSD would serve about the same ends for a lot less money and effort. Besides, at some point you have to flush those cached writes out to disk. Right now, that is sometimes a bottleneck on our system. If we could magically make those writes s

          • Re: (Score:3, Interesting)

            by Korin43 ( 881732 )
            If you want a fast disk, get some i-RAM [wikipedia.org] (you'll probably want it doing constant backups to a normal hard drive). It's expensive, and you max out at 4 GB per card (unless you put them in some sort of RAID), but it's hellafast. With the price of 1 GB sticks of RAM, you could probably do 4 in RAID 0 for around $500 (is 16 GB enough space?).
            • Re: (Score:3, Interesting)

              by benow ( 671946 )
              IRAMs don't play well with controllers... bad SATA implementation. Good idea, bad implementation, and a costly experiment on my part.
        • by Nefarious Wheel ( 628136 ) on Monday September 08, 2008 @06:45PM (#24926537) Journal

          I hope you know how volatile RAID 0 can be. A problem with any single one of those drives will screw up the whole works until you can restore from a backup

          Oh my, pardon me, I am rolling on the floor laughing, biting the carpet and frightening the cat (ROFLBTCAFTC).

          I remember reading these exact same arguments in articles written during the early days of computing, when people were complaining of the multi-platter nature of modern disk packs. These started hitting the market around 1963 I think. The argument went -- if you stack all those platters together, the failure of one platter would trash the entire set! Oh noes...

      • by lgw ( 121541 ) on Monday September 08, 2008 @04:25PM (#24924601) Journal

        Have you tried just putting 16GB of RAM in the database server? Nearly 16GB of cache for a 40GB database should work pretty well.

        More generally, it's time to start thinking about DB servers that satisfy all reads from memory. It won't be long before the RAM available in a commodity server is larger than many shops' databases. Your caching model would want to be very different if you know you can cache everything.
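A quick model of why mostly-in-RAM caching pays off (all latencies below are illustrative assumptions, not benchmarks of any particular server):

```python
# Effective read latency for a DB whose hot set mostly fits in RAM.
# Latency figures are illustrative assumptions, not measurements.

def effective_read_latency_us(hit_ratio, ram_us=0.1, disk_us=8000.0):
    """Average latency when a fraction of reads hit the RAM cache.

    hit_ratio: fraction of reads served from memory (0..1)
    ram_us:    assumed RAM access latency in microseconds
    disk_us:   assumed 15K RPM seek+rotate latency in microseconds
    """
    return hit_ratio * ram_us + (1.0 - hit_ratio) * disk_us

# Caching ~16 GB of a 40 GB database: even a 90% hit ratio
# cuts average read latency by roughly 10x versus all-disk reads.
print(effective_read_latency_us(0.9))
```

The average is dominated by the miss path, which is why pushing the hit ratio toward 100% matters more than the speed of the cache itself.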

        • Re: (Score:3, Insightful)

          by wizzat ( 964250 )
          Honestly, I think we're long past the time when we can even consider satisfying all reads from memory. Data volume is growing these days - and it's growing much faster than hardware.

          Disclaimer: I work in the data warehousing industry.

        • by ignavus ( 213578 ) on Monday September 08, 2008 @08:48PM (#24927721)

          It won't be long before the RAM available in a commodity server is larger than many shops' databases.

          First law of data: data always expands to fill all available storage.

          Second law: doubling your storage only buys you half the extra time you expected.

          Final law: no storage is ever enough.

      • by Ed Avis ( 5917 )

        Well it's about $600 in bulk. I imagine the retail price will be a bit more than that. But suppose you can get one for just $600. What else could you do with the money?

        You can buy 8 gigabytes of RAM for about $150 (you can even get ECC for that price if it doesn't have to be the fastest clocked RAM). So $600 would let you pimp out your server with 32 gigabytes of RAM - actually, not so much these days. I'd bet that for many applications the RAM will give a better performance increase than going to SSDs

  • to run vista, or do you need a RAID array of these drives.

    • Re: (Score:3, Funny)

      to run vista, or do you need a RAID array of these drives.

      Vista does a lot better with slow hard drives than XP or most other operating systems, thanks to superfetch or whatever silly name they give to the precache of apps.

    • Re: (Score:2, Insightful)

      SSD doesn't have a seek delay or rotational delay.
    • I'm sure they have properly optimiz^H^H^H^H^H^H crippled versions to use with Vista.
  • by Anonymous Coward on Monday September 08, 2008 @02:56PM (#24923327)
    This article at HotHardware has a few additional tests that show real-world usage models as well as synthetic benchmarks: http://www.hothardware.com/Articles/Intel-X25M-80GB-SATA-Solid-State-Drive-Intel-Ups-The-Ante/ [hothardware.com]

    The PCMark Vantage tests are especially impressive: http://www.hothardware.com/Articles/Intel-X25M-80GB-SATA-Solid-State-Drive-Intel-Ups-The-Ante/?page=7 [hothardware.com]
  • by sakdoctor ( 1087155 ) on Monday September 08, 2008 @02:58PM (#24923361) Homepage

    You were only supposed to blow the bloody doors off!

  • by religious freak ( 1005821 ) on Monday September 08, 2008 @02:59PM (#24923375)
    This is great and all, but if I had to choose, give me more SSD storage. It's got plenty of speed right now; I'll be impressed when SSDs are an actual alternative to disks.
    • Yeah, I have 3TB of HDDs on my desktop. Someone let me know when they make 3TB SSDs that I can afford. :)
      -Taylor

      • by Firethorn ( 177587 ) on Monday September 08, 2008 @03:58PM (#24924215) Homepage Journal

        At current improvement rates, I think you're looking at 7-10 years before SSD becomes cheaper than 3.5" form factor drives for sheer storage. We seem to have been lagging at around a terabyte for a while. Meanwhile, SSD seems to be doubling in capacity per dollar at its 'sweet spot' each year at the moment.

        Going by performance improvements, it'll only be 2-4 years before companies start replacing their platters with solid state for intensive database operations, especially those biased towards reads. Those 10k-15k RPM drives are significantly more expensive and store less than 7200/5400 RPM drives.

        The article mentions $595. Looking up, a 300GB 15k HD is $400 for an OEM. That's 5 times the size of the 80GB SSD mentioned in the article. Figure on a doubling each year, that'd be 3 years before the SSD exceeds current models. Figure in the lower power requirements and such, and I can see SSDs selling well before reaching parity based purely on size - their improved seek time, lower power demands, etc...
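The 3-year figure follows from the prices quoted above, assuming SSD $/GB halves annually while the HDD price stands still (a strong assumption on both counts):

```python
import math

# Back-of-envelope price-parity estimate using the comment's numbers:
# $595 for the 80 GB SSD vs $400 for a 300 GB 15K RPM drive,
# assuming SSD $/GB halves each year and the HDD price is flat.
ssd_per_gb = 595 / 80      # ~7.44 $/GB
hdd_per_gb = 400 / 300     # ~1.33 $/GB

# Number of annual halvings needed to close the gap:
years = math.ceil(math.log2(ssd_per_gb / hdd_per_gb))
print(years)  # 3 -- matching the comment's estimate
```

Of course HDD $/GB also falls over time, so this is a floor on the real crossover date, not a prediction.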

    • by grasshoppa ( 657393 ) on Monday September 08, 2008 @03:15PM (#24923595) Homepage

      Or you split up your expectations.

      Honestly, how much space do you need for the OS and programs? Have an SSD for these functions, and a traditional HDD for pure space requirements. That'd be more economical too, at least in the short term.

      • I just filled a 150 gig velociraptor with OS/programs. I may be a little out of the ordinary, but programs, especially games, are starting to stack up space wise. I have a few that top the 10gb mark, and at least one that tops 20. When you're eating space like that, the SSD sizes that are coming out right now just don't cut it. (I know, I know, I could just uninstall some stuff... but I like having all of them on hand in case I get a bug at 2 in the morning and have to play THIS GAME! :) )
        • I have a 74gb raptor, split into two 36gb halves (xp/ubuntu), and I get by on that as my OS drive. Seems enough for any programs I use plus 4 current-ish games (wow, oblivion, cod4, civ4, plus any expansions for each). It's a little restrictive, but I really don't regularly play 4 games anyway, so I'm fine with tossing one for spore or whatever comes next. Admittedly, if I didn't have a 360 and did all my gaming on the PC, this probably wouldn't be sufficient.

          This isn't to say you're doing it wrong or anythi

      • Seconded.

        What gets really interesting is if you start thinking about these access times and such on your swap partition/file/drive/whatever. It's a hell of a lot less expensive than a ton of extra RAM, but still performs quite well, especially in random access. 80GB of an Intel SSD is still a lot cheaper than the equivalent amount of RAM, too.

    • Actually, it is the speed - the write speed. Most SSDs on the market right now have extremely slow write speeds, to the point that running an OS off them can be quite painful.

      First get performance to parity with hard drives on write (they already kill them on reads due to lack of seek times), and then start ramping up the capacity. I expect we'll see both of these well underway by the end of next year. 200GB SSD, anyone?

  • These things cut latency by 2 orders of magnitude. Defrags are no longer necessary. 250MB/s damn near saturates the newest SATA gear.

    Write/Read speed parity would be nice.

  • by MojoKid ( 1002251 ) * on Monday September 08, 2008 @03:01PM (#24923405)
    This review at HotHardware shows some additional data including a few additional real-world usage models, like PCMark Vantage tests: http://www.hothardware.com/Articles/Intel-X25M-80GB-SATA-Solid-State-Drive-Intel-Ups-The-Ante/ [hothardware.com]

    Benchmarks start here: http://www.hothardware.com/Articles/Intel-X25M-80GB-SATA-Solid-State-Drive-Intel-Ups-The-Ante/?page=4 [hothardware.com]
  • If anyone's seen the results, it's in first place in speed but not in a "door blowing manner". It's just slightly faster than the next guy. "Blows doors off" reads like marketing spooge trying to overhype something that has a small or no advantage over the next contender. Misleading title.

    • by Kjella ( 173770 ) on Monday September 08, 2008 @03:12PM (#24923567) Homepage

      If anyone's seen the results, it's in first place in speed but not in a "door blowing manner". It's just slightly faster than the next guy.

      Pardon me, but it is "blowing down the doors" (and the house too) in some tests, like this one [techreport.com]. More than 3x the number of transactions of the second fastest flash drive? 7x faster than the slowest SSD drive? And the traditional HDDs are so crushed at the bottom I can't make out a ratio, but 30x or more? That is just ownage of the highest level. Yes, the write speeds aren't exactly compelling but for IO and read-heavy uses it's completely mindblowing.

      • Note to self: Tell Netflix to store all their watch it now content on these drives.
      • by CaptainPatent ( 1087643 ) on Monday September 08, 2008 @03:34PM (#24923877) Journal

        Pardon me, but it is "blowing down the doors" (and the house too)

        Yes, the write speeds aren't exactly compelling but for IO and read-heavy uses it's completely mindblowing

        Great, first the doors, then the house and now your mind...

        I guess if there's anything we've learned, it's that this drive really blows.

      • but for IO and read-heavy uses it's completely mindblowing

        I'm being anal but ... you realize IO implies reading AND writing, right?

        • by Kjella ( 173770 )

          I'm being anal but ... you realize IO implies reading AND writing, right?

          Short answer: you have read and write performance as you'd see in normal laptop/desktop/workstation use, which consists of fairly mixed sizes and randomness. Then you have what is typically database transactions - a huge number of small reads/writes which tend to saturate the controller, not the actual medium, unless the underlying medium is extremely fast to respond. Those specifically interested will check out read IOPS, write IOPS and various mixes for various block sizes, but in many ways it's a separate metric

    • "door blowing manner" Sir, I am intrigued by your ideas and wish to subscribe to your newsletter.
    • by tknd ( 979052 )

      They did blow the doors off the competition because they actually have engineers that get it. They were able to make an MLC-based flash disk that is not only faster in every manner but has an amazing MTBF. This brings cheaper SSDs within reach. Look at how thorough their MTBF analysis is; it really shows they paid attention to every detail.

  • by Anonymous Coward on Monday September 08, 2008 @03:03PM (#24923435)

    Since SSDs don't really have "sectors", do they fragment files the same way as HDDs?

    Also, what would the defrag speeds be?

    • by bunratty ( 545641 ) on Monday September 08, 2008 @03:11PM (#24923557)
      The reason you defrag a hard disk is that the time to read a file is much less if the drive doesn't have to do random-access seeks while reading it. SSDs perform well whether they seek randomly or not, so why would there be any need to defrag one? I would think it would only wear out the drive faster.
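A toy model makes the point concrete (the latencies and throughputs below are invented for illustration, not taken from the review):

```python
# Why fragmentation hurts a spinning disk but barely touches flash.
# Latency and throughput figures are illustrative assumptions.

def read_time_ms(size_mb, fragments, seek_ms, throughput_mb_s):
    """Time to read a file split into `fragments` pieces:
    one positioning delay per fragment plus the raw transfer time."""
    return fragments * seek_ms + size_mb / throughput_mb_s * 1000

file_mb = 100
hdd_contig = read_time_ms(file_mb, 1,   8.0, 100)   # one seek
hdd_frag   = read_time_ms(file_mb, 500, 8.0, 100)   # badly fragmented
ssd_frag   = read_time_ms(file_mb, 500, 0.1, 250)   # same fragmentation

print(hdd_contig, hdd_frag, ssd_frag)
```

With these numbers the fragmented HDD read is roughly 5x slower than the contiguous one, while the equally fragmented SSD read is still faster than the *contiguous* HDD read, which is why defragging flash buys nothing but wear.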
      • by chill ( 34294 ) on Monday September 08, 2008 @03:23PM (#24923723) Journal

        Yes, it would wear the disk out faster, but your original premise is flawed.

        Clustering locations would allow for accessing large chunks of data with one fetch, instead of lots of little fetches. If you're old enough, think back to the Blitter on the Amiga and moving contiguous chunks of memory as opposed to fragmented blocks.

        Remember, RAM can get fragmented just as badly as a hard drive.

        • by petes_PoV ( 912422 ) on Monday September 08, 2008 @03:50PM (#24924131)
          You store the database on these, so fragmentation questions are moot. Provided you've set the (database) block size correctly, the only time you'd have to modify (as opposed to write new) a block is to update a VARCHAR field that won't fit in the original size.

          What would be interesting would be to put an Oracle database block interface on these puppies, instead of the normal filesystem interface. Then you'd just have the database say to the storage "get me block X" and it appears. No filesystem overheads - which, given the speed of these things, could turn out to be significant.

          Looks like we'll be back on RAW "disks" for databases. Plus ca change!

        • by adisakp ( 705706 ) on Monday September 08, 2008 @04:00PM (#24924243) Journal
          You never want to defrag SSD's. It just wears out the disk.

          A good SSD has wear-leveling and write-combining techniques that keep the SSD "defragmented" automatically.

          And it doesn't matter if the FS clusters are far apart as long as they are close to the SSD's hardware cluster sizes or the SSD intelligently combines them (which is what I believe Intel is doing since they claim a write amplification of only 1.1).

          It's possible that the Samsung SLC chip stores data for the wear-leveling and write-combining operations which would remap the MLC in a non-fragmented way.

          BTW, let me give you a naive wear-leveling / write-combining algorithm. I'm sure Intel has a better one, because they've invested millions of dollars in research and the one I'm about to present could be done by a CS101 student:

          1) You have a bit more than 80GB free for an 80GB drive (extra memory to take care of bad sectors, just like a normal hard drive, plus a small amount required for the wear-leveling / write-combining)

          2) You treat most of the storage as a ring buffer that consists of blocks on two levels: the native block size and a subblock size. The remaining storage (or alternate storage which may be the Samsung SLC chip on the MLC drives) is used to journal your writes and wear-leveling.

          3) You combine all writes aligned to the subblock size into a native block, write them out to the next free native block in the ring buffer, and keep a counter for the writes to the block. If you run into a used block, you increment a counter (for wear levelling); if the counter is below a certain value, you skip to the next free block; otherwise you move the used block (which has been stagnant) into a more frequently written-to free block (which will now take less of a burden, since it's had a stagnant block moved into it).

          4) Anytime you make a write, the new sectors are updated in the memory area used for journaling / wear-level / sector remapping.

          Assuming your reads can be done fairly quickly at the subblock level, it never matters if you have to "seek" for the reads and the drive won't fragment on writes because they are combined into native block sizes.
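The four steps above can be sketched as a toy flash translation layer (block counts, names and interfaces here are invented for illustration; this is the naive CS101 version the poster describes, not Intel's actual controller, and the wear-relocation step is omitted for brevity):

```python
# Toy FTL: subblock writes are combined into native-block units and
# appended round-robin to a ring of flash blocks, with a per-block
# write counter for wear leveling and a remap table as the "journal".

NATIVE_BLOCK = 4          # subblocks per native block (assumed)
RING_BLOCKS  = 8          # native blocks in the ring (assumed)

class ToyFTL:
    def __init__(self):
        self.ring = [None] * RING_BLOCKS   # native block payloads
        self.wear = [0] * RING_BLOCKS      # per-block write counters
        self.head = 0                      # next candidate block
        self.remap = {}                    # logical subblock -> (blk, slot)
        self.pending = []                  # write-combining buffer

    def write(self, logical_addr, data):
        """Buffer a subblock write; flush once a native block is full."""
        self.pending.append((logical_addr, data))
        if len(self.pending) == NATIVE_BLOCK:
            self._flush()

    def _flush(self):
        blk = self.head
        self.ring[blk] = [d for _, d in self.pending]
        self.wear[blk] += 1
        # journal step: remap each logical subblock to its new home
        for slot, (addr, _) in enumerate(self.pending):
            self.remap[addr] = (blk, slot)
        self.pending = []
        self.head = (self.head + 1) % RING_BLOCKS   # advance the ring

    def read(self, logical_addr):
        blk, slot = self.remap[logical_addr]
        return self.ring[blk][slot]

ftl = ToyFTL()
for i in range(16):               # 16 subblock writes -> 4 native blocks
    ftl.write(i, f"data{i}")
print(ftl.read(5), max(ftl.wear))  # data5 1 -- wear spread evenly
```

Because every flush lands on the next block in the ring, writes are spread across the medium and reads never care where a subblock physically ended up, which is exactly the "fragmentation doesn't matter" property described above.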
        • by adisakp ( 705706 )
          BTW, the Blitter analogy isn't so good, because today's hardware often has scatter/gather technology for fetching reads, where it can combine many smaller blocks into what appears to be a larger single virtual block for the read.

          Even tech without this will usually allow lists or queued fetching to hide the overhead of many little fetches.

          The important thing is to have the subblock size be at least large enough that the time penalty for switching native blocks is minimal compared to the actual time of
        • by antdude ( 79039 )

          So do those RAM defraggers even work? Or do they not help?

          • by chill ( 34294 )

            I have no idea if the products work. RAM does get fragmented, but nothing a quick reboot won't fix. Hard drives need explicit defragmenting, but that scares me with RAM. I don't want Program A trying to move crap around in memory. Actually, I can't even see *how* they work, moving other program's data around. If program B expects to find a data block at $C000, it better be there and not bounced around by some defrag program.

            I personally wouldn't waste my time on them.

            • Re: (Score:3, Interesting)

              by Thaelon ( 250687 )

              I don't mean to attack directly, but you seem to be just well informed enough to be dangerous. First, you seem to think a quick reboot is something that should be no big deal and happen rather often. This is kind of appalling. If you need to reboot a computer often (more than to install new hardware), something serious is wrong with it or its OS.

              Secondly, this phrase, "Hard drives need explicit defragmenting" is misleading as all hell. Hard drives do not need defragmenting. They're made of platters, he

          • Generally not. What a RAM defragger does, all it CAN do, is request a metric fuckton (not imperial... people often get those confused) of memory, which shoves almost everything currently running into swap, and then it releases it, so hopefully the OS reads the pages back from the swap in larger cohesive blocks.

            In short, no, there's no reason for it. If there was, it'd be recommended for use on memory-hungry server applications in the enterprise, and I have never seen that. Operating systems have improved

        • You really don't need to defrag your SSD / USB flash drives. Just as there are defrag utilities for your hard drives, there are defrag utilities for the RAM in your PC. The last time I ran one of those was perhaps 10 years ago. Do a Google search for RAM Defrag and you will find them. The times I've done it with RAM were to clean up after programs with memory leaks, not for real defrag use.

          The fact is in very few cases do you ever want to do this. The benefits just are not there to justify an

      • by arth1 ( 260657 )

        There's only one benefit you can have from defragmenting a solid state disk -- you free up space.

        On a heavily fragmented drive, the information on how to jump all over the disk to read the file has to be stored somewhere. Depending on the file system, it can either be in a block allocation map or file allocation table (BAM or FAT), which grows quicker the more fragmented the disk gets, or in continuation blocks (extents), where the end of a file block tells the file system "jump to sector NNNNN block MMMM"

    • Since SSDs don't really have "sectors", do they fragment files the same way as HDDs?

      Also, what would the defrag speeds be?

      SSDs don't have seek times, so all blocks have the same access time, which means fragmentation isn't an issue.

    • Flash File Systems (FFSs) use a wear-balancing algorithm to spread write cycles out over the entire available drive, minimizing failure of individual blocks that would otherwise turn the drive as a whole into a brick. You specifically do not defragment flash drives because of this: all the defrag process accomplishes is using up write cycles. Seek delay and rotational delay are what make a fragmented filesystem slower than a contiguous one, and flash has neither.
    • by Intron ( 870560 )
      Who says SSDs don't have sectors? Erase blocks are around 128K or larger; pages for writes are typically 2K.
  • SSDs are *very* compelling. The lack of mechanical moving parts, better seek times, better read and write rates, better random access (goodbye, defragmentation?), less noise, less heat, better power consumption and the ability to finally use a lot of the bandwidth of those interfaces we've had for ages - what's not to like?

    However, they're going to need to get a lot cheaper, and we're going to need to see capacities in the hundreds of gigabytes before they start to take off, but take off they will.
    • They need to get cheaper, and they need to be easy as pie to recycle, because people who write intensively to them are going to go through them faster than consumers.
    • Re: (Score:3, Informative)

      by BitZtream ( 692029 )

      Write rates aren't THAT impressive, good but meh.

      Less heat depends on the device, I've seen plenty of HOT SSDs, presumably due to the density of silicon in them and being first generation devices

      Better power consumption ... where? Every SSD I've seen lacks a power-saving mode, and in power-saving mode, as a general rule, mechanical drives are less hungry than SSDs.

      They are really only compelling if you need fast seek times or for use in a laptop where shock (head strikes) is a potential issue at this p

  • Other than that, we can already say that the days of magnetic media are numbered. The technology is here, we now only need to wait a bit. I give it three to four years at most.

  • How different is NAND flash memory compared to Memristor technology and would Memristors make a better SSD?

  • Real use for SSD (Score:3, Insightful)

    by jcdick1 ( 254644 ) on Monday September 08, 2008 @03:21PM (#24923703)

    Western Digital blah blah, 2.5" mobile blah blah. How do they compare to the mainline Hitachi and Seagate 15k Fibre Channel? EMC's SSD offerings? I want to know what I can expect for data warehousing on Oracle RAC.

    • Grab a price/performance ratio on all of those you listed, compare it to the ratio this Intel SSD has, and get back to me. Then put them in a RAID config. Not to mention... how many of those will fit into a 2.5" form factor? I don't think any of them. This is big news for mobile speed, and for compact datacenter needs.

  • http://www.pcworld.com/businesscenter/article/149792/intel_launches_smaller_ssd_for_netbooks_minidesktops.html [pcworld.com]

    Intel appears to have actually jumped into the SSD fray before this.

    unfortunately, reviews have been lackluster.

  • SSD on PS3? (Score:5, Interesting)

    by nobodyman ( 90587 ) on Monday September 08, 2008 @03:33PM (#24923863) Homepage
    With more PS3 games offering an "install-to-HD" option, I wonder how SSD would affect performance. My theory is that playing a console game is a read-heavy experience, so an SSD should do quite well, right? Any rich gamers out there that have tried this out yet?
  • Anyone know about the general longevity of these devices?
    The shelf life of a hard drive isn't incredibly impressive.

  • Price is over-rated (Score:5, Interesting)

    by sampson7 ( 536545 ) on Monday September 08, 2008 @03:55PM (#24924189)
    I get a little tired of hearing about how the price has to drop orders of magnitude before SSD is viable. Shop around a little, people!

    I ended up buying a refurb Dell laptop for around $1000 with a 64 gig SSD. Was it the latest and greatest? Nope. But it was about $150-200 more than a comparable computer with a traditional drive (which, of course, was larger). Since the only significant problems I've ever had with my two prior Dell laptops (admittedly a small sample) involved the hard drive, going with the SSD (especially when you include the "cool" factors -- both temperature and nerd-ism) was an easy decision.

    But the point is that as SSDs become more prevalent, they become available at cheaper prices. I'm sure that as the Intel drives are rolled out, the "obsolete" drives currently on the market will continue to fall in price and become available to bottom-dwelling cheap-o-s like me who may not be able to justify $1000, but can rationalize $200 without a whole lot of difficulty.
  • In fact my VR destroys it in write speed...I'll stick with it for now.

  • Why don't they put 1GB of RAM on the thing with a battery and create a huge write cache? This ought to make the write speed almost a non-issue.
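That battery-backed write cache idea, as a sketch (the class and interfaces below are hypothetical; real controllers do this in firmware, and the battery's only job is to keep the RAM alive until the drain finishes):

```python
from collections import deque

class WriteCache:
    """Battery-backed RAM write cache in front of slow flash writes.
    Hypothetical interface for illustration only."""

    def __init__(self, flash_write):
        self.buf = deque()               # ordered pending writes in RAM
        self.flash_write = flash_write   # slow backing-store writer

    def write(self, block):
        self.buf.append(block)           # returns to the host immediately

    def drain(self):
        """Runs in the background; on power loss the battery keeps the
        RAM alive long enough for this loop to complete."""
        while self.buf:
            self.flash_write(self.buf.popleft())

written = []
cache = WriteCache(written.append)
for b in ("a", "b", "c"):
    cache.write(b)                       # host sees three instant writes
cache.drain()                            # flash absorbs them at its pace
print(written)  # ['a', 'b', 'c']
```

The host-visible write latency becomes the RAM append, not the flash program time, which is exactly why the comment calls the slow write speed "almost a non-issue" under this scheme.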
