
Four X25-E Extreme SSDs Combined In Hardware RAID

theraindog writes "Intel's X25-E Extreme SSD is easily the fastest flash drive on the market, and contrary to what one might expect, it actually delivers compelling value if you're looking at performance per dollar rather than gigabytes. That, combined with a rackmount-friendly 2.5" form factor and low power consumption, makes the drive particularly appealing for enterprise RAID. So just how fast are four of them in a striped array hanging off a hardware RAID controller? The Tech Report finds out, with mixed but at times staggeringly impressive results."
  • Re:Oh good (Score:5, Informative)

    by ChienAndalu ( 1293930 ) on Tuesday January 27, 2009 @04:45PM (#26628399)

    Make that 228 years [intel.com].

    Life expectancy 2 Million Hours Mean Time Before Failure (MTBF)

    Hint: learn about "wear leveling"
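
    For reference, a back-of-the-envelope sketch of where that 228-year figure comes from, taking the 2-million-hour MTBF quoted above at face value:

        # Convert the quoted MTBF into years. MTBF is a population statistic,
        # not a per-drive lifetime guarantee, so this is illustrative only.
        mtbf_hours = 2_000_000          # from the spec quoted above
        hours_per_year = 24 * 365.25
        print(f"{mtbf_hours / hours_per_year:.0f} years")   # ~228 years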

  • Re:Oh good (Score:5, Informative)

    by spazdor ( 902907 ) on Tuesday January 27, 2009 @04:49PM (#26628447)

    Your enterprise environment must not be hitting its drives very hard.

    Where SSDs shine is in disk operations that are usually bottlenecked by seek times; a big, unwieldy database that gets a lot of writes and no downtime, for instance, is happiest when it lives on a striped SSD array.

    Coincidentally, this is exactly the type of workload which is most likely to shorten a magnetic drive's life.

  • Re:Oh good (Score:5, Informative)

    by FauxPasIII ( 75900 ) on Tuesday January 27, 2009 @04:50PM (#26628467)

    > 'cause SSD's don't cost $300-$500 more than their spindle counterparts, yep yep.

    Hint: Enterprise storage purchasing often looks at dollars/IOPS rather than dollars/GB.
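
    To make that concrete, here is a minimal sketch with made-up, roughly 2009-era numbers (the prices and IOPS figures are illustrative assumptions, not quotes from the article):

        # Compare a hypothetical SSD and a 15K SAS drive on $/GB vs $/IOPS.
        drives = {
            # name: (price_usd, capacity_gb, random_iops) -- illustrative only
            "X25-E-class SSD": (700, 64, 5000),
            "15K SAS HDD":     (300, 300, 180),
        }
        for name, (price, gb, iops) in drives.items():
            print(f"{name}: ${price / gb:.2f}/GB, ${price / iops:.3f}/IOPS")
        # The HDD wins on $/GB; the SSD wins on $/IOPS by more than an order of magnitude.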

  • by Wonko the Sane ( 25252 ) * on Tuesday January 27, 2009 @04:51PM (#26628473) Journal

    That RAID card was the bottleneck. It can't support 4x the raw transfer rate of a single drive.
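
    Back-of-the-envelope, with approximate throughput figures (assumed for illustration, not measured):

        # Four drives that can each stream roughly 250 MB/s want about 1 GB/s of
        # sustained bandwidth -- more than many 2009-era RAID controllers (or their
        # host links) can actually push, so the controller becomes the ceiling.
        per_drive_mb_s = 250          # approximate X25-E sequential read, assumed
        drive_count = 4
        controller_limit_mb_s = 800   # hypothetical controller ceiling
        demand = per_drive_mb_s * drive_count
        print(f"array demand ~{demand} MB/s vs controller ~{controller_limit_mb_s} MB/s")
        print("controller-bound" if demand > controller_limit_mb_s else "drive-bound")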

  • Re:Oh good (Score:1, Informative)

    by Anonymous Coward on Tuesday January 27, 2009 @04:58PM (#26628573)

    So are SSDs ready for prime-time?
    Last year I attended IBM's Sydney Technical Conference and listened to a presentation by ?????, one of the lead designers of IBM's 3950 chipset.

    Part of the presentation was on SSD technology. Whilst viewing the graphs showing SSDs closing the gap in cents per gigabyte, someone asked the pointed question: "Is there any future in spinning-platter technology?" The presenter actually stopped for well over half a minute (an age in public speaking) before replying carefully, "I do not speak officially for IBM now, but I can see no future at all for spinning-platter technology. Not even for bulk storage."

    As others have noted, once SSDs are available in a price-competitive form at capacities of around 150GB, traditional drives will immediately and permanently exit the notebook market.

    And I can't wait for that to happen.

  • by grub ( 11606 ) <slashdot@grub.net> on Tuesday January 27, 2009 @04:59PM (#26628589) Homepage Journal

    Independent disks. And remember that some high-end SCSI or Fibre Channel RAIDs have never fit the antiquated "Inexpensive" bit.
  • Re:paging benefits? (Score:4, Informative)

    by guruevi ( 827432 ) on Tuesday January 27, 2009 @05:02PM (#26628629)

    SSDs shouldn't be used for paging. That would become very expensive (even with wear leveling) if you have a minimal amount of RAM (say 256MB) and need to run large (say 16GB) operations. It would also be slow, since you have the overhead of whatever bus your hard drive/SSD is connected to.

    Technically, hard drives aren't supposed to be used for paging either; it's just a cheap and simple trick to avoid making people pay a lot for (expensive) RAM, or having their programs crash when they occasionally run out of it. If your system is paging heavily, though, it will be better and faster with more RAM.

    Anecdote: I worked at a place once where cheap ($500) hardware was sold as dedicated SQL/IIS servers (you could fit 10 of them in 5U), and a lot of customers thought they could run whatever they wanted on them (Microsoft ran MSN for a whole country on one for a while), but the boxes only supported a maximum of 2GB of RAM (4GB according to the BIOS, but the larger modules back then were too expensive). Of course the PHB just said: let them swap. Aside from the heavy slowdowns they ran fairly well, but those heavy users all crashed their software RAIDs in less than a year (the heavy load got the Windows RAID out of sync, and then the first hard drive failed). The temperature was fine; the constant swapping was simply too much for the cheap hard drives (Maxtor and Seagate), and they all failed.

  • Re:paging benefits? (Score:2, Informative)

    by bluefoxlucid ( 723572 ) on Tuesday January 27, 2009 @05:20PM (#26628905) Homepage Journal

    SSDs shouldn't be used for paging. That would become very expensive (even with wear leveling) if you have a minimal amount of RAM (say 256MB) and need to run large (say 16GB) operations. It would also be slow, since you have the overhead of whatever bus your hard drive/SSD is connected to.

    You talk like you know what you're talking about, but then the reader realizes you don't understand what happens when the CPU spends 99% of its life in a wait state waiting for paging operations. Swap is not a high-intensity workload by itself; swap workload increases six orders of magnitude faster than CPU workload, meaning that once you start swapping, you spend most of your time swapping.

    Because the hard disk is external to the CPU, this cost grows with CPU speed: a swap operation taking 1,000,000 cycles on a 1GHz CPU (1ms) will take 10,000,000 cycles on a 10GHz CPU (still 1ms). Triggering a 4-9ms seek on a 2.0GHz CPU (a modern AMD) is a disaster; triggering one continuously, every 10ms, halves your CPU performance while performing only 50 operations a second. Writes take substantially longer than reads, so we're talking 20-30ms, at which point, if you're swapping even 2-3 times a second, you notice it. AND ALL THAT SEEKING WILL KILL HARD DRIVES.
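
    A quick sketch of that arithmetic (the 10ms seek is the figure above; the SSD latency is an assumption added for contrast):

        # Fraction of wall-clock time lost to paging, given the disk latency per
        # fault and how much useful computation happens between faults.
        def time_lost_to_paging(fault_latency_ms, compute_between_faults_ms):
            total = fault_latency_ms + compute_between_faults_ms
            return fault_latency_ms / total

        # One 10ms seek per 10ms of computation: half the CPU is gone, and the
        # disk manages only ~50 seeks per second.
        print(time_lost_to_paging(10, 10))    # 0.5
        # A ~0.1ms SSD access in the same pattern costs about 1% instead.
        print(time_lost_to_paging(0.1, 10))   # ~0.01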

  • by afidel ( 530433 ) on Tuesday January 27, 2009 @05:26PM (#26629011)
    Good controllers do read interleaving where every other batch of reads is dispatched to a separate drive.
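
    A minimal sketch of the interleaving idea (not any particular controller's firmware): mirrored drives hold identical data, so incoming reads can simply be dealt out alternately.

        from itertools import cycle

        # Round-robin read dispatch across the members of a RAID1 mirror.
        # Either copy can satisfy any read, so alternate between them.
        def dispatch_reads(requests, mirror_members=("disk0", "disk1")):
            members = cycle(mirror_members)
            return [(request, next(members)) for request in requests]

        print(dispatch_reads(["lba 100", "lba 2048", "lba 7", "lba 512"]))
        # [('lba 100', 'disk0'), ('lba 2048', 'disk1'), ('lba 7', 'disk0'), ('lba 512', 'disk1')]
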
  • by afidel ( 530433 ) on Tuesday January 27, 2009 @05:32PM (#26629087)
    Dude, 4 of these drives can keep up with my 110-spindle FC SAN segment for IOPS. Here's a hint: 110 drives plus SAN controllers is about two orders of magnitude more expensive than 4 SSDs and a RAID card. If you need IOPS (say, for the log directory on a DB server), these drives are hard to beat. The applications may be niche, but they certainly DO exist.
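
    Rough numbers (assumed for illustration, not taken from the article) showing why that is plausible:

        # Aggregate random-read IOPS: a large spindle farm vs a small SSD stripe.
        fc_spindles, iops_per_spindle = 110, 180   # ~180 IOPS is typical for a 15K FC disk
        ssds, iops_per_ssd = 4, 5000               # X25-E-class random reads, approximate
        print(f"FC SAN segment: ~{fc_spindles * iops_per_spindle} IOPS")   # ~19,800
        print(f"4-SSD stripe:   ~{ssds * iops_per_ssd} IOPS")              # ~20,000
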
  • by Anonymous Coward on Tuesday January 27, 2009 @05:41PM (#26629235)

    RAID5 has terrible random write performance, because every small write forces extra reads and writes so the parity can be updated. It's VERY easy to saturate a traditional disk's random-write capability with RAID5/6, so it's rightly avoided like the plague for heavily hit databases.

    I'm not certain how much of that performance hit is due to the latencies of spinning disks, though, so I think it would be an interesting test to also see RAID5 database performance on these SSDs.

    Also, compare RAID1 (or RAID10, to be fairer against RAID5) in a highly saturated environment: reads should do marginally better than RAID5, since you don't lose a disk to parity, and any RAID controller worth its salt will send independent reads to, or round-robin reads across, both disks.

    Then there is also the whole disk-failure thing. Losing a disk in RAID5 is a huge performance hit; for that reason alone, in a heavily hit environment, it would probably be best to avoid it.

    Disk failure is not an IF, but a WHEN. I don't care what manufacturers say about MTBFs.

  • by XanC ( 644172 ) on Tuesday January 27, 2009 @05:48PM (#26629365)

    RAID5's write performance is so awful because it requires so much reading to do a write.

    A small write requires reading existing blocks back from the array (the old data and old parity, or in the worst case the rest of the stripe) so the new parity can be calculated. Note that it's not the calculation that's slow, it's getting the data for it. So a simple write turns into multiple I/O operations.

    A write on RAID1 requires writing to all the drives, but only writing. It's a single operation.

    RAID1 is definitely faster (or as fast) for seek-heavy, high-concurrency loads, because each drive can be pulling up a different piece of data simultaneously.
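
    The usual rule of thumb, sketched out (the raw per-drive IOPS figure is an assumption): a small random write costs about 4 I/Os on RAID5 (read old data, read old parity, write data, write parity) and 2 on RAID1/10 (one write per mirror side).

        # Effective random-write IOPS after the RAID write penalty.
        def effective_write_iops(drive_count, iops_per_drive, penalty):
            return drive_count * iops_per_drive / penalty

        per_drive = 180   # assumed raw random IOPS for one spindle
        for level, penalty in (("RAID10", 2), ("RAID5", 4), ("RAID6", 6)):
            print(level, round(effective_write_iops(8, per_drive, penalty)), "write IOPS")
        # Eight drives in RAID5 deliver roughly half the random-write IOPS of the
        # same drives in RAID10, before you even account for a degraded array.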

  • Re:paging benefits? (Score:4, Informative)

    by illumin8 ( 148082 ) on Wednesday January 28, 2009 @08:29PM (#26647439) Journal

    I'm beta-testing Windows 7. Before going to bed I set up a swap partition for it; after getting up the next morning and checking, it was full.

    I have *no idea* what W7 put in there while I was sleeping.

    In any modern operating system, including Windows, swap isn't just used for out-of-physical-memory conditions. It's also used to "page out" portions of the operating system and libraries, shared objects, DLLs, etc., that aren't being used at the moment. This actually speeds your system up by allowing more memory to be used as disk read/write cache.

    I've looked at Linux boxes with 64GB of memory in them that were only using 25% of it, and I usually get asked, "Wasn't 64GB enough? Why is there some usage in swap right now?" It's normal, I explain: the kernel just pages out sections of Linux that aren't needed, to free up more RAM for filesystem caching.

    I think Windows 7 just does this more aggressively, probably because if you ever need to use some obscure Windows Directmedia SuperDRM doubleplusgood Plugin X, it's just as fast to reload it from swap into memory as it is to load the binary from disk. But 99% of home users will never load that plugin, so it can stay safely swapped out, giving you more precious memory for applications and disk cache.
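
    You can see this for yourself on a Linux box; a small sketch that reads /proc/meminfo and shows how much RAM is doing useful work as page cache next to the (usually small, perfectly normal) swap usage:

        # Show page-cache use vs swap use from /proc/meminfo (Linux only).
        def meminfo_kb():
            with open("/proc/meminfo") as f:
                return {line.split(":")[0]: int(line.split()[1]) for line in f}

        m = meminfo_kb()
        print(f"RAM total:   {m['MemTotal'] // 1024} MB")
        print(f"Page cache:  {m['Cached'] // 1024} MB   <- the 'missing' RAM is caching files")
        print(f"Swap in use: {(m['SwapTotal'] - m['SwapFree']) // 1024} MB  <- cold pages, not a problem")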

"Protozoa are small, and bacteria are small, but viruses are smaller than the both put together."

Working...