Four X25-E Extreme SSDs Combined In Hardware RAID

theraindog writes "Intel's X25-E Extreme SSD is easily the fastest flash drive on the market, and contrary to what one might expect, it actually delivers compelling value if you're looking at performance per dollar rather than gigabytes. That, combined with a rackmount-friendly 2.5" form factor and low power consumption, makes the drive particularly appealing for enterprise RAID. So just how fast are four of them in a striped array hanging off a hardware RAID controller? The Tech Report finds out, with mixed but at times staggeringly impressive results."
  • by Ostracus ( 1354233 ) on Tuesday January 27, 2009 @04:32PM (#26628213) Journal

    "So just how fast are four of them in a striped array hanging off a hardware RAID controller? The Tech Report finds out, with mixed but at times staggeringly impressive results.""

    So in other words, I'll get First Post much faster since Slashdot switched over.

  • by the_humeister ( 922869 ) on Tuesday January 27, 2009 @04:36PM (#26628279)

    A 1.2 GHz processor with 256MB of DDR2 memory? Holy crap! That's faster than my new Celeron 220! And the perennial question: can this thing run Linux?

    • Re: (Score:3, Informative)

      That RAID card was the bottleneck. It can't support 4x the raw transfer rate of a single drive.

      • I suspect the performance would have been a LOT better if they'd used something like the 3Ware 9690SA. 3Ware is also a LOT more Linux friendly.

        Cheers,

        • by MBCook ( 132727 )
          Of course, they ran all their tests in Windows. I wonder how much of the results in some of the tests (like program installation) are really due to how fast NTFS can handle lots of little files and not due to the drives they were testing.

          It would have been nice to see some quick tests under Linux with ext3 / XFS / reiser / ext4 / btrfs / flavor_of_the_month just to see if that was really the drives or a vastly sub-optimal access pattern.

          • Windows filesystems don't even have an optimal access pattern. At least with ext2/3 you can optimise for RAID stripe and stride in a way that works regardless of the underlying RAID implementation, and significantly reduces the number of disks involved in reading/writing metadata.
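            (For illustration, a minimal sketch of the stride/stripe-width arithmetic being described; the values are assumed, not from the article: a 64 KiB RAID chunk, 4 KiB ext3 blocks, and four data-carrying drives.)

              # Stride/stripe-width math for ext2/3 on RAID, per the parent comment.
              # Assumed values: 64 KiB chunk per drive, 4 KiB fs blocks, RAID 0
              # across 4 drives (so all 4 carry data).
              chunk_kib = 64
              block_kib = 4
              data_drives = 4

              stride = chunk_kib // block_kib        # fs blocks per RAID chunk -> 16
              stripe_width = stride * data_drives    # fs blocks per full stripe -> 64

              # These feed mke2fs's extended options:
              print(f"mke2fs -b {block_kib * 1024} "
                    f"-E stride={stride},stripe-width={stripe_width} /dev/sdX1")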

      • by default luser ( 529332 ) on Tuesday January 27, 2009 @06:08PM (#26629681) Journal

        Actually, I felt that the limiting factor was probably the craptastic single-core Pentium 4 EE [techreport.com] they used to run all these benchmarks.

        What, you shove thousands of dollars' worth of I/O into a system, and put it through its paces with a CPU that sucked in 2005? I'm not surprised at all that most tests showed very little improvement with the RAID.

      • What I want to know is if the RAID controller had a battery backup unit installed so write caching could be enabled. There is no BBU shown in the article's picture of the controller.

        I recently built a new Exchange server with 6 X25-Ms (we couldn't get the 64GB X25-Es when we ordered it) hooked to a 3ware 9650 in three separate RAID1 arrays. Turning on write caching switches the whole feel of the system from disappointingly sluggish to there-is-no-way-these-tire-marks-were-made-by-a-'64-Buick-Skylark-convertible.

  • What I want to see (Score:5, Interesting)

    by XanC ( 644172 ) on Tuesday January 27, 2009 @04:36PM (#26628281)

    Is 4 of these in a RAID-1, running a seek-heavy database. Nobody does this benchmark, unfortunately.

    • Re: (Score:3, Interesting)

      by Aqualung812 ( 959532 )
      Why not run RAID-5 (or 50 or 15) if it is seek-heavy? I thought RAID-1 was only used if you had to deal with a lot of writes. Those are slower on 5 than 1, but 5 is much faster for reads.
      • Re: (Score:3, Informative)

        by Anonymous Coward

        RAID5 has terrible random write performance, because every small write forces extra reads and writes across the array to keep parity current. It's VERY easy to saturate traditional disks' random write capability with RAID5/6, so it's rightly avoided like the plague for heavily hit databases.

        I'm not certain how much of the performance hit is due to the latencies of the disks themselves, so I feel it would be an interesting test to also see RAID5 database performance on these SSDs.

        Also, RAID1 (or 10, to be more fair when comparing with RAID5) in a highly saturated environment...

      • by XanC ( 644172 ) on Tuesday January 27, 2009 @05:48PM (#26629365)

        RAID5's write performance is so awful because it requires so much reading to do a write.

        For a small write, you have to read data back from the array (either the old data and old parity blocks, or the rest of the stripe) so the new parity can be calculated. Note that it's not the calculation that's slow, it's getting the data for it. So that's multiple operations, some of them dependent reads, to do a simple write.

        A write on RAID1 requires writing to all the drives, but only writing. It's a single operation.

        RAID1 is definitely faster (or as fast) for seek-heavy, high-concurrency loads, because each drive can be pulling up a different piece of data simultaneously.
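        (A back-of-envelope sketch of the point above; the I/O counts assume the common read-modify-write parity update path.)

          # I/Os needed for one small (sub-stripe) write, per the argument above.
          def raid5_small_write_ios():
              # read old data + old parity, then write new data + new parity;
              # the two reads must complete before the writes can be issued
              return 4

          def raid1_small_write_ios(mirrors):
              # one write per mirror, all dispatched in parallel, no reads
              return mirrors

          print(raid5_small_write_ios())    # 4, half of them dependent reads
          print(raid1_small_write_ios(2))   # 2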

        • by rthille ( 8526 )

          If you set up your RAID block size and your filesystem block size appropriately, you won't have to read-before-write, at least not very often. Setting up RAIDFrame on NetBSD with a 4-drive RAID-5 set, performance was dismal because every write was a partial write (3 data disks meant that it was impossible for the FS block size to match or be an even multiple of the RAID block size). Going to 3 drives or 5 drives, performance increased about 8-10 times.
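          (The arithmetic behind that anecdote, with an assumed 16 KiB stripe unit and 64 KiB filesystem writes; a power-of-two write never tiles a 3-data-disk stripe evenly.)

            # Why 4 drives was the slow case: a power-of-two filesystem write
            # only covers full stripes when the stripe size divides it evenly.
            # Assumed: 16 KiB stripe unit per disk, 64 KiB filesystem writes.
            stripe_unit = 16 * 1024
            write_size = 64 * 1024
            for total_drives in (3, 4, 5):
                data_disks = total_drives - 1   # RAID 5: one disk's worth is parity
                full_stripe = data_disks * stripe_unit
                tag = ("full-stripe writes" if write_size % full_stripe == 0
                       else "partial-stripe writes (read-before-write)")
                print(f"{total_drives} drives: {full_stripe // 1024} KiB stripe, {tag}")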

      • Why not run RAID-5 (or 50 or 15) if it is seek-heavy?

        Because four drives in a RAID-10 are three times as reliable as the same four drives in a RAID-5. Arrays of large drives are more vulnerable to drive failures during reconstruction than arrays of small drives, and RAID-5 is much more vulnerable to a double drive failure than RAID-10 [miracleas.com]. In RAID-5, you lose data if any two drives fail. In RAID-10, you lose data only if the drives that fail are from the same mirrored pair, and there's only a 1 out of 3 chance that two randomly selected drives will be from the same pair.
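        (Checking that 1-in-3 figure by brute enumeration.)

          # Enumerate all double failures for 4 drives in RAID 10 (two
          # mirrored pairs) and count the ones that lose data.
          from itertools import combinations

          mirrors = {frozenset({"A1", "A2"}), frozenset({"B1", "B2"})}
          drives = ["A1", "A2", "B1", "B2"]

          failures = list(combinations(drives, 2))     # 6 possible pairs
          fatal = [f for f in failures if frozenset(f) in mirrors]
          print(f"{len(fatal)} of {len(failures)}")    # 2 of 6 -> 1 in 3
          # RAID 5 on the same 4 drives loses data in all 6 cases.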

    • A seek-heavy DB would be a great benchmark, but why RAID-1? It doesn't give any performance boost; it would just be reading off the primary the entire time.

      Do you mean RAID 0?
      • by afidel ( 530433 ) on Tuesday January 27, 2009 @05:26PM (#26629011)
        Good controllers do read interleaving where every other batch of reads is dispatched to a separate drive.
        • Good controllers let you set the behavior, as do good implementations of software RAID. For instance, on Solaris with SVM you can set a RAID 1 to read only from the primary, round-robin alternation, or (my favorite) read from whichever drive has a head in position closest to the requested block. For random-read-biased applications the final option wins hands down on latency, for sequential streaming reads the round-robin seems to be the best option, and for absolute hardware reliability the "read from primary" option.
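          (A toy sketch of those three read policies; the names and structure are illustrative, not Solaris SVM's actual interface.)

            # Dispatch reads to one side of a RAID 1 mirror under each policy.
            import itertools

            class Mirror:
                def __init__(self, n):
                    self.heads = [0] * n              # last block each head visited
                    self.rr = itertools.cycle(range(n))

                def read(self, block, policy):
                    if policy == "primary":           # safest, no balancing
                        drive = 0
                    elif policy == "round-robin":     # good for streaming reads
                        drive = next(self.rr)
                    else:                             # "nearest-head": random reads
                        drive = min(range(len(self.heads)),
                                    key=lambda d: abs(self.heads[d] - block))
                    self.heads[drive] = block
                    return drive

            m = Mirror(2)
            print([m.read(b, "nearest-head") for b in (10, 5000, 20, 4900)])
            # -> [0, 0, 1, 0]: each head settles into its own region of the disk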
      • by XanC ( 644172 )

        It's not simply a matter of interleaving; independent requests can be executed simultaneously. Read performance, especially seeking, can scale linearly with the number of drives in a RAID1.

  • by telchine ( 719345 ) on Tuesday January 27, 2009 @04:48PM (#26628441)

    This is a very expensive solution. What part of Redundant Array of Inexpensive Disks don't they understand?

    • RAID 0 is not redundant, they are not really 'disks' any more, and they could be independent disks rather than inexpensive. Sorry, I know you were trying to be funny, but I felt you could have more fully reduced the issue.
    • Re: (Score:2, Informative)

      by grub ( 11606 )

      Independent disks. And remember that some high-end SCSI or Fibre Channel RAIDs have never fit the antiquated "Inexpensive" bit.
    • by afidel ( 530433 ) on Tuesday January 27, 2009 @05:32PM (#26629087)
      Dude, 4 of these drives can keep up with my 110-spindle FC SAN segment for IOPS. Here's a hint: 110 drives plus SAN controllers is about two orders of magnitude more expensive than 4 SSDs and a RAID card. If you need IOPS (say, for the log directory on a DB server), these drives are hard to beat. The applications may be niche, but they certainly DO exist.
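      (Order-of-magnitude check of that claim; the per-device figures are assumptions, not numbers from the article: roughly 180 random IOPS for a 15K spindle, and Intel's rated ~35,000 random-read / ~3,300 random-write IOPS for the X25-E.)

        # Rough IOPS comparison behind the parent's claim. All per-device
        # numbers here are assumptions, not measurements from the article.
        spindles, spindle_iops = 110, 180
        print("110-spindle FC segment:", spindles * spindle_iops)   # ~19,800

        ssds = 4
        print("4x X25-E, random read: ", ssds * 35_000)             # 140,000
        print("4x X25-E, random write:", ssds * 3_300)              # 13,200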
      • by NSIM ( 953498 )
        It's nice to see someone who actually gets it. Yes, SSD is expensive, but not when you compare it to the price you'd pay for a similar number of hard drives that can match the IOPS performance.
    • The performance part.

  • by heffrey ( 229704 ) on Tuesday January 27, 2009 @04:51PM (#26628477)

    It seemed a little unfair that they only used the nice hardware RAID controller with the Intel SSDs. I would have liked to see them use it with all the other disks to get a more level playing field.

    • by Nick Ives ( 317 )

      Indeed, telling us to ignore the extra minute in the X25-E RAID0 boot times compared to the other setups is highly disingenuous. RAID setups are slower to boot because you have to load the RAID BIOS first; if you really care about fast booting, it's something you need to be aware of. There were also CPU-bound cases where the RAID0 setup performed slightly worse than the single disk, an obvious sign of a performance hit due to the RAID card.

  • Doom levels????
    Office tasks???
    Okay folks, I can only see a few groups using this kind of setup.
    Not one database test?
    I mean a real database like Postgres, DB2, Oracle, or even MySQL. Doom3... yeah, those are some benchmarks.

  • Test it on a better system than an old P4 CPU.

  • by damn_registrars ( 1103043 ) * <damn.registrars@gmail.com> on Tuesday January 27, 2009 @06:06PM (#26629655) Homepage Journal

    Intel's X25-E Extreme SSD is easily the fastest flash drive on the market, and contrary to what one might expect, it actually delivers compelling value if you're looking at performance per dollar rather than gigabytes

    I hope someone got a healthy commission from Intel for writing that...

    • Let me get this straight. Is it possible to do any kind of article on a commercial product without it being "astroturfing" of some form or another? Or is it only the negative articles that can be done? I just want to know the SlashDweeb rules.
      • Is it possible to do any kind of article on a commercial product without it being "astroturfing" of some form or another?

        Yes, it is. They didn't need to write it as

        Intel's X25-E Extreme SSD is easily the fastest flash drive on the market, and contrary to what one might expect, it actually delivers compelling value if you're looking at performance per dollar rather than gigabytes

        when they could have just as easily said

        We tested Intel's X25-E Extreme SSD drive in a four-disk RAID configuration

        There was no need to tout the product like that on the front page.

        I just want to know the SlashDweeb rules

        There was no need for that, either. I rather doubt that someone is forcing you to read anything on this website. You could read something completely different if you prefer, or not read anything technical at all.

        I stand by my criticism of this article. The headline did not need to be such blatant advertising of the Intel drives.

  • Other than just using one of these Flash RAIDs as a swap volume, is there a way for a machine running Linux to use them as RAM? There are lots of embedded devices that don't have expandable RAM, or for which large RAM banks are very expensive, but which have SATA. Rotating disks were too slow to simulate RAM, individual Flash drives probably too slow, but a Flash RAID could be just fast enough to substitute for real RAM. So how to configure Linux to use it that way?

    • It's called swap, but at the prices for SSDs (and their limited write cycles) you'd be better off getting *real* RAM. If you're at the limit for your board, UPGRADE!
  • If you're really looking for high-performance storage, you should go with a DRAM-based solution. This has almost no latency and can scale to any interface. Depending on your budget, you can get SAS 3Gb/s, 2 ports, with 32GB capacity for a bargain $24,000 (http://www.solidaccess.com/products.htm), and if you need more performance or storage space, spring for the serious iron: an FC 4Gb/s, 2 ports, at a mere $375,000.

    No need to RAID this puppy. Make sure you spring for the redundant power supplies...
  • by hack slash ( 1064002 ) on Tuesday January 27, 2009 @09:10PM (#26632249)
    When will someone come up with a hardware or software RAID solution to enable several USB flash drives to appear as a single drive on Windows? With relatively reliable & fast (12MB/s write, 30MB/s read) 16GB flash drives as cheap as £16 each [play.com], I'd love to cram as many as I could inside my Eee and have them appear as a single drive instead of many individual drives.
  • I hate to ask but... (Score:3, Interesting)

    by Douglas Goodall ( 992917 ) on Wednesday January 28, 2009 @03:45AM (#26635711) Homepage
    As SSD drives come onto the used market, how will people know how close these drives are to "used up"? That is to say, we will have to worry that these cheap drives on eBay will have lots of "bad" spots that can no longer be written. We are going to need a program or device of some kind that can certify the state of a drive so as to set a fair value on it. I expect a lot of unhappy people when used drives get installed and start failing soon after. There will have to be some pretty sleazy warranties to cover used SSDs.
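    (One plausible approach, sketched: Intel's drives export a normalized wear figure via SMART attribute 233, Media_Wearout_Indicator, which smartmontools can read. The snippet assumes smartctl is installed and that attribute name appears in its output.)

      # Read the wear-leveling indicator from an Intel SSD via smartctl.
      # Assumes smartmontools is installed; attribute 233 (E9h) counts
      # down from 100 toward Intel's end-of-life threshold.
      import subprocess

      def wearout(device):
          out = subprocess.run(["smartctl", "-A", device],
                               capture_output=True, text=True).stdout
          for line in out.splitlines():
              if "Media_Wearout_Indicator" in line:
                  return int(line.split()[3])   # normalized VALUE column
          return None                           # attribute not reported

      print(wearout("/dev/sda"))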
