Four X25-E Extreme SSDs Combined In Hardware RAID
theraindog writes "Intel's X25-E Extreme SSD is easily the fastest flash drive on the market, and contrary to what one might expect, it actually delivers compelling value if you're looking at performance per dollar rather than gigabytes. That, combined with a rackmount-friendly 2.5" form factor and low power consumption, makes the drive particularly appealing for enterprise RAID. So just how fast are four of them in a striped array hanging off a hardware RAID controller? The Tech Report finds out, with mixed but at times staggeringly impressive results."
Solid State Slashdot Drive. (Score:3, Funny)
"So just how fast are four of them in a striped array hanging off a hardware RAID controller? The Tech Report finds out, with mixed but at times staggeringly impressive results.""
So in other words, I'll get First Post much faster since Slashdot switched over.
Actually, that RAID card seems more interesting (Score:5, Interesting)
A 1.2 GHz processor with 256MB of DDR2 memory? Holy crap! That's faster than my new Celeron 220! And the perennial question: can this thing run Linux?
Re: (Score:3, Informative)
That RAID card was the bottleneck. It can't support 4x the raw transfer rate of a single drive.
Exactly (Score:2)
I suspect the performance would have been a LOT better if they'd used something like the 3Ware 9690SA. 3Ware is also a LOT more Linux friendly.
Cheers,
Re: (Score:2)
It would have been nice to see some quick tests under Linux with ext3 / XFS / reiser / ext4 / btrfs / flavor_of_the_month just to see if that was really the drives or a vastly sub-optimal access pattern.
Re: (Score:2)
Windows filesystems don't even have an optimal access pattern. At least with ext2/3 you can optimise for RAID stripe and stride in a way that works regardless of the underlying RAID implementation, and significantly reduces the number of disks involved in reading/writing metadata.
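For anyone who wants to try that tuning, here's a rough sketch of the arithmetic (Python, purely illustrative; stride and stripe-width are the extended options documented for mke2fs, but double-check the spelling against your e2fsprogs man page):

# Illustrative only: compute ext2/3 stride and stripe-width hints for a striped array.
# stride = RAID chunk size in filesystem blocks; stripe-width = stride * number of data disks.

def ext_raid_hints(chunk_kib, data_disks, fs_block_kib=4):
    stride = chunk_kib // fs_block_kib        # filesystem blocks per RAID chunk
    stripe_width = stride * data_disks        # filesystem blocks per full data stripe
    return stride, stripe_width

# Example: 64 KiB chunks, 4-drive RAID0, 4 KiB ext3 blocks.
stride, width = ext_raid_hints(chunk_kib=64, data_disks=4)
print(f"suggested mke2fs extended options: -E stride={stride},stripe-width={width}")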
Re:Actually, that RAID card seems more interesting (Score:5, Insightful)
Actually, I felt that the limiting factor was probably the craptastic single-core Pentium 4 EE [techreport.com] they used to run all these benchmarks.
What, you shove thousands of dollars worth of I/O into a system, and run it through the paces with a CPU that sucked in 2005? I'm not surprised at all that most tests showed very little improvement with the RAID.
Re: (Score:2)
What I want to know is if the RAID controller had a battery backup unit installed so write caching could be enabled. There is no BBU shown in the article's picture of the controller.
I recently built a new Exchange server with 6 X25-Ms (we couldn't get the 64GB X25-Es when we ordered it) hooked to a 3ware 9650 in three separate RAID1 arrays. Turning on write caching switches the whole feel of the system from disappointingly sluggish to there-is-no-way-these-tire-marks-were-made-by-a-'64-Buick-Skylark-convertible.
What I want to see (Score:5, Interesting)
Is 4 of these in a RAID-1, running a seek-heavy database. Nobody does this benchmark, unfortunately.
Re: (Score:3, Interesting)
Re: (Score:3, Informative)
RAID5 has terrible random write performance, because every write causes a write to every disk in the array. It's VERY easy to saturate traditional disks' random write capabilities with RAID5/6, so it's rightly avoided like the plague for heavily hit databases.
I'm not certain how much of the performance hit is due to the latencies of the disks, so I feel it would be an interesting test to also see RAID5 database performance.
Also, Raid1 (or 10 to be more fair when comparing with RAID5) in a highly saturated environme
Re:What I want to see (Score:4, Informative)
RAID5's write performance is so awful because it requires so much reading to do a write.
For a small write I have to read the old data and the old parity (or, on some controllers, the rest of the stripe) before the new parity can be calculated and written back. Note that it's not the calculation that's slow, it's getting the data for it. So that's multiple operations to do a simple write.
A write on RAID1 requires writing to all the drives, but only writing. It's a single operation.
RAID1 is definitely faster (or as fast) for seek-heavy, high-concurrency loads, because each drive can be pulling up a different piece of data simultaneously.
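To put rough numbers on that, here is a back-of-the-envelope sketch (Python, textbook read-modify-write model only; real controllers with write-back caches or full-stripe writes can do better) of the physical I/Os behind a single small random write:

# One small (sub-stripe) random write, textbook read-modify-write model.
def raid5_small_write_ios():
    # read old data + read old parity, then write new data + write new parity
    return 1 + 1 + 1 + 1

def raid1_small_write_ios(mirrors=2):
    # one write per mirror; the writes can be issued in parallel
    return mirrors

print("RAID5 :", raid5_small_write_ios(), "I/Os per small write")   # 4
print("RAID1 :", raid1_small_write_ios(), "I/Os per small write")   # 2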
Re: (Score:2)
If you set up your RAID block size and your filesystem block size appropriately, you won't have to read-before-write, at least not very often. When I set up RAIDFrame on NetBSD with a 4-drive RAID-5, performance was dismal because every write was a partial write (3 data disks meant it was impossible for the FS block size to match or be an even multiple of the RAID block size). Going to 3 drives or 5 drives, performance increased about 8-10 times.
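To illustrate the 3-data-disk problem with a made-up chunk size (the 16 KiB chunk below is hypothetical, not from the post): with power-of-two write sizes, a full stripe over 3 data disks is never hit exactly, so every write degenerates into a partial-stripe read-modify-write, while 2 or 4 data disks line up fine.

# Made-up example: which power-of-two filesystem block sizes cover whole stripes?
# Full data stripe = per-disk chunk size * number of data disks.

def aligned_block_sizes(chunk_kib, data_disks, candidates_kib=(4, 8, 16, 32, 64, 128, 256)):
    stripe_kib = chunk_kib * data_disks
    return stripe_kib, [s for s in candidates_kib if s % stripe_kib == 0]

for data_disks in (3, 4):
    stripe, hits = aligned_block_sizes(chunk_kib=16, data_disks=data_disks)
    print(f"{data_disks} data disks: full stripe {stripe} KiB, matching block sizes: {hits or 'none'}")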
BAARF (Score:2)
Why not run RAID-5 (or 50 or 15) if it is seek-heavy?
Because four drives in a RAID-10 are three times as reliable as the same four drives in a RAID-5. Arrays of large drives are more vulnerable to drive failures during reconstruction than arrays of small drives, and RAID-5 is much more vulnerable to a double drive failure than RAID-10 [miracleas.com]. In RAID-5, you lose data if any two drives fail. In RAID-10, you lose data only if the drives that fail are from the same mirrored pair, and there's only a 1 out of 3 chance that two randomly selected drives will be from the same pair.
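The 1-in-3 figure is easy to check by brute force; here's a quick sketch (Python) enumerating which two-drive failures are fatal for a 4-drive array, assuming RAID-10 mirror pairs (0,1) and (2,3):

from itertools import combinations

pairs = [{0, 1}, {2, 3}]                      # assumed RAID-10 mirror layout
failures = list(combinations(range(4), 2))    # all ways two of the four drives can die

raid5_fatal = len(failures)                   # RAID-5: any two failures lose data
raid10_fatal = sum(set(f) in pairs for f in failures)

print(f"two-drive failure combinations: {len(failures)}")    # 6
print(f"RAID-5 fatal:  {raid5_fatal}/{len(failures)}")        # 6/6
print(f"RAID-10 fatal: {raid10_fatal}/{len(failures)}")       # 2/6 = 1 in 3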
Re: (Score:2)
Do you mean RAID 0?
Re:What I want to see (Score:4, Informative)
Re: (Score:2)
Re: (Score:2)
It's not simply a matter of interleaving; independent requests can be executed simultaneously. Read performance, especially seeking, can scale linearly with the number of drives in a RAID1.
Redundant Array of what? (Score:5, Funny)
This is a very expensive solution. What part of Redundant Array of Inexpensive Disks don't they understand?
Re: (Score:2)
Re: (Score:2)
I've never understood why they call it RAID 0. Striped Array would suffice. Why is it a Redundant Array of In(expensive|dependent) Disks 0 (as in NOT)?
Re: (Score:2)
So, what is it now? Random Assortment of Independent Datastores?
Re: (Score:2, Informative)
Independent disks. And remember that some high-end SCSI or Fibre Channel RAIDs have never fit the antiquated "Inexpensive" bit.
Re:Redundant Array of what? (Score:5, Informative)
Re: (Score:2)
Re: (Score:2)
The performance part.
Re:Redundant Array of what? (Score:4, Funny)
Re: (Score:2)
Yes, well, that, or maybe it's just that the notion of "expensive" disks is gone. These days, you pay a tiny amount per GB, which usually goes down with increasing size. Oh sure, you may pay a bit more at the top, but it's not much.
It used to be that you could get huge drives. I'm not just talking about the fact that they would store like 20+MB, but they were also physically huge. I used to have one that was 5 platters and two 3.5" slots high (though my memory is fuzzy). These suckers were EXPENSIVE; m
Comparisons a little unfair in places (Score:5, Insightful)
It seemed a little unfair that they only used the nice hardware RAID controller with the Intel SSDs. I would have liked to see them use it with all the other disks to get a more level playing field.
Re: (Score:2)
Indeed, telling us to ignore the extra minute in the X25-E RAID0 boot times compared to the other setups is highly disingenuous. RAID setups are slower to boot because you have to load the RAID BIOS first; if you really care about fast booting, it's something you need to be aware of. There were also CPU-bound cases where the RAID0 setup performed slightly worse than the single disk, an obvious sign of a performance hit from the RAID card.
Re: (Score:2)
What was the target for this test??? (Score:2)
Doom levels????
Office tasks???
Okay folks, I can only see a few groups using this kind of setup.
Not one Database test?
I mean a real database like Postgres, DB2, Oracle, or even MySQL. Doom3... yea those are some benchmarks.
Re: (Score:2)
Look in the later pages of the review. There's a bit of everything. There are IOMeter benches there, with very enlightening results.
Re: (Score:2)
But likely not due to disk. If I remember, that was a heavily CPU-bound process.
Test it on a better system than an old P4 CPU (Score:2)
Test it on a better system than an old P4 CPU.
Re: (Score:2)
Fantastic Slashvertising (Score:3, Insightful)
Intel's X25-E Extreme SSD is easily the fastest flash drive on the market, and contrary to what one might expect, it actually delivers compelling value if you're looking at performance per dollar rather than gigabytes
I hope someone got a healthy commission from Intel for writing that...
Re: (Score:2)
Re: (Score:2)
Is it possible to do any kind of article on a commercial product without it being "astroturfing" of some form or another?
Yes, it is. They didn't need to write it as
Intel's X25-E Extreme SSD is easily the fastest flash drive on the market, and contrary to what one might expect, it actually delivers compelling value if you're looking at performance per dollar rather than gigabytes
When they could have just as easily said
We tested Intel's X25-E Extreme SSD drive in a four-disk RAID configuration
There was no need to tout the product like that on the front page.
I just want to know the SlashDweeb rules
There was no need for that, either. I rather doubt that someone is forcing you to read anything on this website. You could read something completely different if you prefer, or not read anything technical at all.
I stand by my criticism of this article. The headline did not need to be such blatant advertising of the Intel drives.
SATA/Flash for RAM? (Score:2)
Other than just using one of these Flash RAIDs as a swap volume, is there a way for a machine running Linux to use them as RAM? There are lots of embedded devices that don't have expandable RAM, or for which large RAM banks are very expensive, but which have SATA. Rotating disks were too slow to simulate RAM, individual Flash drives probably too slow, but a Flash RAID could be just fast enough to substitute for real RAM. So how to configure Linux to use it that way?
Re: (Score:2)
RAIDing SSDs is a wimp solution (Score:2)
No need to RAID this puppy. Make sure you spring for the redundant power supplies and r
USB flash drive RAID? (Score:3, Interesting)
I hate to ask but... (Score:3, Interesting)
Re:Oh good (Score:5, Insightful)
'cause regular hard drives usually survive 5 years in an enterprise environment, yep yep.
Re:Oh good (Score:5, Insightful)
'cause SSD's don't cost $300-$500 more than their spindle counterparts, yep yep.
Re:Oh good (Score:5, Informative)
> 'cause SSD's don't cost $300-$500 more than their spindle counterparts, yep yep.
Hint: Enterprise storage purchasing often looks at dollars/IOPS rather than dollars/GB.
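A toy comparison shows why the two metrics point in opposite directions (all numbers below are hypothetical placeholders, not prices or specs from the article):

# Hypothetical figures only, to illustrate $/GB vs $/IOPS.
drives = {
    "SLC SSD (hypothetical)": {"price": 700, "gb": 32,  "iops": 3000},
    "15K HDD (hypothetical)": {"price": 250, "gb": 146, "iops": 180},
}

for name, d in drives.items():
    print(f"{name:24s} ${d['price'] / d['gb']:7.2f}/GB   ${d['price'] / d['iops']:6.3f}/IOPS")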
Re: (Score:2, Funny)
"Enterprise storage purchasing often looks at dollars/IOPS rather than dollars/GB."
Which is good news, as this Intel Slashdot advert says it's "compelling value if you're looking at performance per dollar rather than gigabytes."
Re:Oh good (Score:4, Interesting)
Re: (Score:2)
Hardly a fair comparison. If an enterprise drive doesn't get stolen on the way out of dry dock, it surely won't be long before the thing is being attacked by Romulans.
Re: (Score:2)
Besides, even if some blocks go bad you can map around them; the SSD itself might even do it.
Besides, you are unlikely to be using the same drive in 5 years time and magnetic drives have a much higher chance
Re:Oh good (Score:5, Informative)
Your enterprise environment must not be hitting its drives very hard.
Where SSDs is in disk operations that are usually lagged out by seek times; a big unwieldy database that gets a lot of writes and no downtime, for instance, is happiest when it lives on a striped SSD array.
Coincidentally, this is exactly the type of workload which is most likely to shorten a magnetic drive's life.
Re: (Score:2)
er, "where SSDs shine is..."
Re: (Score:2)
Re:Oh good (Score:5, Insightful)
I'll be sure to do that, and replace them every 5 years when they run out of write operations.
Winchester drives, on the other hand, use a time-honored complex system of delicate moving parts, and last virtually forever. They certainly do not start experiencing sudden failures if kept in continuous service for more than 5 years.
Re:Oh good (Score:5, Funny)
I must be doing something wrong then. Should I put my computer in the freezer when I'm not using it or something, like, to keep it fresh longer?
Re: (Score:2)
Yes.
Re: (Score:2, Interesting)
Re: (Score:2)
Who the hell modded you insightful, especially for claiming a system of delicate moving parts lasts virtually forever...
What about watches? /sarcasm
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
what about penises? (peni?)
But! (Score:2)
Re: (Score:3, Interesting)
Re:Oh good (Score:5, Informative)
Make that 228 years [intel.com].
Life expectancy 2 Million Hours Mean Time Before Failure (MTBF)
Hint: learn about "wear leveling"
Re: (Score:2)
It can be a good ballpark figure for differentiating enterprise-class drives from consumer drives, but it should NOT be treated as an expected lifetime.
There are too many things to take into account: the temperature surrounding the drive, how many days it's powered on, how long per day it's on, how many writes go to the drive, how much voltage is supplied, and so on.
Re:Oh good (Score:4, Insightful)
MTBF is a highly inaccurate way to show how long you should expect a drive to live. The whole Seagate Fiasco is a prime example of why NOT to believe them.
Misuse of a statistical figure is a problem with those misinterpreting it. Obviously things have changed since schools taught the difference between the mean, the mode, the median, and the minimum. If I run an ISP then MTBF is useful for me to calculate costs, both in replacements and labour costs. It's not supposed to be a measurement for consumers though that will be buying single unit quantities.
Buying a hard drive is like buying a washing machine. If I'm lucky it will go on practically forever. On the other hand, if I'm unlucky it could die tomorrow. As Piranhaa says, there are too many variables. All I can go on is that if it comes with a guarantee of 3 years, then I assume the manufacturers have designed it to mostly exceed that figure, otherwise they would end up losing money on the product. I still have to ensure I have a contingency plan in case it breaks down.
Phillip.
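For the fleet-planning case mentioned above, MTBF is used as an aggregate failure rate; here's a sketch (Python, with an assumed fleet size) of the expected replacements per year:

# Expected failures per year across a fleet, treating the vendor MTBF as an average
# failure rate. It says nothing about when any individual drive will die.
def expected_failures_per_year(fleet_size, mtbf_hours, hours_per_year=24 * 365):
    return fleet_size * hours_per_year / mtbf_hours

# Assumed example: 1,000 drives rated at 2,000,000 hours MTBF, powered on 24/7.
print(f"{expected_failures_per_year(1000, 2_000_000):.1f} expected failures per year")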
Re: (Score:2)
Fail. (Score:2)
MTBF is Mean Time Between Failures (i.e., the device is repairable).
For HDDs, you should really be talking about MTTF.
http://en.wikipedia.org/wiki/MTBF [wikipedia.org]
Re: (Score:2)
Re: (Score:2)
If I keep my current 15K drives that long (Score:4, Interesting)
I will be surprised.
See, in the enterprise environment that I work in, the majority of our big hardware is leased. I am quite willing to use what I can to maintain performance and reliability. That being said, my system is built entirely on 15K drives of various sizes. I am not worried about the five years or so of read/write life that SSD drives have; all I want to see is a track record. I expect to replace most of the drives I have now within five years, so this "five year limit" many like to toss out is immaterial to me. Reliability over that lifetime is of more importance.
Besides, the nice benefit of SSD drives is I don't need special enclosures (read: ones that can handle the torque these puppies can put out)
Re: (Score:2)
http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&p=4 [anandtech.com]
"Given the 100GB per day x 5 year lifespan of Intel's MLC SSDs, there's no cause for concern from a data reliability perspective for the desktop/notebook usage case. High load transactional database servers could easily outlast the lifespan of MLC flash and that's where SLC is really aimed at. These days the MLC vs. SLC debate is more about performance, but as you'll soon see - Intel has redefined what to expect from an MLC drive."
Re: (Score:2)
Re: (Score:2)
Re:paging benefits? (Score:4, Insightful)
I really don't get this obsession with page files these days. Say you have 4GB of RAM and a 4GB page file. Memory is cheap these days, so rather than using 4GB of (relatively slow) SSD, why not just get another 4GB of RAM?
Re:paging benefits? (Score:4, Informative)
SSD shouldn't be for paging. That would become very expensive (even with wear leveling) if you have a minimal amount of RAM (say 256M) to run large (say 16G) operations. It would also be slow since you have the overhead of whatever bus system your hard drive/ssd is connected to.
Technically, hard drives aren't really meant for paging either; it's just a cheap and simple trick to avoid making people pay a lot for (expensive) RAM or have their programs crash when they occasionally run out of memory. However, if your system is paging heavily, more RAM is the better and faster fix.
Anecdote: I worked at a place once where cheap ($500) hardware was sold as dedicated SQL/IIS servers (you could fit 10 of them in 5U), and a lot of customers thought they could run whatever they wanted on them (Microsoft ran MSN for a whole country off one of these for a while), but they only supported a maximum of 2G of RAM (4G according to the BIOS, but the modules back then were too expensive). Of course the PHB just said: let them swap, and aside from heavy slowdowns they ran fairly well. Well, those heavy users all crashed their software RAIDs in less than a year (the heavy load made Windows get the RAID out of sync, and then the first hard drive would fail). The temperature was fine, but simply swapping was too much for the cheap hard drives (Maxtor and Seagate) and they all failed.
Re: (Score:2, Informative)
SSD shouldn't be for paging. That would become very expensive (even with wear leveling) if you have a minimal amount of RAM (say 256M) to run large (say 16G) operations. It would also be slow since you have the overhead of whatever bus system your hard drive/ssd is connected to.
You talk like you know what you're talking about, but then the reader realizes you don't understand what happens when the CPU spends 99% of its life in a wait state waiting for paging operations. Swap is not a high-intensity workload; swap workload increases six orders of magnitude faster than CPU workload, meaning that once you start swapping, you spend lots of time swapping.
As the hard disk is external, this cost increases with CPU speed; a swap operation taking 1,000,000 cycles on a 1 GHz CPU (1 ms) will ta
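The scaling argument can be made concrete with order-of-magnitude latencies (the figures below are rough assumptions, not measurements): the wall-clock cost of a page-in is fixed by the device, so the cycles thrown away grow linearly with clock speed.

# Rough illustration: CPU cycles wasted per page-in at different clock speeds.
# Latencies are order-of-magnitude assumptions.
page_in_latency_s = {"7200 rpm disk": 10e-3, "SATA SSD": 0.1e-3}

for clock_ghz in (1, 3):
    for device, latency in page_in_latency_s.items():
        wasted = latency * clock_ghz * 1e9
        print(f"{clock_ghz} GHz CPU, {device:13s}: ~{wasted:,.0f} cycles lost per page-in")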
Re: (Score:2)
Desktop memory is indeed cheap.
However, I don't think I've seen a desktop board that could go over 8 gigabytes, and most top out at four. Server boards can go higher BUT
* Intel Xeon boards require FBDIMMs, which are expensive.
* AMD Opteron boards can use ordinary DDR2, but the CPU performance sucks compared to the aforementioned Xeons.
Re: (Score:3, Interesting)
If you go multiprocessor (not multicore) then you get much higher memory bandwidth (NUMA). Sometimes that matters more than CPU power.
Re: (Score:2)
However, I don't think I've seen a desktop board that could go over 8 gigabytes, and most top out at four.
Maybe this was true a year ago, but not now. I recently got a PC with an Asus P5Qpro motherboard - it supports up to 16GB. http://www.asus.com/products.aspx?l1=3&l2=11&l3=709&l4=0&model=2269&modelmenu=2 [asus.com]
Re: (Score:2)
The new Core i7 boards from Intel max out at either 12GB or 24GB.
Re: (Score:3, Insightful)
I don't think anyone with 4 GB or more of RAM should be using a page file at all. Maybe even 2 GB. It just doesn't make sense. With that much memory, what good is a 512 MB page file really going to do? And if you're swapping more than 512 MB of RAM to disk, your machine is going to be thrashing like mad and unusable anyway.
It's stupid that many OSes allocate 2x your RAM as a page file. Are you really going to swap 8 GB of RAM to disk? I mean seriously, that would be unusable.
Even when I had
Re:paging benefits? (Score:4, Interesting)
I'm Betaing Windows 7. Before going to bed I set up a swap partition for it. After getting up the next morning and checking, it was full.
I have *no idea* what W7 put in there while I was sleeping.
Re:paging benefits? (Score:4, Informative)
In any modern operating system, including Windows, swap isn't just used for out-of-physical-memory conditions. It's also used to "page out" portions of the operating system and libraries, shared objects, DLLs, etc., that aren't being used at the moment. This actually speeds your system up by allowing more memory to be used as disk read/write cache.
I've looked at Linux boxes with 64GB of memory in them and only using 25% of that. I usually get asked by someone, "wasn't 64GB enough? Why is there some usage in swap right now?" It's normal, I explain. The kernel just pages out sections of Linux that aren't needed, to free up more RAM for filesystem caching.
I think perhaps Windows 7 just has a more aggressive way of doing this, probably because if you need to use some obscure Windows Directmedia SuperDRM doubleplusgood Plugin X, it's just as fast to reload it out of swap into memory as it is to load the binary from disk. But 99% of home users will never load that plugin so it can stay safely swapped out, giving you more precious memory for applications and disk cache.
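If you want to see this split on your own box, here's a quick sketch (Python, Linux-only since it reads /proc/meminfo) that reports how much memory is sitting in page cache versus how much has been pushed to swap:

# Quick look at how a Linux box splits memory between applications, page cache, and swap.
# Reads /proc/meminfo; field names are the standard kernel ones.

def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])   # values are reported in KiB
    return info

m = meminfo()
print(f"Total RAM : {m['MemTotal'] / 1024:.0f} MiB")
print(f"Free      : {m['MemFree'] / 1024:.0f} MiB")
print(f"Page cache: {m['Cached'] / 1024:.0f} MiB")
print(f"Swap used : {(m['SwapTotal'] - m['SwapFree']) / 1024:.0f} MiB")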
Because Windows memory management sucks? (Score:2)
No matter how much RAM you give Windows, it will still page. It's to the point where people make ramdisks to put pagefiles on.
Not that I should talk... I have 1 gig of swap on Linux, and I'm thinking I could use more. Why? Because I have 4 gigs of RAM, and if I'm actually using even half of that, I can't hibernate.
Re: (Score:3, Interesting)
I'm no expert, but wouldn't that be a redundant statistic? If it handles normal reads/writes faster than a disk drive, couldn't you presume paging would be faster as well?
Although it would be interesting to see a RAM-less PC try to run on SSDs only... somehow using normal data reads/writes and memory reads/writes on the same SSD (if that's possible). Guess that's what we'll end up with eventually anyway, where your amount of memory is the amount of free space you have on your SSD, no longer a separate component
Re: (Score:2)
Re: (Score:2)
I'd run my page file on DRAM chips, if I could afford it. Serious applications might want a card that interfaces through the PCIe bus. I can't even afford to ask how much one of those costs.
Re: (Score:2)
[citation needed]
Re:New acronym: RAVED? (Score:5, Funny)
No, its got a cooler acronym, RAVEN: Redundant Array of Very Expensive Not-disks-but-some-silly-stack-of-flash-memory-chips.
Re: (Score:2)
No, its got a cooler acronym, RAVEN: Redundant Array of Very Expensive Not-disks-but-some-silly-stack-of-flash-memory-chips.
This is /.:
RAMEN - Redundant Array of More Expensive Not-disks-but-some-silly-stack-of-flash-memory-chips.
Re: (Score:2)
That was me, I accidentally checked "post anonymously".
I want my karma! *sobs*
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
But, from what I saw there, you didn't make a real RAID0; you benchmarked several mountpoints simultaneously instead.
An md0-like benchmark would be nice to see.
Re: (Score:2)
Re: (Score:2)
What poached actually typed:
I found that joke to be very arousi^H^H^H^H^H^Hjuvenile and I have a hard time reading the rest of the article without an erec^H^H^H^H^H^H prejudicial eye.