Intel Takes SATA Performance Crown With X25-E SSD

theraindog writes "We've already seen Intel's first X25-M solid-state drive blow the doors off the competition, and now there's a new X25-E Extreme model that's even faster. This latest drive reads at 250MB/s, writes at 170MB/s, and offers ten times the lifespan of its predecessor, all while retaining Intel's wicked-fast storage controller and crafty Native Command Queuing support. The Extreme isn't cheap, of course, but The Tech Report's in-depth review of the drive suggests that if you consider its cost in terms of performance, the X25-E actually represents good value for demanding multi-user environments."

  • by ltmon ( 729486 ) on Tuesday November 25, 2008 @12:10AM (#25881469)

    It's pretty much apples and oranges. Even with batteries (which I wouldn't trust) RAM has different characteristics in power consumption, heat output, storage density etc. By the time you address these challenges you'd have... an SSD.

    Plus the SSDs get their long life from having more raw storage than advertised, and dynamically shutting down dead areas and bringing in reserve areas as it ages. Your sums would have to take into account the cost of this "hidden" storage.

    As an aside, the best use for these things is hands down as extended cache in a storage array. One or two SSDs alongside a few terabytes of "normal" disk, managed by an intelligent filesystem or storage firmware, can speed the whole beast up by phenomenal amounts depending on the data usage patterns. Yet the total cost of the whole storage appliance is really not much changed in relative terms. Some of the new Sun boxes are designed to work with SSDs like this, and boxes from other big storage vendors probably are as well.

  • by phr1 ( 211689 ) on Tuesday November 25, 2008 @12:11AM (#25881475)
    PC-6400 RAM is around 15 dollars a GB now, and the 6400 stands for MB/sec, i.e. RAM is over 20x faster than this flash drive and has no write wear issues or slowness of random writing. The only thing wrong with it is volatility, but in an enterprise environment you can use a UPS and/or maintain a serial update log on a fast hard disk or RAID (since the log is serial, the flash drive's ultrafast seek advantage doesn't apply). There is just a narrow window where this $21/GB 32GB flash drive is really cost effective.
  • by nitsnipe ( 1332543 ) on Tuesday November 25, 2008 @12:18AM (#25881523)

    What happens when the read-write cycles on this run out?

  • Weak test system (Score:5, Interesting)

    by dark_requiem ( 806308 ) on Tuesday November 25, 2008 @12:18AM (#25881525)
    I would have liked to see them test this drive in a much more powerful system. I mean, a P4 with 1GB RAM, and a fairly dated chipset (955X) as the SATA controller? No one is going to put a drive like this in a system that old. I'd guess that we might see different results on a more powerful system. At some point in those tests, other components of this fairly slow (by today's standards) machine likely become the bottleneck. Throw some serious power behind it, and you can be sure that you're not bottlenecked, and the full power of the drive shows. Can't say for sure if this is actually the case, as I don't have a drive to test, but it's a definite possibility. Hopefully someone else does a similar review with a more powerful testbed.
  • by symbolset ( 646467 ) * on Tuesday November 25, 2008 @12:27AM (#25881611) Journal

    You be the judge [techreport.com]. I would consider a factor of 80x improvement in IO/s over the best HDD, and 2x your best competitor (yourself), "wicked-fast door blowing screams" if you're looking at transaction processing for a database or other IOPS-bound application. This is not the review that's overzealous about a 4% processor speed improvement. Stripe that across 5 or 10 of these bad boys and the upside potential is, um, noticeable? If we can't get a little enthusiastic about that, what does merit it? A flame paint job and racing stripes? A Ferrari logo? The next step up from here is RAMdisk. Yeah, it's not going to make Vista boot in 4 seconds. Is that the metric that's driving you?

    Capacity is still lacking at 32GB, but obviously they could expand it now, and 64GB will be available next year. Naturally, if they wanted to make a 3.5" form factor they could saturate the bandwidth of the interface and stuff 320GB into a drive with no problem, courting the folks who can (and most definitely would) pay $10,000 for that premium product (HINT HINT). Obviously the price bites, but they can get it for this, so why not? Naturally for challenging environments (vibration, rotation, dropping under use, space applications, heat) it's a big win all the way around. Isn't SATA 3.0 (6Gbps) due soon?

    I think I foresaw some of these improvements here some years ago. I'm glad to see them in use. If I were to look forward again, I would say that it might be time to abandon the euphemism of a hard disk drive for flash storage, at least for high-end devices. You can already reconfigure these chips in the above-mentioned 320GB drive to saturate a PCIe 2.0 x4 link (20 gigatransfers/sec), which makes a nice attach for InfiniBand DDR x4. The SATA interface allows a synthetic abstraction that is useful, but the useful part is that it's an abstraction -- you don't need to continue the cylinder/block/sector metaphor once you accept the utility of the abstraction.

  • by dark_requiem ( 806308 ) on Tuesday November 25, 2008 @12:28AM (#25881613)
    Products that use RAM as the storage media have been around for years. They're exactly what you're describing: a few standard DDR DIMMs and a battery on a PCI card, usually. However, no one in an enterprise environment would actually trust data to such a device, and they never really took off. Home users don't generally have the power and data backup capacity to safely use such a device (and not even the most hardened masochist wants to reinstall or restore everything whenever a breaker goes), and enterprise users can't tolerate the risk level. Sure, you can have backup power, but the risk of losing data and downtime restoring it just isn't tolerable in most enterprise environments.
  • Re:NCQ on an SSD? (Score:4, Interesting)

    by Bacon Bits ( 926911 ) on Tuesday November 25, 2008 @12:52AM (#25881807)

    Even if it's solid state, there's still a physical layer, and still a logical-to-physical abstraction that an IDE disk must perform. (Slashdot pedants will please note that here an IDE disk means a disk with an integrated electronic controller, not just a drive with an ATA interface. If you've never had to know the true physical geometry -- the number of cylinders, heads, and sectors in a disk (CHS) -- to tell your PC's BIOS or OS, you've never used a non-IDE disk. Most BIOSes were already faking CHS numbers by the time EIDE arrived in 1994 and eliminated CHS in favor of LBA.)

    Flash drives use NAND flash memory, which is organized into pages of up to about 4KB. For the most part, you can only access a single page at a time. Additionally, sequential access within a page is almost always faster than random access. Giving the drive's integrated controller a queue of outstanding requests means that it can examine the whole queue and schedule its paging operations more intelligently (a rough sketch of the idea follows).
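
    To make that concrete, here is a minimal Python sketch of the idea -- a toy model under assumed sizes (512-byte sectors, 4KB pages, a made-up logical-to-physical table), not anything resembling Intel's actual firmware. Given a whole queue of requests, the controller can resolve them through its mapping table and group the ones that land on the same NAND page:

        # Toy model of page-aware scheduling in a flash controller (illustration only).
        from collections import defaultdict

        SECTOR_SIZE = 512                       # bytes per LBA sector
        PAGE_SIZE = 4096                        # ~4KB NAND page, as noted above
        SECTORS_PER_PAGE = PAGE_SIZE // SECTOR_SIZE

        # Hypothetical logical-to-physical map: logical page -> (block, page) on flash.
        l2p = {lp: (lp // 64, lp % 64) for lp in range(1024)}

        def schedule(queued_lbas):
            """Group a queue of LBAs by the physical NAND page they resolve to,
            so each page is opened once and accessed sequentially within the page."""
            by_page = defaultdict(list)
            for lba in queued_lbas:
                by_page[l2p[lba // SECTORS_PER_PAGE]].append(lba)
            return [(page, sorted(lbas)) for page, lbas in sorted(by_page.items())]

        # Eight scattered requests collapse into three page accesses.
        print(schedule([3, 9, 8, 100, 101, 2, 10, 99]))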

  • NCQ? (Score:3, Interesting)

    by melted ( 227442 ) on Tuesday November 25, 2008 @12:59AM (#25881851) Homepage

    Why the heck would a drive that has uniform, low-latency random access even NEED NCQ? NCQ was designed to optimize the seek order in mechanical drives with heads.

  • by symbolset ( 646467 ) on Tuesday November 25, 2008 @01:13AM (#25881941) Journal

    Now plug these things into your SAN -- because they plug right in -- and do the math again. 50% price premium for 80x the aggregate IOPS and 10x the bandwidth? Your SAN needs new connectors to handle the speed.

    This is a slam dunk. Admit it.

  • by Hal_Porter ( 817932 ) on Tuesday November 25, 2008 @01:41AM (#25882057)

    Actually, even if you do use it for swap, or for some application that writes absolutely flat out, the lifetime is less than Intel quotes. If I use the formula here

    http://www.storagesearch.com/ssdmyths-endurance.html [storagesearch.com]

    2 million (write endurance cycles) x 64GB (capacity) divided by 80MB/sec (write speed) gives the endurance-limited life in seconds.

    I can work out how much less.

    If you substitute the figures Intel gives for write endurance (100,000), capacity (80GB) and write speed (170MB/sec), you only get 1.5 years. Bear in mind that's for an application that writes flat out at 170MB/sec, 24 hours a day.

    The odd thing is when you compare it to the X25-M: write endurance is 10x less at 10,000, and write speed is lower at 70MB/sec. There I get 0.37 years. Mind you, with MLC memory being 1/3 the price you could just buy three times as much of it. That way you get 3x the storage and a lifespan of 1 year absolute worst case. MLC actually seems like a better choice for most people.

    Incidentally, this really is a worst case, hopefully no real world application can saturate write bandwidth like this.

    It would also make sense to gradually decrease the write bandwidth so the drive slows down in its old age but takes longer to die. Throttling write bandwidth to 70MB/sec on the SLC drive would give a life of 3.7 years. Throttling to 70MB/sec after half the writes were used up, for an average write rate of 120MB/sec, would give you 2.16 years. You could imagine a sort of Zeno's throttling algorithm (50% bandwidth at 50% life, 25% bandwidth at 75% life and so on) where the write bandwidth keeps dropping, so the drive slows down but never actually dies (both calculations are sketched below).
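
    For what it's worth, here is a rough Python sketch of that arithmetic, using only the figures quoted in this comment (none of them come from an Intel datasheet), plus an illustrative version of the Zeno throttling schedule:

        # Endurance arithmetic per the storagesearch.com formula, plus the "Zeno"
        # throttling schedule. All figures are the ones quoted in this comment.
        SECONDS_PER_YEAR = 365 * 24 * 3600

        def endurance_years(write_cycles, capacity_gb, write_mb_per_sec):
            """Worst-case life assuming writes at full speed, 24 hours a day."""
            total_writable_bytes = write_cycles * capacity_gb * 1e9
            return total_writable_bytes / (write_mb_per_sec * 1e6) / SECONDS_PER_YEAR

        print(endurance_years(100000, 80, 170))   # ~1.5 years  (SLC figures above)
        print(endurance_years(10000, 80, 70))     # ~0.36 years (MLC figures above)
        print(endurance_years(100000, 80, 70))    # ~3.6 years  (SLC throttled to 70MB/sec)

        def zeno_schedule(full_mb_per_sec, steps=5):
            """Halve the write bandwidth each time half of the remaining life is gone:
            full speed until 50% wear, 50% speed until 75% wear, and so on."""
            return [(1 - 0.5 ** n, full_mb_per_sec * 0.5 ** n) for n in range(steps)]

        for wear, bw in zeno_schedule(170):
            print("from %.0f%% wear, throttle writes to %.1f MB/sec" % (100 * wear, bw))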

  • by Anonymous Coward on Tuesday November 25, 2008 @02:21AM (#25882361)

    Anyone who tells you SSDs are a replacement for disks is at best talking about some niche workloads and at worst trying to sell you a line of the old BS. SSDs render 15k rpm disks obsolete all right, but not in the way you're suggesting.

    To get the same capacity as you'd get out of those hot, expensive disks - which is not a lot given what you're paying for them - you'd need to spend much more, and you'll likely find that performance levels off quickly when you saturate out your HBAs and/or CPUs and/or memory bus and/or front-end connectivity. Much better to combine a few of these with slow, cool, cheap disks to maximise both performance and capacity at a lower price than the 15k disks.

    Let's take an example.

    Suppose you have 14 146GB 15k rpm disks. They cost you $2000 apiece, or $28000. Each one gives you 300 IOPS, for a total of 4200 (we'll ignore the costs and inefficiencies of the hardware RAID dinosaur you're probably using these with; if we didn't, you might start to feel stupid about it). So you spent about $6.67/IOPS or $14/GB, plus the power and cooling to keep those disks spinning. Not cheap. Not particularly fast. Not really great in any way.

    Suppose instead that you want to replace them with these 80GB SSDs. You'll probably pay your vendor around $1400 for them (figure 60% margin like they're getting on those FC drives you've been buying from them). Now you need 26 of them to get the same capacity, costing you $36400. But you get about 12000 read IOPS each (write latency suspiciously omitted from this fluff piece, but we'll dubiously assume it's similar - it almost certainly isn't anywhere close) for a total of 312000. Too bad your HBA can do only about 140000, so you'll max out there on random reads. And if we're talking about block sizes larger than 512 bytes, latency will be higher. So you've spent $0.26/IOPS, which is great, and you've saved money on operating costs as well. But you actually spent a lot more in total - $18/GB - and woe unto you if you need more capacity; demand for storage tends to double every 12-18 months, and adding in 80GB chunks at $18/GB is going to hurt. Sure, prices will drop, but not fast enough to be competitive with the multi-TB disks we're already seeing today.

    Finally, suppose instead that you buy 2 of these SSDs to act as log devices and then buy 4 1TB 7200rpm SAS disks for $350 each. You've spent $4200 and you've gotten 24000 IOPS. That's $0.18/IOPS or $0.48/GB, and you've actually spent much less in absolute terms as well. You're still spending only a tiny fraction in power and cooling of what you were spending on the original all-disk solution, and you've got twice as much total storage capacity. Best of all, you can now grow your storage in two dimensions, not just along a line fixed by the disk vendors. Need more IOPS? Add another SSD or two. Need more capacity or streaming bandwidth? Add some more rotating rust.

    This approach gives you the best of all worlds, something you can't get by blindly replacing all your disks with SSDs. In other words, you get to pick the spot along the performance/cost/capacity curve that's right for your application. Using only SSDs, only slow disks, or only expensive disks doesn't do that. Upon a moment's thought, this should be obvious: when your computer needs to perform better, adding DRAM is usually the best way to make that happen. When it needs to store more data, adding disks is the way to go. You don't add disks to improve performance (one hopes... if you need to do that, your storage vendor is probably taking you to strip clubs) and you don't add DRAM to increase storage capacity. This is no different. Flash occupies an intermediate spot in the memory hierarchy and has to be thought of that way. It's exciting to see the prices fall and capacities rise like they have, but I don't think a lot of people really understand yet just how SSDs are going to change things. (The cost arithmetic above is sketched below.)
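
    A rough Python sketch of the back-of-the-envelope numbers above; every price, capacity and IOPS figure is the ballpark assumption from this comment, not a vendor spec:

        # Back-of-the-envelope cost comparison from this comment. Prices, capacities
        # and IOPS are the ballpark assumptions above, not vendor specifications.
        def cost_metrics(name, unit_price, units, gb_each, iops_each, iops_ceiling=None):
            total_cost = unit_price * units
            total_gb = gb_each * units
            total_iops = iops_each * units
            if iops_ceiling is not None:          # e.g. the HBA saturates first
                total_iops = min(total_iops, iops_ceiling)
            print("%-22s $%-6d  $%.2f/IOPS  $%.2f/GB"
                  % (name, total_cost, total_cost / total_iops, total_cost / total_gb))

        cost_metrics("14x 146GB 15k disks", 2000, 14, 146, 300)
        cost_metrics("26x 80GB SSDs", 1400, 26, 80, 12000, iops_ceiling=140000)

        # Hybrid: 2 SSDs as log devices plus 4x 1TB 7200rpm SAS disks for capacity.
        hybrid_cost = 2 * 1400 + 4 * 350
        print("hybrid (2 SSD + 4x 1TB): $%d total, $%.3f/IOPS" % (hybrid_cost, hybrid_cost / 24000))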

  • by afidel ( 530433 ) on Tuesday November 25, 2008 @03:35AM (#25882889)
    Well, I just checked and a hot disk in my SAN has done a bit over 15M 128K writes in the last 3 weeks, so about 1.92TB in 21 days, or close to 100GB per day. I have replaced 3 drives out of 150 in the last 2.5 years (well, 5 total, but 2 were precautionary from the SAN vendor when trying to troubleshoot another issue). This is a pretty lightly utilized SAN; we need it more for capacity than pure I/O. I can see a busy installation doing 10x what we do without even pushing the same hardware to its limit.
  • by Eunuchswear ( 210685 ) on Tuesday November 25, 2008 @05:23AM (#25883547) Journal

    So why the fuck isn't it SAS?

  • by Ed Avis ( 5917 ) <ed@membled.com> on Tuesday November 25, 2008 @06:18AM (#25883895) Homepage

    The earlier model of Intel SSD had some serious performance degradation [matbe.com] after a few hours of heavy use. (Article in French, but it says that after a ten minute torture with IOmeter writing small blocks to the drive, and even after waiting an hour for the drive to 'recover', performance drops by about 70%.) I wonder if they have fixed this bug with the new model?

  • by zrq ( 794138 ) on Tuesday November 25, 2008 @10:27AM (#25885605) Journal

    From the Tech Report article:

    NCQ was originally designed to compensate for the rotational latency inherent to mechanical hard drives, but here it's being used in reverse, because Intel says its SSDs are so fast that they actually encounter latency in the host system.

    Is it time to look at connecting these chips directly to the motherboard, avoiding the added complexity of driving what is essentially a block of memory via a serial interface designed to control spinning discs? If the SLC memory chips were mapped into the main memory address space, it should be possible to make them look like a 32GB or 64GB (NV)RAM drive on a Unix/Linux system. Mount '/' and '/boot' on the (NV)RAM drive and install the OS on it. Presto - very fast boot and load times. You can still use traditional spinning disc(s) for large data, mounted as separate data partitions.

    It would need some thought as to which parts of the filesystem went on spinning disc and which parts went on the (NV)RAM partition. But that is why Unix/Linux has all of the tools for mounting different parts of the filesystem on different partitions. Back in the olden days, most systems had a combination of small fast(ish) discs and big(ish) slow discs, and tweaking fstab to mount different parts of the filesystem on different discs was a standard part of the install process. Most desktop systems now have one huge disc, and the standard Linux install dumps everything on one big '/' partition, but all the tools for optimizing the partition layout are still there.

    How about an ultra-quiet desktop workstation with no moving parts, the OS installed on the (NV)RAM disc, and user data dragged across the network from a fileserver (e.g. an NFS-mounted /home)? A sample partition layout along these lines is sketched below.
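
    Purely as an illustration, an /etc/fstab for that kind of split might look something like the sketch below; the device names, mount points and NFS server are all invented for the example:

        # Hypothetical /etc/fstab sketch: OS on the (NV)RAM/SSD device, bulk data on a
        # spinning disc, user homes over NFS. All device names are made up.
        /dev/nvram0p1            /boot   ext3   noatime        0  2
        /dev/nvram0p2            /       ext3   noatime        0  1
        /dev/sda1                /data   ext3   defaults       0  2
        fileserver:/export/home  /home   nfs    rw,hard,intr   0  0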

  • by symbolset ( 646467 ) on Tuesday November 25, 2008 @11:29AM (#25886405) Journal

    Here's where I would go with a useful link [techworld.com]. "Duplex" doesn't necessarily mean what you think it means in this context. The use of the term on its own is misleading, as perhaps the marketing person who invented the meme intended it to be.

    "FYI, SAS full duplex means that one channel can be used for data traffic and the other channel can be simultaneously used for command traffic. Both channels cannot be simultaneously used for data. So when Mr Batty says 6Gb/s is available and that's 4x SATA I, he is technically correct, but end users will not see 4x performance."

    If you can't sell on the features, I suppose it's OK for some people to make stuff up when they're selling. But not us, here, OK? Let's be honest with one another around the water cooler.

  • by dh003i ( 203189 ) <dh003i@gmail. c o m> on Tuesday November 25, 2008 @11:37AM (#25886535) Homepage Journal

    The Fusion IOdrive is faster... about 510MB/s according to DVNation. At $30/GB, that's not bad. Granted, the Intel one is $22/GB, but the IOdrive has about twice the performance, and it's only priced at about 1.4x the price of the Intel SSD.

"What man has done, man can aspire to do." -- Jerry Pournelle, about space flight

Working...