Fusion-io ioXtreme's Consumer-Class PCIe SSD — Impressive Throughput
MojoKid writes "When Fusion-io's first ioDrive product hit the market, it was claimed to be a 'disruptive technology' by some industry analysts, with the potential to set the storage industry on its ear. Of course the first version of the ioDrive was an enterprise-class product that showed the significant potential of PCI Express direct-attached SSD storage, but its cost was such that the mainstream market couldn't possibly justify it, no matter what the upside performance looked like. Then we heard of Fusion-io's more consumer-targeted play, the ioXtreme, that was announced this past summer. Fusion-io has only very recently released these new, lower cost cards to market. The first-ever full performance review of the product over at HotHardware shows the half-height PCI Express X4 cards are capable of a robust 800MB/sec read bandwidth and about 300MB/sec of write bandwidth. The cards particularly excel versus a standard SSD at random read/write requests and even perform relatively well with small block transfers."
In the right place (Score:3, Insightful)
This is the proper place for memory, on the system bus.
Putting memory behind a drive controller is just like making your gas pedal respond to a buggy whip (OK, car analogies aren't my strong point).
Re:In the right place (Score:5, Funny)
Putting memory behind a drive controller is just like making your gas pedal respond to a buggy whip (OK, car analogies aren't my strong point).
Yeah, no kiddin. I mean if the whip has bugs in it, isn't that a driver issue?
Re: (Score:2)
That was perfect. Thank you.
Re: (Score:1)
I always hated back-seat drivers
Re: (Score:3, Insightful)
SATA does have its advantages, though: laptop support, bootability, hot-swap, cross-platform (no drivers needed), etc.
Re: (Score:3, Informative)
A proper PCIe (miniPCIe) card supports bootability (it appears as a regular controller+disk), and laptops often boot from miniPCIe SSDs (netbooks notably - the Asus Eee PC and the SSD Acer Aspire Ones, among others). Hot-swap, not so much (I know SATA supports it, but do real-world motherboard controllers support it?), though I suppose if someone were to make it an ExpressCard design, possibly. Cross-platform
Re: (Score:2)
The ioDrive/ioXtreme doesn't appear as a regular IDE or AHCI controller because that would significantly degrade its performance; most of Fusion-io's "special sauce" is in the driver.
Re: (Score:2)
But as long as it has a BIOS extension ROM on it that knows how to make basic use of the interface (it doesn't have to be particularly fast, just good enough to let the OS kernel/drivers be loaded by the bootloader), then it should be bootable.
Re: (Score:2)
That's the point - to make it so a generic OS can see the card and get to that point, it has to advertise itself as an existing standard-compliant card - which it isn't, because the existing standards aren't fast enough for it. Instead you end up with OS/application-specific drivers to present the card as storage space.
Sure, they might be able to make it present itself as a standard ATA or SCSI interface and volume with degraded performance, and then somehow load a driver in the OS to talk over that exis
Re: (Score:2)
it has to advertise itself as an existing standard-compliant card
No, it does not; it just has to have a ROM that knows how to talk to the card, is loaded as a BIOS extension, and traps interrupt 13h. The bootloader uses that interrupt to access the drive and load critical parts of the OS, including the drivers needed for the main hard drive. The OS then switches into protected mode and the driver takes over.
There are plenty of SATA/SCSI/RAID cards/chips that are bootable and yet need a special driver for wi
Re: (Score:3, Insightful)
Most motherboards these days do implement SATA hotplugging. In fact, it's pretty important for eSATA.
Re: (Score:2)
Most netbooks I know about use a modified mini-PCIe connector that also carries USB and SATA (which is what most cards actually use), rather than true mini-PCIe.
Re: (Score:2)
Those laptop PCIe SSDs are not actually PCIe. The compact PCIe connector also has pins for SATA and USB, just like ExpressCard, so the SSDs use the SATA pins. I thought I could use one in a PCIe x1 adapter card to boot an ATX motherboard, but after some research I found out that was not possible.
Re: (Score:1)
It's in the right place, but will it behave the right way?
When those mainboards with extra flash for Vista were announced, I hoped it would be accessible directly via Linux MTD.
Without reading the article, I still assume that it will again be just another HDD simulator that doesn't allow the OS to do the wear levelling or map the storage directly into accessible memory.
Too bad. Since Debian's live-helper made building live systems easy, I'm running my desktop and laptop from squashfs anyway, so I'd love to
sweet (Score:5, Insightful)
I bought a SATA SSD which can read and write at around 200MB/s. It was the greatest upgrade I've ever done, and for just $200 (less than my CPU or GPU). Now, I can't stand waiting for things to load when I have to work using mechanical hard drives.
If 200MB/s is that big a difference, 800MB/s is going to be... actually probably not that much better. My computer already feels "instant."
Re: (Score:3, Insightful)
It's the read latency, not MB/s that's most important for desktop usage or for most databases. Everybody quotes the numbers that they're used to quoting, but the game is different with SSDs.
Re:Latency (Score:3, Informative)
Re: (Score:2)
And the worthless JMicron controller SSDs probably have read latencies under 100 microseconds as well.
It's not read latency that matters at all, it's total THROUGHPUT for the smallest, random, reads and writes.
Re: (Score:2)
It's not read latency that matters at all, it's total THROUGHPUT for the smallest, random, reads and writes.
And that throughput is dominated by latency in HDDs. Much less so for SSDs.
Re: (Score:2)
It's not read latency that matters at all, it's total THROUGHPUT for the smallest, random, reads and writes.
For one thing, in a hard disk, seek latency dominates for throughput for random loads. SSDs improve throughput by cutting latency. For another, interactive tasks demand high throughput on a burst of transactions, which needs low latency.
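To put rough numbers on that point, here's a back-of-the-envelope sketch. The latency figures are typical ballpark values I'm assuming for illustration, not measurements of any particular drive:

```python
# For small random reads with one outstanding request, throughput is
# essentially block_size / access_latency.
BLOCK = 4 * 1024  # one 4 KiB random read

def random_read_mbps(latency_s):
    """Best-case random-read throughput (MB/s) at a given access latency."""
    iops = 1.0 / latency_s      # requests completed per second
    return BLOCK * iops / 1e6   # bytes/s -> MB/s

hdd = random_read_mbps(12e-3)   # ~12 ms seek + rotational delay (assumed)
ssd = random_read_mbps(100e-6)  # ~100 us flash read latency (assumed)

print(f"HDD ~{hdd:.2f} MB/s, SSD ~{ssd:.1f} MB/s")
```

With those assumed latencies, the HDD lands well under 1 MB/s on random 4K reads while the SSD clears 40 MB/s - which is exactly why cutting latency is the same thing as raising random-read throughput.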
Re: (Score:2)
The main issue with the JMicron controllers is the latency occasionally spikes to closer to half a second when hitting it with random writes. The other issue is that during this latency spike, even reads hang, leading to the 'stuttery' issues with those models. The 602B controller was supposed to address this, and it did to some degree, but not enough to compete with Indilinx and especially Intel. I've noted Samsung drives to be stuttery as well, but only after you've hit it with a bunch of random writes
Re: (Score:2)
I kind of was thinking that if "latency" measured the time between sending a read request to the drive and the time when you get back the very FIRST bit, then even the JMicron probably does that ok.
Re: (Score:2)
If you send a write request and then a read request to the JMicron controller, you could very well wait a second before your first bit is returned. That is quite a bit of latency.
Re: (Score:2)
Re: (Score:2)
I had one JMicron drive that was silently failing to write one section of the drive (every other bit was always zero). I'd always heard that SSDs were supposed to check that, but I'm not entirely sure that was a JMicron specific issue.
Which? (Score:1)
I got the Kingston V-series 64GB for around $120--I think it's only rated around 100MB/s. Still feels a lot faster, especially after boot-up.
Re:sweet (Score:4, Funny)
Instant, or is there a "speed of light [xkcd.com]" delay?
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:2)
Only if you get him on the first try!
Believe me, it's kind of hard - and the hardest part can be knowing if you actually hit it.
Re: (Score:2)
It's not the throughput you're noticing. It's the seek latency, at which SSDs are many times faster than mechanical drives (comparing Intel's X-25M to WD's 10K RPM VelociRaptor, you're looking at about 65x faster; compared to a 7200 RPM drive, about a 100x difference).
Re: (Score:2)
And my rebuttal to your post (copied and pasted from my reply to someone on another forum):
I recently installed 2 of the 120GB OCZ Agility drives in RAID0 - apparently SSDs scale better with RAID than a regular hard disk does.
I can read at 390MB/s, write at 220MB/s, and the random 4K reads and writes are about 23MB/s (regular disks can do about .7MB/s in such tests).
According to benchmarks, a single OCZ disk is pretty darn close to the Intel in the real-world performance tests, and one can only guess that 2 of them
Re: (Score:2)
With a mechanical disk, you must wait on apps to load. With a fast SSD, they load as fast as you click. That is a huge difference. Your train of thought is never derailed due to disk waits.
There is no cure for net latency yet. This is irrelevant. My computer works as fast as I think, and I love that!
Re: (Score:2)
I know all this; search my history on disks. I know how they work, I know about latency, I know which portions of disk operations should be quicker, and I'm telling you, on a high-end machine with a 7200RPM disk and 6GB of RAM the difference is negligible, especially on a quad-core rig which used a 2GB ReadyBoost disk.
Re: (Score:2)
In my experience, Netbeans takes about ten times longer to load on a mechanical disk. If you call that negligible, you have a very strange definition of "negligible."
Re: (Score:2)
I'm waiting until they hit my price point of under $1/GB for the better units. MLC based SSDs are still up around $2.25 to $2.45 per gigabyte for the low-end stuff, with the better MLC in the $2.50 to $3.25 per gigabyte range. I think the best spot price I've seen yet is around $1.90 for MLC.
At $1/GB, I'd quickly replace the 2.5" SATA
Still can't boot off of it. (Score:3, Informative)
On paper, I don't think the performance difference between this and something like an Intel X25-M is going to justify the four-fold price difference. When people went from their laptop HDD to the Intel drive, they often saw startup times and whatnot go from multiple (tens!) of seconds to less than a second. This card is likely to push them from less than a second to a slightly smaller fraction of a second; it's just not worth it to most people.
Re: (Score:3, Insightful)
It still has many of the limitations that the original FusionIO cards have: It's pricey at $11/GB (although not astronomical like the original products), and you still can't boot off of it. This means you'll need at least one old fashioned drive with the OS on it to get your machine going, which is a shame because the system files can often make good use of SSD performance.
I have a Linux machine that boots off a hard drive (i.e. bootloader and kernel) and the rest of the system runs on a SSD. The HD can then spin down until next boot. I guess other real operating systems can do this too.
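A minimal sketch of that layout (the device names, filesystems, and spin-down timeout are illustrative assumptions, not details from the parent post):

```shell
# Sketch: bootloader + kernel on the spinning disk, root on the SSD.
# /etc/fstab entries would look something like:
#   /dev/sda1  /boot  ext2  noatime  0 2   # HDD: only read at boot
#   /dev/sdb1  /      ext4  noatime  0 1   # SSD: everything else
#
# After boot, tell the HDD to spin down when idle; hdparm's -S value
# counts in multiples of 5 seconds, so 240 means 20 minutes.
hdparm -S 240 /dev/sda
```

Once the kernel is loaded, nothing touches /boot, so the HDD can stay asleep until the next reboot.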
Re: (Score:1)
Re: (Score:1)
Re: (Score:2)
Re: (Score:2)
Wrong, actually.
I have the Program Files (x86), Users, and UserData folders living on HDD partitions mounted into those points via NTFS folder mounts. That means your system boots and runs some apps as fast as the SSD can manage, but for storage of big files and junk, the rust takes over.
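For the curious, the folder-mount trick looks roughly like this from an elevated command prompt; the volume GUID below is a placeholder you'd get from running `mountvol` with no arguments:

```shell
:: List all volume GUIDs on the system:
mountvol

:: Graft an HDD volume into an empty NTFS folder on the SSD system
:: drive (GUID is a placeholder -- substitute one from the list above):
mountvol C:\Users \\?\Volume{00000000-0000-0000-0000-000000000000}\
```

Windows then treats C:\Users as part of the C: namespace while the bits actually live on the spinning disk.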
Re: (Score:2)
Re: (Score:3, Interesting)
Making sure that "the code" is present, and actually functions, on god-knows-how-many motherboards, each with its own BIOS horror show, is probably pretty tricky.
By far the easiest way is simply emulating an SATA controller; but then you would lose out on the assorted FusionIO special sauce and might as well just buy the cheaper intel drives and plug them into your
Re: (Score:2)
PCI (including PCIe) devices can supply a BIOS extension ROM to make themselves bootable; afaict this is how most SCSI and SATA cards make themselves bootable.
Re: (Score:2)
But for some strange reason Windows will not install on to them without a driver being supplied at install time. I've always wondered about that.
Re: (Score:2)
The BIOS can only be easily used from real mode. Win9x had code to switch back to real mode temporarily to use the BIOS for hard drive access, but afaict no modern OS does so (and you wouldn't want to anyway, because the performance would suck).
The bootup process of a PC running a modern* operating system goes something like this.
1: the BIOS looks for the expansion cards and loads the option ROMs (which, if it's a mass storage device, may hook interrupt 13h to make itself accessible)
2: the BIOS loads and runs the
Re: (Score:2)
Oooooooh, that explains everything. Thanks.
Still workable (Score:2, Interesting)
It's pretty simple actually: they're cheap and easily available in all kinds of different sizes ranging from "I just need to boot Linux" (256MB) to "I want all of my apps on it too" (32GB+), they're writable so you can update the OS, and you've likely got a multitude of ports inside
Re: (Score:2)
Erm. Booting Windows 7 off of a USB thumbdrive? (you'd need that 32GB model)
Dunno, doesn't sound like a very good idea. The OS is huge, and needs lots and lots of IO accesses, both for booting and during normal operation. Thumbdrives generally aren't really designed for that kind of continuous use. And finally, the slowdown from waiting to boot would possibly cause more lost time than you'd gain from having an $800 PCI-express card for your application files.
Re: (Score:2)
In this instance, "booting the PC" doesn't necessarily mean "loading all of the system files." Mainly just means getting the system up to the point that you're then pulling system (e.g. \windows files) off of the SSD.
Re: (Score:2)
On paper, I don't think the performance difference between this and something like an Intel X-25m is going to justify the 4 fold price difference.
This is the perfect caching layer for ZFS. One command to insert it as a read cache between the OS and a big array can make a huge difference [sun.com] in IOPS. I can't easily convince my boss to buy a machine with 80GB of RAM that will be used for nothing but filesystem caching, but I wouldn't hesitate to ask him for a PCIe card to drop into the servers we already have.
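The "one command" the parent mentions is presumably `zpool add ... cache`, which attaches a device to a pool as an L2ARC read cache. A sketch - the pool name and device path here are examples, not from the post:

```shell
# Attach an SSD as a second-level read cache (L2ARC) for pool "tank";
# reads that miss the RAM-based ARC can then be served from flash
# instead of going all the way to the disk array.
zpool add tank cache /dev/fioa

# Verify: the device should appear under a "cache" section.
zpool status tank
```

Cache devices hold no pool data permanently, so losing one costs nothing but warm cache - a good fit for a card you can't boot from anyway.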
Well (Score:5, Insightful)
First off, late in the article they show that game level load times are faster with these PCIe SSDs. Left 4 Dead loads about twice as quickly with the Fusion-io ioXtreme, so the end user would notice a difference (especially as time goes on and apps become more and more bloated).
One thing this product does effectively illustrate is that SATA 6Gb/s is already obsolete. All this card really is is the same grade of memory chips that goes into a lesser SSD like an Intel X25-M. The difference is that the controller gangs together 25 channels instead of just 10 like the Intel product. The controller isn't even that high-performance a part - it's an FPGA. An ASIC version of the chip could be cheaply fabbed using technology several generations back. So, in the long run, the cost to design and manufacture a PCIe SSD is virtually identical to the cost of a SATA SSD. And SATA 6Gb/s is already too slow for SSDs to use (and too fast an interface for a mechanical hard drive).
All in all, I predict that in a few more years, basically all SSDs sold will use a PCIe interface to connect to the host PC. Laptop manufacturers will have to change their internal mounting scheme slightly. And prices should fall drastically from the $900 this ioXtreme is MSRPing at.
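A back-of-the-envelope check of the channel-count argument. The per-channel figure is a crude estimate that assumes read bandwidth scales linearly with channel count, which is only roughly true:

```python
# Estimate per-channel flash bandwidth from the review's numbers,
# then project what a 10-channel controller should manage.
ioxtreme_read_mbps = 800   # read bandwidth from the review
ioxtreme_channels = 25
ten_channel_count = 10     # e.g. Intel's controller

per_channel = ioxtreme_read_mbps / ioxtreme_channels   # ~32 MB/s
ten_channel_estimate = per_channel * ten_channel_count # ~320 MB/s

print(f"~{per_channel:.0f} MB/s per channel; "
      f"a 10-channel drive projects to ~{ten_channel_estimate:.0f} MB/s")
```

The 10-channel projection lands in the same ballpark as what SATA-attached consumer SSDs of this era actually deliver, which is consistent with the "same chips, more channels" claim.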
Re: (Score:2)
is intended to eventually replace PCI and PCIx (for example the FC HBA and the SAS controller in my new Precision T5500 workstation are PCI-E rather than the PCIx that was in my old PowerEdge 2650)
Afaict PCIe has already practically killed PCIx; take your Precision workstation, for example: four PCIe slots but only one PCIx.
At the low end PCIe x1 cards/slots don't seem to be doing so well though, while lots of machines have at least one slot I don't think i've ever seen a card in person (and the cards i've s
Re: (Score:2)
I have a PCIe x4 RAID controller and an inexpen
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
One major issue with it (Score:2)
Unfortunately, a bit of a let-down for some might be that the product still can't currently be utilized as a boot volume.
That means you still need some other drive (probably an "old" SATA SSD) to boot from. You can then load all your apps (and probably even some parts of the OS, with a little hacking) onto this beast, but you still can't use it as your primary drive.
Fusion-io assures us that this feature will be supported in future driver and/or firmware revisions, but hasn't committed to a schedule for that roll-out just yet.
Hopefully it comes along soon, and at no cost for the early adopters of this item. I'd love to see these become the standard, but it doesn't really fit for me at the moment. As stated above, the jump from HDD to SATA SSD is a much larger percentage increase than
Re: (Score:2)
You just need to load the kernel from some other medium. An old hard drive, a USB stick, an old flash card or something.
Unless you're running a truly backwards OS like Windows. Then, yeah, you have to put a lot of stuff on your boot drive.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
It sounds to me more likely that they skimped big time on the hardware end, than that they met up with a technical limitation.
Re: (Score:2)
Re: (Score:2)
I've got to try this again, but back in the day you could install onto a drive that Windows had a driver for but the BIOS couldn't boot, as long as you had a small NTFS/FAT partition on a drive the BIOS COULD boot to hold the bootloader and driver. So your primary drive/OS would live on the SSD, and that legacy pile of junk hanging off your ATA port could be a tired piece of CF for all Windows cared.
Price ? (Score:2)
For about $900 (Score:2)
Re: (Score:1)
Given that the SSDs are very nearly striped at the block level anyway, I can't imagine that RAID 0 is adding much more than flakiness.
Re: (Score:1)
Re: (Score:2)
You are uninformed. The drives are fast enough that they hit the cap for a single SATA connection.
Here's a review of 16 Intel drives in raid-0
http://www.tomshardware.com/reviews/ssd-6gb-raid,2388.html [tomshardware.com]
It's not quite 1600% faster, but it's about 1300% faster than the peak transfer rate of a single SATA connection.
Then again.. if you really wanted performance for cheap, you could get 8 of the new 40 gig Kingston (intel based) drives and raid-0 them for the same price as the Fusion ioXtreme card. I'd challenge so
Re: (Score:1)
Peak transfer isn't a particularly interesting workstation benchmark (If I were chasing performance, I might put a bunch of spinning disks in RAID 0 to cut down on latency, but the RAID isn't going to make the USB drive I am copying files to any faster, so the transfer rate isn't really that interesting).
And really, I wouldn't be shocked if OP was using software RAID.
Re: (Score:2)
There are plenty [benchmarkreviews.com] of benchmarks [hothardware.com] on the net if you look for them that show both a large speedup in transfer rates
Re: (Score:1)
Incomplete is probably a better word than flawed, the context is comparing the speed boost of going from a spinning disk to a SSD or a couple of SSDs in a RAID setup and the copying example is just a case in my usage where there really isn't any difference between the two.
As you pointed out in your other reply, I was wrong about the benefits of the RAID setup, but I still have trouble looking at it from anything other than a cost/benefit perspective (where, again, for me, the 10 seconds that the RAID saves
Re: (Score:2)
Given that the SSDs are very nearly striped at the block level anyway, I can't imagine that RAID 0 is adding much more than flakiness.
I actually tried both single-drive and RAID0 in my Vista configuration. A single drive took about 20 seconds to boot (once POST completes); RAID0 takes about 10 seconds. So it's twice as fast based on a simple real-world timing (Vista boot speed).
80GB is small (Score:1)
80GB is small
Re: (Score:2)
80GB is small
Very small, it holds only about 2 to 10 minutes of information...
Direct link to first page of the article is here: (Score:2)
Two years from now, these will cost $25 (Score:2)
And five years from now, they'll be dusty leftovers found in plastic bins at the local electronics surplus shop. If you can even find them.
Ten years from now, people will hold them up and squint at them and wonder what they were originally built to do. Computer cards all look the same. The only notable thing about these ones is that they don't have any ports on the back. After a couple seconds of interest, they'll get tossed back into the bin.
No real point to this post, other than the "gosh" factor. It
Pointless! for now... (Score:1)
The price tag vs. capacity and the limitations make this a worthless purchase for any serious-minded individual.
Hot-swap isn't really a viable option for failed devices.
RAID, if possible, would not be conventional or standardized.
The price tag is completely stupid, especially when you can have an Intel X25-M 80GB for much less.
Most people are awe-inspired and fooled by the grand total throughput of this thing at 800MB/s. Let me tell you, that is not really all that impressive. Just 8 HDDs could turn that nu
The speed has limited usefulness (Score:2)
You have to ask yourself, what do you need that kind of speed for vs a more portable, hot-swappable, and likely longer-lived SATA/E-SATA standard? Maybe a transactional store for a database, but that is pretty much it. A PCI-e style interface would be relegated only to those situations where extreme performance is required. Such devices will always be priced at a premium over their SATA counterparts simply by virtue of their lower volume production.
I do have an interest in how well a SSD could be used to
Re: (Score:2)
I see your defeatist attitude, and raise you one positive and thoroughly excited attitude that wonders where the tech world will go next.
Re: (Score:1)
Re: (Score:2)
With that kind of attitude 640K would be enough for anyone.
Re: (Score:1)
First, are there really small businesses who need this kind of performance, same question for Universities?
The CPU comparison is just apples to oranges. The primary competitor for SSDs, right now, is HDDs. So HDDs:SSDs = x86 CPUs:? Even if I grant that your analogy is valid, the only reason processors come down in price so fast is that they sell about a bazillion (rough estimate) of their actually affordable processors, recoup their R&D, and optimize their yields.
> With that kind of attitude 640K
Re: (Score:2)
Re: (Score:1)
For whatever reason, SSDs are still expensive. If the reason is material and basic process costs, then I concede and will agree that improving the value of an SSD should be done through improved performance. However, I don't think this is the issue, so the right direction ought to be bringing costs down, without entirely sacrificing SSD advantages.
Some manufacturers are doing this by pairing premium controllers with non-premium NAND MLC (a la OCZ Agility). I'll say it again: Transfer rates are important,
Re: (Score:2)
I'd say take a walk in the real world, not everything is perfect and costs money right from the word go. You
Re: (Score:2)
In the scope of a consumer product, I can't think of many common workloads that would really benefit from a PCIe interface.
Well, the review showed it cutting game load times in half compared to a conventional SSD. Is that worth adding $1000 to the cost of your gaming rig? I personally don't think so, but I bet there are some gamers who think otherwise, just as there are some who will spend $1000 on a CPU and $700 each on their SLI graphics cards. These early adopters cover some of the R&D and hopef
Re: (Score:1)
A valid point. Although, CPU and SLI charge hilarious premiums for something that could be a real competitive advantage for a gamer, i.e. frame rate. It would be a harder sell to convince a gamer that 1337 loading times will lead to similarly 1337 headshot percentages.
Re: (Score:2)
I wish people would stop jumping on the "wear on the flash chips" issue. It's not that big of a deal anymore, drop it people.
Re: (Score:2)
That kind of speed is needed to run things faster. It's like saying, who needs 16 cores, all they do is run things faster!
Less stuff will have to be loaded into RAM as the cost of a disk read isn't catastrophic, IO can substitute for computation - store precomputed textures instead of computing transformations to textures with imprecise fast routines, get away from the mad sequentiality that's everywhere in high performance computing.
RAIDing and striping hard disk requires huge enclosures, heat dissipation
Anyone else remember the SemiDisk? (Score:2)
It was a RAM drive that went in the old Epson QX-10 and QX-16 computers. I remember when we dropped one of those in the old QX-10 and TP/M and ValDocs launched almost instantly. And two freakin' megabytes of storage. It was HUGE!!! And the battery backup could keep your data safe for a good 6 hours without power.
you still need a good old fashioned spinning drive (Score:1)