Data Storage | Intel | Hardware

"Limited Edition" SSD Has Fastest Storage Speed 122

Vigile writes "The idea of having a 'Limited Edition' solid state drive might seem counter-intuitive, but regardless of the naming, the new OCZ Vertex LE is based on the new Sandforce SSD controller that promises significant increases in performance, along with an improved ability to detect and correct errors in the data stored in flash. While the initial Sandforce drive was called the 'Vertex 2 Pro' and included a super-capacitor for data integrity, the Vertex LE drops that feature to improve cost efficiency. In PC Perspective's performance tests, the drive was able to best the Intel X25-M line in file creation and copying duties, had minimal fragmentation or slow-down effects, and was very competitive in IOs per second as well. It seems that current SSD manufacturers are all targeting Intel, and the new Sandforce controller is likely the first to be up to the challenge."
  • How hard can it be? (Score:5, Interesting)

    by bertok ( 226922 ) on Friday February 19, 2010 @10:48PM (#31207220)

    I'm kinda fed up waiting for the SSD manufacturers to get their act together. There's just no reason for drives to be only 10-50x faster than mechanical drives. It should be trivial to make them many thousands of times faster.

    I suspect that most drives we're seeing are too full of compromises to unlock the real potential of flash storage. Manufacturers are sticking to 'safe' markets and form factors. For example, they all seem to target the 2.5" laptop drive market, so all the SSD controllers I've seen so far are very low power (~1W), which seriously limits their performance. Also, very few drives use PCI-e natively as a bus; most consumer PCI-e SSDs are actually four SATA SSDs attached to a generic SATA RAID card, which is just... sad. It's also telling that it's a factor of two cheaper to just go and buy four SSDs and RAID them using an off-the-shelf RAID controller! (*)

    Meanwhile, FusionIO [fusionio.com] makes PCI-e cards that can do 100-200K IOPS at speeds of about 1GB/sec! Sure, they're expensive, but 90% of that is because they're a very small volume product targeted at the 'enterprise' market, which automatically inflates the price by a '0' or two. Take a look at a photo [fusionio.com] of one of their cards. The controller chip has a heat sink, because it's designed for performance, not power efficiency!

    This is reminiscent of the early days of the 3D accelerator market. On one side, there was the high-performing 'enterprise' series of products from Silicon Graphics, at an insane price, and at the low end of the market there were companies making half-assed cards that actually decelerated graphics performance [wikipedia.org]. Then NVIDIA happened, and now Silicon Graphics is a has-been because they didn't understand that consumers want performance at a sane price point. Today, we still have SSDs that are slower than mechanical drives at some tasks, which just boggles the mind, and on the other hand we have FusionIO, a company with technically great products that decided to try to target the consumer market by releasing a tiny 80GB drive for a jaw-dropping $1500 [amazon.com]. I mean... seriously... what?

    Back when I was a young kid first entering university, SGI came to do a sales pitch, targeted at people doing engineering or whatever. They were trying to market their "low-end" workstations with special discount "educational" pricing. At the time, I had a first-generation 3Dfx accelerator in one of the first Athlons, which cost me about $1500 total and could run circles around the SGI machine. Nonetheless, I was curious about the old-school SGI machine, so I asked for a price quote. The sales guy mumbled a lot about how it's "totally worth it" and "actually very cost effective". It took me about five minutes to extract a number. The base model, empty, with no RAM, drive, or 3D accelerator, was $40K. The SSD market is at exactly the same point. I'm just waiting for a new "NVIDIA" or "ATI" to come along, crush the competition with vastly superior products with no stupid compromises, steal all the engineers from FusionIO, and then buy the company for its IP for a bag of beans a couple of years later.

    *) This really is stupid: 256GB OCZ Z-Drive p84 PCI-Express [auspcmarket.com.au] is $2420, but I can get four of these 60GB OCZ Vertex SATA [auspcmarket.com.au] at $308 each for a total of $1232, or about half. Most motherboards have 4 built-in ports with RAID capability, so I don't even need a dedicated controller!
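
    (Spelling that comparison out as a quick Python sketch, using the prices quoted above; the point is the cost-per-GB gap, not the exact figures:)

        # Prices from the links above (AUD, early 2010) -- illustrative only.
        zdrive_price, zdrive_gb = 2420, 256   # OCZ Z-Drive p84 PCI-Express
        vertex_price, vertex_gb = 308, 60     # single OCZ Vertex SATA
        drives = 4                            # striped on motherboard RAID

        raid_price = vertex_price * drives    # 1232 -- roughly half the Z-Drive
        raid_gb = vertex_gb * drives          # 240

        print(f"Z-Drive:   ${zdrive_price} for {zdrive_gb} GB "
              f"(${zdrive_price / zdrive_gb:.2f}/GB)")    # ~$9.45/GB
        print(f"4x Vertex: ${raid_price} for {raid_gb} GB "
              f"(${raid_price / raid_gb:.2f}/GB)")        # ~$5.13/GB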

  • by bertok ( 226922 ) on Friday February 19, 2010 @11:49PM (#31207578)

    > Not really. You're limited to the speed of the individual chips and the number of parallel storage lanes. They also target the 2.5" SATA market because it gives them an immediate in: directly into new desktops and systems, without consuming a slot that the high-performance people who would buy these are likely shoving an excess of gaming hardware into. The high end is already using those slots for storage.

    > Believe me, the industry -is- looking into ways of getting SSDs onto faster buses, but it takes time and some significant rearchitecture. Also, NAND sucks ass: its high block failure rates fresh out of the fab are only tolerated because of its sheer density. And it's only going to get worse as lithography gets smaller.

    From what I gather, the performance limit is actually largely in the controllers; otherwise FusionIO's workstation-class cards wouldn't perform as well as they do, despite using a relatively small number of MLC chips. Similarly, if the limit were caused by the flash itself, then why is it that Intel's controllers shit all over the competition? The Indilinx controllers got significant speed boosts from a mere firmware upgrade! There's a huge amount of headroom for performance, especially for small random IOs, where the controller makes all the difference (storage layout, algorithms, caching, support for TRIM, etc.).

    And there's no need to "rearchitect" at all! PCI/PCI-e is old; storage controllers of all sorts have been made for it for decades. There are RAID and FC controllers on the market right now that can do almost 1GB/sec with huge IOPS. It's not rocket science: storage controllers are far simpler internally than, say, a 3D accelerator.

    I also disagree that people are running out of expansion slots. On the contrary, other than a video card, I haven't had to use an add-in card for anything in the last three machines I've purchased. Motherboards have everything built in now. Server and workstation boards have so many expansion sockets, it's just crazy.

    > If you think that's bad, consider that the Virtex5 they're using on it costs on the order of $500 for the chip itself. You linked the "pro" model, which supports multiple devices in the same system in some fashion. You want this one [amazon.com], which is only $900. Both models use MLC NAND, and neither is really intended for mass-market buyers (you can't boot from them, after all).

    Precisely my point! Every vendor is making some stupid compromises somewhere. Using an FPGA is really inefficient, but still better in some ways than what everyone else is doing, which ought to really make you wonder just how immature the market is.

    Similarly, look at the price difference between the two FusionIO drives, the "Pro" and the "Non-Pro" model. I bet there's no physical difference, because all of the specs are identical, but there's a 2x price difference! It's probably just a slightly different firmware that allows RAID. This is artificial segmentation. If they had decent competition, the drive would cost 1/4 as much per GB, and all models would allow RAID.

  • by hlge ( 680785 ) on Saturday February 20, 2010 @12:10AM (#31207694)
    If you want to go really fast: http://www.sun.com/storage/disk_systems/sss/f5100/ [sun.com]. OK, not something you would use in a home setting, but it shows that there is still a lot of room for innovation in the SSD space. But to your point, rather than using traditional SSDs, Sun created a flash "SO-DIMM" that allows for higher packing density as well as better performance. Info on the flash modules: http://www.sun.com/storage/disk_systems/sss/flash_modules/index.xml [sun.com]
  • by m.dillon ( 147925 ) on Saturday February 20, 2010 @12:13AM (#31207708) Homepage

    At least not the Colossus I bought. Write speeds are great but read speeds suck compared to the Intels. The Colossus doesn't even have NCQ for some reason! There's just one tag. The Intels beat the hell out of it on reading because of that. Sure, the 40G Intel's write speed isn't too hot but once you get to 80G and beyond it's just fine.

    The problem with write speeds for MLC flash-based drives is, well, it's a bit oxymoronic. With the limited durability, you don't want to be writing at high sustained bandwidths anyway. The SLC stuff is more suited to it, though of course we're talking at least 2x the price per gigabyte for SLC.

    --

    We've just started using SSDs in DragonFly-land to cache filesystem data and meta-data, and to back tmpfs. It's interesting how much of an effect the SSD has. It only takes 6GB of SSD storage for every 14 million or so inodes to essentially cache ALL the meta-data in a filesystem, so even on 32-bit kernels with their 32-64G swap configuration limit the SSD effectively removes all overhead from find, ls, rdist, cvsup, git, and other directory traversals (64-bit kernels can do 512G-1TB or so of SSD swap). So it's in the bag for meta-data caching.
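
    (A rough sizing sketch derived from those figures; the per-inode cost and the inode counts below are just arithmetic on the numbers in this comment:)

        GiB = 1 << 30
        ssd_bytes_per_inode = 6 * GiB / 14e6      # ~460 bytes of SSD cache per inode
        print(f"~{ssd_bytes_per_inode:.0f} bytes of SSD per inode")

        for swap_gib in (64, 512):                # 32-bit vs 64-bit kernel swap ceilings
            inodes = swap_gib * GiB / ssd_bytes_per_inode
            print(f"{swap_gib} GiB of SSD swap covers ~{inodes / 1e6:.0f} million inodes")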

    Data-caching is a bit more difficult to quantify, but certainly any data set which actually fits in the SSD can push your web server to 100MB/s out the network with a single SSD (a single 40G Intel SSD can do 170-200MB/sec reading, after all). So a GigE interface can basically be saturated. For the purposes of serving data out a network, the SSD data-cache is almost like an extension of memory and allows considerably cheaper hardware to be used... no need for lots of spindles or big motherboards sporting 16-64G of RAM. The difficulty, of course, is when the active data-set doesn't fit into the SSD.
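
    (For the GigE point, the bottleneck math is short; the SSD read numbers are the ones quoted above:)

        gige_MBps = 1e9 / 8 / 1e6          # ~125 MB/s wire rate, before TCP/IP overhead
        ssd_read_MBps = (170, 200)         # single 40G Intel SSD, reading

        print(f"GigE ceiling: ~{gige_MBps:.0f} MB/s")
        print(f"SSD read: {ssd_read_MBps[0]}-{ssd_read_MBps[1]} MB/s -> the link saturates first")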

    Even using it as general swap space for a workstation has visible benefits when it comes to juggling applications and medium-sized data sets (e.g. videos or lots of pictures in RAW format), not to mention program text and data that would normally be thrown away overnight or by other large programs.

    Another interesting outcome of using the SSD as a cache instead of loading an actual filesystem on it is that it seems to be fairly unstressed when it comes to fragmentation. The kernel pages data out in 64K-256K chunks and multiple chunks are often linear, so the SSD doesn't have to do much write combining at all.

    In most of these use-cases read bandwidth is the overriding factor. Write bandwidth is not.

    -Matt

  • by m.dillon ( 147925 ) on Saturday February 20, 2010 @12:21AM (#31207754) Homepage

    Yah. And that's the one overriding advantage to SSDs in the SATA form factor. They have lots and lots of competition. The custom solutions... the PCI-e cards and the flash-on-board or daughter-board systems wind up being relegated to the extreme application space, which means they are sold for tons of money because they can't do any volume production and have to compete against the cheaper SATA-based SSDs on the low-end. These bits of hardware are thus solely targeted to the high-end solution space where a few microseconds actually matters.

    Now, with 6GBit (600 MByte/sec) SATA coming out, I fully expect the SATA-based SSDs to start pushing 400MB+/sec per drive within the next 12 months. If Intel can push 200MB/sec+ (reading) in their low-end 40G MLC SSD, then we clearly already have the technological capability to push more with 6GBit SATA without having to resort to expensive, custom PCI-e jobs.

    -Matt

  • by AllynM ( 600515 ) * on Saturday February 20, 2010 @01:10AM (#31207984) Journal

    Matt,

    Totally with you on the Colossus not being great on random IO; that's why we reviewed one:
    http://www.pcper.com/article.php?aid=821&type=expert&pid=7 [pcper.com]
    The cause is mainly that RAID chip. It doesn't pass NCQ, TRIM, or other ATA commands through to the drives, so they have no choice but to serve each request in a purely sequential fashion. The end result is that even with 4 controllers on board, the random access of a Colossus looks more like that of just a single Vertex SSD.
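
    (A rough sketch of why losing NCQ hurts: with one outstanding request, IOPS is capped at 1/latency, while a deeper queue lets the controller keep several flash channels busy. The 100 us latency below is an assumed figure, and real drives plateau well short of the ideal line:)

        read_latency_s = 100e-6                  # assumed ~100 us per 4K random read
        for queue_depth in (1, 4, 32):
            iops = queue_depth / read_latency_s  # Little's law, assuming latency stays flat
            print(f"QD={queue_depth:>2}: ~{iops:,.0f} IOPS, ~{iops * 4096 / 1e6:.0f} MB/s at 4K")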

    Allyn Malventano
    Storage Editor, PC Perspective

  • by bertok ( 226922 ) on Saturday February 20, 2010 @01:11AM (#31207988)

    You are basically saying contradictory things:

    "lots and lots of competition" is the opposite of an "overriding advantage". It's a huge disadvantage. No company wants to enter a market with massive competition.

    The PCI-e cards aren't any more "custom" than the SATA drives. Is a 3D accelerator a "custom" PCI-e card? What about a PCI-e network card? Right now, a SATA SSD and a PCI-e SSD are actually more or less the same electronics, except that the PCI-e card also has a SATA controller built in.

    There's zero need to squeeze a solid-state storage device into the form factor that was designed for mechanical drives with moving parts. Hard drives are the shape and size they are because it's a good size for a casing containing a couple of spinning platters. They are connected with long, flexible, but relatively low-bandwidth cables because mechanical drives are so glacially slow that the cabling was never the performance limit, and having flexible cabling is an advantage for case design, so in that case, it was worth it.

    Meanwhile, SSDs have hit the SATA 3 Gbps bus speed limit in about two generations, and will probably saturate SATA 6 Gbps in just one more generation. There are drives already available that can exceed 2x the speed of SATA 6 Gbps, which means that we'll have to wait years for some SATA 12 Gbps standard or something to get any further speed improvement.
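
    (For reference, the bus ceilings being compared here, assuming 8b/10b encoding on both SATA and PCI-e 2.0; protocol overhead trims these a bit further:)

        for name, gbps in (("SATA 3 Gbps", 3.0), ("SATA 6 Gbps", 6.0)):
            payload_MBps = gbps * 1e9 / 10 / 1e6     # 8b/10b: 10 line bits per data byte
            print(f"{name}: ~{payload_MBps:.0f} MB/s ceiling")
        pcie2_lane_MBps = 500                        # PCI-e 2.0, per lane, per direction
        print(f"PCI-e 2.0 x8: ~{8 * pcie2_lane_MBps / 1000:.0f} GB/s")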

    Meanwhile, there are already several 20-80 Gbps PCI-e slots on every motherboard, which are cheap and easy for manufacturers to interface with. If flexible cabling is an absolute requirement, then there is PCI-e cabling [wikipedia.org].

  • by AllynM ( 600515 ) * on Saturday February 20, 2010 @01:20AM (#31208030) Journal

    > Not really. You're limited to the speed of the individual chips and the number of parallel storage lanes.

    Here's the thing: most SSDs are only using the legacy transfer mode of the flash. The newer versions of ONFi support upwards of 200MB/sec transfer rates *per chip*, and modern controllers are using 4, 8, or even 10 (Intel) channels. Once these controllers start actually kicking the flash interface into high gear, there will be no problem pegging SATA or even PCI-e interfaces.
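
    (The aggregate flash bandwidth implied by those figures, taking the 200MB/sec per-chip number at face value; sustained throughput would land lower:)

        onfi_MBps_per_channel = 200
        for channels in (4, 8, 10):                  # typical controller channel counts
            raw = onfi_MBps_per_channel * channels
            print(f"{channels} channels: ~{raw} MB/s raw flash bandwidth "
                  f"(vs ~300 MB/s on SATA 3 Gbps, ~600 MB/s on 6 Gbps)")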

    Allyn Malventano
    Storage Editor, PC Perspective

  • Re:Misleading title (Score:3, Interesting)

    by TheRaven64 ( 641858 ) on Saturday February 20, 2010 @08:19PM (#31214514) Journal

    > Not everything uses BIOS, you know. There's more than just BIOS and EFI.

    True, but we're talking about booting Windows and Linux, maybe *BSD and Solaris. That basically means BIOS. EFI if you want to boot OS X too, but EFI anywhere else will emulate a BIOS for the purpose of interfacing with disk controllers. You need just enough working to get the boot loader to read the kernel.

    > Then to add to that - NOT ALL BIOS ARE THE SAME.

    And yet every ISA, PCI, or PCIe IDE or SCSI controller manages to work with every BIOS that supports the correct bus. How? Because this stuff has not changed significantly for a decade. That's one of the main reasons why the OS stops using the BIOS interfaces for talking to the disk as soon as it's loaded a real driver. You don't need a high-performance implementation, you just need something that can read a few (virtual) sectors into RAM at a designated location.

    > Something tells me you've never done hardware development before.

    No, but I've done hypervisor development (and written a book about it) and worked with code that emulates this stuff and deals with all of the 16-bit weirdness that happens during boot. We're not talking about needing to support DOS, or operating systems that actually make serious use of the BIOS for disk I/O, we're talking about supporting a couple of boot loaders that copy a kernel image from disk and then let the kernel load real drivers. No one cares about the performance of this code, because it's used for a couple of seconds at system boot time and then ignored.
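
    (To make the "read a few sectors into RAM" point concrete: the BIOS extended-read service (INT 13h, AH=42h) takes nothing more than a 16-byte disk address packet. Here is its standard EDD layout sketched with Python's struct module, with made-up example values:)

        import struct

        # u8 packet size, u8 reserved, u16 sector count,
        # u16 buffer offset, u16 buffer segment, u64 starting LBA
        def disk_address_packet(lba, sectors, buf_segment, buf_offset):
            return struct.pack("<BBHHHQ", 0x10, 0, sectors, buf_offset, buf_segment, lba)

        # e.g. read 64 sectors starting at LBA 2048 into real-mode address 0x1000:0x0000
        dap = disk_address_packet(lba=2048, sectors=64, buf_segment=0x1000, buf_offset=0x0000)
        assert len(dap) == 16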

    Supporting other firmwares is more effort, but how many of your customers actually care about anything other than x86? If you want to support IBM and Sun machines with OpenFirmware, then it's even easier: you can write the whole thing in Forth, including the device tree enumeration, which is probably the least fun bit of doing this on x86, and you don't even need any assembly knowledge (yes, it's quite slow; that's why operating systems install a real driver as soon as they can during boot).

    I agree with Rockoon's assessment; it sounds like you've hired an incompetent programmer who is feeding you a line of garbage about why he can't do the job he's paid to do.
