
"Limited Edition" SSD Has Fastest Storage Speed 122

Vigile writes "The idea of having a 'Limited Edition' solid state drive might seem counter-intuitive, but regardless of the naming, the new OCZ Vertex LE is based on the new Sandforce SSD controller that promises significant increases in performance, along with improved ability to detect and correct errors in the data stored in flash. While the initial Sandforce drive was called the 'Vertex 2 Pro' and included a super-capacitor for data integrity, the Vertex LE drops that feature to improve cost efficiency. In PC Perspective's performance tests, the drive was able to best the Intel X25-M line in file creation and copying duties, had minimal fragmentation or slow-down effects, and was very competitive in IOs per second as well. It seems that current SSD manufacturers are all targeting Intel, and the new Sandforce controller is likely the first to be up to the challenge."
  • Is the cap left off the board so you can just put one in yourself or is it size-reduced as well?

    • There's a blank space with pads, yes, but I don't think it would be a good idea to just solder it in there. Going out on a limb here, but the supercap may require firmware support to actually complete the writes when the main power is yanked. Then again, maybe it's just wired in parallel with the power from the SATA connector, dunno. I'm not much help :)

      • Or the supercap could also depend on other components omitted from, or shorted (zero-ohm resistors) on, the board(s).
      • If it doesn't require firmware someone will figure it out. It's unlikely it requires firmware though, unless they specifically decided it had to somehow. I can believe some other parts might be omitted though (maybe a diode or jumper as mentioned) but I bet it's no biggie. Now, what's the SMT supercap part they use?

      • Splicing two 1500 uF caps inline on the 5V and 12V PSU lines respectively would be cheaper, more effective, and safer than attempting surgery on this little PCB. It won't void the warranty, and it'll provide significantly more reserve current than the tiny cap normally soldered to that pad. And it will power the entire SSD drive for moments after the cord is yanked, so you won't have to worry about whether soldering a tiny cap to that pad required a drive firmware update as the reserve "lights out" curren

  • "I was so eager to test it that I pounded on this drive all night "

    Possible poor choice of words?

  • Is that newspeak for "cheaper"? I also love "the drive was able to best the Intel X25-M". This is one of the worst-written pieces of commercial press release I have ever seen on Slashdot.
    • by maxume ( 22995 )

      Calling the article a 'press release' unfairly tarnishes OCZ. Their press release is still full of press release though:

      http://www.ocztechnology.com/aboutocz/press/2010/362 [ocztechnology.com]

      • by seifried ( 12921 )
        I have no problem with OCZ releasing press releases; they're a company that sells stuff, so that's what they do. Slashdot OTOH is supposed to be some sort of quasi-news site (or at least it used to be) with discussion, not a PR mouthpiece.
        • by maxume ( 22995 )

          Right, but this isn't PR; PC Perspective thinks they are a news site (and they didn't simply parrot the OCZ press release).

  • "to improve cost efficiency"

    should be

    "to lower the cost"
  • Misleading title (Score:5, Informative)

    by dnaumov ( 453672 ) on Friday February 19, 2010 @10:38PM (#31207150)
    The new OCZ SSDs, while a welcome addition to the market, aren't anywhere near "fastest storage".
    Crucial RealSSD C300: http://www.tweaktown.com/reviews/3118/crucial_realssd_c300_256gb_sata_6gbps_solid_state_disk/index5.html [tweaktown.com]
    Fusion-IO: http://storage-news.com/2009/10/29/hothardware-shows-first-benchmarks-for-fusion-io-drive/ [storage-news.com]
    • Re:Misleading title (Score:5, Informative)

      by AllynM ( 600515 ) * on Saturday February 20, 2010 @12:11AM (#31207696) Journal

      - We included some early C300 results with the benches. The C300 will read faster (sequentially) under SATA 6Gb/sec, but it is simply not as fast in most other usage.
      - Fusion-IO - good luck using that for your OS (not bootable). Fast storage is, for many, useless unless you can boot from it.

      Allyn Malventano
      Storage Editor, PC Perspective

      • Re: (Score:3, Informative)

        by Khyber ( 864651 )

        "Fusion-IO - good luck using that for your OS (not bootable)."

        Not until Q4, when we release the firmware upgrade to get it working.

        Then, your point will be moot.

        • His point is that you _currently_ cannot boot from it, so it is useless for many people _today_. That point cannot become moot unless you find a way to time travel back to today with your Q4 firmware. It's not like we need to wait just a few more days until you release it.
        • Re:Misleading title (Score:4, Informative)

          by AllynM ( 600515 ) * on Saturday February 20, 2010 @07:30AM (#31209180) Journal

          I've got a copy of the Fusion-io FAQ from early 2008 that reads as follows:

          > Will the ioDrive be a bootable device?
          > This feature will not be included until Q3 2008

          ...Then it was promised for the Duo (and never happened).
          ...Then it was promised for the ioXtreme and even it was released without the ability.

          Don't get me wrong, I'm a huge fan of fusionIO, but you can only fool a guy so many times before he gives up hope on a repeatedly promised feature.

          Allyn Malventano
          Storage Editor, PC Perspective

        • Seriously? It's going to take you over three years to write the two hundred or so lines of x86 assembly required to let the BIOS see your product as a disk? Why does this not fill me with confidence in your company's technical ability? Possibly the same reason that you are selling a storage product using solid state storage with marketing material telling everyone that it's not an SSD...
          • by Khyber ( 864651 )

            "Seriously? It's going to take you over three years to write the two hundred or so lines of x86 assembly required to let the BIOS see your product as a disk?"

            Not everything uses BIOS, you know. There's more than just BIOS and EFI.

            Then to add to that - NOT ALL BIOS ARE THE SAME.

            Something tells me you've never done hardware development before.

            • Then to add to that - NOT ALL BIOS ARE THE SAME.

              They aren't exactly the same, but come on: during bootup they are 99.5% the same, meaning that they have all the same I/O interrupts. They leverage the interrupt vector table at the same location; that's the table where you install your own I/O handlers during your device's initialization.

              I can say for certain that your problem is not incompatible BIOSes; it is almost certainly a programmer who doesn't know what he's doing selling you a load of horseshit.
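
As an aside for readers less familiar with the mechanism being described: the classic pattern is to save the old INT 13h vector, install your own disk handler, and chain to the original for requests you don't own. The C program below is only a toy model of that chaining idea, with made-up handler names; a real option ROM patches 16-bit segment:offset pairs in the real-mode vector table at physical address 0, not C function pointers.

    #include <stdio.h>

    /* Toy model of an interrupt vector table: a table of handler pointers.
     * The real IVT is 256 segment:offset pairs starting at physical address 0. */
    typedef void (*handler_t)(void);
    #define NUM_VECTORS 256
    static handler_t ivt[NUM_VECTORS];

    /* What the motherboard BIOS installs at INT 13h during POST. */
    static void bios_disk_handler(void) { puts("BIOS INT 13h: legacy disk services"); }

    /* Saved copy of the previous vector, so the new handler can chain to it. */
    static handler_t old_int13;

    /* An option ROM's replacement handler: claim requests for its own device,
     * pass everything else through to whatever handler was there before. */
    static void option_rom_disk_handler(void) {
        puts("option ROM INT 13h: handle our drive, else fall through");
        old_int13();                      /* chain to the original handler */
    }

    int main(void) {
        ivt[0x13] = bios_disk_handler;    /* state after the BIOS sets up */

        /* Option ROM initialization: hook the vector, remembering the old one. */
        old_int13 = ivt[0x13];
        ivt[0x13] = option_rom_disk_handler;

        ivt[0x13]();                      /* a later "INT 13h" now passes through both */
        return 0;
    }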

            • Re: (Score:3, Interesting)

              by TheRaven64 ( 641858 )

              Not every thing uses BIOS, you know. There's more than just BIOS and EFI.

              True, but we're talking about booting Windows and Linux, maybe *BSD and Solaris. That basically means BIOS. EFI if you want to boot OS X too, but EFI anywhere else will emulate a BIOS for the purpose of interfacing with disk controllers. You need just enough working to get the boot loader to read the kernel.

              Then to add to that - NOT ALL BIOS ARE THE SAME.

              And yet every ISA, PCI, or PCIe IDE or SCSI controller manages to work with every BIOS that supports the correct bus. How? Because this stuff has not changed significantly for a decade. That's one

      • So use a regular SSD for the OS, and multiple ioDrives for heavy DB work, and whatever else you can throw at it?

      • by raynet ( 51803 )

        I don't think the OS needs any kind of fast media for boot. Just boot from a USB stick or similar and set the Fusion-IO as the root device. The USB stick will be fast enough to transfer the 20-40MB that are required to load the kernel.

    • I, for one, will never buy another OCZ product again. I bought a "Solid Series" a little over a year ago when newegg reviews (about a dozen at the time) only had good things to say about them. They were pretty fast in the beginning.

      About half-a-year later, the thing started stuttering for seconds on end, much worse than any non-broken spinning disk I encountered. It was a little over half full, that's it. Turns out that they put in crappy controllers, I guess. Not fully sure. Now the company says they

      • Welcome to jumping on a new technology; you got burned, as everyone with the exception of Samsung and Intel drives of the time used the same JMicron controller. OCZ actually went and designed some cache and paired controllers into their middle offering (I forget the name), and I believe switched to Samsung controllers and single-layer flash for a time on the high end. (I don't know what their current offerings are)

        Everyone else for the most part kept selling parts that used the same chip as the OCZ Value Se

      • Their Solid Series 2 is pretty good. Ridiculously cheap. It's reliably fast for read speeds, at least. But stay away from any of the older SSDs that had those horrible JMicron controllers.

  • How hard can it be? (Score:5, Interesting)

    by bertok ( 226922 ) on Friday February 19, 2010 @10:48PM (#31207220)

    I'm kinda fed up waiting for the SSD manufacturers to get their act together. There's just no reason for drives to be only 10-50x faster than mechanical drives. It should be trivial to make them many thousands of times faster.

    I suspect that most drives we're seeing are too full of compromises to unlock the real potential of flash storage. Manufacturers are sticking to 'safe' markets and form factors. For example, they all seem to target the 2.5" laptop drive market, so all the SSD controllers I've seen so far are all very low power (~1W), which seriously limits their performance. Also, very few drives use PCI-e natively as a bus, most consumer PCI-e SSDs are actually four SATA SSDs attached to a generic SATA RAID card, which is just... sad. It's also telling that it's a factor of two cheaper to just go and buy four SSDs and RAID them using an off-the-shelf RAID controller! (*)

    Meanwhile, FusionIO [fusionio.com] makes PCI-e cards that can do 100-200K IOPS at speeds of about 1GB/sec! Sure, they're expensive, but 90% of that is because they're a very small volume product targeted at the 'enterprise' market, which automatically inflates the price by a '0' or two. Take a look at a photo [fusionio.com] of one of their cards. The controller chip has a heat sink, because it's designed for performance, not power efficiency!

    This is reminiscent of the early days of the 3D accelerator market. On one side, there was the high-performing 'enterprise' series of products from Silicon Graphics, at an insane price, and at the low end of the market there were companies making half-assed cards that actually decelerated graphics performance [wikipedia.org]. Then NVIDIA happened, and now Silicon Graphics is a has-been because they didn't understand that consumers want performance at a sane price point. Today, we still have SSDs that are slower than mechanical drives at some tasks, which just boggles the mind, and on the other hand we have FusionIO, a company with technically great products that decided to try to target the consumer market by releasing a tiny 80GB drive for a jaw-dropping $1500 [amazon.com]. I mean.. seriously... what?

    Back when I was a young kid first entering university, SGI came to do a sales pitch, targeted at people doing engineering or whatever. They were trying to market their "low-end" workstations with special discount "educational" pricing. At the time, I had a first-generation 3Dfx accelerator in one of the first Athlons, which cost me about $1500 total and could run circles around the SGI machine. Nonetheless, I was curious about the old-school SGI machine, so I asked for a price quote. The sales guy mumbled a lot about how it's "totally worth it", and "actually very cost effective". It took me about five minutes to extract a number. The base model, empty, with no RAM, drive, or 3D accelerator was $40K. The SSD market is exactly at the same point. I'm just waiting for a new "NVIDIA" or "ATI" to come along, crush the competition with vastly superior products with no stupid compromises, and steal all the engineers from FusionIO and then buy the company for their IP for a bag of beans a couple of years later.

    *) This really is stupid: 256GB OCZ Z-Drive p84 PCI-Express [auspcmarket.com.au] is $2420, but I can get four of these 60GB OCZ Vertex SATA [auspcmarket.com.au] at $308 each for a total of $1232, or about half. Most motherboards have 4 built-in ports with RAID capability, so I don't even need a dedicated controller!

    • by Microlith ( 54737 ) on Friday February 19, 2010 @11:02PM (#31207298)

      It should be trivial to make them many thousands of times faster.

      Not really. You're limited to the speed of the individual chips and the number of parallel storage lanes. They also target the 2.5" SATA market because it gives them an immediate in: directly into new desktops and systems, without consuming a slot that the high-performance people who would buy these are likely shoving an excess of games into. The high end is already using those slots for storage.

      Believe me, the industry -is- looking into ways of getting SSDs on to faster buses, but it takes time and some significant rearchitecture. Also, NAND sucks ass, with high block failure rates fresh out of the fab outweighed by sheer density. And it's only going to get worse as lithography gets smaller.

      The controller chip has a heat sink, because it's designed for performance, not power efficiency!

      No, it's because the thing's running an Xilinx Virtex5 FPGA. It also costs a ton as it's using 96GB of SLC NAND, and is part of a fairly modular design that is reused in the io-drive Duo and io-drive Quad.

      Today, we still have SSDs that are slower that mechanical drives at some tasks

      If you're referring to the older JMicron drives that failed utterly at 4K random reads/writes, then you're mistaken. That was the case of a shit controller being exposed. Even the Indilinx controllers, which paled next to the Intel chip, outclassed mechanical drives at the same task.

      on the other hand we have FusionIO, a company with technically great products that decided to try to target the consumer market by releasing a tiny 80GB drive for a jaw-dropping $1500. I mean.. seriously... what?

      If you think that's bad, consider that the Virtex5 they're using on it costs on the order of $500 for the chip itself. You linked the "pro" model, which supports multiple devices in the same system in some fashion. You want this one [amazon.com], which is only $900. Both models use MLC NAND, and neither is really intended for mass-market buyers (you can't boot from them, after all).

      • Re: (Score:3, Interesting)

        by bertok ( 226922 )

        Not really. You're limited to the speed of the individual chips and the number of parallel storage lanes. They also target the 2.5" SATA market because it gives them an immediate in. Directly into new desktops and systems without consuming a slot the high performance people who would buy these are likely shoving an excess of games into. The high end is already using those slots for storage.

        Believe me, the industry -is- looking into ways of getting SSDs on to faster buses, but it takes time and some significant rearchitecture. Also, NAND sucks ass, with high block failure rates fresh out of the fab outweighed by sheer density. And it's only going to get worse as lithography gets smaller.

        From what I gather, the performance limit is actually largely in the controllers; otherwise FusionIO's workstation-class cards wouldn't perform as well as they do, despite using a relatively small number of MLC chips. Similarly, if the limit were caused by the flash, then why is it that Intel's controllers shit all over the competition? The Indilinx controllers got significant speed boosts from a mere firmware upgrade! There's a huge amount of headroom for performance, especially for small random IOs, where the

        • I also disagree that people are running out of expansion slots. On the contrary, other than a video card, I haven't had to use an add-in card for anything for the last three machines I've purchased.
          It used to be that you had a dedicated slot for your graphics card (AGP or PCIe), maybe an AMR or CNR slot that no one actually used, and all the other slots were PCI. High-end server/workstation boards had PCI-X, but even there in general you could still put most cards in most slots (unless the card manufacturer w

      • by AllynM ( 600515 ) * on Saturday February 20, 2010 @01:20AM (#31208030) Journal

        > Not really. You're limited to the speed of the individual chips and the number of parallel storage lanes.

        There's the thing. Most SSDs are only using the legacy transfer mode of the flash. The newer versions of ONFi support upwards of 200MB/sec transfer rates *per chip*, and modern controllers are using 4, 8, or even 10 (Intel) channels. Once these controllers start actually kicking the flash interface into high gear, there will be no problem pegging SATA or even PCI-e interfaces.

        Allyn Malventano
        Storage Editor, PC Perspective
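
A rough back-of-envelope check on those numbers: the per-chip rate and channel counts are the ones quoted above, while the interface ceilings are commonly cited approximations (roughly 300 MB/s usable on SATA 3Gb/s, 600 MB/s on SATA 6Gb/s, and about 2 GB/s on a PCIe 2.0 x4 link), not anything measured here.

    #include <stdio.h>

    int main(void) {
        /* Per-chip ONFI transfer rate and channel counts from the comment above. */
        const double per_chip_mb_s = 200.0;
        const int channels[] = {4, 8, 10};

        /* Assumed usable ceilings for the host interfaces (approximate). */
        const double sata2 = 300.0, sata3 = 600.0, pcie2_x4 = 2000.0;

        for (int i = 0; i < 3; i++) {
            double aggregate = per_chip_mb_s * channels[i];
            printf("%2d channels: %5.0f MB/s of raw flash bandwidth"
                   " (SATA 3Gb/s %s, SATA 6Gb/s %s, PCIe 2.0 x4 %s)\n",
                   channels[i], aggregate,
                   aggregate > sata2 ? "saturated" : "not saturated",
                   aggregate > sata3 ? "saturated" : "not saturated",
                   aggregate > pcie2_x4 ? "saturated" : "not saturated");
        }
        return 0;
    }

Even the 4-channel case exceeds what SATA 3Gb/s can carry, which supports the point above: once the flash interface runs at full ONFI speed, the host link becomes the bottleneck.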

        • When do you see the introduction of bootable PCIe FusionIO type cards for the consumer?

          • The FusionIO boards will never be for the typical consumer, even if they were bootable. They are just too damn expensive and unless they start using a storage medium besides DRAM, they will remain too damn expensive.

            Their market is the extreme I/O-per-second niche, and that niche will never grow to include consumers, who don't need 100,000+ IOPS. They just want more bandwidth, and flash can provide that as well as a more-than-enough (1000+) boost in IOPS (and even today the price point for Flash SSDs, whi
            • I said "FusionIO type cards". IE, an SSD integrated directly onto a PCIe card. It doesn't need to be insanely fast and expensive, or developed by FusionIO.

              • Then yes, I think that it is inevitable.

                The reason is that while SATA 3.0 is still barely adopted, we need a SATA 4.0 that is at least 4 times faster at a minimum. Since this cannot happen any time soon, the only solution is a different pipe, such as the PCIe lanes.
      • by Skapare ( 16644 )

        Believe me, the industry -is- looking into ways of getting SSDs on to faster buses, but it takes time and some significant rearchitecture. Also, NAND sucks ass, with high block failure rates fresh out of the fab outweighed by sheer density. And it's only going to get worse as lithography gets smaller.

        How about the PCIe bus? It's already reasonably mature technology and there's a huge installed base. They can build small cards and huge cards.

        I'm looking for an SSD for the OS and programs to reside on, mounted read-only almost all the time (only writing when I need to upgrade it). This does not need sheer density, as 16GB will be sufficient (that's GB, not TB). What I want is sheer SPEED. Speed of access and speed of transfer. Single-level cells, not multi-level cells, is all that would be needed. A

    • by adisakp ( 705706 )
      FWIW, the FusionIO product is not a simple drive replacement the way an SSD is. It doesn't boot and requires drivers to operate, plus the "control logic" is not self-contained but rather part of the driver. It uses your system CPU and system RAM to help handle bookkeeping rather than just the controller and cache on the drive itself.
      • by Khyber ( 864651 )

        "FWIW, the FusionIO product is not a simple drive replacement the way an SSD is. It doesn't boot and requires drivers to operate, plus the 'control logic' is not self-contained but rather part of the driver."

        Everything you address is fixed at the end of this year with a firmware upgrade.

        • Promises, promises. I like FusionIO; I have 8 of the cards. But they have been promising this fix was just a few quarters away ever since they released the cards, man.

          C//

        • Everything you address is fixed at the end of this year with a firmware upgrade.

          Funny, they've been saying that exact thing for the past two years. Fortunately this time we can trust them. You know, because the year ends in a zero.

    • Re: (Score:2, Interesting)

      by hlge ( 680785 )
      If you want to go real fast: http://www.sun.com/storage/disk_systems/sss/f5100/ [sun.com] OK, not something that you would use in a home setting, but it shows that there is still a lot of room for innovation in the SSD space. But to your point, rather than using traditional SSDs, Sun created a "SO-DIMM" with flash that allows for higher packing density as well as better performance. Info on the flash modules: http://www.sun.com/storage/disk_systems/sss/flash_modules/index.xml [sun.com]
    • Re: (Score:3, Interesting)

      by m.dillon ( 147925 )

      Yah. And that's the one overriding advantage to SSDs in the SATA form factor. They have lots and lots of competition. The custom solutions... the PCI-e cards and the flash-on-board or daughter-board systems wind up being relegated to the extreme application space, which means they are sold for tons of money because they can't do any volume production and have to compete against the cheaper SATA-based SSDs on the low-end. These bits of hardware are thus solely targeted to the high-end solution space wher

      • Re: (Score:3, Interesting)

        by bertok ( 226922 )

        You are basically saying contradictory things:

        "lots and lots of competition" is the opposite of an "overriding advantage". It's a huge disadvantage. No company wants to enter a market with massive competition.

        The PCI-e cards aren't any more "custom" than the SATA drives. Is a 3D accelerator a "custom" PCI-e card? What about a PCI-e network card? Right now, a SATA SSD and a PCI-e SSD are actually more or less the same electronics, except that the PCI-e card also has a SATA controller built in.

        There's zero nee

        • Re: (Score:3, Insightful)

          by m.dillon ( 147925 )

          I think you're missing the point. The SATA form factor is going to have much higher demand than any PCI-e card, period, for the simple fact that PCI-e is not really expandable while SATA is. SATA has a massive amount of infrastructure and momentum behind it for deployments running the gamut from small to large. That means SATA-based SSD drives are going to be in very high volume production relative to PCI-e cards. It DOES NOT MATTER if the PCI-e card is actually cheaper to produce, it will still be price

          • Re: (Score:3, Insightful)

            by bertok ( 226922 )

            I think you're missing the point. The SATA form factor is going to have much higher demand than any PCI-e card, period, for the simple fact that PCI-e is not really expandable while SATA is.

            I think you're missing *my* point. The PCI-e standard is for expansion slots. You know, for... expansion. There already are 1TB SSD PCI-E cards, and you can plug at least 4 into most motherboards, and 6-8 into most dual-socket server or workstation boards. Just how much expandability do you *need*?

            Keep in mind that 99% of the point of SSD is the speed. It finally removes that hideous mechanical component that's been holding back computing performance for over a decade now. Nothing stops you from having a co

            • I have to concur. Back when I got my first hard drive, it was a whopping 40 megabytes and came as an ISA expansion card. It was cheaper than buying both an HD and a controller separately. They were called "Hard Cards" at the time, and they weren't just some novelty high-end equipment. They were priced for consumers.

              I believe that there will be a true Hard Card revival because of the facts of this current market.

              SATA 3.0 adoption will be slow (motherboards with 6Gb SATA are noticeably more expensive) and
              • I believe that there will be a true Hard Card revival because of the facts of this current market.

                This current market? Laptops are now, what, 60% of total PC sales? They passed the 50% mark a year or so back, but I haven't been paying attention much since then. Laptops don't have multiple internal PCIe slots. There is some advantage in custom form-factors for fitting inside a laptop, but the 1.8" and 1" hard disk form factors are a logical place to go. Maybe a PCIe bus rather than SATA sounds sensible, but it needs more wires (and more motherboard traces), which makes things much more expensive an

            • If a PCI-e SSD at the same price as an equal capacity SATA drive provided literally 100 times the performance, would people ignore it because.. wait... it's a funny shape for a drive?

              No, of course not. But it cannot happen, because you have to recoup your driver creation and maintenance costs, for plural operating systems.

              C//

            • I think you're missing *my* point. The PCI-e standard is for expansion slots. You know, for... expansion. There already are 1TB SSD PCI-E cards, and you can plug at least 4 into most motherboards, and 6-8 into most dual-socket server or workstation boards. Just how much expandability do you *need*?

              I think you've missed the boat entirely. I'm a small business owner and as a business owner, I buy the cheapest computers that allow my employees to get their work done. This means they're MATX form factor and, as you stated earlier, everything is on the board (Video, Sound, Networking) and they're lucky to have even a PCIe x16 slot for a video card upgrade. So where are all the business desktops with four or more PCIe slots? I've never seen one in a MATX business-class board but I have seen plenty of boards with

            • by amorsen ( 7485 )

              Keep in mind that 99% of the point of SSD is the speed. It finally removes that hideous mechanical component that's been holding back computing performance for over a decade now. Nothing stops you from having a couple of 2TB spinning disk drives in there for holding your movies and photos and all that junk that doesn't need 100K IOPS.

              You may have missed it, but the desktop is dead. The major markets of SSD are notebooks and servers. Modern servers are 1U or blade and have ~0 available PCI-e slots. Notebooks don't have any PCI-e slots either, and manufacturers can't yet make models without support for regular hard drives.

        • The PCI-e cards aren't any more "custom" than the SATA drives.

          You don't have to write driver software for all of the individual platforms you might support if you pick SATA. So, yes, in that sense SATA is less "custom" than the PCIe interface, because the PCIe approach requires quite literally so much more customization work.

          C//

        • Meanwhile, there's already several 20-80 Gbps PCI-e ports on every motherboard
          ROFLMAO

          Pretty much every board has one x16 slot (though in some cases it may be x8 or even x4 electrical). However, given that most desktop users buying SSDs will probably be using this for graphics, and the fact that some boards don't like anything but a graphics card in this slot, it can't really be considered a general-purpose slot.

          The remaining PCIe slots on most boards (if there are any, there are still machines being made w

    • by Kjella ( 173770 )

      "We Lose Money On Each Unit, But Make It Up Through Volume"

      Take a look at memory sticks and memory cards - they're just one of the dumbest chips possible wrapped in a few cents of plastic. Multiply it up to desired SSD size. It actually comes out to quite a bit in parts before you start trying to build an SSD out of it. Now I haven't looked at FusionIO's products in a while but their early products at least were basically banks of RAM with a battery powered backup. Neat, but didn't really help unless you co

    • by tlhIngan ( 30335 )

      I suspect that most drives we're seeing are too full of compromises to unlock the real potential of flash storage. Manufacturers are sticking to 'safe' markets and form factors. For example, they all seem to target the 2.5" laptop drive market, so all the SSD controllers I've seen so far are all very low power (~1W), which seriously limits their performance. Also, very few drives use PCI-e natively as a bus, most consumer PCI-e SSDs are actually four SATA SSDs attached to a generic SATA RAID card, which is

    • *) This really is stupid: 256GB OCZ Z-Drive p84 PCI-Express [auspcmarket.com.au] is $2420, but I can get four of these 60GB OCZ Vertex SATA [auspcmarket.com.au] at $308 each for a total of $1232, or about half. Most motherboards have 4 built-in ports with RAID capability, so I don't even need a dedicated controller!

      Let me just point out, I bought 2 SSD drives and used my onboard RAID, only to find out that I was limited to 1 PCIe lane due to the onboard controller's design, and thus was running at the speed of a single one of my SSDs instead of 2, realizing no performance gains from RAID0.
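
The arithmetic behind that bottleneck is straightforward. A single PCIe 1.x lane carries roughly 250 MB/s; the per-drive read speed below is an assumed figure for a SATA II era SSD, used only to illustrate why striping a second drive behind an x1 link gains almost nothing.

    #include <stdio.h>

    int main(void) {
        /* Assumed sequential read speed of one SATA II era SSD (illustrative). */
        const double ssd_mb_s = 230.0;
        /* Usable bandwidth of a single PCIe 1.x lane, which is all that many
         * onboard southbridge RAID controllers of the time were wired to. */
        const double pcie1_lane_mb_s = 250.0;

        double raid0_supply = 2 * ssd_mb_s;   /* what two striped drives could deliver */
        double delivered = raid0_supply < pcie1_lane_mb_s ? raid0_supply : pcie1_lane_mb_s;

        printf("two-drive RAID0 could supply %.0f MB/s, but the x1 link caps it at %.0f MB/s\n",
               raid0_supply, delivered);
        printf("effective speedup over a single drive: %.2fx\n", delivered / ssd_mb_s);
        return 0;
    }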

  • by m.dillon ( 147925 ) on Saturday February 20, 2010 @12:13AM (#31207708) Homepage

    At least not the Colossus I bought. Write speeds are great but read speeds suck compared to the Intels. The Colossus doesn't even have NCQ for some reason! There's just one tag. The Intels beat the hell out of it on reading because of that. Sure, the 40G Intel's write speed isn't too hot but once you get to 80G and beyond it's just fine.

    The problem with write speeds for MLC-flash-based drives is, well, it's a bit oxymoronic. With the limited durability you don't want to be writing at high sustained bandwidths anyway. The SLC stuff is more suited to it, though of course we're talking at least 2x the price per gigabyte for SLC.

    --

    We've just started using SSDs in DragonFly-land to cache filesystem data and meta-data, and to back tmpfs. It's interesting how much of an effect the SSD has. It only takes 6GB of SSD storage for every 14 million or so inodes to essentially cache ALL the meta-data in a filesystem, so even on 32-bit kernels with their 32-64G swap configuration limit the SSD effectively removes all overhead from find, ls, rdist, cvsup, git, and other directory traversals (64-bit kernels can do 512G-1TB or so of SSD swap). So it's in the bag for meta-data caching.

    Data-caching is a bit more difficult to quantify, but certainly any data set which actually fits in the SSD can push your web server to 100MB/s out the network with a single SSD (a single 40G Intel SSD can do 170-200MB/sec reading, after all). So a GigE interface basically can be saturated. For the purposes of serving data out a network the SSD data-cache is almost like an extension of memory and allows considerably cheaper hardware to be used... no need for lots of spindles or big motherboards sporting 16-64G of RAM. The difficulty, of course, is when the active data-set doesn't fit into the SSD.

    Even using it as general swap space for a workstation has visible benefits when it comes to juggling applications and medium-sized data sets (e.g. videos or lots of pictures in RAW format), not to mention program text and data that would normally be thrown away overnight or by other large programs.

    Another interesting outcome of using the SSD as a cache instead of loading an actual filesystem on it is that it seems to be fairly unstressed when it comes to fragmentation. The kernel pages data out in 64K-256K chunks and multiple chunks are often linear, so the SSD doesn't have to do much write combining at all.

    In most of these use-cases read bandwidth is the overriding factor. Write bandwidth is not.

    -Matt
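
Matt's sizing figures are easy to sanity-check; the numbers below come straight from his comment (6GB of SSD swap per roughly 14 million inodes, 170-200 MB/s reads from a 40G Intel SSD, and a ~125 MB/s payload ceiling for gigabit Ethernet), with nothing new added.

    #include <stdio.h>

    int main(void) {
        /* ~6GB of SSD swap caches the meta-data for ~14 million inodes. */
        double cache_bytes = 6.0 * 1024 * 1024 * 1024;
        double inodes      = 14e6;
        printf("meta-data cached per inode: ~%.0f bytes\n", cache_bytes / inodes);

        /* A single 40G Intel SSD reads at 170-200 MB/s, while a GigE link
         * carries at most ~125 MB/s of payload, so one drive can keep the
         * network interface saturated. */
        double ssd_read_mb_s = 170.0;
        double gige_mb_s     = 1000.0 / 8.0;
        printf("GigE ceiling: %.0f MB/s; one SSD at %.0f MB/s %s saturate it\n",
               gige_mb_s, ssd_read_mb_s,
               ssd_read_mb_s >= gige_mb_s ? "can" : "cannot");
        return 0;
    }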

    • Re: (Score:3, Interesting)

      by AllynM ( 600515 ) *

      Matt,

      Totally with you on the Colossus not being great on random I/O; that's why we reviewed one:
      http://www.pcper.com/article.php?aid=821&type=expert&pid=7 [pcper.com]
      The cause is mainly that RAID chip. It doesn't pass any NCQ, TRIM, or other ATA commands on to the drives, so they have no choice but to serve each request in a purely sequential fashion. The end result is that even with 4 controllers on board, the random access of a Colossus looks more like that of just a single Vertex SSD.

      Allyn Malventano
      Storage Edito

  • I don't see why manufacturers prefer spending large amounts of time and money on producing smart controllers when they could just give us raw access to the device and let us use something like NILFS on top of it. Do you?
  • Anyone else agree that SSD speeds are plenty fast for the tasks given to them? When I shop for SSDs I look for a reputable company and a drive that doesn't stutter like crazy on reads and writes, at the lowest price. I've owned Intel X25-Ms as well as other brands and I can't tell the difference in performance. Of course, the benchmarks do show different numbers.

    But who is REALLY gonna notice that 0.03ms difference in "seek time" for one SSD over another and 150MB/sec over 220MB/sec sequential? SSDs these day

    • by Skapare ( 16644 )

      No. I want speed. I want to be able to suck 16GB of the OS out of the SSD and into RAM in 0.03ms. So there :-)

    • By and large, for ordinary user-space apps and workloads you are certainly right. But even some home users do intense things, such as video encoding or 3D rendering, which, because of the high-intensity I/O that can be associated with them, will certainly benefit from faster disks. Now, if one hasn't already upgraded to an SSD, I'll say this: one is missing out on the best upgrade you can do for your daily experience of your computer, barring a really nice monitor.

      C//

  • It should be illegal to label products like this. The only thing limited is the mental capacity of those who buy it because of this label. ;)

    • by Skapare ( 16644 )

      I specifically avoid products with such a label because I know that means I can't replace it if it fails. One exception is Mountain Dew's limited edition with real sugar (but that's not something that fails).

      • I specifically avoid products with such a label because I know that means I can't replace it if it fails.

        I agree, it's stupid, and unless they pre-determine the run size it's meaningless, but the general statement above applies to most computer parts if your window is much bigger than a year.

  • Photoshop 1.0 actually ran on a B&W Mac? Seriously? What's the point in that?

    Although, if anyone knows where I can find a copy of this for my Mac Plus, let me know...
