Data Storage Hardware

OCZ IBIS Introduces High Speed Data Link SSDs

Vigile writes "New solid state drives are released all the time, but their performance gains have started to stagnate as the limits of the SATA 3.0 Gb/s interface are reached. SATA 6 Gb/s drives are still coming out, and some newer PCI Express based drives are also available for users with a higher budget. OCZ is taking it another step with a new storage interface called High Speed Data Link (HSDL) that extends the PCI Express bus over mini-SAS cables and removes the bottleneck of SATA-based RAID controllers, increasing theoretical performance and allowing the use of command queueing, which is vital to high I/O rates in a RAID configuration. PC Perspective has a full performance review that details the speed and I/O improvements, and while initial versions will be available at up to 960 GB (and a $2,800 price tag), the cost per GB is competitive with other high-end SSDs once you get to the 240 GB and larger options."
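
For rough context, here is a back-of-the-envelope comparison of the raw link bandwidths involved. This is only a sketch in Python: it assumes standard 8b/10b-encoded SATA and PCIe 1.x/2.0 lanes and a 4-lane HSDL cable, and real drives land well below these ceilings.

    # Rough usable bandwidth of each link, assuming 8b/10b encoding
    # (true for SATA and for PCIe 1.x/2.0) and ignoring protocol overhead.

    def usable_mb_per_s(raw_gbit_per_s, encoding=8 / 10):
        """Convert a raw line rate in Gbit/s to an upper bound in MB/s."""
        return raw_gbit_per_s * encoding * 1000 / 8

    links = {
        "SATA 3.0 Gb/s": usable_mb_per_s(3.0),
        "SATA 6.0 Gb/s": usable_mb_per_s(6.0),
        "HSDL x4, PCIe 1.x lanes (4 x 2.5 Gb/s)": usable_mb_per_s(4 * 2.5),
        "HSDL x4, PCIe 2.0 lanes (4 x 5.0 Gb/s)": usable_mb_per_s(4 * 5.0),
    }

    for name, mb_per_s in links.items():
        print(f"{name:<40} ~{mb_per_s:5.0f} MB/s")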

  • first! (Score:2, Funny)

    by Anonymous Coward

    Thanks to the high speed link SSDs.

    • WAAAHAHAAA!

      SSDs huh? WHEN I CAN AFFORD ONE.

      FUCK THEM IN THE FACE WITH NERVE-RACKING MONOCABLES.

      HATE HATE HATE HATE

      SSDs COST NOTHING WHATSFUCKINGEVER TO PRODUCE COMPARED TO HDDs

      WHY DO THEY COST A KIDNEY PER TERABYTE???

      • Corwn wrote:

        WHY DO THEY COST A KIDNEY PER TERABYTE???

        Actually, that is a pretty good deal...

      • HATE HATE HATE HATE

        SSDs COST NOTHING WHATSFUCKINGEVER TO PRODUCE COMPARED TO HDDs

        WHY DO THEY COST A KIDNEY PER TERABYTE???

        The manufacturers still need to finalize the HAH (Have A Heart) standard. They approved version 1.0, but have to revise it due to a grammar error in section 45G, paragraph 8.

        I hear 1.1 will introduce two extensions called "soul" and "conscience" as well.

  • by GiveBenADollar ( 1722738 ) on Wednesday September 29, 2010 @09:41AM (#33733696)

    From the website: 'Whatever you do, don't plug an HSDL device into a SAS RAID card (or vice versa)!'

    Although I dislike proprietary connectors for generic signals, I dislike interchangeable connectors for different signals even more. Can someone with a bit more knowledge explain why this could ever be a good idea, or how it is not going to smoke hardware?

    • I haven't checked the details, but I'm willing to bet that the physical differential signaling levels used for PCIe (LVDS) and SAS/SATA are pretty similar. As long as they at least kept the transmit/receive pairs in the same place, I bet that plugging in the wrong type of device will probably just cause error reports from the controller or at worst severely confuse the device and/or controller, but won't cause any permanent damage.

    • by Relyx ( 52619 ) on Wednesday September 29, 2010 @10:46AM (#33734396)

      From what I gather it was cheaper and quicker for OCZ to co-opt an existing physical standard than roll their own. All the customer needs to do is source good quality SAS cables, which are in plentiful supply.

    • It's bone-headed is what it is. It's like some manufacturer saying "our notebook is going to start supplying 110 VAC at these connectors that just happen to look like USB host ports. Whatever you do, don't plug USB devices into them!"

      I know why they've done it, though: designing and testing new connectors before going to mass production is expensive in time and labour, so this saves them money. And it'll bite the customers when they plug the wrong devices in and find out they've blown their warranty along with their hardware.

    • It looks like just scaremongering from this "pc perspective" outlet. They never say they tried it, and I'd be willing to bet that nothing would happen if I plugged it into the wrong port.

  • The connectors shown in the article look very similar to the multilane connectors you see on RAID controllers, such as a 3ware card. Are they the same?

    • Re: (Score:3, Informative)

      by Joehonkie ( 665142 )
      Those are just very high-end SAS cables, so yes.
      • Re: (Score:3, Funny)

        by suso ( 153703 ) *

        If these are your idea of very high-end SAS cables/connectors, then you haven't met my friend Mr. $1million SAN.

        • Re: (Score:3, Informative)

          Is that the Monster Cable version?
          • Re: (Score:1, Funny)

            by Anonymous Coward

            No, it's the EMC version. Twice as expensive, only half as bling.

    • by Gates82 ( 706573 )

      Yes, this is a plain-Jane SAS connector, so as a comment above alludes to, don't mix and match this drive with your RAID card.

      --
      So who is hotter? Ali or Ali's Sister?

    • by suso ( 153703 ) *

      Never mind. Here is a con listed on the last page of the article: "HSDL cabling may be confused with mini-SAS cabling unless clearly marked."

      Specifically, I'm talking about the SFF-8087 with iPass connector as shown here [wikipedia.org].

  • How is this any different from existing PCI Express SSD products? They both consume a PCI Express slot... and this one consumes a 3.5" drive bay as well. Am I the only one missing the point?
    • Re: (Score:3, Informative)

      by Joehonkie ( 665142 )
      Probably. The point is that it's a whole new drive interconnect. They have another product that is a standalone card which supports 4 drives in a RAID. These drives only come with a card because it's a new interface technology and they are assuming you won't have a port for it yet. It's an open standard so they are gambling on it eventually becoming the standard for SSDs and having it built into motherboards and such.
  • by bill_mcgonigle ( 4333 ) * on Wednesday September 29, 2010 @10:01AM (#33733888) Homepage Journal

    The illustrations all seem to show an x8 card, but I think what they're saying is they multiplex a PCIe lane over each pair in the SFF-8087 cable. So, eventually you'll be able to run x16 out of a card to your drive bay, and use that now for a 4x4 config, but perhaps a single x16 config in the future.
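
    As a rough sketch of that arithmetic (my own numbers in Python, assuming the four-lanes-per-cable mapping described above and standard PCIe per-lane rates; nothing here comes from OCZ's spec):

        # Usable PCIe bandwidth: lanes x per-lane rate x encoding efficiency.
        # PCIe 1.x/2.0 use 8b/10b encoding; PCIe 3.0 raises the per-lane rate
        # to 8 GT/s and moves to 128b/130b encoding, together roughly doubling
        # PCIe 2.0 throughput at the same lane count.

        GENERATIONS = {            # (GT/s per lane, encoding efficiency)
            "PCIe 1.x": (2.5, 8 / 10),
            "PCIe 2.0": (5.0, 8 / 10),
            "PCIe 3.0": (8.0, 128 / 130),
        }

        def link_gb_per_s(generation, lanes):
            gt_per_s, efficiency = GENERATIONS[generation]
            return lanes * gt_per_s * efficiency / 8   # GB/s

        for generation in GENERATIONS:
            for lanes in (4, 16):                      # one cable vs. an eventual x16
                print(f"{generation} x{lanes:<2}: ~{link_gb_per_s(generation, lanes):5.2f} GB/s")

    Even the x4 / PCIe 1.x case already clears a single SATA 3 Gb/s link by a wide margin.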

    In short, a slower PCIe extension cord using existing cables (as opposed to the oddball PCIe external cables [xtreview.com]). This will probably put pressure on mobo vendors to add more x16 slots. I regularly build storage servers with 16 and 24 drive bays, and it looks like top-end now are Tyan AMD boards with 4 x16 slots. I'd like to see, for instance, a SuperMicro with 6 PCIe x16 slots and dual Intel sockets (though I'm using AMD 12-core more and more lately). PCIe 3.0 is due out in a couple months, so probably it will be there - OCZ could also update to the faster coding rate.

    • Aren't EVGA's X58 Classified boards better, with 7 x16/x8 slots?

      • Aren't EVGA's X58 Classified boards better, with 7 x16/x8 slots?

        Is this part 141-BL-E759-A1? I see only 4 x16 slots and no ECC.

        • I was referring to part number 170-BL-E762-A1; however, it does not have ECC. It claims 4-way SLI, but that is only because most SLI video cards are two slots wide and eat up the slot next to them; this board actually has 7 x16/x8 slots.

          • However, if ECC is a requirement, you can check out this part: 270-WS-W555-A1, also known as the Classified SR-2.
            7 x16/x8 slots
            Dual CPU (Xeon 5500/5600)
            Supports up to 48GB of DDR3

          • That's a pretty sweet rig - I've got some friends doing scientific computing who can't get enough GPU in a system - they'd probably like this.

            I have to say the EVGA site is very glitzy but not terribly helpful. I downloaded the 'spec sheet' and it was a 1-page advertisement. Sigh. Newegg's specs say 3 of the slots are x8, but they look like x16 in the picture.

            Even the ZFS guys insist on ECC for storage, but for a monster compute farm this looks awesome.

    • by drsmithy ( 35869 )

      In short, a slower PCIe extension cord using existing cables (as opposed to the oddball PCIe external cables). This will probably put pressure on mobo vendors to add more x16 slots. I regularly build storage servers with 16 and 24 drive bays, and it looks like top-end now are Tyan AMD boards with 4 x16 slots. I'd like to see, for instance, a SuperMicro with 6 PCIe x16 slots and dual Intel sockets (though I'm using AMD 12-core more and more lately). PCIe 3.0 is due out in a couple months, so probably it will be there - OCZ could also update to the faster coding rate.

      • I think it's only on the SLC SSDs - 250MB/s is close to a full SATA 3Gbps bus. But since those are the cache drives, more speed would be helpful.

        6Gbps SATA should double that, but the PCIe card devices claim 1500MB/s. I don't usually work in the price ranges of those parts, though, and to be fair, those may just have built-in striping.

        • by drsmithy ( 35869 )

          I'm still curious as to what situations - outside of benchmarking - you're in where an x8 PCIe bus is constraining. Or even a 3Gb SATA port, for that matter.

        • Just as I mentioned, the 250MB/s SLC cache drives. The zpool behind them is pumping 700MB/s out. It would be nice to have a big cache that could exceed the speed of the disks. The PCIe 1500MB/s cache drives do that.

  • Serial-Attached SCSI (Score:3, Interesting)

    by leandrod ( 17766 ) <l@dutras . o rg> on Wednesday September 29, 2010 @10:18AM (#33734060) Homepage Journal

    Why not just go SAS?

    • by dmesg0 ( 1342071 ) on Wednesday September 29, 2010 @10:39AM (#33734270)

      My question exactly. One miniSAS connector would give them 6Gb*4 = 24Gbps = ~2400GB/s (including overhead) - a lot more than enough bandwidth

      Maybe to save the cost of a SAS HBA (at least $200-300) and avoid paying royalties to T10?

      • Re: (Score:3, Informative)

        Maybe to save the cost of a SAS HBA (at least $200-300)

        That's the reason. OCZ found some really cheap obsolete Silicon Image PCI-X RAID controllers and PCIe-to-PCI-X bridge chips in a warehouse somewhere and decided to kludge together some "SSDs".

        • I like your scare quotes while referring to what is at the moment the fastest single storage device on the planet. (really just a raid in an unusual package, but still)

      • 24*1024/8=3072MB/s or 3GB/s

        24Gbps does not magically get 800 times faster just because it is SAS.

        Do not confuse bits and bytes.

        • by dmesg0 ( 1342071 )

          Sorry, I meant ~2400MB/s or ~2.4GB/s (which is 24Gbps divided by roughly 10 to take protocol overhead into account - that's the rule of thumb in SCSI: you get at most 400MB/s out of 4Gbps FC or 300MB/s out of 3Gbps SAS).

          Not confusing bits and bytes, just a typo.

          In any case, this throughput is currently theoretical - the best SAS HBAs are x8 PCIe 2.0, and limited by that bus's bandwidth of 20Gbps (which is also divided by 10 because of 8b/10b encoding).
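
          As a quick sketch of that rule of thumb in Python (just the divide-by-roughly-ten heuristic quoted above, not an exact protocol model):

              # SCSI-world rule of thumb: usable throughput is roughly the raw
              # line rate divided by ten (encoding plus protocol overhead).

              def rule_of_thumb_mb_per_s(line_rate_gbps):
                  return line_rate_gbps * 1000 / 10

              for label, gbps in [("3 Gb/s SAS", 3), ("4 Gb/s FC", 4), ("4 x 6 Gb/s mini-SAS", 24)]:
                  print(f"{label:<20} ~{rule_of_thumb_mb_per_s(gbps):5.0f} MB/s")
              # -> ~300 MB/s, ~400 MB/s and ~2400 MB/s, matching the figures above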

      • by Kjella ( 173770 )

      Yup. Compared to other ~250GB SSDs, the $739 price tag does not look so bad. Doing a quick price check, in USD less VAT I'd have to pay $600+ anyway. For the extra $100 or so you get the connector card and a built-in RAID, which is roughly what it'd cost you to get a 4-port RAID controller and 4 regular SSDs to RAID together. On the other hand, there's not much real reason to get this over a RAID setup either, but I'm guessing they're trying to push this connector out there. If they can get it rolling and start building ...

    • by Surt ( 22457 ) on Wednesday September 29, 2010 @10:55AM (#33734502) Homepage Journal

      Are you joking? Because that bandwidth has the same limitations this company (and all the other SSD makers) is trying to find a way to break free of.

  • Another Betamax ? (Score:2, Informative)

    by gtirloni ( 1531285 )
    Same physical connector with different electrical wiring. Now we can fry all those expensive SAS parts. Yay! I don't see this taking off. The storage industry is moving to SAS 6Gb/s now.
    • Re: (Score:3, Interesting)

      by dave420 ( 699308 )
      Most people don't have SAS in their machines. And even if they did, 6Gb/s isn't enough for a lot of people.
  • Inside the IBIS there are two full SATA drive boards, with SandForce SATA controllers, connected to a standard PCIe/SATA RAID controller on the base board.

    The only difference between this and a SATA RAID controller with two regular SSDs is that the cable is in a different place.

    • by Xzisted ( 559004 )
      Don't forget that another reason to move away from SAS/SATA and towards PCIe is to break away from current restrictions in RAID controllers. This setup looks targeted at enterprise RAID. Enterprise RAID setups, including LSI Logic's MegaRAID (the H700/H800 from Dell), can't support things such as NCQ or SMART, which are really important features on many traditional hard drives, or TRIM for SSDs. Support for NCQ would be required to hit higher transfer speeds in an SSD RAID setup than what we are able to hit today.
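
      To see why command queueing matters so much for SSDs, here is a simple Little's-law sketch in Python (the latency figure is an illustrative assumption, not something from the article):

          # Little's law: sustained IOPS is bounded by outstanding commands
          # divided by per-command latency.  NCQ lets the drive keep many
          # commands in flight; without it the effective queue depth is 1.

          def max_iops(queue_depth, latency_seconds):
              return queue_depth / latency_seconds

          latency = 100e-6   # assume ~100 microseconds per random read
          for queue_depth in (1, 4, 32):
              print(f"queue depth {queue_depth:2d}: ~{max_iops(queue_depth, latency):,.0f} IOPS")
          # queue depth  1: ~10,000 IOPS
          # queue depth  4: ~40,000 IOPS
          # queue depth 32: ~320,000 IOPS
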
  • how much Monster would charge for these cables

    ...I can't count that high.
  • ...I couldn't take looking at any more artifacted jpeg images after page 5, which it seems was only 1/10 of the way through... Sheesh...
