Start-up Claims SSD Achieves 180,000 IOPS

Posted by ScuttleMonkey
from the chestnuts-roasting-over-an-open-cage dept.
Lucas123 writes "Three-year-old start-up Pliant Technology today announced the general availability of a new class of enterprise SAS solid-state drives that, it claims, can achieve up to 180,000 IOPS without using any cache, with sustained read and write rates of 500MB/sec and 320MB/sec, respectively. The company also claims an unlimited number of daily writes to its new flash drives, guaranteeing 5 years of service with no slowdown. 'Pliant's SSD controller architecture is not vastly different from those of other high-end SSD manufacturers. It has twelve independent I/O channels to interleaved single-level cell (SLC) NAND flash chips from Samsung Corp. The drives are configured as RAID 0 for increased performance.'"
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    there's enough for everyone in here
    • Re: (Score:1, Interesting)

      by Anonymous Coward

      Actually, current SSDs are bottlenecked by the SATA II connection at 300MB/s read, so getting 500 with specialized hardware doesn't seem all that fantastic.

      • by adisakp (705706)

        Actually, current SSDs are bottlenecked by the SATA II connection at 300MB/s read, so getting 500 with specialized hardware doesn't seem all that fantastic.

        The easy way around the SATA speed limit is software RAID and multiple drives. I have two Kingston 160GB (relabelled Intel G1) SSD Drives on an Intel Matrix Controller MB with software RAID 0. I get read rates over 400MB/s with technology that is roughly a year old. I'm sure newer technology on higher end controllers can easily achieve 500.

        • Re: (Score:3, Insightful)

          by XanC (644172)

          That doesn't get around the bottleneck at all. You've got the same ratio of actual bandwidth used to theoretical bandwidth possible.

          A single drive with multiple SATA interfaces, acting like RAID 0, would alleviate the bottleneck.

          • by adisakp (705706) on Monday September 14, 2009 @04:42PM (#29419597) Journal

            That doesn't get around the bottleneck at all.

            I get nearly 2X the speed of a single drive that is limited by SATA. Theoretically, that might not be the same thing but for all *PRACTICAL* purposes, it gets around the bottleneck just fine for me :-)

            • Re: (Score:2, Insightful)

              by Garganus (890454)

              That doesn't get around the bottleneck at all.

              I get nearly 2X the speed of a single drive that is limited by SATA. Theoretically, that might not be the same thing but for all *PRACTICAL* purposes, it gets around the bottleneck just fine for me :-)

              Yep, doubling your bus count usually doubles your transfer speed. *rolling eyes*

      • SAS not SATA (Score:2, Insightful)

        by davidwr (791652)

        TFA said serial-attached SCSI (SAS) was currently 6Gb/sec going on to 12 by 2012. SATA III is also 6Gbit/sec.

        0.5GB/sec is 4Gbit/sec, which is under the SAS limit.

        Even if it were SATA @ 3Gbit/sec that would still be quite fast.
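        The link-budget math above can be sketched quickly. Counting the 8b/10b line coding that SATA and SAS use (10 wire bits per payload byte), the 500MB/s claim needs about 5Gbit/s on the wire, which still fits a 6Gbit/s SAS link. A rough Python sketch, not vendor numbers:

```python
# Rough link-budget check for the claimed 500 MB/s sustained read rate.
# SATA/SAS use 8b/10b encoding, so each payload byte costs 10 bits on the wire.

def wire_rate_gbit(payload_mb_per_s: float) -> float:
    """Raw line rate (Gbit/s) needed to carry a given payload rate (MB/s)."""
    return payload_mb_per_s * 10 / 1000

for link_name, link_gbit in [("SATA II", 3), ("SAS / SATA III", 6)]:
    needed = wire_rate_gbit(500)
    verdict = "fits" if needed <= link_gbit else "does not fit"
    print(f"{link_name}: need {needed:.1f} of {link_gbit} Gbit/s -> {verdict}")
```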

      • by AHuxley (892839)
        If the chips for a high end and low end unit are the same, the only way to get sales is to work on the controllers.
        By making new and better controllers, end users are now on a software upgrade path for hardware.
    Why sell the captive enthusiast market 1 drive in 3-5 years when you can sell them 2 or 3 over the same time?
      • by mysidia (191772)

        That's why their product is an Enterprise SAS drive, not a SATA drive. SAS can get 3 gigabits per second.

        SAS is Serial Attached SCSI, which isn't the same thing as SATA.

        SATA is a consumer-level/workstation technology, whereas SAS is for servers.

        You can plug a SATA drive into a SAS port, but can't plug a SAS drive into a SATA port. :)

  • Yay. (Score:1, Funny)

    by 2names (531755)
    Neat.
  • by ibsteve2u (1184603) on Monday September 14, 2009 @03:35PM (#29418731)
    They're fishing for a price point? Quick, everybody make a comment to the effect that such a drive is only worth about $10...
  • Congrats (Score:5, Insightful)

    by VeNoM0619 (1058216) on Monday September 14, 2009 @03:35PM (#29418739)
    Congrats! Oh wait...

    Start-up Claims SSD Achieves 180,000 IOPS

    Claims? As in no one else but the company has stated this "fact"? I wish this article waited for a review before being posted :S

    • Re:Congrats (Score:5, Funny)

      by Monkeedude1212 (1560403) on Monday September 14, 2009 @03:44PM (#29418865) Journal

      I can claim that I have confirmed it if you like.

    • Reviews of enterprisey hardware are near-impossible to find, so you may be waiting a while.

    • "three-year-old start-up ... guaranteeing 5 years of service"

      Completely possible: they acquired another company which performed testing.
      • by sphantom (795286)

        A company doesn't necessarily even have to test their product for as long as they claim it will last. Oftentimes they'll just test the product at a usage level x times greater than is expected on average and do the math. An example in the context of this story might be testing an SSD with an amount of reads and writes 5 times greater than they'd expect an average person to use. If it lasts for a year, they can claim it will last 5.
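        That accelerated-testing extrapolation is just a multiplication; a minimal sketch (the 5x stress factor and 1-year test run are the hypothetical figures from the comment above, not anything Pliant has published):

```python
# Accelerated-life estimate: run the device at a multiple of the expected
# workload and scale the observed survival time by the same multiple.

def projected_lifetime_years(test_years: float, stress_factor: float) -> float:
    """If the unit survives `test_years` at `stress_factor` x normal load,
    project `test_years * stress_factor` years at normal load."""
    return test_years * stress_factor

print(projected_lifetime_years(1.0, 5.0))  # 1 year at 5x load -> claim 5 years
```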

        • Re: (Score:3, Informative)

          by MartinSchou (1360093)

          In this case it's probably more a matter of just doing the math.

          They know their cells can handle 100,000 writes in their lifetime, they know the maximum number of writes they'll see (180,000/s for 5 years for the 3½ inch model), and they can merely do the math to figure out how many cells they need to have in their product to survive.

          I did the math elsewhere, and to do it with 4 kB/write they'd only need 136 GB. Even when looking at the 320 MB/s write rate, you're only averaging 1.9 kB/write if you're

    • I can't seem to find anything on their website [storage-news.com] and/or in their data sheets that confirms the claim in the summary about "unlimited writes for 5 years"; just a 2 million hour MTBF. Can anyone point me to a statement from Pliant that confirms this?
    • by bertok (226922)

      Congrats! Oh wait...

      Start-up Claims SSD Achieves 180,000 IOPS

      Claims? As in no one else but the company has stated this "fact"? I wish this article waited for a review before being posted :S

      It's not outside what I'd expect for a next-gen enterprise SSD. The PCI-E FusionIO cards can easily do 100K IOPS sustained. I'm just surprised the SAS bus can send that many commands per second. I guess the SCSI wire protocol scales better than I assumed.

      The bandwidth numbers are actually relatively low - that's the bus speed limiting the drive. I suspect that pretty soon, most enterprise-grade SSDs will connect using the PCI bus to avoid that.

  • by Robotbeat (461248) on Monday September 14, 2009 @03:40PM (#29418809) Journal

    I used pre-production versions of these. I tested them with Terabytes of test data in random write tests. They are amazing, and can saturate a 1Gb FC connection with random writes. They are very resilient. We put these in my company's demo boxes to show that our architecture can compete with EMC. Kind of cheating, but we told them that it was a special drive that enables us to show the limits of our storage management architecture in a small, 1U box, instead of just showing you the limits of physical hard drives.

    We beat their 8Us of EMC hard drives by 34% with just one of these 2.5" drives, and we had bottlenecks all over the place in our small demo box. And they did the testing, not us.

    The thing about these drives is that they are more expensive ($/GB) even than registered ECC DDR2/3 RAM, which obviously is going to be even faster.

    • by dosguru (218210)

      1GbFC? How well can this stand up to modern 8GbFC or 10GbE iSCSI?

      • by afidel (530433)
        Or better 10Gb FCoE (lower overhead than iSCSI). In theory with a fast enough controller they should be able to do it for reads in a RAID1 configuration.
    • The thing about these drives is that they are more expensive ($/GB) even than registered ECC DDR2/3 RAM, which obviously is going to be even faster.

      So, how much do they cost exactly?

      • by owlstead (636356)

        The thing with these kinds of prices is that you start off with the off-the-shelf price - if any - and then negotiate the real price. And the final price is - of course - confidential; otherwise clients would start comparing prices on the internet. If he posted the price, it would point directly to the company that paid it and signed the confidentiality contract.

    • Two problems:

      1) They're bottlenecked by SAS, which, if they're using 3gbit controllers, probably won't go that much higher than ~500MB/s

      2) Their cost is probably insane, if they're setting the upper bounds at $6000

      By comparison, Fusion-IO claims 100,000 IOPS (not as high, but not far off) on their drives, and are about to introduce a new model for $895. They use a PCI-e 4x slot, which assuming v1.x, should give them about 10gbit/s (before overhead) to play with.

      Also, Woz is their chief scientist, so bonus.
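      The PCIe figure above checks out once encoding overhead is included: PCIe 1.x signals 2.5GT/s per lane with 8b/10b coding, so an x4 slot moves 10Gbit/s raw but about 8Gbit/s of payload. A back-of-envelope sketch, not Fusion-io's spec:

```python
# PCIe 1.x payload bandwidth: 2.5 GT/s per lane, 8b/10b encoded,
# so each lane carries 2 Gbit/s of payload.

LANE_RAW_GBIT = 2.5        # PCIe 1.x per-lane transfer rate
ENCODING_EFFICIENCY = 0.8  # 8b/10b: 8 payload bits per 10 wire bits

def pcie1_payload_gbit(lanes: int) -> float:
    return lanes * LANE_RAW_GBIT * ENCODING_EFFICIENCY

print(f"x4 slot raw:     {4 * LANE_RAW_GBIT} Gbit/s")
print(f"x4 slot payload: {pcie1_payload_gbit(4)} Gbit/s")
```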


    • The thing about these drives is that they are more expensive ($/GB) even than registered ECC DDR2/3 RAM, which obviously is going to be even faster.

      That only means it would probably be better to use RAM for read-only applications. An application that needs to commit a write to a database, ensuring that the bits are actually written to a physical medium, will be able to utilise flash rather than a hard disk. A lot of database servers would gain increased performance from such an arrangement.

      • The $/GB of DRAM is misleading. Sure, 300 GB of RAM is cheap, but how much does the server cost that can hold it?

        • by billcopc (196330)

          That's why we need more development in RAM-based disk emulators. Much like the now-archaic Gigabyte i-Ram, I would kill for a PCI-E card that takes 8 or more registered RAM modules and spits out a bunch of SAS or SATA connectors to be Raid-0'ed, with battery backup. It would be cheaper than a high-end SSD and much much faster.

          • I'm late reading this article, but there *are* products out there that do exactly what you state. I can't recall any company names off-hand (and it'd sound too much like an advertisement anyway), but I think one of them was mentioned earlier by someone. The ones I've seen will take 6 sticks of ECC-R DDR2, and have a small external connector for power to maintain the contents of RAM while the computer is off. You're still limited by how many PCI-E slots you have in your servers (most 1U servers have 1-2 for

      • by atamido (1020905)

        An application that needs to commit a write to a database, ensuring that the bits are actually written to a physical medium, will be be able to utilise flash rather than a hard disk.

        Log files are sequential, so there is no speed benefit in using flash media instead of hard drives.

        • by chrb (1083577)

          In an application where you need to ensure data is written to somewhere physical on commit the database will call fsync(). This can only be done 250 times per minute on a 15k rpm disk drive [livejournal.com]. That limits the database to 250 commits a minute. Battery backed or flash cache increases performance here - "The really enormous performance increases that have been found for update-heavy database loads come from another hardware enhancement, namely the RAID controller with battery-backed cache.... On a test involving [linuxfinances.info]

          • by atamido (1020905)

            Write speeds for a given rotational speed have been getting better over the years, and any real system is going to be using a battery backed cache anyway. With a properly tuned file system I don't see any benefit from an SSD for purely sequential writes. Add in cache on a controller card, and you don't even need that. Modern drives can cache the fsync() operations and perform them as a long single write.

          • The number of fsync's per minute depends a lot on the writes you are doing. If you are using a log-structured filesystem then fsync() can be handled asynchronously; you just don't return until the drive reports that it has committed those write (all writes are linear in LFS). The down side of this is that, until the garbage collector runs, your data is slow to read back. This isn't always a problem for a database. If you have a 4GB db then you can keep it all in RAM and stream the commits out to disk.
        • by chrb (1083577)

          Of course I meant 250 commits per second, not per minute. That's still assuming the best case of one transaction being written every rotation of the disk. Actual world results will probably be lower.
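          The ceiling follows directly from rotational latency: a synchronous commit that must reach the platter completes at most once per revolution in the worst case, so a 15k rpm disk tops out around 250 commits/second. A sketch of that arithmetic:

```python
# Worst-case synchronous commit rate for a spinning disk: one committed
# transaction per revolution, so the ceiling is revolutions per second.

def max_commits_per_second(rpm: int) -> float:
    return rpm / 60.0

for rpm in (7_200, 10_000, 15_000):
    print(f"{rpm} rpm -> at most {max_commits_per_second(rpm):.0f} commits/s")
```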

  • awesome (Score:2, Funny)

    by KingPin27 (1290730)
    This looks like a pretty good device. Though I haven't heard much about them until recently, I'm still pretty skeptical about their claimed lifespan -- something that would be able to handle 24/7 consistent read/write for a number of years. The other thing that leaves me scratching my head is the missing DRAM cache -- I thought the need to store information and then write it from a buffer was kind of important, especially with writing as fast as SAS is supposed to be able to transfer it. If these were hitting the shelv
    • by Bigjeff5 (1143585)

      I thought the need to store information then write it in buffer was kind of important especially with writing as fast as SAS is supposed to be able to transfer it.

      What's the point of a buffer if SAS can barely keep up with the drive's IO speed? All writes will be at the limit of the SAS. Surely you don't think they put the cache buffers in hard drives for data integrity do you? That makes no sense, the cache is more volatile than the storage medium, by definition.

      Think of it this way, slow drives need lots and lots of cache, fast drives need very little cache. Does your RAM have some other cache before it sends stuff off to the CPU? Same thing here, except its g

  • by toppavak (943659) on Monday September 14, 2009 @03:45PM (#29418881)
    Given that Intel's enterprise-class SSDs already offer sustained speeds of up to 250MB/s read and 170MB/s write, wouldn't read speeds of approximately 500MB/s and write speeds of over 300MB/s be expected?
    • by MartinSchou (1360093) on Monday September 14, 2009 @04:24PM (#29419369)

      The 12 independent channels can be accessed as RAID-0 if needed, giving upwards of 12x the speed of a single channel, but this is done by the onboard controller, not by anything else.

      Intel uses 10 independent channels to achieve their speeds, also in a "RAID-0" like setup.

      • by toppavak (943659)
        So they are claiming up to 500/320 when all 12 channels are used in a RAID 0-like configuration while Intel achieves their 250/170 doing something similar with 10 channels? That makes more sense, thanks!
    • by zdzichu (100333)

      Intel X25-E delivers about 5,000 IOPS. How are you going to match a drive doing 9x more with RAID?
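      Taking the parent's ~5,000 IOPS per X25-E at face value, a quick sketch of how many drives a RAID 0 would need just to reach the headline number (ignoring controller overhead, which in practice is the real limit):

```python
# How many RAID 0 members at ~5,000 write IOPS each (the parent's X25-E
# figure) would be needed to match the claimed 180,000 IOPS?

import math

def drives_needed(target_iops: int, per_drive_iops: int) -> int:
    return math.ceil(target_iops / per_drive_iops)

print(drives_needed(180_000, 5_000))  # -> 36
```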

  • The summary seems to end abruptly, and so does the article.
  • by afidel (530433) on Monday September 14, 2009 @04:10PM (#29419199)
    With all the fast SSDs I've tested, I've found the controllers to be a bigger bottleneck than the SSDs themselves. I've seen 50% performance gains on the Intel X25-Es simply by hooking them to a second machine with a different controller. Even with the best performer (Intel ICH9), I still had the feeling that the controller might have been holding the drive back a bit. I haven't tried it with an ICH10-based board yet, though, so perhaps there are significant improvements there. (On further reading, they claim to be using SAS. I'm not aware of any really high-performance SAS chipsets; they all seem to be targeted at RAIDs of traditional HDDs and so can't keep up with SSDs. I'd really be interested in some details of their test.)
    • With SAS there are basically two choices: LSI 1068 (with IT firmware for maximum performance) or the not-yet-released LSI 9210.

    • by XanC (644172)

      I'll soon be configuring another ICH10 box with an X25-E; if you want to send me some benchmarks to run I could probably do it.

    • by AHuxley (892839)
      I see it more as a SSD cartel. Intel sets the bar high and all the rest form up in and around it. Nobody wants to spook the prosumer herd just yet.
      Milk them for a few more years, then its a race to the bottom as a commodity product.
    • The solution to your problem is to not have a chipset in the way - like with FusionIO ioDrives. PCIe based SSD, direct connection to the CPU! Doesn't even use much CPU time, because it's so damn fast that unlike HDDs, it doesn't spend much time waiting.

      Coincidentally, those ioDrives are also faster. I'd love the 1.4/1.5 GB/sec write/read variety, but I have a feeling I'd have to sell my car, and maybe my house. Even their low end model "for Desktop PCs" costs $900 for 80GB. If these guys can keep the price

      • Doesn't even use much CPU time

        It sounds like you haven't used Fusion io; when you saturate the card the driver uses an entire CPU core.

        • It sounds like you haven't used Fusion io; when you saturate the card the driver uses an entire CPU core.

          I haven't, and I was aware of that.

          Perhaps I would've been more correct in saying "makes efficient use of CPU time"?

          Most of us have quad-core CPUs with multiple cores sitting idle while we game or work on stuff. Using one core to give other cores and programs access to data much faster is a good tradeoff.

          And HDD access for a relatively small number of drives(12?) will saturate a core too, unless you have a decent controller card. It's all that RAID parity checking and stuff. But 12 HDDs won't come close in

  • Unlimited writes? (Score:3, Interesting)

    by pmontra (738736) on Monday September 14, 2009 @04:36PM (#29419527) Homepage
    From TFA:

    they're also able to claim unlimited program and erase [write/erase] cycles,

    They're using SLC NAND flash, which wears more slowly than MLC NAND [wikipedia.org], but that doesn't mean there is no wear at all. It looks like a nice drive anyway.

    • True, but in this context the word "unlimited" is being used to mean "you can't wear it out in 5 years". It's vaguely similar to "unlimited" Internet: The ISP may not slow you down at a set data limit, but you still can't pull more than ~300GB through a 1Mb connection per month.

      But yeah, I don't like how marketing departments use the word unlimited either.
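      The ~300GB figure is simple arithmetic on a saturated link: a fully-used 1Mbit/s connection moves about 324 decimal GB in a 30-day month. A rough sketch:

```python
# Maximum data a fully saturated link can move in a month.

def monthly_cap_gb(link_mbit: float, days: float = 30) -> float:
    seconds = days * 24 * 3600
    bytes_total = link_mbit * 1e6 / 8 * seconds  # bits/s -> bytes over the month
    return bytes_total / 1e9                     # decimal gigabytes

print(f"1 Mbit/s for 30 days: about {monthly_cap_gb(1):.0f} GB")
```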

    • Re:Unlimited writes? (Score:5, Informative)

      by MartinSchou (1360093) on Monday September 14, 2009 @06:51PM (#29420729)

      They didn't say "unlimited writes forever" they said "unlimited writes for 5 years", and that's obviously limited to what the drive can do, i.e. 180,000 operations per second for their 3½ inch drive.

      At 180,000 IOPS * 5 years you're looking at 28,401,233,400,000 write operations.
      At 320 MB/s * 5 years you're looking at writing 47 petabytes worth of data.

      Now, obviously none of those figures are realistic, as there is no way you would be writing 100% and never ever reading your data again. But they are claiming that their drives can handle those loads without failing. In order for their device to handle that many writes, they'll need a minimum of 284,012,334 cells. That's assuming 1 bit/write of course. The more realistic thought is 4 kB/operation. Now you're looking at 9,306,516,160,512 cells or 136 GB, and I think it's safe to assume that their 3½ inch drive will store more than 136 GB of data.

      It's not unlimited forever, it's unlimited within a timespan and capabilities of the device. And just doing the math makes this seem entirely plausible.

      • by Ant P. (974313)

        That's only plausible assuming you only use a log-structured filesystem. Use something that stores something in a fixed position on disk, say... a journal, and you'll find it can survive a lot less than 28 trillion writes.

        • Re: (Score:3, Informative)

          by josath (460165)
          It's called wear-leveling. Writing to the same spot from the OS's point of view doesn't actually write to the same spot on the chip inside the actual drive. It shuffles things around to make sure everything gets used up evenly.
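          A toy sketch of that remapping idea (a real flash translation layer also tracks free blocks, erases, and garbage collection; the class and field names here are made up for illustration):

```python
# Toy wear-leveling map: every logical write is steered to the physical
# block with the fewest writes so far, so hammering one logical block
# still spreads wear across the whole device.

class TinyFTL:
    def __init__(self, physical_blocks: int):
        self.wear = [0] * physical_blocks   # write count per physical block
        self.map = {}                       # logical block -> physical block

    def write(self, logical: int, data: bytes) -> int:
        phys = min(range(len(self.wear)), key=lambda b: self.wear[b])
        self.wear[phys] += 1
        self.map[logical] = phys
        return phys

ftl = TinyFTL(4)
for _ in range(8):
    ftl.write(7, b"journal")  # same logical block every time
print(ftl.wear)  # wear is spread evenly: [2, 2, 2, 2]
```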
      • But they are claiming that their drives can handle those loads without failing.

        Yeah, right. Just as HDD manufacturers have been claiming for decades now that their drives will survive a decade of normal loads, while in reality they often fail in 2-3 years or less.

        Sorry, but with my experience, I won't believe a word of their claims. My data is worth too much.

      • by TheLink (130905)
        How'd you get your 136GB figure?

        4KB per op, 5 years, 180000 operations per second, 100000 overwrites allowed before the flash becomes unreliable.

        That gives me: 4kB * 2.840184 * 10^13 total ops/ 100K writes = 1 TB. Not 136GB.

        Or perhaps the SLC flash they are using allows 1M overwrites? But in another post your assumption was 100k.

        What am I missing?
        • No, you're not missing anything. I used Google to calculate the number:
          135 GB [google.com].

          However, I did have to do a bit of checking up, and I found out what was wrong:
          1.06 TB [slashdot.org]

          Capitalization isn't just important in husbandry.

          When I did my 135 GB calculation, I checked their website and found that their lowest-capacity unit was 150 GB, so I expected that I was right, and the kb vs kB error hadn't crossed my mind.

          I did just check Wikipedia [wikipedia.org] though:

          SLC Floating Gate NOR Flash has typical Endurance rating of 100K to 1,000K c
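          Re-running the corrected arithmetic from this subthread, assuming 4kB (4096-byte) writes, 180,000 writes/s for 5 years, and 100K P/E cycles per cell (the low end of the SLC endurance range quoted above):

```python
# Minimum capacity needed to absorb 5 years of writes at 180,000 IOPS,
# assuming 4 kB (4096-byte) writes and 100,000 P/E cycles per cell.

SECONDS_PER_YEAR = 31_556_952              # average Gregorian year
total_writes = 180_000 * 5 * SECONDS_PER_YEAR
bytes_written = total_writes * 4 * 1024
capacity_needed = bytes_written / 100_000  # spread evenly over 100K cycles

print(f"{capacity_needed / 2**40:.2f} TiB")  # about 1.06, matching the corrected figure
```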

  • I have to wonder about the accuracy of the following claim:

    Pliant also claims there is no limit to the number of writes that can be performed to the drive and that it will work without slowdown for at least five years.

    I have no problems with their claimed speed since, frankly, if you run multiple smaller internal units in parallel, you can pretty much get any speed you desire. But it's my understanding that the wearing out of the storage cells is a physical problem and in order for their claim to hold true, the

    • probably number 2 -- all you need to do is have your wear-levelling software swap infrequently-written cells onto frequently-written ones, once some write disparity has arisen. Something like:

      onWrite(data, location) {
          coldest = drive.leastWrittenLocation
          if (location.writeCount > threshold * coldest.writeCount) {
              write(coldest.data, location)  // park the cold data in the hot cell
              write(data, coldest)           // steer this write to the cold cell
          } else {
              write(data, location)
          }
      }

      (i'm sure this is a sub-op
  • ASIC to the rescue (Score:2, Informative)

    by Art3x (973401)
    Article:

    based on a proprietary ASIC design

    Most enterprise-class SSDs today also use a general purpose field programmable gate array (FPGA) controllers as opposed to Pliant's custom controller

    Seems like the same massive advantage of an Application-Specific Integrated Circuit (ASIC) over general processors and even FPGAs that I see in video compression, a field I keep tabs on.

    At one time I had wondered why a $100 camcorder could encode video in real-time, when my seemingly much more powerful desktop took hour

    • by atamido (1020905)

      Most ASICs really aren't that good. For instance, the efficiency of compression on most cameras is not that good, but it only has to be good enough to compress down to a size that it can write fast enough to its medium. On a PC the encoders are heavily optimized towards efficiency as you want to reduce storage and transfer costs as much as possible.

      Today, with the sheer number of transistors on a general purpose CPU, it would cost way too much to develop an ASIC that is faster for many purposes in raw pow

  • I stopped caring about speed. As long as it's fast enough to boot before I'm done in the kitchen and bathroom, and as long as it manages to stream HD movies and the like, it's OK.

    I'd rather have a good ZFS pool and a set of reliable drives that survive not only the first 3 years, but also the next 20!
