
Intel Launches SSD 750 Series Consumer NVMe PCI Express SSD At Under $1 Per GiB

MojoKid writes: Today Intel took the wraps off new NVMe PCI Express solid state drives, the first products with this high-speed interface that the company has launched specifically for the enthusiast and workstation market. Historically, Intel's PCI Express-based offerings, like the SSD DC P3700 Series, have been targeted at datacenter and enterprise applications, with price tags to match. The Intel SSD 750 Series, however, though based on the same custom NVMe controller technology as the company's expensive P3700, drops in at less than a dollar per GiB while offering performance almost on par with its enterprise-class sibling. Available in 400GB and 1.2TB capacities, the SSD 750 hits peak read and write bandwidth of 2.4GB/sec and 1.2GB/sec, respectively. In the benchmarks it easily takes many of the top PCIe SSD cards to task, and at $389 for the 400GB model, you won't have to sell an organ to afford one.

Comments:
  • What kernel version is needed to support these drives?
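
    For context: mainline Linux picked up an in-kernel NVMe driver (CONFIG_BLK_DEV_NVME) around 3.3, so any reasonably recent kernel should expose the drive as /dev/nvme0n1. A minimal Python sketch for checking a running system, assuming the usual /boot/config-<release> and /sys/class/nvme locations (verify paths on your distro):

      # Sketch: does this kernel build include the NVMe driver, and are any
      # NVMe controllers visible? Paths are common defaults, not guaranteed.
      import os
      import platform

      release = platform.release()
      print("kernel:", release)

      config = "/boot/config-" + release
      if os.path.exists(config):
          with open(config) as f:
              hits = [line.strip() for line in f if "BLK_DEV_NVME" in line]
          print("\n".join(hits) or "CONFIG_BLK_DEV_NVME not set in this kernel config")
      else:
          print("no kernel config found at", config)

      nvme_sysfs = "/sys/class/nvme"
      controllers = os.listdir(nvme_sysfs) if os.path.isdir(nvme_sysfs) else []
      print("NVMe controllers visible:", controllers or "none")
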
  • let's see...
    pci express raid controller ~ $100
    5 x 256GB SSDs ~ $500

    $600 vs $1200 (assuming $1 per GB for this intel card)
    not sure about speed. in theory, the array should be faster due to RAID striping (4 or 5 x 500MB/sec).
    power and cabling are a mess, so definitely a con there.
    fault tolerance is a plus from RAIDing.
    upgradeable storage capacity is a plus.
    otherwise, great for server farms.
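
    A quick back-of-the-envelope sketch of that comparison, using only the parent's own ballpark prices and per-drive speeds, and assuming ideal RAID 0 scaling:

      # Back-of-the-envelope comparison of the DIY array vs the SSD 750.
      # All figures are the parent comment's assumptions, not benchmarks.
      raid_controller = 100          # $, hypothetical PCIe RAID card
      sata_ssd_price = 100           # $ per 256GB SATA SSD
      sata_ssd_count = 5
      sata_ssd_seq_mb = 500          # MB/s sequential per SATA SSD, typical

      diy_cost = raid_controller + sata_ssd_count * sata_ssd_price
      diy_capacity_gb = sata_ssd_count * 256
      diy_peak_read_mb = sata_ssd_count * sata_ssd_seq_mb   # ideal RAID 0 scaling

      ssd750_capacity_gb = 1200
      ssd750_cost = 1200             # parent's assumption of $1/GB
      ssd750_read_mb = 2400          # 2.4 GB/s peak read from the summary

      print(f"DIY array: ${diy_cost}, {diy_capacity_gb}GB, "
            f"${diy_cost / diy_capacity_gb:.2f}/GB, ~{diy_peak_read_mb}MB/s best case")
      print(f"SSD 750  : ${ssd750_cost}, {ssd750_capacity_gb}GB, "
            f"${ssd750_cost / ssd750_capacity_gb:.2f}/GB, {ssd750_read_mb}MB/s peak read")
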
    • by guruevi ( 827432 ) on Thursday April 02, 2015 @12:37PM (#49392637)

You're also using 5 (2 per U) drive slots vs 1 (10 per U). And that assumes your RAID controller can push to the drives at PCIe speed. RAID controllers aren't that fast; even chips from expensive manufacturers push the boundaries at 6Gbps and ~100,000 IOPS for the entire array.

    • by zlives ( 2009072 )

IMHO, SSDs are all about speed... "2 lanes of PCIe 3.0 offers 3.3x the performance of SATA 6Gb/s"
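
      That 3.3x figure roughly checks out once line-encoding overhead is accounted for; a small sketch of the math, using published spec line rates rather than measured numbers:

        # Rough bandwidth math behind the "3.3x" quote.
        # PCIe 3.0: 8 GT/s per lane, 128b/130b encoding.
        pcie3_lane_mb = 8e9 * (128 / 130) / 8 / 1e6      # ~984 MB/s usable per lane
        pcie3_x2_mb = 2 * pcie3_lane_mb                   # two lanes, as in the quote
        pcie3_x4_mb = 4 * pcie3_lane_mb                   # the SSD 750 itself is x4

        # SATA 3: 6 Gb/s line rate, 8b/10b encoding.
        sata3_mb = 6e9 * (8 / 10) / 8 / 1e6               # ~600 MB/s usable

        print(f"PCIe 3.0 x2: ~{pcie3_x2_mb:.0f} MB/s ({pcie3_x2_mb / sata3_mb:.1f}x SATA)")
        print(f"PCIe 3.0 x4: ~{pcie3_x4_mb:.0f} MB/s ({pcie3_x4_mb / sata3_mb:.1f}x SATA)")
        print(f"SATA 6Gb/s : ~{sata3_mb:.0f} MB/s")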

    • by Nemyst ( 1383049 )
Actually, if you want the performance you mention (number of disks times the read/write speed of each disk), you need to use RAID0, which makes reliability absolutely awful, especially on an array of 4 or 5 drives. RAID1 will give you faster reads, but poor writes and no extra disk space. More advanced striping schemes are generally slow-ish and lose space as well.

I really don't see many cases where a RAID array is better than a drive like this, especially considering Intel's reputation and reliability ($100 256GB SSDs aren't going to be the best fault tolerant ones...).
      • I really don't see many cases where a RAID array is better than a drive like this, especially considering Intel's reputation and reliability ($100 256GB SSDs aren't going to be the best fault tolerant ones...).

        I've gotten burned by an Intel SSD. The Intel 320's data loss bug bit me, and that was on a drive that had the firmware patch to fix the flaw.

Never trust a single drive. A RAID array is always better because it gives you a chance to recover from a disk failure. Then back that RAID array up to external storage. Then back up the essential data to off-site storage. With this drive, buy two and do a software RAID-1. With write speeds like this you will never notice the performance drop-off.
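
        To put rough numbers on the reliability trade-off being argued in this thread, a purely illustrative sketch assuming independent drive failures, a made-up 2% annual failure rate per drive, and ignoring rebuild windows:

          # Illustrative only: probability that an array loses data within a year,
          # assuming each drive independently fails with probability afr.
          afr = 0.02          # assumed 2% annual failure rate per drive
          n = 5               # drives in the striped array

          raid0_loss = 1 - (1 - afr) ** n        # any single failure kills RAID 0
          raid1_loss = afr ** 2                  # 2-drive mirror needs both to fail
          single_loss = afr                      # one standalone drive

          print(f"single drive          : {single_loss:.2%}")
          print(f"RAID 0, {n} drives     : {raid0_loss:.2%}")
          print(f"RAID 1, 2-drive mirror: {raid1_loss:.4%}")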

    • by swb ( 14022 )

      I think these PCIe flash drives win for raw performance because they have access to the entire device at PCIe bus speeds.

Maybe some ideal x16 card with a dedicated SATA controller per port would give you RAID-0 performance of 30 Gbit/sec across five disks, but something tells me you'd be limited by the individual SATA limit of 6 Gbit/sec. In the real world, I don't know that any $100 x8 RAID card would do that.

      But I would also bet that most workloads are IOP bound, not mass throughput bound, and with r

      • IOPS through RAID controllers are awful. I have an external entry-level EonStor appliance that tops out at ~10,000 IOPS with a 2GB cache. Same goes for Areca. The only way of getting good performance out of SSD arrays is to access the drives directly and let the CPU handle it.

    • The cabling wouldn't be much of a mess if you use something like this:
      http://www.newegg.com/Product/... [newegg.com]

  • Couldn't see OPAL V2 / eDrive support mentioned anywhere. A shame, because I think it's an essential feature of any SSD these days.

    OPAL V2 allows the drive to encrypt using a user-supplied key, with near-zero performance loss.
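
    For anyone who wants to check a drive themselves, the open-source DTA sedutil tool can report TCG Opal support; a minimal wrapper sketch, assuming sedutil-cli is installed on the PATH and run with sufficient privileges (output format varies by version):

      # Sketch: scan attached drives and print which ones advertise TCG Opal 2.
      # Requires the DTA sedutil package and usually root privileges.
      import subprocess

      result = subprocess.run(["sedutil-cli", "--scan"],
                              capture_output=True, text=True)
      print(result.stdout or result.stderr)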
