Intel Intros 310 Series Mini SSDs

crookedvulture writes "Intel has added a couple of tiny 310 Series solid-state drives to its storage lineup. Measuring just 51 x 30 x 5.8mm, the mini-SATA SSDs are about a tenth the size of a standard notebook hard drive. Impressively, their performance ratings track with full-sized SSDs. Intel is pushing the 310 Series as a solution for dual-drive notebooks that combine solid-state and mechanical storage to give users the best of both worlds. Next-gen notebooks just got a little more interesting."
  • by Anonymous Coward
    Hopefully I'm not the only one that read the title as "Intel intros 310 mini series"
  • Drat (Score:5, Interesting)

    by DurendalMac ( 736637 ) on Wednesday December 29, 2010 @11:25PM (#34706832)
    I was excited, as these appear to be Mini PCIe cards, but then I was disappointed: it looks like it's a SATA connection that shares the form factor. It's not entirely clear, though.
    • Why is SATA a disappointment?
      • Why is SATA a disappointment?

        Because slightly older laptops might have an empty Mini PCIe slot but not an extra SATA connector? To me it's not a disappointment, but perhaps it is to the poster you replied to.

        • by stevel ( 64802 ) *

          The MiniPCIe standard includes SATA lines, as well as USB. So if you have an open full MiniPCIe connector, it probably has SATA capability. What you have to watch for, though, are slots that are physically MiniPCIe but which are wired for USB only (many notebooks and netbooks with WWAN connectors), or use non-standard pinouts for PATA (Dell Mini 9, for example.)

          What is not clear, for the add-on user, is whether the SATA lines are visible to the chipset. Mobile chipsets usually have only one or maybe two SATA ports.

      • Re:Drat (Score:5, Insightful)

        by Rockoon ( 1252108 ) on Wednesday December 29, 2010 @11:35PM (#34706908)
        SATA 1.0 (1.5 Gb/s) can't keep up with any modern SSD.

        SATA 2.0 (3.0 Gb/s) is currently keeping the industry down.

        SATA 3.0 (6.0 Gb/s) isn't widely adopted yet, but even when it's finally popular enough, that too will just keep the industry down.

        SATA-IO should be ashamed of itself for implementing 3.0 with such bullshit specs given the obvious reality of the situation.

        That's why many people want PCIe to become a standard interface for SSDs. That won't happen until low-cost/capacity SSDs use it.
        • by afidel ( 530433 )
          Even the Fusion I/O cards with SLC only push 500-700MB/s depending on the workload and they cost $7,500 for a 160GB card, SATA 6Gb should be plenty fast for a consumer standard.
          • by AHuxley ( 892839 )
            Yes, SATA 6Gb SSD RAID via a good SandForce-like or better solution.
            Or pack the PCI slots :)
          • Re:Drat (Score:4, Informative)

            by Rockoon ( 1252108 ) on Thursday December 30, 2010 @12:18AM (#34707154)
            OCZ has 740MB/s cards for an order of magnitude less than Fusion I/O's offering (save $7,000 and spend only $650), and with 50% more capacity too (240GB card).

            For cards in the price range you are talking about, OCZ delivers 1400MB/s on its 512GB card.

            You seem to be less informed than you realize.
            • by afidel ( 530433 )
              Are those the best-case numbers or worst-case? OCZ has a history of claiming huge numbers and terribly under-delivering. Oh, and at least for my use case MLC is a non-starter, so the only OCZ card I'd be interested in is the Z-Drive e88 R2, which is ~$10k, so 30% more for a two-card solution (RAID1), and I only need ~120GB for the OLTP tables.
              • Re:Drat (Score:4, Informative)

                by sr180 ( 700526 ) on Thursday December 30, 2010 @12:55AM (#34707348) Journal

                Yes, we've been evaluating the OCZ cards, and they are much slower in real life than the benchmarks suggest. Note that Fusion I/O has a FusionIO Duo, which pulls 1.5GBytes a sec. This seems to be the holy grail of speed atm.

              • by jon3k ( 691256 )
                Just curious, why is MLC not an option? Is it just a matter of sheer IOPS requirements or do you have longevity concerns? Can I ask what the workload is?
                • by afidel ( 530433 )
                  Longevity concerns, worst case numbers based on our workloads puts minimum life for MLC at ~6 months if the controller isn't very smart about write amplification, the 10x improvement for SLC makes that a much more acceptable ~60 months. The load is a mix of OLTP and reporting against a JD Edwards database. When you have lots of 8KB random writes you can wear out cells pretty quickly.
                  • by jon3k ( 691256 )
                    Wow that's some pretty heavy writing. Is this after factoring in wear leveling?
                    • by afidel ( 530433 )
                      Yep. In the last 2 years, the 8 LUNs that hold our tables have all overflowed their 32-bit write counters, which means more than 17B 8KB writes.
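                      As a back-of-envelope sanity check on those numbers (a sketch only; it assumes each counter ticks once per 8KB write and that every LUN wrapped at least once):

```python
# Sanity check on the counter-overflow arithmetic (assumption: each
# 32-bit counter ticks once per 8KB write and wrapped at least once).
WRITES_PER_OVERFLOW = 2 ** 32        # one full wrap of a 32-bit counter
WRITE_SIZE_KIB = 8
LUNS = 8

min_writes = LUNS * WRITES_PER_OVERFLOW                  # > 34 billion writes
min_data_tib = min_writes * WRITE_SIZE_KIB / 1024 ** 3   # KiB -> TiB

# ~34.4B writes and ~256 TiB of data -- comfortably "more than 17B" writes.
print(f"{min_writes / 1e9:.1f}B writes, ~{min_data_tib:.0f} TiB")
```

                      With 8KB random writes at that volume, it's easy to see how a low-endurance MLC cell budget gets consumed in months rather than years.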
                • He is using worst-case scenarios for his decision making.

                  Nothing wrong with that, but it's not realistic to expect the market as a whole to also think that way. The market is more concerned with the average case.
              • Re:Drat (Score:4, Informative)

                by bobcat7677 ( 561727 ) on Thursday December 30, 2010 @01:25PM (#34712306) Homepage
                Having worked with sets of comparable cards from Fusion IO and OCZ (IOXtreme and Zdrive), I can give this assessment:

                Neither card met the published performance numbers, but the Fusion I/O card came closer to its published numbers than the OCZ card in basic benchmarks, making the Fusion I/O card quite a bit faster for raw throughput. Both cards were blazingly fast, though, pushing MB/s and IOPS like there's no tomorrow.

                Real-world performance suffered greatly with the Fusion I/O cards due to their software-driven architecture. The CPU overhead was significant, even on a powerful multi-CPU Xeon server. The OCZ cards did not have this problem.
                The real-world price/performance ratio made OCZ the overall winner. The competition was closest when excluding CPU overhead, but once you include it, the OCZ cards win hands down.
                Support was highly disappointing from Fusion I/O. With OCZ you expect minimal support, but I expected something better from the "premium" Fusion I/O brand (and price point). Unfortunately, their support was no better than OCZ's.
                We originally evaluated the original Zdrive model, which was kind of a rough implementation of the technology. If you are going to buy one now, avoid the old Zdrives; there are several problems with their design. The new R2 Zdrives have fixed these problems and are sold at basically the same price point for similar specs.

                We eventually returned the Fusion I/O cards due to their ridiculous CPU penalty. We still have the OCZ cards, but have stopped using them in favor of normal SAS controllers with hot-swap SSD drives. It's just not convenient to shut down a server and crack open the case just to replace a failed SSD, and SSDs do fail. :) At this point, PCIe SSD cards seem better suited to high-end workstation applications where it's not as big of a deal to crack open the box for maintenance.
                • by afidel ( 530433 )
                  Were you using the older drivers with the Fusion card? Supposedly the newer drivers are more efficient from both a memory and CPU perspective. As for maintenance, we are planning to do software RAID1 so that a failed card can be replaced during a maintenance window. None of the HP RAID controllers can keep up with SSDs from an IOPS perspective, and they introduce additional latency, which is our biggest bottleneck.
              • Are those the best-case numbers or worst-case? OCZ has a history of claiming huge numbers and terribly under-delivering. Oh, and at least for my use case MLC is a non-starter, so the only OCZ card I'd be interested in is the Z-Drive e88 R2, which is ~$10k, so 30% more for a two-card solution (RAID1), and I only need ~120GB for the OLTP tables.

                Here's another data point that agrees with you: I recently specced a machine for a client with a Core i7, 12GB RAM, etc., including an OCZ RevoDrive. This machine is a massive beast, easily the most powerful I've worked with, yet it doesn't feel noticeably faster than a machine equipped with a simple SSD such as one of Intel's offerings or even an OCZ Vertex2.

                My experience with OCZ mirrors yours exactly; all mouth and no trousers. The reviews of the product don't help much because although the numbers look

            • No SATA SSD pushes even 250MB/sec for continuous reads in the real world, even when connected to a 6Gbps SATA controller. See the latest comparison benchmarks [].

              This is because the entire SATA controller typically gets a single PCI Express lane, which is a 500MB/s max. The OCZ cards use 4 lanes, so 550MB/sec or so (which are the actual benchmarks []) is pretty poor use of a 2GB/sec max bandwidth.
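              The lane arithmetic above can be sketched quickly (assuming the rounded ~500MB/s effective-per-lane figure for PCIe 2.0 used in this comment):

```python
# Back-of-envelope PCIe link math, using the rounded ~500 MB/s
# effective-bandwidth-per-lane figure for PCIe 2.0 from the comment.
MB_PER_LANE = 500

def link_bandwidth_mb(lanes: int) -> int:
    """Theoretical effective bandwidth of a PCIe 2.0 link of `lanes` lanes."""
    return lanes * MB_PER_LANE

x1 = link_bandwidth_mb(1)   # a SATA controller on one lane: 500 MB/s ceiling
x4 = link_bandwidth_mb(4)   # an x4 SSD card: 2000 MB/s ceiling

# An x4 card benchmarking at ~550 MB/s uses just over a quarter of its link.
print(x1, x4, round(550 / x4, 3))
```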

              • None of the SSDs they seem to have tested in your first link were SATA 3.0 (6 Gbps), so obviously they were restricted to less than 300MB/sec....

                As far as your second link, since they're benchmarking the slowest OCZ card (and showing that the benchmarks agree with the advertised speed), why are you declaring that it's making poor use of PCIe x4, given that fact?

                It's the slowest card that OCZ offers. Think about it.

                Don't be so dishonest with your presentation.
              • by jon3k ( 691256 )
                Really? Because here's a SATA 6G drive reading over 350MB/s [].
                • You're going to have to provide some evidence of such speeds in a real-world usage scenario.

                  Hardware review site benchmark porn is less than useless.
                  • What's with this 'real world usage scenario' crap?

                    It's as if you think that other drives don't suffer the same degradations in 'real world scenarios'.

                    Do you think that there is something magical about SSDs that makes their real-world performance degradation significantly worse than other technologies, or is your invocation of unknown magic specific to OCZ?
                  • by jon3k ( 691256 )
                    So when anyone else posts benchmarks they aren't real-world tests, but when you post benchmarks they are? And even though there are literally dozens of other benchmarks that show sequential reads for SATA 6G drives over 300MB/s, you'd believe that every other benchmark is incorrect and that the link you posted is the one true, accurate benchmark instead of the anomaly? Okie dokie.
          • by jon3k ( 691256 )
            Today, yes, and Fusion I/O total throughput isn't that impressive, just look at the OCZ RevoDrive X2 for comparison. The Crucial RealSSD [] ($2.18/GB) drives are SATA 6G and are currently pushing over 350MB/s. And we're talking current generation controllers. You really think we'll see SATA 12G before we completely saturate SATA 6G? All we need to do is take a Crucial RealSSD, add in a second controller and internal raid and we're looking at nearly 700MB/s of non-sequential peak read throughput. We could
            • by afidel ( 530433 )
              The OCZ drive has a max response time of 1.2 seconds under load according to these [] tests; that's just not acceptable for my application.
        • by Twinbee ( 767046 )
          So why are they having so much difficulty in making SATA decently fast?

          And why don't you think SSDs for PCIe (or indeed just PCI for standard desktops) have caught on yet?
          • Notwithstanding signaling issues across the cable, SATA can be faster. It's just that standards must be met, for obvious cross-compatibility reasons. This holds true for both the host and device chipset.

            As for SSDs over PCIe, it's a niche market. You also have to factor in cost and potential installation issues. But if you must have faster I/O, you could always leapfrog the issue by implementing the controller right on the CPU, in the same manner as RAM is addressed today.

      • Until you hit the really high end (where SATA is a bottleneck), there isn't much wrong with SATA. It's more the fact that mini-PCIe slots, sometimes several, are downright standard in notebooks and similar small devices, while these strange, hybrid 'electrically SATA, but mini-PCIe connector' things are not. A SATA device isn't going to do anything useful plugged into a conventional mini-PCIe slot, and it will require a mechanical adapter to connect to any reasonably normal SATA connector.

        For the moment
    • Re:Drat (Score:4, Insightful)

      by fuzzyfuzzyfungus ( 1223518 ) on Wednesday December 29, 2010 @11:34PM (#34706892) Journal
      It isn't a vice exclusive to Intel; but that is indeed what you are seeing.

      For reasons that I can only imagine had something to do with "somebody pinching pennies until their pecuniary ichor flows", the trend somehow started of using the mini-PCIe connector, without so much as the decency of different keying or anything, to handle what are, electrically, SATA signal lines plus power. There would be nothing wrong with this if these things were actually storage-oriented mini-PCIe cards (like the HDD PCI cards of yore, with a controller chip + flash, capable of acting like a normal PCIe device), or if they were just using some 'sub-mini SATA' connector; but using a straight mini-PCIe connector for something electrically and logically completely different is plain hostile.

      I get this sense that users aren't really supposed to touch these things, or the innards of the devices in which they will end up, or such a confusing and potentially damaging connector misuse would likely have not taken place...
    • RTFA: "Otherwise known as mSATA, the diminutive SSD form factor pipes Serial ATA signaling over a mini PCI Express connector."

      So that's a Mini PCIe connector, not SATA
    • by Anonymous Coward

      This appears to be a "PCI Express Mini Card"

      This form factor has 1 PCI-e lane, so it's either 2 Gb/sec or 4 Gb/sec

      From the article:
      "pipes Serial ATA signaling over a mini PCI Express connector."

      Neither is all that shabby for such a tiny card

      The 200 MB/sec read bandwidth is probably limited by this bus.

      Really this is QUITE a nice number if you compare it to a standard notebook rotating media drive, I would not complain.

      As others have posted, this thing is tiny! Put two

  • by Anonymous Coward

    10 of them in a RAID in a laptop?

  • Performance vs. size (Score:4, Interesting)

    by BradleyUffner ( 103496 ) on Wednesday December 29, 2010 @11:29PM (#34706856) Homepage

    Why is it impressive that a smaller solid state drive performs as well as a standard size one? What does the size have to do with anything relating to these performance benchmarks?

    • Why is it impressive that a smaller solid state drive performs as well as a standard size one? What does the size have to do with anything relating to these performance benchmarks?

      Because the bigger it is the more smoke it can hold and we all know that letting the smoke out totally kills performance.

    • What does the size have to do with anything relating to these performance benchmarks?

      Perhaps because of the whole decades of history related to rotating bulk storage? Without increases in spindle speed (and, thus, price), larger storage has always been faster.

      Don't you remember the Quantum Bigfoot?

      Get off of my lawn!

    • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday December 29, 2010 @11:41PM (#34706938) Journal
      It isn't wildly impressive, since many of the larger SSDs are either smaller boards padded out with aluminum or plastic to meet 2.5inch size standards, or 2.5 inch boards taking advantage of relatively lax density requirements to save on board layers and fabrication expenses; but it is the case that most high-performing SSDs are doing somewhat RAID-esque stuff across their multiple flash chips. Thus, unless the design is severely gimped by either incompetence or cost constraints, larger device=space for more chips=more opportunity for spreading operations across multiple flash chips=higher overall apparent speed. For a very small device to hit high speeds, the maker is either doing some clever packaging, to get a competitive number of dice in that space, or implementing a nice controller that can compensate for not having substantial parallelism to play with, or using comparatively pricey flash that is high on the speed and density curves, rather than just doubling up on whatever is available at mainstream price points and taking advantage of the available board space.

      Given Intel's formidable fab expertise and capital resources, it would not surprise me if two and three are at play here...
    • Why is it impressive that a smaller solid state drive performs as well as a standard size one?

      Is it the size of the ship or the motion of the ocean? (Sorry couldn't help myself.)
      Otherwise, good point!

    • by Rockoon ( 1252108 ) on Thursday December 30, 2010 @12:29AM (#34707216)

      Why is it impressive that a smaller solid state drive performs as well as a standard size one? What does the size have to do with anything relating to these performance benchmarks?

      The speed of SSDs is linearly correlated with the number of flash chips they contain, because the flash chips are operated in parallel (think RAID0, only it's implicit in the design).

      Smaller would usually mean fewer flash chips, so less parallelism.
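      The implicit-RAID0 point can be illustrated with a toy striping model (hypothetical geometry; real controllers interleave at the page/plane level and layer wear leveling on top):

```python
# Toy model of the implicit RAID0 inside an SSD: map a logical page
# number to (chip, page-on-chip), round-robin across the flash chips.
from collections import Counter

def stripe(logical_page: int, num_chips: int):
    """Consecutive logical pages land on consecutive chips."""
    return logical_page % num_chips, logical_page // num_chips

# 8 consecutive pages across 8 chips: every chip gets exactly one page,
# so all eight transfers can run in parallel.
queue_8 = Counter(stripe(p, 8)[0] for p in range(8))

# The same 8 pages on a 2-chip (physically smaller) drive queue up
# 4-deep on each chip: a quarter of the parallelism.
queue_2 = Counter(stripe(p, 2)[0] for p in range(8))

print(dict(queue_8), dict(queue_2))
```

      Which is why a tiny card matching full-sized SSD performance implies either denser packaging or a smarter controller, as discussed above.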

      • It does seem from the picture that they have used new packaging for the chips. If I remember correctly, there are more chips on my Intel SSD than there are in the picture, so they've probably paired them. That is quite a bit of effort to go through just for introducing a smaller form factor. This may also be a drawback for competitors that don't have direct influence over the production facilities. Note that this is pure speculation from what I see in the picture.

      • Smaller size also means less cooling, which may be a factor in performance of flash.
    • by sam0737 ( 648914 )

      Why is it impressive that a smaller solid state drive performs as well as a standard size one? What does the size have to do with anything relating to these performance benchmarks?

      Ask a woman and they might be able to tell...

  • I suppose a small size that performs well is impressive because smaller and lighter are attributes prized in laptops, along with performance. Although performance is still the main quality we all want.
  • Just a quick note for you guys who like to fiddle with miniature screwdrivers and such: you can always replace your optical drive with an SSD or HDD. It seems that newmodeus has had this market cornered for a while, restricting you to a higher-priced product, but it is certainly a viable option. I've left my HDD where it is because of possible heat issues (although there is quite a lot of spare room in the caddy) and possible problems with warranty. The only drawback is that you have to put your movies on

  • I hate having to choose between an SSD and an HDD for a laptop and really want one of each. I want a nice big 500+ GB HDD but I always want a 40+ GB SSD for a boot/OS/applications/page partition i.e., the "C" drive. Then you really get the best of both worlds because you get the insanely fast IO speeds of SSD but you have somewhere to put large data files.
    • by Twinbee ( 767046 )
      Exactly, I was recently looking for a laptop that had space for 2 HDs. Of course, I think Seagate released a dual drive which takes the space of one, so that could help.
