
Intel DC S3700 SSD Features New Proprietary Controller

crookedvulture writes "For the first time in more than four years, Intel is rolling out a new SSD controller. The chip is featured in the DC S3700 solid-state drive, an enterprise-oriented offering that's 40% cheaper than the previous generation. The S3700 has 6Gbps SATA connectivity, end-to-end data protection, LBA tag validation, 256-bit AES encryption, and ECC throughout. It also includes onboard capacitors to protect against data loss due to power failure; if the drive's self-check mechanism detects problems with those capacitors, it can disable the write cache. Intel's own high-endurance MLC NAND can be found in the drive, which is rated for 10 full disk writes per day for five years. Prices start at $235 for the 100GB model, and capacities are available up to 800GB. In addition to 2.5" models, there are also a couple of 1.8" ones for blade servers. The DC S3700 is sampling now, with mass production scheduled for the first quarter of 2013."
  • The article makes me a bit suspicious:
    "Intel's own high-endurance MLC NAND can be found in the drive, which is rated for 10 full disk writes per day for five years."
    That sounds pretty bad, actually, if I understand it right.
    Per cell this means 365*10*5 = roughly 20,000 write cycles? Sure, wear-leveling algorithms are there, but 20,000 cycles is not exceptional, or am I wrong? (See the quick check below.)

    Don't misunderstand this post. I think Intel's SSDs are good.
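
    A quick check of that arithmetic, as a minimal Python sketch (it assumes perfect wear leveling and ignores write amplification and the drive's spare area):

        # Per-cell cycle count implied by the endurance rating, assuming
        # writes are spread perfectly evenly across all cells.
        drive_writes_per_day = 10
        years = 5
        cycles = drive_writes_per_day * 365 * years
        print(cycles)  # 18250 -- roughly the 20,000 estimated above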

    • by etash ( 1907284 )
      Yes, you are wrong; 20k cycles is very good for MLC. Can't bring up any citations now -- too lazy -- but the latest MLC NAND cells (20nm) are down to something like 5k or less.
      • by Anonymous Coward

        20k was good 10 years ago. That number has only gone down, because feature sizes keep getting smaller.

    • by Anonymous Coward

      MLC write endurance is usually between 1,000 and 10,000 cycles.

    • by Wierdy1024 ( 902573 ) on Monday November 05, 2012 @03:28PM (#41885803)

      This is about right. MLC flash is normally rated for between 1k and 10k cycles. Newer flash is generally rated lower, as transistor sizes are shrunk to fit more gigabytes into the same die area.

      A home PC will only write a couple of gigs a day under typical workloads, which works out to about 5 full drive writes a year even for the small capacities. That would last you 4,000 years assuming ideal wear leveling (worked through in the sketch after this comment)...

      Basically, what they're saying is this will be absolutely fine for everything except outgoing mail servers and a few other specialist things.

      The capacitor backup and write cache also make wear leveling much, much easier: all frequently written cells can be cached in RAM and written out only once at shutdown, and the capacitor backup means even an unclean shutdown will save your data.
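
      The parent's lifetime estimate, worked through in Python with illustrative numbers (assumptions: ~1.5GB written per day, a 100GB drive, and a generous 20,000 P/E cycles per cell):

          # Desktop-workload lifetime under ideal wear leveling.
          gb_per_day = 1.5
          drive_gb = 100
          pe_cycles = 20_000
          full_writes_per_year = gb_per_day * 365 / drive_gb   # ~5.5 per year
          print(round(pe_cycles / full_writes_per_year))       # ~3653 years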

      • A home PC will only write a couple of gigs a day under typical workloads, which works out to about 5 full drive writes a year even for the small capacities

        ...unless the disk is nearly full, in which case it'll be writing the same cells over and over again.

        (unless they supply a utility which moves data from least-used cells to most-used...)

        • by blueg3 ( 192743 )

          ...unless the disk is nearly full, in which case it'll be writing the same cells over and over again.

          (unless they supply a utility which moves data from least-used cells to most-used...)

          That happens even if the disk is nowhere near full, and performing wear leveling is a major part of what the SSD controller does. If you're on a system that doesn't support TRIM, a nearly-full disk could end up with write amplification problems, though.

        • (unless they supply a utility which moves data from least-used cells to most-used...)

          All SSDs do wear levelling, otherwise they'd die after a couple of days. That happens beneath the LBA address layer - i.e. LBAs are mapped to physical addresses, and the mapping changes each time an LBA is written.

          So you don't need to do wear levelling at the file system level. In fact, the only thing you need to do there is issue the TRIM command, which tells the SSD that a range of LBAs no longer contains useful data. That means the SSD can mark them as obsolete, which gives the wear levelling a bit more elbow room.
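
          To make that indirection concrete, here is a toy Python model (a deliberate simplification; real flash translation layers track erase blocks and pages, not single map entries):

              # LBA -> physical page mapping: every write goes to a fresh page
              # and remaps the LBA; TRIM just drops the mapping so the old
              # page can be reclaimed later.
              class ToyFTL:
                  def __init__(self, num_pages):
                      self.free = list(range(num_pages))  # unwritten pages
                      self.lba_map = {}                   # LBA -> page
                      self.stale = set()                  # pages awaiting erase

                  def write(self, lba):
                      page = self.free.pop()              # always use a fresh page
                      if lba in self.lba_map:
                          self.stale.add(self.lba_map[lba])  # old copy is stale
                      self.lba_map[lba] = page

                  def trim(self, lba):
                      if lba in self.lba_map:
                          self.stale.add(self.lba_map.pop(lba))

              ftl = ToyFTL(num_pages=4)
              ftl.write(lba=0)
              ftl.write(lba=0)    # rewriting LBA 0 lands on a different page
              print(ftl.stale)    # {3} -- the first copy is now reclaimable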

          • (unless they supply a utility which moves data from least-used cells to most-used...)

            All SSDs do wear levelling, otherwise they'd die after a couple of days. That happens beneath the LBA address layer - i.e. LBAs are mapped to physical addresses, and the mapping changes each time an LBA is written.

            Point is: wear levelers are only useful if they've got some free space to work with. If they haven't got any (i.e. disk nearly full)... then what?

            • You've always got a free erase unit, because at least one is reserved for wear levelling. It's easy to invent an algorithm that moves that free unit around the disk by garbage collecting from a full unit into the empty one.

              There are papers on this sort of thing. Look at the patents M Systems filed, for example, or the documentation on TrueFFS. I've worked with embedded systems that used that, and one of the first things we did after we got a socket driver working was to hammer a full disk and check that
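
              A sketch of that scheme in Python (a hypothetical model, not any vendor's actual algorithm): copy the live pages out of a full erase unit into the reserved spare, erase the full unit, and rotate.

                  def garbage_collect(full_unit, spare_unit, live):
                      # Relocate only the still-live pages, then "erase".
                      spare_unit.extend(p for p in full_unit if p in live)
                      full_unit.clear()
                      return full_unit   # the erased unit becomes the new spare

                  spare = garbage_collect(['a', 'b', 'c'], [], live={'a', 'c'})
                  print(spare)  # [] -- a free unit again, even on a "full" disk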

      • This is about right. MLC flash is normally rated for between 1k and 10k cycles. Newer flash is generally rated lower, as transistor sizes are shrunk to fit more gigabytes into the same die area.

        Data retention figures would be interesting too. Last I heard, the strategy for dealing with that at smaller feature sizes was to have the drive periodically rewrite all the data, which of course eats into your write cycles.

        [checks articles] ...ugh. Is that seriously it? Three months? [hothardware.com]

        • This isn't unprecedented. When I looked into the 710 series models [2ndquadrant.com], it was the same trade-off: those drives were also only specified to retain their data for 3 months between refreshes.

        • [checks articles] ...ugh. Is that seriously it? Three months?

          Data retention on flash is kind of a bummer for time capsules and Stargate-style ancient repositories of knowledge. An old-school PC with a BIOS in mask ROM should be able to boot up, given power, in hundreds of years' time, assuming the hard disks don't have some sort of failure mode that kicks in while they're unpowered.

          A modern machine has firmware in flash and also a flash drive, both of which would end up blank in anywhere from a few years to a few decades depending on the technology, with more recent being worse.

          If I were

      • by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Monday November 05, 2012 @04:25PM (#41886469) Homepage

        The small amount of RAM on Intel's SSDs is not used to cache writes in any significant quantity. The idea that you'll only have to write the most popular cells once per shutdown is a dream. The main benefit of having a bit of reliable capacitor backup is that the drive can be less aggressive about forcing an erase of a large cell just to write a fraction of it out, thereby improving the write amplification [wikipedia.org] situation on the drive. You can even see limiting small writes as a factor in the claimed longevity of the drives if you dig into their spec sheets enough. I did an article comparing the 320 vs. 710 series lifetimes [2ndquadrant.com], approaching it from the perspective of one of those specialist things you allude to--database server operation. One of the things I noticed there is that the longer lifetime of the 710 came with the restriction that you couldn't do nearly as many small random writes per second (write IOPS) and still hit the claimed lifespan target. If the cache were larger and really effective at postponing writes, that trade-off wouldn't exist.
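
        For anyone unfamiliar with the term, a rough Python illustration of write amplification (unit sizes are illustrative; actual page and block geometry varies by NAND generation):

            # Worst case: a small host write dirties a whole NAND unit,
            # so physical writes exceed host writes.
            def write_amplification(host_bytes, nand_unit_bytes):
                return nand_unit_bytes / host_bytes

            print(write_amplification(4_096, 8_192))     # 8KB page    -> 2.0
            print(write_amplification(4_096, 524_288))   # 512KB block -> 128.0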

      • Comment removed (Score:4, Informative)

        by account_deleted ( 4530225 ) on Monday November 05, 2012 @07:40PM (#41888557)
        Comment removed based on user account deletion
        • by Kjella ( 173770 ) on Monday November 05, 2012 @08:48PM (#41889207) Homepage

          Except the "dirty little secret" of the industry is that it's NOT the cells dying that gets you; it's the controller dying that bites you in the ass. If it were just the cells, that wouldn't be so bad, since a cell that fails just ends up read-only. But when the controller fails, you flip the switch and... nothing. Not even the BIOS/UEFI detects the thing; it's just gone.

          You forget that a file system typically writes to more than one cell to store a piece of data, so what happens when some writes succeed and others fail? Major file system corruption, and fast. I managed to wear out one of the original OCZ Vertex drives - I don't know how; I wrote maybe 5TB to it, and ideally it should take 1,200TB at 10k writes/cell, but the SMART data was pretty clear. I had a broken file system, and each run of fsck made everything worse; I had to stop trying to fix it, mount the thing read-only, and salvage what I could. Even that failure mode is not graceful.

          • by deroby ( 568773 )

            So basically, you got "lucky" in that some of the cells failed. From what I've heard, it's more often the controller that gives up, causing the disk to change overnight from a nice piece of electronics into a shiny paperweight. No hope for recovery at all; the thing simply won't show up in the BIOS. Because of this it's also impossible to read the SMART info, so it's hard to say whether the controller failures are related to some cells being end-of-life and confusing the hell out of the controller, or whether it's something else.

        • I have a question. Have you heard about Greenliant? How are their controllers?
        • I had an OCZ drive fail on me. It was working perfectly fine the day before. I turned on my PC, and the BIOS couldn't find the drive, so it wouldn't boot. I booted up the old Windows installation I still had on another HDD; nothing I could do would get any OS, or the BIOS, to even recognise I'd connected the SSD. All the data was completely unrecoverable, with no advance warning (after just 6 months of usage). Just to rub it in, I had to pay £20 to return the £160 drive to OCZ (and trust that they'd respond).
        • Not quite correct either.

          It's not the controller hardware dying, it's the controller firmware crashing and burning.

          A few days ago, my Crucial C300, a drive I've been running like mad for 2 years, finally critically failed to read back a sector. Instead of returning a disk error, the entire drive froze. After waiting 15 minutes to see if it'd come back (it didn't), I rebooted; rereading resulted in the same drive crash. Overwriting the sector with dd forced a remap and allowed me to fully image the drive.

    • by godrik ( 1287354 )

      Assuming the full 6Gbps and assuming you never stop writing, you'd exhaust the 100GB model's rated endurance in about a month. That is indeed pretty low; most SSDs would last a couple of years at full write speed.
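
      Checking that figure for the 100GB model in Python (assumptions: the full 6Gbps line rate is sustained, which real drives can't quite do, and the rated endurance of 10 drive writes per day for 5 years):

          endurance_bytes = 100e9 * 10 * 365 * 5   # ~1.8PB of rated writes
          throughput = 6e9 / 10                    # 6Gbps SATA is ~600MB/s after 8b/10b
          print(round(endurance_bytes / throughput / 86_400))  # ~35 days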

    • Doesn't sound like "marketing speak" to me; it sounds like trying to express the lifetime in a fashion more useful to a human being. The term "marketing speak", at least when used derogatorily, suggests obfuscation or hiding reality.

      Here what they are saying is clear. As someone considering the drive, I can easily say (without doing any sums) that my use case is nowhere near as "bad" as the pathologically SSD-unfriendly situation they describe, and quickly conclude (to the extent I trust their information) that the drive will last far longer than I need.
    • The article makes me a bit suspicious: "Intel's own high-endurance MLC NAND can be found in the drive, which is rated for 10 full disk writes per day for five years." That sounds pretty bad, actually, if I understand it right. Per cell this means 365*10*5 = roughly 20,000 write cycles? Sure, wear-leveling algorithms are there, but 20,000 cycles is not exceptional, or am I wrong?

      With an Intel SSD you never actually get anywhere near the total number of write cycles. Because of a special Intel wear-levelling feature called BAD_CTX 0000013x, the drive will brick itself periodically [intel.com], forcing you to erase it and resetting the write config. It's a clever feature of Intel SSD products that I haven't seen other manufacturers implement yet.

  • by hawguy ( 1600213 ) on Monday November 05, 2012 @03:21PM (#41885723)

    The article says this:

    The controller has a 6Gbps Serial ATA interface, and a gig of DRAM rides shotgun. This DRAM cache never stores user data but is instead used for context and indirection tables.

    That detail is important in light of the DC S3700's power-loss protection, which uses multiple onboard capacitors to ensure that in-flight data is safely written to the flash in the event of a power failure.

    What are context and indirection tables?

  • With this drive I would feel ok ditching my onsite weekly backup and only having a single off-site backup.

    • At the beginning of its release cycle, the odds of firmware bugs eating all your data are massively higher on this drive than on models that re-use existing controllers/firmware and have been out a while. The new controller means they've basically started over again with a firmware rewrite. PC hardware and software have so many possible configurations to test that it's impossible to get it right without beta testing the hardware in the field to see what problems the sucker early adopters get nailed by.

  • by Anonymous Coward

    If you want to make sure your data is safe from prying eyes, choose another drive.

    Posted anonymously because doing otherwise would jeopardize my job.
