
Intel 335 Series SSD Equipped With 20-nm NAND 135

crookedvulture writes "The next generation of NAND has arrived. Intel's latest 335 Series SSD sports 20-nm flash chips that are 29% smaller than the previous 25-nm generation. The NAND features a new planar cell structure with a floating, high-k/metal gate stack, a first for the flash industry. This cell structure purportedly helps the 20-nm NAND overcome cell-to-cell interference, allowing it to offer the same performance and reliability characteristics as the 25-nm stuff. The performance numbers back up that assertion, with the 335 Series matching other drives based on the same SandForce controller silicon. The 335 Series may end up costing less than the competition, though; Intel has set the suggested retail price at an aggressive $184 for the 240GB drive, which works out to just 77 cents per gigabyte."
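
The quoted cost-per-gigabyte figure is easy to sanity-check; a one-line calculation, using only the $184 MSRP and 240GB capacity from the summary:

    # Implied price per gigabyte at the suggested retail price
    msrp_usd = 184.0
    capacity_gb = 240
    print("%.1f cents/GB" % (100 * msrp_usd / capacity_gb))   # 76.7 cents/GB, i.e. "just 77 cents per gigabyte"
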
  • by fustakrakich ( 1673220 ) on Monday October 29, 2012 @06:32PM (#41811999) Journal

    Maybe we won't need so much of that rare earth stuff anymore. I still find it amazing that a hard drive with all that monkey motion going on inside is any cheaper than these SSDs.

    • by fuzzyfuzzyfungus ( 1223518 ) on Monday October 29, 2012 @06:44PM (#41812115) Journal

      According to TFA, each of these new 8GB 20nm dies is 118 mm². There are 32 of them in the 335 series. 37.8 square centimeters of processed silicon is serious business. Honestly, I'm amazed that it's so cheap.

      • How much human effort is involved in the manufacturing process compared to a hard drive? To me that's where the real costs lie. Mechanization should be driving the price even lower.

      • by tlhIngan ( 30335 )

        According to TFA, each of these new 8GB 20nm dies is 118 mm². There are 32 of them in the 335 series. 37.8 square centimeters of processed silicon is serious business. Honestly, I'm amazed that it's so cheap.

        The thing is, it's made up of individual dies. If you tried to make a single slab of silicon that's 38 square cm, you'd find it impossible because of flaws. The smaller the die, the lower the chance it will be made on imperfect silicon, so smaller processes lead to more dies per wafer (less cost per die) and better yields.
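
        To put rough numbers on the yield argument, here is a back-of-the-envelope sketch; the 118 mm² die size and 32-die count come from the comments above, while the defect density and the simple Poisson yield model are purely illustrative assumptions:

            import math

            die_area_cm2 = 118 / 100.0        # 8GB 20nm die, per TFA
            dies_per_drive = 32
            defects_per_cm2 = 0.25            # assumed defect density, illustrative only

            total_cm2 = die_area_cm2 * dies_per_drive
            print("silicon per 240GB drive: %.1f cm^2" % total_cm2)                # ~37.8 cm^2

            # Simple Poisson yield model: P(die has no fatal defect) = exp(-D0 * area)
            yield_small_die = math.exp(-defects_per_cm2 * die_area_cm2)
            yield_huge_slab = math.exp(-defects_per_cm2 * total_cm2)
            print("yield of a 1.18 cm^2 die:   %.0f%%" % (100 * yield_small_die))  # ~74%
            print("yield of a 37.8 cm^2 slab:  %.4f%%" % (100 * yield_huge_slab))  # ~0.008%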

    • Like a lightbulb, the factories are a sunk cost; they're just churning out the copies now. SSDs are still recouping R&D costs for now - once they get rolling in equal volume to spinning drives, they should be cheaper. Like tape 20 years ago, it's the sheer volume of spinning platters that keeps them going - 2TB of platters for $99... hard to touch that with an SSD.
      • I'm in total agreement on the ongoing R&D costs. Ultimately I expect the price to really plummet, if the patent licensing isn't abused. I sure wish more laptop makers included an SSD option, but maybe they still need to offload their warehouse full of hard drives. It should really be only a couple more years.

    • Comment removed (Score:4, Interesting)

      by account_deleted ( 4530225 ) on Monday October 29, 2012 @10:53PM (#41813979)
      Comment removed based on user account deletion
      • I think the extent to which silicon can be shrunk is hitting a dead end. We are already at the level where you have gates that are a few atoms thick, and it can't get any smaller than that. Let's face it - the era of endless cost reductions and Moore's law is coming to an end - at least w/ silicon. Once they get there, making newer fabs will be relatively cheaper, and companies can then make products at the margins they need to carry on.
  • Interesting... (Score:5, Interesting)

    by fuzzyfuzzyfungus ( 1223518 ) on Monday October 29, 2012 @06:34PM (#41812013) Journal

    I'm a bit surprised that Intel seems to have abandoned doing their controllers in-house (which they did for some of their early entries in the SSD market, back when there was some...um... extremely variable quality available. *cough* JMicron *cough*). Does SandForce have some juicy patents that make it impossible for Intel to economically match/exceed them even with superior process muscle? Has building competent flash controller chips now been commodified enough that Intel doesn't want to waste their time? Did some Intel project go sour and force them to go 3rd party?

    • Re: (Score:3, Funny)

      by Anonymous Coward

      And why hasn't Intel shipped better faster cheaper products? Do they even want to compete anymore? Are these even questions? Or perhaps some form of statements in the form of questions? Isn't it about time we get some answers? Who knows anymore? Does Intel know?

    • by Amouth ( 879122 )

      The last in-house controller was on the 320s, and I still continue to buy the Intel 320 series drives for enterprise use. They are absolutely rock solid.

    • Re:Interesting... (Score:5, Interesting)

      by rsmith-mac ( 639075 ) on Monday October 29, 2012 @07:32PM (#41812567)

      It's pretty much all of the above. On the Intel side of things, making their own controllers just wasn't panning out. There are rumors that they had some problems with what was supposed to follow their existing in-house controller, but there's also a lot of evidence that the benefits of building their own controller weren't worth the cost. The controller itself is very low margin, and Intel is looking for high-margin areas.

      Meanwhile SandForce has some extremely desirable technology. Data de-dupe and compression not only improve drive performance right now, but they're going to be critical in future drives as NAND cells shrink in size and the number of P/E cycles drops accordingly. Intel likely could have developed this in-house, but why do so? They can just buy the controller from SandForce at a sweet price, roll their own firmware (that's where all the real work happens anyhow), and sell the resulting SSD as they please.
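
      SandForce's actual DuraWrite compression is proprietary, so the following is only a toy Python illustration of the principle being described: if the controller can compress host data before programming it to NAND, fewer physical writes (and thus fewer P/E cycles) are consumed per host write. All numbers here are assumptions.

          import zlib

          # Highly redundant sample data; real-world data compresses far less than this.
          host_data = b"fairly compressible log/text data " * 1024
          nand_data = zlib.compress(host_data)        # stand-in for on-the-fly controller compression

          ratio = len(nand_data) / len(host_data)
          print("host bytes written: %d" % len(host_data))
          print("NAND bytes written: %d (%.1f%% of original)" % (len(nand_data), 100 * ratio))

          # With a 3000 P/E rating, endurance measured in *host* writes stretches by roughly 1/ratio
          rated_pe = 3000
          print("effective P/E budget in host-write terms: ~%d" % (rated_pe / ratio))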

    • If I remember correctly, Intel is using their own firmware on the SandForce controller. So an Intel SSD will still be different from those of its competitors.
    • SSD reliability has been so bad [insert contrary anecdotes here] and Sandforce such a bright spot of "not broken" that at this point I just specify Sandforce controllers and worry about other things. Newegg will even let me search by it now. Perhaps Intel gets this sentiment and stands to benefit. Intel has a historical relationship of OEM'ing from LSI and their memory is good, so sign me up if these things don't hang in the first couple of months.

    • Re:Interesting... (Score:4, Interesting)

      by PipsqueakOnAP133 ( 761720 ) on Tuesday October 30, 2012 @02:06AM (#41814977)

      I actually asked a person who worked in Intel's storage research about this.

      It boiled down to this: Intel Research made the X25 and pushed it over to Intel's product teams, who basically just put it in a box and shipped it. And people loved it.

      Then Intel's product design teams tried to design a follow-on controller and sucked entirely at it. So they got the research group to rev the X25 a few times, while they contracted with Marvell for controllers since they needed a SATA 6G controller for their own firmware.

      At that time, they hadn't switched to Sandforce, but judging by the fact that Sandforce was quite dominant even back then, I wouldn't be surprised if Intel does almost no firmware customization now.

      I wouldn't have believed that Intel had sucked at SSD controller design had I not heard it from an Intel researcher (although they might have been biased, given that the story makes their peers look good), but looking back again, we're talking about the company that brought us NetBurst and FBDIMMs.

  • by Anonymous Coward

    20 is 20% smaller than 25.

    25nm - ( 20% * 25nm) = 20nm
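
    For what it's worth, the two figures aren't actually in conflict: the 20% is the linear shrink of the feature size, while the 29% in the summary refers to die area. A quick check; the 118 mm² figure is from TFA, and the ~167 mm² figure for the previous 25nm 64Gbit die is the commonly cited number (treated here as an assumption):

        # Linear shrink of the process node
        print("feature size: %.0f%% smaller" % (100 * (1 - 20 / 25)))                        # 20%

        # Die-area shrink, which is what the summary's 29% refers to
        die_20nm_mm2 = 118.0    # from TFA
        die_25nm_mm2 = 167.0    # commonly cited size of the previous 64Gbit die (assumption)
        print("die area:     %.0f%% smaller" % (100 * (1 - die_20nm_mm2 / die_25nm_mm2)))    # ~29%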

  • by WGFCrafty ( 1062506 ) on Monday October 29, 2012 @06:57PM (#41812239)
    Just wondering: Is there a point (or is this close to it) where, using HDDs and certain RAID configurations, you can match or beat SSD speed while maintaining better redundancy with larger-capacity, cheaper drives? What is the main application these excel at? I assume power would be one, and cached content on webservers? Help me understand :-)
    • by fuzzyfuzzyfungus ( 1223518 ) on Monday October 29, 2012 @07:31PM (#41812551) Journal

      Laptops are one obvious win, since only the largest ones can even contain a RAID of any flavor, and certainly not a properly cooled 15k SAS type arrangement.

      When you aren't dealing with form-factor constraints, though, the big deal is random access. SSDs are only moderately superior to HDDs (and some are actually worse) for big, well-behaved, linear reads and writes. If you are faced with lots and lots of requests for little chunks from all over the disk, though, mechanical HDDs fall off a cliff and SSDs don't.
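
      Some rough arithmetic behind the "falls off a cliff" point; every number here is an assumed ballpark figure, not a benchmark:

          # 4KB random-read throughput implied by access latency (one request at a time)
          def random_mb_per_s(latency_ms, io_kb=4):
              iops = 1000.0 / latency_ms
              return iops * io_kb / 1024.0

          hdd_ms = 12.0    # assumed seek + rotational latency for a 7200rpm drive
          ssd_ms = 0.1     # assumed read latency through a NAND controller

          print("HDD: %6.1f MB/s random 4KB  (vs ~150 MB/s sequential)" % random_mb_per_s(hdd_ms))
          print("SSD: %6.1f MB/s random 4KB  (vs ~500 MB/s sequential)" % random_mb_per_s(ssd_ms))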

      • I gotcha. Thanks for the reply!
      • Uh, no. NAND flash does not allow you to read random data - it loads or unloads pages of data (depending on the page size - typically 64kB or higher). Whereas w/ HDDs, the disc contents are copied to a cache and then accessed by the CPU, so random access there is very much possible.
    • by dbIII ( 701233 )

      Just wondering: Is there a point (or is this close to it) where, using HDDs

      Yes, and the crossover point will vary depending on how much data you want to store and how much you want at once. Hybrid solutions like using an SSD as the cache drive in ZFS change that point as well. A pile of recent drives of any kind will saturate gigabit if you have enough of them.

    • SSDs are not about speed; that's just a side effect of their true nature. The reason you use SSDs is their low latency (and high IOPS).
      • How exactly are latency and IOPS not measures of speed?

        • Did you just conveniently ignore the context of my post? The 'speed' I was referring to is the high latency, low IOPS of multiple drives in RAID0 (as compared to an SSD). Sequential throughput might be similar, but there are other benchmarks in which hard drives are inferior.
    • If redundancy is what you want, you will sacrifice even more latency. In general, the latency (speed, but other considerations also apply) benefit of an SSD has to do with fetching the required data directly, instead of waiting for the head of the disk to move along the radius (seek time), for the disk to rotate to the correct location, or both. These issues dominate performance in an HDD, unless the files are big and contiguous, in which case transfer time gets more important and thus the benefit of SSDs over HDDs shrinks.
    • Hard drive arrays can be fast at sequential transfers, but they suck at random access, which is what tends to happen when doing things like loading software or running most types of server.

    • by godrik ( 1287354 )

      SSDs typically have a sustained read/write bandwidth of around 500MB/s. You would need about 4 HDDs to catch up with that speed. Moreover, RAID is not going to do much about latency, which is one of the most important points in favor of SSDs. The power consumption of an SSD is also much lower than that of even a single HDD. The only good thing about HDDs is their price per GB.

    Just wondering: Is there a point (or is this close to it) where, using HDDs and certain RAID configurations, you can match or beat SSD speed while maintaining better redundancy with larger-capacity, cheaper drives? What is the main application these excel at? I assume power would be one, and cached content on webservers? Help me understand :-)

      You'd need several dozen hard drives to even approach the IOPS of a single consumer level SSD. The SSD wins so many times over it's not funny.

      Now, if you're talking about sequential read/write speeds, that's a whole different matter. You'd need roughly 3-4 hard drives (in RAID 0 (no redundancy)... double that figure for RAID 10) to match the typical sequential read/write speeds of an SSD. At that point, the raw cost of the hard drives far exceeds that of the SSD, and that's ignoring the need for the extra S

    • by Anonymous Coward

      Not all systems can take the 100-200 hard drives it would take to close the average-latency gap between a 10ms mechanical hard drive and a 0.1ms SSD. My laptop, for example, or the 1U servers I run, etc. Plus, many applications can't take advantage of the lower average latency. Sure, a web and database server with thousands of simultaneous users can, but my laptop, where I take advantage of it when I start OpenOffice or another large application for the first time in a few days, cannot.

      I think per

  • by SuperBanana ( 662181 ) on Monday October 29, 2012 @07:13PM (#41812377)

    Last I heard, failure rate was directly tied to process size. Does any of this fix that?

    Also: Sandforce controller? Way to go, Intel - Sandforce is a bucket of fail:

    https://www.google.com/search?q=sandforce+freeze [google.com]

    and:
    https://en.wikipedia.org/wiki/SandForce#Issues [wikipedia.org]

    and more...

    • Intel writes their own firmware; they just use the SandForce controller. That's why Intel SSDs are rock solid and other SandForce SSDs are garbage.
      • by Anonymous Coward

        My experience with SandForce-based Intel SSDs was rubbish. Bought an SSD 330 120GB, constant freezing. Sent it back, got a replacement - still freezing. The seller gave me a free 'upgrade' to an SSD 520 120GB as an apology for the trouble. Guess what? Still freezing all the time. Got a refund, went and bought a Samsung SSD 830 128GB (based on Samsung's own controller), and it is as solid as a rock - might not be as speedy, but it was £20 less and actually *works*.

        • by Anonymous Coward

          What do you mean by 'freezing'? Depending on your OS, the issue may actually be with the SATA controller/driver. There is a Windows hotfix for Win7/2k8R2 addressing the inbox driver.

          Sorry, I'm too lazy to sign in.

      • by ne0n ( 884282 )
        Not quite. Intel debugs and modifies the firmware to a mild degree. Although Intel fixes certain SandForce bugs, mainly specific to Intel's own needs, these fixes may eventually trickle down to other OEMs after an expiration period of 6-12 months. We've seen this happen a few times recently. I wouldn't buy more SandForce because of it though.
    • by Kjella ( 173770 )

      Last I heard, failure rate was directly tied to process size. Does any of this fix that?

      I haven't heard anything about failure rate, but smaller process size generally means it will wear out earlier. Anandtech's review [anandtech.com] says it is still rated at 3000 P/E (program/erase cycles) like the 25nm NAND that preceded it, but they found some very disturbing results of less than 1000 P/E, so I'd definitely wait to see how that checks out. Personally I'm sitting on a 5K-rated drive that according to the life meter should die after three years, so yeah... these new SSDs may be "cheap", but they're also co

      • FWIW: my X25-E (Intel's SLC-based 'enterprise' SSD - first gen, on a large process size) died after a few years of not-so-intensive use. It's been my experience (two of my own drives, and what has happened to a couple of friends) that when an SSD dies, it doesn't seem to be because you exhaust the P/E cycles.
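
      A rough back-of-the-envelope check on the endurance figures mentioned above; only the 240GB capacity and the 3000/1000 P/E numbers come from the summary and review, everything else is an assumed workload:

          capacity_gb = 240
          host_gb_per_day = 20       # assumed fairly heavy desktop write load
          write_amplification = 2.0  # assumed; varies a lot with workload and controller

          nand_gb_per_day = host_gb_per_day * write_amplification

          for pe_cycles in (3000, 1000):   # rated figure vs the worst case seen in the review
              days = capacity_gb * pe_cycles / nand_gb_per_day
              print("%d P/E cycles -> ~%.0f years of writes" % (pe_cycles, days / 365.0))
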
  • Awesome, so now we're down to what, nine erasures before it's cooked?

    • TFS:

      allowing it to offer the same performance and reliability characteristics as the 25-nm stuff.

      There is no "down to"; it is "the same as".

  • As long as the hardware can provably outlast a spinning HDD, I'd be more than happy.

  • by thue ( 121682 ) on Tuesday October 30, 2012 @04:27AM (#41815463) Homepage

    Serious users should insist on an SSD with a battery or super capacitor [wikipedia.org]. If not, then you might lose data in internal caches [2ndquadrant.com] in an unclean shutdown.

    Unlike with the Intel 320 series, I can't find any statement about whether the 335 series has backup power, so I strongly suspect that it doesn't.

    • by Twinbee ( 767046 )
      Does a forced reset (i.e. Windows crash) count as a shutdown here?
    • by MobyDisk ( 75490 )

      I don't think this issue is specific to SSDs. A regular hard drive also corrupts the sector if it loses power during a write. Especially if the data is in the cache and hasn't been written to the disk. And both types of drives often lie about their fsync capabilities.
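
      For reference, this is roughly how an application asks for data to actually reach stable storage (a minimal Python sketch); as noted above, it only helps if the OS, driver, and drive honor the flush:

          import os

          def durable_write(path, data):
              """Write data and ask for it to be made durable before returning."""
              with open(path, "wb") as f:
                  f.write(data)
                  f.flush()              # push Python's userspace buffer down to the OS
                  os.fsync(f.fileno())   # ask the OS to flush its page cache and the drive's write cache

          durable_write("important.bin", b"data you cannot afford to lose")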

      • by Guppy ( 12314 )

        I don't think this issue is specific to SSDs. A regular hard drive also corrupts the sector if it loses power during a write. Especially if the data is in the cache and hasn't been written to the disk. And both types of drives often lie about their fsync capabilities.

        If I'm reading the wiki link provided in the grandparent post correctly, in an MLC (but not SLC) drive, not only can the current write be corrupted, but previously performed (and assumed safe) writes can be corrupted as well.

        • by MobyDisk ( 75490 )

          I noticed that too, so I think I was not clear in my original post. My point is this: serious users should insist on a drive with a battery or super capacitor. This statement is true regardless of the type of drive used. The original post implies that this problem is specific to SSDs, which it is not. Given the context of the link, the Wiki article then implies that it is specific to MLC drives, which it is not.

          On a related note, the Wiki article calls this "lower page corruption" and a Google search fo

  • Is this a breakthrough? No. 29% is nice, but it's not like they found a whole new revolutionary way of doing it.

    Is there some controversy, like someone claimed in the past that they could never get more than 10% better and Intel broke through the barrier? No (or if it is, Slashdot doesn't seem to have heard of it).

    Does having them get this much better make them useful for applications they weren't useful for before, or make them affordable to a whole new range of customers? Not really.

    Is it at least a nice
