
Intel and Micron Unveil 128Gb NAND Chip

ScuttleMonkey writes "A joint venture between Intel and Micron has given rise to a new 128 Gigabit die. While production won't start until next year, this little beauty sets new bars for capacity, speed, and endurance. 'Die shrinks also tend to reduce endurance, with old 65nm MLC flash being rated at 5,000-10,000 erase cycles, but that number dropping to 3,000-5,000 for 25nm MLC flash. However, IMFT is claiming that the shrink to 20nm has not caused any corresponding reduction in endurance. Its 20nm flash uses a Hi-K/metal gate design which allows it to make transistors that are smaller but no less robust. IMFT is claiming that this use of Hi-K/metal gate is a first for NAND flash production.'"

  • by AmiMoJo ( 196126 ) on Wednesday December 07, 2011 @02:04PM (#38293076) Homepage Journal

    Not all programmers are doing that. Android and Windows have both been getting faster on the same hardware.

  • by Anonymous Coward on Wednesday December 07, 2011 @02:05PM (#38293082)
    No, it couldn't. Most drives - even those with bad write lifetimes - could be continually overwritten for a period of many years before needing to be replaced. Reference: http://www.storagesearch.com/ssdmyths-endurance.html [storagesearch.com]

    As a sanity check - I found some data from Mtron (one of the few SSD OEMs who quote endurance in a way that non-specialists can understand). In the data sheet for their 32G product - which incidentally has 5 million cycles write endurance - they quote the write endurance for the disk as "greater than 85 years assuming 100G / day erase/write cycles" - which involves overwriting the disk 3 times a day (a rough sketch of that arithmetic follows below).

    That's for old-ish tech and a smallish drive. For consumers, large drives get written to far less, proportionally. Consider: the vast bulk of the data on consumer "big" drives is media such as movies. These are big, chunky files that don't get overwritten very much. As a consequence the vast majority of your drive stays clean. Most people will want or need to buy a new hard drive long before the old one wears out. Please read up on the facts before spouting nonsense.
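    A minimal sketch of that endurance arithmetic, using purely illustrative figures (a hypothetical 128 GB consumer drive, a 3,000-cycle rating, 20 GB of writes per day; none of these are vendor numbers):

    ```python
    # Back-of-the-envelope SSD endurance: total writable data spread over daily writes.
    # All inputs are illustrative assumptions, not vendor specifications.

    def endurance_years(capacity_gb, pe_cycles, daily_writes_gb, write_amplification=1.0):
        """Years until the rated program/erase cycles are exhausted."""
        total_writable_gb = capacity_gb * pe_cycles / write_amplification
        return total_writable_gb / daily_writes_gb / 365.0

    # Hypothetical 128 GB drive, 3,000-cycle MLC, 20 GB written per day:
    print(f"{endurance_years(128, 3_000, 20):.0f} years")  # roughly 53 years
    ```

    Note that this naive formula applied to the Mtron numbers quoted above gives a figure far larger than the 85 years in the data sheet; real data sheets presumably fold in write amplification and other overheads.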

  • by billcopc ( 196330 ) <vrillco@yahoo.com> on Wednesday December 07, 2011 @03:27PM (#38294156) Homepage

    If you have 12GB in your PC, and you're using it normally, you can disable swap entirely. Sure, your commit rate will jump a bit, but you still have several times more RAM than you need. Swap space is useful even when you have memory available, because a properly tuned VMM will treat it as low-priority commit fodder - meaning if an app requests 10 gigs of buffer space, but has not yet put anything in there, the VMM will earmark swap first, so as not to tie up physical RAM until it is actually needed (if at all). In a sense, it's an accounting trick that allows the OS to "borrow" memory without necessarily using it (see the sketch below). It's like a line of credit for memory: you're best off avoiding it, but if you need a security deposit for something, that MasterCard is ideal. Swap is like that MasterCard. It can help swing you through tight spots, but if you abuse it, you enter a world of pain...
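    A minimal sketch of that "earmark now, commit later" behaviour, assuming a Unix-like system with a Linux-style overcommitting VMM (the mapping size and the use of mmap/resource here are illustrative, not a prescription):

    ```python
    import mmap
    import resource

    # Reserve a large anonymous mapping. Address space (and, under stricter
    # overcommit settings, swap) is earmarked up front, but physical pages are
    # only committed as they are actually written. With strict overcommit this
    # request may simply be refused.
    buf = mmap.mmap(-1, 10 * 1024**3)  # ~10 GiB of virtual address space

    rss_before = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    buf[0:4096] = b"\x00" * 4096       # touching one page commits only that page
    rss_after = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

    # Resident memory grows by the pages actually touched, not by the 10 GiB reserved.
    print(rss_before, rss_after)
    ```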

  • by Xygon ( 578778 ) on Wednesday December 07, 2011 @03:30PM (#38294180)
    Speaking as someone in the NAND industry...

    NAND does not have its own reliability controls on-die. Items such as wear-leveling, file management, and ECC mechanisms need to be handled somewhere (a toy sketch of that bookkeeping follows below). So the options are to handle them in software, which would then need to be validated and designed for each NAND manufacturer, die, and process, and would consume CPU and battery power on the tablet; or to use a separate off-die controller.

    And as to the choice of eMMC, it's a cost/performance/reliability trade-off. eMMC is relatively inexpensive (very small die), and includes all of the aforementioned reliability mechanisms in a low-power, low-cost package, over an interface supported by most mobile architectures (SD/MMC). However, its performance falls well short of an SSD's. The other option is an optimized SSD controller, which may cost many times more, but has much higher performance. The problem is how to include a $100 SSD in a $100-200 tablet BOM... impossible.
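    To make "handled somewhere" concrete, here is a toy sketch of the kind of bookkeeping wear-leveling involves, whether it lives in host software or in an eMMC/SSD controller. It is a didactic illustration only; the class and method names are invented and this is not how any real firmware works:

    ```python
    # Toy wear-leveling: always write to the least-erased free physical block,
    # and keep a logical-to-physical map so data can still be found afterwards.

    class ToyFlash:
        def __init__(self, num_blocks):
            self.erase_counts = [0] * num_blocks   # wear per physical block
            self.mapping = {}                      # logical block -> physical block
            self.contents = {}                     # physical block -> stored payload

        def write(self, logical_block, payload):
            in_use = set(self.mapping.values())
            free = [b for b in range(len(self.erase_counts)) if b not in in_use]
            target = min(free, key=lambda b: self.erase_counts[b])
            self.erase_counts[target] += 1         # erase-before-write cost
            self.mapping[logical_block] = target   # old physical block becomes free
            self.contents[target] = payload

    flash = ToyFlash(num_blocks=8)
    for i in range(20):
        flash.write(logical_block=i % 3, payload=b"...")
    print(flash.erase_counts)  # erases spread out instead of hammering three blocks
    ```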
  • by Rockoon ( 1252108 ) on Wednesday December 07, 2011 @05:51PM (#38295890)

    "Except with SSD write lifetimes falling with every generation..."

    Except this isn't true. Flash cell lifetimes are dropping due to process shrinks, but SSD lifetimes are remaining steady due to the increased capacity made possible by those same process shrinks (rough numbers are sketched below).

    This is the problem with you SSD critics. You get that one nugget of information and then gleefully go on spitting bullshit at everyone on forums like this one. To be quite clear, YOU DO NOT KNOW WHAT YOU ARE TALKING ABOUT.

    Why do you volunteer to talk about a subject that we both know you are poorly informed about? You don't see me talking about Java performance because... guess what... even though I know a couple of things about Java, I refuse to make declarative statements about topics where I only know a couple of things.

    If you are an expert in something... wait for that topic before you act like an expert.
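    The parent's point in rough numbers, using made-up capacities and cycle ratings in the ranges quoted in the summary (illustrative assumptions, not measurements):

    ```python
    # Per-cell endurance drops with each shrink, but drives built on the newer
    # process are larger, so total writable data per drive does not fall.
    generations = [
        # (process, assumed drive capacity in GB, assumed P/E cycles)
        ("65nm MLC", 32,  7_500),
        ("25nm MLC", 128, 4_000),
        ("20nm MLC", 256, 3_000),
    ]

    for process, capacity_gb, cycles in generations:
        total_tb = capacity_gb * cycles / 1_000
        print(f"{process}: ~{total_tb:,.0f} TB of total writes per drive")
    ```

    Output: roughly 240 TB, 512 TB, and 768 TB respectively; drive-level endurance holds or rises even as per-cell cycle counts fall.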
