Data Storage Hardware

Toshiba To Test Sub-25nm NAND Flash

An anonymous reader writes "Toshiba plans to spend about $159.8 million this year to build a test production line for NAND flash memory chips of less than 25 nanometers. The company hopes to kick off mass production of the chip as early as 2012. The fabrication facility for this key NAND flash memory will be located at Yokkaichi, Mie Prefecture."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • microSD (Score:3, Funny)

    by Hatta ( 162192 ) on Monday April 05, 2010 @10:42AM (#31733732) Journal

    I thought microSD was small. I'm going to lose this stuff for sure!

    • Re:microSD (Score:2, Funny)

      by Stenchwarrior ( 1335051 ) on Monday April 05, 2010 @10:50AM (#31733814)
      That's what they're hoping!
    • by nunojsilva ( 1019800 ) on Monday April 05, 2010 @10:56AM (#31733914) Journal
      This means IPoAC [wikipedia.org] will become more useful, as its main strength is bandwidth (currently limited by the capacity of microSD cards and the like).
    • by Anonymous Coward on Monday April 05, 2010 @03:54PM (#31739580)

      Yayyyyyyyy! I bought a couple of USB sticks and stuck them on the back of the computer a couple years ago. They are 4GB each, and I bought two because the pair cost about half the price of a single 8GB stick. If they are making the feature size of these smaller, then they can surely stuff more capacity into the same sized space. It isn't nearly as fast as getting data off the hard disk, but when you need to move a lot of stuff, or when the disk crashes, it's nice to have a physical backup (slow is better than typing everything in again, or worse, losing it forever). I don't advocate doing system backups on 64GB or 128GB USB sticks, mostly because data transfer speeds are relatively slow, and the amount you can store is less than a single drive, or multiple drives. But, in the case of business intelligence, you can store at least the critical applications and critical data on these, lock them in a box offsite, and when the walls come tumbling down, get back to some semblance of normal more quickly. Oh, and they are, in general, quite reliable.

  • 1st post (Score:-1, Offtopic)

    by Anonymous Coward on Monday April 05, 2010 @10:43AM (#31733746)

    that's it.

  • by rindeee ( 530084 ) on Monday April 05, 2010 @10:49AM (#31733808)
    Not everyone (me included) understands what the benefit to consumers will be when less-than-25nm production is possible. Does that mean 1TB flash memory cards for my camera? The same sizes as now but cheaper? What? Just an additional sentence giving a "once possible, this will mean blah blah blah blah blah". Simple as that. Of course, with an 'article' (actually just PC Mag parroting a Toshiba presser... for pay, I'd imagine) as crappy as the one linked in the headline, I don't know that it really matters.
    • by digitaldrunkenmonk ( 1778496 ) on Monday April 05, 2010 @10:59AM (#31733940)

      The smaller the transistors, the more that can be packed into a given area. Basically, this will allow you to have smaller chips with denser memory capacities. The benefits show up in things like phones, tablet PCs, netbooks, cameras, cars, computers, etc. Anything that uses or can use digital memory will benefit from smaller components.

      It'll also decrease the price for components out now, and that's always nice.

      I just wonder what'll happen when we hit the quantum wall -- the point at which quantum effects become apparent and electronics behave erratically.

    • by dk90406 ( 797452 ) on Monday April 05, 2010 @11:07AM (#31734038)
      The article is not specific about the size. Sub-25nm is hardly precise. But let's assume it is 22.5nm (half of the 45nm process known today). That would give you four times the capacity on a similar size chip. Or a smaller chip (witch means an approx 4 x larger yield on a 300mm wafer) for a similar capacity chip.

      The first example would give bigger capacity and the second lower prices. Besides that there are benefits on power usage and read/write speeds.

      From TFA, some flash already uses a 32nm process, so the gain would not be so big compared to those (I'll let you do the math), unless they are talking about 16nm. That is doubtful, as 16nm is very much "sub-25nm" - and they would want to advertise that fact.
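
      A quick back-of-the-envelope of that arithmetic in Python (the 22.5nm figure is only the assumption above, not anything Toshiba has announced):

        # Density gain from a shrink scales with the square of the linear
        # feature-size ratio, to a first approximation.
        def density_gain(old_nm, new_nm):
            return (old_nm / new_nm) ** 2

        for old, new in ((45, 22.5), (32, 25), (32, 16)):
            print(f"{old}nm -> {new}nm: ~{density_gain(old, new):.2f}x the bits per area")
        # 45nm -> 22.5nm: ~4.00x; 32nm -> 25nm: ~1.64x; 32nm -> 16nm: ~4.00x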

      • by TheRaven64 ( 641858 ) on Monday April 05, 2010 @11:22AM (#31734280) Journal

        witch means an approx 4 x larger yield on a 300mm wafer

        I'm not sure what witches have to do with it, but the yield improvement from a process shrink is more than just the 4x you get from cramming four times as many chips on a wafer. An impurity in the wafer typically destroys one die. If you're unlucky, it may sit on a boundary and destroy 2 or even 4. If you make each die smaller then an impurity of the same size may only destroy 1-3 of the 4 in the same area as one of the originals.

        • by TheLink ( 130905 ) on Monday April 05, 2010 @01:55PM (#31736888) Journal
          > If you make each die smaller then an impurity of the same size may only destroy 1-3 of the 4 in the same area as one of the originals.

          I would have thought that an impurity/defect of the same size would be more likely to destroy more chips if the chips are smaller - esp if the defect is big. If the defect is really small it's unlikely to damage more than one chip in which case see below:

          To me a more plausible reason is that if each chip is smaller, you get more chips per wafer. So assuming the same number of tiny defects scattered across the wafer, you'd have more good chips per wafer - since 8 bad chips out of 100 is better than 8 bad chips out of 25. The first case leaves 92 good chips, the second only 17, which is more than a 4x difference.

          Of course, what Intel et al. also do is make the chips able to still work if some portions are defective - so the chip gets sold with less cache and/or fewer cores.
          • Most NAND Flash chips are already defective - it's just that the defects are branded "factory bad blocks" and the chips get sold anyway. We're well past the point where 100% reliability is possible and well into the realm of using error correction and flash translation layers to mask an increasingly poor memory array.

          • by TheRaven64 ( 641858 ) on Tuesday April 06, 2010 @07:12AM (#31746252) Journal

            I would have thought that an impurity/defect of the same size would be more likely to destroy more chips if the chips are smaller - esp if the defect is big. If the defect is really small it's unlikely to damage more than one chip in which case see below:

            Yes, more of the chips are defective, but a smaller fraction of them are. Consider a wafer with one chip. Anywhere you put a defect, it destroys one chip, and the yield is 0%. Now divide the wafer into 4 chips and put the same small defect somewhere. Most likely it will hit just one chip, but it may overlap a border between two, so you get two dead chips and the yield is now 50%. Now make the wafer hold 9 chips. They're smaller, so the defect now destroys 4 of them, but the yield is 55%.
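
            The standard way to formalize this is a Poisson yield model, Y = exp(-A * D), where A is die area and D is defect density. A minimal sketch (the defect density is an invented, illustrative number):

              import math

              def poisson_yield(die_area_cm2, defects_per_cm2):
                  # Probability that a die lands on zero randomly scattered defects.
                  return math.exp(-die_area_cm2 * defects_per_cm2)

              D = 0.5  # defects per cm^2 -- invented for illustration
              for area in (4.0, 1.0, 0.25):
                  print(f"{area} cm^2 die: {poisson_yield(area, D):.0%} yield")
              # 4.0 cm^2: 14%; 1.0 cm^2: 61%; 0.25 cm^2: 88%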

    • by RabidMoose ( 746680 ) on Monday April 05, 2010 @11:11AM (#31734126) Homepage
      The reason this is a big deal is that this is the type of flash that goes into SSDs. Right now, a 256GB SSD costs over $600. Read/write speeds on mainstream HDDs are one of the biggest bottlenecks in today's machines, and SSDs are the answer to the problem, once they come down in price. Also, SSDs draw less power than traditional hard drives, so longer laptop battery life is an added benefit. Not to mention the benefit data centers could see, both from a throughput standpoint and from lower power/cooling requirements.
    • by tlhIngan ( 30335 ) <slashdot.worf@net> on Monday April 05, 2010 @11:24AM (#31734326)

      Simple. Moore's Law. The number of transistors doubles every 18 months (roughly).

      The benefit? In what devices does the number of transistors directly affect you? Memory cards, primarily - things like CPUs and GPUs and chipsets, not so much (most of their space is taken up with wires). But a 32GB card has over 16 billion transistors in it. Double that in 18 months, and the same card can hold 64GB at effectively the same cost.

      Maybe the card you use in your digital camera isn't too exciting. But if you wanted a decent sized SSD, it means once the next node is mature, SSD prices effectively tumble by half, so that 128GB SSD you were eyeing at $500 suddenly costs $250. Or that stratospheric 256GB SSD drops to something that a little saving can pay for.

      Of course, hard drives don't obey Moore's Law (their capacity has historically grown somewhat faster), making the gap between spinning media and SSDs even bigger.
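
      To put rough numbers on the compounding (all prices hypothetical):

        price = 500.0  # hypothetical 128GB SSD price today
        for months in (0, 18, 36, 54):
            print(f"after {months:2d} months: ${price:.0f}")
            price /= 2  # cost per bit roughly halves every node / 18 months
        # $500 -> $250 -> $125 -> $62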

    • by ThreeGigs ( 239452 ) on Monday April 05, 2010 @11:30AM (#31734410)

      Shrinking a process gives several benefits, but a quick general overview helps:
      Silicon as used in chip manufacturing is expensive. It costs a lot to grow, cut and polish. It's also a mature industry, so no real breakthroughs are likely to happen to reduce the cost of the silicon. The less silicon area you use, the more chips you can make for the same cost.

      Next is manufacturing. Whether you put one transistor per square millimeter or 100,000 per square millimeter, the cost is the same, or at least within a penny. Coat, expose to a masked pattern, etch, sputter, clean and repeat a few times, and voila, you have a chip. Shining a light through a mask costs the same no matter the resolution of the mask. Dunking the wafer in a chemical etch bath is the same, running a wafer through a sputterer or CVD costs the same, etc. Labor costs are basically per wafer, so more components per wafer means you get more output for the same labor (and plant infrastructure) dollar.

      So, a smaller manufacturing process means:
      More components per wafer. Thus if you double the component density, your manufacturing costs stay the same while output doubles (think 32GB for the price of 16GB).

      You can also make the chips smaller while keeping the same capacity (same 16GB chip uses half the silicon, thus costs 50% less to make, think 16GB for half the cost you paid last year).

      Or, more capacity within given size limits. (think 64GB or 128GB SD cards, or 2 TB Compact Flash).
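
      Putting rough numbers on the per-wafer economics (the die sizes are invented, and this ignores edge loss and scribe lines):

        import math

        def gross_dies(die_area_mm2, wafer_diameter_mm=300):
            # Crude count: usable wafer area / die area, ignoring edge dies,
            # scribe lines and test structures.
            wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
            return int(wafer_area / die_area_mm2)

        print(gross_dies(100))  # ~706 dies per wafer
        print(gross_dies(25))   # ~2827 dies -- ~4x, from a half linear shrink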

      • by wonkavader ( 605434 ) on Monday April 05, 2010 @11:50AM (#31734710)

        Imperfect silicon wafers tend to have little dot-like blemishes - points on the wafer that spoil whatever chip gets printed on top of them.

        As chips get smaller, more chips get printed on a wafer, but the count of blemishes (and thus spoiled chips) stays the same -- so as a percentage of chips on wafer, manufacturing reliability goes up.

        One chip on a wafer with a blemish -- complete loss.
        Two chips, one blemish -- 50% loss.
        1000 chips, one blemish -- 0.1% loss.

        So smaller chips mean lower manufacturing costs, there, too.

        HOWEVER, I suspect someone on this thread can tell us whether the smaller process (25nm, here) turns imperfections that would otherwise have let a chip work fine into important blemishes.

        Will chip failure count go up because of this new process?

    • by Belial6 ( 794905 ) on Monday April 05, 2010 @11:36AM (#31734470)
      The SDXC [sdcard.org] spec is designed to handle up to 2TB on a card. That means a whole lot of transistors to fit 2TB on a microSD card.
    • by Elledan ( 582730 ) on Monday April 05, 2010 @12:18PM (#31735126) Homepage
      What I'm more interested in than data density is what this new feature size is going to mean for data retention and write cycles. Right now 32nm MLC flash memory is at around 1 year of data retention and ~1,000 write cycles (some at 300 cycles). Would 20nm flash have 2 months of data retention and only 100 write cycles? At what point will flash memory simply stop scaling down?
    • by account_deleted ( 4530225 ) on Monday April 05, 2010 @02:47PM (#31738244)
      Comment removed based on user account deletion
  • by KraftDinner ( 1273626 ) on Monday April 05, 2010 @10:49AM (#31733810)
    I've got to admit that I don't really know much about the hardware side of tech and new advances in shrinking chip sizes, but what are the real benefits of shedding 7nm off the previous smallest process? It seems to me like a very marginal gain, unless I'm missing something fundamental about why one would want even smaller feature sizes.
  • by Anonymous Coward on Monday April 05, 2010 @10:50AM (#31733820)
    Let me see how long it takes before this discussion degenerates into a Toyota/anti-Toyota flame war.
  • by RulerOf ( 975607 ) on Monday April 05, 2010 @10:51AM (#31733836)
    Over the last decade, I keep seeing these manufacturing processes grow ever smaller. I still remember when I bought my Athlon FX-55. 130nm process. Aw hell yeah. It's currently living the remainder of its life in one of my guest boxes. God that chip was such a waste of money, but I digress.

    For those in the know, this ever shrinking manufacturing process tech: when will it stop? Where will it stop? 10nm? Sub-1nm?
    • by Anonymous Coward on Monday April 05, 2010 @10:58AM (#31733928)
      Well the game will seriously change at some point when transistors are approaching the sizes of individual molecules. At that point we'll probably have to switch from carving transistors out of bulk material to actually using individual molecules as the functional elements. That's a size-scale of, say, 0.5 nm to 4 nm. We're getting surprisingly close.

      Doing better than single molecules is going to require an even more serious paradigm shift (better parallelization? quantum computing?)... Computers may continue getting better, but miniaturization won't be the driving force.
    • by mcgrew ( 92797 ) * on Monday April 05, 2010 @11:02AM (#31733980) Homepage Journal

      For those in the know, this ever shrinking manufacturing process tech: when will it stop?

      Probably never. The walls always come down.

    • by Hurricane78 ( 562437 ) <deleted @ s l a s h dot.org> on Monday April 05, 2010 @11:04AM (#31734000)

      Well, there's always the Planck length [wikipedia.org] as the ultimate size limitation. As there need to be some structures, feature sizes always have to be a multiple of it.
      But everything else depends on whether we can overcome the difficulties of constructing working *tronics out of structures that small.

    • by Anonymous Coward on Monday April 05, 2010 @11:09AM (#31734070)

      The roadmap for chip feature sizes is set by ITRS, and manufacturers generally try to at least follow (and hopefully outpace) it. Currently it says that flash chips will be at 6.3nm in 2024, and that's its furthest prediction. If I recall correctly, the ability of electrons to quantum tunnel through the channel of transistors will become a serious issue around 5nm. So there will definitely have to be some changes in the process before then.

    • by Tator Tot ( 1324235 ) on Monday April 05, 2010 @11:09AM (#31734082)
      There comes a point where you start messing with quantum effects instead of classical effects when dealing with electrons (i.e. electron tunneling). Bonds between atoms are around 0.2nm, so if you were to hypothetically bond atoms end to end, 50 or so of them would span one of your 10nm transistors.
    • by vlm ( 69642 ) on Monday April 05, 2010 @11:27AM (#31734368)

      For those in the know, this ever shrinking manufacturing process tech: when will it stop? Where will it stop? 10nm? Sub-1nm?

      Well, the lattice spacing of a silicon crystal is a bit more than half a nm, so I think it unlikely we'd have a silicon crystal process much smaller than the smallest unit cell of silicon.

      http://en.wikipedia.org/wiki/Silicon#Crystallization [wikipedia.org]

      It's interesting that a 25nm process means parts are only about 50 atoms across. So one individual contaminant atom means about a 2% change in composition, probably resulting in a much more than 2% change in electrical properties. So the design has to be pretty fault tolerant, or cleanliness must be amazing, or yields must be pretty low - or all of the above, of course.

    • by ThreeGigs ( 239452 ) on Monday April 05, 2010 @11:47AM (#31734668)

      For those in the know, this ever shrinking manufacturing process tech: when will it stop? Where will it stop? 10nm? Sub-1nm?

      There is a hard physical limit based on the silicon and dopant distribution. On a macro scale, the silicon is very homogeneous. However, once you get a feature size down to the point where it encompasses only a few hundred atoms on a side, you run into the real possibility that such small localised areas are over- or under-doped. Thus you have a new source of potential defects, because your charge carriers are overabundant or underabundant. We've been solving the diffraction problem nicely, to the point where we can project ever-smaller patterns onto the wafers, but eventually there will come a point of diminishing returns, when increased defect rates negate any benefit gained from feature-size shrinks.

      • by stevusmichaels ( 1751474 ) on Monday April 05, 2010 @12:16PM (#31735080)
        Exactly. For those of you who want a number, I've heard 10nm is where the wall will be, though I'm not sure of the exact reason. I believe ThreeGigs is right on the money, though. Dopants in silicon are randomly placed, and when you're only dealing with a few hundred silicon atoms, you're only talking about a handful of dopants - so the placement starts to look decidedly non-uniform at that scale.
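
        That handful is easy to quantify, since dopant counts in a small volume follow Poisson statistics. A rough sketch (the doping level is an assumed, typical order of magnitude):

          import math

          doping = 1e18  # dopant atoms per cm^3 -- assumed, typical magnitude
          for feature_nm in (100, 45, 20):
              volume_cm3 = (feature_nm * 1e-7) ** 3  # cube of that edge length
              n = doping * volume_cm3                # expected dopant count
              spread = 1 / math.sqrt(n)              # Poisson relative spread
              print(f"{feature_nm}nm cube: ~{n:.0f} dopants, +/- {spread:.0%}")
          # 100nm: ~1000 dopants, +/- 3%; 45nm: ~91, +/- 10%; 20nm: ~8, +/- 35%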
    • by Jenming ( 37265 ) on Monday April 05, 2010 @12:14PM (#31735060)

      I believe that around 8nm, electron tunneling becomes a serious issue. At that point electrons will "tunnel" between transistors even if there were infinite resistance between the two. This happens at larger distances as well, just not very often.

    • by BikeHelmet ( 1437881 ) on Monday April 05, 2010 @05:50PM (#31741652) Journal

      It's not going to just stop! Pretty soon it's going to reverse! And then we'll design our chips in 3D! Blessed 130nm 3D!

      (this was part joke, part serious - most modern chips are very complicated, but flat...)

    • by Anonymous Coward on Monday April 05, 2010 @09:52PM (#31744032)

      Silicon's crystal lattice is spaced at about 2 unit cells per nm. The manufacturers claim that there's a clear path all the way to 10nm (20 unit cells), and they have ideas in the pipeline that might work beyond that.

  • ultimate limit (Score:5, Interesting)

    by goombah99 ( 560566 ) on Monday April 05, 2010 @10:52AM (#31733866)

    Is there a proposed ultimate limit for lithography before one has to jump to molecular electronics? 25nm is well below what anyone thought practical a decade ago (since it's so many times smaller than easily produced optical wavelengths). Now it's closing in on the limit of easily produced x-rays.

    While the resolution of the smallest resolvable element is shrinking, is the utilization of area increasing proportionally? That is, are we densely filling the area with 25nm structures, or is that simply the finest linear element, with such elements well separated?

    A 1cm chip would have about 1.6E11 resolvable points at 0.025 micron resolution (4E5 per side, squared). And then there is the vertical dimension to multiply by. I should think it would become prohibitively difficult to design something with so many possibilities.
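
    The count, for anyone checking (a one-liner in Python, using the 0.025 micron figure above):

      side_points = 1e7 / 25   # 1 cm = 1e7 nm, at 25nm resolution
      print(side_points ** 2)  # ~1.6e11 resolvable points per layer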

  • by SemperUbi ( 673908 ) on Monday April 05, 2010 @11:08AM (#31734064)
    Is this chip design somehow based on the NAND logic gate? How is it different from other chips? I couldn't tell from the article.
  • by neophytepwner ( 992971 ) on Monday April 05, 2010 @11:10AM (#31734110)
    If this means cheaper SSDs (and inevitably larger ones), then I can see the benefit of 25nm NAND flash memory.
  • by Existential Wombat ( 1701124 ) on Monday April 05, 2010 @11:40AM (#31734528)

    Four significant figures is pretty precise for an 'about'. Why not 'about $160M'?

  • by Firethorn ( 177587 ) on Monday April 05, 2010 @11:41AM (#31734550) Homepage Journal

    Personally, while I find it interesting, I'd like to know just how much extra data storage this would enable.

    32nm to 25nm would, what, increase the theoretical max density of flash by 64%? I.e., instead of getting a 16GB chip you'd get a ~26GB one.

    At the same price once you have all the details worked out, of course.

    45nm to 25 nm by my figuring would allow 3.24 times as much storage in a given size of chip.
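
    A quick check of that figuring in Python (the 16GB chip is just an example size):

      for old, new in ((32, 25), (45, 25)):
          gain = (old / new) ** 2  # bits per area scale with the square of the shrink
          print(f"{old}nm -> {new}nm: {gain:.2f}x, so 16GB becomes ~{16 * gain:.0f}GB")
      # 32nm -> 25nm: 1.64x, ~26GB;  45nm -> 25nm: 3.24x, ~52GB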

  • by stevusmichaels ( 1751474 ) on Monday April 05, 2010 @12:22PM (#31735166)
    So I don't think they've highlighted the move to extreme ultraviolet (EUV) enough. What new wavelength of light are they using? The semiconductor industry has been stuck at 193nm for a long time now. If the industry moves to a smaller wavelength, it's a pretty big deal: a new wavelength means new lithography materials. It may not be interesting to those of you asking "what size hard drive does this mean?", but to those who know this stuff it's important.
  • A litho primer (Score:2, Informative)

    by quo_vadis ( 889902 ) on Monday April 05, 2010 @12:24PM (#31735202) Journal
    For those unfamiliar with the field of semiconductor design, here's what the sizes mean. The Toshiba press release is about flash. In flash, the actual physical silicon consists of rectangular areas of silicon that have impurities added (a.k.a. doped regions or wells). On top of these doped regions are thinner parallel "wires" (narrower rectangles) made of polysilicon. The distance between the leading edge of one wire and the next is called the pitch; the half-pitch is half that distance. The reason this is important is that the half-pitch is usually the width of the polysilicon wire, and it effectively becomes the primary physical characteristic from the point of view of power consumption (leakage), speed and density.

    The official roadmap for processes and feature sizes (called process nodes) is published yearly by the International Technology Roadmap for Semiconductors, a consortium of all the fabs. According to the 2009 lithography report [itrs.net], 25nm flash is supposed to hit full production in 2012, with initial deployments a couple of years before that. Effectively, Toshiba seems to be hitting the roadmap.

    The takeaway being: there's nothing to see here, it's progress as usual. The big problem is what happens under 16nm. That's the point at which current optical lithography becomes impossible, even using half- or quarter-wavelength techniques and EUV with immersion litho.
    • by Microlith ( 54737 ) on Monday April 05, 2010 @12:31PM (#31735316)

      The big problem is what happens under 16nm.

      IIRC, lithography down to 13nm is believed to be possible. NAND will start hitting terminal reliability problems below 20nm, as the floating gates will likely hold 100 electrons (or fewer!) and be far more susceptible to random charge drainage and bit errors, way beyond what is currently experienced.

      So we'll end up with more, higher-density, and fundamentally unstable nonvolatile memory. As I understand it, DRAM will be hitting this problem too, as the capacitors will become susceptible to spontaneous charge loss.
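
      That 100-electron figure is easy to sanity-check with a crude parallel-plate estimate in Python (the oxide thickness, dielectric constant and threshold shift are assumed round numbers; real cell geometry is more complicated):

        EPS0 = 8.854e-12  # vacuum permittivity, F/m
        Q_E = 1.602e-19   # electron charge, C

        def electrons_stored(cell_nm, oxide_nm=8, k=3.9, delta_vt=3.0):
            # Model the floating gate as a parallel-plate capacitor over the
            # tunnel oxide and count the electrons a ~3V threshold shift needs.
            area = (cell_nm * 1e-9) ** 2
            cap = EPS0 * k * area / (oxide_nm * 1e-9)
            return cap * delta_vt / Q_E

        for cell in (45, 25, 16):
            print(f"{cell}nm cell: ~{electrons_stored(cell):.0f} electrons")
        # 45nm: ~164, 25nm: ~51, 16nm: ~21 -- the right order of magnitude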

    • by Firethorn ( 177587 ) on Monday April 05, 2010 @01:54PM (#31736878) Homepage Journal

      The takeaway being: there's nothing to see here, it's progress as usual. The big problem is what happens under 16nm. That's the point at which current optical lithography becomes impossible, even using half- or quarter-wavelength techniques and EUV with immersion litho.

      So, around two-thirds the feature size of what this factory will produce? In a perfect world, that would translate to around 2.4x the density of the 25nm process.

      Call it roughly 8x the storage per area of today's 45nm parts.

      Given that SSDs are STILL something like 100X the price per gigabyte of hard drives, will we ever see the end of the spinning platter?

      Matter of fact, I'll jinx myself here:

      I'm afraid that we're going to see a plateauing of storage capabilities within my lifetime. One guy I was talking with was convinced that hard drives have almost reached their economical max density, and that flash would surpass them within five years. Personally, I placed it more like 10-15 years, but with concerns about just how small the process could go. I figured we'd have more room.

    • by Katatsumuri ( 1137173 ) on Monday April 05, 2010 @02:34PM (#31737902)

      The big problem is what happens under 16nm.

      Maybe 3D chip stacking [ibm.com] will help prolong Moore's law for a while, instead of further miniaturization.

  • Micron and Intel previously announced 25nm flash. Toshiba is trailing badly. http://bit.ly/c6oOQW [bit.ly]
  • Details? (Score:3, Insightful)

    by AdamHaun ( 43173 ) on Monday April 05, 2010 @03:00PM (#31738542) Journal

    The article is frustratingly light on details. There's nothing about what type of flash transistor they're using (there are several variants on the basic stacked-gate NMOS design as well as more wild types). They don't say whether they're actually shrinking the bits (which you don't have to do) or just the support circuitry. All it says is that Toshiba is making NAND flash in a new process node, probably 22nm.

    My day job is working with embedded NOR flash. I'm not really a process or solid state physics guy, but I think I know enough to comment, unlike a lot of the people running their mouths. (Seriously, folks, if you don't know what you're talking about, *shut up*. Misinforming people with wild guesses is not helpful, no matter how much it strokes your ego.)

    First off, the flash transistor itself is not 22nm long. It's probably at least ten times longer, if not more (obviously Toshiba's not giving exact numbers). When you go to a new process node you don't necessarily shrink every feature by 50%. The limiting factor in flash size isn't lithography (manufacturing), it's leakage.

    Flash works by storing electrons on an isolated (floating) material sandwiched inside an NMOS transistor [linux-mag.com]. If extra electrons are present, the transistor is forced off (0). If they aren't, the transistor can turn on (1). The problem is that over time the electrons leak out of the floating gate, eventually causing bits to flip. If you shrink the circuit enough you hit a point where you can't keep electrons in the gate for a reasonable amount of time. At that point, we'll need a new memory technology -- maybe FRAM [wikipedia.org], maybe something else. Whatever it is, I'm sure it's been researched already -- a lot of the major research papers for flash memory are 25+ years old.

    Also, I said this elsewhere, but NAND flash is called NAND because the flash transistors (bits) are in series, like the NMOS transistors in a NAND gate. It isn't made out of logic gates or anything like that. Flash memory is analog, like DRAM -- you need special analog circuitry to read it and output a digital signal.
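
    For the retention point above, a toy model shows why shrinking hurts: fewer stored electrons means less margin before a bit flips. (The leak rate here is entirely made up, purely for illustration.)

      def retention_years(stored_electrons, margin=0.5, leaked_per_year=10):
          # Toy model: a few electrons escape per year through the ever
          # thinner oxide; the bit is unreadable after losing `margin`
          # of its charge. Fewer stored electrons -> less time.
          return stored_electrons * margin / leaked_per_year

      for n in (1000, 100, 50):
          print(f"{n} stored electrons: ~{retention_years(n):.1f} years")
      # 1000: ~50.0 years; 100: ~5.0 years; 50: ~2.5 years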

"Money is the root of all money." -- the moving finger

Working...