Data Storage Hardware

Toshiba To Test Sub-25nm NAND Flash

Posted by CmdrTaco
from the who-wants-underwater-memory dept.
An anonymous reader writes "Toshiba plans to spend about $159.8 million this year to build a test production line for NAND flash memory chips of less than 25 nanometers. The company hopes to kick off mass production of the chip as early as 2012. The fabrication facility for this key NAND flash memory will be located at Yokkaichi, Mie Prefecture."
This discussion has been archived. No new comments can be posted.


  • microSD (Score:3, Funny)

    by Hatta (162192) on Monday April 05, 2010 @10:42AM (#31733732) Journal

    I thought microSD was small. I'm going to lose this stuff for sure!

  • by rindeee (530084) on Monday April 05, 2010 @10:49AM (#31733808)
    Not everyone (including me) understands what the benefit to consumers will be when less than 25nm production is possible. Does that mean 1TB flash memory cards for my camera? Same sizes as now but cheaper? What? Just an additional sentence giving a "once possible, this will mean blah blah blah blah blah". Simple as that. Of course, with an 'article' (actually just PC Mag parroting a Toshiba presser...for pay I'd imagine) as crappy as the one linked to in the headline, I don't know that it really matters.
    • Re: (Score:2, Insightful)

      The smaller the transistors, the more that can be packed into a smaller area. Basically, this will allow you to have smaller chips that will have denser memory capacities. The benefits come into things like phones, tablet PC's, netbooks, cameras, cars, computers, etc. Anything that uses or can use digital memory will benefit from smaller components.

      It'll also decrease the price for components out now, and that's always nice.

      I just wonder what'll happen when we hit the quantum wall -- the point at which quantum effects begin to dominate.

      • Depends what you mean by "quantum effects". Some effects are already apparent.
        • True. I'm expecting reduced functionality that will eat into the possible benefits at this size. I remember reading that it was below 45nm that the effects became apparent, but I'm not entirely sure.

          • Re: (Score:3, Interesting)

            by marcansoft (727665)

            MLC NAND Flash is already horribly unreliable. Manufacturers don't care about errors, quantum or not. The proper question to ask is when will quantum effects become dominant such that decreasing feature size loses more memory from failure than you gain from the reduced size. Until then, people will just slap on better ECC and nobody cares if a large number of bits are randomly flipping.

    • by dk90406 (797452)
      The article is not specific about the size. Sub-25nm is hardly precise. But let's assume it is 22.5 nm (half of the 45nm process known today). That would give you four times the capacity on a similar size chip. Or a smaller chip (witch means an approx 4 x larger yield on a 300mm wafer) for a similar capacity chip.

      The first example would give bigger capacity and the second lower prices. Besides that there are benefits on power usage and read/write speeds.

      From TFA, some flash already uses a 32nm process, so the…
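A quick sanity check of dk90406's arithmetic (a sketch in plain Python; the 22.5 nm figure is the parent's assumption, not from the article):

```python
# Capacity per unit area scales with the square of the linear
# feature-size ratio, since features shrink in two dimensions.
def density_gain(old_nm: float, new_nm: float) -> float:
    return (old_nm / new_nm) ** 2

# Halving the feature size (45 nm -> 22.5 nm, the parent's assumption)
# quadruples the capacity on a same-sized die.
print(density_gain(45.0, 22.5))  # 4.0
```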

      • by TheRaven64 (641858) on Monday April 05, 2010 @11:22AM (#31734280) Journal

        witch means an approx 4 x larger yield on a 300mm wafer

        I'm not sure what witches have to do with it, but the yield improvement from a process shrink is more than just the 4x that you get from cramming four times as many chips on a wafer. An impurity in the wafer typically destroys one die; if you're unlucky it may straddle a boundary and destroy 2 or even 4. If you make each die smaller, then an impurity of the same size may only destroy 1-3 of the 4 dies in the same area as one of the originals.

        • by TheLink (130905)
          > If you make each die smaller then an impurity of the same size may only destroy 1-3 of the 4 in the same area as one of the originals.

          I would have thought that an impurity/defect of the same size would be more likely to destroy more chips if the chips are smaller, especially if the defect is big. If the defect is really small it's unlikely to damage more than one chip, in which case see below:

          To me a more plausible reason is that if each chip is smaller, you get more chips per wafer. So assuming the same number of defects per wafer, the fraction of chips lost goes down.
          • Most NAND Flash chips are already defective - it's just that the defects are branded "factory bad blocks" and the chips get sold anyway. We're well past the point where 100% reliability is possible and well into the realm of using error correction and flash translation layers to mask an increasingly poor memory array.

          • I would have thought that an impurity/defect of the same size would be more likely to destroy more chips if the chips are smaller - esp if the defect is big. If the defect is really small it's unlikely to damage more than one chip in which case see below:

            Yes, more of the chips are defective, but a smaller fraction of them are. Consider a wafer with one chip. Anywhere you put a defect, it destroys one chip, and the yield is 0%. Now make the wafer into 4 chips and put the same small defect somewhere. Most likely, it will hit just one chip, but it may overlap a border between two, so you get two dead chips but the yield is now 50%. Now make it hold 9 chips. They're smaller, so the defect now destroys 4 of them, but the yield is 55%.
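The parent's argument can be illustrated with a toy point-defect model (a sketch with made-up numbers, not real fab data; real defects have finite size, which is what makes the border-straddling cases above possible):

```python
import random

def survival_fraction(dies_per_side, n_defects, trials=2000, seed=1):
    """Fraction of dies surviving when n_defects point defects land
    uniformly at random on a square wafer divided into
    dies_per_side x dies_per_side equal dies."""
    rng = random.Random(seed)
    n = dies_per_side * dies_per_side
    total_good = 0
    for _ in range(trials):
        dead = set()
        for _ in range(n_defects):
            x, y = rng.random(), rng.random()
            # Which die does this defect land in?
            dead.add((int(x * dies_per_side), int(y * dies_per_side)))
        total_good += n - len(dead)
    return total_good / (n * trials)

# One defect on the wafer, shrinking the dies:
print(survival_fraction(1, 1))  # 0.0   -- one big die: total loss
print(survival_fraction(2, 1))  # 0.75  -- 4 dies: 3 of 4 survive
print(survival_fraction(3, 1))  # ~0.889 -- 9 dies: 8 of 9 survive
```

More dies are destroyed in absolute terms as dies shrink, but the surviving fraction rises, which is the point both posters converge on.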

    • by RabidMoose (746680) on Monday April 05, 2010 @11:11AM (#31734126) Homepage
      The reason this is a big deal is that this is the type of flash that goes into SSDs. Right now, a 256GB SSD costs over $600. Read/write speeds on mainstream HDDs are one of the biggest bottlenecks in today's machines, and SSDs are the answer to the problem, once they come down in price. Also, SSDs draw less power than traditional hard drives, so longer laptop battery life is an added benefit. Not to mention the benefit that data centers could see, both from a throughput standpoint and from lower power/cooling requirements.
    • by tlhIngan (30335)

      Simple. Moore's Law. The number of transistors doubles every 18 months (roughly).

      The benefit? What device do you use where the number of transistors directly affects you? Memory cards, primarily -- things like CPUs and GPUs and chipsets, not so much (most of the space is taken up with wires). But a 32GB card has over 16 billion transistors in it. Double that in 18 months, and that same card can hold 64GB of data at effectively the same cost.

      Maybe the card you use in your digital camera isn't too exciting.

    • Re: (Score:3, Informative)

      by ThreeGigs (239452)

      Shrinking a process gives several benefits, but a quick general overview helps:
      Silicon as used in chip manufacturing is expensive. It costs a lot to grow, cut and polish. It's also a mature industry, so no real breakthroughs are likely to happen to reduce the cost of the silicon. The less silicon area you use, the more chips you can make for the same cost. Next is manufacturing. Whether you put one transistor per square millimeter or 100,000 per square millimeter, the cost is the same, or at least within a small margin.

      • Imperfect silicon wafers tend to have little dot-like blemishes. So there are points on wafers which spoil the chip that gets printed at that point.

        As chips get smaller, more chips get printed on a wafer, but the count of blemishes (and thus spoiled chips) stays the same -- so as a percentage of chips on wafer, manufacturing reliability goes up.

        One chip on a wafer with a blemish -- complete loss.
        Two chips, one blemish -- 50% loss.
        1000 chips, one blemish -- .1% loss.

        So smaller chips mean lower manufacturing losses.

    • by Belial6 (794905)
      The SDXC [] spec is designed to handle up to 2TB on a card. That means a whole lot of transistors to fit 2TB on a micro-SD card.
      • by Luyseyal (3154)

        Relevant: XKCD: "MicroSD" []

        Kinda scary to think about having that much data in just one tiny flash card. Really need a faster way to dupe 'em.


        /Just sent in an A-Data card for warranty support. Le sigh.

    • by Elledan (582730)
      What I'm more interested in than data density is what this new feature size is going to mean for data retention and write cycles. Right now 32nm MLC Flash memory is at around 1 year data retention and ~1,000 write cycles (some at 300 cycles). Would 20nm Flash have 2 months of data retention and only 100 write cycles? At what point will Flash memory simply stop scaling down?
    • by hairyfeet (841228)

      Pretty much all of the above. I have in my drawer a fat 64MB flash stick that cost me nearly $80 when it was new (boy, having the same space as dozens of floppies!) and now I carry an 8GB stick the size of my pinky that cost a whole $10.

      What we have seen before in CPUs, GPUs, and RAM looks like it will now be coming to flash-based media. Remember when a single-core P4 cost a mint and would heat your house for you in the winter? Now I have a quad AMD that barely reaches 100F under load. With this tech, flash should follow the same curve.

  • I've got to admit that I don't really know much about the hardware side of tech and new advances in the shrinking of chip size, but what are the real benefits of shedding 7nm off the last smallest chip? It seems to me like a very marginal gain, unless I'm missing something fundamental about why one would want even smaller chip sizes.
    • Re: (Score:2, Informative)

      by Anonymous Coward

      Well, chips are 2D, so you also get to square that benefit.

      32x32 = 1024 nm^2
      25x25 = 625 nm^2

      That's nearly 18 months of Moore's Law right there.

    • You are missing something fundamental. 25nm is not the size of the chip, it is the size of a feature (e.g. a transistor). If you shrink the feature size from 32nm to 25nm then you are shrinking by around 22%, but since these are 2D components you are shrinking the area each takes by almost 40%. This means that you can get 1.64GB on the 25nm process for the price that you get 1GB on the 32nm process. This means either cheaper flash at the same capacity, bigger flash at the same price, or something somewhere in between.
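The percentages in the parent comment check out (a quick sketch):

```python
# Shrinking the feature size from 32 nm to 25 nm.
old, new = 32.0, 25.0

linear_shrink = 1 - new / old        # fractional shrink per dimension
area_shrink = 1 - (new / old) ** 2   # fractional area saved per feature
density_gain = (old / new) ** 2      # capacity per unit area

print(round(linear_shrink * 100))    # 22   ("around 22%")
print(round(area_shrink * 100))      # 39   ("almost 40%")
print(round(density_gain, 2))        # 1.64 (1.64GB where 1GB fit before)
```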
  • by RulerOf (975607) on Monday April 05, 2010 @10:51AM (#31733836)
    Over the last decade, I keep seeing these manufacturing processes grow ever smaller. I still remember when I bought my Athlon FX-55. 130nm process. Aw hell yeah. It's currently living the remainder of its life in one of my guest boxes. God that chip was such a waste of money, but I digress.

    For those in the know, this ever shrinking manufacturing process tech: when will it stop? Where will it stop? 10nm? Sub-1nm?
    • by mcgrew (92797) *

      For those in the know, this ever shrinking manufacturing process tech: when will it stop?

      Probably never. The walls always come down.

      • by symbolset (646467)
        We're actually getting pretty close to the limit with silicon. Soon they'll be compelled to use Z. Silicon crystals have a lattice structure with a spacing of 0.5430710 nm [] so at 45 nm we're already talking about features that are less than 100 atoms across.
        • by mcgrew (92797) *

          And yet someone has made a transistor out of three molecules. The silicon wall will come down as well; something will be developed to take its place, and be better, cheaper, smaller, and use less power. Progress has slowed at times and even gone backwards at times, but for the most part progress hasn't stopped since the invention of the stone tool.

    • Well, there's always the Planck length [], as the ultimate size limitation. As there need to be some structures, any feature always has to be a multiple of it.
      But everything else depends on whether we can overcome the difficulties of constructing working *tronics out of structures that small.

      • Re: (Score:3, Insightful)


        Chemitronics? Quantatronics? What does this mean?!?!?

      • by Thanshin (1188877)

        Well, there's always the Planck length, as the ultimate size limitation.

        Unless we find a way of storing stuff in a place directly accessible but not physically close in 3D space. Either by new means of communication or by new means of accessing an additional dimension.

      • by toastar (573882)

        Well, there's always the Planck length [], as the ultimate size limitation. As there need to be some structures, it always has to be a multiple of that.
        But everything else depends on whether we can overcome the difficulties of constructing working *tronics out of structures that small.

        The Planck length? That's some 27 orders of magnitude smaller than what we are looking at now. I'd be surprised if there is a way to scale below the width of a single-walled nanotube (~1 nm).
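The gap down to the Planck length is easy to check (a sketch; the Planck-length value is from memory, so treat it as approximate):

```python
import math

planck_length_m = 1.616e-35  # Planck length in metres (approximate)
feature_m = 25e-9            # a 25 nm feature

orders = math.log10(feature_m / planck_length_m)
print(round(orders))  # 27 -- roughly 27 orders of magnitude of headroom
```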

    • There comes a point where you start messing with quantum effects instead of classical effects when dealing with electrons (i.e. electron tunneling). Bonds between atoms are around 0.2 nm, so if you were to hypothetically bond atoms end on end, 50 or so atoms would span the distance for your 10 nm transistors.
    • by vlm (69642)

      For those in the know, this ever shrinking manufacturing process tech: when will it stop? Where will it stop? 10nm? Sub-1nm?

      Well, if the lattice spacing of a silicon crystal is a bit more than 1/2 a nm, I think it unlikely we'd have a silicon crystal process much smaller than the smallest unit crystal of silicon. []

      It's interesting that a 25nm process means parts are only about 50 atoms across. So one individual contaminant atom means about a 2% change in composition, probably resulting in much more than a 2% change in electrical properties. So the design has to be pretty fault tolerant.
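vlm's figures can be reproduced from the lattice constant quoted upthread (a sketch; it treats one lattice cell as roughly one atom's worth of width, as the parent does):

```python
lattice_nm = 0.5430710  # silicon lattice constant, quoted upthread
feature_nm = 25.0

cells_across = feature_nm / lattice_nm   # lattice cells spanning one feature
print(round(cells_across))               # 46 -- "about 50 atoms across"

# One contaminant atom in a feature that narrow:
print(round(100 / cells_across, 1))      # 2.2 -- "about a 2% change"
```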

    • by ThreeGigs (239452)

      For those in the know, this ever shrinking manufacturing process tech: when will it stop? Where will it stop? 10nm? Sub-1nm?

      There is a hard physical limit based on the silicon and dopant distribution. On a macro scale, the silicon is very homogeneous. However, once you get a feature size down to the point where it encompasses only a few hundred atoms on a side, you begin to run into the real possibility that such small localised areas are over- or under-doped. Thus you have a new source of potential defects.

      • Exactly. For those of you who want a number, I've heard 10nm is where the wall will be, but I'm not sure of the exact reason for that. I believe ThreeGigs is right on the money though. Dopants in silicon are randomly distributed, and when you're only dealing with a few hundred silicon atoms, you're only talking about a handful of dopants, so the distribution starts to become effectively non-uniform.
    • by Jenming (37265)

      I believe around 8nm electron tunneling becomes a serious issue. At that point electrons will "tunnel" between transistors even if there were infinite resistance between the two. This happens at larger distances as well, but not too often.

    • It's not going to just stop! Pretty soon it's going to reverse! And then we'll design our chips in 3D! Blessed 130nm 3D!

      (this was part joke, part serious - most modern chips are very complicated, but flat...)

  • ultimate limit (Score:5, Interesting)

    by goombah99 (560566) on Monday April 05, 2010 @10:52AM (#31733866)

    Is there a proposed ultimate limit for lithography before one has to jump to molecular electronics? 25nm is well below what anyone thought practical a decade ago (since it's so many times smaller than easily produced optical wavelengths). Now it's closing in on the limit of easily produced x-rays.

    While the resolution of the smallest resolvable element is shrinking, is the utilization of area increasing proportionally? That is, are we densely filling the area with 25nm structures, or is that simply the finest linear element, with those elements well separated?

    A 1cm chip would have 1E15 resolvable points at 0.025 micron resolution. And then there is the vertical resolution to multiply that. I should think it would become prohibitively difficult to design something with so many possibilities.

    • by goombah99 (560566)

      oops my bad: 1.6E11 resolvable points on a 1cm chip. still a lot to design for.

    • by Yvan256 (722131)

      I think once it's too hard to squeeze more things into a 2D surface we might start seeing development into 3D space.

      And then it leads to the creation of Skynet, etc... very bad stuff.

    • Re: (Score:3, Informative)

      by RabidMoose (746680)
      Actually, it looks like the answer is going to be to step away from silicon and replace it with graphene []. They can't make it anywhere near as small as silicon (yet), but there are other advantages. The linked article is a pretty good primer on the subject.
      • Re: (Score:3, Interesting)

        by vlm (69642)

        Hmm. Well, Si unit cell spacing is about 0.5 nm and graphene C-C spacing is about 0.15 nm. The longest diagonal of a hexagon is twice one side, so the minimal graphene unit cell lattice would be about 0.30 nm.

        So, for all the trouble of scrapping an entire industry and starting over, we'd only go from 0.50 to 0.30 nm. Not sure if that's going to be worth it.

        Not that graphene isn't interesting or cool, just that its unit cell isn't much smaller than Si unit cell.

        • No, it isn't, but isn't part of the point of graphene that you can jack higher voltages through it, achieving higher speeds before you hit the wall there? If we can continue to make something smaller, even if only a tiny bit (heh), but can jack up clock speeds and such, then it's still a good improvement, though less useful for SSD-type use.

        • by ModelX (182441)

          So, for all the trouble of scrapping an entire industry and starting over, we'd only go from 0.50 to 0.30 nm. Not sure if thats going to be worth it. Not that graphene isn't interesting or cool, just that its unit cell isn't much smaller than Si unit cell.

          It's not about scaling, the key property of graphene is greatly improved electron mobility. It also has many other interesting properties.

    • by cyfer2000 (548592)
      Double patterning [].
      • by treeves (963993)

        If that was an answer, then what was the question?
        Assuming the question was something like "how will we get to sub-22nm technologies", I'll one-up you: Extreme Ultraviolet. (Wavelength: 13.5nm, vs. ArF with pitch-doubling, which would be equivalent to 193/2 = 96.5nm, so a factor of about seven better.) Lately the prospects for EUV have been getting pushed out to later years, but nothing new there.

    • by mounthood (993037)
      Smaller sizes in the X-Y plane make for less heat, so you can also get higher density in the Z direction.
    • That "molecule" of silicon is a single crystal that goes from one side of the wafer to the other - perhaps about 300 millimetres.
      Even in poorly funded labs for well over a decade people have been getting down to atomic scales where a single layer of another element can give a junction. The materials can do it but the problem is fabricating it.
  • Is this chip design somehow based on the NAND logic gate? How is it different from other chips? I couldn't tell from the article.
    • by vsage3 (718267)
      Short answer, yes. NAND is perhaps the simplest logic gate to make with CMOS technology (it requires 4 transistors), which is why it gets talked about so much: the space savings involved.
      • by SemperUbi (673908)
        Very satisfying answer. thanks.
        • Also, while some circuits grow when you make them exclusively with NAND (you can create any logic circuit out of NAND gates), flash memory doesn't. So it is very common to make NAND flash.
    • Re: (Score:3, Interesting)

      by AdamHaun (43173)

      No, it has little to do with the NAND digital logic gate -- the other person who responded to you is totally wrong. NAND flash is a circuit topology where the flash transistors (bits) are arranged in long series chains, like this: []

      which is similar to the pull-down side of a NAND gate. NAND flash is very high-density but is read in blocks (you turn on the whole chain and then check one bit at a time). The other type of flash is NOR flash, which uses a parallel arrangement that allows individual bits to be read at random.

  • If this means cheaper (and inevitably larger) SSDs, then I could see a benefit to 25nm NAND flash memory.
  • 4SF is pretty precise for an 'about'. Why not 'about $160M'?

    • Scientists and engineers love them some sig figs. Since precision would be nine sig figs, anything less is an about, to them. ;)

  • by Firethorn (177587) on Monday April 05, 2010 @11:41AM (#31734550) Homepage Journal

    Personally, while I find it interesting, I'd like to know just how much extra data storage this would enable.

    32nm to 25nm would, what, increase the theoretical max density of flash by 64%? I.e. instead of getting a 16GB chip you'd get a ~26GB one.

    At the same price once you have all the details worked out, of course.

    45nm to 25 nm by my figuring would allow 3.24 times as much storage in a given size of chip.
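Checking the figures above (a quick sketch; density scales with the square of the linear feature-size ratio):

```python
def density_ratio(old_nm, new_nm):
    # Bits per unit area scale with the square of the feature-size ratio.
    return (old_nm / new_nm) ** 2

print(round(density_ratio(32, 25), 2))    # 1.64 -- a ~64% density increase
print(round(16 * density_ratio(32, 25)))  # 26   -- a 16GB chip becomes ~26GB
print(round(density_ratio(45, 25), 2))    # 3.24 -- matches the 45nm figure
```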

  • So I don't think they've highlighted the move to extreme UV (EUV) enough. What new wavelength of light are they using? The semiconductor industry has been stuck at 193nm for a long time now. If the industry moves to a smaller wavelength it's a pretty big deal: a new wavelength means new lithography materials. It may not be interesting to those of you asking "what size hard drive does this mean?" but to those who know this stuff it's important.
  • A litho primer (Score:2, Informative)

    by quo_vadis (889902)
    For those unfamiliar with the field of semiconductor design, here's what the sizes mean. The Toshiba press release is about flash. In flash, the actual physical silicon consists of rectangular areas of silicon that have impurities added (a.k.a. doped regions or wells). On top of these doped regions are thinner parallel "wires" (narrower rectangles) made of polysilicon. The distance between the leading edge of one wire and the next is called the pitch, and the half pitch is half that distance. The reason this is important is that the half pitch is the figure usually quoted as the process node…
    • by Microlith (54737)

      The big problem is what happens under 16nm.

      IIRC, lithos down to 13nm are believed to be possible. NAND will start hitting terminal reliability problems below 20nm, as the floating gates will likely hold 100 electrons (or less!) and be far more susceptible to random drainage and bit errors, way beyond what is currently experienced.

      So we'll end up with more, higher-density, and fundamentally unstable nonvolatile memory. As I understand it, DRAM will be hitting this problem too, as the capacitors will become susceptible to the same kind of leakage.

    • by Firethorn (177587)

      The takeaway being, there's nothing to see here, it's progress as usual. The big problem is what happens under 16nm. That's the point at which current optical lithography is impossible, even using half or quarter wavelength, and EUV with immersion litho.

      So, around 2/3 the feature size of what this factory will produce? That would translate, in a perfect world, to around 2.4X of what the 25nm process would create.

      Call it roughly 8X the storage per area of current 45nm processes.

      Given that SSDs are STILL something like 100X the price per gigabyte of hard drives, will we ever see the end of the spinning platter?

      Matter of fact, I'll jinx myself here:

      I'm afraid that we're going to see a plateauing of storage capabilities within my lifetime. One guy I was talking with was convinced that…

    • The big problem is what happens under 16nm.

      Maybe 3D chip stacking [] will help prolong Moore's law for a while, instead of further miniaturization.

  • Micron and Intel previously announced 25nm flash. Toshiba is trailing badly. []
  • Details? (Score:3, Insightful)

    by AdamHaun (43173) on Monday April 05, 2010 @03:00PM (#31738542) Journal

    The article is frustratingly light on details. There's nothing about what type of flash transistor they're using (there are several variants on the basic stacked-gate NMOS design as well as more wild types). They don't say whether they're actually shrinking the bits (which you don't have to do) or just the support circuitry. All it says is that Toshiba is making NAND flash in a new process node, probably 22nm.

    My day job is working with embedded NOR flash. I'm not really a process or solid state physics guy, but I think I know enough to comment, unlike a lot of the people running their mouths. (Seriously, folks, if you don't know what you're talking about, *shut up*. Misinforming people with wild guesses is not helpful, no matter how much it strokes your ego.)

    First off, the flash transistor itself is not 22nm long. It's probably at least ten times longer, if not more (obviously Toshiba's not giving exact numbers). When you go to a new process node you don't necessarily shrink every feature by 50%. The limiting factor in flash size isn't lithography (manufacturing), it's leakage.

    Flash works by storing electrons on an isolated (floating) material sandwiched inside an NMOS transistor []. If extra electrons are present, the transistor is forced off (0). If they aren't, the transistor can turn on (1). The problem is that over time the electrons leak out of the floating gate, eventually causing bits to flip. If you shrink the circuit enough you hit a point where you can't keep electrons in the gate for a reasonable amount of time. At that point, we'll need a new memory technology -- maybe FRAM [], maybe something else. Whatever it is, I'm sure it's been researched already -- a lot of the major research papers for flash memory are 25+ years old.

    Also, I said this elsewhere, but NAND flash is called NAND because the flash transistors (bits) are in series, like the NMOS transistors in a NAND gate. It isn't made out of logic gates or anything like that. Flash memory is analog, like DRAM -- you need special analog circuitry to read it and output a digital signal.
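The series-chain read that AdamHaun describes can be sketched as a toy digital model (hypothetical and greatly simplified; real NAND sensing is analog, and this ignores read voltages entirely):

```python
def read_bit(chain, index):
    """Toy NAND-string read: every cell except the selected one is
    overdriven into conduction, so current flows through the series
    string only if the selected cell conducts.

    chain[i] is True if cell i holds extra floating-gate charge,
    which forces it off when selected (reads as 0)."""
    conducts = all(not charged if i == index else True
                   for i, charged in enumerate(chain))
    return 1 if conducts else 0

chain = [False, True, False, False]  # one programmed cell at position 1
print([read_bit(chain, i) for i in range(len(chain))])  # [1, 0, 1, 1]
```

This is also why NAND is read a chain (page) at a time in practice: every access involves biasing the whole string, so random single-bit reads have no advantage, unlike NOR.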
