IBM Hardware

IBM Creates First 2nm Chip (anandtech.com)

An anonymous reader shares a report: Every decade is the decade that tests the limits of Moore's Law, and this decade is no different. With the arrival of Extreme Ultraviolet (EUV) technology, the intricacies of multipatterning techniques developed on previous technology nodes can now be applied with the finer resolution that EUV provides. That, along with other more technical improvements, can lead to a decrease in transistor size, enabling the future of semiconductors. To that end, today IBM is announcing it has created the world's first 2 nanometer node chip. Just to clarify here: while the process node is being called '2 nanometer,' nothing about transistor dimensions resembles a traditional expectation of what 2nm might be. In the past, the dimension was an equivalent metric for 2D feature size on the chip, such as 90nm, 65nm, and 40nm. However, with the advent of 3D transistor design with FinFETs and others, the process node name is now an interpretation of an 'equivalent 2D transistor' design.

Some of the features on this chip are likely to be low single digits in actual nanometers, such as transistor fin leakage protection layers, but it's important to note the disconnect in how process nodes are currently named. Often the argument pivots to transistor density as a more accurate metric, and this is something that IBM is sharing with us. Today's announcement states that IBM's 2nm development will improve performance by 45% at the same power, or use 75% less energy at the same performance, compared to modern 7nm processors. IBM is keen to point out that it was the first research institution to demonstrate 7nm in 2015 and 5nm in 2017, the latter of which upgraded from FinFETs to nanosheet technologies that allow for greater customization of the voltage characteristics of individual transistors. IBM states that the technology can fit '50 billion transistors onto a chip the size of a fingernail.' We reached out to IBM to ask for clarification on what the size of a fingernail was, given that internally we were coming up with numbers from 50 square millimeters to 250 square millimeters. IBM's press relations stated that a fingernail in this context is 150 square millimeters. That puts IBM's transistor density at 333 million transistors per square millimeter (MTr/mm^2).
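The density figure quoted above is easy to sanity-check; a quick sketch in plain Python, using only the numbers from the summary (50 billion transistors, 150 mm^2):

```python
# Sanity check of the density figure quoted in the summary:
# 50 billion transistors on IBM's 150 mm^2 "fingernail".
transistors = 50e9
area_mm2 = 150

density_mtr_mm2 = transistors / area_mm2 / 1e6  # millions of transistors per mm^2
print(round(density_mtr_mm2))  # 333
```

This reproduces the 333 MTr/mm^2 figure stated in the summary.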

  • by Third Position ( 1725934 ) on Thursday May 06, 2021 @09:04AM (#61354554)

    ...IBM fired it!

  • by Admiral Krunch ( 6177530 ) on Thursday May 06, 2021 @09:04AM (#61354560)

    Just to clarify here, while the process node is being called '2 nanometer,' nothing about transistor dimensions resembles a traditional expectation of what 2nm might be.

    but it's important to note the disconnect in how process nodes are currently named.

    Wake me when they make 0nm chips.

    • 0nm? That’s for chumps. I want a -2nm process. How else are you gunna make it larger on the inside than the outside?
      • by tsa ( 15680 )

        Yeah! It will also be faster because the information arrives before it is sent. Much better.

        • Well technically it was already at the destination so it didn't have to go anywhere. That is how it got around that pesky speed of light thing.
          • Well technically it was already at the destination so it didn't have to go anywhere. That is how it got around that pesky speed of light thing.

            And the fact you knew this already and didn’t have to figure it out is proof that it will be invented soon, obviously.

            • It already was invented by Elon Musk when he got stuck in an elevator the other day. I know you might be skeptical but I swear I put absolutely no thought into that at all. I just know it.
          • You laugh, but that's how photons see the universe.
            To them, the universe never expanded, nor did time, and there is no Pauli exclusion principle since everything is at the same place. And hence, from their POV, they don't (need to) exist. :)

  • by nagora ( 177841 ) on Thursday May 06, 2021 @09:17AM (#61354586)

    I guess HR overlooked them or something.

  • by chuckugly ( 2030942 ) on Thursday May 06, 2021 @09:17AM (#61354588)

    I'm just mostly excited that we now have a newly defined standard for area - the standard fingernail.

    • I'm just mostly excited that we now have a newly defined standard for area - the standard fingernail.

      That would be the metric standard fingernail. Sounds like you want your 1/2-inch imperial standard fingernail.

  • Name process sizes according to density--it's what everyone needs to use to compare them anyway.

    Using some lame-ass estimation of an apples-to-oranges comparison for a marketing number that everyone knows is now meaningless is.... stupid.

    This process size would simply be 333.

    • Forget 3.1, 7 and 10. I want XP and Vista. Real personality.

      • Re: (Score:2, Informative)

        by Anonymous Coward

        Forget 3.1, 7 and 10. I want XP and Vista. Real personality.

        WTH does this have to do with the poster's comment?

        Calling a process size "333" (MTr/mm^2) vs "2nm", where the 2 is some spongy "equivalence" that has *zero* bearing on any actual meaningful measurement within the process itself, seems more than reasonable.

        We'd get:
        Intel 22nm: "16"
        TSMC 16nm: "28"
        SS 14nm: "33"
        Intel 14nm++: "37"
        Intel 14nm: "44"
        SS 10nm: "51"
        TSMC 10nm: "52"
        SS 8nm: "61"
        TSMC 7nm: "91"
        Intel 10nm: "100"
        IBM 2nm: "333"

        Which are all now directly comparable in at least one actually meaningful way.
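        The density-as-name scheme proposed in the list above can be sketched directly; the MTr/mm^2 figures below are the ones quoted in the comment, not independently verified:

```python
# Density figures (MTr/mm^2) as quoted in the comment above.
densities = {
    "Intel 22nm": 16, "TSMC 16nm": 28, "SS 14nm": 33,
    "Intel 14nm++": 37, "Intel 14nm": 44, "SS 10nm": 51,
    "TSMC 10nm": 52, "SS 8nm": 61, "TSMC 7nm": 91,
    "Intel 10nm": 100, "IBM 2nm": 333,
}

# Sorting by density shows why the marketing "nm" is misleading:
# e.g. Intel's "10nm" outranks TSMC's "7nm".
for name, d in sorted(densities.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {d}")
```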

  • ... redefine the metric.

    I really appreciate TFS clarifying this fake measurement - that helps explain some prior stories.

    In unrelated news ... the CDC announced yesterday that it will only report nCov-19 vaccine-breakthrough deaths that occur in hospitals from now on. The numbers on the charts are expected to rapidly improve.

    • And when the metrics look bad, use imperials.

    • redefine the metric.

      Okay, but in fairness all chip makers are using this redefined metric. Ever since the switch to 3D transistors (and by the by, not all applications use 3D; regular 2D is still used for things that don't need super high transistor counts, which makes all this even murkier), the 2D definition hasn't made sense. That said, Intel/AMD/nVidia/etc all kept using the 2D "equivalent" because explaining such nuance to the consumer was iffy. It was just easier to say "lower is better and this is 28

  • by burtosis ( 1124179 ) on Thursday May 06, 2021 @09:31AM (#61354622)
    Engineer slowly staggers backwards with a look of horror:

    You’re gonna need a smaller atom.

    • Si has a relatively large atom, but atom size is not the whole story - maximum practical field strength is probably more important. One advantage of silicon is that it is cheap. There are, however, other materials that can be used, GaAs for example.
    • Coming soon...
      metallic hydrogen computing

      non-baryonic computing

      • Oh you dream too small.

        A decade ago I saw a documentary about the possibility of encoding information in the wave function of an atom's electron shell.

        But we could also go full neutron star "degenerate matter".

        Or how about free electron clouds in a complex three-dimensional field of bosons that is influenced so it only interacts with the electrons where we want it and when we want it?

        • Hey, if you're not bypassing the limits of polynomial time computation through time dilation, you’re obviously not doing it right.
          • by youn ( 1516637 )

            No worries, as long as there is access to a flux capacitor and 1.21 jigowatt of energy... it'll all be fine :p

        • Alastair Reynolds wrote some good sci-fi about civilizations that convert neutron stars into computational devices, if I recall. Fun stuff.
  • About twice the density of TSMCs 5nm process? Sounds cool.

    • TSMC's existing process.

      Not TSMC's process where they also only just recently managed to do it.

      • by Kokuyo ( 549451 )

        Granted, comparing IBM's non-commercially-available tech to TSMC's commercially available one is a bit unfair.

        Then again I more or less just wanted a point of reference.

  • Intel has left the chat
    • by Kokuyo ( 549451 )

      If I were Intel I'd think hard about just licensing the tech from IBM.

      • Not a bad idea. Remember: AMD was saved several times in the past, by getting tech from IBM.
        The first Athlon was such a case.
        If Intel could snatch it up first, they might have a good advantage. Provided their fab game stops being a trainwreck.

      • That would definitely be the most pragmatic solution. It would help Intel get out of the rut of constantly being so far behind and it would allow a US company to run a fab that was producing cutting-edge chips that could compete with the fabs in China. I don't think Intel's old CEO would have ever been able to swallow his pride and license tech from IBM, but Intel's new CEO is an engineer, so hopefully he remembers how to make pragmatic and bold decisions. After all, as consumers we all benefit from healthy competition.
  • by tsa ( 15680 ) on Thursday May 06, 2021 @09:58AM (#61354700) Homepage

    I don't get it. From the article:

    Just to clarify here, while the process node is being called ‘2 nanometer’, nothing about transistor dimensions resembles a traditional expectation of what 2nm might be. In the past, the dimension used to be an equivalent metric for 2D feature size on the chip, such as 90nm, 65nm, and 40nm. However with the advent of 3D transistor design with FinFETs and others, the process node name is now an interpretation of an ‘equivalent 2D transistor’ design.

    This sounds like gibberish to me. What is 'an equivalent 2D transistor design?'

    And then:

    IBM’s 3-stack GAA uses a cell height of 75 nm, a cell width of 40 nm, and the individual nanosheets are 5nm in height, separated from each other by 5 nm. The gate poly pitch is 44nm, and the gate length is 12 nm. IBM says that its design is the first to use bottom dielectric isolation channels, which enables the 12 nm gate length, and that its inner spacers are a second generation dry process design that help enable nanosheet development. This is complemented by the first use of EUV patterning on the FEOL parts of the process, enabling EUV at all stages of the design for critical layers.

    Where oh where are the 2 nm?

    • by bws111 ( 1216812 ) on Thursday May 06, 2021 @10:14AM (#61354740)

      If the transistors were still planar (2D) like they used to be, they would have to be 2nm to get the same density as these 3D transistors.
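      A back-of-envelope version of the parent's point, assuming the 333 MTr/mm^2 figure from the summary: the silicon area budget per transistor implies a cell edge nowhere near 2nm.

```python
import math

# At 333 million transistors per mm^2 (figure from the summary),
# the silicon area available to each transistor is:
density_per_mm2 = 333e6
area_nm2 = 1e12 / density_per_mm2  # 1 mm^2 = 1e12 nm^2
edge_nm = math.sqrt(area_nm2)      # edge of an equivalent square cell

print(round(area_nm2))    # ~3003 nm^2 per transistor
print(round(edge_nm, 1))  # ~54.8 nm cell edge
```

The "2nm" label is not derived from this or from any other physical dimension on the die, which is the thread's point.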

      • And if my grandma had wheels, she'd be a motorcycle!

        What does any of this have to do with giving us useful information?

      • That's a really good concise explanation! Shame I don't have mod points. The whole sub-9nm thing still smacks of marketing but I guess it's better than "faster". I remember having a conversation with a chip designer about how when you reach 9nm, gate leakage is a really nasty problem that's incredibly hard to solve. Changing the baseline to 3D from 2D is not really 9nm, in the same vein, bolting a 200HP car on top of a 200HP car doesn't make a 400HP car.

      • Not correct, unfortunately. That would be a nice, compelling explanation, it's just not true. In fact the 2nm number really is completely meaningless. They just picked some part of the device and said it's 2nm, so we will call it 2nm. But they could have done that any time...

        The old standard way of measuring density was by M1 pitch (distance between the finest level of metal interconnect). The new standard way is you just make up a number.

        The reason M1 pitch was initially used is that it defines how many L
        • Interesting, I've done chip design in 250nm / 0.25um, which had the node named after the gate oxide thickness (C50, after C100 and C75, 5V and 3.5V technologies still on 8 inch wafers). Then I've done chip design in CMOS 18 (180nm), CMOS 12 (120nm), CMOS 90 (guess), CMOS 65, CMOS 45/40. Since I did analog in those technologies (targeting mostly TSMC but I've done a tape out to global foundries once too), I can tell that in all those nodes the minimum gate length was identical to the nm naming. Truth be told
          • The problem is that, below around 40nm, feature sets require clever architecture of the transistors.
            I.e., even a 2nm node still has features around that size.
            So does a 5nm node. And a 7nm node.

            So how do we name them?
            Some argue density is better, and I tend to agree with them, but the problem with that, is it means certain companies would need to... give up... marketing advantages they currently have, for example, TSMC's 7nm node being equivalent in density to Intel's 10nm node.

            So ya, below 30-40nm, it me
            • Yeah, I also think density is better; it used to be a factor of 2, sqrt(2) in X and Y on the design plane. 180nm to 130nm (called CMOS 12 for reasons of superstition, can you believe that!) to 90nm to 65nm to 45nm, always a halving in required area for digital cores (an ARM 926 was about 1mm2 in CMOS 65, plus glue logic)... And then there were the shrink nodes in-between, CMOS 55, CMOS 40. I don't know if they still do those...
        • It's almost that bad, but not quite.

          TSMC's 5nm node means one thing, and one thing only: That it has higher density than its 7/6nm node.

          IBM's 2nm node is the same deal. It means it's capable of more density than their last.
          And of course, there's zero comparison between the nodes "sizes" of different foundry companies.

          So I'm not disagreeing with you, just throwing out a little caveat.
        • A better metric would be something like the number of SRAM cells per square mm. Specifying transistors would be difficult because there are different designs. But SRAM is a small functional unit that should relate reasonably well to actual devices. It captures the effect of improvements to lithography and transistor design and should be a good overall gauge of the manufacturing process.
    • Stop trying to evaluate the node "size" with any actual size.
      What's more important is the density.
      Node names long ago departed from anything resembling the actual size of anything important on the node.

      You'll find that your question can be asked of any [wikichip.org] random process node that's even remotely recent.
  • "Just to clarify here, while the process node is being called '2 nanometer,' nothing about transistor dimensions resembles a traditional expectation of what 2nm might be..."

    Wha? *gasp* A company being shady. Say it ain't so!

    "Often the argument pivots to transistor density as a more accurate metric, and this is something that IBM is sharing with us."

    Oh. So instead of being all shady they are actually giving the real deal. Didn't see that coming.

    "We reached out to IBM to ask for clarification on what the size
    • by Luckyo ( 1726890 )

      It's worth remembering that this is nothing new. TSMC, UMC, Intel, etc all have gone down this path already. I don't know who was the first to adopt "our process names have nothing to do with actual transistor size in our process", but when someone started doing it, everyone else had to follow it in their PR.

      It still wasn't good enough either. Remember how intel got shit for "being on 14nm when TSMC is already on 10nm" a few years ago?

      Reality check, TSMC's 10nm is closer to intel's 14nm than intel's 10nm in

      • Yeah, and let's be honest, feature size really isn't entirely legitimate in 3D structures. In fact the complexity of interaction becomes such that adding additional dimensions to the logic can result in greater benefit than raw brute-forced bottom-up efficiency. Look at how much more efficient neuronal structures are in the brains of corvids vs mammals, for instance.

        No one metric is going to be perfect, but density is at least a more reasonable and OBJECTIVE 3D-compatible metric, which was probably more legitimate
  • Have TSMC/Samsung/Intel been doing press releases for every research/process tweak that they have been doing? I don't get that impression; TSMC is talking about volume production of 3 & 4 nm in the next year or so. That means they probably produced the first chips on the process a couple years ago and have spent the time since perfecting it so it can be produced in volume.

    So really this sounds more like "we are way behind, our competitors are about to do volume production on 3nm and we just

    • by bws111 ( 1216812 )

      IBM does not make chips. They do research and development of chip manufacturing, then license that technology to TSMC/Samsung/Intel, etc.

  • What's the actual size of the various features?

    Like what's the thinnest line of current? What's the size of a standard transistor?

    I don't want to calculate it from the transistor density right now because this is a phone and I'm on a busy train.

  • If SRAM can be made small enough that the speed-of-light limitation on signal path length is not a factor, could a processor with only SRAM be made? (i.e. no need for cache). I guess transistor switching speed is already fast enough if the processor is running at multiple GHz and the registers are single-cycle.

  • There are three metrics of interest to me, all based on wafer scale integration:

    1. How many of the current-fastest CPU cores could you place on a wafer, given the same L2 cache and an adequate L3 cache?

    2. Ditto for the Itanium 3 core

    3. If the wafer was pure high-speed RAM and support chips, how much RAM could a single wafer take?

    Realistically, nobody could afford a wafer's worth of RAM at L1 speeds, but because of that, it gives you a handy upper limit. Nobody is going to build a conventional computer with

  • Only Intel is stuck at 7nm and higher.
  • IBM states that the technology can fit '50 billion transistors onto a chip the size of a fingernail.' We reached out to IBM to ask for clarification on what the size of a fingernail was, given that internally we were coming up with numbers from 50 square millimeters to 250 square millimeters. IBM's press relations stated that a fingernail in this context is 150 square millimeters.
    So, this means there is on average one transistor per 3000 square nanometers on this chip, which is a rectangle of about 50×60 nanometers.
  • by DMJC ( 682799 )
    But can they ramp up production in time to save the economy from the chip shortage meltdown?
  • TSMC's 3nm is 292 million transistors per square millimeter, and plans to enter mass production for Apple chips in 2021.
    IBM's 2nm is 333 million transistors per square millimeter, and plans to enter mass production in 2024 or 2025.

    I daresay that TSMC will be on a better process node than 3nm by 2024 or 2025.
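    For what it's worth, the density gap between the two figures quoted above is fairly modest (numbers as given in the comment, not independently verified):

```python
# Density figures (MTr/mm^2) as quoted in the comment above.
tsmc_3nm = 292
ibm_2nm = 333

advantage_pct = (ibm_2nm / tsmc_3nm - 1) * 100
print(f"IBM 2nm is ~{advantage_pct:.0f}% denser than TSMC 3nm")  # ~14%
```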
