Hardware

Are Nanosheet Transistors the Next (and Maybe Last) Step in Moore's Law? (ieee.org) 82

An anonymous reader quotes IEEE Spectrum: Making smaller, better transistors for microprocessors is getting more and more difficult, not to mention fantastically expensive. Only Intel, Samsung, and Taiwan Semiconductor Manufacturing Co. (TSMC) are equipped to operate at this frontier of miniaturization. They are all manufacturing integrated circuits at the equivalent of what is called the 7-nanometer node... Right now, 7 nm is the cutting edge, but Samsung and TSMC announced in April that they were beginning the move to the next node, 5 nm. Samsung had some additional news: It has decided that the kind of transistor the industry had been using for nearly a decade has run its course. For the following node, 3 nm, which should begin limited manufacture around 2020, it is working on a completely new design.

That transistor design goes by a variety of names -- gate-all-around, multibridge channel, nanobeam -- but in research circles we've been calling it the nanosheet. The name isn't very important. What is important is that this design isn't just the next transistor for logic chips; it might be the last. There will surely be variations on the theme, but from here on, it's probably all about nanosheets....

All in all, stacking nanosheets appears to be the best way possible to construct future transistors. Chipmakers are already confident enough in the technology to put it on their road maps for the very near future. And with the integration of high-mobility semiconductor materials, nanosheet transistors could well carry us as far into the future as anyone can now foresee.

This discussion has been archived. No new comments can be posted.

Are Nanosheet Transistors the Next (and Maybe Last) Step in Moore's Law?

  • by K. S. Kyosuke ( 729550 ) on Saturday August 03, 2019 @10:42AM (#59034170)
    Quoth Stanislaw Lem: [wikipedia.org]

    This was a computer of the "last" generation—last, because no other could have greater calculating power. Limits were imposed by such properties of matter as Planck's constant and the speed of light. Greater calculating ability could be achieved only by the so-called imaginary computers, designed by theorists engaged in pure mathematics and not dependent on the real world. The constructors' dilemma arose from the necessity of satisfying mutually exclusive conditions to pack the most neurons into the smallest volume. The travel time of the signals could not be longer than the reaction time of the components; otherwise, the time taken by the signals would limit the speed of calculation. The newest relays responded in one-hundred-billionth of a second. They were the size of atoms, so that an actual computer had a diameter of barely three centimeters. A computer any larger would be slower.

    • by Anonymous Coward

      And next to it stood the "last last" generation. Its plaque read: "One year later, the Cheela Neutronium Computing Corporation presented their new neutronium-based computer, made entirely of degenerate matter and artificially constrained quark-gluon plasma. It was the size of a grain of rice." -- Ford Prefect thought that it was funny, while eating his bowl of fried rice and expanded Cheela. He wondered what would have happened if they hadn't been bankrupted by photon sphere computers. Of which he owned three.

    • This was actually a problem for the Pentium 4. Its pipeline was super-long (IIRC 20 stages) because it had multiple "Drive" stages for signal propagation. When a branch was mispredicted you wound up having to throw away tons of stages. So for unpredictable branches, the P4 was a turd.

      On the other hand, if you could wave away other limits like heat dissipation, cost, etc. then you could solve problems like this by having enough functional units to execute more of the possible predicted paths. Like, all of th

      • by Agripa ( 139780 )

        This was actually a problem for the Pentium 4. Its pipeline was super-long (IIRC 20 stages) because it had multiple "Drive" stages for signal propagation. When a branch was mispredicted you wound up having to throw away tons of stages. So for unpredictable branches, the P4 was a turd.

        Branch predictors were pretty good even for the P4. The replay system [wikipedia.org] was a greater drag on performance because the design was so fragile.
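
        To make the misprediction cost concrete, here's a minimal toy benchmark (my own sketch, not from TFA or anyone's post): the same loop runs over random data and then over sorted data, and the only difference is whether the branch is predictable. Compile at -O1; at higher optimization levels the compiler may emit a branchless conditional move and hide the effect.

        ```c
        /* Toy branch-misprediction demo: summing values that pass an
         * unpredictable test is much slower on a deeply pipelined CPU
         * when the data is unsorted, because roughly half the branches
         * are mispredicted and the pipeline must be flushed each time. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define N (1 << 20)
        #define REPS 100

        static int cmp_int(const void *a, const void *b) {
            int x = *(const int *)a, y = *(const int *)b;
            return (x > y) - (x < y);
        }

        static long long sum_above(const int *v, int threshold) {
            long long sum = 0;
            for (int r = 0; r < REPS; r++)
                for (int i = 0; i < N; i++)
                    if (v[i] > threshold)   /* predictable only if v is sorted */
                        sum += v[i];
            return sum;
        }

        int main(void) {
            int *v = malloc(N * sizeof *v);
            for (int i = 0; i < N; i++)
                v[i] = rand() % 256;

            clock_t t0 = clock();
            long long s1 = sum_above(v, 128);  /* random order: ~50% mispredicts */
            clock_t t1 = clock();

            qsort(v, N, sizeof *v, cmp_int);   /* sorted: branch becomes predictable */
            long long s2 = sum_above(v, 128);
            clock_t t2 = clock();

            printf("unsorted: %.2fs  sorted: %.2fs  (sums %lld / %lld)\n",
                   (double)(t1 - t0) / CLOCKS_PER_SEC,
                   (double)(t2 - t1) / CLOCKS_PER_SEC, s1, s2);
            free(v);
            return 0;
        }
        ```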

    • From the novel: Fiasco.

  • “Just say the report of the death of my law has been grossly exaggerated.”

    • Brannigan's Law.

    • by lgw ( 121541 ) on Saturday August 03, 2019 @12:01PM (#59034344) Journal

      TFA isn't predicting the death of Moore's law, though. The assertion is that the approach used is likely to endure, as it's the best approach for any smaller scale.

      The fundamental problem in making a transistor smaller is that the shorter the path from source to drain, the harder the gate has to work to clamp the current flow and ensure no leakage when the transistor is supposed to be "off".

      Old-school MOSFETs had the gate as a layer crossing on top of the source-drain path. That's impractical for the latest and greatest, as the gate can't get the job done. The current SOTA is the "FinFET", which makes the source-drain path a vertical "fin" so that the gate can wrap it on three sides (there's an illustration in TFA), making the gate more effective.

      To go further, you have to break the source-drain path into multiple very thin "sheets", sort of like wires stacked vertically, so that the gate can completely surround each one. This maximizes the "leverage" the gate has, and there's unlikely to be a better approach, just refinements on this theme.
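
      For the curious, one textbook way to put a number on that gate "leverage" (my addition; standard MOSFET theory, not from TFA) is the subthreshold swing, the gate voltage needed to change the off-state current by a factor of 10. Wrapping the gate around more sides of the channel strengthens its capacitive control relative to the parasitic depletion term, pushing the swing toward its room-temperature floor:

      ```latex
      % Subthreshold swing: smaller means sharper on/off switching and less leakage.
      % Gate-all-around designs drive C_dep/C_ox toward zero, approaching the limit.
      \[
        \mathrm{SS} = \ln(10)\,\frac{kT}{q}\left(1 + \frac{C_{\mathrm{dep}}}{C_{\mathrm{ox}}}\right)
        \;\ge\; 60\ \mathrm{mV/decade} \quad (T = 300\,\mathrm{K})
      \]
      ```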

  • Intel at 7 nm? (Score:4, Informative)

    by fintux ( 798480 ) on Saturday August 03, 2019 @11:09AM (#59034230)
    According to the summary, Intel is manufacturing at 7 nm. Well, it is not: it barely has its 10 nm manufacturing working, with the first more widely available (mobile) chips just released. Granted, the summary says "at the equivalent of what is called the 7-nanometer", but still, as far as I know, Intel's 10 nm node is not equivalent to Samsung's or TSMC's. And TSMC's is the only such process used for desktop chips; the others are used only for laptop (Intel) or mobile (Samsung) parts.
    • by Kohath ( 38547 )

      Those numbers aren't very meaningful. It's only called 7nm.

      The distinction between mobile and desktop CPUs is even less meaningful.

      • Those numbers aren't very meaningful. It's only called 7nm.

        ``The name of the song is called 'Haddocks' Eyes'.''
        ``Oh, that's the name of the song, is it?'' Alice said, trying to feel interested.
        ``No, you don't understand,'' the Knight said, looking a little vexed. ``That is what the name is called. The name really is 'The Aged Aged Man'.''
        ``Then I ought to have said 'That's what the song is called?' '' Alice corrected herself.
        ``No, you oughtn't: that's quite another thing! The song is called `Ways And Means': but that's only what it's called, you know!''
        ``W

    • Re:Intel at 7 nm? (Score:5, Interesting)

      by mangastudent ( 718064 ) on Saturday August 03, 2019 @12:32PM (#59034436)

      Echoing and expanding on the other replies: Intel's 10nm was more aggressive than TSMC's initial 7nm, and both use 193nm UV lithography. And Intel screwed up at least one thing, leaving its 10nm process node useless to date, except for gaining experience with some technologies it will reuse at 7nm, and making some chips for sale while they're at it (the SemiAccurate interpretation).

      TSMC then added ~13nm EUV lithography to improve its 7nm node, and is presumably using a lot more of it for its 5nm node, which, as the snippet of TFA alludes to, entered risk production in the spring. For all we know, Intel's equivalent 7nm is in about the same state, but I don't know of anyone who's betting on that; the most we know is that they fired the guy who was in charge of this and restructured that part of the company.

    • "as far as I know, the Intel 10 nm node is not equivalent to that of Samsung or TSMC."

      Except they are. All three have around a 56nm poly pitch, an M2 pitch in the 40nm range, and an SRAM cell around 0.03 um^2. See https://semiwiki.com/semicondu... [semiwiki.com]
      That comparison is a year old, so ignore the GlobalFoundries figures, since they are out of the running.

      The really crazy part is how hard Intel is driving poly pitch in the next process while the others are barely scaling at all. (but then Intel holds CPP for the node after

      • by fintux ( 798480 )
        It's also about the number of cores, and thus die sizes (AMD has up to 8 cores on a chiplet, Intel only up to 4), yields (which get worse with increased die size), prices (where they need to compete against AMD), and frequency scaling (Intel's quad-core Ice Lake CPUs consume > 50 watts at their boost frequency). It may well be that Intel can produce competitive 10 nm chips only for the laptop market, especially since AMD only has 12nm-based products available there.
    • Re:Intel at 7 nm? (Score:4, Informative)

      by Rick Schumann ( 4662797 ) on Saturday August 03, 2019 @01:37PM (#59034672) Journal
      Was working there not too long (not long enough) ago; 'manufacturing' doesn't necessarily mean 'available on the street to the general public'. They're 'manufacturing' test chips at 7nm. Test chips that aren't CPUs or PCHs, but that just contain 'subassemblies', shall we say, for various working groups within the company to run extensive tests on, to see how viable the silicon is at 7nm. They literally parcel out the 'real estate' on the die to this or that working group, which puts whatever it's specializing in onto that space; each block gets its own power rails and sets of solder bumps to connect it to the outside world, and gets linked into the JTAG chain so they can talk to it.
  • by Jason1729 ( 561790 ) on Saturday August 03, 2019 @11:22AM (#59034254)
    I've been hearing "Is [new tech] the next and last step in Moore's law?" for 25+ years. And it's probably just my lack of age that prevents me from saying 50+.

    The only constant I've noticed in technology is that tech writers are, and always have been, very small-minded and ignorant people who seem to understand technology less than the average technophobe they're writing for.
  • by mykepredko ( 40154 ) on Saturday August 03, 2019 @11:31AM (#59034274) Homepage

    A few years ago, I tried to research how far back predictions of the death of Moore's law went, and I think the first was made around 1970-1971, when the first DRAM chips were becoming available on the market. When I was involved in memory manufacturing at IBM (1986-1995), there was constant worry that silicon would prevent the continual reduction in transistor size; I saw many slide decks forecasting the end of Moore's law. Yet here we are, 25 years from that time, and silicon features are still getting smaller and smaller.

    I suspect that we'll be having this debate for a few years yet.

    • by Anonymous Coward on Saturday August 03, 2019 @12:01PM (#59034340)


      I suspect that we'll be having this debate for a few years yet.

      A few? Yes. 20? No. A single silicon atom is 0.2 nm. The last step they said was 3 nm in 2020. So that means:
      1.5 nm
      0.8 nm
      0.6 nm
      0.4 nm
      0.3 nm
      EOL

      That's 5 generations at roughly 1.5 years each, so about 8 years until we get down to the size of a silicon atom. And you can't make a transistor that small anyway, so it'll end before that.
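
      A quick sanity check of that countdown (my sketch; it assumes the historical ~0.7x linear shrink per node, which halves transistor area, since the steps above aren't exact halvings):

      ```c
      /* Shrink the "3 nm" node by ~0.7x per generation until features
       * reach the ~0.2 nm diameter of a silicon atom. Rough assumptions. */
      #include <stdio.h>

      int main(void) {
          double node = 3.0;       /* nm, the node Samsung targets around 2020 */
          const double atom = 0.2; /* nm, approximate silicon atomic diameter  */
          int gen = 0;

          while (node * 0.7 >= atom) {
              node *= 0.7;
              gen++;
              printf("generation %d: %.2f nm\n", gen, node);
          }
          printf("%d generations until atomic scale\n", gen);
          return 0;
      }
      ```

      That comes out to about 7 generations; at 1.5-2 years each, the atomic wall arrives in roughly 10-15 years either way.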

      How do you think it can shrink beyond that? 3D? Hey, maybe you can get a bit out of that... but you're going to have some severe heat problems, which have already slowed clock speed growth severely. Now we just get more cores, which is of limited value. So much of the value of Moore's law is already lost.

      We've already seen a severe growth limitation in spinning-disk capacity. Nobody coined a law for it, but it grew exponentially for decades, until about 7-10 years ago, when the doubling time went up to about 6 years instead of 2. Just four years ago, in 2015, this guy thought we'd have 64-terabyte HDs by 2020:

      https://v1.escapistmagazine.com/articles/view/scienceandtech/columns/forscience/12908-How-Big-Will-Hard-Drives-Get-Hard-Disk-Sizes-over-Time.2

      Nope, not even close. 16 TB is the biggest you can get. Exponential growth ends, and when it does, it's like falling off a cliff. Everyone is so used to it that they just think it'll go on forever, until it doesn't.
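
      Putting rough numbers on that (a sketch; the 4 TB-drives-in-2012 baseline is my assumption, not from the post):

      ```c
      /* Project HDD capacity from a 4 TB drive in 2012 under a 2-year
       * doubling time vs. the ~6-year doubling we actually got. */
      #include <stdio.h>
      #include <math.h>

      static double project_tb(double base_tb, double base_year,
                               double target_year, double doubling_years) {
          return base_tb * pow(2.0, (target_year - base_year) / doubling_years);
      }

      int main(void) {
          printf("2020 at 2-year doubling: %.0f TB\n",
                 project_tb(4.0, 2012.0, 2020.0, 2.0)); /* ~64 TB, the old forecast */
          printf("2020 at 6-year doubling: %.0f TB\n",
                 project_tb(4.0, 2012.0, 2020.0, 6.0)); /* ~10 TB, near reality     */
          return 0;
      }
      ```

      The 2-year doubling reproduces exactly the 64 TB the 2015 column expected; the ~6-year doubling lands in the neighborhood of the ~16 TB that actually shipped.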

      • Exponential growth ends, and when it does, it's like falling off a cliff. Everyone is so used to it that they just think it'll go on forever, until it doesn't.

        You need to frame that and send it to every stock analyst in the world.

      • by Agripa ( 139780 )

        Moore's Law is about economics and is not limited to Dennard scaling. To lower the cost per transistor, you can:

        1. Make smaller transistors - Dennard scaling.
        2. Expose larger dies.
        3. Assemble more dies into a module.

        I hesitate to include vertical scaling because parts have been limited by power density for more than 10 years now.
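
        A toy illustration of that framing (all figures invented for the example; the exp(-D*A) Poisson yield model is standard, the rest is made up):

        ```c
        /* Toy cost-per-transistor model: wafer cost spread over the good
         * transistors it yields, with die yield from the Poisson defect
         * model Y = exp(-D * A). Numbers are illustrative, not real. */
        #include <stdio.h>
        #include <math.h>

        static double cost_per_transistor(double wafer_cost_usd,
                                          double die_area_mm2,
                                          double mtr_per_mm2,   /* MTr/mm^2 */
                                          double defects_per_mm2)
        {
            const double wafer_area_mm2 = 3.14159 * 150.0 * 150.0; /* 300 mm wafer */
            double dies = wafer_area_mm2 / die_area_mm2;   /* ignores edge loss */
            double yield = exp(-defects_per_mm2 * die_area_mm2);
            double good_tr = dies * yield * die_area_mm2 * mtr_per_mm2 * 1e6;
            return wafer_cost_usd / good_tr;
        }

        int main(void) {
            /* Option 1: shrink transistors (double the density, same die). */
            printf("100 mm^2 at 200 MTr/mm^2: $%.2e per transistor\n",
                   cost_per_transistor(10000.0, 100.0, 200.0, 0.001));
            /* Option 2: expose a larger die at the old density; yield falls
             * exponentially with area, eating into the gain. */
            printf("400 mm^2 at 100 MTr/mm^2: $%.2e per transistor\n",
                   cost_per_transistor(10000.0, 400.0, 100.0, 0.001));
            return 0;
        }
        ```

        Option 3, multi-die modules, exists largely to dodge that exponential yield term: several small, high-yield dies in one package instead of one big, low-yield die.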

  • Okay guys, Moore's Law does not have a limit, mm'kay?

    It just says that the number of transistors in ICs doubles about every 2 years. ICs are not limited to a particular size. And what happens if we develop the technology to build them fully 3D instead of on a flat surface? If we develop the tech to reliably print dense, cube-shaped ICs, then who knows how much more we could stuff into these things.

    The only limit to Moore's law is our imaginations and technology levels. Maybe this is the end of that road... maybe not!

    • The speed of light is a hard limit, so a bigger chip (say, 15 cm across) can only improve the performance of _parallel_ algorithms by having thousands of cores; the speed of a single algorithm can't be improved much (only by rearranging transistors at the current size). This is a serious bottleneck for many tasks (see the back-of-the-envelope numbers after this subthread). Going 3D will be extremely difficult and limited, because of problems with cooling. However, several layers of L1 cache (like 1 GB) would again greatly improve single-threaded performance once more (and reall
      • The speed of light is a hard limit, so a bigger chip (say, 15 cm across) can only improve the performance of _parallel_ algorithms by having thousands of cores; the speed of a single algorithm can't be improved much (only by rearranging transistors at the current size).

        So the obvious solution is simply to pack them in far smaller than silicon atoms by creating a miniature black hole. Why, by my back-of-the-napkin calculations, you can fit 10^40 silicon atoms inside a silicon atom; the speed of light isn't any issue at all, and the time dilation isn't too bad, since it would run at a mere 650 watts, which is plenty of bandwidth. So we just encode the information on the event horizon surface and use it for quantum calculations too. See? An easy 100 more years, no problem.
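
      Joking aside, the light-speed point above works out like this (a sketch; the 0.5c on-chip signal speed is a rough rule-of-thumb assumption):

      ```c
      /* How far can an on-chip signal travel in one clock cycle,
       * assuming it propagates at roughly half the speed of light? */
      #include <stdio.h>

      int main(void) {
          const double c_mm_per_s = 3.0e11;  /* 3e8 m/s expressed in mm/s */
          const double signal_frac = 0.5;    /* assumed on-chip propagation */
          const double freqs_ghz[] = { 1.0, 3.0, 5.0 };

          for (int i = 0; i < 3; i++) {
              double cycle_s = 1.0 / (freqs_ghz[i] * 1e9);
              double reach_mm = c_mm_per_s * signal_frac * cycle_s;
              printf("%.0f GHz: cycle = %.0f ps, signal reach ~%.0f mm\n",
                     freqs_ghz[i], cycle_s * 1e12, reach_mm);
          }
          return 0;
      }
      ```

      At 5 GHz a signal covers only ~30 mm per cycle, so a 150 mm die can't behave as one synchronous machine; past that, all you can add is more cores.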

    • Re: (Score:3, Interesting)

      IIRC, Gordon Moore talked about the # of components (in particular, transistors) vs. cost at the sweet spot in manufacturing.

      So if 3D stacking turns out to be more expensive for many-layer processes vs. ICs with just a few layers, that in itself would bring an end to Moore's Law. As for making ICs larger: that requires more of the base materials (like ultra-pure silicon), which doesn't help the # components vs. cost equation. As long as you're not using less material per transistor (as in: smaller t

      • I'm actually amazed that more work has not been done on 3D with memory at this point. It is perfect for setting up sheets and then testing various ideas on how to cool them. It would be nice to have a 64 bit chip. At the very least, I wonder if it would be possible to use this as a large cache for the CPU?
    • The only limit to Moore's law is our imaginations and technology levels. Maybe this is the end of that road... maybe not!

      So, about those "technology levels": the technology stopped changing in the way Moore's Law describes a long time ago. As in, the past. As in, the people who say it was still a thing even 5 years ago had to change what they think the "law" was in order to keep using the words.

      Some idiots even go so far as to say that Moore's Law is just an abstract general concept, like the American Dream. Except it was always actually a ratio.

  • by NikeHerc ( 694644 ) on Saturday August 03, 2019 @11:48AM (#59034308)
    All in all, stacking nanosheets appears to be the best way possible to construct future transistors.

    In early 1948 someone likely said, "All in all, germanium point-contact appears to be the best way possible to construct future transistors."
  • Moore's Law is a marketing strategy. Technical innovations do not fall on a regular schedule. Companies invest whatever resources are necessary to keep upgrades predictable in an attempt to maximize return on investment.
    • by HiThere ( 15173 )

      Moore's law was an observation which *became* a marketing strategy, and also a useful predictive tool. It's been on the decline for the last decade, and it was never all that regular, but that doesn't mean it wasn't useful. I'm just not sure it still is (useful).

  • A turd sandwich is still a turd, even if you cover it in Cheeto dust.

  • Assuming we don't extinguish ourselves as an entire species, eventually they'll probably have to abandon such crude technologies as laser lithography in favor of assembling integrated circuits one atom at a time. Nanometers? Hah, stone knives and bearskins! Picometers or smaller (femtometers?), that's where it's at!
  • Should this way of building ICs go into mass production, abandoning silicon will be the next step. The transistor will already be completely decoupled from the substrate, which will be nothing more than a mechanical basis to build on. Switching the semiconductor to something else from there will be much easier than switching the substrate. But I think we'll see it in analog applications before we see it in the next node of logic ICs. GaN FETs are already a thing in power and RF applications; they are simply not the most c
  • All in all, stacking nanosheets appears to be the best way possible to construct future transistors. Chipmakers are already confident enough in the technology to put it on their roadmaps for the very near future.

    Old Mother Hubbard's PR agent in full flower today. Step aside, I Can't Get No Satisfaction; come on down, Little Engine That Could.

  • Marketing idea called a law.
