IBM Hardware

IBM: Chip Making is Hitting Its Limits, But Our Techniques Could Solve That (zdnet.com) 50

IBM has devised materials and processes that could help improve the efficiency of chip production at the 7nm node and beyond. From a report: The company's researchers are working on challenges in the emerging field of 'area-selective deposition', a technology that could help overcome limitations on lithographic techniques to create patterns on silicon in 7nm processes. Semi Engineering has a neat account of lithographic patterning and why at 7nm there's growing interest in area-selective deposition. Techniques such as 'multiple patterning' helped ensure integrated circuits kept scaling, but as chips have shrunk from 28nm to 7nm processes, chipmakers have needed to process more layers with ever-smaller features that need more precise placement on patterns. Those features need to align between layers. When they don't, it leads to 'edge placement error' (EPE), a challenge that Intel lithography expert Yan Borodovsky believed lithography couldn't solve and which would ultimately impede Moore's Law.
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • by rfengineer ( 927289 ) on Wednesday November 21, 2018 @09:26AM (#57679440)
    Figure out how to solve the quantum tunneling gate leakage power problem and you've got a winner. Photolithography has never been identified as a show stopper to continued gate shrinkage. Gate leakage at these dimensions is.
    • by Megol ( 3135005 )

      Really? My impression is that problems like those of shot noise inherent in the optical lithographic process are significant for continued scaling.
      Leakage due to quantum tunneling can't be solved anyway except by increasing the gate dielectric thickness.
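For context (my back-of-the-envelope, not from TFA): in the WKB approximation, direct-tunneling current density depends exponentially on the dielectric thickness, which is why leakage explodes as gate dielectrics thin out:

```latex
J \;\propto\; \exp\!\left(-\frac{2\, t_{\mathrm{ox}} \sqrt{2\, m^{*}\, \phi_{b}}}{\hbar}\right)
```

where t_ox is the dielectric thickness, m* the carrier effective mass in the barrier, and phi_b the barrier height. High-k dielectrics attack exactly this term: they let you keep a physically thicker layer at the same gate capacitance.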

    • Comment removed based on user account deletion
    • Uh... translation: "I have no clue what they're talking about so I will spout about this thing I heard about years ago."

      Lithography has got plenty of show stoppers. Maybe you haven't heard about the transition from transmissive to reflective optics, to name one gigantic issue. Ever heard the word "pellicle?"

  • by BringsApples ( 3418089 ) on Wednesday November 21, 2018 @10:07AM (#57679690)
    ...that, probably, only a few people here will understand. I'll simplify it:
    They want to work at a very small scale, using very big words. /s
    • by AmiMoJo ( 196126 )

      "Chip making has hit its limits"

      Oh no! What am I going to do now?!

      "Our techniques could solve that"

      Oh thank goodness! Do you take Bitcoin?

  • by Viol8 ( 599362 ) on Wednesday November 21, 2018 @10:16AM (#57679742) Homepage

    Instead of using lots of scripting languages and VMs with frameworks that suck up huge volumes of memory and are so poorly written that they require large amounts of CPU time to do very little, perhaps there should be a return to an emphasis on more efficient compiled languages that use only the resources they need at any given time.

    Yeah I know, get off my lawn etc. But the fact that script kiddy coders don't like being told that their toy language is a bloated CPU-hogging mess doesn't change the reality of the situation.

    • You aren't wrong. However, a lot of projects have unreasonably tight deployment timelines that leave no room for that optimization work. There are steps that help make optimization more automatic, like tree shaking, VMs that run closer to the metal, lighter containers, document parsers that automatically build an index... But yeah, we need better resource management, and we need the people who make the calls and set the timelines to understand that focusing on optimization pays off in the long run.

      • by TheDarkMaster ( 1292526 ) on Wednesday November 21, 2018 @11:09AM (#57680038)
        Optimization is not even the problem in most cases. I have seen idiots who, being unable to understand (or worse, not interested in learning) how to properly use the operating system's existing facilities, decide to write their own frameworks/VMs and end up with a mess that uses three times the resources of the native operating-system functions and has three times as many defects.
    • Instead of using lots of scripting languages and VMs with frameworks that suck up huge volumes of memory and are so poorly written that they require large amounts of CPU time to do very little, perhaps there should be a return to an emphasis on more efficient compiled languages that use only the resources they need at any given time.

      With VMs:

      Start $APP Now

      Without VMs:

      We're Sorry!
      $APP is not yet available for $PLATFORM. We apologize for the inconvenience. [System Requirements]
      To purchase a device that runs $APP: [Shop PCs] [Shop Mobile]
      To be notified when preorders for $APP on $PLATFORM become available: [Join $APP Ports Mailing List]

      In an environment with "more efficient compiled languages" replacing VMs and big frameworks such as React or Qt, how would you recommend to bridge platform gaps?

      • by Viol8 ( 599362 )

        Most *programs* only run on one architecture anyway, but perhaps you've not heard of compile-time options in code that allow you to use the same source code on multiple platforms.

        Do you actually work with computers or did you get lost here on the way to Facebook?

        • Most *programs* only run on one architecture anyway, but perhaps you've not heard of compile-time options in code that allow you to use the same source code on multiple platforms.

          If you have designed your application with a model-view-controller paradigm, you can reuse the model layer across multiple platforms. However, you cannot so easily reuse view code across Win32, Cocoa, X11, Android, and Chrome OS DOM platforms without a multi-platform framework such as Qt, and I assume Qt is one of the "frameworks that suck up huge volumes of memory" that you mention. And even if you cross-compile your application to a platform that you don't have, it's a bit harder to cross-test the responsiveness.

          • by Viol8 ( 599362 )

            Thanks for the heads up on your 1990s design pattern knowledge, but I was talking mainly about backend code on proper computers, not silly little smartphone apps. As for Qt, it's one of the better ones out there.

            • by tepples ( 727027 )

              I was talking mainly about backend code on proper computers

              And I was talking about frontend code on proper computers. You'll have trouble building an application using X Athena Widgets for a non-*n?x target, one using Win32 for a non-Windows target, or one using Cocoa for a non-macOS target. If Qt papers over these differences and isn't considered bloat, that's solved. But the difficulty of testing an application for a platform you don't own remains, particularly if you don't yet have enough capital to set up an in-house testing lab.

              • I have coded in Qt for years. It's really pleasant to use, and not bloated at all. Yes, on Windows the DLLs take quite a big amount of space, but only on disk, not so much when running the program.

                A framework that sucks up huge amounts of memory and runs on top of a scripting language is Electron.

                • Yes, on Windows the DLLs take quite a big amount of space, but only on disk

                  Is this true of macOS as well? This becomes doubly important as Macs switch to SSDs. And for developers who have a lot of users stuck behind satellite Internet or cellular tethering at $5 to $10 per GB, how well do Qt DLLs compress for distribution?

                  Besides, testing cross-compiled macOS binaries can still prove expensive for a micro-ISV.

                  • I have never owned a Mac, nor do I plan to do so, so I can't answer your question.

                    Regarding compression, I just did a test with the first Qt5 program I could find installed on this Windows machine. Uncompressed size 57 MB, compressed size 18 MB. If you build Qt yourself and statically link only what you really need, I think the size would be smaller.

    • Re: (Score:3, Insightful)

      by Jeremi ( 14640 )

      False choice detected.

      The beauty of more-efficient hardware is that it will improve the performance of all software -- bloated and non-bloated alike.

      In the meantime, there's no need to wait on IBM (or to demand that IBM wait on us) to start writing non-bloated software, we can start doing that today. With or without IBM's magic beans, anyone who does so gains a competitive advantage over those who do not.

    • This. I still remember when a browser (for example purposes) needed just a few MB to work (granted, it was mostly rendering static pages and a few simple JavaScript snippets, so honestly it did not need much more). Now a browser doing the same job starts at over 256 MB of RAM, which is more than most desktops of that era dreamed of having.
  • I hope someone or many someones out there are working on reversible computing. It sounds like the only long-term way forward. https://spectrum.ieee.org/comp... [ieee.org]
  • by Anonymous Coward

    Why won't anyone belieeeeeve us?
