Hardware

Nanoimprint Lithography

An anonymous submitter writes "According to BBC News, researchers at Princeton have developed a die-stamp method for chip fabs. The Princeton site claims they've got to 10nm already. The professor in charge has told BBC News Online that they're '20 years ahead of Moore's Law.' Dubious claims aside, it looks like a handy way to bring down prices even if it doesn't improve ultimate top speed."

  • Moore's Law (Score:3, Informative)

    by emf ( 68407 ) on Wednesday June 19, 2002 @02:40PM (#3731412)
    Moore's law really has nothing to do with speed even though people think it does.

    "More than 25 years ago, when Intel was developing the first microprocessor, company cofounder Gordon Moore predicted that the number of transistors on a microprocessor would double approximately every 18 months. To date, Moore's law has proven remarkably accurate. "

    From: http://www.cnet.com/Resources/Info/Glossary/Terms/mooreslaw.html
  • 10 nm != .1 micron (Score:4, Informative)

    by GreenPhreak ( 60944 ) on Wednesday June 19, 2002 @02:40PM (#3731414)
    10 nm == .01 microns last time I looked.

    1 nm = 1e-9 m
    1 micron = 1e-6 m
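For what it's worth, the conversion is easy to sanity-check in a couple of lines of Python (a throwaway sketch, not from the article):

```python
# Sanity check: 10 nm expressed in microns.
NM_PER_M = 1e9   # nanometres per metre
UM_PER_M = 1e6   # microns (micrometres) per metre

def nm_to_um(nm: float) -> float:
    """Convert nanometres to microns."""
    return nm * UM_PER_M / NM_PER_M

print(nm_to_um(10))  # 0.01 -- one order of magnitude below 0.1
```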

  • it is 0.01 micron

    but what's ONE ORDER OF MAGNITUDE between friends?
    • by Anonymous Coward

      That's what she said :)

    • nm and microns are worthless units anyway. If you read any articles about microscopic technology, you would know that "N times thinner than a human hair" is the industry standard measurement of distance.

      For reference, here is the complete list of basic units in the news media measurement system:

      • Small distance: human hair
      • Large distance: Distance from New York to LA
      • Small mass: flea
      • Large mass: battleship
      • Information: library of congress
      • Speed: rifle bullet
      • Time: eyeblink
      • Temperature: surface of the sun
      • Power: locomotive
      • Volume: swimming pool
      • Area: football field
      As you can see, this system is far more intuitive than systems based on arbitrary units, such as the metric system, because you deal with things in terms of real world objects you can relate to.
  • Important Issue (Score:2, Insightful)

    by imta11 ( 129979 )
    How do they make the ultra small quartz die to burn the patterns? Grow it perhaps?
    • by Anonymous Coward
      don't overlook the important field of "magic" or "the devil's work"
    • The same way they make phase shift masks for optical lithography. They use an ebeam writer to expose a pattern onto a photoresist layer on a quartz substrate. Then they develop the resist, etch away the quartz and then strip the resist. Ebeam writers have very high resolution and printing patterns of this size is not a problem.
  • is the cost of installation and any retraining that needs to be paid for to use the new system. I have a feeling - unless it offers a HUGE advantage over standard methods - Intel et al. would be very reluctant to adopt a new process.
    • by Anonymous Coward
      Intel is very willing to adopt new techniques. The other alternatives they have at this feature size are e-beam (very slow serial writing process) or deep UV lithography (they've ordered a tool with a price tag of ~$1 billion )

      • $1 billion?!? Ouch. But would this new process really be that much cheaper in the long run? Let's say it is a lot cheaper in the short run and can produce chips at .10 micron. But is the technology scalable? Will it be able to do .08, .06, or whatever? Maybe the more expensive machines are actually cheaper in the long run because a few tweaks make them scale well. I don't know - I'm just thinking out loud. It seems that every few months we hear of a new 'revolutionary' process, never to see it again.
        • $1 billion is a lot, but the companies are getting used to it. Keep in mind that many of the new facilities Intel and AMD have built (or are in the process of planning/building) will cost $3 billion or more, and only a small fraction of that is structural.
          • Yes, this is true. However a new fab will typically have tens of exposure tools, with capital costs ranging from ~$5 million to ~$30 million or more for the tools used on critical layers. If one needed to have several extreme UV tools, the costs suddenly start to get a little out of control. Especially if you can do the same thing with a block of rubber...It's no coincidence that the lithography literature and conferences have been filled with stuff on nanoimprint lately.
        • Since the scale is actually .01 micron, I don't think that any process based on the current tech used to make processors will catch up any time soon.
      • The margins in the chip business are SO tight now I am not sure Intel will be willing to invest heavily in developing new techniques. IBM's dumping fabs, prices are low, etc. ... Tough times right now. They haven't really made their money from .13um yet, so till they do, they probably won't be looking to adopt squat.
    • is the cost of installation and any retraining that needs to be paid for to use the new system. I have a feeling - unless it offers a HUGE advantage over standard methods - Intel et al. would be very reluctant to adopt a new process.

      Bah, obvious and trite, possibly uninformed. Smaller is better. With a new process, chip-makers can hide more margin in the increased costs of retooling and retraining, which they need to do every time they step down the channel widths as it is.

      At the end of the day, it's becoming increasingly clear that speed and transistor count will soon be a moot point. Instead, we will begin to evaluate chips based on their native intelligence, and the elegance of their code.

      Okay, this chip has sixteen trillion gates, but are they designed in such a way as to be elegant, and do they create strong tools for developers?

      Competition will soon be fun again..

      -GiH
    • 10 nanometer process == HUGE advantage over standard methods

      The deciding factor will be if it really works.

  • by Anonymous Coward
    g to the oatse
    c to the izzex
    fo shizzle my nizzle i have no idea what nanoimprint lithography means
  • by Anonymous Coward on Wednesday June 19, 2002 @02:43PM (#3731443)
    I've experimented with this technique a bit, and surprisingly it is very capable of replicating super tiny features. Surprising because the stamps are most commonly made from a flexible polymer material. They are very good at replicating tiny features from a master fabricated using electron beam lithography. One thing that we weren't able to solve was doing alignments between layers, since the stamps tend to be thick and hard to see through. But this is just an engineering issue that we didn't have the time or inclination to solve.

    I was just blown away that we were able to fabricate high fidelity microstructures using what basically amount to a rubber stamp!
  • The two links gush with claims but provide little evidence of utility. The only demonstration shown is making holes in a substrate, or leaving dots of material; it does not show making any traces. I'd wait to be impressed until I see something beyond a row of dots.
  • by joshv ( 13017 ) on Wednesday June 19, 2002 @02:43PM (#3731446)
    This sounds great, but how do they make the mold, what kind of wear and tear is the mold subject to? My guess is that one of these 'nano-imprint' molds is not going to last all that long.

    I am assuming they are relying on something like electron beam lithography to create the imprint mold; certainly this would be a cost/time improvement over direct e-beam litho, but it all depends on the longevity of the molds.

    -josh
    • According to the article, they use a die made of quartz. Any materials science whizzes out there know how well 10nm-wide quartz features hold up under compression and heat?

      I'd assume it doesn't take much heat to melt a 10nm-wide strip of silicon. Then again, it likely wouldn't take much to damage quartz at that size, either. And how well would the heat dissipate?
      • Doesn't relative crystal strength increase as size decreases? IANAMSW, but I seem to remember reading this somewhere very recently. If this holds true, then these crystals may be more durable than first appears.
        • IANAMSW either, but I do know that macroscopic crystals are weaker than their theoretical maximum strengths by a few orders of magnitude because of internal irregularities. When the structure is a very fine crystal of this scale, there really isn't room for such defects. If such a defect were to exist at a given location, there would be nothing there at all, which would be easily detected after the first batch of chips failed testing. I imagine this screen would be extraordinarily durable.
    • The same way they make phase shift masks for optical lithography. They use an ebeam writer to expose a pattern onto a photoresist layer on a quartz substrate. Then they develop the resist, etch away the quartz and then strip the resist.

      The lifetime of the template is an open question but research on a similar process that I've seen showed that the template lifetime was sufficient to make this process economically feasible.
    • Every day, diffraction gratings are created with about 1nm accuracy using macroscopic tools. My father designed one [iscpubs.com] which does just that. It is not impossible to imagine, therefore, that arbitrary features could similarly be scribed.

      The machines which create the diffraction gratings are called ruling engines, and, not unlike the methods used to stamp metal currency, the masters are used to make duplicates which then are used to make the work tools. Each stage can be replicated N times, so while there is a limited lifetime of the entire process, N^3 can be quite large.
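The replication arithmetic above is simple enough to sketch (the per-stage copy count here is a made-up example, not a figure from the comment):

```python
def work_tools_from_one_master(replicas_per_stage: int, stages: int = 3) -> int:
    """Master -> duplicates -> work tools: N copies at each of `stages` stages,
    so one master ultimately yields N**stages work tools."""
    return replicas_per_stage ** stages

# With, say, 100 usable copies per stage, one master yields a million tools.
print(work_tools_from_one_master(100))  # 1000000
```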
  • I saw this several years ago in "Block Print Lithography", an article in Science. They were able to do, at the time, 80nm resolution features in metal.

    It has serious problems, however, in producing the blocks to use in the printing and aligning them properly in use.
  • Yay (Score:2, Interesting)

    by sheepab ( 461960 )
    Maybe this means AMD will cut their prices on Athlon chips even more! With ram being so cheap, and this making it able to create more chips at less cost, maybe I really can have that beowulf cluster I've always wanted! Now what to do with it....
    • Possible uses for a beowulf cluster:

      Use it to make a CGI of Natalie Portman covered in steaming hot grits.

      Create abstract art based on the goatse.cx pictures.

      Create a spell checker that can handle CmdrTaco comments.

      Corner the market on beowulf cluster comments.

      Did I miss anything important?

  • by pokeyburro ( 472024 ) on Wednesday June 19, 2002 @02:45PM (#3731468) Homepage
    Take note of that third section: no nasty chemicals, they claim. If their claim holds, a company using this tech could make a lot of political capital from it.

    Natural questions arise: just how dirty is the current process? Will the details of the method really prove to be as clean as they say?
    • "just how dirty is the current process?"

      Have you ever been to a chip fabrication lab? Those places are nasty; cyanide emergency kits on the walls, phosgene and arsine gases. Bad stuff.
      • Don't forget HF, HCl's nasty brother!
      • And there is the nasty trichlorosilane (SiHCl3) process in the Si-refining plants, which involves HCl.
      • Been to? Hell, I used to work at one. Lovely fab safety classes -- "If you ignore the gas leak alarm, please try to die within 6 feet of the door. That's how long the pole hook is to drag your body out."

        Not to mention the horror stories about HF (watch your bones melt!), phosphine and other gases which can kill you before you smell them (but the MSDS lists them as smelling like lemon... go figure), liquid scrubbers like piranha that meant no contacts (if the system backblasts, the piranha would melt the contacts to your eyes), etc.

        That said, this process will only eliminate Photolithography... which is the process that uses the fewest of these amazingly nasty chemicals from what I recall. But I worked mostly with PVD/CVD and etchers, so I could be wrong about Photo's chemical usage.
      • So the term "Clean Room" is something of a cruel joke, eh?
    • According to the Princeton site, this process only eliminates the photoresist developing process. Etching and photoresist stripping are still required, and therefore you still have to use a lot of unfriendly chemicals. So only some of the chemicals are eliminated. Still, some reduction is better than no reduction.
    • Take note of that third section: no nasty chemicals, they claim. If their claim holds, a company using this tech could make a lot of political capital from it.

      The Princeton site makes no claim of this being a chemical-free process; all they use the imprinter for is patterning the etching resist, as an alternative to using a light-sensitive photoresist and exposing it to light and developing it to get the patterning. Under this scheme, virtually all of the nasty chemicals would still be present (you'd have a bit more flexibility in choosing resists, but that's just one set of chemicals out of a whole zoo that are needed).

      The BBC report claims that patterns are directly stamped into the deposited material. This could be legit, or it could just be a misinterpretation of the resist stamping. Even if it is the patterning mechanism (i.e. if no resist or developing is needed), you still have nasty chemicals used when depositing layers of various substances on the substrate and when etching (which you'll still need to do - pure stamping will leave a thin layer of the undesired material in the stamped region even if most of it flows away).

      In summary, I'd take claims of environmental friendliness with a large grain of salt.
  • Mmmm more transistors means more processing...
    Lower prices means more hardware
    More hardware makes geeks happy.
    Happy geeks means more slashdot posts.
    Good.

    See it all works out in the end.
  • ...that we end up going back to old technology? I mean, this is basically an old printing press, only microscopic and high-tech instead of old. One would think that something such as a laser would be the first thing to accomplish this goal. But hey, who's going to argue with cheap parts?
    • Lasers spread too much... unless you go to REALLY short wave-length light.

      Now there was talk recently about an electron-beam laser that might give this some competition. And when that gets a bit too spread out, you could substitute a meson for the electron...

      I think that if they can make this work, it could be a lot better than electron beams, and certainly better than lasers. Those really short wavelength photons tend to penetrate too deeply, and knock around the crystal structure ... unless that's what you want? I was reading an article the other day that seemed to imply that you could knock some holes in a crystal, slather a material on the surface, and then treat the crystal so that it absorbed the dopants where it had been dislocated. Perhaps you could get an even better effect if you plated the stuff on and then hit it with a high-energy photon where you wanted it to penetrate??
      (I've never heard of this process before, so I hereby claim invention!! It's MINE! I may even patent it. [Of course, I haven't yet....])

      • There is no such thing as an "electron-beam laser." Lasers by definition emit photons, not electrons.

        Anyway, any clean electron-beam process requires VACUUM, which increases cost and decreases throughput by at least one order of magnitude, often more.

        • I understand your reasoning, but an "electron laser" is what they called it in the article that I read.

          You are right about the vacuum. And my proposal of mesons would be even worse. But the question is, what are the alternatives. That's probably why this press & fit idea is going to get a good trial, even though it sounds off the wall.
        • There is no such thing as an "electron-beam laser." Lasers by definition emit photons, not electrons.
          Anyway, any clean electron-beam process requires VACUUM, which increases cost and decreases throughput by at least one order of magnitude, often more.


          Uh, you are aware they call it the electromagnetic spectrum? Electrons are really high-energy photons.
    • ...that we end up going back to old technology? I mean, this is basically an old printing press
      Think carefully - we are living in the age of steam! Most of our buildings are built using Roman technology (concrete). Both refinements of old technologies and new technologies have a place.

      In the case of semiconductor technology, the die size is currently the limiting factor. I saw someone make a single atomic layer thick diode junction with Indium-Gallium-Arsenide on Silicon over a decade ago, using fairly basic equipment (a home made chemical vapour deposition setup) at a small university. Thickness isn't a problem - it's area. When we eventually hit nanotech, we'll see a lot of small versions of existing technology.

  • "10nm already, or 0.10 micron" ?

    10nm = 0.01 micron
  • Company doing this (Score:2, Informative)

    by Anonymous Coward
    A company is already employing this technique commercially. See www.nanoopto.com. They're using it to fab photonic bandgap and other microoptical structures. I think this company came directly from the Princeton work (although the technique was invented at Harvard, I think.)
  • I think this is a company I heard about because they can't make good transistors. The transistors are very small but they vary in quality wildly. Some are 20 times slower than others. Synchronous designs suffer badly because of it. It becomes rather difficult to work out your critical path.

    They were interested in DI (delay-insensitive) methods, because even if you have a very slow transistor the design will still work, and if you don't go through that transistor it will run at full speed.
  • chicken and egg (Score:3, Interesting)

    by g4dget ( 579145 ) on Wednesday June 19, 2002 @02:57PM (#3731576)
    You still have to make the mold itself, and since it is in actual mechanical contact with the substrate, it won't last anywhere near as long as an optical mask. So, you certainly have to make masters fairly regularly, and those processes may be disproportionately costly and time consuming (electron beam lithography, nanoprobes, etc.).

    Altogether, it looks like a nice process, but it's not immediately clear that it will help.

    • Re:chicken and egg (Score:2, Informative)

      by zemaxuser ( 586694 )
      But you can make several stamps from a single master (the process for making the stamps doesn't usually damage the master) and from each of those stamps make many replicas of the original surface relief. This is because you're usually stamping into a liquid medium (usually wet photoresist or reflown photoresist) and there isn't a huge amount of wear on the stamp. So, you really don't have to go back to the e-beam writer too often to make a new master. This is of course said with the caveat that this is a pretty new technique and there isn't a lot of data on long term lifetime of the stamps.
    • Template lifetime is still being studied, but the short-term studies show surprising template resiliency. Extrapolated results indicate that template manufacturing costs will be competitive with optical mask costs. It's still an open question, but it's not nearly as bad as you would expect. The template is coated with a thin film of some sort before each imprint, and this film prevents material from sticking to the template. I don't know about the mechanical stress for this particular technique, but a similar technique called Step and Flash Imprint Lithography only requires very low pressure on the template.

  • This kinda news is like people using oil instead of hydrogen or other fuels to power cars.

    This isn't good, because sure, we can keep using this process, but we should use something new and better.
    How is keeping Intel and others from innovating by improving existing technology better than forcing them to innovate and create new technology?
  • The professor in charge has told BBC News Online that they're '20 years ahead of Moore's Law.'

    I'll believe that when it is in production and I am buying the damn things.....
  • That's 2^20 times denser! For those of you who aren't so fast, that's just over a million. Impressive!

    Now why is that "dubious"?
    • "2^20 times denser! For those of you who aren't so fast, that's just over a million."

      You're not so fast yourself, bub. Try and get your math right before you get all condescending.

      Moore's Law doubles density every 18 months, not every year. So the correct calculation is 2^(20*12/18), which is roughly 10,321, or about 2 orders of magnitude lower than what you stated.
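The arithmetic in this exchange is easy to check with a throwaway Python sketch (the 18-month doubling period is the usual reading of Moore's Law, assumed here):

```python
def moores_law_factor(years: float, doubling_months: float = 18.0) -> float:
    """Density growth factor after `years` of periodic doubling."""
    return 2.0 ** (years * 12.0 / doubling_months)

print(round(moores_law_factor(20)))        # ~10321 with 18-month doubling
print(round(moores_law_factor(20, 12.0)))  # 2^20 = 1048576 with yearly doubling
```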
  • Why don't we just increase the die size?

    (I know, more defects, etc. but it is another direction we can take)
    • See this [slashdot.org] post... Moore's Law deals with transistor density. Anyway, increasing the die size doesn't work when we're talking about operating at hundreds of thousands of gigahertz, because the propagation delay from one end of the die to the other becomes longer than a single (or even multiple) cycles. -Berj
    • I am in no way an expert in this, but at high clock speeds you have problems with large die sizes.

      With longer traces on the die, capacitance between them increases. This means that the speed that you can switch these traces on and off and get a decent signal out the other end decreases greatly.

      Ian (naive computer engineering undergrad)
      • Correct. More importantly, the acceptable length of traces is a function of the dielectric (propagation time) and the rise time of the signal. Very fast rise times mean the systems must be very small to avoid transmission-line effects...you cannot terminate lots of transmission lines inside an IC, the dissipation gets unmanageable.
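The trace-length scaling discussed in this thread can be sketched with a lumped-RC toy model (the per-mm resistance and capacitance constants below are invented for illustration, not process data):

```python
def wire_delay_s(length_mm: float, ohms_per_mm: float = 10.0,
                 farads_per_mm: float = 0.2e-12) -> float:
    """Elmore-style delay of an unbuffered wire, ~0.5 * R * C.
    Both R and C grow linearly with length, so delay grows as length squared."""
    r = ohms_per_mm * length_mm    # total resistance, ohms
    c = farads_per_mm * length_mm  # total capacitance, farads
    return 0.5 * r * c             # seconds

# Doubling the trace length roughly quadruples the delay:
print(wire_delay_s(20.0) / wire_delay_s(10.0))
```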
    • The cost of a silicon die increases roughly exponentially with its area, because defects ruin a larger fraction of big dies. For example, four 1 cm^2 dies cost MUCH less than one 4 cm^2 die, even though the total area is the same. By decreasing die size, we lower the cost of the actual silicon, which is a significant expense in chip-making.
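A toy cost model makes the point; the Poisson yield formula and all the numbers here are illustrative assumptions, not figures from the article:

```python
import math

def poisson_yield(defects_per_cm2: float, die_area_cm2: float) -> float:
    """Fraction of defect-free dies under a simple Poisson defect model."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

def cost_per_good_die(wafer_cost: float, wafer_area_cm2: float,
                      die_area_cm2: float, defects_per_cm2: float) -> float:
    """Toy model: ignores edge loss, scribe lanes, and repair."""
    dies = wafer_area_cm2 / die_area_cm2
    good = dies * poisson_yield(defects_per_cm2, die_area_cm2)
    return wafer_cost / good

small = cost_per_good_die(1000.0, 700.0, 1.0, 0.5)  # many 1 cm^2 dies
large = cost_per_good_die(1000.0, 700.0, 4.0, 0.5)  # fewer 4 cm^2 dies
print(large / small)  # roughly 18x more per good die, same total silicon
```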
  • These guys should quiet down. If they get together with the other scientists in this field, they could take the next 20 years off. Now, if only I could find a way to do the same thing...

    --Josh
  • Moore's law is about macroeconomics, not technology.
    • A quartz die is pressed against the silicon, which is melted briefly by a laser.

    woah, that sounds easy !

    but can someone explain to me how this will make a difference ?

  • Smaller and faster CPUs just mean that I'll have to debug more code for one to run for any reasonable amount of time without crashing.

    Bah - Gimme a 6502...

  • I've just heard a talk on a very similar technique that does not require heating and melting the substrate. This process squishes a liquid polymer between the template and the substrate so that the polymer fills the gaps in the template. Then they cure the polymer with UV light, lift off the template and then more or less follow the standard etching process.

    The first thing you would wonder about is problems with air gaps and bubbles but they say that this has not been a problem.

    They also say that template lifetime does not appear to be an issue but they need to do a longer term study on this.

    One of the bigger problems they were facing was pattern alignment, because the liquid polymer acts as a lubricant and the template tends to slide around as it's being pressed down. They say they have addressed this problem with more rigid and precise mechanics.

    It's very interesting technology, and it's expected to begin showing up in corporate research fabs - rather than academic research - by next year.
  • Crystal structure (Score:2, Insightful)

    by fava ( 513118 )
    The article states that the silicon wafer is melted briefly by a laser. Considering that silicon wafers are actually single crystals, won't the melting and re-solidification of the silicon alter the properties of the wafer?

    So instead of having a single crystal we could end up with many small crystals aligned along the features that we are creating. I am not sure how much the creation of semiconductors is dependent on having a single crystal, but if it is dependent then this new technique may not be that useful after all.
    • This (recrystallized Si) is actually standard practice. Yes, the Si in the laser-affected area becomes polycrystalline, but this Si (known as "polysilicon") becomes an electrode, used for its conductivity, not the doped Si which actually takes part in the transistor action.
  • Isn't this one of the main problems chip-manufacturers have to deal with now that individual transistors are becoming so small?

    I'd imagine that the electromigration with 10 nm transistors is pretty bad, not to mention the interference between individual traces.

    I could be horribly wrong, though. Anyone wants to hit me with a clue-stick? :)
  • A little math (Score:3, Insightful)

    by quintessent ( 197518 ) <my usr name on toofgiB [tod] moc> on Wednesday June 19, 2002 @04:46PM (#3732351) Journal
    The observation that the computing power which can be incorporated in a given sized piece of silicon doubles roughly every 18 months was put forward by the head of Intel, Gordon Moore, in 1965. - BBC News.

    We're probably 20 years ahead of the curve, - Professor Chou.

    Seems a little exaggerated. Let's look at the numbers.

    The article says they're 100x as dense (in area) as current technology.

    if 2^7=128, then technology needs to double fewer than 7 times.

    7 * 1.5 years = 10.5 years, far less than the claimed 20 years.

    And this technology is still vaporware, so even 10.5 years is exaggerated.

    Sounds cool, though. It would be nice if this really worked.
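The parent's numbers work out; here is the same arithmetic as a quick Python sketch (assuming, as the parent does, the 18-month doubling period and the 100x density figure from the article):

```python
import math

DOUBLING_PERIOD_YEARS = 1.5  # the usual 18-month reading of Moore's Law

def years_of_headway(density_factor: float) -> float:
    """Years of Moore's-law progress implied by a given density improvement."""
    return math.log2(density_factor) * DOUBLING_PERIOD_YEARS

print(years_of_headway(100))  # ~10 years, not the claimed 20
```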
  • Particles? (Score:2, Insightful)

    by mactom ( 515670 )
    We do standard old-fashioned i-line lithography (0.35 µm), old-fashioned proximity lithography (1.5 µm), decent laser direct-write lithography (0.8 µm) and top-of-the-line e-beam direct-write lithography (100 nm). The smaller the feature size gets, the more problems we have with particles on the substrate causing defects. Proximity lithography suffers from defects caused by particles that form from the direct contact between the mask and the substrate. Thinking of an embossing method for resist patterning gives me a bad feeling about generated particles adhering to the stamp-mask, especially at 10 nm feature size. Very questionable. Also, the wall angle of the patterned resist seems far off the desired 90 degrees. The etch behaviour of such shallow slopes is difficult to control and leads to variance in etched feature size. This is an interesting lab experiment, but I cannot imagine it for high-volume chip production at all.
