IBM Hardware

IBM Heralds 3-D Chip Breakthrough 99

David Kesmodel from WSJ writes to let us know about an IBM breakthrough: a practical three-dimensional semiconductor chip that can be stacked on top of another electronic device in a vertical configuration. Chip makers have worked for years to develop ways to connect one type of chip to another vertically to reduce size and power use. The IBM technique of "through-silicon vias" offers a thousand-fold reduction in connector length and a hundred-fold increase in connector density. The new chips may appear in cellphones and other communication devices as soon as next year. PhysOrg has more details.
This discussion has been archived. No new comments can be posted.

  • More information (Score:4, Informative)

    by karvind ( 833059 ) <karvind@NoSPAM.gmail.com> on Thursday April 12, 2007 @10:19AM (#18702541) Journal
    As the article says, they have been working on this for a long time; they published some details before:

    http://www.research.ibm.com/journal/rd/504/topol.html [ibm.com]

    http://domino.watson.ibm.com/comm/pr.nsf/pages/news.20021111_3d_ic.html [ibm.com]

    • Re: (Score:2, Funny)

      by Anonymous Coward
      This will fail miserably. It will be hell to repair these things when the middle component breaks.
  • Very nice, but... (Score:3, Interesting)

    by Rosco P. Coltrane ( 209368 ) on Thursday April 12, 2007 @10:21AM (#18702575)
    Chip manufacturers had better define some kind of common standard for the Vcc, Vss, GND, bus, etc. pins on similar devices (ICs, RAM chips and such); otherwise it's back to square one with a circuit board that has to pick up the lines and reroute them to other components, and the advantage of this technology would be zilch.
    • It's likely that we'll see custom integration before standards like that settle out. When cell phone vendors crank out tens of millions of a given model, the economy of scale can be achieved reasonably. It won't be much different than the custom IC work that already happens in some devices like this (the iPhone is a well-known example).
    • Re:Very nice, but... (Score:4, Interesting)

      by stevesliva ( 648202 ) on Thursday April 12, 2007 @10:29AM (#18702733) Journal

      otherwise it's back to square one with a circuit board that has to pick up the lines and reroute them to other components, and the advantage of this technology would be zilch.
      There is no implied change in the chip packaging that is the interface to a circuit board. There are already plenty of packages that have two chip dice side-by-side. This will just stack the dice on top of each other within the package.
      • Re: (Score:3, Informative)

        by MightyYar ( 622222 )
        They already do that, too. Stacked die are not new - this is simply a way to connect them without using a wire bonder or flip-chip. One of the traditional problems in wirebonder-less solutions is that you then have to match up the die with the substrate - this means that a simple silicon die shrink also requires a substrate re-design.

        I think that this sounds like a relatively expensive process, but it should enable a thinner profile than flip-chip or wirebonding.
    • Re:Very nice, but... (Score:4, Informative)

      by MightyYar ( 622222 ) on Thursday April 12, 2007 @10:47AM (#18703007)
      Die shrinks happen way too quickly to establish standards. Most manufacturers don't even try to match up substrates with chips - they just use a wire bonder. Only packages with specialized requirements keep the substrate and chip matched up so that they can use flip-chip or some other interconnect process... inkjet heads still use tab bonding, for instance.
  • by LiquidCoooled ( 634315 ) on Thursday April 12, 2007 @10:21AM (#18702579) Homepage Journal
    Surely they need to cool the components in the middle of the stack?
    Unless they decide to leave some of the holes open then anything in the middle is going to overheat?

    I always imagined this kind of tech running on some kind of multi layered wire fence with plenty of room for cooling.

    Incidentally, didn't Hitachi beat them to the whole 3d element thing?
    http://www.hitachigst.com/hdd/research/recording_head/pr/PerpendicularAnimation.html [hitachigst.com]
    • by UnknowingFool ( 672806 ) on Thursday April 12, 2007 @10:26AM (#18702681)
      Tubes! Everything else seems to run on tubes.
    • Re: (Score:3, Interesting)

      by s-gen ( 890660 )
      It looks like they might be planning to pump liquid between the layers:

      http://www.zurich.ibm.com/st/cooling/integrated.html [ibm.com]
    • As a former IBMer, I know that one of IBM's biggest strengths in IC research has always been packaging technology. If they are confident enough to announce this as a breakthrough, then you can safely assume they've figured out how to tackle the thermal issues and keep the chip cool.
    • Re: (Score:3, Insightful)

      by Zantetsuken ( 935350 )
      DISCLAIMER: Of course I didn't RTFA - cmon man, this is /. v2.0...

      From the summary saying how it would mostly see use in cellphones and the like, I would think it would operate at low enough speeds/voltages to be able to get by with passive cooling...
    • I wonder if they are using microtubes for cooling the lower layers.
    • by SQL Error ( 16383 ) on Thursday April 12, 2007 @11:11AM (#18703415)
      You wouldn't be able to stack multiple desktop CPUs, because it would generate too much heat. But you could stack a CPU on top of its own level 2 cache instead of next to it, making for shorter wires and a faster chip. Or stack a GPU on top of DRAM, so that you could have a 2048-bit bus instead of 256-bit.

      Then they just rely on the upper layer to conduct enough heat to keep the low layers cool.
    • by duncanFrance ( 140184 ) on Thursday April 12, 2007 @11:27AM (#18703703)
      There are some thermal advantages to this sort of interconnect. Since it keeps the wirelength short it means the drivers don't have to be so powerful. Hence a fair amount less heat will be generated. Driving any amount of capacitance at GHz speeds wastes shed-loads of power.

      Average power dissipated: P = C * V^2 * f

      So reducing V obviously makes a big difference (hence partly why IC operating voltages have decreased even as frequencies have increased), but getting C down will help also.
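The power relation above can be sketched numerically; the capacitance and voltage figures below are illustrative assumptions, not numbers from the article or the comment:

```python
# Dynamic (switching) power of a CMOS net: P = C * V^2 * f.
def dynamic_power(c_farads, v_volts, f_hz):
    """Average power in watts dissipated charging/discharging C at rate f."""
    return c_farads * v_volts ** 2 * f_hz

# Driving a long off-package trace (assumed ~10 pF) vs. a short
# through-silicon via (assumed ~0.01 pF), both at 1.2 V and 1 GHz:
p_long = dynamic_power(10e-12, 1.2, 1e9)     # 14.4 mW per line
p_short = dynamic_power(0.01e-12, 1.2, 1e9)  # ~0.014 mW per line
```

Cutting the driven capacitance a thousand-fold, as much shorter interconnect would allow, cuts the switching power of that net by the same factor; lowering V helps quadratically.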
    • by zmotula ( 663798 )

      Surely they need to cool the components in the middle of the stack?
      Unless they decide to leave some of the holes open then anything in the middle is going to overheat?

      This is where the Menger Sponge [wikipedia.org] comes in...
    • I used to ponder that question frequently, and found that Neal Stephenson's "The Diamond Age" had a "cool" solution to the problem... Basically, imagine a cube with holes drilled through it in a 3-D grid pattern, then just run water/coolant through it. Or use the extra surface area throughout the holes as a vaporizer medium at the bottom of a heatpipe [wikipedia.org] and move the heat to a heatsink bank somewhere above the processor.
    • Re: (Score:1, Interesting)

      by Anonymous Coward
      The most obvious cooling solution would be to create a chip/layer which was a pass-through connector as well as a heat pipe. The processor is then a sandwich of layers alternating between computing layers and cooling layers. There would then be some parallel stack which would connect these heat pipes together and to the primary cooling system (e.g. heatsink/fan).
    • Just imagine a Beowulf cluster of these ...

    • Unless things move to optical or biological, I would think that future cooling of CPUs/GPUs etc. might very well be by immersion in a high-dielectric fluid or vapor such as HCFC compounds. For instance, where I work we have several Trane chillers that have the motors (about 400 hp, 480 V, 3-phase) cooled by running them in the same evap-side-cycle refrigerant fluid/vapor as the compressor vane assembly.

      So if one placed the CPU, or for that matter the whole dang motherboard assembly, in a hermetically sealed vessel one could s
  • So they figured out how to make them, but wouldn't you start running into problems with heat retention in the middle of the chip? Or are they still thin enough at this point that this isn't really an issue.... the article doesn't mention it at all.
    • Re: (Score:3, Interesting)

      by Jake73 ( 306340 )
      Heat is certainly a concern. However, vertical stacking also helps address the issue of disparate technologies. For example, you may have two ICs manufactured with, say, CMOS and bipolar technologies that together won't generate enough heat to be a concern, but because they are different technologies, they need to be separated and therefore take up more space.

      On the other hand, it would be neat to see them put heatsinks between each individual chip. They could still drill and insert the tungsten vias
    • You have a good point. A chip twice as thick takes twice as long for heat to diffuse through it, so even if the new chips have the same power per area as today's chips, they will operate at twice the temperature rise above ambient.
      If they merely sandwich two processors together, you'll have twice the heat generated with half the conductivity, so these chips would run at four times the temperature rise above ambient compared to today's chips.
      Aside: do you think we should start asking for chips which nee
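The parent's scaling argument amounts to 1-D steady-state conduction through the die, dT = P * L / (k * A): doubling thickness doubles the rise at fixed power, and doubling power as well quadruples it. A hedged sketch with made-up numbers:

```python
# Temperature rise across a slab conducting heat out through one face:
# dT = P * L / (k * A). All figures below are illustrative assumptions,
# not values from the thread.
def delta_t(power_w, thickness_m, k_w_per_mk, area_m2):
    return power_w * thickness_m / (k_w_per_mk * area_m2)

K_SILICON = 150.0  # approximate thermal conductivity of silicon, W/(m*K)

single = delta_t(50.0, 0.5e-3, K_SILICON, 1e-4)    # one die
stacked = delta_t(100.0, 1.0e-3, K_SILICON, 1e-4)  # two dice: 4x the rise
```

This ignores lateral spreading and interface resistance, so it is only the back-of-envelope version of the argument, not a real thermal model.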
      • Re: (Score:3, Informative)

        by MightyYar ( 622222 )
        I don't know much about the cooling issues, but I know that they back-grind the chips to make them thinner. For instance, if they are replacing a memory package that used to consist of 1 chip with 3 stacked chips, they will grind the 3 stacked chips so that they are no taller overall than the 1 chip. Typical silicon thicknesses used to be 14-20 mils (355 - 500 microns). Now we are seeing as thin as 3 mils (75 microns), with folks at trade shows demonstrating even thinner.

        Don't ask me why people still use mi
    • Personally, this article puzzled me. I remember reading just about a month ago about a WAY cooler (pardon the pun) technology they had developed for chipsets. It is based on fiber optics instead and carries far more data at an incredibly small cost. I distinctly remember the article saying that they could divide all of Manhattan into two camps of 4 million each, and one of the camps could have all 4 million people call the other 4 million people and it would take either a single chip or the same ene
  • What?????? (Score:5, Funny)

    by Xinef Jyinaer ( 1044268 ) on Thursday April 12, 2007 @10:25AM (#18702671)
    The chips didn't exist as 3-D objects prior to this? In fact, wouldn't a chip that only exists in two dimensions be much more difficult to make?
    • "-1 Pedantic" Think of the space you could save though...
      • "+1 Pedantic" as opposed to "-1 Pedantic" (at least I don't consider it a negative thing). Being able to make 2-D objects would save a lot of space.
    • by phasm42 ( 588479 )
      First line of TFA (emphasis mine)

      The IBM breakthrough enables the move from horizontal 2-D chip layouts to 3-D chip stacking
    • Re: (Score:3, Informative)

      by stevesliva ( 648202 )

      The chips didn't exist as 3-D objects prior to this? In fact, wouldn't a chip that only exists in two dimensions be much more difficult to make?

      One layer of silicon substrate, followed by many layers of polysilicon and wires and insulator. There is as yet no practical way to fabricate two transistors on top of each other on a wafer. It's always the transistors on the bottom, wiring on top. The transistors themselves are only a 2-D array (but yes, they are 3-D devices). Sounds like this technique bores holes

    • Re: (Score:2, Funny)

      by Tiles ( 993306 )
      Why stick with three dimensions? Let's just skip three, and go straight to five! And add a moisturizing strip!
  • by illegalcortex ( 1007791 ) on Thursday April 12, 2007 @10:29AM (#18702729)
    It was scary stuff, radically advanced. It was shattered... didn't work. But it gave us ideas. It took us in new directions... things we would never have thought of. All this work is based on it.
    • Re: (Score:2, Troll)

      by pla ( 258480 )
      It was scary stuff, radically advanced

      July 1947: A "weather balloon" crashes in Roswell, NM.

      December 1947: Bell Labs' Bardeen, Brattain, and Shockley "invent" the transistor, using boron-doped silicon, which Bell Labs didn't have the equipment to produce at that time!

      Spoooooooky. ;-)
  • by crea5e ( 590098 ) on Thursday April 12, 2007 @10:30AM (#18702763)
    LEG-OS. 64-block architecture. Also themeable for Star Wars and Lord of the Rings fanboys.
  • Well (Score:5, Informative)

    by ShooterNeo ( 555040 ) on Thursday April 12, 2007 @10:31AM (#18702771)
    This is it. Maybe. There are possibly major problems with heat dissipation. However, there are some massive advantages:

    1. One tradeoff IC designers always face is that the fastest, lowest-latency access is always to on-die components. On-die memory (cache) is almost ALWAYS faster, coprocessor interconnects (like for dual core) are far quicker, etc. With any given level of state of the art, you can get a much higher clock signal over itsy-bitty paths on silicon from one side of the chip to the other than going out to big, clunky, extremely long wires.

    2. The tradeoff is that a bigger chip radically reduces yields: the chance of a defect causing a chip to be bad goes up with the square of the number of gates.

    3. This technology allows one to use multiple dies, and to interconnect them later. There's just one problem.

    HEAT DISSIPATION. A 3-D chip will of course have its heat per square centimeter multiplied by the number of layers. The obvious solution, internal heat pipes, has not yet been shown to be manufacturable.

    Hence TFA mentioning use in devices such as cell phones, where bleeding-edge, high-wattage performance is not a factor.
    • Re: (Score:3, Informative)

      by drinkypoo ( 153816 )

      HEAT DISSIPATION. A 3-D chip will of course have its heat per square centimeter multiplied by the number of layers. The obvious solution, internal heat pipes, has not yet been shown to be manufacturable. Hence TFA mentioning use in devices such as cell phones, where bleeding-edge, high-wattage performance is not a factor.

      It's useful in other spaces, too. If you have a massively parallelizable task, then you could use this technology to have a stack of CPUs in less space on the board, which would reduce t

      • Plus you could keep using the standard ATX motherboard dimensions without a sprawling CPU taking over the board's real estate.
        • You do realize that 99% of a CPU is just the packaging, and it's only that big because it's the only reasonable way to get that many pins that are large enough not to snap off during socket insertion (or sublimate directly into the atmosphere ;-)).

          • I have an exposed Pentium chip lying around somewhere; it was a promotional thing I got from MIT once. They really are quite big.
            • Perhaps they were. I have an exposed AMD Sempron (or whatever the non-64 bit one was called), it's less than 1cm square.
    • Re: (Score:1, Interesting)

      by Anonymous Coward
      "HEAT DISSIPATION. A 3-D chip will of course have its heat per square centimeter multiplied by the number of layers. The obvious solution, internal heat pipes, has not yet been shown to be manufacturable"

      Sure they are. Every metal trace is an internal heat pipe. It doesn't have to be some crazy fluid-filled micro-cavity, if that's what you were thinking. 3-D circuits have been around for quite some time now. Several labs will fabricate your circuit in 3-D. It's not consumer-production ready, but it exists and
    • Re: (Score:3, Insightful)

      by Jeff DeMaagd ( 2015 )
      How much power is lost due to the interconnects right now? What fraction of power can be saved by almost eliminating the long wires?
    • The tradeoff is that a bigger chip radically reduces yields : the chance of a defect causing a chip to be bad goes up with the square of the number of gates.

      Isn't it directly proportional?

      Doubling the number of gates doubles the chip area. Doubling the chip area halves the number of chips per wafer. Assuming a constant number of chip-killing defects per wafer (say 5), halving the number of chips per wafer means you have twice the percentage of dead chips (i.e. 5 dead per 50 chips (= 10%) instead of 5 dead per 100 chips (= 5%)).
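The linear-yield arithmetic in the comment above can be written out as a small sketch (the 5-defects-per-wafer figure is the comment's own hypothetical):

```python
# With a fixed number of chip-killing defects per wafer, and each defect
# assumed to kill exactly one chip, the dead-chip fraction scales linearly
# with die area (i.e. inversely with chips per wafer).
def dead_fraction(chips_per_wafer, defects_per_wafer=5):
    return defects_per_wafer / chips_per_wafer

small_die = dead_fraction(100)  # 0.05 -> 5% dead
big_die = dead_fraction(50)     # 0.10 -> 10% dead: double area, double loss
```

Real yield models (e.g. Poisson or negative-binomial in defect density times area) are more involved, but they agree that the loss grows roughly linearly with area for small defect counts, not with the square.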
      • Err. Yes, you're absolutely right. I was thinking software for some reason; I meant to write linear. (Software can be squared because every tiny part of software can in theory interact with every other part, hence the reason for compartmentalization of code: to reduce the number of possible interactions.) And putting cache in the middle just makes sense for other reasons. One core on top, one on bottom, cache in the middle. Works great. Only improvement in cooling for 3-4 layers like this (hot ICs on th
        • I was thinking software for some reason, meant to write linear.

          To add to the confusion, some chips are traditionally measured in one dimension only (optical sensor sizes are based on old camera film formats, which are measured along the diagonal), in which case the relationship *is* squared; and what's more, optical sensors are an area where people are more likely to need to know die size!

          One core on top, one on bottom, cache in the middle.

          I'm speculating a bit here, but in the modern world of 4 megabyte o
    • The obvious solution to the heat dissipation is a layer of diamond.
  • IBM has a nice track record of cool things they introduced to the world. HDDs, Open Standard Components, etc. ...
    This could be another one of those cool things that help shape the next few decades of technology.
    • They are one of the few major companies that still do basic research.
    • IBM PC's...

      (padded out to avoid lameness filter)
  • The biggest advantage here is that you no longer need a planar graph as your circuit diagram (meaning, a graph where no two edges cross). The most obvious application for this that I can think of is a neural net chip, but all sorts of other designs that would require a non-planar layout are opened up. Cool!
    • Re: (Score:3, Informative)

      You appear to be under the misapprehension that VLSI designs are planar graphs. The place-and-route tools used to move from RTL to GDSII layouts assume (depending upon the manufacturing process) anywhere between 4 and 20 metal layers.

      The technology described in the article is exciting but not novel... academia has been exploring memory hierarchies, hardware dynamic thread scheduling, and introspective debug solutions for some years.

      For reference... last year's ASPLOS ('06) conference includ

  • I thought the problem limiting current chip density is heat dissipation due to leakage current, rather than the number of devices we can squeeze into a die. BTW, please help with a Google Analytics study (STATS252 [stanford.edu]).
  • by Yaa 101 ( 664725 )
    If it got DRM then I say who cares?
    • If it got DRM then I say who cares?

      Let's have a hand everyone, for Slashdot's living, breathing stereotype. That is, if it isn't just some kind of machine that posts about DRM and software patents, even where not appropriate.

      Seriously though, this is one of those moments where I'm glad someone is doing some serious research and the industry won't stagnate anytime soon.

  • that can be stacked on top of another electronic device in a vertical configuration.


    Brings new meaning to the term "Tower configuration"!

    RM
  • by Yvan256 ( 722131 ) on Thursday April 12, 2007 @10:59AM (#18703213) Homepage Journal
    To further increase R&D of this new 3D chip technology, IBM will be launching a new company called Cyberdyne Systems Corporation.
  • Is it time now for the IBM/AMD versus Intel Death Match? (yes, no, haha). Intel has a pile of chip improvements. IBM, AMD's main partner, has a pile of their own. Who will win? While Intel has Penryn at 45nm, could AMD counter with a Barcelona that stacks its cache right on top of its processors? Now that's something I'm waiting to see. Either way, I should win!
  • Imagine stacking a CPU, GPU, RAM, PPU, sound, network... on top of a south (or north? x.x) bridge that just sends the signals to USB, VGA, etc.
  • Pringles have been doing this for years!
  • I always knew the answers to all life's problems could be found in stackable breakfast foods.
  • Interesting; "More Moore" is the name of a real project in Europe, but it has nothing to do with 3-D interconnects/chip stacking; rather, it concerns EUV (extreme ultraviolet) lithography to print smaller features.
  • by epine ( 68316 ) on Thursday April 12, 2007 @05:36PM (#18710291)

    Quite funny to perfect this now, with thermal considerations already dominating chip design costs. A nice little bit of space saving if it pans out for the super-compact, low-power cellphone market. For any other application, pretty much worthless. It might have some applications at the high end to increase supercomputing bandwidth for systems where half the cost is the cooling system. After the planet runs out of refinable bauxite, some prime locations with fat connections to the hydro grid would become available for server centers based on this technology.

  • Put this in a girl robot and you can say, "Man she's stacked!" and not get a sexual harassment suit!
  • A fold is essentially a doubling. Think about it: fold a sheet of paper; how many layers are there?

    It is basically counting in binary (5 fold = 5 bits = 32 times)

    I wish more journalists got this straight.

    Same goes for "order of magnitude": you're lucky if anyone knows what that means.
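The fold-counting claim above is just powers of two; a minimal sketch:

```python
# Each fold doubles the layer count, so n folds give 2**n layers.
def layers_after_folds(n):
    return 2 ** n

# 0 folds -> 1 layer, 1 fold -> 2, 5 folds -> 32 (the comment's example).
five_fold = layers_after_folds(5)  # 32
```

So when a journalist writes "a five-fold increase" meaning "five times", that is very different from five literal foldings, which multiply by 32.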
