AMD Intel Hardware Technology

The Transistor Wars 120

An anonymous reader writes "This article has an interesting round-up of how chipmakers are handling the dwindling returns of pursuing Moore's Law. Intel's about four years ahead of the rest of the semiconductor industry with its new 3D transistors. But not everyone's convinced 3D is the answer. 'There's a simple reason everyone's contemplating a redesign: The smaller you make a CMOS transistor, the more current it leaks when it's switched off. This leakage arises from the device's geometry. A standard CMOS transistor has four parts: a source, a drain, a channel that connects the two, and a gate on top to control the channel. When the gate is turned on, it creates a conductive path that allows electrons or holes to move from the source to the drain. When the gate is switched off, this conductive path is supposed to disappear. But as engineers have shrunk the distance between the source and drain, the gate's control over the transistor channel has gotten weaker. Current sneaks through the part of the channel that's farthest from the gate and also through the underlying silicon substrate. The only way to cut down on leaks is to find a way to remove all that excess silicon.'"
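The leakage mechanism described above can be illustrated with a toy subthreshold-current model: as the gate loses control of a shrinking channel, the effective threshold voltage rolls off and off-state current grows exponentially. The Python sketch below is only a rough illustration; the constants (threshold roll-off, slope factor, reference current) are made-up assumptions, not data for any real process.

```python
import math

V_T = 0.026   # thermal voltage kT/q at room temperature, in volts
N = 1.5       # subthreshold slope factor (assumed)
I0 = 1e-7     # reference current in amperes (illustrative)

def threshold_voltage(gate_length_nm, vth_long=0.45, rolloff=8.0):
    """Toy short-channel effect: Vth drops as the gate shrinks and its
    control over the channel weakens (rolloff constant is illustrative)."""
    return vth_long - rolloff / gate_length_nm

def off_current(gate_length_nm, vgs=0.0):
    """Subthreshold leakage with the gate nominally off (Vgs = 0)."""
    vth = threshold_voltage(gate_length_nm)
    return I0 * math.exp((vgs - vth) / (N * V_T))

for L in (90, 65, 45, 32, 22):
    print(f"{L:2d} nm gate: Vth ~ {threshold_voltage(L):.3f} V, "
          f"off-current ~ {off_current(L):.1e} A")
```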
  • by mevets ( 322601 ) on Friday November 11, 2011 @07:16PM (#38030586)

    I thought that saline was the new medium of choice, especially after all those messy lawsuits in the 90s.

    • by treeves ( 963993 ) on Friday November 11, 2011 @07:35PM (#38030748) Homepage Journal

      That silent 'e' adds hydrogen and oxygen to silicon, leaving you with an insulator instead of a semiconductor.

      • by Anonymous Coward

        Why don't they just add hydrogen and oxygen to the silicon when they need it to stop leaks, and take it away when they need it to conduct electricity? Some people are so dumb.

        • by Khyber ( 864651 )

          This actually sounds interesting, as silicone itself can withstand very high temperatures. On the other hand, its insulating nature (low thermal conductivity) would mean problems with heat dissipation.

      • by mcavic ( 2007672 )
        The e isn't silent, at least where I come from.
      • Re: (Score:2, Funny)

        Who pronounces it with a silent e?
          • Who pronounces it with a silent e?

          People who have no understanding that chemistry is more important than the pristine contents of their spell checker.

        • by PDF ( 2433640 )

          Who pronounces it with a silent e?

          I've never heard it pronounced without a silent e. How would you pronounce it without a silent e? Sil-ih-co-neh? Sil-ih-co-nee?

          • Like cone, as in ice cream cone, not con, as in convict. The e sound is overlaid with the n sound.
            • by PDF ( 2433640 )
              I obviously have much to learn about pronunciation, as I have always understood that the word "cone" has a silent e and is one syllable long.
  • by Anonymous Coward

    We use laser interference to calculate. No atoms = no leaks.

    • Re:in the future (Score:5, Interesting)

      by Anonymous Coward on Saturday November 12, 2011 @12:10AM (#38032520)

      I think I know what you're talking about, and I'll elaborate on it.

      Laser interferometry, scanned across an optical ROM using piezoelectric lenses. In essence it makes an optical processor achievable without requiring optical switches.

      Such a thing has been done before, using other ROM technologies (usually for digital signal processing). This one has the advantage of being limited in speed only by how fast the piezoelectric lenses can be aimed, and every pseudo instruction takes only one clock cycle. Because of that it can also make CISC instructions outperform RISC (at the expense of a significantly larger ROM).

      Granted, most piezoelectric crystals oscillate well below 1 GHz, but that limitation comes from having to conduct a usable amount of electricity when they're used as electronic oscillators. I'm not sure what the ceiling is on how fast a piezoelectric lens could be aimed, but at any rate, aiming a laser is a different bottleneck than the heat dissipation of transistors, and in my opinion it's a much easier problem to work around.

  • by Animats ( 122034 ) on Friday November 11, 2011 @07:38PM (#38030768) Homepage

    3D transistors aren't all that new; high power devices have been 3D for decades. Making 3D transistors this small is new. I wonder how long the lifetime is. The smaller the device gets, the worse the electromigration problem gets. The number of atoms per gate is getting rather small.

    Note that this is different from making 3D chips. That's about making an entire IC, then laying down another substrate and making another IC on top of it. Or, in some cases, mechanically stacking the chips with vertical interconnects going through the substrate. The density improves, but the fab cost goes up, the yield goes down, and getting heat out becomes tougher. We'll see that for memory devices, but it may not be a win for CPUs.

    • by fyngyrz ( 762201 ) on Friday November 11, 2011 @07:51PM (#38030866) Homepage Journal

      3d integration should become practical when 3d cooling (channels? pipes? something else?) can also be easily integrated into the silicon. Once we can get the heat out, there's no particular reason that 3d can't *really* mean "3d integration", instead of "stack dies." I don't see any reason why this wouldn't come to pass. Even so, at the current geometries, we're approaching true high-performance systems on a chip.

      Larger chips provide for more interconnects (more edge space) but at some point, that'll be overkill because the system will be all in there, and only I/O will need to be brought out. We're seeing it (in a kind of feeble way) with some of the microcontrollers, but I rather expect (ok, hope) that this will be how computers are supplied, or at least, one way they are supplied.

      • by lexman098 ( 1983842 ) on Friday November 11, 2011 @08:37PM (#38031240)

        Larger chips provide for more interconnects (more edge space) but at some point, that'll be overkill because the system will be all in there, and only I/O will need to be brought out.

        There's a little more to it than that. Larger chips draw a large amount of power (very suddenly), which means the number of pins used just for VDD + GND/VSS goes way up. That's especially true since more of the analog circuitry (notoriously sensitive to rail noise) would have to be integrated into the same chip within the same process. That's just power, and depending on the application there's a lot more to consider for your pinout. You can really never have enough pins; that rule of thumb isn't going anywhere soon.

        • If all the pins were power and optical was used to communicate, that would reduce pin count and increase bandwidth. But of course, this has all been examined ad infinitum and has problems. Experts care to expound?

        • Would it be possible to have the chip start up layer by layer? I'm using a rack of equipment as my real-world example; staggered startup effectively reduces the maximum amount of current needed at any point in time. Of course I'm making the assumption that the startup current exceeds the maximum running current under full load.

          • No, this isn't start-up related. Chips have a lot of variation in "switching activity", or the need to suddenly charge and discharge internal nodes. This is during normal operation, and you do what you can to throw in as much "decoupling capacitance" as possible (to insulate other circuitry from rail noise and relax current-draw requirements), but it takes up space and can only do so much.
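            A rough idea of how that decoupling capacitance gets sized: to first order C = I*dt / dV, i.e. the capacitor has to supply the transient current for as long as the package and board supply take to respond, without letting the rail droop past the allowed ripple. The numbers in the sketch below are made-up for illustration.

```python
def decoupling_cap_farads(transient_current_a, response_time_s, allowed_droop_v):
    """First-order estimate of on-die decoupling: C = I * dt / dV."""
    return transient_current_a * response_time_s / allowed_droop_v

# Example (assumed numbers): a block that suddenly draws 10 A for 2 ns
# before the package/board supply responds, with 50 mV of allowed droop.
c = decoupling_cap_farads(10.0, 2e-9, 0.05)
print(f"decoupling needed: ~{c * 1e9:.0f} nF")   # ~400 nF
```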
      • by phaserbanks ( 1977290 ) on Friday November 11, 2011 @09:09PM (#38031468)

        The more 3D features you pattern onto a wafer, the more mechanical stress you create. This is especially true when you integrate features with different materials and different coefficients of thermal expansion. Such features can increase the warpage and bow of the wafer to the point that the fabrication equipment can no longer handle it. It becomes like trying to feed a potato chip into a CD changer.

        The larger the wafer, the worse this problem becomes, and today they're running very large 12" wafers that are quite sensitive to mechanical stress. Also, the SOI wafers are more prone to warpage than single crystal silicon.

        So, the *real* 3D integration you're talking about is very difficult.

        • by fyngyrz ( 762201 )

          That's why my assertions were predicated upon cooling.

          • You were talking about integrated microchannels for cooling, right? Or through hole vias patterned into the die? That's what I'm talking about. Digging holes in the crystal and/or depositing metal both cause the wafer to warp.

            • by fyngyrz ( 762201 )

              I was just handwaving; I've seen cooling strategies (in macro) that range from fan driven air to full-immersion oil baths to pumped, pressurized materials that run through heat sinks. At micro, I really don't know what the solution is (otherwise I'd be at the patent office), I'm just surmising that as it is a physical engineering problem that directly stands in the way of technology (and huge amounts of earning potential), someone is quite likely to solve it.

              It also seems very unlikely to me that we've reac

        • Well, actually, nowadays chip-level (rather than wafer-level) stacking/integration is taking precedence. There are various types of micro-manipulators, and even self-aligning techniques, mostly developed by Japanese scholars, that make this task possible.

      • 3d integration should become practical when 3d cooling (channels? pipes? something else?)

        The obvious answer is diamond. Semiconduction and high thermal conductivity.

        • by rtb61 ( 674572 )

          Better to add in the third dimension to the calculations themselves, switch from binary '0,1' to trinary '+1,0,-1'. New problems to solve but a definite leap in 'calculation' density.

        • When diamond becomes as cheap and plentiful as silicon... There's already lots of research into using diamond for high-voltage power semiconductor devices.

      • by Animats ( 122034 ) on Saturday November 12, 2011 @02:33AM (#38033080) Homepage

        3d integration should become practical when 3d cooling (channels? pipes? something else?) can also be easily integrated into the silicon.

        That's being tried by IBM. [electroiq.com] But it's probably not going to be useful for portable and mobile devices. IBM is looking at it for high-density server farms.

      • by tlhIngan ( 30335 ) <slashdot.worf@net> on Saturday November 12, 2011 @03:04AM (#38033172)

        Larger chips provide for more interconnects (more edge space) but at some point, that'll be overkill because the system will be all in there, and only I/O will need to be brought out. We're seeing it (in a kind of feeble way) with some of the microcontrollers, but I rather expect (ok, hope) that this will be how computers are supplied, or at least, one way they are supplied.

        Larger chips are also more expensive. A silicon wafer costs anywhere from $1000-3000 each. Each wafer has a fixed area, and the larger the chip, the fewer of them fit per wafer. Additionally, a larger chip means there's more of a chance that an imperfection in the wafer will destroy the entire chip, leading to lowered yields. Lowered yields mean the base price of each chip goes up, as there are fewer chips to pay for the entire batch.

        There are two kinds of chips: silicon-limited and I/O-limited. Memory devices (both volatile (DRAM) and non-volatile (Flash)) are silicon-limited: they are as big as economically possible (more area == more capacity, after all), juggling yields and such to reach a usable price point.

        CPUs are I/O-limited: they are actually very small devices, and the only thing holding them back is the number of I/O pins. And it's not the actual silicon itself, it's the physical package that connects to the PCB. The most popular packages are BGA, but even those have specifications on ball size and ball spacing. Make the balls too small and put them too close together, and the cost of the base PCB holding the chip goes up significantly, as the PCB has to be made to tighter tolerances.

        Even so, we're still talking about a thousand pins in the latest high-end Intel and AMD parts. This is doable because the PCB chip carrier can be made to special tolerances (it only holds the chip, after all, and doesn't have to hold the rest of the circuits for the device); basically it's a breakout board.
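          The cost-versus-die-size argument above can be made concrete with a back-of-the-envelope model: gross dies per wafer scale roughly as wafer area over die area minus an edge-loss term, and yield falls with die area under a simple Poisson defect model, Y = exp(-D*A). The wafer cost and defect density below are assumptions for illustration, not real foundry numbers.

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Common approximation: wafer area over die area, minus an edge-loss term."""
    d, a = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / a - math.pi * d / math.sqrt(2 * a))

def poisson_yield(die_area_mm2, defects_per_mm2):
    """Fraction of dies with zero killer defects (Poisson model)."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

WAFER_COST = 3000.0     # dollars per processed 300 mm wafer (assumed)
DEFECT_DENSITY = 0.002  # killer defects per mm^2 (assumed)

for die_area in (100.0, 200.0, 400.0):  # mm^2
    dies = gross_dies_per_wafer(300, die_area)
    good = dies * poisson_yield(die_area, DEFECT_DENSITY)
    print(f"{die_area:5.0f} mm^2 die: {dies:4d} gross, {good:6.1f} good, "
          f"~${WAFER_COST / good:.0f} per good die")
```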

        • Use the brain model that mother nature provided? Processing on the outside (CPUs), connections on the inside (interconnects), blood-vessel-like cooling, an aluminum 'skull', and a spinal I/O connection, all in one cube?

      • Cooling pipes and 3D parts are nothing new, really. High-power thyristor switches have used cooling pipes for decades. There are fully electronic systems in the megawatt range, used in the power industry, that can do DC-to-AC conversion and many other neat things.
    • Note that this is different from making 3D chips. That's about making an entire IC, then laying down another substrate and making another IC on top of it. Or, in some cases, mechanically stacking the chips with vertical interconnects going through the substrate. The density improves, but the fab cost goes up, the yield goes down, and getting heat out becomes tougher. We'll see that for memory devices, but it may not be a win for CPUs.

      Well, you could use it not only to pack ever more transistors into the same area, but also to pack the same number of transistors into a smaller area. For example, not 100 million transistors on a 10*10mm die, but 100 million transistors on a 3*3mm die, stacked 11 layers high (assuming the cubic space per transistor is the same).

      Sure, that would be more difficult to produce and thermally less optimal, but it would also enable shorter interconnects within an IC. Shorter interconnects -> higher clock speeds? Less interconnect -
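      A quick sanity check of the arithmetic above, plus the "shorter interconnects" point; only numbers already stated in the comment are used, and the worst-case wire is simply taken as the die diagonal.

```python
import math

flat_side_mm, stacked_side_mm, layers = 10.0, 3.0, 11

# Same total silicon, redistributed across layers.
print("total silicon:", flat_side_mm ** 2, "vs",
      stacked_side_mm ** 2 * layers, "mm^2")            # 100 vs 99 mm^2

# Rough worst-case on-die wire length: the die diagonal (the vertical hop
# between layers is assumed to be negligible next to the die size).
print("worst-case wire, flat:    %.1f mm" % (flat_side_mm * math.sqrt(2)))
print("worst-case wire, stacked: %.1f mm + vertical vias"
      % (stacked_side_mm * math.sqrt(2)))
```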

  • by electrosoccertux ( 874415 ) on Friday November 11, 2011 @07:45PM (#38030812)

    Current sneaks through the part of the channel that's farthest from the gate and also through the underlying silicon substrate

    big "huh" at this article excerpt, the point of Intel's 3d gate transistors is it allows for a fully depleted region of silicon in the channel. IE, the gate is so close to the silicon, NO electrons exist in the channel when it is off. The only leakage current you can have then through the channel is quantum tunneling, and that's basically nil; bringing the total current consumption of the transistor down by a factor of 10. Ho hum silly slashdot summary, get off my lawn!

    • by mkiwi ( 585287 ) on Friday November 11, 2011 @08:23PM (#38031144)

      I think the point they're trying to make is that there's some sort of depletion going on in the channel, which causes a very small but not insignificant amount of current to flow from drain to source through the transition region in the substrate. From the standpoint that electrons are sitting on the upper part of the Si substrate underneath the channel, the summary makes sense. They want to remove the excess Si so that depletion-mode current is more tightly controlled.

    • by Calos ( 2281322 ) on Friday November 11, 2011 @08:39PM (#38031256)

      That part of the summary was probably meant to address traditional planar transistor designs, where it is roughly accurate. It is one of the reasons why Intel has been pursuing 3D transistors - more gate control over the channel and no bulk leakage.

      Another approach is to use a buried oxide layer, so that the transistors simply don't have a bulk substrate, and the channel is thin enough to allow better gate control. This approach will help the leakage, but 3D gets you faster transistors, too, because there is more area the gate directly controls to form an inversion layer to conduct current. The upside of this method is that if we can fabricate the wafers, the rest of the processing is mostly the same (though those wafers will be expensive). 3D requires a lot more work, but apparently Intel has that figured out.

      • by Technician ( 215283 ) on Friday November 11, 2011 @11:59PM (#38032468)

        SOI limits the depth of the conductive channel by placing a film on an insulator. If the insulator is a low-k dielectric, the capacitance is reduced, helping the speed. The 3D transistor, on the other hand, has a vertical fin of semiconductor created by etching away the surrounding material. This places the flat film of semiconductor on edge; a wrap-around gate then applies the e-field on both sides and the top, essentially surrounding the doped semiconductor path on 3 of 4 sides. This places all of the channel in close proximity to the gate voltage, so a smaller voltage can pinch off the channel. SOI still has a gate on only one side (the top) of the semiconductor channel.

        If you don't understand the tech, a photo is worth many words. A photo can be seen here.
        http://www.pcmag.com/article2/0,2817,2384909,00.asp#fbid=2uqV-rrPnOE [pcmag.com]
        Most people do not understand the photo. The center lattice structure contains 12 transistors. It has 6 parallel N-channel devices in series with 6 parallel P-channel devices. The semiconductor is the shorter fins under the higher fins. There are 6 of these fins, with 2 transistors each, configured in complementary pairs as a basic inverter. The 5 bars on top are the source on the ends, the drain in the center, and the two gates in between. The gate wraps the channel under it, between the source and drain of each transistor. This is considerably different than SOI technology.

        • by Calos ( 2281322 )

          Exactly, but more detail than I decided to go into :)

          I'd probably just link to the Ars or TechReport article instead, though those may go over the heads of people who have no education in this stuff.

    • by Kjella ( 173770 ) on Friday November 11, 2011 @10:33PM (#38032028) Homepage

      This has been one of their major bullet points: the next round of processors will improve power consumption a lot. So if Intel's not on the right path, I don't know who is. AMD's Bulldozer certainly is not. Of course, sooner or later this is going to come to a halt; silicon atoms are roughly 0.235nm apart, so 22/0.235 = 93.6 atoms. The roadmap [blogspot.com] puts us at 8nm = 34 atoms in 6 years. Just extrapolating, in 2023 it'll be 12 atoms, in 2029 4.5 atoms, and in 2035 1.6 atoms. That's not going to happen; in the 2020s at the latest we will hit a brick wall and Moore's "law" will be dead. We'll hit some level of energy efficiency and most likely stay there.
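      Kjella's extrapolation is easy to reproduce. The sketch below assumes the 0.235 nm atom spacing from the comment and a classic ~0.7x linear shrink every two years; the numbers come out close to (not exactly equal to) the ones quoted, since real roadmaps don't shrink by exactly 0.7x.

```python
ATOM_SPACING_NM = 0.235   # Si-Si spacing used in the comment above
SHRINK = 0.7              # assumed ~0.7x linear shrink every two years

node_nm, year = 22.0, 2011
while year <= 2035:
    print(f"{year}: {node_nm:5.2f} nm  ~ {node_nm / ATOM_SPACING_NM:5.1f} atoms")
    node_nm *= SHRINK
    year += 2
```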

      • The silicon lattice constant is like 4.5 angstroms or something, I believe, not 2.35... and apparently that puts the wall at about 5nm, "with tweaking maybe 4nm". Maybe we can play with carbon nanotubes then, but we'll hit a wall regardless; we can't go much smaller... It will be interesting to see what happens to Intel's stock and employment. If I had to guess, they will just lay everybody off except the actual fabrication workers and continue selling the chips...

        • by Kjella ( 173770 )

          The lattice constant is 5.431 angstroms, actually, but that's the unit cell of the crystal structure, not the shortest distance between two atoms. So 8nm = 34 atoms in a row, or about 15 unit cells; atoms are just easier to understand. In any case I deliberately didn't predict a limit, because people have been wrong about this so many times before. When the P4 started running into massive leakage current on 90nm, people were also saying "this is it" and now 8nm is on the roadmap. Maybe we'll run into that

    • The other advantage a wrap-around gate provides is the ability to pinch off the channel at LOWER voltage. This is essential for low-power, high-speed transistors. The overall improvement is lower leakage at lower voltage, lower current, and thus lower power at high speed.
      This moves a 90-watt part to a 9-watt part at about the same speed, or a much lower-wattage part at slower speeds. This is essential to bring desktop features to Ultrabooks and other low-profile devices with relatively small bat

  • Comment removed based on user account deletion
  • That's one reason they invented SOI. Other advantages include higher speeds (due to less capacitance), higher operating temperatures, latch-up free, and radiation hardness.
    • by UnknownSoldier ( 67820 ) on Friday November 11, 2011 @09:49PM (#38031752)

      You are not going to address Moore's Law with silicon -- the problem with silicon is that it has an effective 4 to 5 GHz barrier -- which is its dirty little secret that no one wants to talk about. The army had 100 GHz chips 20 years ago -- guess what, they weren't using silicon, but a germanium compound.

        The only "real" solution is to start looking at other materials.

      • Re: (Score:3, Informative)

        by rev0lt ( 1950662 )
        The hysteresis of the material also applies to copper, aluminium, gold, and other conductive metals used in manufacturing the circuits.
        Germanium has been used in semiconductors longer than silicon, and it is widely used today. One of the emerging alternatives to silicon is a germanium-silicon alloy that has been gaining traction for some years now, so this is nothing new.
      • by Anonymous Coward

        Moore's Law only makes sense when applied to bulk CMOS. No other semiconductor technology has the momentum to challenge bulk CMOS. No other material has an easily made complementary transistor pair. You can get a III/V or II/VI compound to operate with majority carriers to THz frequencies, but it is difficult to find a replacement for the easily created p and n enhancement devices of bulk CMOS. If SOI, GaAs, InGaAs, SiGe, GaN, SiC, nanowire FETs, or any other "exotic" material had

      • Gallium arsenide (GaAs) is also good for HF microwave stuff. It has a higher saturated electron velocity and higher electron mobility, allowing transistors made from it to function at frequencies in excess of 250 GHz. http://en.wikipedia.org/wiki/Gallium_arsenide#GaAs_advantages [wikipedia.org]
      • by Anonymous Coward

        Err... the frequency you're talking about is the transition frequency of the process. Modern CMOS processes have f_t's in the 100+ GHz range. Just because the transistors themselves are that fast, doesn't mean you can design digital systems that clock that fast. It's trivial to design an extremely high speed processor, but it's just not economical since the power consumption goes up as operating frequency squared. The limit to processors today is the ability to dissipate this heat. Of course, new materials

  • New physical design. (Score:3, Interesting)

    by Commontwist ( 2452418 ) on Friday November 11, 2011 @08:12PM (#38031078)

    In the typical desktop-PC CPU mounting, the heatsink/fan combos are massive beasts on top of the chip itself. The 'chip' is mostly a heat sink itself, with larger connectors to hook up to the motherboard. Compared to the actual die, the connection and heat-dissipation materials are huge.

    Has anyone tried to create a silicon cube composed of layer upon layer of CPUs, running at a low enough speed that heat isn't a problem, especially if you coat it with aluminum? How many CPUs could you fit in a cube the size of a modern heat sink, and how much parallel processing power do you think you could get out of it? Would it be able to stand up to a modern CPU? Given that at least a few dozen CPUs could fit into such a thing, the parallel processing should be impressive, I'd think.

    Of course, I'm no computer engineer, which is why I'm posting this to see how good/bad this idea is.

    • by mikael ( 484 )

      I still never understood why CPUs have to be installed in a motherboard-mounted socket rather than on their own board connector like a GPU. Or why all the connectors have to be on the motherboard and not on a separate board.

      I'm guessing that trying to create a silicon cube based on multiple layers would increase the chances of defects, reducing the yield of functional dies. Even if you did get two successful slices, there's always the chance something would get trapped in between.

      Maybe you could create a heat sink

      • It also increases the number of operations.
        This is bad.
        Adding more steps adds cost.

      • by kesuki ( 321456 )

        They tried that: the slot-based Pentiums. They were a hassle. Then more recently they went to the ball grid array to avoid the oft-broken pins...

      • Makes me wonder what one could do if you tossed traditional 2D design (most CPUs do have layers, but they're very much 2D for all that) and went for a more 3D design like the human brain. Create a miniaturized silicon 'matrix' of semiconductor connections in a similar way to the human brain, for example.

        • by slew ( 2918 ) on Friday November 11, 2011 @09:52PM (#38031780)

          Makes me wonder what one could do if you tossed traditional 2D design (most CPUs do have layers, but they're very much 2D for all that) and went for a more 3D design like the human brain. Create a miniaturized silicon 'matrix' of semiconductor connections in a similar way to the human brain, for example.

          The two main problems with 3D are currently fabrication density (defect issues, stress, strain, etc.) and how to get rid of all that heat. In your brain, that is solved by self-assembly, redundancy, low usage, and a circulatory system. The current computing model of a usable CPU (runs an OS, does IEEE floating-point arithmetic, does branching/looping) is probably too complex to solve this problem the same way in the foreseeable future. Of course, if we change the definition of what a usable CPU is, then perhaps this would be more feasible.

          On the other hand, there is some progress being made on bump-stacked or through-substrate via (TSV) assemblies (sometimes done for DRAM & Flash for cellphones), and even some limited silicon devices with two layers (instead of the current one layer) of active devices per die.

          Stacked silicon dies are promising, but there is currently a large overhead for the mechanical connection between dies, so the density isn't very good. There's also the problem of differing thermal expansion coefficients between the dies, which causes mechanical instabilities (and currently has to be solved by just putting in even more interconnect area overhead/margin).

          The two-layered devices are usually not made by stacking two active layers on the same wafer (because it's currently hard to grow a new, thick, uniform layer of silicon on top of existing circuitry). Instead they are made by patterning one side, sticking a new clean wafer on top, flipping the stack over, shaving off the new top (which used to be the bottom of the original patterned wafer), and then doing a new pattern on the newly shaven surface. As you might imagine, this isn't currently very scalable to more layers, as defects will eventually dominate the system.

          Neither technique is currently very good for getting the heat out.

          People are working on this, and some limited stuff has made its way out of the lab and into production, but none of the 3D stuff is currently much better than just doing a standard planar chip for most typical CPU projects right now. It's just a niche...

        • by mikael ( 484 ) on Saturday November 12, 2011 @08:10AM (#38033998)

          Human brains are more like a supercomputer architecture. The outer layers (gray matter) do all the calculations while the inner layers (white matter) do the connections. There are diffusion-based MRI images that show how all the interconnects go.

          For heating, you have the arterial blood supply, while for cooling, you have the venous blood supply, which draws the heat out. For pressure equalisation, there's the circle of Willis, a ring of arteries. Even the flow of blood and nutrients isn't a simple pumping process; it's regulated by a neural system of its own: http://www.brain-aneurysm.com/ba1.html [brain-aneurysm.com]

          • So... outer layer of the cube would be the CPUs, the internal portions would be the interconnects, and some kind of oil or other liquid can be patterned like blood vessels inside. All protected by aluminum (skull) and plugged into a central slot (spine).

      • by sjames ( 1099 )

        The socket is so a single SKU of motherboard can be fitted with a variety of CPUs of different speeds after manufacture. In the past, it was common to upgrade the CPU after the fact, but these days, by the time you want to upgrade, the new CPUs want a new socket anyway. In the embedded world, the CPU is typically soldered on like all the other chips. The connections all tend to be on the mainboard rather than a daughter board mainly due to the number of connections needed, especially when the memory control

    • Removing heat is the big problem. Every time you double the number of active layers,
      • you double the heat power
      • you double the distance the heat has to travel, thus doubling the heat rise of the innermost layer

      Combining the two effects, the innermost layer rises in temperature by a factor of 4, which means that speed has to be reduced by a factor of 4 to get back down to the temperature of the previous iteration. Thus twice as many layers means 2/4 times the processing power (see the sketch below).

      There's a gain to be had th
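      Writing the parent's scaling argument down as a toy model: doubling the layers doubles both the power and the thermal path to the innermost layer, so its temperature rise quadruples, and if per-layer speed is assumed to scale linearly with the allowed power (the comment's assumption, not a device model), total throughput actually falls as layers are added.

```python
def relative_throughput(layers):
    """Throughput relative to one layer, under the comment's model: per-layer
    speed must drop as 1 / layers**2 to hold the innermost layer at the
    original temperature, so the total is layers / layers**2 = 1 / layers."""
    per_layer_speed = 1.0 / layers ** 2
    return layers * per_layer_speed

for n in (1, 2, 4, 8):
    print(f"{n} layers -> {relative_throughput(n):.3f}x single-layer throughput")
```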

      • You might also be able to 'layer' aluminum 'channels' inside the cube to get the heat out, or perhaps use a contained liquid heat-distribution system? Layered aluminum separating the CPUs?

        Assuming you could cool the cube through layers of cooling material as thick as the CPUs, how powerful would the cube be?

    • You need surface area to move heat away, since heat flow is proportional to area and delta T. A given transistor technology needs a fixed amount of energy to switch state, plus a fixed amount of power for leakage current. So... the number of transistors you can cool is proportional to area. That's why 3D is a bad idea, unless you're talking about tech that isn't power-intensive. But even newer Flash chips get warm. Maybe it would work for memristors, which don't need static power.

      • I suggested a 3D matrix elsewhere; the response was 'brain'. Then I put down having CPUs on the outside of a cube, interconnects inside, a blood-like network of coolant, aluminum to act as the 'skull', and an internal connection/plug as the 'spine'.

        Hey, mother nature seems to think 3D is much better. If everything were best in 2D, our heads would be rather flat.

  • Fabless (Score:5, Interesting)

    by drhank1980 ( 1225872 ) on Friday November 11, 2011 @08:32PM (#38031204)

    Where I think AMD really fell behind was that they were not able to afford the kind of R&D on the manufacturing side that Intel does for each new process. AMD basically gave up and is now in the same boat as the rest of the "fabless" companies, 100% dependent on what TSMC or Global Foundries can produce. This is always going to put you at a competitive disadvantage at the very high end. While Intel is working on pushing down to 22nm FinFET for the "old" architecture, people in the design group are without a doubt working on 16nm and getting sample silicon at that node so they can tune their designs for what the transistors will really look like. When you go fabless, you get to figure this out with poor yields while in "manufacturing" at the foundry. Maybe at 130-65nm this wasn't such a big deal, but when you need to make your design work with double- or triple-patterned 193nm immersion lithography, just figuring out some design rules is no simple task.

    Also, does anyone know if there is more than one vendor in the world that can make fully depleted SOI of the quality needed for 32nm-28nm on a 300mm wafer? Last I knew, this was a major reason behind Intel pushing FinFET instead of fully depleted SOI.

    • by nsaspook ( 20301 )

      Where I think AMD really fell behind was they were not able to afford the kind of R&D on the manufacturing side that Intel does for each new process.

      This is why it's so good now to be at least a few process generations behind Intel. If you're on the rusty edge of fab technology, you get Intel's old equipment for pennies on the dollar, and it's been well maintained, unlike used equipment from Asia.

    • "Where I think AMD really fell behind was they were not able to afford the kind of R&D on the manufacturing side"

      I believe it is more a matter of market failure and problems related to anti-trust: Intel's advanced manufacturing actually hinders chip-design advancement by letting it monopolize production facilities. Look at the underhanded tactics Intel used during the Athlon era, when AMD was ahead. In my opinion we have a special case of market failure. The resources that are now req

    • by sjames ( 1099 )

      Why do people think AMD is at such a disadvantage at the high end? They're neck and neck with Intel there and tend to produce less heat. It's the high-end desktop where Intel has an advantage. In low-power embedded, AMD has the advantage.

      The only place Intel has a clear win is in benchmarks compiled with the Intel compiler without disabling the AMD-crippling function (I'm NOT making that up!).

  • by queazocotal ( 915608 ) on Friday November 11, 2011 @08:42PM (#38031280)

    An increasingly common way to get power savings is to divide the chip into oodles and oodles of blocks.
    These blocks are rapidly turned off and on as they are needed.
    This is what lets your phone last a week on battery while staying logged into Wi-Fi and 3G.
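    The saving from that kind of block-level power gating can be estimated with a simple duty-cycle model: average power is roughly active power times the fraction of time the block is busy, plus leakage power for the rest, and gating drives the leakage term toward zero. All numbers below are made-up for illustration.

```python
def average_power_mw(active_mw, leak_mw, duty_cycle, gated):
    """Average power of one block; gating (nearly) removes idle leakage."""
    idle = 0.0 if gated else leak_mw
    return active_mw * duty_cycle + idle * (1.0 - duty_cycle)

# Example (assumed): a radio block busy 2% of the time.
busy, active_mw, leak_mw = 0.02, 300.0, 30.0
print("always on  :", average_power_mw(active_mw, leak_mw, busy, gated=False), "mW")
print("power-gated:", average_power_mw(active_mw, leak_mw, busy, gated=True), "mW")
```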

  • I'm mainly referring to embedded DRAM, which they use for gigantic on-chip caches. Neither Intel nor AMD has this; they have to use SRAM for their caches, which consumes much more space on the die. Sure, IBM is targeting different customers with their pricing strategy, but just imagine having an 8-core POWER7 chip with 32MB of on-chip L3 cache in your PC. That thing would just blow any x86 CPU away. Partly because of the gigantic cooler required, I admit.
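    For a sense of why eDRAM makes a 32 MB on-die cache plausible: a 6T SRAM bit cell is several times larger than a 1T1C eDRAM cell. The cell areas (in units of F^2, where F is the feature size) and the array overhead factor below are rough ballpark assumptions, not IBM's or Intel's actual numbers.

```python
def cache_area_mm2(capacity_bytes, cell_area_f2, feature_nm, overhead=1.5):
    """Rough array area: bits * cell area, times a fudge factor for sense
    amps, decoders and redundancy (the overhead factor is an assumption)."""
    f_m = feature_nm * 1e-9
    return capacity_bytes * 8 * cell_area_f2 * f_m ** 2 * overhead * 1e6  # mm^2

CAPACITY = 32 * 1024 * 1024   # 32 MB L3, on an assumed 45 nm process
print("6T SRAM   @ ~140 F^2:", round(cache_area_mm2(CAPACITY, 140, 45), 1), "mm^2")
print("1T1C eDRAM @ ~30 F^2:", round(cache_area_mm2(CAPACITY, 30, 45), 1), "mm^2")
```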
    • by Anonymous Coward

      Is this clever troll bait, or do you simply not understand that DRAM is significantly slower to access?

      Yes, IBM has embedded DRAM as their L3 cache, but Intel and AMD essentially have embedded SRAM as their L3 cache in the range of 4-8MB, with DRAM (DDR) as L4. So what's your point, sir? That IBM cannot afford to put large SRAM on their process, so they found a way to get (slower and more power-hungry) DRAM closer to the cores as a stop-gap?

      Besides, a Power7 @ 3.8GHz has a TDP of ~200W, while an Intel Sa

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      IBM's eDRAM solution is very expensive. It is not that Intel doesn't know how to make it. They must have evaluated it but finally figured that it is a big yield issue and not worth the cost. SRAM pretty much uses the standard logic manufacturing process with straightforward customization. Including DRAM means a lot of extra process steps. And the name of the game, in the consumer space, is to reduce cost. POWER7 isn't available on any low-end system. And high-end Xeon chips have started to eat POWER7's lunch since they

  • I thought Intel's new FinFET transistor structure was going to be the new standard? They had excellent results without significant retooling or adjustment to the manufacturing process. They just built the transistors upward instead of across, at ever smaller scales.

  • by axonis ( 640949 ) on Saturday November 12, 2011 @12:42AM (#38032648)
    The processor below looks like the current transistor king to me, way beyond the scope of the discussions on Moore's Law here. Sometimes you need to think outside the mainstream box: more than double Intel's best [wikipedia.org]


    Virtex-7 2000T FPGA Device First to Use 2.5-D IC Stacked Silicon Interconnect Technology to Deliver More than Moore and 6.8 Billion Transistors, 2X the Size of Competing Devices SAN-JOSE, Calif., Oct. 25, 2011-- Xilinx, Inc. [design-reuse.com](Nasdaq: XLNX) today announced first shipments of its Virtex®-7 2000T Field Programmable Gate Array (FPGA), the world's highest-capacity programmable logic device built using 6.8 billion transistors, providing customers access to an unprecedented 2 million logic cells, equivalent to 20 million ASIC gates, for system integration, ASIC replacement, and ASIC prototyping and emulation. This capacity is made possible by Xilinx's Stacked Silicon Interconnect technology, the first application of 2.5-D IC stacking that gives customers twice the capacity of competing devices and leaping ahead of what Moore's Law could otherwise offer in a monolithic 28-nanometer (nm) FPGA.
  • regardless of "fins" or not, quantum effects spells the end of miniaturisation of features the for current digital semiconductor technology in four years or so. will we then concentrate on making better software for awhile in the lull to to finding the technology for mass-producing the cores of our computational devices?

"If it ain't broke, don't fix it." - Bert Lantz

Working...