The Transistor Wars 120
An anonymous reader writes "This article has an interesting round-up of how chipmakers are handling the dwindling returns of pursuing Moore's Law. Intel's about four years ahead of the rest of the semiconductor industry with its new 3D transistors. But not everyone's convinced 3D is the answer. 'There's a simple reason everyone's contemplating a redesign: The smaller you make a CMOS transistor, the more current it leaks when it's switched off. This leakage arises from the device's geometry. A standard CMOS transistor has four parts: a source, a drain, a channel that connects the two, and a gate on top to control the channel. When the gate is turned on, it creates a conductive path that allows electrons or holes to move from the source to the drain. When the gate is switched off, this conductive path is supposed to disappear. But as engineers have shrunk the distance between the source and drain, the gate's control over the transistor channel has gotten weaker. Current sneaks through the part of the channel that's farthest from the gate and also through the underlying silicon substrate. The only way to cut down on leaks is to find a way to remove all that excess silicon.'"
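The leakage mechanism the summary describes can be sketched with the textbook subthreshold model (illustrative numbers only; the prefactor and the slope values below are assumptions, not real process parameters):

```python
import math

def subthreshold_leakage(v_gs, v_th, n, i0=1e-7):
    """Off-state drain current (A) from the textbook subthreshold model
    I = I0 * exp((Vgs - Vth) / (n * kT/q)).

    n is the subthreshold slope factor: ~1 for ideal gate control, and it
    grows as the gate loses electrostatic control of a short channel.
    i0 is an illustrative prefactor, not a real process parameter.
    """
    v_t = 0.0259  # thermal voltage kT/q at ~300 K, in volts
    return i0 * math.exp((v_gs - v_th) / (n * v_t))

# Same transistor switched off (Vgs = 0) with Vth = 0.3 V:
strong = subthreshold_leakage(0.0, 0.3, n=1.0)  # gate fully controls channel
weak = subthreshold_leakage(0.0, 0.3, n=1.5)    # leaky short-channel device

print(f"leakage ratio (weak/strong gate): {weak / strong:.0f}x")
```

The point of the exponential: even a modest loss of gate control (n going from 1.0 to 1.5 here) multiplies off-state leakage by tens of times, which is why removing the excess silicon under the channel matters.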
Leaking silicone... (Score:5, Funny)
I thought that saline was the new medium of choice; especially after all those messy lawsuits in the 90s.
Re:Leaking silicone... (Score:5, Informative)
That silent 'e' adds carbon, hydrogen, and oxygen to silicon, leaving you with an insulator instead of a semiconductor.
Re: (Score:1)
Why don't they just add hydrogen and oxygen to the silicon when they need it to stop leaks, and take it away when they need it to conduct electricity? Some people are so dumb.
Re: (Score:1)
This actually sounds interesting, as silicone itself can withstand very high temperatures. On the other hand, its low thermal conductivity would make heat dissipation an issue.
Re: (Score:1)
Re: (Score:2, Funny)
Re: (Score:2)
People who have no understanding that chemistry is more important than the pristine contents of their spell checker.
Re: (Score:1)
Who pronounces it with a silent e.
I've never heard it pronounced without a silent e. How would you pronounce it without a silent e? Sil-ih-co-neh? Sil-ih-co-nee?
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
in the future (Score:1)
we use laser interference to calculate. no atoms = no leaks.
Re:in the future (Score:5, Interesting)
I think I know what you're talking about, and I'll elaborate on it.
Laser interferometry, scanned across an optical ROM using piezoelectric lenses. In essence it makes an optical processor achievable without requiring optical switches.
Such a thing has been done before, using other ROM technologies (usually for digital signal processing). This one has the advantage of being limited in speed only by how fast the piezoelectric lenses can be aimed, and every pseudo instruction takes only one clock cycle. Because of that it can also make CISC instructions outperform RISC (at the expense of a significantly larger ROM).
Granted, most piezoelectric crystals oscillate below 1GHz (considerably), but their limitation is due to conducting a usable amount of electricity for the purpose of making an electronic oscillator. I'm not sure what the ceiling is on how fast a piezoelectric lens could be aimed, but at any rate aiming a laser is a different bottleneck than the heat dissipation of transistors and in my opinion it's a much easier problem to work around.
Re: (Score:3)
Could you expand on that? How? For example how do you avoid the quadratic memory issues in lazy languages?
Re: (Score:3, Informative)
The computer capable of that level of introspection and inference would snort at your silly fashion bias toward functional languages. The main calling card of functional languages is to offset weakness in human cognition. The human brain struggles to convert a functional specification into an optimal state machine without dropping a stitch. Kasparov and others complain about co
Re: (Score:3)
I don't think it's blathering nonsense per se but it is cocky and arrogant,
The main calling card of functional languages is to offset weakness in human cognition. The human brain struggles to convert a functional specification into an optimal state machine without dropping a stitch.
Bingo!
Supposing this assertion is true, knowing what I know now about human nature and human factors (and my beard is starting to go grey), human cognition is the weak link and therefore we should be exploiting this feature of
Comment removed (Score:5, Informative)
Re: (Score:1)
I like the concept of Bulldozer, but I thought their modular design came at a transistor cost (though it also made design easier). If increased transistor count becomes a larger liability than it currently is when shrinking things down (by making tactics to limit power usage of unused portions of the chip less effective), it could make the ability to clock it up harder than expected.
Re: (Score:2)
Re: (Score:2)
Well, I don't like Bulldozer, at least not as it is now.
It doesn't make much sense as it is neither here nor there. They are trying to sell one module as two cores, which is ridiculous. Also, it is not clear why in real life one module (i.e., by making it execute just one thread with all its resources) can't match the performance of one decent core on a clock-by-clock basis.
Also, AMD managed to squeeze 6 K-10 cores onto 346 mm2 with 45nm geometry on Thuban (X6 1055, 1075, 1090, 1100T). With 32nm they should be able to u
Re: (Score:2)
Small 3D transistors (Score:5, Informative)
3D transistors aren't all that new; high power devices have been 3D for decades. Making 3D transistors this small is new. I wonder how long the lifetime is. The smaller the device gets, the worse the electromigration problem gets. The number of atoms per gate is getting rather small.
Note that this is different from making 3D chips. That's about making an entire IC, then laying down another substrate and making another IC on top of it. Or, in some cases, mechanically stacking the chips with vertical interconnects going through the substrate. The density improves, but the fab cost goes up, the yield goes down, and getting heat out becomes tougher. We'll see that for memory devices, but it may not be a win for CPUs.
Re:Small 3D transistors (Score:5, Informative)
3d integration should become practical when 3d cooling (channels? pipes? something else?) can also be easily integrated into the silicon. Once we can get the heat out, there's no particular reason that 3d can't *really* mean "3d integration", instead of "stack dies." I don't see any reason why this wouldn't come to pass. Even so, at the current geometries, we're approaching true high-performance systems on a chip.
Larger chips provide for more interconnects (more edge space) but at some point, that'll be overkill because the system will be all in there, and only I/O will need to be brought out. We're seeing it (in a kind of feeble way) with some of the microcontrollers, but I rather expect (ok, hope) that this will be how computers are supplied, or at least, one way they are supplied.
Re:Small 3D transistors (Score:4, Informative)
Larger chips provide for more interconnects (more edge space) but at some point, that'll be overkill because the system will be all in there, and only I/O will need to be brought out.
There's a little more to it than that. Larger chips draw a large amount of power (very suddenly) which means the number of pins used just for VDD + GND/VSS goes way up. That's especially true since more of the analog circuitry (notoriously sensitive to rail noise) would have to be integrated into the same chip within same process. That's just power, and depending on the application there's a lot more to consider for your pinout. You can really never have enough pins. That rule of thumb isn't going anywhere soon.
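The supply-pin arithmetic behind that rule of thumb can be sketched roughly (the per-ball current limit below is an assumed figure, not from any real package spec):

```python
import math

def supply_pins(power_w, vdd_v, amps_per_pin=0.25):
    """Rough supply-pin budget: current drawn divided by current per ball,
    doubled so every VDD pin gets a GND/VSS return.
    amps_per_pin (0.25 A) is an assumed per-ball limit, not a spec value."""
    current_a = power_w / vdd_v
    return 2 * math.ceil(current_a / amps_per_pin)

# A hypothetical 100 W part at 1.0 V must sink 100 A through the package:
print(supply_pins(100.0, 1.0))
```

Note how the count scales inversely with VDD: the same power at half the voltage needs twice the current, hence twice the supply pins, which is part of why big sudden draws eat so much of the pinout.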
Re: (Score:2)
If all the pins were power and optical links were used to communicate, that would reduce pin count and increase bandwidth. But of course, this has all been examined ad infinitum and has problems. Experts care to expound?
Re: (Score:1)
Would it be possible to have the chip start up layer by layer? I'm using a rack of equipment as my real world example, it effectively reduces the maximum amount of current needed at any point in time. Of course I'm making the assumption that the startup current exceeds the maximum running current under full load.
Re: (Score:1)
Re: (Score:1)
Re:Small 3D transistors (Score:5, Interesting)
The more 3D features you pattern onto a wafer, the more mechanical stress you create. This is especially true when you integrate features with different materials and different coefficients of thermal expansion. Such features can increase the warpage and bow of the wafer to such a point, that the fabrication equipment can no longer handle the wafer. It becomes like trying to feed a potato chip into a CD changer.
The larger the wafer, the worse this problem becomes, and today they're running very large 12" wafers that are quite sensitive to mechanical stress. Also, the SOI wafers are more prone to warpage than single crystal silicon.
So, the *real* 3D integration you're talking about is very difficult.
Re: (Score:2)
That's why my assertions were predicated upon cooling.
Re: (Score:1)
You were talking about integrated microchannels for cooling, right? Or through hole vias patterned into the die? That's what I'm talking about. Digging holes in the crystal and/or depositing metal both cause the wafer to warp.
Re: (Score:2)
I was just handwaving; I've seen cooling strategies (in macro) that range from fan driven air to full-immersion oil baths to pumped, pressurized materials that run through heat sinks. At micro, I really don't know what the solution is (otherwise I'd be at the patent office), I'm just surmising that as it is a physical engineering problem that directly stands in the way of technology (and huge amounts of earning potential), someone is quite likely to solve it.
It also seems very unlikely to me that we've reac
Re: (Score:2)
Well, actually, nowadays chip-level (instead of wafer-level) stacking/integration is gaining ever more ground. There are various types of micro-manipulators, and even self-aligning techniques, mostly developed by Japanese scholars, that make this task possible.
Re: (Score:3)
3d integration should become practical when 3d cooling (channels? pipes? something else?)
The obvious answer is diamond. Semiconduction and high thermal conductivity.
Re: (Score:2)
Better to add the third dimension to the calculations themselves: switch from binary '0,1' to ternary '+1,0,-1'. New problems to solve but a definite leap in 'calculation' density.
Re: (Score:1)
When diamond becomes as cheap and plentiful as silicon... Lots of research already into using diamond for high voltage power semiconductor devices.
Re: (Score:2)
synthetic diamonds
Re: (Score:2)
Alaska has patented that, won't work. Besides, you can see Russia from there. From the bridge. To nowhere.
Re:Small 3D transistors (Score:5, Interesting)
3d integration should become practical when 3d cooling (channels? pipes? something else?) can also be easily integrated into the silicon.
That's being tried by IBM. [electroiq.com] But it's probably not going to be useful for portable and mobile devices. IBM is looking at it for high-density server farms.
Re:Small 3D transistors (Score:4, Informative)
Larger chips are also more expensive. A silicon wafer costs anywhere from $1000-3000 each. Each wafer has a fixed area, and the larger the chip, the fewer of them fit per wafer. Additionally, a larger chip means more of a chance of an imperfection in the wafer destroying the entire chip, leading to lowered yields. Lowered yields mean the base price of each chip goes up, as there are fewer chips to pay for the entire batch.
There are two kinds of chips - silicon-limited, and I/O-limited. Memory devices (both volatile (DRAM) and non-volatile (flash)) are silicon-limited - they are as big as economically possible (more area == more capacity, after all), juggling yields and such to reach a usable price point.
CPUs are I/O limited - they are actually very small devices, the only thing keeping them back is the number of I/O pins. And it's not the actual silicon itself - it's the physical package that connects to the PCB. The most popular packages are BGA, but even those have specifications on ball size and ball spacing. Put the balls too close together and too small, and the cost of the base PCB holding the chip goes up significantly as the PCB has to be made to tighter tolerances.
Even so - we're talking about a thousand pins still in the latest high end Intel and AMD parts. This is doable as the PCB chip carrier can be made very specially (it only holds the chip, after all, and doesn't have to hold the rest of the circuits for the device) - basically it's a breakout board.
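The yield-versus-area tradeoff in the parent comment can be sketched with the classic Poisson yield model (the wafer cost and defect density below are illustrative assumptions, not fab figures):

```python
import math

def dies_per_wafer(die_area_mm2, wafer_mm=300.0):
    """Gross die count on a wafer, ignoring edge loss (a simplification)."""
    return math.floor(math.pi * (wafer_mm / 2) ** 2 / die_area_mm2)

def poisson_yield(die_area_mm2, defects_per_mm2=0.002):
    """Classic Poisson yield model Y = exp(-D0 * A).
    defects_per_mm2 is an illustrative defect density, not a real number."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

def cost_per_good_die(wafer_cost, die_area_mm2):
    """Wafer cost spread over the dies that actually work."""
    good = dies_per_wafer(die_area_mm2) * poisson_yield(die_area_mm2)
    return wafer_cost / good

# Doubling die area more than doubles the cost per good die, because you
# get half as many dies AND a larger fraction of them catch a defect:
print(f"100 mm2 die: ${cost_per_good_die(3000, 100):.2f}")
print(f"200 mm2 die: ${cost_per_good_die(3000, 200):.2f}")
```

This is the "fewer chips to pay for the entire batch" effect in one line: the exponential yield term compounds with the linear loss of dies per wafer.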
Re: (Score:1)
Use the brain model that mother nature provided? Processing on the outside (CPUs), connections on the inside (connectors), blood-vessel-like cooling, an aluminum 'skull', and a spinal I/O connection, all in one cube?
Re: (Score:2)
Re: (Score:1)
Note that this is different from making 3D chips. That's about making an entire IC, then laying down another substrate and making another IC on top of it. Or, in some cases, mechanically stacking the chips with vertical interconnects going through the substrate. The density improves, but the fab cost goes up, the yield goes down, and getting heat out becomes tougher. We'll see that for memory devices, but it may not be a win for CPUs.
Well, you could use it not only to pack ever more transistors onto the same area, but also to pack the same number of transistors onto a smaller area. For example, not 100 million transistors on a 10*10mm die, but 100 million transistors on a 3*3mm die, stacked 11 layers high (assuming cubic space per transistor is the same).
Sure, that would be more difficult to produce and thermally less optimal, but it would also enable shorter interconnects within an IC. Shorter interconnects -> higher clock speeds? Less interconnect -
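The footprint arithmetic above can be checked with a quick sketch (same cubic-space-per-transistor assumption as the comment):

```python
import math

def layers_needed(total_silicon_mm2, footprint_side_mm):
    """Layers required to refit the same total silicon area onto a smaller
    square footprint, assuming area per transistor is unchanged (the
    comment's assumption)."""
    return math.ceil(total_silicon_mm2 / footprint_side_mm ** 2)

# 100 M transistors filling a 10 x 10 mm die, refit onto a 3 x 3 mm footprint:
print(layers_needed(100.0, 3.0))  # 12 full layers (11 layers only covers 99 mm^2)
```

The interconnect win falls out of the same numbers: the worst-case on-die distance shrinks from the scale of a 10 mm edge to a 3 mm edge plus a few very short vertical hops.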
Intel's 3D gate transistors stop all current (Score:5, Insightful)
Current sneaks through the part of the channel that's farthest from the gate and also through the underlying silicon substrate
big "huh" at this article excerpt; the point of Intel's 3D gate transistors is that they allow for a fully depleted region of silicon in the channel. I.e., the gate is so close to the silicon that NO electrons exist in the channel when it is off. The only leakage current you can have through the channel then is quantum tunneling, and that's basically nil, bringing the total current consumption of the transistor down by a factor of 10. Ho hum silly slashdot summary, get off my lawn!
Re:Intel's 3D gate transistors stop all current (Score:4, Informative)
I think the point they're trying to make is that there's some sort of depletion going on in the channel, which causes a very small, but not insignificant, amount of current to flow from drain to source through the transition region in the substrate. From the standpoint that electrons are sitting on the upper part of the Si substrate underneath the channel, the summary makes sense. They want to remove the excess Si so that depletion-mode current is more tightly controlled.
Re:Intel's 3D gate transistors stop all current (Score:5, Interesting)
That part of the summary was probably meant to address traditional planar transistor designs, where it is roughly accurate. It is one of the reasons why Intel has been pursuing 3D transistors - more gate control over the channel and no bulk leakage.
Another approach is to use a buried oxide layer, so that the transistors simply don't have a bulk substrate, and the channel is thin enough to allow better gate control. This approach will help the leakage, but 3D gets you faster transistors, too, because there is more area the gate directly controls to form an inversion layer to conduct current. The upside of this method is that if we can fabricate the wafers, the rest of the processing is mostly the same (though those wafers will be expensive). 3D requires a lot more work, but apparently Intel has that figured out.
Re:Intel's 3D gate transistors stop all current (Score:5, Informative)
SOI limits the depth of the conductive channel by placing a film on an insulator. If the insulator is a low-k dielectric, the capacitance is reduced, helping the speed. The 3D transistor, on the other hand, has a vertical fin of semiconductor created by etching away the surrounding material. This places the flat film of semiconductor on edge; then a wrap-around gate applies the e-field on both sides and the top, essentially surrounding the doped semiconductor path on 3 of 4 sides. This places all of the channel in close proximity to the gate voltage, so a smaller voltage can pinch off the channel. SOI still has a gate on only one side (the top) of the semiconductor channel.
If you don't understand the tech, a photo is worth many words. A photo can be seen here.
http://www.pcmag.com/article2/0,2817,2384909,00.asp#fbid=2uqV-rrPnOE [pcmag.com]
Most people do not understand the photo. The center lattice structure contains 12 transistors. It has 6 parallel N-channel devices in series with 6 parallel P-channel devices. The semiconductor is the shorter fins under the higher fins. There are 6 of these fins with 2 transistors each, configured in complementary pairs as a basic inverter. The 5 bars on top are the Sources on the ends, the Drain in the center, and the two Gates in between. The gate wraps the channel under it between the source and drain of each transistor. This is considerably different from SOI technology.
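The geometric advantage being described - gate on three sides instead of one - is often summarized by the tri-gate effective-width rule; a quick sketch with assumed (not Intel-published) dimensions:

```python
def tri_gate_width_nm(fin_height_nm, fin_width_nm):
    """Gate-controlled channel width of one fin: the gate wraps both
    sidewalls plus the top, so W_eff = 2*H + W (the standard tri-gate rule).
    A planar device of similar footprint is gated on the top face only."""
    return 2 * fin_height_nm + fin_width_nm

# Illustrative dimensions (assumptions, not Intel's actual 22 nm figures):
planar_nm = 30.0                        # 30 nm wide planar channel, gated on top
fin_nm = tri_gate_width_nm(34.0, 8.0)   # a tall, narrow fin of similar footprint
print(f"gated width: planar {planar_nm:.0f} nm vs fin {fin_nm:.0f} nm")
```

More gated width per unit of footprint is where both the drive-current gain and the stronger pinch-off come from: the same footprint now puts far more channel within reach of the gate field.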
Re: (Score:2)
Exactly, but more detail than I decided to go into :)
I'd probably just link to the Ars or TechReport article instead, though those may go over the head of people that have no education in this stuff.
Re:Intel's 3D gate transistors stop all current (Score:4, Interesting)
This has been one of their major bullet points; the next round of processors will improve power consumption a lot. So if Intel's not on the right path, I don't know who is. AMD's Bulldozer certainly is not. Of course, sooner or later this is going to come to a halt; silicon atoms are roughly 0.235nm apart, so 22/0.235 = 93.6 atoms. The roadmap [blogspot.com] puts us at 8nm = 34 atoms in 6 years. Just extrapolating, in 2023 it'll be 12 atoms, in 2029 4.5 atoms, and in 2035 1.6 atoms. That's not going to happen; at the latest, in the 2020s we will hit a brick wall and Moore's "law" will be dead. We'll hit some level of energy efficiency and most likely stay there.
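The comment's extrapolation can be reproduced directly (same 0.235 nm atom spacing, and the same 22-to-8 nm shrink factor per six years):

```python
SI_SPACING_NM = 0.235  # approximate Si-Si bond length

def atoms_across(feature_nm):
    """How many silicon atoms fit across a feature of the given size."""
    return feature_nm / SI_SPACING_NM

# 22 nm now, 8 nm six years out, then keep shrinking by the same 22/8
# factor every six years until the feature is less than one atom wide:
node_nm, year, shrink = 22.0, 2011, 22.0 / 8.0
while atoms_across(node_nm) >= 1.0:
    print(f"{year}: {node_nm:5.2f} nm ~ {atoms_across(node_nm):4.1f} atoms")
    node_nm /= shrink
    year += 6
```

Running this reproduces the comment's sequence (roughly 94, 34, 12, 4.5, and 1.6 atoms), which is the whole argument: the geometric trend runs out of atoms within a couple of decades.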
Re: (Score:2)
silicon lattice constant is like 4.5 angstroms or something I believe not 2.35...and apparently that puts the wall at about 5nm, "with tweaking maybe 4nm". Maybe we can play with carbon nanotubes then but we'll hit a wall regardless, can't go much smaller...will be interesting to see what happens to intel's stock and employment. If I had to guess they will just lay everybody off except the actual fabrication workers and continue selling the chips...
Re: (Score:2)
Lattice constant is 5.431 angstroms actually, but that's the edge of the (cubic, not hexagonal) unit cell of the crystal structure, not the shortest distance between two atoms. So 8nm = 34 atoms in a row, or about 15 unit cells; atoms are just easier to understand. In any case I deliberately didn't predict a limit, because people have been wrong about this so many times before. When the PIV started running into massive leakage current at 90nm, people were also saying "this is it", and now 8nm is on the roadmap. Maybe we'll run into that
Re: (Score:2)
The other advantage a wrap around gate provides is the ability to pinch off the channel at LOWER voltage. This is essential for low power high speed transistors. The overall improvement is lower leakage at lower voltage, lower current, and thus lower power at high speed.
This moves a 90 Watt part to a 9 watt part at about the same speed, or much lower Watt part at slower speeds. This is essential to bring desktop features to Ultrabooks and other low profile devices with relatively small bat
Re: (Score:1)
silicon-on-insulator (SOI) (Score:2)
Re:silicon-on-insulator (SOI) (Score:5, Informative)
You are not going to address Moore's Law with silicon -- the problem with silicon is that it has an effective 4 to 5 GHz barrier -- which is its dirty little secret that no one wants to talk about. The army had 100 GHz chips 20 years ago -- guess what, they weren't using silicon, but a germanium compound.
The only "real" solution is to start looking at other materials.
Re: (Score:3, Informative)
Germanium has been used in semiconductors longer than silicon, and it is widely used today. One of the emerging alternatives to silicon is a germanium-silicon alloy, which has been gaining traction for some years now, so this is nothing new.
Re: (Score:1)
Moore's Law only makes sense when applied to bulk CMOS. No other semiconductor technology has the momentum to challenge bulk CMOS. No other material has an easily made complementary transistor pair. You can get a III/V or II/VI compound to operate with majority carriers at THz frequencies, but it is difficult to find a replacement for the easily created p and n enhancement devices of bulk CMOS. If SOI, GaAs, InGaAs, SiGe, GaN, SiC, nanowire FETs, or any other "exotic" material had
Re: (Score:1)
Re: (Score:1)
Err... the frequency you're talking about is the transition frequency of the process. Modern CMOS processes have f_t's in the 100+ GHz range. Just because the transistors themselves are that fast, doesn't mean you can design digital systems that clock that fast. It's trivial to design an extremely high speed processor, but it's just not economical since the power consumption goes up as operating frequency squared. The limit to processors today is the ability to dissipate this heat. Of course, new materials
New physical design. (Score:3, Interesting)
In the whole CPU mounting in desktop PCs the heatsink/fan combos are massive beasts on top of the chip itself. The 'chip' is mostly a heat sink itself with larger connectors to connect up to the motherboard. Compared to the actual chip the connection and heat dissipation materials are huge.
Has anyone tried to create a silicon cube composed of layer upon layer of CPUs that is of low enough speed that heat isn't a problem, especially if you coat it with aluminum? How many CPUs could one fit in such a cube the size of a modern heat sink and how much parallel processing power do you think one could get out of it? Would it be able to stand up to a modern CPU? Given at least a few dozen CPUs could be fit into such a thing the parallel processing should be impressive, I'd think.
Of course, I'm no computer engineer which is why I'm posting this to see how good/bad this idea is.
Re: (Score:1)
Still never understood why CPUs have to be installed in a motherboard-mounted socket rather than on their own board with a connector, like a GPU. Or why all the connectors have to be on the motherboard and not on a separate board.
I'm guessing that trying to create a silicon cube based on multiple layers would increase the chances of defects, reducing the yield of functional dies. Even if you did get two successful slices, there's always the chance something would get trapped in between.
Maybe you could create a heat sink
Re: (Score:2)
It also increases the number of operations.
This is bad.
Adding more steps adds cost.
Re: (Score:2)
they tried that, the slot-based pentiums. they were a hassle. then more recently they had the ball grid array to avoid the oft-broken pins...
Re: (Score:1)
Re: (Score:1)
Makes me wonder what one could do if you tossed traditional 2-D design (Most CPUs do have layers but very much 2D for all that) and went for a more 3D design like in the human brain. Create a miniaturized silicon 'matrix' of semi-conductor connections in a similar way to the human brain, for example.
Re:New physical design. (Score:5, Informative)
Makes me wonder what one could do if you tossed traditional 2-D design (Most CPUs do have layers but very much 2D for all that) and went for a more 3D design like in the human brain. Create a miniaturized silicon 'matrix' of semi-conductor connections in a similar way to the human brain, for example.
The 2 main problems with 3D are currently fabrication density (defect issues, stress, strain, etc.) and how to get rid of all that heat. In your brain, that is solved by self-assembly, redundancy, low usage, and a circulatory system. The current computing model of a usable CPU (runs an OS, does IEEE floating point arithmetic, does branching/looping) is probably too complex to solve this problem the same way in the foreseeable future. Of course, if we change the definition of what a usable CPU is, then perhaps this would be more feasible.
On the other hand, there is some progress being made on bump-stacked or through-substrate via (TSV) assemblies (sometimes done for DRAM & flash for cellphones) and even some limited 2-layered silicon devices (two layers of active devices per silicon die instead of the current one).
Stacked silicon die are promising, but there is currently a large overhead for mechanical connection between die so the density isn't very good. Also there's the problem of differing thermal expansion coefficients between the die that cause mechanical instabilities (which currently has to be solved by just putting in even more interconnect area overhead/margin).
The 2-layered devices are usually not made by stacking two active layers on the same wafer (because it's currently hard to grow a new thick uniform layer of silicon on top of existing circuitry). Instead, they are made by patterning one side, sticking a new clean wafer on top, flipping the stack over, shaving off the new top (which used to be the bottom of the original patterned wafer), and then doing a new pattern on the newly shaven surface. As you might imagine, this isn't currently very scalable to more layers, as defects will eventually dominate the system.
Neither technique is currently very good for getting the heat out.
People are working on this, and some limited stuff has made its way out of the lab and into production, but none of the 3D stuff is currently much better than just doing a standard planar chip for most typical CPU projects right now. It's just a niche...
Re: (Score:2)
You get the floating point arithmetic for free in the brain - the time between firings of a particular neuron is inversely proportional to the strength of a particular input. Stronger signals mean shorter pulse intervals. More precision is gained by having more neurons for that input.
You most certainly do not get floating point arithmetic in the brain, you get analog arithmetic in the brain. Brain style arithmetic is more like fixed point arithmetic (or perhaps some log-transformed version of it), than floating point arithmetic (which has dynamic precision and defined rounding among other properties). Also, most folks don't like CPUs that almost get the same answer depending on what mood the computer is in. Of course if we change the definition of what would be a useful CPU, than m
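The contrast being drawn here - statistical rate coding versus exact arithmetic - can be sketched crudely (a toy model, not a claim about real neurons):

```python
import random

def rate_coded_value(true_value, n_neurons, window, seed=0):
    """Decode a value in [0, 1] from spike counts: each 'neuron' fires on
    each timestep with probability equal to the value. A crude rate-coding
    sketch; all parameters here are illustrative assumptions."""
    rng = random.Random(seed)
    ticks = n_neurons * window
    spikes = sum(rng.random() < true_value for _ in range(ticks))
    return spikes / ticks

# Precision improves with more neurons (or longer windows), but the answer
# is statistical - fixed-point-ish with noise, never IEEE floating point:
print(rate_coded_value(0.3, n_neurons=10, window=100))
print(rate_coded_value(0.3, n_neurons=1000, window=100))
```

The estimate converges as roughly 1/sqrt(samples), which is exactly the "more neurons for more precision" tradeoff - and exactly why it is not floating point arithmetic with defined rounding.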
Re:New physical design. (Score:4, Interesting)
Human brains are more like a supercomputer architecture. The outer layers (gray matter) do all the calculations while the inner layers (white matter) do the connections. There are diffusion-based MRI images that show how all the interconnects go.
For heating, you have the arterial blood supply, while for cooling, you have the venous blood supply, which draws the heat out. For pressure equalisation, there's the circle of Willis, a ring of arteries. Even the flow of blood and nutrients isn't a simple pumping process; it's regulated by a neural system of its own [brain-aneurysm.com].
Re: (Score:1)
So... outer layer of the cube would be the CPUs, the internal portions would be the interconnects, and some kind of oil or other liquid can be patterned like blood vessels inside. All protected by aluminum (skull) and plugged into a central slot (spine).
Re: (Score:2)
Re: (Score:2)
The socket is so a single SKU of motherboard can be fitted with a variety of CPUs of different speeds after manufacture. In the past, it was common to upgrade the CPU after the fact, but these days by the time you want to upgrade, the new CPUs want a new socket anyway. In the embedded world, the CPU is typically soldered on like all the other chips. The connections all tend to be on the mainboard rather than a daughter board mainly due to the number of connections needed, especially when the memory control
Re: (Score:3)
Combining the two effects, the innermost layer rises in temperature by a factor of 4, which means that speed has to be reduced by a factor of 4 to get back down to the temperature of the previous iteration. Thus twice as many layers means 2/4 = half the processing power.
There's a gain to be had th
Re: (Score:1)
You might also be able to 'layer' aluminum 'channels' inside the cube to get the lowered heat out or perhaps a contained, liquid heat distribution system? Layered aluminum separating CPUs?
Assuming you could cool the cube down through layers of cooling material equally thick as the CPUs how powerful would the cube be?
Re: (Score:2)
You need surface to move heat away since heat flow is proportional to area and delta T. A given transistor technology needs a fixed amount of energy to switch state plus a fixed amount of power for leakage current. So... the number of transistors you can cool is proportional to area. That's why 3D is a bad idea, unless you're talking about tech that's not power intensive. But even newer Flash chips get warm. Maybe for memristors which don't need static power consumption.
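The area-limited-cooling argument above can be put in numbers (the 100 W/cm^2 flux limit is an assumed round figure for conventional heatsinking, not a spec):

```python
def removable_heat_w(surface_cm2, flux_limit_w_per_cm2=100.0):
    """Heat you can pull off a cooled face: area times an assumed flux
    limit. 100 W/cm^2 is a rough illustrative figure, not a real spec."""
    return surface_cm2 * flux_limit_w_per_cm2

def per_layer_clock_fraction(layers):
    """If switching power scales roughly linearly with clock and all the
    stacked layers share one cooled face, each of n layers gets ~1/n of
    the planar power budget, hence ~1/n of the clock."""
    return 1.0 / layers

print(removable_heat_w(2.0))         # a 2 cm^2 die face tops out near 200 W
print(per_layer_clock_fraction(4))   # 4 stacked layers -> ~25% clock each
```

That fixed budget is the whole objection to 3D for power-hungry logic: stacking multiplies the transistors but not the cooled surface they must all share.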
Re: (Score:1)
I suggested 3D matrix elsewhere, response was 'brain'. Then I put down having CPUs on the outside of a Cube, interconnects inside, a blood-like network of coolant, aluminum to act as the 'skull', and an internal connection/plug as the 'spine'.
Hey, mother nature seems to think 3D is much better. If everything was best for 2D then our heads would be rather flat.
Fabless (Score:5, Interesting)
Where I think AMD really fell behind was that they were not able to afford the kind of R&D on the manufacturing side that Intel does for each new process. AMD basically gave up and is now in the same boat as the rest of the "fabless" companies, 100% dependent on what TSMC or GlobalFoundries can produce. This is always going to put you at a competitive disadvantage at the very high end. While Intel is working on pushing down to 22nm FinFET for the "old" architecture, people in the design group are without a doubt working on 16nm and getting sample silicon at that node so they can tune their designs for what the transistors will really look like. When you go fabless you get to figure this out with poor yields while in "manufacturing" at the foundry. Maybe at 130-65nm this wasn't such a big deal, but when you need to make your design work with double- or triple-patterned 193nm immersion lithography, just figuring out some design rules is no simple task.
Also, does anyone know if there is more than one vendor in the world that can make fully depleted SOI of the quality needed for 32nm-28nm on a 300mm wafer? Last I knew, this was a major reason behind Intel pushing FinFET instead of fully depleted SOI.
Re: (Score:1)
Where I think AMD really fell behind was they were not able to afford the kind of R&D on the manufacturing side that Intel does for each new process.
This is why it's so good now to be at least a few process generations behind Intel. If you're on the rusty edge of fab technology you get Intel's old equipment for pennies on the dollar and it's been well maintained unlike used equipment from Asia.
Re: (Score:3)
"Where I think AMD really fell behind was they were not able to afford the kind of R&D on the manufacturing side"
I believe it is more a matter of market failure and problems related to anti-trust: Intel's advanced manufacturing actually hinders chip-design advancement by letting it monopolize production facilities. Look at the underhanded tactics Intel used during the Athlon era, when AMD was ahead. In my opinion we have a special case of market failure. The resources that are now req
Re: (Score:2)
"If anything, this is a poster child for the success of capitalism."
No, because Intel used all sorts of bribery and nonsensical bullshit with other companies to get them not to buy AMD when AMD had the advantage. It's not that AMD couldn't compete; it's that Intel used its market power and monopoly position in an abusive way. This is the problem with you Americans: you're fucking stupid. The complete lack of anti-trust enforcement in the US has led to AMD getting fucked even when it was ahead.
Re: (Score:2)
Why do people think AMD is at such a disadvantage at the high end? They're neck and neck with Intel there and tend to produce less heat. It's on the high-end desktop where Intel has an advantage; in low-power embedded, AMD has the advantage.
The only place Intel has a clear win is in benchmarks compiled with the Intel compiler without disabling the AMD-crippling function (I'm NOT making that up!).
Not the only way. (Score:3)
An increasingly common way to get power savings is to divide the chip into oodles and oodles of blocks.
These blocks are rapidly turned off and on as they are needed.
This is what lets your phone last a week on a battery while staying logged into Wi-Fi and 3G.
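The idea above can be sketched as a toy model: each block draws full dynamic power only while enabled, plus a small leakage floor when gated off, so total chip power tracks which blocks are awake. All class names and milliwatt figures here are illustrative assumptions, not real silicon numbers.

```python
# Toy model of per-block power gating (illustrative numbers only).
class PowerGatedBlock:
    def __init__(self, active_mw, leak_mw):
        self.active_mw = active_mw  # draw while the block is switched on
        self.leak_mw = leak_mw      # residual leakage while gated off
        self.on = False

    def power_mw(self):
        return self.active_mw if self.on else self.leak_mw

def total_power_mw(blocks):
    # Chip power is just the sum over all blocks in their current state.
    return sum(b.power_mw() for b in blocks)

# A phone SoC with a gated CPU and a radio that wakes only briefly.
cpu = PowerGatedBlock(active_mw=500.0, leak_mw=5.0)
radio = PowerGatedBlock(active_mw=300.0, leak_mw=2.0)

radio.on = True                      # brief wake-up to stay associated
awake = total_power_mw([cpu, radio])  # 5.0 + 300.0 = 305.0 mW

radio.on = False                     # gated off again between beacons
idle = total_power_mw([cpu, radio])   # 5.0 + 2.0 = 7.0 mW
```

Because the radio is off almost all the time, average power sits near the 7 mW idle floor rather than the 305 mW active figure, which is the whole point of fine-grained gating.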
It's an iPhone 4S (Score:2)
.. powered down, of course. :-)
Isn't IBM the one who's 4 years ahead? (Score:1, Flamebait)
Re: (Score:1)
Is this clever troll bait, or do you simply not understand that DRAM is significantly slower to access?
Yes, IBM uses embedded DRAM as its L3 cache, but Intel and AMD essentially have embedded SRAM as their L3 cache in the 4-8MB range, with DRAM (DDR) as the next level. So what's your point, sir? That IBM cannot afford to put large SRAM on its process, so it found a way to get (slower and more power-hungry) DRAM closer to the cores as a stopgap?
Besides, a Power7 @ 3.8GHz has a TDP of ~200W, while an Intel Sa
Re: (Score:2, Interesting)
IBM's eDRAM solution is very expensive. It's not that Intel doesn't know how to make it; they must have evaluated it and concluded that it's a big yield issue and not worth the cost. SRAM pretty much uses the standard logic manufacturing process with straightforward customization, while including DRAM means a lot of extra process steps. And the name of the game, in the consumer space, is to reduce cost. Power7 isn't available on any low-end system, and high-end Xeon chips have started to eat Power7's lunch since they
Re: (Score:2)
Huh - wouldn't 13 MICROseconds be 76.9 thousand reads/s and 13 ns be 76.9 MILLION reads/s?
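The parent's arithmetic checks out: at one access per latency period, throughput is simply the reciprocal of the latency. A quick sanity check (the helper function name is mine):

```python
def reads_per_second(latency_s):
    # One blocking read per latency period -> throughput = 1 / latency.
    return 1.0 / latency_s

micro = reads_per_second(13e-6)  # 13 microseconds -> ~76.9 thousand reads/s
nano = reads_per_second(13e-9)   # 13 nanoseconds  -> ~76.9 million reads/s

print(round(micro))  # 76923
print(round(nano))   # 76923077
```

So yes: microseconds gives you tens of thousands of reads per second, nanoseconds tens of millions, a factor of 1000 apart.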
Fin Field Effect? (Score:2)
I thought Intel's new FinFET transistor structure was going to be the new standard? They had excellent results without significant retooling or adjustment of the manufacturing process; they just built the transistors upward instead of across, at ever smaller scales.
Xilinx are the leaders in transistor count at 6.8B (Score:5, Informative)
Virtex-7 2000T FPGA Device First to Use 2.5-D IC Stacked Silicon Interconnect Technology to Deliver More than Moore and 6.8 Billion Transistors, 2X the Size of Competing Devices. SAN JOSE, Calif., Oct. 25, 2011 -- Xilinx, Inc. [design-reuse.com] (Nasdaq: XLNX) today announced first shipments of its Virtex®-7 2000T Field Programmable Gate Array (FPGA), the world's highest-capacity programmable logic device, built using 6.8 billion transistors and providing customers access to an unprecedented 2 million logic cells, equivalent to 20 million ASIC gates, for system integration, ASIC replacement, and ASIC prototyping and emulation. This capacity is made possible by Xilinx's Stacked Silicon Interconnect technology, the first application of 2.5-D IC stacking, which gives customers twice the capacity of competing devices, leaping ahead of what Moore's Law could otherwise offer in a monolithic 28-nanometer (nm) FPGA.
2013 - 2015 end of line at 11nm or slightly below (Score:2)
regardless of "fins" or not, quantum effects spells the end of miniaturisation of features the for current digital semiconductor technology in four years or so. will we then concentrate on making better software for awhile in the lull to to finding the technology for mass-producing the cores of our computational devices?
Re: (Score:1)
I think I tripped...