TSMC and Global Foundries Plan Risky Process Jump As Intel Unveils 22nm SoC
MrSeb writes with news on the happenings with next generation fabrication processes. From the article: "... Intel's 22nm SoC unveil is important for a host of reasons. As process nodes shrink and more components move on-die, the characteristics of each new node have become particularly important. 22nm isn't a new node for Intel; it debuted the technology last year with Ivy Bridge, but SoCs are more complex than CPU designs and create their own set of challenges. Like its 22nm Ivy Bridge CPUs, the upcoming 22nm SoCs rely on Intel's Tri-Gate implementation of FinFET technology. According to Intel engineer Mark Bohr, the 3D transistor structure is the principal reason why the company's 22nm technology is as strong as it is. Earlier this year, we brought you news that Nvidia was deeply concerned about manufacturing economics and the relative strength of TSMC's sub-28nm planar roadmap. Morris Chang, TSMC's CEO, has since admitted that such concerns are valid, given that performance and power are only expected to increase by 20-25% as compared to 28nm. The challenge for both TSMC and GlobalFoundries is going to be how to match the performance of Intel's 22nm technology with their own 28nm products. 20nm looks like it won't be able to do so, which is why both companies are emphasizing their plans to move to 16nm/14nm ahead of schedule. There's some variation on which node comes next; both GlobalFoundries and Intel are talking up 14nm; TSMC is implying a quick jump to 16nm. Will it work? Unknown. TSMC and GlobalFoundries both have excellent engineers, but FinFET is a difficult technology to deploy. Ramping it up more quickly than expected while simultaneously bringing up a new process may be more difficult than either company anticipates."
Re: (Score:2)
One one? There were multiple projects from multiple companies mentioned in the article.
Re: (Score:2)
That was 2007. I've been on the same gig for a while now.
My job isn't a secret. It's quite public facing in some ways. But some things are not my secrets to give away, even though they are very nice things technologically speaking.
Re: (Score:3)
Clearly I made the mistake of posting my throwaway comment that landed as the first post, so people responded to it.
So I will add more detail:
I don't develop process technology, but I get to design logic circuits on this technology and it is indeed rather awesome.
After 20 years of gradual and steady feature size reduction, the switch to 22nm and beyond feels like a step change way beyond the normal gradual improvement. In that sense it is rather awesome, because things that were previously t
Re: (Score:2)
I thought everyone who cared knew what I did. Google RdRand and RdSeed. That's my thing.
Remember 40nm at TSMC? (Score:2)
I couldn't possibly comment because they'd fire me.
But it is rather awesome.
Is that sarcasm? You can't comment means you won't add criticism or praise? I remember the HUGE cock-up that TSMC caused AMD when they went to the 5000 series GPUs. They had QC issues [xbitlabs.com] for all the rev.0 chips, and none of them would overclock. The 3 that I bought (sequentially) all needed super-cooling OR underclocking to perform consistently.
Maybe it's just me, but I'm extremely sceptical that TSMC will be able to pull this off properly.
Re: (Score:2)
>Is that sarcasm?
Not at all. The technology is awesome in that it is so much better than what went before and that makes it a joy to work on.
I think my comment was misinterpreted because it inadvertently landed as the first post.
SoC (Score:5, Informative)
In case anyone else was wondering, SoC stands for System on a Chip [wikipedia.org]
Re: (Score:3)
Re: (Score:3)
Re:SoC (Score:5, Informative)
There are many things more complex about an SoC vs. just a CPU chip. Although CPUs are complicated beasts in their own right, if you follow the recent trends, they stamp down 4 of the units on the chip with lots of cache and only a few different Input/Output pad connections (e.g., DRAM, DMI). On an SoC, you've got lots of different types of units (CPUs, GPUs, video decoders, wireless MACs, USB controllers, etc.), each having their own clock, power and I/O requirements, and most of the time some licensed designs from outside IP vendors (of varying quality and originating from different design and testing environments), which all have to be integrated on the same chip.
Today, operations like place and route, timing closure, power and noise crosstalk, clock generation, etc., are tough things to do. If you only have a few identical things (say, 4 cores and 2 caches on a chip), you can leverage a lot between these modules. On an SoC, you need to do these things for all units, but you can't really leverage much between modules because they are so different, so some of the work is simply more complex (not necessarily harder, but more work and irregular work, so it's easier to overlook things, i.e., high complexity). There are also tons of secondary issues (e.g., thermal/electrical power sharing between GPU/CPU, low-power standby modes) that you don't necessarily find in a CPU-only design that also need to be designed and analyzed (you can't fix them after you tape out the SoC, whereas you might be able to fix them on a board in a discrete design).
On the electrical I/O front, designing and characterizing a few standard I/Os that only have to drive a few mm on fairly standardized circuit boards isn't the same as having lots of different I/Os that run at different frequencies, have varying drive voltage requirements and high-density packaging, and still need a routable board with good signal integrity in several different circuit board designs. Just because Intel could get a few standard low-swing I/Os running on their 22nm process didn't mean it was a cakewalk for them to design I/Os that hook up to cables, run at higher voltages, and experience more severe ESD issues (you don't want to zap your SoC when you walk across the carpet).
The fact that they got the stuff they need for SoCs working from a design integration and electrical I/O point of view on their advanced 22nm process is certainly a big advance for them worthy of trumpeting...
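The timing-closure work described in the comment above can be illustrated with a toy example. This is a hypothetical sketch, not a real EDA flow, with invented gate names and delays: finding the slowest (critical) path through a gate-level DAG is the core calculation behind timing closure, and on an SoC it must be redone for every dissimilar block.

```python
# Toy static-timing sketch (illustrative only, not a real EDA tool):
# find the critical-path delay through a small gate-level DAG.
# Gate names and delays (in nanoseconds) are invented for the example.
delays = {"in": 0.0, "and1": 0.3, "or1": 0.2, "xor1": 0.4, "out": 0.1}
fanin = {"and1": ["in"], "or1": ["in"], "xor1": ["and1", "or1"], "out": ["xor1"]}

def arrival(node, memo={}):
    """Latest signal-arrival time at a node = own delay + slowest input."""
    if node not in memo:
        preds = fanin.get(node, [])
        memo[node] = delays[node] + (max(arrival(p) for p in preds) if preds else 0.0)
    return memo[node]

print(arrival("out"))  # critical-path delay, ~0.8 ns for this toy netlist
```

With a handful of identical cores, this analysis (and its fixes) can be reused across copies; with dozens of heterogeneous licensed blocks, each gets its own graph, constraints, and corner cases.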
Re: (Score:2)
Yes, an SoC is a significantly bigger job than a pure CPU core. But Intel hasn't been producing pure CPU cores for a long time; an Ivy Bridge has a large GPU, a collection of video accelerators, two DDR3 controllers, a PCIe 3.0 interface, and quite a fancy power-management microcontroller. The die is less than 50% occupied by CPU cores.
Re: (Score:2)
Yes, an SoC is a significantly bigger job than a pure CPU core. But Intel hasn't been producing pure CPU cores for a long time; an Ivy Bridge has a large GPU, a collection of video accelerators, two DDR3 controllers, a PCIe 3.0 interface, and quite a fancy power-management microcontroller. The die is less than 50% occupied by CPU cores.
But on Ivy Bridge, all the wacky I/O is in the southbridge (connected through DMI, a PCIe-like physical interface), which was manufactured on an older process technology. On a true SoC, you need to pull all the cruft from the southbridge into your main chip, which means you need to port all the I/O cells to your main chip on the advanced process. You also need to worry more about board routability (with a southbridge, you get to put all that nasty I/O stuff far away from your main chip, avoiding many of the r
Re: (Score:2)
>Why are SoCs more complex than CPU designs?
They aren't. Not as far as I can tell. I swim in both seas.
SoCs have some integration issues because they tend to emphasize IP reuse (i.e., dropping in standard designs for things like interfaces), and IP reuse is always more difficult than it looks on the surface.
CPUs tend to focus more on build-it-yourself architecture. But the distinctions are very blurred these days.
CPUs tend to be associated with big core/desktop
SoCs tend to be associated with small core/hand
Re: (Score:2)
Screw the 3D printers, I'm going to mill my own SoC. All I need is a sub-micron, square end mill bit.
Marketing 14nm, not real 14nm (Score:5, Insightful)
If you read the announcements, you will see weasel words like "14nm class". The bottom line is: these are not 14nm processes. It would be more accurate to call them 20nm with FinFETs. GlobalFoundries' process does reduce some parameters from their 20nm planar node, but there is nothing 14nm about it.
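As a rough sanity check on the naming complaint above: ideal area scaling goes with the square of the linear feature size, so a true 20nm-to-14nm shrink would roughly halve transistor area. A minimal sketch using only the nominal node names (the point being that a "14nm-class" process keeping 20nm-class pitches gets none of this):

```python
def ideal_area_scale(old_nm, new_nm):
    """Ideal transistor-area ratio if node names tracked real linear dimensions."""
    return (new_nm / old_nm) ** 2

# If "14nm" were a true linear shrink from 20nm, area would roughly halve:
print(ideal_area_scale(20, 14))  # 0.49 -> about 2x density
# A "20nm with FinFETs" process keeps 20nm-class pitches:
print(ideal_area_scale(20, 20))  # 1.0 -> no density gain from the name alone
```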
Re: (Score:2, Funny)
I don't care if it's "real" 14nm or fake. What counts is how fast the resulting chips are, and how many MIPS/Watt they achieve. At the end of the day, the whole thing is indistinguishable from magic [youtube.com].
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Re:Marketing 14nm, not real 14nm (Score:5, Insightful)
...there is nothing 14nm about it.
Add more Gs to it. That's what the telcos did. They bring a 2G, you bring a 3G. They bring a 3G, you bring a 4G. That's the chic--marketing way! Then we took that whole gigabyte thing with harddrives and just rounded down. Asking companies to compete based on actual specifications instead of marketing bullshit is communist. If you support that kind of commie non-sense then you're the reason we're losing jobs to China. Blah blah blah... *barfs*
Well in this case (Score:2)
They have more of a marketing issue because they are up against someone with better technology. Intel tends to be around a node ahead of everyone else because they invest massive amounts into R&D, billions a year.
So it isn't like the telcos trying to market "moar Gzzzz!!!11" to consumers, it is that they are trying to figure out a way to catch Intel.
Re: (Score:2)
So it isn't like the telcos trying to market "moar Gzzzz!!!11" to consumers, it is that they are trying to figure out a way to catch Intel.
They could try investing in R&D. You know, just a thought...
Re:Well in this case (Score:5, Informative)
Re: (Score:3)
quite true and I'm sad. I want the end of the silicon roadmap as soon as possible
Re: (Score:3)
The irony also is that it's a SoC, so most of the transistors there are NOT going to be "14nm" or "22nm" or whatever. They're going to be larger.
Why? Several things decide the size of a transistor - first, the use of the
Where is the damn article?? (Score:1)
Where is the damn article?? I don't see any link to the actual article. Is this the new slashdot?
Re: (Score:1)
OK, found it. It's at the bottom of the summary :)
Re:Where is the damn article?? (Score:5, Funny)
Ah, not the new /. at all, just the old /. readers.
Unfair (Score:1)
This is obviously unfair of Intel, out-innovating the rest of the market like this. We should curb it somehow.
Re: (Score:2)
>Intel, you didn't build that!
I happen to live next door to the Ronler Acres Intel Fantasy Fab Land.
Somebody most definitely did build that, because for 6 months there was a 'kin huge Lampson Transilift visible out the back window.
http://www.splatzone.com/lampson/ [splatzone.com]
Re: (Score:2)
Found a picture of the one I saw..
http://www.flickr.com/photos/67292116@N00/6139404916/ [flickr.com]
It just looks like a crane in the picture, but when people walked by, it was the sort of crane that made people stop and say "By golly, that's a big crane!"
Re:Yeah but can it run... (Score:5, Interesting)
I'm a bit scared of all this die shrinkage.
We have lots of perfectly working gear around here older than most of our offspring...
As transistor count goes up and feature size goes down, can we expect more of our gear to start going haywire over a shorter length of time, or is there something baked into the process steps to counteract or actually improve reliability?
I'm not sure why this was modded down. Flash in particular has problems with smaller die sizes, and while lower longevity has certain economic benefits, environmentally it's a dead end.
The other thing is the 11-year solar cycle... if we develop some ultra-high density technology during the low ebb, we may find that half our electronics get frazzled during the solar maximum.
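The reliability worry above is usually quantified in FIT (failures in time: failures per 10^9 device-hours). A sketch with made-up numbers, purely to show the arithmetic (the FIT rate and memory size here are illustrative assumptions, not measured data for any process):

```python
# All numbers below are illustrative assumptions, not measured data.
FIT_PER_MBIT = 1000.0   # assumed soft-error rate per megabit of SRAM (FIT)
MBITS = 8 * 1024        # 1 GB of unprotected memory, in megabits

total_fit = FIT_PER_MBIT * MBITS    # failures per 1e9 device-hours
mtbf_hours = 1e9 / total_fit        # mean time between soft errors
mtbf_days = mtbf_hours / 24
print(round(mtbf_days, 1))          # roughly five days between soft errors
```

The same arithmetic shows why per-bit rates matter more as density climbs: double the bits at the same per-megabit FIT and the mean time between errors halves, which is why ECC and process-level hardening get more attention at each shrink.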
Re: (Score:2)
I'm a bit scared of all this die shrinkage.
I'm not sure why this was modded down.
Political correctness. Think about it for a second, you'll get it.
Re: (Score:2)
Political correctness. Think about it for a second, you'll get it.
Having pored over it for quite some time, I can only assume it's some peculiarity of US English which I will need help to see.
Re: (Score:2)
22nm die: I was in the pool! I was in the pool!
Can TSMC really do it? (Score:2)
It took Intel 10 years to take FinFET from concept to production, yet TSMC are claiming they can do it in only 2 years. Is that even feasible? Even if it is, doesn't Intel have patents on the tech?
Re: (Score:2)
yet TSMC are claiming they can do it in only 2 years.
Where did you get that piece of information?
Firstly, FinFETs have been a subject of research for quite a while, much of it open academic research. So it's not like TSMC has been doing this in a vacuum.
Secondly, why do you think TSMC hasn't looked at FinFETs before now?
Re: (Score:1)
2 years. Yes, that's the time from when TSMC first publicly said they would use it (last year) until when they deliver (2014, according to the article).
Re: (Score:2)
Third, it's not like TSMC hasn't chopped up an Intel chip really small and looked at it under microscopes.