Intel On Track For 32 nm Manufacturing
yaksha writes "Intel said on Wednesday that it has completed the development phase of its next manufacturing process that will shrink chip circuits to 32 nanometers.
The milestone means that Intel will be able to push faster, more efficient chips starting in the fourth quarter.
In a statement, Intel said it will provide more technical details at the International Electron Devices Meeting next week in San Francisco. Bottom line: shrinking to 32 nanometers is one more step in its 'tick tock' strategy, which alternates a new manufacturing process and a new microarchitecture roughly every 12 months. Intel is obviously betting that its rapid-fire advancements will produce performance gains so jaw-dropping that customers can't resist."
Not surprising. (Score:4, Interesting)
Chipsets (Score:5, Interesting)
It's great that Intel is working on die shrinks for its processors, but I wish it would do the same for its support chipsets. It's annoying that on most laptops the northbridge for Atom processors uses more power than the processor does.
Point of Diminishing Returns? (Score:3, Interesting)
Am I the only one feeling we might have reached the point of diminishing returns, at least for desktops, in the last 2-3 years? All the shrinkage past 90 nanometers just feels underwhelming. Stuff beyond the Pentium 3 has not been revolutionary, performance-wise, for a desktop.
Re:Point of Diminishing Returns? (Score:3, Interesting)
Yeah, there's a pretty big wall that's been hit in terms of clock speed, which is why the industry has moved to multi-core processors instead of ramping up clock rates.
Re:Chipsets (Score:5, Interesting)
This should be partially alleviated once the i7 architecture is fully adopted: pretty much no more northbridge. That's probably why they're neglecting the current chipset line rather than updating it more aggressively.
And who knows, if a better chip interconnect comes around in the next generation (unlikely, but possible), Intel could start putting more and more in the CPU package. Things like a Larrabee GPU and southbridge functionality (audio, networking, general I/O). System-on-a-chip is commonplace in embedded systems now. If Intel wants to eat ARM's lunch, it's going to have to adopt some of the same techniques.
Re:Point of Diminishing Returns? (Score:3, Interesting)
Anything past the P3 may not have been revolutionary, but performance has steadily progressed quite nicely.
I have a dual 1.4GHz P3 system and a 1.6GHz Core Duo. The Core Duo is *much* faster, and that chip is already outdated. Not to mention that this compares the fastest P3s ever made against the lowest end of the Core Duo lineup.
People also forget about things that can't be measured in nanometers or gigahertz, like the advances that have greatly lowered leakage current. Without them, something like 85% of the power used in 32 nm chips would be leakage, and liquid cooling would be an absolute necessity.
Also, these advances allow Intel to make modest chips VERY cheaply... like the Atom. I've got a micro-ATX board with one on it, and considering that the entire board + CPU only cost $65, it is an AMAZING performer.
Re:Normal people don't need faster computers (Score:3, Interesting)
The sad part is that improved runtime speed and code readability can be had together. The reason the DataDraw-based code ran 7x faster was simple: cache performance. C, C++, D, and C# all fix the layout of objects in memory, making it impossible for the compiler to optimize cache hit rates. If we simply move to a slightly more readable, higher level of coding, and let the compiler muck with the individual bits and bytes, huge performance gains can be had. The reason DataDraw saved 40% in memory was that it uses 32-bit integers to reference graph objects rather than 64-bit pointers. Again, C, C++, and most languages mandate a common pointer size for all class types. If the compiler were allowed to take over that task, life would be easier for the programmer, and we'd save a ton of memory.
But then again... what's a mere 7x speedup worth with today's computers? With the low price of DRAM, who cares about 40%? It's easier to stick with the crud we've used since 1970 (C, and its offspring) than to bother building more efficient languages. Language research has abandoned efficiency as a goal.
Re:Normal people don't need faster computers (Score:3, Interesting)
Check out the benchmark table at this informative link [sourceforge.net]. On every cache miss, the CPU loads an entire cache line, typically 64 or more bytes. Cache miss rates depend massively on the probability that those extra bytes will soon be accessed. Since typical structures and objects are 64 bytes or more, a cache line typically gets filled with the fields of just one object. A typical inner loop may access two of those fields, but rarely three, meaning the cache is loaded with useless junk. By keeping like fields together in arrays, each cache line gets filled with the same field from different objects, often objects that will soon be accessed. This, plus the 32- vs 64-bit object references and cache-sensitive memory organization (unlike malloc), leads to a 7x speedup in DataDraw-backed graph traversals vs plain C code.
Understanding cache performance is critical for fast code, yet most programmers are virtually clueless about it. Just run the benchmarks yourself if you want to see the impact.