Intel Hardware

Intel On Track For 32 nm Manufacturing

yaksha writes "Intel said on Wednesday that it has completed the development phase of its next manufacturing process, which will shrink chip circuits to 32 nanometers. The milestone means that Intel will be able to push faster, more efficient chips starting in the fourth quarter. In a statement, Intel said it will provide more technical details at the International Electron Devices Meeting next week in San Francisco. Bottom line: shrinking to 32 nanometers is one more step in Intel's 'tick-tock' strategy, which alternates a new manufacturing process with a new microarchitecture roughly every 12 months. Intel is obviously betting that its rapid-fire advancements will produce performance gains so jaw-dropping that customers can't resist."
  • Not surprising. (Score:4, Interesting)

    by pclminion ( 145572 ) on Thursday December 11, 2008 @01:17AM (#26070997)
    At WinHEC 2008 the Intel speakers continued to hint that they already had working, packaged cores at this size. On track for manufacturing? More like they've been making them for 9-12 months already. At any rate, it's cool, though not surprising.
  • Chipsets (Score:5, Interesting)

    by lobiusmoop ( 305328 ) on Thursday December 11, 2008 @01:24AM (#26071061) Homepage

    It's great that Intel are working on die shrinks for their processors, but I wish they would do the same for their support chipsets. It's annoying that on most laptops the northbridge for Atom processors uses more power than the processor does.

  • by Anonymous Coward on Thursday December 11, 2008 @01:28AM (#26071095)

    Am I the only one who feels we might have reached the point of diminishing returns, at least for desktops, in the last 2-3 years? All the shrinkage past 90 nanometers just feels underwhelming. Nothing beyond the Pentium 3 has been revolutionary, performance-wise, for a desktop.

  • by sunami ( 751539 ) on Thursday December 11, 2008 @01:30AM (#26071113)

    Yeah, there's a pretty big wall that's been hit in terms of clock speed, which is why multi-core processors are the direction now instead of ramping up speeds.

  • Re:Chipsets (Score:5, Interesting)

    by Anonymous Coward on Thursday December 11, 2008 @01:49AM (#26071243)

    This should be partially alleviated once the i7 architecture is fully adopted. Pretty much no more northbridge. That's probably why they're not updating the current chipset technology more aggressively.

    And who knows, if a better chip interconnect comes around in the next generation (unlikely, but possible), Intel could start putting more and more into the CPU package. Things like a Larrabee GPU and southbridge functionality (audio, networking, general I/O). System-on-a-chip is commonplace in embedded systems now. If Intel wants to eat ARM's lunch it's going to have to adopt some of the same techniques.

  • Re:What about AMD? (Score:5, Interesting)

    by afidel ( 530433 ) on Thursday December 11, 2008 @02:51AM (#26071573)
    Actually, I think the biggest post-P3 improvement has been the move to dual core as standard on the desktop in the last couple of years. At least on Windows, having a second core keep things moving when one thread stalls is huge for overall system performance and UI snappiness. It's great to be able to get those benefits without a $200 motherboard and two CPUs =)
  • by NerveGas ( 168686 ) on Thursday December 11, 2008 @05:08AM (#26072337)

    Anything past the P3 may not have been revolutionary, but things have steadily progressed quite nicely.

    I have a dual 1.4GHz P3 system and a 1.6GHz Core Duo. The Core Duo is *much* faster, and that chip is already outdated. Not to mention that this compares the fastest P3s ever made to the low end of the Core Duo lineup.

    People also forget about things that can't be measured in nanometers or gigahertz, like the advances that have greatly lowered leakage current. Without them, something like 85% of the power used in these 32 nm chips would be leakage, and liquid cooling would be an absolute necessity.

    Also, these advances allow Intel to make modest chips VERY cheaply... like the Atom. I've got a micro-ATX board with one on it, and considering that the entire board+CPU only cost $65, it is an AMAZING performer.

  • by smilindog2000 ( 907665 ) <bill@billrocks.org> on Thursday December 11, 2008 @06:54AM (#26072831) Homepage

    The sad part is that improved runtime speed and code readability can be had at the same time. The reason the DataDraw-based code ran 7x faster was simple: cache performance. C, C++, D, and C# all fix the layout of objects in memory, making it impossible for the compiler to optimize cache hit rates. If we simply move to a slightly more readable, higher level of coding and let the compiler muck with the individual bits and bytes, huge performance gains can be had. The reason DataDraw saved 40% in memory was that it uses 32-bit integers to reference graph objects rather than 64-bit pointers. Again, C, C++, and most languages specify a common pointer size for all class types. If the compiler were allowed to take over that task, life would be easier for the programmer, and we'd save a ton of memory.

    But then again... what's a mere factor of 7x in runtime with today's computers? With the low price of DRAM, who cares about 40%? It's easier to stick with the crud we've used since 1970 (C and its offspring) than to bother building more efficient languages. Language research has abandoned efficiency as a goal.
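
    A minimal sketch of the layout difference being described (hypothetical types, not DataDraw's actual API): the same graph node stored the conventional C way with pointers, versus with 32-bit index references into a node array. On a 64-bit machine the pointer version needs 24 bytes per node after padding, while the index version needs 12, which is roughly where the memory savings come from.

        #include <stdint.h>

        /* Conventional C layout: the programmer fixes field order and pointer size. */
        struct NodePtr {
            struct NodePtr *first_child;   /* 8 bytes on x86-64 */
            struct NodePtr *next_sibling;  /* 8 bytes */
            int             value;         /* 4 bytes + 4 bytes padding = 24 total */
        };

        /* Index-based layout: references are 32-bit offsets into a node array,
         * so the tool generating the code is free to choose the representation. */
        typedef uint32_t NodeRef;          /* 4 bytes; UINT32_MAX can serve as null */

        struct NodeIdx {
            NodeRef first_child;           /* 4 bytes */
            NodeRef next_sibling;          /* 4 bytes */
            int     value;                 /* 4 bytes = 12 total, no padding */
        };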

  • by TheRaven64 ( 641858 ) on Thursday December 11, 2008 @09:45AM (#26073877) Journal
    General-purpose CPUs are quite bad for video compression. A DSP or GPU is generally laid out in a way that maps more closely to the algorithms. I'd be interested to see what performance ffmpeg gets once they've finished optimising it for the DSP in the OMAP3530 (for reference, the entire BeagleBoard system built around one of these uses 1.8W - less than just the CPU of Intel's 'low power' systems - and includes the ARM Cortex-A8 core, an OpenGL ES 2.0 GPU, and a DSP).
  • by smilindog2000 ( 907665 ) <bill@billrocks.org> on Thursday December 11, 2008 @10:41AM (#26074585) Homepage

    Check out the benchmark table at this informative link [sourceforge.net]. On every cache miss, the CPU loads an entire cache line, typically 64 or more bytes. Cache miss rates depend heavily on the probability that those extra bytes will soon be accessed. Since typical structures and objects are 64 bytes or more, the cache line typically gets filled with the fields of just one object. A typical inner loop may access two of that object's fields, but rarely three, meaning that the cache is loaded with useless junk. By keeping data of like fields together in arrays, the cache line gets filled with the same field from different objects - often objects that will soon be accessed. This, plus 32- vs. 64-bit object references and cache-sensitive memory organization (unlike malloc), leads to a 7x speedup in DataDraw-backed graph traversals vs. plain C code.

    Understanding cache performance is critical for fast code, yet most programmers are virtually clueless about it. Just run the benchmarks yourself if you want to see the impact.
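
    A rough benchmark sketch of the effect described above (record sizes and names are illustrative, not taken from the linked page): summing one "hot" field out of a 64-byte record. In the array-of-structs loop each cache line fetched carries 60 bytes of unused fields; in the packed field-array loop the same line carries 16 useful values, so the traversal touches far less memory.

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        #define N (1 << 22)                               /* ~4 million records */

        struct Record { int hot; int cold[15]; };         /* 64 bytes: one cache line */

        int main(void) {
            struct Record *aos = calloc(N, sizeof *aos);       /* array of structs */
            int *hot_only = calloc(N, sizeof *hot_only);       /* one field, packed */
            long sum = 0;
            clock_t t;

            t = clock();                                  /* 1 useful value per line */
            for (size_t i = 0; i < N; i++) sum += aos[i].hot;
            printf("AoS:         %.3fs\n", (double)(clock() - t) / CLOCKS_PER_SEC);

            t = clock();                                  /* 16 useful values per line */
            for (size_t i = 0; i < N; i++) sum += hot_only[i];
            printf("field array: %.3fs (sum=%ld)\n",
                   (double)(clock() - t) / CLOCKS_PER_SEC, sum);

            free(aos);
            free(hot_only);
            return 0;
        }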
