Intel Launches Power-Efficient Penryn Processors
Bergkamp10 writes "Over the weekend Intel launched its long-awaited new 'Penryn' line of power-efficient microprocessors, designed to deliver better graphics and application performance as well as virtualization capabilities.
The processors are the first to use high-k metal-gate transistors, which make them faster and less leaky than earlier processors with silicon gates. The processors are lead-free, and by next year Intel plans to produce halogen-free chips, making them more environmentally friendly.
Penryn processors jump to higher clock rates and feature cache and design improvements that boost the processors' performance compared with earlier 65-nm processors, which should attract the interest of business workstation users and gamers looking for improved system and media performance."
Still sticking (Score:2, Interesting)
We're still running PowerPC here because they're low-power and do certain mathematics very well (I'm not the science guy). Hopefully Apple will switch back to PowerPC now that their software is fully "Universal" and IBM has some promising chips lined up.
Re:RISC vs. CISC (Score:5, Interesting)
The argument that the compiler can do a reasonable job at scheduling instructions ... well, is simply false. Reason #1: most applications have rather small basic blocks (SPEC 2000 integer, for instance, has basic blocks in the 6-10 instruction range). You can do slightly better with hyperblocks, but for that you need rather heavy profiling to figure out which paths are frequently taken. Reason #2: the compiler operates on static instructions; the dynamic scheduler operates on the dynamic instruction stream. The compiler can't differentiate between instances of an instruction that hit in the cache (with a latency of 3-4 cycles) and those that miss all the way to memory (200+ cycles). The dynamic scheduler can. Why do you think Itanium has such large caches? Because it doesn't have out-of-order execution, it is slowed down by cache misses to a much larger extent than out-of-order processors are.
I agree that there are always ways to statically improve the code to behave better on in-order machines (hoist loads and make them speculative, add prefetches, etc), but for the vast majority of applications none are as robust as out-of-order execution.
Come Full Circle (Score:2, Interesting)
Now, the trend seems to be to return to the metal gates of yesteryear and ditch the oxide (the 'O' in MOSFET) for high-k dielectrics (not high-k metals, as the summary seems to say)...
That's all well and good, but I have one question... when will we get around to updating the term "CMOS"?
Re:Still sticking (Score:1, Interesting)
Huh. That's a strange definition of "replaced" you've got.
This is like having ATMs that only gave out dimes, complaining about the dimes, and being told "no, we do all transactions in units of $10; the dimes you 'see' are not the same monies that we actually transfer".
As a user, I don't care what the processor does internally -- could use black magic for all I care. I've written PPC compilers before, but I can't wrap my brain around x86. Could this be why so few new (non-byte)compiled languages exist -- because nobody can figure out how to write a code-emitter for the monstrosities that pass as recent CPUs?