
Intel Launches Power-Efficient Penryn Processors

Bergkamp10 writes "Over the weekend Intel launched its long-awaited new 'Penryn' line of power-efficient microprocessors, designed to deliver better graphics and application performance as well as virtualization capabilities. The processors are the first to use high-k metal-gate transistors, which make them faster and less leaky than earlier designs built with silicon dioxide gate dielectrics and polysilicon gates. The processors are lead-free, and by next year Intel plans to produce chips that are halogen-free as well, making them more environmentally friendly. Penryn processors jump to higher clock rates and feature cache and design improvements that boost performance over earlier 65-nm processors, which should attract business workstation users and gamers looking for improved system and media performance."
  • by Anonymous Coward on Monday November 12, 2007 @11:55AM (#21323931)
    While Penryn offers a modest performance increase, it is not a big change in the architecture. Instead of upgrading to Penryn, customers can wait for Nehalem, the next major revision of the Intel architecture, which is due for release in 2008.

    At the Intel Developer Forum in San Francisco in September, Intel showed off Nehalem and said it would deliver better performance per watt and better system performance through its QuickPath Interconnect system architecture. Nehalem chips will also provide an integrated memory controller and improved communication between system components.
  • Halogen free (Score:3, Informative)

    by jbeaupre ( 752124 ) on Monday November 12, 2007 @12:01PM (#21324021)
    I'm sure they mean eliminating halogenated organic compounds or something similar. Otherwise, eliminating halogens from the chips themselves is just a drop in the ocean. A deep, halogen-salt-enriched ocean.
  • Can somebody explain (Score:3, Informative)

    by sayfawa ( 1099071 ) on Monday November 12, 2007 @12:08PM (#21324099)
    Why is there so much emphasis on size (as in 45nm) for these things? Does making it smaller make it inherently faster or more efficient? Why? I've looked around (well, I looked at wikipedia anyway) and it's still not clear what advantage the smaller size has.
  • Re:Still sticking (Score:5, Informative)

    by Waffle Iron ( 339739 ) on Monday November 12, 2007 @12:12PM (#21324155)

    It should've been replaced a long time ago with a pure RISC instruction set

    It was, when the Pentium Pro was introduced in 1995. The instruction set the programmer "sees" is not the instruction set that the chip actually runs.

  • by Chabil Ha' ( 875116 ) on Monday November 12, 2007 @12:17PM (#21324211)
    Think of it in these terms. Electricity is being used to transmit 1s and 0s inside a circuit. We can only do so much to reduce the resistance of the conductors, so we need to shorten the distance between gates. The less distance an electrical signal has to travel, the more operations you can perform in the same amount of time.
  • by compumike ( 454538 ) on Monday November 12, 2007 @12:17PM (#21324213) Homepage
    The energy required to switch a capacitor from zero to Vdd volts is 1/2*C*Vdd^2.

    Smaller logic sizes can operate faster because the physical gate area of the transistor is that much smaller, so there's less capacitance loading down the piece of logic before it (proportional to the square of the scaling, of course). However, it also tends to be the case that the operating voltages scale down too (because they adjust the semiconductor doping and the gate oxide thickness to match), so you get an even better effect on energy required. Thus, scaling helps both with speed and operating power.

    The problem they're running into now is that at these smaller sizes, the off-state leakage currents are getting to be of the same magnitude as the actual switching (operating logic) currents! This happens because of the reduced threshold voltage when they scale down, so the transistor isn't as "off" as it used to be.

    That's why Intel has to work extra hard to get the power consumption down as the sizes scale down.

    --
    NerdKits: electronics kits for the digital generation. [nerdkits.com]
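To make the scaling arithmetic in the comment above concrete, here is a minimal back-of-the-envelope sketch in Python. The capacitance and voltage figures are illustrative assumptions, not actual Penryn numbers.

```python
# Back-of-the-envelope CMOS switching-energy sketch.
# All component values are illustrative assumptions, not real Penryn figures.

def switching_energy(c_farads: float, vdd_volts: float) -> float:
    """Energy to charge a capacitance C from 0 to Vdd: E = 1/2 * C * Vdd^2."""
    return 0.5 * c_farads * vdd_volts ** 2

# Hypothetical 65 nm chip: ~100 nF of total switched capacitance at Vdd = 1.2 V
e_65nm = switching_energy(100e-9, 1.2)

# Shrink to 45 nm (~0.7x linear): gate area, and thus capacitance, roughly
# halves, and the supply voltage typically scales down too, say to 1.0 V.
e_45nm = switching_energy(50e-9, 1.0)

print(f"65 nm: {e_65nm * 1e9:.0f} nJ per full-swing switching event")
print(f"45 nm: {e_45nm * 1e9:.0f} nJ ({1 - e_45nm / e_65nm:.0%} less energy)")
```

The quadratic dependence on Vdd is why a modest supply-voltage drop buys more than the capacitance reduction alone; it is also why leakage, which does not shrink the same way, becomes the dominant problem the comment describes.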
  • by Rhys ( 96510 ) on Monday November 12, 2007 @12:20PM (#21324259)
    Smaller size means signals can propagate around the chip faster. It also means you need less signal-fixing/synchronization hardware, since it is simpler to get a signal synced up at a given clock rate. Smaller size generally means less power dissipated. Smaller feature sizes also mean the CPU is physically smaller (generally), so more CPUs fit on a silicon wafer. For each wafer produced (a high but relatively fixed cost, regardless of how many CPUs are on it), they get more CPUs out (= cheaper), and if a CPU is bad, that's a smaller percentage of the wafer "wasted" on it. (The sketch below runs the numbers.)
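For the wafer-economics point, here is a rough sketch combining the classic dies-per-wafer approximation with a simple Poisson yield model. The wafer size, die areas, and defect density are illustrative assumptions, not Intel's figures.

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic approximation: wafer area / die area, minus an edge-loss term."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def poisson_yield(defects_per_cm2: float, die_area_mm2: float) -> float:
    """Simple Poisson yield model: Y = exp(-D * A)."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

D0 = 0.5  # assumed defect density, defects per cm^2
for label, area_mm2 in (("larger die", 140.0), ("shrunk die", 100.0)):
    n = dies_per_wafer(300.0, area_mm2)  # assumed 300 mm wafer
    y = poisson_yield(D0, area_mm2)
    print(f"{label} ({area_mm2:.0f} mm^2): {n} candidates, ~{n * y:.0f} good dies")
```

Smaller dies win twice: more candidates fit on each wafer, and each die is less likely to catch a defect, so yield rises too.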
  • Re:Still sticking (Score:5, Informative)

    by jonesy16 ( 595988 ) on Monday November 12, 2007 @12:23PM (#21324301)
    Actually, one of the reasons Apple jumped off the PowerPC platform was its power inefficiency. The G5 processors were incredibly power hungry, enough so that Apple could never get one cool enough to run in a laptop and actually offered the Power Mac G5 line with liquid cooling. Compare that to the new quad-core and eight-core Mac Pros and dual-core laptops that run very effectively with minimal air cooling.
  • RISC vs. CISC (Score:5, Informative)

    by vlad_petric ( 94134 ) on Monday November 12, 2007 @12:23PM (#21324309) Homepage
    That's a debate that happened more than 20 years ago, at a time when all processors were in-order and could barely fit their L1 on chip, and there were a lot of platforms.

    These days:

    • Transistor budgets are so high that the space taken by instruction decoders isn't an issue anymore (L1, L2, and sometimes even L3 caches are on chip).
    • Execution is out-of-order, and the pipeline stalls are greatly reduced. The out-of-order execution engine runs a RISC-like instruction set to begin with (micro-ops or r-ops).
    • There is one dominant platform (Wintel) and software costs dominate (compatibility is essential).

    One of the real problems with x86-32 was the low number of registers, which resulted in many spills to the stack. x86-64 added 8 more general-purpose registers, and the situation is much better (that's why most people see a 10-20% speedup when migrating to x86-64 - more registers). Sure, it'd be better if we had 32 registers... but again, with 16 registers life is decent.

  • by Pojut ( 1027544 ) on Monday November 12, 2007 @12:31PM (#21324413) Homepage
    Another good reason is that it is far cheaper (at least last time I checked prices) to go with AMD...especially if you aren't doing any gaming or audio/video work. While Core 2 blasts AMD out of the water, the price difference makes AMD a very smart buy for everyday use. For gaming, AMD's offerings still work great, and the money you save on the processor can instead be put towards a more powerful video card.
  • It's not really true (Score:3, Informative)

    by Moraelin ( 679338 ) on Monday November 12, 2007 @01:26PM (#21325191) Journal
    Well, bear some things in mind:

    1. At one point in time there was a substantial difference between RISC and CISC architectures. CPUs had tiny budgets of transistors (almost homeopathic, by today's standards), and there was a real design decision where you put those transistors. You could have more registers (RISC) or a more complex decoder (CISC), but not both. (And that already gives you an idea about the kinds of transistor budgets I'm talking about, if having 16 or 32 registers instead of 1 to 8 actually made a difference.)

    Both sides had their advantages, btw. If it were that bleeding obvious that RISC = teh winner and CISC = teh loser, a lot of history would be different.

    The difference narrowed a lot over time, though, so neither is purely CISC or RISC any more (except in marketing bullshit or fanboy wars). Neither the original RISC idea nor the CISC one scaled past a point, so now we have largely the same weird hybrid in both camps.

    E.g., the Altivec instruction set on PowerPC is the exact opposite of what the original RISC idea was. The very idea of RISC was never to implement in hardware what a compiler would do for you in software. So the very idea of having whole procedures and loops coded in the CPU instead of in software would have seemed the bloody opposite of all that RISC is about, back in the day.

    At any rate, what both are today is what previously used to be called a microcoded architecture. It's sorta like having a CPU inside a CPU. The smaller one inside works on much simpler operations, but an instruction of the "outer" CPU translates into several of those micro-operations. Which in turn are pipelined, reordered in flight, etc, to have them execute faster.

    What both sides are doing nowadays for marketing reasons is basically calling the inner architecture "RISC", because marketing really likes that term, and the lemmings have already been conditioned to get excited when they hear "RISC". Really, PowerPC's architecture is only "RISC" on account of basically "yeah, but deep down inside it's still sorta RISC-like"... and ironically the x86s can make the exact same claim too.

    At any rate, whether you want to call that RISC or not, once you look inside, both the PowerPC and the Pentiums/Athlons have nearly identical architectures and modules. Sure, the implementation details differ, and some have advantages over other implementations (the NetBurst ones had overly long pipelines, while the G4 had a short pipeline, so the G4 did have better IPC), but essentially they're based on the same architecture. Neither is more RISC than the other. We can lay that RISC-vs-CISC war to rest.

    2. That said, the x86 still was somewhat hampered by the lack of more general purpose registers. Although the compilers and the CPU itself did optimize heavily around the problem, they didn't always do the optimal job.

    That has changed in the 64 bit version, though. AMD went and doubled the number of registers for programs running in 64 bit mode, and Intel had to use the same set of instructions so they have that too nowadays.

    The performance penalty of that architecture basically became a lot lower than it was in the days of G4 vs Pentium 4 flame wars.
  • by necro81 ( 917438 ) on Monday November 12, 2007 @02:24PM (#21325853) Journal
    The biggest thing about Penryn is the move to 45-nm fabrication, and the technological advances that were required to pull it off. IEEE Spectrum has a nice, in-depth (but accessible) article on those advances [ieee.org]. High-k dielectrics and new metal gate configurations will be how advanced ICs are produced from now on. It is as large a shift for the fabs as a new chip architecture is for designers.
  • by Quadraginta ( 902985 ) on Monday November 12, 2007 @02:25PM (#21325887)
    The problem is not the halogen atoms themselves, but the chemical reactivity a carbon atom gains when it's bonded to a halogen atom. That is, an organic compound that contains carbon-chlorine bonds is obnoxious not because of the chlorine atoms, but because the chlorine atoms "activate" the carbon atoms to which they're bonded (more precisely, they make it far easier for nucleophilic and radical reactions to happen at the carbon atom), so the carbon atom can do chemistry inside you (or inside some other animal) that you really don't want to happen, e.g. mutating your DNA. This is why chlorinated organic compounds (e.g. PCBs, perc, carbon tet) tend to be tightly regulated.

    The halogens themselves (Cl_2 et cetera) and the halogen-oxygen compounds you find in swimming pools (e.g. hypochlorite anions) are merely noxiously caustic, like acid. At high enough concentrations they might scar your lungs and skin, or kill you, but they won't seep into your tissues and do insidious chemistry that gives you cancer or lupus, and they're quite harmless at low concentrations (e.g. what you find in your pool, or in seawater).
  • by pla ( 258480 ) on Monday November 12, 2007 @02:45PM (#21326185) Journal
    No way your computer draws only 65W, unless you have a VERY old computer or a shuttle that can barely do anything.

    Provide an email address and I'll send you a picture of a Kill-A-Watt reading in the high-50W range with the CPU pegged (and in the low 40s idle). I respect your pessimism, but I really do run two such systems; one even has something vaguely resembling a decent GPU, though no doubt the hardcore gamers would sneer heartily at it (not that I care; as I said, I mostly prefer RPG and RTS over FPS).

    As for "older", AMD has two entire lines of modern, dual-core chips running between 31W (Turion) and 45W ("BE" parts). While true that dual 2.3Ghz cores don't rock the world anymore, as I said, they perform so much more than "okay" that I don't see myself upgrading for at least another two years (barring any revolutionary advances in CPU technology before then, which looks exceedingly unlikely IMO).



    Not to mention your power supply is at max 85% efficient.

    I've had enough crappy low-end PSUs take out systems in the past that I buy only the best now, and as a side effect of "quality", you tend to get "efficiency". I personally favor SeaSonic's hardware, of which the newer units push 88% efficiency; though yes, the ones I have now only claim 85%.

    Regardless, keep in mind that that loss applies multiplicatively to whatever your CPU and GPU (and the negligible rest) draw... at 85% efficiency, a (35+16)W load wastes only about 9W in the PSU, while a (120+107)W load wastes over 40W (see the sketch after this comment). Just think about that for a sec - a carelessly designed midrange PC can easily waste, just in PSU losses, my total light-use draw.



    or a shuttle that can barely do anything.

    I run one of those (well, a home-built EPIA system) as my home file server. 22W at-the-wall (not counting the bank of HDDs except the boot drive), and it can perform its one and only real "task" (saturating a gigabit network connection) juuuuuust fine.
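The PSU arithmetic in the comment above is easy to reproduce. A tiny sketch; the 85% efficiency and the component wattages are the commenter's own figures, treated as given.

```python
def wall_draw(dc_load_watts: float, efficiency: float = 0.85) -> float:
    """Watts pulled at the wall to deliver a given DC load through the PSU."""
    return dc_load_watts / efficiency

for label, load in (("light-use box", 35 + 16), ("careless midrange", 120 + 107)):
    wall = wall_draw(load)
    print(f"{label}: {load} W load -> {wall:.0f} W at the wall "
          f"({wall - load:.0f} W lost in the PSU)")
```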
  • by ircmaxell ( 1117387 ) on Monday November 12, 2007 @02:48PM (#21326229) Homepage
    Ummmm.... Check this out... http://www23.tomshardware.com/cpu_2007.html [tomshardware.com]

    This chart shows that in terms of price/performance for the average user, Intel has only two CPUs that can compete with AMD's leading X2 (non-FX) processor (the 6000+, which is the highest AMD chip they have benchmarked). The first is the E2160, and the second is the P4E 613.

    The field is LARGELY dominated (at the best scores, that is) by AMD... Intel has 5 in the top 20, 1 in the top 10, and 0 in the top 5. AMD, conversely, has 2 X2s in the top 5...
  • by homer_ca ( 144738 ) on Monday November 12, 2007 @03:35PM (#21326851)
    You're correct that the x86 instruction set is still crufty, and a pure RISC CPU is theoretically more efficient. However, the real-world disadvantage of x86 support is minimal. With each die shrink, the x86-to-micro-op translator occupies proportionally less die space, and the advantages of the installed hardware and software base give x86 CPUs a huge lead in economies of scale.

    I know we're both just putting different spins on the same facts, but in the end, practical considerations outweigh engineering purity. x86 is even competing against ARM in the embedded space now, not just in higher-powered UMPCs but also in routers like this one [openwrt.org] with a 486-class CPU.
  • Re:Still sticking (Score:2, Informative)

    by fitten ( 521191 ) on Monday November 12, 2007 @03:48PM (#21327025)
    As much as it sucks to admit it ;), CISC is even interesting in that it sometimes acts as a sort of built-in 'code compression'. A single CISC instruction can do the work of several RISC instructions while taking up less memory. That means it also takes up less cache space, leaving more room for other things (more code, more data), and cache space (particularly L1) is still at a premium. Not only that, one fetch of such a CISC instruction is like several fetches of the equivalent sequence of RISC instructions. (A byte-count sketch after this comment makes that concrete.)

    There's a big gap between the CPU and main memory... taking up less memory for instructions and, in effect, fetching more instructions per fetch cycle can have some benefits.

    That being said, there's nothing to prevent you from programming a modern x86 processor in a RISC-like way. It even sometimes has performance benefits. (Some compilers do this already.)

    THAT being said... I haven't programmed in assembly in years... I let compilers do that work for me.
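The code-density point lends itself to simple byte counting. In the sketch below, the x86 encoding of a read-modify-write add really is 2 bytes, and classic fixed-width RISC ISAs use 4 bytes per instruction; the specific instruction pairing is an illustrative assumption.

```python
# Code-density sketch: one x86 read-modify-write instruction vs. an
# equivalent fixed-width RISC load/add/store sequence.

x86_seq = [("add [ebx], eax", 2)]    # single CISC instruction, 2-byte encoding
risc_seq = [("lw  r1, 0(r2)", 4),    # classic 4-byte RISC encodings
            ("add r1, r1, r3", 4),
            ("sw  r1, 0(r2)", 4)]

for name, seq in (("x86", x86_seq), ("RISC", risc_seq)):
    total = sum(size for _, size in seq)
    print(f"{name}: {len(seq)} instruction(s), {total} bytes of I-cache")
```

Same work, one sixth the instruction bytes in this (admittedly cherry-picked) case, which is exactly the cache and fetch-bandwidth benefit described above.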
  • Re:Names of Rivers? (Score:3, Informative)

    by ikeleib ( 125180 ) on Monday November 12, 2007 @04:46PM (#21327743) Homepage
    The names are mostly after Oregon rivers: http://en.wikipedia.org/wiki/List_of_Intel_codenames [wikipedia.org]

  • by pla ( 258480 ) on Monday November 12, 2007 @05:14PM (#21328111) Journal
    Can you share details?

    Sure. Start with a VIA Epia LN10000EG (I personally use LogicSupply for my mini-ITX shopping, they haven't screwed me yet). Toss in a gig of DDR2 533 RAM. Get the lowest wattage SeaSonic PSU you can find (or other known quality unit - You will regret saving $30 here).

    Get any ol' $20 ATX case with four (or more) external 5.25 bays. You obviously don't need one with a PSU, but you'll find that it costs less to get one with power and toss the stock unit.

    Get a ThermalTake A2309 iCage (NewEgg carries these), the best $17 you'll ever spend on computer parts. I put one of these (or something comparable) into every machine I build, and it holds up to three HDDs (perfect with a fourth bay holding your optical drive).

    You may want to get a gigabit NIC rather than using the onboard 10/100. You could also get the... Hmmm... I think EPIA EK12000EG, a similar board that has an onboard gigabit NIC, but it costs almost twice as much for basically that one feature. You also will probably want a loose 12cm fan blowing in the general direction of the CPU - officially the LN10000EG runs fanless, but in practice it can get pretty warm.

    And there you go... Everything you need except the actual drives, for under $250. Toss in a DVD burner and a 500GB drive (currently the "sweet" price point) or three, install your favorite Linux distro, and you have an instant home file/media server drawing only 30-50W (depending mostly on what and how many HDDs you put in it; you can also force-throttle the CPU in the BIOS to squeeze out a few more watts) at the wall.



    One warning about drives, though... I've had horrible luck with VIA's SATA drivers for Linux. I'd recommend sticking with the PATA for now, or even going so far as to run Windows if you must use the SATA port(s). And for the record, I do not recommend the marginally-cheaper Epia clones such as JetWay. I've only dealt with a few of them, but they without exception have sucked, and hard.

"More software projects have gone awry for lack of calendar time than for all other causes combined." -- Fred Brooks, Jr., _The Mythical Man Month_

Working...