Intel Launches Power-Efficient Penryn Processors

Bergkamp10 writes "Over the weekend Intel launched its long-awaited new 'Penryn' line of power-efficient microprocessors, designed to deliver better graphics and application performance as well as virtualization capabilities. The processors are the first to use high-k metal-gate transistors, which makes them faster and less leaky than earlier processors with silicon gates. The processor is lead-free, and by next year Intel plans to produce chips that are halogen-free as well, making them more environmentally friendly. Penryn processors jump to higher clock rates and feature cache and design improvements that boost performance compared with earlier 65-nm processors, which should attract the interest of business workstation users and gamers looking for improved system and media performance."
  • by Anonymous Coward on Monday November 12, 2007 @11:55AM (#21323931)
    While Penryn brings a modest performance increase, it is not a big change in the architecture. Instead of upgrading to Penryn, customers may want to wait for Nehalem, the next major revision of the Intel architecture, which is due for release in 2008.

    At the Intel Developer Forum in San Francisco in September, Intel demonstrated Nehalem and said it would deliver better performance per watt and better system performance through its QuickPath Interconnect system architecture. Nehalem chips will also provide an integrated memory controller and improved communication between system components.
    • I'm wondering when the new chips will show up in the macbook pro.

      I was about to buy one, but, if this is coming up soon, I may wait...

    • by necro81 ( 917438 ) on Monday November 12, 2007 @02:24PM (#21325853) Journal
      The biggest thing about Penryn is the move to 45-nm fabrication, and the technological advances that were required to pull it off. IEEE Spectrum has a nice, in-depth (but accessible) article on those advances [ieee.org]. High-k dielectrics and new metal gate configurations will be how advanced ICs are produced from now on. It is as large a shift for the fabs as a new chip architecture is for designers.
    • The biggest change to transistor fabrication since the creation of the silicon transistor. A previously unavailable technology for making integrated circuits, substantially different from what was used before. Isn't there a word in the English language that describes this?
  • Still sticking (Score:2, Interesting)

    by guruevi ( 827432 )
    It's sad that the industry is still sticking to the x86 instruction set. It should've been replaced a long time ago with a pure RISC instruction set, especially now with the quest for less power-hungry chips. The Power/PowerPC architecture was good, but because there wasn't enough demand, the price was high and development slow. A few failures (comparable to NetBurst) and their customers (amongst them Apple) went running to the competitors.

    We're still running PowerPC here because they're low-power and do cert
    • What you say is directly comparable to the internal combustion engine, say. It makes a lot of sense (and has done so for a lonnnnnng time now) not to use gasoline and to instead work on alternative engine technologies, compressed air, hydrogen, ethanol, and so forth.. but these things are still sideline projects. The engine / automotive industry is far more fragmented (in terms of suppliers and target markets) than the PC industry and a lot older.. and if they haven't learned the lessons, I can't see altern
    • Re:Still sticking (Score:5, Informative)

      by Waffle Iron ( 339739 ) on Monday November 12, 2007 @12:12PM (#21324155)

      It should've been replaced a long time ago with a pure RISC instruction set

      It was, when the Pentium Pro was introduced in 1995. The instruction set the programmer "sees" is not the instruction set that the chip actually runs.

      • by Z-MaxX ( 712880 ) on Monday November 12, 2007 @01:28PM (#21325219) Journal
        An often overlooked benefit of the way that modern IA32 processors achieve high performance through translating the CISC x86 instructions into microcode instructions is that the chip designers are free to change the internal microcode architecture for every CPU in order to implement new optimizations or to tune the microcode language for the particular chip's strengths. If we were all coding (or if our compilers were coding for us) in this RISCy microcode, then we, or the compiler, would have to do the optimizations that the CPU can do in its translation to microcode. I agree that the Power architecture is pretty cool, but I'm tired of hearing people bash the Intel x86 architecture for its "obsolete" nature. As long as it is the fastest and best thing I can buy for a reasonable amount of money, it's my top choice.
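
        A toy sketch of that translation in Python (the instruction names and micro-op format here are invented for illustration; real decoders emit undocumented, generation-specific encodings):

        # Hypothetical decoder table: one CISC-style x86 instruction maps to
        # one or more RISC-like micro-ops. Intel's real micro-op ISA is
        # internal and changes from generation to generation.
        MICROCODE_TABLE = {
            # A read-modify-write ADD touches memory, so it splits in three.
            "add [rax], rbx": ["load  tmp0, [rax]",
                               "add   tmp0, tmp0, rbx",
                               "store [rax], tmp0"],
            # A register-register ADD already looks RISC-like: one micro-op.
            "add rax, rbx": ["add   rax, rax, rbx"],
        }

        def decode(instruction):
            """Return the micro-op sequence for a (toy) x86 instruction."""
            return MICROCODE_TABLE[instruction]

        for insn in MICROCODE_TABLE:
            print(insn, "->", decode(insn))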
        • IA32 is going the way of the passenger pigeon. There may be a few rapidly diminishing flocks left in the wild, but they'll be gone in a blink of the (metaphorical) eye.

          AMD-64 for evah! (or at least, the next decade). Oh, that's also spelled "Core 2"...
          • by Z-MaxX ( 712880 )
            I agree that the x86-64 architecture is the new desktop and server standard (ignoring embedded systems etc.).

            However, x86-64 is a superset of IA32 and carries similar design considerations regarding instruction decoding and so on.
      • x86 CPUs have always been microcoded. Even the original x86. The latest Core CPUs are actually closer to 1-1 mapping between microcode and x86 code than ever before :)

        The thing about calling P6 a RISC CPU was that it was a marketing win back in '95 when RISC was all the rage.
    • by Blahbooboo3 ( 874492 ) on Monday November 12, 2007 @12:15PM (#21324193)
      I believe that x86 already has many of the benefits of RISC chips incorporated into it. Way back in 1995, Intel added a RISC core to the Pentium Pro (http://en.wikipedia.org/wiki/X86#Chronology [wikipedia.org]). From the Wiki article, "During execution, current x86 processors employ a few extra decoding steps to split most instructions into smaller pieces, micro-ops, which are readily executed by a micro-architecture that could be (simplistically) described as a RISC-machine without the usual load/store limitations."

      As for PowerPC Macs, I doubt it. The switch to Intel is what made most new Mac users switch because there was no longer a risk of not being able to run the one Windoze program they might need. If Mac ever went to a non-mainstream CPU again it would be a big big mistake.
      • This is a bit like saying that a truck with a rocket plane inside has 'many of the features of a rocket plane.' The point of RISC is to manage the complexity of the processor, minimise the amount of unnecessary work, and shift load onto software wherever that has zero or negative performance impact. By, effectively, adding an on-the-fly compiler in hardware, the Intel engineers have not done this, even if they have streamlined the back-end execution engine using tricks published in the RISC literature.

        But

        • Re: (Score:3, Informative)

          by homer_ca ( 144738 )
          You're correct that the x86 instruction set is still cruft, and a pure RISC CPU is theoretically more efficient. However, the real world disadvantage of x86 support is minimal. With each die shrink, the x86 to micro-op translator occupies less die space proportionally, and the advantages of the installed hardware and software base gives x86 CPUs a huge lead in economies of scale.

          I know we're both just putting different spins on the same facts, but in the end, practical considerations outweigh engineering pu
        • The situation is common in computing.

          I don't disagree, but I think "the situation" is common in design and engineering of all kinds. The flexible nature of IT may result in more and faster-growing cruft, but continuity in the face of technological change (which is where cruft comes from) is important for any business endeavor. Backwards compatibility always trumps everything, despite the cruft it creates, whether you're talking about CPU architectures, internet protocols, user interface paradigms, keyboa

          • Do you think? I think we currently pay a factor of four or more in cruft, and it won't go away by itself. So our choices for 30 years from now, assuming things go as they have been going, are a factor of 16 or a factor of 64 slowdown, depending on whether we make an effort in this generation or not... not that we will.
      • As for PowerPC Macs, I doubt it. The switch to Intel is what made most new Mac users switch because there was no longer a risk of not being able to run the one Windoze program they might need. If Mac ever went to a non-mainstream CPU again it would be a big big mistake.
        If Apple changes processor again I'll eat my hat!
    • Re:Still sticking (Score:5, Informative)

      by jonesy16 ( 595988 ) on Monday November 12, 2007 @12:23PM (#21324301)
      Actually, one of the reasons that Apple jumped off of the PowerPC platform was BECAUSE of its power inefficiency. The G5 processors were incredibly power hungry, enough so that Apple could never get one cool enough to run in a laptop, and actually offered the Power Mac G5 line with liquid cooling. Compare that to the new quad-core and eight-core Mac Pros and dual-core laptops that run very effectively with very minimal air cooling.
    • RISC vs. CISC (Score:5, Informative)

      by vlad_petric ( 94134 ) on Monday November 12, 2007 @12:23PM (#21324309) Homepage
      That's a debate that happened more than 20 years ago, at a time when all processors were in-order and could barely fit their L1 on chip, and there were a lot of platforms.

      These days:

      • Transistor budgets are so high that the space taken by instruction decoders isn't an issue anymore (L1, L2, and sometimes even an L3 is on chip).
      • Execution is out-of-order, and the pipeline stalls are greatly reduced. The out-of-order execution engine runs a RISC-like instruction set to begin with (micro-ops or r-ops).
      • There is one dominant platform (Wintel) and software costs dominate (compatibility is essential).

      One of the real problems with x86-32 was the low number of registers, which resulted in many stack spills. x86-64 added 8 more general purpose registers, and the situation is much better (that's why most people see a 10-20% speedup when migrating to x86-64 - more registers). Sure, it'd be better if we had 32 registers ... but again, with 16 registers life is decent.
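
      A back-of-the-envelope sketch of the spill arithmetic in Python (all numbers illustrative; real compilers are cleverer than this):

      def spill_traffic(live_values, registers):
          # If more values are live than the ISA has registers, the compiler
          # spills the excess to the stack: roughly one store plus one reload
          # per spilled value each time around a hot loop.
          spilled = max(0, live_values - registers)
          return 2 * spilled

      # A hypothetical hot loop keeping 20 values in flight:
      for isa, regs in [("x86-32", 8), ("x86-64", 16), ("typical RISC", 32)]:
          print(isa, "-", regs, "GPRs ->", spill_traffic(20, regs), "extra memory ops/iteration")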

      • Re:RISC vs. CISC (Score:4, Interesting)

        by TheRaven64 ( 641858 ) on Monday November 12, 2007 @01:00PM (#21324845) Journal

        Transistor budgets are so high that the space taken by instruction decoders isn't an issue anymore (L1, L2, and sometimes even an L3 is on chip).
        Transistor space, no. Debugging time? Hell yes. Whenever I talk to people who design x86 chips their main complaint is that the complex side effects that an x86 chip must implement (or people complain that their legacy code breaks) make debugging a nightmare.

        Execution is out-of-order, and the pipeline stalls are greatly reduced. The out-of-order execution engine runs a RISC-like instruction set to begin with (micro-ops or r-ops).
        Most non-x86 architectures are moving back to in-order execution. Compilers are good enough that they put instructions far enough away to avoid dependencies (something much easier to do when you have lots of registers) and the die space savings from using an in-order core allows them to put more cores on each chip.

        There is one dominant platform (Wintel) and software costs dominate (compatibility is essential).
        Emulation has come a long way in the last few years. With dynamic recompilation you can get code running very fast (see Rosetta, the emulator Apple licensed from a startup in Manchester). More importantly, a lot of CPU-limited software is now open source and can be recompiled for a new architecture.

        x86-64 added 8 more general purpose registers, and the situation is much better (that's why most people see a 10-20% speedup when migrating to x86-64 - more registers)
        Unfortunately, you can only use 16 GPRs (and, finally, they are more or less real GPRs) when you are in 64-bit mode. That means every pointer has to be 64-bit, which causes a performance hit. Most 64-bit workstations spend a lot of their time in 32-bit mode, because the lower memory (capacity and bandwidth) usage and cache churn give a performance boost. They only run programs that need more than 4GB of address space in 64-bit mode. Embedded chips like ARM often do the same thing with 32/16-bit modes. If x86-64 let you have the extra registers with the smaller pointers you would probably see another performance gain.
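
        A rough Python illustration of that pointer-size tradeoff (field sizes assumed for illustration; padding and allocator overhead ignored):

        def node_bytes(pointer_size, key_bytes=4, pointers=2):
            # A hypothetical binary-tree node: one 4-byte key, two child pointers.
            return key_bytes + pointers * pointer_size

        n32 = node_bytes(pointer_size=4)  # 32-bit mode
        n64 = node_bytes(pointer_size=8)  # 64-bit mode
        print("32-bit node:", n32, "bytes; 64-bit node:", n64, "bytes;",
              round(100 * (n64 - n32) / n32), "% more cache and bandwidth per node")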
        • Re:RISC vs. CISC (Score:5, Interesting)

          by vlad_petric ( 94134 ) on Monday November 12, 2007 @02:10PM (#21325689) Homepage
          High-performance computing isn't moving away from out-of-order execution any time soon. Itanic was a failure. The current generation of consoles is in-order, indeed, but keep in mind that they serve a workload niche (a rather large niche in terms of deployment, sure, but still a workload niche).

          The argument that the compiler can do a reasonable job at scheduling instructions ... well, is simply false. Reason #1: most applications have rather small basic blocks (SPEC 2000 integer, for instance, has basic blocks in the 6-10 instruction range). You can do slightly better with hyperblocks, but for that you need rather heavy profiling to figure out which paths are frequently taken. Reason #2: the compiler operates on static instructions, the dynamic scheduler on the dynamic stream. The compiler can't differentiate between instances of an instruction that hit in the cache (with a latency of 3-4 cycles) and those that miss all the way to memory (200+ cycles). The dynamic scheduler can. Why do you think Itanium has such large caches? Because it doesn't have out-of-order execution, it is slowed down by cache misses to a much larger extent than out-of-order processors.

          I agree that there are always ways to statically improve the code to behave better on in-order machines (hoist loads and make them speculative, add prefetches, etc), but for the vast majority of applications none are as robust as out-of-order execution.
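
          The arithmetic behind that point, using the latencies quoted above (miss rates picked for illustration):

          HIT, MISS = 4, 200  # cycles: an L1 hit vs. a miss all the way to memory

          def avg_load_latency(miss_rate):
              # Expected latency of a load given a hit/miss mix. Only the
              # hardware scheduler knows which dynamic instance missed, so only
              # it can overlap independent work with those 200-cycle stalls.
              return (1 - miss_rate) * HIT + miss_rate * MISS

          for mr in (0.0, 0.02, 0.10):
              print("miss rate", mr, "-> average load latency", avg_load_latency(mr), "cycles")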

        • Most non-x86 architectures are moving back to in-order execution. Compilers are good enough that they put instructions far enough away to avoid dependencies (something much easier to do when you have lots of registers) and the die space savings from using an in-order core allows them to put more cores on each chip.
          OTOH most non-x86 architectures are used in environments where it is feasible to compile for the specific chip.

          To win in the PC market, chips must be able to perform reasonably well on code compil
      • by emil ( 695 )
        • While POWER5 was out-of-order, POWER6 is now in-order. That's how they plan to hit 5 GHz.
        • While you've added 8 more registers, you've also doubled the size of pointers (and thus doubled the memory bandwidth required for them). We've seen several cases where Sparc-32 compiled applications are faster than Sparc-64 on the same platform - therefore I'd benchmark an application in 32-bit mode before I'd take the 64-bit version.
        • by HuguesT ( 84078 )
          Note the all-important "pointer". Yes, you have doubled the pointer size, but who cares? The data pointed to is still the same size. I'm sure you can find corner cases where a 32-bit CPU will be faster than its 64-bit counterpart, but for x86_64, my own developer's experience is that it does measurably improve performance.
    • Re: (Score:3, Insightful)

      by pla ( 258480 )
      It's sad that the industry is still sticking to the x86 instruction set.

      Why? Once upon a time, the x86 ISA had too few registers. Today, that problem has vanished (simply by throwing more GP registers at the problem) - And even then, so few people actually see the problem (and I say that as one of the increasingly rare guys who still codes in ASM on occasion) as to make it a non-issue, more a matter of trivia than actual import.



      The Power/PowerPC architecture was good

      I know I risk a holy-war here
      • by Pope ( 17780 )
        The G4 (PPC 74xx) line with AltiVec came out in 1999, two years after MMX debuted in the Pentium. The x86 family still doesn't come close to the PPC 970 line when it comes to SIMD execution.
      • Re: (Score:2, Informative)

        by fitten ( 521191 )
        As much as it sucks to admit it ;), CISC is even interesting in that it sometimes provides a sort of built-in 'code compression'. Sometimes you can load one CISC instruction that does the work of several RISC instructions. The CISC instruction will take up less memory. This means that not only does it take less memory, it takes less cache space, leaving more for other things (more code, more data), and cache space (particularly L1) is still at a premium. Not only that, a fetch of such a CISC instruction is li
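
        Rough numbers behind that 'code compression' point (encoding sizes assumed for illustration; real x86 encodings run from 1 to 15 bytes):

        # One x86 read-modify-write instruction vs. the three fixed-width
        # instructions a load/store RISC would need for the same work.
        x86_bytes = 3        # assumed short encoding of ADD [mem], reg
        risc_bytes = 3 * 4   # load + add + store at 4 bytes each
        print("x86:", x86_bytes, "bytes; RISC:", risc_bytes, "bytes;",
              round(risc_bytes / x86_bytes, 1), "x the I-cache footprint")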
    • It's not really true (Score:3, Informative)

      by Moraelin ( 679338 )
      Well, bear some things in mind:

      1. At one point in time there was a substantial difference between RISC and CISC architectures. CPUs had tiny budgets of transistors (almost homeopathic, by today's standards), and there was a real design decision where you put those transistors. You could have more registers (RISC) or a more complex decoder (CISC), but not both. (And that already gives you an idea about the kinds of transistor budgets I'm talking about, if having 16 or 32 registers instead of 1 to 8 actually
  • Halogen free (Score:3, Informative)

    by jbeaupre ( 752124 ) on Monday November 12, 2007 @12:01PM (#21324021)
    I'm sure they mean eliminating halogenated organic compounds or something similar. Otherwise, I think eliminating halogens from the chips themselves is just a drop in the ocean. A deep, halogen-salt-enriched ocean.
    • by julesh ( 229690 )
      I'm sure they mean eliminating halogenated organic compounds or something similar. Otherwise, I think eliminating halogens from the chips themselves is just a drop in the ocean. A deep, halogen-salt-enriched ocean.

      Halogens are elements. Halogenated organic compounds are compounds that contain halogens. In order to eliminate halogens from the chip, they'll have to eliminate all compounds of halogens. I'd have thought that was fairly obvious...?
    • by ajlitt ( 19055 )
      Good point. I was pretty sure that Intel would have a hard time manufacturing chips without HF.
        • 1. I think the GP means the organic halogenated flame retardants in the epoxy and PCB used to package the chip.

          2. I am not sure about Intel, but I know many fabs have stopped HF wet etching and use dry etching instead, because dry etching is actually cheaper and faster.

  • Can somebody explain (Score:3, Informative)

    by sayfawa ( 1099071 ) on Monday November 12, 2007 @12:08PM (#21324099)
    Why is there so much emphasis on size (as in 45nm) for these things? Does making it smaller make it inherently faster or more efficient? Why? I've looked around (well, I looked at wikipedia anyway) and it's still not clear what advantage the smaller size has.
    • You can fit more of them on a wafer, making each chip cheaper. A wafer defect kills fewer CPUs.

      Or you can make chips more complicated (by using more gates in the same space) and do more in one clock cycle.

      Or some combination of both.

      Also, smaller usually means more energy efficient.
    • Re: (Score:3, Informative)

      by Chabil Ha' ( 875116 )
      Think of it in these terms. Electricity is being used to transmit 1s and 0s inside a circuit. We can only do so much to make the conductors less resistive, so we need to shorten the distance between gates. The less distance an electrical signal has to travel, the more operations can be performed in the same amount of time.
      • Ahh, but going to smaller features, and shorter distances between gates, also means that the lines become narrower. Resistance is proportional to length, and inversely proportional to cross-sectional area. So if you halve the length, and halve the area, total resistance stays the same. Basic EE, folks.

        Smaller might not mean less resistance, unless the lines get shorter faster than they get narrower.
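
        A quick Python sanity check of that scaling argument, using R = rho * L / A (wire dimensions made up for illustration):

        def resistance(rho, length, area):
            # Resistance of a uniform wire: proportional to length,
            # inversely proportional to cross-sectional area.
            return rho * length / area

        RHO_CU = 1.7e-8  # resistivity of copper, ohm*m
        full = resistance(RHO_CU, length=1e-3, area=1e-14)
        half = resistance(RHO_CU, length=0.5e-3, area=0.5e-14)
        print(full, half)  # identical: halving length and area cancels out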
    • by compumike ( 454538 ) on Monday November 12, 2007 @12:17PM (#21324213) Homepage
      The energy required to switch a capacitor from zero to Vdd volts is 1/2*C*Vdd^2.

      Smaller logic sizes can operate faster because the physical gate area of the transistor is that much smaller, so there's less capacitance loading down the piece of logic before it (proportional to the square of the scaling, of course). However, it also tends to be the case that the operating voltages scale down too (because they adjust the semiconductor doping and the gate oxide thickness to match), so you get an even better effect on energy required. Thus, scaling helps both with speed and operating power.

      The problem they're running into now is that at these smaller sizes, the off-state leakage currents are getting to be of the same magnitude as the actual switching (operating logic) currents! This happens because of the reduced threshold voltage when they scale down, so the transistor isn't as "off" as it used to be.

      That's why Intel has to work extra hard to get the power consumption down as the sizes scale down.

      --
      NerdKits: electronics kits for the digital generation. [nerdkits.com]
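
      A small sketch of that energy equation in Python (capacitance and voltage values hypothetical, just to show how the scaling compounds):

      def switch_energy(c_farads, vdd):
          # Energy to charge a gate's capacitance from 0 to Vdd: 1/2 * C * Vdd^2
          return 0.5 * c_farads * vdd ** 2

      old = switch_energy(c_farads=2e-15, vdd=1.2)  # hypothetical 65 nm gate
      new = switch_energy(c_farads=1e-15, vdd=1.0)  # hypothetical 45 nm gate
      print("energy per switch:", old, "J ->", new, "J;",
            round(100 * (1 - new / old)), "% reduction")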
      • Keep in mind too, that the gate dielectric is usually thinned with each generation, increasing the capacitance per area. A typical corresponding reduction of operating voltage (so-called constant field scaling) with each generation contributes to the CV^2 reduction when going to smaller dimensions.

        Of course, the new high-K dielectrics may shift the curve as they give even more capacitance per unit area for a given thickness while possibly allowing higher voltage.

        And, all of the modern dynamic VDD-scaling fe
    • Just as you guess, making the parts smaller drops their heat output and power consumption considerably for a given speed. It's also necessary to advance the technology further, because it allows them to create new, faster parts without raising the power consumption.
    • by Tim C ( 15259 )

      Does making it smaller make it inherently faster or more efficient?
      Yes, basically. For one thing, a smaller chip size means that you can get more of them out of a silicon wafer, and wafer defects kill fewer chips. As for efficiency, that should be obvious - smaller chips mean shorter electrical pathways means less distance for the electrons to travel means less energy required to move them about and less heat generated means higher efficiency.
    • by Rhys ( 96510 ) on Monday November 12, 2007 @12:20PM (#21324259)
      Smaller size means signals can propagate around the chip faster. It also means you need less signal-fixing/synchronization hardware, since it is simpler to get a signal synced up at a given clock rate. Smaller size generally means less power dissipated. Smaller feature sizes means the CPU is physically smaller (generally), so more CPUs fit on a silicon wafer. For each wafer they produce (a high but relatively fixed cost vs the number of CPUs on the wafer) they get more CPUs out (= cheaper). If a CPU is bad, that is a smaller percent of the wafer that was "wasted" on that CPU.
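
      A hedged sketch of that wafer economics (wafer size, die areas, and defect density all assumed; Y = exp(-A*D0) is a standard first-order yield model):

      import math

      def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
          # Ignores edge losses and scribe lines: wafer area / die area.
          wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
          return int(wafer_area / die_area_mm2)

      def poisson_yield(die_area_mm2, defects_per_mm2):
          # Smaller dies are less likely to land on a defect.
          return math.exp(-die_area_mm2 * defects_per_mm2)

      for die_area in (200, 100):  # a full shrink roughly halves die area
          n = dies_per_wafer(300, die_area)
          y = poisson_yield(die_area, defects_per_mm2=0.002)
          print(die_area, "mm^2 die:", n, "candidates,", round(100 * y),
                "% yield,", int(n * y), "good dies per wafer")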
      • by enc0der ( 907267 ) on Monday November 12, 2007 @01:31PM (#21325241) Homepage
        Smaller size means faster, but at the expense of more power. As a chip designer I can tell you that the smaller you go, the more leakage you have to deal with in the gates, and it goes up FAST. Now, with the new Intel chips, they are employing some new techniques to limit the leakiness of the gates; these techniques are not standard across the industry, so it will be interesting to see how they hold up.

        I do not understand what you mean by signal-fixing/synchronization hardware. Design-specific signal synchronization doesn't change over the different gate sizes. What changes is the techniques that are used as people find better ways to do these things. However, these are not technology specific and tend to find their way back into older technologies to improve performance there as well.

        In addition, cost is NOT always cheaper, because die yield is generally MUCH LESS at newer technologies, for those on the bleeding edge. Development costs also go up because design-specific limitations, process variance, and physical limitations cause designs to be MUCH HARDER to physically implement than at larger sizes. Things like electromigration, leakage power, ESD, OPC, DRC, and foundry design rules are MUCH worse.

        What is true is that these people want faster chips, and you can get that, as I said, although the speed differences are not that amazing. Personally, I don't think the cost justifies the improvement in what I have worked on, especially on power. Now, going out a few years from now, as they solve these problems at these specific gate geometries, THEN we will start to see the benefits of the size overall.
    • Does making it smaller make it inherently faster

      Generally, yes, mostly because the capacitance and inductance of electrical components usually scale with size. The logic speed is often limited by things like R*C time constants. At high enough speeds, the speed of signal transmission across the chip comes into play as well.

      Another factor is with smaller parts, more can be packed onto a die. The more parts you have, the more caching and concurrency tricks you can implement to increase speed.

      more efficient?

      Up to a point, but they seem to have hit a wall. Smaller induc
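
      A quick illustration of the R*C point in Python (resistance and capacitance values hypothetical):

      def rc_delay_ps(r_ohms, c_farads):
          # One RC time constant, converted to picoseconds.
          return r_ohms * c_farads * 1e12

      # Shrinking a gate cuts the capacitance it drives, but narrower wires
      # push resistance up, so wire delay doesn't shrink as fast as gates do.
      print(rc_delay_ps(1000, 1e-15))    # 1.0 ps
      print(rc_delay_ps(4000, 0.5e-15))  # 2.0 ps: a thinner wire can be slower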

    • Each silicon wafer that goes through processing to become a bunch of microprocessors costs pretty much the same to make. Having a smaller die makes each chip on the wafer smaller, so you get more chips on each wafer that you process. This also increases your yield, so you can sell more chips. Furthermore, the electrons on the chip have to take a shorter path, so there's less heat evolved when the processor is run. Thus, the chip can be run at a higher clock frequency before heat becomes a problem. In con
    • It depends. As many people explained to you, a smaller gate opens/closes faster. But thinner interconnect has higher resistance, and closer interconnect has higher capacitance. At current gate sizes, the speed of a CPU is dominated by RC delay [wikipedia.org]. So copper has been used to lower the resistance, and low-k [wikipedia.org] materials have been used to lower the capacitance. Also, as the gate becomes smaller, the leakage becomes bigger due to the tunneling effect, which lowers efficiency, so high-k [wikipedia.org] materials have been used to i

    • Well, you can make the die smaller, as others have pointed out, or you can add cache, which *can* also make things faster. Look at the 12 MB caches of the Xeons mentioned in the article. That's quite a number of MBs that won't take too much space (to keep the costs down). Actually, many of my - smaller - applications could fit easily within the cache alone. Of course, with multiple cores, virtualization, and the bottleneck of main memory, having a big cache *can* really help.

      Note: *can* because it rath
  • When TFA is not informative, seek the source [intel.com]. Enjoy.
  • While I realize that GPUs may be doing more calculations than CPUs (I'm not a programmer), the power consumption of many graphics cards/GPUs at idle is getting ridiculous (some are 100 to 200 watts), never mind what is needed during gaming. On the one hand, I would buy an on-board accelerator or a cheap PCI-x card with the knowledge it won't need additional power to the board, but for the odd games that I play, I need more GPU power. Game consoles as a whole, like the X360, consume about 200 watts at max draw.
  • Intel's own spec sheet shows the best of these (and only a single one at that) with a TDP of 65W.

    Call me a pessimist, but my two main systems peak at less than that at the wall, and I have yet to find them too slow for any given task (though I admittedly don't do much "twitch" gaming).

    • Some LV versions will probably come later. The same happened with Clovertown.
      Standard Xeon 5300s are rated at 80W too, the X53xx at 120W. The L53xx Clovertown: 50W. Dual-core Xeon 5138 and 5148: 35W and 40W.
    • by Wavicle ( 181176 )
      Call me a pessimist, but my two main systems peak at less than that at the wall, and I have yet to find them too slow for any given task (though I admittedly don't do much "twitch" gaming).

      You're using a very apples-to-oranges comparison. Hey, I think it is great that the VIA solution works for you (I use it at home for a server as well), but that system is very, very underpowered. For reference the C7 in your system scores about 1,700 Dhrystones and about 300 Whetstones. Last year's Core 2 Duo scored 31,06
      • by pla ( 258480 )
        You're using a very apples-to-oranges comparison

        As I've said to others, see my first reply in this thread. I refer to a modern dual-core >2GHz AMD machine, with a reasonably modern GPU and all the toys you'd expect if you went out and bought a new desktop PC today.
  • Might this be followed by a price drop in their current offerings? I'm about to buy a new C2D, so I'd wait if it meant a significant savings.
  • Just how much hafnium is there in the world, and has Intel cornered the supply before AMD could get their hands on any of it?
  • Come Full Circle (Score:2, Interesting)

    by IorDMUX ( 870522 )
    Once upon a time (the 1970s), everybody used metal for their FET gates. Those aluminum gates are where we got the names MOSFET (Metal-Oxide-Semiconductor Field-Effect Transistor) and CMOS (Complementary MOS). In the 1980s, pretty much every fab gave up metal gates for the polysilicon that has been used since, amidst various enhancements in polysilicon deposition technology, self-aligned gates, etc.

    Now, the trend seems to be to return to the metal gates of yesteryear and ditch the oxide (the 'O' in MOSFET) f
  • What I'd like to see in a 45-nm process is an ARM architecture based SoC (System on a Chip).

  • My key interest is that I can play the games I want to play when I want to play them, but when I've got my system on doing file sharing, or sitting idle, I don't want to be raising my electric bill.

    It still strikes me that Intel chips suck more power on idle, cost more, and run hotter when they run at capacity. So, since I don't do high-end processing, I don't need one. And my SQL servers benefit more from better bandwidth to the processor than from high processing power.

    So far, I have yet to see anything from I
    • by Wavicle ( 181176 )
      It still strikes me that Intel chips suck more power on idle, cost more, and run hotter when they run at capacity.

      It still strikes me that AMD fanboys would repeat the same old line that hasn't been true for about a year.

      It still strikes me that Intel chips suck more power on idle, cost more, and run hotter when they run at capacity.

      It strikes me even more that the fanboys would trot this out in response to an article on an Intel chip that has an idle power draw less than 4 Watts.

      It still strikes me that I
