Intel Hardware

The Future of Intel Processors (164 comments)

madison writes to mention coverage at ZDNet on the future of Intel technology. Multicore chips are the company's focus for the future, and its researchers are working on methods to adapt them for specific uses. The article cites an example where the majority of the cores are x86, with some accelerators and embedded graphics cores added for extra functionality. "Intel is also tinkering with ways to let multicore chips share caches, pools of memory embedded in processors for rapid data access. Cores on many dual- and quad-core chips on the market today share caches, but it's a somewhat manageable problem. 'When you get to eight and 16 cores, it can get pretty complicated,' Bautista said. The technology would prioritize operations. Early indications show that improved cache management could improve overall chip performance by 10 percent to 20 percent, according to Intel." madison also writes, "In other development news, Intel has updated its Itanium roadmap to include a new chip dubbed 'Kittson' to follow the release of Poulson. That chip will be based on a new microarchitecture that provides higher levels of parallelism."
  • by seebs ( 15766 ) on Friday June 15, 2007 @11:46AM (#19520839) Homepage
    I think Cell's taught us two important things about heterogeneous multicore:
    1. It's fairly hard to develop for.
    2. It's bloody fast.

    Looks like Intel's gonna be running with it some; that's good news for anyone making a living selling compilers! :) Buy stock in gcc...
    • gcc? (Score:3, Insightful)

      by everphilski ( 877346 )
      Buy stock in gcc..

      Yeah, cause, you know, Intel doesn't make their own compiler [intel.com] (http://www.intel.com/cd/software/products/asmo-na/eng/compilers/284132.htm)...
      • Re: (Score:3, Informative)

        by walt-sjc ( 145127 )
        It's a joke son. Gcc is GPLed.
      • by seebs ( 15766 )
        You know, come to think of it, IBM has a compiler too.

        Maybe, uhm... A joke?

        (That said, stuff like this IS good news for anyone working on gcc professionally, potentially, although it does have the short-term impact of creating a class of apps where gcc isn't going to be as good as the industrial and research compilers for a while.)
        • it does have the short-term impact of creating a class of apps where gcc isn't going to be as good as the industrial and research compilers

          That class already exists, and the impact isn't short-term. Sorry, gcc is quite good, but it's not as good as the industrial and research compilers on anything more complex than hello world.

          Note that I use gcc regularly, and I believe it to be "good enough" in the vast majority of cases. But from a performance standpoint, it still has a long way to go.

  • by nurb432 ( 527695 )
    How about more code efficiency? That would also improve overall security too.

    If people coded properly, we wouldn't need this 'speed race' just to watch our word processors and browsers get slower and slower each release..
    • by CajunArson ( 465943 ) on Friday June 15, 2007 @12:01PM (#19521059) Journal

      That would also improve overall security too.

      I hate to break it to ya, but in a low-level language like C, doing the proper bounds checks and data sanitization required for security does not help performance (although it doesn't hurt it much either, and should of course always be done).
      There is a lot of bloated code out there, but the bad news for people who always post "just write better code!" is that the truly processor-intensive stuff (like image processing and 3D games) is already pretty well optimized to take advantage of modern hardware.
      There's also the question of what "good code" actually is. I could write a parallelized sort algorithm that would be nowhere near as fast as a decent quicksort on modern hardware. However, on hardware 10 years from now with a large number of cores, the parallelized algorithm would end up being faster. So which one is the 'good' code?
      As usual, real programming problems in the real world are too complex to be solved by 1-line Slashdot memes.
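
      A minimal sketch of the tradeoff described in the comment above, assuming POSIX threads: the same array sorted with a plain qsort() versus split across two threads and merged. Which version is the "good" code depends entirely on how many idle cores the hardware has. Names and sizes below are made up for illustration; build with -pthread.

        /* Serial qsort() vs. a two-thread sort-and-merge of the same data. */
        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        static int cmp_int(const void *a, const void *b)
        {
            int x = *(const int *)a, y = *(const int *)b;
            return (x > y) - (x < y);
        }

        struct half { int *data; size_t n; };

        static void *sort_half(void *arg)
        {
            struct half *h = arg;
            qsort(h->data, h->n, sizeof(int), cmp_int);   /* each thread sorts its half */
            return NULL;
        }

        /* Sort a[0..n-1] using two threads, then merge the halves into out[]. */
        static void two_thread_sort(int *a, size_t n, int *out)
        {
            struct half lo = { a, n / 2 }, hi = { a + n / 2, n - n / 2 };
            pthread_t t;
            pthread_create(&t, NULL, sort_half, &lo);
            sort_half(&hi);
            pthread_join(t, NULL);

            size_t i = 0, j = 0, k = 0;                   /* standard two-way merge */
            while (i < lo.n && j < hi.n)
                out[k++] = (lo.data[i] <= hi.data[j]) ? lo.data[i++] : hi.data[j++];
            while (i < lo.n) out[k++] = lo.data[i++];
            while (j < hi.n) out[k++] = hi.data[j++];
        }

        int main(void)
        {
            enum { N = 1 << 20 };
            int *a = malloc(N * sizeof(int)), *b = malloc(N * sizeof(int));
            for (size_t i = 0; i < N; i++) a[i] = rand();
            two_thread_sort(a, N, b);            /* time this against plain qsort() */
            printf("first=%d last=%d\n", b[0], b[N - 1]);
            free(a); free(b);
            return 0;
        }
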
      • by Nikker ( 749551 )
        I think with all of these cores and such an increase in on-die cache, we should be asking what we can accomplish by staying on-die. As the number of cores increases, so will on-die cache; when we start to get into the 10MB+ area we could likely do some pretty fancy stuff, and treating the registers of idle cores as extra memory would add to this. With all this micro-logic, maybe even simple add + move ops will be added to the off-die RAM as a type of pre-processing.

        The more cores they add the more the system w
    • hmmm how about:?

      Optimization = more specialized code = less maintainability = bugs are worse = adding features adds bloat = security issues

      More powerful processors = less need for optimization

      More powerful processors = Compilers take less time to do their job and developers get more time to work on their applications efficiently

      • How about no. It's inefficient and wasteful, and just plain wrong.
        • Come on, let's all revert to the bubble sort! But seriously, you are right. Not optimizing code is ridiculous. I have jobs that take 40 cores 6 months to execute (spread over 10 machines at the moment). If by optimizing code I can drop that to 3 months, it's a huge win. Unfortunately, it's commercial closed-source code and we can't fix it, and so we currently have a team working on rewriting the code using open-source tools, and we can release the improvements back to the community so that everyone benefit
  • by BritneySP2 ( 870776 ) on Friday June 15, 2007 @11:48AM (#19520873)
    While multicores, obviously, have their use, the future belongs to CPUs with massive internal implicit parallelism, IMHO.
    • While CPUs with massive internal implicit parallelism, obviously, have their use, the future belongs to electric cars, IMHO.

  • With process sizes getting smaller and smaller, it is interesting to watch new ideas for what to do with that newfound area. The elementary choice always seemed to be "throw on more cores", but accelerators and bridges moving into systems-on-chip look like they might have much nicer prospects.

    The average parallelism factor for most programs tends to hover around four. I think Intel might have figured out that this is a decent stopping point for hardware parallelism as well.
    • by f00man ( 1056198 )

      In the early 1980's I was sure that Y2K would bring desktop machines with >10,000 (neural net) processors and paperless offices. I blame MS, Intel and HP.

      I never really expected a flying car though.

    • The average parallelism factor for most programs tends to hover around four. I think Intel might have figured out that this is a decent stopping point for hardware parallelism as well.


      That's not really true anymore. The types of programs we run have changed, and so the average has moved. Any of the media applications that I run regularly, or games, have a much higher potential for parallelism.
  • But gee (Score:4, Funny)

    by MrNonchalant ( 767683 ) on Friday June 15, 2007 @11:52AM (#19520917)
    What I really want is a dialogue with Intel engineers about this piece of Intel-themed news. Why can't you add something like that to the site? You could call it something like Opinions With Intel or Intel And Opinions or Center for Intel. No that's not quite right.
    • Yes, but it would be even better if it included floating, animated, always on top rich media advertising that was triggered every time the mouse got anywhere near the opinion widget on the page. I just might turn off AdBlock to see it...or then again I might not.
  • by Timesprout ( 579035 ) on Friday June 15, 2007 @11:53AM (#19520931)
    So we can can have comments in parallel.
    • by MyLongNickName ( 822545 ) on Friday June 15, 2007 @12:06PM (#19521117) Journal
      Be patient. The other five articles should appear soon.
      • This is in reply to another thread where you mentioned "78% of all drivers consider themselves above average"

        I couldn't reply there because I moderated in that thread.

        Anyway..

        I've heard others make fun of that. But people (you?) seem to overlook that it's entirely possible that 78% of people are, in fact, above average drivers. People (you?) often confuse "average" with "median."

        I mean, it's simple: 1, 1, 2, 2, 2, 2, 2, 2, 2, 2 = average of 1.8. 80% are above average.
        • And if you read the thread you were responding to, you'd know that this has nothing to do with the "average" in question.
          • it was an example. The point is that it's entirely possible that 78% of drivers _ARE_ above average....
    • Re: (Score:3, Funny)

      by wbren ( 682133 )

      So we can can have comments in parallel.
      I think you ramped up your clock frequency too much. Your instructions are overlapping, causing data corruption in the pipeline and grammar mistakes. :-)
  • Who's going to need 80 Cores? *ducks*
    • Who's going to need 80 Cores? *ducks*
      Anyone wanting to run Aero on Vista Ultra Optimum Utmost Paramount Ultimate Quintessential Home Edition?

      I, for one, am betting Intel loses its shirt on this 80-core hodgepodge. That's why I'm investing my entire retirement savings in Transmeta's Crusoe line.
    • Re: (Score:3, Funny)

      by walt-sjc ( 145127 )
      What would a duck do with 80 cores? Quack in harmony?
  • Why isn't parallel processing used more, since more of us will need graphics/math-intensive processors? We don't need faster word processors. The threading direction seems misguided to me. Is the state of parallel processing compilers not workable? I don't want to hear about the stupid '4 diggers to dig 1 ditch' analogy. Cliché.
    • Well, the analogy I've always heard was "1 woman can have 1 baby in 9 months, but 9 women can't have 1 baby in 1 month." Lesson here: not everything is as "parallelizable" as digging a ditch. Data dependency in single execution threads means there often simply isn't enough independent work that can be done at once. Moreover, it is often left up to the user (or third party vendors) to create the application library to take advantage of parallel processing. Almost all code being run at this moment was wri
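
      A back-of-the-envelope way to put numbers on "not everything parallelizes" is Amdahl's law: speedup(n) = 1 / ((1 - p) + p/n), where p is the fraction of the work that can run in parallel and n is the core count. A minimal sketch with a made-up p of 0.75:

        /* Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n).
         * p is the parallelizable fraction, n is the number of cores. */
        #include <stdio.h>

        static double amdahl(double p, int cores)
        {
            return 1.0 / ((1.0 - p) + p / cores);
        }

        int main(void)
        {
            double p = 0.75;                     /* hypothetical: 75% of the work parallelizes */
            int cores[] = { 1, 2, 4, 8, 16, 80 };
            for (int i = 0; i < 6; i++)
                printf("%3d cores -> %.2fx speedup\n", cores[i], amdahl(p, cores[i]));
            /* The output approaches 1/(1-p) = 4x; cores past a point buy almost nothing. */
            return 0;
        }
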
    • Okay how is threading not parallel processing?
      One of the great difficulties of the Cell is its asymmetrical nature. With a Cell you have to do a lot more resource management than with a symmetrical multiprocessor system. I have not worked with the Cell, but one of the issues I could see cropping up is that it may be a little light on non-floating-point resources. With only one PPC core there may be issues with keeping all the SPEs busy.
      The 360 is no slouch when it comes to floating point but has a lot more g
  • For the long term (Score:3, Insightful)

    by ClosedSource ( 238333 ) on Friday June 15, 2007 @12:05PM (#19521103)
    Intel needs to develop new processor technologies to significantly increase native performance rather than just adding more cores. Whether multi-core processors can significantly increase performance for standard applications hasn't yet been proven, and even if it is possible, it will depend on the willingness of developers to do the extra work to make it happen.

    If software developers can't or won't take advantage of the potential benefits of multi-core, Intel and AMD may have to significantly cut the price of their processors because upgrading won't add much value.
    • by timeOday ( 582209 ) on Friday June 15, 2007 @01:03PM (#19521947)

      Intel needs to develop new processor technologies to significantly increase native performance rather than just adding more cores.
      Figure out how to do that and you will be a rich man. The move to multi-core is a white flag of surrender in the battle against the laws of physics to make a faster processor, no doubt about it. The industry did not bite the bullet of parallelism by choice.
    • Re: (Score:3, Informative)

      by 0xABADC0DA ( 867955 )
      That sounds like what they are doing, improving performance by making more things native.

      For example, they could put a Java bytecode interpreter "cpu" into the system. Java CPUs didn't take off because a mainstream processor would always have better process and funding, and you had to totally switch to Java. But if everybody had a Java "cpu" that only cost $0.25 extra to put in the chip and got faster as the main CPU got faster, then it might actually be useful (incidentally .NET bytecode is too complicat
      • Re: (Score:3, Insightful)

        by Vellmont ( 569020 )
        I think this is the most intelligent reply I've heard about multi-core processors. Everything I've heard up to this point is the standard "But multi-threaded programming is both hard, and has diminishing returns". Which is very true. I've often wondered how the hell I'd break my programs into 80 different independent parts.

        Ultimately I think you're right. Processors started out general, and have become increasingly specialized. First we had the "floating point co-processor", next stuff like an MMU, th
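
        For readers wondering what a bytecode "cpu" would actually execute, here is a toy stack-machine interpreter in C. The opcodes are made up (this is not real JVM bytecode); it only illustrates the fetch/dispatch loop that such a core would implement in silicon:

          /* Toy stack machine: the software equivalent of what a bytecode-
           * execution core would do in hardware. Opcodes are invented. */
          #include <stdio.h>

          enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

          static void run(const int *code)
          {
              int stack[64], sp = 0, pc = 0;
              for (;;) {
                  switch (code[pc++]) {
                  case OP_PUSH:  stack[sp++] = code[pc++];          break;
                  case OP_ADD:   sp--; stack[sp - 1] += stack[sp];  break;
                  case OP_MUL:   sp--; stack[sp - 1] *= stack[sp];  break;
                  case OP_PRINT: printf("%d\n", stack[sp - 1]);     break;
                  case OP_HALT:  return;
                  }
              }
          }

          int main(void)
          {
              /* Computes (2 + 3) * 7 and prints 35. */
              int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                                OP_PUSH, 7, OP_MUL, OP_PRINT, OP_HALT };
              run(program);
              return 0;
          }
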
    • Concurrent programming isn't really that hard a problem. To do it easily using today's tools requires some "design patterns" that many programmers aren't used to, but the concurrent models actually end up being cleaner / more intuitive than the serial model in many cases (including things like network programming and GUI programming).

      The problem is that the tools don't make these patterns blindly easy, and they require a little bit of programmer discipline to use properly. That occasionally includes giving

      • I don't make any claims about how hard concurrent programming is supposed to be, but until we see a lot more real-world apps running on multiple cores, we won't know how much of a performance gain will be seen. I'd say anything less than an average improvement of 25% wouldn't justify rewriting a legacy app to accommodate multiple cores.
        • until we see a lot more real-world apps running on multiple cores, we won't know how much of a performance gain will be seen.

          You can estimate that sort of thing reasonably well. A lot of things parallelize in some really obvious way - enough that you'll get better than twice the throughput if you move from one to four cores. Other things would gain a perceived performance advantage simply by using a concurrent programming model even on a single core - tabbed web browsers with plugins / javascript are a goo

          • I suspect that in the real world of applications running concurrently with other applications, these estimates may not hold up very well, but let's see what the next 5 years bring.
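
            A minimal sketch of the "obviously parallel" case mentioned above, using OpenMP (assuming gcc's -fopenmp and linking with -lm); every iteration is independent, so throughput scales close to linearly with the number of cores:

              /* Embarrassingly parallel loop: no shared state between iterations. */
              #include <math.h>
              #include <omp.h>
              #include <stdio.h>
              #include <stdlib.h>

              int main(void)
              {
                  const int n = 10000000;
                  double *pixels = malloc(n * sizeof *pixels);
                  double t0 = omp_get_wtime();

                  #pragma omp parallel for     /* split iterations across cores */
                  for (int i = 0; i < n; i++)
                      pixels[i] = sqrt((double)i) * 0.5;

                  printf("done in %.3f s with up to %d threads\n",
                         omp_get_wtime() - t0, omp_get_max_threads());
                  free(pixels);
                  return 0;
              }
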
    • Intel needs to develop new processor technologies to significantly increase native performance rather than just adding more cores.

      I'm sure they would have done that already if they could. The problem with more powerful processors is the amount of power they use. By using multiple, slightly less powerful cores, you get more performance with less power usage.
      • "I'm sure they would have done that already if they could."

        I'm sure prior to the invention of the Integrated Circuit, many hardware engineers thought that computers couldn't be made any smaller than a large closet. The technologies used today for creating processors are essentially refinements of the IC technology created in the late 1950's.

        I'm not suggesting that Intel and AMD have a lot of options based on that legacy technology, but the future belongs to the companies that can develop new technologies. T
  • Clock Speed? (Score:4, Interesting)

    by tji ( 74570 ) on Friday June 15, 2007 @12:10PM (#19521173)
    It seems that Intel very rarely mentions clock speed in any of their roadmap briefings. The clock speed increases over the last five years or so have been pretty minimal. Moore's law talks about the rate at which transistor density increases. But clock speed had followed a similar curve until recently. The last 4-5 years have to be the longest plateau in the history of the industry.

    Yes, I know they changed to a new architecture that put less emphasis on raw clock speed. But, given that more efficient architecture, clock speed increases are still going to be a major benefit.

    So, what's the story? Has the industry hit a wall? How long will it take to get back to above 3GHz for a mainstream processor, or even to the 4GHz levels that the old Pentium IVs were pushing?

    Don't get me wrong, I am a huge fan of the power efficiencies of the new chips. For my primary purposes (laptop, HTPC) the new chips are a godsend. And, the thought of specialized "accelerator" cores is fantastic (a video decoder core for MPEG2 & H.264, please). But, doing that same thing at 4GHz is even more compelling (of course, with the speedstep++ stuff to shut down cores when not needed, and throttle back to low GHz to save power).
    • Re:Clock Speed? (Score:5, Informative)

      by ZachPruckowski ( 918562 ) <zachary.pruckowski@gmail.com> on Friday June 15, 2007 @12:40PM (#19521609)
      Penryn (a die shrink of the Core 2 Duo/Quad plus some SSE4) should have 3 GHz+ models. The real performance issue isn't clockspeed, it's instructions per second. When you make 128-bit SSE take fewer cycles, add execution units, improve scheduling logic, and reduce access latencies (through pre-fetching, larger caches, or faster buses), you make processors faster. A processor that runs at 2 GHz with 3 instructions per clock is just as fast as one that runs at 4 GHz with 1.5 IPC. The reason clockspeed hasn't been increasing is that performance gains have been coming from other areas. Intel could probably sell a juiced-up 3.6 GHz Core 2 Extreme, but it'd run at 180 watts or something, and cost like $1500.
      • The real performance issue isn't clockspeed, it's instructions per second.
        Bull. The fact is, the MHz "myth" is mostly true. The vast majority of improvement in processor speed over the past 30 years is due to clock rate, not IPC. The performance gains from other areas over the last 5 years have not kept pace with the rate of progress for the preceding 25 years, not even close.
        • Re:Clock Speed? (Score:4, Informative)

          by 644bd346996 ( 1012333 ) on Friday June 15, 2007 @01:29PM (#19522347)
          Sure, for most of the past 25 years, it has been the clock speed that's been improving. But that's changed in recent years. When Intel switched from Prescott to Core, they pretty much cut the clock speed in half without really sacrificing performance. That's because they increased the IPC a lot in Core, so that it had comparable IPS.

          When comparing different processors with the same ISA (ie x86), IPS is the best measure of CPU performance, not clock speed.
        • Re: (Score:3, Informative)

          by Vancorps ( 746090 )

          Tell that to the Amiga guys and to AMD when they chose IPC over clock while the P4 was around. Both are very important. The industry spent years ramping up the clock and now they're spending a few years working on IPC. It makes perfect sense to me. Moore's law also doesn't refer to the frequency of a chip but to the number of transistors which has kept pace especially now with the 45nm processes.

          Personally I think for the moment IPC is far more important than frequency given computers are doing more and m

          • Tell that to the Amiga guys and to AMD when they chose IPC over clock while the P4 was around. Both are very important.

            Not equally important. The original 8086 ran at 4 MHz and had an average CPI (in actual use) of 12 [serghei.net]. The Core 2 has a theoretical maximum of 4 IPC. That's only a difference of at most 48 times (actually less, but I couldn't find benchmark IPC data for Core 2). Meanwhile MHz over the same time period has increased by a factor of greater than 500! Thus 90% of the difference is from goo

        • Instructions per Second = Cycles/Second (the MHz) * Instructions/Cycle (IPC). An IPS increase can come from either MHz or IPC increases. Therefore, if you triple IPC and halve MHz, you get a 1.5x increase in (single-threaded) performance. You misread "instructions per second" as IPC, which it isn't.
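
          The arithmetic spelled out: IPS = clock rate * IPC, so two chips can trade clock for IPC and still land in very different places. A trivial check with made-up numbers (illustrative only, not benchmarks of real parts):

            /* Instructions per second = clock (Hz) * instructions per cycle. */
            #include <stdio.h>

            int main(void)
            {
                double netburst_ips = 3.6e9 * 1.0;   /* hypothetical: 3.6 GHz, 1.0 IPC */
                double core_ips     = 2.4e9 * 2.0;   /* hypothetical: 2.4 GHz, 2.0 IPC */

                printf("hypothetical NetBurst-like: %.2e instr/s\n", netburst_ips);
                printf("hypothetical Core-like:     %.2e instr/s\n", core_ips);
                printf("ratio: %.2fx faster despite a 33%% lower clock\n",
                       core_ips / netburst_ips);
                return 0;
            }
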
    • Has the industry hit a wall? How long will it take to get back to above 3GHz for a mainstream processor

      Power6 is a mainstream server processor operating at 4.7 GHz in servers today, and at 6 GHz in the lab. While it's clear that gains are more difficult now, it would appear the industry has not hit the wall yet.
    • by Sinical ( 14215 )
      Look for POWER6: 4.7 GHz.
      Look for bumps in Cell or Cell2: Cell2 expected @ > 4 GHz.

      Note that these will go into machines where more expensive heat-dissipation devices can be used, i.e. any of IBM's machines or RoadRunner.
    • Re: (Score:3, Informative)

      So, what's the story? Has the industry hit a wall?
      Yes. There was a big story about three years ago that when Intel got its first chips from some new process shrink (90 nm?), they were startled to find that they couldn't get them to run substantially faster than the previous version. Up until then, they'd always gotten a significant speedup from that with no design changes, but they did hit some sort of physical limit no one was expecting. I haven't heard anything since about whether they figured out what
  • New term war. (Score:4, Insightful)

    by jshriverWVU ( 810740 ) on Friday June 15, 2007 @12:13PM (#19521213)
    I was just checking out this page here [azulsystems.com], which discusses a machine with 768 cores. Since I do a good amount of parallel programming, this is good news to me. But it seems that for the average person, this is turning into another MHz/GHz war, this time over cores.

    What we really need is for software to catch up. Luckily, some programs like Premiere and Photoshop have supported multiple CPUs for a while now. But games, etc. can really benefit from this. Just stick AI on one core, terrain on another, etc.

    • "Software" (as in all software ever written) is not a monolithic thing. The vast majority of software in use today is not CPU-restricted by modern (and even 5-year-old) commodity hardware.

      Of the little bit that does need oomph, where SMP can be taken advantage of, people have largely been working on doing so for a while now.

      Only the little fraction that remains - projects that CAN USE the extra oomph and haven't been developed in that direction yet - needs to catch up.

      Your statement hardly applies to most so
    • by suggsjc ( 726146 )
      First, I'm not saying you're wrong. But the (processor) world doesn't revolve around /. comments/criticisms. Meaning, it's all too easy to look at companies (esp. big companies) and say that they just get going in one direction and don't stray from the course until it hits a dead end.

      Do you really think companies will intentionally go in the wrong direction (more GHz, more cores, etc) just because? Possibly for marketing reasons, but outside that I would think that with their massive R&D budget that the
  • by gEvil (beta) ( 945888 ) on Friday June 15, 2007 @12:15PM (#19521237)
    I've found that improved cash management does wonders for me, like allowing me to buy things like new processors.
  • My thought is: How long can Intel and AMD remain interchangeable? For that matter, how interchangeable will Intel be in the same socket, if processors are going to vary this widely? And is this a good thing?
    • Re: (Score:3, Informative)

      by drinkypoo ( 153816 )

      For that matter, how interchangeable will Intel be in the same socket, if processors are going to vary this widely? And is this a good thing?

      If Intel used just one socket, then you would have portions of a socket unused on some systems, but it would cost less to do the design, because there would be only one design. They don't do this because a socket with fewer pins costs less.

      I don't know if that's what you wanted to know...

      Intel and AMD could ostensibly remain eternally interchangeable; they are not and

  • by Animats ( 122034 ) on Friday June 15, 2007 @12:21PM (#19521337) Homepage

    Where will all the CPU time go on desktops with these highly parallel processors?

    • Virus scanning. Multiple objects can be virus scanned in parallel.
    • Adware/spyware. The user impact from adware and spyware will be reduced since attacks will be able to use their own processor. Adware will be scanning all your files and running classifiers to figure out what to sell you.
    • Ad display. Run all those Flash ads simultaneously. Ads can get more CPU-intensive. Next frontier: automatic image editing that puts you in the ad.
    • Indexing. You'll have local search systems indexing your stuff, probably at least one from Microsoft and one from Google.
    • Spam. One CPU for filtering the spam coming in, one CPU for the bot sending it out.
    • DRM. One CPU for the RIAA's piracy searcher, one for the MPAA, one for Homeland Security...
    • Interpreters. Visualize a Microsoft Office emulator written in Javascript. Oh, wait [google.com].
    • Re: (Score:3, Insightful)

      by walt-sjc ( 145127 )
      Keep in mind that many of those tasks are also very I/O intensive, and our disk speed has not kept up with processor speed. With more cores doing more things, we are going to need a HELL of a lot more bandwidth on the bus for network, memory, disk, graphics, etc. PCI SuperDuper Express anyone?
      • Will Intel's newer CPUs have something like AMD's Direct Connect Architecture?
        Will CPUs be able to talk to each other without needing to use the chipset?
        Will they be able to have more than one northbridge-like chip, as there is in high-end AMD systems?
        Will they have cache coherency?
        Will you be able to have add-on cards on the CPU bus like you can with HyperTransport?
        Only having one chipset link for the PCI-e slots, I/O, network, etc. can be a big choke point in 2-4+ CPU systems, even more so with each CPU h
    • by jafac ( 1449 )
      Only on Slashdot can it be ambiguous when something THIS funny is stamped Insightful.
  • I for one do a lot of CPU-intensive coding, so I *would* use a 1 THz processor. One thing I don't understand: they kept wanting to get more GHz for the same size and eventually hit a barrier. So why are we stuck on having a processor so small? I recently bought a 3 GHz CPU and it was about the size of a 50-cent piece, and the actual core was smaller than a dime! 3 GHz in less space than a dime! Cool, but why can't they just extend outwards?

    I wouldn't mind going back to the days when computers were bigger if i

    • by RevHawk ( 855772 )
      IANAS (I am not a scientist) But I thought I remembered hearing the size limitation has to do with the speed of light only being so fast - so if you make a cpu too large, you run into a delay issue because data can only move so fast. But, this might all be total BS. I did read it on Slashdot after all...
    • by bcmm ( 768152 )
      I don't think size is an issue really. Faster cycling doesn't come from adding transistors, it comes from making things happen faster. If anything, putting things closer together helps.
    • by smoker2 ( 750216 )

      3 GHz in less space than a dime! Cool, but why can't they just extend outwards?

      Three words :
      Speed Of Light
      The clock speed (of a cpu) is limited by the speed of light, and the bigger the chip, the further stuff has to travel. Even at light speed, you can only go so far and get back again in a certain time.
      I'm not brilliant at explaining this, but I'm sure someone else will pick this up.
      In the meantime, have a look at this interesting paper [www.gotw.ca] from 2005.

    • 3 GHz in less space than a dime! Cool, but why can't they just extend outwards?
      Because the speed of light is too slow. No, seriously. You wanna run at 3 GHz? Light only travels about 4 inches in a clock cycle. Of course, you also need to allow time for switching - a processor is mostly a big bunch of switches, and they take a little time to respond to turn on and off.
      • by dgatwood ( 11270 )

        And the speed of electrical propagation is even slower. In modern, copper-based chips, it's about 2/3rds the speed of light, IIRC. In the old aluminum-trace chips, I believe electrical propagation was even slower. The next gen will probably use carbon nanotubes, which reportedly provide faster propagation.

        That said, your point still holds that you are constrained by the speed of electrical signal propagation in the trace medium (currently copper), and that short of changing that medium (and thus, the s
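
        A quick check of the figures in the two comments above, with round numbers (the 2/3 c copper figure is the rough value quoted above, not a measured one):

          /* How far a signal can travel per clock cycle, in vacuum and in copper. */
          #include <stdio.h>

          int main(void)
          {
              const double c_vacuum = 3.0e8;            /* m/s, speed of light       */
              const double c_copper = 0.66 * c_vacuum;  /* rough on-chip propagation */
              const double clock_hz = 3.0e9;            /* 3 GHz                     */

              printf("per cycle in vacuum: %.1f cm (~%.1f inches)\n",
                     100.0 * c_vacuum / clock_hz, 39.37 * c_vacuum / clock_hz);
              printf("per cycle in copper: %.1f cm (~%.1f inches)\n",
                     100.0 * c_copper / clock_hz, 39.37 * c_copper / clock_hz);
              /* A round trip halves that again, before any transistor switching delay. */
              return 0;
          }
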

  • Caches are cool because they're automated to solve a common chip problem, faster access to more frequently used data, without any extra programming. But they're a pain because they're a blob that extra programming can't do anything else with. If Intel could just add some programmatic access to core caches (including flushing and swapping in/out to main or other-core memory), which could otherwise serve higher performance for some cycles, they'd solve a lot of these problems with little investment.

    Conversely
    • Intel have added some programmer control over the cache. Look at the prefetch, movnt and sfence instructions. They're only really hints, but they do help.

      Time to dig out your instruction set manual... :-)
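
      A minimal sketch of those hints through compiler intrinsics (gcc/icc, <xmmintrin.h>): _mm_prefetch asks for a cache line early, and the non-temporal (movnt) store plus _mm_sfence writes results around the cache when you know you won't re-read them soon. The function name and sizes are made up; it assumes n is a multiple of 4 and out is 16-byte aligned:

        #include <xmmintrin.h>   /* _mm_prefetch, _mm_stream_ps, _mm_sfence */

        void scale_array(const float *in, float *out, int n, float k)
        {
            const __m128 scale = _mm_set1_ps(k);
            for (int i = 0; i < n; i += 4) {
                /* Hint: start fetching data we'll need a few iterations from now. */
                _mm_prefetch((const char *)(in + i + 64), _MM_HINT_T0);

                __m128 v = _mm_mul_ps(_mm_loadu_ps(in + i), scale);
                _mm_stream_ps(out + i, v);   /* movntps: store that bypasses the cache */
            }
            _mm_sfence();                    /* make the streamed stores globally visible */
        }
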
    • The 8800GTX is going this way, with lockable cache lines and control over how jobs are split between cores.
  • by Nim82 ( 838705 ) on Friday June 15, 2007 @12:34PM (#19521523)
    I'd much rather they focussed on making chips more energy efficient than faster. At the moment, barring a few high-end applications, most of the CPU power on the majority of current processors is largely unused.

    I dream of the day when my gaming computer doesn't need any active cooling, or heat sinks the size of houses. Focussing on efficiency would also force developers to write better code; honestly, it's unbelievable how badly some programs run and how resource intensive they are for what they do.
    • I second that.

      I've just finished pulling apart my E6X00-based gaming box in favor of a C2D T5500 mobile-on-desktop rig, replacing a fast FSB with a fanless (BIG-heatsink) CPU and cutting CPU power consumption to almost 1/3. (Yes, I know an 8800 eats 250 watts at idle. I'm still looking for a way to depower it and use an alternative low-power VGA-out when not in use. Mention 'em if you can think of 'em.)

      L7200 and L7400s are soon to hit the mobile-478-socket CPU market (ThinkPad X60t's already ship with it), givi
      • by tknd ( 979052 )
        Easy solution: get two computers. One for gaming and one for everyday tasks.

        I've tried looking around for power-efficient desktop parts and it's pretty much trial and error. For example, I went through three desktop Athlon 64 motherboards trying to find one with low power consumption, but I could never get close to my laptop.

        Once you've done that, the next thing I suggest is trying to run Vista (/ducks). You may laugh at first but I recently bought a dell c521 athlon X2 machine for my parents with vista busin
        • Already doing that.
          My other computer is an ultraportable dual-core T-5600-based Thinkpad X60.

          Point is, my requirement is a bit different.

          I game on and off, which is to say for 3 months I don't touch computers when I'm in school, then for 3 more I do some gaming. Cheaper (and nicer) to buy a graphics card for those months, then sell it off before the next semester. A proven way of getting a better academic record too ;-)

          Still, I don't want to disassemble the entire desktop rig each time, and in school-era I w
    • Re: (Score:3, Informative)

      by drinkypoo ( 153816 )

      I'd much rather they focussed on making chips more energy efficient than faster.

      The primary enemy of electronics is heat caused by inefficiency. By moving to a smaller process we reduce voltage, thus we reduce power (P=VI), and thus we reduce heat. So we can go faster. But we can also not go faster, and go lower power instead. VIA is the current leader, AFAIK, in low-power x86-compatible processors/systems. But beyond their equipment, much of which is very sad and slow, you can simply underclock any CPU and depending
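
      A rough back-of-the-envelope sketch of why dropping voltage pays off so well: dynamic (switching) power in CMOS is commonly approximated as P ~ C * V^2 * f, so a modest voltage reduction helps quadratically. The numbers below are made up purely for illustration:

        /* Dynamic power approximation for CMOS: P ~ C * V^2 * f.
         * Illustrative numbers only; real chips also have leakage power. */
        #include <stdio.h>

        static double dyn_power(double c_farads, double volts, double freq_hz)
        {
            return c_farads * volts * volts * freq_hz;
        }

        int main(void)
        {
            double c = 1.0e-9;  /* made-up effective switched capacitance */

            double p_fast = dyn_power(c, 1.40, 3.0e9);   /* 1.40 V at 3.0 GHz */
            double p_slow = dyn_power(c, 1.10, 2.0e9);   /* 1.10 V at 2.0 GHz */

            printf("fast: %.2f W   slow: %.2f W   (%.0f%% less power for 33%% less clock)\n",
                   p_fast, p_slow, 100.0 * (1.0 - p_slow / p_fast));
            return 0;
        }
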

  • Energy Efficiency (Score:3, Interesting)

    by zentec ( 204030 ) * <zentec@gmai l . com> on Friday June 15, 2007 @02:04PM (#19522911)
    The future for Intel is not only a bazillion cores and cheaper/faster chips, but delivering them with outstanding energy efficiency. This is obviously important for portable computing, but it's also important for reducing heat load and power consumption in large data centers. Cost-of-ownership comparisons have yet to include power consumption, but as greenhouse gas taxes start making their way onto electric bills, it's likely to be a selling point.

    More and more, there's a need for extremely energy efficient, low-footprint devices for special-purpose applications. It just doesn't make a lot of sense to have a PC sucking 60 watts when all you need is something to run Minicom on a simple 15" LCD screen.
  • by gilesjuk ( 604902 ) <giles.jones@nospaM.zen.co.uk> on Friday June 15, 2007 @03:28PM (#19524187)
    That's what they need to do. Rather than make one chip look like two, it's easier to get max performance by making more than one core appear as one.

"An idealist is one who, on noticing that a rose smells better than a cabbage, concludes that it will also make better soup." - H.L. Mencken

Working...