The Future of Intel Processors 164
madison writes to mention coverage at ZDNet on the future of Intel technology. Multicore chips are their focus for the future, and researchers at the company are working on methods to adapt them for specific uses. The article cites an example where the majority of the cores are x86, with some accelerators and embedded graphics cores added on for extra functionality. "Intel is also tinkering with ways to let multicore chips share caches, pools of memory embedded in processors for rapid data access. Cores on many dual- and quad-core chips on the market today share caches, but it's a somewhat manageable problem. "When you get to eight and 16 cores, it can get pretty complicated," Bautista said. The technology would prioritize operations. Early indications show that improved cache management could improve overall chip performance by 10 percent to 20 percent, according to Intel." madison also writes, "In other development news, Intel has updated its Itanium roadmap to include a new chip dubbed 'Kittson' to follow the release of Poulson. That chip will be based on a new microarchitecture that provides higher levels of parallelism."
Interesting! Cell is making waves after all... (Score:5, Funny)
1. It's fairly hard to develop for.
2. It's bloody fast.
Looks like Intel's gonna be running with it some; that's good news for anyone making a living selling compilers!
gcc? (Score:3, Insightful)
Yeah, cause, you know, Intel doesn't make their own http://www.intel.com/cd/software/products/asmo-na
Re:gcc? (Score:3, Informative)
Re:gcc? (Score:2)
Maybe, uhm... A joke?
(That said, stuff like this IS good news for anyone working on gcc professionally, potentially, although it does have the short-term impact of creating a class of apps where gcc isn't going to be as good as the industrial and research compilers for a while.)
Re:gcc? (Score:2)
Note that I use gcc regularly, and I believe it to be "good enough" in the vast majority of cases. But from a performance standpoint, it still has a long way to go.
Re:The future of Intel, with AMD following (Score:2)
Instead of more power (Score:1, Interesting)
If people coded properly, we wouldn't need this 'speed race' just to watch our word processors and browsers get slower and slower each release..
Re:Instead of more power (Score:5, Insightful)
I hate to break it to ya, but in a low-level language like C, doing the proper bounds checks and data sanitization required for security does not help performance (although it doesn't harm it much either, and should of course always be done).
There is a lot of bloated code out there, but the bad news for people who always post "just write better code!" is that the truly processor-intensive stuff (like image processing, 3D games) is already pretty well optimized to take advantage of modern hardware.
There's also the definition of what "good code" actually is. I could write a parallelized sort algorithm that would be nowhere near as fast as a decent quicksort on modern hardware. However, on hardware 10 years from now with a large number of cores, the parallelized algorithm would end up being faster. So which one is the 'good' code?
As usual, real programming problems in the real world are too complex to be solved by 1-line Slashdot memes.
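The trade-off described above can be sketched in a few lines. This is a hypothetical illustration, not anyone's production code: split the input, sort the chunks in parallel worker processes, and merge the results. On a machine with few cores the process overhead usually loses to a plain sorted(); with enough cores the balance can tip the other way.

```python
# Hypothetical parallel sort: sort chunks in worker processes, then merge.
from heapq import merge
from multiprocessing import Pool

def parallel_sort(data, workers=4):
    # Split the input into roughly equal chunks, one per worker.
    chunk = max(1, len(data) // workers)
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # Sort each chunk in a separate process.
    with Pool(workers) as pool:
        sorted_chunks = pool.map(sorted, chunks)
    # Merge the sorted chunks back into one sorted list.
    return list(merge(*sorted_chunks))

if __name__ == "__main__":
    print(parallel_sort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]
```

Whether this counts as "good code" depends entirely on the hardware it runs on, which is the parent's point.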
Re:Instead of more power (Score:2)
The more cores they add, the more the system will seem to converge into the CPU. As this happens, devices will become very simple, since most of the system will be able to operate in a smaller package. As the system makes more money it will become more and more closed; curiosity will lead to hacks, hacks will lead to other uses, which will give us an interface that will make the whole thing balloon up again....
What a tangled web we weave eh?
Re:Instead of more power (Score:2, Insightful)
Besides, if we stopped adding features, we'd still be using things like ed for editing (and 'word processing'), our games would still be like Pong, and our remote access would still be VT52 terminals.
Re:Still using pong and VT52 terminals (Score:2)
Re:Still using pong and VT52 terminals (Score:2)
Re:Instead of more power (Score:2, Insightful)
The parent's point is that in code where it makes a difference, the code is already thoroughly optimized, in general. Slimming down the code for Microsoft Word or XEmacs or Firefox or Nautilus or iTunes (there, now we've slaughtered everyone's sacred cow!) isn't likely to make much of a difference because apps like these already run plenty fast on modern hardware. Sure, bloat is bad, but it's a lot harder to remove bloat from existing code without removing features than it sounds. If bloat is an issue, use an equivalent app with fewer features -- nano instead of XEmacs, for instance.
Re:Instead of more power (Score:2)
Re:Instead of more power (Score:5, Funny)
Re:Instead of more power (Score:2)
Better code = less bloat = better performance and security.
The thing you've failed to realize is that "bloat" is relative. One man's bloat is another man's "gotta-have-it" feature. Also, the poster's point was that "better performance" is a moving target.
Programmers don't design software for one guy, with one computer, that's run only next week. They design software for a hundred/thousand/million guys that runs on 200 different computers of different speeds, and for the next several years.
The basic take-home message here is that the computing world changes fast, and has a wide diversity of environments. "Better" changes.
Re:Instead of more power (Score:2)
Well, as you are the only important person on the planet, I would like to know what you're planning to do about climate change.
Re:Instead of more power (Score:2)
What does this mean? I like playing games, and entertainment is not worthless. I can only conclude (from reading and rereading your comment at least six times) that you disagree.
What does this mean? (Score:2)
Re:What does this mean? (Score:2)
Because I have a hard time figuring out why anyone would say anything that stupid, including someone such as yourself whose nickname I recognize yet do not (yet) associate it with revulsion.
3D visualization, besides being an excellent tool for game development, is also used for a broad variety of real-world applications.
It's used in engineering, in molecular biology, in ordinary biology. It's used as a training tool, as well. The 9/11 incident heavily underscores the high value of computer-based training through simulation.
The same technology used in games is used for these purposes.
Re:What does this mean? (Score:2)
Now, that out of the way, I honestly believe a lot of what is being done today and passed off as 'need' is a waste of resources. Most of the entire 'digital industry' is a waste as far as I am concerned. I feel the 'digital revolution' has far exceeded its usefulness and is actually now harming society as a whole. It has/had its place, but not invading every part of our lives as it does now.
And yes, I wear a windup mechanical watch to this day, and don't have fancy computer-controlled components in my nearly 30-year-old car. Won't ever buy another modern vehicle for the same reason. Oh, and guess what, just to be totally inconsistent, I actually make my living from supporting these infernal digital devices. But it doesn't mean I have to approve of how they are being (ab)used.
Re:What does this mean? (Score:2)
What you are experiencing is the "settling-out" phase that comes with every new technology. During the industrial revolution, cancer rates doubled without a corresponding increase in life expectancy. But now, each individual source of pollution tends to be dramatically cleaner than they were back then. Today, cancer rates are high, but we also live long enough to hit that wall when we should be, not prematurely.
I think actually that it's not at all the digital revolution that is the problem, but capitalism. We seem to be convinced (as a planet! the only thing we're on the same page about) that making money is the highest calling of man. For all their posturing, the people running the Chinese government live lives of opulence, so I think it's safe to say that they agree as well - actions speaking louder than words and all that.
Look around and you will see a lot of people complaining that they are broke while sitting on the couch watching their big screen TV. I am not much of an exception, except instead of big new shiny possessions I live in a nice house, and it sucks up a large portion of my income in rent. The problem is not that we're all using electronics all the time, the problem is that we're not happy with ourselves and we substitute possessions for inner peace.
I have nothing against wind-up watches (except that I've never owned a good one, and a good digital watch is a hell of a lot cheaper than a good mechanical one) but I have to part company with you on the car issue. I think that's nothing short of Luddism. Electronically-controlled cars are (when not implemented by idiots) more reliable and more efficient than purely mechanical ones. And besides, many cars of that era are computer controlled. Even when they are carbureted, anything made after California's emissions standards went up pretty much has an O2 sensor and a computer to match. They simply use a motor to adjust the mixture control.
No, of course. But it does definitely diminish the strength of your argument (as does having it on slashdot.)
Re:What does this mean? (Score:2)
As far as making a living in the 'industry' and sounding like a hypocrite, it does give me the unique viewpoint of being someone on the 'inside' who has grown over the years to have a strong distaste for where it's heading. (Unfortunately I've been in it far too long to get out; we all gotta eat and have a roof to live under.)
I agree that I do have to pay more for some things, like my fountain pen, my watch, gas stove.. but at least I can make a statement in my personal life like this, with the acceptance of reality (to avoid being called a Luddite) that one can't practically go 100% 'retro'; as above, you gotta eat.. and you have to interact with the rest of the world on their level, to an extent. At least in the USA anyway; in other countries, way out in the sticks, your mileage may vary..
( we are way OT now, and you get my *honest* point, since im not playing 'lets piss off the
Re:What does this mean? (Score:2)
If anyone else read this thread all the way down to here then heed the warning. *Do not* open any post from nurb432...
Re:Instead of more power (Score:2)
hmmm how about:?
Optimization = more specialized code = less maintainability = bugs are worse = adding features adds bloat = security issues
More powerful processors = less need for optimization
More powerful processors = Compilers take less time to do their job and developers get more time to work on their applications efficiently
How about: (Score:2)
Re:How about: (Score:2)
Multicore vs. implicit parallelism (Score:4, Interesting)
Re:Multicore vs. implicit parallelism (Score:2)
Re:Multicore vs. implicit parallelism (Score:2)
Intel is very likely doing both with equal zeal, and the market is at a point where it will pay for useful advances in either.
Re:Multicore vs. implicit parallelism (Score:2, Interesting)
In a manner of speaking, yes. For a compiler of a programming language to be able to implement the language's constructs efficiently, there must be adequate support for those constructs in the target hardware.
On a more general note, the boundaries between hardware and software are always blurred, in that you cannot completely abstract one from another without hurting the performance of the system.
Let's see where this takes us (Score:2)
The average parallelism factor for most programs tends to hover around four. I think Intel might have figured out that this is a decent stopping point for hardware parallelism as well.
Re:Let's see where this takes us (Score:1)
In the early 1980's I was sure that Y2K would bring desktop machines with >10,000 (neural net) processors and paperless offices. I blame MS, Intel and HP.
I never really expected a flying car though.
Re:Let's see where this takes us (Score:2)
That's not really true anymore. The type of programs that we run has changed, and so the average has moved. Any of the media applications that I run regularly, or games, have a much higher potential for parallelism.
But gee (Score:4, Funny)
Re:But gee (Score:2)
This story should be posted 8 times (Score:4, Funny)
Re:This story should be posted 8 times (Score:5, Funny)
Re:This story should be posted 8 times (Score:2)
I couldn't reply there because I moderated in that thread.
Anyway..
I've heard others make fun of that. But people (you?) seem to overlook that it's entirely possible that 78% of people are, in fact, above average drivers. People (you?) often confuse "average" with "median."
I mean, it's simple: 1, 1, 2, 2, 2, 2, 2, 2, 2, 2 = average of 1.8. 80% are above average.
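A quick script makes the mean/median distinction concrete, using the same toy numbers:

```python
# Toy "driver skill" scores matching the parent's example: the two low
# scores drag the mean below the typical value, so 80% sit above the mean.
scores = [1, 1, 2, 2, 2, 2, 2, 2, 2, 2]

mean = sum(scores) / len(scores)
mid = len(scores) // 2
median = sum(sorted(scores)[mid - 1:mid + 1]) / 2  # average of the two middle values

print(mean)                                        # 1.8
print(median)                                      # 2.0
print(sum(s > mean for s in scores) / len(scores)) # 0.8
```

By definition at most half of drivers can be above the median, but any fraction of them can be above the mean.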
Re:This story should be posted 8 times (Score:2)
Re:This story should be posted 8 times (Score:2)
Re:This story should be posted 8 times (Score:3, Funny)
Oblig. (Score:1)
Re:Oblig. (Score:2)
I, for one, am betting Intel loses its shirt on this 80-core hodgepodge. That's why I'm investing my entire retirement savings in Transmeta's Crusoe line.
Re:Oblig. (Score:3, Funny)
Cell and parallel processing. Answer this for me. (Score:2)
Re:Cell and parallel processing. Answer this for m (Score:2, Insightful)
Re:Cell and parallel processing. Answer this for m (Score:2)
Use of "envision" - check
Incoherent rambling loaded with buzzwords and cliches - check
You sir, are a tool.
Re:Cell and parallel processing. Answer this for m (Score:2)
One of the great difficulties of the Cell is its asymmetrical nature. With a Cell you have to do a lot more resource management than with a symmetric multiprocessor system. I have not worked with the Cell, but one of the issues I could see cropping up is that it may be a little light on non-floating-point resources. With only one PPC core there may be issues with keeping all the SPEs busy.
The 360 is no slouch when it comes to floating point but has a lot more general-purpose CPU power than the PS3. The PS3 will kill the 360 in things like transcoding video, but the 360 may be a better mix of capabilities than the PS3.
For the long term (Score:3, Insightful)
If software developers can't or won't take advantage of the potential benefits of multi-core, Intel and AMD may have to significantly cut the price of their processors because upgrading won't add much value.
Re:For the long term (Score:4, Insightful)
Re:For the long term (Score:3, Informative)
For example, they could put a Java bytecode interpreter "cpu" into the system. Java CPUs didn't take off because a mainstream processor would always have better process and funding, and you had to totally switch to Java. But if everybody had a Java "cpu" that only cost $0.25 extra to put in the chip and got faster as the main CPU got faster, then it might actually be useful (incidentally
Alternatively, they could put in generic garbage collection as a separate processor that runs all the time. This could accelerate Python, Java,
I don't think multi-threaded code is necessarily the only way to take advantage of multiple cores.
Re:For the long term (Score:3, Insightful)
Ultimately I think you're right. Processors started out general, and have become increasingly specialized. First we had the "floating point co-processor", next stuff like an MMU, then GPUs came along. Multiple cores with differing functions are in many ways just a continuation of that trend.
Re:For the long term (Score:2)
Concurrent programming isn't really that hard a problem. To do it easily using today's tools requires some "design patterns" that many programmers aren't used to, but the concurrent models actually end up being cleaner / more intuitive than the serial model in many cases (including things like network programming and GUI programming).
The problem is that the tools don't make these patterns blindingly easy, and they require a little bit of programmer discipline to use properly. That occasionally includes giving up minor performance advantages for code cleanliness - think of it like the "don't use gotos" rule in structured programming.
The simplest view into the method of concurrent programming that I'm talking about is here: http://video.google.com/videoplay?docid=810232012617965344 [google.com]
That doesn't solve the general "number crunching" problem - for that you need parallel algorithms - but that's been solved in scientific computing clusters for decades.
Re:For the long term (Score:2)
Re:For the long term (Score:2)
You can estimate that sort of thing reasonably well. A lot of things parallelize in some really obvious way - enough that you'll get better than twice the throughput if you move from one to four cores. Other things would gain a perceived performance advantage simply by using a concurrent programming model even on a single core - tabbed web browsers with plugins / javascript are a good example.
Only a few really obnoxious cases like finite state machines can't be parallelized for a >25% performance gain. And even for those cases, it frequently turns out that you want to run more than one of them in parallel anyway.
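A rough way to sanity-check claims like these is Amdahl's law, speedup = 1 / ((1 - p) + p/n), where p is the parallelizable fraction of the work and n is the core count. The fractions below are illustrative, not measurements:

```python
# Amdahl's law: speedup from n cores when a fraction p of the work parallelizes.
def amdahl_speedup(p, n):
    return 1 / ((1 - p) + p / n)

# An "obviously parallel" workload (p = 0.95, illustrative) on 4 cores:
print(round(amdahl_speedup(0.95, 4), 2))  # 3.48 -- comfortably better than 2x

# A mostly-serial, obnoxious case (p = 0.2, illustrative) on 4 cores:
print(round(amdahl_speedup(0.2, 4), 2))   # 1.18 -- under a 25% gain
```

The serial fraction, not the core count, ends up dominating: the second workload barely improves even with unlimited cores.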
Re:For the long term (Score:2)
Re:For the long term (Score:2)
I'm sure they would have done that already if they could. The problem with more powerful processors is the amount of power they use. By using multiple cores of slightly less powerful chips, you get more performance with less power usage.
Re:For the long term (Score:2)
I'm sure prior to the invention of the Integrated Circuit, many hardware engineers thought that computers couldn't be made any smaller than a large closet. The technologies used today for creating processors are essentially refinements of the IC technology created in the late 1950s.
I'm not suggesting that Intel and AMD have a lot of options based on that legacy technology, but the future belongs to the companies that can develop new technologies. These new technologies may be developed through research into ideas like DNA computers, Quantum computers, or chemical computers etc. Or they may be based on new ideas that nobody has thought of yet.
I find it highly unlikely that multi-core software techniques will be able to sustain significant performance improvements for more than 10 years, if that. Of course, the same physical laws that restrict the performance of single-core processors will significantly limit the number of cores that can be integrated on a single chip, so multicore hardware isn't going to scale very far either.
Clock Speed? (Score:4, Interesting)
Yes, I know they changed to a new architecture that put less emphasis on raw clock speed. But, given that more efficient architecture, clock speed increases are still going to be a major benefit.
So, what's the story? Has the industry hit a wall? How long will it take to get back above 3GHz for a mainstream processor, or even to the 4GHz levels that the old Pentium IVs were pushing?
Don't get me wrong, I am a huge fan of the power efficiencies of the new chips. For my primary purposes (laptop, HTPC) the new chips are a godsend. And, the thought of specialized "accelerator" cores is fantastic (a video decoder core for MPEG2 & H.264, please). But, doing that same thing at 4GHz is even more compelling (of course, with the speedstep++ stuff to shut down cores when not needed, and throttle back to low GHz to save power).
Re:Clock Speed? (Score:5, Informative)
Re:Clock Speed? (Score:2)
Re:Clock Speed? (Score:4, Informative)
When comparing different processors with the same ISA (i.e., x86), IPS is the best measure of CPU performance, not clock speed.
Re:Clock Speed? (Score:3, Informative)
Tell that to the Amiga guys, and to AMD when they chose IPC over clock speed while the P4 was around. Both are very important. The industry spent years ramping up the clock and is now spending a few years working on IPC. It makes perfect sense to me. Moore's law also doesn't refer to the frequency of a chip but to the number of transistors, which has kept pace, especially now with the 45nm processes.
Personally I think for the moment IPC is far more important than frequency given computers are doing more and more these days not just doing one thing faster.
Re:Clock Speed? (Score:2)
Re:Clock Speed? (Score:2)
Re:Clock Speed? (Score:2)
Power6 is a mainstream server processor operating at 4.7GHz in servers today, and at 6GHz in the lab. While it's clear that gains are more difficult now, it would appear the industry has not hit the wall yet.
Re:Clock Speed? (Score:2)
Look for bumps in Cell or Cell2: Cell2 expected @ > 4GHz.
Note that these will go into machines where more expensive heat dissipation devices can be used, i.e. any of IBM's machines or RoadRunner.
Re:Clock Speed? (Score:3, Informative)
Yes. There was a big story about three years ago that when Intel got its first chips from some new process shrink (90 nm?), they were startled to find that they couldn't get them to run substantially faster than the previous version. Up until then, they'd always gotten a significant speedup from that with no design changes, but they did hit some sort of physical limit no one was expecting. I haven't heard anything since about whether they figured out what it was.
Basically immediately, the Pentium 4 line was ended, and they started planning to go back to the Pentium 3 design (P-6 architecture, introduced in 1995 on the Pentium Pro), which had been quietly improving as the Pentium M in the meantime.
even to the 4GHz levels that the old Pentium IVs were pushing
The Pentium IV had a couple of really good ideas ("trace cache", off the top of my head -- the instruction cache was post-decode), but it was fundamentally a really dumb design. It was optimized for a clock speed number they could put on a label, even though it degraded performance by taking pipelining too far. It was really fast if you could keep the pipeline full, but the only common application that could do so was video encoding.
New term war. (Score:4, Insightful)
What we really need is for software to catch up. Luckily some programs like Premiere and Photoshop have supported multiple CPUs for a while now. But games, etc. can really benefit from this. Just stick AI on one core, terrain on another, etc.
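A minimal sketch of the "one subsystem per core" idea, with hypothetical subsystem names (in CPython the GIL keeps threads from running CPU-bound work truly in parallel, so a real engine would use native threads or processes; this only shows the structure):

```python
# Each game subsystem updates in its own thread for one frame.
# Subsystem names are illustrative, not from any real engine.
from threading import Thread

def update_ai(state):
    state["ai"] = "updated"        # stand-in for AI work

def update_terrain(state):
    state["terrain"] = "updated"   # stand-in for terrain work

state = {}
threads = [Thread(target=f, args=(state,)) for f in (update_ai, update_terrain)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for all subsystems before the frame ends
print(state)
```

The hard part in practice is the data shared between subsystems (the AI needs the terrain, for instance), which is where the synchronization headaches come from.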
Re:New term war. (Score:2)
Of the little bit that does need oomph, where SMP can be taken advantage of, people have largely been working on doing so for a while now.
Only the little fraction that remains - projects that CAN USE the extra oomph and haven't been developed in that direction yet - needs to catch up.
Your statement hardly applies to most software out there.
Re:New term war. (Score:2)
Do you really think companies will intentionally go in the wrong direction (more GHz, more cores, etc.) just because? Possibly for marketing reasons, but outside of that, I would think that with their massive R&D budgets they would be exploring other ideas to give them the edge over the competition. Yes, sometimes it takes a newcomer to shake things up, but at the same time the big companies are pushing as hard as they can to either get an edge or narrow the gap... so give credit where credit is due and stop complaining (not that you were necessarily complaining, but almost any tech war, cores or GHz, is going to result in better tech for the consumer).
Improved cash management (Score:5, Funny)
Remaining Interchangable (Score:2)
Re:Remaining Interchangable (Score:3, Informative)
If Intel used just one socket, then you would have portions of the socket unused on some systems, but it would cost less to do the design, because there would be only one design. They don't do this because a socket with fewer pins costs less.
I don't know if that's what you wanted to know...
Intel and AMD could ostensibly remain eternally interchangeable; they are not and long have not been socket-level-compatible anyway. And they're not 100% interchangeable, if you fritter around at low levels you will find things that must be done differently on each processor, which is why [for example] the Linux kernel is configured differently for each.
The last time Intel and AMD were socket-compatible was Socket Super 7.
Where all the CPU time will go (Score:5, Insightful)
Where will all the CPU time go on desktops with these highly parallel processors?
Re:Where all the CPU time will go (Score:3, Insightful)
Re:Where all the CPU time will go (Score:2)
Will cpus be able to talk to each other without need to use the chip set?
Will they be able to have more than one northbridge-like chip, as there is in high-end AMD systems?
Will they have cache coherency?
Will you be able to have add on cards on the cpu bus like you can with HyperTransport?
Only having one chipset link for the PCIe slots, I/O, network, etc. can be a big choke point in 2-4+ CPU systems, even more so when each CPU has 4+ cores.
Re:Where all the CPU time will go (Score:2)
Size doesnt matter to me. (Score:2)
I wouldn't mind going back to the days when computers were bigger if it meant I could have a 10GHz or 1THz computer. Let the computing begin.
Re:Size doesnt matter to me. (Score:1)
Re:Size doesnt matter to me. (Score:2)
Re:Size doesnt matter to me. (Score:2)
Speed Of Light
The clock speed (of a cpu) is limited by the speed of light, and the bigger the chip, the further stuff has to travel. Even at light speed, you can only go so far and get back again in a certain time.
I'm not brilliant at explaining this, but I'm sure someone else will pick this up.
In the meantime, have a look at this interesting paper [www.gotw.ca] from 2005.
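The per-cycle distance is easy to work out. Even at vacuum light speed (on-chip signals propagate slower than this), a 3GHz clock leaves only about 10 cm of round-trip budget per cycle:

```python
# Back-of-the-envelope: how far light travels in one clock cycle.
c = 299_792_458  # speed of light in a vacuum, m/s

def cm_per_cycle(ghz):
    return c / (ghz * 1e9) * 100  # distance per cycle, in centimeters

print(round(cm_per_cycle(1), 1))   # 30.0 cm
print(round(cm_per_cycle(3), 1))   # 10.0 cm
print(round(cm_per_cycle(10), 1))  # 3.0 cm
```

Since a signal typically has to get somewhere and back within a cycle, and propagates at only a fraction of c in copper, the usable distance shrinks further, which is why bigger dies and higher clocks fight each other.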
Re:Size doesnt matter to me. (Score:2)
Re:Size doesnt matter to me. (Score:2)
And the speed of electrical propagation is even slower. In modern, copper-based chips, it's about 2/3rds the speed of light, IIRC. In the old aluminum-trace chips, I believe electrical propagation was even slower. The next gen will probably use carbon nanotubes, which reportedly provide faster propagation.
That said, your point still holds that you are constrained by the speed of electrical signal propagation in the trace medium (currently copper), and that short of changing that medium (and thus, the speed of propagation), the only way to increase speed beyond a certain point is to make the die smaller.
Re:Size doesnt matter to me. (Score:2)
Ehrm, no. The electrons in an electric current actually drift very slowly, on the order of millimeters per second or less. The signal, on the other hand, propagates at a very high speed, a significant fraction of the speed of light. To use an analogy: imagine a thin pipe filled with peas. If you push another pea into one end, a pea will almost instantaneously fall out the other end. The peas themselves only moved a short distance, though.
Re:Size doesnt matter to me. (Score:2, Informative)
Programmable Cache/Storage (Score:2)
Conversely, chips like the Cell could include HW that makes their cores' local storage into caches.
Re:Programmable Cache/Storage (Score:2)
Time to dig out your instruction set manual...
Re:Programmable Cache/Storage (Score:2)
More energy efficient chips... (Score:3, Insightful)
I dream of the day when my gaming computer doesn't need any active cooling, or heat sinks the size of houses. Focusing on efficiency would also force developers to write better code; honestly, it's unbelievable how badly some programs run and how resource-intensive they are for what they do.
Re:More energy efficient chips... (Score:2)
I've just finished pulling apart my E6X00-based gaming box, in favor of a C2D T5500 mobile-on-desktop rig, replacing a fast FSB with a fanless(BIG-heatsink)-CPU and cutting CPU power consumption to almost 1/3. (Yes, I know an 8800 eats 250 Watts on idle. I'm still looking for a way to depower it and use alternative low-power VGA-out when not in use. Mention'em if you can think of'em)
L7200 and L7400s are soon to hit the mobile-478-socket CPU market (ThinkPad X60ts already ship with them), giving the same dual-core mid-range desktop performance for yet another 50% cut in power consumption - ~15 watts in place of ~30W, and knocking another 5W off for losing a fan or two.
Speaking of, any 478-mobile boards out there except for Gigabyte's GA-8I945GMMFY-RH that do both C2D (bumps the Asus N4L-VM) and PCIex16 (bumps the Abit IL-90)?
Re:More energy efficient chips... (Score:2)
I've tried looking around for power efficient desktop parts and it's pretty much trial and error. For example I went through three desktop athlon 64 motherboards trying to find one with low power consumption but I could never get close to my laptop.
Once you've done that, the next thing I suggest is trying to run Vista (/ducks). You may laugh at first but I recently bought a dell c521 athlon X2 machine for my parents with vista business loaded on it. The machine supports a low power sleep state which consumes 2 to 3 watts at the outlet. That rivals many PSUs in standby mode! The nice thing about Vista's sleep state is that it comes back up practically instantly (2 to 3 seconds). You can literally just hit a key on your keyboard to wake up the computer and be working within 5 seconds.
The only problem with two different computers is now you have separate configurations (install certain software twice) and you have to come up with some way of sharing data between them. But I agree, I wish video manufacturers would start putting similar power saving technology as CPU manufacturers into their GPUs. The idle power consumption numbers are getting out of hand.
Re:More energy efficient chips... (Score:2)
My other computer is an ultraportable dual-core T-5600-based Thinkpad X60.
Point is, my requirement is a bit different.
I game on and off, which is to say for 3 months I don't touch computers when I'm in school, then for 3 more I do some gaming. It's cheaper (and nicer) to buy a graphics card for those months, then sell it off before the next semester. A proven way of getting a better academic record too
Still, I don't want to disassemble the entire desktop rig each time, and in school-era I want it to be a no-moving-parts box. This also means running its OS from a big & cheap CF card (and NAS for some stuff), careful choice of components, and alternating between a tiny picoPSU-120 during peacetime to fuel the rig and a monster PSU for when a GPU monstrosity is present.
Re:More energy efficient chips... (Score:3, Informative)
The primary enemy of electronics is heat caused by inefficiency. By moving to a smaller process we reduce voltage, thus we reduce power (P=VI), and thus we reduce heat. So we can go faster. But we can also not go faster, and go lower power instead. VIA is the current leader, AFAIK, in low-power x86-compatible processors/systems. But beyond their equipment, much of which is very sad and slow, you can simply underclock any CPU and, depending on the design, often run it at a lower voltage as well.
Look into underclocking - the same work that went into making a faster processor also produced a lower-power processor. It simply isn't both at once.
I do wish processors would clock themselves down further. Core Duo T2600 peak is 2.16GHz, but it only goes down to 1GHz. Why not, say, 500MHz? Most of the time, two Core cores running at 500MHz would more than cover my CPU needs. It's only when I'm encoding video or playing a game or running a big report that I need all the processing power.
Energy Efficiency (Score:3, Interesting)
More and more there's a need for extremely energy-efficient, low-footprint devices for special-purpose applications. It just doesn't make a lot of sense to have a PC sucking 60 watts when all you need is something to run Minicom on a simple 15" LCD screen.
Multiple cores appear as one (Score:3, Interesting)
Re:Multiple cores appear as one (Score:2)
You are a minuscule fraction of consumers. (Score:2)
Re:You are a minuscule fraction of consumers. (Score:2)
LGA 775
Socket 478
Socket 604
Socket 771
Socket M
Socket P
Which is kind of silly. Not that AMD is currently any better.