Intel Hardware

The Future of Intel Processors

madison writes to mention coverage at ZDNet on the future of Intel technology. Multicore chips are the company's focus for the future, and researchers at the company are working on methods to adapt them for specific uses. The article cites an example where the majority of the cores are x86, with accelerators and embedded graphics cores added for extra functionality. "Intel is also tinkering with ways to let multicore chips share caches, pools of memory embedded in processors for rapid data access. Cores on many dual- and quad-core chips on the market today share caches, but it's a somewhat manageable problem. 'When you get to eight and 16 cores, it can get pretty complicated,' Bautista said. The technology would prioritize operations. Early indications show that improved cache management could improve overall chip performance by 10 percent to 20 percent, according to Intel." madison also writes, "In other news, Intel has updated its Itanium roadmap to include a new chip dubbed 'Kittson' to follow the release of Poulson. That chip will be based on a new microarchitecture that provides higher levels of parallelism."
This discussion has been archived. No new comments can be posted.
  • gcc? (Score:3, Insightful)

    by everphilski ( 877346 ) on Friday June 15, 2007 @12:58PM (#19521005) Journal
    Buy stock in gcc..

    Yeah, cause, you know, Intel doesn't make their own compiler [http://www.intel.com/cd/software/products/asmo-na/eng/compilers/284132.htm]...
  • by CajunArson ( 465943 ) on Friday June 15, 2007 @01:01PM (#19521059) Journal

    That would also improve overall security too.

    I hate to break it to ya, but in a low-level language like C, doing the bounds checks and data sanitization required for security does not help performance (although it doesn't hurt much either, and it should of course always be done).
        There is a lot of bloated code out there, but the bad news for people who always post "just write better code!" is that the truly processor-intensive stuff (like image processing, 3D games) is already pretty well optimized to take advantage of modern hardware.
        There's also the definition of what "good code" actually is. I could write a parallelized sort algorithm that would be nowhere near as fast as a decent quicksort on modern hardware. However, on hardware 10 years from now with a large number of cores, the parallelized algorithm would end up being faster (see the sketch after this comment). So which one is the 'good' code?
        As usual, real programming problems in the real world are too complex to be solved by 1-line Slashdot memes.
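
To make the sort example above concrete, here is a minimal sketch in C++ (the language, std::thread, and the two-way split are my assumptions; the comment names no specific implementation). The parallel version pays a fixed cost for splitting, spawning a thread, and merging, so it loses on a machine with few cores and pulls ahead as the core count grows.

```cpp
// Hypothetical sketch of "parallelized sort vs. a decent quicksort".
// Assumes a C++11 compiler with std::thread support; none of this is
// from the comment itself.
#include <algorithm>
#include <thread>
#include <vector>

// Sort each half on its own thread, then merge the halves.
void parallel_sort(std::vector<int>& v) {
    auto mid = v.begin() + v.size() / 2;
    std::thread left([&v, mid] { std::sort(v.begin(), mid); }); // second core
    std::sort(mid, v.end());                                    // this core
    left.join();
    std::inplace_merge(v.begin(), mid, v.end());                // combine halves
}

int main() {
    std::vector<int> v = {5, 3, 8, 1, 9, 2, 7, 4, 6, 0};
    parallel_sort(v);   // v is now 0,1,2,...,9
    // On one core this is strictly slower than a single std::sort of
    // the whole range; with many cores the split version wins.
}
```

Splitting N ways instead of two is the same idea; either way, neither version is "good code" independent of the hardware it runs on.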
  • by keithjr ( 1091829 ) on Friday June 15, 2007 @01:03PM (#19521077)
    Well, the analogy I've always heard was "1 woman can have 1 baby in 9 months, but 9 women can't have 1 baby in 1 month." The lesson here: not everything is as "parallelizable" as digging a ditch. Data dependency in single execution threads means there often simply isn't enough independent work that can be done at once. Moreover, it is often left up to the user (or third-party vendors) to create the application libraries that take advantage of parallel processing. Almost all code being run at this moment was written in a serial, higher-level language (such as C++) for serial execution (even if it utilizes threading in the OS). The Cell didn't provide a very good API, and even trivially parallelizable algorithms often have to be rewritten in assembly code to take full advantage of the available hardware. And that just plain sucks.
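
A rough illustration of the data-dependency point above, in C++ with std::thread (my choice of language and mechanism, not the commenter's): element-wise work splits across cores trivially, while a loop whose every step depends on the previous one does not.

```cpp
// Hypothetical sketch contrasting independent work with a carried
// dependency. Assumes a C++11 compiler with std::thread; not from
// the comment itself.
#include <numeric>
#include <thread>
#include <vector>

using Iter = std::vector<int>::iterator;

// "Digging a ditch": every element is independent, so two threads can
// each take half with no coordination beyond the final join.
void square_range(Iter first, Iter last) {
    for (; first != last; ++first) *first *= *first;
}

void square_all(std::vector<int>& v) {
    Iter mid = v.begin() + v.size() / 2;
    std::thread t(square_range, v.begin(), mid);  // first half, other core
    square_range(mid, v.end());                   // second half, this core
    t.join();
}

// "9 women, 1 baby": each output depends on the previous one, so the
// obvious loop is serial (parallel prefix sums exist, but they need a
// different algorithm, which is exactly the commenter's complaint).
void running_sum(std::vector<int>& v) {
    std::partial_sum(v.begin(), v.end(), v.begin());
}

int main() {
    std::vector<int> a = {1, 2, 3, 4, 5, 6, 7, 8};
    square_all(a);
    running_sum(a);
}
```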
  • For the long term (Score:3, Insightful)

    by ClosedSource ( 238333 ) on Friday June 15, 2007 @01:05PM (#19521103)
    Intel needs to develop new processor technologies to significantly increase native performance rather than just adding more cores. Whether multi-core processors can significantly increase performance for standard applications hasn't yet been proven and even if possible, will depend on the willingness of developers to do the extra work to make it happen.

    If software developers can't or won't take advantage of the potential benefits of multi-core, Intel and AMD may have to significantly cut the price of their processors because upgrading won't add much value.
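
The pay-off question raised above is usually framed with Amdahl's law (my framing, not the commenter's): the achievable speedup is capped by the serial fraction of the program, no matter how many cores get added. A small sketch in C++:

```cpp
// Back-of-the-envelope sketch of the diminishing returns described in
// the comment above. Amdahl's law: speedup on n cores is
// 1 / ((1 - p) + p / n), where p is the fraction of the program that
// actually runs in parallel.
#include <cstdio>

double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    // Even with 75% of the work parallelized, 64 cores yield less
    // than a 4x speedup; the serial 25% dominates.
    const int cores[] = {2, 4, 8, 16, 64};
    for (int n : cores) {
        std::printf("%2d cores: %.2fx (p = 0.75), %5.2fx (p = 0.95)\n",
                    n, amdahl(0.75, n), amdahl(0.95, n));
    }
}
```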
  • New term war. (Score:4, Insightful)

    by jshriverWVU ( 810740 ) on Friday June 15, 2007 @01:13PM (#19521213)
    I was just checking out this page here [azulsystems.com], which discusses a machine with 768 cores. Since I do a good amount of parallel programming, this is good news to me. But for the average person, it seems this is turning into another MHz/GHz war, this time over cores.

    What we really need is for software to catch up. Luckily, some programs like Premiere and Photoshop have supported multiple CPUs for a while now. But games and the like could really benefit from this: just stick AI on one core, terrain on another, and so on (roughly as sketched below).
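
The "AI on one core, terrain on another" idea, sketched with C++ std::thread (a simplification and an assumption on my part; real engines tend to use job systems rather than one thread per subsystem, but the division of labour is the same):

```cpp
// Hypothetical per-frame task split; the subsystem names are just
// placeholders for the comment's examples.
#include <thread>

void update_ai()      { /* path-finding, decision making ... */ }
void update_terrain() { /* LOD selection, streaming ...      */ }
void update_physics() { /* collision, integration ...        */ }

int main() {
    // One frame: run the independent subsystems on separate cores,
    // then join before rendering so the frame sees consistent state.
    std::thread ai(update_ai);
    std::thread terrain(update_terrain);
    update_physics();          // main thread does its share too
    ai.join();
    terrain.join();
    // render() would run here, after all subsystems have finished.
}
```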

  • by Animats ( 122034 ) on Friday June 15, 2007 @01:21PM (#19521337) Homepage

    Where will all the CPU time go on desktops with these highly parallel processors?

    • Virus scanning. Multiple objects can be virus scanned in parallel.
    • Adware/spyware. The user impact from adware and spyware will be reduced since attacks will be able to use their own processor. Adware will be scanning all your files and running classifiers to figure out what to sell you.
    • Ad display. Run all those Flash ads simultaneously. Ads can get more CPU-intensive. Next frontier: automatic image editing that puts you in the ad.
    • Indexing. You'll have local search systems indexing your stuff, probably at least one from Microsoft and one from Google.
    • Spam. One CPU for filtering the spam coming in, one CPU for the bot sending it out.
    • DRM. One CPU for the RIAA's piracy searcher, one for the MPAA, one for Homeland Security...
    • Interpreters. Visualize a Microsoft Office emulator written in Javascript. Oh, wait [google.com].
  • by walt-sjc ( 145127 ) on Friday June 15, 2007 @01:27PM (#19521423)
    Keep in mind that many of those tasks are also very I/O intensive, and our disk speed has not kept up with processor speed. With more cores doing more things, we are going to need a HELL of a lot more bandwidth on the bus for network, memory, disk, graphics, etc. PCI SuperDuper Express anyone?
  • by fitten ( 521191 ) on Friday June 15, 2007 @01:32PM (#19521493)
    Define "bloat". For example, do you classify 'features', as in adding more of them, as bloat? I think the word "bloat" is thrown around so much that few people have a good definition of it anymore. For example, features (what lots of people call 'bloat') that aren't used *shouldn't* cause performance issues as the code for them isn't executed.

    Besides, if we stopped adding features, we'd still be using things like ed for editing (and 'word processing'), our games would still be like Pong, and our remote access would still be VT52 terminals.
  • by morgan_greywolf ( 835522 ) on Friday June 15, 2007 @01:34PM (#19521513) Homepage Journal

    Better code = less bloat = better performance and security.


    The parent's point is that in code where it makes a difference, the code is already thoroughly optimized, in general. Slimming down the code for Microsoft Word or XEmacs or Firefox or Nautilus or iTunes (there, now we've slaughtered everyone's sacred cow!) isn't likely to make much of a difference because apps like these already run plenty fast on modern hardware. Sure, bloat is bad, but it's a lot harder to remove bloat from existing code without removing features than it sounds. If bloat is an issue, use an equivalent app with fewer features -- nano instead of XEmacs, for instance.

  • by Nim82 ( 838705 ) on Friday June 15, 2007 @01:34PM (#19521523)
    I'd much rather they focused on making chips more energy efficient than faster. At the moment, barring a few high-end applications, most of the CPU power in current processors goes largely unused.

    I dream of the day when my gaming computer doesn't need any active cooling, or heat sinks the size of houses. Focusing on efficiency would also force developers to write better code; honestly, it's unbelievable how badly some programs run and how resource-intensive they are for what they do.
  • by timeOday ( 582209 ) on Friday June 15, 2007 @02:03PM (#19521947)

    Intel needs to develop new processor technologies to significantly increase native performance rather than just adding more cores.
    Figure out how to do that and you will be a rich man. The move to multi-core is a white flag of surrender in the battle against the laws of physics to make a faster processor, no doubt about it. The industry did not bite the bullet of parallelism by choice.
  • by Vellmont ( 569020 ) on Friday June 15, 2007 @05:44PM (#19525209) Homepage
    I think this is the most intelligent reply I've heard about multi-core processors. Everything I've heard up to this point is the standard "But multi-threaded programming is both hard and has diminishing returns," which is very true. I've often wondered how the hell I'd break my programs into 80 different independent parts (one simple data-parallel split is sketched after this comment).

    Ultimately I think you're right. Processors started out general and have become increasingly specialized. First we had the floating-point co-processor, then stuff like the MMU, then GPUs came along. Multiple cores with differing functions are in many ways just a continuation of that trend.
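
One partial answer to the "80 different independent parts" worry above is not to pick a number at all, but to split data-parallel work across whatever the machine reports. A minimal C++ sketch (the workload and the use of std::thread::hardware_concurrency() are my assumptions, not the commenter's):

```cpp
// Hypothetical sketch: split a data-parallel loop across however many
// hardware threads exist, instead of hard-coding a core count.
// Assumes a C++11 compiler with std::thread.
#include <cstddef>
#include <thread>
#include <vector>

int main() {
    std::vector<double> data(1000000, 1.0);

    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 1;                       // the call is allowed to return 0

    std::vector<std::thread> workers;
    std::size_t chunk = data.size() / n;
    for (unsigned i = 0; i < n; ++i) {
        std::size_t begin = i * chunk;
        std::size_t end   = (i + 1 == n) ? data.size() : begin + chunk;
        // Each worker owns a disjoint slice, so no locking is needed.
        workers.emplace_back([&data, begin, end] {
            for (std::size_t j = begin; j < end; ++j)
                data[j] = data[j] * 2.0 + 1.0;
        });
    }
    for (auto& w : workers) w.join();        // wait for every slice
}
```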

"Engineering without management is art." -- Jeff Johnson

Working...