Intel Hardware

Intel Details Upcoming Gulftown Six-Core Processor

MojoKid writes "With the International Solid-State Circuits Conference less than a week away, Intel has released additional details on its upcoming hexa-core desktop CPU, next-gen mobile parts, and dual-core Westmere processors. Much of the dual-core data was revealed last month when Intel unveiled its Clarkdale architecture. However, when Intel set its internal goals for what it's calling Westmere 6C, the company aimed to boost both core count and cache size by 50 percent without increasing the processor's thermal envelope. Westmere 6C (codename Gulftown) is a native six-core chip. Intel has crammed 1.17 billion transistors into a die of approximately 240mm². The new chip carries 12MB of L3 cache (up from Nehalem's 8MB) and a TDP of 130W at 3.33GHz. In addition, Intel has built in support for AES encryption/decryption instructions, as well as a number of improvements to Gulftown's power consumption, especially in idle sleep states."
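
The AES support refers to Westmere's new dedicated AES instructions. As a rough, untested illustration of what that exposes to software (assuming a compiler that provides the AES-NI intrinsics in <wmmintrin.h> and is built with -maes), this performs a single encryption round rather than a full AES pass:

    #include <wmmintrin.h>   /* AES-NI intrinsics; compile with -maes */

    /* One AES encryption round on a 128-bit block via the AESENC instruction.
     * A full implementation would expand the key into round keys and run
     * 10/12/14 rounds, finishing with AESENCLAST. */
    static __m128i aes_round(__m128i block, __m128i round_key)
    {
        return _mm_aesenc_si128(block, round_key);
    }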
  • by SmilingBoy ( 686281 ) on Thursday February 04, 2010 @09:07AM (#31021158)
    Can most programmes really be written to take advantage of so many cores? I am not sure I want to have a 6-core processor, of which 5 spend most of the time idling as I am only running a single-core-aware programme. OK, one more core can be used by the OS to make everything snappy, but the question stands.
  • by TheStonepedo ( 885845 ) on Thursday February 04, 2010 @09:14AM (#31021210) Homepage Journal

    Perhaps a jump in number of cores will convince people outside the Apple and FreeBSD camps to port Grand Central Dispatch.
    Letting the kernel team handle the hairier parts of multi-threaded design should make it easy for barely-optimized software to use powerful hardware.
    Could its Apache license work with the #1 OS family?
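
    Even without a full port, the pattern GCD enables is easy to show. A minimal, untested sketch, assuming the libdispatch headers plus the Clang blocks extension (compile with -fblocks; on non-Apple systems also link against BlocksRuntime):

    #include <dispatch/dispatch.h>
    #include <stdio.h>

    int main(void)
    {
        /* A concurrent global queue: GCD decides how many worker threads
         * to run, typically based on the number of available cores. */
        dispatch_queue_t q =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
        dispatch_group_t group = dispatch_group_create();

        for (int i = 0; i < 8; i++) {
            dispatch_group_async(group, q, ^{
                /* The block captures 'i' by value. */
                printf("task %d running\n", i);
            });
        }

        /* Block until every submitted task has finished. */
        dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
        dispatch_release(group);
        return 0;
    }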

  • by SmilingBoy ( 686281 ) on Thursday February 04, 2010 @09:21AM (#31021268)
    Wrong question. When was the last time my computer was running a single thread that could use 100% CPU for more than a few milliseconds? Answer: all the time. For example, whenever I open Slashdot with Firefox. I'd rather have fewer cores at a higher speed than more cores.
  • by TheRaven64 ( 641858 ) on Thursday February 04, 2010 @09:24AM (#31021300) Journal
    Most programs can't be written to take full advantage of even one core. Most of the things that you do on a computer will run happily on a 1GHz CPU and still only push usage over 50% occasionally. Most of the things that will tax a modern CPU can be made parallel, so they will scale quite well to a number of cores. Even if your processor-intensive task isn't using multiple cores, you still benefit a bit from being able to move everything else onto another core. With the recent Intel chips you also have 'Turbo Boost' (horrible name), which underclocks some cores while overclocking others, giving one core a speed boost for that CPU-eating single-threaded app while keeping the overall power usage and heat output the same. To prevent hotspots on the die, the workload can be moved around between the cores, giving each a boost for a little while.
  • by Anonymous Coward on Thursday February 04, 2010 @09:36AM (#31021388)

    Only use concurrency when it makes sense. On my system, all audio runs through PulseAudio, which runs in its own process. Input (among other things) is handled by X.org, also running its own process. The scheduler can decide which process runs on which CPU and tries to use all available CPUs (or cores) in the most efficient way. So the operating system is already using concurrent processes itself.

    Mobile processors benefit from a low load by entering various idle states which use less power than the active state, which in turn lets the battery last longer on a single charge.

    Thus the point is not to try to use all available cores and every available CPU cycle, but rather to use the /least/ cycles possible for any given task. Otherwise there would be no point in adding more cores, because the programs would simply burn more cycles. Although operating systems generally do seem to grow to require more processing power, computers typically run only one operating system at a time. Having multiple cores to work with is meant to benefit the end user by allowing more processes to run simultaneously, such as encoding/decoding an audio and a video stream at the same time, or multiple A/V streams with different camera angles. The possibilities are for users to explore, and there will be plenty.

  • by Gr8Apes ( 679165 ) on Thursday February 04, 2010 @09:50AM (#31021536)

    So I skimmed TFA (gasp!) and it appears that Intel is finally following AMD's lead by keeping thermal envelopes constant.

    I note that this is still effectively two CPUs with three cores each, but that's better than legacy Intel approaches, which would have been three sets of dual cores.

    It will be interesting to see how independent performance benchmarks play out between the new processors that are coming out.

  • by null8 ( 1395293 ) on Thursday February 04, 2010 @10:21AM (#31021870)
    Instead of churning out cores they should tweak the x86 ISA to use multiple cores efficiently. A one- or two-word atomic compare-and-swap is not enough; you cannot build atomic lockless doubly linked lists with that. No wonder something as interesting as http://valerieaurora.org/synthesis/SynthesisOS/ [valerieaurora.org] is not possible on x86 without major hacks.
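
    To make that concrete: a single-word compare-and-swap is enough for a lock-free singly linked stack, but a doubly linked list needs two words in two different nodes to change together, and x86 CAS (even the double-width CMPXCHG16B, which only covers one contiguous location) can't do that. A minimal, untested C11 sketch of the part that does work:

    #include <stdatomic.h>
    #include <stdlib.h>

    struct node {
        int value;
        struct node *next;
    };

    static _Atomic(struct node *) head = NULL;

    /* Lock-free push: one pointer-width CAS on a single location suffices. */
    void push(int value)
    {
        struct node *n = malloc(sizeof *n);
        n->value = value;
        n->next = atomic_load(&head);
        /* Retry until swapping the single 'head' word succeeds; on failure
         * atomic_compare_exchange_weak reloads the current head into n->next. */
        while (!atomic_compare_exchange_weak(&head, &n->next, n))
            ;
    }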
  • by Anonymous Coward on Thursday February 04, 2010 @10:41AM (#31022108)

    Wrong question. When was the last time my computer was running a single thread that could use 100% CPU for more than a few milliseconds? Answer: all the time. For example, whenever I open Slashdot with Firefox. I'd rather have fewer cores at a higher speed than more cores.

    Really? So one thread wasn't reading the network traffic, one wasn't parsing the markup, and a third putting things up on the screen? At the same time the page wasn't being saved to your browser cache, while your e-mail program was querying the server for new mail, and cron was checking to see if there were jobs to run this minute? If you're on Windows, all of these activities were probably scanned by anti-virus.

    There's a lot going on in a modern system:
    $ ps -ef | wc -l
    146

  • by Big Smirk ( 692056 ) on Thursday February 04, 2010 @10:45AM (#31022174)

    Around here, the programmers never met a thread they didn't like. Add a requirement like "display a dialog box to confirm shutdown" and suddenly the thread count in the application jumps by 4...

    Could things be done more efficiently? No, because that would require thinking and thermodynamically it is cheaper just to spawn another thread.

  • by Guspaz ( 556486 ) on Thursday February 04, 2010 @12:07PM (#31023188)

    Examples of things that benefit from more than two cores:

    - Modern web browsers such as Chrome
    --- Multi-process architecture means that Flash sucks up a CPU all to itself, while various other tabs/domains are in different processes. JavaScript-heavy web apps or the use of other Flash-like plugins can easily make 3+ cores worthwhile
    - Most modern games
    --- Games are very CPU intensive. Most modern engines do a very good job of taking advantage of multiple cores. Some games even require 2+ cores in order to get any decent performance; Mass Effect 2 (Unreal Engine 3) is unplayable on single-core processors
    - Video encoding
    --- GPU-accelerated (CUDA, OpenCL, etc) encoding is not yet useful, and isn't likely to be so any time soon. Existing hardware accelerated encoders are extremely limited in flexibility, and are usually of very poor quality (in terms of output).
    - Multitasking
    --- You scoff at it, but if I've got some demanding application running and I try to do something else at the same time (such as pause a game, alt-tab out of it, try to watch a video), CPU load starts to add up.

    Most people's needs can be met by low-end dual-core processors with hyperthreading, such as the i3 or i5 series, but these days you don't have to be a specialist to take advantage of a quad-core CPU. Pretty much all gamers can, for example.
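
    One usual way such software scales to however many cores are present is to size its worker pool at runtime. A minimal, untested POSIX sketch, assuming Linux/glibc:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Worker body; real code would pull jobs from a shared queue. */
    static void *worker(void *arg)
    {
        printf("worker %ld running\n", (long)arg);
        return NULL;
    }

    int main(void)
    {
        long cores = sysconf(_SC_NPROCESSORS_ONLN);  /* cores the OS exposes */
        if (cores < 1)
            cores = 1;
        pthread_t threads[cores];

        /* One worker per core: the same binary uses 2, 4, or 6 cores as available. */
        for (long i = 0; i < cores; i++)
            pthread_create(&threads[i], NULL, worker, (void *)i);
        for (long i = 0; i < cores; i++)
            pthread_join(threads[i], NULL);
        return 0;
    }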

  • by chaim79 ( 898507 ) on Thursday February 04, 2010 @12:27PM (#31023464) Homepage

    Actually, that isn't the case. I've been keeping an eye on the porting of GCD to other OSes, and there are build options for with and without blocks (the non-standard C extension).

    As of right now I think the status is that FreeBSD (and other BSDs) can compile GCD with or without block support, Solaris is 90% there (again with and without blocks), and Linux is about 70% there (can compile and parts work, but not all of it).
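
    For reference, the "without blocks" builds fall back on libdispatch's plain C function-pointer entry points. A minimal, untested sketch:

    #include <dispatch/dispatch.h>
    #include <stdio.h>

    /* Plain C callback: no blocks extension required. */
    static void do_work(void *ctx)
    {
        printf("processing %s\n", (const char *)ctx);
    }

    int main(void)
    {
        dispatch_queue_t q =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
        dispatch_group_t group = dispatch_group_create();

        /* The _f variants take a context pointer plus a function pointer
         * instead of a block. */
        dispatch_group_async_f(group, q, "some job", do_work);

        dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
        dispatch_release(group);
        return 0;
    }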

  • Re:Transistor count (Score:3, Interesting)

    by sznupi ( 719324 ) on Thursday February 04, 2010 @01:32PM (#31024256) Homepage

    And yet, the latest ARM cores are much closer to those 68k transistors from 1980, while not being nearly as far behind the i7 in performance as the ratio of transistor counts would suggest.

    Perhaps ARM found the sweet spot.

  • by mandolin ( 7248 ) on Thursday February 04, 2010 @03:00PM (#31025380)

    Porting libdispatch requires a generic event delivery framework, where the userspace process can wait for a variety of different types of event (signals, I/O, timers). ... Linux is the odd system out. All different types of kernel events are delivered to userspace via different mechanisms, so it's really hairy trying to block waiting until the next kernel event.

    I don't understand this. It's true Linux does not have kqueue (as I recall Linus thought it was "ugly"... whatever). But in theory (because I haven't actually used them), to achieve the same effect under Linux, you would use timerfd + signalfd + your normal I/O fds (sockets, etc.), and then epoll_wait()/poll()/select() on all of the fds you were interested in. In this way, one thread could wait for multiple different types of events, including timers and signals.

    Would you please point out the flaws with the above that make it impossible or impractical to achieve the functionality needed by Grand Central Dispatch? I would be enlightened -- thanks in advance.
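
    For what it's worth, here is a minimal, untested Linux sketch of exactly that: one epoll_wait() loop watching a timer fd and a signal fd (it assumes a kernel/glibc recent enough for timerfd_create() and signalfd()):

    #include <sys/epoll.h>
    #include <sys/signalfd.h>
    #include <sys/timerfd.h>
    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        /* Timer events delivered through a file descriptor. */
        int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
        struct itimerspec ts = { .it_interval = {1, 0}, .it_value = {1, 0} };
        timerfd_settime(tfd, 0, &ts, NULL);

        /* Signal events delivered through a file descriptor; the signal
         * must be blocked so it arrives via the fd, not a handler. */
        sigset_t mask;
        sigemptyset(&mask);
        sigaddset(&mask, SIGINT);
        sigprocmask(SIG_BLOCK, &mask, NULL);
        int sfd = signalfd(-1, &mask, 0);

        /* One epoll instance multiplexes both, plus any sockets or pipes. */
        int epfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN };
        ev.data.fd = tfd; epoll_ctl(epfd, EPOLL_CTL_ADD, tfd, &ev);
        ev.data.fd = sfd; epoll_ctl(epfd, EPOLL_CTL_ADD, sfd, &ev);

        for (;;) {
            struct epoll_event events[4];
            int n = epoll_wait(epfd, events, 4, -1);
            for (int i = 0; i < n; i++) {
                if (events[i].data.fd == tfd) {
                    uint64_t expirations;
                    read(tfd, &expirations, sizeof expirations);
                    printf("timer fired\n");
                } else if (events[i].data.fd == sfd) {
                    struct signalfd_siginfo si;
                    read(sfd, &si, sizeof si);
                    printf("got SIGINT, exiting\n");
                    return 0;
                }
            }
        }
    }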

  • 24 isn't enough (Score:2, Interesting)

    by toastar ( 573882 ) on Thursday February 04, 2010 @03:36PM (#31025738)

    24 x86 cores just don't compare to 1 Fermi with 512 stripped-down vector processors
