Intel Hardware

Intel Next-Gen CPU Has Memory Controller and GPU

Many readers wrote in with news of Intel's revelations yesterday about its upcoming Penryn and Nehalem cores. Information has been trickling out about Penryn, but the big news concerns Nehalem — the "tock" to Penryn's "tick." Nehalem will be a scalable architecture with some products having on-board memory controller, "on-package" GPU, and up to 16 threads per chip. From Ars Technica's coverage: "...Intel's Pat Gelsinger also made a number of high-level disclosures about the successor to Penryn, the 45nm Nehalem core. Unlike Penryn, which is a shrink/derivative of Core 2 Duo (Merom), Nehalem is architected from the ground up for 45nm. This is a major new design, and Gelsinger revealed some truly tantalizing details about it. Nehalem has its roots in the four-issue Core 2 Duo architecture, but the direction that it will take Intel is apparent in Gelsinger's insistence that, 'we view Nehalem as the first true dynamically scalable microarchitecture.' What Gelsinger means by this is that Nehalem is not only designed to take Intel up to eight cores on a single die, but those cores are meant to be mixed and matched with varied amounts of cache and different features in order to produce processors that are tailored to specific market segments." More details, including Intel's slideware, appear at PC Perspectives and HotHardware.
This discussion has been archived. No new comments can be posted.

  • Re:Is AMD beaten? (Score:4, Insightful)

    by Applekid ( 993327 ) on Thursday March 29, 2007 @09:25AM (#18527245)
    "Anybody have an opposing viewpoint?"

    I think "AMD fan" or "Intel fan" is a bad attitude. When technology does its thing (progress), it's a good thing, regardless of who spearheaded it.

    That said, if AMD becomes so obviously a bad choice, Intel who is in the lead will continue to push the envelope just not as fast since they don't have anything to catch up to. That will give AMD the opportunity to blow ahead as it did time and time again in the past.

    The pendulum swings both ways. The only constant is that competition brings out the best and it's definitely good for us, the consumer.

    I'm a "Competition fan."
  • by TheSunborn ( 68004 ) <mtilstedNO@SPAMgmail.com> on Thursday March 29, 2007 @09:32AM (#18527315)
    If they manage to combine all these features in a single chip, they really will have come up with some genuinely new chip production processes :}
  • by madhatter256 ( 443326 ) on Thursday March 29, 2007 @09:47AM (#18527489)

    Intel is no longer leading as they have in years past - they are copying and looting their competition shamelessly. It appears that they are "leading" when in point of fact it's simply not the case - had AMD not released the Athlon64 we would all still be using single-processor NetBurst processors.
    Actually, Intel is leading on something very important, mobility and power consumption. Take a look at the Pentium M series. Laptops with the Pentium M series always outpaced the Athlon Turion series in both battery life and in speed, in most applications. Now we see Intel integrating that technology into the desktop CPU series.
  • by mosel-saar-ruwer ( 732341 ) on Thursday March 29, 2007 @09:52AM (#18527563)

    It seems that AMD has lost, and I'm not trying to troll. It just seems that fortunes have truly reversed and that AMD is being beaten by five steps everywhere by Intel. Anybody have an opposing viewpoint? (Being an AMD fan, I am depressed.)

    Look at the title of this thread: Intel Next-Gen CPU Has Memory Controller and GPU.

    The on-board memory controller was pretty much the defining architectural feature of the Opteron family of CPUs, especially as Opteron interacted with the HyperTransport bus. The Opteron architecture was introduced in April of 2003 [wikipedia.org], and the HyperTransport architecture was introduced way back in April of 2001 [wikipedia.org]!!! As for the GPU, AMD purchased ATI in July of 2006 [slashdot.org] precisely so that they could integrate a GPU into their Opteron/Hypertransport package.

    So from an intellectual property point of view, it's Intel that's furiously trying to claw their way back into the game.

    But ultimately all of this will be decided by implementation - if AMD releases a first-rate implementation of their intellectual property, at a competitive price, then they'll be fine.

  • Two problems (Score:4, Insightful)

    by tomstdenis ( 446163 ) <tomstdenis@gma[ ]com ['il.' in gap]> on Thursday March 29, 2007 @10:06AM (#18527721) Homepage
    1. Putting a GPU on the processor immediately divides the market for it. Unless this is only going to be a laptop processor, it probably won't sell well on desktops.

    2. Hyperthreading only works well in an idle pipeline. The Core 2 Duo (like the AMD64) has a fairly high IPC and hence few pipeline bubbles (compared to, say, the P4). And even on the P4 the benefit is marginal at best, and in some cases it hurts performance.

    The memory controller makes sense as it lowers the latency to memory.

    If Intel wants to spend gates, why not put in more accelerators for things like the variants of the DCT used by MPEG, JPEG, and MPEG audio? Or how about crypto accelerators for things like AES and bignum math?

    Tom
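For reference, the kind of transform such a DCT accelerator would hard-wire can be sketched in a few lines of Python - a naive, purely illustrative 1-D DCT-II over an 8-sample block (the building block of JPEG/MPEG coding), not how hardware would implement it:

```python
import math

def dct_ii(block):
    """Naive 1-D DCT-II over an N-sample block, with the
    orthonormal scaling used by JPEG-style transforms."""
    N = len(block)
    out = []
    for k in range(N):
        s = sum(block[n] * math.cos(math.pi * (n + 0.5) * k / N)
                for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

# A constant 8-sample block concentrates all of its energy
# in the DC coefficient; the AC coefficients come out (near) zero.
coeffs = dct_ii([10.0] * 8)
```

That energy compaction is exactly the redundancy these codecs exploit, and why a dedicated DCT unit would see so much use.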
  • by GundamFan ( 848341 ) on Thursday March 29, 2007 @10:08AM (#18527753)
    Is it really fair to attribute the GPU-CPU combo to AMD/ATi if Intel gets to market first? As far as I know, neither of them has produced anything "consumer ready" yet.
       
  • Re:Two problems (Score:2, Insightful)

    by jonesy16 ( 595988 ) on Thursday March 29, 2007 @10:16AM (#18527839)
    The point of this processor is that it will be modular. Your points are valid but I think you're missing Intel's greater plan. The GPU on core is not a requirement of the processor line, merely a feature that they can choose to include or not, all on the same assembly line. The bigger picture here is that if the processor is as modular as they are claiming, then they can mix and match different co-processors on the die to meet different market requirements, so the same processor can be bought in XY configuration with an on-board GPU, or in AB configuration with on-board physics engine, etc.
  • by guidryp ( 702488 ) on Thursday March 29, 2007 @10:22AM (#18527909)
    1: Integrated graphics sells very well on the desktop; almost every single machine in your big-box shops has integrated graphics. I am sure it outsells machines with separate graphics cards on the desktop. Gamers are not the market.

    2: I am skeptical about hyperthreading, but it all depends on the implementation. I don't think this is something they are pursuing just for marketing. They must have found a way to eke out even better loading of all the execution units by doing this. I can't imagine this being done if it actually performs worse than hyperthreading did on the P4. We'll have to wait and see.
  • Re:Is AMD beaten? (Score:3, Insightful)

    by Anonymous Coward on Thursday March 29, 2007 @10:43AM (#18528201)
    Meh.

    #define Competition > 2

    What you have here is a duopoly, which is apparently what we in the US prefer as all our major industries eventually devolve into 2-3 huge companies controlling an entire market. That ain't competition, and it ain't good for all of us.

    Captcha = hourly. Why, yes, yes I am.
  • by gillbates ( 106458 ) on Thursday March 29, 2007 @10:54AM (#18528363) Homepage Journal

    It is interesting to note that Intel has now decided to put the memory controller on the die, after AMD showed the advantages of doing so.

    However, I'm a little dismayed that Intel hasn't yet addressed the number one bottleneck for system throughput: the (shared) memory bus itself.

    In the '90s, researchers at MIT were putting memory on the same die as the processor. These processors had unrestricted access to their own internal RAM. There was no waiting on a relatively slow IDE drive or Ethernet card to complete a DMA transaction, no stalls during memory access, etc.

    What is really needed is a redesign of the basic PC memory architecture. We really need dual-ported RAM, so that a memory transfer to or from a peripheral doesn't take over the memory bus used by the processor. Having an onboard memory controller helps, but it doesn't address the fundamental issue: a 10 ms IDE DMA transfer effectively stalls the CPU for those 10 milliseconds. In this regard, the PC of today is no more efficient than the PC of 20 years ago.
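To put a rough number on that claim, here is a back-of-the-envelope sketch. Both figures are assumed round numbers for illustration, not measurements of any real machine:

```python
# Rough model: while a DMA transfer occupies a shared memory bus,
# the CPU's own memory traffic has to wait. The bandwidth and
# transfer size below are hypothetical round numbers.
bus_bandwidth = 3_200_000_000      # bytes/s of a shared front-side bus
transfer_size = 32 * 1024 * 1024   # one 32 MiB DMA transfer from disk

bus_busy_seconds = transfer_size / bus_bandwidth
print(f"bus occupied for roughly {bus_busy_seconds * 1000:.1f} ms")
```

With those assumed numbers the bus is tied up for about 10 ms per transfer - time during which, on a single shared bus, the processor's own loads and stores must contend with the DMA traffic.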

  • by ceeam ( 39911 ) on Thursday March 29, 2007 @11:00AM (#18528427)
    They've committed to open-sourcing their GPU drivers, for example.
  • Re:*snore* (Score:3, Insightful)

    by Fordiman ( 689627 ) <fordiman@g[ ]l.com ['mai' in gap]> on Thursday March 29, 2007 @11:55AM (#18529281) Homepage Journal
    Redundant?

    C'mon, modders, you can do better than that. Troll, Flamebait, Overrated, I'd understand; they're applicable. But redundant??

    Besides, I was serious. When am I going to see some serious RAM on-chip?
  • Re:Is AMD beaten? (Score:3, Insightful)

    by Endo13 ( 1000782 ) on Thursday March 29, 2007 @12:18PM (#18529603)

    If Intel and AMD start integrating good GPU cores on the same die as the CPU where will that leave NVidia? It could be left in the dust.
    It might not affect NVidia at all. At worst, it will displace their integrated chipset graphics - the GPUs built into motherboard chipsets. It's going to be quite some time (if ever) until GPUs integrated into a CPU are powerful enough to replace add-on graphics cards.
  • Re:Is AMD beaten? (Score:3, Insightful)

    by donglekey ( 124433 ) on Thursday March 29, 2007 @12:33PM (#18529821) Homepage
    They will all be playing the same game eventually, and that game is stream processing: generalized stream processing using hundreds of cores for graphics, video, physics, and probably other applications. It is already happening, although Nvidia is the pretty undisputed champion at the moment. AMD owns ATI, Intel is working on its 80-core stream processing processors, IBM has the Cell, and Nvidia has its cards (128 'shader' units on the 8800 GTX). It is all converging very quickly into the next important aspect of hardware. So basically Intel intends to include GPUs, or something that can be used as a GPU if needed, from here on out. In four years we will be counting stream units along with the number of general-purpose cores we count now.
  • by dpilot ( 134227 ) on Thursday March 29, 2007 @01:24PM (#18530543) Homepage Journal
    Did Intel really make "power consumption a key part of their strategy" or did something else happen? If I look at a little recent history and cross that with too many years in corporate America, I see something else...

    Intel had a Haifa lab - waaaay out of the corporate mainstream. A few years back, Intel's corporate mainstream was wrapped up in NetBurst, high clock rates, and IA64. Also at that time, the wind was still behind those sails on all fronts. There was a small design shop in Haifa playing with CPU architecture under the corporate radar. It's just possible that had they been higher profile, their efforts would have been killed outright. Anyway, starting with a sensible core, the Pentium III, and doing sensible things to it, they came out with a dynamite CPU for portables. Banias became Centrino. Whether this was "strategy designed for the portable market" or "skunk works keeping interesting jobs in Haifa" I don't know, but neither would surprise me.

    As Banias was coming to market, NetBurst and IA64 were smashing into their respective thermal and market walls. Intel, to their credit, turned practically on a dime, dead-ended NetBurst, and moved forward based on the Banias/Centrino core. But nothing turns immediately, and it's worth noting that even after the rudder was shifted, several re-labelings of NetBurst still came out in the interim, before Core2 was ready.

    The fact that Banias/Centrino was done in Haifa, very far away from Intel corporate mainstream, makes me think it was either a skunk works, or intended as a niche product. Nor was there lots of Big Press during Banias development, just the fanfare as Centrino was approaching launch. I haven't been able to find specifics, but I strongly suspect that Core/Core2 development was brought back to the US, closer to HQ.
  • Re:Is AMD beaten? (Score:3, Insightful)

    by bberens ( 965711 ) on Thursday March 29, 2007 @03:23PM (#18532809)
    10 threads, 8 cores - I don't give a damn. The standard baseline PC workstation bought from [insert giganto manufacturer] really doesn't provide me with a better experience than it did 4 years ago. Memory bus, hard drive seek time, etc. are the stats I care about, and they are what will give me the most noticeable improvement in usability. CPU cores/threads/MHz are beside the point; the bottleneck is elsewhere.
