Intel Hardware

Intel Next-Gen CPU Has Memory Controller and GPU

Many readers wrote in with news of Intel's revelations yesterday about its upcoming Penryn and Nehalem cores. Information has been trickling out about Penryn, but the big news concerns Nehalem — the "tock" to Penryn's "tick." Nehalem will be a scalable architecture, with some products having an on-board memory controller, an "on-package" GPU, and up to 16 threads per chip. From Ars Technica's coverage: "...Intel's Pat Gelsinger also made a number of high-level disclosures about the successor to Penryn, the 45nm Nehalem core. Unlike Penryn, which is a shrink/derivative of Core 2 Duo (Merom), Nehalem is architected from the ground up for 45nm. This is a major new design, and Gelsinger revealed some truly tantalizing details about it. Nehalem has its roots in the four-issue Core 2 Duo architecture, but the direction that it will take Intel is apparent in Gelsinger's insistence that, 'we view Nehalem as the first true dynamically scalable microarchitecture.' What Gelsinger means by this is that Nehalem is not only designed to take Intel up to eight cores on a single die, but those cores are meant to be mixed and matched with varied amounts of cache and different features in order to produce processors that are tailored to specific market segments." More details, including Intel's slideware, appear at PC Perspective and HotHardware.
  • Re:Is AMD beaten? (Score:1, Informative)

    by __aamnbm3774 ( 989827 ) on Thursday March 29, 2007 @09:36AM (#18527357)
    AMD will catch back up. Intel is a monster, much like Microsoft. Sure, Intel gains a step here and there, but because it is so large and slow-moving, I'm sure AMD will catch up.

    In fact, Intel's current quad-core processors are two dual-core dies mashed together, while AMD is coming out with a native quad-core solution. It wouldn't be surprising to see them gain a temporary advantage (the back and forth is amazing for consumers). http://www.legitreviews.com/article/426/1/ [legitreviews.com] (includes a picture!)
  • Re:Is AMD beaten? (Score:5, Informative)

    by jonesy16 ( 595988 ) on Thursday March 29, 2007 @10:33AM (#18528067)
    I'm not sure what reviews you've been looking at, but AMD is not nearly "keeping pace" with Intel, not for the last year anyway. http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2879 [anandtech.com] clearly shows the Intel architecture shining, with many benchmarks having the slowest Intel core beating the fastest AMD. At the same time, Intel is achieving twice the performance per watt, and these are cores, some of which have been on the market for 6-12 months. Intel has also already released their dual-chip, eight-core server line, which is slated to make its way into a Mac Pro within 3 weeks. AMD's "hold" on the 4-way market exists because of the conditions 2 years ago when those servers were built. If you want a true comparison (as you claim to be striving for), then you need to look at what new servers are being sold and what the sales numbers are like (I don't have that information). But since the 8-core Intel is again using less than half the thermal power of an 8-core AMD offering, I would wager that an informed IT department wouldn't be choosing the Opteron route.

    AMD is capable of great things, but Intel has set its mind on dominating the processor world for at least the next 5 years, and it will take nothing short of a major evolutionary step from AMD to bring things back into equilibrium. Whilst AMD struggles to get their full line onto the 65nm production scheme, Intel has already started ramping up 45nm production, and that's something that AMD won't quickly be able to compete with.

    Intel's latest announcements of modular chip designs and further chipset integration are interesting, but I'll reserve judgement until some engineering samples have been evaluated. I'm not ready to say that an on-board memory controller is hands-down the best solution, but I do agree that this is a great step toward mobile hardware (think smart phones / PDAs / tablets) using less energy and having more processing power while fitting in a smaller form factor.
  • More information (Score:3, Informative)

    by jonesy16 ( 595988 ) on Thursday March 29, 2007 @10:37AM (#18528125)
    http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=2955 [anandtech.com] provides a much more detailed look at the new processor architectures coming from Intel. It's a bit better than the PR blurb at Ars.
  • Re:Is AMD beaten? (Score:3, Informative)

    by Creepy ( 93888 ) on Thursday March 29, 2007 @11:56AM (#18529293) Journal
    Ray tracing is not the be-all and end-all of computer graphics, you know - it does specular lighting well (particularly point sources) but diffuse lighting poorly, which is why most ray tracers also tack on a radiosity or radiosity-like feature (patch lighting). The latest polygon shaders often do pseudo-ray tracing on textures, so we're actually seeing ray-traced effects in newer games (basically a ray tracing approximation on a normal-mapped surface). You can, say, take a single flat polygon and map a face onto it (with realtime hard or soft self-shadows, depending on the technique used). Note that I'm NOT saying polygon modeling is the be-all and end-all of computer graphics, either - it has plenty of flaws (no curved surfaces, poor specular lighting, etc.). There is ongoing work on a unification model that may be the most promising - we'll have to see where that goes.

    I noted above that these techniques are really pseudo-ray tracing - they don't completely linearly trace the ray to the surface. Usually they have linear and binary trace components (binary means they split the remaining distance in two and check whether a surface is hit, then backtrack as necessary, but this can result in the wrong surface being hit and aliasing occurring). As GPU speed increases, we may see this become actual ray tracing. (A rough sketch of the linear + binary trace appears below.)

    See Relief Mapping, Parallax Occlusion Mapping, Displacement Mapping, etc.
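    To make the linear + binary trace concrete, here is a minimal C sketch of a height-field ray march in the spirit of relief / parallax occlusion mapping. Everything in it is an illustrative assumption - the toy height function, the step counts, and all names - rather than any particular engine's or paper's implementation; too few linear steps is exactly where the wrong-surface / aliasing problem mentioned above comes from.

        /* Hypothetical sketch of the linear + binary height-field trace
         * described above. Names, step counts, and the height function are
         * illustrative assumptions only. */
        #include <stdio.h>

        #define LINEAR_STEPS 16
        #define BINARY_STEPS 6

        /* Toy height field: height in [0,1] sampled at texture coordinate (u,v).
         * A raised disc in the middle of the texture, just for demonstration. */
        static float sample_height(float u, float v)
        {
            float du = u - 0.5f, dv = v - 0.5f;
            return (du * du + dv * dv < 0.04f) ? 0.75f : 0.25f;
        }

        /* March a view ray across the height field: coarse linear steps first,
         * then binary refinement between the last "above" and first "below"
         * sample. The ray starts at height 1 over (u0,v0) and descends to 0
         * while moving by (du,dv) across the texture. */
        static void trace(float u0, float v0, float du, float dv,
                          float *hit_u, float *hit_v)
        {
            float t_prev = 0.0f, t = 0.0f;
            float step = 1.0f / LINEAR_STEPS;

            /* Linear search: walk until the ray dips below the surface.
             * A thin feature narrower than one step can be skipped here,
             * which is the aliasing problem noted in the comment above. */
            for (int i = 0; i < LINEAR_STEPS; ++i) {
                t_prev = t;
                t += step;
                float ray_h = 1.0f - t;
                if (ray_h <= sample_height(u0 + du * t, v0 + dv * t))
                    break;
            }

            /* Binary search: halve the bracket [t_prev, t] a few times. */
            for (int i = 0; i < BINARY_STEPS; ++i) {
                float mid = 0.5f * (t_prev + t);
                float ray_h = 1.0f - mid;
                if (ray_h <= sample_height(u0 + du * mid, v0 + dv * mid))
                    t = mid;        /* below the surface: pull the far end in */
                else
                    t_prev = mid;   /* above the surface: push the near end out */
            }

            *hit_u = u0 + du * t;
            *hit_v = v0 + dv * t;
        }

        int main(void)
        {
            float u, v;
            trace(0.2f, 0.5f, 0.6f, 0.0f, &u, &v);
            printf("approximate hit at (%.3f, %.3f)\n", u, v);
            return 0;
        }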
  • Nothing That New (Score:2, Informative)

    by Bo0bMeIsTeR ( 1066964 ) on Thursday March 29, 2007 @02:40PM (#18531937)
    AMD has had on-die memory controllers for how long now? That feature was a cornerstone of the Athlon 64. AMD has also developed and successfully integrated HyperTransport in today's machines. AMD is working on the same type of development, but it already has two key pieces in place. Now, with their acquisition of ATI, I see them in a much better situation to implement this form of technology than Intel. Intel can do it, but they have much more research to conduct and test before their chips will be ready. I believe AMD will quietly work on this and drop it a year or so before Intel. Then Intel will be in the catch-up phase again. This whole thing works in cycles; AMD and Intel will constantly be swapping places on top of the mountain.
  • Re:*snore* (Score:3, Informative)

    by lxt518052 ( 720422 ) on Thursday March 29, 2007 @05:39PM (#18535517)
    Given today's 45nm technology, it's not really that hard to put a massive amount of memory on die. The problem, however, is that memory at such density is not going to run at anywhere near the same speed as the CPU core, which makes the integration pointless.

    Generally, for a given type of memory, the larger its capacity, the higher its latency and the lower its throughput. A memory hierarchy is sometimes seen as a way to reduce memory system cost, but more fundamentally, as silicon technologies evolve, it also reflects an inherent characteristic of memories - either large or fast, you can't have it both ways. (A back-of-the-envelope illustration follows.)
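    As a rough illustration of why a hierarchy still pays off even though big memories are slow, here is a small C sketch computing average memory access time (AMAT) for an assumed two-level cache in front of DRAM. All latencies and hit rates are made-up round numbers, not measurements of any real chip.

        /* Illustrative AMAT calculation for a two-level cache hierarchy.
         * Every number below is an assumption chosen only to show the shape
         * of the large-vs-fast tradeoff. */
        #include <stdio.h>

        int main(void)
        {
            /* Assumed latencies in CPU cycles. */
            const double l1_latency   = 3.0;    /* small and fast */
            const double l2_latency   = 15.0;   /* larger, slower */
            const double dram_latency = 200.0;  /* huge, slowest  */

            /* Assumed hit rates. */
            const double l1_hit = 0.90;
            const double l2_hit = 0.80;  /* of the accesses that miss L1 */

            /* AMAT = L1 time + L1 miss rate * (L2 time + L2 miss rate * DRAM time) */
            double amat = l1_latency
                        + (1.0 - l1_hit) * (l2_latency
                        + (1.0 - l2_hit) * dram_latency);

            printf("flat DRAM-only access time: %.1f cycles\n", dram_latency);
            printf("hierarchical AMAT:          %.1f cycles\n", amat);
            return 0;
        }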
