Intel Next-Gen CPU Has Memory Controller and GPU
Many readers wrote in with news of Intel's revelations yesterday about its upcoming Penryn and Nehalem cores. Information has been trickling out about Penryn, but the big news concerns Nehalem — the "tock" to Penryn's "tick." Nehalem will be a scalable architecture, with some products having an on-board memory controller, an "on-package" GPU, and up to 16 threads per chip. From Ars Technica's coverage: "...Intel's Pat Gelsinger also made a number of high-level disclosures about the successor to Penryn, the 45nm Nehalem core. Unlike Penryn, which is a shrink/derivative of Core 2 Duo (Merom), Nehalem is architected from the ground up for 45nm. This is a major new design, and Gelsinger revealed some truly tantalizing details about it. Nehalem has its roots in the four-issue Core 2 Duo architecture, but the direction that it will take Intel is apparent in Gelsinger's insistence that, 'we view Nehalem as the first true dynamically scalable microarchitecture.' What Gelsinger means by this is that Nehalem is not only designed to take Intel up to eight cores on a single die, but those cores are meant to be mixed and matched with varied amounts of cache and different features in order to produce processors that are tailored to specific market segments." More details, including Intel's slideware, appear at PC Perspective and HotHardware.
Is AMD beaten? (Score:4, Interesting)
Re:Is AMD beaten? (Score:5, Funny)
No, seriously, though. I'm holding out on the hope that AMD's licensing of ZRAM will be able to keep them in the game.
Re:Is AMD beaten? (Score:4, Insightful)
I think "AMD fan" or "Intel fan" is a bad attitude. When technology does its thing (progress), it's a good thing, regardless of who spearheaded it.
That said, if AMD becomes an obviously bad choice, Intel, being in the lead, will continue to push the envelope, just not as fast, since they won't have anything to catch up to. That will give AMD the opportunity to blow ahead, as it has time and time again in the past.
The pendulum swings both ways. The only constant is that competition brings out the best and it's definitely good for us, the consumer.
I'm a "Competition fan."
Re: (Score:3, Interesting)
That's assuming they'll have the cash and/or debt availability to do so; a large chunk went into the ATI acquisition. Their balance sheet reads worse now than at any time in the past (IMHO), and the safety net of a private-equity buyout is weak at best. Now that ATI is in the mix, it seems that competition in two segments is at risk.
Point being that the underdog in a two horse race is always skating on thin ice. Le
Re:Is AMD beaten? (Score:4, Funny)
Re: (Score:2)
Re: (Score:3, Insightful)
#define Competition > 2
What you have here is a duopoly, which is apparently what we in the US prefer as all our major industries eventually devolve into 2-3 huge companies controlling an entire market. That ain't competition, and it ain't good for all of us.
Captcha = hourly. Why, yes, yes I am.
Re: (Score:2)
Re: (Score:3, Insightful)
Re:Is AMD beaten? (Score:5, Interesting)
So we will see. Intel's GPUs are fine for home use but not in the same category as ATI or NVidia. The company that might really loose big in all this is NVidia. If Intel and AMD start integrating good GPU cores on the same die as the CPU where will that leave NVidia?
It could be left in the dust.
Re: (Score:2, Interesting)
Re: (Score:3, Informative)
Re: (Score:2)
PCI Express is 2.5 Gbps per lane each way, so x16 means 40 Gbps full duplex. I haven't seen any x32 anywhere, but there are supposed to be specs for it. That's 80 Gbps
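A quick back-of-the-envelope in C, for what it's worth (my own sketch; the 8b/10b encoding overhead applies to PCIe 1.x signaling, so the usable figures come out a bit lower than the raw ones above):

#include <stdio.h>

int main(void)
{
    const double gbps_per_lane_raw = 2.5;   /* PCIe 1.x: 2.5 GT/s per lane, per direction */
    const double encoding_efficiency = 0.8; /* 8b/10b: 8 data bits per 10 line bits */
    const int widths[] = { 1, 4, 8, 16, 32 };

    for (int i = 0; i < 5; i++) {
        double raw = widths[i] * gbps_per_lane_raw;
        double usable = raw * encoding_efficiency;
        printf("x%-2d: %5.1f Gbps raw, %5.1f Gbps usable per direction\n",
               widths[i], raw, usable);
    }
    return 0;
}

So a hypothetical x32 link would be 80 Gbps raw but more like 64 Gbps of actual payload each way.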
Re: (Score:2)
Re:Is AMD beaten? (Score:4, Funny)
You don't need an advanced GUI and expensive GPU to do wobble effects. Every time the guy in the next cubicle degaussed his computer monitor, *EVERY* window on my desktop would wobble, even the taskbar. To avoid any damage to my monitor, I'd degauss my monitor
Re: (Score:3, Insightful)
Re: (Score:3, Insightful)
Re: (Score:2, Troll)
Re: (Score:2, Troll)
I can take its/it's, and I can take their/they're/there, but lose/loose.... I understand in terms of what I hear as I read; I come across 'loose' in my head where 'lose' should be (or worse, 'looser'), and it just throws me completely off.
Re:Is AMD beaten? (Score:4, Interesting)
This reminds me of MS during the OS/2 days, when they first announced Cairo with its DB file system and OO interface (sound familiar? It should - features of Longhorn, then moved to Blackcomb, and now off the map as a major release). Unlike MS, I don't doubt Intel will eventually release most of what they've announced, but to think that they're "ahead" is ludicrous. At this moment, their new architecture will barely beat AMD's 3+ year old architecture. (See Anandtech or Tom's, I forget which, but there was a head-to-head comparison of AMD's 4X4 platform with Intel's latest and greatest quad CPU, and AMD's platform kept pace.) That should scare the bejeebers out of Intel, and apparently it has, because they're now following the architectural trail AMD blazed, or announced previously, like multi-core chips with specialty cores.
In other words, not much to see here; wake me when the chips come out. Until Barcelona ships, Intel holds the 1-2 CPU crown. When it ships, we'll finally be able to compare CPUs. AMD still holds the 4-way and up market, hence its stranglehold in the enterprise. Intel's announcement of an onboard memory controller in Nehalem indicates that they may finally try to tackle the multi-CPU market again, depending upon how well architected that solution is.
Re:Is AMD beaten? (Score:5, Informative)
AMD is capable of great things, but Intel has set its sights on dominating the processor world for at least the next 5 years, and it will take nothing short of a major evolutionary step from AMD to bring things back into equilibrium. Whilst AMD struggles to get its full line onto the 65nm production process, Intel has already started ramping up 45nm, and that's something AMD won't quickly be able to compete with.
Intel's latest announcements of modular chip designs and further chipset integration are interesting, but I'll reserve judgement until some engineering samples have been evaluated. I'm not ready to say that an on-board memory controller is hands-down the best solution, but I do agree that this is a great step towards mobile hardware (think smart phones / PDAs / tablets) using less energy and having more processing power while fitting in a smaller form factor.
Re: (Score:2, Interesting)
When only running one or two CPU intensive threads, Quad FX ends up being slower than an identically clocked dual core system, and when running more threads it's no faster than Intel's Core 2 Extreme QX6700. But it's more expensive than the alternatives and consumes as much power as both, combined.
My point was that 3 year old tech could keep pace with Intel's newest. The 4X4 system is effectively nothing more than a 2-way Opteron system. With an identical number of cores, AMD keeps pace with Intel's top of the line quad. That would concern me if I were Intel, especially with AMD coming out with a quad on a smaller die than those running in t
Re:Is AMD beaten? (Score:5, Funny)
From yours truly,
Marklar
Re: (Score:2)
Intellectually, Intel is playing catchup here. (Score:5, Insightful)
It seems that AMD has lost, and I'm not trying to troll. It just seems that fortunes have truly reversed and that AMD is being beaten by 5 steps everywhere by Intel. Anybody have an opposing viewpoint? (Being an AMD fan, I am depressed.)
Look at the title of this thread: Intel Next-Gen CPU Has Memory Controller and GPU.
The on-board memory controller was pretty much the defining architectural feature of the Opteron family of CPUs, especially as Opteron interacted with the HyperTransport bus. The Opteron architecture was introduced in April of 2003 [wikipedia.org], and the HyperTransport architecture was introduced way back in April of 2001 [wikipedia.org]!!! As for the GPU, AMD purchased ATI in July of 2006 [slashdot.org] precisely so that they could integrate a GPU into their Opteron/Hypertransport package.
So from an intellectual property point of view, it's Intel that's furiously trying to claw their way back into the game.
But ultimately all of this will be decided by implementation - if AMD releases a first-rate implementation of their intellectual property, at a competitive price, then they'll be fine.
Re: (Score:2, Interesting)
Integrated GPU... SGI? I can't think of another high-end modularly integrated GPU, and I'm not even 100% sure about the SGI one.
Re: (Score:2)
Re: (Score:2)
One colleague set up two comparable servers with VMware Server a while back, and he also stated it ran much faster on the AMD from HP than on the Intel from Dell.
Re: (Score:2)
oops - corrections: (Score:2)
So, basically... (Score:2, Interesting)
Have Intel come up with anything genuinely new recently?
Re: (Score:3, Insightful)
Re: (Score:2)
Re:So, basically... (Score:5, Funny)
Stable connectors... (Score:2)
One stable and open socket technology. So you can pop custom hardware accelerators or FPGA chips in the additional sockets in a multi-CPU motherboard.
Like AMD's AM2/AM2+/AM3 and the HyperTransport bus, with partners currently developing FPGA chips.
Not like Intel, who changes sockets with each chip generation, at least twice to screw the customers (Socket 423 vs. 478). The Slot 1 used during the Pentium II / III / Coppermine / Tualatin era was a good solution to keep 1 in
Re: (Score:2)
They are just like Microsoft....except for the better bit.
The big thing that Intel do have going for them is that they have been able to move to smaller processes for creating chips, which gives them a big advantage in speed and power usage.
Re: (Score:2)
But all in all, it's good news - now let's see what the other camp comes up with that will be 45 nm ready.
Re: (Score:3, Insightful)
One of those new computers? (Score:3, Funny)
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:2)
/me drools. (Score:2)
This is awesome. I'm just sitting here, waiting for more and more cores. While all the Windows/Office users whine that "it's dual-core, but there's only a 20% performance increase", I just set MAKEOPTS="-j3" (on my dual-core box) and watch things compile twice as fast. Add in the 6-12 MB of L2 cache these will have, and it's gonna rock. (my current laptop has 4 MB--that's as much RAM as my old 486 came with. (There. I got the irrelevant "when I was your age" comparison out of the way. (Yes, I know on
Re: (Score:2)
Re: (Score:2, Funny)
When I was your age, we overclocked our floppy drives.
When I was your age, memory upgrades came with a soldering iron and a hand-written instruction sheet.
When I was your age, L1 cache was just a really long wire loop with high capacitance.
When I was your age, computers booted in about 2/10ths of a second.
When I was your age, Compuserve was the world's biggest dial-up network
When I was your age, we didn't let teenagers post
Image quality? (Score:2)
Re: (Score:2)
It probably doesn't make sense to put high-end graphics on the chip, because people in that market want to upgrade graphics more often than CPUs (not to mention that they probably want nVi
Re: (Score:2)
Re: (Score:2)
This "analog problem hypothesis" should be quite simple to test: Does onboard graphics image quality also suck when using a digital connector (e.g. DVI-D)? If I'm right, then it shouldn't, because in this case all
Re: (Score:2)
Uhh... think about what you're asking. Does placing the graphics processor closer to the source of its information (RAM) and on a much faster bus (the CPU's internal bus) make it slower?
The reason onboard graphics suck on most machines is not because they are integrated, it's because the mobo manufacturers have no interest in integrating the latest and greatest video processors and massive quantities of RAM into a motherboard.
Most onbo
Re: (Score:2)
Yeah, let's use a slow CPU-to-memory bus, shared by the CPU, peripherals, and the video output, rather than the 30+ GB/second GPU-to-memory bus on a typical graphics card these days.
Sticking the GPU in the same package as the CPU is a way to decrease costs for highly integrated systems, not performance. Unless you're going to stick a really fast memory interface on the CPU, anyway.
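Rough numbers to make the gap concrete (ballpark figures for parts that were current at the time; treat them as illustrative, not exact):

#include <stdio.h>

int main(void)
{
    /* Dual-channel DDR2-800: 2 channels x 8 bytes x 800 MT/s = 12.8 GB/s,
       shared between the CPU, peripherals, and (with integrated graphics) the display. */
    double cpu_bus_gbs = 2 * 8 * 0.8;

    /* GeForce 8800 GTX-class card: 384-bit GDDR3 at 1800 MT/s effective = 86.4 GB/s,
       dedicated to the GPU. */
    double gpu_bus_gbs = (384 / 8) * 1.8;

    printf("Shared CPU memory bus:   %.1f GB/s\n", cpu_bus_gbs);
    printf("Discrete GPU memory bus: %.1f GB/s\n", gpu_bus_gbs);
    printf("Ratio: about %.0fx\n", gpu_bus_gbs / cpu_bus_gbs);
    return 0;
}

Roughly a 7x gap, before you even account for everything else contending for the shared bus.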
Imitation is the highest form of flattery (Score:5, Interesting)
However, it's worth noting that these are clearly AMD ideas.
* On-die memory controller - AMD's idea - and it's been in use for quite a while now
* Embedded GPU - a rip-off of the AMD Fusion idea, announced shortly after the acquisition of ATI.
Intel is no longer leading as they have in years past - they are copying and looting their competition shamelessly. It appears that they are "leading" when in point of fact it's simply not the case - had AMD not released the Athlon 64, we would all still be using single-core NetBurst processors.
Re: (Score:3, Insightful)
Intel is no longer leading as they have in years past - they are copying and looting their competition shamelessly. It appears that they are "leading" when in point of fact it's simply not the case - had AMD not released the Athlon 64, we would all still be using single-core NetBurst processors.
Actually, Intel is leading on something very important: mobility and power consumption. Take a look at the Pentium M. Laptops with Pentium M chips consistently outpaced AMD's Turion line in both battery life and speed in most applications. Now we see Intel integrating that technology into its desktop CPU line.
Re: (Score:2)
Re: (Score:3, Insightful)
Intel had a Haifa lab - waaaay out of the corporate mainstream. A few years back, Intel corporate mainstream was wrapped up in NetBurst, high clock rates, and IA64. Also at that time, the wind was still behind those sails on all fronts. There was a small design shop in Haifa playing with CPU a
Re: (Score:2)
Did they ever? Maybe for desktop PCs, but not for chips in general. The DEC Alpha chip was way ahead of anything Intel had at the time.
Re: (Score:2)
Actually, I would say that of AMD.
Playing Devil's Advocate (Score:2)
On either side this isn't a huge engineering breakthrough. It's simply trying to gain more business. Not that there is anything wrong wit
Integrated Graphics? Uh-oh! (Score:2)
Penryn and Nehalem? (Score:5, Funny)
Re: (Score:2, Funny)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I read somewhere that the Gollum chip will only support two cores.
Price still factors, though, and AMD competes. (Score:5, Interesting)
AMD's not out just because they don't control the high end. Remember, you can get the X2 3600 w/ a Biostar TForce 550 motherboard at Newegg for the same price as an E4300 CPU (no mobo), and that's the board folks are using to get it up to crazy clock speeds.
Re: (Score:2)
AMD and ATI have a better partnership. I'm still waiting for Intel to try and buy Nvidia. With Nvidia's latest disaster, known as the GeForce 8800 GTX, I'm curious if they're ready to sell out to Intel.
The 8800 GTX performs like shit in OpenGL apps. An $80 AGP ATI card outperforms the GeForce 8800 GTX ($600) in OpenGL applications in XP.
NVidia has released a driver for the GeForce 8800s since Jan. It s
Re: (Score:2)
Re: (Score:2)
OpenGL is a standard. And wonderfully designed, easy to get into, but powerful enough for almost anything you can think of and some things you can't
Two problems (Score:4, Insightful)
2. Hyperthreading only works well when the pipeline has idle slots. The Core 2 Duo (like the AMD64) has a fairly high IPC and hence few bubbles (compared to, say, the P4). And even on the P4 the benefit is marginal at best, and in some cases it hurts performance.
The memory controller makes sense as it lowers the latency to memory.
If Intel wants to spend gates, why not put in more accelerators for things like the variants of the DCT used by MPEG, JPEG and MPEG audio? Or how about crypto accelerators for things like AES and bignum math?
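To give a feel for what such a fixed-function unit would be offloading, here's a naive 8-point DCT-II in plain C (my own sketch of the kernel JPEG and MPEG codecs lean on, not anything from Intel's slides; real codecs use fast factorizations):

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N 8

/* Naive 8-point DCT-II: the transform JPEG applies to each row and column
   of an 8x8 block. A hardware unit would do this in a handful of cycles. */
static void dct8(const double in[N], double out[N])
{
    for (int k = 0; k < N; k++) {
        double sum = 0.0;
        for (int n = 0; n < N; n++)
            sum += in[n] * cos(M_PI / N * (n + 0.5) * k);
        out[k] = sum;
    }
}

int main(void)
{
    double samples[N] = { 52, 55, 61, 66, 70, 61, 64, 73 };
    double coeffs[N];

    dct8(samples, coeffs);
    for (int k = 0; k < N; k++)
        printf("coeff[%d] = %8.3f\n", k, coeffs[k]);
    return 0;
}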
Tom
Re: (Score:2, Insightful)
Re:Two problems, integrated sells very well. (Score:3, Insightful)
2: I am skeptical about hyperthreading, but it all depends on the implementation. I don't think this is something they are pursuing just for marketing. They must have found a way to eke out even better loading of all execution units by doing this. I can't imagine this being done if it a
Re: (Score:2)
2. Don't give Intel that much credit. The P4 *was* a gimmick. And don't think that adding HTT is, at worst, "free". It takes resources to manage the thread (stealing memory port access to fetch/execute opcodes, for instance).
In the case of the P4 it made a little sense because the pipeline was mostly empty (re: it was a shitty de
Re: (Score:2)
On HT, edition two: I have a skeptical, wait-and-see attitude. Then again, I will probably buy a new computer in 2007, so it won't matter to me for a long time, as I will probably squeeze 5 years out of my next machine. So 2007 and 2012 are the years that interest me.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
What happens when you hit a limitation/bug in the Intel GPU?
Also, don't misunderestimate
Re: (Score:2)
The more flexibility in the configuration, the more expensive verification becomes. A big part of chip design is balancing efficiency against performance - that is, keeping the gate count low but the efficiency high.
I agree that an on-chip GPU would likely take less power, and be easier to integrate, if that was the only processor you made.
By starting to mix in what many consumers look at a
Re: (Score:2)
I really wish they'd add hardware acceleration for text rendering, considering it's something everything would benefit from (using a terminal with antialiased TTFs is painfully slow). There are supposedly graphics cards that do this, but I've never come across one.
Re: (Score:2)
Crypto is another big thing. It isn't even that you have to be faster, but safer. Doing AES in hardware, for instance, could trivially kill any cache/timing attacks that are out there.
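A toy sketch of why (hypothetical code, not a real cipher): typical software AES does key-dependent table lookups, and which cache line gets touched - and therefore how long the access takes - depends on the secret. A hardware AES unit has no data-dependent loads for an attacker to observe.

#include <stdint.h>
#include <stdio.h>

/* Placeholder standing in for the AES S-box (real values omitted);
   the point is the access pattern, not the contents. */
static const uint8_t sbox[256] = { 0 };

static uint8_t leaky_sub_byte(uint8_t plaintext_byte, uint8_t key_byte)
{
    /* The array index depends on secret data, so the cache line loaded
       (and the access timing) leaks information about the key. */
    return sbox[plaintext_byte ^ key_byte];
}

int main(void)
{
    printf("%u\n", (unsigned)leaky_sub_byte(0x32, 0xA7));
    return 0;
}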
Tom
Bursts of CPU (Score:3, Interesting)
On a desktop PC you often need the focused application (say, some sort of graphical/audio editor, game, or just a very fancy flash web site even) to get most of the power of the CPU to render well.
If you split the speed potential into 16, would desktop users see an actual speed benefit? They'll see increased responsiveness from smoother multitasking of the ever-growing number of background tasks running on our everyday OSes, but can mostly single-task-focused desktop usage really benefit?
Now of course, we're witnessing ways to split the concerns of a single-task application into multiple threads: the new Windows interface runs in a separate CPU thread and on the GPU, never mind whether the app itself is single-threaded or not. That's helping.
Still, serial programming is, and will remain, prevalent for many years to come, as most tasks casual/consumer applications perform are inherently serial and not "parallelizable," or whatever that would be called.
My point being, I hope we'll still be getting *faster* threads, not just *more* threads. The situation now is that it's harder and harder to communicate "hey, we have only 1000 threads/cores, unlike the competition which has 1 million, but we're faster!" It's just like AMD's tough position in the past, explaining that their chips are faster despite having a slower clock rate.
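That intuition is basically Amdahl's law. A quick sketch with made-up numbers (assume a hypothetical desktop task that's only 30% parallelizable):

#include <stdio.h>

/* Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
   parallelizable fraction of the work and n is the number of cores. */
static double amdahl(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    double p = 0.30;                  /* made-up desktop workload */
    int cores[] = { 1, 2, 4, 8, 16 };

    for (int i = 0; i < 5; i++)
        printf("%2d cores: %.2fx speedup\n", cores[i], amdahl(p, cores[i]));
    return 0;
}

Sixteen cores buy you well under 1.5x on that kind of workload, which is exactly why faster individual threads still matter.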
Re: (Score:2)
This is the second post I've seen along these lines and I'm beginning to think people really don't understand what software is or how processors work... Even in the slightest.
A processor can't just magically decide to have, say two multipliers in parallel just because your task demands it. You can do that in hardware because you are
Where's the Software? (Score:3, Interesting)
What we need is new models of computing that programmers can use, not just new tools. Languages that specify purely sequential operations on specific virtual hardware (like scalar variables that merely represent specific allocated memory hardware), or metaphors for info management that computing killed in the last century ("file cabinets", trashcans of unique items and universal "documents" are going extinct) are like speaking Latin about quantum physics.
There's already a way forward. Compiler geeks should be incorporating features of VHDL and Verilog, inherently parallel languages, into gcc. And better "languages", like flowchart diagrams and other modes of expressing info flow, that aren't constrained by the procedural roots of that HW-synthesis old guard, should spring up on these new chips like mushrooms on dewy morning lawns.
The hardware is always ahead of the software - as instructions for hardware to do what it does, software cannot do more. But now the HW is growing, literally geometrically and arguably exponentially, in power and complexity, beyond our ability to even articulate what it should do within what it can. Let's see some better ways to talk the walk.
Re: (Score:2)
Also, we already have threading capabilities that are trivial to make use of. If you're talking about *vectorization*, then yeah, that's not well supported in a portable fashion. But threading? pthreads makes that trivial.
What features from Verilog would you want? Concurrent expressions? You realize how expensive that would be?
a = b + c
d = c + e
sure that makes sense in
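For what it's worth, here's about the smallest pthreads sketch of that "trivial" threading (my own toy example): the two independent assignments above, each handed to its own thread.

#include <pthread.h>
#include <stdio.h>

/* Two independent additions, each done on its own thread - roughly the
   simplest thing pthreads can express. */
struct add_job { int x, y, result; };

static void *add_worker(void *arg)
{
    struct add_job *job = arg;
    job->result = job->x + job->y;
    return NULL;
}

int main(void)
{
    struct add_job a = { 2, 3, 0 };   /* a = b + c */
    struct add_job d = { 3, 5, 0 };   /* d = c + e */
    pthread_t t1, t2;

    pthread_create(&t1, NULL, add_worker, &a);
    pthread_create(&t2, NULL, add_worker, &d);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("a = %d, d = %d\n", a.result, d.result);
    return 0;
}

Of course, creating a thread to do one addition costs orders of magnitude more than the addition itself, which is exactly why concurrency at the granularity of single expressions is so expensive.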
More information (Score:3, Informative)
Von Neumann bottleneck (Score:3, Insightful)
It is interesting to note that Intel has now decided to put the memory controller on the die, after AMD showed the advantages of doing so.
However, I'm a little dismayed that Intel hasn't yet addressed the number one bottleneck for system throughput: the (shared) memory bus itself.
In the '90s, researchers at MIT were putting memory on the same die as the processor. These processors had unrestricted access to their own internal RAM. There was no waiting on a relatively slow IDE drive or Ethernet card to complete a DMA transaction; no stalls during memory access, etc...
What is really needed is a redesign of the basic PC memory architecture. We really need dual-ported RAM, so that a memory transfer to or from a peripheral doesn't take over the memory bus used by the processor. Having an onboard memory controller helps, but it doesn't address the fundamental issue that a 10 ms IDE DMA transfer effectively stalls the CPU for those 10 milliseconds. In this regard, the PC of today is no more efficient than the PC of 20 years ago.
Re: (Score:2)
Also DMAs don't take nearly that long to fulfill. This is how you can copy GBs of data from one drive to another and still have a huge amount of processor power to use. The drives don't lock the bus while performing the read, only when actually transferring data. Otherwise, if you locked the bus for 10ms that means you can't service interrupts, say the timer. Which means your clock would be
It's about time (Score:2)
This has been coming for a while, and shouldn't surprise anybody. I was expecting it to come from NVidia, though, which had been looking into putting a CPU on their graphics chips and cutting Intel/AMD out of the picture. Since they already had most of the transistor count, this made sense. They already had the nForce, which has just about everything but the CPU and RAM (GPU, network interface, disk interface, audio, etc) on one chip. But they never took the last step. Probably not because they couldn
Re: (Score:2)
All Intel x86 code names are derived from the names of rivers in the (northwest?) USA.
BBH
Re:Sure thats nice but... (Score:5, Interesting)
It's quite common in the industry to give projects names that don't mean anything, and each company uses a different scheme for generating the monikers. One interesting story is what happened when Apple used an internal project name of "Sagan". Carl Sagan took exception to this use of his name and threatened a lawsuit. Apple responded by changing the project name to "BHA", a TLA for "Butt-Head Astronomer". Sagan filed a lawsuit over this, but it was thrown out of court when the judge ruled the new name was a generic one, since Sagan was probably not the world's only butthead astronomer. (At least that's what I recall of it. Perhaps someone who worked at Apple during this time can add more detail?)
Re: (Score:2)
Based on the very Hebrew sounding name I would think this is some of the fruition of that partnership....
Just my conjecture though....
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Nehalem is a river in Oregon--where the chip will be fabbed--and it means "rivers" in Hebrew--many of Intel's recent processors have been designed in Israel.
The name is a pretty nice nod to cross-continental cooperation, I'd say. I'm still an AMD fanboy though, dangit.
Re: (Score:2, Interesting)
Re: (Score:2)
Yes, how else will the *OS* access and interface with the GPU?
For servers and settops and businesses... (Score:2)
It'll mean that if you want graphics performance that doesn't suck, you'll still need an external video card with dedicated VRAM, but for embedded systems, servers, and business laptops and desktops where Intel's ghastly GPUs are acceptable it'll be OK.
This will also probably make Microsoftwood happy, since it'll guarantee there are no open traces on the video card for you to use to pirate your HD movies on Vista.
Re: (Score:2)
Re: (Score:3, Insightful)
C'mon, modders, you can do better than that. Troll, Flamebait, Overrated, I'd understand; they're applicable. But redundant??
Besides, I was serious. When am I going to see some serious RAM on-chip?
Re: (Score:3, Informative)
Generally, for a given type of memory, the larger its capacity, the larger its latency becomes and the smaller the throughput you'll get from it. A memory hierarchy is sometimes seen as a solution to reduce memory system cost, but more fundamentally, as silicon technol
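The usual back-of-the-envelope for why the hierarchy wins anyway is average memory access time (AMAT). A small sketch with illustrative cycle counts (the hit times and miss rates below are made up, though in the right ballpark for chips of this era):

#include <stdio.h>

/* AMAT = L1 hit time + L1 miss rate * (L2 hit time + L2 miss rate * DRAM time) */
int main(void)
{
    double l1_hit = 3.0,  l1_miss = 0.05;  /* cycles; fraction of accesses missing L1 */
    double l2_hit = 14.0, l2_miss = 0.20;  /* fraction of L2 accesses missing L2 */
    double dram   = 200.0;                 /* cycles to main memory */

    double amat = l1_hit + l1_miss * (l2_hit + l2_miss * dram);
    printf("AMAT = %.1f cycles (vs. %.0f cycles if every access went to DRAM)\n",
           amat, dram);
    return 0;
}

A small, fast cache in front of a big, slow memory gets you within a few cycles of the fast memory's latency most of the time, which is why on-die caches keep growing even as main memory stays off-package.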