Intel Hardware

Details of New Intel Dunnington and Nehalem Architectures Leaked

Daily Tech is reporting that details about Intel's new processor models were leaked over the weekend. Both the six-core Dunnington and the Nehalem architectures were featured in this leak. "Dunnington includes 16MB of L3 cache shared by all six processors. Each pair of cores can also access 3MB of local L2 cache. The end result is a design very similar to the AMD Barcelona quad-core processor; however, each Barcelona core contains 512KB L2 cache, whereas Dunnington cores share L2 cache in pairs. [...] Nehalem is everything Penryn is -- 45nm, SSE4, quad-core -- and then some. For starters, Intel will abandon the front-side bus model in favor of QuickPath Interconnect; a serial bus similar to HyperTransport."
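Working the article's cache figures through as a quick sketch (the sizes come from the leak; the arithmetic and variable names are ours):

```python
# Back-of-the-envelope cache totals from the figures in the article.
# The sizes come from the leak; the arithmetic is a reader's sketch.

KB = 1024
MB = 1024 * KB

# Dunnington: six cores, one 16MB L3 shared by all, 3MB L2 shared per pair.
dunnington_cores = 6
dunnington_l2_total = (dunnington_cores // 2) * 3 * MB   # 3 pairs * 3MB = 9MB
dunnington_l3 = 16 * MB

# Barcelona: four cores, 512KB of private L2 per core.
barcelona_cores = 4
barcelona_l2_total = barcelona_cores * 512 * KB          # 4 * 512KB = 2MB

print(dunnington_l2_total // MB)                      # 9   (MB of L2 on the die)
print(dunnington_l2_total // dunnington_cores // KB)  # 1536 (KB of L2 per core)
print(barcelona_l2_total // barcelona_cores // KB)    # 512  (KB of L2 per core)
```

So even though the L2 is shared in pairs, each Dunnington core effectively sees three times the L2 of a Barcelona core, before the 16MB L3 is even counted.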


  • by Yetihehe ( 971185 ) on Monday February 25, 2008 @03:21PM (#22549238)
    Actually, I think you could run Windows 95 entirely in cache. Now tell me about bloat...
  • Re:Wow (Score:4, Insightful)

    by Tridus ( 79566 ) on Monday February 25, 2008 @03:29PM (#22549346) Homepage
    Cores are the new gigahertz. Where Intel previously raced to get the GHz up higher than AMD (no matter whether it was useful or anybody really wanted it that way), now they race to get more cores than AMD (no matter whether it was useful or anybody really wanted it that way).

    This is great for many computing environments, but my home system is not one of them. Honestly there isn't much software I use on a regular basis that really taxes the second core, let alone six of them.
  • Re:Wow (Score:3, Insightful)

    by Firehed ( 942385 ) on Monday February 25, 2008 @03:32PM (#22549384) Homepage
    Does it really matter? Just because the math to double things is easier doesn't make it a more cost-effective move. Maybe due to the shape of the chip, it's a lot cheaper to make a triple-core die than a quad. It's not like the extra core should have any weird effects - apps that support multiple procs/cores will use the extra resources, and those that don't won't. My work XP machine can only use 3GB of RAM (despite having 4GB physically in there) and there's no detriment to such a setup.

    Yes, I find it strange. But does it really matter? I doubt it. For all we know, someone at Intel just thought the "sex-" prefix would be funny, rather than the expected "quad-" or "octo-".
  • by mihalis ( 28146 ) on Monday February 25, 2008 @04:49PM (#22550462) Homepage

    QuickPath: because Intel doesn't adopt standards... it rewrites them.
    Why should Intel pay AMD to license HyperTransport? The specs may be open to developers, but that does not mean they are unencumbered by patents. And even if they could use it, why would they? I don't really know the situation surrounding the technology, but even if Intel could use it for free, they would lose a huge battle in the PR war. I can see it now: "Remember that interconnect AMD has been using for years now? Well, our design has finally caught up with theirs enough to use it." Remember that the masses, the non-Slashdot crowd, have no idea what the techno-jargon spouted by Intel marketing means.

    Note that Intel did adopt AMD's 64-bit extensions to the x86 instruction set. I regard that as far more significant than, hypothetically, licensing HyperTransport. For example see this article on Wikipedia [wikipedia.org] or any other history of AMD64/Intel64 or "x86-64" or whatever everyone is calling it these days.

    This was a PR blow to Intel, but still made good business sense at the time, and seems to have been good for Intel and for AMD (bad for Itanium though).

  • Re:Wow (Score:2, Insightful)

    by wonnage ( 1206966 ) on Monday February 25, 2008 @05:04PM (#22550640)
    The problem is that it's increasingly infeasible to raise clock speeds without frying chips built from ever-tinier components. We can still cram a few more transistors onto the silicon, though. That by itself doesn't solve anything -- extra transistors don't do jack if you can't make use of them. Right now, increasing the core count seems to be the best way to use the room on the chip, which is why all the major processor manufacturers have banked on it for the near future. Basically, we've chosen parallelization as the way of the future.

    Concurrency is a tough problem to tackle, though, which is why a lot of programs don't make use of it as much as they could, and some tasks just can't be parallelized anyway. The difference between the core race and the gigahertz race is that at least the cores have some potential. Sure, you can bump the clock up to 3GHz if you build an incredibly long pipeline, but if you can't keep that pipeline full, or it stalls, or an interrupt forces you to flush the whole thing, the 3GHz isn't all that useful. Many of those problems are out of the software developers' control. If you have enough jobs for the cores, though, and write a program that makes good use of parallelization (much easier said than done), multi-core will actually give you a big performance increase.
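The point in that last comment is easy to demonstrate: CPU-bound work only benefits from extra cores when it splits into independent chunks. A minimal sketch using Python's `multiprocessing.Pool` (the prime-counting workload and the chunk sizes are illustrative, not from the article):

```python
# Sketch: an embarrassingly parallel job split across worker processes.
# Each chunk is independent, so more cores really do mean more throughput.
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division -- deliberately CPU-bound."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limit, workers = 50_000, 4
    step = limit // workers
    # Independent, non-overlapping ranges: one chunk per worker process.
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total)  # same answer as the serial count_primes((0, limit))
```

A task with a serial dependency between iterations (say, a running hash of a file) gets no such split, which is exactly why "six cores" doesn't automatically mean "six times faster."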
