8-Core Intel Nehalem-EX To Launch This Month
MojoKid writes "What could you do with 8 physical cores of CPU processing power? Intel's upcoming 8-core Nehalem-EX is launching later this month, according to Intel Xeon Platform Director Shannon Poulin. The announcement puts to rest rumors that the 8-core part might be delayed, and makes good on a promise Intel made last year when the chip maker said it would release the chip in the first half of 2010. To quickly recap, Nehalem-EX boasts an extensive feature set, including up to 8 cores per processor, up to 16 threads per processor with Intel Hyper-Threading, scalability up to eight sockets via Intel's serial QuickPath Interconnect, and more with third-party node controllers, and 24MB of shared cache."
Re:Balance (Score:5, Insightful)
From TFS's mention of "up to 8 CPUs or more with third-party node controllers" I'm (perhaps optimistically) assuming that means all the RAM in an up-to-eight-socket system wouldn't be more than one hop away from any core.
They almost certainly didn't go with 24MB of cache because their main memory situation is perfect; but Intel's bigger chips are substantially improved from the old "Hey, let's hang a bunch of super expensive Xeons off a dubiously adequate northbridge through a shared front-side bus, let them starve for memory access, and then get curb stomped by cheaper Opterons!" days.
Re:Balance (Score:3, Insightful)
Re:programs compatible with 8 cores (Score:2, Insightful)
I am sure there are plenty of applications out there that can take advantage of this new hardware. I run finite element and computational fluid dynamics software at work and both are capable of using the 8 cores in my work PC (dual quad core).
The really sad part though is that for the FEA software I can only use 2 cores because the vendor requires customers to buy a separate HPC license for every processor/core beyond 2.
Re:programs compatible with 8 cores (Score:2, Insightful)
Don't know about games, but many types of numerical processing can easily take advantage of this. ATLAS and other high-performance linear algebra libraries already use all available cores (no, IO is often not the biggest bottleneck with these libraries, as they seem to squeeze out all possible advantages from the L1 / L2 caches). In other words, for my scientific computations, I would definitely notice a difference.
Also, OpenMP is becoming easier and easier to use with recent gcc releases, and it only takes a few #pragma statements in some parts of the code to give a huge speedup if you know what you're doing and have appropriate code.
Re:Balance (Score:1, Insightful)
This is a good point. When do we start crowdsourcing the cloud? Why isn't there a website for this already, where people can buy and sell virtualization services hosted off their local home systems?
When will Moore's Law apply to Cores? (Score:3, Insightful)
So can we now expect a doubling of cores every 18 months?
Hyperthreading (Score:2, Insightful)
Why are they still announcing hyperthreading? It was established long ago that it had no benefit. It's been turned off on every machine I've ever purchased.
Re:When will Moore's Law apply to Cores? (Score:3, Insightful)
So can we now expect a doubling of cores every 18 months?
Moore's Law refers to transistor density, right? As long as programming makes the expected shift to massively parallel techniques that would justify a very large number of cores, I think the answer to your question is yes.
Re:Finally! (Score:4, Insightful)
Re:It's obvious (Score:5, Insightful)
10 years ago if you had told me about an 8-core processor I would have imagined using it for kick-of-the-ass games, immersive virtual reality, editing 3D video and simulating newer, more deadly designs of chainsaw chain.
But noo, instead they are used to pump out inefficient JavaShit-based versions of the Desktop software we had in '93 with a shiny new rounded corner interface to web browsers around the world. Great.
Re:It's obvious (Score:3, Insightful)
Yea, it really bugs me how 95% of a web site's load time and processing load is accounted for by a few pretty features like rounded corners and drop shadows.
How about we put those effects into CSS where they belong and not induce massive load by simulating them with 5MB of JavaScript?
Re:It's obvious (Score:3, Insightful)
Web 1.0 can use plenty of cores, too, but generally your Web x.x requirements and your required server core count are orthogonal. Bandwidth and latency requirements for Web 2.0 are a different story, though. Those things tend to scale depending on how shitty your programmers are.
Re:Ditch x86 (Score:3, Insightful)
People have been arguing, as you are, that x86's bloated CISC instruction set is inferior to a cleaner RISC architecture for the last 20+ years. Nobody has ever proven with hard data that the elegance of the instruction set matters, though.
What evidence we do have goes against that argument.
Apple machines used a cleaner RISC architecture for a while in the desktop space. They never performed any better than equivalent x86 based machines, and in the end Apple abandoned RISC and moved to x86.
Intel came out with a cleaner RISC-based instruction set that the Itanium line uses. If x86 were really as bad as you say, Itanium chips would be running circles around the x86-based server chips provided by both Intel and AMD. That isn't happening.
Another thing you might not realize: all x86 chips, from both Intel and AMD, once you strip them down to the microcode level ARE RISC designs under the hood. RISC is the cleaner way to implement the microcode and the underlying execution architecture, but all the historical data seems to indicate that whether the instruction set that sits on top of that is RISC or CISC is irrelevant to performance. It is arguably more complicated to design a CISC-based chip like x86, but that clearly has not been an obstacle for Intel's or AMD's engineers in competing with RISC on performance.
Re:Sun Ultrasparc T2 has 8 cores... and 64 threads (Score:2, Insightful)
Re:Ditch x86 (Score:3, Insightful)
People have been arguing, as you are, that x86's bloated CISC instruction set is inferior to a cleaner RISC architecture for the last 20+ years. Nobody has ever proven with hard data that the elegance of the instruction set matters, though.
What evidence we do have goes against that argument.
The only evidence that we have is that the benefits of commoditization and economies of scale often outweigh any architectural advantages. The fact that x86 incorporated many elements of RISC would also demonstrate its value.
Apple machines used a cleaner RISC architecture for a while in the desktop space. They never performed any better than equivalent x86 based machines, and in the end Apple abandoned RISC and moved to x86.
Manufacturing processes simply trumped architectural differences. PowerPCs have never been manufactured on anywhere near the scale of x86.
Intel came out with a cleaner RISC-based instruction set that the Itanium line uses. If x86 were really as bad as you say, Itanium chips would be running circles around the x86-based server chips provided by both Intel and AMD. That isn't happening.
Itanium is EPIC, not RISC; it's a very different approach that relies on the compiler to expose instruction-level parallelism explicitly. It may not be running circles around x86, but that may be because compilers aren't yet advanced enough to take full advantage of the architecture. We may still see this change in the future.
Re:When will Moore's Law apply to Cores? (Score:4, Insightful)
You just made me realize nobody named Cole (Ashley Cole, Cheryl Cole, Nat King Cole) will ever have a law named after them. Everyone will just snicker and it'll never catch on.