
8-Core Intel Nehalem-EX To Launch This Month 186

Posted by Soulskill
from the double-the-cores-for-only-twice-the-price dept.
MojoKid writes "What could you do with 8 physical cores of CPU processing power? Intel's upcoming 8-core Nehalem-EX is launching later this month, according to Intel Xeon Platform Director Shannon Poulin. The announcement puts to rest rumors that the 8-core part might be delayed, and makes good on a promise Intel made last year when the chip maker said it would release the chip in the first half of 2010. To quickly recap, Nehalem-EX boasts an extensive feature set, including up to 8 cores per processor, up to 16 threads per processor with Intel Hyper-Threading, scalability up to eight sockets via Intel's serial QuickPath Interconnect and more with third-party node controllers, and 24MB of shared cache."
This discussion has been archived. No new comments can be posted.

  • Re:Balance (Score:5, Insightful)

    by fuzzyfuzzyfungus (1223518) on Monday March 08, 2010 @05:46PM (#31405876) Journal
    Given that the Nehalems all have integrated memory controllers, I'd assume that the memory I/O situation wouldn't become substantially worse as you scaled up.

    From TFS's mention of "up to 8 CPUs or more with third-party node controllers" I'm (perhaps optimistically) assuming that means all the RAM in an up-to-8-socket system wouldn't be more than one hop away from any core.

    They almost certainly didn't go with 24MB of cache because their main memory situation is perfect; but intel's bigger chips are substantially improved from the old "Hey, let's hang a bunch of super expensive Xeons off a dubiously adequate northbridge through a shared front-side bus, let them starve for memory access, and then get curb stomped by cheaper Opterons!" days.
  • Re:Balance (Score:3, Insightful)

    by CBRcrash (1061324) on Monday March 08, 2010 @05:52PM (#31405956)
    I'm thinking computing power for rent (aka the cloud), VDI, cluster data crunching, and any combination of the above.
  • by Alastor187 (593341) on Monday March 08, 2010 @06:00PM (#31406128)

    I am sure there are plenty of applications out there that can take advantage of this new hardware. I run finite element and computational fluid dynamics software at work and both are capable of using the 8 cores in my work PC (dual quad core).

    The really sad part though is that for the FEA software I can only use 2 cores because the vendor requires customers to buy a separate HPC license for every processor/core beyond 2.

  • by hoytak (1148181) on Monday March 08, 2010 @06:03PM (#31406186) Homepage

    Don't know about games, but many types of numerical processing can easily take advantage of this. ATLAS and other high-performance linear algebra libraries already use all available cores (no, IO is often not the biggest bottleneck with these libraries, as they seem to squeeze out all possible advantages from the L1 / L2 caches). In other words, for my scientific computations, I would definitely notice a difference.

    Also, OpenMP is becoming easier and easier to use with recent gcc releases, and it only takes a few #pragma statements in some parts of the code to give a huge speedup if you know what you're doing and have appropriate code.

  • Re:Balance (Score:1, Insightful)

    by Anonymous Coward on Monday March 08, 2010 @06:06PM (#31406234)

    This is a good point. When do we start crowdsourcing the cloud? Why isn't there a website for this already, where people can buy and sell virtualization services hosted off their local home systems?

  • by rberger (2481) on Monday March 08, 2010 @06:07PM (#31406254) Homepage

    So can we now expect a doubling of cores every 18 months?

  • Hyperthreading (Score:2, Insightful)

    by MobyDisk (75490) on Monday March 08, 2010 @06:11PM (#31406336) Homepage

    Why are they still announcing hyperthreading? It was established long ago that it had no benefit. It's been off on any machines I've ever purchased.

  • by Colonel Korn (1258968) on Monday March 08, 2010 @06:24PM (#31406588)

    So can we now expect a doubling of cores every 18 months?

    Moore's Law refers to transistor density, right? As long as programming makes the expected shift to massively parallel techniques that would justify a very large number of cores I think the answer to your question is yes.

  • Re:Finally! (Score:4, Insightful)

    by Cytotoxic (245301) on Monday March 08, 2010 @06:38PM (#31406860)
    Even funnier, soon enough you'll be running Crysis on your cell phone (or whatever we call it then). Remember when it was tough to get decent framerate on Doom with high settings? You can run that on a cellphone these days. 15 years from "state of the art" to "runs on my cellphone." Wow. In 15 years you might have a 1TB database running on your personal communicator that fits in your pocket. (In keeping with the "15 years out" prediction theme of the day.)
  • Re:It's obvious (Score:5, Insightful)

    by ickleberry (864871) <web@pineapple.vg> on Monday March 08, 2010 @07:00PM (#31407182) Homepage
    This makes me sad. Web 2-point-Oh is such a waste of a perfectly good 8-core processor.

    10 years ago if you had told me about an 8-core processor I would have imagined using it for kick-of-the-ass games, immersive virtual reality, editing 3D video and simulating newer, more deadly designs of chainsaw chain.

    But noo, instead they are used to pump out inefficient JavaShit-based versions of the Desktop software we had in '93 with a shiny new rounded corner interface to web browsers around the world. Great.
  • Re:It's obvious (Score:3, Insightful)

    by MrNaz (730548) * on Monday March 08, 2010 @07:44PM (#31407698) Homepage

    Yea, it really bugs me how 95% of a web site's load time and processing load is accounted for by a few pretty features like rounded corners and drop shadows.

    How about we put those effects into CSS where they belong and not induce massive load by simulating them with 5MB of JavaScript?

  • Re:It's obvious (Score:3, Insightful)

    by raddan (519638) * on Monday March 08, 2010 @07:51PM (#31407768)
    Am I the only one here who understands that client-side Javascript has absolutely nothing to do with how many cores your server has?

    Web 1.0 can use plenty of cores, too, but generally your Web x.x requirements and your required server core count are orthogonal. Bandwidth and latency requirements for Web 2.0 are a different story, though. Those things tend to scale depending on how shitty your programmers are.
  • Re:Ditch x86 (Score:3, Insightful)

    by Klintus Fang (988910) on Monday March 08, 2010 @08:51PM (#31408376)

    People have been arguing as you are that x86's bloated CISC instruction set was inferior to a cleaner RISC architecture for the last 20+ years. Nobody has ever proven that the elegance of the instruction set matters with hard data though.

    What evidence we do have goes against that argument.

    Apple machines used a cleaner RISC architecture for a while in the desktop space. They never performed any better than equivalent x86 based machines, and in the end Apple abandoned RISC and moved to x86.

    Intel came out with a cleaner instruction set that the Itanium line uses. If x86 was really as bad as you say, Itanium chips would be running circles around the x86-based server chips provided by both Intel and AMD. That isn't happening.

    Another thing you might not realize: all x86 chips, from both Intel and AMD, once you strip them down to the micro-code level ARE RISC designs under the hood. RISC is the cleaner way to implement the micro code and the underlying execution architecture, but all historical data seems to indicate that the question of whether the instruction set that sits on top of that is RISC or CISC is irrelevant to performance. It is arguably more complicated to design a CISC based chip like x86, but that clearly has not been an obstacle to competing with RISC on the performance end for Intel or AMD engineers.

  • by Dragoniz3r (992309) on Monday March 08, 2010 @09:03PM (#31408488)
    True, but they're designed for entirely different workloads. The Niagara series of processors is designed toward large numbers of not-particularly-intensive tasks such as serving web pages and such. Power7 and Nehalem-EX are targeted more toward processing-power-intensive tasks which are still parallelizable.
  • Re:Ditch x86 (Score:3, Insightful)

    by hhw (683423) on Monday March 08, 2010 @10:05PM (#31408990) Homepage

    People have been arguing as you are that x86's bloated CISC instruction set was inferior to a cleaner RISC architecture for the last 20+ years. Nobody has ever proven that the elegance of the instruction set matters with hard data though.

    What evidence we do have goes against that argument.

    The only evidence that we have is that the benefits of commoditization and economies of scale often outweigh any architectural advantages. The fact that x86 incorporated many elements of RISC would also demonstrate its value.

    Apple machines used a cleaner RISC architecture for a while in the desktop space. They never performed any better than equivalent x86 based machines, and in the end Apple abandoned RISC and moved to x86.

    Manufacturing processes simply trumped architectural differences. PowerPCs have never been manufactured on anywhere near the scale of x86.

    Intel came out with a cleaner instruction set that the Itanium line uses. If x86 was really as bad as you say, Itanium chips would be running circles around the x86-based server chips provided by both Intel and AMD. That isn't happening.

    Itanium is EPIC, not RISC. It is a very different approach from either RISC or CISC. It may not be running circles around x86, but that may be due to compilers not yet being advanced enough to take full advantage of the architecture. We may still see this change in the future.

  • by Kjella (173770) on Tuesday March 09, 2010 @03:58AM (#31410960) Homepage

    You just made me realize nobody named Cole (Ashley Cole, Cheryl Cole, Nat King Cole) will ever have a law named after them. Everyone will just snicker and it'll never catch on.
