Intel Hardware Technology

Inside Intel's Core i7 Processor, Nehalem

MojoKid writes "Intel's next-generation CPU microarchitecture, which was recently given the official processor family name of 'Core i7,' was one of the big topics of discussion at IDF. Intel claims that Nehalem represents its biggest platform architecture change to date. This might be true, but it is not a from-the-ground-up, completely new architecture either. Intel representatives disclosed that Nehalem 'shares a significant portion of the P6 gene pool,' does not include many new instructions, and has approximately the same length pipeline as Penryn. Nehalem is built upon Penryn, but with significant architectural changes (full webcast) to improve performance and power efficiency. Nehalem also brings Hyper-Threading back to Intel processors, and while Hyper-Threading has been criticized in the past as being energy inefficient, Intel claims their current iteration of Hyper-Threading on Nehalem is much better in that regard." Update: 8/23 00:35 by SS: Reader Spatial points out Anandtech's analysis of Nehalem.
This discussion has been archived. No new comments can be posted.

  • by Joe The Dragon ( 967727 ) on Friday August 22, 2008 @08:17PM (#24714077)

    Only the super-high-end desktops get QuickPath and triple-channel DDR3, and the bigger joke is that there will be two different single-CPU desktop sockets.

    The mobile parts will not get QuickPath at all.

    Every AMD CPU uses HyperTransport, all AMD desktops use the same socket, and the upcoming AM3 CPUs will work in the older AM2+ boards. With AMD you can also choose from more than one chipset, while with Intel it looks like you will be locked into an Intel chipset.

  • by Kjella ( 173770 ) on Friday August 22, 2008 @08:18PM (#24714087) Homepage

    Nehalem is really the realization of what many slashdotters have claimed before - the typical user doesn't need that much more performance. Both datacenters and laptop users ask for the same thing - power efficiency - and Intel delivers. The Atom is another part of the strategy, even though it's currently coupled with a very inefficient chipset.

    The thing is, today we have the knowledge and complexity to fire up kilowatt systems and more - but they're costly to run. Certainly there are extreme hardcore gamers who won't mind running the hottest, most power-hungry quad-CrossFire system, but they're few and far between. Laptop users think battery life. Desktop users think electricity costs. The result is Nehalem, which promises to deliver a lot more performance per watt.

    If the practice is as good as the theory, AMD is unfortunately in deep shit. They've always been good at delivering OK processors at an OK price, but power efficiency was really only their strength compared to the Netburst (P4) processors, not the P3 or the Cores. If it amounts to "yeah, your processors are cheaper but they cost more to operate," things will fall apart, which is sad since ATI is really doing fine. The 48xx series are kick-ass cards; I just hope they can keep up the competition against Intel...

  • Here we go again (Score:2, Interesting)

    by PingXao ( 153057 ) on Friday August 22, 2008 @08:34PM (#24714185)

    Hyperthreading. I thought I was getting an ultra-tech processor when I bought my Dell 8400 some years back, with its 3.2 GHz P4 hyperthreaded power-sucking processor. Once all the reviews and independent technical evaluations and benchmarks were in, it was revealed that outside of a few niche application areas, hyperthreading wasn't all that great.

    It's a good sign that Nehalem is also focused on lowering power usage; runaway power is the reason Intel finally had to abandon its Tejas plans (the old 8400's Prescott P4 was a juice junkie). But why return to a feature like hyperthreading that has been so thoroughly debunked? New software being written is still struggling with SMP, multiple cores, and threads running in parallel. Why gum up the works even more with a questionable feature? It makes very little sense to me.

    One justification would be if it had the potential to significantly reduce rendering times in animation and CGI applications. I thought Intel's plans for the mid-term were to go towards many-core processors (many more than 4 or even 8). Maybe hyperthreading is just a way to kick software designers in the arse, because software that can really take advantage of multi-threading is scarce. It's really quite amazing how much the hardware has outstripped the ability of software to keep up.

  • Re:Here we go again (Score:5, Interesting)

    by Traiano ( 1044954 ) on Friday August 22, 2008 @08:58PM (#24714333)
    Don't assume that because Hyper-Threading failed with Netburst it is forever doomed to fail. The primary problem with that architecture was that stages along the pipeline didn't support multiple threads, so any thread context switch forced a flush of Netburst's very, very long pipeline. Intel's next generation of pipelines tracks multiple threads at all stages, which makes the prospect of HT much more attractive.
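
The parent's point can be put in a toy cost model (illustrative only; the function and the exact stage counts are assumptions for the sketch, not Intel's published figures):

```python
def switch_cost_cycles(pipeline_depth: int, tracks_threads: bool) -> int:
    """Rough cost of a thread switch: if every pipeline stage tags its
    work with a thread ID, nothing needs to drain; otherwise the whole
    pipeline flushes and must refill."""
    return 0 if tracks_threads else pipeline_depth

# Netburst-style: ~31 stages, no per-stage thread tracking -> full flush.
assert switch_cost_cycles(31, tracks_threads=False) == 31
# Nehalem-style SMT: both threads live at every stage -> no flush needed.
assert switch_cost_cycles(14, tracks_threads=True) == 0
```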
  • Gene pool comment (Score:3, Interesting)

    by blahplusplus ( 757119 ) on Friday August 22, 2008 @09:14PM (#24714447)

    "completely new architecture either. Intel representatives disclosed that Nehalem 'shares a significant portion of the P6 gene pool,"

    That's like saying equations share a significant portion of the number gene pool. It's all geometry when you get down to it. I mean, really, there are certain circuit geometries that are always good to use and that you can't totally get away from.

  • by beakerMeep ( 716990 ) on Friday August 22, 2008 @09:19PM (#24714475)
    Take a deep breath. It's OK if AMD and Intel both have good chips. The question really comes down to the brand of salsa anyway.
  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Friday August 22, 2008 @09:56PM (#24714695)
    Comment removed based on user account deletion
  • by Kneo24 ( 688412 ) on Friday August 22, 2008 @10:07PM (#24714755)

    You are behind the times. ATI cards, as far as price versus performance goes, are spanking Nvidia's cards with moon rocks. I think a big helping hand in that is that, for whatever reason, AMD said to them, "make better drivers, or else!"

    Also, AMD has gone the route of trying to be more open source friendly with their cards, more so than NVidia.

    You just can't go wrong with owning a current-generation Radeon card right now.

  • by Louis Savain ( 65843 ) on Friday August 22, 2008 @10:18PM (#24714805) Homepage

    More than any other organization, Intel knows that multithreading is bad. Lots of smart people, such as professor Edward Lee [berkeley.edu] (the head of U.C. Berkeley's Parallel Computing Lab), have warned Intel of the disaster down the road. It is time for Intel and everybody else to make a clean break with the old stuff. There is an infinitely better way to design and program parallel computers that does not involve the use of threads at all. Instead of Penryn, Intel should have picked something similar to the Itanium, which has a superscalar architecture [wikipedia.org]. A sequential (scalar) core has no business doing anything in a parallel multicore processor. Intel will regret this. Sooner or later, a competitor will read the writing on the wall and do things right, and Intel and the others will be left holding an empty bag. To find out the right way to design a multicore processor, read Transforming the TILE64 into a Kick-Ass Parallel Machine [blogspot.com].

  • Re:First Post (Score:1, Interesting)

    by Anonymous Coward on Friday August 22, 2008 @10:32PM (#24714917)

    What's with the Hebrew? [72.14.205.104] Nehalem? Are these the chips Mossad uses to accelerate the backdoor access to the Israeli-coded crypto cyphers? :-)

  • Re:Here we go again (Score:4, Interesting)

    by Waffle Iron ( 339739 ) on Friday August 22, 2008 @10:42PM (#24714997)

    Hyperthreading can make a lot of sense in some circumstances. Sun pushed hyperthreading to its limits to achieve very impressive energy efficiency for certain niche workloads with its Niagara CPUs and derivatives. (IIRC, up to 128 threads per chip.)

  • by sam0737 ( 648914 ) <samNO@SPAMchowchi.com> on Saturday August 23, 2008 @01:00AM (#24715845)

    QuickPath sounds a lot like AMD's HyperTransport. Three link pairs per CPU and an integrated memory controller are exactly what AMD has been doing for a long, long time.

    20 bits wide at 25.6 GB/s per link? HyperTransport was already capable of delivering 41.6 GB/s per link back in 2006 (according to Wikipedia).
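
For the curious, the 25.6 GB/s figure falls out of the link parameters (a back-of-the-envelope sketch; the 6.4 GT/s rate and 16 data bits per direction are the commonly quoted QuickPath numbers, and the helper name is made up):

```python
def link_bandwidth_gbs(transfers_gt_s: float, data_bits: int) -> float:
    """Aggregate bandwidth of a bidirectional point-to-point link:
    transfer rate x bytes per transfer x 2 directions."""
    return transfers_gt_s * (data_bits / 8) * 2

# QuickPath: 20 lanes per direction, 16 of them carrying data, at 6.4 GT/s.
assert link_bandwidth_gbs(6.4, 16) == 25.6
```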

  • by distantbody ( 852269 ) on Saturday August 23, 2008 @01:26AM (#24716001) Journal

    Nehalem is really the realization of what many slashdotters have claimed before... ...power efficiency - and Intel delivers.

    Putting the cringe-worthy PR tone aside (are you connected to Intel in any way?), the lowest-clocked 'mainstream desktop' Bloomfield CPU (2.66 GHz, 45 nm, quad-core) has a TDP of 130 W! Efficient or not, that is one hot-and-sweaty processor. It makes me wonder: if Nehalem truly does have '1.1x~1.25x / 1.2x~2x the single/multi-threaded performance of the latest Penryn ('Yorkfield', 2.66 GHz, 45 nm, quad-core, 95 W TDP) at the same power level', why wouldn't Intel let the efficiency gains carry the performance increase at the same TDP?

    Look, I may be missing something, but I have been reading plenty of (uncomfortably positive, perhaps bankrolled) material on Nehalem, yet I can't shake the perception that, with a huge TDP increase, the return of hyperthreading, and the cannibalization of L2 cache for L3 cache, Nehalem seems far more Pentium 4 than Penryn.
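
Plugging the quoted numbers into a performance-per-watt ratio makes the complaint concrete (a sketch using the figures cited above; `perf_per_watt_gain` is just an illustrative helper):

```python
def perf_per_watt_gain(speedup: float, new_tdp_w: float, old_tdp_w: float) -> float:
    """Performance per watt relative to the older chip (1.0 = parity)."""
    return speedup / (new_tdp_w / old_tdp_w)

# Bloomfield at 130 W TDP vs. Yorkfield at 95 W, claims from the post above.
single_thread = perf_per_watt_gain(1.25, 130, 95)  # best single-threaded claim
multi_thread = perf_per_watt_gain(2.0, 130, 95)    # best multi-threaded claim

# Per watt, the best-case win only shows up for multi-threaded code.
assert single_thread < 1.0 < multi_thread
```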

  • Re:Here we go again (Score:2, Interesting)

    by Anonymous Coward on Saturday August 23, 2008 @01:51AM (#24716131)
    The Nehalem architecture is designed to maximize performance for a given power level. If you happen to be running a legacy application which cannot take advantage of all the cores then the unused cores will go into a low power state and the cores in use will overclock until the selected power envelope is reached.

    I, for one, welcome our new automatic overclocking overlords.
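
That power-envelope behavior can be put in a toy model (the 133 MHz bin size matches Nehalem's bus clock, but the two-bin cap and linear scaling are assumptions for illustration, not Intel's published turbo tables):

```python
def turbo_clock_ghz(active_cores: int, total_cores: int,
                    base_ghz: float, bin_ghz: float = 0.133,
                    max_bins: int = 2) -> float:
    """Toy turbo model: each idle (power-gated) core frees thermal
    budget, letting the active cores climb one speed bin, up to a cap."""
    idle_cores = total_cores - active_cores
    return base_ghz + min(idle_cores, max_bins) * bin_ghz

# A 2.66 GHz quad-core running a single-threaded legacy app:
# three cores sleep, the busy one climbs two 133 MHz bins.
assert abs(turbo_clock_ghz(1, 4, 2.66) - 2.926) < 1e-9
# All four cores busy: no headroom, so they stay at the base clock.
assert turbo_clock_ghz(4, 4, 2.66) == 2.66
```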
  • Re:Nehalem? (Score:2, Interesting)

    by Perf ( 14203 ) on Saturday August 23, 2008 @04:57AM (#24716899)

    Nah, it's named after a river in Oregon, which in turn, is named after a Native American tribe.

  • by boorack ( 1345877 ) on Saturday August 23, 2008 @05:59AM (#24717097)
    It's just that software does not keep up with hardware advances. There are many semi-AI or AI things I would like to have running on my PC. A classic example is indexing images or videos: being able to query "show me all pictures where my girlfriend wears a watch on her left hand," etc.

    My favorite would be a robot that cleans up my house. Not just vacuuming the floor: it would also tidy higher surfaces, recognize what is useful, what is rubbish, and what I should decide on myself before it gets tossed out. That kind of robot would also alert me when something needs repair (like a leaking roof), fix simple things (leaking pipes?), and generally take care of my property, maintaining and fixing things early enough and caring for the plants. And I would rather talk to this device in natural language than program it by clicking or writing some kind of bizarre script ;)

    That kind of thing certainly needs enormous computational power. You need to recognize objects in images coming from its sensors (be it cameras, laser/infrared sensors, etc.), solve the kinematic and dynamic equations of robot arms in real time, and have some advanced AI: both for basic problems of geometry and moving objects, and for more sophisticated reasoning, including a non-trivial ontology-like database (so the robot won't shut a plant in a cabinet and let it die). So you need to crunch incredible amounts of data without consuming too much power. I think current designs still need some work to keep up with that kind of workload.

  • by paradigm82 ( 959074 ) on Saturday August 23, 2008 @08:26AM (#24717663)

    Intel's CPUs have been superscalar since the P6 (Pentium Pro). They can execute 3-4 instructions per clock under optimal conditions (yes, all the way through the pipeline). They have out-of-order execution, speculative execution, register renaming, etc. However, there's a limit to how much you can execute in parallel at the instruction level.

    Could you elaborate on what Intel's CPUs are missing and what Edward Lee was warning about?
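
The instruction-level limit the parent mentions has a simple lower bound (a sketch; the function name and the example numbers are illustrative):

```python
from math import ceil

def cycle_lower_bound(num_instrs: int, critical_path: int, issue_width: int) -> int:
    """A superscalar core can never beat either limit: the longest
    dependence chain (critical path) or its raw issue bandwidth."""
    return max(critical_path, ceil(num_instrs / issue_width))

# 12 independent adds on a 4-wide core: bandwidth-limited to 3 cycles.
assert cycle_lower_bound(12, critical_path=1, issue_width=4) == 3
# The same 12 adds as one dependence chain: extra width can't help at all.
assert cycle_lower_bound(12, critical_path=12, issue_width=4) == 12
```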

  • Re:Here we go again (Score:3, Interesting)

    by Sparky McGruff ( 747313 ) on Saturday August 23, 2008 @12:58PM (#24719523)

    Take MS Word. You have grammar checking, but what about background Googling to do FACT checking?

    Exactly. There are a million things that a "simple" program like Word could do; instead, they just add cosmetic crap that slows the program down. I haven't seen a significant advancement -- something that made the old program obsolete -- in Word in a decade.

    As one example of a pathetic feature, Word has an option to "compare two documents". In theory, this would be a useful feature when someone extensively edits a document and hands it back to you. In reality, it's completely useless. If you take a document, and swap the beginning and ending paragraphs, it tells you that the entire document was deleted and a new one inserted. How useful. We have software algorithms (freely available!) for analyzing DNA sequences that allow for automatically identifying how entire genomes have been rearranged and modified, yet Microsoft can't figure out how to identify that a single paragraph has moved.

    They're lazy, unimaginative, and sloppy. There are a million tasks that could be implemented to truly revolutionize the process of writing documents (particularly long ones). They could make inserting figures into long documents less painful (delete a sentence, reformat all the pictures!). They could provide real hooks so that EndNote and other referencing software aren't so clunky (insert a reference, wait 2 minutes for a flurry of scripted search-and-replaces to complete!). Instead, the Word designers, in the finest MS tradition, chose to bring us "Clippy" and the "ribbon bar." Gee, thanks!
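
The moved-paragraph failure described above is easy to demonstrate, and to improve on, with a stock diff library (a minimal sketch using Python's difflib; the `moved_paragraphs` helper is made up for illustration):

```python
import difflib

def moved_paragraphs(old, new):
    """Paragraphs a naive diff reports as deleted-then-inserted even
    though they exist in both versions, i.e. paragraphs that only moved."""
    matcher = difflib.SequenceMatcher(a=old, b=new, autojunk=False)
    deleted, inserted = [], []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag in ("delete", "replace"):
            deleted.extend(old[i1:i2])
        if tag in ("insert", "replace"):
            inserted.extend(new[j1:j2])
    return [p for p in deleted if p in inserted]

doc = ["intro", "body one", "body two", "conclusion"]
swapped = ["conclusion", "body one", "body two", "intro"]

# A plain diff calls these delete+insert; a move-aware compare would not.
assert moved_paragraphs(doc, swapped) == ["intro", "conclusion"]
```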

  • by karnal ( 22275 ) on Saturday August 23, 2008 @04:01PM (#24720959)

    The problem being: if most people don't natively benefit from HT, then aside from benchmarks or off-the-wall memory-intensive apps, HT won't be that impressive.

    I've had a Core 2 Duo E6600 for over a year now, and from what I've been reading, Nehalem isn't really any large performance boost for the typical user over Penryn. Usually I'll buy a new CPU/system when the performance of mainstream games suffers due to the CPU being outdated; in fact, this E6600 is the first system where I've actually upgraded the video card without doing a complete swap of mobo/CPU along with it.

"More software projects have gone awry for lack of calendar time than for all other causes combined." -- Fred Brooks, Jr., _The Mythical Man Month_

Working...