Intel Hardware Technology

Inside Intel's Core i7 Processor, Nehalem

MojoKid writes "Intel's next-generation CPU microarchitecture, which was recently given the official processor family name of 'Core i7,' was one of the big topics of discussion at IDF. Intel claims that Nehalem represents its biggest platform architecture change to date. This might be true, but it is not a from-the-ground-up, completely new architecture either. Intel representatives disclosed that Nehalem 'shares a significant portion of the P6 gene pool,' does not include many new instructions, and has approximately the same length pipeline as Penryn. Nehalem is built upon Penryn, but with significant architectural changes (full webcast) to improve performance and power efficiency. Nehalem also brings Hyper-Threading back to Intel processors, and while Hyper-Threading has been criticized in the past as being energy inefficient, Intel claims their current iteration of Hyper-Threading on Nehalem is much better in that regard." Update: 8/23 00:35 by SS: Reader Spatial points out Anandtech's analysis of Nehalem.
  • by Anonymous Coward on Friday August 22, 2008 @08:06PM (#24713979)

    The problem with hyperthreading is that it fails to deal with the fundamental problem of memory bandwidth and latency in the x86 architecture. It's true, some apps will see a 20% or better improvement in performance, but most won't see anything more than a marginal increase.

    Still, if you can safely enable hyperthreading without slowing down your system, unlike the last time we went through this, we should consider it a success. Hopefully, QuickPath will provide the needed memory improvements.

  • by Anonymous Coward on Friday August 22, 2008 @08:07PM (#24713995)
    'nuff said?
  • I for one... (Score:1, Insightful)

    by Anonymous Coward on Friday August 22, 2008 @08:25PM (#24714141)

    I for one welcome the death of FSB and all that, but yet again it means a new motherboard, a new CPU socket and all that (DDR3 too). Better save up!

  • by tknd ( 979052 ) on Friday August 22, 2008 @09:01PM (#24714347)

    See here [tomshardware.com]

    I know it's a Tom's Hardware article, but the results are consistent with what people have been posting in the Silent PC Review forums. I do think that with a better chipset and a laptop-style power supply the Atom platform can go below 20 watts, but for now Intel is not making those boards, or even allowing Atom platforms to have fancy features like PCI Express. In fact, with the older AMD 690G chipset, some people at Silent PC Review were able to build sub-30-watt systems.

  • by tftp ( 111690 ) on Friday August 22, 2008 @09:04PM (#24714363) Homepage

    It's really quite amazing how much the hardware has outstripped the ability of software to keep up.

    It's not amazing at all. Most desktop applications are single-threaded because you, the operator, are single-threaded. MS Word could enter words on all 100 pages of your book simultaneously, but you aren't able to produce them. An audio player could decode and play 100 songs to you at the same time, but you want to listen to one song at a time...

    I can see niche desktop applications where multiple threads are of use. For example, GIMP (or Paint.NET or Photoshop) could apply your filter to 100 independent squares of the photo if you have 100 cores. However, the gain would be tiny, the extra coding labor would be considerable, and you still need to stitch these squares together... all to save a second or two on a rare filter operation?

    The most effective use of multiple cores today is either in servers, or in finite element modeling applications.
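
    A minimal sketch of that tile-splitting idea, in C++11 with one thread per tile. The Image struct and the trivial "invert" filter are invented here purely for illustration; a real GIMP or Photoshop filter would reuse a thread pool and much smarter tiling.

    #include <algorithm>
    #include <cstdint>
    #include <functional>
    #include <thread>
    #include <vector>

    struct Image {
        int width, height;
        std::vector<uint8_t> pixels;   // grayscale, width * height bytes
    };

    // Invert one rectangular tile; tiles never overlap, so no locking is needed.
    void invertTile(Image& img, int x0, int y0, int x1, int y1) {
        for (int y = y0; y < y1; ++y)
            for (int x = x0; x < x1; ++x)
                img.pixels[y * img.width + x] =
                    static_cast<uint8_t>(255 - img.pixels[y * img.width + x]);
    }

    // One thread per tile, purely to show the decomposition.
    void invertParallel(Image& img, int tile = 256) {
        std::vector<std::thread> workers;
        for (int y = 0; y < img.height; y += tile)
            for (int x = 0; x < img.width; x += tile)
                workers.emplace_back(invertTile, std::ref(img), x, y,
                                     std::min(x + tile, img.width),
                                     std::min(y + tile, img.height));
        for (auto& t : workers) t.join();   // the "stitching" is just waiting here
    }

    int main() {
        Image img{1024, 768, std::vector<uint8_t>(1024 * 768, 42)};
        invertParallel(img);
    }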

  • by Anonymous Coward on Friday August 22, 2008 @09:32PM (#24714563)

    It's not amazing at all. Most desktop applications are single-threaded because you, the operator, are single-threaded....

    That's a pretty simplistic view. Other than the obvious historical reasons, I believe that most applications are single-threaded because the languages and tools for writing non-trivial, robust multi-threaded applications are lagging far behind the capability to run them.

  • by moozh84 ( 919301 ) on Friday August 22, 2008 @09:38PM (#24714595)
    You won't be locked into an Intel chipset. Obviously NVIDIA will be making chipsets for Nehalem processors. So with Intel processors you will have Intel and NVIDIA chipsets. With AMD processors you will have AMD and NVIDIA chipsets. It won't be much different than it currently is, except most likely VIA will completely drop out of the market in favor of other ventures.
  • Given how closely Apple has worked with Intel before and after the processor switch from PowerPC, I wonder how much more Hyper-Threading-aware OS X 10.6 (aka Snow Leopard) will be. After all, it's supposed to be a "tuning" release focused on full 64-bit performance across the OS, so it wouldn't surprise me to see OS X 10.6 get much greater speed gains from HT on Nehalem than Vista does, especially given Anandtech's description of how Vista screws up Turbo mode [anandtech.com] on Penryn-based systems. (And of course, MS won't go back and put hyperthreading awareness in XP at all...)

  • by Mycroft_VIII ( 572950 ) on Friday August 22, 2008 @10:32PM (#24714915) Journal
    Games, and 3D rendering in general; games are a big, common class of application that can make good use of multi-threading.
    And multiple cores? The OS alone runs many things at once, then you've got your drivers, the applications, the widgets, the viruses (hey, they're processes too, just because some people have a bit of prejudice :)), the BitTorrent client running in the background, and the list goes on.

    Mycroft
  • by Anonymous Coward on Friday August 22, 2008 @11:17PM (#24715269)

    Yeah, that's what Intel thought as well, ten years ago. Many valuable lessons were learnt.
    They're still continuing the Itanium line, I'd guess primarily for the research value and to save face, but I don't think they're particularly eager to face the ridicule they'd get from committing all their mistakes a second time.

  • by AcidPenguin9873 ( 911493 ) on Friday August 22, 2008 @11:34PM (#24715361)

    I'm not sure what you mean by geometries. SRAM arrays, flops, random logic, carry-lookahead adders, Wallace-tree multipliers (building blocks of processors) generally look similar across all high-performance ASICs over the past 15 years. Circuit geometries themselves have almost certainly changed completely since P6 days - 45nm is a hell of a lot smaller than 350nm, and the rules governing how close things can be have almost certainly changed.

    I think what the article really means is that Nehalem shares a lot of the architectural concepts and style of the P6: similar number of pipe stages, similar number of execution units, similar decode/dispatch/execute/retire width (I think Core 2/Penryn/Nehalem are 4 and P6 was 3), similar microcode, etc. Of course enhancements and improvements have been made in things like the branch predictor, load-store unit, and obviously the interconnect/bus...but if you look at Nehalem closely enough, and indeed if you look at Pentium M, Core 2, Penryn too, you can see the architecture of the P6 as an ancestor.

  • by PitaBred ( 632671 ) <slashdot@pitabre d . d y n d n s .org> on Friday August 22, 2008 @11:44PM (#24715435) Homepage

    He's saying that there's no killer application that makes the general user upgrade to the latest and greatest. Gamers, sure, but they're a SMALL minority of computer users. Multi-threading and more cores than we have now don't really do anything for the average person. Until they do, these updates will be received with lukewarm approval. It won't be like the original Pentium again.

  • ECC? (Score:1, Insightful)

    by Anonymous Coward on Saturday August 23, 2008 @12:18AM (#24715613)

    Now that the memory controller will be in the CPU, does that mean they'll enable ECC RAM support for their consumer-level systems, the same way most AMD boards do?

    The idea of using 4GB or more with no error correction just doesn't interest me.

  • by Pulzar ( 81031 ) on Saturday August 23, 2008 @12:24AM (#24715645)

    Intel has money to burn, so they can afford prime-time TV commercials... The question is -- is the return on investment worth it? Your average Joe will buy whatever Dell/HP offers them in the right price range. The ones who are looking for a specific CPU are generally informed enough not to be swayed by TV commercials.

  • by Anonymous Coward on Saturday August 23, 2008 @02:21AM (#24716267)

    As a matter of fact, the technology was called Simultaneous Multithreading (SMT) when it was developed by Digital Equipment and the University of Washington, long before Intel marketeers got their hands on it.

  • by thecheatah ( 977630 ) on Saturday August 23, 2008 @02:51AM (#24716401)
    The problem you describe also applies to having multiple cores. If you read the article you will realize that they have taken MANY steps to prevent this.
    For one, they use DDR3 memory. Another thing is that they have much more intelligent prefetching, combined with the loop-detection feature. The cache size/design itself allows many applications to run.
    The problem you describe is really a problem with the OS's scheduler. It should understand the architecture it is running on; it should know about the types of caches, the way each processor shares them, etc. Thus, it only makes sense to use hyper-threading if 1. you are simply out of cores (the choice of using HT cores is iffy), or 2. a single application has spawned multiple threads. Even then you have to take into account the availability of other cores that share the L2 or L3 cache.
    I personally think that intelligent prefetching and loop detection are things that need more tests/statistics thrown at them.
    Like you say, there are some applications that take advantage of HT; let them take advantage of it while writing smarter OSes that understand the problems with doing so.
    Maybe they need a feedback mechanism from the processor so the OS can understand the best way to schedule tasks.

    I don't know much about CPUs :-p, just what I read and learned in school.
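
    On the scheduler point, the topology information is already exposed on Linux. Here is a rough sketch (C++11, assuming a Linux /sys filesystem) that reads the standard thread_siblings_list files to find out which logical CPUs are HT siblings, which is what a scheduler, or a pinning-aware application, would want to know before doubling up on a physical core:

    #include <fstream>
    #include <iostream>
    #include <string>
    #include <thread>

    int main() {
        unsigned n = std::thread::hardware_concurrency();   // number of logical CPUs
        for (unsigned cpu = 0; cpu < n; ++cpu) {
            std::ifstream f("/sys/devices/system/cpu/cpu" + std::to_string(cpu) +
                            "/topology/thread_siblings_list");
            std::string siblings;
            if (std::getline(f, siblings))
                std::cout << "cpu" << cpu << " shares a core with: " << siblings << "\n";
        }
    }
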
  • by TheRaven64 ( 641858 ) on Saturday August 23, 2008 @07:32AM (#24717451) Journal

    It's not amazing at all. Most desktop applications are single-threaded because you, the operator, are single-threaded. MS Word could enter words on all 100 pages of your book simultaneously, but you aren't able to produce them.

    Absolute nonsense. Most applications have inherently parallel workloads that are implemented in sequential code because context switching on x86 is painfully expensive.

    Consider your example of a word processor. It takes a stream of characters and commands. It runs a spelling checker, and possibly a grammar checker, in the background. It runs a layout and pagination algorithm. Both of these can themselves be subdivided into parallel tasks. If you insert an image, it has to decode that image in the background. Then there's the UI, which updates the view of the document (scrolling and so on) while the model is not being modified.
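
    A rough sketch of that decomposition in C++11, using std::async as a stand-in for a real task system. The spellCheck, paginate, and decodeImage stubs are invented here; only the structure matters: the editing thread hands the work off and keeps accepting keystrokes, collecting each result whenever its future completes.

    #include <cstddef>
    #include <future>
    #include <string>
    #include <vector>

    // Stubs standing in for the real work.
    std::vector<std::size_t> spellCheck(const std::string& text) { return {}; }
    std::vector<std::size_t> paginate(const std::string& text) { return {}; }
    std::vector<unsigned char> decodeImage(const std::string& path) { return {}; }

    // Futures the editing (UI) thread holds on to, so typing is never blocked
    // on spell checking, pagination, or image decoding.
    struct PendingWork {
        std::future<std::vector<std::size_t>> spelling;
        std::future<std::vector<std::size_t>> pages;
        std::future<std::vector<unsigned char>> image;
    };

    PendingWork onDocumentChanged(const std::string& text, const std::string& imagePath) {
        return PendingWork{
            std::async(std::launch::async, spellCheck, text),
            std::async(std::launch::async, paginate, text),
            std::async(std::launch::async, decodeImage, imagePath),
        };
    }

    int main() {
        PendingWork work = onDocumentChanged("Teh quick brown fox", "cover.png");
        work.spelling.get();   // a real UI would poll these instead of blocking
        work.pages.get();
        work.image.get();
    }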

  • by RingDev ( 879105 ) on Saturday August 23, 2008 @11:52AM (#24719067) Homepage Journal

    It's a great idea and all, but you and what market segment are going to buy hundreds of thousands of those chips to offset the R&D and production costs? The existing x86 architecture is universally supported. Many other, better architectures have died on the side of the road because they couldn't get a market segment large enough to support their costs.

    -Rick

  • Re:I for one... (Score:2, Insightful)

    by turgid ( 580780 ) on Saturday August 23, 2008 @12:10PM (#24719187) Journal

    Try again.

    They all have "HyperTransport."

    I have a socket AM2 motherboard (ASUS M2N-SLi Deluxe) which supports quad core Phenoms with a BIOS upgrade. I initially had a socket AM2 single-core Athlon 64 in it.

    The different sockets have to do with memory width (Socket 754 is single-channel). Socket 939 (and 940 for the Opterons) are dual-channel DDR. Socket AM2 is dual-channel DDR2.

    My points were that AMD has had the equivalent of QuickPath (i.e. NUMA with an on-chip memory controller) since 2003, and that I have been served well by a socket AM2 motherboard which has taken me all the way from single-core 64-bit to 4-core 64-bit.

    Can any Intel motherboards do this?
