Intel Details Nehalem CPU and Larrabee GPU

Vigile writes "Intel previewed the information set to be released at IDF next month, including details on a wide array of technology for server, workstation, desktop and graphics chips. The upcoming Tukwila chip will replace the current Itanium lineup, offering roughly twice the performance at a cost of 2 billion transistors, and Dunnington is a hexa-core processor built on the existing Core 2 architecture. Details of Nehalem, Intel's next desktop CPU core, which includes an integrated memory controller, show a return of HyperThreading-like SMT, a new SSE 4.2 extension, and a modular design that optionally puts integrated graphics on the CPU as well. Could Intel beat AMD at its own "Fusion" plans? Finally, Larrabee, the GPU technology Intel is building, was confirmed to support OpenGL and DirectX upon release, and Intel provided information on a new extension to SSE called Advanced Vector Extensions (AVX) that would improve graphics performance on the many-core architecture."
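
The summary mentions the SSE 4.2 and AVX instruction set extensions. As an illustrative aside (not from the article), here is a minimal sketch of how a program can check at run time whether the CPU it runs on reports those features, using the __builtin_cpu_supports helper available in GCC and Clang:

    /* Minimal feature-detection sketch (GCC/Clang only).
     * Checks whether the running CPU advertises SSE 4.2 and AVX;
     * older parts will simply print "no" for both. */
    #include <stdio.h>

    int main(void)
    {
        __builtin_cpu_init();  /* populate the compiler's CPU-feature cache */

        printf("SSE4.2: %s\n", __builtin_cpu_supports("sse4.2") ? "yes" : "no");
        printf("AVX:    %s\n", __builtin_cpu_supports("avx")    ? "yes" : "no");
        return 0;
    }
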
  • Nehalem? Larrabee? (Score:2, Interesting)

    by thomasdz ( 178114 ) on Monday March 17, 2008 @06:35PM (#22778240)
    Heck, I remember when "Pentium" came out and people laughed
  • Re:Intel Vs. AMD? (Score:2, Interesting)

    by WarJolt ( 990309 ) on Monday March 17, 2008 @06:52PM (#22778362)
    Intel has expensive, really fast multi-core processors.
    AMD's 64-bit processing is better. Depending on the type of processing you're doing, that could mean a lot.
    We all know what a debacle Intel's integrated graphics were in the past. I'm not sure they should be using that as a marketing point.
    Since AMD acquired ATI, I would assume AMD's integrated graphics would be far superior.

    NVIDIA's stock price hasn't been doing so well in the last couple of months. Could this mean a return of integrated graphics? I'd bet my money on AMD, which already owns one of the big players.
  • Re:Intel Vs. AMD? (Score:3, Interesting)

    by Joe The Dragon ( 967727 ) on Monday March 17, 2008 @06:55PM (#22778378)
    But AMD has better onboard video, and their new chipset can use SidePort RAM.

    Video on the CPU may be faster, but you are still using the same system RAM, and that is not as fast as the RAM on a video card, which is dedicated and sits on its own bus.
  • by slew ( 2918 ) on Monday March 17, 2008 @06:59PM (#22778410)

    Nehalem? Larrabee?
    Heck, I remember when "Pentium" came out and people laughed

    Heck, I remember when "Itanium" came out and people laughed...

    But before they laughed, I remember a bunch of companies folded up their project tents (Sun, MIPS, the remains of DEC/Alpha). I'm not so sure companies will do the same this time around... I'm not saying Intel doesn't have its ducks in a row this time, but certainly the past is no indication of the future...
  • by Kamokazi ( 1080091 ) on Monday March 17, 2008 @07:14PM (#22778526)
    These are code names, not product names. They will probably all be Core 2(3?), Xeon, etc.
  • Re:Intel Vs. AMD? (Score:4, Interesting)

    by eggnoglatte ( 1047660 ) on Monday March 17, 2008 @07:15PM (#22778534)

    Your post makes me think that Intel will attempt a take-over of Nvidia, hostile or otherwise. But I have no knowledge in this area.
    Why would they? Intel already has the biggest GPU market share (about 50% or so), and they achieve that with integrated graphics, which are arguably the way of the future. My guess is that NVIDIA will become the SGI of the early 21st century: they'll cater to a high-performance niche market. Too bad, actually; I kind of like their cards (and they have by far the best 3D Linux performance).
  • HyperThreading (Score:3, Interesting)

    by owlstead ( 636356 ) on Monday March 17, 2008 @07:19PM (#22778568)
    "Also as noted, a return to SMT is going to follow Nehalem to the market with each core able to work on two software threads simultaneously. The SMT in Nehalem should be more efficient that the HyperThreading we saw in NetBurst thanks to the larger caches and lower latency memory system of the new architecture."

    Gosh, I hope it is more effective, because in my implementations I actually saw a slowdown instead of an advantage. Even then I'm generally not happy with hyper-threading. The OS & Applications simply don't see the difference between two real cores and a hyperthreading core. If I run another thread on a hyperthreading core, I'll slowdown the other thread. This might not always be what you want to see happening. IMHO, the advantage should be over 10/20% for a desktop processor to even consider hyperthreading, and even then I want back that BIOS option so that disables hyperthreading again.

    I've checked and both the Linux and Vista kernel support a large number of cores, so that should not be a problem.

    Does anyone have any information on how well the multi-threading works on the multi-core Sun niagara based processors?
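
    A hedged aside on the point above about the OS not distinguishing real cores from HyperThreading siblings: on Linux the kernel does expose that topology through sysfs, so an application that wants one thread per physical core can restrict itself accordingly. A rough sketch, assuming the usual /sys/devices/system/cpu layout (the parsing and paths here are assumptions for illustration, not anything from the comment):

        /* Sketch: build an affinity mask with one logical CPU per physical core
         * by reading each CPU's thread_siblings_list and keeping a CPU only if
         * it is the lowest-numbered sibling in its own list. Linux-only. */
        #define _GNU_SOURCE
        #include <sched.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            long ncpu = sysconf(_SC_NPROCESSORS_ONLN);
            cpu_set_t set;
            CPU_ZERO(&set);

            for (long cpu = 0; cpu < ncpu; cpu++) {
                char path[128];
                snprintf(path, sizeof path,
                         "/sys/devices/system/cpu/cpu%ld/topology/thread_siblings_list", cpu);
                FILE *f = fopen(path, "r");
                if (!f)
                    continue;
                long first = -1;
                fscanf(f, "%ld", &first);   /* list looks like "0,4" or "0-1" */
                fclose(f);
                if (first == cpu)           /* lowest sibling: treat as the "real" core */
                    CPU_SET(cpu, &set);
            }

            /* Restrict this process to one logical CPU per physical core. */
            if (sched_setaffinity(0, sizeof set, &set) != 0)
                perror("sched_setaffinity");
            return 0;
        }
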
  • by ChronoReverse ( 858838 ) on Monday March 17, 2008 @07:21PM (#22778576)
    Well, it went from Core, to Core 2. I'd presume these new chips would get the "Core 3" moniker.
  • Ummmmm, no (Score:5, Interesting)

    by Sycraft-fu ( 314770 ) on Monday March 17, 2008 @07:26PM (#22778620)
    First off, new integrated Intel chipsets do just fine for desktop acceleration. One of our professors got a laptop with an X3000 chip and it does quite well in Vista. All the eye candy works and is plenty snappy.

    However, this will be much faster since it fixes a major problem with integrated graphics: shared RAM. All integrated Intel chipsets nab system RAM to work. That makes sense, since it keeps costs down, and that is the whole idea behind them. The problem is that it is slow. System RAM is much slower than video RAM. As an example, high-end systems might have a theoretical max RAM bandwidth of 10 GB/sec if they have the latest DDR3. In reality, it is going to be more like 5 GB/sec in the systems that actually ship with integrated graphics. A high-end graphics card can have 10 TIMES that: the 8800 Ultra has a theoretical bandwidth of over 100 GB/sec (the arithmetic is sketched after this comment).

    Well, in addition to the RAM not being as fast, the GPU has to fight with the CPU for access to it. All in all, it means that RAM access is just not fast for the GPU, and that is a major limiting factor in modern graphics. Pushing all those pixels with multiple passes of textures takes some serious memory bandwidth. No problem for a discrete card, of course; it'll have its own RAM just like any other.

    In addition to that, it looks like they are putting some real beefy processing power on this thing.

    As such I expect this will perform quite well. Will it do as well as the offerings from NVIDIA or ATI? Who knows? But this clearly isn't just an integrated chip on a board.
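
    To put rough numbers behind the bandwidth comparison above: theoretical memory bandwidth is just the effective transfer rate times the bus width in bytes. A back-of-the-envelope sketch (the figures are the commonly quoted ones for these parts and are only meant to illustrate the roughly 10x gap the comment describes):

        /* Back-of-the-envelope memory bandwidth: MT/s * (bus width in bytes). */
        #include <stdio.h>

        static double gbps(double mega_transfers, int bus_bits)
        {
            return mega_transfers * 1e6 * (bus_bits / 8.0) / 1e9;  /* GB/s */
        }

        int main(void)
        {
            /* Single-channel DDR3-1333: 1333 MT/s on a 64-bit bus. */
            printf("DDR3-1333, one channel : %5.1f GB/s\n", gbps(1333, 64));
            /* Dual channel doubles the effective bus width. */
            printf("DDR3-1333, two channels: %5.1f GB/s\n", gbps(1333, 128));
            /* GeForce 8800 Ultra: ~2160 MT/s GDDR3 on a 384-bit bus. */
            printf("8800 Ultra GDDR3       : %5.1f GB/s\n", gbps(2160, 384));
            return 0;
        }
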
  • by TheRaven64 ( 641858 ) on Monday March 17, 2008 @07:34PM (#22778686) Journal

    But before they laughed, I remember a bunch of companies folded up their project tents (Sun, MIPS,
    I think you are mistaken. MIPS still exists, but SGI stopped using it. HP killed both PA-RISC and Alpha, but they co-developed Itanium, so it isn't entirely surprising. Sun kept developing chips, and currently holds the performance-per-watt crown for a lot of common web-server tasks.
  • by dosh8er ( 608167 ) <oyamao.gmail@com> on Monday March 17, 2008 @07:36PM (#22778694) Homepage Journal
    ... because I simply _don't_ trust any company with market share as vast as Intel's (yeah, I know, the "Traitorous Eight" [wikipedia.org]). Apparently AMD has had a lot of legal beef with Intel in the past; in fact, they used to be best buds, until Intel snaked AMD out of some business with IBM. I know it's only a matter of time before Intel outdoes AMD in mass sales of processors (especially in the desktop/laptop field... I personally LOVE the power saving on my dual-core; 3.5 hours average on a battery is GREAT for the powerhouse that it is), but what can AMD do? Merge with ATI... oops, already been done. So is AMD restricted to the GPU market for the rest of its (profitable) life?

    I can see this going two ways:
    1) Intel forces AMD out of business. AMD ends up liquidating its stock/technology to foreign companies (read: outside the USA).
    2) AMD brings an antitrust case against Intel for 'unfair practices' or some such (IANAL).

    However, there is ALWAYS the possibility that Intel pulls another Pentium bug [wikipedia.org]. Remember the mid-to-late 90s? (God, how _could_ we _forget_ the 90s!?) Either way, AMD needs to diversify their R&D and/or look for more lucrative business opportunities (whatever that means), or (the winner, IMHO) work with IBM on this power-saving crusade.

    What do you think, Slashdot crowd?
  • Re:HyperThreading (Score:3, Interesting)

    by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Monday March 17, 2008 @07:57PM (#22778824) Homepage Journal
    This is why I think it would be better to have virtual cores and physical hyperthreading. You have as many compute elements as possible, all of which are available to all virtual cores. The number of virtual cores presented could be set equal to the number of threads available, equal to the number of register sets the processor could describe in internal memory, or to some number decided by some other aspect of the design. Each core would see all compute elements and would use them as needed for out-of-order operations. The primary idea would be to hide the multithreading of the chip from the OS, yet still take advantage of being able to multithread. In addition, if one core can't exploit multiple threads but another core can exploit many, then you don't waste compute elements or slow things down by not making resources available.

    (Since a compute element is designed for one specific task, you end up with a maximum number of supportable virtual cores equal to the number of pools times the number of elements in each pool. The minimum number of cores would be determined by the maximum number of threads generated by any instruction supported. If the CPU was really smart, it could "hotplug" CPUs to increase and reduce the number of cores that appear to the operating system, so that if there is heavy, sustained use of the threading, the CPU doesn't try to overcommit resources. The operating-system side of such hotplugging is sketched after this comment.)
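
    As a hedged aside on the "hotplug" idea above: Linux already lets software take logical CPUs offline and bring them back through sysfs, which is roughly the OS-visible mechanism such a scheme would have to drive. A minimal sketch, assuming root privileges and the standard sysfs layout (nothing here comes from the comment itself):

        /* Sketch: take a logical CPU offline (or back online) by writing to
         * /sys/devices/system/cpu/cpuN/online. Requires root; cpu0 usually
         * cannot be taken offline. */
        #include <stdio.h>
        #include <stdlib.h>

        static int set_cpu_online(int cpu, int online)
        {
            char path[96];
            snprintf(path, sizeof path, "/sys/devices/system/cpu/cpu%d/online", cpu);
            FILE *f = fopen(path, "w");
            if (!f) {
                perror(path);
                return -1;
            }
            fprintf(f, "%d\n", online ? 1 : 0);
            return fclose(f) == 0 ? 0 : -1;
        }

        int main(int argc, char **argv)
        {
            if (argc != 3) {
                fprintf(stderr, "usage: %s <cpu> <0|1>\n", argv[0]);
                return 1;
            }
            return set_cpu_online(atoi(argv[1]), atoi(argv[2])) ? 1 : 0;
        }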

  • Re:Intel Vs. AMD? (Score:3, Interesting)

    by 0111 1110 ( 518466 ) on Monday March 17, 2008 @07:59PM (#22778832)

    without AMD we wouldn't be seeing these releases.
    Actually this seems a bit disingenuous to me. Intel released Penryn way before they had to. Intel (the hare) was so far ahead of AMD (the tortoise) with the 65nm Core 2 that they could have sat back and relaxed for a while, saving R&D costs while waiting for AMD to catch up at least a little. I mean look at Nvidia for a perfect counterexample. Most people believe that they already have a next gen GPU ready but that they are sitting on it until they have someone to compete with besides themselves. To a lot of people that seems to make sense. Especially if you *only* care about making as much money as possible and don't care about being a technology leader. The only problem I have with that logic is that you will be losing sales from upgraders as well as allowing your competition to get closer to you so that you cannot price as high. But obviously Nvidia seems to feel that the savings in R&D costs and not competing with their own products is enough to justify it. Of course there is always the possibility that Nvidia is just not ready with their new tech yet, but not many people seem to believe that.

    Instead of waiting for some real competition, Intel released Penryn more or less right on schedule when the only competition they had was their own 65nm processors. Of course the quad cores are only just being released now, but they are still releasing them way before AMD has anything to really compete with them. People make all kinds of cynical statements about business methods without even considering corporate culture. Has it ever occurred to anyone that Intel simply may not believe in only releasing new tech when they absolutely have no choice due to competition?

    I'm not saying competition is not a good thing, but I don't think AMD is presenting much competition to Intel at the moment. AMD is in big trouble, and Intel is well aware of that fact. I just don't think it is competition that is driving Intel forward. Competition may affect their pricing, but I think Intel would keep right on with their two-year tick-tock cycle of new architectures and process shrinks even if AMD folded tomorrow.
  • Re:Intel Vs. AMD? (Score:3, Interesting)

    by Joe The Dragon ( 967727 ) on Monday March 17, 2008 @08:42PM (#22779134)
    With things like Vista, do you really want to give up 128-256 MB of system RAM, plus the bandwidth needed for it, just for Aero? The on-chip video should have SidePort RAM like the new AMD chipset can use, or maybe 32 MB or more of on-chip RAM/cache for video. Even just having a DDR2/3 slot or slots with their own channels would be better than sharing the ones used for system RAM, though the RAM on video cards is faster still.

    Also, console games don't have mods, user maps, and other add-ons, and they don't have that many free games either.

    You don't have many MMORPGs on them, and the Xbox is pay-to-play online versus free on the PC and PS3.
  • by markass530 ( 870112 ) <markass530@NOspAm.gmail.com> on Monday March 17, 2008 @09:05PM (#22779266) Homepage
    I say this as an admitted AMD fanboy, and in hopes that they can make a comeback to once again force Intel into a frenzy of research and development. I can't help but imagine that AMD execs are saying something along the lines of Isoroku Yamamoto's famous post-Pearl Harbor quote: "I fear that all we have done is to awaken a sleeping giant." It's all gravy for consumers, so one can't help but be happy at the current developments. To ensure future happiness for consumers, however, one must also hope for an AMD comeback.
  • by dpokorny ( 241008 ) on Monday March 17, 2008 @10:22PM (#22779680)
    Effectively all of Intel's chipsets support dual digital outputs. Many mobile chipsets support 5+ unique outputs. Just take a look at the spec sheets available at developer.intel.com. It's a question for the motherboard manufacturers: they need to put one or more sDVO transmitters on the motherboard to support the physical DVI connectors.

    There is a standard called ADD+ that allows you to connect the transmitters via an AGP or PCIe card; however, given that drivers are validated with specific transmitters, it's unusual to find ADD+ cards outside of driver development groups or validation teams.

    However, if you can find an ADD+ card with a pair of common transmitters such as the Chrontel CH7307, then you can get your dual DVI outputs.

    (Not speaking as an official representative of Intel Corporation)
  • by donglekey ( 124433 ) on Monday March 17, 2008 @11:52PM (#22780102) Homepage
    Very interesting, and I think you are right on the money. 'Graphics' is what gets accelerated now, but the future may be more about generalized stream computing that can be used for graphics (or physics, or sound, etc.), similar to the G80 and even the PS3's Cell (which was originally meant to avoid the need for a graphics card at all). This is why John Carmack thinks volumetrics may have a place in future games, why David Kirk thinks that some ray tracing could be used (not much, but don't worry, it wouldn't really bring that much to the game anyway), and why Ageia created a company built to be sold before Intel, AMD/ATI, and Nvidia got into the stream processing business and beat them at their own game. Imagine all the what-ifs you can think of in the video game world, and they will start to become plausible over the next decade (but forget about ray tracing; it wouldn't be a good use of power even at 100x the speed we have now).
  • by Hal_Porter ( 817932 ) on Tuesday March 18, 2008 @03:38AM (#22780836)
    Why are they called Arabic anyway? The only justification is that al-Khwarizmi wrote a book popularising them in 825 AD [wikipedia.org], but they were actually invented centuries beforehand in India [wikipedia.org].

    They weren't even used in the Arab world until modern times -

    http://en.wikipedia.org/wiki/History_of_the_Hindu-Arabic_numeral_system [wikipedia.org]

    In the Arab world - until modern times - the Arabic numeral system was used only by mathematicians. Muslim scientists used the Babylonian numeral system, and merchants used the Abjad numerals. It was not until Fibonacci that the Arabic numeral system was used by a large population.
