Intel Hardware

Intel Details Nehalem CPU and Larrabee GPU

Vigile writes "Intel previewed the information set to be released at IDF next month, including details on a wide array of technology for server, workstation, desktop and graphics chips. The upcoming Tukwila chip will replace the current Itanium lineup with about twice the performance at a cost of 2 billion transistors, and Dunnington is a hexa-core processor using the existing Core 2 architecture. Details of Nehalem, Intel's next desktop CPU core that includes an integrated memory controller, show a return of HyperThreading-like SMT, a new SSE 4.2 extension and a modular design that features optional integrated graphics on the CPU as well. Could Intel beat AMD in its own "Fusion" plans? Finally, Larrabee, the GPU technology Intel is building, was verified to support OpenGL and DirectX upon release, and Intel provided information on a new extension called Advanced Vector Extensions (AVX) for SSE that would improve graphics performance on the many-core architecture."
This discussion has been archived. No new comments can be posted.

  • Nehalem? Larrabee? (Score:2, Interesting)

    by thomasdz ( 178114 )
    Heck, I remember when "Pentium" came out and people laughed
    • by TechyImmigrant ( 175943 ) * on Monday March 17, 2008 @05:38PM (#22778270) Homepage Journal
      They are code names, not product names.

      Intel has a rich collection of silly code names.

    • by slew ( 2918 ) on Monday March 17, 2008 @05:59PM (#22778410)

      Nehalem? Larrabee?
      Heck, I remember when "Pentium" came out and people laughed

      Heck, I remember when "Itanium" came out and people laughed...

      But before they laughed, I remember a bunch of companies folded up their project tents (Sun, MIPS, the remains of DEC/Alpha). I'm not so sure companies will do the same this time around... I'm not saying Intel doesn't have its ducks in a row this time, but certainly the past is no indication of the future...
      • by TheRaven64 ( 641858 ) on Monday March 17, 2008 @06:34PM (#22778686) Journal

        But before they laughed, I remember a bunch of companies folded up their project tents (Sun, MIPS,
        I think you are mistaken. MIPS still exists, but SGI stopped using it. HP killed both PA-RISC and Alpha, but they co-developed Itanium, so it isn't entirely surprising. Sun kept developing chips, and currently holds the performance-per-watt crown for a lot of common web-server tasks.
      • Just for the record, Sun canceled their last product line because they flailed on completing it in a timely fashion, and by the time it came out it would have been dramatically outdated already. So instead they canned it; brought out broader versions of existing chips rather than deeper, new processors; and built x86-64-based systems in the meantime.
      • Heck, I remember when "Itanium" came out and people laughed...
        suddenoutbreakofcommonsense ...
      • by TheLink ( 130905 )
        Itanium? People laughed and started calling it the Itanic. Many still do, some might also call it an EPIC failure. It's only good for very niche applications. You're usually better off using an x86 (from AMD or Intel) or IBM POWER.

        The x86 is still king. It may be as ugly as a pig with a rocket strapped on, but it still flies faster than those elegant RISC eagles.

        While IBM's POWER stuff might be faster, it sure doesn't look like a RISC anymore - definitely not a very "reduced instruction set" :). My bet is comple
    • Re: (Score:3, Interesting)

      by Kamokazi ( 1080091 )
      These are code names, not product names. They will probably all be Core 2(3?), Xeon, etc.
  • Intel Vs. AMD? (Score:4, Insightful)

    by Naughty Bob ( 1004174 ) on Monday March 17, 2008 @05:38PM (#22778268)

    Could Intel beat AMD in its own "Fusion" plans?
    Intel is hugely ahead of AMD at this point; however, without AMD we wouldn't be seeing these releases. Hurray for the market, I guess...
    • Intel is hugely ahead of AMD at this point; however, without AMD we wouldn't be seeing these releases. Hurray for the market, I guess...

      Hell yeah! Without AMD, we'd all be on x86 technology. Although, there is/was Motorola. Wouldn't it be nice to run multiple time line(s) scenarios?

    • So far AMD's Phenom processor is not outperforming the Core 2 Quads as expected, but at the same time AMD has set up an architecture that will make it easier to expand to more cores.

      This could give AMD an advantage once we go beyond quad cores; however, I am sure Intel is hard at work to make sure they stay in the lead.
    • Re: (Score:2, Interesting)

      by WarJolt ( 990309 )
      Intel has expensive, really fast multi-core processors.
      AMD's 64-bit processing is better. Depending on the type of processing you're doing, that could mean a lot.
      We all know what a debacle Intel's integrated graphics were in the past. I'm not sure if they should be using that as a marketing point.
      Since AMD acquired ATI, I would assume AMD's integrated graphics will be far superior.

      NVIDIA's stock price hasn't been doing so well in the last couple of months. Could this mean a return of integrated graphics? I'd bet my m
      • I am nobody's fanboy, but I kinda believe that if/when the transition to 64-bit picks up speed, Intel will miraculously produce something fairly crushing.

        Your post makes me think that Intel will attempt a take-over of Nvidia, hostile or otherwise. But I have no knowledge in this area.
    • Re: (Score:3, Interesting)

      But AMD has better onboard video, and their new chipset can use side port RAM.

      Video on the CPU may be faster, but you are still using the same system RAM, and that is not as fast as the RAM on a video card, which has that RAM to itself.
      • Re: (Score:3, Insightful)

        Video on the CPU may be faster, but you are still using the same system RAM, and that is not as fast as the RAM on a video card, which has that RAM to itself.

        Nobody could argue against that, but the two approaches solve different problems currently. If the drift is towards an all in one solution, then the drift is towards less capable, but cheaper tech. Most gamers are console gamers, perhaps the chip makers are coming to the conclusion that dedicated GPUs for the PC are a blind alley (a shame IMHO).

        • Re: (Score:3, Interesting)

          With things like Vista, do you really want to give up 128-256 MB of system RAM plus the bandwidth needed for that just for Aero? The on-chip video should have side port RAM like the new AMD chip can use, or maybe have 32 MB+ of on-chip RAM/cache for video. Just having a DDR2/3 slot or slots with their own channels would be better than using the same ones that are used for system RAM, but the RAM on video cards is faster than that.

          Also, console games don't have mods / user maps and other add-ons; they also don't have
          • I just don't think the lack of side port RAM is a deal breaker, is all. It would be nice, but system RAM is cheap, and it looks like the cool developments are in other directions. I'd be happy to be proved wrong.

            True enough about the restricted nature of console gaming, but don't expect that to inform 'Big Silicon' in its future decisions. Money is their only friend.
          • In 2009, even the lowest-end laptops are gonna have 2 GB of RAM. 128 MB for video is small potatoes for a consumer machine. Sure, you could add some extra RAM on the side, but that's $20 on a $800 machine (which sounds small, but is a decent slice of the profits). And adding 32 MB on die for video wouldn't help free up bandwidth or all that much RAM (because it's not large enough to not overflow into main RAM, and it's got to get to the Display Adapter somehow either way).

            The reason Intel has so much of
            • AMD motherboards are cheaper, and you can get a good mid-range to high-end AMD board for the same price as a low- to mid-range Intel one.
    • Re: (Score:3, Interesting)

      by 0111 1110 ( 518466 )

      without AMD we wouldn't be seeing these releases.

      Actually this seems a bit disingenuous to me. Intel released Penryn way before they had to. Intel (the hare) was so far ahead of AMD (the tortoise) with the 65nm Core 2 that they could have sat back and relaxed for a while, saving R&D costs while waiting for AMD to catch up at least a little. I mean look at Nvidia for a perfect counterexample. Most people believe that they already have a next gen GPU ready but that they are sitting on it until they have someone to compete with besides themselves. To a

      • Re: (Score:3, Insightful)

        It is only about the money. All decisions ultimately come back to that. With Penryn, huge fabricating plants were coming online, and they couldn't have justified (to shareholders) not following through. That it kept Intel's jackboot firmly on the AMD windpipe was in that instance a happy sweetener.
      • Actually this seems a bit disingenuous to me. Intel released Penryn way before they had to. Intel (the hare) was so far ahead of AMD (the tortoise) with the 65nm Core 2 that they could have sat back and relaxed for a while, saving R&D costs while waiting for AMD to catch up at least a little. I mean look at Nvidia for a perfect counterexample. Most people believe that they already have a next gen GPU ready but that they are sitting on it until they have someone to compete with besides themselves.

        There

      • Think further back. A few years ago, the Opteron and the Athlon64 took a big bite out of Intel's market share. That happened because Intel was arrogantly chasing higher clocks (P4) and awkward architectures (Itanium). The tick-tock strategy was adopted in response to AMD's success. If AMD hadn't embarrassed Intel so badly, I doubt we'd be seeing such rapid product cycles today.

        Though, 45nm processors are currently in short supply. They're usually sold out, and are marked up considerably.

        http://techreport.co [techreport.com]
        • by oni ( 41625 )
          Intel was arrogantly chasing higher clocks (P4) and awkward architectures (Itanium).

          Indeed. And furthermore, for a very long time Intel was avoiding actual innovation and instead just arbitrarily segmenting the market. For example, back in the 1990s, they were selling 486SX chips. To make a 486SX, Intel had to manufacture a 486DX and then go through the extra step of disabling the math coprocessor. In spite of the fact that they took that extra step - thus necessarily increasing the manufacturing cost,
          • The SX issue could have been entirely due to yield. When making 486DX-33 parts, maybe only 10% of the dies will run at that speed. In some, the FP unit will be broken but the chip can still run at 33 MHz, so you brand it a 486SX-33. In some, it won't even run at 33 MHz, so you sell it as a 486SX-25 or 486SX-16. They still do it today, but you'll never see another general-purpose CPU sold without a working FP unit. Too many businesses running Excel and too many gamers who depend on them.

    • "without AMD we wouldn't be seeing these releases."

      No, without a demand for these advances, competition would exist only to lower prices, but because this demand exists, the competition also includes innovation. If AMD weren't in the running, some other company or companies would be. Hurray for the market being properly represented.
  • by immcintosh ( 1089551 ) <slashdot@ianmcin ... inus threevowels> on Monday March 17, 2008 @06:05PM (#22778464) Homepage
    So, this Larrabee, will it be another example of integrated graphics that "supports" all the standards while being too slow to be useful in any practical situation, even basic desktop acceleration (Composite / Aero)? If so, I've gotta wonder why they even bother rather than saving some cash and just making a solid 2D accelerator that would be for all intents and purposes functionally identical.
    • by frieko ( 855745 )
      Intel GMA950 does compositing just fine in Leopard. Compiz works plenty fast on it too, though it's buggy as hell.
    • by GXTi ( 635121 )
      What they need to do is make some discrete graphics cards. They seem to have a clue when it comes to making their hardware easy to work with from Linux; if only the cards had more horsepower they'd be a favorite in no time.
    • by Kamokazi ( 1080091 ) on Monday March 17, 2008 @06:19PM (#22778566)
      No, far, far from integrated garbage. Larrabee will actually have uses as a supercomputer CPU:

      "It was clear from Gelsinger's public statements at IDF and from Intel's prior closed-door presentations that the company intends to see the Larrabee architecture find uses in the supercomputing market, but it wasn't so clear that this new many-core architecture would ever see the light of day as an enthusiast GPU. This lack of clarity prompted me to speculate that Larrabee might never yield a GPU product, and others went so far as to report "Larrabee is GPGPU-only" as fact.

      Subsequent to my IDF coverage, however, I was contacted by a few people who have more intimate knowledge of the project than I. These folks assured me that Intel definitely intends to release a straight-up enthusiast GPU part based on the Larrabee architecture. So while Intel won't publicly talk about any actual products that will arise from the project, it's clear that a GPU aimed at real-time 3D rendering for games will be among the first public fruits of Larrabee, with non-graphics products following later.

      As for what type of GPU Larrabee will be, it's probably going to have important similarities to what we're seeing out of NVIDIA with the G80. Contrary to what's implied in this Inquirer article, GPU-accelerated raster graphics are here to stay for the foreseeable future, and they won't be replaced by real-time ray-tracing engines. Actually, it's worthwhile to take a moment to look at this issue in more detail."

      Shamelessly ripped from:

      http://arstechnica.com/articles/paedia/hardware/clearing-up-the-confusion-over-intels-larrabee.ars/2 [arstechnica.com]
       
      • by Vigile ( 99919 ) *
        You can also see the validity of, but debate around, Larrabee here in an interview with John Carmack: http://games.slashdot.org/article.pl?sid=08/03/12/1918250&from=rss [slashdot.org]
      • Re: (Score:3, Interesting)

        by donglekey ( 124433 )
        Very interesting and I think you are right on the money. 'Graphics' is accelerated now, but the future may be more about generalized stream computing that can be used for graphics (or physics, or sound, etc) similar to the G80 and even the PS3's Cell (they originally were going to try to use it to avoid having a graphics card at all). This is why John Carmack thinks volumetrics may have a place in future games, why David Kirk thinks that some ray tracing could be used (not much, but don't worry it wouldn'
        • by TheLink ( 130905 )
          The Real Thing is too slow and will be too slow. The way I see it, everyone will still have to use "tricks".

          Carmack has had a good track record of figuring out nifty tricks that current popular tech, or at least the near-cutting-edge tech, can achieve. I remember just barely managing to play the first Doom on a 386SX; it sure looked a lot better than the other stuff out there.

          He used tricks for Commander Keen, Wolfenstein 3D, Doom (a 2D game with some 3D), and so on.

          Carmack's engines tend to do pretty decent
      • by renoX ( 11677 )
        >No, far, far, from integrated garbage. Larrabee will actually have uses as a supercomputer CPU:

        Yes, if AVX includes 256-bit = 4x64-bit FPU calculations with reasonable performance, I can imagine many computer scientists drooling over this...
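        To make the 4x64 point concrete, here is a minimal C sketch of that kind of 256-bit packed-double operation, using the intrinsics that eventually shipped for AVX (assumes a compiler with AVX support, e.g. gcc -mavx; the array values are purely illustrative):

        /* Minimal sketch: one 256-bit register holds four doubles,
           so one instruction does four double-precision multiplies. */
        #include <immintrin.h>
        #include <stdio.h>

        int main(void)
        {
            double a[4] = {1.0, 2.0, 3.0, 4.0};
            double b[4] = {10.0, 20.0, 30.0, 40.0};
            double c[4];

            __m256d va = _mm256_loadu_pd(a);    /* load four doubles into a 256-bit register */
            __m256d vb = _mm256_loadu_pd(b);
            __m256d vc = _mm256_mul_pd(va, vb); /* four multiplies in a single instruction */
            _mm256_storeu_pd(c, vc);

            printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);
            return 0;
        }

        Four double-precision operations per instruction is exactly the 4x64 FPU width being talked about here.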
    • Ummmmm, no (Score:5, Interesting)

      by Sycraft-fu ( 314770 ) on Monday March 17, 2008 @06:26PM (#22778620)
      First off, the new integrated Intel chipsets do just fine for desktop acceleration. One of our professors got a laptop with an X3000 chip and it does quite well in Vista. All the eye candy works and is plenty snappy.

      However, this will be much faster since it fixes a major problem with integrated graphics: shared RAM. All integrated Intel chipsets nab system RAM to work. Makes sense; this keeps costs down, and that is the whole idea behind them. The problem is that it is slow. System RAM is much slower than video RAM. As an example, high-end systems might have a theoretical max RAM bandwidth of 10 GB/sec if they have the latest DDR3. In reality, it is going to be more along the lines of 5 GB/sec in systems that have integrated graphics. A high-end graphics card can have 10 TIMES that. The 8800 Ultra has a theoretical bandwidth of over 100 GB/sec.

      Well, in addition to the RAM not being as fast, the GPU has to fight with the CPU for access to it. All in all, it means that RAM access is just not fast for the GPU. That is a major limiting factor in modern graphics. Pushing all those pixels with multiple passes of textures takes some serious memory bandwidth. No problem for a discrete card, of course; it'll have its own RAM just like any other (rough numbers are sketched at the end of this comment).

      In addition to that, it looks like they are putting some real beefy processing power on this thing.

      As such I expect this will perform quite well. Will it do as well as the offerings from nVidia or ATi? Who knows? But this clearly isn't just an integrated chip on a board.
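      For the bandwidth gap described above, the arithmetic is just bus width (in bytes) times effective transfer rate times channels. A small sketch with the commonly quoted figures (treat them as illustrative only):

      /* Back-of-the-envelope memory bandwidth figures for the comparison above. */
      #include <stdio.h>

      /* GB/s = (bus bits / 8) * transfers per second * channels / 1e9 */
      static double gbps(double bus_bits, double transfers_per_sec, int channels)
      {
          return (bus_bits / 8.0) * transfers_per_sec * channels / 1e9;
      }

      int main(void)
      {
          printf("DDR3-1333, one 64-bit channel: %.1f GB/s\n", gbps(64, 1333e6, 1));
          printf("8800 Ultra, 384-bit GDDR3:     %.1f GB/s\n", gbps(384, 2160e6, 1));
          return 0;
      }

      That works out to roughly 10.7 GB/s versus 103.7 GB/s, which matches the factor-of-ten claim, and the integrated GPU still has to share its slice with the CPU.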
      • by Quarters ( 18322 )
        X3000 is an ATi part. It's a daughter board with a GPU and dedicated RAM, not an integrated solution.
    • Larrabee could be more of a hedge against IBM's Cell and Nvidia's GPUs for high computational workloads, with the addition of graphics being a page from Nvidia's book to try to get gamers to fund their HPC conquests.
    • Part of the slowdown comes from having to use system RAM. At least AMD got that right by letting their new chipset with built-in video use side port RAM. There are motherboards with it coming soon, and it should be nice to see how much of a speedup that gives you, both with just onboard video alone and with Hybrid CrossFire.
  • HyperThreading (Score:3, Interesting)

    by owlstead ( 636356 ) on Monday March 17, 2008 @06:19PM (#22778568)
    "Also as noted, a return to SMT is going to follow Nehalem to the market with each core able to work on two software threads simultaneously. The SMT in Nehalem should be more efficient that the HyperThreading we saw in NetBurst thanks to the larger caches and lower latency memory system of the new architecture."

    Gosh, I hope it is more effective, because in my implementations I actually saw a slowdown instead of an advantage. Even then I'm generally not happy with hyper-threading. The OS & applications simply don't see the difference between two real cores and a hyperthreading core. If I run another thread on a hyperthreading core, I'll slow down the other thread. This might not always be what you want to see happening. IMHO, the advantage should be over 10-20% for a desktop processor to even consider hyperthreading, and even then I want that BIOS option that disables hyperthreading back (a pinning workaround is sketched at the end of this comment).

    I've checked and both the Linux and Vista kernel support a large number of cores, so that should not be a problem.

    Does anyone have any information on how well the multi-threading works on the multi-core Sun Niagara-based processors?
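    As a workaround on Linux, you can pin a latency-sensitive thread to a specific logical CPU so it does not end up sharing a physical core with another of your threads. A minimal sketch (which logical CPU ids are SMT siblings is machine-specific; on kernels that expose it, /sys/devices/system/cpu/cpu0/topology/thread_siblings_list shows the pairing, and CPU 0 below is just an example):

    /* Minimal sketch: pin the calling thread to one logical CPU. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);   /* logical CPU 0, chosen only for the example */

        /* pid 0 means "the calling thread" */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to logical CPU 0\n");
        return 0;
    }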
    • Re: (Score:3, Interesting)

      by jd ( 1658 )
      This is why I think it would be better to have virtual cores and physical hyperthreading. You have as many compute elements as possible, all of which are available to all virtual cores. The number of virtual cores presented could be set equal to the number of threads available, equal to the number of register sets the processor could describe in internal memory, or to some number decided by some other aspect of the design. Each core would see all compute elements, and would use them as needed for out-of-ord
    • by Yenya ( 12004 )

      The OS & applications simply don't see the difference between two real cores and a hyperthreading core. If I run another thread on a hyperthreading core, I'll slow down the other thread.

      Wrong. The Linux scheduler can distinguish between two real cores and a hyperthreading core (i.e. it prefers to run the threads on independent cores). The Linux scheduler can also take core-to-socket mapping into consideration (it prefers to run threads on cores in a single socket in order to allow the other sockets to lower the

    • Gosh, I hope it is more effective, because in my implementations I actually saw a slowdown instead of an advantage. Even then I'm generally not happy with hyper-threading. The OS & applications simply don't see the difference between two real cores and a hyperthreading core. If I run another thread on a hyperthreading core, I'll slow down the other thread. This might not always be what you want to see happening. IMHO, the advantage should be over 10-20% for a desktop processor to even consider hyperthrea
      • I hope you are right. My code was 8-way multi-threaded (9-way if you count the admin thread, but that was using almost no cycles - sleeping most of the time). Slowdowns whatever way I used it (Java using real threads, cryptographic uses). OK, the chance of using any FPU or SSE instructions was close to zero, of course. But really, what kind of program would use the FPU and the integer units fully at the same time?
  • While these processors may end up being great, they may very well push AMD over the edge, if you consider that AMD's new processors get clobbered by Intel's old processors. Unless AMD pulls a rabbit out of their hat by the end of the year, this may be either the last innovation Intel makes for a while, or the last affordable one. As consumers we owe AMD a vote of thanks for driving Intel to the level they are at now.
  • I can't even find the clock speed in that article, which means we're STILL probably stuck at 3.5 GHz +/- .5 GHz, where we've been stuck for what, three, four years? What the hell happened? If we're still shrinking components, why are we not seeing clock speed increases?
      I can't even find the clock speed in that article, which means we're STILL probably stuck at 3.5 GHz +/- .5 GHz, where we've been stuck for what, three, four years? What the hell happened? If we're still shrinking components, why are we not seeing clock speed increases?

      Intel's current designs are basically focusing on what I'd consider horizontal scaling instead of vertical. That is, they are increasing the number of cores, which run at a lower frequency, instead of raising the clock speed. In addition, they run cooler. You aren't losing ground. If the Core 2 Duos weren't more efficient and didn't provide better performance, then Intel wouldn't be beating AMD's ass with them. You now have up to 4 cores in a single package, each running at 2-3 GHz (not sure the exact number for t

    • by djohnsto ( 133220 ) <dan.e.johnston@noSPam.gmail.com> on Monday March 17, 2008 @06:49PM (#22778772) Homepage
      Because power generally increases at a rate of frequency^3 (that's cubed). Adding more cores generally increases power linearly.

      For example, let's start with a single-core Core 2 @ 2 GHz. Let's say it uses 10 W (not sure what the actual number is).

      Running it at twice the frequency results in a (2^3) = 8x power increase. So, we can either have a single-core 4 GHz Core 2 at 80 W, or we can have a quad-core 2 GHz Core 2 at 40 W. Which one makes more sense?
      • Well, performance-wise, a single-core 4GHz Core 2 makes more sense.

        We still don't have much software that can really take advantage of multiple cores. A single core running at 4GHz is going to be MUCH faster on almost every benchmark than 2 cores running at 2GHz each.

        But, it doesn't matter. Multi-cores are the future, and we need to figure out a way to take advantage of them.
      • by mczak ( 575986 )
        If you write it like that, it's not true. Power generally increases linearly with frequency. A CPU running at 4 GHz will use twice as much power as the same CPU running at 2 GHz (actually slightly less than twice, since the leakage is the same).
        The problem is that to achieve twice the frequency (for the same CPU), you likely need to increase the voltage (increasing voltage increases power at a rate of voltage^2), and there is only so much you can increase the voltage... If you'd design the cpu to reach h
      • by Prune ( 557140 )
        The single core still makes more sense, because most computations other than graphics are not easily (or at all) parallelizable.
        Instead, what is needed when power becomes excessive is simply a shift to a newer technology--and there are many options, so there's no fundamental issue, just a monetary one, and so any possible profits will be milked from the slow silicon substrate for as long as possible, even if progress is slowed down because of it.
      • by 5pp000 ( 873881 )

        power generally increases at a rate of frequency^3

        No, power is linear in clock frequency, and quadratic in voltage. References are easy to find on the Web; here's one [poly.edu].

      • NOT TRUE, PLEASE MOD DOWN PARENT.

        DYNAMIC POWER = FREQUENCY * CAPACITIVE LOAD * VOLTAGE^2

        The above ignores leakage, but as another poster mentioned, that is not related to frequency. Leakage actually scales LINEARLY with the device voltage.

        Adding more cores DOES increase power linearly, but the frequency^3 comment is completely off-base. The worst offender is actually voltage, which adds quadratic dynamic power and linear leakage power. As you raise the frequency, power consumption can increase even more
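        For what it's worth, both sides of this sub-thread fit the usual first-order model P_dynamic = C * V^2 * f: at a fixed voltage, power scales linearly with frequency, and the "frequency cubed" figure upthread comes from the extra assumption that voltage has to be raised roughly in proportion to frequency. A small sketch with made-up numbers (leakage ignored):

        #include <stdio.h>

        /* dynamic power: switched capacitance * voltage^2 * frequency */
        static double dynamic_power(double cap, double volts, double freq_hz)
        {
            return cap * volts * volts * freq_hz;
        }

        int main(void)
        {
            double C = 1e-9;  /* effective switched capacitance, purely illustrative */

            double base     = dynamic_power(C, 1.0, 2e9); /* 2 GHz at 1.0 V */
            double same_v   = dynamic_power(C, 1.0, 4e9); /* 2x frequency, same voltage: 2x power */
            double scaled_v = dynamic_power(C, 2.0, 4e9); /* 2x frequency and 2x voltage: 8x power */

            printf("baseline %.1f W, double f %.1f W, double f and V %.1f W\n",
                   base, same_v, scaled_v);
            return 0;
        }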
    • by TheSync ( 5291 ) * on Monday March 17, 2008 @07:39PM (#22779102) Journal
      1) We've hit the "Power Wall", power is expensive, but transistors are "free". That is, we can put more transistors on a chip than we have the power to turn on.

      2) We also have hit the "Memory Wall", modern microprocessors can take 200 clocks to access DRAM, but even floating-point multiplies may take only four clock cycles.

      3) Because of this, processor performance gain has slowed dramatically. In 2006, performance is a factor of three below the traditional doubling every 18 months that occurred between 1986 and 2002.

      To understand where we are, and why the only way to go now is parallelism rather than clock speed increases, see The Landscape of Parallel Computing Research: A View from Berkeley [berkeley.edu].

  • by dosh8er ( 608167 )
    ... because I simply _don't_ trust any company/companies with market share as vast as Intel (yeah, I know, the "Traitorous Eight" [wikipedia.org]). Apparently, AMD has had a lot of legal beef with Intel in the past, in fact, they used to be best buds, until Intel snaked AMD from some business with IBM. I know it's only a matter of time before Intel outwits AMD in the mass sales of proc.'s (esp. in the desktop/laptop field... I personally LOVE the power-saving on my Dual-Core... 3.5 Hrs avg. on a battery is GREAT for the po
    • The Pentium Bug isn't going to happen again. Or rather, it still happens but it doesn't matter.

      Since the Pentium, all Intel (and AMD) processors have used microcode. That is, there is a layer of abstraction between machine code that the processor executes and the actual electronic logic on the chip. It's a layer between the physical processor and Assembly. What it allows you to do is provide bug fixes for processor design errors. It's slightly slower because it's an extra decode operation, but it allow
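      One easy way to see that layer at work is to look at the microcode revision the kernel has loaded. A minimal Linux sketch (assumes a kernel that reports a "microcode" field in /proc/cpuinfo):

      #include <stdio.h>
      #include <string.h>

      int main(void)
      {
          FILE *f = fopen("/proc/cpuinfo", "r");
          char line[256];

          if (!f) {
              perror("/proc/cpuinfo");
              return 1;
          }
          while (fgets(line, sizeof(line), f))
              if (strncmp(line, "microcode", 9) == 0)  /* print the loaded revision per CPU */
                  fputs(line, stdout);
          fclose(f);
          return 0;
      }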
  • While this new architecture sounds amazing, and I am planning to upgrade in about a year when this is released, is anyone else a bit worried about the overclocking potential of Nehalem? Intel sells their high end $1000 + 'Extreme' CPUs with an unlocked multiplier and, other than a higher bin, that is really its only selling point. I remember the days before Intel started locking down the multipliers. Lots of people thought it might spell the end of overclocking. But of course it turned out that FSB overcloc
  • by markass530 ( 870112 ) <<moc.liamg> <ta> <035ssakram>> on Monday March 17, 2008 @08:05PM (#22779266) Homepage
    I say this as an admitted AMD fanboy, and in hopes that they can make a comeback, to once again force Intel into a frenzy of research and development. I can't help but imagine that AMD execs are saying something along the lines of Isoroku Yamamoto's famous WWII post-Pearl Harbor quote, "I fear that all we have done is to awaken a sleeping giant." It's all gravy for consumers, so one can't help but be happy at the current developments. However, to ensure future happiness for consumers, one must also hope for an AMD comeback.
    • by Hal_Porter ( 817932 ) on Tuesday March 18, 2008 @03:04AM (#22780914)
      I think AMD will do OK. Once Dell and the like get used to using CPUs from multiple sources, they will probably survive. And a small company like AMD probably has an edge in terms of shorter design cycles and the ability to pick niches. AMD64 was, in retrospect, a brilliant hack that gave people most of the features of Itanium they wanted (64-bit, more registers) and none that they didn't (an expensive single-source CPU with crap integer performance). Meanwhile Intel got hopelessly bogged down trying to sell people Itaniums that they didn't want.

      And they have other clever stuff in the pipeline, e.g.

      http://www.tech.co.uk/computing/upgrades-and-peripherals/motherboards-and-processors/news/amd-plots-16-core-super-cpu-for-2009?articleid=1754617439 [tech.co.uk]

      What's more, with that longer instruction pipeline in mind, it will be interesting to see how Bulldozer pulls off improved single-threaded performance. Rumours are currently circulating that Bulldozer may be capable of thread-fusing or using multiple cores to compute a single thread. Thread fusing is one of the holy grails of PC processing. If Bulldozer is indeed capable of such a feat, the future could be very bright indeed for AMD.
  • These Intel Hebrewlish names are getting really hard to pronounce.

    "Hebrew English is to be helpings and not to be laughings at."
  • Given that processors with four cores are called "quad-core", shouldn't a six-core processor be a "sexa-core" processor? Calling a six-core processor "hexa-core" would imply that a processor with four cores should be called "tetra-core."

    </pedantic>
  • If they didn't support OpenGL you'd see Apple moving over to AMD without a second glance.
    • Comment removed based on user account deletion
      • Compare the amount of Mac sales to the amount of PC sales and if Apple moved over to AMD, I doubt Intel would give a second glance either.

        Apple probably buys around 10 percent of all laptop chips that Intel produces, and mostly goes for the more expensive ones, so I would estimate about 20 percent of dollar revenue. And Apple buys a good amount of expensive quad core server chips as well. And they don't buy any of the $50 low end chips that end up in your $399 PC. So financially, losing Apple would be a major hit for Intel.

        • Re: (Score:3, Informative)

          Comment removed based on user account deletion
          • by tyrione ( 134248 )

            Apple probably buys around 10 percent of all laptop chips that Intel produces, and mostly goes for the more expensive ones, so I would estimate about 20 percent of dollar revenue.

            I notice you've tried to sneak the adjective "laptop" in there. I think it would be erring on your side to suggest that no more than half the chips Intel produces are for laptops, the rest being for desktops and servers. If your figures are correct (which I seriously doubt), that puts Apple down to buying a maximum of 5% of Intel's overall chip production. (Even then, whilst I accept there is possibly a higher proportion of Apple users in the US, that is not the case here in Europe, where Apple's penetration for computers is very low.)

            And they don't buy any of the $50 low end chips that end up in your $399 PC.

            Except that you're now (presumably) talking about $399 PCs in general, not just laptops - I detect some serious massaging of figures now on your part.

            However, if you're talking about $399 (or in my case £399) laptops, then I call BS on you. Sure, a lot of home users buy a cheap laptop as a second home machine, but the biggest buyers of laptops are corporates, who do not buy the cheapest machines. Therefore, by supposition, higher-grade chips also go into Dell's, HP's, Lenovo's, etc. mid- to high-end laptops, which, because there are more of those than there are Macs sold, puts Apple into a much smaller minority than you are claiming.

            So please do not exaggerate the Mac's penetration (outside of the US at least) - there really are not that many of them about. As I've said previously on Slashdot, having spent 25+ years as a technical person in telecoms and IT travelling quite regularly around Europe and parts of the Middle East, I have seen a total of 3 Mac machines ever - one belonged to an American tutor on a course I did, one to a student posing in the local Starbucks, and one is a surplus Mac given to a friend of mine by his boss, which he has no idea what to do with and which is still in the box.

            My original comment stands. Having friends at both Intel and Apple, I know the close relationship that has developed, and the cross-pollination of technical knowledge benefits both companies. However, with the upcoming products Apple has in the pipeline and its impressive gains in several existing and upcoming market spaces, it's clear that Intel would lose several billion dollars of future revenue by having Apple leave.

            Let me also point out the stagnation of the Intel stock that benefits from its h

        • by drsmithy ( 35869 )

          And Apple buys a good amount of expensive quad core server chips as well.

          I doubt that very much. Mac Pros aren't exactly a volume seller, and with only a single mid-range 1U server offering - and not an especially compelling one at that - Apple are far, far from a major player in the server market.

          So financially, losing Apple would be a major hit for Intel.

          No, they wouldn't. A hit, yes, but not a major one.
