
Intel Abandons Discrete Graphics

Stoobalou writes with this excerpt from Thinq: "Paul Otellini may think there's still life in Intel's Larrabee discrete graphics project, but the other guys at Intel don't appear to share his optimism. Intel's director of product and technology media relations, Bill Kircos, has just written a blog about Intel's graphics strategy, revealing that any plans for a discrete graphics card have been shelved for at least the foreseeable future. 'We will not bring a discrete graphics product to market,' stated Kircos, 'at least in the short-term.' He added that Intel had 'missed some key product milestones' in the development of the discrete Larrabee product, and said that the company's graphics division is now 'focused on processor graphics.'"


  • Groan (Score:4, Insightful)

    by Winckle ( 870180 ) <mark&winckle,co,uk> on Wednesday May 26, 2010 @01:23PM (#32351202) Homepage

    I hope they at least manage to incorporate some of what they've learnt into their integrated chips.

    Intel's integrated chips have been appallingly bad in the past; some were incapable of decoding HD video with reasonable performance. Manufacturers using those Intel integrated chips in their consumer-level computers did a great deal of harm to the computer games industry.

    • Re:Groan (Score:5, Informative)

      by Pojut ( 1027544 ) on Wednesday May 26, 2010 @01:44PM (#32351444) Homepage

      For anyone stuck with an Intel GMA chipset: GMA Booster [gmabooster.com] may help solve some of your problems. Just make sure you have a decent cooling solution [newegg.com], as it can ramp up the heat output of your system considerably. Still, if you're stuck with GMA, it can make the difference between a game being unplayable and being smooth.

      • Limited to 950 (Score:5, Informative)

        by manekineko2 ( 1052430 ) on Wednesday May 26, 2010 @02:23PM (#32351948)

        Note for anyone else whose curiosity was piqued: this only works on 32-bit systems based on the 950 chipset, and does not work with the GMA X3100, GMA X4500, GMA 500, or GMA 900.

        • by Svartalf ( 2997 )

          Not only that, but people shouldn't lump the GMA 500 in with the rest; it's a rebadged PowerVR core, not an in-house Intel design.

        • It's interesting that although it boosts the clock speed from 133/166MHz to 400MHz, the performance gain is only about 25%.

          It says right on the website that this is because it's starved for RAM bandwidth; as it gobbles more bandwidth, your CPU gets less, but it does result in a net gain.

          That means in theory the performance gains could be higher on desktop systems with higher-speed RAM, as opposed to laptops. However, being able to feed it more data also means it works harder, so the recommendation for a decent cooling solution above still applies.
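          A rough back-of-the-envelope model of that bottleneck, in the spirit of a roofline calculation. The 10ms/30ms split between clock-bound and RAM-bound work below is an invented illustration, not a figure from GMA Booster's site:

          // Sketch: why a ~2.4x core clock bump yields only a modest gain when the
          // GPU spends most of a frame stalled on shared system RAM.
          #include <cstdio>

          int main() {
              double compute_ms   = 10.0;  // portion that scales with the GPU clock (assumed)
              double bandwidth_ms = 30.0;  // portion limited by shared memory bandwidth (assumed)

              double base    = compute_ms + bandwidth_ms;          // frame time at 166MHz
              double scale   = 400.0 / 166.0;                      // ~2.4x core clock
              double boosted = compute_ms / scale + bandwidth_ms;  // RAM-bound part unchanged

              std::printf("speedup: %.0f%%\n", (base / boosted - 1.0) * 100.0);
              // Prints roughly 17% with this split; adjust the split and you land near the
              // reported ~25%, because the clock alone cannot help the RAM-stalled portion.
              return 0;
          }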

  • by Anonymous Coward

    They've never been able to bring the most innovative designs to market.. they bring 'good enough' wrapped in the x86 instruction set.

    If x86 was available to all I think we'd see Intel regress to a foundry business model.

    • They've never been able to bring the most innovative designs to market.. they bring 'good enough' wrapped in the x86 instruction set.

      And, judging by what I've heard, a lot of people would say that the x86 instruction set itself is nothing more than 'good enough.'

      • It's not so much 'good enough' as that it brings along several decades' worth of cruft that isn't really necessary in the modern era. While Intel had a bright idea in Itanium and ditching the x86 instruction set, they greatly underestimated the amount of effort it would take to port the code and ensure that the necessary applications were available. Ultimately it was more or less DOA as a result; it just took some time for that to become formalized.
    • Re: (Score:3, Interesting)

      I disagree. Intel has been destroying AMD these past 4 years.

      AMD's 64-bit instruction set and the Athlons were a huge improvement where Intel had failed...

      But now Intel's chips are faster, and AMD has been playing catch-up. For a while there AMD didn't have an answer for Intel's Core line of CPUs.

      Now they do, and they're slightly cheaper than Intel's, but they do not perform as fast.

      • by Jeng ( 926980 )

        Intel's chips are faster because Intel has much better production facilities.

      • by Nadaka ( 224565 )

        I think you may be a bit off here. AMD meets or exceeds the performance available from Intel chips at every point of the price curve except the very high end where they do not compete at all.

        The i7 920 is the only real competitor to AMD's chips in price/performance, coming in a bit ahead of the AMD 955/965 in both performance and cost. Above that point, incrementally more power from Intel comes at exponentially higher costs. Below that point, AMD's chips beat everything Intel has at each price point.

        • AMD does beat Intel on the price curve... but not in performance. If you want the performance, AMD has no answer for Intel's CPUs. I recently built a system for someone and looked at all of the CPU options. Ultimately I went with an AMD CPU for him because of his price range....

          Like you said, AMD beats Intel on price... but not on performance. If you want performance, you have to pay Intel's prices.

          That's why they cost more. The CPUs Intel has put out these past 3 years are incredible. For a good t

          • by adolf ( 21054 )

            It doesn't matter what's at the top.

            If performance were the only metric one needed when selecting a product, we'd all be driving Bugatti Veyrons when we want to go fast, Unimogs when we want to move lots of stuff slowly, and AMG Mercedes-Benz SUVs when we want to move stuff along with people.

            Over here in reality, though, price is a factor. And so the Toyotas, the Hyundais, and the Chevys are a much better deal for most folks.

            So, even if Bugatti made a more inexpensive and practical vehicle that I might be int

          • by Nadaka ( 224565 )

            I agree that for everything from the i7 920 and up, Intel is unquestionably faster. Even the new six-core AMD chips will be able to match/beat the i7 920 at some tasks despite similar system costs.

            The AMD 955 at ~$160 outperforms the Intel E8400, Q8200, Q8400, and i5-650, which are available in the range of ~$150 to ~$190. The same goes for just about every lower price point as well.

            I think that the larger section of the market lies in the low to mid range chips. I am not just talking about price, but value as well. I

          • Re: (Score:3, Interesting)

            by Rockoon ( 1252108 )

            AMD does beat Intel on the price curve... but not in performance.

            AMD does seem to have an edge in the multiprocessor arena, although I am not sure why.

            According to PassMark, the fastest machines clocked using their software are a 4 x Opteron 6168 system (4 x 12 cores = 48 cores) and an 8 x Opteron 8435 system (8 x 6 cores = 48 cores) [cpubenchmark.net]

            The actual numbers are:

            4 x Opteron 6168 : 23,784 Passmarks.
            8 x Opteron 8435 : 22,745 Passmarks.
            4 x Xeon X7460 : 18,304 Passmarks.
            2 x Xeon X5680 : 17,910 Passmarks.

            That $200 AMD chip that everyone is raving about, the Phenom II 1055T, scor

            • As somebody whose sole job is to squeeze maximum floating point performance out of Intel chips, I can tell you those benchmarks are absolute crap.

              How was the code for the benchmark written? Did it use the compiler that Intel puts out? Does it use ippxMalloc to create the data structures for the number crunching? If the answer to either of the last two questions is no, then you are not getting even slightly close to the full throughput of the chips.
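              To illustrate the allocation point: IPP-style allocators hand back SIMD-aligned buffers, and plain C++17 can get a similar effect with std::aligned_alloc. This is only a sketch of the idea, not the benchmark's actual code:

              // Illustrative only: aligned buffers let the compiler's vectorizer use aligned
              // loads/stores, which is part of what IPP-style allocators provide.
              #include <cstddef>
              #include <cstdio>
              #include <cstdlib>

              int main() {
                  constexpr std::size_t n = 1 << 20;     // 1M floats = 4 MiB
                  constexpr std::size_t alignment = 64;  // cache-line / wide-SIMD friendly
                  // std::aligned_alloc requires the size to be a multiple of the alignment
                  float* a = static_cast<float*>(std::aligned_alloc(alignment, n * sizeof(float)));
                  if (!a) return 1;

                  for (std::size_t i = 0; i < n; ++i) a[i] = 0.5f;  // simple, vectorizable loops
                  double sum = 0.0;
                  for (std::size_t i = 0; i < n; ++i) sum += a[i];

                  std::printf("%f\n", sum);  // keep the result observable
                  std::free(a);
                  return 0;
              }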

              • How was the code for the benchmark written?

                It's fucking Passmark. Are you new to the benchmarking scene?

                Did it use the compiler that Intel puts out?

                The one that intentionally outputs code that cripples performance on non-Intels? Are you new to the benchmarking scene? Yes, their compiler is great.. as long as you trick it into not crippling performance on non-Intels.

                Does it use ippxMalloc to create the data structures for the number crunching?

                It's fucking Passmark. Are you a dipshit or something?

                If the answer to either of the last two questions is no, then you are not getting even slightly close to the full throughput of the chips.

                First you say the results are crap, and then later you say it's only crap if the answer to blah blah blah is no? Make up your mind, dipshit.

                • Hello troll.

                  Yes, their compiler is great.. as long as you trick it into not crippling performance on non-Intels.

                  Well you could use the compiler that AMD puts out. I'm sure they have one that is specific to their architectures.

                  It's fucking Passmark.

                  Doesn't mean anything unless you can pop the hood and examine the code. To get the most out of recent chips, the coding style/methodologies have changed substantially. There are whole things you just shouldn't do now. You code to the compiler.

                  I generally write micro-benchmarks to test even simple things (std::max, if greater than do this otherwise do that, etc) and the difference
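                  A minimal example of the kind of micro-benchmark being described, assuming a reasonably modern C++ compiler. It is a sketch of the idea, not the poster's actual harness, and the numbers it prints depend heavily on compiler, flags, and CPU:

                  // Times std::max against an equivalent hand-written branch over a large array.
                  #include <algorithm>
                  #include <chrono>
                  #include <cstddef>
                  #include <cstdio>
                  #include <vector>

                  int main() {
                      std::vector<int> data(1 << 22);
                      for (std::size_t i = 0; i < data.size(); ++i)
                          data[i] = static_cast<int>((i * 2654435761u) % 1000) - 500;  // mixed signs

                      auto time_it = [&](const char* name, auto body) {
                          auto t0 = std::chrono::steady_clock::now();
                          long long sink = 0;  // accumulate so the work isn't optimized away
                          for (int x : data) sink += body(x);
                          auto t1 = std::chrono::steady_clock::now();
                          auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
                          std::printf("%s: %lld us (sink=%lld)\n", name, (long long)us, sink);
                      };

                      time_it("std::max", [](int x) { return std::max(x, 0); });
                      time_it("branch  ", [](int x) { return x > 0 ? x : 0; });
                      return 0;
                  }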

        • I think you may be a bit off here. AMD meets or exceeds the performance available from Intel chips at every point of the price curve except the very high end where they do not compete at all.

          Performance/price != Performance

          And of course, it also depends mightily on exactly WHICH performance characteristics you're talking about.

    • by JamesP ( 688957 )

      Well, not really

      The P6 was a really good architecture. It's what AMD battled with the K7 arch (really good as well).

      Of course, that was until Intel shot itself in the foot with the NetBurst architecture (AKA Pentium 4).

      A 1GHz P3 could run circles around the 1.4GHz, 1.6GHz, and even higher-clocked Willamette P4s.

      But the P6 arch carried on, and Core 2 is based on it (with a lot of improvements on top).

  • missed milestones (Score:4, Informative)

    by LinuxIsGarbage ( 1658307 ) on Wednesday May 26, 2010 @01:30PM (#32351276)

    He added that Intel had 'missed some key product milestones' in the development of the discrete Larrabee product

    Like proof that they were even capable of making an integrated graphics product that wasn't a pile of garbage?

    GMA910: Couldn't run WDDM, thus couldn't run Aero, which was central to the "Vista Capable" lawsuits.

    GMA500: decent hardware, crappy drivers under Windows, virtually non-existent Linux drivers, worse performance than the GMA950 in netbooks.

    Pressure to lock out competing video chipsets. We're lucky ION saw the light of day. http://www.pcgameshardware.com/aid,680035/Nvidia-versus-Intel-Nvidia-files-lawsuit-against-Intel/News/ [pcgameshardware.com]

  • A company that hasn't produced a discrete graphics card in over a decade (I'm pretty sure I remember seeing an Intel graphics card once. Back in the 90s.) is going to continue to not produce discrete graphics cards. Wow. Stop the presses. Has Ric Romero been alerted?

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      A large, publicly announced project with a great deal of media hype that had the potential to shake up the industry was cancelled. So, yeah, stop the presses.

    • Re: (Score:2, Informative)

      by kdekorte ( 8768 )
      The i740 [wikipedia.org] card... great expectations, poor real-world experience.
      • by 0123456 ( 636235 )

        The i740 [wikipedia.org] card... great expectations, poor real-world experience.

        Everyone I knew in the graphics business thought that Intel had gone completely insane with the i740; other companies were trying to cram more and faster RAM onto their cards while Intel were going to use slow system RAM over a snail-like AGP bus.

        So I'd say the expectations were pretty low, at least among those who knew what they were talking about.
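        For scale, the rough peak bandwidth numbers of the era (from memory, so treat them as approximate): PCI at 32 bits / 33MHz is ~133MB/s, AGP 1x/2x is ~266/533MB/s, while even a modest 128-bit SDRAM interface on the card itself is in the GB/s range:

        // Back-of-the-envelope bus vs. local-RAM bandwidth, in MB/s. The local-RAM
        // example (128-bit SDRAM at 100MHz) is a hypothetical card, for illustration.
        #include <cstdio>

        int main() {
            double pci   = 32 / 8.0 * 33.33e6;   // ~133 MB/s
            double agp1x = 32 / 8.0 * 66.66e6;   // ~266 MB/s
            double agp2x = agp1x * 2;            // ~533 MB/s (what the i740 used)
            double local = 128 / 8.0 * 100e6;    // ~1600 MB/s on-card
            std::printf("PCI %.1f, AGP 1x %.1f, AGP 2x %.1f, local %.1f MB/s\n",
                        pci / 1e6, agp1x / 1e6, agp2x / 1e6, local / 1e6);
            return 0;
        }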

        • Re: (Score:3, Informative)

          by TheRaven64 ( 641858 )
          To be fair to Intel, most graphics cards then were on the PCI bus, not AGP, so they didn't have the opportunity to use the host RAM except via a very slow mechanism. At the time, the amount of RAM was far more of a limitation than the speed, and a card using 8MB of host RAM via AGP was likely to have an advantage over a card with 4MB of local RAM on the PCI bus. While it was much slower than competing solutions, it was also much cheaper. The RAM on something like the Voodoo 2 was a significant proportion
          • by 0123456 ( 636235 )

            To be fair to Intel, most graphics cards then were on the PCI bus, not AGP, so they didn't have the opportunity to use the host RAM except via a very slow mechanism.

            If by 'most' you mean 'Voodoo-2', yes. From what I remember all the cards I was using at the time Intel was trying to sell the i740 (Permedia-2, TNT, etc) were on the AGP bus.

            I believe 3dfx were pretty much the last holdouts on PCI, because game developers had deliberately restricted their games to run well on Voodoo cards, thereby ensuring that they didn't need much bus bandwidth (any game which actually took advantage of AGP features so it ran well on a TNT but badly on a Voodoo was slated in reviews).

            • Re: (Score:3, Insightful)

              by TheRaven64 ( 641858 )

              From what I remember all the cards I was using at the time Intel was trying to sell the i740 (Permedia-2, TNT, etc) were on the AGP bus.

              Check the dates. The i740 was one of the very first cards to use AGP. Not sure about the Permedia-2, but the TNT was introduced six months after the i740 and cost significantly more (about four times as much, as I recall). It performed a lot better, but that wasn't really surprising.

    • Re: (Score:3, Interesting)

      by gman003 ( 1693318 )

      The Larrabee chips actually looked pretty good. There was a lot of hype, especially from Intel. They demoed things like Quake Wars running a custom real-time ray-tracing renderer at a pretty decent resolution. Being able to use even a partial x86 ISA for shaders would have been a massive improvement as well, both in capabilities and performance.

      From what I've been able to piece together, the problem wasn't even the hardware, it was the drivers. Apparently, writing what amounts to a software renderer for Ope

  • Both good and bad (Score:3, Interesting)

    by TheRealQuestor ( 1750940 ) on Wednesday May 26, 2010 @01:37PM (#32351368)
    This is bad news for one reason. Competition. There are only 2 major players in discreet graphics right now and that is horrible for the consumer. Now the good: Intel SUCKS at making GPUs. I mean seriously. So either way, Intel has no hope of making a 120-core GPU based on x86 that is cheap or fast enough to compete. Go big or stay at home. Intel, stay at home.
    • I almost let this slide until you put the other half of the pun in capitals!

      There is lots of tasty competition producing NSFW "Discreet Graphics" that Sucks!

    • by keeboo ( 724305 )

      This is bad news for one reason. Competition. There are only 2 major players in discreet graphics right now and that is horrible for the consumer.

      What about VIA and Matrox?

      • by h4rr4r ( 612664 )

        They produce joke cards.

      • He said "major players". VIA's IGPs are generations behind and only end up in 'budget' computers or embedded appliances. Matrox serves a niche market primarily focused on professional workstation rendering. Neither competes head to head with nVidia or AMD/ATI.
      • Re:Both good and bad (Score:4, Informative)

        by FreonTrip ( 694097 ) <freontrip AT gmail DOT com> on Wednesday May 26, 2010 @02:36PM (#32352124)

        VIA stopped designing motherboards for AMD and Intel CPUs about two years ago. Consequently, you can't find its GPUs in many places aside from embedded systems or ultra low-budget netbooks and the like. Weirdly, they still sell a minuscule number of discrete cards, primarily overseas, but without divine intervention they'll never become a serious player again.

        Matrox serves niche markets, mostly in the way of professional workstations, medical imaging equipment, and the odd sale of their TripleHead system to the ever-eroding hardcore PC gamer market.

        In case anyone wonders what happened to the others: Oak Technologies' graphics division was acquired by ATI many moons ago; Rendition was eaten by Micron in 1998 and their name is now used to sell budget RAM; SiS bought Trident's graphics division, spun off their graphical company as XGI Technologies, had a series of disastrous product releases, and had their foundries bought by ATI, who let them continue to sell their unremarkable products to eager low-bidders; and 3dfx was mismanaged into oblivion a decade ago.

      • Matrox has been irrelevant ever since 3D became important.

    • Nvidia has been incredible for the consumer for a long time now.

      I would like to see their Quadro products come down in price though. They are ridiculously overpriced.

    • Re: (Score:3, Interesting)

      by BikeHelmet ( 1437881 )

      Are you kidding me? This is great for consumers.

      If Intel got their claws into the discrete graphics market (which is already showing signs of stagnation rather than growth), then they'd take a huge chunk of nVidia and ATI's R&D budgets away. Unable to put as much money towards advancement, GPU generations (and their price drops) would come slower. Meanwhile Intel would utilize their advanced (and cheap) fabbing to make a killing on that market, just as they do with IGPs.

      End result? Slower progress, nVidia and A

  • by SLot ( 82781 )

    Doesn't bode well for the future of Project Offset.

  • Actually Intel have changed the name to NotToBee.
  • by Funk_dat69 ( 215898 ) on Wednesday May 26, 2010 @02:20PM (#32351904)

    I kind of think Larrabee was a hedge.

    If you think about it, around the time it was announced (very early on in development, which is not normal), you had a bunch of potentially scary things going on in the market.
    Cell came out with a potentially disruptive design, Nvidia was gaining ground in the HPC market, and OpenCL was being put forward by Apple as a proposed standard for hybrid computing.

    All of a sudden it looked like maybe Intel was a little too far behind.

    Solution: Announce a new design of their own to crush the competition! In Intel-land, sometimes the announcement is as big as the GA. Heck, the announcement of Itanium was enough to kill off a few architectures. They would announce Larrabee as a discrete graphics chip to get gamers to subsidize development and....profit!

    Lucky for them, Cell never found a big enough market and Nvidia had a few missteps of their own. Also, Nehalem turned out to be successful. Add all that up, and it becomes kind of clear that Larrabee was no longer needed, which made the fact that it was a huge failure performance-wise more or less moot.

    Intel is the only company that can afford such huge hedge bets. Looks like maybe another one is coming to attack the ARM threat. We'll see.

  • Does anybody know where the Larrabee development was actually done? I'd be embarrassed if it was done at Intel here in Oregon.
  • Larrabee was canceled "as a standalone discrete graphics product" on December 4, 2009.
