Intel Abandons Discrete Graphics 165
Stoobalou writes with this excerpt from Thinq: "Paul Otellini may think there's still life in Intel's Larrabee discrete graphics project, but the other guys at Intel don't appear to share his optimism. Intel's director of product and technology media relations, Bill Kircos, has just written a blog about Intel's graphics strategy, revealing that any plans for a discrete graphics card have been shelved for at least the foreseeable future. 'We will not bring a discrete graphics product to market,' stated Kircos, 'at least in the short-term.' He added that Intel had 'missed some key product milestones' in the development of the discrete Larrabee product, and said that the company's graphics division is now 'focused on processor graphics.'"
Groan (Score:4, Insightful)
I hope they at least manage to incorporate some of what they've learnt into their integrated chips.
Intel's integrated chips have been appallingly bad in the past, some incapable of decoding HD video with reasonable performance. Manufacturers using those Intel integrated chips in their consumer-level computers did a great deal of harm to the computer games industry.
Re:Groan (Score:5, Informative)
For anyone stuck with an Intel GMA chipset: GMA Booster [gmabooster.com] may help solve some of your problems. Just make sure you have a decent cooling solution [newegg.com], as it can ramp up the heat output of your system considerably. Still, if you're stuck with GMA, it can make the difference between a game being unplayable and being smooth.
Limited to 950 (Score:5, Informative)
Note for anyone else whose curiosity was piqued: this only works on 32-bit systems with the 950 chipset, and does not work with the GMA X3100, GMA X4500, GMA 500, or GMA 900.
Re: (Score:2)
Not only that, but people shouldn't lump in the GMA 500, which is actually a rebadged PowerVR core rather than an Intel design.
Re: (Score:2)
It's interesting that although it boosts the clock speed from 133/166MHz to 400MHz, the performance boost is only approximately 25%.
It says right on the website that this is because the chip is RAM-bandwidth starved - and as it gobbles more bandwidth, your CPU gets less, but it does result in a net gain.
That means in theory the performance gains could be higher on desktop systems with higher-speed RAM, as opposed to laptops. However, being able to feed it more data also means it works harder, so the recommendation for decent
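A quick back-of-the-envelope check of that explanation, as a sketch: if only a fraction p of the frame time actually scales with the GPU clock (the rest being stalled on RAM bandwidth), an Amdahl-style bound says a ~3x clock boost can deliver at most 1 / ((1 - p) + p/3) overall. The fractions below are hypothetical, chosen only to show that p around 0.3 reproduces the ~25% figure.

```cpp
// Amdahl-style sanity check of the "bandwidth-starved" explanation:
// if only a fraction p of frame time scales with the GPU clock,
// a 133 MHz -> 400 MHz boost gives 1 / ((1 - p) + p/s) overall.
#include <cstdio>
#include <initializer_list>

int main() {
    const double s = 400.0 / 133.0;              // ~3x clock boost
    for (double p : {0.2, 0.3, 0.5, 0.8}) {      // hypothetical compute-bound fractions
        double speedup = 1.0 / ((1.0 - p) + p / s);
        std::printf("p = %.1f -> overall speedup %.2fx\n", p, speedup);
    }
    return 0;
}
```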
Re: (Score:2)
I've tried it, it definitely works. It "transforms" the hardware by greatly overclocking it :-)
Re: (Score:2)
Those 386to486 converter programs installed a handler for the illegal-instruction trap, caught it, and emulated the dozen or so instructions that were introduced with the 486. They let you run programs that required a 486 (rather than having them crash with an illegal instruction error); they just ran a lot slower than on a real 486.
This is completely different; it just tweaks the clock speed of the hardware. Given that most people who have integrated graphics and can't upgrade are laptop users, who have limited and
Re: (Score:2)
Back in the days of the 8086/88 and 80386 I used to run a math coprocessor emulator and it worked amazingly well, much to my surprise. I never ran 386to486 but it would not surprise me if it did a good job of running programs requiring a 486.
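For the curious, here's a minimal sketch of the trap-and-emulate idea those comments describe, translated to a modern Linux/x86-64 process rather than a DOS TSR (which is how the real 386-to-486 utilities and coprocessor emulators actually hooked the trap). The "emulated" instruction here is just UD2 treated as a no-op - purely illustrative, not the actual 486 instruction set.

```cpp
// Sketch: catch the illegal-instruction trap, "emulate" the faulting opcode,
// then resume. Assumes Linux, x86-64, glibc (REG_RIP, ucontext).
#define _GNU_SOURCE 1
#include <csignal>
#include <cstdio>
#include <ucontext.h>
#include <unistd.h>

static void sigill_handler(int, siginfo_t*, void* ctx) {
    ucontext_t* uc = static_cast<ucontext_t*>(ctx);
    unsigned char* ip =
        reinterpret_cast<unsigned char*>(uc->uc_mcontext.gregs[REG_RIP]);
    if (ip[0] == 0x0F && ip[1] == 0x0B) {      // UD2 - guaranteed to trap
        // A real emulator would decode the instruction, compute its result,
        // and write it back into the saved register state here.
        uc->uc_mcontext.gregs[REG_RIP] += 2;   // skip the 2-byte opcode
        return;                                // resume after the "emulated" op
    }
    _exit(1);                                  // genuinely unknown: give up
}

int main() {
    struct sigaction sa = {};
    sa.sa_sigaction = sigill_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGILL, &sa, nullptr);

    asm volatile("ud2");                       // traps, gets skipped by the handler
    std::puts("resumed after the emulated instruction");
    return 0;
}
```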
Re: (Score:2)
You mean like Apple using the GMA950 and X3100 in their early Intel Mac minis and MacBooks?
Starcraft II, Diablo 3, Steam on Mac OS X... all great news, except we can't use any of it because the Intel integrated GPUs SUCK!
Starcraft 2 chokes on my Radeon 5430
Re: (Score:2)
Video cards for gaming.
Memory bandwidth within the video card is one of the best ways to grade a video card.
It doesn't matter much how much video RAM it has; it's the speed that counts.
Having a newer video card helps a little, but the big thing is the video memory bandwidth.
Video processing is secondary for gaming because the primary bottleneck is the memory bandwidth.
If you have a low-end but new video card, it will perform comparably to the older version of the same card.
If you have an older premium card it will still
Re: (Score:2)
That isn't really true - memory bandwidth has a fairly minor impact, similar to CPU/memory - maybe 5% of the speed of a graphics card is lost to memory latencies (either bus or strobe latencies). The rest of the time is spent doing transforms, running shaders, doing depth testing, etc., and those processes depend on the GPU clock. The old AGP shared-memory model is actually creeping back in (look at the G100s, for example), especially in the mobile processing area, because memory bandwidth matters so
Re: (Score:2)
The rest of the time is spent doing transforms, running shaders, doing depth testing, etc,
And what do you think those transforms are being done on? Large arrays of numbers stored in GPU memory.
There once was a time when main-memory-to-GPU-memory transfer was a serious throttle, but those days died around AGP 4x and haven't returned.
Ah, this is where the misunderstanding would have been: he was likely talking about GPU-to-GPU-memory bandwidth,
but the latencies negate any speed gain, same as for processor memory.
The advantage most GPUs have is that their memory access patterns are very linear: they already know what they'll be working on next before they've finished the current work, so they can put the next address on the bus while they're still busy, and when they're done the new data is waiting.
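To make that point concrete, here's a CPU-side sketch (not a GPU benchmark - just an illustration of the same principle): the identical amount of data is summed once in linear order and once through a shuffled index. Absolute numbers depend entirely on the machine; the linear pass lets the memory system stream and prefetch, while the scattered pass keeps paying full latency.

```cpp
// Linear vs. scattered access over the same data. Illustrative only.
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    const std::size_t n = 1u << 24;               // 16M elements, ~64 MB
    std::vector<std::uint32_t> data(n, 1);
    std::vector<std::uint32_t> index(n);
    std::iota(index.begin(), index.end(), 0u);
    std::shuffle(index.begin(), index.end(), std::mt19937(42));

    auto run = [&](const char* label, auto&& body) {
        auto t0 = std::chrono::steady_clock::now();
        std::uint64_t sum = body();
        auto t1 = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        std::printf("%-9s sum=%llu  %.1f ms\n", label,
                    static_cast<unsigned long long>(sum), ms);
    };

    run("linear", [&] {                            // streams; prefetch-friendly
        std::uint64_t s = 0;
        for (std::size_t i = 0; i < n; ++i) s += data[i];
        return s;
    });
    run("scattered", [&] {                         // random walk; latency-bound
        std::uint64_t s = 0;
        for (std::size_t i = 0; i < n; ++i) s += data[index[i]];
        return s;
    });
    return 0;
}
```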
Re: (Score:2)
This depends a lot on the graphics load - there are examples of linear performance increases with memory bandwidth, and cases where the increase is hardly there. Some games benefit from more memory bandwidth, some don't (see http://www.firingsquad.com/hardware/nvidiageforce/page10.asp [firingsquad.com] - going from GeForce 256 to GeForce 256 DDR doubles bandwidth but does nothing for performance in Quake2 or Quake3).
Re: (Score:2)
Intel is a great manufacturer.. not designer. (Score:2, Interesting)
They've never been able to bring the most innovative designs to market.. they bring 'good enough' wrapped in the x86 instruction set.
If x86 were available to all, I think we'd see Intel regress to a foundry business model.
Re: (Score:2)
They've never been able to bring the most innovative designs to market.. they bring 'good enough' wrapped in the x86 instruction set.
And, judging by what I've heard, a lot of people would say that the x86 instruction set itself is nothing more than 'good enough.'
Re: (Score:2)
Re: (Score:2)
The many-uop model isn't a bad one, either. It makes sense that if there are common pairs of uops, shorthand should be used to encode them. This improves code density, among other things.
While
Re: (Score:3, Interesting)
I disagree. Intel has been destroying AMD these past 4 years.
AMD's 64-bit instruction set and Athlons were a huge improvement where Intel had failed...
But now... Intel's chips are faster, and AMD has been playing catch-up. For a while there AMD didn't have an answer for Intel's Core line of CPUs.
Now they do, and they're slightly cheaper than Intel, but they do not perform as fast as Intel.
Re: (Score:2)
Intel's chips are faster because Intel has much better production facilities.
Why Intel chips are faster (Score:2)
Intel's chips are faster because Intel has much better production facilities.
No way, man, it's their Speed Hole(tm) Technology!
Re: (Score:2)
I think you may be a bit off here. AMD meets or exceeds the performance available from Intel chips at every point of the price curve except the very high end where they do not compete at all.
The i7 920 is the only real competitor to AMD's chips in price/performance, coming in a bit above the AMD 955/965 in both performance and cost. Above that point, incrementally more power from Intel comes at exponentially higher cost. Below that point, AMD's chips beat everything Intel has at each price point.
Re: (Score:2)
AMD does beat Intel on the price curve... but not in performance. If you want the performance, AMD has no answer for Intel's CPUs. I recently built a system for someone and looked at all of the CPU options. Ultimately I went with an AMD CPU for him because of his price range....
Like you said, AMD beats Intel on the price... but not on the performance. If you want performance, you have to pay Intel's prices.
That's why they cost more. The CPUs Intel has put out these past 3 years are incredible. For a good t
Re: (Score:2)
It doesn't matter what's at the top.
If performance were the only metric one needed when selecting a product, we'd all be driving Bugatti Veyrons when we wanted to go fast, Unimogs when we want to move lots of stuff slow, and AMG Mercedes-Benz SUVs when we want to move stuff along with people.
Over here in reality, though, price is a factor. And so the Toyotas, Hyundais, and Chevys are a much better deal for most folks.
So, even if Bugatti made a more inexpensive and practical vehicle that I might be int
Re: (Score:2)
I agree that for everything from the i7 920 and up, Intel is unquestionably faster. Even the new 6-core AMD chips will be able to match/beat the i7 920 at some tasks despite similar system costs.
The AMD 955 at ~$160 outperforms the Intel E8400, Q8200, Q8400, and i5-650, which are available in the range of ~$150 to ~$190. The same goes for just about every lower price point as well.
I think that the larger section of the market lies in the low- to mid-range chips. I am not just talking about price, but value as well. I
Re: (Score:3, Interesting)
AMD does beat intel on the price curve... but not in performance.
AMD does seem to have an edge in the multiprocessor arena, although I am not sure why.
According to PassMark, the fastest machines clocked using their software are a 4 x Opteron 6168 (4 x 12 cores = 48 cores) system and an 8 x Opteron 8435 (8 x 6 cores = 48 cores) system [cpubenchmark.net]
The actual numbers are:
4 x Opteron 6168 : 23,784 Passmarks.
8 x Opteron 8435 : 22,745 Passmarks.
4 x Xeon X7460 : 18,304 Passmarks.
2 x Xeon X5680 : 17,910 Passmarks.
That $200 AMD chip that everyone is raving about, the Phenom II 1055T, scor
Re: (Score:2)
As somebody whose sole job is to squeeze maximum floating-point performance out of Intel chips, I can tell you those benchmarks are absolute crap.
How was the code for the benchmark written? Did it use the compiler that Intel puts out? Does it use ippxMalloc to create the data structures for the number crunching? If the answer to either of the last two questions is no, then you are not getting even slightly close to the full throughput of the chips.
Re: (Score:2)
How was the code for the benchmark written?
It's fucking Passmark. Are you new to the benchmarking scene?
Did it use the compiler that intel puts out?
The one that intentionally outputs code that cripples performance on non-Intels? Are you new to the benchmarking scene? Yes, their compiler is great.. as long as you trick it into not crippling performance on non-Intels.
Does it use ippxMalloc to create the datastructures for the number crunching?
It's fucking Passmark. Are you a dipshit or something?
If the answer to any of the last two questions is no, then you are not getting even slightly close to the full throughput of the chips.
First you say the results are crap, and then later you say it's only crap if the answer to blah blah blah is no? Make up your mind, dipshit.
Re: (Score:2)
Hello troll.
Yes, their compiler is great.. as long as you trick it into not crippling performance on non-Intels.
Well you could use the compiler that AMD puts out. I'm sure they have one that is specific to their architectures.
Its fucking Passmark.
Doesn't mean anything unless you can pop the hood and examine the code. To get the most out of recent chips, coding styles/methodologies have changed substantially. There are things you just shouldn't do now. You code to the compiler.
I generally write micro-benchmarks to test even simple things (std::max, if greater than do this otherwise do that, etc) and the difference
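A micro-benchmark of the kind described might look something like this - a sketch only. Timings swing with the compiler and flags, and a good optimizer can fold either version, which is exactly why the generated code matters more than the raw number.

```cpp
// Sketch: timing std::max against an explicit branch over random data.
// The volatile sink keeps the loops from being optimized away entirely.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <random>
#include <vector>

template <typename F>
static double time_ms(F&& f) {
    auto t0 = std::chrono::steady_clock::now();
    f();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    std::vector<int> v(1 << 24);
    std::mt19937 rng(1);
    for (int& x : v) x = static_cast<int>(rng()) % 1000 - 500;  // mixed signs

    volatile long long sink = 0;

    double t_max = time_ms([&] {
        long long acc = 0;
        for (int x : v) acc += std::max(x, 0);      // library helper
        sink = sink + acc;
    });
    double t_branch = time_ms([&] {
        long long acc = 0;
        for (int x : v) acc += (x > 0) ? x : 0;     // hand-written branch
        sink = sink + acc;
    });

    std::printf("std::max: %.2f ms   branch: %.2f ms   (sink=%lld)\n",
                t_max, t_branch, static_cast<long long>(sink));
    return 0;
}
```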
Re: (Score:2)
I think you may be a bit off here. AMD meets or exceeds the performance available from Intel chips at every point of the price curve except the very high end where they do not compete at all.
Performance/price != Performance
And of course, it also depends mightily on exactly WHICH performance characteristics you're talking about.
Re: (Score:2)
Well, not really
The P6 was a really good architecture. It's what AMD battled with the K7 arch (really good as well).
That is, until Intel shot itself in the foot with the NetBurst architecture (AKA Pentium 4).
A 1GHz P3 could run circles around the 1.4GHz, 1.6GHz, and even higher-clocked Willamette P4s.
But the P6 arch carried on, and Core 2 is based on it (with a lot of improvements on top).
missed milestones (Score:4, Informative)
He added that Intel had 'missed some key product milestones' in the development of the discrete Larrabee product,
Like proof that they were even capable of making an integrated graphics product that wasn't a pile of garbage?
GMA910: Couldn't run WDDM, and thus couldn't run Aero - central to the "Vista Capable" lawsuits.
GMA500: decent hardware, crappy drivers under Windows, virtually non-existent Linux drivers, and worse performance than the GMA950 in netbooks.
Pressure to lock out competing video chipsets. We're lucky ION saw the light of day. http://www.pcgameshardware.com/aid,680035/Nvidia-versus-Intel-Nvidia-files-lawsuit-against-Intel/News/ [pcgameshardware.com]
Wait, what? This is news? (Score:2, Insightful)
A company that hasn't produced a discrete graphics card in over a decade (I'm pretty sure I remember seeing an Intel graphics card once. Back in the 90s.) is going to continue to not produce discrete graphics cards. Wow. Stop the presses. Has Ric Romero been alerted?
Re: (Score:3, Insightful)
A large, publicly announced project with a great deal of media hype that had the potential to shake up the industry was cancelled. So, yeah, stop the presses.
Re: (Score:2, Informative)
Re: (Score:2)
The i740 [wikipedia.org] card... great expectations, poor real-world experience.
Everyone I knew in the graphics business thought that Intel had gone completely insane with the i740; other companies were trying to cram ever more, ever faster RAM onto their cards while Intel was going to use slow system RAM over a snail-like AGP bus.
So I'd say the expectations were pretty low, at least among those who knew what they were talking about.
Re: (Score:3, Informative)
Re: (Score:2)
To be fair to Intel, most graphics cards then were on the PCI bus, not AGP, so they didn't have the opportunity to use the host RAM except via a very slow mechanism.
If by 'most' you mean 'Voodoo-2', yes. From what I remember all the cards I was using at the time Intel was trying to sell the i740 (Permedia-2, TNT, etc) were on the AGP bus.
I believe 3dfx were pretty much the last holdouts on PCI, because game developers had deliberately restricted their games to run well on Voodoo cards, thereby ensuring that they didn't need much bus bandwidth (any game which actually took advantage of AGP features so it ran well on a TNT but badly on a Voodoo was slated in reviews).
Re: (Score:3, Insightful)
From what I remember all the cards I was using at the time Intel was trying to sell the i740 (Permedia-2, TNT, etc) were on the AGP bus.
Check the dates. The i740 was one of the very first cards to use AGP. Not sure about the Permedia-2, but the TNT was introduced six months after the i740 and cost significantly more (about four times as much, as I recall). It performed a lot better, but that wasn't really surprising.
Re: (Score:3, Interesting)
The Larrabee chips actually looked pretty good. There was a lot of hype, especially from Intel. They demoed things like Quake Wars running a custom real-time ray-tracing renderer at a pretty decent resolution. Being able to use even a partial x86 ISA for shaders would have been a massive improvement as well, both in capabilities and performance.
From what I've been able to piece together, the problem wasn't even the hardware, it was the drivers. Apparently, writing what amounts to a software renderer for Ope
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Both good and bad (Score:3, Interesting)
Re: Discreet (Score:2)
I almost let this slide until you put the other half of the pun in capitals!
There is lots of tasty competition producing NSFW "Discreet Graphics" that Sucks!
Re: (Score:2)
This is bad news for one reason. Competition. There are only 2 major players in discreet graphics right now and that is horrible for the consumer.
What about VIA and Matrox?
Re: (Score:2)
They produce joke cards.
Re: (Score:2)
Re:Both good and bad (Score:4, Informative)
VIA stopped designing motherboards for AMD and Intel CPUs about two years ago. Consequently, you can't find its GPUs in many places aside from embedded systems or ultra-low-budget netbooks and the like. Weirdly, they still sell a minuscule number of discrete cards, primarily overseas, but without divine intervention they'll never become a serious player again.
Matrox serves niche markets, mostly in the way of professional workstations, medical imaging equipment, and the odd sale of their TripleHead system to the ever-eroding hardcore PC gamer market.
In case anyone wonders what happened to the others: Oak Technologies' graphics division was acquired by ATI many moons ago; Rendition was eaten by Micron in 1998 and their name is now used to sell budget RAM; SiS bought Trident's graphics division, spun off their graphical company as XGI Technologies, had a series of disastrous product releases, and had their foundries bought by ATI, who let them continue to sell their unremarkable products to eager low-bidders; and 3dfx was mismanaged into oblivion a decade ago.
Re: (Score:2)
and 3dfx was mismanaged into oblivion a decade ago
And Nvidia picked up the pieces from 3dfx.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Matrox has been irrelevant ever since 3D became important.
Re: (Score:2)
Nvidia has been incredible for the consumer for a long time now.
I would like to see their quadro products come down in price though. They are ridiculously overpriced.
Re: (Score:3, Interesting)
Are you kidding me? This is great for consumers.
If Intel got their claws into the discrete graphics market (which is already showing signs of stagnation rather than growth), then they'd take a huge chunk of nVidia's and ATI's R&D budgets away. Unable to put as much money towards advancement, GPU generations (and their price drops) would come more slowly. Meanwhile, Intel would utilize their advanced (and cheap) fabbing to make a killing on that market, just as they do with IGPs.
End result? Slower progress, nVidia and A
Hrmmm. (Score:2)
Doesn't bode well for the future of Project Offset.
Intel's NotToBee GPU (Score:2, Funny)
Larrabee was a hedge anyway (Score:5, Interesting)
I kind of think Larrabee was a hedge.
If you think about it, around the time it was announced (very early on in development, which is not normal), you had a bunch of potentially scary things going on in the market.
Cell came out with a potentially disruptive design, Nvidia was gaining ground in the HPC market, and OpenCL was being pushed by Apple as a proposed standard for hybrid computing.
All of a sudden it looked like maybe Intel was a little too far behind.
Solution: Announce a new design of their own to crush the competition! In Intel-land, sometimes the announcement is as big as the GA. Heck, the announcement of Itanium was enough to kill off a few architectures. They would announce Larrabee as a discrete graphics chip to get gamers to subsidize development and....profit!
Lucky for them, Cell never found a big enough market and Nvidia had a few missteps of their own. Also, Nehalem turned out to be successful. Add all that up, and it becomes kind of clear that Larrabee was no longer needed, quite apart from the fact that it was a huge failure, performance-wise.
Intel is the only company that can afford such huge hedge bets. Looks like maybe another one is coming to attack the ARM threat. We'll see.
Question (Score:2)
Isn't this old news? (Score:2)
Mod parent "Likely." (Score:5, Informative)
Short of buying out Nvidia I don't see Intel having a consumer's chance in America of competing with AMD in the value sector for the next few generations of chips.
CPUs have been "fast enough" for years, but GPUs have not. AMD is going to laugh all the way to the bank being able to offer a $50 package that can run The Sims.
Re:Mod parent "Likely." (Score:5, Insightful)
CPUs have been "fast enough" for years, but GPUs have not.
Really? I think you might want to take a look at what most people use their GPUs for. Unless you are a gamer, or want to watch 1080p H.264 on a slightly older CPU, a 4-5 generation old GPU is more than adequate. My current laptop is 3.5 years old, and I can't remember ever doing anything on it that the GPU couldn't handle. As long as you've got decent compositing speed and pixel shaders for a few GUI effects, pretty much any GPU from the last few years is fast enough for a typical user.
Re:Mod parent "Likely." (Score:5, Informative)
As long as you've got decent compositing speed and pixel shaders for a few GUI effects, pretty much any ATI or nVidia GPU from the last few years is fast enough for a typical user.
Fixed that for you. Intel cards are fine for "normal" computer usage, but they still suck pretty bad at most games.
Re: (Score:2)
Unless you are a gamer, or want to watch 1080p H.264 on a slightly older CPU, a 4-5 generation old GPU is more than adequate.
But if you're a game developer, you have an interest in members of the public having PCs with more powerful GPUs, not something like the GMA 950 - essentially a Voodoo3 equivalent without even hardware T&L.
Re: (Score:2)
The Voodoo 3 was bitching for its time.
And its time was a decade ago.
Re: (Score:2)
*sigh* All right, I've got karma to burn.
The Voodoo3's limitations weren't trivial even for its time: 256x256 texture dimensions, forced 16-bit color, no real stencil-buffer support, a framebuffer size maxed out at 16 MB... A former 3dfx employee literally told me that it was a die-shrunk, bug-fixed Voodoo Banshee with an extra TMU popped onto its single lonely rendering pipeline. I owned one, and liked it tremendously, but it's based on technology that first debuted 13 years ago.
As for Doom 3: yes, a MesaGL --
Re: (Score:2)
All I really meant by that is incremental improvements in GPU performance hold significantly greater "value" than commensurate increases in CPU speed. A bottom barrel consumer computer is going to be able to handle anything a common consumer is going to throw at it except gaming. Even the lightest gaming is about impossible on the most common consumer-class (Intel) GPUs, and unfortunately Intel is still the graphics chip in the vast majority of consumer computers.
Now bottom-barrel will still mean "game capa
depends on GP-GPU (Score:2)
I would say that is currently true, but the average user may care if General-Purpose GPU (GP-GPU) computing takes off and they use applications that use it. For a speed example, I had what is essentially a math problem that kept a dual-core CPU busy (and yes, it was threaded) for 2 weeks, 3 days, 14 hours. The same problem tackled by 216 GPU shaders and one CPU took around 25 minutes. While neither
I realize most people aren't doing surface detail analysis involving trillions of points of data like I
Re: (Score:2)
You probably did the floating-point work in double precision (in fact 80-bit precision) on the CPU and in 32-bit precision on the graphics card. If this "increased error" doesn't bother you, GPUs offer better computing performance and much higher memory bandwidth than any common CPU (triple-channel DDR3-1600 has less than 40 GB/s, while a 5870 has over 150 GB/s and a Fermi has 177 GB/s).
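A tiny sketch of the precision gap being described - summing the harmonic series in 32-bit float (roughly what GPUs of that era offered) versus double. The numbers are illustrative only; the float accumulator stops absorbing the small terms long before the double one does.

```cpp
// Float vs. double accumulation of the same series. Illustrative only.
#include <cstdio>

int main() {
    const int n = 10000000;                       // ten million terms
    float  sum_f = 0.0f;
    double sum_d = 0.0;
    for (int i = 1; i <= n; ++i) {
        sum_f += 1.0f / static_cast<float>(i);    // small terms get rounded away
        sum_d += 1.0  / static_cast<double>(i);   // keeps far more significant digits
    }
    std::printf("float : %.7f\n",  static_cast<double>(sum_f));
    std::printf("double: %.15f\n", sum_d);        // ~ ln(n) + 0.5772 = ~16.695
    return 0;
}
```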
Re: (Score:2)
Your perspective is a little short-sighted... I remember when CPUs were barely able to play MP3s. Have you heard that CS5 will be GPU accelerated?
(Disclaimer: I work for NVIDIA)
Re: (Score:2)
CPUs have never been fast enough :) My quad-core could be a dual 8-core... and I'd still be at the mercy of the CPU while rendering.
Anyone doing music production at home or professionally, or 3D graphics at home or professionally... or Photoshop work at home or... well, you get it.
The CPU is and will always be a factor. That son of a bitch is never fast enough for me :)
Re: (Score:2)
"Because Intel is still battling NVIDIA in court over whether it has the necessary license to make chipsets for Intel's latest processors,
Apple can't pair these new Core i5 processors with the new NVIDIA 320M used in the new 13" MacBook Pro and white MacBook."
AMD is back by default
Re: (Score:2)
Not really (Score:5, Insightful)
Everyone gets up on Intel integrated GPUs because they are slow, but they are looking at it from a gamer perspective. Yes, they suck ass for games, however that is NOT what they are for. Their intended purpose is to be cheap solutions for basic video, including things like Aero. This they do quite well. A modern Intel GMA does a fine job of this. They are also extremely low power, especially the newest ones that you find right on the Core i5 line in laptops.
Now what AMD may do well in is the budget gaming market. Perhaps they will roll out solutions that cost less than a discrete graphics card but perform better than a GMA for games. That may be a market they could do well in. However, they aren't going to "kill" Intel by any stretch of the imagination. For low-power, non-gaming stuff, using minimal power is the key, and the GMA chips are great at that. For the majority of gaming, a discrete solution isn't a problem ($100 gets you a very nice gaming card these days) and can be upgraded.
Re: (Score:3, Informative)
Their intended purpose is to be cheap solutions for basic video, including things like Aero.
Well, it depends on your definition of basic video, of course. I mean, I've seen Intel GMA chipsets struggle to display a 1080p Blu-Ray movie. Given that consumers increasingly are going to be hooking up their laptops to TVs and other larger displays, saying, "Oh, that's not basic video," isn't going to cut it.
Re: (Score:2)
"Everyone gets up on Intel integrated GPUs because they are slow, but they are looking at it from a gamer perspective. "
Everyone SHOULD get up on Intel. Intel doesn't approach computers as a platform like it should, and that's a huge problem for the #1 player in the industry when it comes to CPUs and motherboard chipsets. Nvidia and AMD are the few companies approaching the PC as a _platform in itself_. The idea that the "gamer's perspective" doesn't matter is short-sighted and NAIVE. Imagine you told M
Re: (Score:2)
You are correct that discrete graphics isn't going anywhere, but your argument for why gaming performance is so important in integrated graphics contradicts that. I think the same argument applies there: the thing that people riding on Intel for the performance of their integrated gpu's and who are predicting that AMD will spank them in the integrated gpu space are forgetting is: 1) heat, 2) power, and 3) bw. When you integrate the GPU with the CPU you have to share the heat, power, and bw budget with th
Re: (Score:2)
I think AMD's Fusion in the first step will be the same as integrated graphics now, just cheaper. I hope to be wrong, though.
Re:Not really (Score:5, Insightful)
Funny, at this point, I thought the purpose of Intel graphics was to try and make sure that OpenCL never becomes a viable solution. Seriously, Intel does everything in their power to make their terrible graphics chips universal. They've done some pretty shady dealing over the years to try and make it happen. At this point, they have even put their GPUs right on the CPUs of their current-generation laptop chips. Apple and nVidia had to come up with dual-GPU solutions that can't be as power-efficient as an Intel-only solution, because they have to leave the Intel GPU also running and burning power. Intel is trying to sue nVidia out of the integrated chipset market. The examples go on and on.
Why? It isn't like Intel makes all that much money on their GPUs. It's nothing to sneeze at - Intel makes more money in a year on GPUs than I'll probably make in a lifetime - but that's peanuts on the scale of Intel. It's also not enough cash to justify the effort. But if you look at it as a strategic move to make sure that the average consumer will never have a system that can run GPGPU code out of the box, it starts to make a little more sense. Intel is trying to compete on the sheer terribleness of their GPUs, because if the average consumer had an nVidia integrated GPU in their chipset, then developers would bother to learn how to take advantage of GPU computing, which would marginalize Intel's importance.
I know it sounds kind of like a crazy conspiracy theory, but after the last several years of Intel-watching, it really does seem like quietly strangling GPGPU is a serious strategic goal for Intel.
Some more fuel (Score:3, Interesting)
There was a company called RapidMind, which built libraries & tools for writing code to target various GPUs, multi-core CPUs, etc. Something similar to OpenCL, I suppose, but easier to program (theoretically - I never actually tried it). Intel bought it and killed it.
Another company, Havok, developed a successful physics & AI library. They were going to port it to GPUs. Then Intel bought it and canceled the GPU port.
Re: (Score:2)
"Funny, at this point, I thought the purpose of Intel graphics was to try and make sure that OpenCL never becomes a viable solution."
Why write your applications in OpenCL now, to run on cards available now, when you could write them next year for a platform incompatible with anything else in existence?
Re: (Score:2)
Re: (Score:2)
AMD is performance-competitive at any price point below $150 for the processor (even more so as the mainboards are a bit cheaper). Intel is better at performance per watt (at any performance point), and AMD can't reach the performance of Intel's top processors/platforms.
Re: (Score:2)
So Intel's next cpu will the same suck video build in?
Intel are already building their sucky (though not quite as sucky as it used to be) video into their dual-core i3 and i5 chips (technically it's a multi-die module ATM, but from the system integrator's POV that doesn't really make any difference). With the next gen I believe they are planning to put it on-die on all their low- and mid-range chips (maybe the high end too; information on the next-gen high-end stuff seems very sketchy at the moment).
IMO Inte
Re: (Score:3, Insightful)
"Graphics cards with performance comparable to the best integrated graphics aren't exactly expensive" ...).
You can't find expansion graphic cards with performance comparable to the current integrated graphics - the integrated graphics are slower than anything else (less available memory bandwidth, fewer compute clusters,
Re: (Score:2, Funny)
ignoring you're complete inability to form a sentence
Hey everybody, 'tard fight! Come watch!
Re: (Score:2, Funny)
its potatoe you dumb fuck.
Re: (Score:2)
This is what happens when you cross Dan Quayle with Joe Biden.
Intel planed to put this tech into the next cpu an (Score:2)
Intel planned to put this tech into the next CPU, and this seems to be dead, so what will Intel do?
Re: (Score:2)
Re: (Score:2)
Heh... I'm none too happy with either of them. And "doesn't run nearly as well" is a relative concept - I've had decent results (though not as good as with NVidia) with the AMD parts I've got, though there ARE glitches with the proprietary drivers (which is where all the problems with AMD's stuff arise from, even on Windows).
So, while you've got decent overall performance with reasonably stable drivers, you've got to deal with a company that did what NVidia did with their packaging a while back.
On the oth
Re: (Score:2)
Same here. I am running an Asus motherboard with the Radeon HD 3300 graphics chipset, dual-booting 9.10, now 10.04, and Windows 7. I have drivers available under both OSes and the hardware is plenty good enough for everything that I have thrown at it (admittedly, I am not exactly playing Crysis, but it decodes 720p without a hiccup and plays all the games that I both have and like).
A discrete component (Score:5, Informative)
More directly, what the hell is "discrete graphics"?
It refers to a graphics processor as a separate (discrete) component of a computer system. A chip that does nothing but graphics can be more powerful than integrated graphics because the GPU circuitry doesn't have to share a die with the rest of the northbridge.
Re:I wonder (Score:4, Informative)
Re: (Score:3, Funny)
If you read something like Byte from the early '90s, you will find discussions about the relative merits of integrated and discrete FPUs.
yikes, my memory of installing an 80387 has been completely un-accessed for at least a decade. Thanks for the scrub. :)
Re: (Score:2)
and 80287 and 8087
meh
I miss those days some times.
then I use my computer to do something and I cease missing them.
Re: (Score:2)
Re: (Score:2)
Afaict, graphics for PCs come in two main categories*. Discrete graphics chips have their own memory and communicate with the rest of the system over standard buses (PCI, AGP, or PCIe, depending on age). Integrated graphics are integrated with some part of the chipset and don't usually have any dedicated RAM.
I think the term originated in the laptop market. Many laptops have discrete graphics solutions soldered to the motherboard.
*There are graphics setups that don't fall nicely into either category. Server
Re: (Score:2)
Re: (Score:2)
The i740 wasn't integrated. It was a discrete AGP-bus card.
God I remember playing POD on that thing.