Intel Kills Consumer Larrabee Plans
An anonymous reader tips news that Intel has canceled plans for a consumer version of their long-awaited and oft-delayed Larrabee chip, opting instead to use it as a development platform product. From VentureBeat:
"'Larrabee silicon and software development are behind where we had hoped to be at this point in the project,' said Nick Knuppfler, a spokesman for Intel in Santa Clara, Calif. 'Larrabee will not be a consumer product.' In other words, it’s not entirely dead. It’s mostly dead. Instead of launching the chip in the consumer market, it will make it available as a software development platform for both internal and external developers. Those developers can use it to develop software that can run in high-performance computers. But Knuppfler said that Intel will continue to work on stand-alone graphics chip designs. He said the company would have more to say about that in 2010."
Oh rats (Score:2)
Re:Oh rats (Score:5, Insightful)
I would say ATI/AMD is about to become the leader. Intel is making it more difficult to ship mobile systems without the craptastic Intel graphics cards. Larrabee was supposed to be a decent-performance GPU that would act almost like a co-processor.
AMD has slightly slower CPUs, but their integrated graphics blow the snot out of the Intel ones, and are getting even better. What good is a super-fast CPU if you can't play any games, or even do basic stuff, without leaning on the power-hungry CPU?
Future is Fusion (Score:2)
Re: (Score:3, Interesting)
Re: (Score:2)
Craptastic as the Intel cards may be in overall performance terms, I could happily take any of Intel's integrated parts that have decent Linux support on my next desktop, even if that meant a massive reduction in performance. I have an Xbox 360 for playing games on, and I would love for my desktop to Just Work as well as my Eee does with Linux. That said, with ATI cards getting better and better support under Linux, it is quite possible that they'll be the best option by the time I upgrade again...
I disagree (Score:2, Informative)
Many people really don't care about their graphics card. If you don't do games, an Intel chipset graphics unit works fine. It accelerates the shiny interface in Windows 7 and everything is nice and responsive. For business uses, this is plenty.
Ok well if you do care about games, then you want a discreet graphics solution. Integrated solutions will just never do well. The big reason is memory: you can make your card as fast as you like, but if it shares system memory it is severely bottlenecked. Graphics cards need
Re: (Score:2)
> Ok well if you do care about games, then you want a discreet graphics solution.
The graphics hardware for games tends to be rather indiscreet. Big, rapidly spinning fans; hot; noisy; lots of shiny, glossy metal; and big.
See the second pic:
http://techreport.com/articles.x/17986 [techreport.com]
Integrated graphics solutions (which are nondiscrete) tend to be way more discreet. Just one small chip (or even just part of another chip), quiet, fanless, small.
mod parent up (Score:2)
My wife and I play WoW, but most users prefer a Wii or PS3 if they want to play games.
It's frustrating, and I agree that the Intel chipsets and integrated chips (not true video cards) put desktops 5-6 years behind and piss off game developers, pushing them to port only to consoles.
The netbook phenomenon shows this trend toward slim, boring graphics that are cheap, cheap and, uh, cheap.
Most game developers have left the PC as a result, due to angry kids whose parents get them a nice i945-graphics-chipset computer for
Re: (Score:1)
ATI are about to become the leader? They are already the leader in all categories: perf/$, perf/W, and absolute perf, at all price points. See the list below. For gaming performance, the GFLOPS ratings are a rough (+/- 30%) but good-enough approximation for comparing ATI vs. Nvidia. For GPGPU performance, the GFLOPS rating is actually unfair to ATI, because Nvidia's GT200 microarchitecture causes it to be artificially inflated (they assume a MUL+MAD pair executing 3 floating-point ops per cycle, whereas ATI assumes a
Re: (Score:2)
I find it very funny that my two-generations-old 9800GTX+ has more power than the pretty new GTS250.
And I can get it for 89 bucks off Pricewatch. So for a pair running SLI, you get roughly the same performance as the card that costs 40 bucks more (the GTX260).
Biggest difference is DX version support.
Glad my bet on the 9800GTX+ a couple years ago was a good one to make!
Re: (Score:2)
The GTS250 is a rebranded 9800GTX+. Which itself was a die shrink of the 8800GTS, with higher clocks. So the design goes back to 2006 with the G80, through to the G92 in late 2007.
Re: (Score:2)
I don't play games on my laptop, but I do run compiz-fusion with many of the features enabled. It's very eye-candy-heavy, and my integrated Intel graphics chip keeps up just fine. My CPUs don't bear much load at all. I don't think things are as grossly out of proportion as you make them out to be. 5 years ago, yes. Today, not so much.
the performance is there (Score:3, Interesting)
http://www.brightsideofnews.com/news/2009/12/2/intel-larrabee-finally-hits-1tflops---27x-faster-than-nvidia-gt200!.aspx [brightsideofnews.com]
Way faster than AMD's or Nvidia's hottest...
Re: (Score:1, Informative)
Read the comments. It looks like a lopsided comparison, with other folks getting higher results from e.g. ATI 4800, 5800.
Re:the performance is there (Score:5, Insightful)
Re: (Score:2)
To be fair, the hardware exists, it is just not commercially available (and now we have learned that it won't be available in the future either, unless you are in a research group). My guess is that the hardware works just fine, but the programming model makes it much harder than anticipated to reach the nominal performance for practical problems. Kind of like the i860 as cheesybagel points out.
Of course this is what a lot of graphics researchers thought might happen ever since Intel had the Larrabee paper
Re: (Score:3, Interesting)
So what we have here is another Itanium: looks good on paper but impossible to fully utilize.
That constitutes a failure if you ask me.
Actually I hold the exact opposite view. The hardware isn't ready, and by not ready I mean the performance isn't as high as expected due to design issues.
If I am correct, Intel doesn't want a repeat of the first-gen Itanium, where the brand name was blemished at release by lower-than-expected performance. That perception that IA-64 is slow continues to haunt Intel up to this
Re: (Score:2, Insightful)
Re: (Score:1)
No, Intel is very good, but sometimes the best-laid plans of mice and men gang aft agley, and all that.
It's cool that they're not afraid to hang themselves out there like that. If you want to see something new you gotta scratch your feet on a new road.
Re: (Score:1)
Re: (Score:1)
Re: (Score:1)
I am not going to reply to you again as Anonymous Coward on Slashdot. Commenting on my blog requires a WordPress logon. It would be great if you got a Slashdot user name. This will enable people
Re: (Score:1)
Hi APK - thanks for the posts. I see them.
A few things.
Re: (Score:1)
Trolls thrive on attention. Don't feed them and they go away. Validate them with responses and they will swarm you as if they were ducks and you were unwrapping loaves of bread.
This is pretty basic online stuff. You have been sheltered, haven't you?
Re: (Score:1)
Re: (Score:1)
Re: (Score:1)
My apologies for not being more specific. I should have said: "I'm not going to post lengthy replies here on Slashdot."
About supplying email aliases of MSFT people: it's just the wrong thing, on many levels, for me to post the email address of MSFT employees on any public forum.
The most effective way for you to get in touch with teams at MSFT and to have an actionable conversation with them is to use the Microsoft Connect site.
Also, I believe I said in another post somewhere that I en
Re: (Score:1)
I still think you and the other Anonymous Coward are the same guy.
Yup, you caught me red handed. I'm a frequent mis-typo-ist. I live with it...
Again, I didn't criticize your spelling or grammar. It's your polemical, and now very argumentative and picayune, style.
It’s almost as if you are trying to win some court room trial. I told you in my original post that I sent email about this internally and to be patient in waiting for an answer.
Despite your pushy and obnoxious approach - I t
Re: (Score:1)
Hi APK,
I'm unclear on what you are arguing about. I'm not arguing with you, and I'm not asking you to apologize for anything.
Of course, larger files take longer to load.
In any case, like I said before, I appreciate your questions and points about the hosts file. They are interesting, and I'm working on finding an answer. Again, please be patient.
At this point, I don't know why the 0 thing was removed from the hosts file parser. Maybe it was an oversight, maybe there was a good reason for it. Maybe you are right a
Re: (Score:1)
sounds more like gma video good on paper but the d (Score:2)
Sounds more like GMA video: good on paper, but the drivers / performance are just not there.
Re: (Score:1, Insightful)
If I am correct, Intel doesn't want a repeat of the first-gen Itanium, where the brand name was blemished at release by lower-than-expected performance. ...
It's not as if Intel needs Larrabee in the near future anyway; AMD doesn't have anything significant in the near future either, and even if they do, with Intel's brute engineering capability they will just pull a Core 2 again.
Another possibility is that no game company is able to support Larrabee's architecture.
Intel is great at manufacturing and CPUs, but they couldn't make a decent GPU and driver if their life depended on it. Until Intel can produce a GPU that is competitive with ATI / nVidia, any pie-in-the-sky talk (like Larrabee) is just vaporware and should be largely ignored.
Re:the performance is there (Score:4, Insightful)
Vaporware is not faster than existing products.
Vaporware is always faster than existing products.
Re: (Score:1)
Re: (Score:2)
Re: (Score:1)
Mod parent up (Score:2)
You're an AC, but your use of "we" is interesting in the grammar analysis realm. You are clearly posting from a subjective point of view and your post is worthy of note.
I think your opinion is interesting, though I don't share it.
Re: (Score:2)
That was actually me; I had forgotten to log in. So yes, 'we' is telling: it means MSFT.
-Foredecker
Re: (Score:1)
Well in that case... you're probably right about Larrabee. Microsoft's scheduler isn't ready for this hardware, but OS X has Grand Central Dispatch, so it would suit Microsoft for it to not come out yet. But it does seem more likely that Intel's unable to get it to work, or can't get the SDK ready, than that someone convinced them it was a bad idea.
About Snapdragon [engadget.com] I'm not so sure... [semiaccurate.com]
Re: (Score:2)
What? The scheduler for Larrabee doesn't run on the host; it runs on the graphics chip (cite [cnet.com]). Note, this is different from the scheduling that happens in a traditional graphics driver (shader compiling, etc.). Even if it did run on the host, it would have its own scheduler, just like DirectX 10 adapters do today with WDDM drivers, and just like OS X.
What is so special about Grand Central Dispatch? It is a decent thread pool implementation. NT has had one of those for a LONG time (cite [microsoft.com]). The .NET base class libra
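For readers who haven't used it, here is a minimal sketch of queuing work to that long-standing NT thread pool via the Win32 QueueUserWorkItem call. This is only an illustration of "a thread pool the OS gives you", nothing Larrabee- or GCD-specific, and error handling is kept to the bare minimum:

    #include <windows.h>
    #include <stdio.h>

    /* Work-item callback: runs on a thread-pool thread owned by the OS. */
    static DWORD WINAPI SayHello(LPVOID context) {
        printf("hello from the NT thread pool, arg=%d\n", *(int *)context);
        return 0;
    }

    int main(void) {
        int arg = 42;
        /* Queue one work item to the process-wide pool; no explicit
           thread creation or management needed. */
        if (!QueueUserWorkItem(SayHello, &arg, WT_EXECUTEDEFAULT)) {
            fprintf(stderr, "QueueUserWorkItem failed: %lu\n", GetLastError());
            return 1;
        }
        Sleep(100); /* crude: give the pool thread time to run before exiting */
        return 0;
    }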
Re: (Score:2)
Secondly, the 1 TFLOPS figure is incorrect. The real one is higher. Intel claims up to 32 cores @ 2.0 GHz, so with the 512-bit LRBni instruction set that means Larrabee could reach 2 TFLOPS. But wait! Don't get your hopes too high, because your article conveniently fails to point out that, as of right now, ATI's HD 5970 rates 4.6 TFLOPS, 2.3x faster than Larrabee! So we have vaporware that is effectively already theoretica
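Spelling out the arithmetic behind that 2 TFLOPS number (a rough sketch: the 32 cores @ 2.0 GHz come from the post above; treating a 512-bit vector unit as 16 single-precision lanes and a fused multiply-add as 2 FLOPs per lane per cycle are my assumptions about how such peak figures are usually counted):

    #include <stdio.h>

    int main(void) {
        double cores = 32, clock_ghz = 2.0;
        double lanes = 512.0 / 32.0;            /* 16 single-precision lanes */
        double flops_per_lane_per_cycle = 2.0;  /* multiply + add */
        double peak_gflops = cores * clock_ghz * lanes * flops_per_lane_per_cycle;
        printf("theoretical peak: %.0f GFLOPS (~2 TFLOPS)\n", peak_gflops);
        return 0;
    }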
Re: (Score:2)
A practical GFLOPS rating that should be reachable by an HD 5970 in the SGEMM benchmark is about 1.2 TFLOPS when scaling the numbers from the FireStream 9270 (300 GFLOPS in SGEMM), because the HD 5970 can execute 3.87x more instructions per cycle than the FireStream 9270 ((3200 SPUs * 725 MHz) / (800 SPUs * 750 MHz)), and its memory
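For what it's worth, the scaling described above works out like this (back-of-the-envelope only; the SPU counts, clocks, and the 300 GFLOPS SGEMM figure are taken straight from the post above, not independently verified):

    #include <stdio.h>

    int main(void) {
        /* Scale a measured FireStream 9270 SGEMM result up to the HD 5970
           by the ratio of (stream processors x clock). */
        double firestream_sgemm_gflops = 300.0;
        double ratio = (3200.0 * 725.0) / (800.0 * 750.0);  /* about 3.87x */
        printf("estimated HD 5970 SGEMM: %.0f GFLOPS\n",
               firestream_sgemm_gflops * ratio);
        return 0;
    }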
Re: (Score:1)
Can SGEMM make use of FMA (how you get 2 TFLOPS from a system that can only do 1 TFLOPS without FMA)?
FMA = Fused Multiply Add, for readers who don't have the time to Google it.
Re: (Score:2)
Yes, I read a recent white paper comparing Larrabee/NVidia/AMD on medical computing applications, and Larrabee was 5-6 times faster. The problem is the huge amount of graphics-specific software that needs to be developed for Larrabee, while NVidia and AMD have two advantages there: First, they've done it incrementally over several years. Second, they do a lot of that work on dedicated hardware, which is also more efficient, but only for said graphics applications. For everything else, Larrabee is the only s
Re: (Score:1, Flamebait)
The simple truth is Intel can't do anything but CPUs (and maybe chipsets). Anytime they go outside of their comfort zone, they get smacked around.
Re: (Score:3, Interesting)
Last I checked, their flash disks were pretty kickass.
Re: (Score:1, Interesting)
Intel delivered the first sub-40nm flash memory [intel.com] and has delivered two generations of top-flight solid state drives [anandtech.com]. Intel has [highbeam.com] always [hothardware.com] been strong [encyclopedia.com] in flash memory.
Re:Oh rats (Score:5, Interesting)
NVidia hasn't let ATI do anything. Actually, NVidia is dealing with a series of problems, from serious packaging problems last year to TSMC yield issues now. ATI/AMD has been really effective lately; NVidia historically had a dominant position, but definitely not a monopoly, and I'll say that they have slipped a lot recently. Things change fast in the GPU race, so NVidia may recover quickly. But ATI/AMD have a solid amount of momentum, and the only real execution mistake I've seen them make in GPUs in the last few months has been relying on TSMC.
Take a look at the Dell Zino HD: it combines AMD's 'just enough CPU' with a top-end GPU to make a very compelling system. Intel has cut NVidia out of the chipsets, so they don't get the synergy that AMD has with ATI.
AMD is definitely better situated for the long haul than NVidia, and actually may be better off than Intel for complete systems.
Re: (Score:1, Informative)
NVidia historically had a dominant position
I suppose "historically" is a relative term. I remember when just about EVERY graphics card was ATI.
ATI had the OEM market in the bag for quite a while.
From 1999: [findarticles.com]
What this also does is put a dent in the armor of ATI Technologies Inc., Toronto, Canada. ATI is the PC graphics market share leader with revenues close to $1 billion and has been steam rolling over the competition in the PC space for the past year or so. This includes S3, Trident Microsystems, 3Dfx, 3Dlabs and even Intel. The only companies to put up much of a fight was Nvidia, which is much smaller than ATI, and Montreal, Canada-based Matrox Graphics Inc., which has a similar business model to ATI.
Until the nVidia juggernaut took off [zdnetasia.com] in 2000:
Nvidia has overtaken ATI Technologies as the biggest maker of chips to enhance graphics on desktop computers, according to a new study by industry consultant Mercury Research.
In the third quarter, Nvidia chips were in 48 percent of all desktop computers, more than doubling its market share from 20 percent in the third quarter last year, Mercury said. ATI slipped to 34 percent from 39 percent.
Re: (Score:1)
Year 2000? Get with the times; since then, nVidia has been the market leader.
Re: (Score:2)
I might agree with you if ATI/AMD would finally get serious about producing drivers that aren't complete crap. Their hardware is fine, but the Linux drivers, as well as the OpenGL drivers on Windows, just plain suck.
Re: (Score:1)
Re: (Score:3, Informative)
I might agree with you if ATI/AMD would finally get serious about producing drivers that aren't complete crap. Their hardware is fine, but the Linux drivers, as well as the OpenGL drivers on Windows, just plain suck.
It's not just the video drivers. ATI also has a horrible software stack (SDK, runtime, compiler and documentation) for their Stream GPGPU computing architecture, which is why everybody uses NVIDIA and its excellent CUDA. Generally speaking, ATI has excellent hardware, but such hardware is useless if you don't have a matching software to exploit it.
Re:Oh rats (Score:4, Informative)
Don't forget about the NVidia Ion platforms. They also use a "just-enough" CPU in Intel's Atom, with higher end NVidia GPUs to run nicely integrated HD set-top boxes. Nice little platforms for MythTV frontends.
Lol at the idiots (Score:2, Funny)
So they intend to take a product whose chief advantage was that it could run old x86 code, and only sell it to people who are designing new software? Am I the only one who sees a problem with this?
In other words... (Score:5, Insightful)
A nicer way of saying:
Uhm, guys, remember how we were supposed to ship a year ago and said recently we will ship a year from now? Well, add 5 to that now...but we will provide and totally kick ass, promise.
Re: (Score:2, Funny)
Re: (Score:3, Insightful)
Hm, yeah... a variant of FUD; spreading wonderful stories about a future product just to stall / eradicate the competition; just so potential clients will wait.
What doesn't add up in this case is that Intel, at this point in time, seems quite cautious in their claims about Larrabee; they hardly claim anything, and seem themselves very skeptical about it, even in the face of a major delay and re-engineering.
Re:In other words... (Score:5, Interesting)
Re: (Score:1)
We're going into TMI territory. I've worked in Intel labs. The people there are first rate. There's no way to describe how much more fun it is to deal with folks who can think.
The executive suite there could use a broom. That's all I can say about that.
We'll have our progress with or without Intel. If Intel gets behind enabling individuals to do more without worrying about how much that "cannibalizes" their historical markets, they will have learned what I tried to teach them. I did try.
Re: (Score:3, Insightful)
Oh, I totally agree that Intel has some top-drawer engineers. I've heard their compiler division is first rate (which company was it that they bought for their patent portfolio again?). Intel's production process group is also tops and has been instrumental in keeping them ahead of the curve. Core is a testament to their CPU and chipset design teams. I've just never seen any indication that their graphics teams are of
Larrabee = Graphics Chip competing w nVidia (Score:5, Informative)
In case you've forgotten what a Larrabee was (like I had), it was Intel's planned graphics / vector processing chip, competing with nVidia and AMD / ATI graphics systems. Here's the Wikipedia article [wikipedia.org].
Re: (Score:3, Insightful)
I guess this means that the only option we have to get half-decent graphics with an Intel processor is an nVidia chipset. However, that relationship looks a bit rocky, and very soon we'll probably only be left with the incredibly shitty Intel integrated graphics systems that work passably (i.e. you can display a Vista/7 desktop with it a
Re: (Score:1, Insightful)
Re: (Score:2)
Oh ye of major n00b, any laptop that advertises a discrete solution to graphics uses an MXM slot.
Oh, wait, I bet you use APPLE products. They won't use MXM, so you're STUCK.
Re: (Score:2)
I'm not one for conspiracy theories, although I wouldn't be terribly shocked if Intel surprised everybody and launched Larrabee a few months after AMD releases a competing product.
In the past, Intel has deliberately stifled product development and engaged in anticompetitive behaviors that would make even Microsoft look twice (and has been found guilty and forced to pay up for it). Remember how quickly Intel brought consumer x86-64 chips to market after AMD proved that the platform was technically and
Re: (Score:1)
Great, just in time for Duke Nukem Forever! (Score:5, Funny)
Re: (Score:2)
... running GNU Hurd, of course!
Re:Great, just in time for Duke Nukem Forever! (Score:5, Funny)
Imagine a beowulf cluster of old memes. Oh wait, I don't have to, it's Slashdot.
Re: (Score:2)
Imagine a beowulf cluster of old memes. Oh wait, I don't have to, it's Slashdot.
If Intel has one-chip cloud computers, then slashdot has one-post beowulf clusters.
Re: (Score:2)
(Gnu meme is gnu.)
So the next mini, low end imac and 13" macbook's w (Score:2, Interesting)
So the next Mini, low-end iMac, and 13" MacBook will be stuck with shit video, and the Mac Pro will start at $3000 with 6-core CPUs.
Will Apple move to AMD just to get better video in low-end systems?
Re: (Score:1, Informative)
Apple already dropped GMA for the low-end stuff; they're using the GeForce 9400M instead. They're also using Radeons on most iMac models.
Re: (Score:2)
i3/i5 cut off nvidia and the low end cpus have gma (Score:2)
The i3/i5 cut off Nvidia, and the low-end CPUs have GMA built in, so Apple will likely put an i3 in the Mini and stick it with crap video at $800 as well.
Re: (Score:2)
Re: (Score:3, Interesting)
but at least they are dedicated graphics solutions
Actually, the 9400M is not. It uses system memory, but does a much better job than Intel. It also acts as the memory controller and does system I/O. The reason for the parent's comments is that all future Intel CPUs will have integrated memory controllers (like the i7 and i5) and an integrated GPU. Performance will suck, but it will make for cheap systems. This will make it difficult for system builders to make a low-end system with good graphics performance, as the market for such systems will be small.
Re:So the next mini, low end imac and 13" macbook' (Score:2)
"The new macbook pro, now with AMD... and only 3 hours of battery"
Somehow I think AMD still has a few things to learn about mobile parts, and that's the Mac's main market.
Re:So the next mini, low end imac and 13" macbook' (Score:1)
It's likely that Apple will have to use discrete graphics on all but the lowest-of-the-low (a theoretical $799 MacBook) in order to not regress graphically. NVIDIA GT240 could be an option as a discrete replacement for the integrated 9400M.
It will require motherboard redesigns, but the CPU will force that anyway. The Intel I/O hub for the new systems is quite small, so there should be room.
However, Apple has regressed graphically in the past (Radeon 9550M -> Intel's 2006 rubbish integrated graphics). It wo
Wow... shock horror (Score:5, Funny)
Re: (Score:2)
Just remember:
"Thou shalt NOT rootkit The Lord thy Admin."
Re: (Score:1)
Mis-reported, I think. (Score:2, Insightful)
This is being mis-reported or mis-communicated by Intel, I believe.
The first version of Larrabee silicon isn't going to consumers, that's all.
From the consumer's perspective, it's a delay. Yet to be seen if it's fatal.
Otherwise, who'd want to use it to develop software?
" In other words, it's not entirely dead." (Score:1)
"I'm not dead yet!"
Re: (Score:2)
Parrot-like (Score:2)
Maybe it's just resting?
Stunned?
Pining for the fjords?
I'll show myself out.
Bad for Linux (Score:2)
Intel has shown real commitment to supporting their video hardware on Linux, with full-time staff [intellinuxgraphics.org] employed to produce high-quality open source drivers, in addition to providing open specifications for (most of) their contemporary hardware. Unfortunately this hardware provides only limited 3D acceleration. I was hoping that Larrabee would combine the two and provide vendor-supported, open, high-performance accelerated 3D for Linux.
So much for that happening anytime soon...
I can't understand why Intel cede
Re: (Score:2)
They don't have the experience, and all the good computer graphics engineers are at Nvidia and ATI.
I wonder if Bangalore has anything to do with it. (Score:5, Interesting)
I think the announcement of the 48-core Intel 'Bangalore' chip [slashdot.org] just recently is not a coincidence.
When I first read about the Larrabee chip, I thought the decision to make it a cache-coherent SMP chip was simply insane: architectures like that are very difficult to scale, as the inter-core coherence chatter grows roughly with the square of the number of cores. Remember how Larrabee was designed around a really wide 1024-bit ring bus? I bet that's required because otherwise the cores would spend all of their time synchronizing with each other.
So, Larrabee is effectively cancelled, but only a day or two after Intel announced an almost identical-sounding part without cache coherence! It sounds to me like they've given up on 100% x86 compatibility, and realised that a chip with some extra instructions for explicit software-controlled memory synchronization and message passing would scale way better. Without cache coherence, a "many core" chip is basically just an independent unit repeated over and over, so scalability should be almost infinite and wouldn't require design changes for different sizes. That sounds like a much better match for a graphics processor.
While Intel kept their cards relatively close to their chest, from all of the presentations I've seen, no first-gen Larrabee chip could scale beyond 24 cores even with a 1024-bit bus, while the new Bangalore chip starts at 48 cores. There's no public info on how many lanes Bangalore has in its on-chip bus, but based on the bandwidth of its 80-core experimental predecessor, I'm guessing it's either 32-bit or 64-bit (per core).
Re: (Score:2)
The problem is, a many-core non cache-coherent x86-like system isn't particularly interesting. The big advantage of Larrabee was that you could treat it like a normal SMP system, including (presumably) running standard multithreaded C code on it. Once you have to deal with memory synchronization explicitly, Larrabee starts to look a lot more (from a programming standpoint) like Fermi, Cypress or whatever other Nvidia/ATI GPUs are out at the time.
There's nothing magic about x86/AMD64 in the HPC world. It's a
Re: (Score:3, Interesting)
The problem is, a many-core non cache-coherent x86-like system isn't particularly interesting. The big advantage of Larrabee was that you could treat it like a normal SMP system, including (presumably) running standard multithreaded C code on it. Once you have to deal with memory synchronization explicitly, Larrabee starts to look a lot more (from a programming standpoint) like Fermi, Cypress or whatever other Nvidia/ATI GPUs are out at the time.
There's nothing magic about x86/AMD64 in the HPC world. It's attractive because it is cheap and has good performance. Clusters can, have been, and still are built using POWER and other architectures.
But for "embarrassingly parallel" problems, which are the target application for these chips, cache coherence is often not necessary, and simply imposes a design burden. There are lots of problems where it's better to have 1000x the performance than 1/2 the developer time.
It may not even cost extra development time: others have pointed out that the Unix "fork" mechanism combined with "copy-on-write" at the memory page level would also work, and wouldn't require cache coherency. Similarly, any existing cod
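For anyone who hasn't seen the trick, here is a minimal sketch (my own illustration, nothing to do with Intel's tooling) of fork plus copy-on-write: the children lazily share the parent's read-only pages and each writes only its own private result, so no cache-coherent shared writes are needed between workers:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define N 1000000
    #define WORKERS 4

    int main(void) {
        /* Large read-only input; after fork() the children share these
           pages copy-on-write, so nothing is physically duplicated unless
           someone writes to it. */
        static double input[N];
        for (int i = 0; i < N; i++) input[i] = (double)i;

        for (int w = 0; w < WORKERS; w++) {
            pid_t pid = fork();
            if (pid == 0) {
                /* Child: sum its own slice; there is no shared writable
                   state, so the workers never need coherence between them. */
                double sum = 0.0;
                for (int i = w * (N / WORKERS); i < (w + 1) * (N / WORKERS); i++)
                    sum += input[i];
                printf("worker %d: partial sum = %.0f\n", w, sum);
                _exit(0);
            }
        }
        for (int w = 0; w < WORKERS; w++) wait(NULL);
        return 0;
    }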
Re: (Score:3, Insightful)
Wow... thanks for your insight! Should have known Intel would be logical even about their failures, and roll them over into something that has a chance of applicability. The only thing I wish they would do is skip the 64-bit crap and make 128-bit architectures that are compatible with both 32- and 64-bit predecessors. It would ease the development of new applications, since the lifetime of 128-bit archs would be decades, as opposed to developing all 64-bit apps only to have 128-bit archs appear in 5-10 years.
I'm not sure if you're trolling or not, but 64-bit memory capacity is not "twice" as big as 32-bit, it's 4.3 billion times as big. That's more than just 5 to 10 years of Moore's law, that's more like 50 years. Physical bus widths have nothing to do with architecture bitness either, there are memory buses for 64-bit architectures that only have a few pins.
there's a difference between "all dead" and (Score:1)
Likely the Intel PowerVR partnership (Score:1)
Larrabee? (Score:2)
Re: (Score:2)
Re: (Score:2, Insightful)
Rendering graphics is already done, because it's easy to split the task of rendering a bunch of pixels into pixel-sized chunks. Each small thread can read from the same shared memory (the scene graph and textures, etc.) and write to a
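A minimal sketch of that idea (illustration only, plain pthreads rather than any real GPU pipeline): every worker reads the same shared inputs, here faked by a toy shading formula, and writes only its own horizontal band of the framebuffer, so the slices never contend:

    #include <pthread.h>
    #include <stdio.h>

    #define WIDTH   640
    #define HEIGHT  480
    #define THREADS 4

    static unsigned char framebuffer[HEIGHT][WIDTH];

    /* Each worker shades one horizontal band; bands never overlap, so the
       workers share read-only inputs and write disjoint memory. */
    static void *shade_band(void *arg) {
        int band = *(int *)arg;
        int y0 = band * (HEIGHT / THREADS), y1 = y0 + HEIGHT / THREADS;
        for (int y = y0; y < y1; y++)
            for (int x = 0; x < WIDTH; x++)
                framebuffer[y][x] = (unsigned char)((x ^ y) & 0xff); /* toy "shader" */
        return NULL;
    }

    int main(void) {
        pthread_t tid[THREADS];
        int band[THREADS];
        for (int i = 0; i < THREADS; i++) {
            band[i] = i;
            pthread_create(&tid[i], NULL, shade_band, &band[i]);
        }
        for (int i = 0; i < THREADS; i++)
            pthread_join(tid[i], NULL);
        printf("rendered %dx%d image in %d bands\n", WIDTH, HEIGHT, THREADS);
        return 0;
    }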
This FP was brought to you by Intel's Release Team (Score:1)
Intel; 5th December 2009, for immediate release:
Intel Corp. today announced the release of their new FirstPost processor(tm), known internally as FristPsot and FrostyPiss. This new processor will let you post first on any web internet board. Demos of this processor's achievements have been given showing astounding performance on sites such as Slashdot.
The FirstPost processor(tm) will be available in Q1 2010. Sorry, 2014.
Re: (Score:1)
The ATI patents were not included in the recent settlement.