All GeForce 8 Graphics Cards to Gain PhysX Support
J. Dzhugashvili writes "Nvidia completed its acquisition of Ageia yesterday, and it has revealed exactly what it plans to do with the company's PhysX physics processing engine. Nvidia CEO Jen-Hsun Huang says Nvidia is working to add PhysX support to its GeForce 8 series graphics processors using its CUDA general-purpose GPU (GPGPU) application programming interface. PhysX support will be available to all GeForce 8 owners via a simple software download, allowing those users to accelerate games that use the PhysX API without the need for any extra hardware. (Older cards aren't CUDA-compatible and therefore won't gain PhysX support.) With Havok FX shelved, the move may finally popularize hardware-accelerated physics processing in games."
Re: (Score:1)
Re: (Score:2)
Having said that, I use Linux, so my next card will probably be an nVidia because of the better drivers, unless ATI gets better in the one/two/three years until I buy a new card.
It'll be interesting to see what they can do to really exploit PhysX and make it worthwhile, though.
Re:It's the "Ray" experience. (Score:5, Interesting)
Re: (Score:3, Interesting)
Re:It's the "Ray" experience. (Score:5, Informative)
Amen. (Score:1, Informative)
I've been a happy owner of NVidia cards ever since.
Re: (Score:2)
Not to mention it functions like one, too. A few releases ago, the driver broke the ability to display 1680x1050, and until recently it couldn't suspend with any kernel using the SLUB allocator, which became the default in 2.6.23 but was already present in 2.6.22. What a joke.
Re: (Score:2)
AMD has open sourced their Radeon drivers. What more could you ask for than that?
Open Source != Holy Grail (Score:1)
A GPL licence? Better support for Linux from AMD themselves?
Re: (Score:3, Informative)
And the closed-source, binary module is still making progress while all that other stuff happens.
Re:Open Source != Holy Grail (Score:5, Informative)
Re: (Score:2)
They still haven't released updated docs for 3D/video rendering, etc.
The reason they have not released docs for video rendering, and won't for the current generation of cards, is microsoft.
The video rendering hardware is intertwined with their DRM enforcement hardware. On MS Windows that's all fine and dandy, because MS loves DRM and MS drivers are closed source. But ATI is afraid that if they release the specs for the current video/DRM combo hardware, that will compromise their DRM on Windows. Security through obscurity, blah, blah, blah.
Their solution is for their next g
Re: (Score:2)
Yes, the open sourcing might be useful, but nVidia works more smoothly with DirectX, Compiz-Fusion and media played through anything other than VLC (
Re:It's the "Ray" experience. (Score:4, Insightful)
The last big news I saw was not that they OSed the drivers, but that they had given partial card specs and promised more.
Please note that Matrox did the same thing in 1999: they gave partial card specs (insufficient for implementing any 3D) and promised more, but never delivered. Lots of Linux users got suckered into buying paperweight G200s (including myself) back then. I will buy a card that performs as advertised NOW (whether it comes with an open source driver or not), not a card that the manufacturer promises will eventually perform as advertised but can't at the moment.
Re: (Score:2)
Re:It's the "Ray" experience. (Score:4, Insightful)
So did Matrox...
Re: (Score:2)
Bull! I used to routinely play Quake3, as well as TuxRacer (full version), with a Matrox G200 card in my Linux box. See this site [ntlug.org], for instance; the documentation may not have been the best, but it was enough.
I know they had problems getting an OpenGL driver out for Windows; I'm not sure they ever got it right, and a lot of people were pissed, but
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Insightful)
Re: (Score:2)
No, the way it is meant to be is a game that I play on my computer, not an advert for a specific card manufacturer!
Re: (Score:1)
Re:It's the "Ray" experience. (Score:5, Interesting)
Their driver support lags behind nVidia by years, and when they "support" a feature, it will often be implemented in software with no warning that it is - so instead of failing with a useful error message, all you know is that *something* you did causes your system to render at one frame per minute and become completely unusable.
I have spent weeks bending over backwards and jumping through hoops to get our ATI test card to agree with me, just because it is so darn unresponsive when anything goes wrong. A non-power-of-two texture in one of your models because the modeller apparently ignored your instructions? No warning, no error - just a hung machine where it takes five minutes to kill the process.
Give me nVidia any day.
Re: (Score:2)
They came out with OGL 2.0 and 2.1 cards well before ATI, as well (but ATI tends to outperform them when they finall
Re: (Score:2)
Re: (Score:2, Interesting)
You might like to hold off for a while, then. NVidia Linux support is very poor at the moment; the current drivers work fine for 7-series cards and some older 8-series cards, but they are hopeless for anything from the 8800GT onwards.
Since I upgraded to an 8800GT from an old 7-series card, performance in Windows has rocketed but graphics in Linux have gotten slower, and the display is full of glitches to
Think of the Earth! (Score:1)
So Ageia's stocks go up, nVidia's down. I hope I didn't plant any ideas into the heads of the green peacemakers.
Re: (Score:1)
I replaced all of my house's lighting with CFLs, so the impact will be negligible. In fact, after the switch I am using less power overall.
(I'm pretty sure you were jesting, but I thought I'd throw that idea out there for any green-conscious gamers.)
PhysX (Score:1)
Re: (Score:2, Interesting)
Re: (Score:2, Insightful)
Nice! But... (Score:5, Insightful)
Re: (Score:2)
Re: (Score:3, Interesting)
Re:Nice! But... (Score:5, Informative)
On the CUDA forums, we've gone back and forth about this, and the diagrams that people base this statement on are backwards. There are 16 multiprocessors (to use the NVIDIA terminology), each with 8 stream processors per multiprocessor. The 8 stream processors on each multiprocessor run the same instruction at once, but on separate register files. Multiprocessors, however, are completely independent, so in principle, one could imagine partitioning the resources between physics simulation and 3D rendering. This sort of partitioning has not been made available through CUDA yet, but hopefully this means we will see it soon.
You are correct that these 128 stream processors (however you slice them) are the main compute engine. There is additional circuitry to do hardware accelerated video decoding, but NVIDIA has not exposed that functionality to 3rd party programmers, and it isn't used during 3D rendering.
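For readers who haven't touched CUDA, here is a minimal sketch (my own toy code, with made-up names and sizes, not anything from NVIDIA) of how that block/multiprocessor mapping shows up in practice: each thread block gets scheduled onto one multiprocessor, and the threads inside it run in lockstep groups on its stream processors.

```
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, int n)
{
    // blockIdx.x picks the block (scheduled onto one multiprocessor);
    // threadIdx.x is the lane within that block.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 16 * 64;            // 16 blocks, roughly one per multiprocessor
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    // 16 independent blocks of 64 threads each. Because blocks are independent,
    // one could imagine splitting them between physics and rendering work, but
    // as the parent notes, CUDA doesn't let you pin that partitioning yet.
    scale<<<16, 64>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d_data);
    return 0;
}
```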
Re: (Score:1)
Re:Nice! But... (Score:5, Informative)
(Think! Why would NVIDIA waste expensive chip real estate on stream processors if they weren't useful for 99.9% of the applications running on these chips?)
Re: (Score:2)
3D acceleration itself is not useful for 99.9% of the applications running on these chips, if we include computing activities that are not gaming.
Re: (Score:2)
Compositing window managers (Score:2)
Re: (Score:2)
No, it's not.
But neither is it
Re: (Score:3, Funny)
Re: (Score:2)
Re: (Score:3, Informative)
...what will be calculating my 3D images, if the GPU is already working on the physics? It is not like there is so much spare capacity left over in modern games anyway...
FTFA
Re: (Score:1)
Hopefully, now PhysX adoption will become better.. (Score:3, Insightful)
If the adoption picks up, maybe Havok (which is now Intel property) will not remain the only physics engine in town, but right now, this news will not affect a whole lot of games...
Re:Hopefully, now PhysX adoption will become bette (Score:4, Interesting)
Re: (Score:2)
Re:Hopefully, now PhysX adoption will become bette (Score:2, Insightful)
It's almost the same reason why game companies aren't making their games Vista only.
Re: (Score:1)
Re: (Score:1)
ATI isn't in the throes of death. One assumes that if Nvidia holds 50% of the market, some other video card manufacturer probably holds the other 50%. My guess is it would be ATI, which I'm guessing would make every effort to push back against Nvidia.
Nvidia still needs to convince more of the bigger studios like id and Valve to use their technology exclusively. Which probably won't happen. Because if they start building engines l
now that the gpu is doing 2 things lets do 3 !!! (Score:3, Funny)
Re:now that the gpu is doing 2 things lets do 3 !! (Score:1, Interesting)
Re: (Score:1)
Re: (Score:1, Interesting)
Re: (Score:1)
Re: (Score:2)
The latency to get the results of the calculations back from the card is high enough that your frame rate would be cut in half (or worse) if you waited for the results. So games use it for particle effects, and render the results a frame or two behind. It doesn't matter at all for pure eye-candy stuff, but it's just not useful for anything that affects gameplay.
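To make the "a frame or two behind" idea concrete, here is a hypothetical sketch (the toy kernel and buffer layout are mine, not from any shipping engine) of the usual double-buffered readback: kick off this frame's physics and its copy back to the host, but draw with last frame's results so nothing ever waits on the bus.

```
#include <cuda_runtime.h>

// Assumed toy kernel: drift particles upward a little each frame.
__global__ void step_particles(float4 *pos, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        pos[i].y += 0.01f;
}

int main()
{
    const int n = 4096;
    float4 *d_pos;
    float4 *h_result[2];                       // double buffer in pinned host memory
    cudaStream_t stream;

    cudaMalloc(&d_pos, n * sizeof(float4));
    cudaMemset(d_pos, 0, n * sizeof(float4));
    cudaMallocHost(&h_result[0], n * sizeof(float4));
    cudaMallocHost(&h_result[1], n * sizeof(float4));
    cudaStreamCreate(&stream);

    for (int frame = 0; frame < 100; ++frame) {
        int cur  = frame & 1;
        int prev = cur ^ 1;

        // Kick off this frame's physics and its async copy back to the host.
        step_particles<<<(n + 255) / 256, 256, 0, stream>>>(d_pos, n);
        cudaMemcpyAsync(h_result[cur], d_pos, n * sizeof(float4),
                        cudaMemcpyDeviceToHost, stream);

        // Meanwhile, render with LAST frame's particle positions, which already
        // arrived in host memory: one frame of latency, no stall.
        // draw_particles(h_result[prev], n);  // renderer left abstract
        (void)prev;

        cudaStreamSynchronize(stream);         // this frame's copy lands before reuse
    }

    cudaStreamDestroy(stream);
    cudaFreeHost(h_result[0]);
    cudaFreeHost(h_result[1]);
    cudaFree(d_pos);
    return 0;
}
```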
So, what's actually accelerated here? (Score:4, Informative)
Re:So, what's actually accelerated here? (Score:5, Funny)
Re:So, what's actually accelerated here? (Score:5, Interesting)
All the physics processing for all those particles can be offloaded to the PhysX engine, allowing more particle effects to be going on at a higher level of detail and realism (e.g. incorporating 'wind', etc.) without dragging down the CPU.
It's cool... but not earth-shattering. And it's a logical step to incorporate it into a video card.
I don't honestly know if it can really be used to assist with the trajectory calculations of the interactive player's tank or fighter plane or whatever... but I doubt it. And it probably doesn't matter either. That is a minor part of the scene... each shower of sparks by itself probably requires more physics calculations than an entire squadron of planes - more independent particles in the shower.
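As a rough illustration of why sparks are such a good fit, here is a hypothetical CUDA kernel (my own toy code, not Ageia's) that steps a batch of independent particles under gravity plus a constant 'wind': one thread per spark, and no spark ever needs to know about another.

```
#include <cuda_runtime.h>

// Hypothetical per-particle state; real PhysX structures are nothing this simple.
struct Particle {
    float3 pos;
    float3 vel;
};

// One thread per spark: Euler-integrate gravity plus a crude constant wind term.
__global__ void update_sparks(Particle *p, int n, float dt, float3 wind)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n)
        return;

    const float3 gravity = {0.0f, -9.81f, 0.0f};

    p[i].vel.x += (gravity.x + wind.x) * dt;
    p[i].vel.y += (gravity.y + wind.y) * dt;
    p[i].vel.z += (gravity.z + wind.z) * dt;

    p[i].pos.x += p[i].vel.x * dt;
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;
}

// Launch example: 100,000 sparks, 256 threads per block.
// update_sparks<<<(100000 + 255) / 256, 256>>>(d_particles, 100000,
//                                              1.0f / 60.0f,
//                                              make_float3(1.0f, 0.0f, 0.0f));
```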
Re: (Score:2)
It won't unless you can get the data back from the card. It's useless for some calculations and I much prefer the way a dedicated card works that feeds the data back to a program.
Why? Well, say you're running an MMOG server (or any server, for that matter): you could have all sorts of crazy physics running on the server through a dedicated chip, or e
Re: (Score:3, Interesting)
I don't think that's the case. Graphics cards work on the same PCI-X buses that acceleration cards presumably use these days. They use DMA to communicate with main memory without involving the processor. The VRAM might be optimised for writing, but it should be entirely possible to do calculations on the card and get the results back. That's the whole point of the generalised GPGPU techniques.
On p
Re: (Score:2)
Re:So, what's actually accelerated here? (Score:5, Informative)
Re: (Score:2)
Nitpicking your nitpick... it's not worth pointing out that PCI-X is different from PCI Express unless you also point out that PCI Express is usually abbreviated as PCIe or PCI-E.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Of course, you don't have to have the separate card even now to get some of the benefits; the Ageia engine will run in software, too, just not as well. It will be interesting to see what happens when nV
Re: (Score:2)
Rigid body physics, constrained motion, etc. all take up a decent amount of CPU, as does collision detection. So far, game developers have had to make do with simplified collision geometries, simplified models, etc. As a first stab at
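For what it's worth, here is a hypothetical sketch (mine, not from any engine) of what "simplified collision geometry" usually boils down to: approximate objects with bounding spheres and run the cheap overlap test for many pairs in parallel.

```
#include <cuda_runtime.h>

struct Sphere {
    float3 center;
    float  radius;
};

// Test one probe sphere against n others, one thread per pair.
// Real engines add a broad phase on top, but the per-pair test really is this cheap.
__global__ void test_against(const Sphere *spheres, int n,
                             Sphere probe, int *hit)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n)
        return;

    float dx = spheres[i].center.x - probe.center.x;
    float dy = spheres[i].center.y - probe.center.y;
    float dz = spheres[i].center.z - probe.center.z;
    float r  = spheres[i].radius + probe.radius;

    // Two spheres overlap when the squared distance between their centers
    // is no more than the squared sum of their radii (no sqrt needed).
    hit[i] = (dx * dx + dy * dy + dz * dz) <= r * r;
}
```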
Re: (Score:2)
The reason it's only for eye candy at the moment is that developers do not want to fork the gaming experience. Since accelerated physics would create a have-and-have-not situation for gamers, where the non-accelerated experience would be too slow to be acceptable, developers choose to fluff up only the eye-candy portions, because you could not make the gameplay experience identical between the two.
This means that you could fork development and have t
Re:So, what's actually accelerated here? (Score:5, Funny)
Re: (Score:1)
Re: (Score:1)
Ultra trivial - accelerating a single object due to gravity? The maths is quite simple. You add a constant vector to your velocity at constant intervals and add the velocity to your position. This could be done using customer hardware. This would involve sending the acceleration vector (and possibly the velocity vector) to the graphics hardware and reading the position back. Fine, b
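A hypothetical sketch of exactly that single-body case (my code, invented names), mostly to show why it isn't worth offloading: the integration is two vector adds, but every frame still pays for bus transfers and a kernel launch.

```
#include <cuda_runtime.h>

// One thread is enough for one body: v += a*dt, then x += v*dt.
__global__ void euler_step(float3 *pos, float3 *vel, float3 acc, float dt)
{
    vel->x += acc.x * dt;  vel->y += acc.y * dt;  vel->z += acc.z * dt;
    pos->x += vel->x * dt; pos->y += vel->y * dt; pos->z += vel->z * dt;
}

int main()
{
    float3 h_pos = {0.0f, 100.0f, 0.0f};
    float3 h_vel = {0.0f, 0.0f, 0.0f};
    float3 acc   = {0.0f, -9.81f, 0.0f};   // gravity
    float3 *d_pos, *d_vel;

    cudaMalloc(&d_pos, sizeof(float3));
    cudaMalloc(&d_vel, sizeof(float3));

    for (int frame = 0; frame < 60; ++frame) {
        // The per-frame round trip: upload, a one-thread kernel, download.
        // The copies dwarf the handful of adds, which is the parent's point:
        // a single body belongs on the CPU.
        cudaMemcpy(d_pos, &h_pos, sizeof(float3), cudaMemcpyHostToDevice);
        cudaMemcpy(d_vel, &h_vel, sizeof(float3), cudaMemcpyHostToDevice);
        euler_step<<<1, 1>>>(d_pos, d_vel, acc, 1.0f / 60.0f);
        cudaMemcpy(&h_pos, d_pos, sizeof(float3), cudaMemcpyDeviceToHost);
        cudaMemcpy(&h_vel, d_vel, sizeof(float3), cudaMemcpyDeviceToHost);
    }

    cudaFree(d_pos);
    cudaFree(d_vel);
    return 0;
}
```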
Re: (Score:2)
Re: (Score:1)
It's good for a multi-CPU solution, but not so suitable for the SIMD-type parallelism that graphics cards use, because the datasets for each segment are too different. But this is just the way I'd do it on a normal CPU; maybe there's a way to do things differently that exploits the hardware.
Also, accelerating one box is trivial but 3000 boxes is not, like say,
Re: (Score:2)
I dont quite get it (Score:3, Interesting)
In other words, did NVidia just buy some clever code?
Re: (Score:1)
Re: (Score:1)
Compatible cards (Score:3, Interesting)
Re: (Score:2)
Re: (Score:2)
GeForce 8 cards have had CUDA support from day one.
nVidia bought Ageia, and with it all they need involving the PhysX API.
This upcoming download to enable physics acceleration will be a PhysX-to-CUDA wrapper that is in no way locked down to the Geforce 8 architecture (which is the point of CUDA).
By my understanding of SarbOx (which admittedly is not great) this falls under the same category as programs being written for an Intel processo
Re: (Score:3, Funny)
Re: (Score:2)
When will I be able to use my GPU for folding? (Score:1)
PhysX support GREAT!! (Score:1, Funny)
Re: (Score:2)
Pity, it looks like you already sold your grammar.
w00t! (Score:2)
Re: (Score:2)
Good thing I just bought an 8600GT :) (Score:1)
Re: (Score:1)