All GeForce 8 Graphics Cards to Gain PhysX Support

J. Dzhugashvili writes "Nvidia completed its acquisition of Ageia yesterday, and it has revealed exactly what it plans to do with the company's PhysX physics processing engine. Nvidia CEO Jen-Hsun Huang says Nvidia is working to add PhysX support to its GeForce 8 series graphics processors using its CUDA general-purpose GPU (GPGPU) application programming interface. PhysX support will be available to all GeForce 8 owners via a simple software download, allowing those users to accelerate games that use the PhysX API without the need for any extra hardware. (Older cards aren't CUDA-compatible and therefore won't gain PhysX support.) With Havok FX shelved, the move may finally popularize hardware-accelerated physics processing in games."
This discussion has been archived. No new comments can be posted.

  • This makes buying all those dual-GPU cards out there much more sensible. However, they do consume quite a bit of power and therefore contribute to global warming by taxing the power stations more.

    So Ageia's stock goes up, nVidia's down. I hope I didn't plant any ideas in the heads of the green peacemakers. :P
    • "However, they do consume quite a bit of power and therefore contribute to global warming by taxing the power stations more. "

      I replaced all of my house's lighting with CFLs, so the impact will be negligible. In fact, after the switch I am using less power overall.

      (I'm pretty sure you were jesting, but I thought I'd throw that idea out there for any green-conscious gamers.)
  • I knew buying a PhysX card was a waste of time. Now they're effectively useless - I already have an 8800.
    • Re: (Score:2, Interesting)

      by SuperDre ( 982372 )
      No, it wasn't a waste of time. The PhysX card is much better at calculating physics than the 8800, which is already busy enough doing 3D. So the combination of an 8800 with a PhysX card is much better than using dual 8800s.
    • Re: (Score:2, Insightful)

      by masticina ( 1001851 )
      They are not useless, as they put a product on the market that has a place. Okay, they failed to sell the product successfully, but the first graphics accelerators weren't the luckiest either! What matters is that the idea sticks and that we might now see physics being offloaded more. So the idea has a place, but the company that put it on the market first, well, they didn't fare well!
  • Nice! But... (Score:5, Insightful)

    by johannesg ( 664142 ) on Friday February 15, 2008 @04:21AM (#22432026)
    ...what will be calculating my 3D images, if the GPU is already working on the physics? It is not like there is so much spare capacity left over in modern games anyway...

    • The GeForce 8800GTX, for example, has 16 stream processors, each of which can run up to 8 identical commands per clock (SIMD). They're not the same as the main graphics processors; they're a separate part of the chip AFAIK.
      • Re: (Score:3, Interesting)

        by qbwiz ( 87077 ) *
        The 8800GTX has 8 groups of 16 stream processors, and they are the main graphics processors.
        • Re:Nice! But... (Score:5, Informative)

          by volsung ( 378 ) <stan@mtrr.org> on Friday February 15, 2008 @09:45AM (#22434176)

          On the CUDA forums, we've gone back and forth about this, and the diagrams that people base this statement on are backwards. There are 16 multiprocessors (to use the NVIDIA terminology), each with 8 stream processors per multiprocessor. The 8 stream processors on each multiprocessor run the same instruction at once, but on separate register files. Multiprocessors, however, are completely independent, so in principle, one could imagine partitioning the resources between physics simulation and 3D rendering. This sort of partitioning has not been made available through CUDA yet, but hopefully this means we will see it soon.

          You are correct that these 128 stream processors (however you slice them) are the main compute engine. There is additional circuitry to do hardware accelerated video decoding, but NVIDIA has not exposed that functionality to 3rd party programmers, and it isn't used during 3D rendering.
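          To make that concrete, here is a minimal CUDA sketch (the kernel and buffer names are made up for illustration): each thread block in the grid is scheduled onto one multiprocessor, and the threads within a block execute on its stream processors in SIMD groups.

              #include <cuda_runtime.h>

              // Illustrative kernel: each thread scales one array element.
              // The grid's blocks are distributed across the GPU's
              // multiprocessors; within a block, threads run in lockstep
              // (SIMD) groups on that multiprocessor's stream processors.
              __global__ void scale(float *data, float factor, int n)
              {
                  int i = blockIdx.x * blockDim.x + threadIdx.x;
                  if (i < n)
                      data[i] *= factor;
              }

              int main()
              {
                  const int n = 1 << 20;
                  float *d_data;
                  cudaMalloc(&d_data, n * sizeof(float));
                  cudaMemset(d_data, 0, n * sizeof(float));

                  // 128 threads per block; on a G80-class part the blocks
                  // are spread over its 16 independent multiprocessors.
                  scale<<<(n + 127) / 128, 128>>>(d_data, 2.0f, n);
                  cudaDeviceSynchronize();

                  cudaFree(d_data);
                  return 0;
              }

          Note that nothing here pins work to a particular multiprocessor - the hardware scheduler decides that - which is why the rendering/physics partitioning described above would need new API support.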

      • Dude, they /are/ the main graphics processor(s).
      • Re:Nice! But... (Score:5, Informative)

        by eggnoglatte ( 1047660 ) on Friday February 15, 2008 @04:35AM (#22432096)

        They're not the same as the main graphics processors; they're a separate part of the chip AFAIK.
        You know wrong. OpenGL, Direct 3D and CUDA all share the same stream processors on the chip.

        (Think! Why would NVIDIA waste expensive chip real estate for stream processors if they weren't useful for 99.9% of the applications running on these chips?)

        • Why would NVIDIA waste expensive chip real estate for stream processors if they weren't useful for 99.9% of the applications running on these chips?

          3D acceleration itself is not useful for 99.9% of the applications running on these chips, if we include computing activities that are not gaming.

          • "3D acceleration" is no longer a meaningful concept, as the majority of the "acceleration" happens on general purpose stream processors. A modern GPU is a parallel processor which just happens to have a rasterisation engine built-in. I'm guessing we'll eventually move the stream processors closer to the CPU, and integrate the graphics handling parts in the motherboard.
          • 3D acceleration itself is not useful for 99.9% of the applications running on these chips, if we include computing activities that are not gaming.
            Compositing window managers such as Compiz (X11) and Aero DWM (Windows Vista) apply 3D effects to entire windows. That's not gaming, is it?
            • Compositing window managers such as Compiz (X11) and Aero DWM (Windows Vista) apply 3D effects to entire windows. That's not gaming, is it?

              No, it's not.

              But neither is it /useful/.
              • Re: (Score:3, Funny)

                by fbjon ( 692006 )
                Not to mention a GUI isn't useful either, since everything can be done on the command line anyway. In fact, all you need is a bank of LEDs to indicate the state of the registers!
          • 3D acceleration itself is not useful for 99.9% of the applications running on these chips, if we include computing activities that are not gaming.
            That may certainly be true; however, there clearly are people who buy these cards to do stuff with them. As you correctly point out, "stuff" is mostly gaming, so it makes no sense to add features to these cards that are not useful for gaming but add significant cost. That was what my remark was about.
    • Re: (Score:3, Informative)

      by Anonymous Coward

      ...what will be calculating my 3D images, if the GPU is already working on the physics? It is not like there is so much spare capacity left over in modern games anyway...

      FTFA

      Our expectation is that this is gonna encourage people to buy even better GPUs. It might--and probably will--encourage people to buy a second GPU for their SLI slot. And for the highest-end gamer, it will encourage them to buy three GPUs. Potentially two for graphics and one for physics, or one for graphics and two for physics.

  • by bomanbot ( 980297 ) on Friday February 15, 2008 @04:32AM (#22432078)
    I hope the NVIDIA acquisition and now this news will drive the adoption of the PhysX engine. Right now, if you look at the list of titles [ageia.com], the PhysX engine is not used by many games (mostly Unreal Engine 3 titles).

    If the adoption picks up, maybe Havok (which is now Intel property) will not remain the only physics engine in town, but right now, this news will not affect a whole lot of games...
    • by montyzooooma ( 853414 ) on Friday February 15, 2008 @08:07AM (#22433168)
      Isn't the real problem that the games that DO incorporate PhysX hardware support don't really showcase the technology in any carnal-desire kind of way? There's no equivalent of GLQuake, which drove adoption of the original 3D cards.
      • I agree, hardware physics needs a killer app. I was playing around with the idea of adding PhysX support to Quake 2 and modifying some maps to have real liquids instead of the fake water it normally had, but after making some test apps I realized how incredibly slow the physics are on a CPU. I'm not holding my breath, but I hope the GPU is capable of at least playable frame rates.
    • One reason for reluctance to adopt PhysX might be the knowledge that a large number of your customers don't use NVIDIA cards and therefore wouldn't be able to take advantage of the technology.
      It's almost the same reason why game companies aren't making their games Vista only.
      • http://www.steampowered.com/status/survey.html [steampowered.com] The Steam hardware survey (which I think is a fair representation of gamers - the people who use this stuff most) gives nVidia over 50% market share. The 8 series is a little over 11% of all graphics cards in use. There's already a fairly sizable market - and one that is only going to get bigger.
        • I knew as soon as I posted that somebody was going to hit me with some statistics.

          ATI isn't in the throes of death. One assumes that if Nvidia holds 50% of the market, some other video card manufacturer probably holds the other 50%. My guess is it would be ATI, who I'm guessing would make every effort to push back against Nvidia.
          Nvidia still needs to convince more of the bigger studios, like id and Valve, to use their technology exclusively, which probably won't happen. Because if they start building engines l
  • by sirmonkey ( 1056544 ) on Friday February 15, 2008 @05:20AM (#22432276)
    so now that my vid card is processing the 3d graphics and the physics (which is really only eye candy) how about we make it -the gpu- run the O/S tooo!!!! ooo ooo my next summer project! have linux run on just the video card! (openmosix is still around right :-D?) :-p what?!?!?! it runs on everything else. right now i'm typing this on my old 700MHz laptop running the latest debian :-p
    • by Anonymous Coward
      How old are you? Serious question.
      • by krilli ( 303497 )
        Who gives a shit how old anyone is? It's an interesting question.
        • Re: (Score:1, Interesting)

          by Anonymous Coward

          Who gives a shit how old anyone is? It's an interesting question.
          So we can distinguish the immature because-they're-young from the immature for other reasons.

    • so now that my vid card is processing the 3d graphics and the physics (which is really only eye candy)
      Switch that statement.
      • by edwdig ( 47888 )
        Any physics done on a PhysX card is only eye candy.

        The latency to get the results of the calculations back from the card is high enough that your frame rate would be cut in half (or worse) if you waited for the results. So games use it for particle effects and render the results a frame or two behind. The delay doesn't matter at all for pure eye candy, but it's just not useful for anything that affects gameplay.
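        A minimal CUDA sketch of that frame-delayed pattern (the kernel and buffers are hypothetical, and the host buffers are assumed to be pinned for async copies): each frame launches the update and an asynchronous readback, and the renderer consumes the previous frame's buffer instead of stalling.

            #include <cuda_runtime.h>

            // Hypothetical kernel that advances particle positions one step.
            __global__ void step_particles(float4 *pos, int n, float dt)
            {
                int i = blockIdx.x * blockDim.x + threadIdx.x;
                if (i < n)
                    pos[i].y -= 9.81f * dt * dt; // crude gravity-only step
            }

            // Double-buffered, frame-delayed readback: frame N launches the
            // simulation and an async copy into one host buffer while the
            // renderer consumes the other buffer, filled on frame N-1.
            void physics_frame(float4 *d_pos, float4 *h_pos[2], int n,
                               float dt, int frame, cudaStream_t stream)
            {
                int cur = frame & 1;

                step_particles<<<(n + 255) / 256, 256, 0, stream>>>(d_pos, n, dt);
                cudaMemcpyAsync(h_pos[cur], d_pos, n * sizeof(float4),
                                cudaMemcpyDeviceToHost, stream);

                // Render using h_pos[cur ^ 1]: one frame stale, fine for eye
                // candy, useless for gameplay-critical collision results.
                // (cudaStreamSynchronize(stream) is still needed before a
                // buffer is reused on a later frame.)
            }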
  • by 91degrees ( 207121 ) on Friday February 15, 2008 @05:32AM (#22432338) Journal
    Physics covers a lot, from gravity, inertia, particles, collisions, IK and various other bits and pieces. Not everything lends itself to acceleration. So what will be accelerated by this?
    • by TeknoHog ( 164938 ) on Friday February 15, 2008 @06:07AM (#22432494) Homepage Journal
      Obviously, gravity and other kinds of non-steady motion are good targets for acceleration. And because of NVidia's evil closed source drivers, the best way to accelerate your GeForce is at 9.81 m/s**2.
    • by vux984 ( 928602 ) on Friday February 15, 2008 @06:16AM (#22432542)
      From what I understand of this (and I could be wrong), the PhysX accelerator is primarily used to add eye candy -- things like showers of sparks, sprays of blood, geysers and clouds of dirt or water or snow on an impact (whether a footfall or a weapon strike...), leaves falling when you shoot trees, better hair and clothing, clouds, raindrop impacts, etc.

      All the physics processing for all those particles can be offloaded to the PhysX engine, allowing more particle effects to be going on at a higher level of detail and realism (e.g. incorporating 'wind', etc.) without dragging down the CPU.

      It's cool... but not earth-shattering. And it's a logical step to incorporate it into a video card.

      I honestly don't know if it can really be used to assist with the trajectory calculations of the interactive player's tank or fighter plane or whatever... but I doubt it. And it probably doesn't matter either. That is a minor part of the scene... each shower of sparks by itself probably requires more physics calculations than an entire squadron of planes - there are more independent particles in the shower.
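      To see why that kind of eye candy maps so well to the hardware, here is an illustrative (made-up) spark-shower update: one thread per particle, each following its own independent parabolic arc, with nothing ever needed back on the CPU.

          #include <cuda_runtime.h>

          // Every spark follows its own independent parabolic arc, so the
          // update is one thread per particle with no communication between
          // threads -- a natural fit for the GPU's SIMD lanes.
          __global__ void update_sparks(float3 *pos, float3 *vel, int n, float dt)
          {
              int i = blockIdx.x * blockDim.x + threadIdx.x;
              if (i >= n) return;

              vel[i].y -= 9.81f * dt; // gravity
              pos[i].x += vel[i].x * dt;
              pos[i].y += vel[i].y * dt;
              pos[i].z += vel[i].z * dt;
          }

      Since the positions are only ever rendered, they can stay in video memory; nothing needs to cross back over the bus, which is exactly why particle effects were the easy first target.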
      • I honestly don't know if it can really be used to assist with the trajectory calculations of the interactive player's tank or fighter plane or whatever, etc...

        It won't, unless you can get the data back from the card. It's useless for some calculations, and I much prefer the way a dedicated card works, feeding the data back to the program.

        Why? Well say you're running an MMOG server (or any server for that matter), you could have all sorts of crazy physics running on the server through a dedicated chip, or e

        • Re: (Score:3, Interesting)

          by CarpetShark ( 865376 )

          You can't do that with physics on a graphics card because it's a one way pipeline, from your program to your monitor.

          I don't think that's the case. Graphics cards work on the same PCI-X buses that acceleration cards probably use lately. They use DMA to communicate with main memory without involving the processor. The VRAM might be optimised for writing, but it should be very possible to do calculations on the card, and get the results back. That's the whole point of the generalised GPGPU techniques.

          On p
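          For what it's worth, that readback path is exactly what GPGPU APIs expose. A minimal CUDA round trip (the array names are illustrative): compute on the card, then DMA the results back into host memory.

              #include <cuda_runtime.h>
              #include <cstdio>

              // Trivial kernel: square each element in place on the GPU.
              __global__ void square(float *x, int n)
              {
                  int i = blockIdx.x * blockDim.x + threadIdx.x;
                  if (i < n)
                      x[i] *= x[i];
              }

              int main()
              {
                  const int n = 256;
                  float h[256], *d;
                  for (int i = 0; i < n; ++i)
                      h[i] = (float)i;

                  cudaMalloc(&d, n * sizeof(float));
                  cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
                  square<<<1, n>>>(d, n);
                  // The results come back over the bus via DMA -- the pipeline
                  // is not one-way, though each round trip costs latency.
                  cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
                  cudaFree(d);

                  printf("%.0f\n", h[255]); // prints 65025
                  return 0;
              }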

          • by cnettel ( 836611 ) on Friday February 15, 2008 @08:51AM (#22433596)

            I don't think that's the case. Graphics cards work on the same PCI-X buses that acceleration cards probably use lately. They use DMA to communicate with main memory without involving the processor. The VRAM might be optimised for writing, but it should be very possible to do calculations on the card, and get the results back. That's the whole point of the generalised GPGPU techniques.
            Nitpicking: PCI Express is not PCI-X. PCI-X was a derivative of the parallel PCI bus and never found in mainstream machines.
            • Nitpicking: PCI Express is not PCI-X. PCI-X was a derivative of the parallel PCI bus and never found in mainstream machines.


              Nitpicking your nitpick... it's not worth pointing out that PCI-X is different from PCI Express unless you also point out that PCI Express is usually abbreviated as PCIe or PCI-E.
              • And just to add to the nitpicking: the PhysX cards were only PCI as far as I ever saw. Maybe they didn't need the bandwidth of anything more, but it always struck me as silly to make a PCI card when PCI-E was already available on new motherboards.
          • by nomel ( 244635 )
            With multi-core CPUs coming out... I still say it's best left to the CPU. I rarely see a game that uses 100% CPU... all cores included.
        • by Khyber ( 864651 )
          Umm, only AGP is essentially one-way. PCI and PCI-E are bidirectional.
      • Yeah, that's what City of Heroes uses it for. When a villain smashes a mailbox, a cloud of letters goes flying everywhere. Things like that. It really is eye candy; it doesn't improve the game in any majorly meaningful way. Certainly not worth the $150-$200 extra that an Ageia PhysX standalone card would run you.

        Of course, you don't have to have the separate card even now to get some of the benefits; the Ageia engine will run in software, too, just not as well. It will be interesting to see what happens when nV
      • Nope, the eye candy was done b/c it's harder to re-architect games to use more extensive physics in primary gameplay. You have to shove all the physics models into the application code for all that advanced simulation. It's a lot easier to run 10,000 copies of a simple parabolic arc.

        Rigid body physics, constrained motion, etc. all take up some decent CPU, as does collision detection. So far, game developers have had to make do with simplified collision geometries, simplified models, etc. As a first stab at
      • by tonywong ( 96839 )
        Bah, I just woke up, so this may not come across clearly.

        The reason why it's only for eye candy at the moment is that developers do not want to fork the gaming experience. Since accelerated physics would create a have/have-not situation for gamers, where the non-accelerated experience would be too slow to be acceptable, developers choose to only fluff up the eye candy portions, because you could not make the gameplay experience identical between the two.

        This means that you could fork development and have t
    • by teslar ( 706653 ) on Friday February 15, 2008 @08:14AM (#22433222)
      Naw, it's much easier than you think. All of Physics can be expressed by just one equation, the Grand Unified Theory, the computation of which is accelerated by PhysX. The Grand Unified Theory was first discovered when programmers at Valve tried to optimise the physics engine of HL2 [bbspot.com]. From the link:

      Game Engine Software Engineer at Valve, Jose Garcia discovered the theory. "The game engine ran too slowly. I was assigned the job of speeding it up," he said. "I started out by combining some of the gravity equations with some of the other force equations and found it all started to fit together. After a day, I had fine-tuned the entire physics-animation functions down to four lines of code, which ran a bit faster," he added.


      ;)
      • Wow. I always told my parents that game developers led the way in the advancement of science. Yup. But that doesn't mean that CS:Source will ever be devoid of porn sprays, campers, and microphone spammers. That's just the price of advancement, I guess.
  • I dont quite get it (Score:3, Interesting)

    by theskov ( 556173 ) <philipskov@noSPam.gmail.com> on Friday February 15, 2008 @06:57AM (#22432742) Homepage
    If existing cards can be upgraded through a software patch, NVidia should have been able to do this all along. Are the PhysX people just much better at coding physics, or is there another reason this hasn't already been added?

    In other words, did NVidia just buy some clever code?
    • by Ristol ( 745640 )
      Not quite. NVidia just bought the rights to use some clever code.
    • by RupW ( 515653 ) *

      In other words, did NVidia just buy some clever code?
      They also bought existing support from the Unreal 3 engine.

  • Compatible cards (Score:3, Interesting)

    by LotsOfPhil ( 982823 ) on Friday February 15, 2008 @08:15AM (#22433226)
    http://www.nvidia.com/object/cuda_learn_products.html [nvidia.com] CUDA can run on some pretty cheap cards now.
    • by p0tat03 ( 985078 )
      The question is... if you're running a cheap card, you're probably already pushing it to its limits in new games; can you afford to give any cycles to PhysX?
  • That is all that matters to me: increasing my folding score!
  • Its good to see PhysX support, I know it was worth keeping my limbs rather than selling a arm or leg to make Ghost Recon Advance Warfare 2 to work good.
    • Its good to see PhysX support, I know it was worth keeping my limbs rather than selling a arm or leg to make Ghost Recon Advance Warfare 2 to work good.

      Pity, it looks like you already sold your grammar.

  • Thank you, nVidia and Ageia, for saving me from spending $150 on a PhysX card! I was very close to buying one, but now I won't have to!
  • This might sound silly, but what exactly does PhysX do? And how would I benefit from having PhysX support on my new card? A few games I might want to get use PhysX (Unreal Tournament 3); would that make those games run faster?
