
Modders Get Nvidia's PhysX To Run On ATI Cards

stress_life writes "Following controversial allegations that Nvidia is cheating in 3DMark Vantage and Unreal Tournament 3 benchmarks, executives from Futuremark and Epic stepped forward to clear up any confusion. The game was not over, however: enthusiasts from Israel have ported the PhysX middleware to run on ATI Radeon cards, achieving remarkable performance. Owners of ATI Radeon cards will be able to play PhysX games as well, such as Ghost Recon 2 and the already-mentioned Unreal Tournament 3."
  • Since I tend to use mostly ATI cards, this is great news.

    • by goombah99 ( 560566 ) on Friday June 27, 2008 @04:00PM (#23973871)

      Sure, I grok the term "PPU" and can maybe even imagine it's got some fast elastic particle simulations.

      But what "physics" is really there? What does the interface look like?

      Is it real physics? Would it be good for, say, simulating chemical dynamics with quantum or classical force fields? Could I use it to model the hydrodynamics of a sailboat cutting through the water?

      What about applied math or engineering physics, like say the propagation and attenuation of sound in a turbulent atmosphere or a concert hall?

      What about a piece of rope falling, a flag in the wind, or a ball and spring model?

      Just what does this do and how does the interface look?

      If possible, compare it to CUDA, since I know what that does.

      • by dotancohen ( 1015143 ) on Friday June 27, 2008 @04:32PM (#23974317) Homepage

        If possible, compare it to CUDA, since I know what that does.

        You see, it's like a car... It takes instructions and makes something of it... like how a car takes steering and brakes and gets you from point A to point B.

      • by Actually, I do RTFA ( 1058596 ) on Friday June 27, 2008 @04:33PM (#23974335)

        The interface is a freely available SDK (for some uses). The physics is basically Newtonian mechanics (more in a moment). Physics for games is, first and foremost, an exercise in collision detection. The physics is simple. Determining collisions in a series of finite-length steps is the hard part.

        While I say that the physics is basically Newtonian mechanics, there is spring technology, although all spring technology in finite-step simulations has errors (if you are not careful, the springs increase in oscillation over time instead of damping; there's a tiny sketch of this at the end of this comment). Chemical dynamics and quantum force fields are out. Classical force fields are included. The force fields operate based on propagation (distance, distance-squared, etc.) and other parameters.

        The fluid/solid interaction is still being worked on, and fluids and cloth benefit most from hardware acceleration. Fluids use a number of points with mutual attraction/repulsion properties.

        No sound properties.

        Rope is emulated as a series of sticks with ball joints at the end, a flag as a series of springs with forces at points (cloth simulation is essentially a thing of springs), and the ball and spring, yes.

        You left out an important question, which is the rigidity of objects other than cloth/fluids. The ball that deforms as it bounces. Currently, that's in the SDK, but I've not played with it yet.
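
        To make the spring point concrete, here is a minimal standalone C++ sketch (illustrative only, not PhysX code): an undamped explicit-Euler spring visibly gains energy over a couple of thousand fixed steps, while a damped, semi-implicit update settles down the way you'd expect.

          #include <cstdio>

          int main() {
              // One mass on a spring, integrated in fixed time steps.
              const double k = 50.0;   // spring stiffness
              const double c = 0.5;    // damping coefficient
              const double m = 1.0;    // mass
              const double dt = 0.01;  // step size

              double x = 1.0, v = 0.0;    // explicit Euler, no damping
              double xs = 1.0, vs = 0.0;  // semi-implicit Euler, with damping

              for (int i = 0; i < 2000; ++i) {
                  // Explicit Euler, undamped: the oscillation slowly grows.
                  double a = -k * x / m;
                  x += v * dt;
                  v += a * dt;

                  // Semi-implicit (symplectic) Euler, damped: decays as expected.
                  double as = (-k * xs - c * vs) / m;
                  vs += as * dt;
                  xs += vs * dt;
              }
              double e  = 0.5 * k * x * x   + 0.5 * m * v * v;    // grew from the initial 25
              double es = 0.5 * k * xs * xs + 0.5 * m * vs * vs;  // decayed towards zero
              std::printf("energy, explicit/undamped: %.1f   semi-implicit/damped: %.6f\n", e, es);
          }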

        • Actually, I'd think that the underlying algorithms would resemble Lagrangian Mechanics, rather than Newtonian Mechanics.

          • by Actually, I do RTFA ( 1058596 ) on Friday June 27, 2008 @05:32PM (#23975011)

            The PhysX system doesn't really care about heat or energy. It primarily concerns itself with force and momentum. That, as I understand it, is the principal difference between Newtonian and Lagrangian mechanics.

            • by moosesocks ( 264553 ) on Friday June 27, 2008 @06:03PM (#23975307) Homepage

              Nope. Not quite :-)

              Lagrangian Mechanics gives you a lot more flexibility in terms of your coordinate system, and tends to be much better for solving systems with many interacting forces. It's essentially a mathematical re-formulation of Newtonian Mechanics (there's a worked pendulum example below). The underlying laws are all the same, but the math used to arrive at a solution is quite different.

              Of course, this is all for solving problems analytically. Computers most likely do things differently.
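
              For anyone curious, the textbook pendulum shows the point (a generic illustration, nothing to do with PhysX internals), written as LaTeX:

                % Generalized coordinate: the pendulum angle theta, instead of x and y.
                L = T - V = \tfrac{1}{2} m \ell^2 \dot{\theta}^2 + m g \ell \cos\theta
                % The Euler-Lagrange equation
                \frac{d}{dt}\frac{\partial L}{\partial \dot{\theta}} - \frac{\partial L}{\partial \theta} = 0
                % reduces to
                m \ell^2 \ddot{\theta} + m g \ell \sin\theta = 0
                % which is exactly what Newton's second law gives when resolved along the arc.

              Same law, different bookkeeping: pick a convenient generalized coordinate, write down the energies, and the equation of motion falls out.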

                • Interesting. I'm not sure how it works under the hood. And during collisions (for instance, a ball could easily be touching the rim in two places and the backboard in one) there are multiple forces that need to be solved in an interdependent fashion. Could Lagrangian mechanics be used then? I suppose.

                PhysX also has a special vehicle wheel with suspension class of object. The fact that it seems not to be made up of other components implies that it has a different solve method. Given the difficulty of da

              • by ppanon ( 16583 )

                Of course, this is all for solving problems analytically. Computers most likely do things differently.

                Yeah. I would be very surprised if they were solving this stuff analytically :-). My guess is that it's using techniques similar to those used for finite element analysis with quantum time increments. For FEM, I would think an approximation of Newtonian mechanics should be a lot simpler to deal with and more appropriate for GPUs.

                • You're probably right, although computerized analytical solutions certainly aren't out of the question, as proven by Mathematica and the like.

                  It could very well be faster to analytically solve the equations of motion every so often, and simply plug into those results as time evolves.

                  I doubt that this is the case, as the underlying programming would be extremely complicated, and I'm not quite sure that it would even work.

                  • by ppanon ( 16583 )

                    It could very well be faster to analytically solve the equations of motion every so often, and simply plug into those results as time evolves.

                    Analytical solutions tend to be derived with algorithms that aren't vectorizable/parallelizable, whereas the strength of GPUs is in vectorized/parallel calculations. So, yes, your approach might work, but wouldn't gain much efficiency from GPUs. Still, I have found some of the comments for this article much more informative than is usual for a Slashdot article. There's

                      • Though on occasion an analytical solution is preferred, especially when you need to handle accuracy. Karma (from the now defunct MathEngine) used a matrix solver internally, which was slightly slower than the iterative solvers found in Havok/PhysX, but it was fantastically stable as a result. That engine was parallelised for PS2, SSE and AltiVec IIRC.

                      Being able to choose whether to use an iterative or matrix solver was one of the nice things about that engine...
        • by pdbaby ( 609052 ) on Friday June 27, 2008 @07:52PM (#23976523)

          The physics is basically Newtonian mechanics (more in a moment)

          Was that accidental, or is that the worst physics joke ever? :-P If intentional, allow me to present you with a medal!

        • by ozphx ( 1061292 )

          Determining collisions is easy.

          The collisions get turned into a set of contact pairs, which get converted into a set of constraints.

          Then, when the simulation is stepped, the great big bloody matrix of constraints needs to be solved, which is hard as hell. This is where a massively parallel vector processor comes in (like what is inside a PPU - or better, another vector processing chip with shiteloads more R&D - the GPU).
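
          For the curious, here's a toy version of that pipeline in plain C++ (nothing like the real PhysX solver, just the shape of the idea): each contact pair becomes one non-penetration constraint row, and an iterative Gauss-Seidel-style sweep relaxes the rows one at a time. Real solvers batch independent rows so a vector processor can chew on many of them at once.

            #include <algorithm>
            #include <cstdio>
            #include <vector>

            struct Body { double vx, vy, invMass; };

            // One contact along its normal: relative velocity must end up >= 0.
            struct Contact {
                int a, b;        // indices of the two bodies
                double nx, ny;   // contact normal, pointing from b towards a
                double impulse;  // accumulated normal impulse, kept non-negative
            };

            // Projected Gauss-Seidel: sweep the constraint rows a few times.
            void solveContacts(std::vector<Body>& bodies, std::vector<Contact>& contacts, int iterations) {
                for (int it = 0; it < iterations; ++it) {
                    for (auto& c : contacts) {
                        Body& A = bodies[c.a];
                        Body& B = bodies[c.b];
                        // Relative velocity along the normal (negative = approaching).
                        double vn = (A.vx - B.vx) * c.nx + (A.vy - B.vy) * c.ny;
                        double dImpulse = -vn / (A.invMass + B.invMass);
                        // Clamp the accumulated impulse so contacts only push, never pull.
                        double old = c.impulse;
                        c.impulse = std::max(0.0, old + dImpulse);
                        dImpulse = c.impulse - old;
                        A.vx += dImpulse * c.nx * A.invMass;  A.vy += dImpulse * c.ny * A.invMass;
                        B.vx -= dImpulse * c.nx * B.invMass;  B.vy -= dImpulse * c.ny * B.invMass;
                    }
                }
            }

            int main() {
                std::vector<Body> bodies = {{1.0, 0.0, 1.0}, {-1.0, 0.0, 1.0}};  // heading at each other
                std::vector<Contact> contacts = {{0, 1, -1.0, 0.0, 0.0}};        // one contact between them
                solveContacts(bodies, contacts, 10);
                std::printf("after solve: v0 = %.2f, v1 = %.2f\n", bodies[0].vx, bodies[1].vx);
            }

          The GPU/PPU win comes from running many independent rows of that inner loop at once, which is exactly the "great big bloody matrix" above.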

      • by MachDelta ( 704883 ) on Friday June 27, 2008 @04:33PM (#23974339)
        Um... it IS CUDA. Or rather, it's an extension for CUDA.

        From what I understand, nVidia took the PhysX engine they bought from Ageia and ported it to their own language (CUDA) so that it would run on their graphics cards, so people didn't have to shell out for a second $300 "Physics Processing Unit", thus boosting nVidia's GPU sales.

        And now someone's ported it to ATI.

        *Nelson Laugh*
      • by DrYak ( 748999 ) on Friday June 27, 2008 @05:01PM (#23974701) Homepage

        Is it real physics? Would it be good for, say, simulating chemical dynamics with quantum or classical force fields? Could I use it to model the hydrodynamics of a sailboat cutting through the water?

        No. Most physics middleware provides a simplified model (collision detection, rigid-body physics, etc.) which is great for visual gimmicks in games, but is too much of an approximation to be used in research. You would need other engines which are optimized to do accurate physics modelling; Gromacs is one example.

        Now, about the hardware behind this: Ageia's PPU could in theory be used to accelerate research calculations. The problem is the lack of a proper API. The only API available for this processor is PhysX, which is specialized for gaming-oriented physics. The SieveC compiler is supposed to be able to generate parallel programs for the PPU but hasn't been released publicly.

        Whereas, even if the GPU port of PhysX is only oriented toward gaming-specific applications, ATI Radeon cards also expose the much more general-purpose API "Brook+" (the usage of which is already demonstrated in Folding@Home), and nVidia cards have CUDA, which you already know.
        Unlike PhysX, those APIs expose generic numerical methods and can be used for applications as diverse as the ones you mention, including running the game-specific Ageia PhysX.

        PhysX is to CUDA roughly what Gromacs is to Fortran: the first is a specific engine optimised to solve some very specific problems, the second is a general-purpose language that can be used to crunch numbers.
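
        To make the contrast concrete, here is the sort of thing the general-purpose side lets you write yourself: a classical pairwise force loop (a Lennard-Jones potential), in plain C++ as a stand-in for a Brook+/CUDA kernel. This is illustrative only and is not Gromacs code; on a GPU the outer loop would simply become one thread per particle.

          #include <cstddef>
          #include <cstdio>
          #include <vector>

          struct Vec3 { double x, y, z; };

          // Classical Lennard-Jones pair forces: the kind of inner loop a
          // general-purpose GPU API (CUDA, Brook+) runs once per particle in parallel.
          void pairForces(const std::vector<Vec3>& pos, std::vector<Vec3>& force,
                          double epsilon, double sigma) {
              const std::size_t n = pos.size();
              for (std::size_t i = 0; i < n; ++i) {        // on a GPU: one thread per i
                  Vec3 f = {0.0, 0.0, 0.0};
                  for (std::size_t j = 0; j < n; ++j) {
                      if (j == i) continue;
                      Vec3 d = {pos[i].x - pos[j].x, pos[i].y - pos[j].y, pos[i].z - pos[j].z};
                      double r2 = d.x * d.x + d.y * d.y + d.z * d.z;
                      double s2 = sigma * sigma / r2;
                      double s6 = s2 * s2 * s2;
                      // Derivative of 4*eps*((sigma/r)^12 - (sigma/r)^6), per unit displacement.
                      double coef = 24.0 * epsilon * (2.0 * s6 * s6 - s6) / r2;
                      f.x += coef * d.x;  f.y += coef * d.y;  f.z += coef * d.z;
                  }
                  force[i] = f;
              }
          }

          int main() {
              std::vector<Vec3> pos = {{0.0, 0.0, 0.0}, {1.2, 0.0, 0.0}, {0.0, 1.5, 0.0}};
              std::vector<Vec3> force(pos.size());
              pairForces(pos, force, 1.0, 1.0);
              for (const auto& f : force) std::printf("% .4f % .4f % .4f\n", f.x, f.y, f.z);
          }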

      • I imagine that the "physics" computations which take place in most games are heavily vectorized. Similarly, I'm sure that you could do some sort of interpolation/guesswork, either in hardware or in the API, that reduces the number of "hard" calculations required.

        GPUs are designed specifically to do vector math, and it only seemed logical that an API would come along that properly exposed these functions to other software.

        I imagine that it's very similar to CUDA, but more optimized, and with an API that contains f

        • GPUs are designed specifically to do vector math

          Not any more. The GPUs since G80 and R600 are good at scheduling and running scalar math on the shader units. (Earlier they tended to be fixed 4-wide or 2-wide.) But branching is still costly, and the GPU needs to bundle many scalar ops together (a bigger batch of pixels even when the shader/program has a scalar op as such) for full performance.
      • by DittoBox ( 978894 ) on Friday June 27, 2008 @06:06PM (#23975343) Homepage

          Sure, I grok the term "PPU"

        Naw man. That's the sound lasers make: pew pew pew

      • Re: (Score:3, Insightful)

        by azaris ( 699901 )

        This thread is so full of misinformation I don't know where to start.

        Newtonian mechanics (no matter if you dress it up as Lagrangian or Hamiltonian mechanics) is basically just solving a second order ODE with constraints. Depending on how you set up the constraints and discretize the system, you end up solving a linear system of equations on each time step. Oh, and forget analytical solutions. There are like a handful of mechanical systems that you can solve analytically (called integrable), the rest can
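
        In symbols (a standard textbook formulation, not anything PhysX-specific), one step of such a constrained system looks like this in LaTeX:

          % Equations of motion with constraint forces J^T * lambda:
          M \dot{v} = f_{\mathrm{ext}} + J^{\top} \lambda, \qquad J v = 0
          % Discretize with step h and solve for the constraint impulse lambda:
          \left( J M^{-1} J^{\top} \right) \lambda = -\, J \left( v + h\, M^{-1} f_{\mathrm{ext}} \right)

        That (J M^-1 J^T) system is the linear solve being described here; direct methods factor it, iterative methods (Gauss-Seidel and friends) just sweep over its rows.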

      • by delt0r ( 999393 )
        From a scientific standpoint, CUDA is where it's at. PhysX is a very rough Newtonian physics simulator in terms of any true accuracy, with the emphasis on stability of the integration. Contact dynamics are very rough and multiple collisions are still expensive. Games don't need any more than that.

        Also, it's not clear how much of a hit the graphics side takes with some physics sims running, or whether you can get the results back to the CPU efficiently if the physics sims need to be more than eye candy. And the fact that
        • yeah, when we got hold of a PhysX accelerator, our engine actually slowed down quite a lot - it ran at roughly half the speed of the software implementation. The big problem was that whilst the card was great at processing a few thousand colliding boxes, its performance truly sucked if you wanted to do anything with those results. So yeah, I'd be inclined to agree that the value of the hardware is questionable.

          There is a bigger problem though - why would you even want a few thousand boxes colliding? (Youtu
  • by electrosoccertux ( 874415 ) on Friday June 27, 2008 @03:49PM (#23973711)

    You might also find this interesting: AMD/ATI sure has been having a lot of fun lately.

    http://www.tomshardware.com/news/Larrabee-Ray-Tracing,5769.html [tomshardware.com]

    This latest round of cards from Nvidia and ATI seems to have been won by ATI as well. For $300 you can get the AMD 4870, with the performance of the $400 Nvidia 260, and sometimes as good (depending on the game) as the $600 280.

    • by Clay Pigeon -TPF-VS- ( 624050 ) on Friday June 27, 2008 @03:58PM (#23973849) Journal

      Also, the 48XX series ships with Linux drivers.

      • Re: (Score:1, Informative)

        by aronschatz ( 570456 )
        My Sapphire Radeon HD4850 disagrees. I'm a reviewer and always use the latest drivers. There were no Linux drivers on the Sapphire driver CD.
    • Re: (Score:2, Informative)

      by legoman666 ( 1098377 )
      Tom's Hardware is crap. Try a decent review site: http://www.anandtech.com/video/showdoc.aspx?i=3341 [anandtech.com]
    • by Endo13 ( 1000782 ) on Friday June 27, 2008 @04:02PM (#23973893)

      And the even worse news for NVidia is that some preliminary numbers for the upcoming 4870 X2 indicate it will completely blow away anything NVidia currently has on the market.

      • Re: (Score:2, Informative)

        by sexconker ( 1179573 )

        In terms of performance per $.

        NVidia is still king of the hill in raw performance.
        You just have to pay.

        • by Endo13 ( 1000782 ) on Friday June 27, 2008 @04:27PM (#23974253)

          Not any more. You haven't been keeping up too well with tech news eh? Read a few reviews and look at some benchmarks of the 4850 and 4870 cards. If it were just one or two review sites showing such favorable numbers for the new ATI cards, they might be suspect. It's not one or two. It's all of them.

          • by Endo13 ( 1000782 ) on Friday June 27, 2008 @04:42PM (#23974443)

            Yes, I did misread your comment. Nevertheless, most of my comment still stands. A 4870 in Crossfire performs significantly better than the GTX 280 and the 9800 GX2 in every benchmark I've seen except Crysis, and these cards can also be run in a quad-Crossfire mode. Oh, and two of them sell for less than one of NVidia's top dogs.

            http://www.bjorn3d.com/read.php?cID=1301 [bjorn3d.com]
            http://www.pcper.com/article.php?aid=581&type=expert [pcper.com]
            http://techreport.com/articles.x/14990 [techreport.com]

            • And you can run Nvidia cards in Quad SLI.
              What's your point?

              Like I said, in terms of performance, NVidia wins.
              You just have to pay more.

              Sure, some games favor AMD (ATi) and some favor NVidia. What else is new?

              Barring huge improvements for AMD (ATi) from driver updates in the future, NVidia will push itself as the performance king, and will still money hat developers to make sure a game is dripping with green (The Way it's Meant to be Played).

              You stated that ATi blows away NVidia this round. I agree. I'm n

              • Pretty sure you can run anything that's out right now and anything that will come out in the next year or so on an 8800, and it'll look great. I played Crysis on medium-high settings, and (while it was a godawful game once the aliens showed up) it looked great. I paid like $220 for my GTS 320MB last May.
                • Yeah, but at what resolution.
                  And such.

                  People always want more.

                  I myself picked up an 8800GTX from CompUSA when it was 40% off (due to CompUSA closing down). Just cheap enough to warrant it over the 8800GT / a pair of them.

                  • 1280x1024. I have a new widescreen monitor now, but considering how Crysis is a godawful game, I have no motivation to see how it runs.
                    • I'm pretty sure I can run it just as high on my 1680x1050. As I said, it's a terrible game, and I have no desire to play it.

                      There aren't really any NEW games that will stress your video card out right now. Excluding Crysis, which nobody actually plays.
            • by Shatrat ( 855151 )
              Just because AMD is more competitive than they used to be doesn't mean they are winning.
              Nvidia has dropped the price of the 9800 GTX to the same level as the 4850, and it consistently beats the 4850 in most benches I've seen.
              Add to that CUDA, Physx, more overclocking headroom, and better linux drivers (at least until radeonHD has matured) and nvidia is still the better product from where I'm sitting.

              The high end cards aren't as good performance per dollar, but high end cards never are and I really don't
    • Re: (Score:3, Insightful)

      And that performance is about 15-25% over my 8800GTS 320MB that I paid just over $200 for over a year ago.

      The latest round of cards came WAY too soon.
    • ... and sometimes as good (depending on the game) as the $600 280.

      Like this $600 280? [augustadrifting.com]

    • Re: (Score:3, Insightful)

      by bill_kress ( 99356 )

      $300? $400? $600? wtf!

      It's been a while since I bought a video card. I totally splurged and got a $90 card! Worked for the stupid game I was trying to play I guess, but now that game is lame and I'm out $90! I wouldn't do it again--$90 is a silly amount to spend to replace existing functionality.

      If I just wait a few years, any games I might still be interested in will be cheap and play on commodity hardware--and all I've lost is, well, nothing--actually gained a little extra time.

      • by Splab ( 574204 )

        So basically you are holding out till the games and hardware are almost free?

        Good idea; however, if you like to play online you might end up with a problem, since you are most likely going to be the only one left still playing the game...

        • Ummmmmm. Ever heard of Battle.net [battle.net]? Apparently there is a huge community of online StarCraft, WarCraft, and Diablo series players. StarCraft came out in 1998. According to GameSpy [gamespy.com], the original Counter-Strike currently has 33097 servers and 79928 players. Counter-Strike 1.0 was released in 2000.

          Based on that, I'd say that a good multiplayer game will have online playability for at least 10 years. That is a pretty decent amount of time for hardware to catch up.
          • by shannara256 ( 262093 ) on Friday June 27, 2008 @08:01PM (#23976603) Homepage

            The problem with playing 10-year-old games online is that, for the most part, the only people still playing 10-year-old games online are really, REALLY good at them. New games will have a wide variety of players in terms of skill, while old games tend to have just the hardcore players. If you're waiting for prices to fall to play a game, you'll have missed out on the time it takes to learn how to play the game, both in general and against other players of a similar skill level, and you'll lose every online game you play.

            • Considering that in 10 years of waiting you can earn $1,000,000 in salary, I think you have to be quite damn poor not to manage saving $5/week for 20 weeks to afford a $100 video card. Many people spend more than $90 on one night of drinking, so c'mon, are you living in a fourth-world country earning $3 a day making iPods for Apple in a factory?

        • Re: (Score:3, Insightful)

          by CastrTroy ( 595695 )
          Yeah, nobody plays those old games like Starcraft or Counter-Strike anymore.
        • I don't pay a recurring fee to play anything--I learned my lesson a long time ago--so I think the good games with open servers will still be available, and the lame ones, or the ones with closed pay-per-month servers, I'd have done without anyway.

          If you like to pay money to waste time playing, however, I suppose my entire theory falls apart.

      • If I just wait a few years, any games I might still be interested in will be cheap and play on commodity hardware--and all I've lost is, well, nothing--actually gained a little extra time.

        Guess you could say that for just about anything. Slashdot loves car references, but I'll pass :-)

        But yeah, if you don't mind waiting till something is outdated, then you can save some money. But if you enjoy playing new games, using new cellphones, etc. then you go ahead and pay the money to do so.
  • by Grond ( 15515 ) on Friday June 27, 2008 @04:00PM (#23973873) Homepage

    My guess is that nVidia will put a stop to this pretty quickly. PhysX is covered by at least a couple of patents [uspto.gov]. There may be others pending or that were assigned to nVidia.

    I don't know if PhysX is covered by patent protection in Israel, but it's possible. In any event, don't count on official PhysX support from ATI any time soon.

    • Re: (Score:3, Interesting)

      by Yvan256 ( 722131 )

      Great, so here's yet another technology that will get split into many different versions by different companies...

      Why can't these guys sit together and discuss things to come up with, say, OpenPhysX? (Think OpenGL.)

      • Because nVidia paid good money for it. They wouldn't want to give it to competitors for free or allow ATi card owners to benefit from an nVidia "feature."
        • by Yvan256 ( 722131 )

          They could still agree on an API or something, not the actual code. It would mean that ATI, Intel and nVidia could make "PhysX"-compatible hardware/drivers, but maybe nVidia would have the fastest implementation.

          • Re: (Score:3, Interesting)

            by sexconker ( 1179573 )

            The API is free and open and AMD (ATi) is free to implement it if they wish.
            They simply haven't done so.

            • Re: (Score:2, Informative)

              by MachDelta ( 704883 )
              They haven't done so because they're subscribers to PhysX's competition - Havok.

              AMD / ATI / Havok
              vs
              Intel / nVidia / PhysX
              Pick your side!

              (Ok so it doesn't quite work like that but dividing battle lines evenly makes it less confusing than it really is)
      • by EvilIdler ( 21087 ) on Friday June 27, 2008 @04:13PM (#23974057)

        Another vote for OpenPL. It only makes sense. You feed the coordinates from OpenGL to OpenPL. OpenPL returns a new velocity and position for the objects. Maybe toss in mesh deformation because of impact. All handled by the same tightly integrated processor for speed. I want it, and I want it yesterday :)
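
        Purely to make that wish concrete, here's what such an "OpenPL" might look like to a caller. Every identifier below is invented for illustration (no such library exists), and the step function is a trivial gravity-only stand-in so the sketch actually runs:

          // Hypothetical "OpenPL" sketch: all names are made up for illustration.
          #include <cstdio>
          #include <vector>

          struct plBody  { float pos[3], vel[3], mass; };
          struct plWorld { float gravity[3]; std::vector<plBody> bodies; };

          unsigned plCreateBody(plWorld& w, const plBody& b) {
              w.bodies.push_back(b);
              return static_cast<unsigned>(w.bodies.size()) - 1;
          }

          // Advance every body one step; a real library would do collision detection,
          // constraint solving and mesh deformation here, ideally on the GPU.
          void plStep(plWorld& w, float dt) {
              for (auto& b : w.bodies)
                  for (int k = 0; k < 3; ++k) {
                      b.vel[k] += w.gravity[k] * dt;
                      b.pos[k] += b.vel[k] * dt;
                  }
          }

          int main() {
              plWorld world{{0.0f, -9.8f, 0.0f}, {}};
              unsigned crate = plCreateBody(world, {{0.0f, 10.0f, 0.0f}, {0.0f, 0.0f, 0.0f}, 1.0f});
              for (int frame = 0; frame < 60; ++frame) plStep(world, 1.0f / 60.0f);
              // These positions would be fed straight back into OpenGL transforms.
              std::printf("crate after 1s: y = %.2f\n", world.bodies[crate].pos[1]);
          }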

        • Re: (Score:2, Informative)

          That's pretty much what a physics engine does, and there are already a number of open source physics libraries out there (ODE [ode.org] and Bullet [bulletphysics.com] are the most well supported as far as I know, the former has been used in a few big budget commercial titles). Someone just needs to port the back-end to CUDA and off we go... Easier said than done, I reckon.

          I recall hearing chatter about CUDA bindings for Bullet but I'm not sure if anything came of that.
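
          For a taste of what those libraries look like today, below is a minimal falling-sphere example against ODE's C API. This is written from memory, so treat the exact function names and signatures as assumptions and check the ODE docs before relying on them; the dWorldQuickStep() call is the back-end step a CUDA port would replace.

            // Build against ODE, e.g. g++ ball.cpp -lode (exact calls from memory -- verify).
            #include <ode/ode.h>
            #include <cstdio>

            int main() {
                dInitODE();
                dWorldID world = dWorldCreate();
                dWorldSetGravity(world, 0, 0, -9.81);

                dBodyID ball = dBodyCreate(world);
                dMass m;
                dMassSetSphere(&m, 1.0, 0.5);   // density, radius
                dBodySetMass(ball, &m);
                dBodySetPosition(ball, 0, 0, 10);

                for (int i = 0; i < 100; ++i)
                    dWorldQuickStep(world, 0.01);   // the part a CUDA back-end would accelerate

                const dReal* p = dBodyGetPosition(ball);
                std::printf("ball after 1 s: z = %.2f\n", (double)p[2]);

                dWorldDestroy(world);
                dCloseODE();
            }
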
      • by CAIMLAS ( 41445 )

        Simple: right now, ATI is stomping on Nvidia in terms of price and performance. The only thing Nvidia has going for them is their PhysX. Take that away - or make it an open standard by which ATI can also play - and Nvidia loses, becoming another "has been" vendor of high-end cards.

      • Re: (Score:3, Interesting)

        by mrchaotica ( 681592 ) *
        It already exists, and is called the Open Dynamics Engine [ode.org]. It'd be nice if somebody made a version reimplemented on top of CUDA or CTM, though.
    • Since it is in the wild now, who is going to prevent me from fixing the mod?
    • From one of the linked articles:

      We already knew that Nvidia is working on a CUDA version for x86 CPUs, but said it would leave a modification for ATI GPUs to others.

    • It will happen (Score:5, Insightful)

      by ConanG ( 699649 ) on Friday June 27, 2008 @04:20PM (#23974155)
      I suspect they'll license it to ATi.

      The nVidia people are probably well aware that hogging PhysX to themselves is a stupid idea. Game makers aren't going to go out of their way to support it unless it can be reasonably expected that most gamers will be able to use it. It's a dead fish unless ATi can use it. That doesn't mean they'll just hand it over.
      • Re: (Score:2, Insightful)

        by DRobson ( 835318 )

        The nVidia people are probably well aware that hogging PhysX to themselves is a stupid idea. Game makers aren't going to go out of their way to support it unless it can be reasonably expected that most gamers will be able to use it.

        Contrast with all the vendor specific OpenGL extensions that were used by developers...

      • by Aladrin ( 926209 )

        I'll go one further. I suspect they'll ignore the situation and let things get rolling, and THEN license it to ATI.

        I have been wondering how nVidia would conquer the 'only our cards use it' hurdle, and figured they'd just push games that work on the software version of PhysX, where you can turn on all the really cool effects if you have the hardware. Doing that while ignoring this hack for a while is a great way to get people interested.

    • Maybe I'm missing something (sorry, haven't read the article yet, only skimmed it), but this is basically an instance of using software on hardware it wasn't originally designed or intended for, right? How are patents going to prevent that? If Nvidia is allowing free downloads of this software (as they do with all their other driver code), then there's simply no way to prevent people from using it on other hardware. If the software isn't free, and people aren't paying for it, well that's just simple soft

    • by pla ( 258480 )
      My guess is that nVidia will put a stop to this pretty quickly.

      How? AMD/ATI didn't do anything - a third party extracted the cat from the bag, and we all (except nVidia) benefit as a result.

      AMD doesn't need to do anything more than not break the interface used to make the port possible.
  • by Nom du Keyboard ( 633989 ) on Friday June 27, 2008 @04:03PM (#23973907)
    This is hardly the big deal that Nvidia makes it out to be. Physics doesn't come for free on either card. It takes away substantial resources from the GPU's major function of rendering frames. Frankly I don't care how beautiful the physics are when the frame rate is 9.
    • by Endo13 ( 1000782 ) on Friday June 27, 2008 @04:10PM (#23974023)

      Well, the amazing part of the whole deal here is that benchmarks show almost no decrease in framerate with PhysX turned on. So yeah, it's kind of a big deal.

      • Re: (Score:3, Funny)

        by Racemaniac ( 1099281 )

        so basically, the games hardly use any physics?

        • by timeOday ( 582209 ) on Friday June 27, 2008 @05:14PM (#23974817)
          Maybe the amount of physics that would overwhelm the CPU (which can also kill framerate, BTW) barely makes the GPU lift a finger. It's certainly not impossible; GPUs do blow away CPUs for some calculations, which is why we have GPUs in the first place.
      • One possible explanation is that games, contrary to expectations, continue to be limited by texture sampling (together with other old non-shader ops like stencil fill or alpha blending), while vertex processing is not enough to eat up the available shader-unit horsepower. So while the shader-unit pool indeed sees 100% peak times (especially when processing long special-effects shaders for some parts of the screen), on average there is some percentage of pure shader-unit power available for other tasks -- li
    • Depending on how it works, I know I care. If I can write basic Fortran/C/C++ code and have it running on an ultra-multithreaded core, I'm all for it. If their "physics" includes real physics (event simulation, data analysis, differential/integral equation solving, etc.) this will be a HUGE boost to scientific productivity. Add two or three $200 boards to a standard quad-core PC and you'll have more power than a lot of clusters out there, for a fraction of the cost.
    • Re: (Score:3, Informative)

      by ichigo 2.0 ( 900288 )
      That depends. Unless a game is amazingly well optimized for a specific card, it will not be able to use 100% of the resources. If running physics on the GPU lets it use those untapped resources then it can only be a good thing.
  • by Keith Russell ( 4440 ) on Friday June 27, 2008 @04:20PM (#23974165) Journal

    So this whole thing was kicked off by a column on the Inquirer? The same people who brought us the Rydermark "scandal"? [wikipedia.org] The Inq has shown a blatant and consistent anti-Nvidia bias over the years, so why give this any credence?

    Besides, the first question that popped into my head is one that is being asked a lot of places, but not answered: If accelerating PhysX on Nvidia's GPU hardware is cheating, wouldn't accelerating PhysX on Ageia's PPU hardware be considered cheating, too? Call me cynical, but I think AMD knows the answer to that, and would rather you didn't mention it, thank you very much.

  • Obviously they can't incorporate this into their drivers, but one has to wonder how much they'll look the other way on this. Do they have any legal obligation to stop users from exploiting this (i.e., modify their drivers to prevent such mods)? You can be sure they would go out of their way to stop something like that from happening in the other direction.

    Relevant original phrase: All's fair in love and war.

    Relevant original phrase with 21st century spin: All's fair in love and war so long as you do
  • Get rid of the 'hardhack' tag, people. Sheesh. It's just software. There is no hardware hacking involved.

  • Owners of ATI Radeon cards will be able to play PhysX games as well, such as Ghost Recon 2 and the already-mentioned Unreal Tournament 3.

    Not exactly true; they could already play the games with an ATI card, just not with PhysX enabled.

    On a side note, Sir Isaac Newton would be proud of these Israelis and their accomplishment of bringing 9.8 m/s^2 constant acceleration to ATI gamers.
