Graphics Hardware

Nvidia Discloses Details On Next-Gen Fermi GPU

EconolineCrush writes "The Tech Report has published the first details describing the architecture behind Nvidia's upcoming Fermi GPU. More than just a graphics processor, Fermi incorporates many enhancements targeted specifically at general-purpose computing, such as better support for double-precision math, improved internal scheduling and switching, and more robust tools for developers. Plus, you know, more cores. Some questions about the chip remain unanswered, but it's not expected to arrive until later this year or early next."

  • Re:But does it... (Score:1, Insightful)

    by Anonymous Coward on Wednesday September 30, 2009 @08:05PM (#29600413)

    More importantly, does it run physx in a machine that also has a non-nvidia gpu?

    Oh, wait. No, it doesn't [slashdot.org].

  • Re:AWESOME (Score:5, Insightful)

    by ArchMageZeratuL ( 1276832 ) on Wednesday September 30, 2009 @08:23PM (#29600529)
    To the best of my knowledge, double-precision floating point operations are actually pretty important for some scientific applications of GPUs, and as such this is significant for those using GPUs as supercomputers.
  • by jhulst ( 1121275 ) on Wednesday September 30, 2009 @08:45PM (#29600671) Homepage
    Sure, they have lots of power, but only when used for parallel tasks. Each individual core is considerably slower than a normal CPU core and much more limited in what it can do.
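A minimal sketch of the distinction above, in plain NumPy rather than any GPU API (the array and the recurrence are illustrative, not from the discussion): element-wise work is independent per element and spreads across thousands of slow cores, while a step-by-step recurrence gains nothing from extra cores and favors one fast CPU core.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1_000_000)

# Data-parallel: each element is computed independently, so the work can be
# split across as many (individually slow) cores as are available.
parallel_friendly = np.sqrt(x) * 2.0 + 1.0

# Serial dependency: element i needs element i - 1 first, so extra cores do
# not help; a single fast CPU core wins this kind of workload.
serial_unfriendly = np.empty_like(x)
serial_unfriendly[0] = x[0]
for i in range(1, len(x)):
    serial_unfriendly[i] = 0.5 * serial_unfriendly[i - 1] + x[i]
```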
  • Re:But does it... (Score:2, Insightful)

    by PopeRatzo ( 965947 ) * on Wednesday September 30, 2009 @09:04PM (#29600791) Journal

    More importantly, does it run physx in a machine that also has a non-nvidia gpu?

    You understand that these GPUs are made by Nvidia, right? So how could they run something on a machine with a non-Nvidia GPU if the GPUs the article refers to are made by Nvidia?

    What exactly were you trying to say? I'm not quite sure.

  • Re:AWESOME (Score:3, Insightful)

    by Korin43 ( 881732 ) on Wednesday September 30, 2009 @09:08PM (#29600813) Homepage
    It could also be useful in raytracing. The official reason POV-Ray hasn't been able to use video cards is that they don't have the required precision. That reasoning probably predates CUDA, but "better support" sounds helpful.
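A hedged illustration of the precision point, not POV-Ray's actual code (the scene numbers are invented for the demo): the quadratic in a ray-sphere intersection subtracts two nearly equal quantities, and in 32-bit floats the cancellation can swallow the hit geometry entirely.

```python
import numpy as np

# A unit sphere centered 10,000 units down the ray, viewed from the origin.
center_dist = 10_000.0
radius = 1.0

def hit_distance(dtype):
    # Quadratic in t along the ray: t^2 - 2*center_dist*t + (center_dist^2 - radius^2) = 0
    b = dtype(-2.0 * center_dist)
    c = dtype(center_dist ** 2 - radius ** 2)
    disc = b * b - dtype(4.0) * c            # catastrophic cancellation in float32
    return (-b - np.sqrt(disc)) / dtype(2.0)

print(hit_distance(np.float32))  # 10000.0 -- the discriminant rounds to zero
print(hit_distance(np.float64))  # 9999.0  -- the correct near-surface hit
```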
  • Re:But does it... (Score:4, Insightful)

    by skarhand ( 1628475 ) on Wednesday September 30, 2009 @09:31PM (#29600953)
    You could have read the link... Theoretically, you could use an ATI card for graphics and a second Nvidia card just for PhysX. Well, not anymore. Nvidia disabled that possibility in the driver. So people with older Nvidia cards who choose to upgrade to the newest Radeon 5800 series will lose PhysX. That kind of business practice reminds me of a certain company from Redmond...
  • by Sycraft-fu ( 314770 ) on Wednesday September 30, 2009 @09:32PM (#29600971)

    It depends on what you are doing, but when something involves a lot of successive operations, even 32-bit FP can end up not providing enough precision. You get truncation errors, and those add up to visible artifacts. This will only become more relevant as displays start to take higher-precision input, and more so if we start getting high dynamic range displays (something that can go ultra-bright when asked) that themselves take floating-point data.
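A small demonstration of that truncation effect, with plain NumPy on the CPU standing in for 32-bit GPU math (the values are made up for the demo): a long sequential running sum of small float32 terms drifts well away from what a 64-bit accumulator keeps.

```python
import numpy as np

n = 10_000_000
terms = np.full(n, 0.1, dtype=np.float32)

# cumsum is a sequential running sum, so rounding error piles up step by step.
running32 = np.cumsum(terms)[-1]                     # 32-bit accumulator
running64 = np.cumsum(terms, dtype=np.float64)[-1]   # same data, 64-bit accumulator

print(running32)  # drifts well above the exact 1,000,000
print(running64)  # ~1,000,000.015 (only the float32 rounding of 0.1 remains)
```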

  • Re:But does it... (Score:3, Insightful)

    by PopeRatzo ( 965947 ) * on Wednesday September 30, 2009 @09:37PM (#29600997) Journal

    That kind of business practice reminds me of a certain company from Redmond...

    Actually, I can think of at least one other major computer manufacturer who makes products that nerf other manufacturers' products. I think they're located in Cupertino.

  • by Heir Of The Mess ( 939658 ) on Wednesday September 30, 2009 @10:46PM (#29601361)

    Since then however, the hardware has always been "good enough"

    That's because most games are now written for consoles and then ported to PC, so the graphics requirements are based on what's in an Xbox 360. Unfortunately, consoles are on something like a five-year cycle. People are now buying a game console plus a cheap PC for their other stuff, for less than the old gaming rig used to cost. Makes sense in a way.

  • Re:AWESOME (Score:3, Insightful)

    by GreatBunzinni ( 642500 ) on Thursday October 01, 2009 @03:31AM (#29602767)
    Actually, that's a fundamental aspect of GPGPU's migration from an interesting oddity to a serious option (if not the obvious choice) in the number-crunching world. To give you an example, I'm a structural engineering major and, for my graduate thesis, I'm in the process of developing a pair of structural analysis programs (finite element method applications), a class of problem that basically consists of solving fairly large systems of linear equations. That sort of problem is right up GPGPU's alley. Yet, although it's a very affordable piece of technology and, as has already been demonstrated, would bring massive performance improvements to this sort of problem, after analysing the options I found that, at least for now, it was better to rely on multiple CPUs through multi-threading instead of jumping on the GPGPU bandwagon. One of the main reasons GPGPUs couldn't be taken seriously as an option was, in fact, their underwhelming support for double-precision math.

    There were a handful of issues behind that decision. One of them was that some GPGPU platforms fail silently [nvidia.com], which in practice means you end up crunching numbers with less mantissa than expected and therefore get considerably larger rounding errors, something that can bring disastrous results. Another was that the announced double-precision support of some products was itself a bit flawed, failing to comply with IEEE 754, the standard for floating-point arithmetic [wikipedia.org]. Although the non-compliance came down to only a handful of details, relying on GPGPUs to crunch numbers when they don't conform to that standard would force someone to spend considerable time formally checking what effects the non-compliance would have on the project being developed. That would take precious man-hours from projects that may already be poorly staffed, and the work would be rendered useless once the next GPGPU generation either fully supported IEEE 754 or, in the worst case, failed to support it in some other respect, forcing the poor chap assigned to verify the effects of non-compliance to start from scratch once again.

    So, to sum things up, GPGPU support for double-precision math is, in fact, great news. It means everyone can have their own personal vector-processing supercomputer on the desktop. Heck, even on a laptop. That may not mean much to the proverbial Joe Sixpack (at least not beyond the "oohh... shiny graphics" side of things), but being able to crunch a lot more numbers in the same time frame means the world to anyone writing or using number-crunching software, which is a lot of people.
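To make that double-precision point concrete, here is a hedged sketch with plain NumPy on the CPU standing in for a GPU solver (the Hilbert matrix is just a standard ill-conditioned stand-in for a stiffness matrix, not anything from the poster's thesis): the same linear solve that is accurate in float64 falls apart in float32.

```python
import numpy as np

n = 8
# Hilbert matrix: a classic badly conditioned test system (condition number ~1e10 at n = 8).
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = H @ x_true

x32 = np.linalg.solve(H.astype(np.float32), b.astype(np.float32))
x64 = np.linalg.solve(H, b)

print("float32 max error:", np.max(np.abs(x32 - x_true)))  # large -- the answer is lost
print("float64 max error:", np.max(np.abs(x64 - x_true)))  # small, roughly 1e-6
```

The same contrast echoes the IEEE 754 point in the comment above: without standard-compliant rounding, even simple error estimates like these would have to be re-checked against each vendor's quirks.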
