Graphics Hardware

Nvidia Discloses Details On Next-Gen Fermi GPU

EconolineCrush writes "The Tech Report has published the first details describing the architecture behind Nvidia's upcoming Fermi GPU. More than just a graphics processor, Fermi incorporates many enhancements targeted specifically at general-purpose computing, such as better support for double-precision math, improved internal scheduling and switching, and more robust tools for developers. Plus, you know, more cores. Some questions about the chip remain unanswered, but it's not expected to arrive until later this year or early next."

Comments:
  • Another article here (Score:4, Informative)

    by Vigile ( 99919 ) * on Wednesday September 30, 2009 @09:13PM (#29600837)

    http://www.pcper.com/article.php?aid=789 [pcper.com]

    Just for a second glance.

  • by mathimus1863 ( 1120437 ) on Wednesday September 30, 2009 @09:15PM (#29600855)
    I work at a physics lab, and demand for these newer NVIDIA cards is exploding due to general-purpose GPU programming. With a little creativity and experience, many computational problems can be parallelized and then run across the many GPU cores with fantastic speedup. In our case, we took a simulation from 2 s/frame to 12 ms/frame. It's not trivial, though, and the guy in our group who got good at it found himself on 7 different projects simultaneously, since everyone was craving this technology. He eventually left because of the stress. Now everyone and their mother either wants to learn how to do GPGPU or wants to recruit someone who does. This is why I bought NVIDIA stock (and it has doubled since I bought it).

    But this technology isn't straightforward. Someone asked why not just replace your CPU with it. Well, for one, GPUs used to be unable to do any general floating-point or double-precision calculations. You couldn't even program calculations directly -- you had to figure out how to represent your problem as texel and polygon operations so that you could trick your GPU into doing non-graphics calculations for you. With each new card released, NVIDIA is making strides to accommodate those who want GPGPU, and for everyone I know, those advances can't come fast enough. (A rough sketch of what a data-parallel kernel looks like follows below.)
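
    A minimal CUDA sketch of the idea described above (illustration only, not the poster's code; the kernel name, array names, and the trivial per-element update are all invented): each GPU thread handles one simulation element, so the per-frame work is spread across thousands of threads at once.

        // step_kernel.cu -- hypothetical data-parallel frame update:
        // one thread per element, no real physics, names are made up.
        __global__ void step_kernel(const float *in, float *out, int n, float dt)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;   // global element index
            if (i < n)
                out[i] = in[i] + dt * in[i];                 // stand-in for the real per-element update
        }

        // Host side: launch enough 256-thread blocks to cover all n elements.
        // step_kernel<<<(n + 255) / 256, 256>>>(d_in, d_out, n, dt);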
  • by Anonymous Coward on Wednesday September 30, 2009 @11:26PM (#29601611)

    Start looking at OpenCL as soon as possible if you want to learn GPGPU; CUDA is nice, but OpenCL is portable between vendors and hardware types :)

  • Re:Embedded x86? (Score:2, Informative)

    by Anonymous Coward on Thursday October 01, 2009 @01:22AM (#29602141)

    What I'd like to see is nVidia embed a decent x86 CPU,

    They did; it's called Tegra. Except it's not using the x86 hog, but the much more efficient ARM architecture.

  • by Anonymous Coward on Thursday October 01, 2009 @03:17AM (#29602701)

    GP was foolish to assume people would know what they were talking about, but:

    GPUs are SIMD machines (Single Instruction, Multiple Data): they process large quantities of numbers in parallel, which is what makes them "fast" despite their low clock speed compared to a CPU. They also have very deep pipelines so they can keep as many operations as possible in flight simultaneously. All this makes them very powerful, except for one major problem: branches. GPUs stall with major latency on branches.

    If you could write general-purpose software and operating-system code that rarely used goto, ternary, if, for, do, while, or switch statements in C, then you could pull it off; however, a subset of C minus those constructs would not be Turing complete, so it'd be damn hard.

    And even if you did succeed, it would still be slower than the CPU code, since not all workloads are compatible with SIMD. SIMD only works on parallel streams: workloads that consist of many independent units (e.g. pixels) that need the exact same operation performed on them, without depending on the results from the other units as part of those operations.

    This is the major benefit of a CPU versus a GPU: the CPU can handle branch-dense code with lots of interdependencies without too much stalling; the GPU cannot. (A concrete example of a divergent branch is sketched after this comment.)

    ---

    Of course, Intel's Larrabee GPU may change all this, but that remains to be seen until it hits the market.
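
    To make the branch point above concrete, here is a small CUDA sketch (an illustration, not the commenter's code): threads in the same 32-wide warp that take different sides of a data-dependent branch are serialized, so the hardware effectively runs both paths back to back and the SIMD advantage evaporates in branch-dense code.

        // divergence.cu -- illustrative only: a data-dependent branch that
        // can split a warp, forcing both paths to execute one after the other.
        __global__ void divergent_kernel(const int *flags, float *out, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= n) return;                 // bounds check (uniform, cheap)

            if (flags[i])                       // neighbouring threads may disagree here
                out[i] = out[i] * 2.0f;         //   path A
            else
                out[i] = out[i] + 1.0f;         //   path B; a diverged warp runs A then B
        }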

  • Re:But does it... (Score:3, Informative)

    by ioshhdflwuegfh ( 1067182 ) on Thursday October 01, 2009 @03:33AM (#29602773)
    He is talking about Apple.
  • Re:AWESOME (Score:5, Informative)

    by jpmorgan ( 517966 ) on Thursday October 01, 2009 @05:38AM (#29603285) Homepage
    The GTX 280 is a graphics card. The GT200 is the GPU core the GTX 280 card is based on. Likewise, the 8800-series graphics cards were based on the G80 chip (and later the G92, I think). There were also the G84, G86, and G94, which power a number of Nvidia's economy and mobile products. The Quadro 5600 and 4600 are also G80-based, and there were other, cheaper Quadros based on the G84. The Quadro 5800 is based on the GT200 chip. The Tesla 870s were based on G80s; the 1070s are based on the GT200. The cards also tend to have different memory interfaces (and amounts), clock rates, and even firmware, which is why there are many different cards all based on the same handful of chips.

    So no, I do mean the GT200. The GT200 processor supports double precision; the G8x and G9x processors do not. (A quick way to check what your own card supports is sketched below.)
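
    For anyone who wants to verify this on their own hardware, here is a small CUDA runtime sketch (an example added here, not from the article): double-precision support arrived with compute capability 1.3, i.e. the GT200, while G8x/G9x parts report 1.0 or 1.1.

        // fp64check.cu -- query the first GPU and report whether it can do double precision.
        #include <cstdio>
        #include <cuda_runtime.h>

        int main()
        {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, 0);   // properties of device 0
            bool has_fp64 = prop.major > 1 || (prop.major == 1 && prop.minor >= 3);
            printf("%s: compute capability %d.%d, double precision: %s\n",
                   prop.name, prop.major, prop.minor, has_fp64 ? "yes" : "no");
            return 0;
        }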
  • Re:But does it... (Score:3, Informative)

    by smoker2 ( 750216 ) on Thursday October 01, 2009 @06:00AM (#29603361) Homepage Journal
    Wrong. They disable PhysX when a non-Nvidia graphics card is powering the display; the mere presence of another graphics card is not the deciding factor.

"Ada is PL/I trying to be Smalltalk. -- Codoso diBlini

Working...