Graphics Hardware

Nvidia Discloses Details On Next-Gen Fermi GPU

Posted by samzenpus
from the getting-the-skinny dept.
EconolineCrush writes "The Tech Report has published the first details describing the architecture behind Nvidia's upcoming Fermi GPU. More than just a graphics processor, Fermi incorporates many enhancements targeted specifically at general-purpose computing, such as better support for double-precision math, improved internal scheduling and switching, and more robust tools for developers. Plus, you know, more cores. Some questions about the chip remain unanswered, but it's not expected to arrive until later this year or early next."

  • But does it... (Score:5, Interesting)

    by popo (107611) on Wednesday September 30, 2009 @08:00PM (#29600369) Homepage

    ... run Linux?

  • by Joce640k (829181) on Wednesday September 30, 2009 @08:42PM (#29600655) Homepage

    "Ignorant" would be a better rating - there's a lot of compute power but it's in the middle of a very different architecture to an x86 CPU. Not usable for running an OS.

  • So... (Score:2, Interesting)

    by fuzzyfuzzyfungus (1223518) on Wednesday September 30, 2009 @08:51PM (#29600709) Journal
    Will they also be announcing support for an underfill material that doesn't cause the chip to die after a fairly short period of normal use? And, if they do, will they be lying about it?
  • by jpmorgan (517966) on Wednesday September 30, 2009 @09:09PM (#29600815) Homepage
    Notice the features being marketed: concurrent CUDA kernels, high-performance IEEE double-precision floating point, multi-level caching and expanded shared memory, and faster atomic operations on global memory (a small sketch of the first two follows below). NVIDIA doesn't care about you anymore. Except for a small hardcore, gamers are either playing graphically trivial MMOs (*cough*WoW*cough*) or have moved to consoles.

    They don't want to sell you this chip for a hundred bucks; they want to sell it to the HPC world for a couple thousand bucks (or more... some of NVIDIA's current Tesla products run to five figures). The only gamers they're really interested in these days are on mobile platforms, using Tegra.
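    A minimal CUDA sketch (purely illustrative; the kernel, sizes, and values are made up for demonstration) of the first two features listed above: IEEE double-precision arithmetic, and independent kernels launched into separate streams, which Fermi-class and later hardware can run concurrently:

        #include <cuda_runtime.h>

        // y = a*x + y in IEEE double precision
        __global__ void daxpy(int n, double a, const double *x, double *y)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                y[i] = a * x[i] + y[i];
        }

        int main(void)
        {
            const int n = 1 << 20;
            double *x, *y1, *y2;
            cudaMalloc(&x,  n * sizeof(double));
            cudaMalloc(&y1, n * sizeof(double));
            cudaMalloc(&y2, n * sizeof(double));
            cudaMemset(x,  0, n * sizeof(double));   // real code would upload data here
            cudaMemset(y1, 0, n * sizeof(double));
            cudaMemset(y2, 0, n * sizeof(double));

            cudaStream_t s1, s2;
            cudaStreamCreate(&s1);
            cudaStreamCreate(&s2);

            int threads = 256;
            int blocks  = (n + threads - 1) / threads;
            daxpy<<<blocks, threads, 0, s1>>>(n, 2.0, x, y1);   // these two launches are
            daxpy<<<blocks, threads, 0, s2>>>(n, 3.0, x, y2);   // independent and may overlap
            cudaDeviceSynchronize();

            cudaStreamDestroy(s1);
            cudaStreamDestroy(s2);
            cudaFree(x); cudaFree(y1); cudaFree(y2);
            return 0;
        }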
  • by Pulzar (81031) on Wednesday September 30, 2009 @09:22PM (#29600893)

    People have been saying that forever now. I think only the first two generations of 3D cards were greeted with universal enthusiasm; everything since has drawn a "who needs that power to run game X" crowd. The truth is, yes, you can run a lot of games with old cards, but you can run them better with newer cards. So it's just a matter of preference when it comes to the usual gaming.

    AMD/ATI is at least doing something fun with all this new power. Since you can run the latest games at 5400x2000 with high frame rates, why not hook up three monitors to one Radeon 58xx card and play it like this [amd.com]? That wasn't something you could do with an older card.

    Similarly, some of the new video converter apps that make use of a GPU can cut transcoding from many hours to one hour or less... you can convert your blu-ray movie to a portable video format much more easily and quickly (a rough sketch of the per-pixel parallelism involved follows this comment). Again, something you couldn't do with an old card, and something that was only somewhat useful in the previous generation.

    In summary, I think the *need* for more power is less pressing than it used to be, but there's still more and more you can do with new cards.
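    A rough illustration of why transcoding work maps so well onto a GPU (not how any particular converter app actually does it): each output pixel can be computed independently, so one frame becomes millions of tiny, identical jobs. The kernel below does a single trivial stage, packed RGB to 8-bit luma with BT.601 weights; the frame size and launch shape are just example values.

        #include <cuda_runtime.h>

        // One trivial stage of a video pipeline: packed RGB -> 8-bit luma
        // (BT.601 weights). One thread per output pixel.
        __global__ void rgb_to_luma(const unsigned char *rgb, unsigned char *luma,
                                    int width, int height)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x >= width || y >= height)
                return;
            int i = (y * width + x) * 3;                 // 3 bytes per RGB pixel
            float r = rgb[i], g = rgb[i + 1], b = rgb[i + 2];
            luma[y * width + x] = (unsigned char)(0.299f * r + 0.587f * g + 0.114f * b);
        }

        int main(void)
        {
            const int w = 1920, h = 1080;                // one 1080p frame
            unsigned char *d_rgb, *d_luma;
            cudaMalloc(&d_rgb,  (size_t)w * h * 3);
            cudaMalloc(&d_luma, (size_t)w * h);
            cudaMemset(d_rgb, 0, (size_t)w * h * 3);     // real code would copy a frame in

            dim3 block(16, 16);
            dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
            rgb_to_luma<<<grid, block>>>(d_rgb, d_luma, w, h);   // ~2 million threads
            cudaDeviceSynchronize();

            cudaFree(d_rgb);
            cudaFree(d_luma);
            return 0;
        }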

  • Re:But does it... (Score:5, Interesting)

    by Anonymous Coward on Wednesday September 30, 2009 @09:32PM (#29600961)

    Some motherboards have more than one PCI Express slot. Some even come with GPUs built onto the motherboard. In either case, it is entirely conceivable that there may be a GPU present other than the one attached to the display (a small device-enumeration sketch follows this comment). Then there's the Hydra 200 (look on Anandtech, I'm too lazy to find the link), a chipset which distributes processing load among multiple GPUs from any vendor to render a scene or do GPGPU computing.

    Nvidia just released new drivers which explicitly disable PhysX acceleration in the presence of a GPU from another manufacturer. For the above stated reasons, this is evil.
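    To make the point about secondary GPUs concrete, here's a minimal sketch of how a program can enumerate every CUDA device in a system rather than just the one driving the display (the watchdog flag is only a rough hint about which card has a display attached):

        #include <cuda_runtime.h>
        #include <stdio.h>

        int main(void)
        {
            int count = 0;
            if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
                printf("No CUDA-capable devices found.\n");
                return 1;
            }
            for (int dev = 0; dev < count; ++dev) {
                cudaDeviceProp prop;
                cudaGetDeviceProperties(&prop, dev);
                // kernelExecTimeoutEnabled is typically set on a GPU with a
                // display attached, where the driver enforces a run-time
                // watchdog on kernels.
                printf("Device %d: %s, %d multiprocessors, display watchdog: %s\n",
                       dev, prop.name, prop.multiProcessorCount,
                       prop.kernelExecTimeoutEnabled ? "yes" : "no");
            }
            return 0;
        }

    A program would then call cudaSetDevice() to pick which of those GPUs does its work.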

  • Embedded x86? (Score:2, Interesting)

    by Doc Ruby (173196) on Wednesday September 30, 2009 @09:33PM (#29600973) Homepage Journal

    What I'd like to see is nVidia embed a decent x86 CPU (maybe like a P4/2.4GHz) right on the die with their superfast graphics. I'd like a media PC which isn't processing apps so much as it's processing media streams: pic-in-pic, DVR, audio. Flip the script on the fat Intel CPUs with "integrated" graphics, for the media apps that really need the DSP more than the ALU/CLU.

    Gimme a $200 PC that can do 1080p HD while DVRing another channel or a download, and Intel and AMD will get a real shakeup.

  • Re:Embedded x86? (Score:3, Interesting)

    by Doc Ruby (173196) on Thursday October 01, 2009 @10:07AM (#29605185) Homepage Journal

    That's better than nothing. But I want all the x86 packages, especially the Windows AV codecs. That requires an x86.

    Though that requirement suggests an architecture of an ARM CPU for the OS/apps, a little x86 coprocessor for codecs, and MPP GPU cores doing the DSP/rendering. If Linux could handle that kind of "heterogeneous multicore" chip, it would really kill Windows 7. Especially on "embedded" media appliances.

  • by Bitmanhome (254112) <bitman&pobox,com> on Thursday October 01, 2009 @06:14PM (#29611743)

    FLOPS aren't directly comparable, as the ATI chips are arranged more like the Itanium, while nVidia looks more like a Core Duo. ATI has more raw power, but uses a smaller percentage of it.
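    A back-of-the-envelope illustration of that point, using rough numbers of the era (treat the specs as approximations): theoretical peak is just ALUs x clock x FLOPs per clock, and it says nothing about how much of that a real kernel keeps busy.

        #include <stdio.h>

        // Theoretical single-precision peak in GFLOPS:
        // ALU count * clock (GHz) * FLOPs each ALU can issue per clock.
        static double peak_gflops(int alus, double clock_ghz, int flops_per_clock)
        {
            return alus * clock_ghz * flops_per_clock;
        }

        int main(void)
        {
            // Approximate, era-appropriate specs (close to a Radeon HD 5870
            // and a GeForce GTX 285; exact figures vary by board).
            printf("VLIW-style part:   ~%.0f GFLOPS peak\n", peak_gflops(1600, 0.850, 2));
            printf("Scalar-style part: ~%.0f GFLOPS peak\n", peak_gflops(240, 1.476, 3));

            // The VLIW design only reaches its peak when the compiler can fill
            // every slot of each unit every cycle, so sustained throughput is
            // often a much smaller fraction of the headline number.
            return 0;
        }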
