Hardware

Nvidia GeForce 4 (NV25) Information

msolnik writes: "nV News has a brief article about the long-awaited NV25-based video adapters. These graphics processors have capabilities similar to the XGPU's and are a lot more powerful than the GeForce3 Ti500. Since they are manufactured on a 0.13 micron process, they will probably be clocked at very high levels."
  • Will it be able to render Penguins under Linux?
  • 3dfx... (Score:2, Interesting)

    Does anyone know if nVidia has started using the technology that they acquired with the purchase of 3DFX? I know that 3DFX was working on some killer graphics routines and various chipsets before nVidia bought them out, and nVidia at the time was too far along with the GeForce 3 to integrate them. But supposedly they were going to use the technology in their next graphics chip.... which I assume to be the GeForce 4.
    • NVidia's purchase of 3DFX was, in my opinion, more a cheap buyout of a struggling competitor, to wipe them off the face of the earth and maybe inherit one or two good ideas, rather than a cooperative purchase.
      • NVidia's purchase of 3DFX was, in my opinion, more a cheap buyout of a struggling competitor, to wipe them off the face of the earth

        Gigapixel/TDFX was already going out of business anyway. NVDA did not buy TDFX; they merely acquired its intellectual property, mainly numerous pages of research concerning anti-aliasing.

        TDFX was dead anyway. nVidia did not buy them to "wipe them off the face of the earth," as that was already happening.
        • The point I'm making is that they didn't try to rescue the company either, which the NVidia millions could easily have accomplished.

          This left quite a void next to NVidia at the top for quite a while, which only ATI is starting to fill now.
          • Re:3dfx... (Score:2, Informative)

            by SquierStrat ( 42516 )
            ATI is nowhere near filling that void, sir! Take a look at their drivers. Quite simply, they suck! Particularly the Linux drivers. My apologies, but NVIDIA is the only company with decent (speed-wise) Linux drivers on the market. I can't say the same for stability; however, I have zero stability problems due to the drivers, although others have said otherwise. ATI's Windows drivers, however, are cripplingly unstable (and have been for many years). Not to mention, the card/driver mix is just behind NVIDIA in speed and only gets ahead in FSAA benchmarks, which personally I couldn't give a rip about. I want a card that runs stable and fast in Linux AND Windows (shudder...), not one that runs stable in one OS and fairly fast in the other, but hey, the image quality is good. If that were enough I'd have used ATI a long time ago.
            • Re:3dfx... (Score:2, Funny)

              by dinivin ( 444905 )
              I want a card that runs stable and runs fast in Linux AND windows (shudder...)

              Same here... Which is why I've settled on a Radeon 7500.

              Dinivin
              • Re:3dfx... (Score:3, Informative)

                by mz001b ( 122709 )
                nvidia's drivers under Linux are pretty much as fast as they are under Windows. They are not open source, but they work very well. OpenGL apps are fast under Linux with them.
    • Re:3dfx... (Score:4, Informative)

      by fault0 ( 514452 ) on Sunday November 25, 2001 @08:10PM (#2611563) Homepage Journal
      Yeah, apparently they will:

      from the article:

      ".. A Voodoo5 5500-esque Anti-Aliasing feature. The presumption is that the NV25 will bring a Rotated-Grid AA implementation.."

      This is probably to compete with Ati's SmoothVision FSAA implementation, which is really quite slick. However, 3dfx was rumored to have really advanced FSAA implementations for their future Voodoo5 6000/6500 series. Perhaps the NV25 will include that.
      • Indeed. I own a Voodoo5 5500, and the FSAA on it kicks some serious ass. My only problem with it is that there's no official XP drivers. I have to rely on flaky 'hacked' drivers. I've been holding out for the nVidia card which was going to have some of 3dfx's work, otherwise I'd pick myself up a Geforce 3.
        On a side note, probably off-topic, but I for one would have paid anything to have a Voodoo5 6000. External power supply baby, aw yeah.
        • Indeed. I own a Voodoo5 5500, and the FSAA on it kicks some serious ass. My only problem with it is that there's no official XP drivers. I have to rely on flaky 'hacked' drivers.

          I abandoned my Voodoo 3 because I couldn't get drivers for XP, and Tomb Raider Chronicles kept crashing the machine whenever I turned on the hardware acceleration and was slow as molasses without it (this on a twin Pentium 3 machine with 512MB RAM).

          I just picked up a GeForce 3 Ti 500 and suddenly the machine works fine. Not only does TRC run fine, but the machine is now stable; I haven't had a crash since. I am even wondering whether to bother with the Windows XP upgrade at all.

          The machine has no problem at all doing 30fps at the max resolution of my monitor (1280x1024). It even runs OK at 1600x1200, but the output looks crappy because the LCD display is aliasing like mad.

          BTW, I bought the Voodoo 3 with the machine because I wanted a PCI bus card, leaving the AGP slot open for when the Voodoo 5 became available. It is a pity that the 5/6000 never appeared, since it gave the term 'gratuitous' a whole new meaning. Hopefully the new NVidia part will fill the same role.

          As a practical matter however I found that people are far more impressed by the size of the case than the capability of the machine. For my last machine I bought a $300 'server style' case. This is about the same height as a small ATX case but twice the width. The idea was to avoid the type of overheating problems some of my earlier machines had had. It worked pretty well, no heating problems at all and the top is wide enough for the printer to sit on it nicely.

      • The presumption is that the NV25 will bring a Rotated-Grid AA implementation.

        Actually, the GeForce3 already does a form of rotated-grid AA. The 2x and Quincunx AA modes both take diagonally-arranged samples, one at the pixel corner, and one at the middle. It does help a lot, compared to the GF2's implementation.

        However, the 4-sample mode is still a straight horizontal/vertical grid pattern. And all modes currently do multisampling instead of supersampling, i.e. they take a single texture sample per pixel instead of up to four. This makes them significantly faster, but textures are blurrier, which can be compensated for somewhat by turning up the anisotropic filtering, at a cost in speed...

        There's certainly room for improvement though. It'll be interesting to see what's planned.
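        To make the sample-pattern distinction above concrete, here is a minimal C sketch contrasting an ordered-grid layout with a diagonal ("rotated") 2x layout, and the per-pixel texture-sample cost of supersampling versus multisampling. The offsets are purely illustrative, not NVIDIA's documented sample positions.

```c
#include <stdio.h>

typedef struct { float dx, dy; } SampleOffset;  /* sub-pixel sample position */

int main(void) {
    /* Ordered grid, 4x: samples on a regular horizontal/vertical lattice. */
    SampleOffset ordered4[4] = { {0.25f, 0.25f}, {0.75f, 0.25f},
                                 {0.25f, 0.75f}, {0.75f, 0.75f} };
    /* Diagonal ("rotated") 2x, as described for the GF3's 2x/Quincunx modes:
       one sample at the pixel corner, one at the centre. */
    SampleOffset diagonal2[2] = { {0.0f, 0.0f}, {0.5f, 0.5f} };

    puts("ordered 4x offsets:");
    for (int i = 0; i < 4; i++)
        printf("  (%.2f, %.2f)\n", ordered4[i].dx, ordered4[i].dy);
    puts("diagonal 2x offsets:");
    for (int i = 0; i < 2; i++)
        printf("  (%.2f, %.2f)\n", diagonal2[i].dx, diagonal2[i].dy);

    /* Cost model per pixel for N samples: supersampling textures/shades every
       sample; multisampling shades once and replicates the colour, paying
       only extra depth/coverage tests. */
    int n = 4;
    printf("supersampling %dx: %d texture lookups per pixel\n", n, n);
    printf("multisampling %dx: %d texture lookup per pixel\n", n, 1);
    return 0;
}
```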

    • Re:3dfx... (Score:2, Informative)

      negative

      3dfx was trying to ski uphill with a set of gear that had been obsolete for some time. nVidia was no longer worried about them, but rather about ATI and competing for the OEM deals. Buying 3dfx just gave them protection against someone else buying 3dfx's IP and starting patent litigation against them.

      And the fact of the matter is, you don't know what 3dfx was working on, whether it was "killer routines" or not. Gigapixel's technology may have breathed some extra life into the T-buffer, but it's hardware transform and programmability that are the foreseeable future, two things 3dfx was not making much progress toward.

      Look, I liked 3dfx as well. I replaced my Rendition with a Voodoo when there were exactly three games that ran on it. However, I don't let my fondness for 3dfx pull the wool over my eyes and convince me that they were somehow not doomed and were going to come back and do great things if only nasty nVidia hadn't bought them.

      Remember, 3dfx chose to sell.

      What will be the next big innovations? If someone can figure out how to get realistic sizes of embedded DRAM, that'll be nice for fill rate. Some people might try hierarchical Z-buffering, but that hasn't been a win on most architectures for a while. As vertex-processing throughput approaches fill-rate throughput, I expect to see things move to randomized splat sampling, since it appears to be the best way to get logarithmic complexity versus the number of primitives. I've not seen any other way to get realistic rendering times on datasets involving billions of polygons or more.
    • I don't care whether they adapt any of 3Dfx's hardware ideas. What I really wish they'd do is either implement Glide support in their drivers, or open up the Glide source code to whatever extent they can (some bits and bobs may be proprietary to other companies).

      Obviously, I'm wishing for this for compatibility with old Glide-only apps, not so that new ones could be written. No one has written new ones in eons AFAIK, since as soon as more open standards like OpenGL and DirectX came onto the scene people dumped the 3Dfx-only Glide route, thank God. But there are still several older games written in the Voodoo and Voodoo 2's heyday which are Glide-only, or which work significantly better under Glide than they do in DX.

      These apps are few but they contain a couple of early PC classics, as well as the first and still-most-compatible N64 emulator UltraHLE and its offshoot SupraHLE. There are several Glide-wrappers that translate Glide calls into standard DirectX calls, but they don't work well or at all for everybody--me included. None of the Glide wrappers will let me play any Glide-only games or any game through SupraHLE. In addition, some older titles like the first *Tomb Raider* look much, much better under Glide than they do under DX.

      So, for the sake of compatibility with old games, I wish they would release as much of the Glide code as they can, if not write a quick-and-dirty Glide implementation into their drivers. Some may remember that Creative Labs had promised near-perfect Glide compatibility for their TNT2-based cards back in 1999, in a driver they called Unified. But after 3Dfx sued them the project disappeared, and now that a couple of years have passed the desire for Glide capabilities has died down, since the games are now so old. But some of us like those old games, and the idea of continued compatibility. I just hate it when things break unnecessarily. It's funny: I can still run almost any DOS game (the oldest I've run goes back to 1982, though some need CPU-slowdown programs because they lack internal timing routines), yet proprietary 3D APIs like Glide and even DX (Microsoft could break backwards compatibility with older versions any time they wish) take away that continuity and certainty.

      Just my opinion, though.
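      For readers unfamiliar with what the Glide wrappers mentioned above actually do, here is a deliberately tiny sketch of the translation idea: export entry points shaped like the ones an old Glide game expects and forward them to a modern backend. Every name and signature below is a hypothetical stand-in, not the real Glide or DirectX API.

```c
#include <stdio.h>

typedef struct { float x, y, z, u, v; } FakeVertex;   /* stand-in vertex type */

/* The modern backend the wrapper targets (in a real wrapper this would be
   DirectX or OpenGL calls; here it just logs what it was asked to draw). */
static void backend_draw_triangle(const FakeVertex *a, const FakeVertex *b,
                                  const FakeVertex *c) {
    printf("backend: triangle (%.0f,%.0f) (%.0f,%.0f) (%.0f,%.0f)\n",
           a->x, a->y, b->x, b->y, c->x, c->y);
}

/* "Glide-shaped" entry point the old game would link against; the wrapper's
   whole job is this kind of translation, plus whatever state bookkeeping the
   old API implied. */
void fake_grDrawTriangle(const FakeVertex *a, const FakeVertex *b,
                         const FakeVertex *c) {
    backend_draw_triangle(a, b, c);
}

int main(void) {
    FakeVertex v0 = {0, 0, 0, 0, 0}, v1 = {100, 0, 0, 1, 0}, v2 = {0, 100, 0, 0, 1};
    fake_grDrawTriangle(&v0, &v1, &v2);   /* what the old game's call becomes */
    return 0;
}
```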
      • I second this. I found my old copy of Independence War a few months back, and was all excited about playing it, till I remembered it was Glide or software rendering only. I was THIS close to digging out my Banshee card... but it just didn't feel right plugging it in beside my GeForce 3...
  • What I want (Score:1, Interesting)

    by redcliffe ( 466773 )
    is a realtime raytracing chip. That would be cool, especially if it did radiosity and photon mapping.
  • Finally, something to face the Radeon
    • Except for the fact that the drivers are still closed source, but most people can live with that.
  • Tiny Little Item (Score:5, Informative)

    by 1alpha7 ( 192745 ) on Sunday November 25, 2001 @08:03PM (#2611546) Homepage

    In case you miss it 3/4 down the page:

    NV25 Information

    I was browsing nVidia's forum over @ Fools, and there was a link to Reactor Critical. Here's what they have to say about NV25.

    The long-awaited NV25-based adapters. This graphics processor, which has capabilities similar to the XGPU's, is a lot more powerful than the GeForce3 Ti500. Since it is manufactured on a 0.13 micron process, it has a good chance of being clocked at very high levels. The GPU comes in January/February 2002, while professional boards should be available in the second quarter.

    ELSA is going to launch two boards based on the NV25GL processor; both support two LCD monitors, though we do not know whether there are two integrated TMDS transmitters or only one, with the second being external.

    An NV25 running at 275 MHz with 128 MB DDR SDRAM @ 250 MHz.
    An NV25 running at 300 MHz with 128 MB DDR SDRAM @ 330 MHz.

    So, this is what a high-end NV25 part *might* look like...

    * Rumoured 6 Pixel pipelines
    * Core freq: 300 MHz.
    * Memory: 660 MHz (eff.) ~ 10.5 GB/sec BW, assuming they stay with 128-bit data paths (see the back-of-envelope check at the end of this comment)
    * Supports TwinView
    * Supports (finally) Hardware iDCT
    * More powerful T&L unit, to include a second Vertex Shader
    * Can't find the link, but there's a rumour stating that we can expect Voodoo5 5500-esque Anti-Aliasing feature. The presumption is that the NV25 will bring a Rotated-Grid AA implementation to the table.
    * .13u Manufacturing process

    It really does sound like a pretty amazing chip. I would be willing to bet we'll be hearing a lot more in the way of rumours as the New Year approaches.
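    The bandwidth figure in the list above is easy to sanity-check. A back-of-envelope sketch, assuming a 128-bit bus at 330 MHz DDR (660 MHz effective):

```c
#include <stdio.h>

int main(void) {
    double effective_mhz = 660.0;        /* 330 MHz DDR = 660 MHz effective   */
    double bus_bytes     = 128.0 / 8.0;  /* 128-bit data path = 16 bytes/clock */
    double gb_per_sec    = effective_mhz * 1e6 * bus_bytes / 1e9;
    printf("%.2f GB/s\n", gb_per_sec);   /* ~10.56 GB/s, i.e. the ~10.5 quoted */
    return 0;
}
```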

  • While the specs look really excellent (sorry, no ray-tracing acceleration ;), I must ask you one simple question...

    Don't you think it's time they adopted a new family name for their newer cards? GeForce is kind of old-sounding now. I think it could use a new name, what do you think? Reply to this post with suggestions, please! :)
    • [08:06p] <Gangis> GeForce 4?!?! Pleeeeeease, be original for once
      [08:09p] <[stig]> they need to call it an AK47 or something
      [08:09p] <[stig]> just so i say to my friends in school "Yeah so I picked up an AK47 the other day, its really powerful!"
      [08:09p] <[stig]> in front of teachers of course
  • by x136 ( 513282 )
    Is my sense of time totally off, or did the GeForce3 just come out a few months ago?

    Stop the train, I want to get off.
    • Re:eh? (Score:3, Interesting)

      by fault0 ( 514452 )
      nVidia became so successful because of its short release schedule. They release new products twice a year, every six months.

      These six months, the GeForce 3 Ti200/500 came out. Six months before that, the GeForce 3 came out, etc...

      This kind of release schedule is what made 3dfx, a once-undisputed leader in 3D technology, lag so far behind. Consequently, it's also why Matrox doesn't even really care about the 3D market anymore.
      • Actually, with NV25, they are abandoning the 6-month cycle. NV25 will come out a year after GeForce3, while Ti500 was just an overclocked revision of the same chip (that used to be the "Ultra" version that would come out 3 months after a new chip release).
    • by Sokie ( 60732 )
      nVidia's product development cycle has always been fast paced. They come out with a new GPU every 12 months and come out with a tweaked and upgraded version of that GPU 6 months later. Example: GeForce3 was a new GPU, GeForce3 Ti 200/500 was the tweak/upgrade ~6 months later. GeForce4 info coming out about now is nothing out of the ordinary for nVidia.
    • Re:eh? (Score:3, Insightful)

      by hexix ( 9514 )
      Yeah, but it's no big deal. You can get two-year-old hardware nice and cheap, seeing as the new stuff never even gets taken advantage of till it's about two years old.
  • by hound3000 ( 238628 ) on Sunday November 25, 2001 @08:29PM (#2611614) Journal
    It's amazing to see NVidia's dedication to advancing their technology and continually improving a seemingly perfect line of cards, but with all this power, are we running out of applications that can use it?

    I have an 800 Duron system with a GeForce 2 MX. It plays any new game at 1152x968 flawlessly. The GeForce 3 can pump out perfect refresh rates at even higher resolutions in any of the newest and most graphically intensive games available today. There simply is no challenge, whereas years ago there was always room to improve - refresh rates, resolution, bit colour, texture size, etc.

    Does improvement in the 2000s merely mean higher resolutions? If so, I don't want it. On average, most consumer-level monitors are 17" and support a max resolution of 1280x1024. These new cards can already drive that flawlessly, so there's little point in investing in a new card, and I see no point in running Max Payne, for example, at 4800x3600.

    There is no "killer app" available today - even with the GeForce 3 being out for some time now - that will even begin to offer these cards a challenge, and with a GeForce 4 on the way, will NVidia be able to intise buyers into believing they need 300fps at 4800x3600 resolution? In the end, I begin to wonder if NVidia is beginning to find itself in a tough corner. Their hardware is revolutionary, but lacks any practical application.
    • That's what they said about the Voodoo2, the P2 chips and almost every new CPU that comes out. Until a game I play has real-time 3D graphics that are totally lifelike, we're not there yet. And I mean every little pore on the characters' faces individually bump-mapped and rendered, and realistic arm movement.

      The environments still look crappy. It's just large polygons with painted edges. And we're nowhere near environments that react realistically to your actions.
      • The problem (as Hideo Kojima says in this interview [slashdot.org]), is that each of those pores will have to be designed. So as detail increases, so does game development cost.

        Games won't be able to keep up with graphics cards until designs scale above the latest hardware. Some kind of fractal / organic method seems the only way to go.

    • I have the same setup as you .. as of right now, I don't think there is a killer app, I just follow The Carmack's mighty upgrade plan .... upgrade at the release of every new id game. :)

      So Doom3 = new card for me.
    • by Anonymous Coward
      Unreal Warfare, the new licensable engine from the people that brought you UT and Unreal, brings an Athlon 1.4 GHz / GeForce3 to its knees. Sure, it is a game engine still in flux, with content that may need to be optimized.

      Developers will continually find ways to make these boards work. The game production pipeline is getting slower, that's all.

    • I fail to see what's so revolutionary about their hardware. They're basically building huge DSP-style chips where much of the operation is hardcoded for better optimization. If chips continue to do this, of course you're going to see games struggle to catch up. The 3D graphics market seems to be doing very little that's revolutionary--just bringing the chips up to the process limitations of transistor size and speed.

      The problem with the current model is that the graphics card itself isn't expected to have any intelligence of its own. It's simply expected to render as much as possible in as little time as it can. Right now, we're expected to pass millions of triangles to the card to render, as well as megabytes of textures to slap on them as fast as possible. Imagine if, instead, the developer handed the graphics card a mathematical description of the model, and the chip did the rest, filling in details based on fractal algorithms. Instead of applying a bumpy-looking texture to a wall, you could make the wall itself bumpy with potentially infinite detail. That would be revolutionary, and would require incredible engineering to design.

      Saying the current crop of graphics chips are revolutionary is like denying that SGI ever designed a Reality Engine in the first place. Just because greater integration allows it to be insanely fast doesn't mean it's anything really that new. NVidia's going to have to do something pretty amazing to keep from getting blown away when something truly revolutionary comes around.
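      As a toy illustration of the "hand the card a description and let it fill in detail" idea above, here is a midpoint-displacement sketch in C: two endpoints of a wall edge expand into a bumpy profile of arbitrary resolution. It is purely conceptual; no shipping GPU exposes this interface.

```c
#include <stdio.h>
#include <stdlib.h>

/* Recursively subdivide the segment (x0,h0)..(x1,h1), perturbing each midpoint
   by a random amount that halves with every level of recursion. */
static void displace(float x0, float h0, float x1, float h1,
                     float roughness, int depth) {
    if (depth == 0) {
        printf("(%.3f, %.3f)\n", x0, h0);   /* emit left end of this leaf segment */
        return;
    }
    float xm = 0.5f * (x0 + x1);
    float hm = 0.5f * (h0 + h1)
             + roughness * ((float)rand() / RAND_MAX - 0.5f);
    displace(x0, h0, xm, hm, roughness * 0.5f, depth - 1);
    displace(xm, hm, x1, h1, roughness * 0.5f, depth - 1);
}

int main(void) {
    srand(42);
    /* Two control points expand into 2^6 bumpy segments of generated detail. */
    displace(0.0f, 0.0f, 1.0f, 0.0f, 0.25f, 6);
    printf("(1.000, 0.000)\n");             /* final right endpoint */
    return 0;
}
```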

      • The 3D graphics market seems to be doing very little that's revolutionary--just bringing the chips up to the process limitations of transistor size and speed.

        Oh, you make it sound so easy ;-)

        What about turning the last 20 years of hardware graphics acceleration on its head by introducing a *programmable* graphics pipeline? It's never been done before, and it changes everything. OpenGL is being totally redesigned from the ground up to cope with this huge shift in the rendering paradigm. This is the biggest thing since Renderman. I'd call it revolutionary.

        Imagine if instead, the developer handed the graphics card a mathematical description of the model, and the chip did the rest, filling in details based on fractal algorithms.

        What, like these [nvidia.com] hardware mandelbrots, rendered entirely by the GPU? Or this [nvidia.com] Game of Life? Water [nvidia.com] simulation, Perlin noise [nvidia.com], grass [nvidia.com], procedural 3D noise, particle systems [nvidia.com], all rendered by programmable vertex & pixel shaders on the GPU. Plus fire, fur, toon shading, silhouettes... Of course, this is only what a few people have thought of so far, on some first-generation hardware.

        That would be revolutionary, and would require incredible engineering to design.

        Hmm. Maybe so.

      • Way to insult the hard work of 100 people who know 1000 times more than you. Textures generated by algorithms are known as procedural textures. They are the backbone of texturing in non-realtime 3D animation. They are also what Nvidia's GeForce 3 was all about. As for procedural objects, that is further off, but will eventually work its way down to realtime 3D. It is what Photorealistic RenderMan has used for the last twenty years, and is why film effects don't have polygonal edges. It isn't done on a full-object basis; it is done on very small sections called NURBS patches, and more recently subdivision surfaces. You seem to have good ideas, but you need to realize that people had them 20 years ago, so you really aren't the authority on revolutionary.
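        Since procedural textures come up in both comments above, here is a minimal value-noise sketch of the idea: the texture is a pure function of (u, v), so it can be evaluated at any resolution without storing texels. The hash and interpolation choices are arbitrary illustrations, not what any particular renderer or GPU uses.

```c
#include <stdio.h>
#include <math.h>

/* Cheap integer hash mapped to [0,1): stands in for a lattice of random values. */
static float hash2(int x, int y) {
    unsigned int h = (unsigned int)x * 374761393u + (unsigned int)y * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (float)(h & 0xffff) / 65536.0f;
}

/* Bilinearly interpolated value noise at a point (u, v). */
static float value_noise(float u, float v) {
    int xi = (int)floorf(u), yi = (int)floorf(v);
    float fx = u - xi, fy = v - yi;
    float a = hash2(xi, yi),     b = hash2(xi + 1, yi);
    float c = hash2(xi, yi + 1), d = hash2(xi + 1, yi + 1);
    float top = a + (b - a) * fx;
    float bot = c + (d - c) * fx;
    return top + (bot - top) * fy;
}

int main(void) {
    /* Print a small 8x8 patch; a pixel shader would do the same per pixel. */
    for (int y = 0; y < 8; y++) {
        for (int x = 0; x < 8; x++)
            printf("%.2f ", value_noise(x * 0.5f, y * 0.5f));
        printf("\n");
    }
    return 0;
}
```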
    • There are actually people who use 3D cards for things other than gaming. Even still, we haven't reached a point where 3D cards are sufficient for gaming.

      I don't think that the purpose of newer video cards is to allow someone to run at higher resolution. Sure it is a byproduct of having a faster card, but the main goal is to push more detail at the lower resolutions. I'd rather play a game that only ran at 640x480 but had realistic lighting and was indistinguishable from a TV broadcast, than a game that looked like Q3 but jacked up to 64000x48000

      I'd like to know what games play 'flawlessly' at 1152x968 on a GeForce2 MX. I have a GF2 GTS Ultra with 2x1.5 GHz PIV (win2k) and have trouble maintaining 60 fps at 800x600 in games like RTCW and Ghost Recon with full detail (the way these games should be played). Sure, QIII runs solid, but I don't consider that a new game. IMHO, gaming is horrible at anything less than 60fps, and disabling vsync is ugly.

    • Well, you obviously haven't played Ghost Recon. :) I have a 1.2GHz Athlon and a GF3 Ti200, and it runs at a mere 30fps with Quincunx FSAA at 1280x1024. It looks AWESOME, but I could use 50% more FPS. With Unreal2 coming out, even the GF3s are going to be taxed. I would imagine that by the time the GF4s actually come out and are below $300, there will be games that take advantage of them.
    • by Namarrgon ( 105036 ) on Sunday November 25, 2001 @09:41PM (#2611771) Homepage
      There's always one, isn't there...

      One good reason to improve speed is simply better AA. Bet your card still shows those unsightly jaggies? Textures shimmer as they pass by, moire on the staircase, distant telegraph poles popping in and out of existence? Turn on 4-sample AA. Oh dear, now it's too slow. But not on my GF3, with anisotropic filtering turned up. I just wish I could do more than 4 samples.

      Want more realism? Bump mapping everywhere? Gloss maps? More translucent surfaces? Detail & dirt maps? Specular highlights? More objects in the scene (i.e. more overdraw)? Guess you're going to need a lot more fillrate too.

      Here are some more reasons, this time for better polygon handling: real-looking trees. Smooth, organic surfaces. Huge, detailed outdoor scenes that aren't always hidden by that strange vertical fog. Realtime, dynamic shadow volumes. And of course, accurate reflections. If you want to see the trees reflecting in the water, you have to render all those polygons twice. To see all the buildings in your nice shiny car, a cubic environment map is a good way, but requires the entire scene to be rendered six times!
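      A quick pass-counting sketch of the reflection costs just mentioned, assuming a naive brute-force renderer where every extra view repeats the whole scene (real engines cull and reuse work, so treat these as upper bounds; the polygon budget is made up for illustration):

```c
#include <stdio.h>

int main(void) {
    long scene_polys  = 100000;  /* illustrative per-view polygon budget          */
    int  mirror_views = 1;       /* planar reflection: scene drawn one extra time */
    int  cube_faces   = 6;       /* +X -X +Y -Y +Z -Z environment-map faces       */

    printf("no reflections:    %ld polys/frame\n", scene_polys);
    printf("planar reflection: %ld polys/frame\n", scene_polys * (1 + mirror_views));
    printf("cubic env map:     %ld polys/frame\n", scene_polys * (1 + cube_faces));
    return 0;
}
```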

      And we haven't even started getting to the interesting stuff. Anisotropic pixel shaders, vertex shaders for displacement mapping or nice rippled reflections. Overbright textures for really nice highlights (or for running realtime Renderman shaders :-) Maybe some really computation-heavy stuff - ray traced surfaces, realtime radiosity solutions or global illumination.

      Not enough? I'd like that all in stereovision too, please. Better double that workload again. Or perhaps on each wall of an immersive room? 5x more rendering.

      DOA3 looks really, really nice on my Xbox, but I can't help thinking how much better it'd be at 1600x1200 with AA. Or with some of the other refinements I've mentioned above. Sadly, the hardware still isn't there yet...

      Believe me, the field of 3D has a lot of room to grow yet.

      • I'd like that all in stereovision too, please. Better double that workload again. Or perhaps on each wall of an immersive room? 5x more rendering.

        Actually that would be 6x more -- and it exists now at ISU's C6 [iastate.edu].

      • by Danse ( 1026 )

        When I can play a game that looks as good as the Final Fantasy movie, at a consistent 100 FPS that's when it's fast enough for me :)

    • Well, maybe right now...

      But let's zip forward a few years to when we should see the new Doom game. It will use the greatest feature of the nVidia cards: their programmable graphics processing unit (GPU). Muscles will tighten and skin will stretch based on corresponding equations. All of this will be rendered on the fly as the character moves.

      It should give the GeForce 4 something to crunch on.

      I can't wait til games are like Toy Story or Monsters Inc.
    • One good use is video or image rendering for people who don't want to pay for the highly expensive top-of-the-line cards.

      From: http://forums.overclockers-network.com/cgi-bin/ultimatebb.cgi?ubb=get_topic&f=6&t=005704 [overclockers-network.com]
      I was using a V7100 (at a res of 2360x960, 32-bit and 70Hz refresh)... until my Radeons came in. With the V7100 I could move 250,000 polys in real time with a little bit of lag... now with the Radeon 8500 I can work with scenes of up to 1,000,000 polys with no lag... just beautifully smooth real-time flow! Just to bog it down to the same lag as the V7100 I have to open 1,500,000 polys!
    • What the hell are you talking about? Everything you said is simply... false.

      There is NO video card out today that can handle the latest games at 1024x768 with all the bells and whistles turned up all the way. Sure, it may look ok, but it will still slow down to under 30 fps in many cases. And there is also Doom 3 on the horizon, which will run at under 30 fps on a GF3 according to Carmack.

      There is still PLENTY of room for more performance in video cards, and it will continue until we have FPSes running at 1600x1200 with completely photo-realistic environments and full anti-aliasing, achieving 120 fps. And even then, I'm sure someone will have a reason to want even more power in their video card.

      Current video cards are not even close to that.

    • Games such as RTCW scream for this much power...
      For example, I'm running a highly overclocked Radeon 8500 (64MB), equivalent to a slow GeForce 3 or a fast GeForce 2:
      with all the graphical settings turned up to their max in the game (extra-high character texture detail, max everything else), and only 2xAA, I CANNOT PLAY THE GAME at 800x600 (the framerate falls to 4fps during combat). If I turn things down some, I can get away with playing at 800x600. Even with weak settings, though, 1024x768 is out of the question unless I turn AA completely off. The only way around this is to switch to vertex lighting (as opposed to lightmaps, which are wonderfully beautiful)...
      Anyway, the point is, there is at least one game out NOW in which a card like this would be useful. Since games only get more and more complex, by the time an NV25-based card hits the market, many games will want/need this sort of speed.
    • Grip3n: I have an 800 Duron system with a GeForce 2 MX. It plays any new game at 1152x968 flawlessly.

      I too have a Duron 800 and Geforce 2 MX, and until last week I would have totally agreed with you.

      Last week I installed 3D Mark 2001 [madonion.com] .

      Try it yourself. Wait for the scene with the trees... suddenly your system will drop to 1-2 FPS.

      Trees. That is why we need a better CPU & graphics card.

      Notice how all the "good" games are set indoors, in cities, or in deserts? Yet all the fun army combat films take place in rural areas?

      Think of all those war or commando films where you've got a lone gunman sneaking around using trees for cover. Now when you look at games, you're always sneaking around using crates and boxes for cover. That's why we need better hardware.

      The games run fast on our Duron 800 / GF2 MX systems because those games are set in environments specifically designed not to challenge our hardware. It's always a sewage system, a couple of city blocks, an underground base, a desert airport. It's never a forest, a farm, a suburb.

      I have an 800 Duron system with a GeForce 2 MX. It plays any new game at 1152x968 flawlessly. The GeForce 3 can pump out perfect refresh rates at even higher resolutions in any of the newest and most graphically intensive games available today. There simply is no challenge, whereas years ago there was always room to improve - refresh rates, resolution, bit colour, texture size, etc.

      Game developers are no longer pushing the capabilities of graphics cards (note: I am a game developer). You can look at this several different ways:

      1. We're glad to finally have enough power to not worry about getting just polygons on the screen, so we simply write games and no longer have the same technological obsession that many PC buyers do.

      2. Newer cards like the Radeon and GeForce 3 are pricey enough and new enough that only the hardcore fanboy types are buying them. If you assume a GeForce 3-level card, your market gets reduced by a factor of 20 or more. Probably more, as there's been a growing trend to not even put 3D accelerators in new systems (other than bare-bones chips, that is).

      3. We still don't really know how to push older generations of cards yet. Ever see games like Spyro: Year of the Dragon on the PlayStation *ONE*? Wow, is that impressive. PC games look better, but not an order of magnitude better. On one side we have a system that doesn't even have a Z-buffer, and on the other we have state of the art. Sometimes I think that if cards had stopped advancing at the Voodoo 2, game graphics would still have kept advancing all the way to where they are now.

      Disclaimer: I know, I know, people who drop $600 on a new graphics card the day it is released don't want to hear this. They're in their own world anyway :)
  • by Anonymous Coward
    according to the inquirer [theinquirer.net], nvidia is having problems with the foundry that supplies its chips
  • Video capture (Score:3, Interesting)

    by z4ce ( 67861 ) on Sunday November 25, 2001 @09:57PM (#2611807)
    I was wondering -- does anyone know which vendors sell Nvidia cards with TV/video capture built in that support Linux?

    I have an old Asus TNT3400/TV, and I never get to use the /TV functions because I don't ever use Windows (except for Minitab and Xilinx, gah). Now that I'm looking at upgrading my computer again, I want to make sure I get maximum Linux compatibility.

    Anyone have any recommendations?

    Thanks,

    Ian
    • Re:Video capture (Score:3, Informative)

      by ukyoCE ( 106879 )
      You have to go for the Asus Deluxe line of cards (they make the same cards in multiple versions; Deluxe adds a crappy TV tuner and digital VCR) or for an ATI All-in-Wonder. The ATI has by far the best multimedia suite; its TV-guide-like recording of shows and on-board MPEG-2 encoding kick ass. The program is still a little slow and buggy, but incredible nonetheless.
      Honestly though, I say save $50-100 on the video card and buy the TV card separately. That way you (a) save money on the video card and (b) don't need to keep buying the $50-100 extra every year, since the capture card will still work just as well.
      As for Linux compatibility, I've heard mixed reports about both the Asus Deluxe cards and the ATI All-in-Wonders, so you'll have to search around online and take a guess.
  • by jroyall ( 538982 ) on Sunday November 25, 2001 @10:11PM (#2611833)
    As a linux and open source purist I only play text based games.
  • Perhaps this will start to drive down the price of the GeForce 3 to more affordable levels. $300-plus for a video card is just a bit much. That's a 100GB hard drive, a new high-end motherboard and processor, a ton of memory, or a larger monitor. I don't need to see the pores and zits on my warrior to have a good experience.
  • First, let me just say I'm an avid XBox supporter. I'm a supporter of anything that gets the PC into the living room of millions of people, and if it has a user-friendly interface it gets bonus points.

    That said, I think nVidia is playing with fire in simultaneously building the "NV2A" chipset for the XBox and trying to push the envelope on the PC. I understand they're covering their bases: games on the PC are wilting in comparison to console games (at least for now -- this is a recurring cycle with every new console that comes out). However, by creating one standard that users can lock in to, what's the impetus to purchase a PC and upgrade to a better video card?

    Wired magazine had an interesting take on the "secondary benefits" to Microsoft of making the XBox successful. One was the obvious possibility that they will leverage the living room into a new monopoly (which, rightfully so, they agreed was simply conspiracy theory). However, another "benefit" is getting console developers familiar with the (admittedly not that bad) DirectX 8 interface, and bringing them back to the PC to develop quality ports. This, in my mind, is the only way nVidia is honestly going to stay in the computer video card game at the growth rate it's been going.

    I'm wondering, perhaps, if this will lead to the other extreme: eventually, people just kind of settle on a certain type of technology that's "good enough" for their present needs. The internal combustion engine was pretty much finalized 60 years ago, and few real modifications have taken place since then. Televisions, likewise, were pretty much finalized in technology 30 years ago. Outside of a few fringe stragglers, very few people now make the jump to "upgraded" tech. I wonder if PCs will be next.

    And if it is, where's nVidia's future in all of this?

  • Generally, I don't play games much, which is why I also don't care much about 3D performance (it doesn't hurt to have it, though :p ). I care a lot more about 2D quality, and Nvidia-based cards aren't exactly known to be the best there...

    Besides that, I care about driver quality - AFAIK the NVIDIA drivers are generally great, both for Windows and Linux (although they are closed source). I also care a lot about noise. I don't want a card that needs a huge noisy fan. No active cooling, thanks!

    AFAIK, the GeForce3 Ti200 doesn't *need* a fan (the NVIDIA reference card doesn't have one), but most cards come with a fan anyway - if the heatsink is good enough, it should be safe to disable the fan.

    I've heard Leadtek cards are some of the only NVIDIA cards that actually have good 2D quality.

  • One of the things I have been very disappointed with, in both my nVidia-based home and work machines, is that I have not been able to get virtual screen sizes larger than 2048x1536 when there should be plenty of memory (32M and 64M, respectively) to run much larger virtual screens. This is particularly timely with the arrival of much higher resolution displays (like the 2048x1536 physical screen of the top-end Viewsonic). These larger virtual displays would be very useful for scientific visualization, or even to look at the 3200x2400 Hubble pictures at http://heritage.stsci.edu/ [stsci.edu]
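    Some rough framebuffer arithmetic behind that complaint, assuming a plain 32-bits-per-pixel desktop and ignoring any auxiliary buffers (the per-dimension cap is presumably a driver or 2D-engine restriction rather than a total-memory one, but that is a guess):

```c
#include <stdio.h>

int main(void) {
    struct { int w, h; } modes[] = { {2048, 1536}, {3200, 2400}, {4096, 3072} };
    for (int i = 0; i < 3; i++) {
        double mbytes = (double)modes[i].w * modes[i].h * 4 / (1024.0 * 1024.0);
        printf("%dx%d @ 32bpp: %.1f MB\n", modes[i].w, modes[i].h, mbytes);
    }
    /* Roughly 12 MB, 29 MB and 48 MB respectively; a 64 MB card has room in
       principle for all three, and a 32 MB card for the first two. */
    return 0;
}
```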

  • Is it just me, or does it seem like we're down to only the "Big 3" (nVidia, ATI, and Matrox -- and I could be wrong about Matrox)?

    You used to have all sorts of chipset makers... S3, Matrox, ATI, WesternDigital, Tseng Labs (whatever happened to them, anyway?), 3dFX...

    What happened? Is this consolidation a good thing or a bad thing?
