Hardware

Preview of The GeForce 256

Tor Arne writes "Gamecenter is doing a preview of the Nvidia GeForce 256. Not as amazing as it initially sounded, but it's still pretty amazing looking."
This discussion has been archived. No new comments can be posted.

  • by Masa ( 74401 )
    But I've heard that the OpenGL support isn't good (or stable) under Win95/98. Actually, I didn't even know that the TNT supported windowed acceleration.

  • This has been discussed in computer animation circles for some time. The reason film at 24 frames per second (and TV at 30fps) looks better than typical computer animation at 60fps is that film gives you 24 frames of motion, whereas CGI gives you 60 still frames. Motion blur for CGI usually involves generating three times as many frames and then blending them together to get the illusion of frames of motion.
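
    To make the blend step concrete, here's a rough C sketch (the buffer layout, SUBFRAMES count and function name are made up for illustration - this is nothing like a production renderer):

    #include <stdint.h>

    #define SUBFRAMES 3   /* "three times as many frames" from above */

    /* Average SUBFRAMES rendered sub-frames into one motion-blurred output
       frame; every buffer is assumed to be the same size in bytes. */
    void blur_frame(uint8_t *out, uint8_t *const sub[SUBFRAMES], int nbytes)
    {
        for (int i = 0; i < nbytes; i++) {
            unsigned sum = 0;
            for (int s = 0; s < SUBFRAMES; s++)
                sum += sub[s][i];
            out[i] = (uint8_t)(sum / SUBFRAMES);   /* simple box blend */
        }
    }
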
  • If I had the money, I'd spend $300 on the card, $50,000 on the car, $15,000 on the stereo/TV and $250,000 on the house. But who's the moron that spends $21.95 a month for AOL?

    -----
  • If you can get better performance at a lower clock speed, that is much better than running it fast: less heat, less stress on the silicon, and a better chance that the chip CAN go faster later. I'd love to get the performance of an Athlon 600 out of a chip that runs at 100MHz, even if it doesn't sound as sexy.

    -----
  • I don't know why you think nVidia cards are crap. I own Matrox, 3dfx, and nVidia cards. They all have driver issues. Hell, all PC boards with new technology have driver issues.

    As for comparing the nVidia chipset (or any PC-based chipset) to the PSX2 - that's pretty pointless. The PSX2 isn't intended to be a general-purpose graphics platform; it only has to run at one resolution (and a low one at that!). It's pretty easy to optimize given those circumstances (heck, the refresh rate is limited too). The GeForce has to perform under a variety of resolutions, refresh rates, and windowing environments. Plus, I have no doubt in my mind that by the time the PSX2 ships in volume (or shortly thereafter) PC boards will have surpassed it in speed and quality (even at higher resolutions). So the PSX2 may have an irrelevant lead for a month or two, but it will become old technology really quickly - closed platforms are like that.

    I look at it this way: nVidia has hired a lot of talent away from SGI. There wasn't a lot that surprised SGI engineers when it came to graphics technology. I doubt nVidia is going to be caught anytime soon with their pants around their ankles.
    (3dfx too, but I have less faith in them).

    Don't get me wrong, the PSX2 looks like it will be a great gaming (and multimedia) platform. But will it make PC graphics irrelevant or blow the PC graphics platforms away? I don't think so.
  • Better said: the G3/G4 of graphics cards. One can easily argue that the gains of the P3 over the P2 are hype. (OK, so Apple hypes the Gs a bit too, but they actually are faster when you have apps designed for the chip. Don't forget that the P3 didn't add or improve any x86 instructions much, now did it?)

    -----
  • by emmons ( 94632 )
    Be totally honest... have you heard of anything being good or stable on Win95/98? (not intended to be flamebait)

    -----
  • Many people have commented that they are unimpressed by this preview of the GeForce. I think this was a real gamble by Nvidia. This is not just another high-speed pixel-pushing video card. A lot of games like Half-Life are not fill-rate limited so much as CPU geometry limited. Nvidia took a leap here and is offering to take transform and lighting (geometry) away from the CPU (pissing off Intel), freeing it up for animation and special effects, and putting it on a specialized piece of hardware that cranks it out much faster, allowing more detail in a rendered scene. It won't be obvious at first, but developers ARE going to take advantage of this; they are jazzed about this development. It's the technology that will take the PC to the PlayStation 2 level. Higher fill rates will only let you increase the resolution and color depth of your game; if the CPU is still handling geometry, you're stuck with the level of detail you've had in the past because you will be CPU constrained. Don't get me wrong, higher fill rates are a good thing, and I think you will see this card's fill rates increase over time. It's an extremely complex chip, 22 million transistors, so its clock rates are lower starting out. Can you say GeForce 256 Ultra in about six months?
  • Do remember that the intended improvement of the GeForce 256 over its predecessors is image quality rather than speed. Your eye has difficulty seeing the difference between 60 and 80fps, but it can clearly see the difference between a truly shaped, shadowed wheel and a round object with a texture slapped on top. We won't get to see this for 6-12 months, however, until game developers really take advantage of the new technology.

    -----
  • XF86 3.3.4 (the latest one with Debian packages when I upgraded last).

    I usually run my desktop at 1024x768x16, but I've tried x15, x32, 640x480x* and 800x600x*.

    So as not to clutter up /., anyone interested could post to local.linux.nvidia.problems on handorf.penguinpowered.com. It won't be up all the time, but hopefully it'll stay up for a while.

    Thanks for the comments, I'd really like to get this resolved.
  • Some games can take more advantage of T&L than other games - Unreal stresses fill rate while Q3 Arena will benefit more from T&L.

    Voodooextreme has asked a lot of game developers what they think of this whole T&L vs. fill rate issue - you can find the article here: http://www.voodooextreme.com/articles/fillvstl.html

    I like the first comment from Tim Sweeney - Epic Games :-)

    A lot of the benchmarks that have been published don't take advantage of T&L, and therefore the benchmarks don't look really great, just great. But the fill rate of the GeForce isn't *that* much better than the TNT2 Ultra's or the Voodoo3 3500's.

    What you can't see from the benchmarks either is the picture quality - with games that use T&L, you might not get a frame rate that is much higher than on other cards, but you'll get a much nicer picture.

    You'll need games that stress the T&L chip to see the difference, and there are not many games that do that today (are there any at all?!)

    Download the tree demo from NVIDIA's website and run it on your 3D accelerator - it crawls!
    I tried it on my Celeron 450/128MB RAM/Voodoo3 2000 - it was a slideshow!

    Besides, the GeForce is the only next-generation card that will be available in the next couple of weeks - the S3 Savage2000 will be available before Christmas, but that's a long time in the graphics business. It is even worse with 3dfx's Voodoo4/Napalm - it may not be available before February!

    If NVIDIA continues to deliver a new product every 6 months, then they will have their next-generation card ready a few months after the Voodoo4 arrives.

    Rumors about NVIDIA's next card/chip/GPU will certainly be all around the net by that time, which may hurt 3dfx's sales if they don't deliver something quite extraordinary...
  • With SGI and NVIDIA in bed with each other, and SGI putting a lot of money into Linux, my bet is that we will soon see much better drivers from them...
  • ---
    What will be interesting in the future will be the test results of the Voodoo4 (or whatever it will be called). This card should be able to do more with a lower framerate due to its support of motion blur. The comparison that 3dfx is making with the V4 is between film, which runs at 24fps, and today's video games, which must run at 60fps. I think that most people would agree that film looks very, very good. If what 3dfx says about their board is true, then FPS will no longer be a suitable judge of a board's performance - assuming that a game is written to take advantage of motion blur. The downside... games depend on reaction to a controller and must be able to display these small changes. If the game is only updating at 24fps, then you may feel as if you don't have precise control over the game (but it will damn well look good!!).
    ---

    This certainly isn't true at all. 3dfx has *always* stressed framerate over features, and it's no different with the V4 (Napalm, whatever). You can expect this card to put up framerates that squash all the competition, but at the expense of a limited feature set. They've got a supposedly huge fillrate with the V4, but most of that will go to supporting full-screen anti-aliasing at a decent framerate. So 3dfx has another incremental product upgrade with minimal benefit to the consumer. 32-bit color? Wow - that only took close to 2 years to implement after their competitors did.

    Face it, 3DFX hasn't done anything really worthwhile since bringing affordable 3D in hardware to the desktop.

    -aaron


    ---
    aaron barnes
    part-time dork
  • by Cuthalion ( 65550 ) on Tuesday September 28, 1999 @04:48AM (#1654074) Homepage
    > Hardware T&L greatly increases the amount of onboard memory needed.

    No facts to back this claim up. How exactly does hardware T&L increase the amount of onboard framebuffer required? With AGP, there really is no need for local video memory at all, except to use for the actual visual screen, and maybe as a texture cache. Sure the geometry system will need somewhere to cache scenes, but to fill up 128MB with just _geometry_ information you'll need something as complicated as that huge landscape scene in the Matrix.

    Certainly hardware T&L does not increase the size of the framebuffer needed. However, AGP is really not as fast as the RAM they're putting on these cards - hell, all bus issues aside, system RAM is only 100 MHz, while most video cards' local memory is much faster, on a wider bus, or both.

    When doing the geometry, you don't want to tie up your bus reading and writing each vertex as you transform and light each frame. Sure, it is possible to do it over AGP, but is it efficient? Let's see, they say it can push 15 million polys a second? Say a poly takes up... I dunno, between 64 and 128 bytes (you need your texture indices too, remember), and you're using between one and two GB/s of bus bandwidth if all you're doing is reading each polygon once.

    At 60 fps, each of those frames is 16-32 MB. 128 MB is more than most applications will need, but using an extra 32 for geometry information is not unwarranted in their pushing-the-card-to-its-limits case.

    Disclaimer: I'm not as smart as I think.
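
    For anyone who wants to poke at the numbers, here's the same back-of-the-envelope math in C (the 64-128 bytes per polygon is just my guess from above, not a spec):

    #include <stdio.h>

    int main(void)
    {
        double polys_per_sec = 15e6;                 /* claimed 15 million polys/s */
        double bytes_low = 64.0, bytes_high = 128.0; /* assumed bytes per polygon  */

        printf("bus traffic: %.2f - %.2f GB/s\n",
               polys_per_sec * bytes_low  / 1e9,
               polys_per_sec * bytes_high / 1e9);    /* about 0.96 - 1.92 GB/s */

        /* at 60 fps that's 250,000 polys per frame, i.e. 16-32 MB of geometry */
        printf("per frame: %.0f - %.0f MB\n",
               polys_per_sec / 60.0 * bytes_low  / 1e6,
               polys_per_sec / 60.0 * bytes_high / 1e6);
        return 0;
    }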

  • "P3 didn't add or improve any x86 instructions"..

    I for one don't have many Intel chips, but they did add SIMD instructions, if I recall, which make applications "faster when designed for the chip", just as you said for the Gs.

  • ... (because it is so specialized) can probably out-render any CPU available today (and probably for the next 6-8 months).
    And one should note that hardware geometry is a new technology _in_the_consumer_market_.
    In 6-8 months we'll have the next GeForce, and with it perhaps much higher clock rates, more pipelines, and more features. The nVidia guys themselves seem to think of up to 8(!) times the geometry processing power. And I can believe this; it's just improving the given platform, which is a lot easier than the step they have made now. No CPU will ever be able to catch up; it's the same as with DSPs.
    It's a big step, and in my opinion we'll see nVidia (and S3, Matrox, and perhaps ATI and 3dfx too, otherwise they'll die) quickly moving in a direction where they can get all the specialized graphics hardware manufacturers into big trouble.
    Nobody who sells to a relatively small market segment can compete with a demanding, monopoly-free mass market which wants exactly these high-end features.
    And - looking at the high-end workstation-graphics companies - we see they all know it (SGI, Intergraph...).



  • Yeah, video memory is faster than system memory, but doesn't a goodly chunk of that bandwidth go to the RAMDAC?
  • The deal with 3dfx is that they mean to be able to provide over _100_ fps, or 60fps with antialiasing. This is different from motion blur - for one, the antialiasing will work with all old games, as it's nothing but a scaled-up screen bilinearly resampled down (a rough sketch of that resample is at the end of this comment). Antialiasing does look good (it's widely used in raytracing), and this will indeed make existing games look better, as well as future ones.
    GeForce will not be able to do this as it is grossly fill-rate impeded compared to its competitors. GeForce is all geometry and no fill rate- the next 3dfx thing is all fill rate and no geometry- the Savage one is somewhere in the middle.
    The only way you'll get the antialiasing and motion blur ('cinematic' effects, kind of like how 3dfx rendering seems dirtier, more contrasty, more photographic as opposed to 'rendered') is with the 3dfx stuff as none of its competitors are willing to put that much effort into fill rate. The only way you'll get 20 times the geometry (rounded curves, 3d trees in games etc.) is if you get the GeForce and also wait to have developers write games for it, many of which could be Win-only *grumble*. My money's on 3dfx actually- I'm biased because I always think 3dfx screenshots look more 'photographic' (grain? contrast? some factor of their 22-bit internal calculations to 16-bit display?) but there's another factor- competitiveness.
    If you read folks like Thresh talking about what they use, it turns out that they crank everything down to look as ugly as possible and run as fast as possible. I've done this on q3test and got a solid 60fps in medium action using only a 300Mhz G3 upgrade card and a Voodoo2. It looks awful, especially when you really pull out all the stops and make things look absolutely horrible- but it's sure competitive! You can track enemies and gib them much better, even if you're not all that hot at Quake.
    How does this relate to the GeForce? It's the fill rate. Even on a normal AGP bus the thing can't be fed enough geometry to max it out- but the actual filling of the screen is unusually slow, and this expands rapidly with larger resolutions.
    The result is this- somebody trying to max out, say, q3test but run at 1600x1200 in lowest image quality will be able to see accurate (but nearly solid color!) enemies in the distance and be able to make out subtle movements. This also applies to the antialiasing- that will help as well, even at normal resolutions. The result is that the person running on something with insanely high fill rate and using that combined with very low graphics quality, will get more visual information than the other players will, and will be getting it at a frame rate that is competitive (to a Thresh, there's a difference between 100 and 150 fps- while in a crowded fight, with 'sync' turned off).
    By contrast, users of a geometry enhanced card will not get a competitive advantage from their form of graphical superiority. It is strictly visual eye candy and will not significantly add a competitive advantage...
    For that reason I'd say, DON'T write off 3dfx just yet. Their choice for technological advancement is tailor made for getting a competitive advantage, and when you start maxing out the respective techie wonderfulness, the competitive advantage of 3dfx's approach will not be subtle. Likely result- 3dfx users may not be looking at comparably pretty visuals, but can console themselves by gibbing everybody in sight ;)
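
    Since I mentioned it above, here's a rough sketch of the "render big, resample down" idea (single channel, 2x factor, purely illustrative - real hardware will use better filtering than a box average):

    #include <stdint.h>

    /* Box-filter a 2x-supersampled single-channel buffer down to display size.
       src is (2*dst_w) x (2*dst_h), dst is dst_w x dst_h. */
    void downsample_2x(const uint8_t *src, uint8_t *dst, int dst_w, int dst_h)
    {
        int src_w = dst_w * 2;
        for (int y = 0; y < dst_h; y++)
            for (int x = 0; x < dst_w; x++) {
                const uint8_t *p = src + (2 * y) * src_w + 2 * x;
                dst[y * dst_w + x] =
                    (uint8_t)((p[0] + p[1] + p[src_w] + p[src_w + 1] + 2) / 4);
            }
    }
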
  • Quake2 has a software renderer, and Carmack has stated that it uses GL's transformation pipeline. It does not, however, make use of the lighting engine, as Quake (and most other FPS games) use lightmaps instead of vertex lighting.
  • by Xamot ( 924 ) on Tuesday September 28, 1999 @03:01AM (#1654091)
    Sharky Extreme [sharkyextreme.com] had one recently too.

    --

  • I think this card will perform as advertised. Unfortunately, software must be written specifically to take advantage of the hardware and there is no way to test this at the moment. Right now, without taking advantage of the T&L engine it is the fastest "conventional" video board. That says quite a bit.

    What will be interesting in the future will be the test results of the Voodoo4 (or whatever it will be called). This card should be able to do more with a lower framerate due to its support of motion blur. The comparison that 3dfx is making with the V4 is between film, which runs at 24fps, and today's video games, which must run at 60fps. I think that most people would agree that film looks very, very good. If what 3dfx says about their board is true, then FPS will no longer be a suitable judge of a board's performance - assuming that a game is written to take advantage of motion blur. The downside... games depend on reaction to a controller and must be able to display these small changes. If the game is only updating at 24fps, then you may feel as if you don't have precise control over the game (but it will damn well look good!!).

    It will be interesting to see how the next 6 to 9 months of graphics cards pan out. One thing is certain though: by the time the PSX2 ships in North America, the PC should be well beyond it visually.*

    *Of course the world is ending on Y2k, so these are hypothetical hardware progression estimates.
  • by Anonymous Coward
    It may be too early to judge performance until the product actually hits the streets. I recall Nvidia downplaying many of these "previews" because many sites are using outdated drivers. DirectX 7 is supposed to take advantage of T&L, and OpenGL has already done so for years. So out of the box it'll be a great card for 3D applications like Maya, Lightwave, etc., and all the OpenGL games out on the market [except for GLQuake 1]. Apparently this started with a preview of a Creative GeForce 256 card from an Asian review site which pegged its performance as not much faster than a TNT2... I guess it depends on the tests. I find a lot of the benchmarks pretty useless. If it runs Quake 3 well, it should run most games well. As for applications... heck, if I get any speed increase in Maya over my Viper 770, I'm sold =).
  • As was mentioned in the article by the developers, it's not Nvidia that decides how fast to "clock" the chips; it is the OEMs that build the boards.

    More importantly, a fixation on clock speed is quite silly, as it is merely one of the factors in how fast a system is. I'm rather more impressed if they produce a faster product that doesn't have as fast a clockspeed.

    Furthermore, keeping the clock speed down has other merits, such as reducing the need for cooling and diminishing the likelihood of the chips being stressed into generating EMI.

    (Entertaining rumor has it that 900MHz systems that are likely coming in the next year may interfere with the 900MHz band used by recent digital cordless phones...)

  • by TheJet ( 93435 ) on Tuesday September 28, 1999 @03:06AM (#1654097)
    I think this card will perform as advertised. Unfortunately, software must be written specifically to take advantage of the hardware and there is no way to test this at the moment.

    Actually OpenGL has had support for onboard T&L for quite some time. When these guys are developing these games, you can bet that their rigs have boards with T&L (If they aren't a poor startup).

    What will be interesting in the future will be the test results of the Voodoo4 (or whatever it will be called). This card should be able to do more with a lower framerate due to its support of motion blur.

    If you are talking about T-Buffer technology, you may be grossly overestimating its power. In the demos that I have seen (and granted, these are 3dfx demos, and they are nasty - why wouldn't a company trying to sell its product produce decent demos to show it off??? But I digress), all motion blur will get you is a loss of framerate, since your CPU has to fill the T-Buffer with x images, which then get blended to produce the effect. While it may look purdy, it is going to require one hell of a CPU to pull it off. I think the full-scene anti-aliasing is going to be the selling point on that board.

    I think something interesting to point out is that 3dfx has done a really good job of throwing their marketing prowess at consumers with respect to the GeForce 256. While I don't quite believe everything that nVidia says, I find it hard not to support them when 3dfx has gone so far out of their way to make nVidia look bad. I will paraquote a developer (can't remember his name):

    3dfx has tried to convince users that somehow a higher triangle count amounts to needing a higher fillrate. This is completely untrue; a scene with 5,000 triangles at 1024x768x32 has the exact same _pixel_ count as the same scene rendered with 1,000,000 triangles. So why not use onboard T&L to up the triangle count if it costs you nothing in terms of fillrate?
  • by handorf ( 29768 ) on Tuesday September 28, 1999 @02:54AM (#1654098)
    Does anyone know if the Linux drivers for this thing (which have been promised) will ship day and date with the cards themselves?

    And when will ID start supporting Q3A on the nVidia cards? I HATE rebooting into Windows!

    I want one of these SO BAD, but I'm not going to give up XF86 for it!
  • by Stiletto ( 12066 ) on Tuesday September 28, 1999 @03:00AM (#1654099)
    However, games will need to be specially written to take advantage of this geometry acceleration.

    This is only true if you were unfortunate enough to write your game using Direct3D. OpenGL games will be able to take advantage of geometry acceleration without even recompiling. You reap what you sow when you use a Microsoft API.
    Whether or not hardware T&L is of any benefit to current or future games is yet to be seen though. Games lately have been getting more and more fillrate bound and less geometry bound, as game creators take advantage of higher resolutions and larger textures.

    The GeForce, on the other hand, supports up to
    128MB of local graphics memory. Hardware T&L greatly increases the amount of onboard memory needed. The first boards aimed at consumers should come out at 32MB, with 64 MB and 128MB cards to follow later on.


    No facts to back this claim up. How exactly does hardware T&L increase the amount of onboard framebuffer required? With AGP, there really is no need for local video memory at all, except to use for the actual visual screen, and maybe as a texture cache. Sure the geometry system will need somewhere to cache scenes, but to fill up 128MB with just _geometry_ information you'll need something as complicated as that huge landscape scene in the Matrix.

    Texture compression allows the use of much more detailed textures without overburdening graphics memory or bus bandwidth.

    My jury's still out on texture compression. For games that are poorly written (i.e. that load and release textures on the fly, each frame), compression can help, but for games that use a more intelligent caching scheme for texturing, there really isn't much of a point.

    Like the TNT2, GeForce supports the AGP 4X standard.

    Definitely "A Good Thing".

    The GeForce also introduces a new feature, cube environment mapping, that allows for more realistic, real-time reflections in games.

    Similar to the Matrox G400's env mapped bump mapping but not quite the same.

    Other things to note: 4 texel pipes (fills at four times the clock rate). Watch for all the other chip makers to do this too, limit of 8 lights in hardware (what happens when a scene requires more than eight? They don't say.. hmmm......)

    Basically nVidia is gambling with hardware geometry. The gamble is that future host CPUs (Pentium 4s or whatever) will not be able to beat them at transformation and lighting, and that if they don't, gamers really are going to benefit from T&L. We'll see if that pans out. Unless they have a very sophisticated ALU on that chip, it will doubtlessly only speed up certain types of scenes. (We've all seen the "tree" demo).

  • by kuro5hin ( 8501 ) on Tuesday September 28, 1999 @03:08AM (#1654100) Homepage
    You can play Q3A on nVidia. Check out nVidia's Linux FAQ [nvidia.com]. It's got links to the drivers and instructions for Q2/Q3. Yes, they all say it can't be done, etc., etc., but believe me, I run Q3test on a TNT2 all the time and it works fine. It's just not officially supported. Have fun! :-)

    ----
    We all take pink lemonade for granted.
  • I would like to see some day a cheap 3D card supporting windowed 3D acceleration with full OpenGL support. Nothing else...
  • Tried it. Even fired an e-mail off to Zoid at ID. My TNT just winds up with corrupted video and the X server gets a Sig11. If you have any other links for Q3 on the nVidia stuff, I'd appreciate them!

    Maybe it's a TNT2 thing. :-(
  • Did you try changing the default video settings in Q3? The first time I tried it, I got ~1 fps, till I cut back on color depth and stuff. Also, it will only work with the X server at 16 bpp.

    I basically just followed the instructions on the page above, and everything went OK. Try searching Google or deja.com for others' experiences.

    ----
    We all take pink lemonade for granted.

  • A benefit to shifting work to a graphics chip rather than having the CPU perform the calculations is that the CPU is more readily available for other processes.

    The frame rate cap sounds really good. Unsteady motion (going from one rate to another) is much more irritating.

  • This is only true if you were unfortunate enough to write your game using Direct3D. OpenGL games will be able to take advantage of geometry acceleration without even recompiling. You reap what you sow when you use a Microsoft API.
    Whether or not hardware T&L is of any benefit to current or future games is yet to be seen though. Games lately have been getting more and more fillrate bound and less geometry bound, as game creators take advantage of higher resolutions and larger textures.


    This is only true if the software is using OGL's transformation pipeline. IIRC, a lot of current OGL games set the MV (modelview) matrix to identity. Also, almost all of today's games use lightmaps, not OpenGL lights. So no speedup there.

  • I saw this in RivaZone [rivazone.com]:

    Another point of clarity that was added was the hardware lighting algorithm. Many people interpreted this as eight light sources for a whole screen, and this is incorrect. The GeForce allows for eight hardware lights per triangle. This means that every individual triangle that is part of an on-screen shape can have up to eight sources affecting its lighting. This is done with a minimal performance hit. NVIDIA is also in the process of tweaking their drivers to fully optimize them with the retail release of DirectX 7.

    Cool, isn't it? I want one.
  • the gamecenter preview shows the geforce card running at 120/166, and the tnt2 at 150/183 (i think that's close enough =). even with the lower clock rate, it's faster, significantly so in a couple of tests.

    indeed, it may be too early. maybe they'll pump the clock rate up and make it just silly-fast =).

    all this talk of 3d 3d 3d, what about 2d? does it look as good as a g200 at 1280x1024x32bpp@75hz? my plain old TNT doesn't. i'll probably still get it anyway - i heard linux support is out of the box =)
  • from the myth-ii-will-look-so-nice dept.

    Unfortunately, it won't, as Myth II only supports hardware acceleration on 3dfx cards via the Glide port; OpenGL support is not even planned. To quote briareos, a Loki developer, on loki.games.myth2 [loki.games.myth2]:

    You're really asking "will we take the time to write an OpenGL
    rendering module for Myth2"? The answer is: if someone finds the
    time. That's all I can really say.
  • You can simulate many more lights by overlapping the lights you do have.

    Think of it as having 100 lights all over the place but choosing the 8 that make the biggest impact on the polygon. Since most objects and groups of objects exhibit a lot of lighting coherence, you probably wouldn't even notice the subtler lighting discrepancies.
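
    Something like this, very roughly - the struct and the contribution estimate (intensity over squared distance) are made up for illustration, not how any driver or engine actually picks its lights:

    #define MAX_HW_LIGHTS 8

    typedef struct { float x, y, z, intensity; } Light;   /* hypothetical light record */

    /* Keep the indices of the (up to) 8 highest-scoring lights for a polygon
       centered at (px, py, pz); returns how many were chosen. */
    int pick_lights(const Light *lights, int nlights,
                    float px, float py, float pz, int chosen[MAX_HW_LIGHTS])
    {
        float best[MAX_HW_LIGHTS];
        int count = 0;

        for (int i = 0; i < nlights; i++) {
            float dx = lights[i].x - px, dy = lights[i].y - py, dz = lights[i].z - pz;
            float score = lights[i].intensity / (1.0f + dx*dx + dy*dy + dz*dz);
            int j;

            if (count < MAX_HW_LIGHTS)
                j = count++;                            /* list not full yet */
            else if (score <= best[MAX_HW_LIGHTS - 1])
                continue;                               /* weaker than current worst */
            else
                j = MAX_HW_LIGHTS - 1;                  /* replace current worst */

            while (j > 0 && best[j - 1] < score) {      /* keep the list sorted, best first */
                best[j] = best[j - 1];
                chosen[j] = chosen[j - 1];
                j--;
            }
            best[j] = score;
            chosen[j] = i;
        }
        return count;
    }
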
  • A texel (which fillrate is measured in) is different than a pixel.

    But aren't texels just textured pixels? My point was that you don't get extra texels/pixels if you add more triangles.

    Thanks for pointing out the difference, though; it does matter if you say that one card can push 4 texels/cycle and another can do 2 texels/cycle. I am going on the assumption that 3dfx is also going to do 4 pixel pipelines, as nVidia does.
  • This is only true if you were unfortunate enough to write your game using Direct3D. OpenGL games will be able to take advantage of geometry acceleration without even recompiling. You reap what you sow when you use a Microsoft API.


    Not true. Not true at all. If a game is written to use OpenGL only as a rasterization system, then it will NOT benefit from HW T&L in the least. Take Tribes as an example. All of the 3D T&L in Tribes is done by the program. This is so they can support a software rendering mode along with Glide and OpenGL. Tribes will not benefit from HW T&L.

    Any game that has a software rendering mode, along with OpenGL probably has an internal T&L pipeline and therefore will not benefit from HW T&L.




  • Why did I get moderated down? This is an article about Display Adapters. I posted a comment about Display Adapters. How is that offtopic?
  • by Anonymous Coward
    Film works at 24 fps because of motion blur. Your brain can put together motion-blurred images at a lower framerate than non-motion-blurred images. Since 3D cards don't do motion blurring, you have to up the frame rate to 60 fps to beat the refresh rate of your eyes. A better explanation can be found at http://www.penstarsys.com/editor/30v60/30v60p1.htm [penstarsys.com]
  • You're right. W95/98 isn't the best possible choice of operating system, but the fact is that the driver support for hardware is better than for NT or Linux (I'm a Linux user myself). Another thing is that W95/98 is one of the most used operating systems for home use. So, to get as many paying customers as possible, it is (unfortunately) wise to write W95 programs.

    I've been testing the Mesa 3D library under Linux, but accelerated 3D support (for example with the 3dfx Voodoo Rush chipset) is poor. Well, it works in full-screen mode, but I haven't succeeded in getting things to work in a window...
  • It will come... everybody is free to implement it without any cost, so NVIDIA (and others, except maybe S3...) will probably implement it in their next-generation cards.

    But 3dfx has only just announced it, so NVIDIA hasn't had a chance to implement it in the GeForce.

    One thing is for sure - FXT1 is better than S3's texture compression, and it is *free*!
    Which also means that it may be available on Linux.

    S3's texture compression is (at the moment) only for the Windows platform.
  • Yes, but the SIMD instructions were added; they don't accelerate what's already there in the x86 standard.

    -----
  • Well, a 350 MHz RAMDAC can't DAC more than 350 Million RAMs a second, can it? Let's do some math. (yay!)

    1600x1200x32/8 = 7 680 000 bytes/screen.

    The G400 (as an example) has a 300 MHz RAMDAC. At this resolution, it can DAC all its RAMs 100 times a second.

    768 000 000 bytes/second. Hmm. Since each pixel takes up 3 or 4 bytes, and each Hz of the RAMDAC would pretty much have to work on entire pixels, this is fine with a 300 MHz RAMDAC.

    The memory speed on the G400 MAX is... (well, they haven't announced it, but it's somewhere around the 170 MHz range). They claim a 256-bit 'dual bus' (whatever that means).
    32 bytes/clock * 170 MHz = 5 440 000 000 bytes/second

    About 14% of the bus bandwidth is being used by the RAMDAC, unless I'm missing something. Since the RAMDAC only ever really needs to look at 24 bits per pixel, we could probably bring that down to 10%. My guess as to the G400's RAM clock isn't off by more than 10% either way.

    So, yes, if you're running at a ridiculous refresh rate at a VERY high resolution, then a kind-of-significant portion of your video memory bandwidth goes just to turning pixels into voltages.

    But you've got a lot left over still. Never mind squeezing data through the AGP; video RAM runs at a higher clock than system RAM and often sits on a wider bus.
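
    Here's that arithmetic spelled out in C, using my guessed numbers from above (100 Hz refresh, ~170 MHz 256-bit memory - guesses, not published specs):

    #include <stdio.h>

    int main(void)
    {
        double screen_bytes = 1600.0 * 1200.0 * 32.0 / 8.0; /* 7 680 000 bytes/screen */
        double refresh_hz   = 100.0;
        double scanout_bw   = screen_bytes * refresh_hz;    /* 768 000 000 bytes/s */

        double mem_bw = 32.0 * 170e6;                       /* 32 bytes/clock * 170 MHz */

        printf("scanout uses %.1f%% of memory bandwidth\n",
               100.0 * scanout_bw / mem_bw);                /* roughly 14 percent */
        return 0;
    }
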
  • I've had some problems with Q3A on a Riva TNT in full screen... what I've found is that anything full-screen over 640x480 gets screwy... so I just play @ 800x600 in a window and it's fine.

    Something to note: you won't get uber-godly rates up in the 60fps area, so don't expect those... I average 30fps personally on a Celeron 400.
  • If I recall correctly (which I probably don't)... the OpenGL standard only officially supports up to 8 lights... so I don't really think it's important.

    But like I said, I'm probably wrong, because hey, it's just one of those days...

  • by TheJet ( 93435 ) on Tuesday September 28, 1999 @03:21AM (#1654126)
    Whether or not hardware T&L is of any benefit to current or future games is yet to be seen though. Games lately have been getting more and more fillrate bound and less geometry bound, as game creators take advantage of higher resolutions and larger textures.

    This is because in the past the CPU has been the limiting factor. Developers were forced to limit the triangle count and rely on large textures to make games realistic. With the triangle limit somewhat lifted, you can use smaller textures to produce the same (and better) effects.

    The GeForce also introduces a new feature, cube environment mapping, that allows for more realistic, real-time reflections in games.

    Similar to the Matrox G400's env mapped bump mapping but not quite the same


    This is actually not the case. While the GeForce does support bump mapping (dot product, I think??), cube environment mapping is about clipping out reflections that shouldn't be there due to obstructions - basically making the scene look more like it would in real life.

    limit of 8 lights in hardware (what happens when a scene requires more than eight? They don't say.. hmmm......)

    The same thing that has been done in the past: render in software.

    Basically nVidia is gambling with hardware geometry. The gamble is that future host CPUs (Pentium 4s or whatever) will not be able to beat them at transformation and lighting, and that if they don't, gamers really are going to benefit from T&L. We'll see if that pans out. Unless they have a very sophisticated ALU on that chip, it will doubtlessly only speed up certain types of scenes. (We've all seen the "tree" demo).

    They are _not_ gambling at all; this is going to be a feature that is _very_ important to games in the future (listen to Carmack if you don't believe me). nVidia is just hoping that developers will pick it up sooner rather than later. Secondly, the whole point of T&L is _not_ to outdo your CPU, but to free up the CPU for other things (i.e. AI, 3D sound, etc.). This would allow for much more immersive games than are currently available (and than would be available if you stick to fillrate only). Also (and please someone correct me if I am mistaken), the GeForce 256 has _more_ transistors than the Pentium III! A geometry engine is built to handle any scene you throw at it, and (because it is so specialized) can probably out-render any CPU available today (and probably for the next 6-8 months). Plus, the whole point is to make it so the CPU doesn't have to worry about geometry calculations (which is always a "Good Thing").
  • Unf, I can't even get into the game. I don't get ANY framerate. I get corrupt video and a dead X server. I ran the command line Zoid sent me:

    ./linuxquake3 +set r_glDriver libGL.so.1 +set in_dgamouse 0 +set r_fullscreen 0

    but got the same symptom. :-(
  • limit of 8 lights in hardware (what happens when a scene requires more than eight? They don't say.. hmmm......)

    BTW, that is 8 HW lights per triangle, not per frame.
    http://www.gamepc.com/news/display_news.asp?id=404
  • I'm not even hoping for 20FPS, but I've got an SMP system and would like to be able to take advantage of it without installing (*SHIVER*) Y2K, er, W2K.

    What is the command line that you use? Is it similar to this? :
    ./linuxquake3 +set r_glDriver libGL.so.1 +set in_dgamouse 0 +set r_fullscreen 0

    That's what came down from Zoid at ID. I still can't make it work. I'll spend some time tonight fiddling with the XF86 mode. What are you using?

    Which TNT card do you have?

    Thx!
  • by Spazmoid ( 75087 ) on Tuesday September 28, 1999 @03:37AM (#1654132)
    The ideas of hardware T&L and texture compression are nice improvements, but as has been repeated many times for many hardware additions (MMX, 3DNow!, the new IBM crypto chip), software (read: apps) has to be written to take advantage of these new features. I would much rather see a CPU with enough FPU power and cache to handle software T&L as it exists now in games. If film runs at ~24FPS, what makes it look as good as a game running at 40-60fps? In my opinion, the difference is the frame rate drops and stutters that occur in games when the scene or perspective changes. When I'm in Half-Life and walk out of a hall into an open area, my TNT drops from about 20-25 fps to 12-17 fps. Throw a couple of light sources into any rendering and you're slowing the whole thing down even more.

    Why does this matter, you ask? Well, your eyes can definitely see the frame rate jumping up and down, even if you see no major difference in the smoothness of the animation. I would rather have any type of T&L, plus bus and fill rates, that let me push a steady 25-35 FPS in whatever game/app was rendering at the resolution I wanted. If it pushed 70-80 and dropped to 30 on a tough scene, it would not matter much, as I would cap my frame rate at about 35-40 (a rough sketch of what I mean follows). Then my fps stays a STEADY 25-40 and doesn't drop to an unacceptable rate; also, the app code doesn't get bottlenecked when the frames push super high. I don't want 300 fps at 1280x1024, I just want ROCK SOLID frames between 25-40 when rendering ANY scene. Of course, having hardware T&L and four pipelines is nice for being able to do more detailed geometry, and do it faster, as long as the software offloads its T&L to the hardware. However, I think we are getting pretty damn close to the maximum detail level that's needed in games. We can use some more, but not a whole lot. The most important thing is that we can do it at a steady rate.
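
    Very roughly, the kind of cap I mean looks like this - render_frame() is a stand-in for whatever the engine's draw call is, and now_ms() is only approximate since clock() measures CPU time, not wall time:

    #include <time.h>

    extern void render_frame(void);              /* placeholder for the game's renderer */

    static double now_ms(void)
    {
        return (double)clock() * 1000.0 / CLOCKS_PER_SEC;
    }

    void run_capped(double target_fps)
    {
        const double frame_ms = 1000.0 / target_fps;   /* e.g. 25 ms at 40 fps */
        double next = now_ms();

        for (;;) {
            render_frame();
            next += frame_ms;
            while (now_ms() < next)
                ;   /* busy-wait; a real game would sleep or run AI/sound here */
        }
    }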

    Flame Away!!

  • I actually see this a lot. Most people don't seem to grasp that a computer display is not supposed to work like a television set (bigger screen = bigger pixels).

    There are also a large number of people who really need corrective lenses, won't wear them, and instead requisition a huge monitor. This could be a legal problem for a company if someone insists that they can't do their work without a 21" monitor @ 800x600 (or even 640x480!)
  • by |ckis ( 88583 )
    Doesn't a plain old TNT based board fit these requirements? About $70 for 16MB AGP from Creative on Pricewatch. -
    -
  • by Anonymous Coward
    1)
    OpenGL will accelerate T&L only if you bother to use the OpenGL transform and lighting functions - e.g. the matrix, vector, and lighting calls (glMultMatrixd, glPushMatrix, etc.) which abstract the hardware setup engine.

    If you simply use OpenGL as your polygon engine and do all the transformation math yourself, because you wrote your own whiz-bang matrix functions using 3DNow!, then your GeForce's geometry engine ain't gonna help you at all.
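
    To make that concrete, here's a rough sketch using plain OpenGL 1.x calls - only the gl* functions are real; my_transform and everything else is made up for illustration:

    #include <GL/gl.h>

    /* Hardware-friendly path: hand OpenGL the modelview matrix and the raw
       object-space vertices, so a card with hardware T&L can do the transform. */
    void draw_hw_friendly(const float model_matrix[16], const float *verts, int nverts)
    {
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glMultMatrixf(model_matrix);
        glBegin(GL_TRIANGLES);
        for (int i = 0; i < nverts; i++)
            glVertex3fv(&verts[i * 3]);
        glEnd();
        glPopMatrix();
    }

    /* Do-it-yourself path: my_transform() stands in for a hand-rolled (say,
       3DNow!-optimized) routine. The chip only ever sees final coordinates,
       so its geometry engine sits idle. */
    void draw_cpu_transformed(const float *verts, int nverts,
                              void (*my_transform)(const float in[3], float out[3]))
    {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glBegin(GL_TRIANGLES);
        for (int i = 0; i < nverts; i++) {
            float v[3];
            my_transform(&verts[i * 3], v);
            glVertex3fv(v);
        }
        glEnd();
    }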

    2)
    AGP is a stopgap solution to having onboard memory. You need the onboard memory, but if you can't have a lot, you can use the limited amount that you have as a texture cache. "Maybe" doesn't quite cut it. :) Using AGP reads to access your textures is akin to grabbing ancient 60ns RAM and shoving it into your nice spiffy 600MHz Pentium III (provided you could even fit SIMMs in it these days :). This is amplified by the fact that the GeForce can pull 256 bits out of onboard memory in one pass, instead of spooning over 64 bits at a time across the AGP port.

    In addition, with the amount of textures increasing, storing all of them in system RAM and having the video card constantly hitting that RAM for textures means that your poor Pentium III 600 will be twiddling its thumbs waiting for the GeForce to stop accessing RAM.

    You're gonna slow down your system a lot, and that's because your processor is gonna be burning a lot of cycles doing nothing, waiting for RAM to throw data at it.
  • Dear god, count me in on all 3...
  • I didn't have to add any of that command line stuff. My command line is ./linuxquake3. That's it.

    ----
    We all take pink lemonade for granted.
  • Sig11, eh? You are running as root? I have a Diamond Viper 770 (TNT2) and I've had success running Q3A once I set the depth to 16 bpp and monkeyed with the video settings, although my V2 runs a bit better still so far.

    Which X server are you running? XF86 3.3.3.* or up SVGA, I would think.

  • Have you ever seen a camera pan around fast (and I mean really fast) in a movie or on TV?

    You see nothing but blur, which makes for a nice effect in a horror movie, but would be VERY annoying if you were panning around in order to aim, fire, and kill a target in less than half a second.

    60 fps will always be the minimum for playable action games (people who are really serious will say even higher; the "pro" Quake players won't play with less than 100).

    -
    /. is like a steer's horns, a point here, a point there and a lot of bull in between.
  • Read this nice article [planetquake.com] from PlanetQuake on fillrate, pixels and texels.
  • The GeForce is really cool in my opinion, but I see it more as a transition technology than an end in itself. Up until now your CPU has taken the role of code operator, physics engine, and scene controller while your video card just rasterized the scene, which means the CPU needed to be really fast to render a complex scene. The GeForce is supposed to take over the job of scene controller, which means your CPU has less to do while the video card is more fully utilized; no games are programmed this way yet because the system is so new. But the GeForce won't be the last word in new graphics chipsets or even graphics processing; it's just the first step toward a different way of doing things.

    Look at its scores compared to the TNT2 and you see only an 8fps increase in certain tests, but remember those 8 extra frames are another 6 million texels. You also have to take into account that you're not utilizing all aspects of the chipset: the hardware T&L isn't being used and neither is its control of the scene, so it's really an apples-to-oranges comparison. It's actually like the P3 and Apple's G4: at the same clock speed as the old chips, running the same software, they are only marginally faster, but when you actually use their features to the fullest you get a much faster result.


    As for being a transition technology, that's exactly what I think it is; soon you'll see S3 and 3dfx do something similar if not better, then nVidia will come out with a more powerful GeForce, and so on and so forth. One area where I really think this kind of technology will do a lot of good is the console market. If you look at the N64 and Dreamcast, they both have a (relatively) super fast CPU and then a powerful graphics chip for the actual rendering; there's not a lot of technological difference between the two besides word size and the number of transistors. On the other hand, if you used a technology like the GeForce in a console, you'd have a much more versatile machine that would be cheaper to manufacture. Your CPU handles the game code and does the physics calculations on a standardized chip that can perform just about any task you assign it, and then your graphics card uses one part of its chip for scene control, another part for lighting and textures, and a final part for the actual rendering of the scene. Each job is done on a specialized processor on the chip, which means it can be done faster and more efficiently than on a general-purpose chip. This means consoles can more easily run complex code and physics, because the processor isn't as tied up with graphics processing, not to mention run application-style programs with heavy graphical content without a dip in performance. This would give future consoles more leverage when it comes down to a choice between a full-fledged PC and a console that has much of the functionality but less hassle.

  • >However I think that we are getting pretty damn
    >close to the maximum detail level thats needed
    >in games.

    Remember when Bill Gates said, "640K is plenty"? That is the wrong way to think, my friend. Games will never have too much detail until they are indistinguishable from real life.


    -
  • Point taken, but I try not to remember anything Bill Gates says; it's like reliving a bad acid trip!!


  • -- The GeForce fared better in Shogo: MAD, scoring 61.3fps at 1,024 by 768, 16-bit, compared to 52.3fps for the Riva TNT2 Ultra card. --


    Remember when the Pentium III came out and everyone was skeptical about how it was only like 13% faster than the PII at the same clock speed? Well, that was because nothing was written for the newer extensions like Streaming SIMD. The GeForce 256 is in the same boat: nothing is written for its 2 extra pipelines. So don't bash the GeForce for its 8fps increase...


