Preview of The GeForce 256
Tor Arne writes "Gamecenter is doing a preview of the Nvidia GeForce 256. While not as amazing as it initially sounded, it still looks pretty amazing."
Your own mileage may vary.
Re:TNT (Score:1)
Why 24fps film looks better (Score:1)
Re:23 million transistors (Score:1)
-----
lower clock speeds are *better* (Score:2)
-----
Re:Is as advertised (Score:1)
As for comparing the nVidia chipset (or any PC-based chipset) to the PSX2 - that's pretty pointless. The PSX2 isn't intended to be a general-purpose graphics platform; it only has to run at one resolution (and a low one at that!). It's pretty easy to optimize given those circumstances (crap, the refresh rate is limited too). The GeForce has to perform under a variety of resolutions, refresh rates, and windowing environments. Plus, I have no doubt in my mind that by the time the PSX2 ships in volume (or shortly thereafter) PC boards will have surpassed it in speed and quality (even at higher resolutions). So the PSX2 may have an irrelevant lead for a month or two, but it will become old technology really quickly - closed platforms are like that.
I look at it this way: nVidia has hired a lot of talent away from SGI. There wasn't a lot that surprised SGI engineers when it came to graphics technology. I doubt nVidia is going to be caught anytime soon with their pants around their ankles.
(3dfx too, but I have less faith in them).
Don't get me wrong, the PSX2 looks like it will be a great gaming (and multimedia) platform. But will it make PC graphics irrelevant or blow the PC graphics platforms away? I don't think so.
better put... (Score:1)
-----
um... (Score:1)
-----
Fill rate is not as important as Geometry (Score:1)
keep this in mind... (Score:1)
-----
Re:Quake on TNT (Score:1)
I usually run my desktop at 1024x768x16, but I've tried x15, x32, 640x480x* and 800x600x*.
So as not to clutter up
Thanks for the comments, I'd really like to get this resolved.
It all depends on the app/game... (Score:2)
Voodoo Extreme has asked a lot of game developers what they think of this whole T&L vs. fill rate issue - you can find the article here: http://www.voodooextreme.com/articles/fillvstl.ht
I like the first comment from Tim Sweeney - Epic Games
A lot of the benchmarks that have been published don't take advantage of the T&L, and therefore the benchmarks don't look really great, just great. But the fill rate of the GeForce isn't *that* much better than the TNT2 Ultra or the Voodoo3 3500.
What you can't see from the benchmarks either is the picture quality - with games that use T&L, you might not get a frame rate that is much higher than the others, but you'll get a much nicer picture.
You'll need games that stress the T&L chip to see the difference, and there are not many games which do that today (are there any at all?!)
Download the tree demo from NVIDIA's website and run it on your 3D accelerator - it crawls!
I tried it on my Celeron450/128MB RAM/Voodoo3 2000 - it was a slideshow!
Besides, the GeForce is the only next-generation card which will be available in the next couple of weeks - the S3 Savage2000 will be available before Christmas, but that's a long time in the graphics business. It is even worse with 3dfx's Voodoo4/Napalm - it may not be available before February!
If NVIDIA continues to deliver a new product every 6 months, they will have their next-generation card ready a few months after the Voodoo4 arrives.
Rumors about NVIDIA's next card/chip/GPU will certainly be all around the net at that time, which may hurt 3dfx's sales if they don't deliver something quite extraordinary...
Re:Nvidia Drivers (Score:1)
Re:Is as advertised (Score:1)
What will be interesting in the future will be the test results of the Voodoo4 (or whatever it will be called). This card should be able to do more with a lower framerate due to its support of motion blur. The analogy that 3Dfx is making with the V4 is between film, which runs at 24 fps, and today's video games, which must run at 60 fps. I think that most people would agree that film looks very, very good. If what 3Dfx says about their board is true, then FPS will no longer be a suitable judge of a board's performance - assuming, that is, that a game is written to take advantage of motion blur. The downside... games depend on reaction to a controller and must be able to display these small changes. If the game is only updating at 24 fps, then you may feel as if you don't have precise control over the game (but it will damn well look good!!).
---
This certainly isn't true at all. 3DFX has *always* stressed framerate over features, and it's no different with the V4 (Napalm, whatever). You can expect this card to put up framerates that squash all the competition, but at the expense of having a limited featureset. They've got a supposedly huge fillrate with the V4, but most of that will go to supporting full-screen anti-aliasing at a decent framerate. So 3DFX has another incremental product upgrade with minimal benefit to the consumer. 32-bit color? Wow - that only took close to 2 years to implement after their competitors did.
Face it, 3DFX hasn't done anything really worthwhile since bringing affordable 3D in hardware to the desktop.
-aaron
---
aaron barnes
part-time dork
Re:Hopefully understandable rundown: (Score:3)
No facts to back this claim up. How exactly does hardware T&L increase the amount of onboard framebuffer required? With AGP, there really is no need for local video memory at all, except to use for the actual visual screen, and maybe as a texture cache. Sure the geometry system will need somewhere to cache scenes, but to fill up 128MB with just _geometry_ information you'll need something as complicated as that huge landscape scene in the Matrix.
Certainly hardware T&L does not increase the size of the framebuffer needed. However, AGP is really not as fast as the RAM they're putting on these cards - hell, all bus issues aside, system RAM is only 100 MHz, while most video cards' local memory is way faster, or on a wider bus, or both.
When doing the geometry, you don't want to tie up your bus to read and write and read each vertex as you translate and light each frame. Sure, it is possible to do it with AGP, but is it efficient? Let's see, they say it can push 15 million polys a second? Say a poly takes up 64-128 bytes of vertex data.
If this is 60 fps, each of those frames is 16-32 MB. 128 MB will be more than most applications will need. But using an extra 32 for geometry information is not unwarranted, in their pushing-the-card-to-its-limits case.
Disclaimer: I'm not as smart as I think.
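For what it's worth, the arithmetic above checks out. Here is the back-of-the-envelope version as a small C program (the 64-128 bytes per polygon and the 60 fps target are assumptions that just match the figures in the comment):

    #include <stdio.h>

    int main(void)
    {
        const long polys_per_second  = 15000000L;  /* nVidia's quoted peak          */
        const long fps               = 60;         /* assumed target frame rate     */
        const long bytes_per_poly_lo = 64;         /* assumed vertex data per poly  */
        const long bytes_per_poly_hi = 128;

        long polys_per_frame = polys_per_second / fps;        /* 250,000            */
        long frame_lo = polys_per_frame * bytes_per_poly_lo;  /* 16,000,000 bytes   */
        long frame_hi = polys_per_frame * bytes_per_poly_hi;  /* 32,000,000 bytes   */

        printf("geometry per frame: %ld to %ld bytes\n", frame_lo, frame_hi);
        return 0;
    }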
Re:better put... (Score:1)
I for one don't have many Intel chips, but they did add SIMD instructions, if I recall, which make applications "faster when designed for the chip," just as you said for the G's.
Re:Hopefully understandable rundown: (Score:1)
And, one should note that hardware geometry is a new technology _in_the_consumer_market_.
In 6-8 months we'll have the next GeForce, and with it perhaps much higher clock rates, more pipelines, more features. The nVidia guys themselves seem to expect up to 8(!) times the geometry processing power. And I can believe this; it's just improving the given platform, a lot easier than the step they have taken now. No CPU will ever be able to catch up - it's the same as with DSPs.
It's a big step, and in my opinion we'll see nVidia (and S3, Matrox, and perhaps ATI and 3dfx too, otherwise they'll die) quickly going in a direction where they will be able to get all the specialized graphics hardware manufacturers into big trouble.
Nobody who sells to a relatively small market segment is able to compete with a demanding, monopoly-free mass market which wants exactly these high-end features.
And - looking at the high-end workstation-graphics companies - we see they all know it (SGI, Intergraph
Re:Hopefully understandable rundown: (Score:1)
Almost right... (Score:2)
GeForce will not be able to do this as it is grossly fill-rate impeded compared to its competitors. GeForce is all geometry and no fill rate - the next 3dfx thing is all fill rate and no geometry - the Savage one is somewhere in the middle.
The only way you'll get the antialiasing and motion blur ('cinematic' effects, kind of like how 3dfx rendering seems dirtier, more contrasty, more photographic as opposed to 'rendered') is with the 3dfx stuff, as none of its competitors are willing to put that much effort into fill rate. The only way you'll get 20 times the geometry (rounded curves, 3D trees in games, etc.) is if you get the GeForce and also wait for developers to write games for it, many of which could be Win-only *grumble*. My money's on 3dfx actually - I'm biased because I always think 3dfx screenshots look more 'photographic' (grain? contrast? some factor of their 22-bit internal calculations to 16-bit display?) but there's another factor - competitiveness.
If you read folks like Thresh talking about what they use, it turns out that they crank everything down to look as ugly as possible and run as fast as possible. I've done this on q3test and got a solid 60fps in medium action using only a 300MHz G3 upgrade card and a Voodoo2. It looks awful, especially when you really pull out all the stops and make things look absolutely horrible - but it's sure competitive! You can track enemies and gib them much better, even if you're not all that hot at Quake.
How does this relate to the GeForce? It's the fill rate. Even on a normal AGP bus the thing can't be fed enough geometry to max it out - but the actual filling of the screen is unusually slow, and the cost grows rapidly at larger resolutions.
The result is this: somebody trying to max out, say, q3test, but running at 1600x1200 in lowest image quality, will be able to see accurate (but nearly solid color!) enemies in the distance and be able to make out subtle movements. This also applies to the antialiasing - that will help as well, even at normal resolutions. The result is that the person running on something with an insanely high fill rate, combined with very low graphics quality, will get more visual information than the other players will, and will be getting it at a frame rate that is competitive (to a Thresh, there's a difference between 100 and 150 fps - while in a crowded fight, with 'sync' turned off).
By contrast, users of a geometry enhanced card will not get a competitive advantage from their form of graphical superiority. It is strictly visual eye candy and will not significantly add a competitive advantage...
For that reason I'd say, DON'T write off 3dfx just yet. Their choice for technological advancement is tailor made for getting a competitive advantage, and when you start maxing out the respective techie wonderfulness, the competitive advantage of 3dfx's approach will not be subtle. Likely result- 3dfx users may not be looking at comparably pretty visuals, but can console themselves by gibbing everybody in sight
Re:Hopefully understandable rundown: (Score:1)
Another preview of the GeForce 256 (Score:3)
--
Is as advertised (Score:2)
What will be interesting in the future will be the test results of the Voodoo4 (or whatever it will be called). This card should be able to do more with a lower framerate due to its support of motion blur. The analogy that 3Dfx is making with the V4 is between film, which runs at 24 fps, and today's video games, which must run at 60 fps. I think that most people would agree that film looks very, very good. If what 3Dfx says about their board is true, then FPS will no longer be a suitable judge of a board's performance - assuming, that is, that a game is written to take advantage of motion blur. The downside... games depend on reaction to a controller and must be able to display these small changes. If the game is only updating at 24 fps, then you may feel as if you don't have precise control over the game (but it will damn well look good!!).
It will be interesting to see how the next 6 to 9 months of graphics cards pan out. One thing is for certain, though: by the time the PSX2 ships in North America, the PC should be well beyond it visually.*
*Of course the world is ending on Y2k, so these are hypothetical hardware progression estimates.
Hmm it may be too early to tell ... (Score:2)
Who cares about clock speeds if it's fast? (Score:3)
More importantly, a fixation on clock speed is quite silly, as it is merely one of the factors in how fast a system is. I'm rather more impressed if they produce a faster product that doesn't have as fast a clockspeed.
Furthermore, keeping the clock speed down has other merits such as that it likely reduces the need for cooling, as well as diminishing the likelihood of the chips being stressed into generating EMI.
(Entertaining rumor has it that 900MHz systems that are likely coming in the next year may interfere with the 900MHz band used by recent digital cordless phones...)
Re:Is as advertised (Score:3)
Actually, OpenGL has had support for onboard T&L for quite some time. When these guys are developing these games, you can bet that their rigs have boards with T&L (if they aren't a poor startup).
What will be interesting in the future will be the test results of the Voodoo4 (or whatever it will be called). This card should be able to do more with a lower framerate due to its support of motion blur.
If you are talking about T-Buffer technology, you may be grossly overestimating its power. In the demos that I have seen (and granted, these are 3dfx demos and they are nasty - why wouldn't a company trying to sell its product produce decent demos to show it off? - but I digress), all your motion blur will get you is a loss of framerate, since your CPU has to fill the T-Buffer with x images, which then get blended to produce the effect. While it may look purdy, it is going to require one hell of a CPU to pull it off. I think the full-scene anti-aliasing is going to be the selling point on that board.
I think something that will be interesting to point out is that 3dfx has done a really good job of throwing their marketing prowess at consumers with respect to the GeForce 256. While I don't quite believe everything that nVidia says, I find it hard not to support them when 3dfx has gone so far out of their way to make nVidia look bad. I will paraquote a developer (can't remember his name):
XF86 and other Linux drivers (Score:3)
And when will ID start supporting Q3A on the nVidia cards? I HATE rebooting into Windows!
I want one of these SO BAD, but I'm not going to give up XF86 for it!
Hopefully understandable rundown: (Score:4)
This is only true if you were unfortunate enough to write your game using Direct3D. OpenGL games will be able to take advantage of geometry acceleration without even recompiling. You reap what you sow when you use a Microsoft API.
Whether or not hardware T&L is of any benefit to current or future games is yet to be seen though. Games lately have been getting more and more fillrate bound and less geometry bound, as game creators take advantage of higher resolutions and larger textures.
The GeForce, on the other hand, supports up to 128MB of local graphics memory. Hardware T&L greatly increases the amount of onboard memory needed. The first boards aimed at consumers should come out at 32MB, with 64MB and 128MB cards to follow later on.
No facts to back this claim up. How exactly does hardware T&L increase the amount of onboard framebuffer required? With AGP, there really is no need for local video memory at all, except to use for the actual visual screen, and maybe as a texture cache. Sure the geometry system will need somewhere to cache scenes, but to fill up 128MB with just _geometry_ information you'll need something as complicated as that huge landscape scene in the Matrix.
Texture compression allows the use of much more detailed textures without overburdening graphics memory or bus bandwidth.
My jury's still out on texture compression. For games that are poorly written (i.e. that load and release textures on the fly, each frame) compression can help, but for games that use a more intelligent caching scheme for texturing, there really isn't much of a point.
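To make the caching point concrete, here's a minimal OpenGL 1.1 sketch of my own (the texture size and the 'pixels' buffer are placeholders): upload the texture into a texture object once, then just rebind it each frame, rather than re-uploading with glTexImage2D every frame, which is the "poorly written" case compression mostly papers over.

    GLuint tex;
    /* at load time: create a texture object and upload the pixels once */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);

    /* every frame: just rebind -- the driver keeps the data cached on the card */
    glBindTexture(GL_TEXTURE_2D, tex);
    /* ... draw textured polygons ... */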
Like the TNT2, GeForce supports the AGP 4X standard.
Definitely "A Good Thing".
The GeForce also introduces a new feature, cube environment mapping, that allows for more realistic, real-time reflections in games.
Similar to the Matrox G400's env mapped bump mapping but not quite the same.
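For the curious, this is roughly what cube environment mapping looks like to a programmer through the ARB_texture_cube_map extension (the enum names come from that extension; the face data is a placeholder) - a sketch, not anything quoted from the article:

    GLuint cube;
    int face;
    /* upload the six faces of the environment into one cube map texture */
    glGenTextures(1, &cube);
    glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, cube);
    for (face = 0; face < 6; face++)
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB + face, 0, GL_RGB,
                     128, 128, 0, GL_RGB, GL_UNSIGNED_BYTE, face_pixels[face]);

    /* have the hardware generate reflection-vector texture coordinates */
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);
    glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);
    glEnable(GL_TEXTURE_GEN_S);
    glEnable(GL_TEXTURE_GEN_T);
    glEnable(GL_TEXTURE_GEN_R);
    glEnable(GL_TEXTURE_CUBE_MAP_ARB);
    /* ... draw the reflective object ... */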
Other things to note: 4 texel pipes (fills at four times the clock rate) - watch for all the other chip makers to do this too - and a limit of 8 lights in hardware (what happens when a scene requires more than eight? They don't say... hmmm).
Basically nVidia is gambling with hardware geometry. The gamble is that future host CPUs (Pentium 4s or whatever) will not be able to beat them at transformation and lighting, and that gamers are really even going to benefit from T&L at all. We'll see if that pans out. Unless they have a very sophisticated ALU on that chip, it will doubtless only speed up certain types of scenes. (We've all seen the "tree" demo.)
Quake on TNT (Score:3)
----
We all take pink lemonade for granted.
Sounds nice, but... (Score:2)
Re:Quake on TNT (Score:2)
Maybe it's a TNT2 thing.
Re:Quake on TNT (Score:2)
I basically just followed the instructions in the page above, and everything went ok. Try searching google or deja.com for other's experiences.
----
We all take pink lemonade for granted.
Re:Nice but I rather... (Score:1)
The frame rate cap sounds really good. Unsteady motion (going from one rate to another) is much more irritating.
Re:Hopefully understandable rundown: (Score:1)
Whether or not hardware T&L is of any benefit to current or future games is yet to be seen though. Games lately have been getting more and more fillrate bound and less geometry bound, as game creators take advantage of higher resolutions and larger textures.
This is only true if the software is using OGL's transformation pipeline. IIRC, a lot of the current OGL games set the MV matrix to identity. Also, almost all of today's games use lightmaps, not OpenGL lights. So no speed up there.
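In other words, a lot of engines of that generation render something like this hypothetical fragment (all the names are made up), which is exactly why the GeForce's transform and lighting unit has nothing to do:

    /* the engine has already transformed and lit the vertices with its own code */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();                 /* GL's transform stage is bypassed          */
    glDisable(GL_LIGHTING);           /* lighting comes from precomputed lightmaps */

    glBegin(GL_TRIANGLES);
    for (i = 0; i < num_verts; i++) {
        glTexCoord2fv(lightmap_uv[i]);     /* baked lightmap coordinates    */
        glVertex3fv(pretransformed[i]);    /* vertices already in eye space */
    }
    glEnd();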
Is 8 lights per polygon, not scene (Score:1)
Another point that was clarified was the hardware lighting algorithm. Many people interpreted this as eight light sources for the whole screen, which is incorrect. The GeForce allows for eight hardware lights per triangle. This means that every individual triangle that is part of an on-screen shape can have up to eight sources affecting its lighting. This is done with a minimal performance hit. NVIDIA is also in the process of tweaking their drivers to fully optimize them for the retail release of DirectX 7.
Cool, isn't it. I want one.
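In OpenGL terms, that cap corresponds to the eight standard light slots. A minimal sketch (the position and color values are made up):

    GLfloat pos[4]   = { 10.0f, 5.0f, 0.0f, 1.0f };   /* example positional light */
    GLfloat white[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
    int i;

    glEnable(GL_LIGHTING);
    for (i = 0; i < 8; i++) {                 /* GL_LIGHT0 through GL_LIGHT7 */
        glLightfv(GL_LIGHT0 + i, GL_POSITION, pos);
        glLightfv(GL_LIGHT0 + i, GL_DIFFUSE, white);
        glEnable(GL_LIGHT0 + i);
    }
    /* with hardware T&L these eight are evaluated on the card; a ninth
       light has to be faked or dropped */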
Re:Hmm it may be too early to tell ... (Score:1)
indeed, it may be too early. maybe they'll pump the clock rate up and make it just silly-fast =).
all this talk of 3d 3d 3d, what about 2d? does it look as good as a g200 at 1280x1024x32bpp@75hz? my plain old TNT doesn't. i'll probably still get it anyway - i heard linux support is out of box =)
as for Myth II (Score:2)
Unfortunately, it won't, as Myth II only supports hardware acceleration on 3dfx cards via the Glide port; OpenGL support is not even planned. To quote briareos, a Loki developer, on loki.games.myth2 [loki.games.myth2]:
You're really asking "will we take the time to write an OpenGL
rendering module for Myth2"? The answer is: if someone finds the
time. That's all I can really say.
Overlap, dude (Score:1)
Think of it as having 100 lights all over the place but choosing the 8 that make the biggest impact on the polygon. Since most objects and groups of objects exhibit a lot of lighting coherence, you probably wouldn't even notice the subtler lighting discrepancies.
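A sketch of that selection, in entirely hypothetical C: score each light's contribution to the object and keep the best eight for the hardware slots.

    #define MAX_HW_LIGHTS    8
    #define MAX_SCENE_LIGHTS 256

    typedef struct { float pos[3]; float intensity; } Light;

    /* crude score: intensity attenuated by squared distance to the object */
    static float light_score(const Light *l, const float obj[3])
    {
        float dx = l->pos[0] - obj[0];
        float dy = l->pos[1] - obj[1];
        float dz = l->pos[2] - obj[2];
        return l->intensity / (1.0f + dx * dx + dy * dy + dz * dz);
    }

    /* pick the best MAX_HW_LIGHTS lights for this object (simple selection;
       assumes n <= MAX_SCENE_LIGHTS) */
    void pick_lights(const Light *lights, int n, const float obj[3],
                     int out[MAX_HW_LIGHTS])
    {
        int used[MAX_SCENE_LIGHTS] = { 0 };
        int i, k, best;

        for (k = 0; k < MAX_HW_LIGHTS && k < n; k++) {
            best = -1;
            for (i = 0; i < n; i++)
                if (!used[i] && (best < 0 ||
                    light_score(&lights[i], obj) > light_score(&lights[best], obj)))
                    best = i;
            used[best] = 1;
            out[k] = best;      /* bind lights[out[k]] to GL_LIGHT0 + k */
        }
    }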
Re:Is as advertised (Score:1)
But are not texels just textured pixels? My point was that you don't get extra texels/pixels if you add more triangles.
Thanks for pointing out the difference though, it does make a difference if you say that one card can push through 4 texels/cycle and another can do 2 texels/cycle. I am going under the assumption that 3dfx is also going to do 4 pixel pipelines as nVidia does.
Re:Hopefully understandable rundown: (Score:2)
Not true. Not true at all. If a game is written to only use OpenGL as a rasterization system then it will NOT benefit from HW T&L in the least. Take Tribes as an example. All of the 3D T&L in Tribes is done by the program. This is so they can support a software rendering mode along with GLide and OpenGL. Tribes will not benefit from HW T&L.
Any game that has a software rendering mode, along with OpenGL probably has an internal T&L pipeline and therefore will not benefit from HW T&L.
Re:Dudical. (Score:1)
Re:Nice but I rather... (Score:1)
Re:um... (Score:1)
I've been testing the Mesa 3D library under Linux, but accelerated 3D support (for example with the 3dfx Voodoo Rush chipset) is poor. Well, it works in full-screen mode, but I haven't succeeded in getting things to work in a window...
Re:What a dud of a card.... (Score:1)
But 3dfx has only just announced it, so NVIDIA hasn't had a chance to implement it in the GeForce.
One thing is for sure - FXT1 is better than S3 texture compression, and it is *free*!
Which also means that it may well be available on Linux.
S3's texture compression is (at the moment) only for the Windows platform.
Re:better put... (Score:1)
-----
Video memory (Score:1)
1600x1200x32/8 = 7 680 000 bytes/screen.
The G400 (as an example) has a 300 MHz RAMDAC. At this resolution, it can DAC all its RAMs 100 times a second.
768 000 000 bytes/second. Hmm. Since each pixel takes up 3 or 4 bytes, and each Hz of the RAMDAC would pretty much have to work on entire pixels, this is fine with a 300 MHz RAMDAC.
The memory bandwidth on the G400 MAX is 32 bytes/clock * 170 MHz = 5,440,000,000 bytes/second.
About 14% of the bus bandwidth is being used by the RAMDAC, unless I'm missing something. Since the RAMDAC only ever really needs to look at 24 bits per pixel, we could probably bring that down to 10%. My guess as to the g400's ram clock isn't off by more than 10% either way.
So, yes, if you're running at a ridiculous refresh rate at a VERY high resolution, then a somewhat significant portion of your video memory bandwidth is going to just turning pixels into voltages.
But you've got a lot left over still. Never minding squeezing data through the AGP, video RAM runs at a higher clock than system RAM and often sits on a wider bus.
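The same arithmetic in a few lines of C, so the percentage is easy to check (the 170 MHz clock and 32 bytes per clock are the poster's guesses, reused here as assumptions):

    #include <stdio.h>

    int main(void)
    {
        double screen_bytes = 1600.0 * 1200.0 * 32.0 / 8.0;  /* 7,680,000 bytes/screen */
        double refresh_hz   = 100.0;                         /* assumed refresh rate   */
        double dac_bytes    = screen_bytes * refresh_hz;     /* 768,000,000 bytes/s    */
        double mem_bytes    = 32.0 * 170e6;                  /* ~5.44e9 bytes/s        */

        printf("RAMDAC share of memory bandwidth: %.1f%%\n",
               100.0 * dac_bytes / mem_bytes);               /* roughly 14% */
        return 0;
    }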
Re:Video memory (Score:1)
Re:Quake on TNT (Score:1)
Something to note: you won't get uber-godly rates upwards into the 60fps area, so don't expect those... I average 30fps personally on a Celeron 400.
Re:Hopefully understandable rundown: (Score:1)
But like I said, I'm probably wrong, because hey, it's just one of those days...
Re:Hopefully understandable rundown: (Score:3)
This is because in the past the CPU has been the limiting factor. Developers were forced to limit the triangle count and rely on large textures to make games realistic. With the triangle limit somewhat lifted, you can use smaller textures to produce the same (and better) effects.
The GeForce also introduces a new feature, cube environment mapping, that allows for more realistic, real-time reflections in games.
Similar to the Matrox G400's env mapped bump mapping but not quite the same
This is actually not the case. While the GeForce does support bump mapping (dot product bump mapping, I think?), cube environment mapping has to do with clipping out reflections that shouldn't be there due to obstructions, and basically making the scene more like it would be in real life.
limit of 8 lights in hardware (what happens when a scene requires more than eight? They don't say.. hmmm......)
The same that has been done in the past, render in software.
Basically nVidia is gambling with hardware geometry. The gamble is that future host CPUs (Pentium 4s or whatever) will not be able to beat them at transformation and lighting, and that gamers are really even going to benefit from T&L at all. We'll see if that pans out. Unless they have a very sophisticated ALU on that chip, it will doubtless only speed up certain types of scenes. (We've all seen the "tree" demo.)
They are _not_ gambling at all; this is going to be a feature that is _very_ important to games in the future (listen to Carmack if you don't believe me). nVidia is just hoping that developers will pick it up sooner rather than later. Secondly, the whole point of T&L is _not_ to outdo your CPU, but to free up the CPU for other things (i.e. AI, 3D sound, etc.). This would allow for much more immersive games than are currently available (or than would be available if you stuck to fill rate only). Thirdly (and please someone correct me if I am mistaken), the GeForce 256 has _more_ transistors than the Pentium III! A geometry engine is built to handle any scene you throw at it, and (because it is so specialized) can probably out-render any CPU available today (and probably for the next 6-8 months). Plus, the whole point is to make it so the CPU doesn't have to worry about geometry calculations (which is always a "Good Thing").
Re:Quake on TNT (Score:1)
./linuxquake3 +set r_glDriver libGL.so.1 +set in_dgamouse 0 +set r_fullscreen 0
but got the same symptom.
Re:Hopefully understandable rundown: (Score:1)
BTW, that is 8 HW lights per triangle, not per frame.
http://www.gamepc.com/news/display_news.asp?id=40
Re:Quake on TNT (Score:1)
What is the command line that you use? Is it similar to this? :
./linuxquake3 +set r_glDriver libGL.so.1 +set in_dgamouse 0 +set r_fullscreen 0
That's what came down from Zoid at ID. I still can't make it work. I'll spend some time tonight fiddling with the XF86 mode. What are you using?
Which TNT card do you have?
Thx!
Nice but I rather... (Score:3)
Why does this matter, you ask? Well, your eyes can definitely see the frame rate jumping up and down, even if you see no major difference in the smoothness of the animation. I would rather have any type of T&L, as well as bus and fill rates, that allowed me to push a steady 25-35 FPS in whatever game/app was rendering at the resolution I wanted. If it pushed 70-80 and dropped to 30 on a tough scene it would not matter much, as I would cap my frame rate at about 35-40. Then my fps stays a STEADY 25-40 and doesn't drop to an unacceptable rate, and the app code doesn't get bottlenecked when the frames push super high. I don't want 300 fps at 1280x1024, I just want ROCK SOLID frames between 25-40 when rendering ANY scene. Of course, having hardware T&L and four pipelines is nice for being able to do more detailed geometry, and faster, as long as the software offloads its T&L to the hardware. However, I think that we are getting pretty damn close to the maximum detail level that's needed in games. We can use some more, but not a whole lot. The most important thing is that we can do it at a steady rate.
Flame Away!!
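A frame-rate cap of the kind described is just a small delay loop at the end of each frame. A rough POSIX sketch, assuming a hypothetical render_frame() and a 35 fps target:

    #include <sys/time.h>
    #include <unistd.h>

    extern void render_frame(void);          /* hypothetical: draws one frame */

    static double now_seconds(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    void game_loop(void)
    {
        const double target = 1.0 / 35.0;    /* cap at roughly 35 fps */
        for (;;) {
            double start = now_seconds();
            render_frame();
            double elapsed = now_seconds() - start;
            if (elapsed < target)            /* sleep off the surplus so the */
                usleep((unsigned long)((target - elapsed) * 1e6)); /* rate stays steady */
        }
    }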
Re:Dudical. (Score:1)
I actually see this a lot. Most people don't seem to grasp that a computer display is not supposed to work like a television set (bigger should mean more pixels, not bigger ones).
There's also a large number of people who really need corrective lenses but won't wear them and instead requisition a huge monitor. This could be a legal problem for a company if someone insists that they can't do their work without a 21" monitor @ 800x600 (or even 640x480!).
TNT (Score:1)
-
Re:Hopefully understandable rundown: (Score:1)
OpenGL will accelerate only if you bother to use the OpenGL functions, e.g. the matrix, vector, and lighting functions (such as glMultMatrixd, glPushMatrix, etc.) which abstract the hardware setup engine.
If you simply use OpenGL as your polygon engine, and do all the transformation math yourself because you wrote your own whizbang matrix functions using 3DNow!, then your GeForce's geometry engine ain't gonna help you at all.
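Concretely, the difference is roughly this (a sketch assuming plain GL 1.x fixed-function calls; the angle and vertex arrays are placeholders):

    /* Path A: push your transforms through GL so the hardware T&L unit can run them */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -10.0f);       /* camera offset   */
    glRotatef(angle, 0.0f, 1.0f, 0.0f);     /* object rotation */
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glBegin(GL_TRIANGLES);
    glNormal3fv(n0); glVertex3fv(v0);       /* raw model-space data */
    glNormal3fv(n1); glVertex3fv(v1);
    glNormal3fv(n2); glVertex3fv(v2);
    glEnd();

    /* Path B: transform v0..v2 yourself with hand-written 3DNow!/SSE code, load an
       identity modelview, and submit the finished coordinates -- it works, but the
       GeForce's geometry engine never sees any of it */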
2)
AGP is a stopgap solution for not having enough onboard memory. You need the onboard memory, but if you can't have a lot, you can use the limited amount that you have as a texture cache. Maybe that doesn't quite cut it.
In addition, with the amount of textures increasing, storing all of them in system RAM and having the video card constantly hitting RAM for textures means that your poor Pentium III 600 will be twiddling its thumbs waiting for the GeForce to stop accessing RAM.
You're gonna slow down your system a lot, and that's because your processor is gonna be burning a lot of cycles doing nothing waiting for RAM to throw data at it.
Re:23 million transistors (Score:1)
Re:Quake on TNT (Score:2)
----
We all take pink lemonade for granted.
Re:Quake on TNT (Score:2)
Which X server are you running? XFree86 3.3.3.* or later, with the SVGA server, I would think.
Re:Is as advertised (Score:1)
Have you ever seen a camera pan around fast (and I mean really fast) in a movie or on tv?
You see nothing but blur, which makes for a nice effect in a horror movie, but would be VERY annoying if you were panning around in order to aim, fire, and kill a target in less than half a second.
60 fps will always be the minimum for playable action games (people who are really serious will say even higher; the "pro" Quake players won't play with less than 100).
-
Nice article on fillrate, pixels, texels (Score:1)
The technology... (Score:2)
You look at its scores compared to the TNT2 and see only an 8fps increase in certain tests, but remember, those 8 extra frames are another 6 million texels. You also have to take into account that you're not utilizing all aspects of the chipset: the hardware T&L isn't being utilized, and neither is its control of the scene, so it's really an apples-to-oranges comparison. It's actually like the P3 and Apple's G4: at the same clock speed as the old chips, running the same software, they are only marginally faster, but when you actually use their features to the fullest you get a much faster result.
As for being a transition technology, that's exactly what I think it is; soon you'll see S3 and 3dfx do something similar if not better, then nVidia will come out with a more powerful GeForce, and so on and so forth. One area where I really think this kind of technology will do a lot of good is the console market. If you look at the N64 and Dreamcast, they both have a super fast CPU (relatively) and then a powerful graphics chip for the actual rendering; there's not a lot of technological difference between the two besides word size and the number of transistors. On the other hand, if you used a technology like the GeForce in a console, you'd have a much more versatile machine that would be cheaper to manufacture. Your CPU handles the game code and does the physics calculations using a standardized chip that can perform just about any task you assign it, and then your graphics card uses part of its chip for scene control, another part for lighting and textures, and a final part for the actual rendering of a scene. Each job is done on a specialized processor on the chip, which means it can be done faster and more efficiently than on a general-purpose chip. This means consoles can more easily run complex code and physics because the processor isn't as tied up with graphics processing, not to mention run application-style programs with heavy graphical content without a dip in performance. This would give future consoles more leverage when it comes down to a choice between a full-fledged PC or a console that has much of the functionality but less hassle.
Re:Nice but I rather... (Score:1)
>close to the maximum detail level thats needed
>in games.
Remember when Bill Gates said, "640K is plenty"? That is the wrong way to think, my friend. Games will never have too much detail until they are indistinguishable from real life.
-
Re:Nice but I rather... (Score:1)
The Pentium III of graphics cards (Score:1)
Remember when the Pentium III came out and everyone was skeptical about how it was only like 13% faster than the PII at the same clock speed? Well, that was because nothing was written for the newer extensions like Streaming SIMD. The GeForce 256 is in the same boat. Nothing is written for its 2 extra pipelines. So don't bash the GeForce for its 8fps increase...