Hardware

nVidia's GeForce 256 Breaks Out; changes 3D world

Hai Nguyen writes "nVidia officially unveiled the GeForce 256 (the chip formerly known as NV10). Its architecture emphasizes both triangle rate and fill rate, so the chip can render highly detailed 3D environments and models at smooth framerates. Go get the full info." Holy moses. I want one. Now.
  • And I suppose it's revolutionary, too. Somehow I doubt it.

    Drop the marketing fluff -- we don't want any here. Just the facts.
  • Of course the 3 biggest questions are:
    1) Will the driver be open sourced as the TNT/TNT2 driver is?
    2) How soon will it be available to the public?
    3) What kind of framerate will I get when fragging LPBs in quake 3?

    Fun fun fun!
  • This just makes me sit back and wonder - is the Playstation 2 now history? Yeah, I know, diffy platform and all that, but still. Leaving out the Emotion Engine or whatever it is called, it seems to me the PS2 is now a relic in terms of what it is delivering for graphics. Granted we are not seeing any numbers, but still.


    I think if anything, seeing graphics like this is at least going to light a fire under some butts to get the next generation of stuff out. It is also good to see nVidia not having to worry about selling the chips. This whole Diamond/STB thing had me worried for a while.

    Mister programmer
    I got my hammer
    Gonna smash my smash my radio

  • damnit...i did it too...
  • As far as the PSX2 is concerned, it will come in at about $200-300 for the system. This card looks like it will not be a commodity for some time. If it has 128 megs of RAM, runs AGP 4x, etc., you can guess the target market. A clue: it ain't sub-$400 boxes.
  • Actually this is pretty big news. Up till this point, fill rate has been all that mattered for 3D chip makers, allowing higher resolutions, more rendering passes (for different visual effects) and higher frame rates. This is the first (consumer level?) chip to add transformation and lighting to the 3D chip, thus offloading these duties from the CPU. This effectively means more polygons can be used, and this will have a truly remarkable effect on the realism.

    Aren't you tired of watching perfectly flat walls with big posters stuck on them?
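
    To make the offloading concrete, here is a rough sketch (in C, purely illustrative -- the vertex format and single directional light are my own assumptions, not anything from nVidia) of the per-vertex work a CPU has to grind through every frame when there is no geometry acceleration; a chip with hardware T&L moves this whole loop off the host processor:

        /* Per-vertex transform-and-light work done in software today; a T&L
           chip takes over exactly this loop.  Illustrative sketch only. */
        typedef struct { float x, y, z; } Vec3;
        typedef struct { float m[16]; }   Mat4;   /* column-major, OpenGL style */

        static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

        void transform_and_light(const Mat4 *mvp, Vec3 light_dir,
                                 const Vec3 *pos, const Vec3 *nrm,
                                 Vec3 *out_pos, float *out_diffuse, int count)
        {
            for (int i = 0; i < count; i++) {
                const float *m = mvp->m;
                Vec3 p = pos[i];
                /* 4x4 matrix * vertex (w assumed 1); the divide by w is skipped here */
                out_pos[i].x = m[0]*p.x + m[4]*p.y + m[8]*p.z  + m[12];
                out_pos[i].y = m[1]*p.x + m[5]*p.y + m[9]*p.z  + m[13];
                out_pos[i].z = m[2]*p.x + m[6]*p.y + m[10]*p.z + m[14];

                /* one directional light, diffuse term only */
                float d = dot3(nrm[i], light_dir);
                out_diffuse[i] = (d > 0.0f) ? d : 0.0f;
            }
        }

    Every triangle costs roughly this much arithmetic per vertex before the card even sees it, which is why moving it onto the chip frees the CPU for game logic.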
  • Whatever happened with Diamond making nVidia-based cards? I heard about STB, but nothing about Diamond...

    -lx
  • by Amnesiak ( 12487 ) on Tuesday August 31, 1999 @05:12AM (#1714607) Homepage
    Damn. I was hoping this would get linked up on the front page. :) Oh well, I took a trip to NVIDIA last week, and I'd love it if you guys checked my article out: riva extreme - geforce 256 coverage [rivaextreme.com]
  • by Anonymous Coward on Tuesday August 31, 1999 @05:13AM (#1714608)
    • 15M triangles/sec - sustained DMA, transform/clip/light, setup, rasterize and render rate.
    • 4 Pixels per clock (4 pixel pipelines).
    • 480M pixels/sec fill rate - 32 texture samples per clock, full speed 8-tap anisotropic filtering.
    • 8 hardware lights.
    • 350 MHz RAMDAC.
    • Most feature complete for DX7 and OGL - Transform & Lighting, Cube environment mapping, projective textures, and texture compression.
    • Will utilize 4x AGP performance with Fast Writes, which enables the CPU to send data directly to the GPU (1 GB/sec transfer rate), increasing overall performance and freeing the system memory bus for other functions.
    • 256 bit rendering engine.
    • Highest quality HDTV (High Definition Television) video playback.
    • High Precision HDTV video overlay.
    • 5 horizontal, 3 vertical taps.
    • 8:1 up/down scaling.
    • Independent hue, saturation and brightness controls in hardware.
    • High bandwidth HDTV class video I/O.
    • 16 bit video port.
    • Full host port.
    • Dedicated DMA video.
    • Powerful HDTV motion compensation.
    • Full frame rate DVD to 1080i resolution.
    • Full precision subpixel accuracy to 1/16 pixel.
    Snipped from www.bluesnews.com
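
    A quick back-of-the-envelope check on the headline figures (the ~120 MHz core clock is an assumption from the rumor mill; the spec list above doesn't state a clock):

        /* Sanity-check the announced GeForce 256 numbers.
           The 120 MHz core clock is an assumed figure, not from the press release. */
        #include <stdio.h>

        int main(void)
        {
            double core_mhz = 120.0;   /* assumed core clock */
            int    pipes    = 4;       /* "4 pixels per clock" from the list above */
            double tri_rate = 15e6;    /* claimed triangles/sec */

            printf("fill rate: %.0f Mpixels/s (claimed: 480)\n", core_mhz * pipes);
            printf("triangle budget per frame at 60 fps: %.0f\n", tri_rate / 60.0);
            return 0;
        }

    With those assumptions the fill rate works out to exactly the claimed 480 Mpixels/s, and the 15M triangles/sec claim corresponds to a 250,000-triangle scene at 60 fps.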
  • I just went and looked at the tweak3d guide ( http://www.tweak3d.net/reviews/nvidia/geforce256/1.shtml ) and good god this card kicks ass.

    The addition of Transform and Lighting really _is_ revolutionary. Once you've used one of these babies, you won't want to go back.

    There's a list of useful links at Blues News (www.bluesnews.com)
  • "hands-on tests" means prototype hardware already available, so this should be out fairly soon. None too soon, as S3 pulled ATI's trick and is coming out early with a chip at 0.18.
  • It says this here card can only do 15 million triangles per second. Playstation 2 can do 75 million. I don't think this is gonna put Playstation2 out of business.

    ^. .^
    ( @ )
  • This just makes me sit back and wonder - is the Playstation 2 now history? Yeah, I know, diffy platform and all that, but still. Leaving out the Emotion Engine or whatever it is called, it seems to me the PS2 is now a relic in terms of what it is delivering for graphics. Granted we are not seeing any numbers, but still.


    The Playstation II has a modified R10000 processor with very hefty floating point extensions - it won't have much of a problem doing geometry transformations. IMO, it will probably be about on par with the graphics cards floating around at the time of its release. It won't leave them in the dust, but neither will it be left in the dust.


    OTOH, a friend in the gaming industry says that the Playstation II has architectural problems that might degrade performance (low system bus bandwidth, among other things). We'll see what happens when it ships.

  • by aheitner ( 3273 ) on Tuesday August 31, 1999 @05:20AM (#1714613)
    So basically nVidia chose to make a high fill-rate card with hardware lighting and transforms (geometry acceleration). These aren't innovative directions -- they were the obvious ones. Nonetheless, the other major player, 3dfx, has pulled back from these choices. I'll explain why:

    nVidia has a card which can do supported operations fast. It obviously has a lot of fill. It'll be a good board. Of course it'll still be slow in D3D ... everything is (we once demonstrated that it's physically impossible under DX6 to be faster than a Voodoo3 under Glide). There are some downsides: if you want to do crazy weird stuff with your lighting (e.g. faster-but-wrong tricks, funky effects) you may not be able to get it to work. Similarly with geometry -- special fast cases will become normal cases. So there may be a 50%-100% gain in triangle rate, but it's unlikely geometry acceleration will ever be able to provide much more than that.

    nVidia seems to have chosen not to support the hardware bump mapping of the Matrox G400, an extremely high fill card (runs beautifully bump mapped in a window at 1600x1200x32bpp) without geom accel. 3DLabs' long awaited Permedia 3 will also have some kind of hardware bump. IMHO this is a relatively flexible feature -- you could do a lot with it. It remains to be seen how flexible nVidia's lighting and geom turn out to be.

    I'll be impressed if D3D ever delivers real hardware geometry benefits. We have yet to see a single benefit of DX6 over DX5 (not screwing with the fp control word especially) actually work. I'm highly suspicious of anything MS sez.

    So what about the remaining behemoth, 3dfx? Their Voodoo4 is supposed to be an extremely high fill card (fill has always been their hallmark). It may not support any more hardware features (e.g. bump, lighting, geom accel), but it will fill like crazy. It's supposed to do full screen anti-aliasing ... 3dfx talked about putting a geometry accelerator on V4 but I believe they backed off from it. Voodoo4 is however still an SST, and therefore still a true descendant of the original Voodoo chipset conceived as a flexible, long-term solution for both PCs and arcade games.

    I'm eagerly awaiting the new generation. But I expect the real crazy stuff to start happening in the following generation ... it may be finally time to kill some very old paradigms in 3d hardware...
  • The obvious question from one who doesn't follow the 3-D chipset world closely: what's 3dfx's answer to this chipset, or has nVidia kicked their butts?


    The Voodoo 4 will be coming out around Christmas, and it will have hardware geometry as well. Rumour has it that it too will be at 0.22 micron instead of 0.18. I don't remember the name of the chipset off-hand.


    S3 has also rolled out a new chip, with four pipelines and hardware geometry, at 0.18 micron. Check Sharkey Extreme for details.


    Also, I've heard some reports that the PlayStation II will beat the living daylights out of PIIIs loaded with the most modern 3D accelerators of the day. Even with this kind of chip, and most likely other chips to follow from nVidia's competitors, does this still hold true? Will the PlayStation II live up to the hype?


    No, but it won't sink either. See my previous response on this subject (check my user info to find the post).

  • Get a load of the supplied pictures. Gee, low poly models sure don't look that impressive when you DON'T TEXTURE THEM ;P or do you really think the treads on the second tire are, or ought to be, geometry?
    So it pushes 15 million triangles a second and a PIII only does 3.5 million. Well, where do they come from? Exactly what is used to store these geometries? I'd say that if they went with a rather Voodoo Glide-esque approach of putting all the geometries on the card and then giving minimal commands to position, scale and rotate them, then it could be significant. This, however, would be pathetically incompatible with all existing games -- and frankly the bus is the bottleneck: that PIII is probably pretty comparable for doing transforms, it just cannot get them across the _bus_ as fast as a cached copy of the geometry on the card can.
    I saw what appeared to be a statistic that implied that games might see a 10% improvement in framerate. That, I think, is closer to the truth.
    Sorry guys- you've been Hyped.
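
    To put rough numbers on the bus argument (the 32 bytes per vertex is just a guess at a typical format -- position, normal, one set of texture coordinates -- not anything nVidia published):

        /* Bus traffic if the CPU transformed 15M triangles/sec and shipped the
           results over AGP.  Vertex size is an assumed typical format. */
        #include <stdio.h>

        int main(void)
        {
            double tris_per_sec   = 15e6;
            double verts_per_tri  = 3.0;   /* worst case: no vertex sharing or stripping */
            double bytes_per_vert = 32.0;  /* pos(12) + normal(12) + uv(8), roughly */

            double gb_per_sec = tris_per_sec * verts_per_tri * bytes_per_vert / 1e9;
            printf("required bus bandwidth: %.2f GB/s\n", gb_per_sec);
            printf("AGP 2x peak is roughly 0.5 GB/s, AGP 4x roughly 1 GB/s\n");
            return 0;
        }

    Under those assumptions the raw vertex stream alone (about 1.4 GB/s) would exceed even AGP 4x, which is the poster's point: keeping geometry on the card and transforming it there sidesteps the bus entirely.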
  • Um.

    You do know what the word "hyperbole" actually means, don't you?

    --
  • by Christopher Thomas ( 11717 ) on Tuesday August 31, 1999 @05:28AM (#1714619)
    This is indeed a nice chip; however, it has competition.


    3dfx is rolling out another chip, as people have been talking about for a while. It is rumoured to be at 0.22 micron too, and will have hardware geometry processing.


    S3 already rolled out a new chip - at 0.18 micron. It too has four texel engines and hardware geometry processing.


    IMO, the S3 chip is actually the one to worry about. Architecture may or may not be great, but at 0.18 micron it may outperform nVidia and 3dfx's offerings just on linewidth. ATI did something similar when it rolled out the Rage 128, if you recall.


    What I'm waiting for is the release of the GeForce or (insert name of 3dfx's offering here) at 0.18 micron. However, I'll probably be waiting a while.

  • Great, now when are they gonna start supporting Digital Flat Panels? I've had mine for 6 months now, and still stuck with the shitty ATI Xpert@LCD card that it came with.

    I was pretty excited to hear the V3 3500 supported DFP but they got rid of it when it changed over to the 3500TV. Anyone know of any plans for an upcoming video card to support DFP?
  • Up till this point, fill rate has been all that mattered for 3D chip makers, allowing higher resolutions, more rendering passes (for different visual effects) and higher frame rates. This is the first (consumer level?) chip to add transformation and lighting to the 3D chip, thus offloading these duties from the CPU.


    "Consumer level" is correct. High-end graphics workstations have been doing this for several years; in fact, the entire OpenGL pipeline has been in hardware for quite a while. Check out 3dlab's high-end boards for examples, or take a look at their competitors. These tend to be 64-bit PCI boards in the $2,000-$3,000 range.


    The consumer graphics manufacturers have been making noise about using geometry processing for a while now, but have only recently gotten around to it. In that market, yes it could be called revolutionary (in that it substantially changes game design).

  • I would assume that Diamond would be quick to jump on this. They've been partners with nVidia for some time, and this chip looks too big to pass up, especially with 3dfx out of the picture.
  • Having read another article (the one on rivaextreme ... pretty good article), I can add a few more comments.

    GeForce has environment mapping (IIRC so does Permedia 3) but not bump mapping.

    It can do 8 free hardware accel'd lights ... imho this is kind of limiting ... we'll see.

    A Voodoo3 on a fast machine under Glide can handle about a 10-12kpoly scene, lit and textured w/effects and physics, running at about 20-25 fps on a 450A. I'll be very impressed if GeForce can do twice that -- 25kpolys at 25fps, or about 500k real polys/sec (BTW a Voodoo3 under ideal conditions w/out features can do 500k "fake" polys/sec ... I again expect GeForce to better that ...).

    But 15million polys/sec is the kind of bloated number that usually comes out of graphics shops. Don't believe it for a second.

    As for 100kpoly models lighted w/fx running smoothly, I'll believe it when I see it.

    if DaveS or DaveR wants to correct me on any of this stuff, go right ahead guys...
  • Actually, I was there last week, and I can say that it runs kick-ass fast fully textured and environment mapped. They didn't provide us with any of those screenshots unfortunately, only the shaded models. But if you check back at my site in the next couple of weeks, I should have benchmarks on fully lit, textured, and mapped models. Did you check out the tree? It's freaking amazing.
  • There are all kinds of reasons why Nvidia is choosing not to support 3dfx's anti-aliasing and Matrox's bump mapping. The most significant of these is lack of a common API. To support bump mapping on a G400MAX, you write Matrox code. To support anti-aliasing for the Voodoo4 (which, btw, kills frame rates), you write 3dfx code.

    Sound familiar? Back to the days of 3d-acceleration in games before DirectX?

  • It says this here card can only do 15 million triangles per second. Playstation 2 can do 75 million.


    Not if it can only transform 36 million polys per second, it can't (sustained transformation figure from an older slashdot article).


    Based on all of the other numbers in that article, I suspect that they dropped a decimal point in the "75 million polys rendered" figure. That, or they're talking about flat-shaded untextured untransformed polygons.

  • Yes, but how well can the PCG systems do at movie speeds: on the fly rendering at 24 frames/second? nVidia is saying (and it remains to be seen how accurate their marketing info is) that this is image quality that can move, not just a pretty static image.

    ----
  • by aheitner ( 3273 )
    Microsoft writes each version of D3D by asking the manufacturers "What features do you guys need?" and then writing them in.

    There were no 3D games before D3D. No one had cards.

    Just 'cos a card supports D3D doesn't mean you can assume your program will work right. You still have to test and debug each individual card. This is the voice of experience :)

    Matrox's bump will be in D3D i'm pretty sure...

    People use Glide rather than D3D 'cos it's way faster. Speed really is all that matters ...
  • That is a pre-announcement typical of what Microsoft would do... if you don't have a product to match, simply announce a few months early.


    Do you see GeForce boards on the shelves?
    Do you see Playstation 2s on the shelves?


    They've been conversation topics for months, but all either has now is alpha test hardware. It becomes difficult to see what point you are trying to make, given that.

  • Some of us are interested in the world of computing and what is happening in it, not just with Linux... And Slashdot does a marvelous job covering cutting edge technologies for both linux and non linux applications. Since right now Windows is the dominating computing force I think it would be ignorant and foolish to exclude the technologies that are emerging just because they are marketed to Windows users. How about going to your preferences page and applying that prejudice on the "Pretty Widgets" to filter Hemos. Retard. "Cynic?? Who's a cynic?"
  • Video consoles like the Playstation have no chance to keep up with the fast evolution of PC hardware.

    Where they excel is the ease of use and installation and the homogeneity of software design.

    However such applications, like Final Fantasy VII, have been ported to the PC too.

  • Well gee...to filter stories you would have to get a login...but that would ruin that wonderful cover of AC that you hide under. Besides that, your post is just lame anyways. nVidia will release Linux drivers for this thing more than likely, considering they did with their past chips (albeit a bit late). On top of all that, I haven't responded to any good flamebait lately, and I'm so ill right now that you were the perfect dumbass to unleash on.
  • I feel sorry to tell you that Glide no longer has the massive performance advantage over D3D. Yes, D3D is still pretty hard to program for, and yes, it is still slightly slower (20-30% in fact), but it is not like the old days when D3D would run at 30 fps and Glide would do 70 fps. Even Turok, which is only DX6, gets only 15 fps more in Glide than in D3D. The proof is in the games. Unreal is just as playable with D3D as it is under Glide.
  • > There were no 3D games before D3D. No one had cards.

    I do believe Glide was out before it, and DOS games (like the original Descent?) used it. Wasn't that before D3D?

    There were people with the cards back then, not the millions there are now, but enough for accelerated 3D to be implemented in more and more games.

    .Shawn
    I am not me, I am a tree....
  • For those of you who doubt NV10's performance improvement, take a look at the workstation front. Even with dual PII 400s, a card with a Gamma geometry processor is much faster (in medium-textured scenes) than the Evans & Sutherland chipsets, which, although they have a higher fill rate, don't have geometry acceleration. A Playstation kicks a P100's ass even though its main CPU is 1/3 the speed. That's because the Playstation has HW geometry acceleration. In any case, HW acceleration will also benefit OpenGL; now trueSpace will run faster than ever!!!
  • by Chris Johnson ( 580 ) on Tuesday August 31, 1999 @06:05AM (#1714641) Homepage Journal
    That's easy- I presume you're rotating the tree realtime? _All_ that requires is that the tree can be cached on the card, which is then issued commands.
    Why only one tree? What program, exactly, did this? There are some very serious questions to ask about demos like this. I, too, write software and try to come up with impressive claims. I can legitimately say that I'm writing a game with a ten million star universe with approximately sixteen million planets, of which the terrestrial ones (hundreds of actually landable-on planets) have terrains the size of the earth at 3 dots per inch for height information.
    This is misleading as I'm doing it _all_ algorithmically- it's fair to ask 'well, what does it work like?' but nonsensical to imagine that somehow I'm messing with kajilliobytes of data. It's faked. (I have stellar distribution whipped, am working currently on deriving star types, slightly modified according to actual galaxy distributions- main current task is to come up with RGB values for the actual colors of star types, as this is more like white point color temperatures than anything else- very close to updating my reference pictures.)
    At any rate, will you believe me when I say that this reeks of demo? It wouldn't be that surprising if they used _all_ the capacity of the card to do that one tree. _I_ would. Might that be why there is only one tree and _no_ other detail at all (one ground poly, one horizon)?
    More relevantly, what was used in doing that? If it was vanilla OpenGL, then okay, I concede this is very big. If they had to write their own software to do that, then you have a problem. Here in Mac land (also LinuxPPC land ;) ) we have a comparable problem- there are 4X as many Voodoo cards as anything else, because of availability, and we're getting 'em off you PCers who are buying nVidias ( ;) dirt cheap, too! ), but Apple only supports ATI- so many important development tools are _not_ supporting 3dfx or Glide, and we are once again suffering the recurrent Apple disease of Thou Shalt Use Only One Solution- in this case, ATI 3D acceleration. And I personally like 3dfx rendering better than even _TNT_, but this helps me not. (reading User Friendly I have been!).
    You guys are looking at exactly the same situation here. Be damned careful. If you go with a proprietary technology you will fragment, and your developers will be faced with tough choices and could end up writing nVidia-only much as some developers in Mac land are writing ATI-only. This is bad. Do I have to explain why this is bad?
    Let's get some more information about exactly how you operate this geometry stuff before getting all giddy and flushed about it, shall we? I don't see how software will use it without rewriting the software. And when you do that- it's an open invitation for nVidia to make the thing completely proprietary and lock out other vendors.
    Or maybe they'd give the information out to people at no cost and not enforce their (presumed) patents for a while, only to turn around a year from now when they've locked in the market, and start bleeding people with basically total freedom to manipulate things any way they choose? But of course nobody (GIF) would think (GIF) of ever doing (GIF!) a thing like (GIFFF!) _that_... ;P
  • > I'm eagerly awaiting the new generation. But I expect the real crazy stuff to start happening in the following generation ...
    > it may be finally time to kill some very old paradigms in 3d hardware...

    I'd be interested to hear your thoughts on what might replace the current paradigm. Are you thinking voxel-based rendering techniques?

    Am I wrong when I state that the amount of research (even recent!) devoted to rendering techniques based on the current paradigm dwarfs the effort put into researching more innovative approaches to rendering?
  • There were no 3D games before D3D. No one had cards.

    That's funny... I remember playing Wolf3D and DOOM before Direct3D was even a glimmer in Microsoft's eyes.
  • > Even Turok which is only DX6 only gets 15 fps in Glide over D3D.

    15fps can mean a hell of a lot. If you don't have high-end everything, then 15fps can mean the difference between 15fps and 30fps. _you_ can try playing Q3test at 15fps :-)

    Of course, when you're getting 120fps, 15fps means next to nothing.

    .Shawn
    I am not me, I am a tree....

  • Posted that last one when I was only halfway done flaming:

    There were no 3D games before D3D. No one had cards.

    That's funny... I remember playing Wolf3D and DOOM before Direct3D was even a glimmer in Microsoft's eyes.

    People use Glide rather than D3D 'cos it's way faster. Speed really is all that matters ...

    You've got to be kidding. Would you mind explaining how one API can be faster than another? Sure, a driver or hardware can be faster, but an API is just a specification. People use whatever API will get the job done. If the job is to only support 3DFX cards, they use Glide. If the job is to support a number of cards, they use D3D. If the job is portability, they use OpenGL.
  • What ????

    I seem to recall picking up my first Voodoo1 card, downloading GLQuake and having a ball. D3D wasn't even in the picture.

    Not to mention Duke3d, Doom, Doom2, Wolf, Triad etc

    Short memories, or just mild drugs?
  • Wrong! Diamond will still make other boards!

    I bet they will announce a GeFORCE board soon.
  • According to what I've read on the net until now, you should see boards on the shelves as early as late September!
  • The Voodoo4 (or what it will be named) will probably be 0.22 micron too...

    Why?

    Because (AFAIK) the TNT2 and Voodoo3 are produced at the same "plant"...
  • If you're a smart developer, using OpenGL and letting it handle transforms, you get the speed-up for free. Those lost souls using D3D will have to rewrite yet again. The reason OpenGL is always so far ahead is that all these "innovations" are really just moving workstation class features to the consumer market. OpenGL has been used on those workstations for years. At least when nVidia borrows a workstation feature they don't rename it and claim to have invented it (Accumulation buffer -> "T-buffer").
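
    For what it's worth, here is a minimal sketch of what "letting OpenGL handle transforms" looks like from the application side (fixed-function calls only; whether they run on the CPU or on a T&L chip is entirely the driver's business):

        /* Fixed-function OpenGL: the app supplies matrices and untransformed
           vertices; a hardware T&L driver can accelerate this unchanged.
           Fragment only -- context/window setup omitted. */
        #include <GL/gl.h>

        void draw_object(const float *verts, int vert_count, float angle)
        {
            glMatrixMode(GL_MODELVIEW);
            glPushMatrix();
            glRotatef(angle, 0.0f, 1.0f, 0.0f);   /* transform specified, not computed, by the app */

            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, verts);
            glDrawArrays(GL_TRIANGLES, 0, vert_count);
            glDisableClientState(GL_VERTEX_ARRAY);

            glPopMatrix();
        }

    A D3D title that does its own vertex transforms in software, by contrast, has to be reworked before the hardware can take over that stage.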
  • For any out there that have been complaining, please allow me to shed a silent tear. The endless masses of consumeroids have their morning tranquility of force fed corporately altered news interrupted by video card enthusiasts ( or paid off placard wavers ). Rats.
  • Actually, he's right - S3 acquired Diamond, so once Diamond sells off its remaining stock of nVidia Riva-based and 3Dfx Voodoo(2|Banshee)-based cards, they're gonna be strictly making S3 Savage-based cards.
  • 1. Probably - NVIDIA makes great drivers, and I bet they'll come with Linux drivers (of some kind) too.

    2. I've seen "late September" mentioned!

    3. Really great framerates at 1024x768 and above + it will look beautiful - probably much better than with current cards.

    3a. Personally, I am looking more forward to Team Fortress 2 - if Valve can do the same with multiplayer as they did with singleplayer (Halflife) - it's going to be so much fun!
  • The significance of putting geometry acceleration on the GPU is complex, certainly more complex than you've made it out to be. Without it, geometry data is retained in system memory and processed by the CPU -- all vertex data is transformed, culled, scaled, rotated, etc. by the processor before being sent to the card for rasterization/fill. With full geometry acceleration, the GPU handles all of those tasks, meaning that the data sent to the card is often redundant and can be easily cached, and that the CPU no longer performs those tasks (and will instead be freed to perform software tasks like scene assembly and AI).

    As far as proprietary natures go, your post gets _way_ ahead of itself. The GeForce 256 will be accessible via OpenGL and DX7. Important extensions to API functionality are performed via review by the ARB and by Microsoft DX version revs. There is no indication that NVidia will deal with the additional capabilities of this chipset in a manner any different from the way multitexturing extensions were handled.

    In any case, "how you operate this geometry stuff" is via the OpenGL API, which has been "operating this geometry stuff" in higher-end equipment for some years now. The ability to render high-polygon models in real-time is truly a revolution; not only are texture-mapped low-poly models unsuitable for a wide rage of visualization tasks, they are simply inferior to high-poly models in terms of realism, flexibility, and reusability. From a development perspective, it has little or nothing in common with GIF patent/licensing issues.

    One last note: if this "reeks of demo", there's a very good reason for it. It _is_ a demo, designed to demonstrate the capabilities of the chipset. It is neither a benchmark nor a source-level example of _precisely_ how the card behaves. You'll likely have to wait for the silicon to ship before you have either. Whether or not "vanilla OpenGL" was used for the demo is irrelevant, since OpenGL is an API and does not specify a particular software implementation. Implementation is the purpose of _drivers_.

    MJP
  • Wolf3d, Doom, Duke3d, etc. weren't 3D games, per se. They were "2 1/2D" -- they used sprites to imitate 3D.

    > There were no 3D games before D3D...

    Quake was 3D. It was also a DOS game. I don't think there was ever a D3D for DOS... :-P And glQuake used Glide/Voodoo long before D3D...

  • Is there a way to filter out Hemos stories?


    Yes. Get a login/password, click "preferences" and filter away.

  • I'm still waiting for real-time ray-tracing. Once you look at a ray-traced image, every time you look at polygons you'll say "Yuck, what's that?" I have to say, though, that those textures and bump maps almost make up for it. Go get POV-Ray [povray.org] and see what I mean.


  • One of the pages there, I think it is page 3, sends Netscrape (Redhat 6.0) into an infinite loop that pegs a PII 400 at 100% CPU until I kill netscape. It's a pity. I wanted to read that article. Guess I'll fire up lynx.
  • > It can do 8 free hardware accel'd lights

    I believe the OpenGL spec only lists 8 as a minimum. Of course games/apps can use any number of "virtual" lights.

    If you're a programmer, check out http://www.opengl.org/Documentation/Specs.html
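
    A sketch of what that looks like in code -- query the implementation's limit and enable up to that many; anything beyond it is the application's problem (e.g. picking the N nearest lights per object). This is generic OpenGL, not GeForce-specific:

        #include <GL/gl.h>

        /* Enable up to 'wanted' hardware lights, bounded by what the driver exposes.
           The GL spec guarantees GL_MAX_LIGHTS >= 8; a given chip may expose exactly 8. */
        int enable_lights(int wanted)
        {
            GLint max_lights = 0;
            glGetIntegerv(GL_MAX_LIGHTS, &max_lights);
            if (wanted > max_lights) wanted = max_lights;

            GLfloat diffuse[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
            glEnable(GL_LIGHTING);
            for (int i = 0; i < wanted; i++) {
                glLightfv(GL_LIGHT0 + i, GL_DIFFUSE, diffuse);  /* GL_LIGHTn constants are consecutive */
                glEnable(GL_LIGHT0 + i);
            }
            return wanted;   /* lights beyond this must be faked by the app */
        }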
  • No way... every true fan knows it's pronounced "Gatchaman" ;-)
    ----
    Dave
    All hail Discordia!
  • I noticed you said 16 bit video. Does this mean it will support up to 16 bits per channel color,
    aka RGB48A, the way MNG does? That would be incredibly awesome. (Note for readers: the current 32-bit video, aka RGB24A, only supports 8 bits per channel sampling and playback. Broadcast television supports up to 10 bits per channel sampling and playback; true 35mm film is closer to 96 bits, or 32 bits per channel. This is independent of things like gamma and transparency.)
  • Now what modern hardware gets 30 fps on modern games? I am talking about cutting edge games on cutting edge hardware. Turok gets 65+ fps in D3D, and 75-80 fps in Glide (on Voodoo 2s, no less), so the 15 fps there is not a big thing. DirectX is not the pathetic thing we used to make fun of anymore. As I pointed out in my post, it has only a 20% performance hit on average. And I would think that open-obsessed Linux users and Glide just wouldn't go together. 3Dfx is infamous for proprietary stuff, in consumer and arcade markets. In the beginning people wanted to license Glide but 3Dfx wouldn't let them. And remember the "Glide Required" fiasco where 3Dfx put Glide Required stickers on D3D and OGL games?
  • The TNT has very decent OpenGL drivers in my experience (under Windows anyway; don't know about the NT drivers). In any case, the reason the Gloria XL (Permedia 2, I think) is so fast in OpenGL is because it has a Delta geometry coprocessor onboard with the Permedia 2 chip. Besides, people who use Maya ($10,000) can afford a high end OGL card (under $2000 these days). Plus, unlike 3Dfx, nVidia has a full ICD, so it IS theoretically compatible with Softimage, etc.
  • by Anonymous Coward
    GeForce has hardware bump mapping; its *real* bump mapping uses perturbed normals and dot products, not faked tricks. Perturbed normals look better than even environment mapped bump mapping.

    Secondly, D3D is a non-issue. First of all, DirectX7 is *fast*. Second, the games most likely to take advantage of geometry acceleration first will be OpenGL based. Glide sucks. OpenGL is way more open and cross platform.

    Third, 3dfx can't defeat everyone in fillrate. They are bound by the speed of available RAM, which is maxing out at 200 MHz. All they can do is start using multiple pixel pipelines like NVidia and Savage.

    But to beat NVidia, they'd have to use a 512-bit or 1024-bit architecture (8 or 16 pipelines) which unfortunately, is difficult with the current manufacturing process (.22 or .18)
    So I'm sorry to say, the Voodoo4 is not going to kick anyone's butt in the fillrate department.

    (and super-expensive Rambus ram won't help them either)

    Fifth, the triangle rate increase is 3-4x up to 10x as much, and many games like Team Fortress 2 or Messiah are using scalable geometry (the original 50,000 polygon artwork is used and scaled dynamically based on processing power and scene complexity).

    Sixth, the hardware lights are in ADDITION TO regular lightmap effects and will give much better dynamic lighting effects than Quake's shite spherical lightmap tweaking technique.

    3dfx is incompetent and no longer the market leader, and their anti-32bit color, anti geometry, anti-everything-they-cant-implement marketing is tiresome, along with 3dfx groupies who continually praise the company for simply boosting clockrates on the same old Voodoo1 architecture.

    Both S3 and NVidia have introduced cards with the potential to do hardware vector math at 10x the speed of a PentiumIII, without the need to ship all the 2d transformed data over the PCI/AGP bus, and they have done it at consumer prices.

    I'm sorry, but increased fillrate doesn't do it for me anymore. It's still the same old blocky characters but at 1280x1024. They look just as good if you display them on a TV at 640x480. What's needed is better geometry, skeletal animation, wave mechanics, inverse kinematics, etc everything that geometry acceleration allows you to do (NVidia, S3, Playstation 2)
  • I see a lot of people disappointed in the lack of bump mapping. I wonder, however, if you will need bump mapping with this card. Just make your models with more polys. I think that is why nVidia left off this feature: they want people to make higher poly count models instead of cheating with a bump map. Will it look better? We shall see.
  • Actually the term "2 1/2 d" refers to the fact that although Wolf, Doom and Duke may have used 2D sprites for their characters and items, the sprites were projected into a rendered 3D world.

    I agree with your point, which is still the same: Microsoft didn't invent 3D :)
  • I'm still waiting for real-time ray-tracing. Once you look at a ray-traced image, every time you look at polygons you'll say "Yuck, what's that?" I have to say, though, that those textures and bump maps almost make up for it. Go get POV-Ray and see what I mean.

    I'm asking, not baiting here ... how do you animate ray-traced images? Re-trace every frame?

  • A 3D geometry accel driver will be an entirely different beast from the current TNT drivers. OpenGL drivers are very, very difficult to write; most companies spend years with large engineering teams (working with the designers and VHDL programmers) to write decent drivers.

  • Yep, you would have to re-raytrace the whole scene; there might be some optimisations, but not much...
    BTW, just like you re-render the whole scene every frame in a game -- Quake etc. send the whole scene to the card every frame.
  • Microsoft may not have invented 3D, very obviously - but they did standardize it. As much as everyone would love it if games used OpenGL, for the most part, they don't (Quake being a notable exception). 3D gaming didn't take off until Direct3D brought 3D to the masses.

    It's too bad we couldn't have made a solid, open 3D game API spec before MS gave the world its proprietary version. OpenGL is portable, but writing an OpenGL driver is pretty much a bitch. Direct3D may be annoying to program to, but the drivers supposedly aren't quite so hard to write.

    It's really too bad we don't have something like Glide (really easy to program to), but open, not 'only 3dfx' crap.

  • Better check the spec list again buddy. Bump mapping is there. In fact there are 3 different types of bump-mapping. "Will it look better?" The specs speak for themselves. This thing is revolutionary.
  • forgot :-)
    But raytracing is VERY parallelizable -- you can have one CPU per pixel. I believe a company called "division" (.co.uk) did something in this area.
    They called it "smart memory".
    Realtime raytracing could be the next big thing in computer graphics, but games would need to be totally rewritten; you can't use OpenGL anymore, because raytracing can use primitives like spheres, planes, cylinders and, yes, triangles, and OpenGL doesn't support this.
    It would require some major hardware advances: if you want to realtime raytrace a 640x480 image using one CPU per pixel, it would require 307,200 CPUs that can do some very fast floating point operations.

    btw I'm very interested in realtime raytracing, but I think it'll be a while before it's a reality.
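
    To make the "primitives" point concrete, this is the kind of analytic test a ray tracer runs per pixel -- intersecting a ray with a mathematical sphere, something a triangle rasterizer has no equivalent for (just a textbook sketch, nothing vendor-specific):

        #include <math.h>

        typedef struct { double x, y, z; } V3;

        static double dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
        static V3 sub(V3 a, V3 b) { V3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }

        /* Returns 1 and the nearest positive hit distance in *t if the ray
           origin + t*dir (dir normalized) hits the sphere, 0 otherwise. */
        int ray_sphere(V3 origin, V3 dir, V3 center, double radius, double *t)
        {
            V3 oc = sub(origin, center);
            double b = dot(oc, dir);
            double c = dot(oc, oc) - radius * radius;
            double disc = b * b - c;
            if (disc < 0.0) return 0;          /* ray misses the sphere */
            double s = sqrt(disc);
            double t0 = -b - s, t1 = -b + s;
            *t = (t0 > 0.0) ? t0 : t1;         /* nearest hit in front of the origin */
            return *t > 0.0;
        }

    Multiply that by 307,200 pixels, the reflection and shadow rays behind each one, and a 30 fps target, and the hardware gap described above becomes obvious.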
  • Whoa! You have a 3D card in your sub-$400 box? WHY?
  • yep you would have to re-raytrace the whole scene, there might be some optimisations, but not much .. btw. just like you would re-render the whole scene every frame in a game, quake etc. send the whole scene to the card every frame.

    It's been years since I've played with POV, but given the amount of time it took to trace relatively simple constructs wouldn't this be a bad idea? You're not just re-displaying a 3 dimensional object from a different perspective, you're recreating the object each time the perspective changes.

    Caveat: I have little clue and I'm looking for enlightenment. If I'm not making sense please correct me!

  • What? Has S3 lost their minds? Don't they know they can't make good video chips? I mean, sure, S3 makes a decent 2D video card, but come on, why are they trying to throw themselves into the market instead of just packaging someone else's 3D chip on their card like every other company does?
  • --
    It's been years since I've played with POV, but given the amount of time it took to trace relatively simple constructs wouldn't this be a bad idea? You're not just re-displaying a 3 dimensional object from a different perspective, you're recreating the object each time the perspective changes.
    --
    Yes, you are correct; you COULD re-raytrace just the object that moved. However, the trick is to figure out which pixels are affected by the object's movement.
    For example, the object could have been casting one or more shadows; those "shadow" pixels need to be re-raytraced. Also, if you could/can see the object before/after the move in some other reflective surface, you have to re-raytrace those pixels as well... etc.
  • btw I'm very interested in realtime raytracing, but I think it'll be a while before it's a reality.

    ;) Thanks. After reading your original post I was all "Wow! That's possible!? Great!". You've just popped my bubble too. ;)

  • It's perfectly possible to do 3D in software. It's even somewhat more interesting, since you're not bound to any 3D card's paradigm. Software is where you get cool stuff like true voxels etc...

    Glide probably wasn't out before DX3, but DX3 was pretty much useless (only a very minimum 3D API) so Glide may have beaten anything useful, tho not by much...

    ------------------------

    Replying to another comment:
    No, an API cannot be faster. But an implementation can. And an API's design can affect an implementation. In any case, Glide (the implementation) is a hell of a lot faster than D3D (the implementation), as per my original comment.
  • People have been using the ocular graphics processors for millions of years

    Your people are not my people, apparently... My people have not been around that long..

    Apes among us :)

    What is the fill rate of the human eye??

  • I know nVidia had a public contest to pick the name for the NV10 chip. And GeForce is the result???

    Sheesh... what's next - a website about the chip named GeSpot?
  • > It's really too bad we don't have something like Glide (really easy to program to), but open, not 'only 3dfx' crap

    That's a common but very misguided opinion. Glide is a rasterization-only API and is pretty much rendered obsolete by the addition of transformation and lighting to the hardware. Of course Glide could be extended to encompass this part of the pipeline as well, but what would be the point -- OpenGL already does that. With DX7, D3D will get there too.
  • Actually, the PS2 was always "history", as Nintendo's Dolphin will most likely beat it in any hardware spec. Howard Lincoln has said that the Dolphin will be at least on par with, if not better than, the PS2, and Matt Cassimassia(sp) of IGN64 has seen (or at least heard about) the Dolphin specs from a reliable source, and he guarantees the Dolphin will be more powerful than the PS2. As of yet, Nintendo has made no official announcement on the specs, so that Sony will have to stay committed to its hardware design (so it can't suddenly change it when they realize they will be overpowered), and so people continue to buy the N64 for a little while longer. So the whole point of this is that the PS2 is not the end-all of videogame systems just because it was featured on Slashdot. So in the way the original poster asked this, yes, the PS2 is and always has been history. As for whether the Dolphin beats the new nVidia card, no one knows, as no specs are out to the public yet.
  • The reason why is no video game out there will support the huge amounts of triangles unless every card can handle them. The game would have to be practically rewritten from scratch for the higher triangle count. As a programmer I can't find any way around this, because the meshes have to be written from scratch and whole levels would have to be rewritten just for use on this card.

    First off, I don't take it as a given that just because you can't figure out a way to represent the meshes with variable levels of detail, no one can. In fact, it's my understanding that Quake 3 implements curves in a way that allows them to be retessellated to higher polygon counts depending on the graphics card and speed of the system. Second, even if a company didn't want to implement something like that in their engine, it's not inconceivable that multiple environment resolutions could be placed on the game media. Many games already come with low and high quality sound samples to account for the wildly varying quality of sound cards out there.



  • I suppose you aren't familiar with the game 'Messiah'. They've written a new engine that performs real time deformation and tessellation to keep whatever hardware you have running at 100%. When too many polys are on the screen at once and the framerate starts to drop, some of the models are tessellated down. When a single object takes up a large portion of the screen, its polygon count shoots up. That's the only game I know of that can already use the power NV10 offers.
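
    For the curious, the control loop such an engine needs is conceptually simple; a crude sketch (the target frame time and step sizes are made-up tuning values, and real engines are far more sophisticated):

        /* Adjust a global tessellation/detail factor from measured frame time.
           All constants are invented tuning values, shown only to illustrate
           the feedback idea behind adaptive tessellation. */
        static float detail = 1.0f;               /* 0.25 .. 1.0, scales model poly counts */

        void update_detail(float frame_ms)
        {
            const float target_ms = 33.0f;        /* aim for roughly 30 fps */
            if (frame_ms > target_ms * 1.1f && detail > 0.25f)
                detail -= 0.05f;                  /* dropping frames: coarsen the models */
            else if (frame_ms < target_ms * 0.8f && detail < 1.0f)
                detail += 0.02f;                  /* headroom: add polygons back */
        }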
  • Recently a few cheaper boards with geometry and lighting acceleration have been released, most notably 3DLabs' Oxygen GVX1, although it costs up to $1000. It will be interesting to see how the GeForce 256 compares to the GVX1. The TNT2 sucks at OpenGL in comparison to such cards.

    Darth Shinobi - Champion of Lady weeanna, Inquisitor of CoJ
    "May the dark side of the force be with you"
  • Err...I'm not sure why I'm replying to this particular comment, perhaps because you're spreading misinformation (even if you aren't serious).

    Even if no game uses hardware lighting right now -- usually because games want realistic shadows, and you can only have 8 lights (or so) in a scene at a time -- the point is *now they can*.

    In reality it's the transformation hardware that's going to speed everything up. And not only that, the CPU is going to have nothing to do but model physics and AI now!

    WOOHOO!
  • Every current chipset supports bump mapping via crappy emboss algorithms. What it comes down to is image quality -- we'll have to wait and see when some more pics come out of nVidia.
  • From what I heard about the Savage 2000 (http://rivaextreme.com/index.shtml#10), that's a pretty sweet chip too. Anyway, at the following link you will find a story about why the new generation of 3D cards won't live up to the hype:
    http://fullon3d.com/opinionated/
  • Didn't DEC demonstrate an Alpha system doing
    near real time ray-tracing? I think it was
    around the time the Compaq merger was finalized.

    Man I wish I had the link.
  • I think the people who are talking this down as something that is not revolutionary should read a good full review first. Check out:

    The review on this tweak3d mirror [explosive3d.com]

    Josh

  • Yes, but their current (annoyingly obfuscated and therefore unfixable) drivers have very poor performance. It's really a shame they don't take a clue from Matrox: if they released working code AND left it unobfuscated AND released specs, I would buy from them until either they go out of business or I die. So far people are releasing either code or specs, and the code is all obfuscated and therefore unserviceable.
  • The N64 is also more powerful than the PSX (except for CD vs. Cartridges). The N64's lackluster performance can be attributed to one thing- Nintendo is a bunch of @#$@# that nobody wants to do business with (sorry, my experience). If Nintendo cleans up their act then MAYBE the system will do well, but if you are going to cheat and steal from the developers, they aren't going to develop on your system, no matter how kewl it is.
  • Of course Glide is faster. It's written for a single hardware architecture. If MS optimized DX7 for nVidia and only nVidia architecture, it'd be faster than Glide. You're comparing apples to oranges. If you want to compare API speeds, that's fine -- but you also need to consider their features and support (Glide is dying, face it -- it won't happen right away, but developers are moving away from it). If you want to compare 3DFX and NVIDIA and Matrox cards, you need to compare them running a common app under a common API. Saying a 3DFX card is faster under Glide than the nVidia is under DX is meaningless.
  • DFP support depends on the board manufacturer, not the chip maker (although they are the same in 3dfx and ATIs cases). I have seen some TNT2 cards that claimed to have DFP support, can't verify that since I don't have one..
  • Do you have a Playstation 2?
    Do you also have a GForce256?
    Do you have actual numbers to prove your point?
    Do you REALIZE just how fast 10x really is?

    Didn't think so.

    Please, don't post something if you have absolutely no clue as to what's going on.
  • How about full screen anti-aliasing? If it's in, this
    card will be hard to beat. If it's out, I'll pass.
  • I'm rather sick of all the hype, and such...

    WHEN DOES IT SHIP? My $ is on mid 2000 at best.

    I'm sick of all the 3D card companies doing this; Nvidia is no exception. (Case in point, TNT: delays, and its specs got downgraded quite a damn bit.) At least TNT2 is nice, plenty of selection. It's closer to what the original TNT specs called for, though, just overclocked.

    3dfx has delivered in the past with specs and inside timeframes, but oops, no 32-bit for V3, even though everyone wants/needs it.

    Matrox brings out the kickass G400... but just try to buy one, especially the dualhead [main selling point] and even more so the Max [the one that is always quoted in benchmarks]. (sarcasm) I guess there were so many damn reviews they ran out of stock. (/sarcasm) The Max is "9/9/99" anywhere you look for it (which is the THIRD date given so far). Oh yes, you could have preordered (overpaid) and *maybe* gotten one, but sheesh! Get real. The dualhead is a pain in the butt to find, the retail version even harder. Plus, expect to pay $30 more than you should.

    ATI? Bleh. Too slow. They sold 'fake' 3D chipsets in the past for too long for my taste anyway.

    S3? Bleh. Too slow. Too late.

    Anonymous Coward, get it? :)
  • While the original release of the nVidia drivers was obfuscated (run through the C preprocessor before the source was released), that was due to a lawyer popping up and causing trouble at the last minute. I was under the impression that the next release to XFree86 provided regular source code.

    Am I wrong?
  • I'm pretty sure that the treads *are* geometry -- look at the edge of the tread against the background.
    However this means that the second tyre has perhaps 50 times as many polygons as the first, not the 3-4 times that the chip *might* provide.
    So yes, the pictures are just hype.
  • I really dislike 3Dfx. Sure, their cards have always been with the pack leaders in terms of performance, but only *if* you use their proprietary, non-portable, unextensible Glide API. Their OpenGL performance is really bad. 3Dfx is extremely protective of their API, too. If you so much as look at it the wrong way -- lawsuit city!

    I really, really, *REALLY* hate proprietary APIs. It is like 3Dfx wants us back in the bad old days of DOS, where every program had to have its *own* drivers for every piece of hardware out there. If game XYZ didn't support your hardware, you were flat out of luck.

    Frankly, it looks to me like they started out as the market leader, but have since lost their edge, and are trying to keep their stranglehold on the industry by locking people into an API they own. (Hmmmm, sound familiar? *cough*Microsoft*cough*)

    No thank you.
  • Like I said in the other thread, although the characters and items were 2D sprites, the actual map you ran around in was drawn with an (albeit software) 3D renderer.
  • The reason why is no video game out there will support the huge amounts of triangles unless every card can handle them. The game would have to be practically rewritten from scratch for the higher triangle count.

    Obviously you haven't heard of implicit surfaces. You know, those things like B-splines, NURBS et al. Describe a surface by a series of control points and then tessellate according to your performance requirements. Start off with a low figure, or do benchmarking when first installing the game, then use this info to up the number of triangles until you hit the frame rate/quality ratio you want. Very simple, very old technique. I s'pose you haven't heard of the Teapot either.
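
    A toy version of the idea for a single cubic Bezier curve (a patch or NURBS surface works the same way, just in two parameters); the control points are fixed art data and only the segment count changes per machine:

        /* Tessellate a cubic Bezier curve into 'segments' pieces via de Casteljau.
           The same principle scales to surfaces: store control points, pick the
           tessellation level to match the hardware.  Illustrative sketch. */
        typedef struct { float x, y, z; } P3;

        static P3 lerp3(P3 a, P3 b, float t)
        {
            P3 r = { a.x + (b.x - a.x)*t, a.y + (b.y - a.y)*t, a.z + (b.z - a.z)*t };
            return r;
        }

        static P3 bezier_eval(const P3 cp[4], float t)
        {
            P3 a = lerp3(cp[0], cp[1], t), b = lerp3(cp[1], cp[2], t), c = lerp3(cp[2], cp[3], t);
            P3 d = lerp3(a, b, t), e = lerp3(b, c, t);
            return lerp3(d, e, t);
        }

        /* Emit segments+1 points; a faster card simply gets a larger 'segments'. */
        void tessellate(const P3 cp[4], int segments, P3 *out)
        {
            for (int i = 0; i <= segments; i++)
                out[i] = bezier_eval(cp, (float)i / (float)segments);
        }

    Same artwork on disk, more triangles on a GeForce-class card than on a Voodoo1 -- no per-card level data needed.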

  • GeFORCE 256 hardware is not alpha hardware.
    The actual chips are released now to board manufacturers.
  • 350 MHz RAMDAC. Nice.

    Question is, are there any monitors that support this to the fullest?

    My monitor's max resolution at 85 Hz refresh is 800x600, and I run at this resolution (75 flickers! To me, anyhow. 60 is just too flickery to use.) The video card has a nice 250 MHz RAMDAC and does plenty high refresh at high resolutions...

    Hmm. Maybe I need a monitor with longer persistence.
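
    Rough arithmetic on what a 350 MHz RAMDAC buys you (the ~32% blanking overhead is a typical CRT figure, not an exact spec, so treat these as ballpark numbers):

        /* Approximate pixel clock needed for a given CRT mode. */
        #include <stdio.h>

        static double pixel_clock_mhz(int w, int h, double refresh_hz)
        {
            const double blanking = 1.32;    /* assumed typical blanking overhead */
            return w * h * refresh_hz * blanking / 1e6;
        }

        int main(void)
        {
            printf("1600x1200 @ 85 Hz: ~%.0f MHz\n", pixel_clock_mhz(1600, 1200, 85));
            printf("1920x1440 @ 75 Hz: ~%.0f MHz\n", pixel_clock_mhz(1920, 1440, 75));
            printf("2048x1536 @ 75 Hz: ~%.0f MHz\n", pixel_clock_mhz(2048, 1536, 75));
            return 0;
        }

    By that estimate a 350 MHz RAMDAC still has headroom at 2048x1536 @ 75 Hz (~310 MHz), so for most of us the monitor gives out long before the RAMDAC does.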

  • Since Diamond bought S3, they're not making nVidia based cards anymore... they'd be competing with themselves.

    ----

"I am, therefore I am." -- Akira

Working...