Nvidia's NV20
Bilz writes "ZDNet UK has posted an article on Nvidia's upcoming NV20 video chip. According to the article, during complex 3D scenes the card performs up to 7 times faster than a GeForce 2 Ultra."
According to the latest official figures, 43% of all statistics are totally worthless.
Re:3D Realism is becoming dangerous. (Score:1)
I started using nVidia a long time ago, but... (Score:1)
Now I'm running a 3DFX Voodoo3 2000. It's fine. It's fast enough, I have full HW accelerated OpenGL under Linux, FreeBSD, BeOS (well, 4.5.2 in theory) and QNX. My next card HAS to be able to do all of that. Like 3DFX, I want full open source drivers or I will NOT buy the card.
Does Matrox's new card fit my criteria? Does the ATI? I know nVidia doesn't. So my next card won't be an nVidia. Plain and simple.
So, can any of you tell me what the status of the other card makers is?
Re:Developers will hit the wall sooner or later (Score:1)
Yeah, but hopefully what will happen is that there will become a market in rendered artifacts. So game developers will be able to go to a website and access a library of pre-built stuff, including textures, and just include it in their game.
I guess this would end up working in the same way as photo agencies - you'd get people doing nothing but contributing to these things, surviving only on the royalties from the use of their objects, and other people who dabble but occasionally create something worth re-using and adding to the library.
So when you make your game, you'd hit the web, saying 'I want a lamp, has to have on/off modes, must fit into Victorian era game' and up will come a list - sure, it'll take a while before a library has enough in it to make this possible, but once it does, you just populate your VR room much like heading down to IKEA to populate your RL room.
I like this idea.
~Cederic
Re:why such a fast RAMDAC? (Score:1)
I can remember going custom PC shopping with a friend in 1995, down Tottenham Court Road in London.
The guys in the electronics shops literally laughed at him when he asked for a 4MB graphics card - taunts of "You'll never need that, unless you're doing graphics for the movies" followed us down the street.
Of course, they also laughed at him for wanting as much as (gasp) 32MB RAM in the PC..
~Cederic
I really don't get it... (Score:1)
Frankly, I can understand that this may happen because the GeForce does not have enough processing power to hold up under more complex models. But how does the new chip get 7 times faster? If it is just 2 times faster on simple models, how does it get several times faster still on complex models? The basics of the model don't change, whether it has less or more complex detail. The basics will be processed by the same channels and by the same math in the chip. Or am I missing something?
Re: (Score:1)
Won't Go in Consumer Cards (Score:1)
Re:why such a fast RAMDAC? (Score:1)
Re:Still closed drivers (Score:1)
Geez... (Score:1)
just my $0.02
Yep, Verified. How does karma work? (Score:1)
Just an off-topic thought. Are there any provisions for limiting the number of posts from consistent troll or flamebait posters so as to decrease the noise? Maybe limit them to 1-2 posts per week until they consistently post a few comments above zero? Just a thought.
Re:it doesn't matter how great of card it is.. (Score:1)
Sorry, but this is incorrect. The specs have been released to VA Research, which has not yet finished the DRI driver. The specs were also released to Xig, which released a proprietary X driver. When the DRI driver is released, it will become part of XFree 4. The rasterization parts are already in there. If you want 3D support now, you have to get the Xig drivers.
Now the Radeon 64MB Xig drivers (Alpha 2.0) are actually FASTER than some of Nvidia's drivers with some of their faster cards. So the statement that NVidia has the fastest GL drivers currently is also incorrect. And I suspect that the next release of the V5 drivers (which BTW do support SLI and FSAA) will also be comparable to NVidia's drivers. The DRI developers are doing one hell of a job, and they all deserve our respect. And the fact is that open source developers DON'T NEED nvidia's pipeline; the current DRI/GLX stuff does it just fine.
Re:I really don't get it... (Score:1)
It's just a theory though. And it assumes that the figure is even correct.
Re:Tiling? (Score:1)
You know, "hidden surface removal" really doesn't mean that they have any special magic tricks up their sleeve. You can do pretty much correct HSR with a 16-bpp z-buffer, as long as you are careful with your near and far clipping planes. 32 bits per depth pixel is better, of course.
What I'd like to see is a hardware implementation of hierarchical z-buffering for occlusion culling. That'd be neat.
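To make that concrete, here's a minimal C sketch of a one-level hierarchical z test (my own illustration, not how any shipping chip actually does it): a coarse buffer keeps, per 8x8 tile, the farthest depth already drawn there, and an object whose nearest possible depth is farther than that in every tile it covers can be rejected without touching a single pixel.

    /* One-level hierarchical z sketch (illustration only, not any
     * vendor's actual hardware).  coarse_z[] holds, per 8x8 tile, the
     * FARTHEST depth already rendered in that tile; smaller z = nearer. */

    #define TILE    8
    #define TILES_X (640 / TILE)
    #define TILES_Y (480 / TILE)

    static float coarse_z[TILES_X * TILES_Y];

    /* Returns 1 if a screen-space box whose nearest depth is zmin might
     * still be visible, 0 if every covered tile already holds geometry
     * nearer than the whole box -- in which case the per-pixel z tests
     * (and all the texturing behind them) can be skipped entirely.     */
    int maybe_visible(int x0, int y0, int x1, int y1, float zmin)
    {
        int tx, ty;
        for (ty = y0 / TILE; ty <= y1 / TILE; ty++)
            for (tx = x0 / TILE; tx <= x1 / TILE; tx++)
                if (zmin <= coarse_z[ty * TILES_X + tx])
                    return 1;   /* this tile might still show it */
        return 0;               /* occluded everywhere */
    }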
Re:3D Realism is becoming dangerous. (Score:1)
Psst, you're still looking at the damn computer screen while playing. Psst, the game doesn't hurt like discovering the foot of a stool in the dark with one of those small toes on your foot does. Psst, you don't play "real life" with a mouse and the WSAD keys.
Token-symbol-ish enough for you?
The next thing you're going to do is start claiming that each time you die "in the game", you die a little inside. Furrfu!
Re:About the 2-7x faster... (Score:1)
The word you're looking for is "hierarchical Z-buffers". "Hidden surface removal" is a blanket phrase that covers such things as z-sorting (used in early software 3d engines), BSP rendering without a depth buffer, S-buffers, the "active edge list" algorithm used in the software renderers in Quake 1 (and prolly 2 too) and the good old z-buffer.
Please, don't let some corporation smudge the terminology.
Re:nVidia's high end vs. low end (Score:1)
This may also be the real explanation for the drivers' closed-sourceness - a part of their business model is to have manufacturers ship boards that facilitate crippling of the chip's capabilities by the driver. Imagine what would happen if some open-source hacker could modify the driver to ignore the model ID and enable the Quadro-specific features anyway?
Re:Huh? (Score:1)
Not to mention that some companies (SGI comes to mind, also the company that did the Permedia {1,2,3} product line) have shipped products that have implemented most of the OpenGL pipeline (1.0 or 1.1) in silicon. Now, if that doesn't count as a GPU then I'm sure that "GPU" must be a registered trademark, like "Twinview" is.
Ugh. Not to mention the fact that the G200 and G400 chips from Matrox also have a kind of geometry processing unit, called a "warp engine" that's programmable using some sort of proprietary microcode (the utah-glx project used pieces of binary-only microcode received from Matrox, and I'm pretty sure the XF86 4.0 DRIver does too). As far as I can tell from lurking on the utah-glx list back when John C. was working on the driver, the g200 has one warp pipe while the g400 has two. It looks like the current drivers for the g200 and g400 use the warp pipes for triangle setup acceleration (they only seem to use one microcode routine for triangle setup, which I think is a shame...).
Based on my interpretation, that also counts as a GPU, programmable no less!
Re:"hardly any depth" ?? This is pure marketing BS (Score:1)
"By introducing better hidden surface removal"
Oh please, you've just given your own lack of knowledge away right here. Do you even know what a z-buffer is? Or are you suggesting nVidia have some *revolutionary* branch-off from z/w-buffers? Or, wait, don't tell me, they've invented (drum roll ..) *back face culling*, right!? Perhaps you actually meant to say something like "they've optimized the amount of geometry information that needs to be sent to that card by creating higher-level primitives such as curved surfaces, meaning less data to go over the bus (as a simple example, the new sprite primitives in directx8)" .. but I don't think you meant to say that, because it doesn't sound like you know very much about this yourself.
"If you don't know what it is, maybe you should not voice an opinion in the first place"
"Complex" does not imply "lots of triangles" in my book. If they meant "lots of triangles" they shouldn't have said "complex". Anyway, any moron knows that fill rate has become a far bigger bottleneck than number of triangles since the introduction of the first GeForce. Your poly count has absolutely nothing to do with "complexity" (go look up the word in a dictionary if you want to confirm that).
A q3a scene might be defined as complex: multi-texturing, lots of renderstate/texture stage state manipulation, multi-pass rendering etc. Making q3a-style curved surfaces hardware primitives might speed up games like quake, and perhaps this is the direction they're trying to go. "Complex geometry" isn't some specific 3d graphics terminology, it's some vague, undefined marketing BS, and that was my point.
Re:"hardly any depth" ?? This is pure marketing BS (Score:1)
"better than yours (seen your homepage)."
hehe .. yup, Dave Gnukem is pretty much stagnant, I haven't actually worked on it in literally a year, so I can't argue with you there (actually my entire web page is essentially stagnant, it's not a high enough priority in my life right now - my point is, my web page isn't exactly an accurate reflection of what I'm doing.) It's not mentioned on my web page, but I'm currently working on a 3d game with a friend of mine, a networked FPS (OpenGL for gfx, sockets for network, DS for sound etc). It's coming along quite well at the moment, if it gets anywhere close to a finished game we'll be putting up a web-page for it and I'll link to it. Also most of my time goes to my work, which as it happens is 3d graphics simulations, incl. networked (mostly, military and industrial training simulators ..) so I'm not completely clueless ..
"hardly any depth" ?? This is pure marketing BS .. (Score:1)
This "secret document" sounds more to me like a press release crafted by their marketing department. Actually it smells extremely badly of something designed to manipulate stock prices, or at the very least to calm nervous shareholders.
"In environments where there are low detail scenes (large triangles, simple geometry, hardly any depth)) the NV20 is only twice as fast as the Geforce 2 Ultra"
What the hell does "hardly any depth" mean? What they are trying to say here, without it sounding too bad, is that although T&L ops are quite a bit quicker, fill rate is and will still remain your 3D app bottleneck.
"The performance of the chip doubles when handling geometrical data"
Uh, what the heck is "geometrical data"? 3D polygons as opposed to 2D polygons? All your 3D geometry data is "geometrical data", whether the scene is simple or complex. Also, I don't know where they get the number "7" if they also say here that the performance only doubles.
So the new chip sounds good, yes, but you can forget about it being 7 times faster, that is 100% pure marketing BS. Sounds like they've upped the clock and optimized the T&L engine and antialiasing. I might believe double the speed, but "7 times faster" goes way beyond lies.
What is "complex geometry" anyway? A polygon is a polygon .. multi-textured maybe? Sheez, I dunno, this whole article appears to have been written by a 1st year marketing student with zero technical knowledge.
Re:3D Realism is becoming dangerous. (Score:1)
Quake III, realistic?? Even with this super-new nVidia-chip I have a really hard time believing that FPS games like quake will ever be realistic. Nicer graphics doesn't equal more realistic graphics. But even if we managed one day to create technology that made games look exactly like reality (which would need 3D-monitors of course), I still doubt teenagers would have a hard time telling the difference.
I would like to see a token symbol placed on the screen that would constantly remind the player that he is in a game universe.
Aren't the icons representing your ammo, your armor, your weapons, and the frag counter enough? I would think so.
Re:why such a fast RAMDAC? (Score:1)
A digital connection with the same bandwidth would be a lot cheaper (Fast and precise DACs are relatively expensive)!
Re:why such a fast RAMDAC? (Score:1)
Re:Still closed drivers (Score:1)
On Nov 03 2000, wulfie wrote:
> I'll second the anti-Nvidia driver lobby.
I'm planning on buying a new computer soon (a Duron) and one of the things that was hardest to figure out was which video card to get.
It seems that there is a gap between el-cheapo, older PCI cards and the super-hyper-duper-hi-end cards with 3D acceleration and all the bells and whistles. There's nothing in between for someone like me who wants to buy a cheaper card and only cares about 2D performance (I don't play games and I don't use 3D applications).
Since there were no options (or since the manufacturers don't want to see that part of the market), I started looking for cards that would provide not-so-bad performance without hogging my future system's performance, at a reasonable price.
In all the reviews that I've studied, the NVIDIA cards seem to be the performance winners, but the fact that they don't have a receptive attitude towards the community means that they don't want people like me as their customers.
This is what made me choose a Matrox G400 for my new system (together with the recommendation of a close friend who said the G400 was running quite fast in his system).
Isn't that crap (Score:1)
1280x1024 is wrong too, for that matter. That's why I use 1152x864 in Windows (which doesn't support non-square pixels like X does).
Re:What I think is sad... (Score:1)
Seriously though, I think it has more to do with what sells than what is possible. Game makers could make less violent games using current technology.
I personally can't stand FPS because they give me motion sickness. How's that for real?
I'd rather put my 3d card to some good use, but I really can't think of anything that I'd be interested in. Anyone with ideas?
-- Jacob.
Final Fantasy?? (Score:1)
If by 'A lot of PC game makers just aren't that skilled' you mean 'A lot of PC games don't have a place for pre-rendered graphics', then you'd be correct.
FunOne
Re:Tiling? (Score:1)
Re:3D Realism is becoming dangerous. (Score:1)
I disagree. I feel that prolonged involvement goes along with a certain intensity which I feel should be sought out over mediocrity any day.
Re:I really don't get it... (Score:1)
The GeForce 2 gets slower as the models get more complex. As you increase the complexity of the scenes, the NV20 gets slower more slowly.
At a scene of complexity N the NV20 is (say) twice as fast as the GeForce 2. At a scene of complexity 20*N the NV20 is 7 times as fast as the GeForce 2. The NV20 on a simple scene is still probably faster than the NV20 on a complex one, but we're talking about relative speed.
Rendering 60M fully quad-textured polygons 50 times a second is faster, in terms of work done, than rendering an untextured cube 60 times a second.
Chill (Score:1)
What I wonder.. (Score:1)
Re:I really don't get it... (Score:1)
Gfx HW is a pipeline, and a pipeline is only as fast as the slowest stage. The two main stages nowadays are transform and pixelfill. If the transform is busy because it has to transform bazillions of tiny triangles the pixelfill will idle. Same vice versa, for screen-filled, multi-textured, bump-mapped polygons the geometry part will sit there twiddling thumbs most of the time.
That's how a card can at the same time be 2 and 7 times faster. It all depends on the problem you throw at it.
The interesting side effect is that for a fill-limited scene you can increase the detail (i.e. use more polygons) without any effect on the framerate, the same goes for transform-limited scenes. The holy grail of graphics programming is to find the sweet spot so that all stages are busy all the time. But as that depends on the graphics card and the screen resolution and bits per pixel and other factors usually only the demo writers for the chip companies bother to do that. Thus most current games are written for the lowest level of customer hw and don't really use all the fancy features. Which is just fine for people like me who write their own software... ;)
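Here's a toy C calculation of that argument, with completely made-up throughput numbers (nothing below is an NV20 spec): the frame time is the maximum of the per-stage times, so whichever stage is slower sets the frame rate, and the other stage's spare capacity is free.

    /* Back-of-envelope pipeline balance check (numbers are invented for
     * illustration).  Extra polygons are "free" in a fill-limited scene
     * and extra fill is "free" in a transform-limited one.             */

    #include <stdio.h>

    int main(void)
    {
        double tri_rate  = 25e6;    /* triangles transformed per second (assumed) */
        double fill_rate = 800e6;   /* pixels shaded per second (assumed)         */

        double tris_per_frame   = 200e3;                 /* scene geometry   */
        double pixels_per_frame = 1024.0 * 768 * 3.0;    /* ~3x overdraw     */

        double t_transform = tris_per_frame   / tri_rate;
        double t_fill      = pixels_per_frame / fill_rate;
        double t_frame     = t_transform > t_fill ? t_transform : t_fill;

        printf("transform: %.2f ms, fill: %.2f ms -> %s-limited, %.0f fps max\n",
               t_transform * 1e3, t_fill * 1e3,
               t_transform > t_fill ? "transform" : "fill", 1.0 / t_frame);
        return 0;
    }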
Re:A 500Mhz Ramdac ?! (Score:1)
I think they may be confusing it with the DDR-memory clockspeed which will be 500 MHz. A RAMDAC of that speed would be overkill, there aren't any monitors big enough for those resolutions, and the upcoming LCD and digital monitors don't even need a RAMDAC.
"300% increase in FSAA speed"
Why not? You have to remember that the GeForces did FSAA in software. They probably added a hardware-based implementation, like 3dfx did with the VSA-100.
"how crap the V5 are compared to GF2 if you talk about speed."
The V5 wasn't that bad when it came to speed per se (especially with FSAA); it's just that their top card with 4 VSA-100 chips never panned out, thus giving the GeForce 2 and up the edge. nVidia is still far ahead in terms of quality though.
Re:3D Realism is becoming dangerous. (Score:1)
Once we get to the level of holodecks, it's time to be worried
Re:Jeez (Score:1)
I doubt we are going to see even a 1.5x boost of fps at 640x480x16x20K polygons/frame.
A new generation of chips has always been able to outperform an older generation by about 2x ON THE RESOLUTION THAT MATTERS AT THE TIME OF RELEASE, because, my friend, 21" monitors are not exactly cheap.
Therefore, we'll most likely see about 2x performance increase on 1024x768x32 with 30-50K polygons/frame.
Re:Geez... (Score:1)
Re:Open source drivers (Score:1)
As I understand it, they have had some problems working page flipping into the XFree86 architecture, but the next driver version is supposed to support it. I don't know about the graphics overlay, but it sounds like the kind of thing they'd be working on supporting soon.
------
Re:Still closed drivers (Score:1)
------
Re:Image clarity and color accuracy .. (Score:1)
And I've been using it like this for some months now.
Re:Tiling? (Score:1)
Is speed all that matters? (Score:1)
As time went on, we saw real powerhouses from NVidia which put the competition to shame, performance-wise. Now we are being flooded by enormous framerates (who remembers BitBoys' claims of 200fps at 1600x1200 in Q3?), GPUs, quad texel pipelines, DDR RAM, and so on and so forth.
However, has anyone considered visual quality? Having millions upon millions of polygons drawn per second may seem a real treat, but if they look ugly, then what's the point (remember the Riva 128)? Not many games are taking true advantage of all the power available, and there's always going to be a bottleneck somewhere, so I think it's time to relax, acknowledge that we don't need 200fps, and hope to see some beautiful images explode onto our monitors sometime soon.
Re:3D Realism is becoming dangerous. (Score:1)
I see computer games as an escape from reality. Surreal images, strange creatures, and worlds we'll never see in our lifetime make for a perfect outlet. Trouble is, when we get to photorealism, the fantasia vanishes, and the magic is gone.
It's like going to an art museum, seeing a portrait painted some 500 years ago, and then comparing it to some whiz kid's photorealistic portrait. It's obvious that the "cruder" image has more feeling inside.
Re:WOW! (Score:1)
Re:why such a fast RAMDAC? (Score:1)
Yes. I _need_ a HiRes head mounted display, i.e. two screens, i.e. double pixel freq.
Jeez (Score:1)
Image clarity and color accuracy .. (Score:2)
Re:Open source drivers (Score:2)
1. Graphics overlay (for playing DVD's etc..) - driver still not supporting this feature
2. Page flipping - what gives the NVidia card a real boost under Windows - is not in the driver yet.
As a person who works extensively with lots of graphics cards, I can testify that their drivers are damn fast compared to any driver in XFree 4.0.x - but they're not as stable as the open source Matrox G200/G400 driver which is found in XFree 4.0.x.
Re:How much memory is Nvidia's X *really* using? (Score:2)
Then this other poster was simply wrong. The large virtual size is due to memory mapping of the framebuffer (32/64 megabytes on modern cards) and mapping of the AGP space (128 megabytes or more).
The various "bit planes and color depths" are called visuals and they'll occupy at most a few hundred bytes each as structures within the X11 server.
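For anyone curious, here's a small C demo (my own illustration, nothing NVidia-specific) of why a big mapping inflates VSZ without inflating RSS; an anonymous mapping stands in for the driver's mmap() of the framebuffer and AGP aperture.

    /* Map 192 MB (a stand-in for a 64 MB framebuffer plus a 128 MB AGP
     * aperture) and never touch it: top/ps will show the virtual size
     * jump while the resident size stays tiny.                         */

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 192UL * 1024 * 1024;

        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        printf("mapped %lu MB; compare VSZ and RSS with: ps -o vsz,rss -p %d\n",
               (unsigned long)(len >> 20), (int)getpid());
        pause();    /* nothing is written, so the pages never become resident */
        return 0;
    }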
Re:Still closed drivers (Score:2)
I've seen you repeat this a number of times, but I'm afraid it's completely misleading. The information on the nvidia site is not specs at the register level, and it's not even useful information for writing an open source driver. As proof of this claim, try using that information to write a driver for FreeBSD.
This URL has been floating about for months now and every now and then someone repeats on the utah-glx mailing list "hey look nvidia has full register level specs on their website". Each and every time the person is corrected immediately. So please stop spreading this misinformation.
Re:why such a fast RAMDAC? (Score:2)
Supposedly, the correct formula is: RAMDAC speed = x * y * refresh rate * 1.32.
So, a 500 MHz RAMDAC would be able to drive a 1536*2048 display at 120 Hz. I'm sure the calculations are slightly different for widescreen displays.
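Plugging a few numbers into that rule of thumb in C (this is just the parent's formula, where the 1.32 roughly covers blanking overhead; no datasheet was consulted):

    #include <stdio.h>

    /* required pixel clock in MHz for x*y pixels at a given refresh rate */
    static double ramdac_mhz(double x, double y, double refresh_hz)
    {
        return x * y * refresh_hz * 1.32 / 1e6;
    }

    int main(void)
    {
        printf("1600x1200 @  85 Hz: %6.0f MHz\n", ramdac_mhz(1600, 1200,  85));  /* ~215 */
        printf("2048x1536 @  75 Hz: %6.0f MHz\n", ramdac_mhz(2048, 1536,  75));  /* ~311 */
        printf("2048x1536 @ 120 Hz: %6.0f MHz\n", ramdac_mhz(2048, 1536, 120));  /* ~498 */
        return 0;
    }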
So do their Windows drivers suck, too? (Score:2)
Even for the enormous Linux kernel module that's required to use their drivers? Really?
Does their Windows driver, after less than a week of use, bloat to consume over 200MB of virtual memory? That's what their closed source XFree86 driver did with my GeForce DDR, on XFree86 4.0.1 and kernel 2.4.0-test9, even without using the 3D features at all. The open source nv.o driver that came with XFree86 isn't exactly a spartan RAM user either, but at least after it's sucked up a big chunk it stops asking for more.
Granted, they don't seem to care about keeping up with development kernels (their kernel module didn't even compile against 2.4-test for a while); I haven't exactly put much work into fixing the problem (but how can I, when I can't even recompile with debugging symbols?); and their drivers did seem to work OK with kernel 2.2.16.
Nevertheless, I don't intend to buy another NVidia card until I have an open source 3D driver to run it with. By contrast, my previous 3D acceleration in Linux came from Mesa on top of Voodoo2 Glide; the frame rate may not have been as fast, but the rate of driver improvement certainly was faster.
It was really using 100-200 MB (Score:2)
Re:You'd need *full* card specs to fix a driver. (Score:2)
I never claimed it was easy! ;) It's kinda like being stuck in the middle of the ocean in a rowboat. With closed drivers, you have no paddles, and the rowboat is covered with a sealed, opaque top so you don't even know when you're near land. Open drivers is like having the top open, and a large soupspoon. Rowing yourself to shore with oars would be hard enough, and harder with a spoon. But at least it's possible.
Other benefits come for other OSes (NetBSD, FreeBSD, etc.) for which nVidia will never write drivers. Also, companies don't in general last forever. What happens if nVidia goes belly up? All the people who bought their cards and are using their drivers are up shit creek without a paddle (to extend a metaphor too far). Having the code allows you to generate an extremely specific bug report, which can then be passed on to someone more knowledgeable. It's very hard for core developers to fix bugs like "It crashes when I click on the menu in Starcraft", which could be a hardware problem...
--Bob
Re:Developers will hit the wall sooner or later (Score:2)
Try that one on a musician friend one day and see how far you get :)
What I think is sad... (Score:2)
When will this technology break out of this ghetto? Aren't there more interesting things to do?
Personally, I think 3D technology has been stuck in the "keystone cops" era long enough. In early film, the only thing people could think to show was chase scenes and other stunts. A lot of that had to do with the immaturity of the medium (no sound, poor picture quality). Eventually, I think "3D entertainment" won't be synonymous with "graphic violence."
Porting nVidia's driver. (Score:2)
Actually, I gather from other posts in this thread that the abstraction layer between the driver core and the OS's driver interface is open (or at least published). This should make porting fairly straightforward, even with most of the driver being a black box.
There's also the option of wrapping Linux drivers in their entirety to run under *BSD, though I don't know if *BSD's Linux support has been extended *that* far.
You'd need *full* card specs to fix a driver. (Score:2)
You'd have a lot of trouble doing that, unless it was a silly problem like a memory leak (admittedly worth fixing).
I've worked for a couple of years with a well-known software company that does third-party driver development (well-known cards, well-known platforms). Debugging a driver even *with* the standard reference texts for the card is a royal pain. Doing it blind - say, for hardware bugs or restrictions that aren't documented - is so much trouble it's not funny. This eats a vast amount of time even for us. Trying to debug a driver while having to guess at restrictions/errata in a register spec without support documentation - or worse, having to reverse-engineer the spec from code - would be at best a vast undertaking and at worst impractical.
It can be done, but not nearly as easily as you seem to think, by several orders of magnitude.
Re:Developers will hit the wall sooner or later (Score:2)
There will continue to be applications that push the limits of this and many subsequent 3D accelerators. Trust me.
How much memory is Nvidia's X *really* using? (Score:2)
Nvidia's driver with XFree 4.
Here's what top says:

    SIZE  RSS   SHARE
    252M  252M  2024  S  0  1.7  100.4  5:18  X

Seems excessive, doesn't it? Well, I've only got 256M on my machine, and guess what? NO SWAP SPACE IS USED.

ps tells a different story:

    VSZ     RSS
    276408  12704  ?  S  10:00  5:21  X

VSZ is the VIRTUAL size of the process: 276M. 12.7M is what it actually uses.

Another poster in another forum explained that this apparently huge virtual size was due to virtual-memory-mapping of various bit planes and color depths in the VRAM into virtual memory.

12.7M is still pretty high, but hardly burdensome on my 256M machine.
PeterM
Still closed drivers (Score:2)
Look at the SBLive for an example of this. In the beginning it was closed and a pain in the ass to get working under Linux. There were kernel version mismatches etc. When they opened the driver it progressed much faster and it got incorporated into the kernel. Now the SBLive is one of the best cards to get for Linux since it is supported by every major dist out of the box. In some dists they even use the ALSA driver instead of the OSS one, which is even more capable.
I am not going to get locked into nvidia's way of doing things again. When I bought the card they had announcements about how they were going to open their drivers. This did not happen. My next card is going to be an ATI, Matrox, or 3DFX. I am waiting a bit on the Radeon till I see the open drivers for it. However, the Matrox cards and 3dfx cards do have open drivers. I do like 3D but I like stability more, and the box with the G200 here has never crashed in X. The nvidia GeForce box crashes a lot more often than that.
So please even if you like their hardware don't support them till they open the drivers. In the long run it will help us a lot more. Teaching companies that drivers alone are not enough.
Re:Developers will hit the wall sooner or later (Score:2)
And you can kill a lot of polygons just modelling a realistic telephone. Which you can then reuse everywhere you need a telephone.
Re:why such a fast RAMDAC? (Score:2)
A RAMDAC has nothing to do with 3D acceleration. Instead, the RAMDAC converts the graphics card's display memory into analog signals for the monitor. Hence the name RAMDAC: Random Access Memory Digital-to-Analog Converter. A fast RAMDAC can support very high refresh rates. Now, a 500 MHz RAMDAC will probably become necessary with high-definition TVs, which have a resolution a bit higher than 1600x1200, at a decent refresh rate. But as an above poster pointed out, it is likely a mistake in the article.
NVidia ethics (Score:2)
Huh? (Score:2)
"Pioneer of the first ever GPU (Graphics Processing Unit), Nvidia is now introducing a programmable GPU, seven times faster than the previous Geforce 2 Ultra, the NV20."
Now, there are two possibilities here - either the article's author has a shocking grasp of the English language (wouldn't THAT be bad, considering he writes for ZDNet UK, the home of "The Queen's English"), or he hasn't done his research properly, and thinks that NV20 refers to the GeForce2 Ultra.
The GeForce2 Ultra is the NV15 if I'm not mistaken - simply a GeForce2 GTS with faster RAM. The GeForce2 MX is the NV11.
The NV20 is the new GPU ZDNet's supposed "leaked documents" claim will be 7 times faster in complex scenes (ie TreeMark). You've gotta love it when the speed at which a product performs is judged by how it performs in a program designed to make it shine.
Bad journalism all round I think - not that we should be surprised....
Thoughts from a game developer (Score:2)
If you look at the PlayStation 1 hardware from five years ago, it doesn't even have bilinear filtering or zbuffering. It's also a total dog. And yet there are PS1 games that look as good or better than many current PC titles that require a TNT2 or better (maybe 15x faster than the PS1 hardware). So theoretically an "old" card like the Voodoo2, which is still 10x faster than a PS1, could do amazing, amazing things--much better than what people expect to see from a GeForce. But we don't bother, because things keep changing at a crazy rate and we're simply trying to get things out the door.
In a way, I'm starting to see new video cards as a way of getting suckers to part with their money.
Re:3D Realism is becoming dangerous. (Score:2)
Son, I was wondering, you're playing that new hyper realistic game. How can you tell the difference between that and reality?
Er, we bought it at the store Dad. You were there, remember?
Yes, but when you've been playing it all day, don't the lines blur between games and reality?
Er, no? I load up the game, sit motionless for 10 hours. People shoot me and I feel no pain. I can carry a bazooka and 20 rockets without getting winded. My game guy picks up stuff with his hands, not mine. IT'S a GAME Dad.
I think you need protection from games. I'm going to start a group against realistic games. Uh, can you show me how to use this new 'HyperNet' to make a web page?
Dad, you don't make web pages anymore. You have to make fully interactive 3-D environments. Since everything went analog they haven't used IPv6 in YEARS.
Don't take that tone with me! I was a Unix guru back in the day!
Later
ErikZ
Re:Tiling? (Score:2)
Re:Final Fantasy?? (Score:2)
Re:Damn! (Score:2)
Re:Damn! (Score:2)
Re:Enough with the polygons - lets get some physic (Score:2)
Re:it doesn't matter how great of card it is.. (Score:2)
Re:Damn! (Score:2)
Re:Developers will hit the wall sooner or later (Score:2)
Re:What I think is sad... (Score:2)
Re:3D Realism is becoming dangerous. (Score:2)
Actually, the columnist from MaximumPC (can't remember his name, forgive me; he's the one with the beard) pointed out that games are much closer to books than movies, in that movies give you a prepackaged world on a plate, while games give your own mind a tool to imagine things more vividly. I am inclined to agree. Well-done games take a lot of imagination to play, and can often stimulate the mind like a book does. (I'm not talking Quake, I'm talking Final Fantasy or Zelda.)
Re:why such a fast RAMDAC? (Score:2)
-----------------------
Re:why such a fast RAMDAC? (Score:2)
About the 2-7x faster... (Score:2)
The Radeon has a version of this implemented, but (to be honest), the Radeon isn't really too powerful. Imagine a powerful NVIDIA chip loaded up with HSR, and you'd get up to 7x faster in complex scenes, while simple scenes would only be a bit faster (less hidden surfaces to begin with).
nVidia's high end vs. low end (Score:2)
They're the same boards, with the same chips. The only difference is the position of two chip resistors which identify the product type. [sitegadgets.com] In some models, the "high end" board was a part selection; the faster chips went to the high end. But with the latest round, the GeForce 2 Ultra, the low end is faster. So the reason for the distinction has vanished.
nVidia finally bought ELSA [elsa.com], the last maker of high-end boards that used nVidia chips. At this point, ELSA basically is a sales and tech support operation. It's not clear yet whether nVidia is going to bother with the high end/low end distinction much longer. I hope they get rid of it; its time has passed.
Re:Enough with the polygons - lets get some physic (Score:2)
Did you see the claimed numerical performance for the new NVidia chip? 100 gigaflops. I can hardly wait until we have that kind of performance in the main CPU(s).
Re:Jeez (Score:2)
This isn't to say that these figures aren't right, but be sure to take them with a grain of salt.
Re:Open source drivers (Score:2)
You think your problem with the NVidia driver would be fixed if it were open source?
No, I think a driver would exist for my OSes of choice if NVidia opened the driver sources. It's not all about Linux. On FreeBSD, OpenBSD, and NetBSD, recent NVidia hardware is as useless as an HP or Lexmark WinPrinter. Feh.
Re:Developers will hit the wall sooner or later (Score:2)
Obviously the developers of 3d worlds in film have not yet maxed out their imaginations in terms of what to build, and how detailed to make it - and the gap between their work and the realtime 3d scene is a very big gulf indeed.
So I really don't think we'll be coming up against any significant blockages in terms of human imagination anytime soon. I suppose one might argue that as soon as we hit the point where 3D world complexity is visually indistinguishable from reality, we may have hit the maximum needed realism. But then of course there's always the visual effect of the cosmic zoom, where you might want to soar through the microscopic cracks in someone's skin, etc. So there's plenty of room to keep plugging away.
That's something (Score:2)
Free Advertising ... Priceless (Score:3)
The fact is, these "secret" documents are released as a form of cheap marketing. In fact, a large portion of today's "journalism" is written directly from the company spin doctors. PR twats vastly outnumber journalists, and the trends are extending this.
Always look for hyperbole (like the ZDNet headline) and emotive adjectives in phrases like "screaming chipset design". They mention that it will only double performance when large polygons are used. Well, I haven't seen many games that are written only for the GeForce. Given the price of development these days, games companies are reluctant to alienate potential customers by asking for huge specs (unless your name is Geoff Crammond). I think it is safe to assume that the new cards will follow the trend of Nvidia's chipsets since the Riva128, at least until benchmarks are out. The next generation is about twice as quick as the plain vanilla flavour of their current best chipset, although they can probably manufacture benchmarks to make it look better.
Corporate hype is not newsworthy (as much as they like you to believe it is), but I will be interested when someone reputable publishes benchmarks (and no, not Toms).
Me, I'm happy to run quake3 on my riva128/amd233. Sure it looks like crap and is choppy as hell but ..... man, I really gotta upgrade. Where was that link again ..
Re:Developers will hit the wall sooner or later (Score:3)
We've already seen that in the game industry. Teams of 20 to 100 people cranking out stuff for 2 or 3 or 4 (Daik... nevermind) years.
Sure, movies do it, we get some beautiful movies that kill anything gaming hardware will be able to do. But movies are, what, 2 hours long? A proper game has to have 30 hours of gameplay at the very least (I'm thinking Diablo II at about 25-30 hours to take one character through), I'd rather have 75-100 hours. And with a movie, you might visit a model/texture once, where with a game it might be something that you can look at from all angles, as long as you like.
So we'll have a couple of guys working on an engine to pass geometry and texture, another 5 or 10 working out AI and extensibility and 200 artists and modelers creating the world.
Re:Open source drivers (Score:3)
My argument is simply that there's nothing more frustrating than having a bug that you can't fix. I'm the type that at least gives a hack at it if I find a bug. Open source is not about open source developers fixing bugs for you. It's about coherent, concise bug reports that come from an examination of the code. It's also about (as you mention) fixing simple things that I could find and fix like memory leaks. Clearly I could not write or reverse engineer a driver in any reasonable period of time. Clearly nVidia are the best people to write the driver. Open source is most useful in the last 10% of the development process, fixing bugs and refining the code. If a company expects a magic cavalry of developers to appear to write their driver for them, they are sadly mistaken. But they can expect people to do a little hacking to get an existing driver to work with their hardware combination.
Open Source is not the panacea of magic software creation that some people (3dfx, apparently) think it is. But when I buy a product with closed source drivers and those drivers suck, I'm fucked. If those drivers are open, at least there is hope. I find that in general, if I depend on other people to fix my problems, they will never be fixed. I hack "open source" to fix my problems. nVidia can't own every possible combination of motherboard/processor/OS, and therefore can't fix every possible problem. Open source is simply the only way to go, and I won't ever again bother with companies that aren't open with their drivers.
--Bob
Egad, who modded my original post down as "Troll"? Do your worst, metamoderators.
Re:Image clarity and color accuracy .. (Score:3)
No, the problem lies between the chip and the mini-D connector. NVidia only sells chips to boardmakers, who make the actual card. While almost all of them make similar variations of the reference design, it is the boardmakers who choose where they get the rest of their PCBs, filtering components, etc.
The same thing happened with nVidia's TNT and TNT2 (And with 3dfx's chips before they stopped selling to other companies). The end result is that some are quite good, and others cut corners (And a brand name is little guarantee of quality these days).
Tiling? (Score:3)
A) It doesn't over-render. If geometry isn't going to be seen, it doesn't get rendered. Normally, cards have to render the pixel, and then discard it if the Zbuffer test fails. With tiling, there is no Zbuffer and pixels get discarded before they're rendered.
B) The sorting allows transparency to be handled very easily since geometry doesn't have to be presorted by the game engine.
C) It allows a hideous number of texture layers. The Kyro (PowerVR Series 3 chip-based) can apply up to 8 without taking a noticeable speed hit. Also, it lowers the bandwidth requirement significantly since the card doesn't have to access the framebuffer repeatedly.
D) It allows incredibly complex geometry. Even though the Kyro is a 120MHz chip, it can beat a GF2 Ultra by nearly double the fps in games that have high overdraw (such as Deus Ex).
The main problem with tiling is that standard APIs like OpenGL and D3D are designed for standard triangle accelerators. As such, the internal jiggering tiling cards have to do often outweighs their performance benefits. Also, up until now, only two-bit companies have made tiling accelerators, so they haven't caught on.
If you want to read the Kyro preview, head over to Sharky Extreme. [sharkyextreme.com]
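For the curious, here's a heavily simplified C sketch of the idea in points A and D (an illustration of the concept only, not PowerVR's actual algorithm; tri_covers() and shade_pixel() are assumed helpers): triangles are binned per tile, visibility is resolved per pixel on-chip first, and only the surviving pixels ever get textured.

    #define TILE 32

    struct tri { int id; /* screen-space triangle, textures, etc. */ };

    /* assumed helpers for the sketch */
    extern int  tri_covers(const struct tri *t, int x, int y, float *z_out);
    extern void shade_pixel(const struct tri *t, int x, int y);

    /* Render one TILE x TILE screen tile at offset (ox, oy) from the
     * triangles binned to it.  Depth lives in a small on-chip array, so
     * no external z-buffer traffic, and hidden pixels cost no fill.    */
    void render_tile(const struct tri *bin, int nbin, int ox, int oy)
    {
        for (int y = 0; y < TILE; y++)
            for (int x = 0; x < TILE; x++) {
                float z, best_z = 1.0f;     /* far plane */
                int   best_i = -1;          /* -1 = background */

                for (int i = 0; i < nbin; i++)          /* visibility pass */
                    if (tri_covers(&bin[i], ox + x, oy + y, &z) && z < best_z) {
                        best_z = z;
                        best_i = i;
                    }

                if (best_i >= 0)                        /* shade visible pixels only */
                    shade_pixel(&bin[best_i], ox + x, oy + y);
            }
    }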
Re:So do their Windows drivers suck, too? (Score:3)
Yes. The only part that is Linux-dependent is the abstraction layer, for which the source code is provided. The same kernel module with a different abstraction layer is used on Windows. (If you don't believe me, head on over to that Linux dev page at nvidia -- the one with the register-level specs [nvidia.com] and such. Too bad the specs are incomplete due to NDA's...)
No one knows -- Windows itself bloats faster. :) OK, that memory leak is obviously something they are working on. It is beta software. Does it hurt so much to restart X once every few days?
Do you honestly expect them to?
You have the source code for the abstraction layer in the NVidia kernel module. Any changes necessary can be made there.
That's because NVidia's Linux driver does not require much in the way of improvements. It is pretty much complete, except for some minor bug fixes. Compare this to the Voodoo 5 driver, which was supposed to be ready a month after the release of the hardware. It is still in very poor shape (only supports one processor, no FSAA) despite having open source code.
------
more speed = better quality (Score:3)
The increase in speed doesn't just go to framerate. Newer game engines will have their framerate locked (at a user-specified value) and will vary visual quality based on how fast the hardware is.
How can that extra speed be used? More polygons, gloss maps, dot product bump mapping, elevation maps, detail maps, better transparency/opacity, motion blur, cartoon rendering, shadow maps/volumes, dynamic lighting, environment mapping, reflections, full screen anti-aliasing, etc. I could go on forever. All this will be in my game engine, of course. :)
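A toy version of that locked-framerate loop in C (the numbers are assumed and the "render" is faked with a cost model; a real engine would measure actual frame time): the engine keeps nudging a single detail knob so the measured frame time converges on the user's target.

    #include <stdio.h>

    int main(void)
    {
        double target_ms = 1000.0 / 60.0;   /* user-locked 60 fps                  */
        double detail    = 0.5;             /* 0..1: LOD bias, shadow res, etc.    */

        for (int frame = 0; frame < 5; frame++) {
            /* stand-in for the real render + timer: cost scales with detail */
            double frame_ms = 8.0 + 20.0 * detail;

            if (frame_ms > target_ms && detail > 0.05)
                detail -= 0.05;             /* too slow: drop some eye candy       */
            else if (frame_ms < target_ms * 0.9 && detail < 1.0)
                detail += 0.05;             /* headroom: spend it on quality       */

            printf("frame %d: %.1f ms, detail now %.2f\n", frame, frame_ms, detail);
        }
        return 0;
    }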
------
Re:Developers will hit the wall sooner or later (Score:3)
_________________________
Open source drivers (Score:4)
I know they have carefully thought out arguments as to why their non-open-source, crappy drivers are better than open source ones. But folks, it just ain't worth it. I don't care how fast their cards are, I'll never make the mistake of buying nVidia again. Stick with the more open 3dfx, or Matrox. With them, if it crashes, you can track the bug down and fix it! Or someone else can. The number of open source hackers that might fix a bug is much, much larger than the number of employees at nVidia working on drivers.
--Bob
Re:Tiling? (Score:4)
One other big feature of the NV20 is the programmable T&L unit. That way you can add in small features you want to what the video card processes instead of relying on the CPU.
Another performance advantage that people will see is from the increased theoretical max fillrate. The GeForce 2 runs at 200MHz and has 4 pipelines, each capable of processing 2 textures per clock, which gives you a fillrate of 800 Megapixels/second or 1.6 Gigatexels/second. The NV20 will likely run around 250MHz with 4 pipelines that can handle 3 textures per clock, which will give fillrates of 1 Gigapixel/second and 3 Gigatexels/second. This would allow for a theoretical performance increase of about 30% in single- and dual-textured games and a performance increase on the order of 100-130% in games that use 3 textures per pixel or more. This is of course assuming that there is enough memory bandwidth left to push all of those pixels.
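The arithmetic from the paragraph above, spelled out in C (the NV20 clock, pipe count and textures-per-pipe are the parent's guesses, not confirmed specs):

    #include <stdio.h>

    /* theoretical fillrate = core clock * pixel pipelines (* textures per pipe) */
    static void fillrate(const char *name, double mhz, int pipes, int tex_per_pipe)
    {
        double mpix = mhz * pipes;            /* Megapixels/s */
        double mtex = mpix * tex_per_pipe;    /* Megatexels/s */
        printf("%-10s %5.0f Mpix/s  %5.0f Mtex/s\n", name, mpix, mtex);
    }

    int main(void)
    {
        fillrate("GeForce 2", 200, 4, 2);   /*  800 Mpix/s, 1600 Mtex/s */
        fillrate("NV20 (?)",  250, 4, 3);   /* 1000 Mpix/s, 3000 Mtex/s */
        return 0;
    }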
Price-wise, I would expect a 32MB version with ~200MHz DDR memory for $300-$350 when it comes out, and a 64MB version for $600 with perhaps 233MHz DDR memory.
Re:Still closed drivers (Score:5)
Odd... they have not crashed on me in... umm... two driver versions... and even then the only crashes I ever had were when switching VCs. The only problem is the memory leak when OpenGL programs crash. My OpenGL programs crash a lot when I'm writing them. :) But restarting X once every few days isn't much trouble.
The NVidia kernel module is different from the old SBLive binary module in that the NVidia module has a source code layer between it and the kernel. To make the driver work with a new kernel version, you just have to update the source code layer, and in most cases you don't have to make any changes anyway. The binary part of the distribution is in no way dependent on your kernel version.
The SBLive was also different in that Creative didn't really give a rat's ass about the Linux support, whereas NVidia has basically made Linux an official supported platform and is keeping the Linux drivers exactly up-to-date with the Windows drivers.
Don't forget that NVidia's OpenGL driver is the best in consumer 3D graphics. A significant portion of this driver could easily be used to enhance any other company's drivers. The software T&L engine, for example, which contains optimizations for all those instruction sets -- I'm sure 3dfx would love to get its hands on that! Graphics hardware manufacturers typically don't even support OpenGL, since writing D3D drivers takes far less work, but NVidia has gone so far as to have better OpenGL support than D3D support. They would lose a significant edge if they opened their drivers.
Let's not forget why we use open source software. I don't know about you, but I use whatever software is of the highest quality. I don't care if it is open or not. In many cases, open source produces better quality software than closed source, which is why I use it. In some cases, though, closed source is better. NVidia's closed Linux drivers are far and away the highest quality 3D graphics drivers available on Linux, and the GeForce 2 has been fully supported since before the card was even announced. The open source Voodoo 5 drivers, on the other hand, are crap to this day. I'm sure you won't have much trouble finding a Linux user who will trade you a Voodoo 5 for whatever NVidia card you have, if that's really what you want.
------
Re:Open source drivers (Score:5)
Unfortunately, you are incorrect. Compare NVidia's drivers to 3dfx's Voodoo 5 drivers. It seems as if 3dfx was simply expecting a few hundred developers to show up as soon as they made the drivers open source. As it turns out, only a couple of people outside 3dfx have made contributions, and one of them was paid to do it. It's sad, but it's true.
NVidia, on the other hand, uses the same codebase for both their Windows and Linux drivers. As a result, one could pretty much say that most of NVidia's in-house developers (over one hundred of them) are actively working on the Linux drivers. That's far more people than are working on the Linux Voodoo 5 driver, and because they are all in-house, they are much better prepared to write the drivers. After all, if one of them has a question about the hardware, they can walk down the hall and ask the lead designer.
I get this funny feeling that someone is going to say, "Well, they only have a few people working on the Linux-specific stuff." This is true, but the Linux-specific code is a very small part of the driver (less than 5%). In contrast, the far fewer 3dfx people have to implement the whole Voodoo 5 driver, including all the non-system-specific stuff, on their own. DRI helps, but it doesn't do everything.
You think your problem with the NVidia driver would be fixed if it were open source? Well, maybe, but open source really isn't the software development Utopia that you think it is. At least the NVidia driver supports all of the features of the hardware (all of them), and at (almost) full speed, as opposed to the Voodoo 5 driver which still does not support the V5's trademark parallel SLI processing or FSAA.
Disclaimer: I am by no means against open source software. Hell, I write open source software.
------
Developers will hit the wall sooner or later (Score:5)
I think that with all the new 3D hardware that has come out in the last 6 months, and now the rumor of this chip on top of it, developers are going to have a hard time actually creating worlds complex enough for gamers to tell the difference in which card they are using.
For example, this chipset is 7 times faster in rendering complex scenes, but only 2 times faster for rendering simple 3D scenes. I know that things like shadowing and lighting effects can be built into the gaming engine, but, still, isn't there a lot left to the developer's imagination (such as actually modeling and skinning characters and the objects in the world)? I can see this bumping up the development time for games slightly more every 6 months...