Nvidia Geforce 4 (NV25) Information
msolnik writes: "nV News has a brief article about the long-awaited NV25-based video adapters. These graphics processors have capabilities similar to the XGPU's and are a lot more powerful than the GeForce3 Ti500. Since they are manufactured on a 0.13 micron process, they will probably be clocked at very high levels."
The real question (Score:1, Funny)
3dfx... (Score:2, Interesting)
Re:3dfx... (Score:1)
Re:3dfx... (Score:2)
Gigapixel/TDFX was already going out of business anyway. NVDA did not buy TDFX; they merely acquired its intellectual property, mainly numerous pages of research concerning anti-aliasing.
TDFX was dead anyway. nVidia did not buy them to "wipe them off the face of the earth," as that was already happening.
Re:3dfx... (Score:1)
This left quite a void next to NVidia at the top for a long while, one which only ATI is starting to fill now.
Re:3dfx... (Score:2, Informative)
Re:3dfx... (Score:2, Funny)
Same here... Which is why I've settled on a Radeon 7500.
Dinivin
Re:3dfx... (Score:3, Informative)
Re:3dfx... (Score:2)
Re:3dfx... (Score:2)
Just because you can get it to work doesn't mean that you have to blast others for saying they don't work well when, quite frankly, they don't work well. I have a dual-proc (Pentium III) system with a VIA chipset. My Radeon handles it beautifully. My GeForce2 craps out. So I hate to break it to you, but the nVidia drivers do not work well for everyone. Believe it or not, there are others who have similar problems with nVidia's drivers.
Dinivin
Re:3dfx... (Score:2)
Re:3dfx... (Score:2)
I've had NO problems with the NVIDIA drivers after having used really bad ATI drivers.
If you want an alternative to the closed-source NVIDIA drivers, get a Matrox. They have very high quality drivers (this, of course, assuming you don't do gaming at high res and 32-bit).
Re:3dfx... (Score:2)
Or you can get a Radeon 7500 and have open source drivers as well as gaming at high res and 32 bit.
Dinivin
Re:3dfx... (Score:2)
Re:3dfx... (Score:2)
Yes they do! I'm running it myself (2.4.13).
> They are still compiled against xfree86-4.0.2.
I'm not sure whether that's true. In any event, they seem to work with XFree 4.1 (or whatever Debian unstable is up to now).
> They are on-again / off-again with -ac patches.
The -ac patches themselves are not known to be very compatible with everything. That's one reason they're not put into the kernel until they're stable.
> It's fine if you install Redhat (or should I say Linux 7.1) and don't ever upgrade your kernel or X server, but for everyone else (ie real Linux users) there are some fairly sad compatibility problems with the nVidia drivers.
I'm running Debian sid, and the X server and related packages are upgraded every once in a while. I've had no problems. I even replaced my two-year-old GeForce 1/DDR with a GeForce3 without fiddling with anything in the drivers (just removing the old card and putting in the new one).
Re:3dfx... (Score:4, Informative)
from the article:
".. A Voodoo5 5500-esque Anti-Aliasing feature. The presumption is that the NV25 will bring a Rotated-Grid AA implementation.."
This is probably to compete with Ati's SmoothVision FSAA implementation, which is really quite slick. However, 3dfx was rumored to have really advanced FSAA implementations for their future Voodoo5 6000/6500 series. Perhaps the NV25 will include that.
Re:3dfx... (Score:1)
On a side note, probably off-topic, but I for one would have paid anything to have a Voodoo5 6000. External power supply baby, aw yeah.
Re:3dfx... (Score:2)
I abandoned my Voodoo 3 because I couldn't get drivers for XP, and Tomb Raider Chronicles kept crashing the machine whenever I turned on hardware acceleration and was slow as molasses without it (this on a twin Pentium 3 machine with 512 MB RAM).
I just picked up a GeForce3 Ti 500 and suddenly the machine works fine. Not only does TRC run fine, but the machine is now stable; I haven't had a crash since. I am even wondering whether to bother with the Windows XP upgrade at all.
The machine has no problem at all doing 30 fps at the max resolution of my monitor (1280x1024). It even runs OK at 1600x1200, but the output looks crappy because the LCD display is aliasing like mad.
BTW, I bought the Voodoo 3 with the machine because I wanted to get a PCI bus card and leave the AGP slot open for when the Voodoo 5 became available. It is a pity that the 5/6000 never appeared, since it gave the term 'gratuitous' a whole new meaning. Hopefully the new NVidia will fill the same role.
As a practical matter however I found that people are far more impressed by the size of the case than the capability of the machine. For my last machine I bought a $300 'server style' case. This is about the same height as a small ATX case but twice the width. The idea was to avoid the type of overheating problems some of my earlier machines had had. It worked pretty well, no heating problems at all and the top is wide enough for the printer to sit on it nicely.
Re:3dfx... (Score:2)
Actually, the GeForce3 already does a form of rotated-grid AA. The 2x and Quincunx AA modes both take diagonally-arranged samples, one at the pixel corner, and one at the middle. It does help a lot, compared to the GF2's implementation.
However, the 4-sample mode is still a straight horizontal/vertical grid pattern. And all modes currently do multisampling instead of supersampling, i.e. they take a single texture sample per pixel instead of up to four. This means it's significantly faster, but textures are blurrier, which can be compensated for somewhat by turning up the anisotropic filtering, but at a cost in speed...
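To make the multisampling-vs-supersampling trade-off concrete, here's a rough back-of-the-envelope sketch (plain Python; the relative costs are made-up illustrations, not measured GeForce3 numbers):

```python
# Rough per-pixel cost model for 4-sample AA (illustrative units only).
SAMPLES = 4

def supersampling_cost(tex_cost=1.0, coverage_cost=0.25):
    # Supersampling shades and textures every sub-sample.
    return SAMPLES * (tex_cost + coverage_cost)

def multisampling_cost(tex_cost=1.0, coverage_cost=0.25):
    # Multisampling textures/shades once per pixel, but still
    # depth-tests and stores every sub-sample for edge coverage.
    return tex_cost + SAMPLES * coverage_cost

ss, ms = supersampling_cost(), multisampling_cost()
print(f"supersampling: {ss:.2f} units/pixel")
print(f"multisampling: {ms:.2f} units/pixel (~{ss / ms:.1f}x cheaper, but blurrier textures)")
```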
There's certainly room for improvement though. It'll be interesting to see what's planned.
Re:3dfx... (Score:2, Informative)
3dfx was trying to ski uphill with a set of gear that had been obsolete for some time. nVidia was no longer worried about them, but rather about ATI and competing for OEM deals. Buying 3dfx just gave them protection against someone else buying 3dfx's IP and starting patent litigation against them.
And the fact of the matter is, you don't know what 3dfx was working on, whether it was "killer routines" or not. Gigapixel's technology may have breathed some extra life into the T-buffer, but it's hardware transform and programmability that will be the foreseeable future, two things 3dfx was not making much progress towards.
Look, I liked 3dfx as well. I replaced my Rendition with a Voodoo when there were exactly 3 games that ran on it. However, I don't let my fondness for 3dfx pull the wool over my eyes and convince me that they weren't doomed, or that they would have come back and done great things if only nasty nVidia hadn't bought them.
Remember, 3dfx chose to sell.
What will be the next big innovations? If someone can figure out how to get realistic sizes of embedded DRAM, that'll be nice for fill rate. Some people might try hierarchical Z-buffering, but that hasn't been a win for a while on most architectures. As vertex processing throughput approaches fill rate throughput, I expect to see things move to randomized splat sampling, since it appears to be the best way to get logarithmic complexity versus the number of primitives. I've not seen any other way to get realistic rendering times on datasets involving billions of polygons or more.
What I REALLY wish they would do... (Score:2, Troll)
Obviously, I'm wishing for this for compatibility with old Glide-only apps, not so that new ones could be written. No one has written new ones in eons AFAIK, since as soon as more open standards like OpenGL and DirectX came onto the scene people dumped the 3Dfx-only Glide route, thank God. But there are still several older games written in the Voodoo and Voodoo 2's heyday which are Glide-only, or which work significantly better under Glide than they do in DX.
These apps are few but they contain a couple of early PC classics, as well as the first and still-most-compatible N64 emulator UltraHLE and its offshoot SupraHLE. There are several Glide-wrappers that translate Glide calls into standard DirectX calls, but they don't work well or at all for everybody--me included. None of the Glide wrappers will let me play any Glide-only games or any game through SupraHLE. In addition, some older titles like the first *Tomb Raider* look much, much better under Glide than they do under DX.
So, for the sake of compatibility with old games, I wish they would release as much of the Glide code as they can, if not write a quick-and-dirty Glide implementation into their drivers. Some may remember that Creative Labs had promised near-perfect Glide compatibility for their TNT2-based cards back in 1999, in a driver they called Unified. But after 3Dfx sued them the project disappeared, and now that a couple of years have passed the desire for Glide capabilities has died down, since the games are now so old. But some of us like those old games, and the idea of continued compatibility. I just hate it when things break unnecessarily. It's funny: I can still run almost any DOS game (the oldest I've run going back to 1982), even if some of them need CPU-slowdown programs because they lack internal timing routines, yet the rise of proprietary 3D APIs like Glide and even DX (Microsoft could break backwards compatibility with older versions any time they wish) takes away that continuity and certainty.
Just my opinion, though.
Re:What I REALLY wish they would do... (Score:2)
Re:What I REALLY wish they would do... (Score:2)
http://www.ngemu.com/n64/plugins.php?page=wrapp
It has links to some of the most recent versions of the most popular wrappers. eVoodoo and XGL2000 are the best for most people, from what I've read.
If anyone has a better, more informative site, please let us all know.
Sorry, but you're mistaken. (Score:2)
> it isn't the complete Glide library that older applications (such as Quake II) depend on.
> Rather, it is a subset of the Glide API that allows Mesa and their new OpenGL driver to
> access the Voodoo3.
They never, ever, ever opened the Glide libraries. Instead, they released the source for the part of their Linux driver that hooks into Mesa. That's all. The rest has always been, and unless nVidia can be persuaded to do so will always remain, binary-only and hence not easily hackable (no successful efforts so far) to work with all video cards.
Note also that the Glide code is still closed and proprietary, and hence old Glide games will probably be unplayable unless one uses an old Voodoo card, which will become increasingly difficult as time goes on for obvious reasons. After all, right now how many Voodoo 1's do you see floating around? There's the occasional one on eBay. In a few years, the same will be the case for Voodoo 2's through Voodoo 5's.
Doubtless there are a few tiny bits of Glide which are proprietary to third parties and hence unreleasable. But from what I understand most of Glide was developed in-house by 3Dfx, so most of it should be releasable. The only question is whether nVidia can be prodded to release it.
What I want (Score:1, Interesting)
finally!! (Score:1)
Re:finally!! (Score:1)
Re:finally!! (Score:2)
However, it's currently not the GeForce 3 killer that ATI had made it out to be.
Re:finally!! (Score:2)
Re:finally!! (Score:2)
Re:finally!! (Score:2)
Re:finally!! (Score:2)
What's so "unreliable" about nVidia's linux drivers? I've had no problems with them (except that Linux performance isn't as good as winXX performace, but that might be because their windows drivers are regarded as best on the market for performance).
This is compared to my horrendous experience with the ATI Rage 128 and Linux, my last non-nVidia card. Granted, this was a while ago, and the drivers for Radeon might be more relibable. However, ATI's main problem is making poor drivers. It always has.. and judging from the first generation of Radeon 8500 drivers, will continue to be so for a while.
Re:finally!! (Score:1)
Tiny Little Item (Score:5, Informative)
In case you miss it 3/4 down the page:
NV25 Information
I was browsing nVidia's forum over @ Fools, and there was a link to Reactor Critical. Here's what they have to say about NV25.
Long-awaited NV25-based adapters. This graphics processor has capabilities similar to the XGPU and is a lot more powerful than the GeForce3 Ti500. Since it is manufactured on a 0.13 micron process, it has a good chance of being clocked at very high levels. The GPU comes in January/February 2002, while professional boards should be available in the second quarter.
ELSA is going to launch two boards based on the NV25GL processor, both of which support two LCD monitors, though we do not know whether there are two integrated TMDS transmitters or only one, with the second being external.
An NV25 that runs at 275 MHz, with 128 MB DDR SDRAM @ 250 MHz.
An NV25 that runs at 300 MHz, with 128 MB DDR SDRAM @ 330 MHz.
So, this is what a high-end NV25 part *might* look like...
* Rumoured 6 pixel pipelines
* .13u manufacturing process
* Core freq: 300 MHz.
* Memory: 660 MHz (effective) ~ 10.5 GB/sec bandwidth, assuming they stay with 128-bit data paths (see the quick check below the list)
* Supports TwinView
* Supports (finally) Hardware iDCT
* More powerful T&L unit, to include a second Vertex Shader
* Can't find the link, but there's a rumour stating that we can expect a Voodoo5 5500-esque anti-aliasing feature. The presumption is that the NV25 will bring a Rotated-Grid AA implementation to the table.
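For what it's worth, the bandwidth figure above checks out; here's a quick sketch of the arithmetic, assuming the rumoured 330 MHz DDR clock and a 128-bit bus:

```python
# Peak memory bandwidth = effective clock x bus width (rumoured figures, decimal GB).
ddr_clock_mhz = 330                  # rumoured physical DDR clock
effective_mhz = ddr_clock_mhz * 2    # DDR transfers twice per clock -> 660 MHz effective
bus_width_bits = 128                 # assuming nVidia keeps the 128-bit data path

bytes_per_sec = effective_mhz * 1e6 * bus_width_bits / 8
print(f"{bytes_per_sec / 1e9:.1f} GB/s peak")   # -> 10.6 GB/s, in line with the ~10.5 GB/s rumour
```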
It really does sound like a pretty amazing chip. I would be willing to bet we'll be hearing a lot more in the way of rumours as the New Year approaches.
GeForce 4? (Score:1)
Don't you think it's time they adopted a new family name for their newer cards? GeForce is kind of old-sounding now. I think it could use a new name, what do you think? Reply to this post with suggestions, please!
My friend Stigmata's suggestion: (Score:2, Funny)
[08:09p] <[stig]> they need to call it an AK47 or something
[08:09p] <[stig]> just so i say to my friends in school "Yeah so I picked up an AK47 the other day, its really powerful!"
[08:09p] <[stig]> in front of teachers of course
Re:GeForce 4? (Score:2)
A GeforceXP to go with AthlonXP and WindowsXP? God, what an unholy alliance.
C//
eh? (Score:1)
Stop the train, I want to get off.
Re:eh? (Score:3, Interesting)
These past 6 months, the GeForce 3 Ti200/500 came out. 6 months before that, the GeForce 3 came out, etc...
This kind of release schedule is what made 3dfx, a once undisputed leader in 3D technology, lag so far behind. It's also what has made Matrox not even really care about the 3D market.
Re:eh? (Score:2)
Re:eh? (Score:1)
Re:eh? (Score:3, Insightful)
Other Information Links (Score:4, Informative)
GeForce4 Specs [pcvsconsole.com]
Power without Application? (Score:2, Troll)
I have an 800 Duron system with a GeForce 2 MX. It plays any new game at 1152x968 flawlessly. The GeForce 3 can pump out perfect frame rates at even higher resolutions on any of the newest and most graphically intensive games available today. There simply is no challenge, whereas years ago there was always room to improve - frame rates, resolution, colour depth, texture size, etc.
Does improvement in the 2000's merely mean higher resolutions? If so, I don't want it. On average, most consumer-level monitors are 17" and support a max resolution of 1280x1024. These new cards can easily support that flawlessly, so there's little point in investing in a new card, and I see no point in running Max Payne, for example, at 4800x3600 resolution.
There is no "killer app" available today - even with the GeForce 3 being out for some time now - that will even begin to offer these cards a challenge, and with a GeForce 4 on the way, will NVidia be able to entice buyers into believing they need 300 fps at 4800x3600 resolution? In the end, I begin to wonder if NVidia is finding itself in a tough corner. Their hardware is revolutionary, but lacks any practical application.
Re:Power without Application? (Score:3, Insightful)
The environments still look crappy. It's just large polygons with painted edges. And we're nowhere near environments that react realistically to your actions.
Re:Power without Application? (Score:2, Informative)
The problem (as Hideo Kojima says in this interview [slashdot.org]), is that each of those pores will have to be designed. So as detail increases, so does game development cost.
Games won't be able to keep up with graphics cards until designs scale above the latest hardware. Some kind of fractal / organic method seems the only way to go.
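One classic "fractal" way of adding detail that nobody hand-authored is midpoint displacement; here's a toy 1D sketch in Python (purely illustrative, not something any current card actually does):

```python
import random

def midpoint_displace(profile, roughness=0.5, depth=4):
    """Refine a coarse height profile by splitting each segment and
    nudging the new midpoint by a shrinking random amount."""
    for level in range(depth):
        out = []
        scale = roughness ** (level + 1)
        for a, b in zip(profile, profile[1:]):
            out.append(a)
            out.append((a + b) / 2 + random.uniform(-scale, scale))
        out.append(profile[-1])
        profile = out
    return profile

# 3 artist-authored control heights become 33 detailed ones the artist never modelled.
coarse = [0.0, 1.0, 0.2]
print(len(midpoint_displace(coarse)), "points")
```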
Re:Power without Application? (Score:2)
So Doom3 = new card for me.
Re:Power without Application? (Score:2, Informative)
Developers will continually find ways to make these boards work. The game production pipeline is getting slower, that's all.
Re:Power without Application? (Score:2, Interesting)
I fail to see what's so revolutionary about their hardware. They're basically building huge DSP-style chips where much of the operation is hardcoded for better optimization. If chips continue to do this, of course you're going to see games struggle to catch up. The 3D graphics market seems to be doing very little that's revolutionary--just bringing the chips up to the process limitations of transistor size and speed.
The problem with the current model is that the graphics card itself isn't expected to have any intelligence of its own. It's simply expected to render as much as possible in as little time as it can. Right now, we're expected to pass millions of triangles to the card to render, as well as megabytes of textures to slap on them as fast as possible. Imagine if instead, the developer handed the graphics card a mathematical description of the model, and the chip did the rest, filling in details based on fractal algorithms. Instead of applying a bumpy-looking texture to a wall, you could make the wall itself bumpy with potentially infinite detail. That would be revolutionary, and would require incredible engineering to design.
Saying the current crop of graphics chips is revolutionary is like denying that SGI ever designed a Reality Engine in the first place. Just because greater integration allows it to be insanely fast doesn't mean it's anything really that new. NVidia's going to have to do something pretty amazing to keep from getting blown away when something truly revolutionary comes around.
Re:Power without Application? (Score:2)
Oh, you make it sound so easy ;-)
What about turning the last 20 years of hardware graphics acceleration on its head by introducing a *programmable* graphics pipeline? It's never been done before, and it changes everything. OpenGL is being totally redesigned from the ground up to cope with this huge shift in the rendering paradigm. This is the biggest thing since Renderman. I'd call it revolutionary.
Imagine if instead, the developer handed the graphics card a mathematical description of the model, and the chip did the rest, filling in details based on fractal algorithms.
What, like these [nvidia.com] hardware mandelbrots, rendered entirely by the GPU? Or this [nvidia.com] Game of Life? Water [nvidia.com] simulation, Perlin noise [nvidia.com], grass [nvidia.com], procedural 3D noise, particle systems [nvidia.com], all rendered by programmable vertex & pixel shaders on the GPU. Plus fire, fur, toon shading, silhouettes... Of course, this is only what a few people have thought of so far, on some first-generation hardware.
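For anyone wondering what "programmable" buys you, here's a toy CPU-side sketch of the sort of per-vertex program a vertex shader runs; the sine-wave ripple and its parameters are just made-up examples:

```python
import math

# Toy stand-in for a vertex program: runs once per vertex, no knowledge of its neighbours.
def ripple_vertex(x, y, z, t, amplitude=0.1, frequency=4.0):
    # Displace the vertex along z with a sine wave travelling across x.
    return x, y, z + amplitude * math.sin(frequency * x + t)

def run_vertex_program(vertices, t):
    # The hardware would do this in parallel; here it's just a loop.
    return [ripple_vertex(x, y, z, t) for (x, y, z) in vertices]

# A flat 3x3 patch of vertices, "rippled" at time t = 0.5.
patch = [(i * 0.5, j * 0.5, 0.0) for i in range(3) for j in range(3)]
print(run_vertex_program(patch, t=0.5))
```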
That would be revolutionary, and would require incredible engineering to design.
Hmm. Maybe so.
Re:Power without Application? (Score:2)
Re:Power without Application? (Score:2)
Suck on it Trebek.
Re:Power without Application? (Score:2)
There are actually people who use 3D cards for things other than gaming. Even still, we haven't reached a point where 3D cards are sufficient for gaming.
I don't think that the purpose of newer video cards is to allow someone to run at higher resolution. Sure, it is a byproduct of having a faster card, but the main goal is to push more detail at the lower resolutions. I'd rather play a game that only ran at 640x480 but had realistic lighting and was indistinguishable from a TV broadcast than a game that looked like Q3 but jacked up to 64000x48000.
I'd like to know what games play 'flawlessly' at 1152x968 on a GeForce2 MX. I have a GF2 GTS Ultra with 2x1.5 GHz PIV (win2k) and have trouble maintaining 60 fps at 800x600 for games like RTCW and Ghost Recon with full detail (the way these games should be played). Sure, QIII runs solid, but I don't consider that a new game. IMHO, gaming is horrible at anything less than 60 fps, and disabling vsync is ugly.
Re:Power without Application? (Score:2)
Since most monitors max out at a vsync of 120 Hz, what good are frame rates higher than that?
Re:Power without Application? (Score:2)
Re:Power without Application? (Score:5, Insightful)
One good reason to improve speed is simply better AA. Bet your card still shows those unsightly jaggies? Textures shimmer as they pass by, moire on the staircase, distant telegraph poles popping in & out of existence? Turn on 4-sample AA. Oh dear, now it's too slow. But not on my GF3, with anisotropic filtering turned up. I just wish I could do more than 4 samples.
Want more realism? Bump mapping everywhere? Gloss maps? More translucent surfaces? Detail & dirt maps? Specular highlights? More objects in the scene (i.e. more overdraw)? Guess you're going to need a lot more fillrate too.
Here are some more reasons, this time for better polygon handling - real-looking trees. Smooth, organic surfaces. Huge, detailed outdoor scenes that aren't always hidden by that strange vertical fog. Realtime, dynamic shadow volumes. And of course, accurate reflections. If you want to see the trees reflecting in the water, you have to render all those polygons twice. To see all the buildings in your nice shiny car, a cubic environment map is a good way, but requires the entire scene to be rendered six times!
And we haven't even started getting to the interesting stuff. Anisotropic pixel shaders, vertex shaders for displacement mapping or nice rippled reflections. Overbright textures for really nice highlights (or for running realtime Renderman shaders :-) Maybe some really computation-heavy stuff - ray traced surfaces, realtime radiosity solutions or global illumination.
Not enough? I'd like that all in stereovision too, please. Better double that workload again. Or perhaps on each wall of an immersive room? 5x more rendering.
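Putting rough numbers on those multipliers (illustrative counts, not any real engine's budget):

```python
# Rough count of full scene renders per frame once reflections and stereo pile up.
main_view      = 1
water_mirror   = 1   # planar reflection: render the scene again, flipped about the water plane
cube_map_faces = 6   # cubic environment map for the shiny car

renders_per_eye = main_view + water_mirror + cube_map_faces
stereo_factor   = 2  # one image per eye for stereovision

print(renders_per_eye * stereo_factor, "scene renders per frame")  # 16x a single plain view
```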
DOA3 looks really, really nice on my Xbox, but I can't help thinking how much better it'd be at 1600x1200 with AA. Or with some of the other refinements I've mentioned above. Sadly, the hardware still isn't there yet...
Believe me, the field of 3D has a lot of room to grow yet.
Immersive Room... (Score:2)
Actually that would be 6x more -- and it exists now at ISU's C6 [iastate.edu].
Hah! (Score:2)
When I can play a game that looks as good as the Final Fantasy movie, at a consistent 100 FPS, that's when it's fast enough for me :)
Re:Power without Application? (Score:2)
Tomorrow, I'll get the VGA adapter for the Xbox 2 & plug it into a monitor that CAN do those resolutions (like this one [ibm.com]. 9.2 Mpixels - yum :-). But I still want decent AA.
Triangle vs Quads (Score:2)
http://firingsquad.gamers.com/features/nv2/
Re:Power without Application? (Score:2)
Actually, moire is caused by an insufficient sampling frequency. Trilinear filtering will further reduce - but not eliminate - moire on textures. You can also easily see moire patterns on jagged lines that are changing in angle, such as when you walk by a staircase in a 3D game.
The only real advantage to 32 bit color was more alpha levels.
Well, the alternative to 32 bit textures was 16 bit textures, which show banding because they can't represent enough colours for a visually smooth gradient. 24 bits would be enough, but for convenience 32 bits (a power of two) was used, and the extra 8 bits would be ignored or occasionally used for alpha transparency.
Current AA is just as capable of losing a telephone pole as just raw rendering if by chance the samples miss the pole.
Capable, but less likely (as with higher resolution rendering). In fact, if the AA samples are taken from a pseudo-random or rotated-grid pattern, you can get visually better results from low-res AA than from higher-res with no AA, even though the same number of samples are being used. The pattern used can catch more thin polys than the aligned-grid pattern used by simply rendering at a higher resolution, giving more apparent detail, especially on a moving image.
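Here's a quick sketch of why a rotated/jittered pattern catches thin features more often than an ordered grid; the sample offsets below are illustrative, not the actual GeForce3 pattern:

```python
import random

# Four sample positions inside a 1x1 pixel.
ORDERED = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]           # 2 distinct x columns
ROTATED = [(0.125, 0.625), (0.375, 0.125), (0.625, 0.875), (0.875, 0.375)]   # 4 distinct x columns

def hit_rate(pattern, pole_width=0.05, trials=100_000):
    """Chance that a thin vertical pole crossing the pixel covers at least one sample."""
    hits = 0
    for _ in range(trials):
        left = random.uniform(0.0, 1.0 - pole_width)
        if any(left <= sx <= left + pole_width for sx, _ in pattern):
            hits += 1
    return hits / trials

print("ordered grid:", hit_rate(ORDERED))   # ~10% for a 0.05-pixel-wide pole
print("rotated grid:", hit_rate(ROTATED))   # ~20% -- twice as likely to catch it
```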
It's sad, but Square Soft still has to use a Ray-Tracer for cinematics because NVidia is still cracking that one.
Square used PRMan for nearly all of their rendering (and Maya for the rest), none of which is ray-traced. I don't suppose you saw nVidia's Final Fantasy demo? They rendered scenes from the movie in realtime on a Quadro DCC, and it looked damn good - same lovely shading on the hair etc.
As for ray-tracing on a chip, it won't be long before consumer hardware is available with a programmable pipeline and full floating-point pixels. With the ability to do arbitrary, full-precision per-pixel math, it's entirely possible to implement ray-tracing in hardware, with each mathematical operation performed by a single render pass, effectively a massively parallel computation. This has been done before, but never made it out of the lab into a consumer product.
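For flavour, here's what that per-pixel math looks like in ordinary code: a bare-bones ray/sphere test evaluated once per pixel (plain Python, just to show it's all arithmetic that a massively parallel pipeline could grind through):

```python
import math

def ray_sphere_hit(ox, oy, oz, dx, dy, dz, cx, cy, cz, r):
    """Distance along the ray to the sphere, or None on a miss (solve a quadratic in t)."""
    lx, ly, lz = ox - cx, oy - cy, oz - cz
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (lx * dx + ly * dy + lz * dz)
    c = lx * lx + ly * ly + lz * lz - r * r
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

WIDTH, HEIGHT = 16, 8
for j in range(HEIGHT):
    row = ""
    for i in range(WIDTH):
        # One primary ray per pixel, shooting down -z at a sphere 3 units away.
        x = (i + 0.5) / WIDTH * 2 - 1
        y = (j + 0.5) / HEIGHT * 2 - 1
        row += "#" if ray_sphere_hit(x, y, 0, 0, 0, -1, 0, 0, -3, 0.8) else "."
    print(row)
```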
Re:Power without Application? (Score:2)
The solution was obviously trilinear filtering. It does indeed remove moire, but at the cost of blurring the texture beyond recognition in some implementations.
Over-filtering causes blur. Correct filtering reduces aliasing (and therefore moire). This doesn't make up for lack of detail, of course - only more samples can give that - but for a given number of samples, good filtering preserves as much detail as possible while removing the high-frequency artifacting inherently caused by representing a continuous analog signal or picture with a limited number of point samples.
The obvious correct way would be some form of tricubic filtering, just as the obvious way to fix two-dimensional scaling would be bicubic filtering.
Trilinear filtering is not the "obvious correct" solution - it's a logical extension to bilinear and it's easy to do, but generally Gaussian filtering is considered to do the best job (but it's quite hard to do fast). It does depend on the case, though.
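For reference, here's a minimal sketch of what bilinear and trilinear filtering actually compute (illustrative only; real hardware also picks the mip level per pixel, handles wrapping, and so on):

```python
def lerp(a, b, t):
    return a + (b - a) * t

def bilinear(tex, u, v):
    """Sample a 2D texture (list of rows) with bilinear filtering; u/v in texel units."""
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    x1, y1 = min(x0 + 1, len(tex[0]) - 1), min(y0 + 1, len(tex) - 1)
    top = lerp(tex[y0][x0], tex[y0][x1], fx)
    bottom = lerp(tex[y1][x0], tex[y1][x1], fx)
    return lerp(top, bottom, fy)

def trilinear(mips, u, v, lod):
    """Blend bilinear samples from two adjacent mip levels by the fractional LOD."""
    lo = int(lod)
    hi = min(lo + 1, len(mips) - 1)
    scale = 2 ** lo
    near = bilinear(mips[lo], u / scale, v / scale)
    far = bilinear(mips[hi], u / (scale * 2), v / (scale * 2))
    return lerp(near, far, lod - lo)

# Two tiny mip levels of a one-channel texture (values are just example intensities).
mip0 = [[0, 64, 128, 192], [64, 128, 192, 255], [128, 192, 255, 255], [192, 255, 255, 255]]
mip1 = [[64, 192], [192, 255]]
print(trilinear([mip0, mip1], u=1.3, v=0.7, lod=0.4))
```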
The banding that is present in 16 bits of depth is not perceivable by the faulty human eye.
Incorrect. The sensitivity of the eye to gradations of colour or luminance depends on the range - slight variations in green or light grey are much more easily detectable than slight variations in blue or near-black - but 16 bits total is nowhere near enough to represent continuous tones in almost any range. Many people can easily distinguish differences in certain colour & luminance ranges even when using 24 bits. Your example is easy to spot - have you tried it?
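The numbers make the point quickly (assuming the common 5-6-5 bit layout for 16-bit colour):

```python
# Distinct steps available in a neutral grey ramp.
# 16-bit colour is usually 5-6-5 (R-G-B); neutral greys are limited by the 5-bit channels.
grey_steps_16bit = 2 ** 5    # 32 distinguishable greys
grey_steps_24bit = 2 ** 8    # 256

screen_width = 1024
print(screen_width // grey_steps_16bit, "pixels per band at 16 bit")   # 32-pixel bands: easy to spot
print(screen_width // grey_steps_24bit, "pixels per band at 24 bit")   # 4-pixel bands: much subtler
```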
The problem with banding that you see in most games of 16-bit color is caused when the game requests transparencies greater than 1 bit.
Huh? Perhaps you're trying to say, the problem with banding in 16 bit colour is *exacerbated* by repeated overlaying of semi-transparent images, which is true, especially if it's done badly (like early 3dfx hardware tended to).
The use of 32 bits I think is a bit wasteful at this point, but in the future, we would all like to see floating point color implemented in hardware as well (already in place in OpenGL).
32 bits is an absolute minimum for credible graphics work. Film effects typically use 64 bits (48-bit colour, 16-bit transparency) to avoid banding, or at the very least, a logarithmically-encoded 32-bit scheme.
Floating point image processing is sometimes required, more for representing out-of-range colours than for the extra precision, but is always done in software. The OpenGL API provides for the use of floating point colours, but I know of no OpenGL hardware, consumer or professional, that uses floating point colours in the hardware.
True, of course. But as that's still quite a ways off, both in terms of building such a display and dealing with the sheer amount of data required to represent such an image, we must fall back on techniques to reduce aliasing instead. 2 years is hopelessly optimistic - I would say quite a bit more than 20 years. Thus AA & filtering will be required for some time to come.
The nVidia demo was not realtime nor at a resolution that many gamers would accept.
Did you actually see it, live? I did. I don't know what you define realtime as, but I define it to be "a pace that gives the illusion of motion". Most people accept this to be a few frames per second, or more - and it was definitely that (I judged it to average around 10 fps). Any interactive change, such as a camera move, gave feedback within around a tenth of a second, which is more than enough to work with. Not fast enough for a twitch game like Quake, but good enough for a cutscene, and excellent for a 3D artist to check their work with.
As for the resolution, 1920x1080 is considerably more than 1024x768, which is what most gamers would accept!
I can't help but to see where ray-tracing would make things at least appear more organic.[...] These require polygons(or more CSG for raytracers).
Raytracing typically makes things look sharp & shiny, not organic. While raytracing is excellent for certain effects, generally you need a more advanced lighting solution (such as radiosity or global illumination) to get the more realistic look provided by soft lighting & shadows.
More polygons do help in defining more detailed or smoother organic shapes, but this has nothing to do with raytracing, as such. Incidentally, very few ray tracers use CSG shapes these days - only Real4D comes to mind. Polygons are far easier to use for representing arbitrary objects than a collection of geometrical shapes.
I'm not quite so optimistic about raytracing on a chip.
Actually, people have been doing realtime raytracing in software for years, even on a 486! Admittedly the resolution was low & the scene was simple, but when you think of the sheer floating point grunt of modern graphics hardware (nVidia claim their GeForce3 is capable of 76 gigaflops - a maxed-out, 256-CPU Cray T3D could do 50 gigaflops) and the ever-increasing parallelism being added to these chips as well as the growing clockspeed, I think something will be put together a lot sooner than you think...
I agree that better textures and higher resolutions are not a substitute for more detailed scenes & better physics, but fortunately one does not exclude the other. Two years ago the focus shifted to more polygons, and now game detail is soaring by an order of magnitude.
This year we added programmable vertex & pixel pipelines, and already we're seeing the results (Xbox games feature better bumpmapping, more natural surface lighting & realistically distorted reflections & refraction, in addition to smooth characters & increased detail). OpenGL is being redesigned from scratch to encompass the new paradigm. What will next year bring?
Re:Power without Application? (Score:2)
But let's zap forward a few years to when we should see the new Doom game. It will use the greatest feature of the nVidia cards: their programmable GPU. Muscles will tighten and skin will stretch based on corresponding equations. All of this will be rendered on the fly as the character moves.
It should give the GeForce 4 something to crunch on.
I can't wait til games are like Toy Story or Monsters Inc.
Re:Power without Application? (Score:2)
From: http://forums.overclockers-network.com/cgi-bin/ul
I was using a V7100 (with a res of 2360x960, 32-bit and 70 Hz refresh)... until my Radeons came in. With the V7100 I could move 250,000 polys in real time with a little bit of lag... now with the Radeon 8500 I can work with scenes up to 1,000,000 polys with no lag... just beautifully real-time flow! Just to bog it down to the same lag as the V7100 I have to open 1,500,000 polys!
what the hell are you talking about? (Score:2)
There is NO video card out today that can handle the latest games at 1024x768 with all the bells and whistles turned up all the way. Sure, it may look ok, but it will still slow down to under 30 fps in many cases. And there is also Doom 3 on the horizon, which will run at under 30 fps on a GF3 according to Carmack.
There is still PLENTY of room for more performance with video cards, and it will continue until we have FPSes running at 1600x1200 with completely photo-realistic environments and full anti-aliasing, achieving 120 fps. And even then, I'm sure someone will have a reason to have even more power in their video card.
Current video cards are not even close to that.
Re:what the hell are you talking about? (Score:2)
You're quite right. People just don't get it. They simply don't understand that a graphics card could literally be 1000 times faster, and that still wouldn't be enough.
C//
Actually... Re:Power without Application? (Score:2)
For example, I'm running a highly overclocked Radeon 8500 (64 MB), equivalent to a slow GeForce3 or a fast GeForce2:
With all the graphical settings turned up to their max in the game (extra-high character texture detail, max everything else), and only 2xAA, I CANNOT PLAY THE GAME at 800x600 (the framerate falls to 4 fps during combat). If I turn things down some, I can get away with playing at 800x600. Even with weak settings, though, 1024x768 is out of the question unless I turn AA completely off. The only way around this is to switch to vertex lighting (as opposed to lightmaps - which are wonderfully beautiful)...
Anyway, the point is, there is at least one game out NOW in which a card like this would be useful. Since games only get more and more complex, by the time an NV25-based card hits the market, many games will want/need this sort of speed.
Re:Power without Trees? (Score:2)
Grip3n: I have an 800 Duron system with a GeForce 2 MX. It plays any new game at 1152x968 flawlessly.
I too have a Duron 800 and Geforce 2 MX, and until last week I would have totally agreed with you.
Last week I installed 3D Mark 2001 [madonion.com].
Try it yourself. Wait for the scene with the trees... suddenly your system will drop to 1-2 FPS.
Trees. That is why we need a better CPU & graphics card.
Notice how all the "good" games are set indoors, in cities, or in deserts? Yet all the fun army combat films take place in rural areas?
Think of all those war or commando films where you've got a lone gunman sneaking around using trees for cover. Now when you look at games, you're always sneaking around using crates and boxes for cover. That's why we need better hardware.
The games run fast on our Duron 800 / GF2 MX systems because those games are set in an environment which is specifically designed not to challenge our hardware. It's always a sewage system, a couple of city blocks, an underground base, a desert airport. It's never a forest, a farm, a suburb.
Re:Power without Application? (Score:2)
Game developers are no longer pushing the capabilities of graphics cards (note: I am a game developer). You can look at this several different ways:
1. We're glad to finally have enough power to not worry about getting just polygons on the screen, so we simply write games and no longer have the same technological obsession that many PC buyers do.
2. Newer cards like the Radeon and GeForce 3 are pricey enough and new enough that only the hardcore fanboy types are buying them. If you assume a GeForce 3 level card, then your market gets reduced by a factor of 20 or more. Probably more, as there's been a growing trend to not even put 3D accelerators in new systems (other than bare-bones chips, that is).
3. We still don't really know how to push older generations of cards yet. Ever see games like Spyro: Year of the Dragon on the PlayStation *ONE*? Wow, is that impressive. PC games look better, but not an order of magnitude better. On the one side we have a system that doesn't even have a z-buffer, and on the other we have state of the art. Sometimes I think that if cards had stopped advancing past the Voodoo 2, then game graphics would have still kept advancing all the way to where they are now.
Disclaimer: I know, I know, people who drop $600 on a new graphics card the day it is released don't want to hear this. They're in their own world anyway.
possible .13 chip delay (Score:2, Informative)
Video capture (Score:3, Interesting)
I have an old Asus TNT3400/TV, and I never get to use the TV/capture features.
Anyone have any recommendations?
Thanks,
Ian
Re:Video capture (Score:3, Informative)
Honestly though, I say save $50-100 on the video card and buy the TV card separately. That way you a) save money on the video card and b) don't need to keep buying the extra $50-100 card every year, since the capture card will still work just as well.
As for Linux compatibility, I've heard mixed reports about all the Asus Deluxe and the ATI All In Wonders, so you'll have to search around online and take a guess.
Will this card make ascii display faster? (Score:5, Funny)
Re:Will this card make ascii display faster? (Score:2, Funny)
Re:Will this card make ascii display faster? (Score:2, Funny)
Only if you rename nethack to QUAKE.EXE (Score:2, Funny)
You see here a shiny nvidia card.
.
A geforce for 599 zorkmids. Pay? [yn] (n)
y
You bought a geforce for 599 gold pieces. --More--
"Thank you for shopping in Tom's discount hardware!"
R
You remove the heatsink.
You feel like you've done something bad.
#pray
A Large voice booms: "Thou hast angered me." --More--
The geforce explodes! You are blinded by the smoke!
It hits! It hits! --More--
It hits! It burns! --More--
It bites!
You die.
The shopkeeper gratefully inherits all your possessions.
Goodbye wideangle.
You were Microsoft-aligned.
You were inspired by user 31387.
You were unlucky.
You were broke.
Re:Will this card make ascii display faster? (Score:2)
Driving down the price of GEForce 3.. (Score:2, Insightful)
Warrior with zits ... (Score:2)
Unless you were playing Hercules The Teenage Years or something
nVidia playing with fire (Score:2)
That said, I think nVidia is playing with fire in simultaneously building "NV2A" chipsets for the XBox and trying to push the envelope on the PC. I understand they're covering their bases: games on the PC are withering in comparison to console games (at least for now -- this is a recurring cycle with every new console that comes out). However, by creating one standard that users can lock in to, what's the impetus to purchase a PC and upgrade to a higher-end video card?
Wired magazine had an interesting take on the "secondary benefits" to Microsoft of making the XBox successful. One was the obvious possibility that they will leverage the living room as a new monopoly (which, rightly, they agreed was simply conspiracy theory). However, another "benefit" is getting console developers familiar with the (admittedly not that bad) DirectX 8 interface, and bringing them back to the PC to develop quality ports. This, in my mind, is the only way nVidia is going to honestly stay in the computer video card game at the growth rate it's been going.
I'm wondering, perchance, if this will release the other extreme: eventually, people just kind of settle on a certain type of technology "good enough" for their present needs. The internal combustion engine was pretty much finalized 60 years ago, and few real modifications have taken place since then. Televisions, likewise, were pretty much finalized in technology 30 years ago. Outside of a few fringe stragglers, very few people now make the jump to "upgraded" tech. I wonder if PCs will be next.
And if it is, where's nVidia's future in all of this?
2D quality? (Score:2)
Generally, I don't play games much, which is why I also don't care much about 3D performance (it doesn't hurt to have it, though :p ). I care a lot more about 2D quality, and NVIDIA-based cards aren't exactly known to be the best there...
Besides that, I care about driver quality - AFAIK the NVIDIA drivers are generally great, both for Windows and Linux (although they are closed source). I also care a lot about noise. I don't want a card that needs a huge noisy fan. No active cooling, thanks!
AFAIK, the GeForce3 Ti200 doesn't *need* a fan (the NVIDIA reference card doesn't have one), but most cards come with a fan anyway - if the heatsink is good enough, it should be safe to disable the fan.
I've heard Leadtek cards are some of the only NVIDIA cards that actually have good 2D quality.
What screen-size limits? (Score:2)
Big 3 - partially OT (Score:2)
You used to have all sorts of chipset makers... S3, Matrox, ATI, Western Digital, Tseng Labs (whatever happened to them, anyway?), 3dfx...
What happened? Is this consolidation a good thing or a bad thing?
Re:Question (Score:2)
Re:Question (Score:2)
Note to self: Next year, find out what's on sale beforehand (techbargains.com works nicely), buy it the Wednesday before at full price, then do a price match once it goes on sale at around 11am or so :)
Re:On the subject of gaming.... (Score:2)
This isn't really off-topic. The subject is gaming video cards. When a source for those games disappears, the question is worth asking.
Re:LCD Tangent (Score:2)
Re:LCD Tangent (Score:2)
Re:LCD Tangent (Score:3, Informative)
Of course, you'll pay real money for the Samsung, but I don't know anyone else selling a 24" LCD monitor these days.
Re:LCD Tangent (Score:2)
IBM's T221. LCD, 22", 3840 x 2560, with a superb viewing angle, and a pricetag to make a grown man weep.
Unfortunately, it takes nearly half a second to get an image onto the display :-(
Re:LCD Tangent... IBM CRT Tangent (Score:2)
Since we broke into the conversation, has anyone used the IBM 275 20" (19.8" viewable) Flat [ibm.com]?
It does 1920x1440@75Hz. I found one good review on it but nothing else. I would like one for Christmas, but I haven't been able to get a review from a source I know.
I was comparing it to the Mitsubishi 200 Diamond Plus Natural Flat [epinions.com],
22" (20" viewable), 1800x1440@72 Hz.
Re:Whats the quake 3 benchmarks? (Score:2)
How about REAL-looking 3-D at high speed? (Score:2)
With the GeForce3 and newer chipsets, you now have the capability to render far more realistic-looking games in real time and still maintain very high frame rates. The current GeForce3 Ti 500 can render DirectX 8.0 and later-compliant games with all 3-D effects turned on at over 60 fps, even at 1280x1024 32-bit color, on today's faster Pentium 4 and Athlon CPUs.
It does matter for better 3-D quality (Score:2)
Try running a game that truly takes advantage of DirectX 8.x routines such as Flight Simulator 2002. If the display driver for your Radeon card properly addresses DirectX 8.x support you should be able to run FS 2002 at around 45-50 fps at 1024x768 32-bit color with no problems even with all 3-D effects turned on (I don't find running above 1024x768 to be useful in most games). That means even very complex 3-D scenes will be rendered with very smooth motion.
By the way, given the fact that memory is dirt-cheap nowadays, you may want to upgrade to 256 MB of RAM. That makes a big difference with the very latest games since you won't have to swap files to and from the hard drive so often.
Very much so (Score:2)
If you want to look at the scenery, sure, 30 fps at 640x480 is OK. But when it comes to making a railgun shot across a map, 1600x1200 vs. 640x480 is like foreplay with oven mitts on. Sure you can do it, but you can do it a LOT better the other way.
Same with framerates. If, at a distance, your head is only 3-4 pixels wide (which would be only one at 640x480), the more FPS I am getting, the more opportunity I have to register (visually) that my crosshairs are on you, and thus I am a better player.
Plus, there are many a "documented anomaly" at higher FPS that the l33t gamers use in Q3 (and I guess other games, but since everyone plays Q3 or Counter-Strike, who cares =)
Re:In the end does it matter? (Score:2)
Re:Will ATI even bother? (Score:2)
Now the 8500 chips are showing themselves to be, with decent drivers, at least roughly equivalent to the high-end GeForce3 chips. Though perhaps not quite as feature-rich, their performance is quite competitive with nVidia's offerings. If they keep up this pace, they may very well not only catch up with nVidia both performance- and feature-wise, but also pass them.
In a way, the situation right now seems to me reminiscent of nVidia's situation when they brought out the RivaTNT chips. They had been playing catch-up to 3dfx; the Riva128 series would be analogous to ATI's Radeon chips, and the first TNT analogous to the 8500 chips. Back then people thought nVidia's progress was impressive but that they would probably never quite catch up with 3Dfx, and we all know how that went...
Basically, I would say that since the release of the first GeForce chip, nVidia's development has slowed down and hasn't made nearly the strides seen in the progression from the Riva to the TNT and on to the GeForce, since the market didn't pressure them enough. ATI is taking advantage of that and may take the crown in the somewhat-near future....
Re:Will ATI even bother? (Score:2)
Re:Hardware iDCT Support (Score:2)
Pixar uses sub-pixel rendering (Score:2)