nVidia's GeForce 256 Breaks Out; changes 3D world
Hai Nguyen writes "nVidia officially unveiled the GeForce 256 (the chip formerly known as NV10).
Its architecture emphasizes both triangle rate and fill rate, so the chip can render highly detailed 3D environments and models at smooth framerates. Go get the full info." Holy Moses. I want one. Now.
changes 3D world? (Score:1)
Drop the marketing fluff -- we don't want any here. Just the facts.
The biggest questions we all have (Score:1)
1) Will the driver be open sourced as the TNT/TNT2 driver is?
2) How soon will it be available to the public?
3) What kind of framerate will I get when fragging LPBs in quake 3?
Fun fun fun!
Hokey smokes (Score:1)
I think, if anything, seeing graphics like this is at least going to set a fire under some butts to get the next generation of stuff out. It is also good to see nVidia not having to worry about selling the chips. This whole Diamond/STB thing had me worried for a while.
Mister programmer
I got my hammer
Gonna smash my smash my radio
Re:close those tags.... (Score:1)
Re:Hokey smokes (Score:1)
Re:changes 3D world? (Score:2)
Aren't you tired of watching perfectly flat walls with big posters stuck on them?
Re:Hokey smokes (Score:1)
-lx
Feature Article (Score:3)
Some specs (Score:4)
Oh my God! (Score:2)
The addition of Transform and Lighting really _is_ revolutionary. Once you've used one of these babies, you won't want to go back.
There's a list of useful links at Blues News (www.bluesnews.com)
Availability (Score:2)
Re:Hokey smokes (Score:1)
^.
( @ )
GeForce vs. Playstation II (Score:2)
The Playstation II has a modified R10000 processor with very hefty floating point extensions - it won't have much of a problem doing geometry transformations. IMO, it will probably be about on par with the graphics cards floating around at the time of its release. It won't leave them in the dust, but neither will it be left in the dust.
OTOH, a friend in the gaming industry says that the Playstation II has architectural problems that might degrade performance (low system bus bandwidth, among other things). We'll see what happens when it ships.
Analysis (minus fluff) (Score:5)
nVidia has a card which can do supported operations fast. It obviously has a lot of fill. It'll be a good board. Of course, it'll still be slow in D3D.
nVidia seems to have chosen not to support the hardware bump mapping of the Matrox G400, an extremely high-fill card (runs beautifully bump-mapped in a window at 1600x1200x32bpp) without geom accel. 3DLabs' long-awaited Permedia 3 will also have some kind of hardware bump. IMHO this is a relatively flexible feature -- you could do a lot with it. It remains to be seen how flexible nVidia's lighting and geom turn out to be.
I'll be impressed if D3D ever delivers real hardware geometry benefits. We have yet to see a single benefit of DX6 over DX5 (not screwing with the FP control word, especially) actually work. I'm highly suspicious of anything MS sez.
So what about the remaining behemoth, 3dfx? Their Voodoo4 is supposed to be an extremely high-fill card (fill has always been their hallmark). It may not support any more hardware features (e.g. bump, lighting, geom accel), but it will fill like crazy. It's supposed to do full-screen anti-aliasing.
I'm eagerly awaiting the new generation. But I expect the real crazy stuff to start happening in the following generation.
Competition for this chip. (Score:2)
The Voodoo 4 will be coming out around Christmas, and it will have hardware geometry as well. Rumour has it that it too will be at 0.22 micron instead of 0.18. I don't remember the name of the chipset off-hand.
S3 has also rolled out a new chip, with four pipelines and hardware geometry, at 0.18 micron. Check Sharky Extreme for details.
Also, I've heard some reports that the PlayStation II will beat the living daylights out of PIIIs loaded with the most recent and modern 3D accelerators of the day. Even with this kind of chip, and most likely other chips to follow from nVidia's competitors, does this still hold true? Will the PlayStation II live up to the hype?
No, but it won't sink either. See my previous response on this subject (check my user info to find the post).
Sorry, this is fluff (Score:2)
So it pushes 15 million triangles a second and a PIII only does 3.5 million. Well, where do those triangles come from? Exactly what is used to store these geometries? I'd say that if they went with a rather Voodoo/Glide-esque approach of putting all the geometries on the card and then issuing minimal commands to position, scale and rotate them, then it could be significant. This, however, would be pathetically incompatible with all existing games -- and frankly, the bus is the bottleneck. That PIII is probably pretty comparable for doing transforms; it just cannot get them across the _bus_ as fast as a cached copy of the geometry on the card can (a sketch of that contrast follows).
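A minimal sketch of that contrast, in plain OpenGL (nothing GeForce-specific; whether a display list actually ends up cached in card-accessible memory depends entirely on the driver):

/* Immediate mode resends every vertex across the bus each frame; a
 * compiled display list is submitted once and then replayed with only
 * a few transform commands per frame. */
#include <GL/gl.h>

static GLuint model_list;   /* hypothetical cached model */

void build_model(void)
{
    model_list = glGenLists(1);
    glNewList(model_list, GL_COMPILE);
    /* ... thousands of glNormal3f()/glVertex3f() calls, issued once ... */
    glEndList();
}

void draw_frame(float angle)
{
    /* Per frame, only a handful of commands cross the bus; the geometry
     * itself does not have to be resent. */
    glPushMatrix();
    glRotatef(angle, 0.0f, 1.0f, 0.0f);
    glCallList(model_list);
    glPopMatrix();
}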
I saw what appeared to be a statistic that implied that games might see a 10% improvement in framerate. That, I think, is closer to the truth.
Sorry guys- you've been Hyped.
Re:Hyperbolically? (Score:2)
You do know what the word "hyperbole" actually means, don't you?
--
This has stiff competition. (Score:3)
3dfx is rolling out another chip, as people have been talking about for a while. It is rumoured to be at 0.22 micron too, and will have hardware geometry processing.
S3 already rolled out a new chip - at 0.18 micron. It too has four texel engines and hardware geometry processing.
IMO, the S3 chip is actually the one to worry about. Architecture may or may not be great, but at 0.18 micron it may outperform nVidia and 3dfx's offerings just on linewidth. ATI did something similar when it rolled out the Rage 128, if you recall.
What I'm waiting for is the release of the GeForce or (insert name of 3dfx's offering here) at 0.18 micron. However, I'll probably be waiting a while.
Re:Some specs (Score:1)
I was pretty excited to hear the V3 3500 supported DFP but they got rid of it when it changed over to the 3500TV. Anyone know of any plans for an upcoming video card to support DFP?
Re:changes 3D world? (Score:2)
"Consumer level" is correct. High-end graphics workstations have been doing this for several years; in fact, the entire OpenGL pipeline has been in hardware for quite a while. Check out 3dlab's high-end boards for examples, or take a look at their competitors. These tend to be 64-bit PCI boards in the $2,000-$3,000 range.
The consumer graphics manufacturers have been making noise about using geometry processing for a while now, but have only recently gotten around to it. In that market, yes it could be called revolutionary (in that it substantially changes game design).
Diamond still in? (Score:1)
A few more notes (Score:2)
GeForce has environment mapping (IIRC so does the Permedia 3) but not bump mapping.
It can do 8 free hardware-accelerated lights.
A Voodoo3 on a fast machine under Glide can handle about a 10-12kpoly scene, lighted and textured with effects and physics, running at about 20-25 fps on a 450A. I'll be very impressed if the GeForce can do twice that -- 25kpolys at 25fps, or roughly 600k real polys/sec. (BTW, a Voodoo3 under ideal conditions without features can do 500k "fake" polys/sec.)
But 15 million polys/sec is the kind of bloated number that usually comes out of graphics shops. Don't believe it for a second.
As for 100kpoly models lighted with effects running smoothly, I'll believe it when I see it.
if DaveS or DaveR wants to correct me on any of this stuff, go right ahead guys...
Not exactly. (Score:1)
It may not support features of 3dfx & matrox, but: (Score:1)
Sound familiar? Back to the days of 3d-acceleration in games before DirectX?
Playstation 2 specs. (Score:2)
Not if it can only transform 36 million polys per second, it can't (sustained transformation figure from an older slashdot article).
Based on all of the other numbers in that article, I suspect that they dropped a decimal point in the "75 million polys rendered" figure. That, or they're talking about flat-shaded untextured untransformed polygons.
Re:Not very impressive at all (Score:2)
----
Um (Score:2)
There were no 3D games before D3D. No one had cards.
Just 'cos a card supports D3D doesn't mean you can assume your program will work right. You still have to test and debug each individual card. This is the voice of experience.
Matrox's bump will be in D3D, I'm pretty sure...
People use Glide rather than D3D 'cos it's way faster. Speed really is all that matters.
Re: FUD! (Score:2)
Do you see GeForce boards on the shelves?
Do you see Playstation 2s on the shelves?
They've been conversation topics for months, but all either has now is alpha test hardware. It becomes difficult to see what point you are trying to make, given that.
Re:No linux info. (Score:1)
Re:Hokey smokes (Score:1)
Where they excel is the ease of use and installation and the homogeneity of software design.
However, such applications, like Final Fantasy VII, have been ported to the PC too.
Re:No linux info. (Score:1)
All ya glide fans out there, sorry to bust ... (Score:1)
Uhm... Re:Um (Score:1)
I do believe Glide was out before it, and DOS games (like the original Descent?) used it. Wasn't that before D3D?
There were people with the cards back then, not the millions there are now, but enough for accelerated 3D to be implemented in more and more games.
.Shawn
I am not me, I am a tree....
Big performance (Score:1)
Caveat about the tree: (Score:3)
Why only one tree? What program, exactly, did this? There are some very serious questions to ask about demos like this. I, too, write software and try to come up with impressive claims. I can legitimately say that I'm writing a game with a ten million star universe with approximately sixteen million planets, of which the terrestrial ones (hundreds of actually landable-on planets) have terrains the size of the earth at 3 dots per inch for height information.
This is misleading as I'm doing it _all_ algorithmically- it's fair to ask 'well, what does it work like?' but nonsensical to imagine that somehow I'm messing with kajilliobytes of data. It's faked. (I have stellar distribution whipped, and am currently working on deriving star types, slightly modified according to actual galaxy distributions- the main current task is to come up with RGB values for the actual colors of star types, as this is more like white-point color temperatures than anything else- very close to updating my reference pictures.)
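To make "algorithmically" concrete, here's a toy sketch (my own illustration, not the actual game code): the height anywhere on a planet-sized terrain is derived from a deterministic hash-style noise function, so nothing is stored -- only the samples you actually visit ever get computed.

/* Toy sketch: deterministic hash-based noise. The same (x, y) always
 * yields the same value, so "terrain the size of the earth" costs nothing
 * to store -- heights are computed on demand. */
static double noise2d(long x, long y)
{
    unsigned long n = (unsigned long)(x + y * 57);
    n = (n << 13) ^ n;
    n = n * (n * n * 15731UL + 789221UL) + 1376312589UL;
    return 1.0 - (double)(n & 0x7fffffffUL) / 1073741824.0;   /* roughly -1..1 */
}

/* Hypothetical height query: combine a coarse and a fine octave. */
double terrain_height(long x, long y)
{
    return 100.0 * noise2d(x / 64, y / 64) + 10.0 * noise2d(x, y);
}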
At any rate, will you believe me when I say that this reeks of demo? It wouldn't be that surprising if they used _all_ the capacity of the card to do that one tree. _I_ would. Might that be why there is only one tree and _no_ other detail at all (one ground poly, one horizon)?
More relevantly, what was used in doing that? If it was vanilla OpenGL, then okay, I concede this is very big. If they had to write their own software to do that, then you have a problem. Here in Mac land (also LinuxPPC land)...
You guys are looking at exactly the same situation here. Be damned careful. If you go with a proprietary technology you will fragment, and your developers will be faced with tough choices and could end up writing nVidia-only much as some developers in Mac land are writing ATI-only. This is bad. Do I have to explain why this is bad?
Let's get some more information about exactly how you operate this geometry stuff before getting all giddy and flushed about it, shall we? I don't see how existing software will use it without being rewritten. And when you do that, it's an open invitation for nVidia to make the thing completely proprietary and lock out other vendors.
Or maybe they'd give the information out to people at no cost and not enforce their (presumed) patents for a while, only to turn around a year from now when they've locked in the market, and start bleeding people with basically total freedom to manipulate things any way they choose? But of course nobody (GIF) would think (GIF) of ever doing (GIF!) a thing like (GIFFF!) _that_...
Re:Analysis (minus fluff) (Score:1)
> it may be finally time to kill some very old paradigms in 3d hardware...
I'd be interested to hear your thoughts on what might replace the current paradigm. Are you thinking voxel-based rendering techniques?
Am I wrong when I state that the amount of research (even recent!) devoted to rendering techniques based on the current paradigm dwarfs the effort put into researching more innovative approaches to rendering?
Voice of experience? (Score:1)
That's funny... I remember playing Wolf3D and DOOM before Direct3D was even a glimmer in Microsoft's eyes.
Re:All ya glide fans out there, sorry to bust ... (Score:1)
15fps can mean a hell of a lot. If you don't have high-end everything, then 15fps can mean the difference between 15fps and 30fps. _You_ can try playing Q3test at 15fps.
Of course, when you're getting 120fps, 15fps means next to nothing.
.Shawn
I am not me, I am a tree....
Voice of Experience? (take two) (Score:1)
Posted that last one when I was only halfway done flaming:
There were no 3D games before D3D. No one had cards.
That's funny... I remember playing Wolf3D and DOOM before Direct3D was even a glimmer in Microsoft's eyes.
People use Glide rather than D3D 'cos it's way faster. Speed really is all that matters.
You've got to be kidding. Would you mind explaining how one API can be faster than another? Sure, a driver or hardware can be faster, but an API is just a specification. People use whatever API will get the job done. If the job is to only support 3dfx cards, they use Glide. If the job is to support a number of cards, they use D3D. If the job is portability, they use OpenGL.
Re:Um (Score:1)
I seem to recall picking up my first Voodoo1 card, downloading GLQuake and having a ball. D3D wasn't even in the picture.
Not to mention Duke3D, Doom, Doom2, Wolf, Triad, etc.
Short memories, or just mild drugs?
Re:Hokey smokes (Score:1)
I bet they will announce a GeForce board soon.
Re:Availability (Score:1)
Re:Competition for this chip. (Score:1)
Why?
Because (AFAIK) the TNT2 and Voodoo3 are produced at the same "plant"...
Re:Caveat about the tree: (Score:1)
RE: Nvidia Grunts on 'Today Show' (Score:1)
Re:Hokey smokes (Score:1)
Re:The biggest questions we all have (Score:1)
2. I've seen "late September" mentioned!
3. Really great framerates at 1024x768 and above, and it will look beautiful - probably much better than with current cards.
3a. Personally, I am looking forward more to Team Fortress 2 - if Valve can do the same with multiplayer as they did with singleplayer (Half-Life), it's going to be so much fun!
Re:Caveat about the tree: (Score:1)
As far as proprietary natures go, your post gets _way_ ahead of itself. The GeForce 256 will be accessible via OpenGL and DX7. Important extensions to API functionality are performed via review by the ARB and by Microsoft DX version revs. There is no indication that NVidia will deal with the additional capabilities of this chipset in a manner any different from the way multitexturing extensions were handled.
In any case, "how you operate this geometry stuff" is via the OpenGL API, which has been "operating this geometry stuff" in higher-end equipment for some years now. The ability to render high-polygon models in real-time is truly a revolution; not only are texture-mapped low-poly models unsuitable for a wide rage of visualization tasks, they are simply inferior to high-poly models in terms of realism, flexibility, and reusability. From a development perspective, it has little or nothing in common with GIF patent/licensing issues.
One last note: if this "reeks of demo", there's a very good reason for it. It _is_ a demo, designed to demonstrate the capabilities of the chipset. It is neither a benchmark nor a source-level example of _precisely_ how the card behaves. You'll likely have to wait for the silicon to ship before you have either. Whether or not "vanilla OpenGL" was used for the demo is irrelevant, since OpenGL is an API and does not specify a particular software implementation. Implementation is the purpose of _drivers_.
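To be concrete (this is my own minimal sketch, not anything from NVidia's demo): "operating this geometry stuff" looks exactly like the OpenGL transform and lighting calls programs already make; a T&L chip just executes them in hardware instead of on the host CPU.

/* Standard OpenGL transform + lighting; nothing vendor-specific here. */
#include <GL/gl.h>

void draw_lit_object(float angle)
{
    GLfloat light_pos[] = { 1.0f, 1.0f, 2.0f, 0.0f };   /* directional light */

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT0, GL_POSITION, light_pos);

    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glRotatef(angle, 0.0f, 1.0f, 0.0f);   /* transform handled by the card, not the CPU */

    /* ... submit the high-poly mesh (vertex arrays, display list, etc.) ... */

    glPopMatrix();
}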
MJP
Re:Um (Score:1)
Re:No linux info. (Score:1)
Is there a way to filter out Hemos stories?
Yes. Get a login/password, click "preferences" and filter away.
Ray-tracing (Score:1)
Re:Feature Article (Score:1)
Re:A few more notes (Score:1)
I believe the OpenGL spec only lists 8 as a minimum. Of course games/apps can use any number of "virtual" lights.
If you're a programmer, check out http://www.opengl.org/Documentation/Specs.html
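If you're curious what "8 as a minimum" means from code (my own tiny sketch, not something on that page): an implementation may expose more than 8, and a program can simply ask at runtime.

/* Query how many simultaneous hardware lights the implementation exposes. */
#include <GL/gl.h>
#include <stdio.h>

void report_lights(void)
{
    GLint max_lights = 0;
    glGetIntegerv(GL_MAX_LIGHTS, &max_lights);   /* the spec guarantees >= 8 */
    printf("This implementation exposes %d simultaneous lights\n", (int)max_lights);
}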
Re:How do you pronounce "GeForce" anyways? (Score:1)
----
Dave
All hail Discordia!
Re:Some specs (Score:1)
Aka RGB48A, the way MNG does? That would be incredibly awesome. (Note for readers: the current 32-bit video, aka RGB24A, only supports 8 bits per channel for sampling and playback. Broadcast television supports up to 10 bits per channel; true 35mm film is closer to 96 bits, or 32 bits per channel. This is independent of things like gamma and transparency.)
Re:All ya glide fans out there, sorry to bust ... (Score:1)
Re:Big performance (Score:1)
Wrong (Score:1)
Secondly, D3D is a non-issue. First of all, DirectX 7 is *fast*. Second, the games most likely to take advantage of geometry acceleration first will be OpenGL based. Glide sucks. OpenGL is way more open and cross-platform.
Third, 3dfx can't defeat everyone in fillrate. They are bound by the speed of available RAM, which is maxing out at 200MHz. All they can do is start using multiple pixel pipelines like NVidia and the Savage.
But to beat NVidia, they'd have to use a 512-bit or 1024-bit architecture (8 or 16 pipelines), which, unfortunately, is difficult with the current manufacturing process (.22 or
So I'm sorry to say, the Voodoo4 is not going to kick anyone's butt in the fillrate department.
(And super-expensive Rambus RAM won't help them either.)
Fifth, the triangle rate increase is 3-4x, up to 10x, and many games like Team Fortress 2 or Messiah are using scalable geometry (the original 50,000-polygon artwork is used and scaled dynamically based on processing power and scene complexity).
Sixth, the hardware lights are in ADDITION TO regular lightmap effects and will give much better dynamic lighting effects than Quake's shite spherical lightmap tweaking technique.
3dfx is incompetent and no longer the market leader, and their anti-32bit-color, anti-geometry, anti-everything-they-can't-implement marketing is tiresome, along with 3dfx groupies who continually praise the company for simply boosting clockrates on the same old Voodoo1 architecture.
Both S3 and NVidia have introduced cards with the potential to do hardware vector math at 10x the speed of a PentiumIII, without the need to ship all the 2d transformed data over the PCI/AGP bus, and they have done it at consumer prices.
I'm sorry, but increased fillrate doesn't do it for me anymore. It's still the same old blocky characters, but at 1280x1024. They look just as good if you display them on a TV at 640x480. What's needed is better geometry, skeletal animation, wave mechanics, inverse kinematics, etc. -- everything that geometry acceleration allows you to do (NVidia, S3, Playstation 2).
Bump mapping (Score:1)
Re:Um (Score:1)
I agree with your point, which is still the same: Microsoft didn't invent 3D
Re:Ray-tracing (Score:1)
I'm asking, not baiting here ... how do you animate ray-traced images? Re-trace every frame?
Re:The biggest questions we all have (Score:1)
Re:Ray-tracing (Score:1)
BTW, just like you would re-render the whole scene every frame in a game: Quake etc. send the whole scene to the card every frame.
MS did *standardize* 3D (Score:1)
It's too bad we couldn't have made a solid, open 3D game API spec before MS gave the world its proprietary version. OpenGL is portable, but writing an OpenGL driver is pretty much a bitch. Direct3D may be annoying to program to, but the drivers supposedly aren't quite so hard to write.
It's really too bad we don't have something like Glide (really easy to program to), but open, not 'only 3dfx' crap.
Re:Bump mapping (Score:1)
Re:Ray-tracing (Score:1)
But raytracing is VERY parallelizable -- you can have one CPU per pixel. I believe a company called "Division" (.co.uk) did something in this area.
They called it "smart memory".
Realtime raytracing could be the next big thing in computer graphics, but games would need to be totally rewritten. You can't use OpenGL anymore, because raytracing can use primitives like spheres, planes, cylinders and, yes, triangles; OpenGL doesn't support this.
It would require some major hardware advances: if you want to realtime raytrace a 640x480 image using one CPU per pixel, it would require 307,200 CPUs that can do some very fast floating point operations.
BTW, I'm very interested in realtime raytracing, but I think it'll be a while before it's a reality.
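To show why it parallelizes so cleanly (a bare-bones sketch of my own, not from any shipping renderer): each pixel's primary ray is traced independently, here against a single sphere, and the whole image is recomputed every frame when anything moves.

/* Ray/sphere intersection -- the core per-pixel work of a raytracer. */
#include <math.h>

typedef struct { double x, y, z; } vec3;

static double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Returns the distance along the ray to the nearest hit, or -1.0 on a miss. */
double ray_sphere(vec3 orig, vec3 dir, vec3 center, double radius)
{
    vec3 oc = { orig.x - center.x, orig.y - center.y, orig.z - center.z };
    double a = dot(dir, dir);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0)
        return -1.0;
    return (-b - sqrt(disc)) / (2.0 * a);
}

/* 640x480 = 307,200 of these traces per frame, one per pixel -- which is
 * exactly why "one CPU per pixel" comes up. */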
Re:Hokey smokes (Score:2)
Re:Ray-tracing (Score:1)
It's been years since I've played with POV, but given the amount of time it took to trace relatively simple constructs wouldn't this be a bad idea? You're not just re-displaying a 3 dimensional object from a different perspective, you're recreating the object each time the perspective changes.
Caveat: I have little clue and I'm looking for enlightenment. If I'm not making sense please correct me!
Re:Hokey smokes (Score:2)
Re:Ray-tracing (Score:1)
It's been years since I've played with POV, but given the amount of time it took to trace relatively simple constructs wouldn't this be a bad idea? You're not just re-displaying a 3 dimensional object from a different perspective, you're recreating the object each time the perspective changes.
--
Yes, you are correct: you COULD re-raytrace just the object that moved. However, the trick is to figure out which pixels are affected by the object's movement.
For example, the object could have been casting one or more shadows; those "shadow" pixels need to be re-raytraced. Also, if you could see the object before/after the move in some other reflective surface, you have to re-raytrace those pixels as well... etc.
Re:Ray-tracing (Score:1)
More comments (Score:2)
Glide probably wasn't out before DX3, but DX3 was pretty much useless (only a very minimal 3D API), so Glide may have beaten anything useful, tho not by much...
------------------------
Replying to another comment:
No, an API cannot be faster. But an implementation can. And an API's design can affect an implementation. In any case, Glide (the implementation) is a hell of a lot faster than D3D (the implementation), as per my original comment.
Re:This isn't new (Score:1)
Your people are not my people, apparently... My people have not been around that long..
Apes among us
What is the fill rate of the human eye??
GeForce name (Score:1)
Sheesh... what's next - a website about the chip named GeSpot?
Re:MS did *standardize* 3D (Score:1)
That's a common but very misguided opinion. Glide is a rasterization only API and is pretty much rendered obsolete by the addition of transformation and lighting to the hardware. Of course Glide could be extended to encompass this part of the pipeline as well, but what would be the point - OpenGL already does that. With DX7 D3D will get there too.
Re:Hokey smokes (Score:1)
Re:...but can it do enviromental bump mapping (Score:2)
First off, I don't take it as a given that just because you can't figure out a way to represent the meshes with variable levels of detail, no one can. In fact, it's my understanding that Quake 3 implements curves in a way that allows them to be retessellated to higher polygon counts depending on the graphics card and the speed of the system. Second, even if a company didn't want to implement something like that in their engine, it's not inconceivable that multiple environment resolutions could be placed on the game media. Many games already come with low- and high-quality sound samples to account for the wildly varying quality of sound cards out there.
Re:How do you pronounce "GeForce" anyways? (Score:1)
Re:...but can it do enviromental bump mapping (Score:1)
Re:changes 3D world? (Score:1)
Darth Shinobi - Champion of Lady weeanna, Inquisitor of CoJ
"May the dark side of the force be with you"
Re:Feature Article (Score:1)
Even if no game uses hardware lighting -- usually because games want realistic shadows, and you can only have 8 lights (or so) in a scene at a time -- the point is *now they can*.
In reality it's the transformation hardware that's going to speed everything up. And not only that, the CPU is going to have nothing to do but model physics and AI now!
WOOHOO!
What kind of bump mapping? (Score:1)
Re:Hokey smokes (Score:1)
http://fullon3d.com/opinionated/
Alpha 21264! (Score:1)
Near real-time ray-tracing? I think it was around the time the Compaq merger was finalized. Man, I wish I had the link.
This chipset is something special (Score:1)
The review on this tweak3d mirror [explosive3d.com]
Josh
Re:Linux Support? (Score:1)
Re:Hokey smokes (Score:1)
Re:Uhm... Re:Um (Score:1)
Re:Some specs (Score:1)
Re:PSX2 is still about 10 time faster than this ca (Score:1)
Do you also have a GeForce 256?
Do you have actual numbers to prove your point?
Do you REALIZE just how fast 10x really is?
Didn't think so.
Please, don't post something if you have absolutely no clue as to what's going on.
Re:Some specs (Score:1)
card will be hard to beat. If it's out, I'll pass.
Uh, nice...but cmon, how about a SHIP DATE? :) (Score:1)
WHEN DOES IT SHIP? My $ is on mid 2000 at best.
I'm sick of all the 3D card companies doing this, and nVidia is no exception. (Case in point, the TNT: delays, and its specs got downgraded quite a damn bit.) At least the TNT2 is nice, with plenty of selection. It's closer to what the original TNT specs called for, though, just overclocked.
3dfx has delivered in the past with specs and inside timeframes, but OOPS, no 32-bit for the V3, even though everyone wants/needs it.
Matrox brings out the kickass G400... but just try to buy one, especially the dualhead [main selling point] and even more so the Max [the one that is always quoted in benchmarks]. (sarcasm) I guess there were so many damn reviews they ran out of stock. (/sarcasm) The Max is "9/9/99" anywhere you look for it (which is the THIRD date given so far). Oh yes, you could have preordered (overpaid) and *maybe* gotten one, but sheesh! Get real. The dualhead is a pain in the butt to find, the retail version even harder. Plus, expect to pay $30 more than you should.
ATI? Bleh. Too slow. They sold 'fake' 3D chipsets in the past for too long for my taste anyway.
S3? Bleh. Too slow. Too late.
Anonymous Coward, get it?
I thought they settled that? (was:Linux Support?) (Score:1)
Am I wrong?
Re:Sorry, this is fluff (Score:1)
However, this means that the second tyre has perhaps 50 times as many polygons as the first, not the 3-4 times that the chip *might* provide.
So yes, the pictures are just hype.
3Dfx is the Micro$oft of the graphics world (Score:1)
I really, really, *REALLY* hate proprietary APIs. It is like 3Dfx wants us back in the bad old days of DOS, where every program had to have its *own* drivers for every piece of hardware out there. If game XYZ didn't support your hardware, you were flat out of luck.
Frankly, it looks to me like they started out as the market leader, but have since lost their edge, and are trying to keep their stranglehold on the industry by locking people into an API they own. (Hmmmm, sound familiar? *cough*Microsoft*cough*)
No thank you.
Re:Voice of experience? (Score:1)
Re:...but can it do enviromental bump mapping (Score:1)
Obviously you haven't heard of implicit surfaces. You know, those things like B-splines, NURBS et al. Describe a surface by a series of control points and then tessellate according to your performance requirements. Start off with a low figure, or do benchmarking when first installing the game, then use this info to up the number of triangles until you hit the frame rate/quality ratio you want. Very simple, very old technique. I s'pose you haven't heard of the Teapot either.
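For what it's worth, stock OpenGL evaluators already express this "tessellate to taste" idea (a small sketch of my own, not how Quake 3 actually does its curves): the same 4x4 control-point patch becomes few or many triangles just by changing the grid density.

/* Draw one bicubic patch at a chosen level of detail using GL evaluators. */
#include <GL/gl.h>

void draw_patch(const GLfloat ctrl[4][4][3], int detail)
{
    glMap2f(GL_MAP2_VERTEX_3,
            0.0f, 1.0f, 3, 4,      /* u range, stride, order */
            0.0f, 1.0f, 12, 4,     /* v range, stride, order */
            &ctrl[0][0][0]);
    glEnable(GL_MAP2_VERTEX_3);

    /* detail = 4 on a slow card, 40 on a fast one: same asset, more polys. */
    glMapGrid2f(detail, 0.0f, 1.0f, detail, 0.0f, 1.0f);
    glEvalMesh2(GL_FILL, 0, detail, 0, detail);
}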
Re: FUD! (Score:1)
The actual chips have now been released to board manufacturers.
Re:Some specs (Score:1)
Question is, are there any monitors that support this to the fullest?
My monitor's max resolution at an 85Hz refresh is 800x600, and I run at this resolution. (75Hz flickers! To me, anyhow. 60 is just too flickery to use.) The video card has a nice 250MHz RAMDAC and does plenty-high refresh rates at high resolutions...
Hmm. Maybe I need a monitor with longer persistence.
Re:Diamond still in? (Score:1)
Since Diamond bought S3, they're not making nVidia-based cards anymore... they'd be competing with themselves.
----