Hardware

Does ATi Have a GeForce 256 Killer? 121

A reader wrote to us with Sharky's review of the ATi Rage Fury MAXX. Besides simply being a mouthful to say whenever you want to refer to your video card, it's also being set up to go head-to-head with the nVidia GeForce 256. According to the benchmarks in the review, it's a really good match.
This discussion has been archived. No new comments can be posted.

  • I'll wait and see what kind of performance it really delivers, but I'm hopeful. I've been following what little info there is on ATI's upcoming card, and what I've read looks really promising. Each frame is rendered by one of two processors in alternation, so while one chip's frame is being displayed, the other is already rendering the next frame. Yum :-)
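    For illustration, the alternate-frame idea boils down to something like the sketch below (conceptual only, not ATI's actual driver code; the names are invented):

    ```c
    /* Conceptual sketch of alternate frame rendering (AFR): even frames go to
     * chip 0, odd frames to chip 1.  While one chip's finished frame is being
     * scanned out, the other chip is already rendering the next one. */
    #include <stdio.h>

    #define NUM_CHIPS 2

    static void render_frame(int chip, int frame)
    {
        printf("chip %d renders frame %d\n", chip, frame);
    }

    static void display_frame(int frame)
    {
        printf("        frame %d is scanned out\n", frame);
    }

    int main(void)
    {
        for (int frame = 0; frame < 8; frame++) {
            int chip = frame % NUM_CHIPS;   /* alternate between the two chips */
            render_frame(chip, frame);
            if (frame > 0)
                display_frame(frame - 1);   /* previous frame shows while this one renders */
        }
        return 0;
    }
    ```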
  • OK,
    I just had to spoil it once for the spammers.
    We now take you back to your regular Slashdot programming.

    --

  • ATI's got a good idea with the dual-processor, but that also requires dual memory and dual bus. The speed will be fantastic, but what about the cost? Not to mention power consumption and the heat it will generate.

    I think ATI's on the right track, but they should start by creating faster accelerators.
  • Someone at Slashdot should patent some kind of filter on the word "Beowulf" and/or "Post". Even the people mocking the First Posts are getting annoying. It's just a waste of everything.
  • ..Dude. Have you read the article? It will be priced identically to the GeForce cards. That's what happens when you use 2 relatively old chips. $250 for that sweetness..... T&L won't start appearing natively in games until late next year, probably. Give me 50fps in Q3:A at 1024x768, 32-bit... aww yea. Nice.
  • I am a little bit leery of ATI cards. I used to love them: the first video card I ever owned was an ISA ATI XL, and I eventually bought an ATI MACH64 a little while later. But as far as 3D cards go, I haven't liked them. I remember my friends buying the Rage chipsets and then finding out they couldn't play any game except the bundled ones in 3D mode.
    Then the Rage Pro and 128: they're "ok" cards, but nothing great, yet from the way they were advertised you would think they were king of the world or something. I would like to see a nice, fast ATI card again, but I'll have to see one to believe it.

    P.S. It can't be a GeForce killer if it doesn't have T&L.
  • The benchmarks are on an i820 . . . AGAIN. Why the hell does everyone keep doing this? Test on something I can possibly buy. Honestly, I think they just wanted to toss out the term "coppermine" once or twice.

    That said, it looks like a nice card. I'd like to see how it performs on an Athelon(or however the hell you spell that). I'm personally going to wait; I don't have time to write Linux drivers for it ;-)
  • Tom's Hardware put their own preview up this morning:

    http://www5.tomshardware.com/graphic/99q4/991108/index.html
  • You know, where do they get the names for these things? Why do they have to make it sound as though it might jump out of your machine at any moment and kill you?

    andy
    (waiting for the new Elf Rosepetal Happyguy 256 from Matrox)
  • It would be interesting to know how they've phrased the patent for alternate frame rendering; after all, drawing into one framebuffer whilst the other is being displayed has long been a standard technique, and multi-CPU frame drawing has been around a long time [remember transputers?].

    I wonder how the processor load for Alternate Frame Rendering compares with the Scan Line Interleave technique used with multiple Voodoo cards?
  • It's interesting that SharkyExtreme would post a review using the i820 (not available to the general public), and not use the GeForce256 DDR (also not available yet). It seems that if you are going to benchmark on an as-yet-unavailable platform, you might as well not cripple the GeForce256 by forcing it to use SDRAM. This is especially apparent in the higher-resolution benchmarks: with SDRAM, the GeForce simply doesn't have the memory bandwidth it was designed for.

    Secondly, as was pointed out in the article, the MAXX is not a truly revolutionary product; rather, it seems more like a band-aid for aging graphics technology. While I applaud the use of parallel processors, I would rather see them produce a decent chip in the first place. I mean, imagine if you strapped 2 GeForce256 chips together; can you say 960 Mpixels/s??

    I have two complaints, 1) That the "new" board does not contain T&L (obvious, since it is using an older chip), and 2) that the GeForce could even begin to stand up against it in any benchmarks at all.

    Just my $.01,
    TheJet
  • The feature list says something about a "triangle setup engine". Is this T&L? If it is, ignore the rest of this post. If it isn't, read on.

    I am developing an open source 3D game engine, and I know that T&L is the single most important new thing in graphics cards. Extremely high polygon counts are far better looking than extremely high framerates, or fullscreen AA ("t-buffering"). Just look at the DMZG screenshots [kickassgaming.net]! Unfortunately, no current games take advantage of high polygon counts, so current benchmarks are very misleading. I strongly suggest that no gamer buy a 3D card without hardware transformation and lighting ever again.
    -------------
  • Because you play games against monsters who look like they might jump out any minute and kill you. Because it's more appealing to more people that way. That's why. Take a marketing class, already!
  • Wow, the preview at THG shows the GeForce doing a lot better against the ATI than the review at SharkyExtreme does. (Just to sum it up for people who didn't read it.)
  • The really bad thing is not the 2 processors, it is the 2 memory banks. You pay for 64 MB of memory but the system only uses 32, as the two 32 MB banks duplicate the same content. With a shared memory architecture things would be much better (twice as good, to be exact).
  • This is just a sign of how far nVidia is ahead of its competition (Disclaimer: I neither own nVidia shares nor even an nVidia graphics card).
    Come on, buy this thing and watch newer game titles crawl next year.
    And what about CPU scalability: can I use this with a 266MHz Pentium II and see a difference compared to a single ATI card? From what I have read about the GeForce, I would see some difference there, although that wouldn't be the perfect combination either.
    And let's talk about drivers. I assume the next-generation GeForce will be similar to the GeForce256 (like G200->G400), so there is a (slight) chance of getting mature drivers on other OSes like Linux.
    This product, OTOH, seems to be a dead end if it needs different drivers than the single-chip cards.
    Oh, and can someone explain to me the difference between the benchmark results in the Sharky review and the one on Tom's Hardware?
    Methinks on Tom's site the GeForce wins most of the benchmarks.

  • There's nothing like that brand new blue screen of death when playing on a brand new video card. Still get it with a TNT2 in half-life. Why? Because hardware is released about a year before the software drivers catch up.

    I just bought 2 voodoo2's from Creative because I'm finally confident that they have reasonably stable drivers. I'm still waiting for stable Rage Pro drivers ...

    Of course, I'm still waiting for a stable OS to play games on (Yay Loki!) besides Windows, which should make a huge difference in the stability of the games. In particular, XFree86 4 DRI should be just great.

  • It always seems to me that hardware reviewers, especially Sharky, get really disappointed if the product they are reviewing isn't faster than the ones they are comparing it to. To the point where they make all sorts of excuses for the benchmarks where it *doesn't* beat the competition. Why can't they just be objective?

    Refusing to use the new drivers for the GeForce is just another example of the lengths they will go to to make sure the product they are reviewing "wins". I don't get it.
  • > I'd like to see how it performs on an Athelon(or however the hell you spell that)

    Check out ATI vs GeForce 256 [gamersdepot.com]

    Very similar results to these benchmarks, although the GeForce is far superior on the slower Intel CPU :-)

  • You were disappointed by the 3D performance of the Mach64? Maybe that's because it's a 2D *ONLY* card. Any 3D stuff you run on a Mach64 is rendered in software.

    And speaking of the "only works with bundled titles" effect, how many games support hardware T&L? Methinks that very few game companies are going to completely rewrite their engine for a single card. The only way I see this taking off is if M$FT decides to support hardware T&L in DirectX. Sad but true.
  • Hmph... my Guillemot 3D Prophet (i.e. GeForce 256) arrived last week..

    I installed it. It didn't work. So I wiped the hard drive and reinstalled Windows and the 3D Prophet. Still didn't work. Had to revert to earlier drivers which treat the card as a TNT 2 Ultra.

    No sign of this problem on the website. The new drivers on the website don't help. No documentation worth speaking of. Very annoyed.
  • No, I was talking about the cards ATI said were 3D cards: the Rage chipset and beyond. They never advertised the Mach64 as a 3D card... As for T&L, anything that uses the OpenGL setup engine already does, and yes, even M$ supports hardware T&L; it's called DirectX 7.0.
  • I've directly compared ATi cards in the past, and image quality has been terrible. Now, I haven't tried the new MAXX, but I'll assume it's similar, given that it's two Rage Fury chipsets on the same board. Really, the image quality on these boards sucks. First, it's like viewing a game with 10/10 vision. Textures aren't detailed until you're like 2 feet away. You can also see the texture quality changing, because the most detailed textures are limited to what is directly in front of the player. I've tried this on a variety of games and directly compared it to TNT1, TNT2 and Voodoo Banshee cards, and there's a world of difference.

    Now, they may have the "power" to display at similar frame rates, but I've noticed many artifacts in opengl and direct3d. They do seem to use a lot of trickery to achieve similar frame rates, and it doesn't really bode well for picture quality. This is hardly what I would expect from a 32mb video card.
    ----------
  • Every few weeks it's "new video card with amazing blah blah blah features, faster than blah blah blah." Whatever. The real bottom line is:

    1. The video card market is getting more and more fragmented.
    2. Windows drivers are still buggy as hell and never seem to get beyond a beta stage.
    3. The performance increase in applications (i.e. games) is hardly noticeable.
    4. We have to shell out $125-$250 for a new video card every six months.
    5. We're just starting to get somewhat stable Linux drivers for cards released fifteen months ago, like those based on the Voodoo Banshee chipset. Most Linux distributions still don't include these drivers, even though the cards have sold millions of units.
  • Try the drivers on the NVIDIA web site. They work very well with OpenGL games. The current drivers have some problems with video overlays, though. The 208 Detonator drivers also work very well.
    ----------
  • nVidia and 3Dfx the leaders in *open* graphics? This is absurd. Neither company has released anything resembling usable specs for their chips.
    The 3Dfx Mesa driver is built on top of glide (which is closed), nVidia's OpenGL for the Riva boards (on linux) is open source, but the code is obfuscated, and it is too slow to use for anything except OpenGL screensavers.

    Matrox is definitely the leader in *open* graphics. They've released comprehensive docs for the G200/G400 chips, and as a result there is a project actively developing an accelerated driver for them, which is already capable of playing Q2/Q3 at a reasonable speed. And the Matrox driver is rapidly improving, while nVidia's code has not improved since it was released months ago.

    ATI has recently released specs for their chips too, BTW, although nothing has come of it yet.
  • You want to talk about stupid reviews?
    This one [zdnet.com] comes from our good friends over at ZDnet.
    In it, they compare a $1599 Apple iBook to a $2399 IBM ThinkPad, and bitch over the fact that the iBook doesn't have the serial and parallel ports "that savvy PC consumers have come to expect."
    Yeah, right.

    Why does NextGeneration (a videogame magazine) bitch about the "aging N64 architecture" when new games for that platform come out, yet rave when a new PlayStation game "uses the full capabilities of a venerable platform?"

    I hate when Tom's Hardware reviews a new system by overclocking it as far as they can take it.
    Uh, most people don't do that!

    haven't had coffee yet, feelin' cranky, POpe



  • No, this isn't the case. Maybe if you want to be a compulsive extreme gamer, but a lot of older video cards do very well in current games. The TNT1 as well as the Banshee (Voodoo2) are pretty cheap, as well as being more than enough for the games coming out today.
    ----------
  • Is anyone else kicking themselves and thinking 'why didn't I think of this and sell it to a graphics co'?

    It's so much simpler than SLI or split-screen technology. You could retrofit it to existing cards with a new driver and a very small gizmo to switch between them. (I guess some syncing is required too.)

    The penalty, which is inconsequential, but I guess the reason we all overlooked this, is that the delay (or rather lag) from starting a frame to getting it on the screen is twice as long (since there is only one engine working on each frame). But this lag will not be noticeable, whereas low frame rates are.
  • i.e. when can I buy a G4 with one? :-]

    Or, for those whose hardware doesn't come with a solid-color Apple logo on the side, when can I walk into a store and buy a PC with one of these on the motherboard?


  • "* ATi Rage Fury MAXX 64MB AGP Video Card (Set to operate at 125/143MHz)
    * Creative 3D Annihilator GeForce256 SDR AGP Video Card (Set to operate at 120/166MHz) "


    This review seems a bit biased; did ATi pay for it? They make it look like a direct comparison between the MAXX and the GeForce. However, if you look more closely, they're using the performance-crippled version of the GeForce (according to Tom's Hardware Guide): why use the SDR version instead of the DDR, and not point this out and state that these aren't the definitive GeForce benchmarks? Smacks of bias to me.
  • No, triangle setup engine is something that 3D cards have had for a long time. It's different from T&L. T&L is Very Cool...
  • Does it matter? As long as it delivers performance - a "truly revolutionary product" does not necessarily have its revolution reside on a single chip. In the case of the MAXX, it happens across two chips. This is revolutionary, because nobody has done alternate frame rendering with video chips before.

    Strapping 2 GeForce256 chips together and getting 960 Mpixels/s? I guess you think AFR is a "band-aid" so easy to implement that anyone could do it without any technical difficulty?

    I applaud ATI for the innovation - although I'm not getting the MAXX - I'd rather wait for their AFR T&L card. It will be out - willing to bet anything on it.
  • Jeez, give it up already. Name it something new and short. That name just sounds stupid.
  • Ever wonder why Tom uses so many 32-bit benchmarks when he's comparing the TNT2 boards with the V3 boards

    and

    so surprisingly few (only OpenGL, in fact) when it comes to MAXX vs GeForce?
  • Not really, because if you had a shared memory you could interleave the two memory banks and double the data path, which would double the bandwidth.
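    To put rough numbers on that, below is a back-of-the-envelope comparison of the duplicated banks versus a shared, interleaved pool, using the 143 MHz / 128-bit per-bank figures quoted elsewhere in this thread (a sketch, not vendor data):

    ```c
    /* Back-of-the-envelope comparison: two duplicated 32 MB banks (AFR as shipped)
     * versus one shared, interleaved 64 MB pool with a doubled data path. */
    #include <stdio.h>

    int main(void)
    {
        const double clock_hz = 143e6;  /* memory clock per bank */
        const int    bus_bits = 128;    /* data path per bank */
        const int    bank_mb  = 32;     /* size of each bank */

        double per_bank_gbs = clock_hz * (bus_bits / 8) / 1e9;

        printf("Duplicated banks: %d MB usable, %.3f GB/s visible to each chip\n",
               bank_mb, per_bank_gbs);
        printf("Interleaved pool: %d MB usable, %.3f GB/s on a %d-bit path\n",
               2 * bank_mb, 2.0 * per_bank_gbs, 2 * bus_bits);
        return 0;
    }
    ```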
  • I had the pleasure of ordering an ATI Rage Fury and an nVidia TNT1 at the same time and running them both on the same computer to test them. Originally I was going to put the ATI in my preferred computer, but I was disappointed to discover that, for all the extra features and twice the memory the ATI Rage Fury had, either the drivers were not complete or the hardware was simply not as good as the TNT1.

    I know what you might say: "Maybe your one card was bad..." I thought of that, so I ordered another one and had the same results.

    The other thing was that for the first few months it was the second closest thing to vapourware after Daikatana. The only people who could get hold of one (after the release date) were people who review hardware.

    Will the "new" ATi Rage Fury MAXX be the same story? If so, then the nVidia GeForce will still be my first choice.

    Conclusion: Wait till they both have been released. If you can get hold of one for free, test it and post your results for others to see. And for those who can't... check online before buying either. But don't take my word for it.
    -------------------------------------------
  • You're "sure" it'll be supported immediately? When it's taking HOW long for decent V3 drivers to hit? Voodoo Banshee? The company assembles a dozen or so V2 cards, hacks out some beta drivers, goes "This is what our next-gen product is going to be like. Only better!", and then fails to deliver on time. NOWHERE has there been ANY talk about any GPU in the V4. Just massive fill rate and the T-buffer, which are effectively mutually exclusive:
    1. You can supposedly get great image quality, but it sucks down all of the fill rate.
    2. You can play at umpty-bazillion fps. But the image quality is that of the current V2/V3 cards. EWWW.

    I prefer the T&L on the GeForce because it gives you both performance AND a boost in image quality, not "If you take the red pill..... If you take the blue pill...."

    And for you business users out there. The Quadro card looks like it's going to be the $#!+ on workstations!


    Chas - The one, the only.
    THANK GOD!!!

  • high poly counts are better than high fill rates, not necessarily high frame rates. :)
    -------------
  • Why? Because of its T&L engine. The MAXX is two Rage 128s, and the GeForce is two TNT2s AND a T&L engine. So while today's games may run at the same speed, since they mostly require high fill rates, tomorrow's games will run faster on the nVidia chip because they will need higher poly counts.

    There is no contest here, despite what the situation looks like now. If you want a video card that will last you at least another year and a half, go with the GeForce.
  • I think this is not a very elegant way to design a card. I don't like the idea of paying for 64 MB of RAM if I can only use 32 for textures and stuff. And I don't think that this card will increase the number of triangles used.
  • I tend to agree that most future 3D cards will NEED some kind of T&L support to be competitive, but it's doubtful that any developers will be taking advantage of T&L in the near future. Note that the GeForce only permits two T&L matrices per triangle (IIRC). This means you're probably still doing at least a portion of your T&L in software before handing it off to the hardware renderer. (Still a vast improvement.) Additionally, it requires a fairly significant rework of an existing engine to take advantage of the hardware rendering capabilities. In other words, it will take some time for the software to catch up with the hardware (as usual).

    That said, I saw a demo of the GeForce at the GDC RoadTrip last week, and in a finely tuned application the GeForce produced incredibly beautiful results. nVidia demoed an outdoor first-person-perspective "game" where you could wander around and look at freaky creatures, trees, lakes, cliffs, etc. The sky-dome was very realistic looking, and the trees were very lifelike (no intersecting plane image-maps for these guys; the trees were modelled in Maya (IIRC) with no fear about poly count). Altogether, the GeForce was able to display the entire forest (must've been 20-30 trees), the terrain, some animals (and some weird freak-dude that runs around doing a weird ritual), and a very nicely rendered lake at an impressive frame rate (28+ FPS, IIRC). Very impressive.
  • First, a disclaimer: I like NVidia and own one of their cards. However, it has been said that Sharky dislikes NVidia, so maybe our biases will cancel each other out. Now, some observations and opinions...

    The thing this review shows most is that drivers are everything. The ATI "won" on Direct3D benchmarks, but "lost" on OpenGL benchmarks.

    ATI's MAXX card is the moral equivalent of SMP. If one processor is not fast enough, use two. This is a time-honored technique, and perfectly valid.

    I find it annoying that ATI has patented their "Alternate Frame Rendering" technique, when it is neither new nor innovative. Grrrr.

    The reviews were done on very high-end (in fact, unavailable) hardware (i820 motherboard, 800 MHz PIII). NVidia's GeForce is largely designed to take load off of the CPU. It would be nice to know how well ATI's solution works on slower CPUs. For example, I have a 300MHz AMD K6-2. The GeForce's extra co-processing capabilities may make it faster on my machine than ATI's offering if the MAXX is CPU bound.

    The GeForce is also something new: Those graphics co-processing features are its big selling point. None of the benchmarks used take advantage of those features. Tomorrow's titles which make use of the GeForce are likely to do better. Of course, today's titles do not, and my motto is "It is all vaporware to me until I can buy a product."

    As a Linux Advocate(TM), I have to ask: Does ATI provide specs and/or Linux drivers? NVidia does.

    In conclusion: It looks like the MAXX is a good product, and will give NVidia a run for its money. Good. I like choices. However, I don't think it is going to "kill" anything anytime soon.

    Just my 1/4 of a byte... ;-)
  • Especially over a "GeForce killer" (which of course uses benchmarks from unreleased hardware).

    First off, I'm a bit wary of websites that only evaluate cards by fps tests in video games. I usually distrust most benchmarks... except for perhaps INDY 3D.

    Not everyone will want to play games. In my case I do 3D animation and modelling in Lightwave and Maya. Ask any 3D user about ATI's track record for stability and OpenGL support under NT with any high-end application. I had many friends and co-workers who got an ATI card only to return it in short order and buy something along the lines of an Oxygen VX1.

    Applications like Lightwave and 3DSmax seem to be more forgiving regarding 3D cards, but as soon as you introduce Maya or Softimage to the mix... forget it. ATI doesn't cut the mustard unless they pull a large rabbit out of their hat.

    On a related subject, I did confirm with my local Alias rep that Alias|Wavefront [aliaswavefront.com] is testing the GeForce 256 for Maya/Alias certification (no, I wasn't told which GeForce card they were testing). This certification will be significant, as the GeForce will be more than a "gamers" card, especially when the Quadro [3dgpu.com] version of the chip ships in a few months.

    For me, it's an easy choice: go for the card that has a few fps faster framerate in Quake and Incoming? Or a card that's practically just as fast in games *AND* can run all my 3D applications faster than most of the high-end 3D cards on the market [using Indy 3D]?

    == It seems the URL for Indy3d is down... it's http://www.indy3d.com - you can compare your cards to an Onyx2 IR if you wish :) ==


  • Toms Hardware Guide [tomshardware.com] has a much less biased review comparing the MAXX and GeForce [tomshardware.com] more clearly and openly.

    Sharky's review failed to point out that he was using the GeForce SDR, whilst (as THG points out) the GeForce DDR has 16% more memory bandwidth than the MAXX. The benchmarks on THG show how this is a big advantage, with the MAXX not even coming close. I wonder where the discrepancy between THG and Sharky comes from with the SDR, though, with THG indicating that it is generally faster than the MAXX?

    "the features of Fury MAXX in comparison with the other high-end 3D-solution, the GeForce from NVIDIA.

    Fill Rate
    ATI Rage128 Pro AFR:
    2 x 250 Mpixels/s = 500 Mpixels/s

    NVIDIA GeForce256:
    480 Mpixels/s


    In terms of fill rate, Fury MAXX has a slight edge over GeForce, but generally both solutions are close to identical. This will mean that both cards will score close to the same in games that don't use T&L, if fill rate is the only limiting factor.

    Memory Bandwidth
    ATI Rage128 Pro AFR:
    2 x 2.288 GB/s = 4.576 GB/s

    NVIDIA GeForce256:
    2.656 GB/s as SDR-version
    5.312 GB/s as DDR-version


    You can see that Fury MAXX is way ahead of GeForce w/SDR in terms of memory performance. This will make Fury MAXX look a lot better than GeForce's SDR-version at high resolutions and high color-depths. GeForce's DDR-version however comes with a higher memory-bandwidth than Fury MAXX, something that might cost a lot more money though. "
  • FWIW, the review says that image quality is very good. Did you read the review?
  • How old are your drivers? I'm using a Diamond Viper V770, and haven't had a single blue screen (much less a crash) with Half-Life. I'm running it in OpenGL, to boot.
  • Another comparison:

    http://www.gamersdepot.com/rev_ati_vs_nvidia_a.htm

    This one has ATI on top.

    Is Tom being biased again?

  • Properly coded OpenGL applications take advantage of whatever hardware you have. Carmack has said that even Quake1 takes advantage of T&L, as long as the drivers are written properly.
  • ...can you say Titanic rendered on Linux


    Can you say software renderer? There's not a hardware solution on the planet that is suitable for photographic-quality imaging. Each studio uses proprietary rendering software and often bakes its own clustering systems to churn out the actual frames. SGI has some limits on fill rate, but they do have some spectacular track records for putting the pipeline in hardware. It's pretty cool to hear about the 2048 bits per pixel of their top-end reality engines.

    But I do agree. SGI is sinking.

    -sw
  • You obviously know what you're talking about!
    This is the kind of thinking that's going to
    really make linux popular!
    Keep up the good work!

    PCs barely caught up to my 8-year-old RealityEngine, and that's only under Windows,
    teehee!
  • No. First of all, Tom uses different D3D games for benchmarks, so you can't compare those. Let's concentrate on the Quake 3 benchmarks.

    Tom has two different GeForce benchmarks. One is for an SDRAM GeForce, the other is for a DDR SDRAM configuration. As you can see from the benchmarks at Tom's, this makes a HUGE difference, especially in 32-bit mode. The SDR card is beaten consistently by the ATI card.

    Sharky and the Gamersdepot benchmarks use the SDR version of the GeForce, thus the ATI card can beat the GeForce, and Gamersdepot is clueless enough to even blame poor drivers for the GeForce's poor showing. It is not the drivers, it is the crappy memory configuration.

    I am not sure if the DDR GeForces are available yet.

  • Not many people would catch this. Also, comparing a 32 MB board against a 64 MB board is biased.
  • Well, Apple appears to have bought its own pet graphics-chip maker [zdnet.com], so ATI's offerings might just be on their way out for our beloved fruity boxen. ;)
    I can't blame them, either--the latest ATI cards have serious issues with lots of Macs. A casual scan through MacFixIt [macfixit.com]'s news archives turns up scores of reported problems. The general consensus I've heard is that ATI's Mac drivers are a big load of rubbish.
  • These violent-sounding product names are starting to get old. The hyperbole is so intense it's getting redundant - don't fury and rage mean just about the same thing?
  • I don't care if ATI puts 2, or 4, or 8, or 50 buzillion processors on their card and ships it tomorrow. It will still suck rocks if we don't see a stable, fast driver for it until 2001 (which is roughly what has happened to the Rage 128: released fall of '98, and we STILL don't have a stable, fast driver, and the one for the Macintosh it ships on is even worse!).

    I wish I had a nickel for every time someone said "Information wants to be free".
  • It seems that pretty much everyone replying to this topic has some sort of death wish against ATi. I really don't understand why. Sure, their performance may lag a little behind 3Dfx, which I would probably agree with, but that doesn't mean that their products are trashy. The first really powerful video card I ever bought was my ATi All-In-Wonder Pro, which sports the ATi Rage Pro chipset and 8 megs of RAM. Let me tell you, I've been rather pleased with this baby. It comes with a built-in TV tuner and video capture. I had it record a favorite TV show of mine one night, and the video quality and sound were excellent. Also, its performance in Direct3D accelerated games has been rather impressive. Final Fantasy VII, Star Wars: Episode 1, and Half-Life run AWESOME on this thing. My configuration, you ask? A simple PII/266, LX chipset, with 64 megs of RAM.

    If I get this good performance with a Rage Pro, I can't wait to get my Rage 128. Sure, a Voodoo3 3500 would be nice, but who has $200+ to shell out for that? I can get a nice All-In-Wonder 128 for $140 through ATi's Trade-Up Program. It may lack a bit in performance compared to the Voodoo3, but who cares? Those extra few frames per second aren't going to make a phenomenal difference in your play of Half-Life or Homeworld. Sure, they're slow in updating drivers - I'm still waiting for an update that fixes their latest edition - but they were kind enough to point back to an older set of drivers that works just fine with my games.

    There has to be some reason why ATi's CEO was named Canadian Entrepreneur of the Year last year. I don't think it's because they make crappy video cards. Sure, an ATi board may be a Celeron when compared to a Voodoo3, but for some people, the power of a Celeron is enough. I, for one, am happy with my ATi, and unless I win the lottery, I don't intend to stray from their products.
  • The only similarity between the GeForce and the TNT2 is the rasterizing engine; it is NOT two TNT2s and a T&A (oops) engine.
  • The Savage 2K was in the news just last week. S3 announced that they're dialing down the default speed of the chip to a paltry 125 MHz. This drops the fill rate to less than that of an SDR GeForce 256.
  • Sharky Extreme seems to have been fairly impartial in the past, and having read this latest review I don't see that as having changed at all.

    First off, the Sharky benchmarks show the ATI card being *faster* than the GeForce in most of the benchmarks. Thus, the product they're examining *does* beat the competition.

    Secondly, Sharky used the latest NON-BETA drivers for the GeForce card. The new Nvidia drivers you are speaking of are beta right now. Sharky has said they will re-run the tests with the new drivers once they are no longer beta, and will update their results. Generally, Sharky tends to use the stable drivers to more accurately reflect a product (so bugs in new drivers won't skew results). They would've used non-beta drivers for the ATI card too, but since the ATI card hasn't been released, the only drivers available are the beta-quality ones that came with their engineering sample. Pointing the beta-ness of the ATI drivers out when some benchmark scores seem a little anomalous is, IMHO, a good practice, not a lack of objectivity.

    Sharky's did a Hardware Preview, not a review. Yes, there's a reason they call it a preview, not a review. The ATI card they're previewing is not a released product, but rather an engineering sample with beta drivers. Sharky was upfront about this fact, as they always are, and therefore mentioned that the ATI drivers are beta, and that the board they are benchmarking may not be at the same clock speeds as the released version (which is coming in December). Thus the numbers are not the end-all, but rather a suggestion of what may be coming along in the next month or two. This is what Sharky Extreme said at the beginning, so I wouldn't get too bent out of shape over it. Nobody claimed that the final ATI board will or will not beat the GeForce solutions, merely that it has the *potential* to.

    Some things to consider about the ATI Rage Fury MAXX versus GeForce256 solutions:

    1. The ATI card being previewed in the Sharky article had 64MB of video RAM, and the GeForce board in question only 32MB. This naturally gives a certain advantage to the ATI card.

    2. The ATI drivers in the test were very rough, beta-quality drivers. This means the performance will likely increase by its release date in December. The GeForce drivers may very well be more optimized by then as well.

    3. By December, there will likely be GeForce boards using Double Data Rate SDRAM, as opposed to the standard SDRAM used on the GeForce and ATI boards right now. This should help out the GeForce. The GeForce also supports up to 128MB of video RAM, so there might be 64MB versions out in December also to match the ATI card's 64, especially since that would help the marketing. These should help out GeForce performance a good bit.

    4. We don't know that the clock rates of the ATI board being tested will match those of the eventual production run. If ATI gets good yields, the released boards may be at higher clock speeds. If not, then they may be lower (this has happened with other companies' previewed boards, such as, IIRC, Sharky's preview of the Savage2000).

    My main point here is that this isn't a full-fledged review, but rather a look ahead at the potential out there. Once the ATI card is released, I'm sure they'll have a review comparing the released boards from the various companies. Let's not get too upset about comparisons between a beta board and a released product :-) They're showing the potential out there. If I were cynical, I would say that this is so you'll visit their website to check for updates when new drivers come out, and when the board is finally released :-)
  • Granted, they may have made the claim it was a 3D board, but what they didn't do was make it compatible with some sort of common 3D API, such as Direct3D or OpenGL. They've since seen the light, so I don't expect to see many problems with that in the future.
  • Half these people here are nothing but skeptics hopelessly clinging to their voodoos and tnts.

    Out of all the gaming cards I've ever owned, I never came across a more stable card, with the bonus of built-in DVD decoding. I have an ATI Rage Fury and have had zero problems and great gameplay. I don't care what other reviews hold... What pisses me off is people trying to compare the ATI Rage Fury to the Voodoo3 and TNT2. People forget that the Rage 128 chipset was out long before the other 2 cards were, so of course it's going to be a slower card. TNT and Voodoo had the overall competitive advantage of waiting and improving.

    I'd rather have a more stable card with extra features than a non-stable card that maybe renders a tad faster... ATI is a well respected card maker in the industry, and for a reason... I say power to the MAXX.

  • I remember this article from a long time ago; it is largely incorrect.

    I've talked to 2 Digital Domain employees [who now work at Station X Studios] who were not aware of any mainstream rendering done with Linux on the DEC Alphas from Carrera... most were NT boxes running SOFT/Lightwave [since there is no Linux renderer for Lightwave or Softimage]... The only thing I can think of Linux being used for on Titanic is perhaps DD's own Nuke compositor.

  • Look at the processors used.
    Sharky uses a PIII overclocked at 800 MHz.
    Tom uses a PIII at 550 MHz.
    Gamespot uses an Athlon at 700 MHz
    and a Celeron at 500 MHz.

    The benchmarks for the ATI have a greater dependency on the CPU speed because the CPU has to do more work, i.e. it does T&L rather than the graphics chip.

    Using Gamespot's figures for Q3A at 32 bpp, 1024x768, the GeForce gives about 41 fps for both
    the Athlon 700 and the Celeron 500. However, the Rage gives about 48 fps for the Athlon 700 and about 29 fps for the Celeron 500.

    If you are thinking about buying one of these cards, it only makes sense to look at the benchmarks for the applications that you will be using running on processors close in speed to those you will be using.

    Chris Wise
  • Even if it is OpenGL, it doesn't have to use T&L. I don't think Quake uses GL lighting, and if it does, it is for spark effects, not the main level lighting. And you can always do transformations before they get to the card, which is what Crystal Space does. I suspect a lot of cross-API games (that use DirectX, Glide, and OpenGL) use OpenGL just as a rasterizer.

    It is just that OpenGL is designed for T&L, which is why at least the T part is used a lot.
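    For the curious, here is a simplified sketch of the two approaches in fixed-function OpenGL: handing untransformed vertices plus a matrix to GL (so a card with hardware T&L can do the work) versus pre-transforming on the CPU and using GL only as a rasterizer. It assumes an existing GL context and uses immediate mode purely for brevity:

    ```c
    /* Two ways to put a rotated triangle on screen in fixed-function OpenGL. */
    #include <GL/gl.h>
    #include <math.h>

    static const float tri[3][3] = {
        { -1.0f, -1.0f, 0.0f }, { 1.0f, -1.0f, 0.0f }, { 0.0f, 1.0f, 0.0f }
    };

    /* Path A: GL receives the untransformed vertices plus a rotation matrix,
     * so a hardware T&L unit can do the transform. */
    void draw_hardware_transform(float angle_deg)
    {
        glMatrixMode(GL_MODELVIEW);
        glPushMatrix();
        glRotatef(angle_deg, 0.0f, 0.0f, 1.0f);    /* transform handled by GL */
        glBegin(GL_TRIANGLES);
        for (int i = 0; i < 3; i++)
            glVertex3fv(tri[i]);
        glEnd();
        glPopMatrix();
    }

    /* Path B: the rotation is done on the CPU and GL only rasterizes,
     * which is what the comment above describes. */
    void draw_software_transform(float angle_deg)
    {
        float a = angle_deg * 3.14159265f / 180.0f;
        glBegin(GL_TRIANGLES);
        for (int i = 0; i < 3; i++) {
            float x = tri[i][0], y = tri[i][1];
            glVertex3f(x * cosf(a) - y * sinf(a),  /* rotated on the CPU */
                       x * sinf(a) + y * cosf(a),
                       tri[i][2]);
        }
        glEnd();
    }
    ```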
  • Yes, and even looking at it again, I do not see that. I don't see how they would fix the problems when they are using two identical chips, the same chip that I tried before. I could be wrong, though. I'll try and get my hands on one (the company I work for has OEM software in all their Wonder products, and I live about half a minute away from their main office in Thornhill).


    ----------
  • Er nevermind -- my contacts must be glazed over. I'll have to see it for myself though.
    ----------
  • Well, since the price is the same... what's your point? If they could give me 4 chips for $300 I would buy it in a sec... no games offer T&L yet... they will in maybe 6-7 months... and by then I will have changed video cards twice more! I mean, do you have a Voodoo2 for your games right now? I didn't think so :)
  • This is on an SMP Celeron running Windows 2000. :) We had issues on a PIII 500 as well, but that was a hardware issue (one of the AGP pins was physically bent out of contact in the slot!).

    ATI's Rage Pro is quite notorious for driver revisions. It's too bad, because the chip is so common, yet the drivers tend to flake out the whole system.

    Oh, yes, we had some issues with the SB Live! and TNT2 Ultra in Rogue Spear and Homeworld. Yeesh. It's nothing but hardware and software conflicts out the ying-yang. ;)

    The sad thing is: some games it works, others it doesn't. *sigh* What can ya do!

    Cheers!

  • So here is what I don't get about the latest and greatest from ATI: what about latency?

    The review states that each of two Rage chips draws every other frame. If you only have one chip working on a given frame, the time to draw a single frame stays the same even though there are two chips. So the latency is twice what it should be given the frame rate! The end result might be visually equivalent to the competition, but for games where rendering latency actually matters -- Quake 3 on a LAN, say -- I expect the results will be disappointing.

    It seems to me at least that this scheme is an attempt to push benchmark numbers up without offering a "real" solution to the problem. Sadly, latency is not measured by any benchmarks I know of. But perhaps I am wrong about how the chips work, or perhaps I am wrong about latency being a gameplay issue. Maybe someone from ATI will clear all this up?

    -Kekoa
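    For concreteness, here is a quick back-of-the-envelope version of the latency argument above (a sketch with an assumed frame rate, not measured data):

    ```c
    /* Rough latency estimate for alternate frame rendering: the pair delivers
     * N fps, but each chip needs two display intervals to draw its frame, so
     * any given frame was started about two intervals before it appears. */
    #include <stdio.h>

    int main(void)
    {
        const double fps = 60.0;                     /* assumed frame rate of the pair */
        double display_interval_ms = 1000.0 / fps;   /* time between finished frames */
        double per_chip_render_ms  = 2.0 * display_interval_ms;

        printf("Frame rate:            %.0f fps\n", fps);
        printf("Display interval:      %.1f ms\n", display_interval_ms);
        printf("Render-to-display lag: ~%.1f ms (a single chip at the same rate: ~%.1f ms)\n",
               per_chip_render_ms, display_interval_ms);
        return 0;
    }
    ```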

  • The ATI card has 64 megs of RAM as opposed to the 32 on the GeForce and the TNT2. I am also having a very hard time believing those benchmarks b/c I am running a C-400 (not even overclocked) with my Asus TNT2 Ultra 32mb and I'm getting better frame rates than they are posting: approx 60 fps in Quake2 at 1024x768x32, as opposed to the approx 50 fps they got? They were running frigging Coppermines!!! I am running the newest Asus drivers/BIOS and I highly doubt that has anything to do with it. I'll post my benchmarks with a GeForce on an affordable machine (C-400 on a BH6 with 256 MB RAM and a 9.8 gig WD HD) when I get a GeForce (sometime in January). Perhaps then we will have reliable benchmarks.
  • Amen! I suffered for a year and a half w/ their awful drivers, continually upgrading when new drivers were released (after being pushed back and back...). Upgrades often broke more things than they fixed (Grim Fandango, for example). Basically, everyone's drivers suck, but ATI's especially.
  • 55.8 fps is the highest I have ever reached at 1024x768@32. This is with a low-end machine (I still think it is decent) and with the newest drivers. This is a lot higher than what you listed as what we should "see" my TNT2 running at. I fully believe the GeForce will shine in all areas, especially on lower-end machines (i.e. not a P3), b/c it has something that is being called a "GPU" - ever heard of that? I am living quite well with my TNT2 and I plan on keeping it in my computer until January (when I plan on getting a DDR GeForce). I just have really big problems with the benchmarks that Sharky's ran, b/c they didn't even get to the FPS (with a TNT2) that I am getting with a shitty processor (compared to their Coppermine).
  • You may say what you like about dual processors, but when you reach a certain point you have two alternatives:
    1. Wait for the next and faster processor.
    2. Slap 2 or 3 processors on there and fly a lot faster :-)
    This should apply to the GeForce also... If they can figure out the cooperative glitches (problems), then you would benefit big time from a dual GeForce.

    When it comes to memory... I am disappointed at the availability of DDR RAM for the GeForce cards. I really want DDR, but I don't want to wait. So I say: give me a dual-GeForce-based card and slap on a huge chunk of DDR memory!
    Then I would be almost as happy as if the programmers were to write code based on HW T&L, and drivers that work :-)

    I don't consider drivers to work if I have to include any words like: sometimes, often, if not, if you're lucky, etc... That's what I'd expect from beta drivers, not a release. (Thinking a lot of ATI here :-)
