Hardware

Tom's Hardware on The GeForce256

~fly~ writes "Tom's has a detailed review with benchmarks of Nvidia's new GeForce256 'GPU'." The synopsis: high expectations, but the card appears to meet them.
  • by Anonymous Coward
    Well, here are some links and quotes to give you more of an idea of what went down. I'm having trouble finding the original start of the conflict, but others have summed it up just as well.

    But most importantly, here are some of the LINKS you requested.

    From what I understand of the issue, the first Q3Arena (Q3A) beta was released for Windows, and Tom's Hardware used this program to benchmark a whole bunch of cards.

    Brian Hook, a developer at Id, makers of Q3A, didn't much like Q3A being used at such an early stage of development, and blasted Tom for basing the evaluation of 3D Cards on Q3A.

    At least that's what I remember, someone correct me if I'm wrong.

    And for what it is worth, I fell on the side of Brian Hook. I mean, it's their software, they know Q3A inside out.

    I've heard Brian speak at Siggraph, one of the premier computer graphics conferences.

    And I've followed his .plan and editorials for a long time, and while he can be very blunt and truthful at times, he has always conducted himself in a manner that speaks well for him, IMHO.

    Who would you choose to listen to?

    http://www.tomshardware.com/blurb/99q3/990802/blurb-01.html

    3D Chips, Quake3 and more ... Talking about 'being reckless to only serve oneself' brings me to another thing. Have you heard the 'news'? In the new version of Quake3-Test TNT2-boards are still beating Voodoo3-boards. Does that surprise anyone? Well, it should! I remember somebody making a big fuss over me claiming exactly that, when I ran two different versions of Quake3 before. I remember crap like 'inaccurate test' and 'software piracy', which was all only covering up for the fact that some individuals didn't like the TNT2 looking a lot better than Voodoo3 in Q3. I don't know what nice benefits the key person who ranted about my article received from 3Dfx and I can only guess which benefits were received by the publications who used his ridiculous accusations to crucify me, but now we can see the crystal clear truth, I was right all along. Thus I am looking forward to receive some nice apology letters from all the individuals who accused me of all kinds of crap. Let the apologies run in, I look forward to one in particular. In case you don't remember that person anymore, he was once taking advantage of the fact that he worked for Id Software, and luckily for all of us he's quieted down in the last months since nobody cares about him anymore.

    >>>

    And here is the link to the original benchmarks Tom made that sparked the debate.... http://www.tomshardware.com/releases/99q2/9905111/index.html

    Here, for contrast, is Brian Hook's response to Tom's statements; it ran in a column on VoodooExtreme called 'Ask Grandmaster B'.

    http://www.voodooextreme.com/ask/askarch/may10-14.html

    >>> May 14th, 1999 - (3:00am MDT)

    Snipped from tomshardware.com

    "This Brian Hook, or 'Grandmaster B', as he likes to call himself very modestly, has said a whole lot in the past, some of it was true, other stuff wasn't. 'Timedemo' DOES work if you do it right, and Brian is unfortunately incorrect. Sorry Brian, but even 'Grandmaster B' is not perfect"

    For several months now it seems as if his site has taken a very biased attitude towards certain video card manufacturers. Tom Pabst will go to any lengths to "prove" the superiority of one card over another. All personal opinions aside, how does it make you feel when you read comments such as this one that exhibit blatant bias and even disregard your comments towards a game that you programmed? This is just one of the many examples of the rampant biases in his "review" site.

    David C.

    Dave,

    I've seen those comments, and I think they pretty much speak for themselves. My track record speaks for itself within this industry, and unlike others, I don't have a resume that basically consists of "I've done a lot of HTML, therefore I must know my shit, right?". I actually do this stuff for a living, so maybe I'm not full of shit. And I don't advertise products that I'm also reviewing on my Web page. I don't make ANY money from Grandmaster B ads or from .plan files, so you know there's no conflict of interest when I tell it like it is. I do not own any shares of stock in 3D accelerator companies, so there's no conflict there.

    When I write this stuff, I do it because I enjoy it and because I like to educate others. It's that simple. It's not a job, it's not something I have to do to feed my dogs or pay off my car. I do it for the love of writing, communication, and education. It's that simple.

    Tom's numbers might be valid. Then again, they might not. There's some variability there, depending on how paranoid you are about his advertiser revenue affecting his findings.

    But all that is irrelevant -- what is going to happen now is that id will publish authoritative numbers using up to date drivers and production hardware. Those numbers will be the unarguable truth, using the most up to date code possible. There will be no illicit overclocking, no conflict of interest, and no advertiser revenue from chip manufacturers to dilute any discovery.

    Tom's comments about _me_ are irrelevant -- pissing matches over personalities are pretty much useless, people will take sides over who they like the most and rarely will listen to the issues. In the end, what matters is getting good, honest numbers to the public, and that is something that id has promised will be around Real Soon Now.

    >>>

  • by Anonymous Coward
    It is interesting to compare the 32-bit performance of the G400 and the GeForce:

    640x480 GeForce has a slim lead
    1024x768 about even
    1600x1200 G400 slapping the GeForce silly

    Expendable 1280x960 [anandtech.com]

    Q3Test 1.08 1600x1200 [anandtech.com]

    both from Anand's review [anandtech.com]
  • by Anonymous Coward
    Here is what happened: Tom posted Quake3test benchmarks at a time when the current test did not correctly calculate the fps scores from timedemo, a fact stated by John Carmack and Brian Hook. Hook stated in his .plan that Tom's benchmarks were inaccurate. Tom responded by saying Hook didn't know what he was talking about (keep in mind Hook was the #2 programmer on the Quake3 project up until he moved over to Verant), and Tom refused to release any facts about how he ran the timedemos other than that he did something or other to make it work. I could have made up numbers and claimed the same thing, and I have no reason to believe that this was not what Tom did.

    Eventually it boiled down to a pissing contest, Tom vs. Hook, over who knew how the Quake3 engine worked. What Tom was doing was the equivalent of me going up to Linus and telling him he doesn't know jack shit about Linux. This had nothing to do with "Tom's opinions"; it had to do with the fact that he was posting lies that favoured a certain company, using unverifiable techniques that the developers said were invalid. Tom has no credibility, and I now refuse to go to any page containing "tomshardware.com" in the URL.

    Btw, I hope you get your moderator status revoked, as I specifically recall CmdrTaco stating that said status was not to be used in the manner you suggested, and that it was also to be anonymous (i.e. it was never intended that moderators threaten to use their powers publicly).
  • The GLX driver was written by David Schmenk of nVidia.
  • Interesting idea, but you can probably get more performance out of the GPU the closer it is to the graphics chip.

    Still, this does exist in some form right now. Think 3dnow, AltiVec, etc. (Although this has much wider usage than just geometry processing)
  • There will be a Myth 2 patch in the near future that will add full Mesa/OpenGL compatibility.
  • Yes, I know - Anand's about one year younger than me. Maybe I'm naturally a better writer than him, but I can recognise that he needs to work on it a lot. Michael is better, but both need a good proofreader/editor to go over their work before they post it.

    As it is, though, they get good information out quickly, and once you can look past the piles and piles of irrelevant benchmarks and instances of terrible writing, it's a very good site for tech information.

  • I'm sorry, Anandtech is a good site, but they do not write well. Mixed metaphors, confused tenses, and awkward sentences are the rule of the day there. That's not to say I don't read the reviews with interest (though I do skip over the very long and tedious piles of benchmarking most of the time), but it's very painful reading for someone who knows how to write well, or has read a lot of good writing.
  • Matrox has released to the Linux community all the specs they do not consider to be IP, meaning that we do not have the specs for the 'WARP engine', the triangle setup engine(s) on the G200/G400. Luckily there are some rather good-hearted and persistent engineers inside Matrox; they got a binary-code version of the setup code out, along with specs for loading it in.

    3Dfx has released full 2D specs. It is a shame they do not release 3D specs, but it is definitely not for lack of people asking :) 3Dfx was also the first to release drivers, and the only ones to actually keep their product up to date (Glide on Linux is just as fast as Glide on Windows; I believe the entire thing is in assembly, with some MMX and 3DNow optimizations for bandwidth reasons, since Glide does not do transformation).

    Compare that to Matrox's zero released drivers, or NVidia's single driver release. Will they release another one? Who knows; hopefully. Maybe when XFree86 4.0 comes out.

    The only things you could do with the Glide source code are port it to another architecture or inline it with Mesa. Really. There is absolutely no optimization left to be done to the Glide API implementation (that human beings can comprehend).

    NVidia has released no register specs whatsoever. They did release a special version of the internal toolkit that they use to make the Windows drivers (i.e. their version of Glide, 'cept more object-oriented and all-encompassing, not just 3D but 2D and video). But they have been rather bad about promoting it: the binary drivers on their page for the API are for an old kernel, and the toolkit is not up to date with their own internal version. Because of this (and because the Riva X server/GLX don't even use it) there haven't been that many people messing around with it.

    But some people on the GLX mailing list (including me) have been trying to lobby for the release of the 'real' nVidia toolkit. If this gets released, things like the GeForce would be supported day of RTM. So keep your fingers crossed. :)
  • Detonator is Win9x and WinNT 4.0 only. All other drivers are written differently. The XFree86 support was not done by NVidia AFAIK; it was done by one of the people working for ST (their chip producer, at least at the time). The GLX driver that is out was written by a third party, I believe, and does not use any of the NVidia code at all, just #define'd register locations/vars.
  • I saw nothing on NVidia's site about using the GeForce for anything besides gaming. No mention of an overlay plane, which many 3D apps need.

    If they really do have a good 3D engine for serious work, hopefully they will make a high-end board.
  • Ugh... well, that attempted workaround to the Mozilla M10 word-wrap problem didn't work. Sorry about the formatting.
  • I think the benchmarks show how poorly Direct3D scales: a well-written OpenGL-based game (Q3) scales with whatever hardware and driver quality is available, while Direct3D requires the next version of Direct3D to make use of any new enhancements.
  • When it comes to drivers, I don't understand why a company wouldn't want to release the source. It's actually a lot easier for them, since some people might actually look at the source, find bugs, and/or add improvements.

    Hardware manufacturers are in business to sell hardware. Drivers are something they must write to go with their hardware, but they make *no profit* on them. Thus releasing the source would allow them to leverage the open-source model of development, lowering development costs while losing *no* revenue!

    At least that's the way I see it. Comments/corrections are welcome.
  • Perhaps it's in the interest of certain software companies (naming no names, obviou~1) that drivers are released single-platform, binary-only?

    Hamish
  • [nt] == No Text, get it?
  • I think the problem with most software MPEG-2 decoders is that they DON'T take advantage of the CPU multimedia extensions like the Pentium III SSE or the Athlon enhanced 3DNow!

    I wish someone would write one that does use SSE or 3DNow!, because both of these extensions are well suited to more efficient MPEG-2 video decoding.
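
    To make that concrete, here is a minimal sketch (not from any shipping decoder) of the half-pel averaging step used in MPEG-2 motion compensation. The plain C loop is roughly what software players do today; the SSE path shows how the Pentium III's packed-average instruction (_mm_avg_pu8) could handle eight pixels per iteration.

    #include <stdint.h>
    #ifdef USE_SSE
    #include <xmmintrin.h>   /* SSE integer extensions on MMX registers */
    #endif

    /* Half-pel motion compensation: each predicted pixel is the rounded
       average of two reference pixels. Assumes width is a multiple of 8. */
    static void half_pel_average(uint8_t *dst, const uint8_t *ref_a,
                                 const uint8_t *ref_b, int width)
    {
    #ifdef USE_SSE
        for (int x = 0; x < width; x += 8) {
            __m64 a = *(const __m64 *)(ref_a + x);
            __m64 b = *(const __m64 *)(ref_b + x);
            /* PAVGB: rounded average of 8 unsigned bytes at once */
            *(__m64 *)(dst + x) = _mm_avg_pu8(a, b);
        }
        _mm_empty();   /* clear MMX state before any FPU code runs */
    #else
        for (int x = 0; x < width; x++)
            dst[x] = (uint8_t)((ref_a[x] + ref_b[x] + 1) >> 1);
    #endif
    }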
  • These are the facts that you should have put in the original post. This is informative.

    Before, you made it sound like you had a personal beef with something he wrote.

    Personally, I don't read Tom's Hardware Guide, I find it a bit sensational.

    Thank you for clarifying. This is hella more useful than your previous post.

  • I was wondering the same thing. My good old Viper550 (TNT) card seems to work with Mesa-3.0 and NVidia's "glx 1.0" driver, but I'm under the impression this is only a "partial" set of drivers. (Myth II supposedly only works with 3dfx brand boards because they're the only ones that have a 'complete' set of OpenGL drivers - though I've been trying to find out if that's really true or not...)

    Has Nvidia said or done anything on the Linux front since the initial release of their drivers?
  • At the rate things are going, graphics cards will soon be the most expensive component in every system

    Maybe the most expensive component in every *gaming* system. Most business PCs (which are most PCs) have pretty crappy graphics - stuff you could buy retail for $10-$20.

    The video card is practically the last point of differentiation between systems - most of which ship with similar CPUs, the same Intel-based motherboard, similar EIDE disks, and similar sound hardware.
  • Ain't it a Graphical Processing Unit?

  • (making about 1 GB/s of bandwidth, about as much as a PC133 SDRAM can churn out).

    If that's true, that's kinda weak. Sun's UPA pushes well above that; the U2, which has been around for a few years now, can do 1.6 GB/s.

    Of course the h/w costs a ton more and only runs Solaris, Linux, etc... So I doubt there are any games that use it... yet.

  • What's the word on running the GeForce256 under linux? I've looked around but have not been able to find a definitive answer.
  • How can Tom possibly make a scientifically stable benchmark using beta drivers? nVidia themselves say that the GeForce256 [I still like the name NV10] currently does not have a complete set of [functionality/optimisation] drivers. Wouldn't it make sense to wait for decent software before running your benchmarks?

    Bah.
  • The Quake3 benchmarks were a bit 'weird' but not broken.

    The numbers were as close between runs as could be expected, texture caching and multitasking variables notwithstanding.

    The numbers were also as would be expected when compared from one computer to another. A P2-300 and a P2-500 scored only a bit closer together than they would in Q2 benchmarks, etc.

    The 'flaw' in the benchmarking was that the demos weren't using the final product, and Q3Test's performance changed significantly from 1.05 to 1.08, let alone to the final, with regard to one 3d card compared to another.

    This problem, where a TNT might be more handicapped than a Voodoo, created a situation where you couldn't directly compare a Q3 TNT number with a Q3 Voodoo number. But the numbers weren't meaningless. No more than any other benchmark is. You just needed to understand the fundamental point: the only benchmark that accurately indicates performance in your application of choice is that application itself.

    If Q3Test was what you wanted to play, and given that it was going to be out for (as we knew at the time) at least one month and probably closer to six, many people probably did buy a 3D card just for Q3Test.

    And benchmarks using Q3Test can also show you where cards have problems. Even if the TNT worked at half the speed of a V3 due to driver problems, you could get a pretty good indication of where the TNT was limited. Did it have a polygon throughput problem, or was it fill-rate limited, etc.?
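
    A rough way to read that from the numbers (my own rule of thumb, not anything Tom or id published): 640x480 is about 307,000 pixels and 1600x1200 is about 1,920,000, roughly six times as many. If a card's framerate falls close to six-fold between those two resolutions, it is fill-rate limited; if the framerate barely moves, the bottleneck is geometry, the driver, or the CPU, and more fill rate wouldn't have helped it.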

    So Brian Hook was partially in the wrong when he slammed Tom. And he slammed Tom partly because of Tom's use of the Q3IHV (which was pirated) in benchmarks.

    Then they started flaming each other and they both came out looking like idiots.


    So, to summarize: broken benchmarks can still be of value if you take a minute to understand them and how they are 'broken'. As long as the numbers aren't derived with a call to random(), they have some meaning.
  • How does a chipset like this compare to what SGI's or Intergraph's graphics workstations use?

    Also, what is the price for those adaptors?

  • This is NOT true. Anand tests boards for things such as upgradability and reliability. Try reading the motherboard comparisons, where he had winners in different categories; that is, he recommended an Abit board for tweakers and overclockers, while saying at the same time that it was not the most stable solution and not the best option for end users who do not overclock, or for servers. In another category, he had the best all-around board for non-tweakers, an Asus (I THINK - not sure).

    Anand is by far the most reliable and objective reviewer around. I have been following his development since he began his site, literally. He is decent and unbiased, as others are. His tests are methodical and reproducible, and he worries a lot more about quality than about volume. Now and then he voices his opinions, but he is very aware of the community and its needs, and that not everyone is an overclocker.
  • My policy about upgrading is that I stick with what I have for as long as I can comfortably stand it. Right now I'm still using an eight-month-old PII-350, TNT, 128MB RAM, SCSI rig at home. I will probably be able to use this machine as my primary workstation for about one and a half more years. Then I will make a similar investment and bump up several generations of technology (e.g. a new Athlon, GeForce, etc.).

    Once I upgrade, my old workstation gets delegated to an honorable server role (FreeBSD). That's the way I'm using my old P166 now. I have found that this works very well, since you not only get a new machine, but you realize that your old machine isn't worthless. You'd be amazed by how many cool projects you can do if you have an extra machine sitting around the house that you are willing to experiment with, without fear of losing data or a critical resource.

    So, for me at least, the normally vicious cycle of PC upgrades really isn't that bad after all.

  • My friend works at 3dfx, and basically the word is that Tom is on the payroll of nVidia... 3dfx gave him a look at one of their marketing docs and he went and leaked it to the world. 3dfx is not angelic, but Tom essentially is biased due to being sponsored by nVidia.
  • Those prices are only for the Creative 3D Blaster Annihilator. Here are listings based on the keyword "GeForce". (They include the Guillemot 3D Prophet, Elsa Erazor X and LeadTek WinFast GeForce 256, among others.)

    Listings on PriceWatch [pricewatch.com]

    As it turns out, the cheapest pre-orders are (as of now) for the Creative 3D Blaster Annihilator.

  • Don't forget AnandTech.com :) I kinda like this site... Does anyone have any reviews of his reviews? :) Anyways, his review (part 1) is at http://www.anandtech.com/html/review_display.cfm?document=1056 [anandtech.com]
  • >133 Mhz x 64 bit = 1 Gb/s

    My calculator says that 133*64 equals 8512, not 1000 or 1024. Try again.
  • He did read it. He said "on the motherboard" - the GPU is on the graphics card. No mistake

    -----------

    "You can't shake the Devil's hand and say you're only kidding."

  • Given the amount of FP processing power and memory on recent graphics cards, they could probably run a variety of non-graphics tasks faster than the host computer.

    For example, you could run seti@home on your graphics card, instead of just using it to display the results.

    How about it NVIDIA? You could leap to the top of the seti@home CPU statistics!

  • by unyun ( 45048 )
    Check out some benchmarks. I did before buying my video card, and the G400 is the fastest video card under Linux according to all the sites I saw.
    (Fastest meaning fastest under XFree86; I didn't look at the performance under the commercial X servers.)
    Sorry I don't have a link. I've been looking for those pages that I found the benchmarks on for the past week because my friend is looking at buying one, and I want to show him how much better the G400 is...
    So if you do find a page with that on it, please let me know.
  • Dude. They DO call it a GPU. Read the article. Then post.
  • Agreed, Anand is too verbose and tends to ramble about barely related stuff. He needs to be more concise. But go easy on the kid - he's only 16 or something (check out the about section if you don't believe me!)

    Daniel.
  • I suppose you still support 3dfx's lack of a full OpenGL ICD as well, despite the original Voodoo chip coming out... how long ago was it now?

    At least nVidia and Matrox have written full OpenGL ICDs, complying with what is considered "THE" standard for 3D graphics, while 3dfx continues with their half-baked OpenGL ICD/proprietary Glide system. I know who I'll be supporting...

    I used to have a Canopus Pure3D, utilising the Voodoo chip, but when the time came to upgrade, the TNT was the best option in my opinion - and nVidia's products, again IMHO, continue to set the standard by which the others are all measured, and for good reason.

    If anything, 3dfx are showing M$-like tendencies in releasing a proprietary standard, persisting with it, and expecting people to adhere to it, when a perfectly good and usable standard had already been established, and proven to work acceptably.
  • Hum, well, AGP is 32 bits wide, and the 4x version transfers data at an effective 266 MHz... (making about 1 GB/s of bandwidth, about as much as a PC133 SDRAM can churn out). So I don't see what's wrong with this bus.
  • Duh, I've been reading his comments about 3dfx and nVidia chipsets, and he is not biased against 3dfx and in favor of nVidia... the nVidia chipsets are way more modern, and better if not much faster. The Voodoo3 is a faster Voodoo2, which is basically two Voodoo1s on the same board. If 3dfx likes to save on R&D by using the same cores again and again, that's fine, but when the competition comes up with a really new design, I find it sad that they cry that reviewers are biased.

    We are in the AGP 4x age, and 3dfx is still using local textures limited to a size of 256x256 pixels, with no 32-bit rendering... this is not innovation, and they don't deserve any compliments.
  • Duh, I like Anandtech, but this site is not much better than the other: it keeps explaining product datasheets in detail and running benchmarks. Tom's Hardware usually has some pretty technical stuff that you don't find anywhere else (I remember an article about Rambus a year ago that was pretty advanced, and that you won't see anytime soon on Anandtech).
  • Yup, it is not exactly huge, but that is the maximum speed you get out of PC133 SDRAM at peak bandwidth (133 MHz x 64 bits, roughly 1 GB/s). The only way to increase this speed is to use Rambus or DDR RAM and/or increase the bus width.

    Increasing bandwidth only makes sense if you need it, like if you have several CPUs or a huge graphics subsystem. A PC graphics chipset will hardly ever need 1 GB/s of bandwidth right now (most of them don't even max out the roughly 500 MB/s of AGP 2x...)
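
    To spell out the arithmetic behind those figures (my numbers, not from any spec sheet): PC133 SDRAM is a 64-bit bus at 133 MHz, so the peak is 133 x 10^6 transfers/s x 8 bytes = about 1.06 GB/s, which is the same thing as roughly 8.5 gigabits/s. AGP 2x moves 4 bytes twice per 66 MHz clock, about 533 MB/s, and AGP 4x doubles that to about 1.06 GB/s, which is why the AGP 4x and PC133 figures quoted above come out nearly equal.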
  • Perhaps because it may be possible to divine some of their hardware voodoo from what the driver itself is doing. So they don't release driver source, to at least slow the reverse-engineering process.

    Or I could be wrong.
  • nVidia has a common driver, Detonator, [nvidia.com] for its TNT, TNT2, and GeForce chips. As you can see from the link, there are drivers for Windows, BeOS, Linux, and even OS/2.
  • The concept and design are superior. The performance is not.
    ---
  • From Creative's website:

    This new accelerator leverages the new GeForce 256[tm] technology from NVIDIA

    The 3D Blaster Annihilator will be available through Creative's extensive distribution and retail sales network in October,
  • Why not just slap another processor on the motherboard and call it a "GPU" instead?

    Well, you have to squeeze a lot of geometry through the AGP bus. If you can store some of the scene on the card, you can just have the card happily rendering away while the CPU(s) and bus are free to do other important things.

    Furthermore, this is a more complicated chip than the P3 (at least in terms of number of transistors), dedicated to the particular maths which make 3d rendering hard. It doesn't matter how fast this runs Office or Photoshop - so they optimize accordingly. Often you can exchange generality for speed.
  • by cdlu ( 65838 )
    Because that's the obvious solution. PHBs don't like obvious solutions. :)
  • upgradability!
  • Should one buy the first generation of GeForce 256 cards? Seeing how more than one manufacturer has a license to use the GeForce, will it be more optimized in future versions?

  • My guess would be soon, now that nVidia is making their own X11 servers. Any video card with enough demand for drivers eventually gets them, unless the manufacturer is withholding specs.
  • Because a processor designed just to do transformations and lighting costs less and does its particular tasks faster.
  • Check them out yourself; here are the specs for SGI's Onyx2 with InfiniteReality graphics:

    http://www.sgi.com/onyx2/tech_specs.html

    And here are the nvidia GeForce DDR specs:

    http://www.firingsquad.com/hardware/nvidiageforce/page2.asp

    The scary thing is, according to the paper specs, a $300 GeForce has roughly equivalent triangle throughput and pixel fill rate to a $300,000 SGI Onyx2. Don't you love economies of scale?

    Of course the SGI has a whole bunch of features the GeForce doesn't, like 48-bit colour, an insane amount of framebuffer memory, support for the complete OpenGL pipeline in hardware including 3D textures, etc.

    The GeForce doesn't have hardware OpenGL overlays, so it won't be ideal for professional applications, but what do you expect for the price?

    But still, I know I'm slapping a GeForce in my system ASAP.
    You MUST be smoking crack! I remember when nVidia took a lot of heat over the obfuscated source code drivers they released. They had to, and they have been working at getting full source available. (By 'had to' I mean it was other folks' code that they didn't have the legal right to release.)

    Compare that to 3dfx, who are binary-only. I couldn't believe they dared to show up to Linux Expo... even there they were talking solely binary-only...

    Plus they do Glide... Glide should die... OpenGL is the standard...
  • It's not. Myth II uses Glide. 3Dfx is actually unique in being one of the last people to *not* have a complete set of OpenGL drivers; that's why they have their "MiniGL" to run Quake* games.

    That's not right; 3dfx has had a complete non-beta OpenGL ICD out for months now. The reason they have a MiniGL is the same reason that Matrox made their TurboGL (aka MiniGL): it's much easier to optimize an OpenGL subset for particular games than it is to optimize an entire ICD.
  • Nvidia has released the specs to their cards? ALL the registers etc? Where can I find them?

    I've also been under the impression that, speed-wise, 3dfx cards are the only choice under Linux. I know my Banshee runs Q3Test, Unreal Tournament (through WINE), and Quake2 great under Linux. So isn't a really fast 3dfx binary Glide better than the others?

    I have to wonder if the people at /. have ever been to Tom "Buy My Book!" Pabst's site. It has degenerated into a lame attempt to sell his crappy book, with more advertisements than even the average gaming site. He fudges his results to get boards to review earlier than other people. It doesn't help that he was going on with paranoid rants about conspiracies against him for a while, although I don't know if he still does this, as I stopped reading his site months ago.

    If /. is going to post about hardware sites, post about Anandtech more often. Anand and his buddies know more about computers than Tom ever could, they write better, they are more objective, and although they have a ton of ads, they have fewer than Tom.

    In short, Tom is full of shit, and so is his site, and I'm willing to post it attached to my account without any AC crap! Not that it matters; /. moderators will nail this as "flamebait" (funny that they don't automatically do that to Jon Katz's demented writings, not that he does much on /. anymore anyway).
  • This is the best info we have?
    I think not; here's a little roundup of reviews (ripped from The Shugashack [shugashack.com]):

    GeForce / TNT2Ultra / Voodoo3 Roundup [Shugashack [shugashack.com]]
    Guillemot GeForce256 3D Prophet Review [Ace's Hardware [aceshardware.com]]
    Guillemot GeForce256 3D Prophet Review [Puissance PC [puissancepc.com]]
    nVidia GeForce 256: To Buy or Not to Buy [AnandTech [anandtech.com]]
    Guillemot GeForce256 3D Prophet Review [GA-Source [ga-source.com]]
    nVidia GeForce256 DDR Review [3DGPU [3dgpu.com]]
    nVidia GeForce256 DDR Review [Riva Extreme [rivaextreme.com]]
    nVidia GeForce256 DDR Preview [Thresh's FiringSquad [firingsquad.com]]
    nVidia GeForce256 DDR Review [Riva3D [riva3d.com]]
    nVidia GeForce256 DDR Review [Planet Riva [planetriva.com]]
    nVidia GeForce256 DDR Benchmarks [Bjorn3D [bjorn3d.com]]
    Guillemot GeForce256 3D Prophet Review [CGO [cdmag.com]]
    Guillemot GeForce256 3D Prophet Review [Fast Graphics [fastgraphics.com]]
    Creative GeForce256 Annihilator Benchmarks [3DHardware [3dhardware.net]]
  • Hardly surprising, considering Tom's well-known bias towards nVidia. Can't have something that might possibly make it look like less than the best.
  • Check out XIG's web site (http://www.xig.com/Pages/AGP-BENCHMARKS.html). The G400 comes out on top many times. I was considering leaving my Voodoo1 card in for doing OpenGL stuff, but it has since been pulled out and will make way for a PCI ethernet card.
  • I know this discussion is long over, but I personally like Tom's reviews, and I think that he does have integrity, as evidenced by his reaction to Intel's *apparent* attempt to censor and intimidate [tomshardware.com] him.

    Why do I believe his side of this story? Mostly because of all the nasty things I've been reading about Intel's *apparent* strong-arming of Taiwanese motherboard manufacturers, and the *apparent* cutting of a deal with Gateway so they'd stop using AMD CPUs in their machines. Not to mention Intel's failed attempt to *apparently* force RDRAM on OEMs and the market, and a number of other Micros~1-esque tactics.

    The fact of the matter is, Tom is very thorough. He was the first to publish an overclocking guide [tomshardware.com] for the Athlon, for example. Another example of his dedication is his yearly trips to Taiwan to talk directly with manufacturers there. I mean, who does this but someone who's really into what he does? The guy doesn't get paid to do this, you know; he's a medical doctor, not an employee of some corporation.

    Regarding 3dfx. They've really been acting strange lately, and not a little bit cheesy, IMO.

    For example, cutting off Creative and Diamond like they did, and making their chipset proprietary. Maybe they have a right to be paranoid, what with Intel in bed with S3 [s3.com] and S3 now owning Diamond... but this was a bad move. Look at the hell that Apple went through as a result of its decision to go proprietary back in the days before the Return of Jobs(TM). 3dfx will, I suspect, go through similar troubles, as major manufacturers have little choice but to either create their own chipsets (which can be disastrous, look at #9's "Ticket To Ride") or use another company's chipset, like nVidia -- who is making better 3D chipsets right now.

    Another thing that 3dfx did which has lost them quite a bit of market share (as evidenced by contrasting nVidia's 1999 profit and loss statements with 3dfx's) was essentially ship an overclocked Banshee card :) What else can you call the Voodoo3, stuck at 16 bits? Then there's the glaring lack of a heatsink fan on the Voodoo3s (fans come with nVidia-based cards), and the vast, power-supply-crippling 183 MHz of the Voodoo3 3500, which I've read has *apparently* been problematic for some users with lower-wattage power supplies...

    These things don't go unnoticed by consumers, or for that matter by honest hardware reviewers. The benchmarks Thomas Pabst used in the GeForce article are valid, and the Voodoo3 scored zero on the 32-bit true-color tests because it is a 16-bit card. Simple as that... if you'd read the whole article, you'd see that he did list the GeForce's shortcomings, specifically the memory bandwidth problems with the SDRAM versions of the card vs. the upcoming double-data-rate RAM versions (to say nothing of the *expected* 64 MB GeForce that I'm waiting for).

    I have not lost faith in 3dfx, and I hope that their troubles of late will cause some restructuring which will lead to an increased emphasis on design quality, rather than throwing MHz at the problem. I do have to applaud their efforts to support the Apple community with Mac drivers for voodoo cards, although I hear poor Microconversions [microconversions.com] *might* have been forced out of the mac/voodoo card business as a result :(

    Despite all this, the 3D wars are far from over, and I suspect the Voodoo4 will be quite a sight to see, but I won't rush out and buy either a GeForce or a Voodoo4 until I know all the facts, and for that I'll probably read a number of reviews. I've also found that forums where actual users relay anecdotal accounts of their experiences with specific products are truly telling. If you don't believe me go check out the forums [intel.com] at Intel :)

  • That much is true, but except through sheer incompetence, raw numbers don't tend to lie. In any case, at the moment this is the best information we have, and I think it's at least conclusive enough to make the dual-memory one fairly tempting.
  • It's a big internet. Very helpfully, there are links to several other GeForce reviews a few posts above here. If you don't like Tom's site, avoid it. Or, failing that, hire Iraqi gunmen to go round his house and steal all remaining copies of the book.
  • They've gone off in different directions: 3dfx went for speed, the rest went for pretty. The trouble is, 3dfx's speed advantage has vapourised in anything other than Glide. Don't get me wrong, my current card is a Voodoo2 and so far I've had no reason to replace it, but unless 3dfx deliver that mystical "Voodoo TNG" you mentioned, when I do replace it it'll be with a GeForce.
  • Yeah, saw those later. Sorry about that. They all seem to say pretty much the same thing though.
  • I agree with you, it's nice to see that nVidia added some hardware MPEG-2 acceleration with the motion compensation, but I would've liked to see some more support for DVD playback. HDTV support is cool, but I don't see myself getting one any time soon. As for a full MPEG-2 decoder, that would be a little excessive, since it would mean adding an audio output to the card. But who knows what the next-generation chipset will bring? Or what some card manufacturer will cook up?

    As for software decoders, the ones for Windows don't seem very optimized. And Linux decoders are still being worked on. For those of you interested in Linux DVD, I recommend that you check out the LiViD Project [openprojects.net] and the Linux TV site. [linuxtv.org]

  • Well, actually... Nvidia has released the specs for their cards: all registers, etc., so that if they needed to, the community could write their own 2D/3D drivers from the ground up. But there's no need, as Nvidia has already done this, and will continue to do it. Matrox has done similar things. What do we get from 3dfx? A binary-only version of Glide.
  • True, anybody can put up a web site and review products, but I'll go to bat for Tom to say he is (or was about a year ago when I was frequenting the site) one of the best on the web when it came to technical information - especially on the whole dual Celeron overclocking thing.

    Then again, I haven't really seen his site in about a year, since he became more of a video board review site.

    "The number of suckers born each minute doubles every 18 months."
  • It's there--competitive with at least the low-end SGI hardware. Basically, there is a hierarchy of computations in 3-D graphics. (Copied from the flightgear hardware requirements page [flightgear.org].)

    1. Stuff you do per-frame (like reading the mouse, doing flight dynamics)
    2. Stuff you do per-object (like coarse culling, level-of-detail)
    3. Stuff you do per-polygon or per-vertex (like rotate/translate/clip/illuminate)
    4. Stuff you do per-pixel (shading, texturing, Z-buffering, alpha-blend)

    At each level of the hierarchy the amount of computation goes up an order of magnitude or so. The GeForce256 moves up the hierarchy to the per-polygon level, providing (eventually, when the software properly supports it) an order-of-magnitude improvement in 3-D rendering, just like an SGI system does. There is apparently going to be Linux OpenGL support, too. Price, I believe, is in the $250 range.
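
    To make stage 3 concrete, here is a minimal sketch (my own illustration, with invented types) of the per-vertex work: transforming object-space positions by a modelview matrix. A CPU-bound engine runs a loop like this itself; with a hardware T&L part like the GeForce, the driver hands the untransformed vertices and the matrix to the card and this loop happens there.

    #include <stddef.h>

    typedef struct { float x, y, z; } Vec3;
    typedef struct { float m[16]; } Mat4;   /* column-major 4x4, OpenGL-style */
    typedef struct { Vec3 *verts; size_t nverts; } Mesh;

    /* Stage 3, done on the host CPU: transform every vertex by the
       modelview matrix (projection and lighting omitted for brevity). */
    static void transform_vertices(const Mat4 *mv, const Mesh *in, Vec3 *out)
    {
        for (size_t i = 0; i < in->nverts; i++) {
            const Vec3 v = in->verts[i];
            const float *m = mv->m;
            out[i].x = m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12];
            out[i].y = m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13];
            out[i].z = m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14];
        }
    }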

  • Because the Geometry Processor Unit (what GPU stands for) will be optimized for processing geometry, which is currently a task of the CPU. With the GPU, all the processor will be responsible for is feeding geometry data to the GPU (well, that's the only graphics function it'll be responsible for).

    In the end, the GPU should be faster at geometry than the CPU, which is the goal.

  • I was wondering the same thing.

    Me too.

    My good old Viper550 (TNT) card seems to work with Mesa-3.0 and NVidia's "glx 1.0" driver, but I'm under the impression this is only a "partial" set of drivers.

    It's a complete OpenGL driver AFAIK, but it doesn't do direct rendering (it goes over the X pipe), and it's not nearly as optimized as the Windows drivers yet.

    (Myth II supposedly only works with 3dfx brand boards because they're the only ones that have a 'complete' set of OpenGL drivers - though I've been trying to find out if that's really true or not...)

    It's not. Myth II uses Glide. 3Dfx is actually unique in being one of the last people to *not* have a complete set of OpenGL drivers; that's why they have their "MiniGL" to run Quake* games.

    Has Nvidia said or done anything on the Linux front since the initial release of their drivers?

    There was an interview where an Nvidia rep said they'd have GeForce Linux drivers (but X server? Mesa drivers? Who knows?) when the card shipped, but I haven't heard anything since.
  • but I would've like to see some more support for DVD playback. HDTV support is cool but
    Proper HDTV support requires mostly a superset of what is required for DVD support; everything but the subpicture decode.
    I think that a full MPEG 2 decoder, that would be a little excessive since that would mean adding an audio output to the card.
    No, adding an MPEG 2 decoder to the card doesn't necessitate adding an audio output.

    This can be done two ways:

    1. Leave the demux of the MPEG transport or program stream to the main CPU, and only hand the video PES (or ES) to the card. Demux is fairly easy and won't suck up but a tiny fraction of the main CPU as compared to doing full decode.
    2. Let the card demux the transport or program stream, and hand buffered audio PES (or ES) data back to the main CPU.
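
    To sketch option 1 (my own illustration; card_submit_video_pes() stands in for whatever interface such a card would actually expose): MPEG program streams mark packets with a 00 00 01 start-code prefix followed by a stream ID, and video elementary streams use IDs 0xE0-0xEF, so the host only has to scan for those and hand each packet to the card.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical card interface: hand one video PES packet to the decoder. */
    void card_submit_video_pes(const uint8_t *pes, size_t len);

    /* Very simplified program-stream demux: find video PES packets
       (start code 00 00 01, stream_id 0xE0..0xEF) and forward them.
       A real demuxer also tracks pack headers, padding and audio streams
       instead of scanning blindly through payload bytes. */
    void demux_video(const uint8_t *buf, size_t len)
    {
        size_t i = 0;
        while (i + 6 <= len) {
            if (buf[i] == 0x00 && buf[i+1] == 0x00 && buf[i+2] == 0x01 &&
                buf[i+3] >= 0xE0 && buf[i+3] <= 0xEF) {
                /* PES_packet_length is the two bytes after the stream id */
                size_t body = ((size_t)buf[i+4] << 8) | buf[i+5];
                size_t pkt_len = 6 + body;
                if (i + pkt_len > len)
                    break;                    /* wait for more data */
                card_submit_video_pes(buf + i, pkt_len);
                i += pkt_len;
            } else {
                i++;
            }
        }
    }
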
    As for software decoders, the ones for Windows don't seem very optimized.
    They're not very good, but that's not because they're not very optimized. Just compare any of the Windows players to the NIST code if you want to see the huge difference between majorly optimized and non-optimized decoders.
  • Why should he have to respond? Because some people question the validity of his OPINIONS?

    Q: How do we evaluate someone else's OPINIONS?

    A: The same way we evaluate anything else. If you want to read him, go for it; if you don't like him, don't read him.

    Apparently, not reading his articles has expanded to the realm of trash-talking him from the comfort and safety of the AC post.

    If you're going to lay into somebody, please, have the courage to accept personal responsibility, and link to the allegations instead of giving a vague, biased (but presented as unbiased) description of what these allegations were.

    You're lucky I ran out of moderator points already.




  • The reason most of the benchmarks were so close is that none of these games (with the exception of parts of Quake3) use the OpenGL T&L pipeline: at the time they were made there were no hardware T&L engines, so by 'rolling their own' T&L they could get significant speedups. The nVidia Tree demo should be evidence to anyone of what a dramatic difference hardware T&L can make. That tree demo has far more complexity than your average shoot-em-up game, and these are the kinds of things we can expect when developers make games for hardware T&L (most new games will use the hardware).

    So the real problem with the benchmarks was running a bleeding-edge graphics card on yesterday's software. It does well, even better than the competition, but don't expect a 3X increase... you can't get much faster than 100 FPS no matter how you try. But the GeForce should be able to do 60 FPS with 10X the polygon count of current cards (assuming the developer is handling T&L with OpenGL).
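
    For the curious, the difference comes down to what the engine hands to OpenGL. A minimal sketch (assuming a working GL 1.x context; this is not code from Quake3 or any actual engine): if the game submits untransformed object-space vertices and lets GL apply the modelview matrix, the driver can push the transform work to a T&L chip like the GeForce; if it pre-transforms everything on the CPU and submits screen-space triangles, the T&L unit sits idle.

    #include <GL/gl.h>

    /* Draw an indexed triangle mesh through OpenGL's transform pipeline,
       so a card with hardware T&L can do the per-vertex work itself.
       'verts' are untransformed object-space positions (x, y, z). */
    void draw_mesh_gl(const GLfloat *verts, const GLushort *indices,
                      GLsizei num_indices, const GLfloat modelview[16])
    {
        glMatrixMode(GL_MODELVIEW);
        glLoadMatrixf(modelview);     /* let GL (and the GPU) do the transform */

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, verts);
        glDrawElements(GL_TRIANGLES, num_indices, GL_UNSIGNED_SHORT, indices);
        glDisableClientState(GL_VERTEX_ARRAY);
    }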
  • Where is the cheapest place to buy one of these things? (including shipping)
  • Thanks, I was searching for "Ge Force" and nothing was turning up. Here is a link to the complete listing at Pricewatch.

    Prices at Pricewatch [pricewatch.com]

  • Why not just slap another processor on the motherboard and call it a "GPU" instead?

    --Adam
  • Kinda off subject, but shame, shame, Tom, for not using the new Matrox G400 drivers that were released on Oct 8th and include the new TurboGL (MiniGL) drivers. Would have liked to see how the G400 Max performed with the newest drivers compared to the GeForce at the higher resolutions. From some of the benchmarking I have seen, it is giving the TNT2 Ultra a run for its money in OpenGL games at higher resolutions.
  • "Why not just slap another processor on the motherboard and call it a "GPU" instead?"

    Theoretically, you could do that, but you would need a mighty powerful CPU to achieve the level of performance of the GeForce, since CPUs aren't optimized for graphics processing. (Note: GPU stands for Graphics Processing Unit, not Geometry Processing Unit as someone earlier posted.) The GeForce is a much more cost-effective solution for graphics processing than getting another CPU.

    According to Nvidia's web page about the GPU [nvidia.com], their technical definition of a GPU is:

    "a single-chip processor with integrated transform, lighting, triangle setup/clipping and rendering engines that is capable of processing a minimum of 10 million polygons per second."

    The review of the GeForce 256 [aceshardware.com] at Ace's Hardware [aceshardware.com] has good info comparing CPUs to GPUs. As another poster mentioned, graphics processing exists in a limited form in CPUs (3DNow!, etc.). Possibly in the future CPUs will integrate more advanced graphics processing functions. But even if you had a CPU with complex graphics processing functions, you would still need some sort of display adapter. Personally, I think it makes more sense to have the display adapter and graphics processing integrated in one unit.

  • by Anonymous Coward on Monday October 11, 1999 @03:41PM (#1622314)
    A lot of confusion seems to be going around about that whole GPU T&L thing as applied to Quake3; well, Shugashack [3dshack.com], amazingly enough, has the answer from one of the developers working with Quake3 technology. Here you go, right from the Shack. His benchmarks [3dshack.com] of the card are pretty good too.

    Quake 3 does indeed use T&L and will take advantage of any hardware supporting it. It uses OpenGL's transformation pipeline for all rendering operations, which is exactly what T&L cards such as the GeForce accelerate.

    Well what if Q3 used the other stuff besides the transform engine? The other three real features are the per-vertex lighting, the vertex blending, and the cube environment mapping. Since Quake 3 has static world lighting, one of the only places for the lighting to be useful would be for the character lighting, especially for dynamic lights. The current character lighting implementation is pretty quick though, I don't really see *too* much of an improvement there, though it is worth mentioning. The vertex blending may help skeletal animation, but since the current test has no skeletal animation, it would not help it at all in the current benchmarks. And the cube environment mapping won't help the game at all, since the game doesn't use cube environment mapping to begin with.

    While I'm at it, the use of OpenGL doesn't necessarily mean that all games will be accelerated by the GeForce's T&L. Such examples are Unreal engine (including UT) based games. Its architecture is very different from QuakeX's and cannot benefit from T&L hardware without rearchitecting the renderer, as Tim Sweeney has said before.
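
    For reference, the per-vertex lighting mentioned above is just OpenGL's fixed-function lighting. A generic sketch (again assuming a GL 1.x context, and not taken from Quake3): give GL normals and a light, and the transform/lighting stage, which runs in hardware on a GeForce, computes the per-vertex colors instead of the CPU.

    #include <GL/gl.h>

    /* Enable OpenGL's per-vertex lighting for a mesh that has normals.
       On a GeForce this math runs in the hardware T&L unit; on older
       cards the driver does the same work on the host CPU. */
    void draw_lit_mesh(const GLfloat *verts, const GLfloat *normals,
                       const GLushort *indices, GLsizei num_indices)
    {
        static const GLfloat light_pos[4] = { 10.0f, 10.0f, 10.0f, 1.0f };

        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);
        glLightfv(GL_LIGHT0, GL_POSITION, light_pos);

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_NORMAL_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, verts);
        glNormalPointer(GL_FLOAT, 0, normals);
        glDrawElements(GL_TRIANGLES, num_indices, GL_UNSIGNED_SHORT, indices);
        glDisableClientState(GL_NORMAL_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
    }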

  • by Anonymous Coward on Monday October 11, 1999 @01:44PM (#1622315)
    Tom Pabst (of Tom's Hardware) has gotten himself mixed up in a lot of tough questions about his journalistic integrity (or lack thereof). There have been many accusations that he was a little too generous with certain reviews in exchange for getting hardware to review before anyone else on the 'net, and there was a big stink about him rushing to publish Q3Test "benchmarks" without even looking into whether such a thing would have any basis in reality. Tom has responded to some of these allegations, and his responses have not been particularly professional.

    Anyone on the net can put up a web site and review products. Don't take other people's reviews (which are really opinions) as truth. Question seriously those who are writing. Many online authors have not displayed much professionalism, and those types are probably best avoided.

  • by TraumaHound ( 30184 ) on Monday October 11, 1999 @03:17PM (#1622316)
    Riva3D [riva3d.com] ran the GeForce256 through Sense8's Indy32 benchmark. The results are here [riva3d.com].

    Far as I can gather, looks pretty promising. (with the right CPU. They used an Athlon.)
  • by RickyRay ( 73033 ) on Monday October 11, 1999 @03:45PM (#1622317)
    Seems to me like the next logical step is to have a graphics card that can handle more of the game's duties. If a box is built right, the CPU can be slow but everything flies, because the work is handed out to chips specialized in different tasks (see: the Amiga, mid 1980s, which is still a superior design to any current PC). This chip makes a good first step in that direction, taking over lighting and transformation and reducing the need for faster AGP transfers.
    Ideally, I would like to see a graphics board that actually takes over some of the program itself. Of course, it would be even better to have a NUMA motherboard with one chip dedicated to I/O, another to graphics, another to sound (not through an ISA/PCI slot); that way the CPU itself wouldn't have to be the latest and greatest to turn out incredible results. These guys are turning out a chip in the ballpark of $100/piece wholesale that runs circles around any CPU. The whole computer needs to get that way. The only time you should ever need a fast CPU is for science/math, not for a normal desktop machine.
    ***Of course Transmeta might change the whole scenario, because if their chip can be reprogrammed on the fly to do things like graphics then there's no need for so much hardware.

  • At first, I thought the moderators were all smoking crack again, but I see that they probably ran out of moderation points... Why is it that the subject of 3D graphics cards seems to bring out such obnoxious folk?

    Frankly, I'm just not interested in these new components. Is a 5% increase in performance enough to justify an extra $100, and if so, how many generations should be skipped after that before upgrading? Nvidia is talking about a 6-month schedule (though nine months to a year seems more realistic).

    At the rate things are going, graphics cards will soon be the most expensive component in every system, even with RAM at its current prices. I'm also willing to bet that NetBSD will be ported to exclusively use the GPU, bypassing most components altogether, before the product is even released...

    For me at least, I can't justify the costs of upgrading my system every six months just so I can play the newest rehash of a ten year old game. It doesn't impress me that the *new* version gives you more control, gore, levels, and/or 3D graphics -- I liked the *old* game just fine.

    CPU and component speed haven't been the bottleneck in games for a long, long time. The imagination of game developers has been occupied with utilizing the hardware acceleration buzzword of the moment, not with developing new groundbreaking ideas...

    My US$0.01 (lousy Canadian pennies :)
  • by tamyrlin ( 51 ) on Monday October 11, 1999 @02:05PM (#1622319) Homepage
    Doing a search for geforce on www.linuxgames.com revealed this snippet from an irc log:
    -----------
    ([Jar]2) (orlock) WIll they still be supporting Xfree86/Mesa3D/glx/linux/etc like they have in the past?
    (nvdaNick) Yes.


    (MicroDooD) (LaRz17) Will drivers for multiple operating systems be released at the same time?
    (nvdaNick) As for driver releases, I think NVIDIA is planning to release all drivers at once.



    ([Jar]) (MfA) Will the non windows drivers be open source? (ie not run through the pre-processor)
    (nvdaNick) What would you want with open source drivers, by the way?
    (nvdaNick) I'm not sure what our plans will be regarding that.
    -----------------


    \begin{speculation}
    Anyway, if this is correct and nVidia is going to have official support for Linux, they are probably going to use the SGI sample implementation and thus cannot release their driver as open source.
    \end{speculation}
  • by Chris Johnson ( 580 ) on Monday October 11, 1999 @04:15PM (#1622320) Homepage Journal
    What happened with that? Did they make fun of him or not give him cards to test or something? Like anybody, I have pet vaporware that I'd like to see succeed and become real, and for me that's the next-generation 3dfx stuff with the antialiasing and motion blur (where the former would work with old games too). It's OK with me if it doesn't fly; I'll still wait and see what happens with it, but it's pretty boggling to see this guy kicking at 3dfx so badly. He was coming up with these big benchmarks for a GeForce card that people can't even get yet, and making nasty remarks about how poorly the Voodoo3 measured up (when actually Glide ran competitively when available), and how old is the V3 by now? Compared with a GeForce that people can't even get ATM?
  • by Eric Smith ( 4379 ) on Monday October 11, 1999 @02:13PM (#1622321) Homepage Journal
    It's nice to see that they've apparently added some of the MPEG 2 motion compensation support that ATI has had for a while. But I really wish they would bite the bullet and add a full MPEG 2 decoder. It would only take about a half million transistors; no one would even notice the extra die area.

    Software MPEG 2 decoders for Windows basically suck, and there aren't (yet) any real-time decoders for Linux anyhow. Hardware decode is the way to go.

    I keep hoping that someone will ship an inexpensive VIP-compatible MPEG 2 decoder daughterboard that I could use with my Asus V3800 TNT2 card, and it hasn't happened yet, but simply building it into the next generation nVidia chip would be even better.

    Eric

  • by sugarman ( 33437 ) on Monday October 11, 1999 @01:54PM (#1622322)
    Did an NDA expire today or something?

    Just a couple quick links:

    Anandtech GeForce 256 Review [anandtech.com]
    Ace's Hardware GEForce 256 Review [aceshardware.com]
    RivaExtreme GeForce 256 DDR Review [rivaextreme.com]
    The FiringSquad GeForce 256 DDR Review [firingsquad.com]
    GA Source Guillemot 3D Prophet Review [ga-source.com]
    3DGPU Geforce 256 DDR Review [3dgpu.com]
    Fast Graphics Guillemot 3D Prophet Review [fastgraphics.com]
    CGO GeForce 256 Preview [cdmag.com]
    Shugashack GeForce, V3 and TNT2 benchmark roundup [shugashack.com]
    Riva3D Full GeForce 256 DDR Review [riva3d.com]
    GeForce 256 DDR Review at Planet Riva [planetriva.com]

    Any others?
