Hardware

ATI's HyperZ Demystified 47

noname (guess who ;) writes, "There's a really thorough look at ATI's Radeon 256 32MB (which we've not seen yet due to the delay in shipping) and the technology behind it. The author of this piece not only pits it against NVIDIA's best (the GeForce2) but also goes behind the scenes of ATI's HyperZ technology, interviewing engineers about the 3D graphics pipeline. Finally we get to the bottom of who's better, the GeForce2 or the Radeon, and it looks like a win for NVIDIA here."
This discussion has been archived. No new comments can be posted.

  • by zephc ( 225327 )
    you know, the fact that this badass card will fit in my cube is what sells me :)

    ---
  • Anyone know when the Linux driver is coming out? I bought one of these in expectation of dumping my nVidia hardware when the DRI driver becomes available. I heard Precision Insight was working on it, but I'm getting antsy.

    Where is the code? Can we actually get a DRI driver for *modern* hardware? Instead of two-year-old pieces of junk like the G400 and Voodoo3?

    Dan
    1. >60fps isn't the point. The point is being able to do more at 60fps. Like twenty-pass shaders with motion blur and depth of field. I'd go into more detail, but this question is put up and knocked down so regularly that I suspect you're just trolling.
    2. The optional imaging subset in OpenGL 1.2 already allows cards to accelerate image processing tasks using arbitrary convolution filters (though IIRC filter size is restricted). I think some 3Dlabs hardware implements this; their drivers certainly do. (There's a rough sketch of the setup right after this list.)
    3. You can already play Game of Life with hardware acceleration. There's a solution using the stencil buffer in the OpenGL Red Book.
    4. Gfx cards deal pretty much exclusively with 4x4 matrices and 4-element vectors. No IHV is going to cram a general matrix processor into their silicon just for a laugh.
    5. I know this may come as a shock, but some strange people do use graphics cards for other things than Quake. Seriously, they do. Honest.
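    For anyone who hasn't run into the imaging subset before, here's a minimal sketch of how point 2 looks in practice. It assumes a GL context already exists and that the implementation actually advertises GL_ARB_imaging (the subset is optional; on some systems you also need GL/glext.h and GL_GLEXT_PROTOTYPES for the prototypes). The helper name and the kernel are just for illustration.

    #include <GL/gl.h>

    /* Install a 3x3 edge-detection kernel so that subsequent pixel transfers
     * (glDrawPixels, glCopyPixels, texture downloads) get convolved by the GL
     * implementation. Only valid when the OpenGL 1.2 imaging subset
     * (GL_ARB_imaging) is supported -- check the extension string first. */
    void enable_edge_detect_filter(void)
    {
        static const GLfloat kernel[3][3] = {
            { -1.0f, -1.0f, -1.0f },
            { -1.0f,  8.0f, -1.0f },
            { -1.0f, -1.0f, -1.0f },
        };

        glConvolutionFilter2D(GL_CONVOLUTION_2D, GL_LUMINANCE,
                              3, 3, GL_LUMINANCE, GL_FLOAT, kernel);
        glEnable(GL_CONVOLUTION_2D);
        /* glDisable(GL_CONVOLUTION_2D) turns the filter back off. */
    }
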
  • TNT2 has been supported by XFree86 for a long time. How certain are you that you're getting the benefit of the nVidia-written drivers?
    I don't, but I wasn't making that argument. All I was saying was that I was able to install the NVIDIA drivers without too much trouble. Since I can get acceptable frame rates, I assume that everything is working just fine.

    Also, at the risk of sounding like the /. sheep, not having open-source drivers can make your life a lot more difficult if the closed-source drivers don't support your particular flavour of Linux. I've been through this with a lot of crappy hardware, and I'm not willing to do it again. Then again, with the number of games I play in Linux down near zero, I don't think it matters too much.
    Ok, you've got a point there, and it's a perfectly valid reason to go Radeon over GF2. I only had a problem with the idea that the NVIDIA drivers were hard to install as that did not reflect my experience.
    --Shoeboy

  • They've started work, I believe.

    "Can we actually get a DRI driver for *modern* hardware? Instead of two-year-old pieces of junk like the G400 and Voodoo3?"

    How about the Voodoo 5500?

    Ranessin
  • There's also a point where the number of polys would make the sorting algorithm used by tiling slower than the Z-Buffer renderer.

    Tile architectures aren't a replacement for hidden surface removal; they're a complementary technique. A tile accelerator doesn't have to use wacky O(n log n) sorting algorithms to sort polys; it can just do regular Z-buffering. The speed boost comes from (1) rendering each tile into fast on-chip memory, and (2) deferred texturing, where the accelerator doesn't bother fetching texels for occluded pixels, further saving bandwidth. This second feature is particularly applicable to the coming age of per-pixel lighting and shaders, since the idea can be extended to avoid evaluating pixel shaders on occluded pixels.
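    To make the deferred-texturing point concrete, here's a toy software sketch. It is nothing like real silicon: the tile size, the struct names and the back-to-front test scene are all made up. Pass one z-resolves every candidate fragment into a tile-sized buffer (on-chip in real hardware); only the surviving pixel at each location pays for a "texel fetch" in pass two.

    #include <float.h>
    #include <stdio.h>

    #define TILE 32                    /* hypothetical tile size */

    typedef struct { float z; int tri; } Frag;   /* candidate fragment: depth + triangle id */

    static float tile_z[TILE][TILE];   /* per-tile depth buffer                       */
    static int   tile_vis[TILE][TILE]; /* id of visible triangle per pixel, -1 = none */

    /* Pass 1: depth-only resolve. Every candidate fragment is z-tested, but only
     * the winner's triangle id is remembered -- no texturing happens yet. */
    static void resolve_depth(const Frag *frags, const int *px, const int *py, int n)
    {
        for (int y = 0; y < TILE; y++)
            for (int x = 0; x < TILE; x++) { tile_z[y][x] = FLT_MAX; tile_vis[y][x] = -1; }
        for (int i = 0; i < n; i++) {
            int x = px[i], y = py[i];
            if (frags[i].z < tile_z[y][x]) {          /* closer fragment wins */
                tile_z[y][x] = frags[i].z;
                tile_vis[y][x] = frags[i].tri;
            }
        }
    }

    /* Pass 2: texture/shade only the surviving pixel at each location. Occluded
     * fragments never cause a texel fetch -- that's the bandwidth saving. */
    static int shade_visible(void)
    {
        int fetches = 0;
        for (int y = 0; y < TILE; y++)
            for (int x = 0; x < TILE; x++)
                if (tile_vis[y][x] >= 0)
                    fetches++;                        /* stand-in for "fetch texels, shade" */
        return fetches;
    }

    int main(void)
    {
        /* Two full-tile layers drawn back to front: 2*TILE*TILE candidate
         * fragments, but only TILE*TILE of them ever get textured. */
        enum { N = 2 * TILE * TILE };
        static Frag frags[N]; static int px[N], py[N];
        for (int layer = 0, i = 0; layer < 2; layer++)
            for (int y = 0; y < TILE; y++)
                for (int x = 0; x < TILE; x++, i++) {
                    px[i] = x; py[i] = y;
                    frags[i].z = layer ? 0.25f : 0.75f;   /* second layer is closer */
                    frags[i].tri = layer;
                }
        resolve_depth(frags, px, py, N);
        printf("candidate fragments: %d, textured pixels: %d\n", N, shade_visible());
        return 0;
    }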

  • by jallen02 ( 124384 ) on Monday September 18, 2000 @03:32AM (#772485) Homepage Journal
    This is one of those points that very, very few people ever catch about *most* nVidia cards.

    Take my comment with a grain of salt since I'm not providing URLs, but...

    Comparing side-by-side screenshots of TNT2/GeForce-whatever cards against a G400 MAX, the MAX's pictures were much clearer.

    Everyone cuts corners, including nVidia, the performance kings. I wish I had a URL to a few pictures, but it's true, and anyone who doesn't believe it should research it. (I own a G400 MAX driving the 19" Sony I paid an arm and a leg for, and it looks so awesome that I couldn't care less that my QIII performance is sub-par.)

    That's how it is for most people: I stare at code far more than at QIII on my system, so which matters more to me?

    So I feel like I'm getting the best deal with my Matrox. I can run QIII at playable framerates at 800x600, and I have one of the crispest, clearest pictures I've seen on any system (really).

    I love Matrox so much I bought one for my workstation at work.

    Anyhow

    Jeremy

    OT: I've noticed I never use my +1 bonus unless I forget to check the "No Score +1 Bonus" box, since I moderate comments for myself (I always read /. raw), and...

    I think it should be the default. Heh, I know a lot of other people do that by accident too.

    Again Me :p
  • "It is 320x200, which, as you seem to need to have explicitly pointed out to you, is closer to 64x48 than 640x480"

    I feel even dumber now. Do you realize that by the comparison that makes 64x48 "closer", the resolution 1x1 is also closer to 320x200 than 640x480 is? Your reasoning doesn't make sense.

    Look at it this way: 640x480 is about 5 times bigger (in surface area) than 320x200, whereas 320x200 is 20 times larger than 64x48. 320x200 is closer in surface area to 640x480 than it is to 64x48.

    Incidentally, by your reasoning 1000x1000 is closer to 1x1 than it is to 1415x1415. Does that seem correct to you just thinking intuitively?
    I guess if you looked just at the x (or just at the y) coordinate, the distance between 320 and 64 is 256, which is less than the distance between 320 and 640, so I am right.

    but if you look at the number of pixels:

    640x480=307200
    320x200=64000, distance from 640x480=243200
    64x48=3072, distance from 320x200=60928

    oops, damn, I am right again.

    damn, I must be wrong though, because you feel dumber for having read that.

    hmmm. normalizing?

    640x480 is built of 100 64x48 tiles.
    320x200 is built of nearly 21 64x48 tiles.

    so there are 79 64x48 tiles between 320x200 and 640x480, but only 20 tiles between 64x48 and 320x200. damn it, I am right again.

    Ah, I've got it: percent difference.

    320x200 is 79% smaller than 640x480
    64x48 is 95% smaller than 320x200

    So am I wrong after all? I say no, therefore I am not.
  • I can't run a geforce2 on my FreeBSD box. Won't buy one. Just business, nothing personal. Slightly personal, but still in the realm of the technical is the instability of the drivers nvidia uses for linux. Oh and their being locked to a kernel version.

    Sorry nvidia, really wanted to buy your card, I did like the TNT2 for games. I don't even play games on BSD, but since you won't even let me drive it for 2d without a stable driver, guess I'll be going to the competition from now on.
  • I wonder if anyone will ever find a tile rendering hardware implementation which actually beats Z-Buffering (or derivations), when cost isn't an issue

    Draw 100 fullscreen textured quads on top of each other on a Dreamcast and then on a GeForce. Watch the Dreamcast spank the GeForce.

    And cost is always an issue - the Voodoo5-6000 solution to the bandwidth problem works great, but who can afford a $600 video-card?

  • I completely agree with:
    ATI can't write drivers to save their lives and NVIDIA has the best driver dev team in the industry

    I mean, I bought an AIW 128 with 32 megs of RAM and kept losing sync with my monitor. I tried all of their fix ideas, none of which worked. Then I tried their newest driver. Ha. The new driver crashed my system every time I loaded up Unreal Tournament or Quake III. Once again, ATI attempts to correct their problems with a newer driver. Ok, now my machine only boots into 256 colors, at 640x480 max. I was better off with the original crap drivers they provided in the first place. So I sold the stupid card and bought a TNT2. I've never been happier; Nvidia writes EXCELLENT drivers for very good hardware.

    I also bought an ATI TV-Wonder... I've never been so disappointed in my whole life with a product. Every time I try to start the TV program, instead of actual TV I get a pink screen. I have to resize the window for anything to actually come in. Or, if you move the window off the screen and then bring it back, you can sometimes get a picture. And that's with the latest update of the drivers, which BTW you can't download! You actually have to order a driver CD from their website because of some lame agreement on their site.

    In my opinion, ATI blows, and after all the horrible products I've purchased from them, I'll never buy from them again... no matter what these hardware review sites claim their cards can do!


  • On the issue of ATI's driver development for the Radeon:

    ATI Radeon Beta Driver Comparison [pcmonkey.net]



    A recent review on some of the other features Radeon Cards offer:

    gotapex reviews-Ati Radeon 64MB [gotapex.com]



    A good site for ATI information, and user feedback:

    Rage3D The Place for the Latest ATI News [rage3d.com]



    Even with the GeForce getting higher benchmark scores, the Radeon is considered by many to have better video quality and more multimedia-friendly features.

    For some of us, that is also an important consideration.



    Sorry to be off topic, but I'm trying to post a new message under Hardware and cannot find where to do this anywhere. If someone could write to slashdot@webfuture.com (because I'm not even sure if I could find my way here again) and let me know how to do this, I would be grateful.

    Thanks,

    Matthew
  • For anybody interested, the basic idea behind "Hyper-Z" is a few years old. The best description I've seen is Hansong Zhang's dissertation [unc.edu] (based on a paper he gave at Siggraph '97). He calls it "hierarchical occlusion mapping" - I guess the same ATI marketdroids who gave us "Pixel Tapestry" and "Charisma Engine" were to blame here...

    It's an interesting technique, for several reasons. For one, it doesn't require massive amounts of scene preprocessing, which means that you can display much more dynamic worlds than if you were tied to an expensive BSP data structure. For another, at some point in the hopefully not too distant future we'll move from Z-buffers to A-buffers (conceptually, a linked list of depth values per pixel) to remove the ugly need to sort transparent polys. For obvious reasons, this is going to stress the hardware, and a way to perform en masse depth rejections would be a great help.
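    For the curious, the coarse rejection idea is simple enough to show in a toy software sketch. This is just the textbook hierarchical-Z notion, not ATI's actual implementation; the buffer sizes, names and constant-depth rectangles below are made up. Keep the farthest depth of each 8x8 block; if an incoming primitive's nearest depth over a block is farther than that, the whole block can be skipped without touching the per-pixel Z-buffer.

    #include <stdio.h>

    #define W 64
    #define H 64
    #define BLK 8

    static float zbuf[H][W];              /* fine Z-buffer, smaller = nearer         */
    static float blk_maxz[H/BLK][W/BLK];  /* coarse buffer: farthest z in each block */

    static void clear_buffers(void)
    {
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++) zbuf[y][x] = 1.0f;
        for (int by = 0; by < H/BLK; by++)
            for (int bx = 0; bx < W/BLK; bx++) blk_maxz[by][bx] = 1.0f;
    }

    /* Draw an axis-aligned rect [x0,x1)x[y0,y1) at constant depth z.
     * Returns how many per-pixel z tests were actually performed. */
    static int draw_rect(int x0, int y0, int x1, int y1, float z)
    {
        int tests = 0;
        for (int by = y0 / BLK; by <= (y1 - 1) / BLK; by++)
            for (int bx = x0 / BLK; bx <= (x1 - 1) / BLK; bx++) {
                if (z > blk_maxz[by][bx]) continue;   /* whole block occluded: early out */
                float maxz = 0.0f;
                for (int y = by * BLK; y < (by + 1) * BLK; y++)
                    for (int x = bx * BLK; x < (bx + 1) * BLK; x++) {
                        if (y >= y0 && y < y1 && x >= x0 && x < x1) {
                            tests++;
                            if (z < zbuf[y][x]) zbuf[y][x] = z;   /* fine test + write */
                        }
                        if (zbuf[y][x] > maxz) maxz = zbuf[y][x];
                    }
                blk_maxz[by][bx] = maxz;   /* keep the coarse buffer conservative */
            }
        return tests;
    }

    int main(void)
    {
        clear_buffers();
        int near_first = draw_rect(0, 0, W, H, 0.2f);  /* near occluder drawn first     */
        int far_second = draw_rect(0, 0, W, H, 0.8f);  /* far layer: rejected per block */
        printf("per-pixel z tests: near=%d, far=%d\n", near_first, far_second);
        return 0;
    }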

    Well, ATI is offering a Radeon All-in-Wonder: the same Radeon as their strictly 3D card, plus an integrated TV tuner, MPEG decoder, etc. ATI claims it's on the market now, and it should retail for about $300. Here's the obligatory link [ati.com]. That being said, I own an AIW Pro and have been satisfied with its performance, but a 3D card it's not. Maybe when I do have an extra couple hundred bucks lying around, I can give the Radeon a whirl.
    Agreed. I usually hunt for the "printer friendly" link when reading from these places, which puts the whole article on one page and takes out the irritating, blinking, ad-filled margins. Most sites will have this available if they're reasonably well-visited.

    I'd see if there was one for this particular article, but the corporate firewall is blocking it. Bastages.

  • I completely agree, but the benchmarks and software I've seen don't reflect our perspective. (like using them for something other than Quake...)

    I didn't say anything about general matrix processing; I was just wondering if it could be faster to offload some (probably specialized, yeah) work onto a graphics card; I gather it would be for certain massively parallel cases.

    ...and thanks for the tip! I'll try to re-implement my brute-force Life program to use OpenGL... :)
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • Dude, you totally missed the point. In all my measurements, 320x200 was the origin, not 64x48.

    I am not at work now, so I don't want to spend a lot of time on this one... but look:

    surface area of 640x480 : 307200 square_pixels
    surface area of 320x200 : 64000 square_pixels
    surface area of 64x48 : 3072 square_pixels

    distance between 64x48 and origin : 60928 square_pixels
    distance between 640x480 and origin : 243200 square_pixels

    Do you finally see it? 64x48 is closer to the origin (origin=320x200) than 640x480 is. I can't make it any plainer than that. If you still don't get it, well, I'm sorry. Please don't feel any dumber for it, it isn't important.

    Anyway, your numbers in the same example, calling 1000x1000 the origin...

    distance between 1x1 and origin: 999,999
    distance between 1415x1415 and origin: 1,002,225
    (units in square_pixels)

    so yeah, 1000x1000 is closer to 1x1 than it is to 1415x1415.

    you chose a remarkably bad example. If your last set of numbers was 1414x1414 it would have worked.

    Actually, looking at these numbers, go ahead and feel dumber if you can.
  • You can tell me with a straight face that 1000x1000 is closer to 1x1 than 1415x1415?

    The difference is that you're comparing absolute differences between pixel counts. My point is that the difference (subtraction rather than ratios) between pixel counts doesn't make sense; my example was an attempt to prove that.
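    For what it's worth, the two yardsticks really do give opposite answers. A throwaway snippet (numbers only, nothing to do with either card):

    #include <stdio.h>

    int main(void)
    {
        const double hi  = 640.0 * 480.0;   /* 307200 */
        const double mid = 320.0 * 200.0;   /*  64000 */
        const double lo  = 64.0 * 48.0;     /*   3072 */

        /* Absolute difference says 320x200 sits "nearer" 64x48 ...   */
        printf("difference: to 640x480 = %.0f, to 64x48 = %.0f\n",
               hi - mid, mid - lo);          /* 243200 vs 60928 */
        /* ... while the area ratio says it sits much nearer 640x480. */
        printf("ratio: 640x480/320x200 = %.1fx, 320x200/64x48 = %.1fx\n",
               hi / mid, mid / lo);          /* 4.8x vs 20.8x */
        return 0;
    }
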
  • Guilty as charged. My name being Andy, my fingers' hidden Markov modelling strongly suggests that 'An' is going to be followed by 'd'.
  • The card I'm waiting for is the All-in-Wonder Radeon. I just wish Asus would come out with a GF2 with a tv tuner. Carrying around a vcr just to watch tv on your computer sucks & buying an additional tv box is too expensive. So until then I'm stuck with ATI.

    I agree. The only thing that ATI really has going for it, IMO, is the video input, which has traditionally been one of the better on-card digitizers available.

    It's certainly never been the 3d performance leader.

  • by Shoeboy ( 16224 ) on Monday September 18, 2000 @01:59AM (#772501) Homepage
    Considering that the Radeon has been getting reviewed [tomshardware.com] since the 17th of July...
    No matter how good ATI's architecture might be on paper, the simple fact of the matter is that ATI can't write drivers to save their lives and NVIDIA has the best driver dev team in the industry combined with a very mature and stable driver set.
    Plus the GF2 kicks ass [tomshardware.com] under linux.
    As if that wasn't enough, the GF2 chip is bandwidth constrained, so as supplies of 4ns DDR SDRAM increase, GF2 ultras will become common. The Radeon doesn't have the horsepower to take advantage of these improvements in memory technology.
    --Shoeboy
  • Although the GeForce2 does pull ahead a bit in the high resolutions in 32bpp (honestly, why bother benchmarking anything below 1024x768x32bpp on these cards? Is anybody intending to buy a $300 videocard and then use it in 640x480x16bpp?) it's good to see that ATI can deliver a product that can match nVidia's high-end card, if only to give nVidia a reason to keep the price down on the GeForce3 :)
  • by RJ11 ( 17321 )
    That's pretty cool; rarely do you see an actual interview with the makers of the hardware on these gaming sites. It's been a while since ATI has made a good gaming card; I was under the impression that they were targeting the business market now.

    So when's this thing going to be available? And who in their right mind is going to throw down $400 for the 64MB version? Hell, Quake 3 still looks fine on my Voodoo2; I don't need 200,000 fps to play a game....
  • Well, maybe the kicking is the reason for the huge Pain In The Ass it is to get the card working under linux with those proprietary non-DRI drivers.
    I dunno, I found the install relatively painless for my TNT2.
    Apt-able debs of XFree86 4.0.1 are available, so I snagged those, grabbed the nvidia drivers, renamed some files, gunzipped and untarred them, and typed make.
    Not hard at all. Not noticeably more difficult than getting my buddy's i810 up and running.
    --Shoeboy
    I guess brute force beats good design again. Considering nVidia seems to be about 2 generations ahead of everyone else, I think ATI is going to need a little more than Hyper-Z to catch up.

    This is sort of off-topic, but I was wondering what people's experiences with FSAA are. I'm sort of in the market for a high-end card, but all the screenshots I've seen comparing FSAA to non-FSAA have been pretty underwhelming; the images look only slightly smoother, but the FPS drops dramatically.
    --
    So when's this thing going to be available? And who in their right mind is going to throw down $400 for the 64MB version?

    Hmmm... don't know where you are, but the Radeon has been available in Canada [canadacomputers.com] for quite some time. Furthermore, I've seen the 64MB video in/out version of the Radeon for around $440 Cdn (~$290 US at a 66% exchange rate). So, to answer your question, a very misinformed individual would throw down $400 for a Radeon 64MB version.

    The card I'm waiting for is the All-in-Wonder Radeon. I just wish Asus would come out with a GF2 with a tv tuner. Carrying around a vcr just to watch tv on your computer sucks & buying an additional tv box is too expensive. So until then I'm stuck with ATI.

    jacob

    ATI can throw clocks and memory at the problem, but the internal processor architecture in their cards is lacking compared to Nvidia's. All ATI is accomplishing is making expensive cards with large numbers, which the general consumer will eat up. A manufacturer is more likely to use the cheaper card, even if it doesn't afford the same performance. This is why you must do your research when buying components: you buy Nvidia and you get better performance, plain and simple.

    Enigma

  • nope.

    I think it is likely enough that the logo was created on a system running 95 or nt or something. Likely enough to make that assumption.

    Granted it wasn't 64x48, but it isn't "closer" to 640x480 (not even *much* closer). It is 320x200, which, as you seem to need to have explicitly pointed out to you, is closer to 64x48 than 640x480.

    I don't know what anti-aliasing on the fly has to do with it. Anti-aliasing is anti-aliasing. Presumably the method of anti-aliasing used on the logo is superior to the method used by the cards, but maybe not. I don't know. I'll concede that using "FSAA" doesn't fit this example... I should have just said "AA."

    Perhaps if you had a cup of coffee or tea, you'd make a better argument.
  • by pb ( 1020 ) on Monday September 18, 2000 @02:05AM (#772509)
    Ok, I've got a Matrox G400 32MB Dualhead that I'm very happy with, and I can run the GL versions of MESS and MAME in 1280x1024x32 and whatnot... But doesn't this all get really silly after a while?

    Why would I ever need greater than 60fps in anything? And once I have that, in truecolor, why would I need much better than 800x600 in the first place? Especially if I'm too busy playing Quake to look at the graphics?

    What I want to see is more versatile, programmable hardware acceleration, like edge-detection-style algorithms in hardware that let you implement, say, Conway's Game of Life. Or let your graphics card churn away on a dataset, doing those funky matrix computations that we all love....
    ---
    pb Reply or e-mail; don't vaguely moderate [ncsu.edu].
  • by pslam ( 97660 ) on Monday September 18, 2000 @03:12AM (#772510) Homepage Journal
    It's interesting how algorithms emerge for every combination of memory speed, processing power and poly count:

    In the days of 286s, some people came up with S-buffering (segments of Z values) to get around the storage and computation problems of Z-buffering and the snail's pace of poly sorting at medium poly counts. Plain poly sorting was used for low poly counts.

    When memory got cheap and CPUs a bit faster, Z-Buffering met the demands of higher poly counts.

    Then cheap 3D cards came along (for PCs) and Z-buffering was an obvious win. Back in the days of the first 3dfx cards, memory far outpaced the renderer. There were a few exceptions that used tile rendering, and we all know what happened to those.

    So now the renderer is outpacing memory again, and plain Z-buffering is finally becoming less attractive. Note that Hyper-Z seems to be a gain only for scenes that exceed memory bandwidth; for anything less it's the same speed as plain Z-buffering.

    I wonder if anyone will ever find a tile rendering hardware implementation which actually beats Z-buffering (or derivations) when cost isn't an issue - so far it's always been a cost-saving feature at the expense of performance.

    If anyone knows of any other techniques for improving rendering performance, it'd make a good discussion.
    But how does ATI's image quality, specifically text at 1600x1200, compare to nVidia's? I'm using a Matrox G400 Max (which looks *great*, but is slow) because the nVidia cards I've bought (the last being a Creative Labs GeForce256) looked fuzzy at high resolutions, at least compared to the Matrox. I want to play Quake 3 at 1600x1200, but I can't sacrifice text quality.
  • by cheese_wallet ( 88279 ) on Monday September 18, 2000 @02:29AM (#772512) Journal
    I have a GeForce2 GTS 32MB. Occasionally I mess around with the FSAA settings, but in general I've found that the gain in picture quality is not worth the hit to performance.

    In Ultima IX, FSAA just makes the picture seem a bit blurry. While it's true that the jaggies go away if you blur your eyes, where is the benefit in that?

    I've used FSAA on the fish screensaver from voodooextreme.com... at 640x480, FSAA makes a big difference; it gives the appearance of running at 1024x768 on a high-dot-pitch (0.4 maybe) monitor. The interesting thing is that the screensaver runs faster at 1024x768 without FSAA than at 640x480 with FSAA. So even in this app, where FSAA shows significant improvement, it's still better to just run at the higher resolution.

    If you want to see the real deal, the genuine benefit of FSAA, just look at the Windows boot screen. Would you believe that the resolution of that logo is actually 64x48 pixels? That's right, it was originally a postage stamp. That fine job of anti-aliasing alone makes the OS worth the purchase price.

    cheese
  • by barleyguy ( 64202 ) on Monday September 18, 2000 @05:50AM (#772513)
    I hate to point this out, but that post is ass-backwards.

    To test a CPU, you run at low resolutions with V-Sync off, to keep the fill rate on the card from maxing out.

    To test the fill rate of a card, you test at high resolutions. This tests both the pixel/texel rate and the memory bandwidth.

    One legitimate reason to run at low resolutions is if you have a large display that doesn't support higher resolutions. Most 31-inch displays are 800x600, as are most multimedia projectors under $5000. So if you want to run a 31-inch monitor, or a 72-inch rear-projection screen, you are probably going to run at lower resolutions.

    Also, with big screens like these, anti-aliasing is an advantage. So 800x600 with 4xFSAA is the best resolution for big monitors.
  • by UnknownSoldier ( 67820 ) on Monday September 18, 2000 @03:16AM (#772514)
    > Why would I ever need greater than 60fps in anything?

    That's like saying "why would I need anything faster than 1 GHz?" :-) (Not quite the perfect analogy, but close enough.)

    Ok, to actually answer your question:

    Most video cards don't do temporal anti-aliasing, hence the need for >60 fps. Your TV does do temporal anti-aliasing, so it can run at the much lower frame rate of 29.97 fps.

    You might want to read this page: Conventional Analog Television - An Introduction [washington.edu]
    For some reason, the brighter the still image presented to the viewer ... the shorter the persistence of vision. So, bright pictures require more frequent repetition. If the space between pictures is longer than the period of persistence of vision -- then the image flickers. Large bright theater projectors avoid this problem by placing rotating shutters in front of the image in order to increase the repetition rate by a factor of 2 (to 48) or three (to 72) without changing the actual images.


    Cheers
  • Way to completely misread a post.
    Here's how it breaks down.
    The GF2 is bandwidth constrained but very powerful.
    The Radeon is not very powerful, but thanks to the hyper-z technology, it doesn't have nearly the bandwidth requirements.
    That means that as more memory bandwidth arrives, the GF2 chip will be able to take advantage of it. Check the 1600x1200 high quality benchmarks on the GF2 ultras with 4ns DDR SDRAM if you don't believe me.
    The Radeon, on the other hand, can't push enough texels/pixels to use the extra bandwidth, so ATI will need another chip to take advantage of higher memory bandwidth.
    Take a remedial reading course before you start flaming.
    --Shoeboy
    I am currently running Half-Life CS on my GF2 using FSAA. Just using 2xFSAA at 800x600, the image quality is noticeably improved while I still have the raw speed needed. Most of the time I see 85 fps with the occasional drop down to 60-65, which is fully acceptable for me. And no, using 1024x768 does not look better.
    This is using the Detonator3 v6.18 drivers in WinME though.
  • ATI Marketing silliness: "charisma engine" and "pixel tapestry" are silly names for vertex and pixel processing that are straightforward improvements over existing methods. Sony is probably to blame for starting that.

    The Radeon has the best feature set available, with several advantages over GeForce:

    A third texture unit per pixel
    Three dimensional textures
    Dependent texture reads (bump env map)
    Greater internal color precision.
    User clip planes orthogonal to all rasterization modes.
    More powerful vertex blending operations.
    The shadow id map support may be useful, but my work with shadow buffers have shown them to have significant limitations for global use in a game.

    On paper, it is better than GeForce in almost every way except that it is limited to a maximum of two pixels per clock while GeForce can do four. This comes into play when the pixels don't do as much memory access, for example when just drawing shadow planes to the depth/stencil buffer, or when drawing in roughly front to back order and many of the later pixels depth fail, avoiding the color buffer writes.

    Depending on the application and algorithm, this can be anywhere from basically no benefit when doing 32 bit blended multi-pass, dual texture rendering to nearly double the performance for 16 bit rendering with compressed textures. In any case, a similarly clocked GeForce(2) should somewhat outperform a Radeon on today's games when fill rate limited. Future games that do a significant number of rendering passes on the entire world may go back in ATI's favor if they can use the third texture unit, but I doubt it will be all that common.

    The real issue is how quickly ATI can deliver fully clocked production boards, bring up stable drivers, and wring all the performance out of the hardware. This is a very different beast than the Rage128. I would definitely recommend waiting on some consumer reviews to check for teething problems before upgrading to a Radeon, but if things go well, ATI may give nvidia a serious run for their money this year.

    It seems that ATI has all but sputtered to a stop as far as development goes. This was from John Carmack's .plan file from 5/17/00. Four months later, ATI is still way behind in the 3d market.

  • by warmcat ( 3545 ) on Monday September 18, 2000 @02:14AM (#772518)
    Anyone else noticing that some of these high-traffic "review" sites are serving up less and less per page?

    It's getting to the point where adverts are a new form of punctuation that you use at the end of a paragraph.

    At least Andandtech [anadtech.com] and Tom's Hardware [tomshardware.com] give you a table of contents for the review so you can cut through filler like the exact details of the test platform.

    It is all the more irritating since browsers do not have a serious problem with monstrously long pages, and the aggregation of the whole review on a single page would have raised no eyebrows.
    If you want to see the real deal, the genuine benefit of FSAA, just look at the Windows boot screen. Would you believe that the resolution of that logo is actually 64x48 pixels? That's right, it was originally a postage stamp. That fine job of anti-aliasing alone makes the OS worth the purchase price.

    <pedantic type="flammable"> By "windows boot screen", I'm going to assume that you mean the Windows 9x family (95/98/ME). That "64x48 pixel" image that you say the OS antialiases is *not* antialiased by the bloody OS. It's also quite a bit larger than 64x48; it's *much* closer to 640x480. Seeing that the unstable toy of an OS *doesn't* antialias on the fly, would you care to reassess your statement? :-)</pedantic>


    --
  • "It is 320x200, which, as you seem to need to have explicitly pointed out to you, is closer to 64x48 than 640x480."

    I feel dumber for having read that.

  • Does anyone know what TV card has a TV tuner and MPEG-2 decoding acceleration? The ATI Rage seems to be a candidate, but I had no luck finding any alternatives...
    This only holds true up to a certain point. All things included, I think Z-buffering (and friends) has complexity O(n) whereas tiling has complexity O(n log n), where n is the number of polys. Yes, Z-buffering takes longer in this case because of memory bandwidth constraints.

    However, if the Z-Buffer renderer had higher memory bandwidth and fill rate, it would win out even in this pathological case. There's also a point where the number of polys would make the sorting algorithm used by tiling slower than the Z-Buffer renderer.

    The true test is how it performs with the games it was designed for (or conversely, how well games designed for it perform). While tile based rendering has a band of price/performance where it performs well, it's always been at the low end of the market - and I'm including Dreamcast here. There's always a case for either renderer due to their different characteristics.

    If you want very high poly counts, T&L and high fill rates seems to be the way to go.

    As for Voodoo5-6000... I think they've got the fill rate vs poly count ratio wrong, unless 1600x1200 FSAA is your thing. Whatever happened to 3dfx providing the most cost effective option?
  • earlier in the thread:

    GF2 kicks ass under linux

    I dunno, I found the install relatively painless for my TNT2.

    TNT2 has been supported by XFree86 for a long time. How certain are you that you're getting the benefit of the nVidia-written drivers?

    Also, at the risk of sounding like the /. sheep, not having open-source drivers can make your life a lot more difficult if the closed-source drivers don't support your particular flavour of Linux. I've been through this with a lot of crappy hardware, and I'm not willing to do it again. Then again, with the number of games I play in Linux down near zero, I don't think it matters too much :-)


    --
    You're saying that ATI can't write good drivers, yet you're giving a link to a review that doesn't mention any problem with the Radeon's drivers (except the poor 16-bit performance, which may or may not be due to the driver).

    And you are a bit biased: sure, the supply of 4ns DDR SDRAM will improve, but it won't become cheap fast.

  • > I have a GF2 and there isn't any game that even makes it sweat.

    It's called market saturation.

    We developers can't cater exclusively to the "high-end" 3D cards, as we would go out of business. No 3D-only game has come close to selling as well as 2D. Why? Because not everyone has the latest, greatest, fastest 3D card, unfortunately. (As much as we developers wish everyone had a GeForce 2 :) So we target lower hardware to allow more people to play the game.

    Of course in 5 years, it's going to be pretty sweet to target a GeForce2 as minimum spec ;-)
