Hardware

AGP4X vs. AGP8X

An anonymous reader writes "With upcoming chipsets such as the SiS648 claiming support for the latest AGP8X standard, we asked ourselves if there were any performance benefits. We took the SiS648 and Xabre 400 reference boards, modified them and compared the results." I can't even get 4x stable under XP, so I figure 8x is half as likely to let me play NWN :)
  • by Boone^ ( 151057 ) on Sunday July 28, 2002 @03:48PM (#3968348)
    you building your own motherboards or something? What mainstream motherboard doesn't have AGP 4x?
    • by Anonymous Coward
      I've had no problem with XP and AGP... well, besides the constant random reboots, but luckily it was only bad drivers for an ATI card. (Just got the latest drivers and everything was fine.)
      • I have XP Pro at home with an Nvidia GeForce 2 MX and it has been stable (rock solid, to be precise). Even NWN is fine.
    • I have a Supermicro P6DGU, which is a great board (2GB max memory, up to dual 1GHz, 6 PCI slots, onboard SCSI, RAIDport option), but it only supports AGP 2x. I don't know what the real difference is between 2x and 4x; I can't really believe that it's 200% faster.

      Sometimes boards that have some of the features that you want don't have all the features that you want, and when you spend a lot on a good server/workstation board, you can't always jump to the newest standard on a whim.

    • Real men run headless servers, without this girlie GUI shit!
    • No, according to his journal [slashdot.org] he "need[s] to replace bad ram, and a dead power supply. The hardware gods are clearly making me pay for something I did wrong."

      But by now he is designing his own GeForce 6 card.
  • AGP8X (Score:4, Insightful)

    by neksys ( 87486 ) <grphillips AT gmail DOT com> on Sunday July 28, 2002 @03:56PM (#3968374)
    2.18 gigabytes a second. Jesus - does anyone else see why this is weird to me? I mean, I understand the need for faster hardware, but can't software producers just make their software more efficient? Any game I'm playing that requires 2.18GB of data to be passed through my video card each second is going to require a better, faster computer than I've got now. I'm tired of always being forced into upgrading just to play the latest and greatest games - and then being told that I'm breaking the law when I want to play old ones that I can't buy anymore! It's absurd, and yet it makes perfect sense - too many software companies have a vested interest in hardware: the more advanced the game, the more hardware it sells. What we really need is another Mario Bros. or Tetris to come along and give us all a kick in the face - great games don't need outstanding graphics to be great fun.
    • Re:AGP8X (Score:4, Funny)

      by nebby ( 11637 ) on Sunday July 28, 2002 @04:21PM (#3968458) Homepage
      Yet another "Slashdot Environmentalist" who whines as technology progresses and yearns for the days of punchcards. Truly amazing.
      • Not necessarily whining about progressing technology, but about crappy software. It gets tiring seeing so much crap software out there that requires the latest and greatest hardware just to run because the developers were too lazy to use anything better than bubble sorts and whatnot.

        It's like doing a for i in *; do rm -f $i; done instead of just an rm -f *. The net result is the same, but the execution times are probably a bit different (depending on the number of files, etc.).
    • Is it even possible for a game to shunt data around at 2.2GB a second? Even adding up all the different kinds of RAM in my computer, it wouldn't come to any more than perhaps 250MB. Even for the newest machines, the sum total of the volatile memory is perhaps 700MB at the most. So do we really need this kind of hardware yet? It may be backward compatible, but if it simply isn't possible for my computer to generate or store this amount of data, then whether it can handle two gigabytes or a thousand terabytes a second is moot.
      • You're right - 2.2GB is quite a huge chunk of info to sling around. But as far as 3D graphics are concerned, 1 full second is an obscenely long time.

        When you break it down into the amount of time that is spent transferring texture data per frame, you're looking at milliseconds in the double digits. At that point your 64MB of on-card RAM can indeed become a bottleneck.

      • Is it even possible for a game to shunt data around at 2.2GB a second? Even adding up all the different kinds of RAM in my computer, it wouldn't come to any more than perhaps 250MB. Even for the newest machines, the sum total of the volatile memory is perhaps 700MB at the most. So do we really need this kind of hardware yet?

        That would be about 37MB of data per screen update, for a 60Hz refresh rate.

        If you're using LOTS of high resolution textures, you need that speed.

        Simon
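        A quick sketch of that arithmetic in Python (the 2.2GB/s peak and the 60Hz refresh rate are just the figures quoted above, not measurements):

        # Per-frame budget implied by the quoted AGP 8X peak rate.
        bus_rate = 2.2e9      # bytes per second, nominal AGP 8x peak from the comments above
        refresh = 60          # frames per second
        print(f"{bus_rate / refresh / 1e6:.0f} MB per frame")   # roughly 37 MB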
    • Re:AGP8X (Score:5, Funny)

      by Waffle Iron ( 339739 ) on Sunday July 28, 2002 @04:26PM (#3968474)
      2.18 gigabytes a second. Jesus - does anyone else see why this is weird to me?

      The really weird part is that a few grams of wet meat at the back of your eye can actually process and perceive 2.18 gigabytes per second of information.

      Then within a few milliseconds, more meat analyzes it, distills it into high-level representations, calculates 3-D trajectories, then moves meat-based servos to aim and fire weapons. All for no other reason than it seems fun.

      Life is strange.

    • That's why The Sims and Roller Coaster Tycoon have blown most FPS games out of the water in sales. They're games the average consumer digs, and you don't have to shell out a lot of cash on hardware.
      Then there are those of us who want shiny, pretty things with lots of stuff happening and the newest, most expensive hardware. Despite the fact that we're the minority, hardware companies are right there trying to keep us happy. Might have something to do with the fact that we pay 700 dollars for a video card that will be 200 bucks in 3 months :)
      • Sure, but keep in mind that the minimum recommended configurations for those games once represented the state of the art in consumer technology. Those games exist with their level of quality and large sales because such equipment became commonplace, even obsolete compared to the bottom-of-the-barrel $500 system in many stores. (Although the video chip is one of the areas where low-end machines are most likely to be deficient, with many companies still selling Intel 810-based systems with no AGP slot.)

        AGP 8X may seem like too much for anything mainstream to make good use of, but in a few years 8X and NV30 will be in $500 systems. Developers will have had a few years to create better tools and learn to harness that power. Imagine then the games targeting what will by then be considered a dull, obsolete PC.
        • Even at the time The Sims came out, the minimum required hardware wasn't top of the line. This game in particular I remember clearly, because I managed to have the first site to post a review of the game. (It was a very small site, and that review generated more hits than any other topic.)
          I was pretty poor at the time and couldn't afford any other upgrades, so I made do. We're still going to have the UT2003s and Doom 3s to break ground (and piggy banks), but there's still going to be fun stuff coming out that uses last year's technology or even the year before's. It's just a matter of seeing through the marketing hype machines to find the little treasures out there. Like Bejeweled... that game is like crack! :)

          As for RCT that game will run on practically anything, and it's great fun to boot :)
    • AGP 8x is good for applications that can use full-screen anti-aliasing or other high-bandwidth extras... with seemingly excessive amounts of bandwidth, games will simply look better, period.
    • Damn it, yet again we see this small-steps cash-in scheme rolling. You know, it all started with that "640Kb is enough for anything", when somebody wasn't laughing but thinking about how to maximize income.

      He thought: "Let's not aim for the very best we can imagine or produce.. instead, upgrade a little bit at a time here and there and make customers think they get something special when "4X" becomes "8X", etc."

      I mean, don't they have the resources for more than a meager 8X? I understand that making things parallel is a bit costly, but still, they could at least try to make something significant instead of this.. this.. yuck!

      It's a conspiracy, I say! Large manufacturers are the only ones who could make something like AGP32X happen, but they don't want to give their bleeding-edge knowhow away; they want to keep some moving space in case something unexpected happens, like an unknown little company releasing something revolutionary.

    • I work for a company that makes video special effects software. If we do not support the latest hardware acceleration of the more popular video cards, we lose sales to competitors who do.

      We don't own a part of any hardware company, but we ARE bundled with many higher-end video cards, both consumer level and higher. If we don't support their standards, we aren't bundled and we lose exposure to thousands of potential customers (never mind OEM revenue). We are the text generator in Final Cut Pro. Apple certainly expects us to keep up with the rest of the market.
  • by chamenos ( 541447 ) on Sunday July 28, 2002 @03:57PM (#3968379)
    Dammit, I was trying to post some replies on the forums when suddenly the server stopped responding.

    Thanks for slashdotting the server. Thanks a lot.

    I was wondering why the sudden slowdown when it's 4am here (Singapore); then I launched a new browser window, and the first thing I see is the AGP 4x vs 8x article, and the page linked is HardwareZone.
  • I don't like windows.
    • by MisterBlister ( 539957 ) on Sunday July 28, 2002 @04:42PM (#3968518) Homepage
      The deal with NWN for Linux is:

      "Don't hold your breath waiting for it. When we announced it we didn't realize that Linux owners are all cheap fucks who don't pay for games."

    • From the BioWare FAQ [bioware.com]:

      The Linux dedicated server will be distributed freely online, as close to the game being available in stores as possible. The Linux client will follow shortly thereafter. Linux users will need to own a Windows copy of Neverwinter Nights, as the Linux executables must import certain resources from those Windows CDs. All users will need to register their CD Keys (Linux users register the Windows CD Keys) at the Neverwinter Nights community site (www.neverwinternights.com). The Macintosh version will be available later in the fall (BioWare is completing the Macintosh Neverwinter Nights client and server programs, MacSoft is completing the Toolset).

  • weeee (Score:2, Insightful)

    by laymil ( 14940 )
    According to the article, so far there's only a 4.7% increase between the 4x and 8x cards. Personally, I'd say that's a pretty good start. Of course, I'm still using a GeForce2, TNT2, and Rage128... so as you can see, graphics cards aren't that big of a deal to me.

    What I get worried about with these upgrades is that they're going to come out with games that actually require them! And then I'm screwed :(.

    Personally, I find it interesting that it continues to seem like every card *needs* more bandwidth, more power, etc. (and yes, I know these cards operate at lower voltages, but still...). Someday I'm going to need that special SOUNDBLASTER QUASIEXTAGYWITHCHERRIESONTOP made for the SPECIAL SOUND CARD BUS WITH MORE BANDWIDTH. I dread the day when I need a special slot for every type of card I want :(.

    So gogo with the ultrauberbandwidth increases, but keep that backwards compatibility! I like PCI graphics cards sometimes!
    • Re:weeee (Score:1, Interesting)

      by Anonymous Coward
      I hate it when I forget passwords.

      I doubt 8X AGP is going to make a really big difference, at least not for a while to come. This comparison uses mostly synthetic benchmarks, not actual games, and these sorts of benchmarks usually tend to show a more pronounced gap in performance than real games do. Actually, if you look at a few of the benchmarks for Serious Sam and Quake3, 4X AGP actually benchmarks marginally faster than 8X.

      They should've tested with some more modern, more demanding games to give a clearer picture on whether or not 8X is actually much of a help or not. They're right though, we're going to have to wait until the RADEON 9700 and NVIDIA's NV30 before we can tell whether or not it really is going to make a difference. I'm putting my money on "no."
    • According to the article, so far there's only a 4.7% increase between the 4x and 8x cards.

      That article would have been about a million times more useful if they had bothered to show performance of an AGP 2x system. From what I can see, there is no compelling performance increase (5% better doesn't compel me to buy a new mobo) with this new standard.

      Anyone have an idea as to how AGP 4x compared to 2x when it first came out, and how it stacks up now that the technology is mature?

      Actually I just got a new AGP4x video card, and I've been thinking about dropping it in my old Dell workstation with 2x AGP to see how it does, but of course that computer has a PII-350 and my new one has an Athlon XP 1600+ so it wouldn't really be too useful a comparison...
      • If you are running Linux, you can set agpgart to run at 2x AGP and run the benchmark. I don't know if it works the same, but it would be interesting to see.
    • Re:weeee (Score:3, Insightful)

      by Znork ( 31774 )
      Well, 4.7% max increase, on one single benchmark at a low resolution; the rest of the benchmarks showed anywhere from close to no performance improvement to worse performance. I wonder who paid that reviewer to be even close to lukewarm, because he sure as hell had no data to say anything but 'total junk with this generation of gfx cards; don't spend a dime on it unless you plan on buying a $1k graphics card in the near future, because by the time it'll make a difference at consumer gfx card levels it's gonna be time for a new motherboard anyway'.

      Since the only noted difference is at lower resolutions, the gfx core is the slowdown at any higher resolution, which means the gfx core has to get a lot faster before the AGP bandwidth becomes the actual bottleneck. That means one or two gfx core generations until you'll need faster AGP.

      So, don't worry, there'll be no requirement for AGP 8x for any game that wants to sell more than a dozen copies in the next three years at least.
  • Feh (Score:2, Funny)

    by Anonymous Coward
    Taco, it amazes me that you can't get an operating system to work that was designed for the hopelessly clueless. Maybe my grandma could give you some tips.
  • OT: SiS rocks (Score:5, Interesting)

    by Toasty16 ( 586358 ) on Sunday July 28, 2002 @04:12PM (#3968424) Homepage
    Basically, SiS has come out of nowhere with motherboards that absolutely trash the competition in regards to performance and features. It started last year with the SiS 735, the best-performing Athlon mobo of the year. Sadly, it was a poor overclocker, so it was shunned by AMD fans. But this year SiS has had a string of hits. It's the only 3rd party with a P4 license, which makes it the only choice for mobo manufacturers in terms of 3rd-party P4 mobos (obviously they're antsy about Intel frowning upon their Via-based P4 boards, seeing as Via doesn't have a valid P4 license). The SiS 645, 645DX, and now the 648 have consistently been of high quality, with features no one else has. The 645 introduced MuTIOL, which doubled the bandwidth between north and south bridges, to 533MB/s. The 645DX introduced unofficial, rock-solid DDR400 support. Now the 648 again doubles bandwidth between north and south bridges to 1GB/s, introduces AGP 8x, and probably will officially support DDR400. SiS 648 boards also have Serial ATA support. This is a far cry from a decade ago, when everyone knew SiS=shit.
    • Re:OT: SiS rocks (Score:3, Interesting)

      by rabidcow ( 209019 )
      SiS has been around for a LONG time, though probably not doing chipsets. I've got an old Hercules monochrome clone here with a bunch of large chips marked "SiS" (dated 8804).

      But I had an MB based on the SiS 530 chipset and it was nasty. It was basically a cheapo bargain board. It sounds like they've improved substantially since then.
      • by Reziac ( 43301 )
        SiS has been doing motherboard chipsets since at least the 486 era, and I/O cards before that. I've also had SiS-based motherboards, and they had bugs and instabilities I'd never even heard of before. I've long since come to associate SiS chipsets with the mobo mfgr cutting corners, and haven't seen anything yet to make me change my mind. (Tho the SiS-based I/O cards for 386/486 machines seemed to be pretty good in their day.)

        As to another poster who says he likes SiS because of the "low heat/low power" ... gee, I wonder if that's why SiS chipsets need heatsinks, even when no other chipset in the same class needs 'em.

        • by swv3752 ( 187722 )
          Where have you been? I have an ECS K7S5A with an SiS 735. The northbridge runs cool to the touch; other chipsets require a heatsink fan. I shouldn't bother responding to trolls.
          • Maybe that's so.. I haven't kept track of every chipset in the world, especially the newer ones. But SiS chipsets had heatsinks for about 3 years before anyone else found it necessary. That always made me wonder why they needed it when no one else did.

            Sometimes a company sucks for years, but suddenly gets better. Maybe SiS has done so while I wasn't looking. But when I go to a computer show and examine dozens of motherboards, and the ones that have clearly cut corners are mostly SiS-based, it doesn't produce a sense of confidence in their product.

            And I wasn't trolling (I *never* troll). I've been building computers for 9 years (and make part of my living that way) and what I posted are my consistent observations over that timespan.

    • The SiS 735 MBs offered good performance at a great price. Unfortunately, to get stable performance you needed to buy a very expensive power supply that could provide enough juice. That negated any cost savings these MBs offered.
    • I agree with the observation that SiS has been an unexpected dark horse mobo/chipset candidate lately.

      One correction though: last I heard it was not a proven fact that Via doesn't have a "valid" P4 license. Via claims that the license is valid because they purchased S3, and S3 had a license. Intel claims S3's license was not transferable. It seems the case is still up in the air, and the lawyers will have to sort it out. Via does seem to have a reasonable claim to the license, however.
  • Maybe I missed something, but it really would have been nice if they explained how these tests stressed the AGP bus. You're not going to get much of a performance boost out of better AGP unless you're running tests with more textures than can fit in the on-board memory.
  • by DeionXxX ( 261398 ) on Sunday July 28, 2002 @04:20PM (#3968451)
    This review / test is bullshit. The only reason they see an improvement at lower resolutions is that it's the only resolution range where the game / app is not limited by the video card.

    I'd definitely take this with a grain of salt until someone can do a 4x/8x review with an NV30 or an ATI 9700.

    What kind of hardware guy looks at this and doesn't say "WTF, Xabre 400?? What kind of video card is that to benchmark anything?"

    Hopefully the /. editors will stop "jumping the gun" and wait until some real reviews come out. This is like testing new high-performance tires that can go up to 400 mph on a Yugo. Is anyone going to be surprised when the $25 tire performs just as well as the $400 tire? Sorry for the lame analogy, haven't had my morning Coke. :-p

    -- D3X
    • Hmmm... well considering that the Xabre 400 is the ONLY available video card that supports AGP 8X, I suppose that it WOULD make a good benchmark card for measuring the differences between AGP4X and AGP8X.

      The moral of the story is _research_, children.
      • Ummm, if it's the only one available, what's with all the reviews of the ATI 9700?? If they don't have the cards available to do this report correctly, why do it at all? It's just plain stupid. The Xabre is not a very good card; it can't even compete with a GeForce from last year. They should've waited a month or so and done a decent review instead of spitting out some garbage and having Taco post it on /.

        -- D3X
    • What kind of hardware guy looks at this and doesn't say "WTF, Xabre 400??"

      The kind of hardware guy who knows the Xabre was the first video card to support AGP8x and is still one of very few that do.
  • "You can see in the charts that there's actually quite a bit of advantage with AGP8X especially at lower resolutions."

    This guy is smoking crack. All of the charts are virtually identical. Maybe a different person wrote the writeup than the one who made the charts?? Or he's on the payola. Either way, there was practically no difference.
  • AGP4x VS AGP8x. (Score:4, Insightful)

    by tcc ( 140386 ) on Sunday July 28, 2002 @04:41PM (#3968517) Homepage Journal
    Most people will say AGP 8x is way too much, overkill, and will introduce some bugs and firmware/hardware/signal issues with some lower quality cards, etc...

    Well, when AGP 1x was out, people didn't find it very useful because it wasn't fast enough.

    AGP 2x was okay to offload the PCI bus and do some basic stuff, but not fast enough for high-speed games and transferring large chunks of information.

    AGP 4x seems to be okay for today's technology and all, and AGP 8X seems to be way overkill, but I personally think that it's finally what it should have been since the start: a *VERY* fast graphics port on which the bandwidth bottleneck doesn't become an issue, *at any resolution*, and that helps cut down the cost in other fields besides gaming. (One example: uncompressed video editing at 1600x1200@24 bits (or more for film, and with newer cards, better colorspace) @60FPS.) Right now you require exotic hardware for this, especially for uncompressed playback. Let's say you'd want to invest in a fast Ultra320 array (OK, you'll say that if you can do so, you can afford the exotic hardware as well, but the point here is actually CUTTING down the price, and this is one way); well, now you could get way more drives for your system.

    There are many more examples of this, but the main idea is that there are new features coming for cards - bigger bit depth, better this and that - that are going to choke the bandwidth, and 256MB on a card won't be enough in the not-so-distant future. Using system memory at almost local-memory speed increases quality and possibilities tremendously, and while we don't see much use right now, I'm sure it won't take long after 8x is installed before we see a use for 12x or 16x :)
    • Not fast enough? (Score:4, Informative)

      by Christopher Thomas ( 11717 ) on Sunday July 28, 2002 @05:02PM (#3968563)
      Well, when AGP 1x was out, people didn't find it very useful because it wasn't fast enough.

      AGP 2x was okay to offload the PCI bus and do some basic stuff, but not fast enough for high-speed games and transferring large chunks of information.


      Not fast enough to be useful? What reviews were you reading?

      Back when AGP 1x and 2x were rolled out, they were found to be marginally useful because the graphics card was the bottleneck. This is true even today. Fill rate is still almost invariably the bottleneck for performance, and CPU power for geometry and physics is usually second.

      The original intent of AGP was to transfer textures across the bus, with the card's texture memory just a cache of this data. But this is a _bad_ thing to do - bandwidth and especially latency of a card's on-board memory is likely to be much better than AGP transfer bandwidth and latency, so nobody in their right mind writes games that require streaming textures from system memory. This isn't going to change - the memory in your PC is optimized for being big. The memory in your graphics card is optimized for being fast. Even with a zero-latency, infinite-bandwidth AGP port, local memory is better.

      All AGP is used for now is to transfer geometry data, and it's plenty fast for that (cards are still generally fill-rate limited). With on-board transformation and lighting, and further folding-in of the graphics pipeline on the way, the amount of data that needs to be transferred per frame is going to get _smaller_, not larger.

      Very high AGP transfer rates are a marketing bullet-point, and not much else.

      Oh, and if you're editing a 1600x1200 movie on a PC, you're limited by your disk transfer rate. No way are you storing *any* significant chunk of that in a PC's RAM.
      • >Oh, and if you're editing a 1600x1200 movie on a PC, you're limited by your disk transfer rate. No way are you storing *any* significant chunk of that in a PC's RAM.

        Ever heard of PCI-X and aggregated (i.e. many in parallel) Ultra320 arrays? Add lossless compression that results in anywhere from 1:1 up to 4:1 compression depending on the data.

        Plus, I was merely stating an example. Add some funky stuff to process on the graphics card or CPU before displaying (thus you *MIGHT* need the extra bandwidth back and forth between memory/gfx card/CPU to process the information prior to dumping it on a display). Of course you'd also want plenty of RAM to buffer the whole thing. You can add mathematically LOSSLESS compression (like a ZIP codec, for example) to the video stream coming from the array, effectively doubling (in most cases) the amount of data coming in (let's see, "double" PCI-X bandwidth, yep... that's a lot of data). Of course you need a quad-CPU system to do all of this in real time (or a very powerful dual system).

        As I've stated, it's easy to blast ONE given scenario; I'm sure a lot of people here could give you many scenarios where 8X is welcome. In my case I'd have to break a (blah!) NDA to illustrate a very specific case in detail, but the concept of increasing complexity, bit depth and quality/functionality of newer graphics cards still remains.

        About the 2x issues not being good enough: well, the latency and all is a big problem for GAMES, yes, and your specific example for GAMES is right, but for OTHER stuff, 2x was too SLOW; with or without the latency issues, the bandwidth was just too little. The numbers in theory were good, but in practice, with all of the other processes going on, you had to cut the given numbers almost in half. Anyway, you're right about the gaming issues and the fact that these GAMING cards couldn't perform. I was thinking ASIDE from gaming: professional equipment, HDTV editing, framebuffers, etc.
        • Oh, and if you're editing a 1600x1200 movie on a PC, you're limited by your disk transfer rate. No way are you storing *any* significant chunk of that in a PC's RAM.

          ever heard of PCI-X and aggregated (i.e. many in parallel) Ultra320 arrays? ...
          Of course you need a Quad CPU system to do all of this in real time (or a very powerful dual system).

          Quite the "PC" there. *smirk*

          I repeat - nothing that you're going to do real-time video editing on at that resolution will *have* an AGP bus (or cost less than about ten times what a home PC costs).

          All you're doing is supporting my case.
      • ..... isn't that kind of like saying: "based on NT Technology" ?
    • Re:AGP4x VS AGP8x. (Score:4, Interesting)

      by jtdubs ( 61885 ) on Sunday July 28, 2002 @06:23PM (#3968838)
      With 3D games, screen resolution isn't really an issue. The screen resolution and how much you need to saturate the AGP bus are completely independent. The only things that determine the AGP bus saturation are how much geometry you need to send and how efficiently you can send it, and how many textures you need to send and how efficiently you can send them.

      With texture memory creeping upwards in 3D cards we should eventually see a point where all textures can be stored on the card and sending textures over AGP should be rare.

      However, sending geometry is usually done per-frame in most 3D games, and you'd be surprised how much all of those triangles can add up.

      1M triangles, with 3 vertices each, each vertex carrying a position, a texture coordinate and a normal vector (and sometimes more), with each vector comprised of 4 floats and each float of 4 bytes.

      1,000,000 * (3 + 3 + 3) * 4 * 4 = 144,000,000

      That's 144MB per frame. At 60 frames per second, that's about 8.6GB per second.

      Now, granted, 1M tris per frame is way high for today's games. Most current games push around 30k per frame, never more than 60k. My friend and I are doing closer to 300k and are already starting to become AGP-bandwidth-limited.

      Anyway, you are right. You can't have too much bandwidth to your video card. I'd love to be able to push a full 1M tris/frame, and I'm sure I will be able to soon. Just not yet. And not even with AGP 8x, in all likelihood.

      Justin Dubs
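      A quick sketch of the same arithmetic in Python (assuming, as above, that every triangle is re-sent each frame at 144 bytes per triangle; the AGP peak rates mentioned in the comments are nominal figures):

      BYTES_PER_TRIANGLE = 3 * 3 * 4 * 4   # 3 vertices x (position, texcoord, normal) x 4 floats x 4 bytes = 144

      def geometry_rate_gb(tris_per_frame, fps):
          # Bandwidth needed if the whole scene is streamed over AGP every frame.
          return tris_per_frame * BYTES_PER_TRIANGLE * fps / 1e9

      print(geometry_rate_gb(1_000_000, 60))   # ~8.6 GB/s - well past AGP 8x's ~2.1 GB/s peak
      print(geometry_rate_gb(300_000, 30))     # ~1.3 GB/s - roughly the figure quoted in the reply below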
      • How do you know you're AGP-bandwidth limited? I doubt you can really exceed the bandwidth of AGP 4x with current systems. After all, the triangles need to be processed in the graphics card - graphics card manufacturers claim high poly throughput numbers, but in reality they are much lower. Plus, you need to take that triangle data from somewhere. If you take it from RAM, you could in fact get more data than 2.1GB/s - but the scene would obviously be static. And can you really calculate 1M polygons (in a way that makes sense) on current CPUs? mczak
        • I'm just guessing, since my CPU isn't pegged while my app runs.

          Also, at the ~300k tris/frame we are pushing at the ~30 frames we are getting, that would make 1.2GB/s of bandwidth. That's assuming perfect efficiency. We are probably using over 1.5GB/s of bandwidth.

          The triangles we are pushing are stored completely in RAM and are being moved via DMA to the vid card. The scene itself is static, you are correct, but the camera can move, as this doesn't require any changes to the geometry to be made by us.

          We just push the new Model-View and Projection matrices, then push the same 300k-tri scene and let the video card transform the triangles for us.

          So, you are right, being AGP-bandwidth limited isn't a certainty. In fact, it is likely that we are video card limited. But, regardless, at 1.5GB/s we are getting close to pushing AGP 4x to the max. And when the next generation of vid cards comes out that can push twice as many tris/sec we will need AGP 8x to keep the vid cards saturated.

          Justin Dubs
  • my Tribes2 + X (Score:3, Interesting)

    by Trevelyan ( 535381 ) on Sunday July 28, 2002 @04:43PM (#3968521)
    I have an ATI AIW 32MB DDR, and my Tribes 2 can get a little laggy, especially when there are some vehicles on my screen.
    Reading my X log I noticed that DRI was using 1X mode for AGP. After some RTFM, I found the option to kick it into 4x.
    Anyway, the point being it didn't help speed up the game gfx (well, I didn't notice much difference).

    In case you're wondering, for ATI cards the XF86Config option is:
    Option "AGPMode" "4"

    Also I noticed:
    Option "AGPSize" "32"
    But I can't tell if setting this bigger than the RAM on the card helps or not (maybe that's the buffer size option?). I was hoping to let the card borrow more of my sys RAM (which is PC100, slow compared to a gfx card's DDR, but better than HDD =)
    Anyone know any other good options to help eke out more speed?
    • Well, getting away from PC100 would be a good start. Just moving up to PC133 would make a noticeable difference; moving to DDR would make an even more noticeable difference.

      Also, regarding the AGPSize (which I thought referred to the AGP aperture in the BIOS), I once read that setting it higher than 64MB in Linux was useless because of some X limitation. Perhaps someone with more experience can enlighten me as to why...?
      • Faster memory isn't going to do much for that video card (the ATI AIW 32MB DDR). If you were using a GeForce 4, or the new Radeon card, on an SDRAM system, then there would be a substantial bottleneck. However, for older or more efficient 3D renderers (more efficient, as in a tile-based-rendering Kyro 2), SDRAM is perfectly fine, and DDR is practically useless, as the card is not as bandwidth hungry.

        His problem is (a) running Tribes 2 on an older Radeon, and (b) the Tribes 2 Garage Games engine was horribly unoptimized in that game.
    • I'm guessing that it's the AGP aperture size. It kinda depends on the graphics card, but I'd definitely set it higher. 256MB was the best for my Matrox G200 (8MB graphics card, 224MB system), and 128MB is the only stable size for my GeForce2 (32MB gfx, 512MB system).

      HTH,

      jh
      • As I understand "AGP Aperture Size", it is basically a portal in system RAM to your video RAM: a range of memory addresses. When anything is written to this memory range, it is automatically transferred to the video card's RAM. By your post, the G200 used 8MB of system RAM for its video memory? I wonder what led you to set the AGPAS to 256MB (reviews, tech articles and such), what was actually going on (if the BIOS/OS decided to override your settings), and why there is instability with your GeForce2 (I have the same card, with a 64MB AGPAS). Anyway, just curious.
  • I can't even get 4x stable under XP, so I figure 8x is half as likely to let me play NWN
    It's always about you, isn't it?
  • by Malor ( 3658 ) on Sunday July 28, 2002 @04:52PM (#3968541) Journal
    "[...]there's actually quite a bit of advantage with AGP8X especially at lower resolutions."

    What are these people smoking? The vast majority of the tests are all but identical. The VERY BEST performance difference is 3DMark2001SE Pro at 800x600x16, and it shows a whopping 4.7% improvement.

    Clue: In the current 3D world, AGP4X IS NOT a constraint. Even AGP2X is fine. Hell, there was an early version of the (TNT2 or GeForce 1, I forget which) that was *PCI*, for chrissake, and it was only a whisker slower than the AGP cards at the time.

    Geometry transfer, it would appear, just isn't very bandwidth intensive. The only time the AGP rate is going to matter much is when doing very heavy texturing from main memory, but that just isn't happening. Instead, manufacturers are putting more and more RAM on the video card instead, and all the games are oriented around pre-loading all necessary textures in that specialized, super-high-speed RAM.

    At the present 1.06 GB/sec transfer rate of AGP 4X, the entire video RAM of a 128MB card can be filled in roughly 1/10th of a second. If you spend all the time, money, and effort to upgrade to AGP 8X, you can improve your load time by 1/20th of a second.

    Just think...if you played 50 levels of some FPS a day, every day, you'd save over 15 minutes in your first year alone!

    Obviously, this is a very important technology we should all rush out to buy. Thanks, hardwarezone.com! I'll trust you for all my technology reviews in future.

    -----
    AGP8X: Saving your time so efficiently, you won't even notice.
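    For reference, the fill-time arithmetic above, sketched in Python (the 128MB card size and the nominal AGP peak rates are assumed round figures, not measurements):

    CARD_MB = 128
    nominal_rates_mb_s = {"AGP 2x": 533, "AGP 4x": 1066, "AGP 8x": 2133}

    for name, rate in nominal_rates_mb_s.items():
        # Time to refill the card's entire local memory at the nominal peak rate.
        print(f"{name}: {CARD_MB / rate:.2f} s to fill {CARD_MB} MB")
    # AGP 4x: ~0.12 s, AGP 8x: ~0.06 s - the "1/10th vs 1/20th of a second" figures above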
    • Did you know that the huge amount of high speed memory is supposed to be just a texture cache?

      What's supposed to happen is that insanely high resolution textures are supposed to be streamed from that gigabyte of DDR400 RAM that you have to back up that 128M GF4Ti4600. That's why we need more bus bandwidth. Trying to stream hundreds of megs of textures before the next frame needs to be rendered requires absolutely insane amounts of bus bandwidth.
      • Re:Oh come on! (Score:2, Interesting)

        by Malor ( 3658 )
        right... so they DON'T DO THAT. Even 8X AGP is going to be very, very slow compared to the incredible speed of the RAM in most of the high-end video cards. There have been a few demos using AGP texturing, but all the real-life apps I'm aware of are carefully constructed to stay within that cache.

        It may help doing background loads of 'seamless transition' games, but even so.... unless you're trying to stream all these textures out every frame, it's not likely to help much. AGP 4x can fill a 128MB card in 1/10th second; 8X can do it in 1/20th. Unless you get to the point of multiple updates per second, it's just not going to matter very much. Developers will use good caching algorithms and reasonably careful level design to work around AGP speed issues.

        Streaming textures IS a pretty cool idea, and I would like to see games that use them. Maybe Doom 3 will, but it hasn't sounded like Carmack is trying to do anything like this yet.

        The reason I was so acerbic in my original comment was that the website was talking like it mattered NOW, for the apps we have TODAY. (a whole 4.7% increase! in one benchmark! wow!).

        In a nutshell: for everything out now and probably for another 18 months, AGP8X isn't going to matter a whit. Don't worry about it until 2004 sometime.
  • hypocrite (Score:1, Offtopic)

    by stepson ( 33039 )
    I can't believe the hypocrisy on Slashdot, even after all the ass reaming CmdrTaco takes about it. Is he trying to do this, maybe slowly switching things over to 'Hey, XP is cool!' so that all the morons who switched over to Linux because of this site will slowly switch back to XP because of shit like NWN? C'mon, it's a fucking game, and I think it even runs under Linux (or will) and is being ported to the Mac. I'm sure CmdrTaco and the rest of the /. bunch ran out to get Xboxes, even as they come back and bitch about how much "M$" sucks.

    I don't run MS software. I run OmniWeb as my browser on Mac OS X. I don't use Office, I use AppleWorks, and if not, I could always run KOffice or even StarOffice.

    Mods will probably mark this as off-topic, but c'mon, is cmdrtaco's offhand "Look I run XP" comment on topic? Of course, he's not as easy to moderate, I suppose.
    • Why are you so offended by the fact that he uses windows XP? Why do you assume he should bear the burden of fighting what you view as an evil empire? He wants to play a game and maybe he likes that XP is a little easier to manage than Linux.

      The minute an editor says something objective you jump on him for not being blindly pro-Linux?

      If Slashdot loses the few strands of objectivity it has left, it will be of no value. They'd do no service to people by presenting propaganda. I'm quite sure the editors realize this; you'd do well to realize it yourself.
    • Actually there WILL be Linux binaries *soon*. The game developers have promised us this much, and I believe the executive producer has pushed for this.
  • bash-2.05a$ cat /proc/driver/nvidia/agp/status
    Status: Enabled
    Driver: NVIDIA
    AGP Rate: 4x
    Fast Writes: Enabled
    SBA: Enabled

    uname -a

    Linux daryl 2.4.19-gentoo-r5 #5 SMP Fri Jul 26 18:07:32 EDT 2002 i686 GenuineIntel

    Nice and stable!
  • Fancy shit (Score:3, Insightful)

    by Graymalkin ( 13732 ) on Sunday July 28, 2002 @05:54PM (#3968747)
    It seems like the editors have a particular list of text strings they grep all incoming submissions for; apparently among these is AGP8x. This comparison is ridiculous even for Rob to point heedlessly to. The website itself is Yet Another Anandtech/Tom's Hardware ripoff design, with an article that reads like a SiS fanboy on crack.

    The whining and crying about AGP 8x is a bit premature, and the AGP 3.0 standard has been pretty much supplanted in usefulness by graphics card manufacturers. Having a dedicated high-speed port for graphics hooked up to the northbridge is a good design idea. It frees the traditionally low-bandwidth nb-sb connection from needing to carry lots of graphics data. The memory sharing available in AGP has become increasingly useless as worthwhile graphics cards have scads of local memory now. About the only thing an AGP aperture is good for is an i845G chipset board or some other cheap piece of shit HPaq sticks in their computers.

    The AGP 2.0 spec isn't much of a bottleneck either. Case in point, replacing the TNT2 based video card in my dual P3 500 with a GF2GTS more than doubled the 3DMark2001 SE score from 926 to 2068. The board is an IWill DBD-100 with a 2x AGP port on it. The fillrate or poly rendering ability was not adversely affected by the AGP 2x port, the only thing keeping the 3DMark score down is the relatively slow processors (as 3DMark is single threaded) and the low FSB bandwidth.

    The fillrate of an ATi R300 or nVidia NV30 isn't going to be affected much by an AGP bandwidth of ONLY 1GB/s. Most cards based on these chips will end up having >100MB of on-board memory. It won't be too terribly long before the video card in the PC has more and faster memory than the system's main memory. Even Doom3's 80MB of textures isn't going to really stress a 4x AGP card; it would take well under a tenth of a second to transfer all 80MB of textures. Maybe AGP 8x will be on my upgrade path when a game's textures take a perceptible amount of time to load into the video card's local memory.

    Rob it isn't Microsoft's fucking fault your AGP card doesn't work properly, you're probably stuck with some old VA Lin^H^H^HSoftware POS box. My system doesn't have any problems running reliably under Windows XP and I don't think too many other people running Windows 2000 or XP are having too many problems either. When do we get to mod the editors as -1 Troll?
    • Don't forget those triangles.

      To hit any kind of realistic graphics in complex scenery they need to handle a minimum of a million triangles per frame. 3 or 4 million would be better (think individual points on the leaves of a maple tree).

      The math is easy (points per triangle * fps * number of triangles * 4 bytes).

      Once we hit 96x AGP (and a GPU which can crunch it) we can start getting some games which could be confused for a photo.

      It's still pretty easy to tell them apart at a glance. Movies are certainly getting good, and stills we've pretty much mastered, depending on the artist -- so maybe by 2010 or so games will be able to start concentrating on physics hardware improvements rather than graphics hardware.
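      Plugging rough numbers into that formula (a sketch only; the 144 bytes per triangle and the 266MB/s-per-AGP-"x" figures are assumptions, not benchmarks):

      BYTES_PER_TRI = 144          # 3 vertices x (pos + texcoord + normal) x 4 floats x 4 bytes
      AGP_1X = 266e6               # nominal AGP 1x peak, bytes per second

      for tris_per_frame in (1_000_000, 4_000_000):
          rate = tris_per_frame * BYTES_PER_TRI * 60      # bytes per second at 60 fps, streamed every frame
          print(f"{tris_per_frame} tris/frame -> {rate / 1e9:.1f} GB/s, i.e. roughly AGP {rate / AGP_1X:.0f}x")
      # ~1M tris/frame works out to roughly 32x; ~4M to roughly 130x, so "96x" is in that ballpark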
      • You can have games whose graphics get confused with photos with DirectX 9 compatible graphics cards. Not only do they have to support floating point math of the pixel shaders but also need more pixel pipelines (8 is the minimum IIRC) and support larger shader code sizes. With the NV30 or R300 anyone utilizing half of their features is going to have damn good looking graphics. DX9 and OpenGL2.0 are also going to require support for FDRL (full dynamic range lighting) that is coupled with the floating point pixel shading. What makes shit look real when rendered by an engine like PRMan and Mental Ray are the shaders and lighting options. nVidia and ATi are adding support for Renderman and Mental Ray quality shaders in hardware.

        Also it isn't usually feasible even with tons of processing power to add more triangles to a scene than you need. A super high quality picture of a maple tree could be easily done by making a rectangle with a transparency map, bump map, and texture map fitted on top of it. A single maple leaf object can have as many instances as you need and only require that one bit of memory space for the model and maps. Wait until Doom3 and UT2002 based games hit the shelves. AthlonXP 3000+s with their GF6Pt will be outputting shit that looks like FF:TSW in realtime. Aki won't need a billion vertexes, just some cool shader tricks and support for hardware transforms and patches.
  • "You can see in the charts that there's actually quite a bit of advantage with AGP8X especially at lower resolutions."

    Huh? The difference at 1024x768x32 and above is moot, or often non-existent. Are these guys looking at the same graphs I am?

    No one plays at 800x600x16 anymore.
  • Would it be possible to create other devices that use the AGP slot? Imagine a gigabit or SCSI controller with a 2.1 GB/s link to the northbridge. Mmmm... bandwidth..
  • Anybody else notice that their sponsors are SIS? :)

    No wonder they're calling a "4.7% increase" worthwhile... jesus...

  • Warning! This is probably stupid, but I am sleep deprived and rambling...

    Since many will say that one of the big issues with any large leap like 4x to 8x is utilization of that technology, I wonder about the fact that there does not seem to be any indication of these types of HW advances slowing down significantly. In the face of this (as if that is a startling revelation) I wonder if APIs (and the drivers written to support them) would be best served by making forward-compatible designs. For example: the DirectX design allows any release of DirectX to work with older versions called on it.

    That is good; however, because of the WAY that the calls are written (and among these is the very annoying factor of inconsistency between versions) it is rarely an easy task to upgrade DirectX versions (or even sub-versions) within a program. It would seem silly then to go in and do an equivalent amount of work within a code base whose goal was to 'buff up' the pipeline and storage method.

    Now I am probably wrong... but as far as I know (haven't honestly messed with any DirectX past 7) there is no 'bandwidth detection' that is truly open-ended, allowing a maximal optimization of texture and object transfer based on the [usable] bandwidth. I know memory is checked (optionally), but what about the bandwidth? Would an external library that acts as an API of APIs work, in which you could store the algorithm implementations and constants that, say, in the case of some great hardware advance, would either recalculate (with a config routine) or be patched, giving those with the new HW toys something to play with? Would this significantly slow down the program with an added lookup layer (or more)?

    I only ask this because I am toying with a graphics rendering engine (toying being the key) that, while it 'could' be used for gaming, will most likely be used for rendering architecture crud. Because I am lazy, and for the sheer pleasure of seeing if it can be done, I would like to see an easier way to upgrade programs to make use of new technologies. Perhaps this could simply be a build-time-only API/tool that is a developing framework... ah, who knows?!

    However (assuming this does not get modded down for [stupidity]), if anyone knows of such an existing process, toolset or API, please respond.

  • The great thing about the Universal AGP 3.0 is that you can get a board with it in the coming months and a decent AGP 2.0 card for a decent price.

    Then when your hardware is starting to lag (for games) you can go out and get an 8X card that will have matured and become affordable. The performance gains from moving to 8X from 4X might be only a few percent, but couple that with a new ATI or NV card down the road and boom, you've got playable framerates again.
  • by CTho9305 ( 264265 ) on Sunday July 28, 2002 @09:49PM (#3969459) Homepage
    This [tomshardware.com] tomshardware article from a while back compares AGP 1x -> 4x... here [tomshardware.com] were the results. You can see that even in the beginning of 2000, the benefits of higher AGP speed showed diminishing marginal returns.
  • "the performance gained with AGP8X is up to only about 4.7%"

    They need to run some tests where the memory to GPU bandwidth dominates the problem. For example, open up 3DS Max, Maya, or Softimage XSI with a complicated textured scene that can't redraw at full frame rate, and see if it helps.

    The big win for more AGP bandwidth should be when the board's texture memory is full and the textures spill into main memory. Typically, game textures are tuned to avoid this, but you hit it all the time with authoring tools.

    A bottleneck on geometry feed from the main CPU is unlikely, since it's hard for the CPU to generate a gigabyte/second of geometry.

    • They need to run some tests where the memory to GPU bandwidth dominates the problem. For example, open up 3DS Max, Maya, or Softimage XSI with a complicated textured scene that can't redraw at full frame rate, and see if it helps.

      I agree, it's silly to benchmark AGP 8x on games which are designed to run on the current crop of video cards and at least one generation back. The ATI 9700 and the NV30 are really streaming processors with really slow memory access; games can compress textures and use low-polygon optimized models because they throw lots of programmers at the problem. Now, with floating point instead of bytes for the buffers, they will need 4 times the bandwidth for the same performance - for better pictures, of course. A good test would be to send a high dynamic range (floating point RGB) movie as a texture for a cube map, and see if the frame rate isn't exactly twice as fast as with AGP 4x... Or just send 10 nice 0.5-million-triangle subdivision surfaces; the frame rate will be dreadfully slow, even if the hardware accelerator can handle the load.
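      A rough illustration of that "4 times the bandwidth" point (the 1600x1200 resolution and 60fps are example values only, not from the article):

      width, height, channels, fps = 1600, 1200, 4, 60

      for name, bytes_per_channel in (("8-bit RGBA", 1), ("float RGBA", 4)):
          per_frame = width * height * channels * bytes_per_channel
          # Same frame rate, 4x the bytes per component -> 4x the bus traffic.
          print(f"{name}: {per_frame / 1e6:.1f} MB/frame, {per_frame * fps / 1e9:.2f} GB/s at {fps} fps")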
  • Does anyone happen to know of a good site explaining what the various AGP BIOS settings actually accomplish? I'm running a GeForce3 on an Abit KX7-333, and dxdiag always reports that AGP texturing is disabled. There are more than a half-dozen BIOS settings for AGP, but the good people at Abit apparently don't want me to know what they do. From the manual:
    • Enhance AGP Performance
      Two options are available: Disabled or Enabled. The default setting is Disabled. This item can improve your AGP display performance.

    How about that - the "Enhance AGP Performance" option can in fact improve my performance. Who'd have guessed? No idea how or why it does it, nor why it's disabled by default. Also available are AGP Driving Controls (with a manual specification option involving settings in hex), Fast Write, Read Synchronization, and a few others. Can anyone point me to a site that might demystify some of this stuff for me? Guess-and-check plus a reboot for each combination isn't appealing...
