Hardware

Graphics Memory Sizes Compared: How Much Is Enough?

EconolineCrush writes "Trying to decide whether to get a 64MB graphics card or spring for the 128MB version? Hit up this article, which explores the performance of ATI- and NVIDIA-based cards with 64 and 128MB of memory, before swiping your credit card. Not so long ago 32MB was the top end for graphics memory on consumer video cards, but now even budget cards are available with 128MB. 128MB might seem excessive now, but a year from now 64MB cards might just be obsolete."
  • I have a Radeon 64 MB card and I have had no problems with it. In another box, I have a Voodoo3 3000 and it still runs Counter-Strike and Quake 3 just fine. It all depends on what you want to do with it.

    • I agree. I have been using a V33k since 1998 and I haven't had any problems, but then again I am not a PC gamer.

      If I were to get a new video card I would probably be using that video RAM for SWAP ;)
    • UT2003 / Doom 3 (Score:2, Insightful)

      Well, I don't know if you tried the UT2003 demo, but if you want to run the game smoothly at a decent framerate, you're going to need a good video card. When UT2003 comes out on Oct 1st with the full textures (the demo uses low-quality textures to cut down on the download size, IIRC), I'm willing to bet that 128MB of memory on the card is going to help out quite a bit.
    • Ya, but how much longer are you going to be able to use that Voodoo card? I have all sorts of friends with leftover 3dfx cards, and they have more problems than anyone I know. I know there are some people writing their own drivers for them now, but these newer games seem not to be too friendly with the Voodoos.
    • I have a GeForce2 GTS 32MB and it runs both of those games no problem. Come to think of it, the 16MB TNT (no, I don't mean TNT2) it replaced ran both of those games just fine, too, although the default machine gun in Q3 isn't really a usable weapon if you can't consistently hit 90+ FPS.

      However, my GeForce2 seriously slows down on Morrowind. Fortunately, Morrowind is perfectly playable at low framerates, but that's not my point.

      My point is, of course Counter-Strike and Quake3 run just fine on your 64MB Radeon: both of those games are older than your card!

    • by andycat ( 139208 ) on Monday September 23, 2002 @06:18PM (#4315657)
      At work the two machines I use regularly for interactive walkthroughs of large environments (10-100 million triangles) have 64MB and 128MB of texture memory, respectively. I am constantly running up against the 64MB limit and I'm fast approaching the 128MB one. Here's how it breaks down:

      Frame, depth, and stencil buffer: 1024x1024x(32 bits + 32 bits + 8 bits) = 9 megabytes

      9 megabytes so far. No problem. Double that when I push the resolution up to 1600x1200 for demos, but we'll ignore that for the moment. Now, the model I'm using has 19MB of surface texture, so we're up to 28MB. The system I'm running on this poor hypothetical card uses 512x512 textures to replace distant geometry. Each one takes up 768K of memory and I've generally got a working set of between 40 and 60 textures. There's another 30-45MB, so total usage is somewhere between 58 and 73MB. Add in shadow maps and we lose another 20 or 30MB. The 64MB card is now swapping to AGP memory. The 128MB card is filling fast. It's adequate for the current generation of the system I'm running, but before I can write the next couple versions I'm going to have to implement some serious resident-texture-set cache management.

      Now, you can certainly argue that this is an atypical application. You would be quite correct. However, I do need that much video memory and I do use it. Yes, it's massive overkill if you want to play Quake, Unreal, whatever, but once you start looking into more exotic applications it's easy to get into situations where you can use arbitrary amounts of texture RAM. Real-time shading can get you there in a hurry, too, once you start using textures as lookup tables for expensive-to-compute functions (e.g. light fields or factorized BRDFs) or caching the framebuffer for later re-use.

      So yes, 90% of the programs 90% of users will run will currently fit neatly in 64MB of video memory, but there definitely exist systems that require more than that.
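
      To make that budget easy to poke at, here is the same arithmetic as a small Python sketch (the figures are from this post; the helper names are mine):

      MB = 1024 * 1024

      def buffer_bytes(width, height, bits_per_pixel):
          # Bytes for one screen-sized buffer at the given depth.
          return width * height * bits_per_pixel // 8

      # Frame (32-bit) + depth (32-bit) + stencil (8-bit) at 1024x1024:
      framebuffers = buffer_bytes(1024, 1024, 32 + 32 + 8)   # ~9 MB

      surface_textures = 19 * MB                   # the model's surface texture

      # 512x512 24-bit textures standing in for distant geometry: 768 KB each
      impostor = buffer_bytes(512, 512, 24)
      working_set_low = 40 * impostor              # ~30 MB
      working_set_high = 60 * impostor             # ~45 MB

      low = framebuffers + surface_textures + working_set_low
      high = framebuffers + surface_textures + working_set_high
      print(f"{low / MB:.0f}-{high / MB:.0f} MB before shadow maps")  # 58-73 MB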

  • What's the answer to the question? The answer is: it doesn't matter.

    I got a GeForce 4 Ti 4200 with 128 megabytes and video input for $160. The 64 MB version with no video in was $130. So, the difference is $30. For $30, I'd get the extra 64 MB.
  • All that graphics memory is going to continue to be important as ever more complex and detailed games such as Doom III are developed. With the kind of high-resolution textures presented in such applications, a large amount of high-bandwidth memory is advantageous for rendering them, along with high-polygon-count models and worlds, in real time.

    For those of you planning to never buy another game, well, why ask in the first place?

  • Bah... (Score:4, Funny)

    by RQuinn ( 521500 ) on Monday September 23, 2002 @05:07PM (#4315098)
    a year from now 64MB cards might just be obsolete

    Bah. Next thing you'll be trying to tell me my Voodoo 3 2000 PCI is obsolete!
  • removable RAM? (Score:5, Interesting)

    by littlerubberfeet ( 453565 ) on Monday September 23, 2002 @05:07PM (#4315108)
    this might be offtopic, but why can't the RAM on graphics cards be modular, like the stuff we stick in computers? Is it a card manufacturer conspiracy? A different type of RAM? I would be willing to buy a high-end graphics card if I could eventually stick 256 or 384MB on the card.
    • Re:removable RAM? (Score:2, Interesting)

      by Anonymous Coward
      RAM used to be module-based; see cards such as the Matrox Millennium and some Trident cards. However, memory was hard to find and match, since each company used its own type and speed.
    • Re:removable RAM? (Score:5, Interesting)

      by Milican ( 58140 ) on Monday September 23, 2002 @05:17PM (#4315187) Journal
      By soldering the RAM directly to the board there is a better connection, which allows them to run the chips faster. If you add a connector then you add a capacitive load, and thus you have to slow down the memory. That's not to say modular RAM could not be incorporated down the line, but for the latest and greatest it's better and cheaper to solder the stuff right on the PCB (printed circuit board).

      JOhn
      • Nice work Beavis :), you just upped the US's trade deficit as hundreds of overclockers destroy their motherboards attempting to solder in their RAM modules.

        No doubt you'll be receiving an extra bonus from the Taiwanese motherboard manufacturers this month. Of course I suppose that assumes they're able to recognize solder drips and scorch marks as good reasons to assume the board was not received DOA...

        (Yes, I realize there are overclockers all over the world -- I apologize in advance for my horribly US-centric post... geez...)
    • Re:removable RAM? (Score:3, Interesting)

      by hike2 ( 550205 )
      In the case of NVIDIA, the "value-added resellers" add their own high-speed memory, as in the case of Leadtek. If they were to make the memory modular they could not charge an extra 50-100 bucks over the competition just so that you can gloat at your extra .5% gain over your buddy's exact same chipset.
    • Re:removable RAM? (Score:2, Insightful)

      by Erpo ( 237853 )
      I'm sure that graphics card manufacturers want to provide us with as few upgrade options as possible for the simple reason that it would mean more money for them. However, I don't think memory slots on cards would be all that useful. Usually, the bottleneck that triggers an upgrade for most consumers is a killer app that pushes more polygons and requires a faster GPU with more rendering pipelines and faster graphics memory. More sticks of DDR SGRAM isn't going to do you any good there.
    • But then again, how many times have you thought it would be adequate to just upgrade the memory and not your entire card? The little performance to be gained by adding some memory, compared to upgrading your entire card, just doesn't seem worth it. If you've kept your card long enough that you're willing to upgrade it, then it's probably obsolete by then.
    • Re:removable RAM? (Score:5, Informative)

      by John_Booty ( 149925 ) <johnbooty@noSpAm.bootyproject.org> on Monday September 23, 2002 @05:38PM (#4315354) Homepage
      this might be offtopic, but why can't the RAM on graphics cards be modular, like the stuff we stick in computers?

      Another reason in addition to the ones posted by other users... when are you going to upgrade the memory on your graphics card? Perhaps 12 months after you bought it? Two years?

      --If you're going to stay close to the cutting edge in PC graphics, you'd be buying a new card at that point. Considering the pace at which PC graphics card technology increases, your card would be fairly dated by then anyway and you'd be looking at another one.

      --If you were going to buy the extra video memory fairly soon after purchasing your card when it's still bleeding-edge, why not buy a card with that much RAM in the first place?

      Of course, other posters have noted lots of good reasons as well such as the profits made by board/chip manufacturers on the extra RAM, physical RAM connection issues, etc, etc.
    • High end graphics cards used to use SIMMs or DIMMs for texture memory and the video memory was usually dual ported VRAM or similar.

      I think the optimal situation would be to have stage 1 and stage 2 texture memory. Stage 1 would be fairly large, perhaps 32-64MB just for textures, and stage 2 would be one or two SDRAM DIMMs. Then you could swap things in from secondary texture memory as necessary, or render them directly if they're small.

    • I think the most likely reason is that every time I have felt the need to upgrade, it has been the GPU, not the RAM, and I suspect it's like that for most other people. The graphics industry progresses fast, much faster than CPUs. Not only that, but there are huge visible gains with new graphics cards: you notice the difference between a GeForce 3 and a GeForce 2 with new software, whereas a P4 1.5 to P4 2.5 is just a little faster.

      At any rate I've never found a time where I said "Wow, my GPU is just fine, but I really need more RAM!" Whenever I upgrade, it's because I need a new GPU, which also comes with more RAM. Now as apps grow they need more RAM than what a given GPU has, BUT they have also grown in other ways that simply outdo the GPU entirely, and you need a new one.

      As a side note, ultra-high-end GPUs like some of the Wildcat series DO have upgradable RAM, BUT only extra texture memory; the main memory is still soldered on for signaling reasons.

    • My Bondi Blue iMac has the ability to upgrade the video memory from 2MB to 6MB....
  • by gurnb ( 80987 ) on Monday September 23, 2002 @05:08PM (#4315112) Homepage
    How can you say that a 64Meg Video card may be obsolete in a year!?

    If a piece of hardware is doing what you need it to do, then it is not obsolete. Not everyone plays/needs/wants the latest UT2003/Doom3 game.

    • My words exactly! Somebody might be led to believe that the '89 Amiga 3000 sitting on my side table is somewhere way beyond obsolete but that's not true. All the programs run as well as they did, what, 13 years ago. If I need quick and dirty subtitles on a video or just fancy a quick game of pinball, everything is running 15 seconds from power-up. And my kids seem to prefer those classic games as well :-)

      Comic-not
      • [laughing] I've got a perfectly good XT here that I was trying to find a good home for... then one day I needed it to test some software, so it's earned its keep. And I've got a 286 you couldn't pry away from me with a crowbar, cuz it's my emergency backup machine if I *need* a computer during an extended blackout: it does everything I can't live without, AND my heavy UPS will run it for two solid hours. (Said UPS is 22 years old.)

        But you make a good point. Just because hardware or software is old doesn't mean it gets any worse than when it was new!! (eyeing prominent desktop icon for WordPerfect 5.1 :)

    • by Anonymous Coward
      I think the discussion assumes that by "obsolete," it is implied that the hardware is insufficient to run newer software with higher requirements. I'll assume you ignored that obvious fact to make your point. By your argument a Tandy Model 200 laptop is not "obsolete," as long as the 4 AA batteries it requires allow you to create minuscule BASIC programs and type up documents. In this case you're referring to usefulness rather than obsolescence. A piece of hardware might never live out its -usefulness-, but it'll almost certainly become obsolete, and much more rapidly in the computer industry, in fact. So yes, the nameless 64MB video card may very well be obsolete within a year, six to nine months, maybe even less. Its usefulness might survive for as long as the hardware itself survives. I think you've mixed these two concepts up somewhere along the line.
      • Parent: +1 Insightful

        Nice to see somebody can actually think before they post. Even an AC.
      • You're correct about that -- the distinction between "useful" and "obsolete" has been largely lost in the average discussion of hardware or software. And as everyone knows, all computer-related materials are obsolete no later than the moment you remove them from their box. :/

        Maybe we need a new term meaning "obsolete for a particular level of usefulness".

    • Send me your antique 64mb cards, I'll give them a good home in their old age :)

      Cripes, the best video card I've got here is a 16mb G200 that just barely keeps up with a P3-500. But it does what I need of it, so who cares if it's not the latest and greatest??

  • It sure looks like it could use it now...

    And even with the Slashdot 128MB we still can't take out more than 25 sites a second...

  • by Anonymous Coward on Monday September 23, 2002 @05:14PM (#4315161)
    Maybe with enough RAM, processor speed and plasma displays I could create a $50,000 virtual reality room where Winamp visuals would rival a $3.00 hit of LSD.

    Wouldn't that be cool? You could make your freakin' trip end when you needed it to. Game over, man. Legal, too.
  • by dutky ( 20510 ) on Monday September 23, 2002 @05:17PM (#4315186) Homepage Journal
    What the hell do I need memory on an AGP video card for?!? Wasn't AGP supposed to allow the video card to use system memory for textures and other crap? (While we're at it, how the hell does texture mapping help my compile or download times? If all that extra memory on the video card doesn't improve compile or download time, why do I care?)

    Yeah, I know: If you're the kind of gormless muggins who thinks the only use for $2500 worth of steel, plastic and silicon is to play Doom and Quake, then this kind of crap is important to you. I, however, have real work to do and couldn't care less about this kind of garbage. <grumble> <wheeze>

    • What the hell do I need memory on an AGP video card for?!? ... If you're the kind of gormless muggins who thinks the only use for $2500 worth of steel, plastic and silicon is to play Doom and Quake, then this kind of crap is important to you.

      As others have pointed out, on-board memory is much faster than AGP.

      A decent PC no longer costs $2,500. If a $100 video card can turn a sub-$1,000 PC into a game machine, I'm all for it.

    • The real use of AGP is as a really freaking fast PCI bus. Since consumers don't want to pay for 6 PCI-X ports, they can get one for the only thing that needs it on their computer: the graphics card. The biggest thing keeping AGP transfer mode from taking off, other than bandwidth, is latency; physically long leads like those between an expansion port and main RAM will always be much slower latency-wise than on-board leads that are a couple of cm at most, and usually closer to a single cm.
  • by aled ( 228417 ) on Monday September 23, 2002 @05:21PM (#4315221)
    I still use my trusted GeForce 256 DDR (yes, the original) and it still works fine with any games I've tried. That said, I haven't tried many new games this year, but NWN does fine. I guess that DOOM III would be just too much for it.
    It all depends on what you want, how much you are trying to spend, and what you are doing with it. If it's a luxury but you know it, what the heck! When I bought mine it was pretty expensive, but I liked it. On the other hand, a cheaper card may work wonders these days, and you can get a GeForce 2 for maybe 5 times less than I paid for mine.
    The game market is the driver for 3D graphics cards, so what's your game?
  • Bad joke. (Score:4, Funny)

    by Anonymous Coward on Monday September 23, 2002 @05:21PM (#4315223)
    I'd say 640K of memory should be enough for anybody.
  • by raehl ( 609729 ) <raehl311.yahoo@com> on Monday September 23, 2002 @05:23PM (#4315244) Homepage
    If you're a high-end gamer, isn't EVERYTHING obsolete a year from now?
  • by floppy ears ( 470810 ) on Monday September 23, 2002 @05:24PM (#4315250) Homepage
    Sharky Extreme has an interesting comparison [sharkyextreme.com] of the MSI GeForce4 Ti 4200 128-MB and 64-MB cards. Apparently the 64-MB card has faster memory, and its performance is almost as good. The main choice between the two may depend on whether or not you need VIVO (video in-video out).
  • by atari2600 ( 545988 ) on Monday September 23, 2002 @05:24PM (#4315253)


    Does the amount of memory matter as much as the bandwidth of the memory on the card? What about the GPU on the card? I had an nVidia TNT2-M64 with 32MB SDRAM which I used to play practically every game that came out at 1024x768 and 16-bit colour - Direct3D and OpenGL were perfect - on Windows and Linux systems (thank you Loki :) ). Then I felt Deus Ex struggle a bit and I chose 800x600 and finished the game - then I felt Operation Flashpoint struggle and I knew it was time for another card. But being a poor student, I bought the GeForce2 GTS with 32MB DDR for $45, and I still use it to play Tactical Ops and BF1942 among a host of other games online, and I do frag freely, getting rid of those 1.8GHz and GeForce4 Ti 4600 boasters. I am looking at a GeForce3 Ti platinum with 64MB, an ATI 9000 Pro, or perhaps the new Trident cards yet to come, which would cost around $100.



    Bottom line: I don't think 64MB will become obsolete in a year, even for hardcore gamers. Hardcore gaming is defined by how good you are at a game compared to other humans and how little time you spend mastering parts of a game - not really by a GeForce4 card with 128MB or a Parhelia with 256MB just because you can afford the big cards or you have a rich parent. C'mon guys, there are hardcore gamers here - it's a joke that 64MB would become obsolete.

  • Obsolete? So what? (Score:5, Insightful)

    by iiioxx ( 610652 ) <iiioxx@gmail.com> on Monday September 23, 2002 @05:25PM (#4315257)
    "...but a year from now 64MB cards might just be obsolete."

    So? A year from now, 128MB might be a low-end card, too. So in a year, buy a new card. Don't invest in tomorrow's technology today at a premium, when you can get it tomorrow at a discount. That's why smart buyers invest in modular components. When your hardware gets outdated, pluck and chuck.

    I never invest in the top-end. I buy in the middle ground. Why? Because components drop from high-end to mid-range very quickly, but then stay there a long time before obsolescing to the low-end (or dead-end). And when a product drops from the high-end to the middle ground, the pricetag typically gets cut in half.
    • When your hardware gets outdated, pluck and chuck.

      That sound you hear is a million Macintosh zealots twitching and convulsing while they try and convince themselves that lack of upgradeability is a GOOD thing because it's "less confusing". :)

      • Apple actually uses some nice hardware in their newer PowerMacs. 64 bit/66 MHz PCI, Nvidia GeForce4 graphics, dual CPUs... I wouldn't mind having a dual G4 PowerMac.

        Still, the upgrade options do kind of suck. Adaptec, Nvidia, and ATI are the only manufacturers who support the Mac. Bleh.
        • I wouldn't mind having a dual G4 PowerMac.


          Me either. Still holding out for the possibly mythical G5/Power4 though.


          Adaptec, Nvidia, and ATI are the only manufacturers who support the Mac.


          For graphics ATI and Nvidia pretty much cover the bases. Hard drives aren't Mac-specific, just drop in a Western Digital or Seagate, format it using OS X's Disk Utility, and go. What else do you need?

          • How about a PCI video capture card (USB sucks), SCSI host adapter (for those cheap SCSI drives on ebay), second NIC (having two NICs lets you share your broadband connection), ATA 133 IDE controller (for that new Western Digital you mentioned), sound card (Creative Labs doesn't seem to support the Mac any more...?), hardware MPEG encoder/decoder, cryptography co-processor, etc.

            There's a lot more than ATI and Seagate out there. I can deal with not having Matrox video cards (kinda sucks) or Tekram SCSI cards (oh well), but paying double the PC price for your hardware -- when you can even find any upgrades -- is just outrageous. It's better to just buy a brand new Mac than try to upgrade an old Mac. I'm sure they're designed that way. :)
      • That sound you hear is a million Macintosh zealots twitching and convulsing while they try and convince themselves that lack of upgradeability is a GOOD thing because it's "less confusing". :)

        Actually it's the sound of them pointing and laughing because the only thing to do with year old PC components is "pluck and chuck" as the man said. The mac zealots are all watching their two year old hardware draw bids on ebay and taking more than half of their original purchase price to the bank. And people say macs are more expensive than PCs...
        • Dude, they get more money back because they start life twice as expensive! PCs don't hold their value as much because they start at rock-bottom pricing. Sheesh, the reality distortion field is pretty strong around here.

    • Don't invest in tomorrow's technology today at a premium, when you can get it tomorrow at a discount. That's why smart buyers invest in modular components. When your hardware gets outdated, pluck and chuck.

      No! No! No!

      Please DO invest today in the top-end graphics cards! Spend two to three hundred $ buying the best cards on the market! (Or more!)

      You see, unlike the parent poster, I think this is a positively brilliant plan for each and every one of you in the high-end gaming crowd!

      Look at the benefits: State of the art technology, frame rates so fast that subliminal advertising is practical, bitBLTs that could move your entire DNA encoding in one transfer, and colour depth that makes the games so close to real life you never have to leave your chaise-lounge and encounter the real world!

      And as a nice bonus for those of us in the category of the less driven to best-of-the-best-damn-the-cost, there is this:

      As all of the high end gamers drive the market up, some really decent hardware becomes really cost-effective and affordable for the rest of us!

      So yes, Please Please Please DO buy the BEST and Most Expensive! Drive the market as hard as it can be driven! The mild and meek will quietly thank you and buy really nice (but obviously outdated) products for a bargain basement price!

      Ooops.... forgot to tag the whole post <SARCASM>
  • I am planning to do a major upgrade due to my slow Pentium III 600 Mhz system with a GeForce2 Pro card. You can read my newsgroup thread here [google.com]. :)

  • by Rahga ( 13479 ) on Monday September 23, 2002 @05:28PM (#4315283) Journal
    Here's something you should consider before buying a 128 MB GeForce Ti-series card. There are four choices you can make right now:

    Ti-4600: Highest price, best features, 10.4 GB/s memory bandwidth, 650 MHz memory clock
    Ti-4400: High price, excellent features, 8.8 GB/s memory bandwidth, 550 MHz memory clock
    Ti-4200 (1): Decent price, great features, will handle BF1942 and UT2003, 64 MB limit, 8 GB/s memory bandwidth, 500 MHz Memory clock
    Ti-4200 (2): High price, great features, slowest out of all 4 thanks to memory speeds, will handle BF1942 and UT2003, 128 MB limit, 7.1 GB/s memory bandwidth, 444 MHz memory clock.

    Basically, on the 4200s, if you go for double the memory for almost double the price, you will see a performance hit.
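
    Those bandwidth figures fall straight out of the memory clock - all four cards use a 128-bit (16-byte) memory bus - so you can check the math yourself with a quick Python sketch:

    def peak_bandwidth_gbps(effective_clock_mhz, bus_bits=128):
        # Peak bandwidth = effective memory clock x bus width in bytes.
        return effective_clock_mhz * 1e6 * (bus_bits / 8) / 1e9

    for name, mhz in [("Ti-4600", 650), ("Ti-4400", 550),
                      ("Ti-4200 64MB", 500), ("Ti-4200 128MB", 444)]:
        print(f"{name}: {peak_bandwidth_gbps(mhz):.1f} GB/s")
    # Ti-4600: 10.4, Ti-4400: 8.8, Ti-4200 64MB: 8.0, Ti-4200 128MB: 7.1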

    After my research (urged on by PNY's box), I decided that by the time I need 128MB, I'll also want the features of a chip beyond the current Nvidia line.

    Of course, if you want anything that performs beyond the 4200, then why bother reading anything here in slashdot? You're getting at least 128 MB on your card ;) .....

    So, this weekend, I found a 64 MB Ti 4200 for $129, and it printed out a $30 rebate at the counter. Happy day, indeed. I spent the rest of the weekend playing OpenGL-boosted Doom [doomsdayhq.com] and Hexen.

    BTW, if you are completely out of the know, but love gaming, do not but the MX series of cards. They are not for you.
    • Sorry. It's late, and spelling errors abound tonight ;) .

      I also meant to say "do not buy the MX series". And a few other typos.... If I took time to get them all right, my post would have hit near the bottom of the list, thanks to the active troll population here on /.

      G'night!
    • "if you are completely out of the know, but love gaming, do not but the MX series of cards."

      AMEN to that. I helped a buddy pick out a new graphics card last weekend so he could play Battlefield 1942 (which requires hardware T&L). I recommended a GF4 Ti4200 to him, but he opted for the cheaper GF4 MX 460 instead. (I tried to warn him, really I did.)

      Hmm... no fan on this heat sink. Oh well.. maybe that's a blessing... no moving parts to break down. I'm sure it won't overheat.. I mean they test this stuff, and if it ran too hot, of course they'd slam a fan on it. Right? Right???

      Hmm... Unreal Tourney locks up after 5 minutes.

      Hmm... Max Payne locks up after 1 minute.

      Hmm... BF42 locks up in SECONDS.

      How can they sell this shit? Doesn't it get some cursory testing?? I even UNDERclocked the damn thing to minimum speed, it still froze on absolutely everything we threw at it.. the more advanced the graphics, the faster it crashed. Anyway, we returned it and picked up a cheap Radeon 7500 which has been running like a champ. ARE YOU READING THIS, NVIDIA!?
      • Err, ever heard of a faulty product? You know, when you take home that new TV/toaster/VCR and it doesn't work? Take it back and get one that does work. Same concept here.

        BTW, I purposely removed the fan from my GF4 MX; I like my computers quiet. It works perfectly with only a decent heatsink on it...
    • Second the MX comment:

      The MX series of the GF4s does not support the full line of graphical features of even the GF3s, let alone the GF4s - the "GF4MX" series is a very misleading name.

      This is not true, however, of the GF2 MX series of cards. These are a great value (esp. the 64MB ones) for light or even casual gamers.

      Personally, if you want the best possible graphics on the latest game engines, all for a reasonable price, I think that you should seriously consider the GF4 Ti4400. The ATIs seem to be getting better driver support as well, so their higher-end cards (8500s or 9000s) may warrant a good look too.
    • After my research (urged on by PNY's box)

      Edward Tufte, author of The Visual Display of Quantitative Information, would not be pleased with PNY's marketing. The bar graph on the back of the box implies that the 64 MB version is 150% faster than the 128 MB version, based on a 3DMark score that is only 2% higher.

      I chose 64 MB for the value ($100), because it seems silly to spend more on something that is already eclipsed by the Radeon 9700 Pro, not to mention the forthcoming NV30.

      • Amen to that.....

        It is a nice try at backing up the "Stomps" claim... a claim that could be (mostly) wiped out with a little bit of overclocking on the 128MB variant.

        Still, like I've said, it's not worth it in my case to spend that much more money on a 4200.
  • texture memory (Score:3, Insightful)

    by phorm ( 591458 ) on Monday September 23, 2002 @05:31PM (#4315298) Journal
    AGP will let you run the textures from your video card off of system RAM, but there is still a speed loss involved in this. More RAM is of course nicer for newer games. Older games, it doesn't mean squat. No games, squat. Games without 3d, squat. (no I'm not counting those who use the video card for system memory).

    However, if you intend to play Q3 or whatever enough at superhighres, ultracolordepth, whateverwhatever, then you may want more Video RAM. Crank down the texture detail a little bit and you don't need as much, I'm sure the game is just as fun.

    AGP, fast video cards and video RAM are all about games. But when you can buy a whole PS2 for the cost of an expensive video card, it makes you think a bit.

    With my old 15" 1024x1024maxres monitor it doesn't matter much anyhow - phorm
    • To be honest, messing with details (resolution aside) on Q3 is not going to make a humongous impact if you have a decent 2000-ish video card. Modern cards have no problem at all, even 64MB or 32MB ones. I've found that the big deal there will be your processor and/or internet speed. Your average K6-2 450 tends to choke on a few bots even with a new video card and memory :)
    • AGP will let you run the textures from your video card off of system RAM

      This, by the way, has got to be one of the most useless features of AGP. Not only is the transfer from main memory to the card slow as molasses, it's also tying up the memory so that the CPU has to wait for the video card in order to get to RAM. So while the graphics card is rendering your scene out of main memory textures, it's also making it impossible for the CPU to get to RAM in order to do geometry processing for the next frame of animation.

      Not to mention, these cards are coming out with enough video RAM to rival the main memory, so the entire point is moot anyway. Well, unless you've got 1 GB of RAM like I do..

      • I skipped my normal embellishment on this. What I meant to say is:
        AGP will let you run the textures from your video card off of system RAM, but you might as well go fix a can of soup, feed the cat, etc. for all the good it does you speed-wise.

        Ok, so maybe that's too extreme. :-)

        Not to mention, these cards are coming out with enough video RAM to rival the main memory

        Spending money on 256MB/512MB video cards seems ludicrous to me, when people I know are just scraping that much up in system RAM (I'm at 512 myself until I get a new motherboard). If a game takes more RAM than my standard PC configuration, I think it's time to dull those pixels a bit in favor of a little decency as far as memory consumption. I imagine when these cards start toting that much RAM and CPU power, they're also going to be so hot that you could probably find a case mod that allows them to boil your morning coffee...

        The newest innovation, part video card, part base heater, part toaster. Don't forget the heatsink and fan! - phorm
    • s/PS2/XBox/ and you have a point. The PS2's graphics are more comparable to a high-end GeForce or maybe a GF2MX.
  • Let's assume that you've grown tired of yet another 3D shoot-em-up. Why do you need such ridiculous amounts of graphics memory? At 128MB of RAM you can have a normal workspace screen with 44,739,242 pixels, or a resolution of around 7680x5760 at 24-bit color. That's like a 130-inch screen at average dpi.

    I'm much more interested in why I can't pick up cheap ($20) 2-head or 4-head video cards, or ones with decent video out at the same time as VGA out.
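
    For anyone who wants to replay that arithmetic, here it is both ways as a sketch (the function names are mine):

    MB = 1024 * 1024

    def max_pixels(vram_bytes, bytes_per_pixel=3):
        # How many 24-bit pixels fit in a given amount of VRAM.
        return vram_bytes // bytes_per_pixel

    def desktop_bytes(width, height, bytes_per_pixel=4, heads=1):
        # VRAM a plain 2D desktop needs at the given depth, per head.
        return width * height * bytes_per_pixel * heads

    print(f"{max_pixels(128 * MB):,} pixels")          # 44,739,242 (~7680x5760)
    print(f"{desktop_bytes(1600, 1200) / MB:.1f} MB")  # ~7.3 MB for one 32-bit head
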
  • 1600x1200x32-bit = 8MB; make that dual-headed and you get 16MB. Add a bit for local caching of data on the card and you find that 32MB is way more than you need.

    If you're even questioning whether the cheapest card you can find on the shelves today has enough RAM, you're being silly.

    Video quality and the number & type of outputs are all that matter. Go buy a game console if you think otherwise.

    --
    ask not what your pocketbook can do for you but what you can do for your pocketbook.
  • by kaoshin ( 110328 ) on Monday September 23, 2002 @05:58PM (#4315510)
    My 1MB Trident SVGA card works just fine. Enlightenment looks great in 800x600x16-bit, and I play Alpha Centauri, StarCraft, FreeCiv, etc. And I have been using it day and night since around 1993 without it melting, and with no noisy cooling fans. Considering it cost me one buck, I think that it is not a bad bargain.
    • No fscking suit, you have one of those Trident SVGA cards? Is it a PCI or an ISA card? I still have this Trident SVGA ISA card among my stuff, but no ISA slot to pop it in :o) But I wish there was a way to get at least 256 colors in 800x600 mode. In theory, at least, it should be possible, but not with X11, as far as I know....
  • Not just for gamers (Score:2, Informative)

    by Anonymous Coward
    Anyone running Mac OS X 10.2 ('Jaguar') would be well advised to spring for the fattest card they can afford. The new compositing engine treats every window as an OpenGL texture, so the more RAM you have, the more windows you can open before your graphics card starts pushing textures into main RAM. The performance difference between a 16MB PowerBook and a 64MB Power Mac is noticeable (and yes, I know there are other factors in play there :).

    If other windowing systems head in the same direction (and MS indicate that Windows will, in a couple of years' time - X... who knows? Anyone have a plan there?), the advice will presumably apply equally.
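
    A back-of-the-envelope sketch of why VRAM caps your window count under a compositor like this - the average window size and the amount reserved for the framebuffer are illustrative guesses, not measured values:

    MB = 1024 * 1024

    def resident_windows(vram_bytes, avg_window_bytes, reserved_bytes):
        # Windows whose textures fit after reserving VRAM for the framebuffer.
        return (vram_bytes - reserved_bytes) // avg_window_bytes

    avg = 1024 * 768 * 4                # assume ~3 MB per 32-bit window texture
    for vram in (16, 32, 64, 128):
        n = resident_windows(vram * MB, avg, reserved_bytes=8 * MB)
        print(f"{vram}MB card: ~{n} windows before spilling into main RAM")
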
    • I disagree. I have a Powermac G4/733 quicksilver with the stock ATI Radeon 7500 (32MB DDR) in it, and this machine flies. Right now I have around 17 visible windows open, ranging from around 800x600 to full screen. Quartz Debug (part of the developer tools) says that I have many more windows open, but they are invisible.

      Performance is far superior to what it was in OS X 10.1.5, of course. The genie and scale effects are WAY faster. Scrolling has been improved by a lot, and the new eye candy goes at a perfect speed. The only bad thing is resizing windows, and even that is around a 4X improvement from OS X 10.1.5. Most importantly, however, my load average has gone down a bit. While writing this my load average has been about 0.60. Back in 10.1.5 it would have been about 1.35 to 1.6.

      As always, your mileage may vary and speed is subjective, but I have found it zippy. You definitely don't need the latest and greatest video card, but it can't hurt.

      Sten

  • i don't think so. (Score:2, Insightful)

    by hahnar ( 584140 )
    Seems as though all this is for show-and-tell's sake: 'Dude, I got a 128 meg card!' Seriously, what's the point of going top-end right now? My desktop's 32MB DDR works fine; I can play pretty much all the new games. For the average gamer 64 will probably be great for quite a while. When the need arises for 128 or higher, then it should be bought - not now, when there is such a premium on such technology. If it isn't broken, don't fix it.
  • by ikekrull ( 59661 ) on Monday September 23, 2002 @07:30PM (#4316092) Homepage
    These cards use up to about 4MB - more like 2MB or less for 16-bit modes - for the framebuffer, and the rest is used solely for storing textures.

    If you do not use OpenGL/Direct3D, then any RAM above, say, 8MB (more if you are doing dual- or triple-head at 1600x1200 32-bit) is completely useless.

    The extra bandwidth on the cards is also useless, as only 3D operations are accelerated across the super-fast buses built into these cards.

    Everything else, including 2D blits in the majority of available OpenGL/Direct3D drivers, is handled by the host CPU and involves reading from system RAM and passing that data across the AGP bus.

    I am not aware of many (any?) games that can take advantage of more than 64MB of texture RAM, and while games that *may* take advantage of >64MB are on the horizon, the big news for games is vertex/pixel shaders, rather than the ability to texture map hundreds of megabytes of pixel data per frame.

    There are applications that will benefit from the availability of 128MB or more texture RAM, but these are typically custom-written scientific visualisation apps; or conceivably you could use 128MB of textures to do realtime previews in your Lightwave/3DS Max/Maya/Blender scenes.

    However, the actual utility of this RAM for most desktop users and even gamers is rather questionable. I don't doubt that the Radeon 9700 and the NVidia Ti4600 are fast cards, but they still rely heavily on the host CPU to achieve their stellar performance, unlike some of the professional cards, which provide much more capable geometry engines and accelerate practically all of the OpenGL pipeline. The consumer cards are focused mostly on texturing and fillrate optimization, ideal for games but not necessarily optimal for other forms of 3D activity.

    That being said, the pace of development from Intel and AMD has made it more difficult to justify using dedicated hardware for these steps, as a 2GHz Athlon will probably out-light-and-transform dedicated OpenGL hardware, which is much more costly and low-volume to produce.

    The SGI O2 is a good example of a machine that simply uses system memory to store textures, and while the SGI's graphics system is not in the same class as some of the more modern 3D boards from NVidia and 3DLabs, it is certainly sufficient to do impressive texture-mapping demos. This is really not an option on the current x86 architectures, but it is a useful example of the 'other' way to handle texture memory, as it allows the user of the system to make maximum use of the resources available - i.e. when 3D graphics are not in use, the 'texture memory' is available to the apps, and vice versa.

    I think it is amazing that we now have consumer cards that contain more texture memory than was typically available as system RAM in a mid-range 3D workstation a few years ago, but the unfortunate thing is that very, very few people are able to put those capabilities to real use with the current crop of system architectures, applications and games available.

    • I am not aware of many (any?) games that can take advantage of more than 64MB of texture RAM

      Well, the article shows (as did Anand [anandtech.com], and others, in June) that Jedi Knight II can use the extra memory for a 10-25% increase in FPS. We've heard that Unreal Tournament 2003 will use more detailed textures than the demo, so 128 MB may help there, too.

    • I realize that this is a lame nit for me to pick, and it has nothing to do with the current software situation, but:

      Look at QuartzGL. Next generation *2D* compositing can use much more than the standard framebuffer. Folks have posted evidence here on /. that MS might move to a similar system in future operating systems.

      Of course, this isn't a reason to go with a 128 card over a 64 necessarily. On the mac it's a reason to go with a 32MB card rather than a 16 or an 8MB card. At some point, however, they might figure out how to suck up a ton of RAM and accelerate scrolling of composited windows (which I understand they haven't done yet in QuartzGL) and all of a sudden you might want massive amounts of VRAM in your next windows machine.
  • Matrox has come up with a crazy-ass video card called the Odyssey Xpro.

    1GB of 128-bit DDR memory at 333MHz
    1GHz Motorola 7455 CPU (i.e. an apple G4 chip)
    custom memory controller
    SIMD vector math unit
    PCI-X host interface

    Yes, you can be the first on your block to have a graphics card that runs its own operating system!
  • by Tokerat ( 150341 ) on Monday September 23, 2002 @09:10PM (#4316669) Journal
    Quartz Extreme.

    Ok Ok, so it's a Mac OS X thing, so what? How long before M$ innovates this feature into Windows? How long before it's patched into XFree86?

    Think of all the cool things you can do, both for visual pleasure and UI functionality, by operating in an accelerated, 3D environment, while the main CPU is free to crunch away at whatever it is you have your CPU doing, thus improving overall speed. Yes, I realize the CPU still has to instruct the card on what to do, but at least we're not blitting as we're trying to host web pages, for example.

    For that you're going to need texture memory. Lots of texture memory. When you run out of memory on the card, the framebuffers must be stored in RAM. When those framebuffers are needed, you'll need to swap them into the card's RAM. This will cause the main CPU to stutter as it pumps a couple of 8-9MB buffers through the system & PCI bus, which, needless to say, will get old fast, especially if the framebuffers get paged out to a swap file. Yuck!

    Of course, maybe you should wait until the other two of the Big Three implement this in some form (I know some work has been done on a 3D window manager for X; no idea if it's meant to take advantage of acceleration, though). I've heard rumor that M$ is working on it for Windows XP(ensive) 2005 or 6 or whatever it is, and I'm sure some Linux hacker has it working on his overclocked Athlon box already. Either way, you probably want to be ready for this. Or wait and buy a card when it finally happens, when 128MB will be standard.

    Since color depths will probably never exceed 48-bit (32-bit + alpha), and screen resolutions are fine at 2???X???? or whatever the current highest is, it'd take quite a few windows open at once to eat up all that memory on framebuffers. Assuming about 8 megs per window, which is admittedly above average for most windows (sans Photoshop or web browsers), you'd get about 14 or 15 windows open at once.

    Oh well, someday, you'll be sorry your card doesn't have 512MB on-board :-D
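
    To put rough numbers on that stutter: at nominal peak bus rates (classic 32-bit/33MHz PCI moves about 133MB/s; AGP 4x about 1066MB/s - and real transfers are slower), swapping one of those framebuffers back onto the card looks like this:

    MB = 1024 * 1024

    def swap_stall_ms(buffer_bytes, bus_bytes_per_sec):
        # Time the bus is tied up moving one window's backing store.
        return buffer_bytes / bus_bytes_per_sec * 1000

    buf = 9 * MB  # one of the 8-9MB framebuffers mentioned above
    print(f"PCI:    {swap_stall_ms(buf, 133 * MB):.0f} ms")   # ~68 ms - a visible hitch
    print(f"AGP 4x: {swap_stall_ms(buf, 1066 * MB):.1f} ms")  # ~8.4 ms - about half a frame at 60Hz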
  • by Vegan Pagan ( 251984 ) <deanas&earthlink,net> on Monday September 23, 2002 @09:24PM (#4316748)
    Here's what Tim Sweeney says about texture caching: [anandtech.com]

    "This is something Carmack and I have been pushing 3D card makers to implement for a very long time. Basically it enables us to use far more textures than we currently can. You won't see immediate improvements with current games, because games always avoid using more textures than fit in video memory, otherwise you get into texture swapping and performance becomes totally unacceptable. Virtual texturing makes swapping performance acceptable, because only the blocks texels that are actually rendered are transferred to video memory, on demand.

    Then video memory starts to look like a cache, and you can get away with less of it - typically you only need enough to hold the frame buffer, back buffer, and the blocks of texels that are rendered in the current scene, as opposed to all the textures in memory. So this should let IHV's include less video RAM without losing performance, and therefore faster RAM at less cost.

    This does for rendering what virtual memory did for operating systems: it eliminates the hardcoded limitation on RAM (from the application's point of view.)"
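
    In other words, video memory becomes an LRU cache of texture blocks. A toy sketch of the idea (the block size and the class are invented for illustration; real hardware page sizes and upload paths differ):

    from collections import OrderedDict

    class TextureBlockCache:
        # Toy model: video memory as a demand-paged LRU cache of texture blocks.

        def __init__(self, vram_bytes, block_bytes=64 * 1024):
            self.capacity = vram_bytes // block_bytes
            self.resident = OrderedDict()       # block id -> True, in LRU order

        def touch(self, block_id):
            # Called when rendering actually samples a texel in this block.
            if block_id in self.resident:
                self.resident.move_to_end(block_id)   # hit: mark most recent
                return
            if len(self.resident) >= self.capacity:
                self.resident.popitem(last=False)     # evict least recently used
            self.resident[block_id] = True            # miss: upload on demand

    cache = TextureBlockCache(vram_bytes=8 * 1024 * 1024)
    for block in [0, 1, 2, 0, 3]:   # blocks touched while drawing a frame
        cache.touch(block)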
  • Yes I'm trolling. But this is also good advice.
    DON'T buy an ATI.
    The DRI team aren't allowed to implement S3 Texture Compression, so you won't be able to run UT2003 or any other games which use Texture compression.
    The DRI team aren't allowed to implement ATI's HyperZ technology.
    The Gatos team aren't allowed to implement TV-out.
    Everywhere I turn ATI are advising that I am not allowed to use feature 'X' under Linux.
    ATI are now releasing closed-source FireGL drivers for their newer Radeons. But I paid $AUS500 for my 64MB DDR VIVO Radeon only a year ago and I don't need to upgrade yet, thank you. And the FireGL drivers are slower and less stable than the DRI drivers.
    ATI should provide closed-source binary-only modules for the DRI drivers to add features which are patented. But instead they force their customers to upgrade early and suffer inferior quality drivers. Not I! I am going back to bloody nVidia. And I swore I'd never do that..
  • OK, really. We all read the article, and it pretty much said what we thought it would say. Only at ridiculously high texture sizes did you get any benefit from the 128MB vs the 64MB. All of his numbers show it. After each test he comments how little difference it made... Then at the end he goes on and on about how you could get a 128MB card, cause it's 1337, and really does make a difference. I'm sorry, but please. He shows one thing and says another. Blahh.
