AGP Texture Download Problem Revealed 268
EconolineCrush writes "The latest high-end graphics cards are capable of rendering games at 1600x1200 in 32-bit color at jaw-dropping frame rates, but that might be all they're good for. For all their gaming prowess, all of these cards have horrific AGP download speeds that realize only 1/100th of their theoretical peak. This article lays it all out, testing video cards from ATI, Matrox, and NVIDIA, and clearly illustrates just how bad the problem is. While these cards have no problems rendering images to your screen, you're out of luck if you want to capture those images with any kind of reasonable frame rate via the AGP bus."
Um, this is a surprise? (Score:4, Informative)
The only situation I can see where you'd want more than PCI bandwidth returning would be for uncompressed HDTV capture, and there are better ways to do that (grab the raw broadcast stream, for example).
Re:Um, this is a surprise? (Score:2, Informative)
Re:Um, this is a surprise? (Score:5, Interesting)
Two reasons for wanting to grab the framebuffer (or parts of it) are for
a) texture imposters (realtime adaptive billboarding) and
b) split world/image-space occlusion culling.
With faster readback, both these techniques would probably be used more in "normal" software (ie games).
0.02
Yes, but... (Score:5, Informative)
That's what render-to-texture is for; you don't need to read data back to the CPU.
b) split world/image-space occlusion culling.
This wouldn't be too useful for realtime graphics anyways, because of the way the 3D graphics pipeline works. The CPU can already be processing data a few frames ahead of what the GPU is currently working on. If you read back data from the card every frame, you have to wait for the GPU to finish rendering the current frame before you can start work on the next one.
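Roughly, in OpenGL terms (a sketch only - render_scene here is a hypothetical stand-in for whatever issues the frame's draw calls):

    #include <GL/gl.h>
    #include <vector>

    void render_scene() { /* issue this frame's draw calls here */ }

    void draw_and_grab(int w, int h)
    {
        std::vector<unsigned char> frame(w * h * 4);
        render_scene();   // the GPU starts chewing on this frame's commands

        // glReadPixels cannot return until the GPU has finished every queued
        // command, so the CPU sits idle here instead of preparing frame N+1.
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, frame.data());
    }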
Re:Yes, but... (Score:5, Informative)
That is true for simple versions, but with methods moving towards image-based rendering you often have to pull the data back anyway. Then you can process the textures to produce better imposters - not necessarily just billboards.
Re: occlusion culling. People are using these methods today for realtime graphics (for example combinations of Green's HZB, or HOMs) even with the low readback speed. UNC's Gigawalk software is one published example (Google for it). Getting Z or alpha channel information back is the biggest hit, so these methods would be even more efficient, and so more widely applicable, with faster transfers. When you're rendering N million triangles per frame (UNC quote 82 million) you have to do this stuff to get realtime rendering.
So it is used for realtime graphics today - although mainly for heavy duty applications not games.
HTH
Re:Yes, but... (Score:2)
As to the future, everybody can see the difference in bus speed vs. GPU performance. Shaders are going to open up a lot of possibilities in the next few years - for all parts of the pipeline.
But at the end of the day performance is what counts. Today we need to do readbacks, tomorrow hopefully not. The fact that we might not need to do them in the future doesn't mean that people shouldn't make the most of what we have at the moment. Nothing lasts forever, everything changes - and in computer graphics - especially fast.
Re:Yes, but... (Score:3, Informative)
Re:Yes, but... (Score:2)
With big scenes (as I mentioned in another post) attempting to render occluded geometry is far more costly than stalling the pipe for a few ms. Trying to render a few million polys can also kill your performance.
GPU RAM is not CPU RAM - Film at 11 (Score:2)
It seems the lesson here is that proper captures from video RAM are slow. Yeah, it'd be nice to change that. But how many people really care? Given how long it took anyone to notice, I can't help but think that very very few people really care - and with good reason. Unless you're into making rendered movies, it's irrelevant.
Software issue? (Score:5, Informative)
In any event, there's another issue he doesn't really touch upon; while he mentions that a single frame at 1600x1200 in 32-bit colour is 7.5MB, he ignores the fact that a 30fps movie would require (30*7.5)=225MB per second uncompressed; you either have to have that much disk bandwidth or enough CPU grunt to compress it on the fly. I guess a dedicated MPEG encoder card could help, but your average box is going to have trouble keeping up with on-screen gibs, rocket trails, and blood splatters while encoding video at the same time.
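For what it's worth, the arithmetic spelled out (illustrative only; the per-frame figure comes out a touch higher than the 7.5MB above depending on whether you count decimal or binary megabytes):

    #include <cstdio>

    int main()
    {
        const double bytes_per_frame  = 1600.0 * 1200.0 * 4.0;   // 32-bit pixels
        const double bytes_per_second = bytes_per_frame * 30.0;  // 30 fps capture
        std::printf("%.1f MB per frame, %.0f MB/s uncompressed\n",
                    bytes_per_frame / 1e6, bytes_per_second / 1e6);
        // prints roughly: 7.7 MB per frame, 230 MB/s uncompressed
    }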
Not just a software issue (Score:2)
I asked nVidia at SIGGRAPH why image readback is so slow. They said, no motherboard they know of (not even their own) supports AGP Writes back to the system memory. Without that, you're limited to PCI bandwidth at best, far less than what the AGP spec allows.
However, we're not even seeing that. Results are showing 1% of what is possible. It's certainly a hardware issue, but there may be a lot of room to improve from the software side, too.
Re:Software issue? (Score:2)
When your pursuit is REAL TIME special effects/video manipulation, this problem has little to do with the disk, raid or no raid.
We just want to get the video out of the graphics accelerator and into a professional video I/O card. Aside from the fact that this greatly stresses the PCI bus, the problem with the AGP bus is worse.
The number of motherboards with both 64-bit PCI and AGP can be counted on one hand. While NTSC (uncompressed SDI) is around 270 Mb/s (a number which is certainly way below the peak bandwidth numbers), doing both in and out of the card as well as other IO (ethernet, serial, sound) pretty much ensures you'll have problems with latency.
Around 60% of our CPU usage is associated with blitting video out of the graphics accelerator.
It would be really nice if they got AGP to work.
At this point, we're just hoping that video cards will go over to PCI-X, whose hardware will have to work well for both input and output.
Re:Software issue? (Score:2)
Re:Software issue? (Score:2)
Re:Software issue? (Score:3, Interesting)
Actually, my scenario is more like:
I use my expensive GFX card to render shots for my incredibly innovative but poorly funded sci-fi flick. I want to grab each frame in perfect detail so it can be post-processed. The easiest and cheapest way to do this is to have the renderer save each frame as it's computed. Real-time is not an issue, just like it's not an issue with a raytracer or whatever.
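A minimal sketch of that "save each frame as it's computed" approach, assuming an OpenGL renderer (PPM only because it needs no library; the readback rate just determines how long you sit waiting):

    #include <GL/gl.h>
    #include <cstdio>
    #include <vector>

    // Dump the current framebuffer to a numbered PPM file after each frame.
    void save_frame(int width, int height, int frame_number)
    {
        std::vector<unsigned char> pixels(width * height * 3);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

        char name[64];
        std::snprintf(name, sizeof(name), "frame%05d.ppm", frame_number);
        FILE* f = std::fopen(name, "wb");
        if (!f) return;
        std::fprintf(f, "P6\n%d %d\n255\n", width, height);
        // OpenGL's origin is the bottom-left corner, so write the rows in reverse.
        for (int y = height - 1; y >= 0; --y)
            std::fwrite(&pixels[y * width * 3], 1, width * 3, f);
        std::fclose(f);
    }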
It had better become feasible if companies are going to want renderfarms based on the nv30/40/whatever. Having two separate machines per renderer would be pretty dodgy.
Re:Software issue? (Score:2)
Although having said that, I doubt even hardware accelerated rasterisers will be pushing 10MB/s of video data out in most cases, so..
Re:Software issue? (Score:2)
Seriously, by the time most people have nv30+-level GPU's, they'll have an enormous amount of rendering power that's quite comparable in most cases with raytracing. If you can render a scene on the GPU in a few seconds and have it look almost identical to a raytraced image that takes an hour, which do you think the average user will choose?
Worth keeping an eye on, anyway.
nobody asked! (Score:2, Interesting)
Why should they? Was anybody complaining till now? The well won't come to the horse; the horse has to go to the well to drink.
So unless a large number of people want it, nobody is going to mess around with a perfectly working driver.
And it is not a piece of cake. Recording its own renderings in software would be a bitch; the best way would be to provide an access point on the bus itself, though that would play havoc with board timings and noise.
In the end it will all come down to...
Imagine That (Score:5, Insightful)
In summary, who the fuck cares?
Re:Imagine That (Score:4, Insightful)
This is exactly the attitude that creates endless headaches mapping good concepts onto workable implementations, and results in systems becoming so convoluted by the time they work properly they are nearly impossible to maintain.
The principle of least surprise dictates that random orders of magnitude should not be sacrificed in your fundamental primitives.
It seems to me that if I spend $300 on my CPU and $600 on my GPU, I might want to be able to fetch back what the GPU creates. What kind of idiot puts their most powerful processor at the end of a one-way street?
There are endless reasons that could come up why this feature might need to be exploited. Just because you can't come up with them doesn't mean they don't exist. You are talking about 99.9 percent of your own creativity, which I assure you is a far sight less than the sum total of the creativity out there looking for cool new things to do.
It does make sense to consider cost/benefit here. The first observation is that we are talking about a baseline primitive (texture returned to system memory), and that we are looking to recover a rough factor of ten, not a rough factor of 10 percent.
In the video card industry, things are designed to hit the 90 percent point. These days the GPU industry rivals the CPU industry in dollar value. I simply can't believe the graphics card companies can't afford to have someone sit down and crank this up to 50% bus utilization. I suspect they could do this without even scratching their head.
I've had to use many primitives over the years designed by this guy or his second cousin. If he only knew how much of the pain he experiences as a computer user is the result of good people bending over backwards to deal with unsuspected, arbitrary constraints when they could have been polishing the product interface instead. But some people have no imagination for these things.
Perhaps... (Score:5, Insightful)
Maybe they're the kind of idiots who know most people just want the best possible OUTPUT for gaming possible, and so don't want to add any overhead in card performance - or even additional design time - that isn't related to gaming performance. You know, the idiots who make cards that get award after award from gaming companies, then write near-perfect drivers, port those drivers to linux, and let you overclock the card to your heart's content. Those sort of idiots. My, they're idiotic.
Nobody says, "buy a GeForce 4 Ti, make the next Toy Story." No, it's advertised as a gaming card, and that's what it's designed to do. If you want to do high-end video rendering, perhaps a gaming card isn't the best choice.
Re:Perhaps... (Score:2)
Re:Perhaps... (Score:4, Funny)
Really, as soon as the market for this sort of capture starts to grow, someone will have a hardware solution. The first ones will be cheesy: a connector into a separate PCI capture card, for example; but eventually a more reasonable method will become standard design.
To me, this is just the free market in action, working (more or less) as it should be.
* I know how much scanners cost. Think hyperbole.
High-end cards are slow too. (Score:2)
Why is it that a much more expensive Quadro card gives equally slow results? I've run a very similar test on an SGI 320 (shared-memory design) and it only gives 18.9 MB/s.
Anyone reading this with a Wildcat 6000-series? What does that bench at?
I think you just showed us the solution... (Score:2)
the kind of idiots who know most people just want the best possible OUTPUT for gaming, and so don't want to add any overhead in card performance - or even additional design time - that isn't related to gaming performance. You know, the idiots who make cards that get award after award from gaming companies, then write near-perfect drivers,
here it comes...
port those drivers to linux
Bingo!
The only problem is in the driver. Hardware's up to the job.
The driver has been ported to Linux.
So fix it!
Closed source? Reverse engineer it.
you're missing the point. (Score:3, Interesting)
Now, streaming real-time rendering images over the internet? Maybe not fullscreen stuff right now because of a multitude of hampering factors on affordable internet bandwidth which I won't name for clarity's sake, but for the limiting factor to be the internet itself and not the graphics card is still a significant step.
This would definitely be very beneficial to low-budget game developers and movie directors. We could very well see the return of the shareware boom (remember the early-to-mid '90s?) because of this.
Sure, only a small portion of the people who'd buy the cards would use the features the article talks about, but they'd be people who didn't have that capability before. Whenever this happens in any medium/artform/what-have-you, there is a tendency for a lot of experimental stuff to appear. I think we have some very interesting times ahead of us if someone gets these drivers written.
Re:Imagine That (Score:2)
- A.P.
It's not the cards (Score:5, Insightful)
As the quoted article clearly indicates, the problem lies with the drivers, not with the cards as the original poster intimates.
And the underlying reason is immediately understandable: after years of AGP cards and years of no one really raising this issue (except, now, developers of video-editing software who could benefit), it seems clear that there isn't much demand for this kind of performance. In the (near?) future there might be, but why should these companies spend money working on driver performance in areas like this when customers really only care about how well Quake will run?
When people are willing to pay for these features is when companies will pay to build the requisite drivers. And that is how it should be.
Re:It's not the cards (Score:2)
Alternately, they could publish full specs for their cards and provide the drivers as open source, and the few people who need the different features now could write them or have them written. This code could be contributed back to the card manufacturers and integrated in future driver releases, resulting in the feature being available for everyone. For example, ATI apparently didn't see enough market demand to provide 3d-accelerated Linux drivers for the Radeon 8500, but The Weather Channel did [linuxhardware.org], and now we'll all benefit.
Obviously this is a bit idealistic, but hey, we're talking about how it should be here. As I started writing this, no one had given a good answer to the "what about under Linux" question, but honestly (and despite the way that seems like a reflexive Slashdot response), that's the real solution to this "problem".
Re:It's not the cards (Score:4, Interesting)
But why? (Score:3, Interesting)
Re:But why? (Score:2)
Huh... (Score:4, Interesting)
That would mean that software like VNC would have much higher performance, if the drivers were updated, the way these guys are demanding. (Wouldn't it?)
That'd be fantastic!
Re:Huh... (Score:2)
The slowest card reads back at 8.376 MB/s OR 67.008 Mb/s OR about 2/3 the bandwidth available on a 10/100 network.
Network performance is the primary limitation to streaming frames.
The best cards would stream at 13.283 MB/s OR 106.264 Mb/s, exceeding the speed of 10/100 and only able to push 8 streams on perfect Gigabit Ethernet. Unfortunately, Gigabit Ethernet is not nearly as fast as advertised, ranging from as low as 280 Mb/s for generics to as high as 860 Mb/s for 3Com's best.
Re:VNC faster, not really. (Score:2)
Re:VNC faster, not really. (Score:2)
Allow you to peer over other's shoulders (Score:2)
Might this be intentional? (Score:4, Insightful)
This *is meant to be* a dumb question. Mod me down if I'm wrong; it's only Karma.
Is it me, or is the author smoking crack? (Score:4, Insightful)
1) Recording games/presentations/etc. The reason we don't do it is that if the system was capable of generating it in real time in the first place, it's far less space-intensive to record the parameters of the animation than the output. I.e., it's cheaper to say "Daemia fires rocket at these coordinates" than to record an MPEG of said rocket shot. AND, as hardware gets better, your recording does too.
Which leads me to point 2:
2) Since it's cheaper to capture realtime animation by capturing parameters, the only use of the capture function would be NON-realtime applications - i.e. getting your Geforce5TiUltraPro to render an extremely complex scene with incredible realism at 1 fps. That's not a typo. If we have 10MB/s back-into-the-PC bandwidth and each super high resolution shot takes 10MB on average, we have a wonderful solution working at 1 fps. Spend the fill rates on 600 passes for each pixel or something like that. Imagine the quality of the scenes! Capture the damn things and be glad you're not rendering at 1 frame per hour like they were 5 years ago.
Repeat after me - if you're rendering for posterity you don't need real time... That'll come eventually.
-JackAsh
DMCA (Score:2, Troll)
Well, duh (Score:2)
A stunning example of stating the obvious.
The hardcore 3D gamer market is small enough; I can't see manufacturers busting their humps to serve an even smaller one.
Re:Well, duh (Score:2)
One of the worst technical articles.... (Score:5, Interesting)
Re:One of the worst technical articles.... (Score:3, Interesting)
Why? You don't seem to follow up this opinion with any facts to back yourself up. Being able to do things like Interactive Multi-Pass Programmable Shading [nec.com] means that you can achieve near-PRman levels of graphics quality, using standard graphics hardware. But, of course, you need to capture that back to main memory for it to be any use. That hardly seems worthy of your ridicule.
"as someone else pointed out, transferring back high-res images would take up over 200MB - that's a quarter of your AGP bandwidth!"
Who are you to decide what's a good use case, and what's a bad one? This sounds to me like a case where several different people have presented reasonable requests for features - and you're shooting them down because you think what they want to do is "a joke". Since this can be fixed with a software update, I think it's a pretty reasonable request.
"you simple couldn't realise the full potential of the bandwidth without a lot of other (expensive?) hardware..."
Why on earth do you make that claim? Could you back that up with some facts? The article is claiming that it's a software issue, only. In fact, the test they put together sounds like a very reasonable one - they're not coming anywhere NEAR using the bandwidth in creating the images, and still, they're getting horrible bandwidth, downloading them. That doesn't sound like contention and timing - that simply sounds like bad, bad drivers.
"you would be *far* better off taking a stream of data from the DVI connector"
So, now, to solve the bandwidth issue, you're going to add a second card to the motherboard. What magical, ethereal bus bandwidth will this second card use? I think you need to re-examine your argument on this point.
"However when does that require 3d rendering to be taking place?"
This isn't just talking about 3d rendering. This is all screen capturing.
"There should be no contention and no reason why the AGP bus couldn't be utilised fully"
Wait a minute - now you're switching your argument?
"would the graphics companies make enough out of this to justify the effort?"
As everyone keeps saying, this sounds like it can be fixed in software. That's a pretty negligible cost for the vendors to spend.
"As for internet streaming - how many people have access to bandwidth fast enough for high quality, full screen video streaming?"
What about intranet? Lots of companies have intranet bandwidth fast enough for what you're talking about.
Enough said...
Re:One of the worst technical articles.... (Score:2)
I'm defending the use case, and you're attacking it. Why do you care? If your argument is that it's not "reasonable" to expect them to support it, based on the additional money they would make, that's fine - I don't necessarily disagree with your opinion about that.
What I'm saying is that there's both a need and a simple software solution. The vendors would do well to encourage this kind of feedback - it makes their products better.
I'm saying that it makes sense, at any given moment, to take advantage of the bandwidth that's there. If I render a scene, I expect that to be fast. If I then pause until I can capture the image back to main memory, I expect that to be fast, as well. 8 frames per second is agonizingly slow. In the case of near real-time, waiting 0.125 seconds for a screen capture is very frustrating. Especially when you can render the frame in something like 0.0125 seconds. It's not as though the AGP bus is doing both tasks at the exact same instant, as you seem to keep implying.
Intranet: So, because the article didn't mention something, I can't mention it, and it's not worthy of your contemplation? What? =)
In this specific use case, every vendor has crappy drivers. If you've got a better list of what their driver developers should be working on, by all means, post it. Until then, let them work on the reported issues and requested features - this sounds like a good one, to me.
Re:One of the worst technical articles.... (Score:2)
1600x1200x32bit = 7,680,000 bytes / image
24fps means 184,320,000 bytes / second back down the AGP bus -- and that's if you only want 24 fps. That's a lot of bytes moving around, especially when you have to be sending data back up to render future frames.
Maybe you could do some sort of hardware compression, but as other people have mentioned, video cards are already large enough, make too much heat, use too much power, and are expensive enough that I don't want to be adding additional complexity and cost to them for what a few people want to do. If there are people who want this, they should pay for the R&D and production costs of these specialized chips.
Do the sums (Score:2, Insightful)
Uncompressed, even just 1600x1200x24-bit is about 6MB per frame. At, say, 70 frames/sec that's about 420MB a second to store to disk.
So what exactly are you going to do with that much data? If you had 512MB of RAM you could hold about one second's worth.
Forget a hard disk; even a 3-disk RAID doesn't have that sustained I/O rate.
Re:Do the sums (Score:2)
oh wait. that's megabits. we're talking megaBYTES. fuxor. sounds like we've got a decade or so before we have consumer-level storage options at this level. crazy.
btw, if i had mod points currently, i'd mod you up.
Capture of streamed video? (Score:3, Insightful)
No worries about Macrovision, badly controlled overlays, or screwy playback software.
How about this (Score:2)
Does this really have to be over-engineered?
Ray Tracing on the GPU (Score:5, Interesting)
Our ray intersection algorithm implemented on the GPU (an "old" Radeon 8500) was able to intersect 114M rays per second. This was loads faster than the best CPU implementations, which could handle between 20 and 40 million intersections per second.
But when we tried to implement a ray tracer based on this, and an efficient one that didn't intersect every ray with every triangle, the readback rate killed us. Our execution times slowed down to the low end of the fastest CPU implementations.
And the readback delay seems to be completely due to the drivers, which apparently still use the old PCI-bus code. If the drivers could use the full potential of the AGP bus, our ray tracer could approach twice the speed of the best CPU ray tracers.
Hmm sounds like a call to arms... (Score:2)
If the drivers are truly the only issue and not the hardware, wouldn't this be a great opportunity for the XF86 guys, and whoever writes the particular tdfx modules, to optimize Linux first?
"No, Mr. Valenti, sir, you don't understand, we have to use Linux. It's the only game out there for our CG budget. Windows can't do RAM write-back with decent FPSes, and commodity GPUs are 20 times cheaper..."
Wouldn't that suck for them... at least it would be amusing.
I've seen this firsthand... (Score:2)
NVIDIA says that if you ask for the contents of the framebuffer in a call to glReadPixels and you ask for it in the same pixel format it's stored in, you won't be really disappointed. If, however, you ask for that same region of the framebuffer in another format, you're screwed. (So, if your framebuffer is 8-8-8-8 RGBA, and you ask for luminance or 10-10-10-2 or something else odd, you aren't going to be pleased with the performance.)
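In code, the difference is nothing more than the format/type arguments to glReadPixels. A sketch (whether RGBA really is the fast path depends on the driver and the actual framebuffer layout):

    #include <GL/gl.h>
    #include <vector>

    void grab_examples(int w, int h)
    {
        // Matches a typical 8-8-8-8 RGBA framebuffer: the driver can, in
        // principle, hand the data straight over.
        std::vector<unsigned char> rgba(w * h * 4);
        glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());

        // Asking for anything else -- luminance, odd packings like 10-10-10-2 --
        // forces a per-pixel software conversion on top of the slow transfer.
        std::vector<unsigned char> lum(w * h);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, w, h, GL_LUMINANCE, GL_UNSIGNED_BYTE, lum.data());
    }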
This isn't, by the way, just a render-movies-on-your-PC issue. Lots of scientific computing, visualization, etc., applications render with OpenGL and then grab the framebuffer to store a result. This throughput issue is significant considering that, for many applications, what was an enormous data set 10 years ago is not such a big data set now. Like another poster said, this issue is one of the ones that still ties people to SGI.
While 99% of your other concerns might be dealt with, there are still lingering problems like this one that keep some people from moving to commodity hardware.
Re:I've seen this firsthand... (Score:2)
My suspicion is that the raw bit-shovelling across the bus is more likely to be the problem.
Re:I've seen this firsthand... (Score:2)
What I'm referring to is the smaller applications, the ones that don't justify the purchase of an Onyx and don't warrant significant time on one. Around here there is a small but growing group of projects that started 10 years ago on a fleet of Onyxes. The data sets haven't grown significantly in that time, and what would have choked a PC and required an Onyx 10 years ago isn't that big a deal now. While I realize this is still a small set, the number grows every couple of months.
Would you agree that smaller, especially custom applications, might eventually move away from SGI?
Re:I've seen this firsthand... (Score:2)
And you almost pinned down what I do
Faster readback has been requested for years (Score:3, Informative)
Here are a few other important but non-Quake techniques that are driven by readback speeds. I'll go into more detail on the first for illustration purposes.
High-quality real-time occlusion culling -- many techniques render the scene quickly by using a unique color tag per object or polygon and then read back the framebuffer to figure out everything that was visible (and how many pixels for each) for a final high-quality pass. If HW drivers would even just implement the standard glHistogram functions (which essentially compress the framebuffer before readback), this would become practical. NVidia adds their NVOcclusion extension, but it's limited in how many objects at a time you can test, it's very asynchronous, and it requires depth sorting on the CPU to make it most useful. The render-color technique does not. Yet HW makers are spending lots of money adding custom HW to do z-occlusion when a simple driver-based software technique may be easier.
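A rough sketch of that render-color technique (draw_object_geometry is a hypothetical stand-in for however you submit an object's triangles):

    #include <GL/gl.h>
    #include <set>
    #include <vector>

    void draw_object_geometry(int) { /* submit object i's triangles here */ }

    // Pass 1: render every object in a unique flat color, read the frame back,
    // and collect which ids actually ended up visible.
    std::set<int> find_visible_objects(int object_count, int w, int h)
    {
        glEnable(GL_DEPTH_TEST);
        glDisable(GL_LIGHTING);
        glDisable(GL_TEXTURE_2D);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        for (int i = 0; i < object_count; ++i) {
            int id = i + 1;                       // 0 is reserved for the background
            glColor3ub(id & 0xFF, (id >> 8) & 0xFF, (id >> 16) & 0xFF);
            draw_object_geometry(i);
        }

        std::vector<unsigned char> pix(w * h * 3);
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, pix.data());  // the slow part

        std::set<int> visible;
        for (int p = 0; p < w * h; ++p) {
            int id = pix[p*3] | (pix[p*3+1] << 8) | (pix[p*3+2] << 16);
            if (id) visible.insert(id - 1);
        }
        return visible;   // pass 2 then renders only these objects at full quality
    }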
Dynamic Reflection Maps -- for simple, reflective surfaces -- Requires background rendering from multiple POVs (generally six 90 degree views) and caching these. Even if you can cache a small set of maps in AGP memory, you want fast async readback if you have a large fairly static scene and you're roaming around.
Real-time radiosity -- similar to above, but needs more CPU processing of the returned images and possibly depth maps (reading back the depth buffer is often even more expensive than the color).
Real-time ray tracing -- the better quality approaches need fast readback to store intermediate results (due to recursion, etc..). With floating point framebuffers and good vertex/pixel shaders, ray-tracing becomes possible, but not yet practical. I believe
So there's a lot more to this issue than just making movies of your games. Faster, better graphics would be possible. So why isn't this a priority?
------------ cyranose@realityprime.com
Not the SW (Score:2)
The article claims that the drivers, not the HW, are causing the performance problem. Based on my conversations with a premier graphics programmer and some x86 experts, I don't believe that it is this simple. In particular, note that XFree86 2D, which uses its own drivers, also has pathetic readback rates.
I barely understand the technical details, but it seems like there are some serious misfeatures in the way that the AGP bus interacts with CPUs and caches on both Intel and AMD during readback; it is going to be hard for card vendors to fix this problem (even if they decide to care). It may be that a new bus and/or new CPU glue will be needed for high-readback-rate applications.
You're supposed to render to an offscreen buffer (Score:3, Insightful)
OpenGL supports reading back the screen buffer mostly so that the OpenGL validation suite can check the rendering accuracy. For that, it doesn't have to be efficient. And if you read back in some format other than the actual structure of the framebuffer, every pixel gets converted in software and performance will be awful.
This article reads like it was written by an overclocker, not a graphics developer.
Machinima could use faster transfer rates (Score:2)
The nascent art of machinima [machinima.com], which involves using 3D game engines to make desktop movies, could benefit from a practical way to record game output faster. (It would also be nice to export directly to .AVI format for editing in Premiere or Avid, but that's another wishlist.)
This is old news; Intel AGP spec was short sighted (Score:4, Interesting)
If you read the AGP spec, which was written by Intel, you will note that it is based on the PCI 2.0 spec. The PCI 2.0 spec is for a 32-bit, 33 MHz symmetric bus, which gives you a max transfer rate of 132 MB per second. The AGP spec is for an asymmetric bus, 33 MHz read and 66+ MHz write. But writes were optimized at the expense of reads, since Intel was pushing video with NO onboard texture memory, and who would want to read back the image in real time anyway, right?!?
Yes, I am sure that drivers do have some effect, but the AGP spec is the first bottleneck. On an OpenGL newsgroup it was reported last year that someone tested two identical video cards, the only difference being that one was AGP and the other PCI. The read performance of the PCI version was several times faster than the AGP version.
Of course, some video cards are also to blame because of the frame buffer format they use, but that is another story...
Finally a reason for NVidia to OpenSource (Score:2)
But if they had, the drivers would have been updated to scratch whoever's itch needed to be scratched - in this case, the bandwidth from card to memory.
One of the benefits of open source is that even seldom-used features get enhanced, so that when demand for them suddenly appears, the features are already in place.
AGP is effectively a one way bus, by design. (Score:2)
AGP was designed by Intel as an ad hoc solution to combat the problem of transferring large textures to a graphics card over the PCI bus. It's an extension to PCI, essentially, allowing fast, pipelined, ONE-WAY transfers. That should be repeated. AGP is PCI, with a different connector, and a bunch of extra pins and logic for pipelined transfers from system memory to the card. In fact, without "fast writes" enabled, CPU -> graphics card writes are plain PCI; only transfers requested BY THE CARD are accelerated.
There is nothing new about this. It's in the spec.
It is NOT meant to be a two-way bus. It was never designed for offloading cinematic rendering to the card for later recovery. AGP came out around 1997, before NVIDIA or ATI had shaders in hardware. PC rendering was nowhere near photorealistic at the time; that was the domain of software raytracers. Without AGP, video cards would seriously hog the PCI bus with their texture streaming. That is ALL that AGP fixes.
The real solution is to come up with a new bus. I tend to like unified memory architecture designs, but they have disadvantages as well. The real trouble is getting the PC industry to agree on anything; if ATI came up with a new bus standard, for instance, I doubt NVIDIA or Matrox would adopt it, not wishing to appear to submit to their competitor.
-John
Sheesh (Score:2)
Lather, rinse, repeat.
Won't be cheap, but someone could almost certainly whip one up with a Xilinx FPGA. I know they make one with a built-in TMDS receiver, which is what you'd need to decode the DVI signal.
Misleading headline, much. (Score:2)
"AGP Texture Download Problem" implies that there's a problem downloading textures via AGP from main memory. But it's not about texture transfers at all, it's about transfers of rendered frames back to the system (in the opposite direction).
Hey, 'Taco... You're the high point of the
Use the Digital output. (Score:2)
...output, dragging that final image back up to the input is kinda like running up a downward moving escalator... you *can* do it - but you probably shouldn't.
It seems to me that if you are rendering movies with this technology, you are either a small operation who can probably afford to wait (say) 10x longer than realtime to do it - or you are some big production house who can afford to do better.
In those cases, why not simply stick a frame-grabber onto the digital output?
Heck, you can even get around the 8-bits-per-component problem by using a fragment shader to render the high-order bits to red, the middle bits to green, and the low-order bits to blue - then do three passes to render the red component of your image at 24 bits per pixel, then the green, then the blue.
Using the downstream performance to your advantage is the way to go.
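The CPU side of that reassembly is trivial - a sketch, assuming one ordinary 8-bit RGB grab per pass as described above:

    #include <cstdint>
    #include <vector>

    // After one pass for (say) the red component, each pixel's high, middle and
    // low 8 bits arrive in the R, G and B channels of a plain 8-bit grab.
    // Stitch them back into 24-bit red values; repeat for green and blue.
    std::vector<uint32_t> reassemble_component(const std::vector<unsigned char>& grab,
                                               int pixel_count)
    {
        std::vector<uint32_t> out(pixel_count);
        for (int i = 0; i < pixel_count; ++i) {
            uint32_t hi  = grab[i * 3 + 0];   // high-order bits were written to red
            uint32_t mid = grab[i * 3 + 1];   // middle bits to green
            uint32_t lo  = grab[i * 3 + 2];   // low bits to blue
            out[i] = (hi << 16) | (mid << 8) | lo;
        }
        return out;
    }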
The title of this article (which talks about "Texture Download") is most confusing, because that's a term usually used to describe the process of taking a texture map out of the CPU and stuffing it into the graphics card's texture memory.
This is more like "Screen Dump Upload".
Re:Hmm. (Score:3, Insightful)
To be honest though, most people buy a GF4 to play games, not capture video.
Re:Hmm. (Score:4, Informative)
What about HDTV? (Score:2)
Also, having the ability to render faster means that you can do it faster than real time. If you are working to a deadline in a TV news studio, that might be a real advantage (think late-breaking news where a story has to be put together during a commercial break).
Re:Hmm. (Score:3, Insightful)
So really, I guess that I meant to say that I fail to see the relevance of the article. It is kind of silly, actually, to even want to record real-time game footage with this hardware. Just pipe the video output to a real capture card on another machine. Problem solved.
Re:Hmm. (Score:2)
Of course, if you're editing video on the cheap, you'd probably go for something slightly more dedicated like the Matrox RT2500 anyway, which is not that much more expensive.
Re:Hmm. (Score:2)
Capturing what you do in the average FPS would be silly, but what if you're doing 3D rendering with your graphics card? What you propose would be like ripping CDs by plugging a CD player into your soundcard's line-in jack. What the article envisions would be more like ripping CDs with EAC...you eliminate the digital-to-analog-to-digital conversion.
Re:Hmm. (Score:2)
a VCR to the Svideo output (Score:2)
Surely simply connecting the S-Video output to a VCR while playing Quake on the monitor should do the trick.
Re:Hmm. (Score:2)
Not everyone is using their video card to play Quake. =) (Although, I do that, too.)
Re:Hmm. (Score:2)
The AGP bus can't supply data/textures fast enough to a modern GPU/VPU. Both the bus and main memory are way too slow. Some business PCs use shared video and main memory. It works OK for most 2D apps, and will even allow you to play DVDs or streamed video. For games, forget it.
- Ost
Re:128 bit colour? (Score:2)
Yes, the human eye can't go beyond that, but any decent processor can. And the image should be processed after being grabbed from the screen - for example DivX-encoded, or something.
If you don't know why scanners grab images at more than 8 bits/channel, then...
Re:128 bit colour? (Score:2)
Last time I checked, my eye was a human one.
Re:128 bit colour? (Score:2, Informative)
First off, there is no such thing as 32-bit color. It's 24-bit color with either a padding octet or an alpha channel.
Second, 256 levels is enough that, provided a good monitor, you can make do quite well.
Third, flamebait much?
Tom
Re:128 bit colour? (Score:3, Insightful)
Second, the poster wants to do more than "make do". You can also make do with 16 colors. And no, 256 levels is not enough if you're compositing many images together, or if your data has a high dynamic range (which would require more gamma range than 256 levels can provide without serious banding).
Third, pot. Kettle. Black.
Re:128 bit colour? (Score:2)
Second, for those people that DO need to blend, they often need to blend 100s of images. You don't need to get out to 1000s of images to see these effects. Just because current standards for MPEG and JPEG don't allow more, that doesn't mean it's useless. And I'm talking more about generating PRman (RenderMan)-style graphics. One approach is to render many, many passes - decomposing the math down into 100s (1000s) of images. It adds up to visual artifacts, very quickly, unless you have extended bit depths.
Third, saying the first poster was posting flamebait - I was saying that what you were doing was a case of "the pot calling the kettle black." I was accusing you of posting flamebait. =)
Re:128 bit colour? (Score:2)
If you want to display a gradient from say, dark blue to light blue, you have quite a few shades of blue to choose from. More than 1024, that's for sure, especially in 32 bit color. But your monitor can only display 1024 vertical lines, each being a different shade. (Depending on your resolution, blah, blah, blah.)
Therefore, you get banding. Go ahead, use 64 or 128 bit color. It'll help, in the 'it won't help at all' sense.
Re:128 bit colour? (Score:4, Informative)
Hmmm. Close, but still not quite right. Think of the colour space as a cube with R, G and B as its three axes. In 32-bit colour you have 8 bits per colour plane, giving you a cube that is 256 x 256 x 256. Any gradient from one point on the cube to another is going to be a maximum of about 443 steps long (if my maths isn't freaked out - that's the distance between two opposite corners of the cube). Add some messing about with the various quantisation levels this line passes through, and you get definite banding on all but the lowest resolution displays...
Re:128 bit colour? (Score:2)
Yes, I'm pretty sure you're more or less right on the 443, though I would have expressed it as ~400, due to the fact that I don't like niggling with triangles.
The thing is, you get more shades of blue than just the 443. As 255 RGB values, shades of red can be
255 0 0
255 1 0
255 1 1
255 0 1
255 2 0
255 2 1
255 2 2
255 0 2
255 1 2
(I say red now because I put the 255's first, and don't want to write it again.)
And so on. Each resulting in a different shade of blue.
*I think* anyway. We're wandering off the pier of stuff I know, into the stuff I think I might be able to figure out.
So, I think you'd get more than 443, and have more blue than monitor lines, still.
Re:128 bit colour? (Score:2)
Also remember that the figure of 443 is the theoretical maximum number that can be achieved. Most interpolations will be from two points in the colour cube that are much closer together and will therefore result in correspondingly worse artifacts.
Re:128 bit colour? (Score:2)
Re:128 bit colour? (Score:3, Informative)
The distance between opposite corners is about 443, but the diagonal distance between color points is 1.732, so you still have 256 points in the gradient.
Think about it this way, the gradient from (0,0,0) to (255,255,255) passes through (1,1,1), (2,2,2), etc. Exactly 256 points.
-
Re:128 bit colour? (Score:2)
Re:128 bit colour? (Score:2)
However, there's NO REASON I can tell why you'd actually want to grab 128-bit color rendered frames! They could be dithered to 24 or 32 bit without losing anything visible.
Re:128 bit colour? (Score:3, Informative)
You typically composite and re-composite layer after layer to create decent effects; it's not a one-shot thing. Certainly professional video runs at ~48-bit for film work.
Simon
Re:128 bit colour? (Score:5, Interesting)
And boards are starting to ship with 128-bit IEEE floating point buffers.
Essentially, you're right - a human can't tell the difference beyond 24-bit on a given image. But if 100 images were composited together (very likely, to support something like RenderMan-style rendering in hardware), 24 bits is nowhere near enough - you'd get all sorts of accumulation error.
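A toy illustration of that accumulation error (made-up numbers; it just compares keeping float precision against rounding to 8 bits after each of 100 compositing passes):

    #include <cmath>
    #include <cstdio>

    int main()
    {
        // Composite 100 layers, each contributing 1% of a mid-grey value.
        const int    passes = 100;
        const double layer  = 0.5 / passes;    // each pass adds 0.005

        double exact  = 0.0;                   // float buffer: keep full precision
        double banded = 0.0;                   // 8-bit buffer: quantize every pass
        for (int i = 0; i < passes; ++i) {
            exact  += layer;
            banded  = std::floor((banded + layer) * 255.0 + 0.5) / 255.0;
        }
        std::printf("float: %.4f   8-bit per pass: %.4f\n", exact, banded);
        // Each pass's ~1.3-level contribution gets rounded to a whole level, and
        // the error compounds: the 8-bit result ends up near 0.39 instead of 0.5.
    }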
Professional GFX processing (Score:3, Informative)
Re:128 bit colour? (Score:2)
Re:128 bit colour? (Score:2)
Re:Drivers, not hardware (Score:2)
Yeah, I know it's fun to bash Microsoft and hint that your OSOS (Open Source Operating System) of choice would do better, but the drivers in question here are not Microsoft drivers. They're vendor-supplied drivers which would probably use 90% common code and have 99% of the same problems on any OS.
Re:Drivers, not hardware (Score:2)
However, Linux's open-source nature at least gives people a chance to tweak the system to provide that advantage if it isn't there already; it may cause some interesting developments in Linux graphics.
Re:Why? Where? How? (Score:3, Interesting)
- The company hasn't released the game yet, but wants to release a video of gameplay to the public. Current methods would require implementing a "save game as it goes" and then a "replay, in offline rendering mode at a steady frame rate, and record results" pass. Or, you could save it at reduced quality if you had video out on your computer and video in on another computer.. but that's just ridiculous, imo.
- Likewise, you have the game, and your friend hasn't purchased it yet, and lives too far away to just take a glance at it..
- You're having a graphical glitch in a game with your particular card that can't be easily illustrated with screenshots. Think how much easier it would be to just send a video clip than having to send a half-dozen screenshots and a wordy explanation, where they still might not believe you.
- You have a Radeon9700, he has a Geforce2. You want to show him how different Doom III looks on your card, as opposed to his card, in real time.
Etc..
Re:Actually it's unlikely because... (Score:2)
all of the cards had the same problem.
Re: (Score:2)
Re:Just Drivers (Score:2)
Yes, but are you downloading textures/frames from the card to main memory?
The issue here is whether it is possible to use the programmable GPU to render frames for use in animation projects. The various bandwidth problems appear to be associated with drivers optimized for immediate display.
With an open source driver, the few individuals running Linux-based rendering farms could, theoretically, relieve the CPU of some of its load. With closed source drivers, you will have to rely on nVidia optimizing their drivers for this kind of minority application.