Talking with Timothy Miller
barryman_5000 writes "Timothy Miller has written plenty of drivers for the open source effort, and now KernelTrap has an interview with him on his newest effort for an open graphics card. He talks about his background, his struggles with secretive 3D vendors and more."
duh (Score:4, Interesting)
This is because the graphics card market depends on vast amounts of R&D and producing a product that is technically superior to everything else out there - essentially staying continually ahead of the game as your competitors try to catch up.
As much as OSS advocates would not like to hear it, opening up the graphics card specifications to all and sundry would be the equivalent of pouring your R&D down the pan. Selling support for graphics cards doesn't keep you in business - making a product that kicks the ass of your competitors (and them having difficulty working out how to beat it) does.
iD-eal project for Carmack (Score:4, Interesting)
JC should stick some of his $ behind this project instead of making rockets.
All we need now is a petition for its support! (Score:2, Interesting)
Yes. nVidia are probably the most helpful to the community, yet have 2 sets of Linux drivers - the OSS ones and the closed (official) ones.
Q: Does the project have an official name?
Timothy Miller: Depends on what you mean by "official". We're calling ourselves the "Open Graphics Project",
cool, now I'll be able to play the Doom 3 engine with OpenGL and OpenGP ..
All geometry and vertex processing will be done in software in the host computer.
This is a bit disappointing. Ever played a game in S/W mode? Nightmare - last century. At least it's only part of the processing, though.
Keep in mind that no graphics card on the market can fully support Doom III, with all features turned on, at a high framerate. So the fact that a card like this couldn't handle it shouldn't surprise anyone.
True. I still turned up all my settings for Doom 3 though and just played at a lower fps on an nVidia GeForce FX 5700 256. It was playable (on Linux with the latest nVidia Linux drivers).
Anyways, I'm getting one as soon as it comes out, if it comes out!!
Re:gonna party like it's 1999 (Score:2, Interesting)
FOSS (Score:3, Interesting)
These shoddy nVidia drivers really bug me, and it would be nice to see a hardware-accelerated OpenGL X environment sometime in the next 5 years (before Longhorn), and that is never going to happen unless we can get some real hardware support.
A question not asked in the interview (Score:3, Interesting)
Re:gonna party like it's 1999 (Score:1, Interesting)
Intel's approach is the same as is used here - their chipsets don't do any geometry or vertex processing, they're essentially a pixel unit. Intel run all of the vertex processing on the CPU.
Of course that's also in Intel's interest, as it allows them to get extra value from their CPUs. Geometry/vertex processing is perfect for parallelisation through SSE-type extensions.
In DirectX speak, the vertex shaders are run on the CPU but the pixel shaders are run on the GPU - the new Grantsdale chipset supports pixel shader model 2.0 and below, IIRC.
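To make that concrete, here's a rough sketch of what a software vertex stage on the CPU can look like - a plain 4x4 matrix transform over an array of vertices using SSE intrinsics. All the names and types here are made up for illustration; this isn't anyone's actual driver code.

#include <stddef.h>
#include <xmmintrin.h>

/* Illustrative only: a 4x4 matrix stored as four column vectors. */
typedef struct { __m128 col[4]; } Mat4;

/* out[i] = m * in[i] for each (x, y, z, w) vertex. */
static void transform_vertices(const Mat4 *m, const __m128 *in,
                               __m128 *out, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        __m128 v = in[i];
        /* Broadcast each component, then accumulate column * component. */
        __m128 x = _mm_shuffle_ps(v, v, _MM_SHUFFLE(0, 0, 0, 0));
        __m128 y = _mm_shuffle_ps(v, v, _MM_SHUFFLE(1, 1, 1, 1));
        __m128 z = _mm_shuffle_ps(v, v, _MM_SHUFFLE(2, 2, 2, 2));
        __m128 w = _mm_shuffle_ps(v, v, _MM_SHUFFLE(3, 3, 3, 3));
        __m128 r = _mm_mul_ps(m->col[0], x);
        r = _mm_add_ps(r, _mm_mul_ps(m->col[1], y));
        r = _mm_add_ps(r, _mm_mul_ps(m->col[2], z));
        r = _mm_add_ps(r, _mm_mul_ps(m->col[3], w));
        out[i] = r;
    }
}

The point is that this kind of loop vectorises nicely, which is why pushing the vertex stage onto the host CPU isn't as mad as it sounds.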
What's more surprising is that in many cases you won't see a performance hit using the Grantsdale over a full GPU with a vertex unit.
5 years ago this was a great step forward (about when I started commercial work on graphics hardware programming): T&L, as it was called back then by nVidia, who gave it to us first in the GeForce series of cards, supported by DirectX 7.0.
However, these days quality demands that we light and do many, many operations at the pixel level rather than at the vertex level, and it's much cheaper to have a bump map than 1000 extra verts no matter which way you look at it. Therefore most modern algorithms will hit the pixel units hard, while the vertex work is trivial in comparison.
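As a toy illustration of why the pixel side carries the cost: this is roughly the math that runs once per pixel when you light from a normal map - the normal comes from the texture, not from extra geometry, so the mesh can stay coarse. Plain C with made-up types, just to show where the work sits.

#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 vec3_normalize(Vec3 v)
{
    float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    Vec3 r = { v.x / len, v.y / len, v.z / len };
    return r;
}

/* Diffuse (N dot L) intensity for one pixel. The normal is read from the
 * bump/normal map, so the detail costs a texture lookup, not 1000 extra verts. */
static float shade_pixel(Vec3 normal_from_map, Vec3 light_dir)
{
    Vec3 n = vec3_normalize(normal_from_map);
    Vec3 l = vec3_normalize(light_dir);
    float ndotl = n.x * l.x + n.y * l.y + n.z * l.z;
    return ndotl > 0.0f ? ndotl : 0.0f;  /* clamp light from behind */
}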
With programmable shader models, the vertex units and pixel units on GPUs are becoming more and more similar. I'm pretty sure that within a generation (2 at most) there'll just be shader units that can be switched from pixel to geometry use on the fly, with a clever memory architecture to make the pipeline feed back into itself rather than a standard waterfall-type flow.
Now, from TFA, Tim was talking about stalled CPUs and ways of using those cycles. I can tell you that when developing a video game these days (on any platform - PS2, Xbox, but especially PC) you're very unlikely to get stalls waiting for the vertex units, and if you do, rearranging your sequence of rendering will overcome that. You're much more likely to get stalls waiting for the pixel units. Using techniques like deferred rendering and having a nice big command buffer means these CPU cycles can be used for useful work - so why not do the vertex processing on the CPU then? In my current project I think we throw in some extra AI processing where we can during such stalls, so we would see a slight performance hit from using a software pipeline. However, I think it's rare for anyone other than games developers to really squeeze so much out of the system, and many games developers don't bother if they don't need to.
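Here's a very rough sketch of the pattern I mean - submit the queued draw commands, then spend the would-be stall on useful CPU work (software vertex processing, AI ticks) instead of blocking. Every function here is a stand-in; no real engine or driver API is implied.

#include <stdbool.h>
#include <stddef.h>

typedef struct { unsigned op; unsigned args[4]; } DrawCmd;

typedef struct {
    DrawCmd cmds[4096];
    size_t  count;
} CommandBuffer;

/* Stubs standing in for whatever the engine/driver actually provides. */
static bool gpu_busy(void)                        { return false; }
static void gpu_submit(const CommandBuffer *cb)   { (void)cb; }
static void soft_transform_next_batch(void)       { /* CPU vertex work */ }
static void run_some_ai(void)                     { /* one slice of AI */ }

static void render_frame(CommandBuffer *cb)
{
    gpu_submit(cb);   /* kick off the queued draw calls */
    cb->count = 0;

    /* While the pixel units chew through the buffer, fill the stall with
     * useful CPU work rather than spinning or blocking. */
    while (gpu_busy()) {
        soft_transform_next_batch();
        run_some_ai();
    }
}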
In fact, for most scenes I've tried, the performance of the pixel-only Grantsdale is almost as good as a middle-to-top-end ATi or nVidia part.
I can see no reason for the Open Graphics Project board to perform significantly differently to the Intel chipsets - although what they're targeting is a little behind the Grantsdale, since they don't support programmable shader models, and that's their top end.
Essentially what I'm saying is that the world has moved on: if you want 5-year-old features, buy an old GPU or one which has to be backward compatible; otherwise, if you want what's important now, get a board that is designed for today's requirements - not what they were 5 years ago.
This open board is very sensibly designed to perform well in today's market with today's needs, whilst being a manageable project in terms of scope and duration, and I for one applaud it!
I would be interested to know how well Intel supports its chips under Linux and how open they are about their chips' specs. It'd also be interesting to know their stance on their existing software vertex libraries (heavily x86-optimised, but probably never going to be ported anyway) being made available to plug into other driver sets as a more generic vertex-processing library.
I'll buy one (Score:3, Interesting)
- accelerate all the eye candy I enjoy
- make things like alpha transparency and video rendering fast and smooth and not impact system performance
- allow me to manipulate 3D plots or complex CAD objects in three space in real time smoothly
then it does what I need from a graphics card. If it can make bzflag run smoothly, so much the better. And I suspect a middling card with excellent drivers will stack up OK for normal work against a really fast card with iffy drivers. Plus, if this is a success they might make better cards in the future.
Guys, let's make this the standard card for non-gaming open source boxes. Especially if it's a quality piece of work. That counts for quite a lot, too - solid hardware is a blessing if you don't have the $$ to casually replace it.