
AMD Releases 3D Programming Documentation

Posted by kdawson
from the fosdem-fossdoc dept.
Michael Larabel writes "With the Free Open Source Developers' European Meeting (FOSDEM) starting today, where John Bridgman of AMD will be addressing the X.Org developers, AMD has this morning released their 3D programming documentation. This information covers not only the recent R500 series, but also goes back in detail to the R300/400 series. It is another of AMD's open source documentation offerings, an effort they began at the X Developer Summit 2007 by releasing 900 pages of basic documentation. Phoronix has a detailed analysis of what is being offered with today's information, as well as information on sample code being released soon. This documentation will allow open source 3D/OpenGL work to get underway on ATI's newer graphics cards."


  • Re:Makes me ask (Score:4, Interesting)

    by bersl2 (689221) on Saturday February 23, 2008 @08:38PM (#22530760) Journal
    fglrx is probably a technical and legal mess that would take more effort to clean up than rewriting the drivers from good documentation would.
  • Yeeha!!!! (Score:5, Interesting)

    by Anonymous Coward on Saturday February 23, 2008 @09:10PM (#22531012)
    I'm the owner of 5 boxes, all with Nvidia graphics cards.
    I've been using only Nvidia cards since 2000 because they had
    the best 3D graphics cards for my Linux box. I was willing to deal
    with binary drivers because there was nothing else available
    in my price range (loooow budget) for 3D graphics.

    But.... over the years I would get burned every now and then
    when
    1) I would upgrade the kernel and then the X server would get borked
    because the Nvidia kernel module didn't match the new kernel, or

    2) Some funky memory leak in the binary Nvidia module would lock
    up my box hard because of some damn NvAGP vs. agpgart setting or
    some funky memory speed setting. Of course, this didn't happen with
    every Nvidia driver, so of course I wouldn't bother writing down
    what it took to fix the problem.

    Finally when I switched to Debian Linux in fall 2004 and had
    beautiful apt-get/synaptic take care of all of my needs I thought
    I was done ... until I found out that Nvidia doesn't time its
    driver releases with kernel releases so if I wanted to upgrade
    my kernel painlessly with apt-get/synaptic I would have to
    wait for Nvidia to get off its damn rocking chair, put down its
    damn banjo, and release a driver to go with the newer kernel.

    The final straw for me was when all five of my Nvidia cards were
    now listed in the "legacy driver" section. Can you guess what
    "legacy driver" means about Nvidia fixing their closed source
    driver? Yeah, that's exactly the point.

    That's when I started looking around for open source 3d drivers.
    I know about Nouveau for Nvidia, but frankly I'm too pissed off
    about Nvidia to consider them. ATI had a long history of treating
    Linux customers like second-class scum. Intel, on the other hand,
    earned the "golden throne" by providing full open source for their
    graphics chipsets. So when I went looking for a dual-core
    64-bit CPU + 3D graphics chipset, the only viable choice was Intel,
    which I was happy to do business with.

    Now that ATI has decided to come forth with 3D documentation, I'm
    willing to give an Intel/ATI or AMD/ATI combo serious consideration.

    Way to go ATI!!!!
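The failure mode this commenter describes, a kernel upgrade leaving the out-of-tree Nvidia module behind, can be checked for up front. A minimal shell sketch; the module name "nvidia" is just the one from the story (any out-of-tree module works the same way), and `modinfo` is assumed to be available:

```shell
#!/bin/sh
# Compare the kernel a module was built for against the running kernel.

running="$(uname -r)"   # kernel currently booted

# The vermagic field of a module records the kernel it was built against.
module_kernel() {
    modinfo "$1" 2>/dev/null | awk '/^vermagic:/ { print $2 }'
}

built_for="$(module_kernel nvidia)"

if [ -n "$built_for" ] && [ "$built_for" != "$running" ]; then
    echo "mismatch: nvidia module built for $built_for, running $running"
else
    echo "no mismatch detected (or module not installed)"
fi
```

Running something like this before rebooting into a new kernel would have flagged the "borked X server" situation in advance.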

     
  • by Junta (36770) on Saturday February 23, 2008 @09:24PM (#22531116)
    I see that as a reason not to open source the existing drivers, but not a reason to preclude releasing the details the open source community needs to produce an open driver with its own shader programs, which may be lower performance but good enough for default operation in a lot of distributions.

    I find an interesting perspective being hinted at by AMD in this context: that they provide a common open source layer at the low level, and plug in their proprietary 'good stuff' as a replacement for higher-layer things. As an example, they feel their PowerPlay stuff isn't top secret, so putting it at a layer where everyone can bang on it and improve it is ideal for everyone. Same with things like display handling: AMD and nVidia both do bizarre things requiring proprietary tools to configure display hotplug, instead of using the full xrandr feature set, which has grown to include display hot-plug.

    In general, there are *many* things AMD has historically gotten wrong in their drivers, mostly with respect to power management, suspend, and stability with arbitrary kernels/X servers. One thing they do better than the open source community is delivering good 3D performance, if all the underlying stuff happens to line up. If they can outsource the basic but potentially widely varying work to the community, and their driver architecture lets them leverage that, it would do wonders. And by giving open source 3D developers a chance to create a full stack, it's the best of all worlds. I would be delighted to see the open source 3D stack surpass the proprietary stack, but I wonder what patents stand in the way of that being permitted...
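The xrandr feature set mentioned above is scriptable, which is the point of preferring it over proprietary configuration tools. A small sketch of handling display hotplug with plain xrandr; the parsing is demonstrated on a captured sample of `xrandr --query` output so the snippet is self-contained, and the output names are hypothetical:

```shell
#!/bin/sh
# List connected outputs from xrandr-style --query output.

sample='Screen 0: minimum 320 x 200, current 1280 x 800, maximum 8192 x 8192
LVDS-1 connected 1280x800+0+0 (normal left inverted) 261mm x 163mm
VGA-1 disconnected (normal left inverted right x axis y axis)'

# Keep lines whose second field is "connected" and print the output name.
connected_outputs() {
    awk '$2 == "connected" { print $1 }'
}

echo "$sample" | connected_outputs
# On a live X session:
#   xrandr --query | connected_outputs
#   xrandr --output VGA-1 --auto --right-of LVDS-1   # enable a hotplugged display
```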
  • Re:Yeeha!!!! (Score:3, Interesting)

    by LWATCDR (28044) on Saturday February 23, 2008 @10:14PM (#22531458) Homepage Journal
    That isn't the issue. The interfaces are pretty stable; otherwise you couldn't just recompile most drivers when a new kernel comes out. What is missing is a stable *binary* interface. I am all for a binary interface. The developers don't want one, for what I feel are bad reasons. But they are the devs, and they get to make that call even if I don't like it.
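The "just recompile" step being described uses the kernel's kbuild machinery for out-of-tree modules. A hedged sketch; the module source directory is hypothetical, and kernel headers for the running kernel are assumed to be installed:

```shell
#!/bin/sh
# Rebuild an out-of-tree module against the headers of the running kernel.
# After a kernel upgrade, rerunning this rebuilds against the new tree;
# a stable binary interface would make even this step unnecessary.

KDIR="/lib/modules/$(uname -r)/build"   # kbuild tree for the running kernel

if [ -d "$KDIR" ]; then
    # M= points kbuild at the external module's source directory.
    make -C "$KDIR" M="$PWD" modules || echo "build failed (no module sources here?)"
else
    echo "kernel headers not installed at $KDIR"
fi
```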
  • Re:Too late (Score:3, Interesting)

    by Solra Bizna (716281) on Sunday February 24, 2008 @02:44AM (#22532984) Homepage Journal

    I've been lamenting for years that the R300 card in my G4 (now a G5, long story) would never get specs. I figured they'd start releasing only specs for R500 and up. So when I read this story, I LITERALLY jumped for joy. I'm so happy that I'm switching from nVidia to ATI in my next custom Linux box.

    -:sigma.SB

  • by Jah-Wren Ryel (80510) on Sunday February 24, 2008 @03:26AM (#22533150)

    I hope we can get some sort of media acceleration beyond the stale old XVideo & XV-MC.
    You won't get it, and the reason is DRM.

    ATI's cards that have H.264 acceleration (and all kinds of other good stuff like smart de-interlacing, all collectively branded as "UVD") are unlikely to ever have the specs for UVD disclosed, because they integrated the good stuff with the bad stuff (DRM) and are afraid that exposing how to use the good stuff in UVD would also expose how to circumvent the bad stuff on Microsoft Windows systems.

    So, once again, those DRM apologists who say that DRM is purely optional, that if you don't want to use it it won't hurt you, are proven wrong.

    On the plus side, the next gen cards will have the DRM broken out into a separate part of the chip so that they can feel safe in publishing the specs for good video stuff while leaving the bad stuff locked away.

    One of many such statements by ATI/AMD. [phoronix.com]
  • by z0M6 (1103593) on Sunday February 24, 2008 @11:27AM (#22535102)
    Actually, R600 documentation is expected in a few months. Compared to how things have been in the past, that can hardly be called playing catch-up.

    Using the GPU to decode H.264 etc. is something I see as quite possible, but it is likely something we will have to implement ourselves (which I think we are capable of).
  • by Junta (36770) on Sunday February 24, 2008 @12:27PM (#22535512)

    Uh huh. And just how many CEO's and CTO's have been fired for using ATI or Nvidia's binary blob? I suspect the number's between zero and your imagination.
    He was suggesting AMD's or Intel's CEO, not 'client' companies. I doubt it would get to C*O level, but I could see leadership being shuffled out of responsibility if they failed, for example, to execute the right strategy for getting GPUs sold into the HPC market for GPU computing while the competitor succeeded. I.e., if someone takes the open source specs and designs a set of really kick-ass math libraries that cream anything done with nVidia's CUDA, that could move a lot of AMD GPUs while nVidia rushes to catch up. I doubt anyone would be fired, though.

    The total number of hardware devices released with a binary blob, and still growing, is greater than the total number that have open source drivers.
    Huh? I can count two families where a binary blob is the only option for full function: nVidia and AMD. This story hypothetically paves the way for the AMD half to go away, leaving only nVidia for now (rumor has it nVidia will follow suit). There exist some fakeraid cards with binary-only drivers that use the same format as the firmware support, but overwhelmingly this is skipped in favor of pure software RAID. There are a few wireless chipsets without Linux drivers at all, though ndiswrapper has brought over the Windows drivers, so I guess you could call those binary blobs. Even counting all that, you still have countless network adapters, graphics chips (current hardware is mostly Intel on that front), wireless adapters, storage controllers, audio devices, and USB devices which in no way require a binary blob. The binary blob portion of Linux support is a vast minority.
