Audio Processing on Your Graphics Card?

edsarkiss writes "BionicFX has announced Audio Video EXchange (AVEX), a technology that transforms real-time audio into video and performs audio effect processing on the GPU of your NVIDIA 3D video card, the latest of which are apparently capable of more than 40 gigaflops of processing power compared to less than 6 gigaflops on Intel and AMD CPUs." Another reader points out a story on Tom's Hardware.
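For readers wondering what "transforms real-time audio into video" means in practice: the usual GPGPU trick is to pack audio samples into the color channels of a texture, run a pixel shader over it, and read the result back. A minimal sketch of the packing step in numpy (purely illustrative; BionicFX has not published AVEX's actual scheme):

    import numpy as np

    def audio_to_texture(samples, width=256):
        # Pack mono float32 audio into a 4-channel (RGBA) float texture.
        # Each texel holds 4 consecutive samples; the GPU then treats the
        # buffer as a width x height image and runs a pixel shader over it.
        texels = -(-len(samples) // 4)        # ceiling division
        height = -(-texels // width)
        tex = np.zeros(width * height * 4, dtype=np.float32)
        tex[:len(samples)] = samples
        return tex.reshape(height, width, 4)

    def texture_to_audio(tex, n_samples):
        # Flatten the processed texture back into a 1-D sample stream.
        return tex.reshape(-1)[:n_samples]

The readback half of that round trip is exactly the bottleneck raised further down in the discussion.
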
  • by WIAKywbfatw ( 307557 ) on Friday September 03, 2004 @02:17PM (#10151314) Journal
    The amount of silicon on an average GPU overtook the amount of silicon on the average CPU some time ago.

    Having all that processing power available to do more than just shift pixels makes perfect sense. I'm just surprised that nobody thought of doing it sooner.
    • by SoTuA ( 683507 ) on Friday September 03, 2004 @02:27PM (#10151445)
      Nobody thought of it sooner?

      Emmm, what [psu.edu] about [psu.edu] this [psu.edu], for example?

      • Nobody thought of it sooner?

        About 4-5 years ago there were some 3dfx commercials that had an engineer walking around the plant talking about how powerful their new processor was and how it could be used to "save the world." Then over the loudspeaker comes the message "Scrap that, we are going to use it for games instead," and next we see the engineers all crowded around a computer and one screams "Blow his freaking head off!"

        Ad Critic used to have them before they went for profit.

    • by Anonymous Coward
      I'm just surprised that nobody thought of doing it sooner.

      It's easy to be surprised when you're wrong: BrookGPU: General Purpose Programming on GPUs [slashdot.org] December 2003.
    • by ahsile ( 187881 ) on Friday September 03, 2004 @02:31PM (#10151501) Homepage Journal
      I believe slashdot has already [slashdot.org] covered [slashdot.org] stories [slashdot.org] about using the programmable pipeline on modern GPUs for non-graphics functions. They're built to crunch vectors/math. Why not?
    • by thpr ( 786837 ) on Friday September 03, 2004 @02:49PM (#10151715)
      The amount of silicon on an average GPU overtook the amount of silicon on the average CPU some time ago.

      And another post:

      How can the price range be so low when the processing power is claimed to be so many times faster than Intel chips?

      First, silicon area doesn't necessarily mean performance. The whole reason that IBM, AMD and Intel are building multi-core chips [slashdot.org] is that so much of the area in a modern microprocessor is spent on workarounds for different structural hazards rather than on real work. The GPUs are huge because they are parallel mathematical computation engines. On a FLOP per sq. mm basis, they are a LOT more efficient than a single-core CPU could hope to be.

      As WIAKywbfatw points out, GPUs became more powerful than CPUs (on a FLOP basis) a decade or more ago. This was the whole reason Intel created the AGP port - to prevent the GPU from becoming the center of the computer (it was a huge threat to their business).

      Today, silicon is more and more about customization... on a FLOP basis, the chips in HD digital TVs have nearly the performance of the latest P4 - but at MUCH less cost... because they are less flexible (a LOT less flexible). Their design is optimized for single-precision floating point performance... You can't use that power for a long-running simulation ("scientific computing") - only for graphics, where single precision is still orders of magnitude more precise than the monitor can display.



      • GPUs were NEVER a threat to CPUs. They only became usable for anything but graphics with the introduction of vertex and pixel shaders, e.g. with the R100 or NV10 chips. Really usable are only chips with PS 2.0, and even those can rarely achieve "better than CPU" performance, even with tuned algorithms (the main problems are memory access fragmentation breaking the caching strategies and causing pipeline stalls (wasting 100s of cycles), and multipass overhead, because many implementations need 1000s of passes).
        10 years
      • As WIAKywbfatw points out, GPUs became more powerful than CPUs (on a FLOP basis) a decade or more ago. This was the whole reason Intel created the AGP port - to prevent the GPU from becoming the center of the the computer (it was a huge threat to their business).

        NVIDIA used the term GPU [nvidia.com] to refer to its fixed-function T&L-capable NV10 chip, which was released on August 31, 1999 as the GeForce 256.

        The AGP 1.0 standard dates to 1996, and it was intended to provide fast bandwidth for textures and video.
    • by nurb432 ( 527695 )
    It's been thought of before. However, comparing the performance of a GPU and a CPU purely by clock speed or MIPS is sort of comparing apples to oranges...

    GPUs are special purpose... CPUs are not...
  • by MP3Chuck ( 652277 ) on Friday September 03, 2004 @02:18PM (#10151315) Homepage Journal
    While most audio workstations may not have great video cards at the current time, I'd go spend $500 on a video card that'd take 90% of the workload off my processor while mixing ... it's cheaper than a lot of equipment out there.

    And the ability to get a few frags in while the band is taking a break isn't too bad either! ;)
    • Definitely! It would also make me more inclined to spend more money on a video card. I can't see spending $300 on a top-of-the-line card for an imperceptible increase in FPS, but it becomes an important piece if it makes my whole PC run faster.

    • by Midnight Thunder ( 17205 ) * on Friday September 03, 2004 @03:11PM (#10151932) Homepage Journal
      Add to the mix the PCI Express architecture and you are no longer limited to one graphics card, as you are with AGP. So at this point you could have two or more graphics cards doing audio processing.
    • Code coprocessor (Score:3, Insightful)

      by GCP ( 122438 )
      It seems likely that we'll soon see high octane media coprocessors as standard equipment on PCs. Before long, all PCs will be "audio workstations", as well as video workstations, photo processors, movie theaters, two-way video telephones, game boxes, etc.--a lot of it simultaneously.

      Oh, wait. They already are, but they're just trying to do most of this stuff with an x86 chip. Silly. It's not inconceivable that the future of PCs is a block of powerful media processors where the x86 chip will end up being th
    • by javaxman ( 705658 ) on Friday September 03, 2004 @03:25PM (#10152094) Journal
      From the Tom's Hardware article:
      So far Cann cannot take as much performance away from the GPU as he would like. "Right now, getting the data back from the video card is very slow, so the overall performance isn't even close to the theoretical max of the card. I am hoping that the PCI Express architecture will resolve this. This will mean more instances of effects running at higher sample rates," he said.

      so it appears that there may really be a problem here... a GPU will normally do a bunch of calculations, then the raster goes *out* to the monitor, not *back* to the bus... I can see how getting data back out to the bus might be an issue. A "real" DSP/audio card would certainly be better, and they aren't *all* as expensive as the original article would have you believe... a quick google found at least one [mtlc.net] decent-looking DSP card for ~$500 out there, and I'm sure there are others, probably cheaper (the quoted price is for a card *and* a stack of software), if you looked around a bit... if you're considering plunking down the cash for a PCI Express machine and a good GPU, you probably have ~$500 for a good DSP card too, and a special-purpose solution *designed* for the purpose at hand is almost always going to be better than repurposing a *different* special-purpose product.

      Did that make sense? What I'm trying to say is that you'd be much better off buying an actual DSP audio card than buying two GPUs. That'd just be silly. This repurposed GPU stuff is just for folks unwilling to buy an extra card, but who have a nice GPU already.

    • by Narcocide ( 102829 ) on Friday September 03, 2004 @04:11PM (#10152531) Homepage
      I mean seriously... what would you ever need that much audio processing power for? Distributed key cracking, however...
  • How can the price range be so low when the processing power is claimed to be so many times faster than Intel chips?
  • by ron_ivi ( 607351 ) <sdotno@cheapcomp ... s.com minus poet> on Friday September 03, 2004 @02:18PM (#10151332)
    I love this. It's like the old NeXT computers that had a general-purpose coprocessor for their audio (a DSP, wasn't it?). Moving away from the 'triangle-only' acceleration will be a great advance for all sorts of computing needs.

    Personally, I'd like to see search algorithms (perhaps data-search, perhaps even video search) move to such a co-processor.

    • That's the Motorola DSP56001 [angelfire.com] you're talking about.

      You could get some sweet real-time full-bitrate audio effects out of that puppy. It was the only time I ever found assembly worth writing.

      It actually seems a *little* odd to me that you'd use a chip designed strictly with video in mind to do something like audio processing, but why not? Actually... well, is there a reason you might not? What happens when your machine asks the GPU to do some 'normal' graphics work when you have this stuff installed? What's y

    • Personally, I'd like to see search algorithms (perhaps data-search, perhaps even video search) move to such a co-processor.

      Oh, I'm sure you'll be able to buy a GoogleCard(TM) for your machine in the next few years...
  • Hmmm (Score:5, Funny)

    by Neil Blender ( 555885 ) <neilblender@gmail.com> on Friday September 03, 2004 @02:18PM (#10151336)
    Pretty soon my graphics card is going to do more, cost more, heat up more, be louder and use more electricity than the rest of my computer combined.
    • Pretty soon my graphics card is going to do more, cost more, heat up more, be louder and use more electricity than the rest of my computer combined.

      That is until they move the CPU onto the graphics card... make the graphics card the motherboard... and start making peripherals to take the workload off your all-in-one motherboard...

      Lather rinse repeat ... digistyle!

      • This is called... (Score:2, Informative)

        by katz ( 36161 )
        This phenomenon is commonly known as the "Wheel of Reincarnation". Diverting functionality to specialized components, and then folding it back onto the CPU has been going on since the 60s.

        A more detailed description of the WoR is available here [cap-lore.com].
    • Re:Hmmm (Score:2, Funny)

      by kfg ( 145172 )
      In the future the "graphics card" will be referred to as the "motherboard" and you'll plug a "computer card" into it.

      KFG
    • Pretty soon my graphics card is going to do more, cost more, heat up more, be louder and use more electricity than the rest of my computer combined.

      It's called an NVIDIA GeForce 6800 Ultra [compusa.com].

    • Add to that, cell phone, video/camera, wireless networking, can-opener, GPS receiver, tricorder....
    • Re:Hmmm (Score:3, Interesting)

      Primates have a big-ass visual cortex [susx.ac.uk]. In fact, up to 50% of the brain is involved in processing visual information. It's therefore unsurprising to see such investment in video I/O for computers.

      In other words: the interfaces of a computer are (often) intended to provide immersive experiences for their users. Computer users are humans, so you would expect the processing power dedicated to each component of I/O to reflect the discernment of humans in their corresponding sense.

      In yet more words: if

  • made possible by Doom (Score:3, Interesting)

    by dirvish ( 574948 ) <(dirvish) (at) (foundnews.com)> on Friday September 03, 2004 @02:20PM (#10151356) Homepage Journal
    Brought to you thanks to Doom:
    In January 1993, John Carmack sent a press release announcing a "technical revolution in PC programming" on 386sx processors - a real-time, 256-color 3D game that let you play simultaneously with three other people. Doom was born, and the desktop video game industry took off, creating an impetus that pushed video and stream processors to the point they are today. Here's to you, John Carmack! The repurposing of a GPU for digital audio processing would not be possible without your passion and influence.
    • Okay, I'll agree that the Carmack is mighty impressive, but I don't see how you can start giving credit like that. It just devalues the work of countless other excellent programmers to give such long and tenuous trace-backs.
  • by hoggoth ( 414195 ) on Friday September 03, 2004 @02:20PM (#10151363) Journal
    It sounds like we should buy a computer with a GPU on the motherboard and plug in an expansion card with a CPU on it.

    • Problem with this idea is that graphics processors get much of their power from specialization. There are certain things they are good for and certain things they aren't so good for-- the CPU is still necessary as a "gatekeeper," to farm out tasks to different subsystems.

      I've thought the same thing though. :)
      • The point of the grandparent post was that GPUs that do audio processing are no longer specialized.
        • The power of GPUs comes from their specialization at performing certain computational tasks, such as manipulation of large matrices and bitwise operations on huge datasets. These tasks are not limited to pixel-pushing; many kinds of task (such as audio processing) can be visualized as operations on graphical bitmaps. For instance, a reverb filter is very similar to a directional blur in Photoshop.

          There are also many optimizing tricks involving GPUs that may lend themselves to certain tasks more
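          The blur comparison is literal: both a reverb and a directional blur are convolutions, differing only in the dimensionality of the signal and the shape of the kernel. A toy numpy sketch with a made-up impulse response (not drawn from any project mentioned here):

            import numpy as np

            fs = 44100                      # sample rate in Hz
            t = np.arange(fs) / fs          # one second of time
            # Made-up impulse response: decaying noise, roughly a small room
            ir = np.random.randn(fs) * np.exp(-6.0 * t)

            def convolution_reverb(dry, ir):
                # Reverb = convolve the dry signal with a room's impulse
                # response. A directional blur is the same operation on a
                # 2-D grid of pixels instead of a 1-D stream of samples.
                return np.convolve(dry, ir)[:len(dry)]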
    • by keiferb ( 267153 ) on Friday September 03, 2004 @02:36PM (#10151570) Homepage
      Unfortunately, it's not quite that simple. While the GPU can do many more operations per second than a CPU, think of the two as doctors.

      <analogy accuracy="flawed at best">
      The CPU's a generalist and can treat most patients in a fair amount of time. The GPU is a specialist, however. If you know any of these in real life, you know that they can do one thing, and one thing only. In this case, it's graphics. You ask them to do something else, like gardening, and they look at you like you're from outer space.
      </analogy>
    • In a way it's already been done. The National Center for Supercomputing Applications at the University of Illinois has created a "supercomputer" by connecting 70 PlayStation 2 consoles together.

      The console's CPU is not being used at all, only the graphics co-processor is being used.

      http://www.simulationinformation.com/entertainment2.html

  • Very cool. I've wondered for a while why we weren't offloading some DSP effects to video cards or somesuch. Now it seems we finally are. I wonder if this will eventually lead to distributed rendering of some heavy duty audio effects...
  • by cephyn ( 461066 ) on Friday September 03, 2004 @02:21PM (#10151372) Homepage
    dude...the sound man...i can SEE it...sound and sight man, its all the same....far out man....
  • From what I think I understand, video cards specialize in floating point operations, for handling 3-D objects and all that stuff, but sound processing is all about integers. I thought that was why Intel's MMX technology didn't really do much, because it only helped sound. (No one else really needed to do multiple adds in a single clock cycle.)
    • From what I think I understand... sound processing is all about integers.

      In your CD player, maybe. CDs represent audio using 16-bit integer samples. Currently, professional audio is often recorded at 24-bit integer, and then immediately converted to 32-bit floating point.

      32-bit FP audio has a much larger dynamic range. If you use a 16-bit audio stream, raising the volume can cause clipping (if the values exceed 2^15), and lowering the volume will lose information (the same information is represented using fewer of the available bits).
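      To put numbers on that, a minimal numpy sketch (the sample values are made up for illustration):

        import numpy as np

        peak = 30000                     # a loud 16-bit sample (max is 32767)

        # Boosting by 6 dB (x2) in 16-bit pins the value at full scale,
        # audibly clipping the waveform:
        boosted_i = np.int16(min(peak * 2, 32767))   # -> 32767, squared off

        # The same boost in 32-bit float sails past "full scale" losslessly;
        # it only has to come back under 1.0 before the final D/A conversion:
        loud_f = np.float32(peak / 32768.0)          # 0.9155...
        boosted_f = loud_f * 2.0                     # 1.8310..., fully intact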

  • by mustangdavis ( 583344 ) on Friday September 03, 2004 @02:22PM (#10151381) Homepage Journal


    Now I'm going to have to find a motherboard that supports 2 video cards so I can play Doom 3 on it ....

    (one for video, one for sound) :)

    These innovations are getting pricey!!! :)
  • I have an 8500 All-In-Wonder DV, which has Dolby 5.1 digital audio and FireWire sitting on the AGP card. Still a POS, but a neat idea.
  • GPGPU.org (Score:3, Informative)

    by thatguymike ( 598339 ) <mhouston@nOsPAM.graphics.stanford.edu> on Friday September 03, 2004 @02:23PM (#10151400) Homepage
    This kind of stuff has been talked about and done in the research community for quite some time now. See http://www.gpgpu.org. While audio is an interesting idea, FFTs and genomics are already running on GPUs. Yes, GPUs can be fast, but they can also be a pain to program. Take a look at the Stanford Brook for GPUs project for a nice, elegant way to program for GPUs. http://brook.sourceforge.net
  • Compared to the capability of just six GFlops of a typical CPU, Nvidia's chips can reach more than 40 GFlops, according to Cann.

    "can reach more than"... "Capable of more than"...

    So what's the real-world performance?

    This is like those radio commercials where a store sells candy bars for $0.30, and then trumpets "up to 70% off everything in the store!"
  • I'm only two weeks into my "Computer Architecture" class so I could use some clarification. Why does it matter which processor processes what? It's all bits, right? Why can't you process any data on either your CPU or your GPU? I'm counting on the folks here at /. to help me out here so that I don't have to open my textbook!
    • A GPU is typically very good at matrix processing. CPUs are more general purpose. Lunchtime, or I'd say more ;)
    • I believe you can process whatever you want on either, but different processing units are geared towards different things.

      I'm not really skilled in this area, but I believe the CPU is more like a jack of all trades, whereas a GPU is specialized to just do the math involved for graphics
    • Correct me if I am wrong, but if you process graphics on your CPU, that's the "software rendering" that you see as an option in some games.
    • Certain processor designs can be better suited to different tasks. For example, there was a point when the Alpha beat the shit out of Intel's offerings in integer performance, in part because of a significant clock speed gap, but it had rather superior floating point performance even if you extrapolated Intel's performance up to match the Alpha's clock rate. (Aside from issues like performance not scaling perfectly with clock rate, which you should learn about early this semester if this is a real computer architecture class.)
    • A processor has specific pathways for specific operations. A very very very basic RISC processor may not even have a special path for a multiply op (oversimplifying here; I don't think there is such a processor). So, by definition, a multiply operation would have to use the addition pathway, many many times.

      Graphics processors have very very specialized ops -- operations which are hardware pathways. If you take a RISC processor and tell it to rotate a matrix of numbers, then you have to reduce the problem
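      To make that reduction concrete, here is textbook shift-and-add multiplication, the kind of loop a minimal ALU without a hardware multiplier is stuck with (a generic sketch, not any particular ISA):

        def multiply_shift_add(a, b):
            # Multiply two non-negative integers using only add, shift and
            # compare -- one addition per set bit of b.
            product = 0
            while b:
                if b & 1:
                    product += a     # low bit set: add shifted multiplicand
                a <<= 1              # a = a * 2
                b >>= 1              # move to the next bit of b
            return product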
    • Wow! Thanks for all the responses!
    • Why does it matter which processor processes what? It's all bits, right? Why can't you process any data on either your CPU or your GPU?

      There is the Sh language [sourceforge.net] that tries to balance workload between the CPU and the GPU.

      However, the CPU is a general purpose processor. The GPU is evolving into a general purpose parallel processor. That means the CPU can do this, then do that, then do something else very well. The GPU can do the exact same thing many times very well. So each processor has its pros.

      As

  • Great. (Score:5, Funny)

    by Power Everywhere ( 778645 ) on Friday September 03, 2004 @02:26PM (#10151439) Homepage
    Now let's see some video rendering on our audio cards.
  • I liked this technology better when it was called "Geiss."
  • Just a thought, but could this mean there will be a movement towards natural event-based synchronization between graphics and audio events in games given a common processor? I realize this was never the case before, but with 40 gigaflops of audio processing capability this must become an attractive option.

    M
  • Jesus (Score:4, Informative)

    by iamdrscience ( 541136 ) on Friday September 03, 2004 @02:29PM (#10151476) Homepage
    Anytime there is an article talking about the power of your graphics card's GPU or the phenomenal processing power of DSPs, the discussion is always inundated with people asking "Hey, why aren't we using these instead of our regular slow processors?", thinking they've come up with some sort of brilliant idea. For the thousandth time, people, things just don't work that way. DSPs achieve their high processing speeds by being very good at a few select things, while not really being general-purpose devices. If you want to know more of the specific details, do a Google search; there's a ton of information about DSPs on the web, and I'm sure there are plenty of pages that explicitly address the difference between CPUs, GPUs and DSPs.
  • by carcosa30 ( 235579 ) on Friday September 03, 2004 @02:33PM (#10151523)
    People are doing extremely interesting things with modern graphics hardware, including fluid dynamics simulation, cloud simulation and multiplication of large matrices.

    A good site for information on it is www.gpgpu.org, where there are perhaps 200 different projects related to general purpose GFX card use.

    As the capabilities of graphics cards expand and become more esoteric, perhaps game developers will begin to eschew the use of certain graphics features in favor of using those parts of the pipeline to perform generic calculations, such as physics.

    Perhaps there are also ways of performing such calculations and using the results as decorative graphics, i.e. when we're showing decorative ripples on water, perhaps those ripples are artifacts of some calculation that is being used elsewhere in the game.
  • Coprocessor? (Score:3, Interesting)

    by phorm ( 591458 ) on Friday September 03, 2004 @02:34PM (#10151540) Journal
    From what I can glean of the article, it is basically making use of your video card's GPU as a co-processor. It doesn't state that the GPU is better at processing audio, just that in many instances it is mostly idle and thus available.

    The GPU is of course heavily optimized (over a regular CPU) for video, and perhaps some of those optimizations would carry over to audio as well. In the future, if such things pick up, one might well see more "multimedia" cards incorporating a mixed GPU/SPU, or perhaps dual processors.
  • by account_deleted ( 4530225 ) on Friday September 03, 2004 @02:35PM (#10151562)
    Comment removed based on user account deletion
  • by acomj ( 20611 ) on Friday September 03, 2004 @02:40PM (#10151615) Homepage
    Apple is creating libraries that use many graphics cards' acceleration for image processing. The demo was real-time effects on video.

    Supports ATI and NVIDIA (the lib figures out if you have a usable graphics card, else it just uses the CPU)
    http://www.apple.com/macosx/tiger/core.html [apple.com]
  • Goes to show... (Score:4, Interesting)

    by JediDan ( 214076 ) on Friday September 03, 2004 @02:43PM (#10151657)
    what dedicated hardware can do. It's a proven fact, and anyone who works with embedded systems can testify to the performance. We need to stop flaunting 3+ gigahertz processors using archaic instruction sets and focus on routing data to hardware that can handle the task.

    If the CPU were nothing but a router that directed data to dedicated hardware (network cards, a GPU with an integrated physics engine, hard disk controller, etc.), we could get away from inefficient execution tied up in an architecture that 99% of the market depends on.

    Computers were built with modularity in mind. We need to get back to those roots as it's not only a good idea, but the only way we're going to get past some performance barriers.
  • This is old (Score:5, Informative)

    by michaelmalak ( 91262 ) <michael@michaelmalak.com> on Friday September 03, 2004 @02:44PM (#10151671) Homepage
    Tom Rokicki computed the Game of Life using the Amiga's blitter. March 17, 1987 Usenet post [google.com].
  • by freeze128 ( 544774 ) on Friday September 03, 2004 @02:50PM (#10151720)
    Does anyone have a screenshot?

    This would probably look best when viewed with a Viewsonic monitor.
  • Short Memory... (Score:4, Insightful)

    by Duncan3 ( 10537 ) on Friday September 03, 2004 @02:51PM (#10151730) Homepage
    *chuckles* I love this, people are saying how old this tech is by talking about projects from a year ago.

    The concept of using a CPU to do I/O and other "OS stuff" for a vector processor is a wee bit older than that.

    Maybe you remember the Cray 1? Or all those i860s we used to use on cards back in the 286 days?

    Those who forget history are doomed to post on /. about how cool their "new" toys are.
  • by ZackSchil ( 560462 ) on Friday September 03, 2004 @02:53PM (#10151757)
    Apple's Core Image and Core Video technology allows you to write your own processing algorithms to be run on the video card. I can imagine something like this being used to process audio in Mac OS X.
  • by adisakp ( 705706 ) on Friday September 03, 2004 @02:56PM (#10151790) Journal
    I work in the video games industry. Using a graphics processor for audio is not new. The Nintendo 64 had a "Reality Co-Processor" graphics chip that also processed sound by uploading new microcode.

    If you think about it, things like bilinear/trilinear filtering are perfect for resampling, and graphic blend ops like add/subtract/modulate are great for audio mixing and can be done with even older fixed-function hardware and a bit of programming effort. The programmability of new hardware with pixel and vertex shaders improves the generic applications of the GPU by orders of magnitude and allows significantly more non-graphics algorithms to be implemented.
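    The correspondence is direct enough to write down: an additive blend op is a mix, and bilinear filtering is linear interpolation in two dimensions instead of one. A numpy sketch of the 1-D audio analogues (illustrative only):

      import numpy as np

      def mix(a, b, gain_a=0.5, gain_b=0.5):
          # Audio mixing == the GPU's weighted-add blend op.
          return gain_a * a + gain_b * b

      def resample_linear(x, ratio):
          # Sample-rate conversion by linear interpolation -- the 1-D
          # analogue of the texture unit's bilinear filtering.
          n_out = int(len(x) * ratio)
          pos = np.arange(n_out) / ratio              # fractional positions
          i = np.minimum(pos.astype(int), len(x) - 2)
          frac = pos - i
          return (1.0 - frac) * x[i] + frac * x[i + 1]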
  • by Beardo the Bearded ( 321478 ) on Friday September 03, 2004 @02:59PM (#10151823)
    If you overclock it, does that mean your mp3s all start to sound like Alvin and the Chipmunks?

    Wait - what happens to the Chipmunk mp3s?
    • If you overclock it, does that mean your mp3s all start to sound like Alvin and the Chipmunks?

      Wait - what happens to the Chipmunk mp3s?


      You won't hear anything, but your dog will be really pissed off.
  • Too bad there isn't an API for this so that people could try to extend this to other applications. How about using the video card to make MP3 files, or a souped-up version of WinRAR that uses your video card to compress/decompress faster?
  • by swb ( 14022 ) on Friday September 03, 2004 @03:21PM (#10152045)
    There's a blurb on the 6 series of GeForce cards that claims they can do video transcoding; since an hour of 2-pass-encoded MPEG2 video takes my P4-3.2c about 2.5 hours, I'd love to get at least 1x real-time encoding speed (for 2-pass encodes) or at least 2x real-time (for 1-pass encodes).

    Anyone know any more about this? Audio is nice, but it's not nearly as CPU-intensive as video transcoding.
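    For what it's worth, the arithmetic behind those targets, using the figures quoted above:

      source_minutes = 60.0
      encode_minutes = 150.0                      # 2.5 hours on the P4
      speed = source_minutes / encode_minutes     # 0.4x real time today
      speedup_for_1x = 1.0 / speed                # 2.5x needed for 1x (2-pass)
      speedup_for_2x = 2.0 / speed                # 5x needed for 2x (1-pass)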
  • SETI/Folding (Score:3, Interesting)

    by Remlik ( 654872 ) on Friday September 03, 2004 @03:28PM (#10152122) Homepage
    So can the SETI guys use spare GPU cycles? I know my work machine uses less video than CPU.
  • Latency? (Score:3, Insightful)

    by Shawn Parr ( 712602 ) <parr@@@shawnparr...com> on Friday September 03, 2004 @03:55PM (#10152366) Homepage Journal
    With the conversions happening outside the GPU/Card to convert audio to video data and back, one important question has not been addressed . . .

    What kind of latency does this pose?

    There are currently less expensive audio DSP cards on the market (the UAD-1 by Universal Audio/Kind of Loud [uaudio.com], and the TC PowerCore [tcelectronic.com]), and nowadays they don't cost much more than a GPU. However, on both of those cards the latency is pretty harsh. Many audio systems will compensate for the latency in some instances, although some can't/don't compensate for bussed effects. That is unfortunate, as reverb is the greatest reason to use a card like this, and it is typically a bus effect, so the extra delay incurred acts to set a huge, usually inappropriate predelay.

    Of course there will always be those willing to work around the potential latency issues, however that defeats the purpose that they state on their site (no more freezing/bouncing/yelling at the machine).

    This is exactly why Pro Tools TDM systems are still in vogue for higher-end studios and producers. The TDM hardware does just about everything as offloaded DSP, therefore the latency is extremely low, fixed, and documented. You can look up (command-click on the track volume display, actually) the amount of latency on a track in samples, and if there is a need to compensate then you can figure it out. Although typically one doesn't need to compensate for only 20 samples of latency, as that is less than you might find in an analog studio using digital effects.
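    For scale, converting a latency in samples to time is one line (44.1 kHz assumed):

      def latency_ms(samples, sample_rate=44100):
          # Plugin/DSP latency in samples expressed in milliseconds.
          return 1000.0 * samples / sample_rate

      latency_ms(20)     # ~0.45 ms: the TDM figure, inaudible as predelay
      latency_ms(2048)   # ~46 ms: a typical card buffer, a very obvious predelay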

    • Re:Latency? (Score:3, Interesting)

      by Sleen ( 73855 )
      Great question... I am in the instrumentation business, where it's all about latency. Reading through the PDF, BionicFX very much claims REALTIME processing, which one may take as meaning SMALL BUFFERS... hopefully, right?

      Also, "which is unfortunate as reverb is the greatest reason to use a card like this, and it is a bus effect typically, and the extra delay incurred acts to set a huge, usually inappropriate predelay."

      Which is why their first stated proof-of-concept algorithm will be a convolution-based reverb.
  • OpenAL (Score:3, Informative)

    by upsidedown_duck ( 788782 ) on Friday September 03, 2004 @05:21PM (#10153252)

    Given that OpenAL is backed by sound card manufacturers, I wonder if they would ever concede to using GPUs to accelerate 3-D sound. I hope that the apparent conflict of interest doesn't hinder progress, if GPUs can really make a difference.

    OpenAL is the one cross-platform audio API I've tried that actually _works_, while the other cross-platform options seem to be either stagnant, incomplete, just plain garbage, or so lacking in documentation that no mere mortal could figure them out. Here's to hoping that OpenAL and cross-platform audio on UNIX keep getting better and better, because we really do need it.
