Media Hardware

Realtime GPU Audio 157

CowboyRobot writes "Two researchers at San Francisco State University have successfully implemented hardware acceleration for realtime audio using graphics processing units (GPUs). 'Suppose you are simulating a metallic plate to generate gong or cymbal-like sounds. By changing the surface area for the same object, you can generate sound corresponding to cymbals or gongs of different sizes. Using the same model, you may also vary the way in which you excite the metallic plate — to generate sounds that result from hitting the plate with a soft mallet, a hard drumstick, or from bowing. By changing these parameters, you may even simulate nonexistent materials or physically impossible geometries or excitation methods. There are various approaches to physical modeling sound synthesis. One such approach, studied extensively by Stefan Bilbao, uses the finite difference approximation to simulate the vibrations of plates and membranes. The finite difference simulation produces realistic and dynamic sounds (examples can be found here). Realtime finite difference-based simulations of large models are typically too computationally intensive to run on CPUs. In our work, we have implemented finite difference simulations in realtime on GPUs.'"
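To make the technique concrete, here is a minimal sketch of one explicit finite difference time step for a simple 2D membrane (wave-equation) model, the kind of grid update that maps well to a GPU. This is not the authors' FDS code; the kernel name, buffer layout, and loss constant are invented for illustration, and the article's plate models are more elaborate (stiffness, loss, and excitation terms).

__global__ void fd_step(float *u_next, const float *u_curr, const float *u_prev,
                        int n, float c2)
{
    // One explicit step of u_tt = c^2 (u_xx + u_yy) on an n x n grid.
    // c2 = (c*dt/dx)^2; the scheme is stable for c2 <= 0.5.
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x <= 0 || y <= 0 || x >= n - 1 || y >= n - 1) return;  // fixed edges

    int i = y * n + x;
    float lap = u_curr[i - 1] + u_curr[i + 1] + u_curr[i - n] + u_curr[i + n]
              - 4.0f * u_curr[i];
    float next = 2.0f * u_curr[i] - u_prev[i] + c2 * lap;
    u_next[i] = 0.9999f * next;  // crude loss term so the membrane rings down
}

Host code would launch this once per audio sample (or per small block of samples), rotate the three buffers, inject a short excitation pulse at a strike point, and read the displacement at a pickup point as the output sample. Whatever the exact plate model, the parallel structure is the same: every grid point updates independently from its neighbors, which is exactly what a GPU is good at.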
This discussion has been archived. No new comments can be posted.

  • by WillgasM ( 1646719 ) on Friday May 10, 2013 @02:57PM (#43688461) Homepage
    What does it sound like when you strike a neutered cat with graphene carrots of varying length?
    • by Anonymous Coward on Friday May 10, 2013 @03:23PM (#43688701)

      You had to wait for a computer model to find this out? I guess the music scene in your town is pretty boring.

      • Yes. I'm not allowed around cats anymore.

    • I can tell you that if the cat wasn't neutered, it would sound like a trip to the hospital.

    • ...perhaps it would be easier to visualize it? The frame buffer is just one bit-throw away...

  • Finally (Score:4, Insightful)

    by Richy_T ( 111409 ) on Friday May 10, 2013 @03:01PM (#43688509) Homepage

    Something to do with all those GPUs when ASIC mining of Bitcoin takes over. It's going to get noisy.

    • by Anonymous Coward

      They've already got GPU accelerated noise makers, but all they do is repeat "litecoin litecoin litecoin"!

  • impossible! (Score:5, Funny)

    by zlives ( 2009072 ) on Friday May 10, 2013 @03:05PM (#43688541)

    " simulate nonexistent materials or physically impossible geometries"
    the sound of one hand clapping

  • Might be interesting to me, if it was ported to Linux and could use AMD GPUs! Mac and Nvidia, no way!
    • Re: (Score:2, Funny)

      by Anonymous Coward

      This just in: paper author commits suicide now that slashdot poster not interested in his life's work.

    • You do realize this is research and not a product, don't you? As in, hey look what we discovered we can do!

      If you want it ported to Linux using AMD GPUs, request the source code (since that's the only way it's provided) and port it yourself.

    • FTFA: "Our software synthesis package, FDS (Finite Difference Synthesizer), was designed to operate on Mac OSX and Linux."
  • Yawn (Score:2, Insightful)

    Yeah, you can do computationally heavy things in a GPU. We've done that for years. All this is saying is that some audio signal processing tasks are computationally heavy.

    • Re:Yawn (Score:5, Insightful)

      by AlphaWolf_HK ( 692722 ) on Friday May 10, 2013 @03:17PM (#43688633)

      I think what's most important is that we now have the mathematical models in place that allow us to simulate convincing sounds rather than "sample and include". For the creative types, this will save a ton of effort and money. It also has implications for games, e.g. given an environment model, being able to produce convincing sounds in real-time rather than taking sound samples and mixing them with reverb, attenuation, positioning, etc.

      • Re:Yawn (Score:4, Insightful)

        by asliarun ( 636603 ) on Friday May 10, 2013 @04:15PM (#43689217)

        I think what's most important is that we now have the mathematical models in place that allow us to simulate convincing sounds rather than "sample and include". For the creative types, this will save a ton of effort and money. It also has implications for games, e.g. given an environment model, being able to produce convincing sounds in real-time rather than taking sound samples and mixing them with reverb, attenuation, positioning, etc.

        Yes, absolutely! I see it as analogous to vector graphics vs bitmapped graphics. Vector audio is THE holy grail of accurate sound reproduction.

        If these guys can pull this off, it will be the (digital) equivalent of having your own live performance - every time! You will have software-based models of various instruments that will play music for you by playing their respective instruments in real-time. The possibilities of this are actually astounding. You would record or store music not as digital samples (lossy or lossless notwithstanding) but in terms of *how* each instrument is played. You have now turned the problem on its head - you are constrained by the accuracy of your software/mathematical model of each instrument, and by how well you are able to control it to become more nuanced. At a hardware level, if you assume infinite processing power, the challenge would be to accurately play these software instruments. You could also take a completely different approach - you could, for example, have an array of speakers where each speaker is dedicated to playing a specific instrument, and all the speakers are fed separate audio signals.

        Contrast this to the current audio setup - which would be a 2.0 or 2.1 or 5.1 or 7.1 stereo/HT setup - where each speaker tries (and fails) to accurately reproduce the entire audible frequency spectrum, or you have a mish-mash setup where different speakers divvy up the frequency spectrum between themselves (think sub-woofer and satellite speakers) so they can do a marginally better half-assed job.

        If you look at the entire chain in a traditional setup, you have the speaker driver's mechanicals, the speaker crossover electronics, the speaker wire, the power amp, the pre-amp, the DAC, the player, the source audio signal (mp3, flac, redbook CD etc.), the recording mike, and the recording room - all of these links in the chain distort the music in their own way.

        What I mentioned above is only my interpretation of how this technique can be used - there are a huge number of other possibilities. Software-defined objects, such as in games, can now have their own (genuine) sound, and that will sound different depending on how you interact with them. You could also have virtual instruments, unconstrained by the laws of physics, that define their own physics and their own unique sound. You could even program room acoustics and have the instruments play as if they were in open space, a large hall, a studio, on a beach, etc.

        Sigh.

        • by fbjon ( 692006 )
          Physical modelling in sound generation is decades old; there was lots of interest in it in the '90s with commercial hardware, but it has kind of died down. It's computationally intensive for one, which a GPU can help with, but it's also a bitch to actually use well for most real-world instruments. Bell-like sounds are common and can be quite interesting, wind instruments can be done fairly OK, bowed instruments are a bit meh compared to the real thing or samples.

          The novelty is doing it on a GPU which means

        • The difficulty in synthesizing sound is getting the models right. You can't simulate each atom so you need a simplifying model that allows you to reduce the work. And that model has to be accurate in the areas where it matters.

          While moving stuff to a GPU gives more computing power (but in a more constrained fashion than a CPU) and certainly helps, the models aren't there yet.

          The people researching physical modelling continue to make progress, but I think that if you put state of the art in a game, you'd per

        • by mattr ( 78516 )

          They could make a bundle consulting for Hollywood space opera movies or pro sound designers maybe.
          And how about Google... what will the clanging feel like when they bump into and drill into that asteroid? Will it drive the miners or their robots insane??

      • Wasn't Aureal starting in on this kind of thing before Creative bought them and killed the product? I seem to recall their sound chips doing some things to calculate real-time echos and other changes to the sound based on materials and room geometry.

        I guess it's good that it can be done on the GPU; it might make for one less chipset to go into a system, especially given the move toward DisplayPort.

    • Re:Yawn (Score:5, Interesting)

      by MozeeToby ( 1163751 ) on Friday May 10, 2013 @03:20PM (#43688669)

      Actually I think this is pretty cool. It's always bothered me how repetitive sounds can get in games; it would be a neat trick if you could model objects for sound the way you model them for graphics. Each door, window, rock, etc., could have a subtly different sound from the one next to it. I'm sure they're not at that point now, but they are spelling out the possibilities.

      • Now that definitely sounds like the most interesting application of this technology: organic-sounding, artificially made sound effects.
      • Re:Yawn (Score:5, Insightful)

        by Instine ( 963303 ) on Friday May 10, 2013 @03:40PM (#43688885)
        I'm also excited by this, especially for what it could mean for text-to-speech. Generating more organically modeled TTS could really push it out of the uncanny valley. Currently, if you ask a TTS engine to say a word or phoneme, it is identical to the last time it was generated. What if it were generated in realtime with the same variances as a human voice?
      • I was thinking that it would be good for mapping out real "surround sound" similar to how complex reflection and/or ray-tracing is done.

        Even if the initial sounds themselves are canned, the sound through a wooden hallway, a hallway with a carpet, or a large open room would be different. Combine that with digital surround and it could be quite useful.

      • Or the game creators could just stop being cheap, and record/licence 100 different smashy-glass sounds instead of 3. And don't get me started on that damn squeaky-door noise in movies!
        • Or the game creators could just stop being cheap, and record/licence 100 different smashy-glass sounds instead of 3. And don't get me started on that damn squeaky-door noise in movies!

          What next? Are you going to bitch about the Wilhelm Scream?

      • by Hentes ( 2461350 )

        There's already some sound randomization in better games.

    • Re: (Score:2, Interesting)

      by jfengel ( 409917 )

      It's standard fare for science press releases. If the actual advance you're making is boring (using different hardware to speed up processing), then tell them about the part that's been done all along and take credit for it. (Alternatively: take credit for being "just about there" from some far off future goal, to which you've just made a non-trivial but still minor advancement.)

      Most "science" journalists eat it up, slightly rewriting it and passing it over to their editors so that they can knock off early

    • by u38cg ( 607297 )
      I find it a bit odd, because this is stuff that's been around for ages. You can buy commercial packages off the shelf for many instruments, particularly piano, and they sound fantastic.
  • physically impossible geometries or excitation methods.

    then we'll finally have the answer to "what is the sound of one hand clapping"

  • You should see 3d graphics done with my audio card.
  • by Kaptain Kruton ( 854928 ) on Friday May 10, 2013 @03:25PM (#43688727)

    What do they mean by "physically impossible geometries"? Are they talking about things that have a higher or lower number of physical dimensions (e.g. a 4-dimensional object or a 2-dimensional object)? A weird combination of Euclidean and non-Euclidean geometry?

    • by JWW ( 79176 ) on Friday May 10, 2013 @03:35PM (#43688813)

      Imagine a metal cymbal shaped as a sphere with no holes in it, floating free in the air. Now hit that cymbal with a mallet that is longer than the diameter of the cymbal. But hit the cymbal on the inside of the sphere. Oh, and the interior of the sphere is a vacuum.

      There you go, there are a few impossible geometries (and other things) in that scenario.

    • by Hentes ( 2461350 )

      A cymbal shaped like a Klein bottle could be simulated by this, for example.

    • the sound of a vibrating Klein bottle

  • by loufoque ( 1400831 ) on Friday May 10, 2013 @03:29PM (#43688757)

    News at 11.

    • by Anonymous Coward

      News at 11.

      Will them news be GPU rendered in realtime?

  • by Anonymous Coward

    One of the fundamental problems with computer-based music production is that we're still, unless we're working with synthesized music, limited to pre-recorded samples.

    Vienna Symphonic Library, for example, is well over several hundred gigabytes in size, many of those samples covering various articulations (playing techniques) of the same instrument.

    One set of violins playing legato. One set of violins playing pizzicato. Marcato samples etc. etc. With virtual instruments that is no longer necessary. We can j

  • There are many reasons that make GPUs not as useful for audio.
    The second is that most audio processing usually relies on complex directed graphs consisting of nodes that each process a different task, and that kind of interaction is too complex for the simpler, massively parallel GPU architecture.
    It would be fantastic for those of us who work in the audio industry to have some sort of DSP acceleration coprocessor for audio, but there's not enough demand to make that affordable, so we can only wait for GPUs to become more flexible and realtime friendly, or CPUs to become more parallel.
    • by Anonymous Coward

      With HDMI output, the graphics card is the last place in the computer to touch audio before it goes to a TV. It might have other advantages.

    • They're not talking about processing in the sense of DSP, they're talking about synthesis of sound waveforms simulating physical models of the instruments. Any DSP would come after that.

    • It would be fantastic for those of us who work in the audio industry to have some sort of DSP acceleration coprocessor for audio, but there's not enough demand to make that affordable, so we can only wait for GPUs to become more flexible and realtime friendly, or CPUs to become more parallel.

      You can very easily buy an audio DSP co-processor quite cheaply. Connect via FireWire, USB, or PCIe, or get a standalone unit.

      Pro Tools HD is one of the most widely used DSP co-processor systems. Write your own RTAS or AAX plugin and you can use the DSPs.

      If you really want to get down to prototyping, use Matlab and buy a SHARC dev board.

      DSP is being used in almost all audio hardware these days. Consoles, compressors, EQs, it's all going digital. The demand is huge.

    • by ja ( 14684 )
      Fortunately, you are terribly wrong! :-) On the GPU you can trivially exploit the parallelism of multiple identical channel strips or multiple polyphonic synth voices. The code to do this can almost be copy/pasted from your favorite on-line DSP resource - the difference being that you'll get 32 of each (assuming the function call is warp aware) rather than one (see the sketch after this thread).
    • by elucido ( 870205 )

      DSP is better than GPU, but a GPU can do stuff a DSP cannot. Physical modeling is a perfect example, because to get certain instrument sounds right - string-based instruments, steel drums, gongs, etc. - you need hardware acceleration. Software does a crappy job at it, and sampling cannot do it well, period.

      If it's possible to use some of the GPU power for audio then we should.
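
A minimal, purely illustrative CUDA sketch of the per-voice parallelism ja describes above; the kernel name and parameters are invented, not taken from the article. Each thread renders one synth voice, so a warp runs 32 voices in lockstep.

__global__ void render_voices(float *out, const float *freq, const float *amp,
                              int n_voices, int block_size, float sample_rate,
                              int start_sample)
{
    int v = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per synth voice
    if (v >= n_voices) return;

    float phase_inc = 2.0f * 3.14159265f * freq[v] / sample_rate;
    for (int s = 0; s < block_size; ++s) {
        // The same scalar per-voice code you would write for a CPU synth;
        // the warp simply executes 32 copies with different freq/amp.
        out[v * block_size + s] = amp[v] * __sinf((start_sample + s) * phase_inc);
    }
}

Mixing the per-voice buffers down to the output channels would then be a second kernel or a host-side sum.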

  • Hopefully this means my old college buddy Marc can finally graduate. :p

  • I can't wait until real-time synthesized voices escape the uncanny valley. Neal Stephenson was pretty prophetic in 'The Diamond Age' in having live voice actors behind dynamically scripted content; not that we have that, but we still don't have good voice generators.

    Voice 'acted' games without requiring actors to pre-record every possible phrase would be great.

    • by Romwell ( 873455 )
      Indeed, that would be awesome. There is a place for voice acting, but most of the lines in adventure/RPG games could be left to machines. One of the reasons re-making Larry is taking $500K on Kickstarter is that they have to record thousands of lines of speech, most of which probably wouldn't even be heard by the majority of players on the first play-through.

      Because of expenses like that, I sometimes wish the dialogues were un-voiced (as in Fallout 1/2); however, a TTS engine would be a good alternative t

  • DSPs have done sound modeling for years. So is the GPU the new DSP? Or is it simply cheaper because your desktop machine already has a GPU, whereas it may not have a DSP?
    • by elucido ( 870205 )

      DSPs have done sound modeling for years. So is the GPU the new DSP? Or is it simply cheaper because your desktop machine already has a GPU, whereas it may not have a DSP?

      Cheaper and possibly more accurate.

  • Isn't this basically what Roland's SuperNatural has been doing for years? I don't get it...
  • Because if it's not, why should we care?
