High performance FFT on GPUs
A reader writes: "The UNC GAMMA group has recently released a high performance FFT library which can handle large 1-D FFTs. According to their webpage, the FFT library is able to achieve 4x higher computational performance on a $500 NVIDIA 7900 GPU than optimized Intel Math Kernel Library FFT routines running on high-end Intel and AMD CPUs costing $1500-$2000. The library is supported for both Linux and Windows platforms and is tested to work on many programmable GPUs. There is also a link to download the library freely for non-commercial use."
Final Fantasy Tactics? (Score:4, Funny)
Re:Final Fantasy Tactics? (Score:5, Funny)
Re:Final Fantasy Tactics? (Score:3, Funny)
It's nice... (Score:5, Informative)
Re:It's nice... (Score:5, Interesting)
If you are taking data off of some kind of sensor, there are damned few sensors with 24 good bits of data out of the noise floor. Radars work just fine with 16-bit A/D converters.
IEEE-754 compliance helps you in the ill-defined corners of the number space. FFTs inherently work in the well-behaved arena of simple trig functions and three-function (add/subtract/multiply) math.
I'm currently doing FFTs with 16-bit fractional arithmetic on a Blackfin DSP. For what I'm doing with the results, it is good enough.
Not to mention you could use a "GPU farm" to do a fast search, and take any "interesting" data regions and feed those to a 64-bit, fully-IEEE-754 compliant, slow-as-molasses-in-January x86 FFT.
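A minimal sketch of that two-pass idea (plain NumPy/SciPy on the CPU standing in for the GPU stage; the signal, threshold, and sizes are all made up for illustration):

    import numpy as np
    from scipy import fft as sfft

    rng = np.random.default_rng(0)
    fs, n = 1_000_000, 2**20                      # pretend 1 MHz sensor, ~1M samples
    t = np.arange(n) / fs
    signal = (0.05 * np.sin(2 * np.pi * 125_000 * t)
              + rng.normal(0, 1, n)).astype(np.float32)

    # Pass 1: cheap single-precision scan (the "GPU farm" stage).
    coarse = np.abs(sfft.rfft(signal))            # float32 in -> complex64 out
    candidates = np.flatnonzero(coarse > 10 * coarse.mean())

    # Pass 2: redo only the interesting bins in slow, full double precision.
    fine = sfft.rfft(signal.astype(np.float64))
    for k in candidates:
        print(f"bin {k}: {k * fs / n:.0f} Hz, |X| = {abs(fine[k]):.1f}")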
Eventually, with some more articles like this one and yesterday's Cell piece, people will start to figure out that the x86 architecture is brain-dead and needs to be put out of its misery.
Re:It's nice... (Score:5, Informative)
most highly parallel GPU-type chips lack support for gradual underflow, for example, one of those "ill-defined corners of the number space" where 754 has been a tremendous boon. flush-to-zero is fine if you're decoding MP3s or unpacking texture maps, but it causes a lot of problems when you start trying to do more general scientific computations. sometimes those low order bits matter a whole lot; sometimes they're the difference between getting an answer accurate to 4 digits and an answer with *no* correct digits.
"simple trig functions" have their own problems on these architectures; try writing an acceptable range-reduction algorithm for sin or cos without having correctly rounded arithmetic ops. sin and cos are, in fact, two the hardest operations in the math lib on which to get acceptable accuracy.
admittedly, none of these objections are an issue with FFTs. but the reason that FFTs will perform acceptably on such an architecture is that the operations are (usually) constrained to the domain in which you don't encounter the problems i mention, not because the operations themselves are inherently safe. the lack of support for gradual underflow will cause you to lose many, many bits in frequency components that have nearly zero magnitude, but you usually don't care about those components when you're doing FFTs, anyway.
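to make the flush-to-zero hazard concrete, here's a tiny sketch (the ftz() helper is my own crude simulation of a flush-to-zero unit, not any actual GPU's behaviour):

    import numpy as np

    def ftz(x):
        # crude simulation of a flush-to-zero unit: subnormals become 0
        tiny = np.finfo(np.float32).tiny          # smallest *normal* float32
        return np.float32(0) if abs(x) < tiny else x

    x, y = np.float32(2.5e-38), np.float32(2.0e-38)   # both normal numbers
    d = np.float32(x - y)                             # 5e-39 is subnormal

    print(d)         # gradual underflow keeps it: ~5e-39
    print(ftz(d))    # flush-to-zero destroys it: 0.0
    # with ftz, x != y and yet x - y == 0 -- exactly the corner 754 cleans up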
Re:It's nice... (Score:2)
Re:It's nice... (Score:4, Insightful)
Take a look at their benchmarks [unc.edu]. The chart goes up to eight million elements. The accumulated rounding error in FFT outputs may be around n * log2(n) ULP, where n is the number of elements, and ULP (units in last place) is relative to the largest input element. (Caveats: That is the maximum; the distribution of the logs of the errors resembles a normal distribution. Input was numbers selected from a uniform distribution over [0, 1). The error varies slightly depending on whether you have fused multiply-add and other factors.)
So with eight million elements, the error may be 184 million ULP, or over 27 bits. With only 24 bits in your floating-point format, that is a problem. Whether you had 24-bit or 1-bit data to start with, it is essentially gone in some output elements. Most errors are less than the maximum, but it seems there is a lot of noise and not so much signal.
It may be that the most interesting output elements are the ones with the highest magnitude. (The FFT is used to find the dominant frequencies in a signal.) If so, those output elements may be large relative to the error, so there could be useful results. However, anybody using such a large FFT with single-precision floating-point should analyze the error in their application.
Your error analysis is totally wrong (Score:5, Informative)
In floating-point arithmetic, the FFT algorithm was proved in 1966 to have an upper bound for the error that grows only as O(log N), while the mean (rms) error grows only as O(sqrt(log N)). (See this page [fftw.org] for more info.) (Errors in fixed-point arithmetic are worse, going as O(sqrt(N)).)
Even in single precision, the errors for their FFT sizes are probably quite reasonable, assuming they haven't done something silly like use an unstable trigonometric recurrence.
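If you don't trust the theory, it's easy to measure yourself (SciPy here just as a convenient FFT that honours single precision; any decent implementation should behave similarly):

    import numpy as np
    from scipy import fft as sfft

    rng = np.random.default_rng(1)
    for log_n in (10, 14, 18, 22):
        n = 2**log_n
        x = rng.random(n)                       # uniform over [0, 1)
        ref = sfft.fft(x)                       # double-precision reference
        got = sfft.fft(x.astype(np.float32))    # single-precision run
        rms = np.sqrt(np.mean(np.abs(got - ref)**2) / np.mean(np.abs(ref)**2))
        print(f"N = 2^{log_n}: relative rms error {rms:.1e}")
    # expect errors near sqrt(log N) * 1e-7, not the N log N ULPs claimed above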
erratum (Score:4, Informative)
Re:It's nice... (Score:5, Informative)
Why? Because the x86 isn't a DSP?
The x86 is a general-purpose CPU. It isn't brain dead; historically it's almost always been at least half as fast as the latest expensive processor fad du jour, and sometimes it has actually been the fastest available general purpose processor. As these fads have come and gone, the x86 has quietly kept improving by incorporating many of their best ideas.
The Cell processor is basically a POWER processor core packaged with a few DSPs tacked onto the die. That sounds like a kludge to me, but if it turns out to be a success, there's nothing stopping people from tacking DSPs onto an x86 die.
All a DSP is good at is fast number crunching. It usually has little in the way of an MMU, along with a memory architecture tuned mainly for vector-like operations, branch prediction tuned only for matrix math, etc. DSPs would make a bad choice for running general purpose programs, especially with cache and branch issues becoming the dominant performance bottleneck in recent times. DSPs would be a horrible choice for running an OS with any kind of security enforcement. Using a GPU as a poor-man's DSP is interesting, but it suffers even more from these same limitations. If DSPs really offered a better solution for general-purpose problems, they would have replaced other CPU architectures decades ago.
32 bit not good enough (Score:2, Interesting)
Re:It's nice... (Score:3, Interesting)
I think you are confusing the precision of the input data with the precision of the power spectrum. An FFT does a scaled add of a large number of samples, so the precision of the output depends on the number of input samples.
For example SETI@home uses 1 bit complex sampling. (Yes, the SETI@home ADCs are a pair of high spe
Rush hour math. (Score:3, Insightful)
GPUs are nice, but there's the little matter of getting data and results on and off the chip.
That's where PCIe is useful (Score:4, Informative)
Rush hour dollars, too (Score:2)
Not to mention that their dollar figures are somewhat misleading, as they didn't include the cost of the host PC... and AMD Opteron 280 processors don't cost ANYWHERE NEAR $2,000. They also didn't show us how cheaper processors do.
You can buy a Tyan quad-CPU motherboard for about $1k, and the dual-core version of the 280 for $1k tops. So...that's $5-5.5K for a box with EIGHT processors that will do 6 million co
8xx verses 2xx (Score:2, Informative)
Re:Rush hour dollars, too (Score:2)
They also didn't say how cheaper GPU's do.
You can buy a Tyan quad-CPU motherboard for about $1k, and the dual-core version of the 280 for $1k tops. So...that's $5-5.5K for a box with EIGHT processors that will do 6 million complex values in almost half the time as the $500 video c
Re:Rush hour math. (Score:5, Informative)
FFTs in GPUs, eh? (Score:3, Funny)
Any 64 bit GPU's? (Score:3, Insightful)
Re:Any 64 bit GPU's? (Score:2)
/no really, would it?
Re:Any 64 bit GPU's? (Score:3, Informative)
Implementing 'big numbers', or numbers larger than the processor's native width, is actually quite computationally heavy compared to the operations you're replacing. As such, a 4x increase in the speed of computation can translate to a net slowdown of (to pull a number from my arse) 0.25x when dealing with larger floats.
However...
With CPU/GPU cooperation, the floating gap can be handled by using the CPU to generate a lookup table of high-precision trig as, say, a texture, and treating the numbers as m
Re:Any 64 bit GPU's? (Score:2)
Re:Any 64 bit GPU's? (Score:2)
Though, the floating point trig table can be converted to fractional and - oh, wait.. fractional is pretty process heavy too.
Ok, GP. The answer is quite a definitive 'No'.
Re:Any 64 bit GPU's? (Score:4, Interesting)
Re:Any 64 bit GPU's? (Score:5, Interesting)
Current generation GPUs handle 64-bit and 128-bit colours already. A 64-bit colour value is just four channels of 16-bit floats (halfs in Cg parlance). A 128-bit colour value is a vector of four 32-bit floats.
If game developers wanted 256-bit colour, then GPUs would need to support 64-bit floating point arithmetic. This is unlikely to happen, however, since 64-bit colour (which is really 48-bit colour with a 16-bit alpha channel) gives more colours than the human eye can distinguish. In fact, even with 64- or 128-bit colour for the intermediate results, current cards only have a 10-bit DAC for converting the colour value to an analogue quantity that can be displayed on an analogue screen.
It is worth noting that Pixar's RenderMan software doesn't use more than 128-bit colour, and films like Toy Story were rendered in 64-bit colour mode.
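A toy illustration of that precision mismatch (NumPy dtypes standing in for what the hardware does; the 1023 just models a 10-bit DAC, nothing vendor-specific):

    import numpy as np

    # one "64-bit colour": four half-precision channels (R, G, B, A)
    pixel = np.array([0.1234, 0.4567, 0.7890, 1.0], dtype=np.float16)

    # a 10-bit DAC can only emit 1024 distinct levels per channel
    dac = np.round(pixel.astype(np.float64) * 1023) / 1023
    print(pixel)   # what the shader arithmetic carries
    print(dac)     # what actually reaches the analogue screen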
Re:Any 64 bit GPU's? (Score:4, Informative)
Re:Any 64 bit GPU's? (Score:3, Informative)
Re:Any 64 bit GPU's? (Score:3, Insightful)
While interesting, I need IEEE 64 bit double precision for my scientific applications.
Depends on what you need 64 bit for - is it for the precision (i.e. mantissa size) or the range (i.e. exponent size)?
If you can live with a double-precision mantissa but a single-precision exponent, it's possible to get that using single-precision building blocks with less than a 2x slowdown. Sorry, don't have the references to hand right now, but a dig around on Citeseer/Google should get you there.
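Here's roughly how the building blocks look, sketched with NumPy float32 scalars (this is only the add; multiplication needs Dekker-style splitting or an FMA, and the names are mine):

    import numpy as np

    f32 = np.float32

    def two_sum(a, b):
        # Knuth's error-free transformation: a + b == s + e exactly
        s = a + b
        bp = s - a
        e = (a - (s - bp)) + (b - bp)
        return s, e

    def ds_add(x, y):
        # add two double-single numbers (hi, lo); ~48-bit effective mantissa
        hi, e = two_sum(x[0], y[0])
        lo = e + (x[1] + y[1])
        return two_sum(hi, lo)                  # renormalize

    pi_hi = f32(np.pi)
    pi_lo = f32(np.pi - np.float64(pi_hi))      # the bits float32 dropped
    hi, lo = ds_add((pi_hi, pi_lo), (pi_hi, pi_lo))
    print(np.float64(hi) + np.float64(lo))      # ~2*pi, well beyond float32's 7 digits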
Re:Any 64 bit GPU's? (Score:2)
FFT; (Score:2, Informative)
Nope! (Score:4, Informative)
For the latter, you need a PSD (power spectral density) plot, which is obtained by finding the square of the magnitude of the freq-domain FFT (complex) outputs.
And the term "FFT" usually describes a specific class of algorithms that finds a Discrete Fourier Transform of a signal in much less than O(N^2) time, where N is the number of elements/samples considered.
However, the FFT is also useful to perform fast polynomial multiplication (and even fast multiplication of very very very long numbers). This application has nothing to do with power or frequencies in a signal.
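Both uses fit in a few lines of NumPy, for the curious (all the numbers are arbitrary):

    import numpy as np

    # power spectral density: squared magnitude of the FFT output
    x = np.sin(2 * np.pi * 50 * np.arange(1024) / 1024)
    psd = np.abs(np.fft.rfft(x))**2
    print(psd.argmax())                # -> 50, the dominant frequency bin

    # the same machinery multiplies polynomials:
    # (1 + 2t) * (3 + 4t) = 3 + 10t + 8t^2
    a, b = [1, 2], [3, 4]
    n = 4                              # >= len(a) + len(b) - 1
    c = np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)
    print(np.round(c).astype(int))     # -> [ 3 10  8  0]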
Math library for sale? (Score:2)
Don't get me wrong, new tools are cool, but can someone explain to me why this is newsworthy?
Re:Math library for sale? (Score:3, Insightful)
Re:Math library for sale? (Score:2, Informative)
Precision limit. (Score:3, Informative)
Most GPUs can do at most 32-bit floating point operations (depending on the brand and the model), whereas most scientific applications use 64-bit and higher (the old x87 FP unit could do 80-bit FP math; the 128-bit SSE registers in recent processors hold packed 32- or 64-bit floats).
So some users will be happy, like for sound processing (GPUs have already been used to process reverberation to create realistic sound environments - too lazy to do the search for the slashdot referen
Re:Precision limit. (Score:2)
This is nothing but FUD; nVidia has always had great support for *nix OSes http://www.nvidia.com/object/unix.html [nvidia.com] and even ATI has some support for *nix OSes https://support.ati.com/ics/support/default.asp?deptID=894&task=knowledge&folderID=27 [ati.com] - try using them sometime rather than listening to your tech gods at the local geek squa
Re:Precision limit. (Score:2)
There is a difference between token support and good performance, and apparently you need to go back to school for reading comprehension as you missed the difference in the GP's post.
Cryptography? (Score:3, Insightful)
No need for floating point.
IF (Score:3, Funny)
Re:Math library for sale? (Score:2)
FFT=Fast Fourier Transforms (Score:5, Interesting)
Re:FFT=Fast Fourier Transforms (Score:3, Informative)
Interesting question... (Score:3, Interesting)
Is 32-bit precision enough for the SETI@home application?
Or does the project need the higher precisions (64-bit to 128-bit) that can (for now) only be provided by the CPU?
IMHO, maybe this could be useful. They're trying to find which chunks contain candidate data. If there's some fast low-precision algorithm that can quickly mark chunks as interesting / recheck-with-higher-precision / uninteresting, it'll help to quickly tell apart the interesting chunks, even if data n
Re:Interesting question... (Score:5, Informative)
Our tests on nVidia 5600 series AGP cards (this was several years ago) showed that the net SETI@home throughput using the GPU was at best 1/5 of what we could obtain with the CPU. This was primarily due to transfers out of graphics memory and into main memory.
PCI Express allows for symmetric bandwidth to graphics memory and graphics memories are now typically larger than the size of our working set. The difficulty will be in benchmarking to see which is faster for a specific GPU/CPU combination.
At any rate it's a fairly simple job to swap FFT routines in SETI@home. The source is available [berkeley.edu]. Someone may have done it by now...
This is news? (Score:2, Interesting)
Re:This is news? (Score:2)
New or the 2nd or 3rd dup here on
Re:This is news? (Score:5, Funny)
I'm sorry, but playing Quake at really high framerates does not count as research. He's not fooling anyone.
The business cards which list him as 'Profess0r of Pwnage' probably aren't helping either.
It's also bad when he refers to the undergrads as 'n00bs' during his lectures to them.
$1500-$2000? (Score:3, Insightful)
Re:$1500-$2000? (Score:5, Informative)
They're running against dual-processor systems (Opteron and Xeon).
Re:$1500-$2000? (Score:3, Informative)
Re:$1500-$2000? (Score:2)
Dual-Core Model 875 HE $2,149
and I'm sure the fastest Xeon is similarly priced.
Re:$1500-$2000? (Score:2)
Re:$1500-$2000? (Score:2)
Re:$1500-$2000? (Score:2)
Cray-1 comparison (Score:5, Interesting)
Thirty years later, a $500 GPU, weighing less than 1 pound, can produce 6 gigaflops. People complain about its power and cooling needs, but those are rather below the Cray-1's 200 kW! We sometimes forget just how amazing the developments in computing have been over the last three decades.
Re:Cray-1 comparison (Score:2, Insightful)
You can probably make up your own flawed car analogy
Re:Cray-1 comparison (Score:2)
And then marvel at how automobile technology has advanced, which I think was the point.
Re:Cray-1 comparison (Score:3, Funny)
Why compare it with the Cray-1? Compare it with the steam-powered calculators of the past that took minutes to multiply two simple numbers, and sometimes got the results kinda off.
People always demand more; this is why they develop more, so as to get more. If people became suddenly satisfied with whatever state they're in, they'
Good occasion (Score:2)
Note how many comments are about explaining what FFT is, as opposed to how many comments consist of asking what FFT means. Quite a fucked-up supply/demand ratio.
Based on FFTW? (Score:2)
Are the two linked, or is the W at the end of the name just a coincidence?
Re:Errr, I don't want to sound skeptical... (Score:3, Informative)
(Almost) Comparison to BSD Licensed version (Score:2, Interesting)
Bytes/bits? (Score:3, Informative)
The Video RAM will determine the maximum array length that can be sorted on the GPU. A rough guideline for performing FFT on 32-bit floats is: Maximum array length in millions = Video RAM in MB / 32
Max array length equals video RAM in megabytes divided by 32... bits? Correct me if I'm dumb, but shouldn't it rather be "Video RAM in MB / 4"?
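One way the 32 could pencil out (pure guesswork about their implementation, not something from their docs): a complex single-precision element is 8 bytes, so N million elements already need 8N MB for the input alone; an out-of-place transform needs a second 8N MB buffer, and ping-pong passes plus twiddle storage can roughly double that again, giving ~32 bytes per element. On that reading, a 512 MB card tops out near 512 / 32 = 16 million elements.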
Re:Bytes/bits? (Score:2)
No surprise here... (Score:2, Insightful)
What's an FFT (Score:5, Informative)
The Fast Fourier Transform is an algorithm to turn a set of data (as amplitude vs. time) into a set of waves (as amplitude vs. frequency). Say that I have a recording of a piano playing an A at 440 Hz. If I plot the actual data that the sound card records, it'll come out something like this picture [pianoeducation.org]. There's a large fading-out, then the 440 Hz wave, then a couple of overtones at multiples of 440 Hz. The Fourier series will have a strong spike at 440 Hz, then smaller spikes at higher frequencies: something like this plot [virtualcomposer2000.com]. (Of course, that's not at 440, but you get the idea.)
The reason we like Fourier transforms is that once you have that second plot, it's extremely easy to tell what the frequency of the wave is, for example - just look for the biggest spike. It's a much more efficient way to store musical data, and it allows for, e.g., pitch transformations (compute the FFT, shift the frequencies in the result by your pitch change, and compute the inverse FFT, which uses almost the same formula). It's good for data compression because it can tell us which frequencies are important and which are imperceptible - and it's much smaller to say "Play 440 Hz, plus half an 880 Hz, plus..." than to specify each height at each sampling interval.
The FFT is a very mathematics-heavy algorithm, which makes it well suited for a GPU (a math-oriented device, because it performs a lot of vector and floating-point calculations for graphics rendering) as opposed to a general-purpose CPU (which is more suited for data transfer and processing, memory access, logic structures, integer calculations, etc.) We're starting to see a lot of use of the GPU as the modern equivalent of the old math coprocessor.
If you're looking for more information, Wikipedia's FFT article is a good technical description of the algorithm itself. This article [bu.edu] has some good diagrams and examples, but his explanation is a little non-traditional.
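The "look for the biggest spike" step really is this short (a one-second synthetic A in NumPy; a real piano recording would just replace the note array):

    import numpy as np

    fs = 44_100                                   # CD-quality sample rate
    t = np.arange(fs) / fs                        # one second of audio
    note = (np.sin(2 * np.pi * 440 * t)           # the A itself
            + 0.5 * np.sin(2 * np.pi * 880 * t))  # first overtone, half as loud

    spectrum = np.abs(np.fft.rfft(note))
    freqs = np.fft.rfftfreq(note.size, 1 / fs)
    print(freqs[spectrum.argmax()])               # -> 440.0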
Re:What's an FFT (Score:2)
I was going to karma-whore this myself, but you beat me to it, and probably did a better job.
Re:What's an FFT (Score:3, Informative)
Be careful with the terminology; you correctly referred to "naive FT algorithm" above, but this sentence might give the impression that the Fourier transform itself is an algorithm. FT is a function whereas the FFT is an algorithm that computes the function. It would be more appropriate to say that FFT is to the Fourier transform what quicksort is to sorting.
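To make the quicksort analogy concrete, here is the same function computed two ways - the O(N^2) definition versus a library FFT (same answer, wildly different cost):

    import numpy as np

    def naive_dft(x):
        # the Fourier transform as a definition: an O(N^2) sum, no cleverness
        n = len(x)
        k = np.arange(n)
        w = np.exp(-2j * np.pi * np.outer(k, k) / n)   # the DFT matrix
        return w @ x

    x = np.random.rand(256)
    print(np.allclose(naive_dft(x), np.fft.fft(x)))    # True: same function,
                                                       # very different algorithms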
Great for audio! (Score:3, Insightful)
I want to see how I can take advantage of this... I hope the license isn't too restrictive.
It might be a good example of how to use the GPU for general purpose (vector-based) computation, something I've been wanting to explore.
Just curious, how does the use of the GPU for this kind of thing affect the graphics display?
Are you unable to draw on the screen while it's running, or something?
Finally.... (Score:5, Funny)
Unfair comparison (Score:2, Interesting)
not worth it (Score:2)
Overa
stating the obviou... (Score:3, Interesting)
Modular squaring (Score:2)
Melissa
I wonder if this could work using MPI? (Score:2)
Pro Audio applications? (Score:3, Interesting)
Re:Uhh.. (Score:5, Informative)
Re:Uhh.. (Score:2, Redundant)
You don't need a translation... (Score:3, Insightful)
Some calculations can be heavily optimized down to simple but fast processing. Hence a [relatively] cheap part that does a few simple tasks very fast can outperform a more expensive part that can do a vastly greater range of tasks with decent efficiency across that general range, but less in the one specific area, when performing that optimized calculation.
By capitalizing on this incredibly basic rule of computer science (that an optimized simple thing going fast is faster than a more powerful general thing that
Re:Uhh.. (Score:3, Informative)
One useful way to think of the FFT is as a transform of signal data from the time domain (raw samples) to the frequency domain
Re:Uhh.. (Score:4, Interesting)
As an example of how far we've come, I implemented the Cooley-Tukey FFT in assembler on an Amiga, and it was just barely out of real-time. You had to capture some audio data, then wait while it was analysed. Nowadays, you can write the same thing in Objective-C on a G4, using the standard audio capture library, and have the FFTs computed between redraw events.
Re:Uhh.. (Score:2)
Fourier series, as a concept, is analogous to the polynomial-based power series of a function - both allow you to construct an infinite set of coefficients. Just as Taylor's theorem can be used to construct a power series (and determine the coefficients) for any function, the FFT can be used to obtain the Fourier co
Re:Uhh.. (Score:5, Informative)
FFTW is the 'Fastest Fourier Transform in the West', a cute name for the work of a number of graduate students who use several techniques to turn the FFT from 'Numerical Recipes in C' into a freaking speed demon.
GPUFFTW is much the same thing, but ported to your video card's GPU - which is generally more optimized for doing the 'apply a floating point matrix to an array' thing - thus speeding the FFT up even more while relieving the main processor of the work.
If you don't have a high-powered video card, this means nothing for you. If you do, it means the above operations (compression, spectrum analysis, etc.) can be done faster and without eating up processor time.
Re:Uhh.. (Score:2)
Re:Uhh.. [little correction] (Score:2, Insightful)
Re:Uhh.. (Score:5, Funny)
Re:As you should very well know... (Score:2)
Re:As you should very well know... (Score:4, Funny)
And just you wait, France will develop a European alternative to that Fourier nonsense as well!
Re:As you should very well know... (Score:3, Funny)
And just you wait, France will develop a European alternative to that Fourier nonsense as well!
Read [wikipedia.org]... And notice the country....
Mod the guy up.. He might be a troll, but he is funny :-D
Re:Please (Score:3, Funny)
Re:Please (Score:2)
But, can an Nvidia run FFT calculations faster than Chuck Norris? I doubt it!
Re:Please (Score:2)
What do you mean an [slizone.com] overpriced video card, and why would you want one instead of an overpriced CPU?!
Re:Please (Score:2)
Re:If you need gigaflops... (Score:2, Informative)
SRC Computers puts out FPGA-based systems that have a nice 32-bit floating point FFT library already in their development environment. Most customers using the FFT are doing radar image processing, where the best PC solution is 50 times slower than the FPGA-based solution. Think UAVs with smart tracking off their radar.
http://www.srccomputers.com/ [srccomputers.com]
Re:If you need gigaflops... (Score:2)
If for no other reason than that you can get a working GP-GPU program running in a day or two. Designing hardware (even on FPGA) typically takes months.
The Windowing Problem (Score:2)
And virtually all of this computation is single precision.
There are enough inaccuracies in transforming to the frequency domain due to the "windowing problem" (which says that the finite length
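The smearing in question is easy to demonstrate (a tone that falls between bins, with a Hann window shown as one standard mitigation; the numbers are illustrative):

    import numpy as np

    n, fs = 1024, 1024.0
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * 100.5 * t)      # 100.5 Hz lands between two bins

    rect = np.abs(np.fft.rfft(x))                  # finite capture = rectangular window
    hann = np.abs(np.fft.rfft(x * np.hanning(n)))  # tapered capture

    # energy leaked into a bin ~100 bins away from the tone:
    print(rect[200], hann[200])   # roughly order 1 vs orders of magnitude smaller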
Re:The Windowing Problem (Score:2)
there's a bazillion other applications that need double (and higher) precision, but i'm pretty sure that's the most compelling one for the average person.
Re:wasted processing power... (Score:2)