Fast Fourier Transform Algorithm
If one simply remembers a couple of things from basic mathematics, the above makes more sense. For one, the transform kernels, e.g., exp(i2πft), are of the general form of Euler's formula, exp(iθ) = cos(θ) + i sin(θ). From Euler's relationship it can clearly be seen that the Fourier transform pair has sine and cosine terms, just like a Fourier series does. And, since integration is the limit of a summation that becomes continuous, the Fourier transform is really an infinite, continuous summation of sine and cosine functions. In fact, the Fourier transform can be expressed using separate sine and cosine transforms. So, Fourier analysis as expressed by the Fourier transform is simply the decomposition of a signal into its composite frequency (sine and cosine) components. Rather than the discrete spectral lines (frequencies) appearing in a Fourier series, the Fourier transform has a continuous spectrum to represent a non-periodic process.
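As a brief illustration of this point, the forward transform can be split into separate cosine and sine transforms. The derivation below is a sketch assuming the common exp(-i2πft) sign convention for the forward kernel, which is not stated explicitly in the essay:

```latex
% Euler's formula applied to the Fourier kernel
e^{-i 2\pi f t} = \cos(2\pi f t) - i\,\sin(2\pi f t)

% so the Fourier transform splits into a cosine transform and a sine transform
X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-i 2\pi f t}\, dt
     = \int_{-\infty}^{\infty} x(t) \cos(2\pi f t)\, dt
     \;-\; i \int_{-\infty}^{\infty} x(t) \sin(2\pi f t)\, dt
```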
The transform of a signal into its continuous frequency components is familiar to us all in nature: white light passing through a glass prism exposes its color spectrum. When this happens with raindrops it is called a rainbow. So a rainbow is really nature's Fourier transform, although one has probably never heard anyone call a rainbow a Fourier transform. The Fast Fourier Transform (FFT) is a DFT algorithm developed by Cooley and Tukey in 1965 which reduces the number of computations from something on the order of N₀² to N₀ log N₀. There are basically two types of Cooley-Tukey FFT algorithms in use: decimation-in-time and decimation-in-frequency.
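A minimal sketch of the decimation-in-time idea is shown below in Python. It assumes the input length is a power of 2 and is meant only to illustrate how the N₀ log N₀ operation count arises from recursively splitting the signal into even- and odd-indexed halves; the function and variable names are illustrative, not taken from the essay.

```python
import cmath

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT (length must be a power of 2)."""
    n = len(x)
    if n == 1:
        return list(x)
    # Split into even- and odd-indexed samples and transform each half.
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    # Combine the two half-length DFTs using the twiddle factors exp(-i*2*pi*k/n).
    result = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        result[k] = even[k] + twiddle
        result[k + n // 2] = even[k] - twiddle
    return result

# Example: transform of a short test signal of length 8 (a power of 2).
if __name__ == "__main__":
    signal = [1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0]
    print(fft_radix2(signal))
```

Each level of the recursion does work proportional to the signal length, and there are log₂ N₀ levels, which is where the N₀ log N₀ count comes from.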
The algorithm is simplified if N₀ is chosen to be a power of 2, but that is not a requirement. The Fourier transform, an invaluable tool in science and engineering, has been introduced and defined. Its symmetry and computational properties have been described, and the significance of the time or signal space (or domain) versus the frequency or spectral domain has been mentioned. In addition, there are important concepts in sampling required for understanding the sampling theorem and the problem of aliasing. A popular offspring of the discrete Fourier transform (DFT) is the Fast Fourier Transform (FFT) algorithm. The Fourier transform is a projection of any function onto complex exponentials of the form exp(iωt), where ω is the frequency.
Mathematically, the integral of the product of two functions is an inner product, and the complex exponentials are a convenient set of orthogonal basis functions for an arbitrary function space. That is the Fourier theorem. Interpreted another way, one can view a sampled signal (i.e., a list of numbers) as a vector of arbitrary dimension, and one can define a vector space containing all such possible sampled signals.
Now, what are some reasonable basis vectors which span this space? One convenient basis is the natural basis, which contains the orthogonal unit vectors (1, 0, 0, ..., 0), (0, 1, 0, ..., 0), ..., (0, 0, ..., 0, 1). But a projection onto these basis vectors yields little insight. A more useful basis is the normalized complex exponentials, exp(i2πkn/N).
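As a quick concreteness check, one can build these normalized exponential basis vectors for a small N and verify that they are orthonormal. This is a sketch with illustrative names and an arbitrary choice of N = 8:

```python
import numpy as np

N = 8  # small illustrative dimension

# Normalized exponential basis vectors: b_k[n] = exp(i*2*pi*k*n/N) / sqrt(N)
n = np.arange(N)
basis = np.array([np.exp(2j * np.pi * k * n / N) / np.sqrt(N) for k in range(N)])

# The inner products <b_j, b_k> should form the identity matrix (orthonormality).
gram = basis.conj() @ basis.T
print(np.allclose(gram, np.eye(N)))  # True
```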
A projection onto this basis allows us to reconstruct the signal as a sum of exponentials. This is useful because complex exponentials are sinusoids, and the ear hears sinusoids - or, at least, one can detect the presence or absence of sinusoids at various frequencies - so such a transform is musically relevant. Most significantly, the operation of filtering is very simple once we are in the frequency domain: to change the amount of a certain frequency in a signal, we just multiply the corresponding coefficient by a constant. The transform described above is very close to the Discrete Fourier Transform.
It is not exactly the same, unfortunately, because the DFT is not normalized. That is, instead of projecting onto exp(i2πkn/N), one actually projects onto (1/N) exp(i2πkn/N), where N is the number of samples. The point, however, remains: the DFT is a convenient way of breaking a signal down into its frequency components. Best of all, efficient algorithms (called Fast Fourier Transforms) exist to compute the DFT, which allow us to perform efficient filtering by taking the DFT, multiplying the frequency coefficients, and then reconstructing the signal (i.e., taking the inverse DFT). Doing this in the time domain requires linear convolution, which is in general much more time-intensive.
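A minimal sketch of this filtering recipe, using NumPy's FFT routines; the test signal, sample rate, and 20 Hz cutoff below are made-up illustrative values, not taken from the essay:

```python
import numpy as np

# Illustrative signal: a 5 Hz tone plus a 50 Hz "noise" tone, sampled at 1 kHz.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)

# 1. Take the DFT (via the FFT).
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

# 2. Multiply the frequency coefficients: here, zero everything above a 20 Hz cutoff.
spectrum[freqs > 20.0] = 0.0

# 3. Reconstruct the signal by taking the inverse DFT.
filtered = np.fft.irfft(spectrum, n=len(signal))

print(filtered.shape)  # same length as the original signal, 50 Hz tone removed
```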