Instructional report by Yogesh Sawant (ID: 104605545), in partial fulfillment of the course requirements of ESE 558. Submitted to Professor Murali Subbarao.

Aliasing

Aliasing is a potential problem whenever an analog signal is point sampled to convert it into a digital signal. It is what happens when analog data is represented on a digital system, and it arises whenever the analog signal is not sampled at a high enough frequency. A curved line drawn on a grid, where the curved line represents the analog data and the grid represents the digital system, is a good example of analog data on a digital system (see Figure 1.1).

[Figure 1.1: A line on a grid, representing analog data on a digital system. Figure 1.2: The digital version of the line.]

When the analog data is converted to digital form, some problems arise. The digital system in this example is the grid. To convert the analog line to a digital line, each square in the grid may either represent a point on the line, by being filled in, or represent an area where the line does not exist, by remaining white. There cannot be a square that is only partly filled.

Each square must be either filled in or not. In other words, to draw the line in digital format we need to completely fill in any square that a portion of the line passes through. That's all part of it being digital. Okay, no problem, right? The line goes through the different squares, so we'll fill in each square that the line goes through.
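As a concrete illustration of this fill-in rule, here is a small sketch (the particular curve, grid size, and text rendering are arbitrary choices for the demonstration, not taken from the figures) that samples a curve densely and marks every grid square it passes through:

import numpy as np

# Coarse grid: True means "square filled in". 16 x 16 cells over the unit square.
N = 16
grid = np.zeros((N, N), dtype=bool)

# The "analog" curve: y = 0.5 + 0.3*sin(2*pi*x), evaluated very densely so that
# every cell the curve passes through gets marked.
x = np.linspace(0.0, 1.0, 10000)
y = 0.5 + 0.3 * np.sin(2 * np.pi * x)

# Map each dense point to the grid cell that contains it and fill that cell.
cols = np.clip((x * N).astype(int), 0, N - 1)
rows = np.clip((y * N).astype(int), 0, N - 1)
grid[rows, cols] = True

# Print the digital version of the line (Figure 1.2 style): '#' = filled cell.
for row in grid[::-1]:
    print("".join("#" if cell else "." for cell in row))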

Figure 1.2 shows what the line looks like when we do this. Not very smooth, is it? We no longer have curves; all we have is a choppy line made up of squares and rectangles. Now let us describe the same thing in slightly more technical terms. A signal x(t), periodically sampled at time instants t = kTs, produces a time series x[k] = {x(0), x(Ts), x(2Ts), ...}.

Consider the example shown below, which consists of a sinusoid and an added "glitch". Prior analysis using a spectrum analyzer failed to detect the low-energy, broadband glitch, since it is overwhelmed by the energy in the sinusoid. Upon sampling, the glitch is missed altogether (as shown), and after reconstruction a pure sinusoid is recovered. The proud engineer is left with the belief that Shannon's sampling theorem has been successfully applied to the problem. The system is delivered to the customer, who immediately complains that the "anomaly" detector he purchased doesn't work reliably.
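The failure just described can be reproduced numerically. The sketch below uses made-up numbers (a 50 Hz sinusoid, a 0.2 ms glitch, and a 200 Hz sampling rate chosen with only the sinusoid in mind); none of the sample instants lands on the glitch, so the sampled data are indistinguishable from a pure sinusoid:

import numpy as np

# Dense "analog" time axis: a 50 Hz sinusoid with a narrow 0.2 ms glitch.
t = np.arange(0.0, 0.1, 1e-6)               # 100 ms at 1 MHz resolution
signal = np.sin(2 * np.pi * 50 * t)
glitch = (np.abs(t - 0.0312) < 1e-4) * 0.5  # brief bump near t = 31.2 ms
x = signal + glitch

# Sample at 200 Hz: comfortably above Nyquist for the 50 Hz sinusoid,
# but far too slow for the broadband glitch.
fs = 200.0
k = np.arange(0, int(0.1 * fs))
samples = np.interp(k / fs, t, x)

# No sample instant falls inside the glitch, so the sampled data look like a
# pure sinusoid and the anomaly is invisible after reconstruction.
print(np.max(np.abs(samples - np.sin(2 * np.pi * 50 * k / fs))))  # ~0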

The customer says that he can only detect a few random occurrences of the anomaly (glitch) every 100 ms, when there should have been over 100. To solve the problem, the engineer designs another system with a sampling rate 100 times faster than the first. Does this fix the problem? The error is due to aliasing: the frequency content of the glitch may extend to 10, 100, or more times the Nyquist rate defined by the sinusoid. What needs to be done is to establish a mechanism by which these errors can be predicted and quantified. Consider the data shown below, which illustrates a process resulting in an aliasing error.

In the diagram, the high-frequency signal is sampled at just under the Nyquist rate. As a result, each sample is taken at a slightly later point in each cycle. A smooth interpolating curve passing through the sample values reconstructs a sinusoid of lower frequency. This signal is called an aliased signal, because the reconstructed version of the sampled signal x(t) impersonates another signal at a lower (baseband) frequency. Unfortunately, one can also be misled by simple mathematical studies that seem to defy the sampling theorem. Suppose the signal shown below, given by x(t) = cos(2π · 1000t), is sampled at 2 kHz (exactly the Nyquist rate).

The resulting time series is x[k] = {..., 1, -1, 1, ...}, which seems sufficient to reconstruct x(t) (see figure). Suppose, however, that due to random timing delays the signal is instead x(t) = sin(2π · 1000t). Now sampling at 2 kHz results in the time series x[k] = {0, 0, 0, ...}. The reconstructed envelope impersonates the time series of another signal, y(t) = 0.
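Both effects are easy to check numerically. The sketch below reproduces the cosine/sine samples at exactly the Nyquist rate, and also illustrates the earlier point about sampling near the Nyquist rate: a 1100 Hz tone sampled at 2 kHz (frequencies chosen arbitrarily for the demonstration) produces exactly the same samples as a 900 Hz tone:

import numpy as np

fs = 2000.0                      # 2 kHz sampling rate
k = np.arange(8)
ts = k / fs

# cos(2*pi*1000*t) sampled at the Nyquist rate gives ..., 1, -1, 1, ...
print(np.round(np.cos(2 * np.pi * 1000 * ts), 3))   # [ 1. -1.  1. -1. ...]

# sin(2*pi*1000*t) sampled at the same instants is effectively all zeros,
# indistinguishable from y(t) = 0.
print(np.round(np.sin(2 * np.pi * 1000 * ts), 3))   # [ 0.  0.  0. ...]

# A 1100 Hz tone sampled at 2 kHz yields the same samples as a 900 Hz tone:
# the 1100 Hz signal aliases down to |1100 - 2000| = 900 Hz.
a = np.cos(2 * np.pi * 1100 * ts)
b = np.cos(2 * np.pi * 900 * ts)
print(np.allclose(a, b))                             # True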

Thus, one must always be careful in applying the sampling theorem, computing the sampling frequency, and ensuring that a suitable anti-aliasing filter is in place. For example, suppose that the previously introduced engineer's firm decides to build samplers for digital instruments. The system they previously designed is outfitted with an analog anti-aliasing filter whose passband extends only over the information spectrum (i.e., the frequency range of the sinusoid). The system then behaves as shown below. So we can say that, in order to prevent aliasing in a sampled-data system, the sampling frequency fs should be chosen to be greater than twice the highest frequency component fc of the signal being sampled, that is, fs > 2 fc. Although Shannon showed that this is a lower bound on the sampling rate, the sampled signal may not be interpretable without interpolation, as the next graph illustrates.
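A minimal sketch of the anti-aliasing fix described above, assuming a windowed-sinc lowpass as a stand-in for the analog anti-aliasing filter and using arbitrary demonstration frequencies (a 50 Hz in-band tone, an 880 Hz out-of-band component, and a 200 Hz output rate):

import numpy as np

# "Analog" signal modeled on a dense grid: a 50 Hz tone we care about plus an
# out-of-band 880 Hz component (both frequencies are arbitrary for the demo).
fs_dense = 10000.0
t = np.arange(0, 1.0, 1 / fs_dense)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

def amp_at(sig, fs, f0):
    """Approximate amplitude of the spectral component at frequency f0 (Hz)."""
    spec = np.abs(np.fft.rfft(sig)) * 2 / len(sig)
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    return spec[np.argmin(np.abs(freqs - f0))]

# Naive sampling at 200 Hz (every 50th point) with no anti-aliasing filter:
# the 880 Hz component folds down to |880 - 4*200| = 80 Hz, inside the band.
naive = x[::50]
print(round(amp_at(naive, 200, 80), 1))     # ~0.5: aliased energy at 80 Hz

# Anti-aliasing: windowed-sinc lowpass (cutoff ~100 Hz) applied *before*
# taking every 50th sample, standing in for the analog filter in the text.
taps = 501
n = np.arange(taps) - (taps - 1) / 2
h = np.sinc(2 * 100.0 / fs_dense * n) * np.hamming(taps)
h /= h.sum()
filtered = np.convolve(x, h, mode="same")[::50]
print(round(amp_at(filtered, 200, 80), 1))  # ~0.0: out-of-band energy removed
print(round(amp_at(filtered, 200, 50), 1))  # ~1.0: in-band sinusoid preserved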

Some examples of aliasing errors are shown below to drive home the point that aliasing can do serious damage to the information content of any image.

[Figures: images without anti-aliasing compared with anti-aliased images.]

Antialiasing

Antialiasing methods were developed to combat the effects of aliasing. There are three main classes of antialiasing algorithms.

o Since the aliasing problem is due to low resolution, one easy solution is to increase the resolution, causing sample points to occur more frequently. This increases the cost of image production.

o The image is created at high resolution and then digitally filtered. This method is called supersampling or postfiltering, and it eliminates the high frequencies which are the source of aliases.

o The image can be calculated by considering the intensities over a particular region.

This is called prefiltering.

Prefiltering

Prefiltering methods treat a pixel as an area and compute the pixel color based on the overlap of the scene's objects with the pixel's area. These techniques compute shades of gray based on how much of a pixel's area is covered by an object. Prefiltering amounts to sampling the shape of the object very densely within a pixel region. For shapes other than polygons, this can be very computationally intensive.
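As a sketch of the idea, the code below renders a disk into a small gray-scale image by setting each pixel's shade to the fraction of its area covered by the disk, estimated by dense subsampling inside the pixel (the disk, image size, and subsampling factor are assumed values for the demonstration):

import numpy as np

def coverage_disk(cx, cy, r, width, height, sub=8):
    """Prefiltered rendering of a disk: each pixel's gray level is the
    fraction of the pixel's area covered by the disk, estimated with a
    sub x sub grid of sample points inside the pixel."""
    img = np.zeros((height, width))
    # Offsets of the sub-samples within one pixel, at cell centers.
    offs = (np.arange(sub) + 0.5) / sub
    for j in range(height):
        for i in range(width):
            xs = i + offs[:, None]          # sub x 1
            ys = j + offs[None, :]          # 1 x sub
            inside = (xs - cx) ** 2 + (ys - cy) ** 2 <= r * r
            img[j, i] = inside.mean()       # covered fraction in [0, 1]
    return img

img = coverage_disk(cx=8.0, cy=8.0, r=5.0, width=16, height=16)
# Edge pixels take intermediate gray values instead of a hard, jagged boundary.
print(np.round(img[8, 2:6], 2))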

Postfiltering

Supersampling, or postfiltering, is the process by which aliasing effects in graphics are reduced by increasing the frequency of the sampling grid and then averaging the results down. This means calculating a virtual image at a higher spatial resolution than the frame store resolution and then averaging down to the final resolution. It is called postfiltering because the filtering is carried out after sampling.

Supersampling is basically a three-stage process.

o A continuous image I(x, y) is sampled at n times the final resolution; that is, the image is calculated at n times the frame resolution. This is a virtual image.

o The virtual image is then lowpass filtered.

o The filtered image is then resampled at the final frame resolution.

Algorithm for supersampling

o To generate the final image, we need to consider a region in the virtual image. The extent of that region determines the region involved in the lowpass operation. This process is called convolution.

o After we obtain the virtual image, which is at a higher resolution, the pixels of the final image are located over superpixels in the virtual image.

To calculate the value of the final image at (Si, Sj), we place the filter over the virtual (supersampled) image and compute the sum of the filter weights multiplied by the surrounding pixel values. An adjacent pixel of the final image is calculated by moving the filter S superpixels to the right. Thus the step size is the same as the scale factor S between the real and the virtual image (see the sketch after this list).

o Filters combine samples to compute a pixel's color. The weighted filter described here combines nine samples taken from inside a pixel's boundary.

Each sample is multiplied by its corresponding weight and the products are summed to produce a weighted average, which is used as the pixel color. In this filter, the center sample has the most influence. The other type of filter is an unweighted filter, in which each sample has equal influence in determining the pixel's color; in other words, an unweighted filter computes an unweighted average.

o The spatial extent of the filter determines the cutoff frequency. The wider the filter, the lower the cutoff frequency and the more blurred the image.
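A minimal sketch of the whole supersampling pipeline, assuming a scale factor S = 3, a trivial stand-in renderer, and a 3 x 3 weighted filter whose center weight dominates (the particular weights are illustrative, not prescribed by the text):

import numpy as np

S = 3  # scale factor between the virtual (supersampled) and final images

def render_virtual(width, height):
    """Stand-in renderer: samples a half-plane edge at S times the final
    resolution, producing the 'virtual image'."""
    ys, xs = np.mgrid[0:height * S, 0:width * S]
    # A slanted edge: 1 above the line y = 0.4*x, 0 below (in superpixel units).
    return (ys > 0.4 * xs).astype(float)

# 3x3 weighted filter; the center sample has the most influence.
weights = np.array([[1, 2, 1],
                    [2, 4, 2],
                    [1, 2, 1]], dtype=float)
weights /= weights.sum()

def supersample(width, height):
    virtual = render_virtual(width, height)
    final = np.zeros((height, width))
    for j in range(height):
        for i in range(width):
            # Place the filter over the superpixel block of final pixel (i, j);
            # adjacent final pixels step S superpixels across the virtual image.
            block = virtual[j * S:j * S + 3, i * S:i * S + 3]
            final[j, i] = np.sum(block * weights)
    return final

img = supersample(16, 16)
print(np.round(img[4, 8:12], 2))   # intermediate grays along the edge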

The options available in supersampling are:

o the value of S, the scaling factor between the virtual and the real images, and

o the choice of the extents and the weights of the filter.

As far as the first factor is concerned, the higher the value of S, the better the result is going to be; the compromise to be made is the high storage cost.

Sampling using Orthonormal Bases

Sampling is the practice of reconstructing a function from its values at a finite number of points. For images, sampling involves converting a continuous image f(x, y) to its digital representation I(i, j) such that both the spatial coordinates and the amplitude are converted to discrete values. In this section we will consider sampling using orthonormal bases.
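Before turning to orthonormal bases, here is a brief sketch of the image-sampling definition above, with an assumed continuous image and an arbitrary grid size and bit depth:

import numpy as np

def f(x, y):
    """An assumed continuous image f(x, y) on the unit square, values in [0, 1]."""
    return 0.5 + 0.5 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

# Spatial sampling: evaluate f at the centers of an 8 x 8 pixel grid.
N = 8
coords = (np.arange(N) + 0.5) / N
X, Y = np.meshgrid(coords, coords)
samples = f(X, Y)

# Amplitude quantization: map the continuous values to 8-bit integers,
# giving the digital representation I(i, j).
I = np.round(samples * 255).astype(np.uint8)
print(I)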

One method that implements sampling with orthonormal bases is the discrete wavelet transform. The discrete wavelet transform (DWT) is a representation of a signal x(t) ∈ L²(ℝ) using an orthonormal basis consisting of a countably infinite set of wavelets. Note the relationship to Fourier series and to the sampling theorem: in both cases we can perfectly describe a continuous-time signal x(t) using a countably infinite (i.e., discrete) set of coefficients. Specifically, Fourier series enabled us to describe periodic signals using Fourier coefficients {X[k] | k ∈ ℤ}, while the sampling theorem enabled us to describe bandlimited signals using signal samples {x[n] | n ∈ ℤ}. In both cases, signals within a limited class are represented using a coefficient set with a single countable index. The DWT can describe any signal in L²(ℝ) using a coefficient set parameterized by two countable indices: {d_{k,n} | k ∈ ℤ, n ∈ ℤ}. Wavelets are orthonormal functions in L²(ℝ) obtained by shifting and stretching a mother wavelet ψ(t) ∈ L²(ℝ).

As k increases, the wavelet stretches by a factor of two; as n increases, the wavelet shifts right. Power-of-two stretching is a convenient, though somewhat arbitrary, choice, and in our treatment of the discrete wavelet transform we will focus on it. Even with power-of-two stretches, there are various possibilities for ψ(t), each giving a different flavor of DWT. Wavelets are constructed so that {ψ_{k,n}(t) | n ∈ ℤ} (i.e., the set of all shifted wavelets at fixed scale k) describes a particular level of "detail" in the signal. As k becomes smaller (i.e., closer to -∞), the wavelets become more "fine-grained" and the level of detail increases.

In this way, the DWT can give a multi-resolution description of a signal, which is very useful in analyzing "real-world" signals. Essentially, the DWT gives us a discrete multi-resolution description of a continuous-time signal in L²(ℝ). These DWT concepts can be developed "from scratch" using Hilbert space principles. To aid the development, we make use of the so-called scaling function φ(t) ∈ L²(ℝ), which is used to approximate the signal up to a particular level of detail.

Like with wavelets, a family of scaling functions can be constructed via shifts and power-of-two stretches of a given mother scaling function φ(t):

φ_{k,n}(t) = 2^(-k/2) φ(2^(-k) t - n),   for all k, n ∈ ℤ.

Another way to represent sampling in terms of orthonormal bases is to expand a given signal f(t) in such a basis. One says that f is band-limited with a given bandwidth if its expansion contains only basis functions up to that bandwidth; in this case, band-limited functions of that bandwidth can be reconstructed by sampling at finitely many points. The calculation of the expansion coefficients is easiest when the basis is orthonormal. Suppose {e_j} is an orthonormal basis of eigenfunctions with respect to the inner product in question (the discrete inner product on the given level of the gasket), and write f = Σ_j c_j e_j; then each coefficient is recovered as c_j = ⟨f, e_j⟩, since {e_j} is an orthonormal basis.
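Both of the constructions above come down to evaluating inner products against an orthonormal family. A small numerical sketch, assuming the Haar scaling function as the mother function and approximating the continuous inner product by a sum on a fine grid:

import numpy as np

# Fine time grid standing in for continuous time on [0, 8).
dt = 1e-3
t = np.arange(0.0, 8.0, dt)

def phi(u):
    """Haar mother scaling function: 1 on [0, 1), 0 elsewhere (an assumed choice)."""
    return ((u >= 0) & (u < 1)).astype(float)

def phi_kn(k, n):
    """Scaled/shifted family: phi_{k,n}(t) = 2^(-k/2) * phi(2^(-k) t - n)."""
    return 2.0 ** (-k / 2) * phi(2.0 ** (-k) * t - n)

# At scale k = 1 the functions phi_{1,0}, phi_{1,1}, ... are orthonormal
# (with the inner product <f, g> = sum f*g*dt approximating the integral).
basis = [phi_kn(1, n) for n in range(4)]
gram = np.array([[np.sum(a * b) * dt for b in basis] for a in basis])
print(np.round(gram, 3))        # approximately the identity matrix

# Expand a signal in this basis: the coefficients are just inner products.
f = np.sin(2 * np.pi * t / 8.0)
c = np.array([np.sum(f * b) * dt for b in basis])
approx = sum(cj * bj for cj, bj in zip(c, basis))   # coarse approximation of f
print(np.round(c, 3))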

So all that remains to be done is to find an orthonormal basis for our space of eigenfunctions. But since eigenfunctions with different eigenvalues are already orthogonal, all we must do is apply the Gram-Schmidt process to the individual eigenspaces.
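A minimal sketch of this last step, assuming the eigenfunctions are available as sampled vectors and using the classical Gram-Schmidt procedure within a single (hypothetical) eigenspace:

import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of vectors (rows) with classical Gram-Schmidt,
    dropping any vector that is (numerically) dependent on the previous ones."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for b in basis:
            w -= np.dot(w, b) * b          # remove the component along b
        norm = np.linalg.norm(w)
        if norm > 1e-10:
            basis.append(w / norm)
        # else: v was linearly dependent on the earlier vectors; skip it
    return np.array(basis)

# Two vectors spanning one (hypothetical) eigenspace, given as samples.
eigenspace = [np.array([1.0, 1.0, 0.0, 0.0]),
              np.array([1.0, 0.0, 1.0, 0.0])]
onb = gram_schmidt(eigenspace)
print(np.round(onb @ onb.T, 6))   # identity: the new vectors are orthonormal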
