Chapter 3. Sampling and Quantizing

Our main purpose in this book is to study the basic techniques required to accurately simulate communication systems using digital computers. In most communications applications, waveforms are generated and processed through the system under study. The computer, of course, can only process numbers representing samples of the waveforms of interest. In addition, since the computer has finite word length, the sample values have finite precision. In other words, the sample values are quantized. Thus, sampling and quantizing are underlying operations in all digital simulations, and each of these operations gives rise to errors in the simulation results. The complete elimination of these error sources is not possible, and tradeoffs are often required. We will see that the best we can do is to minimize the effects of sampling and quantizing on simulation accuracy. It is worth noting that many physical systems make use of digital signal-processing (DSP) techniques and also suffer from the effects of sampling and quantizing errors.

Sampling

As illustrated in Figure 3.1, a digital signal is formed from an analog signal by the operations of sampling, quantizing, and encoding. The analog signal, denoted x(t), is continuous in both time and amplitude. The result of the sampling operation is a signal that is still continuous in amplitude but discrete in time. Such signals are often referred to as sampled-data signals. A digital signal is formed from a sampled-data signal by quantizing the time-sampled values onto a finite set of values and encoding each quantized value as a digital word. As we will see, errors are usually induced at each step of this process.


Figure 3.1. Sampling, quantizing, and encoding.

The Lowpass Sampling Theorem

The first step in forming a digital signal from a continuous-time signal, x(t), is to sample x(t) at a uniformly spaced series of points in time to produce the sample values x(kTs) = x[k].[1] The parameter Ts is known as the sampling period and is the inverse of the sampling frequency, fs.

A model for the sampling operation is illustrated in Figure 3.2. The signal x(t) is multiplied by a periodic pulse p(t) to form the sampled signal xs(t). In other words

Equation 3.1. xs(t) = x(t)p(t)


Figure 3.2. Sampling operation and sampling function.

The signal p(t) is referred to as the sampling function. The sampling function is assumed to be a narrow pulse, which is either zero or one. Thus xs(t) = x(t) when p(t) = 1, and xs(t) = 0 when p(t) = 0. We will see shortly that only the period of the sampling function p(t) is significant and the waveshape of p(t) is arbitrary. The pulse type function illustrated in Figure 3.2 simply provides us with the intuitively pleasing notion of a switch periodically closing at the sampling instants.

Since p(t) is a periodic signal, it can be represented by the Fourier series

Equation 3.2. p(t) = Σn Cn exp(j2πnfst)

in which the Fourier coefficients are given by

Equation 3.3. 

Substituting (3.2) into (3.1) gives

Equation 3.4. xs(t) = x(t) Σn Cn exp(j2πnfst)

for the sampled signal.

In order to derive the sampling theorem and thereby show that under appropriate conditions x(t) is completely represented by the samples x(kTs), we must derive the spectrum of xs(t) and show that x(t) can indeed be reconstructed from xs(t). The Fourier transform of the sampled signal is

Equation 3.5. 

which, upon interchanging integration and summation, becomes

Equation 3.6. 

Since the Fourier transform of the continuous-time signal x(t) is

Equation 3.7. X(f) = ∫ x(t) exp(−j2πft) dt

it follows from (3.6) that the Fourier transform of the sampled signal can be written

Equation 3.8. Xs(f) = Σn Cn X(f − nfs)

We therefore see that the effect of sampling a continuous-time signal is to reproduce the spectrum of the signal being sampled about dc (f = 0) and all harmonics of the sampling frequency (f = nfs). The translated spectra are weighted by the corresponding Fourier coefficient in the series expansion of the sampling pulse p(t).

The next, and final, step in the development of the sampling theorem is to define p(t). Since the samples are assumed to be taken instantaneously, a suitable definition of p(t) is

Equation 3.9. p(t) = Σk δ(t − kTs)

This is known as impulse function sampling in which the sample values are represented by the weights of the impulse functions. Substitution of (3.9) into (3.3) gives

Equation 3.10. 

Applying the sifting property of the delta function gives

Equation 3.11. Cn = 1/Ts = fs

Using this result in (3.2) shows that the Fourier transform of p(t) can be represented by

Equation 3.12. P(f) = fs Σn δ(f − nfs)

For impulse function sampling Cn = fs for all n. Thus, using (3.8) the spectrum of the sampled signal becomes

Equation 3.13. Xs(f) = fs Σn X(f − nfs)

Note that this result could have also been obtained from the expression

Equation 3.14. Xs(f) = X(f) ⊛ fs Σn δ(f − nfs)

where ⊛ denotes convolution. The generation of Xs(f) using (3.14) is illustrated in Figure 3.3 for the case of a bandlimited signal.


Figure 3.3. Sampling viewed in the frequency domain.

The sampling theorem can be developed from observation of Figure 3.3. In order for the samples x(nTs) to contain all of the information in the continuous-time signal x(t), so that no information is lost in the sampling process, the sampling must be performed so that x(t) can be reconstructed without error from the samples x(nTs). We will see that reconstruction of x(t) from xs(t) is accomplished by extracting the n = 0 term from Xs(f) by lowpass filtering. Accomplishing reconstruction without error therefore requires that the portion of the spectrum of Xs(f) about f = ±fs [the n = ±1 terms in (3.13)] not overlap the portion of the spectrum about f = 0 [the n = 0 term in (3.13)]. In other words, all translated spectra in (3.13) must be disjoint. This requires that fs − fh > fh, or fs > 2fh, which proves the sampling theorem for lowpass signals.

Theorem 1. A bandlimited signal may be reconstructed without error from samples of the signal if the sampling frequency fs exceeds 2fh, where fh is the highest frequency present in the signal being sampled.

While this theorem is usually referred to as the lowpass sampling theorem, it also works for bandpass signals. However, applying the lowpass sampling theorem to bandpass signals usually results in excessively high sampling frequencies. Sampling bandpass signals is the topic of a later section.

If fs < 2fh the spectra centered on f = ±fs overlap the spectrum centered on f = 0 and the output of the reconstruction filter, as illustrated in Figure 3.4, will be a distorted version of x(t). This distortion is referred to as aliasing. The effect of aliasing is also illustrated in Figure 3.4, assuming that the spectrum of x(t) is real.


Figure 3.4. Illustration of undersampling leading to aliasing error.
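
The effect of aliasing is easily demonstrated numerically. The short MATLAB sketch below (an illustration, not one of the script files used in this chapter; the frequencies are arbitrary) samples a 7 Hz sinusoid at fs = 10 Hz, well below the required 14 Hz, and compares the result with samples of a 3 Hz sinusoid. Since 7 Hz aliases to |7 − 10| = 3 Hz, the two sets of samples are identical and the original 7 Hz tone cannot be recovered from them.

% Sketch: aliasing of a 7 Hz tone sampled at fs = 10 Hz (below the Nyquist rate)
fs = 10;  Ts = 1/fs;            % sampling frequency and period
k  = 0:49;                      % sample index
x1 = cos(2*pi*7*k*Ts);          % samples of the 7 Hz tone
x2 = cos(2*pi*3*k*Ts);          % samples of the 3 Hz alias
max(abs(x1-x2))                 % essentially zero; the sample sets are identical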

Sampling Lowpass Random Signals

The waveform x(t) in the preceding discussion was assumed to be a deterministic finite-energy signal. As a result of these assumptions, the Fourier transform exists and the sampling theorem could be based on the spectrum (the Fourier transform) of the signal. In most of our applications throughout this book it will be more natural to assume that the simulation processes sample functions of a random process. Therefore, instead of selecting a sampling frequency based on the Fourier transform of the signal to be sampled, the selection of an appropriate sampling frequency must be based on the power spectral density (PSD) of the signal to be sampled.

For the case of random signals we write

Equation 3.15. Xs(t) = X(t)P(t)

where the sampling function P(t) is written

Equation 3.16. P(t) = Σk δ(t − kTs − D)

in which D is a random variable independent of X(t) and uniformly distributed in (0,Ts). Note the similarity of (3.15) and (3.1) and the similarity of (3.16) and (3.9). There are only two essential differences. First, uppercase letters are used in the time functions X(t), P(t), and Xs(t) to remind us that they represent random processes. The other difference is the use of the random variable D in (3.16). The effect of D is to ensure that Xs(t) is a stationary random process. Without the inclusion of D the sampled signal is cyclostationary. The effect of D is to make the time origin of P(t) random but fixed.

The power spectral density of Xs(t) is found by first determining the autocorrelation function of

Equation 3.17. 

The Fourier transform of the resulting autocorrelation function gives the PSD of Xs(t), which is [2]

Equation 3.18. SXs(f) = fs² Σn SX(f − nfs)

where SX(f) denotes the PSD of X(t). Note the similarity of (3.18) and (3.13). Also note that Figures 3.3 and 3.4 apply if the spectra are PSDs corresponding to X(t) and if the axes are labeled accordingly. Note that the sampling theorem as previously derived still holds, and therefore the signal must be sampled at a frequency exceeding twice the highest frequency present in X(t) if aliasing is to be avoided.

Bandpass Sampling

We now consider the problem of sampling bandpass signals. There are a number of strategies that can be used for representing bandpass signals by a set of samples. In the following sections we consider the two most common methods.

The Bandpass Sampling Theorem

The bandpass sampling theorem for real bandpass signals is stated as follows [2]:

Theorem 2. If a bandpass signal has bandwidth B and highest frequency fh, the signal can be sampled and reconstructed using a sampling frequency of fs = 2fh/m, where m is the largest integer not exceeding fh/B. Higher sampling frequencies are not necessarily usable unless they exceed 2fh, which is the value of fs dictated by the lowpass sampling theorem.

A plot of the normalized sampling frequency fs/B as a function of the normalized center frequency f0/B is illustrated in Figure 3.5, where f0 and fh are related by fh = f0 + B/2. We see that the allowable sampling frequency always lies in the range 2B ≤ fs ≤ 4B. However, for f0 ≫ B, which is typically the case, the sampling frequency dictated by the bandpass sampling theorem is approximately equal to, but is lower bounded by, 2B.


Figure 3.5. The required sampling frequency for bandpass sampling.
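
A short sketch makes the theorem concrete. The values of B and f0 below are arbitrary assumptions chosen only for illustration; for f0 ≫ B the computed sampling frequency is just slightly above 2B, as Figure 3.5 indicates.

% Sketch: minimum sampling frequency from the bandpass sampling theorem
B  = 1e6;                % assumed bandwidth of the bandpass signal, Hz
f0 = 20.5e6;             % assumed center frequency, Hz
fh = f0 + B/2;           % highest frequency present
m  = floor(fh/B);        % largest integer not exceeding fh/B
fs = 2*fh/m;             % minimum allowable sampling frequency (Theorem 2)
fprintf('fh = %g Hz, m = %d, fs = %g Hz = %.2f B\n',fh,m,fs,fs/B)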

Sampling Direct/Quadrature Signals

Suppose we have a bandpass signal expressed in the form

Equation 3.19. x(t) = A(t)cos[2πfct + φ(t)]

The function A(t) is referred to as the envelope of the bandpass signal and the function φ(t) is referred to as the phase deviation of the bandpass signal. In most communications applications both A(t) and φ(t) are lowpass signals and have bandwidths roughly on the order of the bandwidth of the information-bearing signal. Using standard trigonometric identities, the bandpass signal can be written

Equation 3.20. x(t) = A(t)cos[φ(t)]cos(2πfct) − A(t)sin[φ(t)]sin(2πfct)

or

Equation 3.21. x(t) = xd(t)cos(2πfct) − xq(t)sin(2πfct)

In this representation

Equation 3.22. xd(t) = A(t)cos[φ(t)]

is called the direct (or in-phase) component and

Equation 3.23. xq(t) = A(t)sin[φ(t)]

is the quadrature component. Since A(t) and φ(t) are lowpass signals, it follows that xd(t) and xq(t) are lowpass signals and therefore must be sampled in accordance with the lowpass sampling theorem. Note from (3.21) that if xd(t), xq(t), and the carrier frequency fc are known, the bandpass signal can be reconstructed without error. The representation of bandpass signals using direct and quadrature components will be covered in detail in Chapter 4.
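
The equivalence of (3.19) and (3.21) is easily verified numerically. In the sketch below the envelope, phase deviation, and carrier frequency are arbitrary assumptions; the two forms of the bandpass signal agree to within floating-point roundoff.

% Sketch: envelope/phase form versus direct/quadrature form of a bandpass signal
fc = 100;  fsamp = 10000;  t = (0:999)/fsamp;  % assumed carrier and dense time grid
A   = 1 + 0.5*cos(2*pi*2*t);                   % assumed lowpass envelope A(t)
phi = (pi/4)*sin(2*pi*3*t);                    % assumed lowpass phase deviation
x1  = A.*cos(2*pi*fc*t + phi);                 % form (3.19)
xd  = A.*cos(phi);   xq = A.*sin(phi);         % direct and quadrature components
x2  = xd.*cos(2*pi*fc*t) - xq.*sin(2*pi*fc*t); % form (3.21)
max(abs(x1-x2))                                % essentially zero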

The frequency-domain representation of a bandpass signal is given in Figure 3.6(a). The complex envelope corresponding to this signal is defined by

Equation 3.24. 


Figure 3.6. Bandpass signal and the corresponding complex envelope.

Since both xd(t) and xq(t) are lowpass signals:

Equation 3.25. x̃(t) = xd(t) + jxq(t)

is lowpass, as illustrated in Figure 3.6(b). In Figure 3.6 we see that the spectrum of the complex envelope is confined to |f| ≤ B/2, and consequently xd(t) and xq(t) are lowpass signals. Thus xd(t) and xq(t) must be sampled according to the lowpass sampling theorem. Since the highest frequency present in xd(t) and xq(t) is B/2, the minimum sampling frequency for each is B. However, two lowpass signals [xd(t) and xq(t)] must be sampled rather than one. As a result, a total sampling rate exceeding 2B must be used. We therefore see that sampling the complex envelope using the lowpass sampling theorem yields the same required sampling frequency as sampling the real bandpass signal using the bandpass sampling theorem for the typical case in which f0 ≫ B.

Example 3.1. 

It follows from the preceding discussion that the bandpass signal x(t) can be reconstructed without error if xd(t) and xq(t) are sampled appropriately in accordance with the lowpass sampling theorem. The advantage of representing bandpass signals by lowpass signals is obvious. Consider, for example, that we are to represent 1 second of an FM signal by a set of samples. Assume that the carrier frequency is 100 MHz (typical for the FM broadcast band) and that the highest frequency present in the modulation or information-bearing signal is 15 kHz. The bandwidth B of the modulated signal is usually approximated by Carson’s rule [2], which is

Equation 3.26. B = 2(D + 1)W, where D is the deviation ratio and W is the highest frequency in the modulating signal

Assuming a deviation ratio D of 5 gives

Equation 3.27. B = 2(5 + 1)(15 kHz)

which is 180 kHz (90 kHz each side of the carrier). The highest frequency present in the modulated signal is therefore 100,090 kHz. Thus, 1 second of signal requires a minimum of 200,180,000 samples according to the lowpass sampling theorem.

Now suppose we elect to represent the FM signal in direct/quadrature form. The bandwidth of both xd(t) and xq(t) is B/2, or 90 kHz. Thus xd(t) and xq(t) will each require at least 180,000 samples to represent 1 second of signal. This gives a total of 360,000 lowpass samples to represent 1 second of data. The savings is

Equation 3.28. 200,180,000/360,000 ≈ 556

and translates directly into a corresponding reduction in computer run time.
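
The arithmetic in this example is easily scripted. The following sketch simply repeats the calculation for the assumed parameters (100 MHz carrier, 15 kHz message bandwidth, deviation ratio of 5).

% Sketch: sample-count comparison for the FM example
fc = 100e6;                 % carrier frequency, Hz
W  = 15e3;                  % highest frequency in the modulating signal, Hz
D  = 5;                     % deviation ratio
B  = 2*(D+1)*W;             % Carson's-rule bandwidth (180 kHz)
fh = fc + B/2;              % highest frequency in the modulated signal
Nbp = 2*fh;                 % samples/s for the real bandpass signal
Nlp = 2*B;                  % two lowpass signals, each sampled at B samples/s
fprintf('bandpass: %g, direct/quadrature: %g, savings: %g\n',Nbp,Nlp,Nbp/Nlp)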

Quantizing

The quantizing process and a simple fixed-point encoding process are illustrated in Figure 3.7, which shows a continuous-time waveform and a number of samples of that waveform. The sample values are represented by the heavy dots. Each sample falls into a quantizing level. Assuming that there are n quantizing levels and that each quantizing level is represented by a b-bit binary word, it follows that

Equation 3.29. n = 2^b


Figure 3.7. Quantizing and encoding.

In Figure 3.7 each quantizing level is mapped to a three-bit (b = 3 and n = 8) digital word. After quantizing, sample values are represented by the digital word corresponding to the quantizing level into which the sample value falls, and digital processing of the waveform is accomplished by processing these digital words. For example, the first three sample values (from left to right) in Figure 3.7 are represented by the binary sequence 100110111.

From sampling theory we know that a continuous-time bandlimited signal, sampled at a frequency exceeding the Nyquist frequency, can be reconstructed without error from the samples. Therefore, under these conditions the sampling operation is reversible. The quantizing operation, however, is not reversible. Once sample values are quantized, only the quantizing level is maintained and therefore a random error is induced. As before, the value of the waveform at the sampling instant t = kTs is denoted x[k], and the corresponding quantized value is denoted xq[k], which is

Equation 3.30. xq[k] = x[k] + eq[k]

where eq[k] is the error induced by the quantizing process. The quantizer model implied by (3.30) is illustrated in Figure 3.8. If the original signal is not bandlimited, the resulting digital signal contains both aliasing and quantizing errors.


Figure 3.8. Model for quantizing error.

The quantity of interest is the signal-to-noise ratio (SNR), where the noise is interpreted as the noise resulting from the quantizing process. The SNR due to quantizing, denoted (SNR)q, is

Equation 3.31. (SNR)q = E{x²[k]}/E{eq²[k]} = S/Nq

where E{·} denotes statistical expectation and Nq is the noise power resulting from the quantizing process. In order to determine (SNR)q the probability density function of the error eq[k] must be known. The pdf of the quantizing error is a function of the format used to represent numbers in the computer. There are a wide variety of formats that can be used. The broad categories are fixed point and floating point.

Fixed-Point Arithmetic

Even though we are, for the most part, considering simulation using general-purpose computers in which numbers are represented in a floating-point format, we pause to consider quantizing errors resulting from fixed-point number representations. There are several reasons for considering fixed-point arithmetic (quantizing). First, by considering fixed-point arithmetic, the basic mechanism by which quantizing errors arise is illustrated. Also, special-purpose simulators have been developed that use fixed-point arithmetic because fixed-point calculations can be executed much faster than floating-point calculations. In addition, power consumption is usually lower with fixed-point processors. Perhaps the most important reason for considering fixed-point arithmetic is that devices using fixed-point arithmetic must often be simulated. For example, software-based communications systems are becoming popular, since they can easily be reconfigured for different applications by simply downloading appropriate programs to the device. In order to be commercially attractive in a competitive environment, these systems must be available at the lowest possible cost. Cost is typically minimized by using fixed-point arithmetic and, in addition, fixed-point algorithms execute much faster than floating-point algorithms. We should point out that the design of these software-based devices usually starts with a simulation and, when the simulation shows that the device is properly designed and meets specifications, the simulation code is downloaded to the device.[2] In such applications, the simulation of the device and the physical device merge to a great extent. As previously mentioned, speed, cost, and power consumption requirements usually dictate that many commercial devices utilize fixed-point arithmetic, and simulation is an important tool for the design and performance evaluation of these devices.

Assume that the width of a quantizing level, as illustrated in Figure 3.7, is denoted Δ. Also assume that a sample value falling within a given quantizing level is assigned the value at the center of that level.[3] In this case the maximum value of |eq[k]| is Δ/2. If the number of quantizing levels is large, corresponding to long digital wordlengths, and if the signal varies significantly from sample to sample, a given sample is equally likely to fall at any point in the quantizing level. For this case the errors due to quantizing can be assumed to be uniformly distributed and independent. The pdf (probability density function) of the quantizing error is therefore uniform over the range [−Δ/2, Δ/2] as illustrated in Figure 3.9. Denoting the quantizing error of the kth sample by eq[k], we have

Equation 3.32. E{eq[k]} = 0

so that the quantizing error is zero mean. The variance of eq[k] is

Equation 3.33. E{eq²[k]} = Δ²/12


Figure 3.9. Assumed pdf of quantizing error.

We now compute the signal-to-noise ratio due to quantizing.

Assume that a quantizer has a dynamic range D and that the word length is b. Assuming binary arithmetic, there are 2^b possible quantizing levels and the dynamic range is given by

Equation 3.34. D = 2^b Δ

Thus

Equation 3.35. Δ = D 2^(−b)

and the noise power due to quantizing is

Equation 3.36. Nq = E{eq²[k]} = Δ²/12 = (D²/12) 2^(−2b)

The dynamic range is determined by the peak-to-peak value of the input signal to the quantizer. If the signal power is S, the signal-to-noise ratio due to quantizing, (SNR)q, is

Equation 3.37. (SNR)q = S/Nq = 12 S 2^(2b)/D²

Assuming the signal to be zero mean, the values of S and D are related by the crest factor of the underlying signal. The crest factor is defined as the ratio of the RMS value, or standard deviation, of a signal to the peak value of the signal. To illustrate this relationship, assume that the underlying signal, having dynamic range (peak-to-peak value) D, lies in the range ±D/2. Since the signal power is S, the standard deviation is √S. Therefore, the crest factor is

Equation 3.38. Fc = √S/(D/2) = 2√S/D

Substitution of (3.38) into (3.37) gives

Equation 3.39. (SNR)q = 3 Fc² 2^(2b)

which, in dB units, is

Equation 3.40. 10 log10 (SNR)q = 4.77 + 20 log10 Fc + 6.02b

Note that signals with a high crest factor are more immune to quantizing error than signals with a small crest factor. This result is logical, since signals with a high crest factor have a large standard deviation, which means that they are more spread out through the quantizing levels. It is, however, the word length b that has the most significant impact on (SNR)q. Note that (SNR)q improves by 6 dB for each bit added to the word length.
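
The 6 dB/bit behavior is easy to confirm by simulation. The sketch below quantizes a zero-mean signal that is uniformly distributed over its dynamic range with an assumed b-bit rounding quantizer and compares the measured (SNR)q with the value predicted by the development above; the signal model and quantizer details are illustrative assumptions only.

% Sketch: measured versus predicted quantizing SNR for a b-bit quantizer
b  = 8;                             % assumed word length in bits
N  = 100000;                        % number of samples
x  = 2*rand(1,N) - 1;               % zero-mean signal uniform on [-1,1], so D = 2
D  = 2;                             % dynamic range (peak-to-peak value)
delta = D/2^b;                      % width of a quantizing level
xq = delta*round(x/delta);          % round each sample to the nearest level
eq = xq - x;                        % quantizing error
snr_meas = 10*log10(mean(x.^2)/mean(eq.^2));
snr_pred = 10*log10(12*mean(x.^2)*2^(2*b)/D^2);   % S/Nq with Nq = (D^2/12)*2^(-2b)
fprintf('measured: %.2f dB   predicted: %.2f dB\n',snr_meas,snr_pred)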

Floating-Point Arithmetic

As mentioned previously, throughout most of this book our concern will be simulations for execution on general-purpose computers that utilize floating-point number representations. The form of a floating-point number is ±M × 2^(±E), where M and E are referred to as the mantissa and exponent, respectively. Where accuracy is required, 64-bit (double-precision) digital words are used and these 64 bits must be allocated between the mantissa and exponent. This allocation can have a significant effect on the result of a given computation. Fortunately, this assignment has been standardized and most, but not all, computers adhere to the standard. The ANSI/IEEE standard for floating-point arithmetic specifies that 53 bits are assigned to the mantissa and 11 bits are assigned to the exponent [3]. Fortunately, MATLAB provides a simple way to determine whether or not the IEEE standard is implemented on a given computer. One simply enters isieee at the MATLAB prompt, and a 1 is returned if the standard is implemented.

Since we will be using MATLAB throughout this book for developing and demonstrating simulations, it is important to consider the accuracy that can be expected. For our purposes, the most important parameters resulting from the floating-point format are the resolution (the difference between 1 and the next largest floating-point number), which is the MATLAB variable eps, the largest number that can be represented (realmax in MATLAB) and the smallest positive number that can be represented (realmin in MATLAB). Executing the simple MATLAB script mparameters tests for compliance with the IEEE floating-point standard and returns the values of each of these three important parameters. The script mparameters follows.

% File: c3_mparameters.m
format long       % display full precision
a = ['The value of isieee is ',num2str(isieee),'.'];
b = ['The value of eps is ',num2str(eps,15),'.'];
c = ['The value of realmax is ',num2str(realmax,15),'.'];
d = ['The value of realmin is ',num2str(realmin,15),'.'];
disp(a)           % display isieee
disp(b)           % display eps
disp(c)           % display realmax
disp(d)           % display realmin
format short      % restore default format
% End script file.

Executing the file mparameters on a computer that implements the IEEE floating-point standard provides the following results:

≫ mparameters
The value of isieee is 1.
The value of eps is 2.22044604925031e-016.
The value of realmax is 1.79769313486232e+308.
The value of realmin is 2.2250738585072e-308.

The first result displayed (isieee = 1) indicates that the computer does indeed conform to the ANSI/IEEE standard for floating-point arithmetic. The next result is eps, which is essentially the smallest resolvable difference between two numbers. Note that eps is 2^(−52) (the extra bit associated with the mantissa accounts for the sign bit), which illustrates the relationship between eps and the word length. We see that more than 15 significant figures of accuracy are achieved. It is the value of eps that ties most closely to the width of the quantization level Δ that was discussed in connection with fixed-point arithmetic. Note that ±realmax defines the dynamic range, which, in this case, exceeds 600 orders of magnitude.

Example 3.2. 

Suppose that we use floating-point arithmetic, consistent with the ANSI/IEEE standard, to compute the value of

Equation 3.41. 1 − 0.4 − 0.3 − 0.2 − 0.1

which is obviously zero. However, performing this computation in MATLAB gives the following:

≫ a = 1-0.4-0.3-0.2-0.1
a =
 -2.7756e-017

We see that the error induced by floating-point arithmetic is certainly small and is probably negligible in most applications. The error is not zero, however, and the user should always keep in mind that computed results are not usually exact.

From this point forward we will assume that the quantizing errors resulting from floating-point calculations are negligible. While this is an appropriate (and necessary) assumption for the material contained in the remainder of this book, one should be aware that even small errors can accumulate, in certain types of calculations, to the point where the results are useless. DSP calculations in which the signal of interest is a small difference of two very large numbers are a classical example. Very large block length FFTs can give problems because of the large number of butterfly calculations that are cascaded. There are many other examples. In developing DSP algorithms care must be used to ensure that finite word length effects are minimized.
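
As a small illustration of error accumulation, the following sketch sums 0.1 one million times. Since 0.1 has no exact binary floating-point representation, the computed sum differs slightly from the exact value of 100,000; the discrepancy is tiny here, but the same mechanism can become significant in long cascaded calculations.

% Sketch: accumulation of floating-point roundoff error
s = 0;
for k = 1:1000000
    s = s + 0.1;                    % 0.1 is not exactly representable in binary
end
s - 100000                          % small, but not zero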

Reconstruction and Interpolation

We now consider the reconstruction of a continuous-time signal from a sequence of samples. Since a digital simulation processes only sample values, a continuous-time signal is never reconstructed from a set of samples in a simulation environment. Consideration of the reconstruction process, however, leads to the subject of interpolation, which is an important operation in a simulation environment.

The general reconstruction technique is to pass the samples through a linear filter having an impulse response h(t). Thus the reconstructed waveform is given by xr(t) = xs(t) ⊛ h(t), where, as before, ⊛ denotes convolution. From (3.1) and (3.9) we can write

Equation 3.42. xs(t) = Σk x(kTs) δ(t − kTs)

Thus, the reconstructed signal is given by

Equation 3.43. 

which is

Equation 3.44. xr(t) = Σk x(kTs) h(t − kTs)

The problem is to choose an h(t) that gives satisfactory results with a reasonable level of computational burden.

Ideal Reconstruction

Assuming that a bandlimited signal is sampled at a rate exceeding 2fh, the signal may be reconstructed by passing the samples through an ideal lowpass filter having a bandwidth of fs/2. This can be seen in Figure 3.10. If fs > 2fh the spectra centered on f = ±fs do not overlap the spectrum centered on f = 0. The output of the reconstruction filter is fsX(f) or, in the time domain, fsx(t). Amplitude scaling by 1/fs = Ts yields x(t).


Figure 3.10. Reconstruction filter.

It follows from Figure 3.10 that the impulse response of the reconstruction filter is

Equation 3.45. 

where the scale factor of Ts has been included. Thus:

Equation 3.46. 

or

Equation 3.47. 

Substitution into (3.44) gives

Equation 3.48. 

or, in more convenient form:

Equation 3.49. xr(t) = Σk x(kTs) sinc[fs(t − kTs)]

Note that since the signal x(t) is assumed bandlimited and the sampling frequency is sufficiently high to ensure that aliasing errors are avoided, xr(t) = x(t). Thus, perfect reconstruction is achieved, at least in theory. Note, however, that (3.49) can never be used in practice, since the sinc(·) function is infinite in extent. Equation (3.49) will be used, however, as the building block for a practical interpolation technique in the following section.
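
Although (3.49) cannot be implemented exactly, a truncated version of the sum illustrates the reconstruction process. In the sketch below a 1 Hz cosine is sampled at 10 Hz and reconstructed on a dense time grid; the record length, frequencies, and the inline sinc definition are assumptions made for the illustration, and the reconstruction error is largest near the ends of the record, where the series is truncated.

% Sketch: approximate reconstruction of a 1 Hz cosine from its samples via (3.49)
fs = 10;  Ts = 1/fs;                     % sampling frequency, well above 2 Hz
k  = 0:99;                               % sample indices (10 seconds of data)
xk = cos(2*pi*k*Ts);                     % the samples x(kTs)
t  = 0:0.001:9.9;                        % dense time grid for the reconstruction
snc = @(u) sin(pi*(u+(u==0)))./(pi*(u+(u==0))) + (u==0);   % sinc with sinc(0) = 1
xr = zeros(size(t));
for n = 1:length(k)                      % truncated form of the series (3.49)
    xr = xr + xk(n)*snc(fs*(t - k(n)*Ts));
end
max(abs(xr - cos(2*pi*t)))               % error is largest near the record edges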

Upsampling and Downsampling

Upsampling and downsampling are used in the simulation of many systems. The need for these operations is illustrated by an example. Consider the direct sequence spread-spectrum system illustrated in Figure 3.11. The data source generates a data signal having a narrowband spectrum of bandwidth W.[4] The data signal is multiplied by a wideband spreading code c(t), which is represented by a binary sequence with a symbol rate much greater than the data rate. The ratio of the spreading code rate to the data rate is called the processing gain of the system. Multiplication by the spreading code c(t) generates a wideband signal, having bandwidth B. The channel imperfections may consist of interference from other users, jamming signals in military communication systems, noise, and perhaps other degradations not accounted for in Figure 3.11. The waveform at the output of the channel is multiplied by the despreading code. The spreading code is assumed to take on values ±1 and, if the spreading code and the despreading code are identical and properly synchronized, multiplying them together gives c²(t) = 1 so that the spreading and despreading operations have no effect on the signal of interest. Note that the data signal is multiplied by c(t) twice and the channel impairments are multiplied by c(t) only once. Thus, at the input to the lowpass filter following multiplication by the despreading code the data signal is again narrowband and all other components are wideband. The lowpass filter extracts the narrowband data signal and passes it to the receiver.


Figure 3.11. System in which upsampling and downsampling is useful.

The important attribute of the system illustrated in Figure 3.11 is that both narrowband signals and wideband signals are present. If B ≫ W, which is typically the case, sampling the narrowband signal at the sampling rate required for the wideband signal will be inefficient and will result in excessive simulation run times. Ideally, each signal should be sampled with a sampling rate appropriate for that signal.

Since signals having two different bandwidths are present in the example system, it is appropriate to use two different sampling rates. Thus the sampling rate must be increased at the boundary between the narrowband and wideband portions of the system (left-hand dashed line in Figure 3.11) and decreased at the boundary between the wideband and narrowband portions of the system (right-hand dashed line in Figure 3.11). Increasing the sampling rate is accomplished by upsampling followed by interpolation, in which new sample values are interpolated from old sample values. Reducing the sampling rate is accomplished by decimation in which unneeded samples are discarded. Upsampling is represented by a block with an upward-pointing arrow and downsampling is represented by a block with a downward-pointing arrow. The parameter M represents the factor by which the sampling period is reduced (upsampling) or increased (downsampling) by the process.

In the material to follow we will use Ts to represent the sampling period prior to the upsampling or downsampling process. After upsampling or downsampling, the sampling period will be represented by Tu or Td, respectively. The signal prior to upsampling or downsampling is denoted x(t) (no subscript on x) and the signal after upsampling or downsampling will be denoted using the appropriate subscript; for example, xu(t) and xd(t).

Upsampling and Interpolation

Upsampling is the first operation illustrated in Figure 3.11 and is the process through which the sampling frequency is increased. Since upsampling reduces the sampling period by a factor of M, the new sampling period Tu and the old sampling period Ts are related by Tu = Ts/M. Thus, in terms of an underlying continuous-time signal x(t), the upsampling process generates new sample values x(kTu) = x(kTs/M) from the old sample values x(kTs). As an example, suppose that we construct a new set of samples by interpolating the reconstructed signal xr(t), given by (3.49), at points t = nTs/M. Performing this operation gives

Equation 3.50. xr(nTs/M) = Σk x(kTs) sinc(n/M − k)

This is not a practical interpolator, since the sinc(·) function is infinite in extent. Truncating the sinc(·) function yields

Equation 3.51. 

a more practical, although not perfect, interpolator. Making L large clearly reduces the interpolation error. However, since each interpolated sample requires 2L + 1 samples, the computational burden is often unacceptable for large L. Thus, there is a tradeoff between computational burden and accuracy. This tradeoff will be seen many times in our study of simulation. Note also that, since a causal function must be used for interpolation, a delay of LTs is induced. This delay does not present a problem in simulation, but we must be aware of its presence.
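
One possible coding of the truncated interpolator follows. It is a sketch rather than one of the chapter's script files: the sum for each output point uses the 2L + 1 original samples nearest the interpolation instant, and the sum is simply shortened near the ends of the record where fewer samples exist.

% File: trunc_sinc_interp.m (illustrative sketch)
function y = trunc_sinc_interp(x,M,L)
% Truncated-sinc interpolation of the sequence x by an integer factor M,
% using at most 2L+1 original samples for each interpolated point.
snc = @(u) sin(pi*(u+(u==0)))./(pi*(u+(u==0))) + (u==0);   % sinc with sinc(0) = 1
N = length(x);
y = zeros(1,(N-1)*M+1);
for n = 0:(N-1)*M                        % interpolation instant t = n*Ts/M
    kc = round(n/M);                     % index of the nearest original sample
    for k = max(0,kc-L):min(N-1,kc+L)    % 2L+1 neighbors, clipped at the edges
        y(n+1) = y(n+1) + x(k+1)*snc(n/M - k);
    end
end
% End of function file.

For example, xi = trunc_sinc_interp(x,6,10) interpolates a sequence x by a factor of 6 using up to 21 original samples per interpolated point.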

A more practical interpolator, requiring much less computation than the sinc(·) function interpolator, is the linear interpolator. The linear interpolator, although much simpler than the sinc(·) function interpolator, can be used when the underlying signal is significantly oversampled. The impulse response of the linear interpolator is defined by

Equation 3.52. h[k] = 1 − |k|/M for |k| ≤ M, and h[k] = 0 otherwise

Note that there are 2M − 1 nonzero values of h[k]. A MATLAB program for developing h[k] follows:

% File: c3_lininterp.m
function h=c3_lininterp(M)
h1 = zeros(1,(M-1));
for j=1:(M-1)
    h1(j) = j/M;
end
h = [0,h1,1,fliplr(h1),0];
% End of function file.

The upsampling operation is implemented on a discrete set of samples as a two-step process as illustrated in Figure 3.12. We first form xu[k] from x[k] according to

Equation 3.53. xu[k] = x[k/M] for k = 0, ±M, ±2M, ..., and xu[k] = 0 otherwise

which can be implemented with the MATLAB code

% File: c3_upsamp.m
function out=c3_upsamp(in,M)
L = length(in);
out = zeros(1,(L-1)*M+1);
for j=1:L
 out(M*(j-1)+1)=in(j);
end
% End of function file.

Figure 3.12. Upsampling and interpolation.

The result of this operation is to place M − 1 zero-valued samples between each pair of samples in the original sequence x[k]. Interpolation is then accomplished by convolving xu[k] with h[k], the impulse response of the linear interpolator. The process of linear interpolation with M = 3 is illustrated in Figure 3.13. Note that only two original samples are used to form each interpolated value. The necessary delay is then Ts. As illustrated in Figure 3.13, the interpolated value is found by summing the contributions from x[k] and x[k − 1], which are ((M − 1)/M)x[k] and ((M − 2)/M)x[k − 1], respectively. Thus, with M = 3, the interpolated value is (2/3)x[k] + (1/3)x[k − 1].


Figure 3.13. Illustration of interpolation process.

Since only two samples are used in the interpolation process, linear interpolation is very fast.

Example 3.3. 

As an illustration of upsampling and interpolation we consider interpolating the samples of a sinewave. The basic samples are illustrated in the top segment of Figure 3.14 as x[k]. Upsampling with M = 6 yields the sample values xu[k]. Linear interpolation with M = 6 gives the sequence of samples xi[k]. Note the delay of Ts. The MATLAB program used to generate Figure 3.14 follows:

% File: c3_upsampex.m
M = 6;                     % upsample factor
h = c3_lininterp(M);       % imp response of linear interpolator
t =  0:10;                 % time vector
tu = 0:60;                 % upsampled time vector
x = sin(2*pi*t/10);        % original samples
xu = c3_upsamp(x,M);       % upsampled sequence
subplot(3,1,1)
stem(t,x,'k.')
ylabel('x')
subplot(3,1,2)
stem(tu,xu,'k.')
ylabel('xu')
xi = conv(h,xu);
subplot(3,1,3)
stem(xi,'k.')
ylabel('xi')

Figure 3.14. Upsampling and interpolation operations used in Example 3.3.

It is clear that upsampling and downsampling involve a significant amount of overhead. If the upsampling factor M is modest, say, 2 or 3, it is usually best to develop the simulation using a single sampling frequency and therefore oversample the narrowband signals present in the system. If, however, the difference in B and W exceeds an order of magnitude, it is usually most efficient to utilize multiple sampling frequencies in the simulation and sample each signal at an appropriate sampling frequency.

Downsampling (Decimation)

Downsampling is the second operation illustrated in Figure 3.11 and is the process through which the sampling frequency is reduced. The process is accomplished by replacing a block of M samples by a single sample. Downsampling is therefore much simpler than upsampling. The functional representation for the samples at the output of a downsampler is obtained by recognizing that the downsampling process increases the sampling period by a factor of M. Thus the samples at the output of a downsampler, denoted xd(kTd), are given by xd(kTd) = x(kMTs). The sample values are given by

Equation 3.54. xd[k] = x[kM]

We need to be careful, however, to ensure that the downsampled signal does not exhibit aliasing.
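
In MATLAB the decimation itself is a one-line indexing operation, as the following sketch (with an assumed factor M and an assumed narrowband input) shows. If the signal is not already confined to the new folding frequency 1/(2Td), it must first be lowpass filtered to prevent aliasing.

% Sketch: downsampling a sample sequence by an integer factor M
M  = 4;                           % assumed downsampling factor
x  = cos(2*pi*0.01*(0:399));      % assumed signal, narrowband relative to 1/(2*Td)
xd = x(1:M:end);                  % retain every Mth sample; Td = M*Ts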

The Simulation Sampling Frequency

A fundamental decision that must be made in the development of a simulation is the selection of the sampling frequency. For linear systems without feedback, the necessary sampling frequency is dictated by the allowable aliasing error, which in turn is dependent on the power spectral density of the underlying pulse shape.[5] We therefore pause to consider a common model for representing baseband signals used for data transmission and to develop a technique for calculating the power spectral density of the signal corresponding to the pulse shape. Since the pulse shape plays such an important role in the selection of an appropriate sampling frequency, we consider the problem in some detail.

We know from our study of sampling that the complete elimination of aliasing errors requires an infinite sampling frequency. This is clearly a situation that cannot be achieved in practice. In addition, as the sampling frequency increases, more samples must be processed for each data symbol passed through the system. This increases the time required for executing the simulation. Since aliasing errors cannot be eliminated in practice, a natural strategy is to choose a sampling frequency for the simulation that achieves an acceptable tradeoff between aliasing errors and simulation run time. Of course, a sampling frequency must be selected so that the errors due to aliasing are negligible compared to the system degradations being investigated by the simulation.

General Development

A common model for the transmitted signal in a digital communication system is

Equation 3.55. x(t) = A Σk ak p(t − kT − Δ)

where {ak} is a sequence of random variables representing the data. The values of ak are typically +1 or −1 in a binary digital system, p(t) is the pulse shape function, T is the symbol period (bit period for binary transmission), and Δ is a random variable uniformly distributed over the symbol period.[6] The parameter A is a scaling constant used to establish the power in the transmitted signal. By incorporating this parameter, we can scale the pulse shape function so that the peak value is unity. We assume that E{ak} = 0 and E{akak+m} = Rm represent the mean and the autocorrelation of the data sequence, respectively.

It is easily shown [2] that the autocorrelation function of the transmitted signal is given by

Equation 3.56. 

in which

Equation 3.57. 

The required sampling frequency is determined from the PSD of the transmitted signal. Applying the Wiener-Khintchine theorem to (3.56) gives

Equation 3.58. 

or

Equation 3.59. 

We now put this in a form more useful for computation.

The first step is to apply the change of variables α = τ – mT to (3.59). This gives

Equation 3.60. 

Denoting

Equation 3.61. 

gives

Equation 3.62. 

We now determine Sr(f).

Fourier transforming (3.57) gives

Equation 3.63. 

Applying the change of variables β = t + α allows (3.63) to be expressed in the form

Equation 3.64. 

The second term in (3.64) is the Fourier transform of the pulse shape function p(t), and the first term is the complex conjugate of the second term. This gives

Equation 3.65. 

where G(f) is the energy spectral density of the pulse shape function p(t). Substitution of (3.65) into (3.62) gives the general result

Equation 3.66. 

In many applications the data symbols can be assumed independent. This assumption results in significant simplifications.

Independent Data Symbols

If the data symbols {ak} are independent the autocorrelation becomes

Equation 3.67. Rm = E{ak ak+m} = E{ak}E{ak+m} = 0,  m ≠ 0

so that Rm = E{ak²} for m = 0 and Rm = 0 otherwise. The PSD of x(t) as defined by (3.66) takes a very simple form for this case:

Equation 3.68. SX(f) = (A²/T) E{ak²} G(f)

If the data symbols are assumed to be ak = ±1 for all k, then E{ak²} = 1 and

Equation 3.69. SX(f) = (A²/T) G(f)

which is independent of the data.[7] For the case in which the data symbols are not independent, the underlying autocorrelation function Rm must be determined and (3.66) must be evaluated term by term. See [2] for an example.

Example 3.4. 

Consider the rectangular pulse shape illustrated in Figure 3.15. It follows from Figure 3.15 that

Equation 3.70. 

Rectangular pulse shape.

Figure 3.15. Rectangular pulse shape.

This can be placed in the form

Equation 3.71. 

or, in terms of the sinc(·) function

Equation 3.72. 

Therefore the energy spectral density of the pulse is G(f) = |P(f)|² = T² sinc²(fT).

Substitution into (3.69) gives

Equation 3.73. SX(f) = A²T sinc²(fT)

for the power spectral density of x(t).

The transmitted power is, from (3.69)

Equation 3.74. 

From Parseval’s theorem and Figure 3.15 we know that

Equation 3.75. 

Substitution into (3.74) gives

Equation 3.76. 

as expected. This simple result arises from the fact that p(t) is a unit-amplitude pulse. Thus Σ ak p(t − kT − Δ) has unit power. Multiplication by A as shown in (3.55) simply scales the power by A². For other pulse shapes the relationship between power and A must be computed using the technique just illustrated. Remember also that the assumed data sequence {ak} is a unit-power (unit-variance) process.
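
The PSD result of this example is easily checked by simulation. The sketch below generates a random binary NRZ waveform with m samples per symbol, estimates its PSD by averaging periodograms of successive segments, and overlays the estimate on A²T sinc²(fT) from (3.73) with A = 1 and T = 1. The segment length, number of symbols, and normalization are implementation choices made for the illustration, not quantities taken from the text.

% Sketch: estimated versus theoretical PSD of a random binary NRZ waveform
m    = 8;  T = 1;  Ts = T/m;  fs = 1/Ts;     % m samples per symbol, A = 1
Nsym = 16384;                                % number of data symbols
a    = sign(randn(1,Nsym));                  % independent +1/-1 data symbols
x    = kron(a,ones(1,m));                    % rectangular pulses, m samples each
Nfft = 1024;  K = floor(length(x)/Nfft);     % segments for periodogram averaging
P    = zeros(1,Nfft);
for i = 1:K
    seg = x((i-1)*Nfft+1:i*Nfft);
    P   = P + (Ts/Nfft)*abs(fft(seg)).^2;    % periodogram of one segment
end
P   = P/K;                                   % averaged (two-sided) PSD estimate
f   = (0:Nfft/2-1)*fs/Nfft;                  % positive-frequency grid
snc = @(u) sin(pi*(u+(u==0)))./(pi*(u+(u==0))) + (u==0);
St  = T*snc(f*T).^2;                         % A^2*T*sinc^2(fT) from (3.73)
semilogy(f,P(1:Nfft/2),f,St)
xlabel('Frequency'); ylabel('PSD')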

Example 3.5. 

An interesting pulse shape, which will be needed later, is illustrated in Figure 3.16(a). The basic pulse shape p(t) can be expressed as p1(t) ⊛ p1(t), where ⊛ denotes convolution and p1(t) is illustrated in Figure 3.16(b). Taking the Fourier transform of p1(t) gives

Equation 3.77. 


Figure 3.16. Triangular pulse shape.

Since convolution in the time domain is equivalent to multiplication in the frequency domain we have

Equation 3.78. 

Thus

Equation 3.79. 

Substitution into (3.69) gives

Equation 3.80. 

This result will be used later in this chapter and in the problems.

Simulation Sampling Frequency

We now return to the problem of relating the simulation sampling frequency to a given pulse shape. This is accomplished by considering the signal-to-noise ratio of the sampling process where the noise power arises from aliasing. The goal is to select a sampling frequency so that the errors due to aliasing are negligible compared to the system degradations being investigated by the simulation. It will be shown that the required sampling frequency is dependent upon the waveshapes present in the simulation model.

Consider a waveform defined by (3.55), having the rectangular pulse shape illustrated in Figure 3.15, to be sampled as illustrated in Figure 3.17. In drawing Figure 3.17 a sampling frequency of six times the symbol frequency was assumed. The power spectral density of the data sequence is given by (3.73). Combining this result with (3.18) gives

Equation 3.81. 


Figure 3.17. Sequence of rectangular pulses sampled at six samples per symbol.

Sampling the data sequence at m samples per symbol (fs = m/T) gives

Equation 3.82. 

which is

Equation 3.83. 

The next task is to compute the signal-to-noise ratio due to aliasing.

The signal-to-noise ratio due to aliasing can be expressed (SNR)a = S/Na where the signal power is

Equation 3.84. 

and the noise power due to aliasing is

Equation 3.85. 

Note that we have made use of the fact that PSD is an even function of frequency. The signal power is determined by integrating the n = 0 term in (3.83) over the simulation bandwidth |f| < fs/2. The noise power due to aliasing is the power from all of the frequency translated terms (n ≠ 0) that fall in the simulation bandwidth. Thus, the noise due to aliasing is found by integrating over all terms in (3.83) with the n = 0 term excepted. This is made clear by Figure 3.18, which is drawn (not to scale) for m = 6. Figure 3.18 illustrates the positive frequency portion of the n = 0 term of (3.83) in the range 0 < f < fs. The translated spectra for n = ±1 and n = 2 are also shown.


Figure 3.18. Spectra and translated spectra for n = 0, n = ±1, and n = 2.

The next step in the determination of (SNR)a is to show that both S and Na can be determined using only the n = 0 term in (3.83). The lobes of the sinc(·) function, each having width 1/T, are illustrated and numbered in Figure 3.18. Note that for m = 6 and n = 0, lobes 1, 2, and 3 fall in the range 0 < f < fs/2 and therefore represent signal power. Lobes 4, 5, and 6 fall into the range 0 < f < fs/2 for the n = 1 term and therefore represent aliasing noise. In a similar manner, lobes 7, 8, and 9 result from the n = −1 term in (3.83) and lobes 10, 11, and 12 result from the n = 2 term in (3.83). Continuing this line of thought shows that

Equation 3.86. 

Therefore

Equation 3.87. 

As can be seen by comparing Examples 3.4 and 3.5, the form of the integrand will be different depending on the pulse shape.

We will frequently find it necessary to use numerical integration in order to evaluate (SNR)a. In order to accomplish this a second sampling operation is introduced in which the continuous frequency variable f is sampled at points f = jf1. For accuracy we clearly require f1 ≪ 1/T so that many samples are taken in each lobe of the sinc(·) function. Frequency sampling in this way allows the integrals in (3.87) to be replaced by sums. In order to satisfy f1 ≪ 1/T let f1 = 1/(kT) where k is large so that the error induced by the numerical integration is small. With f = jf1 and f1 = 1/(kT) we have

Equation 3.88. 

The next step is to compute the folding frequency fs/2 in terms of the discrete parameters k and m. From Figure 3.18 we see that, for m samples per symbol, the folding frequency fs/2 is m/(2T). Since k samples are taken for every frequency interval of width 1/T, the folding frequency corresponds to the index km/2. Using (3.88) and the fact that fs/2 corresponds to km/2 in (3.87) gives

Equation 3.89. 

The preceding is an approximation because numerical integration is used to approximate the true value of the integral.

The MATLAB program to evaluate (3.89) follows.

% File: c3_sna.m
k = 50;                              % samples per lobe
nsamp = 50000;                       % total frequency samples
snrdb = zeros(1,17);                 % initialize memory
x = 4:20;                            % vector for plotting
for m = 4:20                         % iterate samples per symbol
    signal = 0; noise = 0;           % initialize sum values
    f_fold = k*m/2;                  % folding frequency
    for j = 1:f_fold
        term = (sin(pi*j/k)/(pi*j/k))^2;
        signal = signal+term;
    end
    for j = (f_fold+1):nsamp
        term = (sin(pi*j/k)/(pi*j/k))^2;
        noise = noise+term;
    end
    snrdb(m-3) = 10*log10(signal/noise);
end
plot(x,snrdb)                       % plot results
xlabel('Samples per symbol')
ylabel('Signal-to-aliasing noise ratio')
% End script file.

Note that 50 frequency samples are taken in each lobe of the sinc(·) function and that a total of 50,000 frequency samples are taken. Thus, the summation in the denominator of (3.89) spans 1,000 lobes of the sinc(·) function, after which the PSD is assumed negligible. This assumption may be verified by experimenting with the parameter nsamp.

Executing the preceding program yields the result illustrated in Figure 3.19. Note that (SNR)a is slightly less than 17 dB for m = 10 samples per symbol and that (SNR)a continues to increase as m increases. However, the impact on (SNR)a decreases for increasing m. Also note that the PSD of the sampled signal decreases as 1/f² for a rectangular pulse shape. Example 3.5 shows that the PSD of the sampled signal decreases as 1/f⁴ for a triangular pulse shape. Thus, for a given value of m, the value of (SNR)a will be greater for the triangular pulse shape than for the rectangular pulse shape. The rectangular pulse shape therefore represents a worst-case situation. Other pulse shapes are considered in the Problems.


Figure 3.19. Signal-to-aliasing-noise ratio for the rectangular pulse shape.

In a practical communications system the pulse shape p(t) is chosen to give a required bandwidth efficiency [2]. High bandwidth efficiency implies that the spectrum of x(t) as defined by (3.55) is compact about f = 0.[8] Thus, signals that exhibit high bandwidth efficiency require a smaller value of m for a given (SNR)a.

Summary

The purpose of this chapter was to cover a number of topics related to sampling and the representation of sample values in communication system simulations. Two fundamental sampling theorems were considered: the lowpass sampling theorem and the bandpass sampling theorem. Since bandpass signals are usually represented by lowpass signals in system simulations, the lowpass sampling theorem is the more important of these two theorems for our application. We saw that a bandlimited lowpass signal may be sampled, and that the underlying continuous-time signal may be reconstructed from the sample values, if the sampling frequency exceeds twice the bandwidth of the bandlimited lowpass signal. The bandpass sampling theorem, although less useful in the simulation context than the lowpass sampling theorem, gave a somewhat similar result. Bandpass signals can be sampled and reconstructed if the sampling frequency lies between 2B and 4B, where B is the bandwidth of the bandpass signal being sampled.

Next, quantizing was considered. Quantizing errors are present in all simulations, since sample values must be represented by digital words of finite length. Two types of quantizing errors were considered: errors resulting from fixed-point number representations and errors resulting from floating-point number representations. When fixed-point number representations are used, the signal-to-quantizing-noise ratio increases 6 dB for each bit added to the word length. When simulations are performed on general-purpose computers, which use floating-point number representations, the noise resulting from quantizing errors is usually negligible. This noise, however, is never zero and there are situations in which errors can accumulate and significantly degrade the accuracy of the simulation result. The simulation user must therefore be aware of this potential error source.

The third section of this chapter treated reconstruction and interpolation. We saw that if a lowpass bandlimited signal is sampled with a sampling frequency exceeding twice the signal bandwidth, the underlying continuous-time signal can be reconstructed without error by weighting each sample with a sin(x)/x waveform, which is equivalent to passing the samples through an ideal lowpass filter. The result is a waveform defined for all values of time and, by extracting “new” samples between the original samples, interpolated samples can be generated. This operation, known as upsampling, increases the effective sampling frequency. The inverse operation, downsampling, can be accomplished by extracting every Mth sample from the original set of samples. Using the operations of upsampling and downsampling, one can develop a simulation in which multiple sampling frequencies are present. This is useful when the system being simulated contains signals having widely differing bandwidths. A spread-spectrum communications system is an example of such a system.

The final topic treated in this chapter was the important problem of relating the sampling frequency to the pulse shape used for waveform transmission. The pulse shape was assumed time limited and therefore cannot be bandlimited. Therefore aliasing errors occur. The criterion used for selecting the sampling frequency was to determine the required signal-to-noise ratio, where aliasing error constituted the noise source. A general method was developed for determining the PSD of the modulated signal and numerical integration of this PSD determined the signal-to-aliasing-noise ratio.

Further Reading

Most textbooks on basic communication theory consider several of the topics presented in this chapter. Included are the sampling theorem and models for transmitted signals using various pulse shape functions. Examples are:

  • R. E. Ziemer and W. H. Tranter, Principles of Communications: Systems, Modulation and Noise, 5th ed., New York: Wiley, 2001.

  • R. E. Ziemer and R. L. Peterson, Introduction to Digital Communication, 2nd ed., Upper Saddle River, NJ: Prentice Hall, 2001.

The topics of quantizing, interpolation, and decimation are typically covered in textbooks on digital signal processing. Although a wide variety of books are available in this category, the following is recommended:

  • A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, Upper Saddle River, NJ: Prentice Hall, 1989.

The following textbook is an excellent reference on multirate signal processing and sampling rate conversion:

  • R. E. Crochiere and L. R. Rabiner, Multirate Digital Signal Processing, Upper Saddle River, NJ: Prentice Hall, 1983.

Simulation applications of the topics presented in this chapter can be found in:

  • M. C. Jeruchim, P. Balaban, and K. S. Shanmugan, Simulation of Communication Systems, 2nd ed., New York: Kluwer Academic/Plenum Publishers, 2000.

References

Problems

3.1

A signal x(t), given by

x(t) = 5cos(6πt) + 3sin(8πt)

is sampled using a sampling frequency fs of 10 samples per second. Plot X(f) and Xs(f). Plot the output of the reconstruction filter assuming that the reconstruction filter is an ideal lowpass filter with a bandwidth of fs/2. The passband gain of the reconstruction filter is Ts = 1/fs.

3.2

Repeat the preceding problem using a sampling frequency of 7 samples per second.

3.3

Develop a MATLAB program that produces, and therefore verifies, Figure 3.5.

3.4

A bandpass signal has a center frequency of 15 MHz and a bandwidth of 750 kHz (375 kHz each side of the carrier).

  1. Using the bandpass sampling theorem determine the minimum sampling frequency at which the bandpass signal can be sampled and reconstructed without error.

  2. By drawing the spectrum of the sampled signal and defining all frequencies of interest, show that the signal can be reconstructed without error if the sampling frequency found in (a) is used.

  3. Starting with the sampling frequency found in (a) consider the effect of increasing the sampling frequency. By how much can the sampling frequency be increased without incurring aliasing errors?

3.5

Assume that a signal defined by 5sin(10πt) is sampled and quantized using a fixed-point number representation.

  1. Determine the dynamic range of the signal.

  2. Determine the crest factor of the signal.

  3. Determine the signal-to-noise ratio (SNR)q for b = 4, 8, 16, and 32 bits.

3.6

Repeat the preceding problem for the signal illustrated in Figure 3.20.


Figure 3.20. Figure for Problem 3.6.

3.7

In evaluating the effect of a fixed-point quantizing process, the assumption was made that the error induced by the quantizing process can be represented by a uniformly distributed random value. In this problem we investigate the validity of this assumption.

  1. Use sin(6t) as a signal. Using a sampling frequency of 20 Hz, generate, using MATLAB, a vector of 10,000 samples of this waveform. Note that the signal frequency and the sampling frequency are not harmonically related. Why was this done?

  2. Develop a MATLAB model for a fixed-point quantizer that contains 16 quantizing levels (b = 4). Using this model quantize the sample values generated in (a). Generate a vector representing the 10,000 values of quantizing error.

  3. Compute the values of E{e[k]} and E{e2[k]}. Compare with the theoretical values and explain the results.

  4. Using the MATLAB function hist, generate a histogram of the quantizing errors. What do you conclude?

3.8

The value of realmax is the largest number that can be represented on a computer that adheres to the ANSI/IEEE standard for representing floating-point numbers. Anything larger results in an overflow, which in MATLAB is represented by Inf. Using a computer that adheres to the ANSI/IEEE standard make the following computations and answer the accompanying questions:

  1. Compute realmax + 1. Note that no overflow occurs. Explain this apparent contradiction.

  2. Compute realmax + 1.0e291 and realmax + 1.0e292. Explain the results.

3.9

Using MATLAB, compute

A = 1 − 0.5 − 0.25 − 0.125 − 0.125

Compare the result of this calculation with the result of Example 3.2. Explain the difference.

3.10

Fill in the steps to derive (3.56) and (3.57).

3.11

Data is transmitted as modeled by (3.55) in which the pulse shape p(t) is the triangular pulse illustrated in Figure 3.16(a). Develop a MATLAB program to plot the signal to aliasing noise ratio (SNR)a as the number of samples per symbol varies from 4 to 20. Compare the result with that of the rectangular pulse shape by plotting both on the same set of axes. Explain the results.

3.12

The energy spectral density of an MSK (minimum-shift keyed) signal is defined by

G(f) = (16Tb/π²) [cos(2πTbf)/(1 − 16Tb²f²)]²

where Tb is the bit time [2]. Develop a MATLAB program to plot the signal to aliasing noise ratio (SNR)a as the number of samples per symbol varies from 4 to 20. Compare the result with that of the rectangular pulse shape by plotting both on the same set of axes. Explain the results.

3.13

Repeat the preceding problem for the QPSK signal for which

G(f) = 2Tb sinc²(2Tbf)

 



[1] Once a signal is sampled, the sample values are a function of the index k and the notation x[k] is used. This notation, made popular by Oppenheim and Schafer [1], is commonly used in the DSP literature. Since the square brackets imply a sampling operation, the subscript s is not needed to denote sampling. The value of x[·] is defined only for integer arguments.

[2] Recall the design cycle discussed in Chapter 1.

[3] In order to demonstrate basic principles, the pdf is assumed to be for a simple zero-mean process. In practice the pdf will depend on the manner in which fixed-point numbers are represented in the computer. The most common representations are sign-magnitude, ones-complement, and twos-complement [1].

[4] The terms narrowband and wideband are used in a relative sense.

[5] It will be shown in later chapters that, in addition to signal bandwidth, a number of other factors affect the required sampling frequency. For example, the presence of nonlinearities results in a requirement for higher sampling frequencies. The same is often true for systems containing feedback. In addition, multipath channels place requirements on the sampling frequency so that the multipath delays can be resolved. All of these topics will be considered in detail in later chapters.

[6] Note that we are now using p(t) for the pulse shape rather than for the sampling function as in the preceding section. The meaning of p(t) will be clear from the context in which it is used.

[7] We make the assumption that the data samples are +1 and −1 rather than +1 and 0 to be consistent with the assumption that E{ak} = 0.

[8] The reference is f = 0 rather than f = fc, in which fc is a nonzero carrier frequency, since (3.55) represents a lowpass model of a bandpass process.
