Chapter 8. Postprocessing

In this chapter we briefly introduce the important topic of postprocessing. As discussed previously, the role of the postprocessor is to manipulate the data created by the simulation into a useful form. Postprocessors are usually graphics intensive, since visual displays are more easily interpreted than numerical listings, which are the most common data output from a simulation program. For example, a plot of the bit error rate for several different systems conveys information more quickly than a numerical table containing the same information.

Postprocessing routines may, or may not, involve significant computational complexity. Some postprocessors simply take data created by a simulation and, after properly formatting the data, generate the appropriate graphical output. An example is a routine for generating a plot of bit or symbol error probability, PE, as a function of Eb/N0. The values of PE, along with the accompanying values of Eb/N0, are created by the simulation and passed to the postprocessor as data files. The postprocessor simply formats the data and creates the required plot. Other examples of postprocessors that generate graphical output with minimal processing are those for displaying signal waveforms, eye diagrams, and scatter plots.

On the other hand, some postprocessing routines involve significant data processing. Most of these involve some type of estimation. A simple example is the generation of a histogram, which is an estimator of a probability density function. More complex examples are estimators for time delay, signal-to-noise ratio (SNR), and power spectral density. Other examples considered in this chapter involve the mapping of the channel symbol error rate to a decoded bit error rate for a system utilizing error-control coding. The list of possible postprocessing operations is virtually endless and, in this chapter, we only explore a few examples. All of these operations are considered postprocessing, since they make use of the data created by a simulation and are implemented after the simulation engine has completed its work. (Recall the discussion of the simulation engine in Chapter 6.)

Basic Graphical Techniques

In order to illustrate the graphical techniques used in a typical simulation postprocessor, the concepts are considered within the context of an example system. The choice of an example system is, of course, arbitrary. We consider here π/4 DQPSK, since it has a number of interesting characteristics and is used in a number of wireless systems.[1]

A System Example—π/4 DQPSK Transmission

A block diagram of a π/4 DQPSK transmitter is illustrated in Figure 8.1. The output of the data source is assumed to be a sequence, a, of the form

a(1)a(2)a(3)a(4)... a(k)...

Figure 8.1. π/4 DQPSK transmitter.

The serial-to-parallel converter assigns alternate (odd-indexed) symbols to the direct channel and the remaining (even-indexed) symbols to the quadrature channel. Thus:

d(k) = a(2k − 1),    q(k) = a(2k)

The transmitted signal is given by

Equation 8.1.

x(t) = A cos[2πfct + θ(k)],    kTs ≤ t < (k + 1)Ts

where Ts is the symbol period. The phase deviation of the transmitted signal is determined by the values of d(k) and q(k), as well as by the phase deviation θ(k − 1) during the previous symbol period. This dependence on the previous symbol period is, of course, what makes π/4 DQPSK a differential modulation technique. The relationship between θ(k) and θ(k − 1) is

Equation 8.2.

θ(k) = θ(k − 1) + φ(k)

where φ(k) is an explicit function of d(k) and q(k), and is defined in Table 8.1. The required transmitted phases are generated in the phase mapper shown in Figure 8.1. The phase mapper uses d(k), q(k), and θ(k−1) to generate the new values d′(k) and q′(k), so that the transmitted signal has the proper phase. After appropriate pulse shaping, the direct and quadrature channel signals are translated to the transmission frequency, fc, as shown.

Table 8.1. Differential Phase Shifts for π/4 DQPSK

Information Symbols, d(k) q(k)    Differential Phase Shift, φ(k)
1 1                                π/4
0 1                                3π/4
0 0                               −3π/4
1 0                               −π/4

As an example, assume that the output of the data source is the binary sequence

Equation 8.3.

a = 00 10 11 01 11 …

and that the initial phase is defined by θ(0) = 0. Since the first two data symbols are 00, it follows from Table 8.1 that φ(1) = −3π/4. From (8.2) we then have

θ(1) = θ(0) + φ(1) = −3π/4

The next two data symbols are 10, so that φ(2) = −π/4. Thus:

θ(2) = −3π/4 − π/4 = −π

The next two data symbols are 11. Thus, φ(3) = π/4, which gives

θ(3) = −π + π/4 = −3π/4

In like manner, φ(4) = 3π/4, so that

θ(4) = −3π/4 + 3π/4 = 0

and φ(5) = π/4, which gives

θ(5) = 0 + π/4 = π/4

A quick observation illustrates that θ(1), θ(3), and θ(5) are phases from the first QPSK signal constellation illustrated in Figure 8.2(a), and that θ(2) and θ(4) are phases from the second QPSK signal constellation illustrated in Figure 8.2(b). Thus, π/4 DQPSK operates by transmitting signal points from alternating QPSK signal constellations where the two QPSK signal constellations are displaced by a π/4 phase rotation. Although this has been demonstrated using a specific data sequence, we see that the result is general, since the differential phases only take on the values ±π/4 or ±3π/4.

Figure 8.2. Signal constellations for π/4 DQPSK.
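The phase trajectory just computed is easily verified numerically. The following sketch (a minimal check, with the data sequence taken from the example above) reproduces the five phase values:

% Sketch: phase trajectory check for the example sequence
a = [0 0 1 0 1 1 0 1 1 1];         % data sequence of (8.3)
dd = a(1:2:end); qq = a(2:2:end);  % direct and quadrature bits
theta = 0;                         % initial phase theta(0) = 0
for k = 1:length(dd)
    if dd(k) == 1
        phi = (2*qq(k)-1)*pi/4;    % 11 -> pi/4, 10 -> -pi/4
    else
        phi = (2*qq(k)-1)*3*pi/4;  % 01 -> 3pi/4, 00 -> -3pi/4
    end
    theta = theta+phi;             % phase update per (8.2)
    disp([k theta/pi])             % display theta(k) as a multiple of pi
end

The displayed values, −0.75, −1, −0.75, 0, and 0.25 (in units of π), agree with the phases computed above.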

Waveforms, Eye Diagrams, and Scatter Plots

Prior to demonstrating the basic programs for plotting waveforms, eye diagrams, and scatter plots, we first pause to illustrate the relationship between these plots. Suppose that we develop a three-dimensional coordinate system as shown in Figure 8.3 with the axes labeled as shown. Note that three intersecting planes can be formed, each of which contains two of the axes. These planes are formed by the D and t axes, the Q and t axes, and the Q and D axes. If the direct-channel signal xd(t) is plotted on the (D, t) plane and the quadrature-channel signal xq(t) is plotted on the (Q, t) plane, a three-dimensional signal, parameterized on t, is generated. Projecting this signal onto a given subspace (D, t), (Q, t), or (D, Q) generates xd(t), xq(t), or the scatter plot, which is a plot of xq(t) as a function of xd(t). This is illustrated in Figure 8.3. Looking in from the right so that the (D, t) plane is seen edge-on shows the quadrature signal xq(t). In the same manner, the direct channel signal, xd(t), is obtained by viewing the three-dimensional image from below so that the (Q, t) plane is seen edge-on. Looking down the time axis so that the time axis becomes a point reveals the scatter plot.

Figure 8.3. Three-dimensional coordinate system.

While the three-dimensional (Q, D, t) image is seldom generated in practice, visualizing the (Q, D, t) image is a good learning tool and shows clearly the relationship between xd(t), xq(t), and the scatter plot. An interesting tutorial was based on this concept [2].
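Although the three-dimensional image is seldom needed, MATLAB generates it easily. A minimal sketch follows (it assumes that the vectors xd and xq already hold the direct and quadrature waveforms; see also Problem 8.3):

t = 0:length(xd)-1;           % sample index serves as the time axis
plot3(t,xd,xq); grid          % three-dimensional (t, D, Q) trajectory
xlabel('t'), ylabel('D'), zlabel('Q')
view(90,0)                    % look down the time axis: the scatter plot
% view(0,90) projects onto the (D,t) plane, showing xd(t);
% view(0,0) projects onto the (Q,t) plane, showing xq(t)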

Eye Diagrams

The eye diagram gives a qualitative measure of system performance [3]. A well-defined and open eye usually indicates good performance, while a poorly defined eye usually indicates poor performance. In addition, the size of the eye relates to the accuracy required of the symbol synchronizer. While the eye diagram does not provide a quantitative measure of system performance, it is difficult to conceive of a high-performance system having a poorly defined eye diagram.

The generation of an eye diagram is illustrated in Figure 8.4. The waveform corresponding to three data symbols, with each segment corresponding to a symbol period, is illustrated in Figure 8.4(a). Assume that this waveform is displayed on an oscilloscope and that the oscilloscope is triggered at the points denoted by the dotted vertical lines. The result will be the three-segment eye diagram illustrated in Figure 8.4(b).

Figure 8.4. Generation of an eye diagram.
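The oscilloscope-style overlay is easily produced in MATLAB by reshaping the waveform into segments of fixed width and plotting the segments on top of one another. A minimal sketch, assuming a waveform x sampled at sps samples per symbol, follows (the dqeye function in Appendix A uses the same idea):

sps = 10;                             % samples per symbol (assumed)
span = 2*sps;                         % trace width of two symbol periods
nseg = floor(length(x)/span);         % number of complete traces
plot(reshape(x(1:nseg*span),span,nseg),'k')  % one trace per column
xlabel('Sample Index'), ylabel('Amplitude')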

Example 8.1. 

In this example, several important signals present in a π/4 DQPSK system are generated and displayed. The MATLAB program for simulating the system and generating the graphical output is given in Appendix A. Upon entering the program name, c8_pi4demo, at the MATLAB prompt, a menu is presented. From this menu the user may select one of the following seven options (after a plot is generated, hitting the space bar will display the menu so that another selection can be made):

  1. Unfiltered π/4 DQPSK signal constellation

  2. Unfiltered π/4 DQPSK eye diagram

  3. Filtered π/4 DQPSK signal constellation

  4. Filtered π/4 DQPSK eye diagram

  5. Unfiltered direct and quadrature signals

  6. Filtered direct and quadrature signals

  7. Exit program (return to MATLAB prompt)

The student should study the material in Appendix A closely, as it illustrates many of the common postprocessing procedures. In addition, the code used for generating the various plots can be used in the postprocessor of other simulation programs. Here we illustrate three of the more interesting results. Figures 8.5, 8.6, and 8.7 illustrate the scatter plot (signal constellation), the direct and quadrature channel signals, and the direct and quadrature channel eye diagrams, respectively. [Note that by visualizing the three-dimensional signal in the (D, Q, t) space as previously discussed, the relationship between Figures 8.5 and 8.7 is easily seen.]

Figure 8.5. Filtered signal constellation.

Figure 8.6. Unfiltered direct and quadrature signals.

Figure 8.7. Filtered eye diagram.

Estimation

Many useful estimation routines are based on data generated by a simulation program. Here we consider only a small sampling of the many possibilities.

Histograms

When a set of samples of a random process is available, as will be the case in a simulation environment, a histogram formed from that set of samples is frequently used as an estimator of the underlying probability density function (pdf). The histogram is formed by grouping data, consisting of N total samples, into B bins or cells. Each bin is assumed to have equal width W, and the center of each bin is denoted bi. A given sample x[n] falls into the ith bin if

Equation 8.4.

bi − W/2 ≤ x[n] < bi + W/2

The quantity of interest is Ni, which denotes the number of samples falling into the ith bin. Clearly

Equation 8.5.

Σ_{i=1}^{B} Ni = N

We adopt the notation Count{N : R} to represent the number of samples, in the set of N total samples, falling into the histogram bin defined by R. Thus

Equation 8.6.

Ni = Count{N : bi − W/2 ≤ x[n] < bi + W/2}

A bar graph is then plotted in which the height of each bar is proportional to Ni, and each bar is centered at bi. In order to be an estimator of the pdf, the histogram is scaled so that the total area is one. This is accomplished by dividing Ni by NW. The height of each bar is then Ni/NW. The area of the bar, Ai, representing the ith histogram bin, is found by multiplying the height by the width W. Thus

Equation 8.7.

Ai = [Ni/(NW)] W = Ni/N

Note that Ai represents the relative frequency of the ith histogram bin. The total area is

Equation 8.8.

Σ_{i=1}^{B} Ai = (1/N) Σ_{i=1}^{B} Ni = 1

as required if the histogram is to represent a probability density function.
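The scaling defined by (8.7) is easily applied, since hist returns both the bin counts Ni and the bin centers bi. A brief sketch follows, in which a scaled histogram of Gaussian samples is overlaid on the true N(0, 1) pdf (the values of N and B are arbitrary choices):

N = 10000; B = 40;                 % number of samples and bins (assumed)
x = randn(1,N);                    % zero-mean unit-variance samples
[Ni,bi] = hist(x,B);               % bin counts and bin centers
W = bi(2)-bi(1);                   % bin width
bar(bi,Ni/(N*W),1)                 % bar heights Ni/NW per (8.7)
hold on
xx = -4:0.01:4;                    % overlay the true Gaussian pdf
plot(xx,exp(-xx.^2/2)/sqrt(2*pi))
hold off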

Note that each histogram bin represents the pdf over a finite span of width W by a constant. For some point within a given bin, the estimator of the pdf will be unbiased. However, over most of the range defined by a given histogram bin, the estimator will be biased. It is easily shown, by expanding the pdf fX(x) in a Taylor series about the center of the ith bin, that the bias is [4]

Equation 8.9.

b(bi) = E{f̂X(bi)} − fX(bi) ≈ (W²/24) f″X(bi)

Thus, the bias can be reduced only by decreasing the bin width, W.

It can also be shown that the variance of the estimator is [4]

Equation 8.10.

var{f̂X(bi)} = fX(bi)[1 − W fX(bi)]/(NW)

Note that WfX(bi) is the probability of the event that a given sample falls into bin bi. Since it is a probability, WfX(bi) ≤ 1, and it is usually much less than one. Thus

Equation 8.11.

var{f̂X(bi)} ≈ fX(bi)/(NW)

We see that, for fixed N, increasing W decreases the variance of the estimator. Unfortunately, from (8.9), we see that increasing W also increases the bias. We therefore desire small W together with large NW, so that W and N must be related. For example, if W = cN^(−1/2) for some positive constant c, then W → 0 as N → ∞ and also NW = c√N → ∞ as N → ∞. In this case, for sufficiently large N, the result will be a pdf estimate having negligible bias and negligible variance. The following example illustrates the histogram for various choices of N and W.

Example 8.2. 

Assume that we generate N samples of a zero-mean unit-variance Gaussian random variable. The pdf is estimated by constructing a histogram with B bins. In this example, we wish to examine the impact of varying both N and B. In order to accomplish this, the following MATLAB program is used:

% File: c8_hist.m
subplot(2,2,1)
x = randn(1,100); hist(x,20)
ylabel('N_i'), xlabel('(a)')
subplot(2,2,2)
x = randn(1,100); hist(x,5)
ylabel('N_i'), xlabel('(b)')
subplot(2,2,3)
x = randn(1,1000); hist(x,50)
ylabel('N_i'), xlabel('(c)')
subplot(2,2,4)
x = randn(1,100000); hist(x,50)
ylabel('N_i'), xlabel('(d)')
% End of script file.

Executing this program gives the results illustrated in Figure 8.8. Figure 8.8(a) illustrates the results for N = 100 and B = 20. As we can see, the histogram is not well defined, since N/B = 5, which is much too small to yield a reliable estimator for the number of samples falling into a given histogram bin. Figure 8.8(b) illustrates the result for N = 100 and B = 5. While the ratio for N/B is now much larger, the number of bins is too small. Note that with N fixed, the histogram with B = 5 will exhibit greater bias than B = 20. Figure 8.8(c) illustrates the result for N = 1,000 and B = 50. While this provides a better estimator, the ratio N/B is once again too small and we therefore increase N. Figure 8.8(d) illustrates the result for N = 100,000 and B = 50. This yields N/B = 2,000 and the resulting histogram has the predicted Gaussian shape. If the underlying pdf has a complicated shape, a very large number of samples is often required if the histogram is to be an accurate estimator of the pdf.

Figure 8.8. Histograms for a Gaussian random variable (a) N = 100, B = 20; (b) N = 100, B = 5; (c) N = 1,000, B = 50; (d) N = 100,000, B = 50.

Power Spectral Density Estimation

Another postprocessing operation that is frequently used in a simulation study is the estimation of the power spectral density (PSD) of a signal at a point in a system. This is a more difficult task than we might generally assume. Typically the waveform of interest is a sample function of a stochastic process. This leads to a situation in which the PSD at a given frequency f is a random variable. It then becomes, as with most problems in estimation theory, necessary to minimize the variance of the spectral estimate. A number of books [5–9] and many papers have been written on this topic and many techniques have been developed for PSD estimation. In this section we consider only the most fundamental techniques. The techniques considered here are based on the fast Fourier transform (FFT) and are frequently used in the simulation context.

The Periodogram

The simplest, fastest, and most frequently used PSD estimation algorithm is the periodogram. It is defined by

Equation 8.12.

P(kfΔ) = (1/N) |IN(kfΔ)|²

in which N is the total number of samples in the data record, and IN(kfΔ) is the N-point FFT of the data for which the PSD estimate at frequency f = kfΔ is to be computed. The computational efficiency of the periodogram comes from the use of the FFT to form the PSD estimate. The result provides us with N frequency domain estimates having a resolution of fΔ = fs/N, where fs is the sampling frequency associated with the time-domain points for which the spectral estimate is being performed. Note that fs in this context is not always the sampling frequency associated with the simulation. For example, if a given signal in a simulation is significantly oversampled, the samples may be decimated[2] prior to forming the PSD estimate.

The difficulty with the periodogram is that it is biased and is not a consistent estimator of the PSD at a frequency f. For many applications, the variance of the periodogram is unacceptably high. The bias results from the unavoidable fact that the data record is finite. However, for sufficiently large N, the bias can be neglected. Therefore, the main difficulty results from the high variance. Assuming that the data samples x[n] are independent, the variance of the spectral estimate at frequency f is [6]

Equation 8.13.

var{P(f)} = σx⁴ [1 + (sin(2πNf)/(N sin(2πf)))²]

where σx² is the variance of the data samples x[n]. We observe that the variance of P(f) does not tend to zero as N → ∞ and, for large N, the variance of the spectral estimate is independent of frequency. The periodogram, however, despite this serious flaw, is useful for a “quick look” at the PSD.
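A “quick look” periodogram requires only a few lines of MATLAB. The following sketch estimates the PSD of a sinusoid in white noise (all parameter values are arbitrary choices):

fs = 1000; N = 4096;               % sampling frequency and record length
t = (0:N-1)/fs;                    % time vector
x = sin(2*pi*100*t)+randn(1,N);    % 100 Hz sinusoid in white noise
pxx = (abs(fft(x,N)).^2)/N;        % periodogram per (8.12)
f = (0:N-1)*fs/N;                  % frequency scale, resolution fs/N
plot(f(1:N/2),10*log10(pxx(1:N/2)))
xlabel('Frequency, Hz'), ylabel('PSD, dB')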

The Periodogram With a Data Window

If a data window is not explicitly specified, the default rectangular window is used.[3] For a rectangular window each sample value x[n] is multiplied by w[n] = 1 for 0 ≤ n ≤ N − 1. The impact of the rectangular window is to convolve the spectrum of the data samples x[n] with the Fourier transform of w[n], which has the amplitude spectrum |sin(πNf)/(N sin(πf))|. The sidelobe structure of this data window, when viewed in the frequency domain, results in considerable spectral leakage [6]. This spectral leakage distorts and reduces the dynamic range of the estimated spectrum. (See Problem 8.7.)

When an arbitrary data window is used, the periodogram takes the form

Equation 8.14.

P(kfΔ) = (1/U) |Σ_{n=0}^{N−1} w[n] x[n] e^(−j2πkn/N)|²

where U is the energy in the data window, which is given by

Equation 8.15.

U = Σ_{n=0}^{N−1} w²[n]

Note that for the rectangular window, for which w[n] = 1 for all n, U = N and (8.12) results. The choice of data window represents a number of tradeoffs. The ideal data window must have finite duration in the time domain so that IN(kfΔ), the Fourier transform of the data, can be accurately estimated using a finite data record. In addition, the estimated Fourier transform of the data record must not be adversely affected by the window function. Since multiplication in the time domain is convolution in the frequency domain, and only convolution with an impulse function leaves the transform unchanged, the ideal window function is an impulse in the frequency domain. Since a window whose transform is an impulse cannot be of finite duration in time, these are conflicting requirements. We therefore seek a data window that, in the frequency domain, exhibits a narrow main lobe about f = 0, and sidelobes that are greatly attenuated. A variety of window functions are discussed in the classic paper by Harris [10].
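The leakage tradeoff is easily visualized by comparing the window transforms directly. A brief sketch follows (the window length and FFT size are arbitrary choices; note the lower sidelobes and wider main lobe of the Hanning window):

M = 64; nfft = 4096;                       % window length, padded FFT size
Wr = abs(fftshift(fft(ones(1,M),nfft)));   % rectangular window transform
Wh = abs(fftshift(fft(hanning(M)',nfft))); % Hanning window transform
f = (-nfft/2:nfft/2-1)/nfft;               % normalized frequency
plot(f,20*log10(Wr/max(Wr)+eps),f,20*log10(Wh/max(Wh)+eps))
axis([-0.2 0.2 -80 0]); grid
xlabel('Normalized Frequency'), ylabel('Magnitude, dB')
legend('rectangular','Hanning')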

Segmented Periodograms

A common technique for reducing the variance associated with the periodogram is to divide the N-sample data record into K segments, with each segment consisting of M samples. The FFT is computed for each segment and the results are averaged. The averaging process reduces the variance of the spectral estimate. The segments may or may not be overlapping. If the segments do not overlap, K = N/M; otherwise K > N/M.

The periodogram of the ith data segment is given by

Equation 8.16.

P(i)(kfΔ) = (1/U) |Σ_{n=0}^{M−1} w[n] x(i)[n] e^(−j2πkn/M)|²

where x(i)[n] represents the samples in the ith data record and fΔ = fs/M. The K periodograms are then averaged to produce the PSD estimator

Equation 8.17.

P̄(kfΔ) = (1/K) Σ_{i=1}^{K} P(i)(kfΔ)

This estimator is biased, of course, since the data record is finite. Assuming that the K periodograms are independent

Equation 8.18.

var{P̄(kfΔ)} = (1/K) var{P(i)(kfΔ)}

which tends to zero as K → ∞.

Comparing (8.12) and (8.16) reveals an obvious problem. The periodogram defined by (8.12) has a frequency resolution of fΔ = fs/N, while the periodogram defined by (8.16) has a frequency resolution of fΔ = fs/M. Since M < N for K > 1, the frequency resolution is degraded by segmenting the original N-sample data record. Thus, using the segmentation technique gives rise to a tradeoff between resolution and variance. Also, the validity of (8.18) requires that the K periodograms used in the averaging process be independent. Since we desire the largest possible value of K for a fixed N, the segments are often overlapped. A 50 percent overlap is often used. When using a 50 percent overlap all samples x[n] are used twice except for the M/2 samples at each end of the N sample data record, and the value of K is increased from N/M to 2(N/M) − 1. If data segments are overlapped, however, the K periodograms are no longer independent and the reduction in the variance of the PSD estimator is less than that predicted by (8.18). The use of a data window, at least partially, helps to restore the independence of the K segments.[4]

While there are many data windows that can be used in (8.16), the Hanning window is frequently used for PSD estimation. The Hanning window is defined by

Equation 8.19.

w[n] = (1/2)[1 − cos(2πn/(M − 1))],    0 ≤ n ≤ M − 1
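A sketch of the overlapped form follows. It reuses the filter output vector out generated in Example 8.3 below, and the segment length M is an arbitrary choice; with a 50 percent overlap the number of segments becomes K = 2(N/M) − 1, as discussed above:

M = 2000;                            % segment length (assumed)
window = hanning(M)';                % Hanning data window
U = sum(window.*window);             % window energy per (8.15)
step = M/2;                          % 50 percent overlap
K = floor((length(out)-M)/step)+1;   % K = 2(N/M)-1 segments
psdk = zeros(1,M);
for k = 1:K
    seg = out((k-1)*step+(1:M));     % extract overlapped segment
    psdk = psdk+(abs(fft(seg.*window,M)).^2)/U;
end
psdavg = psdk/K;                     % averaged estimate per (8.17)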

Example 8.3. 

In this example we pass independent (white noise) samples through a Chebyshev filter having 5 dB passband ripple. The problem is to estimate the PSD at the filter output. The MATLAB program for accomplishing this follows:

% File: c8_PSDexample.m
settle = 100;                  % ignore transient
fs = 1000;                     % sampling frequency
N = 50000;                     % size of data record
f = (0:(N-1))*fs/N;            % frequency scale
[b,a] = cheby1(5,5,0.1);       % filter
NN = N+settle;                 % allow transient to die
in = randn(1,NN);              % random input
out = filter(b,a,in);          % filter output
out = out((settle+1):NN);      % strip off initial samples
window = hanning(N)';          % set window function
winout = out.*window;          % windowed filter output
fout = abs(fft(winout,N)).^2;  % transform and square mag
U = sum(window.*window);       % window energy
f1out = fout/U;                % scale spectrum
psd1 = 10*log10(abs(f1out));   % log scale
subplot(2,1,1)
plot(f(1:5000),psd1(1:5000))
grid; axis([0 100 -70 10]);
xlabel('Frequency, Hz')
ylabel('PSD')
%
K = 25;                        % number of segments
M = N/K;                       % block size
fK = (0:(M-1))*fs/M;           % frequency scale
d = zeros(1,M);                % initialize vector
psdk = zeros(1,M);             % initialize vector
window = hanning(M)';          % set window function
U = sum(window.*window);       % window energy
for k=1:K
    for j=1:M
        index = (k-1)*M+j;
        d(j) = out(index);
    end
    dwin = d.*window;
    psdk = (abs(fft(dwin,M)).^2)/U + psdk;
end
psd2 = 10*log10(psdk/K);
subplot(2,1,2)
plot(fK(1:250),psd2(1:250))
grid; axis([0 100 -70 10]);
xlabel('Frequency, Hz')
ylabel('PSD')
% End of script file.

Executing the program yields the result shown in Figure 8.9. Note that for the unsegmented case (K = 1, top frame), the variance is large. Also note that the variance is independent of frequency. Using K = 25 segments (bottom frame) results in a much smaller variance at the cost of reduced frequency resolution. Finally, note that the 5 dB passband ripple of the Chebyshev filter is much more obvious with K = 25 than with K = 1.

Figure 8.9. Power spectral density estimates, not averaged (top frame) and averaged (bottom frame).

The PSD estimator illustrated in Example 8.3 was developed using basic MATLAB commands. The MATLAB Signal Processing Toolbox contains a number of routines for PSD estimation. Two of these are psd and pwelch. The interested student should study these in some detail. Here we illustrate the Welch periodogram from the Signal Processing Toolbox.

Example 8.4. 

In this example, we estimate the PSD of a QPSK signal. Rectangular pulse shaping is assumed, and the direct and quadrature components of the QPSK signal are sampled at 16 samples per symbol. The MATLAB code follows:

% File: c8_welchp.m
fs = 16;
x = random_binary(1024,fs)+i*random_binary(1024,fs);
for nwin=1:4
    nwindow = nwin*1024;
    [pxx,f] = pwelch(x,nwindow,fs);
    pxx = pxx/sum(sum(pxx));
    n2 = length(f)/2;
    pxxdB = 10*log10(pxx/pxx(1));
    ptheory = sin(pi*f+eps)./(pi*f+eps);
    ptheory = ptheory.*ptheory;
    ptheorydB = 10*log10(ptheory/ptheory(1));
    subplot(2,2,nwin)
    plot(f(1:n2),pxxdB(1:n2),f(1:n2),ptheorydB(1:n2))
    ylabel('PSD in dB')
    xx = ['window length = ',num2str(nwindow)];
    xlabel(xx)
    axis([0 8 -50, 10]); grid;
end
% End of script file.

Executing the preceding code yields the results illustrated in Figure 8.10. Note that 16 × 1,024 points are generated and that window sizes (nwindow) of 1,024, 2,048, 3,072, and 4,096 are used. As with the preceding example, larger values for nwindow yield less averaging and, as a result, the estimated PSD exhibits greater variance. Smaller values for nwindow yield reduced variance at the cost of reduced resolution. These trends can be seen in Figure 8.10.

Figure 8.10. PSD Estimates generated in Example 8.4.
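Note that the pwelch calling syntax used above is that of an early version of the Signal Processing Toolbox. With recent versions of the toolbox, the window, overlap, and FFT length are passed explicitly. A roughly equivalent call (an assumption, to be checked against the installed toolbox documentation) is:

nwindow = 1024;
[pxx,f] = pwelch(x,hanning(nwindow),nwindow/2,nwindow,fs,'centered');
plot(f,10*log10(pxx/max(pxx)))
xlabel('Frequency'), ylabel('PSD in dB')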

 

Gain, Delay, and Signal-to-Noise Ratios

The signal-to-noise ratio is a commonly used figure of merit for evaluating the performance of a communications system. The SNR estimation technique presented here originated from a method for measuring channel distortion errors in wideband telemetry systems [11], in which the noise in a signal at a point in a system is defined as the mean-square error (MSE) between the actual signal and a desired signal at that point. The SNR can be estimated by defining the desired signal as an amplitude-scaled and time-delayed version of the information-bearing signal at the system input. In the past, applications of this technique have included monitoring reliable transmission of digital pulse code modulated data [12] and estimation of the carrier-to-intermodulation ratio in a nonlinear channel [13].

Theoretical Development for Real Lowpass Signals

For a linear time-invariant distortionless system, the signal y(t) at any point in the system is an amplitude-scaled and time-delayed version of the input reference signal x(t). Therefore, we can write the distortionless signal as

Equation 8.20.

z(t) = Ax(t − τ)

where A is the gain and τ is the group delay to the point in the system at which the SNR is to be defined. Let x(t) be the reference signal and y(t) be the measurement signal, such that

Equation 8.21.

y(t) = Ax(t − τ) + d(t) + n(t)

where n(t) represents the external additive noise and d(t) is the signal-dependent internal distortion induced by the system, which could result from intersymbol interference or a nonlinearity. A block diagram that depicts the relationships among different signals in the system is given in Figure 8.11.

Figure 8.11. Test procedure for estimating gain, delay, and the signal-to-noise ratio.

The noise power is defined as the MSE between y(t) and the output of the distortionless system z(t) = Ax(t − τ). That is:

Equation 8.22.

ε(A, τ) = E{[y(t) − Ax(t − τ)]²}

The desired estimates for A and τ are the values for which ε(A, τ) is minimized. The preceding expression can be written

Equation 8.23.

ε(A, τ) = E{y²(t) − 2Ay(t)x(t − τ) + A²x²(t − τ)}

For stationary signals, the moments are independent of the time origin. In addition, the expectation of a sum is the sum of the expected values. Equation (8.23) can therefore be written

Equation 8.24.

ε(A, τ) = E{y²(t)} − 2A E{y(t)x(t − τ)} + A² E{x²(t)}

or

Equation 8.25.

ε(A, τ) = Py − 2A Rxy(τ) + A² Px

 

where Px and Py represent the average powers in x(t) and y(t), respectively. Clearly, the value of τ which minimizes ε(A, τ) is the value of τ, denoted τm, for which Rxy (τ) is maximized. We refer to this as the system time delay. The system gain, Am, is the value of A for which

Equation 8.26.

∂ε(A, τm)/∂A = −2Rxy(τm) + 2APx = 0

This gives

Equation 8.27.

Am = Rxy(τm)/Px

The power in the error signal is found from (8.25) with A = Am and τ = τm. Since this is the component of y(t) orthogonal to the signal x(t), we define the power in the error signal as the noise power, N. Thus:

Equation 8.28.

N = Py − 2Am Rxy(τm) + Am² Px = Py − Rxy²(τm)/Px

The power in the signal component of y(t) is

Equation 8.29.

S = Am² Px = Rxy²(τm)/Px

Therefore, the signal-to-noise ratio is

Equation 8.30.

SNR = S/N = Rxy²(τm)/(PxPy − Rxy²(τm))

The correlation coefficient, ρ, relating the signals x(t) and y(t), is defined as

Equation 8.31.

ρ = Rxy(τm)/√(PxPy)

With this definition, the SNR at the measurement point y(t) takes the very simple form

Equation 8.32.

SNR = ρ²/(1 − ρ²)

For systems with real signals, this problem has been studied by Turner, Tranter, and Eggleston [4], and by Jeruchim and Wolfe [5]. Minimizing ε(A, τ) is equivalent to maximizing Rxy(τ), the cross-correlation function between the reference signal x(t) and the measured signal y(t).

A MATLAB function for implementing a postprocessor for estimating the system gain, delay, and the SNR follows:

function [gain,delay,px,py,rxy,rho,snrdb] = snrmse(x,y)
ln = length(x);           % length of the reference (x) vector
fx = fft(x,ln);           % FFT the reference (x) vector
fy = fft(y,ln);           % FFT the measurement (y) vector
fxconj = conj(fx);        % conjugate the FFT of the reference vector
sxy = fy .* fxconj;       % determine the cross PSD
rxy = ifft(sxy,ln);       % determine the cross correlation function
rxy = real(rxy)/ln;       % take the real part and scale
px = x*x'/ln;             % determine power in reference vector
py = y*y'/ln;             % determine power in measurement vector
[rxymax,j] = max(rxy);    % find the max of the cross correlation
gain = rxymax/px;         % system gain
delay = j-1;              % system delay
rxy2 = rxymax*rxymax;     % square rxymax for later use
rho = rxymax/sqrt(px*py); % correlation coefficient
snr = rxy2/(px*py-rxy2);  % snr
snrdb = 10*log10(snr);    % snr in db
% End of function file.

We now pause to work a simple example. (Note: The technique used here for estimating delay will be used in Chapter 10 when we consider semianalytic simulation.)

Example 8.5. 

In order to illustrate the preceding techniques, assume that x(t) is the sinusoidal signal

Equation 8.33.

x(t) = A sin(2πfdt)

and that the measurement signal (the signal for which the SNR is to be determined) is

Equation 8.34.

y(t) = GA sin[2πfd(t − τ)] + Ai sin(2πfit) + σn n(t)

where G is the system gain, n(t) is a zero-mean unit variance white Gaussian noise process, and σn is the standard deviation of the additive noise process. The PSD of y(t) is illustrated in Figure 8.12, where Pd is the signal power (the power in the desired component), Pi is the power in the interfering tone, and N0 is the single-sided power spectral density of the noise component. It was shown in Chapter 7 that N0 and σn are related by

Equation 8.35.

N0 = 2σn²/fs

where fs is the sampling frequency.

Figure 8.12. Single-sided PSD of the measurement signal y(t).

For this example, the reference signal is defined by

Equation 8.36.

x[k] = 80 sin(4πk/1024),    k = 1, 2, …, 1024

The signal at the receiver input is assumed to be

Equation 8.37.

y[k] = 20 sin(4πk/1024 + π/4) + 4 sin(16πk/1024) + 0.8n[k]

where n[k] is a sample function of a zero-mean unit-variance Gaussian process. The MATLAB program for this scenario follows:

% File: c8_snrexample.m
kpts = 1024;                     % FFT Block size
k = 1:kpts;                      % sample index vector
fd = 2;                          % desired signal frequency
fi = 8;                          % interference frequency
Ax = 80; Ayd = 20; Ayi = 4;      % amplitudes
phase = pi/4;                    % phase shift
nstd = 0.8;                      % noise standard deviation
%
theta = 2*pi*k/kpts;             % phase vector
x = Ax*sin(fd*theta);            % desired signal
yd = Ayd*sin(fd*theta+pi/4);     % desired signal at receiver input
yi = Ayi*sin(fi*theta);          % interference
noise = nstd*randn(1,kpts);      % noise at receiver input
yy = yd+yi+noise;                % receiver input
[gain,delay,px,py,rxy,rho,snrdb] = snrmse(x,yy);
%
% display results
%
cpx = ['The value of Px is ',num2str(px),'.'];
cpy = ['The value of Py is ',num2str(py),'.'];
cgain = ['The value gain is ',num2str(gain),'.'];
cdel = ['The value of delay is ',num2str(delay),'.'];
csnrdb = ['The value of SNR is ',num2str(snrdb),' dB.'];
disp(' ')                        % insert blank line
disp(cpx)
disp(cpy)
disp(cgain)
disp(cdel)
disp(csnrdb)
% End of script file.

Executing the program yields the following results:

The value of Px is 3200.
The value of Py is 208.7872.
The value gain is 0.25012.
The value of delay is 64.
The value of SNR is 13.6728 dB.

The theoretical values are easily computed. Since the reference signal is a sinusoid having a peak value of 80:

Equation 8.38.

Px = (80)²/2 = 3200

Three components are present at the receiver input: the sinusoidal signal component at 2 Hz, the sinusoidal interference component at 8 Hz, and the white noise component. The power Py is the sum of these components. This yields

Equation 8.39.

Py = (20)²/2 + (4)²/2 + (0.8)² = 200 + 8 + 0.64 = 208.64

The gain is the ratio of the amplitude of the signal component at the measurement point (the 2 Hz component) to the amplitude of the reference signal. This gives

Equation 8.40.

G = 20/80 = 0.25

Noting that the signal component has a period of 512 samples [x(t) goes through two periods in the span of 1,024 samples], and that the phase delay is π/4, the delay is

Equation 8.41.

τ = [(π/4)/(2π)] × 512 = 64 samples

The SNR is the ratio of the signal power to the interference plus noise power at the input to the receiver. This is the ratio of the first term in (8.39) to the sum of the last two terms in (8.39). (Note that the interference is considered noise, since the interference is orthogonal to the signal component.) This gives

Equation 8.42.

SNR = 200/(8 + 0.64) = 23.1481 (13.6452 dB)

These results are summarized in Table 8.2. The small errors are due to the fact that the estimated noise power is a random variable, since the record length is finite. Modifying the program slightly so that five estimates of the SNR (in dB) are generated results in the vector output

[13.6572 13.6524 13.5016 13.5245 13.5201]

Table 8.2. Summary of Results for Example 8.5

Parameter    Theoretical Value    Estimated Value
Px           3,200                3,200
Py           208.64               208.7872
G            0.25                 0.25012
τ            64 samples           64 samples
S/N          13.6452 dB           13.6728 dB

We clearly see that the estimated SNR is a random variable.

The single biggest difficulty with this method is the accurate determination of the delay. Note that a small error in the estimation of delay produces a small error in the estimated value of Rxy(τm). If the SNR is large, Rxy²(τm) will be close to PxPy and, as we see from (8.30), a small error in Rxy(τm) will then result in a large error in the estimated signal-to-noise ratio. We must therefore be able to accurately determine the peak value of Rxy(τ). This may require that Rxy(τ) be closely sampled, which requires a high sampling frequency for the simulation. Thus, we have the ubiquitous tradeoff between accuracy and the time required to execute the simulation.

 

In this development, we have assumed that the signals are real. The technique illustrated here can be applied with equal ease to signals defined by complex envelopes. The estimator for this case is derived by replacing x(t) by xd(t) + jxq(t) and y(t) by yd(t) + jyq(t). The resulting expression for the SNR is, once again, (8.32). The details of the development are left to the interested student.
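A sketch of one possible complex-envelope implementation follows. (This is an adaptation of snrmse, not a routine from the text; the gain is in general complex, and the delay is located by the peak of |Rxy(τ)|.)

function [gain,delay,snrdb] = snrmsec(x,y)
% complex-envelope version of snrmse (sketch)
ln = length(x);                        % length of the reference vector
rxy = ifft(fft(y,ln).*conj(fft(x,ln)),ln)/ln; % complex cross correlation
px = real(x*x')/ln;                    % power in reference signal
py = real(y*y')/ln;                    % power in measurement signal
[rmax,j] = max(abs(rxy));              % peak of |Rxy| locates the delay
gain = rxy(j)/px;                      % complex gain (magnitude and phase)
delay = j-1;                           % delay in samples
snr = rmax^2/(px*py-rmax^2);           % (8.30) with |Rxy| replacing Rxy
snrdb = 10*log10(snr);                 % SNR in dB
% End of function file.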

Coding

When simulation is used to determine the bit error rate (BER) of a digital communication system that makes use of error control coding, one usually does not use the simulation to count errors at the output of the decoder. There are a variety of reasons for this. First, the BER at the decoder output is usually very small. Consequently, very long simulation run times are required to collect a sufficient number of errors to generate accurate estimates of the BER. Also, many decoding algorithms are computationally complex, which also significantly increases the simulation run time. In addition, both coders and decoders are deterministic devices. Once the code is defined, the source data uniquely determines the codewords. Similarly, the pattern of errors at the receiver output uniquely determines the BER at the decoder output. This suggests a semianalytic approach in which the symbol error rate (SER) at the receiver input, determined using simulation, is mapped to the decoded BER using analysis. Performing this mapping is, in general, a complex task if exact results are desired. Fortunately, however, exact results are seldom necessary, and a number of useful approximations and bounds have been developed to simplify this task.

A waveform-level simulation is typically used to determine the symbol error rate, SER, at the receiver input. An alternative method is to use discrete channel models implemented as hidden Markov models (HMMs). The HMM is a computationally efficient technique for simulating systems for a given set of channel conditions and is therefore very useful for studying the impact of various coding/decoding algorithms. The discrete channel model and the HMM will be studied in detail in Chapter 15.

Analytic Approach to Block Coding

As we know, block codes are formed by grouping information symbols into blocks of length k. To each k-symbol block is appended (n − k) parity symbols to form codewords of length n. These codewords are then transmitted through the channel and, due to disturbances in the channel, random errors may result. In most practical applications, the n-symbol codewords are transmitted in a time slot of duration kTb, where Tb denotes the time for transmitting a single information bit without coding. If the transmitted power is the same with and without coding, a typical assumption, the energy associated with the transmission of each code symbol is (k/n)Eb, where Eb is the energy per bit and k/n is the code rate. Since the energy per transmitted symbol is reduced through the use of error control coding, the channel symbol error probability with coding is increased over the symbol (bit) error probability without coding. One hopes that the added redundancy, through the addition of parity symbols, will provide sufficient error correction capability to provide a net increase in system performance. This may or may not be true.

Assume that a given code can correct up to t errors in each n-symbol block. Also assume that error events are independent, which can at least be approximately ensured by using interleaving. The probability of error associated with the code symbols transmitted through the channel is denoted Psc, where the subscript denotes channel symbols as opposed to information bits. Since t errors per n-symbol codeword can be corrected by the decoder, the probability that the decoded word will be in error, Pcw, is

Equation 8.43.

Pcw ≤ Σ_{i=t+1}^{n} C(n, i) Psc^i (1 − Psc)^(n−i)

Equality holds in (8.43) if all received blocks of n symbols containing t or fewer errors are decoded correctly and no blocks of n symbols containing t + 1 or more errors are decoded correctly. These are known as perfect codes. The only perfect binary codes are the repetition codes for which n is odd, the single error-correcting Hamming codes and the triple error-correcting (23,12) Golay code. For all other codes (8.43) provides a useful bound.

The decoded word error probability, Pcw, does not allow direct comparison of different codes. In order to compare different codes it is necessary to map the decoded word error probability to a decoded information bit error probability, which we denote Pb. An exact mapping is a function of the generator matrix of the code, which determines the code weight distribution. Fortunately, a highly accurate approximation has been developed [16, 17]. This approximation is

Equation 8.44.

Pb ≈ [q/(2(q − 1))] [(d/n) Σ_{i=t+1}^{d} C(n, i) Psc^i (1 − Psc)^(n−i) + (1/n) Σ_{i=d+1}^{n} i C(n, i) Psc^i (1 − Psc)^(n−i)]

where d is the minimum distance of the code and q denotes a q-ary channel. In other words, for binary channels q = 2, and for Reed-Solomon codes, the most popular nonbinary block code, q = 2^k − 1.

Example 8.6. 

We now illustrate the use of (8.44). Assume a binary (q = 2) phase shift keying (PSK) communications system operating in an additive, white, Gaussian noise (AWGN) environment. For this case the bit error probability, without coding, is

Equation 8.45.

Pb = Q(√(2z))

where z represents Eb/N0. With an (n, k) block code, the channel symbol error probability is

Equation 8.46.

Psc = Q(√(2(k/n)z))

Two different binary codes are considered: a (23, 12) Golay code for which n = 23, t = 3, and d = 7, and a (15, 11) Hamming code for which n = 15, t = 1, and d = 3.

At this point, all parameters and variables in (8.44) are known. Prior to evaluating (8.44), however, we must evaluate

Equation 8.47.

C(n, k) = n!/(k!(n − k)!)

The MATLAB function for evaluating (8.47) follows:[5]

function out = nkchoose(n,k)
a =  sum(log(1:n));             % ln of n!
b = sum(log(1:k));              % ln of k!
c = sum(log(1:(n-k)));          % ln of (n-k)!
out = round(exp(a-b-c));        % result
% End of function file.

The MATLAB routine for computing the performance curves for a (15,11) Hamming code and a triple error correcting (23,12) Golay code follows. (Note that PSK modulation and an AWGN channel are assumed.)

% File: c8_cerdemo.m
zdB = 0:0.1:10;                      % set Eb/No axis in dB
z = 10.^(zdB/10);                    % convert to linear scale
ber1 = Q(sqrt(2*z));                 % PSK result
ber2 = Q(sqrt(12*2*z/23));           % CSER for (23,12) Golay code
ber3 = Q(sqrt(11*z*2/15));           % CSER for (15,11) Hamming code
berg = cer2ber(2,23,7,3,ber2);       % BER for Golay code
berh = cer2ber(2,15,3,1,ber3);       % BER for Hamming code
semilogy(zdB,ber1,zdB,berg,zdB,berh) % plot results
xlabel('E_b/N_o in dB')              % label x axis
ylabel('Bit Error Probability')      % label y axis
% End of script file.

The preceding MATLAB code makes use of the function cer2ber, which converts the channel symbol error probability to the approximation of the decoded bit error probability given by (8.44). The MATLAB code for implementing this function is as follows:

function [ber] = cer2ber(q,n,d,t,ps)
% Converts channel symbol error rate to decoded BER.
lnps = length(ps);                      % length of error vector
ber = zeros(1,lnps);                    % initialize output vector
for k=1:lnps                            % iterate error vector
    cer = ps(k);                        % channel symbol error rate
    sum1 = 0; sum2 = 0;                 % initialize sums
    %
    % first loop evaluates the first sum in (8.44)
    %
    for i=(t+1):d
        term = nkchoose(n,i)*(cer^i)*((1-cer)^(n-i));
        sum1 = sum1+term;
    end
    %
    % second loop evaluates second sum
    %
    for i=(d+1):n
        term = i*nkchoose(n,i)*(cer^i)*((1-cer)^(n-i));
        sum2 = sum2+term;
    end
    %
    % compute BER (output)
    %
    ber(k) = (q/(2*(q-1)))*((d/n)*sum1+(1/n)*sum2);
end
% End of function file.

The results of these computations are illustrated in Figure 8.13.

Figure 8.13. Performance comparisons for Hamming and Golay block codes.

 

Analytic Approach to Convolutional Coding

A number of analytic approximations can be used to map the channel symbol error probability to a decoded bit error probability for the convolutional code case. These mappings take the form of upper bounds on the error probability and are therefore the convolutional code equivalent of (8.44). These bounds are usually based on the Viterbi decoding algorithm, which asymptotically approaches the maximum likelihood decoder performance, and is the standard for decoding convolutional codes.

A frequently used bound is based on the transfer function of the convolutional code. The transfer function describes the distance properties of the convolutional code and can be derived from the state transition diagram of the code.

The transfer function for the rate 1/2 convolutional coder shown in Figure 8.14 is given by [18, 19]

Equation 8.48.

T(D, L, I) = D⁵L³I / (1 − DL(1 + L)I)

Figure 8.14. Rate 1/2 convolutional coder for Example 8.7.

Expressing (8.48) in polynomial form gives

Equation 8.49.

T(D, L, I) = D⁵L³I + D⁶L⁴(1 + L)I² + D⁷L⁵(1 + L)²I³ + ···

which describes the distance properties of the various paths in the trellis that start at state 0 and later remerge with state 0. The power of D denotes the Hamming distance (the number of binary ones) separating the given path from the all-zeros path in the decoding trellis. The power of L indicates the length of a given path. In other words, the exponent of L is incremented each time a branch in the trellis is traversed. The power of I is incremented if the branch transition results from a binary one input, and is not incremented if the branch transition results from a binary zero input. For example, the term D⁵L³I represents a path having Hamming distance 5 from the all-zeros path. This path has length 3 and results from input data having 1 binary one and 2 binary zeros (100 to be exact). The next term, D⁶L⁴(1 + L)I² = D⁶L⁴I² + D⁶L⁵I², represents two paths, each of which lies Hamming distance 6 from the all-zeros path. One path has a length of 4 branches and the other has a length of 5 branches. The path of length 4 results from an input of 2 ones and 2 zeros, and the path of length 5 results from an input having 2 ones and 3 zeros. The smallest output weight over all paths that leave the all-zeros state and later remerge with it is the minimum free distance, df, of the code, which is 5 in this case.
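The free distance quoted above is easily verified by a brute-force search over the code trellis. The following sketch assumes that the coder of Figure 8.14 is the standard constraint length 3 code with generators (7, 5) octal, which is consistent with the transfer function (8.48):

% Sketch: minimum free distance search for the (7,5) coder
% states are [s1 s2], the two most recent inputs; index s = 2*s1+s2+1
BIG = 1e6;
w = BIG*ones(1,4);             % min weight of paths reaching each state
w(3) = 2;                      % leave state 00 with u = 1: output 11
dfree = BIG;
for iter = 1:16                % relaxation passes (ample for convergence)
    for s1 = 0:1
        for s2 = 0:1
            s = 2*s1+s2+1;
            if s == 1 || w(s) >= BIG, continue, end
            for u = 0:1
                bw = w(s)+mod(u+s1+s2,2)+mod(u+s2,2); % add branch weight
                ns = 2*u+s1+1;                        % next state [u s1]
                if ns == 1
                    dfree = min(dfree,bw); % path remerges with state 00
                else
                    w(ns) = min(w(ns),bw);
                end
            end
        end
    end
end
dfree                          % displays 5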

In order to approximate the decoded bit error probability, we first let L = 1 in (8.48), since we are not interested in the path lengths. This gives

Equation 8.50.

T(D, I) = T(D, L, I)|L=1 = D⁵I / (1 − 2DI)

For antipodal signaling (PSK) in an AWGN environment, the decoded bit error probability is bounded by [18]

Equation 8.51.

Pb < (1/2) dT(D, I)/dI,    evaluated at I = 1 and D = exp(−REb/N0)

where R is the code rate (R = 1/2 in this case). This result is used in Example 8.7. For a general binary symmetric channel, the Bhattacharyya bound is used. This gives [18, 19]

Equation 8.52.

Pb < dT(D, I)/dI,    evaluated at I = 1, with D given by (8.53)

where

Equation 8.53.

D = 2√(q(1 − q))

and q is the channel symbol error probability determined by simulation. The bounds defined by (8.51) and (8.52) can be rather loose. This is especially true of short constraint length codes.

Equations (8.51) and (8.52) assume hard decision decoding. With soft decision decoding, D in (8.51) is replaced by D0, where

Equation 8.54.

D0 = Σ_{i=1}^{N} √(P(yi|0) P(yi|1))

in which N is the number of quantizer output values, yi is the ith quantized output value, and the conditional probabilities represent the probability of a 0 or 1 at the channel input appearing at the quantizer output as level yi. These conditional probabilities are estimated using either a Monte Carlo or semianalytic technique over the waveform channel.
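A sketch of such an estimate follows, for a uniformly quantized soft-decision PSK channel in AWGN (the quantizer levels, operating point, and sample size are all arbitrary assumptions):

EbN0dB = 4; R = 1/2;                  % operating point and code rate
z = 10^(EbN0dB/10);                   % Eb/No on a linear scale
nsym = 100000;                        % channel symbols per hypothesis
sigma = 1/sqrt(2*R*z);                % noise std for unit-energy symbols
edges = [-inf,linspace(-1.5,1.5,7),inf];  % 8-level quantizer boundaries
y0 = histc(-1+sigma*randn(1,nsym),edges); % outputs given a transmitted 0
y1 = histc(+1+sigma*randn(1,nsym),edges); % outputs given a transmitted 1
p0 = y0(1:8)/nsym;                    % estimates of P(y_i | 0)
p1 = y1(1:8)/nsym;                    % estimates of P(y_i | 1)
D0 = sum(sqrt(p0.*p1))                % estimate of D_0 in (8.54)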

Example 8.7. 

We now apply (8.51) to the rate 1/2 code defined by (8.50) and illustrated in Figure 8.14. Substitution of (8.50) into (8.51) yields

Equation 8.55.

Pb < (1/2) D⁵/(1 − 2D)²,    D = exp(−Eb/2N0)

The following MATLAB code results:

% File c8_convcode.m
zdB = 2:0.1:10;                 % set Eb/No axis in dB
z = 10.^(zdB/10);               % convert to linear scale
puc = Q(sqrt(2*z));             % uncoded BER
W = exp(-z/2);                  % D = exp(-R*Eb/No) with R = 1/2
Num = W.^5;                     % numerator of (8.55)
Den = 1-4*W+4*W.*W;             % denominator (1-2D)^2
ps = 0.5*Num./Den;              % decoded BER bound of (8.55)
semilogy(zdB,puc,'-.',zdB,ps)
grid
legend('uncoded','coded')
xlabel('E_b/N_o in dB')         % label x axis
ylabel('Bit Error Probability') % label y axis
% End of script file.

Executing the code results in Figure 8.15 in which the bound on decoded error probability and the uncoded error probability are compared.

Figure 8.15. Transfer function bound for example rate 1/2 convolutional code.

Summary

In this chapter we have considered the topic of postprocessing by giving a number of examples. The generation of waveform plots, signal constellations (scatter plots), and eye diagrams was demonstrated within the context of a π/4 DQPSK modulator. We also considered the development of estimators based on data generated by a simulation. First we considered the histogram, which is an estimator for probability density functions. While the histogram is, in general, a biased and inconsistent estimator for a pdf, we saw that if sufficient data is available, both the bias and the variance can be made negligible. Next we considered variations of the basic periodogram as an estimator of the PSD of a signal. We saw that while the variance of the basic periodogram is often unacceptably large, the variance can be reduced by averaging periodograms of windowed data segments. This process of segmenting and averaging periodograms involves a tradeoff between estimator variance and resolution. The next estimator considered allowed estimation of system gain, time delay, and signal-to-noise ratio. The techniques developed here will be used in a following chapter when semianalytic techniques are considered. The final topic considered in this chapter was the estimation of decoded error probability in systems using error control coding. The technique here is to determine the channel symbol error rate using simulation and to map the channel symbol error rate to the decoded bit error rate using bounding techniques.

Further Reading

For details on MATLAB graphics capabilities, the reader is encouraged to study the latest version of the MATLAB manual. With the exception of the simple menu given in Example 8.1, postprocessor user interfaces were not covered in this chapter. MATLAB provides routines for developing user interfaces, and the reader is encouraged to study this material. User interfaces can be used to advantage in the development of general-purpose postprocessors.

The other topic covered in this chapter dealt with estimators for probability density functions, power spectral density, gain, delay, and signal-to-noise ratio, as well as estimators for approximating the performance of coded systems based on the uncoded symbol error rate on the channel. The references given below provide detailed information on these topics.

References

Problems

8.1

Based on our knowledge of a π/4 DQPSK transmitter as illustrated in Figure 8.1, draw a block diagram of a π/4 DQPSK receiver. Develop a MATLAB simulation of a π/4 DQPSK receiver. By combining the receiver simulation with the transmitter simulation previously developed, show that the receiver simulation works properly.

8.2

Using the MATLAB code developed for Example 8.1, determine and plot the magnitude of the complex envelope of both the unfiltered and the filtered π/4 DQPSK signal. What do you observe? Explain the results.

8.3

Using the MATLAB plotting routines, generate the (D, Q, t) coordinate system as illustrated in Figure 8.3 and plot xd(t) and xq(t) for a π/4 DQPSK signal. Use four symbols of both xd(t) and xq(t). Using the resulting MATLAB program, illustrate the generation of xd(t), xq(t), and the scatter plot from the three-dimensional image. In addition, generate the real envelope signal. (Note: Consider the MATLAB commands plot3, rotate3d, and view.)

8.4

By appropriately modifying the MATLAB program in Appendix A, rework Example 8.1 so that the filter is a raised cosine filter as described in Chapter 5. Plot the direct channel and the quadrature channel signals at the filter output. Also plot the eye diagram. Compare the resulting eye diagrams with those illustrated in Figure 8.7. Use rolloff factors of 0.5 and 0.7.

8.5

A Gaussian mixture is a random process defined by the pdf

fX(x) = [a/√(2πσ1²)] exp[−(x − m1)²/(2σ1²)] + [(1 − a)/√(2πσ2²)] exp[−(x − m2)²/(2σ2²)]

Using the parameters a = 0.8, m1 = 0, m2 = 1, and σ1 = σ2 = 1, plot fX(x). Rework Example 8.2 using the pdf for the Gaussian mixture. Discuss the results.

8.6

Develop a postprocessor that has as input a file of N samples generated by a MATLAB program. The postprocessor is to be menu driven and is to generate a histogram, a PSD estimate of the input data, and the autocorrelation of the input data. Any necessary parameters required for operation of the postprocessor are to be entered through a parameter file read by the postprocessor. Test the postprocessor using a vector of N = 5,000 samples generated by passing N independent samples of a zero-mean, unit-variance, Gaussian process through a third-order Butterworth filter having a 3 dB break frequency of 0.2fN, where fN is the Nyquist rate.

8.7

Rework Example 8.3 using a rectangular data window rather than a Hanning window. Compare the results using a rectangular window with the results using a Hanning window. How are they different? Explain the reasons for the noted differences.

8.8

Determine the theoretical PSD for Example 8.3. Compare the results given in Example 8.3 with the theoretical results.

8.9

Modify the MATLAB program given in Example 8.3 so that overlapped segments can be used. Let the number of samples that are overlapped be user specified. Rework Example 8.3 using a 50 percent overlap (M/2 samples) and compare the results with the result given in Example 8.3.

8.10

Using the periodogram approach, develop a “power meter” for estimating the power in a given frequency band f1ff2. Demonstrate the operation of your power meter by computing the power, in a given frequency band, at the output of a suitably chosen linear system with a white noise at the system input. Your choice of a system is arbitrary, but you must verify the validity of the results given by your power meter.

8.11

The input of a linear system is

Problems

Consider two outputs

Problems

and

Problems

For each output compute the SNR (in dB) of the measurement signal relative to the reference signal x[n]. Fully explain any differences. What is the theoretical SNR for each case?

8.12

The input to a linear system is defined by

Problems

and the output is defined by

Problems
  1. Using the input x[n] as a reference, determine the system gain and delay from the input to the point where y[n] is measured. Express the delay in both sample periods and in seconds. Also determine the SNR of y[n] relative to x[n]. You are free to choose the sampling frequency and the number of samples processed. However, you are to justify these choices.

  2. Discuss the sources of error in your results given in (a). Conduct an appropriate experiment to show that these errors are not significant.

  3. Does the system exhibit amplitude distortion? Does the system exhibit phase (delay) distortion?

  4. Suppose y[n] is given by

    Problems

    where A, B, a, and b are parameters. What are the values of these parameters if the system is to be distortionless? Using the techniques illustrated in Example 8.5, show that your answers are correct.

8.13

BCH codes are binary block codes that allow multiple errors per codeword to be corrected. An (n, k, t) BCH code has rate k/n, and can correct t errors per block of n symbols. By using the technique illustrated in Example 8.6, examine the relative performance of (63,30,6) and (255,123,19) BCH codes. Assume PSK modulation and compare both BCH codes to the performance of the uncoded system.

8.14

Extend Example 8.6 so that one may plot the decoded bit error probability as a function of the channel symbol error probability. Illustrate the resulting algorithm using a Golay code.

8.15

Rework Example 8.7 using the Bhattacharyya bound defined by (8.52) and (8.53). Compare the result to that given in Example 8.7.

Appendix A: MATLAB Code for Example 8.1

Main Program: c8_pi4demo.m

% File: c8_pi4demo.m
m = 200; bits = 2*m; % number of symbols and bits
sps = 10; % samples per symbol
iphase = 0; % initial phase
order = 5; % filter order
bw = 0.2; % normalized filter bandwidth
%
% initialize vectors
%
data = zeros(1,bits); d = zeros(1,m); q = zeros(1,m);
dd = zeros(1,m); qq = zeros(1,m); theta = zeros(1,m);
thetaout = zeros(1,sps*m);
%
% set direct and quadrature bit streams
%
data = round(rand(1,bits));
dd = data(1:2:bits-1);
qq = data(2:2:bits);
%
% main programs
%
theta(1) = iphase; % set initial phase
thetaout(1:sps) = theta(1)*ones(1,sps);
for k=2:m
    if dd(k) == 1
        phi_k = (2*qq(k)-1)*pi/4;
    else
        phi_k = (2*qq(k)-1)*3*pi/4;
    end
    theta(k) = phi_k + theta(k-1);
    for i=1:sps
        j = (k-1)*sps+i;
        thetaout(j) = theta(k);
    end
end
d = cos(thetaout);
q = sin(thetaout);
[b,a] = butter(order,bw);
df = filter(b,a,d);
qf = filter(b,a,q);
%
% postprocessor for plotting
%
kk = 0; % set exit counter
while kk == 0 % test exit counter
k = menu('pi/4 QPSK Plot Options',...
    'Unfiltered pi/4 QPSK Signal Constellation',...
    'Unfiltered pi/4 QPSK Eye Diagram',...
    'Filtered pi/4 QPSK Signal Constellation',...
    'Filtered pi/4 QPSK Eye Diagram',...
    'Unfiltered Direct and Quadrature Signals',...
    'Filtered Direct and Quadrature Signals',...
    'Exit Program'),
    if k == 1
        sigcon(d,q) % plot unfiltered signal con.
        pause
    elseif k ==2
        dqeye(d,q,4*sps) % plot unfiltered eye diagram
        pause
    elseif k == 3
        sigcon(df,qf) % plot filtered signal con.
        pause
    elseif k == 4
        dqeye(df,qf,4*sps) % plot filtered eye diagram
        pause
    elseif k == 5
        numbsym = 10; % number of symbols plotted
        dt = d(1:numbsym*sps); % truncate d vector
        qt = q(1:numbsym*sps); % truncate q vector
        dqplot(dt,qt) % plot truncated d and q signals
        pause
    elseif k == 6
        numbsym = 10; % number of symbols to be plotted
        dft=df(1:numbsym*sps); % truncate df to desired value
        qft=qf(1:numbsym*sps); % truncate qf to desired value
        dqplot(dft,qft) % plot truncated signals
        pause
    elseif k == 7
        kk = 1; % set exit counter to exit value
    end
end
% End of script file.

Supporting Routines

sigcon.m

function []=sigcon(x,y)
plot(x,y)
axis('square')
axis('equal')
title('SIGNAL CONSTELLATION')
xlabel('Direct Channel')
ylabel('Quadrature Channel')
% End of function file.

dqeye.m

function [] = dqeye(xd,xq,m)
lx = length(xd); % samples in data segment
kcol = floor(lx/m); % number of columns
xda = [0,xd]; xqa = [0,xq]; % append zeros
for j = 1:kcol % column index
    for i = 1:(m+1) % row index
        kk = (j-1)*m+i; % sample index
        y1(i,j) = xda(kk);
        y2(i,j) = xqa(kk);
    end
end
subplot(211) % direct channel
plot(y1,'k'),
title('D/Q EYE DIAGRAM'),
xlabel('Sample Index'),
ylabel('Direct'),
subplot(212) % quadrature channel
plot(y2,'k'),
xlabel('Sample Index'),
ylabel('Quadrature'),
subplot(111)
% End of function file.

dqplot.m

function [] = dqplot(xd,xq)
lx = length(xd);
t = 0:lx-1;
nt = t/(lx-1);
nxd = xd(1,1:lx);
nxq = xq(1,1:lx);
subplot(211)
plot(nt,nxd);
a = axis;
axis([a(1) a(2) 1.5*a(3) 1.5*a(4)]);
title('Direct and Quadrature Channel Signals'),
xlabel('Normalized Time'),
ylabel('Direct'),
subplot(212)
plot(nt,nxq);
a = axis;
axis([a(1) a(2) 1.5*a(3) 1.5*a(4)]);
xlabel('Normalized Time'),
ylabel('Quadrature'),
subplot(111)
% End of function file.

 



[1] π/4 DQPSK has been adopted as the modulation format in a number of system standards. These include the USDC (United States Digital Cellular) system, the PACS (Personal Access Communication System) PCS system, the PDC (Personal Digital Cellular) system, and the PHS (Personal Handy Phone) cordless system.

[2] Recall the discussion of decimation in Chapter 3.

[3] In MATLAB the rectangular window is called the BOXCAR window.

[4] One obviously wishes to select the overlap so that a minimum-variance spectral estimator results. Marple [8] states that, for a Gaussian process, the minimum variance is achieved with an overlap of 65 percent when using a Hanning window.

[5] MATLAB contains the function nchoosek = n!/(k!(n − k)!) as a standard m-file. The routine given here is named nkchoose in order to avoid an obvious conflict with nchoosek. The technique given here uses logarithms to increase the dynamic range of the computation and is handy in those applications where n is so large that n! results in an overflow.
