Chapter 7. Generating and Processing Random Signals

To this point we have been concerned with deterministic signals in simulations. In all communication systems of practical interest, random effects such as channel noise, interference, and fading degrade the information-bearing signal as it passes through the system from information source to the final user. Accurate simulation of these systems at the waveform level requires that these random effects be modeled accurately. Therefore, algorithms are required to produce these random effects. The fundamental building block is the random number generator. While much can be said about random number generators (several books and many research papers have been written on the subject), the emphasis in this chapter is on the use of random number generators in the simulation of communication systems. Thus, we restrict our study to the essential task of generating sampled versions of random waveforms (signals, interference, noise, etc.) for use in simulation programs. In the simulation context, all random processes must be expressed by sequences of random variables. Generating and testing these random sequences is the subject of this chapter. Many programming languages useful for developing simulation programs, such as MATLAB, contain random number generators as part of the library of “built-in” functions. Understanding the concepts upon which these number generators are based provides important insight into the overall simulation program. It is wise to ensure that these number generators are properly designed and appropriate for use in a given application.

We will see that these random number generators do not, strictly speaking, generate random numbers, but produce sequences that appear random over the observation (simulation) interval so that they can be used to approximate a sample function of a random process in a given simulation program. By “appear random” we mean that the generated sequences, over a given simulation interval, have the properties required to accurately model a random process, with the required level of accuracy, for a given application. We refer to such sequences as pseudo-random sequences, since, even though they are deterministic, they appear random when used for a given application. The accuracy required is dependent on the application. For example, if we must generate a waveform to represent the noise at the input of a PLL discriminator, higher accuracy is required to model the noise waveform for an input SNR of 50 dB than for 8 dB. More accuracy is required to model the noise component in a digital communications system if the bit-error probability is 10⁻⁷ than if the bit-error probability is 10⁻³.

In this chapter we first consider the generation of sample functions of a random process. The concept of stationarity is examined in the simulation context. Simulation models for digital modulators are then briefly considered. After these preliminary discussions, we turn our attention to the main focus of this chapter and consider the following:

  • Generating uncorrelated random numbers uniformly distributed in (0,1)

  • Mapping random numbers that are uncorrelated and uniformly distributed to random numbers that are uncorrelated and have an arbitrary (desired) probability density function (pdf)

  • Generating random numbers that are uncorrelated and have a Gaussian pdf

  • Generating random numbers that are correlated and have a Gaussian pdf

  • Generating random numbers that are correlated and have an arbitrary (desired) pdf

We then take a brief look at the generation of pseudonoise (PN) sequences and at several computational techniques applied to sequences of random numbers.

Stationary and Ergodic Processes

When simulating a communications system, the sample functions generated to represent signals, noise, and interference will be assumed ergodic. This is required, since we typically process time-domain samples of waveforms through the system sequentially and, at each point in the system, there is a single waveform (sample function). We make the assumption that the waveform processed by the simulation is a typical member of the ensemble defined by the underlying statistical model. Various statistical quantities such as moments, signal-to-noise ratios, and bit-error rates will be computed as time average quantities. When comparing simulation results with corresponding theoretical results, there will usually be an underlying assumption that time averages, computed by the simulation, are equivalent to ensemble averages. As a result, there is an implied assumption that the underlying random processes are ergodic.

Ergodic processes are always stationary. Therefore, the sample functions generated within a simulation are always assumed to be members of a stationary random process. Recall from basic random process theory that the definition of stationarity is that all statistical quantities are independent of the time origin. In order to demonstrate several of these ideas, we pause to consider a simple example.

Example 7.1. 

Assume that the sample functions of a random process are defined by the expression

Equation 7.1.

$x(t) = A\cos(2\pi f t + \phi_i)$

in which ξi is an outcome in the sample space of an underlying random experiment, and each outcome ξi is mapped to a phase φi. We also assume that the underlying random experiment consists of drawing a number from a uniform number generator. The result of this draw is the outcome ξi = ui, where ui is uniformly distributed in (0, 1). The value of ui is then mapped into a phase φi = kui. With A and f fixed, the value of φi determines the waveform. In this example we are interested in two values of k, namely, k = 2π, in which case the phase is uniformly distributed in (0, 2π), and k = π/2, in which case the phase is uniformly distributed in (0, π/2). As a second example, assume that the random process is described by the expression

Equation 7.2.

$z(t) = A(1 + u_i)\cos(2\pi f t)$

In this case the amplitude is uniformly distributed in the range (A, 2A).

The following MATLAB program produces three sets of sample functions of a random process. The first set of waveforms, denoted x(t), corresponds to (7.1) with k = 2π. The second set of waveforms, denoted y(t), corresponds to (7.1) with k = π/2. The third set of waveforms, denoted z(t), are defined by (7.2). For all waveforms, A = 1 and f = 1. Two seconds of data and twenty sample functions are generated for each simulation.

% File:  c7_sinewave.m
f = 1;                 % frequency of sinusoid
fs = 100;              % sampling frequency
t = (0:200)/fs;        % time vector
for i=1:20
    x(:,i) = cos(2*pi*f*t+rand(1)*2*pi)';
    y(:,i) = cos(2*pi*f*t+rand(1)*pi/2)';
    z(:,i) = (1+rand(1))*cos(2*pi*f*t)';
end
subplot(3,1,1); plot(t,x,'k'), ylabel('x(t)')
subplot(3,1,2); plot(t,y,'k'), ylabel('y(t)')
subplot(3,1,3); plot(t,z,'k'), ylabel('z(t)')
% End of script file.

Executing this program yields the results illustrated in Figure 7.1.

Figure 7.1. Sample functions for three different random processes.

The time averages of all sample functions comprising x(t), y(t), and z(t) are all equal to zero. The reader can easily verify (see Problem 7.1) that the ensemble averages of x(t) are approximately zero when computed at a large number of points, ti for 0 ≤ ti ≤ 2. The ensemble averages will converge to zero as the number of sample functions tends to ∞. For y(t), however, the ensemble average is approximately 1 for t in the neighborhood of 0.875 and 1.875, is approximately −1 for t in the neighborhood of 0.375 and 1.375, and is approximately zero for t in the neighborhood of 0.125, 0.625, 1.125, and 1.625. This is an example of a cyclostationary process, in which the moments are periodic.

The sample functions denoted by z(t) are also sample functions from a cyclostationary process. Note that sampling the process at t = 0.5k generates a random variable, the mean of which is approximately +1.5 for k even and is approximately −1.5 for k odd.

 

The preceding example made use of the MATLAB uniform random number generator rand. In the following example we illustrate the use of a random number generator for modeling a digital modulator. In the following section the algorithms used for implementing uniform number generators are explored in detail.

Example 7.2. 

In the work to follow we have frequent need for models of digital modulators. The fundamental building block for these modulators will be the function random_binary, which produces a binary waveform having values +1 or −1. The number of bits produced, as well as the number of samples per bit, are arguments. The listing follows:

function [x, bits] = random_binary(nbits,nsamples)
% This function generates a random binary waveform of length nbits
% sampled at a rate of nsamples/bit.
x = zeros(1,nbits*nsamples);
bits = round(rand(1,nbits));
for m=1:nbits
    for n=1:nsamples
        index = (m-1)*nsamples + n;
        x(1,index) = (-1)^bits(m);
    end
end

The function random_binary can be used to simulate a number of digital modulators. For example, a QPSK modulator can be simulated using the MATLAB statement

x = random_binary(nbits,nsamples)+i*random_binary(nbits,nsamples);

The following MATLAB program generates a QPSK signal for 10 bits with a sampling frequency of 8 samples per bit:

% File:  c7_example2.m
nbits = 10; nsamples = 8;
x = random_binary(nbits,nsamples)+i*random_binary(nbits,nsamples);
xd = real(x); xq = imag(x);
subplot(2,1,1)
stem(xd,'.'), grid; axis([0 80 -1.5 1.5]);
xlabel('Sample Index'), ylabel('xd')
subplot(2,1,2)
stem(xq,'.'), grid; axis([0 80 -1.5 1.5]);
xlabel('Sample Index'), ylabel('xq')
% End of script file.

Executing the program yields the QPSK signal, having the direct and quadrature components, illustrated in Figure 7.2.

Figure 7.2. Direct and quadrature components of a QPSK signal.

Uniform Random Number Generators

A random variable having a uniform probability density function is easily transformed to a random variable having a desired pdf other than uniform. Therefore, the first step in the generation of a random variable having a specified pdf is to generate a random variable that is uniformly distributed on the interval (0,1). This is typically accomplished by first generating a sequence of numbers (integers) between 0 and M and then dividing each element of the sequence by M. The most common technique for implementing random number generators is known as linear congruence.

Linear Congruence

A linear congruence generator (LCG) is defined by the operation

Equation 7.3.

$x_{i+1} = [a x_i + c] \bmod m$

where a and c are referred to as the multiplier and increment, respectively, and the parameter m is referred to as the modulus. This is, of course, a deterministic sequential algorithm in which successive values of x are generated in turn. The initial value of x, denoted x0, is referred to as the seed number of the generator. (We will have more to say about seed numbers later in this section.) Given that x0, a, c, and m are integers, all numbers produced by the LCG will be integers. Since the operation [axᵢ + c] is evaluated mod(m) it follows that, at most, m distinct integers can be generated by (7.3). A desirable property of the generator output is that it has a long period, so that the maximum number of integers are produced in the output sequence before the sequence repeats. When the period is maximized, for a given value of m, we say that the generator is full period. In addition, application to a given simulation program places other demands on the LCG. For example, we usually require that the samples xᵢ and xᵢ₊₁ be uncorrelated. In addition, the LCG output may be required to pass other statistical tests, depending on the application. The LCG can take many different forms. In this section, only the most common algorithms are considered.
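Although the MATLAB examples to follow implement (7.3) inline using rem, the recursion is easily packaged as a function. The following sketch is our own illustration (the function name c7_lcg is hypothetical, not a file from the text); it applies (7.3) repeatedly and scales the resulting integers by m to give outputs in (0,1):

% File: c7_lcg.m (hypothetical illustration)
function [u,x] = c7_lcg(a,c,m,x0,N)
% Generate N samples from the mixed LCG of (7.3), scaled to (0,1).
x = zeros(1,N);                     % allocate memory
x(1) = rem(a*x0+c,m);               % first iterate from the seed x0
for i=2:N
    x(i) = rem(a*x(i-1)+c,m);       % x(i+1) = [a*x(i)+c] mod(m)
end
u = x/m;                            % scale integers to (0,1)
% End of function file.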

Technique A: The Mixed Congruence Algorithm

The most general congruence algorithm is the “mixed” congruence algorithm for which c ≠ 0. We refer to this algorithm as a mixed algorithm because both multiplication and addition are involved in the calculation of xi+1. The mixed linear algorithm takes the form given in (7.3)

Equation 7.4.

$x_{i+1} = [a x_i + c] \bmod m, \quad c \neq 0$

For c ≠ 0, the generator has a maximum period of m. This period is achieved if and only if

  • The increment c is relatively prime to m. In other words, c and m have no common prime factors.

  • a − 1 is a multiple of p for every prime p that divides the modulus m.

  • a − 1 is a multiple of 4 if m is a multiple of 4.

A proof of the foregoing statement is given by Knuth [1].

Example 7.3. 

We wish to design a mixed congruence generator having a period m = 5,000. Since

Equation 7.5.

$m = 5000 = (2^3)(5^4)$

we can ensure that m and c are relatively prime by setting c equal to a product of primes other than 2 and 5. This satisfies the first property. One of many possibilities is to set

Equation 7.6.

$c = (3^3)(7^2) = 1323$

The value of a must now be selected. The second property is satisfied by setting

Equation 7.7.

$a - 1 = k_1 p_1 = 2 k_1$

and

Equation 7.8.

$a - 1 = k_2 p_2 = 5 k_2$

where p1 = 2 and p2 = 5 (the prime factors of m), and k1 and k2 are arbitrary integers. Since 4 is a factor of m = 5,000, we satisfy the third property by setting

Equation 7.9.

$a - 1 = 4 k_3$

where k3 is an arbitrary integer. An obvious choice for a is to let

Equation 7.10.

$a - 1 = (2)(5)(4)k = 40k$

or

Equation 7.11.

$a = 40k + 1$

where k is an integer. With k = 6, we have a = 241. Thus:

Equation 7.12.

$x_{i+1} = [241 x_i + 1323] \bmod 5000$

is a full-period generator. Note that there are many other choices of parameters that will produce a full-period generator with m = 5,000.

Example 7.4. 

In this example we show that the LCG designed in the previous example does indeed have a period of m = 5,000. In the following MATLAB program, a seed number is entered and the program runs until the seed recurs. If n integers are generated with n > m without the seed recurring, one assumes that the generator is caught in a loop in which a short sequence is repeatedly generated. The MATLAB program is

% File:  c7_LCGperiod.m
a = input('Enter multiplier a > '),
c = input('Enter offset c > '),
m = input('Enter modulus m > '),
seed = input('Enter seed > '),
n=1; ix = rem((seed*a+c),m);
while (ix~=seed)&(n<m+2)
    n = n+1; ix = rem((ix*a+c),m);
end
if n>m
    disp('Caught in a loop.')
else
    text = ['The period is ',num2str(n,15),'.'];
    disp(text)
end
% End of script file.

Executing the program yields the following dialog:

≫ c7_LCGperiod
Enter multiplier a > 241
Enter offset c > 1323
Enter modulus m > 5000
Enter seed > 1
The period is 5000.

We see that the period is indeed 5,000 as expected.

Technique B: The Multiplicative Algorithm With Prime Modulus

The multiplicative generator is defined as

Equation 7.13.

$x_{i+1} = a x_i \bmod m$

which is the mixed algorithm with the increment c set equal to zero. Note that xi cannot equal zero for c = 0. Therefore, the full period is m − 1 rather than m, as was the case previously. The multiplicative algorithm produces a full period if [1]

  • m is prime (m is usually required to be large)

  • a is a primitive element mod(m)

As we know, a prime number is a number evenly divisible only by 1 or by the number itself. The second property perhaps requires an explanation. We mean that a is a primitive element mod(m) if aⁱ − 1 is a multiple of m for i = m − 1, but for no smaller value of i. In other words, a is a primitive element mod(m) if

Equation 7.14.

$a^{m-1} = km + 1$

and

Equation 7.15.

$a^i \neq km + 1, \quad i = 1, 2, \ldots, m-2$

for k an arbitrary integer. For a proof that (7.13) produces a full-period generator under the given conditions see [1].

Technique C: The Multiplicative Algorithm with Nonprime Modulus

The most important case in which the modulus m is not a prime number is m equal to a power of two. In other words:

Equation 7.16.

$m = 2^n$

for integer n. For the case defined by (7.16), the maximum period is 2ⁿ/4 = 2ⁿ⁻². This period is achieved if

  • The multiplier a is 3 or 5 mod(8)

  • The seed x0 is odd

A proof is given by Knuth [1].

Since the product of two odd numbers is odd, it follows that all values generated by (7.13) with m = 2ⁿ are odd if x0 is odd. Thus, no even values of xᵢ are generated, which reduces the period by a factor of two. The odd integers are divided into two sets, only one of which is generated for a given seed. This reduces the period by another factor of two. The set of odd integers actually generated depends upon the choice of the seed. (See Problem 7.3.)

The advantage of using m = 2ⁿ is that integer overflow can be used to perform the mod(m) operation. This reduces the computation time. While this is indeed desirable, the result is a program that is not easily transportable.
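The factor-of-four period reduction is easily checked for small parameters. In the following fragment (an illustration of our own; the parameter values are arbitrary) we take m = 2⁵ = 32 with a = 5, which is 5 mod(8), and the odd seed x0 = 1. The displayed period is 8 = 2⁵⁻²:

% Quick check of Technique C (illustrative parameters)
m = 32; a = 5; x0 = 1;              % m = 2^5, a = 5 mod(8), odd seed
ix = rem(a*x0,m); n = 1;            % first iterate
while ix ~= x0
    ix = rem(a*ix,m);               % iterate until the seed recurs
    n = n+1;
end
disp(['The period is ',num2str(n),'.'])   % displays: The period is 8.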

Testing Random Number Generators

The previous section provides us with the tools for generating pseudo-random numbers that are uniformly distributed between 0 and 1. To this point we have considered only the period of the sequence produced by an LCG. While we obviously wish this period to be long, there are other desirable attributes to be satisfied for a given application. At the very least, we desire the sequence to be delta correlated (white). Other requirements may be necessitated by the application.

A number of procedures have been developed for testing the randomness of a given sequence. Among the most popular of these are the Chi-square test, the Kolmogorov-Smirnov test, and the spectral test. A study of these is beyond the scope of the material presented here. The interested student is referred to the literature [1, 2]. The spectral test appears to be the most powerful of these tests. A brief description of the spectral test, applied to the Wichmann-Hill algorithm to be discussed later, is given in the paper by Coates [3].

For many of the applications to follow, the most important attribute to be satisfied is that the elements of a given sequence are independent, or at least uncorrelated. Toward this end, we consider two very simple tests: scatterplots and the Durbin-Watson test. It should be pointed out that the properties of a given sequence apply to the complete sequence (the full period). If one uses only a portion of the sequence, the properties of the complete sequence no longer apply.

Scatterplots

The scatterplot is best illustrated by an example.

Example 7.5. 

A scatterplot is a plot of xᵢ₊₁ as a function of xᵢ, and represents an empirical measure of the quality of the number generator. For this example, we consider two number generators defined by

Equation 7.17.

$x_{i+1} = [65 x_i + 1] \bmod 2048$

and

Equation 7.18.

$x_{i+1} = [1229 x_i + 1] \bmod 2048$

Applying the program c7_LCGperiod.m presented in Example 7.4 shows that both of these generators are full period. The MATLAB code for generating the scatterplots for each of these generators is

% File:  c7_LCDemo1.m
m = 2048; c = 1; seed = 1;        % default values of m and c
a1 = 65; a2 = 1229;               % multiplier values
ix1 = seed; ix2 = seed;           % initialize algorithm
x1 = zeros(1,m); x2 = zeros(1,m); % initialize arrays
for i=1:m
    ix1 = rem((ix1*a1+c),m);
    x1(i) = ix1/m;
    ix2 = rem((ix2*a2+c),m);
    x2(i) = ix2/m;
end
subplot(1,2,1)
y1 = [x1(1,2:m),x1(1,1)];
plot(x1,y1,'.')                   % plot results for a1
subplot(1,2,2)
y2 = [x2(1,2:m),x2(1,1)];
plot(x2,y2,'.')                   % plot results for a2
% End of script file.

Executing the program yields the scatterplots illustrated in Figure 7.3. One seeks a scatterplot in which all combinations of the ordinate xᵢ₊₁ and the abscissa xᵢ occur, so that the scatterplot is devoid of structure. It appears from Figure 7.3 that a = 65 yields a generator having smaller serial correlation than the generator with a = 1,229. We will see in Example 7.6 that this is indeed the case.

Figure 7.3. Scatterplots for a1 = 65 (left) and a2 = 1,229 (right).

The Durbin-Watson Test

The Durbin-Watson test for independence is implemented by calculating the Durbin parameter

Equation 7.19.

$D = \dfrac{\sum_{n=2}^{N}\left(X[n] - X[n-1]\right)^2}{\sum_{n=1}^{N} X^2[n]}$

where X[n] is a zero-mean random variable [4]. We will show that values of D in the neighborhood of 2 imply small correlation between X[n] and X[n − 1].

In order to illustrate the properties of the Durbin-Watson test, assume that X[n − 1] and X[n] are correlated and that X[n] is an ergodic process. In order to simplify the notation we assume that N is large so that N − 1 ≈ N and write

Equation 7.20.

$D \approx \dfrac{E\{(X - Y)^2\}}{E\{X^2\}}$

where X denotes X[n], Y denotes X[n − 1], and E{·} denotes expectation. Since we assumed that X[n] and X[n − 1] are correlated we let[1]

Equation 7.21.

$Y = \rho X + \sqrt{1 - \rho^2}\, Z$

where X and Z are uncorrelated and ρ is the correlation coefficient relating X and Y. Note that X, Y, and Z all have equal variance, which we denote σ². Substituting (7.21) in (7.20) gives

Equation 7.22.

$D = \dfrac{(1-\rho)^2 E\{X^2\} - 2(1-\rho)\sqrt{1-\rho^2}\, E\{XZ\} + (1-\rho^2) E\{Z^2\}}{E\{X^2\}}$

The middle term is zero, since X and Z are uncorrelated and zero-mean. Since X and Z have equal variance

Equation 7.23.

$D = (1-\rho)^2 + (1-\rho^2) = 2(1 - \rho)$

Since −1 ≤ ρ ≤ 1, the Durbin parameter D varies between 0 and 4, with D = 2 if ρ = 0. Values of D < 2 imply positive correlation, while D > 2 implies negative values of ρ. The following MATLAB function computes the value of the Durbin parameter:

% File: c7_durbin.m
function D = c7_durbin(x)
N = length(x);                 % length of input vector
y = x-mean(x);                 % remove dc
ydiff = y(2:N)-y(1:(N-1));     % numerator summand
Num = sum(ydiff.*ydiff);       % numerator factor of D
Den = sum(y.*y);               % denominator factor of D
D = Num/Den;                   % Durbin parameter
% End of function file.

Example 7.6. 

In this example, we calculate the value of D for the two noise generators considered in Example 7.5. The MATLAB code is as follows:

% File:  c7_LCDemo2.m
m = 2048; c = 1; seed = 1;
a1 = 65; a2 = 1229;
ix1 = 1; ix2 = 1;
x1 = zeros(1,m); x2 = zeros(1,m);
for i=1:m
    ix1 = rem((ix1*a1+c),m);
    x1(i) = ix1;
    ix2 = rem((ix2*a2+c),m);
    x2(i) = ix2;
end
D1 = c7_durbin(x1); D2 = c7_durbin(x2); % calculate Durbin parameters
rho1 = 1-D1/2; rho2 = 1- D2/2; % calculate correlation
text1 = ['The value of D1 is ',num2str(D1),' and rho1 is ',...
   num2str(rho1),'.'];
text2 = ['The value of D2 is ',num2str(D2),' and rho2 is ',...
   num2str(rho2),'.'];
disp(text1)
disp(text2)
% End of script file.

Executing the program yields:

≫ c7_LCDemo2
The value of D1 is 1.9925 and rho1 is 0.0037273.
The value of D2 is 1.6037 and rho2 is 0.19814.

For a1 = 65 the correlation is approximately 0, while for a2 = 1,229 the correlation is approximately 0.2. It therefore follows from the Durbin-Watson test that a1 = 65 gives superior results to a2 = 1,229. This result is consistent with the scatterplots shown in Figure 7.3.

Minimum Standards

It is a major task to thoroughly test a given LCG for quality by showing that a variety of statistical tests for randomness are passed. This is especially true when the generated sequence is long. In order to partially solve this problem, a number of algorithms have been identified as minimum standard algorithms. A minimum standard algorithm is one that is [5]

  • Full period

  • Passes all applicable statistical tests for randomness

  • Easily transportable from one computer to another

Once such an algorithm has been identified and properly documented, it becomes a minimum standard. The algorithm can then be used with confidence by others without additional testing. As pointed out in [5], if a minimum standard algorithm is used, one need not worry about the correctness of the algorithm, but must ensure that the algorithm is implemented correctly for the given computational environment.[2] An important programming concern is that all numbers generated by the algorithm be uniquely representable.[3]

Lewis, Goodman, and Miller Minimum Standard

The Lewis, Goodman, and Miller minimum standard is defined by [5]

Equation 7.24.

$x_{i+1} = 16807\, x_i \bmod (2^{31} - 1)$

in which m is the Mersenne[4] prime 2³¹ − 1. This value of m was first suggested by Lehmer, who was responsible for much of the basic work on LCGs more than half a century ago [5]. It is widely used and is easily implemented in integer arithmetic on 32-bit computers, and in floating-point arithmetic if the mantissa exceeds 31 bits.[5]
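A direct implementation of (7.24) is sketched below. The function name and interface are our own (hypothetical) choices; note that double-precision floating-point arithmetic evaluates the product exactly, since a·xᵢ < 2⁵³:

% File: c7_minstd.m (hypothetical illustration of (7.24))
function [u,x] = c7_minstd(x0,N)
a = 16807; m = 2^31-1;              % Lewis, Goodman, and Miller parameters
x = zeros(1,N);                     % allocate memory
x(1) = rem(a*x0,m);                 % first iterate from the seed x0
for i=2:N
    x(i) = rem(a*x(i-1),m);         % x(i+1) = a*x(i) mod(m)
end
u = x/m;                            % scale integers to (0,1)
% End of function file.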

The Wichmann-Hill Algorithm

The previous work has shown that we desire number generators having long periods. An effective technique for constructing a waveform having a long period is to sum several periodic waveforms having slightly different periods. For example, cos 2π(1)t has a period of 1 second and cos 2π(1.0001)t has a period of 10,000/10,001, which is slightly less than 1 second. The composite waveform can be written in the form

Equation 7.25.

$x(t) = \cos 2\pi t + \cos 2\pi(1.0001)t$

which has a period of 10,000 seconds, or approximately 2.78 hours. During this period, the first component goes through 10,000 periods and the second component goes through 10,001 periods. Additional components can be used if desired.

The same technique can be applied to LCGs by combining several number generators having slightly different periods [6]. The Wichmann-Hill algorithm is probably the best-known example of a combined number generator. Many different variations of the Wichmann-Hill algorithm are possible. The original algorithm, which is nicely described in a paper by Coates [3], uses three component generators defined as

Equation 7.26.

$x_{i+1} = 171\, x_i \bmod 30269$

Equation 7.27.

$y_{i+1} = 172\, y_i \bmod 30307$

Equation 7.28.

$z_{i+1} = 170\, z_i \bmod 30323$

The three component generators are indeed full period (see Problem 7.11), and they are combined to give the output

Equation 7.29.

$u_i = \left(\frac{x_i}{30269} + \frac{y_i}{30307} + \frac{z_i}{30323}\right) \bmod 1$

This Wichmann-Hill algorithm is equivalent to a multiplicative LCG with multiplier

Equation 7.30.

$a = 16555425264690$

and modulus

Equation 7.31.

$m = (30269)(30307)(30323) = 27817185604309$

Since m is clearly not prime, the period is shorter than m − 1. It is shown in [3] that the period is approximately 7.0 × 10¹², which, although less than m, is still extremely long.

The Wichmann-Hill algorithm, although somewhat different in architecture from the previously presented minimum standard, is considered a minimum standard uniform number generator, since it has been tested extensively, has been shown to pass all of the standard statistical tests, and is easily transported from one machine to another [3].
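A minimal sketch of the combined generator follows, assuming the component parameters given in (7.26) through (7.29); the function name and interface are our own choices, and the seeds s1, s2, and s3 are arbitrary integers within the ranges of the component generators:

% File: c7_wichhill.m (hypothetical illustration)
function [u,s1,s2,s3] = c7_wichhill(s1,s2,s3,N)
u = zeros(1,N);                     % allocate memory
for i=1:N
    s1 = rem(171*s1,30269);         % first component generator (7.26)
    s2 = rem(172*s2,30307);         % second component generator (7.27)
    s3 = rem(170*s3,30323);         % third component generator (7.28)
    u(i) = rem(s1/30269+s2/30307+s3/30323,1);  % combined output (7.29)
end
% End of function file.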

MATLAB Implementation

Prior to the release of MATLAB 5, the uniform random number generator rand, included in the MATLAB library, was the minimum standard number generator defined by (7.24). The uniform random number generator used in MATLAB versions 5 and 6 is based on a technique developed by Marsaglia.[6] This number generator, which is targeted at the generation of floating-point numbers rather than scaled integers, is briefly described in a short paper by Moler [7]. MathWorks claims that this number generator has a period exceeding 2¹⁴⁹² and is “fairly sure” that all floating-point numbers between eps and 1-eps/2 are generated, where the MATLAB constant eps, as described in Chapter 3, is 2⁻⁵². The new number generator uses only addition and subtraction. Since no multiplications or divisions are used, the algorithm executes much faster than an LCG.

Seed Numbers and Vectors

Since the simulation examples presented in this book are based on MATLAB, it is important to briefly consider the way in which MATLAB handles seeds. The “old” MATLAB random number generator (used prior to MATLAB 5 and defined by (7.24)) used a single seed number. The “new” random number generator uses a vector seed, referred to as the state of the number generator. This vector consists of 35 elements (32 floating-point numbers, two integers, and a flag) that define the state of the number generator [7]. With MATLAB 5 or later, either number generator can be used. The new random number generator is the default. The old random number generator defined by (7.24) can be invoked by using the command RAND('seed',0) or RAND('seed',J). As with all MATLAB commands, the user should carefully study the information provided by the help command. In addition, the user should be aware of the following (the term seed is used to cover both integer seeds and state vectors):

  • The user can either use the default seed number or can specify a seed number.

  • Closing and reopening MATLAB resets the seed to the default value. Thus, if one makes N calls to a random number generator, then closes and reopens MATLAB and makes N more calls to a random number generator, the same N numbers will be produced in both cases. This property can be used to advantage in MATLAB, since it allows one to reproduce identical sequences of results. This is useful for testing purposes.

  • The system clock can be used to randomize the initial seed. (See MATLAB help for details.)

  • Seed numbers are stored in a buffer and not on the MATLAB workspace. As a result, executing the command clear all has no effect on the state of the generator. (Typical seed-control commands are sketched after this list.)
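The seed-control syntax for the MATLAB 5/6 generators is sketched below. These are the legacy forms as we understand them; consult help rand for the authoritative usage in a given MATLAB release.

rand('state',0)                % reset the state-based generator to its default state
rand('state',sum(100*clock))   % randomize the state using the system clock
rand('seed',1)                 % select the old generator of (7.24) with seed 1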

Mapping Uniform RVs to an Arbitrary pdf

Many different methods have been developed for mapping a uniformly distributed random variable to a target random variable having a pdf that is not uniform. There are basically three different situations that occur:

  1. The cumulative distribution for the target random variable is known in closed form. We will see that if the CDF of the target random variable is known in closed form, a very simple technique, known as the inverse transform method, can be used.

  2. The pdf of the target random variable is known in closed form, but the CDF is not known in closed form. The important Gaussian random variable falls into this category. A number of ad hoc methods exist for this case and, in addition, rejection methods can be used.

  3. Neither the pdf nor the CDF are known in closed form. This situation is often encountered when one must develop a random number generator to fit the pdf of experimentally collected data.

We now examine techniques that can be used in each of these three cases.

The Inverse Transform Method

The inverse transform method allows us to transform a uniformly distributed uncorrelated random sequence U to an uncorrelated (independent samples) sequence X having a distribution function FX(x). The transformation makes use of a memoryless nonlinearity as shown in Figure 7.4. The fact that the nonlinearity is memoryless ensures that the output sequence is uncorrelated if the input sequence is uncorrelated. Of course, by the Wiener-Khinchine theorem, a sequence of uncorrelated random numbers has a PSD that is constant (white). The technique is to simply set

Equation 7.32.

$F_X(X) = U$

and solve for x, which gives

Equation 7.33.

$X = F_X^{-1}(U)$

Figure 7.4. Inverse transform method.

Application of the inverse transform technique requires that the distribution function, FX(x), be known in closed form.

It is easy to see that the inverse transform technique yields a random variable with the required distribution function [2, 8]. Recall that a distribution, FX(x), is a nondecreasing function of the argument x, as illustrated in Figure 7.5. By definition

Equation 7.34.

$F_X(x) = \Pr\{X \le x\}$

Figure 7.5. Cumulative distribution function.

Setting X = FX⁻¹(U) and recognizing that FX(x) is monotonic gives

Equation 7.35.

$\Pr\{X \le x\} = \Pr\{F_X^{-1}(U) \le x\} = \Pr\{U \le F_X(x)\} = F_X(x)$

which is the desired result. We now pause to illustrate the technique through a simple example.

Example 7.7. 

In this example, a uniform random variable will be transformed into a random variable having the one-sided exponential distribution

Equation 7.36.

$f_X(x) = b\, e^{-bx} u(x)$

where u(x) is the unit step defined by

Equation 7.37.

$u(x) = \begin{cases} 1, & x \ge 0 \\ 0, & x < 0 \end{cases}$

The first step is to find the CDF. This is

Equation 7.38.

$F_X(x) = \left(1 - e^{-bx}\right) u(x)$

Equating the distribution function to the uniform random variable U gives

Equation 7.39.

$F_X(X) = 1 - e^{-bX} = U$

which, when solved for X, provides the result

Equation 7.40.

$X = -\frac{1}{b}\ln(1 - U)$

Since the random variable 1 − U is equivalent to the random variable U (Z = U and Z = 1 − U have the same pdf), we may write the solution for X as

Equation 7.41.

$X = -\frac{1}{b}\ln U$

The MATLAB code for implementing the uniform-to-exponential transformation follows:

% File:  c7_uni2exp.m
clear all                               % be safe
n = input('Enter number of points > '),
b = 3;                                  % set pdf parameter
u = rand(1,n);                          % generate U
y_exp = -log(u)/b;                      % transformation
[N_samp,x] = hist(y_exp,20);            % get histogram parameters
subplot(2,1,1)
bar(x,N_samp,1)                         % plot histogram
ylabel('Number of Samples')
xlabel('Independent Variable - x')
subplot(2,1,2)
y = b*exp(-b*x);                        % calculate pdf
del_x = x(3)-x(2);                      % determine bin width
p_hist = N_samp/n/del_x;                % probability from histogram
plot(x,y,'k',x,p_hist,'ok')             % compare
ylabel('Probability Density')
xlabel('Independent Variable - x')
legend('true pdf','samples from histogram',1)
% End of script file.

The result for b = 3 and N = 100 is illustrated in Figure 7.6. The top portion of Figure 7.6 shows the histogram. The second part of Figure 7.6 shows both the theoretical pdf and the resulting “experimental” values for 100 samples. The relatively poor results obtained with N = 100 provide motivation to try again with a significantly larger number of samples. The result for N = 2,000 is illustrated in Figure 7.7. A significant improvement is noted.

Figure 7.6. Uniform to exponential transformation for N = 100.

Figure 7.7. Uniform to exponential transformation for N = 2,000.

Example 7.8. 

As a second example, consider the Rayleigh random variable described by the pdf

Equation 7.42.

$f_R(r) = \frac{r}{\sigma^2}\, e^{-r^2/2\sigma^2} u(r)$

where, as before, the unit step defines the pdf to be single-sided. The CDF is given by

Equation 7.43.

$F_R(r) = \left(1 - e^{-r^2/2\sigma^2}\right) u(r)$

Setting FR(R) = U gives

Equation 7.44.

$1 - e^{-R^2/2\sigma^2} = U$

This is equivalent to

Equation 7.45.

$e^{-R^2/2\sigma^2} = U$

where we have once again recognized the equivalence of 1 − U and U. Solving for R gives

Equation 7.46.

$R = \sqrt{-2\sigma^2 \ln U}$

This transformation is the initial step in the Box-Muller algorithm, which is one of the basic algorithms for Gaussian number generation.

As with the previous example, it is interesting to implement the transformation and evaluate the performance as a function of the number of points transformed. The MATLAB program follows, and the results for N = 10,000 are shown in Figure 7.8. Other values of N should be used and the results compared to Figure 7.8.

Figure 7.8. Uniform to Rayleigh transformation with N = 10,000.

% File:  c7_uni2ray.m
clear all                                % be safe
n = input('Enter number of points > '),
varR = 3;                                % set pdf parameter
u = rand(1,n);                           % generate U
y_exp = sqrt(-2*varR*log(u));            % transformation
[N_samp,r] = hist(y_exp,20);             % get histogram parameters
subplot(2,1,1)
bar(r,N_samp,1)                          % plot histogram
ylabel('Number of Samples')
xlabel('Independent Variable - x')
subplot(2,1,2)
term1 = r.*r/2/varR;                    % exponent
ray = (r/varR).*exp(-term1);            % Rayleigh pdf
del_r = r(3)-r(2);                      % determine bin width
p_hist = N_samp/n/del_r;                % probability from histogram
plot(r,ray,'k',r,p_hist,'ok')           % compare results
ylabel('Probability Density')
xlabel('Independent Variable - x')
legend('true pdf','samples from histogram',1)
% End of script file.

 

The previous two examples illustrated the application of the inverse transform method to continuous random variables. The technique, however, can be applied to discrete random variables. The histogram method, which we now describe, is a numerical (discrete data) version of the inverse transform method.

The Histogram Method

Assume that we have a set of data collected experimentally. In such a situation, both the pdf and the CDF are unknown, even though the pdf can be approximated by a histogram of the data. Our problem is to develop an algorithm for generating a set of samples having a pdf approximating the pdf of the experimental data. The first step is to generate a histogram of the experimental data. Assume that the histogram illustrated in Figure 7.9 results. Once the histogram is generated, an approximation to the pdf and CDF are known and, therefore, the inverse transform method can be applied. The technique presented here is a simple extension of the inverse transform method studied in the previous section.

Figure 7.9. Histogram of experimental data.

The probability that a sample value x lies in the ith histogram bin is

Equation 7.47.

$\Pr\{x_{i-1} < X \le x_i\} = p_i$

The CDF, evaluated at the point x, is denoted

Equation 7.48.

$F_X(x) = F_{i-1} + \frac{p_i}{x_i - x_{i-1}}\,(x - x_{i-1}), \quad x_{i-1} < x \le x_i$

where

Equation 7.49.

$F_i = \sum_{j=1}^{i} p_j, \qquad F_0 = 0$

The next step is to set FX(X) = U, where U is a uniform random variable. This gives

Equation 7.50.

$U = F_{i-1} + \frac{p_i}{x_i - x_{i-1}}\,(X - x_{i-1})$

Solving for X gives

Equation 7.51.

$X = x_{i-1} + \frac{x_i - x_{i-1}}{p_i}\,(U - F_{i-1})$

The algorithm for the required number generator is defined by the following three steps:

  1. Generate U by drawing a sample from a random number generator producing numbers uniformly distributed in (0, 1).

  2. Determine the value of i so that

    Equation 7.52.

    $F_{i-1} < U \le F_i$

    where Fi is defined in (7.49).

  3. Generate X according to (7.51) and return X to the calling program.

The fidelity of the number generator clearly depends on the accuracy of the underlying histogram. The accuracy of the histogram, as we shall see in the following chapter, depends on the number of samples available.
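The three steps are easily collected into a function. The sketch below is our own illustration (the function name is hypothetical): the vector xb of B + 1 bin edges and the vector p of B bin probabilities are assumed to have been extracted from a histogram of the experimental data.

% File: c7_histgen.m (hypothetical illustration of the histogram method)
function X = c7_histgen(xb,p,npts)
p = p/sum(p);                       % ensure the bin probabilities sum to one
F = [0,cumsum(p)];                  % F(i+1) = Fi as defined in (7.49)
X = zeros(1,npts);                  % allocate memory
for k=1:npts
    U = rand(1);                    % step 1: U uniform in (0,1)
    i = find(U<=F,1)-1;             % step 2: index i with F(i-1) < U <= F(i)
    X(k) = xb(i)+(xb(i+1)-xb(i))*(U-F(i))/p(i);   % step 3: apply (7.51)
end
% End of function file.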

Rejection Methods

The rejection (or acceptance) technique for generating random variables having a desired or “target” pdf, fX(x), basically involves bounding the target pdf by a function MgX(x), in which gX(x) represents the pdf of an easily generated random variable and M is a constant suitably large to ensure that

Equation 7.53.

$f_X(x) \le M g_X(x), \quad \text{for all } x$

In the simplest form, gX(x) is uniform on (0, a). If the target pdf fX(x) is zero outside the range (0, a) we have

Equation 7.54.

$g_X(x) = \frac{1}{a}, \quad 0 \le x \le a$

where, since MgX(x) bounds fX(x),

Equation 7.55.

$\frac{M}{a} \ge \max_x f_X(x)$

This is illustrated in Figure 7.10.

Figure 7.10. Rejection method for Example 7.9.

The algorithm for generating the random variable X having pdf fX(x) is defined by the following four steps:

  1. Generate U1 and U2 uniform in (0, 1).

  2. Generate V1 uniform in (0, a), where a is the maximum value of X.

  3. Generate V2 uniform in (0, b), where b is at least the maximum of fX(x).

  4. If V2 ≤ fX(V1), set X = V1. If the inequality is not satisfied, V1 and V2 are discarded and the process is repeated from step 1.

It is easy to show that this procedure produces a random variable X having the target pdf, fX(x).

Since V1 and V2 are uniformly distributed, the point generated by the sample pair (V1, V2) has an equal probability of falling anywhere in the area ab. The probability that V1 is accepted is the fraction of the ab area falling under the pdf fX(x). This is the ratio of the shaded area in Figure 7.10 to the total area ab. Thus:

Equation 7.56.

$\Pr\{V_1 \text{ accepted}\} = \frac{\int_0^a f_X(x)\, dx}{ab}$

Since the numerator in the preceding expression is one by definition:

Equation 7.57.

$\Pr\{V_1 \text{ accepted}\} = \frac{1}{ab}$

Thus:

Equation 7.58.

$\Pr\{x < V_1 \le x + dx \mid V_1 \text{ accepted}\} = \frac{f_X(x)\, dx/(ab)}{1/(ab)}$

Equation 7.59.

$\Pr\{x < V_1 \le x + dx \mid V_1 \text{ accepted}\} = f_X(x)\, dx$

which defines the target pdf.

Example 7.9. 

In this example, we apply the technique just discussed to the pdf

Equation 7.60.

$f_X(x) = \frac{M}{R^2}\sqrt{R^2 - x^2}, \quad 0 \le x \le R, \quad M = \frac{4}{\pi}$

The algorithm for generating the random variable X having pdf fX(x) is obtained by letting a = R and b = M/R. Figure 7.10, modified for this specific case, is illustrated in Figure 7.11.

Figure 7.11. Rejection method for Example 7.9.

The MATLAB code follows:

% File:  c7_rejexl.m
R = 7;                                    % default value of R
M = 4/pi;                                 % value of M
N = input('Input number of points N > '), % set N
fx = zeros(1,N);                          % array of output samples
u1 = rand(1,N); u2 = rand(1,N);           % generate u1 and u2
v1 = R*u1;                                % generate v1 (uniform in (0,R))
v2 = (M/R)*u2;                            % generate v2 (uniform in (0,M/R))
kpts = 0;                                 % initialize counter
for k=1:N
    if v2(k) < (M/(R*R))*sqrt(R*R-v1(k)*v1(k))
        kpts = kpts+1;                    % increment counter
        fx(kpts) = v1(k);                 % save output sample
    end
end
fx = fx(1:kpts);
[N_samp,x] = hist(fx,20);               % histogram parameters
subplot(2,1,1)
bar(x,N_samp,1)                         % plot histogram
ylabel('Number of Samples')
xlabel('Independent Variable - x')
subplot(2,1,2)
yt = (M/R/R)*sqrt(R*R-x.*x);            % calculate pdf
del_x = x(3)-x(2);                      % determine bin width
p_hist = N_samp/kpts/del_x;             % probability from histogram
plot(x,yt,'k',x,p_hist,'ok')            % compare
ylabel('Probability Density')
xlabel('Independent Variable - x')
legend('true pdf','samples from histogram',3)
text = ['The number of points accepted is  ',...
num2str(kpts,15),' and N is ',num2str(N,15),'.'];
disp(text)
% End of script file.

Executing this program for N = 3,000 points yields the results illustrated in Figure 7.12. Of these 3,000 points, 2,301 were accepted and 699 were rejected. This gives an efficiency of 76.70%, which is close to the theoretical (as N → ∞) value of 78.54%.

Figure 7.12. Results for rejection exercise.

 

Even though we have illustrated the rejection method for the case in which the bounding pdf is uniform, the technique illustrated here is easily modified for the case in which the bounding pdf is not uniform. Ideally, the target pdf should be tightly bounded so that the probability of rejection is minimized. The rejection method works well in cases in which the target pdf has finite support (is nonzero only over a finite range). Finite support is, however, not necessary and the rejection technique, as illustrated here, can easily be extended to the infinite support case. A nice treatment of the rejection method is given in Rubinstein [2].

Generating Uncorrelated Gaussian Random Numbers

We know from our study of communication systems that the Gaussian random variable is frequently encountered and represents an appropriate model for thermal noise and a number of other phenomena. Gaussian noise generators are a fundamental building block in many simulations and, as a result, a number of techniques have been developed for producing Gaussian random variables. The CDF is

Equation 7.61.

$F_X(x) = 1 - Q\!\left(\frac{x - m_x}{\sigma_x}\right)$

where Q(x) is the Gaussian Q-function defined by

Equation 7.62.

$Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2}\, dt$

Since neither the Gaussian Q-function nor its inverse can be written in closed form, the inverse transform technique cannot be used. Rejection techniques can be applied but are not efficient. Thus, we seek other techniques for generating Gaussian random variables.

The Sum of Uniforms Method

The central limit theorem (CLT) provides an attractive avenue for developing a random variable having a Gaussian pdf. The CLT states that, under rather general conditions, the pdf of the sum of N independent random variables converges to a Gaussian pdf as N → ∞ [8].

Assume that we have N independent uniform random variables, Ui, i = 1, 2, ..., N. From these N uniform random variables we form

Equation 7.63.

$Y = B \sum_{i=1}^{N} \left(U_i - \frac{1}{2}\right)$

where B is a constant that establishes the variance of Y. From the CLT, we know that Y converges to a Gaussian random variable as N → ∞. Since E{Ui} = 1/2, the mean of Y is

Equation 7.64.

$E\{Y\} = B \sum_{i=1}^{N} \left(E\{U_i\} - \frac{1}{2}\right) = 0$

The variance of Y is found by first noting that the variance of Ui − 1/2 is

Equation 7.65.

$E\left\{\left(U_i - \frac{1}{2}\right)^2\right\} = \frac{1}{12}$

Since the component random variables Ui are assumed independent

Equation 7.66.

$\sigma_Y^2 = B^2 \sum_{i=1}^{N} E\left\{\left(U_i - \frac{1}{2}\right)^2\right\}$

we have

Equation 7.67.

$\sigma_Y^2 = \frac{N B^2}{12}$

Thus, given the value of N, the variance of Y, σY², can be set to any desired value by the proper selection of B. The selection of N is a tradeoff between speed and the accuracy of the tails of the resulting pdf. The value of N is often set to 12, since N = 12 gives the simple result B = σy.
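A minimal sketch of (7.63) follows, using N = 12 so that B = σy, and assuming unit variance; the function name is our own:

% File: c7_sumuniform.m (hypothetical illustration of (7.63))
function y = c7_sumuniform(npts)
N = 12; B = 1;                      % N = 12 gives B = sigma_y (unit variance assumed)
u = rand(N,npts);                   % N uniform samples per output point
y = B*sum(u-0.5);                   % column sums; approximately Gaussian
% End of function file.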

While the procedure of generating a Gaussian random variable based on the central limit theorem is straightforward, several significant difficulties occur when one attempts to apply it to practical problems in digital communications. First, since Ui − 1/2 varies from −1/2 to 1/2, it follows from (7.63) that Y varies from −BN/2 to BN/2. Thus, even though (7.63) may closely approximate the pdf of a Gaussian random variable in the neighborhood of the mean, the tails of the pdf are truncated at ±BN/2. If one is simulating a digital communication system for the purpose of determining the probability of symbol error, the tails of the pdf are important, since the tails of the pdf represent the large noise values that result in transmission errors. The effect of the truncation of the tails of the pdf of Y can be minimized by choosing N sufficiently large.

Specifically, from (7.67), the value of B is

Equation 7.68.

$B = \sqrt{\frac{12}{N}}\, \sigma_y$

Therefore the “approximately Gaussian” pdf is truncated so that it is nonzero only in the range

Equation 7.69.

$-\sqrt{3N}\,\sigma_y \le y \le \sqrt{3N}\,\sigma_y$

With N = 100, the Gaussian random variable is truncated at ±17.32σy. For some applications, truncation at 17 standard deviations may still yield significant errors. The appropriate value of N is application dependent.

The difficulty with truncating the tails of the probability density function of the noise in a digital communications system is illustrated in Figure 7.13, which shows the conditional pdfs of the output of a matched filter receiver for a low receiver input SNR and for a high receiver input SNR. (The tails of the pdfs are not actually discontinuous. Figure 7.13 is drawn to emphasize the effect of truncation.) The pdfs are conditioned on a binary 0 transmitted, denoted fV(v|0), and on a binary 1 transmitted, denoted fV(v|1). Figure 7.13 is constructed assuming that the noise variance is constant and that the SNR is adjusted by varying the signal power. For the case in which the receiver input SNR is sufficiently low, as in Figure 7.13(a), there is considerable overlap of the conditional pdfs and the probability of error may be determined with reasonable accuracy. As the signal power increases, the conditional pdfs are pushed farther apart and the simulation accuracy degrades. Due to the truncation of the tails of the conditional pdfs, increasing the signal power eventually results in a situation in which the conditional pdfs no longer overlap. This is illustrated in Figure 7.13(b). If the conditional pdfs do not overlap, the probability of error is zero independent of the SNR. This is clearly not a realistic situation.

Figure 7.13. Pdf of matched filter output for low receiver input SNR and for high receiver input SNR.

Choosing N large, however, leads to the second difficulty with using (7.63) to approximate a Gaussian random variable. Since it takes N calls to the uniform random number generator to generate a single value of Y, use of the algorithm defined by (7.63) can require excessive CPU time.

The two redeeming features of (7.63) are that it does a good job of approximating a Gaussian random variable in the neighborhood of the mean of Y, and that Y will be approximately Gaussian even if the pdfs of the constituent random variables, Ui, are not uniform.

While we are mainly concerned with simulation using serial processing with a single CPU, this is a good place to point out that algorithms that are not suitable for traditional serial processing applications may be quite suitable for use with parallel processing machines. For example, if a certain parallel processing machine uses 100 CPUs, it is possible to generate 100 values of a uniform random variable in the same time required to generate a single value of a uniform variate. Summing 100 uniform random variables may result in an excellent approximation to a Gaussian random variable for most applications and, using parallel processing, can be accomplished very quickly. This is just one example in which the choice of algorithm depends on the computational environment.

Mapping a Rayleigh RV to a Gaussian RV

From Example 7.8 we know that a Rayleigh random variable R can be generated from a uniform random variable U through the use of the transformation R = √(−2σ² ln U) given in (7.46). We now consider the problem of mapping a Rayleigh random variable to a Gaussian random variable.

Assume that X and Y are two independent Gaussian random variables having equal variance σ2. Since X and Y are independent, the joint pdf is the product of the marginal pdfs. Thus

Equation 7.70.

$f_{XY}(x,y) = \frac{1}{2\pi\sigma^2} \exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right)$

With x = r cosθ and y = r sinθ we have

Equation 7.71.

$r = \sqrt{x^2 + y^2}$

and

Equation 7.72.

$\theta = \tan^{-1}\!\left(\frac{y}{x}\right)$

The joint pdf fRΘ(r, θ) is found from fXY(x,y) by the transformation

Equation 7.73.

$f_{R\Theta}(r,\theta)\, dA_{R\Theta} = f_{XY}(x,y)\, dA_{XY}$

where dARΘ and dAXY represent differential areas in the R,Θ and X,Y planes, respectively. It follows from (7.73) that

Equation 7.74.

$f_{R\Theta}(r,\theta) = \frac{dA_{XY}}{dA_{R\Theta}}\, f_{XY}(x,y)$

The ratio of the differential areas is the Jacobian of the transformation, which is

Equation 7.75.

$\frac{dA_{XY}}{dA_{R\Theta}} = \left|\frac{\partial(x,y)}{\partial(r,\theta)}\right| = r$

This gives

Equation 7.76.

$f_{R\Theta}(r,\theta) = r\, f_{XY}(r\cos\theta,\, r\sin\theta)$

Thus:

Equation 7.77.

$f_{R\Theta}(r,\theta) = \frac{r}{2\pi\sigma^2}\, e^{-r^2/2\sigma^2}, \quad r \ge 0, \quad 0 \le \theta < 2\pi$

We now examine the marginal pdfs of R and Θ.

The pdf of R is

Equation 7.78.

$f_R(r) = \int_0^{2\pi} f_{R\Theta}(r,\theta)\, d\theta = \frac{r}{\sigma^2}\, e^{-r^2/2\sigma^2}\, u(r)$

and the pdf of Θ is

Equation 7.79.

$f_\Theta(\theta) = \int_0^{\infty} f_{R\Theta}(r,\theta)\, dr = \frac{1}{2\pi}, \quad 0 \le \theta < 2\pi$

Thus, R is a Rayleigh random variable and Θ is uniform. Since a Rayleigh random variable is generated from two orthogonal Gaussian random variables, it follows that orthogonal projections of a Rayleigh random variable produce a pair of Gaussian random variables. Therefore, assuming that R is Rayleigh and Θ is uniform over (0, 2π), Gaussian random variables X and Y can be produced using

Equation 7.80.

$X = R\cos\Theta$

and

Equation 7.81.

$Y = R\sin\Theta$

Both X and Y are zero-mean random variables and both have variance σ2. Since they are uncorrelated and Gaussian, it follows that X and Y are statistically independent. Thus, a pair of independent Gaussian random variables X and Y are generated from a pair of uniformly distributed random variables U1 and U2 by the algorithm

Equation 7.82.

$X = \sqrt{-2\sigma^2 \ln U_1}\, \cos(2\pi U_2)$

and

Equation 7.83.

$Y = \sqrt{-2\sigma^2 \ln U_1}\, \sin(2\pi U_2)$

where we have used (7.46) for R.

A MATLAB program for implementing the Box-Muller algorithm follows:

% File: c7_boxmul.m
function [out1,out2]=c7_boxmul(N)
u1 = rand(1,N);                      % generate first uniform RV
u2 = rand(1,N);                      % generate second uniform RV
ray = sqrt(-2*log(u1));              % generate Rayleigh RV
out1 = ray.*cos(2*pi*u2);            % first Gaussian output
out2 = ray.*sin(2*pi*u2);            % second Gaussian output
% End of function file.

The Polar Method

Another algorithm for generating a pair of uncorrelated zero-mean Gaussian random variables is the polar method [9]. The polar algorithm consists of the following steps:

  1. Generate two independent random variables, U1 and U2, both of which are uniform on the interval (0, 1).

  2. Let V1 = 2U1 − 1 and V2 = 2U2 − 1 so that V1 and V2 are independent and uniform on (-1, 1).

  3. Form S = V1² + V2². If S < 1, proceed to step 4. If S ≥ 1, discard S and go back to step 1.

  4. Form A(S) = √(−2 ln S / S).

  5. Set X = A(S)V1 and Y = A(S)V2.

The MATLAB code for generating a pair of Gaussian random vectors using the polar method follows:

% File: c7_polar.m
function  [out1,out2]=c7_polar(N)
u1 = rand(1,N); u2 = rand(1,N);                 % generate uniform RVs
v1 = 2*u1-1;  v2 = 2*u2-1;                      % make uniform in -1 to +1
outa = zeros(1,N); outb = zeros(1,N);           % allocate memory
j = 0;                                          % initialize counter
for i=1:N
   s = v1(i)^2 + v2(i)^2;                       % generate s
   if s < 1                                     % accept if inside unit circle
      j = j+1;                                  % increment counter
      a = sqrt((-2*log(s))/s);                  % scale factor A(S)
      outa(j) = a*v1(i);                        % first Gaussian RV
      outb(j) = a*v2(i);                        % second Gaussian RV
   end
end
out1 = outa(1,1:j); out2 = outb(1,1:j);         % truncate arrays
% End of function file.

Note that the MATLAB library function rand was used to generate the required pair of uniform random variables in the preceding example code. Obviously a user-supplied uniform random number generator, based on LCG techniques, could have been used.

The polar method is an example of a rejection method, since, as we see from step 3, some values of S are rejected. Thus, fewer than N random variables will be generated with each call to this function. The probability of rejection is easily determined from Figure 7.14. The random variables V1 and V2 are uniformly distributed over the box, having area Abox = 4, that bounds the circle of radius R = 1 and area Acircle = π. The probability that S is not rejected is therefore

Figure 7.14. Polar method.

Equation 7.84.

$\Pr\{S < 1\} = \frac{A_{circle}}{A_{box}} = \frac{\pi}{4}$

The polar algorithm often provides results that have better correlation properties than the Box-Muller algorithm. There is, however, a disadvantage with the polar method. Note that N calls to the Box-Muller algorithm will generate N pairs of Gaussian random variables. If the polar algorithm is called N times, the number of pairs of Gaussian random variables generated will be a random variable having mean (π/4)N. The fact that an unknown number of calls must be made to the polar algorithm in order to generate a given number of Gaussian random variables often complicates the simulation program.

MATLAB Implementation

Prior to the release of MATLAB 5, the Gaussian random number generator contained in the MATLAB library, randn, made use of the minimum standard number generator given in (7.24) and the polar method to map the uniformly distributed random numbers to random numbers having a Gaussian distribution. Starting with MATLAB 5, a completely different algorithm, involving no multiplications or divisions, replaced the previously used algorithm. The new random number generator produces numbers having a Gaussian pdf directly, without requiring a transformation of uniformly distributed random numbers. Since multiplication and division are not used in the algorithm, and the uniform-to-Gaussian step is not needed, it is very fast. The algorithm used in later versions of MATLAB is a refined version of the basic Ziggurat algorithm [1] and is described in detail in a paper by Marsaglia and Tsang [10].

Generating Correlated Gaussian Random Numbers

So far, the goal has been to generate uncorrelated random numbers. We now turn our attention to the situation in which the goal is to generate random numbers that have a Gaussian pdf and are correlated. We first examine a simple technique for generating two sequences that are related through a given correlation coefficient (first-order correlation). We then turn our attention to the more general case in which the generated sequence is to have a given power spectral density (PSD). Establishing the PSD is, of course, equivalent to establishing a given autocorrelation function.

Establishing a Given Correlation Coefficient

We have seen two methods for generating a pair of (approximately) uncorrelated Gaussian random variables. It is an easy task to map a pair of uncorrelated Gaussian random variables, denoted X and Y, to a pair of Gaussian random variables having a specified level of correlation. Assuming that X and Y are zero mean and uncorrelated, the next step is to generate a third random variable Z, defined by

Equation 7.85.

$Z = \rho X + \sqrt{1 - \rho^2}\, Y$

where ρ is a parameter with |ρ| ≤ 1. With this algorithm, X and Z are zero mean random variables with equal variance. We will show that ρ is the correlation coefficient relating X and Z.

The proof is simple. It is clear that Z is a Gaussian random variable, since it is a linear combination of Gaussian random variables. It also follows that Z is zero mean if X and Y are zero mean. The variance of Z is

Equation 7.86.

$\sigma_Z^2 = E\{Z^2\} = \rho^2 E\{X^2\} + 2\rho\sqrt{1-\rho^2}\, E\{XY\} + (1 - \rho^2) E\{Y^2\}$

Since E{XY} = E{X}E{Y} = 0 and E{X²} = E{Y²} = σ², the preceding becomes

Equation 7.87.

$\sigma_Z^2 = \rho^2 \sigma^2 + (1 - \rho^2)\sigma^2 = \sigma^2$

The covariance E{XZ} is

Equation 7.88.

$E\{XZ\} = E\left\{X\left(\rho X + \sqrt{1 - \rho^2}\, Y\right)\right\} = \rho\sigma^2$

where the last step follows because X and Y are independent and zero mean. The correlation coefficient ρXZ is

Equation 7.89.

$\rho_{XZ} = \frac{E\{XZ\}}{\sigma_X \sigma_Z} = \frac{\rho\sigma^2}{\sigma^2} = \rho$

as desired.
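The mapping defined by (7.85) takes a single line of MATLAB. The following fragment is our own illustration with an assumed target correlation of ρ = 0.6; the last line estimates the resulting correlation coefficient as a check:

% Illustration of (7.85) with assumed parameters
N = 10000; rho = 0.6;               % number of samples and target correlation
x = randn(1,N); y = randn(1,N);     % uncorrelated zero-mean unit-variance Gaussians
z = rho*x+sqrt(1-rho^2)*y;          % correlated sequence defined by (7.85)
rho_est = sum(x.*z)/sqrt(sum(x.*x)*sum(z.*z))   % estimate; approximately 0.6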

Establishing an Arbitrary PSD or Autocorrelation Function

The general technique used for establishing a sequence of random numbers with a given autocorrelation function, or equivalently a given PSD, is to filter a set of uncorrelated samples so that the target PSD is established. Uncorrelated samples have, by definition, a PSD that is constant over the simulation bandwidth |f| < fs/2. The variance is the area under Sn(f). This is, as illustrated in Figure 7.15,

Figure 7.15. PSD for independent samples.

Equation 7.90.

$\sigma_n^2 = \int_{-f_s/2}^{f_s/2} S_n(f)\, df = \frac{N_0}{2} f_s$

Thus, in order to establish a given noise PSD, N0, the random number (noise) generator variance and the sampling frequency must satisfy

Equation 7.91.

$\sigma_n^2 = \frac{N_0 f_s}{2}$

Thus, the required noise generator variance for a given PSD is a function of the sampling frequency.
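As a concrete illustration of (7.91), assume (values of our own choosing) a target PSD parameter of N0 = 0.01 W/Hz and a sampling frequency of fs = 1000 Hz:

% Illustration of (7.91) with assumed N0 and fs
N0 = 0.01; fs = 1000;               % target PSD parameter and sampling frequency
sigma = sqrt(N0*fs/2);              % required standard deviation from (7.91)
noise = sigma*randn(1,10000);       % white samples with PSD N0/2 over |f| < fs/2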

The establishment of a desired PSD for a random sequence is relatively straightforward using a fundamental result from basic stochastic process theory. We know that if the input to a linear system is a random process having a PSD SX(f), the PSD of the system output is

Equation 7.92.

$S_Y(f) = |H(f)|^2 S_X(f)$

where H(f) is the transfer function of the linear system. This is illustrated in Figure 7.16. If the noise samples are independent, the PSD of the input is constant at K = N0/2 Watts/Hz,

Figure 7.16. Generation of a correlated random sequence.

Equation 7.93.
SX(f) = K = N0/2,  |f| < fs/2

and the required H(f) to establish the target PSD is

Equation 7.94.
|H(f)| = √(SY(f)/K)

Therefore, the problem of shaping the power spectral density to meet a given requirement reduces to the problem of finding a filter with a transfer function H(f) so that |H(f)|² gives the required spectral shape.

Example 7.10. 

A convenient method for synthesizing a filter having a given transfer function is to determine the filter coefficients that provide the best fit, in the minimum mean-square error sense, to that transfer function. Solving the Yule-Walker equations using the MATLAB function yulewalk accomplishes this task. Specifically, the function yulewalk determines the filter coefficients bk and ak so that the transfer function

Equation 7.95.
H(z) = (b0 + b1 z^−1 + ··· + bN z^−N)/(1 + a1 z^−1 + ··· + aN z^−N)

is the minimum mean-square error fit to a given transfer function.

In order to illustrate the technique, assume that a given PSD is to have the form S(f) = K/f. This is referred to as flicker or one-over-f noise [11] and is often used to model phase noise in oscillators. In order to generate this noise, we must design a filter having a transfer function of the form H(f) = √(K/f). Passing white noise through this filter generates the required flicker noise process. Since H(f) → ∞ as f → 0, the transfer function is defined as

Equation 7.96.
H(f) = √(K/|f|),  |f| ≥ f0
H(f) = 0,  |f| < f0

The following MATLAB program generates an approximation to the required H(f) in which frequency is normalized to the Nyquist frequency fN and f0 = fN/20:

% File:  c7_flicker.m
f = 0:100;                      % frequency points
fn = 100;                       % Nyquist rate
F = f/fn;                       % frequency vector
M = abs(100./sqrt(f));          % normalized frequency response
M = [zeros(1,6),M(6:100)];      % bound from zero frequency
[b1,a1] = yulewalk(3,F,M);      % generate order=3 filter
[b2,a2] = yulewalk(20,F,M);     % generate order=20 filter
[h1,w1] = freqz(b1,a1);         % generate 3-rd order H(f)
[h2,w2] = freqz(b2,a2);         % generate 20-th order H(f)
subplot(2,1,1)
plot(F,M,':',w1/pi,abs(h1))
xlabel('Normalized Frequency')
ylabel('Magnitude Response')
subplot(2,1,2)
plot(F,M,':',w2/pi,abs(h2))
xlabel('Normalized Frequency')
ylabel('Magnitude Response')
%End of script file.

Executing the program yields the results illustrated in Figure 7.17. The desired transfer function is illustrated by the dashed line. The third-order approximation is illustrated in the top pane and the twentieth-order approximation, which is clearly superior to the third-order approximation, is illustrated in the bottom pane. Note that a low-order approximation is adequate in the region where H(f) is relatively smooth. In a frequency region where H(f) is changing rapidly, a higher-order approximation is required. This approach is a powerful tool for generating arbitrary PSDs.

Figure 7.17. Generation of flicker noise.
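The script above designs the shaping filter but stops short of producing a noise sequence. A minimal follow-on sketch of ours, assuming that b2 and a2 from c7_flicker.m are still in the workspace, generates the flicker noise itself:

% Sketch: filter unit-variance white Gaussian noise with the order-20 fit.
x = randn(1,4096);               % white Gaussian input sequence
y = filter(b2,a2,x);             % approximate 1/f (flicker) noise sequence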

 

Example 7.11. 

The simulation of wireless systems, in which motion is present, requires the generation of a process having the power spectral density

Equation 7.97.
S(f) = K/√(1 − (f/fd)²),  |f| < fd

to represent the effect of doppler. The quantity fd in (7.97) represents the maximum doppler frequency. As illustrated in (7.94), the required filter transfer function is

Equation 7.98.
H(f) = [1 − (f/fd)²]^(−1/4),  |f| < fd

This filter is implemented as an FIR filter whose impulse response is obtained by taking the inverse DFT of sampled values of (7.98).

The MATLAB program for generating the impulse response of the Jakes filter, and filtering white noise with the filter, is contained in Appendix A. Executing the code yields the results illustrated in Figures 7.18 and 7.19. Figure 7.18 illustrates the impulse response and the transfer function H(f) (bottom pane). Figure 7.19 illustrates the effect of passing complex white noise through the filter. The top pane shows the estimated PSD at the filter output. The bottom pane illustrates the magnitude of the envelope on a log scale. In a wireless communication system, this corresponds to the fading envelope.

Figure 7.18. Impulse response and target PSD.

Figure 7.19. Estimated PSD and envelope function.

Establishing a pdf and a PSD

Generating a noise waveform that simultaneously satisfies a given PSD and pdf requirement is, in general, a difficult task. There is, however, one situation in which this problem is not difficult. If the pdf of the system output is to be Gaussian, one can simply generate a sample sequence for the filter input in which the samples are Gaussian and independent. Since the input samples are independent, the PSD will be constant at K Watts/Hz. The PSD can be shaped using a linear filter, as described in the previous section. Since the PSD shaping filter is linear, the pdf of the output will be Gaussian. We have this simple result because any linear transformation of a Gaussian process yields another Gaussian process [8]. Fortunately, many practical problems can be handled in this manner.

If both the pdf and the PSD of a target waveform are specified and the pdf is required to be something other than Gaussian, the problem is much more difficult. A solution to this problem was proposed by Sondhi [12]. The basic scheme for implementing the Sondhi algorithm is illustrated in Figure 7.20. As always, we start with a sequence of samples u[n] that are uniformly distributed on the interval (0, 1). In addition, the sequence {u[n]} is assumed to be delta correlated so that the PSD is white. The job of the first memoryless nonlinearity, denoted MNL1, is to map the sequence {u[n]} into a sequence {x[n]}, which is white but has a Gaussian pdf. We have studied several techniques for accomplishing this mapping. The filtering operation predistorts the spectrum so that the PSD is SY(f). Since the filter is linear, the pdf of the sequence {y[n]} remains Gaussian. The basic operation of the second memoryless nonlinearity, denoted MNL2, is to map the Gaussian pdf of the sequence {y[n]} to the final desired pdf. However, the second nonlinearity also affects the PSD. Thus, the filter must be chosen to predistort the PSD to SY(f) in such a way that passing the sample sequence through the second nonlinearity results in the sequence {z[n]} having both the desired PSD and the desired pdf.

Figure 7.20. The Sondhi algorithm.

A detailed example of the Sondhi algorithm is beyond the scope of our current effort and the interested student is referred to [12]. An interesting simple example in which a number generator is developed having a first-order (single pole) PSD is given in the paper by Coates et al. [3].
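Although a complete Sondhi design is beyond our scope, the pipeline of Figure 7.20 can be sketched in a few lines of MATLAB. In the fragment below, which is ours, the first-order filter and the exponential target pdf are arbitrary stand-ins; a true design must solve for the predistortion filter so that the output of MNL2 has the target PSD:

% Structural sketch of the Sondhi pipeline (illustrative only, not a design).
u = rand(1,10000);               % u[n]: uniform on (0,1), delta correlated
x = sqrt(2)*erfinv(2*u-1);       % MNL1: uniform to N(0,1)
y = filter(1,[1 -0.9],x);        % linear filter predistorts the PSD
y = y/std(y);                    % rescale to unit variance ahead of MNL2
p = 0.5*(1+erf(y/sqrt(2)));      % Gaussian CDF of y[n]
z = -log(1-p);                   % MNL2: inverse CDF of an exponential target pdf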

PN Sequence Generators

Pseudonoise (PN) sequence generators are used in a number of applications, especially in the area of synchronization. As one application, PN sequences are used for approximating a random variable having a uniform probability density function. A PN sequence generator can take a number of forms [13] but the most common form, and the one upon which we shall concentrate, is illustrated in Figure 7.21.

Figure 7.21. PN sequence generator.

In the simulation context, the most important reason for using PN sequences is for modeling data sources. By using a PN sequence generator, almost all possible bit combinations of a given length can be produced over the shortest possible simulation length. We will use this fact to study the impact of inter-symbol interference (ISI) when we consider semianalytic simulation techniques in Chapter 10.

A PN sequence generator consists of three basic components: an N-stage shift register, a mod-2 adder, and a connection vector that defines the connections between specific shift register stages and the mod-2 adder. The connection vector establishes the performance characteristics of the generator and is defined by the polynomial

Equation 7.99.
g(D) = g0 + g1D + g2D^2 + ··· + gND^N

If gi = 1, a connection exists between the ith stage of the shift register and the mod-2 adder. If gi = 0, there is no connection. Note that in the polynomial g(D), both g0 and gN are equal to 1.

It can be shown that the maximum period of the PN sequence generator output is

Equation 7.100.
L = 2^N − 1

and is achieved if, and only if, the polynomial g(D) is primitive [13].[7] The autocorrelation function R[m] of the output of the PN sequence generator is illustrated in Figure 7.22, in which we assume that the data values (symbols) are ±1. For m = 0 or a multiple of L, the autocorrelation is one. For 0 < m < L, the autocorrelation R[m] is −1/L, which for large L is approximately zero. Thus, for PN sequences having a large period, the autocorrelation function approaches an impulse. The PSD is therefore approximately white as desired. The PN sequence has many interesting properties, many of which are summarized in [13]. Several properties of interest in the simulation context include the following:

  • The sequence is nearly balanced. In other words, in one period of the sequence, the number of ones exceeds the number of zeros by one. By adding an extra zero at the right point, the sequence can be balanced.

  • All possible bit combinations appear within one period, except that there is no run of N zeros, although there is a run of N ones. (Note that if all shift register stages contain binary zero, the generator becomes stuck in the all-zero state. By adding an extra zero at the point where there are N − 1 consecutive zeros, the sequence can be balanced. The result is a de Bruijn sequence [14].)

  • The autocorrelation function, although periodic, is very nearly the same as that of a random binary waveform.

Figure 7.22. Autocorrelation of register contents for a maximal length PN sequence.

The design of a PN sequence generator based on an N-stage shift register reduces to finding a primitive polynomial of degree N. For large N this can be a very difficult task. Fortunately, extensive tables of primitive polynomials are available in the literature [13]. A small sample, represented in a form consistent with (7.99), is given in Table 7.1.

Table 7.1. Short Table of Primitive Polynomials

N    g1  g2  g3  g4  g5  g6  g7  g8  g9  g10 g11 g12 g13 g14
3    1   0   1
4    1   0   0   1
5    0   1   0   0   1
6    1   0   0   0   0   1
7    0   0   1   0   0   0   1
8    0   1   1   1   0   0   0   1
9    0   0   0   1   0   0   0   0   1
10   0   0   1   0   0   0   0   0   0   1
11   0   1   0   0   0   0   0   0   0   0   1
12   1   0   0   1   0   1   0   0   0   0   0   1
13   1   0   1   1   0   0   0   0   0   0   0   0   1
14   1   0   0   0   0   1   0   0   0   1   0   0   0   1

Developing a simulation program for the PN sequence generator is quite simple. First, let the shift register contents be represented by the vector

Equation 7.101.
B = [b1 b2 b3 ··· bN]

and let the connection vector be represented by

Equation 7.102.
G = [g1 g2 g3 ··· gN]

The output of the mod-2 adder, as illustrated in Figure 7.21, is the feedback symbol f[n]. It is defined by

Equation 7.103.
f[n] = (g1b1 + g2b2 + ··· + gNbN) mod 2

The feedback signal is also the next value of b1. The register must obviously be initialized with a vector B in which at least one value of bj is nonzero.

Example 7.12. 

Our goal in this example is to design a PN sequence generator with N = 10. With g(D) primitive, the period will be, from (7.100), L = 1,023. The connection vector is, from Table 7.1,

Equation 7.104.
G = [0 0 1 0 0 0 0 0 0 1]

indicating a connection from the third and last stages of the shift register. This yields the configuration shown in Figure 7.23. The MATLAB code for simulating the PN generator is as follows:

% File:  c7_PNdemo.m
pntaps = [0 0 1 0 0 0 0 0 0 1];     % shift register taps
pninitial = [0 0 0 0 0 0 0 0 0 1];  % initial shift register state
pndata = zeros(1,1023);             % initialize output vector
samp_per_sym = 1;                   % samples per symbol
pnregister = pninitial;             % initialize shift register
n = 0;                              % initialize counter
kk = 0;                             % initialize termination flag
while kk == 0
n = n+1;                                    % increment counter
pndata(1,n) = pnregister(1,1);              % save output
feedback = rem((pnregister*pntaps'),2);     % calculate feedback
pnregister = [feedback,pnregister(1,1:9)];  % increment register
if pnregister == pninitial; kk = 1; end     % set termination flag
end
text = ['The period is ',num2str(n,15),'.'];
disp(text) % display period
pndata=replicate(pndata,samp_per_sym);      % replicate data
kn = n*samp_per_sym;                        % output vector length
pndata = 2*pndata - 1;                      % make output +/- one
a = fft(pndata);
b = a.*conj(a);                             % PSD of data
Rm = real(ifft(b))/kn;                      % autocorrelation
x1 = (0:length(Rm)-1)/samp_per_sym;
x2 = 0:100;
subplot(3,1,1)
plot(x1,Rm,'.k'), ylabel('R[m]')
subplot(3,1,2)
stem(x2,Rm(1:101),'.k'), ylabel('Partial R[m]')
subplot(3,1,3)
stem(x2,pndata(1:101),'.k'), ylabel('First 100 outputs')
axis([0 100 -1.5 1.5]);
% End of script file.

Figure 7.23. PN sequence generator for Example 7.12.

Note that the program performs a test to ensure that the PN sequence generator is indeed full period. (For this case, executing the program returns the period as 1,023, which is full period for N = 10.) The program also generates a plot of the autocorrelation function, the first 101 samples of the autocorrelation, and the first 101 samples of the generator output. These results for one sample per symbol are illustrated in Figure 7.24. Changing the sampling rate to five samples per symbol results in Figure 7.25.

Figure 7.24. Generated plots for one sample per symbol.

Figure 7.25. Generated plots for five samples per symbol.

 

Signal Processing

In this section we examine several input-output relationships for linear systems for the case in which the system input, and consequently the system output, is random. We have interest in the following basic results:

  • Relationship between the mean of the system input and the system output

  • Relationship between the variance of the system input and the system output

  • Input-output cross-correlation

  • Relationship between the autocorrelation and PSD of the system input and the system output

These relationships, which are useful in the study of simulation, give us the basic tools for signal processing for the case in which the system input is a sample function of a random process. Throughout the analyses to follow, we will assume that the system of interest is linear and fixed, and that the input to the system is wide-sense stationary.[8] More detail on processing random signals can be found in a variety of books. An excellent reference is Oppenheim and Schafer [15].

Input/Output Means

Since the system is assumed linear, output samples y[n] are related to input samples x[n] by the well-known discrete convolution relationship. Thus, as always

Equation 7.105.
y[n] = Σk h[k] x[n − k]

The mean, or dc value, of the output is, by definition

Equation 7.106.
E{y[n]} = Σk h[k] E{x[n − k]}

since the expected value of a sum is the sum of the expected values. We also note that, for a fixed system, the terms representing the unit impulse response h[k] are constants. From the stationarity assumption, E{x[n − k]} = E{x[n]}. This gives the simple result

Equation 7.107.
my = mx Σk h[k]

where mx and my are the means of x[n] and y[n], respectively. Note that since Σk h[k] = H(0), the preceding expression can be written

Equation 7.108.
my = mx H(0)

where H(0) is the dc gain of the filter.
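A quick numerical check of (7.108), using an FIR filter of our own choosing, is the following sketch:

% Sketch: verify my = mx*H(0) for a simple FIR filter.
h = [0.5 0.3 0.2];               % example unit impulse response
x = 2 + randn(1,100000);         % input with mean mx = 2
y = filter(h,1,x);               % compute the output sequence
[mean(y) 2*sum(h)]               % sample mean versus mx*H(0)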

Input/Output Cross-Correlation

The input-output cross-correlation is defined by

Equation 7.109.
Rxy[m] = E{x[n]y[n + m]}

Substituting (7.105) for y[n + m] and interchanging expectation and summation gives

Equation 7.110.
Rxy[m] = Σk h[k] E{x[n]x[n + m − k]}

which, since x[n] is stationary, reduces to

Equation 7.111.
Rxy[m] = Σk h[k] Rxx[m − k]

The input-output cross-correlation is used in the development of a number of performance estimators. One of these, an estimator for the signal-to-noise ratio at a point in a system, will be developed in Chapter 8.
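Note that for a white input, (7.111) reduces to Rxy[m] = σx²h[m], so the cross-correlation recovers the impulse response. The following sketch of ours demonstrates this (xcorr(y,x,...) estimates E{x[n]y[n + m]} for the indicated lags):

% Sketch: for white x[n], the cross-correlation recovers h[m].
h = [0.7 0.2 0.1];               % example unit impulse response
x = randn(1,500000);             % white input with unit variance
y = filter(h,1,x);               % system output
Rxy = xcorr(y,x,4,'unbiased');   % estimated Rxy[m] for lags -4..4
Rxy(5:7)                         % lags m = 0, 1, 2 approximate h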

Output Autocorrelation Function

By definition, the autocorrelation function of the system output is, at lag m, E{y[n]y[n + m]}. This gives

Equation 7.112.
Ryy[m] = E{y[n]y[n + m]} = E{(Σj h[j]x[n − j])(Σk h[k]x[n + m − k])}

which is

Equation 7.113.
Ryy[m] = Σj Σk h[j]h[k] E{x[n − j]x[n − k + m]}

Once again we rely on the assumption that x[n] is stationary to go to the next step. If x[n] is stationary, the autocorrelation E{x[n − j]x[n − k + m]} depends only on the lag m − k + j and

Equation 7.114.
Ryy[m] = Σj Σk h[j]h[k] Rxx[m − k + j]

where Rxx[m] is the autocorrelation of x[n] at lag m. Unfortunately, the expression for Ryy[m] cannot, in general, be simplified further without knowledge of the statistics of x[n]. The one exception is the case in which x[n] is a delta-correlated (white) sequence. Taking the discrete Fourier transform of both sides of (7.114) yields the input-output PSD relationship

Equation 7.115.
SY(f) = |H(f)|² SX(f)

which was used to generate correlated sequences.

If the input sequence is delta-correlated (i.e., white noise), by definition

Equation 7.116.
Rxx[m] = σx² δ[m]

Substitution into (7.114) gives

Equation 7.117.
Ryy[m] = σx² Σj h[j]h[j + m]

where we have used the sifting property of the delta function.
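Equation (7.117) states that, for a white input, the output autocorrelation is σx² times the deterministic autocorrelation of the impulse response. A brief check of ours:

% Sketch: compare (7.117) with a sample autocorrelation estimate.
h = [1 0.5 0.25];                      % example unit impulse response
sigma2 = 2;                            % input variance
x = sqrt(sigma2)*randn(1,500000);      % white Gaussian input
y = filter(h,1,x);                     % system output
Ryy_theory = sigma2*conv(h,fliplr(h)); % sigma_x^2 times sum_j h[j]h[j+m]
Ryy_est = xcorr(y,2,'unbiased');       % estimated Ryy[m] for lags -2..2
[Ryy_theory; Ryy_est]                  % the two rows should nearly agree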

Input/Output Variances

The variance of the system output is, by definition (for the zero-mean processes considered here), σy² = Ryy[0] = E{y²[n]}. For the general case, the output variance follows from (7.114) with m = 0. The result is

Equation 7.118.
σy² = Ryy[0] = Σj Σk h[j]h[k] Rxx[j − k]

If x[n] is a delta-correlated (white noise) sequence, σy² can be found by simply letting m = 0 in (7.117). This gives

Equation 7.119.
σy² = σx² Σj h²[j]

This result will be encountered again when we study equivalent noise bandwidth in Chapter 10.
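As a final sketch of ours (the filter and the input variance are arbitrary choices), (7.119) is easily verified numerically:

% Sketch: verify sigma_y^2 = sigma_x^2*sum(h.^2) for a white input.
h = fir1(32,0.25);               % an arbitrary FIR impulse response
x = sqrt(3)*randn(1,200000);     % white input with sigma_x^2 = 3
y = filter(h,1,x);               % system output
[var(y) 3*sum(h.^2)]             % sample variance versus (7.119)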

Summary

Our goal in this chapter was to explore techniques for generating and processing random signals. The results of our study give us the tools to represent noise, interference, random information-bearing signals, and other random phenomena in communication systems.

First we explored methods for generating a pseudo-random set of uniformly distributed integers. A fundamental technique for generating uniformly distributed integers is the linear congruential generator (LCG). While there are many variations of the LCG, the goal is to generate a sequence having the longest possible period. We also wish to generate a set of integers having the lowest possible correlation. Two tests for correlation were considered: the scatterplot and the Durbin-Watson test. Many other tests are available, but they tend to be more complicated to implement.

After a pseudo-random sequence is generated, the next step is to shape the pdf and the PSD so that a given waveform in the system being simulated can be modeled with a required level of accuracy. Several methods were studied for shaping the pdf. The simplest technique was the inverse transform method, which required that the CDF be known in closed form. Two ad hoc techniques were explored for generating independent pairs of Gaussian random variables. These were the Box-Muller algorithm and the polar algorithm. A technique for emulating experimental data, based on the histogram, was also explored. Shaping the PSD is a filtering operation. The filtering operation preserves the pdf if the pdf is Gaussian. If one must shape both the pdf and the PSD, and if the target pdf is not Gaussian, a difficult problem results. This problem can be solved using the Sondhi algorithm.

Sequences having low correlation can also be generated using the PN sequence generator, which is implemented as a shift register having an appropriately chosen feedback function. In typical applications, the LCG is used to model noise and interference waveforms, and the PN sequence generator is used to model data sources.

Further Reading

This chapter covered a number of diverse topics. Random number generation is the subject of many papers and books. For an overview, the paper by Park and Miller [5] is strongly recommended. For a detailed treatment, complete with proofs, Knuth [1] is highly recommended. The general subject of mapping uniform pdfs to a specific target pdf is also contained in many books. Rubinstein [2] and Ross [9] are reasonably complete.

References

Problems

7.1

Write a MATLAB program to compute the ensemble averages of x(t), y(t), and z(t) as defined in Example 7.1 for N = 20, 50, and 100, where N is the number of sample functions. Do this for 51 values of t for 0 ≤ t ≤ 2. Discuss the results. For N = 20, plot the ensemble average as a function of t. Why is it a sinusoid?

7.2

Consider y(t) as defined in Example 7.1. Compute the ensemble average E{y(t)}. Estimate E{y(t)} using N = 5 sample functions at 51 points ti where 0 ≤ ti ≤ 2. Plot the result and compare with E{y(t)}. Repeat for N = 20, N = 50, and N = 100. What do you conclude?

7.3

Repeat the preceding problem for z(t) as defined in Example 7.1.

7.4

A random telegraph waveform is defined as a waveform that takes on values +1 or −1 with the switching instants occurring at random. Develop a MATLAB program for computing and plotting three sample functions of a random telegraph waveform.

7.5

Using Example 7.2 as a guide, develop a MATLAB program for simulating an 8-PSK modulator.

7.6

Using Example 7.2 as a guide, develop a MATLAB program for simulating a 16-QAM modulator.

7.7

Develop a MATLAB program for testing whether or not a number m is prime. Also develop a program to identify primitive elements mod(m). Using this program, determine which elements are primitive mod(89). Using the results of this investigation develop a multiplicative LCG and show that it has period 88.

7.8

Suppose we wish to develop a uniform number generator with m = 5,000, a = 241, and c = 1,323. Unfortunately, a small error results in the multiplier a being entered as 240. Using the seed x0 = 1, describe the impact of this error.

7.9

Design a mixed congruential generator having a period of 6,000. Using the MATLAB routine c7_LCGPeriod.m show that a period of 6,000 is actually achieved.

7.10

A mixed congruential generator has a = c = 1 and m = 256. Assuming that the seed is x0 = 0, describe the generator output. Is this a full-period generator? Determine the scatterplot and test for independence using the Durbin-Watson algorithm.

7.11

Show that the three component generators of the Wichmann-Hill algorithm have periods of 30,268, 30,306, and 30,322. You may use the MATLAB routine c7_LCGPeriod.m to solve this problem.

7.12

Show, using both analysis and simulation, that the random number generator defined by

xi+1 = 5xi mod(7)

is a full-period generator. Determine the generated sequence for seeds x0 = 1 and x0 = 4. Compare the sequences and comment on the results.

7.13

Show, using both analysis and simulation, that the LCG described by

xi+1 = 133xi mod(256)

has a period of 64. Using an appropriate MATLAB program with the seed x0 = 1, determine the generated sequence.

7.14

Run the MATLAB program given in Example 7.8 for N = 100, 1,000, and 20,000. Compare the results to those illustrated in Figure 7.8. Comment on the results.

7.15

A Weibull random variable is defined by the pdf

fX(x) = a x^(a−1) exp(−x^a) u(x)

where a is a parameter and u(x) denotes the unit step. Using the inverse transform method, develop an algorithm for generating a sequence of random numbers having a Weibull distribution. Using the algorithm just generated with a = 5, generate 500 samples of a Weibull random variable. Plot the histogram and compare with the pdf.

7.16

A Laplacian random variable is defined by the pdf

fX(x) = (a/2) exp(−a|x|)

Note that this pdf is double-sided. Using the inverse transform method, determine an algorithm for generating X from a uniformly distributed random variable. Let a = 3 and generate 1,000 samples of X. Determine the resulting histogram and compare the result with the theoretical pdf.

7.17

  1. Generate N = 1,000 pairs of zero-mean Gaussian random variables using the Box-Muller algorithm. The desired variance is σ² = 5. From the set of generated samples determine estimates of the variances of the two generated sequences, and ρXZ. Comment on the results.

  2. Repeat part (a) but generate approximately N = 1,000 random variables using the polar method.

7.18

Both the Box-Muller and the polar techniques for generating a pair of random numbers require that a pair of independent uniformly distributed random numbers be generated. This can be accomplished using the following two lines of code:

u1 = rand(1,N);
u2 = rand(1,N);

Will the code segment

u1 = rand(1,N);
u2 = [u1(1,N),u1(1,1:N-1)];

work just as well? Why or why not? Test both of the code segments to demonstrate the validity of your answer.

7.19

Rework Example 7.8 for the pdf

Problems

Note that this will require a simple modification to the algorithm used in Example 7.9.

7.20

The unit impulse response of the Jakes filter defined by (7.98) is given by

h(t) = √π fd Γ(3/4) (π fd t)^(−1/4) J1/4(2π fd t)

where J1/4(·) is the fractional Bessel function and Γ(·) denotes the gamma function. Let fd = 10 Hz and plot h(t). Compare with the sampled unit impulse response derived by taking the inverse FFT of H(f) as determined in Example 7.11.

7.21

In this problem you are to extend the rejection method so that a random variable having infinite support can be generated. The target pdf describes the generalized Gaussian random variable, which is defined by

fX(x) = [v/(2σΓ(1/v))] exp[−(|x − m|/σ)^v]

where m is the mean, v is a shape parameter, σ is a scale parameter, and Γ(·) denotes the gamma function. Note that for v = 2, fX(x) is Gaussian.

  1. Plot the generalized Gaussian pdf for m = 0 and v = 1, 1.5, 2, and 2.5.

  2. Develop, using a rejection method, an algorithm for generating X with v = 1.5 and m = 0. Select a bounding function and plot both the bounding function and fX(x).

  3. Using the algorithm with v = 1.5 and m = 0, generate 10,000 values of X. Plot the resulting histogram and compare with the theoretical pdf.

7.22

Using the algorithm defined by (7.85), generate N = 50 pairs of zero-mean Gaussian random variables with σ² = 3 and ρ = 0.6. From the set of generated samples, determine estimates of the variances of the two generated sequences, and ρXZ. Repeat for N = 5,000. Comment on the results.

7.23

Develop an algorithm for generating a set of uncorrelated samples having the PSD

Problems

where fs is the sampling frequency. Generate 10,000 samples having the target PSD and verify the algorithm. Using the generated data, determine the autocorrelation function and compare with the target autocorrelation function.

Appendix A: MATLAB Code for Example 7.11

Main Program: c7_Jakes.m

% File c7_Jakes.m
% Generate and test the impulse response of the filter.
%
fd = 100;                              % maximum doppler
impw = jakes_filter(fd);               % call to Jakes filter
fs = 16*fd; ts = 1/fs;                 % sampling frequency and time
time = [1*ts:ts:128*ts];               % time vector for plot
subplot(2,1,1)
stem(time,impw,'.'), grid;
xlabel('Time'), ylabel('Impulse Response')
%
% Square the fft and check the power transfer function.
%
[h f] = linear_fft(impw,128,ts);      % generate H(f) for filter
subplot(2,1,2)
plot(f,abs(h.*h)); grid;
xlabel('Frequency'), ylabel('PSD')
%
% Put Gaussian noise through and check the output psd.
%
x = randn(1,1024);                     % generate Gaussian input
y = filter(impw,1,x);                  % filter Gaussian input
[output_psd ff] = log_psd(y,1024,ts);  % log of PSD
figure;
subplot(2,1,1)
plot(ff,output_psd); grid;
axis([-500 500 -50 0])
xlabel('Frequency'), ylabel('PSD')
%
% Filter complex noise and look at the envelope fading.
%
z = randn(1,1024)+i*randn(1,1024);     % generate complex noise
zz = filter(impw,1,z);                 % filter complex noise
time = (0.0:ts:1024*ts);               % new time axis
%
% Normalize output and plot envelope.
%
zz = zz/max(max(abs(zz)));             % normalize to one
subplot(2,1,2)
plot(time(161:480),10*log10(abs(zz(161:480)))); grid;
axis([0.1 0.3 -20 0])
xlabel('Time'), ylabel('Log Amplitude')
% End of script file.

Supporting Routines

Jakes_filter.m

% File: Jakes_filter.m
function [impw] = jakes_filter(fd)
% FIR implementation of the Jakes filter (128 points)
n = 512; nn = 2*n;                   % nn is FFT block size
fs = 0:fd/64:fd;                     % sampling frequency = 16*fd
H = zeros(1,n);                      % initialize H(f)
for k=1:(n/8+1)                      % psd for k=1:65
   jpsd(k)=1/((1-((fs(k))/fd)^2)^0.5);
   if(jpsd(k))>1000
      jpsd(k)=1000;
   end
   H(k)=jpsd(k)^0.5;                 % first 65 points of H
end
for k=1:n                            % generate negative frequencies
   H(n+k) = H(n+1-k);
end
[inv,time] = linear_fft(H,nn,fd/64); % inverse FFT
imp = real(inv(450:577));            % middle 128 points
impw = imp.*hanning(128)';           % apply hanning window
energy = sum(impw.^2);               % compute energy
impw = impw/(energy^0.5);            % normalize
% End of function file.

linear_fft.m

% File: linear_fft.m
function [fftx,freq] = linear_fft(x,n,ts)
% This function takes n (must be even) time domain samples (real or
% complex) and finds the PSD by taking (fft/n)^2. The two sided
% spectrum is produced by shifting the PSD. The array freq provides
% the appropriate frequency values for plotting purposes.
y = zeros(1,n);
for k=1:n
   freq(k) =(k-1-(n/2))/(n*ts);
   y(k) = x(k)*((-1.0)^(k+1));
end;
fftx = fft(y)/n;
% End of function file.

log_psd.m

% File: log_psd.m
function [logpsd,freq,ptotal,pmax] = log_psd(x,n,ts)
% This function takes the n time domain samples (real or complex)
% and finds the psd by taking (fft/n)^2. The two sided spectrum is
% produced by shifting the psd; The array freq provides the
% appropriate frequency values for plotting purposes.
% By taking 10*log10(psd/max(psd)) the psd is normalized; values
% below -60 dB are set equal to -60dB.
%
% n must be an even number, preferably a power of 2
%
y = zeros(1,n);                               % initialize y vector
%
h = waitbar(0,'For Loop in PSD Calculation');
for k=1:n
 freq(k) =(k-1-(n/2))/(n*ts);
 y(k) = x(k)*((-1.0)^k);
 waitbar(k/n)
end;
%
v = fft(y)/n;
psd = abs(v).^2;
pmax=max(psd);
ptotal=sum(psd);
logpsd = 10*log10(psd/pmax);
%
% Truncate negative values at -60 dB
%
for k =1:n
 if(logpsd(k)<-60.0)
 logpsd(k) = -60.0;
 end
end
close(h)
% End of function file.

 



[1] This transformation will be discussed in Section 7.5.1.

[2] We generally assume that a computational environment is a general-purpose computer. However, for some applications the computational environment could be a special-purpose machine, an ASIC chip, an FPGA chip, or a programmable DSP chip.

[3] For speed of computation, LCGs are typically implemented using integer arithmetic. In MATLAB we use floating-point arithmetic. In order to uniquely represent each number defined by the algorithm, m must be less than the reciprocal of the MATLAB constant eps; the value of 1/eps exceeds 4 × 10^15 on IEEE-compliant computers (see Chapter 3).

[4] If m = 2^k − 1 is a prime number, m is known as a Mersenne prime.

[5] Recall from Chapter 3 that the IEEE floating-point standard assigns 52 bits to the mantissa.

[6] See The MathWorks Support Solution Number 8542.

[7] A polynomial g(D) of degree N is primitive if the smallest integer k for which g(D) divides D^k + 1 without remainder is k = 2^N − 1.

[8] Recall that a wide-sense stationary process is one for which means and variances are independent of the time origin, and the autocorrelation function is only dependent on the time lag between samples.
