As we saw in the previous chapter, a large number of samples are required to generate an RF (radio frequency) bandpass (BP) signal, and for that reason RF bandpass signals are not typically used in waveform-level simulations. RF signals are bandpass signals in which the carrier frequency fc usually exceeds the bandwidth B by several orders of magnitude. Using lowpass (LP) representations for bandpass signals typically results in simulations that execute much faster and have greatly reduced data storage and signal-processing requirements. The use of lowpass models for both signals and systems in simulation is the focus of this chapter.
We now look at the lowpass complex envelope representation of bandpass signals in detail. The representation is viewed in both the time domain and in the frequency domain. Both deterministic and random signals are considered.
A general bandpass signal, such as would be found at the output of a modulator, can always be written in the form[1]

x(t) = A(t) cos[2πf0t + φ(t)]    (4.1)
where A(t) is the amplitude, or real envelope, of the signal and φ(t) is the phase deviation from the phase 2πf0t. For the case in which (4.1) represents the output of a modulator, f0 is the carrier frequency and 2πf0t is the instantaneous phase of the unmodulated carrier. It follows from Euler’s formula[2] that (4.1) can be written

x(t) = Re{A(t) exp[j(2πf0t + φ(t))]}    (4.2)

which is

x(t) = Re{A(t) exp[jφ(t)] exp(j2πf0t)}    (4.3)
The quantity

x̃(t) = A(t) exp[jφ(t)]    (4.4)

is known as the complex envelope of the real signal x(t). Equation (4.4) is the polar, or exponential, form of the complex envelope.
It is often convenient to express the complex envelope in rectangular form, which is

x̃(t) = xd(t) + jxq(t)    (4.5)
The real and imaginary parts of the complex envelope are referred to as the direct component and quadrature component of x(t), respectively. Applying Euler’s formula to (4.4) gives

x̃(t) = A(t) cos φ(t) + jA(t) sin φ(t)    (4.6)
Thus:

xd(t) = A(t) cos φ(t)    (4.7)

and

xq(t) = A(t) sin φ(t)    (4.8)
In applications of practical interest A(t) and φ(t) are lowpass functions with bandwidths much less than f0. Thus, xd(t) and xq(t) are also lowpass signals. It follows from the polar representation of a complex number that A(t) and φ(t) are related to xd(t) and xq(t) by

A(t) = √(xd²(t) + xq²(t))    (4.9)

and

φ(t) = tan⁻¹[xq(t)/xd(t)]    (4.10)
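The polar and rectangular forms of the complex envelope are easy to relate numerically. The short Python sketch below (the function names are illustrative, not from the text) performs the round trip at a single time instant:

```python
# Sketch: converting the complex envelope between polar form (A, phi)
# and rectangular form (xd, xq). Function names are illustrative.
import math

def envelope_to_rect(A, phi):
    """Polar -> rectangular: xd = A*cos(phi), xq = A*sin(phi)."""
    return A * math.cos(phi), A * math.sin(phi)

def rect_to_envelope(xd, xq):
    """Rectangular -> polar: A = sqrt(xd^2 + xq^2), phi = atan2(xq, xd)."""
    return math.hypot(xd, xq), math.atan2(xq, xd)

# Round trip at one time instant
xd, xq = envelope_to_rect(2.0, 0.3)
A, phi = rect_to_envelope(xd, xq)
```

Note that atan2 is used in place of a bare arctangent so that the quadrant of the phase is preserved.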
Using (4.3) and (4.5), the time-domain signal x(t) can be written

x(t) = Re{[xd(t) + jxq(t)] exp(j2πf0t)}    (4.11)

which is

x(t) = xd(t) cos(2πf0t) − xq(t) sin(2πf0t)    (4.12)
Note that (4.12) could have been written by applying the trigonometric identity

cos(a + b) = cos a cos b − sin a sin b    (4.13)
to (4.1) and defining xd(t) and xq(t) using (4.7) and (4.8).
Although f0 is typically chosen as the center frequency of the bandpass signal, f0 is arbitrary and can be chosen for convenience. However, as will be illustrated in Example 4.1, xd(t) and xq(t) are dependent upon the selection of f0. Example 4.2 illustrates the lowpass representation of an analog FM signal. Examples 4.3, 4.4 and 4.5 illustrate the application to digital signals. These last three examples illustrate the development of simulation models for digital modulators.
Example 4.1.
Consider the bandpass signal
where fc is the carrier frequency and φ is the carrier phase deviation. We assume that fc ≫ fm and desire xd(t) and xq(t) as defined by (4.3) and (4.5). The first step is to choose the frequency f0 defined in (4.3). In order not to assume that f0 = fc, let fc = f0 + fΔ. This gives
which can be written
By inspection, the complex envelope is
Thus:
from which
and
follow directly by equating real and imaginary parts. Note that both xd(t) and xq(t) depend upon the relationship between fc and f0. The simplest expressions result if f0 = fc (fΔ = 0) but, as previously mentioned, the choice of f0 is arbitrary. In simulation problems we typically choose f0 so that the computational burden is minimized.
Example 4.2.
An analog FM modulator is defined by the expression
where Ac and fc represent the amplitude and frequency of the unmodulated carrier, respectively, m(t) is the message or information carrying signal, kf is the modulation index, t0 is an arbitrary reference time, and φ(t0) is the phase deviation at time t0. Assuming that the time reference t0 = 0 is selected and that φ(t0) = 0, it follows that xc(t) can be represented
Thus, the complex envelope is
from which
and
Thus, in order to represent x(t) in a simulation, it is necessary only to generate the two lowpass signals given by (4.24) and (4.25).
A suite of models for an FM modulator is illustrated in Figure 4.1. Figure 4.1(a) shows the continuous-time bandpass model. The continuous-time lowpass model, in which the output is the lowpass complex envelope representation of xc(t), is shown in Figure 4.1(b). The discrete-time equivalent is illustrated in Figure 4.1(c), in which the sampling period is T and n indexes the samples. Note that, in the discrete-time model, the integrator is modeled as an accumulator (summation operator). This is equivalent to rectangular integration in which the area accumulated in the kth time slot is Tm(kT). Other integrator models can be used, as will be discussed in later chapters.
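The discrete-time lowpass FM model of Figure 4.1(c) can be sketched in a few lines of Python. The function name is illustrative, and the scaling of the deviation constant kf follows one common convention, which may differ from the text's definition; the accumulator implements the rectangular integration described above.

```python
# A minimal sketch of a discrete-time lowpass FM modulator. The scaling
# of kf is an assumption and may differ from the text's definition.
import cmath, math

def fm_complex_envelope(m, kf, T, Ac=1.0):
    """Accumulate T*m[k] (rectangular integration) and form Ac*exp(j*phase)."""
    out, acc = [], 0.0
    for mk in m:
        acc += T * mk                       # area accumulated in the kth slot
        out.append(Ac * cmath.exp(1j * 2.0 * math.pi * kf * acc))
    return out

# A constant message produces a constant frequency offset proportional to kf
samples = fm_complex_envelope([1.0] * 8, kf=0.05, T=1.0)
```

Since only the phase is modulated, every output sample has magnitude Ac, and the phase advances by a fixed increment when the message is constant.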
We now consider a few examples involving digital modulation. In order to do this a simulation model for the modulator is needed. The basic model is illustrated in Figure 4.2. [Note: Figure 4.2 illustrates a bandpass model for which the output is the bandpass signal xck(t). The lowpass model used for simulation has the outputs xdk(t) and xqk(t), which define the lowpass complex envelope of xck(t).] We assume that ak represents a binary data stream. In addition, we allow M-ary signaling, in which b information bits are grouped together to form a data symbol, so that the transmitted signal in the kth signaling interval can carry more than one bit of information. Typically M = 2^b, but this is not a necessary assumption. The binary to M-ary symbol mapping shown in Figure 4.2 performs the function of grouping together b bits to form the M-ary symbol. The outputs of the mapper are the direct and quadrature components of the kth symbol. These are denoted dk and qk. (The kth symbol itself can be viewed as complex valued, sk = dk + jqk.)
The symbols dk and qk can be considered impulse functions having weights determined by the bit-to-symbol mapping. The impulse response of the pulse-shaping filter is denoted p(t) so that the direct and quadrature signals for the kth signaling interval, kT < t < (k + 1)T, are xdk(t) and xqk(t) as shown in Figure 4.2. The transmitter output for the kth signaling interval is
The corresponding discrete-time signal model is
which is simply the sampled version of the continuous-time signal model.
Before leaving this topic, we pause to review a few terms, to briefly discuss scattergrams, and to point out the difference between scattergrams, as used in the simulation context, and signal constellations, which are familiar to us from our study of digital communication theory. A signal space is defined as a K-dimensional space generated by K orthonormal basis functions φi(t), i = 1, 2, ..., K, and signals are represented as vectors in this space. For example, assume an M-ary communications system in which one of M signals is transmitted in the kth signaling interval. In terms of the basis functions, the signal transmitted in the kth signaling interval is expressed
In signal space the mth signal is represented by the vector
Signal space is scaled so that |Xm|² represents the energy associated with the mth signal in the set of possible transmitter outputs. The K-dimensional plot of the points Xm, m = 1, 2, ..., M defines the signal constellation.
The scattergram corresponding to a signal is a plot of xq(t) versus xd(t) and is therefore defined in terms of the lowpass complex envelope of a real bandpass signal. The dimensionality of the scattergram is one if xd(t) = 0 or xq(t) = 0. Otherwise, the dimensionality of the scattergram is two. Examples 4.3 and 4.4 illustrate that the scattergram and the signal constellation are closely related when K = 2, which is the case for QPSK and QAM. We will see in Example 4.5 that this relationship is lost when the dimensionality of the signal space exceeds two.
Example 4.3. (QPSK)
In order to form a QPSK signal, the data symbols ak are formed by taking binary symbols two at a time. Thus each data symbol consists of one of the binary pairs 00, 01, 10, or 11. The bandpass QPSK signal for the kth signaling interval has the form
where φk takes on one of the four values +π/4, −π/4, +3π/4, or −3π/4. The complex envelope corresponding to xck(t) can be written
from which the direct and quadrature components are
and
Note that if p(t) is constant over a signaling interval, both xdk(t) and xqk(t) are constant over a signaling interval. In this case both xdk(t) and xqk(t) take on only one of two values and switch instantaneously from one value to the other. If xqk(t) is plotted as a function of xdk(t), the scattergram, illustrated in Figure 4.3, results. Note that Figure 4.3, when properly scaled, also represents the signal constellation.
Each of the points in the QPSK signal constellation corresponds to a pair of binary symbols as determined by the bit-to-symbol mapping illustrated in Figure 4.2. Although the mechanism by which pairs of binary symbols are mapped to QPSK symbols is arbitrary, the mapping shown in Figure 4.3, which is known as Gray code mapping, is frequently used. In Gray code mapping nearest neighbors, in signal space, differ in only one binary symbol. This strategy is justified, since the error probability is typically an inverse monotonic function of the Euclidean distance between points in signal space.[3] For the mapping illustrated in Figure 4.3, assume that the bit sequence is b1b2, where b1 is the most significant bit and b2 is the least significant bit. If b2 = 0, dk = 1, and if b2 = 1, dk = −1, so that the least significant bit determines whether the QPSK symbol is in the left half or in the right half of the D-Q plane. In a similar fashion, if b1 = 0, qk = 1, and if b1 = 1, qk = −1, so that the most significant bit determines whether the QPSK symbol is in the upper half or in the lower half of the D-Q plane.
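The bit-pair mapping just described can be stated compactly in code. The Python sketch below (the function name is illustrative) implements the Gray code assignment described above, with b2 setting the direct component and b1 the quadrature component:

```python
def qpsk_map(b1, b2):
    """Gray-coded QPSK: LSB b2 sets the D axis, MSB b1 sets the Q axis."""
    dk = 1 if b2 == 0 else -1
    qk = 1 if b1 == 0 else -1
    return complex(dk, qk)

# The four symbols lie at +-1 +- j, i.e., at phases +-pi/4 and +-3pi/4
symbols = [qpsk_map(b1, b2) for b1 in (0, 1) for b2 in (0, 1)]
```

Nearest neighbors in the resulting constellation differ in exactly one bit, which is the defining property of a Gray code.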
In order to look at QPSK scattergrams for a nonconstant p(t) and the corresponding time-domain waveforms a simple MATLAB program is developed. The program is given in Appendix A and is somewhat general in that the number of levels on the D (direct) axis and the Q (quadrature) axis can be assigned independently. For QPSK both of these parameters are equal to two. The pulse-shaping filter is a sixth-order Butterworth filter.[4] The filter bandwidth used in the simulation, denoted bw, is equal to the symbol rate.[5] The input to the MATLAB program follows.
>> c4_qamdemo
Number of D levels > 2
Number of Q levels > 2
Number of symbols > 100
Number of samples per symbol > 20
Filter bandwidth, 0<bw<1 > 0.1
Executing the program given in Appendix A with these parameters yields the results illustrated in Figure 4.4. The top row shows scattergrams formed by plotting xq(t) as a function of xd(t). The scattergram at the top left results for p(t) = 1, 0 ≤ t < Tsym, where Tsym is the symbol period (twice the bit period). Note the relationship of this scattergram to the signal constellation shown in Figure 4.3. The scattergram at the top right is formed by passing this signal through a sixth-order Butterworth filter. The corresponding time-domain signals are illustrated on the bottom row of Figure 4.4, with xd(t) at the bottom left and xq(t) at the bottom right.
It is also worth noting from the simulation program given in Appendix A that the bit pattern is never explicitly formed. Rather, symbols are formed that represent pairs of bits. The receiver model demodulates symbols and, using simulation, the symbol error rate can be estimated. The mapping of symbol error rate to bit error rate is deterministic and can be accomplished analytically.
Example 4.4. (16-QAM)
The block diagram of a QAM transmitter can also be represented as shown in Figure 4.2. The bit-to-symbol mapper maps each group of four input binary symbols ak to a single 16-QAM symbol. The signal constellation is illustrated in Figure 4.5 along with the binary sequence corresponding to each 16-QAM symbol. As was the case with QPSK, the mapping of binary symbols to 16-QAM symbols is arbitrary but the mapping is usually defined such that a Gray code results. Note that Figure 4.5 is a Gray code mapping [2].
In QPSK each of the symbols dk and qk could take on one of two values, which in the previous example were defined to be +1 and −1. We see from Figure 4.5 that both dk and qk in 16-QAM can take on one of four values in each symbol period. For convenience in the simulation to follow, these four values are defined to be +3, +1, −1, and −3.
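A 16-QAM mapper with an independent Gray code on each axis can be sketched as follows. The particular bit-pair-to-level assignment below is an assumption for illustration; the mapping actually used in the text is the one defined by Figure 4.5.

```python
# Assumed Gray assignment of bit pairs to the four levels on each axis;
# adjacent levels differ in exactly one bit.
GRAY_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}

def qam16_map(b1, b2, b3, b4):
    """First bit pair selects the D level, second pair the Q level."""
    return complex(GRAY_LEVELS[(b1, b2)], GRAY_LEVELS[(b3, b4)])
```

Because each axis is Gray coded independently, nearest neighbors in the 16-point constellation again differ in only one bit.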
The same program used for the QPSK example can be used for the 16-QAM example. For 16-QAM the number of levels on the D (direct) axis and the Q (quadrature) axis are both equal to four. The pulse-shaping filter is, once again, a sixth-order Butterworth filter. For this example the filter bandwidth used in the simulation is twice the symbol rate. The input to the MATLAB program follows.
>> c4_qamdemo
Number of D levels > 4
Number of Q levels > 4
Number of symbols > 500
Number of samples per symbol > 20
Filter bandwidth, 0<bw<1 > 0.2
Executing the program given in Appendix A with the preceding parameters yields the results illustrated in Figure 4.6. The top row shows scattergrams formed by plotting xq(t) as a function of xd(t). The scattergram at the top left results for p(t) = 1, 0 ≤ t < Tsym, where Tsym is the symbol period (four times the bit period). Note the relationship of this scattergram to the signal constellation shown in Figure 4.5. The scattergram at the top right is formed by passing this signal through a sixth-order Butterworth filter. The corresponding time-domain signals are illustrated on the bottom row of Figure 4.6, with xd(t) at the bottom left and xq(t) at the bottom right. The four (steady-state) values can clearly be seen.
As in the previous example the bit pattern is never explicitly formed in the simulation. Rather, 16-QAM symbols are formed, each representing a group of four bits. The receiver model demodulates symbols and, using simulation, the symbol error rate can be estimated. The mapping of symbol error rate to bit error rate is deterministic and can be accomplished analytically.
Example 4.5. (4-FSK)
In the two preceding examples the dimensionality of the signal space of the transmitted signals was two, so that the mapping from signal space to the D-Q plane was trivial. We now consider an example in which binary symbols are grouped together two at a time as in QPSK but the dimensionality of the signal space is four. Thus, the signal space is generated using four orthogonal basis functions rather than two basis functions as in QPSK. For this example the basis functions are chosen to be sinusoids having different frequencies (thus the name 4-FSK). Letting the pulse shaping function p(t) be the constant A over the kth signaling interval we have
Thus, in each signaling interval one of four sinusoids is transmitted. Note that each sm(t) must pass through an integer number of cycles in the T-second signaling interval to ensure that
so that the signals are orthogonal.
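The orthogonality requirement is easy to verify numerically. The Python sketch below (names and parameter values are illustrative) approximates the correlation integral for two tones over a T-second interval:

```python
# Numeric check that tones completing an integer number of cycles in
# T seconds are orthogonal over that interval (rectangular-rule integral).
import math

def tone_inner_product(fm, fn, T=1.0, n=20000):
    """Approximate the integral of cos(2*pi*fm*t)*cos(2*pi*fn*t) over [0, T)."""
    dt = T / n
    return sum(math.cos(2 * math.pi * fm * k * dt) *
               math.cos(2 * math.pi * fn * k * dt) for k in range(n)) * dt
```

With T = 1, tones at 3 Hz and 5 Hz are orthogonal, while the self-correlation of either tone is T/2.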
Writing sm(t) in the form
yields
Thus, the complex envelope corresponding to the mth signal in the set of transmitted signals is
from which
and
Note that, even though p(t) is a constant, both xdk(t) and xqk(t) are time-varying over a signaling interval. This is in contrast to the results of the preceding two examples and is a consequence of the fact that the dimensionality of the underlying signal space K exceeds 2, the dimensionality of the D-Q plane.
From (4.3) we can write
Multiplying by exp (−j2πf0t) gives
or
As we saw in the previous section, x̃(t) is a lowpass signal (a signal having a spectrum that is nonzero only in the neighborhood of f = 0). Taking the lowpass portion of (4.43) yields
where Lp {·} denotes the lowpass portion of the argument.
The Fourier transform of (4.43) is given by
As illustrated in Figure 4.7, X(f) is nonzero only in the neighborhood of f = ±f0, with X+(f) representing the positive frequency portion of X(f) and X−(f) representing the negative frequency portion of X(f). Thus, the second of the resulting terms is nonzero only in the neighborhood of f = −f0 (X+(f) translated to the left by 2f0) and f = −3f0 (X−(f) translated to the left by 2f0). This term does not contribute to X̃(f) or, equivalently, to x̃(t), since X̃(f) is nonzero only in the neighborhood of f = 0. The manner in which X(f + f0) contributes to X̃(f) is illustrated in Figure 4.7. Note that X(f + f0) is nonzero only in the neighborhood of f = −2f0 (X−(f) translated to the left by f0) and f = 0 (X+(f) translated to the left by f0). It therefore follows that only X+(f + f0) contributes to X̃(f) and
which is equivalent to
As illustrated in Figure 4.7, the lowpass filtering operation can be implemented by using a filter having the transfer function H(f) = U(f + f0). Thus, (4.46) can be expressed
Expressions for Xd(f) and Xq(f) are easily derived. Fourier transforming (4.5) yields
Replacing f by −f gives
Since xd(t) and xq(t) are real, Xd(−f) = Xd*(f) and Xq(−f) = Xq*(f). Thus, (4.50) can be written
Complex conjugating (4.51) gives
Adding (4.49) and (4.52) gives
Multiplying (4.52) by −1 and adding the result to (4.49) gives
Inverse Fourier transforming Xd(f) and Xq(f), as defined by (4.53) and (4.54), gives xd(t) and xq(t), respectively.
The spectra Xd(f) and Xq(f) corresponding to the lowpass complex envelope derived in Figure 4.7 are illustrated in Figure 4.8. These spectra follow directly from (4.53) and (4.54). Also note that in plotting the quadrature component it is more natural to plot jXq(f). Note that Xd(f) is real and even and that Xq(f) is imaginary and odd.
From Figure 4.7 we see that the spectrum X(f) of the real bandpass signal x(t) is nonsymmetric about f0. As a result, samples of the lowpass complex envelope x̃(t) will be complex valued. Both the real and imaginary parts of x̃(t), namely xd(t) and xq(t), will have bandwidth B/2, which is half the bandwidth of the real bandpass signal x(t). Thus, as discussed in the previous chapter, both xd(t) and xq(t) must be sampled at a rate exceeding 2(B/2) = B samples per second. The result of this sampling operation will produce at least 2B samples per second. Conversely, if X(f) has conjugate symmetry about f0, X̃(f) will have conjugate symmetry about f = 0. For this case x̃(t) will be real (xq(t) = 0) and sampling the quadrature component will not be required.
As we know from linear system theory, Parseval’s theorem tells us that the Fourier transform preserves energy and power. Unfortunately, however, the energy (or power) in the complex envelope is not equal to the energy (or power) in the corresponding bandpass signal. Using (4.41) gives
for the instantaneous power in x(t). The preceding expression gives
Carrying out the indicated multiplication yields
Since the terms x̃²(t) exp(j4πf0t) and [x̃*(t)]² exp(−j4πf0t) represent bandpass signals, and therefore have zero average value, taking the expectation yields
By definition, the average power in the real bandpass signal x(t) is
This is sometimes referred to as real power. The power in the lowpass complex envelope x̃(t), which is sometimes referred to as complex power, is
Substitution of (4.59) and (4.60) into (4.58) yields
Thus, the power in the complex envelope of a signal is twice the power in the real bandpass signal from which it is derived. Fortunately, as we will see in a section soon to follow, a similar result holds for random signals and noise. Therefore, a number of important quantities, most notably the signal-to-noise ratio, are preserved when real bandpass signals are represented by their corresponding lowpass complex envelopes.
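This factor-of-two relationship can be checked with a small numerical experiment. For x(t) = A cos(2πf0t) the complex envelope is the constant A, so the complex power is A² while the real power is A²/2. The helper below is an illustrative sketch:

```python
# Average power of the real bandpass signal A*cos(2*pi*f0*t) over [0, T),
# computed by the rectangular rule, for comparison with the complex
# envelope power A**2.
import math

def avg_power_real(A, f0, T, n=20000):
    dt = T / n
    return sum((A * math.cos(2 * math.pi * f0 * k * dt)) ** 2
               for k in range(n)) * dt / T
```

With A = 2 and an integer number of carrier cycles in the interval, the real power is A²/2 = 2, exactly half the complex power A² = 4.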
The representation of signals in the frequency domain, through the use of the Fourier transform, implies that the signals are deterministic energy signals. Bandpass random signals also have a lowpass representation in terms of direct and quadrature components. The underlying mathematics for generating lowpass signal models for random bandpass signals is quite a bit different from that used in the previous two sections. We pause a minute to review the underlying theory.
Consider, for example, a narrowband random process defined by the equation
where θ is an arbitrary phase angle uniformly distributed in [−π, π).[6] The process in (4.62) can be written
or equivalently
The complex envelope corresponding to n(t) is defined as
In rectangular coordinates
The real envelope R(t) is
and
We assume that we know the power spectral density (PSD) of n(t). The problem is to determine the PSDs of nd(t), nq(t), and ñ(t).
The problem of determining quadrature models for bandpass random processes is commonly covered in basic courses on communication theory [3]. We cite only the most important results here. All of these results will be useful in the work to follow.
(Means) Since n(t) is a bandpass process, it is zero mean. Thus, the expectation of the right-hand side of (4.62) is zero mean and, consequently, nd(t) and nq(t) are also zero mean. Therefore:
where E {·} denotes statistical expectation.
(Variances) It also follows that nd(t) and nq(t) have the same variance (or power, since the process is assumed zero mean) and this power is equal to the total power in the bandpass process. In other words:
where N is the total power in the underlying bandpass process.
(PSD of nd(t) and nq(t)) The PSDs of nd(t) and nq(t) are equal and are determined from the PSD of n(t), denoted Sn(f), by the expression [3]
(Autocorrelation of nd(t) and nq(t)) It follows from the Wiener-Khinchine theorem [3] that
and
where Rnd(τ) and Rnq(τ) are the autocorrelation functions of nd(t) and nq(t) and ↔ denotes a Fourier transform pair.
(Cross-PSD) The cross PSD of nd(t) and nq(t) is given by
Note that the cross-PSD is imaginary.
(Cross-correlation of nd(t) and nq(t)) We again invoke the Wiener-Khinchine theorem to define the cross-correlation of nd(t) and nq(t). The result is
where Rndnq(τ) is the cross-correlation of nd(t) and nq(t). Note that since Sndnq(f) is imaginary, Rndnq(τ) is odd. For bandlimited processes the cross-correlation, as well as either autocorrelation, must be continuous. Since an odd continuous function must vanish at τ = 0, it follows that, for a bandlimited process, nd(t) and nq(t) are uncorrelated. However, nd(t) and nq(t + τ) may be correlated for τ ≠ 0.
(Mean of the complex envelope ñ(t)). The mean of ñ(t) is
Since the expectation of the sum is the sum of the expectations
(Variance of the complex envelope ñ(t)). The power in ñ(t) is
Since ñ(t) is zero mean, this power is also the variance. Carrying out the indicated multiplication, (4.78) can be written
Since nd(t) and nq(t) are uncorrelated for bandlimited processes
and, in a similar fashion
Thus
where Pnd and Pnq represent the power in nd(t) and nq(t), respectively. It follows from (4.70) that
which means that the power in the lowpass complex envelope representation of a bandpass signal is double the power of the real bandpass signal from which the lowpass complex envelope is derived.
(PSD of the complex envelope ñ(t)). From (4.71) we know that the PSDs of nd(t) and nq(t) are equal. Thus
We now illustrate why the preceding relationships are important in the simulation context.
As we know from basic communication theory, the signal-to-noise ratio (SNR) at the input of a receiver is usually a major factor in determining the performance of the system. At a receiver input, both the signal and the noise are bandpass. Assuming that the signal and noise are additive, the receiver input is
where x(t) is the signal and n(t) represents the noise. The signal-to-noise ratio, in terms of the real bandpass signals, is defined as
It follows from (4.58) and (4.83) that
where (SNR)bp and (SNR)lp refer to the signal-to-noise ratio based on the real bandpass signals and the corresponding lowpass complex envelopes, respectively. This very important result shows that representing bandpass functions (both signal and noise) by their respective lowpass equivalents, which is standard simulation methodology, preserves the signal-to-noise ratios. A simple example further illustrates this important fact by viewing the underlying signals in the frequency domain.
Example 4.6. (SNR Transformations)
As a simple example, assume that a bandpass signal is represented by
where x(t) is the sinusoid
The bandpass signal can be represented
The PSD of x(t) is, therefore:
as illustrated in Figure 4.9(a). Thus, the total power in the real bandpass signal is
as expected from (4.89). The PSD of the assumed noise is illustrated in Figure 4.9(b). Thus, the total noise power is
This gives, from (4.92) and (4.93)
for the bandpass signal-to-noise ratio.
We now turn our attention to the lowpass equivalents of x(t) and n(t). Equation (4.89) can be written
from which the complex envelope is
The power in the complex envelope is
which gives the PSD
as shown in Figure 4.9(c). The PSD of nd(t) and nq(t) is given by (4.71) and is illustrated in Figure 4.9(d). From (4.84) the PSD of ñ(t) is found by multiplying the PSD illustrated in Figure 4.9(d) by two. The result is illustrated in Figure 4.9(e). From Figure 4.9(e) the power in the complex lowpass representation of the noise is
Combining (4.97) and (4.99) yields
Comparing (4.94) and (4.100) we see that the signal-to-noise ratio is preserved when real bandpass signals are replaced by their complex lowpass equivalents.
We now turn our attention from signals to systems. The basic problem is to determine the time-domain input-output relationship for a linear system given that the input to the system and the unit impulse response of the system are both bandpass signals expressed in terms of lowpass complex envelopes. The result will provide us with a methodology for developing waveform-level simulations of linear systems based on lowpass models.
Assuming that a system is linear, we know that convolution may be used to determine the output, y(t), given the input, x(t). For the time-invariant case the convolution takes the simple form
where h(t) is the unit impulse response of the time-invariant system and the symbol ⊛ is used to denote convolution. By definition, the complex envelopes of the linear time-invariant (LTIV) system input, x̃(t), and output, ỹ(t), are defined by
and
respectively. If we require that the relationship between x̃(t) and ỹ(t) satisfy
so that (4.101) and (4.104) have exactly the same form, the unit impulse response of the bandpass system h(t) and the corresponding complex envelope, h̃(t), must be related by
A formal proof of the preceding statements appears in Appendix B. A consequence of the factor of 2 is that a unity gain bandpass filter corresponds to a unity gain lowpass filter in the complex envelope representation.
Equation (4.105) is easily justified by showing that the factor of 2 in (4.105) results in the transformation of a unity gain bandpass filter into the unity gain lowpass filter so that the factor of 2 preserves the filter passband gain. Consider the ideal bandpass filter characteristic illustrated in Figure 4.10(a). The transfer function of the ideal bandpass filter is represented by
where H+(f) and H−(f) are the positive frequency and negative frequency portions of the bandpass filter transfer function, respectively. Replacing f by f + f0 gives
as illustrated in Figure 4.10(b). Clearly H+(f + f0) is a lowpass function and is therefore defined as H̃(f). Thus:
as shown in Figure 4.10(c). Note that (4.108) can also be written
We see that a unity gain bandpass filter maps to a unity gain lowpass filter, since H̃(f) is derived from the positive frequency portion of the transfer function for the bandpass filter by a simple frequency translation, and no amplitude scaling is involved.
It is important to understand the difference between (4.48) and (4.109). Representing bandpass signals by their lowpass complex envelope results in
while representing linear bandpass systems by their lowpass equivalent results in
Except for the factor of 2, the two expressions are equivalent.
From (4.101) and (4.104) we see that there are two techniques that can be used to compute, in the time domain, the output of an LTIV system given the bandpass input signal and the unit impulse response of the network. The two techniques are illustrated in Figure 4.11. The first technique is to simply convolve x(t) and h(t) as defined by (4.101) and illustrated in Figure 4.11(a). The second technique is to determine the complex envelopes of x(t) and h(t), convolve the complex envelopes as defined by (4.104), and then determine the bandpass output signal, y(t), using (4.103). This technique is shown in Figure 4.11(b).
Using (4.104) we can write
Since the convolution of a sum is the sum of convolutions (linear operations again) we have
Thus, the direct component of a linear system output is given by
and the quadrature component of a linear system output is given by
The convolution of two complex functions is therefore equivalent to four real convolutions just as the multiplication of two complex numbers is equivalent to four real multiplications. The operations for deriving the direct and quadrature outputs of a linear bandpass system are defined by the operations shown in Figure 4.12.
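The four-real-convolution structure of Figure 4.12 can be checked against a single complex convolution. In the Python sketch below (the helper names are illustrative) the direct and quadrature sequences of the input, and of the impulse response, are assumed to have matching lengths:

```python
# Sketch: one complex convolution expressed as four real convolutions,
# yd = xd*hd - xq*hq and yq = xd*hq + xq*hd (each * a real convolution).
def conv(a, b):
    """Discrete linear convolution of two sequences."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def complex_conv_via_real(xd, xq, hd, hq):
    """Direct and quadrature outputs from four real convolutions."""
    yd = [p - q for p, q in zip(conv(xd, hd), conv(xq, hq))]
    yq = [p + q for p, q in zip(conv(xd, hq), conv(xq, hd))]
    return yd, yq
```

The real and imaginary parts of the directly computed complex convolution agree with yd and yq, term by term.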
Example 4.7.
In this example we determine the values of hd(t) and hq(t) for a bandpass phase shifter. Assume that the input to the system is
and that the output of the phase shifter is
so that the system shifts the input phase by φ. This model could be used to represent synchronization errors in a demodulator. In order to simulate this device using complex lowpass models hd(t) and hq(t) must be derived.
The complex envelopes of x(t) and y(t) are given by
and
respectively. It follows that
Writing this in rectangular coordinates yields
Equating real parts gives
and equating imaginary parts gives
In this example xd(t) and xq(t) are multiplied by cos φ and sin φ, respectively. Thus:
and
Note that the delta function is present because the system is memoryless.
The lowpass model of a phase-shift network is illustrated in Figure 4.12 with hd(t) and hq(t) given by (4.124) and (4.125), respectively. Of course, φ could be time varying.
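Because the phase shifter is memoryless, its lowpass model reduces to a complex multiplication by exp(jφ). The Python sketch below (the function name is illustrative) confirms that the direct/quadrature form derived above agrees with that complex multiply:

```python
# Sketch: the memoryless phase-shift model. Multiplying the complex
# envelope by exp(j*phi) equals the direct/quadrature form derived above.
import cmath, math

def phase_shift(xd, xq, phi):
    """Return (yd, yq) with yd = xd*cos(phi) - xq*sin(phi),
    yq = xd*sin(phi) + xq*cos(phi)."""
    return (xd * math.cos(phi) - xq * math.sin(phi),
            xd * math.sin(phi) + xq * math.cos(phi))
```

In a simulation loop, phi may of course vary from sample to sample, modeling a time-varying synchronization error.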
In order to simulate a system using bandpass components, such as bandpass filters, we typically know the transfer function, H(f). The lowpass simulation model for the filter is that illustrated in Figure 4.12. In order to develop the simulation model for the filter, which is based on the complex envelope of h(t), it is necessary to determine hd(t) and hq(t) from H(f), the transfer function of the bandpass filter. There are two fundamental methods for finding hd(t) and hq(t). The first method is to determine Hd(f) and Hq(f) from H(f) and inverse transform Hd(f) and Hq(f) in order to establish hd(t) and hq(t). The second method is to determine H̃(f) from H(f), inverse transform to find h̃(t), and take the real and imaginary parts to determine hd(t) and hq(t).
The first step in either of the two methods of finding hd(t) and hq(t) from H(f) is to determine H̃(f). By definition
Taking the inverse Fourier transform yields
The real and imaginary parts of h̃(t) give hd(t) and hq(t), respectively.
Replacing f by −f in (4.126) gives
Since hd(t) and hq(t) are both real functions of time, basic Fourier transform theory tells us that Hd(−f) is the complex conjugate of Hd(f) and that Hq(−f) is the complex conjugate of Hq(f). Thus, (4.128) can be written
Note that H̃(f) and H̃(−f) are not a complex conjugate pair, because h̃(t) is not, in general, a real function of time. Taking the complex conjugate of (4.129) gives
Adding (4.126) and (4.130) gives
Multiplying (4.130) by −1 and adding the result to (4.126) gives
The functions hd(t) and hq(t) are then obtained from the inverse Fourier transforms of Hd(f) and Hq(f), respectively. If H̃(f) = H̃*(−f), so that Hq(f) and hq(t) are both zero, H(f) is said to exhibit conjugate symmetry about f0. Figure 4.13 illustrates H̃(f), H̃*(−f), Hd(f), and Hq(f) for the case in which H(f) is an ideal bandpass filter.
Example 4.8.
We now consider the determination of hd(t) and hq(t) directly from H(f). From H(f), H̃(f) is written using (4.108) as illustrated in Figure 4.13. The inverse transform of H̃(f) is
since H̃(f) = 1 over the range of integration. This integrates to
The preceding expression can be written
which is
Taking the real and imaginary parts yields
and
Note that if f0 is selected to be the algebraic center frequency (fu + fl)/2, hq(t) = 0 for all t. This obviously simplifies the lowpass simulation model illustrated in Figure 4.12. The important consequence is that the computational burden associated with finding the system output, given the system input, is reduced by a factor of 2.
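The conclusion of Example 4.8 can be checked numerically. The sketch below approximates the inverse transform of H̃(f) = 1 on (fl − f0, fu − f0) by a midpoint-rule sum; when f0 is the band center the quadrature part vanishes, and when f0 is offset it does not. All parameter values are illustrative.

```python
# Numeric sketch of Example 4.8: inverse transform of the ideal bandpass
# filter's complex-envelope transfer function, equal to 1 on (fl-f0, fu-f0).
import math

def h_tilde(t, fl, fu, f0, n=4000):
    """Midpoint-rule approximation of (hd(t), hq(t))."""
    lo, hi = fl - f0, fu - f0
    df = (hi - lo) / n
    freqs = [lo + (k + 0.5) * df for k in range(n)]
    hd = sum(math.cos(2 * math.pi * f * t) for f in freqs) * df
    hq = sum(math.sin(2 * math.pi * f * t) for f in freqs) * df
    return hd, hq

# Choosing f0 = (fl + fu)/2 makes hq(t) vanish for all t
```

With the band centered at f = 0, the sine integrand is odd and its integral cancels, which is exactly the hq(t) = 0 condition noted above.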
As implied in the preceding example, in many cases of practical interest f0 can be selected so that hq(t) ≪ hd(t) for all t. In such cases hq(t) can often be neglected, so that the complex envelope of the impulse response can be approximated as a real function without significant loss of accuracy. As pointed out in the preceding example, it is important to take advantage of this approximation when applicable, since elimination of hq(t) reduces the computational burden of the filtering operation by a factor of 2. It follows from basic Fourier transform theory that h̃(t) is real if H̃(f) exhibits conjugate symmetry (even amplitude spectrum and odd phase spectrum) about f = 0. This will be the case if H(f), the transfer function of the bandpass filter, exhibits conjugate symmetry about f = f0. Most filter designs closely approximate this property if the bandwidth of the filter is small compared to the center frequency of the filter. The quadrature component hq(t) can be viewed as a measure of the conjugate asymmetry of H(f) about f0.
Consider the frequency division multiplex (FDM) of M signals
where ai(t) and φi(t) represent the amplitude and phase modulation on the ith carrier, respectively, and fi is the ith carrier frequency. Since the terms ai(t) are real, we may write
Defining
gives
We can define the complex envelope of y(t) as
where, for the moment, f0 remains arbitrary. With this definition y(t) can be written
Thus, the complex envelope of y(t) is
The direct and quadrature components of the FDM signal are therefore given by
and
respectively.
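These relations are easy to exercise numerically. The sketch below (Python, with made-up channel parameters) evaluates the complex envelope of an FDM signal at a time instant; its real and imaginary parts are the direct and quadrature components given above:

```python
import cmath
import math

def fdm_envelope(t, carriers, f0):
    # Complex envelope of an FDM signal referenced to f0:
    #   y~(t) = sum_i a_i(t) * exp(j*[2*pi*(f_i - f0)*t + phi_i(t)])
    # carriers: list of (a_i, f_i, phi_i) tuples evaluated at time t.
    return sum(a * cmath.exp(1j * (2 * math.pi * (fi - f0) * t + phi))
               for a, fi, phi in carriers)
```

Choosing f0 equal to one of the fi translates that channel to f = 0, exactly as is done for x2(t) in Example 4.9 below.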
Example 4.9.
Consider the frequency division multiplex (FDM) signal consisting of four channel signals as shown in Figure 4.14(a). Suppose that the signal of interest is x2(t) and that a simulation is being performed in order to examine the effects of adjacent channel interference and intermodulation distortion resulting from a nonlinear amplifier in the system. Since we have interest in x2(t), we shall let f0 = f2 so that x2(t) is translated to f = 0 when the complex envelope of the composite signal y(t) is formed. With f0 = f2 the complex envelope of y(t) is
This is illustrated in Figure 4.14(b) for f0 = f2. Note that the minimum sampling frequency for ỹ(t) is dependent on which bandpass signal is translated to f = 0. For the case shown, f0 = f2, the sampling frequency must satisfy
where B is the bandwidth of X4(f). The operations involved in forming ỹ(t) are illustrated in Figure 4.15.
Throughout this chapter, the focus has been on fixed (time-invariant) linear systems. Many systems of practical interest involve time-varying components, such as the wireless radio channel, or nonlinear components, such as high-power amplifiers operating near the point of saturation. Design and analysis of systems that are nonlinear, or time-varying, or both nonlinear and time-varying, using traditional mathematical tools are usually very difficult or even impossible. As a result, simulation is frequently used as a design and analysis tool for these systems. Chapter 12 focuses on nonlinear systems and Chapter 13 is devoted to time-varying systems. However, for completeness we very briefly consider nonlinear and time-varying systems here.
The basic concept of a transfer function is not defined for a nonlinear system. Even though an impulse response can be measured for a nonlinear system, it does not in general relate the system output to the system input through convolution. The familiar convolution integral is based on the concept of superposition, which does not hold for nonlinear systems. Simulation models for nonlinear systems can certainly be developed but they are typically based on measurements obtained from physical systems. Analysis can sometimes be used to develop simulation models for nonlinear systems, but the techniques are usually ad hoc and cannot be generalized. Several important simulation models for nonlinear systems are explored in Chapter 12.
We now pause to consider a simulation model for a simple nonlinear system. The model developed in this example will be useful in our later work.
Example 4.10.
Assume that the input to a system has the form
Measurements made at the system output for various choices of A(t) and θ(t) show that the envelope of the system output is a constant independent of A(t) but that the zero crossings of the input are preserved and match the zero crossings of the output. Thus, the measurements suggest that the system can be accurately modeled by a bandpass hard limiter. Our task is to develop a simulation model for the device.
The complex envelope of the system input is
The output of the bandpass hard limiter is defined as a sinusoid having a constant amplitude and a phase deviation equal to the phase deviation of the input. Thus
for which the complex envelope is
where B is an assumed positive constant. We see that, for a bandpass hard limiter, it is the envelope of the signal that is hard limited rather than the signal itself.
Note that by definition of magnitude
The complex envelope of the output signal can be written
Thus
from which
and
The device defined by (4.157) and (4.158) is referred to as a bandpass hard limiter. It removes all variations on the envelope while preserving the zero-crossing locations.
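In complex-envelope terms the limiter is essentially one line of code: the output envelope is forced to B while the input phase is preserved. A Python sketch (B and the input samples are hypothetical values):

```python
def hard_limit(xd, xq, B=1.0):
    # Bandpass hard limiter on the complex envelope: the output direct and
    # quadrature components are B*xd/A and B*xq/A, where A = sqrt(xd^2 + xq^2)
    # is the input real envelope.  The phase (and hence the zero crossings of
    # the bandpass signal) is unchanged; the envelope is clamped to B.
    A = (xd * xd + xq * xq) ** 0.5
    return B * xd / A, B * xq / A
```

The output envelope is B regardless of the input envelope, as the measurements in the example require.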
A number of demodulators are based on nonlinear operations. For example, assume that r(t), defined by
represents a received signal at the input of a demodulator. Also assume that the purpose of the demodulator is to recover the positive portion of the envelope, as is the case for AM [3].[7] The envelope detector, which is usually used for envelope recovery (AM demodulation), has the output z(t) defined by
Also useful for carrier recovery is the square-law demodulator defined by
The envelope and square-law demodulators are examples of noncoherent nonlinear demodulators and are used when recovery of the carrier phase deviation φ(t) is not required.
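On the complex envelope these noncoherent detectors reduce to simple magnitude operations. A minimal Python sketch of that modeling choice (a simulation convenience, not a circuit description):

```python
import math

def envelope_detect(xd, xq):
    # Envelope detector modeled as the magnitude of the complex
    # envelope, A(t) = sqrt(xd^2 + xq^2).
    return math.hypot(xd, xq)

def square_law_detect(xd, xq):
    # Square-law device modeled as the squared envelope A(t)^2.
    return xd * xd + xq * xq
```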
A number of situations exist in which recovery of the carrier phase deviation is required. Examples are demodulators for analog FM and PM signals and demodulators for PSK and QPSK digital signals. The basic building block for demodulators requiring phase recovery is the phase-locked loop (PLL). The phase-locked loop is a nonlinear system and is covered in detail in Chapter 6.
In contrast to the nonlinear case, if a system is linear but time-varying, many of the tools developed in this chapter can be used for analysis and system-modeling purposes. This is true because, as long as a system is linear, convolution can still be used to relate the system input and output in the time domain and transfer functions can be used to relate the system input and output in the frequency domain. The system impulse response and the system transfer function are defined for linear time-varying systems. However, both the impulse response and the transfer function must be modified from their time-invariant definitions to account for the time-varying nature of the system.
For example, if a linear system is time-varying, the system input x(t) and output y(t) are related by the convolution of complex envelopes
where h̃(τ, t) is the time-varying impulse response of the system. The impulse response is defined as the response of the system, measured at time t, to an impulse applied at the input τ seconds earlier. In other words, an impulse is applied to the system input at time t − τ and the response is measured at time t, after an “elapsed time” of τ. For a time-invariant system, the impulse response is a function only of the time difference t − τ. The impulse is assumed to be applied at t − τ = 0 and the resulting impulse response is the familiar h(τ).
Since the impulse response of a time-varying system is a function of two time-domain variables, t and τ, the transfer function of a time-varying system is also a function of two frequency-domain variables. It is defined as
It follows that h̃(τ, t) and H̃(f, t) are a Fourier transform pair.
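In a sampled simulation the time-varying convolution becomes a sum in which the impulse response is re-evaluated at every output instant. A minimal Python sketch (the callable h_of is a hypothetical channel model, not anything defined in the text):

```python
def tv_convolve(x, h_of, T=1.0):
    # Discrete approximation of y(t) = integral h(tau, t) x(t - tau) dtau
    # for a linear time-varying system.  h_of(k, n) returns the impulse-
    # response sample at delay index k for output time index n; T is the
    # sampling period.
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k in range(n + 1):
            acc += h_of(k, n) * x[n - k] * T
        y.append(acc)
    return y
```

If h_of ignores its second argument n, this collapses to ordinary time-invariant convolution.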
These concepts will be expanded upon in Chapter 13 and the results will be a theoretical framework for the design, analysis, and simulation of linear time-varying systems. Thus, even though the focus of this chapter is linear time-invariant systems, many of the concepts discussed in this chapter serve as a foundation for developing a methodology for simulating time-varying systems.
Example 4.11.
Consider the situation depicted in Figure 4.16. A receiver in a moving automobile (mobile) receives a signal from a single transmitter that has propagated along two paths. One propagation path is a direct path from the transmitter to the mobile. The second path is due to a reflection off a building. This is often referred to as the two-ray model. The automobile is assumed to be moving as shown. As a result of this movement, the lengths of both paths change with time. Consequently both the signal attenuation and the propagation delay for each path are time-varying. Assume that the attenuation and delay associated with the nth signal path are denoted an(t) and τn(t), respectively. The received signal can then be defined by the simple channel model
We assume that the channel input is the general modulated signal [see (4.1)]
Substitution of (4.166) into (4.165) yields
Since waveform-level simulation is usually accomplished using complex envelope signals, we now determine the complex envelope for both x(t) and y(t).
The complex envelope of the transmitted signal is, by inspection:
Determining the complex envelope of the received signal defined by (4.167) takes a little more effort. Since an(t) and A(t) are both real, (4.167) can be written
From (4.168)
so that
The complex path attenuation is defined as
so that
Thus, the complex envelope of the receiver input is
This defines the complex lowpass channel model. This model will be revisited in Chapter 14 when we consider waveform channel models in detail.
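The model above can be sketched directly in code: the output complex envelope is the sum of ãn(t) x̃(t − τn(t)) over the paths, with the complex path attenuation ãn(t) = an(t) exp(−j2πf0τn(t)). A Python fragment (path gains, delays, and the input envelope are hypothetical):

```python
import cmath
import math

def channel_output(x_tilde, t, paths, f0):
    # Complex lowpass multipath channel: each path contributes its (real)
    # attenuation a_n, a carrier-phase rotation exp(-j*2*pi*f0*tau_n), and
    # a delayed copy of the input complex envelope.
    # paths: list of (a_n, tau_n) evaluated at time t; x_tilde: function of t.
    return sum(a * cmath.exp(-2j * math.pi * f0 * tau) * x_tilde(t - tau)
               for a, tau in paths)
```

Note that even a fixed delay produces a phase rotation of 2πf0τn radians, which is why the delays must be tracked to a small fraction of a carrier cycle.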
This chapter dealt with signal and system theory based on lowpass complex envelope representations of bandpass signals and systems. The motivation for using complex envelope representations for bandpass signals and systems in simulations is computational efficiency. Through the use of the complex envelope, the number of samples required to represent a bandpass signal is significantly reduced. The use of these techniques directly leads to a significant reduction in the time required to execute a given simulation. Therefore, in the work to follow we will attempt to model all bandpass signals and systems using lowpass complex models.
Two fundamental concepts were addressed in this chapter. The first concept was the development of models for signals and systems based on the lowpass complex envelope representation of bandpass signals and the lowpass equivalent impulse response of bandpass systems. The second concept dealt with the development of techniques for calculating the lowpass complex envelope for the system output given the lowpass complex envelope of the system input and the lowpass model for the system. We saw that complex envelope models for bandpass signals are typically specified in terms of xd(t) and xq(t), which are the real and imaginary components of the lowpass complex envelope x̃(t). Processing the complex envelope of the system input through the system model, defined by hd(t) and hq(t), usually involves four real convolutions. The computational burden associated with this operation can be reduced by a factor of 2 if the transfer function of the bandpass system exhibits conjugate symmetry about the reference frequency f0. This reduction in computational burden arises from the fact that hq(t) = 0 for the conjugate symmetry case. In many cases of practical interest hq(t), while not exactly equal to zero, is negligible compared to hd(t). A filter having a bandwidth much less than the center frequency is an example of a case in which hq(t) can be neglected.
While the emphasis in this chapter was on fixed (time-invariant) linear systems, many practical communication systems involve operations that are nonlinear, time-varying, or both. While these systems are much more complicated than the fixed linear systems considered here, as illustrated by the two examples presented in the preceding section, it is possible to develop simulation models, based on complex envelope representations, for these systems also. These more complicated systems are covered in Chapters 12 and 13.
The topic of signal and system analysis, based on complex envelope representations with an emphasis on applications to communications, can be found in a variety of books. The following are examples:
S. Haykin, Communication Systems, 3rd ed., New York: Wiley, 1994.
R. E. Ziemer and W. H. Tranter, Principles of Communications: Systems, Modulation and Noise, 5th ed., New York: Wiley, 2002.
The following two books present the material in this chapter from a simulation point of view:
M. C. Jeruchim, P. Balaban and K. S. Shanmugan, Simulation of Communication Systems, 2nd ed., New York: Plenum, 2000.
F. M. Gardner and J. D. Baker, Simulation Techniques, New York: Wiley, 1997.
4.1 | Another method for defining the lowpass complex envelope is through the use of the Hilbert transform. The Hilbert transform of a signal x(t) is denoted x̂(t) and is computed by passing x(t) through a linear filter having the transfer function H(f) = −j sgn(f), where sgn(f) denotes the signum function, so that X̂(f) = −j sgn(f)X(f). The analytic signal xA(t) corresponding to the real signal x(t) is the complex signal defined by xA(t) = x(t) + jx̂(t). The lowpass complex envelope is then written x̃(t) = xA(t) exp(−j2πf0t), where f0 is usually taken as the center frequency of the bandpass signal x(t).
|
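For a pure sinusoid the Hilbert-transform route of Problem 4.1 can be verified in closed form: the Hilbert transform of A cos(2πf0t + φ) is A sin(2πf0t + φ), so the analytic signal is A exp[j(2πf0t + φ)] and the complex envelope is the constant A exp(jφ). A Python check (the numerical values are arbitrary):

```python
import cmath
import math

def complex_envelope(x, x_hat, f0, t):
    # x~(t) = [x(t) + j*x^(t)] * exp(-j*2*pi*f0*t): the complex envelope
    # obtained from the analytic signal x_A(t) = x(t) + j*x^(t).
    return (x + 1j * x_hat) * cmath.exp(-2j * math.pi * f0 * t)

A, phi, f0, t = 3.0, 0.4, 10.0, 0.123
x  = A * math.cos(2 * math.pi * f0 * t + phi)   # bandpass signal sample
xh = A * math.sin(2 * math.pi * f0 * t + phi)   # its Hilbert transform
env = complex_envelope(x, xh, f0, t)            # should equal A*exp(j*phi)
```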
4.2 | A bandpass signal is defined by
where
|
4.3 | An angle modulated signal is defined by
|
4.4 | An FM signal is represented by
|
4.5 | A single sideband (SSB) signal can be represented by where the plus sign is used for lower sideband SSB, the minus sign is used for upper sideband SSB, and m̂(t) is the Hilbert transform of the message signal m(t).
|
4.6 | In frequency-shift keyed (FSK) signaling, transmission of a binary 0 (space) or a binary 1 (mark) is accomplished using signals of two different frequencies. For example, assume that an FSK signal is given by where Tb is the bit period and fΔ is the difference between the two frequencies in the FSK signal set. It follows that the two frequencies in the FSK signaling set are f0 + fΔ/2 and f0 − fΔ/2. Determine the complex envelope of x(t) in both polar and rectangular form. |
4.7 | Binary phase-shift keying (PSK) signals can be defined by where the signal defined by k = 0 is used for transmission of a binary 0 and the signal defined by k = 1 corresponds to transmission of a binary 1. As in the previous problem, Tb is the bit period. Determine the complex envelope of x(t) in both polar and rectangular form. |
4.8 | The Fourier transform of a signal x(t) is illustrated in Figure 4.17. Assume that X(f) is real and positive for all f.
|
4.9 | Develop a MATLAB program to compute and plot hd(t) and hq(t) as defined by (4.137) and (4.138). Assume that fl = 180, fu = 220 and that f0 takes on the following four values:
Compare the results. What do you observe from this comparison? |
4.10 | The Fourier transform of the complex envelope of a signal x(t) is shown in Figure 4.18. Assume that is real and positive for all f. Plot accurately Xd(f) and Xq(f). Be sure to label all frequencies of interest and the amplitudes corresponding to these frequencies. |
4.11 | A second-order bandpass filter is defined by where ω0 is the geometric center frequency of the filter in radians/second and ωb is the filter bandwidth in radians/second. Assume that the impulse response of the filter is defined by
|
4.12 | The input to a bandpass hard limiter is the AM signal The output of the hard limiter is to be
|
% File: c4_qamdemo.m
levelx = input('Number of D levels > ');
levely = input('Number of Q levels > ');
m = input('Number of symbols > ');
n = input('Number of samples per symbol > ');
bw = input('Filter bandwidth, 0<bw<1 > ');
%
[xd,xq] = qam(levelx,levely,m,n);
%
[b,a] = butter(6,bw);               % determine filter coefficients
yd = filter(b,a,xd);                % filter direct component
yq = filter(b,a,xq);                % filter quadrature component
%
subplot(2,2,1)                      % first pane
plot(xd,xq,'o')                     % unfiltered scatterplot
scale = 1.4;                        % axis scale factor
maxd = max(xd); maxq = max(xq);
mind = min(xd); minq = min(xq);
axis([scale*mind scale*maxd scale*minq scale*maxq])
axis equal
xlabel('xd'), ylabel('xq')
%
subplot(2,2,2)                      % second pane
plot(yd,yq)                         % filtered scatterplot
axis equal
xlabel('xd'), ylabel('xq')
%
sym = 30;                           % number of symbols in time plot
nsym = (0:sym*n)/n;                 % x axis vector for time plots
subplot(2,2,3)                      % third pane
plot(nsym(1:sym*n),yd(1:sym*n))     % filtered direct component
xlabel('symbol index'), ylabel('xd')
%
subplot(2,2,4)                      % fourth pane
plot(nsym(1:sym*n),yq(1:sym*n))     % filtered quadrature component
xlabel('symbol index'), ylabel('xq')
% End of script file.
function [xd,xq] = qam(levelx,levely,m,n)
xd = mary(levelx,m,n);
xq = mary(levely,m,n);
% End of function file.
function y = mary(levels,m,n)
% m = number of symbols
% n = samples per symbol
l = m*n;                        % Total sequence length
y = zeros(1,l-n+1);             % Initialize output vector
lm1 = levels-1;
x = 2*fix(levels*rand(1,m))-lm1;
for i = 1:m                     % Loop to generate info symbols
   k = (i-1)*n+1;
   y(k) = x(i);
end
y = conv(y,ones(1,n));          % Make each symbol n samples
% End of function file.
We now formally show that if x(t) and h(t) are defined as
and
then
where
The proof of (4.179) is accomplished by substituting x(t) and h(t) in the integral and evaluating the result. Recognizing that the sum of a function and its complex conjugate is twice the real part of the function allows us to express x(t) and h(t) in the form
and
respectively. Substituting x(t) and h(t) into the convolution integral yields y(t) as the sum of four integrals. We therefore write
where
and
Note that the integrands in both I1 and I2 are complex bandpass signals having a center frequency of 2f0. The envelopes of these functions are slowly varying with respect to 2f0, since the bandwidths of x̃(t) and h̃(t) are assumed to be much less than 2f0. Each integral therefore approximately cancels half-cycle by half-cycle. The approximation that I1 and I2 are negligible improves as f0 increases; thus I1 → 0 and I2 → 0 as f0 → ∞.
Note also that I3 and I4 are complex conjugates. Thus:
Substitution of (4.185) into (4.187) yields
which is equivalent to
Since by definition
it follows that
The preceding development shows that two bandpass signals may be convolved by convolving their complex envelopes and using (4.190) to generate the desired bandpass signal from its complex envelope. Note the assumption that f0 is large.
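The result (4.190), including the factor of 1/2, translates directly to sampled signals. A Python sketch of the complex-envelope convolution (the sample values in the check are hypothetical):

```python
def ce_convolve(x_tilde, h_tilde, T=1.0):
    # y~[n] = (1/2) * sum_k h~[k] * x~[n-k] * T: convolution of complex
    # envelopes, including the factor of 1/2 from the derivation above.
    # The bandpass output is then y(t) = Re[y~(t) * exp(j*2*pi*f0*t)].
    y = [0j] * (len(x_tilde) + len(h_tilde) - 1)
    for n, xs in enumerate(x_tilde):
        for k, hs in enumerate(h_tilde):
            y[n + k] += 0.5 * hs * xs * T
    return y
```

This single complex convolution replaces the four real convolutions mentioned in the chapter summary when both envelopes are stored as complex sequences.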
[1] Throughout this chapter we will usually write mathematical expressions in terms of continuous time t in order to simplify notation. Keep in mind, however, that digital simulations actually process sample values defined by sampling at times t = nT, where T is the sampling period.
[2] exp(jθ) = cos θ + j sin θ.
[3] For example, for an AWGN channel, the pairwise error probability Pr{xi → xj}, which is the probability of transmitting xi(t) and deciding (incorrectly) at the receiver that xj(t) was transmitted, j ≠ i, is
In this expression Yi and Yj are vectors, in signal space, corresponding to yi(t) and yj(t), which are xi(t) and xj(t) referenced to the receiver input, and ‖Yi – Yj‖ represents the Euclidean distance separating yi(t) and yj(t) [1]. Thus, as signal vectors move closer together due to channel conditions or system imperfections, the error probability increases.
[4] Normally we would use a zero-ISI filter, such as a root-raised cosine filter, for these applications to ensure that the error probability does not increase due to the memory induced by the filter. However, high-order Butterworth filters approximate zero-ISI filters, since they are nearly ideal and therefore closely approximate a sin(x)/x impulse response. In addition, they are quickly implemented in MATLAB using built-in functions.
[5] Keep in mind that MATLAB normalizes filter bandwidths to the Nyquist frequency, fN, which is half the sampling frequency fs. In other words, the MATLAB parameter bw
is fbw/fN, where fbw is the filter bandwidth in Hertz. Assume that the sampling frequency is k times the symbol rate fsym (recall the discussion of the simulation sampling frequency from the preceding chapter). Also assume that the ratio of the filter bandwidth to the symbol rate, fbw/fsym, is denoted λ. Using these definitions, the ratio of the filter bandwidth to the Nyquist frequency (the normalized bandwidth used in MATLAB) is
Thus, if the filter bandwidth is equal to the symbol rate, λ = 1. Also, if the simulation sampling frequency is 20 times the symbol rate, k = 20. This leads to a normalized filter bandwidth of 0.1 as used in the simulation.
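The bookkeeping in this footnote reduces to the single formula bw = 2λ/k. A Python one-liner to keep the conversions straight (λ and k as defined above):

```python
def matlab_normalized_bw(lam, k):
    # MATLAB-style normalized bandwidth: bw = f_bw/f_N = f_bw/(f_s/2)
    # = 2*(f_bw/f_sym)/(f_s/f_sym) = 2*lam/k, where lam = f_bw/f_sym
    # and k = f_s/f_sym (samples per symbol).
    return 2.0 * lam / k
```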
[6] The random phase is required for the process to be stationary. Without the random phase, which is equivalent to an arbitrary time reference, the process is cyclostationary.
[7] Recall that an AM (amplitude modulated) signal is defined by
where a is the modulation index and m(t) is the message signal normalized so that |am(t)| ≤ 1 for all t. For this signal the positive portion of the envelope, with the dc term removed, is the message signal to be recovered by the demodulator.