12
ADAPTIVE SYSTEMS

12.1 INTRODUCTION

The ever‐increasing demand for channel capacity and bandwidth has resulted in the widespread use of adaptive signal processing techniques that compensate for the signal interference and distortion that inevitably result under crowded channel conditions. The significant advantages in efficient spectrum utilization, gained by high‐order symbol modulation with spectral containment, have been facilitated by adaptive processing algorithms that compensate for this interference and distortion. The development of adaptive processing for improved communications was jump‐started when wireline telephone networks [1] came under pressure for more capacity and higher data rates, and was well understood and developed by the time wireless communications entered the marketplace.

In the following sections, the mathematical background and algorithms are developed for adaptive systems as they apply to waveform equalization of intersymbol interference (ISI), cancellation of interfering signals, and waveform identification. These three objectives are obtained using subtle alterations in the adaptive processing configurations. The adaptive processing generally uses finite impulse response (FIR) filters with weights that are adaptively adjusted, using the minimum mean‐square error [2] (MMSE) criterion, to minimize the error between the sampled filter output and a reference formed from known or estimated data. With known data, a preamble or training sequence is available to aid in the adaptive acquisition or convergence processing; the processing following the known data is then based on estimates of the received data using a decision‐directed adaptive processing algorithm. After the training interval, the decision error probability is sufficiently low to maintain the MMSE condition. In applications where a training sequence is not available, the adaptive algorithm is referred to as a blind or self‐recovering algorithm; however, using a training sequence significantly reduces the adaptation time.

In the following sections, the orthogonality principle is introduced and applied in the derivation of Wiener's optimum linear estimation filter [3–5]. In Section 12.2, the Wiener estimation filter is described using the orthogonality principle and the MMSE criterion. In Section 12.3, the optimum FIR filter is examined, with the tap weights adaptively estimated using the least mean‐square (LMS) algorithm. The LMS algorithm results in lower implementation complexity than the MMSE algorithm. Section 12.4 considers various forms of equalizers and Section 12.5 discusses adaptive interference cancellation using the LMS algorithm. These adaptive techniques are based on symbol‐by‐symbol processing; an entirely different approach, using the recursive least‐squares (RLS) algorithm, is discussed in Section 12.6. The RLS algorithm converges to the steady‐state condition in considerably less time than the MMSE or LMS algorithms. Case studies involving the application of ISI equalization and narrowband interference cancellation are given in Sections 12.7 and 12.8, and the chapter concludes in Section 12.9 with a case study of the RLS‐adaptive equalizer.

12.1.1 The Orthogonality Principle

The orthogonality principle is the fundamental principle used in the evaluation of the MMSE between a set of parameters and their estimates. The principle applies in the analysis of linear or nonlinear estimation, involving constant or time‐varying, real or complex parameters. However, to simplify the following description, the linear mean‐square estimation (LMSE) of a single, constant, and real parameter is used. The orthogonality principle states that the MMSE results when the estimation error $e = y - \hat{y}$ is orthogonal to the measurement x, that is, $E[e\,x] = 0$ and, under this condition, the MMSE is computed as the expectation $E[e\,y]$.

Consider that a real zero‐mean parameter y is observed as x = y + n, where n represents zero‐mean additive noise, and that the estimate of y is formed using a linear combination of the observations xi, expressed as

(12.1) $\hat{y} = \sum_{i=1}^{N} a_i\,x_i$
The weights ai form a filter that decreases the noise and, thereby, improves the estimate. In this description, the index i simply associates xi with the sampled noisy value of the parameter y. Using (12.1), the mean‐square error (MSE) is expressed as

(12.2) $J = E\left[\left(y - \hat{y}\right)^{2}\right] = E\left[\left(y - \sum_{i=1}^{N} a_i\,x_i\right)^{2}\right]$
To establish the MMSE, the derivative of (12.2) with respect to ai is formed and set to zero, resulting in

(12.3) $\dfrac{\partial J}{\partial a_i} = -2E\left[\left(y - \sum_{k=1}^{N} a_k\,x_k\right)x_i\right] = -2E\left[e\,x_i\right] = 0$
Therefore, based on the last equality in (12.3), the requirement for the MMSE is

(12.4) $E\left[e\,x_i\right] = 0 \quad \forall\,i$
The first equality in (12.2) is expanded as

(12.5) $J = E\left[\left(y - \hat{y}\right)y\right] - E\left[\left(y - \hat{y}\right)\hat{y}\right] = E\left[e\,y\right] - E\left[e\,\hat{y}\right]$

and, upon substituting (12.1) for $\hat{y}$ in the expectation $E[e\,\hat{y}]$, the MMSE error expression is evaluated as

(12.6) $J_{min} = E\left[e\,y\right] - \sum_{i=1}^{N} a_i\,E\left[e\,x_i\right] = E\left[e\,y\right]$
The last equality in (12.6) follows from (12.4) and confirms the requirement that the MMSE occurs when the measurement error and the data are orthogonal, that is, when $E[e\,x_i] = 0$.
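As a numerical illustration of the preceding development, the following short sketch (Python, with illustrative signal and noise parameters) forms the LMSE weights from sample statistics and verifies both the orthogonality condition (12.4) and the MMSE evaluation (12.6):

import numpy as np

rng = np.random.default_rng(1)
trials, N = 100_000, 8                           # Monte Carlo trials, measurements
y = rng.normal(size=(trials, 1))                 # zero-mean parameter y
x = y + 0.5 * rng.normal(size=(trials, N))       # observations x_i = y + n_i

R = x.T @ x / trials                             # sample correlation matrix E[x x^T]
p = x.T @ y / trials                             # cross-correlation vector E[x y]
a = np.linalg.solve(R, p)                        # LMSE weights a_i of (12.1)

e = y - x @ a                                    # estimation error e = y - y_hat
print((e * x).mean(axis=0).round(4))             # E[e x_i] ~ 0, per (12.4)
print((e * y).mean().round(4), (e**2).mean().round(4))  # E[e y] = E[e^2], per (12.6)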

12.2 OPTIMUM FILTERING—WIENER’S SOLUTION

Wiener’s approach to determining the optimum filter is based on the filter that provides the MMSE estimate $\hat{\tilde{y}}(t)$ of the event $\tilde{y}(t)$ given a noisy observation. The solution is based on the application of the orthogonality principle outlined in the introduction. Consider the noisy observation $\tilde{x}(t)$ of the event $\tilde{y}(t)$ expressed as

(12.7) $\tilde{x}(t) = \tilde{y}(t) + \tilde{n}(t)$

with the filtered estimate of $\tilde{y}(t)$ expressed as

(12.8) $\hat{\tilde{y}}(t) = \int_{-\infty}^{\infty} h(\tau)\,\tilde{x}(t-\tau)\,d\tau$

In the following analysis, the stochastic processes are stationary and characterized as complex baseband functions and the filter is linear and time‐invariant. It is desired to determine the optimum linear filter ho(t) that results in the MMSE of the measurement error. Referring to Section 12.1.1, the MMSE occurs when the orthogonality principle applies and, under the above conditions, is expressed as

(12.9) $E\left[\left(\tilde{y}(t) - \hat{\tilde{y}}(t)\right)\tilde{x}^{*}(t-\tau)\right] = 0 \quad \forall\,\tau$
The solution to (12.9) is expressed in terms of the expectations as

(12.10) $E\left[\tilde{y}(t)\,\tilde{x}^{*}(t-\tau)\right] = \int_{-\infty}^{\infty} h_o(\alpha)\,E\left[\tilde{x}(t-\alpha)\,\tilde{x}^{*}(t-\tau)\right]d\alpha$
Upon using the wide‐sense stationary (wss) property and defining the correlation functions $R_{\tilde{y}\tilde{x}}(\tau) = E[\tilde{y}(t)\tilde{x}^{*}(t-\tau)]$ and $R_{\tilde{x}\tilde{x}}(\tau) = E[\tilde{x}(t)\tilde{x}^{*}(t-\tau)]$, (12.10) simplifies to

(12.11) $R_{\tilde{y}\tilde{x}}(\tau) = \int_{-\infty}^{\infty} h_o(\alpha)\,R_{\tilde{x}\tilde{x}}(\tau-\alpha)\,d\alpha$
The right‐hand side (rhs) of (12.11) is recognized as a convolution integral, resulting in

(12.12) $R_{\tilde{y}\tilde{x}}(\tau) = h_o(\tau) \ast R_{\tilde{x}\tilde{x}}(\tau)$

Therefore, the optimum filter is expressed as having the frequency‐domain response

(12.13) $H_o(\omega) = \dfrac{S_{\tilde{y}\tilde{x}}(\omega)}{S_{\tilde{x}\tilde{x}}(\omega)}$

Evaluation of the MMSE corresponding to the optimum filter ho(t) is left as an exercise. The LMSE algorithm can also be applied to the prediction [6] of future values of $\tilde{y}(t)$ from the data $\tilde{x}(t)$.
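Although the MMSE evaluation is left as an exercise, the optimum response (12.13) itself is easily illustrated numerically. The following hedged sketch estimates $S_{\tilde{y}\tilde{x}}(\omega)$ and $S_{\tilde{x}\tilde{x}}(\omega)$ from averaged periodograms and forms $H_o(\omega)$; the signal model, segment length, and noise level are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(2)
nseg, nfft = 400, 256
S_yx = np.zeros(nfft, complex)            # accumulated cross spectrum
S_xx = np.zeros(nfft)                     # accumulated observation spectrum

for _ in range(nseg):
    w = rng.normal(size=nfft)
    y = np.convolve(w, [1.0, 0.6, 0.3], mode="same")   # correlated event y(t)
    x = y + rng.normal(size=nfft)                      # observation x = y + n
    Y, X = np.fft.fft(y), np.fft.fft(x)
    S_yx += Y * np.conj(X)
    S_xx += np.abs(X) ** 2

H_o = S_yx / S_xx                          # (12.13): H_o(w) = S_yx(w)/S_xx(w)
print(np.round(H_o[:4].real, 3))           # ~S_yy/(S_yy + S_nn) < 1 at each bin

At frequencies where the signal spectrum dominates the noise the response approaches unity, and where the noise dominates the filter attenuates, which is the expected Wiener behavior.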

12.3 FINITE IMPULSE RESPONSE‐ADAPTIVE FILTER ESTIMATION

In this section, the Wiener filter is applied to the sampled signal $\tilde{x}_i$, where the index i represents the uniform sampling of $\tilde{x}(t)$ at the increments t = iTs and Ts is the reciprocal of the sampling frequency fs. All of the discrete‐time functions in this section are considered to be complex baseband functions that are within the Nyquist bandwidth. Furthermore, the analysis involves discrete‐time stochastic processes that are ergodic and stationary in the wide sense, that is, the mean value is constant and the correlation and covariance functions depend only on the sample‐time increment Ts. The basic structure of the adaptive filter is shown in Figure 12.1. The filter input is represented by the current sampled complex data $\tilde{x}_j$ and the N − 1 previous samples. If the sampled signal is the output of the demodulator matched filter, then Ts is equal to the modulation symbol duration T; otherwise, Ts may represent a suitable sub‐symbol sampling interval. The unit delay designation z−1 corresponds to the normalized sample delay. For each set of N samples, the filter output $\hat{\tilde{d}}_j$ is computed and compared to a known reference signal, forming the error $\tilde{e}_j$ corresponding to the sample t = jTs. The error is then applied to the weight update function that adjusts all of the weights, resulting in a steady‐state condition that corresponds to the minimum error. The adaptive processing details are discussed in the remainder of this section.

Schematic of an FIR-adaptive filter, featuring the sampled data, the reference, and the weight update.

FIGURE 12.1 FIR‐adaptive filter.

The sampled complex inputs $\tilde{x}_j$ and the filter weights $\tilde{w}_n$ are defined, respectively, as the complex vectors

(12.14) $\tilde{x}_j = \left[\tilde{x}_j, \tilde{x}_{j-1}, \ldots, \tilde{x}_{j-N+1}\right]^{T}$

and

(12.15) $\tilde{w} = \left[\tilde{w}_0, \tilde{w}_1, \ldots, \tilde{w}_{N-1}\right]^{T}$

The reference, or desired response, of the adaptive filter is $\tilde{d}_j$ and the difference between $\tilde{d}_j$ and the estimated response $\hat{\tilde{d}}_j = \tilde{w}^{H}\tilde{x}_j$ is the instantaneous estimation error, expressed as

(12.16) $\tilde{e}_j = \tilde{d}_j - \hat{\tilde{d}}_j = \tilde{d}_j - \tilde{w}^{H}\tilde{x}_j$
The optimum estimation filter corresponds to the MMSE that results when the filter has settled to the steady‐state condition corresponding to the optimum weight vector $\tilde{w}_o$. Based on the orthogonality principle, the MMSE occurs when the error is orthogonal to the data, that is, when

(12.17) $E\left[\tilde{x}_j\,\tilde{e}^{*}_{o,j}\right] = 0$

and, referring to (12.6), the corresponding MMSE is

(12.18) $J_{min} = E\left[\tilde{e}_{o,j}\,\tilde{d}^{*}_j\right]$

Aside from the trivial condition of zero estimation error, the MSE criterion is based on the error defined in (12.16) and is expressed as the expectation

(12.19) $J = E\left[\left|\tilde{e}_j\right|^{2}\right] = E\left[\tilde{e}_j\,\tilde{e}^{*}_j\right]$
Upon expanding (12.19) and distributing the expectation, the MSE is evaluated as

(12.20) $J = E\left[\tilde{e}_j\,\tilde{e}^{*}_j\right] = \sigma_d^{2} - \tilde{w}^{H}\tilde{p} - \tilde{p}^{H}\tilde{w} + \tilde{w}^{H}R\,\tilde{w}$

where $\sigma_d^{2}$ is the variance of the reference, $\tilde{p}$ is the N × 1 cross‐correlation vector between the reference and data, and R is an N × N correlation matrix of the data; $\tilde{p}$ and R are defined, respectively, as

(12.21) $\tilde{p} = E\left[\tilde{x}_j\,\tilde{d}^{*}_j\right]$

and

(12.22) $R = E\left[\tilde{x}_j\,\tilde{x}^{H}_j\right]$
The second equality in (12.20) is established using the following relationships:

(12.23) $E\left[\tilde{d}_j\,\tilde{d}^{*}_j\right] = \sigma_d^{2}$
(12.24) $E\left[\tilde{w}^{H}\tilde{x}_j\,\tilde{d}^{*}_j\right] = \tilde{w}^{H}\tilde{p}$
(12.25) $E\left[\tilde{d}_j\,\tilde{x}^{H}_j\tilde{w}\right] = \tilde{p}^{H}\tilde{w}$

and

(12.26) $E\left[\tilde{w}^{H}\tilde{x}_j\,\tilde{x}^{H}_j\tilde{w}\right] = \tilde{w}^{H}R\,\tilde{w}$

The matrix R possesses unique properties based on the stationarity of the sampled stochastic processes and is referred to as a Hermitian matrix. The diagonal elements $r_{ii} = E[|\tilde{x}_i|^{2}] = \sigma_x^{2}$ are real and the trace of R is $tr[R] = N\sigma_x^{2}$. Indexing the elements of the row and column vectors $\tilde{x}^{H}_j$ and $\tilde{x}_j$ as xi and xj respectively, the elements $r_{ij} = E[\tilde{x}_i\,\tilde{x}^{*}_j] = r_k$ depend only on the lag k = j − i. For k > 0 (k < 0), the matrix elements correspond to positive (negative) correlation lags that lie above (below) the matrix diagonal. Furthermore, the elements of the Hermitian matrix correspond to the condition $r_{-k} = r^{*}_k$, so the elements above and below the principal diagonal possess complex conjugate symmetry.
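The following small sketch illustrates the Hermitian Toeplitz structure just described, using arbitrary (hypothetical) correlation lag values:

import numpy as np

lags = {0: 1.0 + 0.0j, 1: 0.5 - 0.2j, 2: 0.1 + 0.1j}   # assumed lags r_0, r_1, r_2
N = 3
R = np.array([[lags[j - i] if j >= i else np.conj(lags[i - j])
               for j in range(N)] for i in range(N)])   # r_{-k} = r_k* below diagonal

print(np.allclose(R, R.conj().T))            # True: complex conjugate symmetry
print(np.trace(R).real)                      # N r(0) = 3.0
print(np.all(np.linalg.eigvalsh(R) > 0))     # eigenvalues are real and positive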

The MSE, defined by (12.20), characterizes a concave upward surface with a minimum value corresponding to the derivative $\partial J/\partial\tilde{w}$ equal to zero. The derivative is defined as the gradient vector $\nabla J$ and is evaluated as

(12.27) $\nabla J = \dfrac{\partial J}{\partial\tilde{w}^{*}} = -2E\left[\tilde{x}_j\,\tilde{e}^{*}_j\right] = -2\tilde{p} + 2R\,\tilde{w}$
The solution to the derivatives in (12.27), indicated by the third equality, is found in Section 1.12.5. Upon evaluation of (12.27) with $\nabla J = 0$, the optimum filter weights are expressed as

(12.28) $\tilde{w}_o = R^{-1}\,\tilde{p}$

From the evaluation of (12.20) under the optimum condition $\tilde{w} = \tilde{w}_o$, the MMSE is expressed as

(12.29) $J_{min} = \sigma_d^{2} - \tilde{p}^{H}R^{-1}\tilde{p}$
Based on the method of steepest‐descent [7], the filter tap weights are altered to reduce, or minimize, the slope of the N‐dimensional concave upward surface characterized by the weights. At each decision sample jTs, the weight update is based on the first‐order recursive filter response, expressed as

(12.30) $\tilde{w}_{j+1} = \tilde{w}_j - u\,\nabla J_j = \tilde{w}_j + 2u\left(\tilde{p} - R\,\tilde{w}_j\right)$
The second equality in (12.30) results from (12.27) and the step‐size parameter u is used to provide stability and control the rate of convergence. As the expectation of the gradient vector approaches zero, the filter approaches the optimum MSE estimate of the reference $\tilde{d}_j$. The slope of the concave upward surface about the minimum point forms a discriminator S‐curve that ensures stable tracking about the optimum tap weights. The updating is identical for each tap weight and the required processing for the n‐th tap weight is determined from the corresponding elements at sample j of the column vectors $\tilde{p}$ and $R\tilde{w}_j$ in (12.30); with the subscript n,j representing the n‐th row at sample j, the result is expressed as

(12.31) $\tilde{w}_{n,j+1} = \tilde{w}_{n,j} + 2u\left(\tilde{p}_{n,j} - \left[R\,\tilde{w}_j\right]_{n,j}\right)$
The complex scalars $\tilde{p}_{n,j}$ and $[R\tilde{w}_j]_{n,j}$ are evaluated as

(12.32) $\tilde{p}_{n,j} = E\left[\tilde{x}_{j-n}\,\tilde{d}^{*}_j\right]$

and

(12.33) $\left[R\,\tilde{w}_j\right]_{n,j} = \sum_{i=0}^{N-1} E\left[\tilde{x}_{j-n}\,\tilde{x}^{*}_{j-i}\right]\tilde{w}_{i,j}$

where the Kronecker delta function $\partial\tilde{w}^{*}_i/\partial\tilde{w}^{*}_n = \delta_{ni}$ arises from the evaluation of the gradient vector in (12.27) and the variance in (12.33), corresponding to the term i = n, is simply $\sigma_x^{2}$. Substituting these results into (12.31) and recognizing that $\tilde{p}_{n,j} - [R\tilde{w}_j]_{n,j} = E[\tilde{x}_{j-n}\,\tilde{e}^{*}_j]$, the weight updating for the n‐th tap weight and sample j is expressed as

(12.34) $\tilde{w}_{n,j+1} = \tilde{w}_{n,j} + 2u\,E\left[\tilde{x}_{j-n}\,\tilde{e}^{*}_j\right]$

The updating of the weight $\tilde{w}_{n,j}$ at sample j given the previous weight $\tilde{w}_{n,j-1}$ at sample j − 1 is shown in Figure 12.2; the asterisk * denotes conjugation. The filter is typically initialized using $\tilde{w}_{n,0}$ = 0 + j0; however, if feasible, the best guess of the steady‐state solution should be used.
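A minimal sketch of the steepest‐descent recursion (12.30), using an assumed real two‐tap correlation matrix and cross‐correlation vector, confirms convergence to the optimum weights of (12.28):

import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])              # assumed data correlation matrix
p = np.array([0.7, 0.1])                # assumed cross-correlation vector
u = 0.1                                 # step size, satisfying (12.36): u < 1/lambda_max
w = np.zeros(2)                         # weights initialized to zero

for j in range(200):
    w = w + 2 * u * (p - R @ w)         # steepest-descent update (12.30)

print(w, np.linalg.solve(R, p))         # both ~ w_o = R^{-1} p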

Schematic of an MSE-adaptive filter weight update algorithm. It directs towards the summing bus.

FIGURE 12.2 MSE‐adaptive filter weight update algorithm.

The stability and convergence properties of the FIR‐adaptive filter have been analyzed by Widrow [8], Ungerboeck [9], Gitlin and Weinstein [10], and Haykin [5]. Their analyses focus on the eigenvalues of the correlation matrix R and, based on the method of steepest‐descent, the adaptive filter is stable and converges to the steady‐state condition, regardless of initial conditions, if all of the eigenvalues $\lambda_i$ of R satisfy the condition

(12.35) $\left|1 - 2u\,\lambda_i\right| < 1 \quad \forall\,i$
Equation (12.35) leads to the conditions $0 < u < 1/\lambda_i\ \forall\,i$ and, recognizing that the eigenvalues for the Hermitian matrix R are all positive real values, upon defining the maximum eigenvalue as λmax, the condition on u is

(12.36) $0 < u < \dfrac{1}{\lambda_{max}}$
Haykin [11] examines the transient behavior of the steepest‐descent algorithm for a two‐tap FIR filter with varying eigenvalue spread and fixed step‐size u, and with varying step‐size u and fixed eigenvalue spread. Haykin’s salient observations are:

  • The steepest‐descent algorithm converges most rapidly when the eigenvalues λ1 and λ2 are equal or when the initial conditions are chosen properly.
  • For a fixed step‐size u, the correlation matrix R becomes more ill‐conditioned as the eigenvalue spread increases.
  • For small step‐sizes u, the transient behavior of the steepest‐descent algorithm becomes overdamped. For large step‐sizes, with 2u approaching 2/λmax, the transient behavior becomes underdamped and exhibits oscillations.

Haykin concludes that the transient behavior of the steepest‐descent algorithm is highly sensitive to the step‐size parameter and the eigenvalue spread. In the selection of the adaptive processing gain, a performance tradeoff exists between the variance of the tap weights and the responsiveness of the adaptive filter to the channel dynamics. For example, decreasing the gain decreases the tap‐weight variance but slows the response to the received signal dynamics, and increasing the gain has the opposite effects. Gitlin and Weinstein have investigated the adaptive equalizer MSE as it relates to the tap weight precision, the number of taps, the channel characteristics, and the digital resolution. Gitlin and Weinstein conclude that the number of taps should be kept to a minimum consistent with the steady‐state performance objectives and that the weight precision should be ten to twelve bits. Insufficient processing precision will result in tap accumulation round‐off errors that dominate the system performance as the number of filter taps is increased. In general, the gain should be no greater than necessary to track the time‐varying fading as determined by the signal decorrelation time, and the length or span of the equalizer should not exceed the multipath delay spread; these topics are also discussed in Chapters 18 and 20. Adaptive filtering in the frequency domain is discussed by Dentino, McCool, and Widrow [12] and by Bershad and Feintuch [13].

12.3.1 Least Mean‐Square Algorithm

The weight updating in the preceding section, using the Wiener FIR‐adaptive filter, is problematic in that the algorithm depends on knowledge of the correlation matrix R. To circumvent the sensitivity of the weight update algorithm to the correlation matrix, Widrow and Hoff [14] developed the LMS algorithm using real data. The LMS algorithm was subsequently modified by Widrow, McCool, and Ball [15] for complex data. The LMS algorithm is an efficient method of updating the FIR‐adaptive filter tap weights through the entire process of adaptation and estimation. The steepest‐descent algorithm is used, in that the filter tap weights are updated using a single‐pole recursive filter that reduces the slope, or gradient vector $\nabla J_j$, as given by the first equality in (12.30). However, in this case, the N‐dimensional instantaneous square‐error is defined as

(12.37) $J_j = \left|\tilde{e}_j\right|^{2} = \tilde{e}_j\,\tilde{e}^{*}_j$

and the gradient vector is evaluated as

(12.38) $\nabla J_j = \dfrac{\partial J_j}{\partial\tilde{w}^{*}} = \dfrac{\partial\left(\tilde{e}_j\,\tilde{e}^{*}_j\right)}{\partial\tilde{w}^{*}} = -2\,\tilde{x}_j\,\tilde{e}^{*}_j$
The third equality in (12.38) is established using the rules for complex matrix differentiation discussed in Section 1.12.4. From (12.38), the filter tap weights are updated as

(12.39) $\tilde{w}_{j+1} = \tilde{w}_j - u\,\nabla J_j = \tilde{w}_j + 2u\,\tilde{x}_j\,\tilde{e}^{*}_j$
The LMS tap weight update algorithm is identical for each tap weight and is shown in terms of the n‐th complex weight $\tilde{w}_{n,j}$ in Figure 12.3. Although (12.39) is a convenient form for expressing the tap weight updating, an alternate expression is obtained by substituting (12.16) for $\tilde{e}_j$ and rearranging the terms as in (12.34) to obtain the algorithm for the n‐th tap weight update expressed as

(12.40) $\tilde{w}_{n,j+1} = \tilde{w}_{n,j} + 2u\,\tilde{x}_{j-n}\,\tilde{e}^{*}_j$

The tap weight update algorithm processing for the LMS feedforward equalizer and canceler, depicted respectively in Figures 12.6 and 12.8, is shown in Figure 12.3a. Figure 12.3b shows the weight update processing for the peak‐distortion zero‐forcing equalizer (ZFE) depicted in Figure 12.5.

2 Schematics of adaptive filter weight update algorithms. They each feature weight processing for LMS algorithm (left) and zero-forcing algorithm (right).

FIGURE 12.3 Adaptive filter weight update algorithms.

In the remainder of this section, the stability and convergence properties of the FIR‐adaptive filter using the LMS algorithm are summarized based on the work by Haykin [5]. As before, the analysis focuses on the eigenvalues of the correlation matrix R and, based on the method of steepest‐descent, the adaptive filter is stable and converges to the steady‐state condition if the step‐size satisfies the condition

(12.41) $0 < u < \dfrac{1}{\sum_{i=1}^{N}\lambda_i} = \dfrac{1}{tr[R]}$
The sum of the eigenvalues represents the mean‐square value, or total power, of the filter taps. Since the sampled input represents a stationary process, the sum of the eigenvalues equals the trace of the Hermitian matrix R, composed of the zero‐lag correlation values, that is, $\sum_i\lambda_i = tr[R] = N\,r(0)$, and the left‐hand‐side (lhs) of (12.41) follows from the positive real values $\lambda_i > 0$.

The misadjustment parameter [16] M is defined as the ratio of the average excess MSE $J_{ex}(\infty)$ to the MMSE Jmin, expressed in (12.29), that is,

(12.42) $M = \dfrac{J_{ex}(\infty)}{J_{min}} \cong u\,tr[R]$
The parameter M corresponds to the steady‐state condition evaluated as expressed in the second equality in (12.42) [16].

The average excess MSE is defined as

(12.43) $J_{ex}(\infty) = J(\infty) - J_{min} = \lim_{j\to\infty} E\left[\varepsilon^{H}(j)\,R\,\varepsilon(j)\right]$

where J(∞) is the steady‐state mean‐square convergence error of the LMS algorithm, which exceeds Jmin. In terms of the convergence samples j and the tap weight vector error ε(j) = w(j) − wo, the LMS mean‐square convergence error is $J(j) = J_{min} + E\left[\varepsilon^{H}(j)\,R\,\varepsilon(j)\right]$, which leads to the second equality in (12.43).

Haykin’s salient performance observations for the LMS algorithm are [17]:

  • The adaptive filter converges slowly for small values of u with a correspondingly low misadjustment. Conversely, for larger values of u, the filter converges rapidly with a correspondingly large misadjustment.
  • The convergence of the average MSE, as guaranteed by (12.41), also guarantees the convergence of the mean tap weight values governed by (12.36), since $\sum_i\lambda_i \ge \lambda_{max}$.
  • For increasing eigenvalue spreads, the convergence of the excess MSE becomes slower and the average tap weight convergence is limited by the smallest eigenvalues.

The simplicity of the LMS algorithm stems from the fact that it is based on the instantaneous squared error and does not require the expectation associated with the evaluation of the correlation functions or inversion of the correlation matrix. Considerable attention has been given to the theoretical evaluation and measurement of the symbol‐error probability, with channel‐induced ISI, using the LMS algorithm [18–22].
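The following sketch applies the complex LMS update (12.39) to a simple system‐identification example; the unknown response h, the input statistics, and the step size are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(3)
h = np.array([0.8 + 0.2j, 0.4 - 0.1j, 0.2 + 0.0j])   # unknown system to identify
N, u = 3, 0.02                                       # filter taps and step size
w = np.zeros(N, complex)
xv = np.zeros(N, complex)                            # [x_j, x_{j-1}, x_{j-2}]

for _ in range(5000):
    xn = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)            # unit-power input
    xv = np.roll(xv, 1); xv[0] = xn
    d = np.vdot(h, xv) + 0.01 * (rng.normal() + 1j * rng.normal())  # reference h^H x + noise
    e = d - np.vdot(w, xv)               # instantaneous error e_j = d_j - w^H x_j
    w = w + 2 * u * xv * np.conj(e)      # complex LMS update (12.39)

print(np.round(w, 2))                    # w converges to h

No correlation matrix is formed or inverted; only the instantaneous error and the current data vector are required, which is the simplicity noted above.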

The adaptive filters are characterized as having a primary and a reference input that determine the principal application of the adaptive processing. The primary input is associated with the received signal. For example, equalization uses the received signal as the primary input into the adaptive filter with the reference input derived from a locally generated replica of known data or from data estimates derived from the received data. The reference data are often derived from a preamble or training sequence associated with the received data. With noise and interference cancellation, the reference input is a correlated replica of the interfering signal that is applied to the input of the adaptive filter. For sufficiently high interference‐to‐signal ratios (ISRs), the reference input may also be derived directly from the received signal. System or waveform identification is accomplished by applying the reference signal to the adaptive filter, possibly using multiple references that are suspected to be correlated with the primary received signal.

12.4 INTERSYMBOL INTERFERENCE AND MULTIPATH EQUALIZATION

In this section, several commonly used equalizers are discussed with case studies of their performance presented in Sections 12.7–12.9.

12.4.1 Zero‐Forcing Equalizer

The ZFE is among the first equalizers to be analyzed and implemented [23]. The equalizer operates on the output of the channel filter that introduces ISI distortion on the transmitted data‐modulated symbols denoted by the baseband signal samples $\tilde{d}_i$. The channel filter is shown in Figure 12.4 with the output expressed as the convolutional sum

(12.44) $\tilde{x}_j = \sum_{i=0}^{L-1}\tilde{h}_i\,\tilde{d}_{j-i}$

The peak distortion, at the output of the channel, is defined in terms of the channel impulse response as

(12.45) $D_o = \dfrac{1}{\left|\tilde{h}_m\right|}\sum_{i\ne m}\left|\tilde{h}_i\right|$

where $\tilde{h}_m$ is the maximum filter sample with the filter normalized such that $|\tilde{h}_m|$ = 1. The peak distortion is a measure of the noise‐free dispersion about the data‐modulated symbol rest‐points resulting from the channel ISI. The dispersion between adjacent rest‐points characterizes the eye‐opening and when Do < 1 the eye is said to be open [24].

Schematic of channel filter, featuring sampled signal and filtered signal.

FIGURE 12.4 Channel filter.

The ZFE, shown in Figure 12.5, follows the channel filter and attempts to minimize the loss from the channel ISI. The equalizer output is evaluated as the convolutional sum

(12.46) $\tilde{y}_j = \sum_{n=-N}^{N}\tilde{w}_n\,\tilde{x}_{j-n}$

The peak distortion at the output of the equalizer is defined in terms of the impulse response of the channel cascaded with the equalizer and is expressed as

(12.47) $D = \dfrac{1}{\left|\tilde{q}_m\right|}\sum_{j\ne m}\left|\tilde{q}_j\right|$

As indicated in Figures 12.4 and 12.5, the channel has L taps and the equalizer has 2N + 1 taps, so the extent or range of the non‐zero sampled impulse response $\tilde{q}_j$ is −N to N + L − 1 for a total of 2N + L samples. The cascaded impulse response in (12.47) is sampled at the time intervals jTs and computed as

(12.48) $\tilde{q}_j = \sum_{n=-N}^{N}\tilde{w}_n\,\tilde{h}_{j-n}$

The peak distortion, given by (12.47), can be made arbitrarily small at the sampled points as the number of equalizer taps approaches infinity and, although Lucky [25] has shown that D is a convex upward function of the equalizer coefficients, there is no assurance that the tap weights will converge for a finite number of taps. However, a solution does exist for a finite number of taps if the equalizer input peak distortion given by (12.45) satisfies the condition Do < 1, but the distortion or decision error cannot be made arbitrarily small. The decision error at the output of the equalizer FIR filter is computed as

(12.49) $\tilde{e}_j = \hat{\tilde{d}}_j - \tilde{y}_j$

Using the data estimate $\hat{\tilde{d}}_j$ results in a decision‐directed equalizer algorithm and using the known data $\tilde{d}_j$, as with a preamble or training data sequence, results in a data‐directed equalizer algorithm. The forward indexing of the data samples and the backward indexing of the tap weights are related through the convolutional sum in (12.46). The weight update processing indicated in Figure 12.5 is based on the algorithm shown in Figure 12.3b.

Schematic of the zero-forcing equalizer featuring the sampled data, the references, and the resulting decision directed and the data directed.

FIGURE 12.5 Zero‐forcing equalizer.

12.4.2 Linear Feedforward Equalizer

The linear feedforward equalizer (LFFE) using the LMS tap weight update algorithm is shown in Figure 12.6. The LMS tap weight processing is indicated in Figure 12.3a. The complex tap weights are all initialized to zero and, as the sampled input data makes its way through the FIR filter, the complex output of the summing bus is applied to the decision device to obtain the data decision estimate $\hat{\tilde{d}}_j$ that is used to form the decision error $\tilde{e}_j$. The data decision estimate is based on a complex limiter that computes the real and imaginary parts of $\hat{\tilde{d}}_j$ based on the closest received waveform modulation rest states to the complex summation $\tilde{y}_j$. For example, with binary phase shift keying (BPSK), if $\mathrm{Re}(\tilde{y}_j) \ge 0$ then $\hat{d}_j = 1$; otherwise $\hat{d}_j = -1$. Under the input signal condition of wide‐sense stationarity, the mean value of the decision error will approach zero as the filter weights are adjusted during the adaptation processing. The decision estimate $\hat{\tilde{d}}_j$ is subject to signal‐to‐noise ratio‐dependent decision errors; however, when a preamble is used, the known preamble data $\tilde{d}_j$ are used to reduce the learning time.
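A hedged sketch of this LFFE processing for real BPSK samples follows; the minimum‐phase channel, noise level, and gains are illustrative and are not the Section 12.7 case‐study values:

import numpy as np

rng = np.random.default_rng(4)
h = np.array([1.0, 0.4, 0.2])            # illustrative minimum-phase ISI channel
N, u, ntrain = 9, 0.002, 400             # equalizer taps, step size, training bits
w = np.zeros(N)
xv = np.zeros(N)                          # equalizer delay line
dch = np.zeros(len(h))                    # channel delay line
errors, ndata = 0, 20000

for j in range(ntrain + ndata):
    d = rng.choice([-1.0, 1.0])                       # BPSK symbol
    dch = np.roll(dch, 1); dch[0] = d
    xv = np.roll(xv, 1); xv[0] = h @ dch + 0.1 * rng.normal()
    y = w @ xv                                        # summing-bus output
    d_hat = 1.0 if y >= 0 else -1.0                   # limiter decision
    ref = d if j < ntrain else d_hat                  # training, then decision-directed
    e = ref - y
    w = w + 2 * u * e * xv                            # LMS tap update (12.40)
    if j >= ntrain:
        errors += int(d_hat != d)

print("decision errors:", errors, "of", ndata)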

Schematic of the LMS linear feedforward equalizer. It features sampled data, summing bus, the references, and the resulting decision directed.

FIGURE 12.6 LMS linear feedforward equalizer.

The performance of the LFFE is examined in the case study in Section 12.7. In the study, the recovery of the bit‐error performance loss resulting from ISI is examined for narrowband filtering of a BPSK‐modulated waveform.

12.4.3 Nonlinear Decision Feedback Equalizer

The decision feedback equalizer (DFE) [26, 27], shown in Figure 12.7, augments the LFFE symbol equalizer by forming the weighted linear combination of the previous symbols and reducing their contributions to the ISI. The DFE processing is equivalent to that involving linear prediction as embodied in the forward linear prediction (FLP), backward linear prediction (BLP), and forward–backward linear prediction (FBLP) algorithms described by Haykin [28]. Belfiore and Park [29] and Proakis [30] also provide detailed analyses and compare the implementation and performance of the ZFE, LFFE, and DFE implementations. The DFE is used to correct multipath interference and signal distortion resulting from frequency‐selective fading [31] that is prevalent in wideband direct‐sequence spread‐spectrum (DSSS) applications. Lee and Hill [32] discuss the application of the DFE in terms of maximum‐likelihood sequence estimation (MLSE).

Schematic of the LMS nonlinear decision feedback equalizer. It features the sampled data, the summing bus, the data estimate, and the resulting decision directed.

FIGURE 12.7 LMS nonlinear decision feedback equalizer.

Proakis shows that, for moderate to severe multipath profiles, there is about a 2 dB loss in Eb/No performance between using known data decisions, as would occur during the reception of a learning or training data sequence, and using the locally detected data estimates. When using the LFFE, a severe multipath profile results in bit‐error probabilities that asymptotically approach an irreducible error floor with increasing signal‐to‐noise ratio; however, with the DFE equalizer the bit‐error probabilities continue to decrease. These results are demonstrated using symbol‐spaced equalizer (SSE) taps with an optimally sampled matched filter that is matched to the waveform and channel. Typically the channel is unknown, so using the fractionally‐spaced equalizer (FSE) taps, as described in the following section, is necessary. The performance of the DFE is sensitive to the detected data because each decision error is shifted through the entire feedback path of the FIR filter, so the selection of the step‐size gain is a particularly important consideration.
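The following minimal sketch illustrates the DFE structure of Figure 12.7 in training mode: a feedforward section operates on the received samples while a feedback section subtracts the weighted ISI of past decisions. The channel, tap counts, and step size are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(5)
h = np.array([1.0, 0.5, 0.3])            # channel with trailing ISI
Nf, Nb, u = 5, 3, 0.005                  # feedforward taps, feedback taps, step size
wf, wb = np.zeros(Nf), np.zeros(Nb)
xv, fb = np.zeros(Nf), np.zeros(Nb)
dch = np.zeros(len(h))

for j in range(30000):
    d = rng.choice([-1.0, 1.0])
    dch = np.roll(dch, 1); dch[0] = d
    xv = np.roll(xv, 1); xv[0] = h @ dch + 0.05 * rng.normal()
    y = wf @ xv - wb @ fb                # feedforward minus weighted past decisions
    d_hat = 1.0 if y >= 0 else -1.0
    e = d - y                            # known (training) data error
    wf += 2 * u * e * xv                 # LMS update of the feedforward section
    wb -= 2 * u * e * fb                 # LMS update of the feedback section
    fb = np.roll(fb, 1); fb[0] = d_hat   # shift the decision into the feedback line

print(np.round(wf, 2), np.round(wb, 2))

Because detected decisions fill the feedback line, a single decision error propagates through the feedback section, which is the error-propagation sensitivity noted above.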

12.4.4 Fractionally‐Spaced Equalizers

In the preceding examples, the equalizer sample delay may correspond to the symbol interval of T seconds or to a sample interval of $T_s = T/N_s$ seconds, where Ns is the number of samples‐per‐symbol. The SSE must operate on the demodulator matched filter samples at t = jT and only corrects for distortion aliased about the frequency 1/2T; it is also sensitive to the demodulator sampling delay (Tε) relative to the received symbols. On the other hand, the FSE with sample delay Ts will compensate for the sample delay error Tε and the channel distortion within the sampled bandwidth of 1/2Ts. In addition to the channel distortion compensation, the FSE provides the symbol matched filtering required for the optimum detection of the received signal when sampled at the optimum sampling time [33–36].

12.4.5 Blind or Self‐Recovering Equalizers

In many applications, it is desirable for the equalizer to adapt to the distorted received signal without the benefit of a preamble or special training sequence. Equalizers that function in this manner are referred to as blind or self‐recovering equalizers [37–40]. This is a major undertaking in that symbol timing and carrier phase must be resolved simultaneously with the equalizer weight adaptation. Godard [41] discusses an approach for equalizer adaptation independent of the carrier phase and the modulated waveform symbol constellation. Since this approach does not require demodulator data estimates for equalization, the carrier acquisition and tracking must be performed on the equalizer output.

12.5 INTERFERENCE AND NOISE CANCELLATION

The adaptive filter, used as an interference signal canceler, has many applications as discussed by Widrow et al. [42] in their comprehensive paper on the subject. The adaptive canceler requires two inputs, usually identified as the primary signal and the reference signal. The primary input to the system is the desired signal corrupted by an additive interfering signal and the reference input is a correlated version of the interfering signal, ideally derived from the source of the interference. The reference input may be generated internally, as in the case of decision‐directed applications, or obtained locally, as in the case of nearby interfering signal and noise sources. The applications are nearly endless and include the broad subject of interference cancellation involving antenna sidelobes, adaptive antenna arrays, electric motors, power lines, speech, and electrocardiography.

Figure 12.8 shows the general configuration of the LMS‐adaptive canceler with the reference input passing through the FIR filter and subtracted from the primary signal, forming the estimation error that is minimized by the recursive weight update processing. The canceler performance is evaluated in the case study of Section 12.8 where the primary input is a BPSK‐modulated signal corrupted by a continuous wave (CW) interfering signal. In this case, the canceler reduces the interfering signal and the performance measure is the bit‐error probability based on the optimally sampled output of the matched filter.
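A hedged sketch of this canceler configuration follows; the tone frequency, amplitudes, phases, and step size are illustrative. The FIR filter synthesizes the amplitude and phase shift required to match the reference tone to the tone in the primary input, and the error output is the cleaned signal:

import numpy as np

rng = np.random.default_rng(6)
n, N, u = 40000, 4, 1e-4                       # samples, canceler taps, step size
t = np.arange(n)
f0 = 0.01                                      # tone frequency, cycles/sample
s = rng.choice([-1.0, 1.0], n)                 # desired BPSK samples
primary = s + 3.0 * np.cos(2 * np.pi * f0 * t) + 0.1 * rng.normal(size=n)
ref = np.cos(2 * np.pi * f0 * t + 1.0)         # correlated reference tone

w, rv = np.zeros(N), np.zeros(N)
out = np.empty(n)
for j in range(n):
    rv = np.roll(rv, 1); rv[0] = ref[j]
    e = primary[j] - w @ rv                    # canceler output (error)
    w += 2 * u * e * rv                        # LMS weight update
    out[j] = e

print(np.var(out[-5000:] - s[-5000:]).round(3))  # residual interference-plus-noise power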

Schematic of the LMS-adaptive canceler. It features the primary input, reference input, the summing bus, the matched filter, and the Rx data.

FIGURE 12.8 LMS‐adaptive canceler.

12.6 RECURSIVE LEAST‐SQUARES (RLS) EQUALIZER

The adaptive RLS equalizer [43] algorithm processes data block‐by‐block to update the filter weights and results in faster convergence than the previously discussed equalizers. The implementation of the RLS‐adaptive equalizer is shown in Figure 12.9 and the following description of the processing follows the work of Haykin [44]. This configuration uses a known training sequence $\tilde{d}_j$ which is stored in a separate array containing the M most current values. The values of $\tilde{d}_j$ can be generated locally at the receiver without the need for storage. During reception of the message data, the detected data estimates $\hat{\tilde{d}}_j$ are used, resulting in a decision‐directed equalizer.

Schematic of the RLS-adaptive equalizer. It features the sampled data, the summing bus, the RLS weight processing, the data detection, and the resulting decision directed.

FIGURE 12.9 RLS‐adaptive equalizer.

Since the RLS algorithm processes a block of data, it is convenient to characterize the filter tap values and weights in terms of the vectors $\tilde{x}_j$ and $\tilde{w}_j$, respectively. The number of FIR filter taps (N) is selected to span the duration of the expected ISI of the received signal. The value of M determines the relative location of the reference signal and is typically about N/2; however, if the duration of the pre‐ and post‐symbol ISI is not equal, then M must be selected proportionately to encompass all of the ISI samples. The computations also require computing a gain vector $\tilde{k}_j$ and the N × N matrix C = Φ−1, where Φ is the N × N correlation matrix defined as

(12.50) $\Phi_j = \sum_{i=1}^{j}\lambda^{\,j-i}\,\tilde{x}_i\,\tilde{x}^{H}_i$

This definition of the correlation matrix differs from the correlation matrix R defined by (12.22) in that it includes the exponential weighting factor $\lambda^{j-i}$, where λ < 1 is a positive constant close to unity. The application of the scalar parameter λ forms a recursive‐adaptive infinite impulse response (IIR) filter that increasingly diminishes the influence of the older data blocks with decreasing values of λ. From a filtering point of view, λ controls the bandwidth of the recursive tap adjustments, influences the convergence time of the weight processing, and determines the responsiveness to the channel dynamics. Conventional inversion of the correlation matrix is processing intensive; however, to obtain the recursive solution for the tap weight updating, the matrix C is computed based on the matrix inversion lemma discussed by Haykin. These quantities are initialized as indicated in Table 12.1. The vector $\tilde{w}_{j-1}$ and matrix Cj−1 provide auxiliary data storage for computing the recursively updated values and, upon completion, the contents of the updated arrays are transferred to these arrays in preparation for the next iteration.

TABLE 12.1 Parameter Initializations

Parameter        Initial Value                               Comments
λ                0.99                                        Recursive scalar coefficient (typically 0.95–0.999)
$\tilde{w}_{j-1}$   (0, 0)                                   Complex weight vector
$C_{j-1}$        Diagonal: (δ−1, 0); off‐diagonal: (0, 0)    Elements of complex matrix $C_{j-1} = \delta^{-1}I$ (small positive constant δ = 0.005)

After initialization, the adaptive processing computations are described as follows. Upon reception of each received signal sample, the FIR filter output is computed as

(12.51) $\hat{\tilde{d}}_j = \tilde{w}^{H}_{j-1}\,\tilde{x}_j$
The complex output of (12.51) is used for data detection and for updating the adaptive filter weights for the next iteration. In computing the new weight values, the estimation error is determined as

(12.52) $\tilde{e}_j = \tilde{d}_j - \hat{\tilde{d}}_j = \tilde{d}_j - \tilde{w}^{H}_{j-1}\,\tilde{x}_j$
The complex error is input to the weight processing function where the following operations are performed. Starting with the inverse correlation matrix, the gain vector is computed as

(12.53) $\tilde{k}_j = \dfrac{\tilde{\pi}_j}{u_j}$

where the vector $\tilde{\pi}_j$ and scalar $u_j$ are computed, respectively, as $\tilde{\pi}_j = C_{j-1}\tilde{x}_j$ and $u_j = \lambda + \tilde{x}^{H}_j\,\tilde{\pi}_j$. Using (12.52) and (12.53), the FIR filter taps are updated as

(12.54) $\tilde{w}_j = \tilde{w}_{j-1} + \tilde{k}_j\,\tilde{e}^{*}_j$

The final step involves updating the matrix inverse for the next iteration and the computation for this step is expressed as

(12.55) $C_j = \dfrac{1}{\lambda}\left[C_{j-1} - \tilde{k}_j\,\tilde{x}^{H}_j\,C_{j-1}\right]$

In preparation for the next iteration, the updated weights and inverse correlation matrix are transferred as $\tilde{w}_{j-1} \leftarrow \tilde{w}_j$ and $C_{j-1} \leftarrow C_j$. Following these computations, the processing is repeated with the next received block of samples until receipt of the communication message is completed. The message must be prefixed with a training data sequence and possibly periodically repeated training sequences throughout the entire message. The inclusion and frequency of the periodic training sequences depend on the channel and system dynamics. As mentioned above, the training sequence reference bit must be replaced with the M‐th tap value of the stored data upon declaring acquisition. With periodic training sequences, the equalizer must switch back‐and‐forth between the training sequence and the message data. A case study demonstrating the performance of the RLS equalizer is given in Section 12.9.
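Collecting (12.51)–(12.55), the following sketch implements one RLS training run in the standard matrix‐inversion‐lemma form; the channel, reference delay, noise level, and run length are illustrative assumptions rather than the case‐study values:

import numpy as np

rng = np.random.default_rng(7)
h = np.array([0.3, 1.0, 0.3])             # illustrative symmetric ISI channel
N, lam, delta = 11, 0.99, 0.005           # taps, weighting factor, init constant
w = np.zeros(N, complex)
C = np.eye(N) / delta                     # C = Phi^{-1}, initialized to (1/delta) I
xv = np.zeros(N, complex)
dch = np.zeros(len(h))                    # channel delay line
D = N // 2 + 1                            # reference delay: mid-filter plus channel peak
dref = np.zeros(D + 1)                    # stored training symbols

for j in range(600):
    d = rng.choice([-1.0, 1.0])           # known training symbol
    dch = np.roll(dch, 1); dch[0] = d
    dref = np.roll(dref, 1); dref[0] = d
    xv = np.roll(xv, 1); xv[0] = h @ dch + 0.05 * rng.normal()
    y = np.vdot(w, xv)                    # (12.51): filter output w^H x
    e = dref[D] - y                       # (12.52): estimation error
    pi = C @ xv                           # intermediate vector C x
    k = pi / (lam + np.vdot(xv, pi).real) # (12.53): gain vector
    w = w + k * np.conj(e)                # (12.54): tap weight update
    C = (C - np.outer(k, xv.conj() @ C)) / lam   # (12.55): inverse correlation update

print(round(abs(e), 4))                   # error settles near the noise level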

12.7 CASE STUDY: LMS LINEAR FEEDFORWARD EQUALIZATION

In this case study, the LMS algorithm is used with the LFFE to reduce the ISI loss of a coherently detected BPSK‐modulated waveform. The performance with ISI is examined for three 0.4 dB ripple Chebyshev filters. Two of the filters have 6‐poles with respective normalized bandwidths BT = 1.5, 2.0; the phase response of these filters corresponds to the intrinsic Chebyshev filter phase function. The third filter has 4‐poles, a normalized bandwidth of BT = 3.0, and includes quadratic phase equalization with 6.5° at the band edges. The filtered received signal spectrums are shown in Figure 12.10 for the three filters.

3 Graphs of magnitude over normalized frequency for filtered BPSK-modulated spectrums. It features plots for the unfiltered and the filtered.

FIGURE 12.10 Filtered BPSK‐modulated spectrums.

The bit‐error performance results, shown in Figure 12.11, are based on Monte Carlo (MC) simulations using 1 M bits at each signal‐to‐noise ratio <8 dB; otherwise, 10 M bits are used. The dotted curve is the theoretical or antipodal performance of BPSK and the circled data points verify the fidelity of the simulation program and correspond to the AWGN channel without the equalizer. The filters are symmetrical baseband filters and the received signal frequency error is zero. Ideal demodulator phase and bit‐time tracking is used with an integrate‐and‐dump (I&D) matched filter so the performance degradation is solely based on the channel and equalizer. The solid and dashed curves in each figure correspond to the channel filter without and with the equalizer. The equalizer uses a nine‐tap bit‐spaced FIR filter with a tap weight gain step‐size u = 0.002. The learning time, before demodulation bit‐error counting, is 400 bits. The simulation results demonstrate the performance improvement for the worst‐case filter shown in Figure 12.10a; however, for the less severe channel filters the equalizer has little effect. Reducing the equalizer step‐size during the data detection will improve the equalizer bit‐error performance while maintaining an acceptable learning time, although the channel dynamics will influence the misadjustment parameter, restricting the lower limit of the step size.

3 Graphs of bit-error probability over signal-to-noise ratio for the BPSK modulation with LMS linear equalization.

FIGURE 12.11 BPSK modulation with LMS linear equalization (9‐tap FIR filter, Δ = 0.002, training = 400 bits).

12.8 CASE STUDY: NARROWBAND INTERFERENCE CANCELLATION

In this case study, the cancellation of narrowband interference is examined using CW interfering signals characterized as

(12.56) $\tilde{s}_I(t) = \sum_{i} V_i\cos\left(2\pi f_i t + \phi_i\right)$

where Vi = kvVs with fi and ϕi denoting the frequency and phase of the i‐th interfering tone, Vs is the peak carrier voltage of the received waveform, and T is the modulation symbol duration. The simulation program provides an arbitrary number of narrowband interfering tones; however, the following results apply for a single tone centered on the desired signal spectrum with discrete phases relative to the phase of the received signal. The total power (Ptot) of the interfering tones, the modulated signal, and the receiver noise is adjusted to the automatic gain controlled power Pagc according to the relationships $P'_s = \rho P_s$, $P'_I = \rho P_I$, and $P'_n = \rho P_n$, where ρ is defined as $\rho = P_{agc}/P_{tot}$ and the total power is computed as

(12.57) $P_{tot} = P_s + P_I + P_n$
The noise power is established from the specified signal‐to‐noise ratio γb = Eb/No according to the relationship

(12.58) $P_n = \dfrac{P_s}{\gamma_s}$

where γs is the signal‐to‐noise ratio in the bandwidth of the transmitted symbol so γs = krcγb with k bits/symbol and a forward error correction (FEC) code rate of rc. The performance simulations use uncoded BPSK so k = rc = 1. Using these normalized conditions results in a common received power level that provides for establishing a common adaptive filter gain, or step size u, independent of the number of interferers or the ISR; it also simplifies the specification of the analog‐to‐digital converter (ADC) range and the maximum amplitude level when the effect of finite amplitude quantization is being examined.

The model for evaluating the performance with the signal and interference is depicted in Figure 12.12. Under ideal conditions, the interfering signal is input as the canceler reference; however, the model provides for scenarios where the reference also contains the signal. To accommodate these situations, the parameter ks defines the signal level that is added to the canceler reference input. In the following simulated performance results, the parameter ks is specified as the signal attenuation or loss and is alternately used to denote the signal attenuation in decibels.

Schematic of model for signal and interference.

FIGURE 12.12 Model for signal and interference.

Based on Figure 12.12, the interference into the canceler can be characterized, in terms of the ISR, as

(12.59) $\dfrac{I_{in}}{I} = 1 + \dfrac{1}{k_s\,(I/S)}$
Equation (12.59) is plotted in Figure 12.13 as Iin/I versus the ISR for various values of the parameter ks. The departure from the condition Iin = I results from the influence of the additive signal power on the interference with decreasing I/S. From Figure 12.13 it is seen that, as ks increases in value from 1 (0 dB), the interference power PIin → PI, which is a more authentic representation of the interference.

Graph of input-to-interference ratio over interference-to-signal ratio for dependence of input interference on the interference-to-signal ratio, with plots 40, 30, 20, 10, and 0.

FIGURE 12.13 Dependence of input interference on the interference‐to‐signal ratio.

As defined in (12.57), the total receiver power, including the desired signal, interference, and channel noise, is adjusted to the AGC power of 0.5 W and the resulting signal carrier power is computed using $E_b = P_sT_b$ with Tb = 1/Rb as

(12.60) $P_s = E_b\,R_b$

where Rb is the user bit rate. In the performance simulation program Rb = 1 kbps, so the modulated carrier power is 30 dB above the energy‐per‐bit. For a single interfering CW tone, the power (Pcw) is given by

(12.61) $P_{cw} = \dfrac{V_i^{2}}{2}$

As shown in Figure 12.14, the canceler gain adjustment, or step‐size Δ = 2u, is the key parameter that determines the canceler convergence characteristics given a specified FIR filter length. As seen from Figure 12.14, the canceler convergence time can be decreased by increasing Δ; however, the bit‐error performance of the demodulator is improved by decreasing Δ following an acceptable convergence time. Consequently, a lower adjustment step‐size is typically used when convergence is detected based on the statistical properties of the canceler error. In the following bit‐error performance simulations, the bit‐error counting is delayed until the canceler has reached, or nearly reached, the steady‐state condition.

Graph of normalized error over sample for theoretical LMS canceler convergence responses, with plots 6e–5, 4e–5, 2e–5, 1e–5, and 6e–6.

FIGURE 12.14 Theoretical LMS canceler convergence responses (Ns = 1000).

The canceler convergence is derived in Section 12.8.1, and Figure 12.14 is based on the generic normalized expression

(12.62) $\dfrac{v_j}{v_o} = \left(1 - \Delta\,\lambda\right)^{j}$

where λ is the characteristic value of the correlation matrix of the Ns‐tap canceler, Δ is the canceler weight incremental adjustment, and j is the incremental time index. The bit‐error performance of the demodulator with the CW interfering tone is shown in Figure 12.15 without the canceler. The ISR is indicated in decibels and the results clearly demonstrate the impact on the performance when the canceler is not used. To achieve a performance loss less than a few tenths of a decibel requires ISR ≤ −30 dB, which is virtually no interference.

Graph of bit-error probability over signal-to-noise ratio for the performance of BPSK with CW interference without canceler, with plots -30, -20, -10, 0, 10, and 20. It also labels the antipodal and the ISR.

FIGURE 12.15 Performance of BPSK with CW interference without canceler.

With the exception of the performance in Figure 12.15 for ISR ≥ 0 dB, the MC simulations involve 1 M symbols or trials for signal‐to‐noise ratios <8 dB; otherwise, 10 M symbols are used for each signal‐to‐noise ratio. The range of signal‐to‐noise ratios is 0–14 dB in 1 dB steps. Furthermore, the phase of the interfering tone is indexed over 360° using ten deterministic phases given by $\phi_i = i(360°/10)$ for i = 1, …, 10. Therefore, the bit‐error performance involving 10 M MC trials results from averaging the bit errors over ten 1 M MC trials with each trial corresponding to a different signal phase. These simulation conditions apply to the simulated bit‐error results in the remainder of this case study.

Figure 12.16 shows the bit‐error performance under the steady‐state conditions following the acquisition or learning time using a canceler step‐size of Δ = 5 × 10−5. The signal attenuation parameter is ks = ∞ dB, that is, the reference signal into the canceler is the CW interference tone without any additive signal. Upon examining the expanded plot in Figure 12.16b, it is seen that the performance loss is negligible for ISR ≤ 0 dB and increases to 0.4 dB at Pbe = 10−6 when the ISR is increased to 30 dB; the corresponding loss is 0.23 dB at Pbe = 10−5. Consequently, for a step‐size of Δ = 5 × 10−5 the losses are limited to those shown in Figure 12.16 for ISR ≤ 30 dB. These losses can be further reduced by decreasing the step‐size.

2 Graphs of signal-to-noise ratio (Eb/No) dB versus bit-error probability (Pbe) for signal attenuation ks = ∞ displaying descending dashed, dotted, and solid lines labeled ISR (dB) 20, 30, 0, –20, and 10.

FIGURE 12.16 LMS performance of CW interference cancellation with constant signal attenuation.

In Figure 12.17, a series of bit‐error performance curves show the impact of the additive signal level to the canceler reference input as depicted in Figure 12.12. For a given ISR, the bit‐error performance approaches that in Figure 12.16 as ks approaches infinity. The objective is to characterize the minimum value of ks (ksmin) to achieve a loss in Eb/No ≤ 0.2 dB relative to that shown in Figure 12.16. Examination of the results in Figure 12.17, supplemented with the additional performance for ISR = 30 and 50 dB, indicates that ks ≥ 20 dB results in an Eb/No performance that is consistent with the above objective. These performance results are a direct consequence of the user bit‐rate of Rb = 1 kbps corresponding to the signal carrier power (Ps) being 30 dB‐Hz above the energy‐per‐bit as expressed in (12.60).

4 Graphs of signal-to-noise ratio (Eb/No) dB versus bit-error probability (Pbe) for ISR = 20 dB, 10 dB, –10 dB, and –20 dB displaying descending dotted, dashed dotted, and solid lines labeled 0, 5, 10, 20.

FIGURE 12.17 LMS performance of CW interference cancellation with constant ISR.

The interpretation of these results is shown in Figure 12.18 with the signal carrier power S = Ps = 30 dBW. Referring to Figure 12.12 and using the attenuation ks, the canceler reference input is the interference tone plus the attenuated signal, where $P'_I = P_s/k_s$ is the interference input power resulting from the signal. Therefore, the signal‐related interference power is ks(dB) below the signal power and, from Figure 12.17, ks ≥ 20 dB so ksmin = 20 dB corresponding to $P'_{Imax}$ = 10 dBW. Consequently, $P'_I$ cannot exceed the maximum margin of Δmax = 10 dB/s relative to Eb; otherwise, the presence of the signal in the canceler reference degrades the bit‐error probability. Noting that Rb and Δmax have units of 1/s, these results can be generalized for BPSK modulation by the requirement $k_{smin} = R_b/\Delta_{max} = R_b/10$.

Schematic of interpretation of ks in terms of received signal energy displaying horizontal line with vertical arrows labeled Eb=PSTb W-s, PS = 30dBW, and PI'max= 10dBW with dashed vertical lines between arrows.

FIGURE 12.18 Interpretation of ks in terms of received signal energy.

12.8.1 Theoretical Canceler Convergence Evaluation

This case study is concluded by examining the theoretical tap weight convergence based on the characteristic equation of the canceler. The analysis is based on the minimum shift keying (MSK) symbol matched filter expressed as

(12.63) $h(t) = \cos\left(\omega_m t\right), \quad \left|t\right| \le T/2$

where ωm = 2πfm with fm = 1/2T and T is the symbol duration. The analysis considers the input as a CW interfering signal, expressed at complex baseband as

(12.64) $\tilde{s}(t) = V_p\,e^{\,j\left(\omega_\varepsilon t + \phi\right)}$

where $\omega_\varepsilon$ is the frequency offset of the tone and ϕ is the tone phase.

The optimally sampled MSK filter outputs, $\tilde{x}(jT)$, are input to the canceler as $\tilde{x}_j$ and the canceler output is expressed as

(12.65) $\tilde{e}_j = \tilde{x}_j - \sum_{n=0}^{N-1}\tilde{w}_n\,\tilde{x}_{j-n_d-n}$

where N is the number of FIR filter tap weights $\tilde{w}_n$ and $n_d$ is the delay of the interfering tone into the canceler. The optimum weight vector wo corresponds to the weights that minimize the MSE $E\left[\left|\tilde{e}_j\right|^{2}\right]$. The weight error is defined as the vector

(12.66) $v_j = \tilde{w}_j - \tilde{w}_o$

Applying the method of steepest‐descent, discussed in Section 12.3, the transient response of the weights is formulated in terms of the weight error vector as

(12.67) $v_j = \left(I - \Delta R\right)^{j} v_o$

where vo is the initial weight error vector, Δ = 2u is the canceler gain step‐size, I is an N × N identity matrix, and R is the N × N correlation matrix, given by

(12.68) $R = E\left[\tilde{x}_j\,\tilde{x}^{H}_j\right]$

The matrix R is Hermitian, so there exist N linearly independent characteristic vectors and the diagonal transformation $R = Q\,\Lambda\,Q^{H}$ applies, where Q is a matrix composed of the N independent characteristic vectors and Λ is a diagonal matrix of the characteristic values. Using these results, (12.67) is expressed as

(12.69) $v'_j = \left(I - \Delta\Lambda\right)^{j} v'_o, \quad v'_j = Q^{H}v_j$
The solution to (12.69) is based on the trace tr[Λ], defined as the summation of the diagonal elements of Λ, that is,

(12.70) $tr[\Lambda] = \sum_{i=1}^{N}\lambda_i$

so the normalized solution to (12.69) is

(12.71) $\dfrac{v_j}{v_o} = \left(1 - \Delta\,\dfrac{tr[\Lambda]}{N}\right)^{j}$
At this point the analysis is simplified by assuming real signals corresponding to ωε and ϕ equal to zero, in which case s(t) = Vp, independent of time, and the optimally sampled matched filter output is given by

(12.72) $\tilde{x}_j = V_p\int_{-T/2}^{T/2}\cos\left(\omega_m t\right)dt = \dfrac{2V_pT}{\pi}$

In this example, the corresponding characteristic values are identical and evaluated as

(12.73) $\lambda_i = \left(\dfrac{2V_pT}{\pi}\right)^{2}$

and (12.71) becomes

(12.74) $\dfrac{v_j}{v_o} = \left[1 - \Delta\left(\dfrac{2V_pT}{\pi}\right)^{2}\right]^{j}$
Equation (12.74) is plotted in Figure 12.19 with Vp = 0.125 V. The simulated performance, based on (12.72), is indicated by the circled data points.

Graph of matched filter samples (j) versus response (Vj /Vo) of convergence performance of 4-tap CW LMS canceler with simulated data points displays descending solid lines with circle points 0.1, 0.3, and 1.0.

FIGURE 12.19 Theoretical convergence performance of 4‐tap CW LMS canceler with simulated data points.

12.9 CASE STUDY: RECURSIVE LEAST SQUARES PROCESSING

The RLS processing is examined using MC computer simulations of the bit‐error performance with MSK modulation using 10 M bits for each signal‐to‐noise ratio. The application involves the acquisition of the received signal when the signal phase and/or amplitude are unknown over the respective ranges |φ| ≤ 180° and |A| ≤ Am. The signal phase increment is Δφ = 18° with φi = −180 + Δφ(i − 1) : i = 1, …, 11 and the amplitude increment is ΔA = 2Am/9 with Ai = −Am + ΔA(i − 1) : i = 1, …, 10. For example, when both phase and amplitude are unknown, the simulation indexes over 100 K bits for each parameter combination, resulting in 11 × 10 × 100 K = 11 M bits for each signal‐to‐noise ratio. At each signal‐to‐noise ratio, the acquisition processing is re‐initiated for each parameter and the detected bit‐error counting starts following completion of the training sequence. The training data sequence is generated using the same random process as used for generating the message data, so no attention is given to finding a good training sequence. The functions of the simulation processing are shown in Figure 12.20.

Flow diagram depicting RLS simulation functional description from source data to received data.

FIGURE 12.20 RLS simulation functional description.

Upon completion of the training data sequence, the acquisition switch is changed to use either the fixed tap weights or the decision‐directed RLS processing during receipt of the message data.

The channel is characterized by a cosine‐weighted impulse response expressed as

(12.75) $h(nT_s) = \cos^{2}\left(\dfrac{\pi n}{K_s}\right), \quad \left|n\right| \le \dfrac{N_p - 1}{2}$

where Ts is the sampling interval, Np = odd integer is the number of channel impulses, and Ks determines the impulse spread. The characteristics of the four channels considered, ranging in severity from the worst case to the best case, are summarized in Table 12.2.

TABLE 12.2 Channel Dispersion Characteristics

n     Ch‐1 (NP = 5, ks = 6)   Ch‐2 (NP = 5, ks = 3.5)   Ch‐3 (NP = 3, ks = 3.5)   Ch‐4 (NP = 3, ks = 2.9)
1     0.25                    0.050                     0.389                     0.219
2     0.75                    0.389                     1.000                     1.000
3     1.00                    1.000                     0.389                     0.219
4     0.75                    0.389
5     0.25                    0.050
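The cosine‐squared weighting of (12.75) reproduces the Table 12.2 entries, as the following check illustrates (the function name and centered indexing are illustrative):

import numpy as np

def channel(Np, Ks):
    n = np.arange(Np) - (Np - 1) // 2     # impulse index centered on zero
    return np.cos(np.pi * n / Ks) ** 2    # cosine-squared weighting (12.75)

for name, Np, Ks in [("Ch-1", 5, 6.0), ("Ch-2", 5, 3.5),
                     ("Ch-3", 3, 3.5), ("Ch-4", 3, 2.9)]:
    print(name, np.round(channel(Np, Ks), 3))   # matches the tabulated impulses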

The details of the RLS processing are described in Section 12.6 and the data detection includes the MSK matched filter detection with ideal phase and bit‐time tracking following the training sequence in use.

12.9.1 Performance with Fixed Weights Following Training

The bit‐error performance results in this section characterize the acquisition and subsequent bit‐error evaluation with combinations of known and unknown received signal phase and amplitude as described above. For these results the RLS algorithm uses δ = 0.1 for initialization of the matrix C and λ = 0.999; this value of λ results in a long memory of past events corresponding to a narrow recursive bandwidth. Figure 12.21 shows the performance using 4 samples/bit and a block length of 2 bits or 8 samples. The training data range in length from 4 to 64 blocks with various combinations of known and unknown parameters. Figure 12.21a corresponds to the ideal conditions with zero signal phase and a fixed carrier amplitude using channel No. 1. These known received signal phase and amplitude conditions are ideal, so the evaluations in Figure 12.21a characterize the performance of the RLS processing as a channel equalizer. The circled data points represent the ideal simulation conditions corresponding to antipodal detection performance and the dashed curve represents the performance without equalization. The solid curve demonstrates the performance improvement when the RLS equalizer is used. The results in Figure 12.21b–d represent the performance improvements with increasing training blocks under the indicated received signal parameter conditions. In these cases, the ideal channel is used, that is, without filtering, so the RLS processing estimates the unknown received signal phase and/or amplitude parameters. A training sequence of 64 blocks results in near‐optimum performance.

4 Graphs illustrating RLS performance with known and unknown parameters displaying descending dashed, dotted, dotted with circle points, and solid lines labeled 8, 4, 12, 64, 20, and 32.

FIGURE 12.21 RLS performance with known and unknown parameters.

Figure 12.22a–c shows the performance results with unknown received signal phase and amplitude when using channels No. 1 and 3 for the respective training sequence lengths of 12, 20, and 64 blocks. These results demonstrate the ability of the RLS‐adaptive processing to perform equalization, parameter estimation and correction, and to acquire a noisy received signal with sufficient fidelity to provide near‐optimal bit‐error performance. However, as mentioned above, these results are based on λ = 0.999, which severely restricts the responsiveness to the dynamic behavior of the input signal.

3 Graphs of signal-to-noise ratio (Eb/No) dB versus bit-error probability (Pbe) for 12, 20, and 64 blocks displaying descending dotted, dashed, and solid lines labeled channels No. 1 and 2 and no channel.

FIGURE 12.22 RLS performance with channels 1 and 2 and unknown parameters.

ACRONYMS

BLP
Backward linear prediction
BPSK
Binary phase shift keying
CW
Continuous wave
DFE
Decision feedback equalizer
DSSS
Direct‐sequence spread‐spectrum
FBLP
Forward–backward linear prediction
FIR
Finite impulse response (filter)
FLP
Forward linear prediction
FSE
Fractionally‐spaced equalizer
I&D
Integrate‐and‐dump (filter)
IIR
Infinite impulse response (filter)
ISI
Intersymbol interference
ISR
Interference‐to‐signal ratio
LFFE
Linear feedforward equalizer
LMS
Least mean‐square (algorithm)
LMSE
Linear mean‐square estimation (algorithm, equalizer)
MC
Monte Carlo (simulation)
MLSE
Maximum‐likelihood sequence estimation
MMSE
Minimum mean‐square error (algorithm)
MSE
Mean‐square error (algorithm)
MSK
Minimum shift keying
RLS
Recursive least‐squares (equalizer)
SSE
Symbol‐spaced equalizer
TDL
Tapped delay line
ZFE
Zero‐forcing equalizer

PROBLEMS

  1. Evaluate the MMSE for the optimum filter ho(t) developed in Section 12.2 using the relationship $J_{min} = E[\tilde{e}(t)\,\tilde{y}^{*}(t)]$ expressed in (12.6). Express the result in terms of the respective spectrums $S_{\tilde{y}\tilde{y}}(\omega)$, $S_{\tilde{y}\tilde{x}}(\omega)$, and $S_{\tilde{x}\tilde{x}}(\omega)$. Note that the Fourier transform of $R_{\tilde{y}\tilde{x}}(\tau)$ is $S_{\tilde{y}\tilde{x}}(\omega)$.
  2. Consider that the optimally sampled matched filter output of an isolated symbol through a bandlimited channel is characterized as
    $\tilde{y}_n = \sum_{j} c_j\,d_{n-j}$
    where cj is the matched filter sample with c0 ≫ cj : j ≠ 0 and dn = ±1 is the isolated n‐th source data symbol of duration T seconds, such that E[dndm] = δnm where δnm is the Kronecker delta function.

    For the estimator shown in the following figure, determine the open‐loop estimates Ek[•] : k = 1, 2, 3 considering contiguous source data symbols corresponding to the matched filter samples

    $\tilde{y}_{n-1},\ \tilde{y}_{n},\ \tilde{y}_{n+1}$

    Redraw the figure to include an ideal integrator following the appropriate estimators and a transversal filter with adjustable taps that will adaptively eliminate the ISI terms cj : j ≠ 0.

    Flow diagram starting from matched filter to received data, E1[•], E2[•], and E3[•].

REFERENCES

  1. R.W. Lucky, “Techniques for Adaptive Equalization of Digital Communication Systems,” Bell Labs Technical Journal, Vol. 45, No. 2, pp. 255–286, February 1966.
  2. D.W. Tufts, “Nyquist’s Problem—The Joint Optimization of Transmitter and Receiver in Pulse Amplitude Modulation,” Proceedings of the IEEE, Vol. 53, pp. 248–259, March 1965.
  3. A. Papoulis, Probability, Random Variables, and Stochastic Processes, Chapter 11, Linear Mean‐Square Estimation, pp. 385–429, Chapter 7, Functions of Two Random Variables, pp. 187–232, McGraw‐Hill Book Company, New York, 1965.
  4. Y.W. Lee, Statistical Theory of Communication, pp. 56–69 and pp. 367–393, John Wiley & Sons, Inc., New York, 1964.
  5. S. Haykin, Adaptive Filter Theory, pp. 3–121, Prentice‐Hall, Englewood Cliffs, NJ, 1986.
  6. A. Papoulis, Probability, Random Variables, and Stochastic Processes, Section 11.5, McGraw‐Hill, New York, 1965.
  7. R.D. Gitlin, J.E. Mazo, M.G. Taylor, “On the Design of Gradient Algorithms for Digitally Implemented Adaptive Filters,” IEEE Transactions on Circuit Theory, Vol. CT‐20, No. 2, pp. 125–136, March 1973.
  8. B. Widrow, “Adaptive Filters,” in R.E. Kalman, N. DeClaris, Editors, Aspects of Network and System Theory, Holt, Rinehart and Winston, New York, 1970.
  9. G. Ungerboeck, “Theory on the Speed of Convergence in Adaptive Equalizers for Digital Communication,” IBM Journal of Research and Development, Vol. 16, pp. 546–555, November 1972.
  10. R.D. Gitlin, S.B. Weinstein, “On the Required Tap‐Weight Precision for Digitally Implemented, Adaptive, Mean‐Squared Equalizers,” Bell Labs Technical Journal, Vol. 58, No. 2, pp. 301–321, February 1979.
  11. S. Haykin, Adaptive Filter Theory, pp. 197–218, Prentice‐Hall, Englewood Cliffs, NJ, 1986.
  12. M. Dentino, J. McCool, B. Widrow, “Adaptive Filtering in the Frequency Domain,” Proceedings of the IEEE, Vol. 66, No. 12, pp. 1978–1979, December 1978.
  13. N.J. Bershad, P.L. Feintuch, “Analysis of the Frequency Domain Adaptive Filter,” Proceedings of the IEEE, Vol. 67, No. 12, pp. 1658–1659, December 1979.
  14. B. Widrow, M.E. Hoff, “Adaptive Switching Circuits,” IRE WESCON Conference Record, Part 4, pp. 96–104, 1960.
  15. B. Widrow, J. McCool, M. Ball, “The Complex LMS Algorithm,” Proceedings of the IEEE, Vol. 63, No. 4, pp. 719–720, April 1975.
  16. S. Haykin, Adaptive Filter Theory, pp. 236–237, Prentice‐Hall, Englewood Cliffs, NJ, 1986.
  17. S. Haykin, Adaptive Filter Theory, pp. 238–240, Prentice‐Hall, Englewood Cliffs, NJ, 1986.
  18. B.R. Saltzberg, “Intersymbol Interference Error Bounds with Application to Ideal Bandlimited Signaling,” IEEE Transactions on Information Theory, Vol. IT‐14, pp. 563–568, July 1968.
  19. E.Y. Ho, Y.S. Yeh, “A New Approach for Evaluating the Error Probability in the Presence of Intersymbol Interference and Additive Gaussian Noise,” Bell Labs Technical Journal, Vol. 49, No. 9, pp. 2249–2265, November 1970.
  20. F.E. Glave, “An Upper Bound on the Probability of Error Due to Intersymbol Interference for Correlated Digital Signals,” IEEE Transactions on Information Theory, Vol. IT‐18, pp. 356–362, May 1972.
  21. O. Shimbo, M. Celebiler, “The Probability of Error Due to Intersymbol Interference and Gaussian Noise in Digital Communication Systems,” IEEE Transactions on Communication Technology, Vol. COM‐19, pp. 113–119, April 1971.
  22. R. Lugannani, “Intersymbol Interference and Probability of Error in Digital Systems,” IEEE Transactions on Information Theory, Vol. IT‐15, pp. 682–688, November 1969.
  23. R.W. Lucky, J. Salz, E.J. Weldon, Jr., Principles of Data Communication, McGraw‐Hill, New York, 1968.
  24. R.W. Lucky, J. Salz, E.J. Weldon, Jr., Principles of Data Communication, pp. 65–72, McGraw‐Hill, New York, 1968.
  25. R.W. Lucky, “Automatic Equalization for Digital Communication Systems,” Bell Labs Technical Journal, Vol. 44, pp. 548–588, April 1965.
  26. M.E. Austin, “Decision‐Feedback Equalization for Digital Communication over Dispersive Channels,” Technical Report No. 437, M.I.T. Lincoln Laboratory, Cambridge, MA, August 1967.
  27. J. Salz, “Optimum Mean Square Decision Feedback Equalization,” Bell Labs Technical Journal, Vol. 52, pp. 1341–1373, October 1973.
  28. S. Haykin, Adaptive Filter Theory, pp. 234–376, Prentice‐Hall, Englewood Cliffs, NJ, 1986.
  29. C.A. Belfiore, J.H. Park, Jr., “Decision Feedback Equalization,” Proceedings of the IEEE, Vol. 67, No. 8, pp. 1143–1156, August 1979.
  30. J.G. Proakis, Digital Communications, McGraw‐Hill, New York, 1989.
  31. R.L. Bogusch, F.W. Guigliano, D.L. Knepp, “Frequency‐Selective Scintillation Effects and Decision Feedback Equalization in High Data‐Rate Satellite Links,” Proceedings of the IEEE, Vol. 71, No. 6, pp. 754–767, June 1983.
  32. W.U. Lee, F.S. Hill, Jr., “A Maximum‐Likelihood Sequence Estimator with Decision‐Feedback Equalization,” IEEE Transactions on Communications, Vol. COM‐25, No. 9, pp. 971–979, September 1977.
  33. J.G. Proakis, Digital Communications, pp. 585–587, McGraw‐Hill, New York, 1989.
  34. R.D. Gitlin, S.B. Weinstein, “Fractionally‐Spaced Equalization: An Improved Digital Transversal Equalizer,” Bell Labs Technical Journal, Vol. 60, No. 2, pp. 275–296, February 1981.
  35. S.U.H. Qureshi, G.D. Forney, Jr., “Performance and Properties of a T/2 Equalizer,” National Telecommunications Conference Record, pp. 11.1.1–11.2.14, Los Angeles, CA, December 1977.
  36. G. Ungerboeck, “Fractional Tap‐Spacing Equalizer and Consequences for Clock Recovery in Data Modems,” IEEE Transactions on Communications, Vol. COM‐24, No. 8, pp. 856–864, August 1976.
  37. D.D. Falconer, “Jointly Adaptive Equalization and Carrier Recovery in Two‐Dimensional Communication Systems,” Bell Labs Technical Journal, Vol. 55, No. 3, pp. 317–334, March 1976.
  38. A. Benveniste, M. Goursat, “Blind Equalizers,” IEEE Transactions on Communications, Vol. COM‐32, No. 8, pp. 871–883, August 1984.
  39. G. Picchi, G. Prati, “Blind Equalization and Carrier Recovery Using a ‘Stop‐and‐Go’ Decision‐Directed Algorithm,” IEEE Transactions on Communications, Vol. COM‐35, No. 9, pp. 877–887, September 1987.
  40. G.J. Foschini, “Equalizing Without Altering or Detecting Data,” Bell Labs Technical Journal, Vol. 64, No. 8, pp. 1885–1911, October 1985.
  41. D.N. Godard, “Self‐Recovering Equalization and Carrier Tracking in Two‐Dimensional Data Communications Systems,” IEEE Transactions on Communications, Vol. COM‐28, No. 11, pp. 1867–1875, November 1980.
  42. B. Widrow, J.R. Glover, Jr., J.M. McCool, J. Kaunitz, C.S. Williams, R.H. Hearn, J.R. Zeidler, E. Dong, Jr., R.C. Goodlin, “Adaptive Noise Canceling: Principles and Applications,” Proceedings of the IEEE, Vol. 63, No. 12, pp. 1692–1716, December 1975.
  43. T. Vaidis, C.L. Weber, “Block Adaptive Techniques for Channel Identification and Data Demodulation Over Band‐Limited Channels,” IEEE Transactions on Communications, Vol. 46, No. 2, pp. 232–243, February 1998.
  44. S. Haykin, Adaptive Filter Theory, Chapter 8, Prentice‐Hall, Englewood Cliffs, NJ, 1986.
