6

Essence of Distributed Target Tracking

Track Fusion and Track Association

Shozo Mori, Kuo-Chu Chang, and Chee-Yee Chong

CONTENTS

6.1    Introduction

6.2    Track Fusion

6.2.1    One-Time Track Fusion

6.2.1.1    One-Time Track Fusion Rules

6.2.1.2    Calculation of Cross-Covariance Matrix

6.2.1.3    Covariance Intersection Methods

6.2.1.4    Optimality of Track Fusion

6.2.1.5    Performance Comparison of One-Time Track Fusion Rules

6.2.2    Repeated Track Fusion

6.2.2.1    Repeated Track Fusion without Feedback

6.2.2.2    Repeated Track Fusion with Feedback

6.3    Track Association

6.3.1    Track Association Problem Definition

6.3.2    Track Association Metrics

6.3.3    Comparison of Track Association Metrics

6.4    Conclusions

References

6.1    INTRODUCTION

This chapter describes an important, practical, widely studied application of the distributed estimation theories described in Chapter 5, i.e., distributed target tracking. Multiple-target tracking problems can be viewed as an extension of classical dynamical state estimation problems, or filtering problems (Wiener 1949, Kalman 1960, Kalman and Bucy 1960, Anderson and Moore 1979), to estimating the states of generally moving physical entities. The essence of the extension is the move from single-target problems to multiple-target problems with an unknown number of targets and without a priori target identification, in which any observation may originate from any one of the modeled targets or from an object of no interest (i.e., clutter, false alarms, etc.) (Blackman 1986, Bar-Shalom and Fortmann 1988, Bar-Shalom and Li 1993, Blackman and Popoli 1999, Bar-Shalom et al. 2001, 2011). In short, multiple-target tracking problems are dynamical state estimation problems with data association problems.

As in any other information processing system, given a set of sources of information, i.e., sensors, optimal or near-optimal state estimates are obtained by central processing, i.e., by centrally processing all the relevant information provided by all the sensors. However, in many large-scale system designs, an alternative processing architecture, i.e., distributed processing, is preferred because of the absence of a single point of failure, generally reduced communication requirements, and possible minimization of processing bottlenecks, as discussed in the previous chapter. This preference is particularly prevalent for multiple-target tracking problems, mainly because of the often severe information-processing requirements for solving data association problems. In distributed tracking systems, the data association requirements are typically divided into (i) local data association, where sensor measurements are correlated together into local (or sensor) tracks, and (ii) global processing, where local tracks are associated and fused together into a set of global (or system) tracks. In this way, the processing and communication loads may be balanced system-wide.

For this reason, the study of distributed tracking started almost at the same time as the study of multiple-target tracking itself. We can cite a pioneering work (Singer and Kanyuck 1971) and two seminal papers (Bar-Shalom 1981) and (Bar-Shalom and Campo 1986), which cover the two essences of distributed tracking, i.e., track association and track fusion. As mentioned in the previous chapter, the track association and fusion problems were formulated and solved in the framework of distributed estimation problems (Chong et al. 1985, 1987, Liggins II et al. 1997), with general sensor and information networks. Since then, the literature on track fusion has grown rapidly (Hashemipour et al. 1988, Durrant-Whyte et al. 1990, Belkin et al. 1993, Lobbia and Kent 1994, Drummond 1997a, Miller et al. 1998, Zhu and Li 1999, Li et al. 2003, and many others).

As described in Drummond (1997a), Liggins II et al. (1997), Chong et al. (2000), Moore and Blaire (2000), Dunham et al. (2004), and Liggins II and Chang (2009), many distributed tracking systems, both military and civilian, have been developed and operated; system engineering studies, mainly so-called fusion architecture studies, have been conducted; and the performance of various functions and algorithms has been examined. Recently, the topics of distributed target tracking have migrated into the areas of robotics (Durrant-Whyte et al. 1990) and distributed large-scale sensor networks (Iyengar and Brook 2005).

Instead of covering the entire area of distributed tracking, this chapter revisits its two essences, i.e., track association and track fusion, in terms of track fusion rules and track association metrics. We describe as many of the rules and metrics that have been proposed as possible and examine them as quantitatively as possible.

To do this, we need to limit our scope using simple abstract mathematical models. However, we will try to cover as many practical factors as possible: consequences of one-time versus repeated information exchanges, fusion of information from similar versus dissimilar sensors, target maneuverability, a priori position and velocity uncertainty, and target density. We also choose a minimum complexity of system architecture, i.e., a two-sensor, or two-station system, by which we can isolate the two essential problems, i.e., the track association and fusion, to enable clear comparison of many algorithms. In this way, we can discuss key design factors for the distributed tracking, i.e., fusion with or without feedback, and the effect of the depth of memory of the past informational transactions, etc.

The rest of this chapter is divided into two major sections: Section 6.2 describes representative track fusion rules, and numerically compares the performance, under a set of prescribed variations of track fusion environments and designs. Section 6.3 examines a simple one-time track-to-track association and compares the performance using various track-to-track association metrics.

6.2    TRACK FUSION

Although track association is a prerequisite to track fusion, we will discuss track fusion first in this section before discussing track association in the next section. Despite the large volume of work on track fusion, studies of track association remain rather sparse compared with studies of track fusion.

We will first consider a simple, one-time track-to-track fusion problem in Section 6.2.1 and more complicated cases where track fusion is done repeatedly in Section 6.2.2.

6.2.1    ONE-TIME TRACK FUSION

Suppose that two sensors, i = 1, 2, have been observing the same target as

$y_{ik} = H_{ik}\,x(t_{ik}) + \eta_{ik}$

(6.1)

at time $t_{ik}$, for $k = 1, \ldots, N_i$, such that $t_{i1} < \cdots < t_{iN_i}$, where each measurement error $\eta_{ik}$ is an independent zero-mean Gaussian random vector with covariance matrix* $R_{ik} = E(\eta_{ik}\eta_{ik}^T)$ and $H_{ik}$ is an observation matrix with appropriate dimensions. $x(\cdot)$ in (6.1) is the target state process defined by

$\frac{d}{dt}x(t) = A_t\,x(t) + B_t\,\dot{w}(t)$

(6.2)

on $[t_0, \infty)$ with $t_0 \le \min\{t_{11}, t_{21}\}$ on a Euclidean target state space $E$, with a unit-intensity vector white noise process $(\dot{w}(t))_{t \in [t_0, \infty)}$ and by the initial state $x(t_0)$, a Gaussian vector with mean $\bar{x}_0$ and covariance matrix $\bar{V}_0$, i.e.,* $P(x(t_0)) = g(x(t_0) - \bar{x}_0;\ \bar{V}_0)$.

We assume the local data processor, for each sensor, $i = 1, 2$, produces the local estimate $\hat{x}_i = E(x(t_F)\,|\,(y_{ik})_{k=1}^{N_i})$, which is the conditional expectation of the target state $x(t_F)$ at a common fusion time $t_F \ge \max\{t_{1N_1}, t_{2N_2}\}$, conditioned by the local data $(y_{ik})_{k=1}^{N_i}$, together with the estimation error covariance matrix $V_i = E((\hat{x}_i - x(t_F))(\hat{x}_i - x(t_F))^T)$ that we assume is strictly positive definite, i.e., $P(x(t_F)\,|\,(y_{ik})_{k=1}^{N_i}) = g(x(t_F) - \hat{x}_i;\ V_i)$. Our track fusion problem is then defined as the problem of generating a "good" estimate $\hat{x}_F$ of the target state $x(t_F)$ as a function of the local estimates $\hat{x}_1$ and $\hat{x}_2$.

The joint probability density function of the two local estimation errors can then be written as

$P(\hat{x}_1 - x(t_F),\ \hat{x}_2 - x(t_F)) = g\left(\begin{bmatrix}\hat{x}_1 - x(t_F)\\ \hat{x}_2 - x(t_F)\end{bmatrix};\ \begin{bmatrix}V_1 & V_{12}\\ V_{21} & V_2\end{bmatrix}\right)$

(6.3)

We need to consider the cross-covariance matrices, $V_{12} = E((\hat{x}_1 - x(t_F))(\hat{x}_2 - x(t_F))^T)$ and $V_{21} = V_{12}^T$, in (6.3), because the initial condition $x(t_0) = x_0$ and the process noise $(w(t))_{t \in [t_0, \infty)}$ in (6.2) both commonly affect the two estimates, $\hat{x}_1$ and $\hat{x}_2$.

Let the local estimation errors be denoted by $\tilde{x}_i \stackrel{\mathrm{def}}{=} \hat{x}_i - x(t_F)$, $i = 1, 2$. Then, we should immediately recognize the following three facts:

1.  For each $i$, the estimation error $\tilde{x}_i$ is independent of (orthogonal to) the state estimate $\hat{x}_i$.

2.  The two estimation error vectors, $\tilde{x}_1$ and $\tilde{x}_2$, are correlated.

3.  Each estimation error $\tilde{x}_i$ is not necessarily independent of the target state $x(t_F)$.

Although (1) is a basic fact of linear Gaussian estimation (cf., e.g., Anderson and Moore 1979), (2) and (3) are distinct characteristics of track fusion problems, which prevent us from treating the two local estimates as if they were two independent sensor measurements of the target state $x(t_F)$. As mentioned earlier, (2) originates from the common use of the initial state condition and the process noise, while (3) is simply due to the fact that $\hat{x}_i$ is a processed result, correlated to the initial condition $x(t_0) = x_0$, and hence $\tilde{x}_i$ is correlated to $x(t_F)$.

Some of the track fusion rules described subsequently can be used for track fusion problems with nonlinear target dynamics and nonlinear observation models. In such a case, (6.3) may be considered as a Gaussian approximation of a non-Gaussian joint estimation error probability distribution.

6.2.1.1    One-Time Track Fusion Rules

In the following, for the sake of simplicity, we will drop the time index and replace $x(t_F)$ by $x$, $E(x(t_F))$ by $\bar{x}$, and write $\bar{V} = E((x - \bar{x})(x - \bar{x})^T)$. All the rules described in this section are in the form of the linear combination

$\hat{x}_F = W_0\,\bar{x} + W_1\,\hat{x}_1 + W_2\,\hat{x}_2$

(6.4)

with $W_0 + W_1 + W_2 = I$ (unbiasedness), where all the weight matrices are constant and independent of the sensor data, $(y_{ik})_{k=1}^{N_i}$, $i = 1, 2$, either as a conscious choice or as a consequence of the linear-Gaussian assumptions. The estimation error covariance matrix $V_F$ can therefore be evaluated by

$V_F = \begin{bmatrix}W_0 & W_1 & W_2\end{bmatrix}\begin{bmatrix}\bar{V} & V_{01} & V_{02}\\ V_{01}^T & V_1 & V_{12}\\ V_{02}^T & V_{12}^T & V_2\end{bmatrix}\begin{bmatrix}W_0^T\\ W_1^T\\ W_2^T\end{bmatrix}$

(6.5)

The covariance matrix $V_i$ is provided by each local state estimator, $i = 1, 2$, and the a priori state covariance $\bar{V}$ at the fusion time $t_F$ is given by

$\bar{V} = \Phi(t_F, t_0)\,\bar{V}_0\,\Phi(t_F, t_0)^T + Q(t_F, t_0)$

(6.6)

where $\Phi(t, \tau)$ is the fundamental solution matrix of $(A_t)_{t \in [t_0, \infty)}$, defined by the matrix differential equation $(\partial/\partial t)\Phi(t, \tau) = A_t\Phi(t, \tau)$ with $\Phi(\tau, \tau) = I$, and $Q(\cdot, \cdot)$ is defined by

$Q(t_2, t_1) = \int_{t_1}^{t_2} \Phi(t_2, \tau)\,B_\tau B_\tau^T\,\Phi(t_2, \tau)^T\,d\tau$

(6.7)

for any $t_0 \le t_1 \le t_2$. Later, Section 6.2.1.2 shows how to calculate the cross-covariance $V_{12}$ between the two local state estimation errors, $\tilde{x}_1$ and $\tilde{x}_2$, as well as the cross-covariance $V_{0i}$ between the a priori state expectation error, $\bar{x} - x$, and the local state estimation error $\tilde{x}_i$, $i = 1, 2$.

Some of the fusion rules described later in this section declare the estimation error covariance matrix $V_F$ by themselves, assuming implicitly or explicitly that some of the statistics, e.g., $V_{12}$ or $V_{0i}$, are not available when fusing the two local estimates, $\hat{x}_1$ and $\hat{x}_2$. In that case, the declared covariance matrix $V_F$ may not be the true one defined by (6.5). We will call the declared $V_F$ honest (consistent) if it coincides with the one calculated by (6.5), pessimistic if it is generally larger, and optimistic if smaller.

6.2.1.1.1    Bar-Shalom–Campo and Speyer Fusion Rules

The Bar-Shalom–Campo fusion rule, described in a seminal paper (Bar-Shalom and Campo 1986), is defined by the weights, W0 = 0, and

$W_i = (V_j - V_{ji})(V_1 + V_2 - V_{12} - V_{21})^{-1}$

(6.8)

for $i = 1, 2$ with $j = 3 - i$. Since $W_1 + W_2 = I$, the unbiasedness of the local estimates implies the unbiasedness of the fused estimate $\hat{x}_F$. As shown in Li et al. (2003), this fusion rule is obtained as the unique solution $x = \hat{x}_F$ that maximizes the likelihood function $L(x\,|\,\hat{x}_1, \hat{x}_2)$ defined as $L(x\,|\,\hat{x}_1, \hat{x}_2) = p(\hat{x}_1 - x, \hat{x}_2 - x)$, where $p(\tilde{x}_1, \tilde{x}_2)$ is the joint probability density function of the two local estimation errors, $\tilde{x}_1$ and $\tilde{x}_2$.

We should note that this likelihood function $L(x\,|\,\hat{x}_1, \hat{x}_2)$ is not the likelihood function $P(\hat{x}_1, \hat{x}_2\,|\,x)$ in the strict Bayesian sense, i.e., the conditional joint probability density function of the data, $\hat{x}_1$ and $\hat{x}_2$, given the true state $x$, because the estimation errors, $(\tilde{x}_1, \tilde{x}_2)$, are not independent of the target state $x = x(t_F)$. Nonetheless, $L(x\,|\,\hat{x}_1, \hat{x}_2) = p(\hat{x}_1 - x, \hat{x}_2 - x)$ is certainly qualified as a likelihood function of $x$ in the classical statistics sense, i.e., a joint probability density function of $\hat{x}_1$ and $\hat{x}_2$ when we consider $x$ as a constant parameter to be determined. Two other fusion rules, based on the conditional expectation and on the likelihood function in the strict Bayesian sense, will be described later in this section. Both of those rules, as well as the Bar-Shalom–Campo rule, use the cross-covariance matrix $V_{12}$ generated by the common factors, i.e., the common initial condition and the common process noise.

If we ignore the cross-covariance $V_{ij}$, (6.8) becomes, for $i = 1, 2$ with $j = 3 - i$,

$W_i = V_j(V_1 + V_2)^{-1} = (V_1^{-1} + V_2^{-1})^{-1}V_i^{-1}$

(6.9)

which is the fusion rule obtained by treating the two estimates $\hat{x}_1$ and $\hat{x}_2$ as if they were two conditionally independent observations of $x$. Since the gain matrices (6.9) are obtained by normalizing two positive definite matrices, $V_j$ or $V_i^{-1}$, so that $W_1 + W_2 = I$, we may call (6.9) the simple convex combination rule, with some caution not to confuse it with the covariance intersection fusion rules described later. It is also called the naïve fusion rule in Chang et al. (2008). We call this simplified rule the Speyer fusion rule, because this fusion rule seems to have appeared for the first time as Equation (22) of Speyer (1979).

Since $\det\!\begin{bmatrix}V_1 & V_{12}\\ V_{12}^T & V_2\end{bmatrix} = \det(V_1 - V_{12}V_2^{-1}V_{12}^T)\det(V_2)$, ignoring the cross-covariance by setting $V_{12} = 0$ means an increase in the ellipsoidal volume defined by the joint covariance matrix $\begin{bmatrix}V_1 & V_{12}\\ V_{12}^T & V_2\end{bmatrix}$. In that sense, we may say the simplified fusion rule (6.9) is obtained by inflating the joint covariance matrix. Using either fusion rule, the fused estimate is unbiased in the sense $E(\hat{x}_F - x) = 0$. For the Bar-Shalom–Campo rule, the declared fused estimation error covariance matrix, $V_F = V_1 - (V_1 - V_{12})(V_1 + V_2 - V_{12} - V_{12}^T)^{-1}(V_1 - V_{12}^T)$, is honest (or consistent), while the declared covariance $V_F = (V_1^{-1} + V_2^{-1})^{-1} = V_1 - V_1(V_1 + V_2)^{-1}V_1$ of the Speyer rule, which ignores the cross-covariance, is not honest and is generally optimistic.
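As a concrete illustration, the following is a minimal numerical sketch (not from the chapter; the function names, and the use of NumPy, are ours) of the Bar-Shalom–Campo weights (6.8) and the Speyer simple convex combination (6.9), given the local estimates and covariance matrices:

import numpy as np

def bar_shalom_campo(x1, x2, V1, V2, V12):
    # Bar-Shalom-Campo fusion (6.8), using the cross-covariance V12.
    V21 = V12.T
    S = np.linalg.inv(V1 + V2 - V12 - V21)
    W1, W2 = (V2 - V21) @ S, (V1 - V12) @ S
    xF = W1 @ x1 + W2 @ x2
    VF = V1 - (V1 - V12) @ S @ (V1 - V21)   # honest (consistent) covariance
    return xF, VF

def speyer(x1, x2, V1, V2):
    # Simple convex combination (6.9), ignoring the cross-covariance.
    VF = np.linalg.inv(np.linalg.inv(V1) + np.linalg.inv(V2))
    xF = VF @ (np.linalg.inv(V1) @ x1 + np.linalg.inv(V2) @ x2)
    return xF, VF   # the declared VF is generally optimistic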

6.2.1.1.2    Tracklet Fusion Rule

For each sensor $i = 1, 2$, let $\hat{p}_i(x) = P(x\,|\,(y_{ij})_{j=1}^{N_i})$. Then, as shown in Chong (1979), the tracklet fusion rule that obtains the fused probability density function $\hat{p}_F$ by fusing $\hat{p}_1$ and $\hat{p}_2$ can be written as $\hat{p}_F(x) = C^{-1}\hat{p}_1(x)\hat{p}_2(x)/\bar{p}(x)$, with the a priori probability density $\bar{p}(x)$ and the normalizing constant $C$. This fusion rule can be applied to any (generally non-Gaussian) probability distributions on any appropriate target state space $E$ as long as the densities and the integral are all well defined. In our linear-Gaussian case, as shown in Chong et al. (1983, 1986, 1990), etc., the tracklet fusion rule is defined by

$W_i = V_F V_i^{-1},\quad i = 1, 2;\qquad W_0 = -V_F\bar{V}^{-1} = I - W_1 - W_2$

(6.10)

with the declared fused estimation error covariance matrix $V_F = (V_1^{-1} + V_2^{-1} - \bar{V}^{-1})^{-1}$, where $\bar{V} = E((\bar{x} - x)(\bar{x} - x)^T)$ is the a priori covariance matrix.

Unfortunately, the tracklet fusion rule may not be exact in the sense that $\hat{p}_F(x) = P(x\,|\,(y_{1j})_{j=1}^{N_1}, (y_{2j})_{j=1}^{N_2})$ unless the target dynamics are deterministic, i.e., $B_t \equiv 0$ in (6.2). Nonetheless, the extrapolation of the a priori covariance matrix by (6.6) takes the effects of the process noise into account. However, the declared fused estimation error covariance matrix $V_F = (V_1^{-1} + V_2^{-1} - \bar{V}^{-1})^{-1}$ is often not honest and generally optimistic.

The fusion rule (6.10) can be rewritten as

$\begin{cases}V_F^{-1}\hat{x}_F = \bar{V}^{-1}\bar{x} + \tilde{V}_1^{-1}z_1 + \tilde{V}_2^{-1}z_2\\ V_F^{-1} = \bar{V}^{-1} + \tilde{V}_1^{-1} + \tilde{V}_2^{-1}\end{cases}$

(6.11)

with, for i = 1, 2,

$\begin{cases}\tilde{V}_i^{-1}z_i = V_i^{-1}\hat{x}_i - \bar{V}^{-1}\bar{x}\\ \tilde{V}_i^{-1} = V_i^{-1} - \bar{V}^{-1}\end{cases}$

(6.12)

Equation 6.11 has the form of a Kalman filter update equation that updates the state estimate by the two conditionally independent measurements, $z_1$ and $z_2$. Equation 6.12 can be interpreted as the decorrelation of the two local estimates, $\hat{x}_1$ and $\hat{x}_2$, by removing the prior information represented by the pair $(\bar{x}, \bar{V})$ from $\hat{x}_1$ and $\hat{x}_2$. The decorrelated estimates, $z_1$ and $z_2$, defined by Equation 6.12, are called the equivalent measurements, the pseudo-measurements, or the state estimates of tracklets (a tracklet being a track segment, i.e., a portion of a track small enough to be represented by a single Gaussian distribution but large enough to have such a full-state representation, defined by $(z_i, \tilde{V}_i)$) (Belkin et al. 1993, Lobbia and Kent 1994, Drummond 1997a, 1997b), etc. This is the reason why we call this rule the tracklet fusion rule. Because Equation 6.11 is in the information matrix form of the Kalman filter, a distributed track fusion algorithm using Equation 6.11 with Equation 6.12 is sometimes called the information filter or information matrix filter (Chang et al. 2002).

The decorrelation formula (6.12) also gives us a convenient way of representing a tracklet or a track segment by the pair $(z_i, \tilde{V}_i)$ of the equivalent measurement and its measurement error covariance matrix, or equivalently by $(\tilde{V}_i^{-1}z_i, \tilde{V}_i^{-1})$. From this point of view, a distributed track fusion algorithm that uses this pair, $(z_i, \tilde{V}_i)$ or $(\tilde{V}_i^{-1}z_i, \tilde{V}_i^{-1})$, to represent an approximately conditionally independent unit of information is called the channel filter in Durrant-Whyte et al. (1990) and Rao et al. (1993).
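A hedged sketch of the one-time tracklet (equivalent-measurement) fusion of Equations 6.11 and 6.12, assuming Gaussian local estimates and a common prior (all names illustrative), might look as follows:

import numpy as np

def tracklet_fuse(x1, V1, x2, V2, x_bar, V_bar):
    # Remove the shared prior from each local estimate (Eq. 6.12) ...
    V1i, V2i, Vbi = map(np.linalg.inv, (V1, V2, V_bar))
    info1, Info1 = V1i @ x1 - Vbi @ x_bar, V1i - Vbi
    info2, Info2 = V2i @ x2 - Vbi @ x_bar, V2i - Vbi
    # ... then combine in information form (Eq. 6.11).
    VF = np.linalg.inv(Vbi + Info1 + Info2)
    xF = VF @ (Vbi @ x_bar + info1 + info2)
    return xF, VF   # declared VF may be optimistic when process noise is present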

6.2.1.1.3    Minimum-Variance (MV) Fusion Rule

Let $\hat{x}_F = E(x\,|\,\hat{x}_1, \hat{x}_2)$, i.e., the conditional expectation of the target state $x = x(t_F)$ at the fusion time $t_F$, given the two local state estimates, $\hat{x}_1$ and $\hat{x}_2$. It is well known (cf., e.g., Rhodes 1971) that this estimate minimizes the expected squared estimation error* $E(\|\hat{x} - x\|^2\,|\,\hat{x}_1, \hat{x}_2)$ among all estimates $\hat{x}$ defined as measurable functions of $\hat{x}_1$ and $\hat{x}_2$. Because of the Gaussianness, the fused estimate, $\hat{x}_F = E(x\,|\,\hat{x}_1, \hat{x}_2)$, is also the maximum a posteriori (MAP) estimate of $x$ conditioned by $\hat{x}_1$ and $\hat{x}_2$, and is given by (6.4) using $[W_1\ W_2] = V_{xz}V_{zz}^{-1}$ and $W_0 = I - W_1 - W_2$, with

$V_{zz} = \begin{bmatrix}V_1 + \bar{V} - V_{01}^T - V_{01} & V_{12} + \bar{V} - V_{01}^T - V_{02}\\ V_{12}^T + \bar{V} - V_{02}^T - V_{01} & V_2 + \bar{V} - V_{02}^T - V_{02}\end{bmatrix},\qquad V_{xz} = \begin{bmatrix}\bar{V} - V_{01} & \bar{V} - V_{02}\end{bmatrix}$

(6.13)

where

$V_{zz}$ is the self-covariance matrix of $z = [\hat{x}_1^T\ \hat{x}_2^T]^T$

$V_{xz}$ is the cross-covariance between $x$ and $z$

We should note that, since $\hat{x}_i - \bar{x} = \hat{x}_i - x + x - \bar{x} = \tilde{x}_i - (\bar{x} - x)$, $i = 1, 2$, we have $E((\hat{x}_i - \bar{x})(\hat{x}_j - \bar{x})^T) = V_{ij} + \bar{V} - V_{0i}^T - V_{0j}$, for $i, j = 1, 2$, with $V_i = V_{ii}$, and $E((x - \bar{x})(\hat{x}_i - \bar{x})^T) = \bar{V} - V_{0i}$, for $i = 1, 2$. Therefore, while the Bar-Shalom–Campo rule considers only the correlation caused by the common process noise, and the tracklet rule explicitly considers only the correlation caused by the use of the common a priori information, the MV rule considers both and provides the optimal estimate as the conditional expectation given $z = [\hat{x}_1^T\ \hat{x}_2^T]^T$. The declared fused estimation error covariance matrix, $V_F = \bar{V} - V_{xz}V_{zz}^{-1}V_{xz}^T$, is honest.

As shown in Zhu and Li (1999) and Li et al. (2003), this MV fusion rule is also the best linear unbiased estimate (BLUE) obtained by choosing the best weights $(W_0, W_1, W_2)$ to minimize the estimation error variance under the constraint $W_0 + W_1 + W_2 = I$, and hence we may call it the BLUE fusion rule. The Bar-Shalom–Campo rule is obtained as the BLUE rule with more restriction, i.e., by the minimization with respect to $(W_1, W_2)$, with the constraints $W_0 = 0$ and $W_1 + W_2 = I$.
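For completeness, a small sketch of the MV (BLUE) fusion rule of (6.13), assuming the covariance inputs of Section 6.2.1.2 are available (the function name and the NumPy layout are ours), could read:

import numpy as np

def mv_fuse(x1, x2, x_bar, V1, V2, V12, V01, V02, V_bar):
    # Self- and cross-covariances of z = [x1; x2] and of (x, z), Eq. 6.13.
    Vzz = np.block([
        [V1 + V_bar - V01.T - V01,    V12 + V_bar - V01.T - V02],
        [V12.T + V_bar - V02.T - V01, V2 + V_bar - V02.T - V02],
    ])
    Vxz = np.hstack([V_bar - V01, V_bar - V02])
    W12 = Vxz @ np.linalg.inv(Vzz)           # [W1 W2]
    n = x_bar.size
    W1, W2 = W12[:, :n], W12[:, n:]
    W0 = np.eye(n) - W1 - W2
    xF = W0 @ x_bar + W1 @ x1 + W2 @ x2
    VF = V_bar - Vxz @ np.linalg.inv(Vzz) @ Vxz.T   # honest covariance
    return xF, VF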

6.2.1.1.4    Bayesian Maximum-Likelihood Fusion (BML) Rule

Define $z = [\hat{x}_1^T\ \hat{x}_2^T]^T$ as before. Consider $P(z\,|\,x)$, which is the conditional probability density of $z$ (the data) given $x$ (the state to be estimated), i.e., the likelihood function in the strict Bayesian sense. Reversing the roles of $x$ and $z$, we have $P(z\,|\,x) = g(z - \hat{z};\ \hat{V}_{zz})$ with

$\begin{cases}\hat{z} = \bar{z} + V_{zx}\bar{V}^{-1}(x - \bar{x})\\ \hat{V}_{zz} = V_{zz} - V_{zx}\bar{V}^{-1}V_{xz}\end{cases}$

(6.14)

Hence, the likelihood function P(z|x) (as a function of x) is maximized at

$\hat{x}_F = \bar{x} + \bar{V}\left(V_{xz}\hat{V}_{zz}^{-1}V_{zx}\right)^{-1}V_{xz}\hat{V}_{zz}^{-1}(z - \bar{z})$

(6.15)

Thus the maximum likelihood estimate of the target state $x$ given the local estimates $\hat{x}_1$ and $\hat{x}_2$ can be expressed by (6.4) with the weight matrices calculated as $[W_1\ W_2] = M\,V_{xz}\hat{V}_{zz}^{-1}$ with $M = \bar{V}(V_{xz}\hat{V}_{zz}^{-1}V_{xz}^T)^{-1}$ and $W_0 = I - W_1 - W_2$, instead of the MV fusion weights $[W_1\ W_2] = V_{xz}V_{zz}^{-1}$.

Since the likelihood function $P(z\,|\,x)$ is the likelihood function in the strict Bayesian sense, we call the fusion rule defined by (6.15) the Bayesian maximum likelihood rule, or BML rule.
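A corresponding sketch of the BML weights of (6.14) and (6.15), with $V_{xz}$ and $V_{zz}$ computed as in the MV sketch above (again, names are illustrative), is:

import numpy as np

def bml_weights(Vxz, Vzz, V_bar):
    # Likelihood covariance of Eq. 6.14 and weight matrices of Eq. 6.15.
    Vzx = Vxz.T
    Vzz_hat = Vzz - Vzx @ np.linalg.inv(V_bar) @ Vxz
    M = V_bar @ np.linalg.inv(Vxz @ np.linalg.inv(Vzz_hat) @ Vzx)
    return M @ Vxz @ np.linalg.inv(Vzz_hat)   # [W1 W2]; W0 = I - W1 - W2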

6.2.1.2    Calculation of Cross-Covariance Matrix

For all the fusion rules except the Speyer (simple convex combination) fusion rule and the tracklet fusion rule, it is necessary to calculate the cross-covariance matrix between the estimation errors, $\tilde{x}_1$ and $\tilde{x}_2$, of the local estimates, $\hat{x}_1$ and $\hat{x}_2$. The calculation was described in Bar-Shalom (1981) for the synchronous sensor case, and can be easily extended to nonsynchronous cases as shown in Mori et al. (2002). To do this, let $T = \bigcup_{i=1}^2\{t_{ik}\}_{k=1}^{N_i}$ be the union of the observation times of the two sensors, $(T_k)_{k=1}^N$ be the unique enumeration of $T$ such that $T_1 < T_2 < \cdots < T_N$, and $I_k = \{i \in \{1, 2\}\,|\,T_k = t_{ik'}\ \text{for some}\ k'\}$ for every $k$.

Let $\bar{V}_{12k}$ and $\hat{V}_{12k}$ be the cross-covariance matrices between the estimation errors of the state estimates of $x(T_k)$ based on $\{y_{1k'}\,|\,t_{1k'} \le T_{k-1}\}$ and $\{y_{2k'}\,|\,t_{2k'} \le T_{k-1}\}$, and between those based on $\{y_{1k'}\,|\,t_{1k'} \le T_k\}$ and $\{y_{2k'}\,|\,t_{2k'} \le T_k\}$, respectively. Then we have

$\bar{V}_{12k} = \Phi(T_k, T_{k-1})\,\hat{V}_{12(k-1)}\,\Phi(T_k, T_{k-1})^T + Q(T_k, T_{k-1})$

(6.16)

and

$\hat{V}_{12k} = \begin{cases}(I - K_{1k'}H_{1k'})\,\bar{V}_{12k} & \text{if } I_k = \{1\}\ \text{and}\ t_{1k'} = T_k\\ \bar{V}_{12k}\,(I - K_{2k''}H_{2k''})^T & \text{if } I_k = \{2\}\ \text{and}\ t_{2k''} = T_k\\ (I - K_{1k'}H_{1k'})\,\bar{V}_{12k}\,(I - K_{2k''}H_{2k''})^T & \text{if } I_k = \{1, 2\}\ \text{and}\ t_{1k'} = t_{2k''} = T_k\end{cases}$

(6.17)

together with an appropriate initial condition, where $K_{1k'}$ and $K_{2k''}$ are the Kalman filter gain matrices used by sensors 1 and 2 to process $y_{1k'}$ and $y_{2k''}$, respectively. The cross-covariance matrix $V_{12}$ between $\tilde{x}_1$ and $\tilde{x}_2$ can be obtained at the end of this recursion, with an extra extrapolation by (6.16) at the end, if necessary.

The cross-covariance matrices $\hat{V}_{0ik}$, between the estimation error of the state estimate of $x(T_k)$ conditioned on $\{y_{ik'}\,|\,t_{ik'} \le T_k\}$ and the a priori extrapolation error $E(x(T_k)) - x(T_k)$, can be calculated similarly by the extrapolation equation (6.16) and the second update equation of (6.17), to obtain $V_{01}$ and $V_{02}$ (which are necessary for calculating the MV and BML fusion rules). The local estimation error covariance matrices, $V_1$ and $V_2$, are of course provided by the local Kalman filters, while the a priori extrapolation error covariance matrix $\bar{V}$ is calculated by (6.6).
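The recursion (6.16) and (6.17) is straightforward to code. The following sketch covers only the simplest synchronous case, in which both sensors measure at every $T_k$ (the third case of (6.17)); the argument names are illustrative:

import numpy as np

def cross_cov_recursion(V12_0, Phis, Qs, K1s, H1s, K2s, H2s):
    # Propagate the inter-sensor cross-covariance through N cycles.
    n = V12_0.shape[0]
    V12 = V12_0
    for Phi, Q, K1, H1, K2, H2 in zip(Phis, Qs, K1s, H1s, K2s, H2s):
        V12 = Phi @ V12 @ Phi.T + Q                                   # Eq. 6.16
        V12 = (np.eye(n) - K1 @ H1) @ V12 @ (np.eye(n) - K2 @ H2).T   # Eq. 6.17
    return V12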

6.2.1.3    Covariance Intersection Methods

The covariance intersection (CI) method was introduced as a method for fusing two estimate-covariance pairs, $(\hat{x}_1, V_1)$ and $(\hat{x}_2, V_2)$, when the cross-covariance $V_{12}$ of the estimation errors is not known or not available. The CI approach is a heuristic approach that adjusts the commonly used simple weighting, i.e., the Speyer fusion rule (6.9), as

$\begin{cases}V_F^{-1}\hat{x}_F = \alpha V_1^{-1}\hat{x}_1 + (1 - \alpha)V_2^{-1}\hat{x}_2\\ V_F^{-1} = \alpha V_1^{-1} + (1 - \alpha)V_2^{-1}\end{cases}$

(6.18)

with a fixed scalar parameter $\alpha \in [0, 1]$, i.e., (6.4) with $W_0 = 0$, $W_1 = \alpha V_F V_1^{-1}$, and $W_2 = (1 - \alpha)V_F V_2^{-1}$. The term "covariance intersection" originates from the fact that the ellipsoid* $\{x \in E\,|\,\|x\|_{V_F^{-1}}^2 \le \chi^2\}$ contains the intersection of the two ellipsoids, $\{x \in E\,|\,\|x\|_{V_i^{-1}}^2 \le \chi^2\}$, $i = 1, 2$, for any given $\chi^2 > 0$ (Nicholson et al. 2001, 2002, Julier et al. 2006). But the terminology may appear rather confusing because the ellipsoid $\{x \in E\,|\,\|x - \hat{x}_F\|_{V_F^{-1}}^2 \le \chi^2\}$ is not necessarily contained in the intersection $\bigcap_{i=1}^2\{x \in E\,|\,\|x - \hat{x}_i\|_{V_i^{-1}}^2 \le \chi^2\}$.

The CI rule (6.18) can be viewed as a Gaussian case of the fusion rule,

$\hat{p}_F(x) = \frac{p_1(x)^{\alpha}\,p_2(x)^{1-\alpha}}{\int_E p_1(x)^{\alpha}\,p_2(x)^{1-\alpha}\,dx}$

(6.19)

which is called the Chernoff fusion rule in Hurley (2002) and Julier (2006), because the denominator of the right-hand side of (6.19) is known as Chernoff information (Cover and Thomas 2006). There are several proposals about how to choose the parameter α ∈ [0, 1]. A couple of choices for the scalar weight α are shown below.

6.2.1.3.1    Shannon Fusion Rule

Consider the continuous-random-variable version of the entropy, known as the differential entropy or the continuous entropy of the fused probability distribution,

$H(\hat{p}_F) = -\int_E \ln(\hat{p}_F(x))\,\hat{p}_F(x)\,dx$

(6.20)

The fusion rule that minimizes (6.20) can be called the minimum entropy fusion rule or the Shannon rule. In the case where $\hat{p}_F$ is Gaussian with the CI covariance matrix $V_F$, we have $H(\hat{p}_F) = (1/2)(\ln(\det(2\pi V_F)) + \dim(E))$, the minimization of which becomes the minimization of the determinant $\det(V_F)$. The resulting fusion rule (6.18) is called the Shannon rule in Chang et al. (2008).

6.2.1.3.2    Chen–Arambel–Mehra Fusion Rule

This fusion rule is defined as the one that minimizes the estimation error mean square, i.e., the trace of the fused covariance matrix $V_F$ defined in (6.18), $\mathrm{tr}(V_F) = \mathrm{tr}((\alpha V_1^{-1} + (1 - \alpha)V_2^{-1})^{-1})$, as a function of $\alpha \in [0, 1]$. Let $\hat{\alpha} \in [0, 1]$ be such that the trace $\mathrm{tr}(V_F)$ is minimized at $\alpha = \hat{\alpha}$. Chen et al. (2002) show a very interesting interpretation of this optimal $\hat{\alpha}$: the corresponding CI fusion gain matrix pair, $W_1 = \hat{\alpha}V_F V_1^{-1}$ and $W_2 = (1 - \hat{\alpha})V_F V_2^{-1}$, is an optimal solution that minimizes

$h(W_1, W_2) = \mathrm{tr}(W_1 V_1 W_1^T) + \mathrm{tr}(W_2 V_2 W_2^T)$

(6.21)

subject to the unbiasedness condition $W_1 + W_2 = I$. We call the covariance intersection rule with this $\hat{\alpha}$ the Chen–Arambel–Mehra fusion rule.
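Both CI variants can be realized with a simple scalar search over α, as in the following sketch (a grid search is used purely for illustration; the objective 'det' corresponds to the Shannon rule and 'trace' to the Chen–Arambel–Mehra rule):

import numpy as np

def ci_fuse(x1, V1, x2, V2, objective="det"):
    # Covariance intersection (6.18) with alpha chosen by a coarse grid search.
    I1, I2 = np.linalg.inv(V1), np.linalg.inv(V2)
    best = None
    for a in np.linspace(0.0, 1.0, 101):
        VF = np.linalg.inv(a * I1 + (1.0 - a) * I2)
        cost = np.linalg.det(VF) if objective == "det" else np.trace(VF)
        if best is None or cost < best[0]:
            best = (cost, a, VF)
    _, a, VF = best
    xF = VF @ (a * I1 @ x1 + (1.0 - a) * I2 @ x2)
    return xF, VF, a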

6.2.1.4    Optimality of Track Fusion

The track fusion rules described so far obtain a target state estimate that is the "best" in some sense, e.g., maximum likelihood, maximum a posteriori (MAP), minimum variance, etc., given the two local target state estimates, either explicitly or implicitly, under the assumption that the local estimates are optimal in the usual sense, i.e., the outputs of the local Kalman filters. However, because the conditioning uses only the local estimates, which are not sufficient statistics when the target dynamics are nondeterministic, e.g., with process noise in the target model to account for target maneuvers, the performance of the fused estimate is generally inferior* to that of central processing using all the raw sensor data, $((y_{ik})_{k=1}^{N_i})_{i=1}^2$. For this reason, the performance of a track fusion rule should be compared with that of central processing, rather than with the MAP (or the MV) fusion rule, whose optimality is limited to conditioning on the local state estimates, $\hat{x}_1$ and $\hat{x}_2$.

Furthermore, there is one more compelling reason why the performance of every track fusion rule should be compared with the centralized tracking performance: for the first time in the long history of track fusion studies, it was recently shown that reconstruction of the globally optimal state estimate only by fusing or combining local estimates is possible. Although such reconstruction can obviously be done in cases with deterministic dynamics or full-rate communication, it is remarkable that the fusion rule in Koch (2008, 2009) and Govaers and Koch (2010, 2011), which we may call the Koch–Govaers fusion rule, achieves the global optimality after each communication, with asynchronous communication and an arbitrary communication rate.

Using the notation in Section 6.2.1.2, the Koch–Govaers fusion rule requires local estimates, $((\bar{x}_{ik})_{k=1}^N)_{i=1}^2$ and $((\hat{x}_{ik})_{k=1}^N)_{i=1}^2$, each $\bar{x}_{ik}$ paired with the covariance matrix $\bar{V}_k$ and each $\hat{x}_{ik}$ paired with the covariance matrix $\hat{V}_{ik}$, to satisfy

$P(x(T_k)\,|\,\{y_{ik'}\,|\,t_{ik'} \le T_{k-1},\ i = 1, 2\}) = C^{-1}\prod_{i=1}^2 g(x(T_k) - \bar{x}_{ik};\ \bar{V}_k)$

(6.22)

and

$P(x(T_k)\,|\,\{y_{ik'}\,|\,t_{ik'} \le T_k,\ i = 1, 2\}) = C^{-1}\prod_{i=1}^2 g(x(T_k) - \hat{x}_{ik};\ \hat{V}_{ik})$

(6.23)

The recent series of papers (Koch 2009, Govaers and Koch 2010, 2011) show that those two requirements can be satisfied by the extrapolation step

$\begin{cases}\bar{x}_{ik} = 2\,\Phi(T_k, T_{k-1})\Bigl(\sum_{j=1}^2 \hat{V}_{j(k-1)}^{-1}\Bigr)^{-1}\hat{V}_{i(k-1)}^{-1}\,\hat{x}_{i(k-1)}\\ \bar{V}_k = 2\Bigl(\Phi(T_k, T_{k-1})\Bigl(\sum_{j=1}^2 \hat{V}_{j(k-1)}^{-1}\Bigr)^{-1}\Phi(T_k, T_{k-1})^T + Q(T_k, T_{k-1})\Bigr)\end{cases}$

(6.24)

and the updating step

$\begin{cases}\hat{x}_{ik} = \bar{x}_{ik} + K_{ik'}(y_{ik'} - H_{ik'}\bar{x}_{ik})\\ \hat{V}_{ik} = (I - K_{ik'}H_{ik'})\,\bar{V}_k\end{cases}\ \text{if } T_k = t_{ik'};\qquad \begin{cases}\hat{x}_{ik} = \bar{x}_{ik}\\ \hat{V}_{ik} = \bar{V}_k\end{cases}\ \text{otherwise}$

(6.25)

with $K_{ik'} = \bar{V}_k H_{ik'}^T(H_{ik'}\bar{V}_k H_{ik'}^T + R_{ik'})^{-1}$ whenever $T_k = t_{ik'}$, for each sensor $i = 1, 2$. The initialization can be done in any way that satisfies the condition, either (6.22) or (6.23), for an appropriate $k$.
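The following is a tentative sketch of one cycle of the Koch–Govaers scheme for the special case of two synchronous sensors, based on our reading of (6.24) and (6.25); it is not the authors' implementation, and all names are illustrative:

import numpy as np

def kg_predict(xh1, Vh1, xh2, Vh2, Phi, Q):
    # Globalized prediction step (Eq. 6.24), carried out at each sensor;
    # note that it needs the covariance matrices of both sensors.
    P = np.linalg.inv(np.linalg.inv(Vh1) + np.linalg.inv(Vh2))
    xb1 = 2.0 * Phi @ P @ np.linalg.inv(Vh1) @ xh1
    xb2 = 2.0 * Phi @ P @ np.linalg.inv(Vh2) @ xh2
    Vb = 2.0 * (Phi @ P @ Phi.T + Q)
    return xb1, xb2, Vb

def kg_update(xb, Vb, y, H, R):
    # Ordinary Kalman update of the globalized prediction (Eq. 6.25).
    K = Vb @ H.T @ np.linalg.inv(H @ Vb @ H.T + R)
    return xb + K @ (y - H @ xb), (np.eye(Vb.shape[0]) - K @ H) @ Vb

def kg_fuse(xh1, Vh1, xh2, Vh2):
    # Combining the two globalized tracks (the product in Eq. 6.23)
    # recovers the centralized estimate.
    VF = np.linalg.inv(np.linalg.inv(Vh1) + np.linalg.inv(Vh2))
    xF = VF @ (np.linalg.inv(Vh1) @ xh1 + np.linalg.inv(Vh2) @ xh2)
    return xF, VF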

We should note two crucial facts: (i) neither $((\bar{x}_{ik})_{k=1}^N)_{i=1}^2$ nor $((\hat{x}_{ik})_{k=1}^N)_{i=1}^2$ is necessarily a locally optimal estimate in any sense, and (ii) both the extrapolation (6.24) and the update (6.25) require knowledge of the local covariance matrices $(\hat{V}_{ik})_{i=1}^2$, not only a sensor's own but also those of the other sensor. In other words, global optimality is obtained by sacrificing local optimality, and we need extensive knowledge, in terms of covariance matrices, of the other local processor. As far as (i) is concerned, however, local optimality, if necessary, can be maintained by locally running the Kalman filter for each sensor in parallel with the estimates $((\bar{x}_{ik})_{k=1}^N)_{i=1}^2$ and $((\hat{x}_{ik})_{k=1}^N)_{i=1}^2$ defined by (6.24) and (6.25). The latter requirement (ii), however, looks excessive at first glance.

We should note, however, that for our linear-Gaussian systems, all the estimation error (self and cross) covariance matrices, as well as all the crucial parameters such as the Kalman filter gain matrices and the innovations covariance matrices, are constant (i.e., not random). Hence, in probability theory, they are all known, i.e., a part of the problem definition or the problem statement. In practice, however, the exchange or transfer of such knowledge may require significant communication bandwidth that may or may not be available. Therefore, realistically, those parameters may have to be considered as a part of the system design parameters, i.e., off-line information communicated "beforehand." Otherwise, communicating, e.g., each measurement error covariance matrix for every local observation to a fusion center, or to each other local agent, may be equivalent to or exceed full measurement communication. Furthermore, whenever the linearity or the Gaussianness is questionable and extended Kalman filters are needed, the covariance matrices may become data-dependent, and hence at least some adjustment may become necessary. Thus the feasibility of this "optimal" track fusion algorithm remains to be demonstrated in practical situations.

6.2.1.5    Performance Comparison of One-Time Track Fusion Rules

In order to characterize the various track fusion rules and to compare them with each other, we would like to use simple yet realistic examples. For this purpose, we chose a four-dimensional (two-dimensional position, two-dimensional velocity) state space, with the Ornstein–Uhlenbeck model, i.e., $A_t \equiv \begin{bmatrix}0 & I\\ 0 & -\beta I\end{bmatrix}$, $B_t \equiv \begin{bmatrix}0\\ \sqrt{q}\,I\end{bmatrix}$, and $\bar{V}_0 = \begin{bmatrix}\sigma_p^2 I & 0\\ 0 & \sigma_v^2 I\end{bmatrix}$ ($\beta > 0$ and $q = 2\beta\sigma_v^2 > 0$), and the two-dimensional position-only observation,* $H_{ik} \equiv [I\ \ 0]$. The Ornstein–Uhlenbeck model can approximate a realistic target maneuver behavior known as random-tour behavior, with $\beta^{-1}$ as the mean time between two maneuvers, or the mean length of each constant-velocity leg (Washburn 1969, Vebber 1991). For the sake of simplicity, we use synchronous, uniform sampling (measurements), i.e., $\Delta t \equiv t_{i(k+1)} - t_{ik}$ for $k = 1, \ldots, N - 1$, $t_0 = t_{i1}$, and $t_F = t_{iN}$, for $i = 1, 2$.

It is customary to use the so-called almost-constant-velocity model or small-white-noise model, i.e., β = 0, to model target maneuvers. Since we have chosen the Ornstein–Uhlenbeck model instead, we should provide some explanation. The Ornstein–Uhlenbeck dynamics are usually determined by two parameters, the inverse time constant β (whose reciprocal can be considered the mean time between two maneuvers) and the white noise intensity q that drives the variations in the velocity from the initial condition. However, if the target velocity is, a priori, a stationary process defined by the stochastic differential equation $dv(t) = -\beta v(t)\,dt + \sqrt{q}\,dw(t)$ with the stationary covariance matrix $\sigma_v^2 I$, the two parameters are constrained as $q = 2\beta\sigma_v^2$. Using the stationary velocity process with standard deviation $\sigma_v$ reflects the physical reality of real moving objects, in particular on the ground, on the water surface, or under water. Avoidance of an ever-increasing a priori velocity uncertainty (which would contradict reality) is the major motivation for using the Ornstein–Uhlenbeck model.
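For reference, the discrete-time transition matrix Φ(Δt) and process noise covariance Q(Δt) of this Ornstein–Uhlenbeck model can be computed numerically, e.g., by Van Loan's method. The sketch below assumes the model exactly as stated above, with $B_t = [0\ \ \sqrt{q}\,I]^T$, and uses SciPy's matrix exponential:

import numpy as np
from scipy.linalg import expm

def ou_discretize(beta, sigma_v, dt, dim=2):
    # Discretize the position-velocity Ornstein-Uhlenbeck model of Eq. 6.2.
    q = 2.0 * beta * sigma_v**2                   # q = 2*beta*sigma_v^2
    Z, I = np.zeros((dim, dim)), np.eye(dim)
    A = np.block([[Z, I], [Z, -beta * I]])        # A_t
    Qc = np.block([[Z, Z], [Z, q * I]])           # B_t B_t^T
    n = 2 * dim
    M = np.block([[-A, Qc], [np.zeros((n, n)), A.T]]) * dt
    E = expm(M)
    Phi = E[n:, n:].T                             # fundamental solution matrix
    Q = Phi @ E[:n, n:]                           # Eq. 6.7 over one step
    return Phi, Q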

FIGURE 6.1 Characterization of Ornstein–Uhlenbeck model: (a) Increase of a priori position uncertainty in time, (b) a priori position uncertainty as function of normalized white noise intensity, (c) size of state uncertainty ellipsoid as function of normalized white noise intensity.

Figure 6.1 presents the key features of the Ornstein–Uhlenbeck model. Figure 6.1a shows the increase in time of the a priori root mean square (RMS) position under the Ornstein–Uhlenbeck model, which increases as $\beta^{-1}(1 - e^{-\beta t})\sigma_v$ (which can be approximated as $\sigma_v t$ when β is small, and approaches 0 when β is large) due to the velocity uncertainty, and due to the white noise intensity as $\sigma_v\sqrt{2\beta^{-1}t}$ for large β and $\sqrt{(2/3)\beta\sigma_v^2 t^3}$ for small β. The increase as a function of time is generally much slower than with the small-white-noise model. Figure 6.1b shows the a priori positional RMS at a fixed time ΔT as a function of the normalized white noise intensity $q/(\sigma_v^2/\Delta T)$. We should note that, as q ↓ 0, since $q = 2\beta\sigma_v^2$, we have β ↓ 0, i.e., the model approaches a deterministic system, and that, as the white noise intensity increases, q ↑ ∞, β increases as β ↑ ∞, and the a priori position uncertainty approaches a stationary value.

As β ↑ ∞ (hence q ↑ ∞), the average time between maneuvers approaches zero, and eventually there are so many maneuvers in every direction that their effects cancel each other, resulting in an almost stationary positional RMS. Figure 6.1c shows the volume of the state uncertainty hyper-ellipsoid at a fixed time ΔT (normalized by the initial state hyper-volume) as a function of the normalized white noise intensity $q/(\sigma_v^2/\Delta T)$. It is interesting to see that, for both small and large q (consequently small and large β), the position-velocity joint uncertainty volume approaches the volume at the initial time, and the maximum of the volume is attained in the middle. For small β, the position-velocity cross-covariance makes the state uncertainty volume time-invariant, while for large β, the position-velocity cross-covariance disappears and both the position and velocity covariance matrices become stationary.

6.2.1.5.1    Supplementary Sensor Case

Let us consider two sensors that have almost the same performance characteristics, so that the addition of the second sensor to the first is supplementary. As an extreme case of such a situation, we assume, with N = 10 (track fusion after each sensor accumulates 10 measurements), $R_{ik} \equiv \begin{bmatrix}\sigma_m^2 & 0\\ 0 & \sigma_m^2\end{bmatrix}$, $k = 1, \ldots, N$, $i = 1, 2$, which implies $V_1 = V_2$.

In this extreme case, the Bar-Shalom–Campo rule is reduced to W1 = W2 = (1/2)I, which is the same as the Speyer rule. In other words, no matter how big the cross-covariance between the two local track estimation errors is, the inter-sensor cross-covariance is irrelevant to the fusion rule.

Moreover, because $V_1 = V_2$, any value of α in the unit interval [0, 1] provides the same declared $\hat{V}_F$ for any rule of the CI method. Although the actual fused estimation error covariance may change with the weight α, considering the symmetry, it is reasonable to choose α = 1/2, which makes both the Shannon and the Chen–Arambel–Mehra rules* the same as the Bar-Shalom–Campo and Speyer rules. As is well known, however, the fused estimation error covariance declared by any CI rule is the same as each local estimation error covariance and is extremely overestimated (overly pessimistic).

Figure 6.2 compares the performance of four fusion rules: the Bar-Shalom–Campo rule (also the Speyer and CI rules), the MV rule, the tracklet rule, and the BML rule, with the centralized tracking performance, when the normalized white noise intensity, $q/(\sigma_v^2/\Delta t)$, is varied over a wide range. The other key parameters are set as $\sigma_p = 10\sigma_m$ (the initial position standard deviation) and $\sigma_v = 3(\sigma_m/\Delta t)$ (the stationary velocity standard deviation).

First of all, we should note that the deterioration of the estimation performance from centralized tracking is very small, i.e., less than 4% for the Bar-Shalom–Campo, minimum variance (MV), and tracklet rules, over a wide range of process noise intensity, which is consistent with the observations reported in Bar-Shalom and Campo (1986) and Mori et al. (2002). The apparently poor performance of the fusion rule using the likelihood function in the strict Bayesian sense, labeled the BML rule in the figure, is, however, rather surprising. The deterioration of the performance of the BML rule from centralized tracking is within 10%–30% for both the position and velocity error RMS for small process noise intensity, when $q < \sigma_v^2/\Delta t$. However, when $q > \sigma_v^2/\Delta t$, the estimation errors, in particular for the velocity estimates, deteriorate and seem to increase rapidly.

FIGURE 6.2 Percent increase of RMS estimation errors over centralized tracking performance as function of normalized process noise intensity: supplementary sensors: (a) RMS position estimation error and (b) RMS velocity estimation error.

As mentioned in Section 6.2.1.1, the BML rule is defined using the likelihood function in the strict Bayesian sense, i.e., the conditional probability density $P(\hat{x}_1, \hat{x}_2\,|\,x)$ of the data given the target state $x$ to be estimated. In order for the BML rule to be close to optimal in the minimum-variance sense, the information about $x$ carried by the local estimates must dominate the a priori information, i.e., we must have $\bar{V} \ll V_{xz}(V_{zz} - V_{zx}\bar{V}^{-1}V_{xz})^{-1}V_{zx}$, which is apparently violated for large process noise intensities. As mentioned in Section 6.2.1.1, the Bar-Shalom–Campo rule uses a likelihood function in the classical statistics sense, and apparently its performance is much better than that of the ML estimate using the likelihood function in the strict Bayesian sense. In Figure 6.2, the full extent of the BML rule performance is not shown, since its bad performance would otherwise obscure the comparison of the performance of the other three fusion rules.

The MV rule provides optimal performance in terms of the estimation error variance, as shown in Figure 6.2. It is, however, interesting to see that the tracklet rule, which does not use the inter-sensor cross-covariance matrix, shows better performance in terms of position estimation than the Bar-Shalom–Campo rule, which uses the cross-covariance, while the order of the performance is reversed for the velocity estimation. This trend holds generally true for the complementary sensor and repeated fusion cases, as shown later. All three fusion rules, Bar-Shalom–Campo, tracklet, and MV, converge to the centralized tracking performance both when q ↓ 0 and when q ↑ ∞, although the Bar-Shalom–Campo rule, which does not use the a priori target state information, exhibits a small bias as q ↓ 0.

Figure 6.3 shows a similar comparison when we vary the initial state position standard deviation, σp, which represents the a priori information, in a wide range.

FIGURE 6.3 Percent increase of RMS estimation errors over centralized tracking performance as function of normalized initial position standard deviation—supplementary sensors: (a) RMS position estimation error and (b) RMS velocity estimation error.

For this figure, the process noise intensity and the stationary velocity standard deviation are kept constant at $q = 0.1(\sigma_v^2/\Delta t)$ and $\sigma_v = 3(\sigma_m/\Delta t)$, respectively. The Bar-Shalom–Campo rule does not use the initial state (a priori) information, and its performance is invariant with respect to $\sigma_p$. As in Figure 6.2, obtained by varying the process noise intensity, the performance of the BML rule using the likelihood function in the strict Bayesian sense (which we may call the Bayesian likelihood function) is noticeably worse than that of the Bar-Shalom–Campo rule, which is a maximum likelihood estimate using a likelihood function in the classical statistics sense. In particular, the BML rule exhibits more than a 20% increase in the velocity estimation error RMS over centralized tracking for small initial position uncertainty (small $\sigma_p$), although Figure 6.3b does not show that part. Again, the position estimation performance of the tracklet rule is consistently better than that of the Bar-Shalom–Campo rule, and the order of the performance is reversed for the velocity estimation.

We performed similar studies by changing the stationary velocity standard deviation σv, and did not observe any significant effects on the performance of any of the fusion rules.

6.2.1.5.2    Complementary Sensor Case

Let us consider cases where the two sensors compensate for each other, by letting $R_{1k} \equiv \begin{bmatrix}\sigma_m^2 & 0\\ 0 & 4\sigma_m^2\end{bmatrix}$ and $R_{2k} \equiv \begin{bmatrix}4\sigma_m^2 & 0\\ 0 & \sigma_m^2\end{bmatrix}$, using the same parameters otherwise, including the Ornstein–Uhlenbeck model. Figure 6.4 shows the changes in estimation performance of several fusion rules as the process noise intensity varies.

In this case, the 90° difference in the orientations of the local measurement error covariance matrices, R1k and R2k, is propagated into the local state estimation error covariance matrices, V1 and V2, and the state fusion weight matrices, W1 and W2, of various fusion rules. In particular, the difference in the behaviors of the fusion rules that use the inter-sensor, cross-covariance matrix V12 (Bar-Shalom–Campo and MV rules) and those that do not use it (tracklet, Speyer, and CI rules) becomes visible in Figure 6.4. Nonetheless, like the supplementary sensor case of Figure 6.3, the estimation performance deterioration of the four fusion rules from the centralized tracking performance remains within a very small range, i.e., 4%–5%. In Figure 6.4, we exclude the performance of the BML rule using the likelihood in the strict Bayesian sense to prevent its bad performance from obscuring the comparison of other fusion rules.

FIGURE 6.4 Percent increase of RMS estimation errors over centralized tracking performance as function of normalized process noise intensity—complementary sensors: (a) RMS position estimation error and (b) RMS velocity estimation error.

There are apparently two peaks in the departure of the distributed fusion performance from the centralized tracking performance, i.e., a low-q peak and a high-q peak. For the position estimation performance, the tracklet rule, which does not use $V_{12}$, exhibits better performance than the Bar-Shalom–Campo rule for very small q (close to the deterministic case). For large q, on the other hand, the Bar-Shalom–Campo rule, which uses $V_{12}$, exhibits clear advantages over the other rules that do not use $V_{12}$. For the velocity estimation performance, the advantage of the Bar-Shalom–Campo rule over the others (except for the MV rule) is uniform with respect to the process noise intensity q.

Unlike the supplementary sensor case (where $V_1 = V_2$), the scalar weight α in (6.18) does change the fused estimation error covariance $V_F$, since $V_1 \ne V_2$. However, in our examples, for both the supplementary and the complementary cases, since the measurement error covariance matrices are diagonal, all the state estimation (self and cross) covariance matrices are also diagonal. The minimization of the determinant of the CI fused state estimation error covariance matrix $(\alpha V_1^{-1} + (1 - \alpha)V_2^{-1})^{-1}$ is therefore the same as the minimization of its trace, and both reduce to the maximization of α(1 − α), achieved uniquely at α = 1/2. This makes all the CI rules identical to the Speyer rule, i.e., $W_i = (V_1^{-1} + V_2^{-1})^{-1}V_i^{-1}$, $i = 1, 2$. In other words, both the Shannon and the Chen–Arambel–Mehra rules become the same as the Speyer rule.

Figure 6.5 shows the sensitivity of the four algorithms to the initial state estimation accuracy, i.e., the dependence on the a priori information. Neither the Bar-Shalom–Campo nor the Speyer rule uses the a priori information, and hence only very small secondary effects are visible. Because of the sensors' difference in observability, the effects of including the cross-correlation or not are apparent. As in the case chosen for Figure 6.3, the process noise intensity and the stationary velocity standard deviation are kept constant at $q = 0.1(\sigma_v^2/\Delta t)$ and $\sigma_v = 3(\sigma_m/\Delta t)$. With these parameters, as shown in Figure 6.4, the tracklet rule performs better than the Bar-Shalom–Campo rule for the position estimation, while the opposite is true for the velocity estimation.

Other tendencies are almost identical to those shown in the supplementary sensor case (Figure 6.3). Figures 6.4 and 6.5, like Figures 6.2 and 6.3, exhibit the robustness of the various track fusion rules (except for the BML fusion rule) to changes in the key tracking parameters, i.e., the process noise intensity level and the initial state accuracy.

FIGURE 6.5 Percent increase of RMS estimation errors over centralized tracking performance as function of normalized initial position standard deviation—complementary sensors: (a) RMS position estimation error and (b) RMS velocity estimation error.

6.2.2    REPEATED TRACK FUSION

In the previous section, we considered a simple case where track fusion takes place only once to fuse two local state estimates. In this section, we will explore cases where communication between two sensors or to a fusion center is repeated.

Figure 6.6 shows three architectures of distributed tracking systems using two sensors that have their own independent local data processing capabilities. The two sensor systems may act as two completely autonomous systems that exchange data between them, or alternatively, report their processed data to a high level system, which we may call a fusion center. The fusion center may feed fused state estimates back to the two local sensor systems, to improve the performance of the local systems. In this section, we first consider the cases where there is no feedback, and then later, the cases with feedback.

We assume the same linear dynamics of a target to be tracked using two sensors with linear observations, as described in Section 6.2.1. For the sake of simplicity, let us consider only cases where the information exchange happens synchronously at the same time, but repeatedly at $t_{F1}, t_{F2}, \ldots$ ($t_0 < t_{F1} < t_{F2} < \cdots$). The local estimates of the two sensors and their estimation error covariance matrices will be denoted as $((\hat{x}_{11}, V_{11}), (\hat{x}_{12}, V_{12}))$ at $t_{F1}$, $((\hat{x}_{21}, V_{21}), (\hat{x}_{22}, V_{22}))$ at $t_{F2}$, $((\hat{x}_{31}, V_{31}), (\hat{x}_{32}, V_{32}))$ at $t_{F3}$, and so forth, while the fused state estimate at each $t_{Fk}$, $k = 1, 2, \ldots$, will be denoted as $\hat{x}_{Fk}$.

FIGURE 6.6 Three possible architectures using two sensor systems with local data processing capabilities: (a) two-autonomous-sensor distributed system, (b) two-level hierarchical distributed system, and (c) hierarchical system with feedback.

For repeated-fusion cases, with or without feedback, the fusion rules $\hat{x}_{Fk} = \phi_k(\hat{x}_{11}, \hat{x}_{12};\ \hat{x}_{21}, \hat{x}_{22};\ \ldots;\ \hat{x}_{k1}, \hat{x}_{k2})$ (where $\phi_k$ is a linear or affine function because we are using a linear-Gaussian model) can be categorized as follows:

1.  Memoryless: $\hat{x}_{Fk} = \phi_k(\hat{x}_{k1}, \hat{x}_{k2})$ uses only the most recent local estimates $(\hat{x}_{k1}, \hat{x}_{k2})$.

2.  Limited memory: $\hat{x}_{Fk} = \phi_k(\hat{x}_{(k-\ell+1)1}, \hat{x}_{(k-\ell+1)2};\ \ldots;\ \hat{x}_{k1}, \hat{x}_{k2})$ uses only the $\ell$ most recent pairs of local estimates.

3.  Full memory: The full history $(\hat{x}_{11}, \hat{x}_{12};\ \hat{x}_{21}, \hat{x}_{22};\ \ldots;\ \hat{x}_{k1}, \hat{x}_{k2})$ of the past local estimates is used.

We may categorize the Bar-Shalom–Campo, the Speyer, and the CI rules as memoryless fusion rules, while the MV and the BML rules can be made either limited-memory or full-memory rules, and the tracklet fusion rule may be either a memoryless or a one-step limited-memory rule.

6.2.2.1    Repeated Track Fusion without Feedback

Let us first consider the cases where each sensor subsystem maintains its local data processing using only the local data, without mixing in data from the other sensor, while fused state estimates are calculated by fusing the unmixed local estimates. The rationale for not letting the local sensor systems use the fused information is that, depending on what fusion rule is used and how the fused results are fed back to the local data processing system, the performance of the local systems, and eventually of the overall system, may deteriorate, rather than improve, through contamination of the otherwise pure local data. This data processing can be achieved by either a hierarchical or a two-autonomous-system design, as shown in Figure 6.7.

Figure 6.7 shows two information graphs (described in Chapter 5) to illustrate the information flow in track fusion without feedback. The two information graphs are equivalent to each other and describe informational transactions in a two-sensor-one-fusion-center system and a two-autonomous-sensor system. Dotted lines and circles with dotted lines represent a priori information, and squares represent raw sensor data that are fed into and accumulated in the local sensor data information graph nodes. Data accumulation, i.e., horizontal information flows at the same horizontal position, is drawn without arrows in the graph. As shown in the graph, the same data processing can be implemented in either (a) a hierarchical architecture or (b) an autonomous architecture (or replicated hierarchical architecture). In the latter case, each local system maintains two state estimation filters, one local and one global. The global filter maintained by each local sensor system is sometimes called a shadow tracker (Drummond 1997b).

FIGURE 6.7 Information graphs for processing architectures of two-sensor track fusion without feedback: (a) two sensor and fusion center and (b) two autonomous sensors.

The various fusion rules introduced in Section 6.2.1 can be adapted as follows:

•  Bar-Shalom–Campo, Speyer, and CI fusion rules: These rules do not use the a priori information. For repeated track fusion, the a priori information at one fusion time $t_{Fk}$ can be viewed as the information accumulated up to the previous fusion time $t_{F(k-1)}$. Thus these fusion rules ignore this a priori information and simply combine the latest available local state estimates (i.e., they are memoryless fusion rules).

•  Minimum Variance (MV) Fusion Rule: As indicated in Figure 6.7, either the fusion center or the fused-state filter in a local system accumulates the local estimates $((\hat{x}_{11}, V_{11}), (\hat{x}_{12}, V_{12})), \ldots, ((\hat{x}_{(k-1)1}, V_{(k-1)1}), (\hat{x}_{(k-1)2}, V_{(k-1)2}))$ at fusion time $t_{Fk}$, in addition to the current pair $((\hat{x}_{k1}, V_{k1}), (\hat{x}_{k2}, V_{k2}))$. Therefore, a general linear estimate $\hat{x}_{Fk}$ of the target state $x(t_{Fk})$ is a linear function of all the available estimates $((\hat{x}_{\kappa i}, V_{\kappa i})_{i=1}^2)_{\kappa=1}^k$ plus the a priori information $P(x(t_0))$ (or equivalently $P(x(t_{Fk}))$ with mean $\bar{x}_k$ and covariance matrix $\bar{V}_k$). Letting $z = ((\hat{x}_{\kappa i})_{i=1}^2)_{\kappa=1}^k$ and $x = x(t_{Fk})$ in (6.13), and calculating the covariance matrices $V_{xz}$ and $V_{zz}$ with augmented dimensions, the MV fusion rule can be expressed as

$\hat{x}_{Fk} = W_{k0}\,\bar{x}_k + \sum_{\kappa=1}^{k}\sum_{i=1}^{2} W_{\kappa i}\,\hat{x}_{\kappa i}$

(6.26)

which is a full-memory fusion rule. Note that the calculation of the matrices $V_{xz}$ and $V_{zz}$ is not trivial, involving many random vectors $((\hat{x}_{\kappa i})_{i=1}^2)_{\kappa=1}^k$. Nonetheless, it can be done through a simple extension of the method described in Section 6.2.1.2. We should note that the MV fusion rule is called the quasi-tracklet fusion method in Gao and Li (2010).

Replacing the summation $\sum_{\kappa=1}^k$ in (6.26) by $\sum_{\kappa=k-\ell+1}^k$, we have a memoryless ($\ell = 1$) or a limited-memory ($\ell > 1$) fusion rule. In such a case, the MV rule obtained in this way is the BLUE with respect to the weights $(W_{k0}, W_{k1}, W_{k2}, \ldots, W_{(k-\ell+1)1}, W_{(k-\ell+1)2})$ with the constraint $W_{k0} + \sum_{\kappa=k-\ell+1}^k\sum_{i=1}^2 W_{\kappa i} = I$, while the Bar-Shalom–Campo rule is the BLUE with respect only to $(W_{k1}, W_{k2})$ with $W_{k1} + W_{k2} = I$.

The BML fusion rule using the likelihood function in the strict Bayesian sense, defined by (6.15) in Section 6.2.1.1 for one-time track fusion, can be extended to repeated fusion without feedback in exactly the same way as the MV rule. However, because of the poor performance that we found for single-time track fusion, we will exclude the BML fusion rule from our consideration in the rest of this chapter.

6.2.2.1.1    Tracklet Fusion Rule and Decorrelation Method

The tracklet fusion rule defined by (6.10) in Section 6.2.1.1 can be directly translated to the repeated fusion as $W_{k0} = -V_{Fk}\bar{V}_k^{-1}$, $W_{k1} = V_{Fk}V_{k1}^{-1}$, and $W_{k2} = V_{Fk}V_{k2}^{-1}$. In other words, we can apply the one-time tracklet fusion rule (6.10) at each fusion time, decorrelating the most recent pair of local estimates by the extrapolated a priori information rather than by the past fused estimate. Without feedback to the local processing, it can be shown (cf., e.g., Chong et al. 1990) that this rule can achieve the performance of the centralized tracker for deterministic target dynamics without process noise, and for nondeterministic target dynamics when the fusion rate is the same as the sensor revisit rate.

Another approach is to decorrelate the local estimates, between the current estimate $\hat{x}_{ki}$ at the current fusion time $t_{Fk}$ and the local prediction from the previous fusion time $t_{F(k-1)}$, by rewriting Equation 6.12 as

$\begin{cases}\tilde{V}_{ki}^{-1}z_{ki} = V_{ki}^{-1}\hat{x}_{ki} - \bar{V}_{ki}^{-1}\bar{x}_{ki}\\ \tilde{V}_{ki}^{-1} = V_{ki}^{-1} - \bar{V}_{ki}^{-1}\end{cases}$

(6.27)

to obtain the decorrelated pair $(z_{ki}, \tilde{V}_{ki})$ for each sensor, $i = 1, 2$. This rule is similar to Equation 6.11 except that the local past estimates are used in the decorrelation (Chong 1979). As mentioned in Section 6.2.1.1, the vector $z_{ki}$, $i = 1, 2$, obtained this way is called the pseudo-measurement or the equivalent measurement, and the set of measurements between the two consecutive fusion times $t_{F(k-1)}$ and $t_{Fk}$ is often called a tracklet. The decorrelated pair $(z_{ki}, \tilde{V}_{ki})$, $i = 1, 2$, is then used to obtain the updated fused estimate $\hat{x}_{Fk}$, using the Kalman filter update equations

$\begin{cases}V_{Fk}^{-1}\hat{x}_{Fk} = \bar{V}_{Fk}^{-1}\bar{x}_{Fk} + \tilde{V}_{k1}^{-1}z_{k1} + \tilde{V}_{k2}^{-1}z_{k2}\\ V_{Fk}^{-1} = \bar{V}_{Fk}^{-1} + \tilde{V}_{k1}^{-1} + \tilde{V}_{k2}^{-1}\end{cases}$

(6.28)

The local predictions $(\bar{x}_{ki}, \bar{V}_{ki})$ and the global prediction $(\bar{x}_{Fk}, \bar{V}_{Fk})$ can be obtained by the extrapolation described in Section 6.2.1.1.

The tracklet fusion rule (6.10) may be viewed as a memoryless rule, since it uses only the most recent pair of local estimates, $(\hat{x}_{k1}, \hat{x}_{k2})$, although it uses the extrapolated a priori state mean. On the other hand, the decorrelation of the local estimates uses the extrapolated pair $(\bar{x}_{k1}, \bar{x}_{k2})$ of the last local estimates $(\hat{x}_{(k-1)1}, \hat{x}_{(k-1)2})$, as well as the extrapolation $\bar{x}_{Fk}$ of the last fused state estimate $\hat{x}_{F(k-1)}$, i.e., it is a fusion rule with limited (one-step) memory. It can be readily shown that the two methods are equivalent to each other only if the target dynamics are deterministic, i.e., $B_t \equiv 0$ in (6.2); when the target dynamics are not deterministic, their performance will be different. As mentioned in Section 6.2.1.1, this local-estimate decorrelation fusion rule is called the channel filter in Bourgault and Durrant-Whyte (2004).
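One fusion cycle of this decorrelation (channel-filter) form, Equations 6.27 and 6.28, can be sketched as follows, assuming the local and global predictions have already been extrapolated to the fusion time (all names illustrative):

import numpy as np

def tracklet_cycle(xh1, V1, xb1, Vb1,    # sensor 1: updated and predicted
                   xh2, V2, xb2, Vb2,    # sensor 2: updated and predicted
                   xbF, VbF):            # fused track prediction
    def equiv(xh, V, xb, Vb):            # equivalent measurement, Eq. 6.27
        info = np.linalg.inv(V) @ xh - np.linalg.inv(Vb) @ xb
        Info = np.linalg.inv(V) - np.linalg.inv(Vb)
        return info, Info
    i1, I1 = equiv(xh1, V1, xb1, Vb1)
    i2, I2 = equiv(xh2, V2, xb2, Vb2)
    VF = np.linalg.inv(np.linalg.inv(VbF) + I1 + I2)   # Eq. 6.28
    xF = VF @ (np.linalg.inv(VbF) @ xbF + i1 + i2)
    return xF, VF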

6.2.2.1.2    Numerical Example of Repeated Track Fusion without Feedback

Figure 6.8 compares the performance of various track fusion rules applied to the repeated-fusion-without-feedback case. We used the same simplified model defined in Section 6.2.1.5, i.e., the Ornstein–Uhlenbeck model with the inverse time constant β and $q = 2\beta\sigma_v^2$, with the a priori position standard deviation $\sigma_p$ and the velocity standard deviation $\sigma_v$. Only the complementary case with $R_{1k} \equiv \begin{bmatrix}\sigma_m^2 & 0\\ 0 & 4\sigma_m^2\end{bmatrix}$ and $R_{2k} \equiv \begin{bmatrix}4\sigma_m^2 & 0\\ 0 & \sigma_m^2\end{bmatrix}$ is used.

As in Section 6.2.1.5, 10 local synchronous measurements are taken between two consecutive fusion times, and this is repeated 5 times, at the end of which we evaluate the performance of each repeated track fusion rule by the methods described in Section 6.2.1.2. The performance is shown only for the variation of the normalized process noise intensity. When the initial position covariance matrix (determined by $\sigma_p$) or the stationary velocity covariance matrix (determined by $\sigma_v$) was varied, virtually no sensitivity was found, because of the relatively long simulation period.

Comparing Figure 6.8 with Figure 6.4 in Section 6.2.1.5, the relative trends of the various fusion rules remain the same for the position estimation performance, while the deterioration of the velocity estimation performance for large process noise is noticeably smaller for repeated fusion than for one-time fusion. Since this is a complementary-sensor case, the local estimation error covariance matrices are different, and hence the fusion weights of the Bar-Shalom–Campo and the Speyer rules are different, resulting in some differences in Figure 6.8. However, because of the use of the completely complementary sensors defined by constant measurement error covariance matrices, $R_{k1}$ and $R_{k2}$, all the CI fusion rules become the same with α = 1/2, as in the one-time fusion case (Figure 6.4), and are identical to the Speyer rule.

As described in (6.26), the full-memory MV fusion rule uses an increasing number of past local state estimates as track fusion is repeated, requiring the correlations among a larger and larger number of past local estimates. To obtain Figure 6.8, we considered two cases for the number of local estimates used by the MV estimate. Case 1 (MV1) uses only the most recent local estimate for each sensor, and the fused estimate is $\hat{x}_{Fk} = W_{k0}\bar{x}_k + \sum_{i=1}^2 W_{ki}\hat{x}_{ki}$; Case 2 (MV2) uses the two most recent local estimates for each sensor, and the fused estimate is $\hat{x}_{Fk} = W_{k0}\bar{x}_k + \sum_{i=1}^2(W_{ki}\hat{x}_{ki} + W_{(k-1)i}\hat{x}_{(k-1)i})$. As mentioned earlier, the MV fusion rules outperform all the other fusion rules that were compared. We should also note that, except for the MV2 fusion rule, the fusion rules do not exhibit convergence to the performance of centralized tracking when the system approaches deterministic dynamics, i.e., β ↓ 0, which may be a general indication of a potential instability associated with repeated fusion without feedback. Nonetheless, as seen in Figure 6.8, the performance deterioration of distributed tracking from centralized tracking by the various fusion rules remains relatively small, i.e., 1%–5%, over a very wide range of the process noise intensity level q. In particular, the performance of the relatively simple fusion rules, i.e., Bar-Shalom–Campo, Speyer, and CI, is found to be very robust.


FIGURE 6.8 Percent increase of RMS estimation errors over centralized tracking performance as function of normalized process noise intensity: repeated fusion without feedback—complementary sensors: (a) RMS position estimation error and (b) RMS velocity estimation error.

The tracklet rule shown in Figure 6.8 is in its decorrelation form, which is widely used to decorrelate a sequence of upstream trackers' outputs input into a fusion engine, i.e., an engine that fuses tracking information from multiple sources given in terms of state estimates rather than raw sensor measurements. The decorrelation form of the tracklet rule is defined by (6.27) and (6.28). This tracklet fusion rule is practical because it does not require the inter-sensor cross-covariance of the local target state estimation errors, and the result in Figure 6.8 justifies its use in cases where the fused state estimates are not fed back to the local trackers.

6.2.2.2    Repeated Track Fusion with Feedback

It is rather intuitive to expect better state estimation performance, both local and global, by feeding back the fused state estimates to the local tracking agents. However, even with linear models, i.e., a rather idealized version of generally nonlinear real-world systems, this expectation may not be realized, depending on which fusion rule is used. This is because, although some fusion rules may perform reasonably well for the state estimates at the fusion times, as shown in Chang et al. (2002), they may declare incorrect, generally unreasonably optimistic, estimation error covariance matrices, thereby contaminating the performance of the local trackers. This may cause secondary effects such as contamination of the fused state estimates generated later by the local tracking agents and subsequent deterioration of the overall performance. For this reason, repeated fusion without feedback may be preferred in many practical cases.

Repeated track fusion with feedback can be illustrated by the information graph shown in Figure 6.9.

In this figure, feedback is represented by the arrows from the fusion center to the local processors in (a), the two-local-sensor-one-fusion-center architecture, and by the arrows that connect the local processing information-graph nodes directly in (b), the two-autonomous-sensor architecture. In the two-autonomous-sensor distributed architecture of (b), each local processing node sends its current state estimate at an agreed-upon fusion time to the other node and, upon receipt of the state estimate from the other node, fuses the local and remote state estimates into the global estimate at that moment. Then, until the next fusion time, each local sensor processes only its own local data.


FIGURE 6.9 Information graphs for processing architectures of two-sensor track fusion with feedback: (a) two sensor and fusion center and (b) two autonomous sensors.


FIGURE 6.10 Percent increase of RMS estimation errors over centralized tracking performance as function of normalized process noise intensity: repeated fusion with feedback—complementary sensors: (a) RMS position estimation error and (b) RMS velocity estimation error.

Figure 6.10 shows the performance of the various fusion rules adapted to repeated track fusion with feedback, using the same complementary-sensor model used to compare the performance of Figure 6.8. The adaptations are shown below.

6.2.2.2.1    Bar-Shalom–Campo, Speyer, and CI Rules

The same fusion rules are used, but with the local state estimates and estimation error covariance matrices modified by the feedback. All the covariance matrices are diagonal because the same diagonal measurement error covariance matrices $R_{ik}$ are used. Hence, all the CI rules become the same with α = 1/2, as shown earlier. However, although the Bar-Shalom–Campo fusion rule provides an honest fused state estimation error covariance, neither the Speyer nor the CI fusion rule does. The Speyer rule ignores the cross-covariance and produces generally optimistic estimation error covariance matrices, whereas the CI rules generally produce grossly pessimistic estimation error covariance matrices, with a typical determinant about four times larger than that of the actual estimation error covariance; both contaminate the local trackers' performance.
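The following sketch (Python/NumPy; the function names are illustrative assumptions) contrasts CI fusion, which combines the information matrices convexly and therefore declares a conservative covariance, with a naive cross-covariance-ignoring combination in the spirit of the Speyer rule, whose declared covariance is optimistic when the local errors are actually correlated:

import numpy as np

def ci_fuse(x1, V1, x2, V2, alpha=0.5):
    # Covariance intersection: convex combination of the information matrices.
    Y1, Y2 = np.linalg.inv(V1), np.linalg.inv(V2)
    VF = np.linalg.inv(alpha * Y1 + (1.0 - alpha) * Y2)
    xF = VF @ (alpha * Y1 @ x1 + (1.0 - alpha) * Y2 @ x2)
    return xF, VF

def naive_fuse(x1, V1, x2, V2):
    # Cross-covariance-ignoring combination (Speyer-style): same estimate
    # structure, but the declared covariance is optimistic under correlation.
    Y1, Y2 = np.linalg.inv(V1), np.linalg.inv(V2)
    VF = np.linalg.inv(Y1 + Y2)
    xF = VF @ (Y1 @ x1 + Y2 @ x2)
    return xF, VF

With α = 1/2, the two rules return the same fused estimate, but the CI covariance is exactly twice the naive one, which is the source of the pessimism noted above.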

6.2.2.2.2    Tracklet Rule

In this fusion rule, $\hat{x}_{Fk} = \hat{V}_{Fk}\left(V_{k1}^{-1}\hat{x}_{k1} + V_{k2}^{-1}\hat{x}_{k2} - \bar{V}_{Fk}^{-1}\bar{x}_{Fk}\right)$, the a priori global state estimate pair $(\bar{x}_{Fk},\bar{V}_{Fk})$, obtained by extrapolating the last fusion result $(\hat{x}_{F(k-1)},\hat{V}_{F(k-1)})$, is used to eliminate the redundant information contained in the two local state estimates through the feedback and to remove double counting. Nonetheless, the declared fused estimation error covariance matrix $\hat{V}_{Fk} = \left(V_{k1}^{-1} + V_{k2}^{-1} - \bar{V}_{Fk}^{-1}\right)^{-1}$ is not honest and is generally optimistic, thus contaminating the local sensor data processing.
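A minimal sketch of this prior-removing information fusion (Python/NumPy; the function name and interface are illustrative assumptions) is:

import numpy as np

def tracklet_fuse_with_feedback(x1, V1, x2, V2, x_prior, V_prior):
    # Information matrices of the two local estimates and of the extrapolated
    # global prior that both of them share through the feedback.
    Y1, Y2, Yp = np.linalg.inv(V1), np.linalg.inv(V2), np.linalg.inv(V_prior)
    YF = Y1 + Y2 - Yp                 # subtract the shared prior information once
    VF = np.linalg.inv(YF)
    xF = VF @ (Y1 @ x1 + Y2 @ x2 - Yp @ x_prior)
    return xF, VF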

6.2.2.2.3    MV Fusion Rule

The MV1 rule as defined in Section 6.2.1.1, which uses only the most recent local estimates, i.e., $\hat{x}_{Fk} = W_{k0}\bar{x}_k + \sum_{i=1}^{2} W_{ki}\hat{x}_{ki}$, is used, because the most recent estimates contain all the significant updates made by the local agents as a result of feeding the fused estimation results back to them. Any version of the MV fusion rules based on the BLUE principle generates honest estimation error covariance matrices, and hence no contamination is propagated through the fusion and its feedback to the local processors.
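The MV/BLUE principle itself can be illustrated by the following generalized-least-squares sketch (Python/NumPy; a generic formulation under the assumption that the full joint error covariance is available, not the chapter's exact recursion). The a priori mean can be included as one of the fused "estimates":

import numpy as np

def blue_fuse(estimates, joint_cov):
    # estimates: list of d-vectors, each an unbiased estimate of the same state x.
    # joint_cov: (n*d) x (n*d) joint covariance of all the estimation errors,
    #            including the cross-covariances between estimates.
    n = len(estimates)
    d = estimates[0].shape[0]
    z = np.concatenate(estimates)
    H = np.vstack([np.eye(d)] * n)            # each block observes the same state
    W = np.linalg.inv(joint_cov)
    VF = np.linalg.inv(H.T @ W @ H)           # fused (honest) error covariance
    xF = VF @ (H.T @ W @ z)                   # minimum-variance linear unbiased estimate
    return xF, VF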

6.2.2.2.4    Numerical Example of Repeated Track Fusion with Feedback

The same simplified linear models with synchronized complementary sensors as those to produce Figure 6.8 are used for Figure 6.10.

The behavior of the tracklet fusion rule is much more stable than in fusion without feedback, and it behaves as a good approximation of the MV rule, which we may consider an almost optimal distributed fusion algorithm, as far as position estimation is concerned. As observed in Figures 6.2, 6.3, 6.4, 6.5, 6.8, and 6.10, however, the simpler rules, i.e., Bar-Shalom–Campo, Speyer, and CI, may provide better velocity estimation performance over a range of process noise intensity levels q. The MV rule might be improved further by considering a linear optimal estimate with a longer memory. On the other hand, the Bar-Shalom–Campo, Speyer, and CI fusion rules do not use the a priori information. In the fusion-with-feedback case, we can see the consequences in Figure 6.10, although all the variations are within a relatively small margin, i.e., 5%. We therefore see again the robustness of the simple fusion rules, despite concerns about their use of information that may be much less than the information available at each fusion time, and about the contamination of the local trackers by dishonest (either pessimistic or optimistic) fused state estimation error covariance matrices.

6.3    TRACK ASSOCIATION

Track association is a prerequisite for track fusion in a distributed tracking system. However, in many cases, the association is rather obvious, and therefore target state estimation from multiple sensors, or track fusion, becomes the major problem. On the other hand, when the target density is high, track association becomes a much more important problem than track fusion. As the track density becomes even higher, track association and track fusion can no longer be treated as separate problems. In that case, many local track association hypotheses are possible and equally likely, so that the best local association hypothesis may not provide high-quality tracks for association at the fusion site. Then some form of distributed multiple hypothesis tracking is needed (Chong et al. 1990, Dunham et al. 2004).

In this section, we treat the situation where the local tracks are good enough so that distributed tracking can be viewed as a two-stage problem, i.e., track association followed by track fusion. The concept of distributed tracking in terms of track association was developed shortly after target tracking started to be investigated with modern estimation theory or filtering theory. Early work includes Singer and Kanyuck (1971) and Yaakov Bar-Shalom (1981).

6.3.1    TRACK ASSOCIATION PROBLEM DEFINITION

We use the same linear-Gaussian model described in Section 6.2.1. We assume, however, a fixed number n of “true” targets represented by the system $\left((x_i(t))_{t\in[t_0,\infty)}\right)_{i=1}^{n}$ of n replicated stochastic processes on the time interval $[t_0,\infty)$, with the joint initial condition $\mathrm{Prob}\left\{x_1(t_0)\in dx_1(t_0),\ldots,x_n(t_0)\in dx_n(t_0)\right\} = \prod_{i=1}^{n} g\!\left(x_i(t_0)-\bar{x}_0;\bar{V}_0\right)dx_i(t_0)$. Each individual stochastic process $(x_i(t))_{t\in[t_0,\infty)}$, i = 1, …, n, is defined as in Section 6.2.1, with a system $\left((\dot{w}_i(t))_{t\in[t_0,\infty)}\right)_{i=1}^{n}$ of white noises, or equivalently, Wiener processes $\left((w_i(t))_{t\in[t_0,\infty)}\right)_{i=1}^{n}$. The target density can be measured by $\gamma_0(x) = n\,g(x-\bar{x}_0;\bar{V}_0)$, so that, for any measurable subset B of the target state space E, the integral $\int_B \gamma_0(x)\,dx$ is the expected number of targets whose initial condition $x_i(t_0)$ is in the set B.

Instead of assuming the number n of targets to be a known constant, we may assume that n is a random variable. When n is a Poisson random variable, the system $(x_i(t_0))_{i=1}^{n}$ of random vectors in E is a Poisson point process. We maintain the constant-n assumption throughout this chapter for the sake of simplicity, because our main purpose here is to compare various track association metrics.

We assume the following scenario: all the targets are visible to each of the two sensors, i.e., we assume that the detection probability (at the local track level) of each sensor is unity. We also assume that there are no false tracks. The latter assumption is supported by the fact that any track made up solely of false alarms would have been weeded out by the local sensor's own tracking. Thus we have n targets observed by two sensors, each of which produces n local tracks from $N_i$ measurements, i = 1, 2, prior to a fusion time $t_F$. Our goal is then to associate the two sets of local tracks, represented by the n-tuples of state estimates $(\hat{x}_{1i}(t_F))_{i=1}^{n}$ and $(\hat{x}_{2i}(t_F))_{i=1}^{n}$ at the fusion time $t_F$, where each estimate $\hat{x}_{ij}$ is associated with the estimation error covariance matrix $V_{ij}$.

The uncertainty of the association between the true targets and the set of tracks from each sensor, i = 1, 2, can be modeled by two independent assignment functions $a_i$, each of which is a permutation on the set {1, …, n}. The association hypothesis between the two sets of local tracks is then expressed as $a(i) = (a_2 \circ a_1^{-1})(i)$, i.e., the i-th track from sensor 1 and the a(i)-th track from sensor 2 share the same origin. The problem is then the determination of the most likely, or most probable, association according to an evaluation function of the general form

$P(a) = C^{-1}\prod_{i=1}^{n} \ell(i, a(i))$

(6.29)

where

C is the normalizing constant

$\ell(i, j)$ is the likelihood that track i from sensor 1 shares the same origin (target) as track j from sensor 2

Under the appropriate set of assumptions mentioned earlier, (6.29) becomes the a posteriori probability of the association a conditioned on the set of state estimates of all the tracks from both sensors, with the normalizing constant C. In this chapter, however, we consider (6.29) as the expression that relates the association hypothesis evaluation function to the track association metrics, represented by the likelihood function $\ell(i, j)$ or its negative half logarithm, $L(i, j) = -(1/2)\ln \ell(i, j)$.

6.3.2    TRACK ASSOCIATION METRICS

Using the negative half logarithm $L(i, j) = -(1/2)\ln \ell(i, j)$, the optimal track association $\hat{a}$ is obtained by minimizing the association cost $f(a) = \sum_{i=1}^{n} L(i, a(i))$. By track association metrics, we mean the metrics that represent the cost L(i, j) of associating the i-th track from sensor 1 with the j-th track from sensor 2. Some of the metrics in the following list were originally developed as metrics to be used in the classical chi-square test, but they can be considered association metrics because of their structure.
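Given an n × n matrix of pairwise costs L(i, j), the minimization over permutations is the classical bipartite (linear) assignment problem. A minimal sketch, assuming Python with SciPy (the function name is illustrative), is:

import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_tracks(cost):
    # cost[i, j] = L(i, j): cost of associating track i of sensor 1
    # with track j of sensor 2; returns the optimal permutation a.
    rows, cols = linear_sum_assignment(cost)
    return dict(zip(rows.tolist(), cols.tolist()))

Any of the metrics listed below can be used to fill the cost matrix.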

Singer–Kanyuck metric: In a pioneering paper (Singer and Kanyuck 1971), the usual chi-square metric

$L(i,j) = \left\|\hat{x}_{1i}-\hat{x}_{2j}\right\|^2_{(V_{1i}+V_{2j})^{-1}} = \left(\hat{x}_{1i}-\hat{x}_{2j}\right)^T\left(V_{1i}+V_{2j}\right)^{-1}\left(\hat{x}_{1i}-\hat{x}_{2j}\right)$

(6.30)

is proposed. This metric can be interpreted as the negative half logarithm of

$\ell(i,j) = \int_E \hat{p}_{1i}(x)\,\hat{p}_{2j}(x)\,dx = \int_E g\!\left(x-\hat{x}_{1i};V_{1i}\right)g\!\left(x-\hat{x}_{2j};V_{2j}\right)dx = g\!\left(\hat{x}_{1i}-\hat{x}_{2j};V_{1i}+V_{2j}\right)$

(6.31)

when we eliminate the factor $\det\!\left(2\pi\left(V_{1i}+V_{2j}\right)\right)^{-1/2}$, or the corresponding additive term in the negative half logarithm, as a term that also appears in the metrics for the other pairs.
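A minimal sketch of this metric (Python/NumPy; the function name is an illustrative assumption), computed directly as the squared Mahalanobis distance in (6.30), is:

import numpy as np

def singer_kanyuck(x1, V1, x2, V2):
    # Squared Mahalanobis distance between the two local estimates,
    # assuming their estimation errors are uncorrelated.
    d = x1 - x2
    return float(d @ np.linalg.inv(V1 + V2) @ d)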

Bar-Shalom metric: Yaakov Bar-Shalom proposed the metric

$L(i,j) = \left\|\hat{x}_{1i}-\hat{x}_{2j}\right\|^2_{\left(V_{1i}+V_{2j}-V_{12ij}-V_{12ij}^{T}\right)^{-1}}$

(6.32)

in Bar-Shalom (1981), to be used also in a chi-square test for track association, where $V_{12ij}$ is the cross-covariance between the two tracks, track i from sensor 1 and track j from sensor 2, obtained assuming that they originate from the same target. This metric can be interpreted as the negative half logarithm of

$\ell(i,j) = \int_E g\!\left(\begin{bmatrix} x-\hat{x}_{1i} \\ x-\hat{x}_{2j}\end{bmatrix};\begin{bmatrix} V_{1i} & V_{12ij} \\ V_{12ij}^{T} & V_{2j}\end{bmatrix}\right)dx = g\!\left(\hat{x}_{1i}-\hat{x}_{2j};V_{1i}+V_{2j}-V_{12ij}-V_{12ij}^{T}\right)$

(6.33)

when we ignore the factor $\det\!\left(2\pi\left(V_{1i}+V_{2j}-V_{12ij}-V_{12ij}^{T}\right)\right)^{-1/2}$, or the corresponding additive term in its negative half logarithm, as a constant that is to be canceled out.
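The corresponding sketch for the Bar-Shalom metric (6.32) differs from the previous one only in the covariance of the difference, which is corrected by the cross-covariance (again Python/NumPy, with an illustrative interface; V12 denotes $V_{12ij}$):

import numpy as np

def bar_shalom(x1, V1, x2, V2, V12):
    # Squared Mahalanobis distance using the covariance of the estimate
    # difference under the common-origin hypothesis.
    d = x1 - x2
    S = V1 + V2 - V12 - V12.T
    return float(d @ np.linalg.inv(S) @ d)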

CI metric: To the best of our knowledge, there is no track association metric based on the CI principle. However, based on the structure of the two metrics described earlier and on the definition of the CI fusion rule (6.19), an appropriate track association metric may be defined as

$L(i,j) = \left\|\hat{x}_{1i}-\hat{x}_{2j}\right\|^2_{\left(\hat{\alpha}_{ij}^{-1}V_{1i}+(1-\hat{\alpha}_{ij})^{-1}V_{2j}\right)^{-1}}$

(6.34)

This metric can be interpreted as the negative half logarithm of

$\ell(i,j) = \int_E \hat{p}_{1i}(x)^{\hat{\alpha}_{ij}}\,\hat{p}_{2j}(x)^{(1-\hat{\alpha}_{ij})}\,dx = \int_E g\!\left(x-\hat{x}_{1i};V_{1i}\right)^{\hat{\alpha}_{ij}} g\!\left(x-\hat{x}_{2j};V_{2j}\right)^{(1-\hat{\alpha}_{ij})}\,dx$

(6.35)

when we ignore the factor $\left(\det\!\left(V_{1i}\right)^{(1-\hat{\alpha}_{ij})}\det\!\left(V_{2j}\right)^{\hat{\alpha}_{ij}}/\det\!\left((1-\hat{\alpha}_{ij})V_{1i}+\hat{\alpha}_{ij}V_{2j}\right)\right)^{1/2}$, or its negative half logarithm, as a constant that is to be canceled out. The “optimal” weight $\hat{\alpha}_{ij}\in[0,1]$ may be chosen as the one that either maximizes the determinant $\det\!\left(\alpha V_{1i}^{-1}+(1-\alpha)V_{2j}^{-1}\right)$ (corresponding to the Shannon fusion rule) or minimizes $\mathrm{trace}\!\left(\left(\alpha V_{1i}^{-1}+(1-\alpha)V_{2j}^{-1}\right)^{-1}\right)$ (corresponding to the Chen–Arambel–Mehra fusion rule).
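A sketch of this CI metric, assuming Python/NumPy, the determinant-maximizing (Shannon) choice of the weight, and a simple grid search restricted to the open interval so that (6.34) stays well defined (the function name is illustrative), is:

import numpy as np

def ci_metric(x1, V1, x2, V2, n_grid=99):
    # Pick alpha maximizing det(alpha*V1^-1 + (1-alpha)*V2^-1) on a grid,
    # then evaluate the quadratic form of (6.34) with that weight.
    Y1, Y2 = np.linalg.inv(V1), np.linalg.inv(V2)
    alphas = np.linspace(0.01, 0.99, n_grid)
    dets = [np.linalg.det(a * Y1 + (1.0 - a) * Y2) for a in alphas]
    a = alphas[int(np.argmax(dets))]
    d = x1 - x2
    S = V1 / a + V2 / (1.0 - a)        # alpha^-1 V1 + (1-alpha)^-1 V2
    return float(d @ np.linalg.inv(S) @ d)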

Chong–Mori–Chang metric: Under the assumption that there are no false tracks and no missed tracks in the two-sensor track-to-track association, we can show that the Bayesian track association hypothesis evaluation formula is expressed by (6.29) using the track association likelihood given by

$\ell(i,j) = \int_E \frac{\hat{p}_{1i}(x)\,\hat{p}_{2j}(x)}{\bar{p}(x)}\,dx = \int_E \frac{g\!\left(x-\hat{x}_{1i};V_{1i}\right)g\!\left(x-\hat{x}_{2j};V_{2j}\right)}{g\!\left(x-\bar{x};\bar{V}\right)}\,dx = \left(\frac{\det\!\left(\bar{V}\right)\det\!\left(\hat{V}_{Fij}\right)}{\det\!\left(V_{1i}\right)\det\!\left(V_{2j}\right)}\right)^{1/2}\exp\!\left(-\frac{1}{2}\left(\left\|\hat{x}_{Fij}-\hat{x}_{1i}\right\|^2_{V_{1i}^{-1}}+\left\|\hat{x}_{Fij}-\hat{x}_{2j}\right\|^2_{V_{2j}^{-1}}-\left\|\hat{x}_{Fij}-\bar{x}\right\|^2_{\bar{V}^{-1}}\right)\right)$

(6.36)

where

$\bar{p}(x) = g\!\left(x-\bar{x};\bar{V}\right)$ is the a priori probability density of the target state $x = x(t_F)$ at the fusion time $t_F$

$(\hat{x}_{Fij},\hat{V}_{Fij})$ is the pair of the fused state estimate and the estimation error covariance matrix, obtained by the track fusion rule defined by (6.15) in Section 6.2.1.1, i.e.,

$\begin{cases}\hat{V}_{Fij}^{-1}\hat{x}_{Fij} = V_{1i}^{-1}\hat{x}_{1i} + V_{2j}^{-1}\hat{x}_{2j} - \bar{V}^{-1}\bar{x} \\ \hat{V}_{Fij}^{-1} = V_{1i}^{-1} + V_{2j}^{-1} - \bar{V}^{-1}\end{cases}$

(6.37)

all under the hypothesis that the i-th track from sensor 1 and the j-th track from sensor 2 originate from the same target. Unfortunately, as with the tracklet fusion rule (6.15), the last statement is true only when the target dynamics are deterministic, i.e., when there is no process noise ($B_t = 0$). Nonetheless, like the tracklet fusion rule, the track association metric of (6.36) can be adapted to the nondeterministic cases by combining it with the nondeterministic extrapolation formula. By eliminating the four determinant factors in (6.36) as factors that cancel out, the negative half logarithm of the track likelihood becomes

$L(i,j) = \left\|\hat{x}_{Fij}-\hat{x}_{1i}\right\|^2_{V_{1i}^{-1}} + \left\|\hat{x}_{Fij}-\hat{x}_{2j}\right\|^2_{V_{2j}^{-1}} - \left\|\hat{x}_{Fij}-\bar{x}\right\|^2_{\bar{V}^{-1}}$

(6.38)
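A minimal sketch of this metric, combining the prior-removing fusion (6.37) with the quadratic form (6.38) (Python/NumPy; the function name is an illustrative assumption), is:

import numpy as np

def chong_mori_chang(x1, V1, x2, V2, x_bar, V_bar):
    # Fuse the pair under the common-origin hypothesis, removing the prior
    # information once (6.37), then score the fit of both local estimates
    # while discounting agreement explained by the prior alone (6.38).
    Y1, Y2, Yb = np.linalg.inv(V1), np.linalg.inv(V2), np.linalg.inv(V_bar)
    YF = Y1 + Y2 - Yb
    xF = np.linalg.solve(YF, Y1 @ x1 + Y2 @ x2 - Yb @ x_bar)
    quad = lambda d, Y: float(d @ Y @ d)
    return quad(xF - x1, Y1) + quad(xF - x2, Y2) - quad(xF - x_bar, Yb)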

Expanded state metric: This metric is obtained by expanding the target state from the state $x(t_F)$ at the fusion time $t_F$ to the states at multiple times, $(t_1, t_2, \ldots, t_n)$, within the time interval $[t_0, t_F]$. If this set of times covers all the measurement times of both sensors, we can reformulate the nondeterministic problem defined in Section 6.2.1 as a static estimation problem in which the “static” states are $(x(t_1),\ldots,x(t_n))$ instead of $x(t_F)$. In this way, all the uncertainty generated by the process noise is translated into the cross-covariance among the target states at different times. The track association hypothesis evaluation formula (6.29) using the Chong–Mori–Chang metric (6.36) then becomes truly the conditional probability of each association hypothesis in the Bayesian sense, from which we can obtain the maximum a posteriori (MAP) track association hypothesis by solving the classical bipartite assignment problem.

Remarks: Strictly speaking, the use of (6.29) is justified only when the number of targets is known, i.e., when there are no missed targets and no false tracks. When missed targets are possible, we may have unpaired local tracks. In such a case, as shown in Mori and Chong (2003) and Ferry (2010), each track-to-track association must be adjusted according to an estimate of the target density, and, when the a priori number of targets is not Poisson, the constant C in (6.29) may depend on the numbers of paired and unpaired local tracks. The cases where there may be false tracks are theoretically more complicated. A track-to-track association metric proposed for such cases can be found in Blackman and Popoli (1999), and a recent theoretical treatment can be found in Mori et al. (2009).

The sensor biases and the track association are closely related, and may not be separable in some cases. In such a case, the track association metric in (6.29) can be modified by the sensor bias probability distribution, as shown in Levedahl (2002), Mori and Chong (2007), and Ferry (2010).

6.3.3    COMPARISON OF TRACK ASSOCIATION METRICS

In order to compare the various track association metrics described in Section 6.3.2, we will examine the track association performance using the evaluation function (6.29) with different track association metrics. A simple linear model, using the Ornstein–Uhlenbeck target dynamics and two complementary sensors described in Section 6.2.1.5, is used for this purpose. The complementary sensor case was chosen to mimic a situation where each local sensor is able to separate the targets relatively well into a set of high-quality local tracks, but there is still significant association uncertainty between the local tracks from both sensors, as illustrated in Figure 6.11.

Figure 6.12 shows the result of this comparison. Unlike the track fusion performance analysis of Section 6.2, there is no obvious analytical method for predicting the track association performance of any of the association metrics described earlier. Therefore, a Monte Carlo analysis was conducted. In each run, a random set of 100 targets was generated according to the model described in Section 6.3.1, assuming synchronous observations with the same number of 10 local measurements for each track. The initial position uncertainty standard deviation is 10 times as large as the measurement error, i.e., $\sigma_P = 10\sigma_m$. The figure compares the association performance of (1) the Bar-Shalom metric, (2) the Singer–Kanyuck metric, (3) the Chong–Mori–Chang metric, and (4) the extended state metric, varying (a) the normalized process noise intensity q and (b) the initial position uncertainty standard deviation $\sigma_P$. The complementary sensor case with 90° different sensor measurement error covariance matrices is used for this comparison, resulting in equal weights for the CI fusion rule, i.e., α = 1/2, in (6.18). The corresponding CI track association metric is defined by (6.34) with the equal weight $\hat{\alpha}_{ij} \equiv 1/2$, which makes the CI association metric equivalent to the Singer–Kanyuck metric (up to a constant factor).


FIGURE 6.11 Local tracks from two complementary sensors.


FIGURE 6.12 Track association performance comparison: (a) track association performance as function of normalized process noise intensity and (b) track association performance as function of normalized initial position standard deviation.

For each run, we examined each target to see whether the tracks originating from that target were correctly associated. The probability of correct association, defined as the probability of each track from sensor 1 being assigned to the “correct” track from sensor 2 (“correct” as indicated by the ground truth), was then calculated as the number of correctly associated targets divided by the total number of targets. Each point in the figure was obtained by averaging 1000 samples.
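A sketch of this evaluation for a single Monte Carlo run, assuming Python with SciPy and that the ground-truth permutation relating the two sets of tracks is available (names are illustrative), is:

import numpy as np
from scipy.optimize import linear_sum_assignment

def correct_association_rate(cost, true_perm):
    # cost[i, j] = L(i, j) under the chosen metric; true_perm[i] is the index
    # of the sensor-2 track that actually shares its origin with sensor-1 track i.
    rows, cols = linear_sum_assignment(cost)
    return float(np.mean(cols == np.asarray(true_perm)[rows]))

Averaging this rate over the Monte Carlo runs gives the probability of correct association plotted in Figure 6.12.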

In Figure 6.12a and b, the advantage of using the inter-sensor cross-covariance in the association metric is clearly shown by the better performance of the Bar-Shalom metric over the Singer–Kanyuck and Chong–Mori–Chang metrics, which do not use the cross-covariance matrix. The deterioration of the association performance in the middle range of the process noise intensity can be explained by its effect on the joint target state density, shown in Figure 6.1c in Section 6.2.1.5. The use of the a priori state mean by the Chong–Mori–Chang metric results in better association performance than that obtained with the Singer–Kanyuck metric, but the difference is rather small because the 10 local measurements may lessen the effect of the initial condition. The association performance using either of these metrics is worse than that of the Bar-Shalom metric, which uses the cross-covariance. In almost all situations, the extended state metric exhibits much better association performance than the other association metrics because it considers the state estimates at multiple times, not just at the fusion time.

6.4    CONCLUSIONS

This chapter has addressed the track fusion and track association problems in distributed multiple-target tracking. We have reviewed several track fusion algorithms developed over the last three decades and compared their performance. The use of linear-Gaussian models allows closed-form analytical performance evaluation. Simple but realistic target dynamics based on the Ornstein–Uhlenbeck model were used to compare the various track fusion rules for the one-time fusion and repeated fusion cases, with and without feedback of the globally fused target state estimates to the local tracking agents. Our analysis indicates that, even though some fusion rules perform slightly better than others depending on the situation, the performance of the more common fusion rules, such as the Speyer, minimum variance (MV) or BLUE, Bar-Shalom–Campo, and decorrelation rules, is only slightly worse (<5%) than that of centralized tracking. The choice of the appropriate fusion rule should therefore depend on factors such as communication requirements, implementation difficulty, and robustness.

Various track association metrics were compared with respect to track association performance for a simple one-time track fusion. For the complementary sensor case, we confirmed the clearly better track association performance of the Bar-Shalom metric, which considers the cross-covariance between two local tracks hypothesized to originate from the same target, over the Singer–Kanyuck and Chong–Mori–Chang metrics, which do not use such cross-covariance information. At the same time, the extended state metric for track association, which requires more data and computation, exhibits much better track association performance than any of the other association metrics. This is not surprising, because associating tracks using only the state estimates at the fusion time is difficult when the target is maneuvering. An approximate extended state track association metric may be desirable in the case of highly nondeterministic target dynamics.

REFERENCES

Anderson, B. D. O. and J. B. Moore. 1979. Optimal Filtering. Englewood Cliffs, NJ: Prentice-Hall.

Bar-Shalom, Y. 1981. On the track-to-track correlation problems. IEEE Transactions on Automatic Control AC 26(2): 571–572.

Bar-Shalom, Y. and L. Campo. 1986. The effect of the common process noise on the two-sensor fused-track covariance. IEEE Transactions on Aerospace and Electronic Systems AES 22(6): 803–805.

Bar-Shalom, Y. and T. E. Fortmann. 1988. Tracking and Data Association. San Diego, CA: Academic Press.

Bar-Shalom, Y. and X. R. Li. 1993. Estimation and Tracking: Principles, Techniques and Software. Dedham, MA: Artech House.

Bar-Shalom, Y., X. R. Li, and T. Kirubarajan. 2001. Estimation with Applications to Tracking and Navigation: Theory, Algorithms, and Software. New York: John Wiley & Sons.

Bar-Shalom, Y., P. K. Willett, and X. Tian. 2011. Tracking and Data Fusion: A Handbook of Algorithms. Storrs, CT: YBS Publishing.

Belkin, B., S. L. Anderson, and K. M. Sommar. 1993. The pseudomeasurement approach to track-to-track data fusion. Proceedings of the 1993 Joint Service Data Fusion Symposium, Laurel, MD, pp. 519–538.

Blackman, S. S. 1986. Multiple-Target Tracking with Radar Application. Norwood, MA: Artech House.

Blackman, S. S. and R. Popoli. 1999. Design and Analysis of Modern Tracking Systems. Norwood, MA: Artech House.

Bourgault, F. and H. F. Durrant-Whyte. 2004. Communication in general decentralized filters and the coordinated search strategy. Proceedings of the 7th International Conference on Information Fusion, Stockholm, Sweden, pp. 723–730.

Chang, K. C., C.-Y. Chong, and S. Mori. 2008. On scalable distributed sensor fusion. Proceedings of the 11th International Conference on Information Fusion, Cologne, Germany.

Chang, K. C., R. K. Saha, and Y. Bar-Shalom. 1997. On optimal track-to-track fusion. IEEE Transactions on Aerospace and Electronic Systems 33(4): 1271–1276.

Chang, K. C., Z. Tian, and R. K. Saha. 2002. Performance evaluation of track fusion with information matrix filter. IEEE Transactions on Aerospace and Electronic Systems 38(2): 455–466.

Chen, L., P. O. Arambel, and R. K. Mehra. 2002. Estimation under unknown correlation: Covariance intersection revised. IEEE Transactions on Automatic Control 47(11): 1879–1882.

Chong, C. Y. 1979. Hierarchical estimation. Proceedings of the MIT/ONR Workshop on C3, Monterey, CA.

Chong, C. Y., E. Tse, and S. Mori. 1983. Distributed estimation in network. Proceedings of the 83 American Control Conference, San Francisco, CA.

Chong, C. Y. and S. Mori. 2001. Convex combination and covariance intersection algorithms in distributed fusion. Proceedings of the 4th International Conference in Information Fusion, Montréal, Québec, Canada.

Chong, C.-Y., S. Mori, and K. C. Chang. 1985. Information fusion in distributed sensor networks. Proceedings of the 1985 American Control Conference, Boston, MA, pp. 830–835.

Chong, C. Y., K. C. Chang, and S. Mori. 1986. Distributed tracking in distributed sensor networks. Proceedings of the 1986 American Control Conference, Seattle, WA.

Chong, C. Y., S. Mori, and K. C. Chang. 1987. Adaptive distributed estimation. Proceedings of the 26th IEEE Conference on Decision and Control, Los Angeles, CA, pp. 2233–2238.

Chong, C. Y., S. Mori, and K. C. Chang. 1990. Distributed multitarget multisensor tracking (Chapter 8). In Multitarget-Multisensor Tracking: Advanced Applications. Y. Bar-Shalom (Ed.), pp. 247–295, Norwood, MA: Artech House.

Chong, C. Y., S. Mori, K. C. Chang, and W. H. Barker. 2000. Architectures and algorithms for track association and fusion. IEEE Aerospace and Electronic Systems Magazine 15: 5–13.

Cover, T. M. and J. A. Thomas. 2006. Elements of Information Theory. New York: John Wiley & Sons.

Drummond, O. E. 1996. Track fusion with feedback. Proceedings of the SPIE Symposium on Sensor and Data Processing of Small Targets, Vol. 2759, Orlando, FL, pp. 342–360.

Drummond, O. E. 1997a. A hybrid sensor fusion algorithm architecture and tracklets. Proceedings of the SPIE Symposium on Signal and Data Processing of Small Targets, Vol. 3163, San Diego, CA.

Drummond, O. E. 1997b. Tracklets and a hybrid fusion with process noise. Proceedings of the SPIE Symposium on Signal and Data Processing of Small Targets, Vol. 3163, San Diego, CA.

Dunham, D. T., S. S. Blackman, and R. J. Dempster. 2004. Multiple hypothesis tracking for a distributed multiple platform system. Proceedings of the SPIE Symposium on Signal and Data Processing of Small Targets, Orlando, FL, pp. 13–15.

Durrant-Whyte, H. F., B. S. Y. Rao, and H. Hu. 1990. Toward a fully decentralized architecture for multi-sensor data fusion. Proceedings of the IEEE International Conference on Robotic Automation, Cincinnati, OH, pp. 1331–1336.

Ferry, J. P. 2010. Exact association probability for data with bias and feature. Journal of Advances in Information Fusion 5(1): 41–66.

Gao, Y. and X. R. Li. 2010. Quasi-tracklet fusion accounting for cross-correlation. Proceedings of the 13th International Conference on Information Fusion, Edinburgh, U.K.

Govaers, F. and W. Koch. 2010. Distributed Kalman filter fusion at arbitrary instants of time. Proceedings of the 13th International Conference on Information Fusion, Edinburgh, U.K.

Govaers, F. and W. Koch. 2011. On the globalized likelihood function for exact track-to-track fusion at arbitrary instants of time. Proceedings of the 14th International Conference on Information Fusion, Chicago, IL.

Hashemipour, H. R., S. Roy, and A. J. Laub. 1988. Decentralized structures for parallel Kalman filtering. IEEE Transactions on Automatic Control AC 33: 88–93.

Hurley, M. 2002. An information-theoretic justification for covariance intersection and its generalization. Proceedings of the 5th International Conference on Information Fusion, Annapolis, MD.

Iyengar, S. S. and R. R. Brooks, Eds. 2005. Distributed Sensor Networks. Chapman & Hall/CRC Computer & Information Science Series. Boca Raton, FL: CRC Press.

Julier, S. J. 2006. An empirical study into the use of Chernoff information for robust, distributed fusion of Gaussian mixture models. Proceedings of the 8th International Conference on Information Fusion, Florence, Italy.

Julier, S. J., J. K. Uhlmann, J. Walters et al. 2006. The challenge of scalable and distributed fusion of disparate sources of information. Proceedings SPIE Conference on Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications, Vol. 6242, Orlando, FL.

Kalman, R. E. 1960. A new approach to linear filtering and prediction problems. Transactions of ASME—Journal of Basic Engineering, Series D 82: 35–45.

Kalman, R. E. and R. S. Bucy. 1960. New results in linear filtering and prediction theory. Transactions of ASME—Journal of Basic Engineering, 83: 95–108.

Koch, W. 2008. On optimal distributed Kalman filtering and retrodiction at arbitrary communication rates for maneuvering targets. Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligence Systems (MFI 2008), Seoul, Korea, pp. 457–462.

Koch, W. 2009. Exact update formulae for distributed Kalman filtering and retrodiction at arbitrary communication rates. Proceedings of the 12th International Conference on Information Fusion, Seattle, WA, pp. 2209–2216.

Levedahl, M. 2002. An explicit pattern matching assignment algorithm. Proceedings of SPIE Symposium on Signal and Data Processing of Small Targets, Vol. 4728, Orlando, FL.

Li, X. R., Y. Zhu, J. Wang et al. 2003. Optimal linear estimation fusion—Part I: Unified fusion rules. IEEE Transactions on Information Theory 49(9).

Liggins II, M. E. and K. C. Chang. 2009. Distributed fusion architectures, algorithms, and performance within a network-centric architecture. In Handbook of Multisensor Data Fusion: Theory and Practice. M. E. Liggins, D. H. Hall, and J. Llinas (Eds.), Boca Raton, FL: CRC Press.

Liggins II, M. E., C.-Y. Chong, I. Kadar et al. 1997. Distributed fusion architecture and algorithms for target tracking. Proceedings of the IEEE 85: 95–107.

Lobbia, R. and M. Kent. 1994. Data fusion of decentralized tracker outputs. IEEE Transactions on Aerospace and Electronic Systems 30: 787–799.

Miller, M. D., O. E. Drummond, and A. J. Perrella. 1998. Tracklets and covariance truncation options for theater missile tracking. Proceedings of the 1st International Conference on Multisource-Multisensor Data Fusion, Las Vegas, NV.

Moore, J. R. and W. D. Blair. 2000. Practical aspects of multisensor tracking (Chapter 1). In Multitarget-Multisensor Tracking: Applications and Advances, Vol. III. Y. Bar-Shalom and W. D. Blair (Eds.), Boston, MA: Artech House.

Mori, S., W. H. Barker, C.-Y. Chong, and K.-C. Chang. 2002. Track association and track fusion with non-deterministic target dynamics. IEEE Transaction on Aerospace and Electronic Systems 38(2): 659–668.

Mori, S. and C.-Y. Chong. 2003. Track-to-track association metric. Proceedings of the 6th International Conference on Information Fusion, Cairns, Queensland, Australia.

Mori, S. and C.-Y. Chong. 2007. Comparison of bias removal algorithms in track-to-track association. Proceedings of SPIE Symposium on Signal and Data Processing of Small Targets, Vol. 6699, San Diego, CA.

Mori, S., C.-Y. Chong, and K. C. Chang. 2009. Track association and fusion using Janossy measure density functions. Proceedings of the 12th International Conference on Information Fusion, Seattle, WA.

Nicholson, D., S. J. Julier, and J. K. Uhlmann. 2001. DDF: An evaluation of covariance intersection. Proceedings of the 4th International Conference on Information Fusion, Montreal, Quebec, Canada.

Nicholson, D., C. M. Lloyd, S. J. Julier et al. 2002. Scalable distributed data fusion. Proceedings of the Fifth International Conference on Information Fusion, Annapolis, MD, pp. 630–635.

Rao, B. S. Y., H. F. Durrant-Whyte, and J. A. Sheen. 1993. Fully decentralized multi-sensor system for tracking and surveillance. International Journal of Robotics Research 12(1): 20–44.

Rhodes, I. B. 1971. A tutorial introduction to estimation and filtering. IEEE Transactions on Automatic Control AC 16(6): 688–706.

Singer, R. A. and A. J. Kanyuck. 1971. Computer control of multiple site correlation. Automatica 7: 455–463.

Speyer, J. L. 1979. Computation and transmission requirements for a decentralized linear-quadratic-Gaussian control problem. IEEE Transactions on Automatic Control AC 24: 266–269.

Vebber, P. W. 1991. An examination of target tracking in the antisubmarine warfare system evaluation tool (ASSET), Master thesis, Naval Postgraduate School, Monterey, CA.

Washburn, A. 1969. Probability density of a moving particle. Operations Research 17(5): 861–871.

Wiener, N. 1949. Extrapolation, Interpolation, and Smoothing of Stationary Time Series. New York: Technology Press of MIT.

Zhu, Y. and X. R. Li. 1999. Best linear unbiased estimation fusion. Proceedings of the 2nd International Conference on Information Fusion, Sunnyvale, CA, pp. 1054–1061.

*  By $X^T$ we mean the transpose of a vector or matrix X. E is the conditional or unconditional mathematical expectation operator.

More precisely, (6.2) is meant to be a stochastic differential equation, $dx(t) = Ax(t)\,dt + B\,dw(t)$, with a unit-intensity Wiener process $(w(t))_{t\in[t_0,\infty)}$. We assume that $x(t_0)$, $(w(t))_{t\in[t_0,\infty)}$, and $\left((\eta_{ik})_{k=1}^{N_i}\right)_{i=1}^{2}$ are all independent of each other.

*  For this chapter, we use P and p as the generic symbols for conditional or unconditional probability density or mass functions, and g as the generic zero-mean Gaussian density function, i.e., $g(\xi;V) \overset{\mathrm{def}}{=} \det(2\pi V)^{-1/2}\exp\!\left(-(1/2)\,\xi^T V^{-1}\xi\right)$.

In this chapter, we use any conditioning in the strict Bayesian sense, e.g., $P(x|y) = P(x,y)/P(y)$. $(y_{ik})_{k=1}^{N_i}$ is shorthand for the finite sequence $y_{i1}, y_{i2}, \ldots, y_{iN_i}$.

*  By $\|\cdot\|$, we mean the standard Euclidean norm, i.e., $\|x\| = \sqrt{x^T x}$ for any vector x.

*  By $\|\cdot\|_A$, we mean the norm on any Euclidean space defined by a positive definite symmetric matrix A as $\|x\|_A \overset{\mathrm{def}}{=} \sqrt{x^T A x}$ for each vector x.

*  Except for some extreme cases such as the cases where the local agents send out the local estimates after every synchronized observation (i.e., the full-communication-rate cases).

*  Where I and 0 are the 2 × 2 identity and zero matrices, respectively.

*  With α = 1/2, the denominator of the right-hand side of (6.19) becomes $\int_E \sqrt{p_1(x)\,p_2(x)}\,dx$, an expression known as the Bhattacharyya bound; hence, we may call the covariance intersection fusion rule with α = 1/2 the Bhattacharyya fusion rule (Chang et al. 2008).
