10

A Framework for Distributed

Subrata Das

CONTENTS

10.1  Introduction

10.2  Concept and Approach

10.3  Distributed Fusion Environments

10.4  Algorithm for Distributed Situation Assessment

10.4.1  Junction Tree Construction and Inference

10.5  Distributed Kalman Filter

10.6  Relevance to Network-Centric Warfare

10.7  Role of Intelligent Agents

10.7.1  What Is an Agent?

10.7.2  Use of Agents in Distributed Fusion

10.8  Conclusions and Further Reading

References

10.1  INTRODUCTION

This chapter presents distributed fusion from the situation assessment (SA) (a.k.a. Level 2 fusion [Das 2008b, Steinberg 2008]) perspective. It also describes the relevance of distributed fusion to network-centric warfare (NCW) environments and the role of intelligent agents in that context.

For SA in a distributed NCW environment, each node represents a sensor, software program, machine, human operator, warfighter, or a unit. A fusion node maintains the joint state of the set of variables modeling a local SA task at hand. Informally, the set of variables maintained by a fusion node is a clique (a maximal set of variables that are all pairwise linked), and the set of cliques in the environment together form a clique network to be transformed into a junction tree, where the nodes are the cliques. Thus, the cliques of a junction tree are maintained by local fusion nodes within the environment. Local fusion nodes communicate and coordinate with each other to improve their local estimates of the situation, avoiding the repeated use of identical information.

A junction tree can also be obtained by transforming (Jensen 2001) a Bayesian belief network (BN) (Pearl 1988, Jensen 2001, Das 2008b) model representing a global SA model in the context of a mission, thereby contributing to the development of a common tactical picture (CTP) of the mission via shared awareness. Each clique is maintained by a local fusion node. Inference on such a BN model for SA relies on evidence from individual local fusion nodes. We make use of the message-passing inference algorithm for junction trees that naturally fits within distributed NCW environments. A BN structure with nodes and links is a natural fit for distributing tasks in an NCW environment at various levels of abstraction and hierarchy. BNs have been applied extensively for centralized fusion (e.g., Jones et al. 1998, Das et al. 2002a, Wright et al. 2002, Mirmoeini and Krishnamurthy 2005, Su et al. 2011) where domain variables are represented by nodes.

A major source of evidence for SA is target tracks. We briefly describe distributed target tracking in an NCW environment for the sake of completeness of the chapter. We produce an overall estimate of a target at a fusion center that combines estimates from distributed sensors located at different fusion nodes which are all tracking the same target. We make use of the Kalman filter (KF) algorithm for estimating targets at local fusion nodes from sensor observations. Individual estimates from local fusion nodes are then combined at a fusion center, thereby generating evidence to be propagated into a BN model for SA.

The objective of the survey-type penultimate section on intelligent agents is to provide readers with a background on the capabilities of intelligent agents in the context of fusion, and on how intelligent agents are being exploited by fusion researchers and developers. It is clear that agent technology is ideal for SA in a distributed NCW environment. In fact, each node in the environment can be implemented as or represented by an intelligent agent.

Readers who need supplementary background information on BN models, algorithms, and distributed fusion are recommended to consult the concluding section’s list of relevant references.

10.2  CONCEPT AND APPROACH

Some sensor networks consist of a large number of nodes of sensing devices, densely distributed over the operational environment of interest. Nodes have wired or wireless connectivity tied into one or more backbone networks, such as the Internet, SIPRNET (secret Internet protocol router network), or NIPRNET (nonclassified [unclassified but sensitive] Internet protocol router network). Each sensor node has its own measurements-collection and processing facility to estimate and understand its environment. A sensor is thus “situationally aware” in terms of position and movement of targets and the threats they pose. This awareness must be shared among all other nodes to generate an assessment of the environmental situation as a whole (sometimes called the common tactical picture [CTP]) for effective coordinated action.

Sensor nodes can be conceptualized as intelligent autonomous agents that communicate, coordinate, and cooperate with each other in order to improve their local situational awareness and to assess the situation of the operational environment as a whole. The concept of distributed fusion refers to decentralized processing environments, consisting of autonomous sensor nodes, and additional processing nodes without sensors, if necessary, to facilitate message communication, data storage, relaying, information aggregation, and asset scheduling. Some of the advantages of distributed fusion are reduced communication bandwidth, distribution of processing load, and improved system survivability by avoiding a single point of failure. The distributed fusion concept naturally fits within the upcoming NCW paradigm and its backbone command network, the global information grid (GIG).

As a concrete example of distributed fusion, consider the decentralized processing environment as shown in Figure 10.1.

Image

FIGURE 10.1 An example of distributed fusion environment.

In this example, we assume there is a high-value target (top right of the figure) within a region of interest and that the designated areas A and B surrounding the target are considered to be the most vulnerable. These two areas must be under surveillance in order to detect any probing activities that indicate a possible attack threat. The sensor coverage in areas A and B, shown in grey, is by an infrared sensor (MASINT) and a video camera (IMINT), respectively. In addition, a human observer (HUMINT) is watching the area in common between A and B. There are two local fusion centers for the two areas to detect any probing activity. The infrared sensor has wireless connectivity with the local fusion center for area A, whereas the video camera has wired connectivity with the local fusion center for area B for streaming video. Moreover, the human observer communicates wirelessly with both local fusion centers. Each of the two local centers fuses the sensor data it receives in order to identify any possible probing activity. The centers then pass their assessments (i.e., higher-level abstraction, rather than raw sensor information, thus saving bandwidth) to another fusion center that assesses the overall threat level, based on the reports of probing activities and other relevant prior contextual information.

In a centralized fusion environment, where observations from IMINT, HUMINT, and MASINT are gathered in one place and fused, a BN model, such as the one in Figure 10.2, can be used for an overall SA. This model handles dependence among sensors and fusion centers via their representation in nodes and interrelationships.

Image

FIGURE 10.2 A centralized BN model for situation assessment.

A probing activity at an area will be observed by those sensors covering the area, and the lower half of the BN models this. For example, MASINT and HUMINT reports will be generated due to a probing activity at area A. Similarly, IMINT and HUMINT reports will be generated due to a probing activity at area B. The upper half of the BN models the threat of an attack based on the probing activities at areas A and B, together with other contextual information.

In a decentralized environment, as illustrated in Figure 10.1, each of the three fusion centers contains only a fragment of the above BN model as shown in Figure 10.3.

Local fusion centers A and B assess probing activities based on their local model fragments, and send their assessments to the global fusion center via messages. The global fusion center then uses its own models to determine the overall attack threat. If the same HUMINT report is received by both local fusion centers, the process has to ensure that this common information is used only once; otherwise, there will be a higher-than-actual level of support for a threat to be determined by the global fusion model. This is called the data incest problem in a distributed fusion environment, which is the result of repeated use of identical information. Pedigree needs to be traced, not only to identify common information but also to assign appropriate trust and confidence to data sources. An information graph (Liggins et al. 1997), for example, allows common prior information to be found.

10.3  DISTRIBUTED FUSION ENVIRONMENTS

As shown in Figure 10.4, a typical distributed fusion environment is likely to contain a variety of fusion nodes that do a variety of tasks:

•  Process observations generated from a cluster of heterogeneous sensors (e.g., the local fusion centers A and B in Figure 10.1, and nodes labeled 5 and 9 in Figure 10.4).

•  Process observations generated from a single sensor (e.g., nodes labeled 11, 12, and 13 in Figure 10.4).

•  Perform a task (e.g., situation assessment [SA] and threat assessment [TA], course of action [COA] generation, planning and scheduling, CTP generation, collection management) based on information received from other sensors in the environment and from other information stored in databases (e.g., nodes labeled 1, 2, 3, 4, 6, 7, and 10 in Figure 10.4).

•  Relay observations generated from sensors to other nodes (e.g., the node labeled 8 in Figure 10.4).

Image

FIGURE 10.3 Distributed parts of the BN models.

Image

FIGURE 10.4 A typical distributed fusion environment.

As shown in Figure 10.4, a fusion node receives values of some variables obtained either from sensor observations (X variables) or via information aggregation by other nodes (A variables). Such values can also be obtained from databases. For example, the fusion center labeled 6 receives values of the variables A2, X5, and X6 from the cluster fusion node labeled 9 and values of the variable X3 from a database. Note that an arrow between two nodes indicates the flow of information in the direction of the arrow as opposed to a communication link. The existence of an arrow indicates the presence of at least a one-way communication link, though not necessarily a direct link, via some communication network route. For example, there is a one-way communication link from node 3 to node 1. A reverse communication link between these two nodes will be necessary in implementing our message-passing distributed fusion algorithm to be presented later.

Each node (fusion center, cluster fusion, relay switch, or local fusion) in a distributed fusion environment has knowledge of the states of some variables, called local variables, as shown in Figure 10.5. For example, the fusion node labeled 6 has knowledge of the X variables X3, X5, and X6, and A variables A2 and A3. The node receives values of the variables A2, X5, and X6 from the node labeled 9 and the variable X3 from a database. The node generates values of the variable A3 via some information aggregation operation. On the other hand, fusion node 9 receives measurements X4, X5, and X6 from a cluster of sensors and generates A2; fusion node 8 relays values of the variables X10, X11, and X12 to other nodes; and fusion node 12 obtains measurements of X8 from a single sensor.

Image

FIGURE 10.5 Network of distributed fusion nodes.

Image

FIGURE 10.6 Possible distributed fusion environments: (a) centralized; (b) hierarchical; (c) peer-to-peer; and (d) grid-based.

Figure 10.6 shows four possible distributed fusion environments: centralized, hierarchical, peer-to-peer, and grid-based. Note that the direction of an arrow indicates both information flow and the existence of a communication link along the direction of an arrow.

In a centralized environment, only the sensors are distributed, sending their observations to a centralized fusion node. The centralized node combines the sensor information to perform tracking or SA. In a hierarchical environment, the fusion nodes are arranged in a hierarchy, with the higher-level nodes processing results from the lower-level nodes and possibly providing some feedback. The hierarchical architecture will be natural for applications where situations are assessed with an increasing level of abstraction along a command hierarchy, starting with the tracking of targets at the bottom level. Considerable savings in communication effort can be achieved in a hierarchical fusion environment. In both peer-to-peer and grid-based distributed environments, every node is capable of communicating with every other node. This internode communication is direct in the case of a peer-to-peer environment, but some form of “publish and subscribe” communication mechanism is required in a grid-based environment.

10.4  ALGORITHM FOR DISTRIBUTED SITUATION ASSESSMENT

As mentioned in Section 10.1, there are two ways in which we can accomplish SA in a distributed environment: (1) each local fusion node maintains the state of a set of variables and (2) there is a BN model for global SA.

In the first case, we start with a distributed fusion environment such as the one shown in Figure 10.4. Our distributed SA framework in this case has four steps:

•  Network formation

•  Spanning tree formation

•  Junction tree formation

•  Message passing

The nodes of the sensor network first organize themselves into a network of fusion nodes, similar to the one shown in Figure 10.5. Each fusion node has partial knowledge of the whole environment. This network is then transformed into a spanning tree (a spanning tree of a connected, undirected graph, such as the one in Figure 10.5, is a tree composed of all the vertices and some or all of the edges of the graph), so that neighbor nodes establish high-quality connections. In addition, the spanning tree formation algorithm optimizes the communication required by inference in junction trees. The algorithm can recover from communication and node failures by regenerating the spanning tree. Figure 10.7 shows a spanning tree obtained from the network in Figure 10.5. The decision to sever the link between nodes 4 and 6, as opposed to the link between nodes 3 and 6, can be made using the communication bandwidth and reliability information in the cycle of nodes 1, 3, 6, and 4.
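The spanning tree formation step can be sketched as a maximum-spanning-tree pass over link qualities, so that the weakest link in any cycle is the one severed. The node set below follows Figure 10.5, but the link-quality weights are illustrative assumptions:

```python
# Quality-aware spanning-tree formation as a Kruskal-style pass with
# union-find; the link-quality weights (first tuple element) are assumed.
links = [(0.9, 1, 3), (0.8, 1, 4), (0.85, 3, 6), (0.6, 4, 6), (0.7, 6, 9)]

parent = {n: n for n in {1, 3, 4, 6, 9}}

def find(n):
    # path-halving union-find lookup
    while parent[n] != n:
        parent[n] = parent[parent[n]]
        n = parent[n]
    return n

tree = []
for quality, a, b in sorted(links, reverse=True):  # best links first
    ra, rb = find(a), find(b)
    if ra != rb:                                   # keep only acyclic links
        parent[ra] = rb
        tree.append((a, b))

print(tree)
```

With these weights the 4-6 link is the weakest edge on the 1-3-6-4 cycle, so it is the one left out, matching the choice discussed above.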

Using pairwise communication-link information sent between neighbors in a spanning tree, the nodes compute the information necessary to transform the spanning tree into a junction tree for the inference problem. Finally, the inference problem is solved via message-passing on the junction tree.

During the formation of a spanning tree, each node chooses a set of neighbors, so that the nodes form a spanning tree where adjacent nodes have high-quality communication links. Each node’s clique is then determined as follows. If i is a node and j is a neighbor of i, then the variables reachable to j from i, Rij, are defined recursively as

Image

FIGURE 10.7 A spanning tree.

$$R_{ij} = D_i \cup \bigcup_{k \in \mathrm{nbr}(i) \setminus \{j\}} R_{ki}$$

(10.1)

where Di is the set of local variables of node i. A base case corresponds to a leaf node, which is simply a collection of a node’s local variables. If a node has two sets of reachable variables to two of its neighbors that both include some variable V, then the node must also carry V to satisfy the running intersection property of a junction tree. Formally, node i computes its clique Ci as

$$C_i = D_i \cup \bigcup_{\substack{j,k \in \mathrm{nbr}(i) \\ j \neq k}} \left( R_{ji} \cap R_{ki} \right)$$

(10.2)

A node i can also compute its separator set $S_{ij} = C_i \cap C_j$ with its neighbor j using reachable variables as

$$S_{ij} = C_i \cap R_{ji}$$

(10.3)

Figure 10.8 shows the junction tree obtained from the spanning tree in Figure 10.7.

The variables reachable to a leaf node, for example, fusion node 9, are its local variables A2, X4, X5, X6. The variables reachable to an intermediate node, for example, fusion node 1, from its neighboring nodes 3 and 4 are

Image

FIGURE 10.8 A junction tree from the distributed fusion environment.

$$R_{31} = \{A_1, A_2, A_3, X_1, X_2, X_3, X_4, X_5, X_6\}, \qquad R_{41} = \{A_3, A_5, A_6, A_7, X_7, X_8, X_9, X_{10}, X_{11}, X_{12}\}$$

The local variables of fusion node 1 are $D_1 = \{A_1, A_2, A_4, A_7\}$. Therefore, its clique is

$$C_1 = \{A_1, A_2, A_3, A_4, A_7\}$$
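The reachable-variable and clique computations of Equations 10.1 through 10.3 can be sketched recursively. The local-variable sets of nodes 1, 6, and 9 follow the text; the tree shape and the local variables of nodes 3 and 4 are illustrative assumptions chosen to be consistent with the worked example:

```python
# Hypothetical spanning tree over fusion nodes 1, 3, 4, 6, and 9;
# D[3] and D[4] are assumptions, the other sets come from the text.
tree = {1: [3, 4], 3: [1, 6], 6: [3, 9], 9: [6], 4: [1]}
D = {
    1: {"A1", "A2", "A4", "A7"},
    3: {"A1", "X1", "X2"},                       # assumed
    6: {"A2", "A3", "X3", "X5", "X6"},
    9: {"A2", "X4", "X5", "X6"},
    4: {"A3", "A5", "A6", "A7", "X7", "X8",      # assumed leaf holding
        "X9", "X10", "X11", "X12"},              # all of R41
}

def reachable(i, j):
    """Variables reachable to neighbor j from node i (Equation 10.1)."""
    out = set(D[i])
    for k in tree[i]:
        if k != j:
            out |= reachable(k, i)
    return out

def clique(i):
    """Clique of node i (Equation 10.2)."""
    C = set(D[i])
    nbrs = tree[i]
    for a in range(len(nbrs)):
        for b in range(a + 1, len(nbrs)):
            C |= reachable(nbrs[a], i) & reachable(nbrs[b], i)
    return C

def separator(i, j):
    """Separator set S_ij (Equation 10.3)."""
    return clique(i) & reachable(j, i)

print(sorted(reachable(3, 1)))   # R31 from the worked example
print(sorted(clique(1)))         # C1 = {A1, A2, A3, A4, A7}
```

Running the sketch reproduces the worked example: the pair of reachable sets meeting at node 1 intersect only in A3, which is why A3 must join node 1's clique.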

The formation of a suitable junction tree from a BN model for SA is the only part of our distributed fusion approach that is global in nature.

Now we focus on the second case of a distributed SA where we have a BN model for global SA such as the one shown in Figure 10.2. We first apply an algorithm (Jensen 2001) that systematically transforms a BN to a junction tree in four steps: moralization, triangulation, clique identification, and junction tree formation.

10.4.1  JUNCTION TREE CONSTRUCTION AND INFERENCE

The moral graph of a BN is obtained by adding a link between any pair of variables with a common “child” and dropping the directions of the original links in the BN. An undirected graph is triangulated if any cycle of length greater than 3 has a chord, that is, an edge joining two nonconsecutive nodes along the cycle. The nodes of a junction tree for a graph are the cliques in the graph (maximal sets of variables that are all pairwise linked).
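The moralization step can be sketched as follows on a hypothetical three-node fragment (two probing-activity parents sharing a report child), rather than on the full model of Figure 10.2:

```python
# Moralization: add a link between every pair of parents with a common
# child, then drop the directions of the original links.
bn = {                          # child -> list of parents (names illustrative)
    "Report": ["ProbingA", "ProbingB"],
    "ProbingA": [],
    "ProbingB": [],
}

moral = set()
for child, parents in bn.items():
    for p in parents:                        # original links, now undirected
        moral.add(frozenset({p, child}))
    for i, p in enumerate(parents):          # "marry" co-parents
        for q in parents[i + 1:]:
            moral.add(frozenset({p, q}))

print(sorted(tuple(sorted(e)) for e in moral))
```

The added ProbingA-ProbingB edge is what turns the two parents and their child into a single clique in the resulting moral graph.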

Once we have formed a junction tree from either of the two cases above, such as the one in Figure 10.8, a message-passing algorithm computes the prior beliefs of the variables in the network via an initialization of the junction tree structure, followed by evidence propagation and marginalization. The algorithm can run asynchronously, with each node responding to changes in other nodes' states. Each time a node i receives a new separator message from a neighbor j, it recomputes its clique potential and its separator messages to all neighbors except j, and transmits them if they have changed from their previous values. Here we briefly discuss the algorithm and how to handle evidence by computing the posterior beliefs of the variables in the network.

A junction tree maintains a joint probability distribution at each node, cluster, or separator set in terms of a belief potential, which is a function that maps each instantiation of the set of variables in the node into a real number. The belief potential of a set of variables X will be denoted as φX, and φX(x) is the number onto which the belief potential maps x. The probability distribution of a set of variables X is just the special case of a potential whose elements add up to 1. In other words,

$$\sum_{x} \phi_X(x) = \sum_{x} p(x) = 1$$

(10.4)

The marginalization and multiplication operations on potentials are defined in a manner similar to the same operations on probability distributions.

Belief potentials encode the joint distribution p(X) of the BN according to the following:

$$p(\mathbf{X}) = \frac{\prod_i \phi_{C_i}}{\prod_j \phi_{S_j}}$$

(10.5)

where φCi and φSj are the cluster and separator set potentials, respectively. We have the following joint distribution for the junction tree in Figure 10.8:

$$p(A_1, \ldots, A_9, X_1, \ldots, X_{12}) = \frac{\phi_{C_1}\, \phi_{C_2} \cdots \phi_{C_{13}}}{\phi_{S_{13}}\, \phi_{S_{14}}\, \phi_{S_{24}}\, \phi_{S_{35}} \cdots \phi_{S_{12,13}}}$$

(10.6)

where

$C_i$ represents the set of variables in clique $i$

$S_{ij} = C_i \cap C_j$ represents the separator set between nodes $i$ and $j$

It is imperative that a cluster potential agrees with its neighboring separator sets on the variables in common, up to marginalization. This imperative is formalized by the concept of local consistency. A junction tree is locally consistent if, for each cluster C and neighboring separator set S, the following holds:

$$\sum_{C \setminus S} \phi_C = \phi_S$$

(10.7)
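Equations 10.5 and 10.7 can be illustrated on a two-clique junction tree over binary variables A, B, and C, with cliques {A, B} and {B, C} and separator {B}; the numerical tables below are illustrative:

```python
import numpy as np

# Clique potentials held as ndarrays, one axis per variable.
p_A = np.array([[0.5], [0.5]])
p_B_given_A = np.array([[0.3, 0.7],
                        [0.6, 0.4]])
phi_AB = p_B_given_A * p_A          # potential over {A, B} = p(A, B)
phi_BC = np.array([[0.9, 0.1],
                   [0.2, 0.8]])     # potential over {B, C} = p(C | B)
phi_S = np.ones(2)                  # separator potential over {B}, initial 1

# Equation 10.5: the joint is the product of clique potentials divided
# by the product of separator potentials.
joint = phi_AB[:, :, None] * phi_BC[None, :, :] / phi_S[None, :, None]
print(joint.sum())                  # a proper distribution sums to 1

# Equation 10.7 (local consistency): after passing the message
# phi_S = sum_A phi_AB, the separator agrees with both cliques on B.
phi_S_new = phi_AB.sum(axis=0)
print(np.allclose(phi_S_new, joint.sum(axis=(0, 2))))
```

Because the separator potential divides out exactly what the two cliques share, the same small example also shows why double-counting the common variable B is avoided.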

To start initialization, for each cluster C and separator set S, set the following:

$$\phi_C \leftarrow 1, \qquad \phi_S \leftarrow 1$$

(10.8)

Then assign each variable X to a cluster C that contains X and its parents pa(X). Then set the following:

$$\phi_C \leftarrow \phi_C\, p(X \mid pa(X))$$

(10.9)

When new evidence on a variable is entered, the tree becomes inconsistent and requires a global propagation to restore consistency. The posterior probabilities can then be computed via marginalization and normalization. If evidence on a variable is updated, the tree requires re-initialization. Next, we present initialization, normalization, and marginalization procedures for handling evidence.

As before, to start initialization, for each cluster C and separator set S, set the following:

$$\phi_C \leftarrow 1, \qquad \phi_S \leftarrow 1$$

(10.10)

Then assign each variable X to a cluster C that contains X and its parents pa(X), and then set the following:

$$\phi_C \leftarrow \phi_C\, p(X \mid pa(X)), \qquad \lambda_X \leftarrow 1$$

(10.11)

where λX is the likelihood vector for the variable X. Now, perform the following steps for each piece of evidence on a variable X:

•  Encode the evidence on the variable as a likelihood λXnew.

•  Identify a cluster C that contains X (e.g., one containing the variable and its parents).

•  Update as follows:

$$\phi_C \leftarrow \phi_C\, \frac{\lambda_X^{\mathrm{new}}}{\lambda_X}, \qquad \lambda_X \leftarrow \lambda_X^{\mathrm{new}}$$

(10.12)

Now perform a global propagation using the two recursive procedures: collect evidence and distribute evidence. Note that if the belief potential of one cluster C is modified, then it is sufficient to unmark all clusters and call only distribute evidence (C).

The potential $\phi_C$ for each cluster C is now p(C, e), where e denotes the evidence incorporated into the tree. Now marginalize $\phi_C$ onto the variable X as

$$p(X, e) = \sum_{C \setminus \{X\}} \phi_C$$

(10.13)

Compute posterior p(X|e) as follows:

$$p(X \mid e) = \frac{p(X, e)}{p(e)} = \frac{p(X, e)}{\sum_X p(X, e)}$$

(10.14)

To update evidence for each variable X on which evidence has been obtained, first update its likelihood vector. Then initialize the junction tree by incorporating the observations. Finally, perform global propagation, marginalization, etc.
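The evidence-handling steps of Equations 10.11 through 10.14 can be traced on a single clique {A, B}; the prior and conditional tables are illustrative:

```python
import numpy as np

# A two-variable clique: A is the parent, B the child.
p_A = np.array([0.4, 0.6])
p_B_given_A = np.array([[0.8, 0.2],
                        [0.1, 0.9]])

phi_C = p_A[:, None] * p_B_given_A            # initialization (Equation 10.11)
lam_B = np.ones(2)                            # likelihood vector for B

lam_B_new = np.array([1.0, 0.0])              # hard evidence: B = 0
phi_C = phi_C * (lam_B_new / lam_B)[None, :]  # evidence update (Equation 10.12)
lam_B = lam_B_new

p_A_and_e = phi_C.sum(axis=1)                 # marginalization (Equation 10.13)
p_A_given_e = p_A_and_e / p_A_and_e.sum()     # normalization (Equation 10.14)
print(p_A_given_e)
```

Observing B = 0 shifts the belief strongly toward A = 0, since that state makes the observation far more likely under the assumed tables.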

10.5  DISTRIBUTED KALMAN FILTER

As shown in Figure 10.9, we assume that a distributed KF environment consists of N local fusion nodes, producing track estimates based on a single sensor or multiple local sensors, and a fusion center, combining these local estimates into a global one. An example distributed environment for tracking a ground vehicle on a road consists of (1) a group of ground acoustic sensors laid on the road, which are coordinated by a local fusion node, (2) an aerial video feed provided by an unmanned aerial vehicle (UAV), and (3) ground moving target indicator (GMTI) data generated by joint surveillance target attack radar system (JSTARS). A local fusion node requires feedback from the global estimate to achieve the best performance. Moreover, the global estimation has to cope with sensors running at different observation rates.

Image

FIGURE 10.9 Distributed Kalman filter.

The target’s dynamic is modeled in a transition model as

$$X_k = F X_{k-1} + W_{k-1}$$

(10.15)

where

the state vector $X_k$ is the estimate of the target at time instant k

$F$ is the state transition matrix, invariant over k

$W_k$ is zero-mean white Gaussian process noise

The measurement models are given by

$$Z_{ik} = H_i X_k + V_{ik}$$

(10.16)

where

$Z_{ik}$ is the measurement, or observed output state, at time step k from the ith sensor (i = 1, 2, …, N)

$H_i$ is the corresponding observation matrix, invariant over k

$V_{ik}$ is the corresponding zero-mean white Gaussian measurement noise

The centralized KF algorithm for estimating the target’s state and error covariance matrix has the following two recursive steps:

Prediction:

$$P_{k|k-1} = F P_{k-1|k-1} F^T + Q_{k-1}, \qquad \hat{X}_{k|k-1} = F \hat{X}_{k-1|k-1}$$

(10.17)

Update:

$$P_{k|k}^{-1} = P_{k|k-1}^{-1} + \sum_{i=1}^{N} H_i^T R_{ik}^{-1} H_i, \qquad \hat{X}_{k|k} = P_{k|k}\left[ P_{k|k-1}^{-1}\, \hat{X}_{k|k-1} + \sum_{i=1}^{N} H_i^T R_{ik}^{-1} Z_{ik} \right]$$

(10.18)

where

$\hat{X}_{k|k}$ is the state estimate at time step k

$P_{k|k}$ is the error covariance matrix

$Q_k$ and $R_{ik}$ are the covariance matrices of the process and measurement noises, respectively

The inverse $P^{-1}$ of the covariance matrix P is a measure of the information contained in the corresponding state estimate.
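The centralized information-form update of Equations 10.17 and 10.18 can be sketched for a hypothetical constant-velocity target observed by two position sensors; the model matrices, noise levels, and measurements are all illustrative assumptions:

```python
import numpy as np

# Constant-velocity model: state = [position, velocity], unit time step.
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])                     # transition matrix
Q = 0.01 * np.eye(2)                           # process-noise covariance
H = [np.array([[1.0, 0.0]]),                   # two sensors, each observing
     np.array([[1.0, 0.0]])]                   # position only
R = [np.array([[0.5]]), np.array([[1.0]])]     # measurement-noise covariances

x = np.zeros(2)
P = np.eye(2)
for z in [np.array([1.1, 0.9]),                # per-step sensor measurements
          np.array([2.0, 2.2]),
          np.array([3.1, 2.9])]:
    # Prediction (Equation 10.17)
    P_pred = F @ P @ F.T + Q
    x_pred = F @ x
    # Update (Equation 10.18): accumulate each sensor's information term
    P_inv = np.linalg.inv(P_pred)
    info = P_inv @ x_pred
    for Hi, Ri, zi in zip(H, R, z):
        P_inv = P_inv + Hi.T @ np.linalg.inv(Ri) @ Hi
        info = info + (Hi.T @ np.linalg.inv(Ri)).ravel() * zi
    P = np.linalg.inv(P_inv)
    x = P @ info

print(x)  # position estimate tracks the (noisy) measurements
```

The inverse-covariance form makes the multisensor update a simple sum of per-sensor information terms, which is exactly the property the distributed variants below exploit.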

In a distributed KF environment, each local fusion node i produces its own estimate $\hat{X}_i(k|k)$ based on the information available from its sensors, using the standard KF technique. These individual estimates are then fused together at the fusion center to produce the overall estimate $\hat{X}_{k|k}$.

As shown in Figure 10.10, there are two ways to carry out distributed KF (Liggins et al. 1997):

•  Without feedback, meaning an individual fusion node performs target tracking based on its own local sensor measurements and sends its estimation of the target state and error covariance matrix to the fusion center at every time step.

•  With feedback, meaning an individual fusion node sends its estimation to the fusion center as before but obtains feedback from the fusion center in terms of the center’s overall estimation, using combined results from individual local fusion nodes.

Image

FIGURE 10.10 Distributed target tracking with and without feedback.

Without feedback:

$$\begin{aligned} P_{k|k}^{-1} &= P_{k|k-1}^{-1} + \sum_{i=1}^{N}\left[ P_i(k|k)^{-1} - P_i(k|k-1)^{-1} \right] \\ \hat{X}_{k|k} &= P_{k|k}\left[ P_{k|k-1}^{-1}\hat{X}_{k|k-1} + \sum_{i=1}^{N}\left( P_i(k|k)^{-1}\hat{X}_i(k|k) - P_i(k|k-1)^{-1}\hat{X}_i(k|k-1) \right) \right] \end{aligned}$$

(10.19)

With feedback:

$$\begin{aligned} P_{k|k}^{-1} &= \sum_{i=1}^{N} P_i(k|k)^{-1} - (N-1)P_{k|k-1}^{-1} \\ \hat{X}_{k|k} &= P_{k|k}\left[ \sum_{i=1}^{N} P_i(k|k)^{-1}\hat{X}_i(k|k) - (N-1)P_{k|k-1}^{-1}\hat{X}_{k|k-1} \right] \end{aligned}$$

(10.20)

Note that in the above two cases of estimation, the fusion center fuses only the incremental information when there is no feedback; the new information is the difference between the current and previous estimates from the local fusion nodes. When there is feedback, the fusion center must remove its own previously fed-back information before combining the local estimates. In other words, the new information to be sent to the fusion center is the difference between the new estimate and the last feedback from the fusion center. Removing this previously used information ensures that the local estimates being combined are independent.
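The with-feedback fusion rule of Equation 10.20 can be traced numerically for N = 2 local nodes tracking a scalar state; all numbers are illustrative:

```python
import numpy as np

N = 2
P_pred = np.array([[4.0]])      # fusion-center prediction P_{k|k-1}
x_pred = np.array([1.0])        # fusion-center predicted state
P_loc = [np.array([[1.0]]),     # local posterior covariances P_i(k|k)
         np.array([[2.0]])]
x_loc = [np.array([1.2]),       # local posterior estimates
         np.array([0.9])]

# Equation 10.20: sum the local information, then subtract the (N - 1)
# redundant copies of the fed-back prior information.
P_inv = sum(np.linalg.inv(Pi) for Pi in P_loc) \
        - (N - 1) * np.linalg.inv(P_pred)
info = sum(np.linalg.inv(Pi) @ xi for Pi, xi in zip(P_loc, x_loc)) \
       - (N - 1) * np.linalg.inv(P_pred) @ x_pred

P = np.linalg.inv(P_inv)
x = P @ info
print(x, P)
```

The subtraction of the (N - 1) prior terms is the concrete form of "removing previously used information": without it, the prior fed back to every node would be counted N times.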

10.6  RELEVANCE TO NETWORK-CENTRIC WARFARE

The NCW concept (Cebrowski and Garstka 1998, Cebrowski 2001) is a part of the DoD’s effort to create a twenty-first-century military by transforming its primarily platform-centric force to a network-centric force through the use of modern information technologies. NCW is predicated upon dramatically improved capabilities for information sharing via an Internet-like infrastructure. When paired with enhanced capabilities for sensing, information sharing can enable a force to realize the full potential of dominant maneuver, precision engagement, full-dimensional protection, and focused logistics.

As shown in Figure 10.11, NCW involves working in the intersection of three interconnected domains, namely, physical, information, and cognitive.

The physical domain is where the situation the military seeks to influence exists. It is the domain where strikes, protections, and maneuverings take place across the environments of ground, sea, air, and space. It is the domain where physical platforms and the communications networks that connect them reside.

The information domain is where information is created, manipulated, and shared. It is the domain that facilitates the communication of information among warfighters. It is the domain where the command and control of modern military forces is communicated and where the commander’s intent is conveyed.

Image

FIGURE 10.11 Conceptual vision of NCW. (From NCW, Network Centric Warfare, Department of Defense, Report to Congress, 2001.)

The cognitive domain is in the minds of the participants. It is the domain where perceptions, awareness, understanding, beliefs, and values reside and where, as a result of sense-making, decisions are made.

From the perspective of distributed fusion presented in the earlier sections, it is the cognitive domain that provides warfighters with the capability to develop and share high-quality situational awareness. Fusion nodes representing warfighters communicate their assessments of situations via appropriate coordination and negotiation.

The GIG is a globally interconnected, end-to-end set of information capabilities, associated processes, and personnel for collecting, processing, storing, disseminating, and managing information-on-demand for warfighters, defense policymakers, and support personnel. The GIG will operate within the information domain to enable the creation of a fusion network consisting of fusion nodes and any needed interconnections.

10.7  ROLE OF INTELLIGENT AGENTS

Recent advances in intelligent agent research (AGENTS 1997–2001, AAMAS 2002–2010) have culminated in various agent-based applications that autonomously perform a range of tasks on behalf of human operators. Examples of the kinds of tasks these applications perform include information filtering and retrieval, situation assessment and decision support, and interface personalization. Each of these tasks requires some form of human-like intelligence that must be simulated and embedded within the implemented agent-based application.

Image

FIGURE 10.12 Role of intelligent agents for NCW.

Expectations are high that agent technologies can provide insights into and solutions for complex problems in the industrial, military, and business fusion communities. Such expectations are due to the agents’ inherent capability for operating autonomously while communicating and coordinating with other agents in the environment. This makes them suitable for embedding in entities operating in hazardous and high-risk operational environments, including robots, UAVs, unattended ground sensors, etc. A recent DoD-wide thrust on NCW is by definition distributed in nature, where agents can play a vital role in the areas of cooperation, coordination, brokering, negotiation, and filtering. As shown in Figure 10.12, autonomous intelligent agents can act on behalf of warfighters (Lichtblau 2004) within an NCW environment to reduce their cognitive workload.

10.7.1  WHAT IS AN AGENT?

An agent is a computational entity with intentionality that performs user-delegated tasks autonomously (Guilfoyle and Warner 1994, Caglayan and Harrison 1997). Some of the most important properties of software agents (Wooldridge and Jennings 1995), namely, autonomy, monitoring, and communication skills, are desirable features for building an ideal information fusion system. Table 10.1 summarizes the key properties of agents and agent-based fusion systems.

Intelligent agents can also be viewed as traditional artificial intelligence (AI) systems simulating human behavior. Rich AI technologies can therefore be leveraged to build intelligent agents that learn from the environment, make plans and decisions, react to the environment with appropriate actions, express emotions, or revise beliefs. Intelligent agents typically represent human cognitive states using underlying knowledge and beliefs modeled in a knowledge representation language. The term “epistemic state” is used to refer to an actual or possible cognitive state that drives human behavior at a given point in time; the accurate determination (or estimation) of these epistemic states is crucial to an agent’s ability to correctly simulate human behavior (Das 2008a).

TABLE 10.1
Agent Properties and Data Fusion

Property

Definition

Agent-Based Fusion System

Autonomy

Operates without the direct intervention of humans or others

Autonomously executes tasks—target tracking and identification, situation and threat assessment, sensor management and decision support

Sociability

Interacts with other agents

Communicates with external environment such as sensors, fusion systems, and human operators

Reactivity

Perceives its environment and responds in a timely fashion

Perceives the environment and adjusts response accordingly

Pro-activity

Exhibits goal-directed behavior by taking the initiative

Goal of delivery of situation and threat assessment in time

Learnability

Learns from the environment to adjust knowledge and beliefs

Dynamic capabilities and behavior learned over time by observing areas of operations

Mobility

Moves with code to a node where data resides

Execute code at local sensor nodes accumulating observations

10.7.2  USE OF AGENTS IN DISTRIBUTED FUSION

Using agent technologies, experiments in building fusion systems (Das 2010) are being conducted at all levels (in the sense of JDL [Liggins et al. 2008, Steinberg et al. 1998, White 1988]). The communication ability of agents naturally lends itself to performing fusion tasks in a decentralized manner, where the cooperation among a set of spatially distributed agents is vital. Many important fusion problems (e.g., target tracking) are inherently decentralized.

A decentralized data fusion system, according to Durrant-Whyte and Stevens (2006),

consists of a network of sensor nodes, each with its own processing facility, which together do not require any central fusion or central communication facility. In such a system, fusion occurs locally at each node on the basis of local observations and the information communicated from neighboring nodes.

Such decentralized systems rely on communication among nearby platforms, and therefore the number of messages that each platform sends or receives is independent of the total number of platforms in the system. This property ensures scalability to distributed systems with (almost) any number of platforms (Rosencrantz et al. 2003).
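This neighbor-to-neighbor pattern can be sketched in a few lines. The following toy Gaussian fusion step, written in inverse-variance (information) form, is our own illustration and is not taken from the cited works: each node combines its local observation with whatever its immediate neighbors report, so its communication load depends only on its neighborhood size.

```python
def local_fusion_step(obs, obs_var, neighbor_msgs):
    """Fuse a scalar local observation with neighbor information.

    obs, obs_var  -- local measurement and its noise variance
    neighbor_msgs -- list of (information, information-weighted mean)
                     pairs received from immediate neighbors
    Returns the fused (mean, variance) and the outgoing message.
    """
    # Local evidence in information form: Y = 1/var, y = obs/var
    Y = 1.0 / obs_var
    y = obs / obs_var
    # Add the information contributed by each neighbor's message
    for Y_n, y_n in neighbor_msgs:
        Y += Y_n
        y += y_n
    fused_mean, fused_var = y / Y, 1.0 / Y
    # Forward only the *local* information, not the fused total, so
    # that neighbors do not double-count shared data (data incest).
    outgoing_msg = (1.0 / obs_var, obs / obs_var)
    return (fused_mean, fused_var), outgoing_msg

# A node with observation 5.0 hears from two neighbors
(mean, var), msg = local_fusion_step(
    obs=5.0, obs_var=1.0, neighbor_msgs=[(0.5, 2.0), (0.25, 1.5)])
```

Because each node touches only its neighbors' messages, adding platforms elsewhere in the network leaves this node's cost unchanged, which is precisely the scalability property noted above.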

In contrast to a decentralized system, all platforms in centralized architectures communicate all sensor data to a single special agent, which processes it centrally and broadcasts the resulting state estimate back to the individual platforms. Such an approach suffers from single-point-of-failure and communication and computational bottlenecks.

There is an abundance of work in the area of distributed agent–based target tracking and in the area of distributed fusion. In general, a distributed processing architecture for estimation and fusion consists of multiple processing agents. Here we mention only some of them.

Horling et al. (2001) developed an approach to real-time distributed tracking, wherein the environment is partitioned into sectors to reduce the level of potential interaction between agents. Within each sector, agents dynamically specialize to address scanning, tracking, or other goals, taking resource and communication constraints into account. See also Waldock and Nicholson (2007) for an approach to distributed agent-based tracking.

Hughes and Lewis (2009) investigated the track-before-detect (a method that identifies tracks before applying thresholds) problem using multiple intelligent software agents. The developed system is based on a hierarchical population of agents, with each agent representing an individual radar cell that is allowed to self-organize into target tracks.

Martin and Chang (2005) developed a tree-based distributed data fusion method for ad hoc networks, where a collection of agents share and fuse data in an ad hoc manner for estimation and decision making.

Chong and Mori (2004) highlighted the advantage of distributed estimation over centralized estimation, due to reduced communication, computation, and vulnerability to system failure, but expressed the need to address the dependence in the information. The authors developed an information graph approach to systematically represent this dependence due to communication among processing agents.

Graphical Bayesian belief networks have been applied extensively by the fusion community to perform situation assessment (Das 2008b). A network structure with nodes and links, modeling a situation assessment problem, is a natural fit for distributing tasks at various levels of abstraction and hierarchy, where nodes represent agents and messages flow between agents along the links. An approach along these lines has been adopted by Pavlin et al. (2006).
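A minimal sketch of a single message along one such link follows; the variables, probabilities, and function name are invented for illustration. A "parent" agent passes its belief over its own variable to a "child" agent, which absorbs the message by marginalizing over the parent:

```python
# Invented two-agent example: a "threat" agent and an "alert" agent
# connected by one link of a belief network.
p_threat = {"high": 0.3, "low": 0.7}      # parent agent's belief
p_alert_given_threat = {                  # child agent's local model
    "high": {"yes": 0.9, "no": 0.1},
    "low":  {"yes": 0.2, "no": 0.8},
}

def absorb(parent_belief, cpt):
    """Child agent absorbs the parent's message by marginalization:
    P(child) = sum over parent of P(child | parent) * P(parent)."""
    child = {}
    for pv, pb in parent_belief.items():
        for cv, cp in cpt[pv].items():
            child[cv] = child.get(cv, 0.0) + pb * cp
    return child

# The message sent along the link is simply the parent's belief
p_alert = absorb(p_threat, p_alert_given_threat)  # roughly {"yes": 0.41, "no": 0.59}
```

In a full junction tree implementation, as in the algorithm referenced in the introduction, messages flow in both directions and carry potentials over clique separators rather than single variables; this sketch shows only the downward predictive step.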

Mastrogiovanni et al. (2007) developed a framework for collaborating agents for distributed knowledge representation and data fusion based on the idea of an ecosystem of interacting artificial entities.

Rosencrantz et al. (2003) developed a decentralized technique for state estimation from multiple platforms in dynamic environments. The approach utilizes particle filters and deploys a selective communication scheme that enables individual platforms to communicate only the most informative pieces of information to other entities, thus avoiding communication overhead.
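The selective-communication idea can be sketched with a simplified Gaussian stand-in for the particle filter; the threshold, function names, and numbers below are our assumptions, not Rosencrantz et al.'s actual scheme. A platform transmits an observation only when the Kullback-Leibler divergence between its updated and prior beliefs exceeds a threshold:

```python
import math

def kl_gauss(mu0, var0, mu1, var1):
    """KL divergence KL(N(mu0,var0) || N(mu1,var1)) for 1-D Gaussians."""
    return (0.5 * math.log(var1 / var0)
            + (var0 + (mu0 - mu1) ** 2) / (2.0 * var1) - 0.5)

def should_transmit(prior_mean, prior_var, obs, obs_var, threshold=0.1):
    """Transmit only if the observation would shift the shared belief
    by more than `threshold` nats (an arbitrary illustrative value)."""
    # Standard Gaussian Bayes update of the local belief
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    gain = kl_gauss(post_mean, post_var, prior_mean, prior_var)
    return gain > threshold

# A surprising observation is worth the bandwidth; a noisy,
# redundant one is suppressed.
send_a = should_transmit(0.0, 1.0, obs=3.0, obs_var=1.0)   # True
send_b = should_transmit(0.0, 1.0, obs=0.05, obs_var=4.0)  # False
```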

Mobile agents have also been employed for distributed fusion. Mobile agents are able to travel between the nodes of a network in order to make use of resources that are not locally available. Mobile agents enable the execution code to be moved to the data sites, thus saving network bandwidth and providing an effective way to overcome network latency.
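The bandwidth argument can be made concrete with a toy sketch; the node contents and the agent's task are invented for the example. Instead of shipping every raw observation to a central site, the agent carries a small reduction function to each node and returns with only the summaries:

```python
# Toy network: each node holds its raw observations locally.
nodes = {
    "n1": [3.1, 2.9, 3.0, 3.2],
    "n2": [2.8, 3.0, 3.1],
}

def agent_task(observations):
    """The code the mobile agent carries: reduce raw data on site
    to a two-number summary (count, sum)."""
    return (len(observations), sum(observations))

# The agent's itinerary: run the task at each node and carry only
# the small summaries onward, never the raw observations.
payload = [agent_task(obs) for obs in nodes.values()]

# Fusion at the end of the itinerary
total_n = sum(n for n, _ in payload)
global_mean = sum(s for _, s in payload) / total_n
```

Only two numbers per node cross the network, regardless of how many observations each node has accumulated, which is the bandwidth saving the mobile-agent approach exploits.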

Qi et al. (2001) developed an infrastructure for mobile agent–based distributed sensor networks (MADSNs) for multisensor data fusion. Bai et al. (2005) developed a mobile agent–based distributed fusion (MADFUSION) system for decision making in Level 2 fusion. The system environment consists of a peer-to-peer ad hoc network in which information may be dynamically distributed and collected via a publish/subscribe functionality. The software agents travel deterministically from node to node, carrying a data payload of information, which may be subscribed to by users within the network.

Focusing on fusion tasks beyond tracking and situation assessment, Nunnink and Pavlin (2005) proposed an algorithm that, based on the expected change in entropy, determines the optimal sensing resource to devote to fusion task assignment.
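For a Gaussian state estimate the entropy calculation is closed-form, which allows a compact illustration of the underlying idea; the function names and the single-variable Gaussian model are our simplifications of the cited approach, not its actual algorithm:

```python
import math

def gaussian_entropy(var):
    """Differential entropy (nats) of a 1-D Gaussian with variance var."""
    return 0.5 * math.log(2.0 * math.pi * math.e * var)

def pick_sensor(prior_var, sensor_noise_vars):
    """Index of the sensor whose measurement yields the largest
    expected reduction in entropy of a Gaussian state estimate."""
    best_idx, best_gain = None, float("-inf")
    for i, noise in enumerate(sensor_noise_vars):
        post_var = 1.0 / (1.0 / prior_var + 1.0 / noise)
        gain = gaussian_entropy(prior_var) - gaussian_entropy(post_var)
        if gain > best_gain:
            best_idx, best_gain = i, gain
    return best_idx

# The least noisy sensor gives the largest entropy reduction
pick_sensor(prior_var=2.0, sensor_noise_vars=[4.0, 0.5, 1.0])  # -> 1
```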

Jameson’s (2001) Grapevine architecture for data fusion integrates intelligent agent technology, where an agent generates the information needs of the peer platform it represents.

Gerken et al. (2003) embedded intelligent agents into the mobile commander’s associate (MCA) decision-aiding system to improve the situational awareness of the commander by monitoring and alerting based on the information gathered.

Das (2008a) provided three fundamental and generic approaches (logical, probabilistic, and modal) for representing and reasoning with agent epistemic states, specifically in the context of decision making, and introduced a formal integration of the three into a single unified approach called P3 (propositional, probabilistic, and possible world), which combines their respective advantages. The P3 approach is useful for implementing a knowledge-based intelligent agent specifically designed to perform situation assessment and decision-making tasks. Modeling an agent's epistemic state (Das et al. 1997, Das and Grecu 2000, Das 2007) in this way is analogous to modeling via the BDI (belief, desire, and intention) architecture (Rao and Georgeff 1991).

10.8  CONCLUSIONS AND FURTHER READING

In this chapter, we have presented distributed fusion from the SA and target tracking perspectives and its relevance to NCW environments. The approach to distributed fusion via message passing is a natural fit for distributed NCW environments, as it maintains the autonomy and privacy of individual agents and data sources. There are approaches along these lines, namely, distributed perception networks (DPN) (Pavlin et al. 2006) and multiply sectioned Bayesian networks (MSBN) (Xiang et al. 1993), but the proposed approach leverages existing algorithms and reduces the overall message flow to save bandwidth.

Readers are recommended to consult Liggins et al. (1997) and Durrant-Whyte (2000) for an overall discussion on distributed fusion from the target tracking perspective. Liggins et al. (1997) also discuss an approach to address the data incest problem via information graphs. There are alternative approaches to a distributed KF algorithm, for example, that presented by Rao and Durrant-Whyte (1991). Schlosser and Kroschel (2004) present some experimental results from their study of the effect of communication rate among fusion nodes on the performance of a decentralized KF algorithm.

The book by Pearl (1988) is still the most comprehensive account of BNs and, more generally, of probabilistic reasoning for handling uncertainty. The junction tree algorithm of Lauritzen and Spiegelhalter (1988), as refined by Jensen et al. (1990) in HUGIN, is the most popular inference algorithm for general BNs. A good, comprehensive procedural account of the algorithm can be found in Huang and Darwiche (1996). Jensen's books (1996, 2001) are also useful guides in this field. (See Das et al. [2002a] for an application of BNs to conventional battlefield SA.)

The reader can also refer to Paskin and Guestrin (2004) for a more detailed account of a junction tree–based distributed fusion algorithm along the lines of the one presented here. The algorithm in that paper, in addition, optimizes the choice of junction tree to minimize the communication and computation required by inference. (See Das et al. [2002a] for distributing components of a BN for battlefield SA across a set of networked computers to enhance inferencing efficiency and to allow computation at various levels of abstraction suitable for military hierarchical organizations.)

There is an abundance of openly available literature on NCW. A "must read" on NCW is Cebrowski and Garstka (1998); see also NCW (2001) and Cebrowski (2001). Further reading on topics related to NCW includes effects-based operations (EBO) (Smith 2002) and sense and respond logistics (S&RL) (OFT 2003). The NCW vision is being realized within the DoD branches, including the Army via its FCS (future combat systems) program, the Navy (Antanitus 2003), and the Air Force (Sweet 2004).

Section 10.7 on intelligent agents is nontechnical in nature; a multitude of references on intelligent agents in the fusion context are embedded within the section itself.

REFERENCES

AAMAS (2002–2010). Proceedings of the International Joint Conferences on Autonomous Agents and Multi-Agent Systems (1st—Bologna, Italy; 2nd—Melbourne, Victoria, Australia; 3rd—New York; 4th—the Netherlands; 5th—Hakodate, Japan; 6th—Honolulu, HI; 7th—Estoril, Portugal; 8th—Budapest, Hungary; 9th—Toronto, Ontario, Canada), ACM Press, New York.

AGENTS (1997–2001). Proceedings of the International Conferences on Autonomous Agents (1st—Marina del Rey, CA; 2nd—Minneapolis, MN; 3rd—Seattle, WA; 4th—Barcelona, Spain; 5th—Montreal, Quebec, Canada), ACM Press, New York.

Antanitus, D. 2003. FORCEnet architecture. Briefing to National Defense Industrial Association (NDIA) Conference, San Diego, CA.

Bai, L., J. Landis, J. Salerno, M. Hinman, and D. Boulware. 2005. Mobile agent-based distributed fusion (MADFUSION) system. Proceedings of the 8th International Conference on Information Fusion, Philadelphia, PA.

Caglayan, A. K. and C. Harrison. 1997. Agent Sourcebook. New York: John Wiley & Sons, Inc.

Cebrowski, A. K. 2001. The Implementation of Network-Centric Warfare. Office of the Force Transformation, Washington, DC.

Cebrowski, A. K. and J. J. Garstka. 1998. Network centric warfare: Its origin and future. U.S. Naval Institute Proceedings, 124(1): 28–35.

Chong, C.-Y. and S. Mori. 2004. Graphical models for nonlinear distributed estimation. Proceedings of the Conference on Information Fusion, Stockholm, Sweden, Vol. I, pp. 614–621.

Das, S. 2007. Envelope of human cognition for battlefield information processing agents. Proceedings of the 10th International Conference on Information Fusion, Quebec City, Quebec, Canada.

Das, S. 2008a. Foundations of Decision-Making Agents: Logic, Probability, and Modality. Singapore: World Scientific.

Das, S. 2008b. High-Level Data Fusion. Norwood, MA: Artech House.

Das, S. 2010. Agent-based information fusion, Guest Editorial, Information Fusion (Elsevier Science), 11: 216–219.

Das, S., J. Fox, D. Elsdon, and P. Hammond. 1997. A flexible architecture for autonomous agents. Journal of Experimental and Theoretical Artificial Intelligence, 9(4): 407–440.

Das, S. and D. Grecu. 2000. COGENT: Cognitive agent to amplify human perception and cognition. Proceedings of the 4th International Conference on Autonomous Agents, June 2000, Barcelona, Spain, pp. 443–450.

Das, S., R. Grey, and P. Gonsalves. 2002a. Situation assessment via Bayesian belief networks. Proceedings of the 5th International Conference on Information Fusion, Annapolis, MD, pp. 664–671.

Das, S., K. Shuster, and C. Wu. 2002b. ACQUIRE: Agent-based complex query and information retrieval engine. Proceedings of the 1st International Joint Conference on Autonomous Agents and Multi-Agent Systems, Bologna, Italy, pp. 631–638.

Durrant-Whyte, H. F. 2000. A Beginners Guide to Decentralised Data Fusion. Sydney, New South Wales, Australia: Australian Centre for Field Robotics, The University of Sydney.

Durrant-Whyte, H. and M. Stevens. 2006. Data fusion in decentralised sensing networks. Australian Centre for Field Robotics, The University of Sydney, Sydney, New South Wales, Australia, http://www.acfr.usyd.edu.au

Gerken, P., S. Jameson, B. Sidharta, and J. Barton. 2003. Improving army aviation situational awareness with agent-based data discovery. American Helicopter Society 59th Annual Forum, Phoenix, AZ, pp. 602–608.

Guilfoyle, C. and E. Warner. 1994. Intelligent agents: The new revolution in software. Ovum Report.

Horling, B. et al. 2001. Distributed sensor network for real time tracking. Proceedings of the 5th International Conference on Autonomous Agents, Montreal, Quebec, Canada, pp. 417–424.

Huang, C. and A. Darwiche. 1996. Inference in belief networks: A procedural guide. International Journal of Approximate Reasoning, 15(3): 225–263.

Hughes, E. and M. Lewis. 2009. An intelligent agent based track-before-detect system applied to a range and velocity ambiguous radar. Electromagnetic Remote Sensing Defence Technology Centre (EMRS DTC) Technical Conference, Edinburgh, U.K.

Jameson, S. 2001. Architectures for distributed information fusion to support situation awareness on the digital battlefield. Proceedings of the 4th International Conference on Data Fusion, Montreal, Quebec, Canada, pp. 7–10.

Jensen, F. V. 1996. An Introduction to Bayesian Networks. New York: Springer.

Jensen, F. V. 2001. Bayesian Networks and Decision Graphs. New York: Springer-Verlag.

Jensen, F. V., S. L. Lauritzen, and K. G. Olesen. 1990. Bayesian updating in causal probabilistic networks by local computations. Computational Statistics Quarterly, 4: 269–282.

Jones, P. et al. 1998. CoRAVEN: Modeling and design of a multimedia intelligent infrastructure for collaborative intelligence analysis. Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC’98), San Diego, CA, pp. 914–919.

Lauritzen, S. and D. Spiegelhalter. 1988. Local computations with probabilities on graphical structures and their application to expert systems. Journal of the Royal Statistical Society, Series B, 50(2): 154–227.

Lichtblau, D. E. 2004. The critical role of intelligent software agents in enabling net-centric command and control. Command and Control Research and Technology Symposium, The Power of Information Age Concepts and Technologies, San Diego, CA, pp. 1–16, http://www.dtic.mil/dtic/tr/fulltext/u2/9466040.pdf

Liggins, M. E., C.-Y. Chong, I. Kadar, M. G. Alford, V. Vannicola, and S. Thomopoulos. 1997. Distributed fusion architectures and algorithms for target tracking. Proceedings of the IEEE, 85(1): 95–107.

Liggins, M., D. Hall, and J. Llinas (eds.). 2008. Handbook of Multisensor Data Fusion: Theory and Practice, 2nd edn. Boca Raton, FL: CRC Press.

Martin, T. and K. Chang. 2005. A distributed data fusion approach for mobile ad hoc networks. Proceedings of the 8th International Conference on Information Fusion, Philadelphia, PA, pp. 25–28.

Mastrogiovanni, F., A. Sgorbissa, and R. Zaccaria. 2007. A distributed architecture for symbolic data fusion. Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI), Hyderabad, India, pp. 2153–2158.

Mirmoeini, F. and V. Krishnamurthy. 2005. Reconfigurable Bayesian networks for adaptive situation assessment in battlespace. Proceedings of the IEEE Conference on Networking, Sensing and Control, Tucson, AZ, pp. 810–815.

NCW. 2001. Network Centric Warfare, Department of Defense, Report to Congress.

Nunnink, J. and G. Pavlin. 2005. A probabilistic approach to resource allocation in distributed fusion systems. Proceedings of the Conference on Autonomous Agents and Multi-Agent Systems, AAMAS 2005, Utrecht, the Netherlands, pp. 846–852.

OFT. 2003. Operational Sense and Respond Logistics: Co-Evolution of an Adaptive Enterprise, Concept Document, Office of Force Transformation, Washington, DC.

Paskin, M. and C. Guestrin. 2004. Robust probabilistic inference in distributed systems. Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence (UAI), Banff, Alberta, Canada, pp. 436–445.

Pavlin, G., P. de Oude, M. Maris, and T. Hood. 2006. Distributed perception networks: An architecture for information fusion systems based on causal probabilistic models. Proceedings of the International Conference on Multisensor Fusion and Integration for Intelligent Systems, Heidelberg, Germany.

Pearl, J. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Mateo, CA: Morgan Kaufmann.

Qi, H., X. Wang, S. Iyengar, and K. Chakrabarty. 2001. Multisensor data fusion in distributed sensor networks using mobile agents. Proceedings of the 5th International Conference on Information Fusion, Annapolis, MD, pp. 11–16.

Rao, B. S. and H. F. Durrant-Whyte. 1991. Fully decentralised algorithm for multisensor Kalman filtering. IEE Proceedings-Control Theory and Applications, 138(5): 413–420.

Rao, A. S. and M. P. Georgeff. 1991. Modeling rational agents within a BDI architecture. Proceedings of the 2nd International Conference on Knowledge Representation and Reasoning, Cambridge, MA, pp. 473–484.

Rosencrantz, M., G. Gordon, and S. Thrun. 2003. Decentralized sensor fusion with distributed particle filters. Proceedings of the Conference on Uncertainty in Artificial Intelligence, Acapulco, Mexico.

Schlosser, M. S. and K. Kroschel. 2004. Communication issues in decentralized Kalman filters. Proceedings of the 7th International Conference on Information Fusion, Stockholm, Sweden, pp. 731–738.

Smith, E. A. 2002. Effects based operations—Applying network centric warfare in peace, crisis, and war, DoD Command and Control Research Program, CCRP, Washington, DC.

Steinberg, A. 2008. Foundations of situation and threat assessment. In Liggins, M., D. Hall, and J. Llinas (eds.). Handbook of Multisensor Data Fusion: Theory and Practice, 2nd edn. Boca Raton, FL: CRC Press, pp. 437–501.

Steinberg, A. N., C. L. Bowman, and F. E. White, Jr. 1998. Revisions to the JDL data fusion model. Proceedings of the 3rd NATO/IRIS Conference, Quebec City, Quebec, Canada.

Su, X., P. Bai, F. Du, and Y. Feng. 2011. Application of Bayesian networks in situation assessment. In Intelligent Computing and Information Science, Communications in Computer and Information Science, Vol. 134. Berlin, Germany: Springer.

Sweet, N. 2004. The C2 constellation: A US air force network centric warfare program. Command and Control Research and Technology Symposium, San Diego CA, pp. 1–30, http://www.dodccrp.org/events/2004-CCRTS/CD/papers/164.pdf

Waldock, A. and D. Nicholson. 2007. Cooperative decentralised data fusion using probability collectives. Proceedings of the 1st International Workshop on Agent Technology for Sensor Networks, Honolulu, HI, pp. 47–57.

White, Jr., F. E. 1988. A model for data fusion. Proceedings of the 1st National Symposium on Sensor Fusion, Vol. 2, Orlando, FL.

Wooldridge, M. and N. R. Jennings. 1995. Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10: 1–38.

Wright, E., S. Mahoney, K. Laskey, M. Takikawa, and T. Levitt. 2002. Multi-entity Bayesian networks for situation assessment. Proceedings of the 5th International Conference on Information Fusion, Annapolis, MD, pp. 804–811.

Xiang, Y., D. Poole, and M. Beddoes. 1993. Multiply sectioned Bayesian networks and junction forests for large knowledge based systems. Computational Intelligence, 9(2): 171–220.
