2

Distributed Data Fusion

Overarching Design Concerns and Some New Approaches

David Nicholson, Steven Reece, Alex Rogers, Stephen Roberts and Nick Jennings

CONTENTS

2.1    Introduction

2.1.1    Content

2.2    DDF System Concept

2.3    DDF Design Concerns

2.4    Information Recycling

2.4.1    Bounded Covariance Inflation

2.4.2    Coupling Scalars

2.4.3    Decentralized Tracking Example

2.5    Sensor Coordination

2.5.1    Max-Sum Algorithm

2.5.2    Target Tracking Example

2.6    Selfish Stakeholders

2.6.1    Problem Description

2.6.2    Valuation Function

2.6.3    Mechanism

2.6.4    Example

2.7    Trust and Reputation

2.7.1    Expected Utility of a Contract

2.7.2    Heterogeneous Contracts: Inflated Independent Beta Distributions

2.7.3    Heterogeneous Contracts: A Kalman Filter Trust Model

2.7.4    Empirical Evaluation

2.8    Future Design Concerns and Opportunities

2.8.1    HAC Design Concerns

2.8.2    HAC Opportunities

Acknowledgments

References

2.1    INTRODUCTION

This chapter exposes some of the design concerns associated with distributed data fusion (DDF) systems and describes how they could be overcome. These concerns arise from the inherent openness of DDF systems. Open exchange and fusion of information creates the potential for system degradation as a result of recycling old information, failing to coordinate multiple information sources under the ownership of one or more stakeholders, and failing to recognize untrustworthy information sources. The overarching design concern is how to remove or reduce these problems without compromising the flexibility and scalability benefits of DDF systems.

2.1.1    CONTENT

Section 2.2 introduces the DDF system concept on which this chapter is based. This is a multi-agent system (MAS) in which each agent is a data fusion and decision-making node situated within some larger information fusion network. Section 2.3 introduces four critical design concerns that must be resolved if a multi-agent DDF system is to succeed in practice. Section 2.4 describes the resolution of information recycling concerns with a technique known as bounded covariance inflation (BCI). Section 2.5 reinforces the need for coordinated actions within a DDF system and describes how this can be achieved with the max-sum algorithm. Section 2.6 describes how the concern of potential selfish actions in a multistakeholder DDF system can be managed by means of computational mechanism design. Finally, Section 2.7 raises the issue of trust and reputation in DDF systems. It describes how a probabilistic model of trust combined with the technique from Section 2.4 can be used to resolve this concern. Each section is highlighted with an example from the familiar domain of target tracking and sensor fusion. Section 2.8 concludes the chapter with some perspectives on the new design challenges that will be raised by future DDF systems that achieve effect by tightly interleaving human and software agent endeavors.

2.2    DDF SYSTEM CONCEPT

The DDF system concept explored in this chapter is illustrated in Figure 2.1. It is composed of autonomous, reactive, and proactive components, referred to as agents. These agents filter and fuse data to derive situational information. They interact by exchanging messages over communication links to achieve individual and collective goals. Within this MAS there may be multiple organizational relationships and stakeholders.

Our main focus will be decentralized sensor networks. Each sensor agent is tasked with detecting and tracking multiple targets. Within a region of observation (ROO) an agent is able to estimate the position of targets by making noisy or imprecise measures of their range and bearing. However, in order to better resolve the uncertainty in these position estimates, the agents must acquire target observations from neighboring agents and then fuse these observations with their own.

FIGURE 2.1 DDF system concept.

2.3    DDF DESIGN CONCERNS

The DDF design concerns noted in Section 2.1 are explained in more detail in the following:

•  Information recycling. As DDF networks are dynamic and ad hoc, information can arrive at any agent from multiple routes. Unless the information is accompanied by a record of its provenance so that redundancy can be eliminated, there is a risk of recycling common information through the agents’ fusion processes. This can give rise to inconsistent situation awareness throughout the system and subsequently to spurious decisions.

•  Sensor coordination. If each agent in a DDF network determines its next action (e.g., where to look or what to communicate) without considering the actions of the other agents, their collective actions could be highly suboptimal. Unless the network is fully connected with zero propagation delay, the agents will need to explicitly coordinate their actions by communicating with each other until a set of agreed actions is reached.

•  Selfish stakeholders. In a heterogeneous DDF system the agents may represent distinct stakeholders with different aims and objectives. If they are left to make their own selfish decisions, without any intervention from a system designer, then the overarching DDF system goals are likely to be compromised as the agents will compete for resources.

•  Trust and reputation. One or more agents in a DDF system may not be trustworthy due to faults, bias, or malice. If these agents go unrecognized, the open nature of DDF systems would permit their false data to propagate to other agents and rapidly pollute the whole system. Thus, agents have to earn their reputations as trustworthy sources as well as estimate the trustworthiness of their information suppliers.

2.4    INFORMATION RECYCLING

Information recycling in a DDF network results in cross-correlation between the estimates of state variables generated by each agent. Ignoring this cross-correlation results in over-confident state estimates, but trying to keep track of cross-correlation requires extra bookkeeping operations that consume memory and bandwidth. In practice, bounds on the cross-correlation may at least be calculable, and conservative estimates may be an acceptable trade-off for preserving the flexibility and scalability benefits of DDF systems. This section introduces the general theory of bounded covariance inflation (BCI) as a viable solution to the information recycling design challenge (Reece and Roberts 2005).

2.4.1    BOUNDED COVARIANCE INFLATION

If û is an estimate of the state u then $P^*_{uu}$ is a conservative matrix for the covariance of ũ if

$$P^*_{uu} \geq E[\tilde{u}\tilde{u}^T] \quad \text{where} \quad \tilde{u} = u - \hat{u}$$

The symbol ≥ denotes positive semi-definiteness. When u is composed by stacking two vectors, x and y say, with corresponding covariance matrices $P_{xx}$ and $P_{yy}$, respectively, BCI is the procedure by which $P^*_{uu}$ can be determined from $P_{xx}$ and $P_{yy}$ when the cross-covariance, $P_{xy}$, between x and y is unknown but bounded

$$[P_{xy} - D_{xy}]^T P_{xx}^{-1} [P_{xy} - D_{xy}] \leq S^2 P_{yy}$$

(2.1)

or, equivalently

$$\forall\, \bar{x}, \bar{y} : \quad \left|\bar{x}^T R_{xx}\left(P_{xy} - D_{xy}\right)R_{yy}^T\, \bar{y}\right| \leq S$$

(2.2)

for unit vectors $\bar{x}$ and $\bar{y}$, some “centered” matrix $D_{xy}$, “matrix spread” $S$, and sphering matrices $R_{xx}$ and $R_{yy}$ such that $P_{xx}^{-1} = R_{xx}^T R_{xx}$ and $P_{yy}^{-1} = R_{yy}^T R_{yy}$. When $D_{xy} = 0$, $S$ is the correlation coefficient. In general, we choose $D_{xy}$ so that $S$ is as small as possible (see Figure 2.2).

Given this setup, it is possible to find a proven conservative covariance matrix $P^*_{\mathrm{BCI}}$ for all possible joint covariance matrices, $P$, defined by

$$P = \begin{bmatrix} P_{xx} & P_{xy} \\ P_{xy}^T & P_{yy} \end{bmatrix}$$

(2.3)

Define $P^*_{\mathrm{BCI}}$:

$$P^*_{\mathrm{BCI}} = \begin{bmatrix} (1+KS)\,P_{xx} & D_{xy} \\ D_{xy}^T & \left(1+\dfrac{S}{K}\right)P_{yy} \end{bmatrix}$$

(2.4)

FIGURE 2.2 Family of covariance ellipses (dotted lines) for which Pxx = 2, Pyy = 1, and 0.2 < Pxy < 0.8. The “centered” ellipse is shown as a thick dashed line. Also shown is a conservative ellipse (solid line) for the family for which Dxy = 0.5 × (0.2 + 0.8), S = 0.3, and K = 1.

The positive value K is called the inflation factor and is chosen to minimize the overall uncertainty encoded by the covariance matrix. In the remainder of this section we will be concerned with symmetric correlation bounds (i.e., Dxy = 0). BCI with Dxy = 0 effectively replaces two correlated random vectors with two uncorrelated random vectors whose covariance is guaranteed to be conservative with respect to the original vectors.
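To make Equation 2.4 concrete, the following sketch (assuming Python with numpy; the helper name and the check are illustrative) builds the inflated joint covariance using the values quoted in the caption of Figure 2.2, and verifies numerically that it dominates one admissible joint covariance from that family.

```python
import numpy as np

def bci_joint_covariance(Pxx, Pyy, Dxy, S, K):
    """Inflated joint covariance of Equation 2.4 (a sketch)."""
    top = np.hstack([(1.0 + K * S) * Pxx, Dxy])
    bottom = np.hstack([Dxy.T, (1.0 + S / K) * Pyy])
    return np.vstack([top, bottom])

# Values quoted in the caption of Figure 2.2.
Pxx = np.array([[2.0]])
Pyy = np.array([[1.0]])
Dxy = np.array([[0.5]])      # centre of the family 0.2 < Pxy < 0.8
S, K = 0.3, 1.0

P_bci = bci_joint_covariance(Pxx, Pyy, Dxy, S, K)

# Check conservativeness against one admissible member of the family.
Pxy = np.array([[0.7]])
P_true = np.block([[Pxx, Pxy], [Pxy.T, Pyy]])
print(np.linalg.eigvalsh(P_bci - P_true))   # all non-negative => conservative
```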

When x, y, and u are state vectors and u is related to x and y by a linear transform F, thus

$$u = F\begin{pmatrix} x \\ y \end{pmatrix}$$

then a conservative estimate $P^*_{uu}$ of the covariance $P_{uu}$ for ũ can be obtained from a conservative covariance matrix $P^*$ over $\tilde{x}$ and $\tilde{y}$:

$$\text{If } \hat{u} = F\begin{pmatrix} \hat{x} \\ \hat{y} \end{pmatrix} \text{ then } P^*_{uu} = F P^* F^T$$

(2.5)

$P^*_{uu}$ is conservative since $P^*_{uu} = FP^*F^T \geq FPF^T = P_{uu}$. Both prediction and estimation fusion operations within the Kalman filter are linear operations for appropriate choices of F. We will now derive the Kalman filter update equation for an inflated covariance matrix.

Assume $\hat{x}$ and $\hat{y}$ are correlated state estimates over the same state space, $P_{xx}$ and $P_{yy}$ are the corresponding covariance matrices, and $\hat{u}$ is an estimate obtained by fusing $\hat{x}$ and $\hat{y}$. The inflated covariance for $\tilde{x}$ and $\tilde{y}$ is block diagonal, so the estimates can be considered uncorrelated under the inflated covariance matrix. Therefore a conservative covariance $P^*_{uu}$ for $\hat{u}$ can be calculated by fusing the random vectors using the Kalman filter and the inflated covariance matrix:

$$P^{*-1}_{uu}\hat{u} = \frac{P_{xx}^{-1}\hat{x}}{1+KS} + \frac{P_{yy}^{-1}\hat{y}}{1+(S/K)}, \qquad P^{*-1}_{uu} = \frac{P_{xx}^{-1}}{1+KS} + \frac{P_{yy}^{-1}}{1+(S/K)}$$

Note that when S = 0 we recover the information form of the Kalman filter for uncorrelated variables (Durrant-Whyte et al. 2001) and when S = 1 and K = ω/(1 − ω) (with ω ∈ [0, 1]) we recover covariance intersection (Julier and Uhlmann 2001). The next step is to determine upper and lower bounds on cross-correlations.
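A minimal sketch of the inflated information-form fusion above, assuming Python with numpy; the function name bci_fuse and the one-dimensional test values are illustrative only. Setting S = 0 recovers the ordinary information-filter fusion, while increasing S makes the fused covariance more conservative.

```python
import numpy as np

def bci_fuse(x_hat, Pxx, y_hat, Pyy, S, K):
    """Fuse two estimates whose cross-correlation is bounded by S, using the
    inflated information-form update above (a sketch)."""
    Ix = np.linalg.inv(Pxx) / (1.0 + K * S)
    Iy = np.linalg.inv(Pyy) / (1.0 + S / K)
    Puu = np.linalg.inv(Ix + Iy)
    u_hat = Puu @ (Ix @ x_hat + Iy @ y_hat)
    return u_hat, Puu

# Illustrative one-dimensional example (numbers are made up).
x_hat, Pxx = np.array([1.0]), np.array([[2.0]])
y_hat, Pyy = np.array([1.4]), np.array([[1.0]])
u0, P0 = bci_fuse(x_hat, Pxx, y_hat, Pyy, S=0.0, K=1.0)   # information filter
u1, P1 = bci_fuse(x_hat, Pxx, y_hat, Pyy, S=0.3, K=1.0)   # bounded correlation
print(P0, P1)   # the bounded-correlation fusion is the more conservative one
```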

2.4.2    COUPLING SCALARS

Two correlated estimates $\hat{x}$ and $\hat{y}$ for state vectors x and y, respectively, can be decomposed into orthogonal random vectors $\hat{\alpha}$, $\beta_x$, and $\beta_y$ by Gram–Schmidt orthogonalization (Doob 1990)

$$\hat{x} = C_{x\alpha}P_{\alpha}^{-1}\hat{\alpha} + \beta_x, \qquad \hat{y} = C_{y\alpha}P_{\alpha}^{-1}\hat{\alpha} + \beta_y$$

(2.6)

where Cxα and Cyα are the cross-covariance between x and α and between y and α, respectively. Since βx and βy are orthogonal then

$$P_{xy} = C_{x\alpha}P_{\alpha}^{-1}C_{\alpha y}$$

(2.7)

Thus, the cross-covariance between two random vectors is the information shared between the two vectors projected onto the vector spaces.

To obtain an expression for the minimum cross-correlation bound S in Equation 2.2, first rewrite Equation 2.7

$$P_{xy} = \left[C_{x\alpha}P_{\alpha}^{-1/2}\right]\left[C_{y\alpha}P_{\alpha}^{-1/2}\right]^T$$

and use the Cauchy–Schwarz inequality

$$\left|\bar{x}^T R_{xx} P_{xy} R_{yy}^T\, \bar{y}\right| \leq \sqrt{\max\operatorname{eig}\!\left[R_{xx} C_{x\alpha}P_{\alpha}^{-1}C_{\alpha x}R_{xx}^T\right]} \times \sqrt{\max\operatorname{eig}\!\left[R_{yy} C_{y\alpha}P_{\alpha}^{-1}C_{\alpha y}R_{yy}^T\right]}$$

Comparing the aforementioned inequality with Equation 2.2, we observe that the right-hand side is a known bound for the cross-correlation S. The scalar

$$\Omega_x = \sqrt{\max\operatorname{eig}\!\left[R_{xx} C_{x\alpha}P_{\alpha}^{-1}C_{\alpha x}R_{xx}^T\right]}$$

is called the coupling scalar for x and is independent of y. The coupling scalars can be calculated locally. Thus, when an agent C receives two messages, one from each of agents A and B, comprising the information vector, matrix, and coupling scalar, a bound on the correlation between the estimates in these messages is

$$S = \Omega_{AC} \times \Omega_{BC}$$

The key point of note is that the cross-correlation between two random vectors can be bounded by the product of just two scalars. This is crucial for limited-bandwidth communication applications when bookkeeping messages such as the coupling scalar must be kept to a minimum. The coupling scalar can be interpreted as the fraction of the covariance matrix that is due to the correlated part $C_{x\alpha}P_{\alpha}^{-1}\hat{\alpha}$ of $\hat{x}$. From Equation 2.6 we see that it would be possible to communicate the correlated part and the uncorrelated part $\beta_x$ of the estimate $\hat{x}$ separately. The receiving agent would then be able to fuse $\hat{x}$ into its own estimate more efficiently than with the method described earlier, as only the correlated part of $\hat{x}$ would have to be inflated prior to fusion. However, this alternative approach would involve nearly twice the communication load compared to the coupling scalar approach, which is undesirable in applications where bandwidth is limited.
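The following sketch illustrates the coupling-scalar bound as reconstructed above, assuming Python with numpy. The covariances of the shared information α and of the uncorrelated parts are made-up values, and the coupling scalar is computed as the square root of the largest eigenvalue of the sphered correlated part; the numerical check simply confirms that the product of the two coupling scalars bounds the sphered cross-covariance for random unit vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative covariances (made up): shared information alpha couples x and y.
P_alpha = np.array([[1.0, 0.2], [0.2, 0.5]])
C_xa = np.array([[0.6, 0.1], [0.0, 0.3]])    # cross-covariance between x and alpha
C_ya = np.array([[0.4, 0.0], [0.2, 0.2]])    # cross-covariance between y and alpha
P_beta_x = 0.5 * np.eye(2)                   # covariances of the uncorrelated parts
P_beta_y = 0.8 * np.eye(2)

P_alpha_inv = np.linalg.inv(P_alpha)
Pxx = C_xa @ P_alpha_inv @ C_xa.T + P_beta_x
Pyy = C_ya @ P_alpha_inv @ C_ya.T + P_beta_y
Pxy = C_xa @ P_alpha_inv @ C_ya.T            # Equation 2.7

# Sphering matrices with P^{-1} = R^T R (transposed Cholesky of the inverse).
Rxx = np.linalg.cholesky(np.linalg.inv(Pxx)).T
Ryy = np.linalg.cholesky(np.linalg.inv(Pyy)).T

def coupling_scalar(R, C):
    """Coupling scalar: square root of the largest eigenvalue of the sphered
    correlated part (following the reconstruction above)."""
    M = R @ C @ P_alpha_inv @ C.T @ R.T
    return np.sqrt(np.max(np.linalg.eigvalsh(M)))

S = coupling_scalar(Rxx, C_xa) * coupling_scalar(Ryy, C_ya)

# The product of the two coupling scalars bounds the sphered cross-covariance.
for _ in range(1000):
    xb = rng.normal(size=2)
    yb = rng.normal(size=2)
    xb /= np.linalg.norm(xb)
    yb /= np.linalg.norm(yb)
    assert abs(xb @ Rxx @ Pxy @ Ryy.T @ yb) <= S + 1e-9
print("coupling-scalar bound S =", S)
```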

2.4.3    DECENTRALIZED TRACKING EXAMPLE

In this example three stationary agents track a dynamic process $x_t$ and each agent maintains an estimate of the state of the target using a Kalman filter. All agents have the same behavior model of the target, $x_t = x_{t-1} + \nu_t$ with $\nu_t \sim N(0, 0.1)$, and they are each able to make a measurement of the target at each time step. The ith agent’s observation model is $z^i_t = x_t + \mu^i_t$ with $\mu^i_t \sim N(0, \sigma_i^2)$, where $\sigma_i^2 = \{3, 1, 0.1\}$ for the three agents, respectively. Both ν and μ are uncorrelated in time and independent of each other, and $\mu^i_t$ and $\mu^j_t$ are uncorrelated for all $i \neq j$.

The agents communicate intermittently, cycling between agent 1 making contact with agent 3, then agent 3 with agent 2, and then agent 2 with agent 1. A contact takes place every five time intervals. This correlates the agents’ estimates in two ways: through information recycling and because the agents are modeling the same stochastic process.
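A compact simulation sketch of this setup, assuming Python with numpy. For the pairwise fusion it uses covariance intersection (the S = 1 special case of BCI) rather than the full coupling-scalar bookkeeping, and the weight w, the horizon, and the random seed are arbitrary choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
Q = 0.1                      # process noise variance (from the text)
R = [3.0, 1.0, 0.1]          # measurement noise variances for agents 1-3
contacts = [(0, 2), (2, 1), (1, 0)]   # agent 1 with 3, then 3 with 2, then 2 with 1

x_true = 0.0
est = [0.0, 0.0, 0.0]        # each agent's state estimate
var = [10.0, 10.0, 10.0]     # and its variance (arbitrary weak prior)

def ci_fuse(xa, Pa, xb, Pb, w=0.5):
    """Covariance intersection, used here as the conservative fusion rule in
    place of the full BCI coupling-scalar bookkeeping (sketch only)."""
    info = w / Pa + (1.0 - w) / Pb
    P = 1.0 / info
    return P * (w * xa / Pa + (1.0 - w) * xb / Pb), P

for t in range(1, 61):
    x_true += rng.normal(scale=np.sqrt(Q))
    for i in range(3):
        # Local Kalman filter: predict, then update with the agent's own measurement.
        var[i] += Q
        z = x_true + rng.normal(scale=np.sqrt(R[i]))
        gain = var[i] / (var[i] + R[i])
        est[i] += gain * (z - est[i])
        var[i] *= (1.0 - gain)
    if t % 5 == 0:           # intermittent pairwise contact every five steps
        a, b = contacts[(t // 5 - 1) % 3]
        fused_x, fused_P = ci_fuse(est[a], var[a], est[b], var[b])
        est[a], var[a] = fused_x, fused_P
        est[b], var[b] = fused_x, fused_P

print("final track variances:", var)
```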

Figure 2.3 plots the fused track covariance at each agent for various methods: BCI using both upper and lower cross-correlation bounds, BCI using upper bound only, covariance inflation, and the best and worst possible cases, namely the centralized Kalman filter and local Kalman filters without any communication. BCI is clearly a conservative but consistent performer throughout.

FIGURE 2.3 Agents calculated variances of the track error over time using a variety of methods.

2.5    SENSOR COORDINATION

Sensor coordination presents a fundamental design challenge for DDF systems as often physically distributed devices must act together, under computational and communication constraints, to meet system-wide goals. Consider a wide-area surveillance application in which the sensors are deployed in an ad hoc manner, for example, dropped from a military aircraft or ground vehicle. In this case, the local environment of each sensor, and hence the exact configuration of the network, cannot be determined prior to deployment. The sensors themselves must be equipped with the capability to self-organize and coordinate after deployment, once the local environment in which they (and their neighbors) find themselves has been determined.

A common feature of these self-organization problems is that the sensors must typically choose between a small number of possible states (e.g., which neighboring sensor to transmit data to, or which sense/sleep schedule to adopt), and the effectiveness of the sensor network as a whole depends not only on the individual choices of state made by each sensor, but on the joint choices of interacting sensors. Thus, to maximize the overall effectiveness of the sensor network, the sensors within the network must typically make coordinated, rather than independent, decisions. Furthermore, this coordinated decision must be performed despite the specific constraints of each individual device (such as limited power, communication, and computational resources), and the fact that each device can typically only communicate with the few other devices in its local neighborhood (due to the use of low-power wireless transceivers, the small form factor of the device and antenna, and the hostile environments in which they are deployed). DDF systems are required to perform coordination without a central coordinator and ensure that the deployed solution scales well as the number of devices within the network increases. The max-sum algorithm is an efficient method by which decentralized sensor coordination can be achieved (Rogers et al. 2011, Waldock and Nicholson 2011).

2.5.1    MAX-SUM ALGORITHM

Consider M sensors, where the state of each sensor may be described by a discrete variable $x_m$. Each sensor interacts locally with a number of other sensors such that the utility of an individual sensor, $U_m(\mathbf{x}_m)$, depends on its own state and the states of these other sensors (defined by the set $\mathbf{x}_m$). The approach at this stage is generic, with no specific assumptions regarding the structure of the individual utility functions.

In this setting, we wish to find the state of each sensor, x*, such that the sum of the individual sensors’ utilities is maximized:

$$\mathbf{x}^* = \arg\max_{\mathbf{x}} \sum_{i=1}^{M} U_i(\mathbf{x}_i)$$

(2.8)

Furthermore, in order to enforce a truly decentralized solution, we assume that each sensor only has knowledge of, and can directly communicate with, the few neighboring agents on whose state its own utility depends. In this way, the complexity of the calculation that the sensor performs depends only on the number of neighbors that it has (and not the total size of the network), and thus we can achieve solutions that scale well.

The optimization problem defined by Equation 2.8 is represented as a bipartite factor graph. Specifically, each sensor is decomposed into a variable node that represents its state, and a function node that represents its utility. The function node of each sensor is connected to its own variable node (since its utility depends on its own state) and also to the variable nodes of other sensors whose states impact its utility. For example, Figure 2.4 shows an example in which three sensors, {S1, S2, S3}, interact with their immediate neighbors through the overlap of their sensor areas. Figure 2.4c shows the resulting bipartite factor graph in which the sensors are decomposed into function nodes, {U1, U2, U3}, and variable nodes, {x1, x2, x3}. The overall function represented by this factor graph is given by

$$U = U_1(x_1, x_2) + U_2(x_1, x_2, x_3) + U_3(x_2, x_3)$$

The max-sum algorithm operates directly on the factor graph representation described earlier. When this graph is cycle-free, the algorithm is guaranteed to converge to the global optimal solution such that it finds the combination of states that maximizes the sum of the sensors’ utilities. When applied to cyclic graphs (as is the case here), there is no guarantee of convergence, but extensive empirical evidence demonstrates that this family of algorithms generates good approximate solutions. The max-sum algorithm solves this problem in a decentralized manner by specifying messages that should be passed from variable to function nodes and from function nodes to variable nodes. These messages are defined as

FIGURE 2.4 Sensor network showing (a) the position of three sensors whose fields of view overlap, (b) the sensor interaction graph, and (c) the resulting factor graph with sensors decomposed into function and variable nodes.

•  From variable to function

$$q_{i\to j}(x_i) = \alpha_{ij} + \sum_{k\in\mathcal{M}_i\setminus j} r_{k\to i}(x_i)$$

(2.9)

where

$\mathcal{M}_i$ is a vector of function indices, indicating which function nodes are connected to variable node i

αij is a normalizing constant to prevent the messages from increasing endlessly in cyclic graphs

•  From function to variable

$$r_{j\to i}(x_i) = \max_{\mathbf{x}_j\setminus i}\left[U_j(\mathbf{x}_j) + \sum_{k\in\mathcal{N}_j\setminus i} q_{k\to j}(x_k)\right]$$

(2.10)

where $\mathcal{N}_j$ is a vector of variable indices, indicating which variable nodes are connected to function node j, and $\mathbf{x}_j\setminus i \equiv \{x_k : k\in\mathcal{N}_j\setminus i\}$.

The messages flowing into and out of the variable nodes within the factor graph are functions of a single variable that represent the total utility of the network for each possible value of that variable. At any time during the propagation of these messages, agent i is able to determine which state it should adopt such that the sum over all the agents’ utilities is maximized. This is done by locally calculating the function, zi(xi), from the messages flowing into agent i’s variable node:

$$z_i(x_i) = \sum_{j\in\mathcal{M}_i} r_{j\to i}(x_i)$$

(2.11)

and hence finding $\arg\max_{x_i} z_i(x_i)$.

The messages described earlier may be randomly initialized, and then updated whenever a sensor receives an updated message from a neighboring sensor; there is no need for a strict ordering or synchronization of the messages. In addition, the calculation of the marginal function shown in Equation 2.11 can be performed at any time (using the most recent messages received), and thus sensors have a continuously updated estimate of their optimum state. When the underlying factor graph contains cycles there is no guarantee that the max-sum algorithm will converge; nor that if it does converge it will find the optimal solution. However, extensive empirical evaluation on a number of benchmark coordination problems indicates that it does in fact produce better quality solutions than other state of the art approximate algorithms but at significantly lower computation and communication cost.

Finally, we note that if messages are continuously propagated, and the states of the agents are continuously updated, then the algorithm may be applied to dynamic problems where the interactions between agents, or the utilities resulting from these interactions, may change at any time. For example, within tracking problems where the decentralized coordination algorithm is being used to focus different sensors onto different targets, then the utilities of each sensor are continually changing due to the changing position of targets, and the actions of other sensors. Thus, by continually propagating messages each agent is able to maintain a continuously updated estimate of the state that it should adopt in order to maximize social welfare in this dynamic problem.
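The following sketch runs max-sum on the small factor graph of Figure 2.4, assuming Python with numpy. The domain size, the random utility tables, and the fixed number of message-passing iterations are illustrative choices only; the messages follow Equations 2.9 through 2.11, with the variable-to-function messages normalized to zero mean.

```python
import numpy as np

D = 2  # each sensor chooses between D possible states (illustrative)
rng = np.random.default_rng(2)

# Factor graph from Figure 2.4: U1(x1, x2), U2(x1, x2, x3), U3(x2, x3).
# The utility tables are random here, purely for illustration.
factors = {
    'U1': (('x1', 'x2'), rng.random((D, D))),
    'U2': (('x1', 'x2', 'x3'), rng.random((D, D, D))),
    'U3': (('x2', 'x3'), rng.random((D, D))),
}
variables = ['x1', 'x2', 'x3']
neighbours = {v: [f for f, (vs, _) in factors.items() if v in vs] for v in variables}

q = {(v, f): np.zeros(D) for v in variables for f in neighbours[v]}
r = {(f, v): np.zeros(D) for f, (vs, _) in factors.items() for v in vs}

for _ in range(20):  # fixed number of message-passing iterations
    # Variable-to-function messages (Equation 2.9), normalized to zero mean.
    for (v, f) in q:
        msg = sum((r[(g, v)] for g in neighbours[v] if g != f), np.zeros(D))
        q[(v, f)] = msg - msg.mean()
    # Function-to-variable messages (Equation 2.10).
    for f, (vs, table) in factors.items():
        for i, v in enumerate(vs):
            total = table.copy()
            for k, u in enumerate(vs):
                if u != v:
                    shape = [1] * len(vs)
                    shape[k] = D
                    total = total + q[(u, f)].reshape(shape)
            other_axes = tuple(k for k in range(len(vs)) if k != i)
            r[(f, v)] = total.max(axis=other_axes)

# Each variable picks the state maximizing z_i (Equation 2.11).
assignment = {v: int(np.argmax(sum(r[(f, v)] for f in neighbours[v])))
              for v in variables}
print(assignment)
```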

2.5.2    TARGET TRACKING EXAMPLE

In this section the max-sum algorithm is applied to the target tracking example illustrated in Figure 2.5. The system involves three stationary sensors, each of limited observation range shown by the gray areas bounded by the dashed lines. The sensors are initialized with a weak prior over the target positions as well as the targets (factors) they are responsible for maintaining in the factor graph. System performance is measured by the total information in the target tracks.

Three sensor management strategies were implemented: local, centralized, and decentralized. The local strategy selects the sensor (pointing) control parameter that maximizes the total information given by local observations only. The centralized strategy selects control parameters for each sensor based on a brute-force search through all combinations to find the one that yields maximum information. The decentralized strategy solves each of the factors using a brute-force approach and then uses the max-sum algorithm to derive the optimal sensor control parameters that maximize the utility function.

FIGURE 2.5 Example target tracking scenario with three sensors (of limited observation range indicated by the gray shaded areas) and three targets.

FIGURE 2.6 Performance profile for local, centralized, and decentralized sensor management strategies.

Figure 2.6 displays the performance profile for each strategy. As both targets pass through the center of the environment, each sensor must hand over a target to another sensor. The two handover points occur roughly at time steps 18 and 34. At these times the performance of the local strategy degrades since it cannot resolve the conflict that arises when sensors select the same target.

The performance of the max-sum algorithm depends on the time allowed to exchange the variable and factor messages. Figure 2.7 compares performance with the centralized strategy as the period to exchange messages (negotiation time) is adjusted from 50 to 1000 ms (the experiment was conducted 20 times for each negotiation time). As the negotiation time is increased the performance of the decentralized strategy converges on the centralized performance.

2.6    SELFISH STAKEHOLDERS

In the previous section it was implicit that the local objectives of the sensor agents were aligned with the global objective. This situation is best modeled as a cooperative MAS problem in which the agents are designed to work toward the global objective of the system. This can be achieved, as we have seen, through the max-sum algorithm. In this section we consider the situation in which different stakeholders may be responsible for each sensor (or group of sensors). For example, in a disaster relief application, different governmental and nongovernmental organizations must share information gathered by their sensors to help coordinate an effective response. The sensors are now operating in a competitive rather than a cooperative environment. As such, they will attempt to optimize their own gain at a cost to the overall performance of the system. Given this, the challenge is to design a system such that desirable system-wide properties emerge from the interaction between its constituent (selfish) agents (Dash et al. 2005, Rogers et al. 2006).

FIGURE 2.7 Performance of max-sum sensor coordination approach as negotiation time is varied.

Computational mechanism design offers a principled framework with which to design systems that exhibit desirable global properties, despite the selfish actions and goals of the constituent parts. It is an extension of the economic field of mechanism design and addresses the additional challenges imposed by a computational setting (i.e., agents that are computationally limited, communication that is not cost or error free, and settings that are open and dynamic). At its core is the notion that agents hold or require valued items, and are seeking to maximize their own utility through the exchange of these items. In the real world, these items may be goods or services, and thus they will have real monetary value. In the sensor network scenario, information offers a principled currency or valuation metric. It can be applied in any context where sensors make and exchange imprecise observations and thus must deal with uncertainty.

2.6.1    PROBLEM DESCRIPTION

We consider a scenario where a number of sensors are tasked with detecting targets. The sensors each have a partial and inaccurate view of the world and need to communicate with each other in order to increase this accuracy. The “view of the world” in this case is a view of targets passing through the region that the sensors are monitoring. The communication network that the sensors use is constrained by a limited bandwidth. Thus, there is a need to globally decide how to optimally allocate this bandwidth in order to best satisfy the sensors’ overall goal of forming an accurate view of the world.

In more detail, each sensor has two regions that it considers. There is a ROO in which it can observe targets and a region of interest (ROI) it wishes to monitor. Figure 2.8 depicts a typical instance of a scenario where the ROI of sensor 1 is shown and there is a target within this ROI. Agent 1 may already know about this event in its ROI since the ROI overlaps (as it usually does) with its ROO. However, due to noise inherent in the measurement process, agent 1 will have some uncertainty in its observation (e.g., the position, type, or speed of the target may be described by a probability distribution rather than an absolute value). Agent 1 can, however, decrease this uncertainty by obtaining data about the target from other agents, namely agents 3 and 5 (which also have the target in their ROO). However, if agent 1 can only receive data from one of these two agents due to bandwidth limitations, it will then have to decide which agent to obtain the data from. This decision-making process is further complicated if the other agents also have to make similar decisions. Thus, different flows of data (i.e., descriptions of which sensors will transmit data and along which path this data will flow) will yield different results in terms of the total reduction of the uncertainty (or the equivalent increase of information). Given this, the high-level representation of our problem is then to allocate the flow of data within the bandwidth constraints imposed by the communication network so as to optimize the overall gain in information each sensor has about its ROI.

FIGURE 2.8 A multisensor network target tracking scenario.

We tackle this problem by first modeling it as a MAS. Each sensor is then viewed as an agent i, within a set of agents ℐ, which has data and a function xi that characterizes the accuracy of this data. The data have a size and thus a bandwidth requirement of bwi. We consider the simplest communication protocol that exhibits a bandwidth constraint, and thus we assume a broadcast protocol whereby any sensor can simultaneously transmit to all other sensors. The total bandwidth available for the transmission of the data is such that only a subset of the sensors can actually transmit their data.

In order to characterize this problem, we first need to make a few assumptions about the scenario:

•  The time taken in calculating the allocation of data and in communicating between agents is small compared to the time taken for another target to appear. This allows us time frames where the mechanism can be implemented.

•  The agents have perfect and common knowledge about the sensor-network topology and their neighbors. This removes the problem of neighbor discovery in communication systems.

These assumptions permit us to concentrate solely on the issue of allocating the flow of data under the bandwidth constraints. We now need a way for each agent to value the data received from different agents, based on the measure of data accuracy, $x_i$.

2.6.2    VALUATION FUNCTION

We develop a suitable valuation function based on the information form of the Kalman filter. Now, in the standard Kalman filter, observations are of the form z(t) = H(t)y(t) + n(t), where y(t) is the state of the system at time t, H(t) is the linear observation model and n(t) is a zero mean random variable drawn from a normal distribution with variance R. The covariance update component, P−1(t|t), of the information form of the Kalman filter for N observations is

$$P^{-1}(t|t) = P^{-1}(t|t-1) + \sum_{j=1}^{N} H^T(j)\,R^{-1}(j)\,H(j)$$

(2.12)

The summation in the above expression represents the decrease in covariance and thus the gain in information at time t when all the N observations are fused. In the case of our problem, the value of receiving data from another agent can thus be represented by the gain in information resulting from this observation.

In order to achieve an efficient allocation, this gain in information must be calculated from the measure of the data prior to fusion. Thus, we can represent the measure of accuracy of the data, $x_j$, in terms of the covariance of its observation, R(j):

$$x_j = H^T(j)\,R^{-1}(j)\,H(j)$$

(2.13)

Thus, the gain in information of agent i, when all relevant data are transmitted to it and fused, can be expressed as a sum of this measure of accuracy provided by each of the other agents:

$$v_i(\mathbf{x}) = x_i + \sum_{j\in\mathcal{I}_{-i}} x_j$$

(2.14)

where $\mathcal{I}_{-i} = \mathcal{I}\setminus i$

Equations 2.13 and 2.14 thus cast our valuation function. However, we need to modify this so as to incorporate the characteristics of our scenario. This is because all observations may not fall in an agent’s ROI and furthermore an agent may not be able to receive all the data as a result of the bandwidth constraints of the communication network. Defining αij as the probability that the data observed by agent j is relevant to agent i, and a vector f as describing the flow of data in the network, then the expected valuation is

$$\bar{v}_i(\mathbf{x}, \mathbf{f}) = x_i + \sum_{j\in\mathcal{I}_{-i}} f_{ij}\,\alpha_{ij}\,x_j$$

By a slight abuse of notation, we shall hereafter refer to the expected valuation $\bar{v}_i(\cdot)$ simply as $v_i(\cdot)$.
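A minimal sketch of this valuation, assuming Python with numpy. The one-dimensional observation models, the relevance probabilities α, and the example flow f are made-up values used only to show how Equations 2.13 and 2.14 (and the expected valuation above) would be evaluated.

```python
import numpy as np

def accuracy(H, R):
    """Measure of data accuracy x_j = H^T R^{-1} H (Equation 2.13)."""
    return H.T @ np.linalg.inv(R) @ H

def expected_valuation(i, x, f, alpha):
    """Expected valuation of agent i: its own accuracy plus the accuracy of
    the data routed to it by the flow f, weighted by relevance alpha."""
    others = [j for j in range(len(x)) if j != i]
    return x[i] + sum(f[i, j] * alpha[i, j] * x[j] for j in others)

# Illustrative one-dimensional observation models for three agents.
x = np.array([accuracy(np.eye(1), np.eye(1) * r).item() for r in (3.0, 1.0, 0.5)])
alpha = np.array([[1.0, 0.6, 0.3],
                  [0.6, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])   # probability that j's data is relevant to i
f = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 0]])             # one possible flow of data (made up)
print([expected_valuation(i, x, f, alpha) for i in range(3)])
```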

From the valuation function, we can observe that the valuation of an agent i depends on xj, which are signals measured by other agents. There are two conditions that are necessary in order to achieve an efficient allocation when considering selfish agents (Jehiel and Moldovanu 2001). Firstly

$$\frac{\partial v_i(\mathbf{x}, \mathbf{f})}{\partial x_j} \geq 0 \quad \forall i, j$$

and secondly

$$\frac{\partial v_i(\mathbf{x}, \mathbf{f})}{\partial x_i} > \frac{\partial v_j(\mathbf{x}, \mathbf{f})}{\partial x_i} \quad \forall i, j,\ i\neq j$$

The first condition is automatically satisfied in our case since new data cannot decrease information. The second condition implies that we need to restrict an agent’s ROI to its ROO. Otherwise, there may be an event outside its ROO that falls in its ROI such that data from another agent has a greater effect on its utility than its own data. This condition is necessary because otherwise selfish agents may profitably lie about their observed data and derive positive utility from doing so. Furthermore, the overlap between the agents’ ROOs must be such that this condition is satisfied (i.e., $\sum_{j\in\mathcal{I}_{-i}}\alpha_{ij} < 1$). This means that the ROO of any agent cannot be entirely overlapped by the ROOs of other agents (i.e., no agent is redundant).

2.6.3    MECHANISM

The aim is to ensure that the global bandwidth resource is used efficiently, that is, ensure that given the limited bandwidth, the information gain of the entire network is maximized. Thus, a mechanism is imposed whereby sensors are called upon to privately reveal the information content of observations to an auctioneer. This auctioneer then allocates the limited bandwidth of the communication network to those sensors whose observations will yield the highest system-wide information gain. However, since each sensor is individually attempting to maximize its information regarding its own ROI, with a simple mechanism there is an opportunity for a sensor to behave strategically (e.g., by understating the information content of its own observations, in an attempt to ensure that bandwidth is allocated to other sensors whose observations it can make use of or by overstating it, in order to deny bandwidth to other sensors).

Such strategic behavior is generally undesirable since it reduces the overall efficiency of the network and is computationally expensive for the individual sensors. Thus, we focus on a subclass of mechanisms that are said to be strategy-proof or incentive compatible (Dash et al. 2003). That is, within the mechanism, the sensors have a dominant strategy (one which they should adopt regardless of the behavior of other sensors) to truthfully reveal their private information regarding the value of observations to the auctioneer. The mechanism proceeds as follows:

•  Each agent i transmits to a central auctioneer its valuation function $v_i(\mathbf{f}, \mathbf{x})$ for all the possible allocations of the information flow $\mathbf{f}\in\mathcal{F}$, where $\mathcal{F}$ is the set of all feasible flows.

•  Each agent i also transmits its observed signal $\hat{x}_i$.

•  The center then computes the optimal allocation $\mathbf{f}^*_0$, which is calculated as

$$\mathbf{f}^*_0 = \arg\max_{\mathbf{f}\in\mathcal{F}} \left(\sum_{i\in\mathcal{I}} v_i(\mathbf{f}, \hat{\mathbf{x}})\right)$$

•  The center also calculates the payment $r_i$ made by each agent i. To do this, the center first finds the m next best allocations as the signal $x_i$ is decreased until the presence of i makes no difference to the allocation. That is, it finds allocations $\mathbf{f}^*_1, \ldots, \mathbf{f}^*_m$ and the signal values $z^l_i$ at which the optimal allocation switches from $\mathbf{f}^*_l$ to $\mathbf{f}^*_{l+1}$:

$$z^l_i = \inf\left\{y_i : \sum_{j\in\mathcal{I}} v_j\!\left(\mathbf{f}^*_l, y_i, \mathbf{x}_{-i}\right) = \sum_{j\in\mathcal{I}} v_j\!\left(\mathbf{f}^*_{l+1}, y_i, \mathbf{x}_{-i}\right)\right\}$$

(where each allocation $\mathbf{f}^*_l$ is different), terminating with the allocation $\mathbf{f}^*_m$, the optimal allocation when i does not exist, that is

$$\mathbf{f}^*_m = \arg\max_{\mathbf{f}\in\mathcal{F}} \sum_{j\in\mathcal{I}_{-i}} v_j(\mathbf{f}, \mathbf{x})$$

Then the payment to buyer i is

$$r_i = \sum_{l=0}^{m-1}\left[\sum_{j\in\mathcal{I}_{-i}} v_j\!\left(\mathbf{f}^*_l, z^l_i, \mathbf{x}_{-i}\right) - \sum_{j\in\mathcal{I}_{-i}} v_j\!\left(\mathbf{f}^*_{l+1}, z^l_i, \mathbf{x}_{-i}\right)\right]$$

The earlier scheme rests upon making an agent derive a utility equal to the marginal contribution that its presence makes to the whole system of agents. Thus the additional part of this mechanism is to take into account the effect that an agent’s signal xi has on the overall utility of the system. This mechanism is general in that it can also be applied to the case of independent valuations. In our scenario, such valuations arise when the ROOs of the sensors do not overlap, and the agents are simply collecting, rather than combining, observations.
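For illustration, the sketch below (Python, with made-up accuracies, relevance probabilities, and bandwidth costs) computes the optimal allocation of broadcasts under a bandwidth limit and a simplified marginal-contribution payment for each agent. This is only a VCG-style simplification of the idea that an agent's utility should equal its marginal contribution; it does not reproduce the full payment rule above, which sweeps the reported signal down through the next-best allocations and is what is actually required with interdependent valuations.

```python
import numpy as np
from itertools import combinations

# Illustrative setting (made-up numbers): reported accuracies, relevance
# probabilities, per-broadcast bandwidth costs, and the total bandwidth.
x = np.array([0.3, 1.0, 2.0, 0.5])
alpha = np.full((4, 4), 0.5)
np.fill_diagonal(alpha, 1.0)
bw = np.array([1, 1, 1, 1])
BW = 2

def welfare(transmitters, exclude=None):
    """Sum of the agents' valuations when `transmitters` broadcast their data,
    optionally leaving agent `exclude` out of the sum."""
    agents = [i for i in range(len(x)) if i != exclude]
    return sum(x[i] + sum(alpha[i, j] * x[j] for j in transmitters if j != i)
               for i in agents)

def best_allocation(exclude=None):
    """Brute-force search over feasible broadcast subsets within the bandwidth."""
    feasible = [s for k in range(len(x) + 1)
                for s in combinations(range(len(x)), k)
                if sum(bw[j] for j in s) <= BW and exclude not in s]
    return max(feasible, key=lambda s: welfare(s, exclude))

f_star = best_allocation()
print("allocated broadcasts:", f_star)
for i in range(len(x)):
    with_i = welfare(best_allocation(), exclude=i)
    without_i = welfare(best_allocation(exclude=i), exclude=i)
    print(f"agent {i}: marginal contribution = {with_i - without_i:.2f}")
```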

2.6.4    EXAMPLE

The mechanism was applied to a simulated sensor fusion problem which allowed the allocation of bandwidth and results of the auction process to be tracked. Figure 2.9 shows the system running.

At the specific instance in time shown in Figure 2.9, the bandwidth is severely limited. Thus, although a target falls into the ROI of both sensors 2 and 3, there is insufficient bandwidth for these sensors to exchange observations (allocated bandwidth is indicated by the thick lines between sensors). The value of information that each sensor receives from other sensors and the payments that they receive in exchange for transmitting their own observations are shown in the bar-graph at the bottom right of the display (note that sensors 1 and 4 both have negative payments since they are currently receiving more information than they are transmitting; indeed, sensor 4 is transmitting no information at all). When sensors truthfully reveal the information content of their observations, they maximize their individual information gain and maintain their budget of currency (shown on the right of the display). However, a sensor that does not adopt this strategy (due to faulty, strategic, or malicious behavior), will not achieve these aims and its budget will gradually be depleted. Such sensors can be recognized and removed from the network, thus incentivizing the truthful reporting that is necessary to ensure that the constrained bandwidth of the sensor network is allocated to achieve the system-wide goal of maximizing the information gain of the entire sensor network.

2.7    TRUST AND REPUTATION

The role of computational models of trust, within MAS in particular and open distributed systems in general, is generating a great deal of research interest. In such systems, agents must typically choose between interaction partners, and in this context trust can be viewed as providing a means for agents to represent and estimate the reliability with which these interaction partners will fulfill their commitments. Effective trust models should allow agents to (a) estimate the trustworthiness of a supplier as they acquire direct experience, (b) express their uncertainty regarding this estimate, (c) exchange their estimates as reputation reports, and (d) filter and fuse these reputation reports with their own direct experience to yield more accurate estimates (Reece et al. 2007a,b).

FIGURE 2.9 Example sensor network system showing the auction allocation in process and the resulting communication allocation.

This section develops a probabilistic model of computational trust that allows agents to exchange and combine reputation reports over heterogeneous, correlated multidimensional contracts. Specifically, it considers the case of an agent attempting to procure a bundle of services (e.g., audio, video, and data services) that are subject to correlated quality of service failures (e.g., due to use of shared resources or infrastructure), and where the direct experience of other agents within the system consists of contracts over different combinations of these services.

2.7.1    EXPECTED UTILITY OF A CONTRACT

Consider an agent attempting to procure a bundle of services from a single supplier. In order to make a rational decision, or to negotiate a price for this bundle, the agent must estimate the expected utility of a contract with this supplier. Thus, we denote the outcome of a contract as a vector, X, that indicates whether or not each service within the bundle was successfully delivered (e.g., X = {oa = 1, ob = 0, oc = 0, …} indicates that service a was successfully delivered, while services b and c were not). If u(oa = 1) is the marginal utility that the agent derives if service a is successfully delivered, then the expected utility of the agent will depend on the probability that this happens, p(oa = 1). However, neither the probabilities nor the correlations between them are known to the agent and thus it must use observations of previous contract outcomes to determine a distribution over their possible values. It can then determine an expectation of the expected utility of the contract:

$$E[E[U]] = \hat{\mathbf{p}}(X)^T\, U(X)$$

(2.15)

and a variance, describing its uncertainty:

$$\operatorname{var}(E[U]) = U(X)^T\, P(X)\, U(X)$$

(2.16)

where

$$U(X) = \begin{pmatrix} u(o_a=1) \\ u(o_b=1) \\ u(o_c=1) \end{pmatrix}$$

Thus, the agent’s estimate of the expected utility is dependent on a trust estimate composed of two expressions: a vector estimate of the probability that each service is successfully delivered

$$\hat{\mathbf{p}}(X) = \begin{pmatrix} \hat{p}(o_a=1) \\ \hat{p}(o_b=1) \\ \hat{p}(o_c=1) \end{pmatrix}$$

and a covariance matrix that describes the uncertainty and correlations in these estimates:

$$P(X) = \begin{pmatrix} V_a & C_{ab} & C_{ac} \\ C_{ab} & V_b & C_{bc} \\ C_{ac} & C_{bc} & V_c \end{pmatrix}$$

where

the diagonal terms, Va, Vb, and Vc, represent the uncertainties in p(oa = 1), p(ob = 1), and p(oc = 1)

the off-diagonal terms Cab, Cac, and Cbc represent the correlations between these probabilities

A formalism using the Dirichlet distribution allows an agent to calculate both $\hat{\mathbf{p}}(X)$ and $P(X)$ from its direct experience of previous contract outcomes (Reece et al. 2007b). Within this formalism, an agent that has observed N contract outcomes in total simply records, for each pair of services (e.g., a and b), the number of times that both were successfully delivered, $n^{ab}_{11}$, the number of times that both were delivered unsuccessfully, $n^{ab}_{00}$, and the counts for the two combinations in which one was delivered successfully and the other unsuccessfully, $n^{ab}_{01}$ and $n^{ab}_{10}$. These counts over contract outcomes can be communicated as reputation reports, and reputation reports can be combined by simply aggregating the counts. However, this formalism is limited to the case in which contract observations are homogeneous (i.e., all agents observe contracts over the same dimensions). Thus, we next consider two formalisms that address the more general case where contract observations are heterogeneous: a simple benchmark formalism using independent beta distributions (with covariance inflation) and a full formalism that uses the Kalman filter.
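As a small illustration of Equations 2.15 and 2.16, the sketch below (Python with numpy) turns a trust estimate, its covariance, and the marginal utilities into an expected utility and its standard deviation. The trust estimate and covariance are made-up values; the marginal utilities match the example quoted later with Table 2.1.

```python
import numpy as np

# Illustrative trust estimate over two services (made-up numbers).
p_hat = np.array([0.8, 0.7])               # estimated delivery probabilities
P = np.array([[0.002, 0.001],
              [0.001, 0.003]])             # uncertainty and correlation
U = np.array([2.0, 6.0])                   # marginal utilities u(o_a=1), u(o_b=1)

expected_utility = p_hat @ U               # Equation 2.15
std_dev = np.sqrt(U @ P @ U)               # square root of Equation 2.16
print(f"{expected_utility:.2f} +/- {std_dev:.2f}")
```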

2.7.2    HETEROGENEOUS CONTRACTS: INFLATED INDEPENDENT BETA DISTRIBUTIONS

We can provide a reasonable benchmark formalism for dealing with heterogeneous contracts through a simple extension of a single dimensional trust model. That is, we do not explicitly represent the correlations between the services within the bundle, but rather, we use independent beta distributions to represent each individual service. Thus, if an agent has direct experience of N previous contract outcomes, in which service a was successfully delivered na times, then the trust estimate, ˆp(X), can simply be calculated using the standard result from the beta distribution that

$$\hat{p}(o_a=1) = \frac{n_a+1}{N+2}$$

(2.17)

Similarly we can calculate the diagonal terms of the covariance matrix, P(X), by again using the standard result from the beta distribution that

$$V_a = \frac{(n_a+1)(N-n_a+1)}{(N+2)^2(N+3)}$$

(2.18)

Finally, rather than explicitly calculating the off-diagonal elements of the covariance matrix, we can employ the covariance inflation method from Section 2.4 to derive a conservative covariance matrix by simply setting the off-diagonal elements to zero, and multiplying the diagonal variance terms by the number of dimensions in the state vector, X. Thus in the case of two services we have

$$P(X) = \begin{pmatrix} 2V_a & 0 \\ 0 & 2V_b \end{pmatrix}$$
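A sketch of this benchmark formalism, assuming Python with numpy; the contract counts are illustrative. It applies Equations 2.17 and 2.18 to each service independently and then inflates the diagonal covariance by the number of dimensions.

```python
import numpy as np

def inflated_beta_trust(n, N, dims):
    """Benchmark trust estimate from independent beta distributions with
    covariance inflation (Equations 2.17 and 2.18); a sketch."""
    n = np.asarray(n, dtype=float)
    p_hat = (n + 1.0) / (N + 2.0)
    v = (n + 1.0) * (N - n + 1.0) / ((N + 2.0) ** 2 * (N + 3.0))
    P = np.diag(dims * v)          # inflate the variances by the dimensionality
    return p_hat, P

# Illustrative direct experience: 30 contracts, with service a delivered
# successfully 24 times and service b 21 times (made-up counts).
p_hat, P = inflated_beta_trust(n=[24, 21], N=30, dims=2)
print(p_hat)
print(P)
```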

We now develop a more sophisticated approach using the Kalman filter to fuse heterogeneous estimates containing correlation information.

2.7.3    HETEROGENEOUS CONTRACTS: A KALMAN FILTER TRUST MODEL

The Kalman filter trust model operates by fusing an agent’s prior trust estimate (calculated from an agent’s own direct experience of previous contract outcomes) with reputation reports that are received from other agents in order to give a posterior trust estimate. As described earlier, these trust estimates are represented by a vector, ˆp(X), and a covariance matrix, P(X), and the standard form of the Kalman filter provides two equations to update these:

$$\hat{\mathbf{p}}_{\text{posterior}} = \hat{\mathbf{p}}_{\text{prior}} + K\left(\mathbf{o} - \hat{\mathbf{p}}_{\text{prior}}\right), \qquad P_{\text{posterior}} = (I - K)\,P_{\text{prior}}$$

where K is the Kalman gain

$$K = P_{\text{prior}}\left(P_{\text{prior}} + R\right)^{-1}$$

and o is an observation with covariance R, that together represent the reputation reports received from other agents.

Now, when we have heterogeneous contracts, one or more dimensions of either the prior estimate or the reputation reports may be missing. Within the Kalman filter framework we can simply represent these missing contract observations by setting the corresponding diagonal elements of the covariance matrix to infinity. By doing this we are effectively saying that the estimate for this contract part has no certainty.

In fact, performing these matrix operations involving infinity can be problematic. We can avoid this by using the information form of the Kalman filter, whereby an estimate is represented by its precision, Y, which is the inverse of the covariance matrix (i.e., $Y = P(X)^{-1}$), and its information estimate, $\hat{\mathbf{y}}$, which is the product of the precision and the state estimate (i.e., $\hat{\mathbf{y}} = P(X)^{-1}\hat{\mathbf{p}}(X)$).

In this case, the missing information can be represented by inserting zeros into the precision matrix, and as before, the Kalman filter allows us to combine reputation reports with prior beliefs to yield a posterior information estimate and precision matrix:

$$\hat{\mathbf{y}}_{\text{posterior}} = \hat{\mathbf{y}}_{\text{prior}} + \hat{\mathbf{y}}_0, \qquad Y_{\text{posterior}} = Y_{\text{prior}} + Y_0$$

where Y0 = R−1 and ŷ0 = R−1o. The information form of the Kalman filter is particularly useful within MAS since reputation reports from multiple agents are simply added (in any order) to an agent’s prior estimate. However, the two forms are exactly equivalent, and we can easily switch between the two.
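The following sketch shows this information-form fusion for heterogeneous reports, assuming Python with numpy; all numbers are illustrative. The service that the prior has never observed is represented by a zero row and column in the prior precision matrix, and the reputation report is folded in simply by adding its information contribution.

```python
import numpy as np

# Prior direct experience covers service a only; the missing service b is
# represented by zeros in the precision matrix (no information).
Y_prior = np.array([[100.0, 0.0],
                    [0.0,   0.0]])           # precision of the prior estimate
y_prior = Y_prior @ np.array([0.8, 0.0])     # information estimate Y * p_hat

# A reputation report covering both services (illustrative numbers).
o = np.array([0.75, 0.6])                    # reported delivery frequencies
R = np.array([[0.02, 0.005],
              [0.005, 0.03]])                # its (conservative) covariance
Y0 = np.linalg.inv(R)
y0 = Y0 @ o

# Fusion in information form: the contributions are simply added.
Y_post = Y_prior + Y0
p_post = np.linalg.solve(Y_post, y_prior + y0)
P_post = np.linalg.inv(Y_post)
print(p_post)
print(P_post)
```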

Thus, having presented the Kalman filter in the context of a computational trust model, we describe how an agent’s prior estimate is calculated from its own direct experience, and how other agents can communicate reputation reports calculated from their own direct experience.

The prior belief of the agent is represented by a trust estimate, ˆp(X), and a covariance matrix, P(X). These can be calculated from an agent’s direct experience using the Dirichlet formalism noted earlier. More specifically ˆp(X) and the diagonal elements of P(X) are calculated from the counts of contract outcomes (as per Equations 2.17 and 2.18), while the full details of the Dirichlet distribution are required to calculate the off-diagonal terms of P(X) (Reece et al. 2007a). The prior explicitly represents the correlations over the subset of services for which the agent has directly observed previous contract outcomes. When the agent has no direct experience of some services, it may simply insert infinity into the relevant diagonal element of P(X) to reflect this lack of information (or alternatively insert zero into Y if the information form of the Kalman filter is being used).

The Kalman filter fuses a prior estimate with an observation, o, whose covariance is R. In our computational trust model, o and R together represent a reputation report and are calculated from the direct experience of the originating agent. This calculation is different from that which generates ˆp(X) and P(X), since the covariance R describes the variability of o about the true probabilities, p(X), while the covariance P(X) describes the variability of p(X) about the estimate ˆp(X). This is a subtle but important difference. Calculating o is straightforward since it is a vector estimate of the probability that each service is successfully delivered (i.e., o = {oa, ob, oc, …}). It is calculated from an agent’s previous contract outcomes, and thus if the agent has observed N contracts in total, and service a was successfully delivered in na of these, then oa = na/N. Note that due to the reasons described earlier, this expression is different from that shown in Equation 2.17.

Calculating R is more complex. Since we are using the Kalman filter with a Dirichlet distribution (rather than the more common Gaussian distribution), the covariance, R, is itself dependent upon the probabilities that each service is successfully delivered, p(X). These probabilities are not known; indeed, these are what we are attempting to estimate. However, the beauty of the Kalman filter lies in its flexibility and we need not worry about finding R exactly. Provided that we can find a conservative matrix, R*, to use in place of R, we can guarantee that our estimates will remain consistent. We can build such a conservative covariance matrix for R from an agent’s direct experience and the method of covariance inflation described in Section 2.4.

2.7.4    EMPIRICAL EVALUATION

To evaluate the effectiveness of the trust formalisms just described, we present simulation results in which ten agents, each with their own direct experience of a supplier that provides two services, participate within a reputation system. We assume that one of these agents is attempting to evaluate the trustworthiness of the supplier in order to calculate the expected utility of interacting with it. As such, the agent must fuse its own direct experience with reputation reports received from the other nine agents.

In each simulation run, contract outcomes are drawn from an arbitrary joint distribution that induces correlations between the services. The contract outcomes are randomly allocated such that some agents observe both services, while others observe just one service. We apply the trust formalisms to calculate posterior trust estimates and then calculate two metrics. The first is a scalar measure of the information content of the trust estimate; a standard way of measuring the uncertainty encoded within the covariance matrix (Bar-Shalom et al. 2001). More specifically, we calculate the determinant of the inverse of the covariance matrix

$$I = \det\!\left(P(X)^{-1}\right)$$

and note that the greater the information content, the more precise ˆp(X) will be. The second metric measures the normalized error of the estimate:

$$E = \left[\hat{\mathbf{p}}(X) - \mathbf{p}(X)\right]^T P(X)^{-1} \left[\hat{\mathbf{p}}(X) - \mathbf{p}(X)\right]$$

We perform 1000 repeated simulation runs and calculate the expectation of these two metrics (and the standard error in these expectations). We note that the expectation of the normalized error is commonly termed the normalized standard error and it describes the consistency of the estimate. A consistent estimate has a normalized standard error less than the cardinality of the trust estimate; two in this case. A normalized standard error much less than this value indicates that the covariance matrix is too conservative.
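A small sketch of these two metrics, assuming Python with numpy and made-up values: the information content is the determinant of the inverse covariance, and the normalized error is the quadratic form used to check consistency.

```python
import numpy as np

def information_content(P):
    """Scalar information content I = det(P^{-1})."""
    return np.linalg.det(np.linalg.inv(P))

def normalized_error(p_hat, p_true, P):
    """Normalized estimation error used to check consistency."""
    e = p_hat - p_true
    return float(e @ np.linalg.inv(P) @ e)

# Illustrative check with made-up values for a two-service trust estimate.
p_true = np.array([0.80, 0.60])
p_hat = np.array([0.78, 0.64])
P = np.array([[0.004, 0.001],
              [0.001, 0.006]])
print(information_content(P), normalized_error(p_hat, p_true, P))
```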

In Figure 2.10 we present these results (with the standard error in the expected values shown as error bars) as the number of contract observations ranges from 10 to 400. We note that the information content of the trust estimates generated by the Kalman filter formalism far exceeds that of those generated using inflated independent beta distributions (typically by a factor of 3). By explicitly representing the correlations between the services, our formalism generates more precise trust estimates. This increased precision is not realized at the cost of producing inconsistent estimates; the normalized standard error of both formalisms is less than two, and thus they both generate consistent estimates. Finally, we note that as the number of contracts increases, the Kalman filter encodes more precise correlation information, and the difference between the formalisms also increases.

Table 2.1 illustrates the effect that the precision of the trust estimate has on an agent’s estimate of the expected utility of a contract (calculated using the relationships shown in Equations 2.15 and 2.16 in an example setting where u(oa = 1) = 2 and u(ob = 1) = 6). While both formalisms generate estimates of expected utility close to the true distribution, the more precise covariance matrix of the Kalman filter results in a better estimate of the standard deviation of the expected utility (while that of the inflated independent beta distribution is approximately double the true value).

In summary, we have developed a trust formalism based on the Kalman filter that represents trust as a vector estimate of the probability that each service will be successfully delivered, and a covariance matrix that describes the uncertainty and correlations between these probabilities. We have described how the agents’ direct experiences of contract outcomes can be represented and combined within this formalism, and we have empirically demonstrated that the formalism provides significantly better trustworthiness estimates than the alternative of using separate single-dimensional trust models for each separate service (where information regarding the correlation between each estimate is lost).

FIGURE 2.10 Comparison of the expected information content, E[I], and normalized standard error (NSE) for trust formalisms using the Kalman filter and independent beta distributions.

TABLE 2.1
Estimated Expected Utility and Its Standard Deviation Calculated from an Agent’s Posterior Trust Estimate

Method                        E[E[U]] ± √var(E[U])
True distribution             5.80 ± 0.27
Inflated independent beta     5.86 ± 0.53
Kalman filter                 5.82 ± 0.34

2.8    FUTURE DESIGN CONCERNS AND OPPORTUNITIES

As cheap sensing and computation increasingly pervades the world around us, it will profoundly change the ways in which we work with computers. Rather than issuing instructions to passive machines, we will increasingly work in partnership with highly interconnected computational components (agents) that are able to act autonomously and intelligently. Humans and software agents will continually and flexibly establish a range of collaborative relationships with one another, forming human-agent collectives (HACs) to meet their individual and collective goals.

This vision of people and computational agents operating at a global scale raises a very significant design concern that must be faced as we shift to becoming increasingly dependent on systems that interweave human and computational endeavor. As systems based on HACs grow in scale, complexity, and temporal extent, we will increasingly require a principled science that allows us to reason about the computational and human aspects of these systems if we are to avoid developments that are unsafe, unreliable, and lack the appropriate safeguards to ensure societal acceptance.

2.8.1    HAC DESIGN CONCERNS

The global scale and decentralized nature of HACs mean that control and information will be widely dispersed between a large number of potentially self-interested actors with different aims, objectives, and availabilities. These features of HACs raise the following design challenges:

•  Understand how to provide flexible autonomy that will allow agents to sometimes take actions in a completely autonomous way without reference to their human owner, while at other times being guided by much closer human involvement in key decisions

•  Discover the means by which groups of agents and humans can exhibit agile teaming and come together on an ad hoc basis in order to achieve a goal that none of the individuals can achieve in isolation and then disband once the cooperative action has been successful

•  Elaborate the principles of incentive engineering in which actors’ rewards are designed in such a way that the actions that the participants are encouraged to take, when amalgamated, generate socially desirable outcomes

•  Design and develop an accountable information infrastructure that can provide a step change in situational awareness by blending sensor and crowd generated content in a robust and reliable way, and developing mechanisms that allow its veracity and accuracy to be confirmed and audited

How to solve these challenges and establish the new science needed to understand, build, and apply HACs in the real world is still very much a subject in its infancy. It is sure to draw on the DDF methods described in this book, but they will need to be enriched with insights, understanding, and methodology resulting from a broader multidisciplinary approach involving artificial intelligence, agent-based computing, machine learning, decentralized information systems, participatory systems, and ubiquitous computing.

2.8.2    HAC OPPORTUNITIES

If these challenges can be solved they will help us meet some of the key societal challenges of sustainability, inclusion, and safety that are crucial to our future. To conclude, let us consider three application domains that are expected to be significant beneficiaries:

Disaster response: Effective disaster response requires rescue services to make critical decisions in the face of an uncertain and rapidly changing situation. We aim to develop systems that allow first responders and software agents to work effectively together in such situations to collect the best possible information from the environment (through diverse sources such as CCTV feeds, UAVs, and crowd generated content), in order to most effectively manage and coordinate the various rescue resources available. Key technologies to achieve these aims include (i) decentralized coordination algorithms that can effectively allocate resources in the absence of centralized control, (ii) methodologies to flexibly handle autonomy so that the decisions that are autonomously made by software agents can be continuously changed as needs arise, and (iii) the ability to track the provenance of information and decisions such that previous decisions can be updated as new information comes to light.

Smart grid: Developing a modern electricity grid where information flows in both directions between consumers and producers is critical to achieving worldwide carbon reduction targets. HACs are an essential part of this vision, for example, the use of agents (or “energy avatars”) that are capable of continuously monitoring, predicting, and feeding back information about energy generation and consumption within the grid, in order to satisfy individuals’ preferences for cost, carbon, and comfort. Some requirements in support of these aims are (i) coalition formation algorithms that allow multiple self-interested parties such as renewable generators to come together with consumers to create virtual power plants that can more effectively manage the intermittent nature of these energy sources, (ii) algorithms to generate effective short term predictions of demand and supply to allow the optimization of energy use, and (iii) accountable information infrastructure to ensure the information provided to users on their smart meters is easily understandable, credible, and auditable for billing purposes.

Citizen science: Scientific research projects are increasingly turning to citizen scientists to help solve problems that defy conventional computational approaches, for example, the Zooniverse projects in astronomy (zooniverse.org). These projects require approaches that allow such problems to be solved at scale, making full use of the skills, preferences, and capabilities of the volunteer participants. To make effective use of volunteer participants within these settings there is a need to develop (i) algorithms to model and predict the accuracy and trustworthiness of citizen generated content, (ii) methodologies and data models that allow us to track and reason about the provenance of information collected in this way, and (iii) mechanisms that allow us to target which volunteers are asked which questions based on learned models of their capabilities.

ACKNOWLEDGMENTS

The research described in Sections 2.4 and 2.5 was undertaken as part of the ARGUS II DARP project. This was a collaborative project involving BAE Systems, QinetiQ, Rolls-Royce, the University of Oxford, and the University of Southampton. It was funded by the industrial partners together with the EPSRC, MOD, and DTI. The research described in Section 2.6 was funded by the Systems Engineering for Autonomous Systems Defence Technology Centre. The research described in Section 2.7 was undertaken as part of the ALADDIN project and was jointly funded by a BAE Systems and EPSRC strategic partnership (EP/C548051/1). The research described in Section 2.8 is being undertaken as part of the ORCHID project funded by EPSRC.

REFERENCES

Bar-Shalom, Y., X.-R. Li, and T. Kirubarajan. 2001. Estimation with Applications to Tracking and Navigation. Hoboken, NJ: Wiley Interscience.

Dash, R.K., D.C. Parkes, and N.R. Jennings. 2003. Computational mechanism design: A call to arms. IEEE Intelligent Systems, 18(6): 40–47.

Dash, R., A. Rogers, S. Reece, S.J. Roberts, and N.R. Jennings. 2005. Constrained bandwidth allocation in multi-sensor information fusion: A mechanism design approach. Proceedings of the Eighth International Conference on Information Fusion, Philadelphia, PA.

Doob, J.L. 1990. Stochastic Processes. New York: Wiley.

Durrant-Whyte, H.F., M. Stevens, and E. Nettleton. 2001. Data fusion in decentralized sensor networks. Proceedings of the Fourth International Conference on Information Fusion, Montreal, Quebec, Canada, pp. 302–307.

Jehiel, P. and B. Moldovanu. 2001. Efficient design with interdependent valuations. Econometrica, 69(5): 1237–1259.

Julier, S.J. and J.K. Uhlmann. 2001. General decentralized data fusion with covariance intersection (CI). In Handbook of Multisensor Data Fusion, D.L. Hall and J. Llinas, eds., Boca Raton, FL: CRC Press, Chapter 12.

Reece, S. and S.J. Roberts. 2005. Robust, low-bandwidth, multi-vehicle mapping. Proceedings of the Eighth International Conference on Information Fusion, Philadelphia, PA.

Reece, S., A. Rogers, S.J. Roberts, and N.R. Jennings. 2007a. A multi-dimensional trust model for heterogeneous contract observations. Proceedings of 22nd AAAI Conference on Artificial Intelligence, Vancouver, British Columbia, Canada, pp. 128–135.

Reece, S., A. Rogers, S.J. Roberts, and N.R. Jennings. 2007b. Rumours and reputation: Evaluating multi-dimensional trust within a decentralized reputation system. Proceedings of Sixth International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-07), Honolulu, HI, pp. 1063–1070.

Rogers, A., R. Dash, N.R. Jennings, S. Reece, and S.J. Roberts. 2006. Computational mechanism design for information fusion within sensor networks. Proceedings of the Ninth International Conference on Information Fusion, Florence, Italy.

Rogers, A., A. Farinelli, R. Stranders, and N.R. Jennings. 2011. Bounded approximate decentralized coordination via the max-sum algorithm. Artificial Intelligence, 175(2): 730–759.

Waldock, A. and D. Nicholson. 2011. A framework for cooperative control applied to a distributed sensor network. The Computer Journal, 54(3): 471–481.

Zooniverse. n.d. Real Science Online. www.zooniverse.org (accessed March 16, 2012).
