16

Human Engineering Factors in Distributed and Net-Centric Fusion Systems

Ann Bisantz, Michael Jenkins and Jonathan Pfautz

CONTENTS

16.1  Introduction

16.2  Characterizing the Domain to Drive Fusion System Design, Development, and Evaluation

16.3  Identifying Fusion System Capabilities to Mitigate Domain Complexities

16.4  Identification of Touch Points within a Hard-Soft Fusion Process for Intelligence Analysis

16.4.1  Touch Point 1

16.4.2  Touch Point 2

16.4.3  Touch Point 3

16.4.4  Touch Point 4

16.4.5  Touch Point 5

16.4.6  Touch Point 6

16.5  Conclusion

Acknowledgment

References

16.1  INTRODUCTION

Successful deployment and use of distributed fusion systems require careful integration of human and automated reasoning processes. While fusion systems exploit sophisticated and powerful algorithms that can rapidly and efficiently transform masses of data into situational estimates, human operators remain a crucial component of the overall fusion process. Not only are humans information consumers, but they are also participants throughout the fusion process (Blasch and Plano 2002), acting individually or collectively as sensors or, more critically, as in-the-loop guides for specific computations. Because of the complex interaction between human and automation inherent in fusion systems, attention to human-system integration issues during design, development, and evaluation is critical to the ultimate success of these systems.

For example, as information is delivered by the fusion system, human operators must make a judgment about its pertinence, as well as about the means by which it was generated (e.g., what data sources were used, which algorithms with what assumptions and error characteristics) and how that may affect the quality of the information (e.g., whether a remote sensor is reliable, whether a model for the situation is valid, whether a correlation tool has sufficient data to reach desired levels of accuracy and precision). These judgments determine how the human as information consumer will use, interpret, and appropriately value or trust the fused information, and are based in part on the information that qualifies the outputs of the fusion system, or meta-information (e.g., uncertainty, source quality or pedigree, timeliness, pertinence) (Bisantz et al. 2009, Pfautz 2007). These judgments may be more challenging in distributed or net-centric situations where the operator is removed (spatially, jurisdictionally, and/or hierarchically) from the sources of data or nodes performing the data processing.
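To make the idea of qualified fused outputs concrete, the sketch below shows one way meta-information (uncertainty, pedigree, timeliness) might travel with an estimate so a distributed consumer can judge its quality. This is a minimal illustration; the class and field names are hypothetical, not drawn from any particular fusion system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FusedEstimate:
    """A fused output qualified by the meta-information an operator
    needs in order to value or trust it."""
    value: str                # the situational estimate itself
    uncertainty: float        # e.g., 0.0 (certain) to 1.0 (unknown)
    timestamp: float          # when the estimate was produced
    sources: List[str] = field(default_factory=list)  # pedigree: contributing sensors/reports

    def is_stale(self, now: float, max_age: float) -> bool:
        """Timeliness check: has the estimate outlived its useful window?"""
        return (now - self.timestamp) > max_age

# A remote consumer weighs the estimate using its qualifiers:
est = FusedEstimate("vehicle at checkpoint A", uncertainty=0.3,
                    timestamp=100.0, sources=["UAV-2", "HUMINT report 17"])
print(est.is_stale(now=160.0, max_age=30.0))  # True: 60s old, 30s window
```

Carrying pedigree and timestamps explicitly, rather than discarding them after processing, is what allows an operator who is spatially or jurisdictionally removed from the data sources to make these quality judgments at all.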

In well-designed fusion systems, operators can work in concert with the fusion process, using their knowledge of the situational context and environmental constraints to:

•  Provide qualitative data as a sensor—for example, judge the location or count of visible entities

•  Guide the selection of input data—for example, choose sources, filter sources, filter certain types of data, or prioritize data (e.g., as a function of source)

•  Provide corrections to intermediate results—for example, manually provide detections, “hand-stitch” tracks to correlate objects across frames, or select most-apt situation models based on mission context

•  Adjust or “tune” fusion system parameters—for example, manipulate detection thresholds, define training sets for learning models, or specify processing stages or algorithms

Operators can perform some or all of these tasks to improve the fusion system’s performance, particularly in mixed-initiative, dynamic fusion systems, and “hard/soft” fusion systems, which combine data from human or “soft” sources with those from physical sensors (Hall et al. 2008). Figure 16.1 illustrates the range of actions operators can take with respect to fusion system performance. Operators can also interact with the fusion system to pose questions and test their own hypotheses about the implications of the fused information. These tasks, like those of an operator simply consuming the fused information, are also directly affected by the meta-information inherent in the fusion process (e.g., Algorithm B performs in real-time, while Algorithm A performs only in near-real-time but has a higher rate of detection; therefore, given a critical need for timely information, Algorithm B is the better choice despite its lower detection rate).
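The algorithm-selection reasoning just described can be pictured as a simple decision rule over algorithm meta-information. The sketch below is illustrative only; the algorithm names, rates, and tuple format are assumptions made for the example.

```python
def choose_algorithm(algorithms, need_timely):
    """Pick a fusion algorithm using its meta-information: prefer
    real-time algorithms when timeliness is critical, otherwise
    maximize detection rate.

    Each entry is a (name, is_real_time, detection_rate) tuple."""
    candidates = [a for a in algorithms if a[1]] if need_timely else list(algorithms)
    if not candidates:          # nothing meets the timeliness constraint
        candidates = list(algorithms)
    return max(candidates, key=lambda a: a[2])[0]

algorithms = [
    ("Algorithm A", False, 0.92),  # near-real-time, higher detection rate
    ("Algorithm B", True, 0.80),   # real-time, lower detection rate
]
print(choose_algorithm(algorithms, need_timely=True))   # Algorithm B
print(choose_algorithm(algorithms, need_timely=False))  # Algorithm A
```

The point of the sketch is that the choice is only possible when meta-information (timeliness, detection rate) is exposed to whoever, human or automation, makes the selection.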

To further explore the role of human operators within a complex fusion process as well as to illuminate various challenges and potential human factors solutions, this chapter uses the design of a fusion system that aids a hypothetical intelligence analysis process as a case study. This case study demonstrates that by combining knowledge of both human and system strengths and limitations with an understanding of the work domain (i.e., through the application of cognitive systems engineering methods [Bisantz and Burns 2008, Bisantz and Roth 2008, Crandall et al. 2006, Hollnagel and Woods 2005, Vicente 1999]), engineers can design fusion systems that successfully integrate human operators at critical interaction points (“touch points”) to enhance overall system performance.

Image

FIGURE 16.1 Characterizing potential points of human–fusion system interaction

The need to make these judgments raises important human factors challenges regarding appropriate methods for coordinating mixed human-automated control over fusion processes, communicating system processes and states to the human operator, and visualizing key information and meta-information. These challenges are exacerbated in net-centric environments, in which humans distributed in time and space provide input, interact with, and make decisions based on fusion processes.

This chapter begins with a discussion of military intelligence analysis, its characteristics as a work domain, and the challenges it poses for human analysts. Subsequent sections highlight potential features of fusion systems that can mitigate these challenges for different stages of the intelligence analysis process. The chapter ends with design characteristics and rationale for a set of touch points for human control and interaction with a distributed, hard-soft information fusion system.

16.2  CHARACTERIZING THE DOMAIN TO DRIVE FUSION SYSTEM DESIGN, DEVELOPMENT, AND EVALUATION

One of the first requirements for identifying and supporting beneficial interactions between a fusion system and a human operator is to understand the relevant characteristics of the domain in which they will perform, as well as the requirements and constraints that arise from environmental, computational, operational, and socioorganizational factors. This understanding can be derived via formal systems engineering practices (e.g., requirements analysis [Laplante 2009]), but these approaches often fail to adequately characterize human factors. Cognitive systems engineering and its associated methodologies for characterizing work domains (e.g., cognitive work analysis [Bisantz and Burns 2008, Bisantz and Roth 2008, Vicente 1999]; human-centered system engineering [Crandall et al. 2006, Hollnagel and Woods 2005]) represent an approach that focuses on both human and system performance and their interrelationships. In our case study, we demonstrate how this approach reveals specific features of human-system interaction that should influence design, development, and evaluation of fusion systems.

The intelligence analysis domain: “Intelligence analysis” broadly refers to reasoning over available data with the goal of making a coherent whole of the past, present, and/or future states of some real-world environment or situation. Intelligence analysts review and process (i.e., filter, validate, correlate, and summarize) as much data as feasible under constraints such as externally imposed deadlines, availability of data, and availability of data collection resources. This typically massive amount of data crosses spatio-temporal scales (from a street corner to vast regions, from microseconds to decades) (Mangio and Wilkinson 2008). It is interpreted by the analysts who use their experience to compare their interpretations with existing hypotheses (Tam 2008–2009). The analyst revises and generates hypotheses defining the current situation (which continues to evolve during the analysis) until a threshold of confidence is reached, or, more often, a deadline is reached. At this point, the analyst creates useful products for decision-makers, often in distributed locations. This entire process may be completed in only seconds (e.g., to answer requests such as “Where did the target go?”) or months (e.g., for questions such as “What are the historic and economic drivers behind the regional instability?”).

Prior literature provides a variety of characterizations of intelligence analysis, including some from the human factors community (Cook and Smallman 2008, Elm et al. 2004, Grossman et al. 2007, Heuer 1999, Patterson et al. 2001a, Pfautz et al. 2005b, Pfautz et al. 2006, Pirolli and Card 2005, Powell et al. 2006, Powell et al. 2008, Trent 2007, Zelik et al. 2007). An extensive review of this literature is out of the scope of this chapter; however, there are several characteristics worth highlighting. In terms of the tasks that analysts commonly perform, Hughes and Schum (2003) point out that regardless of the specific form or goal of an analysis, the intelligence analysis process always involves three parts: (1) hypothesis generation, (2) evidence gathering and evaluation, and (3) generation and evaluation of arguments linking evidence and hypotheses. Incomplete and uncertain information, combined with the need for timely solutions, means that success in this domain is best characterized by convergence (Grossman et al. 2007). Elm et al. (2005) and Potter et al. (2006) define convergence as “a stable balance of applying broadening and narrowing … to focus on a reduction of the problem towards an answer.” Therefore, a successful analysis effort will be one that applies multiple cycles of broadening and narrowing based on available resources (e.g., time, information, cognitive capacity, etc.) to avoid premature closure and arrive at a set of final hypotheses that best explain the substantive problem.

Kent’s early model describing seven intelligence analysis stages (1965) includes activities which can be associated with more recent models, as shown in Figure 16.2.

These models help provide a general definition of the steps of intelligence analysis and support identification of the characteristics of the domain that make successful performance challenging. By leveraging human factors engineering techniques, several research efforts have identified factors that contribute to the complexities that commonly arise during the analysis process. The majority of these factors relate to the unique characteristics of the data that feed the intelligence analysis process and the dynamic structure of the problems that the process attempts to solve. Table 16.1 presents a nonexhaustive list of factors that create challenges during problem analysis and references that illustrate the methods used to uncover and characterize each factor. In subsequent sections, we demonstrate how this analysis can contribute to the design of supportive fusion systems.

Finally, in addition to understanding the characteristics of the domain that make performance challenging, cognitive engineering analyses typically characterize expert performance within the domain. Here, the focus is on identifying the knowledge and strategies required for successful intelligence analyses. The iterative broadening and narrowing across data collection, synthesis, and hypothesis evaluation analysis phases as described in Elm et al. (2004, 2005) is one example strategy. Multiple cognitive task analyses of the intelligence analysis process indicate that analysts employ simplifying strategies as a result of high cognitive demands (particularly in terms of large amounts of data with varying characteristics, the need to simultaneously consider multiple hypotheses, and time pressure) (Endsley 1996, Jian et al. 1997), and that these strategies may introduce biases into the analysis process (see Heuer 1999, Hutchins et al. 2004, 2006, Johnston 2005, Kahneman et al. 1982, Mangio and Wilkinson 2008, Patterson et al. 2001a,c, Pirolli 2006, Pirolli and Card 2005, Roth et al. 2010, Tam 2008–2009, Trent et al. 2007). Examples of decision biases which may impact intelligence analysis include:

Image

Image

FIGURE 16.2 Comparison of models of intelligence analysis and implied characteristics and challenges of intelligence analysis tasks. (From Elm, W. et al., Designing support for intelligence analysis, Presented at the 48th Annual Meeting of Human Factors and Ergonomics Society, New Orleans, LA, 2004; Hughes, F.J. and Schum, A., Preparing for the future of intelligence analysis: Discovery-proof-choice, Joint Military Intelligence College, Unpublished Manuscript, 2003; Kent, S., Special Problems of Method in Intelligence Work. Strategic Intelligence for American World Policy, Archon Books, Hamden, CT, 1965.)

TABLE 16.1
Nonexhaustive List of Factors Contributing to the Challenges of Problem Analysis

Factor

Description

References

Ambiguous requests for information (RFI)

Analysts may receive RFIs from a third party they have no means of contacting. In these cases, if information needs are poorly expressed or communicated, or if the operational context surrounding the RFI is unclear, then the analyst is faced with the problem of first defining the problem that he or she is supposed to be solving. Information such as why the question is being asked, what the boundaries for investigation are, and what the consumer is really trying to accomplish is critical for successful analysis, yet often missing

IW JOC (2007), Elm et al. (2005), Heuer (1999), Hewett et al. (2005), Hutchins et al. (2004), Johnston (2005), Kent (1965), Pfautz et al. (2006), Roth et al. (2010)

Complex data characteristics

The complexity of the data that analysts must interpret is increased by numerous factors that are all commonly present during analysis. Factors such as multiple data formats (e.g., images, written/verbal reports, technical data, etc.), operational context (i.e., multiple lines of operation require multiple skill sets for interpretation), massive volumes of data, data set heterogeneity, data set complexity (i.e., number of relationships within the data set), and data class (e.g., HUMINT, SIGINT) all increase the overall complexity of the data. As these factors become more prominent, the difficulty of the analysis increases and analysts become more susceptible to problems such as data overload

MSC 8/06 (2006), FM 3-24 12/06 (2006), IW JOC (2007), Drucker (2006), Elm et al. (2004), Greitzer (2005), Heuer (1999), Hutchins et al. (2006), Larson et al. (2008), Patterson et al. (2001a,b), Pfautz et al. (2006), Roth et al. (2010), Taylor (2005)

Distributed work structure

Analysts may not be in the environment or participants in the situation they are investigating. Analysts may or may not be co-located with their "customers." Socio-organizational structure may impose several levels (and geospatial distance) between the analyst and the source of information, information requestor, and other analysts or experts, creating communication barriers that impede analysis. Analysts may be required to work on requests that overlap with current or past projects of analysts in other organizations. The lack of inter-organizational information sharing in the intelligence community (even within the same organization) creates information access challenges, and results in a failure to consider multiple perspectives

Hewett et al. (2005), Johnston (2005), Kent (1965), Pfautz et al. (2006)

Multiple perspectives for consideration

Analysts face the difficult challenge of simultaneously integrating considerations from multiple perspectives when performing their analysis, including the perspective of the information sources; the situation/culture/area being assessed; and the objectives, goals, and situational context of the consumer or user of the information being developed

Johnston (2005), Taylor (2005)

Information may be deceptive, unreliable, or otherwise qualified

Issues of deception are common in the intelligence analysis domain and add complexity to each piece of information that analysts must interpret. Information may come from human sources which are biased or unreliable due to characteristics of the situation or the human observer. Analysts must understand the impact these different levels of meta-information have on information utility, as well as maintain and communicate qualifiers throughout the analysis process, a task which can be particularly challenging in distributed systems

Bisantz et al. (2009), Hardin (2001), Heuer (1999), Jenkins et al. (2011), Pfautz et al. (2005a), Pfautz et al. (2007), Pfautz et al. (2005b), Zelik et al. (2007)

Dynamic nature of domain

As time passes during the course of an analysis, analysts are faced with the challenge of constantly re-evaluating their hypotheses and supporting/refuting evidence to ensure they are still relevant and valid. This is a requirement resulting from the dynamic nature of the real-world. For example, the location of a person of interest is likely to be highly dynamic and must be continuously monitored and updated by the analyst. Additional considerations must also be given to meta-data (e.g., the source of data), which can evolve over time as real-world events play out

Pfautz et al. (2006), Roth et al. (2010)

Unbounded problem space

During the broadening process, when analysts are attempting to uncover novel explanations for a set of information, they are faced with the challenge of considering a theoretically infinite problem space. The "open world" of possible explanations and events creates a wide range of unpredictable hypotheses for consideration. This unbounded range of possibilities is likely to result in a high degree of mental workload if analysts attempt to consider every hypothesis they can think of. Further, even when they consider a large set of possible hypotheses, asking "Am I missing something?" is a very difficult question to answer, especially after an extensive attempt to avoid missing any relevant information

Hutchins et al. (2006), Pfautz et al. (2006)

•  Selectivity bias: Information is selectively recalled as a function of how salient the information is to the individual analyst.

•  Availability bias: Frequency of an event is predicted based on how easily it is recalled from long-term memory.

•  Absence of evidence bias: Failure to recognize and incorporate missing data into judgments of abstract problems (e.g., “What am I missing?”).

•  Confirmation bias: Interpreting information to confirm existing beliefs or hypotheses.

•  Overconfidence bias: Excessive certainty that one’s own hypotheses are correct, even when they are frequently wrong.

•  Oversensitivity to consistency bias: Placing too much reliance on small samples and failing to discern that multiple reports derive from the same source information.

•  Discredited evidence bias: Persistence of beliefs or hypotheses even after evidence fully discrediting those hypotheses is received.

Systems designed to enhance analysis performance must provide support to reduce or overcome the impacts of these biases.
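One of the biases above, oversensitivity to consistency, lends itself to partial automated support: a system that maintains source pedigree can warn the analyst when seemingly corroborating reports all trace back to a single origin. The sketch below illustrates the idea; the report structure and names are hypothetical.

```python
from collections import Counter

def corroboration_check(reports):
    """Count how many *distinct* original sources support a hypothesis.
    Each report is a (report_id, origin_source) pair; several reports
    sharing one origin do not constitute independent corroboration."""
    origins = Counter(origin for _, origin in reports)
    distinct = len(origins)
    duplicated = [src for src, n in origins.items() if n > 1]
    return distinct, duplicated

reports = [("R1", "informant-7"),
           ("R2", "informant-7"),
           ("R3", "SIGINT-intercept-12")]
distinct, duplicated = corroboration_check(reports)
print(distinct)    # 2 distinct sources, not 3 independent reports
print(duplicated)  # ['informant-7']
```

Such a check does not remove the bias, but surfacing the pedigree overlap lets the analyst weight the evidence appropriately rather than over-counting repeated reporting.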

16.3  IDENTIFYING FUSION SYSTEM CAPABILITIES TO MITIGATE DOMAIN COMPLEXITIES

Information fusion systems can be designed to support information analysis processes. Considering domain characteristics and complexities allows identification of fusion system capabilities which support human performance in this challenging environment. After identifying the challenges that intelligence analysts are likely to face over the course of their analyses, the domain complexities that create these challenges can be characterized and high-level fusion system capabilities that address the complexities can be determined. Fusion system capabilities and their mapping to domain complexities are provided in Table 16.2.

In many situations, fusion systems are designed to integrate into existing workflows, as is the case with our fusion system to support a hypothetical intelligence analysis process. In these cases, it is often helpful to expand the capabilities map to show where in the existing workflow the end-user will access each capability. This map is useful for two reasons. First, it allows fusion engineers and system designers to understand where in the existing workflow end-users will access the system, and what goals they will have in mind at that time. This is important because it allows system designers to better understand the end-user’s information-seeking, system control or monitoring, and/or other interaction-related tasks.

TABLE 16.2
Intelligence Analysis Complexities and the High-Level Fusion System Capabilities Selected to Mitigate Them

Intelligence Analysis Complexity

Fusion System Capabilities

Justification

Ambiguous RFIs

Support multiple searches (stored or ad hoc) for situations of interest

Allow analysts to explore multiple possibilities simultaneously to cover multiple interpretations or contexts. Facilitate systematic interaction between analyst and requestor to enable clarification and tie to specific fusion system products and processes

Data characteristics

Support manual or automated data association

Challenges relating to volume, heterogeneity, complexity, uncertainty, etc. of data can all be mitigated by providing external tracking of meta-data and automating data association to alleviate the analyst's burden

Support manual/automated situation assessment

Given the volume of data that analysts receive on a daily basis and the often high degree of complexity, having the fusion system perform automated situation assessments that can be highlighted for the analyst based on some pre-defined criteria will allow the analyst to handle a significantly larger volume of data during the analysis process

Distributed Structure

Store access credentials to multiple data sources

Analysts need to access different databases and information sources across departments and with different access requirements that need to be remembered. Having the fusion system store these credentials and automatically pull in data from these sources as needed will help mitigate this issue

Automate language translation

Feasibility may be an issue, but translating incoming natural language messages and reports into the analyst's native language will open up the analysis to a range of sources that may otherwise be ignored due to the difficulty of overcoming the language barrier

Lack of data

Set up custom search alerts to notify when data or situation is available/appears

Lack of key data or data sets may require analysts to repeatedly return to a source or data set to check for updates or appearance of the missing data. Allowing the analysts to set expectancies in terms of missing data and be automatically notified by the fusion system on its appearance/update will save time/effort during an analysis

Maintain and fuse pedigree data during data association processes

Information meta-data or pedigree is critical for analysts to evaluate the quality of the data and weight it appropriately to support or refute their hypotheses. Having the fusion system maintain this data in an accessible format before, during, and after processing will help analysts to quickly evaluate different pieces of data

Potential for deception

Support what-if scenarios (e.g., what if this piece of data were false/true)

Allowing analysts to quickly change the characteristics of a piece of data (e.g., set uncertainty to 100% or 0%) and see how it affects networks of information and situations of interest will allow analysts to see how important different pieces of data are so they understand the degree to which their hypotheses rely upon them

Characterize and compute meta-information as part of the fusion process

Augment or enhance the algorithms in the fusion process to expose factors that allow for reporting of confidence and certainty. Similarly, expose and maintain the qualities of the data feeding the fusion process. Provide an overall characterization of the state of the fusion process, its components, etc.

Real-world dynamics

Update knowledgebase in near real-time with most recently received data

Analysts need access to the most recent data to perform the most accurate analysis possible. System should update its database with new information as quickly as possible

Create temporal boundaries on situation assessment and data association

The theoretical dataspace for an analysis is infinite and impossible for a single analyst to fully consider. Allowing the analyst to set temporal boundaries on what data should be considered helps to limit the data under consideration and ensures that the data being considered falls within a specific time-frame, an important feature for historical analyses

Maintain temporal reference meta-data

All real-world events play out on the same timeline, so analysts can drill down to data or organize sets of data to view the order in which they occurred

Unbounded problem space

Support custom boundary ranges

For seemingly unbounded problems, where the theoretical data space is near infinite, allowing the analyst to set custom boundary ranges (e.g., temporal, regional, cultural, etc.) will significantly reduce the volume of data for consideration. It also allows the analyst to provide top-down feedback to the system on where relevant data are likely to be found based on prior experience
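Several of the capabilities in Table 16.2 amount to straightforward data transformations once the analyst's settings are captured. For instance, the temporal boundaries on situation assessment and data association could be realized as a simple window filter over timestamped records. The sketch below is illustrative; the record fields are assumptions made for the example.

```python
def within_temporal_bounds(records, start, end):
    """Keep only records whose timestamp falls inside the analyst-set
    window, limiting the data the association stage must consider."""
    return [r for r in records if start <= r["timestamp"] <= end]

records = [
    {"id": "obs-1", "timestamp": 50},
    {"id": "obs-2", "timestamp": 120},
    {"id": "obs-3", "timestamp": 300},
]
bounded = within_temporal_bounds(records, start=100, end=200)
print([r["id"] for r in bounded])  # ['obs-2']
```

The value of the capability lies less in the filter itself than in giving the analyst top-down control over what the automated stages consider.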

The second reason is that the capabilities map allows system designers to see if they are designing capabilities that do not readily integrate into existing workflows. If, during the capabilities-to-workflow mapping process, it is found that a capability does not have a clear location within the existing workflow, then the capability needs to be augmented or the workflow must change to incorporate the new capability.

Figure 16.3 provides an example of the capabilities map created for our fusion system to support intelligence analysis, with the capabilities from Table 16.2 mapped to different common stages of the intelligence analysis process. The stages of analysis are based on Kent’s overview of the intelligence analysis process (described earlier) (1965). Cells shaded with grid backgrounds in Figure 16.3 represent stages of analysis where the respective intelligence analysis complexities are most likely to present challenges to analysts. This complexity-to-workflow mapping was uncovered during the characterization of the work domain and its complexities. Ideally, all highlighted cells will have associated capabilities that mitigate the challenges that arise due to the respective complexity at that stage of the intelligence analysis process. However, due to technological or other limitations, some of these capabilities may not be feasible. For example, Figure 16.3 shows that the challenges arising from the distributed structure of the domain during the analysis of the substantive problem (Stage 2) are not being addressed by any planned capabilities. Finally, capabilities which do not directly map to the intended purpose of the system (i.e., those that appear in cells with white backgrounds in Figure 16.3) can also be identified and reviewed regarding their respective utility (for example, to provide redundancy across stages).
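The gap-finding step just described, locating complexity/stage cells with no mitigating capability, is easy to automate once the two mappings are explicit. A minimal sketch follows, using illustrative data; stage numbers follow Kent's model as used in the text.

```python
def find_gaps(complexity_stages, capability_stages):
    """Return (complexity, stage) cells where a complexity is expected
    to appear but no planned capability covers that stage.

    complexity_stages: complexity -> set of stages where it arises
    capability_stages: complexity -> set of stages covered by capabilities"""
    gaps = []
    for complexity, stages in complexity_stages.items():
        covered = capability_stages.get(complexity, set())
        for stage in sorted(stages - covered):
            gaps.append((complexity, stage))
    return gaps

# Illustrative: distributed-structure challenges arise in Stages 2 and 3,
# but planned capabilities only cover Stage 3.
complexity_stages = {"Distributed structure": {2, 3}}
capability_stages = {"Distributed structure": {3}}
print(find_gaps(complexity_stages, capability_stages))
# [('Distributed structure', 2)] -- the Stage 2 gap noted in the text
```

Making the check mechanical ensures coverage gaps are found systematically rather than by visual inspection of the map.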

16.4  IDENTIFICATION OF TOUCH POINTS WITHIN A HARD-SOFT FUSION PROCESS FOR INTELLIGENCE ANALYSIS

The domain and capabilities analysis is combined with information regarding the structure of a proposed hard-soft information system being developed to support intelligence analysis (Jenkins et al. 2011). Human-system interaction touch points represent stages during processing where the human operator will interact with the fusion system. Touch points can be included to accomplish a variety of goals; however, the overall suite of touch points should provide the information, interfaces, and control needed for the fusion system to support the human operator in the fusion loop. Consequently, the selection and design of respective touch points are critical to the overall success of the fusion system because they will serve as the windows to the fusion system for the end-user. If the human operators do not have access to the information they need, there is a strong possibility they will develop an inappropriate understanding of the system’s true capabilities, which can lead to under- or over-reliance on the fusion system (Lee 2008, Parasuraman and Riley 1997).

This example presents the location and early definition of six touch points identified for inclusion in the hard-soft fusion system. For this example, touch point locations and definitions were based on an early-stage fusion architecture to illustrate how touch points can be identified early in the overall system development process. Early identification is important because touch point locations and requirements influence the design of the fusion system. For example, a touch point requiring the display of specific information to the human operator influences the format and type of information the fusion system must maintain. This approach is consistent with the type of formative cognitive systems modeling and analysis described by Vicente (1999), in which modeling the work domain complexities can lead to new requirements for data sensing and processing capabilities necessary for successful performance.

Image

FIGURE 16.3 Extended capabilities map showing intended capabilities, the associated intelligence analysis complexity they are intended to mitigate, and the stage of intelligence analysis where they are expected to be accessed. Stages of intelligence analysis are based on Kent’s model (1965). Highlighted cells indicate stages in the workflow where a complexity is likely to appear.

The fusion system architecture used for this example was limited in that the user interface media (e.g., keyboard/mouse/monitor), system input/output formats, and the format and availability of source and data meta-information had not yet been defined. Consequently, the touch points were defined by first identifying their location in the fusion processing stream, given the capabilities they were intended to support. They were then refined by focusing on the feature-level details needed to support those capabilities. Each touch point can then be mapped to the stage(s) of analysis where it will be leveraged and the domain complexities that it is expected to mitigate. These details help designers understand the justification for respective touch points and link the capabilities map to the touch points list. Figure 16.4 provides the outline of the fusion architecture that was used to select the location of touch points (for additional details on this fusion architecture, see [Jenkins et al. 2011]), along with their location in the fusion stream. Figure 16.4 shows each of the touch points, the capabilities they support, the domain complexities they mitigate, and the interaction features required to support the human operator’s anticipated goals during the touch point interaction. In addition to these details, each fusion touch point lists the expected stage or stages of analysis (based on Kent’s seven-stage process mentioned previously [Kent 1965]) at which the analyst is expected to access the touch point. Understanding when analysts will leverage the touch points within their current (or future) workflow allows system designers to anticipate the intelligence analyst’s goals and design more effective supporting capabilities.

Image

FIGURE 16.4 Fusion system architecture to support the intelligence analysis process.

16.4.1  TOUCH POINT 1

Description: When analysts begin researching the problem(s) they are responding to, they must first set boundaries on the data under consideration. This touch point allows the analyst to select data sets for inclusion in the final fused data set that they will access throughout the course of their analysis. Analysts can revisit this touch point to change the boundaries of the data being considered if new information becomes available that needs to be integrated with the already processed data, or if their understanding of the substantive problem changes in a way that affects what data is likely to be relevant, creating a need for previously excluded data sets to be integrated into the fused data.

Location: Before incoming data set processing, but after incoming data sets have been received and are accessible to the fusion system for processing.

Expected intelligence analysis stage(s) when accessed:

•  Stage 2—Analysis of substantive problem

•  Stage 3—Collection of data

•  Stage 6—Additional data collection and hypothesis testing

Capabilities to be supported:

•  Multiple searches (stored or ad hoc) for situations of interest

•  Queue incoming data for processing

•  Custom boundary ranges

Required features:

•  Create custom data sets

•  Prioritize multiple data sets for processing

•  Scan/review data sets pre-processing

Complexities addressed:

•  Ambiguous requests for information

•  Lack of data

•  Unbounded problem space
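The selection and prioritization behavior this touch point implies can be sketched with a simple priority queue over candidate data sets. This is a minimal illustration under assumed names (the data set labels and the lower-number-is-higher-priority scheme are invented for the example), not the chapter's specified mechanism:

```python
import heapq

# Illustrative sketch of Touch Point 1: the analyst includes data sets in the
# bounded volume and prioritizes them for fusion processing.
class DataSetQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0   # tie-breaker so equal priorities stay first-in, first-out

    def include(self, name, priority):
        """Add a data set to the bounded volume; lower number = higher priority."""
        heapq.heappush(self._heap, (priority, self._counter, name))
        self._counter += 1

    def next_for_processing(self):
        """Hand the highest-priority data set to the fusion processing stream."""
        return heapq.heappop(self._heap)[2]

queue = DataSetQueue()
queue.include("HUMINT reports, sector 4", priority=1)
queue.include("archived SIGINT, last 30 days", priority=2)
print(queue.next_for_processing())   # prints HUMINT reports, sector 4
```

Revisiting the touch point then amounts to calling `include` again with newly available or previously excluded data sets.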

16.4.2  TOUCH POINT 2

Description: After the fusion system has performed initial operations on incoming data to prepare it for data association, intelligence analysts can review the data and potentially change data association settings before processing occurs. Analysts will want to ensure that the respective data sets (including the primary fusion database) are appropriate for identifying associations, given their contextual understanding of the substantive problem and the goals of the analysis. Settings that analysts may want to alter include the threshold for associating two pieces of data or for merging co-referenced entities; thresholds can be specific to an entity type, apply generally, or be set at other levels of granularity to ensure that associations are identified according to the analyst's preferences. The analyst can also manually set the uncertainty/probability/reliability value associated with a piece of data or with a data source, overriding system-generated values. This feature supports what-if scenarios, in which analysts may wish to see what happens if a piece of data is true or false, and lets analysts leverage their past experience with the reliability of particular data sources or classes/types of information.

Location: After initial processing and uncertainty alignment of incoming data sets, but before data associations and co-references are identified and resolved.

Expected intelligence analysis stage(s) when accessed:

•  Stage 3—Collection of data

•  Stage 6—Additional data collection and hypothesis testing

Capabilities supported:

•  Manual/automated data association

•  What-if scenarios (e.g., what if this piece of data were false/true)

•  Custom boundary ranges

Required features:

•  Review of data sets being considered for data association

•  Filtering of data sets to determine custom boundaries to utilize for data association

•  Selection of additional data sets (previously processed) to be included in data association

•  Custom threshold levels set to determine when manual approval is needed for a system-proposed merge to be carried out

•  Custom uncertainty values to be assigned to individual data elements (e.g., what if scenarios)

Complexities addressed:

•  Data characteristics

•  Potential for deception

•  Unbounded problem space
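The settings described for this touch point, per-entity-type association thresholds and analyst overrides of system-estimated reliability, can be sketched as follows. All names, values, and the dictionary-based design are assumptions made for illustration:

```python
# Illustrative sketch of Touch Point 2: association thresholds at several levels
# of granularity, plus analyst overrides for what-if scenarios.
DEFAULT_THRESHOLD = 0.8                                  # general threshold

association_thresholds = {"person": 0.9, "vehicle": 0.75}   # entity-type specific

def merge_threshold(entity_type):
    """Entity-type thresholds take precedence; otherwise the general one applies."""
    return association_thresholds.get(entity_type, DEFAULT_THRESHOLD)

# System-generated reliability values, keyed by a hypothetical data element id.
system_reliability = {"rpt-17": 0.6}
analyst_overrides = {}

def set_what_if(element_id, value):
    """Analyst pins a reliability value for a what-if scenario."""
    analyst_overrides[element_id] = value

def reliability(element_id):
    # Analyst overrides take precedence over system-generated values.
    return analyst_overrides.get(element_id, system_reliability.get(element_id))

set_what_if("rpt-17", 1.0)   # "what if this report were certainly true?"
```

Clearing the override would restore the system-generated value, letting the analyst step in and out of the what-if scenario.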

16.4.3  TOUCH POINT 3

Description: Depending on analysts’ preferences and requirements, they can approve potential data merges during the data association process. This needs to be done during data association processing because subsequent processing decisions may be affected by the decision to merge or not merge two pieces of data. Given the large volume of data that can be processed over the course of an analysis, the high number of associations likely to be uncovered within the volume of data, and the rapid updating of data elements as new and/or updated information becomes available, analysts need the ability to monitor fusion processes to maintain their contextual understanding of what the volume of data represents. A touch point during fusion processing can serve as a window to the underlying fusion processes to allow the analyst to understand how the system is manipulating the data before it presents its final output.

Location: Available throughout data association processing, either on demand or as needed by the system based on the system’s authority to autonomously carry out data association.

Expected intelligence analysis stage(s) when accessed:

•  Stage 3—Collection of data

Capabilities supported:

•  Manual/automated data association

•  What-if scenarios (e.g., what if this piece of data were false/true)

•  Maintain and fuse pedigree data during data association processes

Required features:

•  Review of system proposed merges based on predetermined threshold level

•  Approval/rejection of system-proposed merges

Complexities addressed:

•  Data characteristics

•  Potential for deception

16.4.4  TOUCH POINT 4

Description: The system will maintain a large-scale database that represents a network of associated entities and their respective attributes. This database will be a primary source of relevant information for analysts, who can rapidly search through it and set boundaries to focus on particular sections for consideration. However, analysts may prefer to perform fusion operations on two or more data sets and analyze the results before those results are merged and associated with the larger database of previously fused information; this touch point was incorporated to support that anticipated requirement. It serves as a pre-integration window for analyzing incoming data before it is associated and merged with previously processed, older information. The touch point is especially critical for distributed fusion environments, where multiple analysts and fusion processes may be working simultaneously on similar problems. It also lets analysts highlight/flag/annotate pieces of data or relationships before integration into the larger fusion system database, so that they can see where the data they (or other analysts) viewed as relevant ended up in the larger network of information.

Location: After data association processing, but before the integration of the merged and associated data sets into the overall fusion entity-association database.

Expected intelligence analysis stage(s) when accessed:

•  Stages 3 and 4—Collection and evaluation of data

Capabilities supported:

•  Automated language translation

•  Manual or automated data association

•  Maintain and fuse pedigree data during data association processes

•  Maintain temporal reference meta-data

•  Support automated pattern identification to highlight potential situations of interest

•  Custom search alerts that notify when data or situation is available/appears

Required features:

•  Browsing/reviewing of incoming data sets’ entity-association network

•  Selection of entities within the network to highlight or annotate prior to fusing with the overall fusion database

•  Review of executed data merges

•  Search input for the incoming data sets’ network for situations of interest

•  Expand entity or association meta-data, source data, and data association log

Complexities addressed:

•  Distributed structure of the intelligence analysis domain

•  Potential for deception

•  Real-world dynamics
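A minimal sketch of this pre-integration window, assuming the entity-association network is held as a simple adjacency structure (that representation, and all identifiers below, are invented for illustration):

```python
# Illustrative sketch of Touch Point 4: stage an incoming entity-association
# network, let the analyst annotate entities, then merge into the fusion database.
incoming = {"entity-A": {"entity-B"}, "entity-B": {"entity-A"}}
annotations = {}            # entity id -> analyst note; survives the merge
fusion_db = {"entity-B": {"entity-C"}, "entity-C": {"entity-B"}}

def annotate(entity, note):
    annotations[entity] = note

def integrate(incoming_graph, database):
    """Union the staged network into the database, keeping annotations traceable."""
    for entity, neighbors in incoming_graph.items():
        database.setdefault(entity, set()).update(neighbors)
    return database

annotate("entity-A", "possible alias of known actor; verify before tasking")
integrate(incoming, fusion_db)
# After the merge, the analyst can still locate flagged data in the larger network.
print(sorted(fusion_db["entity-B"]))   # prints ['entity-A', 'entity-C']
```

Keeping the annotation store separate from the graph is one way to ensure analyst flags are not lost when the staged network is absorbed into the larger database.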

16.4.5  TOUCH POINT 5

Description: The fusion system can automatically pull in available data sets to be associated and integrated into the active (current bounded volume for analysis) or static (overall fusion database) volume of data maintained by the fusion system. When this occurs, analysts can review updates to the database so they can rapidly become aware of what is going on in the scenario represented by the incoming data (i.e., focus their attention on what is changing). To support the analysis of multiple hypotheses, the analyst can create data milestones that, if they occur, would help to confirm or reject one of the analyst’s current hypotheses. In this situation, the analyst can view the incoming data that signaled the milestone “alert” within its original context before it is merged into the larger fusion data set. This view would include the data’s meta-data and associations, which may be removed during the merge.

Location: After any update to the fusion system information database.

Expected intelligence analysis stage(s) when accessed:

•  Stages 3 and 4—Collection and evaluation of data

Capabilities supported:

•  Store access credentials to multiple data sources

•  Set up custom search alerts to notify when data or situation is available/appears

•  Maintain and fuse pedigree data during data association processes

•  What-if scenarios (e.g., what if this piece of data were false/true)

•  Custom boundary ranges

•  Update knowledge database in near real-time with most recently received data

•  Maintain temporal reference meta-data

Required features:

•  Browsing/reviewing of the fusion database

•  Selection of database boundaries with respect to search/browsing capabilities

•  Review of highlighted/annotated entities and/or associations

•  Manual editing/addition of entities/associations/attributes

•  Search the database using custom criteria

•  Creation of entity/association placeholders that indicate expected hypotheses not yet incorporated/observed

•  Drill-down to entity or association meta-data, source data, update log, weighting, edit precedence, and data association log

Complexities addressed:

•  Distributed structure of intelligence analysis domain

•  Potential for deception

•  Real-world dynamics

•  Unbounded problem space
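The data-milestone mechanism described above can be sketched as predicates registered against incoming data, with a snapshot of the record's original context preserved when a milestone fires. The predicate-based design and all record fields here are illustrative assumptions, not the chapter's specification:

```python
# Illustrative sketch of Touch Point 5: milestones tied to hypotheses alert the
# analyst and snapshot the triggering data before it is merged into the database.
milestones = []   # (hypothesis, predicate) pairs
alerts = []       # (hypothesis, snapshot of the data in its original context)

def add_milestone(hypothesis, predicate):
    milestones.append((hypothesis, predicate))

def on_incoming(record):
    for hypothesis, predicate in milestones:
        if predicate(record):
            # Snapshot the record with its meta-data before associations that
            # may be removed during the merge are stripped away.
            alerts.append((hypothesis, dict(record)))

add_milestone("H1: shipment moving north",
              lambda r: r.get("event") == "vehicle sighting" and r.get("heading") == "N")
on_incoming({"event": "vehicle sighting", "heading": "N", "source": "checkpoint 3"})
print(len(alerts))   # prints 1
```

On review, the analyst would open the snapshot to see the triggering data in context, then decide whether it confirms or refutes the associated hypothesis.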

16.4.6  TOUCH POINT 6

Description: Touch Point 6 allows interaction with the fusion system’s primary output. Depending on the output capabilities of the system, this touch point can help analysts generate their presentation artifact(s) by providing network association diagrams, key pieces of data supporting or refuting their hypotheses, meta-information on data and/or sources, or other components in a consumable format. Because this touch point supports a number of analyst goals, the list of required features is lengthy even at this high level of abstraction; however, maintaining an understanding of the end-user’s goals as the touch point and associated features are defined will help ensure that the final suite of features provides a positive and beneficial user experience.

Location: This touch point sits outside the fusion processing architecture and serves as a global touch point that provides access to the output of the fusion system.

Expected intelligence analysis stage(s) when accessed:

•  Stage 3—Collection of data

•  Stage 4—Evaluation of data

•  Stage 6—Additional data collection and hypothesis testing

•  Stage 7—Presentation

Capabilities supported:

•  Multiple searches (stored or ad hoc) for situations of interest

•  Manual data association

•  Manual situation assessment

•  Store access credentials for multiple data sources

•  Automate language translation

•  Set up custom search alerts to notify when data or situation is available/appears

•  Maintain pedigree data

•  What-if scenarios (e.g., what if this piece of data were false/true)

•  Automated pattern identification to highlight potential situations of interest

•  Create temporal boundaries on situation assessment and data association

•  Maintain temporal reference meta-data

•  Support custom boundary ranges

Required features:

•  Filtering of data sets to determine custom boundaries

•  Custom uncertainty values to be assigned (e.g., what-if scenarios)

•  Review/approval of system proposed data merges

•  Browse/review individual or group data sets’ entity-association database

•  Select entities within the entity-association database to highlight or annotate

•  Review executed data merges

•  Search input to the entity-association database for situations of interest

•  Select entity-association database boundaries/filters for search or other capabilities

•  Review highlighted/annotated entities

•  Manual editing/addition of entities and/or attributes

•  Creation of entity/association placeholders that indicate expected hypotheses not yet incorporated/observed

•  Drill-down to entity or association meta-data, source data, update log, weighting, edit precedence, and data association log

•  Revision to data set boundaries

Complexities addressed:

•  Data characteristics

•  Distributed structure

•  Lack of data

•  Potential for deception

•  Real-world dynamics

•  Unbounded problem space

•  Integration of additional data sets

•  Review of data set and fusion entity-association database updates since previous milestone
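As one hedged illustration of this output touch point, a bounded slice of the entity-association network, with its supporting meta-information, might be rendered into a consumable presentation artifact. The plain-text format, function name, and sample identifiers below are assumptions for the sketch:

```python
# Illustrative sketch of Touch Point 6: export a bounded network slice and its
# source meta-information as a simple presentation artifact.
def presentation_summary(entities, associations, source_meta):
    lines = ["Key entities:"]
    for entity in entities:
        lines.append(f"  - {entity} (source: {source_meta.get(entity, 'unknown')})")
    lines.append("Associations:")
    for a, b in associations:
        lines.append(f"  - {a} <-> {b}")
    return "\n".join(lines)

summary = presentation_summary(
    ["actor-1", "facility-7"],
    [("actor-1", "facility-7")],
    {"actor-1": "HUMINT report 12"},
)
print(summary)
```

A richer implementation would draw on the drill-down features listed above (update logs, weighting, pedigree) so the artifact carries the meta-information its consumers need to calibrate trust.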

16.5  CONCLUSION

The case study presented in this chapter describes the high-level challenges and system capabilities relevant to supporting human operators in fusion systems. These examples were presented within the context of developing a hard-soft fusion system to aid intelligence analysis; however, the techniques used to analyze the domain, identify the challenges, and posit design requirements are generalizable. Similarly, the process of defining touch points through which humans (and human organizations) interact with data fusion systems may be generalized from the example presented here. Our process involved three phases.

First, an extensive literature review was used to identify common stages of intelligence analysis and the factors that commonly add complexity to the task(s) facing the analyst. This review formed the foundation for understanding the role of humans in a distributed data fusion system, and could be expanded to include a review of work products and system or process documentation, ethnographic observations, study of artifacts in the work domain, structured interviews, and other knowledge elicitation activities (as formalized in Cognitive Systems Engineering practice), as well as a domain-specific review of the research literature.

Second, fusion system capabilities were identified that could support analysts in situations where factors that commonly contribute to complexity are present. In this phase, our generalized process rapidly focuses a potentially broad analysis on the areas of greatest need and identifies where more design (and development and evaluation) effort is required. Finally, by overlaying this capabilities mapping onto the common stages of analysis, recommended touch points were identified that allow analysts to best leverage the fusion system capabilities throughout their analysis. While further effort is still required to refine the capabilities mapping and to define the specifics of each interaction touch point, focusing on these critical interactions facilitates the calibration of analyst perceptions to match the fusion system's true capabilities, which leads to appropriate reliance and, in turn, successful integration into analyst workflows. This critical final step of the process focuses on where and how human interaction with the data fusion system will result in overall improvement in unified human-system performance. Explicit definition of the touch points (and where they are required) allows design and development investment to address specific human-related challenges, whether the human is a sensor, in the loop, or a consumer of fusion products. The same process applies to the socio-organizational challenges inherent in distributed human/automated systems (see [Pfautz and Pfautz 2008] for a treatment of the different approaches to analyzing socio-organizational challenges), as touch points can be considered not only between the human and the fusion system but also among humans (e.g., a human operating in the loop with a data fusion system may report on confidence to the decision-maker consuming fusion products).
Clearly, the role of humans in data fusion cannot be trivialized nor simplified—the deep complexity in human-system interaction should and must guide the design and development of fusion systems.

ACKNOWLEDGMENT

Authors Jenkins and Bisantz were supported by Army Research Office MURI grant W911NF-09-1-0392 for “Unified Research on Network-based Hard/Soft Information Fusion” for the conduct of this work.

REFERENCES

Bisantz, A. M. and C. M. Burns, eds. 2008. Applications of Cognitive Work Analysis. Boca Raton, FL: CRC Press.

Bisantz, A. M., R. Stone, J. Pfautz, A. Fouse, M. Farry, E. M. Roth, et al. 2009. Visual representations of meta-information. Journal of Cognitive Engineering and Decision Making, 3(1):67–91.

Bisantz, A. M. and E. M. Roth. 2008. Analysis of cognitive work. In: Reviews of Human Factors and Ergonomics, Boehm-Davis, D.A., ed. Santa Monica, CA: Human Factors and Ergonomics Society, pp. 1–43.

Blasch, E. and S. Plano. 2002. JDL Level 5 fusion model: User refinement issues and applications in group tracking. Proceedings of the SPIE 2002, 4729(Aerosense):270–279.

Cook, M.B. and H. S. Smallman. 2008. Human factors of the confirmation bias in intelligence analysis: Decision support from graphical evidence landscapes. Human Factors: The Journal of the Human Factors and Ergonomics Society, 50:745–754.

Crandall, B., G. A. Klein, and R. R. Hoffman. 2006. Working Minds: A Practitioner’s Guide to Cognitive Task Analysis. Cambridge, MA: The MIT Press.

Drucker, S.M. 2006. Coping with information overload in the new interface era. In the Proceedings of the Computer-Human Interaction (CHI) 2006 Workshop “What is the Next Generation of Human Computer Interaction?” Montreal, Quebec, Canada.

Elm, W. et al. 2004. Designing support for intelligence analysis. Presented at the 48th Annual Meeting of Human Factors and Ergonomics Society. New Orleans, LA.

Elm, W. et al. 2005. Finding decision support requirements for effective intelligence analysis tools. Human Factors and Ergonomics Society Annual Meeting Proceedings, 49:297–301.

Endsley, M. R. 1996. Automation and situation awareness. In Automation and Human Performance: Theory and Applications, Parasuraman, R. and Mouloua, M., eds. Mahwah, NJ: Lawrence Erlbaum Associates.

Flynn, M. T., M. Pottinger, and P. D. Batchelor. 2010. Fixing intel: A blueprint for making intelligence relevant in Afghanistan. Working Papers. Washington, DC: Center for a New American Security. http://www.cnas.org/node/3924 (accessed June 2011).

FM 3-24. 2006. Counterinsurgency. Washington, DC: Headquarters, Department of the Army, Field Manual (FM) No. 3-24. http://www.fas.org/irp/doddir/army/fm3–24.pdf (accessed June 2011).

Greitzer, F. L. 2005. Methodology, metrics and measures for testing and evaluation of intelligence analysis tools. Technical Report PNWD-3550. Richland, WA: Battelle-Pacific Northwest Division.

Grossman, J. B., D. D. Woods, and E. S. Patterson. 2007. Supporting the cognitive work of information analysis and synthesis: A study of the military intelligence domain. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 51(4):348–352.

Hall, D.L., M. McNeese, and J. Llinas. 2008. A framework for dynamic hard/soft fusion. In 11th International Conference on Information Fusion. Cologne, Germany: Proc. ICIF.

Hardin, R. 2001. Conceptions and explanations of trust. In Trust in Society, Cook, K.S., ed. New York: Russell Sage Foundation, pp. 3–39.

Heuer, R. J. J. 1999. The Psychology of Intelligence Analysis. Washington, DC: Center for the Study of Intelligence.

Hewett, T. et al. 2005. An Analysis of Tools That Support the Cognitive Processes of Intelligence Analysis. Philadelphia, PA: Drexel University.

Hollnagel, E. and D. D. Woods. 2005. Joint Cognitive Systems: Foundations of Cognitive Systems Engineering. Boca Raton, FL: Taylor & Francis.

Hughes, F. J. and A. Schum. 2003. Preparing for the future of intelligence analysis: Discovery-proof-choice. The art and science of the process of intelligence analysis. Joint Military Intelligence College.

Hutchins, S. G., P. L. Pirolli, and S. K. Card. 2004. A new perspective on use of the critical decision method with intelligence analysts. Ninth International Command and Control Research and Technology Symposium. San Diego, CA: Space and Naval Warfare Systems Center.

Hutchins, S. G., P. L. Pirolli, and S. K. Card. 2006. What makes intelligence analysis difficult? A cognitive task analysis of intelligence analysts. In Expertise out of Context: Proceedings of the Sixth International Conference on Naturalistic Decision Making, Hoffman, R.R., ed. New York: Lawrence Erlbaum Associates, pp. 281–316.

IW JOC 9/07. 2007. Irregular Warfare (IW) Joint Operating Concept (JOC), Department of Defense, Editor U.S. Department of Defense, Irregular Warfare (IW) Joint Operating Concept (JOC), Version 1.0, September 11, 2007. http://www.fas.org/irp/doddir/dod/iw-joc.pdf (accessed on June 2011).

Jenkins, M. P., G. Gross, A. M. Bisantz, and R. Nagi. 2011. Towards context-aware hard/soft information fusion: Incorporating situationally qualified human observations into a fusion process for intelligence analysis. Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2011 IEEE First International Multi-Disciplinary Conference on Situation Awareness and Decision Support. Miami Beach, FL, pp.74–81.

Jian, J.-Y. et al. 1997. Foundations for an Empirically Determined Scale of Trust in Automated Systems. Buffalo, NY: State University of New York at Buffalo, Center of Multisource Information Fusion, p. 48.

Johnston, R. 2005. Analytic Culture in the US Intelligence Community: An Ethnographic Study. Washington, DC: U.S. Government Printing Office.

Kahneman, D., P. Slovic, and A. Tversky. 1982. Judgment under Uncertainty: Heuristics and Biases. New York: Cambridge University Press.

Kent, S. 1965. Special Problems of Method in Intelligence Work. Strategic Intelligence for American World Policy. Hamden, CT: Archon Books.

Laplante, P. 2009. Requirements Engineering for Software and Systems. Redmond, WA: CRC Press.

Larson, E. V. et al. 2008. Assessing Irregular Warfare: A Framework for Intelligence Analysis. Santa Monica, CA: RAND Corporation.

Lee, J. D. 2008. Review of a pivotal human factors article: “Humans and automation: Use, misuse, disuse, abuse.” Human Factors: The Journal of the Human Factors and Ergonomics Society, 50(3):404–410.

Mangio, C. A. and B. J. Wilkinson. 2008. Intelligence analysis: Once again. Interim report. Mason, OH: SHIM Enterprise Inc.

MSC 8/06. 2006. Multi-Service Concept for Irregular Warfare. U.S. Marine Corps Combat Development Command, Quantico, VA and U.S. Special Operations Command Center for Knowledge and Future, MacDill AFB, FL, August 2006. https://www.mccdc.usmc.mil/CIW/ER/Multi%20Service/Multi-Service%20Concept%20for%20Irregular%20Warfare%20-%20DistributionC.pdf (accessed June 2011).

Parasuraman, R. and V. Riley. 1997. Humans and automation: Use, misuse, disuse, abuse. Human Factors: The Journal of the Human Factors and Ergonomics Society, 39(2):230–253.

Patterson, E., E. M. Roth, and W. Woods. 2001a. Predicting vulnerabilities in computer-supported inferential analysis under data overload. Cognition, Technology & Work, 3:224–237.

Patterson, E. S. et al. 2001b. Using cognitive task analysis (CTA) to seed design concepts for intelligence analysts under data overload. Human Factors and Ergonomics Society Annual Meeting Proceedings, 45:439–443.

Patterson, E. S. et al. 2001c. Aiding the Intelligence Analyst in Situations of Data Overload: From Problem Definition to Design Concept Exploration. Columbus, OH: Ohio State University.

Pfautz, J., A. Fouse, K. Shuster, A. Bisantz, and E. M. Roth. 2005a. Meta-information visualization in geographic information display systems. Digest of Technical Papers—SID International Symposium, 2nd edn. Boston, MA: Society for Information Display.

Pfautz, J. and S. Pfautz. 2008. Methods for the analysis of social and organizational aspects of the work domain. In Cognitive Work Analysis: Current Applications and Theoretical Challenges, Bisantz, A. and Burns, C., eds. Boca Raton, FL: CRC Press.

Pfautz, J., E. M. Roth, A. M. Bisantz, A. Fouse, S. Madden, and T. Fichtl. 2005b. The impact of meta-information on decision-making in intelligence operations. Proceedings of the Human Factors and Ergonomics Society 49th Annual Meeting. Santa Monica, CA.

Pfautz, J. et al. 2006. Cognitive complexities impacting army intelligence analysis. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 50(3):452–456.

Pfautz, J., A. Fouse, M. Farry, A. Bisantz, and E. Roth. 2007. Representing meta-information to support C2 decision making. In Proceedings of the International Command and Control Research and Technology Symposium (ICCRTS ’07). Newport, RI.

Pirolli, P. L. 2006. Assisting People to Become Independent Learners in the Analysis of Intelligence. Palo Alto, CA: Palo Alto Research Center, Inc., p. 100.

Pirolli, P. L. and S. K. Card. 2005. The sensemaking process and leverage points for analyst technology as identified through cognitive task analysis. In International Conference on Intelligence Analysis. McLean, VA.

Potter, S. S., W. C. Elm, and J. W. Gualthieri. 2006. Making sense of sensemaking: Requirements of a cognitive analysis to support C2 decision support system design. Pittsburgh, PA: ManTech SMA, Cognitive Systems Engineering Center.

Powell, G. M. 2006. Understanding the role of context in the interpretation of complex battlespace intelligence. In 9th International Conference on Information Fusion. Florence, Italy.

Powell, G. M. et al. 2008. An analysis of situation development in the context of contemporary warfare. In Proceedings of 13th International Command and Control Research and Technology Symposium. Bellevue, WA.

Roth, E. M., J. Pfautz, S. M. Mahoney, G. M. Powell, E. C. Carlson, S. L. Guarino, T. C. Fichtl, and S. S. Potter. 2010. Framing and contextualizing information requests: Problem formulation as part of the intelligence analysis process. Journal of Cognitive Engineering and Decision Making, 4(3):210–239.

Tam, C. K. 2009. Behavioral and psychosocial considerations in intelligence analysis: A preliminary review of literature on critical thinking skills. Interim report, May 2008–October 2008. Report ID: AFRL-RH-AZ-TR-2009-0009. Mesa, AZ: Air Force Research Laboratory, Human Effectiveness Directorate, p. 17. http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA502215 (accessed June 2011).

Taylor, S. M. 2005. The several worlds of the intelligence analyst. In International Conference on Intelligence Analysis. McLean, VA: MITRE Corporation.

Trent, S. A., E. S. Patterson, and D. D. Woods. 2007. Challenges for cognition in intelligence analysis. Journal of Cognitive Engineering and Decision Making, 1:75–97.

Vicente, K. J. 1999. Cognitive Work Analysis. Mahwah, NJ: Erlbaum.

Zelik, D. J., E. S. Patterson, and D. D. Woods. 2007. Judging sufficiency: How professional intelligence analysts assess analytical rigor. Human Factors and Ergonomics Society Annual Meeting Proceedings, 51:318–322.
