13

Service-Oriented Architecture for Human-Centric Information Fusion

Jeff Rimland

CONTENTS

13.1  Introduction

13.2  Participatory Sensing and Sensor Webs

13.2.1  Participatory Sensing Campaigns

13.2.1.1  MobileSense

13.2.1.2  PEIR

13.2.1.3  “Voluntweeters”

13.2.1.4  DARPA Network Challenge

13.2.2  Sensor Webs

13.3  Service-Oriented Fusion Architecture

13.3.1  Service-Oriented Fusion Pyramid

13.3.1.1  Low-Level Operations

13.3.1.2  Composite Operations

13.3.2  High-Level Assessments

13.4  Hybrid Sensing/Hybrid Cognition over SOA

13.5  Conclusion

Acknowledgment

References

13.1  INTRODUCTION

Information is currently undergoing a paradigm shift that is radically changing how it is sensed, transmitted, processed, and utilized. One of the primary driving forces in this shift is the transformation of mobile device usage. The new mobile device user is an amazingly capable hybrid system of human senses, cognitive powers, and physical capabilities along with a suite of powerful physical sensors including a high-definition (HD) video/still camera, Global Positioning System (GPS) positioning, and multi-axis accelerometers. Additionally, these devices are linked to the “hive mind” (Kelly 1996) of various social networks and the distributed power of ever-growing open-source information provided by nearly countless applications that can funnel geospatially and temporally appropriate bits of information to the user on a high-resolution display or through high-fidelity audio. The potential information gathering and processing power of massive social networks connecting these human/machine hybrids that we call “mobile device users” is unprecedented. However, advances in architecture and infrastructure are required to fully realize this potential. The paradigm of service-oriented architecture (SOA) is evolving in exciting and largely unanticipated ways due to the convergence of these factors.

In O’Reilly and Battelle (2009), Tim O’Reilly discusses the potential for “collective intelligence” to exist in the World Wide Web, but states that the current web is somewhat like a newborn baby—having the basic facilities necessary to grow into an intelligent and conscious entity yet still “awash in sensations, few of which she understands.” This analogy can also be applied to human-centric information fusion, which in many ways is a part of the second-generation “Web 2.0” that O’Reilly discusses.

The information fusion community has recently been shifting its emphasis toward network-centric and human-centric operations (see Castanedo et al. 2008, Fan et al. 2010, Keisler 2008, Kipp 2006). Distributed human-centric information fusion is proving indispensable in a broad variety of civilian and military applications. Team-based tasks that were formerly limited by geographic distance, siloed information, and inability to share mental models present great opportunities for a hybrid-sensing/hybrid-cognition model. There has already been extensive research into “participatory sensing” campaigns that facilitate decentralized collaboration for disaster response, counterinsurgency efforts, and “citizen science.” However, the potential for true collective intelligence remains largely unrealized. Addressing this need by combining the paradigms of participatory sensing and distributed sensor networks over an evolving human-centric SOA (Figure 13.1) is the focus of this chapter.


FIGURE 13.1 The intersection of distributed sensors, participatory sensing, and service-oriented architecture.

13.2  PARTICIPATORY SENSING AND SENSOR WEBS

Participatory sensing (Burke et al. 2006) enables the observation of both people-centric and environment-centric (Christin et al. 2011) information by leveraging the ubiquitous and increasingly powerful mobile devices already being carried by billions of people, the high-speed/high-reliability voice and data networks already in place to support these devices, and the uncanny human ability to capture, isolate, annotate, and transmit the data of interest.

Although this evolving technique of distributed “soft” sensing holds great promise, the field is still in its infancy and there are many challenges to planning and executing a successful participatory sensing campaign. There is a great deal of concern among participants and other citizens that the Orwellian mass surveillance mechanism of “Big Brother” (Orwell 1977) will effectively be realized by “Four Billion Little Brothers” (Shilton 2009) armed with smartphones. Without a mechanism to address this fear, the number of participants and the type of information shared will be limited. Additionally, new methods are needed for determining the veracity, objectivity, and “observational sensitivity” of the source (Schum and Morris 2007). Without sufficient quality control of the observers and the data provided, campaigns can be severely compromised through either intentional deception or simple lack of competence on the part of the observer. Incentivizing participation (Jiang 2010) is another challenge, one that is both especially difficult and especially critical for campaigns that require longitudinal data. The following section provides a sample of campaigns from a variety of disciplines.

13.2.1  PARTICIPATORY SENSING CAMPAIGNS

13.2.1.1  MobileSense

The MobileSense campaign (Lester et al. 2008) conducted by researchers at the University of Washington attempted to gather GPS, barometric pressure, and accelerometer data and use it to infer the type of activity the subject was performing at a given time. Additionally, the researchers used Geographical Information System (GIS) data layers in conjunction with the information gathered from each subject to determine specific locations where the subject tended to dwell for various periods of time. This fusion of data allowed inferences to be made regarding where the subject lived, worked, shopped, and socialized.

They collected data from 53 test subjects over a period of 1 week using a device called the Mobile Sensing Platform (MSP) (MSP Research Challenge n.d.), which is a proprietary device that is worn using a belt clip. The subjects also manually recorded their activities every hour. For privacy, the users were allowed to switch the device on and off at will, resulting in a per-subject average of 53 h of data collected per week. Requiring the user to switch off the entire device for privacy is less than optimal in terms of user convenience and data gathering. In Christin et al. (2011), more advanced privacy schemes such as k-anonymity, identity blurring, and user-configurable granularity are introduced.

MobileSense is a useful early example of combining data gathered via participatory sensing with open-source GIS. Fusing a priori information with sensed data in this manner provides advantages that will be discussed in later sections.

13.2.1.2  PEIR

The Personal Environmental Impact Report (PEIR) (Mun 2009) is a long-running participatory sensing campaign led by the Center for Embedded Networked Sensing (CENS) at UCLA. PEIR uses a variety of GPS-enabled mobile devices to provide assessments of both how the individual impacts the environment and how much exposure the individual has had to environmental threats and hazards. Although the system detects a variety of factors, the impact contributors can be summarized as carbon impact and sensitive site impact, and the environmental hazards can be summarized as smog exposure and (the somewhat controversial) fast food exposure.

The project has evolved somewhat since it entered “pilot production mode” in June 2008, but there are several aspects of this project that can serve as lessons for other participatory sensing campaigns. Rather than forcing the user to completely switch the sensor off when they are in a private location (as MobileSense does), PEIR instead allows the user to select locations that they would like to obscure from the public report. Additionally, when the user specifies that a location is private, the system uses an algorithm to create an alternate simulated path to avoid raising suspicion of unusual/illicit activity during the “blanked out” time period. While this synthetic generation of location data might not be appropriate for all participatory sensing projects, it adds to the user’s convenience and therefore reduces the chance that they will drop out of the campaign.

Additionally, the system lets users review their PEIR (and all contributing data) before it is uploaded to the server. After viewing the data, they may selectively delete any personal information that they would not like to share before uploading the remainder to the CENS server. If they choose, users may also share this information directly with their social networks via a Facebook application.

Where some participatory sensing campaigns attempt to gather several modalities of data by equipping the users with advanced mobile sensor systems consisting of several integrated devices (Ishida et al. 2008), PEIR takes the approach of gathering only GPS data directly from the individual and using open-source information to obtain a myriad of other details related to that location. Rather than attaching a smog sensor to the mobile device and attempting to measure levels directly, PEIR uses the Emissions Factors Model (EMFAC) developed by the California Air Resources Board (CARB) to determine an individual’s smog exposure based on location at a given time. Although not always practical, this approach improves scalability by reducing dependency on nonstandard devices, reduces battery drain, and improves efficiency by offloading processing tasks to a remote server that is far more powerful than the mobile device.

13.2.1.3  “Voluntweeters”

While many crowdsourcing efforts are the result of a top-down, centralized campaign to achieve a specific purpose or collect specific data, the massive efforts of the “digital volunteers” who responded to the devastation caused by the 7.0 magnitude earthquake near the capital city of Port-au-Prince, Haiti, on January 12, 2010, were largely self-organized and startlingly effective (Starbird 2011).

The importance of incentives is perhaps most relevant in this category of crowdsourcing. The participants were taking part in these activities to save their own lives or the lives of others, or to determine whether their loved ones were safe.

Because of its low bandwidth and battery requirements, ease of use, ubiquity of compatible devices, and connection to a publicly searchable timeline, the micro-blogging service Twitter became the platform of choice. With the help of a few facilitating organizations such as the CrisisCamp initiative (crisiscommons.org) and the ATLAS Institute (Starbird 2011), an augmented Twitter syntax called “Tweak the Tweet” (TtT) was designed to leverage the existing Twitter hashtag capabilities to further reduce some of the ambiguity inherent in 140 characters of free-form text.
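
As a sketch of how such an augmented hashtag syntax reduces ambiguity, the snippet below parses tag-prefixed fields out of a tweet. The tag names and message here are invented for illustration; the actual TtT specification defines its own tag vocabulary.

```python
import re

# Illustrative TtT-style tags; the real "Tweak the Tweet" syntax defines its own set.
TAGS = {"need", "have", "loc", "name", "contact"}

def parse_ttt(tweet):
    """Split a tweet into {tag: value} pairs keyed by TtT-style hashtags."""
    record = {}
    # Each hashtag captures the free text that follows it,
    # up to the next hashtag or the end of the tweet.
    for tag, value in re.findall(r"#(\w+)\s*([^#]*)", tweet):
        if tag.lower() in TAGS:
            record[tag.lower()] = value.strip()
    return record

msg = "#haiti #need water and medical supplies #loc Jacmel #contact @relief_org"
print(parse_ttt(msg))
# {'need': 'water and medical supplies', 'loc': 'Jacmel', 'contact': '@relief_org'}
```

Even this toy parser turns 140 characters of free text into machine-routable fields, which is precisely what made the real syntax useful to volunteer translators and responders.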

If the Twitter users on the ground in Haiti were simply uploading messages to the public Twitter feed, that may have been somewhat helpful. However, the real utility came when social networks formed between people in need of assistance, people with information, people seeking information, and volunteers capable of translating the requests between both multiple languages and multiple networking methods.

The success of this grassroots effort shows the importance of incentive and the power of platforms that support and facilitate rapid self-organization of social networks.

13.2.1.4  DARPA Network Challenge

Due to the diversity of the goals, methods, and scopes of most participatory sensing campaigns, it is generally very difficult to obtain “apples-to-apples” comparisons for the test and evaluation (T&E) of these campaigns. The Defense Advanced Research Projects Agency (DARPA) Network Challenge (DARPA Report 2010) is one of the few instances where a direct quantitative comparison is possible between various campaigns that are competing to accomplish the same goal.

In December 2009, DARPA conducted an experiment that tested the combination of crowdsourcing, social network utilization, and viral marketing capability in a race to most quickly solve the distributed geolocation problem of finding ten 8 ft red balloons tethered at various locations throughout the United States. Locating the red balloons could be considered analogous to detecting the start of an epidemic, a distributed terrorist attack, or a massive-scale cyber attack; understanding the key elements of a successful approach to this challenge is therefore highly important. Many lessons can be learned from the techniques that the top-performing teams used to win the challenge.

The winning MIT team’s strategy began with incentivizing participation. They used the $40,000 in potential prize money to construct a recursive incentive structure that rewarded not only those participants who actually spotted balloons, but also those who connected balloon spotters to the team (see Figure 13.2). This incentive structure rewards not only individuals who are good at performing the end task, but also those who are good at creating network connections that improve the overall odds of success at the given task. Since mobilization time was of the essence and there was insufficient time to develop an advanced machine-learning system to determine which reports were valid and which were false, the winning MIT team relied on human reasoning over the data provided by the network (Tang et al. 2011). This exemplifies the effectiveness of modular hybrid-sensing, hybrid-cognition models (Rimland 2011) that enable an ad hoc combination of human and machine sensing as well as human and machine processing of the data. The hybrid approach used computerized Internet Protocol (IP) tracing to filter out obvious false reports (e.g., Pennsylvania IP addresses used to upload pictures of a balloon in Texas) and Web/GIS-assisted human analysis to verify that the businesses, weather conditions, roadways, etc., shown in the reported balloon pictures were consistent with real-world features.


FIGURE 13.2 Example of MIT’s recursive incentive structure.
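
The recursive incentive structure can be made concrete with a short calculation. In the widely reported version of the MIT scheme, the balloon spotter received $2000 and each person up the chain of invitations received half the amount of the person below them; the sketch below reconstructs that halving scheme for illustration and is not an authoritative specification of the team's payout rules.

```python
def payouts(chain, finder_reward=2000.0):
    """Reward everyone on an invitation chain: chain[0] spotted the balloon,
    chain[1] invited chain[0], and so on. Each step up the chain earns half
    the amount of the step below it, so the total paid per balloon is bounded
    by 2 * finder_reward no matter how long the chain grows."""
    result, reward = {}, finder_reward
    for person in chain:
        result[person] = reward
        reward /= 2
    return result

chain = ["spotter", "recruiter", "recruiter_of_recruiter"]
print(payouts(chain))
# {'spotter': 2000.0, 'recruiter': 1000.0, 'recruiter_of_recruiter': 500.0}
```

The geometric halving is what makes the scheme affordable: ten balloons at a maximum of $4000 each fit within the $40,000 prize while still rewarding arbitrarily deep recruitment chains.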

The tenth place iSchools team relied on open-source intelligence methods of searching cyberspace for potential leads and then confirming those leads by quickly activating the “hive mind” (Kelly 1996) of its extensive social network across Twitter, Facebook, and e-mail in an attempt to find direct confirmation of the observation.

There were many lessons about participatory sensing learned from the Network Challenge. The most successful teams were those that incentivized effectively, relied on their specific advantages (e.g., mass media coverage, wide geographical distribution, etc.), and—possibly most importantly—relied on the strengths of social networks, open web information and tools, and the power of an individual with a mobile device who is at the right place at the right time.

13.2.2  SENSOR WEBS

As sensors and networking capability improve in both performance and affordability, the interconnected network of sensors, or sensor web, is proving invaluable for tasks ranging from weather prediction (Hart and Martinez 2006) to tracking the behavior of endangered species (Hu et al. 2005). While the idea of using a distributed network of low-cost sensors instead of sending humans into conditions that may be hazardous or difficult/expensive to access is promising, there are several challenges.

Sensors record data in a wide variety of formats. Many of these are proprietary and designed to work with specific software tools. While this is not a problem in the case of a single organization accessing a small number of sensors, it makes it difficult or impossible to construct a network of distributed heterogeneous sensors or to share that sensor information between multiple organizations. Additionally, semantic standards describing exactly what is being observed by the sensors, as well as metadata related to accuracy, timeliness, and provenance, are required if the sensors are to be integrated in an extensible manner.

To address these issues, the Open Geospatial Consortium (OGC) has spearheaded improvements in Sensor Web Enablement (SWE), which is a series of standards and best practices for facilitating the connectivity and interoperability of heterogeneous sensors and devices that are connected to a network.

Among these innovations, Transducer Markup Language (TML) is of particular interest to the information fusion community. This XML-based language facilitates the storage and exchange of information between sensor systems—which may include actual sensors as well as transmitters, receivers, actuators, or even software processes. The data can be exchanged in either real-time streaming mode or archived form, and real-time data can be exchanged in various chunk sizes over multiple protocols (e.g., TCP or UDP) depending on network bandwidth and data integrity requirements. TML provides the capability to capture intrinsic (e.g., physical hardware) specifications as well as extrinsic (e.g., environmental) metadata that may be of interest to consumers of the data.

Sensor Planning Service (SPS) is another SWE tool that applies SOA principles to the identification, use, and management of sensor systems based on sensor availability and feasibility for successful completion of a specific task.

Other OGC standards for SWE include

1.  Observations and Measurements (O&M)

2.  Sensor Model Language (SensorML)

3.  Sensor Observation Service (SOS)

4.  Sensor Alert Service (SAS)

5.  Web Notification Services (WNS)

The benefit of these openly available, consortium-designed standards is that they provide organizations with the ability to web-enable all manner of sensors, collections of sensors, sensor platforms, and even human observers (Botts et al. 2007).

For the information fusion community, these advances in SWE translate into improved timeliness, coverage, metadata, and consistency. Perhaps most importantly, sensor web technology provides the potential for sensor tasking based on capability and not simply modality. For example, a weather forecaster might need to know if it is currently snowing in a given location. In some instances, a human observer will be the most cost-effective and accurate way of making this determination. In other instances (e.g., dangerous remote locations), a persistent physical sensor might be the best tool for the job. In either case, the consumer of the data requires a certain degree of precision, timeliness, and credibility, but is often not concerned with the modality of the sensor itself.
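
Tasking by capability rather than modality can be sketched as a simple matching problem: a consumer states the precision, timeliness, and credibility it requires, and a broker selects any provider, human or physical, that satisfies them. The capability records and field names below are illustrative assumptions, not part of any OGC schema.

```python
# Hypothetical capability records; field names are illustrative only.
providers = [
    {"id": "human_observer_17", "modality": "human",    "precision": 0.7,
     "timeliness_s": 300, "credibility": 0.8, "cost": 1.0},
    {"id": "weather_station_3", "modality": "physical", "precision": 0.95,
     "timeliness_s": 60,  "credibility": 0.9, "cost": 5.0},
]

def select(providers, min_precision, max_latency_s, min_credibility):
    """Return the cheapest provider meeting the requested capability thresholds,
    regardless of whether it is a human observer or a physical sensor."""
    eligible = [p for p in providers
                if p["precision"] >= min_precision
                and p["timeliness_s"] <= max_latency_s
                and p["credibility"] >= min_credibility]
    return min(eligible, key=lambda p: p["cost"]) if eligible else None

# "Is it snowing there?" Modest precision suffices, so either modality qualifies
# and the broker simply picks the cheapest eligible provider.
print(select(providers, 0.6, 600, 0.75)["id"])  # human_observer_17
```

Raising the precision requirement to 0.9 would instead select the weather station; the consumer's query never mentions modality at all, which is the point of the sensor web abstraction.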

13.3  SERVICE-ORIENTED FUSION ARCHITECTURE

For several years, SOAs have been praised for providing loosely coupled and interoperable software services for accomplishing system requirements and goals. In the conventional SOA sense, these services are typically specific software functionalities made discoverable and accessible via a network. The service advertises its functionality, and then returns data and/or performs a task when called with the appropriate parameters. In a hybrid-sensing/hybrid-cognition framework, this paradigm is extended to not only allow software routines, but also sensors, sensor platforms (e.g., robots and Unpiloted Air Vehicles [UAVs]), and even humans with mobile devices to be queried and tasked in a manner analogous to software resources in conventional SOA.

This approach can be considered both a logical extension and a somewhat radical departure from existing methods. It is a logical extension in that it follows the SOA principle of encapsulating the inner workings of the service and selecting it based on its availability and capability. It is a radical departure in that it could result in a condition where human beings are receiving instructions from a software application. Although this may sound like cause for alarm, I will explain in further sections why this too is a natural progression of SOA.

The service-oriented computing (SOC) paradigm asserts that services are the fundamental and most basic elements with which applications and software systems may be created. Although these methodologies are conventionally applied to situations in which the service (or system of services) is provided purely by software, there is nothing about SOC or SOA that mandates this. In fact, when viewed in the context of creating systems for improving situational awareness or quality of knowledge via distributed hard and soft information fusion, these conventional machine-centric visions of SOC/SOA can be a limiting factor.

13.3.1  SERVICE-ORIENTED FUSION PYRAMID

When discussing the roles and components of conventional SOA, a pyramid diagram is often used (Georgakopoulos and Papazoglou 2008). The bottom third of the pyramid represents the basic services provided by the architecture and the low-level “plumbing” tasks of the system including publication, discovery, selection, and binding. The middle or composition layer of the pyramid shows tasks related to the coordination, performance/integrity monitoring, and combination of various lower-level services to complete the task at hand. The top or management layer of the pyramid shows high-level concepts such as performance assurance and top-level status of the system. While this view is useful for visualizing the operational structure of conventional SOA, it does not offer insight into the utility of SOA for human-centric information fusion. The service-oriented fusion pyramid (SOFP) attempts to address this (see Figure 13.3).


FIGURE 13.3 A perspective on SOA for human-centric information fusion.

In the SOFP, the levels of data fusion (as outlined by the Joint Directors of Laboratories [JDL] Fusion Model [Hall and Llinas 1997]) as well as relevant human factors are integrated with the corresponding levels of the SOA pyramid. The three levels of the SOFP are low-level operations, composite operations, and high-level assessments.

13.3.1.1  Low-Level Operations

The bottom level is similar to existing SOA pyramids in that it contains service publication, discovery, and selection. These are fundamental SOA principles that make it possible for stateless, loosely coupled services to advertise their capabilities, locate other useful services, and perform basic communication. In conventional, first-generation SOA, services are described by the Web Services Description Language (WSDL), directory lookup is facilitated by the Universal Description, Discovery, and Integration (UDDI) framework, and the services eventually communicate via the XML-based Simple Object Access Protocol (SOAP) (Walsh 2002). Although it is beyond the scope of this chapter, it should be noted that the newer representational state transfer (REST) architecture eliminates much of this complexity. Fielding and Taylor (2002) is recommended reading for details of REST. Polling data for Quality of Service (QoS) analysis is also performed at this level.

In addition to these SOA capabilities, the SOFP adds human observations, sensor data readings, and tasking at the bottom level of the pyramid. In the JDL data fusion model, data preprocessing and entity assessment correspond to levels 0 and 1, respectively. Tasking of sensors is typically considered a process refinement task that corresponds to JDL level 4. In one regard, tasking can be considered a higher-level operation because it relies on broad understanding of the system from a composite and multifaceted perspective (Bedworth and O’Brien 2000). However, in the context of the service-oriented model, physical sensors and even human observers can be considered as a service provider—although the ramifications of Humans as a Service (HaaS) require additional exploration (in a later section). From this perspective, tasking can often be a decentralized operation that relies on the localized needs of other services and entities in the system, as opposed to relying purely on high-level dictation from a centralized tasking mechanism.

Decentralized control has many benefits in a distributed heterogeneous system. In addition to increased robustness due to removing single points of failure and the performance advantage of parallel processing, decentralized control based on local inputs has been shown to have excellent potential for finding good solutions to problems that are otherwise considered intractable. For example, consider the Ant Colony Optimization (ACO) for solving the travelling salesman problem. In Dorigo et al. (2006), control in computational systems is modeled after control in ant colonies—which relies on both local stimulus (finding food) and localized messaging between ants (by modifying their environment with pheromones, which is known as stigmergy). Much as centralized control of an ant colony would be impossible and even undesirable, the same applies to the evolving paradigm of distributed crowdsourcing and information fusion. That is why tasking is presented as a low-level operation in this framework.
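
A minimal ACO for the travelling salesman problem makes the stigmergy analogy concrete: each ant builds a tour probabilistically, preferring short edges carrying more pheromone, and good tours reinforce their own edges for later ants. The parameter values below are common textbook defaults, not the specific settings of Dorigo et al.

```python
import math, random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ant_colony_tsp(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0,
                   evaporation=0.5, q=1.0, seed=0):
    """Minimal Ant Colony Optimization for the travelling salesman problem.
    Ants choose edges by pheromone level (tau, the stigmergic signal) and
    heuristic desirability (1/distance); each iteration, pheromone evaporates
    and every ant reinforces its tour's edges in proportion to tour quality."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # pheromone on each edge
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = list(unvisited)
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in cand]
                j = rng.choices(cand, weights=weights)[0]
                tour.append(j)
                unvisited.remove(j)
            tours.append(tour)
        for row in tau:                      # evaporation
            for j in range(n):
                row[j] *= (1.0 - evaporation)
        for tour in tours:                   # pheromone deposit (stigmergy)
            length = tour_length(tour, dist)
            if length < best_len:
                best_tour, best_len = tour, length
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += q / length
                tau[b][a] += q / length
    return best_tour, best_len

# Four cities on a unit square: the optimal tour is the perimeter, length 4.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
tour, length = ant_colony_tsp(dist)
print(round(length, 6))  # 4.0
```

Note that no central controller ever sees the whole search: coordination happens entirely through the shared pheromone matrix, which is the computational analogue of ants modifying their environment.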

13.3.1.2  Composite Operations

Although the previous section extolled the virtues of decentralized control, most systems still require an element of logical hierarchy that can combine and compose basic services into more complex services, coordinate data flow between services, and ensure system integrity. Additionally, a primary task in designing a distributed information fusion system is ensuring that the data is available where it is needed and in a usable format. These are considered composite operations in the SOFP framework.

A meaningful task is seldom performed by a single service. For example, viewing a website and placing an order with an online retailer causes the elaborate orchestration of a multitude of services across multiple domains and corporate boundaries (Mallick et al. 2005). When everything works properly, the customer is presented with a coherent and appealing shopping experience in which product availability, detailed images, suggested complementary purchases, customer reviews, shipping rates, and opportunities to save money (or spend more) through affiliate programs all appear seamlessly. Behind the scenes, however, this requires coordination between product databases, media services, retail supply and delivery channels, and affiliate partners that are often geographically dispersed and have disparate representations of their data.

Another example that more closely reflects current concerns in the information fusion community is the Global Information Grid (GIG) model that the Department of Defense (DoD) has embraced for information superiority (Chang 2007). The next generation of battle applications relies heavily on network-centric and service-centric principles to provide a globally connected, highly secure, and extremely robust method for enabling operations that span multiple domains, varying security levels, and a broad variety of HCI form factors (McNamee 2006). Accomplishing this feat requires extensive orchestration and constant evaluation of system status.

Much like information exchange between various services relies on coordination and composition, information fusion relies on association of data from different sources before algorithms (such as Kalman filters, see Hall and McMullen [2004]) can be used to make predictions about future states. Techniques for data association include Nearest Neighbor (NN), Strongest Neighbor (SN), Probabilistic Data Association (PDA), and Multiple Hypothesis Tracking (MHT) (Mitchell 2007). Additional details on data association and related aspects of data fusion can be found in Hall and Llinas (2001).
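
The simplest of these techniques can be sketched in a few lines: gated nearest-neighbor association pairs each predicted track position with the closest unassigned measurement inside a gate. Plain Euclidean distance is used here for brevity; practical trackers typically gate on a Mahalanobis distance derived from the Kalman filter's innovation covariance.

```python
import math

def nn_associate(tracks, measurements, gate=5.0):
    """Greedy gated nearest-neighbor association: globally sort all
    track/measurement pairs within the gate by distance, then let each
    track claim at most one measurement (and vice versa)."""
    candidates = sorted(
        (math.dist(t_pos, m_pos), ti, mi)
        for ti, t_pos in enumerate(tracks)
        for mi, m_pos in enumerate(measurements)
        if math.dist(t_pos, m_pos) <= gate
    )
    pairs, used_t, used_m = [], set(), set()
    for _, ti, mi in candidates:
        if ti not in used_t and mi not in used_m:
            pairs.append((ti, mi))
            used_t.add(ti)
            used_m.add(mi)
    return pairs

tracks = [(0.0, 0.0), (10.0, 10.0)]
measurements = [(9.5, 10.2), (0.4, -0.3), (30.0, 30.0)]  # third is clutter
print(nn_associate(tracks, measurements))  # [(0, 1), (1, 0)]
```

The clutter measurement falls outside every gate and is left unassociated, which is exactly the behavior the gate is there to provide.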

When making decisions based on fusion of information from a variety of sources, it often becomes necessary to perform adjudication over which sensors, observers, or fused combinations of these are most qualified to deliver accurate assessments. For example, in Tutwiler (2011), Flash LIDAR is fused with mid-wavelength infrared (MWIR) to deliver a product that provides both distance and thermal information about a scene or subject (see Figure 13.4). Under varying conditions and situations, each one of these modalities might prove more effective than the other for tasks such as identification, localization, and tracking. There are a variety of adjudication and voting methods (Parhami 2005) for physical sensors, yet there are very few that account for the introduction of humans as observers. This will be a ripe area for research over the coming years.

In complex systems that combine heterogeneous inputs from a broad variety of geographically distributed sources, providing adequate QoS is vital. It is informative to compare the approach to QoS taken by the SOA community with the T&E metrics that appear in the data fusion literature. Since SOA relies largely on aggregating component services (often from multiple providers) into a more complex composite service, the QoS metrics must also take into account the QoS of the services that it aggregates. This is typically done by looking at the following categories of metrics: (1) provider-advertised metrics, (2) consumer-rated metrics, and (3) observable metrics (Zeng et al. 2007).

Provider-advertised metrics are simply the claims or advertisements made by the service provider. Service cost, for example, is typically a provider-advertised metric. Consumer-rated metrics are based on the feedback and evaluations of past service consumers. This can be thought of as analogous to feedback left by buyers on online auction sites. Ratings for factors such as responsiveness or accuracy of information obtained can be averaged and supplied to future consumers. Finally, observable metrics can be obtained through direct measurement and the application of formulae that are typically specific to the domain in question.
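
A minimal sketch of combining the three categories into a single service score follows. The weights and the assumption that all inputs are already normalized to [0, 1] are illustrative choices, not a standard formula.

```python
def score_service(provider_advertised, consumer_ratings, observed,
                  weights=(0.2, 0.4, 0.4)):
    """Combine the three QoS metric categories into one score in [0, 1].
    All inputs are assumed normalized to [0, 1]; weights are illustrative."""
    w_adv, w_rated, w_obs = weights
    # Consumer-rated component: average of past ratings, 0 if none exist yet.
    rated = sum(consumer_ratings) / len(consumer_ratings) if consumer_ratings else 0.0
    return w_adv * provider_advertised + w_rated * rated + w_obs * observed

# A service advertising 0.9 reliability, rated 0.8 on average, observed at 0.7:
print(round(score_service(0.9, [0.7, 0.9, 0.8], 0.7), 3))  # 0.78
```

Deliberately down-weighting the provider's own claims relative to consumer ratings and direct observation reflects the common-sense ordering of trust among the three categories.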


FIGURE 13.4 Fused LIDAR and MWIR data showing both distance and thermal information. (From Tutwiler, R., Hard sensor fusion for COIN inspired situation awareness, Proceedings of the 14th International Conference on Information Fusion, Chicago, IL, 2011.)

For years, the data fusion community has focused on measuring the quality of a fused information product (and the underlying fusion process) in terms of Measures of Performance (MOPs), Measures of Effectiveness (MOEs), and Measures of Force Effectiveness (MOFEs) (White 1999). MOPs include direct evaluation of performance factors such as

1.  Detection probability

2.  False alarm rate

3.  Location estimate accuracy

4.  Time from transmission to detection
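
These MOPs can be computed directly from detection counts and per-detection measurements, as in the following sketch. The definitions are the standard ones; the variable names and sample numbers are illustrative.

```python
def measures_of_performance(tp, fp, fn, tn, location_errors, detection_delays):
    """Compute basic MOPs from confusion counts (tp/fp/fn/tn) plus lists of
    per-detection location errors and transmission-to-detection delays."""
    return {
        "detection_probability": tp / (tp + fn),  # P(detect | target present)
        "false_alarm_rate": fp / (fp + tn),       # P(alarm | no target present)
        "mean_location_error": sum(location_errors) / len(location_errors),
        "mean_detection_delay_s": sum(detection_delays) / len(detection_delays),
    }

mops = measures_of_performance(tp=90, fp=5, fn=10, tn=95,
                               location_errors=[2.0, 3.0, 1.0],
                               detection_delays=[0.5, 1.5])
print(mops["detection_probability"], mops["false_alarm_rate"])  # 0.9 0.05
```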

MOEs evaluate fusion processes on their capability to contribute to the success of a task or mission. They typically include

1.  Target nomination

2.  Target leakage

3.  Information timeliness

4.  Warning time

MOFEs look at the bigger picture of the performance of a data fusion system as well as the larger force of which it is a part (Hall and Llinas 2001). MOFEs are typically applied to military situations, but have other applications as well.

There is a good deal of literature and research related to QoS in SOA (Oriol 2009) and T&E of data fusion systems (Blasch 2004), but there is little research into the intersection of those areas—which occurs in the middle layer of the SOFP. Prior work in this area is especially sparse regarding human-centric factors.

13.3.2  HIGH-LEVEL ASSESSMENTS

In the JDL data fusion model, levels 2 and 3 refer to situation assessment and threat or impact assessment. While the levels of the JDL model are not necessarily intended as a sequential flowchart, fusion at these higher levels typically requires that information be represented at the feature or entity attribute level as opposed to raw data representations. Additionally, JDL level 4 is a meta-process in which the fusion process itself is evaluated. That is, in the JDL model, the level 4 process monitors the other ongoing fusion processes and seeks to optimize the processing results (e.g., by directing sensors, modulating algorithm parameters, etc.).

In the SOA literature, high-level management tasks include evaluating the system through statistical analysis, delivering notifications upon completion of high-level tasks, and communicating the results of high-level decision making. Since SOA supports open service marketplaces in which service providers can autonomously negotiate with each other to add value or help perform a task, service-level agreements (SLAs) are often provided to facilitate “fair trade” within these marketplaces. The management levels of SOA help to negotiate these agreements.
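In its simplest form, an SLA is a set of agreed QoS bounds against which a provider's measured behavior is checked. The following sketch, with invented field names and thresholds, shows the shape of such a compliance check:

```python
# Illustrative SLA compliance check for an open service marketplace.
# Field names and bounds are hypothetical, not drawn from any standard.

AGREED_SLA = {"max_latency_ms": 200, "min_availability": 0.99}

def sla_satisfied(measured: dict, sla: dict = AGREED_SLA) -> bool:
    """True if the provider's measured QoS meets every agreed bound."""
    return (measured["latency_ms"] <= sla["max_latency_ms"]
            and measured["availability"] >= sla["min_availability"])

# A provider averaging 150 ms latency with 99.5% uptime meets this SLA:
ok = sla_satisfied({"latency_ms": 150, "availability": 0.995})  # True
```

Real SLAs add penalty clauses, measurement windows, and negotiation protocols, but the management layer's role is essentially to evaluate checks like this one and act on the outcome.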

In this highest level of the pyramid, a significant change occurs from the lower levels of SOA and data fusion. At the lower levels, mathematical formulae, pattern matching, and various detection and tracking algorithms can refine signals and give estimates of attributes such as position, identity, and direction/velocity of motion. At the higher levels, shared knowledge and understanding become necessary (Perlovsky 2007). Since software fusion systems lack the natural language capabilities that humans use to exchange understanding and knowledge between individuals, we rely on ontologies and other knowledge representations in an attempt to digitally describe and delineate properties that are often easily and intuitively understood by humans, yet poorly captured by machine representation. The resulting systems often work well for isolated “toy problems” or provide “one-off” solutions, but may prove brittle in real-world applications.
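To make the idea of a machine knowledge representation concrete, here is a toy sketch of the subject-predicate-object triple form used by RDF-style ontologies, with a naive pattern query. The domain vocabulary is invented for illustration and shows exactly the brittleness discussed above: the machine "knows" only what the triples state:

```python
# Toy knowledge representation: subject-predicate-object triples plus a
# wildcard query. The vocabulary here is invented for illustration only.

triples = {
    ("Vehicle", "is_a", "Entity"),
    ("Truck", "is_a", "Vehicle"),
    ("Truck", "has_attribute", "cargo_capacity"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern (None acts as a wildcard)."""
    return [(s, p, o) for (s, p, o) in triples
            if (subject is None or s == subject)
            and (predicate is None or p == predicate)
            and (obj is None or o == obj)]

# "What is a Vehicle?" finds only the explicitly asserted fact about Truck;
# anything a human would infer but that was never encoded is simply absent.
kinds_of_vehicle = query(obj="Vehicle")
```

Production ontology languages (e.g., OWL) add class hierarchies and inference rules, but the gap between encoded triples and human intuition remains the central difficulty.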

13.4  HYBRID SENSING/HYBRID COGNITION OVER SOA

One approach to solving this problem is enlisting a “human-in-the-loop” to provide both information gathering and sense-making contributions to the system. While computers solve certain problems with speed and accuracy far exceeding that of any human, there are also tasks that are still poorly performed by the best sensors, software algorithms, and hardware. These tasks often require fusing multiple senses or applying innate reasoning or understanding.

In the simple act of stepping off a curb to cross a street, the human perceptual and cognitive systems perform an amazing sequence of calculations of which we are completely unaware, except on the relatively rare occasion that conscious intervention is needed. The visual system locates and tracks oncoming and passing traffic, and we maintain a record of each vehicle just long enough for the visual inputs to be corroborated by our auditory system and, finally, by the sense of passing vibration and the occasional rush of air against our skin. Our vestibular and proprioceptive systems maintain awareness of our balance and of the location of each joint as it moves through its range of motion. In robotics, the field of inverse kinematics (Tolani et al. 2000) is dedicated to calculating the proper angles and forces necessary to move robotic limbs into the correct position to perform a given task. Humans do this exquisitely and automatically.
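To appreciate what the machine must compute explicitly, consider the textbook inverse kinematics case of a planar two-link arm: given link lengths and a target point, solve for the joint angles. This is a standard law-of-cosines solution (the "elbow-up" branch), offered as a sketch rather than any particular robot's implementation:

```python
import math

# Standard analytic IK for a planar two-link arm: find joint angles (theta1,
# theta2) placing the end effector at (x, y), given link lengths l1 and l2.

def two_link_ik(x: float, y: float, l1: float, l2: float):
    d2 = x * x + y * y                                   # squared reach
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)  # law of cosines
    if not -1.0 <= cos_elbow <= 1.0:
        return None                                      # target out of reach
    theta2 = math.acos(cos_elbow)                        # elbow joint angle
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

Even this two-joint toy requires explicit trigonometry and an out-of-reach check; the human motor system handles dozens of coupled joints without any conscious calculation at all.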

Aside from advantages in perception, information fusion, and movement dynamics, humans have an amazing ability to make near-instantaneous assessments of current situation status or risk. Computer systems may be able to read license plates or even identify faces with superhuman speed and accuracy, but the best automated systems are still utterly incapable of judging individual intent. When one person is approached by another, perceptual elements and cues are integrated subliminally. We generally do not consciously notice the saccadic eye movements (Pelz et al. 2001) or subtle facial gestures, but we can quickly tell that a person in a crowd recognizes us, or that we have just made a statement that hurts a friend’s feelings, or that someone is about to ask for a favor.

There are also tasks at which computers are at a clear advantage over humans. Performing complex numerical calculations, searching large volumes of text, rapidly matching patterns, and performing certain types of quantitative assessments (e.g., “the vehicle weighs 2819.56 pounds”) are tasks that put computers and physical sensor systems at a clear advantage. However, the most significant task that computers are capable of is facilitating the connection of people in ways that were previously impossible.

Advances in several parallel technologies are now converging in a way that is poised to change how humans approach the most complex and difficult tasks that we undertake. This will happen through the following factors:

1.  The mobile device user, capable of acting as a sensor platform to capture high-definition, high-fidelity digital information, is at the same time able to apply his or her innate human sensing and cognitive abilities to either annotate the captured digital information or share direct observations via speech or micro-blogging services such as Twitter. Additionally, the mobile device user’s capability as a sensor platform is enhanced through open-source information, geo-location, and group collaboration facilities available to them via the device.

2.  Social networks allow members to readily identify and aggregate a “hive mind” that is ideally suited for the task at hand. Although this capability exists to some degree already, the concept of “friending” someone on a social network will evolve to include opportunistic sharing of data or cognitive ability for a specific task or type of task, as opposed to the current model of permissively sharing personal information with large numbers of friends or acquaintances.

3.  Artificial intelligence and data fusion algorithms are improving—not only in their stand-alone capacities, but also through their increasing abilities to interact with a human-in-the-loop.

4.  Service-oriented system methodologies not only connect computers, sensors, and mobile device users, but also facilitate abstraction and service description to allow sensing, information fusion, and cognition tasks to be performed by either computer, human, or a hybrid (e.g., mobile device user).
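The fourth point, the abstraction that lets a task be fulfilled by a computer, a human, or a hybrid, can be sketched as a service registry in which providers are described only by the task types they advertise. All names below are invented for illustration:

```python
# Illustrative sketch: a registry where human, machine, and hybrid providers
# advertise task types, so requesters need not care which kind responds.
# Provider names, kinds, and task labels are all hypothetical.

class ServiceRegistry:
    def __init__(self):
        self.providers = []  # (name, kind, set of advertised task types)

    def register(self, name: str, kind: str, tasks: set):
        self.providers.append((name, kind, tasks))

    def find(self, task: str):
        """Return (name, kind) for every provider advertising the task."""
        return [(name, kind) for name, kind, tasks in self.providers
                if task in tasks]

registry = ServiceRegistry()
registry.register("ocr_service", "machine", {"read_text"})
registry.register("field_observer", "human", {"assess_intent", "read_text"})
registry.register("smartphone_user", "hybrid", {"geotag_photo", "read_text"})

# Any of the three can serve a "read_text" request; only the human can
# serve "assess_intent" -- the abstraction hides the distinction until needed.
candidates = registry.find("read_text")
```

A full SOA would add service descriptions, discovery protocols, and QoS terms, but the essential move is the same: sensing and cognition become interchangeable services behind a common interface.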

13.5  CONCLUSION

The last of these points is the most important in this chapter, so it is worth restating. SOA, with its ability to provide access to data and processing algorithms via loosely coupled, rapidly reconfigurable, modular services, allows each of the aforementioned breakthroughs in mobile device technology, social networking, and advancing algorithms to act as an effectiveness multiplier for the others. When the potential of this combination is fully realized, it will have broad implications for the sciences, prevention of and recovery from natural and man-made disasters, medicine, and countless other aspects of human endeavor.

ACKNOWLEDGMENT

We gratefully acknowledge that this research activity has been supported in part by a Multidisciplinary University Research Initiative (MURI) grant (Number W911NF-09-1-0392) for “Unified Research on Network-based Hard/Soft Information Fusion,” issued by the U.S. Army Research Office (ARO) under the program management of Dr. John Lavery.

REFERENCES

Bedworth, M. and J. O’Brien. 2000. The omnibus model: A new model of data fusion? IEEE Aerospace and Electronic Systems Magazine, 15(4):30–36.

Blasch, E. 2004. Fusion metrics for dynamic situation analysis. SPIE Proceedings 5429, Orlando, FL, pp. 1–11.

Botts, M., G. Percivall, C. Reed, and J. Davidson. 2007. OGC sensor web enablement: Overview and high level architecture (ogc 07-165). Open Geospatial Consortium White Paper, 28.

Burke, J. et al. 2006. Participatory sensing. Workshop on World Sensor Web (WSW’06): Mobile Device Centric Sensor Networks and Applications, Boulder, CO, pp. 1–5.

Castanedo, F., J. García, M. A. Patricio, and J. M. Molina. 2008. Analysis of distributed fusion alternatives in coordinated vision agents. 11th International Conference on Information Fusion, Cologne, Germany, pp. 1–6.

Chang, W. Y. 2007. Network-Centric Service Oriented Enterprise (illustrated ed. Dordrecht). Houten, the Netherlands: Springer.

Christin, D., A. Reinhardt, S. Kanhere, and M. Hollick. 2011. A survey on privacy in mobile participatory sensing applications. Journal of Systems and Software, doi:10.1016/j.jss.2011.06.073, 1–19.

DARPA. 2010. DARPA Network Challenge Project Report.

Dorigo, M., M. Birattari, and T. Stutzle. 2006. Ant colony optimization. Computational Intelligence Magazine, IEEE 2006, 1(4):28–39.

Fan, X., M. McNeese, B. Sun, T. Hanratty, L. Allender, and J. Yen. 2010. Human—Agent collaboration for time-stressed multicontext decision making. Systems, Man and Cybernetics, Part A: Systems and Humans, IEEE Transactions, 40(2):306–320.

Fielding, R. T. and R. N. Taylor. 2002. Principled design of the modern web architecture. ACM Transactions on Internet Technology (TOIT) 2002, 2(2):115–150.

Georgakopoulos, D. and M. P. Papazoglou. 2008. Service-Oriented Computing. Cambridge, MA: MIT Press.

Hall, D. L. and J. Llinas. 1997. An introduction to multisensor data fusion. Proceedings of the IEEE 1997, 85(1):6–23.

Hall, D. L. and J. Llinas. 2001. Handbook of Multisensor Data Fusion. Boca Raton, FL: CRC Press.

Hall, D. L. and S. A. H. McMullen. 2004. Mathematical Techniques in Multisensor Data Fusion. London, U.K.: Artech House Publishers.

Hart, J. K. and K. Martinez. 2006. Environmental sensor networks: A revolution in the earth system science? Earth-Science Reviews 2006, 78(3–4):177–191.

Hu, W. et al. 2005. The design and evaluation of a hybrid sensor network for cane-toad monitoring. Information Processing in Sensor Networks, IPSN 2005, Fourth International Symposium, Sydney, New South Wales, Australia.

Ishida, Y., S. Konomi, N. Thepvilojanapong, R. Suzuki, K. Sezaki, and Y. Tobe. 2008. An implicit and user-modifiable urban sensing environment. Urbansense08, 2008:36.

Jiang, M. 2010. Human-centered sensing for crisis response and management analysis campaigns. Proceedings of the 7th International ISCRAM Conference, Seattle, WA.

Keisler, R. J. 2008. Towards an agent-based, autonomous tactical system for C4ISR operations. Proceedings of the Army Science Conference (26th), Orlando, FL, December.

Kelly, K. 1996. The electronic hive: Embrace it. Computerization and Controversy: Value Conflicts and Social Choices, 1996:75–78.

Kipp, J. 2006. The human terrain system: A CORDS for the 21st century. Military Review, September–October 2006:1–8.

Lester, J., P. Hurvitz, R. Chaudhri, C. Hartung, and G. Borriello. 2008. MobileSense-sensing modes of transportation in studies of the built environment. Urbansense08, 2008:46–50.

Mallick, S., A. Sharma, B. V. Kumar, and S. V. Subrahmanya. 2005. Web services in the retail industry. Sadhana, 30(2):159–177.

McNamee, D. 2006. Building multilevel secure web services-based components for the global information grid. CROSSTALK The Journal of Defense Software Engineering, May 2006:1–5.

Mitchell, H. B. 2007. Multi-Sensor Data Fusion: An Introduction. Berlin, Germany: Springer.

MSP Research Challenge. 2007. University of Washington and Intel Research Seattle. http://seattle.intel-research.net/MSP (accessed on August 2, 2011).

Mun, M. 2009. PEIR, the Personal Environmental Impact Report, as a platform for participatory sensing systems research. Proceedings of the 7th International Conference on Mobile Systems, Applications, and Services, Krakow, Poland, pp. 55–68.

O’Reilly, T. and J. Battelle. 2009. Web squared: Web 2.0 five years on. Web 2.0 Summit 2009, California, USA, pp. 1–13.

Oriol, M. 2009. Quality of Service (QoS) in SOA Systems. A systematic review. Master thesis, UPC, Departamento LSI [Biblioteca Rector Gabriel Ferraté de la Universitat].

Orwell, G. 1977. 1984: A Novel. New York: Penguin.

Parhami, B. 2005. Voting: A paradigm for adjudication and data fusion in dependable systems. Dependable Computing Systems: Paradigms, Performance Issues, & Applications, 52:2, 87–114.

Pelz, J., M. Hayhoe, and R. Loeber. 2001. The coordination of eye, head, and hand movements in a natural task. Experimental Brain Research, 139(3):266–277.

Perlovsky, L. I. 2007. Cognitive high level information fusion. Information Sciences, 177(10):2099–2118.

Rimland, J. 2011. A multi-agent infrastructure for hard and soft information fusion. SPIE Proceedings, Orlando, FL.

Schum, D. A. and J. R. Morris. 2007. Assessing the competence and credibility of human sources of intelligence evidence: Contributions from law and probability. Law, Probability and Risk, 6(1–4):247.

Shilton, K. 2009. Four billion little brothers?: Privacy, mobile phones, and ubiquitous data collection. Communications of the ACM, 52(11):48–53.

Starbird, K. 2011. Voluntweeters: Self-organizing by digital volunteers in times of crisis. Proceedings of the 2011 Annual Conference on Human Factors in Computing Systems, Vancouver, BC, pp. 1071–1080.

Tang, J. C., M. Cebrian, N. A. Giacobe, H. W. Kim, T. Kim, and D. B. Wickert. 2011. Reflecting on the DARPA red balloon challenge. Communications of the ACM, 54(4):78–85.

Tolani, D., A. Goswami, and N. I. Badler. 2000. Real-time inverse kinematics techniques for anthropomorphic limbs. Graphical Models, 62(5):353–388.

Tutwiler, R. 2011. Hard sensor fusion for COIN inspired situation awareness. Proceedings of the 14th International Conference on Information Fusion, Chicago, IL, pp. 1–5.

Walsh, A. E. 2002. UDDI, SOAP, and WSDL: The web services specification reference book. Prentice Hall Professional Technical Reference.

White, F. 1999. Managing data fusion systems in joint and coalition warfare; Signals, systems, and computers. Paper presented at the Conference Record of the Thirty-Third Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA.

Zeng, L., H. Lei, and H. Chang. 2007. Monitoring the QoS for web services. Service-Oriented Computing—ICSOC 2007, 4749, 132–144.
