12

Risk and Threat Assessment

“Prediction is difficult, especially of the future.”

Niels Bohr

The world is filled with risk. Every action that we consider or take has some degree of risk associated with it. We conduct risk or threat assessments internally almost constantly as we move through our lives. “Should I pull out in front of this car?” “Will I run out of gas?” “Can I skip my annual checkup?” These are only a few of the many calculated risks that we take each day.

Part of risk assessment is weighing the likelihood of a particular outcome associated with a particular action or inaction against the potential seriousness of the outcome. For example, when making a decision to fly, we know that the risk of a crash is extremely small; however, the potential consequences associated with an airplane crash can be very serious. It is much riskier to drive, but the perceived consequences associated with a crash are much less. Also factoring into the equation in many cases is the perception of control. If I drive to my destination, there is the perception that I have greater control over the outcome, which highlights the fact that some, if not most, of the information that we use in personal risk or threat assessment might be inaccurate or unreliable.

In the public safety community, we are asked to evaluate and mitigate risk on a regular basis. For example, is the potentially hostile crowd depicted in Figure 12-1 likely to riot, or will a show of force only escalate a relatively stable situation? Are there other ways of dealing with this potentially volatile situation that will reduce the risk? Perhaps one of the greatest challenges is that we usually are dealing with very unlikely events and with incomplete information. The likelihood of being a victim of crime generally is very low. This likelihood or risk, however, can be increased by certain lifestyle factors or other related issues known to impact crime victimization rates. Data mining and predictive analytics can greatly assist the process of identifying factors associated with risk, and are particularly adept at addressing the many issues that make accurate risk and threat assessment such a challenge.


Figure 12-1 Potentially hostile crowd outside the Coalition Provisional Authority headquarters in Basra, Iraq (Courtesy of Staff Sergeant Tom Ferguson, USMC).

Fourth-Generation Warfare (4GW)

Modern warfare has been divided into three distinct generations by Lind et al.1 The first generation is based on line and column and was largely driven by the weapons technology of the time: smoothbore muskets. Second-generation warfare was driven by changes in technology, which included the introduction of rifled muskets and the breechloader, barbed wire, the machine gun, and indirect fire. Second-generation warfare remained predominantly linear, incorporating the tactics of fire and movement; however, it relied on massed firepower rather than massed personnel. Second-generation warfare remained the foundation of U.S. doctrine until the 1980s, and is still used today.

While first- and second-generation warfare tended to be technology driven and linear, third- and now fourth-generation warfare introduced nonlinear tactics that reflected changes in ideas rather than technology and emphasized maneuver over attrition. Several elements attributed to 4GW have significant implications for local law enforcement. These include decreasing reliance on centralized logistics and the use of agile, compartmentalized units, which are similar to the cells frequently associated with international terrorist organizations or domestic hate groups. 4GW also emphasizes the goal of collapsing the enemy from within as opposed to annihilating him physically, and targets may include social and cultural objectives as well as support for a war effort, economic stability, power plants, and industry. One need only look at the stock market and the airline industry after 9/11 to see the larger impact of these tactics. Perhaps one of the most important issues for local law enforcement will be the blurred distinction between civilian and military, and the absence of a traditional front or battlefield. According to Lind and colleagues, the enemy will be targeted at every level, including social and cultural, and traditional “battlefields” might be difficult to identify in a 4GW scenario.

Risk and threat changed forever after the terrorist attacks of September 11. Increasingly, local law enforcement finds itself on the front lines of the war on terrorism. Knowledge of data mining and predictive analytical techniques is only part of the requirements for the development of reliable and accurate risk and threat assessment models that have value to the public safety community. Domain expertise is absolutely critical in the development and thoughtful creation and evaluation of risk models. Domain experts understand which data are available, what types of models are needed, and which ones are actionable or have value. Just as it would be difficult for someone without a background or training in meteorology to develop weather models, domain expertise in public safety should be a prerequisite for work in this area.

12.1 Risk-Based Deployment

The concept of risk-based deployment was developed as part of the Project Safe Neighborhoods initiative in the Eastern District of Virginia,2 and it has been used repeatedly throughout this text to illustrate various features of data mining and predictive analytics. Essential to development of the deployment model, however, was creation of the risk model.

Briefly, it was determined that while armed robberies were bad, an armed robbery that escalated into an aggravated assault was worse. By developing a model of robbery-related aggravated assaults, it would be possible to identify potential risk factors associated with this serious pattern of offending and develop targeted law enforcement initiatives that were designed specifically to reduce the risk for these crimes.

Perhaps one of the first challenges associated with this task was that robbery-related aggravated assaults, like many other examples in risk and threat assessment, were a very low-frequency event. Less than 5% of all armed robberies escalated into an aggravated assault. This issue was addressed in the modeling process in two ways. First, it was important to collect a sample that included a sufficient number of the events of interest so that this pattern of crime could be modeled adequately. Therefore, the sampling frame for the analysis was six months. This represented an adequate number of events of interest for modeling purposes, yet it did not extend so long as to start incorporating a greater amount of variability, which also would compromise model preparation.

Crime trends and patterns tend to fluctuate over time, some more than others. Robberies frequently vary throughout the year, and can change as various players come and go. Although this issue is addressed in Chapter 13, a sampling frame of approximately six months seemed to work well for this pattern of offending. Much beyond that, the amount of variability in the information associated with these long-term patterns and trends really compromised the model construction. Moreover, it also was questionable how much value this type of model would have, as it would be based on relatively old information.

The second method of addressing low-frequency events was to adjust the predicted probabilities of the model. This has been addressed previously, but when modeling low-frequency events, it is important to ensure that the predicted probabilities reflect the actual probabilities. In other words, a good model in this case should predict that less than 5% of armed robberies will escalate into an aggravated assault. In risk and threat assessment, we are asked to develop a model that will predict an event that generally is relatively infrequent. It is important, therefore, that the model is created to predict future events with a frequency that is relatively close to what would be anticipated based on the rate of incidents historically.
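The prior-correction step described above can be sketched in a few lines. This is a standard rescaling on the odds scale, not the specific procedure used in the original analysis; the function name and the example rates (a balanced training sample, a 5% true base rate) are illustrative only.

```python
def recalibrate(p_model, train_rate, true_rate):
    """Rescale a model's predicted probability so that it reflects the
    true historical base rate rather than the (often rebalanced) event
    rate in the training sample.

    Standard prior correction on the odds scale; all rates here are
    illustrative, not drawn from the Richmond robbery data.
    """
    odds = p_model / (1.0 - p_model)
    # Ratio of the true prior odds to the training-sample prior odds.
    prior_ratio = ((true_rate / (1.0 - true_rate)) /
                   (train_rate / (1.0 - train_rate)))
    adjusted = odds * prior_ratio
    return adjusted / (1.0 + adjusted)

# A model trained on a balanced (50/50) sample that scores a robbery at
# 0.60 corresponds to roughly a 7% chance of escalation once the ~5%
# historical base rate is restored.
p = recalibrate(0.60, train_rate=0.50, true_rate=0.05)
```

The point of the sketch is that a well-calibrated model of a rare event should emit small probabilities: a raw score of 0.60 from a rebalanced sample still implies an escalation risk well under 10% once the historical rate is taken into account.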

12.2 Experts versus Expert Systems

If data mining and predictive analytics truly are game changing, why haven’t they been universally adopted? It would seem that increased public safety is something that everyone could get behind; however, there has been a lag in the acceptance of automated tools in some areas. Research from the political science community may provide an answer to this apparent disconnect between science and practice. It seems that people are more inclined to trust an “expert” despite the finding that the accuracy of “expert” predictions does not differ from those of mere mortals, both of which perform well below predictions derived using statistics and mathematical modeling.3

I have direct personal experience with this phenomenon. Several years ago, I attended a scientific meeting that included a lively debate over expert versus statistical estimates of risk for future violence. Despite the fact that the data overwhelmingly supported the accuracy and reliability of the statistical estimates, the attendees found a number of exceptions that would have been missed by computer models and ultimately elected to stay with the human judgments.

One possible explanation for this is that people may find comfort in the authority that an “expert” conveys, rather than believing that human nature can be reduced to math and equations.4 Given the capacity that data mining and predictive analysis can bring to support public safety and security, however, this disconnect between science and practice really needs to be addressed. Perhaps the best model for the paradigm shift required lies somewhere in between those two extreme positions and could include domain experts using the expert systems embodied in data mining and predictive analysis software.

12.3 “Normal” Crime

In many ways, solid internal norms or domain expertise is essential to effective risk and threat assessment. Knowing what is “normal,” particularly for crime and criminal behavior, can be used to identify abnormal behavior, which generally indicates an increased risk for escalation. Data mining and predictive analytics can function as invaluable tools in the characterization of normal, as well as in the identification of abnormal incidents or trends that are worthy of additional investigation. For example, a series of suspicious situation reports could mean totally different things, depending on the time of day and local crime patterns. Suspicious behavior occurring during the daytime could indicate possible surveillance associated with burglaries or other property crimes. On the other hand, surveillance during the night, when residents are likely home alone, could suggest something far more sinister. This example underscores the importance of knowing the community, normal patterns, and even normal crime when evaluating analytical output. Again, characterization and analysis of normal behavior, including crime, can be invaluable to identifying abnormal or potentially threatening behavior. This concept is addressed in greater detail in Chapter 10.

12.4 Surveillance Detection

This topic is covered in Chapter 14; however, the ability to detect when a person or place is being watched or evaluated can be extremely important in risk and threat assessment. Not only does the identification of possible surveillance activity indicate the possibility of some type of preoperational planning or activity, but it also can highlight previous undetected vulnerabilities in either the physical or operational security of a particular location or activity. For example, consistent reports of suspicious activity indicating possible surveillance associated with an area frequented by employees who smoke might reveal that a door is propped open regularly to facilitate return into the building without having to utilize distant or inconvenient points of access. This surveillance activity, while focusing primarily on a particular area, ultimately reveals potential security vulnerability that could be addressed.

12.5 Strategic Characterization

Generic profiles of suspected suicide bombers have been developed, although the area of strategic characterization of possible or likely offenders has been associated with considerable controversy. These methods of strategic characterization have been used to guide security screening processes, as well as mandatory registration and law enforcement interviewing protocols. A further examination of these models starts to reveal the challenge associated with these general profiles.

While no one would dispute the fact that all of the 9/11 hijackers were of Middle Eastern descent, it is not equally true that all individuals of Middle Eastern descent are probable terrorists. A significant increase in false alarms is a common problem associated with the challenge of modeling infrequent events. Worldwide, the number of likely terrorists is extremely small, something that is magnified further when compared to the total world population. Attempts to further refine and characterize suspected terrorist profiles have included tightening the possible age range and focusing on males. While significantly limiting the relative percentage of possible suspects, these decision rules still catch a very large number of individuals in their net. Two problems emerge from this inability to further refine the model.

First, by creating such a general model, the likelihood of a false positive is increased significantly. Accurate prediction of very low-frequency events can be extremely challenging. The nature of the errors must be considered when evaluating the accuracy of the model. For example, a model predicting an event that occurs only 1% of the time would be 99% accurate if it always predicted that nothing would happen. Clearly, this is not acceptable, particularly when public safety and human lives are in the balance. Therefore, a concomitant increase in the number of false alarms generally is associated with acceptable prediction rates for these infrequent events. Given the very low frequency of potential terrorists within the general population and the limited knowledge regarding this comparatively hidden group of individuals, these models are even more challenging to develop, particularly with acceptable levels of accuracy and false alarms.
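The base-rate problem described above can be made concrete with expected confusion-matrix counts. The sensitivity and specificity figures below are hypothetical, chosen only to illustrate the arithmetic.

```python
def expected_counts(base_rate, sensitivity, specificity, n):
    """Expected confusion-matrix cells when a screening model with the
    given sensitivity and specificity is applied to n cases in which
    the event occurs at base_rate. All inputs are hypothetical."""
    events = base_rate * n
    tp = sensitivity * events
    fn = events - tp
    tn = specificity * (n - events)
    fp = (n - events) - tn
    return tp, fp, tn, fn

n = 100_000

# A "model" that always predicts that nothing will happen is 99%
# accurate on a 1% event, yet it detects no events at all.
tp, fp, tn, fn = expected_counts(0.01, sensitivity=0.0, specificity=1.0, n=n)
accuracy = (tp + tn) / n

# A model that is 90% sensitive and 90% specific catches 900 of the
# 1,000 events, but at the cost of 9,900 false alarms: only about 1 in
# 12 of its alerts is a true hit.
tp, fp, tn, fn = expected_counts(0.01, sensitivity=0.9, specificity=0.9, n=n)
precision = tp / (tp + fp)
```

The arithmetic shows why raw accuracy is a misleading yardstick for rare events, and why acceptable detection rates necessarily bring a large absolute number of false alarms.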

Second, the numbers identified by these decision rules are so large as to be almost useless for enforcement strategies. Even if the decision rules were refined considerably, the selection process is still likely to include large numbers of individuals. To accommodate this volume challenge, random selection protocols have been implemented. In light of the extremely low frequency associated with the subjects of interest, however, the likelihood that anyone of interest will be identified through this layered process of generic strategic characterization, which is further diluted by a random sampling procedure, is extremely low.
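The dilution effect of a broad decision rule followed by random secondary selection can also be sketched with expected values. Every number below is hypothetical and chosen only to show the order of magnitude involved.

```python
def screening_yield(population, prevalence, rule_pass_rate,
                    rule_sensitivity, sample_fraction):
    """Expected number of people screened, and of true subjects of
    interest among them, when a broad decision rule is followed by
    random secondary selection. All inputs are hypothetical."""
    screened = population * rule_pass_rate * sample_fraction
    subjects_screened = (population * prevalence * rule_sensitivity
                         * sample_fraction)
    return screened, subjects_screened

# A rule that flags 20% of a population of one million, followed by a
# random 2% secondary screen: 4,000 people are screened, yet even if
# the rule catches every one of 10 true subjects, the expected number
# of subjects actually screened is 0.2 -- a 1-in-5 chance per subject.
screened, subjects = screening_yield(
    population=1_000_000, prevalence=1e-5,
    rule_pass_rate=0.20, rule_sensitivity=1.0, sample_fraction=0.02)
```

Even under the generous assumption that the decision rule never misses, the random sampling layer leaves most true subjects unexamined while consuming thousands of screening encounters.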

While refinements to these models are under development, the technique of tactical characterization of possible terrorist behavior also is being explored. This approach generally has been accepted by the public, perhaps because there is a sense that if someone is acting suspiciously or strangely, then they should expect to be stopped.

Characteristics of possible pre-incident surveillance or behavior have been compiled. These include various types of surveillance activity and security probes. The Air Force Office of Special Investigations has further categorized and characterized suspicious behaviors, which certainly represents an excellent foundation for further analytical work in this area.5 Similarly, possible preattack behaviors have been characterized. These include nervous behavior and bulky clothing, which might be hiding a bomb vest or other weapon.6

These behavioral models also represent a good start, but still lack a certain degree of specificity. For example, while the description of an individual who is overdressed for the prevailing weather conditions makes sense to most, it is not a perfect discriminator. In fact, it was not too long ago that it was possible to drive through the streets of Richmond, Virginia, and find groups of young men similarly dressed. Looking like a group of climbers ready to summit Mount Everest, these individuals would be dressed in heavy coats and parkas. They frequently were sweating and looked nervous, not because they were wearing a bomb vest, but because they were on the corner selling drugs and it was really hot. Their reason for wearing the heavy coat was to avoid detection of drugs and weapons during a pat and frisk, but their potential threat to the community was very different from that of a potential suicide bomber.

Therefore, one missing piece in many of these models is a control group, which is a shortcoming in many studies of risk. For example, in medical studies, people are much more likely to report a side effect or another type of adverse outcome than they are to let the drug company know that everything worked out fine. While some side effects are extremely rare and do not emerge until large numbers of individuals have experience with a particular treatment or medication, this can result in a pattern of bias in outcome studies. Similarly, in law enforcement and intelligence analysis, people are much more likely to report cases when something actually happened, as opposed to suspicious situations that turned out to be benign. This control group, however, is extremely important to the development of meaningful models that will have significant predictive value with future, unknown samples of behavior. Therefore, it can be just as important to include the false alarms in the analysis as it is to ensure that the accurate hits are represented. How can we predict that something is likely to happen if we do not know how true incidents differ from false alarms? These comparison data can have tremendous value in that they significantly increase our ability to create models that classify reliably and accurately.
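The value of retaining the benign reports can be shown with a toy contrast between confirmed incidents and false alarms. The records and the "night" indicator below are entirely made up; the point is only that the second rate in the comparison cannot be computed if benign reports are discarded.

```python
def indicator_contrast(hits, false_alarms, feature):
    """Rate at which a candidate indicator appears in confirmed
    incidents versus in reports that proved benign. Without the benign
    'control' reports, the second rate cannot be computed at all."""
    rate_hits = sum(1 for r in hits if r[feature]) / len(hits)
    rate_benign = (sum(1 for r in false_alarms if r[feature])
                   / len(false_alarms))
    return rate_hits, rate_benign

# Hypothetical suspicious-situation reports.
hits = [{"night": True}, {"night": True},
        {"night": True}, {"night": False}]
benign = [{"night": True}, {"night": False},
          {"night": False}, {"night": False}]

night_hits, night_benign = indicator_contrast(hits, benign, "night")
# Night activity appears in 75% of confirmed incidents but only 25% of
# false alarms -- a contrast that is invisible if only hits are kept.
```

The same comparison is what a classification model performs internally across many candidate indicators at once, which is why the false alarms belong in the training data.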

12.6 Vulnerable Locations

Some locations are uniquely vulnerable to attacks that could disrupt life or generate a large number of casualties, either by the nature of the business conducted or by the value of the occupants. These locations include critical infrastructure and locations where large groups of people congregate. For example, we have seen increasing evidence of hostile surveillance on critical financial facilities and the transportation industry, while the developing security and military forces in Iraq have been the target of ongoing attacks by the insurgents. Similarly, Israel has experienced attacks in crowded locations, such as shopping malls and dining establishments frequented by civilians, for many years. Recent attacks throughout the world have targeted hotels and resorts, further underscoring the increased risk associated with locations like these.

One location of increasing concern to public safety and security experts is our schools. Children represent something very important to us as individuals and society. Their innocence and vulnerability have been exploited by individuals and terrorist groups that hope to further an agenda or create fear in a community. As someone who was affected directly by the wave of random violence associated with the Washington, D.C. sniper in the autumn of 2002, I can speak directly to the abject fear that can be created by the potential for risk to our children. Even the mere suggestion that school children in central Virginia might be at risk resulted in school closings and widespread panic. Unlike types of risk that are associated with involvement in high-risk activities, occupations, or lifestyles, there is something terribly unsettling about predators randomly targeting children, particularly in the school setting.

Sheep, Wolves, and Sheepdogs

Lieutenant Colonel (Ret.) Dave Grossman, a global expert on violence and terrorism, is a hero to many in the operational worlds, and deservedly so. He is not only a dynamic lecturer who has studied violence and our response to it, but he speaks with considerable authority about honor and the warrior lifestyle. In these lectures, Colonel Grossman frequently categorizes people as sheep, wolves, and sheepdogs.7 The vast majority of people in the world can be categorized as sheep. Grossman advises that this label is not derogatory. Rather, it refers to the fact that most people are kind and gentle with few inclinations toward violence; however, like sheep, they require protection from predators. The wolves, as one might imagine, are the predators among us. Grossman describes these people as truly evil, preying on the weak and defenseless sheep at will. Finally, the sheepdogs are relatively few in number; it is their role to protect the flock by confronting the wolves. According to Colonel Grossman, these are the “warriors,” the operational personnel in the public safety, security, and military fields that protect us against the predators in the world. The sheepdog also has the capacity for violence, but only in its role as protector of the flock.

This analogy is particularly relevant to several of the topics in this book. For example, Grossman describes research based on interviews with convicted violent offenders that suggests these predators look for weak victims: those off by themselves, demonstrating a lack of confidence or poor survival skills. Similarly, predators often target prey that wander off or get separated from the rest of the herd, or those that are weak, show poor survival skills, or a lack of situational awareness. These interview data are consistent with some of the research on victim risk factors and victimology outlined in Chapter 11. Grossman goes on to describe sheepdog behavior as being vigilant, “sniffing around out on the perimeter;” behaviors that are very similar to the surveillance detection described in Chapter 14.

This is a wonderful model for understanding the relationship between predators, public safety and security personnel, and the people that they protect. As an analyst, however, the role of the shepherd immediately comes to mind. Like analysts, the shepherd is not there to tell the sheepdog how to do its job; rather, the shepherd brings a unique perspective that can enhance the sheepdog’s situational awareness and provide additional guidance. To be truly effective, the sheepdog and shepherd must work together as a team. Although each functions in a different capacity, they share a common goal of protecting the sheep, something that neither of them can do alone.

12.7 Schools

Colonel Grossman has studied the potential vulnerability associated with schools. He has suggested that schools are a particularly desirable target for Islamic fundamentalists and has cited several examples underscoring the number of international terrorist attacks specifically targeting schools.8

Michael and Chris Dorn have compiled a historical accounting of attacks in and on schools and schoolchildren throughout the world.9 Going back to the 1960s, terrorists and other predators have specifically targeted schools, underscoring their value as potential targets to extremists and other individuals. More recently, Chechen terrorists attacked a school in Beslan, Russia, moving the children and their parents into the auditorium and planting 10 to 30 explosive devices throughout the crowd.10 When the siege ended 52 hours later, 300 to 400 were dead.11

Colonel Grossman suggests that while preparedness for weapons of mass destruction is important, all indicators suggest that terrorist groups will continue to use conventional explosives, particularly car bombs, given their ongoing success with these methods.12 This is not to suggest that they have no capacity to improve their methods or that they do not learn. Rather, as John Giduck asserts in his analysis of the Beslan siege,13 these groups constantly are improving their tactics and strategy in an effort to address operational flaws or limitations, as well as countermeasures. The terrorists involved in the school hostage taking and subsequent siege and massacre had incorporated lessons learned from the Nord-Ost Theater hostage siege and massacre two years earlier. Colonel Grossman highlights the terrorists’ use of an initial assault to increase the number of victims by massing them in a common location (e.g., outside a building), or to otherwise channel the victims to an area where they can be managed more easily (e.g., the Beslan school auditorium).

Why is it important for an analyst to understand terrorist tactics and strategy and how these incidents play out? There are several very important reasons. First, the complexity associated with some of the larger terrorist operations underscores not only the amount but also the prolonged duration of the preattack planning cycle. In many situations, this requires significant gathering of intelligence regarding the facility or person of interest. While this may include searches of open-source materials related to the potential target, it also frequently requires extensive on-site observation and collection. As a result of this extensive preattack planning activity, it is not unusual for the hostile surveillance or intelligence collection to be observed and reported in the form of suspicious situation reports, which can be exploited by the analyst to identify and characterize potential preoperational surveillance and attack planning. The longer the planning cycle, theoretically the more opportunities for identification, which ultimately supports proactive, information-based prevention, deterrence, and response efforts.

Prior to the attack on the school in Beslan, the terrorists had collected intelligence on this and related facilities in support of target selection, as well as preparation of the operational plan, tactics, and strategy. This information gathering included preoperational surveillance of the school. In his review of the incident, John Giduck noted that the Beslan operation reflected not only the lessons learned from the Nord-Ost Theater attack, but also al Qaeda tactical training, particularly the sections that addressed how to deal with hostages.14 This preattack planning activity represents opportunities for identification, characterization, and proactive responses to potential threats, including prevention.

Studying previous attacks provides greater insight into tactics and strategy, which can be parlayed into the tacit knowledge frequently required to effectively identify potential threats. For example, an understanding of how predatory pedophiles select and acquire their victims may give the analyst a greater ability to identify likely predators before they harm a child, rather than after the fact. Forewarned truly is forearmed. Identifying an impending attack during the planning phase allows public safety and security professionals an opportunity to move within their adversary’s decision cycle and to change outcomes.

By studying incidents and outcomes, the analyst can contribute to the identification and understanding of changes in tactics and strategy. As illustrated by operational revisions incorporated into the Belsan siege that were associated with lessons learned after the attack on the Nord-Ost Theater, terrorist methods are constantly changing and evolving in response to previous failures as well as improved countermeasures and response. Data mining and predictive analytics are well suited to identifying and capturing fluid changes in behavior and modus operandi in a timely fashion. The powerful modeling algorithms incorporated in the tools are able to accommodate and adjust to changes and refresh predictive models accordingly. Again, the ability to stay within our adversary’s decision cycle can be game changing in terms of the options available for prevention and deterrence. To support this function, however, the analyst should maintain current knowledge regarding tactics and strategy, particularly as they apply to the assessment of risk and threat.

Finally, knowledge of the operational aspects of risk and threat assessment, as well as response strategies and tactics, is necessary to the production of operationally relevant and actionable output. For example, in terms of surveillance detection, it is important to consider the nature of the activity and to create a variable that could be used to illustrate changes or escalation in the hostile surveillance. Being able to depict this information in an operationally relevant manner increases the value of the analysis and allows the end users to incorporate their tacit knowledge in the development of surveillance detection and response operations. Similarly, the identification of specific risk factors or victim attributes associated with an increased risk for victimization can be used to develop targeted operational tactics and strategies that directly address the risk or threat. For example, the finding that drug-related violence was associated with the robbery of drug users coming into a particular area to purchase drugs supported the use of a specific operational plan: demand reduction. By identifying why victims were at risk, the command staff was able to structure an operational strategy that kept potential victims out of the area and thereby reduced their risk. To support operational plans like these, though, the analyst needs a solid understanding of crime and criminals as well as operational tactics and strategy to create actionable output.
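An escalation variable of the kind described above can be sketched very simply, assuming nothing more than a list of report dates for a single facility. The function names and the three-week rule are illustrative choices, not a documented analytical standard.

```python
from collections import Counter
from datetime import date

def weekly_counts(report_dates):
    """Roll suspicious-situation reports for one facility up into ISO
    (year, week) counts -- a simple variable for depicting changes in
    possible hostile surveillance over time."""
    counts = Counter(d.isocalendar()[:2] for d in report_dates)
    return dict(sorted(counts.items()))

def escalating(counts, window=3):
    """Flag escalation when the most recent `window` weekly counts
    strictly increase. The three-week rule is illustrative only."""
    recent = list(counts.values())[-window:]
    return len(recent) == window and all(
        a < b for a, b in zip(recent, recent[1:]))

# Hypothetical reports: one, then two, then three in successive weeks.
reports = [date(2024, 1, 1),
           date(2024, 1, 8), date(2024, 1, 9),
           date(2024, 1, 15), date(2024, 1, 16), date(2024, 1, 17)]
flag = escalating(weekly_counts(reports))
```

A time series like this is easy to chart for command staff, which is exactly the kind of operationally relevant depiction the paragraph above calls for; a real deployment would also want location, time of day, and activity type as additional dimensions.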

Although I have highlighted schools as a vulnerable location, it is important to remember that the predator selects the location. Although we can anticipate when and where they might attack, they ultimately select the location based on access, availability, personal preference, secondary gain, and a host of other factors known only to them. Therefore, risk and threat assessment is a fine balance between identifying locations worthy of additional attention and vigilance and remaining open to subtle indicators and signs that reveal the predator’s intentions. This is one area where data mining and predictive analysis can be a tremendous asset. Identification of preoperational surveillance can not only illuminate interest in a particular facility but also be used as a starting point for risk-based threat assessment and response.

The ability to identify, model, and characterize possible hostile surveillance provides at least two direct operational benefits. First, it allows us to identify the times and locations of interest to our adversary, which then supports targeted surveillance detection efforts. If we know when and where we are being watched, then we also know when and where to watch them watching us. This can be invaluable in revealing larger patterns of hostile surveillance and attack planning.

The second benefit is that it can reveal potential vulnerabilities or areas of interest to likely predators. The risk associated with a particular facility, location, or individual is unique and can fluctuate in response to prevailing conditions and a wide array of external events. I can speculate as to what might be of interest to someone; however, I am likely to be wrong, as I do not have sufficient information to see the big picture from another’s perspective. For example, someone interested in a particular facility because a spouse works there presents a very different risk than someone interested in a facility because it supports critical infrastructure or represents the potential for a high number of casualties. The potential threat, strategy, and required tactics associated with the domestic situation would be expected to be very different in time, space, and method than the threat associated with someone interested in the entire facility. Trying to assume what might happen can blind the analyst to what actually is being considered. It is generally better to let predators reveal their intentions to us.

Specific issues to consider in information-based risk and threat assessment include the following.

12.8 Data

The data used for risk and threat assessment are especially poor, consisting almost exclusively of narrative reports. One of the first challenges associated with collecting the data required for effective and thorough risk and threat assessment is to encourage people to report things. In his book The Gift of Fear, Gavin de Becker posits that people do not just “snap”; rather, there generally are signs and indicators preceding the event that often go unheeded.15 “As predictable as water coming to a boil,”16 these signs can be observed and the event anticipated. He even notes that in cases of workplace violence, coworkers often know exactly who the perpetrator is the moment the first shot rings out, further underscoring the leading indicators present in these cases.

As the book title suggests, De Becker observes that most people have the gift of fear. Getting them to acknowledge and heed their fear can save their life. As an analyst, however, getting people to not only acknowledge but also report their suspicions or concerns can save other lives as well.

Colonel Grossman has recommended providing digital cameras to personnel working in and around schools to accurately and reliably collect information suggestive of hostile surveillance or some other threat.17 Not only is this approach relatively simple for these folks, who already carry significant responsibility and have little spare time or extra hands for the added task of surveillance detection, but it also provides an opportunity to retain the data for additional review, analysis, comparison, and follow-up. This is a relatively easy, low-cost approach that could be applied to other locations, particularly those identified as being at risk.

Maintaining open communication and an open attitude is key to making people feel comfortable about reporting their suspicions. Although most attention is on terrorism and homeland security issues, a facility is far more likely to experience violence related to a domestic situation or a disgruntled employee. People carry their personal risk with them, and two of the most predictable locations are school and work. We are generally expected to arrive at a particular time and leave at a particular time. Frequently, our routes to these locations are as set as our schedules, which makes school and work some of the easiest places to find individuals. Ultimately, open communication increases the level of natural surveillance in and around a facility, as well as the willingness to report it.

12.9 Accuracy versus Generalizability

The issues of model accuracy versus generalizability and of error types were covered in Chapter 1, but they are worth addressing again within the context of risk and threat assessment.

One might think that it is always desirable to create the most accurate model possible, particularly in the public safety arena. Further examination of the issue, however, reveals several trade-offs when considering accuracy. First, and perhaps of greatest practical importance, is the fact that it is unlikely that we will have all of the information necessary to either generate or deploy models with 100% accuracy. When analyzing historical data in the development of models, it is extremely rare to be privy to all of the relevant information. In most cases, the information resembles a puzzle with many missing pieces, as well as a few extra ones that do not belong. On the other hand, it is possible to generate some very accurate models, particularly when using previously solved, closed cases in which most of the pieces have been identified and fit into place. It is extremely important, however, to be aware of what information is likely to be available and when. For example, when developing models regarding drug-related homicides, we were able to achieve a high degree of accuracy when suspect information was included in the model. In an investigative setting, however, a model has considerably more value to investigators if it relies on information available early in an investigation (see Chapter 13).

The second point to consider is how the model will be used. Very accurate models can be developed using some of the “black box” modeling tools currently available; however, those models are not very user friendly. In other words, they cannot be pulled apart and reviewed in an effort to generate actionable output the way some decision tree models can. Even some of the more complex decision trees are relatively opaque and can be difficult to interpret. If the model is intended to guide an operational plan or risk reduction, like the risk-based deployment models, then some consideration will need to be given to its generalizability. The ability to clearly interpret a model generally comes at the cost of accuracy. Each circumstance will require thoughtful review and consideration of the possible consequences and the nature of errors.

12.10 “Cost” Analysis

No matter how accurate, no model is perfect. In an effort to manage these inaccurate predictions, the nature of the errors in a model needs to be evaluated. Again, all errors are not created equal. The cost of each type of mistake needs to be evaluated. A “confusion” matrix can be generated to determine the nature of errors (see Chapter 4).
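The cost-weighting logic behind this evaluation can be sketched in a few lines of Python; the labels, counts, and costs below are invented for illustration, not drawn from an actual model:

```python
# Sketch: scoring a binary model's errors with a cost-weighted
# confusion matrix. The cost values are illustrative placeholders;
# in practice they come from domain review of each error's consequences.

def confusion_matrix(actual, predicted):
    """Count true/false positives and negatives for binary labels."""
    counts = {"tp": 0, "fp": 0, "fn": 0, "tn": 0}
    for a, p in zip(actual, predicted):
        if a and p:
            counts["tp"] += 1
        elif not a and p:
            counts["fp"] += 1
        elif a and not p:
            counts["fn"] += 1
        else:
            counts["tn"] += 1
    return counts

def total_cost(counts, cost_fp, cost_fn):
    """Weight each error type by its consequences; here a false negative
    (a missed threat) costs far more than a false positive (an
    unnecessary response)."""
    return counts["fp"] * cost_fp + counts["fn"] * cost_fn

actual    = [1, 0, 0, 1, 0, 1, 0, 0]   # hypothetical outcomes
predicted = [1, 1, 0, 0, 0, 1, 0, 1]   # hypothetical model output
counts = confusion_matrix(actual, predicted)
print(counts)                        # {'tp': 2, 'fp': 2, 'fn': 1, 'tn': 3}
print(total_cost(counts, 1, 50))     # 2*1 + 1*50 = 52
```

Two models with identical overall accuracy can produce very different totals under this kind of weighting, which is exactly why the nature of the errors, not just their number, must be evaluated.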

The cost analysis for risk and threat assessment should include the cost of responding, as compared to the cost associated with a failure to respond. In many cases, the potential cost associated with a failure to respond or evacuate in a timely fashion can be enormous. One only needs to look at fatality rates associated with hurricanes prior to accurate prediction models and evacuation to see the enormous cost that can be associated with a failure to act in the face of an imminent threat. Much of the discussion regarding the possible intelligence failures leading up to 9/11 has focused on the number of lives that could have been saved had the threat been recognized and acted upon in a timely fashion.

When making a decision to evacuate in response to a predicted hurricane, officials must also weigh the potential cost associated with a false alarm. Unnecessary calls for evacuation can be extremely expensive, as the economic costs associated with evacuating an area can be enormous. Perhaps more importantly, they also can cause public safety personnel to lose credibility, which can affect future calls for evacuation. Concern regarding this type of “alert fatigue” has been raised regarding the number of times the terrorist threat level has been raised within the United States. Again, activation of an emergency response system or threat level that is associated with a null event also can compromise public safety if individuals begin to ignore a system that has been associated with repeated false alarms.

12.11 Evaluation

Colonel Grossman has discussed U.S. regional responses to mass killings in schools, responses that include mandated “lockdown” drills and a requirement for emergency response plans in schools.18 These responses have included enhanced efforts to identify possible incidents of preoperational surveillance. He has indicated that some schools have distributed digital cameras to school employees to encourage reporting and increase the accuracy of the collected information, and he has noted that these strategies can serve a deterrence function by creating an inhospitable environment for the necessary preoperational surveillance and planning. This particular strategy also would deter pedophiles and could discourage school-based domestic child abductions.

This combined approach highlights the dual metrics that can be used to evaluate effective risk and threat assessment: prevention and response. Ideally, the identification and characterization of a potential threat will support the development of effective prevention strategies. Again, forewarned is forearmed. Although sometimes difficult to measure, one of the goals of data mining and predictive analysis in public safety and security is the identification and characterization of potential threats in support of effective, specifically targeted prevention and deterrence efforts.

Unfortunately, these approaches frequently are imperfect, falling well short of the crystal ball each analyst secretly covets. Therefore, a second measure of the efficacy of risk and threat assessment is effective response planning. The transportation attacks in London during the summer of 2005 underscore this point well. Although the signs and indicators of an impending attack were not discovered until the subsequent investigation, the methodical response planning and high state of preparedness resulted in a response to those incidents that was enviable. The interagency coordination and collaboration in support of an integrated response was flawless, and almost certainly limited the loss of life to that associated with the blasts.

Another excellent example of effective response planning occurred in this country on September 11. Rick Rescorla, Vice President for Security at Morgan Stanley Dean Witter, had the prescience to know that the World Trade Center was at risk for a terrorist attack. While his foreknowledge of the likely method for the attack was startling in its accuracy, it was his insistence on routine evacuation drills that was credited with saving the lives of 2,700 of his colleagues in the South Tower.19

Colonel Grossman has created a very interesting analogy to support preparedness for violence in schools and other public venues by citing the amount of resources and time devoted to fire safety. He highlights the number of fire alarms and sprinklers, the use of fire-retardant materials, and the signage marking exits and posted escape plans, noting that the likelihood of a fire is very small, yet considerable resources are devoted to it. Extending the comparison to the school setting, he notes that the number of students killed by fire in a school during the last 25 years was zero, while the number of kids killed as the result of violence (either an assault or a school mass murder) was in the hundreds; yet fire drills and response plans are mandated, while similar planning for violence (the greater threat) generally does not exist.20

12.12 Output

Figure 12-2 illustrates possible hostile surveillance activity in and around a critical facility and demonstrates an important point regarding the generation of operationally actionable output in risk and threat assessment. As can be seen in the figure, the concentration of activity associated with a specific aspect of the building highlights the location of greatest interest to the individual or group involved in the hostile surveillance. This information can be used to further refine the threat assessment of this building by focusing on the areas associated with the greatest activity and highlighting particular spatial features or attributes worthy of additional review. Moreover, the analysis of the nature of surveillance activity (outlined in Chapter 14) can further underscore the escalation in the operational relevance of the behavior observed. This information, together with Figure 12-2, which indicates spatial refinement and focusing of the behavior, suggests an increased level of risk associated with this facility.

image

Figure 12-2 Figure depicting suspected preoperational surveillance activity associated with a critical facility. The darker dots represent the hypothesized increase in operational value of the surveillance methods employed.
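The kind of spatial and method-based weighting that Figure 12-2 depicts can be sketched as follows; the facility aspects, sighting records, and method weights are hypothetical, standing in for whatever coding scheme an agency actually uses:

```python
from collections import Counter

# Sketch: summarizing logged surveillance sightings by the aspect of
# the facility they involve, weighting each by the hypothesized
# operational value of the method observed (as in the darker dots of
# Figure 12-2). All records and weights here are invented.

sightings = [
    {"aspect": "north", "method_weight": 1},  # casual observation
    {"aspect": "north", "method_weight": 3},  # photography
    {"aspect": "north", "method_weight": 5},  # note-taking, measurement
    {"aspect": "east",  "method_weight": 1},
    {"aspect": "south", "method_weight": 1},
]

weight_by_aspect = Counter()
for s in sightings:
    weight_by_aspect[s["aspect"]] += s["method_weight"]

# The aspect with the greatest weighted activity merits the closest
# review in the threat assessment.
focus, weight = weight_by_aspect.most_common(1)[0]
print(focus, weight)   # north 9
```

Even a simple tally like this turns a scatter of individual reports into a ranked list of locations worthy of additional review.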

An important aspect of surveillance detection is to identify and characterize a possible threat so that effective countermeasures can be used. Ideally, the analytical output should build on the end users’ tacit knowledge and increase their situational awareness in support of effective prevention, deterrence, and response planning. There is a fine line, though, that separates thoughtful analysis and interpretation of the findings, and reading too much into the data. Errors in interpretation can misdirect resources and potentially cost lives. Therefore, it is almost always a better strategy to let the behavioral trends and patterns speak for themselves and reveal the suspect’s intentions than to try to presuppose or second-guess what they might be considering.

12.13 Novel Approaches to Risk and Threat Assessment

In The Gift of Fear, de Becker describes the small voice most of us have that speaks up and tells us when things are not right or that we are in danger.21 This is the “gift” of fear. As previously mentioned, it can be very difficult to encourage people to act on their intuition. Some novel approaches to risk and threat assessment, however, use nontraditional means to tap into these gut feelings and intuition, as well as insider information in some cases.

For example, an interesting extension of the “gut feeling” is the finding that crowds tend to be smarter than individuals. Based on his experience with the popular TV game show Who Wants to Be a Millionaire, Michael Shermer reviewed the literature that supports the accuracy of group decisions as compared to those made by individuals.22 He found that the audience was correct 91% of the time, as compared to the “experts,” who were correct only 65% of the time on the show. In explaining this finding, Shermer notes that individual errors on either side of the correct response tend to cancel each other out, bringing the group response closer to the truth than an individual response. It is important to note that this finding does not apply to all groups. Critical features of the group include autonomy, diversity, and decentralization, which ensure the range of knowledge and opinion necessary for this phenomenon to occur.
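Shermer’s cancellation-of-errors argument is easy to demonstrate with a small simulation; the true value, noise level, and crowd size below are arbitrary choices for illustration:

```python
import random

# Sketch: each simulated "audience member" guesses the true value plus
# independent noise. Errors on either side of the truth cancel, so the
# crowd mean lands far closer to the truth than a typical individual.

random.seed(42)
true_value = 100.0
guesses = [true_value + random.gauss(0, 20) for _ in range(1000)]

crowd_error = abs(sum(guesses) / len(guesses) - true_value)
mean_individual_error = sum(abs(g - true_value) for g in guesses) / len(guesses)

print(f"crowd error:              {crowd_error:.2f}")
print(f"typical individual error: {mean_individual_error:.2f}")
```

Note that the cancellation depends on the errors being independent; if the simulated guessers shared a common bias (violating the autonomy and diversity conditions above), the crowd mean would inherit that bias.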

This has not gone unnoticed by the U.S. Department of Defense. The Pentagon’s Defense Advanced Research Projects Agency (DARPA) supported research in this area, which included their Electronic Market-Based Decision Support and Futures Markets Applied to Prediction (FutureMAP) programs.23 Briefly, these programs were designed to artificially create groups that incorporated the diversity of thinking found to be required for accurate group-based decision making. By tapping into the knowledge from a varied array of experts, it was hypothesized that the consensus opinions would be superior to those generated by individuals, even if these individuals were experts in their respective fields. The DARPA scientists used market-based techniques to compile and consolidate these diverse opinions and generate a unified response. This concept was not new to the Department of Defense. In 1968, naval scientist John Craven assembled a group of submarine commanders in an effort to find the missing submarine Scorpion.24 Using Bayes’ Theorem and consensus expert opinions generated by the commanders, Craven was able to construct an effective search strategy and find the submarine.
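The Bayesian updating at the heart of Craven’s search strategy can be sketched as follows; the search cells, prior probabilities, and detection probability are invented for illustration:

```python
# Sketch of Bayesian search logic: start from consensus prior
# probabilities over search cells (as Craven's commanders supplied),
# and after an unsuccessful search of one cell, apply Bayes' Theorem
# to update every cell's probability.

def update_after_miss(priors, searched, p_detect):
    """Posterior P(wreck in cell | searching `searched` found nothing)."""
    miss_likelihood = {
        cell: (1 - p_detect) if cell == searched else 1.0
        for cell in priors
    }
    evidence = sum(priors[c] * miss_likelihood[c] for c in priors)
    return {c: priors[c] * miss_likelihood[c] / evidence for c in priors}

priors = {"A": 0.5, "B": 0.3, "C": 0.2}   # consensus expert opinion
posterior = update_after_miss(priors, searched="A", p_detect=0.8)

# Searching A and finding nothing shifts belief toward B and C,
# directing the next pass of the search.
print({c: round(p, 3) for c, p in posterior.items()})
```

Iterating this update after each unsuccessful pass is what allows the search to concentrate effort where the remaining probability mass is highest.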

The DARPA programs used these market-based approaches to generate estimates of the likelihood of specific events of interest to the Department of Defense. These included estimates for the development or acquisition of certain technologies, as well as estimates of political stability in certain regions. The ultimate goal of these programs was to consolidate opinion from a variety of sources, including expert opinion and insider information, in an effort to accurately predict future events and avoid surprise attacks. Unfortunately, this program came under attack and was cancelled when it became known publicly that terrorist attacks and assassinations were included in the events of interest. The public outcry that ensued in response to the idea that the United States government was essentially betting on tragedy was more than enough to terminate the program.

Interestingly, this concept still exists. The website Tradesports.com supports an electronic market that includes subjects of interest to those tasked with preventing future terrorist attacks and supporting homeland security. Tradesports.com describes itself as a “person-to-person trading ‘Exchange’” where individuals can trade directly on a variety of events including sports, weather, entertainment, legal outcomes, and politics, to name just a few categories. Tradesports.com also accepts “contracts” on current events, including anticipated events related to the war on terrorism. For example, at the time of this writing, current contracts relate to whether Osama Bin Laden and Abu Musab al-Zarqawi will be “captured/neutralized” by a certain date. Their market-based predictions tend to be highly accurate, most likely due to the same factors that DARPA was trying to exploit. Tradesports.com taps into a very large sample that includes a diverse array of individuals with expertise in a variety of areas. These opinions very likely include insider information in a variety of areas that can further enhance the accuracy of the consensus opinion generated. The exchange consolidates these opinions and generates a consensus probability. These market-based approaches incorporate the speed and agility necessary to effectively track issues that may fluctuate rapidly. Public opinion can change on a dime, far faster than most existing collection methods. As a result, tools like these bring the speed and agility required to instantly document changes and effectively track fluid trends.
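One common mechanism by which an exchange can turn trading activity into a consensus probability is Hanson’s logarithmic market scoring rule; the sketch below is generic and is not a description of Tradesports.com’s actual matching engine:

```python
import math

# Sketch: a logarithmic market scoring rule (LMSR) market maker.
# The instantaneous price of the "yes" contract, given the outstanding
# shares of each outcome, is read as the crowd's consensus probability.
# The liquidity parameter b controls how quickly trades move the price.

def lmsr_price(q_yes, q_no, b=100.0):
    """Price of the 'yes' outcome under LMSR."""
    e_yes = math.exp(q_yes / b)
    e_no = math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

print(lmsr_price(0, 0))     # no trades yet: price is 0.5
print(lmsr_price(50, 0))    # net buying of "yes" pushes the price up
```

Because the price moves with every trade, a mechanism like this documents shifts in consensus opinion instantly, which is precisely the speed and agility noted above.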

It is important to remember, though, that groups also are able to generate some very bad consensus opinions. For example, expectations regarding how significant events may affect gasoline prices or stock prices can actually alter those prices, albeit temporarily. Like any risk assessment tool, group opinion is only as reliable as the inputs. Bad information results in inaccurate and unreliable predictions, regardless of the method used to calculate the risk. The value that can be added by compiling and integrating diverse expert opinions should not be underestimated, however, and it supports the importance of a close working relationship between analytical and operational personnel. As always, solid domain expertise and a healthy dose of skepticism are necessary tools in the evaluation of risk and threat.

12.14 Bibliography

1. Lind, W.S., Nightengale, K., Schmitt, J.F., Sutton, J.W., and Wilson, G.I. (1989). The changing face of war: Into the fourth generation. Marine Corps Gazette, October, 22–26.

2. McCue, C. and McNulty, P.J. (2003). Gazing into the crystal ball: Data mining and risk-based deployment. Violent Crime Newsletter, September, 1–2.

3. Colvin, G. (2006). Ditch the ‘experts.’ Fortune, February 6, 44.

4. Ibid.

5. United States Air Force, Office of Special Investigations (2003). Eagle eyes: Categories of suspicious activities. http://www.dtic.mil/afosi/eagle/suspicious_behavior.html

6. The International Association of Chiefs of Police (2005). Suicide (Homicide) Bombers, Part I. Training Key #581. The International Association of Chiefs of Police, Alexandria, VA. http://www.theiacp.org/pubinfo/IACP581SuicideBombersPart1.pdf.

7. http://www.blackwaterusa.com/btw2004/articles/0726sheep.html

8. Grossman, D. (2005). Lecture for ArmorGroup, International Training, Richmond, VA, October 31.

9. Dorn, M. and Dorn, C. (2005). Innocent targets: When terrorism comes to school. Safe Havens International, Macon, GA.

10. See Dorn, M. and Dorn, C.; and Giduck, J. (2005). Terror at Beslan. Archangel Group, Royersford, PA.

11. Dorn and Dorn.

12. Grossman.

13. Giduck.

14. Ibid.

15. De Becker, G. (1997). The gift of fear. Dell, New York.

16. Ibid.

17. Grossman.

18. Ibid.

19. Stewart, J. and Stewart, J.B. (2003). Heart of a soldier. Simon & Schuster, New York.

20. Grossman.

21. De Becker.

22. Shermer, M. (2004). Common sense: Surprising new research shows that crowds are often smarter than individuals. ScientificAmerican.com; http://www.sciam.com/article.cfm?chanID=sa006&articleID=00049F3E-91E1-119B-8EA483414B7FFE9F&colID=13

23. DARPA - FutureMAP Program. Policy analysis market (PAM) cancelled. IWS - The Information Warfare Site; http://www.iwar.org.uk/news-archive/tia/futuremap-program.htm, July 29, 2003.

24. Sontag, S. and Drew, C. (1999). Blind man’s bluff: The untold story of American submarine espionage. HarperCollins, New York.
