19

MEASURING TEAM DYNAMICS
IN THE WILD

Michael A. Rosen, Jessica L. Wildman and Eduardo Salas

DEPARTMENT OF PSYCHOLOGY, AND INSTITUTE FOR SIMULATION AND TRAINING, UNIVERSITY OF CENTRAL FLORIDA

Sara Rayne

NAVY PERSONNEL RESEARCH, STUDIES, AND TECHNOLOGY


Teams are a way of life in modern organizations. It is easy to take this for granted, but the safety and well-being of people in many industries, and of those they serve, rely on effective communication, collaboration, and coordination. Good teamwork in the cockpit has helped to make commercial flights safer than the highways. Poor teamwork contributes heavily to the estimated 98,000 deaths a year caused by preventable medical errors in the US. For these reasons, we (along with many others) have dedicated our professional careers to understanding and improving team performance where it counts – in high-stakes industries. We have found that team performance measurement is central to understanding and building high-performance teams. It is critical for building and testing new theory and for systematically applying the science of teams to improve productivity, safety, and quality through better teamwork. However, measuring team performance in the field can be very challenging. Organizations have many moving parts, and practical constraints such as time, access, and control over confounding variables frequently threaten to undermine the meaning of any measurements.

This chapter attempts to provide an introduction to some of these real-world challenges involved in measuring team performance in, well, the real world. Specifically, we address four goals. First, we describe features of organizational environments that can complicate the process of measuring team dynamics and introduce several examples from our work of teams operating in these conditions. These examples will be referenced throughout the chapter to illustrate the principles discussed. Second, we provide a brief review of the main methods available. Third, we provide a summary of major challenges to team performance measurement in the wild. Fourth, we discuss strategies for managing the tradeoffs inherent in designing measurement systems when dealing with the challenges in the field. We hope readers see these challenges as we do: opportunities to creatively and adaptively solve unique problems that affect the quality of life of many workers.

Description of the “Wild”: Challenging Characteristics
of Teams and Organizations

One of the great rewards of being an applied scientist is the ability to be an “occupational tourist” to some extent, learning about and getting to know the people in a variety of complex domains. There are many differences across work teams in different industries; however, there are also some unifying features. In this section we present a description of some of these factors as well as examples of teams presenting tough cases for performance measurement.

Features of the team, organizational, and environmental context

Teams are made up of individual members, all of whom can (and frequently do) have very different backgrounds, experiences, and expertise. They are embedded within organizations with different norms, policies, and cultures. Some team members will be a part of multiple teams at once and have to manage and prioritize their contributions across these teams – not to mention finding time to complete their own individual work. While there is no one framework that accounts for all of these differences, field researchers working across domains such as aviation, healthcare, the military, and power generation have developed a common set of organizational and environmental descriptors outlining challenging real-world contexts (Cannon-Bowers, Salas, & Pruitt, 1996; Orasanu & Connolly, 1993). These features include: ill-structured problems; uncertain dynamic environments; shifting, ill-defined, or competing goals; action/feedback loops; time stress; high stakes for errors; multiple players or stakeholders in outcomes; organizational goals and norms; expertise levels of the team members; and task characteristics such as information quantity and decision complexity. Table 19.1 provides examples of the challenges some of these factors pose for team performance measurement. However, in general, they outline a dynamic environment demanding adaptive responses. This changing nature of tasks, environmental conditions, and performance processes is one of the core complications for conducting team performance measurement in the field. Defining the content of a measurement system is relatively straightforward in stable tasks and environments. Unfortunately, the real world does not stand still for measurement.

We have found that the features of the team's task, as well as its structure or design, are strong influences on what good and poor team performance will look like for a given team. As such, they are a good starting point for thinking about team performance measurement.

First, you have to understand the work a team is accomplishing – what task is the team performing? Developing this understanding involves clarifying issues of task type and task complexity, as well as the external context in which the task is performed (McGrath, 1984; Strubler & York, 2007). Essentially, you need to clarify the problem the team is addressing and the conditions under which they have to work. Understanding characteristics of the task will help draw connections between previous research and the present context. This is critical for setting expectations of the types of teamwork behaviors that will contribute to effective performance.

Table 19.1 Features of field settings that complicate team performance measurement

Feature of the environment Issues posed for team performance measurement Tips
Ill-structured problems When the task is ill-defined, it is challenging to develop a model of what ‘good performance’ looks like and consequently difficult to evaluate a team relative to a standard. Performance processes will naturally be different when the task changes. Different performance strategies can develop between teams, and assessing whether one or the other is better, or whether there is equifinality involved, can be difficult. Capture information about what the team is doing concurrently with information about team processes. This will help you to understand different aspects of teamwork during different tasks or phases of performance.
Uncertain dynamic environments When outcomes are unclear and probabilistic, it is difficult to make connections between the team's performance processes (i.e., what they did) and the team's outcomes. As much as possible, identify the other factors at play and measure those as well; for example, the patient's status upon arrival plays a large role in that patient's outcomes at a trauma center. If severity data are captured on patients, they can be statistically controlled in analyses.
Shifting, ill-defined, or competing goals Performance outcomes can only be judged relative to a criterion based on the team's goals. If goals change rapidly, are not defined clearly, or conflict, developing a sense of whether or not the team is meeting a valued goal is a challenge. When this is a primary environmental condition, focusing on the team's ability to identify and communicate goals, short or long term, can be an effective strategy.
High stakes Access to teams performing in high-stakes environments can be limited for safety and security reasons. Leave no pre-existing source of information unturned. Use document reviews, interviews, case reports, existing training, and any other documentation available to make sure you have the best possible understanding of the work. You'll need this to make the most of the limited time you'll have.
Level of expertise of members Teamwork and individual-level taskwork are interdependent and reciprocally causative. Differences in how team members perform their individual tasks have implications for what effective and ineffective teamwork looks like. As clearly as possible, draw distinctions between individual- and team-level performance (e.g., what an individual does, and how the individuals interact as a unit). Keep track of who is on the team for each performance episode so that it can be linked back to individual-level information.

Second, you need to understand the way in which the team is configured to address this task – what is the team's structure? The team's design or structure can be thought of as the organization's solution to the problem (i.e., the task). This includes dimensions such as leadership and communication structure (Dyer, 1984). Structural aspects of a team influence how the members interact. For example, differences in role division and interdependence will change the amount of communication required to be effective (e.g., higher interdependence and role specialization may require more communication as members must rely more intensely on others with different specialties). Table 19.2 provides a description and relevant literature on the dimensions of team task and team design.

Table 19.2 Overview of different team characteristics that have implications for team performance

Team task
Task type The nature of the work performed by the team McGrath (1984), Cohen and Bailey (1997), Driskell, Salas, and Hogan (1987), Mattson, Mumford, and Sintay (1999), Saavedra, Earley, and Dyne (1993), Sundstrom, De Meuse, and Futrell (1990)
Task complexity The extent to which the task places high cognitive demands on the task-doer Campbell (1988), Jehn (1994), Kankanhalli, Tan, and Wei (2006)
Task identity The degree to which the team works on a problem from beginning to end Abbott, Boyd, and Miles (2006), Strubler and York (2007)
Task environment The characteristics of the environment in which the team task is performed, including environmental uncertainty, dynamism, stakes for errors, multiple stakeholders, or conflicting goals Duncan (1972), Hough and White (2004), Lipshitz and Strauss (1997), Locke, Smith, Erez, Chah, and Schaffer (1994), Priem, Rasheed, and Kotulic (1995), Shi and Tang (1997)
Team design
Size A structural variable of a team describing the number of members directly involved in the team's primary goals and tasks Dyer (1984), Sundstrom et al. (1990)
Technology dependence The degree to which team activities are constrained by technological resources such as communication systems, machines, tools, or specialized equipment Devine (2002), Sundstrom et al. (1990)
Interdependence The nature of the interconnections between members of the team, including goal, task, and feedback interconnections Bell and Kozlowski (2002), Cohen and Bailey (1997), Mattson et al. (1999), Saavedra, Earley, and Dyne (1993)
Distribution The degree to which the team members are spread out across time (temporal distribution) and space (physical distribution) Bell and Kozlowski (2002), Kirkman and Mathieu (2005)
Leadership structure The configuration of leadership within the team, which can generally be categorized into basic structures: external manager, designated leader, temporary designated leader, task-based team leader, and distributed team leadership Barry (1991), Erez, LePine, and Elms (2002), Gronn (2002), Klein, Ziegert, Knight, and Xiao (2006), Morgeson (2005), Yang and Shao (1996)
Role division The way the team divides work among members, which can generally be categorized into two basic structures: functional and divisional Harris and Raviv (2002), Moon et al. (2004)
Autonomy Level of control the team has over its own design, structure, and accomplishment of work, which can be divided into three levels: semi-autonomous, self-regulating, and self-designing Abbott et al. (2006), Cohen and Bailey (1997), Sundstrom et al. (1990)
Communication structure The pattern of both verbal and nonverbal communication within the team, which can generally be categorized into three patterns: hub-and-wheel, star, and chain Dyer (1984)
Temporal structure The aspects of the team focused specifically on time, such as team lifespan, performance episode duration, performance episode frequency, and continuity of membership Devine (2002)

In sum, teams can differ in terms of the task they perform, how they are structured to complete this task, and the organizational context in which they are embedded. All of these factors need to be considered when developing measurement protocols for teams in context. Before discussing measurement methods, specific challenges, and strategies that work, we provide a few examples of teams we've worked with.

Example teams in context

The following sections provide brief descriptions of real teams we have encountered as team researchers. These teams posed significant challenges for measurement, and they are vastly different from the three- to four-person teams performing well-defined tasks in laboratory settings.

Multidisciplinary trauma teams

At a minimum, multidisciplinary trauma teams usually consist of surgeons, emergency medicine physicians, nurses, technicians, and first responders as well as the patient. Frequently, residents at various levels of experience are included as well. There are large differences in level and type of expertise of members. These teams assemble rapidly to care for critically injured patients. They are under extreme time pressure and face the highest consequences for errors. Quite literally, people's lives depend on how they perform their technical skills and how they work together as a team. There are multiple transitions of care – critical points where information and authority are transferred between team members (e.g., from first responders to the stabilization team and subsequently to surgery and the intensive care unit). Leadership (or ultimate responsibility for the patient) changes with these transitions, and consequently there can be competing subgoals on the team as well as disagreement on how to prioritize injuries and treatments.

One of the many challenges for measurement posed by this team involves understanding the complexity of the domain and gauging effectiveness. First, teamwork and taskwork (i.e., the parts of a team member's job that are not directly dependent on other team members) are intertwined. Effective communication is, in part, about how team members communicate (e.g., clarity and structures such as closed-loop communication), but it is also about what and when they communicate. Understanding the effectiveness of team communication processes in terms of content and timing requires an understanding of the domain, in this case trauma medicine. Learning the fundamentals of trauma is no small task, and focusing attention on teamwork in the presence of life-and-death situations was difficult for our observers. In one project, our goal was to develop measurement protocols that helped instructors provide diagnostic feedback to team members on the quality of their team processes during trauma cases. Our solution had several features: (a) tying measurement to a phase structure for a trauma case; (b) focusing on critical tasks that occur for every patient, regardless of conditions; and (c) clearly defining team membership and roles.

First, every trauma case tends to unfold in phases (e.g., pre-arrival information from first responders, arrival and handoff, primary survey, secondary survey, etc.). Team performance tends to vary over time (i.e., sometimes a team will communicate effectively; other times they will not). If global or overall ratings of a team's performance are made, this dynamic aspect of a team's interaction is lost. By structuring measurement around the temporal flow of a team's task, observers are capturing smaller chunks of performance. They are measuring at a higher temporal resolution.

Second, we concretely identified critical, recurring team interdependencies. In each phase of performance, we found that there were very specific tasks where teamwork mattered. For instance, during the primary survey (i.e., an initial inspection of a patient's injuries), closed-loop communication was critical to ensure the team leader heard and understood what the primary surveyor was finding. At a different phase, the leader's ability to articulate a clear plan of care for the patient was the critical indicator of effectiveness. Uncovering these very specific team task elements within each phase helped to focus observers’ attention – they knew when to look for what – and it helped trainers to provide effective, process-oriented feedback because they had rich detail about what happened.

Third, we developed a map of the team's role structure. Incorporating the key individual clinical task responsibilities into the measurement tool made it possible to draw distinctions between deficiencies in team interaction and deficiencies in clinical execution. This was necessary for our purposes of providing systematic feedback on teamwork. It was only through creating a highly structured tool that we were able to make sense of the technical complexity occurring during trauma cases.
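To make these three features concrete, here is a minimal sketch (in Python, with invented phase names and marker wording loosely adapted from the examples above) of how such a phase-structured, role-aware checklist could be represented; it is an illustration, not the actual instrument we developed.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class BehavioralMarker:
    description: str                 # concrete, observable team behavior
    role: str                        # team member expected to perform it
    observed: Optional[bool] = None  # None = not yet rated by the observer

@dataclass
class Phase:
    name: str
    markers: List[BehavioralMarker] = field(default_factory=list)

# Observers rate each marker within the phase in which it should occur,
# preserving temporal resolution instead of making one global rating.
trauma_protocol = [
    Phase("Pre-arrival", [
        BehavioralMarker("Leader briefs the team on the incoming first-responder report", "team leader"),
    ]),
    Phase("Primary survey", [
        BehavioralMarker("Closed-loop communication of primary survey findings", "primary surveyor"),
    ]),
    Phase("Secondary survey", [
        BehavioralMarker("Leader verbalizes a clear plan of care to all team members", "team leader"),
    ]),
]

def phase_profile(protocol: List[Phase]) -> dict:
    """Proportion of expected markers observed in each phase (a simple profile)."""
    return {
        phase.name: sum(1 for m in phase.markers if m.observed) / len(phase.markers)
        for phase in protocol if phase.markers
    }
```

Keeping the phase, the expected behavior, and the responsible role together in one record mirrors how the actual tool helped observers separate teamwork problems from problems of clinical execution.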

As frequently happens in applied research projects, the development of a measurement tool becomes an intervention in its own right. It is an opportunity for the experts within a domain to reflect on their processes and come to consensus on key issues (e.g., a defined role structure). Our work with these teams led not only to the tool, but also to several clarifications about roles and a shared expectation of what good teamwork looks like on the trauma team.

Navy security detachments

The US Navy is full of teams doing various types of work. In a project aimed at identifying the basic characteristics of US Navy teams for the purposes of improving selection and training for those teams, we had the opportunity to physically observe a security detachment team in a training exercise. The primary mission of the team is to provide any and all security services to other Navy groups. This mission can include a variety of activities such as providing security watches and patrols for groups embedded downrange in temporary bases, providing security on deployed ships, or more short-term assignments such as escorting vehicles through hostile territory. These teams are much larger than the well-known three- and four-person teams that most researchers focus on, often consisting of 70 or 80 members who identify with one another as a team. The overall mission of the team often requires the group to split into multiple subgroups, and the roles within the team are very fluid, with team members often rotating in and out of tasks depending on the situation. Every member of the team is trained to have the same set of security-related skills such as weapons training, combat skills, and fine-tuned situation awareness. By the very nature of their work, these teams are expected to encounter unexpected life-or-death situations and to be prepared to engage in enemy combat on a moment's notice. The consequences of their work are extremely high – the success of the Navy groups they are protecting depends on their ability to provide security services successfully.

There are several unique challenges within US Navy security teams that make it difficult to measure their performance accurately. First, the extreme nature of the work setting makes it difficult to observe natural team performance without putting the researcher in danger. Few people want to risk being caught in a firefight for the sake of research, and the Navy is not exactly willing to put researchers out where they can get in the way of its sailors either. Second, because of the sheer size of the team, it is nearly impossible to identify the boundaries of the team clearly and to observe every member simultaneously. We observed a team of 75 individuals engaged in a 16-hour overnight field training exercise which included simulated attack sequences such as detecting a “bomb” attached to a visitor to the simulated camp and responding to small attacks on the camp's perimeter. Just trying to identify where the action was taking place at any given time was a challenge. Luckily, the goal of our observation was to get a general understanding of the structure and the nature of the team, so the specific events that occurred were less important than the overall approach the team took to address their work.

The first step toward developing a measurement tool for these teams was to build a literature-based set of team characteristics that we were interested in understanding. From this list of characteristics, an interview protocol was developed that probed team experts in an open-ended manner regarding the structure and nature of their team's interactions (e.g., What tasks does your team engage in? How does your team communicate?). The findings from these interviews were used to develop grounded observational checklists outlining a variety of observable characteristics of the team, such as task type, leadership structure, task interdependence levels, team processes, and physical distribution type. These observation checklists were used as tools to gather information regarding the security detachment while observing their training exercise.

As expected, gathering information regarding the characteristics and performance of the security detachment was less than straightforward. For example, it was nearly impossible to describe the physical distribution of the team at any given point in time, as the physical proximity of the team members differed dramatically depending on what task they were engaging in (i.e., some were co-located in the base's command tent while others were in small groups patrolling the perimeter), which members of the team were being considered at the time, and whether or not electronic communication was considered. The best conclusion we could come to was that their distribution is generally mixed, which is less than ideal in terms of specificity. In another example, the leadership structure of the team also changed dramatically over time. The team does have a formally defined leader, but in one simulated situation, that leader was “injured” and leadership shifted immediately to the next qualified team member. Furthermore, when subgroups of the team would go out on patrol, each of those patrol groups would have a temporarily assigned leader who controlled the moment-by-moment movements of the patrol group while still reporting directly to the detachment's formal leader. The point is that the complexity of the security detachment made it very difficult for us to describe the team using the more clear-cut approaches that have been developed in traditional team science settings.

Methods of Team Performance Measurement
in Field Settings

As with designing any measurement system, decisions about the content (i.e., what to measure) and the method (i.e., how to measure it) are central to building a team performance measurement system. Questions of when to measure are equally important (Wildman, Bedwell, Salas, & Smith-Jentsch, 2010) and will be addressed later in this chapter. Additionally, the purpose of the measurement system, or answering the question of why (e.g., research questions, validation/evaluation of an intervention, certification as in aviation and military teams, feedback during training or continuous improvement), impacts decisions about what, how, and when to measure. We present a brief discussion of content and method below. For more detailed discussions of these and related issues, see Cannon-Bowers and Salas (1997); Salas, Priest, and Burke (2005); Kendall and Salas (2004); and Rosen, Salas, Lazzara, and Lyons (in press).

Content: what is team performance?

The content of team performance measurement systems should be rooted in a theory of teamwork. The Input-Process-Output (IPO) framework and its recent adaptations (e.g., Ilgen, Hollenbeck, Johnson, & Jundt, 2005; Kozlowski & Klein, 2000; Marks, Mathieu, & Zaccaro, 2001) have proven to be valuable tools in our work. Input factors (e.g., member and task characteristics) influence the processes teams engage in (e.g., the interaction of members) as well as emergent states (see Marks et al., 2001) to determine team outcomes. Team performance is the sum of individual and team level actions taken to reach a shared goal. It is a process, and not an outcome (Campbell, 1990). However, in naturalistic settings, the team's processes are not the only topic of interest. It is the linking of team performance (processes) to team effectiveness (an evaluation of outcomes) that organizations and researchers are usually interested in. Team effectiveness has been defined as a judgment or evaluation, relative to some set standards, of the products or outputs of team performance (Salas et al., 2007). This includes evaluations of the quality and quantity of team outputs, member satisfaction with team processes, and the degree to which the team's interactions strengthened or weakened the team's ability to continue to work together in the future (Hackman, 1987).

These ideas drive much of our work. From them, we have learned the importance of distinguishing and subsequently drawing connections between team performance and effectiveness, that is, the processes and outcomes of teams (Cannon-Bowers & Salas, 1997; Salas, Burke, & Fowlkes, 2006). Looking at one in isolation from the other can be misleading. There are numerous theories of team performance or components of team performance to choose from when developing a measurement system (for recent reviews, see Kozlowski & Ilgen, 2006). However, these theories are cast in generalizable terms; that is, they describe what team performance looks like in a very abstract way. This is beneficial and a fundamental part of the scientific method, but it requires that descriptions of teamwork from these theoretical frameworks be contextualized – or made specific – for the team(s) of interest (Salas et al., 2007). For example, back-up behavior in a team involves one team member stepping in to provide assistance to another team member having difficulty completing his or her task. This is a general team process behavior linked to good performance (Salas, Sims, & Burke, 2005); however, the legitimacy of need is a critical factor in understanding if the back-up behavior is an effective or ineffective instance of that process (Porter et al., 2003).

For an observer rating team performance, a contextualized understanding of what is happening with a given team is necessary in order to make judgments about the observed process behavior. In the trauma team example described above, a team member stepping in and taking over another's task could be an indicator of good teamwork if the person receiving the help was having difficulty completing their task (e.g., less experienced team members having problems with a difficult intubation will often trade out with other team members with more skill). However, if there was no real need, this can be a very disruptive behavior that undermines trust on a team. Therefore, creating a measurement tool that grounds the generalizable and theoretically based process behaviors in a team's specific context can greatly increase the reliability and validity of the tool (e.g., discriminating between good and bad back-up behavior). To achieve this, a variety of methods can be employed, including interviews, focus groups, informal or unstructured observations, and document reviews (for a review of these methods, see Rosen, Salas, Lazzara, & Lyons, in press).
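As a small illustration of this kind of contextual grounding, the hypothetical scoring rule sketched below (not a validated instrument) makes the legitimacy of need an explicit part of how an observed act of assistance is interpreted:

```python
# Hypothetical rating logic: the same observed act of assistance is interpreted
# differently depending on whether there was a legitimate need for help.
def score_backup_behavior(assistance_provided: bool, legitimate_need: bool) -> str:
    if assistance_provided and legitimate_need:
        return "effective back-up behavior"
    if assistance_provided and not legitimate_need:
        return "potentially disruptive: assistance without a legitimate need"
    if legitimate_need and not assistance_provided:
        return "missed opportunity for back-up behavior"
    return "no back-up behavior required"

# Example: help given when the recipient was not struggling
print(score_backup_behavior(assistance_provided=True, legitimate_need=False))
```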

Method: what are the main methods for measuring team
performance in the field?

In field settings, observation is still the workhorse of performance measurement systems (Cannon-Bowers & Salas, 1997). Automated performance measurement and communication analysis systems exist (e.g., Dorsey et al., 2009; Foltz & Martin, 2009); however, these are not widely used, and are generally developed for very specific applications. Most researchers and practitioners working in the field are not in a position to spend the time and other resources needed to take advantage of these technologies. Consequently we focus on methods of observation here. Choosing a specific method for an observational protocol always involves tradeoffs. There is no one best universal solution, but there will be solutions that are more effective in specific cases than others. Table 19.3 provides a summary of the main types of observational methods and their associated strengths and weaknesses. We have found that managing these tradeoffs is the key to developing good measurement tools.

Table 19.3 Summary of main observational performance measurement methods

Method Description and examples Strengths/weaknesses Citations
Behaviorally anchored rating scales (BARS) Brief descriptions of effective and ineffective team behaviors are used as anchors associated with each dimension of team performance being measured.
Example: Rating the quality of a team's mutual support during a trauma resuscitation from 1 (team members did not ask for or provide task assistance) to 5 (team members asked for and provided task assistance as needed).
Strengths
Captures information about the quality of teamwork behaviors. Concrete descriptions of teamwork process behaviors facilitate interrater reliability.

Weaknesses
Temporal information can be lost as ratings are summated across time.
Kendall and Salas (2004)
Behavioral observation scales (BOS) Likert-type scales are used to rate the frequency of team process behaviors.
Example: Rating the frequency of closed-loop communication in a trauma case on a scale from 1 (never) to 5 (always).
Strengths
Captures information about the frequency of teamwork behaviors.

Weaknesses
While temporal information is captured, it is summative in nature and does not capture the sequence of behaviors. Relies on the human capacity for frequency estimation, which can be flawed.
Kendall and Salas (2004)
Behavioral markers/event-based methods Behavioral marker measurement systems use descriptions of team process behaviors linked to trigger events (i.e., task or environmental changes requiring a team response).
Example: A behavioral marker of leadership could be a trauma team leader verbalizing a clear plan of care for the patient to all team members after a secondary survey has been completed.
Strengths
Behavioral marker and event-based methods are capable of capturing performance over time (e.g., how teams respond to changes in their environment). Interrater reliability can be easier to develop and maintain than methods requiring more abstract ratings.

Weaknesses
Event-based methods are most useful in simulations where the events can be controlled.
Flin and Martin (2001), Fowlkes, Dwyer, Oser, and Salas (1998), Rosen et al. (2008)

In addition to the scoring method and content, it is critical that the measurement system be implemented in a systematic way. This involves training raters to use the structured protocols, assessing interrater reliability, and developing supporting materials (e.g., manuals and scoring guides). Additionally, it has often been noted that it takes a team to observe a team. Because of the complexities of team performance, one person will not likely be able to capture all of what is happening. Dividing the observational workload is a key strategy for success. In the following section, we describe some of the main challenges to implementing the methods described above.
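As one concrete example of such a reliability check, Cohen's kappa is a common index of agreement for categorical ratings; the sketch below computes it for two raters scoring the same behavioral markers as present or absent (the ratings are invented for illustration):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning nominal categories to the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

rater_1 = ["present", "absent", "present", "present", "absent", "present"]
rater_2 = ["present", "absent", "absent", "present", "absent", "present"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # 0.67 for these invented ratings
```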

Challenges

The preceding discussion has hopefully made salient both the importance and challenging nature of team performance measurement in the wild. In this section, we provide an overview of some of the tough problems researchers have to address when developing a team performance measurement system.

Some aspects of team performance are not directly observable

As described above, theoretical models of teamwork are most commonly built around IPO frameworks. Team performance is the process component of these frameworks. It involves the interdependent actions team members take to meet the team's goals. It has been noted that understanding aspects of team performance is easier than understanding individual performance because much of the team's work is external and observable (in comparison to the internal cognition involved in individual performance; Cooke, Salas, Kiekel, & Bell, 2004). While this is very true, it is not always the case that a full understanding of a team's performance can be developed strictly from observable processes. Implicit group processes are critical to performance outcomes in many situations (Entin & Serfaty, 1999; MacMillan, Entin, & Serfaty, 2004). Expert team performance is often as much about what the team members do not have to communicate explicitly as it is about observable processes (Salas, Rosen, Burke, Goodwin, & Fiore, 2006). A well-executed blind pass in basketball is a perfect example of implicit coordination: one player is able to pass to another in the absence of any explicit communication.

To complicate the matter further, teams will develop different coordination strategies (i.e., mixes of implicit and explicit coordination) based on differences in input variables such as team characteristics (e.g., distribution), technology in use, and task and organizational characteristics (Espinosa, Lerch, & Kraut, 2004). Consequently, identifying the set of coordination behaviors representing ‘good’ team performance (i.e., process behaviors linked to effective outcomes) is problematic. There are no ‘one size fits all’ team performance measurement systems, no out-of-the-box or off-the-shelf solutions. As observation-based methods are still the primary means of measuring team performance in the wild, these balances between implicit and explicit coordination strategies pose a major challenge. The best solution for a given situation usually involves a mix of methods, with teamwork behaviors captured through observation and unobservable aspects of performance captured through other means such as self-report. This is the solution we adopted in a study of operating room (OR) teams: team knowledge and attitudes were captured via survey at different points in time (some measures taken after each operation), and behaviors were observed by external raters.

Identifying boundaries

Knowing who is on the team is critical for focusing observation sessions. Many times, however, defining the boundaries of team membership is not as simple as it sounds. Disentangling teams from their surroundings or mapping out the interdependencies of members can prove quite challenging. If an organization is represented as a network with people as nodes and task interdependencies as links, the problem usually becomes clear: finding where the team begins and ends is difficult, and as with any densely connected network, drawing boundaries is somewhat arbitrary.

For example, even in something as tightly constrained as an OR, the boundaries of team membership can become blurred. Within the actual room during procedures, there are usually groups of people performing the procedure (i.e., the surgeon, assistant, surgical technician), managing the patient's airway and sedation (i.e., anesthesiologist, nurse anesthetist), and a person managing the overall workflow (i.e., a scrub nurse documenting and managing). This can be viewed as one complex team; however, the interdependencies do not stop with these groups of people. There are several other groups of staff tightly coupled to the immediate OR team, including support staff involved in room turnover in between procedures (i.e., room cleaning and sterilization; room turnover is a tightly monitored metric of efficiency in most ORs), surgical supply personnel responsible for stocking a room with the necessary equipment for a given procedure, and OR department administration responsible for scheduling and ensuring appropriate resources are available. While none of these groups of people are present during an actual surgery, their work can directly impact the work of the OR team.

The same difficulty applies to large teams such as the US Navy security detachment. This group of 70 or more members is formally identified as a unit and trains together as an intact group, but much of the work they are engaging in is segmented into multiple smaller units. Within each perimeter patrol team, two or three team members are tightly coupled and act independently until there is an event to report back to the command team, which is co-located and tightly coupled within the central command center in the base. These subgroups can be seen as parts of a larger complex team or as interconnected teams of their own – the divisions are quite subjective.

Along these lines, it has been noted that not all team members contribute equally to the team's processes and outcomes (Humphrey, Morgeson, & Mannor, 2009). Some members are more central than others to performance outcomes, and consequently should be represented more in the measurement of processes. In order to deal with this issue, careful attention needs to be paid to the purpose of measurement. Clearly articulating a purpose for measurement is the logical first step, but one that is usually not given due attention. A measurement system cannot capture everything. The focus – including the team members targeted for measurement – is determined by the questions being asked.

Changes over time

All teams, regardless of differences such as type, size, or composition, develop, perform, and change over time. Therefore, it can be challenging to capture the true characteristics or dynamics of a team when observing or measuring for only a short duration. For example, even if a team is observed for an entire performance episode, that particular performance episode may not be representative of the team's overall performance. They may have been influenced by the observer's presence and driven to perform at their maximum level rather than their typical level, or something about that particular performance episode could have differed drastically from the average performance episode (e.g., there was a rare change in the environment).

In order to get a true understanding of a team's average performance, measurement must span a significant amount of time, which can be difficult if not impossible in a practical sense. Furthermore, team composition and structure characteristics often change over time as well. Some team members leave and new ones come in and have to be socialized and trained, creating changing social dynamics and coordination issues that can influence team dynamics. Changing environments and task requirements cause the team to change aspects such as communication and leadership structure in response. These changes can become very difficult to capture in the field when performance is sampled at a ‘low frame rate’ (i.e., small observation periods with large gaps in between). As described above, we usually attempt to uncover some type of temporal structure to a team's work (e.g., phases of performance, or critical events or tasks). For example, when observing the Navy security team engaged in the training exercise, we discovered that when a security threat occurred, the leadership structure would shift from more formal, designated leadership to task-based leadership in which the team members with the most expertise in an area took the lead in resolving the threat. Once the threat was handled, the leadership structure returned to a formal external leader. This understanding of the temporal structure helps to anchor assessments of teamwork across time and create a profile or pattern of performance as opposed to a single snapshot.
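One simple way to sample at a higher 'frame rate' is to record, for each observation segment, the ongoing task and the structural codes in effect at that time; the sketch below uses invented codes and times to show how shifts like the one just described can be preserved rather than averaged away:

```python
# Invented observation segments from a single training exercise; each segment
# records the ongoing task and the leadership structure coded at that time.
segments = [
    {"minute": 0,   "task": "routine patrol",   "leadership": "formal designated leader"},
    {"minute": 90,  "task": "perimeter threat", "leadership": "task-based expert leader"},
    {"minute": 150, "task": "routine patrol",   "leadership": "formal designated leader"},
]

def structure_profile(segments, dimension):
    """Sequence of codes over time for one structural dimension (a profile, not a snapshot)."""
    return [(s["minute"], s[dimension]) for s in segments]

print(structure_profile(segments, "leadership"))
```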

Balancing methodological rigor and practicalities

The reliability and validity of the measurement tools used in the field are of utmost concern. With poorly designed or implemented measurement systems, data collections are at best a waste of time, in that the data are unusable, and at worst actually shape answers to research questions or organizational decisions on the basis of biased or otherwise compromised data (Brannick & Prince, 1997). In field research, especially in applied communities, there are many constraints and pressures threatening to undermine ideal performance measurement system design and implementation. For example, there frequently will not be open-ended opportunities to perform observations. Access to teams in field settings is almost always limited. Consequently, all of the time necessary for the traditional process of iterating the development of tools and evaluating the psychometric properties of an observation protocol in context will not usually be available. Obtaining the necessary reliability and validity evidence involves managing an imperfect situation and finding creative ways to work within the constraints inevitably placed upon the field researcher. Developing ‘quick start’ observer training that uses prerecorded videos of team performance to establish interrater reliability, together with ongoing monitoring of rater drift, are strategies we have found invaluable.
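One way to operationalize ongoing monitoring of rater drift is to have each observer periodically re-rate the same 'gold standard' prerecorded video and track agreement with the reference ratings over time; the sketch below uses invented data and an assumed (not empirically derived) agreement threshold:

```python
REFERENCE = ["present", "absent", "present", "present", "absent"]  # expert ratings of the gold-standard video
DRIFT_THRESHOLD = 0.8  # assumed acceptable level of agreement, not a published cutoff

def agreement(ratings, reference=REFERENCE):
    """Simple percent agreement with the reference ratings."""
    return sum(r == g for r, g in zip(ratings, reference)) / len(reference)

calibration_sessions = {
    "week 1": ["present", "absent", "present", "present", "absent"],
    "week 4": ["present", "absent", "absent", "present", "absent"],
    "week 8": ["absent", "absent", "absent", "present", "absent"],
}

for session, ratings in calibration_sessions.items():
    score = agreement(ratings)
    flag = "  <- recalibrate this rater" if score < DRIFT_THRESHOLD else ""
    print(f"{session}: agreement {score:.2f}{flag}")
```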

What Works: Strategies for Success

As discussed previously, developing a team performance measurement system for field settings always involves making decisions about which methods to use, what content to focus on, and when to collect the data. All of these decisions are bound by the constraints of the field and developing a measurement system involves balancing the tradeoffs of different approaches with the uniqueness of the situation. In this section, we provide several guiding strategies we follow when entering into a new domain. These strategies are summarized in Table 19.4.

Table 19.4 Summary of strategies for success

What works? Tips and considerations
1. Start with a clear understanding of the why, what, and how of your measurement
What purpose will the measurement data serve in the end?
What aspects of performance need to be captured?
What measurement method is best suited for that content?
What are the practical constraints on performance measurement?
2. Capturing the dynamics of team dynamics ... measure team performance over time
What aspects of team structure and composition need to be measured over time?
What aspects of performance are inherently time-based?
3. Get the most data in the least amount of time: use structured protocols
Take the time to plan your measurement approach before you begin
Make measurement tools as detailed and clear as possible to maximize efficiency
Use structured protocols to prioritize measurement goals in case of unexpected changes
4. Make your measurement diagnostic: capture performance at multiple levels
Remember that measurement should go beyond description: it should evaluate and diagnose the causes of performance as well
Link processes to outcomes in order to uncover the underlying causes of performance
Link multiple levels to uncover how individual performance influences team-level performance
5. Teams don't perform in a vacuum: capture contextual influences to get the big picture
Consider top-down influences on team performance: organizational and environmental factors
Use team classification schemes to identify potential contextual factors and measure those factors in order to take top-down influences into account

Begin with a clear understanding of the why, what, and how of the measurement

It is critical to have a clear purpose in mind when undertaking any team performance measurement procedure. Like good theory, the purpose drives the entire measurement process. The purpose of performance measurement determines a variety of decisions regarding the measurement approach such as whether single or multiple measures are necessary, the processes and outcomes to be measured, and how those constructs will be captured.

When designing a measurement system, you should always be able to answer the following question: what is the intended outcome of the measurement? In other words, what purpose will the performance measurement data serve in a practical sense? For example, in the trauma team example described above, the data capture was intended to drive decisions about the type of feedback the team needed as well as whether or not they needed some type of team training. In the Navy security team example, the data were aimed at describing the nature of the teams in a specific enough manner to match team member knowledge, skills, and abilities to the teams for selection and training purposes. More generally, performance measurement data may be used as part of a training needs analysis to develop training for the teams, as evaluative data aimed at providing the team members with feedback (e.g., Pritchard, Youngcourt, Philo, McMonagle, & David, 2007) and making human resource or programmatic decisions (e.g., Tannenbaum, 2006), or more basically as scientific research data aimed at a better understanding of this particular team in the field.

Answering the why question clearly and early in the measurement design process will help to determine the answer to the next critical question: what specific content of team dynamics will be measured? As described above, this will involve things such as team performance behaviors and team effectiveness outcomes. We have found that being as concrete and specific as possible about the content of the measurement tool is almost always a productive endeavor. For example, we were only able to judge the quality of the trauma team's communication when we had a detailed mapping of what they should be communicating and when the communication should occur (e.g., when a leader needs to verbalize a clear plan of care for the patient). If we had simply tried to rate ‘team communication’ on a scale from 1 to 5 (highly ineffective to highly effective), we would not have been able to do that reliably. We needed to be very specific about what good and bad communications were in order to rate them. It is important to pinpoint what aspects of team performance are to be captured before beginning a measurement procedure, as the content of the measurement will dictate the answer to the third critical consideration: which measurement tool or approach is most appropriate for any given team dynamic construct?

Different measurement approaches are more or less appropriate for capturing different aspects of performance such as teamwork-related behaviors and taskwork-related behaviors (Cannon-Bowers, Burns, Salas, & Pruitt, 1998). For example, if the purpose of the measurement is to develop training to improve the communication practices within a team (i.e., a teamwork behavior), then the team's existing communication patterns would be a critical component of performance to capture. In this situation, the measurement is aimed at training communication skills, and therefore using audio-recording equipment to capture communication over time, which can later be analyzed using communication coding schemes, may be the most appropriate approach.
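As a rough illustration of what such coding might yield, the sketch below tallies utterances that have already been assigned codes from a simple, hypothetical communication coding scheme; real schemes are considerably richer and require trained coders:

```python
from collections import Counter

# Each tuple is (speaker role, assigned communication code); the codes and
# example utterances are invented for illustration only.
coded_transcript = [
    ("leader",   "call_out"),        # e.g., "Start a second IV line."
    ("nurse",    "check_back"),      # e.g., "Second IV line, got it."
    ("surveyor", "information"),     # e.g., "Breath sounds diminished on the left."
    ("leader",   "plan_statement"),  # e.g., "Plan is chest tube, then CT."
]

code_counts = Counter(code for _, code in coded_transcript)
print(code_counts)  # frequency of each communication code across the episode
# A crude lower bound on completed closed-loop exchanges:
print("closed loops:", min(code_counts["call_out"], code_counts["check_back"]))
```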

Conversely, if the purpose of performance measurement is to evaluate the team's overall effectiveness and implement a reward system for achieving the team's primary goal (i.e., a taskwork behavior), then the content of the measurement may only need to capture the outcomes of the team's performance considered important by the stakeholders (e.g., final sales, number of products completed). In a more basic research-oriented situation, the purpose of measurement may be to capture a variety of theoretically driven antecedents, processes, and consequences related to team performance. This purpose would likely require the measurement system to capture multiple aspects of team performance simultaneously (e.g., cohesion, communication, coordination, overall effectiveness), which may require multiple measurement techniques or approaches.

Finally, it is important to consider the practical constraints on performance measurement in the field early in the decision process. Specifically, regardless of the purpose of your measurement and the corresponding ideal measurement method, unless you have the resources available to proceed with the ideal method, a less ideal path may be the only available option. Perhaps you do not have access to enough observers to perform any direct observational methods, or you need results immediately and therefore do not have time to transcribe and code communication data. In this case, the performance measurement system should be designed to capture the most important aspects of the team's performance with the most robust method that the practical constraints will allow. The key, however, is to identify all practical constraints early on in the performance measurement system design process in order to avoid delays and setbacks once measurement has started.

Frequently, when measuring team performance in the wild, the purpose on some level will be to improve that performance. Salas and colleagues (2009) have proposed a set of best practices in the management of team performance organized around the adaptive, leadership, management, and technical capacities underlying organizational effectiveness (Letts, Ryan, & Grossman, 1998). As detailed in Table 19.5, these best practices can be used to help focus the development of a team performance measurement system aimed at increasing effectiveness.

Capturing the dynamics of team dynamics … measure team performance over time

As the phrase “team dynamics” implies, a static snapshot of a team is usually insufficient to generate a meaningful understanding of a team. The nature and quality of a team's performance can vary greatly at different phases or points in its performance cycle. The structure of the team may change over time as well (e.g., leadership structure, communication structure), or the composition of the team may change as new team members replace departing members. Some teams may vary more over time than others, but all teams encounter temporal factors that influence their performance, such as changing deadlines and time pressure.

Table 19.5 Summary of best practices for team performance management (from Salas, Weaver, Rosen, & Smith-Jentsch, 2009)

Best practice Tips Selected references
    Adaptive capacity  
1. Build flexible and adaptable team players Build mutual performance monitoring and back-up behavior skills in team members using cross-training and other methods. Build mutual trust among team members. Burke, Fiore, and Salas (2003), Porter et al. (2003), Salas, Sims, and Burke (2005)
2. Build a big play book — encourage a large team-task strategy repertoire Provide a safe environment to practice new performance strategies (e.g., use simulation-based training). Orasanu (1990), Salas, Priest, Wilson, and Burke (2006)
3. Create teams that know themselves and their work environment Team cue recognition training. Perceptual contrast training. Build team communication skills (information exchange, closed-loop communication). Salas, Cannon-Bowers, Fiore, and Stout (2001), Wilson, Burke, Priest, and Salas (2005)
4. Build teams that can tell when the usual answer isn't the right answer Develop team planning skills. Use guided error training to promote an understanding of when the routine solution is not the appropriate solution. Lorenzet, Salas, and Tannenbaum (2006)
5. Develop self-learning teams — train teams to help themselves Team self-correction training; team leader debrief skills. Foster a team learning orientation, psychological safety. Bunderson and Sutcliffe (2003), Edmondson (1999), Smith-Jentsch et al. (1998)
6. Don't let the weakest link have the strongest voice — build teams that take advantage of their resources Develop a strong team orientation in team members. Promote assertiveness. Build diversity of expertise and transactive memory. Eby and Dobbins (1997), Hollenbeck, Sego, Ilgen, and Major (1997)
    Leadership capacity  
7. Articulate and cultivate a shared vision which incorporates both internal and external clients Ask how the team will make a difference for internal and external clients.
Establish measurable indicators of team success.
Determine what the team hopes to accomplish in its wildest dreams.
Briner, Hastings, and Geddes (1996), Christenson and Walker (2004), Williams and Laugani (1999)
8. Create goals the team can grow with: build hierarchically aligned goals with malleability and flexibility at both the individual and team level Include all team members in goal generation.
Set team and individual level goals which are aligned with upper level goals.
Allow overall goals to have wiggle room and build flexibility into subgoals.
Ensure that there are multiple strategies to reach the goal.
Getz and Rainey (2001), Locke and Bryan (1967)
9. Build motivation into the performance management process — make clear connections between actions, evaluations, and outcomes Team members should be encouraged and rewarded for praising colleague accomplishments and being supportive during setbacks. Only utilize group-level incentives and rewards for work performance. Create opportunities for each member to take major responsibility for some elements of the task. Make the connections between actions, results, evaluations, and outcomes clear. Oser, McCallum, Salas, and Morgan (1989), Pritchard and Ashwood (2008), Swezey and Salas (1992)
10. Team leaders must champion coordination, communication, and cooperation Build the team to reflect the various forms of expertise required by the tasks at hand. Foster the use of external sources (i.e., temporary members, consultant team members) if the expertise is not inherent in the team. Divide tasks to suit individual expertise, but do allow opportunities for growth. Remember that leader does not equal expert; defer to those with the expertise (see Best Practice #5). Dyer (1984), Salas, Wilson, Murphy, King, and Salisbury (in press), Zalesney, Salas, and Prince (1995)
11. Understand the “why”: examine both failures and successes during debriefs Review instances of both effective and ineffective behavior during feedback sessions. Recognize failures as learning opportunities. Ellis and Davidi (2005), Zakay, Ellis, and Shevalsky (2004)
    Management capacity  
12. Clearly define what to measure: Develop and maintain a systematic and organized representation of performance Develop a document or set of documents explicitly linking KSAs to performance metrics, feedback, and outcomes (e.g., reinforcement, promotion, pay). The purpose of measurement should drive measure development. Bartram (2005), Kurtz and Bartram (2002), Stevens and Campion (1994)
13. Uncover the ‘why’ of performance: develop measures which are diagnostic of performance Foster an understanding of why performance was effective or ineffective.
Incorporate measures which include outcomes and processes. The purpose of measurement should drive development.
Avoid ‘easy’ measures which miss large amounts of performance related information.
Measure performance from multiple perspectives. Solicit input from team members (e.g., using 360° feedback).
Develop a discipline of pre-brief–performance–debrief.
Cannon-Bowers and Salas (1997)
14. Measure typical performance continuously Measure performance over time. Choose to measure what employees “will do.”
Automate as much of the performance monitoring process as possible.
Provide ongoing, diagnostic feedback which identifies and removes roadblocks to effective performance.
Klehe and Anderson (2007), Sackett, Zedeck, and Fogli (1988)
    Technical capacity  
15. Include teamwork competencies in formal performance evaluations Offer both team and individual level reinforcement (both formal and informal) Murphy and Cleveland (1995), Salas, Kosarzycki, Tannenbaum, and Carnegie (2005)
16. Have a plan for integrating new team members, and execute it Clearly define teamwork and taskwork competencies needed for effective performance and ensure new team members possess these KSAs. Cannon-Bowers and Salas (1997), Levine and Choi (2004)
17. Assess and foster shared mental models Measure and provide feedback (cue-strategy associations). Cross-training, interpositional knowledge training. Encourage a culture of learning. Develop a strong sense of collective trust, ‘teamness’, and confidence. Blickensderfer, Cannon-Bowers, and Salas (1998), Cannon-Bowers, Salas, and Converse (1993), Cooke, Gorman, Duran, and Taylor (2007), Mohammed and Dumville (2001)
18. Develop or select for individual personal discipline and organizational skills Include these skills in KSA and competency definition. Ensure modes of distributed communication, information systems, and access to necessary organizational materials remotely. Ancona and Caldwell (2007)
19. Communicate the “big picture”: facilitate a global awareness of competing goals and deadlines of all teams Coordinate meetings of team leaders to discuss multiple deadlines.
Create a global Gantt chart with real-time updates if possible.
Mortensen, Woolley, and O'Leary (2007)
20. Maturity counts — recognize that a multiteam framework works best for mature projects Apply MTM to mature teams or projects.
Have at least one member 100% dedicated to a single team during the kickoff periodto ensure continuity.
Mortensen, Woolley, and O'Leary (2007)
21. Foster trust — cultivate a culture of information sharing Foster information sharing. Cultivate a culture of error reporting and feedback, which focuses on learning from mistakes, notpunishment. Salas, Sims, and Burke (2005), Webber (2002)

KSA = Knowledge, Skills, and Attitudes; MTM = Multi-team Management.

Furthermore, some aspects of team dynamics, such as coordination and entrainment processes, are inherently temporal in that they are focused on the synchronization and timing of the team's actions. Marks, Mathieu, and Zaccaro (2001) suggested that teams cycle through action and transition phases throughout their life as a team. In other words, teams go through a series of input–process–output episodes, some episodes focused on planning and preparation activities (i.e., transition phases) and others focused on actually enacting the previously created plans (i.e., action phases). This framework suggests that if team performance measurement were to capture only a snapshot of team performance, it could capture only one of these two phases. In order to capture the pattern of and interaction between action and transition cycles in a team, it is important to measure performance over time. Additionally, team adaptation, or the process of adjusting and changing functioning over time in response to dynamic environmental cues (Burke, Stagl, Salas, Pierce, & Kendall, 2006), inherently cannot be measured at a single point in time.
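For readers building software support for such measurement, the brief sketch below illustrates one way this temporal requirement might be handled: each measurement is time-stamped and tagged with its episode phase, so a measurement plan can be checked for coverage of both transition and action phases. This is our illustration rather than a tool from the work cited above; the class and function names (e.g., TimedObservation, coverage_by_phase) and the example processes are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Phase(Enum):
    TRANSITION = "transition"   # planning and preparation activities
    ACTION = "action"           # enacting previously created plans

@dataclass
class TimedObservation:
    """A single time-stamped team measurement, tagged with its episode phase."""
    minute: int
    phase: Phase
    process: str      # e.g., "mission analysis", "coordination"
    rating: float

def coverage_by_phase(observations: list[TimedObservation]) -> dict[Phase, int]:
    """Count observations per phase, to confirm the plan samples both
    transition and action episodes rather than a single snapshot."""
    counts = {Phase.TRANSITION: 0, Phase.ACTION: 0}
    for obs in observations:
        counts[obs.phase] += 1
    return counts

# Example: ratings collected at several points across one performance episode.
episode = [
    TimedObservation(0, Phase.TRANSITION, "mission analysis", 4.0),
    TimedObservation(15, Phase.ACTION, "coordination", 3.5),
    TimedObservation(30, Phase.ACTION, "backup behavior", 4.5),
]
print(coverage_by_phase(episode))   # counts per phase across the episode
```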

Get the most data in the least amount of time: use structured protocols

When trying to capture team dynamics in the wild, it can be tempting simply to gather informal observations of performance, conduct a few informal interviews, and consider the job done. However, we have found that an upfront investment is required to ensure a high-quality performance measurement system (see also Salas, Burke, & Fowlkes, 2006). Specifically, a good performance measurement system cannot be developed haphazardly or at the last minute, and informal observations are not enough. The design of any measurement tool requires ample time, labor, and expertise to ensure that the protocol is as structured, straightforward, and intuitive as possible. In other words, tools for performance measurement, such as researcher observation protocols or communication coding schemes, should provide a detailed, uniform template that ensures each observer or coder captures all desired aspects of performance in the same way and makes interpretations in a consistent manner. This ensures that the data captured will be as useful and usable as possible, and that the process of performance measurement will be efficient and precise. If measurement approaches such as observations or communication analysis are left too vague or open to an individual researcher's whim, the data gathered across sources will not be consistent enough to combine in any meaningful way, and data collection may take more time than necessary, wasting resources. Therefore, the more clearly the measurement protocol describes to the observer or coder (a) what dimensions of performance to capture and (b) how exactly to record and interpret these data, the better. For example, in the Navy security team observation, the protocol was designed in a relatively simple, multiple-choice structure (e.g., leadership structure is hierarchical, shared, emergent, or other) with very detailed descriptions accompanying each choice to aid the rater in making accurate judgments.
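To make the idea of a structured, multiple-choice protocol concrete, the sketch below shows one way such a template might be represented so that every observer records the same dimensions against the same fixed response options and anchor descriptions. It is an illustrative sketch under our own assumptions, not the actual Navy protocol; the ProtocolItem class, the priority field, and the example anchors are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProtocolItem:
    """One multiple-choice item in a structured observation protocol (illustrative).

    Each dimension of teamwork to be observed is paired with a fixed set of
    response options and detailed anchor descriptions, so every rater records
    and interprets it the same way.
    """
    dimension: str                  # what aspect of performance to capture
    options: dict[str, str]         # response option -> detailed anchor description
    priority: int = 1               # 1 = must capture; larger = "wish list"
    response: Optional[str] = None  # filled in by the observer

    def record(self, choice: str) -> None:
        # Reject anything outside the fixed option set to keep data consistent across raters.
        if choice not in self.options:
            raise ValueError(f"'{choice}' is not a valid option for {self.dimension}")
        self.response = choice

# Example item, loosely modeled on the leadership-structure question described above.
leadership = ProtocolItem(
    dimension="Leadership structure",
    options={
        "hierarchical": "A single formal leader directs task assignments and decisions.",
        "shared": "Leadership functions are distributed among several members.",
        "emergent": "A member without formal authority takes on leadership functions.",
        "other": "None of the above; describe in field notes.",
    },
)
leadership.record("shared")
```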

Another advantage of structuring performance measurement protocols is that it allows measurement goals to be prioritized. More specifically, if the performance measurement system is structured in such a way that the most critical aspects of the team's dynamics are captured first, and other “wish list” items are collected as opportunity allows, then when problems arise or constraints appear (as is often the case in the field), the measurement approach can be adjusted as appropriate while retaining the most critical aspects of performance. In the ideal situation, all aspects of performance would be captured. However, in the event that the performance episode being observed is unexpectedly cut short, at the very least the observer will have captured the most important aspects of performance as dictated by the prioritization. Thus, structuring performance measurement protocols can ensure that the captured data will still be usable even when things do not go quite as planned.
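Continuing in the same illustrative spirit, the short sketch below shows how a prioritized protocol might degrade gracefully when an observation episode is cut short: must-capture dimensions are observed first, and “wish list” items only as time allows. The dimension names, the priority values, and the one-item-per-observation-slot assumption are all hypothetical.

```python
# Hypothetical priority-ordered protocol: 1 = must capture, higher = "wish list".
protocol = [
    ("Communication quality", 1),
    ("Leadership structure", 1),
    ("Backup behavior", 2),
    ("Workspace layout / ambient noise", 3),
]

def plan_capture(items: list[tuple[str, int]], slots_available: int) -> list[str]:
    """Return the dimensions to observe, most critical first, within the time available."""
    ordered = sorted(items, key=lambda pair: pair[1])
    return [name for name, _priority in ordered[:slots_available]]

# If the episode is unexpectedly cut short (e.g., only two observation slots remain),
# the must-capture dimensions are still the ones that get recorded.
print(plan_capture(protocol, slots_available=2))
```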

Make your measurement diagnostic: capture performance at multiple levels

Performance measurement is always a means to an end. The data are destined for some purpose, ranging from improving understanding to designing training or management interventions. Regardless of the specific purpose, all performance measurement systems should serve three core functions: describing, evaluating, and diagnosing the team's performance. Description of team performance is necessary, but not sufficient, for useful performance measurement data. Without evaluating or diagnosing the causes underlying performance outcomes, no measurement system can be used to develop training, serve as an assessment tool, or truly inform the science of teams.

Diagnosing the causes of effective and ineffective performance is perhaps the most important component of the performance measurement process because, without an understanding of the causes underlying performance, the development of feedback and training becomes difficult and suboptimal, if not impossible. Identifying deficiencies in performance is not very useful unless you can also identify the underlying causal mechanisms that must change in order to rectify those deficiencies. Diagnosing the underlying causes of effective and ineffective performance is the only way performance measurement can be used to manage and improve performance. The importance of linking specific performance outcomes to specific causal mechanisms in order to achieve effective performance measurement has been highlighted by other organizational researchers (e.g., Ittner & Larcker, 2003).

In order for any performance measurement system to provide the most accurate description, evaluation, and diagnosis of performance, it should take an integrative, multilevel approach to measurement. In terms of team performance measurement specifically, this means measurement should capture both individual- and team-level processes and outcomes (Cannon-Bowers & Salas, 1997). Process measures focus on the strategies and procedures used to accomplish a task, while outcome measures focus on quantifying the end result of those processes. Outcome measures alone are not diagnostic of performance, as it is the process measures that provide an explanation of the underlying causes. Furthermore, team-level measures must be supplemented with individual-level measures, as individual behavior can influence team-level processes and outcomes. This position has also been advocated quite strongly throughout the team literature (e.g., Salas, Burke, Fowlkes, & Priest, 2003). In order to diagnose and manage team performance accurately, both individual- and team-level processes and outcomes must be captured, and the linkages between them must be delineated.
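As a rough illustration of what such an integrative, multilevel record might look like when implemented, the sketch below keeps individual-level and team-level process and outcome measures side by side so that outcomes can be traced back to the processes, at both levels, that might explain them. The structure and names (MemberRecord, TeamRecord, diagnostic_summary) are our own assumptions for illustration, not a published instrument.

```python
from dataclasses import dataclass, field

@dataclass
class MemberRecord:
    """Individual-level slice of a team performance record (illustrative only)."""
    member_id: str
    process_ratings: dict[str, float]   # e.g., {"closed-loop communication": 4.0}
    outcomes: dict[str, float]          # e.g., {"task errors": 1.0}

@dataclass
class TeamRecord:
    """Team-level processes and outcomes, linked to the individual records beneath them."""
    team_id: str
    members: list[MemberRecord] = field(default_factory=list)
    process_ratings: dict[str, float] = field(default_factory=dict)   # team processes (the "how")
    outcomes: dict[str, float] = field(default_factory=dict)          # team outcomes (end results)

    def diagnostic_summary(self) -> dict[str, dict[str, float]]:
        """Pair outcomes with the process data needed to explain them at both levels."""
        return {
            "team_processes": self.process_ratings,
            "team_outcomes": self.outcomes,
            "individual_processes": {
                m.member_id: sum(m.process_ratings.values()) / max(len(m.process_ratings), 1)
                for m in self.members
            },
        }
```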

Teams don't perform in a vacuum: capture contextual influences to get the big picture

In the previous section, the influence of individual-level processes and outcomes on team performance was discussed, with an emphasis on how lower-level performance can influence performance in a bottom-up direction. However, there are also influences that can impact team performance from a top-down direction. We cannot fully understand the dynamics of various teams without capturing how the surrounding context impacts processes and performance. Additionally, not all teams operate in the same environment, meaning that the way in which environmental factors influence team processes and outcomes differs across teams. In other words, the differing impact of contextual factors may cause different teams to engage in extremely different sets of processes. Thus, it is essential that performance measurement systems capture the differing impact of environmental context on team dynamics for each individual measurement situation.

A recent review of the team classification literature delineated seven underlying contextual dimensions critical to the classification of teams: (1) fundamental work cycle; (2) physical ability requirements; (3) temporal duration; (4) task structure; (5) active resistance; (6) hardware dependence; and (7) health risk (for full descriptions, see Devine, 2002). For example, military teams often perform under conditions of gunfire and other potentially life-threatening occurrences (i.e., health risk). This contextual influence can dramatically alter team processes and performance (e.g., stress, handling injuries) and therefore should be captured as part of the performance measurement system in order to take these effects into account. Organizational and environmental variables such as those described by Devine (2002) can have a very real impact on the processes and outcomes of team performance, and should be included in any performance measurement system.
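If contextual dimensions are to be recorded systematically alongside team measures, something as simple as the sketch below may suffice: one record per team capturing Devine's (2002) seven dimensions so that top-down influences can be considered when interpreting process and outcome data. The dimension names follow the list above; the representation, field names, and example values are our hypothetical choices, not Devine's operationalization.

```python
from dataclasses import dataclass

@dataclass
class TeamContext:
    """Devine's (2002) seven contextual dimensions, recorded alongside team measures.

    The dimension names follow the text above; how each is scaled here
    (free-text descriptors) is an illustrative choice only.
    """
    fundamental_work_cycle: str        # e.g., "mission-based", "continuous shift work"
    physical_ability_requirements: str # e.g., "high", "low"
    temporal_duration: str             # e.g., "ad hoc", "standing team"
    task_structure: str                # e.g., "well-defined", "ill-structured"
    active_resistance: str             # e.g., "adversary present", "none"
    hardware_dependence: str           # e.g., "high (command-and-control suite)"
    health_risk: str                   # e.g., "life-threatening (gunfire)", "minimal"

# Example: a military team operating under fire would record high health risk,
# which should be weighed when interpreting its process and outcome measures.
combat_context = TeamContext(
    fundamental_work_cycle="mission-based",
    physical_ability_requirements="high",
    temporal_duration="standing team",
    task_structure="ill-structured",
    active_resistance="adversary present",
    hardware_dependence="high",
    health_risk="life-threatening",
)
```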

Concluding Remarks

Field settings pose major challenges for researchers and practitioners seeking to understand and improve team functioning. However, to answer the important questions in the science of teams and to guide application, an increasingly sophisticated understanding of team dynamics in the wild must be developed. We hope this chapter has helped to bring to light some of the fundamental issues in capturing team performance in what is frequently a messy or difficult environment for measurement. The strategies offered here are only a start, and it is our hope that others will continue to add to the dialogue and generate new approaches, techniques, tools, and methods to augment the current state of the science.

Acknowledgment

This work was supported by the Navy Personnel Research, Studies and Technology Department under the auspices of the US Army Research Office Scientific Services Program administered by Battelle (Contract No. W911NF-07-D-001, TCN 09059). The views presented in this paper are those of the authors and do not represent the views of the Navy Personnel Research, Studies and Technology Department or the US Army Research Office. At the time of writing this chapter, Michael A. Rosen and Jessica Wildman were graduate students at the University of Central Florida. Michael Rosen is now an Assistant Professor at the Armstrong Institute for Patient Safety and Quality, and the Department of Anesthesiology and Critical Care Medicine, The Johns Hopkins University School of Medicine (email: [email protected]). Jessica Wildman is now an Assistant Professor in the College of Psychology and Liberal Arts, Florida Institute of Technology (email: [email protected]).

References

Abbott, J. B., Boyd, N. G., & Miles, G. (2006). Does type of team matter? An investigation of the relationships between job characteristics and outcomes within a team-based environment. Journal of Social Psychology, 146, 485–507.

Ancona, D. G., & Caldwell, D. (2007). Improving the performance of new product teams. Research-Technology Management, 50(5), 37–43.

Barry, D. (1991). Managing the bossless team: Lessons in distributed leadership. Organizational Dynamics, 20, 31–47.

Bartram, T. (2005). Small firms, big ideas: The adoption of human resource management in Australian small firms. Asia Pacific Journal of Human Resources, 43(1), 137–154.

Bell, B. S., & Kozlowski, S. W. J. (2002) A typology of virtual teams: Implications for effective leadership. Group and Organization Management, 27(1), 14–49.

Blickensderfer, E., Cannon-Bowers, J. A., & Salas, E. (1998). Cross training and team performance. In J. A. Cannon-Bowers & E. Salas (Eds.), Making decisions under stress: Implications for individual and team training (pp. 299–311). Washington, DC: American Psychological Association.

Brannick, M., & Prince, C. (1997). An overview of team performance measurement. In M. Brannick, E. Salas, & C. Prince (Eds.), Team performance assessment and measurement: Theory, methods, and applications (pp. 3–16). Mahwah, NJ: Lawrence Erlbaum Associates.

Briner, W., Hastings, C., & Geddes, M. (1996). Project leadership (2nd ed.). Gower Publishing Company.

Burke, C. S., Stagl, K. C., Salas, E., Pierce, L., & Kendall, D. L. (2006). Understanding team adaptation: A conceptual analysis and model. Journal of Applied Psychology, 91, 1189–1207.

Bunderson, J. S., & Sutcliffe, K. M. (2003). Management team learning orientation and business unit performance. Journal of Applied Psychology, 88(3), 552–560.

Burke, C. S., Fiore, S. M., & Salas, E. (2003). The role of shared cognition in enabling shared leadership and team adaptability. In J. Conger & C. Pearce (Eds.), Shared leadership: Reframing the hows and whys of leadership (pp. 103–122). London: Sage Publishers.

Campbell, D. J. (1988). Task complexity: A review and analysis. Academy of Management Review, 13(1), 40–52.

Campbell, J. P. (1990). Modeling the performance prediction problem in industrial and organizational psychology. In M. D. Dunnette & L. M. Hough (Eds.), Handbook of industrial and organizational psychology. Palo Alto, CA: Consulting Psychologists Press.

Cannon-Bowers, J. A., Burns, J. J., Salas, E., & Pruitt, J. S. (1998). Advanced technology in decision-making training. In J. A. Cannon-Bowers & E. Salas (Eds.), Making decisions under stress: Implications for individual and team training (pp. 365–374). Washington, DC: APA Press.

Cannon-Bowers, J. A., & Salas, E. (1997). A framework for developing team performance measures in training. In M. T. Brannick, E. Salas, & C. Prince (Eds.), Team performance assessment and measurement: Theory, methods, and applications (pp. 45–62). Mahwah, NJ: Lawrence Erlbaum Associates.

Cannon-Bowers, J. A., Salas, E., & Converse, S. A. (1993). Shared mental models in expert decision-making teams. In N. J. Castellan (Ed.), Current issues in individual and group decision making (pp. 221–246). Hillsdale, NJ: Erlbaum.

Cannon-Bowers, J. A., Salas, E., & Pruitt, J. S. (1996). Establishing the boundaries of a paradigm for decision-making research. Human Factors, 38(2), 193–205.

Christenson, D., & Walker, D. H. T. (2004). Understanding the role of “vision” in project success. Engineering Management Review, 32(4), 57–73.

Cohen, S. G., & Bailey, D. E. (1997). What makes teams work: Group effectiveness research from the shop floor to the executive suite. Journal of Management, 23(3), 239–290.

Cooke, N. J., Gorman, J. C., Duran, J. L., & Taylor, A. R. (2007). Team cognition in experienced command-and-control teams. Journal of Experimental Psychology: Applied, 13(3), 146–157.

Cooke, N. J., Salas, E., Kiekel, P. A., & Bell, B. (2004). Advances in measuring team cognition. In E. Salas & S. M. Fiore (Eds.), Team cognition: Understanding the factors that drive process and performance (pp. 83–106). Washington, DC: American Psychological Association.

Devine, D. J. (2002). A review and integration of classification systems relevant to teams in organizations. Group Dynamics: Theory, Research, and Practice, 6(4), 291–310.

Dorsey, D., Russell, S., Keil, C., Campbell, G., Van Buskirk, W., & Schuck, P. (2009). Measuring teams in action: Automated performance measurement and feedback in simulation-based training. In E. Salas, G. F. Goodwin & C. S. Burke (Eds.), Team effectiveness in complex organizations: Cross-disciplinary perspectives and approaches (pp. 351–381). New York: Routledge.

Driskell, J. E., Salas, E., & Hogan, R. (1987). A taxonomy for composing effective naval teams (Technical Report Number 87–002). Orlando, FL: US Naval Training Systems Center Technical Reports.

Duncan, K. (1972). Strategies for analysis of the task. In J. Hartley (Ed.), Strategies for programmed instruction: An educational technology (pp. 19–81). London: Butterworths.

Dyer, J. L. (1984). Team research and team training: A state of the art review. In F. A. Muckler (Ed.), Human factors review (pp. 285–323). Santa Monica: Human Factors Society.

Eby, L. T., & Dobbins, G. H. (1997). Collectivistic orientation in teams: An individual and group-level analysis. Journal of Organizational Behavior, 18(3), 275–295.

Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–383.

Entin, E. E., & Serfaty, D. (1999). Adaptive team coordination. Human Factors, 41(2), 312–325.

Ellis, S., & Davidi, I. (2005). After-event reviews: Drawing lessons from successful and failed experience. Journal of Applied Psychology, 90(5), 857–871.

Erez, A., LePine, J. A., & Elms, H. (2002). Effects of rotated leadership and peer evaluation on the functioning and effectiveness of self-managed teams: A quasi-experiment. Personnel Psychology, 55, 929–948.

Espinosa, J. A., Lerch, F. J., & Kraut, R. E. (2004). Explicit versus implicit coordination mechanisms and task dependencies: One size does not fit all. In E. Salas & S. M. Fiore (Eds.), Team cognition: Understanding the factors that drive process and performance (pp. 107–129). Washington, DC: American Psychological Association.

Flin, R., & Martin, L. (2001). Behavioral markers for crew resource management: A review of current practice. International Journal of Aviation Psychology, 11(1), 95–118.

Foltz, P. W., & Martin, M. J. (2009). Automated communication analysis of teams. In E. Salas, G. F. Goodwin & C. S. Burke (Eds.), Team effectiveness in complex organizations: Cross-disciplinary perspectives and approaches (pp. 411–431). New York: Routledge.

Fowlkes, J. E., Dwyer, D. J., Oser, R. L., & Salas, E. (1998). Event-based approach to training (EBAT). The International Journal of Aviation Psychology, 8(3), 209–221.

Getz, G. E., & Rainey, D. W. (2001). Flexible short-term goals and basketball shooting performance. Journal of Sport Behavior, 24.

Gronn, P. (2002). Distributed leadership as a unit of analysis. Leadership Quarterly, 13, 423–451.

Hackman, J. R. (1987). The design of work teams. In J. Lorsch (Ed.), Handbook of organizational behavior (pp. 315–342). New York: Prentice Hall.

Harris, M., & Raviv, A. (2002). Organization design. Management Science, 48, 852–865.

Hollenbeck, J., Sego, D., Ilgen, D., & Major, D. (1997). Team decision-making accuracy under difficult conditions: Construct validation of potential manipulations using TIDE2 simulation. In M. T. Brannick, E. Salas, & C. Prince (Eds.), Team performance assessment and measurement: Theory, methods, and applications (pp. 111–136). Mahwah, NJ: Lawrence Erlbaum Associates Publishers.

Hough, J. R., & White, M. A. (2004). Scanning actions and environmental dynamism: Gathering information for strategic decision making. Management Decision, 42, 781–793.

Humphrey, S. E., Morgeson, F. P., & Mannor, M. J. (2009). Developing a theory of the strategic core of teams: A role composition model of team performance. Journal of Applied Psychology, 94(1), 48–61.

Ilgen, D. R., Hollenbeck, J. R., Johnson, M., & Jundt, D. (2005). Teams in organizations: From input-process-output models to IMOI models. Annual Review of Psychology, 56, 517–543.

Ittner, C. D., & Larcker, D. F. (2003). Coming up short on nonfinancial performance measurement. Harvard Business Review, 88–95.

Jehn, K. A. (1994). Enhancing effectiveness: An investigation of advantages and disadvantages of value-based intragroup conflict. International Journal of Conflict Management, 5(3), 223–238.

Kankanhalli, A., Tan, B. C. Y., & Wei, K. (2006). Conflict and performance in global virtual teams. Journal of Management Information Systems, 23, 237–274.

Kendall, D. L., & Salas, E. (2004). Measuring team performance: Review of current methods and consideration of future needs. In J. W. Ness, V. Tepe, & D. Ritzer (Eds.), The science and simulation of human performance (pp. 307–326). Boston: Elsevier.

Kirkman, B., & Mathieu, J. (2005). The dimensions and antecedents of team virtuality. Journal of Management, 31, 700–718.

Klehe, U., & Anderson, N. (2007). Working hard and working smart: Motivation and ability during typical and maximum performance. Journal of Applied Psychology, 92(4), 978–992.

Klein, K., Ziegert, J. C., Knight, A. P., & Xiao, Y. (2006). Dynamic delegation: Shared, hierarchical and deindividualized leadership in extreme action teams. Administrative Science Quarterly, 51, 590–621.

Kozlowski, S. W. J., & Ilgen, D. R. (2006). Enhancing the effectiveness of work groups and teams. Psychological Science in the Public Interest, 7(3), 77–124.

Kozlowski, S. W. J., & Klein, K. J. (2000). A multilevel approach to theory and research in organizations: Contextual, temporal, and emergent processes. In K. J. Klein & S. W. J. Kozlowski (Eds.), Multilevel theory, research, and methods in organizations: Foundations, extensions, and new directions (pp. 3–90). San Francisco, CA: Jossey-Bass.

Kurz, R., & Bartram, D. (2002). Competency and individual performance: Modelling the world of work. In I. T. Robertson, M. Callinan, & D. Bartram (Eds.), Organizational effectiveness. The role of psychology (pp. 227–255). Chichester, UK: Wiley.

Letts, C. W., Ryan, W. P., & Grossman, A. (1998). High performance nonprofit organizations: Managing upstream for greater impact. New York: John Wiley & Sons, Inc.

Levine, J. M., & Choi, H. (2004). Minority influence in work teams: The impact of newcomers. Journal of Experimental Social Psychology, 40, 273–280.

Lipshitz, R., & Strauss, O. (1997). Coping with uncertainty: A naturalistic decision-making analysis. Organizational Behavior and Human Decision Processes, 69, 149–163.

Locke, E. A., & Bryan, J. F. (1967). Performance goals as determinants of level of performance and boredom. Journal of Applied Psychology, 51(2), 120–130.

Locke, E. A., Smith, K. G., Erez, M., Chah, D., & Schaffer, A. (1994). The effects of intra-individual goal conflict on performance. Journal of Management, 20, 67–91.

Lorenzet, S. J., Salas, E., & Tannenbaum, S. I. (2006). Benefitting from mistakes: The impact of guided errors on learning, performance, and self-efficacy. Human Resource Development Quarterly, 16, 301–322.

MacMillan, J., Entin, E. E., & Serfaty, D. (2004). Communication overhead: The hidden cost of team cognition. In E. Salas & S. M. Fiore (Eds.), Team cognition: Understanding the factors that drive process and performance (pp. 61–82). Washington, DC: American Psychological Association.

Marks, M. A., Mathieu, J. E., & Zaccaro, S. J. (2001). A temporally based framework and taxonomy of team processes. Academy of Management Review, 26, 356–376.

Mattson, M., Mumford, T. V., & Sintay, G. S. (1999). Taking teams to task: A normative model for designing or recalibrating work teams. Paper presented at the Academy of Management.

McGrath, J. E. (1984). Groups: Interaction and performance. Englewood Cliffs, NJ: Prentice Hall.

Mohammed, S., & Dumville, B. C. (2001). Team mental models in a team knowledge framework: Expanding theory and measurement across disciplinary boundaries. Journal of Organizational Behavior, 22(2), 89–106.

Moon, H., Hollenbeck, J. R., Humphrey, S. E., Ilgen, D. R., West, B., Ellis, A. P. J., et al. (2004). Asymmetric adaptability: Dynamic team structures as one-way streets. Academy of Management Journal, 47, 681–695.

Morgeson, F. P. (2005). The external leadership of self-managing teams: Intervening in the context of novel and disruptive events. Journal of Applied Psychology, 90, 497–508.

Mortensen, M., Woolley, A. W., & O'Leary, M. (2007). Conditions for enabling effective multiple team membership. In K. Crowston, S. Sieber, & E. Wynn (Eds.), IFIP, Virtuality and Virtualization (Vol. 236, pp. 215–228). Boston: Springer.

Murphy, K. R., & Cleveland, J. N. (1995). Understanding performance appraisal: Social, organizational, and goal-based perspectives. Thousand Oaks, CA: Sage.

Orasanu, J. (1990). Diagnostic approaches to learning: Measuring what, how, and how much: Comments on chapters 12, 13, and 14. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Orasanu, J., & Connolly, T. (1993). The reinvention of decision making. In G. Klein, J. Orasanu, R. Calderwood & C. E. Zsambok (Eds.), Decision making in action: Models and methods (pp. 3–20). Norwood, NJ: Ablex.

Oser, R. L., MacCallum, G. A., Salas, E., & Morgan, B. B., Jr (1989). Toward a definition of teamwork: An analysis of critical team behaviors. (Technical Report 89–004). Orlando, FL: Naval Training Systems Center.

Porter, C. O., Hollenbeck, J. R., Ilgen, D. R., Ellis, A. P., West, B. J., & Moon, H. (2003). Backing up behaviors in teams: the role of personality and legitimacy of need. Journal of Applied Psychology, 88(3), 391–403.

Priem, R. L., Rasheed, A. M. A., & Kotulic, A. G. (1995). Rationality in strategic decision processes, environmental dynamism and firm performance. Journal of Management, 21, 913–929.

Pritchard, R., & Ashwood, E. (2008). Managing motivation: A manager's guide to diagnosing and improving motivation. New York: Routledge, Taylor & Francis Group.

Pritchard, R. D., Youngcourt, S. S., Philo, J. R., McMonagle, D., & David, J. H. (2007). The use of priority information in performance feedback. Human Performance, 20(1), 61–83.

Rosen, M. A., Salas, E., Lazzara, E. H., & Lyons, R. (in press). Cognitive task analysis: Methods for capturing and leveraging expertise in the workplace. In M. A. Wilson, R. J. Harvey, G. M. Alliger & W. Bennett, Jr (Eds.), The handbook of work analysis: The methods, systems, applications, & science of work measurement in organizations.

Rosen, M. A., Salas, E., Wu, T. S., Silvestri, S., Lazzara, E. H., Lyons, R., et al. (2008). Promoting teamwork: An event-based approach to simulation-based teamwork training for emergency medicine residents. Academic Emergency Medicine, 15(11), 1190–1198.

Saavedra, R., Earley, P. C., & Van Dyne, L. (1993). Complex interdependence in task-performing groups. Journal of Applied Psychology, 78(1), 61–72.

Sackett, P. R., Zedeck, S., & Fogli, L. (1988). Relations between measures of typical and maximum job performance. Journal of Applied Psychology, 73, 482–486.

Salas, E., Burke, C. S., & Fowlkes, J. E. (2006). Measuring team performance “in the wild:” Challenges and tips. In W. Bennett, Jr, C. E. Lance & D. J. Woehr (Eds.), Performance measurement: Current perspectives and future challenges (pp. 245–272). Mahwah, NJ: Erlbaum.

Salas, E., Burke, C. S., Fowlkes, J. E., & Priest, H. A. (2003). On measuring teamwork skills. In J. C. Thomas & M. Hersen (Eds.), Comprehensive handbook of psychological assessment (pp. 427–442). Indianapolis, IN: Wiley Publishing, Inc.

Salas, E., Cannon-Bowers, J. A., Fiore, S. M., & Stout, R. J. (2001). Cue-recognition training to enhance team situation awareness. In M. McNeese, E. Salas, & M. Endsley (Eds.), New trends in collaborative activities: Understanding system dynamics in complex environments (pp. 169–190). Santa Monica, CA: Human Factors and Ergonomics Society.

Salas, E., Kosarzycki, M. P., Tannenbaum, S. I., & Carnegie, D. (2005). Aligning work teams and HR practices: Best practices. In R. J. Burke & C. L. Cooper (Eds.), Reinventing human resource management: Challenges and new directions (pp. 133–149). New York: Taylor & Francis Group.

Salas, E., Priest, H. A., & Burke, C. S. (2005). Teamwork and team performance measurement. In J. R. Wilson & N. Corlett (Eds.), Evaluation of human work (3rd ed., pp. 793–808). Boca Raton, FL: Taylor & Francis.

Salas, E., Priest, H. A., Wilson, K. A., & Burke, C. S. (2006). Scenario-based training: Improving military mission performance and adaptability. In T. Britt, A. Adler & C. Castro (Eds.), Military life: The psychology of serving in peace and conflict (Vol. 2, Operational stress, pp. 32–53). Westport, CT: Praeger Security International.

Salas, E., Rosen, M. A., Burke, C. S., & Goodwin, G. F. (2009). The wisdom of collectives in organizations: An update of the teamwork competencies. In E. Salas, G. F. Goodwin, & C. S. Burke (Eds.), Team effectiveness in complex organizations: Cross-disciplinary perspectives and approaches (pp. 39–79). New York: Routledge.

Salas, E., Rosen, M. A., Burke, C. S., Goodwin, G. F., & Fiore, S. (2006). The making of a dream team: When expert teams do best. In K. A. Ericsson, N. Charness, P. J. Feltovich & R. R. Hoffman (Eds.), The Cambridge handbook of expertise and expert performance (pp. 439–453). New York: Cambridge University Press.

Salas, E., Rosen, M. A., Burke, C. S., Nicholson, D., & Howse, W. R. (2007). Markers for enhancing team cognition in complex environments: The power of team performance diagnosis. Aviation, Space, and Environmental Medicine (Special Supplement on Operational Applications of Cognitive Performance Enhancement Technologies), 78(5), B77–85.

Salas, E., Sims, D. E., & Burke, C. S. (2005). Is there a big five in teamwork? Small Group Research, 36(5), 555–599.

Salas, E., Stagl, K. C., Burke, C. S., & Goodwin, G. F. (2007). Fostering team effectiveness in organizations: Toward an integrative theoretical framework of team performance. In R. A. Dienstbier, J. W. Shuart, W. Spaulding, & J. Poland (Eds.), Modeling complex systems: Motivation, cognition and social processes. Nebraska Symposium on Motivation (Vol. 51, pp. 185–243). Lincoln, NE: University of Nebraska Press.

Salas, E., Weaver, S. J., Rosen, M. A., & Smith-Jentsch, K. A. (2009). Managing team performance in complex settings: Research-based best practices. In J. W. Smither & M. London (Eds.), Performance management: Putting research into practice (pp. 197–232). San Francisco, CA: Jossey-Bass.

Salas, E., Wilson, K. A., Murphy, C., King, H., & Salisbury, M. (in press). Communicating, coordinating and cooperating when the life of others depends on it: Tips for teamwork. Joint Commission Journal on Quality and Safety.

Shi, Y., & Tang, H. K. (1997). Team role behaviour and task environment. Journal of Managerial Psychology, 12, 85–94.

Smith-Jentsch, K. A., Zeisig, R. L., Acton, B., & McPherson, J. A. (1998). Team dimensional training: A strategy for guided team self-correction. In J. A. Cannon-Bowers & E. Salas (Eds.), Making decisions under stress: Implications for individual and team training (pp. 271–298). Washington, DC: American Psychological Association.

Stevens, M. J., & Campion, M. A. (1994). The knowledge, skill, and ability requirements for teamwork: Implications for human resource management. Journal of Management, 20(2), 503–530.

Strubler, D. C., & York, K. M. (2007). An exploratory study of the team characteristics model using organizational teams. Small Group Research, 38, 670–695.

Sundstrom, E., De Meuse, K. P., & Futrell, D. (1990). Work teams: Applications and effectiveness. American Psychologist, 45, 120–133.

Tannenbaum, S. I. (2006). Applied performance measurement: Practical issues and challenges. In W. Bennett, C. E. Lance, & D. J. Woehr (Eds.), Performance measurement: Current perspectives and future challenges (pp. 297–318). Mahwah, NJ: Lawrence Erlbaum Associates.

Webber, S. (2002). Mapping a path to the empowered searcher. In C. Graham (Ed.), Online Information: Proceedings (pp. 3–5). Oxford.

Wildman, J. L., Bedwell, W. L., Salas, E., & Smith-Jentsch, K. A. (2010). Performance measurement at work: A multilevel perspective. In S. Zedeck (Ed.), APA handbook of industrial and organizational psychology. Washington, DC: American Psychological Association.

Williams, G., & Laungani, P. (1999). Analysis of teamwork in an NHS community trust: An empirical study. Journal of Interprofessional Care, 13, 19–28.

Wilson, K. A., Burke, C. S., Priest, H. A., & Salas, E. (2005). Promoting health care safety through training high reliability teams. Quality and Safety in Health Care, 14, 303–309.

Yang, O., & Shao, Y. E. (1996). Shared leadership in self-managed teams: A competing values approach. Total Quality Management, 7, 521–534.

Zakay, D., Ellis, S., & Shevalsky, M. (2004). Outcome value and early warning indications as determinants of willingness to learn from experience. Experimental Psychology, 51(2), 150–157.
