4

GROUP RESEARCH USING
HIGH-FIDELITY EXPERIMENTAL
SIMULATIONS


Franziska Tschan and Norbert K. Semmer

UNIVERSITY OF NEUCHÂTEL, SWITZERLAND AND UNIVERSITY OF BERN, SWITZERLAND

Sabina Hunziker and Stephan U. Marsch

UNIVERSITY HOSPITAL OF BASEL, SWITZERLAND

The Group Researcher's Dream

As a young group researcher interested in small group productivity, the first author daydreamed about the “paradise for a small group researcher,” most often while waiting for undergraduate student participants who did not show up for her laboratory experiments. She imagined a close-to-real-life situation with a number of characteristics: competent and highly motivated participants; a research-friendly room (not too noisy, so that communication could be recorded; well lit, so there could be video recording); a task that was highly interdependent (thus inducing group processes) and complex (so there would be variance in performance); necessity of overt behavior, so group processes could be observed and coded. She wished for a rather short group process (as transcribing and coding is time-consuming), and finally she dreamt that there would be ample communication going on that could be related to group performance. It seemed much too much to ask for. However, some years later, she stumbled across the website of a group of physicians at a nearby hospital, presenting the “high-fidelity patient simulator” they used for training and research. The description surpassed the dream. The pictures showed medical professionals in a well-lit room gathered around a patient and obviously working in a highly interdependent way as a group. She wrote an email. So, for a number of years, the authors of this chapter (two psychologists and two physicians) have been collaborating in research projects using a high-fidelity patient simulator that is situated in the Intensive Care Unit of the Basel University Hospital in Switzerland.

In this chapter, we share our thoughts and experiences on how to do group research using experimental simulation with high-fidelity simulators. Because we draw on our own experiences, most of the examples will be related to patient simulators, but the general lessons apply to research with simulators in other fields, too. (1) We will first give a short introduction to experimental simulation, explaining what simulators are and how they are used. (2) In the main part, we discuss the characteristics of group research with simulators. We will present how research designs and methods may have to be adapted to characteristics of the tasks and the participants involved. (3) Research with simulators most often requires interdisciplinary cooperation. Again, drawing from our experiences, we will talk about the different backgrounds, interests, and research and publication strategies of group researchers and physicians, as we experience them.

What are simulators, who uses them and for
what purpose?

Simulation in a broad sense is really nothing new in small group research. Group researchers have “simulated” political committees (Stasser & Titus, 1987), jury deliberations (Kerr & MacCoun, 1985; Tindale, Nadler, Krebel, & Davis, 2004), mechanical assembly groups (Moreland & Myaskovsky, 2000), or business negotiations (Weingart, Bennett, & Brett, 1993), to name just a few. However, most laboratory tasks are rather simple and only faintly resemble ‘real’ tasks, because the experimenter wants to control as much of the situation as possible. The use of simple tasks has many advantages; it generates, however, the problem of limited generalizability of the results to more complex situations. One obvious solution to this problem is to study real groups working on real tasks. This is, however, very difficult to do, and often, the complexity and variety of real tasks and situations make it difficult to draw conclusions from field research (e.g., Brehmer & Dörner, 1993). It thus seems to be a good idea to work with settings that simulate reality, but can still be standardized and controlled by the researchers (Vincenzi, Wise, Mouloua, & Hancock, 2009).

In the continuum between artificial laboratory tasks and real situations, Gray (2002) distinguishes different types of simulated task environments with increasing complexity and realism. (1) Microworlds are computer game-like dynamic tasks that require rather complex decision making. For example, one has to be the mayor of a city and make decisions about allocating money, give advice to an African tribe (Brehmer & Dörner, 1993), or manage a forest fire (Granlund & Johansson, 2004). Although researchers usually try to present these tasks in a realistic context, the main goal is to study specific task requirements rather than to be representative of the context chosen to embed the task. (2) Scaled worlds (also called synthetic task environments) are constructed to recreate important aspects of a real task, but do not necessarily match the whole task and its environment realistically. Cooke and Shope (2004) describe how to design a scaled world; theirs is based on the analysis of real military operations, and maps the most important aspects of these operations. When constructing a scaled world, researchers do not try to achieve high realism in all aspects of the task and its environment. Rather, they strive for “psychological fidelity;” that is, they present tasks so that they trigger the psychological or group processes the researcher wants to train or to study (Gurtner, Tschan, Semmer, & Nägele, 2006; Kozlowski & DeShon, 2004). (3) Finally, high-fidelity simulators are closest to real tasks because they are designed to provide an as-realistic-as-possible task in a realistic environment. High-fidelity simulators often map real tasks and their environments very closely. For example, power-plant simulators can not only simulate most of the states and potential problems of the plant; many are also located in a room that exactly matches the real control room of the plant, including the layout of all the work stations, screens, and even phones, printers, and noises.
Flight simulators not only map the outline of the cockpit and display realistic computer animations of what pilots would see through the windows; they are also built on platforms that can simulate motion. Simulator stations for firefighters might include constrained paths, smoke, and even heat (McFetrich, 2007). This realism is useful, because high-fidelity simulators are most often designed for training for complex and difficult situations.

In our research, we work with high-fidelity patient simulators, which will be described next. In medicine, artificial full-sized patient simulators have been used as early as 1874 for nursing education (Good, 2003; Nehring & Lashley, 2009). However, only recent technological developments have allowed the construction of high-fidelity patient simulators. The one we are working with is a real life-size rubber mannequin, full of mechanical and electronic devices that are controlled by a computer. He (most of the time our ‘patient’ is male) can open his eyes; he blinks regularly, and when doctors shine a flashlight into his pupils, they contract, unless the simulator is programmed otherwise. He breathes, and when he does, his chest moves. When he exhales, one can feel the air coming out of his mouth, and carbon dioxide levels can be measured; they map those of a real person exhaling and change depending on the condition of the patient. One can palpate a pulse at many different locations, and little microphones in his chest can be programmed to simulate a wide array of different breathing and heart sounds. He can even twitch his thumb. Although a closer look makes it very apparent that he is artificial, he looks stunningly human. The patient can even talk, with loudspeakers in his head being connected to an intercom operated from behind a one-way mirror. The simulator recognizes administration of medication and responds to it in real time. Parameters such as blood pressure, heart rate, and blood oxygen levels can be displayed on a monitor. The number of interventions medical professionals can perform is large. There are also many other medical simulators on the market – simulators of body parts, baby and infant simulators, or a simulator of a woman giving birth, to name just a few.

High-fidelity patient simulators are a relatively recent development; they are still expensive, but their use is growing rapidly. Cooper and Taqueti (2004), who present the history of medical simulators, estimate that the “tipping point” has not yet been reached; so one can expect more and more hospitals and universities to acquire high-fidelity simulators in the future. These simulators are mainly used for training medical professionals, ranging from basic training for medical students to continuing education of experienced physicians, nurses, and other medical professionals (McGaghie, Siddal, Mazmanian, & Myers, 2009). In simulators, medical professionals often practice tasks or situations that would be uncomfortable or dangerous for the patient if not performed correctly. Take the example of an intubation, which involves sticking a plastic tube through the patient's mouth down to the lungs for artificial ventilation. It seems indeed a good idea to perform such an intervention on a plastic mannequin before trying it on a real patient. Simulators are also useful to prepare for situations that occur infrequently (Wang et al., 2008). Real life may often not provide enough opportunities to practice the required reactions to the point at which they become sufficiently routinized. This situation is similar to training in flight simulators, where pilots repeatedly practice events that they may never encounter, such as emergency landings on water, or in power-plant simulators, where operators train for situations that will hopefully never happen in reality.

Participants in a simulator session are confronted with a scenario that has been developed and programmed beforehand. In the case of our medical simulator, it can be programmed to be a male or female patient of a certain age with a specific basic condition. In our experimental laboratory, each “patient” has a regular patient file that lists his or her previous conditions. Typically, the scenario unfolds into an emergency situation, usually with the patient developing a specific condition characterized by several distinct phases. For example, in one of our scenarios (Marsch et al., 2005), Mr Bortolotti, the patient, is a 52-year-old male who had been admitted to intensive care after an acute myocardial infarction. He is already connected to a monitor, and he has an intravenous drip, that is, a needle inserted directly into a vein, allowing the delivery of fluids and medication. The hospital bed with Mr Bortolotti is in a room that is furnished like a regular single patient room in an intensive care unit (with connections for all medical devices). The only difference from a regular patient room is that it has no windows but a large one-way mirror.

When the participants (physicians) have been with the patient for two minutes, the patient suffers a sudden cardiac arrest; his heart stops pumping blood, but still displays electrical activity (a pulseless ventricular tachycardia). When this happens, we expect the physicians to start resuscitation, which includes defibrillation (applying an electrical shock to the patient's heart using two paddles). After the third defibrillation attempt, the programmer who runs the simulator changes the heart rhythm of the patient for a short time, and then heart activity stops (asystole). At this moment, the physicians are expected to inject a specific drug, and one minute after they do, the heartbeat changes again, and can be converted to a normal rhythm with another defibrillation. This is a typical scenario; it includes several phases, and the transition into the next phase depends on the physicians’ interventions. The patient's state depends on the interventions of the team; for example, if the team misses a step, the patient's condition deteriorates. However, there is always a nurse from the simulator team in the room, who intervenes if the physicians are lost for a long time and helps them to save the patient (see Figure 4.1).
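For readers curious about how such phase transitions are implemented, the scenario logic can be thought of as a small event-driven state machine. The sketch below is purely illustrative (class, method, and phase names are our own, not the simulator vendor's software), assuming the cardiac-arrest scenario just described:

```python
# Hypothetical sketch of the phase logic of the cardiac-arrest scenario;
# real simulator software also models physiology continuously.
class Scenario:
    def __init__(self):
        # The patient starts in pulseless ventricular tachycardia.
        self.phase = "ventricular_tachycardia"
        self.defib_count = 0

    def on_defibrillation(self):
        self.defib_count += 1
        if self.phase == "ventricular_tachycardia" and self.defib_count >= 3:
            # After the third attempt, the rhythm changes briefly, then stops.
            self.phase = "asystole"
        elif self.phase == "shockable_rhythm":
            # A final defibrillation restores a normal rhythm.
            self.phase = "normal_rhythm"

    def on_drug_injection(self):
        if self.phase == "asystole":
            # One minute later (in real time), the rhythm becomes shockable.
            self.phase = "shockable_rhythm"
```

Framing the scenario this way also makes explicit why standardization is only partial: the timing of each transition depends on what the team actually does.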

High-fidelity patient simulators permit simulating rather complex situations and tasks. This includes tasks that require close collaboration of several people, offering excellent opportunities to train cooperation, communication, and teamwork. Although team training is not yet explicitly one of the core uses of simulators (Rosen et al., 2008), more and more authors stress the potential of simulators for training and assessing teamwork (Aggarwal, Undre, Moorthy, Vincent, & Darzi, 2004; Fernandez et al., 2008; Lane, Slavin, & Ziv, 2001).

In sum, patient simulators allow presenting very realistic medical situations (our participants rate the realism of the scenario at 8.5, on a scale from 1 to 10); they are used increasingly, most often for training technical skills. In addition, their potential for team training and assessment is being recognized, and presents an excellent starting point for the psychologist interested in group and team research.


FIGURE 4.1 Cardiopulmonary resuscitation in a high-fidelity simulator setting.

Doing Small Group Research with High-fidelity
(Patient) Simulators

Research topics and designs

Research topics

Given that the main use of high-fidelity simulators is skills training, research most often evaluates training success with regard to technical skills, often comparing simulator-based training with other teaching protocols. For medical simulations, the Agency for Healthcare Research and Quality (Marinopoulos et al., 2007) provides a good overview of this research, as does a paper by McGaghie and colleagues (2009).

Group and team-related research using simulators is on the rise and covers a wide variety of research questions. However, a literature search in psychology or communication journals may convey the impression that not much has yet been done, since many simulator-based studies on teamwork are published in journals of the respective field, rather than in psychology or communication journals. Research with patient simulators is found in many different medical journals, but there is also a medical journal that is entirely devoted to simulation-based medical research, called Simulation in Healthcare. Research with flight simulators or military simulators may be published in human factors or ergonomics journals (Burke, Salas, Wilson-Donnelly, & Priest, 2004; Gaba, Howard, Fish, Smith, & Sowb, 2001; Helmreich & Davis, 1997; Salas, Sims, Klein, & Burke, 2003; Sexton, Thomas, & Helmreich, 2000), or domain-specific journals, such as aviation or military psychology journals. The literature search is even more difficult because some of the relevant research is not found under the keywords “groups” or “teams,” as scholars use other keywords, such as “nontechnical skills,” “communication” or “leadership.” To find research topics and assess previous research, scholars may thus have to do literature searches beyond their field.

Research designs

What research designs are particularly well suited for simulator-based research? In principle, the same rules and methods as for other (group) research apply. Simulators seem well suited for experimental research, because they allow presenting the same task and situation with the same problem and the same temporal development to all participants. However, achieving a fully controlled experimental situation is not easy, because rigorous control of all important influences is often limited. This has to do with the fact that “laboratory experiments are inherently artificial” (Brewer, 2000, p. 15), whereas one of the main objectives (and strengths) of high-fidelity simulators is the possibility to model complex and highly realistic tasks. Experimental psychologists may thus be confronted with several factors limiting experimental control, which we illustrate here with regard to patient simulators.

First, participants are experienced medical professionals; even medical students are by no means “naïve subjects” (McGaghie, 2008). In our research, we try to control for differences in expertise by asking physicians about their levels of training and experience. Medical students will also answer a knowledge test; physicians, however, will often not consent to such a test (remember, they come for training purposes, not primarily to participate in research). Furthermore, overly specific questions may provide cues about the scenario to be expected. Thus, it is difficult to control for differences in pre-existing knowledge and expertise, which tend to be rather large.

Second, our participants expect training that is interesting and useful, and also provides them with credits for continuing education; therefore, the scenarios have to be meaningful and complex enough so that even experienced physicians can learn something. Certain interventions we would sometimes like to do are therefore not feasible; for example, we cannot interrupt the group after partial task fulfillment to give them self-report questionnaires. Third, our tasks can be solved in very different ways. For example, the guidelines for treating a “simple” cardiac arrest describe more than 30 different potential steps to consider or to carry out (e.g., Nolan, Deakin, Soar, Bottiger, & Smith, 2005). Although the guidelines suggest an ideal sequence of interventions, there are several reasonable ways to proceed, implying that different groups take different steps at different times. In order to stay realistic, the patient's reactions to all these variations have to be adapted, even if this was not planned initially. So occasionally, there are deviations from the standardized protocol, which may create less standardization than is needed for fully controlled experiments. Furthermore, a so-called scripted nurse (a confederate) participates in most simulator sessions to assist the physicians and help in case of technical problems. Although the nurse is instructed to act and react in a prescribed manner, he or she has to adapt his or her behavior to the actual situation and may not be able to stick to the script entirely.

All this creates threats to validity. These threats could be at least partially overcome with large sample sizes, which allow including more control variables without losing too much statistical power. However, simulator-based studies often have relatively small sample sizes. Participants cannot be recruited ad libitum, and running scenarios takes a lot of time and resources. Sample size concerns are even more important for group research, where the group is often the unit of analysis, implying small N values even though many participants may be involved.
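When planning such a study, it can help to estimate in advance how many groups are needed to detect an effect of a given size. The following Monte Carlo sketch is a hypothetical illustration (all function names, effect sizes, and counts are our own illustrative assumptions, not part of our procedure): it simulates a two-condition comparison with the group as the unit of analysis and estimates power via a permutation test on the difference of means:

```python
# Illustrative power estimation for a two-condition group comparison.
import random

def perm_pvalue(x, y, n_perm, rng):
    """Two-sided permutation test on the difference of group means."""
    obs = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:len(x)], pooled[len(x):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)

def simulated_power(groups_per_cond, effect_d, n_sims=300, alpha=0.05, seed=7):
    """Share of simulated studies detecting a true effect of size d (in SD units)."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_sims):
        treat = [rng.gauss(effect_d, 1.0) for _ in range(groups_per_cond)]
        control = [rng.gauss(0.0, 1.0) for _ in range(groups_per_cond)]
        if perm_pvalue(treat, control, 200, rng) <= alpha:
            detected += 1
    return detected / n_sims
```

With, say, eight groups per condition, such a simulation makes concrete that only rather large effects can be detected reliably, which is exactly the constraint small group-level N imposes.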

Researchers who are not working with patient simulators may encounter similar practical limitations. We emphasize these aspects because, although simulator studies seem to be the experimental group researcher's dream at first sight, there may be important limitations for conducting highly controlled experiments.

Nevertheless, experimental designs are possible. For example, in one of our studies we randomly assigned medical students to groups with a technical debriefing (emphasizing the importance of some technical skills) or a leadership debriefing (emphasizing the importance of teamwork aspects) after the session. Four months later, we tested the students a second time, assigning them to groups at random but always within the same debriefing condition. This allowed us to test the influence of debriefing type on later group performance (Hunziker et al., 2010). In another study, we randomly assigned physicians to conditions where they received standard information plus a short leadership instruction prior to meeting the patient, or standard information only, in order to test effects of the different instructions (Tschan, Semmer, Windlinger, Hunziker, & Marsch, 2009). In these studies, the experimental manipulations were relatively simple, and we could test their influence on behavior and performance.

Simulators also present many possibilities to conduct quasi-experimental research (Shadish, Cook, & Campbell, 2002) where groups are not randomly assigned to experimental conditions. For example, in one study, we compared performance of general physicians and advanced medical students for the same scenario (Lüscher et al., 2010).

Simulator settings are particularly well suited for observational studies (Kerr, Aronoff, & Messé, 2000; Runkel & McGrath, 1972), where processes that occur “naturally” are observed, and hypotheses about these processes can be tested. The high realism of simulator settings offers many possibilities to investigate phenomena that have not yet been extensively studied in small groups, or to investigate phenomena well known from group research in a more realistic environment, with participants acting in their everyday roles. For example, we are currently investigating treatment interruptions in groups confronted with a cardiac arrest. This is particularly important, as each minute of untreated cardiac arrest diminishes survival chances by 7–10 per cent (von Planta, 2004). In a first study, we simply measured and counted interruptions – and were astonished by their number and length (Marsch et al., 2005). Given that the physicians told us that they were not aware of having interrupted treatment, in a later study we coded what groups actually did when interrupting patient care. We found that they were often so actively involved in monitoring the patient's condition or in resolving technical or knowledge problems that their attention was diverted from immediate treatment requirements (Tschan, Vetterli, Semmer, Hunziker, & Marsch, in press). We are now evaluating which parts of the resuscitation process are most vulnerable to interruptions, what triggers them, and how they are overcome.
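Once interruptions have been coded as timestamped intervals, simple summary measures can be derived mechanically. The sketch below is a hypothetical illustration (function and variable names are ours, the data invented): given the intervals during which treatment, such as chest compressions, was ongoing, it computes hands-on time, total no-flow time, and the number and mean length of interruptions:

```python
# Illustrative analysis of coded treatment intervals (seconds from episode start).
def interruption_stats(intervals, episode_start, episode_end):
    """intervals: sorted (start, end) pairs during which treatment was ongoing."""
    hands_on = sum(end - start for start, end in intervals)
    episode = episode_end - episode_start
    # Gaps before, between, and after treatment intervals count as interruptions.
    gaps = []
    prev_end = episode_start
    for start, end in intervals:
        if start > prev_end:
            gaps.append(start - prev_end)
        prev_end = max(prev_end, end)
    if episode_end > prev_end:
        gaps.append(episode_end - prev_end)
    return {
        "hands_on_s": hands_on,
        "no_flow_s": episode - hands_on,
        "n_interruptions": len(gaps),
        "mean_interruption_s": sum(gaps) / len(gaps) if gaps else 0.0,
    }
```

Measures of this kind can then be related to group-level predictors such as leadership behavior or team composition.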

Simulator studies are also well suited for qualitative research or a combination of quantitative and qualitative analyses. Many researchers, particularly psychologists, shy away from including qualitative aspects in their research; it is difficult to do, and many psychology journals seem reluctant to publish studies based on qualitative data. However, qualitative research can be very useful for domains where theories are not well developed; and a combination of qualitative and quantitative research can be interesting if theories are “nascent,” that is, not yet fully developed or mature (Edmondson & McManus, 2007). For example, in one scenario we observed that important information from the patient file often was not communicated to the group, despite the fact that at least one group member had consulted the patient file. Qualitative analyses of who handled the patient file in which manner suggested a phenomenon we called an “illusory transactive memory system.” Transactive memory systems imply that members know about each other's competences, and rely on this knowledge. If one group member obviously has access to potentially important information (in this case, the patient file) but does not communicate anything, the group may erroneously assume that there is no important information in the file (Tschan et al., 2009). Another reason for qualitative data analysis in simulator-based studies is the occurrence of rare events, such as errors (Burke et al., 2004).

In sum, simulators provide good opportunities for experimental research, although there might be more confounding factors and unwanted influences than in classical laboratory experiments. Simulations are also well suited for observational and qualitative research, thus allowing a wide combination of methods.

Running a simulator study

Getting access to a (patient) simulator

For researchers, the first step is to establish cooperation with the professionals who run the simulators. This may be difficult in some cases; for example, access to power-plant simulators or military or police simulators may not be easy. In our case, we have found physicians and nurses to be very open to such collaboration: after the first author initiated the contact, the physicians invited us to visit the simulator center; after some visits and discussions, the current research group formed. So, a first task for a researcher may be to make contact with simulator centers. Good sources of information about where medical simulators are located are the websites of the Society for Simulation in Healthcare (SSIH) (www.ssih.org) in the US, and the Society in Europe for Simulation Applied to Medicine (SESAM) (www.sesam-web.org) in Europe. The annual conference of the SSIH provides an excellent opportunity for getting information, learning about ‘hot’ research topics, and for networking.

Once the possibility of running a group study with a simulator is established, the general steps are the same as for any group research; however, some adaptations may be necessary. The process starts with scenario development and adaptation.

Scenario development and adaptation

Developing scenarios is a central aspect of simulator studies (Good, 2003). Obviously, good scenarios require profound knowledge in the domain studied; the scenario has to be realistic, the time frame has to be adequate, reactions of the system have to be plausible, etc. This has to be done by the domain specialists, but group researchers can contribute in terms of criteria for task requirements. For example, from a medical point of view, different illnesses may be worlds apart. From a psychological point of view, more general task requirements may be prominent, for instance, with regard to communication, planning, leadership, and coordination, or with regard to common errors in diagnosis, decision making, etc.

To give an example, our research involves two basic types of tasks. The first one (cardiac arrest) refers to a situation where the diagnosis is clear and unambiguous; the main question is whether participants perform the necessary (and well-known) intervention in a timely and adequate manner. Good team coordination in this scenario requires decisive leadership and timely task distribution. The second scenario refers to a situation where the diagnosis is ambiguous, as we provide cues pointing to a diagnosis that is wrong yet plausible (pneumothorax), sharing some (but not all) symptoms with the correct diagnosis (anaphylactic shock). The main question in this scenario is whether the team will adequately consider the information available, and good team coordination requires sharing of deliberate reasoning rather than quick decisions (cf. Tschan et al., 2009). The challenge in developing this scenario was to construct a case where some of the patient's symptoms could plausibly be interpreted in terms of two different diagnoses. Also, we had to construct a patient history that made sense for both diagnoses. Sometimes, in planning scenarios for simulations, researchers conduct a cognitive task analysis (Gugerty, 2004), based on real situations (Klein, 2000), that helps to identify important task requirements of a situation. Once the basic scenario is developed, one has to consider possible adaptations, either for specific training goals (see Beard and colleagues, 1995, for guidelines) or for research goals. In some cases, relatively small adaptations suffice; others require more extended variations.

For example, in two of our studies we investigated errors in information transmitted to a physician who joined an ongoing emergency situation (Bogenstätter et al., 2009). The design for the first study was relatively easy: we simply asked the physicians to wait outside and join the situation only when called by the nurses who witnessed the emergency. We could then assess the adequacy of information transmitted to the physicians after they joined the group. In the second study, we wanted to test the hypotheses that (a) unusual information, (b) shared information, and (c) information about interventions would have a greater chance of being transmitted. For this, we had to change the patient history and adapt the beginning of the scenario in a way that allowed us to provide some information to one group member only, and some to all. For the ambiguous diagnosis scenario mentioned above (Tschan et al., 2009), we had to construct a variant without distracting cues in order to demonstrate that it was indeed the distracting information, rather than the complexity of the anaphylactic shock reaction, that was responsible for the difficulty in determining the correct diagnosis.

Running a simulator study

Running a simulator study typically requires an especially long preparation time, and often more people are involved in each session than in other group experiments. For example, to run one of our experiments, we need a scripted nurse, someone who plays the patient (in terms of talking as the patient and at the same time controlling the simulator), a physician who hands the patient over to the group, and one person who organizes and runs the experiment. Otherwise, a simulator study is not very different from any other group experiment; we therefore refer to the respective chapters in this book. As with all research, it is advisable to obtain Institutional Review Board (IRB) permission early. Our participants sign up for training sessions, and they are informed in advance that we would like to use their data for research. We ask participants after the session to sign a sheet permitting us to use their data for research purposes; so far, only a few people have declined. The IRB approved our procedure.

Data coding

Typically, in simulator centers, video cameras are installed, and sessions are taped for training debriefing. Thus, group process data are easily accessible. In our simulator room, one of two cameras is focused on the patient bed; the other, a wide-angle camera, overlooks the whole room, so that all participants can be observed. An image of the patient surveillance monitor is also included in the recording, which allows coding information about the patient's condition over time.

There is a variety of methods for behavioral coding in small group research. Generally applicable systems are the Interaction Process Analysis (IPA) coding system by Bales (1950), the time-by-event method by Futoran and colleagues (Futoran, Kelly, & McGrath, 1989), or the function-oriented interaction coding system (Hirokawa, 1988), to name just a few. However, these systems are better suited for communication tasks than for tasks with high manual components. Most often, researchers adapt observational systems to their specific needs; such adaptations are explicitly encouraged in introductory texts on group process methods (McGrath & Altermatt, 2001; Weingart, 1997; Weingart, Olekalns, & Smith, 2004). Coding schemes have been developed and adapted for different simulator settings, such as power-plant simulations (Stachowski, Kaplan, & Waller, 2009), or aviation (Kanki, Lozito, & Foushee, 1989). Examples from the medical field are the observational coding systems by Kolbe, Künzle, Zala-Mezö, Wacker, and Grote (2009), and by Manser, Howard, and Gaba (2009), which are designed for observation during anesthesia inductions. These systems include measures for explicit and implicit coordination, for heedful interrelating, but also for task distribution and other important aspects of cooperation. Most importantly, these systems suggest coding not only communication, but also actions, and they provide examples of how to do this.

Another, more general, approach to observational systems has been proposed by Flin and Maran (2004). Their method allows developing and adapting behavior categories (called behavioral markers) for various cooperative situations in medicine, such as anesthesia or surgery. Their approach is based on similar systems used in aviation research (Fletcher et al., 2004; Helmreich, 2000). To develop behavioral markers for an observational system, they interview and survey specialists (Flin, Yule, Paterson-Brown, & Maran, 2006) and draw on existing observation systems (Fletcher et al., 2002, 2003; Klampfer et al., 2001; Yule, Flin, Paterson-Brown, Maran, & Rowley, 2006). Although their systems are mostly used for on-site assessments (Yule et al., 2008), they are also a valuable source for developing observational categories for video-based data.

To provide an example, we briefly describe parts of the coding system we used in one of our first studies. It investigated a resuscitation scenario in which medical professionals joined an emergency situation sequentially (Tschan et al., 2006). The goal of this study was to assess the relationship between communication and performance before and after new group members joined. We first transcribed all communication word by word, indicating who said what when. The transcripts were done by a medically trained person. Although online coding systems exist that would not require transcription, transcribing all communication helped, because the communication was often not easy to understand and participants used medical jargon. Since we hypothesized that directive leadership should be important for this task, we then coded each communication utterance with regard to directive leadership and other, more indirect strategies, such as a strategy we called “structuring inquiry.” As in many other studies, we also initially coded some other aspects that did not make it into the paper. After training, coding could be done reliably, but it was time consuming. We still use spreadsheets for transcripts and coding, but many others use observational software.
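To check the reliability of such categorical coding, two coders typically code the same utterances independently, and a chance-corrected agreement index such as Cohen's kappa is computed. The following is a minimal sketch in Python; the code labels and the data are invented for illustration and do not reflect our actual coding scheme:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders' categorical codes."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    # observed agreement: proportion of utterances coded identically
    p_observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # expected agreement if the two coders assigned codes independently
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

# hypothetical codes for ten utterances (DL = directive leadership,
# SI = structuring inquiry, OT = other)
coder1 = ["DL", "DL", "SI", "OT", "DL", "OT", "SI", "DL", "OT", "OT"]
coder2 = ["DL", "DL", "SI", "OT", "SI", "OT", "SI", "DL", "OT", "DL"]
print(round(cohens_kappa(coder1, coder2), 2))
```

In practice one would compute such an index per coding category on a sizeable overlap sample, and retrain coders when agreement is low.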

Our first experiences with behavioral coding taught us some seemingly trivial yet important lessons. For instance, we learned that automatic alarms from the surveillance monitor were so dominant (as alarms should be) that they masked some of the communication; the same applied to noisy equipment (e.g., an artificial breathing machine). We now avoid having noisy equipment in the room whenever possible. It was also sometimes difficult to distinguish people and voices on the videotapes, another obstacle to completing transcripts. Now, participants can be identified by large numbers on their fronts, arms, and backs (as they move around the room), which facilitates the transcription of communication.

Students who worked with us on transcribing and coding often underestimated the extent to which they needed to understand the medical terms used. Nurses and physicians use specialized vocabulary including, for example, abbreviations for drugs and doses; they discuss highly technical aspects or ask questions about patient conditions that only medically trained people understand. They also use jargon, such as “filling up the patient” (i.e., increasing the speed of the saline solution dripping into the patient's arm in order to increase the volume of fluid in the patient's body). This expert communication requires that the researchers understand the task and the possible interventions well, and transcripts cannot be done by untrained students. Even so, the physicians had to help in interpreting the communication. This necessity to have at least a basic understanding of the task in order to produce reliable transcripts and codes may be even more of a challenge with other simulators, for example, powerplant or flight simulators, and researchers have to be aware of the training requirements for coders.

Assessing group performance

We usually are interested in predicting group performance, so we need group performance measures. One way to assess performance is to rely on specialists' overall judgments, for example, by having experienced physicians rate the performance. This has been done successfully in powerplant simulation studies (Waller, 1999) and in flight simulator studies (Brannick, Prince, & Salas, 2002; Waller, 1996). However, this requires several domain specialists to review each session, which often is not feasible.

For our research, we develop performance markers for each scenario (Tschan et al., 2011), based on recommendations by Gaba and colleagues (1998). For some cases, this was easy. In the study on ambiguous diagnostic information, for instance (Tschan et al., 2009), we simply assessed whether and when the group communicated the correct diagnosis and started the appropriate first intervention; this could easily be coded from the videotapes. For other scenarios, developing performance measures was more complicated. For example, in the cardiac arrest scenario, patient survival is not a feasible performance measure, as patients may well die even if the resuscitation is done perfectly (White & Guly, 1999). In addition, for didactic and ethical reasons, we never let the patient “die.” In the rare cases where a group does not treat the patient in an appropriate way for a very long time, the scripted nurse intervenes. We thus had to develop process performance measures. To do this, we perform a task analysis (Tschan et al., 2011) and often develop several performance measures. For cardiac resuscitation, the task analysis is based on the published treatment algorithms (Nolan et al., 2005) that describe “appropriate treatment during a cardiac arrest.” We thus code, for every second, whether someone ventilated the patient, performed cardiac massage, defibrillated, or intubated the patient.

Given that time is critical in cardiac arrest situations, we often use performance measures that are time-related. For instance, we calculate the time the patient received appropriate treatment as a percentage of the time he needed treatment (i.e., had no pulse). Using such a performance measure, we can compare the performance across different phases or time-segments (e.g., before and after a physician joins). In the literature, one finds performance markers or performance checklists for many different medical scenarios. For example, Forrest and colleagues (2002) describe a performance assessment for anesthesia, based on textbook descriptions and guidelines, as well as on their own experience and expert advice.
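Such a time-based performance measure is straightforward to compute once every second of the scenario has been coded. The following sketch (Python; the data, field names, and time window are illustrative assumptions, not our actual coding scheme) computes the percentage of no-pulse time during which the patient received appropriate treatment, optionally restricted to a time segment:

```python
# one entry per second of the scenario: which treatment (if any) was given,
# and whether the patient had no pulse (i.e., needed treatment) at that second
# (invented data, not from an actual session)
seconds = [
    {"t": t, "no_pulse": True,
     "treatment": ("massage" if 5 <= t < 25 else
                   "ventilation" if 25 <= t < 30 else None)}
    for t in range(30)
]

def hands_on_percentage(seconds, window=None):
    """Share of no-pulse time during which any appropriate treatment was given.

    `window` optionally restricts the computation to a (start, end) time
    segment, e.g. before vs. after a physician joins the group."""
    if window is not None:
        start, end = window
        seconds = [s for s in seconds if start <= s["t"] < end]
    needed = [s for s in seconds if s["no_pulse"]]
    if not needed:
        return None
    treated = sum(1 for s in needed if s["treatment"] is not None)
    return 100 * treated / len(needed)

print(hands_on_percentage(seconds))                  # whole scenario
print(hands_on_percentage(seconds, window=(0, 10)))  # first segment only
```

The same per-second codes thus yield both an overall measure and segment-wise measures for comparing phases of the group process.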

Data analysis

Analysis of simulator-based data is, again, very similar to that of other behavioral group studies, and we refer to the respective literature (McGrath & Altermatt, 2001; Weingart, 1997). As the tasks we study are quite complex, we often do not analyze the process as a whole, but only specific phases. It has been shown, for instance, that a first organizing and planning phase should occur early in an emergency (McGrath & Tschan, 2004; Tschan, McGrath et al., 2009). Therefore, we often assess only initial leadership behavior, for instance, during the first three minutes. Indeed, in one of our studies we could confirm the hypothesis that performance was predicted only by directive leadership behavior that occurred during the first 30 seconds after the first physician joined the emergency (Tschan et al., 2006).

Often, specific events, such as interruptions or information transmission episodes, are of special interest. As such events may occur multiple times within each group, multilevel modeling may be the appropriate methodological approach for such a data structure (e.g., Bogenstätter et al., 2009).

In sum, group research with (patient) simulators is not fundamentally different from other group research. However, whereas researchers in social psychology often can choose a task and therefore avoid specificities of tasks exerting a dominant influence, our research with such realistic and complex tasks constantly reminds us of the need to develop a good understanding of the tasks involved (Hackman & Morris, 1975; McGrath, 1984; Tschan & von Cranach, 1996).

Interdisciplinary collaboration

Simulator research requires interdisciplinary collaboration with domain experts. In our case, we collaborate with physicians and nurses. Each profession speaks a different professional language, and brings different competences, but also different research interests and research traditions to the table. It is thus important to get to know each other's perspectives and to negotiate interests. For example, medical professionals have the knowledge about patient conditions, diseases and intervention possibilities necessary for developing scenarios and interpreting participant behavior. On the other hand, social scientists typically have detailed knowledge about group processes, running experiments, data coding, and analysis. In the following, we list a few aspects where we became aware of the differences between disciplines. Again, we limit the discussion to the collaboration between physicians and psychologists, but assume that there are similar issues when cooperating with pilots, firefighters, or powerplant operators.

Maintaining realism versus standardization of the scenario

Physicians often are more concerned about the medical realism of the scenario than are psychologists. Thus, when a group did something unexpected, the physicians sometimes applied short-term changes to the scenario in order to adapt to the group's behavior. The psychologists were less worried about maintaining realism under all circumstances, but more concerned about standardization of the experimental situation. In our studies, given that physicians attend simulator sessions for training, in case of doubt, realism has to be more important than standardization. If important deviations from the protocol are necessary, we deal with this rare but recurring issue by excluding those groups from analysis, similar to the exclusion of participants who do not follow instructions in a classical experiment. In one scenario that required flexible adaptations more frequently, the physician steering the simulator kept to the standardized protocol as long as possible in each session and signaled with a very short change of the patient's heartbeat (visible on the screen) that from this point forward he had to deviate from it. We then analyzed the group process data only up to this point.

Standardization of instructions and experimenter behavior

Similar problems arose for standardizing behavior such as instructions, information given to the participants, ‘patient’ verbal behavior, and the behavior of the scripted nurse. Careful instruction of all people involved was necessary. Often, physicians who happened to be on duty on the ward when we ran simulator sessions helped by playing the role of the physician handing over the patient to the group. All of them would have made great actors, but we had to limit their improvisation talents and to require that they act strictly in accordance with the instructions. Standardization of behavior was even more of a challenge for the scripted nurses and for the physician who represented the patient's voice, because they had to react flexibly but within prescribed limits to the actions of group members, and still play their role realistically. They thus had to maintain an “ad-hoc” equilibrium between standardization and adaptation.

Research traditions

Analyzing data and publishing follows different traditions in medicine and the social sciences. Papers in medical journals are usually much shorter than publications in psychology; they have a somewhat different structure, and medical journals require a less extensive theoretical introduction than is customary in psychology. The psychologists in our group thus had to learn to refrain from suggesting long theoretical introductions when collaborating on a paper for a medical journal, and the physicians sometimes expressed concern about the rather extensive theorizing and the length of the papers for psychology journals. In our group, the physicians take the lead for publications in medical journals (often focused on aspects of medical training or medical performance, with a strong emphasis on results that are immediately relevant for medical practice), the psychologists for publications in psychological journals (in our case focused on aspects of team processes, with a strong emphasis on basic research). This works very well.

There are also some differences in the statistical methods used in the two fields. For example, mediation analysis à la Baron and Kenny (1986) is not widely known or used in medical research. On the other hand, physicians use analyses that are not well known in other fields, such as survival analysis, and they often tend to divide continuous variables into discrete groups, for instance, by dichotomizing. Also, citation rules are strict in both medicine and psychology, but they differ in important ways.

Publications

Interdisciplinary collaboration is very enriching, but it also has its costs, and these may be substantial, especially for young researchers. Running simulator-based studies requires substantial contributions from many people: specialists are needed for scenario development, for running the studies, and for coding, analysis, and writing. Therefore, most of our publications have many authors, sometimes more than six, and this is not uncommon in medicine. Publishing with many co-authors may, however, imply a disadvantage for young scholars who may be confronted with a selection committee that only partly acknowledges multiauthor publications. In addition, it may also be a disadvantage for social scientists to publish in medical journals, because colleagues may not easily find these publications, and journals of other fields are sometimes not recognized when publications or citations are counted in order to assess the performance of a researcher. However, there is increasing awareness of the need for interdisciplinary research; thus, having publications in different fields may also be an advantage. Nevertheless, young researchers especially need to consider these aspects.

Conclusion

In our experience, research with simulators is one of the most fascinating opportunities for group researchers. In our research, we could show how directive leadership is important in unambiguous situations, and how explicit reasoning is important in ambiguous ones; we could show that some leadership interventions may be effective despite being very short; or we could show that quantitative information is transmitted more accurately if it is encoded exactly as it needs to be transmitted (see Hunziker et al., 2010, for an overview of our research). This research is continuing, and there are many more issues we will be investigating.

Young researchers need to be aware that simulator-based research may take somewhat more time than classical experiments run with in-house subject pools, and they may have to reflect on possible publication strategies. However, running simulator sessions and observing the group processes is fascinating; the tasks have high external validity, and there are many different topics worth studying.

Authors’ Note:

This paper was supported by a grant from the Swiss National Science Foundation (#32513113429) to S. U. Marsch, N. K. Semmer, and F. Tschan.

References

Aggarwal, R., Undre, S., Moorthy, K., Vincent, C., & Darzi, A. (2004). The simulated operating theatre: Comprehensive training for surgical teams. Quality and Safety in Health Care, 13(suppl 1), i27–i32.

Bales, R. F. (1950). Interaction process analysis: A method for the study of small groups. Cambridge, MA : Addison-Wesley Press.

Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173–1182.

Beard, R. L., Salas, E., & Prince, C. (1995). Enhancing transfer of training: Using role-play to foster teamwork in the cockpit. International Journal of Aviation Psychology, 5, 131–143.

Bogenstätter, Y., Tschan, F., Semmer, N. K., Spychiger, M., Breuer, M., & Marsch, S. (2009). How accurate is information transmitted to medical professionals joining a medical emergency? A simulator study. Human Factors: The Journal of the Human Factors and Ergonomics Society, 51, 115–125.

Brannick, M., Prince, C., & Salas, E. (2002). The reliability of instructor evaluations of crew performance: Good news and not so good news. The International Journal of Aviation Psychology, 12, 241–261.

Brehmer, B., & Dörner, D. (1993). Experiments with computer-simulated microworlds: Escaping both the narrow straits of the laboratory and the deep blue sea of the field study. Computers in Human Behavior, 9, 171–184.

Brewer, M. B. (2000). Research design and issues of validity. In H. T. Reis, & C. M. Judd (Eds.), Handbook of research methods in social and personality psychology (pp. 3–16). Cambridge : Cambridge University Press.

Burke, C. S., Salas, E., Wilson-Donnelly, K., & Priest, H. (2004). How to turn a team of experts into an expert medical team: Guidance from the aviation and military communities. Quality and Safety in Health Care, 13(suppl 1), i96–i104.

Cooke, N. J., & Shope, S. M. (2004). Designing a synthetic task environment. In S. G. Schiflett, L. R. Elliott, E. Salas, & M. D. Coovert (Eds.), Scaled worlds: Development, validation, and application (pp. 263–278). Aldershot : Ashgate.

Cooper, J. B., & Taqueti, V. R. (2004). A brief history of the development of mannequin simulators for clinical education and training. Quality and Safety in Health Care, 13(suppl 1), i11–i18.

Edmondson, A. C., & McManus, S. E. (2007). Methodological fit in management field research. Academy of Management Review, 32, 1155–1179.

Fernandez, R., Vozenilek, J. A., Hegarty, C. B., Motola, I., Rezneck, M., Phrampus, P. E., & Kozlowski, S. W. (2008). Developing expert medical teams: Toward an evidence-based approach. Academic Emergency Medicine, 15, 1025–1036.

Fletcher, G., Flin, R., McGeorge, P., Glavin, R., Maran, N., & Patey, R. (2003). Anaesthetists’ Non-Technical Skills (ANTS): Evaluation of a behavioural marker system. British Journal of Anaesthesia, 90, 580–588.

Fletcher, G., Flin, R., McGeorge, P., Glavin, R., Maran, N., & Patey, R. (2004). Rating non-technical skills: Developing a behavioural marker system for use in anaesthesia. Cognition, Technology and Work, 6, 165–171.

Flin, R., & Maran, N. (2004). Identifying and training non-technical skills for teams in acute medicine. Quality and Safety in Health Care, 13(suppl 1), i80–i84.

Flin, R., Yule, S., Paterson-Brown, S., & Maran, N. (2006). Attitudes to teamwork and safety in the operating theatre. The Surgeon, 4, 145–151.

Forrest, F. C., Taylor, M. A., Postlethwaite, K., & Aspinall, R. (2002). Use of a high-fidelity simulator to develop testing of the technical performance of novice anaesthetists. British Journal of Anaesthesia, 88, 338–344.

Futoran, G. C., Kelly, J. R., & McGrath, J. E. (1989). TEMPO: A time-based system for analysis of group interaction processes. Basic and Applied Social Psychology, 10, 211–232.

Gaba, D. M., Howard, S. K., Fish, K. J., Smith, B. E., & Sowb, Y. A. (2001). Simulation-based training in Anesthesia Crisis Resource Management (ACRM): A decade of experience. Simulation Gaming, 32, 175–193.

Gaba, D. M., Howard, S. K., Flanagan, B., Smith, B. E., Fish, K. J., & Botney, R. (1998). Assessment of clinical performance during simulated crisis using both technical and behavioral ratings. Anesthesiology, 89, 8–18.

Good, M. L. (2003). Patient simulation for training basic and advanced clinical skills. Medical Education, 37(suppl 1), 14–21.

Granlund, R., & Johansson, B. (2004). Monitoring distributed collaboration in the C3Fire microworld. In S. Schiflett, L. Elliott, E. Salas, & M. D. Coovert (Eds.), Scaled worlds: Development, validation, and application (pp. 37–48). Aldershot : Ashgate.

Gray, W. D. (2002). Simulated task environments: The role of high-fidelity simulations, scaled worlds, synthetic environments, and laboratory tasks in basic and applied cognitive research. Cognitive Science Quarterly, 2, 205–227.

Gugerty, L. (2004). Using cognitive task analysis to design multiple synthetic tasks for uninhabited aerial vehicle operation. In S. Schiflett, L. Elliott, E. Salas, & M. Coovert (Eds.), Scaled worlds: Development, validation, and applications (pp. 240–261). Aldershot : Ashgate.

Gurtner, A., Tschan, F., Semmer, N. K., & Nägele, C. (2006). Getting groups to develop good strategies: Effects of reflexivity interventions on team process, team performance, and shared mental models. Organizational Behavior and Human Decision Processes, 102, 127–142.

Hackman, J. R., & Morris, C. G. (1975). Group tasks, group interaction process, and group performance effectiveness: A review and proposed integration. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 8, pp. 45–99). New York : Academic Press.

Helmreich, R. L. (2000). On error management: Lessons from aviation. British Medical Journal, 320, 781–785.

Helmreich, R. L., & Davis, J. M. (1997). Anaesthetic simulation and lessons to be learned from aviation. Canadian Journal of Anaesthesia, 44, 907–912.

Hirokawa, R. Y. (1988). Group communication and decision-making performance: A continued test of the functional perspective. Human Communication Research, 4, 487–515.

Hunziker, S., Bühlmann, C., Tschan, F., Balestra, G., Legret, C., Schumacher, C., Semmer, N. K., Hunziker, P., & Marsch, S. U. (2010). Brief leadership instructions improve cardiopulmonary resuscitation in a high fidelity simulation: a randomized controlled trial. Critical Care Medicine, 38, 1086–1091.

Kanki, B. G., Lozito, S., & Foushee, H. C. (1989). Communication indices of crew coordination. Aviation, Space, and Environmental Medicine, 60, 56–60.

Kerr, N. L., & MacCoun, R. J. (1985). The effects of jury size and polling method on the process and product of jury deliberation. Journal of Personality and Social Psychology, 48, 349–363.

Kerr, N. L., Aronoff, J., & Messé, L. A. (2000). Methods of small group research. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social and personality psychology (pp. 160–189). Cambridge : Cambridge University Press.

Klampfer, B., Flin, R., Helmreich, R. L., Häusler, R., Sexton, B., Fletcher, G., Field, P., Staender, S., Lauche, K., Dieckmann, P., & Amacher, A. (2001). Enhancing performance in high risk environments: Recommendations for the use of behavioural markers. Presented at the Behavioural Markers Workshop sponsored by the Daimler–Benz Stiftung GIHRE-Kolleg, Swissair Training Center, Zurich, July 5–6, 2001.

Klein, G. (2000). Cognitive task analysis of teams. In J. M. Schraagen, S. Chipman, & V. Shalin (Eds.), Cognitive Task Analysis. Mahwah, NJ : Lawrence Erlbaum.

Kolbe, M., Künzle, B., Zala-Mezö, E., Wacker, J., & Grote, G. (2009). Measuring coordination behavior in anaesthesia teams during induction of general anaesthetics. In R. Flin & L. Mitchell (Eds.), Safer surgery. Analysing behavior in the operating theatre (pp. 203–223). Burlington, VT: Ashgate Publishing Company.

Kozlowski, S. W., & DeShon, R. P. (2004). A psychological fidelity approach to simulation-based training: Theory, research and principles. In S. G. Schiflett, L. Elliott, E. Salas, & M. D. Coovert (Eds.), Scaled worlds: Development, validation and applications (pp. 75–99). Aldershot : Ashgate.

Lane, J. L., Slavin, S., & Ziv, A. (2001). Simulation in medical education: A review. Simulation Gaming, 32, 297–314.

Lüscher, F., Hunziker, S., Gaillard, V., Tschan, F., Semmer, N. K., Hunziker, P., & Marsch, S. U. (2010). Proficiency in cardiopulmonary resuscitation of medical students at graduation: A simulator-based comparison with general practitioners. Swiss Medical Weekly, 140, 57–61.

Manser, T., Howard, S. K., & Gaba, D. (2009). Identifying characteristics of effective teamwork in complex medical work environments: Adaptive crew coordination in anaesthesia. In R. Flin & L. Mitchell (Eds.), Safer surgery. Burlington, VT: Ashgate Publishing Company.

Marinopoulos, S. S., Dorman, T., Ratanawongsa, N., Wilson, L. M., Ashar, B. H., Magaziner, J. L., Miller, R. G., Thomas, P. A., Prokopowicz, G. P., Qayyum, R., & Bass, E. B. (2007). Effectiveness of continuing medical education. Evidence Report/Technology Assessment No. 149 AHRQ Publication No.07-E006. Rockville, MD : Agency for Healthcare Research and Quality.

Marsch, S. U., Tschan, F., Semmer, N., Spychiger, M., Breuer, M., & Hunziker, P. R. (2005). Unnecessary interruptions of cardiac massage during simulated cardiac arrests. European Journal of Anaesthesiology, 22, 831–833.

McFetrich, J. (2007). A structured literature review on the use of high fidelity patient simulators for teaching in emergency medicine. Emergency Medicine Journal, 23, 509–511.

McGaghie, W. C. (2008). Research opportunities in simulation-based medical education using deliberate practice. Academic Emergency Medicine, 15, 995–1001.

McGaghie, W. C., Siddal, V. J., Mazmanian, P. E., & Myers, J. (2009). Lessons for continuing medical education from simulation research in undergraduate and graduate medical education. Chest, 135, 62S–68S.

McGrath, J. E. (1984). Groups, interaction and performance. Englewood Cliffs, NJ : Prentice-Hall.

McGrath, J. E., & Altermatt, W. T. (2001). Observation and analysis of group interaction over time: Some methodological and strategic consequences. In M. A. Hogg & R. S. Tindale (Eds.), Blackwell handbook of social psychology: Group processes (pp. 525–556). Oxford : Blackwell Publishers.

McGrath, J. E., & Tschan, F. (2004). Dynamics in groups and teams: Groups as complex action systems. In M. S. Poole & A. H. van de Ven (Eds.), Handbook of organizational change and development (pp. 50–73). Oxford : Oxford University Press.

Moreland, R. L., & Myaskovsky, L. (2000). Exploring the performance benefits of group training: Transactive memory or improved communication? Organizational Behavior and Human Decision Processes, 82, 117–133.

Nehring, W. M., & Lashley, F. R. (2009). Nursing simulation: A review of the past 40 years. Simulation & Gaming, 40, 528–552.

Nolan, J. P., Deakin, C. D., Soar, J., Bottiger, B. W., & Smith, G. (2005). European resuscitation council guidelines for resuscitation 2005: Section 4. Adult advanced life support. Resuscitation, 67(suppl 1), S39–S86.

Rosen, M. A., Salas, E., Wu, T. S., Silvestri, S., Lazzara, E. H., Lyons, R., Weaver, S. J., & King, H. B. (2008). Promoting teamwork: An event-based approach to simulation-based teamwork training for emergency medicine residents. Academic Emergency Medicine, 15, 1190–1198.

Runkel, P. J., & McGrath, J. E. (1972). Research on human behavior: A systematic guide to method. New York : Holt, Rinehart and Winston.

Salas, E., Sims, D. E., Klein, C., & Burke, C. S. (2003). Can teamwork enhance patient safety? Forum, 23, 5–9.

Sexton, J. B., Thomas, E. J., & Helmreich, R. L. (2000). Error, stress, and teamwork in medicine and aviation: Cross sectional surveys. British Medical Journal, 320, 745–749.

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston : Houghton-Mifflin.

Stachowski, A. A., Kaplan, S. A., & Waller, M. J. (2009). The benefits of flexible team interaction during crises. Journal of Applied Psychology, 94, 1536–1543.

Stasser, G., & Titus, W. (1987). Effects of information load and percentage of shared information on the dissemination of unshared information during group discussion. Journal of Personality and Social Psychology, 53, 81–93.

Tindale, R. S., Nadler, J., Krebel, A., & Davis, J. H. (2004). Procedural mechanisms and jury behavior. In M. B. Brewer & M. Hewstone (Eds.), Applied social psychology. Perspectives on social psychology (pp. 136–164). Malden, MA : Blackwell Publishing.

Tschan, F., & von Cranach, M. (1996). Group task structure, processes and outcome. In M. West (Ed.), Handbook of work group psychology (pp. 95–121). Chichester: Wiley.

Tschan, F., McGrath, J. E., Semmer, N. K., Arametti, M., Bogenstätter, Y., & Marsch, S. U. (2009). Temporal aspects of processes in ad-hoc groups: A conceptual scheme and some research examples. In R. Roe, M. J. Waller, & C. Clegg (Eds.), Doing time. Advancing temporal research in organizations (pp. 42–60). London : Routledge.

Tschan, F., Semmer, N. K., Gautschi, D., Hunziker, P., Spychiger, M., & Marsch, S. C. U. (2006). Leading to recovery: Group performance and coordinative activities in medical emergency driven groups. Human Performance, 19, 277–304.

Tschan, F., Semmer, N. K., Gurtner, A., Bizzari, L., Spychiger, M., Breuer, M., & Marsch, S. U. (2009). Explicit reasoning, confirmation bias, and illusory transactive memory: A simulation study of group medical decision making. Small Group Research, 40, 271–300.

Tschan, F., Semmer, N. K., Vetterli, M., Gurtner, A., Hunziker, S., & Marsch, S. U. (2011). Developing observational categories for group process research based on task analysis: Examples from research on medical emergency driven teams. In M. Boos, M. Kolbe, P. Kappeler, & T. Ellwart (Eds.), Coordination in human and primate groups. Berlin : Springer.

Tschan, F., Semmer, N., Windlinger, R., Hunziker, S., & Marsch, S. U. (2009). Enhancing leadership and performance by minimal invasive training: The case of medical emergency driven groups treating a cardiac arrest in a high fidelity simulator. Paper presented at the 4th Annual INGRoup Conference, Colorado Springs, July 2009.

Tschan, F., Vetterli, M., Semmer, N. K., Hunziker, S., & Marsch, S. U. (in press). Activities during interruptions in cardiopulmonary resuscitation: A simulator study. Resuscitation.

Vincenzi, D. A., Wise, J. A., Mouloua, M., & Hancock, P. A. (2009). Human factors in simulation and training. Boca Raton, FL : Taylor & Francis.

von Planta, M. (2004). Wissenschaftliche Grundlagen der kardiopulmonalen Reanimation (CPR) [Scientific bases of cardiopulmonary resuscitation]. Schweizerisches Medizin Forum, 4, 470–477.

Waller, M. J. (1996). Multiple-task performance in groups. Academy of Management Proceedings, 303–306.

Waller, M. J. (1999). The timing of adaptive group responses to nonroutine events. Academy of Management Journal, 42, 127–137.

Wang, S. S., Quinones, J., Fitch, M. T., Dooley-Hash, S., Griswold-Theodorson, S., Medzon, R., Korley, F., Laack, T., Robinett, A., & Clay, L. (2008). Developing technical expertise in emergency medicine – the role of simulation in skill acquisition. Academic Emergency Medicine, 15, 1046–1057.

Weingart, L. R. (1997). How did they do that? The ways and means of studying group processes. Research in Organizational Behavior, 19, 189–239.

Weingart, L. R., Bennett, R. J., & Brett, J. M. (1993). The impact of consideration of issues and motivational orientation on group negotiation process and outcome. Journal of Applied Psychology, 78, 504–517.

Weingart, L. R., Olekalns, M., & Smith, P. L. (2004). Quantitative coding of negotiation behaviour. International Negotiation, 9, 441–455.

White, S. P., & Guly, H. R. (1999). Survival from cardiac arrest in an accident and emergency department: use as a performance indicator? Resuscitation, 40, 97–102.

Yule, S., Flin, R., Paterson-Brown, S., Maran, N., & Rowley, D. (2006). Development of a rating system for surgeon's non-technical skills. Medical Education, 40, 1098–1104.

Yule, S., Flin, R., Rowley, D., Mitchell, A., Youngson, G. G., Maran, N., & Paterson-Brown, S. (2008). Debriefing surgical trainees on non-technical skills (NOTSS). Cognition, Technology & Work, 10, 265–274.
