1

Historical Foundations of Human Factors

Our interest in the design of machines for human use runs the full gamut of machine complexity—from the design of single instruments to the design of complete systems of machines which must be operated with some degree of coordination.

A. Chapanis, W. Garner, & C. Morgan
1949

INTRODUCTION

The quote with which we begin this chapter is from the first textbook devoted specifically to human factors, Applied Experimental Psychology: Human Factors in Engineering Design, by Alphonse Chapanis, Wendell Garner, and Clifford Morgan. Designing machines and systems, whether simple or complex, for human use was not only the central concern of their pioneering book but also the driving force for subsequent research on human factors and ergonomics over the past 69 years. The following quotation from the U.S. National Academy of Engineering in its report The Engineer of 2020, now more than 10 years old, captures the ever-increasing importance of the role of human factors in the introduction of new technologies and products:

Engineers and engineering will seek to optimize the benefits derived from a unified appreciation of the physical, psychological, and emotional interactions between information technology and humans. As engineers seek to create products to aid physical and other activities, the strong research base in physiology, ergonomics, and human interactions with computers will expand to include cognition, the processing of information, and physiological responses to electrical, mechanical, and optical stimulation. (2004, p. 14)

It is our purpose in this textbook to summarize much of what we know about human cognitive, physical, and social characteristics and to show how this knowledge can be brought to bear on the design of machines, tools, and systems that are easy and safe to use.

In everyday life, we interact constantly with instruments, machines, and other inanimate systems. These interactions range from turning on and off a light by means of a switch, to the operation of household appliances such as stoves and digital video recorders (DVRs), to the use of mobile smartphones and tablet computers, to the control of complex systems such as aircraft and spacecraft. In the simple case of the light switch, the interaction of a person with the switch, and those components controlled by the switch, forms a system. Every system has a purpose or a goal; the lighting system has the purpose of illuminating a dark room or extinguishing a light when it no longer is needed. The efficiency of the inanimate parts of this system, that is, the power supply, wiring, switch, and light bulb, in part determines whether the system goal can be met. For example, if the light bulb burns out, then illumination is no longer possible.

The ability of the lighting system and other systems to meet their goals also depends on the human components of the systems. For example, if a small person cannot reach the light switch, or an elderly person is not strong enough to operate the switch, then the light will not go on and the goal of illumination will not be met. Thus, the total efficiency of the system depends on both the performance of the inanimate component and the performance of the human component. A failure of either can lead to failure of the entire system.

ELECTRONIC AND DIGITAL EQUIPMENT

The things that modern electronic and digital equipment can do are amazing. However, how well these gadgets work (the extent to which they accomplish the goals intended by their designers) is often limited by the human component. As one example, the complicated features of video cassette recorders (VCRs) made them legendary targets of humor in the 1980s and 1990s. To make full use of a VCR, a person first had to connect the device correctly to the television and cable system or satellite dish system that provided the signal and then, if the VCR did not receive a time signal from the cable or satellite, accurately program its clock. When the person wanted to record a television program, she had to set the correct date, channel number, intended start and end times, and tape speed (SP, LP, or EP). If she made any mistakes along the way, the program she wanted would not be recorded. Either nothing happened, the wrong program was recorded, or the correct program was recorded for the wrong length of time (e.g., if she chose the wrong tape speed, say SP to record a 4 hour movie, she would get only the first 2 hours of the show). Because there were many points in this process at which users could get confused and make mistakes, and different VCRs embedded different command options under various menus and submenus in the interface, even someone who was relatively adept at programming recorders had problems, especially when trying to operate a machine with which he was unfamiliar.

Usability problems prevented most VCR owners from using their VCRs to their fullest capabilities (Pollack, 1990). In 1990, almost one-third of VCR owners reported that they had never even set the clock on the machine, which meant that they could never program the machine for recording at specific times. Usability problems with VCRs persisted for decades after their introduction in 1975.

Electronic technology continues to evolve. Instead of VCRs, we now have DVRs such as TiVo and streaming devices such as Roku. These products still require some programming, and, in many cases, they must be connected to other devices (such as a television set or a home Internet router) to perform their functions. This means that usability is still a major concern, even though we do not have to worry about setting their clocks any more.

You might be thinking right now that usability concerns only apply to older people, who may not be as familiar with technology as younger people. However, young adults who are more technologically sophisticated still have trouble with these kinds of devices. One of the authors of this textbook (Proctor) conducted, as a class project, a usability test of a modular bookshelf stereo with upper-level college students enrolled in a human factors class. Most of these students were unable to program the stereo’s clock, even with the help of the manual. Another published study asked college students to use a VCR. Even after training, 20% of them thought that the VCR was set correctly when in fact it was not (Gray, 2000).

COMPUTER TECHNOLOGY

Perhaps nowhere is rapid change more evident than in the development and proliferation of computer technology (Bernstein, 2011; Rojas, 2001). The first generation of modern computers, introduced in the mid-1940s, was extremely large, slow, expensive, and available mainly for military purposes. For example, in 1944, the Harvard-IBM Automatic Sequence Controlled Calculator (ASCC, the first large-scale electric digital computer in the U.S.) was roughly half the length of an American football field and performed one calculation every 3–5 s. Programming the ASCC, which had nothing like an operating system or compiler, was not easy. Grace Hopper, one of the first programmers of the ASCC, had to punch machine instructions onto a paper tape, which she then fed into the computer. Despite its size, it could only execute simple routines. Hopper went on to develop one of the first compilers for a programming language, and in her later life, she championed standards testing for computers and programming languages.

The computers that came after the ASCC in the 1950s were considerably smaller but still filled a large room. These computers were more affordable and available to a wider range of users at businesses and universities. They were also easier to program, using assembly language, which allowed abbreviated programming codes. High-level programming languages such as COBOL and FORTRAN, which used English-like language instead of machine code, were developed, marking the beginning of the software industry. During this period, most computer programs were prepared on decks of cards, which the programmer then submitted to an operator. The operator inserted the deck into a machine called a card reader and, after a period of time, returned a paper printout of the run. Each line of code had to be typed onto a separate card using a keypunch. Everyone who wrote programs during this era (such as the authors of this textbook) remembers having to go through the tedious procedure of locating and correcting typographical errors on badly punched cards, dropping the sometimes huge deck of cards and hopelessly mixing them up, and receiving cryptic, indecipherable error messages when a program crashed.

In the late 1970s, after the development of the microprocessor, the first desktop-sized personal computers (PCs) became widely available. These included the Apple II, Commodore PET, and Radio Shack TRS-80, followed in 1981 by the IBM PC. These machines changed the face of computing, making powerful computers available to everyone. However, a host of usability issues arose when computers, once accessible only by a small, highly trained group of users, became accessible by the general public. This forced the development of user-friendly operating system designs. For example, users interacted with the first PCs’ operating systems through a text-based, command line interface. This clumsy and unfriendly interface restricted the PC market to the small number of users who wanted a PC badly enough to learn the operating system commands, but development of a graphical user interface was underway at the Xerox Palo Alto Research Center (PARC). Only 7 years after introducing the Apple II, Apple presented the Macintosh, the first commercially successful PC to use a window-based graphical interface. Such interfaces are now an integral part of any computer system.

Interacting with a graphical interface requires the use of a pointing device to locate objects on the screen. The first computer “mouse” was developed by Douglas Engelbart in 1963 for an early computer collaboration system (see Chapter 15). He called it an “X-Y position indicator.” His early design, shown in Figure 1.1, was later improved by Bill English at Xerox PARC for use with graphical interfaces. Like the graphical interface, the mouse eliminated some of the need for keyboarding during computer interaction.

FIGURE 1.1 The first computer mouse, developed by Douglas Engelbart.

Despite the improvements provided by graphical interfaces and the computer mouse, there are many usability issues yet to be resolved in human–computer interaction (HCI), and new issues appear as new functionality is introduced. For example, with a new piece of software, it is often hard to figure out what the different icons in its interface represent, or they may be easily confused with each other. One of the authors of this book, when not paying close attention, occasionally clicked on the “paste” icon instead of the “save” icon in a popular word processor, because the two icons were similar in appearance and located close together. One very popular operating system required clicking on a button labeled “Start” when you wanted to shut the computer down, which most people found confusing. Moreover, like the old VCR problem, many software packages were very complex, and this complexity ensured that the vast majority of their users would be unable to use the software’s full range of capabilities.

With the development of the Internet and the World Wide Web beginning around 1990, the individual PC became a common household accessory. It can function as a video game console, communication device (e-mail, voice messaging, instant messaging), digital video disc (DVD) player/recorder and DVR, stereo system, television, library, shopping mall, and so on. Usability studies form a large part of research into human factors relevant to interacting with the Web (Chen & Macredie, 2010; Vu & Proctor, 2011; Vu, Proctor, & Garcia, 2012). Although vast amounts of information are available on the Web, it is often difficult for users to find the information for which they are searching. Individual websites vary greatly in usability, with many being cluttered and difficult to comprehend. Since the introduction of the iPhone in 2007, smartphone ownership has grown to the point that most Americans now own one, and many of those owners rely on their smartphones for access to the Web (Smith, 2015). A host of other usability issues are introduced by smartphones because of the devices’ small display screens and restricted forms of user input (Rahmati et al., 2012). The preparation and structuring of content, as well as appropriate displays of information and input modes for a variety of devices, are important for the design of effective websites.

HEALTHCARE SYSTEMS

Over the past 15 years, healthcare has become a major research focus of human factors specialists. At the turn of the century, the Institute of Medicine published a report, To Err Is Human: Building a Safer Health System (Kohn, Corrigan, & Donaldson, 2000). This report pointed out that between 44,000 and 98,000 deaths annually were occurring as a result of human error during healthcare, and called for research clarifying the causes of the errors and a shift in focus of the healthcare system toward one of patient safety. This call has led to a torrent of human factors research on healthcare and the application of a human factors approach to the design of healthcare systems (Carayon & Xie, 2012; Carayon et al., 2014).

Interest in the role played by human factors in healthcare is sufficiently great that, starting in 2012, the Human Factors and Ergonomics Society has held an annual meeting, the International Symposium on Human Factors and Ergonomics in Health Care, which presents “the latest science in human factors as it applies to health-care delivery, medical and drug-delivery device design and health-care applications” (www.hfes.org/web/HFESMeetings/2015HealthCareSymposium.html). Among the technological advances in healthcare are electronic medical records, which allow the sharing of medical information among many parties, including possibly patients, but for which the interfaces must be usable by all potential users (Zarcadoolas, Vaughon, Czaja, Levy, & Rockoff, 2013).

CYBER SECURITY

In addition to usability, privacy and security on the Internet are major problems in general and for medical records in particular. Individuals and organizations want to keep some information open to the world while keeping other information secure and restricting its availability to authorized users only. An unsecured computer system or website can be damaged, either accidentally or intentionally, when an unauthorized person tampers with it, which may lead to severe financial losses. Careless system designers may make confidential information accessible to anyone who visits the site, which is what happened in the much-publicized 2014 hacking of Sony Pictures Entertainment in association with the motion picture The Interview. This cyber-attack resulted in the release of personal data and e-mail messages for thousands of then-current and former Sony employees. Lawsuits claimed that the hacking was facilitated by Sony’s failure to provide an appropriate level of security for the information.

Increases in cyber security usually come at the cost of decreased usability (Proctor, Vu, & Schultz, 2009; Schultz, 2012). Most people, for a number of reasons, do not want to perform the additional tasks required to ensure a high degree of security. Other people try to make their data secure but fail to do so. So, questions remain about the best ways to ensure usability while maintaining security and privacy. The desire to allow people control over their private online data has resulted in the introduction of a new term, human–data interaction, to characterize the complex interactions between humans, online software agents, and data access (Mortier et al., 2014).

SERIOUS ACCIDENTS RESULTING FROM MAJOR SYSTEM FAILURES

Though some consumers’ frustrations with their digital equipment may seem amusing, in some cases great amounts of money and many human lives depend on the successful operation of systems. It is not difficult to find examples of incidents in which inadequate consideration of human factors contributed to serious accidents.

On January 28, 1986, the space shuttle Challenger exploded during launch, resulting in the death of its seven crew members. Design flaws, relaxed safety regulations, and a sequence of bad decisions led to the disaster, the consequence of which was not only the loss of human lives but also a crippled space program and a substantial cost to the government and private industry. Even though the investigation of the Challenger disaster highlighted the problems within the culture and patterns of administration of the National Aeronautics and Space Administration (NASA) that led to the explosion, almost exactly the same kinds of mistakes and errors of judgment contributed to the deaths of seven astronauts on February 1, 2003, when the space shuttle Columbia broke up on re‑entry to the earth’s atmosphere.

The Columbia Accident Investigation Board concluded that the shuttle’s heat shield was significantly damaged during liftoff when struck by a piece of foam from the external fuel tank. Although the cameras monitoring the launch clearly recorded the event, a quick decision was made by NASA officials that it did not threaten the safety of the shuttle. After all, they rationalized, foam had broken off during liftoff for other shuttle missions with no consequences. NASA engineers were not as cavalier, however. The engineers had worried for many years about these foam strikes. They requested (several times) that Space Shuttle Program managers coordinate with the U.S. Department of Defense to obtain images of the Columbia with military satellites so that they could determine the extent of the damage. These requests were ignored by Space Shuttle Program managers even after the engineers determined (through computer modeling and simulation) that damage must have occurred to the heat shield. The managers were suffering from the effects of a group behavior labeled “groupthink,” in which it becomes very easy to ignore information when it is provided by people who are not part of the group.

NASA is not the only organization to have experienced disaster as a result of bad human factors. On March 28, 1979, a malfunction of a pressure valve triggered what ultimately became a core meltdown at the Three Mile Island nuclear power plant near Harrisburg, Pennsylvania. Although the emergency equipment functioned properly, poorly designed warning displays and control panels contributed to the escalation of a minor malfunction into the worst accident in the history of the U.S. nuclear power industry. This incident resulted in considerable “bad press” for the nuclear power industry as well as financial loss. The most devastating effect of the accident, however, was on U.S. citizens’ attitudes toward nuclear power: Popular support for this alternative energy source dropped precipitously and has remained low ever since.

The Three Mile Island incident was a major impetus for the establishment of formal standards in nuclear plant design. Some of these standards attempted to remedy the obvious ergonomic flaws that led to the disaster. Other disasters have led to similar regulation and revision of design and safety guidelines. For example, in 1994, the Estonia, an Estonian car and passenger ferry, sank off the coast of Finland because the bow doors broke open and allowed water to pour into the hold. In what was regarded as one of the worst European maritime disasters since World War II, 852 lives were lost. The door locks failed because of poor design and lack of maintenance, and the crew failed to respond appropriately or quickly enough to the event. New safety guidelines were established for all European ferries after the disaster.

As with the Challenger and Columbia disasters, the increased attention to ergonomic issues on ferries did not prevent other disasters from occurring. After the Estonia disaster, the Norwegian ferry Sleipner ran into a rock and sank in November 1999, resulting in the loss of 16 lives; the crew was poorly trained, and few safety procedures existed.

Unfortunately, we can present still more examples. On October 31, 2000, a Singapore Airlines jumbo jet attempted to take off on a runway that was closed for construction, striking concrete barriers and construction equipment before catching fire. At least 81 people lost their lives as a consequence of this accident, which was due in part to poor placement of the barriers and inadequate signs. Yet another example is the head-on collision, on September 12, 2008, in Chatsworth, California, of a commuter train and a freight train, which resulted in the loss of 25 lives and more than 100 injuries. The collision occurred when the commuter train ran a red signal and entered a section of track to which the freight train had been given access. Vision ahead was limited because of the curve in the track, which meant that the engineers of the two trains could not see each other until a few seconds before impact. Investigation showed that the signal was working properly and that the engineer of the commuter train was using his cellphone during the period in which the train passed the red light. Consequently, the National Transportation Safety Board (2010) determined, “The engineer failed to respond appropriately to a red signal at Control Point Topanga because he was engaged in text messaging at the time.”

The Challenger, Columbia, Three Mile Island, and other disasters can be traced to errors in both the machine components and the human components of the systems. After reading this text, you should have a good understanding of how the errors that led to these incidents occurred and of steps that can be taken in the design and evaluation of systems to minimize the likelihood of their occurrence. You should also appreciate how and why human factors knowledge should be incorporated into the design of everything from simple products to complex systems with which people must interact.

WHAT IS HUMAN FACTORS AND ERGONOMICS?

When engineers design machines, they evaluate them in terms of their reliability, ease of operation or “usability,” and error-free performance, among other things. Because the efficiency of a system depends on the performance of its operator as well as the adequacy of the machine, the operator and machine must be considered together as a single human–machine system. With this view, it then makes sense to analyze the performance capabilities of the human component in terms consistent with those used to describe the inanimate components of the system. For example, the reliability (the probability of successful performance) of human components can be evaluated in the same way as the reliability of machine components.
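To make the parallel concrete, the following minimal sketch (in Python, with entirely hypothetical reliability values) shows one common way such a calculation is done for a series system, in which every component, human or machine, must succeed for the system goal to be met:

```python
# Illustrative sketch (not from the text): treating the human and machine
# components of a simple system as independent parts that must all succeed.

def system_reliability(component_reliabilities):
    """Probability that the whole system meets its goal, assuming
    independent components that must all succeed (a series system)."""
    reliability = 1.0
    for r in component_reliabilities:
        reliability *= r
    return reliability

# Hypothetical values for the lighting-system example described earlier:
# power supply, wiring, switch, bulb, and the human operator.
components = {
    "power supply": 0.999,
    "wiring": 0.999,
    "switch": 0.995,
    "bulb": 0.98,
    "operator": 0.97,  # e.g., reaches and operates the switch correctly
}

print(round(system_reliability(components.values()), 3))  # ~0.944
```

Note that in this view a highly reliable machine paired with an error-prone operator (or the reverse) still yields a system that fails appreciably often, which is why the two components must be considered together.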

DEFINITION

The variables that govern the efficiency of the operator within a system fall under the topic of human factors: the study of those variables that influence the efficiency with which the human performer can interact with the inanimate components of a system to accomplish the system goals. This also is called ergonomics, and, in fact, the term ergonomics is more familiar than the term human factors outside of the U.S. and also to the general population within the U.S. (Dempsey, Wogalter, & Hancock, 2006). Other names include human engineering, engineering psychology, and, most recently, human–systems integration, to emphasize the many roles of humans in large-scale systems and systems of systems (Durso et al., 2015).

The “official” definition for the field of human factors, adopted in August, 2000, by the International Ergonomics Association and endorsed by the Human Factors and Ergonomics Society, is as follows:

Ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data, and other methods to design in order to optimize human well-being and overall system performance.

The field of human factors depends on basic research from relevant supporting sciences, applied research that is unique to the field, and application of the resulting data and principles to specific design problems. Human factors specialists thus are involved in research and the application of the data from that research to all phases of system development and evaluation.

Embodied in the definition of human factors is the importance of basic human capabilities, such as perceptual abilities, attention span, memory span, and physical limitations. The human factors specialist must know the limits of these capabilities and bring this knowledge to bear on the design of systems. For example, the placement of a light switch at an optimal height requires knowledge of the anthropometric constraints (i.e., the physical characteristics) of the population of intended users. If the switch is to be used by people who are confined to wheelchairs as well as by people who are not, it should be placed at a height that allows easy operation of the switch by both groups. Similarly, the human factors specialist must consider people’s perceptual, cognitive, and movement capabilities when designing information displays and controls, such as those found in automobiles, computer software packages, and microwave ovens. Only designs that accommodate and optimize the capabilities of the system’s users will be able to maximize total system performance. Otherwise, the system performance will be reduced, and the system goals may not be met.

Barry Beith (2006, p. 2303), past president of the Human Factors and Ergonomics Society, said the following about what the human factors practitioner needs to know:

HF/E is a field with very broad application. Essentially, any situation in which humans and technology interact is a focus for our field. Because of this breadth and diversity, the Basics are critical to success. By Basics, I am referring to the fundamental tools, techniques, skills, and knowledge that are the underpinnings of our discipline. Knowledge of human beings includes capabilities and limitations, behavioral and cultural stereotypes, anthropometric and biomechanical attributes, motor control, perception and sensation, cognitive abilities, and, most recently, emotional attributes addressed by affective human factors.

The basics covered in this textbook will be important for your understanding of all the areas to which human factors and ergonomics analyses can be applied.

BASIC HUMAN PERFORMANCE

There is now a massive amount of scientific data on the limits of human capabilities. This research spans about 150 years and forms the core of the more general study of human performance. Specifically, the study of human performance involves analyses of the processes that underlie the acquisition, maintenance, transfer, and execution of skilled behavior (Healy & Bourne, 2012; Johnson & Proctor, 2017; Matthews, Davies, Westerman, & Stammers, 2000). The research identifies factors that limit different aspects of a person’s performance, analyzes complex tasks by breaking them into simpler components, and establishes estimates of basic human capabilities. With these data, we can predict how well people will be able to perform both simple and complex tasks.

Just as engineers analyze the machine component of a complex system in terms of its constituent subsystems (recall the wiring, switch, and light bulb of the lighting system), the human performance researcher analyzes the human component in terms of its subsystems. Before the light can be turned on, the human must perceive a need for light, decide on the appropriate action, and execute the action necessary to flip the switch. In contrast, the human factors specialist is concerned primarily with the interface between the human and machine components with the goal of making the communication of information between these two components as smooth and efficient as possible. In our lighting system example, this interface is embodied in the light switch, and the human factors issues involve the design and placement of the switch for optimal use. Thus, whereas the human performance researcher is interested in characterizing the processes within the human component, the human factors specialist is concerned with designing the human–machine interface to optimize achievement of the system goal.

In designing a system, we have much more freedom in how we specify the operating characteristics of the machine than in how we specify the characteristics of the human operator. That is, we can redesign and improve the machine components, but we shouldn’t expect to be able to (or be permitted to) redesign and improve the operator. We can carefully screen and extensively train our operators before placing them in our system, but many limitations that characterize human performance cannot be overcome.

Because of this relative lack of freedom regarding the operator, it becomes imperative to know the constraints that human limitations impose on machine designs. Thus, the human factors specialist must consider basic human performance capabilities in order to wisely use the freedom that is available in the design of the machine component of the system.

HUMAN–MACHINE SYSTEMS AND DOMAINS OF SPECIALIZATION

Figures 1.2 and 1.3 summarize the domains of the design engineer, the human performance researcher, and the human factors specialist. Figure 1.2 shows a human–machine system: a person operating a microcomputer. The human–computer interface involves a video screen on which information is displayed visually by the microcomputer to communicate with the operator. This information is received by the operator through her visual sense. She processes this information and communicates back to the computer by pressing keys on the computer keyboard or moving the mouse. The computer then processes this information, and the sequence begins anew. The widespread use of microcomputers and other smart devices has forced the development of a branch of human factors that focuses exclusively on the problems involved in HCI (see Box 1.1).

FIGURE 1.2 Human–computer system.

Figure 1.3 shows a more abstract version of the human–computer system. In this abstraction, the similarity between the human and computer is clear. We can conceptualize each in terms of subsystems that are responsible for input, processing, and output, respectively. While the human receives input primarily through the visual system, the computer receives its input from the keyboard and other peripheral devices. The central processing unit in the computer is analogous to the cognitive capabilities of the human brain. Finally, the human produces output through overt physical responses, such as keypresses, whereas the computer exhibits its output on the display screen.

Figure 1.3 also shows the domains of the design engineer, the human performance researcher, and the human factors specialist. The design engineer is interested primarily in the subsystems of the machine and their interrelations. Similarly, the human performance expert studies the subsystems of the human and their interrelations. Finally, the human factors specialist is most concerned with the relations between the input and output subsystems of the human and machine components, or in other words, with the human–machine interface.

The final point to note from Figure 1.3 is that the entire human–machine system is embedded within the larger context of the work environment, which also influences the performance of the system. This influence can be measured for the machine component or the human component as well as the interface. If the computer is in a very hot and humid environment, some of its components may be damaged or destroyed, leading to system failure. Similarly, extreme heat and humidity can adversely affect the computer user’s performance, which may likewise lead to system failure.

FIGURE 1.3 Representation of the human–machine system. The human and the machine are composed of subsystems operating within the larger environment.

The environment is not just those physical aspects of a workspace that might influence a person’s performance. It also consists of those social and organizational variables that make work easier or harder to do. We use the term macroergonomics to describe the interactions between the organizational environment and the design and implementation of a system (Carayon, Kianfar, Li, & Wooldridge, 2015; Hendrick & Kleiner, 2002).

The total system performance depends on the operator, the machine, and the environment in which they are placed. Whereas the design engineer works exclusively in the domain of the machine, and the human performance researcher in the domain of the operator, the human factors specialist is concerned with the interrelations between machine, operator, and environment. In solving a particular problem involving human–machine interaction, the human factors specialist usually starts with a consideration of the capabilities of the operator. Human capabilities began to receive serious scientific scrutiny in the 19th century, well ahead of the technology with which the design engineer is now faced. This early research forms the foundation of contemporary human factors.

BOX 1.1 HUMAN–COMPUTER INTERACTION

Human–computer interaction (HCI), also known as computer–human interaction (CHI), is the term used for an interdisciplinary field of study devoted to facilitating user interactions with computers. HCI has been a topic of burgeoning interest over the past 35+ years, during which time several organizations devoted to HCI have been established (e.g., the Special Interest Group on Computer–Human Interaction [SIGCHI] of the Association for Computing Machinery [ACM]; see www.acm.org/sigchi/). HCI research is published not only in human factors journals but also in journals devoted specifically to HCI (e.g., Human–Computer Interaction, the International Journal of Human–Computer Interaction, and the International Journal of Human-Computer Studies). There are also textbooks (e.g., Kim, 2015; MacKenzie, 2013), handbooks (Jacko, 2012; Weyers, Bowen, Dix, & Palanque, 2017), and encyclopedias (Soegaard & Dam, 2016) devoted exclusively to the topic of HCI.

Although some HCI experts regard it as a distinct field, it can be treated as a subfield of human factors, as we do in this text, because it represents the application of human factors principles and methods to the design of computer interfaces. HCI is fertile ground for human factors specialists because it involves the full range of cognitive, physical, and social issues of concern to the field (Carroll, 2003). These issues include everything from the properties of displays and data-entry devices, to the presentation of complex information in a way that minimizes mental workload and maximizes comprehension, to the design of groupware that will support team decisions and performance. More recently, there has been a push toward development of smart environments—offices, homes, businesses, classrooms, vehicles—in which adjustments and communications are made in response to input from sensors in the environment, people’s preferences, and their actions (Hammer, Wißner, & André, 2015; Volpentesta, 2015).

The following are two examples of HCI considerations, one involving physical factors and the other involving cognitive factors. People who operate a keyboard for many hours on a daily basis are at risk for developing carpal tunnel syndrome, an injury that involves neural damage in the area of the wrist (Shiri & Falah-Hassani, 2015; see Chapter 16). The probability of developing carpal tunnel syndrome can be reduced by many factors, including the use of split or curved keyboards that allow the wrists to be kept straight, rather than bent. Thus, one concern of the human factors specialist is how to design the physical characteristics of the interface in such a way that bodily injuries can be avoided.

In addition to physical limitations, human factors professionals have to take into account the cognitive characteristics of targeted users. For example, most people have a limited ability to attend to and remember items such as where specific commands are located in a menu. The demands on memory can be minimized by the use of icons that convey specific functions of the interface that can be carried out by clicking on them. Icons can increase the speed with which functions are carried out, but if the icons are not recognizable, then more time is lost by executing the wrong functions.

Because computers are becoming increasingly involved in all aspects of life, HCI is studied in many specific application domains. These include educational software, computer games, mobile communication devices, and interfaces for vehicles of all types. With the rapid development of the Internet, one of the most active areas of HCI research in the past few years has been that associated with usability of the Internet and World Wide Web (Krug, 2014; Nielsen & Loranger, 2006; Vu & Proctor, 2011). Issues of concern include homepage design, designing for universal accessibility, e-commerce applications, Web services associated with health delivery, and conducting human research over the Web.

HISTORICAL ANTECEDENTS

The major impetus for the establishment of human factors as a discipline came from technological developments during World War II. As weapon and transport systems became increasingly sophisticated, great technological advances were also being made in factory automation and in equipment for common use. Through the difficulties encountered while operating such sophisticated equipment, the need for human factors analyses became evident. Human factors research was preceded by research in the areas of human performance psychology, industrial engineering, and human physiology. Thus, the historical overview that we present here will begin by establishing the groundwork within these areas that relate to human factors. The primary message you should take from this section is the general nature and tenor of work that provided an initial foundation for the field of human factors and not the details of this work, much of which is discussed more thoroughly in later chapters.

PSYCHOLOGY OF HUMAN PERFORMANCE

The study of human performance emphasizes basic human capabilities involved in perceiving and acting on information arriving through the senses. Research on human performance dates to the mid-19th century (Boring, 1942), with work on sensory psychophysics and the time to perform various mental operations being particularly relevant for human factors. Many of the concepts and methods these early pioneers developed to study human performance are still part of the modern human factors toolbox.

Sensory Psychophysics

Ernst Weber (b1795–d1878) and Gustav Fechner (b1801–d1887) founded the study of psychophysics and are considered to be the fathers of modern experimental psychology. Both Weber and Fechner investigated the sensory and perceptual capabilities of humans. Weber (1846/1978) examined people’s ability to determine that two stimuli, such as two weights, differ in magnitude. The relation that he discovered has come to be known as Weber’s law. This law can be expressed quantitatively as

ΔI / I = K,

where I is the intensity of a stimulus (say, a weight you are holding in your left hand), ΔI is the amount of change (difference in weight) between it and another stimulus (a weight you are holding in your right hand) that you need to be able to tell that the two stimuli differ in magnitude, and K is a constant.

Weber’s law states that the absolute amount of change needed to perceive a difference in magnitude increases with intensity, whereas the relative amount remains constant. For example, the heavier a weight is, the greater the absolute increase must be to perceive another weight as heavier. Weber’s law is still described in textbooks on sensation and perception (e.g., Goldstein, 2014) and provides a reasonable description for the detection of differences with many types of stimuli, except at extremely high or low physical intensities.

Fechner (1860/1966) formalized the methods that Weber used and constructed the first scales for relating psychological magnitude (e.g., loudness) to physical magnitude (e.g., amplitude). Fechner showed how Weber’s law implies the following relationship between sensation and intensity:

S = K log(I),

where S is the magnitude of sensation, I is physical intensity, K is a constant, and the logarithm may be taken to any base.

This psychophysical function, relating physical intensity to the psychological sensation, is called Fechner’s law. Like Weber’s law, Fechner’s law is presented in contemporary sensation and perception textbooks and still provokes theoretical inquiry concerning the relationship between what we perceive and the physical world (Steingrimsson, Luce, & Narens, 2006). The term psychophysics describes research examining basic sensory sensitivities, and both classical and contemporary psychophysical methods are described in Chapter 4.
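One way to see how Weber’s law implies Fechner’s logarithmic relationship is to count just noticeable differences (JNDs): if each JND adds one unit of sensation and each JND is a constant fraction of the current intensity, then the number of JNDs between a reference and a given intensity grows with the logarithm of intensity. The sketch below (Python, with an assumed Weber fraction of 0.02) makes this concrete:

```python
# Sketch of Fechner's reasoning: count JND steps from a reference intensity.
# The Weber fraction of 0.02 is an assumed, illustrative value.
import math

def jnd_steps(intensity, reference=1.0, weber_fraction=0.02):
    """Count the JND steps from a reference intensity up to `intensity`."""
    steps, current = 0, reference
    while current < intensity:
        current *= (1.0 + weber_fraction)  # one JND above the current level
        steps += 1
    return steps

for intensity in (10, 100, 1000):
    predicted = math.log(intensity) / math.log(1.02)  # S = K log(I), K = 1/log(1.02)
    print(intensity, jnd_steps(intensity), round(predicted, 1))
# The counted steps closely track K*log(I): multiplying intensity by 10 adds a
# roughly constant number of steps (~116) rather than a constant amount.
```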

Speed of Mental Processing

Fechner and Weber showed how characteristics of human performance could be revealed through controlled experimentation and, consequently, provided the impetus for the broad range of research on humans that followed. During approximately the same historical period, other scientists were making considerable advances in sensory physiology. One of the most notable was Hermann von Helmholtz (b1821–d1894), who made many scientific contributions that remain central theoretical principles today.

One of Helmholtz’s most important contributions was to establish a method for estimating the time for the transmission of a nerve impulse. He measured the difference in time between application of an electrical stimulus to a frog’s nerve and the resulting muscle contraction, for two different points on the nerve. The measures indicated that the speed of transmission was approximately 27 m/s (Boring, 1942). The importance of this finding was to demonstrate that neural transmission is not instantaneous but takes measurable time.

Helmholtz’s finding served as the basis for early research by Franciscus C. Donders (b1818–d1901), a pioneer in the field of ophthalmology. Donders (1868/1969) developed procedures called chronometric methods. He reasoned that, when performing a speeded reaction task, a person must make a series of judgments. He must first detect a stimulus (is something there?) and then identify it (what is it?). Then he may need to discriminate that stimulus from other stimuli (which one is it?). After these judgments, the observer selects the appropriate response to the stimulus (what response am I to make?).

Donders designed some simple tasks that differed in the combination of judgments required for each task. He then subtracted the time to perform one task from the time to perform another task that required one additional judgment. In this way, Donders estimated the time it took to make the judgments.
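For example, in Donders’ terms, a simple reaction task (one stimulus, one response) requires only detection, a go/no-go task adds identification of the stimulus, and a choice task adds selection of the appropriate response; subtracting the appropriate pairs of reaction times isolates each added judgment. A minimal sketch with illustrative numbers (not Donders’ actual data):

```python
# A sketch of Donders' subtractive logic using his classic a, b, and c tasks.
# The reaction times below are illustrative values, not Donders' data.
rt_ms = {
    "a": 200,  # simple reaction: one stimulus, one response (detection only)
    "c": 250,  # go/no-go: respond to one stimulus, withhold to others
               # (detection + identification)
    "b": 300,  # choice: a different response for each stimulus
               # (detection + identification + response selection)
}

identification_time = rt_ms["c"] - rt_ms["a"]      # isolate identification
response_selection_time = rt_ms["b"] - rt_ms["c"]  # isolate response selection
print(identification_time, response_selection_time)  # 50 ms each
```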

Donders’ procedure is now called subtractive logic. The significance of subtractive logic is that it provided the foundation for the notion that mental processes can be isolated. This notion is the central tenet of human information processing—the approach that underlies most contemporary research on human performance. This approach assumes that cognition occurs through a series of operations performed on information originating from the senses. The conception of the human as an information-processing system is invaluable for the investigation of human factors issues, because it meets the requirement of allowing human and machine performance to be analyzed in terms of the same basic functions (Proctor & Vu, 2010). As Figure 1.4 shows, both humans and machines perform a sequence of operations on input from the environment that leads to an output of new information. Given this parallel between human and machine systems, it makes sense to organize our knowledge of human performance around the basic information-processing functions.

FIGURE 1.4 Information processing in humans and machines.

Wundt and the Study of Attention

The founding of psychology as a distinct discipline usually is dated from the establishment of the first laboratory devoted exclusively to psychological research by Wilhelm Wundt (b1832–d1920) in 1879. Wundt was a prolific researcher and teacher. Consequently, his views largely defined the field of psychology and the study of human performance in the late 1800s.

With respect to contemporary human performance psychology, Wundt’s primary legacy is in his scientific philosophy. He promoted a deterministic approach to the study of mental life. He advocated the view that mental events play a causal role in human behavior. Wundt held that our mental representation of the world is a function of experience and the way our mind organizes that experience. Wundt and his students used a wide range of psychophysical and chronometric methods to investigate attentional processes in detail (de Freitas Araujo, 2016).

The topic of attention, as will be seen in Chapter 9, is of central concern for human factors. It attracted the interest of many other researchers in the late 19th and early 20th centuries. Perhaps the most influential views on the topic were those of William James (b1842–d1910). In his classic book Principles of Psychology, published in 1890, James devoted an entire chapter to attention. James states:

Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others. (James, 1890/1950, pp. 403–404)

This quote captures many of the properties of attention that are still addressed in modern research, such as different types of attention, limitations in processing capacity, and the role of consciousness (see Johnson & Proctor, 2004; Nobre & Kastner, 2014).

Learning and Skill Acquisition

Whereas Wundt and others had tackled the experimental problem of isolating mental events, Hermann Ebbinghaus (b1850–d1909) was the first to apply experimental rigor successfully to the study of learning and memory. This contribution was important, because previously it had been thought that the quantitative study of such higher-level mental processes was not possible (Herbart, 1816/1891). In a lengthy series of experiments conducted on himself, Ebbinghaus (1885/1964) examined his ability to learn and retain lists of nonsense syllables. The procedures developed by Ebbinghaus, the quantitative methods of analysis that he employed, and the theoretical issues that he investigated provided the basis for a scientific investigation of higher mental function.

In a landmark study in the history of human performance, Bryan and Harter (1899) extended the topic of learning and memory to the investigation of skill acquisition. In their study of the learning of Morse code in telegraphy, Bryan and Harter determined many of the factors involved in the acquisition of what would later come to be called perceptual-motor skills. Using procedures and methods of analysis similar to those provided by Ebbinghaus, Bryan and Harter were able to contribute both to our basic understanding of skill learning and to our applied understanding of how people learn to use a telegraph. By examining learning curves (plots of performance as a function of amount of practice), Bryan and Harter proposed that learning proceeds in a series of phases. Contemporary models of skill acquisition still rely on this notion of phases (e.g., Anderson, 1983; Anderson et al., 2004; see Chapter 12).
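As a concrete illustration of what a learning curve is, here is a minimal sketch (Python) that generates one; the power-function speed-up and its parameter values are assumptions chosen for illustration, not Bryan and Harter’s data:

```python
# Sketch of a learning curve: performance plotted against amount of practice.
# A power-function speed-up is one common description of such curves; the
# parameter values here are purely illustrative.
def trial_time(trial, initial_time=10.0, rate=0.4):
    """Time to complete the task on a given practice trial (power function)."""
    return initial_time * trial ** (-rate)

for trial in (1, 10, 100, 1000):
    print(f"trial {trial:4d}: {trial_time(trial):5.2f} s")
# Plotting these values against trial number gives a learning curve:
# rapid early improvement that becomes increasingly gradual with practice.
```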

HUMAN PERFORMANCE IN APPLIED SETTINGS

Although we have dated the founding of human factors as a discipline to World War II, considerable applied work important to modern human factors was conducted prior to that time. Much of this work was oriented toward improving job performance and productivity, and at least two journals with a human factors emphasis were published prior to World War II.

Job Specialization and Productivity

Charles Babbage (b1792–d1871) wrote the book On the Economy of Machinery and Manufactures in 1832. In that book, he proposed methods for increasing the efficiency with which workers could perform their jobs. In the spirit of the modern factory assembly line, Babbage advocated job specialization, with the idea that this would enable a worker to become skilled and proficient at a limited range of tasks. Babbage also designed and planned two steam-powered machines: the Difference Engine, a special-purpose computer intended to calculate mathematical tables by the method of finite differences, and the Analytical Engine, the first general-purpose programmable computer. One impetus for his work on computers was the desire to reduce the number of errors in scientific calculations.

We give credit to W. B. Jastrzębowski (1857) for being the first person to use the term ergonomics, in an article entitled “An Outline of Ergonomics, or the Science of Work Based upon the Truths Drawn from the Science of Nature” (Karwowski, 2006a; Koradecka, 2006). He distinguished useful work, which helps people, from harmful work, which hurts people, and emphasized the development of useful work practices.

Frederick W. Taylor (b1856–d1915) was one of the first people to systematically investigate human performance in applied settings. Taylor, who was an industrial engineer, examined worker productivity in industrial settings. He conducted one of the earliest human factors studies. As described by Gies (1991), Taylor was concerned that the workers at a steel plant used the same shovel for all shoveling tasks. By making careful scientific observations, he designed several different shovels and several different methods of shoveling that were appropriate for different materials.

Taylor is best remembered for developing a school of thought that is referred to as scientific management (Taylor, 1911/1967). He made three contributions to the enhancement of productivity in the workplace. The first contribution is known as task analysis, in which the components of a task are determined. One technique of task analysis is time-and-motion study. With this technique, a worker’s movements are analyzed across time to determine the best way to perform a task. Taylor’s second contribution was the concept of pay for performance. He suggested a “piecework” method of production, by which the amount of compensation to the worker is a function of the number of pieces completed. Taylor’s third contribution involved personnel selection, or fitting the worker to the task. While personnel selection is still important, human factors emphasizes fitting the task to the worker. Although many of Taylor’s contributions are now viewed as being dehumanizing and exploitative, Taylor’s techniques were effective in improving human performance (i.e., increasing productivity). Moreover, time-and-motion study and other methods of task analysis still are used in contemporary human factors.

Frank Bunker Gilbreth (1868–1924) and Lillian Moller Gilbreth (1878–1972) are among the pioneers in the application of systematic analysis to human work. The Gilbreths developed an influential technique for decomposing motions during work into fundamental elements or “therbligs” (Gilbreth & Gilbreth, 1924). Frank Gilbreth’s (1909) investigations of bricklaying and of operating-room procedures are two of the earliest and best-known examples of applied time-and-motion study.

Gilbreth, himself a former bricklayer, examined the number of movements made by bricklayers as a function of the locations of the workers’ tools, the raw materials, and the structure being built. Similarly, he examined the interactions among members of surgical teams. The changes instituted by Gilbreth for bricklaying resulted in an impressive increase in productivity; his analysis of operating-room procedures led to the development of contemporary surgical protocols. Lillian Gilbreth, who collaborated with Frank until his death, later extended this work on motion analysis to the performance of household tasks and designs for people with disabilities.

Another early contributor to the study of work was the psychologist Hugo Münsterberg (b1863–d1916). Though trained as an experimental psychologist, Münsterberg spent much of his career as an applied psychologist. In the book Psychology and Industrial Efficiency, Münsterberg (1913) examined work efficiency, personnel selection, and marketing techniques, among other topics.

Personnel selection grew from work on individual differences in abilities developed during World War I, with the use of intelligence tests to select personnel. Subsequently, many other tests of performance, aptitudes, interests, and so on were developed to select personnel to operate machines and to perform other jobs. While personnel selection can increase the quality of system performance, there are limits to how much performance can be improved through selection alone. A poorly designed system will not perform well even if the best personnel are selected. Thus, a major impetus to the development of the discipline of human factors was the need to improve the design of systems for human use.

Early Human Factors Journals

Two applied journals were published for brief periods in the first half of the 20th century that foreshadowed the future development of human factors. A journal called Human Engineering was published in the U.S. for four issues in 1911 and one issue in 1912 (Ross & Aines, 1960). In the first issue, the editor, Winthrop Talbot, described human engineering as follows:

Its work is the study of physical and mental bases of efficiency in industry. Its purpose is to promote efficiency, not of machines but men and women, to decrease waste—especially human energy—and to discover and remove causes of avoidable and preventable friction, irritation or injury. (Quoted by Ross & Aines, 1960, p. 169)

From 1927 to 1937, a journal called The Human Factor was published by the National Institute of Industrial Psychology in England. The content and methodology of the journal are broader than those later associated with the field of human factors, with many of the articles focusing on vocational guidance and intelligence testing. However, the journal also included articles covering a range of issues in what were to become core areas of human factors. For example, the 1935 volume contained articles titled “The Psychology of Accidents,” “A Note on Lighting for Inspection,” “Attention Problems in the Judging of Literary Competitions,” and “An Investigation in an Assembly Shop.” The journal published a transcript of a radio speech given by Julian Huxley (b1887–d1975), the renowned biologist and popularizer of science. In this speech, Huxley differentiated what he called Industrial Health from Industrial Psychology:

[Industrial Psychology], however, is something broader [than Industrial Health]. It, too, is dealing with the human factor in industry, but instead of dealing primarily with industrial disease and the prevention of ill-health, it sets itself the more positive task of finding out how to promote efficiency in all ways other than technical improvement of machinery and processes. To do this, it all the time stresses the necessity of not thinking of work in purely mechanical terms, but in terms of co-operation between a machine and a human organism. The machine works mechanically; the human organism does not, but has its own quite different ways of working, its own feelings, its fears and its ideals, which also must be studied if the co-operation is to be fruitful. (Huxley, 1934, p. 84)

BIOMECHANICS AND PHYSIOLOGY OF HUMAN PERFORMANCE

Human performance also has been studied from the perspective of biomechanics and physiology (Oatis, 2016). The biomechanical analysis of human performance has its roots in the early theoretical work of Galileo and Newton, who helped to establish the laws of physics and mechanics. Giovanni Alphonso Borelli (b1608–d1679), a student of Galileo, brought together the disciplines of mathematics, physics, and anatomy in one of the earliest works on the mechanics of human performance (Borelli, 1679/1989).

Probably the most important contribution to biomechanical analysis in the area of work efficiency was by Jules Amar (b1879–d1935). In his book The Human Motor, Amar (1920) provided a comprehensive synthesis of the physiological and biomechanical principles related to industrial work. Amar’s research initiated investigations into the application of biomechanical principles to work performance. The ideas of Amar and others were adopted by and applied to the emerging field of human factors.

Another major accomplishment was the development of procedures that allowed a dynamic assessment of human performance. In the latter part of the 19th century, Eadweard Muybridge (b1830–d1904) constructed an apparatus composed of banks of cameras that allowed him to photograph animals and humans in action (e.g., Muybridge, 1955). Each series of pictures captured the biomechanical characteristics of a complex action (see Figure 1.5). The pictures could also be viewed in rapid succession, simulating the actual movement.

FIGURE 1.5 A clothed man digging with a pickaxe. Photogravure after Eadweard Muybridge, 1887. By: Eadweard Muybridge and University of Pennsylvania.

Muybridge’s work opened the door for a range of biomechanical analyses of dynamic human performance. In particular, the physiologist Etienne-Jules Marey (b1830–d1904) exploited related photographic techniques to decompose time and motion (Marey, 1902). Today, such analyses typically involve video recordings of human action that can then be evaluated. Modern camera-based systems, such as OPTOTRAK CERTUS® (see Figure 1.6), track small infrared-emitting markers attached to a person’s body to analyze movement kinematics in three dimensions; a brief computational sketch of this kind of analysis follows Figure 1.6.

FIGURE 1.6 The Optotrak system for recording movement trajectories. The cameras pick up infrared light from infrared light-emitting diodes (IREDs) attached to the human subject. Computer analysis provides the three-dimensional position of each IRED at each sampling interval.
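To give a concrete sense of what such a kinematic analysis involves, the short sketch below (in Python, using the NumPy library) estimates a single marker’s velocity and speed from its sampled three-dimensional positions by finite differences. The marker coordinates and the 100 Hz sampling rate are made-up illustrative values, and the code is a minimal sketch of the general technique, not any vendor’s software.

# Minimal kinematic analysis of one tracked marker (illustrative values only).
import numpy as np

sampling_rate_hz = 100.0          # assumed sampling rate of the camera system
dt = 1.0 / sampling_rate_hz       # time between successive samples, in seconds

# positions[i] = (x, y, z) coordinates of the marker at sample i, in millimetres
positions = np.array([
    [0.0, 0.0, 1000.0],
    [2.1, 0.3, 1000.5],
    [4.3, 0.5, 1001.1],
    [6.6, 0.9, 1001.8],
])

# Velocity between successive samples (mm/s), estimated by finite differences
velocity = np.diff(positions, axis=0) / dt

# Scalar speed over each sampling interval (mm/s)
speed = np.linalg.norm(velocity, axis=1)

print("velocity (mm/s):\n", velocity)
print("speed (mm/s):", speed)

In practice, motion capture systems record many markers simultaneously and usually filter the position data before differentiating, but the underlying computation takes this general form.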

SUMMARY

A great deal of research conducted prior to the middle of the 20th century laid the foundation for the field of human factors. Psychologists developed research methods and theoretical views that allowed them to investigate various aspects of human performance; industrial engineers studied many aspects of human performance in job settings with an eye toward maximizing its efficiency; biomechanists and physiologists developed methods for examining physical and biological factors in human performance and principles relating those factors to work. Our coverage of these developments has of necessity been brief, but the important point is simply that without the prior work in these areas, human factors specialists would have had no starting point to address the applied design issues that became prominent in the latter half of the 20th century.

EMERGENCE OF THE HUMAN FACTORS PROFESSION

Although interest in basic human performance and applied human factors goes back to before the turn of the 20th century, a trend toward systematic investigation of human factors did not begin in earnest until the 1940s (Meister, 2006a). The technological advances brought about by World War II created a need for more practical research from the academic community. Moreover, basic research psychologists became involved in applied projects along with industrial and communications engineers. By the close of the war, psychologists were collaborating with engineers on the design of aircraft cockpits, radar scopes, and underwater sound detection devices, among other things.

Among the most significant developments for human factors were the founding in 1944 of the Medical Research Council Applied Psychology Unit in Great Britain (Reynolds & Tansey, 2003) and the founding in 1945 of the Psychology Branch of the Aero Medical Laboratory at Wright Field in the U.S. The first director of the Applied Psychology Unit was Kenneth Craik (b1914–d1945), a leader in the use of computers to model human information processing, whose contributions to the field were cut short by his untimely death at age 31. The founding Head of the Psychology Branch, Paul M. Fitts (b1912–d1965), was a central figure in the development of human factors, who left a lasting legacy in many areas of research. Human factors and ergonomics remain prominent at Wright-Patterson Air Force Base in the 711th Human Performance Wing, particularly its Human Systems Integration Directorate. Also, in 1946, Ross McFarland (1946) published Human Factors in Air Transport Design, the first book of which we are aware to use the term human factors. Around this period, some industries also began to research human factors, with Bell Laboratories establishing a laboratory devoted specifically to human factors in the late 1940s. The interdisciplinary efforts that were stimulated during and immediately after the years of the war provided the basis for the development of the human factors profession.

The year 1949 marked the publication of the first general textbook on human factors, Chapanis, Garner, and Morgan’s Applied Experimental Psychology: Human Factors in Engineering Design, from which we quoted to begin this book. Perhaps more important, the profession was formalized in England with the founding of the Human Research Group in 1949 (Stammers, 2006). In 1950, the group changed its name to the Ergonomics Research Society, subsequently shortened to the Ergonomics Society, and the term ergonomics came to be used in the European community to characterize the study of human–machine interactions. This term was intentionally chosen over the term human engineering, which was then popular in the U.S., because human engineering was associated primarily with the design-related activities of psychologists. According to Murrell (1969, p. 691), one of the founders of the Ergonomics Research Society, “In contradistinction the activities which were envisaged for the infant society would cover a much wider area, to embrace a broad spectrum of interests such as those of work physiology and gerontology.” Reflecting the internationalization of human factors terminology and work, the Society changed its name again in 2009 to the Institute of Ergonomics and Human Factors, by which it is currently known.

Several years later, in 1957, the Ergonomics Society began publication of the first journal devoted to human factors, Ergonomics. In that same year, the Human Factors Society (changed in 1992 to the Human Factors and Ergonomics Society) was formed in the U.S., and the publication of their journal, Human Factors, began 1 year later. Also, the American Psychological Association established Division 21, Engineering Psychology, and in 1959, the International Ergonomics Association, a federation of human factors and ergonomics societies from around the world, was established.

CONTEMPORARY HUMAN FACTORS

From 1960 to 2000, the profession of human factors grew immensely. As one indicator of this growth, the membership of the Human Factors and Ergonomics Society increased from a few hundred people to more than 4500 by the late 1980s, which is approximately the current membership. The range of topics and issues investigated has grown as well.

Table 1.1 shows the composition of the Technical Groups of the Human Factors and Ergonomics Society in 2017. The topics covered by these groups indicate the broad range of issues now addressed by human factors specialists. Professional societies have been established in many other countries, and more specialized societies have developed. HCI alone is the focus of the Association for Computing Machinery’s Special Interest Group on Computer–Human Interaction, the Software Psychology Society, and the IEEE Technical Committee on Computer and Display Ergonomics, as well as many others. Outside of computer science, the rapid growth of technology has made the human factors profession a key component in the development and design of equipment and machinery.

TABLE 1.1

Technical Groups of the Human Factors and Ergonomics Society

Aerospace systems: Application of human factors to the development, design, certification, operation, and maintenance of human–machine systems in aviation and space environments.

Aging: Concerned with human factors appropriate to meeting the emerging needs of older people and special populations in a wide variety of life settings.

Augmented cognition: Concerned with fostering the development and application of real-time physiological and neurophysiological sensing technologies that can ascertain a human’s cognitive state while interacting with computing-based systems … [to] enable efficient and effective system adaptation based on a user’s dynamically changing cognitive state.

Children’s issues: Consists of researchers, practitioners, manufacturers, policy makers, caregivers, and students interested in research, design, and application concerning human factors and ergonomics (HF/E) issues related to children’s emerging development from birth to 18.

Cognitive engineering and decision making: Encourages research on human cognition and decision making and the application of this knowledge to the design of systems and training programs.

Communications: Concerned with all aspects of human-to-human communication, with special emphasis on communication mediated by technology.

Computer systems: Concerned with human factors in the design of computer systems. This includes the user-centered design of hardware, software, applications, documentation, work activities, and the work environment.

Education: Concerned with the education and training of human factors and ergonomics specialists.

Environmental design: Concerned with the relationship between human behavior and the designed environment … [including] ergonomics and macroergonomics aspects of design within home, office, and industrial environments.

Forensics: Application of human factors knowledge and techniques to “standards of care” and accountability established within the legislative, regulatory, and judicial systems.

Health care: Maximizing the contribution of human factors and ergonomics to medical system effectiveness and the quality of life of people who are functionally impaired.

Human performance modeling: Focuses on the development and application of predictive, reliable, and executable quantitative models of human performance.

Individual differences in performance: Interest in any of the wide range of personality and individual difference variables that are believed to mediate performance.

Internet: Interest in Internet technologies and related behavioral phenomena.

Macroergonomics: Focuses on organizational design and management issues in human factors and ergonomics as well as work system design and human–organization interface technology.

Occupational ergonomics: Application of ergonomics data and principles for improving safety, productivity, and quality of work in industry.

Perception and performance: Promotes the exchange of information concerning perception and its relation to human performance.

Product design: Dedicated to developing consumer products that are useful, usable, safe, and desirable … by applying the methods of human factors, consumer research, and industrial design.

Safety: Development and application of human factors technology as it relates to safety in all settings and attendant populations.

Surface transportation: Information, methodologies, and ideas related to the international surface transportation field.

System development: Integration of human factors/ergonomics into the development of systems.

Test and evaluation: All aspects of human factors and ergonomics as applied to the evaluation of systems.

Training: Information and interchange among people interested in training and training research.

Virtual environments: Human factors issues associated with human–virtual environment interaction.

The close ties between human factors and the military have persisted since the years of World War II. The U.S. military incorporates human factors analyses into the design and evaluation of all military systems. All branches of the military have human factors research programs. These programs are administered by the Air Force Office of Scientific Research, the Office of Naval Research, and the Army Research Institute, among others.

Additionally, the military branches have programs to ensure that human factors principles are incorporated into the development of weapons and other military systems and equipment. These programs operated independently until 2009, when, as part of the National Defense Authorization Act, the U.S. Department of Defense was asked to establish the Human Systems Integration (HSI) initiative,

focused on the role of the human in the Department of Defense (DoD) acquisition process. The objective of HSI is to provide equal consideration of the human along with the hardware and software in the technical and technical management processes for engineering a system that will optimize total system performance and minimize total ownership costs. (Office of the Assistant Secretary of Defense, 2015)

The result of this initiative has been the FY2011 Department of Defense Human Systems Integration Management Plan (2011), which distinguishes eight elements, including Human Factors Engineering, Personnel, Manpower, Training, and Safety.

The value of human factors analyses is also apparent in our everyday lives. For example, the automotive industry has devoted considerable attention to human factors in the design of automobiles (Gkikas, 2013). This attention has extended from the design of the automobile itself to the machinery used to make it. Similarly, modern office furniture has benefited significantly from human factors evaluations. Still, we often encounter equipment that is poorly designed for human use. The need for good human factors has become sufficiently obvious that manufacturers and advertising agencies now realize that it can become a selling point for their products. The makers of automobiles, furniture, ball-point pens, and the like advertise their products in terms of the specific ergonomic advantages that their products have over those of their competitors. If the present is any indication, the role of human factors will only increase in the future.

Standard human factors principles apply in space as well as on earth. For example, the factors contributing to a collision of the Russian supply spacecraft Progress 234 with the Mir space station in 1997 included poor visual displays and operator fatigue resulting from sleep deprivation (Ellis, 2000). In addition, the unique conditions of extraterrestrial environments pose new constraints (Lewis, 1990). For example, in microgravity environments, a person’s face will tend to become puffy. Furthermore, a person can be viewed from many more orientations (e.g., upside-down). Consequently, perception of the nonlinguistic cues provided by facial expressions likely is impaired, compromising face-to-face communication (Cohen, 2000). In recognition of the need to consider human factors in the design of space equipment, NASA published the first human factors design guide in 1987, updated and expanded in 2010 as the Human Integration Design Handbook, to be used by developers and designers to promote the integration of humans and equipment in space.

The plan to construct the International Space Station began in 1984. The U.S. committed NASA to developing the station in conjunction with space programs from other countries, including the European Space Agency and the Canadian Space Agency, among others. NASA quickly acknowledged the need to incorporate human factors into the design of all aspects of the space station (Space Station Human Productivity Study, 1985), and human factors engineering was granted an equal status with other disciplines in the system development process (Fitts, 2000). Human factors research and engineering played a role in designing the station to optimize the crew’s quality of life, or habitability (Wise, 1986), as well as to optimize the crew’s quality of work, or productivity (Gillan, Burns, Nicodemus, & Smith, 1986). Issues of concern included, for example, design of user interfaces for the equipment used by the astronauts to conduct scientific experiments (Neerincx, Ruijsendaal, & Wolff, 2001). Additional factors that have been investigated include how to achieve effective performance of multicultural crews (Kring, 2001), the dynamics of crew tension during the mission and their influence on performance (Sandal, 2001), and the effects of psychological and physiological adaptation on crew performance in emergency situations (Smart, 2001). Moving beyond the International Space Station to the extended duration that will be required for spaceflights to Mars and back, additional human factors issues emerge (Schneider et al., 2013).

SUMMARY

From its beginnings in the latter part of the 1940s, the field of human factors has flourished. Building on a scientific foundation of human performance from many disciplines, the field has developed from an initial focus primarily on military problems to concern with the design and evaluation of a broad range of simple and complex systems and products for human use. The importance of the contribution of human factors to industry, engineering, psychology, and the military cannot be overemphasized. When design decisions are made, failure to consider human factors can lead to waste of personnel and money, injury and discomfort, and loss of life. Consequently, consideration of human factors concerns at all phases of system development is of utmost importance.

In the remainder of the book, we elaborate on many of the themes introduced in this chapter. We will cover both the science of human factors, which focuses on establishing principles and guidelines through empirical research, and the profession, which emphasizes application and evaluation for specific design problems. We will describe the types of research and knowledge that are of value to the human factors specialist and the specific techniques available for predicting and evaluating the many aspects of human factors.

RECOMMENDED READINGS

Casey, S. (1998). Set Phasers on Stun: And Other True Tales of Design, Technology, and Human Error (2nd ed.). Santa Barbara, CA: Aegean Publishing Company.

Chiles, J. R. (2001). Inviting Disaster: Lessons from the Edge of Technology. New York: Harper Business.

Norman, D. A. (2013). The Design of Everyday Things (revised and expanded edition). New York: Basic Books.

Perrow, C. (1999). Normal Accidents (updated edition). Princeton, NJ: Princeton University Press.

Rabinbach, A. (1990). The Human Motor: Energy, Fatigue, and the Origins of Modernity. New York: Basic Books.

Salas, E. & Maurino, D. (2010). Human Factors in Aviation (2nd ed.). San Diego, CA: Academic Press.

Vaughan, D. (1997). The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. Chicago, IL: University of Chicago Press.

Vicente, K. (2006). The Human Factor: Revolutionizing the Way People Live with Technology. New York: Routledge.
