Chapter 5. Acting

MOVEMENT IS ONE OF the more complex acts of coordination between our senses, bodies, and intellect. Unlike many sculptors, Michelangelo carved in exquisite detail but left pieces deliberately unfinished. Partially complete, these works look like real humans straining to be freed from a block of solid Carrara marble. Dancer Trisha Brown and basketball player Stephen Curry are sculptures in motion, pushing the envelope of creative expression and athletic prowess (see Figure 5-1). And just how exactly do we get cereal into our mouths while reading the newspaper, not once looking down at the bowl or the spoon? Incredibly, we perform these kinds of physical feats with barely any awareness. The saying goes that you never forget how to ride a bike. But it’s fair to say that you don’t consciously remember how, either.

Figure 5-1. A typically athletic (and aesthetic) performance by the Trisha Brown dance company (Set and Reset, 1996; photo by Chris Callis)

Highly developed physical abilities bring the contrast between knowledge and awareness into high relief: we know how to do many activities very well, but we may not have the faintest awareness of how we know. “The best way to mess up your piano piece is to concentrate on your fingers; the best way to get out of breath is to think about your breathing; the best way to miss the golf ball is to analyze your swing.”1 Much of physical activity lies within the realm of implicit memory, knowledge that our brain stores but cannot consciously access.

These physical abilities don’t start out as implicit memory. We develop many of them over time. It takes many months for a child to learn how to sit up, crawl, and then walk. We may also develop variations in these skills: for tying shoelaces, some people learn the bunny-ears method, while others learn swoop and loop. It also takes time to develop the physical skills to use interfaces—typing being one of the most challenging. Much of interaction design relies on this implicit memory of physical movements. Gestures like tapping and pinching echo manual gestures we already use away from screens, so they are understandable on the first try. On the other hand, a more complex video game might dedicate a few introductory levels just to practicing and memorizing different button combinations. The ability to speak takes years to develop, spanning the comprehension of speech; motor control over our lips, tongue, and the vocal folds in our throats; the rules of grammar; a robust vocabulary; and the social etiquette of conversation. Calling this kind of user interface “natural” might be a misnomer: most people have spent decades acquiring linguistic abilities through rigorous training and education.

About Anthropometrics

Our size and shape help define our physical abilities. This is already well understood, and anthropometrics, the measurement of people, has long been a part of design. From pilots’ and astronauts’ tolerance of high G-forces to how age affects our grip on vegetable peelers, the catalog of human measurements is as varied as the kinds of physical tasks we undertake. Common activities like sitting, gripping manual tools, and getting soup into our mouths are well-understood design problems that have been solved countless times over millennia. Sensors, by contrast, are a relatively new product technology. There are many different kinds that can be applied to everyday activities, and we are still in a largely experimental stage, figuring out how to make them valuable and usable in products. Part of this is understanding the complex link between our physical attributes and sensory abilities.

For example, 20/20 is a standardized relative measurement used to describe visual acuity. It means that a person can see what a “normal” person can see from 20 feet away. The ophthalmologist Heinrich Kuechler created one of the first standardized eye charts in 1843. Eye exams have evolved greatly in the last century and have expanded to include new dimensions of vision and several types of eye movement. A check-up may now also evaluate peripheral vision, accommodation (the ability to shift visual focus via contraction of the ciliary muscle), movement (the ability to detect movement in the visual field), eye muscle performance (the ability of both eyes to work together and coordinate movement), and tests for blind spots.2

These new tests reflect a better understanding of vision and how deeply integrated our sensory and physical abilities really are. Focal accommodation and eye movement have a tremendous impact on head-worn devices, like AR and VR headsets. The precision of manual dexterity and the shape and size of our fingers play a strong role in touchscreen interactions, particularly on small screens like smartwatches. Designers humorously call this the “fat-finger” problem. The differences between the movements of walking and running are what allow fitness trackers to recognize and measure exercise. These kinds of human measurements go far beyond our various sizes and shapes and deeper into how we incorporate our senses and physical movements into everyday activities.

The Origin of Anthropometrics

In the 1950s, the industrial designer Henry Dreyfuss conducted a study of more than 2,000 people to obtain the face and neck measurements needed to design the Model 500 telephone for Bell Labs. He distilled this work into two personas, Joe and Josephine (see Figure 5-2):

They are a part of our staff, representing the millions of consumers for whom we are designing, and they dictate every line we draw … They react strongly to touch that is uncomfortable or unnatural; they are disturbed by glaring or insufficient light and by offensive coloring; they are sensitive to noise, and they shrink from disagreeable odor … Our job is to make Joe and Josephine compatible with their environment.3

Figure 5-2. Top: The Model 500 phone; bottom: the personas Joe and Josephine, works created by industrial designer Henry Dreyfuss (sources: top—R Sull, Dhscommtech at English Wikipedia, Creative Commons Share Alike)

Two members of Dreyfuss’ staff expanded on this initial work to create the Humanscale design tools: a set of three scales using rotary wheels to present a range of measurements for human size, movement, and strength, including for those with physical disabilities. This shifted the practice from consolidated averages to ranges across audience segments, or cohorts. The scales indexed measurements from a small woman to a large man, with special sections for the elderly, the disabled, and children. The tools included a wide range of anthropometrics, including measurements like height, weight, and arm and leg length. They added basic sensory and movement metrics, like sight line (10°), slumping (the degree of not sitting up straight, 1.6” for men and 1.4” for women), left-handedness (10% of the population), glasses (30% of the population at the time), and physical disability (15%–20% of the population) (see Figure 5-3). Many of the measurable traits identified then are still common today, though the specific measurements have changed. We now know, for example, that about 64% of the American population requires some form of vision correction.4

Figure 5-3. The Humanscale design tools by Henry Dreyfuss offered a handy, comprehensive way to design for common ranges of human attributes

As design curator Ellen Lupton noted, this set of guidelines reflected a broader social movement to overcome diverse physical limitations:

The Humanscale project responded to the UNIVERSAL DESIGN movement. In the late 1960s and early 1970s, the newly vocal disability community compelled designers, builders, manufacturers, and lawmakers to accommodate the needs of a greater diversity of bodies. Humans face physical limitations throughout their lives, from childhood through the aging process. Some disabilities are permanent and others are temporary, but all are exacerbated by poor design decisions.5

Task Performance

A key aspect of physical activity within interaction design is performance: the ability to execute a specific behavior. This is well understood in existing human factors work for product design and architecture, and in the mechanical design of products like cars, appliances, and tools. Key metrics include physical ability, mobility, and appropriate response times. A simple example is the number of times a phone rings before it is forwarded to voicemail. That number is based on the estimated time it takes for someone to locate their phone and decide whether to answer.
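As a rough sketch of that arithmetic, the ring count falls out of dividing the time a person plausibly needs by the length of one ring cycle. The cadence and reach time below are illustrative assumptions, not carrier specifications:

```python
import math

# Illustrative assumptions: a North American ring cycle lasts roughly
# 6 seconds (about 2 s of ringing, 4 s of silence), and we allow a person
# about 20 seconds to locate the phone and decide whether to answer.
RING_CYCLE_SECONDS = 6
REACH_AND_DECIDE_SECONDS = 20

rings_before_voicemail = math.ceil(REACH_AND_DECIDE_SECONDS / RING_CYCLE_SECONDS)
print(rings_before_voicemail)  # -> 4 rings before forwarding to voicemail
```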

Other metrics address accuracy and repetition. These track whether people can successfully complete a physical action and whether they can repeat it for as long as needed. Typing on a tiny mobile phone keyboard, or any keyboard for that matter, can be cumbersome, especially when editing text. Features like autofill, autocorrect, and the magnifying glass help mitigate the small size of keys and text on screens, which can be much smaller than the contact area of our fingertips on a touchscreen. A similar feature emerged for the Xbox Kinect, which required a user to hold their hand over an option to select it. Because there is no tap or click to confirm a selection, this dwell gesture helped avoid incorrect selections in the interface. These kinds of features, which range between helpful and irritating, speak to the level of physical precision expected in some of our interfaces. The rise in repetitive stress injuries like eye strain and carpal tunnel syndrome also points out that, over time, the repetition of high-precision movement can be unhealthy (see Figure 5-4).

Figure 5-4. High-precision interaction, such as that required by a mouse, can have unhealthy effects over the long term
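The Kinect’s hold-to-select behavior described above is essentially a dwell timer: a selection registers only after the hand stays over a target for a set duration, so brief, accidental passes never trigger anything. Here is a minimal sketch of that logic; the class name, the 1.5-second threshold, and the per-frame update interface are assumptions for illustration, not the Kinect’s actual implementation.

```python
import time

DWELL_SECONDS = 1.5  # assumed hold time before a hover counts as a selection

class DwellSelector:
    """Fires a selection only after the hand rests on one target long enough."""

    def __init__(self, dwell_seconds=DWELL_SECONDS):
        self.dwell_seconds = dwell_seconds
        self._target = None      # target currently under the hand (or None)
        self._entered_at = None  # when the hand first settled on it
        self._fired = False      # whether this hover already produced a selection

    def update(self, target_id, now=None):
        """Call on every tracking frame with the target under the hand (or None).
        Returns the target id once per sustained hover, otherwise None."""
        now = time.monotonic() if now is None else now
        if target_id != self._target:
            # Moving between targets (or off all targets) resets the timer,
            # which is what keeps brief accidental passes from selecting anything.
            self._target = target_id
            self._entered_at = now
            self._fired = False
            return None
        if (target_id is not None and not self._fired
                and now - self._entered_at >= self.dwell_seconds):
            self._fired = True
            return target_id
        return None
```

Tuning the dwell duration is the same trade-off the section describes: too short and accidental hovers select things, too long and deliberate selections feel sluggish and fatiguing.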

Repetitive daily usage is one of the most challenging types of interaction to design. It’s not uncommon for people to type hundreds or thousands of characters or click dozens or hundreds of buttons a day, every day, for a significant portion of their lives. It’s not hard to imagine that the number of keystrokes we type could approach or exceed the number of breaths we take. Over the course of our lifetimes, we exert more muscular force by simply holding up the weight of our own fingers and arms than by lifting any external objects.

From that perspective, it’s easy to understand how significant these small physical activities can be.

Nonverbal Communication

There are all kinds of estimates about how much of communication is nonverbal. It seems to vary widely between cultures, relationships, and individuals. Following World War II, the U.S. Department of State committed itself to international diplomacy and to strengthening ties with newly formed allies around the world. To communicate effectively across cultures, a special research team was assembled to better understand cultural norms and customs. From this body of work emerged the study of paralanguage, headed by George Trager; kinesics, headed by Ray Birdwhistell; and proxemics, headed by Edward T. Hall. Paralanguage describes the nonlinguistic elements of verbal communication, like prosody, intonation, and utterances like gasps or sighs. Kinesics is the study of movement and gesture in communication—both alone and as modifiers of spoken language. Proxemics is the study of physical proximity and how people define the usage of physical space.

Kinesics explores the way physical movement is used in communication. These kinds of physical movements are not commonly used in interaction design, because they vary as widely as the verbal languages spoken, and the same gesture can mean wildly different things across cultures. For example, a thumbs-up in American culture roughly means “OK.” In the Middle East, it can be an offensive gesture.6 (It’s still unclear whether its use in emoji will perpetuate or erase that split.)

Social relationships are strongly correlated to the physical distance between people. With voluntary proximity, the closer people stand, the more familiar they are with each other. (Emphasis on voluntary: this does not really apply in densely populated areas, or to the person who fell asleep on your shoulder during a flight.) People are sensitive to physical proximity and establish zones of personal and shared space (see Figure 5-5).

With the rising use of gestures in interaction, it is important to be sensitive to their unintended meanings. Proximity-based technologies, like beacons, GPS, and Bluetooth, are affected by how people perceive the line between personal and social interactions, especially where privacy is a concern. It can feel at best awkward, and at worst invasive and unsafe, when spatial boundaries are mismatched to the nature of the experience. Touchless payments, like Apple Pay or Samsung Pay, happen over inches, well within the range of personal control. At farther ranges, however, the same transaction would feel strangely exposed.

Figure 5-5. Proxemic ranges contribute to user expectations for interactions
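Edward T. Hall’s proxemic ranges are commonly summarized as intimate, personal, social, and public distances. The sketch below maps a sensed distance to one of those zones and gates a hypothetical touchless payment on it; the boundary values and the `payment_allowed` rule are illustrative assumptions, since the actual boundaries shift with culture and context.

```python
# Approximate proxemic zones after Edward T. Hall; exact boundaries vary by
# culture and context, so treat these numbers as illustrative defaults.
PROXEMIC_ZONES = [
    ("intimate", 0.45),  # up to ~0.45 m
    ("personal", 1.2),   # ~0.45-1.2 m
    ("social",   3.6),   # ~1.2-3.6 m
]

def proxemic_zone(distance_m: float) -> str:
    """Return the proxemic zone for a sensed distance in meters."""
    for name, upper_bound in PROXEMIC_ZONES:
        if distance_m <= upper_bound:
            return name
    return "public"

def payment_allowed(distance_m: float) -> bool:
    """Hypothetical gate: only allow a touchless payment within the range
    a person can plausibly control themselves (intimate/personal distance)."""
    return proxemic_zone(distance_m) in ("intimate", "personal")
```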

Precision Versus Strength

Our hands and feet have more individual tendons, joints, muscles, and nerves than our arms and legs. In contrast, our arms and legs have far greater bone and muscle mass. Together, these anatomical specializations give us both strength and precision. Hand tools often reflect this balance between grip strength and control, and are usually made for a single action or a small range of actions. In contrast, interface controllers generally require less grip strength and allow a wider range of movements and hand or finger positions, making them more versatile (see Figure 5-6).

Figure 5-6. Hand tools balance grip strength and control, usually for a small range of interactions

The Grasping Hand, by C.L. MacKenzie and T. Iberall, details the ways that different hand positions and grips affect the possibilities for interaction and how that is affected by the design of objects:

The amount of force you can generate depends on the way you hold it—the grip employed brings into play different muscle groups, which in turn differ in their force generation and sensory resolution capabilities. Power grasps are designed for strength and stability, involving the palm as well as the fingers. The form of the handle invites a particular kind of grasp: the hand’s configuration determines the strength versus precision that can be applied. Maximum forces can be in the range of 5 to hundreds of Newtons. Precision, or pinch, grasps are less strong, generating up to 25% of the force, and are characterized by apposition of the thumb and distal joints of the fingers.7

Trade-offs like these are common across many of our movements. We flex our legs and feet differently when we are balancing on uneven terrain versus trying to jump as high as we can. We make different faces when trying to read romantic poetry in a foreign language versus trying to chomp to the center of a Tootsie Pop in one go. This also applies to device interactions. Narrower smartphones allow a user to maintain their grip on the phone with one hand while also typing with their thumb. Good chef’s knives balance the weight of the handle against the weight of the blade to increase control of slicing movements. Power steering in cars and other vehicles emerged because, in stopped or slow driving conditions, it was difficult for drivers to turn the steering wheel with enough force and control simultaneously. With the Xbox Kinect in particular, the strain of holding up the entire weight of both arms demonstrated that fatigue in strength movements makes them difficult to repeat. Fatigue in precision movements emerged more as a loss of control and a degradation of accuracy (see Figure 5-7).

Figure 5-7. Design for repeated precise movements must account for potential degradation of control and accuracy over prolonged use, caused by muscle fatigue

Inferring Versus Designating Intent

Many new smart products use automated technologies that are triggered by physical actions. The Dyson hand dryer starts when a person dips their hands between the blowers. In this case, it is inferring the user’s intention to dry their hands. Occasionally this inference might be wrong—for example, when an article of clothing or a purse brushes through the opening, or when the unit is being wiped dry. However, such accidental triggering, called a false positive, is uncommon, and the nuisance factor is low.

Palm rejection technology on Apple touchscreen devices works in the opposite direction: it assumes that resting your hand on a screen is unintentional, perhaps the product of habit or a tired wrist. However, if you wanted to make a finger-painted turkey, it would also reject your palm print, defying generations of grade-school tradition. This is called a false negative, when an intended action is not recognized. For now, real finger-painting will remain that much more accurate, messy, and satisfying. (Which is not to say that it wasn’t already.)

Assisted or automated interactions blend the physical activity of a task with the trigger that activates device functionality. Stepping onto a scale triggers weighing. Approaching an automatic door causes it to slide open. Interactions that infer intent require thoughtful consideration of the user behavior that signals intent. Is the trigger behavior common across many different activities? Is it important to task performance? Do people have different methods or styles that could affect triggering? (See Figure 5-8.)

Figure 5-8. Inferring intentions and responding appropriately is part of appearing lifelike and being useful, as demonstrated by this robot from the Office of Naval Research (source: U.S. Navy photo by John F. Williams)
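One way to reason about an inferred-intent trigger like the hand dryer is as a threshold on noisy sensor readings: set it too low and a sleeve brushing past starts the dryer (a false positive); set it too high and a genuine pair of hands goes unrecognized (a false negative). The toy sketch below makes that trade-off concrete; the sensor values, thresholds, and function name are invented for illustration, not drawn from any real product.

```python
def infer_hands_present(occlusion_ratio: float, dwell_frames: int,
                        occlusion_threshold: float = 0.6,
                        min_dwell_frames: int = 5) -> bool:
    """Toy inferred-intent trigger for a hand dryer.

    occlusion_ratio: fraction of the sensing gap currently blocked (0.0-1.0).
    dwell_frames:    how many consecutive frames it has been blocked.

    Raising either threshold trades false positives (a sleeve or purse strap
    starting the dryer) for false negatives (small or quickly inserted hands
    being ignored). Neither error can be eliminated, only balanced against
    how annoying or costly each one is for the product at hand.
    """
    return occlusion_ratio >= occlusion_threshold and dwell_frames >= min_dwell_frames
```

Requiring both a sufficiently blocked opening and a brief sustained presence trades a little responsiveness for fewer accidental starts.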

A waving gesture “wakes up” the Xbox Kinect. Gestures are tied to screen-based target areas, and a specific set of gestures designates specific commands. Because the Kinect accepts such a large range of physical movements as part of its overall interaction, it requires that certain commands be unique and deliberate to reduce the possibility of accidental commands and to distinguish between system commands and more general game play. This principle applies to the “wake words” for voice assistants as well. “Alexa,” “Echo,” and “OK, Google” are somewhat uncommon words and phrases in everyday household conversation. Alexa, in particular, is a fairly uncommon given name, and its use will probably decline even further.
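Whether the trigger is a wave or a spoken wake word, the underlying pattern is the same: ignore everything until a deliberate, distinctive command is recognized, then accept input for a short window. Below is a minimal sketch of that gating, assuming a hypothetical recognizer that hands us already-classified gestures or words; the names and the event-count window are made up for illustration.

```python
class WakeGate:
    """Ignores all input until a deliberate wake command is recognized,
    then accepts commands for a short window before going back to sleep."""

    def __init__(self, wake_command="wave", awake_for_events=20):
        self.wake_command = wake_command      # the designated, distinctive trigger
        self.awake_for_events = awake_for_events
        self._remaining = 0                   # events left before re-sleeping

    def handle(self, recognized_input):
        """Return the input if it should be acted on, otherwise None."""
        if recognized_input == self.wake_command:
            self._remaining = self.awake_for_events
            return None                       # the wake command itself is not a command
        if self._remaining > 0:
            self._remaining -= 1
            return recognized_input           # awake: pass the command through
        return None                           # asleep: everyday gestures/words are ignored
```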

Between inferred and designated intents, a great deal of consideration goes into the behaviors that become part of an interaction and whether they will interfere with everyday life. Common words and gestures can mean many things or nothing at all, so it can be challenging to rely on them for device functionality. A person may open their refrigerator door to find something to eat, to check if they have milk, or, when the air conditioning is broken, to cool off on a hot day. The cost of a wrongly inferred intention can range from trivial to life-threatening.

Summary

The ability to engage in tasks and activities is called acting. It involves using our bodies to accomplish something within our surrounding context. Some abilities we are born with, or soon develop; others take effort to learn. For many tasks, the highest level of learning is achieved when we forget that we are even doing them. Interfering with those unaware activities can create unexpected distractions. Acting is also the way we demonstrate intent, so sensing and inferring what a user hopes to accomplish is an important and often nuanced part of interactivity. As the boundaries between an interface and an activity blur, designers must weigh how the two complement and interfere with each other.

1 David Eagleman, Incognito: The Secret Lives of the Brain, Reprint Edition (New York: Vintage, 2012), 56.

2 “Eye Exam,” Mayo Clinic, accessed January 20, 2018, https://www.mayoclinic.org/tests-procedures/eye-exam/.

3 Henry Dreyfuss, Designing for People (New York: Allworth Press, 2003), 24.

4 “Vision Facts and Statistics,” MES Vision, accessed January 20, 2018, https://www.mesvision.com/includes/pdf_Broker/MESVision%20Facts%20and%20Statistics.pdf.

5 Ellen Lupton, Beautiful Users: Designing for People (Princeton: Princeton University Press, 2014), 26.

6 Brendan Koerner, “What Does a ‘Thumbs Up’ Mean in Iraq?”, Slate, March 28, 2003, http://www.slate.com/articles/news_and_politics/explainer/2003/03/what_does_a_thumbs_up_mean_in_iraq.html.

7 Christine L. MacKenzie and Thea Iberall, The Grasping Hand (Amsterdam; New York: North Holland, 1994).
