11

Design, prototyping, and construction

  11.1 Introduction
  11.2 Prototyping and construction
  11.3 Conceptual design: moving from requirements to first design
  11.4 Physical design: getting concrete
  11.5 Using scenarios in design
  11.6 Using prototypes in design
  11.7 Tool support

11.1 Introduction

Design activities begin once some requirements have been established. The design emerges iteratively, through repeated design–evaluation–redesign cycles involving users. Broadly speaking, there are two types of design: conceptual and physical. The former is concerned with developing a conceptual model that captures what the product will do and how it will behave, while the latter is concerned with details of the design such as screen and menu structures, icons, and graphics. We discussed physical design issues relating to different types of interfaces in Chapter 6 and so we do not return to this in detail here, but refer back to Chapter 6 as appropriate.

For users to evaluate the design of an interactive product effectively, designers must produce an interactive version of their ideas. In the early stages of development, these interactive versions may be made of paper and cardboard, while as design progresses and ideas become more detailed, they may be polished pieces of software, metal, or plastic that resemble the final product. We have called the activity concerned with building this interactive version prototyping and construction.

There are two distinct circumstances for design: one where you're starting from scratch and one where you're modifying an existing product. A lot of design comes from the latter, and it may be tempting to think that additional features can be added, or existing ones tweaked, without extensive investigation, prototyping, or evaluation. It is true that if changes are not significant then the prototyping and evaluation activities can be scaled down, but they are still invaluable activities that should not be skipped.

In Chapter 10, we discussed some ways to identify user needs and establish requirements. In this chapter, we look at the activities involved in progressing a set of requirements through the cycles of prototyping to construction. We begin by explaining the role and techniques of prototyping and then explain how prototypes may be used in the design process. Tool support plays an important part in development, but tool support changes so rapidly in this area that we do not attempt to provide a catalog of current support. Instead, we discuss the kinds of tools that may be of help and categories of tools that have been suggested.

The main aims of this chapter are to:

  • Describe prototyping and different types of prototyping activities.
  • Enable you to produce simple prototypes from the models developed during the requirements activity.
  • Enable you to produce a conceptual model for a product and justify your choices.
  • Explain the use of scenarios and prototypes in design.
  • Discuss the range of tool support available for interaction design.

11.2 Prototyping and Construction

It is often said that users can't tell you what they want, but when they see something and get to use it, they soon know what they don't want. Having collected information about work and everyday practices, and views about what a system should and shouldn't do, we then need to try out our ideas by building prototypes and iterating through several versions. And the more iterations, the better the final product will be.

11.2.1 What is a Prototype?

When you hear the term prototype, you may imagine something like a scale model of a building or a bridge, or maybe a piece of software that crashes every few minutes. But a prototype can also be a paper-based outline of a screen or set of screens, an electronic ‘picture,’ a video simulation of a task, a three-dimensional paper and cardboard mockup of a whole workstation, or a simple stack of hyperlinked screen shots, among other things.

In fact, a prototype can be anything from a paper-based storyboard through to a complex piece of software, and from a cardboard mockup to a molded or pressed piece of metal. A prototype allows stakeholders to interact with an envisioned product, to gain some experience of using it in a realistic setting, and to explore imagined uses.

For example, when the idea for the PalmPilot was being developed, Jeff Hawkins (founder of the company) carved a piece of wood about the size and shape of the device he had imagined. He used to carry this piece of wood around with him and pretend to enter information into it, just to see what it would be like to own such a device (Bergman and Haitani, 2000). This is an example of a very simple (some might even say bizarre) prototype, but it served its purpose of simulating scenarios of use.

Ehn and Kyng (1991) report on the use of a cardboard box with the label ‘Desktop Laser Printer’ as a mockup. It did not matter that, in their setup, the printer was not real. The important point was that the intended users, journalists and typographers, could experience and envision what it would be like to have one of these machines on their desks. This may seem a little extreme, but in 1982 when this was done, desktop laser printers were expensive items of equipment and were not a common sight around the office.

So a prototype is a limited representation of a design that allows users to interact with it and to explore its suitability.

11.2.2 Why Prototype?

Prototypes are a useful aid when discussing ideas with stakeholders; they are a communication device among team members, and are an effective way to test out ideas for yourself. The activity of building prototypes encourages reflection in design, as described by Schön (1983), and is recognized by designers from many disciplines as an important aspect of the design process. Liddle (1996), talking about software design, recommends that prototyping should always precede any writing of code.

Prototypes answer questions and support designers in choosing between alternatives. Hence, they serve a variety of purposes: for example, to test out the technical feasibility of an idea, to clarify some vague requirements, to do some user testing and evaluation, or to check that a certain design direction is compatible with the rest of the system development. Which of these is your purpose will influence the kind of prototype you build. So, for example, if you are trying to clarify how users might perform a set of tasks and whether your proposed design would support them in this, you might produce a paper-based mockup. Figure 11.1 shows a paper-based prototype of the design for a handheld device to help an autistic child communicate. This prototype shows the intended functions and buttons, their positioning and labeling, and the overall shape of the device, but none of the buttons actually work. This kind of prototype is sufficient to investigate scenarios of use and to decide, for example, whether the buttons are appropriate and the functions sufficient, but not to test whether the speech is loud enough or the response fast enough.

Heather Martin and Bill Gaver (2000) describe a different kind of prototyping with a different purpose. When prototyping audiophotography products, they used a variety of different techniques including video scenarios similar to the scenarios we introduced in Chapter 10, but filmed rather than written. At each stage, the prototypes were minimally specified, deliberately leaving some aspects vague so as to stimulate further ideas and discussion.

11.2.3 Low-fidelity Prototyping

A low-fidelity prototype is one that does not look very much like the final product. For example, it uses materials that are very different from the intended final version, such as paper and cardboard rather than electronic screens and metal. The lump of wood used to prototype the PalmPilot described above is a low-fidelity prototype, as is the cardboard-box laser printer.

Low-fidelity prototypes are useful because they tend to be simple, cheap, and quick to produce. This also means that they are simple, cheap, and quick to modify so they support the exploration of alternative designs and ideas. This is particularly important in early stages of development, during conceptual design for example, because prototypes that are used for exploring ideas should be flexible and encourage rather than discourage exploration and modification. Low-fidelity prototypes are never intended to be kept and integrated into the final product. They are for exploration only.

Storyboarding. Storyboarding is one example of low-fidelity prototyping that is often used in conjunction with scenarios, as described in Chapter 10. A storyboard consists of a series of sketches showing how a user might progress through a task using the product under development. It can be a series of sketched screens for a GUI-based software system, or a series of scene sketches showing how a user can perform a task using an interactive device. When used in conjunction with a scenario, the storyboard brings more detail to the written scenario and offers stakeholders a chance to role-play with the prototype, interacting with it by stepping through the scenario. The example storyboard shown in Figure 11.2 (Hartfield and Winograd, 1996) depicts a person using a new system for digitizing images. This example does not show detailed drawings of the screens involved, but it describes the steps a user might go through in order to use the system.


Figure 11.1 A paper-based prototype of a handheld device to support an autistic child


Figure 11.2 An example storyboard

Sketching. Low-fidelity prototyping often relies on sketching, and many people find it difficult to engage in this activity because they are inhibited about the quality of their drawing. Verplank (1989) suggests that you can teach yourself to get over this inhibition by devising your own symbols and icons for elements you might want to sketch, and practicing using them. They don't have to be anything more than simple boxes, stick figures, and stars. Elements you might require in a storyboard sketch, for example, include ‘things’ such as people, parts of a computer, desks, books, etc., and actions such as give, find, transfer, and write. If you are sketching an interface design, then you might need to draw various icons, dialog boxes, and so on. Some simple examples are shown in Figure 11.3. Try copying these and using them. The next activity requires other sketching symbols, but they can still be drawn quite simply.


Figure 11.3 Some simple sketches for low-fidelity prototyping

Activity 11.1

Produce a storyboard that depicts how to fill a car with gas (petrol).

Comment

Our attempt is shown in Figure 11.4.


Figure 11.4 A storyboard depicting how to fill a car with gas

Prototyping with index cards. Using index cards (small pieces of cardboard about 3 × 5 inches) is a successful and simple way to prototype an interaction, and is used quite commonly when developing websites. Each card represents one screen or one element of a task. In user evaluations, the user can step through the cards, pretending to perform the task while interacting with the cards. A more detailed example of this kind of prototyping is given in Section 11.6.2.

Wizard of Oz. Another low-fidelity prototyping method called Wizard of Oz assumes that you have a software-based prototype. In this technique, the user sits at a computer screen and interacts with the software as though interacting with the product. In fact, however, the computer is connected to another machine where a human operator sits and simulates the software's response to the user. The method takes its name from the classic story of the little girl who is swept away in a storm and finds herself in the Land of Oz (Baum and Denslow, 1900). The Wizard of Oz is a small shy man who operates a large artificial image of himself from behind a screen where no-one can see him. The Wizard of Oz style of prototyping has been used successfully for various systems, including PinTrace, a robotic system that helps surgeons to position orthopedic ‘pins’ accurately during surgery for hip fractures (Molin, 2004).
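The essence of the technique is nothing more than a relay between the participant and a hidden operator. The sketch below is purely illustrative, with invented names, and runs both roles in a single console; in a real study the operator would sit at a separate, networked machine so that the participant never sees the ‘wizard.’

    # Minimal single-console sketch of the Wizard of Oz idea (hypothetical names).
    # In practice the user and the 'wizard' would be on separate machines
    # connected over a network, so the operator stays hidden.

    def wizard_of_oz_session():
        print("Simulated travel organizer -- type 'quit' to finish")
        while True:
            user_input = input("User> ")            # what the participant types
            if user_input.strip().lower() == "quit":
                break
            # No real back-end logic is called: the request is shown to a human
            # operator, whose reply is presented as if it were system output.
            print(f"[wizard console] user asked: {user_input!r}")
            wizard_reply = input("Wizard> ")
            print(f"System: {wizard_reply}")

    if __name__ == "__main__":
        wizard_of_oz_session()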

11.2.4 High-fidelity Prototyping

High-fidelity prototyping uses materials that you would expect to be in the final product and produces a prototype that looks much more like the final thing. For example, a prototype of a software system developed in Visual Basic is higher fidelity than a paper-based mockup; a molded piece of plastic with a dummy keyboard is a higher-fidelity prototype of the PalmPilot than the lump of wood.

If you are to build a prototype in software, then clearly you need a software tool to support this. Common prototyping tools include Flash, Visual Basic, and Smalltalk. These are also fully-fledged development environments, so they are powerful tools, but building prototypes using them can also be straightforward.

In a classic paper, Marc Rettig (1994) argues that more projects should use low-fidelity prototyping because of the inherent problems with high-fidelity prototyping. He identifies these problems as:

  • They take too long to build.
  • Reviewers and testers tend to comment on superficial aspects rather than content.
  • Developers are reluctant to change something they have crafted for hours.
  • A software prototype can set expectations too high.
  • Just one bug in a high-fidelity prototype can bring the testing to a halt.

High-fidelity prototyping is useful for selling ideas to people and for testing out technical issues. However, the use of paper prototyping and other ideas should be actively encouraged for exploring issues of content and structure. Further advantages and disadvantages of the two types of prototyping are listed in Table 11.1.


Table 11.1 Relative effectiveness of low- vs. high-fidelity prototypes (Rudd et al., 1996)

PowerPoint is often used for prototyping because it balances the provisionality of paper with the polished appearance of software prototypes. A PowerPoint prototype has characteristics of both high and low fidelity.

11.2.5 Compromises in Prototyping

By their very nature, prototypes involve compromises: the intention is to produce something quickly to test an aspect of the product. The kind of questions or choices that any one prototype allows the designer to answer is therefore limited, and the prototype must be designed and built with the key issues in mind. In low-fidelity prototyping, it is fairly clear that compromises have been made. For example, with a paper-based prototype an obvious compromise is that the device doesn't actually work! For software-based prototyping, some of the compromises will still be fairly clear; for example, the response speed may be slow, or the exact icons may be sketchy, or only a limited amount of functionality may be available.

Box 11.1: Prototyping Cultures

“The culture of an organization has a strong influence on the quality of the innovations that the organization can produce.” (Schrage, 1996, p. 193)

This observation is drawn mainly from product-related organizations, but also applies to software development. There are primarily two kinds of organizational culture for innovation: the specification culture and the prototyping culture. In the former, new products and development are driven by written specifications, i.e. by a collection of documented requirements. In the latter, understanding requirements and developing the new product are driven by prototyping. Large companies such as IBM or AT&T that have to gather and coordinate a large amount of information tend to be specification-driven, while smaller entrepreneurial companies tend to be prototype-driven. Both approaches have potential disadvantages. A carefully prepared specification may prove completely infeasible once prototyping begins. Similarly, a wonderful prototype may prove to be too expensive to produce on a large scale.

David Kelley (Schrage, p. 195) claims that organizations wanting to be innovative need to move to a prototype-driven culture. Schrage sees two cultural aspects to this shift. First, scheduled prototyping cycles that force designers to build many prototypes are more likely to lead to a prototype-driven culture than allowing designers to produce ad hoc prototypes when they think it appropriate. Second, rather than innovative teams being needed to produce innovative prototypes, it is now recognized that innovative prototypes lead to innovative teams! This can be especially significant when the teams are cross-functional, i.e. multidisciplinary.

(Schrage, 1996)

Two compromises that often must be traded off against each other are breadth of functionality versus depth. These two kinds of prototyping are called horizontal prototyping (providing a wide range of functions but with little detail) and vertical prototyping (providing a lot of detail for only a few functions).
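The breadth-versus-depth trade-off can be made concrete by imagining how each kind of prototype of the travel organizer might be stubbed out in software. The sketch below is only an illustration under invented names: the horizontal prototype exposes every top-level function but implements none of them, while the vertical prototype implements a single function in enough depth to be tested realistically.

    # Horizontal prototype: the full breadth of the travel organizer is visible,
    # but every function is a placeholder (all names are hypothetical).
    def search_holidays(criteria):       return "placeholder results page"
    def check_visa(passport, dest):      return "placeholder visa summary"
    def book_holiday(holiday_id):        return "placeholder booking confirmation"
    def view_vaccinations(dest):         return "placeholder health advice"

    # Vertical prototype: only visa checking is offered, but in working depth,
    # here driven by a small, made-up table of rules.
    VISA_RULES = {("UK", "Australia"): "eVisitor visa required",
                  ("UK", "France"): "no visa required"}

    def check_visa_in_depth(passport: str, destination: str) -> str:
        rule = VISA_RULES.get((passport, destination))
        return rule if rule else "rules for this combination not yet covered"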

Other compromises won't be obvious to a user of the system. For example, the internal structure of the system may not have been carefully designed, and the prototype may contain ‘spaghetti code’ or may be badly partitioned. One of the dangers of producing running prototypes, i.e. ones that users can interact with automatically, is that users may believe that the prototype is the system. The danger for developers is that it may lead them to consider fewer alternatives because they have found one that works and that the users like. However, the compromises made in order to produce the prototype must not be ignored, particularly the ones that are less obvious from the outside. We still have to produce a good-quality system and good engineering principles must be adhered to.


A different point is made by Holmquist (2005), who points out that the design team themselves must be careful not to inadvertently design something that is not technically feasible. Claiming that a mobile device will be able to detect the state of its user automatically and modify its behavior accordingly is fairly simple to do when you only have a paper prototype to play with; however, having a paper model does not mean that a claimed feature can be implemented. This is one reason why it is important to have both technical and design knowledge in an interaction design team.

11.2.6 Construction: from Design to Implementation

When the design has been through enough iteration cycles for the team to feel confident that it fits the requirements, everything that has been learned through the iterated steps of prototyping and evaluation must be integrated to produce the final product.

Although prototypes will have undergone extensive user evaluation, they will not necessarily have been subjected to rigorous quality testing for other characteristics such as robustness and error-free operation. Constructing a product to be used by thousands or millions of people running on various platforms and under a wide range of circumstances requires a different testing regime than producing a quick prototype to answer specific questions.

The dilemma box below discusses two different development philosophies. One approach, called evolutionary prototyping, involves evolving a prototype into the final product. An alternative approach, called throwaway prototyping, uses the prototypes as stepping stones towards the final design. In this case, the prototypes are thrown away and the final product is built from scratch. If an evolutionary prototyping approach is to be taken, the prototypes should be subjected to rigorous testing along the way; for throwaway prototyping such testing is not necessary.

Dilemma: prototyping to throw away

Low-fidelity prototypes are never intended to be kept and integrated into the final product. But when building a software-based system, developers can choose to do one of two things: either build a prototype with the intention of throwing it away after it has fulfilled its immediate purpose, or build a prototype with the intention of evolving it into the final product.

Above, we talked about the compromises made when producing a prototype, and we commented that the ‘invisible’ compromises, concerned with the structure of the underlying software, must not be ignored. However, when a project team is under pressure to produce the final product and a complex prototype exists that fulfills many of the requirements, or maybe a set of vertical prototypes exists that together fulfill the requirements, it can become very tempting to pull them together and issue the result as the final product. After all, many hours of development have probably gone into developing the prototypes, and evaluation with the client has gone well, so isn't it a waste to throw it all away? Basing the final product on prototypes in this way will simply store up testing and maintenance problems for later on: in short, this is likely to compromise the quality of the product.

Evolving the final prototype into the final product through a defined process of evolutionary prototyping can lead to a robust final product, but this must be clearly planned and designed for from the beginning. Building directly on prototypes that have been used to answer specific questions through the development process will not yield a robust product. As Constantine and Lockwood (1999) observe, “Software is the only engineering field that throws together prototypes and then attempts to sell them as delivered goods.”

On the other hand, if your device is an innovation, then being first to market with a ‘good enough’ product may be more important for securing your market position than having a very high-quality product that reaches the market two months after your competitors'.

11.3 Conceptual Design: Moving from Requirements to First Design

Conceptual design is concerned with transforming needs and requirements into a conceptual model. Designing the conceptual model is fundamental to interaction design, yet the idea of a conceptual model can be difficult to grasp. One of the reasons for this is that conceptual models take many different forms and it is not possible to provide a definitive detailed characterization of one. Instead, conceptual design is best understood by exploring and experiencing different approaches to it, and the purpose of this section is to provide you with some concrete suggestions about how to go about doing this.

In Chapter 2 we said that a conceptual model is an outline of what people can do with a product and what concepts are needed to understand how to interact with it. The former will emerge from the current functional requirements; possibly it will be a subset of them, possibly all of them, and possibly an extended version of them. The concepts needed to understand how to interact with the product depend on a variety of issues related to who the user will be, what kind of interaction will be used, what kind of interface will be used, terminology, metaphors, application domain, and so on. The first step in getting a concrete view of the conceptual model is to steep yourself in the data you have gathered about your users and their goals and try to empathize with them. From this, a picture of what you want the users' experience to be when using the new product will emerge and become more concrete. This process is helped by considering the issues in this section, and by using scenarios (introduced in Chapter 10) and prototypes (introduced in Section 11.2) to capture and experiment with ideas. All of this is also informed by the requirements activity and must be tempered with technological feasibility.

There are different ways to achieve this empathy. For example, Beyer and Holtzblatt (1998), in their method Contextual Design, recommend holding review meetings within the team to get different people's perspectives on the data and what they observed. This helps to deepen understanding and to expose the whole team to different aspects. Ideas will emerge as this extended understanding of the requirements is established, and these can be tested against other data and scenarios, discussed with other design team members, and prototyped for testing with users. Other ways to understand the users' experience are described in Box 11.2.

Ideas for a conceptual model may emerge during data gathering, but remember what Suzanne Robertson said in her interview at the end of Chapter 10: you must separate the real requirements from solution ideas.

Key guiding principles of conceptual design are:

  • Keep an open mind but never forget the users and their context.
  • Discuss ideas with other stakeholders as much as possible.
  • Use low-fidelity prototyping to get rapid feedback.
  • Iterate, iterate, and iterate.

Before explaining how scenarios and prototyping can help, we explore in more detail some useful perspectives to help develop a conceptual model.

Box 11.2: How to Really Understand the Users' Experience

Some design teams go to great lengths to ensure that they come to empathize with, not just understand, the users' experience. We know from learning things ourselves that ‘learning by doing’ is more effective than being told something or just seeing something. Buchenau and Suri (2000) describe an approach they call experience prototyping, which is intended to give designers some of the insight into a user's experience that can only come from first-hand knowledge. For example, they describe a team designing a chest-implanted automatic defibrillator. A defibrillator is used with victims of cardiac arrest when their heart muscle goes into a chaotic arrhythmia and fails to pump blood, a state called fibrillation. A defibrillator delivers an electric shock to the heart, often through paddle electrodes applied externally through the chest wall; an implanted defibrillator does this through leads that connect directly to the heart muscle. In either case, it's a big electric shock, intended to restore the heart muscle to its regular rhythm, that can be powerful enough to knock people off their feet.

This kind of event is completely outside most people's experience, and so it is difficult really to understand what the user's experience is likely to be for this kind of device. You can't fit a prototype defibrillator to each member of the design team and simulate fibrillation in them! This makes it difficult for designers to gain the insight they need. However, you can simulate some critical aspects of the experience, one of which is the random occurrence of a defibrillating shock. To achieve this, each team member was given a pager to take home over the weekend (elements of the pack are shown in Figure 11.5). The pager message simulated the occurrence of a defibrillating shock. Messages were sent at random, and team members were asked to record where they were, who they were with, what they were doing, and what they thought and felt knowing that this represented a shock. Experiences were shared the following week, and example insights ranged from anxiety around everyday happenings, such as holding a child or operating power tools, to being in social situations and at a loss as to how to communicate to onlookers what was happening. This firsthand experience brought new insights to the design effort.

Another instance in which designers tried hard to come to terms with the user experience is the Third Age suit, developed at ICE, Loughborough University (see Figure 11.6). This suit was designed so that car designers could experience what it might be like to be in an older body. The suit restricts movement in the neck, arms, legs, and ankles in a way that simulates the mobility problems typically experienced by someone over 55 years of age. For example, when operating the foot pedals in a car, many ‘third agers’ (as they are called) lack the flexibility in their ankles to be able to rest their heel on the floor and operate the pedals by flexing their ankle. Thus they have to lift their whole foot up and push it down each time they operate the pedal, which puts much more stress on their leg muscles.


Figure 11.5 The patient kit for experience prototyping


Figure 11.6 Third Age suit: (a) riding a bike and (b) using a cell phone

11.3.1 Developing an Initial Conceptual Model

Some elements of a conceptual model will derive from the requirements for the product. For example, the requirements activity will have provided information about the concepts involved in a task and their relationships, e.g. through task descriptions and analysis. Immersion in the data and attempting to empathize with the users as described above will, together with the requirements, provide information about the product's user experience goals, and give you a good understanding of what the product should be like. In this section we discuss approaches which help in pulling together an initial conceptual model. In particular, we consider:

  • Which interface metaphors would be suitable to help users understand the product?
  • Which interaction type(s) would best support the users' activities?
  • Do different interface types suggest alternative design insights or options?

In all the discussions that follow, we are not suggesting that one way of approaching a conceptual design is right for one situation and wrong for another; they all provide different ways of thinking about the product and hence aid in generating potential conceptual models. Box 11.3 describes another way of thinking about different conceptual models, and introduces the idea that some are based on process and others are based on product.

Interface metaphors. As mentioned in Chapter 2, interface metaphors combine familiar knowledge with new knowledge in a way that will help the user understand the system. Choosing suitable metaphors and combining new and familiar concepts requires a careful balance between utility and fun, and is based on a sound understanding of the users and their context. For example, consider an educational system to teach six-year-olds mathematics. You could use the metaphor of a classroom with a teacher standing at the blackboard. But if you consider the users of the system and what is likely to engage them, you will be more likely to choose a metaphor that reminds the children of something they enjoy, such as a ball game, the circus, a playroom, etc.

Erickson (1990) suggests a three-step process for choosing a good interface metaphor. The first step is to understand what the system will do. Identifying functional requirements was discussed in Chapter 10. Developing partial conceptual models and trying them out may be part of the process. The second step is to understand which bits of the system are likely to cause users problems. Another way of looking at this is to identify which tasks or subtasks cause problems, are complicated, or are critical. A metaphor is only a partial mapping between the software and the real thing upon which the metaphor is based. Understanding areas in which users are likely to have difficulties means that the metaphor can be chosen to support those aspects. The third step is to generate metaphors. Looking for metaphors in the users' description of the tasks is a good starting point. Also, any metaphors used in the application domain with which the users may be familiar may be suitable.

When suitable metaphors have been generated, they need to be evaluated. Again, Erickson (1990) suggests five questions to ask.

  1. How much structure does the metaphor provide? A good metaphor will provide structure, and preferably familiar structure.
  2. How much of the metaphor is relevant to the problem? One of the difficulties of using metaphors is that users may think they understand more than they do and start applying inappropriate elements of the metaphor to the system, leading to confusion or false expectations.
  3. Is the interface metaphor easy to represent? A good metaphor will be associated with particular visual and audio elements, as well as words.
  4. Will your audience understand the metaphor?
  5. How extensible is the metaphor? Does it have extra aspects that may be useful later on?

In the group travel organizer introduced in Chapter 10, one obvious metaphor we could use is a printed travel brochure. This is familiar to everyone, and we could combine that familiarity with facilities suitable for an electronic ‘brochure’ such as hyperlinks and searching. Having thought of this metaphor, we need to apply the five questions listed above.

  1. Does it supply structure? Yes, it supplies structure based on the familiar paper-based brochure. This is a book and therefore has pages, a cover, some kind of binding to hold the pages together, an index, and table of contents. Travel brochures are often structured around destinations but are also sometimes structured around activities, particularly when the company specializes in activity holidays. However, a traditional brochure focuses on the details of the holiday and accommodation and has little structure to support visa or vaccination information (both of which change regularly and are therefore not suitable to include in a printed document).
  2. How much of the metaphor is relevant? Having details of the accommodation, facilities available, holiday plan, and supporting illustrations is relevant for the travel organizer, so the content of the brochure is relevant. Also, structuring that information around holiday types and destinations is relevant, but preferably both kinds of grouping would be offered. But the physical nature of the brochure, such as page turning, is less relevant. The travel organizer can be more flexible than the brochure and should not try to emulate its book-like nature. Finally, the brochure is printed maybe once a year and cannot be kept up to date with the latest changes, whereas the travel organizer should be capable of offering the most recent information.
  3. Is the metaphor easy to represent? Yes. The holiday information could be a set of brochure-like ‘pages.’ Note that this is not the same as saying that the navigation through the pages will be limited to page-turning.
  4. Will your audience understand the metaphor? Yes.
  5. How extensible is the metaphor? The functionality of a paper-based brochure is fairly limited. However, it is also a book, and we could borrow facilities from electronic books (which are also familiar objects to most of our audience), so yes, it can be extended.

Activity 11.2

Another possible interface metaphor for the travel organizer is the travel consultant. A travel consultant takes a set of holiday requirements and tailors the holiday accordingly, offering maybe two or three alternatives, but making most of the decisions on the travelers' behalf. Ask the five questions above of this metaphor.

Comment

  1. Does the travel consultant metaphor supply structure? Yes, it supplies structure because the key characteristic of this metaphor is that the travelers specify what they want and the consultant goes and researches it. It relies on the travelers being able to give the consultant sufficient information to be able to search sensibly rather than leaving him to make key decisions.
  2. How much of the metaphor is relevant? The idea of handing over responsibility to someone else to search for suitable holidays may be appealing to some users, but might feel uncomfortable to others. On the other hand, having no help at all in sifting through potential holidays could become very tedious and dispiriting. So maybe this metaphor is relevant to an extent.
  3. Is the metaphor easy to represent? Yes, it could be represented by a software agent, or by having a sophisticated database entry and search facility. But the question is: would users like this approach?
  4. Will your audience understand the metaphor? Yes.
  5. How extensible is the metaphor? The wonderful thing about people is that they are flexible, hence the metaphor of the travel consultant is also pretty flexible. For example, the consultant could be asked to bring just a few options for the users to consider, having screened out inappropriate ones; alternatively the consultant could be asked to suggest 50 or 100 options!

Box 11.3: Process-oriented Versus Product-oriented Conceptual Models

Mayhew (1999) characterizes conceptual models in terms of their focus on products or on process.

The difference between these two kinds of conceptual model concerns the drivers for the design activity. For a product-oriented application, the main products and the tools needed to create them form the main structure of the application. For a process-oriented application, it is the list of process steps that forms the system's basis. Mayhew suggests the following issues must be addressed during conceptual design, whether the application is primarily product-oriented or process-oriented:


Figure 11.7 An example of a process-oriented conceptual model

  • Products or processes must be clearly identified. For example, what documents are to be generated and what other tools are required to produce them? In a process-oriented model, what processes are to be supported?
  • A set of presentation rules must be designed. For example, urgent tasks must be placed on the desktop, while less urgent tasks may be accessible through the menu bar. If designing for a GUI, design rules and guidelines come with the particular platform.
  • Design a set of rules for how windows will be used.
  • Identify how major information and functionality will be divided across displays.
  • Define and design major navigational pathways. This will draw on the task analysis earlier, and leads to a structure for the tasks. Don't over-constrain users, make navigation easy, and provide facilities so that they always know where they are.
  • Document alternative conceptual design models in sketches and explanatory notes.

An example conceptual model based on this approach is shown in Figure 11.7. This is a process-based model, and so it is structured around the processes and subprocesses the system is to support.

(Mayhew, 1999)

Interaction types. In Chapter 2 we introduced four different types of interaction: instructing, conversing, manipulating, and exploring. Which is best suited to your current design depends on the application domain and the kind of product being developed. For example, a computer game is most likely to suit a manipulating style, while a drawing package has aspects of instructing and conversing.

Most conceptual models will include a combination of interaction types, and it is necessary to associate different parts of the interaction with different types. For example, consider the travel organizer. One of the users' tasks is to find out the visa regulations for a particular destination; this will require an instructing approach to interaction. No dialog is necessary for the system to show the required information; the user simply has to enter a predefined set of information, e.g. origin of passport and destination. On the other hand, the users' task of trying to identify a holiday for a group of people may be conducted more like a conversation. We can imagine that the user begins by selecting some characteristics of the holiday and some time constraints and preferences, then the organizer responds with several options, the user provides more information or preferences, and so on. This is much more like a conversation. (You may like to refer back to the scenario of this task in Chapter 10 and consider how well it matches this type of interaction.) Alternatively, users who don't have any clear requirements yet might prefer to be able to explore the information before asking for specific options.
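One way to see the difference between these two interaction types is the shape of the code each implies. The following sketch is illustrative only, with made-up data and function names: the instructing task is a single call that takes a complete, predefined set of information, while the conversing task is a turn-taking loop in which each answer narrows the options.

    # Toy data invented purely for illustration.
    VISA_RULES = {("UK", "Australia"): "eVisitor visa required",
                  ("UK", "France"): "no visa required"}
    HOLIDAYS = [{"name": "Surf camp", "budget": "low", "climate": "warm"},
                {"name": "Ski chalet", "budget": "high", "climate": "cold"},
                {"name": "City break", "budget": "low", "climate": "mild"}]

    # Instructing: the user supplies everything up front and gets one answer back.
    def get_visa_regulations(passport_origin: str, destination: str) -> str:
        return VISA_RULES.get((passport_origin, destination), "not yet covered")

    # Conversing: system and user take turns, each reply narrowing the options.
    def find_group_holiday() -> list:
        options = list(HOLIDAYS)
        for attribute in ("budget", "climate"):
            if len(options) <= 1:
                break
            answer = input(f"What {attribute} would the group prefer? ")
            options = [h for h in options if h[attribute] == answer] or options
        return options

    if __name__ == "__main__":
        print(get_visa_regulations("UK", "France"))
        print(find_group_holiday())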

Activity 11.3

Consider the library catalog system introduced in Chapter 10. Identify tasks associated with this product that would be best supported by each of the interaction types instructing, conversing, manipulating, and exploring.

Comment

Here are some suggestions. You may have identified others:

  1. Instructing: the user wants to see details of a particular book, such as publisher and location.
  2. Conversing: the user wants to identify a book on a particular topic but doesn't know exactly what is required.
  3. Manipulating: the library books might be represented as icons that could be interrogated for information or manipulated to represent the book being reserved or borrowed.
  4. Exploring: the user is looking for interesting books, with no particular topic or author in mind.

Interface types. Considering different interfaces at this stage may seem premature, but it has both a design and a practical purpose. When thinking about the conceptual model for a product, it is important not to be unduly influenced by a predetermined interface type. Different interface types prompt and support different perspectives on the product under development and suggest different possible behaviors. Therefore considering the effect of different interfaces on the product at this stage is one way to prompt alternatives.

Before the product can be prototyped, some candidate alternative interfaces will need to have been chosen. These decisions will depend on the constraints on the product, arising from the requirements you have established. For example, input and output devices will be influenced particularly by user and environmental requirements. Therefore, considering interfaces here also takes one step towards producing practical prototypes.

To illustrate this, we consider a subset of the interfaces introduced in Chapter 6, and the different perspectives they bring to the travel organizer:

  • WIMP/GUI interface. This is the traditional desktop interface which uses windows, icons, menus, and a pointing device. It would certainly be possible to build the travel organizer around this model, but having a separate keyboard and mouse in a public space may not be practical because of potential risk of damage. WIMP systems also tend not to be very exciting and engaging, but a system to support people going on holiday to have a good time should be exciting and fun.
  • Shareable interface. The travel organizer has to be shareable as it is intended to be used by a group of people. The design issues for shareable interfaces which were introduced in Chapter 6 will need to be considered for this system.
  • Tangible interface. Tangible interfaces are a form of sensor-based interaction, where blocks or other physical objects are moved around. Thinking about a travel organizer in this way conjures up an interesting image of people collaborating, maybe with the physical objects representing themselves traveling, but there are practical problems of having this kind of interface in a public place, as the objects may be lost or damaged.
  • Advanced graphical interface. These interfaces include multimedia presentations, virtual environments, and interactive animations. If this kind of interface were used then the system could provide moving images and virtual experiences representing the holiday and the destination for the users. Giving users the chance to experience more than two-dimensional descriptions and photographs would be a very exciting prospect indeed!

Activity 11.4

Consider the library catalog system and pick out two interface types from Chapter 6 that might provide a different perspective on the design.

Comment

Library catalog systems tend to be built around the WIMP/GUI style of interface, so it is worth exploring other styles to see what insights they may bring. We had the following thoughts, but you may have had others.

The library catalog is likely to be used only in certain places, such as the library itself or perhaps in an office. However, it might be useful to be able to move around the library while searching for a book or journal. A mobile interface therefore presents some interesting alternative ideas, such as using bar code readers or positioning technology to help identify locations or books, or to inform users about the section of the library they are in. Any interfaces that require speech or that make a noise would not be appropriate in this quiet environment; also, complex presentation styles (such as multimedia or multimodal) might make the task of finding the book you want more complex than it needs to be. A web-based interface may be appropriate.

11.3.2 Expanding the Initial Conceptual Model

Considering the issues in the previous section helps the designer to produce a set of initial conceptual model ideas. These ideas must be thought through in more detail and expanded before being prototyped or tested with users. For example, you have to decide more concretely what concepts need to be communicated between the user and the product and how they are to be structured, related, and presented. This means deciding which functions the product will perform (and which the user will perform), how those functions are related, and what information is required to support them. Although these decisions must be made, remember that they are made only tentatively to begin with and may change after prototyping and evaluation.

What functions will the product perform? Understanding the tasks the product will support is a fundamental aspect of developing the conceptual model, but it is also important to consider more specifically what functions the product will perform, i.e. how the task will be divided up between the human and the machine. For example, in the travel organizer example, the system may suggest specific holidays for a given set of people, but is that as far as it should go? Should it automatically put the holidays on hold, or wait until told that this holiday is suitable? Developing scenarios, essential use cases, and use cases for the system will help clarify the answers to these questions. Deciding what the system will do and what must be left for the user is sometimes called task allocation. The trade-off between what the product does and what to keep in the control of the user has cognitive implications (see Chapter 3), and is linked to social aspects of collaboration (see Chapter 4). In addition, if the cognitive load is too high for the user, then the device may be too stressful to use. On the other hand, if the product takes on too much and is too inflexible, then it may not be used at all.

Another aspect concerns the functions the hardware will perform, i.e. what functions will be ‘hard-wired’ into the product and what will be left under software control, and thereby possibly indirectly in the control of the human user. This leads to considerations of the product's architecture, although you would not necessarily expect to have a clear architectural design at this stage of development.

How are the functions related to each other? Functions may be related temporally, e.g. one must be performed before another, or two can be performed in parallel. They may also be related through any number of possible categorizations, e.g. all functions relating to telephone memory storage in a cell phone, or all options for accessing files in a wordprocessor. The relationships between tasks may constrain use or may indicate suitable task structures within the product. For example, if a task is dependent on completion of another task, then you may want to restrict the user to performing the tasks in strict order.

If task analysis has been performed on relevant tasks, the breakdown will support these kinds of decisions. For example, in the travel organizer example, the task analysis performed in Section 10.1 shows the subtasks involved and the order in which the subtasks can be performed. Thus, the system could allow potential holiday companies to be found before or after investigating the destination's facilities. It is, however, important to identify the potential holiday companies before looking for holiday availability.
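The ordering constraints revealed by the task analysis can be captured directly in an early software prototype. The sketch below is a hypothetical illustration (the class and method names are invented): investigating a destination's facilities is allowed at any point, but checking availability is blocked until the potential holiday companies have been identified.

    # Hypothetical sketch of enforcing a task-ordering constraint.
    class TravelOrganizerSession:
        def __init__(self):
            self.companies_identified = False

        def identify_holiday_companies(self, criteria: dict) -> None:
            # ... search for companies matching the criteria ...
            self.companies_identified = True

        def investigate_destination_facilities(self, destination: str) -> str:
            # No ordering constraint: allowed before or after other subtasks.
            return f"facilities summary for {destination}"

        def check_holiday_availability(self, dates: tuple) -> str:
            if not self.companies_identified:
                raise RuntimeError("identify potential holiday companies first")
            return "availability results"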

What information needs to be available? What data is required to perform a task? How is this data to be transformed by the system? Data is one of the categories of requirements we aim to identify and capture through the requirements activity. During conceptual design, we need to consider these requirements and ensure that our model provides the information necessary to perform the task. Detailed issues of structure and display, such as whether to use an analog display or a digital display, will more likely be dealt with during the physical design activity, but implications arising from the type of data to be displayed may impact conceptual design issues. Information visualization was discussed in Section 6.2.3.

For example, in the task of identifying potential holiday options for a set of people using the travel organizer, the system needs to be told what kind of holiday is required, available budget, preferred destinations (if any), preferred dates and duration (if any), how many people it is for, and any special requirements (such as physical disability) that this group has. In order to perform the function, the system must have this information and also must have access to detailed holiday and destination descriptions, holiday availability, facilities, restrictions, and so on.
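Writing these data requirements down as an explicit structure, even at this early stage, helps make the conceptual model concrete without committing to any screen design. The following sketch is one possible rendering, with field names invented for illustration:

    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional

    # A sketch of the information the travel organizer needs before it can
    # suggest holiday options (field names invented for illustration).
    @dataclass
    class HolidayRequest:
        holiday_type: str                      # e.g. 'beach', 'activity', 'city'
        budget_per_person: float
        group_size: int
        preferred_destinations: list = field(default_factory=list)   # may be empty
        earliest_start: Optional[date] = None  # dates and duration are optional
        duration_days: Optional[int] = None
        special_requirements: list = field(default_factory=list)     # e.g. wheelchair access

    # The system must also have access to detailed holiday and destination
    # descriptions, availability, facilities, and restrictions.
    @dataclass
    class HolidayOffer:
        destination: str
        description: str
        facilities: list
        restrictions: list
        available_dates: list
        price_per_person: float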

Physical design involves considering more concrete, detailed issues of designing the interface, such as screen or keypad design, which icons to use, how to structure menus, etc.

11.4 Physical Design: Getting Concrete

There is no rigid border between conceptual design and physical design. Producing a prototype inevitably means making some detailed decisions, albeit tentatively. Interaction design is inherently iterative, and so some detailed issues will come up during conceptual design; similarly, during physical design it will be necessary to revisit decisions made during conceptual design. Exactly where the border lies is not relevant. What is relevant is that the conceptual design should be allowed to develop freely without being tied to physical constraints too early, as this might inhibit creativity.

Design is about making choices and decisions, and the designer must strive to balance environmental, user, data, and usability requirements with functional requirements. These are often in conflict. For example, a PDA must provide adequate functionality but the size of screen and use of keyboard are constrained by the fact that it is a portable device. This means that the display of information is limited and the number of unique function keys is also limited, resulting in restricted views of information and the need to associate multiple functions with function keys.

There are many aspects to the physical design of interactive products, and we can't cover them all in this book. Chapter 6 introduced you to several interface types and their associated design issues. You will notice from that chapter that a plethora of guidelines for all types of design is available, and the number is growing. In addition, there are guidelines and regulations covering design for certain kinds of user. Box 11.4 describes some pointers for designing telephones for elderly people; Box 11.5 discusses issues concerning culturally-sensitive design.

The way we design the physical interface of the interactive product should not conflict with the user's cognitive processes involved in achieving the task. In Chapter 3, we introduced a number of these processes, such as attention, perception, memory, and so on, and we must design the physical form with these human characteristics very much in mind. For example, to help avoid memory overload, the interface should list options instead of making us remember a long list of possibilities. A wide range of guidelines, principles, and rules has been developed to help designers ensure that their products are usable.

Box 11.4: Designing Telephones for the Elderly and Disabled

The British Royal National Institute for the Blind (RNIB), together with the British Department of Trade and Industry and British Telecommunications, have compiled a brochure to explain the different impairments affecting many telephone user groups, together with a set of suggested telephone features that could greatly enhance the accessibility of devices for such user groups. They identify 15 impairments and 44 features that could be added to telephones to make their use more pleasant. The impairments include cognitive impairment, weak grip, limited dexterity, speech impairment, hearing impairment, and hand tremor (Gill and Shipley, 1999). Features that could make a difference to these user groups include:

  • Guarded or recessed keys to help prevent pressing the wrong key by mistake.
  • Sidetone reduction, which reduces the amount of noise picked up from the environment and mixed with incoming speech at the earpiece.
  • Allowing the user to adjust the amount of pressure needed to select a key. Apart from the more obvious consequences of too much or too little pressure, unsuitable key pressure may produce muscle spasms in some users.
  • Audio and tactile key feedback to indicate when a key has been pressed.

The ALVA MPO braille-based cell phone was described in Box 6.5. This phone represents an advance in cell phones for people with visual impairments, although it does not cater for all of these guidelines.

Box 11.5: Designing for Different Cultures

Throughout the book, you will find interaction design examples relating to different cultures, e.g. Indian midwives, Chinese ATM users, European travelers, American and British online shoppers, and so on. As companies and communities stretch around the world, designing systems, particularly websites, for a wide range of cultures has become more important. Building on the cultural dimensions proposed by Hofstede (see Chapter 10), Aaron Marcus and Emilie West Gould (2000) have suggested how these dimensions are reflected in the design of websites. For example, websites designed for countries with high ‘power distance’ are likely to focus on expertise, authority, certificates, leaders, and official stamps, while those for low power distance countries will focus more on social and moral order, customers, citizens, and freedom. Websites for countries with high ‘uncertainty avoidance’ will be designed around simple, clear metaphors, navigation schemes to avoid users getting lost, and redundant cues to reduce ambiguity. Low uncertainty avoidance cultures emphasize complexity with maximal choice, acceptance of the ‘risk’ associated with getting lost, less control of navigation, and multiple links with coding in color, sound, and typography.

An alternative approach to taking explicit account of differences in national culture is to develop an international site. The following guidelines are intended to help international design (Esselink, 2000).

  1. Be careful about using images that depict hand gestures or people.
  2. Use generic icons.
  3. Choose colors that are not associated with national flags or political movements.
  4. Ensure that the product supports different calendars, date formats, and time formats (see the code sketch after this list).
  5. Ensure that the product supports different number formats, currencies, weights and measurement systems.
  6. Ensure that the product supports international paper sizes, envelope sizes, and address formats.
  7. Avoid integrating text in graphics, as it cannot be translated easily.
  8. Allow for text expansion when translated from English.
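Several of these guidelines, notably 4 and 5, come down to never hard-coding one culture's formatting conventions. As a minimal illustration, and assuming the named locales are installed on the machine running it, Python's standard locale module can format the same date, number, and currency amount according to different cultural conventions:

    import locale
    from datetime import date

    def show_formats(locale_name: str) -> None:
        # Assumes the named locale is installed; if not, report it and move on.
        try:
            locale.setlocale(locale.LC_ALL, locale_name)
        except locale.Error:
            print(f"{locale_name}: locale not available on this system")
            return
        today = date(2024, 3, 5)
        print(locale_name,
              today.strftime("%x"),                                     # local date format
              locale.format_string("%.2f", 1234567.89, grouping=True),  # number format
              locale.currency(49.5, grouping=True))                     # currency layout

    for name in ("en_US.UTF-8", "de_DE.UTF-8", "ja_JP.UTF-8"):
        show_formats(name)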

Dilemma: One Global Website or Many Local Websites?

One choice that multinational companies have to make when designing their websites is whether to attempt to produce one site that will appeal across all cultures or to tailor each country's website to the local culture. This is more than just translating the site into different languages, as different cultures around the world respond differently to a whole range of attitudes and priorities. The list of guidelines in Box 11.5 illustrates this. It could be argued that trying to create one website image that is appealing to most cultures is very difficult, e.g. what does a generic icon look like? Yet creating different images for each culture is resource-intensive and might run the risk of diluting the brand. The website for Coca-Cola (www.cocacola.com), a brand which has a worldwide image, has links to a different local website for each of over 80 countries from Malawi to New Zealand and from Russia to the Seychelles. On the other hand, Pepsi, who also have a worldwide image, do not have such links from their website (www.pepsi.com). If you were going to design a website for a multinational company, what would you do?

11.5 Using Scenarios in Design

In Chapter 10, we introduced scenarios as informal stories about user tasks and activities. They are a powerful mechanism for communicating among team members and with users. We stated in Chapter 10 that scenarios could be used and refined through different data-gathering sessions, and they can be used to check out alternative designs at any stage.

Scenarios can be used to explicate existing work situations, but they are more commonly used for expressing proposed or imagined situations to help in conceptual design. Often, stakeholders are actively involved in producing and checking through scenarios for a product. Bødker identifies four roles that have been suggested for scenarios (Bødker, 2000, p. 63):

  • As a basis for the overall design.
  • For technical implementation.
  • As a means of cooperation within design teams.
  • As a means of cooperation across professional boundaries, i.e. as a basis of communication in a multidisciplinary team.

In any one project, scenarios may be used for any or all of these. Box 11.6 details how different scenarios were used throughout the development of a speech-recognition system. More specifically, scenarios have been used as scripts for user evaluation of prototypes, providing a concrete example of a task the user will perform with the product. Scenarios can also be used as the basis of storyboard creation (see below), and to build a shared understanding among team members of the kind of system being developed. Scenarios are good at selling ideas to users, managers, and potential customers. For example, the scenario presented in Figure 10.9 was designed to sell ideas to potential customers on how a product might enhance their lifestyles.

An interesting idea also proposed by Bødker is the notion of plus and minus scenarios. These attempt to capture the most positive and the most negative consequences of a particular proposed design solution (see Figure 11.8), thereby helping designers to gain a more comprehensive view of the proposal.

Figure 11.8 Example plus and minus scenarios

Activity 11.5

Consider an in-car navigation system (‘sat nav’) for planning routes, and suggest one plus and one minus scenario. For the plus scenario, try to think of all the possible benefits of the system. For the minus scenario, try to imagine everything that could go wrong.

Comment

Scenario 1. This plus scenario shows some potential positive aspects of an in-car navigation system.

Beth is in a hurry to get to her friend's house. She jumps into the car and switches on her in-car navigation system. The display appears quickly, showing her local area and indicating the current location of her car with a bright white dot. She calls up the memory function of the device and chooses her friend's address. A number of her frequent destinations are stored like this in the device, ready for her to pick the one she wants. She chooses the ‘shortest route’ option and the device thinks for a few seconds before showing her a bird's-eye view of her route. This feature is very useful because she can get an overall view of where she is going.

Once the engine is started, the display reverts to a close-up view to show the details of her journey. As she pulls away from the pavement, a calm voice tells her to “drive straight on for half a mile, then turn left.” After half a mile, the voice says again “turn left at the next junction.” As Beth has traveled this route many times before, she doesn't need to be told when to turn left or right, so she turns off the voice output and relies only on the display, which shows sufficient detail for her to see the location of her car, her destination and the roads she needs to use.

Scenario 2. This minus scenario shows some potential negative aspects of an in-car navigation system.

Beth is in a hurry to get to her friend's house. She gets in her car and turns on the in-car navigation system. The car's battery is faulty so all the information she had entered into the device has been lost. She has to tell the device her destination by choosing from a long list of towns and roads. Eventually, she finds the right address and asks for the quickest route. The device takes ages to respond, but after a couple of minutes displays an overall view of the route it has found. To Beth's dismay, the route chosen includes one of the main roads that is being dug up over this weekend, so she cannot use the route. She needs to find another route, so she presses the cancel button and tries again to search for her friend's address through the long list of towns and roads. By this time, she is very late.

Box 11.6: Using Scenarios Throughout Design

Scenarios were used throughout the design of a speech-recognition system (Karat, 1995). The goal of the project was to produce a product that used speech-recognition technology, so there was no defined set of user requirements to start with. The system offered speech-to-text dictation capabilities and also speech command capabilities for an application running on the same platform.

Initially, scenarios were used to set the direction of the project: discussions revolved around whether the scenario was correct or not, i.e. whether people would want to use the device to achieve the suggested task. Then scenarios were used to sketch out screens and an early user guide. Discussions at this point included checking what information was needed on the screen at what time, and also deciding what components needed to be built. Use-oriented scenarios, i.e. scenarios suggesting how the device might be used, formed the basis of early design meetings that resulted in a shared understanding of what facilities the system might include. An example scenario from basic direction setting was, “Imagine taking away the keyboard and mouse from your current workstation and describe doing everything through voice commands.”

Once the basic direction was agreed, further scenarios were generated to discuss the components of the system. These scenarios focused on typical use of speech commands so that vocabulary could be tracked. An example scenario for discussing vocabulary and system components was as follows.

Overall task: Open system editor, find file REPORT.TXT, change font to Times 16, save changes, and exit the editor.

This scenario was then broken down into a specific word list as follows:

Voice scenario steps: “system editor” “open” “open” “file” “find” “r” “e” “p” “open” “font” “times” “16” “ok” “save” “close”

A short user guide was developed early on, in parallel with the initial scenario development. User guide scenarios were generated by thinking about the kinds of question a user might need to answer, for example, “What is a speech manager?” “How do I know what I can say?”

Once early prototypes were developed, scenarios together with additional tasks were used as a basis for user testing. One of the problems was that people were unsure of what they could say, and although the system included a “What can I say?” module, this itself proved difficult to use. An example scenario used in testing was “Change the background color of the icon for the communications folder to red.”

Scenarios in the form of video prototypes were taken to potential customers later in the project for feedback. The feedback they received was mostly in scenario form too, and the scenarios extracted were fed back into the design process. For example, one of the scenarios collected was, “I would like to walk around while I dictate.” This could be accommodated by making mobility a factor when selecting the microphone.

Collecting feedback in the form of scenarios continued later in the project, and these informed both the design of the product and the associated documentation.

11.6 Using Prototypes in Design

We introduced different kinds of prototype and reasons for prototyping in Section 11.2. In this section we illustrate how prototypes may be used in design, and specifically we demonstrate one way in which prototypes may be generated from the output of the requirements activity, as described in this book: producing a storyboard from a scenario and a card-based prototype from a use case. Both of these are low-fidelity prototypes and they may be used as the basis to develop more detailed interface designs and higher-fidelity prototypes as development progresses.

11.6.1 Generating Storyboards from Scenarios

A storyboard represents a sequence of actions or events that the user and the system go through to achieve a task. A scenario is one story about how a product may be used to achieve the task. It is therefore possible to generate a storyboard from a scenario by breaking the scenario into a series of steps that focus on interaction, and creating one scene in the storyboard for each step. The purpose of doing this is twofold: first, to produce a storyboard that can be used to get feedback from users and colleagues; second, to prompt the design team to consider the scenario and the use of the system in more detail. For example, consider the scenario for the travel organizer developed in Chapter 10. This can be broken down into six main steps:

  1. The Thomson family gather around the organizer and enter a set of initial requirements.
  2. The system's initial suggestion is that they consider a flotilla holiday but Sky and Eamonn aren't happy.
  3. The travel organizer shows them some descriptions of the holidays written by young people.
  4. The system asks for various further details.
  5. The system confirms that there are places in the Mediterranean.
  6. The travel organizer prints out a summary.

The first thing to notice about this set of steps is that it does not have the detail of a use case, and is not intended to be a use case. We are simply trying to identify the key events or activities associated with the scenario. The second thing to notice is that some of these events are focused solely on the travel organizer's screen and some are concerned with the environment. For example, the first one talks about the family gathering around the organizer, while the third and fourth are focused on the travel organizer and the information it is outputting. We therefore could produce a storyboard that focuses on the screens or one that is focused on the environment. Either way, sketching out the storyboard will prompt us to think about issues concerning the travel organizer and its design.

For example, the scenario says nothing about the kind of input and output devices that the system might use, but drawing the organizer forces you to think about these things. There is some information about the environment within which the system will operate, but again drawing the scene makes you stop and think about where the organizer will be. You don't have to commit to a decision, e.g. between a trackball and a touch screen, but you are forced to consider the options. When focusing on the screens, the designer is prompted to consider issues such as what information needs to be available and what information needs to be output. All of this helps to explore design decisions and alternatives, and the act of drawing makes them more explicit.

We chose to draw a storyboard that focuses on the environment of the travel organizer, and it is shown in Figure 11.9. While drawing this, various questions relating to the environment arose, such as: How can the interaction be designed for all the family? Will they sit or stand? How confidential should the interaction be? What kind of documentation or help needs to be available? What physical components does the travel organizer need? And so on. In this exercise, the questions the drawing prompts are just as important as the end product.

Figure 11.9 The storyboard for the travel organizer focusing on environmental issues

Note that although we have used the scenario as the main driver for producing the storyboard, there is other information from the requirements activity that also informs the development.

Activity 11.6

In Activity 10.3, you developed a futuristic scenario for the one-stop car shop. Using this scenario, develop a storyboard that focuses on the environment of the user. As you are drawing this storyboard, write down the design issues you are prompted to consider.

Comment

We used the scenario in the comment for Activity 10.3. This scenario breaks down into five main steps: the user arrives at the one-stop car shop; the user is directed into an empty booth; the user sits down in the racing car seat and the display comes alive; the user can print off reports; and the user can take a virtual reality drive in their chosen car. The storyboard is shown in Figure 11.10. Issues that occurred to us as we drew this storyboard included where to locate the printer, what kind of virtual reality equipment is needed, and what input devices are needed: a keyboard or touchscreen, a steering wheel, clutch, accelerator, and brake pedals? How similar to the real car controls do the input devices need to be? You may have thought of other issues.

Figure 11.10 The storyboard generated from the one-stop car shop scenario in Chapter 10

11.6.2 Generating Card-based Prototypes from Use Cases

The value of a card-based prototype lies in the fact that the screens or screen elements can be manipulated and moved around in order to simulate interaction (either with a user or without). Where a storyboard focusing on the screens has been developed, this can be translated into a card-based prototype and used in this way. Another way to produce a card-based prototype is to generate one from a use case output from the requirements activity.

For example, consider the use case generated for the travel organizer in Chapter 10. This focused on the visa requirements part of the system. For each step in the use case, the travel organizer will need to have an interaction component to deal with it, e.g. a button or menu option, or a display screen. By stepping through the use case, it is possible to build up a card-based prototype to cover the required behavior. For example, the cards in Figure 11.11 were developed by considering each of the steps in the use case. Card one covers step 1. Card two covers steps 2, 3, 4, 5, 6, and 7. Card three covers steps 8, 9, 10, and 11—notice the print button that is drawn into card three to allow for steps 10 and 11. As with the storyboards, drawing concrete elements of the interface like this forces the designer to think about detailed issues so that the user can interact with the prototype. In card two you will see that I chose to use a drop-down menu for the country and nationality. This is to avoid mistakes. However, the flaw in this is that I may not catch all of the countries in my list, and so an alternative design could also be incorporated where the user can choose an ‘enter below’ option and then type in the country or nationality (see Figure 11.12).

Figure 11.11 Cards one to three

Figure 11.12 Card four from the above

These cards can then be shown to potential users of the system to get their informal feedback. Remember that the more often you show developing ideas to potential users, the more likely it is that the design will fulfill its goals.

Activity 11.7

Produce a card-based prototype for the library catalog system and the task of locating a book as described by the use case in Activity 10.5. You may also like to ask one of your peers to act as a user and step through the task using the prototype.

Comment

Three of the cards from our prototype are shown in Figure 11.13. Note that in putting these cards together, we have not included a separate search author screen (as implied by step 8 of the use case), but have included other search options for the user to choose. In checking the interaction with users, it may be decided that having a separate screen is better.

Figure 11.13 A card-based prototype for locating a book in the library catalog system

Card-based prototypes may be shown to users to gain informal feedback. Equally importantly, they may be shown to colleagues to get another designer's perspective on the emerging design. In this case, I showed these cards to a colleague, and through discussion of the application and the cards, we concluded that although the cards represent one interpretation of the use case, they focus too much on an interaction model that assumes a WIMP/GUI interface. Our discussion was informed by several things including the storyboard and the scenario. One alternative would be to have a map of the world, and users can indicate their destination and nationality by choosing one of the countries on the map; another might be based around national flags. These alternatives could be prototyped using cards and further feedback obtained. In fact, a world map was used in the eSpace project (see Case Study 11.1).

CASE STUDY 11.1: Supporting Collaboration when Choosing Holidays Through the Design of Shared Visualizations and Display Surfaces

The goal of the eSpace project was to help agents and customers collaborate more effectively when developing complex, tailor-made products like round-the-world holidays, hi-fi systems, insurance portfolios, fitted kitchens, or digital TV packages. At the beginning of the project, an in-depth six-month field study was conducted looking at how agents and customers plan and build up travel products. The study revealed that the collaborative process is often hampered by asymmetries in the planning process: the agent finds it difficult to communicate and share information about the product, while the customer finds it difficult to piece together all the different aspects of the holiday experience. To overcome these problems, an innovative workspace called the ‘eTable’ (see Figure 11.14) was designed and implemented that uses interactive dynamic visualizations and user-centric interactive planning tools. The designs were informed by the conceptual framework of external cognition (Scaife and Rogers, 1996).

The eTable prototype was placed in trade shows across the world and has been evaluated in use at a leading travel agency. It has met with overwhelmingly positive responses. Customers come away with a much clearer idea of what they will be doing. Consultants are able to explore options with customers in a much more integrated way. One consultant commented that the eTable enabled her to ‘draw the customer in’ by providing an attractive vicarious experience of the holiday. “It's much easier to sell when the client can see everything, they get excited when it starts taking shape.”

Figure 11.14 The eSpace project

Box 11.7: Design Patterns for HCI

Design patterns have become popular in software engineering since the early 1990s. Patterns capture experience, but they have a different structure and a different philosophy from other forms of guidance, such as the guidelines we introduced earlier, or specific methods. One of the intentions of the patterns community is to create a vocabulary, based on the names of the patterns, that designers can use to communicate with one another and with users. Another is to produce a literature in the field that documents experience in a compelling form.

The idea of patterns was first proposed by Christopher Alexander, a British architect who described patterns in architecture. His hope was to capture the ‘quality without a name’ that is recognizable in something when you know it is good.

But what is a pattern? One simple definition is that it is a solution to a problem in a context. That is, a pattern describes a problem, a solution, and where this solution has been found to work. Users of the pattern can therefore see not only the problem and the solution, but also when and where the solution has worked before, together with a rationale for why it worked. This helps designers decide whether or not to adopt it for themselves.

Patterns on their own are interesting, but not as powerful as a pattern language. A pattern language is a network of patterns that reference one another and work together to create a complete structure.
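
As a concrete (and entirely hypothetical) sketch of how such entries and their cross-references might be recorded in a repository like the one described below, a pattern can be represented as a small record whose 'related' field links it to other patterns in the language:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Pattern:
        """One entry in an interaction design pattern repository (illustrative fields only)."""
        name: str                 # the vocabulary term designers share
        problem: str              # the recurring design problem
        context: str              # where the solution has been found to work
        solution: str             # the core of the solution
        related: List[str] = field(default_factory=list)  # links to other patterns

    # Two hypothetical patterns linked into a tiny pattern 'language'.
    wizard = Pattern(
        name="Step-by-Step Wizard",
        problem="Users must complete a long, unfamiliar transaction",
        context="Infrequent tasks such as booking a complex trip",
        solution="Split the transaction into short, ordered steps with visible progress",
        related=["Progress Indicator"],
    )
    progress = Pattern(
        name="Progress Indicator",
        problem="Users lose track of where they are in a multi-step task",
        context="Any sequence of more than two or three steps",
        solution="Show completed, current, and remaining steps at all times",
    )

    language = {p.name: p for p in (wizard, progress)}
    print(language["Step-by-Step Wizard"].related)  # -> ['Progress Indicator']

The names and references shown here are invented for illustration; real collections structure their entries in their own ways, but the cross-references are what turn a set of patterns into a language.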

The application of patterns to interaction design has grown steadily since the late 1990s. For example, Jan Borchers (2001) describes three pattern languages for interactive music exhibits: one for music, one for HCI, and one for software engineering, all of which have arisen from his experience of designing music exhibits.

A more recent example comes from a case study at Yahoo!, where they wanted to communicate standards in order to increase consistency, predictability, and usability across their various product teams, and to increase productivity by preventing developers from ‘reinventing the wheel.’ They developed an interaction design pattern repository and a process for submitting and reviewing patterns. This pattern repository is growing.

11.6.3 Prototyping Physical Design

Moving forward from a card-based prototype, you can expand the cards to generate a more detailed software or paper-based prototype. To do this, you would take one or more cards and translate them into a sketch that included more detail about input and output technology, icon usage, error messages, menu structures, and any style conventions needed for consistency with other products. Also, issues such as interface layout, information display, attention, memory, etc., would be considered at this stage.

If the prototype remains paper-based then it may consist of a set of sketches representing the interface and its different states, or it may be more sophisticated using post-it notes, masking tape, acetate sheets, and various other paper products that allow the prototype to be ‘run’ by a person pretending to be the computer. In this set-up, the user sits in front of the paper representation of the interface and interacts with the interface as though it was real, pressing buttons, choosing options, etc., but instead of the system reacting automatically and changing the display according to the user's input, a human places the next screen, or modified message display, or menu drop-down list, etc., in front of the user. Snyder (2003) provides a lot of practical information for creating this kind of prototype and running these sessions. She suggests that a minimum of four people are required to evaluate this form of prototype: the ‘computer,’ a user, a facilitator who conducts the session, and one or more observers. Figure 11.15 illustrates this kind of session. Here you see the user (on the left) interacting with a prototype while the ‘computer’ (on the right) simulates the system's behavior. Further information about paper prototyping is in Case Study 11.2.

Figure 11.15 The ‘computer’ highlights a term the user has just clicked on

Box 11.8: Involving Users in Design: Participatory Design

The idea of participatory design emerged in Scandinavia in the late 1960s and early 1970s. There were two influences on this early work: the desire to be able to communicate information about complex systems, and the labor union movement pushing for workers to have democratic control over changes in their work. In the 1970s, new laws gave workers the right to have a say in how their working environment was changed, and such laws are still in force today. A fuller history of the movement is given in Ehn (1989) and Nygaard (1990).

Several projects at this time attempted to involve users in design and tried to focus on work rather than on simply producing a product. One of the most discussed is the UTOPIA project, a cooperative effort between the Nordic Graphics Workers Union and research institutions in Denmark and Sweden to design computer-based tools for text and image processing.

Involving users in design decisions is not simple, however. Cultural differences can become acute when users and designers are asked to work together to produce a specification for a system. Bødker et al. (1991) recount the following scene from the UTOPIA project:

Late one afternoon, when the designers were almost through with a long presentation of a proposal for the user interface of an integrated text and image processing system, one of the typographers commented on the lack of information about typographical code-structure. He didn't think that it was a big error (he was a polite person), but he just wanted to point out that the computer scientists who had prepared the proposal had forgotten to specify how the codes were to be presented on the screen. Would it read “<bf/” or perhaps just “” when the text that followed was to be printed in boldface?

In fact, the system being described by the designers was a WYSIWYG (what you see is what you get) system, and so text that needed to be in bold typeface would appear as bold (although most typographic systems at that time did require such codes). The typographer was unable to link his knowledge and experience with what he was being told. In response to this kind of problem, the project started using mockups. Simulating the working situation helped workers to draw on their experience and tacit knowledge, and designers to get a better understanding of the actual work typographers needed to do. An example mockup for a computer-controlled parcel-sorting system, from another project, is shown in Figure 11.16 (Ehn and Kyng, 1991). The headline of this newspaper clipping reads, “We did not understand the blueprints, so we made our own mockups.”

Figure 11.16 A newspaper cutting showing a parcel-sorting machine mockup

CASE STUDY 11.2: Paper Prototyping as a Core Tool in the Design of Cell Phone User Interfaces

Paper prototyping is increasingly being used by cell phone companies as a core part of their design process. This approach is replacing the old adage of ‘throwing the technology at users to see if it sticks.’ Intense competition in the cell phone industry demands a constant stream of new concepts, and mobile devices are feature-rich: they now include mega-pixel cameras, music players, media galleries, downloadable applications, and more. This requires designing complex interactions that are nevertheless clear to learn and use. Paper prototyping offers a rapid way to work through every detail of the interaction design across multiple applications.

Cell phone projects involve a range of disciplines—all with their own viewpoint on what the product should be. A typical project may include programmers, project managers, marketing experts, commercial managers, handset manufacturers, user experience specialists, visual designers, content managers, and network specialists. Paper prototyping provides a vehicle for everyone involved to be part of the design process—considering the design from multiple angles in a collaborative way.

The case study on the website describes the benefits of using paper prototyping from a designer's viewpoint while considering the bigger picture of its impact across the entire project lifecycle. It starts by explaining the problem space and how paper prototyping is used as an integrated part of UI design projects for European and US-based mobile operator companies. The case study uses project examples to illustrate the approach and explains step by step how the method can be used to include a range of stakeholders in the design process—regardless of their skill set or background. The case study also offers exercises so you can experiment with the approach yourself.

Figure 11.17 Prototype developed for cell phone user interface

As well as considering information relating to the product's usability and user experience goals, physical design prototypes draw on design guidelines for potential interface types, like those discussed in Chapter 6. For example, consider the travel organizer system. For this, we need a shareable interface, and from the design issues discussed in Chapter 6 we know that a horizontal surface encourages more collaboration than a vertical one; we also know that a large interface can lead to a breakdown in this collaboration. So, a horizontal (i.e. table) interface appears to be most appropriate at this stage. Scott et al. (2003) have studied these kinds of systems and generated a set of eight design guidelines tailored specifically for tabletop displays. They are summarized briefly in Table 11.2.

Some of these guidelines are particularly interesting to consider in the context of the travel organizer. For example, guideline two addresses the user's behavior of switching between different activities. In a drawing package, for example, the user is normally required to explicitly signal when moving from the activity of pointing, to writing text, to drawing a box, etc., yet if you observe people interacting without software there is usually a much more fluid transition between activities. In the travel organizer, users will be spending a lot of time choosing alternatives, e.g. selecting a destination or holiday and wanting to find out more about it. They will also need to enter text, e.g. to give their names or to initiate searches. One possible way to support this would be a touchscreen, so that people can point at what they want, combined with handwriting recognition for text entry. This would be simple and easy to use, as touchscreens only need a finger or other similar implement; the main disadvantage is that touchscreens are not very accurate.

  1. Support Interpersonal Interaction: technology designed to support group activities must support the fundamental mechanisms people use to collaborate. Some of these were discussed in Chapter 4.
  2. Support Fluid Transitions between Activities: technology should not impose excessive overhead on switching between activities such as writing, drawing, and manipulating artifacts.
  3. Support Transitions between Personal and Group Work: people are adept at rapidly moving between individual and group work when collaborating. With a traditional table, people often identify distinct areas for personal use, and this may help to support transition from individual to group working.
  4. Support Transitions between Tabletop Collaboration and External Work: the system should provide an easy way to integrate work done external to the tabletop environment.
  5. Support the Use of Physical Objects: items placed on a table include work-related items such as reports and design plans and personal items such as coffee cups. Tabletop systems should support these familiar practices.
  6. Provide Shared Access to Physical and Digital Objects: sharing an object supports collaboration more easily than if each participant has their own copy of the object. If people are at different positions around the table then orientation of an object can become an issue, e.g. it may be upside down for some participants.
  7. Consideration for the Appropriate Arrangements of Users: people gather around a table in different locations, in relation to each other and in relation to the items on the table. The kind of activity and the items on the table affect the most appropriate positions for individuals, e.g. text or pictures may be upside down for some attendees.
  8. Support Simultaneous User Actions: at a standard table, participants can interact with an item on the table simultaneously. Turn-taking in such a situation, while possible, feels cumbersome.

Table 11.2 Design guidelines for tabletop displays (Scott et al., 2003)

Guideline seven is relevant to table location and how it would be oriented within the travel agent's office. For example, if it was against the wall then users would only be able to access it from the front and sides. This has advantages as a lot of the display material will be textual or photographic, which makes most sense if you see it the right way up. This might also restrict the number of people who can gather comfortably around the device, depending on its size, and how friendly members of the group want to be with each other.

Activity 11.8

Consider guidelines three and five, and discuss their application within the context of the travel organizer. It might help you to think about how you and your peers interact around an ordinary table. Does the shape of the table make a difference? How does the interaction vary?

Comment

These were our thoughts, but yours may be different.

If you look at people collaborating on a task around a table, they often have their own personal table space as well as interacting with the wider shared table space. How might this be achieved for the travel organizer? What kind of personal space might be needed? One possibility would be to divide up the surface so that parts of the display can be partitioned off to make a separate screen area for each individual. Users might then be looking at other information away from the main focus of the group, for example exploring alternative holidays on their own. In much the same way that checking email on a laptop during a meeting distracts from the wider discussion, this could also be disruptive to the main goal of the group. However, providing an area for people to take notes, and print them off later, would be helpful.

Guideline five is about supporting the use of physical objects. But what physical objects might the users of the travel organizer have with them? Some of these are not relevant to the task, e.g. shopping bags, but they will need to be taken care of, i.e. stored somewhere. Other objects such as travel brochures, books, or maps, which are relevant to the task, need somewhere to be placed so that they can be seen and used as appropriate.

11.7 Tool Support

The tools available to support the activities described here range from development environments that support prototyping, through sketching tools and environments for icon and menu design, to widget libraries, and so on.

For example, researchers at Berkeley (Newman et al., 2003) have been developing a tool to support the informal sketching and prototyping of websites. This tool, now called DENIM, is a pen-based sketching tool that allows designers to sketch sites at different levels of refinement—overall site map, sequence of pages (like our card-based prototypes), and individual page. DENIM links these different levels of sketching together through zooming. Once created, pages can be linked to each other as with web links, and the prototype can then be run to show how the pages link together. All the while, the prototype maintains its ‘sketchy’ look. Figure 11.18 shows a screen from DENIM illustrating the individual pages and the links between them (green lines). The pages are at different stages of development, with some being blank while others have text, images, and links sketched onto them.

Figure 11.18 An example screen from the tool DENIM that supports the development and running of sketchy prototypes in early design
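
DENIM itself is a pen-based tool, but the underlying idea of sketched pages that are linked together and can then be ‘run’ can be illustrated with a minimal sketch of our own (this is not DENIM's implementation or API, just the page-link structure it lets designers draw):

    # A sketchy website prototype as a graph: page -> {link label: target page}.
    SITE = {
        "home":         {"about us": "about", "book now": "booking"},
        "about":        {"home": "home"},
        "booking":      {"confirm": "confirmation", "home": "home"},
        "confirmation": {},
    }

    def run_prototype(start="home"):
        """'Run' the prototype by letting the user follow links from page to page."""
        page = start
        while True:
            links = SITE[page]
            print(f"\n[{page}]  links: {', '.join(links) or '(none)'}")
            choice = input("follow link (or 'quit'): ").strip()
            if choice == "quit" or choice not in links:
                return
            page = links[choice]

    # run_prototype()   # uncomment to step through the linked pages interactively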

User interface software tools support developers by reducing the amount of code that has to be written in order to implement elements of the user interface such as windows and widgets, and to tie them all together. Box 11.9 summarizes the successes and failures of these tools through the twentieth century, and Myers et al. (2002) go on to suggest some changes for user interface tools of the future. Most desktop applications built today were constructed using user interface software tools, but new tools and techniques are needed for constructing other interfaces such as very small (e.g. mobile) or very large (e.g. tabletop) displays. For example, the standard menu designs and widgets built into user interface tools are inappropriate for very small or very large displays; using a stylus instead of a mouse and keyboard relies on different interaction techniques from standard GUI applications, as do multi-modality interfaces. The need to rapidly prototype and evaluate devices as well as applications will affect the kinds of tools developers need. Tools to cater for the higher sophistication of users, the aging population (and therefore reduced dexterity of users), and end-user programming activities will also be needed. Maybe even tools that enforce the design of usable interfaces will be developed and deployed.

Box 11.9: Successes and Failures for User Interface Tools

Looking at the history of user interface design tools, we can see some tools that have been successful and have withstood the test of time, and others that have fallen by the wayside. Understanding something of what works and what doesn't gives us lessons for the future of such tools.

Tools that have been successful are:

Window managers and toolkits. The idea of overlapping windows was first proposed by Alan Kay (Kay, 1969). These have been successful because they help to manage scarce resources: screen space and human perceptual and cognitive resources such as limited visual field and attention.

Event languages. These are designed to program actions based on external events: for example, when the left mouse button is depressed, move the cursor here. They have worked because they map well to the direct manipulation graphical user interface style.
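
As a rough present-day analogue (a sketch of our own using Python's standard Tkinter toolkit, not one of the event languages surveyed here), attaching an action to an external event looks like this:

    import tkinter as tk

    root = tk.Tk()
    canvas = tk.Canvas(root, width=300, height=200, bg="white")
    canvas.pack()

    def on_left_click(event):
        # The handler runs whenever the bound event occurs; event.x and event.y
        # give the mouse position at the moment of the click.
        canvas.create_oval(event.x - 3, event.y - 3, event.x + 3, event.y + 3, fill="black")

    # '<Button-1>' is the left mouse button; the action is attached to the event.
    canvas.bind("<Button-1>", on_left_click)
    root.mainloop()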

Interactive graphical tools or interface builders, such as Visual Basic. These allow the easy construction of user interfaces by placing interface elements on a screen using a mouse. They have been successful because they use a graphical means to design a graphical layout, i.e. you can build a graphical screen layout by grabbing and placing graphical elements without touching any program code.

Component systems are based on the idea of dynamically combining individual components that have been separately written and compiled. Sun's Java Beans uses this approach. One reason for its success is that it addresses the important software engineering goal of modularity.

Scripting languages have become popular because they support fast prototyping. Example scripting languages are Python and Perl.

Hypertext allows elements of a document to be linked in a multitude of ways, rather than the traditional linear layout. Most people are aware of hypertext links because of their use on the web.

Object-oriented programming. This programming approach is successful in interface development because the objects of an interface such as buttons and other widgets can so readily be cast as objects in the language.

Promising approaches that have not caught on

Technology has changed so fast that in some cases the tools to support the development of certain technologies have failed to keep up with the rapidly changing requirements. Good ideas that have fallen by the wayside include:

User interface management systems (UIMS). The idea behind UIMS was akin to the idea behind database management systems. Their purpose was to abstract away the details of interface implementation and allow developers to specify and manipulate interfaces at a higher level of abstraction. This separation turned out to be undesirable, as it is not always appropriate to understand and manipulate interface elements only at a high level of abstraction.

Formal language based tools. Many systems in the 1980s were based on formal language concepts such as state transition diagrams and parsers for context-free grammars. These failed to catch on because: the dialog-based interfaces for which these tools were particularly suited were overtaken by direct manipulation interfaces; they were very good at producing sequential interfaces, but not at expressing unordered sequences of action; and they were difficult to learn even for programmers.
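
As a sketch of our own (not any particular 1980s tool), the kind of sequential dialog these formalisms captured well can be written as a state transition table; note how an action arriving out of order simply has nowhere to go, which is exactly the weakness described above:

    # State transition table for a simple sequential dialog (e.g. a login dialog).
    # Each entry maps (current state, user action) -> next state.
    TRANSITIONS = {
        ("start", "enter_name"):           "name_given",
        ("name_given", "enter_password"):  "password_given",
        ("password_given", "submit"):      "done",
    }

    def run_dialog(actions, state="start"):
        for action in actions:
            nxt = TRANSITIONS.get((state, action))
            if nxt is None:
                # Unordered or unexpected actions cannot be expressed in this model.
                print(f"action '{action}' not allowed in state '{state}'")
                return state
            state = nxt
        return state

    print(run_dialog(["enter_name", "enter_password", "submit"]))  # -> done
    print(run_dialog(["enter_password", "enter_name", "submit"]))  # rejected at the first step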

Constraints. These tools were designed to maintain constraints, i.e. relationships among elements of an interface, such as that the scroll bar should always be on the right of the window, or that the color of one item should be the same as the color of other items. These systems have not caught on because they can be unpredictable: once constraints are set up, the tool must find a solution that maintains them, and since there may be more than one solution, the tool may find one the user didn't expect.
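
A toy sketch of our own of the basic mechanism (a one-way constraint, far simpler than the multi-way solvers these tools used): the relationship is re-established automatically whenever a value it depends on changes.

    # A toy one-way constraint: the scroll bar's x position is derived from the
    # window's width whenever the width changes, so the bar stays on the right.

    class Window:
        def __init__(self, width):
            self._width = width
            self._constraints = []          # functions re-run after every change

        def add_constraint(self, fn):
            self._constraints.append(fn)
            fn(self)                        # establish the relationship immediately

        @property
        def width(self):
            return self._width

        @width.setter
        def width(self, value):
            self._width = value
            for fn in self._constraints:    # maintain all constraints
                fn(self)

    win = Window(width=400)
    scrollbar = {"x": 0, "width": 16}

    def keep_scrollbar_on_right(window):
        scrollbar["x"] = window.width - scrollbar["width"]

    win.add_constraint(keep_scrollbar_on_right)
    win.width = 640
    print(scrollbar["x"])   # -> 624: updated automatically when the window was resized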

Model-based and automatic techniques. The aim of these systems was to let developers specify interfaces at a high level of abstraction and then have the interface generated automatically according to a predefined set of interpretation rules. These too have suffered from problems of unpredictability, since the generation of the interface relies on heuristics and rules whose combined effect is hard to predict.

(Myers et al., 2002)
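
To make the model-based idea above concrete, here is a sketch of our own (not from Myers et al., 2002): an abstract interface model is turned into a concrete form layout by interpretation rules, and changing a rule silently changes every generated interface, which hints at why developers found the results unpredictable.

    # An abstract interface model: field names and types, with no layout details.
    MODEL = [
        {"name": "destination", "type": "choice", "options": ["France", "Greece", "Italy"]},
        {"name": "travellers",  "type": "integer"},
        {"name": "notes",       "type": "text"},
    ]

    # Interpretation rules: each abstract type is mapped to a concrete widget.
    RULES = {
        "choice":  lambda f: f"drop-down menu ({', '.join(f['options'])})",
        "integer": lambda f: "spin box",
        "text":    lambda f: "multi-line text area",
    }

    def generate_form(model, rules):
        """Generate a (textual) form description from the abstract model."""
        lines = []
        for entry in model:
            widget = rules[entry["type"]](entry)
            lines.append(f"{entry['name'].capitalize():<12} -> {widget}")
        return "\n".join(lines)

    print(generate_form(MODEL, RULES))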

Assignment

This assignment continues work on the web-based ticket reservation system introduced at the end of Chapter 10. The work will be continued in the assignments for Chapters 14 and 15.

  (a) Based on the information gleaned from the assignment in Chapter 10, suggest three different conceptual models for this system. You should consider each of the aspects of a conceptual model discussed in this Chapter: interface metaphor, interaction style, interface style, activities it will support, functions, relationships between functions, and information requirements. Of these conceptual models, decide which one seems most appropriate and articulate the reasons why.
  (b) Produce the following prototypes for your chosen conceptual model:
    (i) Using the scenarios generated for the ticket reservation system, produce a storyboard for the task of booking a ticket for one of your conceptual models. Show it to two or three potential users and get some informal feedback.
    (ii) Now develop a card-based prototype from the use case for the task of booking a ticket, also incorporating feedback from part (i). Show this new prototype to a different set of potential users and get some more informal feedback.
    (iii) Using a software-based prototyping tool (e.g. Visual Basic or Director) or web authoring tool (e.g. Dreamweaver), develop a software-based prototype that incorporates all the feedback you've had so far. If you do not have experience in using any of these, create a few HTML web pages to represent the basic structure of your website.
  (c) Consider the web page's detailed design. Sketch out the application's main screen (homepage or data entry). Consider the screen layout, use of colors, navigation, audio, animation, etc. While doing this, use the three main questions introduced in Chapter 6 as guidance: Where am I? What's here? Where can I go? Write one or two sentences explaining your choices, and consider whether the choice is a usability consideration or a user experience consideration.

Summary

This Chapter has explored the activities of design, prototyping, and construction. Prototyping and scenarios are used throughout the design process to test out ideas for feasibility and user acceptance. We have looked at different forms of prototyping, and the activities have encouraged you to think about and apply prototyping techniques in the design process.

Key Points

  • Prototyping may be low fidelity (such as paper-based) or high fidelity (such as software-based).
  • High-fidelity prototypes may be vertical or horizontal.
  • Low-fidelity prototypes are quick and easy to produce and modify and are used in the early stages of design.
  • There are two aspects to the design activity: conceptual design and physical design.
  • Conceptual design develops an outline of what people can do with a product and what concepts are needed to understand how to interact with it, while physical design specifies the details of the design such as screen layout and menu structure.
  • We have explored three approaches to help you develop an initial conceptual model: interface metaphors, interaction styles, and interface styles.
  • An initial conceptual model may be expanded by considering which functions the product will perform (and which the user will perform), how those functions are related, and what information is required to support them.
  • Scenarios and prototypes can be used effectively in design to explore ideas.
  • There is a wide variety of support tools available to interaction designers.

Further Reading

SNYDER, C. (2003) Paper Prototyping. Morgan Kaufmann. This book provides some useful practical guidance for creating paper-based prototypes and ‘running’ them to get feedback from users and others.

CARROLL, J.M. (ed.) (1995) Scenario-based Design. John Wiley & Sons, Inc. This volume is an edited collection of papers arising from a three-day workshop on use-oriented design. The book contains a variety of papers including case studies of scenario use within design, and techniques for using them with object-oriented development, task models, and usability engineering. This is a good place to get a broad understanding of this form of development.

MYERS, B., HUDSON, S.E. and PAUSCH, R. (2002) Past, present and future of user interface software tools. Chapter 10 in Human–Computer Interaction in the New Millennium, J.M. Carroll (ed.). Addison-Wesley. This paper is an updated version of a previous paper in ACM Transactions on Computer–Human Interaction. It presents an interesting description of user interface tools and their future, expanding on the information given in Box 11.9.

WINOGRAD, T. (1996) Bringing Design to Software. Addison-Wesley and ACM Press. This book is a collection of articles all based on the theme of applying ideas from other design disciplines in software design. It has a good mixture of interviews, articles, and profiles of exemplary systems, projects, or techniques. Anyone interested in software design will find it inspiring.

TIDWELL, J. (2005) Designing Interfaces: Patterns for Effective Interaction Design. O'Reilly. This book is about designing interfaces in detail. It includes patterns for doing user research, information architecture, organizing widgets on an interface, and navigation, among other things.

INTERVIEW: with Karen Holtzblatt

Karen Holtzblatt is the originator of Contextual Inquiry, a process for gathering field data on product use, which was the precursor to Contextual Design, a complete method for the design of systems. Together with Hugh Beyer, the codeveloper of Contextual Design, Karen Holtzblatt is cofounder of InContext Enterprises, which specializes in process and product design consulting.

HS: What is Contextual Design?

KH: If you're going to build something that people want, there are basically three large steps that you have to go through. The first question that you ask as a company is, “What in the world matters to the customer or user such that if we make something, they're likely to buy it and use it?” So the question is “What matters?” Now once you identify what the issues are, every corporation will have the corporate response of how to change the human practice with technology to improve it. This is the ‘vision.’ Finally you have to work out the details and structure the vision into a product or system or website or handheld application…. In any design process, whether it's formalized or not, every company must do these things. They have to find out what matters, they have to vision their corporate response, and then they have to structure it into a system.

Contextual Design has team and individual activities that bring them through those processes in an orderly fashion so that you can deliver a reliable result that works for people. So you could say that Contextual Design is a set of techniques to be used in a customer-centered design process with design teams. It is also a set of practices that help people engage in creative and productive design thinking with user data and it helps them co-operate and design together.

HS: What are the steps of Contextual Design?

KH: In the ‘what matters’ piece, we go out into the field, we talk with people about their work or life practice as they do it: that's Contextual Inquiry and that's a one-on-one, two to two-and-a-half-hour field interview. Then we interpret that data with a cross-functional team, and we model the activities with five work models: the flow model showing communication and coordination; the cultural model showing influences between people, both from law and from geography; the physical model looking at the physical environment's role in organizing activity; the sequence model showing the steps of a task or business process; and the artifact model showing the things people use and how they are used. We also capture individual points on virtual post-it notes. After the interpretation session, every person we interviewed has a set of models and a set of post-its. Our next step is to consolidate all that data because you don't want to be designing from one person, from yourself, or from any one interview; we need to look at the structure of the practice itself. The consolidation step means that we end up with an affinity diagram and five consolidated models showing the issues across the target population.

At that point, we have modeled the work practice as it is, and we now have six communication devices that the team can dialog with. Each one of them offers a point of view from which to have the ‘what matters?’ conversation.

Now the team moves into that second activity, which is “what should our corporate response be?” We have a visioning process that is a very large group story-telling process to reinvent the practice given technological possibility and the core competency of the business. After that, we develop storyboards driven by the consolidated data and the vision. At this point we have not done a systems design; we have redesigned the practice. In Contextual Design we redesign the practice first, seeing the technology as it will appear within the work or life activity that will change.

To structure the system we start by rolling the storyboards into a User Environment Design (UED)—the structure of the system itself, independent of the user interface and the object model or implementation. The UED operates like a software floor plan that structures the movement inside the product. This is used to drive the user interface design, which is mocked up in paper and tested and iterated with the user. When it has stabilized, the UED, the storyboards, and the user interface drive development of the object model. Finally, we do visual design and mock the whole system up in an interactive environment and test that too. In this way we deal with interaction design, visual design and branding testing as well.

This is the whole process of Contextual Design, a full front-end design process. Because it is done with a cross-functional team, everyone in the organization knows what they're doing at each point: they know how to select the data, they know how to work in groups to get all these different steps done. So not only do you end up with a set of design thinking techniques that help you to design, you have an organizational process that helps the organization actually do it.

HS: How did the idea of Contextual Design emerge?

KH: Contextual Design started with the invention of Contextual Inquiry in a postdoctoral internship with John Whiteside at Digital in about 1987. At the time, usability testing and usability issues had been around maybe eight years or so and he was asking the question, “Usability identifies about 10 to 20% of the fixes at the tail end of the process to make the frosting on the cake look a little better to the user. What would it take to really figure out what people want in the product and system?” Contextual Inquiry was my answer to that question. After that, I took a job with Lou Cohen's Quality group at DEC, where I picked up the affinity diagram idea. Also at that time, Pelle Ehn and Kim Madsen were talking about Morten Kyng's ideas on paper mock-ups and I added paper prototyping with post-its to check out the design. Sandy Jones and I worked out the lower level details of Contextual Inquiry then Hugh and I hooked up. He's a software and object-oriented developer. We started working with teams and we noticed that they didn't know how to go from the data to the design and they didn't know how to structure the system to think about it. So then we invented more of the work models and the UED.

So the Contextual Design method came from looking at the software development practice; we evolved every single step of this process based on what people needed. The whole process was worked out with real people doing real design in real companies. So, where did it come from? It came from dialog with the problem.

HS: What are the main problems that organizations face when putting Contextual Design into practice?

KH: The question is, “What does organizational change look like?” because that's what we're talking about. The problem is that people want to change and they don't want to change. What we communicate to people is that organizational change is piecemeal. In order to own a process you have to say what's wrong with it, you have to change it a little bit, you have to say how whoever invented the process is wrong and how the people in the organization want to fix it, you have to make it fit with your organizational culture and issues. Most people will adopt the field-data gathering first and that's all they'll do and they'll tell me that they don't have time for anything else and they don't need anything else, and that's fine. And then they'll wake up one day and they'll say, “We have all this qualitative stuff and nobody's using it… maybe we should have a debriefing session.” So then they have debriefing sessions. Then they wake up later on and they say, “We don't have any way of structuring this information… models are a good idea.” And basically they reconstruct many aspects of the Contextual Design process as they hit the next problem—of course adding their own flavor and twists and things they learned along the way.

Now it's not quite that clean, but my point is that organizational adoption is about people making it their own and taking on the parts, changing them, doing what they can. You have to get somebody to do something and then once they do something it snowballs.

From an organization change perspective it is nice that Contextual Design generates paper and a design room as part of the process. The design room creates a talk event, and the talk event pulls everyone in because they want to know what you're doing. Then if they like the data, others feel left out, and because they feel left out they want to do a project and they want to have a room for themselves as well.

The biggest complaint about Contextual Design is that it takes too long. Some of that is about time, some of it is about thought. You have people who are used to coding and now have to think about field data. They're not used to that. So for that reason we wrote Rapid CD—to help people see how to pick and choose techniques from Contextual Design in short amounts of time.

HS: You have recently published a book on Rapid CD. What are the compromises that you made when integrating Contextual Design into a shortened product lifecycle?

KH: The most important thing to understand about Contextual Design, and in point of fact any user-centered design approach, is that time is completely dependent on scope. The second factor, which is actually secondary to scope, is the number of models that you use to represent the data.

Rapid CD creates guidelines to help you identify a small enough scope so that you can get user data into projects quickly. If you have a small, tight scope then it's going to take less time because you're going to interview fewer users, and you're going to have a less extensive design. Limiting your product or system to one to four job types means that your scope is going to be small, and then after the visioning process you can prioritize scope again. At that point you may end up prioritizing out roles and aspects of the vision that can be addressed later. The next phase of Rapid CD is working out the details of the design through paper prototyping and visual design and so on. This phase is again completely dependent on scope. If we already started with one to four roles, you're not going to have more than that so you can keep the number of screens to be developed small enough to manage quickly. The difference between this process and a normal Contextual Design process is that you are limiting scope and as a result you can do it with fewer people and in less time.

The second thing that we do in Rapid CD is we limit the number of models. One thing that we cut out is the UED. We eliminate the UED because we've limited the scope and if we're doing something simple like a webpage where you already have the idea of a webmap (which is effectively a UED), or you're doing the next version of a particular product which means you already have system structure, then you can go from having the data and the vision to mocking up some user interfaces. So we eliminate the UED without feeling that we're losing quality because we've reduced scope. One model we don't cut out at all is the affinity diagram because it's the best organization and structure for understanding the issues. Finally, depending on the problem and how Contextual Design is being used we may or may not have sequence models (task analysis) as part of Rapid CD.

To make it easy for people we characterized Rapid CD into three smaller processes: Lightning Fast, Lightning Fast Plus, and Focused Rapid CD. With Lightning Fast you use Contextual Design up to the end of the visioning process and then follow your normal process to work out the detailed design. It appears shorter because we're just using Contextual Design for the requirements gathering phase and to conceptualize the product or process.

In Lightning Fast Plus you do the visioning process and work up your ideas your way, then you mock up your interfaces, and take them out and test them with users. Any time you're not testing with the user you're at risk. So in Lightning Fast Plus we're skipping storyboarding, extensive modeling, and the UED.

In Focused Rapid CD you do sequence consolidation for a task analysis, vision a solution, then storyboarding, paper mock-ups, and testing. So Focused Rapid CD eliminates the UED and the rest of the models. Focused Rapid CD says if you have a task or a small process then you really need to do consolidated sequences, in other words, you need to do task analysis. In typical webpage design you don't need sequences unless you're doing transactions. If all you're doing is an information environment, you don't need sequences. But any time you need to do task analysis then the recommendation would be that you use Focused Rapid CD.

HS: What's the future direction of Contextual Design?

KH: Every process can always be tweaked. I think the primary parts of Contextual Design are there. There are interesting directions in which it can go, but there's only so much we can get our audience to buy.

I think that for us there are two key things that we're doing. One is we're starting to talk about design and what design is, so we can talk about the role of design and design thinking. And we are still helping train everyone who wants to learn. But the other thing we're finding is that sometimes the best way to support the client is to do the design work for them. So we have the design wing of the business where we put together the Contextual Design teams. What clients really like is our hybrid design process where we create a cross-company team and do the work together—they learn and we get the result.

A new challenge for Contextual Design is its role in Six Sigma process redefinition work. We believe that qualitative approaches to business process redesign work well with quantitative approaches like Six Sigma. Our initial work on this has shown that Contextual Design uncovers root causes and processes to address much, much faster than typical process mapping. And our visioning process helps redesign process and technology together—so that they inform each other instead of trying to deal with one at a time. We hope to have more stories about these successes in the future.

But for most organizations looking to adopt a customer-centered design process, the standard Contextual Design is enough for now; they have to get started. And because Contextual Design is a scaffolding, they can plug other processes into it, as we suggest with Rapid CD. Most organizations haven't got a backbone for customer-centered design, and Contextual Design is a good backbone to start with.
