Chapter 1. Introduction to 3D User Interfaces

On desktop computers, good user interface (UI) design is now almost universally recognized as a crucial part of the system development process. Almost every computing-related product touts itself as “easy to use,” “intuitive,” or “designed with your needs in mind.” For the most part, however, desktop user interfaces have used the same basic principles and designs for the past decade or more. With the advent of virtual reality (VR), augmented reality, ubiquitous and mobile computing, and other “off-the-desktop” technologies, three-dimensional (3D) UI design is now becoming a critical area for developers, students, and researchers to understand.

In this chapter, we answer the question, “What are 3D user interfaces?” and provide an introduction to the terminology used throughout the book. We describe the goals of 3D UI design and list some application areas for 3D user interfaces. Keeping these applications in mind as you progress through the book will help provide a concrete reference point for some of the more abstract concepts we discuss.

1.1 What Are 3D User Interfaces?

Modern computer users have become intimately familiar with a specific set of UI components, including input devices such as the mouse and touchscreen, output devices such as the monitor, tablet, or cell phone display, interaction techniques such as drag-and-drop and pinch to zoom, interface widgets such as pull-down menus, and UI metaphors such as the Windows, Icons, Menus, Pointer (WIMP) desktop metaphor (van Dam 1997).

These interface components, however, are often inappropriate for the nontraditional computing environments and applications under development today. For example, a virtual reality (VR) user wearing a fully immersive head-worn display (HWD) won’t be able to see the physical world, making the use of a keyboard impractical. An HWD in an augmented reality (AR) application may have limited resolution, forcing the redesign of text-intensive interface components such as dialog boxes. A VR application may allow a user to place an object anywhere in 3D space, with any orientation—a task for which a 2D mouse is inadequate.

Thus, these nontraditional systems need a new set of interface components: new devices, new techniques, new metaphors. Some of these new components may be simple refinements of existing components; others must be designed from scratch. Because many of these nontraditional environments work in real and/or virtual 3D space, we term these interfaces 3D user interfaces (a more precise definition is given in section 1.3).

In this book, we describe and analyze the components (devices, techniques, metaphors) that can be used to design 3D user interfaces. We also provide guidance in choosing the components for particular systems based on empirical evidence from published research, anecdotal evidence from colleagues, and personal experience.

1.2 Why 3D User Interfaces?

Why is the information in this book important? We had five main motivations for producing this book:

3D interaction is relevant to real-world tasks.

Interacting in three dimensions makes intuitive sense for a wide range of applications (see section 1.4) because of the characteristics of the tasks in these domains and their match with the characteristics of 3D environments. For example, virtual environments (VEs) can provide users with a sense of presence (the feeling of “being there”—replacing the physical environment with the virtual one), which makes sense for applications such as gaming, training, and simulation. If a user can interact using natural skills, then the application can take advantage of the fact that the user already has a great deal of knowledge about the world. Also, 3D UIs may be more direct or immediate; that is, there is a short “cognitive distance” between a user’s action and the system’s feedback that shows the result of that action. This can allow users to build up complex mental models of how a simulation works, for example.

The technology behind 3D UIs is becoming mature.

UIs for computer applications are becoming more diverse. Mice, keyboards, windows, menus, and icons—the standard parts of traditional WIMP interfaces—are still prevalent, but nontraditional devices and interface components are proliferating rapidly, and not just on mobile devices. These components include spatial input devices such as trackers, 3D pointing devices, and whole-hand devices that allow gesture-based input. Multisensory 3D output technologies, such as stereoscopic projection displays, high-resolution HWDs, spatial audio systems, and haptic devices, are also becoming more common, and some of them are now even considered consumer electronics products.

3D interaction is difficult.

Along with this technology, a variety of problems have been revealed. People often find it inherently difficult to understand 3D spaces and to perform actions in free space (Herndon et al. 1994). Although we live and act in a 3D world, the physical world contains many cues for understanding, as well as constraints and affordances for action, that cannot currently be represented accurately in a computer simulation. Therefore, great care must go into the design of UIs and interaction techniques for 3D applications. It is clear that simply adapting traditional WIMP interaction styles to 3D does not provide a complete solution to this problem. Rather, novel 3D UIs based on real-world interaction or other metaphors must be developed.

Current 3D UIs are either simple or lacking in usability.

There are already some applications of 3D UIs used by real people in the real world (e.g., entertainment, gaming, training, psychiatric treatment, and design review). Most of these applications, however, contain 3D interaction that is not very complex. For example, the interaction in current VR entertainment applications such as VR films is largely limited to rotating the viewpoint (i.e., turning the head). More complex 3D interfaces (for applications such as modeling and design, education, scientific visualization, and psychomotor training) are difficult to design and evaluate, often leading to a lack of usability or a low-quality user experience. While improved technology can help, better technology will not solve the problem—for example, over 40 years of AR technology research have not ensured that today’s AR systems are usable. Thus, a more thorough treatment of this subject is needed.

3D UI design is an area ripe for further work.

Finally, development of 3D UIs is one of the most exciting areas of research in human–computer interaction (HCI) today, providing a new frontier for innovation in the field. A wealth of basic and applied research and development opportunities are available for those with a solid background in 3D interaction.

It is crucial, then, for anyone involved in the design, implementation, or evaluation of nontraditional interactive systems to understand the issues discussed in this book.

1.3 Terminology

The technology sector loves acronyms and jargon, and precise terminology can make life easier as long as everyone agrees about the meaning of a particular term. This book is meant to be accessible to a broad audience, but we still find it useful to employ precise language. Here we present a glossary of some terms that we use throughout the book.

We begin with a set of general terms from the field of HCI that are used in later definitions:

human–computer interaction (HCI)

A field of study that examines all aspects of the interplay between people and interactive technologies. One way to think about HCI is as the process of communication between human users and computers (or interactive technologies in general). Users communicate actions, intents, goals, queries, and other such needs to computers. Computers, in turn, communicate to the user information about the world, about their internal state, about the responses to user queries, and so on. This communication may involve explicit dialog, or turn-taking, in which a user issues a command or query, the system responds, and so on, but in most modern computer systems, the communication is more implicit, free form, or even imperceptible (Hix and Hartson 1993).

user interface (UI)

The medium through which the communication between users and computers takes place. The UI translates a user’s actions and state (inputs) into a representation the computer can understand and act upon, and it translates the computer’s actions and state (outputs) into a representation the human user can understand and act upon (Hix and Hartson 1993).

input device

A physical (hardware) device allowing communication from the user to the computer.

degrees of freedom (DOF)

The number of independent dimensions of the motion of a body. DOF can be used to describe the input possibilities provided by input devices, the motion of a complex articulated object such as a human arm and hand, or the possible movements of a virtual object.
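
For example, the pose of a rigid object in 3D space has six degrees of freedom: three translational and three rotational. The Python sketch below makes this concrete; the class and field names are hypothetical illustrations, not drawn from any particular library.

    from dataclasses import dataclass

    @dataclass
    class Pose6DOF:
        # Translational DOF: position along each spatial axis (in meters)
        x: float
        y: float
        z: float
        # Rotational DOF: orientation as Euler angles (in radians)
        yaw: float    # rotation about the vertical axis
        pitch: float  # rotation about the left-right axis
        roll: float   # rotation about the front-back axis

    # A 6-DOF tracker reports all six values; a 2D mouse reports only two,
    # which is one reason a mouse cannot directly specify a 3D pose.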

output device

A physical device allowing communication from the computer to the user. Output devices are also called displays and can refer to the display of any sort of sensory information (i.e., not just visual images, but also sounds, touch, taste, smell, and even the sense of balance).

interaction technique

A method allowing a user to accomplish a task via the UI. An interaction technique includes both hardware (input/output devices) and software components. The interaction technique’s software component is responsible for mapping the information from the input device (or devices) into some action within the system and for mapping the output of the system to a form that can be displayed by the output device (or devices).
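
As an illustration, the sketch below shows the software component of one classic interaction technique, the simple virtual hand, which maps the pose reported by a 6-DOF hand tracker directly onto a virtual hand in the scene. This is a minimal sketch: VirtualHand and the tuple-based pose format are hypothetical stand-ins for whatever device API and scene representation a real system provides.

    from dataclasses import dataclass

    @dataclass
    class VirtualHand:
        position: tuple = (0.0, 0.0, 0.0)     # x, y, z in meters
        orientation: tuple = (0.0, 0.0, 0.0)  # yaw, pitch, roll in radians

    def simple_virtual_hand(tracker_pose, hand, scale=1.0):
        """Map a tracked 6-DOF hand pose (input) onto the virtual hand (output)."""
        x, y, z, yaw, pitch, roll = tracker_pose
        # One-to-one mapping; a scale greater than 1 would extend the
        # user's reach, as in stretching techniques such as Go-Go.
        hand.position = (x * scale, y * scale, z * scale)
        hand.orientation = (yaw, pitch, roll)

    # Each frame, the system polls the tracker and applies the mapping:
    hand = VirtualHand()
    simple_virtual_hand((0.1, 1.2, -0.4, 0.0, 0.3, 0.0), hand)

The hardware component (the tracker itself) and this mapping together constitute the technique; changing either one yields a different technique.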

usability

The characteristics of an artifact (usually a device, interaction technique, or complete UI) that affect the user’s use of the artifact. There are many aspects of usability, including ease of use, user task performance, user comfort, and system performance (Hix and Hartson 1993).

user experience (UX)

A broader concept encompassing a user’s entire relationship with an artifact, including not only usability but also usefulness and emotional factors such as fun, joy, pride of ownership, and perceived elegance of design (Hartson and Pyla 2012).

UX evaluation

The process of assessing or measuring some aspects of the user experience of a particular artifact.

Using this HCI terminology, we define 3D interaction and 3D user interface:

3D interaction

Human–computer interaction in which the user’s tasks are performed directly in a real or virtual 3D spatial context. Interactive systems that display 3D graphics do not necessarily involve 3D interaction; for example, if a user tours a model of a building on her desktop computer by choosing viewpoints from a traditional menu, no 3D interaction has taken place. On the other hand, 3D interaction does not necessarily mean that 3D input devices are used; for example, in the same application, if the user clicks on a target object to navigate to that object, then the 2D mouse input has been directly translated into a 3D virtual location; we consider this to be a form of 3D interaction. In this book, however, we focus primarily on 3D interaction that involves real 3D spatial input such as hand gestures or physical walking. Desktop 3D interaction requires different interaction techniques and design principles. We cover some desktop and multi-touch 3D interaction techniques in Chapters 7–9 but emphasize interaction with 3D spatial input throughout the book.
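
To make the mouse example concrete, the sketch below shows the usual first step in translating a 2D click into a 3D location: unprojecting the click into a ray. It is a minimal sketch assuming a pinhole camera looking down the negative z-axis; a real system would transform the ray into world coordinates and intersect it with the scene to identify the clicked object.

    import math

    def click_to_ray(px, py, width, height, fov_y_deg):
        """Convert a 2D mouse click (in pixels) into a 3D ray in camera space."""
        # Normalized device coordinates in [-1, 1], with y pointing up
        ndc_x = 2.0 * px / width - 1.0
        ndc_y = 1.0 - 2.0 * py / height
        # Scale by the size of the view frustum at unit depth
        tan_half_fov = math.tan(math.radians(fov_y_deg) / 2.0)
        aspect = width / height
        direction = (ndc_x * aspect * tan_half_fov,
                     ndc_y * tan_half_fov,
                     -1.0)
        origin = (0.0, 0.0, 0.0)  # the camera's position in camera space
        return origin, direction

    # Example: a click at the center of a 1920x1080 view yields a ray
    # pointing straight ahead, (0, 0, -1).
    origin, direction = click_to_ray(960, 540, 1920, 1080, fov_y_deg=60.0)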

3D user interface (3D UI)

A UI that involves 3D interaction.

Finally, we define some technological areas in which 3D UIs are used:

virtual environment (VE)

A synthetic, spatial (usually 3D) world seen from a first-person point of view. The view in a virtual environment is under the real-time control of the user.

virtual reality (VR)

An approach that uses displays, tracking, and other technologies to immerse the user in a VE. Note that in practice VE and VR are often used almost interchangeably.

augmented reality (AR)

An approach that uses displays, tracking, and other technologies to enhance (augment) the user’s view of a real-world environment with synthetic objects or information.

mixed reality (MR)

A set of approaches, including both VR and AR, in which real and virtual information is mixed in different combinations. A system’s position on the mixed reality continuum indicates the mixture of virtuality and reality in the system (with the extremes being purely virtual and purely real). Mixed reality systems may move along this continuum as the user interacts with them (Milgram and Kishino 1994).

ubiquitous computing (UbiComp)

The notion that computing devices and infrastructure (and access to them) may be mobile or scattered throughout the real environment so that users have “anytime, anyplace” access to computational power (Weiser 1991).

telerobotics

The ability to control a robot that is geographically separated from the user, thus requiring remote operation. Robots often have many degrees of freedom and operate in the physical world, making 3D UIs applicable to their operation.

1.4 Application Areas

3D interaction and the principles we discuss in this book can be employed in a wide variety of application domains. Keep these application examples in mind as you read through the various sections of the book. We discuss this list of application areas in greater detail in Chapter 2. 3D UI application areas include:

Design and prototyping

Heritage and tourism

Gaming and entertainment

Simulation and training

Education

Art

Visual data analysis

Architecture and construction

Medicine and psychiatry

Collaboration

Robotics

1.5 Conclusion

In this chapter, we introduced 3D UIs—their importance, terminology, and applications. In Chapter 2, “3D User Interfaces: History and Roadmap,” we step back to look at the bigger picture—the history of and context for 3D UIs.
