CHAPTER 10

Setting the Target

Projects have a purpose, a goal, a criterion for success. The goal isn't always clear, and it's not safe to assume what it is or that key stakeholders share the same goal for the project. Even assuming that the goal is about performance outcomes is risky.

It's important for overall success—and all the decisions that have to be made to realize success—to know what the target actually is and how success will be measured. Successive approximation, pragmatic as always, helps flesh out what people are thinking, define the goal, and determine the most efficient way of reaching the target.

GOALS

Goals often evolve and sometimes even transform into something entirely different from what was initially set. Up until this point, the process has encouraged creativity over either practicality or appropriateness. Anything goes in brainstorming. The team hasn't had to stifle creativity, worry about budget, or consider much about the limitations of delivery. This may seem backward, as traditional processes and even common sense would suggest that setting the goal is the very first thing to do.

Ready, fire, aim. Even with successive approximation, some discussion about goals is likely and appropriate from the very first cycle of the Savvy Start, but no one was appointed judge to determine whether each idea was fully in line with both constraints and the project target. This looseness is deliberate, because experimenting with alternative solutions can actually be one of the best ways of determining what the goal should be. Getting the goal right is, of course, extremely important, but contrary to intuition, finalizing the goal shouldn't be the first thing done; neither should it be the last.

Goal setting in SAM is, like the whole of the process, iterative and might best be viewed as piggybacking on instructional design. As the team excitedly explores ideas for learning experiences, some ideas will expand thoughts of what skills might be targeted while, conversely, some initially appealing design ideas may spend too much time on insignificant outcomes. It's unfortunate when goals are set too soon and opportunities, perhaps the most valuable opportunities, are bypassed through strict adherence to initial goals. On the other hand, it is important to establish goals before additional design and major development work begin.

After several iterations of design, prototyping, and evaluation, it is time to consider project constraints, requirements, and goals in earnest. A tested and effective way of doing this is by writing clear and concise outcome objectives, assigning treatment methods to each objective, and deciding how to assess success.

INSTRUCTIONAL OBJECTIVES

Objectives are needed for estimating the amount of work to be done, preparing cost estimates, listing needed resources, setting out a project schedule, and, in short, doing many of the project planning tasks ahead. Instructional objectives provide a helpful foundation for design and development; however, not everyone is familiar with writing formal objectives and this may cause them to be put off or feel unable to contribute. In actuality, a review of constructed prototypes may well reveal the knowledge and behavioral skills the team has found to be of primary importance. With just a little guidance, the team can distill what they've been saying into objectives. Writing the objectives after some prototyping has been completed is a more interesting and engaging task, helps the team determine if they've been on an appropriate track, and provides support and guidance for the continued iterations.

Although this book is about process and not design, preparation of objectives, the objectives x treatments matrix, and in-line assessments are important steps in the process. We divert to design topics a bit here to talk about these essential activities primarily as a clarification of the process.

Objectives clarify exactly what should be taught and learned. Good objectives leave very little ambiguity, although no matter how tightly they are constructed, they often allow more varied interpretation than intended. Here is an example objective and some reminders of how to write them.

Sample behavioral objective:

Within 30 minutes of arriving on site and without any errors or omissions, pool service personnel will perform all routine cleaning items, test and treat water, replace all owner furniture and equipment as found, place all service equipment and unused supplies back in the truck, and leave a properly written, dated, and signed service ticket at the front door as described in the company's best practices guidebook.

Complete behavioral objectives have three components (reference the example objective above):

  1. A description of observable behavior (perform routine cleaning, etc.)
  2. The conditions under which the learner must be able to perform successfully (within 30 minutes of arriving on site, on site with a service truck, etc.)
  3. Criterion of successful behavior (without any errors or omissions)

Verbs such as “think,” “understand,” and “know” are not observable (or measurable) behaviors, whereas “list,” “identify,” “complete,” and many others are. Even if the intent is to stimulate cognitive functions, such as appreciating, understanding, or feeling, it's important for designers to express observable manifestations of such mental behavior so that we can assess whether or not the instruction is having the needed impact.
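To make these three components concrete, here is a minimal sketch in Python (purely illustrative and not part of SAM; the class and method names are assumptions, and the field text is drawn from the sample pool-service objective):

```python
from dataclasses import dataclass

@dataclass
class BehavioralObjective:
    behavior: str    # 1. observable behavior (e.g., "perform all routine cleaning items")
    conditions: str  # 2. conditions of performance (e.g., "within 30 minutes of arriving")
    criterion: str   # 3. criterion of success (e.g., "without any errors or omissions")

    def uses_observable_verbs(self, vague=("think", "understand", "know")):
        # Verbs like "think" cannot be observed or measured directly.
        return not any(verb in self.behavior.lower() for verb in vague)

pool_service = BehavioralObjective(
    behavior="perform all routine cleaning items, test and treat water",
    conditions="within 30 minutes of arriving on site, with a service truck",
    criterion="without any errors or omissions",
)
```

A checker like `uses_observable_verbs` only flags the most obviously vague verbs; judgment about whether a behavior is truly observable still rests with the designer.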

Objectives x Treatments Matrix

An important use of objectives is to determine how many types of instructional events or “treatments” will need to be designed for the entire application. All objectives need design treatments—one or more—but not all objectives need to have unique pedagogical designs. Preparing a matrix of objectives and instructional designs gives the team insight into what design work has been accomplished and what remains to be done.

Typically, the team writes objectives in the first column of a table. When instructional designs are sketched, each one is lettered and briefly described at the top of columns. As the team takes up successive objectives, they look to see whether one or more existing designs would constitute good and appropriate learning events for them. If so, they check that cell of the growing matrix to note use of the existing design and move on to other objectives (see Table 10-1).

Table 10-1. Objectives x Treatments Matrix Summary


Building the matrix will be instructive to the team, and it will help them see how objectives are used to estimate the amount of effort it will take to complete the entire project. At this point in the process, however, it's not necessary to write the full complement of objectives, even if you could. Just write a few so that you can select a couple for prototyping and demonstrate the process that will be continued later.
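The check-off bookkeeping described above might be sketched as follows. This is a hypothetical illustration, not part of SAM itself; `assign_treatment` and `fits` are invented names, and the fit judgment remains a team decision, passed in here as a function:

```python
# Lettered designs the team has sketched so far (labels are illustrative).
treatments = {
    "A": "Target identification and classification",
    "B": "Sequential task completion",
}

# The growing matrix: objective text -> set of treatment letters checked off.
matrix = {}

def assign_treatment(objective, matrix, treatments, fits):
    """Check off any existing designs that fit the objective; otherwise
    report that a new treatment (and a new prototype) must be designed."""
    reusable = {letter for letter in treatments if fits(objective, letter)}
    matrix.setdefault(objective, set()).update(reusable)
    if reusable:
        return "reused: " + ", ".join(sorted(reusable))
    return "new treatment needed"

print(assign_treatment("Test pool water", matrix, treatments,
                       lambda obj, letter: letter == "B"))  # prints: reused: B
```

Each objective that finds no fitting design signals a new column in the matrix, and therefore new design and prototyping work.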

 

Table 10-2. Example Objectives x Treatments Matrix

 

Each row lists the observable behavior, then the treatment's context and activity.

1. Clear pool area for safety and proper cleaning.
   Context: On site, at a variety of pools: Some situations show hazards (such as children playing nearby, gardeners at work, or party preparations with electrical devices near water) and typical obstacles to cleaning (chairs, tables, glasses, etc.).
   Activity: (A) Target identification and classification: Find all items of concern and match with appropriate remedy.

2. Test pool water.
   Context: Provided: a basic residential sanitizer residual and pH testing kit with color-coded directions for testing DPD chlorine (0.5-5 ppm), DPD bromine (1-10 ppm), and pH (6.8-8.2). Pools have normal and abnormal water conditions.
   Activity: (B) Sequential task completion: Procedural activity simulation for common errors in processes with delayed feedback. Learners must perform test steps in proper order and report correct results.

3. Examine water filter. Clean or replace.
   Context: Different types of filters (sand, D.E., or cartridge) and filter conditions (normal, damaged, spent).
   Activities: (A) Target identification and classification: Find all items of concern and match with appropriate remedy. (B) Remedy: Sequential task completion.

4. Clean pool.
   Context: Pools with varying states of debris, tile calcification, and water clarity.
   Activities: (A) Target identification and classification. (B) Sequential task completion.

5. Clean site.
   Context: Pool areas previously shown for objective #1 after cleaning, with both service equipment and customer property in likely places.
   Activities: (A) Target identification. (C) Drag-and-drop positioning: Place objects in original positions, safe positions, or truck as appropriate.

6. Complete service ticket.
   Context: Completed forms corresponding to performed service but with errors.
   Activities: (A) Target identification. (D) Text entry correction.

 

In the example, objectives 1, 3, 4, 5, and 6 can all be reached or assisted with the help of target identification activities. Sequential task completion is also used for multiple objectives (2, 3, and 4). The objectives x treatments matrix helps designers employ the efficiency of reusing an instructional model. It won't be necessary to prototype repeated uses of instructional treatments (and developers will be able to reuse models to cover spans of content). Only one prototype is needed for each type of objective and activity, not for each objective.
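Reduced to data, the example matrix shows this economy directly. The encoding below is purely illustrative; it maps the six objectives of Table 10-2 to their treatment letters:

```python
# Table 10-2 reduced to data: objective number -> treatment letters.
matrix = {
    1: {"A"},
    2: {"B"},
    3: {"A", "B"},
    4: {"A", "B"},
    5: {"A", "C"},
    6: {"A", "D"},
}

# One prototype per treatment type, not per objective.
prototypes_needed = sorted(set().union(*matrix.values()))
print(len(matrix), "objectives,", len(prototypes_needed), "prototypes:", prototypes_needed)
# prints: 6 objectives, 4 prototypes: ['A', 'B', 'C', 'D']
```

Six objectives are covered by four prototyped treatments, and the ratio typically improves further as a project grows.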

Working Backward

The natural instinct is to work from the simple and basic to the complex and advanced skills. While SAM supports this traditional sequence as well as any other, some find persuasive benefits from working the other way; that is, from the final or “terminal” objectives backward to the foundational or beginning objectives. I personally find working backward helps me focus on what's most important and use my resources most effectively.

To work backward, the question to be asked repeatedly is this: What are the last skills we want learners performing successfully before…? In the first iteration, the question is, What are the last skills we want learners performing before we award them their completion certificate and release them to the world? This question is answered easily. We want them performing the tasks the instruction was designed to teach. We want them successfully performing in contexts and situations that are identical to the contexts and situations they will actually face, or as close to them as we can possibly simulate.

Filling in the objectives x treatments matrix, we would begin with objectives that define the ultimate performance targeted. We would then design the learning experiences that would support this performance and remediate when learners failed to reach the criterion. Then we would ask the question again, What are the last skills we want learners performing before we assign them this last instructional event? Our answers would create the learning objectives that, in delivery sequence, would precede the objective currently in the matrix. We would look to see if the instructional treatments existing for any objective would be applicable to these new objectives, with adjustments to context and content, of course. If the treatments appear satisfactory, we simply check off the reuse of them and continue iterating the performance question. If not, then we would design the treatments needed and press onward—or backward! Eventually, the team will back into initial levels of performance abilities that all targeted learners are expected to have. And then stop.
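The iteration above can be sketched as a simple loop. Everything here is hypothetical scaffolding: `last_skills_before` stands in for the team's repeated question, and `is_entry_level` for the decision to stop.

```python
def work_backward(terminal_objectives, last_skills_before, is_entry_level):
    """Build the delivery-ordered objective list by repeatedly asking,
    'What are the last skills we want learners performing before...?',
    starting from the terminal objectives and stopping at entry skills."""
    ordered = list(terminal_objectives)
    frontier = list(terminal_objectives)
    while frontier:
        prerequisites = []
        for objective in frontier:
            for skill in last_skills_before(objective):
                if (not is_entry_level(skill) and skill not in ordered
                        and skill not in prerequisites):
                    prerequisites.append(skill)
        ordered = prerequisites + ordered  # earlier in delivery sequence
        frontier = prerequisites
    return ordered

# Illustrative dependency map loosely based on the pool-service example.
precedes = {
    "complete service ticket": ["clean site"],
    "clean site": ["clean pool"],
    "clean pool": [],
}
print(work_backward(["complete service ticket"],
                    lambda obj: precedes.get(obj, []),
                    lambda skill: False))
# prints: ['clean pool', 'clean site', 'complete service ticket']
```

Note that although the questioning runs backward, the resulting list is in forward delivery order, ready for the matrix.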

PRE-EXISTING CONTENT

Working backward from desired performance outcomes is a very successful and highly recommended practice. Defining performance objectives and building the objectives x treatments matrix from those outcomes clarifies what content is and isn't needed in the product.

Working from existing content is, unfortunately, a more common practice—and a practice that leads in the wrong direction. It's true that content is expensive and that time and money can be saved when existing content can be reused. But it is very easy to underestimate the amount of time, effort, and cost required to develop good course content, not to mention the special talents needed to do it well. Poor content will ruin the product and prevent a successful learning experience.

Some designers consider a clear presentation of content to be a sufficient instructional strategy—an effective means of achieving the learning and performance outcomes. However, this approach works only in those situations where learners are highly motivated and where all that's necessary to perform is knowing how to do it. To learn more complex tasks—usually the more valuable tasks—people need to put new information to work within a meaningful context. And they need to practice. Maximum impact occurs when the volume of content is reduced and the focus is placed on a few key behaviors.

The objectives x treatments matrix should drive content requirements, but when there is a lot of pre-existing content, there tends to be a feeling that it needs to be used—pretty much all of it—so as not to be wasteful. Instructional designers face this situation frequently and know that large amounts of content must be whittled down into useful pieces. It's often more expedient to start from scratch.

Content is included to achieve objectives and for no other purpose. Extraneous content consumes too much time, even if just for presentation, and leaves little for learning and practice.

There's a second prevalent misunderstanding about pre-existing content that can be even more problematic. If the existing content were developed for one mode of delivery, such as a book, and the product being developed were for an instructor-led course, the existing content would be inadequate. It's not just that the format would be wrong. It would be necessary to rewrite what exists and add more content so that instructors could answer learner questions, perform demonstrations, and so on.

Figure 10-1 provides a visual representation of the transition of pre-existing information into content for an e-learning application.

Figure 10-1. Usefulness of Content Developed for Other Instructional Modes


Notice that the project starts with a large amount of content that comes from existing manuals, presentation slides, documents, and other sources. This material is helpful in early project work and when only presentations intended for passive learning are being developed. But active learning events require context, challenge, activities, and feedback. Fragments of these content components may be found in existing materials, but development work often requires filling in the many gaps between what's available and what's needed. Content creation work may range from creating missing pieces to truly starting from scratch. The more interactive and the more individualized the product is to be, the more content will be needed. Again, it's a common mistake to underestimate the work needed to create content in the form needed, especially when pre-existing content is available.

Developing content for prototyped designs, as will take place in the iterative development phase, takes a substantial effort and should be given the necessary time and talent. The organization's desire to use much, if not all, of the existing content should be challenged. The learners will be served best with a few focused key points that will support their performance change efforts.

ASSESSMENT

Sponsors may show little interest in assessment except for post-tests, but learning events need interleaved assessments as a means of adapting the instruction to the learner, providing feedback, and determining the effectiveness of the instruction.

Macro Assessment

People think of post-tests when they think of assessment, but there's far more involved. Classical post-tests generally have little value for any purpose, although it is traditional to award certificates and issue grades based largely on post-test scores. Unfortunately, post-test measures give learners an incentive to rely only on short-term memory in preparation for the test, resulting in rapid extinction of learned behaviors after the test. On-the-job practice can counter this loss, but delayed measures based on post-learning performance can be much more meaningful. Moreover, when anticipated by learners, delayed measures encourage sufficient practice to commit learning to long-term memory.

In any case, the design team needs the clear targets that assessments represent, and learners need the goals that assessments define. To prevent “teaching to the test”—a situation in which learners are prepared only to perform well on post-tests and not in real performance situations—a separate team working in parallel may define assessment exercises. They work from the same set of objectives as the instructional design team, but the assessments are not shared with the instructional designers, so content developers will focus on preparing learners to fully meet the requirements of the objectives—not just of the tests. This is a rather extreme tactic and not necessary in many situations, but preparing criterion tasks that demonstrate competency and could be used for post-instruction assessment is a smart thing to do.

Learning projects exist to achieve behavioral change, and the most vital, ultimate assessment is whether they do. The most effective assessments are not post-tests, but rather on-the-job or in-real-life behaviors. In academic settings, teachers should be more satisfied and proud if a student can participate in an intelligent conversation on a subject of instruction with learned individuals than if he or she can merely provide correct answers to test questions. In the business setting, trainers should be more satisfied when an employee corrects a mechanical malfunction efficiently than when he or she correctly answers test questions about how it should be done.

Micro Assessment

Each interaction with a learner is an opportunity for assessment. In a classroom context or synchronous remote learning, individual assessment interactions carry with them the potential pain of embarrassment. While some would argue that these challenges are motivating and learners should be able to withstand the pressures of public evaluation, it's often better to use more private forms of assessment.

Perhaps the biggest advantage of e-learning is that it can assess learner abilities and readiness at every point of private interaction. Each learner has the opportunity to work with the full attention of the mentor and make mistakes without jeers from peers. Interestingly, the four cornerstones of e-learning interaction (context, challenge, activity, and feedback) are also the essential components of sound assessments. Separate design and development efforts are not needed to construct in-line assessments, because instruction and assessment are designed and built together. Each interaction can be both a teaching and an assessment event.

Ideally, interactions simultaneously give learners the cognitive stimulation they need to learn and instructors (or the instructional software) the information needed to guide the learner. By knowing where an individual learner's strengths and weaknesses lie, instructors can determine what the learner needs to learn and what they are ready to learn. Because instructors are usually working with many learners simultaneously in a classroom, they aren't able to craft individual learning plans that adapt quickly. Indeed, their instructional plans are often set before the term begins, and learners must keep up as best they can.

 

Individualization is a term for adapting learning experiences to people. It requires knowledge of those people. Superficial but welcome individualization includes:

  • Using the learner's name in sentences: Here are your options, Susan.
  • Recalling information: Welcome back. You haven't missed a day since we started.
  • Reporting progress: You have only two modules left to complete.

 

More significant forms of individualization are tied to assessments, such as:

  • Comparison to previous personal performance: Your best performance yet, Eric! Excellent.
  • Comparison to others or to averages: You required less time to solve this problem than nearly everyone else.
  • Branching based on performance need (e.g. remediation): Let's back up a step and review another example.
  • Recognition of learning styles and abilities: You're doing much better after viewing videos than after reading documents, so let's jump right to a video first.
  • Honoring preferences: Since you've indicated you like challenges, let's see if you can solve this problem before we study the principles involved.

To accomplish significant levels of individualization, it's important to design interactions that yield the prerequisite data. The SAM team can benefit from considering the value of individualization at the outset and then including the interactions and assessments necessary to provide it.
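A sketch of how such rules might consume assessment data follows; the function, thresholds, and message wording are all invented for illustration:

```python
def individualize(name, score, best_score, cohort_average):
    """Return a branching decision and feedback messages based on
    assessment data collected through interactions. Thresholds and
    message wording are hypothetical."""
    messages = []
    if score > best_score:
        # Comparison to previous personal performance.
        messages.append(f"Your best performance yet, {name}!")
    if score > cohort_average:
        # Comparison to others or to averages.
        messages.append("You did better than the average learner.")
    # Branching based on performance need (e.g., remediation).
    branch = "advance" if score >= 80 else "remediate"
    if branch == "remediate":
        messages.append("Let's back up a step and review another example.")
    return branch, messages
```

The point is that none of these messages or branches is possible unless earlier interactions were designed to capture the underlying scores.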

SUMMARY

Although we want to be creative and design learning experiences that are packed with energy and engagement, they have a purpose. Learning experiences are meant to build skills and change performance. It's therefore critical to the success of the creative, iterative process to set goals. Writing objectives helps articulate what the project intends to achieve and helps focus brainstorming efforts on appropriate needs.

The objectives x treatments matrix is a simple mechanism to help minimize the number of unique treatments that are designed, which in turn minimizes development time and costs.

Assessments, built of the same components as interactive learning experiences (context, challenge, activity, and feedback), are necessary within instructional interactions to provide meaningful individualization.
