CHAPTER 15

Iterative Evaluation

Reviews of design prototypes have focused on validating whether proposed learning events appropriately address the performance objectives, whether all content areas are covered, and whether the interface is appropriate and effective. Those reviews were largely unstructured and general because the criteria themselves were fluid and open to reconsideration as necessary.

Evaluating the design proof is the first structured review of the emerging product. The structure helps keep focus on the issues relevant at this stage. For example, the design proof review should not focus on the completeness of content elements or interactive details, but rather on whether the course structure makes sense, whether challenges and activities are effective, and whether the key content areas are addressed.

The list of evaluation questions in Figure 15-1 offers guidelines for reviewing an e-learning design proof.

Figure 15-1. Design Proof Evaluation Checklist for e-Learning Projects


MANAGING REVIEWS

The value of an iterative process comes from the opportunity to repeatedly review and revise, frequently making course corrections and doing so early enough that the project never departs significantly from the best path. Because the best path for each project may be unique, the critical steps are working up to a point, pausing, reviewing, and making suggestions for improvement before moving on. Whether reviews are completed by each team member individually or together in a meeting, collecting the comments and ideas is a requisite to creating the plan of action for the next iteration.

Effective review requires an understanding of process stages, the purpose of each review, and how to review the associated elements. Reviews require an appreciation for what should be examined and why. They should build on one another in a progression toward the final review.

Every learning product has many elements (images, activities, projects, interaction formats, dialogue, tests, feedback, and so on) that require repeated review as they move from concept to completion. Below are some of the attributes and questions pertinent to each.

Appropriateness

Does the project meet learner needs as identified in the project plan and the Savvy Start? Are language and media suitable and proper for the audience, culture, and values of the organization?

Correctness

Are media elements accurate and placed properly? Is all text spelled correctly? Is grammar correct?

Functionality

Are instructor notes clear and sufficient? Do all links, navigation, animations, and learner response evaluations work properly? Are performance data captured correctly? Do sessions terminate properly, and are bookmarks functional?
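
Many of these functional checks can be scripted rather than verified only by hand. Below is a minimal sketch, assuming a SCORM 1.2 course delivered through an LMS; the ScormAPI interface and the findAPI helper are illustrative placeholders, while the LMS* call names and the cmi.core.lesson_location data element are standard SCORM 1.2.

```typescript
// Sketch only: verifies that bookmarking works and the session terminates
// cleanly against a SCORM 1.2 LMS. The ScormAPI interface and findAPI()
// helper are illustrative; the LMS* calls are standard SCORM 1.2.
interface ScormAPI {
  LMSInitialize(arg: string): string;
  LMSGetValue(element: string): string;
  LMSSetValue(element: string, value: string): string;
  LMSCommit(arg: string): string;
  LMSFinish(arg: string): string;
}

// Hypothetical helper: locate the API object the LMS exposes (often on a parent frame).
declare function findAPI(win: Window): ScormAPI | null;

export function checkBookmarkRoundTrip(win: Window, pageId: string): boolean {
  const api = findAPI(win);
  if (!api || api.LMSInitialize("") !== "true") return false;

  // Save the learner's location and commit it to the LMS.
  api.LMSSetValue("cmi.core.lesson_location", pageId);
  api.LMSCommit("");

  // Read the bookmark back and terminate the session cleanly.
  const restored = api.LMSGetValue("cmi.core.lesson_location");
  const finished = api.LMSFinish("") === "true";

  return restored === pageId && finished;
}
```

A similar probe of cmi.core.score.raw or cmi.core.lesson_status can help confirm that performance data are captured correctly.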

Usability

Are printed materials legible and well organized? Are interactions and controls intuitive and free from ambiguity? Are perceived effort and response time commensurate with the value of each operation? Are all “destructive” events safeguarded from accidental misuse?

Design Consistency

Do all elements adhere to client- and team-approved design requirements? Are terms, icons, and controls used consistently throughout?

Psychological Impact

Perhaps the most often overlooked attribute is how people feel when they participate in a learning experience. If the experience makes learners uncomfortable, stressed, or unhappy, it's doubtful the learning will be as successful as it could be. Do learners feel victimized? Do they feel burdened and bogged down, or do they feel energized and empowered?

There are also some special considerations to keep in mind concerning appearance, text, functionality, and effectiveness, detailed below.

Appearance

How things look is important. The standard isn't how fancy, modern, or exciting the visual elements are, but rather how effective they are. Whether it's the color, form, placement, or organization of the visual elements, it is important to review and get this right. The special-purpose prototypes explored various design alternatives, and the design proof should set the standard. Subsequent iterations should be reviewed for consistency with the approved design proof.

 

Some things to check:

  • Do colors align to the sponsor's standards?
  • Do visuals effectively represent the process or activity?
  • Does the interface support the learning and the learner?

Text

Experienced designers know that no matter how important some details may seem at the beginning of a project, text will get a lot of attention in the end. Because most people assume that the iterative process allows many revisions and text is generally very easy to change, all too often little concern is given to text early on. As the product nears completion, long, drawn-out struggles may ensue to get the text perfect.

The words of a course, whether instructional content, feedback, or instructions, should not be rewritten repeatedly just for the sake of rewriting. Perhaps because everyone can write, text is the one thing people have the hardest time letting go of. It's easy to think that one more review or revision will perfect what is written. Although it may get better, perfection is hard to realize. Good is good.

Early in the process, most of the text will be within sketches, prototypes, and outlines. It's appropriate that it isn't refined. The team should review these materials for completeness of scope rather than specific phrasing or terminology. As the design proof emerges, and then again with production of the alpha release, the team will have the opportunity to review all content as it will be presented to the learner. This is the time for an in-depth review of words, phrases, terminology, and the contextual suitability of the content.

 

Some review questions for text include:

  • Is it written appropriately for the learner population?
  • Is it brief and presented in short segments?
  • Is it accurate?
  • Does it adequately convey the necessary meaning?
  • Are the correct terms used?
  • Is it grammatically correct?
  • Does it have the right character and personality?
  • Does it comply with local, cultural standards?

Functionality

When technology is involved, functionality becomes an especially important concern. You don't want people to think: "I clicked on a button and nothing happened." "It says it is loading, but nothing comes up." "I am not able to select an item." "The feedback doesn't come up when I make a choice." "It says I didn't get all the answers correct, but I did!"

If there is an activity that the learner must accomplish and cannot, clearly there is a functional error. Functional errors may even frustrate and limit the reviewer's ability to complete reviews. If the choices cannot be selected, for example, the reviewer cannot see the feedback or the next item.

 

Just because a course functions as designed doesn't mean the functionality is acceptable. Functionality is a multifaceted characteristic that exists on multiple planes.

  • Does it work mechanically?
  • Does it have the desired effect as it works?
  • Does it contribute to the learning process?
  • Is it consistent with learner expectations?
  • Is the activity natural and obvious?
  • Is it accessible by all individuals who will need access?
  • Does it meet Section 508 accessibility requirements?
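
Accessibility, like the other functional attributes, benefits from automated spot checks alongside manual review. The sketch below assumes the open-source axe-core library; it flags issues on a rendered course page. Automated rules cover only part of Section 508 and WCAG, so human review is still required.

```typescript
// Sketch only: run axe-core's automated accessibility rules over a rendered
// course page and log any violations. Automated checks catch only a subset
// of Section 508 / WCAG issues; manual review is still needed.
import axe from "axe-core";

export async function reportAccessibilityIssues(root: Document | Element): Promise<number> {
  const results = await axe.run(root);

  for (const violation of results.violations) {
    // Each violation names the rule, its impact, and the offending elements.
    console.warn(`${violation.id} (${violation.impact ?? "unknown"}): ${violation.description}`);
    for (const node of violation.nodes) {
      console.warn(`  at ${node.target.join(" ")}`);
    }
  }

  return results.violations.length; // zero means no automated failures found
}
```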

Effectiveness

In an iterative process, reviewing for errors comes second to reviewing for effectiveness. There's no need to pull out the errors of something that won't become effective anyway. At every stage in the process, SAM stresses involvement of learners and people other than the designers and developers. It's hard to overstate how surprising it always is when those not grounded in the design and development of a learning product respond differently than expected.

Reviewing the application for effectiveness means reviewing against the stated performance goals as criteria. Looking at whether a particular learning experience provides an effective instructional opportunity for the learner is very different from scouting for grammatical or other incidental errors. A scenario may be grammatically incorrect while still meeting the measure of effectiveness. On the other hand, lacking careful review, it's possible to have a very well-worded piece of learner distraction.

 

SAM uses iterative formative and summative evaluation.

 

Reviewing for effectiveness does not suggest a full summative review within the development cycle, of course. But if the product is not on track to change the performance of learners, a lot of effort and time will have been wasted. In traditional terms, SAM attempts a combination of both formative and summative evaluation within the review cycles, reversing the ADDIE notion that summative evaluation follows formative. We'll get things perfected from a formative viewpoint as soon as we're quite certain we're on a path that will prove effective.

When reviewing for instructional effectiveness, it's easy to get lost in catching little details, missing the forest for the trees. Although the design proof may not offer much to evaluate for effectiveness, there is always more to be gained from having prospective learners review it than anyone expects. It's very good practice to sit down with a few learners, fill in the gaps as necessary, and observe their responses.

 

Some key questions to ensure effectiveness:

  • Does each context relate to the learner?
  • Does each learning event advance the learner toward performing effective outcome behaviors?
  • Does each module give the learner all of the practice necessary to perfect and sustain performance?
  • Does each activity present a meaningful, authentic event?
  • Is feedback represented in the form of likely consequences?

SETTING EXPECTATIONS FOR ITERATIVE REVIEWS

Early reviews require thinking creatively and broadly to ensure the groundwork is laid for later phases of design and development. Later reviews, here in the iterative development phase, must be more focused, though still open-minded of course, to effectively validate progress or the lack of it.

With the variety of deliverables (prototypes, documents, media, and others) that must be prepared, the team must have aligned expectations for each stage of the process so that work can be prepared for upcoming reviews, accomplishments can be checked, and expectations can be met. Before work begins, expectations should be enumerated and agreed to so that there are no surprises and no wasted time.

Table 15-1 lists some potential expectations for items feeding the development phase and those produced within it. There will be many criteria for each iteration in the process, but the list provides examples of the changing focus that is important for each review and for keeping the process moving.

Table 15-1. Review Expectations for the Development Phase

Interaction Prototypes
  • Interactions appropriately address the learning objectives.
  • The context relates well and realistically to the learner.
  • The challenge makes sense to the learner.
  • Activities are authentic and supportive of the challenge.
  • Learner options are intuitive and easily understood.
  • Feedback shows consequences of learner activity and decisions.
Project Plan
  • The plan is complete and realistic.
  • The plan includes a complete objectives x treatments matrix.
  • Specific responsibilities are identified by individual or group.
  • Project expectations are listed.
  • Project contingencies are identified for when delays and other problems occur.
Media Prototypes
  • Media elements are appropriate to the age, proficiency, and abilities of the learners.
  • Media comply with the organization's standards, if any.
  • Media contribute to learning (rather than simply serving as ornamentation).
  • Redundancy is provided to assist learners with different learning styles and abilities.
  • Interface for controlling media is intuitive to the presenter or learner.
Content Grid
  • The grid is complete and provides all content elements needed for each learning event.
  • Vocabulary (written or spoken) is appropriate for the learners.
  • Consequences of learner actions can be created and shown within project constraints.
  • Feedback provides clear guidance and response to the learner's choices.
  • Adequate resources are available for learners to call upon when needed.
Design Proof
  • At least one example of every content element is shown at a final level of refinement and in a functional context.
  • Course flow is fully defined and presented as it will be in the final product.
  • Navigation, if any, provides learners appropriate and desired options.
  • All outstanding design or content issues (there should be very few) are clearly identified.
  • Proof is functionally deliverable via intended mode of instruction, whether classroom, e-learning with LMS, mobile devices, or other.
Alpha
  • All content is implemented.
  • Functionality is complete and acceptable with all exceptions documented.
  • Product can be tested with learners and instructors where appropriate.
  • All bugs and errors are listed.
Beta
  • Product is complete, and all corrections identified in testing the alpha release have been made.
  • If no problems that must be fixed prior to release are identified, the beta release becomes the gold release and rolls out for use; otherwise, problems are identified and a second beta release is constructed to repair them.
Gold Release
  • Product is ready for rollout.

Complete, downloadable checklists for releases can be found online at www.alleni.com/samchecklist.

CONDUCTING A LEARNER REVIEW

When we have learners review instructional products in the process of design and development, they can help validate that we have designed meaningful context, challenge, activity, and feedback; that we have constructed learning experiences that relate to their current needs and understandings; that we have provided authentic opportunities to practice realistic behaviors; that we have composed instructive consequences that provide insight and guidance; that we have matched their ability to comprehend and participate.

Every step of the successive approximation process benefits from the participation of learners. Their reviews affirm the appropriateness of design intentions and assumptions and test development work. This evaluation can come only from those who are required to learn and perform the targeted behaviors. But effectively managing learner reviews is necessary to ensure they have the opportunity to provide relevant and helpful feedback.

Learners are not usually instructional designers and therefore need help providing useful feedback. Depending on the stage of the process (prototype, design proof, alpha, and so on), different types of feedback are helpful. They will need the most guidance when looking at rough prototypes and presumably no guidance when reviewing a beta release. For example, learners may struggle to review a prototype without clear direction on what to look for. They need to understand that a prototype is simply a design consideration, with perhaps incomplete components: "What would you think if we developed something like this?" Further along in the process, learners will be able to respond to questions, attempt homework assignments, listen to a case study and ask questions, and so on. They can then give their feedback, indicating whether they understood directions, saw the relevance of activities, and felt feedback was clear and helpful.

Many designers are reluctant to involve learners until products are somewhat polished and nearly complete. From my own experience, I must state in the strongest terms that this is a mistake. It's amazing what helpful feedback and insightful suggestions prospective learners can contribute. You would never guess what they can do until you've tried it.

Another invaluable source of reviews comes from recent learners—people who were, until recently, unable to perform the targeted skills. They are able to remember not knowing how to perform and not being able to. They are able to remember what was most helpful to them in building their new skills. They can identify what was most confusing or most difficult. They can identify what wasn't helpful and what may have wasted time and effort. These are important things to know.

QUALITY ASSURANCE IN SAM

Successive approximation validates the notion that quality is best attained by giving it continuous attention rather than attending to it only near the end of production.

The SAM process has no need for a single, formidable, “Quality Assurance” event. In SAM, quality is assessed very often, within each iteration, in fact. And as we have discussed in this chapter, each assessment has specific criteria established ahead of time. Indeed, SAM takes quality so seriously that it is continuously part of the process. There's no need to layer over yet another distinct process.

EVALUATING THE COURSE

Waterfall processes quite logically place evaluation as the last step. With evaluation performed continuously throughout the iterative SAM process, there's much less left to the end. There's much less anxiety about whether the product is deliverable, whether learners will respond positively to it, or whether targeted skills will be developed. There can be a high degree of confidence that the final product will succeed, and yet only success can affirm the achievement.

High scores are easy to obtain at the first two levels of Kirkpatrick's (1959) four-level evaluation model. Having involved learners and iterated designs to get good responses from them, learner reactions (level 1) should be very positive. There should also be little doubt that learning will occur (level 2), as this has been repeatedly verified in the process.

The third and fourth levels are, however, the most important and unfortunately are not so easily verified within the design and development process. Level 3 is achievement of behavioral change. Not only do learners know how to perform more effectively, they actually do it in life, perhaps on the job, where true benefits arise. Level 4 is gathering desired results from changed behaviors. Of course, everyone could have been wrong in the most important assumption of all: that achieving the prescribed behavioral changes would lead to success. In business and in academia, success is defined in many different ways, yet it was presumably the goal that inspired and justified the development of the instructional product. The final, level 4 question is: Was the goal realized?

Evaluation external to the SAM process of instructional product design and development is necessary to answer these final questions. Regrettably, and as important as it is, this final feedback, which would place the whole of successive approximation in an outer loop of iteration, is too seldom gathered. In those cases where it is, however, SAM seems the perfect process to respond efficiently and effectively to the outcome.
