WHEN THE PURPOSE OF USING MULTI-RATER FEEDBACK IS BEHAVIOR CHANGE

Maxine A. Dalton

I will argue here that instrumented multi-rater feedback (MRF) is best used for developmental purposes and, therefore, should be a confidential and private process. MRF data should not be used for administrative, evaluative, or decision-making purposes.

In my experience the reason most often given for the use of instrumented MRF in an organization is to initiate behavior change in the person who is to receive the feedback. Whether the feedback is being given to problem performers or high-potentials, it is presented to the individual as baseline information from a variety of perspectives—an opportunity to see ourselves as others see us. The rationale behind this activity is that a person cannot become more effective, improve, or design a plan of development without first having baseline data about his or her current level of performance against some standard.

To satisfy this objective—a change in behavior as a response to MRF information—the data must be of high quality and the person must accept the data.

In this essay I will comment on these two points: the conditions that increase the likelihood that a feedback report will be based on reliable, veridical data, and the conditions that facilitate an individual’s “being able to hear” disconfirming information about him- or herself.

The Quality of the Data

I will approach this topic with both anecdotal and quantitative data. As I have been working with MRF surveys for a number of years, I have gathered many stories about multi-rater interventions. Undoubtedly, my belief system has created a filter for my retention and recall. Nonetheless, I will first describe the case, as I have seen it play out, and then will address the quantitative support for my argument.

Typically, when peers and subordinates are asked to rate their colleague and boss, respectively, they express concern about how these ratings will be used and whether the individual will know “what I said.” If the peers and subordinates like the individual they are being asked to rate, they are concerned about doing or saying anything that might hurt him or her. It might, therefore, be expected that either positive halo or restriction of range will mute the message. If raters dislike the individual they are being asked to rate, they may fear retribution or decide that this is a good chance to get even. Again, the data may reflect restriction of range or a negative halo.

If raters think the individual has a “good” boss, they may decide to trust him or her with the “truth.” If they think the individual has a “bad” boss, they may be less willing to be forthcoming and candid. Typically, to address the issue of retribution and accountability and increase the likelihood that raters will provide candid feedback, they are guaranteed anonymity.

As to the use of the data, when raters are assured that the data will remain confidential and be used for developmental purposes only, they are relieved and much more willing to be candid and to use the entire range of the scale.

These experiences are supported in the literature. Farh, Cannella, and Bedeian (1991) reported that when peers rated colleagues for evaluative rather than for developmental (confidential) purposes, the ratings showed greater halo and were more lenient, less differentiating, less reliable, and less valid. They concluded that “other things being equal, the more severe the perceived consequences of a negative rating, the greater incentive for the rating to be lenient.” Hazucha, Szymanski, and Birkeland (1992) reported that boss and self scores on a multi-rater survey were higher under conditions of open feedback than under conditions of confidential feedback. London and Smither (1996) reported that forty percent of the raters interviewed after giving feedback under confidential conditions said they would change their ratings if the situation changed to an appraisal condition.

Accepting the Data

It is likely that a significant proportion of individuals receiving feedback from a multi-rater instrument will learn that they rate themselves higher than others do. Those who have studied self/other agreement on MRF instruments have reported that approximately one-third of self-raters are likely to describe themselves in a more favorable light than their subordinates describe them (see, for instance, Fleenor, McCauley, & Brutus, 1996).

Depending on the skills, traits, and attributes being rated, an individual may be receiving information about a personal quality that is core to the image of self, one’s persona in the world. At best, this is an unsettling experience. At worst, it is devastating.

But it is not surprising. The typical workplace is not feedback-rich. Employees can work for years without a performance appraisal. Individuals can receive promotions and regular raises right up until the day they are dismissed for behaviors that they never even knew were considered a problem. And so it is not hard to understand that self-ratings tend to be higher than observer ratings for many people or that individuals can be “knocked out of their socks” by an MRF experience.

Therefore, to be able to take in instrumented feedback information, process it, evaluate it, integrate it, and make plans to learn new skills because of it, an individual must feel safe. In the struggle to reorganize self-image in response to the evaluations of others, a person must believe that the process does not make him or her unreasonably vulnerable. He or she must feel a high degree of psychological safety.

Traditionally, a cornerstone of psychological safety has been the convention of confidentiality. All of the tenets of counseling and therapy in the Western world are based on this convention: the belief that individuals are most likely to change if they can enter the process of self-discovery with a qualified person in a confidential setting, where the information disclosed and discussed will not be used against the individual who has undertaken that personal journey. This convention has even been codified into law for psychotherapists in most states, with exceptions only for cases of child abuse or threats of murder or suicide.

Of course, MRF in organizations is not therapy, but it is about facilitating behavioral change. When MRF is collected and used for evaluative purposes—for making decisions about performance, promotion, or salary increases—within a semipublic forum, the process is no longer safe. The individual is highly vulnerable and, it is this writer’s argument, much more likely to argue with, deny, or reject the data, and to fail to change.

In a breakthrough study of performance-appraisal processes conducted at General Electric, Meyer, Kay, and French (1965) found just this outcome. Individuals who received negative evaluations from their boss that were tied to salary decisions rejected the data, and those who received the greatest number of criticisms during the appraisal process actually registered a decrement in performance. Meyer and colleagues recommended that coaching about performance, for the purpose of improving performance, be separated from salary decisions and that this coaching be a day-to-day rather than a once-a-year activity.

Not only does an evaluative climate increase the likelihood that the data will be rejected (and therefore reduce the likelihood of behavior change); it also increases the stress associated with the feedback event.

To receive discrepant feedback is stressful. Individuals may perceive the stressful event as challenging, threatening, or harmful (Folkman & Lazarus, 1985). If the individual defines the feedback event as challenging, he or she may see this as an opportunity to use the feedback as a catalyst for change. If the individual determines that the stressful event is threatening or harmful—something that will block a promotion or a salary increase—he or she is more likely to use a different set of coping strategies, including denial, a damaging venting of emotions, or behavioral and mental disengagement (Carver, Scheier, & Weintraub, 1989). Even worse, if the boss who has access to the multisource feedback is not a skilled coach, or is himself or herself a punitive person, the feedback event may escalate into a truly unpleasant and unhelpful activity.

The Motivation to Make the Process Public and Evaluative

There are a number of motives for making the multisource feedback process public and evaluative. Some are benign, some less so. Some are laudable but naive about issues of power and retribution.

A frequently cited reason for making feedback evaluative and public is the following: “How can we make the person change if we don’t know what the feedback says?” Such a question is the expression of a personal theory of behavior change: individuals change through punishment, retribution, and fear—the stick. This is an example of the construct of coercive power (Raven & Kruglanski, 1970), and it produces a change in behavior that can be maintained only by monitoring, surveillance, and watchfulness. As Meyer and colleagues (1965) demonstrated, the outcome may also be a demoralizing psychological reaction in which performance actually declines.

A second reason for wanting to make multisource feedback public and evaluative is to provide greater rater input into the process of appraisal. The argument is a liberal one, based on a vision of egalitarian and collaborative organizational structures. It is an appealing argument: “Why should the boss be the only one to have a say in the appraisal process?” Despite its appeal, it is naive about issues of power and retribution. A rater might be told that his or her ratings of the boss will be anonymous, but the average score of five 1s is still 1; it does not seem too big a leap to surmise that the individual who receives such a score is the individual least likely to have the interpersonal skill to cope with the knowledge that all of the raters described him or her in such a fashion.

Individuals in organizations need feedback, and they need it from multiple sources. They cannot be expected to change their behavior without the 360-degree perspective that tells them that their message is not being received as intended. The goal of an instrumented program of multisource feedback should be to move toward the day when instrumentation is no longer necessary, when appraisal is not an annual event, and when enlightened individuals are able to let one another know “in the moment” about the impact of a set of behaviors on the work task so that recalibration can begin immediately. The core of this argument about the use of multisource feedback is how we get there. It is my argument that instrumented multisource feedback is a tool in service to this goal, but one that must not be used as a blunt instrument.

Rather, the goal is most likely to be achieved over time through multiple iterations of confidential, developmental, multisource feedback events—linked to business-driven development planning—where development is rewarded and learning is valued. When individuals and organizations see that feedback doesn’t kill; that information is being provided in service to learning; and that learning occurs through open, consistent feedback and the opportunity to practice needed skills, then we can all throw down our number 2 pencils.

This debate is about how to help people learn, grow, and change over time. I have argued that this is not done through ever-more-inventive rating systems that attempt to force distributions or foil the raters. It is not accomplished through punishment or public humiliation, or by ignoring basic principles about the conditions under which people can accept and incorporate disconfirming information about the self. It is accomplished by thoughtful attention to creating environments where information is freely shared and accepted and where learning in service to individual and organizational goals is rewarded.

References

Carver, C. S., Scheier, M. F., & Weintraub, J. K. (1989). Assessing coping strategies: A theoretically based approach. Journal of Personality and Social Psychology, 56(2), 267-283.

Farh, J. L., Cannella, A. A., & Bedeian, A. G. (1991). Peer ratings: The impact of purpose on rating quality and user acceptance. Group & Organization Studies, 16(4), 367-386.

Fleenor, J., McCauley, C., & Brutus, S. (1996). Self-other rating agreement and leader effectiveness. Leadership Quarterly, 7(4), 487-506.

Folkman, S., & Lazarus, R. S. (1985). If it changes it must be a process: Studies of emotion and coping during three stages of a college examination. Journal of Personality and Social Psychology, 48, 150-170.

Hazucha, J., Szymanski, C., & Birkeland, S. (1992). Will my boss see my ratings? Effect of confidentiality on self-boss congruence. Symposium conducted at the meeting of the American Psychological Association, Washington, DC.

London, M., & Smither, J. W. (1996). Can feedback change self-evaluations, skill development, and performance? Theory-based applications and directions for research. Unpublished manuscript.

Meyer, H. H., Kay, E., & French, J. R. P., Jr. (1965, January-February). Split roles in performance appraisal. Harvard Business Review, 43(1), 123-129.

Raven, B. H., & Kruglanski, A. W. (1970). Conflict and power. In P. Swingle (Ed.), The structure of conflict (pp. 69-109). New York: Academic Press.

Maxine A. Dalton is a research scientist and program manager at the Center for Creative Leadership, where she has trained hundreds of feedback specialists and given feedback to hundreds of individuals and groups since being introduced to the concept of 360-degree feedback in 1989. She has also been active in managing the process of the translation and adaptation of 360-degree-feedback instruments in other countries. Prior to coming to CCL, she was a consultant with Drake Beam Morin. Dalton’s recent publications include “The Benefits of 360-degree Feedback for Organizations” in Maximizing the Value of 360-degree Feedback: A Process for Successful Individual and Organizational Development (Eds. W. Tornow, M. London, & CCL Associates; Jossey-Bass, March 1998); and with George P. Hollenbeck, How to Design an Effective System for Developing Managers and Executives (CCL, 1996). Dalton holds a Ph.D. in industrial/organizational psychology from the University of South Florida.
