Dana Gaines Robinson

On how it all started…

When I was a training director at a bank on the East Coast, I was asked to bring supervisory training into the organization. I was actually the first training director they had ever hired; this was in the 70s. I went out to find a program that I thought would work for us, and I did: a program provided through a supplier. When I did the cost analysis with my management team, it became apparent that this was a much heavier investment than they had anticipated. So they asked me to prove that it would make a difference; if I could, they would support bringing it into the organization. I said, “Oh, I can do that!” Then I went back to my office, said, “How the heck do I do that?” and went out to find help. That help came in the form of Jim Robinson, so, in a sidebar to this story, that is how I met the man I eventually married. We measured the impact of the program in three ways: attitudinal change, behavioral change, and operational impact. I used a control-group experimental design; because I was funded to train only 50 people out of a group of 300, that was easy to do. In other words, just by default or by accident we did a lot of things right, and we were able to validate a huge impact in all three of those areas. That resulted in my budget doubling for the next year and my staff growing by two people, and away we went!

The many insights I took from that experience have lasted my entire professional life. One of those insights is that the development of skill, while important, is not the same thing as changing performance. Performance change requires a holistic approach in which skill building is one piece. Another insight was that when you can validate and affirm to managers that there is a business case for a program, that there is an investment to be made but also a return to be realized, you get support. We carried those two learnings forward, and they became part of the work we did in building the models of performance consulting that we now teach people to use.

I moved into the training and development field in 1976 and was fortunate enough, very early in my career, to learn the difference between building skill and changing performance. That is actually one of the key insights I gained through a measurement effort. In learning that difference, I took it forward and realized, along with Jim, that there is a lot to be done to help people who work in the learning and development field develop a systemic approach that can be replicated, that moves beyond acquiring skill only, and that looks at changing performance. That really became the basis for our book Performance Consulting, which was published in 1995. We take great pride in thinking that we’ve helped to bring that into the field in a more robust manner, and it has been our life’s work to help people become more strategic in their work, focus beyond the solution of learning, and think about what they are doing to actually change performance in a way that will be sustained over time.

On how training evaluation has changed over the years…

I think it’s a mixed bag. I don’t think we are where we need to be, but we are better than we used to be. First of all, it is a much more discussed topic than it was in the 1970s when I was doing that early work. There is a lot more available to people to teach them how to do this, many more systems that you can adopt, replicate, and read about, and software and technology that supports doing it. So there are a lot of available enablers that make it easier to do. But it is still “OK,” and I am going to put that in quotes, for people in our field not to do this. It is something we accept; in fact, it is almost as though people are still surprised to find out that you are consistently working to measure the impact of at least some of your work every year. If I am a professional and I mention that I deliver training but don’t define learning objectives before I build a program, most people would look askance at that; but if I indicate that I deliver training but that we haven’t been able to measure impact, people just accept it. That is a disappointment to me, because I would hope that by now it would be much more of a thing people typically do for those programs where it is appropriate. I do want to stress that not every learning intervention can or should be measured for impact, but unfortunately too few of them are.

On the progress the profession has made in embracing evaluation…

I am always an optimist; that is my nature. So what I see is movement in a positive direction, just much slower than I would have predicted a couple of decades ago. But I do have a lot of hope because of the various enablers we have through technology and software, through the capability-development opportunities that exist, and through a growing awareness of how critical this is. And I do believe more and more people who come into our field appreciate and value the need to be business people, to think like business people, and to look at things in terms of investments and returns. With that mental model becoming more embedded in the profession, it bodes well for more of this to be done. Once again there is the caveat that not everything done under a learning umbrella can or should be measured for impact, but we still need to do more of it, because many things that could be measured this way are not.

On how executives view learning and development and investment in it…

I think we need to build both a “push” and a “pull” strategy on three levels: in the profession, in our organizations and functions, and within ourselves as professionals. Let me explain. There are times when we need to push, being the initiator and building a want for this, and there are times when we can be pulled in because someone is seeking it. I think we need both strategies to be operative.

On how evaluation makes a difference in the perception held by executives of training and development…

Here is a very valuable thing we can do: ask a manager, “Would you want to know if results are not occurring, so that additional actions can be taken to get them moving in the right direction?” If we frame it as, “We are going to help you know whether you are getting the type of performance you need from people and the type of results you need from the business and, if not, why not,” a lot of managers would see a lot of value in that.

That is the definition of being strategic: you are aligned to a business and you are benefiting the business. And if we are going to be aligned to the business, then we need to think like business people: What are the business needs here? In what way can I support them? What would be the cost of doing that? What is the return? And, of course, in the learning field there is a very critical insight that took me a few years to grasp, and that I hope people are more familiar with now: there is only one way, one way, in which building skills affects business results, and that is through the performance of people. The business results of an organization don’t move in a positive direction because of what I know; they move in a positive direction because of what I do with what I know. That, of course, is why we feel performance consulting is such a critical, critical part of the equation.

I really feel that people who lead and do learning and development work in organizations are absolutely business people; their area of expertise happens to be learning or human performance improvement. We are business people, and we need to approach what we do from a business perspective. A classic business perspective is: here is the investment, here is the return we should anticipate, and how do we know if that happens? We owe it to ourselves and to our organizations to do some validation of that.

On why we still see such a low investment in training measurement and evaluation within organizations and how we can facilitate more investment in the future…

As we go into organizations, we often ask people to assess eight criteria for operating as performance consultants, identifying which are most characteristic and which are least characteristic of their practice. The criterion that always shows up at the bottom of the list is measuring the results of what we do. When we ask these groups why that is, here is what they share with us. One category of answers is about not being asked to do this: our managers don’t ask us to do it, they’re busy people, they are moving on to the next thing, and so are we. So there is a lack of focus, a lack of being pulled into doing it, and, I would add, a lack of expectation that it will be done. Many people tell us that they don’t do it because they don’t know how; of course, that is something our field can overcome, since we know about learning and skill development and, as I mentioned, there are many ways to learn this. Another reason we hear is that some people are wary of what you really learn from doing it: they say you can’t isolate training as the variable that made the difference, and if you can’t isolate it, why bother measuring it?

But I believe that many people in our field still do not truly understand that there is a difference between building skill and changing performance. It is as though people acquire the skill and knowledge being provided and then say, “We are OK, the rest will happen.” Of course we know it doesn’t. There are people in our field who don’t feel accountable for ensuring people apply skill; they see their job as helping people learn it and management’s job as making sure they use it. But in performance consulting, we are about a partnership with management, and together we should share the accountability for getting impact from what we do. Another reason I believe it’s done infrequently is that measurement is really a front-end process: if you haven’t identified the performance and business outcomes you expect from the learning, it’s pretty hard to measure whether you got them. It’s the old “we have to have our destination in mind.” Unfortunately, a lot of people don’t do enough front-end work to know what those destinations might be. And then I think some people are concerned that they might measure and find limited results, which in today’s business world puts us in a vulnerable place. So there are a lot of reasons why it isn’t done.

On what the future holds for measurement and evaluation…

Performance consulting is really a process in which clients and consultants partner to optimize workplace performance in support of business goals. It is a solution-neutral process: it’s not just about learning; it is about optimizing workplace performance and doing whatever is needed to make that happen, both in terms of changing the performance of people and in terms of building a workplace infrastructure that allows them to work effectively. Performance consulting is a four-phase process, and measurement is the last of the four phases, so measurement has always been an integral part of performance consulting in our minds. Performance consulting is a process that yields results: performance change, workplace change, and business impact. It is about results. And we tend to measure what we produce, so if we are producing results, we want to measure them at what are commonly thought of as Levels 3, 4, and 5 of evaluation. Measurement also helps determine whether results are not occurring as we hoped and, if so, what else might be needed to achieve them. Measurement is one of the four phases; it affirms that the results we set out to achieve through our performance consulting work have, in fact, occurred. I don’t see that changing; they are completely integral to each other.

About Dana Gaines Robinson

Dana Gaines Robinson, cofounder and former president of Partners in Change (a consulting practice she founded with her husband, Jim Robinson), developed and advanced the concepts of performance consulting. Exemplary Performance is now the sole distributor of their workshops and consulting services, and Robinson is semi-retired, working on selected projects through Exemplary Performance. She is the coauthor of Performance Consulting and Training for Impact. A past member of ASTD’s Board of Directors, she is the co-recipient, with her husband Jim, of ASTD’s Distinguished Contribution Award for Workplace Learning and Performance.
