CHAPTER TEN
UNDERSTANDING NONPROFIT EFFECTIVENESS*

David O. Renz and Robert D. Herman

* This chapter is from an article first published by the authors in Nonprofit Management and Leadership in Summer 2008; adapted and reprinted with permission.

In an era of heightened concern for nonprofit performance, results, and accountability, we hear more and more about organizational effectiveness and our need to ensure it. Nonprofit leaders and funders all feel increased pressure to guarantee results, and it seems that the mantra of “effectiveness” has become the standard answer. Who can be against effectiveness? But what are we really talking about? Are we all talking about the same thing? Nonprofit organizational effectiveness continues to be an elusive and contested concept. The reality is that most nonprofit leaders and researchers, lacking the simple criterion of bottom-line profit or loss, struggle with the concept of nonprofit organization (NPO) effectiveness and how to make it meaningful in their own organizations. Confronted with these growing pressures to enhance nonprofit organization impact and accountability, they are exploring questions such as:

  • What is nonprofit organizational effectiveness? And is program effectiveness the same as organizational effectiveness?
  • Is there some “real” effectiveness out there just waiting to be discovered?
  • Can those of us trying to explain effectiveness agree on what it is?
  • And if we found it, would we be able to agree on what we have identified?
  • Do certain management practices generally promote greater organizational effectiveness? Are there “best practices” and, if so, what are they?

These are just a few of the important questions that confront those interested in studying and improving nonprofit organizational effectiveness. In this chapter we synthesize into key themes the results of recent research on nonprofit organizational effectiveness and explore the implications of these for practice and further study.

Theoretical Perspectives on Nonprofit Organizational Effectiveness

Organizational effectiveness has long been a challenging and contested concept in the world of organizational theory and research. For many, the obvious perspective to use in understanding organizational effectiveness is what theorists call the rational “goal attainment model.” This pervasive and common-sense view considers organizations to be rational instruments—mechanisms to achieve something. Thus, the goal approach assesses nonprofit organization effectiveness by the degree to which the organization accomplishes its goals. This perspective is quite appealing. After all, most people join nonprofit organizations because they want to help them accomplish their missions. Yet, while the goal model makes intuitive sense, it often is inadequate to help us understand the real-life complexities of our organizations and what it takes for them to be successful. For example, is an organization truly effective if it accomplishes its goals for the year but must close because it has failed to raise adequate funds? Is a nonprofit effective if it accomplishes its goals by setting those goals so low that they are easily accomplished? And how effective is the organization that sets goals that are irrelevant to the needs of its clients? The reality is that nonprofit organization effectiveness is more complicated.

Given these concerns, some scholars and researchers have developed and tested alternatives or modifications to the goal model of effectiveness. One approach is the “system resource” approach (Yuchtman and Seashore, 1967). This perspective considers effectiveness to be the ability of an organization to acquire scarce and valued resources. This approach justifies the use of measures of resource acquisition, especially financial measures such as total revenues generated or fundraising success, as indicators of organizational effectiveness. Some studies of effectiveness have used this approach, such as the Pfeffer (1973) study of hospitals that used the percentage increase in number of beds occupied and percentage increase in budget over a five-year period as measures of organizational effectiveness. Some others (for example, Provan, 1980) have used the percentage change in funding as the measure of success.
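
To make the arithmetic behind such indicators concrete, the following is a minimal sketch, with entirely hypothetical figures rather than data from the studies cited above, of the kind of percentage-change measures these system resource studies rely on:

```python
# A minimal sketch of resource-acquisition indicators of the type used in
# system resource studies (for example, percentage increase in budget or in
# beds occupied over a five-year period). All figures are hypothetical.

def percent_change(start: float, end: float) -> float:
    """Percentage change from a starting value to an ending value."""
    return (end - start) / start * 100

budget_five_years_ago, budget_now = 850_000, 1_120_000  # total budget, dollars
beds_five_years_ago, beds_now = 140, 163                 # beds occupied

print(f"budget growth over five years: {percent_change(budget_five_years_ago, budget_now):.1f}%")
print(f"growth in beds occupied:       {percent_change(beds_five_years_ago, beds_now):.1f}%")
```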

Certainly, resource acquisition is an aspect of effectiveness. Indeed, it may be the most important criterion for some chief executives or board members (although we doubt they would ever say so). But it seems unlikely to be very important to clients or other key stakeholders. Most leaders of nonprofits tend to emphasize the importance of mission and progress toward mission accomplishment, not increases in the budget, when talking about effectiveness. In fact, emphasis on financial growth in itself would threaten many organizations' legitimacy with their community if that were reported as their primary measure of effectiveness.

Others, recognizing the limits of the goal attainment model's emphasis on the “ends” of the organization, instead identify and measure performance on a variety of management practices (that is, “means” rather than “ends”) that they believe will result in organizational effectiveness. This is known as the “internal process approach” to organizational effectiveness (Steers, 1977). It often manifests itself as an assessment of an organization's use of “best practices.”

In our own research on nonprofit effectiveness, we have found it important to draw on two contemporary theoretical perspectives to help us understand organizational effectiveness—the multiple constituency perspective and the social constructionist perspective. The multiple constituency perspective is supported by the work of Kanter and Brinkerhoff (1981), who observe that organizations have various constituencies, or stakeholders, and that each constituency is likely to evaluate an organization's effectiveness by using criteria important to that constituency. They argue that organizational effectiveness is not a single reality but rather a more complicated matter of addressing differing interests and expectations. This understanding makes sense to us. We too accept that nonprofits have multiple constituencies or stakeholders who may be likely to differ in how they evaluate the effectiveness of an organization.

The additional perspective we find useful, social constructionism, is not a specific model of organizational effectiveness but rather a general ontological perspective. Proponents of social constructionism explain that reality or some parts of reality are created by the beliefs, knowledge, and actions of the people who are involved. Thus, reality is not something independent of people and the judgments they make, even though people may believe that what they are examining exists as an independent, objective reality. It is a function of their perceptions. The “new institutional school” in organization theory (see Scott, 1995, for a summary) takes a social constructionist approach to analyzing many aspects of organizations, including effectiveness. Our approach to understanding organizational effectiveness builds on these two perspectives. In short, we have come to embrace the view that overall nonprofit organizational effectiveness is whatever its relevant multiple constituents or stakeholders judge it to be.

We recognize that nonprofit organizations have multiple constituencies, such as clients, employees, funders (both individual donors and organizations such as grant-making foundations and United Ways), licensing and accrediting bodies, boards of directors, and vendors. These different constituencies are likely to use different criteria, even when evaluating the effectiveness of the same nonprofit organization. This is not to say that such judgments of effectiveness are capricious or arbitrary—they simply differ from constituency to constituency. For example, clients may pay the most attention to changes in their personal condition (are they improving, achieving what they want from their relationship?), while funders may pay more attention to the degree to which the organization follows the correct management procedures (such as strategic planning or outcomes assessment) or provides consistently accurate client and financial reports. Individuals within constituencies, and no doubt to some extent across constituencies, are likely to communicate with one another about the nonprofit and how they think it is doing. They are also likely to see and hear communications from people in the organization about how well they and the organization are doing. In such ways, judgments of effectiveness are developed and even changed. It is not inevitable that constituencies differ in their judgments—we just know that they often do. We also have learned that their views of nonprofit effectiveness may change over time—it is not necessarily stable (even if organizational conditions do not change). In some situations, these social processes that result in judgments of nonprofit effectiveness may even lead to different constituencies using the same criteria and evaluating information about an organization in the same way. However, as we have learned from our own and others' research, it is not uncommon for different constituencies to differ in their judgments of the same organization's effectiveness.

Key Insights on Organizational Effectiveness

As we consider the findings of the research conducted since the early 1990s, we have identified some fundamental themes that help us understand nonprofit organization effectiveness, what it is, and how we might better understand it.

It's a Matter of Comparison

It is essential that we recognize that judgments on organizational effectiveness are, by logical requirement, always a matter of comparison. The key question, often left unasked, is to what are we comparing any particular organization's effectiveness? Is it comparison with the same organization at earlier times, or to similar organizations at the same time, or to some ideal model, or something else? And are others using the same basis for their comparisons? The basis for the comparison is a key to understanding varying judgments of effectiveness, and it often is hidden or unknown (sometimes even to those doing the judging).

Effectiveness Is Multidimensional

Nonprofit organization effectiveness is multidimensional. Most management practice models, as well as the models underlying much of the research on nonprofit organizations, expect that nonprofit organizations should have a number of different criteria by which to judge their effectiveness, and these criteria often are independent of one another. Models that reflect this characteristic include the competing values framework of Quinn and Rohrbaugh (1981), the balanced scorecard technique advanced by Kaplan and Norton (1992), and many other studies on nonprofit effectiveness (see reviews by Forbes, 1998; and Stone and Cutcher-Gershenfeld, 2002).

Baruch and Ramalho (2006), in their analysis of 149 studies published between 1992 and 2003 on organizational effectiveness (in all kinds of organizations), found that the criteria used to assess effectiveness varied significantly by the types of organizations studied. For example, in studies of businesses, a slight majority used multiple criteria, but 42 percent used only financial criteria. In studies of nonprofits, virtually all used nonfinancial criteria (for example, employee satisfaction, customer orientation, quality, public image) as well as financial criteria. However, they note that the most commonly used criterion in the nonprofit studies was efficiency (conceived as an input/output ratio), although nonfinancial criteria were used almost as often.

This recognition that nonprofit effectiveness is multidimensional has fundamental implications for both research and practice. One of the most important implications: if nonprofit effectiveness is multidimensional, then it cannot legitimately be assessed by using only one single indicator. Thus, models that focus on helping nonprofits enhance effectiveness by maximizing a single criterion (for example, surplus, growth, total revenues) are inadequate. This also means that it is equally inappropriate to assess organizational effectiveness using only the results of individual program performance.

Effectiveness Is a Social Construction

Our research results reinforce our view that NPO effectiveness is “socially constructed.” That is, effectiveness is whatever significant stakeholders think it is, and there is no single objective reality “out there” waiting to be assessed. This perspective challenges many because they want effectiveness to be an objective condition that can be seen, measured, and understood in the same way by everyone. It is not that simple. We recognize that the social construction perspective challenges many taken-for-granted understandings about the social world. Nonetheless, many parts of the social world are “real” only because people have believed and acted in ways that are consistent with that reality. This is not to deny that they have significance or consequence. For example, many scientists have observed that the idea and categories of “race” are social constructions. Of course, as our experience with “race” makes all too clear, once a social construction is perceived as real, it can become real in its consequences.

To illustrate how nonprofit effectiveness is socially constructed, we share a baseball story. As the story goes, three umpires are describing how they call balls and strikes. The first says “I call 'em as they are.” The second says “I call 'em as I see 'em.” The third, the social constructionist of the group, says “They ain't nuthin' 'til I call 'em.” In the world of nonprofits, there are activities and accounts of activities, such as annual reports, program outcome reports, stories told by CEOs to board members, funders, and others, and so on. These activities, like pitches in the baseball story, are nothing until someone calls or interprets them. That is, they are not significant until someone forms judgments of effectiveness from them (and, usually, communicates those judgments) and acts on the judgments. Unlike in baseball, for most nonprofits there is no single umpire—all stakeholders are permitted to “call” or judge effectiveness. Some stakeholders will be more credible than others, and some will be more influential than others, and this will make a practical difference. As yet, there is no commonly agreed basis for judging NPO effectiveness, much less a single, objectively “real” measure.

Boards of Directors and Nonprofit Effectiveness

Many studies, using different kinds of nonprofits and different models and measures of board and organizational effectiveness, have found a relationship between board effectiveness and organizational effectiveness. The common assumption is that board effectiveness causes organization effectiveness. But it may not be this clear or unidirectional. Only one study to date (Jackson and Holland, 1998) provides any solid evidence in support of the assertion that board effectiveness is a cause of organizational effectiveness, and several others have failed to affirm this.

We conducted a study in which we compared changes in ratings of board effectiveness and organizational effectiveness for a group of human services organizations over a period of time (Herman and Renz, 2004a), and found that only slightly more than half of the organizations increased their use of recommended board practices during this time; some actually decreased their use of these recommended practices. Interestingly, we found that both chief executives and board members considered the financial condition of the organization as a significant measure of the board's effectiveness. We also found, in the case of funders, that perceptions of the prestige of the members of the board had some impact on funders' judgments of board effectiveness (completely separate from any information about the boards' actual practices or other measurable results).

There is some interesting research on the relationship between board performance and organizational performance. William Brown (2005) found that, for chief executives, certain dimensions of board performance (as judged by those executives) are related to organizational performance (as judged by board members). Using the six board performance dimensions developed by Jackson and Holland (1998), he found in particular that “interpersonal” and “strategic” board competence were significantly related to organizational performance. Further, in separate research, Preston and Brown (2005) found evidence that board members' emotional commitment, length of membership, frequency of board attendance, and hours spent on organizational activities are all positively related to board performance. Thus, recent research provides support for the view that (in at least some ways) board effectiveness is related to organizational effectiveness. But there is much more to learn.

Effectiveness and Management Practices

The idea that the use of certain board and management practices leads to improved organizational effectiveness is currently in favor. Perhaps this should not be a surprise. In general, the research indicates that nonprofits that are more effective are more likely to use correct management practices. And, as would be predicted by those of the institutional school of organization theory (for example, DiMaggio and Powell, 1983; Meyer and Rowan, 1977), when outcomes are difficult to measure or there is substantial uncertainty about the methods for achieving the desired outcomes, organizations are likely to emphasize the use of approved procedures to achieve or maintain their legitimacy. Thus, use of the “right practices” becomes a de facto indicator of effectiveness.

But does the use of correct management practices equate to organizational effectiveness? Some research suggests a relationship between the use of various management practices (often some part of the strategic planning process) and some measure of overall organizational performance. Several studies (Crittenden, Crittenden, and Hunt, 1988; Odom and Boxx, 1988; Siciliano, 1997) find relationships between the use of certain planning practices (such as financial analysis, stakeholder analysis, environmental trend analysis, goal setting, action plans, and monitoring of results) and higher levels of organizational performance. Among the varying kinds of measures of organizational performance that were used in these studies were membership numbers, growth in membership, growth in contributions, and ratio of total revenues to total operating expenditures. (Unfortunately, in a review of research on strategic planning in nonprofit organizations, Stone, Bigelow, and Crittenden [1999] found that little can be reliably said about exactly which elements of the strategic planning process could be used by nonprofit organizations to improve their overall effectiveness.)

In our research, we, too, have compared the practices of highly effective organizations with those of less effective organizations (effectiveness was based on the aggregate judgments of all of the organizations' key stakeholders). We identified the management practices through the deliberations of focus groups of experienced practitioners whom we convened to identify the practices they considered to be relevant to organizational effectiveness. The practices they considered to be indicators of effectiveness included the presence of a mission statement, a recent needs assessment, a planning document, a system to measure client satisfaction, a formal CEO and employee appraisal process, an independent financial audit, and a statement of organizational effectiveness criteria. We found, for funders, board members, and senior managers, that the organizations rated as more effective did in fact use more of these “correct” management practices, and that greater use of these practices was positively correlated with higher ratings of organizational effectiveness for all three groups (the sketch following this paragraph illustrates the kind of tally and correlation involved).
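
As an illustration of that kind of relationship, here is a minimal sketch; the practice list mirrors the one above, but the organizations, their reported practices, and their ratings are hypothetical, and this is not our survey instrument or analysis code. It simply tallies how many of the listed practices an organization reports using and correlates those counts with averaged stakeholder ratings of effectiveness:

```python
# A minimal, hypothetical sketch: count an organization's reported use of the
# management practices named in the chapter and relate practice counts to
# averaged stakeholder effectiveness ratings with a plain Pearson correlation.
from math import sqrt
from statistics import mean

PRACTICES = [
    "mission_statement",
    "recent_needs_assessment",
    "planning_document",
    "client_satisfaction_system",
    "ceo_and_employee_appraisal",
    "independent_financial_audit",
    "effectiveness_criteria_statement",
]

def practice_count(org: dict) -> int:
    """Number of listed practices the organization reports using."""
    return sum(1 for p in PRACTICES if org.get(p, False))

def pearson(xs: list, ys: list) -> float:
    """Pearson correlation between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical organizations: reported practices plus an averaged stakeholder
# rating of overall effectiveness (1 = low, 7 = high).
orgs = [
    ({"mission_statement": True, "planning_document": True}, 3.8),
    ({p: True for p in PRACTICES}, 6.2),
    ({"mission_statement": True, "independent_financial_audit": True,
      "client_satisfaction_system": True}, 4.9),
    ({"mission_statement": True}, 3.1),
]

counts = [practice_count(org) for org, _ in orgs]
ratings = [rating for _, rating in orgs]
print(f"practice counts: {counts}")
print(f"correlation with effectiveness ratings: {pearson(counts, ratings):.2f}")
```

A positive correlation in such a tally is, of course, only an association; as we argue throughout this chapter, it does not establish that adopting more practices causes higher effectiveness judgments.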

Several studies support aspects of this thesis. For example, Galaskiewicz and Bielefeld (1998) report that increased use of selected managerial tactics led to increased organizational growth (in expenditures and number of employees). We could argue that increases in organizational size or growth are not necessarily appropriate indicators of organizational effectiveness, yet it is certainly arguable that some stakeholders regard growth in size as an indicator of effectiveness. In our panel study (Herman and Renz, 2004a), we found that board members judged organizational effectiveness in relation to the extent of use of correct management practices, but funders and senior managers did not. This illustrates the variation that can exist among different stakeholders, even when judging the same organization. Such results raise concern about the merit of the increasingly common claim that there is some validated set of practices that are “best.” In relation to both nonprofit board management and organizational management, we must question the assumption that there is “one best way” of doing board work or managing NPOs.

We also have studied whether organizations that increase their use of correct management practices over time come to be viewed as more effective. Of the forty-four organizations we studied over a nine-year time frame, 55 percent increased the proportion of recommended board practices they used, 14 percent made no changes, and 32 percent actually reported using fewer of the recommended board practices. This also suggests that assertions about what constitutes a “best practice” will change over time. Our experience is that nonprofits are likely to find that certain influential stakeholders (foundations, United Ways, accrediting bodies) will change their beliefs about “best practices”: as more and more nonprofits adopt the preferred practices over a given period of time, those practices no longer seem to differentiate the more effective from the less effective, so these stakeholders start looking at new practices and lists. However, this needs further investigation.

The Lure of “Best Practices”

In recent years, the concept of “best practices” has become something of a holy grail for nonprofits seeking to enhance effectiveness. It has been very widely invoked and applied. There is an argument to be made for valuing certain practices, as we discussed in the previous section. And yet, the promise of best practices should be viewed with skepticism. The evidence suggests it is unlikely that there are any universally applicable “best practices” that can be prescribed for all NPO boards and management. In our research (Herman and Renz, 2004a), the evidence does not support the claim that any particular board and management practices are automatically best or even good (that is, that using them leads to increased effectiveness for boards and organizations).

What evidence is required to support a claim of best practice? Keehley, Medlin, Longmire, and MacBride (1997) write that “best practices” should meet seven criteria: be successful over time; show quantifiable gains; be innovative; be recognized for positive results (if quantifiable results are limited); be replicable; have relevance to adopting organizations; and not be linked to unique organizational characteristics (in other words, they need to be generalizable). We have not found any “best practice” that comes close to meeting these criteria. Interestingly, in the business world, studies of what have been promoted as “best practices” for corporate boards also have found no relation between recommended practices and corporate performance (see Heracleous, 2001). We prefer to speak of “promising practices” because, at best, it may be said only that such approaches warrant consideration and must be judged in the context of the specific organization. Further, as noted in the previous section, practices that are considered “best” at one point in time are likely to change. There is much yet to be studied and understood regarding the assertion that more effective NPOs are likely to use correct management practices.

Effectiveness and Organizational Responsiveness

One of the realities of most research on organizational effectiveness is that researchers (and organizations) focus on specific objective criteria to measure or test. But our collective inability to identify any such specific measures suggests that this approach may not be useful. Instead of telling respondents exactly what criteria to use, perhaps we should employ an alternative approach and leave it to each judge or survey respondent to determine what criterion or criteria to apply. In fact, this might offer a way to embrace the social construction of effectiveness yet still allow for aggregating stakeholders' judgments of effectiveness. To this end, we employed a measure of nonprofit effectiveness in our work that emphasizes responsiveness as a way to address the challenge of aggregating the ratings of the various stakeholder groups offering their differing judgments of effectiveness. Adapting the approach of Anne Tsui (1984), who measured co-workers' judgments of the effectiveness of individual managers, we asked various constituencies to assess how well the organization is doing on whatever they deem important. We did not tell the respondents what to use as a basis for their judgment.

We (Herman and Renz, 2004a) found that all stakeholder groups rated organizational responsiveness as strongly related to organizational effectiveness. Our work showed that responsiveness is positively related to effectiveness (for all stakeholder groups), and we also found that each stakeholder group's rating of organizational responsiveness was highly related to the average rating of effectiveness for all groups. This suggests to us that an averaged rating of responsiveness can be used as an indicator of effectiveness or, at least, one kind of effectiveness.
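
To illustrate the aggregation idea, here is a minimal sketch; the stakeholder groups and ratings are hypothetical, and this is not our actual instrument or data. Each group rates how well the organization is doing on whatever that group deems important, and the per-group means are then averaged into a single indicator:

```python
# A minimal, hypothetical sketch of aggregating responsiveness ratings:
# each stakeholder group rates the organization (1 = very poorly, 7 = very
# well) using its own implicit criteria; group means are then averaged.
from statistics import mean

ratings_by_group = {
    "clients":         [6, 5, 6, 7],
    "funders":         [4, 5, 5],
    "board_members":   [6, 6, 7, 5],
    "senior_managers": [5, 6],
}

# Average within each group first so that larger groups do not dominate,
# then average the group means into one aggregate indicator.
group_means = {group: mean(scores) for group, scores in ratings_by_group.items()}
aggregate = mean(group_means.values())

for group, m in group_means.items():
    print(f"{group:15s} mean responsiveness: {m:.2f}")
print(f"aggregate indicator of effectiveness: {aggregate:.2f}")
```

Whether to weight each group equally, as this sketch does, or to weight groups by size or influence is itself a judgment about whose assessments should count most.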

It is our hope that others engaged in nonprofit effectiveness research will conduct further study using this concept and instrument. For executives and board leaders, this simple tool (see the resource website for this book for a copy of the tool) may be used as one useful way to assess various stakeholders' judgments of their organization's effectiveness.

Type of Organization Makes a Difference

As many have observed, the (U.S.) legal category that has often been used to define and identify “nonprofit organizations” includes very disparate organizations in terms of activities, size, scope, and other characteristics. What such organizations have in common, at a minimum, is that they cannot distribute earnings to anyone (the non-distribution constraint); publicly supported charities must, in addition, receive certain proportions of their revenues from various public sources (that is, public support).

Research indicates that it can be useful to differentiate among different “types” of nonprofit organizations as we assess the merits of different approaches to understanding nonprofit effectiveness. One limited but conceptually useful approach is to distinguish among publicly supported charities by general revenue orientation. Specifically, with a growing interest in social entrepreneurship and nonprofit commercial enterprise, we have found it useful to distinguish between “donative” and “commercial” charities (a distinction apparently first proposed by Hansmann, 1980). In other words, it is useful to compare characteristics of conventional nonprofits that operate largely on donations (thus, they are called “donative” organizations) versus nonprofits that engage in commercial activity to generate income. Some who advocate that nonprofits become more commercial certainly see such organizations as importantly different from donative nonprofits. Are they different from an effectiveness perspective?

One study that distinguished organizations by primary revenue source (private donations, government contracting, and commercial) found that chief executives of donative organizations reported using significantly more board involvement practices, compared to commercial and government-dependent organizations (Hodge and Piccolo, 2005). In our research (Herman and Renz, 2002; 2004b) we also investigated whether classifying nonprofits into donative and commercial categories would make a difference.

This kind of distinction is useful from both theoretical and practical perspectives. Neo-institutional theory suggests that organizations will use larger numbers of prescribed board and management practices as a way of showing that they are legitimate (that is, demonstrating to funders and other interested stakeholders that a nonprofit does the right things). The goal model, on the other hand, would lead us to believe that such practices are used as a rational means to achieve organizational goals. But if the use of these practices were chiefly about seeking legitimacy, then we would expect donative nonprofits, which depend more heavily on donors and grantmakers who look for such signals, to use a greater proportion of both prescribed board and management practices. We did find that, over time, the donative nonprofits increased their use of correct practices to a greater degree than did the commercial nonprofits.

We also compared whether financial management outcomes (surplus and change in revenues) were more strongly related to use of correct management practices in donative than in commercial nonprofits, and we found that, for donative nonprofits, the more they used both prescribed management practices and formal performance management over time, the larger their surplus. For commercial nonprofits, neither of these “good management” indicators was related to change in surplus. Similarly, for donative nonprofits, the more they used both prescribed management practices and formal performance management, the greater their increase in total revenues. For commercial nonprofits, neither good management indicator was related to financial growth. Galaskiewicz and Bielefeld (1998) found similar results in their research, although they focused on organizational growth (measured by revenues and numbers of employees and volunteers) rather than effectiveness itself.

We (Herman and Renz, 2004a) also examined whether stakeholders regarded either form of nonprofit, commercial or donative, as more effective, and found no substantive differences for any stakeholder group. There were clear differences in the extent to which organizations relied on commercial versus donative or public sources of income, but this distinction was not consistently related to the use of board or management practices, managerial tactics, or stakeholder judgments of effectiveness. In other words, these and other studies indicate that it is useful to examine how management practice and effectiveness differ along the commercial-donative distinction, but not all of these differences are related to judgments of effectiveness.

Differentiating Program, Organization, and Network Effectiveness

Nonprofit effectiveness, despite its elusiveness, is so important to so many stakeholders that it is not surprising that managers have done their best to measure and use whatever effectiveness results they can find to improve management practices. Yet too often these assessments focus on the measurement and use of program outcomes. Nonprofit organizational effectiveness is related to, yet distinct from, effectiveness at both the program and network levels. The level of analysis makes a difference in understanding effectiveness, so it is important to differentiate effectiveness at the program, organization, and network levels.

The recent emphasis on the assessment of program outcomes as a way to assess organizational effectiveness suggests that some stakeholders (especially funders) consider program effectiveness to be more important or of greater interest than other kinds of effectiveness. And nonprofit organization effectiveness is sometimes treated merely as the sum of the effectiveness of an agency's programs. But research and practice both affirm that organizational effectiveness is not identical to program effectiveness and, similar though they are, each must be understood and assessed separately.

Sawhill and Williamson (2001) have argued that nonprofit missions (which certainly are closer to the organizational level of effectiveness) could be measured, yet they ultimately back away from that assertion and focus on the value of setting specific and fairly difficult goals. They also extol the marketing and public relations advantages of communicating performance goals. Such approaches may well be useful for managing, yet they do not provide a systematic basis for generating evidence relevant to general nonprofit effectiveness.

Likewise, it is becoming increasingly important to understand nonprofit effectiveness from the perspective of networks, especially in an era when “collective impact” approaches are becoming more popular with many foundations and other community funders. (Collective impact initiatives are initiatives that integrate the work of a large number of nonprofits to address a complex community or system challenge; see Kania and Kramer, 2011, for further explanation.) An emphasis on the effectiveness of nonprofits as separate and clearly distinct entities can easily lead to the conclusion that an organization creates its own effectiveness. However, in many ways, the perceived effectiveness of an organization often depends on the effectiveness of other organizations and people with which it is interconnected. As more nonprofits deliver services through networks of service delivery (including collective impact initiatives), network effectiveness will become increasingly important to understand in relationship to organizational effectiveness.

For example, Provan and Milward (1995) investigated how network characteristics (among community mental health service providers) were related to assessments of client outcomes. They found that client and family assessments of client outcomes were closely correlated with each other, although staff assessments were not correlated with them (illustrating the thesis that stakeholders often evaluate program outcomes differently). They also found that network centralization was most clearly related to positive client and family assessments. Studies of program effectiveness may often need to go beyond an organizational focus to an understanding of networks (Provan and Milward, 2001).

Implications

The strong interest of NPO managers, board members, funders, and NPO regulators in finding clear answers to the question “How can an NPO be effective?” compels us to articulate some of the practical implications of the information we have presented. We do so in this final section.

Implications for Organizational Practice

Important stakeholders frequently are not clear about their bases for assessing a nonprofit's effectiveness. As with art, they may know effectiveness when they see it, but what do they look for? Further, over time, many stakeholders will change their implicit criteria for assessing effectiveness. It is essential that NPO leaders regularly interact with key stakeholders to ensure that they understand those stakeholders' criteria and how they may be changing. And if NPO leaders find that stakeholder criteria are off base, they must help those stakeholders refine them.

Research by Balser and McClusky (2005) supports the importance of managing stakeholder relations. In an in-depth qualitative study, they found that organizations identified as highly effective by a panel of knowledgeable observers differed from much less effective organizations in the ways, and the extent to which, they engaged stakeholders. They suggest that effective stakeholder engagement is more than mere frequency of communication: effective nonprofits exhibit a consistent thematic approach to engagement. We believe it is crucial for the organization's managers to understand what stakeholders expect and to move the organization toward more fully responding to and meeting those expectations (including, when appropriate, honestly challenging or debating them).

Some are uncomfortable with the notion that nonprofit effectiveness is a social construction; they worry that this means that nonprofit effectiveness is arbitrary. It is not. And even though effectiveness is socially constructed, there are useful dimensions of effectiveness (such as financial condition, fundraising performance, or program outcomes) that can be grounded in “hard” data. For example, use of generally accepted accounting principles provides solid evidence about revenues, costs, and surplus. Other dimensions of effectiveness, such as those related to community collaboration or working with volunteers, are likely to be less amenable to “hard” evidence. We support and encourage the use of “hard” evidence to the extent legitimately possible, but we also know that nonprofit leaders should not expect that all of their stakeholders will interpret and use that evidence the same way or combine it with other kinds of evidence in the same ways.

The popularity of “best practices” attests to the hope of finding a pot of gold at the end of the search. One key assumption of the best practices approach is that a particular technique or process that works well in one setting can and should be incorporated into other different settings. This may be true for certain rather standard administrative functions, for example, the adoption of procedures to improve billing. However, in many instances a practice that enhances effectiveness in one organization may be a poor choice for another.

We do not conclude that practices and procedures are unimportant. Undoubtedly, every organization must discover and continually seek to improve its practices, consistent with its values, mission, and stakeholders' expectations. But these practices must fit together to enhance effectiveness.

Implications for Boards and Governance

Board members need to understand that NPO effectiveness is socially constructed, that it is not a stable construct, and that different stakeholders will judge it differently. Likewise, board effectiveness is socially constructed and changeable. Thus, a critical role that board members may serve on behalf of a nonprofit is that of a monitor and sensor—a vital link to help the agency remain in touch with the potentially changing effectiveness judgments of key stakeholders.

Just as with management practices, we do not believe that the research suggests that board process management is unimportant. But not only is there no “silver bullet” (that is, no single practice that ensures effectiveness); there is no “silver arsenal” for board success either. Boards, perhaps with the help of executive or other facilitative leadership, need to identify those processes that will be most useful to them. Do not adopt a practice just because others say it is useful. Ask some key questions: Does the practice fit this board's circumstances? Does the practice actually help the board reach good decisions? Does the practice contribute to the organization's success?

Implications for Program Evaluation and Outcomes Assessment

We have explained the need to be careful about using program outcome assessments to judge nonprofit effectiveness. We see only a very few (rather unlikely) circumstances under which program outcomes could legitimately be considered to equal organizational effectiveness. (Such a conclusion would be valid, for example, in situations in which the nonprofit conducts only one program and there are no other explanations for outcomes, such as the effect of other programs or events.) These circumstances are so unusual that, for the typical nonprofit, program outcomes assessments must be regarded as relevant but limited indicators of organizational effectiveness.

Certain approaches to program evaluation may be uniquely useful. For example, qualitative forms of program evaluation that emphasize the engagement of key stakeholders in the process (see, for example, Patton, 1997) may more closely align with the realities of organizational effectiveness and be more likely to help all stakeholders to work toward mutually valued results.

Implications for Capacity Building and Capacity Builders

Given that we lack evidence for “best practices,” those who fund or provide capacity building support should avoid advocating for one best way or set of ways for doing things. Ideally, they will recognize and support an array of promising practices and provide process skills and knowledge to help nonprofits assess the match of the practices to their environment, circumstances and stakeholders. (See Wing, 2004, for more on the dilemmas facing those funding and doing organizational capacity building.) Further, capacity building should go beyond the internal organization and help nonprofit leaders create processes by which to identify and understand the interests and expectations of key stakeholders, and to create constructive strategies by which to engage them. As noted earlier, some promising practices will differ depending on the domain or field of service of the organization. Therefore, capacity builders should research and help nonprofits identify the practices that are considered fundamental or “absolutely required” as matters of ethical practice, as well as the emerging and promising practices that will be relevant to effectiveness, given a nonprofit's particular domain and environment.

Conclusion

There is little doubt that readers will find some of these propositions and implications more compelling than others. We offer these observations and suggest these implications not because we have “the answers,” but because we want to encourage further thoughtful examination of the construct of effectiveness and how we can better understand, measure, and develop it. We invite executives, scholars, and practitioners alike to test these observations and consider their implications for their work. Only as a community will we be able to develop useful understandings of nonprofit organization effectiveness and how we can build the sector's capacity to achieve meaningful results.

References

  1. Abzug, R., and Simonoff, J. S. Nonprofit Trusteeship in Different Contexts. Burlington, VT: Ashgate, 2004.
  2. Balser, D., and McClusky, J. Managing Stakeholder Relationships and Nonprofit Organization Effectiveness. Nonprofit Management and Leadership, 2005, 15(3), 295–315.
  3. Baruch, Y., and Ramalho, N. Communalities and Distinctions in the Measurement of Organizational Performance and Effectiveness Across For-Profit and Nonprofit Sectors. Nonprofit and Voluntary Sector Quarterly, 2006, 35(1), 39–65.
  4. Bloom, H. S., Hill, C. J., and Riccio, J. A. Linking Program Implementation and Effectiveness: Lessons from a Pooled Sample of Welfare-to-Work Experiments. Journal of Policy Analysis and Management, 2003, 22(4), 551–575.
  5. Brown, W. A. Exploring the Association Between Board and Organizational Performance in Nonprofit Organizations. Nonprofit Management and Leadership, 2005, 15(3), 317–339.
  6. Crittenden, W. F., Crittenden, V. L., and Hunt, T. G. Planning and Stakeholder Satisfaction in Religious Organizations. Journal of Voluntary Action Research, 1988, 17(2), 60–73.
  7. DiMaggio, P. J., and Powell, W. W. The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields. American Sociological Review, 1983, 48, 147–160.
  8. Forbes, D. P. Measuring the Unmeasurable: Empirical Studies of Nonprofit Organization Effectiveness from 1977 to 1997. Nonprofit and Voluntary Sector Quarterly, 1998, 27(2), 183–202.
  9. Galaskiewicz, J., and Bielefeld, W. Nonprofit Organizations in an Age of Uncertainty. New York: Aldine de Gruyter, 1998.
  10. Hansmann, H. The Role of Nonprofit Enterprise. Yale Law Journal, 1980, 89, 835–901.
  11. Heinrich, C. J., and Lynn, L. E., Jr. Governance and Performance: The Influence of Program Structure and Management on Job Training Partnership Act (JTPA) Program Outcomes. In C. J. Heinrich and L. E. Lynn, Jr. (eds.), Governance and Performance: New Perspectives, Washington, DC: Georgetown University Press, 2000.
  12. Heracleous, L. What Is the Impact of Corporate Governance on Organisational Performance? Corporate Governance, 2001, 9(3), 165–173.
  13. Herman, R. D., and Renz, D. O. Multiple Constituencies and the Social Construction of Nonprofit Organization Effectiveness. Nonprofit and Voluntary Sector Quarterly, 1997, 26(2), 185–206.
  14. Herman, R. D., and Renz, D. O. Theses on Nonprofit Organizational Effectiveness. Nonprofit and Voluntary Sector Quarterly, 1999, 28(2), 107–126.
  15. Herman, R. D., and Renz, D. O. Effectiveness in Commercial and Donative Nonprofit Organizations. Paper presented at the Annual Meeting of the Association for Research on Nonprofit Organizations and Voluntary Action, Montreal, Canada, Nov. 14–16, 2002.
  16. Herman, R. D., and Renz, D. O. Doing Things Right: Effectiveness in Local Nonprofit Organizations, a Panel Study. Public Administration Review, 2004a, 64(6), 694–704.
  17. Herman, R. D., and Renz, D. O. Investigating the Relation between Financial Outcomes, Good Management Practices, and Stakeholder Judgments of Effectiveness in Nonprofit Organizations. Paper presented at the Annual Meeting of the Association for Research on Nonprofit Organizations and Voluntary Action, Los Angeles, Nov. 18–20, 2004b.
  18. Herman, R. D., and Renz, D. O. Advancing Nonprofit Organizational Effectiveness Research and Theory: Nine Theses. Nonprofit Management and Leadership, 2008, Summer, 18(4), 399–415.
  19. Hodge, M. M., and Piccolo, R. F. Funding Source, Board Involvement Techniques and Financial Vulnerability in Nonprofit Organizations. Nonprofit Management and Leadership, 2005, 16(2), 171–190.
  20. Jackson, D. K., and Holland, T. P. Measuring the Effectiveness of Nonprofit Boards. Nonprofit and Voluntary Sector Quarterly, 1998, 27(2), 159–182.
  21. Kania, J., and Kramer, M. “Collective Impact.” Stanford Social Innovation Review, 2011, 36–41.
  22. Kanter, R. M., and Brinkerhoff, D. W. Organizational Performance: Recent Developments in Measurement. In R. H. Turner and J. F. Short, Jr. (eds.), Annual Review of Sociology (Vol. 7). Palo Alto, CA: Annual Reviews, 1981, 321–349.
  23. Kaplan, R. S., and Norton, D. P. The Balanced Scorecard: Measures That Drive Performance. Harvard Business Review, 1992, 70(1), 71–79.
  24. Keehley, P., Medlin, S., Longmire, L., and MacBride, S. A. Benchmarking for Best Practices in the Public Sector: Achieving Performance Breakthrough in Federal, State, and Local Agencies. San Francisco: Jossey-Bass, 1997.
  25. Meyer, J. W., and Rowan, B. Institutionalized Organizations: Formal Structure as Myth and Ceremony. American Journal of Sociology, 1977, 83(2), 340–363.
  26. Odom, R. Y., and Boxx, W. R. Environment, Planning Processes, and Organizational Performance of Churches. Strategic Management Journal, 1988, 9(2), 197–205.
  27. Patton, M. Q. Utilization-Focused Evaluation: The New Century Text (3rd ed.). Thousand Oaks, CA: Sage, 1997.
  28. Pfeffer, J. Size, Composition and Function of Hospital Boards of Directors: A Study of Organization-Environment Linkage. Administrative Science Quarterly, 1973, 18, 349–364.
  29. Preston, J. B., and Brown, W. A. Commitment and Performance of Nonprofit Board Members. Nonprofit Management and Leadership, 2005, 15(2), 221–238.
  30. Provan, K. G. Board Power and Organizational Effectiveness among Human Service Agencies. Academy of Management Journal, 1980, 23(2), 221–236.
  31. Provan, K. G., and Milward, H. B. A Preliminary Theory of Interorganizational Network Effectiveness: A Comparative Study of Four Community Mental Health Systems. Administrative Science Quarterly, 1995, 40(1), 1–33.
  32. Provan, K. G., and Milward, H. B. Do Networks Really Work? A Framework for Evaluating Public-Sector Organizational Networks. Public Administration Review, 2001, 61(4), 414–423.
  33. Quinn, R. E., and Rohrbaugh, J. A Competing Values Approach to Organizational Effectiveness. Public Productivity Review, 1981, 5(2), 122–141.
  34. Sawhill, J. C., and Williamson, D. Mission Impossible? Measuring Success in Nonprofit Organizations. Nonprofit Management and Leadership, 2001, 11(4), 371–386.
  35. Scott, W. R. Institutions and Organizations. Thousand Oaks, CA: Sage, 1995.
  36. Siciliano, J. I. The Relationship between Formal Planning and Performance in Nonprofit Organizations. Nonprofit Management and Leadership, 1997, 7(4), 387–403.
  37. Steers, R. M. Organizational Effectiveness: A Behavioral View. Santa Monica, CA: Goodyear, 1977.
  38. Stone, M. M., Bigelow, B., and Crittenden, W. Research on Strategic Management in Nonprofit Organizations. Administration & Society, 1999, 31(3), 378–423.
  39. Stone, M. M., and Cutcher-Gershenfeld, S. Challenges of Measuring Performance in Nonprofit Organizations. In P. Flynn and V. A. Hodgkinson (eds.), Measuring the Impact of the Nonprofit Sector. New York: Kluwer Academic/Plenum, 2002.
  40. Thomas, J. C. Outcome Assessment and Program Evaluation. In R. D. Herman (ed.), The Jossey-Bass Handbook of Nonprofit Leadership and Management (3rd ed.). San Francisco: Jossey-Bass, 2010.
  41. Tsui, A. S. A Role Set Analysis of Managerial Reputation. Organizational Behavior and Human Performance, 1984, 34, 64–96.
  42. Wing, K. T. Assessing the Effectiveness of Capacity-Building Initiatives: Seven Issues for the Field. Nonprofit and Voluntary Sector Quarterly, 2004, 33(1), 153–160.
  43. Yuchtman, E., and Seashore, S. E. A System Resource Approach to Organizational Effectiveness. American Sociological Review, 1967, 32, 891–903.