Charity Navigator’s New “Impact Score” Tells Us Little About a Nonprofit’s True Value

 

Editor's Note: Our Lisa Pilar Cowan and Chicago Beyond's Liz Dozier recently co-authored the opinion piece below for The Chronicle of Philanthropy. You may view the original piece here.

The nonprofit watchdog group Charity Navigator last fall announced a new feature it designed to provide donors with an improved measure of nonprofit effectiveness. An “impact score” based on “how much good the nonprofit achieves per dollar of cost” will now be added to each nonprofit’s profile. Sounds great, right? Theoretically, it gives a donor insight into how much “good” will result from their dollars. In reality, it doesn’t come close to delivering on that promise.

Charity Navigator, which acquired an organization called ImpactMatters to provide this new measure, isn’t the first or only entity to define a donation’s success by its return on investment. But this attempt to quantify impact for a broad range of nonprofits nationwide is troubling. Because Charity Navigator uses only data that can be standardized across organizations, many qualitative factors are lost. Its primary impact measures — such as the share of budget spent on programs, whether program fees are charged, and whether the organization already receives private funding — are overly simplistic and do little to help donors understand a nonprofit’s true value to the community it serves.

In our roles as a former high-school principal and an education program director, we’ve seen the harms that often accompany a one-size-fits-all evaluation approach. It’s not unlike a “color-blind” approach to equity. Context is important, and when we ignore it, we may end up replicating the very thing we are fighting to eradicate.

Consider, for example, this revealing thought exercise included in the ImpactMatters blog post about the impact score:

A program has a limited budget of $100,000 to improve literacy in a community. It can choose between two approaches to do so: One that can boost literacy by a grade level for 100 students and a second that can also boost literacy by a grade level but for 200 students. All else equal, a sensible program administrator would choose the second, as of course it reaches twice as many students. This is a cost-effectiveness decision. We have limited resources and unlimited needs. Cost-effectiveness is a decision tool that makes those resources go further — helping more people in more ways.

The telling words here are “all else equal.” Many factors could vary from one program to another: Do the two programs offer the same supports? At what grade level are the students reading? Who is leading the program, and who is staffing it? What is the organization’s relationship to the community? What literature are the students reading? Are the students primarily being taught how to take a test, or are they learning critical thinking skills? Are their reading materials at school in the same language they speak at home? Are they getting enough to eat during the school day?

Both of us are now grant makers who have centered our work on trusting grantees. We think Charity Navigator’s approach to measuring impact misses the mark — and the point. Impact should be defined, or at least informed, by the organizations and communities that experience the work firsthand. (For more on this, check out Chicago Beyond’s recent guidebook, which reveals the seven inequities lurking within most evaluation systems.)

Beyond the specifics of evaluating a literacy program (or food pantry or senior center), we strongly believe that what an organization chooses to measure is a statement about what it values. When we measure the quality of a nonprofit based on return on financial investment, we ignore the complexities of a grantee’s work and reinforce the idea that the most important factor in giving is protecting a foundation’s reputation or a donor’s wealth. And when we prioritize wealth and reputation over any other indicator of a human life, we are actively preserving the very inequities we purportedly want to eradicate.

This attention to impact as a function of the grant-making process is especially relevant at a moment when philanthropy’s legitimacy has been challenged by critiques of both its effectiveness and its historic role in perpetuating systemic inequities.

So what’s the alternative? Here are a few ways we think about impact and evaluation at our foundations:

“How” we fund is intimately related to “what” we fund. The nature and quality of our interactions with nonprofit partners are as important to achieving the outcomes we seek as what we fund and support. We see our grantees as equal partners in this work, so how we engage with them is critical to the effectiveness of that work. While foundations typically consider themselves the “brains” and grantees the “brawn,” we believe the nonprofits we support are the experts on their own issues and solutions. Working together, as equals, without the constraints of those false roles, lays the groundwork for strong collaborative efforts that ultimately produce the results we all seek.

Shared learning is a critical part of our evaluation process. What if we approached evaluation as an opportunity to learn with nonprofits, rather than measuring impact with overly simplistic metrics like spending ratios and program fees? What if the evaluation becomes a shared pursuit in understanding the challenges, opportunities, and evolution of a program or organization? When we approach evaluation in this way, it becomes an avenue to further understand the context in which successes and failures occurred, to notice inequities, and to collaboratively reflect on progress. This approach, far more than a grade on a nonprofit’s return on investment, would benefit any organization — and ultimately the community it serves.

How we measure grantee success internally is a reflection of our goals for society as a whole. Approaching grantee relationships from a place of trust, humility, and transparency is key. Within our foundations, we are building the kinds of assessment and planning processes that we want to see in the larger world. This means listening, especially to grantee partners, so that we develop a shared understanding about the issues, challenges, and solutions ahead.

Those of us occupying leadership positions at foundations must be self-reflective about our practices and share accountability for the impact of the work. That is what solidarity and collaborative stewardship look like. It is a deliberate, shared experience toward a common cause.

This is why the Charity Navigator impact score is so problematic. It renders all other factors void. It gives us an excuse to put up blinders rather than open a window into the complexities and larger context behind a grantee’s potential success. Too often in philanthropy we try to boil things down. And, in the process of all that boiling, we lose the opportunity to have a lasting impact on the people and the communities we desperately want to help.