Face Plants: Survey Says…The Survey Isn’t Working Out


By Philip Li

President & CEO 

Editor’s Note: This post is part of an occasional series exploring funder failures and mistakes. We call it ‘Face Plants.’ You can find earlier posts in the series on our main blog page. 

I have an aversion to the phrase “best practice,” because it connotes that there is an optimal way to do something. It’s as if a lab has tested all the possibilities and has anointed a winner. It discourages thinking in different ways. And, in reality, these ideas are only ‘best’ until something better comes along.   

At the Robert Sterling Clark Foundation, when we come across an idea that has potential or resonance, we're apt to call it a 'promising practice.' The term conveys that the idea shows promise but is nonetheless a work in progress, one that may evolve, change, or even be abandoned. Some ideas continue to be refined, while others go 'bust.'

As I reflect on my past few years at the foundation, there's one approach that showed great promise but ultimately became a failure—or a face plant, as we like to call them.

When we began our current grantmaking program, with its focus on investing in people through leadership development, we wanted to see if there was a way to measure the impact we were having. It wasn't so much about tracking the progress of each grantee partner as it was about assessing ourselves, as a foundation, and how we were doing as grantmakers. We felt that our own vetting of grantee partners was the due diligence, and we wanted to see how well that vetting was working and how it could be improved.

Given the wide array of issue areas our grantee partners address, and the fact that all of our grants are for general operating support, we wanted a common rubric or tool that could be used across the portfolio. We came upon an online tool that felt right and moved forward with it.

Moreover, it felt like a 'win-win,' which was important. Our grantee partners would get a mirror on their own performance across a variety of dimensions and could use the data as a benchmark to compare themselves to similarly situated organizations. They could then prioritize investments, if desired, in areas that needed strengthening. And we'd get what we were looking for: an aggregate report on the portfolio that served as a report card on ourselves. We were also trying to see whether the results might help us 'make the case' for flexible funding more broadly in the sector.

The first year, everything went smoothly and the feedback on the tool was encouraging – or at least that was what was conveyed to us. Some grantee partners told us it was helpful to get a snapshot of how they were doing and that it helped them identify areas on which to focus, though a few noted challenges with the survey's applicability to their work. We felt good, and it seemed, um, promising.

It wasn't until we asked the grantees to use the tool again a year later, as a way for them to see a year-over-year comparison, that we encountered some resistance. Only a third of the grantees had completed the survey by the 'deadline,' and despite gentle reminders – sent by the administrators – we saw little increase in participation. In a moment of disappointment and frustration, I whined, "We ask for so little, and they can't do the one thing we ask of them." That's when my colleague Lisa called me out on bad funder behavior.

Fortunately, our first-ever grantee partner retreat was coming up, and we had hoped to brief grantees on the survey results there. With the low participation rate, however, we asked our independent evaluators to reorient the grantee-only session and gather feedback on the new grantmaking process, including the survey.

What the grantee partners shared was invaluable and, of course, eye-opening. Our grantmaking process got favorable reviews, but the survey did not. In fact, it was overwhelmingly despised. The evaluators were told that our grants weren't of a magnitude that would allow us to attribute any changes in an organization to our support. The requirement that some board members participate in completing the survey forced some organizations to use a valuable 'chit' with them. And the survey questions didn't align well with some groups' work, since they weren't direct service organizations.

We hadn't known what to anticipate, and their feedback left us stunned but grateful for their candor and willingness to tell us what wasn't working. When we returned to work on Monday, we immediately suspended the survey tool. That prompted a surprisingly powerful, positive response from the grantee partners, and even offers to help our evaluators devise a tool that better captures the impact our grants have – and possibly makes the case for flexible funding.
