Let’s say you’re writing a grant proposal.
But on the grant requirements page, you see that they want statistics on a whole range of things you have no way of measuring. Then they tell you that if you can’t measure it, they can’t make sure you’re using their money wisely, THEREFORE you won’t get the grant.
Why this is wrong
If a kid gets an A in math after you tutored him, can your nonprofit really take credit for that? No, because, as I’ve written before, there’s too much noise in the system. So many other things could have caused it. His parents could have given him a homework space. He could have formed a study group with friends. Lots of things could have contributed to his improved grades.
If you clean up a stream, you can say, “We cleaned 5.6 miles of stream,” and a grantor might deem this a useful activity. But what if you don’t categorize the type of waste you dredged up? What if you didn’t weigh and measure it precisely enough? And what about the rest of the stream? Is cleaning one section of a very long stream really going to make a difference? Aren’t you just putting a band-aid on the problem unless you also run a public awareness campaign about not littering? But where is the money to market that awareness campaign? Foundations don’t fund marketing!
If you help a woman escape from a domestic violence situation, you cannot measure the ripple effect it has on her children, on her family, or on her future. Your typical domestic violence agency cannot keep track of what happens to its participants three, four, or five years down the line. They don’t have the resources to keep checking in with people. And again, there’s too much noise in the system. What if she got a job but then went back to school? What if she went back to her abuser? Can your nonprofit take the blame, or take the credit, for any of that?
What I’m trying to say here is:
Real, lasting change cannot be reduced to a single metric like overhead or the number of people “served.” Changing a culture or an institution is typically too sloppy, random, never-ending, and elusive to be captured by a mathematical formula or metric.
This post is a response to Alison Bernstein’s “Metrics Mania” article, published by the National Education Association. Alison has a fantastic writing style, so I am going to be quoting liberally from her article here. She works at the Ford Foundation, and she writes about current trends in grantmaking, what she’s noticed, and the historical context of grantmakers trying to solve social problems. And everything she says is true.
So, how does this relate to you?
Alison writes:
“Coercive accountability” . . . is the idea that an organization and its grants can only be effective when it arrays all the data that are known or can be measured by a metric and makes decisions based on that metric. But the metric by its very nature only measures what can be measured, and thus it is a proxy or an incomplete indicator of what is actually happening.
Want more things to get mad about?
Okay.
Ever heard of “coercive analytics”? Alison Bernstein says:
I want here to focus on the process applicants are being asked to undertake to get a grant. This new process requires them to draft and re-draft proposals so that they fit the philanthropist’s sense of what works or should work in any given setting. As one grantee anonymously put it, “Foundations have become more focused on developing pre-set portfolios of projects, managing risks, and producing outcomes rather than listening to communities… with their new strategies and staff, foundations are increasingly treating NGOs like ours not as innovators but as contractors who are hired to deliver donors’ visions of what needs to be done.”
So, basically, funders are going to tell you how to solve the problem, with NO on-the-ground experience. They will tell you what outcomes you should be looking for and what to measure, so their investors can feel like they invested in the right thing. This is BEYOND wrong. And this is where grantmaking is going.
You know, I won’t front like I’m better than that. I did this too, BEFORE I KNEW BETTER.
Hold up.
I was volunteering in Indonesia in 2003, in mobile health clinics in the poorest slums of Jakarta. I had NO IDEA what Indonesians really needed, but I decided, before I got there, that they needed condoms. Once I was on the ground, preparing medicines for people in the clinics, I saw that what was REALLY hurting people was lack of access to clean water. That was where all of their diseases were coming from. Grantors and funders who hand down top-down dictates about what the problem is and how it should be solved are making the same grave mistake I made, except on a MUCH, MUCH bigger scale.
As I’ve written before, this is why strategic philanthropy is insulting. It says that we, the little people with immediate experience of the problem, don’t understand it, or how to solve it, as well as people who live 1,000 miles away from the problem and maybe once cracked a book that talked about it.
Here’s what I suggest.
If you REALLY want to measure nonprofit effectiveness, measure how well the nonprofit is treating its workers.
What is the turnover rate? What is the employee retention rate? (A quick sketch of how to compute both follows this list.)
What steps do leaders take when they find a subordinate has made a mistake?
Who is allowed to make mistakes?
Is there a budget for employee education?
How do the staff feel about working there? Do they get health insurance? Do they get paid time off?
Do people stay for a year and then leave because of senior leadership or bad pay?
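For the first two questions, the math is simple enough to sketch. Here is a minimal example, assuming the standard HR formulas (turnover = separations during the year divided by average headcount; retention = the share of your starting staff still there at year’s end). The function names and the example figures are hypothetical, just to show the calculation.

```python
# Minimal sketch of the two staffing metrics above.
# Standard HR formulas; the names and figures are hypothetical.

def turnover_rate(separations: int, headcount_start: int, headcount_end: int) -> float:
    """Separations during the year divided by average headcount."""
    average_headcount = (headcount_start + headcount_end) / 2
    return separations / average_headcount

def retention_rate(still_here_from_start: int, headcount_start: int) -> float:
    """Share of the people on staff at the start of the year still there at the end."""
    return still_here_from_start / headcount_start

# Example: 20 staff in January, 18 in December, 7 departures during the year,
# and 13 of the original 20 still on staff.
print(f"Turnover:  {turnover_rate(7, 20, 18):.0%}")   # ~37%
print(f"Retention: {retention_rate(13, 20):.0%}")     # 65%
```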
When we measure this, we are measuring impact. Suddenly, charity leaders will realize they are being watched, and they’ll pull up their socks.
It’s cruel to expect nonprofit workers to work as hard as for-profit workers with far fewer resources, no job security, and no pay-for-performance bonuses.
What do you think?
Great points! You make me feel lucky that I oversee a grant with performance measures and indicators that my organization crafted ourselves. It’s also helpful to know that other professionals, especially those with more experience, still struggle to measure their outcomes successfully and accurately. With the advent of new digital media in the last 10 years, there seems to be no excuse for focusing solely on quantitative data, except that funders demand it.
Dear Desiree,
So happy that you’ve got performance measurements that you crafted yourselves. I happen to know that you also worked on the ground with programs for a long time, so you know how things SHOULD be measured, rather than being beholden to funders’ random analytics.
Thanks for commenting Desiree! I appreciate it!
Mazarine