Reporting benefits and harms

When I was in graduate school I had a fellowship that placed me with a community development financial intermediary. The organization, like most agencies in the social sector, was interested in demonstrating the effectiveness of its interventions.

I asked the executive director whether she wanted me to try to figure out what impact their work was having, or if she simply wanted me to report positive outcomes. Depending on how you look at a spreadsheet, you can make any gloomy picture look like rock-star results. To her credit, the executive director asked that I try to develop a real picture, not simply a rosy one.

But most of the pictures painted in the “Impact” sections of organizations’ websites are of the Photoshop variety. There is a lot that is wrong with the way outcomes are reported and conflated with impact in social sector communications. One problem I consistently see is reporting positive outcomes while neglecting to report the changes for those who experienced worse outcomes.

For example, the website of a celebrated family-focused startup non-profit boasts that 20% of children in their program increased school attendance. Sounds great. But what happened to the other 80%? Did their attendance stay the same, or did it get worse? And if so, by how much?

Increases always sound nice, but does an increase necessarily outweigh a decrease? If 20% of students improved school attendance and 80% attended school less, would we still consider the program a success?

Well, we probably need some more information to answer this question, information which is never provided in this type of outcomes advertising. We would at the very least need to know what an increase or decrease in attendance means. A 20% increase in students attending classes sounds great, but if a kid was missing 30 days of school a year and now she misses 29, is the gain really as meaningful as it first sounded?
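To make the arithmetic concrete, here is a minimal sketch with entirely invented numbers (none of these figures come from the program mentioned above), showing how a “20% improved” headline can coexist with a net drop in attendance:

```python
# Hypothetical cohort of 100 students; every figure here is invented for illustration.
improvers = 20   # students whose attendance rose, each by 1 day per year
decliners = 80   # students whose attendance fell, each by 2 days per year

net_change = improvers * 1 + decliners * (-2)
print(net_change)  # -140: total days attended fell, despite the 20% headline
```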

More importantly, what would have happened to these kids without the program? We need an estimate of their counterfactual (what the attendance of these youth would have been without this program’s intervention) in order to truly determine whether we think this increase is reason to celebrate (or the possible decrease for a portion of the other 80% is cause for alarm).

Ultimately this comes down to a question of what I call reporting believability. Most of the impact results I have seen on organizations’ websites are simply not believable, as they tend to claim outrageous and unsubstantiated gains.

But these unsubstantiated claims of impact are big business. And if the social sector wants to truly move toward evidence-based programming, we need to figure out how to make it more profitable for organizations to report credible data instead of fantastical folly.