Idealistics is shutting down. This blog will continue to be updated at fullcontactphilanthropy.com.
Snake oil nonprofit consultants sell outcomes as impact

Google’s advertising algorithm knows me too well. Pretty much the only advertisements I see now are for non-profit services. I tend to click through to these advertisements as a way of checking the pulse of social sector offerings outside the typical circles I operate in.

Yesterday I clicked on this advertisement for Apricot non-profit software by CTK. The advertisement includes a product demonstration video for their outcomes management system in which the narrator conflates outcomes with impact ad nauseam.

Sigh.

“Outcomes” is not another word for “impact.” An outcome refers to a target population’s condition; impact is the change in that condition attributable to an intervention.
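The distinction is easy to see with numbers. Here is a minimal sketch, with all figures invented for illustration (none come from a real program):

```python
# Hypothetical figures only, for illustrating the definitions above.

# Outcome: the treated population's condition at follow-up.
treated_employed = 0.70      # 70% of program participants employed

# Counterfactual: what that rate would have been without the program,
# here taken from an assumed comparison group.
comparison_employed = 0.55   # 55% employed without the program

outcome = treated_employed                        # what outcomes reporting shows
impact = treated_employed - comparison_employed   # change attributable to the program

print(f"Outcome: {outcome:.0%} of participants employed")
print(f"Impact: {impact * 100:.0f} percentage points over the comparison group")
```

A dashboard showing the 70% tells you the outcome; only the comparison against the 55% counterfactual tells you anything about impact.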

While we all tend to use the same buzzwords (“manage to outcomes”, “collective impact”, etc.), we lack uniform agreement on what these terms mean. In the case of outcomes and impact, these are terms that come from the evaluation literature, and are (hopefully) not open to manipulation by social sector consultancies with more depth in marketing than social science.

There are some who believe that helping an organization at least understand its outcomes is a step in the right direction. I count myself as one of them. But telling an organization it can infer something about impact and causality simply by looking at a distribution of outcomes is not only irresponsible, it is downright dishonest.

The promise of metrics is to help the social sector move toward truer insight, not to use data to mislead funders. Whether the persistent misuse of outcomes metrics is intentional or the result of ignorance, it has no place in our work, and only stands to derail the opportunity we all have to raise the standards of evidence in our sector.

  • http://twitter.com/isaac_outcomes Isaac Castillo

    I feel your pain David.  This topic (difference between outcomes and impact, and how you measure both) is something I have presented about a lot in 2012. I think far too many people throw around the terms without understanding their technical definitions – and ‘impact’ and ‘collective impact’ have become the cool jargon terms.  

    What is just as troubling to me is that people are too scared to ask others what they actually mean by ‘impact’. If people actually stopped someone, asked ‘what do you mean by impact’, and forced them to define it, I think it would lead to a lot more productive discussions, as everyone would be on the same page.  

    Slides from a presentation that I did earlier this year that talked about outcomes and impact from a funder perspective can be found here if you are interested:
    http://www.childtrends.org/Files//Child_Trends-2012_02_02_SP_RightEval.pdf

    • http://www.fullcontactphilanthropy.com David Henderson

      Isaac, thanks for the comment and link to your slides. I think you are right that pressing people on what they really mean when they say outcomes and/or impact is a good way to surface the problem, and to help them wrap their heads around what they really intend to measure.

      Reading through your slides I was struck by a few things. First, I’ve seen you make the “do no harm” argument in a few different places, and every time I’m struck by how important this issue is. Of course, the problem is that to really measure whether a program is doing harm, you need some form of counterfactual, and thus move into the realm of what you refer to in your slides as evaluation and away from performance management.

      On this issue of performance management versus evaluation, in your slides you write that performance measurement is focused exclusively on outcomes. I would push back on that a bit, as ideally one would at least go through the mental exercise of imagining a counterfactual, if not use regression techniques or other data sets to construct a synthetic comparison group.

      With the organizations my firm works with, we try to get them to consider where outcomes might be insufficient, not just from an evaluation standpoint but from what you refer to as performance measurement. These techniques for constructing comparison groups may not meet the standard one would expect for estimating an average treatment effect as rigorously as possible, which might be the distinction your slides are getting at. I just wouldn’t want to give the impression that impact has no role in performance measurement, as ideally we would use impact instead of outcomes wherever possible.