Idealistics is shutting down. This blog will continue to be updated at fullcontactphilanthropy.com.
Operationalize, optimize, then advertise your outcomes

In the for-profit sector, profits are the bottom line. Yet companies spend considerable amounts of money trying to figure out whether their products work as advertised, whether their customers are happy, and what they can do to improve quality and customer satisfaction. While advertising drives sales of an effective product or service, smart organizations are careful to first figure out how to measure the effectiveness of their offerings with their target consumers (operationalization), optimize based on customer feedback, and then advertise.

In the social sector, we make up an intervention and then skip straight to advertising the hell out of it. No measurement plan, no operationalization of desired outcomes, and certainly no optimization (an impossibility if we aren’t measuring our effectiveness to begin with).

While we are loath to operationalize, optimize, or do anything that rhymes with “evaluation”, we love it when it rains advertisers. Our advertisers come in the form of grant writers, social-media-for-social-good consultants, and pretty much anyone willing to work on a retainer to tell an organization it’s fantastic.

But how fantastic can we be without sensible data collection strategies? And how much can we improve if we continue to offer the same intervention year after year without improvements? The assumption is that the continued existence of an intervention is sufficient proof that a service is working and is valuable. Of course, this is the simple fallacy that always leads to the poor getting screwed, especially when their choice is between bad services and nothing.

The emphasis on advertising stems from agencies’ survival instincts. Indeed, the primary function of any organization is to continue to exist. I get that. But the irony is that funders and donors are begging for any organization to step up with reliable metrics and believable outcomes.

As funders and donors have started to demand more evidence of impact from organizations, the usual suspects of advertising consultants have shifted their rhetoric (but not offerings) to appear more in line with the shifting philanthropic landscape. All of a sudden, non-profit marketing consultants with backgrounds in Russian literature and interpretive dance are qualified to help organizations craft logic models and develop rigorous data collection strategies.

One such consultant, whom I was later hired to replace in order to clean up his mess, outlined a data collection strategy that included an outcome of "a healthy, vibrant community free of crime where all people are treated with respect and dignity."

!%#*

What an operationalization nightmare. How the heck do you measure that? You don’t. And that’s the point. The logic model was not a functional document used for optimizing results. Instead, it was an advertising document to lull unsavvy donors into believing an organization is effective in the absence of evidence.

The good news is that donors and funders are starting to get wise to the backward "advertise first" mentality. The social sector is shifting, for the better, to reward organizations that take their data collection plans seriously and look to improve their impact rather than simply advertise it to anyone willing to listen.

Organizations hoping to enjoy fundraising success in the future would be wise to invert their funding strategy to a model that emphasizes operationalization and optimization of outcomes first. In this new era of philanthropy, without evidence of impact, your advertising partners won’t have anything to sell.

  • Jennifer Banks-Doll

    David, I really enjoy your blog and love the example you have given above about the unmeasurable outcome. So true and so common! But I feel like you've left us hanging…What kind of outcomes would you recommend instead? Can you give us an example or two? Thanks!

    • http://www.fullcontactphilanthropy.com David Henderson

      Jennifer, thanks for the question and kind words. The problem above is one of unoperationalized goals. For example, "safety" is an abstract concept. So what an organization needs to do is define what these abstract concepts mean.

      "Safety" could be a mix of actual and perceived freedom from crime. In this case you might use two metrics to approximate safety: the crime rate in an area and a community survey of residents' perception of the safety of their neighborhood.

      The next step is to determine some weighting on these indicators that approximates the goal of safety. Depending on your organization's priorities, you might put more weight on how people "feel" about their safety than on the crime rate. Indeed, the data might show that as crime rates go up, so do perceptions of safety, as more arrests might mean more police activity rather than more crime.
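      To make the weighting idea concrete, here is a minimal sketch of how two indicators might be combined into a single composite score. The weights, normalization ranges, and function names are illustrative assumptions, not something prescribed in the post; a real organization would choose these to reflect its own definition of safety.

      ```python
      # Hypothetical sketch: combining an objective and a subjective indicator
      # into one "safety" score. All weights and ranges are assumptions.

      def normalize(value, worst, best):
          """Map a raw value onto a 0-1 scale, where 1 is the best outcome."""
          return (value - worst) / (best - worst)

      def safety_index(crime_rate, perceived_safety, w_crime=0.4, w_perception=0.6):
          """Weighted composite of two safety indicators.

          crime_rate: incidents per 1,000 residents (lower is better;
                      assumed to range from 0 to 100 here)
          perceived_safety: mean survey response on a 1-5 scale (higher is better)

          The default weights favor perception, echoing the point above that
          residents' feelings may matter more than raw crime statistics.
          """
          crime_score = normalize(crime_rate, worst=100, best=0)  # fewer incidents -> higher score
          perception_score = normalize(perceived_safety, worst=1, best=5)
          return w_crime * crime_score + w_perception * perception_score

      # Example: 20 incidents per 1,000 residents, average survey response of 3.8
      print(round(safety_index(20, 3.8), 3))  # -> 0.74
      ```

      The design choice worth noting is that each indicator is normalized onto the same 0-1 scale before weighting, so the composite is not dominated by whichever indicator happens to have larger raw numbers.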

      The point is to operationalize (make measurable) what it is that an organization intends to affect. A common misperception is that "measurable" necessarily means an indicator that is inherently numeric (like a crime rate).

      Qualitative measures, like an opinion survey on perceptions of safety in an area, are perfectly valid metrics when collected with care. While brainstorming about goals might begin up in the clouds, in order to really measure effectiveness (and thus improve and demonstrate outcomes) organizations need to do the work of defining precisely what success means to them and how to measure that success.