Idealistics is shutting down. This blog will continue to be updated at
Impact transparency and audited outcomes statements

Transparency is generally thought to be one of the core values of the social sector. Executives and non-profit consultants go to great lengths to ensure agencies remain transparent.

Despite our obsession with transparency (some may argue because of it), nonprofit fraud is not widespread. One would of course hope that the social sector exists for more than just to not commit fraud. Are our standards really so low that a day free of governance malfeasance is a day well spent?

Of course not. The social sector exists to create social value. That’s what donors expect of us, and that is what most of us expect from ourselves.

Charity watchdogs like Guidestar have begun moving beyond financial and governance issues, encouraging organizations to post evidence of impact on their websites. While the for-profit technology startup scene advocates the importance of “failing fast” in order to test ideas and iterate quickly to achieve effective offerings, the social sector seems petrified by the prospect of evaluation, lest an evaluative inquiry discover that an organization has failed to single-handedly eradicate global poverty.

And yet without feedback, no program, for-profit or non-profit, can improve. Savvy donors understand that. In the new era of impact investing, donors are more persuaded by an organization they believe has an honest approach than by an agency with a fantastical story.

My work centers on helping organizations improve program impact by learning from their evaluative metrics. I got into this business to help organizations help people better, but a natural consequence of better outcomes is stronger fundraising as well.

As the donor mindset shifts to that of the impact investor, I have begun to encourage the organizations I work with to learn (and fail) out loud. While many organizations have “impact” pages on their websites, the content of those pages tends to be a story of one amazing case or an enumeration of a select subset of program outputs. These pages tend to be neither terribly informative nor believable to the savvy donor.

But despite the plethora of shadowy, unsubstantiated claims of exceptional impact on organizations’ websites, these same organizations will consider it a sign of organizational integrity to publicly post their third-party audited financial statements.

So, we consider it a value to publicly post our financials but not our impact? Frankly, I’m more interested in the results an organization produces than how it spends its money. Most donors are.

To help our clients better communicate their results (and more importantly their institutional learning) we developed a system that allows organizations to publicly share the reports we produce on their websites.

Just as there is value in hiring a third party auditor to examine an organization’s financial data, we believe the same is true for evaluative metrics. The public has grown to expect audited financial statements from non-profits. Audited impact statements are the logical next step if transparency is truly valuable to the social sector. Indeed, from a donor perspective, impact transparency might be the only kind of transparency that matters at all.

Evaluative metrics bring truth in advertising to the social sector

My primary focus is helping organizations improve their programs based on outcomes metrics, but there is no doubt that fundraising and development are on the minds of every agency executive I work with. I used to shy away from this fact, but I now embrace the desire of agency executives to use evaluative data for fundraising purposes as an opportunity to bring truth in advertising, a particularly exciting prospect as donors become increasingly savvier investors.

It is undeniable that there are plenty of organizations (and people) that overplay their contributions to the public good. But there are also organizations that unwittingly mask their outcomes in poorly defined metrics, framing their social value to donors in ways that are at odds with their own internal organizational decision calculus.

A large affordable housing development and homeless service providing organization I work with is a great illustration of how our choice of outcomes metrics can obscure the real value an organization aims to optimize over.

As is typical in homeless services, this organization reports, among other things, the number of people housed annually. The problem with this metric is that it values housing a mentally ill chronically homeless person who has been on the streets for 19 years the same as housing someone who slipped into homelessness for one month due to momentary economic shocks, like job loss.

Assuming the number of people housed is actually what this organization wanted to maximize, the rational thing to do would be to move away from chronic homelessness and only house those whose homeless spells are likely to be very short. But this is not how the organization actually thinks, or how it makes program decisions.

This agency, again like many homeless service providers, cares deeply about those experiencing the continuum of homelessness, especially those who are chronically homeless. In economic terms, the organization derives more utility, or value, from housing a more difficult to house individual than from a less difficult to house person.

The trick then is to first internally formalize the organization’s utility framework, and then to identify the outcomes metrics the organization is actually optimizing over. For example, instead of the number of people housed, a more meaningful metric might be the number of years of homelessness prevented, or the number of years of life preserved that likely would have otherwise been lost. Not only are these metrics more representative of how the organization plans its interventions, they also paint a more complete picture for potential donors.
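As a toy sketch of the difference (the caseload and the years-prevented figures here are invented for illustration), the two metrics can rank the exact same work very differently:

```python
# Hypothetical caseload: each housing placement records an estimate of
# the years of homelessness the intervention prevented.
placements = [
    {"client": "chronically homeless for 19 years", "years_prevented": 19},
    {"client": "one-month spell after a job loss", "years_prevented": 1 / 12},
]

# Metric 1: number of people housed -- both placements count identically.
people_housed = len(placements)

# Metric 2: years of homelessness prevented -- weights the harder case.
years_prevented = sum(p["years_prevented"] for p in placements)

print(people_housed)               # 2
print(round(years_prevented, 2))   # 19.08
```

Under the first metric the two placements are interchangeable; under the second, the chronically homeless placement carries nearly all of the value, which better matches how the organization actually makes program decisions.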

In cases like this, I embrace the fundraising and development aspirations of the organizations I work with to the extent that helping them better understand their data will actually move them closer (rather than further) from articulating a truer story of the social benefit their agency contributes toward.

Your donors are not stupid

Anyone familiar with the concept of an outlier is fairly well inoculated against the seductive aroma of a well-penned anecdote. Yet fundraising consultants continue to peddle the tired myth of the winning story, the singular parable that will endow an agency into perpetuity.

We are in the era of savvy donors. While I applaud the sector’s recognition of the importance of outcomes measurement, I’m afraid we are overemphasizing the importance of big numbers versus good measurement practices.

When I was in college I ran a small non-profit that was surprisingly competitive in grant applications. We did not necessarily have a greater impact than those organizations we competed for funding against (quite the contrary in fact). Although we did not have bigger numbers than other organizations, we did have better processes and capacity to collect and learn from our data.

Other organizations provided bigger numbers; we provided more believable ones.

The pressure to report big numbers is more self-imposed than we tend to realize. Foundations and donors are begging organizations not for more fantastically large metrics, but something they have good reason to believe is true, within some well estimated margin of error.

As donors become more sophisticated in their giving, increasingly more agnostic across focus areas, organizations must also become more sophisticated in their internal use of information and more statistically honest in their reporting.

Anyone can make up a big number. 5,439! See, that was easy. But it doesn’t mean anything. If you know it, so do your donors.

Indeed, the brilliance of the dawn of the impact oriented social investor is that organizations would do well to fully embrace evidence based processes for the betterment of program impact and the assurance of future funding.

But having data is not enough, and blindly reporting metrics or wrongly inferring meaning from misleading infographics is plainly transparent to this new crop of social investors. To win their hearts, organizations must win these donors’ minds. And in their minds, they are investors, no longer emotionally manipulatable donors.

Minimizing years of life lost due to homelessness

Housing First, the approach to chronic homelessness that places people into supportive housing instead of treating them on the streets, has been largely adopted by homeless advocates due to its measurable cost savings. Numerous studies have found that the cost of placing chronically homeless persons in supportive housing is lower than the cost of servicing them on the street.

Housing First’s detractors argue investment in Housing First is crowding out more traditional interventions like homeless shelters, which serve chronic and non-chronic homeless persons alike.

While the research on Housing First has largely focused on the cost of housing versus not housing the chronically homeless, the larger debate is really about how to best allocate the total sum of dollars available for homeless services, chronic and non-chronic.

Ultimately, any social investment is about maximizing or minimizing an outcome given a set of constraints. While cost has largely driven the Housing First rhetoric, another way to think about homeless interventions is to minimize the decrease in life expectancy an individual experiences due to homelessness.

According to Pathways to Housing, chronic homelessness decreases life expectancy by an average of 25 years. Given this assumption, I wondered how many years of life lost a non-chronic homeless person would have to suffer for us to be indifferent between investing in chronic versus non-chronic homelessness.

In 2011, the estimated national homeless population on a given night was 636,017 people, with 107,148 chronically homeless and 528,869 not chronically homeless. To set up my indifference model, I assumed that the chronically homeless population was a closed population, that is, that the same 107,148 people stayed homeless for the year (a knowingly incorrect approximation).

Calculating the number of non-chronically homeless persons is more complicated: the annualized number of non-chronically homeless persons is a function of the average duration of their homeless spells (measured in months spent homeless in my model). Mathematically, the approximate annual number of unique non-chronically homeless persons is the point-in-time count divided by the fraction of the year spent homeless.

To get the point of indifference between investing in chronic versus non-chronic homelessness, assuming all we care about is minimizing the total number of years of life lost, we need to find the point where the sum of all years of life lost for chronically homeless persons is equal to the sum of all years lost for non-chronic homeless persons.

The following chart shows the average number of years a non-chronically homeless person would have to lose on account of their homelessness based on the number of months in the year spent homeless. Assuming a twelve-month average duration of homelessness, non-chronic homelessness would have to take an average of five years off of non-chronically homeless persons’ life expectancies to equal the twenty-five-year average loss of life for chronically homeless persons.
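The calculation behind that chart can be sketched as follows, using the 2011 point-in-time counts and the 25-year Pathways to Housing assumption above (the function is my reconstruction of the model, not code from the original analysis):

```python
CHRONIC_POP = 107_148        # 2011 point-in-time count, treated as a closed population
NON_CHRONIC_PIT = 528_869    # 2011 point-in-time count of non-chronically homeless
CHRONIC_YEARS_LOST = 25      # average years of life lost to chronic homelessness

def indifference_years_lost(avg_months_homeless):
    """Average years of life a non-chronically homeless person would have to
    lose for total years of life lost to be equal across the two groups."""
    # Annual unique non-chronic persons = point-in-time count / fraction of year homeless
    annual_non_chronic = NON_CHRONIC_PIT / (avg_months_homeless / 12)
    total_chronic_years = CHRONIC_POP * CHRONIC_YEARS_LOST
    return total_chronic_years / annual_non_chronic

# At a twelve-month average spell: roughly five years, as described above.
print(round(indifference_years_lost(12), 2))  # 5.06
```

Shorter average spells increase the annualized non-chronic population, so the indifference point falls: the more people cycle through brief homelessness each year, the fewer years each would have to lose for the totals to balance.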

The above calculations make a lot of assumptions, and are better understood as approximations rather than concrete guidelines. Furthermore, the model values all years of life lost equally, that is, twenty-five years of life lost by one person is equivalent to one year of life lost by each of twenty-five people, which might not be how you actually think about the value of life.

I’m not sure what the average duration of non-chronic homelessness is, so I can’t necessarily weigh in on whether this model would suggest a change in the current investment mix in homeless services. Regardless, if the social sector is to be more strategic with its investments, we would do well to carefully consider what outcomes we are maximizing or minimizing over. While the cost savings of Housing First are encouraging, I would hope that our objective has more to do with minimizing harm to humans than to our coffers.

Your grants budget should include evaluation

Evaluation and fundraising are two very different worlds, yet there is a dangerous trend in the social sector to conflate them. To be very clear, the skillsets and objectives of an evaluator are different from those of a grant writer and fundraiser, as well they should be.

I care a lot about improving evaluation, and data literacy, in the social sector. It is the only way we will be able to move beyond our current level of collective impact (whatever that level is). As a firm that specializes in helping organizations learn from their evaluative metrics, I often struggle with the best way to position our work.

On the one hand, our customers are significantly more competitive in grant applications because they have a better approach to understanding their outcomes, and improving programming accordingly, than other agencies. But while our customers enjoy a competitive advantage in grant applications, my partner and I have always resisted the urge to encourage organizations to work with us to improve their financial bottom lines.

The reason we have resisted drawing a relationship between fundraising and evaluative metrics is that our job is not to help organizations “tell their story” or “prove an organization is a great non-profit”. We try to help our customers learn from the reality of their program impact (or lack thereof), and to improve their programming based on as true an estimate as we can get of the effects of their interventions.

The fact is that funders are desperately looking for any evidence that their dollars are making a difference in the world. That is why our clients are more competitive in grant applications, because they can demonstrate an enhanced capacity to evaluate their outcomes and learn from their mistakes. This is not the same thing as demonstrating that they are awesome, rather, it is signaling that they are capable of identifying where they are awesome, where they are not, and how to improve.

I saw a job posting on Idealist for a director of grants and evaluation. The job description largely entailed focusing on grant opportunities, with the “evaluation” portion solely dedicated to proving program impact. This is absolutely the wrong way to think about evaluation.

I do believe it makes sense for organizations to dedicate some of their grants budget to evaluation, but not to prove program impact. Instead, any organization that can demonstrate an ability to identify program success and failure, and the capacity to learn from those results, will stand out to evidence starved funders.

White House holds homeless app competition, triviality announced as winner

The Department of Veterans Affairs and the White House are holding an app competition for mobile applications that connect unhoused persons to social programs. The competition has announced the top five finalists, including the demeaningly named Sherlock Homeless, but triviality has already stolen the show.

The premise of the app competition was flawed from the outset, and is emblematic of a chronic syndrome in the social sector. We are too easily swayed by the trends of the corporate world, time and again believing that if we just copy what those in the corporate sector do we will enjoy success.

In the mid to late 2000s the craze was hiring MBAs into the social sector. If only non-profits were more like businesses! The economic collapse at the hands of MBAs cooled that trend, but alas the app craze filled our empty panacea cup.

As Silicon Valley blazes trails like it did in the 1990s, the social sector has started wondering why there is an app for sharing photos with friends, but there is no app for ending poverty. Well, photo sharing is trivial, ending poverty is not.

But we can make trivial applications about poverty. That must count for something, right?


And yet the White House itself is pushing the misconception that technology can solve social problems. It cannot. If connecting people to homeless services were simple enough that a part-time developer could solve the problem, Google would have done so a long time ago. Google indexes pretty much every website on the Internet, so why does Google search fail to effectively connect people to services?

Social service agencies do not always have a web presence, and when they do, they do not maintain their sites with sufficient information to make referrals. That is why 211, and my own company, employ people to manually maintain our resource databases. The problem of maintaining resource data is not technological, it is logistical, and there is no app for that.

I am not arguing that there is no place for technology in the social sector. As a firm that uses technology in its work with social sector organizations, obviously I believe there is a place for technical innovation in our work. But slick, shiny apps with ridiculous names and soon to be outdated databases are not what anyone needs.

There is a reason that in the app economy apps sell for a dollar. They are easy to make, and easy to forget. We don’t need apps, we need real solutions.

Lower standards guarantee higher outcomes

Measurements are often given meaning relative to thresholds. Someone is housed or unhoused, poor or not poor, by some definition. Yet these thresholds are arbitrary, and open to debate and manipulation. While one might think there would be agreement on what homeless means, especially since it is a word that almost defines itself, there is considerable argument over its definition with significant policy consequences.

As the social sector struggles to measure its impact and make the case that real progress is being made, the LA Unified School District (LAUSD) might have found the easiest and most foolproof way of increasing graduation rates: lower graduation standards. The LAUSD is facing a dropout crisis, and like many social sector organizations, whether government or non-profit, is feeling the pressure to improve outcomes based on a set of measurable indicators. For schools, a fairly important indicator is graduating students.

Yet graduation is actually a proxy for being educated. While econometric models tend to find positive returns to education (more education, more money), the diploma itself does not beget a wage increase. Instead, diplomas are a signal that a worker can perform at a certain level. Although lowering graduation requirements, in this case by potentially allowing students to graduate with fewer credits and lower grade point averages, will increase graduation rates, will it actually make students more educated? And what about those students who graduate under current LAUSD standards, might this proposal cause them harm by allowing students to take fewer classes or earn worse grades?

I’m no education expert, but I work with a range of organizations facing considerable pressure to move the needle on one metric or another, and the approach the LAUSD is considering could set a dangerous precedent for the social sector. If an employment agency wants to inflate employment statistics, it might conflate temp work with full-time jobs. And if a homeless services organization wanted to show an improvement in housing placements, it might refuse to work with chronically homeless persons and only serve those whose homeless spell would last a month regardless of interventions.

Is someone at 101% of the poverty line not poor while someone at 99% of the poverty line is poor? Any anti-poverty organization worth its salt would want to help both of these people, regardless of an arbitrary line set by the federal government. We use thresholds to help describe what we mean by the objectives we aim to address. Lowering our standards and adjusting how we define measurement of our objectives is an easy way to inflate our outcomes, but it is not terribly satisfying and is in no way meaningful.
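A toy example (the client incomes are invented) of how moving an arbitrary threshold changes the headline number without changing anyone’s actual circumstances:

```python
# Client incomes as a percentage of the federal poverty line (hypothetical).
incomes = [85, 99, 101, 140, 250]

def poverty_rate(incomes_pct, threshold=100):
    """Share of clients counted as poor under a given cutoff."""
    return sum(1 for i in incomes_pct if i < threshold) / len(incomes_pct)

print(poverty_rate(incomes))                 # 0.4
print(poverty_rate(incomes, threshold=90))   # 0.2 -- same people, "better" outcome
```

Nothing about the clients changed between the two calls; only the definition did. That is exactly the kind of improvement a lowered standard produces.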

The LAUSD would be wise to focus its efforts on helping students achieve its vision of an educated youth population rather than lowering its standards to award diluted diplomas. The people we serve are more than binary variables that fail or succeed. Our outcomes metrics are only valuable if they help identify where we truly succeed and fail. If instead we simply want high scores, then make up any number you want and call it a day.

Evaluating your organization’s use of metrics

Evaluating organizational effectiveness is a growing sub-field of the social sector, with a slew of competing measurement frameworks. Something a lot of these frameworks assess is whether organizations make use of a data management system. The idea is that an organization that has a data management system in place is more likely to be data savvy and to actively manage to outcomes.

This might be a reasonable proxy for whether an organization actually incorporates evidence in its practice. But from where I stand, data only has value in so far as it helps an organization make higher impact decisions. Therefore, I propose a more robust approach to evaluating an organization’s use of metrics.

If an organization’s behavior before implementing a data oriented approach is exactly the same after implementation, then no value has been added. Data should help inform action, not just confirm prior beliefs. It’s hard to imagine any organization (or individual) that does everything so perfectly that there is no room for improvement.

Effective uses of data collection and evaluative analysis should help drive program improvements. If you can’t identify any changes in your organization’s behavior, then whether or not the organization has a data management system and processes in place to collect information, it has not actually benefited from its data efforts.

More important, just because an organization changes its behavior based on metrics does not necessarily mean it benefited from an effective use of information. Indeed, information should help us make better decisions. In some cases, organizations make poor decisions that are backed by data.

A classic example is the Space Shuttle Challenger disaster. NASA used information to back up its decision to go ahead with a launch in unusually cold weather, which led to the failure of an O-ring seal and the explosion that killed all the astronauts on board.

In this case, NASA made a data-backed decision, but it used its data incorrectly and made the wrong inference, with disastrous consequences. Therefore, it’s not only important to collect your data and use it, but to take care to analyze your data properly, and to listen to the reasoning of all parties.

Which leads me to my most important indicator of whether an organization uses information well. Information should inform decision making, but it should not necessarily, on its own, dictate what an organization does. While having a data management system in place is great, and using reasoned analysis in the interpretation of data is better, there is no replacing the judgment of experienced practitioners and the feedback of those we serve.

The best organizations include evaluative metrics as part of their decision frameworks, but they do not supplant their own judgment with a regression.

Data helps answer questions, it does not determine what questions should be answered

As the furor to incorporate metrics in the social sector grows, organizations are feeling the heat to get more data savvy. In principle, this is a good thing. Information should help inform decision making. But there is a big difference between information informing your agenda and allowing it to set it.

Data should inform your answers to questions, but data sets should never determine what questions you seek to answer. Every organization grapples with a myriad of decision problems, from optimizing resource allocations to increasing the social impact of interventions.

The natural role of information is to help us make more informed decisions. But data does not, on its own, answer any questions. And no data set can (or should) determine the most important issues facing an organization. Those questions should be driven by the organization itself and the people it serves.

Yet time and again I see organizations blankly asserting that they need data. Why?

A lot of organizations don’t have a great answer beyond citing the overall direction of our industry. This is a pretty lousy answer, and more importantly leads to half-baked data collection implementations that do nothing to drive organizational change or improve outcomes.

With each organization I work with, before talking about data management systems, what data points to collect, or internal processes for collecting metrics, I ask what they do and what problems they face. Simply put, you don’t know what data you need until you know what problems you’re trying to address.

I’m afraid the glorification of trivial infographics and the blanket mandate that organizations be “data driven” perpetuate a wrong-headed belief that there is inherent value in data. As someone whose whole livelihood is based in data collection and analysis, let me be as clear as I can: data only has value when it informs a decision.

As a sector, we’d be wise to focus less on the “data, data, data” mantra, and to instead engage in discussions about the issues organizations face, and where metrics can help inform better decision making. Despite the misleading glee of those who proclaim the data revolution will transform the social sector, data itself is nothing but a distraction unless it answers specific questions an organization faces.

Measuring the social impact of blogging

Professionally I do two things: I help organizations make high impact data-oriented decisions, and I write. As 2011 draws to a close, I reflect on another year helping a lot of great organizations increase their social impact, and a pile of blog posts that I hope help advance the social sector toward lasting change.

Obviously I believe writing, and the exchange of ideas that comes with it, is important to the growth of our sector and advancement of solutions. If I didn’t believe that, I wouldn’t write anything. But as someone who prefers evidence to anecdotes, facts to feelings, I’m at a loss for much evidence that blogging (at least my blogging) helps move the needle even a little bit.

I feel like I get quite a bit more than I give in terms of writing. And maybe that is okay, so long as I believe writing helps me get better at what I do, and that what I do with my agency has social value.

But my ambitions for writing and the promise of free flows of information in the social sector exceed mere personal gratification and advancement. My hope is that by sharing with one another what works and what doesn’t, we will improve our own services, turning those little insights into collective action.

While articles about changes in the poverty rate and misleading homeless counts are compelling reads for people like us, if that information exchange doesn’t improve the output of our efforts, then what are we doing? There are certainly times when I worry that the articles we write and share with one another have no value other than to amuse ourselves, like a gossip rag for poverty-geeks.

I hope I am wrong, and in 2012 I plan to actively seek evidence to the contrary. I want to believe that we are evolving into a sector that thrives on sharing best practices and possesses the sophistication to integrate information across the fields of politics, sociology, finance, social work, community development, and a slew of other focus areas that collectively sum to the vastness of the social sector.

Indeed, it is in the vastness of the social sector that I worry the value of our information exchange is lost. As you know, the social sector is complex. Its complexity in part stems from the fact that it is not so much a sector, but rather a sector of sectors (some call it the un-sector). The sector-of-sectors nature I fear lends itself to sharing information in parallel, rather than exchanging information that directly impacts what we do.

I, like you, picked this line of work to improve the lives of hurting people. When we read posts, share information, and write on our blogs, we are diverting our time away from our work. I hope, I think, this is a good use of our time. I think writing matters, and I hope in 2012 I can prove it to myself.