Goodbye Idealistics

I co-founded Idealistics in 2005. Eight years is a long time. This is not easy to write, but I have decided to move on from Idealistics.

My interests and skills have evolved over the last eight years, as has the social sector. Data is the belle of the social sector ball, but as a philanthropic community we are at something of a crossroads.

Through Idealistics I have had the opportunity to see firsthand how non-profit and government social programs use (and misuse) data. These frontline realities are significantly messier than the rosy promises poetically popularized by those well paid to be far removed from the day to day difficulties of executing data informed social programs.

The debate about how to use data in the social sector has a lifespan of an unknown length. At some point a new standard of data “best practices” will be established, and the sector will move on to another issue for the foreseeable future. The decisions the sector makes now will have consequences for years to come.

The upside potential is well documented – money flowing to the most effective programs, programs designed to meet social (not just funder) needs. But the consequences of getting this wrong are significant as well, and there are well intentioned forces moving us in the wrong direction.

Like the decisions that led us to the obsession with overhead, a costly mistake that has been with our sector for decades, wrongheaded efforts that supplant analytical capacity with jargon and shallow infographics threaten the sector's future ability to truly learn from evaluative metrics.

Indeed, we ended up with the simplistically misleading overhead-ratio metric in order to avoid the real complexity of comparing the seemingly incomparable. Shortcuts are costly, and if the sector is serious about maturing into a more data driven industry, we have to invest in our workforce’s analytic capacity.

Which brings me back to my decision to close Idealistics. I’ve enjoyed my time with the company immensely, and am proud of its accomplishments. Through this blog and others I have been fortunate to play a small role in the vast discussion about how the social sector uses data.

While blogging has been cathartic, taking pot-shots while hiding behind a WordPress install is hardly heroic. And although Idealistics has allowed me to be a part of making big differences for a relatively small group of organizations, no one could argue Idealistics is moving the needle in how the sector as a whole engages data.

It’s not. I’m not.

My career, and technical training, has prepared me well for the intersection of social issues, data, and technology. I’m grateful that this intersection is the talk of the sector, and I’m afraid I’m squandering this opportunity by trying to be a one-man band.

These issues are too important to me to ignore the fact that the right thing for me to do is to join a team. What that team is, I’m not sure.

I’m at the early stages of reaching out to folks. Hopefully wherever I land, I’ll be a part of an organization (whether a consulting firm, non-profit, technology company, whatever) that is building a smart team to tackle thorny data issues in the social sector head on. If you have any ideas, please shoot me an email at

As for this blog, it will be moving back to its old home at in the coming weeks. If you subscribed via RSS, you don’t need to do anything as the RSS feed will stay the same.

Anyone who has ever been a business owner knows how emotional running a business is. Although I know this is the right decision, it’s still very difficult. I love Idealistics, and am grateful for the opportunities I have had through it to work with great people and organizations.

Goodbye Idealistics.

Automated grant making

Top executives in large corporations tend to be busy, and don't have time to make every decision themselves. A technique used by management consultants to help organizations make decisions consistent with those of their top executives without necessarily having to involve those executives in every decision is to model an executive's values and risk tolerances.

In profit seeking businesses, generally these decisions revolve around how much money (measured in time and assets) a company is willing to risk, and at what level of risk, to receive a certain monetary pay out.

This same concept can be applied to philanthropic giving. By modeling a grant making entity's social values and its assessment of each applicant's ability to deliver results, one could quickly and consistently evaluate an arbitrary number of grant applications.

Valuing outcomes in terms of other outcomes

To illustrate this point let's take the example of two funders, Funder A and Funder B, who each receive two applications, from Applicant X and Applicant Y. Applicant X runs a food program that plans to feed 150 people for a month. Applicant Y runs a housing program that plans to house 20 people for a month. Applicant X is requesting a grant of $26,100 and Applicant Y is requesting $18,000. The following table summarizes this setup.

Table 1: Grant application requests

                Persons housed for a month    Persons fed for a month    Grant request
Applicant X     0                             150                        $26,100
Applicant Y     20                            0                          $18,000

So, which is better? The answer depends on how we value the intended outcomes. The following table demonstrates how Funder A and Funder B each value feeding a person for a month in terms of a person housed for a month.

The idea of measuring one outcome in terms of another is an essential concept to this modeling process. Reading the below table, Funder A values feeding 5 people in a month the same as housing one person for a month. This ratio of 5 people fed to one person housed is Funder A’s point of indifference. That is, according to Funder A, feeding 5 people for a month has the same social value as housing one person for a month.

Table 2: Funder values in terms of persons housed

                Persons housed for a month    Persons fed for a month
Funder A        1                             5
Funder B        1                             15

Funder B puts considerably more value on housing relative to food than Funder A. For Funder B, you have to feed 15 people for a month to equal housing one person for a month. Therefore, we can not only see that Funder B values housing more than Funder A, but that Funder B values housing three times as much, giving us a way to quantify the subjective value differences between these funders.

Using the relative values of persons fed and persons housed for each of the funders, and the number of people each applicant plans to feed and house, we can evaluate each proposal using the value system of each funder. Since we are using "Persons housed for a month" as the base outcome that food outcomes are compared to, Funder A and Funder B both assign the housing program, Applicant Y, a value of 20, which is simply 1 times the number of people Applicant Y plans to house, 20.

In order to evaluate the outcomes of Applicant X, which is a food program, in terms of the housing outcome we are using as a base, we divide the number of people Applicant X plans to feed, 150, by the number of people each funder believes must be fed to equal one person housed. For example, because Funder A values 5 people being fed the same as 1 person being housed, we divide 150 by 5, which leads Funder A to assign Applicant X a value of 30. Using the same logic, Funder B assigns Applicant X a value of 10, reflecting Funder B's stronger preference for housing over food relative to Funder A.

Table 3: Funders’ assessment of social value in terms of persons housed

                Applicant X    Applicant Y
Funder A        30             20
Funder B        10             20

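Table 3's numbers fall out of a simple conversion: count housing outcomes at face value and divide each planned food outcome by the funder's fed-to-housed exchange rate. A minimal sketch of this step (the function name is my own, not part of any real system):

```python
# Express an application's planned outcomes in the base unit, persons housed
# for a month. fed_per_housed is the funder's point of indifference: how many
# people fed the funder considers equal to one person housed.

def social_value(persons_fed, persons_housed, fed_per_housed):
    """Value of an application, measured in persons housed for a month."""
    return persons_housed + persons_fed / fed_per_housed

# Funder A: 5 fed = 1 housed; Funder B: 15 fed = 1 housed.
print(social_value(150, 0, fed_per_housed=5))   # Applicant X, Funder A -> 30.0
print(social_value(150, 0, fed_per_housed=15))  # Applicant X, Funder B -> 10.0
print(social_value(0, 20, fed_per_housed=5))    # Applicant Y, either funder -> 20.0
```

Because every application is reduced to the same base unit, this comparison scales to any number of applicants and outcome types, so long as each outcome has an exchange rate against the base.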

Expected social value calculations

In the above table, we see that Funder A prefers Applicant X’s application and Funder B prefers Applicant Y. But what if we don’t necessarily believe Applicant X and Applicant Y will be able to help as many people as they plan to? We can adjust our model to account for each funder’s confidence in each applicant’s ability to achieve their intended results.

Table 4: Funder confidence in applicant’s ability to deliver

                Applicant X    Applicant Y
Funder A        40%
Funder B

The above matrix shows each funder's confidence in each applicant's ability to deliver its intended outcomes. For example, Funder A is only 40% sure Applicant X will deliver its intended outcome of feeding 150 people for a month.

Using these subjective funder probabilities, we can calculate the expected number of people each applicant will help according to the funders' assessments of the applicants' capacities.

Table 5: Expected social value by funder in terms of persons housed

                Applicant X    Applicant Y
Funder A        12
Funder B

By accounting for funders’ confidence in each applicant, we now see that both funders prefer Applicant Y, whereas Funder A preferred Applicant X before applying the probabilities in Table 4, as shown by the preferences depicted in Table 3.
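The expected-value step is just multiplication: each funder's valuation from Table 3 times that funder's confidence from Table 4. Only Funder A's 40% confidence in Applicant X is stated in the text, so the other probabilities below are placeholders I invented to make the sketch runnable:

```python
# Expected social value = (value in persons-housed units) x (funder's
# confidence the applicant delivers). Only the 40% figure for Funder A /
# Applicant X comes from the example; the rest are invented placeholders.

values = {("A", "X"): 30, ("A", "Y"): 20,
          ("B", "X"): 10, ("B", "Y"): 20}

confidence = {("A", "X"): 0.40,   # from the example
              ("A", "Y"): 0.80,   # placeholder
              ("B", "X"): 0.50,   # placeholder
              ("B", "Y"): 0.80}   # placeholder

expected = {k: values[k] * confidence[k] for k in values}
print(expected[("A", "X")])  # -> 12.0
```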

Finally, we can factor in the cost of each applicant's grant request by dividing the grant amount by the expected social value in terms of persons housed.

Table 6: Financial cost over expected social value

                Applicant X    Applicant Y
Funder A        $2,175
Funder B

Accounting for cost, the decisions made by Funder A and Funder B do not change from Table 5, with both funders preferring Applicant Y's application. However, whereas Funder A had only a modest preference for Applicant Y in Table 5, accounting for dollars per unit of social value makes funding Applicant Y a clear decision for both Funders A and B.
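The cost step divides each grant request by the expected social value. Funder A's expected value of 12 for Applicant X follows from the example (30 × 40%); the other expected values below are invented placeholders carried over from the hypothetical confidences:

```python
# Dollars requested per unit of expected social value (persons housed for a
# month); lower is better. Only the ("A", "X") expected value of 12 is
# recoverable from the example; the rest are invented placeholders.

requests = {"X": 26_100, "Y": 18_000}
expected = {("A", "X"): 12.0, ("A", "Y"): 16.0,   # 16.0 is a placeholder
            ("B", "X"): 5.0,  ("B", "Y"): 16.0}   # placeholders

cost_per_value = {(funder, app): requests[app] / ev
                  for (funder, app), ev in expected.items()}
print(cost_per_value[("A", "X")])  # -> 2175.0 dollars per person housed
```

Under these placeholder numbers Applicant Y costs $1,125 per person housed for both funders, making it the cheaper buy, consistent with the conclusion in the text.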

Applying this approach

All the calculations in this example are simple, and this approach scales to any number of data points. The real trick is picking a base indicator to value other indicators against. In this case, we used the number of people housed in a month, but really any indicator can be used.

While this example focuses on grant making, the same idea can be used for any type of social investment decision. By modeling an organization's or grant making institution's values, decisions can not only be made more quickly, but the decision making criteria are made transparent, which helps drive intelligent discussion about whether investments are being made consistently, and whether those decisions are designed to maximize social impact.

As the social sector continues to debate how to better incorporate metrics in our work, we have to move away from simple summary statistics and output enumerations toward more sophisticated uses of data that directly aid decision making. Automating some aspects of grant making is a logical application of data driven methodologies in the philanthropic sector.

Why your 100% success rate does not impress

I have recently noticed a growing number of organizations including "impact" pages on their websites. These impact pages generally include some charts and data on an organization's supposed impact. In the spirit of openness and a sector that is moving toward managing to outcomes, I'm glad to see organizations putting their data out there for the world to see.

Unfortunately, a lot of this data is crap.

While social value is supposed to be the work-product of a social program, the reality is that frontline organizations trade the pretense of impact (whether actualized or not) for funders’ dollars. These financial arrangements logically put pressure on implementing organizations to report good news. And oh boy do they.

I was reading through the impact page of a nonprofit that aims to reduce recidivism by mentoring prisoners while in prison, and then helping those prisoners find paying jobs when they reenter society. This organization boasts a too good to be true 100% success rate. 100%! None of their clients ever return to prison.

Amazing right? The organization goes on to compare their 100% success rate to the 50% national average recidivism rate. This comparison certainly sounds impressive, but I doubt it really is. In fact, comparing the outcomes of the prisoners this organization serves to the overall prison population average is especially misleading.

In a plucky video on the organization’s website, the chief development officer wisely proclaims that you can’t help someone who doesn’t want to help themselves, and therefore their program only serves those who are motivated to leave prison once and for all.

Therefore, the proper comparison group is not all prisoners, but rather only those prisoners who are motivated never to return to prison. Indeed, the effect of their mentoring and job placement program is intermingled with the personal motivations of especially motivated prisoners.

So – is 100% still an impressive statistic? I don’t know. Maybe it is. But, the people in their program might have succeeded regardless. Indeed, 100% of successful people succeed.

I don’t mean to single this one organization out, but rather use this example to illustrate a dangerous trend. I’m disheartened by the number of organizations I see proclaiming ridiculously astronomical success rates with pure confidence their interventions are the causes of such outstanding results. If this is how we plan on using data in the social sector’s great data renaissance, then we’re in trouble.

Part of growing into a savvy consumer is learning the difference between a scam and the real deal. When we see a deal that looks too good to be true, we tend to be skeptical. And rightly so.

Given this obvious skepticism toward scammy-sounding pitches, I'm truly amazed at how many of these exact types of pitches we make in the social sector.

Donors aren’t stupid, and they can tell when something is too good to be true. If you find that your program is 100% successful, instead of declaring to the world that your organization is genius, you’re probably better off reassessing your evaluation, because I’m about 100% sure you’ve made a mistake.

Money and metrics – Google Hangout on using performance data to raise funds

Idealistics is not a marketing or fundraising firm. I try to make this point as clear as possible to all of my potential customers before entering into any engagement.

However, I wholeheartedly believe that improving social outcomes should lead to better fundraising prospects. But do funders and donors really invest in social outcomes?

Next Tuesday, April 30th at 2pm Eastern I will be discussing “How to Use Real Performance Data to Raise More Money” on a Google Hangout with management consultant Nell Edington of Social Velocity.

Nell, a notable exception to my rants about underwhelming nonprofit consultants, is well known in philanthropy circles for encouraging organizations to focus on financing instead of fundraising. In the Google Hangout, we will identify cases where organizations we have worked with have successfully translated their outcomes metrics into more funding.

As funders and donors transition toward a social investment mindset, organizations that can report credible outcomes and demonstrate an ability to learn from their data will prove most competitive for charitable dollars. Among other things, we will discuss what makes outcomes more or less credible, and how organizations can signal institutional learning to funders.

If you have questions you would like Nell and me to discuss in the Hangout, leave a comment below, email me at, message me on Twitter at @david_henderson, or visit the Google Plus event page.

Update April 30, 2013: Since this event has passed, below is the video of the hour-long discussion.

Why comparing your outcomes to community averages might be misleading

I followed a Chronicle of Philanthropy chat titled How to Show Donors Your Programs Are Working earlier this week. While it is encouraging that the social sector is trying to incorporate metrics in our work, data's rise to mind share prominence has also seen the rise of some fairly dubious advice.

One piece of advice from this “expert chat” was that organizations should couch their outcomes in terms of community averages. For example, a tutoring program might look at the graduation rates of students in their program versus graduation rates for the school district at large in order to show their students do on average perform better.

I’ve heard this suggestion a lot – and see organizations proudly declare their outcomes are some percentage greater than the community’s as a whole.

The problem with this approach, and pretty much every mainstream discussion about evaluation, is that there is no serious discussion about the difference between a good and a bad comparison group.

The missing counterfactual

In the evaluation literature, a counterfactual is a hypothetical whereby we try to estimate what would have happened to someone in our program had that very person not received our services.

This is pretty tough to do, and the reason evaluation experts prefer randomization is that it gives a good approximation of the missing counterfactual. That is, randomization allows us to take two people we have no reason to believe are different, provide the program to one but not the other, and then estimate the difference in their outcomes as the program's impact.

The suggestion to use the community as a whole as a comparison group assumes that the people in your program are the same as the people in the community at large with the exception of your services. This is a pretty bold claim.

Let’s go back to our tutoring example. A skeptic like myself might argue that people who choose to participate in a tutoring program are more motivated to graduate high school than the average student. In this way, it’s hard to differentiate if your program actually made students better able to graduate or if the students in your program were just so highly motivated that they were likely to graduate anyway.

When we compare the kids in our tutoring program to kids at large, we might be comparing a highly motivated student to a particularly unmotivated student. This is not a fair comparison to make.
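A quick simulation makes the selection problem concrete. In the toy model below, the tutoring program has zero true effect and graduation depends only on motivation, yet participants still beat the community average simply because motivated students opt in. All the numbers are invented for illustration:

```python
# Toy self-selection simulation: graduation depends only on motivation; the
# "program" does nothing, but motivated students opt in, so a naive
# comparison to the community average still flatters the program.
import random

random.seed(0)
students = [random.random() for _ in range(100_000)]     # motivation in [0, 1]
graduates = lambda m: random.random() < 0.3 + 0.5 * m    # no program term at all

in_program = [m for m in students if m > 0.7]            # only the motivated enroll
community = students

program_rate = sum(graduates(m) for m in in_program) / len(in_program)
community_rate = sum(graduates(m) for m in community) / len(community)
print(program_rate > community_rate)  # -> True, despite zero program effect
```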

Yet we make these comparisons all the time when we blindly compare our outcomes to community averages.

Comparing our outcomes to community averages might be effective from a fundraising standpoint, which was the premise of the Chronicle of Philanthropy talk. But I would argue this particular approach has less to do with “showing your donors your programs are working” and more to do with identifying favorable comparisons that make your outcomes look good.

In so far as evaluation is more about truth than treasure, simple comparisons of outcomes to the community average can be highly misleading.

Data does not make decisions

I participated in my first Google Hangout last Friday on the topic of using data in homeless services. The discussion was organized by Mark Horvath, a homeless advocate and founder of Invisible People. The call included Mark, myself, and three other practitioners with experience applying metrics in homeless services. You can check out the recording of the conversation here.

I was clearly the outlier on the panel as I work with a range of organizations on a variety of issues, so while homeless services is a part of my work, the other folks on the hangout are fully focused on working with those experiencing homelessness, and it shows in their depth of knowledge.

There were a lot of interesting takeaways from the conversation, so I'll likely be referring back to this discussion in future blog posts. But one thing that stood out to me was that the conversation reflected on both the opportunities and the risks of using data in homeless services, a point that applies to all possible uses of outcomes data in the social sector.

At one point in the discussion, our attention turned to the possibility that homeless service providers could use predictive analytics to exclude people from receiving services. Before I go any further, let me describe what I mean by predictive analytics.

Predictive analytics

Predictive analysis is the process of using historical data to try to predict what will happen in the future. There are various types of statistical techniques for doing this, but the basic idea is you try to fit a model that explains what happened to a set of observations in an existing data set. You then use that model to try to predict what will happen to a new set of people you don’t have any data on.

For example, your model might suggest that people who have a criminal background are more likely to get evicted from housing.
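In miniature, that kind of model can be as simple as group-level rates computed from historical records and applied to new applicants. The records and field names below are invented for illustration; a real model would use many more variables and a proper statistical technique:

```python
# A minimal stand-in for "fitting a model": compute historical eviction rates
# by group, then use them as the predicted risk for new applicants. All
# records and field names are invented.

history = [
    {"criminal_background": True,  "evicted": True},
    {"criminal_background": True,  "evicted": False},
    {"criminal_background": True,  "evicted": True},
    {"criminal_background": False, "evicted": False},
    {"criminal_background": False, "evicted": True},
    {"criminal_background": False, "evicted": False},
]

def eviction_rate(records, background):
    """Share of past clients in this group who were evicted."""
    group = [r for r in records if r["criminal_background"] == background]
    return sum(r["evicted"] for r in group) / len(group)

# Predicted risk for new applicants, by background:
print(eviction_rate(history, True))   # higher risk group in this toy data
print(eviction_rate(history, False))
```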

On the one hand, you might use this finding to provide additional supportive services to keep people with criminal backgrounds in housing. On the other hand, you might choose to exclude people with criminal backgrounds from your housing program, which brings us back to a worry my co-panelists raised on the Google Hangout.

Same data, different decisions

Limited resources are a fact of life, which makes it particularly important that organizations develop intelligent ways of rationing their services that maximize the social value they aim to create. So does that necessarily mean we should use data to weed out those who are hardest to serve?

Not necessarily.

People make decisions, not data. Two people can look at the same data, the same sound analysis, and make two different decisions. One organization might decide to not serve a certain target demographic because they believe those individuals would not fare well in their program, but another organization might decide the exact opposite, reasoning that because those individuals have poorer prospects and risk worse outcomes, that they should be a higher priority.

Indeed, that is exactly what the vulnerability index does. The vulnerability index is essentially a triage tool for prioritizing chronically homeless people into housing. The more vulnerable someone is, the higher priority they are to house.

My point is not to argue that it is better to serve those who are more or least vulnerable, but rather to illustrate that data is a tool that helps us make decisions that are consistent with our own values.

While data can help us better assess what has happened and what might happen in the future, it does not tell us what to do. The decisions we make should be informed by data, but data does not make decisions for us.

Jargon, terminology, and the sorry state of nonprofit consulting

I have a pretty low opinion of nonprofit consultants. Most days I use the sorry state of nonprofit consulting as a rallying cry to be a better consultant myself, but every now and then I wonder whether I wouldn’t be better off working in a social sector organization, rather than trying to shout above the crowded mess of nonprofit consultant stew.

Like a pack of rats, we nonprofit consultants stick together. One such consultant posted an article that popped up in my LinkedIn feed last week decrying the use of jargon in the nonprofit sector. The article this consultant shared, a cross post from a nonprofit marketing blog on Guidestar, railed against the use of terms like "optimize" and "impact", arguing these terms are nothing more than impressive-sounding hot air.

I understand the author’s intent, and to be fair the author does preface his list of “buzzwords” to avoid with the caveat that “If any word on the list is truly the most effective choice for reaching your reader, please go ahead and use it”. But I think ultimately what the Guidestar post illustrates is the difference between jargon and terminology.

Jargon versus terminology

I use the terms “optimize” and “impact” all the time in my work, two no-no’s from the cross-posted Guidestar piece. When I refer to optimization, I use it in the context of linear optimization; one of the analytic techniques we use in my firm’s consulting work. And when I refer to impact, I use the definition from the evaluation academic literature.

To me, these terms are useful as they have precise meanings. Indeed, pretty much every field, from medicine to law to engineering, has its own set of terminology. Terminology is valuable as it provides shorthand for people versed in a particular specialty to speak succinctly.

Terminology becomes jargon when people use terminology because it sounds awesome without knowing what those terms mean. This brings us back to the state of nonprofit consulting and my inner conflict.

Nonprofit consulting sucks

I have a specialized skillset around data analytics, program evaluation, and computer programming. These are skills that are useful to a lot of organizations and take a while to develop (and maintain). However, they are skills that are not that useful to most social sector organizations in a full-time capacity, which is why I started my own consulting firm rather than joining a non-profit.

The blurring of the lines between terminology and jargon allows frauds to hide amidst those with real specializations. Many of these consultants, like wolves in sheep's clothing, are self-styled marketers posing as program evaluators, "data gurus", or strategists (my gosh, so many strategists…).

Consultants can be really valuable. There are a lot of specializations that don’t make sense for organizations to employ fulltime, and consulting arrangements allow for people with specialized skills to affordably provide expertise to multiple organizations.

But too few nonprofit consultants are focusing on building skills. Indeed:

The blogosphere, like nonprofit consulting, is full of jargon as real meaning is not easily transferred in short bursts. Skills are developed through time, through significant work experience, reliable research, and professional (i.e. academic) instruction.

Instead of purging terminology from our lexicon, we should instead be expelling those who confuse jargon with knowledge from the social sector.

Service rationing and strategic queuing

People hate standing in line, and just about everyone in the social sector loathes having to decide who should receive a particular intervention and who should not. But service rationing and queuing are facts of life, and taking a sophisticated approach to how we prioritize services can make a substantial difference in a program’s net impact.

I take an operations research approach to program effectiveness. In this framework, each program aims to maximize a social good (or minimize some harm) given a set of constraints. Constraints come in many forms, including financial and time constraints.

Most of the organizations I work with, particularly human service organizations, have to ration their programs. For example, a homeless shelter only has so many beds and might have more demand for beds than supply in any given night. Food, tutoring, vaccinations, and many other types of social interventions have to manage scarcity daily.

So how do we prioritize who gets food, or who has a place to sleep tonight and who doesn't? Typically I see organizations adopt a first-come, first-served approach, but this approach doesn't always generate the best outcomes.

For example, let's say a homeless shelter has nine beds and ten people need a place to stay: the first nine get in and the tenth person doesn't. Seems fair, right?

But what if the first nine people are healthy men in their twenties who have been homeless for one night, and the tenth person is an eighty-year-old chronically homeless man who faces the possibility of death spending one more night on the street?

Assuming the shelter in question prioritizes minimizing life lost, the rational thing to do in this case would be to abandon the first-come, first-served prioritization model and ensure a bed for the eighty-year-old, turning away one of the nine people in their twenties.

This example illustrates that primitive rationing algorithms don’t properly optimize organizations’ intended outcomes, and can have unintended consequences for more vulnerable persons.

A better way is to develop a rationing system that aims to maximize the outcomes most important to the implementing organization. If the organization in our example aims to minimize life lost due to homelessness, then its shelter waitlist should recognize this preference.

Now, the shelter could ensure that it always has beds for the most vulnerable homeless persons by always turning away younger, healthier homeless persons. But this strategy would risk nights where the shelter has empty beds, which wouldn't optimally use the shelter's resources, as the shelter might prefer that a healthy homeless person occupy a shelter bed than no one at all.

So the question then is, how do we make sure we prioritize more vulnerable persons while still trying to minimize the number of empty beds at the end of each evening? If you guessed data, I must be getting predictable.

Specifically, we would use concepts from queuing theory, utilizing historical data on what types of people arrive at what times of the year to build a model that changes prioritization based on the time of year, staffing levels, and perhaps even hour of day (for example, the shelter might lower entry thresholds later in the evening to ensure maximum occupancy). This is the same type of approach that businesses like restaurants and airlines use to allocate their scarce resources, except instead of maximizing profits we are maximizing social outcomes.
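The shape of such a rule can be sketched very simply: rank the waitlist by vulnerability, but relax the admission threshold as the night goes on so beds don't sit empty. The scores, thresholds, and cutoff hours below are placeholders I invented; a real system would estimate them from the shelter's historical arrival data:

```python
# Sketch of time-varying prioritization: most vulnerable first, with an
# admission threshold that drops later in the evening so remaining beds fill.
# Scores, thresholds, and hours are invented placeholders.
import heapq

def admit(waitlist, beds, hour):
    """waitlist: list of (vulnerability_score, name) pairs."""
    threshold = 8 if hour < 20 else (5 if hour < 23 else 0)  # placeholder rule
    eligible = [(-score, name) for score, name in waitlist if score >= threshold]
    heapq.heapify(eligible)  # min-heap on negated score = most vulnerable first
    return [heapq.heappop(eligible)[1] for _ in range(min(beds, len(eligible)))]

waitlist = [(9, "chronically homeless elder"), (2, "healthy twenty-something")]
print(admit(waitlist, beds=1, hour=19))  # early evening: high scorers only
print(admit(waitlist, beds=2, hour=23))  # late: threshold drops, both admitted
```

Early in the evening the scarce bed is held for the most vulnerable; late at night, when demand from high-scoring people has likely passed, the threshold drops to zero so the bed isn't wasted.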

There is a lot of talk, and plenty of nonsense, around how data is supposed to revolutionize the philanthropic sector. I think this focus on revolution is misplaced. Instead, approaches like the one outlined in this post can help us get incrementally more effective. Indeed, the real promise of data is not to show us we have been wrong all along, but instead to provide suggestions as to how we can improve. Strategic service rationing and queuing is a great place to start.

Customer service as a social intervention

With every organization I work with, I begin the consulting engagement by developing an impact theory. The impact theory is the portion of the theory of change that identifies the causal assumptions of how a set of social interventions is expected to drive particular social outcomes.

The impact theory is important because it sets the basis for how a program defines success, and how it intends to get there. Used correctly, the impact theory sets boundaries around what data points an organization’s survey instruments should collect.

At Idealistics we use a database system we developed to model an organization’s impact theory. We call this system the Decision Framework, as it models the decision relevant factors an organization faces in trying to maximize social outcomes.

When I put together an impact theory, I spend time speaking with program directors and staff, hearing from them what they believe differentiates their program offerings. I’m most interested in identifying what they believe would be different in the lives of those they serve were their particular program not to exist. In evaluative speak, this is called the counterfactual.

A large service provider I am working with emphasizes the customer service aspect of their work, which includes traditional basic needs human services offerings like clothing, food, and medical services. Many of these services are available in other forms, provided by alternative agencies. However, this organization argues its interventions are unique not just because of the services offered, but the way in which people are treated when receiving assistance.

Customer service

Social interventions tend to be defined as tangible outputs like medical vaccinations, counseling hours, and food baskets. While traditionally we think about interventions as activities or items that are plainly enumerable, can the way an output is delivered be an intervention in itself?

Brands differentiate themselves on service quality in the marketplace all the time, and charge a premium for it. While we all prefer not to be treated like crap, does better customer service drive better social outcomes? The impact of customer service on social outcomes is an empirical question which likely varies depending on what outcomes an organization intends to affect.

If good customer service is identified as a causal element in driving a set of outcomes, customer service needs to be operationalized into something we can measure, so we can evaluate the possible relationship between intended outcomes and quality customer care.
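As a concrete (and entirely hypothetical) sketch of what operationalizing customer service might look like: suppose each client rates the service they received on a 1–5 exit survey, and we record a binary outcome some months later. We could then check whether the rating is associated with the outcome, for instance with a point-biserial correlation. The variable names and simulated data below are invented for illustration; a real analysis would use the organization's own records and a more careful model.

```python
import random
import statistics

def point_biserial(scores, outcomes):
    """Correlation between a numeric score and a 0/1 outcome."""
    mean1 = statistics.mean(s for s, o in zip(scores, outcomes) if o == 1)
    mean0 = statistics.mean(s for s, o in zip(scores, outcomes) if o == 0)
    sd = statistics.pstdev(scores)
    p = sum(outcomes) / len(outcomes)  # share of clients with outcome = 1
    return (mean1 - mean0) / sd * (p * (1 - p)) ** 0.5

random.seed(0)
# Hypothetical: 1-5 customer service ratings from 200 exit surveys.
scores = [random.randint(1, 5) for _ in range(200)]
# Simulate outcomes weakly related to the service rating.
outcomes = [1 if random.random() < 0.3 + 0.08 * s else 0 for s in scores]

r = point_biserial(scores, outcomes)
print(f"point-biserial correlation: {r:.2f}")
```

Even this toy version makes the key point: once "good customer service" is a number on a survey, the claim that it drives outcomes becomes testable rather than assumed. (Association, of course, is not causation.)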

While an organization might assume there is a positive relationship between good customer service and positive social outcomes, the more interesting question is how an organization should respond if that relationship does not bear out.

The value of not being a jerk

Ideally everyone would be treated well, and no one would act like a jerk while administering social programs.

But what if customer service made no difference in the outcomes an organization was trying to drive, and good customer service came at a cost? If you could serve only half as many people with good customer service as with crappy service, but otherwise get the same result, which is the optimal choice?
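The trade-off in that thought experiment comes down to simple arithmetic. With toy numbers invented for illustration (a fixed budget, good service costing twice as much per person served, and the same per-person outcome rate either way):

```python
# Toy numbers, invented for illustration only.
budget = 100_000
cost_good, cost_basic = 1_000, 500  # cost per person served
outcome_rate = 0.4                  # same under either service model

served_good = budget // cost_good    # people served with good service
served_basic = budget // cost_basic  # people served with basic service

print(f"outcomes with good service:  {served_good * outcome_rate:.0f}")
print(f"outcomes with basic service: {served_basic * outcome_rate:.0f}")
```

If the outcome rate really were identical, the cheaper, crappier service would produce twice the total outcomes, which is exactly why the question is uncomfortable.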

As we move toward a data driven social sector, these are the kinds of questions we will face, especially if the evidence runs contrary to our prior beliefs. It is easy to assume there is value in providing good customer service (and there might be), but we make assumptions that seem intuitive on their face all the time, only to uncover unintended consequences later.

Used properly, evaluative metrics can help organizations make more impactful program decisions, but the real test is how we react when the data suggests what we are doing might be wrong.

Data as hero

The hype over data has become deafening. Small non-profits are obsessed with leveraging big data, and foundations are on the hunt to find the one evaluative metric to rule them all.

We need to take a breath.

Data offers some exciting opportunities in our sector. Predictive algorithms are really good at helping figure out when to sell me things I do not need. So too can we use predictive analysis for social good, like trying to predict the likelihood that a low-income teen will become homeless.
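To make the homelessness example concrete, here is a minimal sketch of what such a risk score might look like under the hood: a logistic model that turns a handful of risk factors into a probability. The factor names and weights below are invented for illustration; a real model would estimate its weights from historical program data, and, as argued below, its output should inform human decisions rather than make them.

```python
import math

# Hypothetical risk factors and weights, invented for illustration.
WEIGHTS = {
    "prior_shelter_stays": 0.9,
    "school_dropout": 0.7,
    "foster_care_history": 0.6,
}
INTERCEPT = -2.5

def homelessness_risk(features):
    """Return a probability between 0 and 1 via a logistic model."""
    z = INTERCEPT + sum(WEIGHTS[k] * features.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

low = homelessness_risk({"prior_shelter_stays": 0})
high = homelessness_risk({"prior_shelter_stays": 2,
                          "school_dropout": 1,
                          "foster_care_history": 1})
print(f"low-risk profile:  {low:.2f}")
print(f"high-risk profile: {high:.2f}")
```

Nothing in the output is a verdict; it is a number a caseworker can weigh alongside everything else they know about the teen in front of them.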

But data is not the be-all and end-all. It is not our savior, and machines do not (and should not) make decisions.

Data, regression, machine learning, etc. are all tools that we can (and should) be using. And while these tools can help illuminate the path forward, they are not the path forward.

The debate around data has become unnecessarily dichotomous and fractious. There should be no debate between people versus data, man versus machine. There is no Skynet.

Those who argue that data is all that matters are just as wrong as those who say it is all about the people. In the social sector, it ought to be all about achieving social outcomes, and toward that end you need good people and good data.

Importantly, we need to do a better job of teaching our people the promise and pitfalls of data. As we look to incorporate more data into social sector decision making, our organizations need to be better integrated than simply having the so-called data-nerds in one room and social sector specialists in the other.

If we are to become savvy users of data, it is essential that we raise the level of data-literacy for all organizational decision makers.

As things stand now, with so much data-hype, it is far too easy for anyone with a spreadsheet to win an argument, even if they are wrong. Indeed, I often worry that my customers do not question my own analysis enough, or that they assume causation where there is none.

Data is no hero. But people can be. We are more likely to be heroes if we work with the data, but data alone will not save us from anything. Used incorrectly, it has the potential to make things worse.