On October 12th, I moderated a webinar – for EPIP national’s “Wednesday Webinar” series – with Sara Olsen on the subject of impact measurement and management – specifically, on the need in the social sector to cultivate expertise around a new role: social impact analyst.
The starting point for our conversation – what initially prompted me to reach out to Sara to see if she’d do a webinar – was an article she co-authored (with Katie Ruff) in the Stanford Social Innovation Review, “The Next Frontier in Social Impact Measurement Isn’t Measurement at All”.
The first paragraph nicely frames the substance of the conversation Sara and I had – along with 40+ participants from across the EPIP national network:
If we want social capital markets to fund the social sector effectively, we need to use social impact data effectively when making investment decisions. And investment decision-making almost always requires that we compare social impact data across locations, programs, or organizations. This is difficult, because contexts, missions, definitions, measurement approaches, and values differ. It’s always apples to oranges, and this “comparison problem” not only affects good decision-making, but also our ability to report on impact at the investment portfolio level.
The “comparison problem” is a nagging challenge in the social sector, for both grantmakers and grantseeking organizations. Typically, philanthropic funders (or investors) resolve the “apples to oranges” mismatch by employing a common set of metrics across their portfolios to evaluate social impact (risk and returns). In fact, many foundations still look to financial accounting standards as the model for standardizing their social impact measures. The trouble is that definitions of key terms (“job” or “success”) and timeframes for goals and outcomes vary greatly among a foundation’s grantees. So how much can a common set of metrics really tell you about impact?
As Sara and Katie write: “Common measures ask the wrong questions, measure the wrong things, and miss the real impact … [and] the more we rely on common measures to solve the comparison problem, the more we end up compromising the meaningfulness of social impact measures themselves.” The quest is really for the “specific impact” created by a program or individual organization. That’s what funders want a clear view of. So what is to be done?
The more insightful approach – which Sara broke down for us in the webinar – involves deploying the skills of a social impact analyst, capable of comparing the quality of fruit, if you will, like apples and oranges. An effective social impact analyst goes beyond the ability to report or identify impact. An analyst has mastery of “impact management” and “impact analysis”. Some definitions:
1) Impact management is the production of information about impact.
2) Impact analysis is the reading and using of that information.
Here are the basic tools of analysis when looking at any given program or organization:
- Define mission and vision
- Analyze stakeholders
- Articulate a theory of change
- Map impact (using clear, attainable key performance indicators, or KPIs)
- Assess value
- Manage impact
Turning our eyes to the landscape in San Diego, some questions to consider:
- Which funders stand out for using effective, transparent evaluation practices?
- Are there philanthropic professionals working in foundations, or consultants in small firms or on their own, who might be considered promising social impact analysts? (We’d love to spotlight them on the blog and in future events.)
- What tools or technologies are available to (and used by) nonprofits to capture their performance metrics and impact? Are there some standout organizations that come to mind?