Redefining and Sharing Outcomes at the Campbell Foundation

July 15, 2014

(Anna Lindgren is the assistant to the president at The Campbell Foundation.)

The Campbell Foundation comprises two offices (Annapolis and San Francisco) with six staff managing an annual grants budget of approximately $10 million. We're unique in the grantmaking landscape – nearly 100% of our grant dollars go to environmental work, funding projects that focus on improving the water quality and ecosystem health of the Chesapeake Bay and the Pacific Coast.

As a foundation, we are always interested in learning what impact our grant dollars, and more importantly the work of our grantees, are having on the environment. We also realize that there must be a balance between seeking those answers and being too rigid or burdensome to grantees.

Our process to develop clearer outcomes and indicators began in 2010, when we joined the Conservation Measures Partnership (CMP) and were introduced to the Open Standards. CMP and others have developed an extensive taxonomy for all things conservation-related – anything from invasive species to aquaculture. This taxonomy classifies the work into several categories, such as Threat, Strategy, Target, and Scope. We used this structure to do what we affectionately nicknamed "The Great Sticky Project." Essentially, we put each of our grants on a sticky note and visually organized them into the Open Standards framework. We didn't make this public, and we still use these categories only internally, because we didn't want grantees to feel they had to conform to the outline.

We then took a look at our proposal form and realized it didn't tie at all to this new internal language we'd started using. So we completely overhauled our forms and coordinated the change with the launch of our first online grants management system. We used the baseline we had developed from the Great Sticky Project, and now that both we and our grantees were speaking the same language, we could start exploring questions like "This is what we think we're doing; now let's see if that's what the grantees are actually saying they're doing, and it's OK if they're not the same thing."

This brought us to the stage of being able to export sets of data for our grants. As an experiment, we took a subset of grants focused on pollution in California and exported all the narrative answers about indicators the grantees had provided. We then took that list and tried to organize and synthesize it down to a handful of simple indicators. No such luck – we realized that the answers we were getting, while an improvement, still needed some work. They were often missing current levels for those indicators, or the indicators didn't tie to the outcomes, or they weren't really indicators at all. And the list was 20 pages long – for only 22 grants.

For the past year, we've been working closely with select grantees to refine these measures. To be clear, we're not sending them complicated pre-populated tables and saying "These are the indicators we are looking for; where do you fit in?" Instead, we're taking the answers they give us and fine-tuning them. For example, a grantee will write "Increased wetland restoration" as an indicator of success, and we'll turn it back to them and ask "What is the current number of wetland restoration projects, or number of acres restored?" We've also developed some simple ways to track policy work, such as rating the strength of a particular policy on a scale of 1–5.

So after all this work – what have we accomplished, and what's next? We've already been able to refine our outcomes to a few basic categories, and we're now in a position to tie indicators to those. We're also piloting a project to visually map these outcomes and indicators, which we'll share freely with our funder colleagues and our grantees. We're hoping this will give us a sense of the depth and geographic reach of the impact our grantees are having on the environment, as well as increase transparency into our grantmaking. Our ultimate goal is to have data like this spur conversations and collaborations between grantees, further advancing the vital work they are doing in conservation.

-- Anna Lindgren
