Outcomes: Learnings from the Nationwide Foundation

We were really pleased that the independent evaluation of our funding strategy found much in our work to applaud. The evidence showed that charities raised £3 million in extra grants, benefits and debt reductions for beneficiaries (around the same amount we donated in grant funding), and some of those benefits will be ongoing, amounting to an even greater legacy for beneficiaries. In addition, our ‘funder-plus’ or ‘added value’ work was identified as exemplary. However, for the purposes of this blog, I am going to scrutinise the shortcomings of our work and offer our learning to other funders.

An initial problem we faced was grantees’ confusion between outcomes and outputs, both of which they had been asked to identify in their funding bids. Whilst we gave support in the early stages to help develop these, in truth we did not go far enough, and this is an area we need to focus on more in future.

Collectively, the grantees identified about 70 different outcomes, ranging from increased confidence and reduced anxiety to better reporting by the media, greater awareness among the general public and better understanding by policy makers. Some were simply not measurable, and for many that were, we wrongly assumed that charities had baseline data, or knew how to obtain it, in order to measure the extent to which they achieved their outcomes. Where grantees were seeking to influence or to raise awareness, they often struggled to articulate the change they wanted to happen as a result. For example, they initially wrote measures relating to the number of people who might attend an event, rather than planning how to measure what those attendees would do differently as a result of attending. Explaining the importance of linking outputs to outcomes was also a challenge, so there were circumstances where the outputs specified could never have led to the outcome identified.

As a hands-on, flexible funder, we helped grantees to refine their outputs and outcomes during quarterly site visits and through regular phone contact. Our independent evaluators and other consultants provided advice, and we offered additional, specific outcomes training.

Overall, we learned that part of the problem was that we ourselves had not defined our own strategic funding outcomes clearly enough in the first place.  If we had, we certainly would have had far fewer than 70 outcomes and our task of measuring the programme’s achievements would have been much easier.

We were keen not to repeat the same mistakes whilst developing our new strategy. In particular, we have now adopted a theory of change approach, which, while popular in the USA, is still gathering momentum among UK funders. We have set ourselves new outcomes, with outputs, assumptions and indicators of success mapped out. As a result, we have been able to integrate this strategy far more effectively than before into our communications plans as well as staff performance plans. New funding recipients must have clear project strategies that work towards our outcomes, with the means to measure progress towards them. Even at the initial funding enquiry stage, we have found it is much easier for organisations to recognise whether they are eligible for our funding, and for us to identify which projects we should fund.

We certainly discovered that if outcomes are wrong from the beginning, it can be difficult to turn them around effectively and make them central to the day-to-day running of a project. This learning has resulted in a far better funding strategy for us, and we hope that by sharing it, other funders might similarly benefit.