Mini Case Study: Using Outcomes Evaluation to Improve Learning

When her program officer raised the prospect of doing a formal evaluation of her charter school, the executive director rolled her eyes. Everybody wanted to evaluate her students, and she was tired of it. The curriculum was based on developmental principles and built around authentic experience: students created projects and took them to completion in performance — by making a presentation to the city council, for example, or by convincing the mayor to accept their researched recommendation. With the help of a postdoctoral researcher, the school had earlier articulated a clear theory of its "action learning" approach.

The school already thought of itself as a learning organization. A formal evaluation seemed excessive; the school's senior staff were not certain what they could learn from it. Their foundation program officer, though, believed firmly in evaluation as a "transparent approach to uncovering why folks practice the way they do." The evaluation would be a "generative process," a "tool to investigate how to retool" and "prepare for the next turn," looking toward a sustainable future for the organization.

The evaluation involved identifying outcomes and indicators of those outcomes, then examining whether the intended results were actually being achieved. "We started with 52 variables!" recalled the director (that number was eventually winnowed down). The work of outcomes identification was challenging but rewarding. Staff members had "great conversations about what outcomes look like. Also, we asked: why do we believe that this kind of professional behavior will lead to these outcomes? It was an excellent learning experience."

The director offered some examples. "We claimed the kids had experiences," she explained, "but we were essentially providing inputs. Did the kid actually have the experience? Could we track it? Would the kids report it?" Asking students to accept risk was one of the school's "5 Rs," the director went on, but "what behaviors would provide evidence?" The overall goal of the school was to "create effective citizens, who know what they believe, think and feel, and can act on behalf of themselves and others," but what was the school actually doing in each of those three realms?

With help from consultants, the school spent two years and approximately $300,000 on the evaluation, in addition to its somewhat larger core support grant. Evaluation questions were divided into outputs ("Are students getting the experiences we intend to be the core of our program?") and outcomes ("Are those experiences producing results in cognitive, social, civic/moral, and emotional areas?"). They also looked for community outcomes among the parent body. Data were collected through observation, interviews, group reflection, and survey reporting.

There were many challenges along the way:

  • The evaluation was "counting on students to be good at self-reporting," noted the director, but that assumption did not hold up well: "There were times when we could see that the kids were growing, but they didn't see it." "Separate from other sources of data," student self-reporting "did not stand alone. It told us something about their experience, but not enough about outcomes."
  • “Making meaning of the data” from student surveys “required a heck of a lot of contextualizing.” For example, at one point the surveys produced a statistically significant result that students felt they were not getting “opportunities to be leaders in social action projects.” It didn’t make any sense to the staff. Digging back through the surveys, they found that the forms had been filled out the day after a project had been canceled because of something outside the school’s control.
  • Designing the surveys was methodologically challenging: “How do you describe things so young people will understand?” After the first year, it was clear that students had not adopted the language of the evaluation, so the adults started using it explicitly in daily work. The surveys had to be shortened; “after a couple of pages, students were just circling things.”
  • “Another reality,” reported senior staff members, was that, although the school tried hard to get students and coaches to “take ownership” of the evaluation, “it never became part of life — it was just something that was passed out and collected.” The exceptions were certain instruments, such as youth feedback forms and personal skills reports, which were “useful to staff in doing their regular jobs.”

So, was the experience a failure? Not at all. Reflecting back on the evaluation, staff members said that it inspired them to work harder to "embed assessment and learning" into the school's culture. The step of "identifying measures of success" has been adopted "very organically." A major result was a renewed focus on adult learning at the school, and more intentional links between adult and student learning. From the foundation program officer's point of view, the evaluation was a success: "Earlier, when they talked about the school, you would hear about strengthening academic skills." In the course of the evaluation, school staff began to shift their language, saying they hoped to give young people the skills to be "intellectually agile, so they can find information and be good problem solvers." Among adults, the program officer said, the school took on a "culture of evaluation. The evaluation helped them become more reflective practitioners."


This takeaway was derived from Making Measures Work for You.
