A learning community

Why wait until something is broken to reexamine it, when repairing a crack or weakness could prevent the break? One youth services agency knew that something was working for them. But the executive director was concerned that they did not clearly understand what the proper program “dose” was. He worried that something they thought was unimportant might in fact be critical to their success.

The deputy director believed that a social service agency needs a handle on efficacy and a clear picture of what success looks like. She wanted to know what they were achieving. The Board was committed to evaluation, though several members worried that the agency simply did not have the bandwidth to address it. They believed their results were excellent and that it would be to their benefit to confirm why, and they were committed to finding where the program “cracks” were forming.

In the process of knowing, the deputy director says, the work will improve.

Her mandate was motivated by issues internal to the organization (find out what works) as well as external (show the world: parents, partners, and, especially, funders).

Satisfying funders who want hard data was not the main goal in setting up the evaluation program, but their views mattered, and, in the end, the program probably paid for itself in new funding.

The quantitative measures for her agency looked good: high participation, long-term participation, and better results at school.  The question was: Why? What elements of the program were producing these results? And were there other results that could or should be achieved?  Enter evaluation… and its sometimes strange results. 

The process

The first step was finding a consultant; then came the participation of everyone involved in the programs: teachers, administrators, partners, and clients. In fact, the process is called participatory evaluation. It began with dissecting the program (or, in consultant-speak, building a “logic model”). The consultant helped the staff separate and detail desired outcomes, activities, and resources. What are you putting in? What are you getting out? What does that desired outcome really look like? How can you measure it?

In the course of such dissection, the group defined its goals and developed evaluation tools. “You have to know what you want to accomplish before you can evaluate it,” the deputy director says. The Board agreed and pushed for the evaluation to be done.

The learning began

Strange things happened on the way to understanding. The deputy director cited the example of a survey of clients regarding their ability to work with others. The “before” test showed that clients had a high level of confidence in their ability to be team players. The “after” test showed a loss of confidence.  That was not the hoped-for outcome. Surely, the program improved interpersonal skills!

To better understand what was happening, a small group of participants was convened, and the staff learned that the loss of confidence stemmed from a greater understanding of what cooperation and teamwork mean. Clients had overrated their ability at the beginning, the deputy director said. The program had deepened their understanding of truly supportive teamwork; a more modest and accurate self-appraisal resulted. When the staff got results they did not understand, they dug deeper. In effect, the evaluation process was itself continually evaluated, until its methods of measurement became tools for better understanding what success truly meant.

For youth programs, good evaluation is especially important, the director says. “There is pressure to show academic gains, but when you look at a logic model, you see that we are addressing more than the elements of academics. We are looking at the tools for life.” “Tools for life” may seem intangible, but they are made up of discrete, measurable skills: skills that can be evaluated.

The ultimate lesson—anything can be improved

It is a challenge to develop tools and get staff to buy in and use them.  The executive director observed, “The biggest problem is one of capacity.”  Time and a critical eye are crucial to uncovering what is happening in each program.  Hence the importance of participatory evaluation, through which all involved contribute ideas and shape outcomes.

Additional help, like a consultant for guidance, takes funding. At the top of the list of donors were Board members: every member of the Board donated to the agency, showing both constituents and funders that the Board believes in what others are being asked to support. The agency then went about raising unrestricted funds through benefits and sponsorships.

The final challenge is keeping it going. As “questioning” became part of the organization’s culture, staff realized that spending time to find out what is working, and fixing what is not, contributed to their own satisfaction as much as to the excellence of the agency. The consensus was that ongoing evaluation would keep the program strong and flexible through changes in constituency, neighborhood demographics, funding, or staff. “Fix it before it’s broken” becomes the mantra.