QMAP Resource Corner: Using Evaluation Data to Improve Programs

December 10, 2013

Throughout this year, we have been sharing findings gathered through the Quality Mentoring Assessment Path (QMAP®) assessments of Minnesota mentoring programs during 2012. As we close out the year, we’re taking one last look at what the QMAP process has shown the Mentoring Partnership of Minnesota (MPM) about the day-to-day challenges that mentoring program staff face. As we have covered before, the 2012 findings show that evaluation of both participant outcomes and program processes presents ongoing challenges for program staff. Beyond that, though, it seems that staff face a bigger “so what?” question once data has been gathered. In recent QMAP assessments, using evaluation data to improve program practices emerged as an area in which program staff struggle as they figure out how to interpret and use the information they have collected.

Evaluation of both program participants and everyday processes has a valuable role to play in keeping programs relevant and on track to achieve their missions, as evaluation efforts are one of the main ways that programs can learn about themselves and improve over time. The evaluation process itself can generate ideas and data that illuminate pathways for improvement, rather than leaving programs to operate in the dark. Without it, program staff may rely on gut instinct and anecdotal impressions to make decisions about programming changes; for example, “We’ll never get mentors to come to follow-up training.” Carefully targeted evaluation questions and analysis, by contrast, can provide more objective answers to key questions about how to manage limited program resources; for example, “We found that volunteers who participated in follow-up training had long-lasting matches.”

When thinking about using evaluation data to guide program improvement, it’s important to start with the end in mind. For mentoring programs, this means looking at the program’s mission and its theory of change: how the program is designed so that mentoring relationships will lead to the outcomes sought for youth. Evaluation practices should flow from the overall design and goals of the program and should generate the data needed to show how the program is doing in reaching those goals. If a program can’t assess whether its stated goals are being reached, then evaluation activities will be seen as irrelevant by program stakeholders, such as funders, who will be unable to determine whether the program is achieving its goals, and as meaningless by program staff, who will not see the value of putting time into tasks that do not reflect what their programs actually accomplish.

To organize evaluation practices, programs need a strong evaluation plan to coordinate and guide evaluation-related activities. To be effective, an evaluation plan must be one that staff can implement consistently with the time and expertise on hand. It must also include measures that provide the information needed to demonstrate program success. Most importantly, even the strongest evaluation plan will not be useful unless results are analyzed on a regular basis and applied to identify areas for improvement, so the program can better reach its goals and accomplish its mission. Lastly, good evaluation plans also include strategies for sharing results, providing an outlet for promoting program successes and ongoing work to improve outcomes.

If you need more background on evaluation design, planning, and dissemination, several resources are available to you. The Children, Youth and Families Education and Research Network, or CYFERnet, website has guidelines on evaluating outcomes with a variety of target populations, such as school-age children or teenagers. The site also has useful information on how to put your evaluation results to work in sections on:

– Data use and analysis
– Sharing evaluation results
– Sample evaluation reports

For a mentoring-specific focus on evaluation, you can learn more about the ins and outs of using outcome evaluation data to guide mentoring program development and improvement through the December 19th session of the Collaborative Mentoring Webinar Series, titled “Did It Work? Evaluating Youth Outcomes in Relationship-Based Programs.” This webinar will include a look at how to use the Evaluation Toolkit, developed by Oregon Mentors, to identify validated survey tools for assessing the outcomes your program may be trying to achieve, such as self-esteem or academic connectedness. Learn more and register here.

As we look ahead to assessing and understanding data from our own evaluation efforts related to QMAP at MPM, we will continue to share our findings with you, along with the resources we uncover that might help you in your continuous improvement journey. Based on what we have learned about the challenges that mentoring program staff experience as they work to implement realistic, meaningful, and useful evaluation practices, MPM will be taking a deeper dive into participant and program evaluation issues in the next year. We will explore what evaluation tools, training, and templates can be identified or created, and the best ways to get them out to all of you in the field who want this support. If you have thoughts about what resources are needed, or how to deliver them, please get in touch with me. Your ideas will help us shape our next round of work on resources to support the improvement and innovation nurtured through the QMAP process.