Defining Quality Mentoring – Where is the Evidence?
June 12, 2009
I often receive inquiries from programs about recommended “standards” for program practice. Questions about screening volunteers are common, particularly surrounding background checks and what to do with the results once they come back.
The following is a question I have received many times from both new and established mentoring programs:
Do you have any recommendations on standards or best practices for the length of time that should pass before we accept a volunteer whose background check shows they have had a DWI/DUI (driving while intoxicated) in their past?
My response usually goes something like this:
MPM doesn’t have an official recommendation for the length of time that should pass between being convicted of a DWI and being matched with a mentee. What I recommend is that you contact other organizations and ask what they are doing; based on that information, work with your board and staff to establish a standard that works for your program – and then stick to it.
In addition, I typically suggest a few programs to contact and mention the range of responses they might hear. For some programs, a DWI on one’s record means permanent disqualification from being a mentor. Others require a clean record over the past five years. Many more don’t have a policy at all and handle each case individually.
So, why don’t we have an accepted standard for this? Where is the evidence to support the policies that some programs have in place? What does the research say about it? Consider what the PROTECT Act pilot has shown:
- Nearly 40,000 FBI fingerprint checks have been conducted through the PROTECT Act pilot, and 6.1% of potential volunteers were found to have criminal records of concern – over 2,000 individuals.
- The checks have uncovered very serious criminal offenses, including sexual abuse of children, manslaughter, and rape.
This pilot, along with the experience of many programs, demonstrates that people with criminal backgrounds do apply to become mentors – and that background checks are one tool to help programs manage risk by identifying those individuals before they are matched with kids.
Various mentoring research studies link certain program practices to better outcomes for youth – including the practice of screening volunteers. During a Research in Action webinar hosted by MPM on March 4th, Dr. David DuBois, discussing his research brief for the series, noted that “high quality mentoring programs are not only effective but also safe, efficient and sustainable.” Mentoring programs conduct background checks to keep kids safe and to prevent risks that could jeopardize their mission – but do background checks alone qualify as enough screening to make yours a “high quality” program?
We can find evidence and research to support the practice of screening volunteers, but what about that DWI question? What evidence do we use to inform our decisions about how to act on criminal background check information?
DuBois also suggested during the March webinar that in the field of mentoring, quality, or “best practices,” is perhaps better defined by considering several sources of evidence (Figure 1). In the case of our DWI question, this model would allow mentoring programs, when setting program standards, to consider as “evidence” things like parent preferences, program values, mentor supply, and staff experience – along with scholarly research and program evaluations.
For over a year now, MPM has been working with an advisory group of staff from Minnesota mentoring programs to establish evidence-based standards of quality for programs in our state. In the process, the group continues to grapple with questions about all kinds of program practices, including those discussed here regarding screening of mentors.
Tell us what you think.
- What is your program’s policy regarding DWI/DUI offenses? Click here to complete the poll.
- What does “evidence-based” practice mean to you and your program?
- What sources of evidence do you rely on most?
- What kind of evidence should the advisory group consider when establishing statewide standards for quality? In the case of screening, does existing evidence limit them to a minimum standard that simply asks programs to have a screening process in place? Or do you feel there is enough evidence from various sources to allow the advisory group to specify what types of screening all programs should have in place in order to be considered “high quality”?