As part of our ongoing conversations about “What does Evidence mean for Extension”, participants have pointed out the need to consider and review existing clearinghouses or resources around evidence-based practices. Clearinghouses are registries of programs and interventions that have been reviewed and rated according to criteria set by each clearinghouse.
Before our conversation, participants received a summary of three resources that could guide further discussions around evidence in Extension, with a focus on the criteria that each clearinghouse or resource uses to rate a program’s evidence. During our conversation, participants discussed their overall reactions to the clearinghouses, what they liked or disliked about them, as well as how the rubrics used in these examples could work for the different disciplines represented in Extension. Below is a summary of the compiled reactions from our breakout conversations.
Blueprints for Healthy Youth Development
Participants discussed the level of specificity and the criteria used to rank programs as valuable aspects of this clearinghouse. They also liked the emphasis this framework places on program theory and felt it was closer to Extension work than the other clearinghouses.
However, participants felt this framework did not apply to all areas of Extension and lacked simplicity, describing its criteria as “stringent” or very rigorous; as a result, they felt it could pose challenges for educators. Participants also agreed that the framework used unclear or subjective terminology.
Overall, participants recommended that this framework be modified for use in Extension, tailored to different program levels, and made more specific.
CrimeSolutions
Participants liked the classification system of this clearinghouse, finding it clear and based on a sound scoring system. However, they discussed several limitations of this framework. First, they thought it was not well suited to Extension and would require substantial resources to implement. They also felt the framework’s criteria were too stringent and that judgments of rigor should be contextual. Finally, participants disliked the use of rankings.
Program Assessment Tool
Overall, participants felt optimistic about this framework. Its two main advantages were that it captures Extension work and reflects its diversity. As a result, it could be a helpful framework for administrators to organize work plans, define criteria for promotion and tenure, and allocate resources to programs. Another valuable aspect is that it helps categorize and design educational programs. Finally, participants liked that the framework included publishing criteria.
Participants offered some recommendations for this framework, including adapting it for broader use and combining it with frameworks from other clearinghouses to assess impact. Participants also noted that although Extension should focus more on providing evidence for its programs, not all programs would reach the category of evidence-based programs as it has traditionally been defined.
Overall reactions
Within the breakout rooms, participants discussed that the Extension Foundation could do valuable work by creating a tool or registry where programs with different levels of evidence and rigor could be submitted. For this purpose, the Extension Foundation could adapt the classifications reviewed in these clearinghouses to Extension. Finally, participants suggested that this registry or tool be accompanied by training on how to use it and by long-term support to sustain it.