Throughout our series of conversations, we have consistently attracted both returning and new participants. Their input has been extremely helpful as we explore what might work for gauging evidence in Extension.
WHAT WE LEARNED
1) Why would professionals work toward documenting evidence? The compelling reasons included being more responsive to current and future funders, adding validity to Extension programming, strengthening professional integrity by providing trustworthy and credible information, and modeling programs on proven ones that have demonstrated evidence.
2) Recommendations for building a rubric or scorecard to gauge the level of evidence achieved. Participants felt it would be efficient and important to start with existing models or rubrics for identifying evidence-based programs in order to establish benchmarks. In doing so, they recommended examining how weak versus strong evidence is distinguished, how Theory of Change models are used in an intervention's delivery, and what is commonly measured in existing rubrics or scorecards for identifying evidence.
3) What process should be in place to make doing this easy and possible within Cooperative Extension? Participants suggested that a national registry of programs could reveal not only those that meet the highest standard of evidence but also a ranking of evidence levels, since many educators do not have access to evaluation specialists. Wording the process so that it speaks to all educators will be important; for example, the term "rubric" may need to be renamed. A clear process should address how the registry will be maintained and how educators will receive feedback on modifications to improve performance, coupled with resources that help professionals demonstrate evidence. How acceptance into the registry will feed data back into professional performance evaluations and promotion and tenure also needs to be considered.
OVERALL THEMES
Overall themes across breakout rooms yielded additional questions about developing a standard process to gauge evidence in Extension:
- How can educators isolate program effects attributable to the intervention?
- Can one rubric or scorecard meet the uniqueness of all disciplines?
- What support (evaluators and support staff) do professionals have to bring programs to high levels of evidence?
- How can educators develop skills to select the appropriate evaluation strategies for the specific intervention?
As with all good convenings, more questions arise, and we appreciate how these questions and gaps help ensure that concerns are addressed as we continue to develop these concepts.
JOIN US NEXT TIME
Our next session will take a different approach. You will be presented with three sample processes for gauging evidence: one from 1) a non-federal agency, 2) a federal agency, and 3) related work in Extension. We will compare and contrast an example of each in the breakout sessions for Conversation 5 on Monday, March 21, 2022. Register today.
Note: Convo 4 was postponed to Convo 5, and we chose not to relabel the calendar events or the registration setup.