Key Takeaways
Quality assurance includes fidelity monitoring and evaluation.
Organizations may choose to develop a quality improvement plan or use a logic model to ensure quality assurance.
Find resources developed by community-based organizations across the country to help build your quality assurance and evaluation plan.
A quality assurance program is a key component of evidence-based programming. There are two main components to quality assurance for evidence-based programs: fidelity monitoring and evaluation.
Fidelity Monitoring
Evidence-based programs depend on being implemented with fidelity, meaning the program is delivered as it was originally designed. Delivering a program with fidelity improves the quality of what participants receive and ensures that all program activities are implemented correctly. Fidelity should be considered at every phase of implementing an evidence-based program, from recruitment and training through program delivery and evaluation. While leaders learn the basics of program fidelity during training, it is up to the license holder or host organization to make sure the program is being delivered accurately so workshop participants experience positive outcomes similar to those in the original research.
Several developers of evidence-based programs offer guidance for license holders, such as the Self-Management Resource Center's Fidelity Manual. Your organization may also want to implement a continuous quality improvement plan or develop a logic model to ensure quality assurance.
Health Foundation of South Florida's Quality Assurance Plan
Massachusetts Healthy Living Center of Excellence's logic model for implementing and monitoring fidelity
Massachusetts-Fidelity-and-QI-Logic-Model
Other strategies to assure quality delivery of programming include:
- Fidelity checks: Check with the program developer to see whether they provide a fidelity checklist.
- Fidelity agreement: This can be a standalone document or incorporated into your leader or master trainer agreement.
- Monitoring participant attendance rates by leader (see the sketch below)
- Calling leaders before and after the workshop to check in
- Pairing experienced leaders with new instructors
- Developing fidelity in-service training for leaders
- Offering leader self-evaluation and peer-feedback opportunities

This presentation from the Massachusetts Healthy Living Center of Excellence is an example of what might be covered during a fidelity training for leaders.
Massachusetts-Presentation-Fidelity-101
If you operate a network of organizations that provide a program, you may consider implementing a fidelity self-assessment survey similar to this one developed by the Maryland Department for the Aging.
Maryland-CQI-Self-Assessment-2013
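To make attendance monitoring concrete, below is a minimal sketch in Python of how an organization might compute per-leader attendance and completion rates from an attendance log. The CSV layout, column names, and file name are illustrative assumptions; the six-session workshop length and the four-of-six-session "completer" definition reflect common CDSME practice but should be confirmed against your program's requirements.

```python
import csv
from collections import defaultdict

# Hypothetical log layout: one row per participant per workshop, e.g.
# leader,participant_id,sessions_attended
TOTAL_SESSIONS = 6       # CDSMP workshops typically run six weekly sessions
COMPLETER_THRESHOLD = 4  # attending 4+ of 6 sessions is commonly counted as completing

def attendance_by_leader(path):
    """Return {leader: (average attendance rate, completion rate)}."""
    counts = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["leader"]].append(int(row["sessions_attended"]))
    summary = {}
    for leader, attended in counts.items():
        avg_rate = sum(attended) / (len(attended) * TOTAL_SESSIONS)
        completion = sum(a >= COMPLETER_THRESHOLD for a in attended) / len(attended)
        summary[leader] = (avg_rate, completion)
    return summary

if __name__ == "__main__":
    # "attendance.csv" is a hypothetical file name
    for leader, (avg, comp) in sorted(attendance_by_leader("attendance.csv").items()):
        print(f"{leader}: {avg:.0%} average attendance, {comp:.0%} completers")
```

Leaders whose workshops show unusually low attendance or completion rates may be good candidates for a check-in call or for pairing with a more experienced leader.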
Evaluation
The first step of evaluation is to ensure that all the required participant data is collected appropriately. It may be beneficial to provide leaders with a data collection checklist. Many organizations also hold regular trainings on data collection and reporting for facilitators.
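A data collection checklist can also be partially automated. The sketch below is a minimal, hypothetical illustration that flags participant records missing required fields; the field names are assumptions and should be replaced with the data elements your program and funder actually require.

```python
# Hypothetical required fields; substitute the data elements your
# program, license holder, or funder actually requires.
REQUIRED_FIELDS = ["participant_id", "workshop_id", "date_of_birth",
                   "sessions_attended", "pre_survey", "post_survey"]

def missing_fields(record):
    """Return the required fields that are absent or blank in a record."""
    return [f for f in REQUIRED_FIELDS if not str(record.get(f, "")).strip()]

# Example: check a single participant record before it is reported.
record = {"participant_id": "P-103", "workshop_id": "W-7",
          "date_of_birth": "1948-02-11", "sessions_attended": 5,
          "pre_survey": "done", "post_survey": ""}
gaps = missing_fields(record)
if gaps:
    print("Incomplete record; follow up with the leader:", ", ".join(gaps))
```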
Evidence-based programs have already completed a rigorous evaluation process to earn approval from the U.S. Administration for Community Living, but many organizations still choose to track outcome and participant satisfaction data locally.
Below are some examples of the types of evaluations and reports other organizations have completed:
Virginia Department of Health – 2012 CDSMP Evaluation Report
Living Well Alabama – Assessment of CDSME at Work Sites
Measuring participant satisfaction is a key factor in determining the success of your program. Satisfaction surveys and testimonials can also be used to encourage new participants to join and to recruit new partner organizations. An Area Agency on Aging in Maine developed participant satisfaction surveys for its CDSMP classes.
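If your satisfaction surveys include numeric ratings, a simple summary can turn raw responses into figures you can report. The sketch below is illustrative only; the 1-to-5 rating scale and the "top-box" cutoff of 4 are assumptions, not part of any specific program's survey.

```python
def summarize_satisfaction(ratings, top_box=4):
    """Summarize satisfaction ratings on an assumed 1-to-5 scale.

    Returns the average rating and the share of responses at or
    above the top-box cutoff (here, 4 or 5 on a 5-point scale).
    """
    average = sum(ratings) / len(ratings)
    satisfied = sum(r >= top_box for r in ratings) / len(ratings)
    return average, satisfied

# Example with hypothetical post-workshop responses:
avg, pct = summarize_satisfaction([5, 4, 5, 3, 4, 5, 2, 5])
print(f"Average rating: {avg:.1f}/5; {pct:.0%} rated the workshop 4 or 5")
```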
See NCOA’s Key Components of Evidence-Based Programming: Evaluation webpage for additional guidance.