Commentary on “CONSORT to community: translation of an RCT to a large-scale community intervention and learnings from evaluation of the upscaled program.”
Commentary: In selecting abstracts to feature in this month’s Newsletter, I came across a study that resurfaced several dilemmas I have faced over much of my career. When translating an intervention with moderate or high strength of evidence into routine practice, how much time and effort do you allocate towards evaluating its effectiveness versus evaluating its implementation? And, how should research ethics committees consider evaluations associated with programs primarily focused on service delivery of an evidence-based intervention?
In this month’s Newsletter, we feature a study from Australia by Moores et al. that describes the researchers’ experience translating an RCT evaluation into an evaluation framework for large-scale adoption and implementation within the community.1 Just as an intervention may be adapted or refined as it moves into routine clinical use, the evaluation of the intervention must also evolve. The authors use their experience with the Parenting, Eating and Activity for Child Health (PEACH™) Program as a case study to reflect on this evolution.
The PEACH™ program is a 6-month, multicomponent, lifestyle-based weight management program for families of overweight children and adolescents. The development and evaluation trajectory of PEACH™ spans more than a decade: initial concept and intervention design from 1999-2002, pilot feasibility study from 2002-2004, efficacy RCT from 2004-2007, effectiveness trial from 2008-2011, and evaluation of the state-wide adoption and implementation beginning in 2012. The authors conducted and reported on the PEACH™ RCT using two intervention arms; the intervention was modified slightly and only one intervention arm was evaluated in the community effectiveness trial. The authors moved to the RE-AIM framework to structure the evaluation of the large-scale community adoption.
The authors highlight four categories of challenges that they experienced in the translation of the evaluation from the RCT paradigm to RE-AIM, as listed in Table 3 of their article:
- Ethics committees appeared to approach the Project from an RCT paradigm
- Engagement challenges experienced during implementation required changes to inclusion criteria, changes that would be avoided in an RCT
- The evaluation length and consent process may have been unanticipated and burdensome for participants who signed up for a community program, not an RCT
- Research conducted in the real world has a level of incomplete, unusable, and missing data that is higher than in the more tightly controlled RCT setting
These challenges resonate strongly with my experiences. To these I add the following:
- Resources and instruments to measure fidelity robustly are often lacking
- Implementation/service delivery and evaluation are separate skill sets, and few individuals possess both; using the same staff for both goals has trade-offs, as does using completely distinct teams
- Identifying, recording, and analyzing the impact of local adaptations to the intervention is difficult
Implementation research as a formalized discipline is still quite young, and I have every confidence that the scores of talented implementation researchers and program evaluators will rise to meet these and other challenges. The Implementation Science News is a forum for sharing information relevant to our professional community. To that end, I invite our readers to share their experiences with these challenges, or to identify relevant resources or references that others can use to address them; email us at email@example.com if you would like to contribute.
Read the abstract
1Moores CJ, Miller J, Perry RA, et al. CONSORT to community: translation of an RCT to a large-scale community intervention and learnings from evaluation of the upscaled program. BMC Public Health. 2017;17:918. doi:10.1186/s12889-017-4907-2.