Commentary On “Ten years of implementation outcomes research: a scoping review”

Sep 15, 2023 | Christopher Akiba | Commentary

In July 2023, Proctor and colleagues published “Ten years of implementation outcomes research: a scoping review,” a follow-up to their foundational 2011 taxonomy of implementation outcomes [1]. The review examines how each implementation outcome has been used with respect to study designs, methodologies, and the settings in which studies were conducted. While the results describe the growth of implementation outcome assessment throughout the field, that proliferation is tempered by “little evidence of progress in testing the relationships between implementation strategies and implementation outcomes, leaving us ill-prepared to know how to achieve implementation success.” The authors rated only 30% of reviewed manuscripts as empirical evaluations and subsequently call for stronger theory, greater objectivity in measurement, and evidence of impact on implementation outcomes. To that end, they propose a 12-item agenda for improving the conceptualization and measurement of implementation outcomes, and for theory-building research, over the next 10 years.

One month later, Foy and colleagues published “What is the role of randomised trials in implementation science?” [2] (also featured in this month’s abstracts below). The authors conclude that while randomization may help ensure internal validity in trials of implementation strategies, “their findings are less dependent on skilled and nuanced interpretation compared to other study designs.”

The timely publication of both manuscripts reveals areas of alignment on pathways forward for rigorous implementation research. Importantly, each group notes the utility of mixed methods, with Proctor et al. (2023) describing their appropriateness for advancing theory and Foy et al. (2023) highlighting their unique ability to unpack complex strategies or interventions and, in turn, illuminate mechanisms of change (a priority that maps back to Proctor and colleagues’ agenda for advancing implementation outcomes research).

Best practices for mixed methods in implementation research already exist, and they remain important given that both authorship groups include mixed methods in their research agendas. Palinkas (2014) offers researchers a clear rationale and description of when and how to combine qualitative and quantitative methods [3]. Palinkas and colleagues have also published guidance on purposeful sampling for mixed methods implementation research [4] and, more recently, on innovations such as quantitizing qualitative data, rapid assessment and analysis procedures, measures for assessing implementation outcomes, and further reflection on sampling [5].

Given that the field remains “ill-prepared to know how to achieve implementation success” after 10 years of inquiry into implementation outcomes, building and maintaining a culture of rigor around mixed methods can benefit us all as researchers, reviewers, and funders.

References:

  1. Proctor, E., Silmere, H., Raghavan, R., Hovmand, P., Aarons, G., Bunger, A., … & Hensley, M. (2011). Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research, 38, 65-76.
  2. Foy, R., Ivers, N. M., Grimshaw, J. M., & Wilson, P. M. (2023). What is the role of randomised trials in implementation science? Trials, 24(1), 1-8.
  3. Palinkas, L. A. (2014). Qualitative and mixed methods in mental health services and implementation research. Journal of Clinical Child & Adolescent Psychology, 43(6), 851-861.
  4. Palinkas, L. A., Horwitz, S. M., Green, C. A., Wisdom, J. P., Duan, N., & Hoagwood, K. (2015). Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Administration and Policy in Mental Health and Mental Health Services Research, 42, 533-544.
  5. Palinkas, L. A., Mendon, S. J., & Hamilton, A. B. (2019). Innovations in mixed methods evaluations. Annual Review of Public Health, 40, 423-442.