Commentary on “Selecting and improving quasi-experimental designs in effectiveness and implementation research.”

May 15, 2018 | Lisa DiMartino | Featured Articles

One of the greatest challenges in designing an implementation research study is the constant trade-off between internal and external validity. Implementation of evidence-based interventions often occurs in real-world settings where randomization of participants to the intervention is not possible. For example, it may not be ethical to offer the intervention to only half of the participants or sites, or the timing of intervention delivery may be outside the investigator's control. The authors of this month's featured article describe several commonly used quasi-experimental designs (QEDs) as alternatives to randomized controlled trials (RCTs) that can achieve a better balance between internal and external validity in implementation research in real-world settings.1 They also provide a decision map to guide researchers in choosing among QEDs. Each design is briefly summarized below:

  • Pre–post design with nonequivalent control group. Interventions using this design are often delivered to communities or organizations. Data are usually collected at a single time point before and a single time point after the intervention. This design is particularly vulnerable to threats to internal validity, since a single pre–post comparison cannot separate the intervention effect from underlying trends.
  • Interrupted time series or difference-in-differences. These designs have an advantage over pre–post designs because they allow the implementation effect to be evaluated while accounting for preintervention trends. Both involve longitudinal data with multiple assessment time points. The inclusion of a control group in these designs also increases internal validity and confidence that the intervention, rather than other factors, is responsible for observed changes in the outcome.
  • Stepped wedge. These designs involve the sequential roll-out of an intervention to participants (either individuals or clusters), with all clusters eventually receiving the intervention. Advantages include the logistical convenience of a staggered rollout and better integration into real-world practice.
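The difference-in-differences logic described above can be sketched with hypothetical numbers: the estimated intervention effect is the change in the intervention group minus the change in the control group, which nets out secular trends shared by both groups. The outcome values below are invented for illustration only.

```python
# Hypothetical pre/post outcome means (e.g., % of appropriate device use)
# for an intervention site and a nonequivalent control site.
pre_intervention, post_intervention = 40.0, 55.0
pre_control, post_control = 42.0, 47.0

# Difference-in-differences: subtract the control group's change
# (the shared secular trend) from the intervention group's change.
did_estimate = (post_intervention - pre_intervention) - (post_control - pre_control)
print(did_estimate)  # 10.0
```

Here the intervention site improved by 15 points, but 5 of those points are attributed to the background trend seen at the control site, leaving an estimated effect of 10.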

In this month’s Newsletter, we feature two articles that employed QEDs to examine the impact of implementation strategies on uptake of clinical interventions. Swaminathan et al.2 describe using an interrupted time series (ITS) to examine the impact of a multicomponent intervention called MAGIC (tool, training, electronic medical record changes, provider education) on the appropriateness of peripherally inserted central catheters (PICCs). They accounted for secular trends (events unrelated to the intervention that occurred during the pre- or post-intervention period and may have influenced the outcome) by defining a control group of nine peer hospitals similar to the intervention hospitals that did not formally implement MAGIC. By using ITS to compare pre- and post-intervention differences in rates of PICC use between the intervention and control sites, the authors concluded that the MAGIC intervention improved the appropriateness of PICC use. Similarly, using the difference-in-differences method, Khateeb et al.3 found that enhancing hospitalist discharge rounds with an attending and social worker from a palliative care team increased uptake of palliative care consultation at a single institution.
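An ITS analysis like the one described above is commonly fit as a segmented regression, estimating a level change and a slope change at the intervention point while adjusting for the preintervention trend. The sketch below uses fully synthetic monthly data (all numbers are hypothetical, not from the Swaminathan study) and an ordinary least-squares fit to recover the two intervention parameters.

```python
import numpy as np

# Hypothetical monthly outcome rates: 12 pre- and 12 post-intervention points.
# Segmented (interrupted time series) regression model:
#   y = b0 + b1*time + b2*post + b3*(time since intervention)*post
t = np.arange(24)                       # months 0..23
post = (t >= 12).astype(float)          # 1 after the intervention starts
t_since = np.where(post == 1, t - 12, 0)  # months elapsed since intervention

# Synthetic outcome: baseline level 50, pre-trend +0.5/month,
# immediate level change of -8, and slope change of -0.3/month afterward.
y = 50 + 0.5 * t - 8 * post - 0.3 * t_since

# Ordinary least squares via numpy's lstsq.
X = np.column_stack([np.ones_like(t, dtype=float), t, post, t_since])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]
print(round(b2, 2), round(b3, 2))  # level change ≈ -8.0, slope change ≈ -0.3
```

Because the synthetic data contain no noise, the fit recovers the built-in level and slope changes exactly; with real data, b2 and b3 would come with confidence intervals, and a control series (as in the featured study) would be added to adjust for secular trends.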

In sum, these two studies provide examples of how QEDs can be used to evaluate the impact of implementation strategies in clinical settings. As noted by Handley et al.,1 it can be more difficult to conduct a good QED than a good RCT, and RCTs are still considered the “gold standard.” However, investigators can further enhance their use of QEDs by also including measures of fidelity (i.e., was the intervention delivered as intended?), capturing implementation processes, and examining which component(s) of the intervention are the “active ingredients.”


1Handley, M.A., Lyles, C.R., McCulloch, C., et al. (2018). Selecting and improving quasi-experimental designs in effectiveness and implementation research. Annu Rev Public Health 39:5–25.

2Swaminathan, L., Flanders, S., Rogers, M., et al. (2018). Improving PICC use and outcomes in hospitalised patients: an interrupted time series study using MAGIC criteria. BMJ Qual Saf 27:271–278.

3Khateeb, R., Puelle, M.R., Firn, J., et al. (2018). Interprofessional rounds improve timing of appropriate palliative care consultation on a hospitalist service. Am J Med Qual. doi:10.1177/1062860618768069.