Table 1 Summary of other implementation strategy tracking methods

From: The Longitudinal Implementation Strategy Tracking System (LISTS): feasibility, usability, and pilot testing of a novel method

| Authors | Relevant Framework | Methods | Strengths | Limitations |
| --- | --- | --- | --- | --- |
| Bunger et al. [1] | Expert Recommendations for Implementing Change (ERIC) | Activity logs coded by research staff | Low cost to implementers | Time-intensive coding required; limited strategy specifications collected |
| Boyd, Powell, Endicott, and Lewis [2] | Powell et al.'s [3] 2011 compilation | Coded recordings of team meetings | Leveraged existing implementation team meetings, reducing burden; captured change over time | Meetings were not structured to obtain all necessary information about specific strategies; completeness of the coded strategies is unknown |
| Rabin et al. [4] | Stirman et al.'s [5] expanded Framework for Reporting Adaptations and Modifications to Evidence-based interventions (FRAME); components of RE-AIM | Real-time data collection via an adaptations worksheet; semi-structured interviews at 6-month timepoints | Provided flexibility; low burden on implementers | Required time and training of research staff for administration and coding; FRAME is not tailored to implementation strategies; lag time before semi-structured interviews may have introduced retrospective error |
| Rabin et al. [4] | Modified FRAME; Consolidated Framework for Implementation Research (CFIR) | Tracked modifications to an a priori strategy protocol | Yielded fairly granular and comprehensive data | Time and resource burden on the study team; the time (dose) involved in each strategy was not captured; implementers themselves were not involved in tracking or coding |
| Walsh-Bailey et al. [6] | None stated | Brainstorming log (low structure); activity logs (moderate structure); detailed tracking logs (high structure); random assignment to implementation strategies | Activity log method was deemed the most feasible of the three | The validity and level of detail provided by each method, balanced against burden and perceptions, were not assessed; limited number of implementers engaged in the evaluation; the intervention evaluated was relatively simple, limiting generalizability |