
Using the Matrixed Multiple Case Study approach to identify factors affecting the uptake of IPV screening programs following the use of implementation facilitation

Abstract

Background

Intimate partner violence (IPV) is a prevalent social determinant of health. The US Preventive Services Task Force recommends routine IPV screening of women, but uptake remains variable. The Veterans Health Administration (VHA) initiated implementation facilitation (IF) to support integration of IPV screening programs into primary care clinics. An evaluation of IF efforts showed variability in IPV screening rates across sites. The follow-up study presented here used a Matrixed Multiple Case Study (MMCS) approach to examine the multilevel factors impacting IPV screening program implementation across sites with varying levels of implementation success.

Methods

This mixed methods study is part of a larger cluster randomized stepped wedge Hybrid-II program evaluation. In the larger trial, participating sites received 6 months of IF consisting of an external facilitator from VHA’s Office of Women’s Health working closely with an internal facilitator and key site personnel. Recognizing the heterogeneity in implementation outcomes across sites, the MMCS approach was used to enable interpretation of qualitative and quantitative data within and across sites to help contextualize the primary findings from the larger study. Qualitative data collection was guided by the integrated Promoting Action on Research Implementation in Health Services (i-PARIHS) framework and included interviews with key informants involved in IPV screening implementation at eight sites. Quantitative data on IPV screening uptake were derived from medical records, and surveys completed by key personnel at the same eight sites captured implementation facilitation activities.

Results

Fifteen factors influencing IPV screening implementation spanning all four i-PARIHS domains were identified and categorized into three distinct categories: (1) factors with enabling influence across all sites, (2) factors deemed important to implementation success, and (3) factors differentiating sites with high/medium versus low implementation success.

Conclusions

Understanding the influencing factors across multi-level domains contributing to variable success of IPV screening implementation can inform the tailoring of IF efforts to promote spread and quality of screening. Implementation of IPV screening programs in primary care with IF should consider consistent engagement of internal facilitators with clinic staff involved in implementation, the resourcefulness of external facilitators, and appending resources to IPV screening tools to help key personnel address positive screens.

Trial registration

ClinicalTrials.gov NCT04106193. Registered on September 26, 2019.


Background

Intimate partner violence (IPV) is defined as physical or sexual violence, stalking, or psychological aggression by a past or current intimate partner [1]. Although IPV can affect persons of any gender, women are at an increased risk of experiencing IPV and associated physical [2,3,4], psychological [3, 5, 6], and social health issues [4, 7]. With nearly 640 million women worldwide experiencing IPV during their lifetime [8,9,10], identifying and addressing IPV is an important public health issue.

The poorer health status of women experiencing IPV often results in increased healthcare use [11, 12], specifically within the primary care setting where women experiencing IPV commonly present for care [11, 13]. Healthcare settings—and primary care in particular—represent ideal places to identify women who may be experiencing IPV so appropriate resources can be offered [8, 10]. The U.S. Preventive Services Task Force (USPSTF) recommends routine IPV screening for women of childbearing age [14, 15]. The Veterans Health Administration (VHA) [16] recommends IPV screening annually for all Veterans regardless of gender or age, but, per policy (VHA Directive 1198), recognizes Veterans who identify as women are at a higher risk for IPV than their civilian counterparts [4] and requires (at a minimum) annual screening of all women of childbearing age consistent with the USPSTF recommendations. In addition, a national IPV screening protocol, which includes a standardized 5-item screening tool [17, 18], has been disseminated to all VHA healthcare facilities as a template in the electronic health record.

Despite these recommendations, requirements, and protocols, screening uptake in clinical settings remains variable both inside and outside of VHA due to barriers including, but not limited to, lack of provider training, time constraints, and providers’ discomfort discussing IPV with patients [19,20,21,22,23,24]. To address these barriers and improve uptake of IPV screening and response practices, VHA’s Office of Women’s Health (OWH) initiated implementation facilitation (IF [25, 26]) via a stepped wedge hybrid implementation-effectiveness trial at nine VHA clinical sites throughout the US [27].

Primary outcomes from the evaluation of these implementation efforts [28] in the larger clinical trial suggested that IF was associated with substantial increases in reach of screening at these sites, including nearly doubling the number of women identified as having experienced past-year IPV (as previously reported [28]). The increased identification of women experiencing IPV in turn enabled linkages with support services (e.g., social work or mental health).

These aggregate findings, however, do not account for the substantial site-to-site variability in implementation success, nor do they point to the specific factors that may have differentiated high- from low-implementation-success sites within the trial. Understanding the specific factors that influence implementation success under IF can help tailor the intervention for effective IPV screening implementation. In this follow-up study, we describe the application of a Matrixed Multiple Case Study (MMCS) approach [29] to analyze and interpret the primary findings from our trial [28], which included substantial variability in increasing reach of IPV screening programs, with respect to the factors that influenced implementation success. This methodology allows for the examination of multiple aspects of the implementation process to understand the combination of factors associated with successful implementation of evidence-based practices—in this case, IPV screening programs.

Methods

Details of the design and primary outcomes of the trial are reported elsewhere [27, 28]. For this follow-up study, we provide brief descriptions of the data sources and the steps taken by the study team in the application of the MMCS approach used for analyses. The VA Boston Healthcare System Institutional Review Board approved this study.

Site recruitment

As part of the larger trial, nine sites from across the United States were recruited by VHA’s Office of Women’s Health. Local leadership at each site signed a project letter agreement for enrollment (see Iverson et al. (2023) [28] and Supplemental file 1 for more details). One of these sites was not included in this follow-up analysis due to exceptionally high IPV screening rates prior to the start of the implementation facilitation period.

Participants

This mixed-methods follow-up evaluation engaged various participants. These included VHA staff involved with IPV screening program implementation at each site (e.g., primary care providers, nurses, Intimate Partner Violence Assistance Program (IPVAP) Coordinator, or designee), as well as off-site external facilitators from OWH who worked with staff at each site to support the implementation of IPV screening programs. In addition, medical record data for all women receiving primary care services in the participating clinics at the enrolled sites 3 months prior to (i.e., pre-implementation period) and 9 months after implementation facilitation was initiated (i.e., implementation period) were included in this analysis.

Intervention

In the primary study, each site received 6 months of implementation facilitation (IF). IF involved trained external facilitators from OWH working closely with local internal facilitators to help integrate IPV screening programs at the participating sites [25, 26]. Internal facilitators came from myriad clinical and training backgrounds, including primary care physicians, nurses, and IPVAP Coordinators. IPVAP Coordinators are responsible for providing training, education, and consultation to clinical staff on IPV screening, response, and referral practices as part of VHA’s standard implementation protocol. IF activities included multi-faceted, personalized support via regular phone and video conferencing meetings and ad-hoc asynchronous and synchronous communication via messaging and email. These activities aimed to provide education and address implementation challenges, with the common goal of integrating IPV screening and response programs during a 6-month period at each site.

Data collection to inform the MMCS approach

Table 1 describes the six distinct quantitative and qualitative data sources that informed the MMCS analyses. In short, implementation success for each site (i.e., the dependent variable) was determined by an aggregate of four variables derived from one data source, while data on potential influencing factors (i.e., the independent variables) were derived from an additional five data sources.

Table 1 Data sources

Data analysis

Following the completion of the larger clinical trial, we used the MMCS approach [29], which compares site-specific matrices containing information from qualitative and quantitative data sources within and across sites, enabling generalizable knowledge to emerge from the common and heterogeneous local factors influencing the success of program implementation across sites. See Fig. 1 for a process map summarizing the nine steps of the MMCS as applied to this evaluation.

Fig. 1 MMCS analytic process

MMCS analyses began with describing the research goal (MMCS Step 1) and used an aggregate of four site-level variables to define implementation success (MMCS Step 2). These included (a) change in reach of IPV screening efforts, along with (b) the consent rate for IPV screening at each site (i.e., percent of women offered screening, via an annual clinical reminder in the electronic health record, who consented to complete the screen). We included consent rate as an important component of the success of a screening program because, at some sites, an unrealistically high percentage of women who were offered screening were marked in the electronic medical record as either declining to complete the screening or not being able to complete the screening due to the presence of another family member (i.e., non-consent). Thus, sites with a high non-consent rate (e.g., above 50%) were deemed to have lower implementation success. Other variables defining implementation success included (c) IPV disclosure rate and (d) post-screening psychosocial service use. Very low disclosure rates (e.g., below 5% of completed screens) could indicate that screening was not conducted in a thoughtful or sensitive manner, while very low rates of psychosocial service use among women screening positive for IPV could indicate that these women were not offered follow-up services. For details on how the study team used these variables to assess the extent of implementation success per site, see MMCS Step 5.
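As an illustration, the thresholds described above can be expressed as a simple screening rule over the four site-level variables. This is a hypothetical sketch only: the flag names, the service-use threshold, and the combination logic are assumptions, since in the study team members rated each site independently and resolved disagreements by consensus rather than applying a fixed formula.

```python
# Hypothetical sketch of the site-level success thresholds described in the text.
# Rates are proportions (0.0-1.0); only the >50% non-consent and <5% disclosure
# thresholds come from the text, the rest are illustrative assumptions.

def success_flags(reach_change, non_consent_rate, disclosure_rate, service_use_rate):
    """Return warning flags suggesting lower implementation success at a site."""
    flags = []
    if reach_change <= 0:
        flags.append("no increase in screening reach")
    if non_consent_rate > 0.50:   # e.g., above 50% marked as declining/unable
        flags.append("implausibly high non-consent rate")
    if disclosure_rate < 0.05:    # e.g., below 5% of completed screens
        flags.append("very low IPV disclosure rate")
    if service_use_rate < 0.05:   # hypothetical threshold; not specified in text
        flags.append("very low post-screening psychosocial service use")
    return flags

# Example: a site with increased reach but a 60% non-consent rate
print(success_flags(0.25, 0.60, 0.12, 0.30))
# → ['implausibly high non-consent rate']
```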

To select relevant domains of factors influencing implementation (MMCS Step 3), the study team reviewed data from one site and developed a list of potentially important influencing factors within each domain of the i-PARIHS framework [30]. The i-PARIHS framework uses four domains (i.e., facilitation, innovation, recipients, context) to explain complex implementation of research into clinical practice. The influencing factor list was refined during a series of consensus meetings and application to other participating sites. Once these data sources and variables were identified, the study team gathered the data for each domain (MMCS Step 4). The study team engaged in MMCS Steps 5 and 6 in parallel. For MMCS Step 5, they independently rated each site's implementation success status as high, medium, or low based on the four site-level variables. Disagreements in this process were then resolved via a consensus meeting where team members discussed discrepancies until they reached mutual agreement. This process resulted in the final implementation success ranking of sites. Due to the low number of sites included in the analysis, the study team chose to dichotomize the implementation success statuses as high/medium or low performing, resulting in all eight sites being placed into one of these two categories.

For MMCS Step 6, two study team members (OLA, JEB) used five data sources (from Table 1: implementation strategies survey, site balancing characteristics, time-motion tracker, stakeholder interviews, and external facilitator interviews) to summarize the influencing factors in each domain per site. This included determining (a) the factor’s relative presence at the site, and (b) the factor’s impact on implementation success at that site. These two team members were blinded to the site implementation success status for this part of the analytic process, and they met frequently to establish consensus. Next, the data were organized into site-specific sortable matrices (MMCS Step 7), and the team completed within-site analysis (MMCS Step 8). For this within-site analysis, the study team assessed the status and influence of factors on implementation success and then aggregated the factors and influences into a sortable cross-site matrix. Once the cross-site matrix was assembled, cross-site analysis was conducted to determine (a) factors universally present and enabling across all sites, (b) factors with a strong relationship between presence and enablement of implementation success, and (c) factors that differentiated sites by overall implementation success (MMCS Step 9). Discrepancies in categorizations among team members were resolved via a series of consensus meetings where these discrepancies were discussed until the team reached mutual agreement, resulting in the identification of the fifteen influencing factors impacting implementation success.
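The three cross-site patterns sought in MMCS Step 9 can be sketched as a categorization rule over per-site factor records. The data structure and decision rules below are simplified assumptions for illustration; in the study, categorizations were made by team members and reconciled through consensus meetings, not by an algorithm.

```python
# Hypothetical sketch of the cross-site categorization logic from MMCS Step 9.
# Each record describes one site's entry in the cross-site matrix for a factor:
#   present      - whether the factor was present at the site
#   enabling     - whether it enabled implementation where present
#   high_med_site - True for high/medium success sites, False for low

def categorize_factor(records):
    """Assign one of the three cross-site categories (or none) to a factor."""
    # (a) Universally present and enabling across all sites
    if all(r["present"] and r["enabling"] for r in records):
        return "enabling across all sites"
    # (c) Present at high/medium success sites, absent at low success sites
    if all(r["present"] == r["high_med_site"] for r in records):
        return "differentiates high/medium from low sites"
    # (b) Enabling wherever present, regardless of site success status
    if all(r["enabling"] for r in records if r["present"]):
        return "important to implementation success"
    return "no clear pattern"
```

For example, a factor recorded as present and enabling at every site falls into category (a), while one present only at the high/medium sites falls into category (c).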

Results

Participant characteristics

Medical record data for all women (n = 5149) seen in the participating primary care clinics during the pre-implementation and implementation facilitation period were included in the ranking of sites’ implementation success. Table 2 presents screening rates showing site-by-site variability. The study team ranked four sites as having high/medium implementation success and four sites as having low implementation success.

Table 2 IPV screening rates by site

Implementation strategies survey respondents (1–3 per site) represented all eight sites and included eight IPVAP Coordinators, four primary care providers, and three nurses. Semi-structured qualitative interviews were conducted post-IF with 14 internal facilitators and persons closely involved with implementation facilitation (1–2 per site) at the eight sites from the larger study [28]. Interviewees included eight IPVAP Coordinators, five primary care providers, and one nurse.

Types of influencing factors

We identified 15 factors affecting the success of IPV screening program implementation across sites. These factors span all i-PARIHS domains: four facilitation, two innovation, four recipients, and five context factors. As summarized in Table 3, we present factors by influencing characteristics (i.e., presence and influence) in relationship to implementation success status and by the i-PARIHS domains.

Table 3 Influencing factors by i-PARIHS domain

Factors with enabling influence across all sites

Three factors that consistently enabled IPV screening program implementation were present across all sites, regardless of implementation success status. These factors fall within the facilitation and recipients domains.

In the facilitation domain, these factors were internal facilitator or other site staff are available to meet and communicate regularly, and internal facilitator or IPVAP Coordinator organizes/conducts staff training for IPV screening. More specifically, all sites indicated that having key staff involved in the implementation process available and consistently engaged through both regular communication and training were foundational to successful implementation.

In the recipients domain, the influencing factor was IPVAP Coordinator is available and actively engaged in supporting IPV screening day-to-day implementation activities at the main hospital and/or connected community-based outpatient clinics. This factor extends beyond the general availability of key staff assisting with implementation to focus on the IPVAP Coordinator’s daily engagement of key medical center personnel in implementation.

Factors deemed important to implementation success

Eleven factors emerged whose presence enabled, and whose absence hindered, implementation. Unlike the factors in the previous section, these were not present at all sites: where present, they enabled IPV screening implementation; where absent, they hindered it. These factors were identified in all four i-PARIHS domains.

In the facilitation domain, the two influencing factors are external facilitator is available and willing to meet and communicate regularly, and external facilitator is perceived as knowledgeable about IPV screening practices and available resources. The value of external facilitators in supporting the standing up of IPV screening programs rests on both the perceived expertise and resourcefulness of the external facilitator and the external facilitator’s regular engagement with the site.

In the innovation domain, sites where IPV screening is seen as duplicative with other established clinical reminders faced more challenges to implementation. Conversely, sites were better able to implement IPV screening when this perceived duplication was absent.

In the recipients domain, three factors showed a strong relationship between presence and enablement across sites regardless of implementation success status. These are: primary care team is available to screen for IPV during patient visits; frontline staff have the expertise and available resources to conduct and follow up IPV screening as intended (e.g., trained primary care social workers); and at least one frontline primary care staff person is supportive of IPV screening in the identified clinics. The availability of primary care clinic members (e.g., nursing staff, primary care providers, social workers) who are educated in screening and response practices and passionate about implementing IPV screening programs was seen as important by site staff but was not sufficient on its own to ensure implementation success.

In the context domain, we identified five influencing factors: site has an engaged information technology (IT) team or infrastructure in place to support IPV screening prior to initiation of IF activities; site-level primary care, women's health, social work, and/or medical center leadership is supportive of IPV screening in identified clinic(s); competing priorities (e.g., COVID-19, other initiatives) create barriers to prioritizing IPV screening implementation; regional and/or national leadership is supportive of IPV screening; and access to community resources for additional patient support. Multilevel leadership support outside the primary care clinic was viewed as beneficial for moving the implementation process along and for increasing buy-in among clinic staff. Similarly, when staff felt that cross-service (e.g., nursing and social work) and multi-level leadership (e.g., service chiefs, medical center directors, regional leaders) was invested in the implementation process (e.g., by providing staff support and/or protected time for IPV screening activities), they reported increased buy-in and were more resilient in overcoming barriers during implementation.

Factors differentiating sites with high/medium versus low implementation success

Only one influencing factor, in the innovation domain, differentiated the sites with high/medium from the sites with low implementation success. This factor, IPV screening clinical reminder is designed to easily access appropriate follow-up services if someone screens positive, was strongly present and enabling in most high/medium implementation success sites but seldom present in the low implementation success sites. This suggests that when the screening protocol did not include clear guidance and easy referral pathways and resources, sites were less comfortable implementing IPV screening in the clinic, leading to lower overall implementation success.

Discussion

When implemented successfully, IPV screening programs are effective in identifying women who experience IPV for provision of resources and support services [4]. Primary study outcomes showed that using implementation facilitation as a strategy to scale up IPV screening implementation resulted in increased IPV screening rates and identification of patients experiencing IPV [28]. Nonetheless, aggregate IPV screening implementation outcomes on their own cannot explain site-level variability in screening rates, and therefore provide an incomplete account of the factors that may ultimately be responsible for implementation success at some sites and shortfalls at others. The use of the MMCS approach [29] and the i-PARIHS framework for this follow-up study enabled a deeper analysis to identify factors that contribute to the success of IPV screening implementation among primary care clinics that participated in the clinical trial [28]. This study identified 15 factors that influence the success of IPV screening program implementation in these primary care clinics.

Overall, no single influencing factor carries enough weight to guarantee IPV screening implementation success. The influencing factors presented here should be carefully considered in tandem to overcome the known barriers to IPV screening implementation (e.g., time constraints, lack of clinician training, and discomfort addressing IPV) [19,20,21,22,23,24]. These factors and their respective associations with implementation success provide insights across all domains of the i-PARIHS framework. Influencing factors in the innovation domain suggest the importance of establishing two key components prior to IPV screening program implementation: clearly and effectively communicating to all primary care clinic staff the importance of integrating IPV screening programs into routine care, and delineating the distinct nature of IPV screening from other existing screens (e.g., broader interpersonal violence screening) to avoid perceptions of duplication with other screening efforts used in the clinic. These components can be established through a variety of methods, including clinician education [31], audit and feedback [32], or pay-for-performance incentives [33] that demonstrate the health system’s commitment to this type of screening. In addition, ensuring that the IPV screening protocol contains clear guidance on effectively responding to positive screens, particularly accessible resources and the ability to easily refer patients to support services, is crucial. Ideally, this guidance would be integrated into the screening protocol template embedded within the electronic medical record so that referral options are immediately available following positive screens [34, 35], as clinicians’ lack of knowledge about available referral options and resources has previously emerged as a significant barrier to routine IPV screening [23, 24, 34, 36, 37].
In our analysis, the range in availability and quality of referral options and resources across sites suggested that sites with more robust referral pathways and resources were better equipped for implementation success than sites with minimal resources readily available. Universally building these robust resources into the IPV screening tool itself could help equip staff and providers to respond adequately to positive screens, thereby bolstering confidence and encouraging buy-in, which the literature shows is a key facilitator of IPV screening and response practices [34, 37, 38].

Findings within the context and recipients domains speak to the importance of establishing foundational enabling factors to increase the likelihood of successful implementation of IPV screening programs. First, our findings suggest that it is important to identify key implementation staff (an internal facilitator and other clinic staff) who are able and willing to engage consistently through regular communication and training. Second, securing cross-service and multi-level leadership buy-in to the implementation process itself (e.g., giving staff protected time to dedicate to implementation activities) provides implementation teams with the necessary support to address and overcome implementation barriers. These findings replicate and extend past studies [22, 24, 39, 40].

Within the facilitation domain, influencing factors suggest that IF is helpful to IPV screening implementation when it is led by resourceful external facilitators with high levels of knowledge and experience with IPV screening, who thoughtfully and regularly engage with an internal facilitator and other members of the implementation team. Prior research has found that implementation facilitation involving an external facilitator working with an internal facilitator is especially beneficial to sites that are slow to adopt an evidence-based practice [41]. More broadly, the variability across sites speaks to the importance of using tests of change and ongoing data collection to determine whether implementation facilitation (or other implementation strategies) is having the desired clinical effects, consistent with the principles of a Learning Health System (LHS [42]). Ensuring the adoption of new clinical practices in healthcare settings is difficult, even with the application of an evidence-based implementation strategy like implementation facilitation, so ongoing monitoring and adaptation are key.

Limitations

The study findings should be interpreted in light of several limitations. First, the sample size is small, with only eight sites included in this study’s analyses. Of note, site enrollment for the larger study was negatively impacted by the onset and surges of the COVID-19 pandemic. A relative strength of the MMCS approach is that it allows for implementation success analyses both within a site and across multiple sites, but the overall generalizability of the results presented here may be limited by the small cohort size, which only allowed us to dichotomize the sites into high/medium versus low implementation success categories. Future research should evaluate the use of the MMCS approach on a larger sample of sites with an increased number of implementation success categories to fully understand the impact of factors that can be leveraged to enhance IPV screening implementation success.

Conclusion

Increased understanding of the influencing factors that impact IPV screening implementation success can inform the tailoring of implementation efforts to allow for successful scale-up of IPV screening in primary care settings. The novel MMCS approach identified key ingredients for the successful implementation of IPV screening programs, including the presence of influencing factors that enable implementation across many domains. This in-depth mixed methods analysis provided nuanced insight into the site-to-site variability in implementation success following implementation facilitation efforts as part of a larger clinical trial. IPV screening implementation facilitation efforts that combine resourceful external facilitators with influencing factors that promote understanding of the importance of IPV screening, provide resources attached to the IPV screening tool for screening staff, and involve change-makers who drive implementation through consistent engagement with clinic staff may lead to increased implementation success in primary care settings and beyond.

Availability of data and materials

The datasets generated and/or analyzed during the current study are not publicly available due to privacy or ethical restrictions.

Abbreviations

IPV:

Intimate partner violence

VHA:

Veterans Health Administration

OWH:

Office of Women’s Health

US:

United States

MMCS:

Matrixed Multiple Case Study

IPVAP:

IPV Assistance Program

IF:

Implementation facilitation

IT:

Information technology

References

  1. Breiding M, Basile KC, Smith SG, Black MC, Mahendra RR. Intimate partner violence surveillance: Uniform definitions and recommended data elements. Version 2.0; 2015.

  2. Bonomi AE, Anderson ML, Rivara FP, Thompson RS. Health care utilization and costs associated with physical and nonphysical-only intimate partner violence. Health Serv Res. 2009;44(3):1052–67.

  3. Trevillion K, Oram S, Feder G, Howard LM. Experiences of domestic violence and mental disorders: a systematic review and meta-analysis. PLoS One. 2012;7(12):e51740.

  4. Dichter ME, Cerulli C, Bossarte RM. Intimate partner violence victimization among women veterans and associated heart health risks. Womens Health Issues. 2011;21(4):S190–S4.

  5. Lagdon S, Armour C, Stringer M. Adult experience of mental health outcomes as a result of intimate partner violence victimisation: a systematic review. Eur J Psychotraumatol. 2014;5

  6. Iverson KM, McLaughlin KA, Gerber MR, Dick A, Smith BN, Bell ME, et al. Exposure to Interpersonal Violence and Its Associations With Psychiatric Morbidity in a U.S. National Sample: A Gender Comparison. Psychol Violence. 2013;3(3):273–87.

  7. Montgomery AE, Sorrentino AE, Cusack MC, Bellamy SL, Medvedeva E, Roberts CB, et al. Recent Intimate Partner Violence and Housing Instability Among Women Veterans. Am J Prev Med. 2018;54(4):584–90.

  8. Sardinha L, Maheu-Giroux M, Stöckl H, Meyer SR, García-Moreno C. Global, regional, and national prevalence estimates of physical or sexual, or both, intimate partner violence against women in 2018. Lancet. 2022;399(10327):803–13.

  9. Breiding MJ, Basile KC, Klevens J, Smith SG. Economic Insecurity and Intimate Partner and Sexual Violence Victimization. Am J Prev Med. 2017;53(4):457–64.

  10. World Health Organization. Violence against women prevalence estimates, 2018: global, regional and national prevalence estimates for intimate partner violence against women and global and regional prevalence estimates for non-partner sexual violence against women. 2021.

  11. Dichter ME, Sorrentino AE, Haywood TN, Bellamy SL, Medvedeva E, Roberts CB, et al. Women’s healthcare utilization following routine screening for past-year intimate partner violence in the Veterans Health Administration. J Gen Intern Med. 2018;33(6):936–41.

  12. Rivara FP, Anderson ML, Fishman P, Bonomi AE, Reid RJ, Carrell D, et al. Healthcare utilization and costs for women with a history of intimate partner violence. Am J Prev Med. 2007;32(2):89–96.

    Article  PubMed  Google Scholar 

  13. Kimerling R, Iverson KM, Dichter ME, Rodriguez AL, Wong A, Pavao J. Prevalence of intimate partner violence among women veterans who utilize Veterans Health Administration primary care. J Gen Intern Med. 2016;31:888–94.

    Article  PubMed  PubMed Central  Google Scholar 

  14. US Preventative Services Task Force. Screening for intimate partner violence and abuse of elderly and vulnerable adults: US preventive services task force recommendation statement. Ann Intern Med. 2013;158(6):478–86.

    Article  Google Scholar 

  15. US Preventative Services Task Force. Screening for intimate partner violence, elder abuse, and abuse of vulnerable adults: US Preventive Services Task Force final recommendation statement. JAMA. 2018;320(16):1678–87.

    Article  Google Scholar 

  16. Department of Veterans Affairs. Directive 1198. Intimate partner violence assistance program. Veterans Health Administration. 2019..

    Google Scholar 

  17. Iverson KM, King MW, Gerber MR, Resick PA, Kimerling R, Street AE, et al. Accuracy of an intimate partner violence screening tool for female VHA patients: a replication and extension. J Trauma Stress. 2015;28(1):79–82.

    Article  PubMed  Google Scholar 

  18. Iverson KM, King MW, Resick PA, Gerber MR, Kimerling R, Vogt D. Clinical utility of an intimate partner violence screening tool for female VHA patients. J Gen Intern Med. 2013;28:1288–93.

    Article  PubMed  PubMed Central  Google Scholar 

  19. Hudspeth N, Cameron J, Baloch S, Tarzia L, Hegarty K. Health practitioners’ perceptions of structural barriers to the identification of intimate partner abuse: a qualitative meta-synthesis. BMC Health Serv Res. 2022;22(1):1–20.

    Article  Google Scholar 

  20. Tarzia L, Cameron J, Watson J, Fiolet R, Baloch S, Robertson R, et al. Personal barriers to addressing intimate partner abuse: a qualitative meta-synthesis of healthcare practitioners' experiences. BMC Health Serv Res. 2021;21(1):567.

    Article  PubMed  PubMed Central  Google Scholar 

  21. Iverson KM, Wells SY, Wiltsey-Stirman S, Vaughn R, Gerber MR. VHA primary care providers’ perspectives on screening female veterans for intimate partner violence: a preliminary assessment. J Fam Violence. 2013;28:823–31.

    Article  Google Scholar 

  22. Iverson KM, Adjognon O, Grillo AR, Dichter ME, Gutner CA, Hamilton AB, et al. Intimate partner violence screening programs in the Veterans Health Administration: informing scale-up of successful practices. J Gen Intern Med. 2019;34:2435–42.

    Article  PubMed  PubMed Central  Google Scholar 

  23. Portnoy GA, Iverson KM, Haskell SG, Czarnogorski M, Gerber MR. A multisite quality improvement initiative to enhance the adoption of screening practices for intimate partner violence into routine primary care for women veterans. Public Health Rep. 2021;136(1):52–60.

    Article  PubMed  Google Scholar 

  24. Jackson EC, Renner LM, Flowers NI, Logeais ME, Clark CJ. Process evaluation of a systemic intervention to identify and support partner violence survivors in a multi-specialty health system. BMC Health Serv Res. 2020;20:1–16.

    Article  Google Scholar 

  25. Kirchner JE, Ritchie MJ, Pitcock JA, Parker LE, Curran GM, Fortney JC. Outcomes of a partnered facilitation strategy to implement primary care–mental health. J Gen Intern Med. 2014;29(4):904–12.

    Article  PubMed  PubMed Central  Google Scholar 

  26. Ritchie M, Dollar K, Miller C, Smith J, Oliver K, Kim B, et al. Using implementation facilitation to improve healthcare (version 3). Veterans Health Administration, Behavioral Health Quality Enhancement Research Initiative (QUERI); 2020.

    Google Scholar 

  27. Iverson KM, Dichter ME, Stolzmann K, Adjognon OL, Lew RA, Bruce LE, et al. Assessing the Veterans Health Administration’s response to intimate partner violence among women: protocol for a randomized hybrid type 2 implementation-effectiveness trial. Implement Sci. 2020;15(1):1–10.

    Article  Google Scholar 

  28. Iverson KM, Stolzmann KL, Brady JE, Adjognon OL, Dichter ME, Lew RA, et al. Integrating Intimate Partner Violence Screening Programs in Primary Care: Results from a Hybrid-II Implementation-Effectiveness RCT. Am J Prev Med. 2023;

  29. Kim B, Sullivan JL, Ritchie MJ, Connolly SL, Drummond KL, Miller CJ, et al. Comparing variations in implementation processes and influences across multiple sites: What works, for whom, and how? Psychiatry Res. 2020;283:112520.

    Article  PubMed  Google Scholar 

  30. Harvey G, Kitson A. PARIHS revisited: from heuristic to integrated framework for the successful implementation of knowledge into practice. Implement Sci 2015;11(1):1-13.

  31. Ramani S, McMahon GT, Armstrong EG. Continuing professional development to foster behaviour change: From principles to practice in health professions education. Medical Teacher. 2019;41(9):1045–52.

    Article  PubMed  Google Scholar 

  32. Hysong SJ, SoRelle R, Hughes AM. Prevalence of Effective Audit-and-Feedback Practices in Primary Care Settings: A Qualitative Examination Within Veterans Health Administration. Hum Factors. 2022;64(1):99–108.

    Article  PubMed  Google Scholar 

  33. Martin B, Jones J, Miller M, Johnson-Koenke R. Health Care Professionals’ Perceptions of Pay-for-Performance in Practice: A Qualitative Metasynthesis. INQUIRY: The Journal of Health Care Organization, Provision, and Financing. 2020;57:0046958020917491.

    PubMed  Google Scholar 

  34. Clark CJ, Renner LM, Logeais ME. Intimate partner violence screening and referral practices in an outpatient care setting. Journal of interpersonal violence. 2020;35(23-24):5877–88.

    Article  PubMed  Google Scholar 

  35. O’Campo P, Kirst M, Tsamis C, Chambers C, Ahmad F. Implementing successful intimate partner violence screening programs in health care settings: evidence generated from a realist-informed systematic review. Soc Sci Med. 2011;72(6):855–66.

    Article  PubMed  Google Scholar 

  36. Miller CJ, Adjognon OL, Brady JE, Dichter ME, Iverson KM. Screening for intimate partner violence in healthcare settings: An implementation-oriented systematic review. Implementation research and practice. 2021;2:26334895211039894.

    Article  PubMed  PubMed Central  Google Scholar 

  37. Lee AS, McDonald LR, Will S, Wahab M, Lee J, Coleman JS. Improving provider readiness for intimate partner violence screening. Worldviews Evid-Based Nurs. 2019;16(3):204–10.

    Article  PubMed  Google Scholar 

  38. Beynon CE, Gutmanis IA, Tutty LM, Wathen CN, MacMillan HL. Why physicians and nurses ask (or don’t) about partner violence: a qualitative analysis. BMC Public Health. 2012;12:1–12.

    Article  Google Scholar 

  39. Aarons GA, Ehrhart MG, Farahnak LR, Hurlburt MS. Leadership and organizational change for implementation (LOCI): a randomized mixed method pilot study of a leadership and organization development intervention for evidence-based practice implementation. Implement Sci. 2015;10(1):1–12.

    Article  Google Scholar 

  40. Borge RH, Egeland KM, Aarons GA, Ehrhart MG, Sklar M, Skar AS. "Change Doesn't Happen by Itself": A Thematic Analysis of First-Level Leaders' Experiences Participating in the Leadership and Organizational Change for Implementation (LOCI) Strategy. Admin Pol Ment Health. 2022;49(5):785–97.

    Article  Google Scholar 

  41. Smith SN, Liebrecht CM, Bauer MS, Kilbourne AM. Comparative effectiveness of external vs blended facilitation on collaborative care model implementation in slow-implementer community practices. Health Serv Res. 2020;55(6):954–65.

    Article  PubMed  PubMed Central  Google Scholar 

  42. Horwitz LI, Kuznetsova M, Jones SA. Creating a learning health system through rapid-cycle, randomized testing. N Engl J Med. 2019;381(12):1175–9.

    Article  PubMed  Google Scholar 

Acknowledgements

We would like to extend a special thank you to Dr. Bo Kim for sharing her knowledge on the MMCS methodology with us and providing us with expert guidance as we applied the MMCS methodology to this study. We would also like to thank the clinical teams who participated in this initiative.

Disclaimer

The views expressed in this article are those of the authors and do not necessarily represent the position or policy of the Department of Veterans Affairs or the United States government.

Funding

The research evaluation was funded by the Department of Veterans Affairs, Office of Research and Development, Health Services Research & Development (HSR&D) Services (SDR 18-150: Iverson & Miller). The funder had no role in the analyses or in the content of this manuscript.

Author information

Contributions

OLA and JEB curated the data, analyzed and interpreted the data, and drafted and revised the manuscript. KMI acquired the funding, conceptualized and designed the study, interpreted the analyses of the data, and substantively revised the manuscript. KS conceptualized the analysis of the data, analyzed the data, interpreted the analyses of the data, and substantively revised the manuscript. RAL conceptualized the analysis of the data and interpreted the analyses of the data. MED conceptualized and designed the study. MRG, GAP, SI, SGH, and LEB conceptualized the study and provided resources for the study’s main activities. CJM acquired the funding, conceptualized and designed the study, interpreted the analyses of the data, and drafted and revised the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Omonyêlé L. Adjognon.

Ethics declarations

Ethics approval and consent to participate

The VA Boston Healthcare System Institutional Review Board approved this study.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

CONSORT Diagram.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Adjognon, O.L., Brady, J.E., Iverson, K.M. et al. Using the Matrixed Multiple Case Study approach to identify factors affecting the uptake of IPV screening programs following the use of implementation facilitation. Implement Sci Commun 4, 145 (2023). https://doi.org/10.1186/s43058-023-00528-x

Keywords