Background

Growing the field of dissemination and implementation (D&I) research and preparing the next generation of researchers to lead it are high priorities [1, 2]. These goals have recently been supported by several training programs across the globe that provide researchers with a solid foundation in approaches to close the gap between scientific discovery and practice. The programs vary in breadth and focus (mental health, cancer, etc.), format (in-person, web-based, etc.), and content delivery components (didactic, on-site visits, mentoring, etc.) [3,4,5,6,7,8,9,10,11,12,13,14].

Much of what we know about the potential benefits of these trainings comes from examining how participants fared after the training, typically by comparing pre- and post-training knowledge or the competencies needed to grow the D&I field. However, few D&I training programs have examined the trajectory of training participants relative to peers who did not receive the same training. Because training programs are typically not randomized by design, understanding their effects is challenging. Recently, two D&I training programs introduced unsuccessful applicants as a comparison group to evaluate the research productivity resulting from program participation. The evaluation of the Training Institute for Dissemination and Implementation Research in Health (TIDIRH) combined cross-sectional surveys of past fellows with portfolio analyses of grants submitted and funded [15]. In addition, the mental health-focused Implementation Research Institute (IRI) program studied bibliometric outcomes among fellows and non-selected applicants [16]. Other academic training efforts have also begun to take advantage of the growing availability of publicly accessible research product information [17, 18]. Such studies pave the way for a meaningful understanding of the associations between mentored training and scientific productivity. They also provide valuable insights to funders, especially for learning how best to train and build capacity for a newer field such as D&I science.

The Mentored Training for Dissemination and Implementation Research in Cancer (MT-DIRC) program has trained and mentored over 50 junior to mid-career researchers to increase their knowledge of and capacity for D&I research in cancer. The training increased D&I research competencies from baseline to 6 and 18 months after the initial training [6]. The purpose of this study is to examine the program's association with capacity building for D&I research, specifically publishing and acquiring US federal grant funding for D&I research.

Methods

This study used a quasi-experimental design (pre/post with a comparison group) to examine the relationship between the MT-DIRC training program and subsequent scholarly output in the form of D&I-related publications and US federal funding secured for research. Our main research question was whether applicants differed in the likelihood of producing research outputs after participation (vs. nonparticipation) in the program.

Training program description summary

The MT-DIRC program was an R25 mentored training funded by the National Cancer Institute (PAR-12-049) to increase capacity for D&I research across the cancer control spectrum (etiology to survivorship) [19]. A full description of the program, with preliminary results on skill building, has been published [6]. Here, we provide a brief summary.

Eligibility requirements included a doctoral degree and a full-time appointment in a research setting. Recruitment, conducted through various listservs and advertisements at D&I-related events and conferences, focused mostly on early-career researchers or mid-to-late-career researchers looking to shift the focus of their work to D&I research. To apply, researchers submitted an informational cover page, a concept paper for a D&I study to work on during the program, a biosketch, and two letters of reference. Program faculty rated each application on several areas, including overall application quality, commitment to D&I research, experience working in trans-disciplinary networks, research support, likelihood of career development, appropriateness of the concept paper's methods and topic, and potential impact of the proposed work.

Selected applicants participated in two Summer Institutes (5-day trainings) in St. Louis, MO, USA, each June. Trainings focused primarily on competencies for D&I research [20] and included in-person mentoring and interactive sessions in which fellows worked on and received feedback on research proposals or other projects in progress. Ongoing evidence-informed mentoring [21], often in the form of regularly scheduled calls, continued for 2 years. Fostering collaboration among the program's network of fellows and mentors was a concerted focus: all current and past fellows and mentors were invited to participate in quarterly webinars and to attend annual meet-up events at the D&I science conference in Washington, DC.

Data collection and processing

In total, 105 researchers applied to the program between 2014 and 2017. Three who were selected into the program later dropped out for various career or personal reasons and were excluded from this study. The total sample of program applicants included in this study is therefore 102: 55 who were selected and participated in the MT-DIRC program (“fellows”) and 47 unselected applicants (“nonfellows”) as the comparison group. Demographic data were collected from each application.

For information on scholarly output, two main sources of publicly available data were used. To gather all published works, we used Scopus (www.scopus.com), a comprehensive citation database with over 75 million records [22]. All applicants to the MT-DIRC program were searched in the Scopus database, and after verifying academic affiliation, their citations and accompanying abstracts were extracted through Scopus’s BibTeX export tool in July 2019 and further processed in R using the “bibliometrix” package (2.3.2) [23]. A total of 5189 publication citations were extracted. Initially, 11% (N = 565) were missing abstracts. Upon closer examination, certain article types accounted for many of the missing abstracts and were excluded (errata = 30, notes = 208, letters = 129). For the remaining citations with missing abstracts (N = 258), a member of the research team searched each citation to confirm that the abstract was truly missing (e.g., some journals do not require abstracts) or to extract the abstract if found. A total of 208 additional abstracts were located, and the remaining citations without abstracts (N = 50) were excluded. The final dataset contained 4772 citations and included only citation ID number, applicant ID number, title, and abstract for further de-identified coding.
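As an illustration of this processing step, the sketch below shows how a Scopus BibTeX export could be read and filtered in R with bibliometrix. The file name is hypothetical, and the field tags (DT for document type, AB for abstract, TI for title) follow bibliometrix's standard naming; this is a minimal sketch of the workflow described above, not the authors' exact script.

library(bibliometrix)  # version 2.3.2 was used in the study

# Read the Scopus BibTeX export into a bibliographic data frame
M <- convert2df("scopus_export.bib", dbsource = "scopus", format = "bibtex")

# Drop article types that accounted for many of the missing abstracts
M <- M[!toupper(M$DT) %in% c("ERRATUM", "NOTE", "LETTER"), ]

# Flag remaining records with missing abstracts for manual review
needs_review <- M[is.na(M$AB) | M$AB == "", c("TI", "DT")]

# Reduce to de-identified fields for coding (ID numbers assigned separately)
coding_set <- data.frame(title = M$TI, abstract = M$AB, stringsAsFactors = FALSE)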

To gather grant funding data, we used the National Institutes of Health’s (NIH) Research Portfolio Online Reporting Tools (RePORTER Tool Manual, 2018). RePORTER is an electronic tool for searching the US federal repository of intramural and extramural NIH-funded research projects dating back to 1985. The repository also includes projects funded by the Administration for Children and Families (ACF), the Agency for Healthcare Research and Quality (AHRQ), the Centers for Disease Control and Prevention (CDC), the Health Resources and Services Administration (HRSA), the Food and Drug Administration (FDA), and the Department of Veterans Affairs (VA). The R package “fedreporter” (0.2.1) [24] was used in September 2019 to extract grant funding information, including abstracts, for all applicants from RePORTER’s application programming interface (API). We included records where the applicant was either a coinvestigator or a principal investigator. A total of N = 271 funding records were extracted. After keeping only the first fiscal year of duplicate entries (e.g., multi-year funding), the total sample of unique US federally funded projects was N = 97. Funding cases were reduced to grant ID, applicant ID, title, and abstract for further de-identified coding.
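A sketch of how such records could be pulled and de-duplicated in R follows. The search function and its arguments are assumptions based on the "fedreporter" package's interface to the RePORTER API, the field names are assumed to mirror the API's JSON schema, and the investigator name is hypothetical.

library(fedreporter)  # version 0.2.1 was used in the study

# Query the RePORTER API for one applicant; fe_projects_search() and its
# arguments are assumptions about the package interface
res <- fe_projects_search(pi_names = "SMITH, JANE")
grants <- res$items  # field names assumed to mirror the API's JSON schema

# Multi-year awards appear once per fiscal year; keep the first year only
grants <- grants[order(grants$fy), ]
unique_grants <- grants[!duplicated(grants$projectNumber), ]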

Coding process

For each publication and grant abstract, D&I focus was coded as “yes” or “no,” similar to Baumann and colleagues’ criterion [16]. A D&I focus (“yes”) included an “implementation-centered hypothesis, design, or framework, focused on assessing the implementation climate of an organization, or described the implementation processes for a particular intervention.” For example, abstracts that explicitly examined implementation outcomes were coded as “yes,” and those that focused only on intervention outcomes were coded as “no.” In addition, all publications featured in the journal Implementation Science were coded as D&I “yes” based on the journal’s inherent D&I focus and scope. Two project staff (RRJ, AG) coded the publication and grant abstracts. For publications, an initial random selection of citations (N = 20) was double coded by both project staff and discussed to reach agreement on the inclusion and exclusion criteria; one coder (AG) then single coded the remaining citations. A random sample of 10% was selected for double coding and showed 93% agreement. The number of grants was considerably smaller than the number of publications; therefore, all grant abstracts were double coded, with any discrepancies reconciled to reach 100% consensus.
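A simple way to compute the percent agreement reported above is sketched here; the toy vectors stand in for the two coders' "yes"/"no" D&I judgments and are purely illustrative.

set.seed(1)

# Toy data standing in for the two coders' D&I codes on the same abstracts
coder_a <- sample(c("yes", "no"), 480, replace = TRUE, prob = c(0.3, 0.7))
coder_b <- coder_a
flip <- sample(480, 34)  # simulate occasional disagreement
coder_b[flip] <- ifelse(coder_a[flip] == "yes", "no", "yes")

# Percent agreement on a 10% random double-coded sample
double_idx <- sample(480, ceiling(0.10 * 480))
mean(coder_a[double_idx] == coder_b[double_idx])  # the study reports 93%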

Data analysis

After coding of each publication and grant was complete, data were aggregated to the applicant level and merged with the applicant demographic data for analysis (N = 102). Because RePORTER covers only US-based research, we excluded foreign applicants (N = 15) from the grant analyses.

We compared demographic, publication, and funding data by applicant status using independent-samples t tests (continuous data) and chi-square tests (categorical data). For modeling, two main binary outcomes were examined: any D&I publication after the application year (yes or no) and, because the program supported grant writing in general, any grant funding after the application year (yes or no). Binary logistic regression models examined program participation status in relation to each outcome. Attenuation, or change in estimate (CIE), of the program participation coefficient resulting from the inclusion of independent variables was examined [25, 26]. Variables that resulted in more than 5% attenuation were included in separate and combined models for further examination. All analyses were completed in R [27] with the alpha level set at 0.05.
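The sketch below illustrates this modeling strategy and the change-in-estimate check. It assumes an applicant-level data frame, dat, with illustrative variable names (fellow status, outcome, candidate covariates) and is not the authors' exact code.

# Base model: fellow status, controlling for time since the application year
m_base <- glm(di_pub_after ~ fellow + years_since_application,
              family = binomial, data = dat)

# Add a candidate covariate (e.g., previous D&I publication)
m_adj <- update(m_base, . ~ . + prior_di_pub)

# Change in estimate (CIE): covariates attenuating the fellow coefficient
# by more than 5% were retained in further models
b_base <- coef(m_base)["fellowTRUE"]
b_adj  <- coef(m_adj)["fellowTRUE"]
100 * (b_base - b_adj) / b_base

# Odds ratios with 95% confidence intervals, as reported in Tables 3 and 4
exp(cbind(OR = coef(m_adj), confint(m_adj)))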

Results

Fellows and nonfellows were similar across several demographic characteristics (Table 1). The majority were female (80.4%), and “prevention” was the most common cancer research focus area (48.0%). Assistant professor was the largest position category among applicants (46.1%), followed by post-doctoral or research scientist (21.6%), others (17.6%), and associate professor or professor (14.7%). On average, applicants were 17.9 (SD = 6.7) years post-undergraduate degree when they applied for the MT-DIRC program.

Table 1 Applicant characteristics (N = 102), Mentored Training for Dissemination and Implementation Research in Cancer

Fellows and nonfellows differed in grant funding before their application year (51.0% vs. 34.2%), and fellows were more likely to have grant funding after the application year (36.7% vs. 13.2%) (Table 2). Both before and after the application year, fellows were more likely than nonfellows to have a D&I publication (36.4% vs. 14.9% and 65.5% vs. 31.9%, respectively). These differences were accounted for in the multivariate analyses that follow.

Table 2 Applicant funding and publication (N = 102), Mentored Training for Dissemination and Implementation Research in Cancer

Table 3 shows that fellows had more than three times the odds of grant funding after the MT-DIRC application year compared with nonfellows (OR 3.2; 95% CI 1.1–11.0, model 1) while controlling for time since the application year. Effect estimates were 3.3 (95% CI 1.1–11.6, model 2) after adjusting for cancer research area, 3.0 (95% CI 0.99–10.2, model 3) after adjusting for previous grant funding, and 3.1 (95% CI 0.98–11.0, model 4) after adjusting for both.

Table 3 US applicant awarded federally funded grant between the application year and September 2019 (N = 87)

Table 4 shows that fellows had almost four times the odds of publishing D&I-focused work compared with nonfellows while adjusting for time (OR 3.8, 95% CI 1.7–9.0, model 1). Odds remained elevated for fellows after additionally adjusting for previous D&I publication (OR 3.1, 95% CI 1.3–7.7, model 2), years since undergraduate degree (OR 3.5, 95% CI 1.5–8.8, model 3), and both (OR 2.9, 95% CI 1.2–7.5, model 4).

Table 4 Applicant published D&I article between the application year and July 2019 (N = 102)

Discussion

Overall, our results show that fellows were more likely than nonfellows to contribute to the D&I research literature following their participation in the MT-DIRC program.

Our findings are similar to those of the evaluation of a mentored implementation science training in mental health, in which fellows were more likely to receive D&I grants and publish D&I work post-fellowship [16]. Comparatively, fellows selected into MT-DIRC were less experienced in D&I, with fewer having D&I publications (36% vs. 67%) and D&I funding (16% vs. 36%) at baseline than IRI fellows. Unlike the IRI program, MT-DIRC did not require experience writing NIH grants as a prerequisite, which might account for this difference. Nonetheless, both studies point to the career impact of multi-year, mentored training approaches and their potential for growing D&I research capacity within the field.

Globally, several training programs aim to build capacity for translational research (translating research to practice and communities). A recent synthesis of US and international D&I research trainings describes a large variety of approaches, ranging from single short courses to long-term academies that combine didactic training with mentoring [8]. Training endeavors have contributed evaluation findings to the field, though common metrics are lacking; such metrics could aid in understanding the trainings’ collective impact on capacity building. In addition, relating training outputs (outcomes) to training inputs (e.g., frequency, length, format) could identify the “active ingredients” with the largest impact. Here, we provide a fairly simple template using publicly available data to understand training impact on scientific productivity in the field. Attention from the field is needed to develop templates for other impacts or common measures (knowledge, networking, mentoring) from D&I research training programs. Our program also conducted qualitative interviews with fellows to understand how specific program activities may have affected these measures (analyses in process). Combined evaluation approaches are likely needed to fully understand the impacts of specific program activities on research outputs.

One byproduct of mentored training approaches is the network of peers and mentors, which has been linked to increased research collaborations [28]. Building and fostering collaborations within this network was the main objective of the MT-DIRC mentored approach to training and may have contributed to the increased scientific productivity of our fellows. Guise and colleagues’ qualitative evaluation with mentors and mentees suggests team mentoring “works when” mentors and mentees bring varying perspectives and expertise and when mentors promote networking among mentees [29]. At the 5-day in-person Summer Institute, fellows working at various intersections of D&I cancer prevention and control research connected and organized manuscripts to address gaps in the science. These collaborative publications were also supported and encouraged through in-person mentoring and group mentoring calls. Group mentoring provided a platform to interact with like-minded peers, a dedicated space for critique by experts in the field, and a continuous longer-term relationship with peers and mentors that likely helped fellows accelerate their research progress.

Generating manuscripts is often accomplished in a shorter time frame than proposing and receiving competitive grant funding. At the time of this study, only 5 years had passed since the first cohort attended its first Summer Institute. Producing and disseminating key D&I research is an important step toward increasing knowledge and filling gaps in understanding. This speaks to the potential of a mentored training program to accelerate D&I capacity building at a relatively rapid pace, which is needed to continue building the field.

Limitations

While our study provides one approach to understanding the relationship between mentored training and building capacity in D&I research, several limitations should be noted. The main limitation is the use of unsuccessful applicants as a comparison group. While we provide data showing the groups were statistically similar across several baseline characteristics, there may be unmeasured differences that led fellows to be selected at the outset; for example, the selection criteria included research support and the likelihood of career development. We did not account for research funding trends, which may have favored some applicants’ research areas over others. We also did not include journal impact measures for citations, a potential factor for understanding research impact. Our 6-year funding period, a function of the grant cycle, is too short to capture longer-term effects on practice and policy. Easy access to comprehensive productivity information is limited mostly to publications and databases of federal funding. Capturing other forms of productivity, such as changes to cancer control policy or practice based on D&I research or the spread of D&I research capacity through teaching and presenting, would allow a fuller understanding of impact but would require standardized metrics and measures.

Conclusions

Mentored training is an important and effective approach to building capacity in D&I research. A growing body of literature shows the value of systematic approaches to mentored training in biomedical research [21] and specifically in D&I research [16]. Our evaluation and other recent literature highlight the combined impacts of didactic teaching, networking, and mentoring. This study used one evaluation approach to examine scientific productivity; additional steps (e.g., stronger evaluation designs, standardized metrics and measures) are needed to fully document training program impacts.