We are writing concerning the recent review article by Esmail et al. “A scoping review of full-spectrum knowledge translation theories, models, and frameworks” in Implementation Science [1]. While we see this article as making an important contribution to the literature, it also, likely inadvertently, mischaracterized the RE-AIM framework in a few salient areas. Our intent is to provide some clarity on these areas, highlight the implications of mischaracterizations of theories, models, and frameworks (TMF), and offer ideas for positive solutions that can advance implementation science.

It is important to start with the potential source of mischaracterizations related to the RE-AIM framework. The only RE-AIM reference cited in the review article was the original 1999 publication [2]. The RE-AIM framework, like many others, has evolved considerably over the past 20 years. Since 1999, we have published several reviews, updates, modifications, newer applications, and guidance for use of RE-AIM [3, 4]. In brief, four statements made about RE-AIM are incorrect; some were partially correct until 2013, but others have not been accurate for a decade or more. Below, we summarize each mischaracterization, followed by a more accurate statement about the RE-AIM framework:

1. The RE-AIM framework is solely quantitative in nature. There have been reports of qualitative uses of RE-AIM for at least a decade. Qualitative assessments of reasons why RE-AIM results were found have been recommended in published reviews since at least 2013 and explicitly called out as a strong recommendation in Kessler et al. [5].

2. RE-AIM assumes all dimensions are equally important. All RE-AIM dimensions are important and contribute to overall public health impact. However, since 2006 [6], we have discussed the pros and cons of different weightings of RE-AIM elements. We advise users to consider all elements, but to prioritize the dimensions most important to stakeholders in a given project [7].

3. The time intervals for assessing implementation and maintenance are arbitrary. The time intervals for implementation and maintenance are not arbitrarily mandated to be 6 months and 2 years, respectively. This was the case in our first publication on RE-AIM in 1999 [2], but in many articles since then, the framework and these temporal perspectives have evolved, e.g., from sustained implementation directly after grant funding ceases to as long as 5 years after the initial implementation was completed [8].

4. The RE-AIM framework is only an evaluation framework. RE-AIM is not just an evaluation framework. It has been widely and successfully used to plan interventions for almost 15 years [9] and, more recently, to help guide adaptations during implementation. In addition, the PRISM framework, which includes RE-AIM, was introduced in 2008 [10] to address contextual factors that influence RE-AIM outcomes.

What prompted this letter is that similar mischaracterizations (e.g., statements that RE-AIM is only an evaluation framework, cannot be used for any other purpose, is not an implementation model, or cannot be used as a qualitative approach) are often heard from grant and journal article reviewers despite abundant evidence to the contrary. These inaccurate assertions have had negative impacts on advancing the science and on the evaluation of proposals from several promising emerging researchers.

We take partial responsibility for this situation, since our group has not consistently provided clear guidance or unambiguous statements on these issues. Although we try to keep our website [3] current, there have been inconsistencies on the website, and, as discussed above, the model has evolved and continues to evolve. We recently published the Glasgow et al. [4] paper to more clearly describe what is now Expanded RE-AIM/PRISM, clarify the meaning of and distinctions among the RE-AIM dimensions, and provide guidance on related issues.

Our larger concern is that similar mischaracterization issues apply not only to RE-AIM, but also to other TMFs. From discussions with other TMF developers, we speculate that many reviewers and scientists rely only on either (a) the original TMF article, without considering refinements and extensions over time, or (b) an influential review, or even, increasingly, reviews of reviews. We understand that when reviewing the broad range of TMFs it is not possible to survey the entire literature on each one, but conclusions in major publications have consequences.

Beyond the specific issues concerning RE-AIM, we would like to begin a discussion of how the field can collectively prevent or address the unintended consequences of earlier categorizations or initial TMF papers being cited as conclusive without consideration of the subsequent literature. We recommend that the discussion sections of reviews include a highlighted statement noting that not all studies on each TMF were evaluated, that there may have been changes since the original publication on a given TMF, and that readers should not rely solely on that review or the original reference. We welcome other suggestions on what reviewers, TMF developers, and readers can do to alleviate this troubling and relatively common issue.