
Implementation science has been defined as “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence, to improve the quality and effectiveness of health services” [1, 2]. As implementation science grows in mental health services research, increasing emphasis has been placed on promoting the use of research evidence, or “empirical findings derived from systematic research methods and analyses” [3], in system-wide interventions that aim to improve the emotional, behavioral, and mental health of large populations of children, adolescents, and their caregivers [1, 2]. Examples include the adoption of universal mental health screening programs [4], development of trauma-informed systems of care [5], and psychotropic prescription drug monitoring programs [6,7,8]. Emphasis on research evidence use to inform system-wide interventions is rooted in beliefs that the use of such evidence will improve effectiveness, optimize resource allocation, mitigate the role of subjective beliefs or politics, and enhance health equity [9]. At the same time, documented difficulties in the use of research evidence in mental health policy and programmatic innovation represent a sustained challenge for the implementation sciences.

To address these challenges, calls have been made for translational efforts to extend beyond simply promoting research evidence uptake (e.g., training the workforce to identify, critically appraise, and rapidly incorporate available research into decision-making [10,11,12]) and instead to consider the knowledge, skills, and infrastructure required to inform and embed research evidence use in decision-making [13]. In this paper, we argue for engaging the decision sciences to respond to the translational challenge of research evidence use in system-wide policy and programmatic innovations. System-wide innovations are studied far less frequently than medical treatments and often rely on cross-sectional or quasi-experimental study designs that lack a randomized control [10], adding to the uncertainty of the evidence base. Relevant decisions also typically require considerations that extend beyond effectiveness alone, such as whether human, fiscal, and organizational capacities can accommodate implementation or redress health inequities [13]. Accordingly, studies of research evidence use in system-wide policy and programmatic innovation increasingly emphasize that efforts to improve research evidence use require access not only to the research evidence, but also to the expertise needed to address the uncertainty of that evidence and consideration of how values influence the decision-making process [14, 15].

In response, this article proposes the decision sampling framework, a systematic approach to data collection and analysis that aims to make explicit the gaps in evidence available to decision-makers by elucidating the decisions they confront and the role of research evidence, other types of information, and expertise. This study contributes to the field of implementation science by proposing the decision sampling framework as a methodological approach that can inform the development of future dissemination and implementation strategies responsive to these evidence gaps.

This article first presents our conceptualization of an optimal “evidence-informed decision” as an integration of (a) the best available evidence, (b) expertise to address scientific uncertainty in the application of that evidence, and (c) the values decision-makers prioritize to assess the trade-offs of potential options [14]. Then, we propose a specific methodological approach, “decision sampling,” that engages the theoretic tenets of the decision sciences to direct studies of evidence use in system-wide innovations to an investigation of the decision-making process itself. We conclude with a case study of decision sampling as applied to policy regarding trauma-informed screening for children in foster care.

Leveraging the decision sciences to define an “evidence-informed decision”

Foundational to the decision sciences is expected utility theory—a theory of rational decision-making whereby decision-makers maximize the expectation of benefit while minimizing risks, costs, and harms. While many theoretical frameworks have been developed to address its methodological limitations [16], expected utility theory nevertheless provides important insights into optimal decision-making when confronted with the real-world challenges of accessing relevant, high-quality, and timely research evidence for policy and programmatic decisions [17]. More specifically, expected utility theory demonstrates that evidence alone does not answer the question of “what to do;” instead, it suggests that decisions draw on evidence, where available, but also incorporate expertise to appraise that evidence and articulated values and priorities that are weighed in consideration of the potential harms, costs, and benefits associated with any decision [14]. Although expected utility theory—unlike other frameworks in the decision sciences [18]—does not address the role of human biases, cognitive distortions, and political and financial priorities that may also influence decision-making in practice, it nevertheless offers a valuable normative theoretical model to guide how decisions might be made under optimal circumstances.
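For readers less familiar with this formalism, a minimal statement of expected utility maximization is sketched below; the notation is generic and is not drawn from any specific analysis reported in this article.

```latex
% Expected utility of an action a over uncertain states s, where p(s) is the
% probability of state s and U(o(a,s)) is the utility of the resulting outcome:
\mathrm{EU}(a) = \sum_{s \in S} p(s)\, U\bigl(o(a, s)\bigr)

% The decision-maker selects the action that maximizes expected utility:
a^{*} = \arg\max_{a \in A} \mathrm{EU}(a)
```

Read against the argument above, the probabilities correspond to the available evidence, judging how actions map to outcomes in a given context requires expertise, and the utilities encode the values and priorities that decision-makers bring to the choice.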

Moving from research evidence use to knowledge synthesis

Based on a systematic review of research evidence use in policy [17], Oliver and colleagues argue that “much of the research [on policymakers’ use of research evidence] is theoretically naïve, focusing on the uptake of research evidence as opposed to evidence defined more broadly” [19]. Accordingly, some have suggested that policymakers use research evidence in conjunction with other forms of knowledge [20, 21]. Recent calls for “more critically and theoretically informed studies of decision-making” highlight the important role that broader sources of knowledge and local sources of information play alongside research evidence when making local policy decisions [17]. Research evidence to inform policy may include information on prevalence, risk factors, screening parameters, and treatment efficacy, which may be limited in its applicability to large population groups. Evaluations based on prospective experimental designs are rarely appropriate or possible for policy innovations because they do not account for the complexity of implementation contexts. Even when available, findings from these types of studies are often limited in their generalizability or not responsive to the information needs of decision-makers (e.g., feasibility, costs, sustainability) [14]. Accordingly, the research evidence available to make policy decisions is rarely so definitive or robust as to rule out all alternatives, limiting the utility of research evidence alone. For this reason, local sources of information, such as administrative data, consumer and provider experience and opinion, and media stories, among others, are often valued because they may provide data that facilitate tailoring available research evidence to accommodate heterogeneity in the population, delivery system, or sociopolitical context [20, 22,23,24,25,26,27]. Optimal decisions thus can be characterized as a process of synthesizing a variety of types of knowledge beyond available research evidence alone.

The role of expertise

Expertise is also needed to assess whether available research evidence holds “fit to context” [28]. Local policymakers may review evidence of what works in a controlled research setting, but remain uncertain as to where and with whom it will work on the ground [29]. Such concerns are not only practical but also scientific. For example, evaluations of policy innovations may produce estimates of average impact that are “unbiased” with respect to available data yet still can produce inaccurate predictions if the impact varies across contexts, or the evaluation estimate is subject to sampling errors [29].

The role of values in assessing population-level policies and trade-offs

Values are often considered to be antithetical to research evidence because they introduce subjective feelings into an ideally neutral process. Yet, the decision sciences suggest that values are required alongside evidence and expertise to inform optimal decision-making [30]. Application of clinical trials to medical policy offers a useful example. Frequently, medical interventions are found to reduce mortality but also quality of life. Patients and their physicians often face “preference sensitive” decisions, such as whether women at high risk of cancer attributable to the BRCA-2 gene should choose to undergo a mastectomy (i.e., removal of one or both breasts) at some cost to quality of life or not to undergo the invasive surgery and increase risk for premature mortality [31]. To weigh the trade-offs between competing outcomes—in this case the trade-off between extended lifespan and decreased quality of life—analysts often rely on “quality adjusted life years,” commonly known as QALYs. Fundamentally, QALYs represent a quantitative approach to applying patients’ values and preferences to evidence regarding trade-offs in likelihood and magnitude of different outcomes.
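As a purely illustrative calculation (the life expectancies and quality weights below are hypothetical, not drawn from the BRCA-2 literature), QALYs multiply expected years of life by a quality-of-life weight between 0 and 1:

```latex
% QALYs = expected years of life x quality-of-life weight (0 = death, 1 = full health).
% Hypothetical comparison of two options (weights and years are illustrative only):
\text{QALY}_{\text{surgery}}    = 30 \ \text{years} \times 0.85 = 25.5 \\
\text{QALY}_{\text{no surgery}} = 26 \ \text{years} \times 0.95 = 24.7
```

Comparing the two totals makes the trade-off explicit: a longer expected lifespan can outweigh a reduction in quality of life, or vice versa, depending on the weights patients assign.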

Likewise, policies frequently are evaluated on multiple outcome criteria, such as effectiveness, return on investment, prioritization within competing commitments, and/or stakeholder satisfaction. Efforts to promote evidence use therefore require consideration of the values of decision-makers and relevant stakeholders. Federal funding mechanisms emphasizing engagement of stakeholders in determining research priorities and patient-centered outcomes [32,33,34] are an acknowledgement of the importance of values in optimal decision-making. Specifically, when multiple criteria are relevant (e.g., effectiveness, feasibility, acceptability, cost)—and when evidence may support a positive impact for one criterion but be inconclusive or unsupportive for another—preferences and values are necessary to make an evidence-informed decision.

To address these limitations, we recommend a new methodology, the decision sampling framework, which improves the analysis of decision-making by capturing the varied and complicated inputs left out of current models. Rather than relying on investigation of research evidence uptake alone, this approach anchors the interview on the decision itself, focusing on respondents’ perceptions of how they synthesize available evidence, use judgment to apply that evidence, and apply values to weigh competing outcomes. By orienting the interview on key dimensions of the dynamic decision-making process, this method offers a process that could improve implementation of research evidence by more accurately reflecting the decision-making process and its context.

Case study: trauma-informed care for children in foster care

In this case study, our methodological approach focused on sampling a single decision per participant regarding the identification and treatment of children in foster care who have experienced trauma [35]. In 2016, just over 425,000 U.S. children were in foster care at any one point in time [36]. Of these, approximately 90% presented with symptoms of trauma [37], defined as the simultaneous or sequential occurrence of threats to physical and emotional safety, including psychological maltreatment, neglect, exposure to violence, and physical and sexual abuse [38]. Despite advances in the evidence available to identify and treat trauma for children in foster care [39], a national survey indicated that fewer than half of forty-eight states used at least one screening or assessment tool with a rating of A or B from the California Evidence-Based Clearinghouse [4].

In response, the Child and Family Services Improvement and Innovation Act of 2011 (P.L. 112-34) required Title IV-B funded child welfare agencies to develop protocols “to monitor and treat the emotional trauma associated with a child’s maltreatment and removal from home” [40], and federal funding initiatives from the Administration for Children and Families, the Substance Abuse and Mental Health Services Administration (SAMHSA), and the Centers for Medicare & Medicaid Services incentivized the adoption of evidence-based, trauma-specific treatments for children in foster care [41]. These federal actions offered the opportunity to investigate the process by which states used available research evidence to inform policy decisions as they sought to implement trauma-informed protocols for children in foster care.

In this case study, we sampled decisions of mid-level managers seeking to develop protocols to identify and treat trauma among children and adolescents entering foster care. Protocols specifically included a screening tool or process intended to inform determination of whether referral for additional assessment or services was needed. Systematic reviews and clearinghouses were available to identify the strength of evidence for potential trauma screening tools [42, 43]. Our approach explicitly investigated the multiple inputs and the complex process used to make these decisions.

Methods

Table 1 provides a summary of each step of the decision sampling method and a brief description of the approach taken for the illustrative case study. This study was reviewed and approved by the Institutional Review Board at Rutgers, The State University of New Jersey.

Table 1 Steps for the decision sampling framework and an illustrative case study

Sample

We posit that decision sampling requires the articulation of an innovation or policy of explicit interest. Drawing from a method known as “experience sampling” that engages systematic self-report to create an archival file of daily activities [45], “decision sampling” first identifies a sample of individuals working in a policy or programmatic domain, and then samples one or more relevant decisions for each individual. While experience sampling is typically used to acquire reports of emotion, “decision sampling” is intended to acquire a systematic self-report of the process for decision-making. Rationale for the number of key informants should meet relevant qualitative standards (e.g., theoretic saturation [46]).

In this case study, key informants were mid-level administrators working on public sector policy relevant to trauma-informed screening, selected purposively rather than at random [47]; we then sampled specific decisions to gain systematic insight into their decision-making processes, complex social organization, and use of research evidence, expertise, and values.

Interview guide

Rather than asking directly about evidence use, the decision sampling approach seeks to minimize response and desirability bias by anchoring the conversation on an important decision recently made. The approach thus (a) shifts the typical unit of analysis in qualitative work from the participant to the decision confronted by the participant(s) working on a policy decision and (b) leverages the decision sciences to investigate that decision. Consistent with expected utility theory and as depicted in Fig. 1, the framework for an interview guide aligns with the inputs of a decision tree, reflecting the choices, chances, and outcomes that are ideally considered when making a choice among two or more options. This framework then generates domains for theoretically informed inputs of the decision-making process and consideration of subsequent trade-offs.

Fig. 1 Interview Guide Domains: Mapping Interview Questions onto a Decision Analysis Framework

Accordingly, our semi-structured interview guide asked decision-makers to recall a recent and important decision regarding a system-wide intervention to identify and/or treat trauma of children in foster care. Questions and probes were then anchored to this decision and mapped onto the conceptual model. As articulated in Fig. 1, domains in our guide included (a) decision points, (b) choices considered, (c) evidence and expertise regarding chances and outcomes [20, 48], (d) outcomes prioritized (which implicitly reflect values), (e) the group process (including focus, formality, frequency, and function associated with the decision-making process), (f) explicit values as described by the respondent, and (g) trade-offs considered in the final decision [48]. By inquiring about explicit values, our guide probed beyond values as a means to weigh competing outcomes (as more narrowly considered in decision analysis) to include a broader definition of values that could apply to any aspect of decision-making. Relevant sections of the interview guide are available in the supplemental materials.
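To make this mapping concrete, the sketch below shows one hypothetical way a sampled decision could be recorded against domains (a) through (g); the field names and example content are illustrative and do not reproduce the study's instrument or data.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical record of a single sampled decision, with fields mirroring the
# interview guide domains (a)-(g) described above; names are illustrative only.
@dataclass
class SampledDecision:
    decision_point: str                 # (a) the decision confronted
    choices: List[str]                  # (b) alternatives considered
    evidence_and_expertise: List[str]   # (c) information on chances and outcomes
    outcomes_prioritized: List[str]     # (d) outcomes weighed (implicit values)
    group_process: Dict[str, str]       # (e) focus, formality, frequency, function
    explicit_values: List[str]          # (f) values named by the respondent
    trade_offs: str                     # (g) trade-offs weighed in the final decision

# Illustrative entry, loosely modeled on the case study (content is hypothetical):
example = SampledDecision(
    decision_point="Set the cut-score for a trauma screening tool",
    choices=["published threshold of 3", "higher threshold of 7"],
    evidence_and_expertise=["psychometric studies", "consultation with tool developer"],
    outcomes_prioritized=["children identified", "system capacity to respond"],
    group_process={"focus": "screening protocol", "formality": "standing committee"},
    explicit_values=["trauma-informed approach", "feasibility"],
    trade_offs="screening sensitivity vs. capacity and cost to respond",
)
```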

Data analysis

Data from the semi-structured qualitative interviews were analyzed using a modified framework analysis involving seven steps [49]. First, recordings were transcribed verbatim, checked for accuracy, and de-identified. Second, analysts familiarized themselves with the data, listening to recordings and reading transcripts. Third, an interdisciplinary team reviewed transcripts employing emergent and a priori codes—in this case aligned with the decision sampling framework (see a–g above [20, 50]). The codebook was then tested for applicability and interrater reliability. Two or more trained investigators performed line-by-line coding of each transcript using the developed codebook, and codes were reconciled line-by-line until consensus was reached. Fourth, analysts used qualitative data analysis software to facilitate data indexing and developed a matrix to index codes thematically, by interview transcript, in order to summarize established codes for each decision. (The matrix used to index decisions in the present study is available in the supplement.) In steps five and six, the traditional approach to framework analysis was modified by summarizing the excerpted codes for each transcript into decision-specific matrices with relevant quotes included. This modification aggregated data for each discrete decision point into a matrix, providing the opportunity for routine and systematic analysis of each decision according to core domains. In step seven, the data were systematically analyzed across each decision matrix. Finally, we performed data validation to address potential biases introduced by the research team. As described in Table 1, our illustrative case study engaged member-checking group interviews (see the supplement for the group interview guide).
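As a minimal sketch of the matrix-indexing step (steps four through six), assuming coded excerpts have already been extracted, the following illustrates how codes might be aggregated by decision and domain; the transcript identifiers, decision labels, codes, and quotes are hypothetical.

```python
import pandas as pd

# Hypothetical coded excerpts after line-by-line coding (step three); in practice
# these would be exported from qualitative data analysis software.
excerpts = pd.DataFrame([
    {"transcript": "T01", "decision": "cut-score", "code": "evidence",  "quote": "..."},
    {"transcript": "T01", "decision": "cut-score", "code": "values",    "quote": "..."},
    {"transcript": "T02", "decision": "reach",     "code": "expertise", "quote": "..."},
])

# Steps four through six: index coded excerpts by decision and domain to build a
# decision-specific matrix summarizing relevant quotes for each cell.
matrix = (
    excerpts
    .groupby(["decision", "code"])["quote"]
    .apply(lambda quotes: "; ".join(quotes))  # summarize quotes per decision x code cell
    .unstack()                                # rows = decisions, columns = codes
)
print(matrix)
```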

Results

Table 2 presents informant characteristics for the full sample (in the 3rd column) as well as the subsample (in the 2nd column) who reported decisions relevant to screening and assessment that are analyzed in detail below. The findings presented below demonstrate key products of the decision sampling framework, including the (a) set of inter-related decisions (i.e., the “decision set”) routinely (although not universally) confronted in developing protocols to screen and assess for trauma, (b) diverse array of information and knowledge accessed, (c) gaps in that information, (d) role of expertise and judgment in assessing “fit to context”, and ultimately (e) values to select between policy alternatives. We present findings for each of these themes in turn.

Table 2 Respondent characteristics

The inter-related decision set and choices considered

Policymakers articulated a set of specific and inter-related decisions required to develop system-wide trauma-focused approaches to screen and assess children entering foster care. Across interviews, analyses revealed the need for decisions in five domains: (a) reach, (b) content of the endorsed screening tool, (c) the threshold or “cut-score” to be used, (d) the resources to start up and sustain a protocol, and (e) the capacity of the service delivery system to respond to identified needs. Table 3 provides a summative list of choices articulated across all respondents who reported respective decision points within each of the five domains. All decision points presented more than one choice alternative, with respondents articulating as many as six potential alternatives for a single decision point (e.g., administrator credentials).

Table 3 The inter-related decision set for trauma screening, assessment, and treatment

Information continuum to inform understanding of options’ success and failure

To inform decisions, mid-level managers sought information from a variety of different sources ranging from research evidence (e.g., descriptive studies of trauma exposure, intervention or evaluation studies of practice-based and system-wide screening initiatives, meta-analyses of available screening and treatment options, and cost-effectiveness studies [3]) to anecdotes (see Fig. 2). Across this range, information derived from both global contexts, defined as evidence drawn from populations and settings outside of the jurisdiction, and local contexts, defined as evidence drawn from the affected populations and settings [50]. Notably, systematic research evidence most often derived from global contexts, whereas ad hoc evidence most often derived from local sources.

Fig. 2 Information from Research Evidence to Anecdote

Evidence gaps

Respondents reported that the amount and type of information available depended on the decision itself. Comparing the decision set with the types of information available facilitated identification of evidence gaps. For example, respondents reported far more research literature about where a tool’s threshold should be set (based on its psychometric properties) than about the resources required to initiate and sustain the protocol over time. As a result, ad hoc information was often used to address these gaps in research evidence. For example, respondents indicated greater reliance on information such as testimony and personal experience to choose between policy alternatives. In particular, respondents often noted gaps in generalizability of research evidence to their jurisdiction and relevant stakeholders. As one respondent at a mental health agency notes, “I found screening tools that were validated on 50 people in [another city, state]. Well, that's not the population we deal with here in [our state] in community mental health or not 50 kids in [another city]. I want a tool to reflect the population we have today.”

The decision sampling method revealed where evidence gaps were greatest, allowing further analysis to identify whether these gaps reflected a lack of extant evidence or limited accessibility of existing evidence. In Fig. 3, a heat map illustrates where evidence gaps were most frequently expressed. Notably, this heat map shows that published studies were available for each of the relevant domains except the “capacity for delivery systems to respond.”

Fig. 3 Information Gaps across the Dynamic Decision Continuum

Role of expertise in assessing and contextualizing available research evidence

As articulated in Table 4, respondents facing decisions engaged not only with information directly, but also with professionals holding various kinds of clinical, policy, and/or public sector systems expertise. This engagement served three purposes. First, respondents relied on experts to address information gaps. As one respondent illustrates, “We really found no literature on [feasibility to sustain the proposed screening protocol] … So, that was where a reliance in conversation with our child welfare partners was really the most critical place for those conversations.” Second, respondents relied on experts to help sift through large amounts of evidence and present alternatives. As one respondent in a child welfare agency indicates:

“I would say the committee was much more forthright… because of their knowledge of [a specific screening] tool. I helped to present some other ones … because I always want informed decision-making here, because they've looked at other things, you know...there were people on the committee that were familiar with [a specific screening tool]. And, you know, that's fine, as long as you're – you know, having choice is always good, but [the meeting convener] knew I'd bring the other ones to the committee.”

Table 4 Types of expertise across the decision set

Third, respondents reported reliance on experts to assess whether research evidence held “fit to context” for the impacted sector or population. As one mental health professional indicated:

“It just seems to me when you're writing a policy that's going to impact a certain areas’ work, it's a really good idea to get [child welfare supervisor and caseworker] feedback and to see what roadblocks they are experiencing to see if there's anything from a policy standpoint that we can help them with…And then we said here's a policy that we're thinking of going forward with and typically they may say yes, that's correct or they'll say you know this isn't going work have you thought about this? And you kind of work with them to tweak it so that it becomes something that works well for everybody.”

Therefore, clinical, policy, or public sector systems expertise played at least three different roles—as a source of information in itself, as a broker that distilled or synthesized available research evidence, and also as a means to help apply (or assess the “fit” of) available research evidence to the local context.

The role of values in selecting policy alternatives

When considering choices, respondents focused on their associated benefits, challenges, and “trade-offs.” Ultimately, decision-makers revealed how values drove them to select one alternative over another, including commitments to (a) a trauma-informed approach, (b) aligning practice with the available evidence base, (c) aligning choices with available expert perspectives, and (d) accounting for the capacity of the system to implement and maintain the innovation.

For example, one common decision faced was where to set the “cut-off” score for trauma screening tools. Published research typically provides a cut-off score designed to optimize sensitivity (i.e., the likelihood of a true positive) and specificity (i.e., the likelihood of a true negative). One Medicaid respondent engaged the researcher who had developed a screening tool, who articulated that the evidence-based threshold for further assessment on the tool was a score of “3.” However, the public sector decision-maker chose the higher threshold of “7” because they anticipated the system lacked capacity to implement and respond to identified needs if the threshold were set at the lower score. This decision exemplifies a trade-off between responsiveness to the available evidence base and system capacity. As the Medicaid respondent articulated: “What we looked at is we said where we would we need to draw the line, literally draw the line, to be able to afford based on the available dollars we had within the waiver.” The respondent confronted a trade-off between maximizing effectiveness of the screening tool, the resources to start up and sustain the protocol, and the capacity of the service delivery system to respond. The respondent displayed a clear understanding of these trade-offs in the following statement: “…children who have 3 or more identified areas of trauma screen are really showing clinical significance for PTSD, these are kids you should be assessing. We looked at how many children that was [in our administrative data], and we said we can’t afford that.” In this statement, consideration for effectiveness is weighed against feasibility of implementation, including the resources to initiate and sustain the policy.
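The capacity implications of such a choice can be illustrated with a simple simulation; the score distribution, sample size, and counts below are hypothetical and do not reflect the study’s administrative data.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical trauma-screen scores (0-10) for 10,000 children entering care; the
# distribution is illustrative only, not drawn from the study's administrative data.
scores = rng.poisson(lam=3.0, size=10_000).clip(0, 10)

for cut_score in (3, 7):
    flagged = int((scores >= cut_score).sum())
    print(f"cut-score {cut_score}: {flagged} children "
          f"({flagged / scores.size:.0%}) referred for further assessment")

# A lower cut-score identifies more children (greater sensitivity) but demands more
# assessment and treatment capacity -- the trade-off the respondent describes above.
```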

Discussion

By utilizing a qualitative method grounded in the decision sciences, the decision sampling framework provides a new methodological approach to improve the study and use of research evidence in real-world settings. As the case study demonstrates, the approach draws attention to how decision-makers confront sets of inter-related decisions when implementing system-wide interventions. Each decision presents multiple choices, and the extent and types of information available depend upon the specific decision point. Our study finds that many distinct decision points were routinely reported in the process of adoption, implementation, and sustainment of a screening program, yet research evidence was available for only some of these decisions [51], and local evidence, expertise, and values were often applied in the decision-making process to address this gap. For example, decision-makers consulted experts to address evidence gaps, and they relied on both local information and expert judgment to assess “fit to context”—not unlike how researchers address external validity [52].

Concretely, the decision sampling framework seeks to produce insights that will advance the development of strategies to expedite research evidence use in system-wide innovations. Similar to approaches used to identify implementation strategies for clinical interventions (e.g., implementation mapping) [53], the decision sampling framework aims to provide evidence that can inform identification of the strategies required to expedite adoption of research findings into system-wide policies and programs. Notably, decision sampling may also identify evidence gaps, leading to the articulation of areas where new lines of research may be required. Whether informing implementation strategies or new areas for research production, decision sampling roots itself in understanding the experience and perspectives of decision-makers themselves, including the decision set confronted and the roles of various types of information, expertise, and values in the relevant decisions. Analytic attention to the decision-making process leads to at least four concrete benefits, as articulated below.

First, the decision sampling framework challenges whether we should refer to a system-wide innovation as “evidence-based” rather than as “evidence-informed.” As illustrated in our case example, a system-wide innovation may involve some decisions that are “evidence-based” (e.g., selection of the screening tool) while other decisions fail to meet that standard, whether because of a lack of access to a relevant evidence base or expertise, or because decision-makers held values that prioritized other considerations over the research evidence alone. For example, respondents in some cases adopted an evidence-based screening tool but then chose not to implement the recommended threshold; while thresholds are widely recommended in peer-reviewed publications, neither their psychometric properties nor the trade-offs they imply are typically reported in full. Thus, the degree to which screening tools are based on “evidence derived from applying systematic methods and analyses” is an open question [54, 55]. In this case, an element of scientific uncertainty may have driven the perceived need for adaptation of an “evidence-based” screening threshold.

Second, decision sampling contributes to theory on why studies of evidence use should address other sources of information beyond systematic research [56]. Our findings emphasize the importance of capturing the heterogeneity of information used by decision-makers to assess “fit to context” and address the scientific uncertainty that may characterize any one type of evidence alone [14]. Notably, knowledge synthesis across the information continuum (i.e., research evidence to ad hoc information) may facilitate the use of available research evidence, impede it, or complement it, specifically in areas where uncertainty or gaps in the available research evidence exist. Whether use of other types of information serves any of these particular purposes is an empirical question that requires further investigation in future studies.

Third, the decision sampling framework makes a methodological contribution by providing a new template to produce generalizable knowledge about quality gaps that impede research evidence use in system-wide innovations [17], as called for by Gitomer and Crouse [57]. By applying standards of qualitative research for sampling, measures development, and analysis to mitigate potential bias, this framework facilitates the production of knowledge that is generalizable beyond any one individual system, as prioritized in the implementation sciences [1]. To begin, the decision sampling framework maps the qualitative semi-structured interview guide onto tenets of the decision sciences (see Fig. 1). Integral to this approach is a concrete cross-walk between the theoretic tenets of the decision sciences and the method of inquiry. Modifications are possible; for example, we used individual interviews for this study, but other studies may warrant group interviews or focus groups to characterize collective decision-making processes. Finally, our measures align with key indicators in decision analysis, while other studies may benefit from crosswalks with other theories (e.g., cumulative prospect theory to systematically examine heuristics and cognitive biases) [58]. Specific cases may require particular attention to one or all of these domains. For example, an area where the information continuum is already well-documented may warrant greater attention to how expertise and values are drawn upon to influence the solution selected.

Fourth, decision sampling can help to promote the use of research evidence in policy and programmatic innovation. Our findings corroborate a growing literature in the implementation sciences that articulates the value of “knowledge experts” and opinion leaders to facilitate innovation diffusion and research evidence use [56, 59]. Notably, such efforts may require working with decision-makers to understand trade-offs and the role of values in prioritizing choices. Thus, dissemination efforts must move beyond simply providing “clearinghouses” that “grade” and disseminate the available research evidence and instead engage knowledge experts in assessing fit-to-context, as well as the applicability to specific decisions. To move beyond knowledge transfer, such efforts call for novel implementation strategies, such as group model building and simulation modeling, to assist in consideration of the trade-offs that various policy alternatives present [60]. Rather than attempting to flatten and equalize all contexts and information sources, this method recognizes that subjective assessments, values, and complex contextual concerns are always present in the translation of research evidence to implementation.

Notably, evidence gaps also demand greater attention from the research community itself. As traditionally conceptualized, research evidence originates in academic centers that design and test evidence-based practices prior to dissemination; this linear process has been termed the “science push” [60] to practice settings. In contrast, our findings support models of “linkage and exchange,” emphasizing the bi-directional process by which policymakers, service providers, researchers, and other stakeholders work together to create new knowledge and implement promising findings [60]. Such efforts are increasingly prioritized by organizations such as the Patient-Centered Outcomes Research Institute and the National Institutes of Health, which have called for every stage of research to be conducted with a diverse array of relevant stakeholders, including patients and the public, providers, purchasers, payers, policymakers, product makers, and principal investigators.

Limitations

Any methodological approach provides a limited perspective on the phenomenon of interest. As Moss and Haertel (2016) note, “both the methods we know and the methodologies that ground our understandings of disciplined inquiry shape the ways we frame problems, seek explanations, imagine solutions, and even perceive the world” (p. 127). We draw on the decision sciences to propose a decision sampling framework grounded in the concept of rational choice as an organizing framework. At the same time, we have no expectation that the workings of the decision-making process align with the components required of a formal decision analysis. Rather, we contend that engaging a decision sampling framework facilitates characterizing and analyzing key dimensions of the inherent complexity of the decision-making process. Systematic collection and analysis of these dimensions are proposed to more accurately reflect the experiences of decision-makers and promote responsive translational efforts.

Characterizing the values underlying decision-making processes is a complex and multi-dimensional undertaking that is frequently omitted in studies on research evidence use [15]. Our methodological approach seeks to assess values both by asking explicit questions and by evaluating which options “won the day” as respondents indicated why specific policy alternatives were selected. Notably, values as articulated and values as realized in the decision-making process frequently did not align, offering insights into how we ascertain information about the values of complex decision-making processes.

Conclusions

Despite increased interest in research evidence use in system-wide innovations, little attention has been given to anchoring studies in the decisions themselves. In this article, we offer a new methodological framework, “decision sampling,” to facilitate systematic inquiry into the decision-making process. Although frameworks for evidence-informed decision-making originated in clinical practice, our findings suggest utility in adapting this model when evaluating decisions that are critical to the development and implementation of system-wide innovations. Implementation scientists, researchers, and intermediaries may benefit from an analysis of the decision-making process itself to identify and respond to evidence gaps in research production, implementation, and dissemination. As such, the decision sampling framework can offer a valuable approach to guide future translational efforts that aim to improve research evidence use in the adoption and implementation of system-wide policy and programmatic innovations.