Abstract
Despite long-standing efforts to enhance evidence-informed decision-making in public health policy, tensions remain between the goal of basing decisions on the best available scientific evidence and the need to balance competing aims, interests, and evidentiary sources in representative democracies. In response, several strategies have been proposed both to democratize evidence production and evaluation, and to effectively integrate evidence into the decision-making processes of institutions of representative democracy. Drawing on a synthesis of the conceptual and empirical literature, this chapter describes and categorizes mechanisms that aim to reconcile political and scientific considerations in evidence-informed policymaking and develops an analytical typology that identifies salient dimensions of variation in their selection and design.
Keywords
- Deliberative approaches
- Knowledge mobilization
- Advisory bodies
- Co-production
- Evidence-informed policymaking
- Policy experimentation
1 Introduction
Efforts to improve the link between public health policy and scientific evidence are pervasive. The literature on evidence-based and evidence-informed policymaking has proliferated in recent decades, often reflecting the idea that policy should “follow” evidence. However, criticisms during the COVID-19 pandemic (among other examples) that politicians did not base their decisions on scientific advice and data demonstrated that the relationship between evidence and policy is not straightforward. In particular, unresolved tensions remain between the importance of basing policy decisions on the best available scientific evidence and the need for elected decision-makers to balance competing goals, interests, values, and evidentiary sources in representative democracies. This chapter refers to this set of tensions as the “science-politics” gap.
1.1 Understanding the Gap Between Science and Politics in Public Health Policymaking
Three key strands of literature examine the gap between science and politics in public health policymaking from different (but not mutually exclusive) viewpoints. The “two communities” perspective proceeds from the observation that “[t]here is a considerable gap between what research shows is effective and the policies that are enacted and enforced” (e.g., Brownson et al., 2009, p. 1576; Oxman et al., 2009). The literature identifies multiple potential barriers to the effective use of scientific evidence in policy processes, including a lack of scientific evidence that is accessible, relevant, or known to policymakers; a lack of capacity among policymakers to evaluate existing evidence; inadequate understanding of or interaction with the policy process by researchers; the different incentives and logics driving policymakers and scientists; a lack of political will or value attached to evidence-informed policymaking; the complexity and diffuseness of the policymaking process; and decision-makers’ deference to interest group pressures or commitment to ideological positions that are not supported by the evidence base (Bonell et al., 2018; Brownson, 2011; Brownson et al., 2009; Choi, 2005; Pantoja et al., 2018). Although the “two communities” literature often acknowledges the diverse priorities and evidentiary sources that influence decision-makers, proponents of this perspective typically hold that the policy process would produce more health-promoting outcomes if it had a stronger foundation in scientific knowledge—particularly knowledge contained in rigorous evidence reviews or syntheses (such as systematic reviews of randomized controlled trials) (e.g., Anderson et al., 2005; Fielding & Briss, 2006; Oxman et al., 2009). In sum, the “gap” from this perspective is between the body of evidence that scientists have accumulated on the one hand and policymakers’ use of that evidence on the other.
The “politics of evidence” strand of literature problematizes the question of what constitutes “evidence.” This perspective challenges the idea of scientific evidence as objective and apolitical and points out that knowledge creation, analysis, interpretation, and utilization are all value-based processes that are influenced by, and reinforce, existing power structures (Cairney, 2016; Jasanoff, 2004; Stewart & Smith, 2015). Choices about how to frame research questions, which methods to use, what data to draw on, and how to evaluate success are all driven by researchers’ and funders’ priorities and worldviews. This perspective cautions against a belief in the “primacy and purity of scientific evidence” (Greer et al., 2017, p. 41), which can manifest in excessive trust in technical expertise and evidentiary hierarchies, limit debate about ethics and values, and prevent practical and experience-based knowledge from being considered in policy decisions (Corburn, 2007; Russell et al., 2008). Moreover, a focus on measuring problems and effects shifts attention to those issues and solutions that lend themselves to quantification, while populations or crises that are more difficult to research remain invisible and unaddressed (Corburn, 2007; Parkhurst, 2017; Russell et al., 2008). Here, the “gap” lies between a technocratic or expert-oriented understanding of what constitutes evidence and a more socially embedded conception.
The “policy complexity” strand of literature focuses on the nature of the policymaking process and the role of politics and evidence within it. Similar to the literature on the social construction of evidence, this perspective starts with the assumption that the production of evidence and its use in policymaking are fundamentally political (Cairney, 2016; Fafard, 2015; Hawkins & Parkhurst, 2016). The policymaking process is value-laden, involves multiple levels and actors, and is shaped by ideological, economic, financial, and temporal considerations; even when policymakers are aware of the evidence and wish to base their decisions on it, doing so may not be politically or financially feasible (Cairney, 2016; Cairney & Oliver, 2017; de Leeuw et al., 2014; K. Smith, 2013). Amid the complexity of the policy environment and the incomplete and contested nature of scientific evidence, the latter constitutes an important source of information but cannot yield certainty to decision-makers (Fafard, 2015; French, 2018; Stewart & Smith, 2015). From this perspective, the “gap” refers to the difference between how some proponents of evidence-informed policy perceive the use of evidence by policymakers, and the complex, diffuse, and intrinsically political way in which it unfolds in practice.
1.2 Bridging the Gap Between Science and Politics in Public Health Policymaking
Although these three perspectives highlight different aspects of the science-politics gap, they share much potential common ground. For example, few would argue the extreme view that “politics is so pathological that no decision is based on an appeal to scientific evidence if it gets in the way of politicians seeking election, or so messy that the evidence gets lost somewhere in the political process” (Cairney, 2016, p. 2). Similarly, most would agree that the value-laden and political nature of evidence does not imply that facts are marginal or irrelevant to policymaking (Jasanoff, 2004; Latour, 2004). Consequently, there is a growing effort to bring insights on the nature of evidence and policymaking together with insights regarding robust scientific knowledge, with the goal of improving both the technical and democratic legitimacy of decision-making processes. These efforts ultimately aim to create systems of evidence utilization that reduce “issue bias” (the sidelining of social values and concerns through the prioritization of technical evidence) and “technical bias” (the use of evidence in ways that are not scientifically valid) (Parkhurst, 2017, pp. 7–8). Although numerous mechanisms have been proposed to achieve this goal, the field lacks a comprehensive conceptual overview of these mechanisms’ objectives and potential contributions to the evidence-informed policymaking process. Proceeding from the perspective that the tensions between technical and political considerations can, and should, be reconciled, this chapter presents an inventory of relevant mechanisms, describes salient design considerations and trade-offs, and proposes a typology informed by key dimensions of variation.
2 Methods
Using the above-mentioned tensions and debates as a starting point, we synthesized the literature on mechanisms that have been proposed to bring scientific evidence into public health policymaking in democratically and technically robust ways. We focused on two questions (Fafard & Cassola, 2020):
1. Which mechanisms have been proposed to produce and evaluate evidence in more participatory ways?
2. Which mechanisms have been proposed to enhance the integration of robust scientific evidence into the decision-making processes of institutions of representative democracy?
We focused on conceptual and empirical studies in public health and related fields (such as healthcare and environmental health) that specifically discussed tools to reconcile political and scientific considerations in evidence-informed policymaking. We identified sources through (1) our previous knowledge of the literature; (2) an iterative search of databases covering public health and similar topic areas using search terms related to evidence-informed policymaking, bridging the science-politics gap, and specific types of mechanisms; and (3) forward-searching from article bibliographies. Where available, we prioritized review articles, overviews of multiple empirical cases, seminal and highly-cited articles, and articles that directly discussed the relevant mechanisms as bridges between scientific and political considerations in policymaking. We included articles that discussed mechanisms generally (not specifically in relation to public health or related fields) when they provided foundational information.
When reviewing sources, we listed the mechanisms they mentioned and extracted information about their goals, theories of impact, and operation. We grouped the mechanisms into broad categories based on their objectives as they relate to the science-politics gap. We stopped reviewing articles at the point of saturation, when new sources did not yield substantially new types of mechanisms or information in these categories. We then synthesized salient themes and categorizations to develop a conceptual typology of mechanisms. In summarizing a vast and dispersed literature that lacks terminological or conceptual coherence, we employ the term “typology” to describe an organizational tool that helps to situate mechanisms in relation to one another and the field overall by identifying key dimensions of variation (Bailey, 1994; Collier et al., 2011).
3 Mapping the Landscape
We identified five broad categories of mechanisms that have been proposed in the literature to bridge the science-politics gap in public health and related fields: (1) co-production of evidence; (2) public deliberation of evidence; (3) knowledge mobilization; (4) expert advisory bodies and roles; and (5) policy experimentation and evaluation (Table 1). These categories were chosen because we identified considerable consistency in the literature regarding the objectives within them. At the same time, the categories often overlap, and mechanisms are sometimes differentially categorized and defined in the literature. For example, the integrated knowledge translation mechanism that we categorized under “knowledge mobilization” is sometimes discussed as a form of “co-production of evidence.” Similarly, citizens’ juries and participatory Health Impact Assessments (HIAs) and Health Technology Assessments (HTAs) may fall within either (or both) the co-production and deliberation categories of mechanisms discussed in this chapter, depending on the purpose and design of the process. Consequently, although we place them into discrete categories for the sake of analytical clarity, these mechanisms might in practice be considered to exist along a continuum. This section describes the objectives of each category of mechanisms, highlights examples, and describes challenges and design considerations associated with their use.
3.1 Co-production of Evidence
Concern about excessive reverence toward technical and expert knowledge in policymaking has produced efforts to account for the social embeddedness of science and technology by increasing involvement of practitioners, patients, and/or affected communities in negotiating research priorities and generating evidence (Corburn, 2007; Rabeharisoa et al., 2014; Williams et al., 2020). Co-productive research may be motivated by the normative goal of democratizing evidence production as an end in itself, as well as the practical goals of improving research quality, relevance, impact, and perceived legitimacy by integrating experience-based expertise (Corburn, 2007; Oliver et al., 2019; Williams et al., 2020).
Although it shares similarities with broader collaborative or participatory approaches to research, co-production is traditionally concerned with redistributing some of the power to frame and produce evidence from researchers and experts to affected communities and service users (Corburn, 2007; Williams et al., 2020). For example, community-based participatory research (CBPR) involves the structured participation of community members and organizations in research, recognizing them as experts in their own right and often including a capacity-building element (Jull et al., 2017; Viswanathan et al., 2004). Community participants in CBPR are viewed as equal partners rather than subjects, share in decision-making power and project ownership, and are ideally involved at all stages of the research process, including identifying research priorities and interpreting findings (Cashman et al., 2008; Jull et al., 2017; Richardson, 2014; Viswanathan et al., 2004). In a similar approach, patients’ organizations involved in evidence-based activism “articulate credentialed knowledge with ‘experiential knowledge’” in an effort to reshape dominant understandings of areas of concern and raise their political salience (Rabeharisoa et al., 2014, p. 115).
Similar principles have been integrated into processes like participatory HIAs and HTAs, which consider experience-based knowledge alongside technical expertise in health-related decisions. HIAs often combine quantitative assessments of effectiveness and efficiency with residents’ local knowledge to generate a more representative picture of the potential health impacts and underlying values of policy alternatives, encourage more transparent, accountable, equitable, and responsive policymaking, and increase community influence on policy decisions (Bhatia & Corburn, 2011; Den Broeder et al., 2017; Haigh et al., 2012; Harris-Roxas et al., 2012; Wright et al., 2005). Members of the affected community may be involved in developing the goals, questions, measures, and policy alternatives under review and can also identify gaps in knowledge that technical experts then work to address (Bhatia & Corburn, 2011). Patients’ experiences and the public’s views may also be included in HTAs, in combination with evaluations of technical and cost effectiveness, to incorporate more comprehensive knowledge and values into decisions about the use of these technologies (Abelson et al., 2007; Gagnon et al., 2011). This may involve participation in the “prioritization, scoping, evidence assessment, and dissemination of HTA findings” (Gagnon et al., 2011, p. 35), thus potentially spanning the co-production, deliberative, and knowledge mobilization categories of mechanisms discussed in this chapter.
There are several considerations and trade-offs associated with the selection and design of co-productive mechanisms. Because co-production involves prolonged engagement between experts and affected groups, it is time- and resource-intensive, raises complex research ethics considerations, involves potential costs to researchers and participants, and requires facilitation, relationship-building, and brokering skills (Cashman et al., 2008; Nyström et al., 2018; Oliver et al., 2019; E. Smith et al., 2008). The timelines, goals, and epistemological outlooks of traditional knowledge producers and their partners do not always align; for example, trade-offs may arise between the time required to effectively engage with communities and the desire for co-produced knowledge to influence time-sensitive policy processes (Cashman et al., 2008; Nyström et al., 2018; Wright et al., 2005). Careful attention to design is also required to ensure that co-production processes fulfill their objective of knowledge co-creation among technical and experiential experts, including in decisions regarding who participates, to what end, at what stage, and with what level of decision-making power (E. Smith et al., 2008; Wright et al., 2005). For example, involving community partners in decisions about research design and priorities can help increase equity and relevance at the outset of the research process (E. Smith et al., 2008; Williams et al., 2020). At the same time, considerations about who is included among the “affected community” or “service users” can be contested and will influence the representativeness and reliability of the knowledge produced through these mechanisms (Den Broeder et al., 2017; Oliver et al., 2019; E. Smith et al., 2008; Wright et al., 2005).
3.2 Public Deliberation of Evidence
Deliberative approaches involve structured fora that aim to integrate diverse knowledge and values into the evaluation of evidence, increase decision-makers’ access to timely, relevant, and contextualized evidence interpretation, and enhance the public legitimacy of decisions (Boyko et al., 2012; Degeling et al., 2015; Fung, 2003; Lavis et al., 2014). Although a variety of deliberative approaches exists with different aims and designs (Fung, 2003), these mechanisms share roots in a long tradition of deliberative democracy in which including public stakeholders in dialogic decision-making spaces is considered important for democratic legitimacy and, indeed, a democratic ideal in itself (e.g., Cohen, 1997; Fafard, 2009).
Deliberative mechanisms typically bring participants up to date on the appropriate evidence base and possible interventions and enable them “to explore value-laden problems from a variety of perspectives and then work through the trade-offs of potential solutions” (Boyko et al., 2012, p. 1943). In deliberative polling, a random sample of several hundred members of the public is recruited for a deliberative dialogue that usually takes place over the course of a few days (Abelson et al., 2003; Fishkin et al., 2000; Maxwell et al., 2002). Participants receive background information about the issue at hand, take part in moderated discussions with other participants, and hear from expert panels representing diverse perspectives (Fishkin et al., 2000; Johnson, 2009). Participants’ views are polled before and after deliberation, with the goal of measuring the change in opinions following education and deliberation—and ultimately of “expos[ing] [policymakers] to what a more informed state of public opinion would be like” (Fishkin et al., 2000, p. 664; Johnson, 2009).
In a slightly different approach, citizens’ juries aim to ensure that public concerns and values guide the interpretation and use of evidence by assembling a small group of lay members of the public (often randomly selected), educating them about a policy issue, providing time for structured deliberation, and having them produce a decision or recommendation (Degeling et al., 2017; G. Smith & Wales, 2000; Street et al., 2014). The proceedings are moderated by trained facilitators, and participants typically hear from witnesses who represent a range of areas of expertise and relevant interests (G. Smith & Wales, 2000). Decisions are often reached through consensus, and in some cases, decision-makers may be required to respond to or adopt the jury’s recommendations (Ritter et al., 2018; G. Smith & Wales, 2000; Street et al., 2014).
Deliberative decision-making can also be used to address equity for populations facing conditions of marginalization. Participatory budgeting processes involve sequential deliberative meetings during which a jurisdiction’s residents participate equally alongside government representatives and other organizations to develop and vote on proposals for allocating public funds (Hagelskamp et al., 2018; Johnson, 2009; Wampler, 2007). Because this process usually enables the voting public to initiate the spending proposals that are put on the ballot, it can “raise awareness of community needs that may be forgotten or invisible under politics-as-usual” (Hagelskamp et al., 2018, p. 769), and although not a panacea, such processes may have a “moderate capacity to challenge social and political exclusion while promoting social justice” (Wampler, 2007, p. 45). Well-designed processes can also encourage more active citizenship and increased decision-making transparency, accountability, and legitimacy (Wampler, 2007).
Other deliberative mechanisms, such as stakeholder dialogues and roundtables, bring together representatives of groups that are identified as key stakeholders on different sides of a policy issue for structured meetings; these meetings provide opportunities for engagement that might not otherwise occur in traditional decision-making processes (Cuppen, 2012; Johnson, 2009). The goal is not necessarily to reach consensus, but rather to facilitate learning about the nature of the issue and potential policy responses through deliberation and synthesis of stakeholders’ divergent expertise, values, and perspectives (Cuppen, 2012).
Despite these mechanisms’ promise for broadening participation in evidence deliberation and interpretation, several design considerations and trade-offs exist. First, as with co-production, complex questions can arise regarding the affected population or stakeholder groups from which participants are drawn (Cuppen, 2012; G. Smith & Wales, 2000). In the case of open processes like participatory budgeting, citizens may face material, trust, interest-based, or other barriers to participating (Ganuza & Francés, 2012; Hagelskamp et al., 2018). In processes like citizens’ juries, assembling a small group can enhance the quality of deliberations, but may also hinder the recruitment of a geographically, demographically, and politically representative sample (Abelson et al., 2003; Boyko et al., 2012; G. Smith & Wales, 2000; Street et al., 2014). Second, although the equality of participants is a key tenet of deliberative dialogues, tacit beliefs or assumptions may lead to certain voices being devalued or excluded (Milewa, 2006). Third, decisions made prior to the proceedings—such as in the case of citizens’ juries, the formulation of the question and the selection of witnesses and evidence—critically influence the outcomes of deliberative processes (Abelson et al., 2003; G. Smith & Wales, 2000). Fourth, when they do not include decision-making power for public participants, deliberative models may be perceived as tokenistic, unaccountable, or intended to legitimize foregone decisions and can lead to cynicism and disengagement (Abelson et al., 2003; Fung, 2015; Safaei, 2015). Finally, deliberative processes can be resource-intensive to implement (Boyko et al., 2012).
3.3 Knowledge Mobilization (KM)
One of the most common strategies discussed in the research literature to enhance the use of scientific evidence in public health decisions involves making existing knowledge more accessible and relevant to policymakers. This approach typically focuses on how the research community can increase the policy impact of empirical evidence (and particularly sources like systematic reviews and other evidence syntheses) by addressing policy-relevant questions and transferring research knowledge to decision-makers more effectively (e.g., Catallo et al., 2014; Grimshaw et al., 2012; Mitton et al., 2007).
KM mechanisms typically promote the tailoring of evidence for the relevant audience and increased interaction among knowledge creators and users. For example, knowledge brokering efforts aim to ensure that existing research evidence is effectively packaged (through briefs, summaries, reports, etc.) and actively shared (through dialogues, workshops, briefings, etc.) to increase its demand and use by policymakers (Catallo et al., 2014; Ward et al., 2009). Knowledge platforms employ tools such as evidence briefs and policy dialogues to support the evidence-informed policy process and often involve partnerships among researchers and a range of knowledge users (El-Jardali et al., 2014; Partridge et al., 2020). Some platforms, such as the Cochrane and Campbell Collaborations, are dedicated to increasing access to high-quality and reliable evidence syntheses.
With increased recognition of the complexity of the policy process and the need for more active engagement with decision-makers, a focus on integrated knowledge translation (iKT) has emerged. This strategy resembles co-production in that it involves collaboration between researchers and knowledge users (often policymakers) throughout the research and dissemination process (Jull et al., 2017; Kothari & Wathen, 2013; Lawrence et al., 2019; Nguyen et al., 2020). Like co-production models, iKT is based on a recognition that researchers and knowledge users have complementary expertise in producing relevant and grounded policy research (Jull et al., 2017). However, iKT processes are typically motivated by the goals of increasing research relevance and utilization and are usually less concerned with addressing issues of social embeddedness or power dynamics (Jull et al., 2017; Lawrence et al., 2019; Nguyen et al., 2020; Williams et al., 2020).
Like co-production and deliberative mechanisms, all KM strategies require extensive time and resource investments to build the relationships and trust that underpin their success (Lawrence et al., 2019; Mitton et al., 2007; Nguyen et al., 2020; Oliver et al., 2019). Because successful KM requires researchers to have time, resources, skills, and credibility, the presence of intermediary organizations, platforms, or structures dedicated to this work may facilitate successful efforts (Edwards et al., 2019; Grimshaw et al., 2012). The effectiveness of KM processes can be jeopardized by mismatches between the nature and timelines of the scientific process in contrast to knowledge users’ expectations and timeframes, but early, phased, and ongoing collaboration can increase mutual understanding, enhance the relevance of research questions, and ultimately improve research uptake (Edwards et al., 2019; Kothari & Wathen, 2013; Lawrence et al., 2019; Mitton et al., 2007; Ward et al., 2009). Effective KM for public health policy also requires a robust understanding of the complexity of the policymaking process by those looking to improve evidence uptake (Fafard & Hoffman, 2020; Mitton et al., 2007; Oliver & Cairney, 2019).
3.4 Expert Advisory Bodies and Roles
Another set of mechanisms aims to provide timely and appropriate scientific expertise to policymakers by establishing formal entities or roles with a mandate to inform policy decisions through high-quality, relevant, and legitimate scientific evidence (Hoffman et al., 2018; Parkhurst, 2017). For example, scientific advisory committees are typically established to inform policy decisions “with the best available research evidence such that positive impact is maximized and negative (often unintended) consequences are minimized” (Hoffman et al., 2018, p. 2). Expert advisory bodies may be: ad hoc or permanent; statutorily mandated or voluntarily commissioned; designed to address broad science policy or more bounded issues; targeted at audiences internal or external to the institution that established them; and embedded within the government or at arm’s length (Groux et al., 2018; OECD, 2015). As discussed in Chapter 9 (Hawkins & Oliver, 2022) of this book, parliamentary committees are another mechanism through which a range of evidence, including expert testimony, can be synthesized in order to support the scrutiny and development of policy action (Earwicker, 2012).
In some cases, individual officials exercise a similar mandate. Scientific oversight/advisory roles, such as Chief Science Advisors, are positioned “as broker[s] and expert navigator[s] between the government and the scientific community” and aim to ensure that policymakers interpret and use technical evidence in appropriate ways (OECD, 2015, p. 15; Parkhurst, 2017). In some cases, an individual official has multiple roles of which scientific oversight or advice is but one, as is the case with Chief Medical Officers in several Westminster countries and the Surgeon General in the United States (Fafard et al., 2018; MacAulay et al., 2021; Sheard & Donaldson, 2006; Stobbe, 2014).
The transnational nature of many contemporary scientific problems has also given rise to the establishment of international advisory or evidentiary bodies. For example, the World Health Organization (WHO) regularly convenes expert advisory panels and committees to provide technical guidance in specific areas (Gopinathan et al., 2018; WHO, 2021). Another mechanism that aims to institutionalize governments’ access to expertise involves formal organizations with a mandate to review and/or synthesize evidence to inform policy (Parkhurst, 2017). For instance, the National Institute for Health and Care Excellence (NICE) in the UK combines rigorous technical analyses of the effectiveness and efficiency of healthcare interventions with considerations of social and ethical values to inform the National Health Service and other health decision-makers (NICE, 2019; Parkhurst, 2017; Rawlins, 2015). Although the above-mentioned mechanisms aim to better ground public health policy decisions in technical evidence and expertise, these decisions nonetheless continue to require reconciliation of different values, interests, and goals—that is, they remain inherently political in nature (Gelijns et al., 2005; Lee, 2020).
The effectiveness of scientific advisory bodies may be thought of as a function of the quality (scientific soundness), relevance (applicability to the question at hand), and legitimacy (procedural fairness, inclusiveness, and impartiality) of their advice (Hoffman et al., 2018). The way in which these bodies are designed can influence perceptions of quality, relevance, and legitimacy. For example, perceptions of legitimacy may be influenced by the transparency of the advisory body’s composition and processes, the representation of a diversity of experts, and the experts’ degree of independence from the entities that convene the body, those that will use its advice, and those with which its expert members are affiliated (Behdinan et al., 2018; Gopinathan et al., 2018; Groux et al., 2018; Rowe et al., 2013). At the same time, design trade-offs exist (Gopinathan et al., 2018). For example, although transparent proceedings are critical to enhance legitimacy, closed-door discussions may be important for high-quality deliberations (Gopinathan et al., 2018). Additionally, although representation within advisory bodies is important to reduce bias and increase relevance, achieving this may prove challenging in specialized technical areas, within short timelines, or when strict conflict-of-interest exclusions reduce the pool of potential experts (Behdinan et al., 2018; Gopinathan et al., 2018). And although including policymakers and other end users in the proceedings can increase relevance, doing so may also cast doubt on the quality and legitimacy of the resulting advice (Andresen et al., 2018; Gopinathan et al., 2018).
3.5 Policy Experimentation and Evaluation
Once policymakers have considered different inputs into the policy process and proposed a path forward, policy experimentation and evaluation can generate additional knowledge of how a policy performs in context (Campbell, 1998; McFadgen & Huitema, 2018; Pearce & Raman, 2014; Sanderson, 2002, 2009). This category of mechanisms seeks to address the inadequacy of a priori evidence for determining what will happen in practice, by helping policymakers to evaluate their proposed policies through pragmatic knowledge gained “in the experience of delivery” (Sanderson, 2009, p. 711).
Policy pilots and policy experiments typically aim to evaluate a limited rollout of a policy or program, often using randomized controlled trials or quasi-experimental methods, based on the assumption that the strong internal validity of such methods will lead to high-quality evidence that may be convincing to policymakers (Ettelt et al., 2015a, 2015b). Policy experiments can take different forms, including “technocratic” experiments led by scientific experts who proceed independently and present their results to policymakers; “boundary” experiments developed collaboratively among governmental and non-governmental actors that integrate “multiple knowledge systems” and “multiple value perspectives” in assessing solutions; and “advocacy” experiments led by policymakers in consultation with traditional interests who agree on the underlying framing of the problem at hand (McFadgen & Huitema, 2018, pp. 166–167).
Although experimental or quasi-experimental methods may be most suitable where there is considerable uncertainty regarding policy effects, observational policy evaluations are also useful for generating evidence of policy impact, particularly in cases characterized by less uncertainty, on questions that are not appropriately answered through experimentation, and to track policy outcomes during longer-term implementation processes (Petticrew, 2013; Sanderson, 2002, 2009). Policy innovation labs use a range of methodologies, including experimental methods, advanced data analytics, and/or user-centered design techniques (which often include ethnographic or participatory approaches), to generate, test, and evaluate new solutions to complex policy and service delivery problems (McGann et al., 2018; Olejniczak et al., 2020).
As is the case with other categories of mechanisms, the time horizons of policy experiments, pilots, and evaluations may not line up with those of policymaking, particularly when they aim to assess how policies address complex problems (Pearce & Raman, 2014; Sanderson, 2002). The degree of integration with or independence from government of a policy lab, evaluation, or experiment may also involve trade-offs between policymaking influence and the ability to challenge the status quo (McGann et al., 2018). Mechanisms that rely on experimental methods also face specific challenges. Although policy experiments can determine with some credibility the effectiveness of a policy in a specific social context, their conclusions are usually limited to a narrow selection of measurable outputs, and the high degree of experimental control can undermine the generalizability of the findings (Jensen, 2020; Sanderson, 2002). Some experimental designs may also influence policymaking more than others. For example, one analysis showed that policymakers considered expert-led technocratic experiments to have lower credibility, salience, and legitimacy than boundary and advocacy experiments, demonstrating that “when a broad set of actors contribute contextual, practical knowledge, this place-based knowledge improves credibility over scientifically defensible knowledge alone” (McFadgen & Huitema, 2018, p. 176).
4 Toward a Typology
Our analysis of mechanisms that have been proposed to bridge the science-politics gap identified several key dimensions of variation (Table 2) that represent a set of considerations for thinking through the selection and design of different mechanisms (Fig. 1). Although some of these dimensions differentiate categories of mechanisms (e.g., co-production and expert advisory bodies typically address different types of bias), others vary across mechanisms within the same category (e.g., different types of deliberative mechanisms may involve different actors or loci of authority).
Fig. 1 Examples of typology dimensions in relation to mechanism selection and design
4.1 Type of Bias Addressed
As discussed above, Parkhurst (2017) identifies two sources of bias relevant to evidence-informed policymaking: “issue bias” and “technical bias.” The categories of mechanisms discussed here are typically oriented more strongly to addressing one or the other of these biases. For example, expert advisory bodies are established to reduce technical bias in policymaking by institutionalizing scientific expertise, while many deliberative mechanisms aim to reduce issue bias by fostering public debate of the evidence on value-laden issues. Because mechanisms that address one type of bias may fall short on considerations of another (such as when expert advisory bodies are critiqued as too technocratic or co-produced evidence is considered insufficiently scientific), mechanisms from different categories may be combined to strengthen evidence-informed policy processes. For example, the mandate of NICE involves reducing technical bias in health decision-making through the rigorous and systematic review of evidence; however, the organization’s Citizens Council, solicitation of commentary from pluralistic stakeholders on guidelines, and other participatory efforts are examples of strategies to reduce issue bias (Parkhurst, 2017; Rawlins, 2015).
4.2 Phase of the Evidence-Policy Process
Although policymaking is complex and cannot be neatly divided into sequential steps, each of the mechanisms reviewed in this chapter can be thought of as contributing to a different phase of the evidence-informed policymaking process, broadly conceived. For example, co-productive mechanisms such as CBPR can help to define or redefine a problem by marshaling locally-generated evidence that has traditionally not been part of the policy conversation. Deliberative mechanisms such as citizens’ juries are often designed to evaluate evidence about a policy issue through a mediated dialogue among informed participants. Traditional KM mechanisms focus on inputting existing evidence into formal decision-making processes, while iKT may also span problem definition and evidence generation. Expert advisory bodies typically evaluate evidence and have channels for inputting evidence into the decision-making processes. Finally, policy experimentation and evaluation mechanisms typically focus on evaluating policy outputs to generate evidence that informs future decision-making (although policy innovation labs sometimes contribute more broadly across the evidence-policy process).
4.3 Relevant Policy Concerns
Within and across categories, different mechanisms may also be appropriate for addressing different policy concerns. For example, citizens’ juries have been identified as useful for deliberating policy questions involving difficult value judgments that may benefit from the integration of expert and experience-based knowledge (Degeling et al., 2017). SACs may be most appropriate for highly technical questions—although principles of open deliberation, transparency, accountability, and contestability should still apply (Andresen et al., 2018; OECD, 2015; Parkhurst, 2017). Co-production mechanisms like CBPR (which broadens the locus of authority on problem definition and evidence generation) and participatory budgeting (which expands decision-making and voice on resource allocation issues) may be most appropriate where the concern is to increase equity and social justice in policymaking. Finally, where the policy concern involves a high level of uncertainty regarding policy outcomes, experimental policy pilots may be the most appropriate mechanism.
4.4 Actors and Institutions Involved
Considerable variation exists across and within mechanisms regarding the actors and institutions involved. Expert bodies such as SACs are typically composed of purposively selected professionals with specialized and in-depth technical knowledge of a particular subject. Traditional KM approaches often rely on self-selected researchers or research platforms that attempt to transfer their knowledge to relevant decision-makers, although knowledge users may also be directly and formally involved in knowledge platforms and iKT processes. As discussed above, policy experiments can also be led by different constellations of actors, including scientific experts, governmental entities, and groups with affected interests (McFadgen & Huitema, 2018). Among mechanisms that involve public participation, a key distinction involves the type of “public” that participates. Degeling and co-authors (2015, p. 117) identify three categories: (1) citizens or naïve publics, who are construed as “a subject of education, and then, potentially as decision maker”; (2) affected consumers, who are seen as “the authentic expert[s]” about the issue at hand; and (3) partisan publics, who represent interest groups and affected organizations. For example, stakeholder roundtables typically bring together purposively selected members of partisan publics while deliberative polling typically involves randomly selected members of the naïve public.
4.5 Locus of Authority
Mechanisms also vary in the level of authority or decision-making power allocated to different actors. One consideration involves the degree to which the input of different parties is considered binding or advisory. For instance, the recommendations of expert advisory bodies carry authority but are, as the name suggests, typically non-binding, while the findings of citizens’ juries are occasionally designed to be binding on decision-makers or to require a formal response. Another consideration for mechanisms that involve evidence creation and deliberation concerns the degree of control at different stages of the process, such as defining the policy problem and deciding how to frame results. As discussed above, the ways in which considerations about authority are addressed in mechanisms’ design have implications for, and raise trade-offs among, the perceived legitimacy, quality, and relevance of the resulting processes. For example, trade-offs may arise in co-production processes when it comes to balancing scientific rigor (implying a measure of control for researchers) with relevance and legitimacy (implying a measure of control for members of affected communities).
4.6 Relationship with Government Actors and Institutions
Finally, variation exists both across and within categories of mechanisms regarding their relationship with government entities. At a local level, for example, CBPR initiatives may emerge independently through research institutions or civil society or may be undertaken in partnership with planners and policymakers. Citizens’ juries can similarly take place as independent research exercises or in partnership with decision-makers; if designed independently from government actors, they are likely to require an intermediary such as a knowledge broker to influence the policy process (Degeling et al., 2017). Policy experiments may emerge independently from, in partnership with, or at the behest of government actors and may be characterized by different levels of government funding and oversight (McFadgen & Huitema, 2018; McGann et al., 2018). Expert advisory committees may be integrated within the structures of government or exist at arm’s length (Groux et al., 2018; OECD, 2015).
5 Discussion
The research literature in public health and related fields is replete with references to specific mechanisms devoted to bridging the science-politics gap, but it lacks a common language or framework for discussing them. This chapter has conceptually organized the literature on mechanisms for “democratising” expertise and “expertising” democracy (Liberatore & Funtowicz, 2003, p. 146) and identified key dimensions of variation in their goals, orientation, and design. The aim has been to introduce more robustness and consistency in how the field thinks and writes about these mechanisms, and greater clarity in how those who use them orient their design to specific objectives.
On a practical level, this chapter has highlighted that no single mechanism or category of mechanisms is sufficient to address the science-politics gap in evidence-informed policymaking. Each mechanism has potential advantages and disadvantages for achieving different goals and for different actors involved in the process (e.g., Oliver et al., 2019). In addition, contextual factors affect the feasibility and appropriateness of specific mechanisms in places with different administrative traditions, political cultures, and research system capacities (e.g., Cavazza & Jommi, 2012; Huxley et al., 2016). It is therefore critical for those using these mechanisms to be clear about the goals they are trying to achieve, attentive to the specific environment they are working in, and intentional about issues such as who participates, with what authority, and how this impacts the legitimacy, quality, and relevance of the process and outputs.
Although this chapter has focused on categorizing mechanisms according to their goals and characteristics, it is also critical to consider the overarching components of the good governance of evidence production and utilization in policymaking (Hawkins & Parkhurst, 2016; Parkhurst, 2017). These elements include ensuring that the evidence used in policy decisions is high-quality, rigorous, and appropriate to the question at hand; that the selection of evidence is transparent, involves public deliberation, and is open to contestation; and that the public or its representatives are involved in stewarding the advisory system and making the final evidence-informed decisions (Parkhurst, 2017, pp. 161–162).
Finally, this chapter draws on several decades of thinking and research on mechanisms to bridge the science-politics gap in public health policymaking. Yet many of the debates happening in the context of the COVID-19 pandemic at the time of writing reflect continuing weaknesses in, and dissatisfaction with, the mechanisms in place to balance technical considerations with competing values, interests, and goals in government responses around the world. Indeed, a perceived battle between science and politics has been one of the defining features of pandemic discourse. As those concerned with policy, research, and governance begin to scrutinize the pandemic response with an eye to reform, the inventory of mechanisms discussed in this chapter can serve as a roadmap for reconciling technical and political considerations toward more robust and resilient approaches to future crises.
Notes
1. See Caplan (1979).
2. Including ProQuest, ScienceDirect, EMBASE, MEDLINE, PsycInfo, Scholars Portal, Ingenta Connect, and Wiley Online.
3. It was outside the scope of the chapter to evaluate mechanisms’ effectiveness in achieving their goals.
4. In this chapter, we consider the term “knowledge mobilisation” to be interchangeable with another term that is often used to discuss this category of mechanisms—“knowledge translation and exchange.”
5. This chapter focuses on specific mechanisms that have been operationalized and implemented to address the science-politics gap. In a related approach, de Leeuw et al. (2008) identified seven categories of theories that address the integration of research, policy, and practice, which they term institutional re-design, blurring the boundaries, utilitarian evidence, conduits, alternative evidence, narratives, and resonance. The categories of mechanisms discussed in this chapter can all be seen as attempts at institutional re-design (i.e., devising institutional arrangements and channels of interaction that bridge the science-politics gap), although the mechanisms themselves variously target the structural and communicative concerns represented by the other six categories of theories.
6. The term ‘co-production’ is sometimes also used to refer to processes in which researchers and knowledge end-users (such as policymakers) collaborate at different stages of the research process. This is frequently termed integrated knowledge translation (iKT) and is discussed in more detail below (see Knowledge Mobilisation).
7. This figure is illustrative of different decision processes that the typology in Table 2 can help to facilitate. It is not a comprehensive list of all possible mechanisms, goals, or suitable options associated with addressing each concern.
References
Abelson, J., Forest, P.-G., Eyles, J., Smith, P., Martin, E., & Gauvin, F.-P. (2003). Deliberations about deliberative methods: Issues in the design and evaluation of public participation processes. Social Science & Medicine, 57(2), 239–251. https://doi.org/10.1016/S0277-9536(02)00343-X
Abelson, J., Giacomini, M., Lehoux, P., & Gauvin, F.-P. (2007). Bringing ‘the public’ into health technology assessment and coverage policy decisions: From principles to practice. Health Policy, 82(1), 37–50. https://doi.org/10.1016/j.healthpol.2006.07.009
Anderson, L. M., Brownson, R. C., Fullilove, M. T., Teutsch, S. M., Novick, L. F., Fielding, J., & Land, G. H. (2005). Evidence-based public health policy and practice: Promises and limits. American Journal of Preventive Medicine, 28(5), 226–230. https://doi.org/10.1016/j.amepre.2005.02.014
Andresen, S., Baral, P., Hoffman, S. J., & Fafard, P. (2018). What can be learned from experience with scientific advisory committees in the field of international environmental politics? Global Challenges, 2(9), 1800055. https://doi.org/10.1002/gch2.201800055
Bailey, K. D. (1994). Typologies and taxonomies: An introduction to classification techniques. Sage.
Behdinan, A., Gunn, E., Baral, P., Sritharan, L., Fafard, P., & Hoffman, S. J. (2018). An overview of systematic reviews to inform the institutional design of scientific advisory committees. Global Challenges, 2(9), 1800019. https://doi.org/10.1002/gch2.201800019
Bhatia, R., & Corburn, J. (2011). Lessons from San Francisco: Health impact assessments have advanced political conditions for improving population health. Health Affairs, 30(12), 2410–2418. https://doi.org/10.1377/hlthaff.2010.1303
Bonell, C., Meiksin, R., Mays, N., Petticrew, M., & McKee, M. (2018). Defending evidence-informed policy making from ideological attack. BMJ, 362, k3827. https://doi.org/10.1136/bmj.k3827
Boyko, J. A., Lavis, J. N., Abelson, J., Dobbins, M., & Carter, N. (2012). Deliberative dialogues as a mechanism for knowledge translation and exchange in health systems decision-making. Social Science & Medicine, 75(11), 1938–1945. https://doi.org/10.1016/j.socscimed.2012.06.016
Brownson, R. C. (2011). Evidence-based public health. Oxford University Press.
Brownson, R. C., Chriqui, J. F., & Stamatakis, K. A. (2009). Understanding evidence-based public health policy. American Journal of Public Health, 99(9), 1576–1583. https://doi.org/10.2105/AJPH.2008.156224
Cairney, P. (2016). The politics of evidence-based policy making. Springer.
Cairney, P., & Oliver, K. (2017). Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy? Health Research Policy and Systems, 15(1), 35. https://doi.org/10.1186/s12961-017-0192-x
Campbell, D. T. (1998). The experimenting society. In W. N. Dunn (Ed.), The experimenting society: Essays in honor of Donald T. Campbell (pp. 35–68). Transaction Publishers.
Caplan, N. (1979). The two-communities theory and knowledge utilization. American Behavioral Scientist, 22(3), 459–470. https://doi.org/10.1177/000276427902200308
Cashman, S. B., Adeky, S., Allen, A. J., III, Corburn, J., Israel, B. A., Montaño, J., Rafelito, A., Rhodes, S. D., Swanston, S., Wallerstein, N., & Eng, E. (2008). The power and the promise: Working with communities to analyze data, interpret findings, and get to outcomes. American Journal of Public Health, 98(8), 1407–1417. https://doi.org/10.2105/AJPH.2007.113571
Catallo, C., Lavis, J. N., & The BRIDGE study team. (2014). Knowledge brokering in public health. In B. Rechel & M. McKee (Eds.), Facets of public health in Europe (pp. 301–316). Open University Press.
Cavazza, M., & Jommi, C. (2012). Stakeholders involvement by HTA Organisations: Why is so different? Health Policy, 105(2–3), 236–245. https://doi.org/10.1016/j.healthpol.2012.01.012
Choi, B. C. K. (2005). Can scientists and policy makers work together? Journal of Epidemiology & Community Health, 59(8), 632–637. https://doi.org/10.1136/jech.2004.031765
Cohen, J. (1997). Deliberation and democratic legitimacy. In J. Bohman & W. Rehg (Eds.), Deliberative democracy: Essays on reason and politics (pp. 67–92). MIT Press.
Collier, D., Laporte, J., & Seawright, J. (2011). Putting typologies to work: Concept-formation, measurement, and analytic rigor. Political Research Quarterly, 65(1), 217–232. https://doi.org/10.1177/1065912912437162
Corburn, J. (2007). Community knowledge in environmental health science: Co-producing policy expertise. Environmental Science & Policy, 10(2), 150–161. https://doi.org/10.1016/j.envsci.2006.09.004
Cuppen, E. (2012). Diversity and constructive conflict in stakeholder dialogue: Considerations for design and methods. Policy Sciences, 45(1), 23–46. https://doi.org/10.1007/s11077-011-9141-7
de Leeuw, E., Clavier, C., & Breton, E. (2014). Health policy—Why research it and how: Health political science. Health Research Policy and Systems, 12(1). https://doi.org/10.1186/1478-4505-12-55
de Leeuw, E., McNess, A., Crisp, B., & Stagnitti, K. (2008). Theoretical reflections on the nexus between research, policy and practice. Critical Public Health, 18(1), 5–20. https://doi.org/10.1080/09581590801949924
de Leeuw, E., & Peters, D. (2014). Nine questions to guide development and implementation of health in all policies. Health Promotion International, 30(4), 987–997. https://doi.org/10.1093/heapro/dau034
Degeling, C., Carter, S. M., & Rychetnik, L. (2015). Which public and why deliberate?—A scoping review of public deliberation in public health and health policy research. Social Science & Medicine, 131, 114–121. https://doi.org/10.1016/j.socscimed.2015.03.009
Degeling, C., Rychetnik, L., Street, J., Thomas, R., & Carter, S. M. (2017). Influencing health policy through public deliberation: Lessons learned from two decades of Citizens’/community juries. Social Science & Medicine, 179, 166–171. https://doi.org/10.1016/j.socscimed.2017.03.003
Den Broeder, L., Uiters, E., ten Have, W., Wagemakers, A., & Schuit, A. J. (2017). Community participation in Health Impact Assessment: A scoping review of the literature. Environmental Impact Assessment Review, 66, 33–42. https://doi.org/10.1016/j.eiar.2017.06.004
Earwicker, R. (2012). The role of parliaments: The case of a parliamentary scrutiny. In D. V. McQueen, M. Wismar, V. Lin, C. M. Jones, & M. Davies (Eds.), Intersectoral governance for health in all policies: Structures, actions and experiences (pp. 69–84). World Health Organization, on behalf of the European Observatory on Health Systems and Policies.
Edwards, A., Zweigenthal, V., & Olivier, J. (2019). Evidence map of knowledge translation strategies, outcomes, facilitators and barriers in African health systems. Health Research Policy and Systems, 17(16). https://doi.org/10.1186/s12961-019-0419-0
El-Jardali, F., Lavis, J., Moat, K., Pantoja, T., & Ataya, N. (2014). Capturing lessons learned from evidence-to-policy initiatives through structured reflection. Health Research Policy and Systems, 12(2).
Ettelt, S., Mays, N., & Allen, P. (2015a). The multiple purposes of policy piloting and their consequences: Three examples from national health and social care policy in England. Journal of Social Policy, 44(2), 319–337. https://doi.org/10.1017/S0047279414000865
Ettelt, S., Mays, N., & Allen, P. (2015b). Policy experiments: Investigating effectiveness or confirming direction? Evaluation, 21(3), 292–307. https://doi.org/10.1177/1356389015590737
Fafard, P. (2009). Challenging English-Canadian orthodoxy on democracy and constitutional change. Review of Constitutional Studies, 14, 175–203.
Fafard, P. (2015). Beyond the usual suspects: Using political science to enhance public health policy making. Journal of Epidemiology and Community Health, 69(11), 1129–1132. https://doi.org/10.1136/jech-2014-204608
Fafard, P., & Cassola, A. (2020). Public health and political science: Challenges and opportunities for a productive partnership. Public Health, 186, 107–109. https://doi.org/10.1016/j.puhe.2020.07.004
Fafard, P., & Hoffman, S. J. (2020). Rethinking knowledge translation for public health policy. Evidence & Policy, 16(1). https://doi.org/10.1332/174426418X15212871808802
Fafard, P., McNena, B., Suszek, A., & Hoffman, S. J. (2018). Contested roles of Canada’s Chief Medical Officers of Health. Canadian Journal of Public Health, 109, 585–589. https://doi.org/10.17269/s41997-018-0080-3
Fielding, J. E., & Briss, P. A. (2006). Promoting evidence-based public health policy: Can we have better evidence and more action? Health Affairs, 25(4), 969–978. https://doi.org/10.1377/hlthaff.25.4.969
Fishkin, J., Luskin, R., & Jowell, R. (2000). Deliberative polling and public consultation. Parliamentary Affairs, 53(4), 657–666. https://doi.org/10.1093/pa/53.4.657
French, R. D. (2018). Lessons from the evidence on evidence-based policy. Canadian Public Administration, 61(3), 425–442. https://doi.org/10.1111/capa.12295
Fung, A. (2003). Recipes for public spheres: Eight institutional design choices and their consequences. The Journal of Political Philosophy, 11(3), 338–367. https://doi.org/10.1111/1467-9760.00181
Fung, A. (2015). Putting the public back into governance: The challenges of citizen participation and its future. Public Administration Review, 75(4), 513–522. https://doi.org/10.1111/puar.12361
Gagnon, M.-P., Desmartis, M., Lepage-Savary, D., Gagnon, J., St-Pierre, M., Rhainds, M., Lemieux, R., Gauvin, F.-P., Pollender, H., & Légaré, F. (2011). Introducing patients’ and the public’s perspectives to health technology assessment: A systematic review of international experiences. International Journal of Technology Assessment in Health Care, 27(1), 31–42. https://doi.org/10.1017/S0266462310001315
Ganuza, E., & Francés, F. (2012). The deliberative turn in participation: The problem of inclusion and deliberative opportunities in participatory budgeting. European Political Science Review, 4(2), 283–302. https://doi.org/10.1017/S1755773911000270
Gelijns, A. C., Brown, L. D., Magnell, C., Ronchi, E., & Moskowitz, A. J. (2005). Evidence, politics, and technological change. Health Affairs, 24(1), 29–40. https://doi.org/10.1377/hlthaff.24.1.29
Gopinathan, U., Hoffman, S. J., & Ottersen, T. (2018). Scientific advisory committees at the World Health Organization: A qualitative study of how their design affects quality, relevance, and legitimacy. Global Challenges, 2(9), 1700074. https://doi.org/10.1002/gch2.201700074
Greer, S. L., Bekker, M., de Leeuw, E., Wismar, M., Helderman, J.-K., Ribeiro, S., & Stuckler, D. (2017). Policy, politics and public health. European Journal of Public Health, 27(suppl_4), 40–43. https://doi.org/10.1093/eurpub/ckx152
Grimshaw, J. M., Eccles, M. P., Lavis, J. N., Hill, S. J., & Squires, J. E. (2012). Knowledge translation of research findings. Implementation Science, 7(1), 50. https://doi.org/10.1186/1748-5908-7-50
Groux, G. M. N., Hoffman, S. J., & Ottersen, T. (2018). A typology of scientific advisory committees. Global Challenges, 2(9), 1800004. https://doi.org/10.1002/gch2.201800004
Hagelskamp, C., Schleifer, D., Rinehart, C., & Silliman, R. (2018). Participatory budgeting: Could it diminish health disparities in the United States? Journal of Urban Health, 95(5), 766–771. https://doi.org/10.1007/s11524-018-0249-3
Haigh, F., Harris, P., & Haigh, N. (2012). Health impact assessment research and practice: A place for paradigm positioning? Environmental Impact Assessment Review, 33(1), 66–72. https://doi.org/10.1016/j.eiar.2011.10.006
Harris-Roxas, B., Viliani, F., Bond, A., Cave, B., Divall, M., Furu, P., Harris, P., Soeberg, M., Wernham, A., & Winkler, M. (2012). Health impact assessment: The state of the art. Impact Assessment and Project Appraisal, 30(1), 43–52. https://doi.org/10.1080/14615517.2012.666035
Hawkins, B., & Oliver, K. (2022). Select committee governance and the production of evidence: The case of UK E-Cigarettes policy. In P. Fafard, A. Cassola, & E. de Leeuw (Eds.), Integrating science and politics for public health. Palgrave Springer.
Hawkins, B., & Parkhurst, J. (2016). The “good governance” of evidence in health policy. Evidence & Policy, 12(4), 575–592. https://doi.org/10.1332/174426415X14430058455412
Hoffman, S. J., Ottersen, T., Tejpar, A., Baral, P., & Fafard, P. (2018). Towards a systematic understanding of how to institutionally design scientific advisory committees: A conceptual framework and introduction to a special journal issue. Global Challenges, 2(9), 1800020. https://doi.org/10.1002/gch2.201800020
Huxley, K., Andrews, R., Downe, J., & Guarneros-Meza, V. (2016). Administrative traditions and citizen participation in public policy: A comparative study of France, Germany, the UK and Norway. Policy & Politics, 44(3), 383–402. https://doi.org/10.1332/030557315X14298700857974
Jasanoff, S. (Ed.). (2004). States of knowledge: The co-production of science and social order. Routledge.
Jensen, P. H. (2020). Experiments and evaluation of public policies: Methods, implementation, and challenges. Australian Journal of Public Administration, 79(2), 259–268. https://doi.org/10.1111/1467-8500.12406
Johnson, G. F. (2009). Deliberative democratic practices in Canada: An analysis of institutional empowerment in three cases. Canadian Journal of Political Science, 42(3), 679–703. https://doi.org/10.1017/S0008423909990072
Jull, J., Giles, A., & Graham, I. D. (2017). Community-based participatory research and integrated knowledge translation: Advancing the co-creation of knowledge. Implementation Science, 12(1). https://doi.org/10.1186/s13012-017-0696-3
Kothari, A., & Wathen, C. N. (2013). A critical second look at integrated knowledge translation. Health Policy, 109(2), 187–191. https://doi.org/10.1016/j.healthpol.2012.11.004
Latour, B. (2004). Why has critique run out of steam? From matters of fact to matters of concern. Critical Inquiry, 30(2), 225–248. https://doi.org/10.1086/421123
Lavis, J. N., Boyko, J. A., & Gauvin, F.-P. (2014). Evaluating deliberative dialogues focused on healthy public policy. BMC Public Health, 14(1). https://doi.org/10.1186/1471-2458-14-1287
Lawrence, L. M., Bishop, A., & Curran, J. (2019). Integrated knowledge translation with public health policy makers: A scoping review. Healthcare Policy = Politiques de Sante, 14(3), 55–77. https://doi.org/10.12927/hcpol.2019.25792
Lee, K. (2020). WHO under fire: The need to elevate the quality of politics in global health. Global Social Policy, 20(3), 374–377. https://doi.org/10.1177/1468018120966661
Liberatore, A., & Funtowicz, S. (2003). ‘Democratising’ expertise, ‘expertising’ democracy: What does this mean, and why bother? Science and Public Policy, 30(3), 146–150. https://doi.org/10.3152/147154303781780551
MacAulay, M., Macintyre, A. K., Yashadhana, A., Cassola, A., Harris, P., Woodward, C., Smith, K., de Leeuw, E., Palkovits, M., Hoffman, S. J., & Fafard, P. (2021). Under the spotlight: Understanding the role of the Chief Medical Officer in a pandemic. Journal of Epidemiology and Community Health. https://doi.org/10.1136/jech-2021-216850
Maxwell, J., Jackson, K., Legowski, B., Rosell, S., Yankelovich, D., Forest, P.-G., & Lozowchuk, L. (2002). Report on citizens’ dialogue on the future of health care in Canada. Commission on the Future of Health Care in Canada.
McFadgen, B., & Huitema, D. (2018). Experimentation at the interface of science and policy: A multi-case analysis of how policy experiments influence political decision-makers. Policy Sciences, 51(2), 161–187. https://doi.org/10.1007/s11077-017-9276-2
McGann, M., Blomkamp, E., & Lewis, J. M. (2018). The rise of public sector innovation labs: Experiments in design thinking for policy. Policy Sciences, 51(3), 249–267. https://doi.org/10.1007/s11077-018-9315-7
Milewa, T. (2006). Health technology adoption and the politics of governance in the UK. Social Science & Medicine, 63(12), 3102–3112. https://doi.org/10.1016/j.socscimed.2006.08.009
Mitton, C., Adair, C. E., McKenzie, E., Patten, S. B., & Perry, B. W. (2007). Knowledge transfer and exchange: Review and synthesis of the literature. Milbank Quarterly, 85(4), 729–768.
Nguyen, T., Graham, I. D., Mrklas, K. J., Bowen, S., Cargo, M., Estabrooks, C. A., Kothari, A., Lavis, J., Macaulay, A. C., MacLeod, M., Phipps, D., Ramsden, V. R., Renfrew, M. J., Salsberg, J., & Wallerstein, N. (2020). How does integrated knowledge translation (IKT) compare to other collaborative research approaches to generating and translating knowledge? Learning from experts in the field. Health Research Policy and Systems, 18(1), 35. https://doi.org/10.1186/s12961-020-0539-6
NICE. (2019). National Institute for Health and Care Excellence: What we do. National Institute for Health and Care Excellence. Retrieved 18 June, 2019, from https://www.nice.org.uk/about/what-we-do
Nyström, M. E., Karltun, J., Keller, C., & Andersson Gäre, B. (2018). Collaborative and partnership research for improvement of health and social services: Researcher’s experiences from 20 projects. Health Research Policy and Systems, 16(1), 46. https://doi.org/10.1186/s12961-018-0322-0
OECD. (2015). Scientific advice for policy making: The role and responsibility of expert bodies and individual scientists (OECD Science, Technology and Industry Policy Papers No. 21). https://doi.org/10.1787/5js33l1jcpwb-en
Olejniczak, K., Borkowska-Waszak, S., Domaradzka-Widła, A., & Park, Y. (2020). Policy labs: The next frontier of policy design and evaluation? Policy & Politics, 48(1), 89–110. https://doi.org/10.1332/030557319X15579230420108
Oliver, K., & Cairney, P. (2019). The dos and don’ts of influencing policy: A systematic review of advice to academics. Palgrave Communications, 5(1), 21. https://doi.org/10.1057/s41599-019-0232-y
Oliver, K., Kothari, A., & Mays, N. (2019). The dark side of coproduction: Do the costs outweigh the benefits for health research? Health Research Policy and Systems, 17(1), 33. https://doi.org/10.1186/s12961-019-0432-3
Oxman, A. D., Lavis, J. N., Lewin, S., & Fretheim, A. (2009). SUPPORT Tools for evidence-informed health Policymaking (STP) 1: What is evidence-informed policymaking? Health Research Policy and Systems, 7(S1). https://doi.org/10.1186/1478-4505-7-S1-S1
Pantoja, T., Barreto, J., & Panisset, U. (2018). Improving public health and health systems through evidence-informed policy in the Americas. BMJ, k2469. https://doi.org/10.1136/bmj.k2469
Parkhurst, J. O. (2017). The politics of evidence: From evidence-based policy to the good governance of evidence. Routledge.
Partridge, A. C. R., Mansilla, C., Randhawa, H., Lavis, J. N., El-Jardali, F., & Sewankambo, N. K. (2020). Lessons learned from descriptions and evaluations of knowledge translation platforms supporting evidence-informed policy-making in low- and middle-income countries: A systematic review. Health Research Policy and Systems, 18(1), 127. https://doi.org/10.1186/s12961-020-00626-5
Pearce, W., & Raman, S. (2014). The new randomised controlled trials (RCT) movement in public policy: Challenges of epistemic governance. Policy Sciences, 47(4), 387–402. https://doi.org/10.1007/s11077-014-9208-3
Petticrew, M. (2013). Public health evaluation: Epistemological challenges to evidence production and use. Evidence & Policy, 9(1), 87–95. https://doi.org/10.1332/174426413X663742
Rabeharisoa, V., Moreira, T., & Akrich, M. (2014). Evidence-based activism: Patients’, users’ and activists’ groups in knowledge society. BioSocieties, 9(2), 111–128. https://doi.org/10.1057/biosoc.2014.2
Rawlins, M. D. (2015). National Institute for Clinical Excellence: NICE works. Journal of the Royal Society of Medicine, 108(6), 211–219. https://doi.org/10.1177/0141076815587658
Richardson, L. (2014). Engaging the public in policy research: Are community researchers the answer? Politics and Governance, 2(1), 32–44. https://doi.org/10.17645/pag.v2i1.19
Ritter, A., Lancaster, K., & Diprose, R. (2018). Improving drug policy: The potential of broader democratic participation. International Journal of Drug Policy, 55, 1–7. https://doi.org/10.1016/j.drugpo.2018.01.016
Rowe, S., Alexander, N., Weaver, C. M., Dwyer, J. T., Drew, C., Applebaum, R. S., Atkinson, S., Clydesdale, F. M., Hentges, E., Higley, N. A., & Westring, M. E. (2013). How experts are chosen to inform public policy: Can the process be improved? Health Policy, 112(3), 172–178. https://doi.org/10.1016/j.healthpol.2013.01.012
Russell, J., Greenhalgh, T., Byrne, E., & Mcdonnell, J. (2008). Recognizing rhetoric in health care policy analysis. Journal of Health Services Research & Policy, 13(1), 40–46. https://doi.org/10.1258/jhsrp.2007.006029
Safaei, J. (2015). Deliberative democracy in health care: Current challenges and future prospects. Journal of Healthcare Leadership, 123. https://doi.org/10.2147/JHL.S70021
Sanderson, I. (2002). Evaluation, policy learning and evidence-based policy making. Public Administration, 80(1), 1–22. https://doi.org/10.1111/1467-9299.00292
Sanderson, I. (2009). Intelligent policy making for a complex world: Pragmatism, evidence and learning. Political Studies, 57(4), 699–719. https://doi.org/10.1111/j.1467-9248.2009.00791.x
Sheard, S., & Donaldson, L. J. (2006). The nation’s doctor: The role of the Chief Medical Officer 1855–1998. Radcliffe.
Smith, E., Ross, F., Donovan, S., Manthorpe, J., Brearley, S., Sitzia, J., & Beresford, P. (2008). Service user involvement in nursing, midwifery and health visiting research: A review of evidence and practice. International Journal of Nursing Studies, 45(2), 298–315. https://doi.org/10.1016/j.ijnurstu.2006.09.010
Smith, G., & Wales, C. (2000). Citizens’ juries and deliberative democracy. Political Studies, 48, 51–65.
Smith, K. (2013). Beyond evidence based policy in public health. Palgrave Macmillan.
Stewart, E., & Smith, K. E. (2015). “Black magic” and “gold dust”: The epistemic and political uses of evidence tools in public health policy making. Evidence & Policy, 11(3), 415–437. https://doi.org/10.1332/174426415X14381786400158
Stobbe, M. (2014). Surgeon General’s warning: How politics crippled the nation’s doctor. University of California Press.
Street, J., Duszynski, K., Krawczyk, S., & Braunack-Mayer, A. (2014). The use of citizens’ juries in health policy decision-making: A systematic review. Social Science & Medicine, 109, 1–9. https://doi.org/10.1016/j.socscimed.2014.03.005
Viswanathan, M., Ammerman, A., Eng, E., Garlehner, G., Lohr, K. N., Griffith, D., Rhodes, S., Samuel-Hodge, C., Maty, S., Lux, L., Webb, L., Sutton, S. F., Swinson, T., Jackman, A., & Whitener, L. (2004). Community-based participatory research: Assessing the evidence: Summary. In AHRQ Evidence Report Summaries. Agency for Healthcare Research and Quality.
Wampler, B. (2007). A guide to participatory budgeting. In A. Shah (Ed.), Participatory budgeting (pp. 21–54). The World Bank.
Ward, V., House, A., & Hamer, S. (2009). Knowledge brokering: The missing link in the evidence to action chain? Evidence & Policy, 5(3), 267–279. https://doi.org/10.1332/174426409X463811
WHO. (2021). 22nd expert committee on the selection and use of essential medicines. Retrieved 5 June, 2021, from https://www.who.int/selection_medicines/committees/expert/22/en/
Williams, O., Sarre, S., Papoulias, S. C., Knowles, S., Robert, G., Beresford, P., Rose, D., Carr, S., Kaur, M., & Palmer, V. J. (2020). Lost in the shadows: Reflections on the dark side of co-production. Health Research Policy and Systems, 18(1), 43–43. https://doi.org/10.1186/s12961-020-00558-0
Wright, J., Parry, J., & Mathers, J. (2005). Participation in health impact assessment: Objectives, methods and core values. Bulletin of the World Health Organization, 83(1), 58–63. https://doi.org//S0042-96862005000100015
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2022 The Author(s)
About this chapter
Cite this chapter
Cassola, A., Fafard, P., Palkovits, M., Hoffman, S.J. (2022). Mechanisms to Bridge the Gap Between Science and Politics in Evidence-Informed Policymaking: Mapping the Landscape. In: Fafard, P., Cassola, A., de Leeuw, E. (eds) Integrating Science and Politics for Public Health. Palgrave Studies in Public Health Policy Research. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-030-98985-9_13
Publisher Name: Palgrave Macmillan, Cham
Print ISBN: 978-3-030-98984-2
Online ISBN: 978-3-030-98985-9
eBook Packages: Political Science and International Studies (R0)