Scientific evidence has become increasingly important for decision-making processes in contemporary democracies. On the one hand, research dealing with the utilization of scientific knowledge in the political process has pointed out that decision-makers learn from evidence to improve policies and solve problems. On the other, scholars have underlined that actors learn from evidence to support their political interests, regardless of how this affects the policy problem. One conventional insight from the policy learning literature is that higher salience of a policy issue makes it much less likely that decision-makers use evidence in an “unpolitical” way. Nevertheless, only a few studies have systematically investigated how differences in issue salience between policy fields affect how decision-makers learn from evaluations at the individual level. Using multilevel regression models on data from a legislative survey in Switzerland, this paper shows that salience and technical complexity of policy issues do not automatically lead to less policy learning and more political learning from policy evaluations. Nevertheless, the empirical analysis also shows that issue salience increases policy learning from evaluations if the policy issue is technically complex. Our findings contribute to research on policy learning and evidence-based policy making by linking the literatures on policy evaluation and learning, which helps to analyze the micro-foundations of learning in public policy and administration.
A very important research question for public policy analysis is how decision-makers learn in the policy process. In the literature, scholars have identified policy evaluations as a source of information that provides decision-makers with material to learn about how to modify legislationFootnote 1 (Head, 2016; Pattyn et al., 2018; Stephenson et al., 2019). In this context, scholars often refer to the term evidence-based policy making, which describes the utilization of scientific evidence to create or change public policies. Implementing expertise in public policy comes with three main challenges. Firstly, decision-makers face the temptation to use information in a way that fits their political goals, even if this practice comes at the cost of taming and solving the policy problem. Secondly, learning from expert positions is difficult in the post-truth era, where ignoring facts has become somewhat normalized (Perl et al., 2018). Thirdly, in a context of limited time and resources, decision-makers use heuristics to process information more effectively (Cairney, 2016; Kamkhaji & Radaelli, 2017; Perl et al., 2018).
This article contributes to analyzing the first of these challenges: the strategic political usage of expertise by decision-makers at the expense of policy efficacy. To that end, this paper examines how the salience and technical complexity of policy issues are linked to policy learning and political learning from evaluations. Specifically, the paper combines two strands of research:
Firstly, the article draws on research on policy evaluation to understand how scientific evidence influences policy making (Weiss, 1979; Weiss et al., 2005; Daviter, 2015; Newman et al., 2016; Schlaufer et al., 2018). Notably, researchers have focused on scientific evaluations of public policies, which members of parliament can demand from the administration (Jennings & Hall, 2012; Bundi, 2016). According to the literature, parliamentarians appreciate evaluations (Boyer & Langbein, 1991; Demaj & Summermatter, 2012; Hird, 2009) and use them in different ways (Boyer & Langbein, 1991; Tabuga, 2017; Whiteman, 1985). On the one hand, evaluations produce independent evidence about the implementation of a policy, which is why they are an instrument for parliamentary oversight that decision-makers use to ensure the accountability of the administration to elected officials (Bundi, 2018). On the other, decision-makers can use this evidence to learn whether a policy works as intended or whether it has different effects (Alkin & King, 2017; Eberli, 2019; Henry & Mark, 2003).
In this literature, scholars have also separated analytical from political uses of policy evaluations (Eberli, 2019; Frey, 2012; Shulock, 1998; Weiss, 1989). Researchers have pointed out that policy-oriented and politically oriented uses of evaluation cannot be considered completely detached from each other, as they can occur simultaneously or sequentially (Amara et al., 2004; Frey, 2012; Whiteman, 1985). Therefore, we should expect that any use of policy evaluation has some political motives attached to it (Frey, 2012). Nevertheless, there is little evidence on the determinants of policy-oriented and political uses of evidence and learning at the individual level, or on how differences between policy issues affect different uses of policy information. What is more, only a few studies have examined different forms of evaluation use in parliament and linked them to learning in the policy process (Amara et al., 2004; Nutley et al., 2007, p. 67).
Secondly, this paper builds on the policy learning literature. Therein, scholars have repeatedly asserted that learning in public policy can follow political goals rather than only be focused on how to deal with a policy problem (Bennett & Howlett, 1992; Biesenbender & Tosun, 2014; Boswell, 2009; Cairney & Oliver, 2017; Dunlop et al., 2018; Greenhalgh & Russell, 2009; Pierson, 2000; Vagionaki & Trein, 2020; Zito & Schout, 2009). The complexity of social and political institutions makes learning about evidence a social experience (Ammons et al., 2008; Hall, 1993). Decision-makers mix political ideas and agendas with information from policy research and experience (Gilardi, 2010; Gilardi et al., 2009; May, 1992). Cognitive limitations reinforce this mix because decision-makers use cognitive shortcuts to cope with bounded rationality and uncertainty (Braun & Gilardi, 2006; Simon, 1947). Against this background, scholars have theorized that learning can entail political negotiation and happen in the context of hierarchies (Dunlop & Radaelli, 2018). Others have emphasized that decision-makers use research above all if the results fit their dominant political narrative (Boswell, 2009; Henry & Mark, 2003). Empirical research has shown that decision-makers introduce legislation on issues that are salient, but only if scientific uncertainty about these issues is limited (Bromley‐Trujillo & Karch, 2019), and that decision-makers tend to maintain their policy positions in the face of scientific findings (Heikkila et al., 2020). Nevertheless, policy learning research often focuses on policy change (Heikkila & Gerlak, 2013) and requires more analysis of the micro-foundations of policy learning (Dunlop & Radaelli, 2017).
This article draws on research focusing on policy evaluation and policy learning to distinguish (a) policy learning from evaluations and (b) political learning from evaluations. The paper then proceeds to analyze how these two forms of learning from evaluations are linked to salience (Baumgartner & Jones, 2010; Jones & Baumgartner, 2005) and technical complexity of the policy issue (Eshbaugh‐Soha, 2006; Gormley, 1983; Trein & Maggetti, 2020). The empirical analysis in the paper uses a survey on evaluation use by members of parliament in Switzerland as a proxy to examine policy learning and political learning at the individual level of analysis.
Using multilevel regression models, the paper shows that issue salience alone increases neither policy learning nor political learning from evaluations. If policy issues are technically complex, decision-makers are more likely to use evaluations for both policy learning and political learning. However, once issue salience (attention to the policy issue) increases, decision-makers dealing with a technically complex policy issue employ evaluations for policy learning, but not for political learning. This paper contributes to the literature by unpacking microlevel dynamics of the learning process regarding policy aspects and political aspects of policy making (Dunlop & Radaelli, 2017, p. 306–307). Notably, this research underlines that decision-makers use evaluations for policy learning rather than for political learning if they must deal with salient and technically complex issues, which emphasizes the problem-solving capacity of democratic governance.
Evaluation use as political and policy learning
In the literature, scholars have pointed out that decision-makers use evaluations in different ways. According to Alkin and King (2017, p. 573), the most common forms of evaluation use are instrumental, conceptual, and symbolic. Instrumental use is defined as the utilization of evaluations as a basis for decisions or actions. Conceptual use describes the use of evaluations in order to better understand an issue and to gain new perspectives or insights on the topic. Symbolic use refers to situations in which decision-makers utilize evaluations not primarily because of the evaluation findings, but because they expect a benefit from including them in their political activities (Knorr, 1977). For example, evaluations can be used to persuade others, justify a position, or legitimize an action. Symbolic use is sometimes also called persuasive, legitimizing, tactical, or strategic use of evaluations (Johnson, 1998). However, despite their conceptual clarity, the different forms of evaluation use are not always easy to distinguish and to observe empirically. Instrumental and conceptual uses differ from symbolic uses, as the former require openness to the assessment and its outcomes (Frey, 2012). This perspective entails that instrumental and conceptual uses of policy evaluation contribute to improving policy problem-solving, since they aim at ameliorating policy measures based on scientific evidence about policy design and policy outcomes. Nevertheless, this rational notion of evaluation use for problem-solving is also present to some extent in symbolic use, where actors mobilize the authority of knowledge for political reasons (Boswell, 2009).
Research on policy learning is conceptually similar to the policy evaluation literature. A common and parsimonious definition holds that learning is “the acquisition of new relevant information that permits the updating of beliefs [respectively ideas] about the effects of a new policy” (Braun & Gilardi, 2006, p. 306). In this context, information about the effects of a policy may come from policy evaluations. This definition is based on information processing in a Bayesian framework and is still used in policy process research today (e.g., Nowlin, 2021). Another definition of policy learning is more encompassing. According to Heikkila and Gerlak, policy learning is “(1) a collective process, which may include acquiring information through diverse actions (e.g. trial and error), assessing or translating information, and disseminating knowledge or opportunities across individuals in a collective, and (2) collective products that emerge from the process, such as new shared ideas, strategies, rules, or policies” (Heikkila & Gerlak, 2013, p. 486).
Like evaluation scholars, who point to different forms of evaluation use, researchers of learning have distinguished different forms of learning in political research (Vagionaki & Trein, 2020). For example, scholars have defined no learning (the absence of learning), blocked learning (lessons learned do not transfer to the organizational level), instrumental learning (learning about policy instruments), and political learning (learning about political strategies) as some of the main learning forms (Bennett & Howlett, 1992; May, 1992; Zito & Schout, 2009). More recently, Dunlop and Radaelli have developed four modes of policy learning—epistemic, bargaining, hierarchical, and reflexive—that assume that decision-makers are homo discentis, i.e., learning and studying individuals (Dunlop & Radaelli, 2018).
Trein and Vagionaki build on this literature and distinguish policy-oriented learning from political (power-oriented) learning (Trein, 2018; Trein & Vagionaki, 2021). Policy-oriented learning entails the updating of ideas to change policy instruments for the better (e.g., Bennett & Howlett, 1992; Jenkins-Smith et al., 2018; May, 1992; Zito & Schout, 2009). Contrariwise, political learning entails learning with an explicit political motivation (Lowi, 1972; Steinmo et al., 1992; Pierson, 1993). Political learning means that decision-makers update their beliefs to maximize their political returns, even if this comes at the cost of implementing ineffective policies (Hertel-Fernandez, 2019; Trein & Vagionaki, 2021).
This article focuses on the individual level of learning and assesses how elected officials learn from policy evaluations. This perspective pays attention to the micro-level of policy learning as conceptualized by Dunlop and Radaelli in their analysis of the learning process (Dunlop & Radaelli, 2017). The article does not, however, focus on how learning changes public policies. Specifically, the paper distinguishes two ways of learning from evaluations: firstly, policy learning from evaluations, in which office holders focus on the findings of the evaluation to improve policy effectiveness; secondly, political learning from evaluations, in which evaluations are used in a politically self-interested manner. The paper delineates these two concepts from the literature on policy evaluation as well as from research on policy learning. The policy evaluation literature has distinguished analytical from political uses of evaluations (Alkin & King, 2017; Eberli, 2019; Frey, 2012). As Eberli (2019) has pointed out, the two-part distinction between analytical and political use refers to the fundamental difference between an analytical, improvement-oriented and a political, strategic logic of use. However, the two forms of use manifest in different ways. Analytical use captures the types of instrumental and conceptual use discussed in the literature to date, while political use includes all types of symbolic use, such as persuasive, legitimizing, or tactical use (Alkin & Taut, 2003). We are aware that some scholars have pointed out that these two forms of evaluation use are, to a certain degree, conditional on one another, since political-symbolic use unfolds its effect by relying on the instrumental problem-solving image of systematically generated knowledge and its use (Boswell, 2009, p. 249).
Although it is possible that actors use evaluations analytically to obtain policy-relevant information, elected officials are likely always to retain an intention to use evaluations politically (Frey, 2012), not least because successful problem-solving is a political end in itself.
By distinguishing policy learning from evaluations and political learning from evaluations, we assume that decision-makers learn from an evaluation in one way or the other. This assumption is plausible if we analyze decision-makers as homo discentis, i.e., learning and studying individuals (Dunlop & Radaelli, 2018, p. S53). The logical consequence of such a perspective is that evaluation use necessarily results in learning, understood as the updating of beliefs. Consequently, policy-oriented learning from evaluations means that office holders learn from the findings of the evaluation to improve policy effectiveness and advance the public good. Political learning from evaluations implies that decision-makers learn from evaluations how to achieve their political self-interest. This conceptual separation focuses on learning as a process at the individual level without connecting it to policy results (Dunlop & Radaelli, 2018). By connecting analytical use of evaluations to policy learning and political use of evaluations to political learning, we widen the theoretical interpretation of evaluation use and improve the empirical grounding of the learning literature.
Policy issues and learning from evaluations
In addition to distinguishing policy learning from evaluations and political learning from evaluations, this article is interested in how differences between policy issues affect the two forms of learning. To answer this question, this paper analyzes two explanations and their interaction. Firstly, it assesses how policy salience affects learning from policy evaluations. Secondly, the article examines how the technical complexity of policy issues is linked to learning from evaluations. Thirdly, the paper examines how these two variables interact.
We know from the literature that the salience of policy issues has an impact on the production of policy reforms (Lieberherr & Thomann, 2020). If an issue receives a high amount of attention from the media and decision-makers, a measurable change in policies is likely (Baumgartner & Jones, 2010; Jones & Baumgartner, 2005). If a policy issue is salient, it tends to be politicized in formal political arenas. For example, if a problem receives considerable attention, such as the remuneration of managers, it will be politicized publicly by political parties rather than in informal arenas behind closed doors (Culpepper, 2010). In such a context, political actors need to ensure that they stick to their strategic political positions and tend to follow the recommendations of independent experts only within the limits of their party's positions.
Against this backdrop, decision-makers select evidence that fits their preferred narrative (Eberli, 2019). According to Boswell, research does not only inform new policies aiming to solve pressing problems, but “… does indeed play an important political function …” that is different from an instrumental and problem-oriented use of knowledge. In this context, policy information serves to substantiate or legitimize pre-existing policy positions (Boswell, 2009, p. 7). From the perspective of the learning literature, such practices are cases of political learning: based on new knowledge, decision-makers learn (i.e., update their beliefs) to adjust and inform their political strategies, even if this practice entails choosing a policy option that is less effective (Bennett & Howlett, 1992; Boswell, 2009; Pierson, 2000; Zito & Schout, 2009), because they can afford to do so (Bandelow, 2008, p. 746; Deutsch, 1966, p. 111). In the case of a highly salient policy issue, decision-makers follow this logic and learn from evaluations in a way that reinforces their political positions. Therefore, we hypothesize:
The more salient a policy issue, the more likely decision-makers learn politically from evaluations.
Nevertheless, the discussion in the previous section has pointed out that actors also use evidence and scientific information in a policy-oriented way (Heclo, 1974; Budge & Laver, 1986; Strom, 1990; Evans, 2018). This form of evaluation use and learning captures the policy-seeking and puzzling intentions of decision-makers. In other words, it points to the aspiration of actors to solve public problems through policy (Ansell, 2011). This way of learning from evidence and evaluation resembles an analytical use of policy evaluations (Eberli, 2019; Frey, 2012; Shulock, 1998; Weiss, 1989). In the policy learning literature, this form of evaluation use resembles instrumental learning, i.e., learning to improve public policies for problem solving (Vagionaki, 2020; Zito & Schout, 2009). This literature implies that the more public attention an issue receives, the more likely decision-makers are to learn from evaluations in a policy-oriented way, because they want to solve pressing problems on the foundation of expertise. Therefore, we hypothesize the following:
The more salient a policy issue, the more likely is policy-oriented learning from evaluations.
Policy issues vary in their complexity. According to Sager and Andereggen (2012), complex policies are characterized by the use of different policy instruments with multiple interactions, which makes it difficult to attribute policy effects. In the literature, scholars have distinguished between technically complex issues, such as energy policy and health policy, where decision-makers usually integrate a broad range of policy instruments, and less complex issues, such as labor market policy. In the case of technically complex issues, decision-makers are more likely to consult external experts to better understand how to calibrate different policy instruments. According to one author, “technical complexity is high when a policy problem requires the understanding of a specialist or expert, a professional appraisal more than a normative judgment” (Gormley, 1983, p. 89–90). Admittedly, even technical decisions require normative criteria and judgments, but the distinction between technical solutions requiring expert knowledge and less technical ones is well established in the public policy literature (Gormley, 1983, p. 90; Eshbaugh‐Soha, 2006; Trein & Maggetti, 2020). By contrast, other policy issues are not as technically complex: unemployment policy, for example, requires much less technical knowledge and insight to design and decide on policy solutions than health and energy policy.
These insights imply that decision-makers could learn from evaluations either in a policy-oriented way or in a political way when they face technically complex policy issues. On the one hand, research-based policy evidence helps decision-makers satisfy their problem-solving and policymaking intentions. Against the backdrop of a technically complex issue, decision-makers are therefore more likely to use evidence than for issues of limited technical complexity. On the other hand, decision-makers also have incentives to use scientific evidence to learn how to achieve their political goals. In the context of technically complex issues, using insights from research signals competence to voters, even if the actual learning from evaluations has political intentions and might be symbolic, as decision-makers will carefully ensure that the policy preferences they derive from evidence chime with their political goals. Against this background, we formulate the following two hypotheses:
Technically complex policy issues make political learning from evaluations more likely.
Technically complex policy issues make policy learning from evaluations more likely.
An important question that follows from this discussion is how issue salience and technical complexity interact in their impact on the different usages of evaluations. Do decision-makers learn from evaluations in a policy-oriented or in a political way if salience increases and they have to deal with a technically complex policy issue? This paper argues that issue salience, i.e., public attention to the policy problem, results in policy learning from evaluations if the policy issue is technically complex. If a policy problem receives a lot of attention from the media, decision-makers want to demonstrate that they seriously aim at problem solving and take expert recommendations regarding complex problems seriously. Conversely, it is unlikely that salience increases political learning from evaluations in a context of technical complexity. In this situation, decision-makers face the risk of “being caught” practicing symbolic use of evidence due to public attention on the policy process, especially if they do not take advice from experts seriously (Feindt et al., 2021; Trein, 2018). Against a background of high issue attention, using technically complex policy evidence in a purely political way might result in electoral punishment if voters discover political learning from evaluations that ignores potential recommendations for policy improvement. Thus, we hypothesize the following:
Higher issue salience increases the positive effect of technical complexity on policy learning from evaluations.
Data and methods
This study is based on an online survey of cantonal and federal members of parliament (MPs) in Switzerland conducted between May and June 2014 (Eberli et al., 2014). MPs were asked about their general attitudes toward evaluation, their habits of demanding policy evaluations, the ways in which they use evaluations, and how often they consult evaluation reports. Because MPs may understand the term evaluation broadly, we provided a definition of the concept in the introduction to the questionnaire: “In this survey, evaluations are understood as studies, reports or other documents that assess state measures in a systematic and transparent way with respect to their effectiveness, efficiency, or fitness for purpose.” A total of 1570 MPs took part in the survey, which corresponds to a response rate of 55.3%. Compared to similar surveys among Swiss parliamentarians, this percentage is relatively high.Footnote 2
In order to measure the dependent variables – policy learning and political learning from evaluations – MPs were asked whether they had used evaluations in specific ways during the previous four years. To operationalize different types of evaluation use, the paper employs four variables: MPs were asked whether they use evaluations in order to make a decision (instrumental use) or to learn about policies (conceptual use), and whether they use them to justify a decision (legitimizing use) or to convince others (persuasive use) (Alkin & King, 2017; Eberli, 2019). On this basis, the paper builds two indexes. Table 1 shows the different questions and how they correlate with each other:
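The construction of the two indexes can be sketched as follows. This is a minimal illustration: the variable names and response values are invented for the sketch and do not correspond to the actual survey data.

```python
import pandas as pd

# Hypothetical responses from three MPs; the item names are illustrative,
# not the exact variable names from the questionnaire.
df = pd.DataFrame({
    "use_decide":   [1, 0, 1],   # instrumental use: make a decision
    "use_learn":    [1, 1, 0],   # conceptual use: learn about policies
    "use_justify":  [0, 1, 0],   # legitimizing use: justify a decision
    "use_convince": [0, 1, 1],   # persuasive use: convince others
})

# Policy learning index: average of the instrumental and conceptual items.
df["policy_learning"] = df[["use_decide", "use_learn"]].mean(axis=1)

# Political learning index: average of the legitimizing and persuasive items.
df["political_learning"] = df[["use_justify", "use_convince"]].mean(axis=1)
```

Averaging the paired items yields one score per MP and learning type, which is the structure the correlations in Table 1 would be computed on.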
The independent variables were collected through an expert survey in 2015, since the hypotheses do not focus only on individual characteristics of MPs but also on differences across policy fields. Fischer and Sciarini (2016) show that such cross-sectional comparisons are important for understanding the decision-making process. Hence, we collected a data set on the characteristics of policy fields through an expert survey of Swiss political scientists (Bundi, 2018).Footnote 3 Hooghe et al. (2010, p. 692) suggest that expert surveys are appropriate when reliable information is more likely to be found among experts than in other documentation sources. Since no data are available on the attributes of policy fields in Switzerland, experts were asked to assess the attributes of various policy fields. The expert survey provided the same list of policy fields, with keywords, that was included in the survey of Swiss MPs. Moreover, the attributes of the policy fields were pre-defined, and experts were asked to rate each attribute on a scale from 0 to 10. To make the experts' ratings comparable across policy fields, the ratings were standardized to a mean of zero and a standard deviation of one and then merged into the first data set. To match MPs with the salience of a policy domain, we used their parliamentary committee affiliation. Specifically, we linked the salience and complexity values of a policy domain (as assessed by experts) to the committee membership of a specific MP. This strategy is appropriate because the Swiss parliament is considered a working parliament with a strong committee system, in which legislative projects are predominantly elaborated in the committees and in which evaluation use happens in committee meetings (Dann, 2003; Eberli, 2019).Footnote 4
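The standardization of the expert ratings is a conventional z-score transformation, which can be sketched as follows (the field names and ratings below are invented for illustration, not the actual expert data):

```python
import numpy as np

# Hypothetical mean expert ratings of issue salience (0-10 scale)
# for four policy fields; values are illustrative only.
fields = ["health", "energy", "labor market", "education"]
salience = np.array([7.2, 6.1, 5.4, 4.8])

# Standardize to a mean of zero and a standard deviation of one,
# as described in the text, so fields can be compared on one scale.
z = (salience - salience.mean()) / salience.std()
```

Each MP then receives the standardized salience and complexity values of the policy field covered by his or her committee.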
Next to the independent variables that operationalize the above-discussed hypotheses, the analyses include several control variables: age, gender, education, urbanization, party membership,Footnote 5 membership of the Executive Committee,Footnote 6 legislative professionalization, and evaluation demand. Moreover, an additional dummy variable indicates whether an MP is a member of an oversight committee, as Bundi (2016) has shown that MPs in oversight committees are more likely to ask for evaluations. On the structural level, the analysis includes whether the cantonal or federal constitution entails an evaluation clauseFootnote 7 and controls for the size of the parliament and the number of parties. The operationalization is summarized in Table 7 in the appendix.
Empirically, the analysis relies on a multilevel logistic regression model, since the observations are nested in groups (parliaments) that have the potential to influence learning from evaluations. According to Steenbergen and Jones (2002, p. 219–220), ignoring the clustered data structure could lead to distorted standard errors that overestimate the importance of the effects. Robust variance estimation relaxes the assumption that the error terms are identically distributed, and clustering further relaxes the assumption that the observations are completely independent. Hence, we use a random intercept model to test variables at the two levels (MPs and parliaments).
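The intuition behind accounting for clustering by parliament can be sketched with synthetic data. All quantities below are invented for illustration; the paper itself estimates a multilevel random-intercept model, whereas this simplified sketch fits a plain logistic regression by Newton-Raphson and then computes a cluster-robust (sandwich) variance by summing score contributions within each parliament:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 26 "parliaments" (clusters) with 40 MPs each.
n_clusters, n_per = 26, 40
g = np.repeat(np.arange(n_clusters), n_per)      # cluster id per MP
u = rng.normal(0, 0.5, n_clusters)[g]            # parliament-level shock
x = rng.normal(size=g.size)                      # e.g., standardized salience
X = np.column_stack([np.ones_like(x), x])
p_true = 1 / (1 + np.exp(-(0.3 + 0.8 * x + u)))
y = rng.binomial(1, p_true)                      # 1 = learned from evaluation

# Fit a logistic regression by Newton-Raphson.
beta = np.zeros(X.shape[1])
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    H = (X * W[:, None]).T @ X                   # observed information
    beta += np.linalg.solve(H, X.T @ (y - p))

# Cluster-robust variance: aggregate scores within each parliament
# before forming the "meat" of the sandwich estimator.
p = 1 / (1 + np.exp(-X @ beta))
scores = X * (y - p)[:, None]
meat = np.zeros((X.shape[1], X.shape[1]))
for c in range(n_clusters):
    s_c = scores[g == c].sum(axis=0)
    meat += np.outer(s_c, s_c)
bread = np.linalg.inv((X * (p * (1 - p))[:, None]).T @ X)
V = bread @ meat @ bread
se = np.sqrt(np.diag(V))                         # cluster-robust SEs
```

Because MPs in the same parliament share the shock `u`, the cluster-robust standard errors are typically larger than the naive ones, which is exactly the distortion Steenbergen and Jones warn about.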
We first give a descriptive overview of policy-oriented and power-oriented learning from evaluations before presenting the estimates related to them. Figure 1 illustrates the distribution of learning from evaluations in the sample. The figure shows that MPs frequently use evaluations in parliament, but that there is a small difference between policy learning from evaluations (mean 2.84) and political learning from evaluations (mean 2.68). Hence, MPs learn from evaluations more to better understand a public policy than to convince others of their opinion. Nevertheless, this slight difference might also be a result of social desirability, as policy-oriented use and learning usually enjoy higher acceptance than politically oriented use and learning (Bundi et al., 2018). The figure also shows that those who actually demanded evaluations seem more likely to use them either for policy learning or for political learning. Nevertheless, these differences are not statistically significant. This finding indicates that even MPs who do not demand evaluations use them for both types of learning.
Yet how important are policy field characteristics in explaining policy learning and political learning from evaluations, according to the hypotheses discussed in the previous sections of the paper? Table 2 presents the estimates analyzing the link between issue salience and policy learning as well as political learning from evaluations.
Overall, the different models suggest that issue salience neither augments policy learning from evaluations nor political learning from evaluations, in particular if we control for parliament-specific variables. Even though the regression coefficient for issue salience is marginally significant in Models 1 and 3, MPs are only about 8% more likely to learn from evaluations in highly salient policy issues compared to non-salient policy domains. Moreover, the significance disappears once structural variables measuring the presence of an evaluation clause, the size of parliament, and the number of parties are included (Models 2 and 4, Table 2). In contrast, age is positively related to policy learning from evaluations, while older MPs are less likely to use evaluations to learn politically. Hence, the results suggest rejecting both hypotheses 1a and 1b, which predict that salience augments policy learning and political learning from evaluations. In addition, female MPs are 15% less likely than their male colleagues to use evaluations for political learning.
On the structural level, Models 1 and 3 confirm the positive relationship between evaluation demand and learning from policy evaluations (see Fig. 1). The more often MPs demand evaluations, the more often they use them. We also see that the institutionalization of evaluations matters (cf. also Jacob et al., 2015). In parliaments with an evaluation clause in the constitution, MPs are more likely to use evaluations (15.2% for policy-oriented use and 19.5% for power-oriented use). Furthermore, Model 2 suggests that MPs in smaller parliaments are less likely to use evaluations in order to understand public policies better, which likely has to do with their limited resources (Bundi et al., 2017).
Next, Table 3 presents the results for the link between the technical complexity of policy issues, on the one hand, and policy learning and political learning from evaluations, on the other. In contrast to salience, Models 5 to 8 suggest that complexity is significantly associated with both forms of learning from evaluations. In the case of a complex policy issue, MPs who demand evaluations are also more likely to learn from them: a 5.8% increase for policy learning and a 6.4% increase for political learning. This relationship remains significant even when structural variables regarding the parliament are included in the analysis. Thus, the empirical models provide evidence for Hypotheses 2a and 2b, which propose that, against the background of a technically complex policy issue, decision-makers are more likely to learn from evaluations both in a policy-oriented sense and in a political way, compared to policy issues of limited technical complexity. Furthermore, the variables at the individual and structural levels yield results similar to the models including issue salience. Consequently, the analysis reveals few differences between the two forms of learning from evaluations. Does it then matter to distinguish these two forms of learning in empirical analyses?
The regression models shown in Table 4 demonstrate that it is indeed important to distinguish policy learning and political learning from evaluations. While Model 9 shows a positive and significant interaction effect between issue salience and complexity for policy learning from evaluations, the analyses do not reveal any statistical relationship for the same interaction related to political learning from evaluations.
This result implies that a high degree of issue salience has a positive effect on policy learning from evaluations for policy issues with high levels of technical complexity. In other words, MPs learn from evaluations in a policy-oriented way more frequently if the policy domain is both technically complex and salient. Figure 2 illustrates this relationship graphically, demonstrating how an increase in complexity augments the impact of salience on policy learning from evaluations.Footnote 8
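The conditional pattern just described can be illustrated with a simple interaction model on synthetic data. Everything here — variable names, sample size, coefficients — is a hypothetical sketch rather than the authors' specification: salience is simulated to matter only when complexity is high, and the fitted logit recovers that pattern as a difference in predicted probabilities.

```python
# Illustrative sketch of a salience x complexity interaction.
# Synthetic data; names and effect sizes are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 4000
salience = rng.integers(0, 2, n)    # 1 = highly salient policy issue
complexity = rng.integers(0, 2, n)  # 1 = technically complex policy issue
# Simulated truth: salience helps policy learning only on complex issues
logit = -0.5 + 0.4 * complexity + 0.8 * salience * complexity
policy_learning = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"policy_learning": policy_learning,
                   "salience": salience, "complexity": complexity})
fit = smf.logit("policy_learning ~ salience * complexity", df).fit(disp=0)

# Conditional effect of salience: near zero for simple issues,
# clearly positive for complex ones
grid = pd.DataFrame({"salience": [0, 1, 0, 1], "complexity": [0, 0, 1, 1]})
p = fit.predict(grid)
print("effect of salience, simple issues: ", round(p[1] - p[0], 3))
print("effect of salience, complex issues:", round(p[3] - p[2], 3))
```

Plotting the two conditional effects against complexity would reproduce the kind of interaction graph the paper reports: the slope of salience grows as complexity increases.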
The results that are presented in this paper have theoretical implications for public policy analysis. The first implication is related to issue attention (salience). On the one hand, different strands of literature imply that decision-makers tend to learn politically if the policy issue is salient because they have already defined their position and follow political goals (Boswell, 2009; Deutsch, 1966; Strøm, 1990). On the other hand, researchers have pointed out that decision-makers are policy seekers and want to improve policies (Zito & Schout, 2009). Consequently, they should learn from evaluations in a policy-oriented sense to demonstrate their ability and willingness to improve such policies, especially if an issue is salient (Ansell, 2011). The results of the analysis in this paper show, however, that issue salience alone does not explain higher levels of policy learning or political learning from evaluations amongst decision-makers. Therefore, scholars should reject the hypothesis that issue salience alone directly increases how MPs learn from policy evaluations.
Nevertheless, the analysis in this paper also shows that issue salience is important for learning from policy evaluations in conjunction with the technical complexity of policy issues. Results in this article show that decision-makers learn above all in a policy-oriented way from evaluations if they face highly salient and technically complex policy issues. This finding supports the argument that elected officials are above all policy seekers and suggests that the "misuse" of evaluations for politically oriented and power-seeking strategies is limited (but not absent), particularly against the background of technically complex and salient policy problems. For example, if issues such as environmental protection or public health become salient, members of parliament will use policy evaluations to improve policies rather than merely to pursue their own interests.
This analysis underlines that the political orientations and ambitions of elected officials are not necessarily a challenge for the problem-solving effectiveness of democratic governance (Ansell, 2011; Scharpf, 2003). Although scholars have recently and correctly emphasized that the usage of evidence in public policy follows a political logic (Cairney, 2016; Cairney & Oliver, 2017), this does not mean that decision-makers largely ignore research results and clearly prioritize politics over problem-solving. Especially if a policy issue is pressing and complex, i.e., if potential solutions require expert input, elected officials are more likely to learn from evaluations about how to solve the policy problem rather than only to serve their political goals.
However, we should be careful in interpreting our results. First, our research design does not allow for causal inference. In this study, we presented factors that prior studies have shown to correlate with evaluation use (Johnson et al., 2009). Moreover, we are confident that learning from evaluations does not influence our main independent variables, since individual behavior can hardly affect how a policy issue is perceived. Second, our findings provide only limited implications for policy learning and other forms of evidence use; exploring these remains an interesting avenue for future research.
This paper set out to explain how issue salience and technical complexity affect how elected officials learn from policy evaluations. Using a survey amongst 1,500 members of national and subnational Swiss parliaments, this article demonstrates that decision-makers engage in policy learning rather than political learning from evaluations, especially if the policy issue is salient and technically complex. The findings do not confirm the conventional assumption that issue salience increases political learning and decreases policy learning amongst MPs. The empirical material in this paper also shows that both political and policy learning from evaluations increase the more technically complex a policy issue is.
Findings from this research have theoretical implications for public policy analysis, as they underline the importance of policy learning from evaluations by elected officials. Members of parliament report learning from evaluations in a policy-oriented fashion, especially if the issues are technically complex and salient. Political learning from evaluations seems less important in this configuration, although it is not absent, since policy learning and political learning from evaluations are correlated with each other. This result implies that the problem of selective learning and cherry-picking of evaluation findings according to predetermined policy positions is less likely, especially against the background of complex problems requiring urgent attention. This is good news for the capacity of democracies to solve, or at least tame, complex policy problems such as climate change, because elected officials overall prioritize "problems over politics" when it comes to the usage of policy evidence.
The results of this research need to be interpreted with the scope conditions of this case study in mind. Switzerland has a multiparty government that comprises all the important parties represented in parliament. Decision-makers therefore need to find policy positions that allow for consensus, especially on important problems. A next step is thus to account better for issue polarization and its link to policy learning from evaluations: political learning from evaluations may become more likely if the policy problem is not only salient but also highly polarized. Furthermore, the data are based on self-reported information use and learning by members of parliament, which might overestimate the importance of policy learning from evaluations. Future research needs to develop ways to control for this potential limitation.
Furthermore, readers should keep in mind that this article started from the assumption that evaluation use entails political or policy learning. This assumption is plausible and justifiable based on the micro-foundations of learning as a theory of the policy process, which conceptualizes the individual as a homo discentis (Dunlop & Radaelli, 2018). From this perspective, this paper contributes to a better understanding of the micro-foundations of the policy learning process (Dunlop & Radaelli, 2017). Nevertheless, this starting point implies that future research should pay more attention to understanding how individuals learn from policy evaluations in the policy process.
In a survey on performance reporting in the context of new public management, Brun and Siegel (2006) achieved a response rate of 21.3%. Focusing only on the federal level, Bütikofer (2014) even achieved response rates of 65% in the lower house and 70% in the upper house. Freitag et al. (2019) conducted a survey amongst local office holders, in which almost 50% of the decision-makers responded.
Table 6 in the appendix presents the different policy domains used in both the parliamentary and the expert surveys.
However, even though we conducted the expert survey carefully, the experts' assessments of complexity and salience might still differ from the parliamentarians' perceptions. Maestas (2016) provides a useful discussion of the validity of expert surveys.
The following parties are considered center parties: Liberals, Christian Democratic People's Party, Green Liberals, Conservative Democratic Party, Evangelical People's Party and Christian Social Party.
The Parliamentary Executive Committee is responsible for the organization and procedures of the Parliament and thus has a guiding function.
An evaluation clause is a legal instrument that obliges an authority to carry out evaluations and to report their results. Bussmann (2005, pp. 97–99) distinguishes between four different types of evaluation clauses: general, institutionally focused, and subject-area-focused evaluation clauses, as well as evaluation clauses for para-state institutions.
In addition, we estimated fixed-effects specifications for Models 9 and 10 (see Table 9 in the appendix). They show that the results are mostly robust, with the exception of the salience variable, which becomes statistically significant at the 10 percent level in the fixed-effects models.
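A fixed-effects check of this kind can be sketched by replacing group-level random intercepts with parliament dummies. The data, variable names, and coefficients below are synthetic assumptions for illustration, not the models reported in Table 9.

```python
# Sketch of a fixed-effects robustness check: parliament dummies
# absorb parliament-level heterogeneity. Synthetic data throughout.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5200
parliament = rng.integers(0, 26, n)
parl_effect = rng.normal(0, 0.4, 26)[parliament]  # unobserved parliament traits
salience = rng.integers(0, 2, n)
complexity = rng.integers(0, 2, n)
logit = -0.5 + 0.4 * complexity + 0.8 * salience * complexity + parl_effect
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({"y": y, "salience": salience,
                   "complexity": complexity, "parliament": parliament})

# C(parliament) adds one dummy per parliament (fixed effects)
fe = smf.logit("y ~ salience * complexity + C(parliament)", df).fit(disp=0)
print(round(fe.params["salience:complexity"], 2))  # interaction survives the FE specification
```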
Alkin, M. C., & King, J. A. (2017). Definitions of evaluation use and misuse, evaluation influence, and factors affecting use. American Journal of Evaluation, 38(3), 434–450.
Alkin, M. C., & King, J. A. (2016). The historical development of evaluation use. American Journal of Evaluation, 37(4), 568–579.
Alkin, M. C., & Taut, S. M. (2003). Unbundling evaluation use. Studies in Educational Evaluation, 29(1), 1–12.
Amara, N., Ouimet, M., & Landry, R. (2004). New evidence on instrumental, conceptual, and symbolic utilization of university research in government agencies. Science Communication, 26(1), 75–106.
Ansell, C. (2011). Pragmatist governance: Re-imagining institutions and democracy. Oxford University Press.
Ammons, D. N., & Rivenbark, W. C. (2008). Factors influencing the use of performance data to improve municipal services: Evidence from the North Carolina benchmarking project. Public Administration Review, 68(2), 304–318.
Bandelow, N. C. (2008). Government learning in German and British European policies. JCMS. Journal of Common Market Studies, 46(4), 743–764.
Baumgartner, F. R., & Jones, B. D. (2010). Agendas and instability in American politics (2nd ed.). University of Chicago Press.
Bennett, C. J., & Howlett, M. (1992). The lessons of learning: reconciling theories of policy learning and policy change. Policy Sciences, 25(3), 275–294. https://doi.org/10.1007/BF00138786
Biesenbender, S., & Tosun, J. (2014). Domestic politics and the diffusion of international policy innovations: How does accommodation happen? Global Environmental Change, 29, 424–433.
Boswell, C. (2009). The political uses of expert knowledge: Immigration policy and social research. Cambridge University Press.
Boyer, J. F., & Langbein, L. I. (1991). Factors influencing the use of health evaluation research in Congress. Evaluation Review, 15(5), 507–532.
Braun, D., & Gilardi, F. (2006). Taking ‘Galton’s problem’ seriously: Towards a theory of policy diffusion. Journal of Theoretical Politics, 18(3), 298–322. https://doi.org/10.1177/0951629806064351
Bromley-Trujillo, R., & Karch, A. (2019). Salience, scientific uncertainty, and the agenda-setting power of science. Policy Studies Journal. Advance online publication. https://doi.org/10.1111/psj.12373
Brun, M. E., & Siegel, J. P. (2006). What does appropriate performance reporting for political decision makers require? Empirical evidence from Switzerland. International Journal of Productivity and Performance Management, 55(6), 480–497.
Bütikofer, S. (2014). Das Schweizer Parlament: Eine Institution auf dem Pfad der Moderne. Nomos.
Bundi, P. (2016). What do we know about the demand for evaluation? Insights from the parliamentary arena. American Journal of Evaluation, 37(4), 522–541.
Bundi, P. (2018). Varieties of accountability: How attributes of policy fields shape parliamentary oversight. Governance, 31(1), 163–183. https://doi.org/10.1111/gove.12282
Bundi, P., Eberli, D., Frey, K., & Widmer, T. (2014). Befragung Parlamente und Evaluationen: Methodenbericht. Universität Zürich.
Bundi, P., Eberli, D., & Bütikofer, S. (2017). Between occupation and politics: Legislative professionalization in the Swiss Cantons. Swiss Political Science Review, 23(1), 1–20.
Bundi, P., Varone, F., Gava, R., & Widmer, T. (2018). Self-selection and misreporting in legislative surveys. Political Science Research and Methods, 6(4), 771–789.
Budge, I., & Laver, M. (1986). Office seeking and policy pursuit in coalition theory. Legislative Studies Quarterly, 11(4), 485–506.
Bussmann, W. (2005). Typen und Terminologie von Evaluationsklauseln. LeGes—Gesetzgebung Und Evaluation, 16(1), 97–102.
Cairney, P. (2016). The politics of evidence-based policy making. Palgrave Macmillan.
Cairney, P., & Oliver, K. (2017). Evidence-based policymaking is not like evidence-based medicine, so how far should you go to bridge the divide between evidence and policy? Health Research Policy and Systems, 15(1), 35. https://doi.org/10.1186/s12961-017-0192-x
Culpepper, P. D. (2010). Quiet politics and business power: Corporate control in Europe and Japan. Cambridge University Press.
Dann, P. (2003). European parliament and executive federalism: Approaching a parliament in a semi-parliamentary democracy. European Law Journal, 9(5), 549–574.
Daviter, F. (2015). The political use of knowledge in the policy process. Policy Sciences, 48(4), 491–505.
Deutsch, K. W. (1966). The nerves of government: Models of political communication and control. The Free Press.
Demaj, L., & Summermatter, L. (2012). What should we know about politicians’ performance information need and use? International Public Management Review, 13(2), 85–111.
Dunlop, C. A., & Radaelli, C. M. (2018). Does policy learning meet the standards of an analytical framework of the policy process? Policy Studies Journal, 46(S1), S48-68. https://doi.org/10.1111/psj.12250
Dunlop, C. A., Radaelli, C. M., & Trein, P. (Eds.). (2018). Learning in public policy: Analysis, modes and outcomes. Palgrave Macmillan.
Eberli, D. (2018). Tracing the use of evaluations in legislative processes in Swiss cantonal parliaments. Evaluation and Program Planning, 69(3), 139–147.
Eberli, D. (2019). Die Nutzung von Evaluationen in den Schweizer Parlamenten. Seismo.
Eberli, D., Bundi, P., Frey, K., & Widmer, T. (2014). Befragung Parlamente und Evaluationen: Ergebnisbericht. Universität Zürich.
Eshbaugh-Soha, M. (2006). The conditioning effects of policy salience and complexity on American political institutions. Policy Studies Journal, 34(2), 223–243.
Evans, M. (2018). Policy-seeking and office-seeking: Categorizing parties based on coalition payoff allocation. Politics and Policy, 46(1), 4–31.
Feindt, P. H., Schwindenhammer, S., & Tosun, J. (2021). Politicization, depoliticization and policy change: A comparative theoretical perspective on agri-food policy. Journal of Comparative Policy Analysis: Research and Practice, 23(5–6), 509–525.
Fischer, M., & Sciarini, P. (2016). Drivers of collaboration in political decision making: A cross-sector perspective. The Journal of Politics, 78(1), 63–74.
Freitag, M., Bundi, P., & Flick Witzig, M. (2019). Milizarbeit in der Schweiz. NZZ Libro.
Frey, K. (2012). Evidenzbasierte Politikformulierung in der Schweiz: Gesetzesrevisionen im Vergleich. Nomos.
Gilardi, F. (2010). Who learns from what in policy diffusion processes? American Journal of Political Science, 54(3), 650–666. https://doi.org/10.1111/j.1540-5907.2010.00452.x
Gilardi, F., Füglister, K., & Luyet, S. (2009). Learning from others: the diffusion of hospital financing reforms in OECD countries. Comparative Political Studies, 42(4), 549–573. https://doi.org/10.1177/0010414008327428
Gormley, W. T. (1983). Policy, politics, and public utility regulation. American Journal of Political Science, 27(1), 86–105. https://doi.org/10.2307/2111054
Greenhalgh, T., & Russell, J. (2009). Evidence-based policymaking: A critique. Perspectives in Biology and Medicine, 52(2), 304–318. https://doi.org/10.1353/pbm.0.0085
Hall, P. A. (1993). Policy paradigms, social learning, and the state: The case of economic policymaking in Britain. Comparative Politics, 25(3), 275–296. https://doi.org/10.2307/422246
Head, B. W. (2016). Toward more “evidence-informed” policy making? Public Administration Review, 76(3), 472–484.
Heclo, H. (1974). Modern social policy in Britain and Sweden: From relief to income maintenance. Yale University Press.
Heikkila, T., & Gerlak, A. K. (2013). Building a conceptual approach to collective learning: Lessons for public policy scholars. Policy Studies Journal, 41(3), 484–512.
Heikkila, T., Weible, C. M., & Gerlak, A. K. (n.d.). When does science persuade (or not persuade) in high conflict policy contexts? Public Administration. Advance online publication. https://doi.org/10.1111/padm.12655
Henry, G. T., & Mark, M. M. (2003). Beyond use: understanding evaluation’s influence on attitudes and actions. American Journal of Evaluation, 24(3), 293–314.
Hertel-Fernandez, A. (2018). Policy feedback as political weapon: Conservative advocacy and the demobilization of the public sector labor movement. Perspectives on Politics, 16(2), 364–379.
Hertel-Fernandez, A. (2019). State capture: How conservative activists, big businesses, and wealthy donors reshaped the American states--and the nation. Oxford University Press, USA.
Hird, J. A. (2009). The study and use of policy research in state legislatures. International Regional Science Review, 32(4), 523–535.
Hooghe, L., Bakker, R., & Brigevich, A. (2010). Reliability and validity of measuring party positions: The Chapel Hill expert surveys of 2002 and 2006. European Journal of Political Research, 49(5), 687–703.
Ingold, K., & Gschwend, M. (2014). Science in policy-making: Neutral experts or strategic policy-makers? West European Politics, 37(5), 993–1018.
Jacob, S., Speer, S., & Furubo, J.-E. (2015). The institutionalization of evaluation matters: Updating the international atlas of evaluation 10 years later. Evaluation, 21(1), 6–31.
Jenkins-Smith, H. C., Nohrstedt, D., Weible, C. M., & Ingold, K. (2018). The advocacy coalition framework: An overview of the research program. In C. M. Weible & P. A. Sabatier (Eds.), Theories of the policy process (4th ed.). Westview Press.
Jennings, E. T., Jr., & Hall, J. L. (2012). Evidence-based practice and the use of information in state agency decision making. Journal of Public Administration Research and Theory, 22(2), 245–266.
Johnson, R. B. (1998). Toward a theoretical model of evaluation utilization. Evaluation and Program Planning, 21(1), 93–110.
Johnson, K., Greenseid, L. O., Toal, S. A., King, J. A., Lawrenz, F., & Volkov, B. (2009). Research on evaluation use: A review of the empirical literature from 1986 to 2005. American Journal of Evaluation, 30(3), 377–410.
Jones, B. D. & Baumgartner, F. R. (2005). The politics of attention: How government prioritizes problems. University of Chicago Press.
Kamkhaji, J. C., & Radaelli, C. M. (2017). Crisis, learning and policy change in the European Union. Journal of European Public Policy, 24(5), 714–734.
Knorr, K. D. (1977). Producing and reproducing knowledge: Descriptive or constructive? Toward a model of research production. Social Science Information, 16(6), 669–696.
Lieberherr, E., & Thomann, E. (2020). Linking throughput and output legitimacy in Swiss forest policy implementation. Policy Sciences.
Lijphart, A. (2012). Patterns of democracy: Government forms and performance in thirty-six countries (2nd ed.). Yale University Press.
Lowi, T. J. (1972). Four systems of policy, politics, and choice. Public administration review, 32(4), 298–310.
Maestas, C. (2016). Expert surveys as a measurement tool. In L. Rae Atkeson & R. M. Alvarez (Eds.), The oxford handbook of polling and survey methods. Oxford University Press.
May, P. J. (1992). Policy learning and failure. Journal of Public Policy, 12(4), 331–354. https://doi.org/10.1017/S0143814X00005602
Newman, J., Cherney, A., & Head, B. W. (2016). Do policy makers use academic research? Reexamining the “two communities” theory of research utilization. Public Administration Review, 76(1), 24–32.
Nowlin, M. C. (2021). Policy learning and information processing. Policy Studies Journal, 49(4), 1019–1039.
Nutley, S. M., Walter, I., & Davies, H. T. (2007). Using evidence: How research can inform public services. Policy Press.
Pattyn, V. (2014). Why organizations (do not) evaluate? Explaining evaluation activity through the lens of configurational comparative methods. Evaluation, 20(3), 348–367.
Pattyn, V., van Voorst, S., Mastenbroek, E., & Dunlop, C. (2018). Policy evaluation in Europe. In E. Ongaro & S. Van Thiel (Eds.), The Palgrave handbook of public administration and management in Europe (pp. 577–593). Palgrave Macmillan.
Perl, A., Howlett, M., & Ramesh, M. (2018). Policy-making and truthiness: Can existing policy models cope with politicized evidence and willful ignorance in a “post-fact” world? Policy Sciences, 51(4), 581–600.
Pierson, P. (1993). When effect becomes cause: Policy feedback and political change. World politics, 45(4), 595–628.
Pierson, P. (2000). Increasing returns, path dependence, and the study of politics. American Political Science Review, 94(2), 251–267. https://doi.org/10.2307/2586011
Rogers, W. (1993). Quantile regression standard errors. Stata Technical Bulletin, 2(9), 1–28.
Sager, F., & Andereggen, C. (2012). Dealing with complex causality in realist synthesis: The promise of qualitative comparative analysis. American Journal of Evaluation, 33(1), 60–78.
Scharpf, F. W. (2003). Problem-solving effectiveness and democratic accountability in the EU (No. 03/1). MPIfG working paper.
Schlaufer, C., Stucki, I., & Sager, F. (2018). The political use of evidence and its contribution to democratic discourse. Public Administration Review, 78(4), 645–649.
Shulock, N. (1998). Legislatures: Rational systems or rational myths? Journal of Public Administration Research and Theory, 8(3), 299–324.
Simon, H. (1947). Administrative behavior. A study of decision-making processes in administrative organizations. Macmillan.
Steenbergen, M., & Jones, B. (2002). Modeling multilevel data structures. American Journal of Political Science, 46(1), 218–237.
Steinmo, S., Thelen, K., & Longstreth, F. (Eds.). (1992). Structuring politics: Historical institutionalism in comparative analysis. Cambridge University Press.
Stephenson, P. J., Schoenefeld, J. J., & Leeuw, F. L. (2019). The politicisation of evaluation: Constructing and contesting EU policy performance. Politische Vierteljahresschrift, 60(4), 663–679.
Strøm, K. (1990). A behavioral theory of competitive political parties. American Journal of Political Science, 34(2), 565–598.
Tabuga, A. D. (2017). Knowledge utilization in policymaking: Evidence from congressional debates in the Philippines. Journal of Asian Public Policy, 10(3), 302–317.
Trein, P. (2018). Median problem pressure and policy learning: An exploratory analysis of European countries. In C. Dunlop, C. Radaelli, & P. Trein (Eds.), Learning in public policy: Analysis, modes and outcomes. Palgrave Macmillan.
Trein, P., & Maggetti, M. (2020). Patterns of policy integration and administrative coordination reforms: A comparative empirical analysis. Public Administration Review, 80(2), 198–208. https://doi.org/10.1111/puar.13117
Trein, P., & Vagionaki, T. (2022). Learning Heuristics, Issue Salience, and Polarization in the Policy Process. West European Politics, 45(4), 906–929.
Vagionaki, T. (2020). Policy learning in a world of neglect: The open method of co-ordination and obstacles to policy learning in Greece. University of Lausanne.
Vagionaki, T., & Trein, P. (2020). Learning in political analysis. Political Studies Review, 18(2), 304–319.
van der Eijk, C., van der Brug, W., Kroh, M., & Franklin, M. (2006). Rethinking the dependent variable in voting behavior: On the measurement and analysis of electoral utilities. Electoral Studies, 25(3), 424–447.
Van der Knaap, P. (2000). Performance management and policy evaluation in the Netherlands: Towards an integrated approach. Evaluation, 6(3), 335–350.
Weiss, C. H. (1999). The interface between evaluation and public policy. Evaluation, 5(4), 468–486.
Weiss, C. H. (1989). Congressional committees as users of analysis. Journal of Policy Analysis and Management, 8(3), 411–431.
Weiss, C. H. (1979). The many meanings of research utilization. Public Administration Review, 39(5), 426–431. https://doi.org/10.2307/3109916
Weiss, C. H., Murphy-Graham, E., & Birkeland, S. (2005). An Alternate route to policy influence: How evaluations affect D.A.R.E. American Journal of Evaluation, 26(1), 12–30. https://doi.org/10.1177/1098214004273337
Whiteman, D. (1985). The fate of policy analysis in congressional decision making: Three types of use in committees. Western Political Quarterly, 38(2), 294–311.
Zito, A. R., & Schout, A. (2009). Learning theory reconsidered: EU integration theories and learning. Journal of European Public Policy, 16(8), 1103–1123. https://doi.org/10.1080/13501760903332597
Zwaan, P., van Voorst, S., & Mastenbroek, E. (2016). Ex post legislative evaluation in the European Union: Questioning the usage of evaluations as instruments for accountability. International Review of Administrative Sciences, 82(4), 674–693.
Open access funding provided by University of Lausanne.
Bundi, P., Trein, P. Evaluation use and learning in public policy. Policy Sci 55, 283–309 (2022). https://doi.org/10.1007/s11077-022-09462-6
Keywords: Policy learning, Political learning, Policy issues