Debates over public policy often boil down to numbers. How much should tax rates be raised or lowered? How high should the minimum wage be? How long should the unemployed be eligible for benefits? How many years should felons serve for a given crime? How much should the federal government forgive in student loans? The numbers that inhabit policy debates are not simply details—they have important consequences for people’s lives. Since numbers define many policies, it is essential to understand whether, and how, these numbers shape public support for new policies and for changes to existing policy. We focus on the mere presence of a number, an anchor, as a key agent of policy preferences in the mass public.

Public opinion plays a meaningful role in defining the parameters of what policy changes are politically feasible. Although public opinion can be exogenous to policy (Erikson et al., 2002; Kingdon, 1984), various factors shape the public’s policy preferences, including how an issue is framed (e.g., Chong & Druckman, 2007; Druckman, 2001), the social group images of policy beneficiaries (Converse, 1964; Nelson & Kinder, 1996; Nicholson, 2011), and elite cues (e.g., Bowler & Donovan, 1998; Lenz, 2012; Zaller, 1992). Since the numbers that inhabit the policy process are likely to matter as well, we study whether, and how, the numeric anchors in policy debates shape the policy preferences of the mass public across a wide variety of policies ranging from animal testing to term limits. Our findings help illuminate why some policy changes are feasible and others are not.

The actors in policy debates, both elites and informed segments of the mass public, no doubt possess strongly held policy preferences. Yet, as with many types of decisions, cognitive biases can affect policy preferences apart from a person’s politics. Liberals want higher taxes and conservatives want lower taxes, but the debate often centers on a reference point that shapes both laypeople’s (Quattrone & Tversky, 1988) and elected officials’ preferences over policy outcomes (Sheffer et al., 2018; also see Wildavsky, 1964). Offering insight into the psychological power of the status quo, Tversky and Kahneman (1974) show that initial numeric values provide an anchoring heuristic, a starting point from which people adjust their beliefs about the true value of some quantity. The “Many Labs” project successfully replicated several of the classic anchoring studies suggesting that the effect is robust (Klein et al., 2014).

We are not of the mind that anchoring is important for all issues all the time, nor do we consider it a grand rhetorical strategy. Rather, we find it to be an underappreciated phenomenon in politics since it can help explain why the status quo can be such a powerful influence. A recent example we did not examine is abortion policy. In the United States, under Roe v. Wade, women had about 24 weeks to receive an abortion “on demand.” Mississippi proposed to move this to 15 weeks. In France, on the other hand, women have until 14 weeks to get an abortion “on demand.” In some US states, moving from 24 to 15 weeks is a “loss in freedom,” or a conservative change, while in France moving to 15 weeks would be an increase in freedom, or a movement toward the left. The reference point, or anchor, influences how people in these contexts perceive what a policy change or proposal means: 15 weeks would be the “pro-life” direction in the US and the “pro-choice” direction in France. So, while we believe that public opinion on abortion, at least in the US case, has traditionally been more firmly rooted in values or identities, there are important choices today where the anchor (status quo) plays a crucial role in shaping policy preferences.

Although psychological research offers insights into how the anchoring heuristic privileges the status quo, the influence of anchors potentially extends beyond the status quo, namely the importance of setting the agenda with an initial proposal. Although there are competing theories of the agenda-setting process, a common thread in these theories is that elites define solutions to problems that must win public approval or acquiescence before the proposal becomes policy (Baumgartner & Jones, 1993; Kingdon, 1984). At this stage of the process, elites present policy alternatives, some of which are likely to feature a numeric value that can anchor public preferences. An example we explore in one of our experiments is a three-day waiting period to purchase a gun.

Our study also helps shed light on the degree to which public preferences are shaped by random events (Achen & Bartels, 2016). Much of the research on anchoring illustrates that even meaningless or irrelevant numbers can have a significant effect on numerical estimates and probability judgments. If arbitrary anchors can mislead people in consumer choices (Ariely et al., 2003), the consequences of anchoring in politics could be widespread, producing suboptimal decisions (cf. Tversky & Kahneman, 1974) that underscore long-standing doubts about citizen competence (Kuklinski et al., 2001). Indeed, an emerging body of research shows that “irrelevant” events can influence political behavior. In this project, we consider whether the effect of irrelevant numbers extends to policy preferences.

To evaluate the question of anchoring effects in political settings, we designed a series of survey experiments looking at a variety of public policies. The first set of experiments—based on canonical anchoring experiments—treated participants with irrelevant anchors by randomly assigning them high or low numbers and then asking them to tell us their numerical policy preferences (e.g., how many months a person should be eligible to receive unemployment insurance). In a second set of experiments, we assigned participants a high or low anchor relevant to an existing policy (the status quo) or policy debate (a proposal) before asking about their policy preferences.

We find that the public is susceptible to anchoring, but only if anchors are relevant to an existing policy area or debate. In contrast to studies of anchoring in other disciplines, we found no effect for meaningless or arbitrary anchors. Yet we did find that public opinion can be influenced by politically relevant anchors, both with regard to the status quo and with regard to attempts to set the agenda with a policy proposal. We also found that the effects of policy-relevant anchors were largely unaltered when pitted against party cues.

Taken together, our results suggest that people are susceptible to anchoring effects when numbers matter—using politically relevant numbers to inform policy preferences—and that people are more likely to rely on policy-relevant information rather than a partisan heuristic. We believe these results offer insight into the psychological foundations for why the status quo and agenda setting can have such a powerful influence on public opinion. Do our results offer evidence of a competent public? We think they might, at least if an anchor represents the status quo. For instance, it may be prudent to consider the status quo since alternatives far away from current policy are often politically unfeasible. However, although we are fairly confident concluding that the public acts competently when disregarding politically meaningless numbers, we are not as confident concluding that an anchor rooted in the status quo is always a desirable outcome. Moreover, when the anchor is an attempt to set the policy agenda with an opening gambit, one may reasonably worry that anchoring effects could be normatively undesirable since they provide elites with greater opportunity to manipulate public opinion.

The Political Relevance of Anchoring

The anchoring heuristic occurs when an initial value or number, often random or uninformative, biases responses toward the initial value or number. Tversky and Kahneman (1974) provide the earliest demonstration of the anchoring heuristic, a cognitive bias they labeled “anchoring and adjustment” because the initial value fixes the starting point and the answer given is an adjustment made from that starting point. In their study, participants were shown a high or low number from a roulette wheel and then were asked to estimate the percentage of African countries that belonged to the United Nations. On average, participants treated with low numbers offered lower percentages than participants treated with high numbers.

Scholars have found the anchoring heuristic to affect a wide range of judgments and decisions including, among others, responses to factual questions, probability estimates, forecasts, negotiations, and purchasing decisions (e.g., Ariely et al., 2003; Furnham & Boo, 2011; Klein et al., 2014). Recently, Pedersen and Mutz (2019) showed that preferences for an ideal level of economic equality are anchored by the perceived level of current economic inequality, a finding that helps explain why previous studies had found roughly similar preferences for ideal income inequality across the political spectrum. To revisit tax policy, liberals may prefer higher tax rates than conservatives, but what counts as a high or low rate is largely arbitrary, set by an initial value (the status quo or the opening gambit of an agenda setter).

Although a variety of ingredients shape policy preferences, the status quo as well as an initial value in agenda setting are central to understanding how the public thinks about whether it wants more or less of a policy (Kingdon, 1984; Page & Shapiro, 1983; Wlezien, 1995). A common thread running through these studies is that policy preferences are often biased towards the status quo, an empirical regularity at the macro-level consistent with a micro-level focus on anchoring. For instance, an assumption behind Wlezien’s (1995) thermostatic model is that the public adjusts its preferences for spending, more or less, in relative opposition to a shift in the status quo. In the absence of a status quo, anchoring may also help illuminate the cognitive foundations that facilitate agenda setting. Policy entrepreneurs seek to define the policy agenda with an initial value, allowing them to offer their preferred solution (or number) as the starting point for political debate (Kingdon, 1984). From a psychological standpoint, that starting point may act as an anchor to which subsequent political actors must respond. In sum, anchoring may help illuminate the cognitive foundations that undergird the pull of the status quo in understanding policy attitudes.

Relevant and Irrelevant Anchors

Previous research typically establishes an anchoring effect by randomly assigning participants in an experiment to encounter a low or high number before answering a question that requires a numeric response. The conventional interpretation of these studies is that they show how people incorporate “irrelevant” information into their subsequent judgments (Ariely et al., 2003; Chapman & Johnson, 2002; Kahneman, 2011). While we appreciate this interpretation, experimental designs in previous research do not always line up cleanly with it. In some studies, experimental participants are confronted with a number that is clearly irrelevant to the decision task, such as when they are exposed to a randomly generated anchor and are not explicitly asked to refer to it in subsequent decisions (Ioannidis et al., 2020). In studies like these, participants have no good (or rational) reason to consider the anchor when answering the questions that follow.

Yet there are also instances in previous studies where the experimenter directly asks participants to consider the anchor in the decision task. In the canonical anchoring experiment, Tversky and Kahneman (1974) asked participants to consider whether the number of African countries in the United Nations was higher or lower than the randomly assigned anchor they had just seen before asking them to venture a guess of the actual number (see Chapman and Johnson, 1999 for a similar experiment in the domain of politics). From a rational choice perspective, these anchors should not influence people’s numerical judgments. At the same time, it is not clear that we should categorize them as wholly irrelevant, either. In these instances, the anchor does “bias” responses, but perhaps because the researcher led subjects to believe that the number was, at the very least, a meaningful starting point.

While the distinction between relevant and irrelevant anchors may be largely academic from the standpoint of the initial goal of anchoring experiments in psychology and behavioral economics—which was to investigate departures from strict rationality—we believe the distinction is important for studying political preferences. The status quo is a relevant and meaningful starting point, even if only to communicate how much change is feasible. Agenda setting anchors also offer a relevant and meaningful starting point, even if they emerge from the haphazard process that generates policy windows. In both instances, the anchor is relevant to the policy itself.

Whether the public responds to relevant or meaningless anchors helps address long-standing questions about citizen competence. An emerging body of research in political science suggests that electorates respond to “irrelevant events,” punishing elected officials for events beyond their control such as floods, droughts, and shark attacks (Achen & Bartels, 2016; also see Healy et al., 2010). If the public is swayed by arbitrary numbers (e.g., such as those produced by a roulette wheel), it would be another instance where people fail to separate relevant policy details from irrelevant information. Although the absence of an anchoring effect with meaningless numbers does not by itself provide evidence of citizen competence, it would suggest that the public at least overlooks some types of meaningless information.

Hypotheses

We offer two hypotheses involving irrelevant and relevant anchors. The irrelevant anchor hypothesis holds that responses will be biased toward arbitrary numbers and the relevant anchor hypothesis holds that responses will only be biased toward policy-relevant numbers—the status quo or an initial proposal in a policy debate. From the heuristics-as-biases perspective (e.g., Kahneman, 2011), both irrelevant and relevant anchors should influence decision making, because people latch on to whatever starting value is available. In contrast, the fast and frugal perspective implies that people should only rely on a relevant anchor as a heuristic because people follow a “take the best, ignore the rest” satisficing algorithm (see Gigerenzer & Goldstein, 1996, p. 653). Since elites mostly structure their preferences in relation to the status quo (Wildavsky, 1964), it is plausible that the status quo represents a highly informative heuristic in the policy process. The status quo of an existing policy or the initial proposal in a policy debate thus offers a politically feasible starting point (if not the most optimal) from which to adjust.

Anchoring Heuristic vs. Partisan Heuristic

Because politics does not happen in a vacuum, it would be a mistake to study the political implications of numeric anchors without considering competing heuristics. Since our studies take place in the context of American politics, we consider whether anchoring heuristics retain power in the face of partisan information. People’s attachments to the major political parties in the United States fundamentally structure policy preferences (Campbell et al., 1960) and the public has a strong grasp of how the parties’ issue priorities differ (Goggin et al., 2020; Heit & Nicholson, 2016). Although some studies have found party cues to have a substantial effect on opinion, overwhelming other types of considerations (Cohen, 2003; Rahn, 1993), other studies have found that policy preferences diminish the pull of partisan cues (e.g., Arceneaux et al., 2017; Boudreau & MacKenzie, 2014; Bullock, 2011; Nicholson, 2011). Given the divergent findings about whether partisans choose party over policy, we offer two rival hypotheses. The party over policy hypothesis holds that participants will disregard policy when provided party cues and the policy over party hypothesis holds that participants will disregard party cues when provided policy-relevant anchors.

Overview of Studies

To test our hypotheses, we conducted online survey experiments in 2012, 2015, and 2018. The 2012 and 2018 experiments were included as part of the Cooperative Congressional Election Study (CCES) and the 2015 experiments were conducted using Qualtrics. The CCES provides a nationally descriptive internet sample consisting of a thousand interviews conducted before and after the election (see Ansolabehere & Rivers, 2013). In March 2015, we recruited 842 participants from an opt-in Internet panel through Qualtrics. Using quota sampling methods, we ensured that the descriptive characteristics of our sample approximated the national population (see SI Sect. 1 for question wording, Sect. 2 for descriptive statistics, and Sect. 3 for balance tables).

Rather than present data from each study separately, we organize the presentation thematically, pulling together analyses from separate years when the analyses involved follow-up studies to address potential confounds. We designed the experiments to test the anchoring heuristic for both irrelevant and relevant numbers, as well as circumstances in which anchors are pitted against party cues. In the first set of studies examining meaningless anchors, we randomly assigned participants to arbitrary numbers to see whether meaningless anchors influenced policy preferences. In the second set of studies examining policy relevant anchors, we randomly assigned participants to conditions that featured high or low anchors based on real-world policy numbers. In addition, the final study features policy relevant anchors without a policy status quo. Here we look at policy proposals to rule out the possibility that it is information in the status quo, not anchoring, that is responsible for the effects we observe. To examine the robustness of the anchoring effect, we also examined whether party cues have a moderating effect on the anchoring heuristic among party identifiers.

Irrelevant Anchor Studies

In each of the three surveys, we designed and conducted survey experiments to examine whether meaningless anchors influence policy preferences. As mentioned, much of the research on anchoring in psychology and behavioral economics demonstrates that irrelevant anchors affect responses. If such an effect holds in politics, it would suggest that elites have a great deal of influence and latitude in shaping policy preferences. In the 2012 survey, we looked at whether anchoring affected preferences on prison sentencing, unemployment benefits, term limits, and immigration (legal and illegal).

We chose these policy areas since they prominently feature numerical choices and are likely to be familiar and understandable to most people. However, since these policies exclusively concern time preferences, in a 2015 follow-up study we looked at policies regarding money, including the minimum wage and taxes on gas and cigarettes. Apart from immigration, none of the 2012 policies had achieved national prominence. By the time of the 2015 study, the minimum wage had gained national attention when President Obama advocated for an increase in his State of the Union address and later when it became a salient issue in the Democratic Party presidential primaries, especially for the candidacy of Senator Bernie Sanders.

Following Tversky and Kahneman (1974), participants were randomly assigned to a high or low anchor. Although participants saw the number being randomly chosen from a spinning wheel of fortune, Tversky and Kahneman had rigged the wheel so that it would land on one of two numbers, 10 or 65. After seeing the anchor, participants were asked if they thought the number of African countries in the United Nations was above or below that number, and then instructed to write down the number. They found that participants given the high anchor (65) were more likely to provide higher estimates than participants given the low anchor (10).

Unlike Tversky and Kahneman, we did not ask participants to consider the number in their subsequent response. Rather, we followed experimental designs in subsequent anchoring research (e.g., Ariely et al., 2003) and presented the number in a way that ensured it would be prominent while not creating experimental demand, in which participants correctly guess the intent of the experiment and give responses consistent with that intent. To sidestep this possibility, we took advantage of the web-based platform by asking, “To help confirm the accuracy of responses, please enter the number 10/65 in the box below.” On the following screen, we asked, “You entered 10/65, is that correct?” By having participants input the number and then confirm it, we can be confident that the treatment was received but presented in a way that appeared unrelated to the policy question that followed.

After random assignment to a number (10 or 65), participants were randomly assigned to one of five policy questions. As shown in Table 1, we offer a variety of policies that present a diverse range of opportunities for the anchors to affect responses. For example, the lower and upper boundaries for jail sentencing (see Footnote 1) are likely to differ a fair amount from those for how long a legislator should be allowed to serve in elected office. For each item, participants were given a box to input a number (responses were unconstrained, so some numbers were relatively high). We also included two immigration conditions, one featuring “legal” and the other featuring “illegal” immigrants, in order to examine the possibility that anchors might be less potent when identifying “illegal” immigrants, a term that can invoke greater opposition to liberal policy (Lapinski et al., 1997).

Table 1 Policy areas and conditions for study 1

Figure 1 presents the notched boxplots for policy preference by anchoring condition for each policy (see SI Table A4.1 for full results; see also Footnote 2). The horizontal line at the center of the notch of each box plot represents the median and the bottom and top of the notches represent the span of the 90% confidence interval. Following Tversky and Kahneman (1974), we report medians because they obviate the effect of extreme values. We show boxplots overlaid onto the data—as opposed to summarizing the data into a single point—for the sake of transparency. Finally, we chose to report notched boxplots because the notch approximates a 90% confidence interval and provides a visual way to estimate statistical significance. If the notches of two medians do not overlap, they are statistically significantly different at the 0.05 level (Chambers et al., 2018).
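The notch rule just described can be made concrete. A conventional formula (associated with Chambers et al. and used by common plotting libraries) sets the notch at median ± 1.57 × IQR / √n. The Python sketch below is ours and is offered only as an illustration of this rule, not as the exact routine used to produce the figures:

```python
import math
import statistics

def notch_interval(data):
    """Approximate confidence interval for the median:
    median +/- 1.57 * IQR / sqrt(n), the conventional boxplot notch rule."""
    n = len(data)
    q1, _, q3 = statistics.quantiles(data, n=4)  # quartiles
    half_width = 1.57 * (q3 - q1) / math.sqrt(n)
    med = statistics.median(data)
    return med - half_width, med + half_width

def notches_overlap(a, b):
    """If the two notch intervals do not overlap, the medians differ
    at roughly the 0.05 level."""
    lo_a, hi_a = notch_interval(a)
    lo_b, hi_b = notch_interval(b)
    return not (hi_a < lo_b or hi_b < lo_a)
```

Comparing two experimental conditions then amounts to checking whether `notches_overlap` returns `False`, mirroring the visual test described above.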

Fig. 1 The Effect of Meaningless Anchors on Policy Numbers, Study 1. Note: Data are from the 2012 CCES. The notched boxplots show how the data are distributed across experimental conditions. The solid line at the center of the notch represents the median and the span from the bottom to the top of the notch approximates the 90% confidence interval

Recall that the irrelevant anchor hypothesis holds that responses will be larger for high anchors than for low anchors. Yet Fig. 1 depicts minimal differences between anchor conditions for each policy area. Although there are slight differences between some of the high and low anchors, only the result for illegal immigration is statistically significant. We estimated bootstrapped standard errors for the difference in medians using quantile regression, and the p-values are as follows: prison terms, p = 1; legislative terms, p = 0.51; unemployment, p = 0.19; legal immigration, p = 0.58; illegal immigration, p = 0.04.
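The logic of testing a difference in medians by resampling can be approximated in a few lines. The sketch below is a generic percentile-style bootstrap in Python (the function name and resampling scheme are ours), not the quantile-regression routine used in the paper:

```python
import random
import statistics

def bootstrap_median_diff_pvalue(a, b, n_boot=2000, seed=42):
    """Rough two-sided bootstrap p-value for a difference in medians.
    Resamples each group with replacement and counts how often the
    bootstrap difference lands on the opposite side of zero from the
    observed difference."""
    rng = random.Random(seed)
    observed = statistics.median(a) - statistics.median(b)
    crossings = 0
    for _ in range(n_boot):
        resample_a = [rng.choice(a) for _ in a]
        resample_b = [rng.choice(b) for _ in b]
        diff = statistics.median(resample_a) - statistics.median(resample_b)
        # a resample "crosses" when its sign disagrees with the observed diff
        opposite = (diff <= 0) if observed > 0 else (diff >= 0)
        if opposite:
            crossings += 1
    return min(1.0, 2 * crossings / n_boot)
```

Under this scheme, two conditions whose medians are far apart yield few crossings and hence a small p-value, while near-identical conditions yield p-values close to one.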

A potential limitation of this study is that we focused exclusively on policies involving time. Had we instead focused on a different unit of measurement, we might have found an effect. To that end, we ran an additional experiment in 2015 focusing on policies that involved money: a cigarette tax, a gas tax, and the minimum wage. Table 2 presents the text of each question for Study 2. Using the same procedure from Study 1, participants were randomly assigned to type either 10 or 65 into the screen under the pretext of checking the accuracy of their input. We then randomly assigned participants to answer one question about what they thought the tax should be on cigarettes, gasoline, or the amount of the minimum wage.

Table 2 Policy areas and conditions for study 2

Figure 2 displays the notched boxplot for the policy responses by anchor condition (see SI Table A4.2 for full results). In both the low and high anchor conditions, the median response was the same for cigarette taxes ($2.00) and the minimum wage ($10.00). We see the glimmer of an effect for gas taxes. In the low anchor condition, the median participant reported it should be $0.15 and in the high anchor condition the median participant said it should be $0.25. Yet, the difference here is quite small substantively (d = 0.07) and is statistically insignificant (p = 0.08).

Fig. 2 Meaningless Anchors, Taxes, and the Minimum Wage, Study 2. Note: Data are from the 2015 Qualtrics study. The notched boxplots show how the data are distributed across experimental conditions. The solid line at the center of the notch represents the median and the span from the bottom to the top of the notch represents the 90% confidence interval

Finally, in Study 3 we addressed the possibility that 10 and 65, the anchors we borrowed from Tversky and Kahneman (1974), were not devoid of political meaning. For instance, “full retirement age” for Social Security benefits begins at age 65 as does eligibility for Medicare. Likewise, married couples can claim their spouse’s Social Security benefits after 10 years of marriage and most people need about 10 years of earnings to qualify for Social Security benefits. We could provide more examples, but the point is that these numbers may not necessarily be politically meaningless.

To address the possibility that 10 and 65 are confounded with politically meaningful numbers from other policy areas, we designed a follow-up experiment for the 2018 CCES using the same set of questions from Study 1 (criminal sentencing, legislative term limits, unemployment insurance, and immigration reform) in which we randomly assigned participants to receive a number between 5 and 95 ending in zero or five (5, 10, 15, …, 95) before answering one of the policy questions. Furthermore, unlike the previous experiments, this one included a control group. Rather than inputting a number, control group participants inputted the letter “G.”

Figure 3 presents the scatterplots of the policy responses by assignment to the irrelevant anchor (see SI Table A4.3 for full results). The y-axis ranges from 0 (represented by a G, since the control group entered the letter G) to 95. If the irrelevant anchor hypothesis is correct, the median of policy responses will increase monotonically. However, across each of the policy areas depicted in Fig. 3 we found minimal differences between anchor conditions. Although there are slight differences between some high and low anchors, none are statistically significant (p > 0.20). In short, even when we use a wider range of numbers and a control group, irrelevant anchors do not appear to affect responses. We also experimented with restricting the analysis to the range of policy-relevant options for each issue (for example, between 5 and 20 for the plausible range of possible term limits) and found no evidence that the irrelevant anchors had effects on policy opinions under these conditions (see Table A5.12).

Fig. 3 The Effect of Meaningless Anchors on Policy Numbers, Study 3. Note: Data are from the 2018 CCES. The scatterplots include fit lines from a linear regression model as well as a median regression model (which is less influenced by outliers). Although all of the data were included in the regression models, the figures limit the y-axis to 100 or less for unemployment and undocumented immigrants, because of a handful of large outliers that rendered the scatterplot unreadable

In sum, using a variety of policy areas, anchoring manipulations, and outcome responses (time or money) we found little evidence that irrelevant anchors shaped policy preferences. Next, we examine whether the numbers that define a policy, policy relevant anchors, play a role in shaping preferences.

Policy Relevant Anchor Studies

Our examination of policy relevant anchors begins with an analysis of the same policy areas from before (Studies 1 and 3). Drawing on data from the 2015 survey, in Study 4, we chose anchors that accurately represented status quo policy somewhere in the United States. Table 3 presents the question wording and treatment conditions in italics. With respect to unemployment and immigration, we also used different time units (weeks, months, years) to ensure that the unit of measurement did not matter. Note that we did not change the amount of time in question (e.g., 60 months versus 5 years), only the unit of measurement.

Table 3 Policy areas and study 4 conditions

Figure 4 shows the notched boxplots for policy preferences by anchor condition, which show both the median and 90% confidence intervals (see SI Table A4.4 for tables). Consistent with the relevant anchor hypothesis, the high and low anchors strongly influenced responses (p < 0.001 for all effects, except for immigration in months, p = 0.14). For instance, in the low anchor condition for prison terms for murderers (2 years), the median response is 25 years, while the median response in the high anchor condition is identical to the anchor itself (100 years). For this issue, participants appear to want murderers to go to prison for a very long time—certainly more than 2 years—but there is no need to go beyond the typical human lifespan. Likewise, participants prefer unemployment benefits to last a bit longer than the minimum, but not much more than the maximum. With respect to the waiting period for immigrants, participants seem content with the status quo. In sum, regardless of whether participants received a high or low anchor, the value of the anchor had a substantial effect on policy preferences.

Fig. 4 Relevant Anchors, Study 4. Note: Data are from the 2015 Qualtrics study. The notched boxplots show how the data are distributed across experimental conditions. The solid line at the center of the notch represents the median and the span from the bottom to the top of the notch represents the 90% confidence interval. While all of the data are included in these analyses, the y-axis is restricted to the area where 99% of the data lie so that readers can view the boxplot clearly

Relevant Anchors versus Party Cues

Our next examination of policy-relevant anchors asks whether their effects persist in the face of party cues in the policy areas we used in Study 2: cigarette and gasoline taxes. The anchors are drawn from the variance in tax rates across American states, actual numbers that define the upper and lower bounds. Although many party cue experiments use a political leader such as the president as the source (Nicholson, 2012), our study design requires that we assign generic partisan labels, an approach not all that different from others (Boudreau & MacKenzie, 2014; Bullock, 2011; Nicholson, 2011). If party cues matter, we expect numeric preferences to be closer to the anchor of the in-party state and farther from the out-party state. Although not strictly analogous to studies pitting policy preferences against party cues (Bullock, 2011; Cohen, 2003), the most exacting test involves partisans ignoring ideologically aligned policy anchors to follow party-aligned cues.

Table 4 presents the question wording and experimental conditions placed on the 2012 CCES. The basic design is a 2 (low/high anchor) × 3 (no cue/in-party cue/out-party cue) experiment. In the gas tax experiment, the anchors are $0.08 and $0.49, the taxes on gasoline in Alaska and New York, respectively. In the cigarette tax experiment, the low anchor is $0.17 and the high anchor is $4.35, the rates in Missouri and New York, respectively (see Footnote 3). Although these numbers come from actual states, the experiments do not identify the states by name and merely refer to a state as Democratic or Republican (see Footnote 4). Following standard practice in party cue experiments, we exclude pure independents (i.e., participants who do not identify with or lean toward either party). Consequently, the in-party cue condition consists of partisans who received a cue from the party with which they identify or toward which they lean (e.g., Democrats who saw the Democratic cue), while the out-party cue condition consists of partisans who received a cue from the opposing party (e.g., Democrats who saw the Republican cue).
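The paragraph above describes how partisanship and cue treatment combine into the analysis cells. A minimal coding sketch makes the mapping explicit; the function name and string codes are hypothetical, since the paper does not publish its recoding script.

```python
def cue_condition(party_id, cue):
    """Classify a participant into the cue conditions of the 2 x 3 design.

    Hypothetical coding sketch (not the authors' script):
      party_id -- partisanship collapsed to "D" (Democrat, including
                  leaners), "R" (Republican, including leaners), or
                  "I" (pure independent)
      cue      -- the party label shown: "D", "R", or None (no-cue cell)
    """
    if party_id == "I":
        return None  # pure independents are excluded from the analysis
    if cue is None:
        return "no cue"
    # A cue from one's own party is "inparty"; from the other, "outparty"
    return "inparty" if cue == party_id else "outparty"
```

For example, a Democratic leaner who saw the Republican cue falls into the out-party condition, matching the description in the text.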

Table 4 Policy anchors and party cues in study 5

Figure 5 depicts the results from the anchoring and party cues experiment (see SI Table A4.5 for full results). The center of the notched boxplot shows the median of each of the experimental conditions and the span of the notch communicates the 90% confidence interval. The left side of Fig. 5 shows the effects of the anchors collapsed across the party cue conditions and the right side of Fig. 5 shows the effects of the anchors by party cue condition. The pattern of results is consistent with the relevant anchor and policy over party hypotheses. For both cigarette and gas taxes, median responses hewed very close to the anchors. The median participant in the low cigarette tax condition ($0.17) preferred taxes to be $1.00, while the median participant in the high cigarette tax condition ($4.35) preferred taxes to be around $5.00 (p < 0.001). The results are largely unchanged by party cues.

Fig. 5

The Effect of Policy Relevant Anchors and Party Cues on Cigarette and Gas Taxes. Note: Data are from the 2012 CCES. The figures in the first column report the results collapsed across the party cue treatments and those in the second column show the results across all of the experimental conditions. The notched boxplots show how the data are distributed across experimental conditions. The solid line at the center of the notch represents the median and the span from the bottom to the top of the notch represents the 90% confidence interval. While all of the data are included in these analyses, the y-axis is restricted to the area where at least 80% of the data lie so that readers can view the boxplots clearly

The gas tax experiment tells a similar story. The median participant in the low anchor condition preferred gas taxes to stay near that rate ($0.08), while the median participant in the high anchor condition preferred taxes to stay near that rate ($0.49) (p < 0.001). However, in contrast to the results for cigarette taxes, out-party cues amplified the effect of the high anchor on gas tax preferences (p < 0.001), whereas in-party cues had no such effect (p = 0.13). As a follow-up, in 2018 we reran the gas tax experiment using updated tax figures for the low ($0.12) and high ($0.58) anchors. The results, located in the SI (Figure A5.1 and Table A5.3), replicate the 2012 findings and show that anchoring effects occur largely irrespective of party cues. Interestingly, the results from the 2018 survey suggest that in-party cues increase the effect of the anchors (p < 0.01). Table A5.1 in the SI shows that neither Democrats nor Republicans react differently to the party cue treatments. In sum, the results broaden the reach of the relevant anchors hypothesis, showing that anchors shape policy preferences even when pitted against party cues.
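The low- versus high-anchor comparisons above are reported as p-values on differences in medians. The paper does not specify the exact test, so the following permutation test for a difference in medians is only one plausible way such a comparison could be implemented, offered as a sketch rather than the authors' procedure.

```python
import random
import statistics

def perm_median_test(group_a, group_b, n_perm=5000, seed=7):
    """Two-sided permutation test for a difference in medians.

    A sketch under the assumption that the anchor comparisons were
    tested nonparametrically; the paper does not describe its test.
    Returns the permutation p-value.
    """
    rng = random.Random(seed)
    observed = abs(statistics.median(group_b) - statistics.median(group_a))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        # Re-split the pooled responses and recompute the median gap
        diff = abs(statistics.median(pooled[n_a:]) -
                   statistics.median(pooled[:n_a]))
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Illustrative (fabricated) gas tax preferences under the two anchors
low_anchor = [0.05, 0.06, 0.07, 0.08, 0.09, 0.10, 0.11, 0.12]
high_anchor = [0.45, 0.46, 0.47, 0.48, 0.49, 0.50, 0.51, 0.52]
p_value = perm_median_test(low_anchor, high_anchor)
```

With responses clustered tightly around each anchor, as in the study, the permutation p-value is small, mirroring the p < 0.001 results reported above.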

Analyzing Policy Relevant Anchors Without a Status Quo

In our final set of experiments, we attempt to disentangle relevant anchoring effects from the status quo by looking at proposed, rather than existing, policies. By eliminating the status quo, we examine public preferences in a circumstance analogous to agenda setting, wherein policy entrepreneurs attempt to define a policy. To this end, Study 6 examines whether anchors matter in the absence of a status quo by looking at policy proposals for a waiting period for gun purchases, a border wall tax on companies that outsource jobs, and fines against companies that test products on animals. None of the policies existed at the time of the study, so the experiments examine whether anchors work as a rhetorical strategy in debates over creating new public policy. In addition, we again examined whether the effects of the anchors persist in the face of party cues.

Table 5 presents the text of each question with the conditions in italics. The proposals differ in a variety of ways (e.g., unit of measurement, salience) and were modeled after actual proposals that we modified slightly. The proposal for a federal law mandating a waiting period for gun purchases has been a matter of public discussion since the Brady Bill's five-day waiting period lapsed in 1998. The border wall between the United States and Mexico has been a signature issue of the Trump Administration. The proposal we used echoed President Trump's call for a 35% border tax on companies that move jobs overseas and import products back to the United States. For the purposes of the experiment, we created a low anchor condition calling for a 5% tax. Lastly, we asked how much companies should be fined if they violated a proposed ban on cosmetic testing on animals. The proposal is based on a California bill calling for a $500 fine; by the time the bill eventually passed into law (after we conducted our study), the fine had been increased to $5000. The $500 and $5000 amounts serve as the low and high anchors, respectively. As before, participants were provided a box to input a number.

Table 5 Policy areas and conditions for study 6 on policy proposals

Figure 6 depicts the results from the policy proposal experiment using notched boxplots, as in previous analyses (see SI Table A4.6 for full results). The effects of the anchors are shown collapsed across the party cue conditions as well as by party cue condition. The pattern of results is largely consistent with the relevant anchor and policy over party hypotheses. For the gun waiting period, the border wall tax, and fines for animal testing, the median responses hewed very close to the anchors regardless of whether the data are collapsed or reported by party cue condition. Collapsing across the party cue treatments, the median participant in the 3 day gun waiting period condition (low anchor) preferred the wait to last 7 days, while the median participant in the 20 day condition (high anchor) preferred a 21 day wait (p < 0.001). These results are largely unchanged by party cues. Likewise, in the collapsed analyses for the border tax experiment, the median participant in the 5% tax condition (low anchor) preferred the tax to stay at exactly the anchored rate, while the median participant in the 35% tax condition (high anchor) preferred a tax near that rate (p < 0.001). We found essentially the same anchoring effects across the party cue conditions, save for the out-party cue having a somewhat less pronounced, but still significant, effect in the high anchor condition. Finally, in the animal testing experiment, the median participant preferred policies near the anchor in the collapsed analysis and among those who received party cues (Med$500 = $1000, Med$5000 = $5000, p < 0.001). Intriguingly, the median in the no cue/high anchor condition overshot the anchor by quite a bit (Med$500 = $1000, Med$5000 = $10,000, p < 0.001; see Footnote 5). Taken together, we find that anchors matter in the absence of a status quo and that party cues have no consistent effect.

Fig. 6

The Effect of Policy Relevant Anchors and Party Cues on Policy Proposals. Note: Data are from the 2018 CCES. The figures in the first column report the results collapsed across the party cue treatments and those in the second column show the results across all conditions. The notched boxplots show how the data are distributed across experimental conditions. The solid line at the center of the notch represents the median and the span from the bottom to the top of the notch represents the 90% confidence interval. While all of the data are included in these analyses, the y-axis of the boxplots is restricted to the area where at least 90% of the data lie so that readers can view the boxplots clearly

Discussion and Conclusion

Public policy is saturated with numbers that affect people's lives in fundamental ways, such as tax rates, the length of prison sentences, or the minimum wage. Since people are unlikely to walk around with strong preferences about the specific numbers attached to most policies, we investigated whether these numbers affect policy preferences through anchoring, a heuristic in which people rely too heavily on an initial value or number. Anchoring is a pervasive and robust effect across the social sciences (see Klein et al., 2014), so we asked whether anchors shape policy preferences. What we found, we believe, speaks to perennial questions about democratic competence.

Across our studies and various types of issues, irrelevant anchors, a common treatment in psychological and economic studies of anchoring, had little effect on policy preferences. In contrast, policy relevant anchors matter a great deal: they matter when they communicate the status quo or when they represent an agenda-setting reference point, and these effects are robust in the face of partisan cues. This finding contributes to a growing body of research suggesting that the public is responsive to relevant information and able to ignore irrelevant information (see also Barabas et al., 2020).

Although the anchoring heuristic is rooted in studies of psychological decision making, we believe it has important implications for political life. Our results help illuminate the micro-foundations of why research on the correspondence between public opinion and policy regularly finds that opinion exhibits a strong status quo bias (see Haselswerdt & Bartels, 2015; Jerit, 2009; Page & Shapiro, 1983). Previous research suggests that such a bias is often produced through considered thought, such as presenting tradeoffs between policy options (Hansen, 1998). While our data do not allow us to rule out a highly thoughtful, circumspect response, our anchoring experiments, by design, are more consistent with a heuristic processing approach than with one that elicits systematic, in-depth processing of political information. In particular, our final set of experiments examined preferences on novel policy proposals and produced the same pattern of results we found when looking at policies with a status quo. Furthermore, we find little evidence that either political awareness or numeracy reduces the effects of anchoring (see pp. 26–35 of the SI and Tables A5.4–5.11). Taken together, our results show that anchoring may provide the psychological foundation for both status quo bias and agenda-setting effects, which are often considered separate from one another in the study of public policy.

Our research also contributes to a growing body of work showing that partisans ignore party cues when provided policy information (Arceneaux, 2008; Boudreau & Mackenzie, 2014; Bullock, 2011; Nicholson, 2011). Partisans rely on party cues when they need guidance, but we show that when given policy relevant information they respond to the numbers presented. We believe these findings are more consistent with the predictions of the fast and frugal perspective than with the heuristics-as-biases perspective. People may not use all available information to arrive at an optimal decision that is invariant to anchoring information, but that does not mean they mindlessly latch on to any anchor. Instead, the status quo or a novel proposal, rather than party cues, appears to act as a sensible starting point.

Although we did not find much evidence that party cues influenced opinion, future research should examine other types of cues and competitive cue environments. For instance, we used party labels (e.g., Democrat), but future work ought to examine party leader cues (e.g., President Trump) since they can provoke a strong response among partisans (Barber & Pope, 2019; Nicholson, 2012). Another promising avenue is to examine competitive cue environments, instances where cues from both parties are included, providing partisans multiple anchors that are congruent or incongruent with their partisan identities.

Although anchoring has been studied extensively in psychology, our findings call for more research on the conditions under which anchors matter. One departure from previous research is that we found null results for irrelevant anchors, whereas many studies, both in psychology and economics, find substantive effects. Since previous studies investigated responses to factual questions, probability estimates, and purchasing decisions, we can only speculate about why irrelevant anchors matter in those instances but not in shaping policy preferences. It could be that politics is simply different; it might be that we use a less heavy-handed treatment, one without demand characteristics wherein participants become aware of the aims of the experiment and seek to meet them; or it may be that our policy areas are more familiar than guessing figures like the percentage of African countries in the UN (see Tversky & Kahneman, 1974).

Another question for future research is how common policy numbers, or anchors, are in policy debates. In our research, we deliberately chose policies that we believed to be defined by numbers. No doubt, general policy attitudes on issues such as abortion and the minimum wage are primarily shaped by ideology and partisanship, but when considering specific policy proposals our expectation was that the numbers that define those policies would be highly relevant. Although there is likely meaningful variation in who is exposed to these numbers, we assumed they are largely available to the attentive public. In the supplementary information (see Figure A6.1), we conducted a preliminary analysis of the presence of an anchor in news coverage of the minimum wage, a policy we believe to be a crucial case given that the face of the policy is a numeric value. We found that news articles not only mentioned the amount of the proposed increase but also featured the amount more prominently over time, as it increasingly appeared in the article's headline. Much more needs to be done, of course, but we find this suggestive of how numbers can come to define some policies.

Our results also have important implications for democracy. Relying on anchors may not reliably lead to normatively optimal outcomes (i.e., those we would observe if people were fully informed), but relying on relevant policy information, the status quo, is probably better than relying on irrelevant information. The status quo, at least in terms of the numbers that define it, is often not in dispute. For instance, although misleading elite rhetoric about Social Security in the late 1990s led many in the public to believe that it was going bankrupt (Jerit & Barabas, 2006), the fact that eligibility for full retirement benefits began at 65 was not in dispute. Similarly, elites do not argue about whether the current federal minimum wage is $7.25 an hour, although they vehemently disagree about whether raising it is helpful or harmful. Since many policies are defined by such numbers, the numbers may constrain, or anchor, public preferences, making significant policy change less likely.

To be sure, using the anchoring heuristic may rule out public acceptance of extreme (and perhaps highly sensible) policy changes, but it may also be a prudent approach to policy change. What is politically feasible is often defined by what has come before, so adjustments provide a sensible path forward. Indeed, budgetary incrementalism is premised on the power of the status quo (Wildavsky, 1964), and one could argue that citizens who anchor their policy preferences in the status quo are behaving in a pragmatic fashion. Once established, the status quo becomes sticky, anchoring preferences to what has come before for both the mass public and elites.

In some respects, one might find comfort in the fact that status quo bias inhibits the ability of elites to move public opinion. Yet, in doing so, it introduces an incremental bias into the policy process. Depending on one's proclivity for dramatic change, as well as the severity of the problems that confront society, this feature of status quo bias is normatively ambiguous. With respect to ideological commitments, for example, liberals are likely to find the weight the public gives to the status quo frustrating, since it forestalls substantial government action on the left, whereas conservatives are likely to use this feature to minimize the growth of government. Yet conservatives seeking major change are also likely to be frustrated in their efforts to shrink government, finding policy anchors all too sticky. In regard to problem severity, status quo bias in public opinion may make it difficult for policy makers to offer adequate solutions to large and looming threats. For instance, climate scientists are calling for radical reductions in greenhouse gases as the only way to prevent the planet from warming beyond sustainable levels, yet proposed policy changes seem tethered to the status quo, making radical change difficult.

Lastly, our results from the agenda-setting experiments that featured policy proposals as anchors suggest that, in the absence of a status quo, elites have a great deal of latitude in establishing anchors on emerging issues merely by having a platform for making a proposal. Given the importance of agenda setting in shaping policy outcomes, this is where elites are likely to have the greatest opportunity to thwart majority preferences. We view this possibility for manipulation as the dark side of the anchoring heuristic. The normative implications of our findings thus depend on the context in which anchors shape policy attitudes.