All the Best Polls Agree with Me: Bias in Evaluations of Political Polling


Do Americans consider polling results an objective source of information? Experts tend to evaluate the credibility of polls based on the survey methods used, the vendor's track record, and data transparency, but it is unclear whether the public does the same. In two experimental studies (one focusing on candidate evaluations in the 2016 U.S. election, the other on a policy issue), we find that the poll results themselves are a significant factor in respondents' assessments of polling credibility. Respondents viewed polls as more credible when the majority opinion matched their own. Moreover, we find evidence of attitude polarization after respondents viewed polling results, suggesting motivated reasoning in evaluations of political polls. These findings indicate that evaluations of polls are biased by motivated reasoning and suggest that such biases could constrain the potential impact of polls on political decision making.



  1. Despite widespread perception of a polling failure, pollsters often point out that Clinton did in fact win the popular vote as predicted. While formal evaluations of polling performance in 2016 found that national polls were slightly more accurate than in past elections, the state-level polls were considerably less accurate than in the past (Kennedy et al. 2018).


  3. See Price and Stroud (2005) for a review of the correlates of negative perceptions of polls.


  5. Our analysis considers polling on the topics of candidate support and public policy. We also explicitly test for attitude polarization after respondents have viewed a poll.


  7. Data files and replication scripts for this study and Study 2 can be found on the Political Behavior Dataverse at

  8. Approval ratings are the proportion of an MTurk worker's completed work projects that have been accepted. Reasons for rejected work projects include failure to follow instructions or failure to complete a task.

  9. We defined speeding as spending less than 2 minutes on the survey. The median duration was 7.5 minutes and the mean duration was 8.5 minutes. Results are not sensitive to these exclusions. In light of recent concerns about problematic responders or bots on MTurk, we also looked for duplicated latitude and longitude coordinates within our data (Ryan 2018). We identified six such respondents. Excluding them from the analysis had no impact on our results.
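The duplicate-coordinate screen described in this footnote can be sketched as follows. This is a minimal illustration with made-up data and hypothetical column names (`lat`, `lon`, `respondent_id`), not the authors' actual cleaning script, which is available in the replication archive:

```python
import pandas as pd

# Hypothetical respondent records; values and column names are illustrative.
df = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4, 5, 6],
    "lat": [35.99, 40.71, 35.99, 34.05, 40.71, 41.88],
    "lon": [-78.90, -74.01, -78.90, -118.24, -74.01, -87.63],
})

# Flag every row whose (lat, lon) pair appears more than once in the data.
# Duplicated coordinates can indicate one responder (or a bot farm)
# submitting the survey multiple times.
dup_mask = df.duplicated(subset=["lat", "lon"], keep=False)
dupes = df[dup_mask]

# Robustness check: rerun the analysis with the flagged rows excluded.
clean = df[~dup_mask]
```

Using `keep=False` flags all members of a duplicated group, so every suspect response can be inspected or dropped, not just the repeats.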

  10. Using Gallup as the survey vendor and signaling the inclusion of landline and cell phones in the sample design could shape the perceived credibility of the poll. Importantly, these characteristics are held constant across conditions and should make it more difficult to find the expected effects. The second study omits source and methodology altogether, referencing only that the information comes from a “national poll” of “likely voters.” See the online appendix for more information.

  11. Question wording for the polls was taken from a Quinnipiac University Polling Institute survey conducted between January 5–9, 2017. That poll showed that 53% of Americans support the requirement, while 41% oppose the requirement and 6% were unsure.

  12. Passage rates on the manipulation check were 95%, 91%, and 92% for the close, support, and oppose conditions respectively. Question wording is reported in the online appendix.

  13. Omitted are individuals who chose to give no opinion towards the Muslim registration question (n = 97).

  14. The poll credibility measure formed a reliable scale (α = 0.90). The mean of the scale is 2.66 and the median is 2.67. See the online appendix for descriptive statistics on the individual items in the scale.
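The scale reliability statistic reported here (Cronbach's alpha) can be computed directly from item-level responses. The sketch below uses made-up ratings on a hypothetical three-item battery, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-4 ratings on three credibility items for five respondents.
ratings = np.array([
    [1, 1, 2],
    [2, 2, 2],
    [3, 3, 4],
    [4, 4, 4],
    [2, 3, 3],
])
alpha = cronbach_alpha(ratings)  # -> 0.96 for this toy matrix
```

Values near 1 indicate that the items move together closely; conventionally, alpha above roughly 0.7 is treated as a reliable scale.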

  15. Political knowledge is a composite score based on responses to three factual questions about American government. Some studies of motivated reasoning find that political knowledge moderates the extent to which one will engage in belief preservation, such that those higher in political knowledge are better able to defend and find evidence for their prior beliefs (Lodge and Taber 2013). However, due to high baseline levels of political knowledge in our first study and a lack of good political knowledge measures in our second study, we are unable to speak to this moderator. We control for political knowledge in the first study to account for any variation due to this possibility, but note that future work on how the public perceives polls should further explore its role. The question wording and coding of all variables are included in the online appendix.

  16. In contrast to Kuru et al. (2017), we did not find evidence that political knowledge moderated the observed effects. Interactions between political knowledge and indicators for support or opposition to the registration question were not significant, though this may be due to the high baseline political knowledge of all participants. As reported in the online appendix, the results are robust to excluding party identification and ideology from the models and to estimation without any control variables.

  17. In the Close condition, those who supported the policy were slightly more likely to view the poll as credible. Although we might have expected the Close condition to have no effect on evaluations of poll credibility, the one-point edge that supporters were given in that condition (48 to 47) may have been interpreted in a way that induced some motivated reasoning, although the effect is not statistically significant. Given recent research on innumeracy (e.g., Landy et al. 2017), it is perhaps not surprising that some respondents did not recognize that the condition showed a polling result within the reported margin of error.

  18. More information on the survey methodology is reported in the online appendix. CCES team modules consist of 1000 ‘matched’ cases weighted to produce a sample that is demographically similar to the U.S. population. We opt to use the larger, ‘unmatched’ cases of the team module (n = 1448) to increase sample size and because it was the basis for randomization (Ansolabehere and Rivers 2013). Conclusions do not change using the smaller, matched dataset.

  19. While the CCES was in the field between late September and late October 2016, the average horse-race poll showed Clinton with around a 3- to 7-point lead over Trump. Again, we used a one-point lead for Clinton over Trump to make the treatment condition believable.

  20. The poll credibility variable was created in the same manner as Study 1 and formed a reliable scale (α = 0.84). The mean of the scale is 2.34 and the median is 2.33. See the online appendix for descriptive statistics on the individual items in the scale.

  21. Across the two conditions, the manipulation check was passed 57% of the time. This low passage rate could be due to the nature of the question, which asked for specific numbers from the poll; passage rates were much higher in Study 1, where the manipulation check asked only about majority opinion. A separate analysis was performed for only those who passed the manipulation check. The results do not substantively change when only these individuals are considered.

  22. Similar to Study 1, we omit observations that lack a prior opinion on the stimuli of interest because we have no expectations for them. This includes those with a stated preference for Gary Johnson or no stated preference for any of the three candidates (n = 349). See the online appendix for robustness checks for the models when partisanship and ideology are omitted, as well as estimation with no control variables.

  23. Although much of the previous research on attitude polarization associated with motivated reasoning has focused on issue or ideological extremity (e.g., Taber and Lodge 2006), attitude polarization can be understood more generally as positions on a given attitude or belief becoming more divergent between two groups. As such, attitudes polarizing about a candidate would mean that supporters of the candidate become more enthusiastic, or opponents become even less enthusiastic, about that candidate winning office, or both. Previous research examining attitude polarization towards candidates has used similar attitudinal measures, such as favorability (Nyhan et al. 2017) and thermometer ratings (Redlawsk 2006), to measure attitude polarization in candidate support.

  24. Those who prefer neither Clinton nor Trump includes individuals who prefer Gary Johnson or who have no preference for any of the three major candidates (n = 349).

  25. As one additional test, we reran the analysis after dropping all individuals who selected maximum enthusiasm (a value of five) on the pre-manipulation questionnaire, to better assess the impact of ceiling effects on our directional hypothesis. We find that the coefficients for Trump (β = 0.71; S.E. = 0.13) and Clinton (β = 0.36; S.E. = 0.13) preferences increase in magnitude. We have also replicated the results with no control variables and with just partisanship and ideology omitted. Results are presented in the online appendix.


  • Ampofo, L., Anstead, N., & O’Loughlin, B. (2011). Trust, confidence, and credibility: Citizen responses on twitter to opinion polls during the 2010 UK general election. Information, Communication & Society, 14(6), 850–871.

  • Ansolabehere, S., & Iyengar, S. (1994). Of horseshoes and horse races: Experimental studies of the impact of poll results on electoral behavior. Political Communication, 11(4), 413–430.

  • Ansolabehere, S., & Rivers, D. (2013). Cooperative survey research. Annual Review of Political Science, 16, 307–329.

  • Atkeson, L. R., & Alvarez, R. M. (2018). Introduction to polling and survey methods. In The Oxford handbook of polling and survey methods (Vol. 1).

  • Bartels, L. M. (1988). Presidential primaries and the dynamics of public choice. Princeton: Princeton University Press.

  • Bartels, L. M. (2002). Beyond the running tally: Partisan bias in political perceptions. Political Behavior, 24(2), 117–150.

  • Blais, A., Gidengil, E., & Nevitte, N. (2006). Do polls influence the vote? In Capturing campaign effects (pp. 263–279). Ann Arbor: The University of Michigan Press.

  • Blumenthal, M. (2016). Polling: Crisis or not, we're in a new era. The Huffington Post. Retrieved June 6, 2016 from

  • Blumenthal, M., Clement, S., Clinton, J. D., Durand, C., Franklin, C., Miringoff, L., Olson, K., Rivers, D., Saad, Y. L., & Witt, G. E. (2017). An evaluation of 2016 election polls in the US.

  • Boudreau, C., & McCubbins, M. D. (2010). The blind leading the blind: Who gets polling information and does it improve decisions? The Journal of Politics, 72(2), 513–527.

  • Bullock, J. G. (2009). Partisan bias and the Bayesian ideal in the study of public opinion. The Journal of Politics, 71(3), 1109–1124.

  • Clifford, S., Jewell, R. M., & Waggoner, P. D. (2015). Are samples drawn from Mechanical Turk valid for research on political ideology? Research & Politics, 2(4), 2053168015622072.

  • Crespi, I. (1988). Pre-election polling: Sources of accuracy and error. New York: Russell Sage Foundation.

  • Gerber, A., & Green, D. (1999). Misperceptions about perceptual bias. Annual Review of Political Science, 2(1), 189–210.

  • Großer, J., & Schram, A. (2010). Public opinion polls, voter turnout, and welfare: An experimental study. American Journal of Political Science, 54(3), 700–717.

  • Guess, A., & Coppock, A. (2015). Back to bayes: Confronting the evidence on attitude polarization. Unpublished Paper, Yale University.

  • Hill, S. J. (2017). Learning together slowly: Bayesian learning about political facts. The Journal of Politics, 79(4), 1403–1418.

  • Hillygus, S. D., & Guay, B. (2016). The virtues and limitations of election polling in the United States. Seminar Magazine (September).

  • Iyengar, S., & Westwood, S. J. (2015). Fear and loathing across party lines: New evidence on group polarization. American Journal of Political Science, 59(3), 690–707.

  • Jackson, N. (2018). The rise of poll aggregation and election forecasting. In L. R. Atkeson & R. M. Alvarez (Eds.), The Oxford handbook of polling and survey methods. Oxford: Oxford University Press.

  • Jacobs, L. R., & Shapiro, R. Y. (2005). Polling politics, media, and election campaigns: Introduction. The Public Opinion Quarterly, 69(5), 635–641.

  • Jerit, J., & Barabas, J. (2012). Partisan perceptual bias and the information environment. The Journal of Politics, 74(3), 672–684.

  • Kahan, D. M. (2016a). The politically motivated reasoning paradigm, Part 1: What politically motivated reasoning is and how to measure it. Emerging Trends in Social & Behavioral Sciences.

  • Kahan, D. M. (2016b). The politically motivated reasoning paradigm, Part 2: Unanswered questions. Emerging Trends in Social & Behavioral Sciences.

  • Kennedy, C., Blumenthal, M., Clement, S., Clinton, J. D., Durand, C., Franklin, C., et al. (2018). An evaluation of the 2016 election polls in the United States. Public Opinion Quarterly, 82(1), 1–33.

  • Kennedy, C., Mercer, A., Keeter, S., Hatley, N., McGeeney, K., & Gimenez, A. (2016). Evaluating online nonprobability surveys. Washington, DC: Pew Research Center.

  • Kim, S. T., Weaver, D., & Willnat, L. (2000). Media reporting and perceived credibility of online polls. Journalism & Mass Communication Quarterly, 77(4), 846–864.

  • Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480.

  • Kuru, O., Pasek, J., & Traugott, M. W. (2017). Motivated reasoning in the perceived credibility of public opinion polls. Public Opinion Quarterly, 81(2), 422–446.

  • Landy, D., Guay, B., & Marghetis, T. (2017). Bias and ignorance in demographic perception. Psychonomic Bulletin & Review, 6, 1–13.

  • Langer, G. (2016). Clinton, Trump all but tied as enthusiasm dips for Democratic candidate. ABC News. Retrieved November 01, 2016, from

  • Lau, R. R., & Redlawsk, D. P. (2006). How voters decide: Information processing in election campaigns. Cambridge: Cambridge University Press.

  • Lavrakas, P. J., Presser, S., Price, V., & Traugott, M. (1998). Them but not me: The perceived impact of election polls. In Paper Presented at the Annual Meeting of the American Association for Public Opinion Research, St. Louis, MO, USA.

  • Lelkes, Y., Sood, G., & Iyengar, S. (2017). The hostile audience: The effect of access to broadband internet on partisan affect. American Journal of Political Science, 61(1), 5–20.

  • Lodge, M., & Taber, C. S. (2013). The rationalizing voter. Cambridge: Cambridge University Press.

  • Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098.

  • Marsh, C. (1985). Back on the bandwagon: The effect of opinion polls on public opinion. British Journal of Political Science, 15(1), 51–74.

  • Mosier, N. R., & Ahlgren, A. (1981). Credibility of precision journalism. Journalism Quarterly, 58(3), 375–518.

  • Nyhan, B., Porter, E., Reifler, J., & Wood, T. (2017). Taking corrections literally but not seriously? The effects of information on factual beliefs and candidate favorability. Political Behavior.

  • Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330.

  • Panagopoulos, C., Endres, K., & Weinschenk, A. C. (2018). Preelection poll accuracy and bias in the 2016 US general elections. Journal of Elections, Public Opinion and Parties, 28(2), 157–172.

  • Panagopoulos, C., Gueorguieva, V., Slotnick, A., Gulati, G., & Williams, C. (2009). Politicking online: The transformation of election campaign communications. New Brunswick: Rutgers University Press.

  • Price, V., & Stroud, N. J. (2005). Public attitudes toward polls: Evidence from the 2000 U.S. presidential election. International Journal of Public Opinion Research, 18(4), 393–421.

  • Redlawsk, D. P. (2002). Hot cognition or cool consideration? Testing the effects of motivated reasoning on political decision making. Journal of Politics, 64(4), 1021–1044.

  • Redlawsk, D. P. (2006). Motivated reasoning, affect, and the role of memory in voter decision making. In D. P. Redlawsk (Ed.), Feeling politics (pp. 87–107). New York: Palgrave Macmillan.

  • Rothschild, D., & Malhotra, N. (2014). Are public opinion polls self-fulfilling prophecies? Research & Politics.

  • Ryan, T. J. (2018). Data contamination on MTurk. Blog post. Available online at

  • Salwen, M. B. (1987). Credibility of newspaper opinion polls: Source, source intent and precision. Journalism Quarterly, 64(4), 813–819.

  • Searles, K., Smith, G., & Sui, M. (2018). Partisan media, electoral predictions, and wishful thinking. Public Opinion Quarterly, 82(S1), 302–324.

  • Stonecash, J. M. (2008). Political polling: Strategic information in campaigns. Lanham: Rowman & Littlefield.

  • Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50(3), 755–769.

  • Tourangeau, R., Steiger, D. M., & Wilson, D. (2002). Self-administered questions by telephone: Evaluating interactive voice response. The Public Opinion Quarterly, 66(2), 265–278.

  • Traugott, M. W. (2005). The accuracy of the national preelection polls in the 2004 presidential election. Public Opinion Quarterly, 69(5), 642–654.

  • Tsfati, Y. (2001). Why do people trust media pre-election polls? Evidence from the Israeli 1996 elections. International Journal of Public Opinion Research, 13(4), 433–441.

  • Utych, S. M., & Kam, C. D. (2013). Viability, information seeking, and vote choice. The Journal of Politics, 76(1), 152–166.

  • Valentino, N. A., Banks, A. J., Hutchings, V. L., & Davis, A. K. (2009). Selective exposure in the Internet age: The interaction between anxiety and information utility. Political Psychology, 30(4), 591–613.

  • Valentino, N. A., King, J. L., & Hill, W. W. (2017). Polling and prediction in the 2016 presidential election. Computer, 50(5), 110–115.

  • Vannette, D., & Westwood, S. (2013). Voter mobilization effects of poll reports during the 2012 presidential campaign. In Paper Presented at the 68th Annual AAPOR Conference, May 17.

  • Wlezien, C., & Erikson, R. (2002). The timeline of presidential election campaigns. The Journal of Politics, 64(4), 969–993.

  • Wood, T., & Porter, E. (2016). The elusive backfire effect: Mass attitudes’ steadfast factual adherence. Political Behavior, 65, 1–29.

  • Zukin, C. (2015). What’s the matter with polling. New York Times, 20.

Author information


Corresponding author

Correspondence to Gabriel J. Madson.


Electronic supplementary material

Supplementary material 1 (DOCX 13 kb)

Supplementary material 2 (PDF 466 kb)


About this article

Cite this article

Madson, G.J., Hillygus, D.S. All the Best Polls Agree with Me: Bias in Evaluations of Political Polling. Polit Behav 42, 1055–1072 (2020).



  • Polling
  • Poll evaluation
  • Public opinion
  • Motivated reasoning
  • Cognitive bias