Real Solutions for Fake News? Measuring the Effectiveness of General Warnings and Fact-Check Tags in Reducing Belief in False Stories on Social Media

Abstract

Social media has increasingly enabled “fake news” to circulate widely, most notably during the 2016 U.S. presidential campaign. These intentionally false or misleading stories threaten the democratic goal of a well-informed electorate. This study evaluates the effectiveness of strategies that could be used by Facebook and other social media to counter false stories. Results from a pre-registered experiment indicate that false headlines are perceived as less accurate when people receive a general warning about misleading information on social media or when specific headlines are accompanied by a “Disputed” or “Rated false” tag. Though the magnitudes of these effects are relatively modest, they generally do not vary by whether headlines were congenial to respondents’ political views. In addition, we find that adding a “Rated false” tag to an article headline lowers its perceived accuracy more than adding a “Disputed” tag (Facebook’s original approach) relative to a control condition. Finally, though exposure to the “Disputed” or “Rated false” tags did not affect the perceived accuracy of unlabeled false or true headlines, exposure to a general warning decreased belief in the accuracy of true headlines, suggesting the need for further research into how to most effectively counter false news without distorting belief in true information.


Notes

  1.

    “Fake news” has many definitions and is frequently used in imprecise or confusing ways. Moreover, the debate over the meaning of the term and related concepts raises epistemological issues that are beyond the scope of this paper (e.g., speaker intent; see Wardle and Derakhshan 2017). We therefore employ “false news” as an alternative term throughout this paper, which we define as described above (“factually dubious content that imitates the format of journalism but is produced with no regard for accuracy or fairness”; see Lazer et al. 2018). This approach is consistent with the practices of various news and social media sources (e.g., Oremus 2017) and is intended to avoid unnecessary confusion.

  2.

    Pennycook and Rand (2017), which we had not seen at the time of pre-registration, also considers this question.

  3.

    We pre-registered an additional research question about the effects of exposure to a general warning and/or to a “Disputed” or “Rated false” tag on respondents’ self-reported likelihood of “liking” and sharing the headlines on Facebook. The results of this analysis are presented in Online Appendix B.

  4.

    A minority of studies conclude that MTurk samples are not externally valid (e.g., Krupnikov and Levine 2014). For example, participants on MTurk tend to skew liberal and young. Moreover, the conservatives and older individuals who do participate may differ from their counterparts in the general population. However, numerous studies find that experimental treatment effect estimates typically generalize from MTurk to national probability samples, suggesting that such problems are rare in practice (e.g., Berinsky et al. 2012; Coppock 2016; Horton et al. 2011; Mullinix et al. 2015). Finally, our MTurk sample is externally valid in the sense that it is made up disproportionately of frequent users of the Internet—precisely the group who may be most likely to encounter false news (Pennycook and Rand 2018a). We thus conclude that respondents from MTurk constitute a valid sample for testing our hypotheses, though replication on representative samples would of course be desirable.

  5.

    The pilot study tested the effects of “Disputed” and “Rated false” tags only on perceived accuracy and likelihood of liking/sharing for six false news headlines. The results of this study were similar to our main analysis, and are available upon request.

  6.

    As in most studies, we cannot know how much false news respondents were exposed to during the 2016 presidential election and its aftermath (e.g., Allcott and Gentzkow 2017). While it would be useful to measure this quantity, our main interest is the effect of warnings and tags on belief accuracy when people encounter false news. In addition, the auxiliary measure of misperception belief mentioned above allows us to test whether individuals who are susceptible to believing false news respond differently to warnings and tags than those who are not. We find no consistent evidence of such heterogeneity in exploratory analyses reported in Online Appendix C. Future research should collect data on individuals’ exposure to false news and explore treatment effect heterogeneity by this variable directly.

  7.

    A possible concern is that asking respondents to rate political statements for accuracy could have primed them to be particularly alert to clues that the treatment articles could be deceptive in nature. However, Pennycook et al. (2017) and Pennycook and Rand (2017) did not ask respondents to rate any statements for accuracy before their experiments and also found that tagged false news headlines were rated as less accurate than untagged ones, suggesting that the tags reduce the perceived accuracy of false headlines independently of any possible priming effect.

  8.

    Some of these articles were originally used in Pennycook et al. (2017), which examined the effect of prior exposure to false news headlines on the perceived accuracy of false news. Others were taken from Silverman (2016), a compilation of the most widely shared false news articles during the 2016 election. The original sources of the false news articles were dubious websites that had intentionally created them for profit.

  9.

    The true headlines that were tested were taken from actual mainstream news sources and were not intended to be explicitly pro- or anti-Trump, though respondent interpretations of them may differ.

  10.

    A potential concern is that highly attentive MTurk respondents saw these accuracy questions as an attention check rather than a measure of sincere belief and responded accordingly. However, previous research finds that the effects of corrections to misinformation were almost identical among samples of MTurk workers and Morning Consult poll respondents (Nyhan et al. 2017) and provides limited and inconsistent evidence of demand effects in survey experiments (Mummolo and Peterson 2018).

  11.

    de Leeuw et al. (2015) find that excluding “don’t know” options while allowing respondents to skip questions (as we did) reduces missing data and increases reliability in online surveys relative to including a “don’t know” option, and they suggest offering “don’t know” options only when there is a theoretical reason to do so. We also exclude the “don’t know” option to maintain comparability between our study and others in the field that examine belief in false news and other forms of political misinformation (e.g., Pennycook et al. 2017; Pennycook and Rand 2017).

  12.

    Our pre-registration did not offer hypotheses about the correlates of false news belief, but see Pennycook and Rand (2018b), which finds that individuals who tend to ascribe profundity to randomly generated sentences and who overstate their level of knowledge are more likely to perceive false news as accurate, while those who engage in analytic thinking are less susceptible.

  13.

    All results are virtually identical when estimated using ordered probit instead. See Online Appendix C.

  14.

    We do not include respondent fixed effects, which were incorrectly specified in the pre-registration (they cannot be estimated due to multicollinearity). However, we show in Online Appendix C that our primary results are consistent when estimated in a model that includes random effects by respondent.

  15.

    The estimates reported here refer to the effects of each treatment alone, independent of any moderators, with all other manipulations set at 0. We estimate models that include interaction terms below.

  16.

    The effects on perceived accuracy reported in Tables 3–5 are consistent when non-Facebook users are excluded from the sample in exploratory analyses (see Online Appendix C).

  17.

    A typo in the pre-registration statement to this effect instead mistakenly stated that we would exclude “pure independents.” The results below again exclude pure controls, but equivalent results including those respondents are provided in Online Appendix C. We do not include respondents with no opinion of Trump in that model because there were so few (n = 4).

  18.

    Pennycook and Rand (2018a) similarly find that “the correlation between CRT [Cognitive Reflection Test scores] and perceived accuracy is unrelated to how closely the headline aligns with the participant’s ideology... Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se.” Likewise, Porter et al. (2018) find minimal differences between ideological groups in their willingness to accept false news headlines.

  19.

    We conducted an additional exploratory analysis to test whether the effects of political congeniality were altered by a participant’s political knowledge. Consistent with previous research, we found that high political knowledge was associated with a lower belief in false news stories regardless of the article’s slant. However, we did not find convincing evidence that high political knowledge meaningfully changed a specific warning’s effect on belief in false news headlines. Results for this exploratory analysis are included in Online Appendix C (Table C16).

  20.

    Headlines viewed by respondents in the “Disputed” or “Rated false” conditions before exposure to the first tag are also excluded (spillover is impossible for participants who are not yet treated).

  21.

    This difference in effect size could be partially attributable to respondents being aware that their ability to discern true from false headlines was under scrutiny, since they had previously been asked to rate political statements as true or false at the beginning of our survey.

References

  1. Aklin, M., & Urpelainen, J. (2014). Perceptions of scientific dissent undermine public support for environmental policy. Environmental Science and Policy, 38, 173–177.

  2. Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211–236.

  3. Atkins, L. (2017). States should require schools to teach media literacy to combat fake news. The Huffington Post, July 13, 2017. Retrieved July 8, 2018, from https://www.huffingtonpost.com/entry/states-should-require-schools-to-teach-media-literacy_us_59676573e4b07b5e1d96ed86.

  4. Berinsky, A. J., Huber, G. A., & Lenz, G. S. (2012). Evaluating online labor markets for experimental research: Amazon.com’s Mechanical Turk. Political Analysis, 20(3), 351–368.

  5. Bode, L., & Vraga, E. K. (2015). In related news, that was wrong: The correction of misinformation through related stories functionality in social media. Journal of Communication, 65(4), 619–638.

  6. Bolsen, T., & Druckman, J. N. (2015). Counteracting the politicization of science. Journal of Communication, 65(5), 745–769.

  7. Boykoff, M. T., & Boykoff, J. M. (2004). Balance as bias: Global warming and the US prestige press. Global Environmental Change, 14, 125–136.

  8. Bullock, J. G., Green, D. P., & Ha, S. E. (2010). Yes, but what’s the mechanism? (Don’t expect an easy answer). Journal of Personality and Social Psychology, 98(4), 550–558.

  9. Chinn, S., Lane, D. S., & Hart, P. S. (2018). In consensus we trust? Persuasive effects of scientific consensus communication. Public Understanding of Science, 27(7), 807–823.

  10. Clayton, K., Davis, J., Hinckley, K., & Horiuchi, Y. (2018). Partisan motivated reasoning and misinformation in the media: Is news from ideologically uncongenial sources more suspicious? Unpublished manuscript. Retrieved July 8, 2018, from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3035272.

  11. Constine, J. (2017). Facebook puts link to 10 tips for spotting ‘false news’ atop feed. Tech Crunch, April 6, 2017. Retrieved July 18, 2018, from https://techcrunch.com/2017/04/06/facebook-puts-link-to-10-tips-for-spotting-false-news-atop-feed/.

  12. Coppock, A. (2016). Generalizing from survey experiments conducted on mechanical Turk: A replication approach. Unpublished manuscript, March 22, 2016. Retrieved July 8, 2018, from https://alexandercoppock.files.wordpress.com/2016/02/coppock_generalizability2.pdf.

  13. Corbett, J. B., & Durfee, J. L. (2004). Testing public (un)certainty of science media representations of global warming. Science Communication, 26(2), 129–151.

  14. de Leeuw, E. D., Hox, J. J., & Boevé, A. (2015). Handling do-not-know answers: Exploring new approaches in online and mixed-mode surveys. Social Science Computer Review, 34(1), 116–132.

  15. Echterhoff, G., Groll, S., & Hirst, W. (2007). Tainted truth: Overcorrection for misinformation influence on eyewitness memory. Social Cognition, 25(3), 367–409.

  16. Ecker, U. K. H., Lewandowsky, S., Chang, E. P., & Pillai, R. (2014). The effects of subtle misinformation in news headlines. Journal of Experimental Psychology: Applied, 20(4), 323–335.

  17. Ecker, U. K. H., Lewandowsky, S., & Tang, D. T. W. (2010). Explicit warnings reduce but do not eliminate the continued influence of misinformation. Memory & cognition, 38(8), 1087–1100.

  18. Flynn, D. J., Nyhan, B., & Reifler, J. (2017). The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics. Political Psychology, 38(S1), 127–150.

  19. Gabielkov, M., Ramachandran, A., Chaintreau, A., & Legout, A. (2016). Social clicks: What and who gets read on Twitter? In Proceedings of the 2016 ACM SIGMETRICS international conference on measurement and modeling of computer systems. New York: ACM.

  20. Gottfried, J., & Shearer, E. (2016). News use across social media platforms 2016. Pew Research Center, May 26, 2016. Retrieved May 23, 2017, from http://www.journalism.org/2016/05/26/news-use-across-social-media-platforms-2016/.

  21. Horton, J. J., Rand, D. G., & Zeckhauser, R. J. (2011). The online laboratory: Conducting experiments in a real labor market. Experimental Economics, 14(3), 399–425.

  22. Kahan, D. M. (2015). Climate-science communication and the measurement problem. Political Psychology, 36(S1), 1–43.

  23. Kahan, D. M., Dawson, E. C., Peters, E., & Slovic, P. (2017). Motivated numeracy and enlightened self-government. Behavioural Public Policy, 1(1), 54–86.

  24. Koehler, D. J. (2016). Can journalistic false balance distort public perception of consensus in expert opinion? Journal of Experimental Psychology: Applied, 22(1), 24–38.

  25. Krupnikov, Y., & Levine, A. S. (2014). Cross-sample comparisons and external validity. Journal of Experimental Political Science, 1(1), 59–80.

  26. Kuru, O., Pasek, J., & Traugott, M. W. (2017). Motivated reasoning in the perceived credibility of public opinion polls. Public Opinion Quarterly, 81(2), 422–446.

  27. Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., et al. (2018). The science of fake news. Science, 359(6380), 1094–1096.

  28. Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131.

  29. Manjoo, F. (2013). You won’t finish this article. Slate Magazine, June 6, 2013. Retrieved May 23, 2017, from http://www.slate.com/articles/technology/technology/2013/06/how_people_read_online_why_you_won_t_finish_this_article.html.

  30. Mosseri, A. (2016). Addressing Hoaxes and Fake news. Facebook, December 15, 2016. Retrieved July 18, 2018, from https://newsroom.fb.com/news/2016/12/news-feed-fyi-addressing-hoaxes-and-fake-news/.

  31. Mosseri, A. (2017). A new educational tool against misinformation. Facebook, April 6, 2017. Retrieved May 23, 2017, from https://newsroom.fb.com/news/2017/04/a-new-educational-tool-against-misinformation/.

  32. Mullinix, K. J., Leeper, T. J., Druckman, J. N., & Freese, J. (2015). The generalizability of survey experiments. Journal of Experimental Political Science, 2(2), 109–138.

  33. Mummolo, J., & Peterson, E. (2018). Demand effects in survey experiments: An empirical assessment. American Political Science Review. Retrieved January 5, 2019, from https://scholar.princeton.edu/sites/default/files/jmummolo/files/demand_effects_in_survey_experiments_an_empirical_assessment.pdf.

  34. Nyhan, B., Porter, E., Reifler, J. & Wood, T. (2017). Taking corrections literally but not seriously? The effects of information on factual beliefs and candidate favorability. Unpublished manuscript. Retrieved July 8, 2018, from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2995128.

  35. Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330.

  36. Nyhan, B., & Reifler, J. (N.d.). Do people actually learn from fact-checking? Evidence from a longitudinal study during the 2014 campaign. Unpublished manuscript. Retrieved September 20, 2017, from http://www.dartmouth.edu/~nyhan/fact-checking-effects.pdf.

  37. Oremus, W. (2017). Facebook has stopped saying fake news. Slate, August 8, 2017. Retrieved July 8, 2018, from http://www.slate.com/blogs/future_tense/2017/08/08/facebook_has_stopped_saying_fake_news_is_false_news_any_better.html.

  38. Owen, L. H. (2018). Is your fake news about immigrants or politicians? It all depends on where you live. Nieman Journalism Lab, May 25, 2018. Retrieved July 18, 2018, from http://www.niemanlab.org/2018/05/is-your-fake-news-about-immigrants-or-politicians-it-all-depends-on-where-you-live/.

  39. Pennycook, G., & Rand, D. G. (2017). The implied truth effect: Attaching warnings to a subset of fake news stories increases perceived accuracy of stories without warnings. Unpublished manuscript. Retrieved July 8, 2018, from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3035384.

  40. Pennycook, G., & Rand, D. G. (2018a). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition. Retrieved July 8, 2018, from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3165567.

  41. Pennycook, G., & Rand, D. G. (2018b). Who falls for fake news? The roles of analytic thinking, motivated reasoning, political ideology, and bullshit receptivity. Unpublished manuscript. Retrieved July 10, 2018, from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3023545.

  42. Pennycook, G., Cannon, T. D., & Rand, D. G. (2017). Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General. Retrieved May 23, 2017, from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2958246.

  43. Porter, E., Wood, T. J., & Kirby, D. (2018). Sex trafficking, Russian infiltration, birth certificates, and pedophilia: A survey experiment correcting fake news. Journal of Experimental Political Science, 5(2), 159–164.

  44. Schaedel, S. (2017). Black lives matter blocked hurricane relief? Factcheck.org, September 1, 2017. Retrieved September 26, 2017 from http://www.factcheck.org/2017/09/black-lives-matter-blocked-hurricane-relief/.

  45. Silverman, C. (2016). This analysis shows how viral fake election news stories outperformed real news on Facebook. Buzzfeed, November 16, 2016. Retrieved May 22, 2017, from https://www.buzzfeed.com/craigsilverman/viral-fake-election-news-outperformed-real-news-on-facebook?utm_term=.lgQvmj974#.qqXqL1AJV.

  46. Silverman, C., & Singer-Vine, J. (2016). Most Americans who see fake news believe it, new survey says. Buzzfeed, December 6, 2016. Retrieved May 23, 2017, from https://www.buzzfeed.com/craigsilverman/fake-news-survey?utm_term=.srvopPEVR#.cqlAz0PeX.

  47. Smith, J., Jackson, G., & Raj, S. (2017). Designing against misinformation. Medium, December 20, 2017. Retrieved July 8, 2018, from https://medium.com/facebook-design/designing-against-misinformation-e5846b3aa1e2.

  48. Szpitalak, M., & Polczyk, R. (2010). Warning against warnings: Alerted subjects may perform worse. Misinformation, involvement and warning as determinants of witness testimony. Polish Psychological Bulletin, 41(3), 105–112.

  49. Thorson, E. (2016). Belief echoes: The persistent effects of corrected misinformation. Political Communication, 33(3), 460–480.

  50. van der Linden, S., Leiserowitz, A., Rosenthal, S., & Maibach, E. (2017). Inoculating the public against misinformation about climate change. Global Challenges, 1(2), 1600008.

  51. Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe Report, September 27, 2017. Retrieved July 8, 2018, from https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c.

  52. Wegner, D. M., Wenzlaff, R., Kerker, M., & Beattie, A. E. (1981). Incrimination through innuendo: Can media questions become public answers? Journal of Personality and Social Psychology, 40(5), 822–832.

Acknowledgements

We thank the Dartmouth College Office of Undergraduate Research for generous funding support. We are also grateful to Ro’ee Levy and David Rand for helpful comments.

Author information

Corresponding author

Correspondence to Brendan Nyhan.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 16849 KB)

About this article

Cite this article

Clayton, K., Blair, S., Busam, J.A. et al. Real Solutions for Fake News? Measuring the Effectiveness of General Warnings and Fact-Check Tags in Reducing Belief in False Stories on Social Media. Polit Behav (2019). https://doi.org/10.1007/s11109-019-09533-0

Keywords

  • Fake news
  • Fact check
  • Warnings
  • Corrections
  • Social media
  • Misperceptions