
Publizistik

pp. 1–24

Fake News, Warning Messages, and Perceived Truth Value: On the Differential Susceptibility to False Reports Depending on Political Orientation

  • Florian Arendt
  • Mario Haim
  • Julia Beck
Article

Summary

Although misinformation and half-truths have long been part of media history, the phenomenon is currently being discussed intensively under the label "fake news." Fake news items bypass the gatekeeping function of professional journalism, address polarizing, morally charged topics, are ideologically and/or economically motivated, rely on high comprehensibility, and aim to elicit emotional reactions. These characteristics are assumed to facilitate rapid dissemination via social networking sites. In the literature, it has been suggested that individuals with a political orientation right of center are particularly susceptible to fake news. However, there is hardly any empirical evidence for this assumption. In a two-wave, web-based experimental study (N = 519), we therefore tested the hypothesis that the effect of exposure to fake news on the perceived truth of the events underlying the fake news depends on individual political orientation. Our findings support this assumption: an increase in the perceived truth of fabricated events could be demonstrated only among individuals with a political orientation right of center. This effect was also stable over time. In addition, we tested the effect of warning messages. The analyses revealed a complex pattern of effects; for instance, the warning message had no effect among individuals with a political orientation right of center. We discuss the results in the context of possible strategies for how society can deal with the phenomenon of misleading false reports.

Keywords

Fake news · Warning messages · Political orientation · Migration crisis · Social media · Credibility · Facebook

Fake news, warning messages, and perceived truth value: investigating the differential susceptibility hypothesis related to political orientation

Abstract

Hoaxes and half-truths have always been part of news content. Contemporary discussions, however, often use the term “fake news” to describe such news items, which bypass the traditional gatekeeping function of professional journalism, report on polarizing, morally charged issues, are motivated by political ideology or economic reasons, are written in plain, easy-to-understand language, and aim to elicit emotional reactions in readers. These characteristics are assumed to facilitate a fast dissemination process via social networking sites.

Previous work has expressed the idea that individuals with a conservative/right political orientation are especially prone to believing the content of fake news. Unfortunately, there is hardly any empirical evidence testing this idea. The present study tested this hypothesis in a web-based experiment (N = 519). Specifically, we tested whether the perceived truth value of events reported in fake news items is higher among individuals with a conservative/right political orientation than among those with a liberal/left political orientation. Furthermore, we investigated whether a warning message emphasizing doubts about credibility and truthfulness (based on independent fact-checkers) influences the size of the effect of fake news exposure on readers' perceived truth value.

We utilized a 3 (type of news items: control condition without fake news, treatment condition with fake news without a warning message, treatment condition with fake news with a warning message; between-subjects factor) × 2 (measurement of perceived truth value: immediately after exposure, approximately one week later; within-subjects factor) design. Participants read news items on a broad range of topics. Importantly, participants allocated to the two treatment conditions read five fabricated fake news items (e.g., "Hessian kindergarten bans Wurstbrot [i.e., bread topped with thin slices of lunch meat] due to complaints by Muslim parents"). Half of these participants received the fake news items together with a warning message casting doubt on the credibility and truthfulness of the content. Participants allocated to the control condition read only neutral news items (i.e., no fake news). All news items were designed to look like items appearing in the newsfeed of the social networking site Facebook. The target outcome was perceived truth value: immediately after reading all news items (wave 1) and in a follow-up approximately one week later (wave 2), we asked whether participants thought that "this event occurred in the way as described." We used not only fake news items consistent with a conservative/right political orientation but also more general items, such as the supposed death of a child after a measles vaccination. We counted how many fake news items each participant perceived to be true. Furthermore, we assessed participants' political orientation on a scale ranging from conservative/right to liberal/left.
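To make the outcome measure concrete, the following minimal Python sketch shows how a count of fake news items judged to be true could be computed from response data. All variable names, item labels, and values are illustrative assumptions, not the study's actual data or code.

```python
# Minimal sketch (not the authors' code): computing the outcome
# "number of fake news items perceived as true" per participant and wave.
import pandas as pd

# Hypothetical long-format responses: one row per participant x fake news item,
# with a binary judgment of whether the event "occurred in the way as described".
responses = pd.DataFrame({
    "participant": [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
    "item":        ["wurstbrot", "measles", "item3", "item4", "item5"] * 2,
    "wave":        [1] * 10,                       # 1 = immediately after exposure
    "judged_true": [1, 0, 1, 1, 0, 0, 0, 1, 0, 0]  # 1 = event perceived as true
})

# Outcome per participant and wave: count of fake news items judged true (0-5).
perceived_truth = (
    responses.groupby(["participant", "wave"])["judged_true"]
    .sum()
    .rename("n_items_judged_true")
    .reset_index()
)
print(perceived_truth)
```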

The analyses supported the research hypotheses: exposure to fake news items increased the perceived truth value only among individuals with a conservative/right political orientation. This effect was observed both when perceived truth value was assessed immediately after exposure and when it was measured approximately one week later, indicating that the hypothesized effect was stable over time. We also tested the effect of the warning message stating that fact-checkers doubt the truthfulness of the news item. This warning message did not elicit an effect among individuals with a conservative/right political orientation.
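As an illustration of the kind of moderation test described here, the Python sketch below fits an ordinary least squares model with a condition × political orientation interaction on simulated data. It is a generic example in the spirit of conditional process analysis, not the authors' actual analysis; all variable names, scale ranges, and effect sizes are hypothetical assumptions.

```python
# Hedged sketch of a moderation (interaction) test; data and names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 519
df = pd.DataFrame({
    # 0 = control, 1 = fake news without warning, 2 = fake news with warning
    "condition": rng.integers(0, 3, n),
    # political orientation, e.g., 1 (left) to 7 (right)
    "pol_orientation": rng.integers(1, 8, n),
})
df["pol_centered"] = df["pol_orientation"] - df["pol_orientation"].mean()

# Simulated outcome: exposure raises perceived truth only for right-leaning respondents.
df["n_items_judged_true"] = (
    0.6 * (df["condition"] > 0) * (df["pol_centered"] > 0)
    + rng.normal(1.5, 1.0, n)
).clip(0, 5)

# OLS with treatment dummies, centered political orientation, and their interaction;
# a significant interaction term indicates differential susceptibility.
model = smf.ols("n_items_judged_true ~ C(condition) * pol_centered", data=df).fit()
print(model.summary())
```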

The present study provides empirical support for the idea that individuals with a conservative/right political orientation are especially prone to believing fake news. A warning message did not help to reduce the effect of exposure to fake news items. Importantly, these effects were stable over time: we observed similar effect patterns when perceived truth value was measured immediately after exposure and in a follow-up approximately one week later. This underlines the societal relevance of fake news exposure: fabricated hoax news items may become engraved in news consumers' memory. Furthermore, the present study adds to the concerns of critical voices questioning the effectiveness of warning messages: although they are used with good intentions, their effectiveness is doubtful. Nevertheless, more research on warning messages is needed. Future studies should test different warning messages and assess whether specific warning messages can reduce or even eliminate the effects of fake news exposure.

The study has some limitations. First, we used a total of only five fake news items, which reduces the generalizability of our findings. Second, the sample consisted of members of a non-commercial online access panel; only a relatively small number of individuals showed a pronounced conservative/right political orientation, which also limits generalizability. Furthermore, we used a warning message that the social networking site Facebook employed during the period in which we collected the data. However, social networking sites continuously change their strategies for dealing with fake news. Therefore, future studies should test different warning messages.

Despite these limitations, the present study shows that individuals with a conservative/right political orientation seem to be especially vulnerable to the influence of fake news. A warning message did not help to reduce this effect.

Keywords

Fake news · Warning messages · Political orientation · Refugee crisis · Social media · Credibility · Facebook


Copyright information

© The Editors of the Journal 2019

Authors and Affiliations

  1. Institut für Publizistik- und Kommunikationswissenschaft, Universität Wien, Wien, Österreich
  2. Institut für Kommunikationswissenschaft und Medienforschung, Ludwig-Maximilians-Universität München, München, Deutschland
